Dataset schema. Each column is listed with its dtype and the reported range: minimum-maximum string length, minimum-maximum integer value, or number of distinct classes (sequence columns have no reported range).

| Column | Dtype | Range / classes |
|---|---|---|
| `title` | string | lengths 4-295 |
| `pmid` | string | lengths 8-8 |
| `background_abstract` | string | lengths 12-1.65k |
| `background_abstract_label` | string | 12 classes |
| `methods_abstract` | string | lengths 39-1.48k |
| `methods_abstract_label` | string | lengths 6-31 |
| `results_abstract` | string | lengths 65-1.93k |
| `results_abstract_label` | string | 10 classes |
| `conclusions_abstract` | string | lengths 57-1.02k |
| `conclusions_abstract_label` | string | 22 classes |
| `mesh_descriptor_names` | sequence | - |
| `pmcid` | string | lengths 6-8 |
| `background_title` | string | lengths 10-86 |
| `background_text` | string | lengths 215-23.3k |
| `methods_title` | string | lengths 6-74 |
| `methods_text` | string | lengths 99-42.9k |
| `results_title` | string | lengths 6-172 |
| `results_text` | string | lengths 141-62.9k |
| `conclusions_title` | string | lengths 9-44 |
| `conclusions_text` | string | lengths 5-13.6k |
| `other_sections_titles` | sequence | - |
| `other_sections_texts` | sequence | - |
| `other_sections_sec_types` | sequence | - |
| `all_sections_titles` | sequence | - |
| `all_sections_texts` | sequence | - |
| `all_sections_sec_types` | sequence | - |
| `keywords` | sequence | - |
| `whole_article_text` | string | lengths 6.93k-126k |
| `whole_article_abstract` | string | lengths 936-2.95k |
| `background_conclusion_text` | string | lengths 587-24.7k |
| `background_conclusion_abstract` | string | lengths 936-2.83k |
| `whole_article_text_length` | int64 | values 1.3k-22.5k |
| `whole_article_abstract_length` | int64 | values 183-490 |
| `other_sections_lengths` | sequence | - |
| `num_sections` | int64 | values 3-28 |
| `most_frequent_words` | sequence | - |
| `keybert_topics` | sequence | - |
| `annotated_base_background_abstract_prompt` | string | 1 class |
| `annotated_base_methods_abstract_prompt` | string | 1 class |
| `annotated_base_results_abstract_prompt` | string | 1 class |
| `annotated_base_conclusions_abstract_prompt` | string | 1 class |
| `annotated_base_whole_article_abstract_prompt` | string | 1 class |
| `annotated_base_background_conclusion_abstract_prompt` | string | 1 class |
| `annotated_keywords_background_abstract_prompt` | string | lengths 28-460 |
| `annotated_keywords_methods_abstract_prompt` | string | lengths 28-701 |
| `annotated_keywords_results_abstract_prompt` | string | lengths 28-701 |
| `annotated_keywords_conclusions_abstract_prompt` | string | lengths 28-428 |
| `annotated_keywords_whole_article_abstract_prompt` | string | lengths 28-701 |
| `annotated_keywords_background_conclusion_abstract_prompt` | string | lengths 28-428 |
| `annotated_mesh_background_abstract_prompt` | string | lengths 53-701 |
| `annotated_mesh_methods_abstract_prompt` | string | lengths 53-701 |
| `annotated_mesh_results_abstract_prompt` | string | lengths 53-692 |
| `annotated_mesh_conclusions_abstract_prompt` | string | lengths 54-701 |
| `annotated_mesh_whole_article_abstract_prompt` | string | lengths 53-701 |
| `annotated_mesh_background_conclusion_abstract_prompt` | string | lengths 54-701 |
| `annotated_keybert_background_abstract_prompt` | string | lengths 100-219 |
| `annotated_keybert_methods_abstract_prompt` | string | lengths 100-219 |
| `annotated_keybert_results_abstract_prompt` | string | lengths 101-219 |
| `annotated_keybert_conclusions_abstract_prompt` | string | lengths 100-240 |
| `annotated_keybert_whole_article_abstract_prompt` | string | lengths 100-240 |
| `annotated_keybert_background_conclusion_abstract_prompt` | string | lengths 100-211 |
| `annotated_most_frequent_background_abstract_prompt` | string | lengths 67-217 |
| `annotated_most_frequent_methods_abstract_prompt` | string | lengths 67-217 |
| `annotated_most_frequent_results_abstract_prompt` | string | lengths 67-217 |
| `annotated_most_frequent_conclusions_abstract_prompt` | string | lengths 71-217 |
| `annotated_most_frequent_whole_article_abstract_prompt` | string | lengths 67-217 |
| `annotated_most_frequent_background_conclusion_abstract_prompt` | string | lengths 71-217 |
| `annotated_tf_idf_background_abstract_prompt` | string | lengths 74-283 |
| `annotated_tf_idf_methods_abstract_prompt` | string | lengths 67-325 |
| `annotated_tf_idf_results_abstract_prompt` | string | lengths 69-340 |
| `annotated_tf_idf_conclusions_abstract_prompt` | string | lengths 83-403 |
| `annotated_tf_idf_whole_article_abstract_prompt` | string | lengths 70-254 |
| `annotated_tf_idf_background_conclusion_abstract_prompt` | string | lengths 71-254 |
| `annotated_entity_plan_background_abstract_prompt` | string | lengths 20-313 |
| `annotated_entity_plan_methods_abstract_prompt` | string | lengths 20-452 |
| `annotated_entity_plan_results_abstract_prompt` | string | lengths 20-596 |
| `annotated_entity_plan_conclusions_abstract_prompt` | string | lengths 20-150 |
| `annotated_entity_plan_whole_article_abstract_prompt` | string | lengths 50-758 |
| `annotated_entity_plan_background_conclusion_abstract_prompt` | string | lengths 50-758 |
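The schema above can be explored directly with the Hugging Face `datasets` library. A minimal sketch; the repository ID below is a placeholder (the dataset's actual Hub path is not given on this page), and the column names come from the table above.

```python
# Minimal sketch: load and inspect this dataset with the Hugging Face
# `datasets` library. The repository ID below is a placeholder; substitute
# the dataset's actual Hub path.
from datasets import load_dataset

ds = load_dataset("user/pubmed-structured-abstracts", split="train")  # hypothetical ID

print(ds.features)  # column names and dtypes, mirroring the table above

row = ds[0]
print(row["title"])
print(row["pmid"], row["pmcid"])
print(row["background_abstract"][:200])

# Example of using a sequence column: keep articles indexed with a given
# MeSH descriptor.
crohn = ds.filter(lambda r: "Crohn Disease" in r["mesh_descriptor_names"])
print(len(crohn))
```

The remainder of this page shows a single example row, field by field in schema order.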
title: Higher infliximab and adalimumab trough levels are associated with fistula healing in patients with fistulising perianal Crohn's disease.
pmid: 35949350
background_abstract: Tumor necrosis factor-alpha inhibitors, including infliximab and adalimumab, are effective medical treatments for perianal fistulising Crohn's disease (CD), but not all patients achieve fistula healing.
background_abstract_label: BACKGROUND
methods_abstract: In this multicentre retrospective study conducted across four tertiary inflammatory bowel disease centres in Australia, we identified CD patients with perianal fistulae on maintenance infliximab or adalimumab who had a trough level within twelve weeks of clinical assessment. Data collected included demographics, serum infliximab and adalimumab trough levels (mg/L) within 12 wk before or after their most recent clinical assessment and concomitant medical or surgical therapy. The primary outcome was fistula healing, defined as cessation in fistula drainage. The secondary outcome was fistula closure, defined as healing and closure of all external fistula openings. Differences between patients who did or did not achieve fistula healing were compared using the chi-square test, t test or Mann-Whitney U test.
methods_abstract_label: METHODS
results_abstract: One hundred and fourteen patients (66 infliximab, 48 adalimumab) were included. Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing and 18 (27.3%) achieved fistula closure. Thirty-seven (77%) patients on maintenance adalimumab achieved fistula healing and 17 (35.4%) achieved fistula closure. Patients who achieved fistula healing had significantly higher infliximab and adalimumab trough levels than patients who did not [infliximab: 6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003; adalimumab: 9.2 (6.5-12.0) vs 5.4 (2.5-8.3) mg/L, P = 0.004]. For patients on infliximab, fistula healing was associated with lower rates of detectable anti-infliximab antibodies and younger age. For patients on adalimumab, fistula healing was associated with higher rates of combination therapy with an immunomodulator. Serum trough levels for patients with and without fistula closure were not significantly different for infliximab [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105] or adalimumab [10.0 (6.6-12.0) vs 7.8 (4.2-10.0) mg/L, P = 0.083].
results_abstract_label: RESULTS
conclusions_abstract: Higher maintenance infliximab and adalimumab trough levels are associated with perianal fistula healing in CD.
conclusions_abstract_label: CONCLUSION
[ "Adalimumab", "Crohn Disease", "Gastrointestinal Agents", "Humans", "Infliximab", "Rectal Fistula", "Retrospective Studies", "Treatment Outcome", "Tumor Necrosis Factor-alpha" ]
pmcid: 9254145
background_title: INTRODUCTION
background_text: Perianal fistulising disease is a common manifestation occurring in up to 30% of patients with Crohn’s disease (CD). The development of abnormal tracts between the bowel and perineum can cause perianal drainage, pain, bleeding, abscess formation, sepsis and faecal incontinence[1,2]. Perianal CD is associated with significant morbidity and decreased quality of life, negatively impacting physical, emotional, sexual and social wellbeing[1-3] and is an independent predictor for decreased productivity in patients with CD[4,5]. Given that the incidence of perianal fistulising CD is highest in the third and fourth decades of life, this places significant burden on patients, society, the economy and the health care system[6]. Treatment for perianal fistulising CD requires a multidisciplinary approach involving medical management with immunosuppressants and antibiotics, as well as surgical management with sepsis control, seton insertion and sometimes diversion or resection. Anti-tumor necrosis factor (anti-TNF) alpha agents, including infliximab[7,8] and adalimumab[9,10], are the most effective medical therapies available for inducing and maintaining remission of fistulas. Unfortunately, up to 60% of patients treated with maintenance infliximab lose response within one year[7,8]. Accumulating evidence suggests that this loss of response is partly due to subtherapeutic anti-TNF trough levels. Retrospective studies and post-hoc analyses of prospective data have identified that higher infliximab trough levels are associated with fistula healing and closure, and that the levels required may be higher than those observed for mucosal healing in luminal disease, with emerging data suggesting similar results for adalimumab[11-14]. Quantitative assays for therapeutic drug monitoring (TDM) permit individualisation of infliximab and adalimumab dosing[15,16]; however, there are very few studies in perianal fistulising CD and the optimal target levels remain unclear. Our study aims to assess the association between serum trough infliximab and adalimumab levels and perianal fistula healing and closure, and to identify optimal target levels.
methods_title: MATERIALS AND METHODS
methods_text: Study design and patient population This was a multicentre retrospective cross-sectional study of patients with perianal fistulising CD at four tertiary inflammatory bowel disease centres across Australia between January 2014 and June 2020. All patients qualified for infliximab or adalimumab under the Australian Pharmaceutical Benefits Scheme criteria[17] which constitutes the following: (1) A confirmed diagnosis of CD using clinical, radiological, histological and/or endoscopic criteria; and (2) At least one active externally draining complex perianal fistula. We included patients on maintenance infliximab or adalimumab with a documented perianal examination who had a serum infliximab or adalimumab trough level collected within 12 wk before or after their most recent clinical assessment. Infliximab and adalimumab trough levels as well as antibodies to infliximab and adalimumab were measured using a drug sensitive enzyme-linked immunosorbent assay (Grifols Promonitor for adalimumab; LISA-Tracker and Grifols Promonitor for infliximab). Infliximab and adalimumab trough levels were measured both in a proactive manner and reactive manner in patients failing treatment across the study sites. Patients who had been changed from infliximab to adalimumab or vice versa and had relevant data were included in both the infliximab and adalimumab groups. All patients had received standard infliximab or adalimumab induction dosing (infliximab 5 mg/kg intravenously at weeks 0, 2, and 6; adalimumab subcutaneously 160 mg at week 0, 80 mg at week 2) followed by maintenance therapy. The current dose of anti-TNF therapy was recorded and patients with or without dose-escalated maintenance therapy were included. Patients who had a diversion ostomy, rectovaginal fistula or no documented perianal examination were excluded.

Demographic data Data was retrospectively collected from a clinical database that was updated prospectively during routine clinical practice. Patient demographics collected included age, gender, weight, body mass index, smoking status and CD phenotype classified according to the Montreal Classification[18]. The location of CD was identified as ileal, ileocolonic, colonic, upper gastrointestinal involvement or no luminal disease. The presence or absence of fistulising and stricturing disease was noted, in particular the presence of anal strictures. Biochemical markers of disease activity including C-reactive protein (CRP) and albumin were also recorded.

Current management Prior history of surgical management of perianal disease or fistula was recorded and categorised as examination under anaesthesia and curettage, examination under anaesthesia and seton insertion or fistulotomy. The duration from the last surgical procedure to the follow up visit was recorded. Concomitant medical therapy at the time of follow up was assessed, including corticosteroid use, 5-aminosalicylates and immunomodulators. The doses of infliximab and adalimumab were recorded and stratified according to dose and interval between doses. For patients on dose-escalated anti-TNF therapy, the duration between last dose escalation and follow up was recorded.

Primary and secondary outcomes The primary outcome was fistula healing, which was defined as cessation of fistula drainage, with or without a seton in situ[7]. The secondary outcome was fistula closure, which was defined as healing and closure of all external fistula openings[7].

Statistical analysis Statistical review of this study was performed by a biostatistician from the Ingham Institute for Applied Medical Research. Descriptive statistics were used to assess the baseline characteristics of both the infliximab and adalimumab cohorts. Categorical variables were expressed as percentages and compared using the chi-square test. Continuous variables were expressed using mean ± SD for normally distributed variables and median and interquartile range (IQR) for non-normally distributed variables. The means were compared using the t test for normally distributed variables and the mean ranks compared using the Mann-Whitney U test for non-normally distributed variables. A receiver operating characteristic (ROC) curve analysis was used to assess the sensitivity and specificity of infliximab and adalimumab levels at different cut-off points for predicting fistula healing. All reported P values were 2-sided, with P < 0.05 considered statistically significant. Multivariate analysis using logistic regression with forwards selection was used to analyse variables that predicted fistula healing. Variables which were statistically significant in the univariate analysis were included in the multivariate analysis model. Ethics approval was obtained from the South Western Sydney Local Health District (Human Research Ethics Committee LNR/18/LPOOL/404; Local Project Number: HE18/261).
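The comparisons described in the statistical analysis above (chi-square for categorical variables, t test or Mann-Whitney U for continuous ones) map directly onto SciPy. A minimal sketch on synthetic data, assuming Python with NumPy and SciPy; the group sizes and rough group means echo numbers quoted in this article, but the draws and the 2x2 table are made up for illustration and are not the study's data.

```python
# Illustrative sketch of the group comparisons described above, run on
# synthetic data rather than the study's cohort.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Continuous variable: trough level (mg/L) in healed vs non-healed patients
healed = rng.normal(6.4, 2.0, size=48)
not_healed = rng.normal(3.0, 2.0, size=18)

t_stat, p_t = stats.ttest_ind(healed, not_healed)      # if normally distributed
u_stat, p_u = stats.mannwhitneyu(healed, not_healed)   # if not

# Categorical variable: detectable antibodies (present/absent) by healing status
table = np.array([[2, 46],   # healed: antibodies present, absent
                  [6, 12]])  # not healed: antibodies present, absent
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t test P = {p_t:.3f}; Mann-Whitney U P = {p_u:.3f}; chi-square P = {p_chi:.3f}")
```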
results_title: null
results_text: null
conclusions_title: CONCLUSION
conclusions_text: Our study showed that higher infliximab and adalimumab trough levels are associated with perianal CD fistula healing, with higher rates of healing in higher tertiles of infliximab and adalimumab levels, but no association with fistula closure was observed. Further prospective studies are required to confirm target infliximab and adalimumab trough levels and determine the optimal dose escalation method to achieve these target levels.
[ "INTRODUCTION", "Study design and patient population", "Demographic data", "Current management", "Primary and secondary outcomes", "Statistical analysis", "RESULTS", "Association between fistula healing and closure with infliximab trough levels", "Association between fistula healing and closure with adalimumab trough levels", "DISCUSSION", "CONCLUSION" ]
[ "Perianal fistulising disease is a common manifestation occurring in up to 30% of patients with Crohn’s disease (CD). The development of abnormal tracts between the bowel and perineum can cause perianal drainage, pain, bleeding, abscess formation, sepsis and faecal incontinence[1,2]. Perianal CD is associated with significant morbidity and decreased quality of life, negatively impacting physical, emotional, sexual and social wellbeing[1-3] and is an independent predictor for decreased productivity in patients with CD[4,5]. Given that the incidence of perianal fistulising CD is highest in the third and fourth decades of life, this places significant burden on patients, society, the economy and the health care system[6].\nTreatment for perianal fistulising CD requires a multidisciplinary approach involving medical management with immunosuppressants and antibiotics, as well as surgical management with sepsis control, seton insertion and sometimes diversion or resection. Anti-tumor necrosis factor (anti-TNF) alpha agents, including infliximab[7,8] and adalimumab[9,10], are the most effective medical therapies available for inducing and maintaining remission of fistulas. Unfortunately, up to 60% of patients treated with maintenance infliximab lose response within one year[7,8]. Accumulating evidence suggests that this loss of response is partly due to subtherapeutic anti-TNF trough levels. Retrospective studies and post-hoc analyses of prospective data have identified that higher infliximab trough levels are associated with fistula healing and closure compared to what is observed for mucosal healing in luminal disease, with emerging data suggesting similar results for adalimumab[11-14]. Quantitative assays for therapeutic drug monitoring (TDM) permit individualisation of infliximab and adalimumab dosing[15,16], however there are very few studies on perianal fistulising CD and the optimal target levels for perianal fistulising CD remain unclear. Our study aims to assess the association between serum trough infliximab and adalimumab levels and perianal fistula healing and closure and identify optimal target levels.", "This was a multicentre retrospective cross-sectional study of patients with perianal fistulising CD at four tertiary inflammatory bowel disease centres across Australia between January 2014 and June 2020. All patients qualified for infliximab or adalimumab under the Australian Pharmaceutical Benefits Scheme criteria[17] which constitutes the following: (1) A confirmed diagnosis of CD using clinical, radiological, histological and/or endoscopic criteria; and (2) At least one active externally draining complex perianal fistula. We included patients on maintenance infliximab or adalimumab with a documented perianal examination who had a serum infliximab or adalimumab trough level collected within 12 wk before or after their most recent clinical assessment. Infliximab and adalimumab trough levels as well as antibodies to infliximab and adalimumab were measured using a drug sensitive enzyme-linked immunosorbent assay (Grifols Promonitor for adalimumab; LISA-Tracker and Grifols Promonitor for infliximab). Infliximab and adalimumab trough levels were measured both in a proactive manner and reactive manner in patients failing treatment across the study sites. 
Patients who had been changed from infliximab to adalimumab or vice versa and had relevant data were included in both the infliximab and adalimumab groups.\nAll patients had received standard infliximab or adalimumab induction dosing (infliximab 5 mg/kg intravenously at weeks 0, 2, and 6; adalimumab subcutaneously 160 mg at week 0, 80 mg at week 2) followed by maintenance therapy. The current dose of anti-TNF therapy was recorded and patients with or without dose-escalated maintenance therapy were included. Patients who had a diversion ostomy, rectovaginal fistula or no documented perianal examination were excluded.", "Data was retrospectively collected from a clinical database that was updated prospectively during routine clinical practice. Patient demographics collected included age, gender, weight, body mass index, smoking status and CD phenotype classified according to the Montreal Classification[18]. The location of CD was identified as ileal, ileocolonic, colonic, upper gastrointestinal involvement or no luminal disease. The presence or absence of fistulising and stricturing disease was noted, in particular the presence of anal strictures. Biochemical markers of disease activity including C-reactive protein (CRP) and albumin were also recorded.", "Prior history of surgical management of perianal disease or fistula was recorded and categorised as examination under anaesthesia and curettage, examination under anaesthesia and seton insertion or fistulotomy. The duration from the last surgical procedure to the follow up visit was recorded. Concomitant medical therapy at the time of follow up was assessed, including corticosteroid use, 5-aminosalicylates and immunomodulators. The doses of infliximab and adalimumab were recorded and stratified according to dose and interval between doses. For patients on dose-escalated anti-TNF therapy, the duration between last dose escalation and follow up was recorded.", "The primary outcome was fistula healing, which was defined as cessation of fistula drainage, with or without a seton in situ[7]. The secondary outcome was fistula closure, which was defined as healing and closure of all external fistula openings[7].", "Statistical review of this study was performed by a biostatistician from the Ingham Institute for Applied Medical Research. Descriptive statistics were used to assess the baseline characteristics of both the infliximab and adalimumab cohorts. Categorical variables were expressed as percentages and compared using the chi-square test. Continuous variables were expressed using mean ± SD for normally distributed variables and median and interquartile range (IQR) for non-normally distributed variables. The means were compared using the t test for normally distributed variables and the mean ranks compared using the Mann-Whitney U test for non-normally distributed variables. A receiver operating characteristic (ROC) curve analysis was used to assess the sensitivity and specificity of infliximab and adalimumab levels at different cut-off points for predicting fistula healing. All reported P values were 2-sided, with P < 0.05 considered statistically significant. Multivariate analysis using logistic regression with forwards selection was used to analyse variables that predicted fistula healing. Variables which were statistically significant in the univariate analysis were included in the multivariate analysis model. 
Ethics approval was obtained from the South Western Sydney Local Health District (Human Research Ethics Committee LNR/18/LPOOL/404; Local Project Number: HE18/261).", "Out of 454 patients screened, 114 patients (66 infliximab, 48 adalimumab) on maintenance infliximab or adalimumab for perianal CD had a trough level collected within 12 wk of clinical assessment. Five patients had been changed from infliximab to adalimumab or vice versa and were included in both the infliximab and adalimumab groups. Seventy-five (66%) patients were on combination therapy (43 azathioprine, 16 6-mercaptopurine, 16 methotrexate). Nineteen patients (28.8%) on maintenance infliximab were on dose escalated infliximab therapy (5, 7.5, 10, 15 or 20 mg/kg every 6 or 8 wk). For these patients, the median duration between last infliximab dose adjustment and follow up was 60.0 wk (IQR = 44.5-81.0). Eleven (22.9%) patients on maintenance adalimumab were on dose escalated adalimumab therapy (40 mg weekly). For these patients, the median duration between last adalimumab dose adjustment and follow up was 39.0 wk (IQR = 24.0-86.0). Fifty-nine (89.3%) patients on infliximab had prior surgical management of their fistula, with a median duration of 93.0 wk (IQR = 45.5-284.5) between their last surgical procedure and their most recent follow up visit. Thirty-seven (77.1%) patients on adalimumab had prior surgical management of their fistula, with a median duration of 83.0 wk (IQR = 28.75-223.0) between their last surgical procedure and their most recent follow up visit. Patient demographics and disease characteristics of the population are summarised in Table 1.\nPatient demographics and disease characteristics\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.\n Association between fistula healing and closure with infliximab trough levels Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing. Table 2 summarises the differences between patients on infliximab with and without fistula healing. Patients who achieved fistula healing had higher infliximab trough levels [6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003], lower rates of detectable anti-infliximab antibodies (4.3% vs 33.3%, P = 0.004) and a younger age (33.0 vs 43.5 years old; P = 0.003) compared to patients who did not achieve fistula healing. The presence of detectable anti-infliximab antibodies was associated with lower infliximab trough levels (P = 0.02). The CRP and albumin levels were not significantly different between patients with and without fistula healing. The rates of combination therapy with an immunomodulator were not significantly different between patients who achieved fistula healing and those who did not (P = 0.522).\nDifferences between patients on infliximab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.\nROC curve analysis identified a positive correlation between infliximab trough levels and healing [area under the curve (AUC) = 0.74, 95% confidence interval (CI): 0.60-0.88, P = 0.003; Figure 1A] with an infliximab trough level of 6.10 mg/L that maximised the sensitivity and specificity of predicting fistula healing [sensitivity 58%, specificity 78%, odds ratio (OR) = 4.9, P = 0.013]. 
Upon tertile analysis, higher tertiles of infliximab levels were associated with a higher proportion of patients achieving fistula healing with 54.5% healing rate for tertile 1 compared to 90.1% for tertile 3 (Figure 2A; P = 0.026). Out of the patients who achieved fistula healing on infliximab, 90% and 95% of the patients who achieved fistula healing were healed with an infliximab trough level of 12.7 and 14.4 mg/L respectively. Given that a drug-sensitive infliximab assay was used where anti-infliximab antibody titres were only performed if infliximab concentrations were < 2.0 mg/L, anti-infliximab antibodies were not included in the multivariate analysis. On multivariate logistic regression analysis, age was associated with healing (P = 0.026) but adequate infliximab levels ≥ 6.10 mg/L were not (P = 0.097). Within our cohort, 18 (27.3%) patients on infliximab achieved fistula closure. The infliximab trough level for patients with and without fistula closure was not significantly different [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105].\n\nCorrelation between serum trough level of infliximab, adalimumab and fistula healing. A: Infliximab; B: Adalimumab.\n\nTertile analysis of infliximab and adalimumab trough levels for patients with fistula healing and fistula closure. A: Infliximab; B: Adalimumab.\n Association between fistula healing and closure with adalimumab trough levels Thirty-seven (77%) patients on maintenance adalimumab achieved fistula healing. Table 3 summarises the differences in patients on adalimumab with and without fistula healing. Patients who achieved fistula healing had higher adalimumab trough levels compared to those who did not [9.2 (6.5-12.0) vs 5.4 (2.5-8.3) mg/L, P = 0.004]. Patients who achieved fistula healing had higher rates of combination therapy with an immunomodulator than those who did not (P = 0.048). The CRP and albumin levels were not significantly different in patients with and without fistula healing. ROC curve analysis identified a positive correlation between adalimumab trough levels and healing (AUC = 0.79, 95%CI: 0.66-0.93, P = 0.004) with an adalimumab trough level of 7.05 mg/L that maximised the sensitivity and specificity of adalimumab levels in predicting fistula healing (sensitivity 70%; specificity 73%; OR = 6.3; P = 0.016; Figure 1B). Upon tertile analysis, higher tertiles of adalimumab levels were associated with a higher proportion of patients achieving fistula healing, with 62.5% healing rate for tertile 1 compared to 100% for tertile 3 (Figure 2B; P = 0.034). Out of the patients who achieved fistula healing on adalimumab, 90% and 95% of the patients who achieved fistula healing were healed with an adalimumab trough level of 12.0 and 18.0 mg/L respectively. On multivariate logistic regression analysis, adequate adalimumab trough levels ≥ 7.05 mg/L (P = 0.008) and concurrent immunomodulator therapy (P = 0.026) both remained associated with healing. Within our cohort, 17 (35.4%) patients on adalimumab achieved fistula closure. The adalimumab trough level for patients with and without fistula closure was not significantly different [10.0 (6.6-12.0) vs 7.8 (4.2-10.0) mg/L, P = 0.083].\nDifferences in patients on adalimumab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.", "Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing. Table 2 summarises the differences between patients on infliximab with and without fistula healing. Patients who achieved fistula healing had higher infliximab trough levels [6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003], lower rates of detectable anti-infliximab antibodies (4.3% vs 33.3%, P = 0.004) and a younger age (33.0 vs 43.5 years old; P = 0.003) compared to patients who did not achieve fistula healing. The presence of detectable anti-infliximab antibodies was associated with lower infliximab trough levels (P = 0.02). The CRP and albumin levels were not significantly different between patients with and without fistula healing. The rates of combination therapy with an immunomodulator were not significantly different between patients who achieved fistula healing and those who did not (P = 0.522).\nDifferences between patients on infliximab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.\nROC curve analysis identified a positive correlation between infliximab trough levels and healing [area under the curve (AUC) = 0.74, 95% confidence interval (CI): 0.60-0.88, P = 0.003; Figure 1A] with an infliximab trough level of 6.10 mg/L that maximised the sensitivity and specificity of predicting fistula healing [sensitivity 58%, specificity 78%, odds ratio (OR) = 4.9, P = 0.013]. Upon tertile analysis, higher tertiles of infliximab levels were associated with a higher proportion of patients achieving fistula healing with 54.5% healing rate for tertile 1 compared to 90.1% for tertile 3 (Figure 2A; P = 0.026). Out of the patients who achieved fistula healing on infliximab, 90% and 95% of the patients who achieved fistula healing were healed with an infliximab trough level of 12.7 and 14.4 mg/L respectively. Given that a drug-sensitive infliximab assay was used where anti-infliximab antibody titres were only performed if infliximab concentrations were < 2.0 mg/L, anti-infliximab antibodies were not included in the multivariate analysis. On multivariate logistic regression analysis, age was associated with healing (P = 0.026) but adequate infliximab levels ≥ 6.10 mg/L were not (P = 0.097). Within our cohort, 18 (27.3%) patients on infliximab achieved fistula closure. The infliximab trough level for patients with and without fistula closure was not significantly different [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105].\n\nCorrelation between serum trough level of infliximab, adalimumab and fistula healing. A: Infliximab; B: Adalimumab.\n\nTertile analysis of infliximab and adalimumab trough levels for patients with fistula healing and fistula closure. A: Infliximab; B: Adalimumab.", "Thirty-seven (77%) patients on maintenance adalimumab achieved fistula healing. Table 3 summarises the differences in patients on adalimumab with and without fistula healing. Patients who achieved fistula healing had higher adalimumab trough levels compared to those who did not [9.2 (6.5-12.0) vs 5.4 (2.5-8.3) mg/L, P = 0.004]. Patients who achieved fistula healing had higher rates of combination therapy with an immunomodulator than those who did not (P = 0.048). The CRP and albumin levels were not significantly different in patients with and without fistula healing. ROC curve analysis identified a positive correlation between adalimumab trough levels and healing (AUC = 0.79, 95%CI: 0.66-0.93, P = 0.004) with an adalimumab trough level of 7.05 mg/L that maximised the sensitivity and specificity of adalimumab levels in predicting fistula healing (sensitivity 70%; specificity 73%; OR = 6.3; P = 0.016; Figure 1B). Upon tertile analysis, higher tertiles of adalimumab levels were associated with a higher proportion of patients achieving fistula healing, with 62.5% healing rate for tertile 1 compared to 100% for tertile 3 (Figure 2B; P = 0.034). Out of the patients who achieved fistula healing on adalimumab, 90% and 95% of the patients who achieved fistula healing were healed with an adalimumab trough level of 12.0 and 18.0 mg/L respectively. On multivariate logistic regression analysis, adequate adalimumab trough levels ≥ 7.05 mg/L (P = 0.008) and concurrent immunomodulator therapy (P = 0.026) both remained associated with healing. Within our cohort, 17 (35.4%) patients on adalimumab achieved fistula closure. The adalimumab trough level for patients with and without fistula closure was not significantly different [10.0 (6.6-12.0) vs 7.8 (4.2-10.0) mg/L, P = 0.083].\nDifferences in patients on adalimumab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.", "Fistulising perianal CD is a highly morbid condition for which treatment outcomes remain suboptimal in many patients. While there is limited data on the role of newer biologic agents such as ustekinumab in perianal CD[19], anti-TNF agents remain the treatment of choice. Our study showed a significant association between both infliximab and adalimumab trough levels and fistula healing, with higher levels associated with increased healing rates. We demonstrated that higher tertiles of both infliximab and adalimumab levels were associated with a higher proportion of patients achieving fistula healing. 
Notably, when plotting the cumulative percentage of healed patients against infliximab level, we found that 50% of the patients who achieve healing will heal with a level of 6.4 mg/L, 90% of the patients who achieve healing will heal with a level of 12.7 mg/L and 95% of the patients who achieve healing will heal with a level of 14.4 mg/L. Similarly, for patients on adalimumab, 50% of the patients who achieve healing will heal with a level of 9.2 mg/L, and 90% and 95% of patients who achieved fistula healing were healed with levels of 12.0 and 18.0 mg/L respectively. Our results support dose-escalation of both infliximab and adalimumab in non-responders, targeting higher levels to achieve fistula healing prior to changing biologic therapy. Importantly, this study is the largest study to date assessing the relationship between adalimumab trough levels and clinical fistula healing. This data adds to the growing body of evidence that fistula healing improves with higher anti-TNF trough levels, and that higher levels may be required for perianal fistula healing than for mucosal healing in luminal CD[12-14,20].\nThis study did not show an association between infliximab and adalimumab trough levels and fistula closure. Not all previous studies have assessed fistula closure, but some have found that patients with fistula closure had significantly higher maintenance infliximab and adalimumab trough levels[13,14]. Our results may have been limited by inadequate power due to relatively small numbers of patients who achieved fistula closure in our cohort. We had a high fistula healing rate in this study, with 72.7% and 77% of patients on maintenance infliximab and adalimumab achieving fistula healing respectively. This finding was possibly due to high rates of combination therapy with an immunomodulator (69.7% and 60.4% in the infliximab and adalimumab groups respectively).\nRandomised controlled trials have shown that infliximab is effective at both inducing and maintaining fistula healing[7,8]. Our study found that fistula healing was associated with higher infliximab trough levels. This finding is supported by a post-hoc analysis of ACCENT II which found that higher infliximab trough levels during induction were associated with a complete absence of draining fistulas at week 14[12], as well as similar findings in other studies assessing induction and maintenance infliximab therapy[11,13]. In the future, there may be a role for the infliximab biosimilar CT-P13 in order to achieve these high infliximab levels required for perianal fistula healing, with recent randomised controlled trials demonstrating higher trough levels from subcutaneous administration of CT-P13 compared to intravenous administration[21]. Interestingly, our study found that fistula healing was associated with younger age in both univariate and multivariate analyses. Whilst patient factors including albumin and body weight have previously been shown to affect infliximab trough levels[22], the influence of age is unclear. This finding may be due to the relatively younger age at diagnosis of CD for patients with fistula healing or longer duration of infliximab therapy. Five patients in this study had been changed from infliximab to adalimumab or vice versa and were included in both groups; however, the anti-TNF level and anti-TNF antibody levels at the time of changing treatment were not collected. 
Reassuringly, previous studies have demonstrated that the presence of infliximab antibodies does not decrease future response rates to adalimumab and vice versa[23].\nAdalimumab has also been shown to be effective in both inducing[9] and maintaining fistula healing[24]. Our study found that fistula healing was associated with higher adalimumab trough levels. Whilst there is limited data on the association between adalimumab trough levels and fistula healing, our findings are consistent with two smaller retrospective studies that showed that patients with fistula healing had higher adalimumab trough levels compared to those without fistula healing[14,20]. On multivariate logistic regression analysis, adalimumab trough levels ≥ 7.05 mg/L and concurrent immunomodulator therapy both remained significantly associated with healing. This reflects how concomitant immunosuppressive therapy can be used to decrease the immunogenic response and therefore improve fistula healing rates[25].\nThis study has several limitations. Assessment of fistula healing was based on clinical assessment, which may not be as accurate as an objective assessment such as with magnetic resonance imaging of the pelvis. A recent study has demonstrated that higher anti-TNF trough levels are associated with improved rates of radiological healing in perianal fistulising CD[26]. However, the absence of drainage remains a clinically relevant endpoint that impacts on patient quality of life. In order to provide an objective marker of response, biochemical markers of disease activity including CRP and albumin were analysed and found not to correlate with fistula healing. Data was retrospectively collected; to address this, we only included patients with documented perianal examinations and used definitions for fistula healing and closure in line with previous randomised controlled trials[8]. We found that fistula healing is associated with higher infliximab and adalimumab trough levels; however, further randomised controlled trials are required to assess whether dose escalation to higher levels improves healing and to determine the optimal method for dose escalation. Whilst reactive TDM with dose escalation at the time of loss of response is effective, it remains unknown whether proactive TDM with subsequent dose modification improves outcomes. Notably, all previous studies on proactive TDM have focused on luminal disease with no prospective studies evaluating proactive TDM in perianal fistulising CD.", "Our study showed that higher infliximab and adalimumab trough levels are associated with perianal CD fistula healing, with higher rates of healing in higher tertiles of infliximab and adalimumab levels. However, no association with fistula closure was observed. Further prospective studies are required to confirm target infliximab and adalimumab trough levels and determine the optimal dose escalation method to achieve these target levels." ]
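The ROC cut-off analysis reported in the results fields above (a trough level that maximises sensitivity and specificity) corresponds to picking the threshold with the largest Youden index. A sketch on synthetic data, assuming Python with scikit-learn; the article's 6.10 and 7.05 mg/L cut-offs came from its own cohort and will not be reproduced by this toy example.

```python
# Sketch of the ROC cut-off analysis described above: find the trough level
# that maximises sensitivity + specificity (the Youden index). Synthetic data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
healed = rng.normal(6.4, 2.5, 48)       # trough levels, healed patients
not_healed = rng.normal(3.0, 2.5, 18)   # trough levels, non-healed patients

levels = np.concatenate([healed, not_healed])
labels = np.concatenate([np.ones(48), np.zeros(18)])  # 1 = fistula healing

fpr, tpr, thresholds = roc_curve(labels, levels)
auc = roc_auc_score(labels, levels)

youden = tpr - fpr                       # sensitivity + specificity - 1
best = np.argmax(youden)
print(f"AUC = {auc:.2f}; best cut-off = {thresholds[best]:.2f} mg/L "
      f"(sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f})")
```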
other_sections_sec_types: [ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design and patient population", "Demographic data", "Current management", "Primary and secondary outcomes", "Statistical analysis", "RESULTS", "Association between fistula healing and closure with infliximab trough levels", "Association between fistula healing and closure with adalimumab trough levels", "DISCUSSION", "CONCLUSION" ]
[ "Perianal fistulising disease is a common manifestation occurring in up to 30% of patients with Crohn’s disease (CD). The development of abnormal tracts between the bowel and perineum can cause perianal drainage, pain, bleeding, abscess formation, sepsis and faecal incontinence[1,2]. Perianal CD is associated with significant morbidity and decreased quality of life, negatively impacting physical, emotional, sexual and social wellbeing[1-3] and is an independent predictor for decreased productivity in patients with CD[4,5]. Given that the incidence of perianal fistulising CD is highest in the third and fourth decades of life, this places significant burden on patients, society, the economy and the health care system[6].\nTreatment for perianal fistulising CD requires a multidisciplinary approach involving medical management with immunosuppressants and antibiotics, as well as surgical management with sepsis control, seton insertion and sometimes diversion or resection. Anti-tumor necrosis factor (anti-TNF) alpha agents, including infliximab[7,8] and adalimumab[9,10], are the most effective medical therapies available for inducing and maintaining remission of fistulas. Unfortunately, up to 60% of patients treated with maintenance infliximab lose response within one year[7,8]. Accumulating evidence suggests that this loss of response is partly due to subtherapeutic anti-TNF trough levels. Retrospective studies and post-hoc analyses of prospective data have identified that higher infliximab trough levels are associated with fistula healing and closure compared to what is observed for mucosal healing in luminal disease, with emerging data suggesting similar results for adalimumab[11-14]. Quantitative assays for therapeutic drug monitoring (TDM) permit individualisation of infliximab and adalimumab dosing[15,16], however there are very few studies on perianal fistulising CD and the optimal target levels for perianal fistulising CD remain unclear. Our study aims to assess the association between serum trough infliximab and adalimumab levels and perianal fistula healing and closure and identify optimal target levels.", " Study design and patient population This was a multicentre retrospective cross-sectional study of patients with perianal fistulising CD at four tertiary inflammatory bowel disease centres across Australia between January 2014 and June 2020. All patients qualified for infliximab or adalimumab under the Australian Pharmaceutical Benefits Scheme criteria[17] which constitutes the following: (1) A confirmed diagnosis of CD using clinical, radiological, histological and/or endoscopic criteria; and (2) At least one active externally draining complex perianal fistula. We included patients on maintenance infliximab or adalimumab with a documented perianal examination who had a serum infliximab or adalimumab trough level collected within 12 wk before or after their most recent clinical assessment. Infliximab and adalimumab trough levels as well as antibodies to infliximab and adalimumab were measured using a drug sensitive enzyme-linked immunosorbent assay (Grifols Promonitor for adalimumab; LISA-Tracker and Grifols Promonitor for infliximab). Infliximab and adalimumab trough levels were measured both in a proactive manner and reactive manner in patients failing treatment across the study sites. 
Patients who had been changed from infliximab to adalimumab or vice versa and had relevant data were included in both the infliximab and adalimumab groups.\nAll patients had received standard infliximab or adalimumab induction dosing (infliximab 5 mg/kg intravenously at weeks 0, 2, and 6; adalimumab subcutaneously 160 mg at week 0, 80 mg at week 2) followed by maintenance therapy. The current dose of anti-TNF therapy was recorded and patients with or without dose-escalated maintenance therapy were included. Patients who had a diversion ostomy, rectovaginal fistula or no documented perianal examination were excluded.\n Demographic data Data was retrospectively collected from a clinical database that was updated prospectively during routine clinical practice. Patient demographics collected included age, gender, weight, body mass index, smoking status and CD phenotype classified according to the Montreal Classification[18]. The location of CD was identified as ileal, ileocolonic, colonic, upper gastrointestinal involvement or no luminal disease. The presence or absence of fistulising and stricturing disease was noted, in particular the presence of anal strictures. Biochemical markers of disease activity including C-reactive protein (CRP) and albumin were also recorded.\n Current management Prior history of surgical management of perianal disease or fistula was recorded and categorised as examination under anaesthesia and curettage, examination under anaesthesia and seton insertion or fistulotomy. The duration from the last surgical procedure to the follow up visit was recorded. Concomitant medical therapy at the time of follow up was assessed, including corticosteroid use, 5-aminosalicylates and immunomodulators. The doses of infliximab and adalimumab were recorded and stratified according to dose and interval between doses. For patients on dose-escalated anti-TNF therapy, the duration between last dose escalation and follow up was recorded.\n Primary and secondary outcomes The primary outcome was fistula healing, which was defined as cessation of fistula drainage, with or without a seton in situ[7]. The secondary outcome was fistula closure, which was defined as healing and closure of all external fistula openings[7].\n Statistical analysis Statistical review of this study was performed by a biostatistician from the Ingham Institute for Applied Medical Research. Descriptive statistics were used to assess the baseline characteristics of both the infliximab and adalimumab cohorts. Categorical variables were expressed as percentages and compared using the chi-square test. Continuous variables were expressed using mean ± SD for normally distributed variables and median and interquartile range (IQR) for non-normally distributed variables. The means were compared using the t test for normally distributed variables and the mean ranks compared using the Mann-Whitney U test for non-normally distributed variables. A receiver operating characteristic (ROC) curve analysis was used to assess the sensitivity and specificity of infliximab and adalimumab levels at different cut-off points for predicting fistula healing. All reported P values were 2-sided, with P < 0.05 considered statistically significant. Multivariate analysis using logistic regression with forwards selection was used to analyse variables that predicted fistula healing. Variables which were statistically significant in the univariate analysis were included in the multivariate analysis model. Ethics approval was obtained from the South Western Sydney Local Health District (Human Research Ethics Committee LNR/18/LPOOL/404; Local Project Number: HE18/261).", "This was a multicentre retrospective cross-sectional study of patients with perianal fistulising CD at four tertiary inflammatory bowel disease centres across Australia between January 2014 and June 2020. All patients qualified for infliximab or adalimumab under the Australian Pharmaceutical Benefits Scheme criteria[17] which constitutes the following: (1) A confirmed diagnosis of CD using clinical, radiological, histological and/or endoscopic criteria; and (2) At least one active externally draining complex perianal fistula. We included patients on maintenance infliximab or adalimumab with a documented perianal examination who had a serum infliximab or adalimumab trough level collected within 12 wk before or after their most recent clinical assessment. Infliximab and adalimumab trough levels as well as antibodies to infliximab and adalimumab were measured using a drug sensitive enzyme-linked immunosorbent assay (Grifols Promonitor for adalimumab; LISA-Tracker and Grifols Promonitor for infliximab). Infliximab and adalimumab trough levels were measured both in a proactive manner and reactive manner in patients failing treatment across the study sites. Patients who had been changed from infliximab to adalimumab or vice versa and had relevant data were included in both the infliximab and adalimumab groups.\nAll patients had received standard infliximab or adalimumab induction dosing (infliximab 5 mg/kg intravenously at weeks 0, 2, and 6; adalimumab subcutaneously 160 mg at week 0, 80 mg at week 2) followed by maintenance therapy. The current dose of anti-TNF therapy was recorded and patients with or without dose-escalated maintenance therapy were included. Patients who had a diversion ostomy, rectovaginal fistula or no documented perianal examination were excluded.", "Data was retrospectively collected from a clinical database that was updated prospectively during routine clinical practice. 
Patient demographics collected included age, gender, weight, body mass index, smoking status and CD phenotype classified according to the Montreal Classification[18]. The location of CD was identified as ileal, ileocolonic, colonic, upper gastrointestinal involvement or no luminal disease. The presence or absence of fistulising and stricturing disease was noted, in particular the presence of anal strictures. Biochemical markers of disease activity including C-reactive protein (CRP) and albumin were also recorded.", "Prior history of surgical management of perianal disease or fistula was recorded and categorised as examination under anaesthesia and curettage, examination under anaesthesia and seton insertion or fistulotomy. The duration from the last surgical procedure to the follow up visit was recorded. Concomitant medical therapy at the time of follow up was assessed, including corticosteroid use, 5-aminosalicylates and immunomodulators. The doses of infliximab and adalimumab were recorded and stratified according to dose and interval between doses. For patients on dose-escalated anti-TNF therapy, the duration between last dose escalation and follow up was recorded.", "The primary outcome was fistula healing, which was defined as cessation of fistula drainage, with or without a seton in situ[7]. The secondary outcome was fistula closure, which was defined as healing and closure of all external fistula openings[7].", "Statistical review of this study was performed by a biostatistician from the Ingham Institute for Applied Medical Research. Descriptive statistics were used to assess the baseline characteristics of both the infliximab and adalimumab cohorts. Categorical variables were expressed as percentages and compared using the chi-square test. Continuous variables were expressed using mean ± SD for normally distributed variables and median and interquartile range (IQR) for non-normally distributed variables. The means were compared using the t test for normally distributed variables and the mean ranks compared using the Mann-Whitney U test for non-normally distributed variables. A receiver operating characteristic (ROC) curve analysis was used to assess the sensitivity and specificity of infliximab and adalimumab levels at different cut-off points for predicting fistula healing. All reported P values were 2-sided, with P < 0.05 considered statistically significant. Multivariate analysis using logistic regression with forwards selection was used to analyse variables that predicted fistula healing. Variables which were statistically significant in the univariate analysis were included in the multivariate analysis model. Ethics approval was obtained from the South Western Sydney Local Health District (Human Research Ethics Committee LNR/18/LPOOL/404; Local Project Number: HE18/261).", "Out of 454 patients screened, 114 patients (66 infliximab, 48 adalimumab) on maintenance infliximab or adalimumab for perianal CD had a trough level collected within 12 wk of clinical assessment. Five patients had been changed from infliximab to adalimumab or vice versa and were included in both the infliximab and adalimumab groups. Seventy-five (66%) patients were on combination therapy (43 azathioprine, 16 6-mercaptopurine, 16 methotrexate). Nineteen patients (28.8%) on maintenance infliximab were on dose escalated infliximab therapy (5, 7.5, 10, 15 or 20 mg/kg every 6 or 8 wk). For these patients, the median duration between last infliximab dose adjustment and follow up was 60.0 wk (IQR = 44.5-81.0). 
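As an aside for readers reproducing the analysis: the cut-off selection described in the Statistical analysis section above (the trough level that "maximises sensitivity and specificity") is conventionally done by maximising Youden's J on the ROC curve. Below is a minimal sketch under that assumption, using scikit-learn on hypothetical data; it is not the authors' code, and the paper does not state which optimality criterion was used.

```python
# Hedged sketch: ROC analysis and Youden's-J cut-off for trough level vs healing.
# `trough` (mg/L) and `healed` (1 = fistula healing) are illustrative placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

trough = np.array([1.2, 3.0, 4.5, 6.4, 7.1, 9.5, 12.7, 2.0, 5.5, 8.0])
healed = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 1])

fpr, tpr, thresholds = roc_curve(healed, trough)
j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(j))
print(f"AUC = {roc_auc_score(healed, trough):.2f}")
print(f"Cut-off = {thresholds[best]:.2f} mg/L "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```

Applied to the study data, a procedure of this kind is what yields thresholds such as 6.10 mg/L for infliximab and 7.05 mg/L for adalimumab.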
"Out of 454 patients screened, 114 patients (66 infliximab, 48 adalimumab) on maintenance infliximab or adalimumab for perianal CD had a trough level collected within 12 wk of clinical assessment. Five patients had been changed from infliximab to adalimumab or vice versa and were included in both the infliximab and adalimumab groups. Seventy-five (66%) patients were on combination therapy (43 azathioprine, 16 6-mercaptopurine, 16 methotrexate). Nineteen patients (28.8%) on maintenance infliximab were on dose-escalated infliximab therapy (5, 7.5, 10, 15 or 20 mg/kg every 6 or 8 wk). For these patients, the median duration between last infliximab dose adjustment and follow up was 60.0 wk (IQR = 44.5-81.0). Eleven (22.9%) patients on maintenance adalimumab were on dose-escalated adalimumab therapy (40 mg weekly). For these patients, the median duration between last adalimumab dose adjustment and follow up was 39.0 wk (IQR = 24.0-86.0). Fifty-nine (89.3%) patients on infliximab had prior surgical management of their fistula, with a median duration of 93.0 wk (IQR = 45.5-284.5) between their last surgical procedure and their most recent follow up visit. Thirty-seven (77.1%) patients on adalimumab had prior surgical management of their fistula, with a median duration of 83.0 wk (IQR = 28.75-223.0) between their last surgical procedure and their most recent follow up visit. Patient demographics and disease characteristics of the population are summarised in Table 1.\nPatient demographics and disease characteristics\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.\n Association between fistula healing and closure with infliximab trough levels Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing. Table 2 summarises the differences between patients on infliximab with and without fistula healing. Patients who achieved fistula healing had higher infliximab trough levels [6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003], lower rates of detectable anti-infliximab antibodies (4.3% vs 33.3%, P = 0.004) and a younger age (33.0 vs 43.5 years old; P = 0.003) compared to patients who did not achieve fistula healing. The presence of detectable anti-infliximab antibodies was associated with lower infliximab trough levels (P = 0.02). The CRP and albumin levels were not significantly different between patients with and without fistula healing. The rates of combination therapy with an immunomodulator were not significantly different between patients who achieved fistula healing and those who did not (P = 0.522).\nDifferences between patients on infliximab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.\nROC curve analysis identified a positive correlation between infliximab trough levels and healing [area under the curve (AUC) = 0.74, 95% confidence interval (CI): 0.60-0.88, P = 0.003; Figure 1A], with an infliximab trough level of 6.10 mg/L maximising the sensitivity and specificity for predicting fistula healing [sensitivity 58%, specificity 78%, odds ratio (OR) = 4.9, P = 0.013]. Upon tertile analysis, higher tertiles of infliximab levels were associated with a higher proportion of patients achieving fistula healing, with a 54.5% healing rate for tertile 1 compared to 90.1% for tertile 3 (Figure 2A; P = 0.026). Of the patients who achieved fistula healing on infliximab, 90% and 95% had healed at infliximab trough levels of up to 12.7 and 14.4 mg/L respectively. Given that a drug-sensitive infliximab assay was used, where anti-infliximab antibody titres were only performed if infliximab concentrations were < 2.0 mg/L, anti-infliximab antibodies were not included in the multivariate analysis. On multivariate logistic regression analysis, age was associated with healing (P = 0.026) but adequate infliximab levels ≥ 6.10 mg/L were not (P = 0.097). Within our cohort, 18 (27.3%) of the patients on infliximab achieved fistula closure. The infliximab trough level for patients with and without fistula closure was not significantly different [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105].\n\nCorrelation between serum trough level of infliximab, adalimumab and fistula healing. A: Infliximab; B: Adalimumab.\n\nTertile analysis of infliximab and adalimumab trough levels for patients with fistula healing and fistula closure. A: Infliximab; B: Adalimumab.\n Association between fistula healing and closure with adalimumab trough levels Thirty-seven (77%) patients on maintenance adalimumab achieved fistula healing. Table 3 summarises the differences in patients on adalimumab with and without fistula healing. Patients who achieved fistula healing had higher adalimumab trough levels compared to those who did not [9.2 (6.5-12.0) vs 5.4 (2.5-8.3) mg/L, P = 0.004]. Patients who achieved fistula healing had higher rates of combination therapy with an immunomodulator than those who did not (P = 0.048). The CRP and albumin levels were not significantly different in patients with and without fistula healing. ROC curve analysis identified a positive correlation between adalimumab trough levels and healing (AUC = 0.79, 95%CI: 0.66-0.93, P = 0.004), with an adalimumab trough level of 7.05 mg/L maximising the sensitivity and specificity for predicting fistula healing (sensitivity 70%; specificity 73%; OR = 6.3; P = 0.016; Figure 1B). Upon tertile analysis, higher tertiles of adalimumab levels were associated with a higher proportion of patients achieving fistula healing, with a 62.5% healing rate for tertile 1 compared to 100% for tertile 3 (Figure 2B; P = 0.034). Of the patients who achieved fistula healing on adalimumab, 90% and 95% had healed at adalimumab trough levels of up to 12.0 and 18.0 mg/L respectively. On multivariate logistic regression analysis, adequate adalimumab trough levels ≥ 7.05 mg/L (P = 0.008) and concurrent immunomodulator therapy (P = 0.026) both remained associated with healing. Within our cohort, 17 (35.4%) of the patients on adalimumab achieved fistula closure. The adalimumab trough level for patients with and without fistula closure was not significantly different [10.0 (6.6-12.0) vs 7.8 (4.2-10.0) mg/L, P = 0.083].\nDifferences in patients on adalimumab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.", "Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing. Table 2 summarises the differences between patients on infliximab with and without fistula healing. Patients who achieved fistula healing had higher infliximab trough levels [6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003], lower rates of detectable anti-infliximab antibodies (4.3% vs 33.3%, P = 0.004) and a younger age (33.0 vs 43.5 years old; P = 0.003) compared to patients who did not achieve fistula healing. The presence of detectable anti-infliximab antibodies was associated with lower infliximab trough levels (P = 0.02). The CRP and albumin levels were not significantly different between patients with and without fistula healing. The rates of combination therapy with an immunomodulator were not significantly different between patients who achieved fistula healing and those who did not (P = 0.522).\nDifferences between patients on infliximab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.\nROC curve analysis identified a positive correlation between infliximab trough levels and healing [area under the curve (AUC) = 0.74, 95% confidence interval (CI): 0.60-0.88, P = 0.003; Figure 1A], with an infliximab trough level of 6.10 mg/L maximising the sensitivity and specificity for predicting fistula healing [sensitivity 58%, specificity 78%, odds ratio (OR) = 4.9, P = 0.013]. Upon tertile analysis, higher tertiles of infliximab levels were associated with a higher proportion of patients achieving fistula healing, with a 54.5% healing rate for tertile 1 compared to 90.1% for tertile 3 (Figure 2A; P = 0.026). Of the patients who achieved fistula healing on infliximab, 90% and 95% had healed at infliximab trough levels of up to 12.7 and 14.4 mg/L respectively. Given that a drug-sensitive infliximab assay was used, where anti-infliximab antibody titres were only performed if infliximab concentrations were < 2.0 mg/L, anti-infliximab antibodies were not included in the multivariate analysis. On multivariate logistic regression analysis, age was associated with healing (P = 0.026) but adequate infliximab levels ≥ 6.10 mg/L were not (P = 0.097). Within our cohort, 18 (27.3%) of the patients on infliximab achieved fistula closure. The infliximab trough level for patients with and without fistula closure was not significantly different [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105].\n\nCorrelation between serum trough level of infliximab, adalimumab and fistula healing. A: Infliximab; B: Adalimumab.\n\nTertile analysis of infliximab and adalimumab trough levels for patients with fistula healing and fistula closure. A: Infliximab; B: Adalimumab.", "Thirty-seven (77%) patients on maintenance adalimumab achieved fistula healing. Table 3 summarises the differences in patients on adalimumab with and without fistula healing. 
Patients who achieved fistula healing had higher adalimumab trough levels compared to those who did not [9.2 (6.5-12.0) vs 5.4 (2.5-8.3) mg/L, P = 0.004]. Patients who achieved fistula healing had higher rates of combination therapy with an immunomodulator than those who did not (P = 0.048). The CRP and albumin levels were not significantly different in patients with and without fistula healing. ROC curve analysis identified a positive correlation between adalimumab trough levels and healing (AUC = 0.79, 95%CI: 0.66-0.93, P = 0.004), with an adalimumab trough level of 7.05 mg/L maximising the sensitivity and specificity for predicting fistula healing (sensitivity 70%; specificity 73%; OR = 6.3; P = 0.016; Figure 1B). Upon tertile analysis, higher tertiles of adalimumab levels were associated with a higher proportion of patients achieving fistula healing, with a 62.5% healing rate for tertile 1 compared to 100% for tertile 3 (Figure 2B; P = 0.034). Of the patients who achieved fistula healing on adalimumab, 90% and 95% had healed at adalimumab trough levels of up to 12.0 and 18.0 mg/L respectively. On multivariate logistic regression analysis, adequate adalimumab trough levels ≥ 7.05 mg/L (P = 0.008) and concurrent immunomodulator therapy (P = 0.026) both remained associated with healing. Within our cohort, 17 (35.4%) of the patients on adalimumab achieved fistula closure. The adalimumab trough level for patients with and without fistula closure was not significantly different [10.0 (6.6-12.0) vs 7.8 (4.2-10.0) mg/L, P = 0.083].\nDifferences in patients on adalimumab with and without fistula healing\nADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.", "Fistulising perianal CD is a highly morbid condition for which treatment outcomes remain suboptimal in many patients. While there is limited data on the role of newer biologic agents such as ustekinumab in perianal CD[19], anti-TNF agents remain the treatment of choice. Our study showed a significant association between both infliximab and adalimumab trough levels and fistula healing, with higher levels associated with increased healing rates. We demonstrated that higher tertiles of both infliximab and adalimumab levels were associated with a higher proportion of patients achieving fistula healing. Notably, when plotting the cumulative percentage of healed patients against infliximab level, we found that 50% of the patients who achieved healing had done so at a level of up to 6.4 mg/L, 90% at up to 12.7 mg/L and 95% at up to 14.4 mg/L. Similarly, for patients on adalimumab, 50% of the patients who achieved healing had done so at a level of up to 9.2 mg/L, and 90% and 95% at up to 12.0 and 18.0 mg/L respectively. Our results support dose escalation of both infliximab and adalimumab in non-responders, targeting higher levels to achieve fistula healing prior to changing biologic therapy. Importantly, this study is the largest study to date assessing the relationship between adalimumab trough levels and clinical fistula healing. 
These data add to the growing body of evidence that fistula healing improves with higher anti-TNF trough levels, and that higher levels may be required for perianal fistula healing than for mucosal healing in luminal CD[12-14,20].\nThis study did not show an association between infliximab and adalimumab trough levels and fistula closure. Not all previous studies have assessed fistula closure, but some have found that patients with fistula closure had significantly higher maintenance infliximab and adalimumab trough levels[13,14]. Our results may have been limited by inadequate power due to the relatively small number of patients who achieved fistula closure in our cohort. We had a high fistula healing rate in this study, with 72.7% and 77% of patients on maintenance infliximab and adalimumab achieving fistula healing respectively. This finding was possibly due to the high rates of combination therapy with an immunomodulator (69.7% and 60.4% in the infliximab and adalimumab groups respectively).\nRandomised controlled trials have shown that infliximab is effective at both inducing and maintaining fistula healing[7,8]. Our study found that fistula healing was associated with higher infliximab trough levels. This finding is supported by a post-hoc analysis of ACCENT II, which found that higher infliximab trough levels during induction were associated with a complete absence of draining fistulas at week 14[12], as well as similar findings in other studies assessing induction and maintenance infliximab therapy[11,13]. In the future, there may be a role for the infliximab biosimilar CT-P13 in achieving the high infliximab levels required for perianal fistula healing, as recent randomised controlled trials have demonstrated higher trough levels with subcutaneous administration of CT-P13 than with intravenous administration[21]. Interestingly, our study found that fistula healing was associated with younger age in both univariate and multivariate analyses. Whilst patient factors including albumin and body weight have previously been shown to affect infliximab trough levels[22], the influence of age is unclear. This finding may be due to the relatively younger age at diagnosis of CD for patients with fistula healing or a longer duration of infliximab therapy. Five patients in this study had been changed from infliximab to adalimumab or vice versa and were included in both groups; however, the anti-TNF and anti-TNF antibody levels at the time of changing treatment were not collected. Reassuringly, previous studies have demonstrated that the presence of infliximab antibodies does not decrease future response rates to adalimumab and vice versa[23].\nAdalimumab has also been shown to be effective in both inducing[9] and maintaining fistula healing[24]. Our study found that fistula healing was associated with higher adalimumab trough levels. Whilst there is limited data on the association between adalimumab trough levels and fistula healing, our findings are consistent with two smaller retrospective studies that showed that patients with fistula healing had higher adalimumab trough levels compared to those without fistula healing[14,20]. On multivariate logistic regression analysis, adalimumab trough levels ≥ 7.05 mg/L and concurrent immunomodulator therapy both remained significantly associated with healing. This reflects how concomitant immunosuppressive therapy can be used to decrease the immunogenic response and thereby improve fistula healing rates[25].\nThis study has several limitations. Assessment of fistula healing was based on clinical assessment, which may not be as accurate as an objective assessment such as magnetic resonance imaging of the pelvis. A recent study has demonstrated that higher anti-TNF trough levels are associated with improved rates of radiological healing in perianal fistulising CD[26]. However, the absence of drainage remains a clinically relevant endpoint that impacts on patient quality of life. In order to provide an objective marker of response, biochemical markers of disease activity including CRP and albumin were analysed and found not to correlate with fistula healing. Data were collected retrospectively; to address this, we only included patients with documented perianal examinations and used definitions for fistula healing and closure in line with previous randomised controlled trials[8]. We found that fistula healing is associated with higher infliximab and adalimumab trough levels; however, further randomised controlled trials are required to assess whether dose escalation to these levels improves healing and to determine the optimal method of dose escalation. Whilst reactive TDM with dose escalation at the time of loss of response is effective, it remains unknown whether proactive TDM with subsequent dose modification improves outcomes. Notably, all previous studies on proactive TDM have focused on luminal disease, with no prospective studies evaluating proactive TDM in perianal fistulising CD.", "Our study showed that higher infliximab and adalimumab trough levels are associated with perianal CD fistula healing, with higher rates of healing in higher tertiles of infliximab and adalimumab levels. However, no association with fistula closure was observed. Further prospective studies are required to confirm target infliximab and adalimumab trough levels and determine the optimal dose escalation method to achieve these target levels." ]
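The multivariate analysis above is described as logistic regression with forward selection. A minimal sketch of one common implementation follows, using statsmodels and a p-value entry criterion; the outcome and predictor column names are hypothetical, and the authors' actual selection rule and software are not stated in the text.

```python
# Hedged sketch: forward-selection logistic regression for fistula healing.
# `df` is a hypothetical pandas DataFrame with a binary `healed` column.
import pandas as pd
import statsmodels.api as sm

def forward_select(df, outcome, candidates, alpha=0.05):
    selected = []
    while True:
        best_p, best_var = alpha, None
        for var in candidates:
            if var in selected:
                continue
            X = sm.add_constant(df[selected + [var]])
            fit = sm.Logit(df[outcome], X).fit(disp=0)
            if fit.pvalues[var] < best_p:
                best_p, best_var = fit.pvalues[var], var
        if best_var is None:
            return selected  # no remaining variable enters at alpha
        selected.append(best_var)

# e.g. forward_select(df, "healed", ["trough_level", "age", "immunomodulator"])
```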
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[ "Crohn’s disease", "Perianal disorders", "Biologics", "Inflammatory bowel disease" ]
INTRODUCTION: Perianal fistulising disease is a common manifestation occurring in up to 30% of patients with Crohn’s disease (CD). The development of abnormal tracts between the bowel and perineum can cause perianal drainage, pain, bleeding, abscess formation, sepsis and faecal incontinence[1,2]. Perianal CD is associated with significant morbidity and decreased quality of life, negatively impacting physical, emotional, sexual and social wellbeing[1-3] and is an independent predictor for decreased productivity in patients with CD[4,5]. Given that the incidence of perianal fistulising CD is highest in the third and fourth decades of life, this places significant burden on patients, society, the economy and the health care system[6]. Treatment for perianal fistulising CD requires a multidisciplinary approach involving medical management with immunosuppressants and antibiotics, as well as surgical management with sepsis control, seton insertion and sometimes diversion or resection. Anti-tumor necrosis factor (anti-TNF) alpha agents, including infliximab[7,8] and adalimumab[9,10], are the most effective medical therapies available for inducing and maintaining remission of fistulas. Unfortunately, up to 60% of patients treated with maintenance infliximab lose response within one year[7,8]. Accumulating evidence suggests that this loss of response is partly due to subtherapeutic anti-TNF trough levels. Retrospective studies and post-hoc analyses of prospective data have identified that higher infliximab trough levels are associated with fistula healing and closure compared to what is observed for mucosal healing in luminal disease, with emerging data suggesting similar results for adalimumab[11-14]. Quantitative assays for therapeutic drug monitoring (TDM) permit individualisation of infliximab and adalimumab dosing[15,16], however there are very few studies on perianal fistulising CD and the optimal target levels for perianal fistulising CD remain unclear. Our study aims to assess the association between serum trough infliximab and adalimumab levels and perianal fistula healing and closure and identify optimal target levels. MATERIALS AND METHODS: Study design and patient population This was a multicentre retrospective cross-sectional study of patients with perianal fistulising CD at four tertiary inflammatory bowel disease centres across Australia between January 2014 and June 2020. All patients qualified for infliximab or adalimumab under the Australian Pharmaceutical Benefits Scheme criteria[17] which constitutes the following: (1) A confirmed diagnosis of CD using clinical, radiological, histological and/or endoscopic criteria; and (2) At least one active externally draining complex perianal fistula. We included patients on maintenance infliximab or adalimumab with a documented perianal examination who had a serum infliximab or adalimumab trough level collected within 12 wk before or after their most recent clinical assessment. Infliximab and adalimumab trough levels as well as antibodies to infliximab and adalimumab were measured using a drug sensitive enzyme-linked immunosorbent assay (Grifols Promonitor for adalimumab; LISA-Tracker and Grifols Promonitor for infliximab). Infliximab and adalimumab trough levels were measured both in a proactive manner and reactive manner in patients failing treatment across the study sites. Patients who had been changed from infliximab to adalimumab or vice versa and had relevant data were included in both the infliximab and adalimumab groups. 
All patients had received standard infliximab or adalimumab induction dosing (infliximab 5 mg/kg intravenously at weeks 0, 2, and 6; adalimumab subcutaneously 160 mg at week 0, 80 mg at week 2) followed by maintenance therapy. The current dose of anti-TNF therapy was recorded and patients with or without dose-escalated maintenance therapy were included. Patients who had a diversion ostomy, rectovaginal fistula or no documented perianal examination were excluded. This was a multicentre retrospective cross-sectional study of patients with perianal fistulising CD at four tertiary inflammatory bowel disease centres across Australia between January 2014 and June 2020. All patients qualified for infliximab or adalimumab under the Australian Pharmaceutical Benefits Scheme criteria[17] which constitutes the following: (1) A confirmed diagnosis of CD using clinical, radiological, histological and/or endoscopic criteria; and (2) At least one active externally draining complex perianal fistula. We included patients on maintenance infliximab or adalimumab with a documented perianal examination who had a serum infliximab or adalimumab trough level collected within 12 wk before or after their most recent clinical assessment. Infliximab and adalimumab trough levels as well as antibodies to infliximab and adalimumab were measured using a drug sensitive enzyme-linked immunosorbent assay (Grifols Promonitor for adalimumab; LISA-Tracker and Grifols Promonitor for infliximab). Infliximab and adalimumab trough levels were measured both in a proactive manner and reactive manner in patients failing treatment across the study sites. Patients who had been changed from infliximab to adalimumab or vice versa and had relevant data were included in both the infliximab and adalimumab groups. All patients had received standard infliximab or adalimumab induction dosing (infliximab 5 mg/kg intravenously at weeks 0, 2, and 6; adalimumab subcutaneously 160 mg at week 0, 80 mg at week 2) followed by maintenance therapy. The current dose of anti-TNF therapy was recorded and patients with or without dose-escalated maintenance therapy were included. Patients who had a diversion ostomy, rectovaginal fistula or no documented perianal examination were excluded. Demographic data Data was retrospectively collected from a clinical database that was updated prospectively during routine clinical practice. Patient demographics collected included age, gender, weight, body mass index, smoking status and CD phenotype classified according to the Montreal Classification[18]. The location of CD was identified as ileal, ileocolonic, colonic, upper gastrointestinal involvement or no luminal disease. The presence or absence of fistulising and stricturing disease was noted, in particular the presence of anal strictures. Biochemical markers of disease activity including C-reactive protein (CRP) and albumin were also recorded. Data was retrospectively collected from a clinical database that was updated prospectively during routine clinical practice. Patient demographics collected included age, gender, weight, body mass index, smoking status and CD phenotype classified according to the Montreal Classification[18]. The location of CD was identified as ileal, ileocolonic, colonic, upper gastrointestinal involvement or no luminal disease. The presence or absence of fistulising and stricturing disease was noted, in particular the presence of anal strictures. 
Biochemical markers of disease activity including C-reactive protein (CRP) and albumin were also recorded. Current management Prior history of surgical management of perianal disease or fistula was recorded and categorised as examination under anaesthesia and curettage, examination under anaesthesia and seton insertion or fistulotomy. The duration from the last surgical procedure to the follow up visit was recorded. Concomitant medical therapy at the time of follow up was assessed, including corticosteroid use, 5-aminosalicylates and immunomodulators. The doses of infliximab and adalimumab were recorded and stratified according to dose and interval between doses. For patients on dose-escalated anti-TNF therapy, the duration between last dose escalation and follow up was recorded. Prior history of surgical management of perianal disease or fistula was recorded and categorised as examination under anaesthesia and curettage, examination under anaesthesia and seton insertion or fistulotomy. The duration from the last surgical procedure to the follow up visit was recorded. Concomitant medical therapy at the time of follow up was assessed, including corticosteroid use, 5-aminosalicylates and immunomodulators. The doses of infliximab and adalimumab were recorded and stratified according to dose and interval between doses. For patients on dose-escalated anti-TNF therapy, the duration between last dose escalation and follow up was recorded. Primary and secondary outcomes The primary outcome was fistula healing, which was defined as cessation of fistula drainage, with or without a seton in situ[7]. The secondary outcome was fistula closure, which was defined as healing and closure of all external fistula openings[7]. The primary outcome was fistula healing, which was defined as cessation of fistula drainage, with or without a seton in situ[7]. The secondary outcome was fistula closure, which was defined as healing and closure of all external fistula openings[7]. Statistical analysis Statistical review of this study was performed by a biostatistician from the Ingham Institute for Applied Medical Research. Descriptive statistics were used to assess the baseline characteristics of both the infliximab and adalimumab cohorts. Categorical variables were expressed as percentages and compared using the chi-square test. Continuous variables were expressed using mean ± SD for normally distributed variables and median and interquartile range (IQR) for non-normally distributed variables. The means were compared using the t test for normally distributed variables and the mean ranks compared using the Mann-Whitney U test for non-normally distributed variables. A receiver operating characteristic (ROC) curve analysis was used to assess the sensitivity and specificity of infliximab and adalimumab levels at different cut-off points for predicting fistula healing. All reported P values were 2-sided, with P < 0.05 considered statistically significant. Multivariate analysis using logistic regression with forwards selection was used to analyse variables that predicted fistula healing. Variables which were statistically significant in the univariate analysis were included in the multivariate analysis model. Ethics approval was obtained from the South Western Sydney Local Health District (Human Research Ethics Committee LNR/18/LPOOL/404; Local Project Number: HE18/261). Statistical review of this study was performed by a biostatistician from the Ingham Institute for Applied Medical Research. 
Descriptive statistics were used to assess the baseline characteristics of both the infliximab and adalimumab cohorts. Categorical variables were expressed as percentages and compared using the chi-square test. Continuous variables were expressed using mean ± SD for normally distributed variables and median and interquartile range (IQR) for non-normally distributed variables. The means were compared using the t test for normally distributed variables and the mean ranks compared using the Mann-Whitney U test for non-normally distributed variables. A receiver operating characteristic (ROC) curve analysis was used to assess the sensitivity and specificity of infliximab and adalimumab levels at different cut-off points for predicting fistula healing. All reported P values were 2-sided, with P < 0.05 considered statistically significant. Multivariate analysis using logistic regression with forwards selection was used to analyse variables that predicted fistula healing. Variables which were statistically significant in the univariate analysis were included in the multivariate analysis model. Ethics approval was obtained from the South Western Sydney Local Health District (Human Research Ethics Committee LNR/18/LPOOL/404; Local Project Number: HE18/261). Study design and patient population: This was a multicentre retrospective cross-sectional study of patients with perianal fistulising CD at four tertiary inflammatory bowel disease centres across Australia between January 2014 and June 2020. All patients qualified for infliximab or adalimumab under the Australian Pharmaceutical Benefits Scheme criteria[17] which constitutes the following: (1) A confirmed diagnosis of CD using clinical, radiological, histological and/or endoscopic criteria; and (2) At least one active externally draining complex perianal fistula. We included patients on maintenance infliximab or adalimumab with a documented perianal examination who had a serum infliximab or adalimumab trough level collected within 12 wk before or after their most recent clinical assessment. Infliximab and adalimumab trough levels as well as antibodies to infliximab and adalimumab were measured using a drug sensitive enzyme-linked immunosorbent assay (Grifols Promonitor for adalimumab; LISA-Tracker and Grifols Promonitor for infliximab). Infliximab and adalimumab trough levels were measured both in a proactive manner and reactive manner in patients failing treatment across the study sites. Patients who had been changed from infliximab to adalimumab or vice versa and had relevant data were included in both the infliximab and adalimumab groups. All patients had received standard infliximab or adalimumab induction dosing (infliximab 5 mg/kg intravenously at weeks 0, 2, and 6; adalimumab subcutaneously 160 mg at week 0, 80 mg at week 2) followed by maintenance therapy. The current dose of anti-TNF therapy was recorded and patients with or without dose-escalated maintenance therapy were included. Patients who had a diversion ostomy, rectovaginal fistula or no documented perianal examination were excluded. Demographic data: Data was retrospectively collected from a clinical database that was updated prospectively during routine clinical practice. Patient demographics collected included age, gender, weight, body mass index, smoking status and CD phenotype classified according to the Montreal Classification[18]. The location of CD was identified as ileal, ileocolonic, colonic, upper gastrointestinal involvement or no luminal disease. 
The presence or absence of fistulising and stricturing disease was noted, in particular the presence of anal strictures. Biochemical markers of disease activity including C-reactive protein (CRP) and albumin were also recorded. Current management: Prior history of surgical management of perianal disease or fistula was recorded and categorised as examination under anaesthesia and curettage, examination under anaesthesia and seton insertion or fistulotomy. The duration from the last surgical procedure to the follow up visit was recorded. Concomitant medical therapy at the time of follow up was assessed, including corticosteroid use, 5-aminosalicylates and immunomodulators. The doses of infliximab and adalimumab were recorded and stratified according to dose and interval between doses. For patients on dose-escalated anti-TNF therapy, the duration between last dose escalation and follow up was recorded. Primary and secondary outcomes: The primary outcome was fistula healing, which was defined as cessation of fistula drainage, with or without a seton in situ[7]. The secondary outcome was fistula closure, which was defined as healing and closure of all external fistula openings[7]. Statistical analysis: Statistical review of this study was performed by a biostatistician from the Ingham Institute for Applied Medical Research. Descriptive statistics were used to assess the baseline characteristics of both the infliximab and adalimumab cohorts. Categorical variables were expressed as percentages and compared using the chi-square test. Continuous variables were expressed using mean ± SD for normally distributed variables and median and interquartile range (IQR) for non-normally distributed variables. The means were compared using the t test for normally distributed variables and the mean ranks compared using the Mann-Whitney U test for non-normally distributed variables. A receiver operating characteristic (ROC) curve analysis was used to assess the sensitivity and specificity of infliximab and adalimumab levels at different cut-off points for predicting fistula healing. All reported P values were 2-sided, with P < 0.05 considered statistically significant. Multivariate analysis using logistic regression with forwards selection was used to analyse variables that predicted fistula healing. Variables which were statistically significant in the univariate analysis were included in the multivariate analysis model. Ethics approval was obtained from the South Western Sydney Local Health District (Human Research Ethics Committee LNR/18/LPOOL/404; Local Project Number: HE18/261). RESULTS: Out of 454 patients screened, 114 patients (66 infliximab, 48 adalimumab) on maintenance infliximab or adalimumab for perianal CD had a trough level collected within 12 wk of clinical assessment. Five patients had been changed from infliximab to adalimumab or vice versa and were included in both the infliximab and adalimumab groups. Seventy-five (66%) patients were on combination therapy (43 azathioprine, 16 6-mercaptopurine, 16 methotrexate). Nineteen patients (28.8%) on maintenance infliximab were on dose escalated infliximab therapy (5, 7.5, 10, 15 or 20 mg/kg every 6 or 8 wk). For these patients, the median duration between last infliximab dose adjustment and follow up was 60.0 wk (IQR = 44.5-81.0). Eleven (22.9%) patients on maintenance adalimumab were on dose escalated adalimumab therapy (40 mg weekly). 
For these patients, the median duration between last adalimumab dose adjustment and follow up was 39.0 wk (IQR = 24.0-86.0). Fifty-nine (89.3%) patients on infliximab had prior surgical management of their fistula, with a median duration of 93.0 wk (IQR = 45.5-284.5) between their last surgical procedure and their most recent follow up visit. Thirty-seven (77.1%) patients on adalimumab had prior surgical management of their fistula, with a median duration of 83.0 wk (IQR = 28.75-223.0) between their last surgical procedure and their most recent follow up visit. Patient demographics and disease characteristics of the population are summarised in Table 1. Patient demographics and disease characteristics ADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor. Association between fistula healing and closure with infliximab trough levels Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing. Table 2 summarises the differences between patients on infliximab with and without fistula healing. Patients who achieved fistula healing had higher infliximab trough levels [6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003], lower rates of detectable anti-infliximab antibodies (4.3% vs 33.3%, P = 0.004) and a younger age (33.0 vs 43.5 years old; P = 0.003) compared to patients who did not achieve fistula healing. The presence of detectable anti-infliximab antibodies was associated with lower infliximab trough levels (P = 0.02). The CRP and albumin levels were not significantly different between patients with and without fistula healing. The rates of combination therapy with an immunomodulator were not significantly different between patients who achieved fistula healing and those who did not (P = 0.522). Differences between patients on infliximab with and without fistula healing ADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor. ROC curve analysis identified a positive correlation between infliximab trough levels and healing [area under the curve (AUC) = 0.74, 95% confidence interval (CI): 0.60-0.88, P = 0.003; Figure 1A] with an infliximab trough level of 6.10 mg/L that maximised the sensitivity and specificity of predicting fistula healing [sensitivity 58%, specificity 78%, odds ratio (OR) = 4.9, P = 0.013]. Upon tertile analysis, higher tertiles of infliximab levels were associated with a higher proportion of patients achieving fistula healing with 54.5% healing rate for tertile 1 compared to 90.1% for tertile 3 (Figure 2A; P = 0.026). Out of the patients who achieved fistula healing on infliximab, 90% and 95% of the patients who achieved fistula healing were healed with an infliximab trough level of 12.7 and 14.4 mg/L respectively. Given that a drug-sensitive infliximab assay was used where anti-infliximab antibody titres were only performed if infliximab concentrations were < 2.0 mg/L, anti-infliximab antibodies were not included in the multivariate analysis. On multivariate logistic regression analysis, age was associated with healing (P = 0.026) but adequate infliximab levels ≥ 6.10 mg/L were not (P = 0.097). Within our cohort, 18 (27.3%) of patients on infliximab achieved fistula closure. The infliximab trough level for patients with and without fistula closure was not significantly different [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105]. 
Correlation between serum trough level of infliximab, adalimumab and fistula healing. A: Infliximab; B: Adalimumab. Tertile analysis of infliximab and adalimumab trough levels for patients with fistula healing and fistula closure. A: Infliximab; B: Adalimumab. Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing. Table 2 summarises the differences between patients on infliximab with and without fistula healing. Patients who achieved fistula healing had higher infliximab trough levels [6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003], lower rates of detectable anti-infliximab antibodies (4.3% vs 33.3%, P = 0.004) and a younger age (33.0 vs 43.5 years old; P = 0.003) compared to patients who did not achieve fistula healing. The presence of detectable anti-infliximab antibodies was associated with lower infliximab trough levels (P = 0.02). The CRP and albumin levels were not significantly different between patients with and without fistula healing. The rates of combination therapy with an immunomodulator were not significantly different between patients who achieved fistula healing and those who did not (P = 0.522). Differences between patients on infliximab with and without fistula healing ADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor. ROC curve analysis identified a positive correlation between infliximab trough levels and healing [area under the curve (AUC) = 0.74, 95% confidence interval (CI): 0.60-0.88, P = 0.003; Figure 1A] with an infliximab trough level of 6.10 mg/L that maximised the sensitivity and specificity of predicting fistula healing [sensitivity 58%, specificity 78%, odds ratio (OR) = 4.9, P = 0.013]. Upon tertile analysis, higher tertiles of infliximab levels were associated with a higher proportion of patients achieving fistula healing with 54.5% healing rate for tertile 1 compared to 90.1% for tertile 3 (Figure 2A; P = 0.026). Out of the patients who achieved fistula healing on infliximab, 90% and 95% of the patients who achieved fistula healing were healed with an infliximab trough level of 12.7 and 14.4 mg/L respectively. Given that a drug-sensitive infliximab assay was used where anti-infliximab antibody titres were only performed if infliximab concentrations were < 2.0 mg/L, anti-infliximab antibodies were not included in the multivariate analysis. On multivariate logistic regression analysis, age was associated with healing (P = 0.026) but adequate infliximab levels ≥ 6.10 mg/L were not (P = 0.097). Within our cohort, 18 (27.3%) of patients on infliximab achieved fistula closure. The infliximab trough level for patients with and without fistula closure was not significantly different [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105]. Correlation between serum trough level of infliximab, adalimumab and fistula healing. A: Infliximab; B: Adalimumab. Tertile analysis of infliximab and adalimumab trough levels for patients with fistula healing and fistula closure. A: Infliximab; B: Adalimumab. Association between fistula healing and closure with adalimumab trough levels Thirty-seven (77%) patients on maintenance adalimumab achieved fistula healing. Table 3 summarises the differences in patients on adalimumab with and without fistula healing. Patients who achieved fistula healing had higher adalimumab trough levels compared to those who did not [9.2 (6.5-12.0) vs 5.4 (2.5-8.3) mg/L, P = 0.004]. 
Patients who achieved fistula healing also had higher rates of combination therapy with an immunomodulator than those who did not (P = 0.048). CRP and albumin levels were not significantly different between patients with and without fistula healing. ROC curve analysis identified a positive correlation between adalimumab trough levels and healing (AUC = 0.79, 95%CI: 0.66-0.93, P = 0.004); an adalimumab trough level of 7.05 mg/L maximised the sensitivity and specificity for predicting fistula healing (sensitivity 70%; specificity 73%; OR = 6.3; P = 0.016; Figure 1B). On tertile analysis, higher tertiles of adalimumab levels were associated with a higher proportion of patients achieving fistula healing, with a 62.5% healing rate in tertile 1 compared with 100% in tertile 3 (Figure 2B; P = 0.034). Among the patients who achieved fistula healing on adalimumab, 90% and 95% were healed at adalimumab trough levels of 12.0 and 18.0 mg/L, respectively. On multivariate logistic regression analysis, adequate adalimumab trough levels ≥ 7.05 mg/L (P = 0.008) and concurrent immunomodulator therapy (P = 0.026) both remained associated with healing. Within our cohort, 17 (35.4%) patients on adalimumab achieved fistula closure. Adalimumab trough levels for patients with and without fistula closure were not significantly different [10.0 (6.6-12.0) vs 7.8 (4.2-10.0) mg/L, P = 0.083].

Table 3: Differences between patients on adalimumab with and without fistula healing. ADA: Adalimumab; BMI: Body mass index; CRP: C-reactive protein; IFX: Infliximab; IQR: Interquartile range; SD: Standard deviation; TNF: Tumor necrosis factor.
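The tertile analyses for both drugs split patients into thirds by trough level and compare healing proportions across the groups. Below is a minimal sketch of one plausible implementation, using pandas quantile binning and a chi-square test on the resulting contingency table; the data and effect sizes are simulated for illustration.

```python
# Hypothetical sketch of a tertile analysis: bin trough levels into thirds
# and compare healing proportions across bins with a chi-square test.
# All data are simulated; the study's records are not published.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
trough = rng.gamma(shape=4.0, scale=2.0, size=60)                    # mg/L, illustrative
healed = (rng.random(60) < np.clip(0.35 + 0.04 * trough, 0, 1)).astype(int)

df = pd.DataFrame({"trough": trough, "healed": healed})
df["tertile"] = pd.qcut(df["trough"], q=3, labels=["T1", "T2", "T3"])

print(df.groupby("tertile", observed=True)["healed"].mean())  # healing rate per tertile
chi2, p, dof, _ = chi2_contingency(pd.crosstab(df["tertile"], df["healed"]))
print(f"chi-square P = {p:.3f}")
```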
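For the multivariate analyses, healing was modelled on a binary "adequate trough" indicator together with other covariates such as concurrent immunomodulator use. A hedged sketch of such a model with statsmodels follows; the twelve records and column names below are invented, not the study's data.

```python
# Hypothetical sketch of the multivariate model: fistula healing regressed on
# an adequate-trough indicator plus concurrent immunomodulator use.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "healed":          [1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1],
    "adequate_trough": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1],  # e.g. ADA >= 7.05 mg/L
    "immunomodulator": [1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1],
})

model = smf.logit("healed ~ adequate_trough + immunomodulator", data=df).fit(disp=0)
print(np.exp(model.params))   # odds ratios for each covariate
print(model.pvalues)
```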
DISCUSSION: Fistulising perianal CD is a highly morbid condition for which treatment outcomes remain suboptimal in many patients. While there is limited data on the role of newer biologic agents such as ustekinumab in perianal CD[19], anti-TNF agents remain the treatment of choice. Our study showed a significant association between both infliximab and adalimumab trough levels and fistula healing, with higher levels associated with increased healing rates. We demonstrated that higher tertiles of both infliximab and adalimumab levels were associated with a higher proportion of patients achieving fistula healing. Notably, when plotting the cumulative percentage of healed patients against infliximab level, we found that 50% of the patients who achieve healing will heal with a level of 6.4 mg/L, 90% with a level of 12.7 mg/L and 95% with a level of 14.4 mg/L. Similarly, for patients on adalimumab, 50% of the patients who achieve healing will heal with a level of 9.2 mg/L, and 90% and 95% with levels of 12.0 and 18.0 mg/L respectively. Our results support dose escalation of both infliximab and adalimumab in non-responders, targeting higher levels to achieve fistula healing prior to changing biologic therapy. Importantly, this is the largest study to date assessing the relationship between adalimumab trough levels and clinical fistula healing.
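The 50%/90%/95% figures quoted above are, in effect, percentiles of the trough-level distribution among patients who achieved healing. Assuming that reading, the computation is straightforward; the levels below are illustrative only.

```python
# Illustrative only: the 50/90/95% healing levels are percentiles of trough
# concentrations among patients who achieved healing. Values are invented.
import numpy as np

healed_troughs = np.array([3.8, 4.5, 5.2, 6.4, 7.0, 8.1, 9.5, 10.3, 12.7, 14.4])

for pct in (50, 90, 95):
    level = np.percentile(healed_troughs, pct)
    print(f"{pct}% of healed patients healed at a trough <= {level:.1f} mg/L")
```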
This data adds to the growing body of evidence that fistula healing improves with higher anti-TNF trough levels, and that higher levels may be required for perianal fistula healing than for mucosal healing in luminal CD[12-14,20]. This study did not show an association between infliximab and adalimumab trough levels and fistula closure. Not all previous studies have assessed fistula closure, but some have found that patients with fistula closure had significantly higher maintenance infliximab and adalimumab trough levels[13,14]. Our results may have been limited by inadequate power, given the relatively small number of patients who achieved fistula closure in our cohort. We had a high fistula healing rate in this study, with 72.7% and 77% of patients on maintenance infliximab and adalimumab achieving fistula healing respectively. This finding was possibly due to high rates of combination therapy with an immunomodulator (69.7% and 60.4% in the infliximab and adalimumab groups respectively). Randomised controlled trials have shown that infliximab is effective at both inducing and maintaining fistula healing[7,8]. Our study found that fistula healing was associated with higher infliximab trough levels. This finding is supported by a post-hoc analysis of ACCENT II, which found that higher infliximab trough levels during induction were associated with a complete absence of draining fistulas at week 14[12], as well as by similar findings in other studies assessing induction and maintenance infliximab therapy[11,13]. In the future, there may be a role for the infliximab biosimilar CT-P13 in achieving the high infliximab levels required for perianal fistula healing, with recent randomised controlled trials demonstrating higher trough levels from subcutaneous administration of CT-P13 compared to intravenous administration[21]. Interestingly, our study found that fistula healing was associated with younger age in both univariate and multivariate analyses. Whilst patient factors including albumin and body weight have previously been shown to affect infliximab trough levels[22], the influence of age is unclear. This finding may reflect the relatively younger age at CD diagnosis of patients with fistula healing, or a longer duration of infliximab therapy. Five patients in this study had been changed from infliximab to adalimumab or vice versa and were included in both groups; however, the anti-TNF trough and anti-TNF antibody levels at the time of changing treatment were not collected. Reassuringly, previous studies have demonstrated that the presence of infliximab antibodies does not decrease future response rates to adalimumab and vice versa[23]. Adalimumab has also been shown to be effective in both inducing[9] and maintaining fistula healing[24]. Our study found that fistula healing was associated with higher adalimumab trough levels. Whilst there is limited data on the association between adalimumab trough levels and fistula healing, our findings are consistent with two smaller retrospective studies that showed patients with fistula healing had higher adalimumab trough levels than those without fistula healing[14,20]. On multivariate logistic regression analysis, adalimumab trough levels ≥ 7.05 mg/L and concurrent immunomodulator therapy both remained significantly associated with healing. This reflects how concomitant immunosuppressive therapy can decrease the immunogenic response and thereby improve fistula healing rates[25]. This study has several limitations.
Assessment of fistula healing was based on clinical assessment, which may not be as accurate as an objective assessment such as magnetic resonance imaging of the pelvis. A recent study demonstrated that higher anti-TNF trough levels are associated with improved rates of radiological healing in perianal fistulising CD[26]. However, the absence of drainage remains a clinically relevant endpoint that impacts patient quality of life. To provide an objective marker of response, biochemical markers of disease activity including CRP and albumin were analysed and found not to correlate with fistula healing. Data was collected retrospectively; to address this, we only included patients with documented perianal examinations and used definitions of fistula healing and closure in line with previous randomised controlled trials[8]. We found that fistula healing is associated with higher infliximab and adalimumab trough levels; however, further randomised controlled trials are required to assess whether dose escalation to higher levels improves healing, and to establish the optimal method for dose escalation. Whilst reactive TDM with dose escalation at the time of loss of response is effective, it remains unknown whether proactive TDM with subsequent dose modification improves outcomes. Notably, all previous studies on proactive TDM have focused on luminal disease, with no prospective studies evaluating proactive TDM in perianal fistulising CD.

CONCLUSION: Our study showed that higher infliximab and adalimumab trough levels are associated with perianal CD fistula healing, with higher rates of healing in higher tertiles of infliximab and adalimumab levels. However, no association with fistula closure was observed. Further prospective studies are required to confirm target infliximab and adalimumab trough levels and determine the optimal dose escalation method to achieve these target levels.
Background: Tumor necrosis factor-alpha inhibitors, including infliximab and adalimumab, are effective medical treatments for perianal fistulising Crohn's disease (CD), but not all patients achieve fistula healing. Methods: In this multicentre retrospective study conducted across four tertiary inflammatory bowel disease centres in Australia, we identified CD patients with perianal fistulae on maintenance infliximab or adalimumab who had a trough level within twelve weeks of clinical assessment. Data collected included demographics, serum infliximab and adalimumab trough levels (mg/L) within 12 wk before or after their most recent clinical assessment and concomitant medical or surgical therapy. The primary outcome was fistula healing, defined as cessation in fistula drainage. The secondary outcome was fistula closure, defined as healing and closure of all external fistula openings. Differences between patients who did or did not achieve fistula healing were compared using the chi-square test, t test or Mann-Whitney U test. Results: One hundred and fourteen patients (66 infliximab, 48 adalimumab) were included. Forty-eight (72.7%) patients on maintenance infliximab achieved fistula healing and 18 (27.3%) achieved fistula closure. Thirty-seven (77%) patients on maintenance adalimumab achieved fistula healing and 17 (35.4%) achieved fistula closure. Patients who achieved fistula healing had significantly higher infliximab and adalimumab trough levels than patients who did not [infliximab: 6.4 (3.8-9.5) vs 3.0 (0.3-6.2) mg/L, P = 0.003; adalimumab: 9.2 (6.5-12.0) vs 5.4 (2.5-8.3) mg/L, P = 0.004]. For patients on infliximab, fistula healing was associated with lower rates of detectable anti-infliximab antibodies and younger age. For patients on adalimumab, fistula healing was associated with higher rates of combination therapy with an immunomodulator. Serum trough levels for patients with and without fistula closure were not significantly different for infliximab [6.9 (4.3-10.2) vs 5.5 (2.5-8.3) mg/L, P = 0.105] or adalimumab [10.0 (6.6-12.0) vs 7.8 (4.2-10.0) mg/L, P = 0.083]. Conclusions: Higher maintenance infliximab and adalimumab trough levels are associated with perianal fistula healing in CD.
INTRODUCTION: Perianal fistulising disease is a common manifestation occurring in up to 30% of patients with Crohn's disease (CD). The development of abnormal tracts between the bowel and perineum can cause perianal drainage, pain, bleeding, abscess formation, sepsis and faecal incontinence[1,2]. Perianal CD is associated with significant morbidity and decreased quality of life, negatively impacting physical, emotional, sexual and social wellbeing[1-3], and is an independent predictor of decreased productivity in patients with CD[4,5]. Given that the incidence of perianal fistulising CD is highest in the third and fourth decades of life, this places a significant burden on patients, society, the economy and the health care system[6]. Treatment of perianal fistulising CD requires a multidisciplinary approach involving medical management with immunosuppressants and antibiotics, as well as surgical management with sepsis control, seton insertion and sometimes diversion or resection. Anti-tumor necrosis factor (anti-TNF) alpha agents, including infliximab[7,8] and adalimumab[9,10], are the most effective medical therapies available for inducing and maintaining remission of fistulas. Unfortunately, up to 60% of patients treated with maintenance infliximab lose response within one year[7,8]. Accumulating evidence suggests that this loss of response is partly due to subtherapeutic anti-TNF trough levels. Retrospective studies and post-hoc analyses of prospective data have identified that higher infliximab trough levels than those targeted for mucosal healing in luminal disease are associated with fistula healing and closure, with emerging data suggesting similar results for adalimumab[11-14]. Quantitative assays for therapeutic drug monitoring (TDM) permit individualisation of infliximab and adalimumab dosing[15,16]; however, there are very few studies on perianal fistulising CD and the optimal target levels remain unclear. Our study aims to assess the association between serum trough infliximab and adalimumab levels and perianal fistula healing and closure, and to identify optimal target levels.
Brazilian Dialysis Survey 2020 (PMID: 35212702; PMCID: 9518621)

INTRODUCTION: National data on chronic dialysis treatment are essential to support the development of health policies aimed at improving the treatment of thousands of people.

METHODS: A survey was carried out in Brazilian chronic dialysis centers using an online questionnaire covering the year 2020, including clinical and epidemiological aspects of patients in a chronic dialysis program, data on dialysis therapy, characteristics of dialysis units and the impact of the COVID-19 pandemic.

RESULTS: In total, 235 (28%) of the centers responded to the questionnaire. In July 2020, the estimated total number of patients on dialysis was 144,779. The estimated prevalence and incidence rates of patients per million population (pmp) were 684 and 209, respectively. Of the prevalent patients, 92.6% were on hemodialysis (HD) and 7.4% were on peritoneal dialysis (PD); 23% were on the transplant waiting list. A central venous catheter was used by a quarter of patients on HD. The incidence rate of confirmed COVID-19 between February and July 2020 was 684/10,000 dialysis patients, and the lethality rate was 25.7%. The estimated overall mortality and COVID-19 crude annual mortality rates were 24.5% and 4.2%, respectively.

CONCLUSION: The absolute number of patients on chronic dialysis and the prevalence rate continued to increase. The low use of PD as a dialysis therapy persisted, and the use of long-term catheters for HD increased. The COVID-19 pandemic contributed to the increase in the overall mortality rate.

MeSH terms: Brazil; COVID-19; Humans; Kidney Failure, Chronic; Pandemics; Renal Dialysis; Surveys and Questionnaires
Introduction
With the aim of obtaining and analyzing data on clinical and epidemiological aspects of patients undergoing chronic dialysis, in addition to information on dialysis therapy, the Brazilian Society of Nephrology (BSN) annually sponsors the Brazilian Dialysis Survey[1,2]. Since its initial implementation in 1999, the survey has been conducted nationwide and, in the last decade, it has been completed electronically by dialysis centers. The continuous survey is justified because it provides relevant information for the development of health policies and strategies aimed at improving care for individuals undergoing dialysis. In this study we are reporting data from the 2020 Brazilian Dialysis Survey, including information about the impact of the COVID-19 pandemic on dialysis clinic patients and staff.
Methods
Data collection
Dialysis clinics filled out an online questionnaire available on the BSN website. It contained questions about sociodemographic, clinical, and therapeutic variables of patients on chronic dialysis (Supplement). The questionnaire was available from August 2020 to January 2021. Participation in the survey was voluntary, and all dialysis centers registered at BSN were invited to participate by email and by BSN media. After the initial invitation, reminders were emailed monthly to centers that had not filled out the questionnaire. During the survey period, the chairs of the regional sections of the BSN were asked to contact the dialysis centers in their states and reiterate the importance of their participation.

Data analysis
Data for each center were grouped rather than analyzed individually. For the 2020 survey, 235 of 834 active centers responded to the questionnaire, a response rate of 28%. For national estimates of the total number of patients and the prevalence rate, the sample was expanded. We assumed that the units that did not respond to the questionnaire had the same number of patients as participants (mean of 173.6 patients per unit). Because this extrapolation can be imprecise, we used a variation of ± 5% around the obtained mean (164.9 to 182.3 patients per unit) for the prevalence calculations. Likewise, the mean number of new patients per unit was applied to units that did not report incidence rates. All other sociodemographic data and patient characteristics refer to the sample studied. Annual mortality and annual incidence of patients on dialysis were estimated from events in July 2020. For calculating prevalence and incidence rates, national and regional population data were obtained from the Brazilian Institute of Geography and Statistics (IBGE) estimates for July 2020. According to IBGE, the Brazilian population at that date was 211.75 million inhabitants[3]. Most data were presented descriptively, and results were compared with data from previous years. Information on COVID-19, such as incidence, hospitalizations, and lethality, was considered for the period from February 26th (date of the first case reported in the country) to July 31st, 2020. The diagnosis of COVID-19 required confirmation by real-time polymerase chain reaction (RT-PCR) of nasal/oropharyngeal specimens or by serologic testing.

Calculations of estimates
The main calculations and estimates are shown in Table 1.
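The expansion described above amounts to applying the responders' mean patient count to all registered units, with a ± 5% band around the mean. A short sketch reproducing that arithmetic follows; the survey's exact procedure may differ in detail (e.g., summing responders' actual counts and imputing only the non-responders), but at the reported precision the approaches agree.

```python
# Sketch of the sample expansion: assume non-responding units have the same
# mean patient count as responders (173.6 patients/unit), with a +/- 5% band.
total_units = 834

for mean_per_unit in (173.6, 164.9, 182.3):   # central, -5%, +5%
    estimate = total_units * mean_per_unit
    print(f"mean {mean_per_unit}/unit -> {estimate:,.0f} patients")
# ~144,782 central (published: 144,779, from the unrounded mean),
# with the published range of 137,527 to 152,038
```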
Results
Estimated incidence, prevalence, and mortality rates
In July 2020, there were 834 active chronic dialysis centers registered at BSN, a number 3.6% higher than in 2019. However, the percentage of participating centers decreased compared to the previous year (from 39 to 28%), with a slight difference in percentage of participation among regions (North and Midwest: 25%; South: 28%; Northeast and Southeast: 29%). The lower adherence resulted in a 25% decrease in patients whose data were used for the annual report (54,488 to 40,795). The estimated total number of patients in July 2020 was 144,779 (a variation of ± 5% = 137,527 to 152,038), 3.6% higher than in July 2019. The trend observed in recent years toward an increase in the number of dialysis patients continued in 2020 (Figure 1). The prevalence rate of dialysis patients also continued to increase, from 665 in 2019 to 684 per million population (pmp) in 2020, consistent with the trend observed in recent years. When stratified by region, a significant decrease in prevalence rate was observed only in the North region (Figure 2); the numbers increased slightly in the other regions. The estimated number of new dialysis patients in 2020 was 44,264, and the overall incidence rate was 209 pmp, lower than in 2019 when it reached 218 pmp. The incidence rate ranged from 75 pmp in the North region to 227 pmp in the South. The estimated number of deaths for the entire year was 35,413. The annual crude mortality rate has been between 18 and 20% since 2016, and is projected to increase to 24.5% in 2020 (Figure 3).

Figure 1: Estimated number of patients on chronic dialysis per year.
Figure 2: Estimated prevalence of patients on dialysis by geographic region in Brazil, per million population.
Figure 3: Estimated annual crude mortality rate of dialysis patients.
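Table 1 gives the survey's formal definitions; the headline rates reduce to simple ratios, as sketched below from the published totals. One assumption: the mortality denominator here is the July prevalent population, which reproduces the reported 24.5%.

```python
# Sketch of the headline rate arithmetic using the survey's published totals.
population = 211_750_000       # IBGE estimate for Brazil, July 2020
prevalent = 144_779            # estimated dialysis patients, July 2020
incident_year = 44_264         # estimated new dialysis patients in 2020
deaths_year = 35_413           # estimated deaths for the entire year

prevalence_pmp = prevalent / population * 1_000_000     # ~684 pmp
incidence_pmp = incident_year / population * 1_000_000  # ~209 pmp
crude_mortality_pct = deaths_year / prevalent * 100     # ~24.5%

print(f"Prevalence {prevalence_pmp:.0f} pmp, incidence {incidence_pmp:.0f} pmp, "
      f"annual crude mortality {crude_mortality_pct:.1f}%")
```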
Demographic and clinical characteristics
The most prevalent age group was between 45 and 64 years, representing 42.5% (Figure 4). The distribution by sex, 58% men and 42% women, remained stable, as did the percentage of main underlying diseases. Systemic arterial hypertension and diabetes mellitus each accounted for almost one third of all cases (Figure 5). The percentage of patients with hepatitis C continued to decrease, while the percentage of patients with hepatitis B and HIV remained stable (Figure 6). Regarding vascular access, 25% of hemodialysis (HD) patients used a central venous catheter. A decrease in the use of short-term catheters and prostheses was observed, while the use of long-term catheters increased (11%) (Figure 7). The estimated number of dialysis patients on the kidney transplant waiting list in 2020 was 33,239 (23%), similar to the previous year.

Figure 4: Distribution of patients according to age group.
Figure 5: Distribution of dialysis patients according to chronic kidney disease etiology.
Figure 6: Prevalence of patients with positive serology for hepatitis B and C and HIV viruses.
Figure 7: Distribution of vascular accesses used for hemodialysis.

Characteristics of dialysis treatment
The distribution of patients by dialysis modality and payment source is shown in Table 2. HD continued to be the treatment for most patients (92.6%), and 7.4% were treated with peritoneal dialysis (PD). Treatment was financed by the public health system for 81.6% of patients and by private health insurance for 18.4% of patients in the participating units.

Characteristics of participating centers
Of the 235 participating dialysis units, 71% were privately owned, 18% were philanthropic, and 10% were public. Among private clinics, international corporations managed 14.4% of participating units. Most centers described themselves as satellite (54%), and 46% were in-patient units.
Fifty-six percent of participating units reported having PD as a treatment option. The national average number of patients per nephrologist was 27 and ranged from 21 in the North to 30 in the South.

COVID-19
In the 234 dialysis centers between March and July, there were 2791 reported cases of COVID-19. The incidence rate of confirmed COVID-19 between February 26th and July 31st, 2020 was 684/10,000 dialysis patients. Confirmation was performed by real-time polymerase chain reaction (RT-PCR) in 68.4%, by serologic testing in 26.3%, and by both methods in 6%. Of the total number of infected patients, 95.7% were on HD and 4.3% on PD. Nearly 52% of confirmed cases were hospitalized and, of these, 57.6% required treatment in intensive care units. Seven hundred and eighteen COVID-19 patients died. The case-fatality rate was 25.7% and the mortality rate reached 176/10,000 patients. The estimated crude annual mortality rate attributed to COVID-19 was 4.2%. As a strategy to isolate suspected or confirmed cases of COVID-19, 76.1% of participating centers reported treatment in a separate room or space. The remainder (23.9%) chose transfer to a specific shift (Figure 8). The total percentage of infection among health professionals working in clinics was 21.9%. The percentages in physicians, nurses, and nursing technicians were 25.1, 24.0, and 20.8%, respectively. Death by COVID-19 was only reported in nursing technicians (0.1%).

Figure 8: COVID-19 mortality rate, case-fatality rate and strategy adopted to isolate cases.
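The COVID-19 rates likewise follow from the published counts over the February-July window, using the 40,795 patients in participating centers as the denominator; this reproduces the reported figures to within rounding. The annualization convention for the 4.2% crude mortality is not stated in the survey, so the scaling below is an assumption.

```python
# Sketch of the COVID-19 rate arithmetic for February 26th - July 31st, 2020.
# Denominator: the 40,795 dialysis patients in participating centers.
patients = 40_795
cases = 2_791
deaths = 718

incidence_per_10k = cases / patients * 10_000        # ~684 / 10,000 patients
case_fatality_pct = deaths / cases * 100             # ~25.7%
mortality_per_10k = deaths / patients * 10_000       # ~176 / 10,000 patients

# Assumption: the window spans roughly five months, so scale the period
# mortality to a full year to obtain the annualized COVID-19 mortality.
annual_covid_mortality_pct = deaths / patients * (12 / 5) * 100   # ~4.2%

print(f"{incidence_per_10k:.0f}/10,000; CFR {case_fatality_pct:.1f}%; "
      f"{mortality_per_10k:.0f}/10,000; annualized {annual_covid_mortality_pct:.1f}%")
```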
null
null
[ "Data collection", "Data analysis", "Calculations of estimates", "Estimated incidence, prevalence, and mortality rates", "Demographic and clinical characteristics", "Characteristics of dialysis treatment", "Characteristics of participating centers", "COVID-19" ]
[ "Dialysis clinics filled out an online questionnaire available on the BSN website. It contained questions about sociodemographic, clinical, and therapeutic variables of patients on chronic dialysis (Supplement). The questionnaire was available from August 2020 to January 2021. Participation in the survey was voluntary, and all dialysis centers registered at BSN were invited to participate by email and by BSN media. After the initial invitation, reminders were emailed monthly to centers that had not filled the questionnaire. During the survey period, the chairs of the regional sections of the BSN were asked to contact the dialysis centers in their states and reiterate the importance of their participation.", "Data for each center were grouped rather than analyzed individually. For the 2020 survey, 235 of 834 active centers responded to the questionnaire, a response rate of 28%.\nFor national estimates of total number of patients and prevalence rate, the sample was expanded. We assumed that the units that did not respond to the questionnaire had the same number of patients as participants (mean of 173.6 patients per unit). Because this extrapolation can be imprecise, we used a variation of ± 5% in the obtained mean (164.9 to 182.3 patients per unit) for the prevalence calculations. Likewise, the mean number of new patients per unit was applied to units that did not report incidence rates. All other sociodemographic data and patient characteristics refer to the sample studied. Annual mortality and annual incidence of patients on dialysis were estimated from events in July 2020. For calculating prevalence and incidence rates, national and regional population data were obtained from the Brazilian Institute of Geography and Statistics (IBGE) estimates for July 2020. According to IBGE, the Brazilian population at that date was 211.75 million inhabitants\n3\n. Most data were presented descriptively, and results were compared with data from previous years. Information on COVID-19, such as incidence, hospitalizations, and lethality, was considered for the period from February 26th (date of the first case reported in the country) to July 31st, 2020. The diagnosis of COVID-19 required confirmation by real-time polymerase chain reaction (RT-PCR) of nasal/oropharyngeal specimens or serologic testing.", "The main calculations and estimates are shown in Table 1.", "In July 2020, there were 834 active chronic dialysis centers registered at BSN, a number 3.6% higher than in 2019. However, the percentage of participating centers decreased compared to the previous year (from 39 to 28%), with a slight difference in percentage of participation among regions (North and Midwest: 25%; South: 28%; Northeast and Southeast: 29%). The lower adherence resulted in a 25% decrease in patients whose data were used for the annual report (54,488 to 40,795).\nThe estimated total number of patients in July 2020 was 144,779 (a variation of ± 5% = 137,527 to 152,038), 3.6% higher than July 2019. The trend observed in recent years toward an increase in number of dialysis patients continued in 2020 (Figure 1). The prevalence rate of dialysis patients also continued to increase, from 665 in 2019 to 684 per million population (pmp) in 2020, consistent with the trend observed in recent years. When stratified by region, a significant decrease in prevalence rate was observed only in the North region (Figure 2); the numbers increased slightly in the other regions. 
The estimated number of new dialysis patients in 2020 was 44,264, and the overall incidence rate was 209 pmp, lower than in 2019 when it reached 218 pmp. The incidence rate ranged from 75 pmp in the North region to 227 pmp in the South. The estimated number of deaths for the entire year was 35,413. The annual crude mortality rate has been between 18 and 20% since 2016, and is projected to increase to 24.5% in 2020 (Figure 3).\n\nFigure 1Estimated number of patients on chronic dialysis per year.\n\n\nFigure 2Estimated prevalence of patients on dialysis by geographic region in Brazil, per million population.\n\n\nFigure 3Estimated annual crude mortality rate of dialysis patients.\n", "The most prevalent age group was between 45 and 64 years, representing 42.5% (Figure 4). The distribution by sex, 58% men and 42% women, remained stable, as did the percentage of main underlying diseases. Systemic arterial hypertension and diabetes mellitus accounted each for almost one third of all cases (Figure 5). The percentage of patients with hepatitis C continued to decrease, while the percentage of patients with hepatitis B and HIV remained stable (Figure 6). Regarding vascular access, 25% of hemodialysis (HD) patients used a central venous catheter. A decrease in the use of short-term catheters and prostheses was observed, while the use of long-term catheters increased (11%) (Figure 7). The estimated number of dialysis patients on the kidney transplant waiting list in 2020 was 33,239 (23%), similar to the previous year.\n\nFigure 4Distribution of patients according to age group.\n\n\nFigure 5Distribution of dialysis patients according to chronic kidney disease etiology.\n\n\nFigure 6Prevalence of patients with positive serology for hepatitis B and C and HIV viruses.\n\n\nFigure 7Distribution of vascular accesses used for hemodialysis.\n", "The distribution of patients by dialysis modality and payment source is shown in Table 2. Hemodialysis (HD) continued to be the treatment for most patients (92.6%) and 7.4% were treated with peritoneal dialysis (PD). Treatment was financed by the public health system for 81.6% of patients and by private health insurance for 18.4% of patients in the participating units.", "Of the 235 participating dialysis units, 71% were privately owned, 18% were philanthropic, and 10% were public. Among private clinics, international corporations managed 14.4% of participating units. Most centers described themselves as satellite (54%), and 46% were in-patient units. Fifty-six percent of participating units reported having PD as a treatment option. The national average number of patients per nephrologist was 27 and ranged from 21 in the North to 30 in the South.", "In the 234 dialysis centers between March and July, there were 2791 reported cases of COVID-19. The incidence rate of confirmed COVID-19 between February 26th and July 31st, 2020 was 684/10,000 dialysis patients. Confirmation was performed by real time polymerase chain reaction (RT-PCR) in 68.4%, by serologic testing in 26.3%, and by both methods in 6%. Of the total number of infected patients, 95.7% were on HD and 4.3% on PD. Nearly 52% of confirmed cases were hospitalized and, of these, 57.6% required treatment in intensive care units. Seven hundred and eighteen COVID-19 patients died. Case-fatality rate was 25.7% and mortality rate reached 176/10,000 patients. The estimated crude annual mortality rate attributed to COVID-19 was 4.2%. 
As a strategy to isolate suspected or confirmed cases of COVID-19, 76.1% of participating centers reported treatment in a separate room or space. The remainder (23.9%) chose transfer to a specific shift (Figure 8). The total percentage of infection among health professionals working in clinics was 21.9%. The percentages in physicians, nurses, and nursing technicians were 25.1, 24.0, and 20.8%, respectively. Death by COVID-19 was only reported in nursing technicians (0.1%).\n\nFigure 8COVID-19 mortality rate, case-fatality rate and strategy adopted to isolate cases.\n" ]
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Data collection", "Data analysis", "Calculations of estimates", "Results", "Estimated incidence, prevalence, and mortality rates", "Demographic and clinical characteristics", "Characteristics of dialysis treatment", "Characteristics of participating centers", "COVID-19", "Discussion" ]
[ "With the aim of obtaining and analyzing data on clinical and epidemiological aspects of patients undergoing chronic dialysis, in addition to information on dialysis therapy, the Brazilian Society of Nephrology (BSN) annually sponsors the Brazilian Dialysis Survey\n1,2\n. Since its initial implementation in 1999, the survey has been conducted nationwide and, in the last decade, it has been completed electronically by dialysis centers. The continuous survey is justified because it provides relevant information for the development of health policies and strategies aimed at improving care for individuals undergoing dialysis. In this study we are reporting data from the 2020 Brazilian Dialysis Survey, including information about the impact of the COVID-19 pandemic on dialysis clinic patients and staff.", "Data collection Dialysis clinics filled out an online questionnaire available on the BSN website. It contained questions about sociodemographic, clinical, and therapeutic variables of patients on chronic dialysis (Supplement). The questionnaire was available from August 2020 to January 2021. Participation in the survey was voluntary, and all dialysis centers registered at BSN were invited to participate by email and by BSN media. After the initial invitation, reminders were emailed monthly to centers that had not filled the questionnaire. During the survey period, the chairs of the regional sections of the BSN were asked to contact the dialysis centers in their states and reiterate the importance of their participation.\nDialysis clinics filled out an online questionnaire available on the BSN website. It contained questions about sociodemographic, clinical, and therapeutic variables of patients on chronic dialysis (Supplement). The questionnaire was available from August 2020 to January 2021. Participation in the survey was voluntary, and all dialysis centers registered at BSN were invited to participate by email and by BSN media. After the initial invitation, reminders were emailed monthly to centers that had not filled the questionnaire. During the survey period, the chairs of the regional sections of the BSN were asked to contact the dialysis centers in their states and reiterate the importance of their participation.\nData analysis Data for each center were grouped rather than analyzed individually. For the 2020 survey, 235 of 834 active centers responded to the questionnaire, a response rate of 28%.\nFor national estimates of total number of patients and prevalence rate, the sample was expanded. We assumed that the units that did not respond to the questionnaire had the same number of patients as participants (mean of 173.6 patients per unit). Because this extrapolation can be imprecise, we used a variation of ± 5% in the obtained mean (164.9 to 182.3 patients per unit) for the prevalence calculations. Likewise, the mean number of new patients per unit was applied to units that did not report incidence rates. All other sociodemographic data and patient characteristics refer to the sample studied. Annual mortality and annual incidence of patients on dialysis were estimated from events in July 2020. For calculating prevalence and incidence rates, national and regional population data were obtained from the Brazilian Institute of Geography and Statistics (IBGE) estimates for July 2020. According to IBGE, the Brazilian population at that date was 211.75 million inhabitants\n3\n. Most data were presented descriptively, and results were compared with data from previous years. 
Information on COVID-19, such as incidence, hospitalizations, and lethality, was considered for the period from February 26th (date of the first case reported in the country) to July 31st, 2020. The diagnosis of COVID-19 required confirmation by real-time polymerase chain reaction (RT-PCR) of nasal/oropharyngeal specimens or serologic testing.\nData for each center were grouped rather than analyzed individually. For the 2020 survey, 235 of 834 active centers responded to the questionnaire, a response rate of 28%.\nFor national estimates of total number of patients and prevalence rate, the sample was expanded. We assumed that the units that did not respond to the questionnaire had the same number of patients as participants (mean of 173.6 patients per unit). Because this extrapolation can be imprecise, we used a variation of ± 5% in the obtained mean (164.9 to 182.3 patients per unit) for the prevalence calculations. Likewise, the mean number of new patients per unit was applied to units that did not report incidence rates. All other sociodemographic data and patient characteristics refer to the sample studied. Annual mortality and annual incidence of patients on dialysis were estimated from events in July 2020. For calculating prevalence and incidence rates, national and regional population data were obtained from the Brazilian Institute of Geography and Statistics (IBGE) estimates for July 2020. According to IBGE, the Brazilian population at that date was 211.75 million inhabitants\n3\n. Most data were presented descriptively, and results were compared with data from previous years. Information on COVID-19, such as incidence, hospitalizations, and lethality, was considered for the period from February 26th (date of the first case reported in the country) to July 31st, 2020. The diagnosis of COVID-19 required confirmation by real-time polymerase chain reaction (RT-PCR) of nasal/oropharyngeal specimens or serologic testing.\nCalculations of estimates The main calculations and estimates are shown in Table 1.\nThe main calculations and estimates are shown in Table 1.", "Dialysis clinics filled out an online questionnaire available on the BSN website. It contained questions about sociodemographic, clinical, and therapeutic variables of patients on chronic dialysis (Supplement). The questionnaire was available from August 2020 to January 2021. Participation in the survey was voluntary, and all dialysis centers registered at BSN were invited to participate by email and by BSN media. After the initial invitation, reminders were emailed monthly to centers that had not filled the questionnaire. During the survey period, the chairs of the regional sections of the BSN were asked to contact the dialysis centers in their states and reiterate the importance of their participation.", "Data for each center were grouped rather than analyzed individually. For the 2020 survey, 235 of 834 active centers responded to the questionnaire, a response rate of 28%.\nFor national estimates of total number of patients and prevalence rate, the sample was expanded. We assumed that the units that did not respond to the questionnaire had the same number of patients as participants (mean of 173.6 patients per unit). Because this extrapolation can be imprecise, we used a variation of ± 5% in the obtained mean (164.9 to 182.3 patients per unit) for the prevalence calculations. Likewise, the mean number of new patients per unit was applied to units that did not report incidence rates. 
All other sociodemographic data and patient characteristics refer to the sample studied. Annual mortality and annual incidence of patients on dialysis were estimated from events in July 2020. For calculating prevalence and incidence rates, national and regional population data were obtained from the Brazilian Institute of Geography and Statistics (IBGE) estimates for July 2020. According to IBGE, the Brazilian population at that date was 211.75 million inhabitants\n3\n. Most data were presented descriptively, and results were compared with data from previous years. Information on COVID-19, such as incidence, hospitalizations, and lethality, was considered for the period from February 26th (date of the first case reported in the country) to July 31st, 2020. The diagnosis of COVID-19 required confirmation by real-time polymerase chain reaction (RT-PCR) of nasal/oropharyngeal specimens or serologic testing.", "The main calculations and estimates are shown in Table 1.", "Estimated incidence, prevalence, and mortality rates In July 2020, there were 834 active chronic dialysis centers registered at BSN, a number 3.6% higher than in 2019. However, the percentage of participating centers decreased compared to the previous year (from 39 to 28%), with a slight difference in percentage of participation among regions (North and Midwest: 25%; South: 28%; Northeast and Southeast: 29%). The lower adherence resulted in a 25% decrease in patients whose data were used for the annual report (54,488 to 40,795).\nThe estimated total number of patients in July 2020 was 144,779 (a variation of ± 5% = 137,527 to 152,038), 3.6% higher than July 2019. The trend observed in recent years toward an increase in number of dialysis patients continued in 2020 (Figure 1). The prevalence rate of dialysis patients also continued to increase, from 665 in 2019 to 684 per million population (pmp) in 2020, consistent with the trend observed in recent years. When stratified by region, a significant decrease in prevalence rate was observed only in the North region (Figure 2); the numbers increased slightly in the other regions. The estimated number of new dialysis patients in 2020 was 44,264, and the overall incidence rate was 209 pmp, lower than in 2019 when it reached 218 pmp. The incidence rate ranged from 75 pmp in the North region to 227 pmp in the South. The estimated number of deaths for the entire year was 35,413. The annual crude mortality rate has been between 18 and 20% since 2016, and is projected to increase to 24.5% in 2020 (Figure 3).\n\nFigure 1Estimated number of patients on chronic dialysis per year.\n\n\nFigure 2Estimated prevalence of patients on dialysis by geographic region in Brazil, per million population.\n\n\nFigure 3Estimated annual crude mortality rate of dialysis patients.\n\nIn July 2020, there were 834 active chronic dialysis centers registered at BSN, a number 3.6% higher than in 2019. However, the percentage of participating centers decreased compared to the previous year (from 39 to 28%), with a slight difference in percentage of participation among regions (North and Midwest: 25%; South: 28%; Northeast and Southeast: 29%). The lower adherence resulted in a 25% decrease in patients whose data were used for the annual report (54,488 to 40,795).\nThe estimated total number of patients in July 2020 was 144,779 (a variation of ± 5% = 137,527 to 152,038), 3.6% higher than July 2019. The trend observed in recent years toward an increase in number of dialysis patients continued in 2020 (Figure 1). 
The prevalence rate of dialysis patients also continued to increase, from 665 in 2019 to 684 per million population (pmp) in 2020, consistent with the trend observed in recent years. When stratified by region, a significant decrease in prevalence rate was observed only in the North region (Figure 2); the numbers increased slightly in the other regions. The estimated number of new dialysis patients in 2020 was 44,264, and the overall incidence rate was 209 pmp, lower than in 2019 when it reached 218 pmp. The incidence rate ranged from 75 pmp in the North region to 227 pmp in the South. The estimated number of deaths for the entire year was 35,413. The annual crude mortality rate has been between 18 and 20% since 2016, and is projected to increase to 24.5% in 2020 (Figure 3).\n\nFigure 1Estimated number of patients on chronic dialysis per year.\n\n\nFigure 2Estimated prevalence of patients on dialysis by geographic region in Brazil, per million population.\n\n\nFigure 3Estimated annual crude mortality rate of dialysis patients.\n\nDemographic and clinical characteristics The most prevalent age group was between 45 and 64 years, representing 42.5% (Figure 4). The distribution by sex, 58% men and 42% women, remained stable, as did the percentage of main underlying diseases. Systemic arterial hypertension and diabetes mellitus accounted each for almost one third of all cases (Figure 5). The percentage of patients with hepatitis C continued to decrease, while the percentage of patients with hepatitis B and HIV remained stable (Figure 6). Regarding vascular access, 25% of hemodialysis (HD) patients used a central venous catheter. A decrease in the use of short-term catheters and prostheses was observed, while the use of long-term catheters increased (11%) (Figure 7). The estimated number of dialysis patients on the kidney transplant waiting list in 2020 was 33,239 (23%), similar to the previous year.\n\nFigure 4Distribution of patients according to age group.\n\n\nFigure 5Distribution of dialysis patients according to chronic kidney disease etiology.\n\n\nFigure 6Prevalence of patients with positive serology for hepatitis B and C and HIV viruses.\n\n\nFigure 7Distribution of vascular accesses used for hemodialysis.\n\nThe most prevalent age group was between 45 and 64 years, representing 42.5% (Figure 4). The distribution by sex, 58% men and 42% women, remained stable, as did the percentage of main underlying diseases. Systemic arterial hypertension and diabetes mellitus accounted each for almost one third of all cases (Figure 5). The percentage of patients with hepatitis C continued to decrease, while the percentage of patients with hepatitis B and HIV remained stable (Figure 6). Regarding vascular access, 25% of hemodialysis (HD) patients used a central venous catheter. A decrease in the use of short-term catheters and prostheses was observed, while the use of long-term catheters increased (11%) (Figure 7). The estimated number of dialysis patients on the kidney transplant waiting list in 2020 was 33,239 (23%), similar to the previous year.\n\nFigure 4Distribution of patients according to age group.\n\n\nFigure 5Distribution of dialysis patients according to chronic kidney disease etiology.\n\n\nFigure 6Prevalence of patients with positive serology for hepatitis B and C and HIV viruses.\n\n\nFigure 7Distribution of vascular accesses used for hemodialysis.\n\nCharacteristics of dialysis treatment The distribution of patients by dialysis modality and payment source is shown in Table 2. 
Hemodialysis (HD) continued to be the treatment for most patients (92.6%) and 7.4% were treated with peritoneal dialysis (PD). Treatment was financed by the public health system for 81.6% of patients and by private health insurance for 18.4% of patients in the participating units.\nThe distribution of patients by dialysis modality and payment source is shown in Table 2. Hemodialysis (HD) continued to be the treatment for most patients (92.6%) and 7.4% were treated with peritoneal dialysis (PD). Treatment was financed by the public health system for 81.6% of patients and by private health insurance for 18.4% of patients in the participating units.\nCharacteristics of participating centers Of the 235 participating dialysis units, 71% were privately owned, 18% were philanthropic, and 10% were public. Among private clinics, international corporations managed 14.4% of participating units. Most centers described themselves as satellite (54%), and 46% were in-patient units. Fifty-six percent of participating units reported having PD as a treatment option. The national average number of patients per nephrologist was 27 and ranged from 21 in the North to 30 in the South.\nOf the 235 participating dialysis units, 71% were privately owned, 18% were philanthropic, and 10% were public. Among private clinics, international corporations managed 14.4% of participating units. Most centers described themselves as satellite (54%), and 46% were in-patient units. Fifty-six percent of participating units reported having PD as a treatment option. The national average number of patients per nephrologist was 27 and ranged from 21 in the North to 30 in the South.\nCOVID-19 In the 234 dialysis centers between March and July, there were 2791 reported cases of COVID-19. The incidence rate of confirmed COVID-19 between February 26th and July 31st, 2020 was 684/10,000 dialysis patients. Confirmation was performed by real time polymerase chain reaction (RT-PCR) in 68.4%, by serologic testing in 26.3%, and by both methods in 6%. Of the total number of infected patients, 95.7% were on HD and 4.3% on PD. Nearly 52% of confirmed cases were hospitalized and, of these, 57.6% required treatment in intensive care units. Seven hundred and eighteen COVID-19 patients died. Case-fatality rate was 25.7% and mortality rate reached 176/10,000 patients. The estimated crude annual mortality rate attributed to COVID-19 was 4.2%. As a strategy to isolate suspected or confirmed cases of COVID-19, 76.1% of participating centers reported treatment in a separate room or space. The remainder (23.9%) chose transfer to a specific shift (Figure 8). The total percentage of infection among health professionals working in clinics was 21.9%. The percentages in physicians, nurses, and nursing technicians were 25.1, 24.0, and 20.8%, respectively. Death by COVID-19 was only reported in nursing technicians (0.1%).\n\nFigure 8COVID-19 mortality rate, case-fatality rate and strategy adopted to isolate cases.\n\nIn the 234 dialysis centers between March and July, there were 2791 reported cases of COVID-19. The incidence rate of confirmed COVID-19 between February 26th and July 31st, 2020 was 684/10,000 dialysis patients. Confirmation was performed by real time polymerase chain reaction (RT-PCR) in 68.4%, by serologic testing in 26.3%, and by both methods in 6%. Of the total number of infected patients, 95.7% were on HD and 4.3% on PD. Nearly 52% of confirmed cases were hospitalized and, of these, 57.6% required treatment in intensive care units. 
Seven hundred and eighteen COVID-19 patients died. Case-fatality rate was 25.7% and mortality rate reached 176/10,000 patients. The estimated crude annual mortality rate attributed to COVID-19 was 4.2%. As a strategy to isolate suspected or confirmed cases of COVID-19, 76.1% of participating centers reported treatment in a separate room or space. The remainder (23.9%) chose transfer to a specific shift (Figure 8). The total percentage of infection among health professionals working in clinics was 21.9%. The percentages in physicians, nurses, and nursing technicians were 25.1, 24.0, and 20.8%, respectively. Death by COVID-19 was only reported in nursing technicians (0.1%).\n\nFigure 8COVID-19 mortality rate, case-fatality rate and strategy adopted to isolate cases.\n", "In July 2020, there were 834 active chronic dialysis centers registered at BSN, a number 3.6% higher than in 2019. However, the percentage of participating centers decreased compared to the previous year (from 39 to 28%), with a slight difference in percentage of participation among regions (North and Midwest: 25%; South: 28%; Northeast and Southeast: 29%). The lower adherence resulted in a 25% decrease in patients whose data were used for the annual report (54,488 to 40,795).\nThe estimated total number of patients in July 2020 was 144,779 (a variation of ± 5% = 137,527 to 152,038), 3.6% higher than July 2019. The trend observed in recent years toward an increase in number of dialysis patients continued in 2020 (Figure 1). The prevalence rate of dialysis patients also continued to increase, from 665 in 2019 to 684 per million population (pmp) in 2020, consistent with the trend observed in recent years. When stratified by region, a significant decrease in prevalence rate was observed only in the North region (Figure 2); the numbers increased slightly in the other regions. The estimated number of new dialysis patients in 2020 was 44,264, and the overall incidence rate was 209 pmp, lower than in 2019 when it reached 218 pmp. The incidence rate ranged from 75 pmp in the North region to 227 pmp in the South. The estimated number of deaths for the entire year was 35,413. The annual crude mortality rate has been between 18 and 20% since 2016, and is projected to increase to 24.5% in 2020 (Figure 3).\n\nFigure 1Estimated number of patients on chronic dialysis per year.\n\n\nFigure 2Estimated prevalence of patients on dialysis by geographic region in Brazil, per million population.\n\n\nFigure 3Estimated annual crude mortality rate of dialysis patients.\n", "The most prevalent age group was between 45 and 64 years, representing 42.5% (Figure 4). The distribution by sex, 58% men and 42% women, remained stable, as did the percentage of main underlying diseases. Systemic arterial hypertension and diabetes mellitus accounted each for almost one third of all cases (Figure 5). The percentage of patients with hepatitis C continued to decrease, while the percentage of patients with hepatitis B and HIV remained stable (Figure 6). Regarding vascular access, 25% of hemodialysis (HD) patients used a central venous catheter. A decrease in the use of short-term catheters and prostheses was observed, while the use of long-term catheters increased (11%) (Figure 7). 
The estimated number of dialysis patients on the kidney transplant waiting list in 2020 was 33,239 (23%), similar to the previous year.\n\nFigure 4. Distribution of patients according to age group.\n\nFigure 5. Distribution of dialysis patients according to chronic kidney disease etiology.\n\nFigure 6. Prevalence of patients with positive serology for hepatitis B and C and HIV viruses.\n\nFigure 7. Distribution of vascular accesses used for hemodialysis.\n",
"The distribution of patients by dialysis modality and payment source is shown in Table 2. Hemodialysis (HD) continued to be the treatment for most patients (92.6%), and 7.4% were treated with peritoneal dialysis (PD). Treatment was financed by the public health system for 81.6% of patients and by private health insurance for 18.4% of patients in the participating units.",
"Of the 235 participating dialysis units, 71% were privately owned, 18% were philanthropic, and 10% were public. Among private clinics, international corporations managed 14.4% of participating units. Most centers described themselves as satellite units (54%), and 46% were in-patient units. Fifty-six percent of participating units reported offering PD as a treatment option. The national average number of patients per nephrologist was 27, ranging from 21 in the North to 30 in the South.",
"Between March and July, the 234 centers that provided COVID-19 data reported 2,791 cases. The incidence rate of confirmed COVID-19 between February 26th and July 31st, 2020 was 684/10,000 dialysis patients. Confirmation was performed by real-time polymerase chain reaction (RT-PCR) in 68.4% of cases, by serologic testing in 26.3%, and by both methods in 6%. Of the total number of infected patients, 95.7% were on HD and 4.3% on PD. Nearly 52% of confirmed cases were hospitalized and, of these, 57.6% required treatment in intensive care units. Seven hundred and eighteen COVID-19 patients died. The case-fatality rate was 25.7%, and the mortality rate reached 176/10,000 patients. The estimated crude annual mortality rate attributed to COVID-19 was 4.2%. As a strategy to isolate suspected or confirmed cases of COVID-19, 76.1% of participating centers reported treatment in a separate room or space; the remainder (23.9%) chose transfer to a specific shift (Figure 8). The total percentage of infection among health professionals working in clinics was 21.9%. The percentages in physicians, nurses, and nursing technicians were 25.1, 24.0, and 20.8%, respectively. Deaths from COVID-19 were reported only among nursing technicians (0.1%).\n\nFigure 8. COVID-19 mortality rate, case-fatality rate, and strategy adopted to isolate cases.\n",
"In the last two decades, the Brazilian Dialysis Survey has portrayed the panorama of dialysis treatment, providing data and analyses that contribute to public policies and strategies to improve this therapy in our country. In general, the trends observed in recent years were maintained in 2020. As a novelty, we report information on the impact of the COVID-19 pandemic on dialysis centers.\nThere was a decrease in the percentage of dialysis units participating in the 2020 survey compared to the previous year, from 39 to 28%. Although we did not investigate the causes, we believe that the difficulties the COVID-19 pandemic imposed on dialysis centers may have contributed to this lower voluntary participation. 
Despite that, participation was around 30% and was similar across the five macro-regions, which increases the likelihood that the sample is nationally representative.\nThe upward trend in the total number of patients (3.6%) and in prevalence (2.9%) followed the pattern observed in recent years. This may be explained by the increase in population longevity\n3\n and by improvements in the quality of care and easier access to dialysis for patients with chronic kidney disease, although the latter should be confirmed by further investigations. Compared with the Latin American Society of Nephrology (SLAHN) 2018 registry\n4\n, our prevalence of people on dialysis (684 pmp) was slightly higher than the average of the included countries (617 pmp). Our numbers were substantially lower than those of the United States in 2017 (1538 pmp)\n5\n and higher than those of the European registry in 2018 (556 pmp)\n6\n.\nThe 2020 incidence rate (209 pmp) was slightly lower than the national estimate for 2019 (218 pmp), higher than the 2018 rates of Latin America (159 pmp)\n4\n and Europe (122 pmp)\n6\n, and lower than the 2017 rate of the United States (370 pmp)\n5\n.\nWe found a significant increase in the crude mortality rate, which had been close to 20% in recent years but reached 24.5% in 2020. The estimated annual number of deaths among dialysis patients jumped from 25,400 to 35,400. In addition to the increase in age and the greater burden of comorbidities among patients in recent years, the mortality associated with COVID-19 in dialysis\n7\n may partially account for this finding. According to our estimate, the annual crude mortality rate attributed to COVID-19 was 4.2%, corresponding to 17.3% of the overall annual crude mortality rate (estimated at 25.7%). It is important to highlight that July and August were the most critical months of 2020 for the COVID-19 pandemic in hemodialysis\n8\n. Therefore, it is conceivable that the extrapolation of July mortality to 12 months led to an overestimation of the annual crude mortality rate for 2020. Regarding the distribution by age group, the upward trend in the prevalence of patients over 45 years of age continues, with those over 75 years of age accounting for 11.8% of the total, slightly more than half of the prevalence observed in developed countries\n5,6,9\n. The prevalence of male dialysis patients has remained constant at 58%.\nA trend toward stability in the underlying diagnoses was observed, with hypertension and diabetes mellitus (DM) accounting for 32 and 31% of cases, respectively. In the US in 2017, hypertension accounted for 30% and DM for 45% of the baseline diagnoses of the dialysis population\n5\n.\nThe decrease in the percentage of patients with hepatitis C continued in 2020, falling below 3% for the first time. This result is a consequence of preventive measures such as the reduction of blood transfusions and the prohibition of reusing dialyzers and lines for patients with positive serology, as well as improved access to direct-acting antiviral agents, which allow high cure rates\n10\n.\nAs in the previous year, approximately a quarter of HD patients (24.7%) used a central venous catheter as vascular access (7.6% short-term and 17.1% long-term). 
In the United States, the prevalence of central venous catheters has remained at about one-fifth of all patients on HD therapy\n5\n. Among catheters used in our survey, the share of long-term devices increased from 62% in 2019 to 69% in 2020.\nAlthough more than half of the participating centers offer PD as a treatment option (56%), only 7.4% of dialysis patients were treated with this modality of renal replacement therapy. The main reason seems to be the model proposed by our public health system, which is not economically viable for most clinics\n11\n. In some countries, strategies that have increased the use of PD include policies and incentives that favor this modality, low-cost production and supply of materials, and appropriate training for nephrology teams aimed at increasing the use of the therapy and continuously reducing failure rates\n12\n.\nWe report for the first time that multinational companies operate about 15% of private dialysis centers. We emphasize, however, that this share seems to have been increasing in recent years and that the accuracy of our estimate may be limited by sampling and by the low response rate of these centers.\nCOVID-19 information, limited to the period from February 26 to July 31, 2020, yielded an incidence rate of 684/10,000 patients. We emphasize that this rate was probably underestimated because it requires a positive diagnostic test, which excludes many asymptomatic, untested cases. The lack of testing in the population, especially in the initial period of the pandemic, also influenced the result obtained. The high need for hospitalization of positive cases in intensive care units and the high lethality confirm the national and global results already reported\n7,13-16\n. Nevertheless, we believe the high lethality rate reflects the inclusion of more severe patients in the sample and, possibly, the lack of experience in case management and the non-standardized treatment of patients in the first months of the pandemic.\nWe highlight as limitations the electronic data collection through voluntary completion, the aggregation of patient data by dialysis center, and the lack of response validation. Because only about 30% of active dialysis centers participated in the study, the methodology used for the national estimates of prevalence and incidence rates has limited accuracy, and caution should be used in interpreting the data. Our death rate estimates should also be interpreted with caution: the 2020 peak of COVID-19 cases in hemodialysis in July and August may have inflated our estimate of the overall annual crude mortality rate because July figures were extrapolated to the whole year.\nIn conclusion, the 2020 survey confirmed a continuous increase in the prevalence rate over the years and showed a slight decrease in the incidence of dialysis patients in our country. The low prevalence of PD as dialysis therapy continues, although more than half of the centers offer this modality. The trend toward a progressive increase in the use of long-term central venous catheters was maintained. Also, the high lethality of COVID-19 on dialysis had an unfavorable impact on the crude mortality rate in this population, and the influence of the COVID-19 pandemic on this population warrants continuous monitoring. Regular national assessment of patients on chronic dialysis is paramount for monitoring the quantitative and qualitative aspects of chronic kidney disease care in the country." ]
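The Methods records above describe the sample-expansion arithmetic only in prose. The Python sketch below reproduces the main national estimates from the figures reported in the survey. The survey does not document its rounding conventions; treating the ± 5% band as the rounded per-unit means (164.9 and 182.3) applied to all 834 units is an assumption of this sketch, one that happens to reproduce the published interval.

```python
# Minimal sketch of the sample-expansion estimates described under
# "Data analysis". All inputs are figures reported in the survey;
# the rounding conventions are assumptions, not documented choices.

responding_units = 235            # centers that answered the questionnaire
active_units = 834                # all active centers registered at BSN
reported_patients = 40_795        # patients treated in responding centers
new_patients_estimate = 44_264    # reported national estimate of incident patients
brazil_population = 211_750_000   # IBGE estimate for July 2020

# Mean patients per responding unit (reported as 173.6).
mean_per_unit = reported_patients / responding_units

# Non-responding units are assumed to treat the same mean number of patients.
national_total = reported_patients + (active_units - responding_units) * mean_per_unit

# The +/- 5% band: applying the rounded per-unit bounds (164.9 and 182.3)
# to all 834 units reproduces the published interval of 137,527 to 152,038.
band_low = active_units * 164.9
band_high = active_units * 182.3

# Rates per million population (pmp).
prevalence_pmp = national_total / brazil_population * 1_000_000
incidence_pmp = new_patients_estimate / brazil_population * 1_000_000

print(f"mean patients per unit: {mean_per_unit:.1f}")      # 173.6
print(f"estimated total: {national_total:,.0f}")           # ~144,779
print(f"±5% band: {band_low:,.0f} to {band_high:,.0f}")    # 137,527 to 152,038
print(f"prevalence: {prevalence_pmp:.0f} pmp")             # 684
print(f"incidence: {incidence_pmp:.0f} pmp")               # 209
```

Run as-is, the script recovers the survey's headline figures (144,779 patients, 684 pmp prevalence, 209 pmp incidence) to within rounding, which supports the reading of the expansion method given above.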
[ "intro", "methods", null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "Renal Dialysis", "Peritoneal dialysis", "Peritoneal Dialysis", "Epidemiology", "COVID-19", "Diálise Renal", "Diálise Peritoneal", "Epidemiologia", "COVID-19" ]
Introduction: With the aim of obtaining and analyzing data on clinical and epidemiological aspects of patients undergoing chronic dialysis, in addition to information on dialysis therapy, the Brazilian Society of Nephrology (BSN) annually sponsors the Brazilian Dialysis Survey 1,2 . Since its initial implementation in 1999, the survey has been conducted nationwide and, in the last decade, it has been completed electronically by dialysis centers. The continuous survey is justified because it provides relevant information for the development of health policies and strategies aimed at improving care for individuals undergoing dialysis. In this study, we report data from the 2020 Brazilian Dialysis Survey, including information about the impact of the COVID-19 pandemic on dialysis clinic patients and staff. Methods: Data collection: Dialysis clinics filled out an online questionnaire available on the BSN website. It contained questions about sociodemographic, clinical, and therapeutic variables of patients on chronic dialysis (Supplement). The questionnaire was available from August 2020 to January 2021. Participation in the survey was voluntary, and all dialysis centers registered at BSN were invited to participate by email and through BSN media. After the initial invitation, reminders were emailed monthly to centers that had not completed the questionnaire. During the survey period, the chairs of the regional sections of the BSN were asked to contact the dialysis centers in their states and reiterate the importance of their participation. Data analysis: Data for each center were grouped rather than analyzed individually. For the 2020 survey, 235 of 834 active centers responded to the questionnaire, a response rate of 28%. For national estimates of the total number of patients and the prevalence rate, the sample was expanded. We assumed that the units that did not respond to the questionnaire had the same mean number of patients as the participating units (173.6 patients per unit). Because this extrapolation can be imprecise, we applied a variation of ± 5% to the obtained mean (164.9 to 182.3 patients per unit) for the prevalence calculations. Likewise, the mean number of new patients per unit was applied to units that did not report incidence data. All other sociodemographic data and patient characteristics refer to the sample studied. Annual mortality and annual incidence of patients on dialysis were estimated from events in July 2020. For calculating prevalence and incidence rates, national and regional population data were obtained from the Brazilian Institute of Geography and Statistics (IBGE) estimates for July 2020. According to IBGE, the Brazilian population at that date was 211.75 million inhabitants 3 . Most data were presented descriptively, and results were compared with data from previous years. Information on COVID-19, such as incidence, hospitalizations, and lethality, was considered for the period from February 26th (the date of the first case reported in the country) to July 31st, 2020. The diagnosis of COVID-19 required confirmation by real-time polymerase chain reaction (RT-PCR) of nasal/oropharyngeal specimens or by serologic testing. Calculations of estimates: The main calculations and estimates are shown in Table 1. Results: Estimated incidence, prevalence, and mortality rates: In July 2020, there were 834 active chronic dialysis centers registered at BSN, a number 3.6% higher than in 2019. However, the percentage of participating centers decreased compared to the previous year (from 39 to 28%), with a slight difference in the percentage of participation among regions (North and Midwest: 25%; South: 28%; Northeast and Southeast: 29%). The lower adherence resulted in a 25% decrease in the number of patients whose data were used for the annual report (54,488 to 40,795). The estimated total number of patients in July 2020 was 144,779 (± 5% variation: 137,527 to 152,038), 3.6% higher than in July 2019. The trend observed in recent years toward an increase in the number of dialysis patients continued in 2020 (Figure 1). The prevalence rate of dialysis patients also continued to increase, from 665 per million population (pmp) in 2019 to 684 pmp in 2020, consistent with the trend observed in recent years. When stratified by region, a significant decrease in prevalence rate was observed only in the North region (Figure 2); the numbers increased slightly in the other regions. The estimated number of new dialysis patients in 2020 was 44,264, and the overall incidence rate was 209 pmp, lower than in 2019, when it reached 218 pmp. The incidence rate ranged from 75 pmp in the North region to 227 pmp in the South. The estimated number of deaths for the entire year was 35,413. The annual crude mortality rate had been between 18 and 20% since 2016 and is projected to increase to 24.5% in 2020 (Figure 3). Figure 1. Estimated number of patients on chronic dialysis per year. Figure 2. Estimated prevalence of patients on dialysis by geographic region in Brazil, per million population. Figure 3. Estimated annual crude mortality rate of dialysis patients. Demographic and clinical characteristics: The most prevalent age group was 45 to 64 years, representing 42.5% of patients (Figure 4). The distribution by sex, 58% men and 42% women, remained stable, as did the percentages of the main underlying diseases. Systemic arterial hypertension and diabetes mellitus each accounted for almost one third of all cases (Figure 5). The percentage of patients with hepatitis C continued to decrease, while the percentages of patients with hepatitis B and HIV remained stable (Figure 6). Regarding vascular access, 25% of hemodialysis (HD) patients used a central venous catheter. A decrease in the use of short-term catheters and grafts (prostheses) was observed, while the use of long-term catheters increased by 11% (Figure 7). The estimated number of dialysis patients on the kidney transplant waiting list in 2020 was 33,239 (23%), similar to the previous year. Figure 4. Distribution of patients according to age group. Figure 5. Distribution of dialysis patients according to chronic kidney disease etiology. Figure 6. Prevalence of patients with positive serology for hepatitis B and C and HIV viruses. Figure 7. Distribution of vascular accesses used for hemodialysis. Characteristics of dialysis treatment: The distribution of patients by dialysis modality and payment source is shown in Table 2. Hemodialysis (HD) continued to be the treatment for most patients (92.6%), and 7.4% were treated with peritoneal dialysis (PD). Treatment was financed by the public health system for 81.6% of patients and by private health insurance for 18.4% of patients in the participating units. Characteristics of participating centers: Of the 235 participating dialysis units, 71% were privately owned, 18% were philanthropic, and 10% were public. Among private clinics, international corporations managed 14.4% of participating units. Most centers described themselves as satellite units (54%), and 46% were in-patient units. Fifty-six percent of participating units reported offering PD as a treatment option. The national average number of patients per nephrologist was 27, ranging from 21 in the North to 30 in the South. COVID-19: Between March and July, the 234 centers that provided COVID-19 data reported 2,791 cases (a worked sketch of these rate calculations follows this article text). The incidence rate of confirmed COVID-19 between February 26th and July 31st, 2020 was 684/10,000 dialysis patients. Confirmation was performed by real-time polymerase chain reaction (RT-PCR) in 68.4% of cases, by serologic testing in 26.3%, and by both methods in 6%. Of the total number of infected patients, 95.7% were on HD and 4.3% on PD. Nearly 52% of confirmed cases were hospitalized and, of these, 57.6% required treatment in intensive care units. Seven hundred and eighteen COVID-19 patients died. The case-fatality rate was 25.7%, and the mortality rate reached 176/10,000 patients. The estimated crude annual mortality rate attributed to COVID-19 was 4.2%. As a strategy to isolate suspected or confirmed cases of COVID-19, 76.1% of participating centers reported treatment in a separate room or space; the remainder (23.9%) chose transfer to a specific shift (Figure 8). The total percentage of infection among health professionals working in clinics was 21.9%. The percentages in physicians, nurses, and nursing technicians were 25.1, 24.0, and 20.8%, respectively. Deaths from COVID-19 were reported only among nursing technicians (0.1%). Figure 8. COVID-19 mortality rate, case-fatality rate, and strategy adopted to isolate cases. Discussion: In the last two decades, the Brazilian Dialysis Survey has portrayed the panorama of dialysis treatment, providing data and analyses that contribute to public policies and strategies to improve this therapy in our country. In general, the trends observed in recent years were maintained in 2020. As a novelty, we report information on the impact of the COVID-19 pandemic on dialysis centers. There was a decrease in the percentage of dialysis units participating in the 2020 survey compared to the previous year, from 39 to 28%. Although we did not investigate the causes, we believe that the difficulties the COVID-19 pandemic imposed on dialysis centers may have contributed to this lower voluntary participation. 
Despite that, participation was around 30% and was similar across the five macro-regions, which increases the likelihood that the sample is nationally representative. The upward trend in the total number of patients (3.6%) and in prevalence (2.9%) followed the pattern observed in recent years. This may be explained by the increase in population longevity 3 and by improvements in the quality of care and easier access to dialysis for patients with chronic kidney disease, although the latter should be confirmed by further investigations. Compared with the Latin American Society of Nephrology (SLAHN) 2018 registry 4 , our prevalence of people on dialysis (684 pmp) was slightly higher than the average of the included countries (617 pmp). Our numbers were substantially lower than those of the United States in 2017 (1538 pmp) 5 and higher than those of the European registry in 2018 (556 pmp) 6 . The 2020 incidence rate (209 pmp) was slightly lower than the national estimate for 2019 (218 pmp), higher than the 2018 rates of Latin America (159 pmp) 4 and Europe (122 pmp) 6 , and lower than the 2017 rate of the United States (370 pmp) 5 . We found a significant increase in the crude mortality rate, which had been close to 20% in recent years but reached 24.5% in 2020. The estimated annual number of deaths among dialysis patients jumped from 25,400 to 35,400. In addition to the increase in age and the greater burden of comorbidities among patients in recent years, the mortality associated with COVID-19 in dialysis 7 may partially account for this finding. According to our estimate, the annual crude mortality rate attributed to COVID-19 was 4.2%, corresponding to 17.3% of the overall annual crude mortality rate (estimated at 25.7%). It is important to highlight that July and August were the most critical months of 2020 for the COVID-19 pandemic in hemodialysis 8 . Therefore, it is conceivable that the extrapolation of July mortality to 12 months led to an overestimation of the annual crude mortality rate for 2020. Regarding the distribution by age group, the upward trend in the prevalence of patients over 45 years of age continues, with those over 75 years of age accounting for 11.8% of the total, slightly more than half of the prevalence observed in developed countries 5,6,9 . The prevalence of male dialysis patients has remained constant at 58%. A trend toward stability in the underlying diagnoses was observed, with hypertension and diabetes mellitus (DM) accounting for 32 and 31% of cases, respectively. In the US in 2017, hypertension accounted for 30% and DM for 45% of the baseline diagnoses of the dialysis population 5 . The decrease in the percentage of patients with hepatitis C continued in 2020, falling below 3% for the first time. This result is a consequence of preventive measures such as the reduction of blood transfusions and the prohibition of reusing dialyzers and lines for patients with positive serology, as well as improved access to direct-acting antiviral agents, which allow high cure rates 10 . As in the previous year, approximately a quarter of HD patients (24.7%) used a central venous catheter as vascular access (7.6% short-term and 17.1% long-term). In the United States, central venous catheters have remained at about one-fifth of all patients on HD therapy. 5 Among catheters used in our survey, the share of long-term devices increased from 62% in 2019 to 69% in 2020. 
Although more than half of the participating centers offer PD as a treatment option (56%), only 7.4% of dialysis patients were treated with this modality of renal replacement therapy. The main reason seems to be the model proposed by our public health system, which is not economically viable for most clinics 11 . In some countries, strategies that have increased the use of PD include policies and incentives that favor this modality, low-cost production and supply of materials, and appropriate training for nephrology teams aimed at increasing the use of the therapy and continuously reducing failure rates 12 . We report for the first time that multinational companies operate about 15% of private dialysis centers. We emphasize, however, that this share seems to have been increasing in recent years and that the accuracy of our estimate may be limited by sampling and by the low response rate of these centers. COVID-19 information, limited to the period from February 26 to July 31, 2020, yielded an incidence rate of 684/10,000 patients. We emphasize that this rate was probably underestimated because it requires a positive diagnostic test, which excludes many asymptomatic, untested cases. The lack of testing in the population, especially in the initial period of the pandemic, also influenced the result obtained. The high need for hospitalization of positive cases in intensive care units and the high lethality confirm the national and global results already reported 7,13-16 . Nevertheless, we believe the high lethality rate reflects the inclusion of more severe patients in the sample and, possibly, the lack of experience in case management and the non-standardized treatment of patients in the first months of the pandemic. We highlight as limitations the electronic data collection through voluntary completion, the aggregation of patient data by dialysis center, and the lack of response validation. Because only about 30% of active dialysis centers participated in the study, the methodology used for the national estimates of prevalence and incidence rates has limited accuracy, and caution should be used in interpreting the data. Our death rate estimates should also be interpreted with caution: the 2020 peak of COVID-19 cases in hemodialysis in July and August may have inflated our estimate of the overall annual crude mortality rate because July figures were extrapolated to the whole year. In conclusion, the 2020 survey confirmed a continuous increase in the prevalence rate over the years and showed a slight decrease in the incidence of dialysis patients in our country. The low prevalence of PD as dialysis therapy continues, although more than half of the centers offer this modality. The trend toward a progressive increase in the use of long-term central venous catheters was maintained. Also, the high lethality of COVID-19 on dialysis had an unfavorable impact on the crude mortality rate in this population, and the influence of the COVID-19 pandemic on this population warrants continuous monitoring. Regular national assessment of patients on chronic dialysis is paramount for monitoring the quantitative and qualitative aspects of chronic kidney disease care in the country.
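The COVID-19 rates quoted in the article text are internally consistent with a denominator of 40,795 patients (those treated in the responding centers), although the survey does not state the denominator explicitly. The Python sketch below shows the arithmetic under that assumption; the roughly five-month annualization window behind the 4.2% crude COVID-19 mortality is likewise an assumption read off the February 26 to July 31 reporting period.

```python
# Sketch of the COVID-19 rate arithmetic for February 26 - July 31, 2020.
# The denominator (patients in responding centers) and the length of the
# annualization window are assumptions consistent with the reported rates.

cases = 2_791                # confirmed COVID-19 cases in dialysis patients
deaths = 718                 # COVID-19 deaths among those cases
patients_in_sample = 40_795  # patients treated in the responding centers
window_months = 5            # approx. Feb 26 - Jul 31 observation window

incidence_per_10k = cases / patients_in_sample * 10_000
mortality_per_10k = deaths / patients_in_sample * 10_000
case_fatality_pct = deaths / cases * 100

# Annualizing the period mortality over 12 months yields the estimated
# crude annual mortality attributed to COVID-19.
covid_annual_mortality_pct = deaths / patients_in_sample * (12 / window_months) * 100

print(f"incidence: {incidence_per_10k:.0f}/10,000 patients")   # 684
print(f"mortality: {mortality_per_10k:.0f}/10,000 patients")   # 176
print(f"case fatality: {case_fatality_pct:.1f}%")              # 25.7
print(f"annualized COVID-19 mortality: {covid_annual_mortality_pct:.1f}%")  # ~4.2
```

All four printed values match the figures reported in the Results and abstract, which is what makes the assumed denominator plausible.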
Background: National data on chronic dialysis treatment are essential to support the development of health policies aimed at improving treatment for thousands of people.

Methods: A survey was carried out in Brazilian chronic dialysis centers using an online questionnaire for the year 2020, covering clinical and epidemiological aspects of patients in chronic dialysis programs, data on dialysis therapy, characteristics of dialysis units, and the impact of the COVID-19 pandemic.

Results: 235 (28%) of the centers responded to the questionnaire. In July 2020, the estimated total number of patients on dialysis was 144,779. The estimated prevalence and incidence rates of patients per million population (pmp) were 684 and 209, respectively. Of the prevalent patients, 92.6% were on hemodialysis (HD) and 7.4% were on peritoneal dialysis (PD); 23% were on the transplant waiting list. A central venous catheter was used by a quarter of patients on HD. The incidence rate of confirmed COVID-19 between February and July 2020 was 684/10,000 dialysis patients, and the lethality rate was 25.7%. The estimated overall and COVID-19 crude annual mortality rates were 24.5% and 4.2%, respectively.

Conclusions: The absolute number of patients on chronic dialysis and the prevalence rate continued to increase. The low use of PD as dialysis therapy was maintained, and the use of long-term catheters for HD increased. The COVID-19 pandemic contributed to the increase in the overall mortality rate.
Keywords: Renal Dialysis | Peritoneal Dialysis | Epidemiology | COVID-19

MeSH terms: Brazil | COVID-19 | Humans | Kidney Failure, Chronic | Pandemics | Renal Dialysis | Surveys and Questionnaires
The prognostic significance of prognostic nutritional index in gastrointestinal stromal tumors: A systematic review and meta-analysis.
36451460
Risk assessment before treatment is important for gastrointestinal stromal tumors (GISTs) because it determines whether surgery or preoperative treatment takes priority. The prognostic nutritional index (PNI) is an integrated parameter derived from serum albumin and lymphocyte count. Immunonutritional status defined in this manner is well known to be closely linked to the prognosis of several other cancers. Nevertheless, the prognostic value of PNI specifically in GISTs has not been well established. This study aimed to verify the prognostic role of PNI in patients with GISTs.
BACKGROUND
A comprehensive literature search of medical databases was conducted up to June 2022, and the raw data (hazard ratios and 95% confidence intervals [CIs]) on the prognostic value of PNI for recurrence-free survival in patients with GISTs were extracted and synthesized using a random-effects model. This review was registered in the PROSPERO database (CRD42022345440).
METHODS
A total of 8 eligible studies including 2627 patients with GISTs were analyzed, and the pooled results confirmed that an elevated PNI was associated with better recurrence-free survival (hazard ratio: 0.52, 95% CI: 0.40-0.68), with moderate heterogeneity (I-square, 38%). The findings from the subgroup analysis were consistent with the overall pooled results, and a sensitivity analysis, rather than the subgroup analysis, identified the source of heterogeneity.
RESULTS
Elevated pretreatment PNI may be a useful indicator for assessing risk of recurrence in patients from China with GISTs. Studies in other countries and regions are needed to further verify the prognostic value of PNI in GISTs.
CONCLUSION
[ "Humans", "Prognosis", "Gastrointestinal Stromal Tumors", "Nutrition Assessment", "Lymphocyte Count", "China" ]
9704956
1. Introduction
Gastrointestinal stromal tumors (GISTs) are the most common gastrointestinal (GI) mesenchymal tumors. They harbor activating somatic mutations involving the tyrosine kinase receptor c-kit (CD117) and platelet-derived growth factor receptor-α, expressed by the interstitial cells of Cajal, which control GI tract peristalsis.[1] Tyrosine kinase inhibitors (TKIs), as represented by imatinib, have achieved gratifying therapeutic benefit in the treatment of GISTs.[2] Complete excision is recommended for primary resectable GISTs without significant risk of recurrence; otherwise, targeted therapy should be considered the preferred treatment option based on risk assessment, as emphasized by the first guidelines dedicated specifically to GISTs, recently published by the National Comprehensive Cancer Network.[3] Tumor location, size, mitotic index, and tumor rupture are convincingly incorporated into the assessment of the potentially malignant biological behavior of GISTs,[4–6] but it remains difficult to assess the risk of recurrence accurately without pathological assessment. Hence, other easily accessible and effective indicators for predicting the likelihood of recurrence are needed.

Nutritional status is closely connected to both tumor progression and prognosis.[7] Malnutrition usually indicates a worse clinical outcome in most cancers, and several prognosis-related nutritional parameters have been identified as effective predictors of prognosis, such as Nutritional Risk Screening 2002, the Patient-Generated Subjective Global Assessment, and the prognostic nutritional index (PNI).[8–18] The PNI was originally proposed by Buzby in 1980 and applied by Onodera in 1984 to predict surgical risk in GI malignancy. Due to its convenience and efficiency, the PNI was rapidly tested in other types of cancers as well, including GISTs.[8,11–22] However, the sample sizes in reports predicting the recurrence risk of GISTs based on the PNI were usually small, making it difficult to draw convincing conclusions. According to the National Comprehensive Cancer Network guidelines, effective preoperative assessment of postoperative recurrence risk is particularly important for GISTs because it determines the priority of surgical treatment versus preoperative TKI treatment.[3]

Although other meta-analyses have previously examined the prognostic value of PNI in most tumors,[21,22] these earlier studies did not clearly differentiate GISTs from GI epithelial cancers by pathology. In fact, unlike GI epithelial cancers, the existing risk assessment criteria and prognostic parameters for GISTs are self-contained and still developing.[6,13] Accordingly, we conducted a meta-analysis to verify the prognostic significance of PNI in GISTs. To our knowledge, this is the first meta-analysis in this field.
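For reference, the studies pooled below report PNI cutoffs of 40.7 to 51.3, consistent with Onodera's formulation introduced above. The formula itself is not restated in this article, so the sketch below assumes the standard definition.

```python
def onodera_pni(albumin_g_dl: float, lymphocytes_per_ul: float) -> float:
    """Onodera's prognostic nutritional index (standard definition, assumed):
    PNI = 10 x serum albumin (g/dL) + 0.005 x total lymphocyte count (/uL)."""
    return 10 * albumin_g_dl + 0.005 * lymphocytes_per_ul

# Example: albumin 4.0 g/dL and 1,800 lymphocytes/uL give PNI = 49.0,
# which would count as "elevated" against a hypothetical cutoff of 45.
print(onodera_pni(4.0, 1_800))  # -> 49.0
```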
2. Materials and methods
The present study was based on published studies; thus, approval from an ethics committee or institutional review board was not required.

2.1. Search strategy

Two independent researchers conducted a thorough literature search of the electronic databases PubMed, Embase, and the Cochrane Library from inception up to June 2022. Search terms included MeSH terms as well as the free-text words “gastrointestinal stromal tumors” or “GISTs,” and “prognostic nutritional index” or “PNI,” and “survival” or “prognosis” or “recurrence” or “clinical outcome.” References cited by the identified documents were screened carefully to avoid missing eligible articles. This review proceeded in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses[23] and was registered in the PROSPERO database, an international register of systematic reviews (registration number CRD42022345440).

2.2. Selection criteria

Prespecified acceptance criteria for the studies included in the present meta-analysis were discussed by the authors; when disagreements arose, the final decision was made by the senior author and then approved by all authors. The inclusion criteria were: studies concerned with GISTs confirmed by pathology; studies providing the prognostic value of the PNI prior to treatment; studies with available hazard ratios (HRs) and 95% confidence intervals (CIs) for survival data of GISTs; and availability of the full text published in English. The exclusion criteria were: studies irrelevant to the PNI for the prognosis of GISTs; duplicated publications; and case reports, comments, conference abstracts, and reviews.

2.3. Data extraction and quality assessment

Data were extracted from the qualifying publications after full-text reading by 2 investigators independently, including the first author, year of publication, region, study type, age, gender, sample size, tumor type, recurrence risk according to the National Institutes of Health (NIH) criteria, treatment, observation period, follow-up, endpoint, cutoff value of PNI, and quality score. The HRs and 95% CIs for endpoints were extracted directly from multivariate or, where necessary, univariate analyses. Because all studies that met the inclusion criteria reported only recurrence-free survival (RFS), we took this parameter as the sole endpoint.[8,11–18] The Newcastle–Ottawa quality assessment scale was used to assess the quality of the included studies.[24]

2.4. Statistical analysis

The original data were synthesized using Review Manager software (version 5.4, Cochrane Collaboration, Copenhagen, Denmark). The I-square (I2) statistic and Cochran's Q test were used to investigate heterogeneity among the eligible studies, with a P value of < .1 taken to indicate significant heterogeneity.[25] Heterogeneity was classified into 3 degrees according to the I2 value: low (I2 < 25%), moderate (I2, 25–75%), and high (I2 > 75%).[26] A random-effects model was used if heterogeneity was observed. A two-sided P value of < .05 was considered statistically significant. Subgroup analysis was subsequently conducted to explore the source of heterogeneity based on the common layering characteristics of the publications. Moreover, sensitivity analysis was performed to further identify the source of heterogeneity and to demonstrate the stability of the pooled results by omitting one study at a time. Results were visualized using GraphPad Prism software (version 7.0, San Diego, California). Finally, publication bias was assessed with funnel plots and further evaluated by Egger’s test and Begg’s test[27,28] using Stata statistical software (version 12.0, College Station, TX).
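The generic inverse-variance random-effects pooling that Review Manager performs can be reproduced from the extracted HRs and CIs. Below is a minimal, independent sketch of the DerSimonian-Laird estimator for orientation; it is not the authors' code, and the two example studies in the final line are hypothetical.

```python
import numpy as np

def pool_hazard_ratios(hr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of hazard ratios.
    Inputs are per-study HRs with 95% CI bounds; returns the pooled HR,
    its 95% CI, and the I-square statistic (%)."""
    y = np.log(np.asarray(hr, dtype=float))          # log hazard ratios
    se = (np.log(ci_high) - np.log(ci_low)) / 3.92   # SE from 95% CI width
    w = 1.0 / se**2                                  # inverse-variance weights
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)                 # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu), i2

# Hypothetical two-study example (not the extracted values):
print(pool_hazard_ratios([0.45, 0.60], [0.30, 0.40], [0.68, 0.90]))
```

Feeding in the eight extracted HR/CI pairs should approximately reproduce the pooled HR of 0.52 (95% CI: 0.40–0.68) and the I-square of 38% reported in the Results.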
3. Results

3.1. Search results
The literature search process is shown in the flowchart (Fig. 1). The initial search yielded 27 articles, 11 of which were retained for rigorous screening of titles and abstracts. After the full texts of these 11 articles were read, 2 that did not provide enough data were excluded, as was 1 of 2 studies found to have been published by the same center,[11,14] so that finally 8 retrospective studies including 2627 patients with GISTs fully met the requirements and were adopted for the current meta-analysis.[8,11–13,15–18]

Figure 1. Search flow diagram for studies included in the meta-analysis.

All of these eligible studies, published between 2019 and 2022, evaluated the capacity of the PNI to predict the RFS of GIST patients from China. The main treatment for GISTs in these studies was surgical resection: 4 reported primary GISTs treated solely by surgery,[8,13,16,18] while the other 4 reported the predictive effect of PNI on the prognosis of GISTs treated with surgery followed by imatinib.[11,12,15,17] HRs and 95% CIs were extracted directly from the articles, with cutoff values of PNI ranging from 40.7 to 51.3. The characteristics of the accepted studies and patients are summarized in Table 1.

Table 1. Baseline characteristics of included studies. HR = hazard ratio, MV = multivariate, NOS = Newcastle–Ottawa quality assessment, NR = not reported, PNI = prognostic nutritional index, RFS = recurrence-free survival, UV = univariate.
Author contributions

Conceptualization: Tingyong Tang. Data curation: Chunlin Mo, Tingyong Tang. Formal analysis: Zhenjie Li, Dengming Zhang, Xiaoxi Fan. Investigation: Zhenjie Li, Dengming Zhang, Chunlin Mo. Methodology: Zhenjie Li, Tingyong Tang. Resources: Tingyong Tang. Software: Zhenjie Li, Dengming Zhang, Peijin Zhu. Supervision: Tingyong Tang. Validation: Zhenjie Li, Dengming Zhang, Tingyong Tang. Visualization: Zhenjie Li. Writing – original draft: Zhenjie Li. Writing – review & editing: Tingyong Tang.
[ "2.1. Search strategy", "2.2. Selection criteria", "2.3. Data extraction and quality assessment", "2.4. Statistical analysis", "3. Results", "3.2. Meta-analysis", "4. Publication bias", "6. Conclusion" ]
[ "Two independent researchers conducted a thorough literature search on the electronic databases PubMed, Embase, and the Cochrane Library from inception up to June 2022. Appropriate search terms included MeSH terms, as well as the free text words “gastrointestinal stromal tumors” or “GISTs,” and “prognostic nutritional index” or “PNI,” and “survival” or “prognosis” or “recurrence” or “clinical outcome.” References cited by the identified documents were screened carefully to avoid missing possible eligible articles. This review proceeded in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses[23] and was registered in the PROSPERO database, an international register of systematic reviews (register number CRD42022345440).", "Prespecified acceptance criteria for studies included in the present meta-analysis were discussed by the authors and the final decision was made by the senior author when encountering disagreements, and was then approved by all authors. The inclusion criteria were: studies concerned with GISTs confirmed by pathology; studies provided the prognostic value of the PNI prior to treatment; studies with available hazard ratios (HRs) and 95% confidence intervals (CIs) for survival data of GISTs; availability of the full text published in English. The exclusion criteria were: studies irrelevant to PNI for prognosis of GISTs; duplicated publications; case reports, comments, conference abstracts, and reviews.", "Data extraction was carried out from publications that met the requirements after full-text reading by 2 investigators independently, including the first name of authors, year of publication, region, study type, age, gender, sample size, tumor type, recurrence risk according to the National Institutes of Health (NIH) criteria, treatment, observation period, follow-up, endpoint, cutoff value of PNI, and quality score. The HRs and 95% CIs for endpoints were extracted directly from multivariate or univariate analyses, as feasible. Because all studies met the inclusion criteria reported just for the recurrence-free survival (RFS), here, we took this parameter as the only endpoint.[8,11–18] The Newcastle–Ottawa quality assessment scale was utilized to assess the quality of the included studies.[24]", "The original data were synthesized using Review Manager software (version 5.4, Cochrane Collaboration, Copenhagen, Denmark). I-square (I2) statistical testing and Cochrane (Q) tests were used to investigate the heterogeneity among the eligible studies, with a P value of < .1 taken to indicate significant heterogeneity.[25] This was classified into 3 degrees according to the I2 results as follows: low (I2 < 25%), moderate (I2, 25–75%) and high (I2 > 75%).[26] A random effect model was used if heterogeneity was observed. A two-sided P value of < .05 was considered statistically significant. Subgroup analysis was conducted to explore the source of heterogeneity subsequently based on the common layering characteristic of the publications. Moreover, sensitivity analysis was executed to further identify the source of heterogeneity and demonstrate the stability of the pooled results by omitting any single study. Results were visualized using Graphpad Prism software (version 7.0, San Diego, California). Finally, publication bias was shown as funnel plots and further validated by Egger’s test and Begg’s test[27,28] using Stata statistical software (version 12.0, College Station, TX).", "3.1. 
Search results The specific literature search process is shown in the flowchart (Fig. 1). The initial literature search yielded 27 articles, 11 of which were retained for rigorous screening through recognition of titles and abstracts. After reading the full text of these 11 articles, 2 that did not provide enough data and another 2 found to be published by the same center[11,14] were excluded so that finally 8 retrospective studies including 2627 patients with GISTs that fully met the requirements were adopted for the current meta-analysis.[8,11–13,15–18]\nSearch flow diagram for studies included in the meta-analysis.\nAll of these eligible studies published between 2019 and 2022 evaluated the predictive capacity of PNI for the RFS of GISTs patients from China. The main treatment for GISTs in these studies was surgical resection, with 4 of them reporting primary GISTs treated solely by surgery,[8,13,16,18] while the other 4 reported the predictive effect of PNI on the prognosis of GISTs treated with surgery and followed by imatinib.[11,12,15,17] HRs and 95% CIs were extracted directly from the articles, accompanied by the cutoff values of PNI ranging from 40.7 to 51.3. The characteristics of the accepted studies and patients are summarized in Table 1.\nBaseline characteristics of included studies.\nHR = hazard ratio, MV = multivariate, NOS = Newcastle-Ottawa quality assessment, NR = not reported, PNI = prognostic nutritional index, RFS = recurrence-free survival, UV = univariate.\nThe specific literature search process is shown in the flowchart (Fig. 1). The initial literature search yielded 27 articles, 11 of which were retained for rigorous screening through recognition of titles and abstracts. After reading the full text of these 11 articles, 2 that did not provide enough data and another 2 found to be published by the same center[11,14] were excluded so that finally 8 retrospective studies including 2627 patients with GISTs that fully met the requirements were adopted for the current meta-analysis.[8,11–13,15–18]\nSearch flow diagram for studies included in the meta-analysis.\nAll of these eligible studies published between 2019 and 2022 evaluated the predictive capacity of PNI for the RFS of GISTs patients from China. The main treatment for GISTs in these studies was surgical resection, with 4 of them reporting primary GISTs treated solely by surgery,[8,13,16,18] while the other 4 reported the predictive effect of PNI on the prognosis of GISTs treated with surgery and followed by imatinib.[11,12,15,17] HRs and 95% CIs were extracted directly from the articles, accompanied by the cutoff values of PNI ranging from 40.7 to 51.3. The characteristics of the accepted studies and patients are summarized in Table 1.\nBaseline characteristics of included studies.\nHR = hazard ratio, MV = multivariate, NOS = Newcastle-Ottawa quality assessment, NR = not reported, PNI = prognostic nutritional index, RFS = recurrence-free survival, UV = univariate.\n3.2. Meta-analysis Eight studies reported the prognostic value of PNI for RFS of 2627 patients, with a pooled HR of 0.52 (95% CI: 0.40–0.68), indicating that patients with an elevated PNI had a better RFS relative to those with a lower PNI. Heterogeneity analysis revealed an I2 value of 38%, indicating moderate heterogeneity among the included studies, but this was not statistically significant (P = .13, Fig. 2). 
A random-effects model was used and subgroup analysis was performed to explore the differences between groups with respect to the characteristics of the different studies. Subgroup analysis according to sample size (cutoff point, 300) and treatment (surgery only vs. surgery and imatinib) showed lack of heterogeneity in the subgroup “sample size >300” (I2 value = 0; HR: 0.46, 95% CI: 0.36–0.58) and in the subgroup “surgery only” (I2 value = 0; HR: 0.45, 95% CI: 0.33–0.60). In contrast, significant heterogeneity was observed among the subgroup “sample size < 300” (I2 value = 74%; HR: 0.67, 95% CI: 0.47-0.95) and the subgroup “surgery and imatinib” (I2 value = 61%; HR: 0.63, 95% CI: 0.39–1.02). However, these results failed to identify the source of the heterogeneity, as subgroup analysis based on the sample size and treatment could not eliminate or even reduce it (Table 2).\nSubgroup analyses of PNI for RFS in GISTs patients.\nCI = confidence interval, GISTs = gastrointestinal stromal tumors, HR = hazard ratio, PNI = prognostic nutritional index, RFS = recurrence-free survival.\nForest plot of prognostic nutritional index (PNI) in predicting recurrence-free survival (RFS) of gastrointestinal stromal tumors (GISTs) patients.\nTo identify the source of heterogeneity, as well as document the stability of our results, a sensitivity analysis was carried out by omitting one study at a time and recalculating the summarized HRs for the remaining studies. The results changed only when the study by Li et al[12] was excluded (I2 value = 0; HR: 0.48, 95% CI: 0.39–0.59). In contrast, including that study resulted in an I2 value ranging from 38% to 47%, with a similar range of HRs (0.51–0.53) and 95% CIs (Fig. 3). The results of this sensitivity analysis thus indicated that the study published by Li et al in 2020[12] was the sole source of heterogeneity in the current meta-analysis.\nSensitivity analyses of HRs for recurrence-free survival (RFS) in gastrointestinal stromal tumors (GISTs) patients.\nEight studies reported the prognostic value of PNI for RFS of 2627 patients, with a pooled HR of 0.52 (95% CI: 0.40–0.68), indicating that patients with an elevated PNI had a better RFS relative to those with a lower PNI. Heterogeneity analysis revealed an I2 value of 38%, indicating moderate heterogeneity among the included studies, but this was not statistically significant (P = .13, Fig. 2). A random-effects model was used and subgroup analysis was performed to explore the differences between groups with respect to the characteristics of the different studies. Subgroup analysis according to sample size (cutoff point, 300) and treatment (surgery only vs. surgery and imatinib) showed lack of heterogeneity in the subgroup “sample size >300” (I2 value = 0; HR: 0.46, 95% CI: 0.36–0.58) and in the subgroup “surgery only” (I2 value = 0; HR: 0.45, 95% CI: 0.33–0.60). In contrast, significant heterogeneity was observed among the subgroup “sample size < 300” (I2 value = 74%; HR: 0.67, 95% CI: 0.47-0.95) and the subgroup “surgery and imatinib” (I2 value = 61%; HR: 0.63, 95% CI: 0.39–1.02). 
However, these results failed to identify the source of the heterogeneity, as subgroup analysis based on the sample size and treatment could not eliminate or even reduce it (Table 2).\nSubgroup analyses of PNI for RFS in GISTs patients.\nCI = confidence interval, GISTs = gastrointestinal stromal tumors, HR = hazard ratio, PNI = prognostic nutritional index, RFS = recurrence-free survival.\nForest plot of prognostic nutritional index (PNI) in predicting recurrence-free survival (RFS) of gastrointestinal stromal tumors (GISTs) patients.\nTo identify the source of heterogeneity, as well as document the stability of our results, a sensitivity analysis was carried out by omitting one study at a time and recalculating the summarized HRs for the remaining studies. The results changed only when the study by Li et al[12] was excluded (I2 value = 0; HR: 0.48, 95% CI: 0.39–0.59). In contrast, including that study resulted in an I2 value ranging from 38% to 47%, with a similar range of HRs (0.51–0.53) and 95% CIs (Fig. 3). The results of this sensitivity analysis thus indicated that the study published by Li et al in 2020[12] was the sole source of heterogeneity in the current meta-analysis.\nSensitivity analyses of HRs for recurrence-free survival (RFS) in gastrointestinal stromal tumors (GISTs) patients.", "Eight studies reported the prognostic value of PNI for RFS of 2627 patients, with a pooled HR of 0.52 (95% CI: 0.40–0.68), indicating that patients with an elevated PNI had a better RFS relative to those with a lower PNI. Heterogeneity analysis revealed an I2 value of 38%, indicating moderate heterogeneity among the included studies, but this was not statistically significant (P = .13, Fig. 2). A random-effects model was used and subgroup analysis was performed to explore the differences between groups with respect to the characteristics of the different studies. Subgroup analysis according to sample size (cutoff point, 300) and treatment (surgery only vs. surgery and imatinib) showed lack of heterogeneity in the subgroup “sample size >300” (I2 value = 0; HR: 0.46, 95% CI: 0.36–0.58) and in the subgroup “surgery only” (I2 value = 0; HR: 0.45, 95% CI: 0.33–0.60). In contrast, significant heterogeneity was observed among the subgroup “sample size < 300” (I2 value = 74%; HR: 0.67, 95% CI: 0.47-0.95) and the subgroup “surgery and imatinib” (I2 value = 61%; HR: 0.63, 95% CI: 0.39–1.02). However, these results failed to identify the source of the heterogeneity, as subgroup analysis based on the sample size and treatment could not eliminate or even reduce it (Table 2).\nSubgroup analyses of PNI for RFS in GISTs patients.\nCI = confidence interval, GISTs = gastrointestinal stromal tumors, HR = hazard ratio, PNI = prognostic nutritional index, RFS = recurrence-free survival.\nForest plot of prognostic nutritional index (PNI) in predicting recurrence-free survival (RFS) of gastrointestinal stromal tumors (GISTs) patients.\nTo identify the source of heterogeneity, as well as document the stability of our results, a sensitivity analysis was carried out by omitting one study at a time and recalculating the summarized HRs for the remaining studies. The results changed only when the study by Li et al[12] was excluded (I2 value = 0; HR: 0.48, 95% CI: 0.39–0.59). In contrast, including that study resulted in an I2 value ranging from 38% to 47%, with a similar range of HRs (0.51–0.53) and 95% CIs (Fig. 3). 
The results of this sensitivity analysis thus indicated that the study published by Li et al in 2020[12] was the sole source of heterogeneity in the current meta-analysis.\nSensitivity analyses of HRs for recurrence-free survival (RFS) in gastrointestinal stromal tumors (GISTs) patients.", "The shape of the funnel plot showed symmetry of the whole dataset (Fig. 4) and indicated no apparent publication bias among the included studies. Further statistical analysis suggested that publication bias was not significant (Table 3, Begg’s test P = .711, Egger’s test P = .995). However, it should be noted that although there is no significant publication bias, there is clearly a geographical bias because all the studies were from China.\nAssessment for publication bias.\nPNI = prognostic nutritional index.\nFunnel plot.", "In conclusion, despite the above limitations, this meta-analysis is the first systematic review concerning the prognostic value of the PNI for GISTs. The pooled results demonstrate that an elevated PNI is associated with a favorable RFS in patients from China with GISTs. As a parameter of immunonutritional status, PNI may not only be helpful for pretreatment evaluation but also for guiding early nutritional and immunological intervention, although well-designed multi-regional studies not limited to China are required for confirmation of this hypothesis." ]
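The authors assessed funnel-plot asymmetry with Begg's and Egger's tests in Stata. For orientation, Egger's test is an ordinary regression of the standardized effect on precision; a Python sketch under that definition (the function name and inputs are ours) might look like this. With only 8 studies the test has little power, which is worth bearing in mind alongside the P values above.

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(log_effects, se):
    """Egger's regression test for funnel-plot asymmetry.
    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    returns the intercept (bias estimate) and its two-sided P value."""
    z = np.asarray(log_effects) / np.asarray(se)
    precision = 1.0 / np.asarray(se)
    X = sm.add_constant(precision)        # column of ones + precision
    fit = sm.OLS(z, X).fit()
    return fit.params[0], fit.pvalues[0]  # Egger intercept and its P value
```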
[ "Gastrointestinal stromal tumors (GISTs) are the most common gastrointestinal (GI) mesenchymal tumors. They harbor activating somatic mutations involving the tyrosine kinase receptor c-kit (CD117) and platelet-derived growth factor-α, expressed as by the Interstitial Cells of Cajal which control GI track peristalsis.[1] Tyrosine kinase inhibitors (TKIs), as represented by imatinib, have achieved gratifying therapeutic benefit for the treatment of GISTs.[2] Complete excision is recommended for primary resectable GISTs without significant risk of recurrence, but otherwise, targeted therapy should be considered as the preferred treatment option based on risk assessment, as emphasized by the first guidelines separately and specifically for GISTs recently published by the National Comprehensive Cancer Network.[3] Tumor location, size, mitotic index and tumor rupture are convincingly incorporated into the assessment of potentially malignant biological behavior of GISTs,[4–6] but nevertheless, it is difficult to accurately assess the risk of recurrence without pathological assessment. Hence, for predicting the likelihood of recurrence, other easily accessible and effective indicators are needed.\nNutritional status is closely connected to both tumor progression and prognosis.[7] Malnutrition usually indicates a worse clinical outcome of most cancers, and several prognosis-related nutritional parameters have been identified and shown to represent effective predictors of prognosis, such as Nutritional Risk Screening 2002, Patient-Generated Subjective Global Assessment, and the prognostic nutritional index (PNI).[8–18] The PNI was originally proposed by Buzby in 1980 and applied by Onodera in 1984 to predict the surgical risk in GI malignancy. Due to its convenience and efficiency, the PNI was rapidly tested in other types of cancers as well, including GISTs.[8,11–22] The population sample size in reports on predicting the recurrence risk of GISTs based on the PNI was usually small, making it difficult to draw convincing conclusions. However, according to the National Comprehensive Cancer Network guidelines, preoperative effective assessment of postoperative recurrence risk is particularly important for GISTs, because it will determine the priority of surgical treatment or preoperative TKI treatment.[3]\nAlthough other meta-analyses have previously determined the prognostic value of PNI in most tumors,[21,22] these earlier studies had not clearly differentiated GISTs from GI epithelial cancers by pathology. In fact, unlike GI epithelial cancers, the existing risk assessment criteria and prognostic parameters for GISTs are self-contained and developing.[6,13] Accordingly, we conducted a meta-analysis to verify the prognostic significance of PNI in GISTs. This is the first meta-analysis in this field.", "The present study was carried out based on the published studies. Thus, the approval from an ethics committee or institutional review board was not required.\n2.1. Search strategy Two independent researchers conducted a thorough literature search on the electronic databases PubMed, Embase, and the Cochrane Library from inception up to June 2022. Appropriate search terms included MeSH terms, as well as the free text words “gastrointestinal stromal tumors” or “GISTs,” and “prognostic nutritional index” or “PNI,” and “survival” or “prognosis” or “recurrence” or “clinical outcome.” References cited by the identified documents were screened carefully to avoid missing possible eligible articles. 
This review proceeded in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses[23] and was registered in the PROSPERO database, an international register of systematic reviews (register number CRD42022345440).\nTwo independent researchers conducted a thorough literature search on the electronic databases PubMed, Embase, and the Cochrane Library from inception up to June 2022. Appropriate search terms included MeSH terms, as well as the free text words “gastrointestinal stromal tumors” or “GISTs,” and “prognostic nutritional index” or “PNI,” and “survival” or “prognosis” or “recurrence” or “clinical outcome.” References cited by the identified documents were screened carefully to avoid missing possible eligible articles. This review proceeded in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses[23] and was registered in the PROSPERO database, an international register of systematic reviews (register number CRD42022345440).\n2.2. Selection criteria Prespecified acceptance criteria for studies included in the present meta-analysis were discussed by the authors and the final decision was made by the senior author when encountering disagreements, and was then approved by all authors. The inclusion criteria were: studies concerned with GISTs confirmed by pathology; studies provided the prognostic value of the PNI prior to treatment; studies with available hazard ratios (HRs) and 95% confidence intervals (CIs) for survival data of GISTs; availability of the full text published in English. The exclusion criteria were: studies irrelevant to PNI for prognosis of GISTs; duplicated publications; case reports, comments, conference abstracts, and reviews.\nPrespecified acceptance criteria for studies included in the present meta-analysis were discussed by the authors and the final decision was made by the senior author when encountering disagreements, and was then approved by all authors. The inclusion criteria were: studies concerned with GISTs confirmed by pathology; studies provided the prognostic value of the PNI prior to treatment; studies with available hazard ratios (HRs) and 95% confidence intervals (CIs) for survival data of GISTs; availability of the full text published in English. The exclusion criteria were: studies irrelevant to PNI for prognosis of GISTs; duplicated publications; case reports, comments, conference abstracts, and reviews.\n2.3. Data extraction and quality assessment Data extraction was carried out from publications that met the requirements after full-text reading by 2 investigators independently, including the first name of authors, year of publication, region, study type, age, gender, sample size, tumor type, recurrence risk according to the National Institutes of Health (NIH) criteria, treatment, observation period, follow-up, endpoint, cutoff value of PNI, and quality score. The HRs and 95% CIs for endpoints were extracted directly from multivariate or univariate analyses, as feasible. 
Because all studies met the inclusion criteria reported just for the recurrence-free survival (RFS), here, we took this parameter as the only endpoint.[8,11–18] The Newcastle–Ottawa quality assessment scale was utilized to assess the quality of the included studies.[24]\nData extraction was carried out from publications that met the requirements after full-text reading by 2 investigators independently, including the first name of authors, year of publication, region, study type, age, gender, sample size, tumor type, recurrence risk according to the National Institutes of Health (NIH) criteria, treatment, observation period, follow-up, endpoint, cutoff value of PNI, and quality score. The HRs and 95% CIs for endpoints were extracted directly from multivariate or univariate analyses, as feasible. Because all studies met the inclusion criteria reported just for the recurrence-free survival (RFS), here, we took this parameter as the only endpoint.[8,11–18] The Newcastle–Ottawa quality assessment scale was utilized to assess the quality of the included studies.[24]\n2.4. Statistical analysis The original data were synthesized using Review Manager software (version 5.4, Cochrane Collaboration, Copenhagen, Denmark). I-square (I2) statistical testing and Cochrane (Q) tests were used to investigate the heterogeneity among the eligible studies, with a P value of < .1 taken to indicate significant heterogeneity.[25] This was classified into 3 degrees according to the I2 results as follows: low (I2 < 25%), moderate (I2, 25–75%) and high (I2 > 75%).[26] A random effect model was used if heterogeneity was observed. A two-sided P value of < .05 was considered statistically significant. Subgroup analysis was conducted to explore the source of heterogeneity subsequently based on the common layering characteristic of the publications. Moreover, sensitivity analysis was executed to further identify the source of heterogeneity and demonstrate the stability of the pooled results by omitting any single study. Results were visualized using Graphpad Prism software (version 7.0, San Diego, California). Finally, publication bias was shown as funnel plots and further validated by Egger’s test and Begg’s test[27,28] using Stata statistical software (version 12.0, College Station, TX).\nThe original data were synthesized using Review Manager software (version 5.4, Cochrane Collaboration, Copenhagen, Denmark). I-square (I2) statistical testing and Cochrane (Q) tests were used to investigate the heterogeneity among the eligible studies, with a P value of < .1 taken to indicate significant heterogeneity.[25] This was classified into 3 degrees according to the I2 results as follows: low (I2 < 25%), moderate (I2, 25–75%) and high (I2 > 75%).[26] A random effect model was used if heterogeneity was observed. A two-sided P value of < .05 was considered statistically significant. Subgroup analysis was conducted to explore the source of heterogeneity subsequently based on the common layering characteristic of the publications. Moreover, sensitivity analysis was executed to further identify the source of heterogeneity and demonstrate the stability of the pooled results by omitting any single study. Results were visualized using Graphpad Prism software (version 7.0, San Diego, California). 
Finally, publication bias was shown as funnel plots and further validated by Egger’s test and Begg’s test[27,28] using Stata statistical software (version 12.0, College Station, TX).", "Two independent researchers conducted a thorough literature search on the electronic databases PubMed, Embase, and the Cochrane Library from inception up to June 2022. Appropriate search terms included MeSH terms, as well as the free text words “gastrointestinal stromal tumors” or “GISTs,” and “prognostic nutritional index” or “PNI,” and “survival” or “prognosis” or “recurrence” or “clinical outcome.” References cited by the identified documents were screened carefully to avoid missing possible eligible articles. This review proceeded in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses[23] and was registered in the PROSPERO database, an international register of systematic reviews (register number CRD42022345440).", "Prespecified acceptance criteria for studies included in the present meta-analysis were discussed by the authors and the final decision was made by the senior author when encountering disagreements, and was then approved by all authors. The inclusion criteria were: studies concerned with GISTs confirmed by pathology; studies provided the prognostic value of the PNI prior to treatment; studies with available hazard ratios (HRs) and 95% confidence intervals (CIs) for survival data of GISTs; availability of the full text published in English. The exclusion criteria were: studies irrelevant to PNI for prognosis of GISTs; duplicated publications; case reports, comments, conference abstracts, and reviews.", "Data extraction was carried out from publications that met the requirements after full-text reading by 2 investigators independently, including the first name of authors, year of publication, region, study type, age, gender, sample size, tumor type, recurrence risk according to the National Institutes of Health (NIH) criteria, treatment, observation period, follow-up, endpoint, cutoff value of PNI, and quality score. The HRs and 95% CIs for endpoints were extracted directly from multivariate or univariate analyses, as feasible. Because all studies met the inclusion criteria reported just for the recurrence-free survival (RFS), here, we took this parameter as the only endpoint.[8,11–18] The Newcastle–Ottawa quality assessment scale was utilized to assess the quality of the included studies.[24]", "The original data were synthesized using Review Manager software (version 5.4, Cochrane Collaboration, Copenhagen, Denmark). I-square (I2) statistical testing and Cochrane (Q) tests were used to investigate the heterogeneity among the eligible studies, with a P value of < .1 taken to indicate significant heterogeneity.[25] This was classified into 3 degrees according to the I2 results as follows: low (I2 < 25%), moderate (I2, 25–75%) and high (I2 > 75%).[26] A random effect model was used if heterogeneity was observed. A two-sided P value of < .05 was considered statistically significant. Subgroup analysis was conducted to explore the source of heterogeneity subsequently based on the common layering characteristic of the publications. Moreover, sensitivity analysis was executed to further identify the source of heterogeneity and demonstrate the stability of the pooled results by omitting any single study. Results were visualized using Graphpad Prism software (version 7.0, San Diego, California). 
Finally, publication bias was shown as funnel plots and further validated by Egger’s test and Begg’s test[27,28] using Stata statistical software (version 12.0, College Station, TX).", "3.1. Search results The specific literature search process is shown in the flowchart (Fig. 1). The initial literature search yielded 27 articles, 11 of which were retained for rigorous screening through recognition of titles and abstracts. After reading the full text of these 11 articles, 2 that did not provide enough data and another 2 found to be published by the same center[11,14] were excluded so that finally 8 retrospective studies including 2627 patients with GISTs that fully met the requirements were adopted for the current meta-analysis.[8,11–13,15–18]\nSearch flow diagram for studies included in the meta-analysis.\nAll of these eligible studies published between 2019 and 2022 evaluated the predictive capacity of PNI for the RFS of GISTs patients from China. The main treatment for GISTs in these studies was surgical resection, with 4 of them reporting primary GISTs treated solely by surgery,[8,13,16,18] while the other 4 reported the predictive effect of PNI on the prognosis of GISTs treated with surgery and followed by imatinib.[11,12,15,17] HRs and 95% CIs were extracted directly from the articles, accompanied by the cutoff values of PNI ranging from 40.7 to 51.3. The characteristics of the accepted studies and patients are summarized in Table 1.\nBaseline characteristics of included studies.\nHR = hazard ratio, MV = multivariate, NOS = Newcastle-Ottawa quality assessment, NR = not reported, PNI = prognostic nutritional index, RFS = recurrence-free survival, UV = univariate.\nThe specific literature search process is shown in the flowchart (Fig. 1). The initial literature search yielded 27 articles, 11 of which were retained for rigorous screening through recognition of titles and abstracts. After reading the full text of these 11 articles, 2 that did not provide enough data and another 2 found to be published by the same center[11,14] were excluded so that finally 8 retrospective studies including 2627 patients with GISTs that fully met the requirements were adopted for the current meta-analysis.[8,11–13,15–18]\nSearch flow diagram for studies included in the meta-analysis.\nAll of these eligible studies published between 2019 and 2022 evaluated the predictive capacity of PNI for the RFS of GISTs patients from China. The main treatment for GISTs in these studies was surgical resection, with 4 of them reporting primary GISTs treated solely by surgery,[8,13,16,18] while the other 4 reported the predictive effect of PNI on the prognosis of GISTs treated with surgery and followed by imatinib.[11,12,15,17] HRs and 95% CIs were extracted directly from the articles, accompanied by the cutoff values of PNI ranging from 40.7 to 51.3. The characteristics of the accepted studies and patients are summarized in Table 1.\nBaseline characteristics of included studies.\nHR = hazard ratio, MV = multivariate, NOS = Newcastle-Ottawa quality assessment, NR = not reported, PNI = prognostic nutritional index, RFS = recurrence-free survival, UV = univariate.\n3.2. Meta-analysis Eight studies reported the prognostic value of PNI for RFS of 2627 patients, with a pooled HR of 0.52 (95% CI: 0.40–0.68), indicating that patients with an elevated PNI had a better RFS relative to those with a lower PNI. 
Heterogeneity analysis revealed an I2 value of 38%, indicating moderate heterogeneity among the included studies, although this was not statistically significant (P = .13, Fig. 2). A random-effects model was used, and subgroup analysis was performed to explore differences between groups with respect to the characteristics of the studies. Subgroup analysis according to sample size (cutoff point, 300) and treatment (surgery only vs. surgery and imatinib) showed a lack of heterogeneity in the subgroup “sample size >300” (I2 value = 0; HR: 0.46, 95% CI: 0.36–0.58) and in the subgroup “surgery only” (I2 value = 0; HR: 0.45, 95% CI: 0.33–0.60). In contrast, significant heterogeneity was observed in the subgroup “sample size <300” (I2 value = 74%; HR: 0.67, 95% CI: 0.47–0.95) and the subgroup “surgery and imatinib” (I2 value = 61%; HR: 0.63, 95% CI: 0.39–1.02). However, these results failed to identify the source of the heterogeneity, as subgroup analysis based on sample size and treatment could not eliminate or even reduce it (Table 2).\nSubgroup analyses of PNI for RFS in GIST patients.\nCI = confidence interval, GISTs = gastrointestinal stromal tumors, HR = hazard ratio, PNI = prognostic nutritional index, RFS = recurrence-free survival.\nForest plot of prognostic nutritional index (PNI) in predicting recurrence-free survival (RFS) of gastrointestinal stromal tumor (GIST) patients.\nTo identify the source of heterogeneity, as well as to document the stability of our results, a sensitivity analysis was carried out by omitting one study at a time and recalculating the summarized HRs for the remaining studies. The results changed only when the study by Li et al[12] was excluded (I2 value = 0; HR: 0.48, 95% CI: 0.39–0.59). In contrast, including that study resulted in an I2 value ranging from 38% to 47%, with a similar range of HRs (0.51–0.53) and 95% CIs (Fig. 3). This sensitivity analysis thus indicated that the study published by Li et al in 2020[12] was the sole source of heterogeneity in the current meta-analysis.\nSensitivity analyses of HRs for recurrence-free survival (RFS) in gastrointestinal stromal tumor (GIST) patients.", "The specific literature search process is shown in the flowchart (Fig. 1). The initial search yielded 27 articles, 11 of which were retained after screening of titles and abstracts. After the full text of these 11 articles was read, 2 that did not provide sufficient data and another 2 published by the same center[11,14] were excluded, so that 8 retrospective studies including 2627 patients with GISTs fully met the requirements and were adopted for the current meta-analysis.[8,11–13,15–18]\nSearch flow diagram for studies included in the meta-analysis.\nAll of these eligible studies, published between 2019 and 2022, evaluated the predictive capacity of the PNI for the RFS of GIST patients from China. The main treatment in these studies was surgical resection, with 4 reporting primary GISTs treated solely by surgery[8,13,16,18] and the other 4 reporting the predictive effect of the PNI on the prognosis of GISTs treated with surgery followed by imatinib.[11,12,15,17] HRs and 95% CIs were extracted directly from the articles, together with the cutoff values of the PNI, which ranged from 40.7 to 51.3. The characteristics of the accepted studies and patients are summarized in Table 1.\nBaseline characteristics of included studies.\nHR = hazard ratio, MV = multivariate, NOS = Newcastle-Ottawa quality assessment, NR = not reported, PNI = prognostic nutritional index, RFS = recurrence-free survival, UV = univariate.", "The shape of the funnel plot showed symmetry of the whole dataset (Fig. 4), indicating no apparent publication bias among the included studies. Further statistical analysis suggested that publication bias was not significant (Table 3; Begg’s test, P = .711; Egger’s test, P = .995). However, although there was no significant publication bias, a geographical bias is clearly present because all of the studies were from China.\nAssessment for publication bias.\nPNI = prognostic nutritional index.\nFunnel plot.", "Decisions on treatment strategies for GISTs depend on the assessment of the risk of recurrence.[3] R0 resection does not usually imply cure for GISTs with a high preoperative or postoperative risk of recurrence, and TKIs are recommended to increase RFS.[2,3] Even though tumor location, size, mitotic index, and tumor rupture have been identified as classic parameters for predicting recurrence of GISTs,[4–6] their accuracy is often limited by the inability to obtain pathological confirmation prior to treatment.
Hematological indicators such as the neutrophil-to-lymphocyte ratio have been shown to be useful in assessing the risk of GIST recurrence.[29] Similarly, recent studies showed that GIST patients with a high PNI tend to have a longer RFS, with HRs ranging from 0.17 to 0.59.[8,11–18] Moreover, a nomogram including tumor site, tumor size, mitotic index, tumor rupture, and PNI demonstrated better predictive ability than commonly used risk stratification systems such as the modified NIH criteria and the NIH–Miettinen criteria.[13]\nIn the current meta-analysis, a total of 8 studies including 2627 patients with GISTs were analyzed regarding the prognostic value of the PNI.[8,11–13,15–18] The pooled results revealed that an elevated PNI prior to treatment was related to a longer RFS (HR: 0.52, 95% CI: 0.40–0.68). Subgroup analysis showed a positive prognostic impact of the PNI on RFS in studies with a sample size >300 and in those where patients had been treated only surgically. Sensitivity analysis enabled the identification of one study as a source of heterogeneity (albeit the heterogeneity was not statistically significant; P > .1). In contrast to the other studies analyzed here, that publication from 2020[12] gave no indication of the status of the GISTs included (primary or not, localized or not). Possibly, the inclusion of GISTs at different stages in that study explains both the heterogeneity and its failure to identify the PNI as an independent prognostic factor in multivariable regression analysis.\nA limitation of the current analysis is that, due to the lack of data, we were unable to explore relationships between the PNI and other clinicopathologic features. Nutritional factors act on the development of cancers by regulating the balance between cell proliferation and death, appropriate cell differentiation, and the expression of oncogenes and tumor-suppressor genes.[7] Serum albumin, a common reference in assessments of nutritional status, is closely connected with the prognosis of various malignancies.[30] Previous reviews have indicated that hypoalbuminemia is associated with poor cancer survival, including in GI cancers.[30] However, serum albumin alone does not appear to be an independent predictor of the prognosis of GISTs, according to the limited studies included in the current meta-analysis.[15,18] Further verification of the prognostic value of serum albumin alone in GISTs is required. On the other hand, interactions among various immune organs, tissues, and cells mediate defense against tumors. Different tumor-infiltrating lymphocyte (TIL) phenotypes, rather than the lymphocyte count alone, can also be used to predict tumor prognosis.[31,32] Specifically, CD3+, CD8+, and CD56+ TILs were reported to be reliable independent predictors of disease-free survival in GIST patients.[33] Even after treatment with imatinib, enriched intratumoral CD3+ TILs could still be found and correlated with a better progression-free survival in GISTs.[34] Based on the above data and the results of our meta-analysis, the PNI, a parameter of immunonutritional status, can be expected to play an important role in the prediction of GIST prognosis.\nThere are several limitations to the current meta-analysis. First, all of the included studies were retrospective in design, whereas randomized controlled trials sit at the top of the evidence pyramid.[35] Second, all of the studies included in this analysis were from China, so a geographical bias is clearly present, and it remains unknown whether the current pooled results can be generalized to other Asian regions or to western countries. Third, the risk stratification standards varied among the included studies, which we considered mixed unless a publication clearly stated that the study population was of medium to high risk.[17] Fourth, most of the cutoff values for the PNI were determined by receiver operating characteristic curve analysis and were not consistent,[8,12,15,16,18] although half of them were concentrated around 47.5.[8,12,13,16] Therefore, optimal cutoff values still need to be determined by more well-designed studies. Fifth, the sample sizes of the included studies were all <500, ranging from 200 to 445, which may make it difficult to draw firm conclusions. Finally, the inclusion of only English-language literature may introduce a language bias.", "In conclusion, despite the above limitations, this meta-analysis is the first systematic review concerning the prognostic value of the PNI for GISTs. The pooled results demonstrate that an elevated PNI is associated with a favorable RFS in patients from China with GISTs. As a parameter of immunonutritional status, the PNI may be helpful not only for pretreatment evaluation but also for guiding early nutritional and immunological intervention, although well-designed multi-regional studies not limited to China are required to confirm this hypothesis." ]
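The random-effects pooling and heterogeneity statistics described in this meta-analysis can be reproduced with a short script. The following is a minimal sketch of DerSimonian–Laird pooling of log-transformed hazard ratios, with I2 computed from Cochran's Q; the study-level HRs and CIs below are hypothetical placeholders, not the values of the eight included studies.

```python
import math

def pooled_hr_random_effects(hrs, lowers, uppers, z=1.96):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    hrs/lowers/uppers: per-study HR point estimates and 95% CI bounds.
    Returns (pooled HR, CI low, CI high, Q, I2 percent, tau2).
    """
    y = [math.log(h) for h in hrs]                       # log-HRs
    se = [(math.log(u) - math.log(l)) / (2 * z)          # SE back-calculated from CI width
          for l, u in zip(lowers, uppers)]
    w = [1 / s**2 for s in se]                           # inverse-variance weights

    # Fixed-effect estimate, Cochran's Q, and I2
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Between-study variance (tau^2) and random-effects weights
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (s**2 + tau2) for s in se]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))

    return (math.exp(y_re),
            math.exp(y_re - z * se_re),
            math.exp(y_re + z * se_re),
            q, i2, tau2)

# Hypothetical study-level inputs (placeholders, not the published data)
hrs    = [0.45, 0.55, 0.30, 0.60, 0.50, 0.70, 0.40, 0.52]
lowers = [0.30, 0.35, 0.15, 0.40, 0.30, 0.45, 0.22, 0.33]
uppers = [0.68, 0.86, 0.60, 0.90, 0.83, 1.09, 0.73, 0.82]

hr, lo, hi, q, i2, tau2 = pooled_hr_random_effects(hrs, lowers, uppers)
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```

Back-calculating each standard error from the 95% CI width assumes the reported intervals are symmetric on the log scale, which is standard for HRs taken from Cox regression models.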
[ "intro", "methods", null, null, null, null, "results", "results", null, null, "discussion", null ]
[ "gastrointestinal stromal tumors (GISTs)", "prognostic nutritional index (PNI)", "Recurrence-free survival (RFS)" ]
1. Introduction: Gastrointestinal stromal tumors (GISTs) are the most common gastrointestinal (GI) mesenchymal tumors. They harbor activating somatic mutations involving the tyrosine kinase receptor c-kit (CD117) and platelet-derived growth factor receptor-α, which are expressed by the interstitial cells of Cajal that control GI tract peristalsis.[1] Tyrosine kinase inhibitors (TKIs), exemplified by imatinib, have achieved substantial therapeutic benefit in the treatment of GISTs.[2] Complete excision is recommended for primary resectable GISTs without significant risk of recurrence; otherwise, targeted therapy should be considered the preferred treatment option based on risk assessment, as emphasized by the first guidelines devoted specifically to GISTs, recently published by the National Comprehensive Cancer Network.[3] Tumor location, size, mitotic index, and tumor rupture are convincingly incorporated into the assessment of the potentially malignant biological behavior of GISTs,[4–6] but it remains difficult to accurately assess the risk of recurrence without pathological assessment. Hence, other easily accessible and effective indicators are needed for predicting the likelihood of recurrence. Nutritional status is closely connected to both tumor progression and prognosis.[7] Malnutrition usually indicates a worse clinical outcome in most cancers, and several prognosis-related nutritional parameters have been identified and shown to be effective predictors of prognosis, such as the Nutritional Risk Screening 2002, the Patient-Generated Subjective Global Assessment, and the prognostic nutritional index (PNI).[8–18] The PNI was originally proposed by Buzby in 1980 and applied by Onodera in 1984 to predict surgical risk in GI malignancy. Owing to its convenience and efficiency, the PNI was rapidly tested in other types of cancers as well, including GISTs.[8,11–22] The population sample sizes in reports predicting the recurrence risk of GISTs based on the PNI were usually small, making it difficult to draw convincing conclusions. However, according to the National Comprehensive Cancer Network guidelines, effective preoperative assessment of postoperative recurrence risk is particularly important for GISTs, because it determines the priority of surgical treatment versus preoperative TKI treatment.[3] Although other meta-analyses have previously examined the prognostic value of the PNI in most tumors,[21,22] these earlier studies did not clearly differentiate GISTs from GI epithelial cancers by pathology. In fact, unlike those of GI epithelial cancers, the existing risk assessment criteria and prognostic parameters for GISTs are self-contained and still developing.[6,13] Accordingly, we conducted a meta-analysis to verify the prognostic significance of the PNI in GISTs. This is the first meta-analysis in this field. 2. Materials and methods: The present study was based on published studies; thus, approval from an ethics committee or institutional review board was not required. 2.1. Search strategy: Two independent researchers conducted a thorough literature search of the electronic databases PubMed, Embase, and the Cochrane Library from inception up to June 2022.
Appropriate search terms included MeSH terms as well as the free-text words “gastrointestinal stromal tumors” or “GISTs,” “prognostic nutritional index” or “PNI,” and “survival,” “prognosis,” “recurrence,” or “clinical outcome.” References cited by the identified documents were screened carefully to avoid missing potentially eligible articles. This review proceeded in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses[23] and was registered in the PROSPERO database, an international register of systematic reviews (registration number CRD42022345440). 2.2. Selection criteria: Prespecified acceptance criteria for the studies included in the present meta-analysis were discussed by the authors; when disagreements arose, the final decision was made by the senior author and then approved by all authors. The inclusion criteria were: studies concerned with GISTs confirmed by pathology; studies providing the prognostic value of the PNI prior to treatment; studies with available hazard ratios (HRs) and 95% confidence intervals (CIs) for survival data; and availability of the full text in English. The exclusion criteria were: studies irrelevant to the PNI for the prognosis of GISTs; duplicated publications; and case reports, comments, conference abstracts, and reviews. 2.3. Data extraction and quality assessment: Data were extracted independently by 2 investigators from publications that met the requirements after full-text reading, including the first author, year of publication, region, study type, age, gender, sample size, tumor type, recurrence risk according to the National Institutes of Health (NIH) criteria, treatment, observation period, follow-up, endpoint, cutoff value of the PNI, and quality score. HRs and 95% CIs for endpoints were extracted directly from multivariate or univariate analyses, as available. Because all studies meeting the inclusion criteria reported only recurrence-free survival (RFS), we took this parameter as the sole endpoint.[8,11–18] The Newcastle–Ottawa quality assessment scale was used to assess the quality of the included studies.[24] 2.4. Statistical analysis: The original data were synthesized using Review Manager software (version 5.4, Cochrane Collaboration, Copenhagen, Denmark). The I-square (I2) statistic and the Cochran Q test were used to investigate heterogeneity among the eligible studies, with a P value of < .1 taken to indicate significant heterogeneity.[25] Heterogeneity was classified into 3 degrees according to the I2 results: low (I2 < 25%), moderate (I2, 25–75%), and high (I2 > 75%).[26] A random-effects model was used if heterogeneity was observed. A two-sided P value of < .05 was considered statistically significant. Subgroup analysis was conducted to explore the source of heterogeneity based on the common stratification characteristics of the publications. Moreover, sensitivity analysis was performed to further identify the source of heterogeneity and to demonstrate the stability of the pooled results by omitting one study at a time. Results were visualized using GraphPad Prism software (version 7.0, San Diego, CA). Finally, publication bias was assessed with funnel plots and further evaluated by Egger’s test and Begg’s test[27,28] using Stata statistical software (version 12.0, College Station, TX). 3. Results: 3.1. Search results: The specific literature search process is shown in the flowchart (Fig. 1). The initial search yielded 27 articles, 11 of which were retained after screening of titles and abstracts. After the full text of these 11 articles was read, 2 that did not provide sufficient data and another 2 published by the same center[11,14] were excluded, so that 8 retrospective studies including 2627 patients with GISTs fully met the requirements and were adopted for the current meta-analysis.[8,11–13,15–18] Search flow diagram for studies included in the meta-analysis. All of these eligible studies, published between 2019 and 2022, evaluated the predictive capacity of the PNI for the RFS of GIST patients from China. The main treatment in these studies was surgical resection, with 4 reporting primary GISTs treated solely by surgery[8,13,16,18] and the other 4 reporting the predictive effect of the PNI on the prognosis of GISTs treated with surgery followed by imatinib.[11,12,15,17] HRs and 95% CIs were extracted directly from the articles, together with the cutoff values of the PNI, which ranged from 40.7 to 51.3. The characteristics of the accepted studies and patients are summarized in Table 1. Baseline characteristics of included studies. HR = hazard ratio, MV = multivariate, NOS = Newcastle-Ottawa quality assessment, NR = not reported, PNI = prognostic nutritional index, RFS = recurrence-free survival, UV = univariate. 3.2. Meta-analysis: Eight studies reported the prognostic value of the PNI for the RFS of 2627 patients, with a pooled HR of 0.52 (95% CI: 0.40–0.68), indicating that patients with an elevated PNI had a better RFS relative to those with a lower PNI. Heterogeneity analysis revealed an I2 value of 38%, indicating moderate heterogeneity among the included studies, although this was not statistically significant (P = .13, Fig. 2). A random-effects model was used, and subgroup analysis was performed to explore differences between groups with respect to the characteristics of the studies. Subgroup analysis according to sample size (cutoff point, 300) and treatment (surgery only vs. surgery and imatinib) showed a lack of heterogeneity in the subgroup “sample size >300” (I2 value = 0; HR: 0.46, 95% CI: 0.36–0.58) and in the subgroup “surgery only” (I2 value = 0; HR: 0.45, 95% CI: 0.33–0.60).
In contrast, significant heterogeneity was observed in the subgroup “sample size <300” (I2 value = 74%; HR: 0.67, 95% CI: 0.47–0.95) and the subgroup “surgery and imatinib” (I2 value = 61%; HR: 0.63, 95% CI: 0.39–1.02). However, these results failed to identify the source of the heterogeneity, as subgroup analysis based on sample size and treatment could not eliminate or even reduce it (Table 2). Subgroup analyses of PNI for RFS in GIST patients. CI = confidence interval, GISTs = gastrointestinal stromal tumors, HR = hazard ratio, PNI = prognostic nutritional index, RFS = recurrence-free survival. Forest plot of prognostic nutritional index (PNI) in predicting recurrence-free survival (RFS) of gastrointestinal stromal tumor (GIST) patients. To identify the source of heterogeneity, as well as to document the stability of our results, a sensitivity analysis was carried out by omitting one study at a time and recalculating the summarized HRs for the remaining studies. The results changed only when the study by Li et al[12] was excluded (I2 value = 0; HR: 0.48, 95% CI: 0.39–0.59). In contrast, including that study resulted in an I2 value ranging from 38% to 47%, with a similar range of HRs (0.51–0.53) and 95% CIs (Fig. 3). This sensitivity analysis thus indicated that the study published by Li et al in 2020[12] was the sole source of heterogeneity in the current meta-analysis. Sensitivity analyses of HRs for recurrence-free survival (RFS) in gastrointestinal stromal tumor (GIST) patients. 4. Publication bias: The shape of the funnel plot showed symmetry of the whole dataset (Fig. 4), indicating no apparent publication bias among the included studies. Further statistical analysis suggested that publication bias was not significant (Table 3; Begg’s test, P = .711; Egger’s test, P = .995). However, although there was no significant publication bias, a geographical bias is clearly present because all of the studies were from China. Assessment for publication bias. PNI = prognostic nutritional index. Funnel plot. 5. Discussion: Decisions on treatment strategies for GISTs depend on the assessment of the risk of recurrence.[3] R0 resection does not usually imply cure for GISTs with a high preoperative or postoperative risk of recurrence, and TKIs are recommended to increase RFS.[2,3] Even though tumor location, size, mitotic index, and tumor rupture have been identified as classic parameters for predicting recurrence of GISTs,[4–6] their accuracy is often limited by the inability to obtain pathological confirmation prior to treatment. Hematological indicators such as the neutrophil-to-lymphocyte ratio have been shown to be useful in assessing the risk of GIST recurrence.[29] Similarly, recent studies showed that GIST patients with a high PNI tend to have a longer RFS, with HRs ranging from 0.17 to 0.59.[8,11–18] Moreover, a nomogram including tumor site, tumor size, mitotic index, tumor rupture, and PNI demonstrated better predictive ability than commonly used risk stratification systems such as the modified NIH criteria and the NIH–Miettinen criteria.[13] In the current meta-analysis, a total of 8 studies including 2627 patients with GISTs were analyzed regarding the prognostic value of the PNI.[8,11–13,15–18] The pooled results revealed that an elevated PNI prior to treatment was related to a longer RFS (HR: 0.52, 95% CI: 0.40–0.68).
Subgroup analysis showed a positive prognostic impact of the PNI on RFS in studies with a sample size >300 and in those where patients had been treated only surgically. Sensitivity analysis enabled the identification of one study as a source of heterogeneity (albeit the heterogeneity was not statistically significant; P > .1). In contrast to the other studies analyzed here, that publication from 2020[12] gave no indication of the status of the GISTs included (primary or not, localized or not). Possibly, the inclusion of GISTs at different stages in that study explains both the heterogeneity and its failure to identify the PNI as an independent prognostic factor in multivariable regression analysis. A limitation of the current analysis is that, due to the lack of data, we were unable to explore relationships between the PNI and other clinicopathologic features. Nutritional factors act on the development of cancers by regulating the balance between cell proliferation and death, appropriate cell differentiation, and the expression of oncogenes and tumor-suppressor genes.[7] Serum albumin, a common reference in assessments of nutritional status, is closely connected with the prognosis of various malignancies.[30] Previous reviews have indicated that hypoalbuminemia is associated with poor cancer survival, including in GI cancers.[30] However, serum albumin alone does not appear to be an independent predictor of the prognosis of GISTs, according to the limited studies included in the current meta-analysis.[15,18] Further verification of the prognostic value of serum albumin alone in GISTs is required. On the other hand, interactions among various immune organs, tissues, and cells mediate defense against tumors. Different tumor-infiltrating lymphocyte (TIL) phenotypes, rather than the lymphocyte count alone, can also be used to predict tumor prognosis.[31,32] Specifically, CD3+, CD8+, and CD56+ TILs were reported to be reliable independent predictors of disease-free survival in GIST patients.[33] Even after treatment with imatinib, enriched intratumoral CD3+ TILs could still be found and correlated with a better progression-free survival in GISTs.[34] Based on the above data and the results of our meta-analysis, the PNI, a parameter of immunonutritional status, can be expected to play an important role in the prediction of GIST prognosis. There are several limitations to the current meta-analysis. First, all of the included studies were retrospective in design, whereas randomized controlled trials sit at the top of the evidence pyramid.[35] Second, all of the studies included in this analysis were from China, so a geographical bias is clearly present, and it remains unknown whether the current pooled results can be generalized to other Asian regions or to western countries. Third, the risk stratification standards varied among the included studies, which we considered mixed unless a publication clearly stated that the study population was of medium to high risk.[17] Fourth, most of the cutoff values for the PNI were determined by receiver operating characteristic curve analysis and were not consistent,[8,12,15,16,18] although half of them were concentrated around 47.5.[8,12,13,16] Therefore, optimal cutoff values still need to be determined by more well-designed studies. Fifth, the sample sizes of the included studies were all <500, ranging from 200 to 445, which may make it difficult to draw firm conclusions. Finally, the inclusion of only English-language literature may introduce a language bias. 6. Conclusion: In conclusion, despite the above limitations, this meta-analysis is the first systematic review concerning the prognostic value of the PNI for GISTs. The pooled results demonstrate that an elevated PNI is associated with a favorable RFS in patients from China with GISTs. As a parameter of immunonutritional status, the PNI may be helpful not only for pretreatment evaluation but also for guiding early nutritional and immunological intervention, although well-designed multi-regional studies not limited to China are required to confirm this hypothesis.
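The leave-one-out sensitivity analysis reported in section 3.2 can be sketched the same way: re-pool after dropping each study in turn and check whether the pooled HR and I2 shift. Again, the inputs below are hypothetical placeholders rather than the published study data, and the pooling helper is a condensed version of the DerSimonian–Laird sketch shown earlier.

```python
import math

def dl_pool(hrs, lowers, uppers, z=1.96):
    """Condensed DerSimonian-Laird pooling; returns (pooled HR, I2 percent)."""
    y = [math.log(h) for h in hrs]
    se = [(math.log(u) - math.log(l)) / (2 * z) for l, u in zip(lowers, uppers)]
    w = [1 / s**2 for s in se]
    y_fe = sum(a * b for a, b in zip(w, y)) / sum(w)
    q = sum(a * (b - y_fe) ** 2 for a, b in zip(w, y))
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (sum(w) - sum(a * a for a in w) / sum(w)))
    w_re = [1 / (s * s + tau2) for s in se]
    return math.exp(sum(a * b for a, b in zip(w_re, y)) / sum(w_re)), i2

# Hypothetical per-study data: (label, HR, CI low, CI high) placeholders
studies = [("Study A", 0.45, 0.30, 0.68), ("Study B", 0.55, 0.35, 0.86),
           ("Study C", 0.30, 0.15, 0.60), ("Study D", 0.60, 0.40, 0.90),
           ("Study E", 0.50, 0.30, 0.83), ("Study F", 0.70, 0.45, 1.09),
           ("Study G", 0.40, 0.22, 0.73), ("Study H", 0.52, 0.33, 0.82)]

# Omit each study in turn and re-pool the remainder
for omitted in studies:
    rest = [s for s in studies if s is not omitted]
    names, hrs, lo, hi = zip(*rest)
    pooled, i2 = dl_pool(list(hrs), list(lo), list(hi))
    print(f"omit {omitted[0]:8s}: pooled HR {pooled:.2f}, I2 {i2:4.0f}%")
```

A study whose omission collapses I2 toward 0 while leaving the pooled HR essentially unchanged, as reported here for Li et al,[12] is flagged as the source of heterogeneity without being influential on the pooled estimate itself.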
Background: Risk assessment before treatment is important for gastrointestinal stromal tumors (GISTs), as it determines the priority of surgery versus preoperative treatment. The prognostic nutritional index (PNI) is an integrated parameter consisting of serum albumin and lymphocyte count. Immunonutritional status defined in this manner is well known to be closely linked to the prognosis of several other cancers. Nevertheless, the prognostic value of the PNI specifically in GISTs has not been well established. This study aimed to verify the prognostic role of the PNI in patients with GISTs. Methods: A comprehensive literature search of medical databases was conducted up to June 2022, and the raw data (hazard ratios and 95% confidence intervals [CIs]) on the prognostic value of the PNI for recurrence-free survival in patients with GISTs were extracted and synthesized using a random-effects model. This review was registered in the PROSPERO database (CRD42022345440). Results: A total of 8 eligible studies including 2627 patients with GISTs were analyzed, and the pooled results confirmed that an elevated PNI was associated with better recurrence-free survival (hazard ratio: 0.52, 95% CI: 0.40-0.68), with moderate heterogeneity (I2, 38%). The findings from the subgroup analyses were consistent with the overall pooled results, and the source of heterogeneity was identified by sensitivity analysis rather than by subgroup analysis. Conclusions: An elevated pretreatment PNI may be a useful indicator for assessing the risk of recurrence in patients from China with GISTs. Studies in other countries and regions are needed to further verify the prognostic value of the PNI in GISTs.
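Neither the abstract nor the article body spells out how the PNI itself is calculated. A common convention in this literature is Onodera's formula, PNI = 10 × serum albumin (g/dL) + 0.005 × total lymphocyte count (per mm³); treat that exact formula and the roughly 47.5 cutoff (around which half of the included studies clustered, per the limitations section) as assumptions rather than values taken from any single included study. A minimal sketch:

```python
def prognostic_nutritional_index(albumin_g_dl: float, lymphocytes_per_mm3: float) -> float:
    """Onodera's PNI (assumed formula):
    10 * serum albumin (g/dL) + 0.005 * total lymphocyte count (/mm^3)."""
    return 10.0 * albumin_g_dl + 0.005 * lymphocytes_per_mm3

# Example: albumin 4.0 g/dL and 1,600 lymphocytes/mm^3 give a PNI of 48.0,
# which would fall in the "elevated" group under an assumed 47.5 cutoff.
pni = prognostic_nutritional_index(4.0, 1600)
print(f"PNI = {pni:.1f}, elevated = {pni >= 47.5}")
```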
1. Introduction: Gastrointestinal stromal tumors (GISTs) are the most common gastrointestinal (GI) mesenchymal tumors. They harbor activating somatic mutations involving the tyrosine kinase receptor c-kit (CD117) and platelet-derived growth factor receptor-α, which are expressed by the interstitial cells of Cajal that control GI tract peristalsis.[1] Tyrosine kinase inhibitors (TKIs), exemplified by imatinib, have achieved substantial therapeutic benefit in the treatment of GISTs.[2] Complete excision is recommended for primary resectable GISTs without significant risk of recurrence; otherwise, targeted therapy should be considered the preferred treatment option based on risk assessment, as emphasized by the first guidelines devoted specifically to GISTs, recently published by the National Comprehensive Cancer Network.[3] Tumor location, size, mitotic index, and tumor rupture are convincingly incorporated into the assessment of the potentially malignant biological behavior of GISTs,[4–6] but it remains difficult to accurately assess the risk of recurrence without pathological assessment. Hence, other easily accessible and effective indicators are needed for predicting the likelihood of recurrence. Nutritional status is closely connected to both tumor progression and prognosis.[7] Malnutrition usually indicates a worse clinical outcome in most cancers, and several prognosis-related nutritional parameters have been identified and shown to be effective predictors of prognosis, such as the Nutritional Risk Screening 2002, the Patient-Generated Subjective Global Assessment, and the prognostic nutritional index (PNI).[8–18] The PNI was originally proposed by Buzby in 1980 and applied by Onodera in 1984 to predict surgical risk in GI malignancy. Owing to its convenience and efficiency, the PNI was rapidly tested in other types of cancers as well, including GISTs.[8,11–22] The population sample sizes in reports predicting the recurrence risk of GISTs based on the PNI were usually small, making it difficult to draw convincing conclusions. However, according to the National Comprehensive Cancer Network guidelines, effective preoperative assessment of postoperative recurrence risk is particularly important for GISTs, because it determines the priority of surgical treatment versus preoperative TKI treatment.[3] Although other meta-analyses have previously examined the prognostic value of the PNI in most tumors,[21,22] these earlier studies did not clearly differentiate GISTs from GI epithelial cancers by pathology. In fact, unlike those of GI epithelial cancers, the existing risk assessment criteria and prognostic parameters for GISTs are self-contained and still developing.[6,13] Accordingly, we conducted a meta-analysis to verify the prognostic significance of the PNI in GISTs. This is the first meta-analysis in this field. 6. Conclusion: Conceptualization: Tingyong Tang. Data curation: Chunlin Mo, Tingyong Tang. Formal analysis: Zhenjie Li, Dengming Zhang, Xiaoxi Fan. Investigation: Zhenjie Li, Dengming Zhang, Chunlin Mo. Methodology: Zhenjie Li, Tingyong Tang. Resources: Tingyong Tang. Software: Zhenjie Li, Dengming Zhang, Peijin Zhu. Supervision: Tingyong Tang. Validation: Zhenjie Li, Dengming Zhang, Tingyong Tang. Visualization: Zhenjie Li. Writing – original draft: Zhenjie Li. Writing – review & editing: Tingyong Tang.
Background: Risk assessment before treatment is important for gastrointestinal stromal tumors (GISTs), as it determines the priority of surgery versus preoperative treatment. The prognostic nutritional index (PNI) is an integrated parameter consisting of serum albumin and lymphocyte count. Immunonutritional status defined in this manner is well known to be closely linked to the prognosis of several other cancers. Nevertheless, the prognostic value of the PNI specifically in GISTs has not been well established. This study aimed to verify the prognostic role of the PNI in patients with GISTs. Methods: A comprehensive literature search of medical databases was conducted up to June 2022, and the raw data (hazard ratios and 95% confidence intervals [CIs]) on the prognostic value of the PNI for recurrence-free survival in patients with GISTs were extracted and synthesized using a random-effects model. This review was registered in the PROSPERO database (CRD42022345440). Results: A total of 8 eligible studies including 2627 patients with GISTs were analyzed, and the pooled results confirmed that an elevated PNI was associated with better recurrence-free survival (hazard ratio: 0.52, 95% CI: 0.40-0.68), with moderate heterogeneity (I2, 38%). The findings from the subgroup analyses were consistent with the overall pooled results, and the source of heterogeneity was identified by sensitivity analysis rather than by subgroup analysis. Conclusions: An elevated pretreatment PNI may be a useful indicator for assessing the risk of recurrence in patients from China with GISTs. Studies in other countries and regions are needed to further verify the prognostic value of the PNI in GISTs.
5,964
310
[ 134, 123, 149, 229, 1604, 529, 109, 94 ]
12
[ "studies", "gists", "pni", "analysis", "value", "heterogeneity", "i2", "95", "rfs", "recurrence" ]
[ "gi malignancy convenience", "gists gastrointestinal stromal", "gi cancers 30", "gi mesenchymal tumors", "gastrointestinal stromal tumors" ]
[CONTENT] gastrointestinal stromal tumors (GISTs) | prognostic nutritional index (PNI) | Recurrence-free survival (RFS) [SUMMARY]
[CONTENT] gastrointestinal stromal tumors (GISTs) | prognostic nutritional index (PNI) | Recurrence-free survival (RFS) [SUMMARY]
[CONTENT] gastrointestinal stromal tumors (GISTs) | prognostic nutritional index (PNI) | Recurrence-free survival (RFS) [SUMMARY]
[CONTENT] gastrointestinal stromal tumors (GISTs) | prognostic nutritional index (PNI) | Recurrence-free survival (RFS) [SUMMARY]
[CONTENT] gastrointestinal stromal tumors (GISTs) | prognostic nutritional index (PNI) | Recurrence-free survival (RFS) [SUMMARY]
[CONTENT] gastrointestinal stromal tumors (GISTs) | prognostic nutritional index (PNI) | Recurrence-free survival (RFS) [SUMMARY]
[CONTENT] Humans | Prognosis | Gastrointestinal Stromal Tumors | Nutrition Assessment | Lymphocyte Count | China [SUMMARY]
[CONTENT] Humans | Prognosis | Gastrointestinal Stromal Tumors | Nutrition Assessment | Lymphocyte Count | China [SUMMARY]
[CONTENT] Humans | Prognosis | Gastrointestinal Stromal Tumors | Nutrition Assessment | Lymphocyte Count | China [SUMMARY]
[CONTENT] Humans | Prognosis | Gastrointestinal Stromal Tumors | Nutrition Assessment | Lymphocyte Count | China [SUMMARY]
[CONTENT] Humans | Prognosis | Gastrointestinal Stromal Tumors | Nutrition Assessment | Lymphocyte Count | China [SUMMARY]
[CONTENT] Humans | Prognosis | Gastrointestinal Stromal Tumors | Nutrition Assessment | Lymphocyte Count | China [SUMMARY]
[CONTENT] gi malignancy convenience | gists gastrointestinal stromal | gi cancers 30 | gi mesenchymal tumors | gastrointestinal stromal tumors [SUMMARY]
[CONTENT] gi malignancy convenience | gists gastrointestinal stromal | gi cancers 30 | gi mesenchymal tumors | gastrointestinal stromal tumors [SUMMARY]
[CONTENT] gi malignancy convenience | gists gastrointestinal stromal | gi cancers 30 | gi mesenchymal tumors | gastrointestinal stromal tumors [SUMMARY]
[CONTENT] gi malignancy convenience | gists gastrointestinal stromal | gi cancers 30 | gi mesenchymal tumors | gastrointestinal stromal tumors [SUMMARY]
[CONTENT] gi malignancy convenience | gists gastrointestinal stromal | gi cancers 30 | gi mesenchymal tumors | gastrointestinal stromal tumors [SUMMARY]
[CONTENT] gi malignancy convenience | gists gastrointestinal stromal | gi cancers 30 | gi mesenchymal tumors | gastrointestinal stromal tumors [SUMMARY]
[CONTENT] studies | gists | pni | analysis | value | heterogeneity | i2 | 95 | rfs | recurrence [SUMMARY]
[CONTENT] studies | gists | pni | analysis | value | heterogeneity | i2 | 95 | rfs | recurrence [SUMMARY]
[CONTENT] studies | gists | pni | analysis | value | heterogeneity | i2 | 95 | rfs | recurrence [SUMMARY]
[CONTENT] studies | gists | pni | analysis | value | heterogeneity | i2 | 95 | rfs | recurrence [SUMMARY]
[CONTENT] studies | gists | pni | analysis | value | heterogeneity | i2 | 95 | rfs | recurrence [SUMMARY]
[CONTENT] studies | gists | pni | analysis | value | heterogeneity | i2 | 95 | rfs | recurrence [SUMMARY]
[CONTENT] risk | gists | gi | cancers | assessment | effective | recurrence | pni | tumor | national comprehensive cancer network [SUMMARY]
[CONTENT] i2 | criteria | studies | heterogeneity | criteria studies | version | software | software version | 25 | quality [SUMMARY]
[CONTENT] 11 | studies | gists | search | articles | patients | gists treated | 15 | treated | characteristics [SUMMARY]
[CONTENT] china | pni | multi | immunological intervention | immunological | rfs patients | rfs patients china | rfs patients china gists | hypothesis | meta analysis systematic [SUMMARY]
[CONTENT] gists | studies | pni | i2 | heterogeneity | analysis | value | patients | rfs | 95 [SUMMARY]
[CONTENT] gists | studies | pni | i2 | heterogeneity | analysis | value | patients | rfs | 95 [SUMMARY]
[CONTENT] ||| ||| ||| PNI ||| PNI [SUMMARY]
[CONTENT] June, 2022 | 95% | PNI ||| PROSPERO [SUMMARY]
[CONTENT] 8 | 2627 | PNI | 0.52 | 95% | CI | 0.40 | 38% ||| [SUMMARY]
[CONTENT] PNI | China ||| PNI [SUMMARY]
[CONTENT] ||| ||| ||| PNI ||| PNI ||| June, 2022 | 95% | PNI ||| PROSPERO ||| ||| 8 | 2627 | PNI | 0.52 | 95% | CI | 0.40 | 38% ||| ||| PNI | China ||| PNI [SUMMARY]
[CONTENT] ||| ||| ||| PNI ||| PNI ||| June, 2022 | 95% | PNI ||| PROSPERO ||| ||| 8 | 2627 | PNI | 0.52 | 95% | CI | 0.40 | 38% ||| ||| PNI | China ||| PNI [SUMMARY]
Hepatitis C virus screening practices and seropositivity among US veterans born during 1945 - 1965.
25023159
The Centers for Disease Control and Prevention (CDC) and the United States Preventive Services Task Force (USPSTF) recently augmented risk-based hepatitis C (HCV) screening guidelines with a recommendation to perform one-time screening in all persons born during 1945 - 1965, a birth cohort known to have a higher prevalence of HCV. We sought to estimate the proportion of veterans seen at the Atlanta VA Medical Center (AVAMC) who had ever been screened for HCV infection by birth year.
BACKGROUND
We used an administrative database of all veterans seen at the AVAMC between January 1, 2011 and December 31, 2011, and a laboratory generated list of all HCV antibody tests and HCV RNA viral loads that were performed at the AVAMC to determine receipt of screening and HCV antibody positivity. Odds ratios and 95% confidence intervals were estimated using SAS version 9.2 (SAS institute, Cary, North Carolina).
METHODS
HCV antibody testing had ever been performed on 48% (41,556) of the veterans seen in 2011; 10% of those tested had a positive antibody. Confirmatory viral loads were performed in 96% of those with a positive antibody screen. Those born during 1945 - 1965 were more likely to have had an HCV antibody test performed when compared with those born in other years (54% vs. 41%, odds ratio [OR] 1.70, 95% confidence interval [CI] 1.65-1.74). Among veterans ever tested for HCV antibody (n = 41,556), those born during 1945 - 1965 were 6 times more likely to have a positive HCV antibody (15% vs. 3%, OR 5.87, 95% CI 5.32-6.78), and 3 times more likely to have chronic HCV infection (76% vs. 50%, OR 3.25, 95% CI 2.65-4.00).
RESULTS
Nearly half of the veterans seen in 2011 at the AVAMC had ever been tested for HCV infection. When examined by birth cohort, over half of the veterans born during 1945 - 1965 had been screened for HCV and 15% of those screened had a positive HCV antibody. Our findings confirm the increased prevalence of HCV infection in persons born during 1945 - 1965 as identified in the updated CDC and USPSTF recommendations.
CONCLUSIONS
[ "Aged", "Female", "Hepacivirus", "Hepatitis C Antibodies", "Hepatitis C, Chronic", "Humans", "Male", "Mass Screening", "Middle Aged", "Odds Ratio", "Prevalence", "Risk Factors", "United States", "Veterans" ]
4105869
Background
In 1990, serologic tests that detect antibodies to the hepatitis C virus (HCV) became commercially available in the United States. The Centers for Disease Control and Prevention (CDC) published guidelines on screening blood donors for HCV infection in 1991 and recommendations for HCV screening in persons with high risk behaviors in 1998 [1,2] (Figure  1). The US Veterans Health Administration (VHA) implemented HCV screening guidelines in 2001 that not only recommended screening veterans with the risk factors described in the CDC guidelines, but also screening any veteran with immoderate use of alcohol, a history of tattoos or repeated body piercing, intranasal cocaine use, multiple sexual partners, or Vietnam-era military service (i.e. dates of active military service between August 5, 1964 and May 7, 1975) [3]. In 2002, VHA introduced an electronic clinical reminder for HCV screening. In August 2012, CDC augmented risk-based HCV screening guidelines with a recommendation to perform one-time HCV screening in all persons born during 1945 – 1965, a birth cohort known to have a higher risk of having HCV infection [4,5]. In guidance published in June 2013, the United States Preventive Services Task Force also recommended one-time HCV screening of adults in this birth cohort [6]. Because this birth cohort overlaps with Vietnam-era veterans, we sought to estimate the proportion of veterans seen at the Atlanta VA Medical Center (AVAMC) who had ever been screened for HCV infection by birth year. Total number of hepatitis C antibody tests performed and percent of tests that are positive, by year — Atlanta VA Medical Center, Atlanta, GA, USA. 1992 – 2011 (n = 91,240).
Methods
We used an administrative database that contained the date of birth, gender, and race of veterans seen as inpatients or outpatients at the AVAMC between January 1, 2011 and December 31, 2011. All laboratory results were obtained from the AVAMC and included HCV antibody tests that were performed between 1992 and 2011 and all available HCV RNA viral loads. For veterans with more than one HCV antibody test, the HCV antibody status was considered positive if any HCV antibody test was positive. Any detectable value on either a quantitative or qualitative RNA viral load was considered positive. Continuous variables were compared using the Wilcoxon rank-sum test. Odds ratios and 95% confidence intervals were estimated using SAS version 9.2 (SAS institute, Cary, North Carolina). A P value of ≤ .05 was considered statistically significant. This study was approved by the Emory University Institutional Review Board and the VA Research and Development Committee. Informed consent was waived under a full HIPAA waiver.
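The pre- versus post-2001 comparison of annual testing volumes reported in the Results (P = 0.002) is a Wilcoxon rank-sum comparison of two small samples of yearly counts; this test is equivalent to the Mann–Whitney U test. A minimal sketch using hypothetical yearly counts rather than the AVAMC data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical annual HCV antibody test counts (placeholders, not the AVAMC data):
# nine years before the 2001 VHA screening push, eleven years after it.
pre_2001  = [320, 510, 640, 700, 1450, 980, 450, 610, 830]
post_2001 = [6100, 7400, 8200, 5900, 7100, 8500, 6900, 7600, 7300, 6500, 7000]

# Two-sided Wilcoxon rank-sum / Mann-Whitney U comparison of the yearly counts
stat, p = mannwhitneyu(pre_2001, post_2001, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, two-sided P = {p:.4f}")
```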
Results
From 1992 through 2011, 91,240 HCV antibody tests were completed on 67,539 veterans at the AVAMC (Figure 1). Before enhanced VHA screening efforts began in 2001, a median of 642 HCV antibody tests were done per year (interquartile range [IQR]: 329 – 1,451); since 2001, a median of 7,356 HCV antibody tests (IQR: 5,949 – 8,500) were done per year (p = 0.002). In 2011, 87,144 veterans were seen at the AVAMC; data on age and sex were available in this database, but information on race was largely missing (91% had a missing or unknown race) (Table 1). Over 50% of veterans seen were between 50 and 69 years of age. While 89% of all veterans seen in 2011 were male, when examined by birth cohort, men made up 73% of those born after 1965. HCV antibody testing had ever been performed on 48% (41,556) of the veterans (Table 2). Of those who had antibody testing, 49% (20,396) were African American, 37% (15,343) were Caucasian, and 1% (390) were of “other” race (including Asian, Native Hawaiian, Pacific Islander, American Indian, or Alaska Native); race was unknown in 13% (5,427) (data not shown). HCV antibody was positive in 10% (4,107) of those tested (Table 2). Of those with a positive HCV antibody, confirmatory RNA viral loads were performed in 96% (3,944). Chronic hepatitis C (i.e., a positive HCV antibody and a detectable RNA viral load) was identified in 73% (3,004). Gender (by birth cohort), age, and race of veterans seen in 2011 — Atlanta VA Medical Center (n = 87,144) Performance and positivity of hepatitis C antibody and RNA viral load tests for veterans seen in 2011, by birth cohort - Atlanta VA Medical Center (n=87,144) Note: HCV – Hepatitis C virus, AB – Antibody, no. - Number. *Percent is calculated using the HCV Ab positive number as the denominator. When the veterans seen in 2011 were classified by birth year, 27% were born before 1945, 54% were born during 1945 – 1965, and 19% were born after 1965. Among those born before 1945, 35% (8,378) had ever been tested for HCV antibody; of those tested, 4% (335) were HCV antibody positive, and 56% (189) of those with a positive HCV antibody had confirmed, chronic HCV infection (Table 2). Among those born during 1945 – 1965, 54% (25,097) had ever been tested; 15% (3,644) were HCV antibody positive, and 76% (2,775) of those with a positive antibody had confirmed, chronic infection. Among those born after 1965, 48% (8,081) had been tested and 2% (128) were HCV antibody positive; 31% (40) of those with a positive HCV antibody had confirmed chronic HCV viremia. Over 97% of those with chronic HCV viremia were male; among those born after 1965, 90% of those with chronic HCV viremia were male (data not shown). Those born during 1945 – 1965 were more likely to have had an HCV antibody test performed when compared with those born in other years (54% vs. 41%, odds ratio [OR] 1.70, 95% confidence interval [CI] 1.65-1.74). Among veterans ever tested for HCV antibody, those born during 1945 – 1965 were 6 times more likely to have a positive HCV antibody (15% vs. 3%, OR 5.87, 95% CI 5.32-6.78) and 3 times more likely to have confirmed, chronic HCV infection (76% vs. 50%, OR 3.25, 95% CI 2.65-4.00).
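The odds ratios quoted above come from standard 2 × 2 contingency tables. A minimal sketch of the log-odds (Woolf) method follows, using the antibody-positivity counts reported in this section; the point estimate reproduces the reported OR of 5.87, though the Woolf confidence interval differs slightly from the interval quoted in the text, which was computed in SAS and may use a different method.

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI for a 2x2 table:
    exposed:   a events, b non-events
    unexposed: c events, d non-events
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# Counts from the results above: among veterans ever tested, 3,644 of 25,097
# born during 1945-1965 were antibody positive, versus 463 of 16,459 born in
# other years (8,378 + 8,081 tested; 335 + 128 positive).
or_, lo, hi = odds_ratio(3644, 25097 - 3644, 463, 16459 - 463)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~5.87 (5.32-6.48)
```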
Conclusions
The enhanced screening efforts undertaken at the AVAMC in 2001 resulted in a significant increase in the annual number of HCV antibody tests performed. The diagnosis of chronic HCV infection allows for prevention interventions such as alcohol counseling, vaccination against hepatitis A and B, screening for advanced liver fibrosis, and referral for antiviral therapy. HCV antiviral management is rapidly evolving with the development of better tolerated and more efficacious therapies [27]. Achieving a sustained virologic response to antiviral therapy is known to significantly reduce the risk of liver failure [28], hepatocellular carcinoma [29], and liver-related and all-cause mortality [30,31], and to reduce the risk of further HCV transmission. We found that many veterans born during 1945 – 1965 had been screened for HCV infection at the AVAMC even before the augmented CDC screening guidelines were published in 2012. However, given the high prevalence of disease in this birth cohort and the importance of HCV detection, continued screening practices that target this birth cohort are warranted.
[ "Background", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "In 1990, serologic tests that detect antibodies to the hepatitis C virus (HCV) became commercially available in the United States. The Centers for Disease Control and Prevention (CDC) published guidelines on screening blood donors for HCV infection in 1991 and recommendations for HCV screening in persons with high risk behaviors in 1998\n[1,2] (Figure \n1). The US Veterans Health Administration (VHA) implemented HCV screening guidelines in 2001 that not only recommended screening veterans with the risk factors described in the CDC guidelines, but also screening any veteran with immoderate use of alcohol, a history of tattoos or repeated body piercing, intranasal cocaine use, multiple sexual partners, or Vietnam-era military service (i.e. dates of active military service between August 5, 1964 and May 7, 1975)\n[3]. In 2002, VHA introduced an electronic clinical reminder for HCV screening. In August 2012, CDC augmented risk-based HCV screening guidelines with a recommendation to perform one-time HCV screening in all persons born during 1945 – 1965, a birth cohort known to have a higher risk of having HCV infection\n[4,5]. In guidance published in June 2013, the United States Preventive Services Task Force also recommended one-time HCV screening of adults in this birth cohort\n[6]. Because this birth cohort overlaps with Vietnam-era veterans, we sought to estimate the proportion of veterans seen at the Atlanta VA Medical Center (AVAMC) who had ever been screened for HCV infection by birth year.\nTotal number of hepatitis C antibody tests performed and percent of tests that are positive, by year — Atlanta VA Medical Center, Atlanta, GA, USA. 1992 – 2011 (n = 91,240).", "VHA: Veterans health administration; HCV: Hepatitis C virus; RNA: Ribonucleic acid; HIPAA: Health insurance portability and accountability act; OR: Odds ratio; CI: Confidence interval.", "The authors declare that they have no competing interests.", "EC conceived and designed the study, performed data abstraction and analysis, and drafted the manuscript. CR provided input on study design, data analysis, data interpretation, statistical testing, and provided critical review of the manuscript. DR assisted with study design, obtained data, and provided input on data analysis and interpretation, and critically reviewed the manuscript. All authors read and approved the final manuscript." ]
[ null, null, null, null ]
[ "Background", "Methods", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "In 1990, serologic tests that detect antibodies to the hepatitis C virus (HCV) became commercially available in the United States. The Centers for Disease Control and Prevention (CDC) published guidelines on screening blood donors for HCV infection in 1991 and recommendations for HCV screening in persons with high risk behaviors in 1998\n[1,2] (Figure \n1). The US Veterans Health Administration (VHA) implemented HCV screening guidelines in 2001 that not only recommended screening veterans with the risk factors described in the CDC guidelines, but also screening any veteran with immoderate use of alcohol, a history of tattoos or repeated body piercing, intranasal cocaine use, multiple sexual partners, or Vietnam-era military service (i.e. dates of active military service between August 5, 1964 and May 7, 1975)\n[3]. In 2002, VHA introduced an electronic clinical reminder for HCV screening. In August 2012, CDC augmented risk-based HCV screening guidelines with a recommendation to perform one-time HCV screening in all persons born during 1945 – 1965, a birth cohort known to have a higher risk of having HCV infection\n[4,5]. In guidance published in June 2013, the United States Preventive Services Task Force also recommended one-time HCV screening of adults in this birth cohort\n[6]. Because this birth cohort overlaps with Vietnam-era veterans, we sought to estimate the proportion of veterans seen at the Atlanta VA Medical Center (AVAMC) who had ever been screened for HCV infection by birth year.\nTotal number of hepatitis C antibody tests performed and percent of tests that are positive, by year — Atlanta VA Medical Center, Atlanta, GA, USA. 1992 – 2011 (n = 91,240).", "We used an administrative database that contained the date of birth, gender, and race of veterans seen as inpatients or outpatients at the AVAMC between January 1, 2011 and December 31, 2011. All laboratory results were obtained from the AVAMC and included HCV antibody tests that were performed between 1992 and 2011 and all available HCV RNA viral loads. For veterans with more than one HCV antibody test, the HCV antibody status was considered positive if any HCV antibody test was positive. Any detectable value on either a quantitative or qualitative RNA viral load was considered positive. Continuous variables were compared using the Wilcoxon rank-sum test. Odds ratios and 95% confidence intervals were estimated using SAS version 9.2 (SAS institute, Cary, North Carolina). A P value of ≤ .05 was considered statistically significant. This study was approved by the Emory University Institutional Review Board and the VA Research and Development Committee. Informed consent was waived under a full HIPAA waiver.", "From 1992 through 2011, 91,240 HCV antibody tests were completed on 67,539 veterans at the AVAMC (Figure \n1). Before enhanced VHA screening efforts began in 2001, a median of 642 HCV antibody tests were done per year (interquartile range [IQR]: 329 – 1,451); since 2001, a median of 7,356 HCV antibody tests (IQR: 5,949 – 8,500) were done per year (p = 0.002).\nIn 2011, 87,144 veterans were seen at the AVAMC; data on age and sex was available in this database but information on race was largely missing (91% had a missing or unknown race) (Table \n1). Over 50% of veterans seen were between 50 and 69 years of age. While 89% of all veterans seen in 2011 are male, when examined by birth cohort, men made up 73% of those born after 1965. HCV antibody testing had ever been performed on 48% (41,556) of the veterans (Table \n2). 
Of those who had antibody testing, 49% (20,396) are African American, 37% (15,343) are Caucasian, 1% (390) are “other” race (includes Asian, Native Hawaiian, Pacific Islander, American Indian, or Alaska Native); race is unknown in 13% (5,427) (data not shown). HCV antibody was positive in 10% (4,107) of those tested (Table \n2). Of those with a positive HCV antibody, confirmatory RNA viral loads were performed in 96% (3,944). Chronic hepatitis C (i.e., a positive HCV antibody and a detectable RNA viral load) was identified in 73% (3,004).\nGender (by birth cohort), age, and race of veterans seen in 2011 — Atlanta VA Medical Center (n = 87,144)\nPerformance and positivity of hepatitis C antibody and RNA viral load tests for veterans seen in 2011, by birth cohort - Atlanta VA Medical Center (n=87,144)\nNote: HCV – Hepatitis C virus, AB – Antibody, no. - Number.\n*Percent is calculated using the HCV Ab positive number as the denominator.\nWhen the veterans seen in 2011 were classified by birth year, 27% were born before 1945, 54% were born during 1945 – 1965, and 19% were born after 1965. Among those born before 1945, 35% (8,378) were ever tested for HCV antibody; of those tested, 4% (335) were HCV antibody positive, and 56% (189) of those with a positive HCV antibody had confirmed, chronic HCV infection (Table \n2). Among those born during 1945 – 1965, 54% (25,097) had ever been tested; 15% (3,644) were HCV antibody positive, and 76% (2,775) of those with a positive antibody had confirmed, chronic infection. In those born after 1965, 48% (8,081) had been tested and 2% (128) were HCV antibody positive; 31% (40) of those with a positive HCV antibody had confirmed chronic HCV viremia. Over 97% with chronic HCV viremia are male; among those born after 1965, 90% of those with chronic HCV viremia are male (data not shown). Those born during 1945 – 1965 were more likely to have a HCV antibody performed when compared with those born in other years (54% vs. 41%, odds ratio [OR] 1.70, 95% Confidence Interval [CI] 1.65-1.74). Among veterans ever tested for HCV antibody, those born during 1945 – 1965 were 6 times more likely to have a positive HCV antibody (15% vs. 3%, OR 5.87, 95% CI 5.32-6.78) and 3 times more likely to have confirmed, chronic HCV infection (76% vs. 50%, OR 3.25, 95% CI 2.65-4.00).", "We found that nearly 50% of the veterans seen at the AVAMC in 2011 had received HCV antibody screening. Similarly, an analysis of National VHA HCV screening practices found that 53% of veterans with at least one outpatient visit at any VA clinic in 2011 had received HCV screening\n[7]. Civilian primary care settings report HCV screening rates of 1 – 8%\n[8-10] but interventions designed to enhance HCV screening have been shown to increase screening in high risk persons to 40%\n[5,10]. It is likely that the enhanced HCV screening recommendations for veterans and the electronic clinical reminder contributed to the higher HCV screening practices observed in our population and other VHA settings.\nAfter accounting for untested veterans, the overall HCV prevalence among veterans seen at the AVAMC in 2011 is between 5% and 10%. While this estimate is comparable to published estimates at other VHA settings and the national VA estimate\n[7,11-14], it is much higher than the estimated HCV prevalence of 1.6% obtained from a national civilian survey\n[15]. Even among those born during 1945 – 1965, the HCV tested prevalence is higher in the veteran population (15%) compared with the civilian population (3%)\n[15]. 
Although the higher HCV prevalence in veterans has been well described, reasons for the higher burden are not fully understood and are likely multifactorial. Veterans born during 1945 – 1965 were more likely to be screened for HCV, to have a positive HCV antibody, and to develop chronic HCV infection. The VHA recommendation to perform HCV screening in Vietnam-era veterans likely explains the higher HCV screening seen in those born during 1945 – 1965. Our finding confirms the higher HCV prevalence in those born during 1945 – 1965, supporting the birth cohort screening recommendations of CDC and USPSTF. While it is not fully understood why HCV prevalence is higher in this birth cohort, it has been hypothesized that it reflects incident HCV infections acquired through experimental intravenous illicit drug use during the 1970s and 1980s [16,17]. Additionally, higher HCV prevalence has been observed in similar birth cohorts outside of the United States, including in Scotland [18], England [19], and Cameroon [20]. Lastly, we found that those born during 1945 – 1965 were at increased risk of having chronic HCV infection (i.e., having a positive antibody and a positive viral load). Among those born after 1965, only 31% of those with a positive HCV antibody also had a positive viral load. There are two possible explanations for having a positive HCV antibody and a negative viral load: a false-positive HCV antibody or clearance of HCV viremia. While it is surprising that relatively few persons born after 1965 had evidence of chronic HCV infection, it is notable that there are more females in this cohort. Female sex has been associated with clearance of HCV infection in prospective studies of acute HCV [21-24]. While complex host and pathogen factors likely influence the development of chronic HCV infection, further exploration of the association between birth year and the development of chronic HCV infection is warranted. Confirmatory HCV RNA viral loads were performed in 96% of the HCV antibody positive persons. This high confirmatory testing rate was also seen in an analysis of national VA data [7]. In contrast, surveillance for HCV infections from eight civilian US sites found that confirmatory RNA viral loads were only performed in 50% of positive HCV antibody tests [25]. By reflexively performing HCV RNA viral load testing on positive HCV antibody tests, the AVAMC testing practices are consistent with the current CDC guidelines, which recommend RNA viral loads on all reactive HCV antibody tests [26]. Our analysis was limited to testing performed at the AVAMC and did not include testing done at other VA hospitals or outside of the VA system. However, test results from additional sources would likely only increase the screening estimates in our analysis. Because this analysis used administrative databases, information on HCV behavioral risk factors, medical comorbidities, and the rationale used by providers for HCV screening is not known. Information on race was missing for most veterans seen at the AVAMC in 2011. Other administrative databases from the AVAMC show that 40% of veterans seen in 2011 were African American, 40% Caucasian, and 2% “other” race; race was unknown in 18%. CDC and USPSTF recommend HCV screening for all persons born during 1945 – 1965, regardless of race. The veteran population in our analysis is older than the general population in the United States [http://www.census.gov/2010census/].
Lastly, because this analysis included a predominantly male population, our findings may not be applicable in other settings." ]
[ null, "methods", "results", "discussion", "conclusions", null, null, null ]
[ "Hepatitis C", "Screening", "Prevention & control" ]
Background: The Centers for Disease Control and Prevention (CDC) and the United States Preventive Services Task Force (USPSTF) recently augmented risk-based hepatitis C (HCV) screening guidelines with a recommendation to perform one-time screening in all persons born during 1945 - 1965, a birth cohort known to have a higher prevalence of HCV. We sought to estimate the proportion of veterans seen at the Atlanta VA Medical Center (AVAMC) who had ever been screened for HCV infection by birth year. Methods: We used an administrative database of all veterans seen at the AVAMC between January 1, 2011 and December 31, 2011, and a laboratory-generated list of all HCV antibody tests and HCV RNA viral loads that were performed at the AVAMC to determine receipt of screening and HCV antibody positivity. Odds ratios and 95% confidence intervals were estimated using SAS version 9.2 (SAS Institute, Cary, North Carolina). Results: HCV antibody testing had ever been performed on 48% (41,556) of the veterans seen in 2011; 10% of those tested had a positive antibody. Confirmatory viral loads were performed in 96% of those with a positive antibody screen. Those born during 1945 - 1965 were more likely to have an HCV antibody test performed when compared with those born in other years (54% vs. 41%, odds ratio [OR] 1.70, 95% Confidence Interval [CI] 1.65-1.74). Among veterans ever tested for HCV antibody (n = 41,556), those born during 1945 - 1965 were 6 times more likely to have a positive HCV antibody (15% vs. 3%, OR 5.87, 95% CI 5.32-6.78), and 3 times more likely to have chronic HCV infection (76% vs. 50%, OR 3.25, 95% CI 2.65-4.00). Conclusions: Nearly half of the veterans seen in 2011 at the AVAMC had ever been tested for HCV infection. When examined by birth cohort, over half of the veterans born during 1945 - 1965 had been screened for HCV and 15% of those screened had a positive HCV antibody. Our findings confirm the increased prevalence of HCV infection in persons born during 1945 - 1965 as identified in the updated CDC and USPSTF recommendations.
2,535
434
[ 332, 36, 10, 75 ]
8
[ "hcv", "antibody", "hcv antibody", "screening", "veterans", "positive", "born", "1965", "hcv screening", "2011" ]
[ "providers hcv screening", "hcv screening adults", "report hcv screening", "hcv screening guidelines", "hcv screening civilian" ]
[CONTENT] Hepatitis C | Screening | Prevention & control [SUMMARY]
[CONTENT] Hepatitis C | Screening | Prevention & control [SUMMARY]
[CONTENT] Hepatitis C | Screening | Prevention & control [SUMMARY]
[CONTENT] Hepatitis C | Screening | Prevention & control [SUMMARY]
[CONTENT] Hepatitis C | Screening | Prevention & control [SUMMARY]
[CONTENT] Hepatitis C | Screening | Prevention & control [SUMMARY]
[CONTENT] Aged | Female | Hepacivirus | Hepatitis C Antibodies | Hepatitis C, Chronic | Humans | Male | Mass Screening | Middle Aged | Odds Ratio | Prevalence | Risk Factors | United States | Veterans [SUMMARY]
[CONTENT] Aged | Female | Hepacivirus | Hepatitis C Antibodies | Hepatitis C, Chronic | Humans | Male | Mass Screening | Middle Aged | Odds Ratio | Prevalence | Risk Factors | United States | Veterans [SUMMARY]
[CONTENT] Aged | Female | Hepacivirus | Hepatitis C Antibodies | Hepatitis C, Chronic | Humans | Male | Mass Screening | Middle Aged | Odds Ratio | Prevalence | Risk Factors | United States | Veterans [SUMMARY]
[CONTENT] Aged | Female | Hepacivirus | Hepatitis C Antibodies | Hepatitis C, Chronic | Humans | Male | Mass Screening | Middle Aged | Odds Ratio | Prevalence | Risk Factors | United States | Veterans [SUMMARY]
[CONTENT] Aged | Female | Hepacivirus | Hepatitis C Antibodies | Hepatitis C, Chronic | Humans | Male | Mass Screening | Middle Aged | Odds Ratio | Prevalence | Risk Factors | United States | Veterans [SUMMARY]
[CONTENT] Aged | Female | Hepacivirus | Hepatitis C Antibodies | Hepatitis C, Chronic | Humans | Male | Mass Screening | Middle Aged | Odds Ratio | Prevalence | Risk Factors | United States | Veterans [SUMMARY]
[CONTENT] providers hcv screening | hcv screening adults | report hcv screening | hcv screening guidelines | hcv screening civilian [SUMMARY]
[CONTENT] providers hcv screening | hcv screening adults | report hcv screening | hcv screening guidelines | hcv screening civilian [SUMMARY]
[CONTENT] providers hcv screening | hcv screening adults | report hcv screening | hcv screening guidelines | hcv screening civilian [SUMMARY]
[CONTENT] providers hcv screening | hcv screening adults | report hcv screening | hcv screening guidelines | hcv screening civilian [SUMMARY]
[CONTENT] providers hcv screening | hcv screening adults | report hcv screening | hcv screening guidelines | hcv screening civilian [SUMMARY]
[CONTENT] providers hcv screening | hcv screening adults | report hcv screening | hcv screening guidelines | hcv screening civilian [SUMMARY]
[CONTENT] hcv | antibody | hcv antibody | screening | veterans | positive | born | 1965 | hcv screening | 2011 [SUMMARY]
[CONTENT] hcv | antibody | hcv antibody | screening | veterans | positive | born | 1965 | hcv screening | 2011 [SUMMARY]
[CONTENT] hcv | antibody | hcv antibody | screening | veterans | positive | born | 1965 | hcv screening | 2011 [SUMMARY]
[CONTENT] hcv | antibody | hcv antibody | screening | veterans | positive | born | 1965 | hcv screening | 2011 [SUMMARY]
[CONTENT] hcv | antibody | hcv antibody | screening | veterans | positive | born | 1965 | hcv screening | 2011 [SUMMARY]
[CONTENT] hcv | antibody | hcv antibody | screening | veterans | positive | born | 1965 | hcv screening | 2011 [SUMMARY]
[CONTENT] screening | hcv screening | hcv | risk | guidelines | atlanta | birth | cdc | recommended | august [SUMMARY]
[CONTENT] considered | hcv antibody | test | hcv | antibody | value | hcv antibody test | antibody test | sas | considered positive [SUMMARY]
[CONTENT] hcv | antibody | hcv antibody | positive | born | 1965 | chronic | tested | veterans | 87 [SUMMARY]
[CONTENT] antiviral | liver | hcv | screening | antiviral therapy | therapy | risk | cohort | hcv infection | infection [SUMMARY]
[CONTENT] hcv | antibody | screening | hcv antibody | veterans | positive | hcv screening | authors | authors declare | competing interests [SUMMARY]
[CONTENT] hcv | antibody | screening | hcv antibody | veterans | positive | hcv screening | authors | authors declare | competing interests [SUMMARY]
[CONTENT] The Centers for Disease Control and Prevention | CDC | the United States Preventive Services | one | 1945 - 1965 | HCV ||| Atlanta VA Medical Center | AVAMC | birth year [SUMMARY]
[CONTENT] AVAMC | between January 1, 2011 and | December 31, 2011 | AVAMC ||| 95% | SAS | 9.2 | SAS | Cary | North Carolina [SUMMARY]
[CONTENT] 48% | 41,556 | 2011 | 10% ||| 96% ||| 1945 - 1965 | other years | 54% | 41% | 1.70 | 95% ||| CI ||| 41,556 | 1945 - 1965 | 6 | 15% | 3% | 5.87 | 95% | CI | 5.32 | 3 | 76% | 50% | 3.25 | 95% | CI | 2.65-4.00 [SUMMARY]
[CONTENT] Nearly half | 2011 | AVAMC ||| over half | 1945 - 1965 | HCV | 15% ||| 1945 - 1965 | CDC | USPSTF [SUMMARY]
[CONTENT] The Centers for Disease Control and Prevention | CDC | the United States Preventive Services | one | 1945 - 1965 | HCV ||| Atlanta VA Medical Center | AVAMC | birth year ||| AVAMC | between January 1, 2011 and | December 31, 2011 | AVAMC ||| 95% | SAS | 9.2 | SAS | Cary | North Carolina ||| ||| 48% | 41,556 | 2011 | 10% ||| 96% ||| 1945 - 1965 | other years | 54% | 41% | 1.70 | 95% ||| CI ||| 41,556 | 1945 - 1965 | 6 | 15% | 3% | 5.87 | 95% | CI | 5.32 | 3 | 76% | 50% | 3.25 | 95% | CI | 2.65-4.00 ||| Nearly half | 2011 | AVAMC ||| over half | 1945 - 1965 | HCV | 15% ||| 1945 - 1965 | CDC | USPSTF [SUMMARY]
[CONTENT] The Centers for Disease Control and Prevention | CDC | the United States Preventive Services | one | 1945 - 1965 | HCV ||| Atlanta VA Medical Center | AVAMC | birth year ||| AVAMC | between January 1, 2011 and | December 31, 2011 | AVAMC ||| 95% | SAS | 9.2 | SAS | Cary | North Carolina ||| ||| 48% | 41,556 | 2011 | 10% ||| 96% ||| 1945 - 1965 | other years | 54% | 41% | 1.70 | 95% ||| CI ||| 41,556 | 1945 - 1965 | 6 | 15% | 3% | 5.87 | 95% | CI | 5.32 | 3 | 76% | 50% | 3.25 | 95% | CI | 2.65-4.00 ||| Nearly half | 2011 | AVAMC ||| over half | 1945 - 1965 | HCV | 15% ||| 1945 - 1965 | CDC | USPSTF [SUMMARY]
Association of Knee Extensor Muscle Strength and Cardiorespiratory Fitness With Bone Stiffness in Japanese Adults: A Cross-sectional Study.
33840650
Knee extensor muscle strength and cardiorespiratory fitness (CRF) are major components of physical fitness. Because the interactive association of knee extensor muscle strength and CRF with bone health remains unclear, we aimed to investigate this association in Japanese adults.
BACKGROUND
Altogether, 8,829 Japanese adults (3,731 men and 5,098 women) aged ≥45 years completed the maximum voluntary knee extension test, submaximal exercise test, medical examination, and a questionnaire on lifestyle habits. Using the osteo-sono assessment index, low bone stiffness tendency was defined as a value below 80% of the young-adult mean. Multivariable odds ratios (ORs) and 95% confidence intervals (CIs) were calculated after confounder adjustment.
METHODS
Overall, 542 men (14.5%) and 978 women (19.2%) had low bone stiffness tendency. We observed an inverse association between muscle strength and low bone stiffness tendency after adjustment for CRF in both sexes (P for linear trend <0.001). Compared with the lowest CRF, the multivariable ORs for low bone stiffness tendency in the highest CRF were 0.47 (95% CI, 0.36-0.62) for men and 1.05 (95% CI, 0.82-1.35) for post-menopausal women (P < 0.001 and P = 0.704, respectively). No interactive association between muscle strength and CRF with respect to low bone stiffness tendency was observed in either sex, irrespective of menopausal status.
RESULTS
Knee extensor muscle strength and CRF were associated additively, not synergistically, with bone health. Maintaining high levels of both physical fitness components may improve musculoskeletal health in this cohort. The relationship between physical fitness and bone status should be investigated longitudinally in the future.
CONCLUSION
[ "Adult", "Male", "Female", "Humans", "Cardiorespiratory Fitness", "Cross-Sectional Studies", "Japan", "Muscle Strength", "Physical Fitness" ]
9643791
INTRODUCTION
Osteoporosis is characterized by low bone strength and an increased risk of bone fractures,1 which is highly associated with mortality.2,3 Moreover, as osteoporosis imposes a heavy economic burden on the gross domestic product,4 it is widely recognized as a serious public health concern not only in Japan but also in other aged societies. Currently, Japan faces one of the most serious situations in Asia5 because the estimated number of patients with osteoporosis exceeds 12.8 million (men: 3 million; women: 9.8 million).6 To minimize the detrimental effects of osteoporotic fractures on a patient’s quality of life, early detection of low bone strength and preventive interventions, preferably based on risk stratification, are strongly needed. The established risk factors for low bone strength are advanced age, female sex, genetic factors, low body weight, and physical inactivity, including low physical fitness represented by cardiorespiratory fitness (CRF) and/or grip strength.7,8 Compared to low CRF, high CRF corresponds to an odds ratio of 0.29 (95% confidence interval [CI], 0.12–0.71) for a femoral neck T-score ≤ −2.5.7 The hazard ratio for osteoporotic fracture per 5 kg reduction in hand grip strength was 1.49 (95% CI, 1.18–1.95) in a 10-year prospective cohort study.8 Maintaining high levels of physical fitness, assessed via CRF and/or hand grip strength, is a key factor for preventing low bone strength, which consequently lowers the risk of osteoporosis development. Grip strength is often used as an indicator of muscle strength because it is valid and reliable for whole-body muscle strength.9 However, its use is controversial because it does not capture the strength of antigravity muscles, such as the knee extensors.10 As knee extensor muscle strength predicts the risk of falls and mobility, it may be a better indicator of bone strength.11,12 To our knowledge, it remains unclear whether knee extensor muscle strength is beneficially associated with bone strength. Moreover, CRF is also a predictor of low bone strength.7 However, information on the association between knee extensor muscle strength and bone strength according to varying levels of CRF is limited. This study aimed to investigate the interactive associations of knee extensor muscle strength and CRF with bone stiffness as a marker of bone strength in Japanese adults. The findings from this cross-sectional study will allow us to design future longitudinal and clinical studies to investigate the preventive effect of various strategies for osteoporosis and possibly result in a decrease in osteoporotic fractures.
METHODS
Design, setting, and participants This study was a cross-sectional analysis of the association of knee extensor muscle strength and CRF with bone stiffness. The eligibility criteria were: i) Sport Program Service (SPS, explained below) participants between April 1998 and July 2019; ii) aged ≥45 years; and iii) participants who had undergone bone stiffness measurements. SPS is a comprehensive medical checkup program, primarily examining various domains of physical fitness, held at the Yokohama Sports Medical Center. The SPS was initiated in April 1998 to improve the health status of people living or working in Yokohama City. Participants voluntarily apply to the service through the center’s website, and public information is published by the local government. This service receives an average of 10 people daily, totaling about 1,500 people annually. Participants aged 18 to 65 years who lived or worked in Yokohama City paid 15,000 JPY (approximately $142 in 2020), and those aged >65 years paid 7,500 JPY ($71); participants neither living nor working in Yokohama City paid 17,000 JPY ($161) if aged <65 years and 8,500 JPY ($80) otherwise. All data were selected at a single timepoint when participants first joined the service. Prior to joining the SPS, all participants provided their informed consent for data use. This study was conducted according to the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Yokohama Sports Medical Center (K-2019-07). Measurements Medical checkup Height and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height² (kg/m²). After 5 minutes of sitting on a chair and a medical examination by a physician, resting blood pressure was measured by the Riva-Rocci/Korotkoff method using a mercury sphygmomanometer. Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with a Roche INTEGRA 400 plus analyzer (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.
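The checkup derives BMI as body weight divided by the square of height; a trivial sketch of that conversion (the function name and example values are illustrative, not study data):

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

print(f"{bmi(68.0, 170.0):.1f}")  # 23.5 for an illustrative 68 kg, 170 cm adult
```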
Bone stiffness and definition of low bone stiffness tendency The generally known method for assessing bone strength is dual-energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. Because DXA uses ionizing radiation to assess the bone, it has drawbacks, such as radiation exposure, cost, time requirements, limited access to sites where DXA is installed, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which is used to assess bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared to DXA, QUS results in better outcomes for the patients as it avoids exposure to radiation and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that correlates highly with other QUS devices and with DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed once per month to preserve the quality of the results. The measurement procedure was as follows: the participants sat barefoot on a chair with their knees bent at 90°. The device was placed on the right calcaneus, and gel was applied to the membranes. The measurement was performed once, taking approximately 30 seconds. Bone stiffness was expressed using the osteo-sono assessment index, which was calculated from the speed of sound and the transmission index15 and was highly reproducible.13 The young adult mean (YAM) was calculated from adults aged 20 to 44 years and set as 100%.16 Low bone stiffness tendency was defined as a value below 80% of the YAM (YAM80%), according to the 2011 Japanese guideline for prevention and treatment of osteoporosis—executive summary.6 Similarly, a 70% cutoff (YAM70%) was also calculated for sensitivity analysis.
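With an individual's osteo-sono assessment index (OSI) and the young-adult mean in hand, the YAM80% definition above reduces to a simple threshold test. A minimal sketch, assuming a placeholder YAM value rather than the device's actual reference:

```python
def percent_of_yam(osi: float, yam: float) -> float:
    """Express an osteo-sono assessment index (OSI) as a percentage of the young-adult mean."""
    return 100.0 * osi / yam

def low_bone_stiffness_tendency(osi: float, yam: float, cutoff: float = 80.0) -> bool:
    """True if the OSI falls below the cutoff: 80% of YAM per the 2011 Japanese
    guideline, with 70% used for the sensitivity analysis."""
    return percent_of_yam(osi, yam) < cutoff

YAM = 2.70  # placeholder reference value, not the device's actual YAM
print(low_bone_stiffness_tendency(2.05, YAM))        # True  (~75.9% of YAM, below 80%)
print(low_bone_stiffness_tendency(2.05, YAM, 70.0))  # False (above the 70% cutoff)
```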
Statistical analysis

Due to clear sex and age differences in the prevalence of osteoporosis,19 the participants were stratified by sex and age group (45–54, 55–64, and ≥65 years) and categorized into tertiles of knee extensor muscle strength within each stratum. The age-specific tertile groups were then combined, producing age-adjusted tertiles of muscle strength. Continuous variables were expressed as median (interquartile range) or mean (standard deviation), and categorical variables as percentages.
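A minimal sketch of the age-adjusted tertile construction, assuming the stratify-then-pool logic described above; the simulated data and column names are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], n),
    "age": rng.integers(45, 80, n),
    # knee extensor strength adjusted by body weight (Nm/kg); hypothetical
    "strength_nm_kg": rng.normal(1.6, 0.4, n),
})

# Age groups used for stratification: 45-54, 55-64, >=65 years
df["age_group"] = pd.cut(df["age"], bins=[44, 54, 64, np.inf],
                         labels=["45-54", "55-64", ">=65"])

# Tertiles computed within each sex-by-age stratum, then pooled, which
# yields strength groups that are adjusted for sex and age
df["strength_tertile"] = (
    df.groupby(["sex", "age_group"], observed=True)["strength_nm_kg"]
      .transform(lambda s: pd.qcut(s, 3, labels=["low", "middle", "high"]))
)
print(df["strength_tertile"].value_counts())
```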
To reduce potential bias due to incomplete data,20 missing data were handled with multiple imputation in SPSS (IBM, Inc., Chicago, IL, USA), using random draws from a Markov chain Monte Carlo algorithm to produce 20 imputed datasets.21 Blood glucose, total cholesterol, HDL cholesterol, LDL cholesterol, TG, ALP, and UA were used as auxiliary variables to account for the missing data; the auxiliary variables were not used in the main analysis. The 20 datasets were combined using standard Rubin's rules. All missing values are reported in Table 1. (Table 1 abbreviations: HDL, high-density lipoprotein; LDL, low-density lipoprotein; OSI, osteo-sono assessment index; YAM, young-adult mean. Data are presented as median [interquartile range], mean (standard deviation), or number (percentage) unless specified otherwise.)
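The imputation itself was done in SPSS; as a loose analogue, the sketch below uses the MICE implementation in statsmodels (chained equations rather than the MCMC routine described above) to generate 20 imputed datasets and pool the estimates with Rubin's rules. The data are simulated and the variable names hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "low_stiffness": rng.integers(0, 2, n),        # outcome, fully observed
    "strength_nm_kg": rng.normal(1.6, 0.4, n),
    "bmi": rng.normal(23.0, 3.0, n),
    "glucose": rng.normal(95.0, 10.0, n),          # auxiliary variable
})
# Introduce some missingness in the covariate/auxiliary columns
df.loc[rng.choice(n, 50, replace=False), "bmi"] = np.nan
df.loc[rng.choice(n, 50, replace=False), "glucose"] = np.nan

imp = MICEData(df)  # auxiliary columns inform imputation, not the model
# Logistic model for the outcome; 20 imputed datasets are generated and
# the 20 sets of estimates are pooled with Rubin's rules by .fit()
mice = MICE("low_stiffness ~ strength_nm_kg + bmi", sm.Logit, imp)
result = mice.fit(n_burnin=10, n_imputations=20)
print(result.summary())
```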
To assess the association of each covariate, knee extensor muscle strength, and CRF with bone stiffness, univariable odds ratios and 95% CIs were calculated using logistic regression models. The covariates were age,22 smoking status,23 drinking habits,24 BMI,25 systolic blood pressure,26 and menopause.23

To evaluate the association between knee extensor muscle strength and low bone stiffness tendency, multivariable odds ratios and 95% CIs were calculated with the lowest muscle strength group as reference, adjusting for smoking status, drinking habits, BMI, systolic blood pressure, and menopause (for women only). In the mutually adjusted final model, CRF was additionally entered. Moreover, knee extensor muscle strength was modeled as a continuous variable in a separate model to test linearity. The same analyses were repeated with CRF as the primary exposure and knee extensor muscle strength as the final covariate. To consider the moderating effect of menopausal status, a menopause-stratified multivariable analysis was also performed.

To evaluate the interactive association of knee extensor muscle strength and CRF with low bone stiffness tendency, a CRF-stratified (below or above the median) multivariable analysis was performed. Odds ratios and 95% CIs were calculated as above, and the interaction term (knee extensor muscle strength × CRF) was entered into the models. A menopause-stratified multivariable analysis was also conducted.

To verify the validity of the standards applied for low bone stiffness tendency, two sensitivity analyses were performed. First, the primary analysis was repeated with the definition of low bone stiffness tendency changed from below 80% of the YAM to below 70% of the YAM. Second, a complete-case analysis was conducted to compare findings with and without accounting for missing values. All sensitivity analysis data are shown in eTable 1 through eTable 7.

All statistical analyses were performed using SPSS version 25. A P-value <0.05 was considered significant.
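A minimal sketch of the multivariable logistic model with the lowest strength tertile as reference, run on simulated data with hypothetical variable names; odds ratios and 95% CIs are obtained by exponentiating the fitted coefficients and their confidence bounds. An interaction could be tested analogously by adding a strength × CRF product term to the formula.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "low_stiffness": rng.integers(0, 2, n),
    "strength_tertile": pd.Categorical(
        rng.choice(["low", "middle", "high"], n),
        categories=["low", "middle", "high"], ordered=True),
    "crf_w_kg": rng.normal(1.5, 0.5, n),
    "smoker": rng.integers(0, 2, n),
    "bmi": rng.normal(23, 3, n),
    "sbp": rng.normal(125, 15, n),
})

# Multivariable logistic model; Treatment coding makes the lowest
# strength tertile the reference category
model = smf.logit(
    "low_stiffness ~ C(strength_tertile, Treatment('low'))"
    " + crf_w_kg + smoker + bmi + sbp",
    data=df,
).fit(disp=0)

# Odds ratios and 95% confidence intervals from the fitted coefficients
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(or_table.round(2))
```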
RESULTS
Altogether, 18,161 adults (8,902 men and 9,259 women) joined the service from April 1998 to July 2017. Of these, 9,112 participants aged <45 years and 220 without bone stiffness measurements were excluded (Figure 1). Of the final sample of 8,829 (3,731 men and 5,098 women), 542 men (14.5%) and 978 women (19.2%) had low bone stiffness tendency based on YAM80%. Similarly, 45 men (1.2%) and 49 women (1.0%) were classified as having low bone stiffness based on YAM70%. The lowest fitness category appeared to have a less favorable biomarker profile, such as higher triglycerides, than the other categories (Table 1).

Table 2 shows the association between each potential covariate and bone stiffness (prevalence per 1,000 persons; CI, confidence interval). An association between smoking status and low bone stiffness tendency was found in men (P < 0.001), but not in women (P = 0.097). Low BMI, low knee extensor muscle strength, and low CRF were all associated with a higher prevalence of low bone stiffness tendency.

An inverse association between knee extensor muscle strength and low bone stiffness tendency was observed in both sexes after adjusting for potential confounders, namely age, smoking status, drinking habits, BMI, systolic blood pressure, menopause (for women only), and CRF (P for linear trend <0.001 for both). Meanwhile, CRF and low bone stiffness tendency showed an inverse association in men (P for linear trend <0.001), but not in pre-menopausal, post-menopausal, or all women (P for linear trend = 0.634, 0.841, and 0.924, respectively), as shown in Table 3. In Table 3, values are expressed as odds ratios (95% confidence intervals), additionally adjusted for smoking status, drinking habits, body mass index, systolic blood pressure, and menopause (for women only), and mutually adjusted for cardiorespiratory fitness or knee extensor muscle strength plus the variables in the multivariable adjustment.

Knee extensor muscle strength and CRF had no interactive association with low bone stiffness tendency in either sex (men, P = 0.836; all women, P = 0.700; post-menopausal women, P = 0.615), as shown in Table 4. In Table 4, prevalence is per 1,000 persons; estimates are adjusted for age, smoking status, drinking habits, body mass index, systolic blood pressure, and menopause (for women only), with the lowest cardiorespiratory fitness and the lowest knee extensor muscle strength groups as reference.
Conclusion
We found an additive association of knee extensor muscle strength and CRF with bone health. Moreover, an inverse association existed between knee extensor muscle strength and low bone stiffness tendency in both sexes. Accordingly, maintaining not only higher knee extensor muscle strength but also higher CRF may be important to prevent or delay the onset of osteoporosis. From a practical viewpoint, these results suggest that middle-aged and older men and women should be encouraged to participate in high-intensity physical activities with mechanical loading, such as running, jumping rope, and resistance exercise. Further longitudinal studies are needed to investigate the relationship between physical fitness and bone status.
[ "Design, setting, and participants", "Measurements", "Medical checkup", "Bone stiffness and definition of low bone stiffness tendency", "Physical fitness test", "Statistical analysis", "Conclusion" ]
[ "This study was a cross-sectional analysis on the association of knee extensor muscle strength and CRF with bone stiffness. The eligibility criteria were: i) Sport Program Service (SPS, explained below) participants between April 1998 to July 2019, ii) aged ≥45 years, and iii) participants who had undergone bone stiffness measurements. SPS is a comprehensive medical checkup program primarily examining various domains of physical fitness held at the Yokohama Sports Medical Center. The SPS was initiated in April 1998 to improve the health status of people living or working in Yokohama City. Participants voluntarily apply to the service through the center’s website, and public information is published by the local government. This service receives an average of 10 people daily, totaling to 1,500 people annually. Participants eligible to use the service are those aged between 18 and 65 years, living or working in Yokohama City, and paid 15,000 JPY (approximately $142 in 2020); aged >65 years and paid 7,500 JPY ($71); and not living and working in Yokohama City, aged <65 years old, and paid 17,000 JPY ($161) and 8,500 JPY ($80). All data were selected at a single timepoint when participants first joined the service.\nPrior to joining the SPS, all participants provided their informed consent for data use. This study was conducted according to the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Yokohama Sports Medical Center (K-2019-07).", "Medical checkup Height and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height2 (kg/m2). After a 5-minute sitting position on a chair and medical examination by a physician, resting blood pressure was measured through the Riva-Rocci Korotkov method using a mercury sphygmomanometer. Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.\nHeight and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height2 (kg/m2). After a 5-minute sitting position on a chair and medical examination by a physician, resting blood pressure was measured through the Riva-Rocci Korotkov method using a mercury sphygmomanometer. Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.\nBone stiffness and definition of low bone stiffness tendency The generally known method for assessing bone strength is dual energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. 
As DXA emits radiation to assess the bone, there are problems, such as radiation exposure, costliness, being time-consuming, access to the place where DXA is set up, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which is used to assess bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared to DXA, QUS results in better outcomes for the patients as it avoids exposure to radiation and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that has a high correlation with other QUS devices or DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed to preserve the quality of the results once per month. The method of measurement was as follows: the participants sat on a chair barefoot with their knees bent at 90°. A device was placed on their right calcaneal, and gel was applied on the membranes. The measurement was performed once for approximately 30 seconds. Bone stiffness was expressed using the osteo-sono assessment index, which was calculated using the speed of sound and transmission index15 and was highly reproducible.13 Young adult mean (YAM) was calculated based on average age of 20 to 44 years as 100%.16 Low bone stiffness tendency was defined as 80% under the YAM (YAM80%), according to the 2011 Japanese guideline for prevention and treatment of osteoporosis—executive summary.6 Similarly, a 70% under the YAM (YAM70%) was also calculated to analyze sensitivity.\nThe generally known method for assessing bone strength is dual energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. As DXA emits radiation to assess the bone, there are problems, such as radiation exposure, costliness, being time-consuming, access to the place where DXA is set up, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which is used to assess bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared to DXA, QUS results in better outcomes for the patients as it avoids exposure to radiation and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that has a high correlation with other QUS devices or DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed to preserve the quality of the results once per month. The method of measurement was as follows: the participants sat on a chair barefoot with their knees bent at 90°. A device was placed on their right calcaneal, and gel was applied on the membranes. The measurement was performed once for approximately 30 seconds. 
Bone stiffness was expressed using the osteo-sono assessment index, which was calculated using the speed of sound and transmission index15 and was highly reproducible.13 Young adult mean (YAM) was calculated based on average age of 20 to 44 years as 100%.16 Low bone stiffness tendency was defined as 80% under the YAM (YAM80%), according to the 2011 Japanese guideline for prevention and treatment of osteoporosis—executive summary.6 Similarly, a 70% under the YAM (YAM70%) was also calculated to analyze sensitivity.\nPhysical fitness test Knee extensor muscle strength, which assessed maximum voluntary knee extension torque, was measured using an isokinetic dynamometer (Cybex Humac Norm 770; Computer Sports Medicine Inc., Stoughton, MA, USA). The measurement process was as follows: the participants sat so that their knee and hip joint formed a right angle and performed warm-ups. Thereafter, the participants exerted isokinetic maximum voluntary knee extension at 60 degrees/s at least twice. Overall, the participants performed three repetitions with a 30-second break. The maximum value was considered as knee extensor strength (Nm) and adjusted by the participant’s own body weight (Nm/kg).\nCRF was assessed by applying physical working capacity at 75% of the maximum heart rate (PWC75%HRmax),17 which was highly correlated with maximal oxygen uptake (r = 0.942),18 using the submaximal graded exercise test method on an electronic bicycle ergometer (The Multi Exercise Test System, ML-1800, Fukuda-Denshi, Tokyo, Japan). On the graded exercise test, the rate of loading (10–60 W/min), which was an individualized ramp protocol, was decided by experts based on the participants’ age and habitual aerobic exercise. The target heart rate was set at 75% of the estimated maximum heart rate (220 minus age), and the test was ended upon reaching target value. Additionally, the participants could end the test when an abnormal electrocardiogram result (ST depression or frequent occurrence of the extrasystole) was confirmed by the cardiologists or when they could not pedal with the designated rhythm (50 rpm) while showing poor physical condition. Most participants ended the test at approximately 10 min.\nKnee extensor muscle strength, which assessed maximum voluntary knee extension torque, was measured using an isokinetic dynamometer (Cybex Humac Norm 770; Computer Sports Medicine Inc., Stoughton, MA, USA). The measurement process was as follows: the participants sat so that their knee and hip joint formed a right angle and performed warm-ups. Thereafter, the participants exerted isokinetic maximum voluntary knee extension at 60 degrees/s at least twice. Overall, the participants performed three repetitions with a 30-second break. The maximum value was considered as knee extensor strength (Nm) and adjusted by the participant’s own body weight (Nm/kg).\nCRF was assessed by applying physical working capacity at 75% of the maximum heart rate (PWC75%HRmax),17 which was highly correlated with maximal oxygen uptake (r = 0.942),18 using the submaximal graded exercise test method on an electronic bicycle ergometer (The Multi Exercise Test System, ML-1800, Fukuda-Denshi, Tokyo, Japan). On the graded exercise test, the rate of loading (10–60 W/min), which was an individualized ramp protocol, was decided by experts based on the participants’ age and habitual aerobic exercise. The target heart rate was set at 75% of the estimated maximum heart rate (220 minus age), and the test was ended upon reaching target value. 
Additionally, the participants could end the test when an abnormal electrocardiogram result (ST depression or frequent occurrence of the extrasystole) was confirmed by the cardiologists or when they could not pedal with the designated rhythm (50 rpm) while showing poor physical condition. Most participants ended the test at approximately 10 min.", "Height and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height2 (kg/m2). After a 5-minute sitting position on a chair and medical examination by a physician, resting blood pressure was measured through the Riva-Rocci Korotkov method using a mercury sphygmomanometer. Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.", "The generally known method for assessing bone strength is dual energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. As DXA emits radiation to assess the bone, there are problems, such as radiation exposure, costliness, being time-consuming, access to the place where DXA is set up, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which is used to assess bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared to DXA, QUS results in better outcomes for the patients as it avoids exposure to radiation and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that has a high correlation with other QUS devices or DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed to preserve the quality of the results once per month. The method of measurement was as follows: the participants sat on a chair barefoot with their knees bent at 90°. A device was placed on their right calcaneal, and gel was applied on the membranes. The measurement was performed once for approximately 30 seconds. Bone stiffness was expressed using the osteo-sono assessment index, which was calculated using the speed of sound and transmission index15 and was highly reproducible.13 Young adult mean (YAM) was calculated based on average age of 20 to 44 years as 100%.16 Low bone stiffness tendency was defined as 80% under the YAM (YAM80%), according to the 2011 Japanese guideline for prevention and treatment of osteoporosis—executive summary.6 Similarly, a 70% under the YAM (YAM70%) was also calculated to analyze sensitivity.", "Knee extensor muscle strength, which assessed maximum voluntary knee extension torque, was measured using an isokinetic dynamometer (Cybex Humac Norm 770; Computer Sports Medicine Inc., Stoughton, MA, USA). The measurement process was as follows: the participants sat so that their knee and hip joint formed a right angle and performed warm-ups. Thereafter, the participants exerted isokinetic maximum voluntary knee extension at 60 degrees/s at least twice. Overall, the participants performed three repetitions with a 30-second break. 
The maximum value was considered as knee extensor strength (Nm) and adjusted by the participant’s own body weight (Nm/kg).\nCRF was assessed by applying physical working capacity at 75% of the maximum heart rate (PWC75%HRmax),17 which was highly correlated with maximal oxygen uptake (r = 0.942),18 using the submaximal graded exercise test method on an electronic bicycle ergometer (The Multi Exercise Test System, ML-1800, Fukuda-Denshi, Tokyo, Japan). On the graded exercise test, the rate of loading (10–60 W/min), which was an individualized ramp protocol, was decided by experts based on the participants’ age and habitual aerobic exercise. The target heart rate was set at 75% of the estimated maximum heart rate (220 minus age), and the test was ended upon reaching target value. Additionally, the participants could end the test when an abnormal electrocardiogram result (ST depression or frequent occurrence of the extrasystole) was confirmed by the cardiologists or when they could not pedal with the designated rhythm (50 rpm) while showing poor physical condition. Most participants ended the test at approximately 10 min.", "Due to clear sex and age differences in the prevalence of osteoporosis,19 the participants were segregated by sex and age group (aged 45–54, 55–64, and ≥65 years), and categorized into tertiles based on knee extensor muscle strength. Thereafter, each age category was combined based on muscle strength, and new groups were created with age-adjusted tertiles. Continuous and categorical variables were expressed as median (interquartile rages) or mean (standard deviation), and percentage, respectively.\nTo reduce potential biases due to incomplete data,20 missing data were treated with multiple imputation methods using SPSS (IBM, Inc., Chicago, IL, USA) by creating a random number through the Markov chain Monte Carlo algorithm, wherein 20 para-complete datasets were produced.21 Blood glucose, total cholesterol, HDL cholesterol, LDL cholesterol, TG, ALP, and UA were used as auxiliary variables to account for the missing data. The auxiliary variables were not used in the main analysis. The 20 datasets were integrated with the standard Rubin’s technique. Every missing value is presented in Table 1.\nHDL, high-density lipoprotein; LDL, low-density lipoprotein cholesterol; OSI, osteo-sono assessment index; and YAM, young-adult mean.\nData are presented as median [interquartile ranges], mean (standard deviation), and number (percentage) unless specified.\nTo assess the association of each covariate, knee extensor muscle strength, and CRF with bone stiffness, univariable odds ratios and 95% CIs were calculated using a logistic regression model. The covariates included age,22 smoking status,23 drinking habits,24 BMI,25 systolic blood pressure,26 and menopause.23\nTo evaluate the association between knee extensor muscle strength and low bone stiffness tendency, multivariable odds ratios and 95% CIs were calculated using the lowest knee extensor muscle strength group as reference after adjusting for smoking status, drinking habits, BMI, systolic blood pressure, and menopause (for women only). In the mutually adjusted final model, CRF was entered into the final model. Moreover, a continuous variable of knee extensor muscle strength was used in another model to test linearity. The same analyses were repeated with CRF as the primary exposure and knee extensor muscle strength as the final covariate. 
To consider the moderating effect of menopausal status, a menopause-stratified multivariable analysis was also performed.\nTo evaluate the interactive association of knee extensor muscle strength and CRF with low bone stiffness tendency, a CRF-stratified (below or above the median) multivariable analysis was performed. Odds ratios and 95% CIs were calculated similarly as mentioned above, and the interaction term (knee extensor muscle strength*CRF) was entered into the models. A menopause-stratified multivariable analysis was also conducted.\nTo verify the validity of the applicable standards for low bone stiffness tendency, the following two sensitivity analyses were performed. First, we repeated the primary analysis by changing the definition of low bone stiffness tendency from 80% under the YAM to 70% under the YAM. Second, a complete-case analysis was conducted to distinguish findings with and without considering missing values. All sensitivity analysis data are shown in the eTable 1, eTable 2, eTable 3, eTable 4, eTable 5, eTable 6, and eTable 7.\nAll statistical analyses were performed using the SPSS version 25. A P-value <0.05 was considered significant.", "We found an additive association between knee extensor muscle strength and CRF for bone health. Moreover, an inverse association existed between knee extensor muscle strength and low bone stiffness tendency in both sexes. Accordingly, maintaining not only higher knee extensor muscle strength, but also higher CRF may be important to prevent or delay the onset of osteoporosis. From a practical viewpoint, these results suggest that middle-aged or older women and men should be encouraged to participate in high-intensity physical activities with mechanical loading, such as running, jumping rope, and resistance exercise. Further longitudinal studies are needed to investigate the relationship between physical fitness and bone status." ]
[ null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Design, setting, and participants", "Measurements", "Medical checkup", "Bone stiffness and definition of low bone stiffness tendency", "Physical fitness test", "Statistical analysis", "RESULTS", "DISCUSSION", "Conclusion" ]
[ "Osteoporosis is characterized by low bone strength and an increased risk of bone fractures,1 which is highly associated with mortality.2,3 Moreover, as osteoporosis imposes a heavy economic burden on the gross domestic product,4 it is widely recognized as a serious public health concern not only in Japan but also in other aged societies. Currently, Japan experiences one of the most serious situations in Asia5 because the estimated number of patients with osteoporosis exceeds 12.8 million (men: 3 million; women: 9.8 million).6 To minimize the detrimental effects of osteoporotic fractures on a patient’s quality of life, early detection of low bone strength and preventive interventions preferably based on risk stratification are strongly needed.\nThe established risk factors for low bone strength are advanced age, female sex, genetic factors, low body weight, and physical inactivity, including low physical fitness represented as cardiorespiratory fitness (CRF) and/or grip strength.7,8 Compared to low CRF, high CRF represents an odds ratio of 0.29 (95% confidence interval [CI], 0.12–0.71) for the femoral neck T-score ≤ 2.5.7 The hazard ratio for osteoporotic fracture per 5 kg reduction in hand grip strength was 1.49 (95% CI, 1.18–1.95) in a 10-year prospective cohort study.8 Maintaining high levels of physical fitness, assessed via CRF and/or hand grip strength, is a key factor for preventing low bone strength, which consequently lowers the risk of osteoporosis development.\nGrip strength is often used as an indicator of muscle strength because it is valid and reliable for whole-body muscle strength.9 However, its use is controversial since it cannot be measured for antigravity muscles, such as the knee extensor muscle.10 As knee extensor muscle strength predicts the risk of falls and mobility, it can be used as a better indicator of bone strength.11,12 To our knowledge, it remains unclear whether knee extensor muscle strength is beneficially associated with bone strength. Moreover, CRF is also one of the predictors for low bone strength.7 However, information on the association between knee extensor muscle strength and bone strength according to varying levels of CRF is limited.\nThis study aimed to investigate the interactive associations of knee extensor muscle strength and CRF with bone stiffness as a marker of bone strength in Japanese adults. The findings from this cross-sectional study will allow us to design future longitudinal and clinical studies to investigate the preventive effect of various strategies for osteoporosis and possibly result in a decrease in osteoporotic fractures.", "Design, setting, and participants This study was a cross-sectional analysis on the association of knee extensor muscle strength and CRF with bone stiffness. The eligibility criteria were: i) Sport Program Service (SPS, explained below) participants between April 1998 to July 2019, ii) aged ≥45 years, and iii) participants who had undergone bone stiffness measurements. SPS is a comprehensive medical checkup program primarily examining various domains of physical fitness held at the Yokohama Sports Medical Center. The SPS was initiated in April 1998 to improve the health status of people living or working in Yokohama City. Participants voluntarily apply to the service through the center’s website, and public information is published by the local government. This service receives an average of 10 people daily, totaling to 1,500 people annually. 
Participants eligible to use the service are those aged between 18 and 65 years, living or working in Yokohama City, and paid 15,000 JPY (approximately $142 in 2020); aged >65 years and paid 7,500 JPY ($71); and not living and working in Yokohama City, aged <65 years old, and paid 17,000 JPY ($161) and 8,500 JPY ($80). All data were selected at a single timepoint when participants first joined the service.\nPrior to joining the SPS, all participants provided their informed consent for data use. This study was conducted according to the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Yokohama Sports Medical Center (K-2019-07).\nThis study was a cross-sectional analysis on the association of knee extensor muscle strength and CRF with bone stiffness. The eligibility criteria were: i) Sport Program Service (SPS, explained below) participants between April 1998 to July 2019, ii) aged ≥45 years, and iii) participants who had undergone bone stiffness measurements. SPS is a comprehensive medical checkup program primarily examining various domains of physical fitness held at the Yokohama Sports Medical Center. The SPS was initiated in April 1998 to improve the health status of people living or working in Yokohama City. Participants voluntarily apply to the service through the center’s website, and public information is published by the local government. This service receives an average of 10 people daily, totaling to 1,500 people annually. Participants eligible to use the service are those aged between 18 and 65 years, living or working in Yokohama City, and paid 15,000 JPY (approximately $142 in 2020); aged >65 years and paid 7,500 JPY ($71); and not living and working in Yokohama City, aged <65 years old, and paid 17,000 JPY ($161) and 8,500 JPY ($80). All data were selected at a single timepoint when participants first joined the service.\nPrior to joining the SPS, all participants provided their informed consent for data use. This study was conducted according to the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Yokohama Sports Medical Center (K-2019-07).\nMeasurements Medical checkup Height and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height2 (kg/m2). After a 5-minute sitting position on a chair and medical examination by a physician, resting blood pressure was measured through the Riva-Rocci Korotkov method using a mercury sphygmomanometer. Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.\nHeight and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height2 (kg/m2). After a 5-minute sitting position on a chair and medical examination by a physician, resting blood pressure was measured through the Riva-Rocci Korotkov method using a mercury sphygmomanometer. 
Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.\nBone stiffness and definition of low bone stiffness tendency The generally known method for assessing bone strength is dual energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. As DXA emits radiation to assess the bone, there are problems, such as radiation exposure, costliness, being time-consuming, access to the place where DXA is set up, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which is used to assess bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared to DXA, QUS results in better outcomes for the patients as it avoids exposure to radiation and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that has a high correlation with other QUS devices or DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed to preserve the quality of the results once per month. The method of measurement was as follows: the participants sat on a chair barefoot with their knees bent at 90°. A device was placed on their right calcaneal, and gel was applied on the membranes. The measurement was performed once for approximately 30 seconds. Bone stiffness was expressed using the osteo-sono assessment index, which was calculated using the speed of sound and transmission index15 and was highly reproducible.13 Young adult mean (YAM) was calculated based on average age of 20 to 44 years as 100%.16 Low bone stiffness tendency was defined as 80% under the YAM (YAM80%), according to the 2011 Japanese guideline for prevention and treatment of osteoporosis—executive summary.6 Similarly, a 70% under the YAM (YAM70%) was also calculated to analyze sensitivity.\nThe generally known method for assessing bone strength is dual energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. As DXA emits radiation to assess the bone, there are problems, such as radiation exposure, costliness, being time-consuming, access to the place where DXA is set up, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which is used to assess bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared to DXA, QUS results in better outcomes for the patients as it avoids exposure to radiation and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that has a high correlation with other QUS devices or DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed to preserve the quality of the results once per month. The method of measurement was as follows: the participants sat on a chair barefoot with their knees bent at 90°. A device was placed on their right calcaneal, and gel was applied on the membranes. The measurement was performed once for approximately 30 seconds. 
Bone stiffness was expressed using the osteo-sono assessment index, which was calculated using the speed of sound and transmission index15 and was highly reproducible.13 Young adult mean (YAM) was calculated based on average age of 20 to 44 years as 100%.16 Low bone stiffness tendency was defined as 80% under the YAM (YAM80%), according to the 2011 Japanese guideline for prevention and treatment of osteoporosis—executive summary.6 Similarly, a 70% under the YAM (YAM70%) was also calculated to analyze sensitivity.\nPhysical fitness test Knee extensor muscle strength, which assessed maximum voluntary knee extension torque, was measured using an isokinetic dynamometer (Cybex Humac Norm 770; Computer Sports Medicine Inc., Stoughton, MA, USA). The measurement process was as follows: the participants sat so that their knee and hip joint formed a right angle and performed warm-ups. Thereafter, the participants exerted isokinetic maximum voluntary knee extension at 60 degrees/s at least twice. Overall, the participants performed three repetitions with a 30-second break. The maximum value was considered as knee extensor strength (Nm) and adjusted by the participant’s own body weight (Nm/kg).\nCRF was assessed by applying physical working capacity at 75% of the maximum heart rate (PWC75%HRmax),17 which was highly correlated with maximal oxygen uptake (r = 0.942),18 using the submaximal graded exercise test method on an electronic bicycle ergometer (The Multi Exercise Test System, ML-1800, Fukuda-Denshi, Tokyo, Japan). On the graded exercise test, the rate of loading (10–60 W/min), which was an individualized ramp protocol, was decided by experts based on the participants’ age and habitual aerobic exercise. The target heart rate was set at 75% of the estimated maximum heart rate (220 minus age), and the test was ended upon reaching target value. Additionally, the participants could end the test when an abnormal electrocardiogram result (ST depression or frequent occurrence of the extrasystole) was confirmed by the cardiologists or when they could not pedal with the designated rhythm (50 rpm) while showing poor physical condition. Most participants ended the test at approximately 10 min.\nKnee extensor muscle strength, which assessed maximum voluntary knee extension torque, was measured using an isokinetic dynamometer (Cybex Humac Norm 770; Computer Sports Medicine Inc., Stoughton, MA, USA). The measurement process was as follows: the participants sat so that their knee and hip joint formed a right angle and performed warm-ups. Thereafter, the participants exerted isokinetic maximum voluntary knee extension at 60 degrees/s at least twice. Overall, the participants performed three repetitions with a 30-second break. The maximum value was considered as knee extensor strength (Nm) and adjusted by the participant’s own body weight (Nm/kg).\nCRF was assessed by applying physical working capacity at 75% of the maximum heart rate (PWC75%HRmax),17 which was highly correlated with maximal oxygen uptake (r = 0.942),18 using the submaximal graded exercise test method on an electronic bicycle ergometer (The Multi Exercise Test System, ML-1800, Fukuda-Denshi, Tokyo, Japan). On the graded exercise test, the rate of loading (10–60 W/min), which was an individualized ramp protocol, was decided by experts based on the participants’ age and habitual aerobic exercise. The target heart rate was set at 75% of the estimated maximum heart rate (220 minus age), and the test was ended upon reaching target value. 
Additionally, the participants could end the test when an abnormal electrocardiogram result (ST depression or frequent occurrence of the extrasystole) was confirmed by the cardiologists or when they could not pedal with the designated rhythm (50 rpm) while showing poor physical condition. Most participants ended the test at approximately 10 min.\nMedical checkup Height and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height2 (kg/m2). After a 5-minute sitting position on a chair and medical examination by a physician, resting blood pressure was measured through the Riva-Rocci Korotkov method using a mercury sphygmomanometer. Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.\nHeight and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight/height2 (kg/m2). After a 5-minute sitting position on a chair and medical examination by a physician, resting blood pressure was measured through the Riva-Rocci Korotkov method using a mercury sphygmomanometer. Blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were sampled after 12-hour fasting and analyzed with Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire.\nBone stiffness and definition of low bone stiffness tendency The generally known method for assessing bone strength is dual energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. As DXA emits radiation to assess the bone, there are problems, such as radiation exposure, costliness, being time-consuming, access to the place where DXA is set up, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which is used to assess bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared to DXA, QUS results in better outcomes for the patients as it avoids exposure to radiation and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that has a high correlation with other QUS devices or DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed to preserve the quality of the results once per month. The method of measurement was as follows: the participants sat on a chair barefoot with their knees bent at 90°. A device was placed on their right calcaneal, and gel was applied on the membranes. The measurement was performed once for approximately 30 seconds. 
CRF was assessed as physical working capacity at 75% of the maximum heart rate (PWC75%HRmax),17 which is highly correlated with maximal oxygen uptake (r = 0.942),18 using a submaximal graded exercise test on an electronic bicycle ergometer (Multi Exercise Test System ML-1800; Fukuda-Denshi, Tokyo, Japan). The rate of loading (10–60 W/min), an individualized ramp protocol, was decided by experts based on each participant's age and habitual aerobic exercise. The target heart rate was set at 75% of the estimated maximum heart rate (220 minus age), and the test ended when the target value was reached. The test was also terminated when a cardiologist confirmed an abnormal electrocardiogram (ST depression or frequent extrasystoles) or when a participant in poor physical condition could not maintain the designated pedaling rhythm (50 rpm). Most participants completed the test in approximately 10 minutes.
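The text defines the target heart rate but does not spell out how the workload at that heart rate is derived. A common generic approach is to regress heart rate on workload during the ramp and solve for the workload at the target; the sketch below follows that approach with made-up data and is not necessarily the study's exact procedure.

```python
# Sketch of a PWC75%HRmax-style calculation (generic approach, hypothetical data).
import numpy as np

age = 52
target_hr = 0.75 * (220 - age)  # 126 bpm, as defined in the text

# Hypothetical workload (W) and heart-rate (bpm) pairs recorded during a ramp test.
workload = np.array([30, 60, 90, 120, 150], dtype=float)
heart_rate = np.array([92, 104, 115, 127, 138], dtype=float)

# Fit HR = a * workload + b, then solve for the workload at the target HR.
a, b = np.polyfit(workload, heart_rate, 1)
pwc75 = (target_hr - b) / a
print(f"target HR = {target_hr:.0f} bpm, PWC75%HRmax ≈ {pwc75:.0f} W")
```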
Statistical analysis Due to clear sex and age differences in the prevalence of osteoporosis,19 the participants were stratified by sex and age group (45–54, 55–64, and ≥65 years) and categorized into tertiles of knee extensor muscle strength within each stratum. Thereafter, the age categories were combined by muscle strength rank to create age-adjusted tertile groups. Continuous variables were expressed as median (interquartile range) or mean (standard deviation), and categorical variables as percentages.
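A pandas sketch of such age-adjusted tertiles follows; the study used SPSS, and the column names and data here are hypothetical.

```python
# Sketch: tertiles of strength computed within sex-by-age strata, then pooled.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], size=300),
    "age": rng.integers(45, 80, size=300),
    "strength_nm_kg": rng.normal(2.0, 0.4, size=300),
})
df["age_group"] = pd.cut(df["age"], bins=[44, 54, 64, np.inf],
                         labels=["45-54", "55-64", ">=65"])

# Tertile (0 = lowest) assigned within each sex-by-age stratum.
df["strength_tertile"] = (
    df.groupby(["sex", "age_group"], observed=True)["strength_nm_kg"]
      .transform(lambda s: pd.qcut(s, 3, labels=False))
)
print(df["strength_tertile"].value_counts())
```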
To reduce potential biases due to incomplete data,20 missing data were handled with multiple imputation in SPSS (IBM, Inc., Chicago, IL, USA), using random draws generated by a Markov chain Monte Carlo algorithm to produce 20 imputed datasets.21 Blood glucose, total cholesterol, HDL cholesterol, LDL cholesterol, TG, ALP, and UA were used as auxiliary variables to account for the missing data; they were not used in the main analysis. The 20 datasets were combined with standard Rubin's rules. The number of missing values for each variable is presented in Table 1.

HDL, high-density lipoprotein; LDL, low-density lipoprotein; OSI, osteo-sono assessment index; YAM, young adult mean.

Data are presented as median [interquartile range], mean (standard deviation), or number (percentage) unless otherwise specified.
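The pooling was done in SPSS; the sketch below shows the same Rubin's-rules arithmetic in Python, assuming a coefficient estimate and standard error are already available from each imputed dataset (the numbers are hypothetical).

```python
# Rubin's rules: pool one coefficient across m imputed datasets.
import numpy as np

estimates = np.array([0.42, 0.39, 0.45, 0.41, 0.44])   # hypothetical per-dataset betas
std_errors = np.array([0.10, 0.11, 0.10, 0.12, 0.10])  # hypothetical standard errors

m = len(estimates)
pooled_beta = estimates.mean()
within_var = (std_errors ** 2).mean()   # average within-imputation variance
between_var = estimates.var(ddof=1)     # between-imputation variance
total_var = within_var + (1 + 1 / m) * between_var
pooled_se = np.sqrt(total_var)
print(f"pooled beta = {pooled_beta:.3f} (SE {pooled_se:.3f})")
```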
To assess the association of each covariate, knee extensor muscle strength, and CRF with bone stiffness, univariable odds ratios and 95% CIs were calculated using logistic regression models. The covariates were age,22 smoking status,23 drinking habits,24 BMI,25 systolic blood pressure,26 and menopause.23

To evaluate the association between knee extensor muscle strength and low bone stiffness tendency, multivariable odds ratios and 95% CIs were calculated using the lowest knee extensor muscle strength group as the reference, after adjusting for smoking status, drinking habits, BMI, systolic blood pressure, and menopause (for women only). CRF was additionally entered into the mutually adjusted final model. Moreover, knee extensor muscle strength was modeled as a continuous variable in another model to test linearity. The same analyses were repeated with CRF as the primary exposure and knee extensor muscle strength as the final covariate. To consider the moderating effect of menopausal status, a menopause-stratified multivariable analysis was also performed.
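A statsmodels sketch of this odds-ratio calculation follows; the study used SPSS, and the formula, variable names, and data here are hypothetical.

```python
# Sketch: multivariable logistic regression, reporting ORs and 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "low_bs": rng.integers(0, 2, n),            # low bone stiffness tendency (0/1)
    "strength_tertile": rng.integers(0, 3, n),  # 0 = lowest (reference)
    "crf": rng.normal(1.8, 0.4, n),
    "bmi": rng.normal(23, 3, n),
    "sbp": rng.normal(125, 15, n),
    "smoker": rng.integers(0, 2, n),
})

model = smf.logit(
    "low_bs ~ C(strength_tertile) + crf + bmi + sbp + smoker", data=df
).fit(disp=False)

odds_ratios = np.exp(model.params)      # OR = exp(beta)
ci = np.exp(model.conf_int())           # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```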
To evaluate the interactive association of knee extensor muscle strength and CRF with low bone stiffness tendency, a CRF-stratified (below or above the median) multivariable analysis was performed. Odds ratios and 95% CIs were calculated as described above, and an interaction term (knee extensor muscle strength × CRF) was entered into the models. A menopause-stratified multivariable analysis was also conducted.

To verify the validity of the cutoff applied for low bone stiffness tendency, two sensitivity analyses were performed. First, the primary analysis was repeated with the definition of low bone stiffness tendency changed from below 80% of the YAM to below 70% of the YAM. Second, a complete-case analysis was conducted to compare findings with and without accounting for missing values. All sensitivity analysis data are shown in eTable 1 through eTable 7.

All statistical analyses were performed using SPSS version 25. A P-value <0.05 was considered significant.
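The interaction test amounts to adding a product term to the same kind of model. The sketch below reuses the hypothetical data from the previous sketch and is not the study's SPSS syntax.

```python
# Sketch: testing a strength-by-CRF interaction (hypothetical data as above).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "low_bs": rng.integers(0, 2, n),
    "strength_tertile": rng.integers(0, 3, n),
    "crf": rng.normal(1.8, 0.4, n),
    "bmi": rng.normal(23, 3, n),
    "sbp": rng.normal(125, 15, n),
    "smoker": rng.integers(0, 2, n),
})

# '*' expands to both main effects plus the product (interaction) terms.
interaction_model = smf.logit(
    "low_bs ~ C(strength_tertile) * crf + bmi + sbp + smoker", data=df
).fit(disp=False)

# P-values for the interaction terms (patsy names them 'C(strength_tertile)[T.k]:crf').
print(interaction_model.pvalues.filter(like=":crf"))
```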
Results Altogether, 18,161 adults (8,902 men and 9,259 women) joined the service from April 1998 to July 2017. Of them, 9,112 participants aged <45 years and 220 without bone stiffness measurements were excluded (Figure 1).
Of the final sample of 8,829 participants (3,731 men and 5,098 women), 542 men (14.5%) and 978 women (19.2%) had low bone stiffness tendency based on YAM80%. Similarly, 45 men (1.2%) and 49 women (1.0%) were classified as having low bone stiffness based on YAM70%. The lowest fitness category appeared to have a less favorable biomarker profile (eg, higher triglycerides) than the other categories (Table 1).

Table 2 shows the association between each potential covariate and bone stiffness. An association between smoking status and low bone stiffness tendency was found in men (P < 0.001) but not in women (P = 0.097). Low BMI, low knee extensor muscle strength, and low CRF were all associated with a higher prevalence of low bone stiffness tendency.

CI, confidence interval.
aPrevalence per 1,000 persons.

An inverse association between knee extensor muscle strength and low bone stiffness tendency was observed in both sexes after adjusting for potential confounders, namely age, smoking status, drinking habits, BMI, systolic blood pressure, menopause (for women only), and CRF (P for linear trend <0.001 for both). Meanwhile, CRF and low bone stiffness tendency showed an inverse association in men (P for linear trend <0.001) but not in pre-menopausal, post-menopausal, or all women (P for linear trend = 0.634, 0.841, and 0.924, respectively), as shown in Table 3. Knee extensor muscle strength and CRF showed no interactive association with low bone stiffness tendency in either sex (men, P = 0.836; all women, P = 0.700; post-menopausal women, P = 0.615), as shown in Table 4.

Values are expressed as odds ratio (95% confidence interval).
aAdditionally adjusted for smoking status, drinking habits, body mass index, systolic blood pressure, and menopause (for women only).
bMutually adjusted for cardiorespiratory fitness or knee extensor muscle strength plus the variables in the multivariable adjustmenta.
CI, confidence interval.
aPrevalence per 1,000 persons.
bAdjusted for age, smoking status, drinking habits, body mass index, systolic blood pressure, and menopause (for women only).
cUsing the lowest cardiorespiratory fitness and the lowest knee extensor muscle strength as references.

Discussion Our primary findings were as follows. First, no interactive association between knee extensor muscle strength and CRF was found for low bone stiffness tendency in either sex. Second, higher knee extensor muscle strength was associated with a lower prevalence of low bone stiffness tendency in both sexes, independent of CRF. Third, higher CRF was associated with a lower prevalence of low bone stiffness tendency in men but not in women, independent of knee extensor muscle strength. These findings suggest that a combination of muscle strength and CRF may carry an additive, not synergistic, benefit for bone health. However, the association between CRF and bone health may not be evident in adult Japanese women.

Contrary to our expectation, knee extensor muscle strength and CRF had no interactive association with bone stiffness. Hence, each of the two physical fitness elements is independently or additively, not synergistically, associated with bone health. As described in the subsequent paragraphs, the possible mechanisms linking muscle strength or CRF to bone strength may differ. However, whether the two pathways interfere with or reinforce each other remains unknown, owing to a lack of findings from basic mechanistic studies.
A previous randomized controlled trial revealed that resistance exercise, and a combination of resistance and aerobic exercise, prevented bone loss during a weight-loss program better than aerobic exercise alone.27 This supports the notion that combined aerobic and resistance exercise may be recommended to prevent loss of musculoskeletal function,28 and it is consistent with our results.

Knee extensor muscle strength was positively associated with bone stiffness after adjusting for CRF in both sexes. According to Wolff's law, bone adapts its remodeling response to external stressors.29 Knee extensor muscle strength may act as such an external stressor and thereby improve bone stiffness. Another possible mechanism relates to muscle functioning as an endocrine organ that secretes myokines. Physiological studies have reported that muscle secretes myokines that activate bone metabolism, such as myostatin, insulin-like growth factor (IGF)-1, and IGF-2.30 An observational study has also reported an association between muscle strength and nonadjacent bones.31 However, the details of this association remain unclear, and further mechanistic investigations are warranted. Because bone mineral density is highly correlated with bone stiffness,13,14 our finding that knee extensor muscle strength was associated with bone stiffness after adjusting for confounders is in line with these reports.

CRF also appeared beneficial for bone health in men. A previous large-scale observational study showed similar trends.32 CRF partially reflects one's physical activity level and has been used as an index of health outcomes.33 Because physical activity produces mechanical loading on the musculoskeletal system, which enhances osteocyte activation and bone resorption and formation, the American College of Sports Medicine currently recommends weight-bearing physical activities.34 This study identified a sex difference in the association between CRF and bone stiffness (ie, no association in women). From a biological viewpoint, one possible mechanism for bone loss is the greater age-dependent loss of estrogen in women, which reduces estrogen receptors and the osteogenic response.35 Accordingly, no association between CRF and bone health was observed in women, as in a previous study.36

This study has several noteworthy strengths. First, we analyzed a large population across a wide age range (≥45 years). Men and women differ in the age of onset of osteoporosis caused by declining bone strength19: the number of patients with osteoporosis increases drastically after menopause in women and after the age of 60 in men.19 Second, we reduced potential biases by using multiple imputation.21 Third, this study expanded the body of knowledge in this area by examining the joint associations of knee extensor muscle strength and CRF with bone stiffness. Fourth, we used an isokinetic dynamometer, which enabled precise measurement; although it requires expert skills, isokinetic muscle strength, which reflects active physical activity, can be assessed accurately.37

Meanwhile, there are also several limitations.
First, causal associations among knee extensor muscle strength, CRF, and bone stiffness could not be assessed owing to the cross-sectional design. Second, there was no information on habitual dietary intake, such as protein, which can positively contribute to bone mineral density38; thus, residual confounding by dietary habits may exist. Third, there was no information on the use of osteoporosis medication or history of surgery, so bone stiffness could be underestimated. Finally, there was no information on habitual physical activity. As about half of the variance in CRF is accounted for by genotype, CRF is not a direct marker of physical activity, although it may reflect an aspect of it.39 Therefore, precise assessments of physical activity are preferable.

Conclusion We found an additive association of knee extensor muscle strength and CRF with bone health. Moreover, an inverse association existed between knee extensor muscle strength and low bone stiffness tendency in both sexes. Accordingly, maintaining not only higher knee extensor muscle strength but also higher CRF may be important to prevent or delay the onset of osteoporosis. From a practical viewpoint, these results suggest that middle-aged and older women and men should be encouraged to participate in high-intensity physical activities with mechanical loading, such as running, jumping rope, and resistance exercise. Further longitudinal studies are needed to investigate the relationship between physical fitness and bone status.
[ "intro", "methods", null, null, null, null, null, null, "results", "discussion", null ]
[ "knee extensor muscle strength", "cardiorespiratory fitness", "bone stiffness", "quantitative ultrasound" ]
To assess the association of each covariate, knee extensor muscle strength, and CRF with bone stiffness, univariable odds ratios and 95% CIs were calculated using a logistic regression model. The covariates included age,22 smoking status,23 drinking habits,24 BMI,25 systolic blood pressure,26 and menopause.23 To evaluate the association between knee extensor muscle strength and low bone stiffness tendency, multivariable odds ratios and 95% CIs were calculated using the lowest knee extensor muscle strength group as reference after adjusting for smoking status, drinking habits, BMI, systolic blood pressure, and menopause (for women only). In the mutually adjusted final model, CRF was entered into the final model. Moreover, a continuous variable of knee extensor muscle strength was used in another model to test linearity. The same analyses were repeated with CRF as the primary exposure and knee extensor muscle strength as the final covariate. To consider the moderating effect of menopausal status, a menopause-stratified multivariable analysis was also performed. To evaluate the interactive association of knee extensor muscle strength and CRF with low bone stiffness tendency, a CRF-stratified (below or above the median) multivariable analysis was performed. Odds ratios and 95% CIs were calculated similarly as mentioned above, and the interaction term (knee extensor muscle strength*CRF) was entered into the models. A menopause-stratified multivariable analysis was also conducted. To verify the validity of the applicable standards for low bone stiffness tendency, the following two sensitivity analyses were performed. First, we repeated the primary analysis by changing the definition of low bone stiffness tendency from 80% under the YAM to 70% under the YAM. Second, a complete-case analysis was conducted to distinguish findings with and without considering missing values. All sensitivity analysis data are shown in the eTable 1, eTable 2, eTable 3, eTable 4, eTable 5, eTable 6, and eTable 7. All statistical analyses were performed using the SPSS version 25. A P-value <0.05 was considered significant. Due to clear sex and age differences in the prevalence of osteoporosis,19 the participants were segregated by sex and age group (aged 45–54, 55–64, and ≥65 years), and categorized into tertiles based on knee extensor muscle strength. Thereafter, each age category was combined based on muscle strength, and new groups were created with age-adjusted tertiles. Continuous and categorical variables were expressed as median (interquartile rages) or mean (standard deviation), and percentage, respectively. To reduce potential biases due to incomplete data,20 missing data were treated with multiple imputation methods using SPSS (IBM, Inc., Chicago, IL, USA) by creating a random number through the Markov chain Monte Carlo algorithm, wherein 20 para-complete datasets were produced.21 Blood glucose, total cholesterol, HDL cholesterol, LDL cholesterol, TG, ALP, and UA were used as auxiliary variables to account for the missing data. The auxiliary variables were not used in the main analysis. The 20 datasets were integrated with the standard Rubin’s technique. Every missing value is presented in Table 1. HDL, high-density lipoprotein; LDL, low-density lipoprotein cholesterol; OSI, osteo-sono assessment index; and YAM, young-adult mean. Data are presented as median [interquartile ranges], mean (standard deviation), and number (percentage) unless specified. 
Design, setting, and participants: This study was a cross-sectional analysis of the association of knee extensor muscle strength and CRF with bone stiffness. The eligibility criteria were: i) participation in the Sport Program Service (SPS, explained below) between April 1998 and July 2019; ii) age ≥45 years; and iii) completion of bone stiffness measurements. The SPS is a comprehensive medical checkup program, held at the Yokohama Sports Medical Center, that primarily examines various domains of physical fitness. It was initiated in April 1998 to improve the health status of people living or working in Yokohama City. Participants voluntarily apply to the service through the center's website, and public information is published by the local government. The service receives an average of 10 people daily, totaling 1,500 people annually. The service is open to adults aged ≥18 years: participants living or working in Yokohama City paid 15,000 JPY (approximately $142 in 2020) if aged ≤65 years and 7,500 JPY ($71) if aged >65 years, whereas those neither living nor working in Yokohama City paid 17,000 JPY ($161) and 8,500 JPY ($80), respectively. All data were taken from the single timepoint at which participants first joined the service. Prior to joining the SPS, all participants provided informed consent for data use.
This study was conducted according to the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Yokohama Sports Medical Center (K-2019-07). Measurements:
Medical checkup: Height and body weight of barefoot participants were measured using a calibrated height-weight scale (WB-510; Tanita Co., Tokyo, Japan). Body mass index (BMI) was calculated as body weight divided by height squared (kg/m²). After sitting on a chair for 5 minutes and undergoing a medical examination by a physician, participants had their resting blood pressure measured with a mercury sphygmomanometer using the Riva-Rocci Korotkov method. Blood samples were drawn after a 12-hour fast, and blood glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), alkaline phosphatase (ALP), and uric acid (UA) were analyzed with a Roche INTEGRA 400 plus (Roche International Ltd., Basle, Switzerland). Data on habitual alcohol drinking (yes or no), habitual smoking status (yes or no), and menopausal status (yes or no) were obtained through a self-reported questionnaire. Bone stiffness and definition of low bone stiffness tendency: The standard method for assessing bone strength is dual-energy X-ray absorptiometry (DXA), which is commonly used to diagnose osteoporosis. However, because DXA uses radiation to assess the bone, it has several drawbacks: radiation exposure, high cost, long measurement time, limited access to facilities where DXA is installed, and the need for radiologists. In contrast, quantitative ultrasound (QUS), which assesses bone stiffness as an objective marker of bone strength, is well established in Japan.6 Compared with DXA, QUS avoids radiation exposure and is inexpensive, highly portable, and efficient. Therefore, in this study, bone stiffness was assessed using a QUS device (AOS-100NW; Hitachi Aloka Medical, Ltd., Mitaka, Tokyo, Japan) that correlates highly with other QUS devices and with DXA (r = 0.804, P < 0.001).13,14 Maintenance and calibration were performed once per month to preserve the quality of the results. The measurement procedure was as follows: participants sat barefoot on a chair with their knees bent at 90°. The device was placed on the right calcaneus, and gel was applied to the membranes. The measurement was performed once, taking approximately 30 seconds. Bone stiffness was expressed as the osteo-sono assessment index (OSI), which was calculated from the speed of sound and the transmission index15 and was highly reproducible.13 The young adult mean (YAM) was defined as the average value for adults aged 20 to 44 years, set to 100%.16 Low bone stiffness tendency was defined as a value below 80% of the YAM (YAM80%), according to the 2011 Japanese guideline for prevention and treatment of osteoporosis—executive summary.6 Similarly, a 70% threshold (YAM70%) was calculated for sensitivity analysis.
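For concreteness, the BMI computation and the YAM-based flags just described reduce to a few lines of code. The sketch below is illustrative only: the column names (weight_kg, height_cm, osi) and the yam_mean reference value are assumptions, not part of the original study's pipeline.

```python
# Illustrative sketch: derive BMI and the YAM-based low-bone-stiffness flags.
# Column names and yam_mean are assumed; the QUS device computes OSI internally.
import pandas as pd

def add_derived_columns(df: pd.DataFrame, yam_mean: float) -> pd.DataFrame:
    out = df.copy()
    out["bmi"] = out["weight_kg"] / (out["height_cm"] / 100) ** 2  # kg/m^2
    out["yam_percent"] = 100 * out["osi"] / yam_mean               # OSI as % of YAM
    out["low_bone_stiffness"] = out["yam_percent"] < 80            # YAM80% definition
    out["low_bone_stiffness_70"] = out["yam_percent"] < 70         # YAM70% (sensitivity)
    return out
```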
Physical fitness test: Knee extensor muscle strength, defined as maximum voluntary knee extension torque, was measured using an isokinetic dynamometer (Cybex Humac Norm 770; Computer Sports Medicine Inc., Stoughton, MA, USA). The measurement procedure was as follows: participants sat with their knee and hip joints at right angles and performed warm-up trials, exerting isokinetic maximal voluntary knee extension at 60 degrees/s at least twice. They then performed three repetitions separated by 30-second breaks. The maximum value was taken as knee extensor strength (Nm) and normalized to body weight (Nm/kg). CRF was assessed as the physical working capacity at 75% of the maximum heart rate (PWC75%HRmax),17 which correlates highly with maximal oxygen uptake (r = 0.942),18 using a submaximal graded exercise test on an electronic bicycle ergometer (Multi Exercise Test System ML-1800; Fukuda-Denshi, Tokyo, Japan). For the graded exercise test, the loading rate (10–60 W/min), an individualized ramp protocol, was set by experts based on the participant's age and habitual aerobic exercise. The target heart rate was set at 75% of the estimated maximum heart rate (220 minus age), and the test ended when the target value was reached. The test was also terminated when a cardiologist confirmed an abnormal electrocardiogram (ST depression or frequent extrasystoles) or when a participant in poor physical condition could not pedal at the designated cadence (50 rpm). Most participants completed the test in approximately 10 minutes.
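As a concrete illustration of the test's arithmetic, the sketch below computes the target heart rate and reads PWC75%HRmax off the recorded ramp data by linear interpolation. The interpolation step and variable names are assumptions made for illustration; the paper does not specify how the workload at the target heart rate was derived.

```python
# Illustrative sketch of the submaximal test arithmetic described above.
import numpy as np

def target_heart_rate(age: int) -> float:
    # 75% of the age-predicted maximum heart rate (220 minus age)
    return 0.75 * (220 - age)

def pwc75_hrmax(age: int, heart_rates: list[float], watts: list[float]) -> float:
    # Workload (W) at the target heart rate, interpolated over the ramp samples
    return float(np.interp(target_heart_rate(age), heart_rates, watts))

# Example: a 60-year-old reaching the 120-bpm target mid-ramp
print(target_heart_rate(60))                            # 120.0
print(pwc75_hrmax(60, [90, 110, 130], [30, 70, 110]))   # 90.0
```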
Statistical analysis: Because of clear sex and age differences in the prevalence of osteoporosis,19 participants were stratified by sex and age group (45–54, 55–64, and ≥65 years) and categorized into tertiles of knee extensor muscle strength within each stratum. The age categories were then combined by muscle strength level, creating age-adjusted tertiles. Continuous and categorical variables were expressed as median (interquartile range) or mean (standard deviation), and percentage, respectively. To reduce potential biases due to incomplete data,20 missing data were handled by multiple imputation in SPSS (IBM, Inc., Chicago, IL, USA), using random draws from a Markov chain Monte Carlo algorithm to produce 20 imputed datasets.21 Blood glucose, total cholesterol, HDL cholesterol, LDL cholesterol, TG, ALP, and UA served as auxiliary variables to account for the missing data; they were not used in the main analysis. The 20 datasets were combined using standard Rubin's rules. All missing values are presented in Table 1 (abbreviations: HDL, high-density lipoprotein; LDL, low-density lipoprotein; OSI, osteo-sono assessment index; YAM, young adult mean; data are presented as median [interquartile range], mean (standard deviation), or number (percentage) unless specified). To assess the association of each covariate, knee extensor muscle strength, and CRF with bone stiffness, univariable odds ratios and 95% CIs were calculated using logistic regression. The covariates were age,22 smoking status,23 drinking habits,24 BMI,25 systolic blood pressure,26 and menopause.23 To evaluate the association between knee extensor muscle strength and low bone stiffness tendency, multivariable odds ratios and 95% CIs were calculated using the lowest muscle strength group as the reference, after adjusting for smoking status, drinking habits, BMI, systolic blood pressure, and menopause (for women only). CRF was additionally entered into the mutually adjusted final model. In another model, knee extensor muscle strength was entered as a continuous variable to test linearity. The same analyses were repeated with CRF as the primary exposure and knee extensor muscle strength as the final covariate. To consider the moderating effect of menopausal status, a menopause-stratified multivariable analysis was also performed. To evaluate the interactive association of knee extensor muscle strength and CRF with low bone stiffness tendency, a CRF-stratified (below or above the median) multivariable analysis was performed. Odds ratios and 95% CIs were calculated as above, and the interaction term (knee extensor muscle strength × CRF) was entered into the models; a menopause-stratified version of this analysis was also conducted. To verify the validity of the applicable thresholds for low bone stiffness tendency, two sensitivity analyses were performed: first, the primary analysis was repeated with low bone stiffness tendency redefined from 80% to 70% of the YAM; second, a complete-case analysis was conducted to compare findings with and without accounting for missing values. All sensitivity analysis data are shown in eTables 1–7. All statistical analyses were performed using SPSS version 25, and a P-value <0.05 was considered significant.
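A compact sketch of this grouping-plus-regression pipeline is shown below. It assumes a pandas DataFrame with hypothetical column names (sex, age, knee_strength_nm_kg, low_bone_stiffness, smoking, drinking, bmi, sbp, crf) and uses statsmodels in place of SPSS, so it illustrates the analysis rather than reproducing the authors' code.

```python
# Illustrative sketch: age-adjusted strength tertiles and multivariable ORs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def age_group(age: float) -> str:
    return "45-54" if age < 55 else ("55-64" if age < 65 else "65+")

def add_strength_tertiles(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["age_group"] = out["age"].map(age_group)
    # Tertiles within each sex-by-age stratum, then pooled across strata
    out["strength_tertile"] = out.groupby(["sex", "age_group"])[
        "knee_strength_nm_kg"
    ].transform(lambda s: pd.qcut(s, 3, labels=["low", "middle", "high"]))
    return out

def multivariable_ors(df: pd.DataFrame) -> pd.DataFrame:
    data = df.assign(low_bone=df["low_bone_stiffness"].astype(int))
    # Lowest tertile as the reference; covariates follow the final model above
    fit = smf.logit(
        "low_bone ~ C(strength_tertile, Treatment('low')) "
        "+ smoking + drinking + bmi + sbp + crf",
        data=data,
    ).fit(disp=False)
    table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    table.columns = ["OR", "2.5%", "97.5%"]
    return table
```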
RESULTS: Altogether, 18,161 adults (8,902 men and 9,259 women) joined the service from April 1998 to July 2017. Of these, 9,112 aged <45 years and 220 without bone stiffness measurements were excluded (Figure 1). Of the final sample of 8,829 (3,731 men and 5,098 women), 542 men (14.5%) and 978 women (19.2%) had low bone stiffness tendency based on YAM80%. Similarly, 45 men (1.2%) and 49 women (1.0%) were classified as having low bone stiffness based on YAM70%. The lowest fitness category appeared to have a less favorable biomarker profile (e.g., higher triglycerides) than the other categories (Table 1). Table 2 shows the association between each potential covariate and bone stiffness. An association between smoking status and low bone stiffness tendency was found in men (P < 0.001) but not in women (P = 0.097). Low BMI, low knee extensor muscle strength, and low CRF were all associated with a higher prevalence of low bone stiffness tendency (Table 2 notes: CI, confidence interval; prevalence per 1,000 persons). An inverse association between knee extensor muscle strength and low bone stiffness tendency was observed after adjusting for potential confounders, including age, smoking status, drinking habits, BMI, systolic blood pressure, menopause (for women only), and CRF, in both sexes (P for linear trend <0.001 for both). Meanwhile, CRF and low bone stiffness tendency showed an inverse association in men (P for linear trend <0.001), but not in pre-menopausal, post-menopausal, or all women (P for linear trend = 0.634, 0.841, and 0.924, respectively), as shown in Table 3 (values are expressed as odds ratio [95% confidence interval]; additionally adjusted for smoking status, drinking habits, body mass index, systolic blood pressure, and menopause [for women only]; mutually adjusted for cardiorespiratory fitness or knee extensor muscle strength plus the variables in the multivariable adjustment). Knee extensor muscle strength and CRF showed no interactive association with low bone stiffness tendency in either sex (men, P = 0.836; all women, P = 0.700; post-menopausal women, P = 0.615), as shown in Table 4 (CI, confidence interval; prevalence per 1,000 persons; adjusted for age, smoking status, drinking habits, body mass index, systolic blood pressure, and menopause [for women only]; using the lowest cardiorespiratory fitness and the lowest knee extensor muscle strength as reference). DISCUSSION: Our primary findings were as follows. First, no interactive association between knee extensor muscle strength and CRF was found for low bone stiffness tendency in either sex. Second, higher knee extensor muscle strength was associated with a lower prevalence of low bone stiffness tendency in both sexes, independent of CRF. Third, higher CRF was associated with a lower prevalence of low bone stiffness tendency in men but not in women, independent of knee extensor muscle strength. These findings suggest that a combination of muscle strength and CRF may carry an additive, not synergistic, benefit for bone health. However, the association between CRF and bone health may not be evident in adult Japanese women. Contrary to our expectation, knee extensor muscle strength and CRF had no interactive association with bone stiffness. Hence, each of the two physical fitness elements is independently or additively, not synergistically, associated with bone health. As described in the subsequent paragraphs, the possible mechanisms linking muscle strength or CRF to bone strength may differ. However, it is unknown whether the two pathways interfere with or reinforce each other, owing to a lack of findings from basic mechanistic studies.
A previous randomized controlled trial revealed that resistance exercise, or a combination of resistance and aerobic exercise, prevented bone loss better than aerobic exercise alone during a weight loss program.27 This supports the notion that combined aerobic and resistance exercise may be recommended to prevent loss of musculoskeletal function,28 and it is consistent with our results. Knee extensor muscle strength was positively associated with bone stiffness after adjusting for CRF in both sexes. According to Wolff's law, the bone adapts its remodeling response to external stressors.29 Knee extensor muscle strength may act as such an external stressor and thereby improve bone stiffness. Another possible mechanism relates to muscle functioning as an endocrine organ that secretes myokines. Physiological studies have reported that muscles secrete myokines that activate bone metabolism, such as myostatin, insulin-like growth factor (IGF)-1, and IGF-2.30 A previous observational study reported an association between muscle strength and nonadjacent bones.31 However, the details of this association remain unclear, so further investigation of the mechanisms is warranted. Because bone mineral density is highly correlated with bone stiffness,13,14 this is reflected in our results, in which knee extensor muscle strength was associated with bone stiffness after adjusting for confounders. CRF was also deemed beneficial for bone health in men; a previous large-scale observational study showed similar trends.32 CRF partially reflects one's physical activity level and has been used as an index of health outcomes.33 Because physical activity produces mechanical loading on the musculoskeletal system, which enhances osteocyte activation and bone resorption and formation, the American College of Sports Medicine currently recommends weight-bearing physical activities.34 This study identified a sex difference in the association between CRF and bone stiffness (i.e., no association in women). From a biological viewpoint, one possible mechanism for bone loss is the greater age-dependent loss of estrogen in women, which reduces estrogen receptors and the osteogenic response.35 Accordingly, no association between CRF and bone health was observed in women, consistent with a previous study.36 This study has several noteworthy strengths. First, we analyzed a large population of participants across a wide age range (≥45 years). A difference between men and women in the age of onset of osteoporosis caused by declining bone strength has been observed;19 the number of patients with osteoporosis increases drastically after menopause in women and after the age of 60 in men.19 Second, we reduced potential biases by using multiple imputation.21 Third, this study expanded the body of knowledge in this area by examining the joint associations of knee extensor muscle strength and CRF with bone stiffness. Fourth, we used an isokinetic knee extensor dynamometer, which enabled precise measurements; although it requires expert skills, isokinetic muscle strength, which reflects active physical activity, can be assessed accurately.37 Meanwhile, there are also several limitations.
First, causal associations among knee extensor muscle strength, CRF, and bone stiffness could not be assessed owing to the cross-sectional design. Second, there was no information on habitual dietary intake, such as protein, which can contribute positively to bone mineral density;38 thus, residual confounding by dietary habits may remain. Third, information on the use of osteoporosis medication and history of surgery was lacking, so bone stiffness could be underestimated. Finally, information on habitual physical activity was lacking. Because about half of the variance in CRF is accounted for by genotype, CRF is not a direct marker of physical activity, although it may reflect an aspect of it;39 precise assessments of physical activity are therefore preferable. Conclusion: We found an additive association between knee extensor muscle strength and CRF for bone health. Moreover, an inverse association existed between knee extensor muscle strength and low bone stiffness tendency in both sexes. Accordingly, maintaining not only higher knee extensor muscle strength but also higher CRF may be important to prevent or delay the onset of osteoporosis. From a practical viewpoint, these results suggest that middle-aged and older men and women should be encouraged to participate in high-intensity physical activities with mechanical loading, such as running, rope jumping, and resistance exercise. Further longitudinal studies are needed to investigate the relationship between physical fitness and bone status.
Background: Knee extensor muscle strength and cardiorespiratory fitness (CRF) are major components of physical fitness. Because the interactive association of knee extensor muscle strength and CRF with bone health remains unclear, we aimed to investigate this association in Japanese adults. Methods: Altogether, 8,829 Japanese adults (3,731 men and 5,098 women) aged ≥45 years completed the maximum voluntary knee extension test, a submaximal exercise test, a medical examination, and a questionnaire on lifestyle habits. Using the osteo-sono assessment index, low bone stiffness tendency was defined as below 80% of the young adult mean. Multivariable odds ratios (ORs) and 95% confidence intervals (CIs) were calculated after confounder adjustment. Results: Overall, 542 men (14.5%) and 978 women (19.2%) had low bone stiffness tendency. We observed an inverse association between muscle strength and low bone stiffness tendency after adjustment for CRF in both sexes (P for linear trend <0.001). Compared with the lowest CRF, the multivariable ORs for low bone stiffness tendency in the highest CRF were 0.47 (95% CI, 0.36–0.62) for men and 1.05 (95% CI, 0.82–1.35) for post-menopausal women (P < 0.001 and P = 0.704, respectively). No interactive association between muscle strength and CRF for low bone stiffness tendency existed in either sex, irrespective of menopausal status. Conclusions: Knee extensor muscle strength and CRF were associated additively, not synergistically, with bone health. Maintaining high levels of both physical fitness components may improve musculoskeletal health in this cohort. The relationship between physical fitness and bone status should be investigated longitudinally in the future.
INTRODUCTION: Osteoporosis is characterized by low bone strength and an increased risk of bone fractures,1 which are highly associated with mortality.2,3 Moreover, because osteoporosis imposes a heavy economic burden on the gross domestic product,4 it is widely recognized as a serious public health concern, not only in Japan but also in other aged societies. Japan currently faces one of the most serious situations in Asia5 because the estimated number of patients with osteoporosis exceeds 12.8 million (men: 3 million; women: 9.8 million).6 To minimize the detrimental effects of osteoporotic fractures on patients' quality of life, early detection of low bone strength and preventive interventions, preferably based on risk stratification, are strongly needed. The established risk factors for low bone strength are advanced age, female sex, genetic factors, low body weight, and physical inactivity, including low physical fitness represented as cardiorespiratory fitness (CRF) and/or grip strength.7,8 Compared with low CRF, high CRF carries an odds ratio of 0.29 (95% confidence interval [CI], 0.12–0.71) for a femoral neck T-score ≤ −2.5.7 The hazard ratio for osteoporotic fracture per 5-kg reduction in hand grip strength was 1.49 (95% CI, 1.18–1.95) in a 10-year prospective cohort study.8 Maintaining high levels of physical fitness, assessed via CRF and/or hand grip strength, is thus a key factor for preventing low bone strength, which consequently lowers the risk of developing osteoporosis. Grip strength is often used as an indicator of muscle strength because it is a valid and reliable proxy for whole-body muscle strength.9 However, its use is controversial because it does not measure antigravity muscles, such as the knee extensors.10 Because knee extensor muscle strength predicts the risk of falls and mobility limitations, it may be a better indicator of bone strength.11,12 To our knowledge, it remains unclear whether knee extensor muscle strength is beneficially associated with bone strength. Moreover, CRF is also a predictor of low bone strength,7 yet information on the association between knee extensor muscle strength and bone strength at varying levels of CRF is limited. This study aimed to investigate the interactive associations of knee extensor muscle strength and CRF with bone stiffness as a marker of bone strength in Japanese adults. The findings from this cross-sectional study will allow us to design future longitudinal and clinical studies investigating the preventive effect of various strategies for osteoporosis, possibly resulting in a decrease in osteoporotic fractures.
10,961
321
[ 286, 1681, 182, 336, 313, 638, 122 ]
11
[ "bone", "strength", "knee", "bone stiffness", "stiffness", "participants", "muscle", "muscle strength", "extensor", "knee extensor" ]
[ "osteoporosis imposes heavy", "strategies osteoporosis", "associated mortality osteoporosis", "prevalence osteoporosis", "risk osteoporosis development" ]
[CONTENT] knee extensor muscle strength | cardiorespiratory fitness | bone stiffness | quantitative ultrasound [SUMMARY]
[CONTENT] Adult | Male | Female | Humans | Cardiorespiratory Fitness | Cross-Sectional Studies | Japan | Muscle Strength | Physical Fitness [SUMMARY]
[CONTENT] osteoporosis imposes heavy | strategies osteoporosis | associated mortality osteoporosis | prevalence osteoporosis | risk osteoporosis development [SUMMARY]
[CONTENT] bone | strength | knee | bone stiffness | stiffness | participants | muscle | muscle strength | extensor | knee extensor [SUMMARY]
[CONTENT] strength | bone strength | low bone strength | bone | risk | grip strength | grip | low | fractures | million [SUMMARY]
[CONTENT] participants | bone | test | knee | dxa | maximum | performed | stiffness | bone stiffness | strength [SUMMARY]
[CONTENT] women | men | low | bone stiffness | bone | stiffness | low bone stiffness | low bone | low bone stiffness tendency | stiffness tendency [SUMMARY]
[CONTENT] higher | extensor | extensor muscle | knee extensor muscle strength | muscle | bone | knee extensor muscle | knee extensor | extensor muscle strength | muscle strength [SUMMARY]
[CONTENT] bone | strength | knee | muscle | muscle strength | bone stiffness | stiffness | knee extensor | extensor | participants [SUMMARY]
[CONTENT] Knee ||| CRF | Japanese [SUMMARY]
[CONTENT] 8,829 | Japanese | 3,731 | 5,098 ||| 80% ||| 95% [SUMMARY]
[CONTENT] 542 | 14.5% | 978 | 19.2% ||| CRF ||| CRF | CRF | 0.47 | 95% | CI | 0.36-0.62 | 1.05 | 95% | CI | 0.82 | 0.704 ||| CRF [SUMMARY]
[CONTENT] Knee | CRF ||| ||| [SUMMARY]
[CONTENT] ||| CRF | Japanese ||| 8,829 | Japanese | 3,731 | 5,098 ||| 80% ||| 95% ||| 542 | 14.5% | 978 | 19.2% ||| CRF ||| CRF | CRF | 0.47 | 95% | CI | 0.36-0.62 | 1.05 | 95% | CI | 0.82 | 0.704 ||| CRF ||| CRF ||| ||| [SUMMARY]
Hospital-based prostate cancer screening in Vietnamese men with lower urinary tract symptoms: a classification and regression tree model.
36309745
Prostate cancer (PCa) is a common disease in men over 65 years of age and should be detected early while reducing unnecessary biopsies. This study aims to construct a classification and regression tree (CART) model (i.e., a risk stratification algorithm) using a multivariable approach to select Vietnamese men with lower urinary tract symptoms (LUTS) for PCa biopsy.
BACKGROUND
We conducted a case-control study on 260 men aged ≥50 years who visited MEDIC Medical Center, Vietnam, in 2017–2018 with self-reported LUTS. The case group included patients with a positive biopsy and the control group included patients with a negative biopsy diagnosis of PCa. Bayesian Model Averaging (BMA) was used to select the most parsimonious prediction model. A CART with 5-fold cross-validation was then constructed to select men who could benefit from PCa biopsy in a step-by-step and intuitive way.
METHODS
BMA suggested five potential prediction models, of which the most parsimonious included PSA, I-PSS, and age. CART advised the following cut-off points in the marked screening sequence: 18 < PSA < 33.5 ng/mL, I-PSS ≥ 19, and age ≥ 71. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if aged ≥ 71 years and 16% if aged < 71 years. Overall, the CART reached a high predictive value, with AUC = 0.915. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the CART at the 20% diagnostic probability threshold were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9%, respectively; at the 80% threshold, they were 79.2%, 92.3%, 91.2%, 81.6%, and 85.8%, respectively.
RESULTS
A CART model combining PSA, I-PSS, and age has practical use in hospital-based PCa screening of Vietnamese men with lower urinary tract symptoms.
CONCLUSION
[ "Male", "Humans", "Prostatic Neoplasms", "Prostate-Specific Antigen", "Early Detection of Cancer", "Case-Control Studies", "Bayes Theorem", "Vietnam", "Biopsy", "Lower Urinary Tract Symptoms", "Hospitals", "Asian People" ]
9617302
Background
Prostate cancer (PCa) is common in men, especially in those aged 65 years and older. It has the second-highest incidence among cancers in men (30.7 per 100 000) and ranks fifth in cancer mortality among men (7.7 per 100 000) worldwide [1]. In Vietnam, as of 2020, the incidence of PCa is 12.2 per 100 000 and the mortality rate is 5.1 per 100 000 [2]. Approximately 95–98% of PCa cases are adenocarcinomas that develop from glandular duct cells of the prostate [3]. PCa treatment depends primarily on the stage of development and on cell and patient characteristics. According to the American Cancer Society, PCa patients diagnosed at the localised or regional stage have a 5-year survival rate of over 90%; at the distant stage, however, the survival rate is only 30% [4]. Therefore, PCa should be detected at an early stage. Prostate-specific antigen (PSA) is a serine protease in the kallikrein family and is considered a tool for the screening and early detection of PCa [5]. It can help detect PCa as early as nine years before clinical symptoms appear [6]. There are two types of PCa screening studies using PSA: population-based and hospital-based (or opportunistic) screening [7]. The first involves testing asymptomatic men with PSA alone, with those showing elevated PSA referred immediately for biopsy. The latter involves testing men with symptoms (e.g., lower urinary tract symptoms) using PSA together with other clinical tools. Therefore, men referred for biopsy in population-based screening are at lower risk of having PCa than those in hospital-based screening. PSA-only screening could account for 45–70% of the reduction in PCa mortality [8], but it can also induce unnecessary biopsies [9]. In a 16-year follow-up of the European Randomized Study of Screening for Prostate Cancer (ERSPC), the unnecessary biopsy rate was 76% (i.e., 76% of elevated-PSA cases had a negative biopsy) [10]. In addition, the optimal cut-off value of PSA for confirming PCa remains to be determined [5, 11]. In particular, even at a low PSA level (i.e., lower than 4 ng/mL), the false-negative rate for PCa was as high as 15%, whereas at a high PSA level (i.e., higher than 10 ng/mL), the false-positive rate was 50% [5]. In Vietnam, population-based PCa screening using PSA was conducted 12 years ago; it indicated a low prevalence of PCa (2.5%) but a high rate of medium-grade lesions. The authors also implied that the benefit of a mass screening program for PCa was not proven; instead, selective PCa screening in usual care and at the hospital was superior in Vietnam. In hospital-based screening, combining clinical parameters, PSA, age, and other risk factors improved the prediction of prostate cancer [12–14]. The International Prostate Symptom Score (I-PSS) is a screening scale for lower urinary tract symptoms and is used to screen for non-specific prostate gland abnormalities. For PCa screening, the I-PSS showed reasonable sensitivity (78%), but its specificity was not high (59.4%) [15]. A previous study showed that PSA screening performance varied with different I-PSS values; therefore, combining PSA and I-PSS could improve screening benefits [16]. There is, however, a paucity of such practical multivariable algorithms for hospital-based PCa screening in Vietnam. The approach of PCa screening based on machine learning algorithms has only recently been applied.
Algorithms including logistic regression, artificial neural networks, random forests, support vector machines, and extreme and light gradient boosting machines have been shown to enhance PCa screening efficiency [13, 17–20]. However, these models do not support clinical decision-making in a step-by-step, intuitive manner. Classification and regression tree (CART) is an approach that allows physicians to apply the results of the screening process directly and intuitively [17, 21]. Our study aimed to investigate the association of PSA, I-PSS, and epidemiological and behavioural characteristics with PCa and then use these factors to construct a CART algorithm to select Vietnamese men with lower urinary tract symptoms (LUTS) for PCa biopsy. The algorithm is expected to help reduce the probability of a negative prostate biopsy (i.e., an unnecessary biopsy) while maintaining the ability to reduce PCa mortality for Vietnamese patients.
null
null
Results
Association of epidemiological and behavioural characteristics with prostate cancer

A total of 260 patients (130 in the case group vs. 130 in the control group) were included in the study. The median age of cases was significantly higher than that of controls (71 vs. 66). Based on univariable logistic regression, the risk factors for PCa were age and exposure to agricultural chemicals; physical exercise and fruit consumption were noted as protective factors of PCa (Table 1).

Table 1. Association of epidemiological and behavioural characteristics with prostate cancer – univariable logistic regression
Variable | Cases (n = 130), No. (%) | Controls (n = 130), No. (%) | OR (95% CI) | p-value
Epidemiological characteristics
Age (median [25–75 percentile]) | 71 (64–78) | 66 (61–71) | 1.07 (1.04–1.11) | < 0.001
Overweight or obesity (BMI ≥ 23) | 63 (48.5) | 49 (37.7) | 1.55 (0.92–2.63) | 0.103
Family history of prostate cancer (yes) | 2 (1.5) | 5 (3.9) | 0.39 (0.04–2.45) | 0.447
Existing urinary tract diseases (yes) | 17 (13.1) | 20 (15.4) | 0.82 (0.39–1.76) | 0.723
History of urinary surgery (yes) | 10 (7.8) | 12 (9.2) | 0.83 (0.31–2.18) | 0.824
Benign prostatic hyperplasia (yes) | 14 (10.8) | 24 (18.5) | 0.53 (0.24–1.14) | 0.113
Exposed to agrochemicals (yes) | 40 (30.8) | 20 (15.4) | 2.44 (1.29–4.73) | 0.005
Lifestyle behaviour
Physical activity (yes) | 68 (52.3) | 86 (66.2) | 0.65 (0.33–0.95) | 0.032
Current tobacco smoking (yes) | 62 (47.7) | 48 (36.9) | 1.56 (0.92–2.64) | 0.103
Heavy drinking (yes) | 14 (10.8) | 12 (9.2) | 1.19 (0.53–2.67) | 0.680
Food consumption behaviour
Red meat (≥ 3 times/week) | 107 (82.3) | 106 (81.5) | 1.05 (0.53–2.08) | 1.000
Fruits (≥ 3 times/week) | 65 (50.0) | 83 (63.9) | 0.57 (0.33–0.96) | 0.033
Vegetables (≥ 3 times/week) | 111 (85.4) | 120 (92.3) | 0.49 (0.19–1.16) | 0.114
Nuts (≥ 3 times/week) | 9 (6.9) | 9 (6.9) | 1.00 (0.34–2.95) | 1.000
Vegetable oil (≥ 3 times/week) | 104 (80.0) | 95 (73.1) | 1.47 (0.79–2.75) | 0.242
Tea (≥ 3 times/week) | 56 (43.1) | 53 (40.8) | 1.10 (0.67–1.80) | 0.706
Coffee (≥ 3 times/week) | 82 (63.1) | 80 (61.5) | 1.07 (0.63–1.82) | 0.898
Association of I-PSS and PSA with prostate cancer

The PCa odds ratio for each 1 ng/mL increase in PSA was 1.06 (95% CI, 1.05–1.08). The PCa odds ratio for each I-PSS point increase was 1.14 (95% CI, 1.09–1.18). All I-PSS items were significantly associated with PCa except “Straining”, which had the lowest OR of 1.16 (95% CI, 0.98–1.38). “Nocturia” had the highest OR at 2.22 (95% CI, 1.79–2.74), and “Urgency” came second with an OR of 1.41 (95% CI, 1.23–1.62) (Table 2).

Table 2. Association of I-PSS and PSA with prostate cancer – univariable logistic regression
Variable | Cases (n = 130), median (25–75 percentile) | Controls (n = 130), median (25–75 percentile) | OR (95% CI) | p
PSA concentration (ng/mL) | 89.5 (39.1–100) | 12 (8–19) | 1.06 (1.05–1.08) | < 0.001
I-PSS (total score) | 14 (8–19) | 6.5 (3–12) | 1.14 (1.09–1.18) | < 0.001
Incomplete emptying | 4 (1–5) | 1 (1–5) | 1.27 (1.12–1.44)
Frequency (every 2 h) | 2 (0–2) | 0 (0–3) | 1.22 (1.08–1.37)
Intermittency | 0 (0–1) | 0 (0–1) | 1.29 (1.02–1.64)
Urgency | 2 (0–5) | 0 (0–1) | 1.41 (1.23–1.62)
Weak stream | 0 (0–2) | 0 (0–0) | 1.32 (1.12–1.55)
Straining | 0 (0–1) | 0 (0–1) | 1.16 (0.98–1.38)
Nocturia | 4 (3–5) | 2 (1–3) | 2.22 (1.79–2.74)
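Because these are per-unit odds ratios from a logistic model, they compound multiplicatively over larger differences; a quick R check makes the magnitudes concrete:

```r
# Per-unit ORs compound multiplicatively on the odds scale.
1.06^10   # OR for a 10 ng/mL higher PSA: ~1.79
1.14^5    # OR for a 5-point higher I-PSS: ~1.93
```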
BMA for PCa prediction

BMA was used to determine whether PCa could be predicted by PSA, I-PSS, and epidemiological and behavioural variables. The BMA process suggested 27 models, of which the best five are shown in Table 3. The most parsimonious model (i.e., minimum explanatory variables and maximum discrimination power) included two variables: I-PSS and PSA concentration. The second most parsimonious model contained I-PSS, PSA, and age. The area under the ROC curve (AUC) of the most parsimonious model differed little from that of the second (0.931 vs. 0.929). Because age is an important factor for PCa screening and diagnosis in many previous studies [29–31] and is also a critical factor for disease mechanisms from a clinical standpoint, we chose the second model, with three variables, as the best model to use in the clinical setting (Table 3). This final model is also consistent with the final model suggested by the CART algorithm (details shown below).

Table 3. BMA prediction models using I-PSS, PSA, epidemiological, and behavioural characteristics for PCa
Model | Variable | OR (95% CI) | p | R2 (%) | AUC | BIC | Posterior probability
1 | IPSS | 1.12 (1.06–1.18) | < 0.001 | 49.2 | 0.931 | 199.9 | 0.364
1 | PSA | 1.06 (1.04–1.08) | < 0.001
1 | Intercept | 0.03 (0.01–0.08) | < 0.001
2 | IPSS | 1.11 (1.05–1.17) | < 0.001 | 50.3 | 0.929 | 201.4 | 0.174
2 | PSA | 1.06 (1.04–1.07) | < 0.001
2 | Age | 1.06 (1.01–1.10) | 0.047
2 | Intercept | 0.01 (0.00–0.04) | < 0.001
3 | IPSS | 1.13 (1.07–1.19) | < 0.001 | 50.1 | 0.929 | 205.5 | 0.120
3 | PSA | 1.06 (1.04–1.08) | < 0.001
3 | Fruits (≥ 3 times/week) | 0.50 (0.23–1.06) | 0.069
3 | Intercept | 0.05 (0.02–0.12) | < 0.001
4 | IPSS | 1.12 (1.06–1.19) | < 0.001 | 49.7 | 0.931 | 203.6 | 0.057
4 | PSA | 1.06 (1.04–1.08) | < 0.001
4 | Overweight or obesity | 1.68 (0.80–3.53) | 0.174
4 | Intercept | 0.03 (0.01–0.07) | < 0.001
5 | IPSS | 1.12 (1.06–1.18) | < 0.001 | 49.7 | 0.931 | 203.7 | 0.056
5 | PSA | 1.06 (1.04–1.08) | < 0.001
5 | Vegetable oil (≥ 3 times/week) | 1.84 (0.75–4.52) | 0.186
5 | Intercept | 0.02 (0.01–0.07) | < 0.001
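For illustration, the chosen model (model 2 in Table 3) can be written out as a scoring function by back-transforming the reported, rounded ORs; because of the rounding (especially of the intercept), the absolute probabilities are approximate:

```r
# Sketch of the selected BMA model (I-PSS + PSA + age), with coefficients
# reconstructed from the rounded ORs in Table 3. Illustrative only; this
# is not the fitted model object itself.
predict_pca <- function(ipss, psa, age) {
  lp <- log(0.01) + log(1.11) * ipss + log(1.06) * psa + log(1.06) * age
  1 / (1 + exp(-lp))                        # inverse logit
}
predict_pca(ipss = 19, psa = 30, age = 71)  # hypothetical patient, ~0.96
```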
CART for PCa screening

CART was deployed with all independent variables as input for PCa screening, and the final model is shown in Fig. 1.

Fig. 1. Trained CART in prostate cancer screening

The results indicated that PSA, I-PSS, and age played important roles in PCa screening. CART advised the following cut-off points in the marked screening sequence: 18 ≤ PSA < 33.5 ng/mL, I-PSS ≥ 19, and age ≥ 71. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if aged ≥ 71 and 16% if aged < 71. Overall, CART reached a high predictive value with an AUC of 0.915. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CART at the 20% diagnosis probability threshold were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9%, respectively; at the 80% threshold they were 79.2%, 92.3%, 91.2%, 81.6%, and 85.8%, respectively (Table 4).
Table 4. Diagnostic values of CART in PCa screening
Measure | Probability = 20% | Probability = 50% | Probability = 80%
Sensitivity | 0.915 | 0.915 | 0.792
Specificity | 0.862 | 0.862 | 0.923
Positive predictive value | 0.869 | 0.869 | 0.912
Negative predictive value | 0.912 | 0.912 | 0.816
Accuracy | 0.889 | 0.889 | 0.858
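The fitted tree reduces to a handful of explicit rules, written out below. The text does not report the leaf for I-PSS ≥ 19 when PSA < 33.5 ng/mL, so treating that branch as a biopsy referral with unreported risk is an assumption, not a study result:

```r
# The reported CART cut-offs as an explicit screening rule.
cart_risk <- function(psa, ipss, age) {
  if (psa >= 33.5) return(0.912)    # high-risk leaf
  if (ipss >= 19) return(NA_real_)  # branch not reported in the text: refer
  if (psa < 18) return(0.071)       # low-risk leaf
  if (age >= 71) 0.70 else 0.16     # 18 <= PSA < 33.5 and I-PSS < 19
}
cart_risk(psa = 25, ipss = 10, age = 73)  # 0.70
```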
Conclusion
CART advised the following cut-off points in the marked screening sequence: 18 ≤ PSA < 33.5 ng/mL, I-PSS ≥ 19, and age ≥ 71. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if aged ≥ 71 and 16% if aged < 71. Overall, CART reached a high predictive value with an AUC of 0.915. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CART at the 20% diagnosis probability threshold were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9%; at the 80% threshold they were 79.2%, 92.3%, 91.2%, 81.6%, and 85.8%. I-PSS, PSA, and age played important roles in PCa screening. A CART combining PSA, I-PSS, and age has practical use in hospital-based PCa screening for Vietnamese patients.
[ "Background", "Methods", "Study design and setting", "Participants", "Sample size", "Data collection and variables’ definition", "Statistical analysis", "Descriptive analysis", "Univariable logistic regression", "Bayesian model averaging (BMA) for model prediction", "CART model for PCa screening", "Association of epidemiological and behavioural characteristics with prostate cancer", "Association of I-PSS and PSA with prostate cancer", "BMA for PCa prediction", "CART for PCa screening", "Epidemiological and behavioral characteristics", "The role of PSA in PCa screening", "CART value for PCa screening" ]
[ "Prostate cancer (PCa) is common in men, especially in those aged 65 years and older. It has the second-highest incidence/prevalence (i.e., 30.7 per 100 000) and ranks fifth in cancer mortality rate among men (i.e., 7.7 per 100 000) worldwide [1]. In Vietnam, the incidence of PCa is 12.2 per 100 000, and the mortality rate is 5.1 per 100 000 as of 2020 [2]. Approximately 95–98% of PCa cases are adenocarcinomas that develop from adrenal duct cells [3]. PCa treatment depends primarily on the stage of development and the cell and patient characteristics. According to the American Cancer Society, PCa patients diagnosed at the localised or regional stage have a 5-year survival rate of over 90%. However, in the distant stage, the survival rate is only 30% [4]. Therefore, PCa should be detected at an early stage.\nProstate-specific antigen (PSA) is a serine protease in the kallikrein family and considered a tool for the screening and early detection of PCa [5]. It can help detect as early as nine years before having clinical symptoms [6]. There are two types of PCa screening studies using PSA, including population-based and hospital-based (or opportunistic) screenings [7]. The first type of screening deals with testing asymptomatic men with only PSA, and those with elevated PSA are immediately referred to biopsy. However, the latter type of screening involves testing men with some symptoms (e.g., lower urinary tract symptoms) using PSA and other clinical tools. Therefore, all men referred for biopsy in population-based screening are at lower risk of having PCa compared to that of hospital-based screening. PSA only based screening could accounted for 45–70% of the reduction in PCa mortality [8]; it could also induce the unnecessary biopsies [9]. In a 16 year follow-up of the European Randomized Study of Screening for Prostate Cancer (ERSPC), the unnecessary biopsy was 76% (i.e., 76% of elevated PSA cases have a negative biopsy) [10]. In addition, the optimal cut-off value of PSA for confirming PCa remains to be determined [11] [5]. In particular, even at a low level of PSA (that is, lower than 4 ng/mL), the false negative rate of PCa was high at 15%, whereas, at a high level of PSA (that is, higher than 10 ng/mL), the false positive rate was 50% [5].\nIn Vietnam, population-based PCa screening using PSA was conducted 12 years ago; it indicated a low prevalence of PCa (2.5%), but a high rate of medium grade lesions. The author also implied that the benefit of a mass screening program for PCa was not proven. Instead, a selective PCa screening in the usual care and at the hospital was superior in Vietnam. In hospital-based screening, combining clinical parameters, PSA, age, and other risk factors improved the prediction of prostate cancer [12–14]. International Prostate Symptom Score (I-PSS) is a screening scale for lower urinary tract symptoms and is used to screen non-specific prostate gland abnormalities. For PCa screening. For PCa screening, the I-PSS scale showed reasonable sensitivity (78%), but the specificity was not high (59.4%) [15]. A previous study showed that PSA screening performance varied with different I-PSS values. Therefore, combining PSA and I-PSS could improve the screening benefits [16]. There is, however, a paucity of such practical multivariable algorithm for hospital-based PCa screening in Vietnam.\nThe approach of PCa screening based on machine learning algorithms has only recently been applied. 
Algorithms including logistic regression, artificial neural networks, random forests, support vector machines, and extreme and light gradient boosting machines have been shown to enhance PCa screening efficiency [13, 17–20]. However, these models do not support clinical decision-making in a step-by-step, intuitive manner. Classification and regression tree (CART) is an approach that allows physicians to apply the results of the screening process directly and intuitively [17, 21].\nOur study aimed to investigate the association of PSA, I-PSS, and epidemiological and behavioural characteristics with PCa and then use these factors to construct a CART algorithm to select Vietnamese men with lower urinary tract symptoms (LUTS) for PCa biopsy. The algorithm is expected to help reduce the probability of a negative prostate biopsy (i.e., an unnecessary biopsy) while maintaining the ability to reduce PCa mortality for Vietnamese patients.", " Study design and setting We conducted a case-control study at the MEDIC Medical Center, Ho Chi Minh City, Vietnam. MEDIC is the first and most modern private medical centre in Vietnam. Every day, more than 4,000 patients visit the centre for examination and treatment. The study was approved by the local institutional ethics committee of the MEDIC Center, and the approval was signed on 15 July 2016.\n Participants Our study participants were men aged ≥ 50 years who visited the MEDIC Centre in 2017–2018 with self-reported lower urinary tract symptoms. The inclusion criteria were abnormal lower urinary tract symptoms or an enlarged prostate gland identified through DRE or ultrasound images. The exclusion criteria were acute prostatitis or refusal to participate in the study. All patients who met the selection criteria were prescribed a biopsy. The case group was defined as having a positive biopsy result for PCa, and the control group as having a negative biopsy result. Biopsies were based on 12-core transrectal ultrasound-guided biopsy of the prostate [22]. All patients provided written informed consent before participating in the study.\n Sample size The minimum sample size estimated for each group of the case-control study was 116 patients, providing 90% power and a 5% type I error to detect an odds ratio of 2.5. In Vietnam, PSA ≥ 10 ng/mL is considered the high-risk group for PCa. Therefore, we chose a proportion of PSA ≥ 10 ng/mL of 23% in the control group as the proportion of controls with exposure in the sample size formula [23]. We selected 130 patients per group to exceed the minimum sample size, hence a total of 260 patients.
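The per-group sample size can be roughly cross-checked in R. This is a sketch, not necessarily the exact formula of reference [23]: it assumes a standard two-proportion comparison, with the case-group exposure proportion implied by the 23% control exposure and the target odds ratio of 2.5.

```r
# Rough cross-check of the reported per-group sample size (116) using
# base R's two-proportion power calculation. Assumes the formula in [23]
# is the usual two-proportion comparison; inputs are taken from the text.
p0 <- 0.23                                            # controls exposed (PSA >= 10 ng/mL)
target_or <- 2.5                                      # odds ratio to detect
p1 <- (target_or * p0) / (1 + p0 * (target_or - 1))   # implied case exposure (~0.43)

power.prop.test(p1 = p0, p2 = p1, sig.level = 0.05, power = 0.90)
# n per group comes out near 116-117 before rounding up
```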
\n Data collection and variables’ definition We collected epidemiological and behavioural characteristics through interviews using a questionnaire and collected clinical and subclinical information from medical records. Epidemiological characteristics included age, number of children, overweight/obesity (BMI ≥ 23 kg/m2 [24]), family history of PCa, existence of urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, and exposure to agrochemicals. Lifestyle behaviours included physical activity (≥ 150 min/week of moderate or vigorous intensity [25]), current tobacco smoking, and heavy drinking (binge drinking (i.e., five or more drinks per occasion) on five or more days in the past month [26]). Food consumption behaviours were determined by the frequency of consumption of different food types, including red meat, fruits, vegetables, nuts, vegetable oil, tea, and coffee.\nThe International Prostate Symptom Score (I-PSS) was used to assess seven lower urinary tract symptoms: incomplete emptying, frequency, intermittency, urgency, weak stream, straining, and nocturia. The Vietnamese version of the I-PSS was published by the Vietnamese Ministry of Health and is recommended for benign prostatic hyperplasia assessment. Each I-PSS item was scored on a zero-to-five scale reflecting the severity of the symptom [27].\nEpidemiological and behavioural characteristics and the I-PSS were assessed by a single oncologist for consistency. The oncologist was trained in conducting interviews before joining the study. The questionnaire was piloted in ten interviews for structure and content adaptation.\nSerum PSA was quantified by a two-step chemiluminescent microparticle immunoassay (CMIA) on an Alinity Ci (Abbott) testing system. The system was calibrated, and quality control was performed at least once every day or when the reagent batch was changed [28].\n Statistical analysis Descriptive analysis Frequency and percentage were used to describe qualitative variables, including overweight/obesity, family history of PCa, existence of urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, exposure to agrochemicals, lifestyle behaviour, and food consumption behaviour. The median and quartiles were used to describe quantitative variables, including I-PSS, PSA concentration, and age. All descriptive analyses were stratified by the case and control groups.\n Univariable logistic regression A univariable logistic regression model was used to screen independent variables likely to be associated with PCa. The I-PSS score, PSA concentration, epidemiological characteristics, lifestyle behaviour, and food consumption behaviour were tested for association with PCa.
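As a minimal sketch of this univariable screen (the data frame `dat`, its column names, and a 0/1 outcome `pca` are hypothetical stand-ins, not the study's actual objects):

```r
# One logistic model per candidate predictor; report OR, Wald 95% CI, p.
# Assumes each predictor is numeric or 0/1 coded so its coefficient row
# is named after the variable itself.
vars <- c("age", "ipss", "psa", "agrochemicals", "physical_activity")

univariable <- sapply(vars, function(v) {
  fit <- glm(reformulate(v, response = "pca"), data = dat, family = binomial)
  beta <- coef(summary(fit))[v, ]
  ci <- confint.default(fit)[v, ]          # Wald CI on the log-odds scale
  c(or = exp(beta[["Estimate"]]),
    lower = exp(ci[[1]]), upper = exp(ci[[2]]),
    p = beta[["Pr(>|z|)"]])
})
t(univariable)                             # one row per variable
```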
\n Bayesian model averaging (BMA) for model prediction A BMA approach was used to search for the most parsimonious model for PCa prediction (i.e., minimum explanatory variables and maximum discrimination power) using PSA, I-PSS, and epidemiological and behavioural variables. In summary, with n variables there are 2^n possible models that can be constructed from them (not including interaction terms). BMA constructs all possible parsimonious prediction models based on the Bayesian Information Criterion (BIC) and the posterior probabilities of these models. The final model with high practical use in the clinical setting can then be chosen based on the BMA suggestions and clinical considerations.\n CART model for PCa screening CART was performed using the rpart function in the rpart package, R language (version 4.0.3). Five-fold cross-validation was used to train and test the CART model. All independent variables were used as CART input in this process. CART pruning was controlled by setting the maximum tree depth to 4 to keep model complexity reasonable, and the minimum number of observations allowed at each node was 10 to ensure sufficient supporting data. Diagnostic values of CART, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy (1 – misclassification error) at the 20%, 50%, and 80% probability cut-offs, were extracted.", "We conducted a case-control study at the MEDIC Medical Center, Ho Chi Minh City, Vietnam. MEDIC is the first and most modern private medical centre in Vietnam. Every day, more than 4,000 patients visit the centre for examination and treatment. The study was approved by the local institutional ethics committee of the MEDIC Center, and the approval was signed on 15 July 2016.", "Our study participants were men aged ≥ 50 years who visited the MEDIC Centre in 2017–2018 with self-reported lower urinary tract symptoms. The inclusion criteria were abnormal lower urinary tract symptoms or an enlarged prostate gland identified through DRE or ultrasound images. The exclusion criteria were acute prostatitis or refusal to participate in the study. All patients who met the selection criteria were prescribed a biopsy. The case group was defined as having a positive biopsy result for PCa, and the control group as having a negative biopsy result. Biopsies were based on 12-core transrectal ultrasound-guided biopsy of the prostate [22]. All patients provided written informed consent before participating in the study.", "The minimum sample size estimated for each group of the case-control study was 116 patients, providing 90% power and a 5% type I error to detect an odds ratio of 2.5. In Vietnam, PSA ≥ 10 ng/mL is considered the high-risk group for PCa. Therefore, we chose a proportion of PSA ≥ 10 ng/mL of 23% in the control group as the proportion of controls with exposure in the sample size formula [23]. We selected 130 patients per group to exceed the minimum sample size, hence a total of 260 patients.", "We collected epidemiological and behavioural characteristics through interviews using a questionnaire and collected clinical and subclinical information from medical records. Epidemiological characteristics included age, number of children, overweight/obesity (BMI ≥ 23 kg/m2 [24]), family history of PCa, existence of urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, and exposure to agrochemicals. Lifestyle behaviours included physical activity (≥ 150 min/week of moderate or vigorous intensity [25]), current tobacco smoking, and heavy drinking (binge drinking (i.e., five or more drinks per occasion) on five or more days in the past month [26]). Food consumption behaviours were determined by the frequency of consumption of different food types, including red meat, fruits, vegetables, nuts, vegetable oil, tea, and coffee.\nThe International Prostate Symptom Score (I-PSS) was used to assess seven lower urinary tract symptoms: incomplete emptying, frequency, intermittency, urgency, weak stream, straining, and nocturia. The Vietnamese version of the I-PSS was published by the Vietnamese Ministry of Health and is recommended for benign prostatic hyperplasia assessment. Each I-PSS item was scored on a zero-to-five scale reflecting the severity of the symptom [27].\nEpidemiological and behavioural characteristics and the I-PSS were assessed by a single oncologist for consistency. The oncologist was trained in conducting interviews before joining the study. The questionnaire was piloted in ten interviews for structure and content adaptation.\nSerum PSA was quantified by a two-step chemiluminescent microparticle immunoassay (CMIA) on an Alinity Ci (Abbott) testing system. The system was calibrated, and quality control was performed at least once every day or when the reagent batch was changed [28].", " Descriptive analysis Frequency and percentage were used to describe qualitative variables, including overweight/obesity, family history of PCa, existence of urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, exposure to agrochemicals, lifestyle behaviour, and food consumption behaviour. The median and quartiles were used to describe quantitative variables, including I-PSS, PSA concentration, and age. All descriptive analyses were stratified by the case and control groups.\n Univariable logistic regression A univariable logistic regression model was used to screen independent variables likely to be associated with PCa. The I-PSS score, PSA concentration, epidemiological characteristics, lifestyle behaviour, and food consumption behaviour were tested for association with PCa.\n Bayesian model averaging (BMA) for model prediction A BMA approach was used to search for the most parsimonious model for PCa prediction (i.e., minimum explanatory variables and maximum discrimination power) using PSA, I-PSS, and epidemiological and behavioural variables. In summary, with n variables there are 2^n possible models that can be constructed from them (not including interaction terms). BMA constructs all possible parsimonious prediction models based on the Bayesian Information Criterion (BIC) and the posterior probabilities of these models. The final model with high practical use in the clinical setting can then be chosen based on the BMA suggestions and clinical considerations.\n CART model for PCa screening CART was performed using the rpart function in the rpart package, R language (version 4.0.3). Five-fold cross-validation was used to train and test the CART model. All independent variables were used as CART input in this process. CART pruning was controlled by setting the maximum tree depth to 4 to keep model complexity reasonable, and the minimum number of observations allowed at each node was 10 to ensure sufficient supporting data. Diagnostic values of CART, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy (1 – misclassification error) at the 20%, 50%, and 80% probability cut-offs, were extracted.", "Frequency and percentage were used to describe qualitative variables, including overweight/obesity, family history of PCa, existence of urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, exposure to agrochemicals, lifestyle behaviour, and food consumption behaviour. The median and quartiles were used to describe quantitative variables, including I-PSS, PSA concentration, and age. All descriptive analyses were stratified by the case and control groups.", "A univariable logistic regression model was used to screen independent variables likely to be associated with PCa. The I-PSS score, PSA concentration, epidemiological characteristics, lifestyle behaviour, and food consumption behaviour were tested for association with PCa.", "A BMA approach was used to search for the most parsimonious model for PCa prediction (i.e., minimum explanatory variables and maximum discrimination power) using PSA, I-PSS, and epidemiological and behavioural variables. In summary, with n variables there are 2^n possible models that can be constructed from them (not including interaction terms). BMA constructs all possible parsimonious prediction models based on the Bayesian Information Criterion (BIC) and the posterior probabilities of these models. The final model with high practical use in the clinical setting can then be chosen based on the BMA suggestions and clinical considerations.", "CART was performed using the rpart function in the rpart package, R language (version 4.0.3). Five-fold cross-validation was used to train and test the CART model. All independent variables were used as CART input in this process. CART pruning was controlled by setting the maximum tree depth to 4 to keep model complexity reasonable, and the minimum number of observations allowed at each node was 10 to ensure sufficient supporting data. Diagnostic values of CART, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy (1 – misclassification error) at the 20%, 50%, and 80% probability cut-offs, were extracted.",
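A minimal sketch of the described CART fit follows. `dat` and its columns are hypothetical stand-ins; `rpart` is the tool named in the text, and `maxdepth`, `minbucket`, and `xval` mirror the stated depth-4, 10-observations-per-node, 5-fold settings (whether the authors used rpart's internal `xval` or an external CV loop is not stated).

```r
library(rpart)

# CART for PCa screening as described: depth <= 4, >= 10 observations
# per node, 5-fold cross-validation. `dat` has a 0/1 factor outcome `pca`
# plus predictors; all names here are illustrative.
fit <- rpart(
  pca ~ psa + ipss + age + agrochemicals + physical_activity + fruits,
  data = dat,
  method = "class",
  control = rpart.control(maxdepth = 4, minbucket = 10, xval = 5)
)

printcp(fit)                                               # CV error by subtree size
risk <- predict(fit, newdata = dat, type = "prob")[, "1"]  # P(PCa) per patient
```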
"A total of 260 patients (130 in the case group vs. 130 in the control group) were included in the study. The median age of cases was significantly higher than that of controls (71 vs. 66). Based on univariable logistic regression, the risk factors for PCa were age and exposure to agricultural chemicals. Physical exercise and fruit consumption were noted as protective factors of PCa (Table 1).\n\nTable 1. Association of epidemiological and behavioural characteristics with prostate cancer – univariable logistic regression\nVariable | Cases (n = 130), No. (%) | Controls (n = 130), No. (%) | OR (95% CI) | p-value\nEpidemiological characteristics\nAge (median [25–75 percentile]) | 71 (64–78) | 66 (61–71) | 1.07 (1.04–1.11) | < 0.001\nOverweight or obesity (BMI ≥ 23) | 63 (48.5) | 49 (37.7) | 1.55 (0.92–2.63) | 0.103\nFamily history of prostate cancer (yes) | 2 (1.5) | 5 (3.9) | 0.39 (0.04–2.45) | 0.447\nExisting urinary tract diseases (yes) | 17 (13.1) | 20 (15.4) | 0.82 (0.39–1.76) | 0.723\nHistory of urinary surgery (yes) | 10 (7.8) | 12 (9.2) | 0.83 (0.31–2.18) | 0.824\nBenign prostatic hyperplasia (yes) | 14 (10.8) | 24 (18.5) | 0.53 (0.24–1.14) | 0.113\nExposed to agrochemicals (yes) | 40 (30.8) | 20 (15.4) | 2.44 (1.29–4.73) | 0.005\nLifestyle behaviour\nPhysical activity (yes) | 68 (52.3) | 86 (66.2) | 0.65 (0.33–0.95) | 0.032\nCurrent tobacco smoking (yes) | 62 (47.7) | 48 (36.9) | 1.56 (0.92–2.64) | 0.103\nHeavy drinking (yes) | 14 (10.8) | 12 (9.2) | 1.19 (0.53–2.67) | 0.680\nFood consumption behaviour\nRed meat (≥ 3 times/week) | 107 (82.3) | 106 (81.5) | 1.05 (0.53–2.08) | 1.000\nFruits (≥ 3 times/week) | 65 (50.0) | 83 (63.9) | 0.57 (0.33–0.96) | 0.033\nVegetables (≥ 3 times/week) | 111 (85.4) | 120 (92.3) | 0.49 (0.19–1.16) | 0.114\nNuts (≥ 3 times/week) | 9 (6.9) | 9 (6.9) | 1.00 (0.34–2.95) | 1.000\nVegetable oil (≥ 3 times/week) | 104 (80.0) | 95 (73.1) | 1.47 (0.79–2.75) | 0.242\nTea (≥ 3 times/week) | 56 (43.1) | 53 (40.8) | 1.10 (0.67–1.80) | 0.706\nCoffee (≥ 3 times/week) | 82 (63.1) | 80 (61.5) | 1.07 (0.63–1.82) | 0.898", "The PCa odds ratio for each 1 ng/mL increase in PSA was 1.06 (95% CI, 1.05–1.08). The PCa odds ratio for each I-PSS point increase was 1.14 (95% CI, 1.09–1.18). All I-PSS items were significantly associated with PCa except “Straining”, which had the lowest OR of 1.16 (95% CI, 0.98–1.38). “Nocturia” had the highest OR at 2.22 (95% CI, 1.79–2.74), and “Urgency” came second with an OR of 1.41 (95% CI, 1.23–1.62) (Table 2).\n\nTable 2. Association of I-PSS and PSA with prostate cancer – univariable logistic regression\nVariable | Cases (n = 130), median (25–75 percentile) | Controls (n = 130), median (25–75 percentile) | OR (95% CI) | p\nPSA concentration (ng/mL) | 89.5 (39.1–100) | 12 (8–19) | 1.06 (1.05–1.08) | < 0.001\nI-PSS (total score) | 14 (8–19) | 6.5 (3–12) | 1.14 (1.09–1.18) | < 0.001\nIncomplete emptying | 4 (1–5) | 1 (1–5) | 1.27 (1.12–1.44)\nFrequency (every 2 h) | 2 (0–2) | 0 (0–3) | 1.22 (1.08–1.37)\nIntermittency | 0 (0–1) | 0 (0–1) | 1.29 (1.02–1.64)\nUrgency | 2 (0–5) | 0 (0–1) | 1.41 (1.23–1.62)\nWeak stream | 0 (0–2) | 0 (0–0) | 1.32 (1.12–1.55)\nStraining | 0 (0–1) | 0 (0–1) | 1.16 (0.98–1.38)\nNocturia | 4 (3–5) | 2 (1–3) | 2.22 (1.79–2.74)", "BMA was used to determine whether PCa could be predicted by PSA, I-PSS, and epidemiological and behavioural variables. The BMA process suggested 27 models, of which the best five are shown in Table 3. The most parsimonious model (i.e., minimum explanatory variables and maximum discrimination power) included two variables: I-PSS and PSA concentration. The second most parsimonious model contained I-PSS, PSA, and age. The area under the ROC curve (AUC) of the most parsimonious model differed little from that of the second (0.931 vs. 0.929).
Because age is an important factor for PCa screening and diagnosis in many previous studies [29–31] and is also a critical factor for disease mechanisms from a clinical standpoint, we chose the second model, with three variables, as the best model to use in the clinical setting (Table 3). This final model is also consistent with the final model suggested by the CART algorithm (details shown below).\n\nTable 3. BMA prediction models using I-PSS, PSA, epidemiological, and behavioural characteristics for PCa\nModel | Variable | OR (95% CI) | p | R2 (%) | AUC | BIC | Posterior probability\n1 | IPSS | 1.12 (1.06–1.18) | < 0.001 | 49.2 | 0.931 | 199.9 | 0.364\n1 | PSA | 1.06 (1.04–1.08) | < 0.001\n1 | Intercept | 0.03 (0.01–0.08) | < 0.001\n2 | IPSS | 1.11 (1.05–1.17) | < 0.001 | 50.3 | 0.929 | 201.4 | 0.174\n2 | PSA | 1.06 (1.04–1.07) | < 0.001\n2 | Age | 1.06 (1.01–1.10) | 0.047\n2 | Intercept | 0.01 (0.00–0.04) | < 0.001\n3 | IPSS | 1.13 (1.07–1.19) | < 0.001 | 50.1 | 0.929 | 205.5 | 0.120\n3 | PSA | 1.06 (1.04–1.08) | < 0.001\n3 | Fruits (≥ 3 times/week) | 0.50 (0.23–1.06) | 0.069\n3 | Intercept | 0.05 (0.02–0.12) | < 0.001\n4 | IPSS | 1.12 (1.06–1.19) | < 0.001 | 49.7 | 0.931 | 203.6 | 0.057\n4 | PSA | 1.06 (1.04–1.08) | < 0.001\n4 | Overweight or obesity | 1.68 (0.80–3.53) | 0.174\n4 | Intercept | 0.03 (0.01–0.07) | < 0.001\n5 | IPSS | 1.12 (1.06–1.18) | < 0.001 | 49.7 | 0.931 | 203.7 | 0.056\n5 | PSA | 1.06 (1.04–1.08) | < 0.001\n5 | Vegetable oil (≥ 3 times/week) | 1.84 (0.75–4.52) | 0.186\n5 | Intercept | 0.02 (0.01–0.07) | < 0.001", "CART was deployed with all independent variables as input for PCa screening, and the final model is shown in Fig. 1.\n\nFig. 1. Trained CART in prostate cancer screening\nThe results indicated that PSA, I-PSS, and age played important roles in PCa screening. CART advised the following cut-off points in the marked screening sequence: 18 ≤ PSA < 33.5 ng/mL, I-PSS ≥ 19, and age ≥ 71. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if aged ≥ 71 and 16% if aged < 71.\nOverall, CART reached a high predictive value with an AUC of 0.915. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CART at the 20% diagnosis probability threshold were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9%, respectively; at the 80% threshold they were 79.2%, 92.3%, 91.2%, 81.6%, and 85.8%, respectively (Table 4).\n\nTable 4. Diagnostic values of CART in PCa screening\nMeasure | Probability = 20% | Probability = 50% | Probability = 80%\nSensitivity | 0.915 | 0.915 | 0.792\nSpecificity | 0.862 | 0.862 | 0.923\nPositive predictive value | 0.869 | 0.869 | 0.912\nNegative predictive value | 0.912 | 0.912 | 0.816\nAccuracy | 0.889 | 0.889 | 0.858", "The study included 260 observations at the Medic Center HCMC, with 130 in the case group and 130 in the control group. Based on univariable logistic regression, the risk factors for PCa included age and exposure to agricultural chemicals, and the protective factors included physical exercise and fruit consumption.\nPrevious studies found that farming and exposure to agricultural chemicals are risk factors for PCa, but not for all agricultural chemicals [29, 30]. Exposure to a few specific pesticides, including fonofos, malathion, terbufos, azinphos-methyl, and dimethoate, has been associated with PCa [31–33]. Genomic analysis showed that pesticides might interact with genetic variants in pathways related to neurotransmitter release in PCa patients [34, 35]. Therefore, the relationship between exposure to agricultural chemicals and PCa is plausible.
Further epidemiological and mechanistic studies are needed to identify the relationships of PCa with specific agricultural chemicals, particularly in the Vietnamese context.\nAlthough our study found a relationship between physical exercise and PCa, there is a lack of supporting evidence in the literature. Recent reviews and meta-analyses reveal that the association between regular physical activity and a low risk of prostate cancer remains elusive [36, 37]. Given the many general health benefits of physical activity, its role in association with PCa needs to be clarified in further studies.\nThe association of fruit consumption with PCa was shown in our study and in recent studies [38, 39]. Total fruit intake significantly reduced PCa risk. However, our study did not analyse fruit subtypes. A previous study found that citrus fruit consumption is associated with PCa, whereas other fruit subtypes are not [38]. This relationship might be due to the anti-carcinogenic properties of vitamins and phytochemicals in citrus fruits [40, 41]. However, the causal relationship remains unclear because most findings are based on cross-sectional studies.\nThe CART model suggested that age is the most important of the epidemiological and behavioural characteristics. Therefore, age, PSA, and I-PSS were combined in the PCa screening CART model. In our study, 50% of cases were older than 71 years and 50% of controls were older than 66 years. PCa is a disease that commonly occurs in elderly men. Previous studies found that 75–80% of new cases occur in men aged over 65 years [42, 43]. Another study in the United States had an average participant age of 66 years [44]. A study by the European Association of Urology showed that PCa rarely occurred in men younger than 50 years and indicated that the median age of PCa patients was 70 years [45]. Giwercman et al. showed that age was the closest risk factor of PCa [46]. Similarly, our study detected age as an independent risk factor for PCa. According to the Bayesian model averaging process, the PCa risk increased by 6% for each additional year of age.\nThe role of I-PSS in PCa screening.\nThe International Prostate Symptom Score (I-PSS), with its seven recommended questions, became an international standard for assessing symptoms of urination dysfunction over the previous month. The scale can monitor changes in symptoms over time or after an intervention. Symptom severity assessment with the I-PSS scale is an important part of the initial evaluation, diagnosis, prediction, and monitoring of response to treatment [47, 48]. Our study found that I-PSS detected the symptoms of PCa (p < 0.001) in both univariable and multivariable analyses. A cohort study by Martin et al. [49] detected an association between I-PSS and PCa: for overall PCa, men with I-PSS ≥ 20 had a 2.26-fold increased risk compared to those with no symptoms, and for localised PCa, men with I-PSS ≥ 20 had a 4.6-fold increased risk [49]. A study by Hosseini et al. [15] also showed an association between I-PSS and PCa. The mean I-PSS score of the PCa group was 16.05, higher than that of the non-PCa group (6.84). The prevalence of patients with I-PSS ≥ 20 in the PCa group was 30.3%, higher than in the non-PCa group. The sensitivity and specificity of I-PSS at the cut-off I-PSS ≥ 20 were 78% and 59%, respectively [15].
Our study and data have provided evidence about the relationship of PCa screening value with I-PSS. In Vietnam, according to the Ministry of Health, I-PSS has not yet been recommended for PCa initial screening; however, I-PSS has been recommended for benign hypertrophy of prostate – a disease that has many symptoms similar to early-stage PCa symptoms. Our study recommended using I-PSS for initial screening for any patient who has self-reported lower urinary tract symptoms.\n The role of PSA in PCa screening Prostatic specific antigen (PSA) is an antigens-proteolytic protein that is secreted by prostate cells and excreted into the glandular microducts, which are largely poured into the sperm through the crystalline ducts, and smaller portions are poured into the serum and lymphatic secretions. PSA increases in PCa, prostate benign proliferation, and prostate inflammation after the procedure (cystoscopy, catheterisation of urethral, prostate massage, after a prostate biopsy within 4 weeks, after ejaculation within 48 h). PSA decreases by 50% when taking 5 alpha-reductase inhibitors with a continuous period of over 6 months [50].\nIn our study, PSA shown a significant associated with PCa and is an important predictor for PCa in both BMA and CART algorithm. Currently, all guidelines of the American and European Urogenital Societies use PSA cut-off levels ranging from 2 ng/mL to 4 ng/mL in order to make prostate biopsy decision [51]. Meanwhile, researchers chose PSA > 4 ng/mL as the cut-off level to ensure high sensitivity in screening [52–54]. According to Vietnam Ministry of Health guideline, PSA > 4 ng/mL has been recommended for selecting patients with lower urinary tract symptoms for a further clinical assessment for PCa diagnosis. The cut-off value of PSA for referring biopsy, however, is not determined. Previous studies showed that only using PSA for PCa screening before biopsy could tend to the high probability of a negative biopsy out of elevated PSA cases (high proportion of unnecessary biopsy). In PCa patients, only 65–75% of cases have PSA > 4 ng/mL; 35% of the remaining PSA cases remain at a normal level [55]. The study by Thompson et al. [56] in U.S. on cancer screening with 2950 men over 50 years old showed that 15.2% of patients had PSA < 4 ng/mL got prostate cancer, as well as 14.9% of the negative prediction group with a Gleason score of \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$\\ge$$\\end{document}7 [56]. Wright et al. [57] found that a PSA threshold of > 4 ng/mL detected more cancer cases but increased unnecessary biopsy cases [57]. Morgan et al. [28] noted that the sensitivity of reached 98.2% at PSA cut-off level of > 4 ng/mL, and the sensitivity at PSA > 10 ng/mL was 91% with a specificity of 54% [28]. In some cases, the serum PSA values in the PCa and non-PCa groups overlapped, especially when PSA levels were 4–10 ng/mL. 
The PSA value in this range has been called the “diagnostic gray zone”, according to Shariat and Karakiewicz [58].
In hospital-based PCa screening, combining PSA, clinical parameters, age, and other risk factors could reduce the rate of unnecessary biopsies while maintaining the ability to reduce PCa mortality [12–14].
", "To remedy the inherent limitations of PSA in PCa screening, we combined PSA with I-PSS and the main risk factors of PCa to build the CART model. Based on CART, patients with PSA > 33.5 ng/mL had a PCa risk of up to 91.2%.
Patients with I-PSS < 19 and PSA < 18 ng/mL were at 7.1% risk. CART overcomes the limitations of using only I-PSS or only PSA for screening. Other machine learning algorithms have been used for PCa screening in previous studies and have achieved better performance than PSA alone. In a study by Babaian et al. [59], a neural network algorithm for PCa screening showed improved performance compared with PSA alone, but the specificity of the neural network was poor (lower than 65%) [59]. A study by Satoshi et al. [17] showed that artificial neural networks, random forests, and support vector machines improved overall performance compared with PSA alone; however, sensitivity and specificity were usually lower than 80% [17]. Our CART algorithm with three variables, PSA, I-PSS, and age, showed relatively high predictive power (AUC = 0.915). In addition, the CART algorithm can support physicians in making clinical decisions in a step-by-step, intuitive manner; hence, it has practical use in the daily clinical setting. At the 20% diagnostic probability threshold, CART showed a high negative predictive value (91.2%), and at the 80% threshold it also had a high positive predictive value (91.2%). Therefore, we recommend the 20% probability threshold for negative prediction and the 80% threshold for referring patients for prostate biopsy. For patients with a PCa probability between 20% and 80%, further tests, including digital rectal examination (DRE), a repeat PSA test after one month, and transrectal ultrasonography (TRUS), should be considered to reduce unnecessary biopsies while preserving the ability to diagnose PCa early.
The study has some limitations. First, we lacked other tests, such as DRE, TRUS, and biomarkers, that can contribute to biopsy decisions [5, 7]. Second, the CART model has not yet been tested in other populations to establish its validity and reliability. Finally, we could not estimate the overdiagnosis rate of PCa in this study. Further studies are warranted to overcome these limitations." ]
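As an illustrative sketch only (not code from the original analysis), the threshold-specific diagnostic values discussed above can be computed in R from predicted probabilities; the vectors p (predicted PCa probability) and y (biopsy outcome, 1 = PCa) are hypothetical placeholders.

    # Sketch: diagnostic values at a probability threshold.
    # 'p' = predicted PCa probabilities, 'y' = true labels (1 = PCa);
    # both are hypothetical placeholders, not objects from the study.
    diag_at <- function(p, y, cut) {
      pred <- as.integer(p >= cut)
      tp <- sum(pred == 1 & y == 1); fp <- sum(pred == 1 & y == 0)
      fn <- sum(pred == 0 & y == 1); tn <- sum(pred == 0 & y == 0)
      c(sensitivity = tp / (tp + fn),
        specificity = tn / (tn + fp),
        ppv         = tp / (tp + fp),
        npv         = tn / (tn + fn),
        accuracy    = (tp + tn) / length(y))
    }
    # Evaluate at the 20%, 50%, and 80% thresholds, as in Table 4.
    sapply(c(0.2, 0.5, 0.8), function(cut) diag_at(p, y, cut))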
[ "Background", "Methods", "Study design and setting", "Participants", "Sample size", "Data collection and variables’ definition", "Statistical analysis", "Descriptive analysis", "Univariable logistic regression", "Bayesian model averaging (BMA) for model prediction", "CART model for PCa screening", "Results", "Association of epidemiological and behavioural characteristics with prostate cancer", "Association of I-PSS and PSA with prostate cancer", "BMA for PCa prediction", "CART for PCa screening", "Discussion", "Epidemiological and behavioral characteristics", "The role of PSA in PCa screening", "CART value for PCa screening", "Conclusion" ]
[ "Prostate cancer (PCa) is common in men, especially in those aged 65 years and older. It has the second-highest incidence/prevalence (i.e., 30.7 per 100 000) and ranks fifth in cancer mortality rate among men (i.e., 7.7 per 100 000) worldwide [1]. In Vietnam, the incidence of PCa is 12.2 per 100 000, and the mortality rate is 5.1 per 100 000 as of 2020 [2]. Approximately 95–98% of PCa cases are adenocarcinomas that develop from adrenal duct cells [3]. PCa treatment depends primarily on the stage of development and the cell and patient characteristics. According to the American Cancer Society, PCa patients diagnosed at the localised or regional stage have a 5-year survival rate of over 90%. However, in the distant stage, the survival rate is only 30% [4]. Therefore, PCa should be detected at an early stage.\nProstate-specific antigen (PSA) is a serine protease in the kallikrein family and considered a tool for the screening and early detection of PCa [5]. It can help detect as early as nine years before having clinical symptoms [6]. There are two types of PCa screening studies using PSA, including population-based and hospital-based (or opportunistic) screenings [7]. The first type of screening deals with testing asymptomatic men with only PSA, and those with elevated PSA are immediately referred to biopsy. However, the latter type of screening involves testing men with some symptoms (e.g., lower urinary tract symptoms) using PSA and other clinical tools. Therefore, all men referred for biopsy in population-based screening are at lower risk of having PCa compared to that of hospital-based screening. PSA only based screening could accounted for 45–70% of the reduction in PCa mortality [8]; it could also induce the unnecessary biopsies [9]. In a 16 year follow-up of the European Randomized Study of Screening for Prostate Cancer (ERSPC), the unnecessary biopsy was 76% (i.e., 76% of elevated PSA cases have a negative biopsy) [10]. In addition, the optimal cut-off value of PSA for confirming PCa remains to be determined [11] [5]. In particular, even at a low level of PSA (that is, lower than 4 ng/mL), the false negative rate of PCa was high at 15%, whereas, at a high level of PSA (that is, higher than 10 ng/mL), the false positive rate was 50% [5].\nIn Vietnam, population-based PCa screening using PSA was conducted 12 years ago; it indicated a low prevalence of PCa (2.5%), but a high rate of medium grade lesions. The author also implied that the benefit of a mass screening program for PCa was not proven. Instead, a selective PCa screening in the usual care and at the hospital was superior in Vietnam. In hospital-based screening, combining clinical parameters, PSA, age, and other risk factors improved the prediction of prostate cancer [12–14]. International Prostate Symptom Score (I-PSS) is a screening scale for lower urinary tract symptoms and is used to screen non-specific prostate gland abnormalities. For PCa screening. For PCa screening, the I-PSS scale showed reasonable sensitivity (78%), but the specificity was not high (59.4%) [15]. A previous study showed that PSA screening performance varied with different I-PSS values. Therefore, combining PSA and I-PSS could improve the screening benefits [16]. There is, however, a paucity of such practical multivariable algorithm for hospital-based PCa screening in Vietnam.\nThe approach of PCa screening based on machine learning algorithms has only recently been applied. 
Algorithms including logistic regression, artificial neural networks, random forests, support vector machines, and extreme and light gradient boosting machines have been demonstrated to enhance PCa screening efficiency [13, 17–20]. However, these models do not help make clinical decisions in a step-by-step, intuitive manner. The classification and regression tree (CART) is an approach that allows physicians to apply the results of the screening process directly and intuitively [17, 21].
Our study aimed to investigate the associations of PSA, I-PSS, and epidemiological and behavioural characteristics with PCa, and then to use these factors to construct a classification and regression tree (CART) algorithm for selecting Vietnamese men with lower urinary tract symptoms (LUTS) for PCa biopsy. The algorithm is expected to help reduce the probability of a negative prostate biopsy (i.e., an unnecessary biopsy) while maintaining the ability to reduce PCa mortality for Vietnamese patients.", " Study design and setting We conducted a case-control study at the MEDIC Medical Center, Ho Chi Minh City, Vietnam. MEDIC is the first and most modern private medical centre in Vietnam; every day, more than 4,000 patients visit the centre for examination and treatment. The study was approved by the local institutional ethics committee of the MEDIC Center, and the ethics opinion was signed on 15 July 2016.
 Participants Our study participants were men aged ≥ 50 years who visited the MEDIC Centre in 2017–2018 with self-reported lower urinary tract symptoms. The inclusion criteria were abnormal lower urinary tract symptoms or an enlarged prostate gland identified through DRE or ultrasound images. The exclusion criteria were acute prostatitis or refusal to participate in the study. All patients who met the selection criteria were prescribed a biopsy. The case group was defined by a positive biopsy result for PCa, and the control group by a negative biopsy result. Biopsies were performed as 12-core transrectal ultrasound-guided biopsies of the prostate [22]. All patients provided written informed consent before participating in the study.
 Sample size The minimum sample size estimated for each group of this case-control study was 116 patients, to provide 90% power with a 5% type I error to detect an odds ratio of 2.5. In Vietnam, PSA ≥ 10 ng/mL is considered to define the high-risk group for PCa. Therefore, we chose a proportion of PSA ≥ 10 ng/mL of 23% in the control group as the proportion of controls with exposure in the sample size formula [23]. We selected 130 patients for each group, above the minimum sample size, for a total of 260 patients.
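The reported figure can be checked approximately with base R's power.prop.test. This is a sketch under the assumption that a standard two-proportion formula was used; the case-group exposure proportion implied by an odds ratio of 2.5 and a 23% control proportion (about 0.43) is derived below, and this is not the authors' original computation.

    # Sketch: approximate the reported per-group sample size.
    p0 <- 0.23                    # exposure (PSA >= 10 ng/mL) in controls
    or <- 2.5                     # smallest odds ratio to be detected
    odds1 <- or * p0 / (1 - p0)   # odds of exposure among cases
    p1 <- odds1 / (1 + odds1)     # about 0.43
    power.prop.test(p1 = p0, p2 = p1, sig.level = 0.05, power = 0.90)
    # Returns n of roughly 117 per group; the small gap from the reported
    # 116 can come from rounding or the exact formula variant used.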
 Data collection and variables’ definition We collected epidemiological and behavioural characteristics through interviews using a questionnaire and collected clinical and subclinical information from medical records. Epidemiological characteristics included age, number of children, overweight/obesity (BMI ≥ 23 kg/m2 [24]), family history of PCa, existing urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, and exposure to agrochemicals. Lifestyle behaviours included physical activity (≥ 150 min/week of moderate or vigorous intensity [25]), current tobacco smoking, and heavy drinking (binge drinking, i.e., five or more drinks per occasion, on five or more days in the past month [26]). Food consumption behaviours were determined by the frequency of consumption of different food types, including red meat, fruits, vegetables, nuts, vegetable oil, tea, and coffee.
The International Prostate Symptom Score (I-PSS) was used to assess seven lower urinary tract symptoms: incomplete emptying, frequency, intermittency, urgency, weak stream, straining, and nocturia. The Vietnamese version of the I-PSS was published by the Vietnamese Ministry of Health and is recommended for benign prostatic hyperplasia assessment. Each I-PSS item was scored on a zero-to-five scale reflecting the severity of the symptom [27].
Epidemiological and behavioural characteristics and the I-PSS were assessed by a single oncologist for consistency. The oncologist was trained in conducting interviews before joining the study. The questionnaire was piloted in ten interviews to adapt its structure and content.
Serum PSA was quantified by a two-step chemiluminescent microparticle immunoassay (CMIA) on an Alinity ci (Abbott) testing system. The testing system was calibrated, and quality control was performed at least once every day or whenever the reagent batch was changed [28].
 Statistical analysis Descriptive analysis Frequency and percentage were used to describe qualitative variables, including overweight/obesity, family history of PCa, existing urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, exposure to agrochemicals, lifestyle behaviours, and food consumption behaviours. The median and quartiles were used to describe quantitative variables, including I-PSS, PSA concentration, and age. All descriptive analyses were stratified by case and control groups.
 Univariable logistic regression A univariable logistic regression model was used to screen independent variables likely to be associated with PCa. The I-PSS score, PSA concentration, epidemiological characteristics, lifestyle behaviours, and food consumption behaviours were tested for association with PCa.
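As a sketch of how such a univariable screen can be implemented in R (the data frame dat and its column names, such as pca and psa, are hypothetical placeholders for the study data):

    # Sketch: univariable logistic regression for one candidate variable.
    fit <- glm(pca ~ psa, data = dat, family = binomial)
    summary(fit)                                      # Wald test for PSA
    exp(cbind(OR = coef(fit), confint.default(fit)))  # OR with Wald 95% CI
    # Repeating the same call for I-PSS, age, and each epidemiological or
    # behavioural variable reproduces a screen like Tables 1 and 2.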
 Bayesian model averaging (BMA) for model prediction A BMA approach was used to search for the most parsimonious model for PCa prediction (i.e., minimum explanatory variables and maximum discrimination power) using PSA, I-PSS, and the epidemiological and behavioural variables. In brief, with n candidate variables there are 2^n possible models that can be constructed from them (not including interaction terms). BMA constructs the possible parsimonious prediction models and ranks them by the Bayesian Information Criterion (BIC) and their posterior probabilities. The final model, with high practical use in the clinical setting, can then be chosen based on the BMA suggestions and clinical considerations.
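A minimal sketch of such a search with the R package BMA is shown below; the data frame dat and the particular predictor set are hypothetical placeholders, and the original analysis may have used different settings.

    # Sketch: BMA model search over candidate predictors of PCa.
    library(BMA)
    bma_fit <- bic.glm(pca ~ psa + ipss + age + fruits + physical_activity,
                       data = dat, glm.family = "binomial")
    summary(bma_fit)        # models ranked by BIC, with posterior probabilities
    imageplot.bma(bma_fit)  # shows which variables enter the top models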
 CART model for PCa screening CART was performed using the rpart function of the rpart package in R (version 4.0.3). Five-fold cross-validation was used to train and test the CART model. All independent variables were used as CART inputs in this process. CART pruning was controlled by setting the maximum depth of the tree to 4, to keep the complexity reasonable, and the minimum number of observations allowed at each node to 10, to ensure sufficient supporting data. Diagnostic values of CART, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy (1 – misclassification error), were extracted at the 20%, 50%, and 80% probability cut-offs.", 
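A sketch of this CART specification in R follows; dat is a hypothetical placeholder, and it is an assumption on our part that the 10-observation minimum maps to rpart's minbucket and that the five folds correspond to rpart's internal xval setting.

    # Sketch: CART as described (class tree, depth <= 4, >= 10 obs per node).
    library(rpart)
    cart_fit <- rpart(pca ~ ., data = dat, method = "class",
                      control = rpart.control(maxdepth = 4,
                                              minbucket = 10,
                                              xval = 5))  # 5 internal CV folds
    printcp(cart_fit)                     # cross-validated error by tree size
    p_hat <- predict(cart_fit, type = "prob")[, 2]  # per-patient PCa probability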
" Association of epidemiological and behavioural characteristics with prostate cancer A total of 260 patients (130 in the case group vs. 130 in the control group) were included in the study. The median age of the cases was significantly higher than that of the controls (71 vs. 66 years).
Based on univariable logistic regression, the risk factors for PCa included age and exposure to agricultural chemicals; physical exercise and fruit consumption were noted as protective factors against PCa (Table 1).

Table 1. Association of epidemiological and behavioural characteristics with prostate cancer – univariable logistic regression
Variable | Cases (n = 130), No. (%) | Controls (n = 130), No. (%) | OR (95% CI) | p-value
Epidemiological characteristics
Age, median (25th–75th percentile) | 71 (64–78) | 66 (61–71) | 1.07 (1.04–1.11) | < 0.001
Overweight or obesity (BMI ≥ 23) | 63 (48.5) | 49 (37.7) | 1.55 (0.92–2.63) | 0.103
Family history of prostate cancer (yes) | 2 (1.5) | 5 (3.9) | 0.39 (0.04–2.45) | 0.447
Existing urinary tract diseases (yes) | 17 (13.1) | 20 (15.4) | 0.82 (0.39–1.76) | 0.723
History of urinary surgery (yes) | 10 (7.8) | 12 (9.2) | 0.83 (0.31–2.18) | 0.824
Benign prostatic hyperplasia (yes) | 14 (10.8) | 24 (18.5) | 0.53 (0.24–1.14) | 0.113
Exposed to agrochemicals (yes) | 40 (30.8) | 20 (15.4) | 2.44 (1.29–4.73) | 0.005
Lifestyle behaviour
Physical activity (yes) | 68 (52.3) | 86 (66.2) | 0.65 (0.33–0.95) | 0.032
Current tobacco smoking (yes) | 62 (47.7) | 48 (36.9) | 1.56 (0.92–2.64) | 0.103
Heavy drinking (yes) | 14 (10.8) | 12 (9.2) | 1.19 (0.53–2.67) | 0.680
Food consumption behaviour
Red meat (≥ 3 times/week) | 107 (82.3) | 106 (81.5) | 1.05 (0.53–2.08) | 1.000
Fruits (≥ 3 times/week) | 65 (50.0) | 83 (63.9) | 0.57 (0.33–0.96) | 0.033
Vegetables (≥ 3 times/week) | 111 (85.4) | 120 (92.3) | 0.49 (0.19–1.16) | 0.114
Nuts (≥ 3 times/week) | 9 (6.9) | 9 (6.9) | 1.00 (0.34–2.95) | 1.000
Vegetable oil (≥ 3 times/week) | 104 (80.0) | 95 (73.1) | 1.47 (0.79–2.75) | 0.242
Tea (≥ 3 times/week) | 56 (43.1) | 53 (40.8) | 1.10 (0.67–1.80) | 0.706
Coffee (≥ 3 times/week) | 82 (63.1) | 80 (61.5) | 1.07 (0.63–1.82) | 0.898
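As a worked check of Table 1, the agrochemical odds ratio follows directly from the cross-tabulated counts. The sketch below uses the simple Woolf (log-OR) interval, so its bounds need not match the regression-based interval reported in the table exactly.

    # Sketch: odds ratio for agrochemical exposure from Table 1 counts
    # (exposed: 40/130 cases vs. 20/130 controls).
    or <- (40 * 110) / (90 * 20)             # 2.44, as reported in Table 1
    se <- sqrt(1/40 + 1/90 + 1/20 + 1/110)   # SE of log(OR)
    exp(log(or) + c(-1, 1) * 1.96 * se)      # roughly 1.34 to 4.47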
 Association of I-PSS and PSA with prostate cancer The PCa odds ratio for a 1 ng/mL increase in PSA was 1.06 (95% CI, 1.05–1.08). The PCa odds ratio for each one-point increase in I-PSS was 1.14 (95% CI, 1.09–1.18). All I-PSS items were significantly associated with PCa except the “Straining” item, which had the lowest OR (1.16; 95% CI, 0.98–1.38). The “Nocturia” item had the highest OR (2.22; 95% CI, 1.79–2.74), and the “Urgency” item was second (OR = 1.41; 95% CI, 1.23–1.62) (Table 2).

Table 2. Association of I-PSS and PSA with prostate cancer – univariable logistic regression
Variable | Cases (n = 130), median (25th–75th percentile) | Controls (n = 130), median (25th–75th percentile) | OR (95% CI) | p
PSA concentration (ng/mL) | 89.5 (39.1–100) | 12 (8–19) | 1.06 (1.05–1.08) | < 0.001
I-PSS (total score) | 14 (8–19) | 6.5 (3–12) | 1.14 (1.09–1.18) | < 0.001
Incomplete emptying | 4 (1–5) | 1 (1–5) | 1.27 (1.12–1.44) |
Frequency (every 2 h) | 2 (0–2) | 0 (0–3) | 1.22 (1.08–1.37) |
Intermittency | 0 (0–1) | 0 (0–1) | 1.29 (1.02–1.64) |
Urgency | 2 (0–5) | 0 (0–1) | 1.41 (1.23–1.62) |
Weak stream | 0 (0–2) | 0 (0–0) | 1.32 (1.12–1.55) |
Straining | 0 (0–1) | 0 (0–1) | 1.16 (0.98–1.38) |
Nocturia | 4 (3–5) | 2 (1–3) | 2.22 (1.79–2.74) |
 BMA for PCa prediction To determine whether PCa could be predicted by PSA, I-PSS, and the epidemiological and behavioural variables, BMA was applied. Twenty-seven models were suggested by the BMA process; the best five are shown in Table 3. The most parsimonious model (i.e., minimum explanatory variables and maximum discrimination power) included two variables: I-PSS and PSA concentration. The second most parsimonious model contained I-PSS, PSA, and age. The area under the ROC curve (AUC) of the most parsimonious model was not much different from that of the second model (0.931 vs. 0.929). Because age is an important factor for PCa screening and diagnosis in many previous studies [29–31] and is also a critical factor in disease mechanisms from a clinical standpoint, we chose the second model, with three variables, as the best model for use in the clinical setting (Table 3). This final model is also in line with the final model suggested by the CART algorithm (details shown below).

Table 3. BMA prediction models using I-PSS, PSA, epidemiological, and behavioural characteristics for PCa (R2, AUC, BIC, and posterior probability are reported once per model)
Model | Variable | OR (95% CI) | p | R2 (%) | AUC | BIC | Posterior probability
1 | I-PSS | 1.12 (1.06–1.18) | < 0.001 | 49.2 | 0.931 | 199.9 | 0.364
1 | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
1 | Intercept | 0.03 (0.01–0.08) | < 0.001 | | | |
2 | I-PSS | 1.11 (1.05–1.17) | < 0.001 | 50.3 | 0.929 | 201.4 | 0.174
2 | PSA | 1.06 (1.04–1.07) | < 0.001 | | | |
2 | Age | 1.06 (1.01–1.10) | 0.047 | | | |
2 | Intercept | 0.01 (0.00–0.04) | < 0.001 | | | |
3 | I-PSS | 1.13 (1.07–1.19) | < 0.001 | 50.1 | 0.929 | 205.5 | 0.120
3 | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
3 | Fruits (≥ 3 times/week) | 0.50 (0.23–1.06) | 0.069 | | | |
3 | Intercept | 0.05 (0.02–0.12) | < 0.001 | | | |
4 | I-PSS | 1.12 (1.06–1.19) | < 0.001 | 49.7 | 0.931 | 203.6 | 0.057
4 | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
4 | Overweight or obesity | 1.68 (0.80–3.53) | 0.174 | | | |
4 | Intercept | 0.03 (0.01–0.07) | < 0.001 | | | |
5 | I-PSS | 1.12 (1.06–1.18) | < 0.001 | 49.7 | 0.931 | 203.7 | 0.056
5 | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
5 | Vegetable oil (≥ 3 times/week) | 1.84 (0.75–4.52) | 0.186 | | | |
5 | Intercept | 0.02 (0.01–0.07) | < 0.001 | | | |
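A sketch of fitting the selected three-variable model and checking its discrimination in R (dat is a hypothetical placeholder; pROC is one of several packages that can compute the AUC):

    # Sketch: the chosen model (I-PSS + PSA + age) and its in-sample AUC.
    library(pROC)
    final <- glm(pca ~ ipss + psa + age, data = dat, family = binomial)
    roc_obj <- roc(dat$pca, fitted(final))   # in-sample ROC curve
    auc(roc_obj)                             # reported as 0.929 in Table 3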
 CART for PCa screening CART was deployed with all independent variables as input for PCa screening, and the final model is shown in Fig. 1.

Fig. 1. Trained CART for prostate cancer screening.

The results indicated that PSA, I-PSS, and age played important roles in PCa screening. CART advised cut-off points of 18 and 33.5 ng/mL for PSA, 19 for I-PSS, and 71 years for age in the marked screening sequence. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if aged ≥ 71 years and 16% if aged < 71 years.
Overall, CART reached a high predictive value (AUC = 0.915). The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CART at the 20% diagnostic probability threshold were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9%, respectively; at the 80% threshold they were 79.2%, 92.3%, 91.2%, 81.6%, and 85.8%, respectively (Table 4).

Table 4. Diagnostic values of CART in PCa screening
Measure | Probability = 20% | Probability = 50% | Probability = 80%
Sensitivity | 0.915 | 0.915 | 0.792
Specificity | 0.862 | 0.862 | 0.923
Positive predictive value | 0.869 | 0.869 | 0.912
Negative predictive value | 0.912 | 0.912 | 0.816
Accuracy | 0.889 | 0.889 | 0.858
", 
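To make the tree's clinical use concrete, the branches reported above can be written as a plain decision function. This illustrates the published cut-offs only; it is not the fitted rpart object, and branches not described in the text return NA.

    # Sketch: the reported CART decision rules as a lookup function.
    pca_risk <- function(psa, ipss, age) {
      if (psa >= 33.5) return(0.912)              # high-risk leaf
      if (psa < 18 && ipss < 19) return(0.071)    # low-risk leaf
      if (psa >= 18 && ipss < 19)                 # 18 <= PSA < 33.5 band
        return(if (age >= 71) 0.70 else 0.16)
      NA_real_  # risks for the I-PSS >= 19 branches are not reported
    }
    pca_risk(psa = 25, ipss = 10, age = 72)  # returns 0.70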
Epidemiological and behavioral characteristics

The study included 260 observations at the Medic Center HCMC, with 130 in the case group and 130 in the control group. Based on univariable logistic regression, the risk factors for PCa were age and exposure to agricultural chemicals, while physical exercise and fruit consumption were protective factors.
Previous studies found that farming and exposure to agricultural chemicals are risk factors for PCa, although not for all agricultural chemicals [29, 30]. Exposure to a few specific pesticides, including fonofos, malathion, terbufos, azinphos-methyl, and dimethoate, was associated with PCa [31–33]. Genomic analysis showed that pesticides might interact with genetic variants in pathways related to neurotransmission release in PCa patients [34, 35]. The relationship between exposure to agricultural chemicals and PCa is therefore plausible. Further epidemiological and mechanistic studies are needed to identify the relationships of PCa with specific agricultural chemicals, in particular in the Vietnamese context.
Although our study initially found a relationship between physical exercise and PCa, there is a lack of evidence in the literature. Recent reviews and meta-analyses reveal that the association between regular physical activity and a low risk of prostate cancer remains elusive [36, 37]. Given the many general health benefits of physical activity, its role in association with PCa needs to be clarified in further studies.
The association of fruit consumption with PCa was shown in our study and in recent studies [38, 39]: total fruit intake significantly reduced PCa risk. However, our study did not analyse fruit subtypes. A previous study found that citrus fruit consumption is associated with PCa, whereas other fruit subtypes are not [38]. This relationship might be due to the anti-carcinogenic properties of vitamins and phytochemicals in citrus fruits [40, 41]. The causal relationship nevertheless remains unclear, because most findings are based on cross-sectional studies.
The CART model suggested that age is the most important of the epidemiological and behavioural characteristics; age, PSA, and I-PSS were therefore combined in the PCa screening CART model. In our study, half of the cases were older than 71 years and half of the controls older than 66 years. PCa is a disease that commonly occurs in elderly men. Previous studies found that 75–80% of new cases occur in men aged over 65 years [42, 43]. In another study in the United States, the average participant age was 66 years [44]. A study by the European Association of Urology showed that PCa rarely occurred in men younger than 50 years and indicated that the median age of PCa patients was 70 years [45]. Giwercman et al. showed that age was the closest risk factor of PCa [46]. Similarly, our study detected age as an independent risk factor of PCa. According to the Bayesian model averaging process, the PCa risk increased by about 6% for each additional year of age.
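To read the 6%-per-year estimate over longer horizons, the per-year odds ratio can simply be compounded; this is arithmetic on the reported estimate (OR = 1.06 per year of age), not an additional analysis from the study.

# Compounding the per-year odds ratio for age from the BMA model:
# the odds of PCa multiply by 1.06^k over k additional years,
# holding the other model variables constant.
or_per_year <- 1.06
or_per_year^5    # ~1.34: about 34% higher odds after 5 years
or_per_year^10   # ~1.79: about 79% higher odds after 10 years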
The role of I-PSS in PCa screening

The International Prostate Symptom Score (I-PSS), with its seven recommended questions, has become an international standard for assessing symptoms of urinary dysfunction over the previous month. The scale can monitor changes in symptoms over time or after an intervention, and symptom severity assessment with the I-PSS is an important part of initial evaluation, diagnosis, prediction, and monitoring of response to treatment [47, 48]. In our study, I-PSS detected the symptoms of PCa (p < 0.001) in both univariable and multivariable analyses. A cohort study by Martin et al. [49] detected an association between I-PSS and PCa: for overall PCa, men with I-PSS ≥ 20 had a 2.26-fold increased risk compared to those with no symptoms, and for localised PCa, men with I-PSS ≥ 20 had a 4.6-fold increased risk [49]. A study by Hosseini et al. [15] also showed an association between I-PSS and PCa: the mean I-PSS score in the PCa group was 16.05, higher than the mean of 6.84 in the non-PCa group, and the prevalence of patients with I-PSS ≥ 20 was higher in the PCa group (30.3%) than in the non-PCa group. The sensitivity and specificity of I-PSS at the cut-off I-PSS ≥ 20 were 78% and 59%, respectively [15]. Our study and data provide further evidence of the screening value of I-PSS for PCa. In Vietnam, the Ministry of Health has not yet recommended I-PSS for initial PCa screening; however, I-PSS is recommended for benign prostatic hypertrophy, a disease with many symptoms similar to those of early-stage PCa. Our study recommends using I-PSS for initial screening of any patient with self-reported lower urinary tract symptoms.

The role of PSA in PCa screening

Prostate-specific antigen (PSA) is a proteolytic antigen protein secreted by prostate cells into the glandular microducts; most of it passes into the semen, and smaller amounts pass into the serum and lymph. PSA increases in PCa, benign prostatic proliferation, and prostatitis, as well as after procedures (cystoscopy, urethral catheterisation, prostate massage, prostate biopsy within the previous 4 weeks, ejaculation within the previous 48 h). PSA decreases by 50% after taking 5-alpha-reductase inhibitors continuously for over 6 months [50].
In our study, PSA showed a significant association with PCa and was an important predictor of PCa in both the BMA and CART algorithms. Currently, the guidelines of the American and European urological societies use PSA cut-off levels ranging from 2 ng/mL to 4 ng/mL for the prostate biopsy decision [51], while researchers have chosen PSA > 4 ng/mL as the cut-off to ensure high sensitivity in screening [52–54]. According to the Vietnam Ministry of Health guideline, PSA > 4 ng/mL is recommended for selecting patients with lower urinary tract symptoms for further clinical assessment for PCa diagnosis; the cut-off value of PSA for referral to biopsy, however, is not determined. Previous studies showed that using PSA alone for PCa screening before biopsy tends to produce a high probability of negative biopsies among elevated-PSA cases (a high proportion of unnecessary biopsies).
In PCa patients, only 65–75% of cases have PSA > 4 ng/mL; in the remaining cases PSA stays at a normal level [55]. The study by Thompson et al. [56] in the U.S. on cancer screening in 2,950 men over 50 years old showed that 15.2% of patients with PSA < 4 ng/mL had prostate cancer, and 14.9% of this negative-prediction group had a Gleason score ≥ 7 [56]. Wright et al. [57] found that a PSA threshold of > 4 ng/mL detected more cancer cases but increased the number of unnecessary biopsies [57]. Morgan et al. [28] noted that sensitivity reached 98.2% at a PSA cut-off of > 4 ng/mL, while at PSA > 10 ng/mL sensitivity was 91% with a specificity of 54% [28]. In some cases, serum PSA values in the PCa and non-PCa groups overlap, especially at PSA levels of 4–10 ng/mL; the PSA value in this range is called the "diagnostic gray zone", according to Shariat and Karakiewicz [58].
In hospital-based PCa screening, combining PSA, clinical parameters, age, and other risk factors could reduce the rate of unnecessary biopsy while maintaining the ability to reduce PCa mortality [12–14].
CART value for PCa screening

To remedy the inherent limitations of PSA in PCa screening, we combined PSA with I-PSS and the main risk factors of PCa to build the CART model. Based on CART, patients with PSA ≥ 33.5 ng/mL have a PCa risk of up to 91.2%, while patients with I-PSS < 19 and PSA < 18 ng/mL are at 7.1% risk. CART thus overcomes the limitations of using only I-PSS or PSA for screening. Other machine learning algorithms have been used for PCa screening in previous studies and have reached higher values than PSA alone. In a study by Babaian et al. [59], a neural network algorithm for PCa screening improved on PSA alone, but the specificity of the neural network was poor (lower than 65%) [59]. A study by Satoshi et al. [17] showed that an artificial neural network, a random forest, and a support vector machine improved overall performance compared to PSA alone; however, sensitivity and specificity were usually lower than 80% [17]. Our CART algorithm with three variables, PSA, I-PSS, and age, showed relatively high predictive power (AUC = 0.915). In addition, the CART algorithm supports physicians in making clinical decisions in a step-by-step and intuitive manner, so it has practical use in the daily clinical setting. At the 20% diagnosis probability threshold, CART showed a high negative predictive value (91.2%), and at the 80% threshold it also had a high positive predictive value (91.2%). Therefore, we recommend the 20% diagnosis probability threshold for negative prediction and the 80% threshold for referral to prostate biopsy. For any other patient, with a PCa probability between 20% and 80%, further tests, including digital rectal examination (DRE), a PSA re-test after a month, and transrectal ultrasonography (TRUS), should be considered to reduce unnecessary biopsies while keeping the ability to diagnose PCa early.
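The two-threshold triage just described can be written directly as code. The sketch below assumes a fitted rpart classification tree (cart_fit, as described in the Methods) whose outcome factor has levels "0"/"1", and a data frame of new patients containing the model's predictors; all names are illustrative, and this is a sketch of the decision rule, not the study's implementation.

# Triage of new patients by predicted PCa probability:
#   < 20%  -> negative prediction
#   20-80% -> further work-up (DRE, PSA re-test after a month, TRUS)
#   > 80%  -> refer for prostate biopsy
triage <- function(cart_fit, new_patients) {
  p <- predict(cart_fit, newdata = new_patients, type = "prob")[, "1"]
  cut(p, breaks = c(-Inf, 0.20, 0.80, Inf),
      labels = c("negative prediction",
                 "further tests (DRE, PSA re-test, TRUS)",
                 "refer for prostate biopsy"))
}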
The study has some limitations. First, we lacked other tests, such as DRE, TRUS, and biomarkers, that can contribute to biopsy decisions [5, 7]. Second, the CART model has not yet been tested in different populations for the validity and reliability of the algorithm. Finally, we could not estimate the overdiagnosis rate of PCa in the study. Further study in the near future is warranted to overcome these limitations.
Conclusions: CART advised the following cut-off points in the marked screening sequence: 18 < PSA < 33.5 ng/mL, I-PSS ≥ 19, and age ≥ 71. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if aged ≥ 71 and of 16% if aged < 71. Overall, CART reached a high predictive value with AUC = 0.915; its sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9% at the 20% diagnosis probability threshold, and 79.2%, 92.3%, 91.2%, 81.6%, and 85.8% at the 80% threshold. I-PSS, PSA, and age played important roles in PCa screening. A CART combining PSA, I-PSS, and age has practical use in hospital-based PCa screening of Vietnamese patients.
[ "Prostate cancer", "PSA", "I-PSS", "Vietnamese patients", "Classification and regression tree", "Bayesian modeling averaging" ]
Background: Prostate cancer (PCa) is common in men, especially those aged 65 years and older. It has the second-highest incidence among cancers in men (30.7 per 100,000) and ranks fifth in cancer mortality among men (7.7 per 100,000) worldwide [1]. In Vietnam, as of 2020 the incidence of PCa is 12.2 per 100,000 and the mortality rate is 5.1 per 100,000 [2]. Approximately 95–98% of PCa cases are adenocarcinomas that develop from prostatic duct cells [3]. PCa treatment depends primarily on the stage of development and on cell and patient characteristics. According to the American Cancer Society, PCa patients diagnosed at the localised or regional stage have a 5-year survival rate of over 90%, whereas at the distant stage the survival rate is only 30% [4]. PCa should therefore be detected at an early stage. Prostate-specific antigen (PSA) is a serine protease of the kallikrein family and is considered a tool for the screening and early detection of PCa [5]; it can help detect PCa as early as nine years before clinical symptoms appear [6]. There are two types of PCa screening using PSA: population-based and hospital-based (or opportunistic) screening [7]. The first tests asymptomatic men with PSA only, and those with elevated PSA are immediately referred to biopsy; the second tests men with some symptoms (e.g., lower urinary tract symptoms) using PSA and other clinical tools. All men referred for biopsy in population-based screening are therefore at lower risk of having PCa than those in hospital-based screening. PSA-only screening could account for 45–70% of the reduction in PCa mortality [8], but it can also induce unnecessary biopsies [9]. In a 16-year follow-up of the European Randomized Study of Screening for Prostate Cancer (ERSPC), the unnecessary biopsy rate was 76% (i.e., 76% of elevated-PSA cases had a negative biopsy) [10]. In addition, the optimal cut-off value of PSA for confirming PCa remains to be determined [5, 11]. In particular, even at a low PSA level (lower than 4 ng/mL), the false-negative rate for PCa was as high as 15%, whereas at a high PSA level (higher than 10 ng/mL) the false-positive rate was 50% [5]. In Vietnam, population-based PCa screening using PSA was conducted 12 years ago; it indicated a low prevalence of PCa (2.5%) but a high rate of medium-grade lesions, and the author implied that the benefit of a mass screening program for PCa was not proven. Instead, selective PCa screening in usual care and at the hospital was superior in Vietnam. In hospital-based screening, combining clinical parameters, PSA, age, and other risk factors improved the prediction of prostate cancer [12–14]. The International Prostate Symptom Score (I-PSS) is a screening scale for lower urinary tract symptoms and is used to screen for non-specific prostate gland abnormalities. For PCa screening, the I-PSS showed reasonable sensitivity (78%), but its specificity was not high (59.4%) [15]. A previous study showed that PSA screening performance varied with different I-PSS values, so combining PSA and I-PSS could improve the screening benefits [16]. There is, however, a paucity of such practical multivariable algorithms for hospital-based PCa screening in Vietnam. The approach of PCa screening based on machine learning algorithms has only recently been applied.
Algorithms including logistic regression, artificial neural networks, random forests, support vector machines, and extreme and light gradient boosting machines have been demonstrated to enhance PCa screening efficiency [13, 17–20]. However, these models do not help make clinical decisions in a step-by-step and intuitive manner. The classification and regression tree (CART) is an approach that allows physicians to apply the results of the screening process directly and intuitively [17, 21]. Our study aimed to investigate the association of PSA, I-PSS, and epidemiological and behavioural characteristics with PCa, and then to use these factors to construct a CART algorithm for selecting Vietnamese men with lower urinary tract symptoms (LUTS) for PCa biopsy. The algorithm is expected to help reduce the probability of a negative prostate biopsy (i.e., unnecessary biopsy) while maintaining the ability to reduce PCa mortality in Vietnamese patients.

Methods:

Study design and setting
We conducted a case-control study at the MEDIC Medical Center, Ho Chi Minh City, Vietnam. MEDIC is the first, and one of the most modern, private medical centres in Vietnam; more than 4,000 patients visit the centre every day for examination and treatment. The study was approved by the local institutional ethics committee of the MEDIC Center, with the opinion signed on 15 July 2016.

Participants
Our study participants were men aged ≥ 50 years who visited the MEDIC Centre in 2017–2018 with self-reported lower urinary tract symptoms. The inclusion criteria were abnormal lower urinary tract symptoms or an enlarged prostate gland identified through DRE or ultrasound imaging; the exclusion criteria were acute prostatitis or refusal to participate in the study. All patients who met the selection criteria were prescribed a biopsy. The case group was defined by a positive biopsy result for PCa and the control group by a negative biopsy result; biopsies were performed as 12-core transrectal ultrasound-guided biopsies of the prostate [22]. All patients provided written informed consent before participating in the study.

Sample size
The minimum sample size estimated for each group of this case-control study was 116 patients, to provide 90% power with a 5% type I error to detect an odds ratio of 2.5. In Vietnam, PSA ≥ 10 ng/mL is considered the high-risk group for PCa; we therefore chose the proportion of PSA ≥ 10 ng/mL in the control group, 23%, as the proportion of controls with exposure in the sample size formula [23]. Our study selected 130 patients for each group, exceeding the minimum sample size, for a total of 260 patients.
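For readers who want to check the order of magnitude of the 116-per-group figure, a standard unmatched two-group sample size calculation is sketched in R below. The exact formula and rounding used in [23] may differ slightly; this version is an approximation under the stated design values.

# Sample size per group for an unmatched case-control study:
# 23% exposure among controls, target OR = 2.5, 90% power, 5% two-sided alpha.
p0 <- 0.23                               # exposure among controls
OR <- 2.5
p1 <- OR * p0 / (1 + p0 * (OR - 1))      # implied exposure among cases (~0.43)
za <- qnorm(1 - 0.05 / 2)                # ~1.96
zb <- qnorm(0.90)                        # ~1.28
pbar <- (p0 + p1) / 2
n <- (za * sqrt(2 * pbar * (1 - pbar)) +
      zb * sqrt(p0 * (1 - p0) + p1 * (1 - p1)))^2 / (p1 - p0)^2
ceiling(n)                               # ~117 per group, close to the reported 116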
Data collection and variables' definition
We collected epidemiological and behavioural characteristics through interviews using a questionnaire, and collected clinical and subclinical information from medical records. Epidemiological characteristics included age, number of children, overweight/obesity (BMI ≥ 23 kg/m² [24]), family history of PCa, existing urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, and exposure to agrochemicals. Lifestyle behaviours included physical activity (≥ 150 min/week of moderate or vigorous intensity [25]), current tobacco smoking, and heavy drinking (binge drinking, i.e., five or more drinks per occasion, on five or more days in the past month [26]). Food consumption behaviours were determined by the consumption frequency of different food types, including red meat, fruits, vegetables, nuts, vegetable oil, tea, and coffee. The International Prostate Symptom Score (I-PSS) was used to assess seven lower urinary tract symptoms: incomplete emptying, frequency, intermittency, urgency, weak stream, straining, and nocturia. The Vietnamese version of the I-PSS was published by the Vietnamese Ministry of Health and is recommended for benign prostatic hyperplasia assessment; each I-PSS item is scored on a zero-to-five scale reflecting the severity of the symptom [27]. Epidemiological and behavioural characteristics and the I-PSS were assessed by a single oncologist for consistency, and the oncologist was trained in conducting interviews before joining the study. The questionnaire was piloted in ten interviews to adapt its structure and content. Serum PSA was quantified by a two-step chemiluminescent microparticle immunoassay (CMIA) on the Alinity Ci (Abbott) testing system; the system was calibrated, and quality control was performed at least once a day or whenever the reagent batch was changed [28].
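As a small illustration of how the I-PSS total is formed from its items: the seven item scores (each 0–5) are summed, giving a total of 0–35. The data frame and column names below are hypothetical placeholders, not the study's data.

# Total I-PSS as the sum of the seven 0-5 item scores (possible range 0-35).
ipss_items <- c("incomplete_emptying", "frequency", "intermittency",
                "urgency", "weak_stream", "straining", "nocturia")
pca$ipss <- rowSums(pca[, ipss_items])
stopifnot(all(pca$ipss >= 0 & pca$ipss <= 35, na.rm = TRUE))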
Statistical analysis

Descriptive analysis
Frequency and percentage were used to describe qualitative variables, including overweight/obesity, family history of PCa, existing urinary tract diseases, history of urinary surgery, benign prostatic hyperplasia, exposure to agrochemicals, lifestyle behaviour, and food consumption behaviour. The median and quartiles were used to describe quantitative variables, including I-PSS, PSA concentration, and age. All descriptive analyses were stratified by the case and control groups.

Univariable logistic regression
A univariable logistic regression model was used to screen independent variables likely to be associated with PCa. The I-PSS score, PSA concentration, epidemiological characteristics, lifestyle behaviour, and food consumption behaviour were tested for association with PCa.

Bayesian model averaging (BMA) for model prediction
A BMA approach was used to search for the most parsimonious model for PCa prediction (i.e., minimum explanatory variables with maximum discrimination power) using PSA, I-PSS, and the epidemiological and behavioural variables. In summary, with n candidate variables there are 2^n possible models (not including interaction terms). BMA constructs the possible parsimonious prediction models and ranks them by the Bayesian Information Criterion (BIC) and their posterior probabilities. The final model, with high practical use in the clinical setting, can then be chosen based on the BMA suggestions and clinical considerations.
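In R, this kind of BIC-based model averaging is commonly run with the BMA package's bic.glm() function. The sketch below is illustrative: the data frame pca, its column names, and the chosen candidate set are placeholders, not the study's actual code.

# BIC-based Bayesian model averaging over candidate predictors of PCa.
library(BMA)

candidates <- pca[, c("psa", "ipss", "age", "fruits",
                      "overweight", "vegetable_oil")]   # illustrative set
bma_fit <- bic.glm(candidates, y = pca$case, glm.family = "binomial")

summary(bma_fit)        # top models with BIC and posterior probabilities
imageplot.bma(bma_fit)  # which variables enter the best-supported models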
Univariable logistic regression
A univariable logistic regression model was used to screen independent variables likely to be associated with PCa. The I-PSS score, PSA concentration, epidemiological characteristics, lifestyle behaviours, and food consumption behaviours were tested for association with PCa.
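A minimal sketch of such a univariable screen is shown below, fitting one logistic model per candidate variable and extracting Wald odds ratios; `dat` and the variable names are again illustrative.

```r
# Sketch: one univariable logistic model per candidate predictor, with
# Wald 95% CIs. `dat` and the variable names are illustrative; the
# outcome `pca` is assumed coded 0/1.
vars <- c("age", "overweight", "agrochemicals", "physical_activity",
          "fruits", "psa", "ipss")

screen <- lapply(vars, function(v) {
  fit <- glm(reformulate(v, response = "pca"), data = dat,
             family = binomial)
  est <- coef(summary(fit))[2, ]  # row for the predictor
  data.frame(variable = v,
             OR   = exp(est["Estimate"]),
             lo95 = exp(est["Estimate"] - 1.96 * est["Std. Error"]),
             hi95 = exp(est["Estimate"] + 1.96 * est["Std. Error"]),
             p    = est["Pr(>|z|)"])
})
do.call(rbind, screen)
```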
Bayesian model averaging (BMA) for model prediction
A BMA approach was used to search for the most parsimonious model for PCa prediction (i.e., minimum explanatory variables and maximum discrimination power) using PSA, I-PSS, and the epidemiological and behavioural variables. In brief, with n candidate variables there are 2^n possible models (not counting interaction terms). BMA evaluates these candidate models using the Bayesian Information Criterion (BIC) and their posterior probabilities; the final model for practical clinical use is then chosen from the BMA suggestions together with clinical considerations.
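For reference, this kind of BIC-based model averaging can be run with the bic.glm function of the R package BMA (Raftery et al.); the sketch below assumes that interface and the same illustrative data frame and variable names as above.

```r
# Sketch: BIC-based Bayesian model averaging over candidate logistic
# models with the BMA package. `dat` and the variable names are
# illustrative; the outcome `pca` is assumed coded 0/1.
library(BMA)

bma_fit <- bic.glm(pca ~ ipss + psa + age + overweight + fruits +
                     physical_activity + agrochemicals,
                   data = dat, glm.family = "binomial")

summary(bma_fit)        # top models with their BIC and posterior probabilities
imageplot.bma(bma_fit)  # which variables enter which candidate models
```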
CART model for PCa screening
CART was performed with the rpart function of the rpart package in R (version 4.0.3). Five-fold cross-validation was used to train and test the CART model, with all independent variables as inputs. Pruning was controlled by setting the maximum tree depth to 4, to keep the complexity reasonable, and the minimum number of observations at each node to 10, to ensure sufficient supporting data. Diagnostic values of the CART model, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy (1 − misclassification error), were extracted at the 20%, 50%, and 80% probability cut-offs.
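A sketch of this specification with rpart is shown below. The 10-observation minimum per node is expressed here as minbucket; depending on the authors' exact setting it could instead be minsplit. Variable names are illustrative.

```r
# Sketch of the CART specification described above; `dat` and the
# variable names are illustrative. The outcome `pca` is assumed to be
# a two-level factor (required by method = "class").
library(rpart)

cart_fit <- rpart(pca ~ ., data = dat, method = "class",
                  control = rpart.control(maxdepth  = 4,  # limit tree depth
                                          minbucket = 10, # >= 10 obs per terminal node
                                          xval      = 5)) # five-fold cross-validation

printcp(cart_fit)  # cross-validated error across candidate tree sizes
prob <- predict(cart_fit, type = "prob")[, 2]  # predicted PCa probability
```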
Study design and setting: We conducted a case-control study at the MEDIC Medical Center, Ho Chi Minh City, Vietnam. MEDIC is the first, and among the most modern, private medical centres in Vietnam; more than 4,000 patients visit the centre for examination and treatment every day. The study was approved by the local institutional ethics committee of the MEDIC Center, with the opinion signed on 15 July 2016.

Participants: Our study participants were men aged ≥ 50 years who visited the MEDIC Centre in 2017–2018 with self-reported lower urinary tract symptoms. The inclusion criteria were abnormal lower urinary tract symptoms or an enlarged prostate gland identified through DRE or ultrasound imaging; the exclusion criteria were acute prostatitis or refusal to participate. All patients who met the selection criteria were prescribed a 12-core transrectal ultrasound-guided biopsy of the prostate [22]. The case group was defined by a positive biopsy result for PCa, and the control group by a negative biopsy result. All patients provided written informed consent before participating in the study.
Results:

Association of epidemiological and behavioural characteristics with prostate cancer
In total, 260 patients (130 cases vs. 130 controls) were included in the study. The median age of cases was significantly higher than that of controls (71 vs. 66 years). In univariable logistic regression, the risk factors for PCa were age and exposure to agricultural chemicals, while physical exercise and fruit consumption were protective factors (Table 1).

Table 1. Association of epidemiological and behavioural characteristics with prostate cancer – univariable logistic regression

Variable | Cases (n = 130), No. (%) | Controls (n = 130), No. (%) | OR (95% CI) | p-value
Epidemiological characteristics | | | |
Age, years (median [25–75 percentile]) | 71 (64–78) | 66 (61–71) | 1.07 (1.04–1.11) | < 0.001
Overweight or obesity (BMI ≥ 23) | 63 (48.5) | 49 (37.7) | 1.55 (0.92–2.63) | 0.103
Family history of prostate cancer (yes) | 2 (1.5) | 5 (3.9) | 0.39 (0.04–2.45) | 0.447
Existing urinary tract diseases (yes) | 17 (13.1) | 20 (15.4) | 0.82 (0.39–1.76) | 0.723
History of urinary surgery (yes) | 10 (7.8) | 12 (9.2) | 0.83 (0.31–2.18) | 0.824
Benign prostatic hyperplasia (yes) | 14 (10.8) | 24 (18.5) | 0.53 (0.24–1.14) | 0.113
Exposed to agrochemicals (yes) | 40 (30.8) | 20 (15.4) | 2.44 (1.29–4.73) | 0.005
Lifestyle behaviour | | | |
Physical activity (yes) | 68 (52.3) | 86 (66.2) | 0.65 (0.33–0.95) | 0.032
Current tobacco smoking (yes) | 62 (47.7) | 48 (36.9) | 1.56 (0.92–2.64) | 0.103
Heavy drinking (yes) | 14 (10.8) | 12 (9.2) | 1.19 (0.53–2.67) | 0.680
Food consumption behaviour | | | |
Red meat (≥ 3 times/week) | 107 (82.3) | 106 (81.5) | 1.05 (0.53–2.08) | 1.000
Fruits (≥ 3 times/week) | 65 (50.0) | 83 (63.9) | 0.57 (0.33–0.96) | 0.033
Vegetables (≥ 3 times/week) | 111 (85.4) | 120 (92.3) | 0.49 (0.19–1.16) | 0.114
Nuts (≥ 3 times/week) | 9 (6.9) | 9 (6.9) | 1.00 (0.34–2.95) | 1.000
Vegetable oil (≥ 3 times/week) | 104 (80.0) | 95 (73.1) | 1.47 (0.79–2.75) | 0.242
Tea (≥ 3 times/week) | 56 (43.1) | 53 (40.8) | 1.10 (0.67–1.80) | 0.706
Coffee (≥ 3 times/week) | 82 (63.1) | 80 (61.5) | 1.07 (0.63–1.82) | 0.898
Association of I-PSS and PSA with prostate cancer
The odds ratio for PCa per 1 ng/mL increase in PSA was 1.06 (95% CI, 1.05–1.08), and per one-point increase in I-PSS it was 1.14 (95% CI, 1.09–1.18).
All I-PSS items were significantly associated with PCa except "Straining", which had the lowest OR (1.16; 95% CI, 0.98–1.38). "Nocturia" had the highest OR (2.22; 95% CI, 1.79–2.74), followed by "Urgency" (OR = 1.41; 95% CI, 1.23–1.62) (Table 2).

Table 2. Association of I-PSS and PSA with prostate cancer – univariable logistic regression

Variable | Cases (n = 130), median (25–75 percentile) | Controls (n = 130), median (25–75 percentile) | OR (95% CI) | p
PSA concentration (ng/mL) | 89.5 (39.1–100) | 12 (8–19) | 1.06 (1.05–1.08) | < 0.001
I-PSS (total score) | 14 (8–19) | 6.5 (3–12) | 1.14 (1.09–1.18) | < 0.001
Incomplete emptying | 4 (1–5) | 1 (1–5) | 1.27 (1.12–1.44) |
Frequency (every 2 h) | 2 (0–2) | 0 (0–3) | 1.22 (1.08–1.37) |
Intermittency | 0 (0–1) | 0 (0–1) | 1.29 (1.02–1.64) |
Urgency | 2 (0–5) | 0 (0–1) | 1.41 (1.23–1.62) |
Weak stream | 0 (0–2) | 0 (0–0) | 1.32 (1.12–1.55) |
Straining | 0 (0–1) | 0 (0–1) | 1.16 (0.98–1.38) |
Nocturia | 4 (3–5) | 2 (1–3) | 2.22 (1.79–2.74) |

BMA for PCa prediction
To determine whether PCa could be predicted from PSA, I-PSS, and the epidemiological and behavioural variables, we ran the BMA procedure, which suggested 27 models; the best five are shown in Table 3. The most parsimonious model (i.e., minimum explanatory variables and maximum discrimination power) included two variables, I-PSS and PSA concentration; the second most parsimonious model contained I-PSS, PSA, and age. The area under the ROC curve (AUC) of the most parsimonious model differed little from that of the second (0.931 vs. 0.929). Because age is an important factor for PCa screening and diagnosis in many previous studies [29–31] and is clinically relevant to disease mechanisms, we chose the second model, with three variables, as the best model for clinical use (Table 3). This final model is also in line with the final model suggested by the CART algorithm (details below).

Table 3. BMA prediction models using I-PSS, PSA, epidemiological, and behavioural characteristics for PCa

Model | Variable | OR (95% CI) | p | R² (%) | AUC | BIC | Posterior probability
1 | I-PSS | 1.12 (1.06–1.18) | < 0.001 | 49.2 | 0.931 | 199.9 | 0.364
  | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
  | Intercept | 0.03 (0.01–0.08) | < 0.001 | | | |
2 | I-PSS | 1.11 (1.05–1.17) | < 0.001 | 50.3 | 0.929 | 201.4 | 0.174
  | PSA | 1.06 (1.04–1.07) | < 0.001 | | | |
  | Age | 1.06 (1.01–1.10) | 0.047 | | | |
  | Intercept | 0.01 (0.00–0.04) | < 0.001 | | | |
3 | I-PSS | 1.13 (1.07–1.19) | < 0.001 | 50.1 | 0.929 | 205.5 | 0.120
  | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
  | Fruits (≥ 3 times/week) | 0.50 (0.23–1.06) | 0.069 | | | |
  | Intercept | 0.05 (0.02–0.12) | < 0.001 | | | |
4 | I-PSS | 1.12 (1.06–1.19) | < 0.001 | 49.7 | 0.931 | 203.6 | 0.057
  | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
  | Overweight or obesity | 1.68 (0.80–3.53) | 0.174 | | | |
  | Intercept | 0.03 (0.01–0.07) | < 0.001 | | | |
5 | I-PSS | 1.12 (1.06–1.18) | < 0.001 | 49.7 | 0.931 | 203.7 | 0.056
  | PSA | 1.06 (1.04–1.08) | < 0.001 | | | |
  | Vegetable oil (≥ 3 times/week) | 1.84 (0.75–4.52) | 0.186 | | | |
  | Intercept | 0.02 (0.01–0.07) | < 0.001 | | | |
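For illustration, the chosen three-variable model (model 2 in Table 3) can be turned into a risk calculator from its published odds ratios. Because the intercept and ORs are rounded, this sketch only approximates the fitted model.

```r
# Sketch: approximate risk score from model 2 in Table 3, rebuilt from
# the published (rounded) odds ratios, so predictions are approximate.
pca_risk <- function(ipss, psa, age) {
  lp <- log(0.01) +      # intercept (odds scale, as published)
    ipss * log(1.11) +   # OR per I-PSS point
    psa  * log(1.06) +   # OR per ng/mL of PSA
    age  * log(1.06)     # OR per year of age
  plogis(lp)             # predicted probability of PCa
}

pca_risk(ipss = 14, psa = 89.5, age = 71)  # e.g., the case-group medians
```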
CART for PCa screening
CART was deployed with all independent variables as inputs for PCa screening; the final model is shown in Fig. 1.

Fig. 1. Trained CART in prostate cancer screening

The results indicated that PSA, I-PSS, and age played important roles in PCa screening. CART suggested the following cut-off points, in screening order: PSA at 18 and 33.5 ng/mL, I-PSS at 19, and age at 71 years. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%, and patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if aged ≥ 71 years and 16% if aged < 71 years.
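The reported decision path can be written out as plain rules, which makes the tree's step-by-step logic explicit. The risk for the branch with PSA < 33.5 ng/mL and I-PSS ≥ 19 is not reported in the text, so it is left undefined in this sketch.

```r
# Sketch: the reported CART decision path written as explicit rules.
# Risks are the published node estimates; the branch with
# PSA < 33.5 ng/mL and I-PSS >= 19 is not reported, so it returns NA.
cart_rule_risk <- function(psa, ipss, age) {
  if (psa >= 33.5) return(0.912)     # high-risk node
  if (ipss >= 19)  return(NA_real_)  # node risk not reported in the text
  if (psa < 18)    return(0.071)     # low-risk node
  if (age >= 71) 0.70 else 0.16      # 18 <= PSA < 33.5, I-PSS < 19
}

cart_rule_risk(psa = 25, ipss = 10, age = 73)  # returns 0.70
```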
Overall, the CART model reached a high predictive value (AUC = 0.915). At the 20% diagnosis probability threshold, its sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9%, respectively; at the 80% threshold they were 79.2%, 92.3%, 91.2%, 81.6%, and 85.8%, respectively (Table 4).

Table 4. Diagnostic values of CART in PCa screening

Measure | Probability = 20% | Probability = 50% | Probability = 80%
Sensitivity | 0.915 | 0.915 | 0.792
Specificity | 0.862 | 0.862 | 0.923
Positive predictive value | 0.869 | 0.869 | 0.912
Negative predictive value | 0.912 | 0.912 | 0.816
Accuracy | 0.889 | 0.889 | 0.858
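The diagnostic values in Table 4 follow from the confusion matrix at each probability cut-off; a minimal sketch of that computation is shown below, assuming `prob` and `truth` come from the fitted CART model and the biopsy results.

```r
# Sketch: diagnostic values at a probability cut-off, as in Table 4.
# `prob` (predicted PCa probability) and `truth` (biopsy result,
# 1 = PCa) are assumed to come from the fitted CART model above.
diag_at <- function(prob, truth, cutoff) {
  pred <- as.integer(prob >= cutoff)
  tp <- sum(pred == 1 & truth == 1); fp <- sum(pred == 1 & truth == 0)
  tn <- sum(pred == 0 & truth == 0); fn <- sum(pred == 0 & truth == 1)
  c(sensitivity = tp / (tp + fn),
    specificity = tn / (tn + fp),
    ppv         = tp / (tp + fp),
    npv         = tn / (tn + fn),
    accuracy    = (tp + tn) / length(truth))  # 1 - misclassification error
}

sapply(c("20%" = 0.2, "50%" = 0.5, "80%" = 0.8),
       function(k) diag_at(prob, truth, k))
```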
Discussion:

Epidemiological and behavioural characteristics
The study included 260 observations at the MEDIC Center, Ho Chi Minh City, with 130 patients in the case group and 130 in the control group. In univariable logistic regression, the risk factors for PCa were age and exposure to agricultural chemicals, and the protective factors were physical exercise and fruit consumption. Previous studies found that farming and exposure to agricultural chemicals are risk factors for PCa, though not for all agricultural chemicals [29, 30]. Exposure to a few specific pesticides, including fonofos, malathion, terbufos, azinphos-methyl, and dimethoate, has been associated with PCa [31–33]. Genomic analyses showed that pesticides might interact with genetic variants in pathways related to neurotransmitter release in PCa patients [34, 35]. The relationship between exposure to agricultural chemicals and PCa is therefore plausible, but further epidemiological and mechanistic studies are needed to identify the relationships of PCa with specific agricultural chemicals, particularly in the Vietnamese context. Although our study found a relationship between physical exercise and PCa, supporting evidence in the literature is limited.
Recent reviews and meta-analyses indicate that the association between regular physical activity and a lower risk of prostate cancer remains elusive [36, 37]. Given the many general health benefits of physical activity, its role in relation to PCa needs to be clarified in further studies. The association of fruit consumption with PCa was shown in our study and in recent studies [38, 39]: total fruit intake significantly reduced PCa risk. However, our study did not analyse fruit subtypes. A previous study found that citrus fruit consumption is associated with PCa while other fruit subtypes are not [38]; this relationship might be due to the anti-carcinogenic properties of vitamins and phytochemicals in citrus fruits [40, 41]. The causal relationship nevertheless remains unclear, because most findings come from cross-sectional studies.

The CART model suggested that, among the epidemiological and behavioural characteristics, age is the most important; age, PSA, and I-PSS were therefore combined in the PCa screening CART model. In our study, half of the cases were older than 71 years and half of the controls older than 66 years. PCa commonly occurs in elderly men: previous studies found that 75–80% of new cases occur in men aged over 65 years [42, 43], another study in the United States reported an average participant age of 66 years [44], and a European Association of Urology study showed that PCa rarely occurs in men younger than 50 years, with a median patient age of 70 years [45]. Giwercman et al. showed that age was the closest risk factor of PCa [46]. Similarly, our study detected age as an independent risk factor for PCa: according to the Bayesian model averaging results, PCa risk increased by 6% with each additional year of age.

The role of I-PSS in PCa screening
The International Prostate Symptom Score (I-PSS), with its seven recommended questions, has become an international standard for assessing symptoms of urinary dysfunction over the previous month. The scale can monitor changes in symptoms over time or after an intervention, and symptom severity assessment with the I-PSS is an important part of initial evaluation, diagnosis, prediction, and monitoring of response to treatment [47, 48]. In our study, I-PSS detected the symptoms of PCa (p < 0.001) in both univariable and multivariable analyses. A cohort study by Martin et al. [49] detected an association between I-PSS and PCa: for overall PCa, men with I-PSS ≥ 20 had a 2.26-fold increased risk compared with those with no symptoms, and for localised PCa a 4.6-fold increased risk [49]. A study by Hosseini et al. [15] also showed an association between I-PSS and PCa: the mean I-PSS score of the PCa group was 16.05, higher than the 6.84 of the non-PCa group, and the prevalence of I-PSS ≥ 20 was higher in the PCa group (30.3%). The sensitivity and specificity of I-PSS at the cut-off of ≥ 20 were 78% and 59%, respectively [15]. Our study and data provide further evidence of the screening value of I-PSS for PCa. In Vietnam, the Ministry of Health has not yet recommended I-PSS for initial PCa screening; however, I-PSS is recommended for benign prostatic hypertrophy, a disease whose symptoms resemble those of early-stage PCa.
We therefore recommend using I-PSS for initial screening in any patient with self-reported lower urinary tract symptoms.

The role of PSA in PCa screening
Prostate-specific antigen (PSA) is a proteolytic protein antigen secreted by prostate cells and excreted into the glandular microducts; most of it enters the semen through the ejaculatory ducts, and smaller portions enter the serum and lymphatic circulation. PSA increases in PCa, benign prostatic proliferation, and prostate inflammation, as well as after procedures (cystoscopy, urethral catheterisation, prostate massage, within 4 weeks after a prostate biopsy, and within 48 h after ejaculation). PSA decreases by about 50% after taking 5-alpha-reductase inhibitors continuously for over 6 months [50]. In our study, PSA showed a significant association with PCa and was an important predictor in both the BMA and CART algorithms. Current guidelines of the American and European urological societies use PSA cut-off levels ranging from 2 ng/mL to 4 ng/mL for the prostate biopsy decision [51], while researchers have chosen PSA > 4 ng/mL as the cut-off to ensure high sensitivity in screening [52–54]. According to the Vietnam Ministry of Health guideline, PSA > 4 ng/mL is recommended for selecting patients with lower urinary tract symptoms for further clinical assessment towards PCa diagnosis; the cut-off value of PSA for referring biopsy, however, is not determined. Previous studies showed that using PSA alone for screening before biopsy tends to produce a high probability of negative biopsies among elevated-PSA cases (a high proportion of unnecessary biopsies). Among PCa patients, only 65–75% of cases have PSA > 4 ng/mL; in the remaining 35%, PSA stays at a normal level [55]. A U.S. cancer screening study by Thompson et al. [56] of 2,950 men over 50 years old showed that 15.2% of patients with PSA < 4 ng/mL had prostate cancer, and 14.9% of this negative-prediction group had a Gleason score ≥ 7 [56]. Wright et al. [57] found that a PSA threshold of > 4 ng/mL detected more cancer cases but increased unnecessary biopsies [57]. Morgan et al. [28] noted that sensitivity reached 98.2% at a PSA cut-off of > 4 ng/mL, while at PSA > 10 ng/mL sensitivity was 91% with a specificity of 54% [28]. In some cases, serum PSA values in the PCa and non-PCa groups overlap, especially at PSA levels of 4–10 ng/mL; this range has been called the "diagnostic gray zone" by Shariat and Karakiewicz [58]. In hospital-based PCa screening, combining PSA with clinical parameters, age, and other risk factors could reduce the rate of unnecessary biopsies while maintaining the ability to reduce PCa mortality [12–14].
CART value for PCa screening
To remedy the inherent limitations of PSA in PCa screening, we combined PSA with I-PSS and the main risk factors of PCa to build the CART model. Based on CART, patients above the PSA cut-off of 33.5 ng/mL had a PCa risk of up to 91.2%, while patients with I-PSS < 19 and PSA < 18 ng/mL had a risk of 7.1%. CART thus overcomes the limitations of using only I-PSS or PSA for screening. Other machine learning algorithms have been applied to PCa screening in previous studies and have reached higher values than PSA alone. In a study by Babaian et al. [59], a neural network algorithm for PCa screening improved on PSA alone, but its specificity was poor (below 65%) [59]. A study by Satoshi et al. [17] showed that an artificial neural network, a random forest, and a support vector machine improved overall performance compared with PSA alone, although sensitivity and specificity were usually below 80% [17]. Our CART algorithm with three variables, PSA, I-PSS, and age, showed relatively high predictive power (AUC = 0.915).
In addition, CART algorithm could also support physicians to make clinical decisions in a step-to-step and intuitive manner; hence it has practical use in a daily clinical setting. At 20% diagnosis probability threshold, CART showed a high negative predictive value (91.2%), and at 80% diagnosis probability threshold, CART also had a high positive predictive value (91.2%). Therefore, we recommended 20% diagnosis probability threshold for negative prediction and 80% diagnosis probability threshold for referring prostate biopsy. Any other patients with a probability of PCa range from > 20% to < 80%, further tests including the digital rectal examination (DRE), PSA re-test after a month, and transrectal ultrasonography (TRUS) should be considered to reduce unnecessary biopsy while keeping the ability to diagnose PCa early. The study has some limitations. First, we lack other tests such as DRE, TRUS, biomarkers that can contribute to making biopsy decisions [5, 7]. Second, the CART model has not yet been tested in different populations for the validity and reliability of the algorithm. Finally, we could not estimate the overdiagnosis rate of PCa in the study. It warrants further study in the near future to overcome these limitations. To remedy the inherent limitations of PSA in PCa screening, we used a combination of PSA with I-PSS and the main risk factors of PCa to build the CART model. Based on CART, patients with a PSA cut-off level > 33.5 ng/mL have a PCa risk of up to 91.2%. Patients with I-PSS < 19, and PSA < 18 ng/mL were at 7.1% risk. CART overcomes the limitations of using only I-PSS or PSA for screening. Other machine learning algorithms have been used in PCa screening in previous studies and have reached higher values than PSA only. In a study by Babaian et al. [59], a neural network algorithm for PCa screening showed an improved value compared to using only PSA, the specificity of the neural network was not good (lower than 65%) [59]. A study by Satoshi et al. [17] showed that artificial neural network, random forest, and support vector machine improved overall value when compared to only PSA; however, sensitivity and specificity were usually lower than 80% [17]. Our CART algorithm with three variables, PSA, I-PSS, and age, showed a relatively high predictive power (AUC = 0.915). In addition, CART algorithm could also support physicians to make clinical decisions in a step-to-step and intuitive manner; hence it has practical use in a daily clinical setting. At 20% diagnosis probability threshold, CART showed a high negative predictive value (91.2%), and at 80% diagnosis probability threshold, CART also had a high positive predictive value (91.2%). Therefore, we recommended 20% diagnosis probability threshold for negative prediction and 80% diagnosis probability threshold for referring prostate biopsy. Any other patients with a probability of PCa range from > 20% to < 80%, further tests including the digital rectal examination (DRE), PSA re-test after a month, and transrectal ultrasonography (TRUS) should be considered to reduce unnecessary biopsy while keeping the ability to diagnose PCa early. The study has some limitations. First, we lack other tests such as DRE, TRUS, biomarkers that can contribute to making biopsy decisions [5, 7]. Second, the CART model has not yet been tested in different populations for the validity and reliability of the algorithm. Finally, we could not estimate the overdiagnosis rate of PCa in the study. 
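A minimal sketch of the reported decision rules as a triage function is shown below. The cut-offs and risk percentages come from the tree described above; the branch with I-PSS ≥ 19 and PSA < 33.5 ng/mL is not given an explicit risk in the text, so the function returns None there as a flagged assumption.

```python
# Sketch of the CART decision rules reported in this study, plus the
# 20%/80% triage thresholds. Branches not spelled out in the text
# return None; this is not the authors' code.

def cart_pca_risk(psa, ipss, age):
    """Return the reported PCa risk (as a fraction) for the described branches."""
    if psa >= 33.5:
        return 0.912                      # reported risk 91.2%
    if ipss < 19:
        if psa < 18:
            return 0.071                  # reported risk 7.1%
        # 18 <= PSA < 33.5 ng/mL and I-PSS < 19: split on age
        return 0.70 if age >= 71 else 0.16
    return None                           # branch not reported in the text

def triage(risk):
    """Map a risk estimate onto the recommended 20%/80% thresholds."""
    if risk is None:
        return "risk not defined by the reported tree"
    if risk < 0.20:
        return "negative prediction"
    if risk > 0.80:
        return "refer for prostate biopsy"
    return "further tests (DRE, PSA re-test after a month, TRUS)"

print(triage(cart_pca_risk(psa=40.0, ipss=10, age=60)))   # biopsy
print(triage(cart_pca_risk(psa=25.0, ipss=12, age=75)))   # further tests
print(triage(cart_pca_risk(psa=10.0, ipss=8,  age=65)))   # negative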
Epidemiological and behavioral characteristics: The study included 260 observations at the Medic Center HCMC, with 130 in the case group and 130 in the control group. Based on univariable logistic regression, the risk factors for PCa included age and exposure to agricultural chemicals, and the protective factors included physical exercise and fruit consumption. Previous studies found that farming and exposure to agricultural chemicals are risk factors for PCa, but not for all agricultural chemicals [29, 30]. Exposure to a few specific pesticides, including fonofos, malathion, terbufos, azinphos-methyl, and dimethoate, was associated with PCa [31–33]. Genomic analysis showed that pesticides might interact with genetic variants in pathways related to neurotransmitter release in PCa patients [34, 35]. Therefore, the relationship between exposure to agricultural chemicals and PCa is plausible. Further epidemiological and mechanistic studies are needed to identify the relationships of PCa with specific agricultural chemicals, in particular in the Vietnamese context. Although our study initially found a relationship between physical exercise and PCa, there is a lack of evidence in the literature: recent reviews and meta-analyses reveal that the association between regular physical activity and a low risk of prostate cancer remains elusive [36, 37]. Given the many general health benefits of physical activity, its role in association with PCa needs to be clarified in further studies. The association of fruit consumption with PCa was shown in our study and in recent studies [38, 39]: total fruit intake significantly reduced PCa risk. However, our study did not analyze fruit subtypes. A previous study found that citrus fruit consumption is associated with PCa, whereas other fruit subtypes are not [38]. This relationship might be due to the anti-carcinogenic properties of vitamins and phytochemicals in citrus fruits [40, 41]. However, the causal relationship remains unclear because most findings are based on cross-sectional studies. The CART model suggested that, among epidemiological and behavioral characteristics, age is the most important; therefore, age, PSA, and I-PSS were combined in the PCa screening CART model. Our study showed that 50% of patients were older than 71 years in the case group and older than 66 years in the control group. PCa commonly occurs in elderly men. Previous studies found that 75–80% of new cases occur in men aged over 65 years [42, 43]. Another study in the United States reported an average participant age of 66 years [44]. A study by the European Association of Urology showed that PCa rarely occurred in men younger than 50 years and indicated that the median age of PCa patients was 70 years [45]. Giwercman et al. showed that age was the closest risk factor of PCa [46]. Similarly, our study detected age as an independent risk factor for PCa: according to the Bayesian Model Averaging process, the PCa risk increased by 6% with each additional year of age. The role of I-PSS in PCa screening: The International Prostate Symptom Score (I-PSS), with seven recommended questions, became an international standard for assessing symptoms of urinary dysfunction over the previous month. The scale can also monitor changes in symptoms over time or after an intervention.
A symptom severity assessment with the I-PSS scale is an important part of the initial evaluation, diagnosis, prediction, and monitoring of response to treatment [47, 48]. Our study noted that I-PSS detected symptoms associated with PCa (p < 0.001) in both univariable and multivariable analyses. A cohort study by Martin et al. [49] detected an association between I-PSS and PCa: for overall PCa, men with I-PSS ≥ 20 had a 2.26-fold increased risk compared with those with no symptoms, and for localised PCa, men with I-PSS ≥ 20 had a 4.6-fold increased risk [49]. A study by Hosseini et al. [15] also showed an association between I-PSS and PCa. The mean I-PSS score of the PCa group was 16.05, higher than that of the non-PCa group (mean 6.84). The prevalence of patients with I-PSS ≥ 20 in the PCa group was 30.3%, higher than that in the non-PCa group. The sensitivity and specificity of I-PSS at the cut-off I-PSS ≥ 20 were 78% and 59%, respectively [15]. Our study and data provide evidence of the screening value of I-PSS for PCa. In Vietnam, according to the Ministry of Health, I-PSS has not yet been recommended for initial PCa screening; however, it has been recommended for benign prostatic hypertrophy, a disease with many symptoms similar to early-stage PCa. Our study recommended using I-PSS as an initial screen for any patient with self-reported lower urinary tract symptoms.
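Figures such as the 78% sensitivity and 59% specificity of I-PSS ≥ 20 derive from a 2×2 confusion matrix. The helper below shows the standard calculations; the counts in the usage example are invented purely to illustrate the arithmetic and do not come from any cited study.

```python
# Standard screening metrics from a 2x2 confusion matrix.
# The example counts are hypothetical, not taken from any cited study.

def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for an I-PSS >= 20 rule on 100 cases and 100 controls.
metrics = screening_metrics(tp=78, fp=41, fn=22, tn=59)
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```

With these invented counts the helper reproduces a 0.78 sensitivity and 0.59 specificity, matching the shape of the reported I-PSS figures.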
Conclusion: CART advised the following cut-off points in the marked screening sequence: 18 < PSA < 33.5 ng/mL, I-PSS ≥ 19, and age ≥ 71. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if age ≥ 71 and 16% if age < 71. Overall, CART reached a high predictive value with AUC = 0.915. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CART at the 20% diagnosis probability threshold were 91.1%, 86.9%, 86.2%, 91.5%, and 88.9%; at the 80% diagnosis probability threshold they were 81.6%, 91.2%, 92.3%, 79.2%, and 85.8%. I-PSS, PSA, and age played important roles in PCa screening. CART combining PSA, I-PSS, and age has practical use in hospital-based PCa screening in Vietnamese patients.
Background: Prostate cancer (PCa) is a common disease in men over 65 years of age and should be detected early while reducing unnecessary biopsies. This study aims to construct a classification and regression tree (CART) model (i.e., a risk stratification algorithm) using a multivariable approach to select Vietnamese men with lower urinary tract symptoms (LUTS) for PCa biopsy. Methods: We conducted a case-control study of 260 men aged ≥ 50 years who visited MEDIC Medical Center, Vietnam in 2017-2018 with self-reported LUTS. The case group included patients with a positive biopsy and the control group included patients with a negative biopsy diagnosis of PCa. Bayesian Model Averaging (BMA) was used to select the most parsimonious prediction model. Then a CART with 5-fold cross-validation was constructed to select men who could benefit from PCa biopsy in a step-by-step and intuitive way. Results: BMA suggested five potential prediction models, of which the most parsimonious included PSA, I-PSS, and age. CART advised the following cut-off points in the marked screening sequence: 18 < PSA < 33.5 ng/mL, I-PSS ≥ 19, and age ≥ 71. Patients with PSA ≥ 33.5 ng/mL had a PCa risk of 91.2%; patients with PSA < 18 ng/mL and I-PSS < 19 had a PCa risk of 7.1%. Patients with 18 ≤ PSA < 33.5 ng/mL and I-PSS < 19 had a PCa risk of 70% if age ≥ 71 and 16% if age < 71. Overall, CART reached a high predictive value with AUC = 0.915. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CART at the 20% diagnosis probability threshold were 91.5%, 86.2%, 86.9%, 91.2%, and 88.9%, respectively; at the 80% diagnosis probability threshold they were 79.2%, 92.3%, 91.2%, 81.6%, and 85.8%, respectively. Conclusions: CART combining PSA, I-PSS, and age has practical use in hospital-based PCa screening in Vietnamese men with lower urinary tract symptoms.
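As a rough illustration of the modelling pipeline the Methods describe (a classification tree evaluated with 5-fold cross-validation), the sketch below uses scikit-learn on synthetic data. It is an assumption-laden stand-in: the authors' actual software, hyper-parameters, predictor distributions, and the BMA step are not reproduced here.

```python
# Illustrative sketch of fitting a CART with 5-fold cross-validation,
# assuming scikit-learn; this is not the authors' actual pipeline.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 260  # same size as the study sample, but the data are synthetic

# Synthetic predictors: PSA (ng/mL), I-PSS (0-35), age (years).
X = np.column_stack([
    rng.gamma(2.0, 8.0, n),          # PSA
    rng.integers(0, 36, n),          # I-PSS
    rng.integers(50, 90, n),         # age
])
# Synthetic outcome loosely increasing with all three predictors.
logit = 0.05 * X[:, 0] + 0.08 * X[:, 1] + 0.03 * (X[:, 2] - 65) - 2.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
auc = cross_val_score(tree, X, y, cv=5, scoring="roc_auc")
print(f"5-fold cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```

A shallow tree (here max_depth=3) yields the kind of small, readable rule set that the paper argues clinicians can apply step by step.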
Background: Prostate cancer (PCa) is common in men, especially those aged 65 years and older. It has the second-highest incidence among male cancers worldwide (30.7 per 100 000) and ranks fifth in male cancer mortality (7.7 per 100 000) [1]. In Vietnam, as of 2020, the incidence of PCa is 12.2 per 100 000 and the mortality rate is 5.1 per 100 000 [2]. Approximately 95–98% of PCa cases are adenocarcinomas that develop from glandular duct cells [3]. PCa treatment depends primarily on the stage of development and on cell and patient characteristics. According to the American Cancer Society, PCa patients diagnosed at the localised or regional stage have a 5-year survival rate of over 90%; at the distant stage, however, the survival rate is only 30% [4]. Therefore, PCa should be detected at an early stage. Prostate-specific antigen (PSA) is a serine protease in the kallikrein family and is considered a tool for the screening and early detection of PCa [5]; it can help detect the disease as early as nine years before clinical symptoms appear [6]. There are two types of PCa screening studies using PSA: population-based and hospital-based (or opportunistic) screening [7]. The first type tests asymptomatic men with PSA alone, and those with elevated PSA are immediately referred for biopsy. The latter type tests men who have some symptoms (e.g., lower urinary tract symptoms) using PSA together with other clinical tools. Therefore, men referred for biopsy in population-based screening are at lower risk of having PCa than those in hospital-based screening. Screening based on PSA alone could account for 45–70% of the reduction in PCa mortality [8]; it could also induce unnecessary biopsies [9]. In a 16-year follow-up of the European Randomized Study of Screening for Prostate Cancer (ERSPC), the unnecessary biopsy rate was 76% (i.e., 76% of elevated PSA cases had a negative biopsy) [10]. In addition, the optimal cut-off value of PSA for confirming PCa remains to be determined [5, 11]. In particular, even at a low PSA level (lower than 4 ng/mL), the false-negative rate for PCa was as high as 15%, whereas at a high PSA level (higher than 10 ng/mL), the false-positive rate was 50% [5]. In Vietnam, population-based PCa screening using PSA was conducted 12 years ago; it indicated a low prevalence of PCa (2.5%) but a high rate of medium-grade lesions. The author also implied that the benefit of a mass screening program for PCa was not proven; instead, selective PCa screening in usual care and at the hospital was superior in Vietnam. In hospital-based screening, combining clinical parameters, PSA, age, and other risk factors improved the prediction of prostate cancer [12–14]. The International Prostate Symptom Score (I-PSS) is a screening scale for lower urinary tract symptoms and is used to screen for non-specific prostate gland abnormalities. For PCa screening, the I-PSS scale showed reasonable sensitivity (78%), but its specificity was not high (59.4%) [15]. A previous study showed that PSA screening performance varied with different I-PSS values; therefore, combining PSA and I-PSS could improve the screening benefits [16]. There is, however, a paucity of such practical multivariable algorithms for hospital-based PCa screening in Vietnam. The approach of PCa screening based on machine learning algorithms has only recently been applied. Algorithms including logistic regression, artificial neural networks, random forests, support vector machines, and extreme and light gradient boosting machines have been demonstrated to enhance PCa screening efficiency [13, 17–20]. However, these models do not help make clinical decisions in a step-by-step and intuitive manner. Classification and regression tree (CART) analysis is an approach that allows physicians to apply the results of the screening process directly and intuitively [17, 21]. Our study aimed to investigate the association of PSA, I-PSS, and epidemiological and behavioural characteristics with PCa, and then to use these factors to construct a CART algorithm to select Vietnamese men with lower urinary tract symptoms (LUTS) for PCa biopsy. The algorithm is expected to aid in reducing the probability of a negative prostate biopsy (i.e., unnecessary biopsy) while maintaining the ability to reduce PCa mortality for Vietnamese patients.
18,499
447
[ 913, 2890, 74, 128, 120, 348, 761, 82, 46, 112, 125, 357, 287, 353, 312, 2155, 590, 492 ]
21
[ "pca", "psa", "pss", "prostate", "cart", "screening", "ml", "ng ml", "ng", "study" ]
[ "increases pca prostate", "cancer pca", "screening prostate cancer", "psa prostate", "prostate specific antigen" ]
null
[CONTENT] Prostate cancer | PSA | I-PSS | Vietnamese patients | Classification and regression tree | Bayesian modeling averaging [SUMMARY]
null
[CONTENT] Male | Humans | Prostatic Neoplasms | Prostate-Specific Antigen | Early Detection of Cancer | Case-Control Studies | Bayes Theorem | Vietnam | Biopsy | Lower Urinary Tract Symptoms | Hospitals | Asian People [SUMMARY]
null
[CONTENT] increases pca prostate | cancer pca | screening prostate cancer | psa prostate | prostate specific antigen [SUMMARY]
null
[CONTENT] pca | psa | pss | prostate | cart | screening | ml | ng ml | ng | study [SUMMARY]
null
[CONTENT] screening | pca | psa | rate | biopsy | based | 100 000 | based screening | men | pca screening [SUMMARY]
null
[CONTENT] 06 | 95 | times week | times | yes | 08 | 04 | week | table | ci [SUMMARY]
[CONTENT] age 71 | ml pss 19 | ml pss | psa 33 | 91 | psa | age | pss 19 | pss | 71 [SUMMARY]
[CONTENT] psa | pca | pss | cart | screening | variables | ml | biopsy | ng ml | ng [SUMMARY]
[CONTENT] 65 years of age ||| Vietnamese [SUMMARY]
null
[CONTENT] BMA | five | PSA ||| 18 | 33.5 ng/mL | ≥ | 19 | ≥ | 71 ||| PSA ≥ | 33.5 ng/mL | 91.2% | PSA <  | 18 ng/mL | 19 | 7.1% ||| 18 | 19 | 70% | ≥ | 71 | 16% | 71 ||| AUC | 0.915 ||| 20% | 91.5% | 86.2% | 86.9% | 91.2% | 88.9% | 80% | 79.2% | 92.3% | 91.2% | 81.6% | 85.8% [SUMMARY]
[CONTENT] Vietnamese [SUMMARY]
[CONTENT] 65 years of age ||| Vietnamese ||| 260 | ≥ | 50 years | MEDIC Medical Center | Vietnam | 2017-2018 ||| ||| Bayesian Model Averaging | BMA ||| 5-fold ||| BMA | five | PSA ||| 18 | 33.5 ng/mL | ≥ | 19 | ≥ | 71 ||| PSA ≥ | 33.5 ng/mL | 91.2% | PSA <  | 18 ng/mL | 19 | 7.1% ||| 18 | 19 | 70% | ≥ | 71 | 16% | 71 ||| AUC | 0.915 ||| 20% | 91.5% | 86.2% | 86.9% | 91.2% | 88.9% | 80% | 79.2% | 92.3% | 91.2% | 81.6% | 85.8% ||| Vietnamese [SUMMARY]
Minimally invasive debridement and drainage using intraoperative CT-Guide in multilevel spondylodiscitis: a long-term follow-up study.
33514356
Spondylodiscitis is an unusual infectious disease, which usually originates as a pathogenic infection of intervertebral discs and then spreads to neighboring vertebral bodies. The objective of this study is to evaluate percutaneous debridement and drainage using intraoperative CT-Guide in multilevel spondylodiscitis.
BACKGROUND
From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with minimally invasive debridement and drainage procedures in our department. The clinical manifestations, evolution, and minimally invasive debridement and drainage treatment of this refractory vertebral infection were investigated.
METHODS
Among the enrolled patients, the operation time ranged from 30 to 124 minutes per level, with an average of 48 minutes. Intraoperative hemorrhage was minimal. The postoperative follow-up period ranged from 12 months to 6.5 years, with an average of 3.7 years. There was no reactivation of infection in the treated vertebral segments during follow-up, but in two patients with fungal spinal infection the disease progressed to involve adjacent segments before final resolution. According to the Macnab classification, one patient had a good outcome at the final follow-up, and the rest were excellent.
RESULTS
Percutaneous debridement and irrigation using intraoperative CT-Guide is an effective minimally invasive method for the treatment of multilevel spondylodiscitis.
CONCLUSIONS
[ "Debridement", "Discitis", "Drainage", "Follow-Up Studies", "Humans", "Lumbar Vertebrae", "Minimally Invasive Surgical Procedures", "Retrospective Studies", "Spinal Fusion", "Tomography, X-Ray Computed", "Treatment Outcome" ]
7844889
Background
Spondylodiscitis is an unusual infectious disease that typically originates as a pathogenic infection of the intervertebral discs and then spreads to neighboring vertebral bodies [1]. Mortality is around 2–3% [2]. Its incidence after spine surgery varies between 0.2% and 3.6% [1, 3, 4]. There is no uniform standard for the treatment of spondylodiscitis. Conservative therapy, including bracing and appropriate antibiotics, is usually adequate for patients in whom mild infection is detected early [5]. However, delayed diagnosis and treatment of spondylitis are common because of its variable early clinical manifestations and indolent course, which may lead to failure of conservative treatment [5]. Surgical treatment is reserved for patients in whom conservative therapy has failed, including those with intractable spinal pain, large epidural abscesses, and extensive vertebral body destruction [6]. The major purposes of surgical intervention in spondylodiscitis are to remove infected tissue, relieve spinal pain, rebuild spinal stability, and improve limb dysfunction. Open surgery was advocated in the past [7–11]. However, whether performed through an anterior or a posterior approach, open surgery carries serious complications such as nerve or vessel injuries because of the extensive anatomical dissection and the destruction of stable spinal structures [12, 13]. Recent research favors minimally invasive surgery (MIS) [7, 14, 15]. Existing studies have focused on single-level or early-stage infectious spondylodiscitis, and good clinical results have been reported after percutaneous debridement and drainage [14, 16, 17]. However, there are few reports on the management of advanced multilevel infections. These are difficult to treat with open or endoscopic surgery in current clinical practice because of the mechanical instability of the affected multilevel segments caused by widespread destruction from the disease process [14, 18–21]. To our knowledge, this is the first report of treating multilevel spondylodiscitis with MIS. MIS minimizes damage to the stable structures of the posterior spine and the paraspinal soft tissues. However, anatomic landmarks are difficult to identify in MIS, which may lead to severe complications. Therefore, to increase the accuracy of debridement and drainage, CT guidance was used during the operation. The purpose of this study was to evaluate the clinical effect of percutaneous debridement and drainage using intraoperative CT-Guide in the treatment of multilevel spondylodiscitis. Considering the particularity of tuberculous spondylodiscitis, it was not included in this study.
null
null
Results
Demographic data: The 23 enrolled patients included 9 patients with Gram-positive (+) organisms, 6 with Gram-negative (-) organisms, 7 with fungi, and 1 with a mixed infection (Tables 1 and 2). Of the 23 patients, the maximum number of infected levels was 6 and the average number of infected levels was 3.2.

Table 1. Cultured pathogens in 23 patients who received minimally invasive debridement and drainage treatment

| Cultured pathogens | Number |
|---|---|
| Gram-positive (+) | 9 |
| — S. aureus | 4 |
| — S. epidermidis | 3 |
| — S. viridans | 1 |
| — MRSA | 1 |
| Gram-negative (-) | 6 |
| — E. coli | 3 |
| — E. faecalis | 2 |
| — Brucella | 1 |
| Fungi | 7 |
| — Aspergillus fumigatus | 5 |
| — Aspergillus flavus | 2 |
| Mixed infection (Aspergillus fumigatus and S. aureus) | 1 |
| Total | 23 |

Table 2. Demographic data of seven patients with postoperative spinal infection

| No. | Operations before MIS | Internal fixation (Y/N) | Fixation removed (Y/N) | Cultured organism |
|---|---|---|---|---|
| Case 1 | Open lumbar surgery + debridement and internal fixation | Y | N | Aspergillus fumigatus |
| Case 2 | Open internal fixation of lumbar spine | Y | N | Aspergillus flavus |
| Case 3 | Open operation for malignant schwannoma | N | - | E. coli |
| Case 4 | Open internal fixation of lumbar spine | N | - | S. epidermidis |
| Case 5 | Open internal fixation of lumbar spine | Y | N | S. aureus |
| Case 6 | Unidentified lumbar infection + open debridement and internal fixation | Y | N | Aspergillus fumigatus |
| Case 7 | Open internal fixation of lumbar spine | Y | N | S. aureus |

At admission, all but one patient presented with a fever of more than 38.5 °C. All patients presented with an ESR of more than 20 mm/hr (range, 50 to 115 mm/hr). Elevated CRP ranged from 14.9 mg/L to 30.6 mg/L, with an average of 23 mg/L (Table 3). The white blood cell count was elevated above normal in only one case.

Table 3. Outcome measures for clinical and laboratory indicators

| Measure | Pre-treatment (at admission) | Post-operation (1 week) | Post-extubation (1 month) |
|---|---|---|---|
| ESR (mm/hr) | 84.5 (50–115) | 23.4 (15–38) | 13.2 (6–19) |
| CRP (mg/L) | 26.0 (14–30) | 16.5 (12–21) | 3.7 (2–10) |
| Visual analog scale | 8.6 (6–10) | 4.5 (3–5) | 2.3 (1–3) |
| Oswestry disability index | 69.3 (56–89) | 34.2 (25–44) | 26.3 (13–42) |

In this study, 7 patients developed spondylitis after a previous operation, 5 of whom had undergone internal fixation implantation. All received minimally invasive treatment without removal of the internal fixation (Table 2). Before coming to our hospital, only 2 patients had a culture report (fungal infection); both had undergone open debridement surgery in other hospitals before transfer to our hospital.

Radiologic findings: In all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplates and varying degrees of vertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segments during follow-up, but two patients with fungal spinal infection (case 1, Figs. 2 and 3; case 2, Figs. 4 and 5) progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed on plain radiography, and MRI showed varying degrees of spontaneous fusion [24].

Fig. 1. A 56-year-old man diagnosed with T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed grossly destroyed endplates.

Fig. 2. Sequential radiologic findings in a previously healthy 40-year-old patient (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed at L3/4 (c and d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital (e and f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction, 2 months after fixation surgery (g and h).
Fig. 3. The same 40-year-old patient as shown in Fig. 2 (case 1). Samples obtained using intraoperative CT-Guide confirmed fungal spondylodiscitis (a). Five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2, T12-L1, and T11-12 levels (b and c). Postoperative photograph of the minimally invasive debridement and irrigation (d). At the 48-month follow-up, MRI (e and f) showed all infections had resolved with no further progression; the T1-weighted image showed increased signal with extensive additional enhancement of the fat signal of the vertebral bodies. The endplates were grossly deformed. T1- and T2-weighted images show a narrow line at the site of the discs.

Fig. 4. Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection (L4/5) was detected by MRI approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e and f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g and h).

Fig. 5. The same 49-year-old woman as shown in Fig. 4 (case 2). Lesion debridement and fixation removal were performed. MRI at our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction, six weeks after fixation removal (a and b). Clinical photograph of the postoperative percutaneous debridement and irrigation (c). At the 12-month follow-up, T1- (d) and T2-weighted (e) MRI showed all infection had resolved.
Clinical outcomes: Before surgery, all patients had significant back pain. Two patients had radiating lower-limb pain, but no patient had muscle weakness or bowel or bladder dysfunction preoperatively. The operation time ranged from 30 to 124 minutes per spinal level, with an average of 48 minutes. Intraoperative hemorrhage was minimal. In this patient population, the average diagnostic delay [25] was 37 days. There were significant differences in VAS and ODI between pre-treatment and post-operation (P < 0.05), and between pre-treatment and post-extubation (P < 0.05) (Table 3).
The average drainage time was 14 days (range, 5–26 days). No serious complications occurred in the perioperative period. In one case (case 6), a drainage tube became detached, and another puncture and catheterization were performed. Follow-up: Postoperative follow-up ranged from 1 year to 6.5 years (mean, 3.7 years). One patient (case 2) was lost to follow-up at 12 months postoperatively. One patient (case 3) died of an abdominal neoplasm at 12 months postoperatively. According to the Macnab classification [22], one patient (case 2) had a good outcome at the final follow-up, and the remainder were excellent.
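The pre/post comparisons above (VAS and ODI in Table 3) are paired measurements on the same patients. The sketch below shows one common way such a comparison could be run; the VAS values are synthetic, and the choice of the Wilcoxon signed-rank test is our assumption, not necessarily the test the authors used.

```python
# Illustrative paired comparison of pre-treatment vs post-operation VAS.
# Synthetic scores; the Wilcoxon signed-rank test is an assumed choice.
from scipy.stats import wilcoxon

vas_pre  = [8, 9, 10, 7, 8, 9, 6, 10, 8, 9]   # hypothetical admission scores
vas_post = [4, 5, 5, 3, 4, 5, 3, 5, 4, 5]     # hypothetical 1-week scores

stat, p_value = wilcoxon(vas_pre, vas_post)
print(f"Wilcoxon statistic={stat}, p={p_value:.4f}")
# A p-value below 0.05 would support the reported significant improvement.
```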
Conclusions
On the basis of these findings, we believe that percutaneous debridement and irrigation under intraoperative CT-Guide is an effective minimally invasive treatment for multilevel advanced spondylodiscitis.
[ "Background", "Methods", "Patients", "Operative procedure", "Outcome measures", "Statistical methods", "Demographic data", "Radiologic findings", "Clinical outcomes", "Follow‐up" ]
[ "Spondylodiscitis is an uncommon infectious disease that usually originates as a pathogenic infection of the intervertebral disc and then spreads to the neighboring vertebral bodies [1]. Mortality is around 2–3 % [2]. Its incidence after spine surgery varies between 0.2 % and 3.6 % [1, 3, 4]. There is no uniform standard for the treatment of spondylodiscitis. Conservative therapy, including bracing and appropriate antibiotics, is usually adequate when a mild infection is detected early [5]. However, delayed diagnosis and treatment are common because of the variable early clinical manifestations and indolent course of the disease, and this may lead to failure of conservative treatment [5]. Surgical treatment is reserved for patients in whom conservative therapy has failed and for those with intractable spinal pain, large epidural abscesses, or extensive vertebral body destruction [6]. The major purposes of surgical intervention in spondylodiscitis are to remove the infected tissue, relieve spinal pain, rebuild spinal stability, and improve limb dysfunction. Open surgery was advocated in the past [7–11]. However, whether performed through an anterior or a posterior approach, open surgery carries a risk of serious complications such as nerve or vessel injury because of the extensive anatomical dissection and the destruction of stable spinal structures [12, 13]. Recent research favors minimally invasive surgery (MIS) [7, 14, 15].\nExisting studies have focused on single-level or early-stage infectious spondylodiscitis, and good clinical results have been reported after percutaneous debridement and drainage [14, 16, 17]. However, there are few reports on the management of advanced multilevel infections. These are difficult to treat with open or endoscopic surgery in current clinical practice because of the mechanical instability of the affected multilevel segments caused by widespread destruction from the disease process [14, 18–21]. To our knowledge, this is the first report of multilevel spondylodiscitis treated with MIS. MIS may minimize damage to the stable structures of the posterior spine and the paraspinal soft tissues. However, anatomic landmarks are difficult to identify in MIS, which may lead to severe complications. Therefore, to increase the accuracy of debridement and drainage, CT-Guide was used during the operation.\nThe purpose of this study was to evaluate the clinical effect of percutaneous debridement and drainage using intraoperative CT-Guide in the treatment of multilevel spondylodiscitis. Given its distinct management, tuberculous spondylodiscitis was not included in this study.", " Patients From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with percutaneous debridement and drainage in our department. There were 12 female and 11 male patients, with an average age of 56.5 years (range, 40 to 65 years). All patients had been treated conservatively (antibiotics and bed rest) or with open surgery at other hospitals before transfer to our department. There were 7 cases of infection after open or minimally invasive surgery and 16 cases of unknown etiology.\nThe clinical diagnosis of spondylodiscitis was based mainly on routine blood tests, including C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), and on imaging examinations comprising X-ray, CT, and magnetic resonance imaging (MRI).\n Operative procedure The patients were positioned prone on a radiolucent surgical bed after induction of local anesthesia. Under CT-Guide (Brainlab® system), the target disc was located and the entry site was marked on the skin 3–10 cm from the midline. All cases were treated via a transforaminal approach. The puncture direction was about 45 degrees of abduction. The needle was advanced through the safety triangle into the infected intervertebral space. On each side, a spinal needle was inserted directly into the infected disc, and a guidewire was introduced through the spinal needle. After a small incision (about 1 cm) was made, a dilator and a cannulated sleeve were guided over the wire to the target site. Where necessary, infected tissue of the target disc was extracted with discectomy forceps through the cannulated sleeve. The same procedure was performed on the contralateral side. In the case of a paravertebral abscess, paravertebral tube drainage was performed at the same time; this provided both sufficient biopsy material and extensive debridement of necrotic and inflammatory tissue from a different direction. After biopsy and debridement, at least 1000 ml of physiological saline was used for irrigation at each level. Finally, double-lumen irrigation-drainage catheters were placed in the debrided segment and attached to negative-pressure suction for postoperative irrigation. Postoperatively, 1500 ml of broad-spectrum antibiotic saline was used for local irrigation every day via continuous irrigation and flushing. Once the results of microbial culture became available, the broad-spectrum antibiotics were replaced by narrower-spectrum agents according to sensitivity. Antibiotic treatment in the perioperative period followed the relevant literature [5].\nThe criteria for cessation of irrigation were: 1. complete disappearance of clinical symptoms; 2. clear fluid after flushing; 3. CRP declining to the normal range or to its level before spondylodiscitis. If two of these three criteria were met, flushing was stopped. After cessation of irrigation, the cannulae were removed after about 48 hours if CRP remained low.\n Outcome measures All patients were followed up in the clinic at 1 month and then every 3 months to determine whether the infection remained under control after discharge [22]. All patients were followed up with X-ray and MRI at each visit [23]. The following factors were assessed: physical examination, laboratory tests, back pain score (visual analog scale, VAS), Oswestry disability index (ODI), and the Macnab criteria (as proposed by Macnab I) [22]. After the operation, each patient was asked to identify which of four levels corresponded to their condition: excellent, good, fair, or poor. No pain and no restriction of activity is excellent; occasional back or leg pain is good; intermittent pain affecting work and life is fair; no improvement, or further operative intervention required, is poor.\n Statistical methods SPSS 23.0 was used for statistical analysis. Overall summary statistics were calculated as means ± SD for continuous variables. The t-test was used for measurement data. All statistical tests were two-sided, with P < 0.05 taken as significant.", "From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with percutaneous debridement and drainage in our department. There were 12 female and 11 male patients, with an average age of 56.5 years (range, 40 to 65 years). All patients had been treated conservatively (antibiotics and bed rest) or with open surgery at other hospitals before transfer to our department. There were 7 cases of infection after open or minimally invasive surgery and 16 cases of unknown etiology.\nThe clinical diagnosis of spondylodiscitis was based mainly on routine blood tests, including C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), and on imaging examinations comprising X-ray, CT, and magnetic resonance imaging (MRI).", "The patients were positioned prone on a radiolucent surgical bed after induction of local anesthesia. Under CT-Guide (Brainlab® system), the target disc was located and the entry site was marked on the skin 3–10 cm from the midline. All cases were treated via a transforaminal approach. The puncture direction was about 45 degrees of abduction. The needle was advanced through the safety triangle into the infected intervertebral space. On each side, a spinal needle was inserted directly into the infected disc, and a guidewire was introduced through the spinal needle. After a small incision (about 1 cm) was made, a dilator and a cannulated sleeve were guided over the wire to the target site. Where necessary, infected tissue of the target disc was extracted with discectomy forceps through the cannulated sleeve. The same procedure was performed on the contralateral side. In the case of a paravertebral abscess, paravertebral tube drainage was performed at the same time; this provided both sufficient biopsy material and extensive debridement of necrotic and inflammatory tissue from a different direction. After biopsy and debridement, at least 1000 ml of physiological saline was used for irrigation at each level. Finally, double-lumen irrigation-drainage catheters were placed in the debrided segment and attached to negative-pressure suction for postoperative irrigation. Postoperatively, 1500 ml of broad-spectrum antibiotic saline was used for local irrigation every day via continuous irrigation and flushing. Once the results of microbial culture became available, the broad-spectrum antibiotics were replaced by narrower-spectrum agents according to sensitivity. Antibiotic treatment in the perioperative period followed the relevant literature [5].\nThe criteria for cessation of irrigation were: 1. complete disappearance of clinical symptoms; 2. clear fluid after flushing; 3. CRP declining to the normal range or to its level before spondylodiscitis. If two of these three criteria were met, flushing was stopped. After cessation of irrigation, the cannulae were removed after about 48 hours if CRP remained low.", "All patients were followed up in the clinic at 1 month and then every 3 months to determine whether the infection remained under control after discharge [22]. All patients were followed up with X-ray and MRI at each visit [23]. The following factors were assessed: physical examination, laboratory tests, back pain score (visual analog scale, VAS), Oswestry disability index (ODI), and the Macnab criteria (as proposed by Macnab I) [22]. After the operation, each patient was asked to identify which of four levels corresponded to their condition: excellent, good, fair, or poor. No pain and no restriction of activity is excellent; occasional back or leg pain is good; intermittent pain affecting work and life is fair; no improvement, or further operative intervention required, is poor.", "SPSS 23.0 was used for statistical analysis. Overall summary statistics were calculated as means ± SD for continuous variables. The t-test was used for measurement data. All statistical tests were two-sided, with P < 0.05 taken as significant.", "The 23 enrolled patients included 9 with Gram-positive infections, 6 with Gram-negative infections, 7 with fungal infections, and 1 with a mixed infection (Tables 1 and 2). Of the 23 patients, the maximum number of infected levels was 6 and the average was 3.2.\nTable 1 Cultured pathogens in the 23 patients who received minimally invasive debridement and drainage: Gram-positive (9): S. aureus 4, S. epidermidis 3, S. viridans 1, MRSA 1; Gram-negative (6): E. coli 3, E. faecalis 2, Brucella 1; fungi (7): Aspergillus fumigatus 5, Aspergillus flavus 2; mixed infection (1): Aspergillus fumigatus and S. aureus; total 23.\nTable 2 Demographic data of the seven patients with postoperative spinal infection (operation before MIS; internal fixation Y/N; fixation removed Y/N; cultured organism): Case 1: open lumbar surgery + debridement and internal fixation; Y; N; Aspergillus fumigatus. Case 2: open internal fixation of the lumbar spine; Y; N; Aspergillus flavus. Case 3: open operation for malignant schwannoma; N; -; E. coli. Case 4: open internal fixation of the lumbar spine; N; -; S. epidermidis. Case 5: open internal fixation of the lumbar spine; Y; N; S. aureus. Case 6: unidentified lumbar infection + open debridement and internal fixation; Y; N; Aspergillus fumigatus. Case 7: open internal fixation of the lumbar spine; Y; N; S. aureus.\nAt admission, all but one patient presented with a fever of more than 38.5 °C. All patients presented with an ESR of more than 20 mm/hr (range, 50 to 115 mm/hr). The elevated CRP ranged from 14.9 mg/L to 30.6 mg/L, with an average of 23 mg/L (Table 3). The white blood cell count was elevated above normal in only one case.\nTable 3 Clinical and laboratory outcome measures (pre-treatment at admission / 1 week post-operation / 1 month post-extubation): ESR (mm/hr): 84.5 (50–115) / 23.4 (15–38) / 13.2 (6–19). CRP (mg/L): 26.0 (14–30) / 16.5 (12–21) / 3.7 (2–10). VAS: 8.6 (6–10) / 4.5 (3–5) / 2.3 (1–3). ODI: 69.3 (56–89) / 34.2 (25–44) / 26.3 (13–42).\nIn this study, 7 patients developed spondylodiscitis after a previous operation, and 5 of them had undergone internal fixation. All received minimally invasive treatment without removal of the internal fixation (Table 2). Before coming to our hospital, only 2 patients had a culture report (fungal infection); both had undergone open debridement surgery at other hospitals and were then transferred to our hospital.", "In all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplates and varying degrees of intervertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segments during follow-up, but in two patients with fungal spinal infection (case 1, Figs. 2 and 3; case 2, Figs. 4 and 5) the infection progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed on plain radiography, and MRI showed varying degrees of spontaneous fusion [24].\nFig. 1 A 56-year-old man was diagnosed with T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyed.\nFig. 2 Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disc at L3/4 (a). The original single-level infection was detected by T1-weighted MRI approximately 2 weeks after the primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed at L3/4 (c and d). Lesion debridement + bone graft fusion + pedicle screw fixation was performed at another hospital (e and f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction, 2 months after the fixation surgery (g and h).\nFig. 3 The same 40-year-old man as shown in Fig. 2. Samples obtained under intraoperative CT-Guide confirmed a fungal spondylodiscitis (case 1) (a). Five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2, T12-L1, and T11-12 levels (b and c). Photograph after the minimally invasive debridement and irrigation (d). At the 48-month follow-up, MRI (e and f) showed that all the infections had resolved and no further progression was observed; the T1-weighted image showed increased signal with extensive additional enhancement of the fat signal of the vertebral bodies. The endplates were grossly deformed. T1- and T2-weighted images show a narrow line at the site of the discs.\nFig. 4 Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection (L4/5) was detected by MRI approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e and f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g and h).\nFig. 5 The same 49-year-old woman as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI at our institution showed that the infection had progressed to the L1/2 level, with endplate destruction, six weeks after fixation removal (a and b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At the 12-month follow-up, T1- (d) and T2-weighted (e) MRI showed all infection had resolved.", "Before surgery, all patients had significant back pain. Two patients had radiating lower-limb pain, but no patient had muscle weakness or bowel or bladder dysfunction preoperatively. The operation time ranged from 30 to 124 minutes per spinal level, with an average of 48 minutes. Intraoperative hemorrhage was minimal. In this patient population, the average delay in diagnosis [25] was 37 days. There were significant differences in VAS and ODI between pre-treatment and post-operation (P < 0.05), and between pre-treatment and post-extubation (P < 0.05) (Table 3).\nThe average drainage time was 14 days (range, 5–26 days). No serious complications occurred in the perioperative period. In one case (Case 6), a drainage tube became detached, and repeat puncture and catheterization were performed.", "Postoperative follow-up ranged from 1 year to 6.5 years (mean, 3.7 years). One patient (Case 2) was lost to follow-up at 12 months postoperatively. One patient (Case 3) died from an abdominal neoplasm at 12 months postoperatively. According to the Macnab classification [22], one patient (Case 2) had a good outcome at final follow-up, and the remainder were excellent." ]
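The irrigation-cessation rule in the operative-procedure text above (stop flushing once any two of the three criteria are met) is, at bottom, simple threshold logic. The sketch below restates it as code purely as a reading aid; the function and parameter names are hypothetical and are not part of any published protocol software.

    # Illustrative encoding of the 2-of-3 irrigation-cessation rule described
    # in the operative procedure. Names are hypothetical; logic follows the text.
    def may_stop_irrigation(symptoms_resolved: bool,
                            effluent_clear: bool,
                            crp_normalized: bool) -> bool:
        # Stop flushing when at least two of the three criteria are satisfied.
        return sum([symptoms_resolved, effluent_clear, crp_normalized]) >= 2

    # Example: symptoms resolved and CRP back to baseline, effluent still cloudy.
    print(may_stop_irrigation(True, False, True))  # True -> stop flushing;
    # the cannulae are then removed after about 48 hours if CRP remains low.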
[ null, null, null, null, null, null, null, null, null, null ]
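The Macnab grading used as an outcome measure above maps four qualitative descriptions onto four grades. As a reading aid, the sketch below restates that mapping in code; the function and its inputs are hypothetical illustrations of the published wording, not a validated scoring instrument.

    # Restatement of the four Macnab outcome grades quoted in the
    # outcome-measures text. Purely illustrative; not a clinical tool.
    def macnab_grade(pain_free: bool,
                     occasional_pain: bool,
                     pain_affects_work_and_life: bool,
                     needs_reoperation: bool) -> str:
        if needs_reoperation:
            return "poor"        # no improvement or further surgery required
        if pain_free:
            return "excellent"   # no pain, no restriction of activity
        if pain_affects_work_and_life:
            return "fair"        # intermittent pain affecting work and life
        if occasional_pain:
            return "good"        # occasional back or leg pain
        return "poor"            # no improvement

    print(macnab_grade(False, True, False, False))  # -> "good"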
[ "Background", "Methods", "Patients", "Operative procedure", "Outcome measures", "Statistical methods", "Results", "Demographic data", "Radiologic findings", "Clinical outcomes", "Follow‐up", "Discussion", "Conclusions" ]
[ "Spondylodiscitis is an unusual infectious disease, which usually originates as a pathogenic infection of intervertebral discs and then spreads to neighboring vertebral bodies [1]. Mortality is around 2–3 % [2]. Its incidence varies between 0.2 % and 3.6 % after spine surgery [1, 3, 4]. There is no uniform standard for the treatment of spondylodiscitis. Conservative therapy including bracing and appropriate antibiotics is always adequate for the patients involving early detection of mild infection [5]. But delayed diagnosis and treatment of spondylitis are common because of their early variable clinical manifestations and indolent courses, which may lead to the failure of conservative treatment [5]. Surgical treatment is reserved for patients with failed conservative therapy, including intractable spinal pain, large epidural abscesses, and extensive vertebral body destruction [6]. The major purpose of surgical intervention in spondylodiscitis is to remove the infected tissues, relieve spinal pain, rebuild the spinal stability, and improve limb dysfunction. Open surgery has been advocated in the past [7–11]. However, whether the anterior or posterior approach, open surgery faced serious complications like nerve or vessel injuries due to extensive anatomical dissection and destruction of the stable spinal structure [12, 13]. Recent research favors a minimally invasive surgery(MIS) [7, 14, 15].\nExisting studies have focused on single-level or early-stage infectious spondylodiscitis, and good clinical results have been reported after percutaneous debridement and drainage [14, 16, 17]. However, there are few reports of the management of advanced multilevel infections. These are difficult to treat using open or endoscopic surgery in current clinical practice, because of mechanical instability of the affected multilevel segments caused by widespread destruction due to the disease process [14, 18–21]. To my knowledge, up to date, this is the first report to treat multilevel spondylodiscitis with MIS. MIS may provide minimized damage stable structure of the posterior spine and paraspinal soft tissues. However, it is difficult to identify anatomic landmarks in MIS that may lead to severe complications. Therefore, to increase the accuracy of debridement and drainage, CT-Guide was performed during the operation.\nThe purpose of this study was to evaluate the clinical effect of percutaneous debridement and drainage using intraoperative CT-Guide in the treatment of multilevel spondylodiscitis. Considering the particularity of tuberculous spondylodiscitis, it was not included in this study.", " Patients From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with percutaneous debridement and drainage procedures in our department. There were 12 female and 11 male patients with an average age of 56.5 years (range from 40 to 65 years). All patients were treated conservatively (antibiotics and bed rest) or with open surgery in other hospitals before transfer to our department. There were 7 cases of infection after open or minimally surgery and 16 cases of unknown etiology.\nClinical diagnosis of spondylodiscitis was mainly based on routine blood tests including C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR); imaging examinations comprising X-ray, CT scan, and magnetic resonance imaging (MRI).\nFrom January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with percutaneous debridement and drainage procedures in our department. 
There were 12 female and 11 male patients with an average age of 56.5 years (range from 40 to 65 years). All patients were treated conservatively (antibiotics and bed rest) or with open surgery in other hospitals before transfer to our department. There were 7 cases of infection after open or minimally surgery and 16 cases of unknown etiology.\nClinical diagnosis of spondylodiscitis was mainly based on routine blood tests including C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR); imaging examinations comprising X-ray, CT scan, and magnetic resonance imaging (MRI).\n Operative procedure The patients were positioned in the prone position after induction of local anesthesia on a radiolucent surgical bed. Under CT-Guide (Brainlab® System), the target disc was located and the entry site was marked on the skin at a point 3–10 cm from the midline. All cases were treated by a transforaminal approach. The puncture direction was about 45 degrees abduction. The needle was punctured through the safety triangle to the infected vertebral space. On each side, a spinal needle was inserted directly into the infected disc and through the spinal needle, a guidewire was introduced. After a small incision (about 1 cm) was made, a dilator and a cannulated sleeve were guided through the wire into the targeted site. The infected tissue of the targeted disc was extracted with discectomy forceps through the cannulated sleeve if necessary. The same procedures were performed on the contralateral side. In the case of a paravertebral abscess, paravertebral tube drainage should be performed at the same time. This allowed for both sufficient biopsy material and extensive debridement of the necrosis and inflammatory tissue from a different direction. After biopsy and debridement, at least 1000 ml of the physiological saline was used for irrigation at each level. Finally, double cavity flushing drainage catheters were placed into the debrided segment and attached to the negative pressure suction for postoperative irrigation. Postoperatively, 1500 ml of broad-spectrum antibiotic saline was used irrigated locally every day via continuous irrigation and flushing. After the results of microbial culture became available, broad-spectrum antibiotics were replaced by those with more narrow-sensitivity. The antibiotic treatment in the perioperative period follows the relevant literature [5].\nThe criteria for the cessation of irrigation was: 1. complete disappearance of clinical symptoms; 2. clear fluid following flushing; 3. CRP declined to the normal range or the level before spondylodiscitis. If two of the above three indicators are met, the flushing will be stopped. After cessation of irrigation, the cannulae were removed after about 48 hours if CRP remained low.\nThe patients were positioned in the prone position after induction of local anesthesia on a radiolucent surgical bed. Under CT-Guide (Brainlab® System), the target disc was located and the entry site was marked on the skin at a point 3–10 cm from the midline. All cases were treated by a transforaminal approach. The puncture direction was about 45 degrees abduction. The needle was punctured through the safety triangle to the infected vertebral space. On each side, a spinal needle was inserted directly into the infected disc and through the spinal needle, a guidewire was introduced. After a small incision (about 1 cm) was made, a dilator and a cannulated sleeve were guided through the wire into the targeted site. 
The infected tissue of the targeted disc was extracted with discectomy forceps through the cannulated sleeve if necessary. The same procedures were performed on the contralateral side. In the case of a paravertebral abscess, paravertebral tube drainage should be performed at the same time. This allowed for both sufficient biopsy material and extensive debridement of the necrosis and inflammatory tissue from a different direction. After biopsy and debridement, at least 1000 ml of the physiological saline was used for irrigation at each level. Finally, double cavity flushing drainage catheters were placed into the debrided segment and attached to the negative pressure suction for postoperative irrigation. Postoperatively, 1500 ml of broad-spectrum antibiotic saline was used irrigated locally every day via continuous irrigation and flushing. After the results of microbial culture became available, broad-spectrum antibiotics were replaced by those with more narrow-sensitivity. The antibiotic treatment in the perioperative period follows the relevant literature [5].\nThe criteria for the cessation of irrigation was: 1. complete disappearance of clinical symptoms; 2. clear fluid following flushing; 3. CRP declined to the normal range or the level before spondylodiscitis. If two of the above three indicators are met, the flushing will be stopped. After cessation of irrigation, the cannulae were removed after about 48 hours if CRP remained low.\n Outcome measures All patients were followed up in the clinic at 1 month and then every 3 months to determine whether the infection remained under control after discharge [22]. All patients were followed up with X-ray and MRI at each visit [23]. The following factors were assessed: physical examination, laboratory tests, back pain score (visual analog scale, VAS), Oswestry disability index(ODI), and Macnab criteria(as proposed by Macnab I) [22]. After the operation, the patient was asked to identify which one of the four levels corresponded to their condition: excellent, good, fair, poor. No pain and no restriction of activity are excellent; occasional back or leg pain is good; intermittent pain affecting work and life is fair; no improvement or further operative intervention required is poor.\nAll patients were followed up in the clinic at 1 month and then every 3 months to determine whether the infection remained under control after discharge [22]. All patients were followed up with X-ray and MRI at each visit [23]. The following factors were assessed: physical examination, laboratory tests, back pain score (visual analog scale, VAS), Oswestry disability index(ODI), and Macnab criteria(as proposed by Macnab I) [22]. After the operation, the patient was asked to identify which one of the four levels corresponded to their condition: excellent, good, fair, poor. No pain and no restriction of activity are excellent; occasional back or leg pain is good; intermittent pain affecting work and life is fair; no improvement or further operative intervention required is poor.\n Statistical methods SPSS 23.0 was used for statistical analysis. Overall summary statistics were calculated in terms of means ± SD for continuous variables. In this study, t-test was used for measurement data. All statistical tests were bilateral, with P < 0.05 as the significance standard.\nSPSS 23.0 was used for statistical analysis. Overall summary statistics were calculated in terms of means ± SD for continuous variables. In this study, t-test was used for measurement data. 
All statistical tests were bilateral, with P < 0.05 as the significance standard.", "From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with percutaneous debridement and drainage procedures in our department. There were 12 female and 11 male patients with an average age of 56.5 years (range from 40 to 65 years). All patients were treated conservatively (antibiotics and bed rest) or with open surgery in other hospitals before transfer to our department. There were 7 cases of infection after open or minimally surgery and 16 cases of unknown etiology.\nClinical diagnosis of spondylodiscitis was mainly based on routine blood tests including C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR); imaging examinations comprising X-ray, CT scan, and magnetic resonance imaging (MRI).", "The patients were positioned in the prone position after induction of local anesthesia on a radiolucent surgical bed. Under CT-Guide (Brainlab® System), the target disc was located and the entry site was marked on the skin at a point 3–10 cm from the midline. All cases were treated by a transforaminal approach. The puncture direction was about 45 degrees abduction. The needle was punctured through the safety triangle to the infected vertebral space. On each side, a spinal needle was inserted directly into the infected disc and through the spinal needle, a guidewire was introduced. After a small incision (about 1 cm) was made, a dilator and a cannulated sleeve were guided through the wire into the targeted site. The infected tissue of the targeted disc was extracted with discectomy forceps through the cannulated sleeve if necessary. The same procedures were performed on the contralateral side. In the case of a paravertebral abscess, paravertebral tube drainage should be performed at the same time. This allowed for both sufficient biopsy material and extensive debridement of the necrosis and inflammatory tissue from a different direction. After biopsy and debridement, at least 1000 ml of the physiological saline was used for irrigation at each level. Finally, double cavity flushing drainage catheters were placed into the debrided segment and attached to the negative pressure suction for postoperative irrigation. Postoperatively, 1500 ml of broad-spectrum antibiotic saline was used irrigated locally every day via continuous irrigation and flushing. After the results of microbial culture became available, broad-spectrum antibiotics were replaced by those with more narrow-sensitivity. The antibiotic treatment in the perioperative period follows the relevant literature [5].\nThe criteria for the cessation of irrigation was: 1. complete disappearance of clinical symptoms; 2. clear fluid following flushing; 3. CRP declined to the normal range or the level before spondylodiscitis. If two of the above three indicators are met, the flushing will be stopped. After cessation of irrigation, the cannulae were removed after about 48 hours if CRP remained low.", "All patients were followed up in the clinic at 1 month and then every 3 months to determine whether the infection remained under control after discharge [22]. All patients were followed up with X-ray and MRI at each visit [23]. The following factors were assessed: physical examination, laboratory tests, back pain score (visual analog scale, VAS), Oswestry disability index(ODI), and Macnab criteria(as proposed by Macnab I) [22]. 
After the operation, the patient was asked to identify which one of the four levels corresponded to their condition: excellent, good, fair, poor. No pain and no restriction of activity are excellent; occasional back or leg pain is good; intermittent pain affecting work and life is fair; no improvement or further operative intervention required is poor.", "SPSS 23.0 was used for statistical analysis. Overall summary statistics were calculated in terms of means ± SD for continuous variables. In this study, t-test was used for measurement data. All statistical tests were bilateral, with P < 0.05 as the significance standard.", " Demographic data The 23 enrolled patients included 9 patients with Gram-positive (+), 6 patients with Gram-negative (-), 7 patients with Fungi, and 1 with mixed infection (Tables 1 and 2). Of the 23 patients, the maximum number of infected levels was 6 and the average number of infected levels was 3.2.\nTable 1Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatmentCultured pathogensNumberGram-positive (+)9S. aureus4S. epidermidis3S. viridans1MRSA1Gram-negative (-)6E. coli3E. faecalis2Brucella1Fungi7Aspergillus fumigatus5Aspergillus flavus2Mixed infection1Aspergillus fumigatus and S. aureusTotal Number123Table 2Demographic data of seven patients with postoperative spinal infectionNOOperations before MISInternal fixation(Y/N)Fixation removed(Y/N)Cultured bacteriaCase 1open lumbar surgery + debridement and internal fixationYNAspergillus fumigatusCase 2Open internal fixation of lumbar spineYNAspergillus flavusCase 3Open Operation of malignant schwannomaN-E. coliCase 4Open internal fixation of lumbar spineN-S.epidermidisCase 5Open internal fixation of lumbar spineYNS. aureusCase 6Unidentified Lumbar infection + open debridement and internal fixationYNAspergillus fumigatusCase 7Open internal fixation of lumbar spineYNS. aureus\nCultured pathogens in 23 patients received minimally invasive debridement and drainage treatment\nAspergillus fumigatus and S. aureus\nTotal Number\n1\n23\nDemographic data of seven patients with postoperative spinal infection\nAt admission, all but one patient presented with a fever of more than 38.5 °C. All patients presented with ESR of more than 20 mm/hr (range, 50 to 115 mm/hr). The elevated CRP ranged from 14.9 mg/L to 30.6 mg/L with an average of 23 mg/L (Table 3). The white blood cell count was elevated above normal in only one case.\nTable 3The results of the outcome measures for Clinical and laboratory indicatorsLaboratory testsPre-Treatment(at admission)Post-Operation(1w)Post- Extubation(1 m)ESR (mm/hr)84.5 (50–115)23.4 (15–38)13.2 (6–19)CRP (mg/L)26.0 (14–30)16.5 (12–21)3.7 (2–10)Functional resultsVisual analog scale8.6 (6–10)4.5 (3–5)2.3 (1–3)Oswestry disability index69.3(56–89)34.2 (25–44)26.3 (13–42)\nThe results of the outcome measures for Clinical and laboratory indicators\nIn this study, 7 patients with spondylitis were infected after the operation, and 5 of them underwent internal fixation implantation. All patients received minimally invasive treatment without the removal of internal fixation (Table 2). Before coming to our hospital, only 2 patients had a culture report (fungal infection), and both of them underwent open debridement surgery in other hospitals, and then transferred to our hospital.\nThe 23 enrolled patients included 9 patients with Gram-positive (+), 6 patients with Gram-negative (-), 7 patients with Fungi, and 1 with mixed infection (Tables 1 and 2). 
Of the 23 patients, the maximum number of infected levels was 6 and the average number of infected levels was 3.2.\nTable 1Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatmentCultured pathogensNumberGram-positive (+)9S. aureus4S. epidermidis3S. viridans1MRSA1Gram-negative (-)6E. coli3E. faecalis2Brucella1Fungi7Aspergillus fumigatus5Aspergillus flavus2Mixed infection1Aspergillus fumigatus and S. aureusTotal Number123Table 2Demographic data of seven patients with postoperative spinal infectionNOOperations before MISInternal fixation(Y/N)Fixation removed(Y/N)Cultured bacteriaCase 1open lumbar surgery + debridement and internal fixationYNAspergillus fumigatusCase 2Open internal fixation of lumbar spineYNAspergillus flavusCase 3Open Operation of malignant schwannomaN-E. coliCase 4Open internal fixation of lumbar spineN-S.epidermidisCase 5Open internal fixation of lumbar spineYNS. aureusCase 6Unidentified Lumbar infection + open debridement and internal fixationYNAspergillus fumigatusCase 7Open internal fixation of lumbar spineYNS. aureus\nCultured pathogens in 23 patients received minimally invasive debridement and drainage treatment\nAspergillus fumigatus and S. aureus\nTotal Number\n1\n23\nDemographic data of seven patients with postoperative spinal infection\nAt admission, all but one patient presented with a fever of more than 38.5 °C. All patients presented with ESR of more than 20 mm/hr (range, 50 to 115 mm/hr). The elevated CRP ranged from 14.9 mg/L to 30.6 mg/L with an average of 23 mg/L (Table 3). The white blood cell count was elevated above normal in only one case.\nTable 3The results of the outcome measures for Clinical and laboratory indicatorsLaboratory testsPre-Treatment(at admission)Post-Operation(1w)Post- Extubation(1 m)ESR (mm/hr)84.5 (50–115)23.4 (15–38)13.2 (6–19)CRP (mg/L)26.0 (14–30)16.5 (12–21)3.7 (2–10)Functional resultsVisual analog scale8.6 (6–10)4.5 (3–5)2.3 (1–3)Oswestry disability index69.3(56–89)34.2 (25–44)26.3 (13–42)\nThe results of the outcome measures for Clinical and laboratory indicators\nIn this study, 7 patients with spondylitis were infected after the operation, and 5 of them underwent internal fixation implantation. All patients received minimally invasive treatment without the removal of internal fixation (Table 2). Before coming to our hospital, only 2 patients had a culture report (fungal infection), and both of them underwent open debridement surgery in other hospitals, and then transferred to our hospital.\n Radiologic findings In all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplate and varying degree of vertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segment during follow-up, but two patients with fungal spinal infection (Case1 Fig. 2/3) (Case2 Fig. 4/5) progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed by plain radiography, and MRI showed varying degrees of spontaneous fusion [24].\nFig. 1A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyedFig. 2Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). 
The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h)Fig. 3The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discsFig. 4Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h)Fig. 5The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved\nA 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyed\nSequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h)\nThe same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). 
At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discs\nSequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h)\nThe same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved\nIn all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplate and varying degree of vertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segment during follow-up, but two patients with fungal spinal infection (Case1 Fig. 2/3) (Case2 Fig. 4/5) progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed by plain radiography, and MRI showed varying degrees of spontaneous fusion [24].\nFig. 1A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyedFig. 2Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h)Fig. 3The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discsFig. 
4Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h)Fig. 5The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved\nA 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyed\nSequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h)\nThe same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discs\nSequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h)\nThe same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved\n Clinical outcomes Before surgery, all patients had significant back pain. 
There were 2 patients with radiating lower limb pain but no patients had muscle weakness, bowel or bladder dysfunction preoperatively. The operation time ranged from 30 minutes to 124 minutes for every spinal level with an average of 48 minutes. Intraoperative hemorrhage was minimal. In this patient population, the average delay of the diagnosis[25] was thirty-seven days. There were significant differences in VAS and ODI between the pre-treatment and post-operation (P > 0.05). And there were significant differences in VAS and ODI between the pre-treatment and post-extubation (P > 0.05) (Table 3).\nThe average drainage time was 14 days (5–26 days). No serious complications were found in the perioperative period. In one case (Case 6), a drainage tube became detached, and another puncture and catheterization were performed.\nBefore surgery, all patients had significant back pain. There were 2 patients with radiating lower limb pain but no patients had muscle weakness, bowel or bladder dysfunction preoperatively. The operation time ranged from 30 minutes to 124 minutes for every spinal level with an average of 48 minutes. Intraoperative hemorrhage was minimal. In this patient population, the average delay of the diagnosis[25] was thirty-seven days. There were significant differences in VAS and ODI between the pre-treatment and post-operation (P > 0.05). And there were significant differences in VAS and ODI between the pre-treatment and post-extubation (P > 0.05) (Table 3).\nThe average drainage time was 14 days (5–26 days). No serious complications were found in the perioperative period. In one case (Case 6), a drainage tube became detached, and another puncture and catheterization were performed.\n Follow‐up Postoperative follow-up periods ranged from 1 year to 6.5 years, (mean 3.7 years). One patient (Case 2) was lost to follow-up at postoperative 12 months. One patient (Case 3) died from an abdominal neoplasm at postoperative 12 months. According to the classification system of Macnab [22], one patient (Case 2) had a good outcome at final follow-up, and the remainder were excellent.\nPostoperative follow-up periods ranged from 1 year to 6.5 years, (mean 3.7 years). One patient (Case 2) was lost to follow-up at postoperative 12 months. One patient (Case 3) died from an abdominal neoplasm at postoperative 12 months. According to the classification system of Macnab [22], one patient (Case 2) had a good outcome at final follow-up, and the remainder were excellent.", "The 23 enrolled patients included 9 patients with Gram-positive (+), 6 patients with Gram-negative (-), 7 patients with Fungi, and 1 with mixed infection (Tables 1 and 2). Of the 23 patients, the maximum number of infected levels was 6 and the average number of infected levels was 3.2.\nTable 1Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatmentCultured pathogensNumberGram-positive (+)9S. aureus4S. epidermidis3S. viridans1MRSA1Gram-negative (-)6E. coli3E. faecalis2Brucella1Fungi7Aspergillus fumigatus5Aspergillus flavus2Mixed infection1Aspergillus fumigatus and S. aureusTotal Number123Table 2Demographic data of seven patients with postoperative spinal infectionNOOperations before MISInternal fixation(Y/N)Fixation removed(Y/N)Cultured bacteriaCase 1open lumbar surgery + debridement and internal fixationYNAspergillus fumigatusCase 2Open internal fixation of lumbar spineYNAspergillus flavusCase 3Open Operation of malignant schwannomaN-E. 
Radiologic findings
In all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplates and varying degrees of intervertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segments during follow-up, but in two patients with fungal spinal infection (Case 1, Figs. 2 and 3; Case 2, Figs. 4 and 5) the infection progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed on plain radiography, and MRI showed varying degrees of spontaneous fusion [24].
Clinical outcomes
Before surgery, all patients had significant back pain. Two patients had radiating lower-limb pain, but no patient had muscle weakness or bowel or bladder dysfunction preoperatively. Operation time ranged from 30 to 124 minutes per spinal level, with an average of 48 minutes, and intraoperative hemorrhage was minimal. In this patient population, the average delay of diagnosis [25] was 37 days. There were significant differences in VAS and ODI between pre-treatment and post-operation (P < 0.05), and between pre-treatment and post-extubation (P < 0.05) (Table 3).
The average drainage time was 14 days (range, 5–26 days). No serious complications occurred in the perioperative period. In one case (Case 6), a drainage tube became detached, and repeat puncture and catheterization were performed.
Follow-up
Postoperative follow-up ranged from 1 year to 6.5 years (mean, 3.7 years). One patient (Case 2) was lost to follow-up at 12 months postoperatively. One patient (Case 3) died from an abdominal neoplasm at 12 months postoperatively. According to the Macnab classification [22], one patient (Case 2) had a good outcome at final follow-up, and the remainder were excellent.

Discussion
Spondylodiscitis is a term encompassing vertebral osteomyelitis, spondylitis, and discitis; its incidence is increasing because of growth in the susceptible population and improved diagnostics [26, 27]. Spondylodiscitis is usually a monobacterial infection, and in Europe more than 50 % of cases are caused by Staphylococcus aureus [5, 28]. Fungal discitis is rare, and to date only case reports exist [26]. Fungal discitis is usually due to molds [29–32] and Candida species [33–38], which is consistent with our results.
The clinical presentation of spondylodiscitis, especially in its early stage, may be nonspecific. Refractory, unremitting back pain, often requiring narcotic pain control, is the most common presentation; fever and neurologic deficit are less frequently encountered [1, 39]. Clinical symptoms appear at an average of six weeks after the primary procedure [1]. Elevated ESR and CRP are extremely sensitive, but the WBC count may be within the normal range [40]. In our series, CRP and ESR were elevated on admission in all patients, whereas the white blood cell count was elevated in only one patient. MRI is the gold standard imaging study for detecting spondylodiscitis [5, 41], with specificity and sensitivity as high as 96 % and 92 %, respectively [23, 42, 43]. Therefore, we routinely performed MRI during follow-up.
Treatment of spondylodiscitis, especially fungal infection, is often delayed because fungal organisms are slow-growing and difficult to identify by culture [25]. CT-guided needle biopsy has been recommended for isolating the causative pathogens [44–46]; however, the aspirate is often inadequate. Percutaneous endoscopic aspiration has been reported to have a high accuracy rate in detecting the causative organisms [47].
In this study, the causative organisms were extracted through a cannula less than 1 cm in diameter using discectomy forceps. Because we used intraoperative CT guidance, the discectomy forceps could reach the center of the lesion. Although the radiation dose of CT is higher than that of a C-arm, CT guidance is accurate and convenient for precise catheterization. Careful CT guidance may also benefit theatre staff by reducing their radiation exposure compared with fluoroscopically assisted spinal surgery [48, 49].
Multilevel spondylodiscitis after surgery is an intractable and troublesome complication that may not be resolved by simple surgical debridement. Major open surgery has important drawbacks in patients with multilevel infection, because the extensive destruction and mechanical instability of the affected segments after such surgery may be associated with significant rates of perioperative complications and mortality [1, 50]. Minimally invasive endoscopic debridement with dilute betadine irrigation is an effective alternative to extensive open surgery for single-level infectious spondylodiscitis, but its effectiveness for extensive destruction of vertebral bodies and multilevel refractory infection may be limited, because thorough endoscopic debridement of synchronous multiple lesions is difficult and can exhaust both patient and surgeon [14, 15, 51, 52]. Minimally invasive drainage and continuous irrigation with local administration of antibiotic agents, including minimally invasive suction aspiration, have been found effective in patients with spondylodiscitis [53, 54].
Our method has several advantages: (1) continuous drainage dilutes the pathogen load, reducing pathogenic capacity (including the invasiveness of the pathogens and their exotoxins and endotoxins); (2) minimally invasive placement of the drainage tube avoids major surgical trauma and is conducive to rehabilitation; (3) continuous perfusion does not impair the body's protective immune response; and (4) formation of a hematoma that could serve as a culture medium is inhibited [12].
In our series, two patients with fungal infection showed progressive disease spreading to adjacent segments. A previous study reported this unique pathological feature of spondylodiscitis, but the exact reason remains unclear [39]. It may depend on the premorbid general condition of the patient (malnutrition, immune suppression, malignancy), the fungal species, and delay in treatment [55]. Progressive disease may occur either above or below the lesion. Adjacent-segment preventive catheterization was performed in one patient and achieved its purpose, but its reliability needs further study.
This study has several limitations. First, we were able to collect only 23 cases; firmer conclusions await studies with larger samples. Second, because this was a retrospective, uncontrolled study of various disc infections, it is difficult to evaluate the efficacy of surgery independent of antimicrobial therapy. The feasibility and benefits of minimally invasive debridement and irrigation for multilevel spondylitis need further evaluation in larger, prospectively controlled series. Third, in view of the particular requirements of tuberculosis treatment, patients with tuberculous spondylitis were not included in this study.
In addition, this technique cannot be used for decompression or deformity correction; for patients with neurological compromise or spinal deformity, open surgery remains the requirement. However, the enrolled patients in our cohort had similar clinical features of multilevel spondylodiscitis, and all operations were performed by the same experienced surgeon (XFZ).

Conclusions
On the basis of these findings, we believe that minimally invasive debridement and irrigation under intraoperative CT guidance is an effective minimally invasive method for the treatment of advanced multilevel spondylodiscitis.
[ "Spondylodiscitis", "Minimally invasive surgery", "Drainage" ]
Background
Spondylodiscitis is an uncommon infectious disease that usually originates as a pathogenic infection of the intervertebral disc and then spreads to the neighboring vertebral bodies [1]. Mortality is around 2–3 % [2], and the incidence after spine surgery varies between 0.2 % and 3.6 % [1, 3, 4]. There is no uniform standard for the treatment of spondylodiscitis. Conservative therapy, including bracing and appropriate antibiotics, is usually adequate for patients in whom mild infection is detected early [5]. However, delayed diagnosis and treatment of spondylitis are common because of its variable early clinical manifestations and indolent course, and this may lead to failure of conservative treatment [5]. Surgical treatment is reserved for patients in whom conservative therapy has failed, including those with intractable spinal pain, large epidural abscesses, or extensive vertebral body destruction [6]. The major purposes of surgical intervention in spondylodiscitis are to remove the infected tissue, relieve spinal pain, rebuild spinal stability, and improve limb dysfunction. Open surgery was advocated in the past [7–11]. However, whether by an anterior or a posterior approach, open surgery carries serious complications such as nerve or vessel injury, owing to the extensive anatomical dissection and the destruction of stable spinal structures [12, 13]. Recent research favors minimally invasive surgery (MIS) [7, 14, 15]. Existing studies have focused on single-level or early-stage infectious spondylodiscitis, and good clinical results have been reported after percutaneous debridement and drainage [14, 16, 17]. However, there are few reports on the management of advanced multilevel infections, which are difficult to treat with open or endoscopic surgery in current clinical practice because of the mechanical instability of the affected segments caused by widespread destruction [14, 18–21]. To our knowledge, this is the first report of treating multilevel spondylodiscitis with MIS. MIS can minimize damage to the stable structures of the posterior spine and the paraspinal soft tissues. However, anatomic landmarks are difficult to identify in MIS, which may lead to severe complications. Therefore, to increase the accuracy of debridement and drainage, intraoperative CT guidance was used. The purpose of this study was to evaluate the clinical effect of percutaneous debridement and drainage under intraoperative CT guidance in the treatment of multilevel spondylodiscitis. In view of the particular nature of tuberculous spondylodiscitis, it was not included in this study.

Methods

Patients
From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with percutaneous debridement and drainage in our department. There were 12 female and 11 male patients, with an average age of 56.5 years (range, 40 to 65 years). All patients had been treated conservatively (antibiotics and bed rest) or with open surgery at other hospitals before transfer to our department. There were 7 cases of infection after open or minimally invasive surgery and 16 cases of unknown etiology. The clinical diagnosis of spondylodiscitis was based mainly on routine blood tests, including C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR), and on imaging examinations comprising X-ray, CT, and magnetic resonance imaging (MRI).
Operative procedure
The patients were positioned prone after induction of local anesthesia on a radiolucent surgical bed. Under CT guidance (Brainlab® system), the target disc was located and the entry site was marked on the skin at a point 3–10 cm from the midline. All cases were treated by a transforaminal approach, with a puncture direction of about 45 degrees of abduction. The needle was advanced through the safe triangle into the infected intervertebral space. On each side, a spinal needle was inserted directly into the infected disc, and a guidewire was introduced through it. After a small incision (about 1 cm) was made, a dilator and a cannulated sleeve were guided over the wire to the target site. Where necessary, infected tissue of the target disc was extracted with discectomy forceps through the cannulated sleeve. The same procedure was performed on the contralateral side. In the case of a paravertebral abscess, paravertebral tube drainage was performed at the same time; this provided both sufficient biopsy material and extensive debridement of necrotic and inflammatory tissue from a different direction. After biopsy and debridement, at least 1000 ml of physiological saline was used for irrigation at each level. Finally, double-lumen irrigation-drainage catheters were placed into the debrided segment and attached to negative-pressure suction for postoperative irrigation. Postoperatively, 1500 ml of broad-spectrum antibiotic saline was instilled locally every day via continuous irrigation and flushing. Once microbial culture results became available, broad-spectrum antibiotics were replaced by narrower-spectrum, sensitivity-guided agents. Antibiotic treatment in the perioperative period followed the relevant literature [5].
The criteria for cessation of irrigation were: (1) complete disappearance of clinical symptoms; (2) clear fluid following flushing; and (3) CRP declining to the normal range or to the level before spondylodiscitis. If two of these three indicators were met, flushing was stopped. After cessation of irrigation, the cannulae were removed after about 48 hours if CRP remained low.
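Because the stopping rule is an explicit two-of-three condition, it can be stated directly as executable logic. A minimal sketch in Python; the function and argument names are our own illustration, not part of the study protocol:

```python
def stop_irrigation(symptoms_resolved: bool,
                    fluid_clear: bool,
                    crp_normalized: bool) -> bool:
    """Return True when at least two of the three cessation
    criteria listed above are met."""
    return sum([symptoms_resolved, fluid_clear, crp_normalized]) >= 2

# Example: symptoms gone and flush fluid clear, CRP still elevated -> stop.
assert stop_irrigation(True, True, False) is True
# Only one criterion met -> keep irrigating.
assert stop_irrigation(False, True, False) is False
```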
Outcome measures
All patients were followed up in the clinic at 1 month and then every 3 months to determine whether the infection remained under control after discharge [22]. X-ray and MRI were obtained at each visit [23]. The following were assessed: physical examination, laboratory tests, back pain score (visual analog scale, VAS), Oswestry disability index (ODI), and the Macnab criteria (as proposed by Macnab) [22]. After the operation, each patient was asked to identify which of four levels corresponded to their condition: excellent, good, fair, or poor. No pain and no restriction of activity is excellent; occasional back or leg pain is good; intermittent pain affecting work and life is fair; and no improvement, or further operative intervention required, is poor.
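The four Macnab levels quoted above form a simple ordered rule set, checked from worst to best. A minimal sketch of that mapping in Python (our own encoding for illustration; the study graded patients clinically, not with software):

```python
def macnab_grade(pain_free: bool, occasional_pain: bool,
                 pain_affects_daily_life: bool,
                 needs_reoperation: bool) -> str:
    """Assign a Macnab outcome grade from the criteria listed above."""
    if needs_reoperation:
        return "poor"
    if pain_affects_daily_life:
        return "fair"
    if occasional_pain:
        return "good"
    if pain_free:
        return "excellent"
    return "poor"  # no improvement reported
```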
Statistical methods
SPSS 23.0 was used for statistical analysis. Overall summary statistics were calculated as means ± SD for continuous variables. The t-test was used for measurement data. All statistical tests were two-sided, with P < 0.05 as the significance threshold.
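For readers reproducing this analysis outside SPSS, the paired comparison of pre- and post-treatment scores can be run with standard tools. A minimal sketch using SciPy on made-up paired VAS values (illustrative only; these are not the study's data):

```python
from scipy import stats

# Hypothetical paired VAS scores (pre-treatment vs. post-operation),
# invented for illustration; not the study's raw data.
vas_pre = [9, 8, 10, 7, 9, 8, 9, 10, 6, 9]
vas_post = [5, 4, 5, 3, 5, 4, 4, 5, 3, 4]

t_stat, p_value = stats.ttest_rel(vas_pre, vas_post)
print(f"paired t = {t_stat:.2f}, P = {p_value:.4f}")
# A two-sided P value below 0.05 is taken as significant,
# matching the threshold stated above.
```

The study itself performed these comparisons in SPSS 23.0, as stated above.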
The following factors were assessed: physical examination, laboratory tests, back pain score (visual analog scale, VAS), Oswestry disability index(ODI), and Macnab criteria(as proposed by Macnab I) [22]. After the operation, the patient was asked to identify which one of the four levels corresponded to their condition: excellent, good, fair, poor. No pain and no restriction of activity are excellent; occasional back or leg pain is good; intermittent pain affecting work and life is fair; no improvement or further operative intervention required is poor. Statistical methods: SPSS 23.0 was used for statistical analysis. Overall summary statistics were calculated in terms of means ± SD for continuous variables. In this study, t-test was used for measurement data. All statistical tests were bilateral, with P < 0.05 as the significance standard. Results: Demographic data The 23 enrolled patients included 9 patients with Gram-positive (+), 6 patients with Gram-negative (-), 7 patients with Fungi, and 1 with mixed infection (Tables 1 and 2). Of the 23 patients, the maximum number of infected levels was 6 and the average number of infected levels was 3.2. Table 1Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatmentCultured pathogensNumberGram-positive (+)9S. aureus4S. epidermidis3S. viridans1MRSA1Gram-negative (-)6E. coli3E. faecalis2Brucella1Fungi7Aspergillus fumigatus5Aspergillus flavus2Mixed infection1Aspergillus fumigatus and S. aureusTotal Number123Table 2Demographic data of seven patients with postoperative spinal infectionNOOperations before MISInternal fixation(Y/N)Fixation removed(Y/N)Cultured bacteriaCase 1open lumbar surgery + debridement and internal fixationYNAspergillus fumigatusCase 2Open internal fixation of lumbar spineYNAspergillus flavusCase 3Open Operation of malignant schwannomaN-E. coliCase 4Open internal fixation of lumbar spineN-S.epidermidisCase 5Open internal fixation of lumbar spineYNS. aureusCase 6Unidentified Lumbar infection + open debridement and internal fixationYNAspergillus fumigatusCase 7Open internal fixation of lumbar spineYNS. aureus Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatment Aspergillus fumigatus and S. aureus Total Number 1 23 Demographic data of seven patients with postoperative spinal infection At admission, all but one patient presented with a fever of more than 38.5 °C. All patients presented with ESR of more than 20 mm/hr (range, 50 to 115 mm/hr). The elevated CRP ranged from 14.9 mg/L to 30.6 mg/L with an average of 23 mg/L (Table 3). The white blood cell count was elevated above normal in only one case. Table 3The results of the outcome measures for Clinical and laboratory indicatorsLaboratory testsPre-Treatment(at admission)Post-Operation(1w)Post- Extubation(1 m)ESR (mm/hr)84.5 (50–115)23.4 (15–38)13.2 (6–19)CRP (mg/L)26.0 (14–30)16.5 (12–21)3.7 (2–10)Functional resultsVisual analog scale8.6 (6–10)4.5 (3–5)2.3 (1–3)Oswestry disability index69.3(56–89)34.2 (25–44)26.3 (13–42) The results of the outcome measures for Clinical and laboratory indicators In this study, 7 patients with spondylitis were infected after the operation, and 5 of them underwent internal fixation implantation. All patients received minimally invasive treatment without the removal of internal fixation (Table 2). 
Before coming to our hospital, only 2 patients had a culture report (fungal infection), and both of them underwent open debridement surgery in other hospitals, and then transferred to our hospital. The 23 enrolled patients included 9 patients with Gram-positive (+), 6 patients with Gram-negative (-), 7 patients with Fungi, and 1 with mixed infection (Tables 1 and 2). Of the 23 patients, the maximum number of infected levels was 6 and the average number of infected levels was 3.2. Table 1Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatmentCultured pathogensNumberGram-positive (+)9S. aureus4S. epidermidis3S. viridans1MRSA1Gram-negative (-)6E. coli3E. faecalis2Brucella1Fungi7Aspergillus fumigatus5Aspergillus flavus2Mixed infection1Aspergillus fumigatus and S. aureusTotal Number123Table 2Demographic data of seven patients with postoperative spinal infectionNOOperations before MISInternal fixation(Y/N)Fixation removed(Y/N)Cultured bacteriaCase 1open lumbar surgery + debridement and internal fixationYNAspergillus fumigatusCase 2Open internal fixation of lumbar spineYNAspergillus flavusCase 3Open Operation of malignant schwannomaN-E. coliCase 4Open internal fixation of lumbar spineN-S.epidermidisCase 5Open internal fixation of lumbar spineYNS. aureusCase 6Unidentified Lumbar infection + open debridement and internal fixationYNAspergillus fumigatusCase 7Open internal fixation of lumbar spineYNS. aureus Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatment Aspergillus fumigatus and S. aureus Total Number 1 23 Demographic data of seven patients with postoperative spinal infection At admission, all but one patient presented with a fever of more than 38.5 °C. All patients presented with ESR of more than 20 mm/hr (range, 50 to 115 mm/hr). The elevated CRP ranged from 14.9 mg/L to 30.6 mg/L with an average of 23 mg/L (Table 3). The white blood cell count was elevated above normal in only one case. Table 3The results of the outcome measures for Clinical and laboratory indicatorsLaboratory testsPre-Treatment(at admission)Post-Operation(1w)Post- Extubation(1 m)ESR (mm/hr)84.5 (50–115)23.4 (15–38)13.2 (6–19)CRP (mg/L)26.0 (14–30)16.5 (12–21)3.7 (2–10)Functional resultsVisual analog scale8.6 (6–10)4.5 (3–5)2.3 (1–3)Oswestry disability index69.3(56–89)34.2 (25–44)26.3 (13–42) The results of the outcome measures for Clinical and laboratory indicators In this study, 7 patients with spondylitis were infected after the operation, and 5 of them underwent internal fixation implantation. All patients received minimally invasive treatment without the removal of internal fixation (Table 2). Before coming to our hospital, only 2 patients had a culture report (fungal infection), and both of them underwent open debridement surgery in other hospitals, and then transferred to our hospital. Radiologic findings In all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplate and varying degree of vertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segment during follow-up, but two patients with fungal spinal infection (Case1 Fig. 2/3) (Case2 Fig. 4/5) progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed by plain radiography, and MRI showed varying degrees of spontaneous fusion [24]. Fig. 
1A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyedFig. 2Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h)Fig. 3The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discsFig. 4Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h)Fig. 5The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyed Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h) The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). 
At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discs Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h) The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved In all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplate and varying degree of vertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segment during follow-up, but two patients with fungal spinal infection (Case1 Fig. 2/3) (Case2 Fig. 4/5) progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed by plain radiography, and MRI showed varying degrees of spontaneous fusion [24]. Fig. 1A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyedFig. 2Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h)Fig. 3The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). 
At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discsFig. 4Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h)Fig. 5The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyed Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h) The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discs Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h) The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. 
MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved Clinical outcomes Before surgery, all patients had significant back pain. There were 2 patients with radiating lower limb pain but no patients had muscle weakness, bowel or bladder dysfunction preoperatively. The operation time ranged from 30 minutes to 124 minutes for every spinal level with an average of 48 minutes. Intraoperative hemorrhage was minimal. In this patient population, the average delay of the diagnosis[25] was thirty-seven days. There were significant differences in VAS and ODI between the pre-treatment and post-operation (P > 0.05). And there were significant differences in VAS and ODI between the pre-treatment and post-extubation (P > 0.05) (Table 3). The average drainage time was 14 days (5–26 days). No serious complications were found in the perioperative period. In one case (Case 6), a drainage tube became detached, and another puncture and catheterization were performed. Before surgery, all patients had significant back pain. There were 2 patients with radiating lower limb pain but no patients had muscle weakness, bowel or bladder dysfunction preoperatively. The operation time ranged from 30 minutes to 124 minutes for every spinal level with an average of 48 minutes. Intraoperative hemorrhage was minimal. In this patient population, the average delay of the diagnosis[25] was thirty-seven days. There were significant differences in VAS and ODI between the pre-treatment and post-operation (P > 0.05). And there were significant differences in VAS and ODI between the pre-treatment and post-extubation (P > 0.05) (Table 3). The average drainage time was 14 days (5–26 days). No serious complications were found in the perioperative period. In one case (Case 6), a drainage tube became detached, and another puncture and catheterization were performed. Follow‐up Postoperative follow-up periods ranged from 1 year to 6.5 years, (mean 3.7 years). One patient (Case 2) was lost to follow-up at postoperative 12 months. One patient (Case 3) died from an abdominal neoplasm at postoperative 12 months. According to the classification system of Macnab [22], one patient (Case 2) had a good outcome at final follow-up, and the remainder were excellent. Postoperative follow-up periods ranged from 1 year to 6.5 years, (mean 3.7 years). One patient (Case 2) was lost to follow-up at postoperative 12 months. One patient (Case 3) died from an abdominal neoplasm at postoperative 12 months. According to the classification system of Macnab [22], one patient (Case 2) had a good outcome at final follow-up, and the remainder were excellent. Demographic data: The 23 enrolled patients included 9 patients with Gram-positive (+), 6 patients with Gram-negative (-), 7 patients with Fungi, and 1 with mixed infection (Tables 1 and 2). Of the 23 patients, the maximum number of infected levels was 6 and the average number of infected levels was 3.2. Table 1Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatmentCultured pathogensNumberGram-positive (+)9S. aureus4S. epidermidis3S. viridans1MRSA1Gram-negative (-)6E. coli3E. faecalis2Brucella1Fungi7Aspergillus fumigatus5Aspergillus flavus2Mixed infection1Aspergillus fumigatus and S. 
aureusTotal Number123Table 2Demographic data of seven patients with postoperative spinal infectionNOOperations before MISInternal fixation(Y/N)Fixation removed(Y/N)Cultured bacteriaCase 1open lumbar surgery + debridement and internal fixationYNAspergillus fumigatusCase 2Open internal fixation of lumbar spineYNAspergillus flavusCase 3Open Operation of malignant schwannomaN-E. coliCase 4Open internal fixation of lumbar spineN-S.epidermidisCase 5Open internal fixation of lumbar spineYNS. aureusCase 6Unidentified Lumbar infection + open debridement and internal fixationYNAspergillus fumigatusCase 7Open internal fixation of lumbar spineYNS. aureus Cultured pathogens in 23 patients received minimally invasive debridement and drainage treatment Aspergillus fumigatus and S. aureus Total Number 1 23 Demographic data of seven patients with postoperative spinal infection At admission, all but one patient presented with a fever of more than 38.5 °C. All patients presented with ESR of more than 20 mm/hr (range, 50 to 115 mm/hr). The elevated CRP ranged from 14.9 mg/L to 30.6 mg/L with an average of 23 mg/L (Table 3). The white blood cell count was elevated above normal in only one case. Table 3The results of the outcome measures for Clinical and laboratory indicatorsLaboratory testsPre-Treatment(at admission)Post-Operation(1w)Post- Extubation(1 m)ESR (mm/hr)84.5 (50–115)23.4 (15–38)13.2 (6–19)CRP (mg/L)26.0 (14–30)16.5 (12–21)3.7 (2–10)Functional resultsVisual analog scale8.6 (6–10)4.5 (3–5)2.3 (1–3)Oswestry disability index69.3(56–89)34.2 (25–44)26.3 (13–42) The results of the outcome measures for Clinical and laboratory indicators In this study, 7 patients with spondylitis were infected after the operation, and 5 of them underwent internal fixation implantation. All patients received minimally invasive treatment without the removal of internal fixation (Table 2). Before coming to our hospital, only 2 patients had a culture report (fungal infection), and both of them underwent open debridement surgery in other hospitals, and then transferred to our hospital. Radiologic findings: In all patients, X-ray examination showed no signs of spinal instability; CT showed destruction of the vertebral endplate and varying degree of vertebral space collapse (Fig. 1). MRI revealed no compression of the spinal cord. There was no recurrence in the treated vertebral segment during follow-up, but two patients with fungal spinal infection (Case1 Fig. 2/3) (Case2 Fig. 4/5) progressed to involve adjacent segments. During postoperative follow-up, no deformities such as scoliosis or kyphosis were observed by plain radiography, and MRI showed varying degrees of spontaneous fusion [24]. Fig. 1A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyedFig. 2Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h)Fig. 
3The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discsFig. 4Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h)Fig. 5The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved A 56-year-old man was diagnosed as having T9-12/L2-3/L4-5 fungal spondylodiscitis. The CT scan showed the endplates were grossly destroyed Sequential radiologic findings in a previously healthy 40-year-old man (case 1). Prior to primary lumbar discectomy, sagittal T2-weighted MRI showed a herniated disk at L3/4 (a). The original single-level infection was detected by T1-weighted MRI, approximately 2 weeks after primary discectomy (b). One month later, T1- and T2-weighted MRI showed the infection had progressed to L3/4 (c) and (d). Lesion debridement + bone graft fusion + pedicle screw fixation were performed in another hospital(e) and (f). CT showed that the infection had progressed to the adjacent L2/3 and L4/5 levels, with endplate destruction 2 months after fixation surgery(g) and (h) The same 40-year-old female as shown in Fig. 2. Samples confirmed a fungal spondylodiscitis (case 1) using intraoperative CT-Guide(a). At five months after the minimally invasive debridement and irrigation, MRI showed that the L2-5 infections had resolved but the infection had progressed to the L1-2、T12-L1 and T11-12 levels (b and c). Postoperative minimally invasive debridement and irrigation picture(d). At the 48-month follow-up, MRI (e) and (f) showed all the infections had resolved and no further progression was observed; T1-weighted image showed increased signal with the extensive additional enhancement of the fat signals of vertebral bodies. The endplates were grossly deformed. T1 and T2-weighted show a narrow line at the site of the discs Sequential radiologic findings in a previously healthy 49-year-old woman before admission to our institution (case 2). X-ray showed L4/5 internal fixation and fusion surgery (a and b). 
The original single-level infection(L4/5) was detected by MRI, approximately 40 days after the primary operation (c and d). Three months later, T1- and T2-weighted MRI showed the infection had progressed to L2/3 and L3/4 (e) and (f). Two months later, T1- and T2-weighted MRI showed the infection had progressed to L1/2 (g) and (h) The same 49-year-old female as shown in Fig. 4 (case 2). Lesion debridement + fixation removal was performed. MRI in our institution showed that the infection had progressed to the L1/L2 level, with endplate destruction six weeks after fixation removal (a) and (b). Clinical photograph of postoperative percutaneous debridement and irrigation (c). At 12-month follow-up, MRI T1(d) and T2(e) showed all infection had resolved Clinical outcomes: Before surgery, all patients had significant back pain. There were 2 patients with radiating lower limb pain but no patients had muscle weakness, bowel or bladder dysfunction preoperatively. The operation time ranged from 30 minutes to 124 minutes for every spinal level with an average of 48 minutes. Intraoperative hemorrhage was minimal. In this patient population, the average delay of the diagnosis[25] was thirty-seven days. There were significant differences in VAS and ODI between the pre-treatment and post-operation (P > 0.05). And there were significant differences in VAS and ODI between the pre-treatment and post-extubation (P > 0.05) (Table 3). The average drainage time was 14 days (5–26 days). No serious complications were found in the perioperative period. In one case (Case 6), a drainage tube became detached, and another puncture and catheterization were performed. Follow‐up: Postoperative follow-up periods ranged from 1 year to 6.5 years, (mean 3.7 years). One patient (Case 2) was lost to follow-up at postoperative 12 months. One patient (Case 3) died from an abdominal neoplasm at postoperative 12 months. According to the classification system of Macnab [22], one patient (Case 2) had a good outcome at final follow-up, and the remainder were excellent. Discussion: Spondylodiscitis is a term encompassing vertebral osteomyelitis, spondylitis, and discitis, the incidence of which is increasing due to an increase in the susceptible population and improved diagnostics [26, 27]. The etiology of spondylodiscitis usually is monobacterial infection and in Europe,more than 50 % of cases are caused by Staphylococcus aureus [5, 28]. Fungal discitis is rare and to date, only case reports exist [26]. Fungal discitis is usually due to molds [29–32] and Candida species [33–38], which are consistent with our results. Clinical presentation of spondylodiscitis especially early stage may be nonspecific. Refractory and unremitting back pain often requiring narcotic pain control is the most common clinical presentation, with fever and neurologic deficiency less frequently encountered [1, 39]. The clinical symptoms appear at an average of six weeks after the primary procedure [1]. Elevated ESR and CRP are extremely sensitive, but WBC count might be within the normal range [40]. In our series, CRP and ESR values were increased on admission in all patients, whereas white blood count was increased in only one patient. MRI is the gold standard in imaging studies to detect spondylodiscitis [5, 41]. MRI exhibits high specificity and sensitivity,which are extremely high at 96 % and 92 % respectively [23, 42, 43]. Therefore, we routinely performed an MRI during the follow-up. 
Discussion: Spondylodiscitis is a term encompassing vertebral osteomyelitis, spondylitis, and discitis; its incidence is increasing owing to a growing susceptible population and improved diagnostics [26, 27]. Spondylodiscitis is usually a monobacterial infection, and in Europe more than 50% of cases are caused by Staphylococcus aureus [5, 28]. Fungal discitis is rare, and to date only case reports exist [26]. It is usually due to molds [29-32] and Candida species [33-38], which is consistent with our results. The clinical presentation of spondylodiscitis, especially in the early stage, may be nonspecific. Refractory and unremitting back pain, often requiring narcotic pain control, is the most common clinical presentation, with fever and neurologic deficit less frequently encountered [1, 39]. Clinical symptoms appear at an average of six weeks after the primary procedure [1]. Elevated ESR and CRP are extremely sensitive, but the WBC count may be within the normal range [40]. In our series, CRP and ESR values were increased on admission in all patients, whereas the white blood cell count was increased in only one patient. MRI is the gold standard imaging study for detecting spondylodiscitis [5, 41], with specificity and sensitivity as high as 96% and 92%, respectively [23, 42, 43]. Therefore, we routinely performed MRI during follow-up. Treatment of spondylodiscitis, especially fungal infection, is often delayed because fungal organisms are slow-growing and difficult to identify by culture [25]. CT-guided needle biopsy has been recommended for isolating causative pathogens [44-46]; however, the aspirate is often inadequate. Percutaneous endoscopic aspiration has been reported to have a high accuracy rate in detecting causative organisms [47]. In this study, the causative organisms were extracted through a cannula less than 1 cm in diameter using discectomy forceps. Because our procedure is CT-guided, the discectomy forceps were able to access the center of the lesion. Although the radiation dose of CT is higher than that of a C-arm, CT-Guide is accurate and convenient for precise catheterization. We also consider that careful CT guidance provides a potential benefit for theatre users by reducing radiation exposure compared with fluoroscopically assisted spinal surgery [48, 49]. Multilevel spondylodiscitis after surgery is an intractable and troublesome complication that may not be resolved by simple surgical debridement. Major open surgery has important drawbacks in patients with multilevel infection: the extensive destruction and mechanical instability of the affected segments following this type of surgery may be associated with significant rates of perioperative complications and mortality [1, 50]. Minimally invasive endoscopic debridement with dilute betadine irrigation is an effective alternative to extensive open surgery for single-level infectious spondylodiscitis, but its effectiveness for extensive destruction of vertebral bodies and multilevel refractory infection may be limited, because thorough endoscopic debridement of synchronous multiple lesions is difficult and can exhaust both patient and surgeon [14, 15, 51, 52]. Minimally invasive drainage and continuous irrigation with local administration of antibiotic agents, including minimally invasive suction aspiration, have been found to be effective in patients with spondylodiscitis [53, 54]. Our method has several advantages: (1) continuous drainage dilutes the density of pathogens, which reduces pathogenic capacity (including the invasiveness of pathogens, exotoxins, and endotoxins); (2) minimally invasive implantation of the drainage tube does not cause major surgical trauma and is conducive to patient rehabilitation; (3) continuous perfusion does not destroy the body's protective immune response; and (4) formation of hematoma as a culture medium is inhibited [12]. In our series, two patients with fungal infection showed progressive disease spreading to adjacent segments. A previous study has reported this unique pathological feature of spondylodiscitis, but the exact reason remains unclear [39]. It may depend on the premorbid general condition of the patient (malnutrition, immune suppression, malignancy), the type of fungal species, and delay in treatment [55]. Progressive disease may occur either above or below the lesion. Adjacent-segment preventive catheterization was performed in one patient and achieved its purpose, but its reliability needs further study. This study has several limitations. First, we were able to collect only 23 cases; firmer conclusions await larger studies.
Second, because our study was a retrospective, uncontrolled study of various disc infections, it is difficult to evaluate the efficacy of surgery independent of antimicrobial therapy. The feasibility and benefits of minimally invasive debridement and irrigation for the treatment of multilevel spondylitis need to be further evaluated in larger series as part of a prospective controlled study. Third, in view of the particularities of tuberculosis treatment, patients with tuberculous spondylitis were not included in this study. In addition, this technique cannot be used for decompression or deformity correction; for patients with neurological compromise or spinal deformity, open surgery would be required. However, in our cohort, enrolled patients had similar clinical features of multilevel spondylodiscitis, and all operations were performed by the same experienced surgeon (XFZ). Conclusions: On the basis of these study findings, we believe that minimally invasive debridement and irrigation using intraoperative CT-Guide is an effective method for the treatment of multilevel advanced spondylodiscitis.
Background: Spondylodiscitis is an unusual infectious disease, which usually originates as a pathogenic infection of intervertebral discs and then spreads to neighboring vertebral bodies. The objective of this study is to evaluate percutaneous debridement and drainage using intraoperative CT-Guide in multilevel spondylodiscitis. Methods: From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with minimally invasive debridement and drainage procedures in our department. The clinical manifestations, evolution, and minimally invasive debridement and drainage treatment of this refractory vertebral infection were investigated. Results: Among the enrolled patients, the operation time ranged from 30 minutes to 124 minutes per level, with an average of 48 minutes. Intraoperative hemorrhage was minimal. The postoperative follow-up period ranged from 12 months to 6.5 years, with an average of 3.7 years. There was no reactivation of infection in the treated vertebral segments during follow-up, but in two patients fungal spinal infection continued to progress by affecting adjacent segments prior to final resolution. According to the classification system of Macnab, one patient had a good outcome at the final follow-up, and the rest were excellent. Conclusions: Minimally invasive percutaneous debridement and irrigation using intraoperative CT-Guide is an effective method for the treatment of multilevel spondylodiscitis.
Background: Spondylodiscitis is an unusual infectious disease, which usually originates as a pathogenic infection of intervertebral discs and then spreads to neighboring vertebral bodies [1]. Mortality is around 2-3% [2]. Its incidence varies between 0.2% and 3.6% after spine surgery [1, 3, 4]. There is no uniform standard for the treatment of spondylodiscitis. Conservative therapy, including bracing and appropriate antibiotics, is generally adequate for patients in whom mild infection is detected early [5]. However, delayed diagnosis and treatment of spondylitis are common because of its variable early clinical manifestations and indolent course, which may lead to the failure of conservative treatment [5]. Surgical treatment is reserved for patients with failed conservative therapy, intractable spinal pain, large epidural abscesses, or extensive vertebral body destruction [6]. The major purposes of surgical intervention in spondylodiscitis are to remove the infected tissues, relieve spinal pain, rebuild spinal stability, and improve limb dysfunction. Open surgery was advocated in the past [7-11]. However, whether by the anterior or posterior approach, open surgery carries serious complications such as nerve or vessel injuries due to extensive anatomical dissection and destruction of the stable spinal structure [12, 13]. Recent research favors minimally invasive surgery (MIS) [7, 14, 15]. Existing studies have focused on single-level or early-stage infectious spondylodiscitis, and good clinical results have been reported after percutaneous debridement and drainage [14, 16, 17]. However, there are few reports on the management of advanced multilevel infections. These are difficult to treat using open or endoscopic surgery in current clinical practice because of the mechanical instability of the affected multilevel segments caused by widespread destruction due to the disease process [14, 18-21]. To our knowledge, this is the first report of treating multilevel spondylodiscitis with MIS. MIS minimizes damage to the stable structures of the posterior spine and the paraspinal soft tissues. However, it is difficult to identify anatomic landmarks in MIS, which may lead to severe complications. Therefore, to increase the accuracy of debridement and drainage, CT-Guide was used during the operation. The purpose of this study was to evaluate the clinical effect of percutaneous debridement and drainage using intraoperative CT-Guide in the treatment of multilevel spondylodiscitis. Considering the particularity of tuberculous spondylodiscitis, it was not included in this study. Conclusions: On the basis of these study findings, we believe that minimally invasive debridement and irrigation using intraoperative CT-Guide is an effective method for the treatment of multilevel advanced spondylodiscitis.
Background: Spondylodiscitis is an unusual infectious disease, which usually originates as a pathogenic infection of intervertebral discs and then spreads to neighboring vertebral bodies. The objective of this study is to evaluate percutaneous debridement and drainage using intraoperative CT-Guide in multilevel spondylodiscitis. Methods: From January 2002 to May 2017, 23 patients with multilevel spondylodiscitis were treated with minimally invasive debridement and drainage procedures in our department. The clinical manifestations, evolution, and minimally invasive debridement and drainage treatment of this refractory vertebral infection were investigated. Results: Among the enrolled patients, the operation time ranged from 30 minutes to 124 minutes per level, with an average of 48 minutes. Intraoperative hemorrhage was minimal. The postoperative follow-up period ranged from 12 months to 6.5 years, with an average of 3.7 years. There was no reactivation of infection in the treated vertebral segments during follow-up, but in two patients fungal spinal infection continued to progress by affecting adjacent segments prior to final resolution. According to the classification system of Macnab, one patient had a good outcome at the final follow-up, and the rest were excellent. Conclusions: Minimally invasive percutaneous debridement and irrigation using intraoperative CT-Guide is an effective method for the treatment of multilevel spondylodiscitis.
9,613
245
[ 457, 1491, 139, 388, 154, 55, 459, 1215, 177, 89 ]
13
[ "patients", "showed", "infection", "mri", "debridement", "fixation", "case", "t1", "weighted", "progressed" ]
[ "spondylodiscitis effectiveness", "spondylodiscitis operations performed", "standard treatment spondylodiscitis", "treatment spondylodiscitis conservative", "surgical intervention spondylodiscitis" ]
null
[CONTENT] Spondylodiscitis | Minimally invasive surgery | Drainage [SUMMARY]
null
[CONTENT] Spondylodiscitis | Minimally invasive surgery | Drainage [SUMMARY]
[CONTENT] Spondylodiscitis | Minimally invasive surgery | Drainage [SUMMARY]
[CONTENT] Spondylodiscitis | Minimally invasive surgery | Drainage [SUMMARY]
[CONTENT] Spondylodiscitis | Minimally invasive surgery | Drainage [SUMMARY]
[CONTENT] Debridement | Discitis | Drainage | Follow-Up Studies | Humans | Lumbar Vertebrae | Minimally Invasive Surgical Procedures | Retrospective Studies | Spinal Fusion | Tomography, X-Ray Computed | Treatment Outcome [SUMMARY]
null
[CONTENT] Debridement | Discitis | Drainage | Follow-Up Studies | Humans | Lumbar Vertebrae | Minimally Invasive Surgical Procedures | Retrospective Studies | Spinal Fusion | Tomography, X-Ray Computed | Treatment Outcome [SUMMARY]
[CONTENT] Debridement | Discitis | Drainage | Follow-Up Studies | Humans | Lumbar Vertebrae | Minimally Invasive Surgical Procedures | Retrospective Studies | Spinal Fusion | Tomography, X-Ray Computed | Treatment Outcome [SUMMARY]
[CONTENT] Debridement | Discitis | Drainage | Follow-Up Studies | Humans | Lumbar Vertebrae | Minimally Invasive Surgical Procedures | Retrospective Studies | Spinal Fusion | Tomography, X-Ray Computed | Treatment Outcome [SUMMARY]
[CONTENT] Debridement | Discitis | Drainage | Follow-Up Studies | Humans | Lumbar Vertebrae | Minimally Invasive Surgical Procedures | Retrospective Studies | Spinal Fusion | Tomography, X-Ray Computed | Treatment Outcome [SUMMARY]
[CONTENT] spondylodiscitis effectiveness | spondylodiscitis operations performed | standard treatment spondylodiscitis | treatment spondylodiscitis conservative | surgical intervention spondylodiscitis [SUMMARY]
null
[CONTENT] spondylodiscitis effectiveness | spondylodiscitis operations performed | standard treatment spondylodiscitis | treatment spondylodiscitis conservative | surgical intervention spondylodiscitis [SUMMARY]
[CONTENT] spondylodiscitis effectiveness | spondylodiscitis operations performed | standard treatment spondylodiscitis | treatment spondylodiscitis conservative | surgical intervention spondylodiscitis [SUMMARY]
[CONTENT] spondylodiscitis effectiveness | spondylodiscitis operations performed | standard treatment spondylodiscitis | treatment spondylodiscitis conservative | surgical intervention spondylodiscitis [SUMMARY]
[CONTENT] spondylodiscitis effectiveness | spondylodiscitis operations performed | standard treatment spondylodiscitis | treatment spondylodiscitis conservative | surgical intervention spondylodiscitis [SUMMARY]
[CONTENT] patients | showed | infection | mri | debridement | fixation | case | t1 | weighted | progressed [SUMMARY]
null
[CONTENT] patients | showed | infection | mri | debridement | fixation | case | t1 | weighted | progressed [SUMMARY]
[CONTENT] patients | showed | infection | mri | debridement | fixation | case | t1 | weighted | progressed [SUMMARY]
[CONTENT] patients | showed | infection | mri | debridement | fixation | case | t1 | weighted | progressed [SUMMARY]
[CONTENT] patients | showed | infection | mri | debridement | fixation | case | t1 | weighted | progressed [SUMMARY]
[CONTENT] mis | spondylodiscitis | conservative | early | multilevel | treatment | surgery | destruction | tissues | conservative therapy including [SUMMARY]
null
[CONTENT] showed | fixation | mri | t1 | weighted | infection | progressed | mri showed | showed infection | infection progressed [SUMMARY]
[CONTENT] invasive | minimally invasive | invasive method treatment multilevel | believe minimally | effective minimally | debridement irrigation intraoperative ct | debridement irrigation intraoperative | ct guide effective minimally | ct guide effective | believe minimally invasive debridement [SUMMARY]
[CONTENT] patients | showed | infection | spondylodiscitis | debridement | mri | fixation | irrigation | case | surgery [SUMMARY]
[CONTENT] patients | showed | infection | spondylodiscitis | debridement | mri | fixation | irrigation | case | surgery [SUMMARY]
[CONTENT] Spondylodiscitis ||| CT-Guide [SUMMARY]
null
[CONTENT] 30 minutes to 124 minutes | 48 minutes ||| ||| 12 months | 6.5 years | 3.7 years ||| two ||| Macnab | one [SUMMARY]
[CONTENT] CT-Guide [SUMMARY]
[CONTENT] Spondylodiscitis ||| CT-Guide ||| January 2002 | May 2017 | 23 ||| ||| 30 minutes to 124 minutes | 48 minutes ||| ||| 12 months | 6.5 years | 3.7 years ||| two ||| Macnab | one ||| CT-Guide [SUMMARY]
[CONTENT] Spondylodiscitis ||| CT-Guide ||| January 2002 | May 2017 | 23 ||| ||| 30 minutes to 124 minutes | 48 minutes ||| ||| 12 months | 6.5 years | 3.7 years ||| two ||| Macnab | one ||| CT-Guide [SUMMARY]
Role of aminoglycoside-modifying enzymes and 16S rRNA methylase (ArmA) in resistance of Acinetobacter baumannii clinical isolates against aminoglycosides.
33533819
This study aimed to determine the role of genes encoding aminoglycoside-modifying enzymes (AMEs) and 16S rRNA methylase (ArmA) in Acinetobacter baumannii clinical isolates.
INTRODUCTION
We collected 100 clinical isolates of A. baumannii and identified and confirmed them using microbiological tests and assessment of the OXA-51 gene. Antibiotic susceptibility testing was carried out using disk agar diffusion and micro-broth dilution methods. The presence of AME genes and ArmA was detected by PCR and multiplex PCR.
METHODS
The most and least effective antibiotics in this study were netilmicin and ciprofloxacin with 68% and 100% resistance rates, respectively. According to the minimum inhibitory concentration test, 94% of the isolates were resistant to gentamicin, tobramycin, and streptomycin, while the highest susceptibility (20%) was observed against netilmicin. The proportion of strains harboring the aminoglycoside resistance genes was as follows: APH(3')-VIa (aphA6) (77%), ANT(2")-Ia (aadB) (73%), ANT(3")-Ia (aadA1) (33%), AAC(6')-Ib (aacA4) (33%), ArmA (22%), and AAC(3)-IIa (aacC2) (19%). Among the 22 gene profiles detected in this study, the most prevalent profiles included APH(3')-VIa + ANT(2")-Ia (39 isolates, 100% of which were kanamycin-resistant), and AAC(3)-IIa + AAC(6')-Ib + ANT(3")-Ia + APH(3')-VIa + ANT(2")-Ia (14 isolates, all of which were resistant to gentamicin, kanamycin, and streptomycin).
RESULTS
High minimum inhibitory concentrations of aminoglycosides in isolates simultaneously carrying AME- and ArmA-encoding genes indicated the importance of these genes in resistance to aminoglycosides. Controlling their spread could therefore be effective in the treatment of infections caused by A. baumannii.
CONCLUSIONS
[ "Acinetobacter baumannii", "Aminoglycosides", "Anti-Bacterial Agents", "Bacterial Proteins", "Drug Resistance, Bacterial", "Methyltransferases", "Microbial Sensitivity Tests", "RNA, Ribosomal, 16S" ]
7849326
INTRODUCTION
Acinetobacter baumannii, which lives in soil, water, and various hospital environments, is an important opportunistic pathogen that causes nosocomial infections such as pneumonia, urinary tract infections, intravenous catheter-associated infections, and ventilation-associated infections, particularly in intensive care units 1-4. The ability of this microorganism to persist in the hospital environment and to spread among patients, along with its resistance to several antibiotics, is the main driving force behind large-scale recurrent outbreaks in different countries 5. The major antibiotics used for the treatment of infections caused by this organism are beta-lactams, aminoglycosides, fluoroquinolones, and carbapenems; however, A. baumannii has shown varying rates of resistance against these antimicrobial agents 6-8. These infections are difficult, costly, and sometimes impossible to treat owing to the high capacity of A. baumannii to acquire antibiotic resistance genes and the emergence of multidrug-resistant (MDR) strains 9, 10. Aminoglycosides are among the main drugs used for the treatment of Acinetobacter infections 11; however, the resistance of A. baumannii to these antibiotics has recently increased. The two main mechanisms of resistance to aminoglycosides are alteration of the ribosome structure caused by mutations in the ribosomal 16S rRNA and enzymatic modification 12. Enzymatic alteration of the aminoglycoside molecule at -OH or -NH2 groups by aminoglycoside-modifying enzymes (AMEs) is the most important resistance mechanism 12-14. AMEs are classified into three major groups: aminoglycoside phosphotransferases (APH), aminoglycoside acetyltransferases (AAC), and aminoglycoside nucleotidyltransferases (ANT), also called aminoglycoside adenylyltransferases (AAD) 5, 13. Aminoglycoside acetyltransferases acetylate the -NH2 groups of aminoglycosides at the 1, 3, 2', and 6' positions using acetyl coenzyme A as a donor substrate 15. Aminoglycoside phosphotransferases phosphorylate the hydroxyl groups present in the structure of aminoglycosides at the 4, 6, 9, 3', 2'', 3'', and 7'' positions (seven different groups) with the help of ATP; the largest enzymatic group in this family is the APH(3')-I group 16. The aphA6 gene is widespread among A. baumannii strains, and this enzyme causes resistance to neomycin, amikacin, kanamycin, paromomycin, ribostamycin, butirosin, and isepamicin 17. Aminoglycoside nucleotidyltransferases are classified into 5 groups, and the genes encoding these enzymes can be chromosomal or transferred by plasmids and transposons 12. These enzymes transfer an AMP group from ATP to a hydroxyl group at the 2'', 3'', 4', 6, and 9 positions of the aminoglycoside molecule 13. In addition to AMEs, 16S rRNA methylation by the ArmA enzyme is a novel mechanism that contributes to the high level of aminoglycoside resistance in A. baumannii, as reported in the Far East, Europe, and North America 5. This enzyme can be transferred by class 1 integrons and is often detected in carbapenem-resistant A. baumannii isolates 18. This study aimed to investigate the role of some important aminoglycoside-modifying enzymes and 16S rRNA methylase (ArmA) in the resistance of A. baumannii clinical isolates to aminoglycosides in Sari, in northern Iran.
METHODS
Sample collection and bacterial isolates: This study was performed on A. baumannii isolated from patients admitted to different educational hospitals in Sari, north of Iran, over 6 months (April 2019 to September 2019). The clinical specimens included blood, urine, respiratory secretions (bronchial lavage and tracheal secretions), CSF, and ulcers (surgical and burn wounds). The clinical isolates were identified using conventional microbiological tests 19 and confirmed by polymerase chain reaction (PCR) amplification of the blaOXA-51 gene using specific primers 20; the reaction conditions are shown in Table 1.

TABLE 1: Primers used to amplify the blaOXA-51 and aminoglycoside resistance genes, along with the PCR conditions. All runs used initial denaturation at 94 °C for 2 min; 34 cycles of 94 °C for 25 sec, annealing for 30 sec, and 72 °C for 30 sec; and a final extension at 72 °C for 5 min. All primer pairs are from reference 5.

| Target gene | Primer sequences (5'-3', forward / reverse) | Amplicon size (bp) | Annealing temperature |
| OXA-51 | TAATGCTTTGATCGGCCTTG / TGGATTGCACTTCATCTTGG | 353 | 51 °C |
| APH(3')-VIa (aphA6) | CGGAAACAGCGTTTTAGA / TTCCTTTTGTCAGGTC | 717 | 49 °C |
| AAC(3)-IIa (aacC2) | ATGCATACGCGGAAGGC / TGCTGGCACGATCGGAG | 822 | 54 °C |
| AAC(6')-Ib (aacA4) | TATGAGTGGCTAAATCGAT / CCCGCTTTCTCGTAGCA | 395 | 54 °C |
| ANT(2")-Ia (aadB) | ATCTGCCGCTCTGGAT / CGAGCCTGTAGGACT | 405 | 49 °C |
| ANT(3")-Ia (aadA1) | ATGAGGGAAGCGGTGATCG / TTATTTGCCGACTACCTTGGT | 792 | 62 °C |
| ArmA | ATTCTGCCTATCCTAATTGG / ACCTATACTTTATCGTCGTC | 315 | 49 °C |

Antimicrobial susceptibility testing: The antibiotic susceptibility pattern of the isolates was determined by the disk agar diffusion method on Mueller-Hinton agar (Merck, Germany) according to the Clinical and Laboratory Standards Institute (CLSI) guidelines 21.
The antibiotics included piperacillin (100 µg), piperacillin-tazobactam (100/10 µg), imipenem (10 µg), meropenem (10 µg), doripenem (10 µg), ciprofloxacin (5 µg), levofloxacin (5 µg), trimethoprim-sulfamethoxazole (1.25-23.75 µg), ceftazidime (30 µg), cefotaxime (30 µg), and cefepime (30 µg) (MAST Co., England). The susceptibility of the isolates to aminoglycosides, including kanamycin, amikacin, spectinomycin, netilmicin, gentamicin, streptomycin, and tobramycin, was determined using the micro-broth dilution method according to the CLSI guidelines 21. For interpretation of the minimum inhibitory concentration (MIC) values, we referred to the CLSI guidelines and previous studies 1, 21, 22. Escherichia coli ATCC 25922 and A. baumannii ATCC 19606 were used as control strains for antibiotic susceptibility testing.

DNA extraction, PCR, and multiplex-PCR: DNA was extracted from all A. baumannii isolates grown for 24 h using an alkaline lysis method with sodium dodecyl sulphate (SDS) and NaOH, as previously published 23, with few modifications. In brief, a lysis buffer was first prepared by dissolving 0.5 g of SDS and 0.4 g of NaOH in 200 µL of distilled water. Next, 4-6 colonies of the bacteria were suspended in 60 µL of lysis buffer and heated at 95 °C for 10 min. The suspension was then centrifuged at 13000 rpm for 5 min, and 180 µL of distilled water was added to the microtubes. The obtained supernatant was frozen at -20 °C until use as the template DNA for PCR. Two sets of multiplex-PCR were used to detect AME-encoding genes in A. baumannii isolates using the specific primers shown in Table 1. APH(3')-VIa (aphA6), ANT(2")-Ia (aadB), and ArmA were detected in the first set; AAC(6')-Ib (aacA4) and AAC(3)-IIa (aacC2) were identified in the second set; and the ANT(3")-Ia (aadA1) gene was detected by simplex PCR. The PCR and multiplex-PCR were performed in a final volume of 25 µL containing 12.5 µL of master mix (Ampliqon, Denmark), 10 pmol of each primer (Bioneer, South Korea), and 500 ng of template DNA; the reactions were brought to the final volume with distilled water. The genes were amplified under standard conditions using a thermocycler (Bio-Rad, USA). All reactions were performed over 34 cycles, under the conditions shown in Table 1.
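The expected amplicon sizes in Table 1 are what allow the products of one multiplex reaction to be told apart on a gel. The sketch below illustrates that idea only: the grouping of genes into sets and the sizes follow the text and Table 1, while the helper function and the size tolerance are assumptions, not the authors' code.

# Illustrative only: assign a gel band to a gene by expected amplicon size (bp).
# Gene-to-set grouping and sizes follow Table 1; the tolerance is an assumption.
from typing import Optional

MULTIPLEX_SETS = {
    "set1": {"aphA6": 717, "aadB": 405, "armA": 315},   # first multiplex set
    "set2": {"aacA4": 395, "aacC2": 822},               # second multiplex set
}

def assign_band(set_name: str, observed_bp: int, tolerance: int = 20) -> Optional[str]:
    """Return the gene whose expected product size matches the observed band."""
    for gene, expected in MULTIPLEX_SETS[set_name].items():
        if abs(observed_bp - expected) <= tolerance:
            return gene
    return None

print(assign_band("set1", 320))  # -> 'armA' (315 bp expected)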
Statistical analysis: The data were analyzed using SPSS (version 21). Categorical data were analyzed using Fisher's exact test, and a P-value less than 0.05 was considered statistically significant. In addition, an independent t-test was used to compare the mean age of the subjects.
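For illustration, the same two tests could be run outside SPSS. Below is a minimal SciPy sketch with placeholder counts and ages, not values from this study.

# Minimal sketch of the tests named above, using SciPy instead of SPSS.
# The 2x2 table and age lists are placeholders, not values from this study.
from scipy import stats

# Fisher's exact test: rows = gene present/absent, cols = resistant/susceptible.
table = [[70, 7], [24, 9]]
odds_ratio, p = stats.fisher_exact(table)
print(f"Fisher's exact test: p = {p:.3f}")  # significant if p < 0.05

# Independent t-test comparing mean age between two groups (e.g., men vs. women).
ages_men = [40, 55, 62, 33, 71]
ages_women = [45, 50, 38, 60, 28]
t_stat, p = stats.ttest_ind(ages_men, ages_women)
print(f"independent t-test: p = {p:.3f}")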
RESULTS
Patients, samples, and bacterial isolates: In this study, 100 non-duplicated A. baumannii clinical isolates were collected from 100 patients admitted to the teaching and educational hospitals of Sari, north of Iran. All isolates identified using the phenotypic method contained the blaOXA-51 gene according to the PCR results. The mean age of the patients was 42.08±25.08 years (minimum age: 6 months; highest age: 88 years), and 50% of the patients were male. There was no significant difference between men and women in terms of mean age (p=0.64). Most of the bacterial isolates (34%) were obtained from patients admitted to the burn wards, while 29%, 21%, and 16% of the isolates were collected from the ICU, surgery, and pediatric wards, respectively. The most common type of specimen (73%) was wound samples; 15% and 12% of the remaining isolates were obtained from urine and blood cultures, respectively.

Antimicrobial susceptibility pattern: According to the results of the disk agar diffusion method, the most and least effective antibiotics in the present study were imipenem and ciprofloxacin, with resistance rates of 75% and 100%, respectively (Table 2). Moreover, 94% of the isolates were multidrug-resistant (MDR), and most MDR isolates were collected from wound samples. Table 2 presents the antibiotic resistance patterns of all A. baumannii clinical isolates by hospital ward and by sample type. Resistance to the tested antibiotics was not significantly correlated with either the sample type or the hospital ward where the samples were collected.

TABLE 2: Antimicrobial susceptibility pattern of the Acinetobacter baumannii clinical isolates in the disk agar diffusion method. Values are No. (%) of resistant (R), intermediate resistant (I), or susceptible (S) isolates, by hospital ward and by sample type.

| Antibiotic | Pattern | Total (n=100) | Burn (n=34) | ICU (n=29) | Surgery (n=21) | Pediatric (n=16) | P | Wound (n=73) | Urine (n=15) | Blood (n=12) | P |
| PIP | R | 86 | 28 (82.3) | 28 (96.5) | 19 (90.4) | 11 (68.7) | 0.412 | 62 (84.9) | 14 (93.3) | 10 (83.3) | 0.917 |
| | I | 10 | 5 (14.7) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 8 (10.9) | 1 (6.6) | 1 (8.3) | |
| | S | 4 | 1 (2.9) | 0 | 0 | 3 (18.7) | | 3 (4.1) | 0 | 1 (8.3) | |
| PIP-TAZ | R | 78 | 26 (76.4) | 24 (82.7) | 16 (76.1) | 12 (75) | 0.104 | 54 (73.9) | 14 (93.3) | 10 (83.3) | 0.372 |
| | I | 10 | 4 (11.7) | 2 (6.8) | 3 (14.2) | 1 (6.2) | | 8 (10.9) | 0 | 2 (16.6) | |
| | S | 12 | 4 (11.7) | 3 (10.3) | 2 (9.5) | 3 (18.7) | | 11 (15) | 1 (6.6) | 0 | |
| CAZ | R | 76 | 25 (73.5) | 22 (75.8) | 18 (85.7) | 11 (68.7) | 0.743 | 52 (71.2) | 15 (100) | 9 (75) | 0.559 |
| | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| | S | 24 | 9 (26.4) | 1 (3.4) | 3 (14.2) | 5 (31.2) | | 21 (28.7) | 0 | 3 (25) | |
| CTX | R | 93 | 32 (94.1) | 28 (96.5) | 19 (90.4) | 14 (87.5) | 0.762 | 67 (91.7) | 15 (100) | 11 (91.6) | 0.618 |
| | I | 0 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 6 (8.2) | 0 | 1 (8.3) | |
| | S | 7 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| CEF | R | 92 | 32 (94.1) | 28 (96.5) | 18 (85.7) | 14 (87.5) | 0.448 | 67 (91.7) | 14 (93.3) | 11 (91.6) | 0.728 |
| | I | 4 | 1 (2.9) | 1 (3.4) | 2 (9.5) | 0 | | 2 (2.7) | 1 (6.6) | 1 (8.3) | |
| | S | 4 | 1 (2.9) | 0 | 1 (4.7) | 2 (12.5) | | 4 (5.4) | 0 | 0 | |
| IMI | R | 75 | 27 (79.4) | 25 (86.2) | 17 (80.9) | 10 (62.5) | 0.617 | 55 (75.3) | 12 (80) | 8 (66.6) | 0.873 |
| | I | 11 | 4 (11.7) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 1 (6.6) | 1 (8.3) | |
| | S | 14 | 7 (20.5) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 2 (13.3) | 3 (25) | |
| MER | R | 97 | 33 (97) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.964 | 70 (95.8) | 15 (100) | 12 (100) | 0.667 |
| | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| | S | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 3 (4.1) | 0 | 0 | |
| DOR | R | 96 | 32 (94.1) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.797 | 69 (94.5) | 15 (100) | 12 (100) | 0.913 |
| | I | 2 | 1 (2.9) | 1 (3.4) | 0 | 0 | | 2 (2.7) | 0 | 0 | |
| | S | 2 | 1 (2.9) | 0 | 1 (5) | 0 | | 2 (2.7) | 0 | 0 | |
| CIP | R | 100 | 34 (100) | 29 (100) | 21 (100) | 16 (100) | 0.100 | 73 (100) | 15 (100) | 12 (100) | 0.100 |
| | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| | S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| LEV | R | 93 | 31 (91.1) | 27 (93.1) | 20 (95.2) | 15 (93.7) | 0.725 | 67 (91.7) | 14 (93.3) | 12 (100) | 0.842 |
| | I | 3 | 2 (5.8) | 0 | 0 | 1 (6.2) | | 3 (4.1) | 0 | 0 | |
| | S | 4 | 1 (2.9) | 2 (6.8) | 1 (4.7) | 0 | | 3 (4.1) | 1 (6.6) | 0 | |
| SXT | R | 92 | 31 (91.1) | 27 (93.1) | 18 (85.7) | 16 (100) | 0.935 | 68 (93.1) | 12 (80) | 12 (100) | 0.216 |
| | I | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 1 (1.3) | 2 (13.3) | 0 | |
| | S | 5 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 0 | | 4 (5.4) | 1 (6.6) | 0 | |

PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole.

Moreover, according to the MIC results, the resistance rate against gentamicin, kanamycin, tobramycin, and streptomycin was 94%, while the highest susceptibility (20%) of A. baumannii isolates was observed against netilmicin. In addition, 74%, 68%, and 78% of our clinical isolates were resistant to amikacin, netilmicin, and spectinomycin, respectively. The MIC ranges of aminoglycosides and their relationship with the presence of AME-encoding genes are shown in Table 3.

TABLE 3: Aminoglycoside resistance pattern of the Acinetobacter baumannii clinical isolates in this study. Cells give the number of isolates carrying each resistance gene within each MIC range; the Total column gives the overall number of susceptible (S), intermediate resistant (I), and resistant (R) isolates (n=100). MIC50/MIC90: minimum inhibitory concentration required to inhibit the growth of 50%/90% of organisms. Gene columns: aphA6 = APH(3')-VIa; aadB = ANT(2")-Ia; aadA1 = ANT(3")-Ia; aacA4 = AAC(6')-Ib; aacC2 = AAC(3)-IIa.

| Antibiotic | MIC range (µg/mL) | Pattern | aphA6 (n=77) | aadB (n=73) | aadA1 (n=33) | aacA4 (n=33) | aacC2 (n=19) | armA (n=22) | Total |
| Gentamicin | ≤4 | S | - | - | - | - | - | - | 6 |
| | 8 | I | 5 | 3 | 1 | 1 | - | 1 | - |
| | 16-32 | R | - | - | - | - | - | - | 94 |
| | 64-128 | R | 5 | - | - | - | - | - | |
| | ≥256 | R | 67 | 70 | 32 | 32 | 19 | 21 | |
| | MIC50 | | 512 | 256 | 512 | 512 | 256 | 1024 | 256 |
| | MIC90 | | 1024 | 256 | 1024 | 256 | 256 | 1024 | 512 |
| Tobramycin | ≤4 | S | 3 | 3 | - | - | - | 1 | 4 |
| | 8 | I | 1 | - | - | - | - | 2 | 2 |
| | 16-32 | R | 2 | - | 1 | - | - | - | 94 |
| | 64-128 | R | - | 2 | 1 | 4 | 2 | 3 | |
| | ≥256 | R | 71 | 68 | 31 | 29 | 17 | 16 | |
| | MIC50 | | 256 | 512 | 512 | 256 | 512 | 512 | 512 |
| | MIC90 | | 1024 | 1024 | 1024 | 512 | 1024 | 512 | 1024 |
| Amikacin | ≤16 | S | 13 | 10 | 6 | 6 | 2 | 3 | 16 |
| | 32 | I | 7 | 10 | 6 | 6 | 5 | 1 | 10 |
| | 64 | R | - | 1 | - | - | - | - | 74 |
| | 128 | R | - | 4 | - | - | - | - | |
| | ≥256 | R | 57 | 48 | 21 | 21 | 12 | 18 | |
| | MIC50 | | 256 | 1024 | 512 | 512 | 512 | 256 | 512 |
| | MIC90 | | 512 | 512 | 512 | 1024 | 512 | 256 | 1024 |
| Netilmicin | ≤8 | S | 8 | 8 | 3 | - | 1 | 2 | 20 |
| | 16 | I | 6 | - | - | 4 | - | 6 | 12 |
| | 32 | R | 8 | 10 | 4 | 6 | 4 | 2 | 68 |
| | 64-128 | R | 5 | 7 | 1 | 3 | 2 | 4 | |
| | ≥256 | R | 50 | 48 | 25 | 20 | 12 | 8 | |
| | MIC50 | | 256 | 128 | 64 | 256 | 256 | 128 | 128 |
| | MIC90 | | 256 | 128 | 64 | 512 | 512 | 128 | 256 |
| Kanamycin | ≤16 | S | 4 | 2 | - | - | - | - | 4 |
| | 32 | I | 2 | 2 | - | - | - | - | 2 |
| | 64 | R | - | 3 | - | 1 | - | - | 94 |
| | 128 | R | 3 | 2 | 2 | - | 2 | 1 | |
| | ≥256 | R | 68 | 64 | 31 | 32 | 17 | 21 | |
| | MIC50 | | 64 | 32 | 128 | 64 | 256 | 128 | 128 |
| | MIC90 | | 64 | 256 | 256 | 64 | 256 | 256 | 256 |
| Streptomycin | ≤4 | S | 1 | - | 1 | 1 | - | - | 6 |
| | 8 | S | 1 | 1 | - | 1 | - | - | |
| | 16 | S | - | - | - | - | - | - | |
| | 32 | I | - | - | - | - | - | - | - |
| | 64-128 | R | 3 | 3 | - | 3 | 2 | - | 94 |
| | ≥256 | R | 72 | 69 | 32 | 28 | 17 | 18 | |
| | MIC50 | | 256 | 128 | 256 | 256 | 256 | 512 | 256 |
| | MIC90 | | 256 | 128 | 512 | 1024 | 128 | 512 | 512 |
| Spectinomycin | ≤4 | S | 1 | 1 | - | - | - | 3 | 12 |
| | 8 | S | 7 | 5 | 2 | 2 | - | - | |
| | 16 | S | - | - | - | - | - | - | |
| | 32 | I | 8 | 3 | - | 1 | - | 5 | 10 |
| | 64-128 | R | 2 | 1 | - | 2 | 2 | 1 | 78 |
| | ≥256 | R | 59 | 63 | 31 | 28 | 17 | 13 | |
| | MIC50 | | 256 | 256 | 512 | 64 | 64 | 256 | 256 |
| | MIC90 | | 256 | 512 | 256 | 128 | 128 | 256 | 512 |
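MIC50 and MIC90 in Table 3 are rank statistics over the per-isolate MICs: the lowest concentration inhibiting at least 50% and 90% of isolates, respectively. The following is a small sketch of how they can be computed; the MIC list is a placeholder, and the helper is an illustration rather than the authors' code.

# Compute MIC50/MIC90: the lowest MIC inhibiting at least 50%/90% of isolates.
def mic_percentile(mics, pct):
    """Return the MIC at or below which at least `pct` percent of isolates fall."""
    ordered = sorted(mics)
    rank = -(-len(ordered) * pct // 100)  # ceiling of pct% of n (1-based rank)
    return ordered[rank - 1]

mics = [4, 8, 64, 128, 256, 256, 512, 512, 1024, 1024]  # placeholder MICs (µg/mL)
print(mic_percentile(mics, 50), mic_percentile(mics, 90))  # -> 256 1024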
Gene profiles of the isolates: The frequency of each aminoglycoside resistance gene and its relation to the MIC ranges are shown in Table 3. In total, the proportions of aminoglycoside resistance genes among our clinical isolates of A. baumannii were as follows: APH(3')-VIa (aphA6) (77%), ANT(2")-Ia (aadB) (73%), ANT(3")-Ia (aadA1) (33%), AAC(6')-Ib (aacA4) (33%), AAC(3)-IIa (aacC2) (19%), and ArmA (22%). The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the isolates is shown in Table 4. There was a significant association between the presence of each resistance gene and non-susceptibility (resistance or intermediate resistance) to all aminoglycosides, with the exception of armA and resistance to netilmicin. Notably, in some groups, such as the gentamicin- and tobramycin-resistant groups, all resistant isolates contained certain AME-encoding genes such as aacC2, aacA4, and aadA1.

TABLE 4: The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the A. baumannii clinical isolates. Values are No. (%) of isolates; NS: non-susceptible (resistant and intermediate resistant); S: susceptible.

| Antibiotic | Pattern | aphA6 (n=77) | P | aadB (n=73) | P | aadA1 (n=33) | P | aacA4 (n=33) | P | aacC2 (n=19) | P | armA (n=22) | P |
| Gentamicin | NS | 77 (100) | 0.012 | 73 (100) | 0.023 | 33 (100) | 0.033 | 33 (100) | 0.033 | 19 (100) | 0.007 | 22 (100) | 0.021 |
| | S | 0 | | 0 | | 0 | | 0 | | 0 | | 0 | |
| Tobramycin | NS | 74 (96.1) | 0.019 | 70 (95.8) | 0.029 | 33 (100) | 0.003 | 33 (100) | 0.003 | 19 (100) | 0.007 | 21 (95.4) | 0.024 |
| | S | 3 (3.8) | | 3 (4.1) | | 0 | | 0 | | 0 | | 1 (4.5) | |
| Amikacin | NS | 64 (83.1) | 0.036 | 63 (86.3) | 0.037 | 27 (81.8) | 0.038 | 27 (81.8) | 0.038 | 17 (89.4) | 0.024 | 19 (86.3) | 0.033 |
| | S | 13 (16.8) | | 10 (13.6) | | 6 (18.1) | | 6 (18.1) | | 2 (10.5) | | 3 (13.6) | |
| Netilmicin | NS | 63 (81.8) | 0.042 | 65 (89.04) | 0.039 | 30 (90.9) | 0.014 | 29 (87.8) | 0.032 | 18 (94.7) | 0.018 | 14 (63.6) | 0.072 |
| | S | 15 (19.4) | | 8 (10.9) | | 3 (9.09) | | 4 (12.1) | | 1 (5.2) | | 8 (36.3) | |
| Kanamycin | NS | 73 (94.8) | 0.029 | 71 (97.2) | 0.023 | 33 (100) | 0.003 | 33 (100) | 0.003 | 19 (100) | 0.007 | 22 (100) | 0.021 |
| | S | 4 (5.1) | | 2 (2.7) | | 0 | | 0 | | 0 | | 0 | |
| Streptomycin | NS | 75 (97.4) | 0.015 | 72 (98.6) | 0.019 | 32 (96.9) | 0.019 | 31 (93.9) | 0.021 | 19 (100) | 0.007 | 18 (81.8) | 0.044 |
| | S | 2 (2.5) | | 1 (1.3) | | 1 (3.03) | | 2 (6.06) | | 0 | | 4 (18.1) | |
| Spectinomycin | NS | 69 (89.6) | 0.028 | 67 (91.7) | 0.037 | 31 (93.9) | 0.021 | 31 (93.9) | 0.021 | 19 (100) | 0.007 | 19 (86.3) | 0.033 |
| | S | 8 (10.3) | | 6 (8.2) | | 2 (6.06) | | 2 (6.06) | | 0 | | 3 (13.6) | |

In addition, we detected 22 gene profiles among the clinical isolates of A. baumannii (Table 5). The most prevalent combination gene profiles in the present study were: 1) APH(3')-VIa + ANT(2")-Ia, found in 39 isolates, 100% of which were resistant to kanamycin, almost 95% to netilmicin, and 97.4% to tobramycin and gentamicin; and 2) AAC(3)-IIa + AAC(6')-Ib + ANT(3")-Ia + APH(3')-VIa + ANT(2")-Ia, found in 14 isolates, 100% of which were resistant to gentamicin, kanamycin, and streptomycin, and almost 93% to tobramycin and spectinomycin. Fifteen isolates showed a profile with a single gene, most of which were resistant to the tested aminoglycosides; other gene profiles were detected at low rates (Table 5). Overall, 15, 52, 12, 5, 14, and 2 isolates contained 1, 2, 3, 4, 5, and 6 of the studied genes, respectively; the most prevalent profiles thus exhibited the simultaneous presence of 2 genes, followed by 5 and 3 genes.

TABLE 5: Gene profiles of the Acinetobacter baumannii clinical isolates and their association with aminoglycoside resistance. Values are No. (%) of isolates, given as NS/S; NS: non-susceptible (resistant and intermediate resistant); S: susceptible.

| Gene profile (No. of isolates) | Gentamicin | Tobramycin | Amikacin | Netilmicin | Kanamycin | Streptomycin | Spectinomycin |
| ArmA (9) | 9 (100)/- | 9 (100)/- | 7 (77.7)/2 (22.2) | 3 (33.3)/6 (66.6) | 9 (100)/- | 7 (77.7)/2 (22.2) | 9 (100)/- |
| aphA6 (2) | 2 (100)/- | 2 (100)/- | -/2 (100) | -/2 (100) | 1 (50)/1 (50) | 2 (100)/- | -/2 (100) |
| aadB (4) | 4 (100)/- | 3 (75)/1 (25) | 3 (75)/1 (25) | 2 (50)/2 (50) | 3 (75)/1 (25) | 4 (100)/- | 3 (75)/1 (25) |
| ArmA + aphA6 (3) | 3 (100)/- | 3 (100)/- | 2 (66.6)/1 (33.3) | 2 (66.6)/1 (33.3) | 3 (100)/- | 3 (100)/- | 2 (66.6)/1 (33.3) |
| aacA4 + aadA1 (2) | 2 (100)/- | 2 (100)/- | 2 (100)/- | 1 (50)/1 (50) | 2 (100)/- | 2 (100)/- | 2 (100)/- |
| aadA1 + armA (2) | 2 (100)/- | 2 (100)/- | 1 (50)/1 (50) | 2 (100)/- | 2 (100)/- | 1 (50)/1 (50) | 2 (100)/- |
| aadA1 + aphA6 (1) | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- |
| aadB + aadA1 (2) | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- |
| aphA6 + aadB (39) | 38 (97.4)/1 (2.56) | 38 (97.4)/1 (2.56) | 34 (87.1)/5 (12.8) | 37 (94.8)/2 (5.1) | 39 (100)/- | 33 (84.6)/6 (15.3) | 32 (82)/7 (17.9) |
| aacA4 + aphA6 (3) | 3 (100)/- | 3 (100)/- | 3 (100)/- | 3 (100)/- | 3 (100)/- | 3 (100)/- | 3 (100)/- |
| aadB + aadA1 + aphA6 (2) | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- |
| aacA4 + aadB + aadA1 (1) | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- |
| aacA4 + aadB + aphA6 (1) | 1 (100)/- | 1 (100)/- | 1 (100)/- | -/1 (100) | 1 (100)/- | -/1 (100) | 1 (100)/- |
| aadB + armA + aphA6 (3) | 3 (100)/- | 3 (100)/- | 2 (66.6)/1 (33.3) | 2 (66.6)/1 (33.3) | 3 (100)/- | 3 (100)/- | 2 (66.6)/1 (33.3) |
| aacA4 + aacC2 + aphA6 (1) | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- |
| aacA4 + armA + aphA6 (2) | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- |
| aacC2 + aacA4 + aadA1 (1) | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- |
| aacA4 + aadA1 + aphA6 (1) | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- |
| aacC2 + aacA4 + aadB + aphA6 (1) | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- | 1 (100)/- |
| aacA4 + aadB + aadA1 + aphA6 (4) | 4 (100)/- | 4 (100)/- | 4 (100)/- | 4 (100)/- | 4 (100)/- | 4 (100)/- | 4 (100)/- |
| aacC2 + aacA4 + aadA1 + aphA6 + aadB (14) | 14 (100)/- | 13 (92.8)/1 (7.1) | 12 (85.7)/2 (14.2) | 12 (85.7)/2 (14.2) | 14 (100)/- | 14 (100)/- | 13 (92.8)/1 (7.1) |
| aacC2 + aacA4 + aadA1 + aadB + armA + aphA6 (2) | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- | 2 (100)/- |
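Tallying the gene profiles in Table 5 amounts to counting the distinct sets of genes detected per isolate. Below is a sketch of that bookkeeping; the per-isolate gene sets are placeholders, not the study's data.

# Count resistance-gene profiles across isolates, as summarized in Table 5.
# The example gene sets are placeholders, not the study's per-isolate data.
from collections import Counter

isolate_genes = [
    {"aphA6", "aadB"},
    {"aphA6", "aadB"},
    {"aacC2", "aacA4", "aadA1", "aphA6", "aadB"},
    {"armA"},
]

profiles = Counter(frozenset(genes) for genes in isolate_genes)
for profile, count in profiles.most_common():
    print(" + ".join(sorted(profile)), "->", count, "isolate(s)")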
CONCLUSIONS
High-level aminoglycoside MIC ranges in isolates simultaneously carrying AME- and ArmA-encoding genes indicated the importance of these genes in resistance to aminoglycosides in A. baumannii. It seems that selecting the appropriate antibiotic based on antimicrobial susceptibility testing and using combination therapy would be effective in overcoming this problem in settings such as ours. Therefore, it is necessary to collect data from monitoring studies for the prevention, treatment, and control of the infections caused by this microorganism.
[ "Sample collection and bacterial isolates", "Antimicrobial susceptibility testing", "DNA extraction, PCR, and multiplex-PCR", "Statistical analysis", "Patients, samples, and bacterial isolates", "Antimicrobial susceptibility pattern", "Gene profiles of the isolates" ]
[ "This study was performed on A. baumannii isolated from patients admitted to different educational hospitals in Sari, north of Iran, for 6 months (April 2019 to September 2019). The clinical specimens included blood, urine, respiratory secretions (bronchial lavage and tracheal secretions), CSF, and ulcer (surgical and burn wound). The clinical isolates were identified using conventional microbiological tests\n19\n and confirmed by polymerase chain reaction (PCR) amplification of the blaOXA-51 gene using specific primers\n20\n; the reaction conditions are shown in Table 1. \n\nTABLE 1:Primers used to amplify the blaOXA-51 and aminoglycoside resistance genes along with the conditions of PCR.Target genesPrimer sequences (5´-3´)Amplicon size (bp)94 °C94 °CAnnealing Temperature and time72 °C72 °CReference\nOXA-51TAATGCTTTGATCGGCCTTG3532 min25 sec51 °C for 30 sec30 sec5 min\n5\n\nTGGATTGCACTTCATCTTGG\n\n\n\n\n\n\n\nAPH(3′)-VIa\nCGGAAACAGCGTTTTAGA7172 min25 sec49 °C for 30 sec30 sec5 min\n5\n\n(aphA6)\nTTCCTTTTGTCAGGTC\n\n\n\n\n\n\n\nAAC(3)-IIa\nATGCATACGCGGAAGGC8222 min25 sec54 °C for 30 sec30 sec5 min\n5\n\n(aacC2)\nTGCTGGCACGATCGGAG\n\n\n\n\n\n\n\nAAC(6′)-Ib\nTATGAGTGGCTAAATCGAT3952 min25 sec54 °C for 30 sec30 sec5 min\n5\n\n(aacA4)\nCCCGCTTTCTCGTAGCA\n\n\n\n\n\n\n\nANT(2\")-Ia\nATCTGCCGCTCTGGAT4052 min25 sec49 °C for 30 sec30 sec5 min\n5\n\n(aadB)\nCGAGCCTGTAGGACT\n\n\n\n\n\n\n\nANT(3\")-Ia\nATGAGGGAAGCGGTGATCG7922 min25 sec62 °C for 30 sec30 sec5 min\n5\n\n(aadA1)\nTTATTTGCCGACTACCTTGGT\n\n\n\n\n\n\n\nArmA\nATTCTGCCTATCCTAATTGG3152 min25 sec49 °C for 30 sec30 sec5 min\n5\n\nACCTATACTTTATCGTCGTC\n\n\n\n\n\n\n\n", "The antibiotic susceptibility pattern of the isolates was determined by the disk agar diffusion method on Muller Hinton agar (Merck, Germany) according to the Clinical and Laboratory Standards Institute (CLSI) guidelines\n21\n. The antibiotics included piperacillin (100 µg), piperacillin-tazobactam (100/10 µg), imipenem (10 µg), meropenem (10 µg), doripenem (10 µg), ciprofloxacin (5 µg), levofloxacin (5 µg), trimethoprim-sulfamethoxazole (1.25-23.75 µg), ceftazidime (30 µg), cefotaxime (30 µg), and cefepime (30 µg) (MAST Co., England). The susceptibility pattern of the isolates against aminoglycosides including kanamycin, amikacin, spectinomycin, netilmicin, gentamicin, streptomycin, and tobramycin was determined using the micro-broth dilution method according to the CLSI guidelines\n21\n. For interpretation of the minimum inhibitory concentration (MIC) values, we referred to the CLSI guidelines and previous studies\n1\n\n,\n\n21\n\n,\n\n22\n. Escherichia coli ATCC 25922 and A. baumannii ATCC 19606 were used as control strains for antibiotic susceptibility testing. ", "DNA was extracted from all A. baumanniiisolates grown for 24 h using an alkaline lysis method with sodium dodecyl sulphate (SDS) and NaOH, as previously published\n23\n, with few modifications. In brief, first, we prepared a lysis buffer by dissolving 0.5 g of SDS and 0.4 g of NaOH in 200 µL of distilled water. Next, 4-6 colonies of the bacteria were suspended in 60 µL of lysis buffer and subsequently heated at 95 °C for 10 min. In the next step, the suspension was centrifuged at 13000 rpm for 5 min, and 180 µL of distilled water was added to the microtubes. The obtained supernatant was frozen at −20 °C until use as the extracted DNA in PCR. \nTwo sets of multiplex-PCR were used to detect AME-encoding genes in A. baumannii isolates using the specific primers shown in Table 1. 
"DNA was extracted from all A. baumannii isolates grown for 24 h using an alkaline lysis method with sodium dodecyl sulphate (SDS) and NaOH, as previously published [23], with a few modifications. In brief, a lysis buffer was first prepared by dissolving 0.5 g of SDS and 0.4 g of NaOH in 200 µL of distilled water. Next, 4-6 colonies of the bacteria were suspended in 60 µL of lysis buffer and heated at 95 °C for 10 min. The suspension was then centrifuged at 13000 rpm for 5 min, and 180 µL of distilled water was added to the microtubes. The supernatant was frozen at −20 °C until use as the template DNA in PCR.
Two sets of multiplex-PCR were used to detect AME-encoding genes in A. baumannii isolates using the specific primers shown in Table 1. The APH(3′)-VIa (aphA6), ANT(2\")-Ia (aadB), and ArmA genes were detected in one set; AAC(6′)-Ib (aacA4) and AAC(3)-IIa (aacC2) were identified in a second set; and the ANT(3\")-Ia (aadA1) gene was detected by simplex PCR. The PCR and multiplex-PCR were performed in a final volume of 25 µL containing 12.5 µL of master mix (Ampliqon, Denmark), 10 pmol of each primer (Bioneer, South Korea), and 500 ng of template DNA; the reactions were brought to the final volume with distilled water. The genes were amplified under standard conditions using a thermocycler (Bio-Rad, USA). All reactions were run for 34 cycles, and the conditions are shown in Table 1.",
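As an illustration of the reaction arithmetic (not part of the study's workflow), a sketch that scales the 25-µL reaction described above for a batch of isolates with pipetting overage. The per-component primer and template volumes are assumptions for illustration only, since the text specifies amounts (10 pmol, 500 ng) rather than volumes.

    # Illustrative master-mix calculator for the 25-uL reaction described above.
    def master_mix(n_reactions: int, overage: float = 0.10) -> dict:
        per_rxn = {
            "2x master mix": 12.5,   # uL, as stated in the text
            "forward primer": 1.0,   # uL of a 10 pmol/uL stock (assumed volume)
            "reverse primer": 1.0,   # uL (assumed volume)
            "template DNA": 2.0,     # uL containing ~500 ng (assumed volume)
        }
        per_rxn["water"] = 25.0 - sum(per_rxn.values())  # bring to final volume
        scale = n_reactions * (1 + overage)              # extra for pipetting loss
        return {k: round(v * scale, 1) for k, v in per_rxn.items()}

    print(master_mix(24))  # volumes (uL) to pool for 24 reactions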
"The data were analyzed using SPSS (version 21). Categorical data were analyzed using Fisher’s exact test, and a P-value of less than 0.05 was considered statistically significant. In addition, an independent t-test was used to compare the mean age between groups.",
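The same two tests are available outside SPSS; a sketch with SciPy, using the armA/netilmicin cell of Table 4 as the worked 2x2 example. The gene-negative counts are derived here by subtracting Table 4's gene-positive counts from the netilmicin totals in Table 3 (non-susceptible 80, susceptible 20); they are not printed in the paper.

    from scipy.stats import fisher_exact, ttest_ind

    # Rows: armA-positive, armA-negative; columns: non-susceptible, susceptible.
    table = [[14, 8],
             [66, 12]]
    odds_ratio, p = fisher_exact(table)
    print(f"Fisher's exact test: OR={odds_ratio:.2f}, p={p:.3f}")

    # Independent t-test comparing mean age between two groups (toy data):
    ages_men, ages_women = [40, 55, 62, 23, 31], [38, 60, 45, 30, 52]
    t, p_age = ttest_ind(ages_men, ages_women)
    print(f"t={t:.2f}, p={p_age:.2f}")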
"In this study, 100 non-duplicate A. baumannii clinical isolates were collected from 100 patients admitted to the teaching hospitals of Sari, north of Iran. All isolates identified using the phenotypic method contained the blaOXA-51 gene according to the PCR results. The mean age of the patients was 42.08±25.08 years (minimum age: 6 months; maximum age: 88 years), and 50% of the patients were male. There was no significant difference between men and women in terms of mean age (p=0.64). Most of the bacterial isolates (34%) were obtained from patients admitted to the burn wards, while 29%, 21%, and 16% of the isolates were collected from the ICU, surgery, and pediatric wards, respectively. The most common specimen type (73%) was wound samples, while 15% and 12% of the isolates were obtained from urine and blood cultures, respectively.", "According to the results of the disk agar diffusion method, the most and least effective antibiotics in the present study were imipenem and ciprofloxacin, with resistance rates of 75% and 100%, respectively (Table 2). Moreover, 94% of the isolates were multi-drug resistant (MDR), and most of the MDR isolates were collected from wound samples. Table 2 presents the antibiotic resistance patterns of all A. baumannii clinical isolates in this study by hospital ward and sample type. Resistance to the tested antibiotics was not significantly correlated with the sample types or the hospital wards where the samples were collected.
TABLE 2: Antimicrobial susceptibility pattern of the Acinetobacter baumannii clinical isolates in the disk agar diffusion method. Values are No. (%) of isolates; the two P-value columns compare hospital wards and sample types, respectively.
Antibiotic (R/I/S) | Total (n=100) | Burn (n=34) | ICU (n=29) | Surgery (n=21) | Pediatric (n=16) | P | Wound (n=73) | Urine (n=15) | Blood (n=12) | P
PIP R | 86 | 28 (82.3) | 28 (96.5) | 19 (90.4) | 11 (68.7) | 0.412 | 62 (84.9) | 14 (93.3) | 10 (83.3) | 0.917
PIP I | 10 | 5 (14.7) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 8 (10.9) | 1 (6.6) | 1 (8.3) |
PIP S | 4 | 1 (2.9) | 0 | 0 | 3 (18.7) | | 3 (4.1) | 0 | 1 (8.3) |
PIP-TAZ R | 78 | 26 (76.4) | 24 (82.7) | 16 (76.1) | 12 (75) | 0.104 | 54 (73.9) | 14 (93.3) | 10 (83.3) | 0.372
PIP-TAZ I | 10 | 4 (11.7) | 2 (6.8) | 3 (14.2) | 1 (6.2) | | 8 (10.9) | 0 | 2 (16.6) |
PIP-TAZ S | 12 | 4 (11.7) | 3 (10.3) | 2 (9.5) | 3 (18.7) | | 11 (15) | 1 (6.6) | 0 |
CAZ R | 76 | 25 (73.5) | 22 (75.8) | 18 (85.7) | 11 (68.7) | 0.743 | 52 (71.2) | 15 (100) | 9 (75) | 0.559
CAZ I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CAZ S | 24 | 9 (26.4) | 1 (3.4) | 3 (14.2) | 5 (31.2) | | 21 (28.7) | 0 | 3 (25) |
CTX R | 93 | 32 (94.1) | 28 (96.5) | 19 (90.4) | 14 (87.5) | 0.762 | 67 (91.7) | 15 (100) | 11 (91.6) | 0.618
CTX I | 7 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 6 (8.2) | 0 | 1 (8.3) |
CTX S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CEF R | 92 | 32 (94.1) | 28 (96.5) | 18 (85.7) | 14 (87.5) | 0.448 | 67 (91.7) | 14 (93.3) | 11 (91.6) | 0.728
CEF I | 4 | 1 (2.9) | 1 (3.4) | 2 (9.5) | 0 | | 2 (2.7) | 1 (6.6) | 1 (8.3) |
CEF S | 4 | 1 (2.9) | 0 | 1 (4.7) | 2 (12.5) | | 4 (5.4) | 0 | 0 |
IMI R | 75 | 27 (79.4) | 25 (86.2) | 17 (80.9) | 10 (62.5) | 0.617 | 55 (75.3) | 12 (80) | 8 (66.6) | 0.873
IMI I | 11 | 4 (11.7) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 1 (6.6) | 1 (8.3) |
IMI S | 14 | 7 (20.5) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 2 (13.3) | 3 (25) |
MER R | 97 | 33 (97) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.964 | 70 (95.8) | 15 (100) | 12 (100) | 0.667
MER I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
MER S | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 3 (4.1) | 0 | 0 |
DOR R | 96 | 32 (94.1) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.797 | 69 (94.5) | 15 (100) | 12 (100) | 0.913
DOR I | 2 | 1 (2.9) | 1 (3.4) | 0 | 0 | | 2 (2.7) | 0 | 0 |
DOR S | 2 | 1 (2.9) | 0 | 1 (5) | 0 | | 2 (2.7) | 0 | 0 |
CIP R | 100 | 34 (100) | 29 (100) | 21 (100) | 16 (100) | 0.100 | 73 (100) | 15 (100) | 12 (100) | 0.100
CIP I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CIP S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
LEV R | 93 | 31 (91.1) | 27 (93.1) | 20 (95.2) | 15 (93.7) | 0.725 | 67 (91.7) | 14 (93.3) | 12 (100) | 0.842
LEV I | 3 | 2 (5.8) | 0 | 0 | 1 (6.2) | | 3 (4.1) | 0 | 0 |
LEV S | 4 | 1 (2.9) | 2 (6.8) | 1 (4.7) | 0 | | 3 (4.1) | 1 (6.6) | 0 |
SXT R | 92 | 31 (91.1) | 27 (93.1) | 18 (85.7) | 16 (100) | 0.935 | 68 (93.1) | 12 (80) | 12 (100) | 0.216
SXT I | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 1 (1.3) | 2 (13.3) | 0 |
SXT S | 5 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 0 | | 4 (5.4) | 1 (6.6) | 0 |
PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; S: susceptible.
Moreover, according to the MIC results, the resistance rate against gentamicin, kanamycin, tobramycin, and streptomycin was 94% each, while netilmicin showed the highest susceptibility rate (20%). In addition, 74%, 68%, and 78% of the isolates were resistant to amikacin, netilmicin, and spectinomycin, respectively. The MIC ranges of the aminoglycosides and their relationship with the presence of AME-encoding genes are shown in Table 3.
TABLE 3: Aminoglycoside resistance pattern of the Acinetobacter baumannii clinical isolates in this study. For each MIC range (µg/mL), the number of isolates carrying APH(3′)-VIa (aphA6) (n=77) / ANT(2\")-Ia (aadB) (n=73) / ANT(3\")-Ia (aadA1) (n=33) / AAC(6′)-Ib (aacA4) (n=33) / AAC(3)-IIa (aacC2) (n=19) / armA (n=22) is given in that order; totals (n=100) are given where printed in the source.
Gentamicin: ≤4 (S) -/-/-/-/-/- (total 6); 8 (I) 5/3/1/1/-/1; 16-32 (R) -/-/-/-/-/- (total R 94); 64-128 (R) 5/-/-/-/-/-; ≥256 (R) 67/70/32/32/19/21; MIC50 512/256/512/512/256/1024 (total 256); MIC90 1024/256/1024/256/256/1024 (total 512)
Tobramycin: ≤4 (S) 3/3/-/-/-/1 (total 4); 8 (I) 1/-/-/-/-/2 (total 2); 16-32 (R) 2/-/1/-/-/- (total R 94); 64-128 (R) -/2/1/4/2/3; ≥256 (R) 71/68/31/29/17/16; MIC50 256/512/512/256/512/512 (total 512); MIC90 1024/1024/1024/512/1024/512 (total 1024)
Amikacin: ≤16 (S) 13/10/6/6/2/3 (total 16); 32 (I) 7/10/6/6/5/1 (total 10); 64 (R) -/1/-/-/-/- (total R 74); 128 (R) -/4/-/-/-/-; ≥256 (R) 57/48/21/21/12/18; MIC50 256/1024/512/512/512/256 (total 512); MIC90 512/512/512/1024/512/256 (total 1024)
Netilmicin: ≤8 (S) 8/8/3/-/1/2 (total 20); 16 (I) 6/-/-/4/-/6 (total 12); 32 (R) 8/10/4/6/4/2 (total R 68); 64-128 (R) 5/7/1/3/2/4; ≥256 (R) 50/48/25/20/12/8; MIC50 256/128/64/256/256/128 (total 128); MIC90 256/128/64/512/512/128 (total 256)
Kanamycin: ≤16 (S) 4/2/-/-/-/- (total 4); 32 (I) 2/2/-/-/-/- (total 2); 64 (R) -/3/-/1/-/- (total R 94); 128 (R) 3/2/2/-/2/1; ≥256 (R) 68/64/31/32/17/21; MIC50 64/32/128/64/256/128 (total 128); MIC90 64/256/256/64/256/256 (total 256)
Streptomycin: ≤4 (S) 1/-/1/1/-/- (total S 6); 8 (S) 1/1/-/1/-/-; 16 (S) -/-/-/-/-/-; 32 (I) -/-/-/-/-/-; 64-128 (R) 3/3/-/3/2/- (total R 94); ≥256 (R) 72/69/32/28/17/18; MIC50 256/128/256/256/256/512 (total 256); MIC90 256/128/512/1024/128/512 (total 512)
Spectinomycin: ≤4 (S) 1/1/-/-/-/3 (total S 12); 8 (S) 7/5/2/2/-/-; 16 (S) -/-/-/-/-/-; 32 (I) 8/3/-/1/-/5 (total 10); 64-128 (R) 2/1/-/2/2/1 (total R 78); ≥256 (R) 59/63/31/28/17/13; MIC50 256/256/512/64/64/256 (total 256); MIC90 256/512/256/128/128/256 (total 512)
R: resistant; I: intermediate resistant; S: susceptible. MIC50/MIC90: minimum inhibitory concentration required to inhibit the growth of 50%/90% of organisms.",
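For reference on how the MIC50/MIC90 summary statistics in Table 3 are defined, a short sketch computing them from a list of per-isolate MICs. The data here are toy values, since per-isolate MICs are not published in the paper.

    import math

    def mic_percentile(mics: list[float], pct: float) -> float:
        """Lowest concentration inhibiting >= pct% of the isolates."""
        ranked = sorted(mics)
        return ranked[math.ceil(pct / 100 * len(ranked)) - 1]

    mics = [4, 8, 64, 256, 256, 512, 512, 512, 1024, 1024]   # toy data, ug/mL
    print(mic_percentile(mics, 50), mic_percentile(mics, 90))  # -> 256 1024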
"The frequency of each aminoglycoside resistance gene and its relation to the MIC ranges are shown in Table 3. In total, the proportions of aminoglycoside resistance genes among our clinical isolates of A. baumannii were as follows: APH(3′)-VIa (aphA6) (77%), ANT(2\")-Ia (aadB) (73%), ANT(3\")-Ia (aadA1) (33%), AAC(6′)-Ib (aacA4) (33%), AAC(3)-IIa (aacC2) (19%), and ArmA (22%). The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the isolates is shown in Table 4. There was a significant association between the presence of each resistance gene and non-susceptibility (resistance or intermediate resistance) to every aminoglycoside, with the single exception of armA and netilmicin (p=0.072). Notably, for some antibiotics, such as gentamicin and tobramycin, every isolate carrying aacC2, aacA4, or aadA1 was non-susceptible.
TABLE 4: The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the A. baumannii clinical isolates. Values are No. (%) of gene-positive isolates, followed by the P-value for the non-susceptible rows.
Antibiotic (pattern) | APH(3′)-VIa (aphA6) (n=77) | ANT(2\")-Ia (aadB) (n=73) | ANT(3\")-Ia (aadA1) (n=33) | AAC(6′)-Ib (aacA4) (n=33) | AAC(3)-IIa (aacC2) (n=19) | armA (n=22)
Gentamicin NS | 77 (100), P=0.012 | 73 (100), P=0.023 | 33 (100), P=0.033 | 33 (100), P=0.033 | 19 (100), P=0.007 | 22 (100), P=0.021
Gentamicin S | 0 | 0 | 0 | 0 | 0 | 0
Tobramycin NS | 74 (96.1), P=0.019 | 70 (95.8), P=0.029 | 33 (100), P=0.003 | 33 (100), P=0.003 | 19 (100), P=0.007 | 21 (95.4), P=0.024
Tobramycin S | 3 (3.8) | 3 (4.1) | 0 | 0 | 0 | 1 (4.5)
Amikacin NS | 64 (83.1), P=0.036 | 63 (86.3), P=0.037 | 27 (81.8), P=0.038 | 27 (81.8), P=0.038 | 17 (89.4), P=0.024 | 19 (86.3), P=0.033
Amikacin S | 13 (16.8) | 10 (13.6) | 6 (18.1) | 6 (18.1) | 2 (10.5) | 3 (13.6)
Netilmicin NS | 63 (81.8), P=0.042 | 65 (89.04), P=0.039 | 30 (90.9), P=0.014 | 29 (87.8), P=0.032 | 18 (94.7), P=0.018 | 14 (63.6), P=0.072
Netilmicin S | 15 (19.4) | 8 (10.9) | 3 (9.09) | 4 (12.1) | 1 (5.2) | 8 (36.3)
Kanamycin NS | 73 (94.8), P=0.029 | 71 (97.2), P=0.023 | 33 (100), P=0.003 | 33 (100), P=0.003 | 19 (100), P=0.007 | 22 (100), P=0.021
Kanamycin S | 4 (5.1) | 2 (2.7) | 0 | 0 | 0 | 0
Streptomycin NS | 75 (97.4), P=0.015 | 72 (98.6), P=0.019 | 32 (96.9), P=0.019 | 31 (93.9), P=0.021 | 19 (100), P=0.007 | 18 (81.8), P=0.044
Streptomycin S | 2 (2.5) | 1 (1.3) | 1 (3.03) | 2 (6.06) | 0 | 4 (18.1)
Spectinomycin NS | 69 (89.6), P=0.028 | 67 (91.7), P=0.037 | 31 (93.9), P=0.021 | 31 (93.9), P=0.021 | 19 (100), P=0.007 | 19 (86.3), P=0.033
Spectinomycin S | 8 (10.3) | 6 (8.2) | 2 (6.06) | 2 (6.06) | 0 | 3 (13.6)
NS: non-susceptible (resistant or intermediate resistant); S: susceptible.
In addition, we detected 22 gene profiles among the clinical isolates of A. baumannii (Table 5). The most prevalent combination profiles were: 1) APH(3')-VIa + ANT(2\")-Ia, found in 39 isolates, of which 100% were resistant to kanamycin, almost 95% to netilmicin, and 97.4% to tobramycin and gentamicin; and 2) AAC(3)-IIa + AAC(6')-Ib + ANT(3\")-Ia + APH(3')-VIa + ANT(2\")-Ia, found in 14 isolates, of which 100% were resistant to gentamicin, kanamycin, and streptomycin, and almost 93% to tobramycin and spectinomycin. Fifteen isolates carried a single AME gene, and most of these were resistant to the tested aminoglycosides. Other AME-encoding gene profiles were detected at low rates (Table 5). Overall, 15, 52, 12, 5, 14, and 2 isolates contained 1, 2, 3, 4, 5, and 6 AME genes, respectively; the most prevalent profiles thus combined 2 genes, followed by 5 and 3 genes.
TABLE 5: Gene profiles of the Acinetobacter baumannii clinical isolates and their association with aminoglycoside resistance. For each profile (number of isolates in parentheses), No. (%) of non-susceptible (NS)/susceptible (S) isolates is given for gentamicin (GEN), tobramycin (TOB), amikacin (AMK), netilmicin (NET), kanamycin (KAN), streptomycin (STR), and spectinomycin (SPC).
ArmA (9): GEN 9 (100)/-; TOB 9 (100)/-; AMK 7 (77.7)/2 (22.2); NET 3 (33.3)/6 (66.6); KAN 9 (100)/-; STR 7 (77.7)/2 (22.2); SPC 9 (100)/-
aphA6 (2): GEN 2 (100)/-; TOB 2 (100)/-; AMK -/2 (100); NET -/2 (100); KAN 1 (50)/1 (50); STR 2 (100)/-; SPC -/2 (100)
aadB (4): GEN 4 (100)/-; TOB 3 (75)/1 (25); AMK 3 (75)/1 (25); NET 2 (50)/2 (50); KAN 3 (75)/1 (25); STR 4 (100)/-; SPC 3 (75)/1 (25)
ArmA + aphA6 (3): GEN 3 (100)/-; TOB 3 (100)/-; AMK 2 (66.6)/1 (33.3); NET 2 (66.6)/1 (33.3); KAN 3 (100)/-; STR 3 (100)/-; SPC 2 (66.6)/1 (33.3)
aacA4 + aadA1 (2): GEN 2 (100)/-; TOB 2 (100)/-; AMK 2 (100)/-; NET 1 (50)/1 (50); KAN 2 (100)/-; STR 2 (100)/-; SPC 2 (100)/-
aadA1 + armA (2): GEN 2 (100)/-; TOB 2 (100)/-; AMK 1 (50)/1 (50); NET 2 (100)/-; KAN 2 (100)/-; STR 1 (50)/1 (50); SPC 2 (100)/-
aadA1 + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aadB + aadA1 (2): NS 2 (100) for all seven aminoglycosides
aphA6 + aadB (39): GEN 38 (97.4)/1 (2.56); TOB 38 (97.4)/1 (2.56); AMK 34 (87.1)/5 (12.8); NET 37 (94.8)/2 (5.1); KAN 39 (100)/-; STR 33 (84.6)/6 (15.3); SPC 32 (82)/7 (17.9)
aacA4 + aphA6 (3): NS 3 (100) for all seven aminoglycosides
aadB + aadA1 + aphA6 (2): NS 2 (100) for all seven aminoglycosides
aacA4 + aadB + aadA1 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + aadB + aphA6 (1): GEN 1 (100)/-; TOB 1 (100)/-; AMK 1 (100)/-; NET -/1 (100); KAN 1 (100)/-; STR -/1 (100); SPC 1 (100)/-
aadB + armA + aphA6 (3): GEN 3 (100)/-; TOB 3 (100)/-; AMK 2 (66.6)/1 (33.3); NET 2 (66.6)/1 (33.3); KAN 3 (100)/-; STR 3 (100)/-; SPC 2 (66.6)/1 (33.3)
aacA4 + aacC2 + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + armA + aphA6 (2): NS 2 (100) for all seven aminoglycosides
aacC2 + aacA4 + aadA1 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + aadA1 + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aacC2 + aacA4 + aadB + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + aadB + aadA1 + aphA6 (4): NS 4 (100) for all seven aminoglycosides
aacC2 + aacA4 + aadA1 + aphA6 + aadB (14): GEN 14 (100)/-; TOB 13 (92.8)/1 (7.1); AMK 12 (85.7)/2 (14.2); NET 12 (85.7)/2 (14.2); KAN 14 (100)/-; STR 14 (100)/-; SPC 13 (92.8)/1 (7.1)
aacC2 + aacA4 + aadA1 + aadB + armA + aphA6 (2): NS 2 (100) for all seven aminoglycosides
NS: non-susceptible (resistant or intermediate resistant); S: susceptible." ]
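The gene-profile tally behind Table 5 reduces to counting distinct gene combinations over per-isolate presence/absence calls; a sketch under that assumption (the example rows are hypothetical, not the study's data).

    from collections import Counter

    GENES = ["aphA6", "aadB", "aadA1", "aacA4", "aacC2", "armA"]
    isolates = [                         # hypothetical PCR results per isolate
        {"aphA6", "aadB"},
        {"aphA6", "aadB"},
        {"aacC2", "aacA4", "aadA1", "aphA6", "aadB"},
        {"armA"},
    ]

    # Distinct profiles (canonical gene order) and genes-per-isolate distribution.
    profiles = Counter(" + ".join(g for g in GENES if g in iso) for iso in isolates)
    genes_per_isolate = Counter(len(iso) for iso in isolates)
    print(profiles.most_common())        # e.g. [('aphA6 + aadB', 2), ...]
    print(sorted(genes_per_isolate.items()))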
[ null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Sample collection and bacterial isolates", "Antimicrobial susceptibility testing", "DNA extraction, PCR, and multiplex-PCR", "Statistical analysis", "RESULTS", "Patients, samples, and bacterial isolates", "Antimicrobial susceptibility pattern", "Gene profiles of the isolates", "DISCUSSION", "CONCLUSIONS" ]
[ "\nAcinetobacter baumannii, living in the soil, the water, and different hospital environments, is an important opportunistic pathogen that causes nosocomial infections such as pneumonia, urinary tract infections, intravenous catheter-associated infections, and ventilation-associated infections, particularly in intensive care units\n1\n\n-\n\n4\n. The ability of this microorganism to remain in the hospital environment and to spread among the patients, along with their resistance to several antibiotics, are the main driving forces behind large-scale recurrent events in different countries\n5\n.\nThe major antibiotics used for the treatment of infections caused by this organism are beta-lactams, aminoglycosides, fluoroquinolones, and carbapenems; however, A. baumannii has shown different rates of resistance against these antimicrobial agents\n6\n\n-\n\n8\n. These infections are difficult, costly, and sometimes impossible to treat owing to the high ability of A. baumannii to acquire antibiotic resistance genes and the development of multidrug-resistant (MDR) strains\n9\n\n,\n\n10\n. Aminoglycosides are one of the main drugs used for the treatment of Acinetobacter infections\n11\n; however, recently, the resistance of A. baumannii to these antibiotics has also increased. Two main mechanisms of resistance to aminoglycosides are the alteration of the ribosome structure caused by mutations in the ribosomal 16S rRNA and the enzymatic resistance mechanism\n12\n. The enzymatic alteration of the aminoglycoside molecule at -OH or -NH2 groups by aminoglycoside-modifying enzymes (AMEs) is the most important resistance mechanism\n12\n\n-\n\n14\n. AMEs are classified into three major groups: aminoglycoside phosphotransferase (APH), aminoglycoside acetyltransferase (AAC), aminoglycoside nucleotidyltransferase (ANT), and aminoglycoside adenylyltransferase (AAD)\n5\n\n,\n\n13\n. Aminoglycoside acetyltransferases cause acetylation of the -NH2 groups of aminoglycosides at the 1, 3, 2', and 6' positions using acetyl coenzyme A as a donor substrate\n15\n. Aminoglycoside phosphotransferases phosphorylate the hydroxyl groups present in the structure of aminoglycosides at the 4, 6, 9, 3', 2'', 3'', and 7'' positions (seven different groups) with the help of ATP; the largest enzymatic group in this family is the APH(3′)-I group\n16\n. The proportion of strains harboring the aphA6 gene in A. baumannii is widespread, and this enzyme is the cause of resistance to neomycin, amikacin, kanamycin, paromomycin, ribostamycin, butirosin, and isepamicin\n17\n. Aminoglycoside nucleotidyltransferases are classified into 5 groups, and the genes encoding these enzymes can be found in chromosomes or transferred by plasmids and transposons\n12\n. These enzymes transfer an AMP group from ATP to a hydroxyl group at the 2'', 3'', 4', 6, and 9 positions of the aminoglycoside molecule\n13\n. In addition to AMEs, 16S rRNA methylation by the ArmA enzyme is a novel mechanism that contributes to the high level of aminoglycoside resistance in A. baumannii, as reported in the Far East, Europe, and North America\n5\n. This enzyme can be transferred by class 1 integrons and is often detected in carbapenem-resistant A. baumannii isolates\n18\n. This study aimed to investigate the role of some important aminoglycoside-modifying enzymes and 16S rRNA methylase (ArmA) in the resistance of A. baumannii clinical isolates to aminoglycosides in Sari, located north of Iran.", " Sample collection and bacterial isolates This study was performed on A. 
This study was performed on A. baumannii isolated from patients admitted to different educational hospitals in Sari, north of Iran, over 6 months (April 2019 to September 2019). The clinical specimens included blood, urine, respiratory secretions (bronchial lavage and tracheal secretions), CSF, and wounds (surgical and burn). The clinical isolates were identified using conventional microbiological tests [19] and confirmed by polymerase chain reaction (PCR) amplification of the blaOXA-51 gene using specific primers [20]; the reaction conditions are shown in Table 1 (see the reconstructed Table 1 above for the primer sequences, amplicon sizes, and annealing temperatures).
 Antimicrobial susceptibility testing The antibiotic susceptibility pattern of the isolates was determined by the disk agar diffusion method on Mueller-Hinton agar (Merck, Germany) according to the Clinical and Laboratory Standards Institute (CLSI) guidelines [21]. The antibiotics included piperacillin (100 µg), piperacillin-tazobactam (100/10 µg), imipenem (10 µg), meropenem (10 µg), doripenem (10 µg), ciprofloxacin (5 µg), levofloxacin (5 µg), trimethoprim-sulfamethoxazole (1.25-23.75 µg), ceftazidime (30 µg), cefotaxime (30 µg), and cefepime (30 µg) (MAST Co., England). The susceptibility pattern of the isolates against aminoglycosides, including kanamycin, amikacin, spectinomycin, netilmicin, gentamicin, streptomycin, and tobramycin, was determined using the micro-broth dilution method according to the CLSI guidelines [21]. For interpretation of the minimum inhibitory concentration (MIC) values, we referred to the CLSI guidelines and previous studies [1,21,22]. Escherichia coli ATCC 25922 and A. baumannii ATCC 19606 were used as control strains for antibiotic susceptibility testing.
 DNA extraction, PCR, and multiplex-PCR DNA was extracted from all A. baumannii isolates grown for 24 h using an alkaline lysis method with sodium dodecyl sulphate (SDS) and NaOH, as previously published [23], with a few modifications. In brief, a lysis buffer was first prepared by dissolving 0.5 g of SDS and 0.4 g of NaOH in 200 µL of distilled water. Next, 4-6 colonies of the bacteria were suspended in 60 µL of lysis buffer and heated at 95 °C for 10 min. The suspension was then centrifuged at 13000 rpm for 5 min, and 180 µL of distilled water was added to the microtubes. The supernatant was frozen at −20 °C until use as the template DNA in PCR.
Two sets of multiplex-PCR were used to detect AME-encoding genes in A. baumannii isolates using the specific primers shown in Table 1. The APH(3′)-VIa (aphA6), ANT(2\")-Ia (aadB), and ArmA genes were detected in one set; AAC(6′)-Ib (aacA4) and AAC(3)-IIa (aacC2) were identified in a second set; and the ANT(3\")-Ia (aadA1) gene was detected by simplex PCR. The PCR and multiplex-PCR were performed in a final volume of 25 µL containing 12.5 µL of master mix (Ampliqon, Denmark), 10 pmol of each primer (Bioneer, South Korea), and 500 ng of template DNA; the reactions were brought to the final volume with distilled water. The genes were amplified under standard conditions using a thermocycler (Bio-Rad, USA). All reactions were run for 34 cycles, and the conditions are shown in Table 1.
 Statistical analysis The data were analyzed using SPSS (version 21). Categorical data were analyzed using Fisher’s exact test, and a P-value of less than 0.05 was considered statistically significant. In addition, an independent t-test was used to compare the mean age between groups.", "This study was performed on A. baumannii isolated from patients admitted to different educational hospitals in Sari, north of Iran, over 6 months (April 2019 to September 2019). The clinical specimens included blood, urine, respiratory secretions (bronchial lavage and tracheal secretions), CSF, and wounds (surgical and burn). The clinical isolates were identified using conventional microbiological tests [19] and confirmed by PCR amplification of the blaOXA-51 gene using specific primers [20]; the reaction conditions and primers are those given in the reconstructed Table 1 above.", "The antibiotic susceptibility pattern of the isolates was determined by the disk agar diffusion method on Mueller-Hinton agar (Merck, Germany) according to the CLSI guidelines [21]. The antibiotics included piperacillin (100 µg), piperacillin-tazobactam (100/10 µg), imipenem (10 µg), meropenem (10 µg), doripenem (10 µg), ciprofloxacin (5 µg), levofloxacin (5 µg), trimethoprim-sulfamethoxazole (1.25-23.75 µg), ceftazidime (30 µg), cefotaxime (30 µg), and cefepime (30 µg) (MAST Co., England). The susceptibility of the isolates to the aminoglycosides kanamycin, amikacin, spectinomycin, netilmicin, gentamicin, streptomycin, and tobramycin was determined using the micro-broth dilution method according to the CLSI guidelines [21]. For interpretation of the MIC values, we referred to the CLSI guidelines and previous studies [1,21,22]. Escherichia coli ATCC 25922 and A. baumannii ATCC 19606 were used as control strains.", "DNA was extracted from all A. baumannii isolates grown for 24 h using an alkaline lysis method with SDS and NaOH, as previously published [23], with a few modifications, as described above; two sets of multiplex-PCR (aphA6/aadB/ArmA, and aacA4/aacC2) plus a simplex PCR for aadA1 were performed in 25-µL reactions (12.5 µL master mix, 10 pmol of each primer, 500 ng template DNA, water to volume) for 34 cycles under the conditions shown in Table 1.", "The data were analyzed using SPSS (version 21). Categorical data were analyzed using Fisher’s exact test, and a P-value of less than 0.05 was considered statistically significant. In addition, an independent t-test was used to compare the mean age between groups.", " Patients, samples, and bacterial isolates In this study, 100 non-duplicate A. baumannii clinical isolates were collected from 100 patients admitted to the teaching hospitals of Sari, north of Iran. All isolates identified using the phenotypic method contained the blaOXA-51 gene according to the PCR results. The mean age of the patients was 42.08±25.08 years (minimum age: 6 months; maximum age: 88 years), and 50% of the patients were male. There was no significant difference between men and women in terms of mean age (p=0.64). Most of the bacterial isolates (34%) were obtained from patients admitted to the burn wards, while 29%, 21%, and 16% of the isolates were collected from the ICU, surgery, and pediatric wards, respectively. The most common specimen type (73%) was wound samples, while 15% and 12% of the isolates were obtained from urine and blood cultures, respectively.
 Antimicrobial susceptibility pattern According to the results of the disk agar diffusion method, the most and least effective antibiotics in the present study were imipenem and ciprofloxacin, with resistance rates of 75% and 100%, respectively (Table 2). Moreover, 94% of the isolates were multi-drug resistant (MDR), and most of the MDR isolates were collected from wound samples. Table 2 presents the antibiotic resistance patterns of all A. baumannii clinical isolates by hospital ward and sample type. Resistance to the tested antibiotics was not significantly correlated with the sample types or the hospital wards where the samples were collected.
TABLE 2: Antimicrobial susceptibility pattern of the Acinetobacter baumannii clinical isolates in the disk agar diffusion method. Values are No. (%) of isolates; the two P-value columns compare hospital wards and sample types, respectively.
Antibiotic (R/I/S) | Total (n=100) | Burn (n=34) | ICU (n=29) | Surgery (n=21) | Pediatric (n=16) | P | Wound (n=73) | Urine (n=15) | Blood (n=12) | P
PIP R | 86 | 28 (82.3) | 28 (96.5) | 19 (90.4) | 11 (68.7) | 0.412 | 62 (84.9) | 14 (93.3) | 10 (83.3) | 0.917
PIP I | 10 | 5 (14.7) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 8 (10.9) | 1 (6.6) | 1 (8.3) |
PIP S | 4 | 1 (2.9) | 0 | 0 | 3 (18.7) | | 3 (4.1) | 0 | 1 (8.3) |
PIP-TAZ R | 78 | 26 (76.4) | 24 (82.7) | 16 (76.1) | 12 (75) | 0.104 | 54 (73.9) | 14 (93.3) | 10 (83.3) | 0.372
PIP-TAZ I | 10 | 4 (11.7) | 2 (6.8) | 3 (14.2) | 1 (6.2) | | 8 (10.9) | 0 | 2 (16.6) |
PIP-TAZ S | 12 | 4 (11.7) | 3 (10.3) | 2 (9.5) | 3 (18.7) | | 11 (15) | 1 (6.6) | 0 |
CAZ R | 76 | 25 (73.5) | 22 (75.8) | 18 (85.7) | 11 (68.7) | 0.743 | 52 (71.2) | 15 (100) | 9 (75) | 0.559
CAZ I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CAZ S | 24 | 9 (26.4) | 1 (3.4) | 3 (14.2) | 5 (31.2) | | 21 (28.7) | 0 | 3 (25) |
CTX R | 93 | 32 (94.1) | 28 (96.5) | 19 (90.4) | 14 (87.5) | 0.762 | 67 (91.7) | 15 (100) | 11 (91.6) | 0.618
CTX I | 7 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 6 (8.2) | 0 | 1 (8.3) |
CTX S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CEF R | 92 | 32 (94.1) | 28 (96.5) | 18 (85.7) | 14 (87.5) | 0.448 | 67 (91.7) | 14 (93.3) | 11 (91.6) | 0.728
CEF I | 4 | 1 (2.9) | 1 (3.4) | 2 (9.5) | 0 | | 2 (2.7) | 1 (6.6) | 1 (8.3) |
CEF S | 4 | 1 (2.9) | 0 | 1 (4.7) | 2 (12.5) | | 4 (5.4) | 0 | 0 |
IMI R | 75 | 27 (79.4) | 25 (86.2) | 17 (80.9) | 10 (62.5) | 0.617 | 55 (75.3) | 12 (80) | 8 (66.6) | 0.873
IMI I | 11 | 4 (11.7) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 1 (6.6) | 1 (8.3) |
IMI S | 14 | 7 (20.5) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 2 (13.3) | 3 (25) |
MER R | 97 | 33 (97) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.964 | 70 (95.8) | 15 (100) | 12 (100) | 0.667
MER I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
MER S | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 3 (4.1) | 0 | 0 |
DOR R | 96 | 32 (94.1) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.797 | 69 (94.5) | 15 (100) | 12 (100) | 0.913
DOR I | 2 | 1 (2.9) | 1 (3.4) | 0 | 0 | | 2 (2.7) | 0 | 0 |
DOR S | 2 | 1 (2.9) | 0 | 1 (5) | 0 | | 2 (2.7) | 0 | 0 |
CIP R | 100 | 34 (100) | 29 (100) | 21 (100) | 16 (100) | 0.100 | 73 (100) | 15 (100) | 12 (100) | 0.100
CIP I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CIP S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
LEV R | 93 | 31 (91.1) | 27 (93.1) | 20 (95.2) | 15 (93.7) | 0.725 | 67 (91.7) | 14 (93.3) | 12 (100) | 0.842
LEV I | 3 | 2 (5.8) | 0 | 0 | 1 (6.2) | | 3 (4.1) | 0 | 0 |
LEV S | 4 | 1 (2.9) | 2 (6.8) | 1 (4.7) | 0 | | 3 (4.1) | 1 (6.6) | 0 |
SXT R | 92 | 31 (91.1) | 27 (93.1) | 18 (85.7) | 16 (100) | 0.935 | 68 (93.1) | 12 (80) | 12 (100) | 0.216
SXT I | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 1 (1.3) | 2 (13.3) | 0 |
SXT S | 5 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 0 | | 4 (5.4) | 1 (6.6) | 0 |
PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; S: susceptible.
Moreover, according to the MIC results, the resistance rate against gentamicin, kanamycin, tobramycin, and streptomycin was 94% each, while netilmicin showed the highest susceptibility rate (20%). In addition, 74%, 68%, and 78% of the isolates were resistant to amikacin, netilmicin, and spectinomycin, respectively. The MIC ranges of the aminoglycosides and their relationship with the presence of AME-encoding genes are shown in Table 3.
TABLE 3: Aminoglycoside resistance pattern of the Acinetobacter baumannii clinical isolates in this study. For each MIC range (µg/mL), the number of isolates carrying APH(3′)-VIa (aphA6) (n=77) / ANT(2\")-Ia (aadB) (n=73) / ANT(3\")-Ia (aadA1) (n=33) / AAC(6′)-Ib (aacA4) (n=33) / AAC(3)-IIa (aacC2) (n=19) / armA (n=22) is given in that order; totals (n=100) are given where printed in the source.
Gentamicin: ≤4 (S) -/-/-/-/-/- (total 6); 8 (I) 5/3/1/1/-/1; 16-32 (R) -/-/-/-/-/- (total R 94); 64-128 (R) 5/-/-/-/-/-; ≥256 (R) 67/70/32/32/19/21; MIC50 512/256/512/512/256/1024 (total 256); MIC90 1024/256/1024/256/256/1024 (total 512)
Tobramycin: ≤4 (S) 3/3/-/-/-/1 (total 4); 8 (I) 1/-/-/-/-/2 (total 2); 16-32 (R) 2/-/1/-/-/- (total R 94); 64-128 (R) -/2/1/4/2/3; ≥256 (R) 71/68/31/29/17/16; MIC50 256/512/512/256/512/512 (total 512); MIC90 1024/1024/1024/512/1024/512 (total 1024)
Amikacin: ≤16 (S) 13/10/6/6/2/3 (total 16); 32 (I) 7/10/6/6/5/1 (total 10); 64 (R) -/1/-/-/-/- (total R 74); 128 (R) -/4/-/-/-/-; ≥256 (R) 57/48/21/21/12/18; MIC50 256/1024/512/512/512/256 (total 512); MIC90 512/512/512/1024/512/256 (total 1024)
Netilmicin: ≤8 (S) 8/8/3/-/1/2 (total 20); 16 (I) 6/-/-/4/-/6 (total 12); 32 (R) 8/10/4/6/4/2 (total R 68); 64-128 (R) 5/7/1/3/2/4; ≥256 (R) 50/48/25/20/12/8; MIC50 256/128/64/256/256/128 (total 128); MIC90 256/128/64/512/512/128 (total 256)
Kanamycin: ≤16 (S) 4/2/-/-/-/- (total 4); 32 (I) 2/2/-/-/-/- (total 2); 64 (R) -/3/-/1/-/- (total R 94); 128 (R) 3/2/2/-/2/1; ≥256 (R) 68/64/31/32/17/21; MIC50 64/32/128/64/256/128 (total 128); MIC90 64/256/256/64/256/256 (total 256)
Streptomycin: ≤4 (S) 1/-/1/1/-/- (total S 6); 8 (S) 1/1/-/1/-/-; 16 (S) -/-/-/-/-/-; 32 (I) -/-/-/-/-/-; 64-128 (R) 3/3/-/3/2/- (total R 94); ≥256 (R) 72/69/32/28/17/18; MIC50 256/128/256/256/256/512 (total 256); MIC90 256/128/512/1024/128/512 (total 512)
Spectinomycin: ≤4 (S) 1/1/-/-/-/3 (total S 12); 8 (S) 7/5/2/2/-/-; 16 (S) -/-/-/-/-/-; 32 (I) 8/3/-/1/-/5 (total 10); 64-128 (R) 2/1/-/2/2/1 (total R 78); ≥256 (R) 59/63/31/28/17/13; MIC50 256/256/512/64/64/256 (total 256); MIC90 256/512/256/128/128/256 (total 512)
R: resistant; I: intermediate resistant; S: susceptible. MIC50/MIC90: minimum inhibitory concentration required to inhibit the growth of 50%/90% of organisms.
 Gene profiles of the isolates The frequency of each aminoglycoside resistance gene and its relation to the MIC ranges are shown in Table 3. In total, the proportions of aminoglycoside resistance genes among our clinical isolates of A. baumannii were as follows: APH(3′)-VIa (aphA6) (77%), ANT(2\")-Ia (aadB) (73%), ANT(3\")-Ia (aadA1) (33%), AAC(6′)-Ib (aacA4) (33%), AAC(3)-IIa (aacC2) (19%), and ArmA (22%). The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the isolates is shown in Table 4. There was a significant association between the presence of each resistance gene and non-susceptibility (resistance or intermediate resistance) to every aminoglycoside, with the single exception of armA and netilmicin (p=0.072). Notably, for some antibiotics, such as gentamicin and tobramycin, every isolate carrying aacC2, aacA4, or aadA1 was non-susceptible.
TABLE 4: The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the A. baumannii clinical isolates. Values are No. (%) of gene-positive isolates, followed by the P-value for the non-susceptible rows.
Antibiotic (pattern) | APH(3′)-VIa (aphA6) (n=77) | ANT(2\")-Ia (aadB) (n=73) | ANT(3\")-Ia (aadA1) (n=33) | AAC(6′)-Ib (aacA4) (n=33) | AAC(3)-IIa (aacC2) (n=19) | armA (n=22)
Gentamicin NS | 77 (100), P=0.012 | 73 (100), P=0.023 | 33 (100), P=0.033 | 33 (100), P=0.033 | 19 (100), P=0.007 | 22 (100), P=0.021
Gentamicin S | 0 | 0 | 0 | 0 | 0 | 0
Tobramycin NS | 74 (96.1), P=0.019 | 70 (95.8), P=0.029 | 33 (100), P=0.003 | 33 (100), P=0.003 | 19 (100), P=0.007 | 21 (95.4), P=0.024
Tobramycin S | 3 (3.8) | 3 (4.1) | 0 | 0 | 0 | 1 (4.5)
Amikacin NS | 64 (83.1), P=0.036 | 63 (86.3), P=0.037 | 27 (81.8), P=0.038 | 27 (81.8), P=0.038 | 17 (89.4), P=0.024 | 19 (86.3), P=0.033
Amikacin S | 13 (16.8) | 10 (13.6) | 6 (18.1) | 6 (18.1) | 2 (10.5) | 3 (13.6)
Netilmicin NS | 63 (81.8), P=0.042 | 65 (89.04), P=0.039 | 30 (90.9), P=0.014 | 29 (87.8), P=0.032 | 18 (94.7), P=0.018 | 14 (63.6), P=0.072
Netilmicin S | 15 (19.4) | 8 (10.9) | 3 (9.09) | 4 (12.1) | 1 (5.2) | 8 (36.3)
Kanamycin NS | 73 (94.8), P=0.029 | 71 (97.2), P=0.023 | 33 (100), P=0.003 | 33 (100), P=0.003 | 19 (100), P=0.007 | 22 (100), P=0.021
Kanamycin S | 4 (5.1) | 2 (2.7) | 0 | 0 | 0 | 0
Streptomycin NS | 75 (97.4), P=0.015 | 72 (98.6), P=0.019 | 32 (96.9), P=0.019 | 31 (93.9), P=0.021 | 19 (100), P=0.007 | 18 (81.8), P=0.044
Streptomycin S | 2 (2.5) | 1 (1.3) | 1 (3.03) | 2 (6.06) | 0 | 4 (18.1)
Spectinomycin NS | 69 (89.6), P=0.028 | 67 (91.7), P=0.037 | 31 (93.9), P=0.021 | 31 (93.9), P=0.021 | 19 (100), P=0.007 | 19 (86.3), P=0.033
Spectinomycin S | 8 (10.3) | 6 (8.2) | 2 (6.06) | 2 (6.06) | 0 | 3 (13.6)
NS: non-susceptible (resistant or intermediate resistant); S: susceptible.
In addition, we detected 22 gene profiles among the clinical isolates of A. baumannii (Table 5). The most prevalent combination profiles were: 1) APH(3')-VIa + ANT(2\")-Ia, found in 39 isolates, of which 100% were resistant to kanamycin, almost 95% to netilmicin, and 97.4% to tobramycin and gentamicin; and 2) AAC(3)-IIa + AAC(6')-Ib + ANT(3\")-Ia + APH(3')-VIa + ANT(2\")-Ia, found in 14 isolates, of which 100% were resistant to gentamicin, kanamycin, and streptomycin, and almost 93% to tobramycin and spectinomycin. Fifteen isolates carried a single AME gene, and most of these were resistant to the tested aminoglycosides. Other AME-encoding gene profiles were detected at low rates (Table 5). Overall, 15, 52, 12, 5, 14, and 2 isolates contained 1, 2, 3, 4, 5, and 6 AME genes, respectively; the most prevalent profiles thus combined 2 genes, followed by 5 and 3 genes.
TABLE 5: Gene profiles of the Acinetobacter baumannii clinical isolates and their association with aminoglycoside resistance. For each profile (number of isolates in parentheses), No. (%) of non-susceptible (NS)/susceptible (S) isolates is given for gentamicin (GEN), tobramycin (TOB), amikacin (AMK), netilmicin (NET), kanamycin (KAN), streptomycin (STR), and spectinomycin (SPC).
ArmA (9): GEN 9 (100)/-; TOB 9 (100)/-; AMK 7 (77.7)/2 (22.2); NET 3 (33.3)/6 (66.6); KAN 9 (100)/-; STR 7 (77.7)/2 (22.2); SPC 9 (100)/-
aphA6 (2): GEN 2 (100)/-; TOB 2 (100)/-; AMK -/2 (100); NET -/2 (100); KAN 1 (50)/1 (50); STR 2 (100)/-; SPC -/2 (100)
aadB (4): GEN 4 (100)/-; TOB 3 (75)/1 (25); AMK 3 (75)/1 (25); NET 2 (50)/2 (50); KAN 3 (75)/1 (25); STR 4 (100)/-; SPC 3 (75)/1 (25)
ArmA + aphA6 (3): GEN 3 (100)/-; TOB 3 (100)/-; AMK 2 (66.6)/1 (33.3); NET 2 (66.6)/1 (33.3); KAN 3 (100)/-; STR 3 (100)/-; SPC 2 (66.6)/1 (33.3)
aacA4 + aadA1 (2): GEN 2 (100)/-; TOB 2 (100)/-; AMK 2 (100)/-; NET 1 (50)/1 (50); KAN 2 (100)/-; STR 2 (100)/-; SPC 2 (100)/-
aadA1 + armA (2): GEN 2 (100)/-; TOB 2 (100)/-; AMK 1 (50)/1 (50); NET 2 (100)/-; KAN 2 (100)/-; STR 1 (50)/1 (50); SPC 2 (100)/-
aadA1 + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aadB + aadA1 (2): NS 2 (100) for all seven aminoglycosides
aphA6 + aadB (39): GEN 38 (97.4)/1 (2.56); TOB 38 (97.4)/1 (2.56); AMK 34 (87.1)/5 (12.8); NET 37 (94.8)/2 (5.1); KAN 39 (100)/-; STR 33 (84.6)/6 (15.3); SPC 32 (82)/7 (17.9)
aacA4 + aphA6 (3): NS 3 (100) for all seven aminoglycosides
aadB + aadA1 + aphA6 (2): NS 2 (100) for all seven aminoglycosides
aacA4 + aadB + aadA1 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + aadB + aphA6 (1): GEN 1 (100)/-; TOB 1 (100)/-; AMK 1 (100)/-; NET -/1 (100); KAN 1 (100)/-; STR -/1 (100); SPC 1 (100)/-
aadB + armA + aphA6 (3): GEN 3 (100)/-; TOB 3 (100)/-; AMK 2 (66.6)/1 (33.3); NET 2 (66.6)/1 (33.3); KAN 3 (100)/-; STR 3 (100)/-; SPC 2 (66.6)/1 (33.3)
aacA4 + aacC2 + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + armA + aphA6 (2): NS 2 (100) for all seven aminoglycosides
aacC2 + aacA4 + aadA1 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + aadA1 + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aacC2 + aacA4 + aadB + aphA6 (1): NS 1 (100) for all seven aminoglycosides
aacA4 + aadB + aadA1 + aphA6 (4): NS 4 (100) for all seven aminoglycosides
aacC2 + aacA4 + aadA1 + aphA6 + aadB (14): GEN 14 (100)/-; TOB 13 (92.8)/1 (7.1); AMK 12 (85.7)/2 (14.2); NET 12 (85.7)/2 (14.2); KAN 14 (100)/-; STR 14 (100)/-; SPC 13 (92.8)/1 (7.1)
aacC2 + aacA4 + aadA1 + aadB + armA + aphA6 (2): NS 2 (100) for all seven aminoglycosides
NS: non-susceptible (resistant or intermediate resistant); S: susceptible.", "In this study, 100 non-duplicate A. baumannii clinical isolates were collected from 100 patients admitted to the teaching hospitals of Sari, north of Iran. All isolates identified using the phenotypic method contained the blaOXA-51 gene according to the PCR results. The mean age of the patients was 42.08±25.08 years (minimum age: 6 months; maximum age: 88 years), and 50% of the patients were male. There was no significant difference between men and women in terms of mean age (p=0.64). Most of the bacterial isolates (34%) were obtained from patients admitted to the burn wards, while 29%, 21%, and 16% of the isolates were collected from the ICU, surgery, and pediatric wards, respectively. The most common specimen type (73%) was wound samples, while 15% and 12% of the isolates were obtained from urine and blood cultures, respectively.", 
The most common specimen type (73%) was wound samples, while 15% and 12% of the isolates were obtained from urine and blood cultures, respectively.

According to the results of the disk agar diffusion method, the most and least effective antibiotics in the present study were imipenem and ciprofloxacin, with resistance rates of 75% and 100%, respectively (Table 2). Moreover, 94% of the isolates were multi-drug resistant (MDR), and most MDR isolates were collected from wound samples. Table 2 presents the antibiotic resistance patterns of all A. baumannii clinical isolates by hospital ward and by sample type. Resistance to the tested antibiotics was not significantly correlated with the sample types or the hospital wards where the samples were collected.

TABLE 2: Antimicrobial susceptibility pattern of the Acinetobacter baumannii clinical isolates in the disk agar diffusion method. Values are No. (%) of resistant (R), intermediate resistant (I), or susceptible (S) isolates.

Antibiotic | Susceptibility | Total (n=100) | Burn (n=34) | ICU (n=29) | Surgery (n=21) | Pediatric (n=16) | P | Wound (n=73) | Urine (n=15) | Blood (n=12) | P
PIP | R | 86 | 28 (82.3) | 28 (96.5) | 19 (90.4) | 11 (68.7) | 0.412 | 62 (84.9) | 14 (93.3) | 10 (83.3) | 0.917
PIP | I | 10 | 5 (14.7) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 8 (10.9) | 1 (6.6) | 1 (8.3) |
PIP | S | 4 | 1 (2.9) | 0 | 0 | 3 (18.7) | | 3 (4.1) | 0 | 1 (8.3) |
PIP-TAZ | R | 78 | 26 (76.4) | 24 (82.7) | 16 (76.1) | 12 (75) | 0.104 | 54 (73.9) | 14 (93.3) | 10 (83.3) | 0.372
PIP-TAZ | I | 10 | 4 (11.7) | 2 (6.8) | 3 (14.2) | 1 (6.2) | | 8 (10.9) | 0 | 2 (16.6) |
PIP-TAZ | S | 12 | 4 (11.7) | 3 (10.3) | 2 (9.5) | 3 (18.7) | | 11 (15) | 1 (6.6) | 0 |
CAZ | R | 76 | 25 (73.5) | 22 (75.8) | 18 (85.7) | 11 (68.7) | 0.743 | 52 (71.2) | 15 (100) | 9 (75) | 0.559
CAZ | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CAZ | S | 24 | 9 (26.4) | 1 (3.4) | 3 (14.2) | 5 (31.2) | | 21 (28.7) | 0 | 3 (25) |
CTX | R | 93 | 32 (94.1) | 28 (96.5) | 19 (90.4) | 14 (87.5) | 0.762 | 67 (91.7) | 15 (100) | 11 (91.6) | 0.618
CTX | I | 0 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 6 (8.2) | 0 | 1 (8.3) |
CTX | S | 7 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CEF | R | 92 | 32 (94.1) | 28 (96.5) | 18 (85.7) | 14 (87.5) | 0.448 | 67 (91.7) | 14 (93.3) | 11 (91.6) | 0.728
CEF | I | 4 | 1 (2.9) | 1 (3.4) | 2 (9.5) | 0 | | 2 (2.7) | 1 (6.6) | 1 (8.3) |
CEF | S | 4 | 1 (2.9) | 0 | 1 (4.7) | 2 (12.5) | | 4 (5.4) | 0 | 0 |
IMI | R | 75 | 27 (79.4) | 25 (86.2) | 17 (80.9) | 10 (62.5) | 0.617 | 55 (75.3) | 12 (80) | 8 (66.6) | 0.873
IMI | I | 11 | 4 (11.7) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 1 (6.6) | 1 (8.3) |
IMI | S | 14 | 7 (20.5) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 2 (13.3) | 3 (25) |
MER | R | 97 | 33 (97) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.964 | 70 (95.8) | 15 (100) | 12 (100) | 0.667
MER | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
MER | S | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 3 (4.1) | 0 | 0 |
DOR | R | 96 | 32 (94.1) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.797 | 69 (94.5) | 15 (100) | 12 (100) | 0.913
DOR | I | 2 | 1 (2.9) | 1 (3.4) | 0 | 0 | | 2 (2.7) | 0 | 0 |
DOR | S | 2 | 1 (2.9) | 0 | 1 (5) | 0 | | 2 (2.7) | 0 | 0 |
CIP | R | 100 | 34 (100) | 29 (100) | 21 (100) | 16 (100) | 0.100 | 73 (100) | 15 (100) | 12 (100) | 0.100
CIP | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
CIP | S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 |
LEV | R | 93 | 31 (91.1) | 27 (93.1) | 20 (95.2) | 15 (93.7) | 0.725 | 67 (91.7) | 14 (93.3) | 12 (100) | 0.842
LEV | I | 3 | 2 (5.8) | 0 | 0 | 1 (6.2) | | 3 (4.1) | 0 | 0 |
LEV | S | 4 | 1 (2.9) | 2 (6.8) | 1 (4.7) | 0 | | 3 (4.1) | 1 (6.6) | 0 |
SXT | R | 92 | 31 (91.1) | 27 (93.1) | 18 (85.7) | 16 (100) | 0.935 | 68 (93.1) | 12 (80) | 12 (100) | 0.216
SXT | I | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 1 (1.3) | 2 (13.3) | 0 |
SXT | S | 5 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 0 | | 4 (5.4) | 1 (6.6) | 0 |

PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; S: susceptible.
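The MDR tally above can be reproduced mechanically. The sketch below is illustrative only: it assumes the commonly used definition of MDR (non-susceptibility to at least one agent in three or more antimicrobial categories) and a hypothetical per-isolate record; the text does not state which definition was applied.

```python
# Illustrative MDR flagging; assumes the common "non-susceptible to >=1 agent
# in >=3 antimicrobial categories" definition, which the text does not spell out.
CATEGORIES = {
    "PIP": "penicillins", "PIP-TAZ": "penicillin-inhibitor combinations",
    "CAZ": "cephalosporins", "CTX": "cephalosporins", "CEF": "cephalosporins",
    "IMI": "carbapenems", "MER": "carbapenems", "DOR": "carbapenems",
    "CIP": "fluoroquinolones", "LEV": "fluoroquinolones",
    "SXT": "folate-pathway inhibitors",
}

def is_mdr(results: dict) -> bool:
    """results maps antibiotic -> 'R' / 'I' / 'S' for one isolate."""
    hit = {CATEGORIES[ab] for ab, res in results.items() if res in ("R", "I")}
    return len(hit) >= 3

# Hypothetical isolate record, not taken from the study's data:
example = {"PIP": "R", "CAZ": "R", "IMI": "S", "CIP": "R", "SXT": "S"}
print(is_mdr(example))  # True: penicillins + cephalosporins + fluoroquinolones
```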
Moreover, according to the MIC results, the resistance rate against gentamicin, kanamycin, tobramycin, and streptomycin was 94%, while the highest susceptibility (20%) of the A. baumannii isolates was observed for netilmicin. In contrast, 74%, 68%, and 78% of the clinical isolates were resistant to amikacin, netilmicin, and spectinomycin, respectively. The MIC ranges of the aminoglycosides and their relationship with the presence of AME-encoding genes are shown in Table 3.

TABLE 3: Aminoglycoside resistance pattern of the A. baumannii clinical isolates. Cells give the number of isolates in each MIC range that carry each gene; the Total column gives the number of all isolates (n=100) in each susceptibility category.

Antibiotic | MIC (µg/mL) | Category | aphA6 (n=77) | aadB (n=73) | aadA1 (n=33) | aacA4 (n=33) | aacC2 (n=19) | armA (n=22) | Total
Gentamicin | ≤4 | S | - | - | - | - | - | - | 6
Gentamicin | 8 | I | 5 | 3 | 1 | 1 | - | 1 |
Gentamicin | 16-32 | R | - | - | - | - | - | - | 94
Gentamicin | 64-128 | R | 5 | - | - | - | - | - |
Gentamicin | ≥256 | R | 67 | 70 | 32 | 32 | 19 | 21 |
Gentamicin | MIC50 | | 512 | 256 | 512 | 512 | 256 | 1024 | 256
Gentamicin | MIC90 | | 1024 | 256 | 1024 | 256 | 256 | 1024 | 512
Tobramycin | ≤4 | S | 3 | 3 | - | - | - | 1 | 4
Tobramycin | 8 | I | 1 | - | - | - | - | 2 | 2
Tobramycin | 16-32 | R | 2 | - | 1 | - | - | - | 94
Tobramycin | 64-128 | R | - | 2 | 1 | 4 | 2 | 3 |
Tobramycin | ≥256 | R | 71 | 68 | 31 | 29 | 17 | 16 |
Tobramycin | MIC50 | | 256 | 512 | 512 | 256 | 512 | 512 | 512
Tobramycin | MIC90 | | 1024 | 1024 | 1024 | 512 | 1024 | 512 | 1024
Amikacin | ≤16 | S | 13 | 10 | 6 | 6 | 2 | 3 | 16
Amikacin | 32 | I | 7 | 10 | 6 | 6 | 5 | 1 | 10
Amikacin | 64 | R | - | 1 | - | - | - | - | 74
Amikacin | 128 | R | - | 4 | - | - | - | - |
Amikacin | ≥256 | R | 57 | 48 | 21 | 21 | 12 | 18 |
Amikacin | MIC50 | | 256 | 1024 | 512 | 512 | 512 | 256 | 512
Amikacin | MIC90 | | 512 | 512 | 512 | 1024 | 512 | 256 | 1024
Netilmicin | ≤8 | S | 8 | 8 | 3 | - | 1 | 2 | 20
Netilmicin | 16 | I | 6 | - | - | 4 | - | 6 | 12
Netilmicin | 32 | R | 8 | 10 | 4 | 6 | 4 | 2 | 68
Netilmicin | 64-128 | R | 5 | 7 | 1 | 3 | 2 | 4 |
Netilmicin | ≥256 | R | 50 | 48 | 25 | 20 | 12 | 8 |
Netilmicin | MIC50 | | 256 | 128 | 64 | 256 | 256 | 128 | 128
Netilmicin | MIC90 | | 256 | 128 | 64 | 512 | 512 | 128 | 256
Kanamycin | ≤16 | S | 4 | 2 | - | - | - | - | 4
Kanamycin | 32 | I | 2 | 2 | - | - | - | - | 2
Kanamycin | 64 | R | - | 3 | - | 1 | - | - | 94
Kanamycin | 128 | R | 3 | 2 | 2 | - | 2 | 1 |
Kanamycin | ≥256 | R | 68 | 64 | 31 | 32 | 17 | 21 |
Kanamycin | MIC50 | | 64 | 32 | 128 | 64 | 256 | 128 | 128
Kanamycin | MIC90 | | 64 | 256 | 256 | 64 | 256 | 256 | 256
Streptomycin | ≤4 | S | 1 | - | 1 | 1 | - | - | 6
Streptomycin | 8 | S | 1 | 1 | - | 1 | - | - |
Streptomycin | 16 | S | - | - | - | - | - | - |
Streptomycin | 32 | I | - | - | - | - | - | - |
Streptomycin | 64-128 | R | 3 | 3 | - | 3 | 2 | - | 94
Streptomycin | ≥256 | R | 72 | 69 | 32 | 28 | 17 | 18 |
Streptomycin | MIC50 | | 256 | 128 | 256 | 256 | 256 | 512 | 256
Streptomycin | MIC90 | | 256 | 128 | 512 | 1024 | 128 | 512 | 512
Spectinomycin | ≤4 | S | 1 | 1 | - | - | - | 3 | 12
Spectinomycin | 8 | S | 7 | 5 | 2 | 2 | - | - |
Spectinomycin | 16 | S | - | - | - | - | - | - |
Spectinomycin | 32 | I | 8 | 3 | - | 1 | - | 5 | 10
Spectinomycin | 64-128 | R | 2 | 1 | - | 2 | 2 | 1 | 78
Spectinomycin | ≥256 | R | 59 | 63 | 31 | 28 | 17 | 13 |
Spectinomycin | MIC50 | | 256 | 256 | 512 | 64 | 64 | 256 | 256
Spectinomycin | MIC90 | | 256 | 512 | 256 | 128 | 128 | 256 | 512

R: resistant; I: intermediate resistant; S: susceptible. Notes: MIC50 and MIC90 are the minimum inhibitory concentrations required to inhibit the growth of 50% and 90% of organisms, respectively.

The frequency of each aminoglycoside resistance gene and its relationship with the MIC ranges are shown in Table 3. In total, the proportions of aminoglycoside resistance genes among the clinical isolates of A. baumannii were as follows: APH(3′)-VIa (aphA6), 77%; ANT(2")-Ia (aadB), 73%; ANT(3")-Ia (aadA1), 33%; AAC(6′)-Ib (aacA4), 33%; AAC(3)-IIa (aacC2), 19%; and armA, 22%. The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the isolates is shown in Table 4. There was a significant association between the presence of each resistance gene and non-susceptibility (resistance or intermediate resistance) to every aminoglycoside, with the single exception of armA and netilmicin. Notably, in some groups, such as the gentamicin- and tobramycin-resistant groups, every isolate carrying an AME-encoding gene such as aacC2, aacA4, or aadA1 was non-susceptible.
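The P-values in Table 4 come from Fisher's exact test (the test named in the Statistical analysis subsection). A minimal sketch of one such 2x2 comparison is shown below; the counts for gene-negative isolates are not reported directly and are derived here from the totals above, so the printed P-value is illustrative rather than a reproduction of the published one.

```python
from scipy.stats import fisher_exact

# 2x2 table for aphA6 vs gentamicin, derived from the reported totals:
# 77 aphA6-positive isolates, all non-susceptible; 94 non-susceptible and
# 6 susceptible isolates overall, so the 23 aphA6-negative isolates split 17/6.
table = [
    [77, 0],   # aphA6-positive: non-susceptible, susceptible
    [17, 6],   # aphA6-negative: non-susceptible, susceptible
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"P = {p_value:.4g}")  # well below the 0.05 significance threshold
```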
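Separately, the MIC50 and MIC90 statistics defined in the notes to Table 3 are simple order statistics over the per-isolate MIC values. The sketch below shows the computation on a hypothetical list of MICs; the raw per-isolate values behind Table 3 are not published.

```python
import math

def mic_quantile(mics, fraction):
    """Lowest concentration that inhibits at least `fraction` of the isolates."""
    ordered = sorted(mics)
    index = math.ceil(fraction * len(ordered)) - 1
    return ordered[index]

# Hypothetical MIC values (µg/mL) for one antibiotic across ten isolates:
mics = [8, 16, 32, 64, 128, 256, 256, 512, 512, 1024]
print("MIC50 =", mic_quantile(mics, 0.50))  # 128
print("MIC90 =", mic_quantile(mics, 0.90))  # 512
```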
DISCUSSION: Overuse and misuse of antibiotics in the treatment of infections caused by A. baumannii have led to the emergence of MDR isolates in hospitals and health centers 24. The spread of AME-encoding genes among clinical isolates of A. baumannii is an important concern for the prescription of these traditional and effective antibiotics, as 94% of our isolates were resistant to kanamycin, gentamicin, streptomycin, and tobramycin. We found that netilmicin was the most effective aminoglycoside, presumably because this antibiotic is not commonly used in the treatment of bacterial infections. This finding was similar to that of another Iranian study 1; however, their isolates showed an MIC50 of ≤8 µg/mL, whereas in the present study the netilmicin MIC50 was 128 µg/mL, indicating an increased resistance rate in our region.
Moreover, the MIC ranges of other clinically important aminoglycosides such as amikacin, gentamicin, and tobramycin in the present study were considerably higher than those reported in previous studies from Iran and other countries 1,5,25, while they were almost similar to those reported by Yoo Jin Cho et al. in 2009 22.
The molecular analysis of AME-encoding genes in the present study showed a high frequency of the aphA6, aadB, aadA1, aacA4, aacC2, and armA genes, consistent with previous studies from Iran 1,5. Given that AME-encoding genes can spread on transferable genetic elements 18, this high proportion is not unexpected, and these resistance genes can possibly spread between different gram-negative bacteria such as Pseudomonas aeruginosa and the Enterobacteriaceae. Further support for this hypothesis comes from another Iranian study on clinical isolates of P. aeruginosa, in which aadB and aacA4 were the most prevalent AME genes 26. In a study performed by Lee et al. in Korea, the highest frequencies were reported for aphA6 (71%), aacC1 (56%), and aadB (48%) 27.
In agreement with the present study, a high proportion of aphA6 and aadB was reported by Akers et al. in the USA, where almost 42% of the isolates were collected from the burn ward and ICU; the resistance rates to gentamicin and amikacin in their isolates were 96.6% and 57.1%, respectively 25. aphA6 confers resistance to amikacin and kanamycin 17; interestingly, 74% and 88.3% of our aphA6-positive isolates exhibited MIC values of ≥256 µg/mL for amikacin and kanamycin, respectively. Moreover, a study carried out in Poland revealed that aphA6 was the second most prevalent AME gene (78.7%) among 61 A. baumannii isolates 28. aadB, the second most prevalent AME gene in the current study, confers resistance to gentamicin, tobramycin, and kanamycin in gram-negative bacteria 26; 95.8%, 93.1%, and 87.6% of our aadB-positive isolates showed an MIC of ≥256 µg/mL for these antibiotics, respectively.
In addition, we found that 33% of our isolates contained the aacA4 gene. Other research performed in the USA detected only one isolate carrying this gene, from blood and wound infections, and it was resistant to gentamicin, tobramycin, and amikacin 25. In our collection, 96.9% of the aacA4-positive isolates showed an MIC of ≥256 µg/mL for gentamicin and kanamycin, and 87.8% exhibited this MIC range for tobramycin, whereas in a previous Iranian study this percentage was 26.4% 5. It is noteworthy that this gene was reported as the second most prevalent AME gene carried by class 1 integrons among clinical isolates of P. aeruginosa in Iran 26. Furthermore, 83.6% of the aminoglycoside-resistant A. baumannii isolates in South Korea contained the aacA4 gene, with MIC ranges from 64 to greater than 1024 µg/mL 22. Sheikhalizadeh et al. reported the aadA1 gene in 27.6% of their isolates, almost concordant with the results of the present study 5, while another Iranian study detected this gene in 26.4%, 31%, and 54.5% of A. baumannii sequence groups (SG) 1, 2, and 3, respectively, and in none of the isolates belonging to SG4-9 1. In addition, we detected aacC2 in 19% of our isolates, while Akers et al.
reported a proportion of 3.7% for this gene 25, and another Iranian study detected it in 8.04% of isolates, all of which were non-susceptible to kanamycin 5. In our study, 89.4% of the aacC2-positive isolates showed an MIC of ≥256 µg/mL for spectinomycin, streptomycin, kanamycin, and tobramycin, and 100% of them exhibited this MIC range for gentamicin. Moreover, according to research by Hasani et al., this gene was detected in SG1-4 1, while Nowak et al. did not detect it among their isolates 28.
Additionally, the armA gene, an effective factor in the development of aminoglycoside resistance in A. baumannii, can be carried on plasmids and is frequently recognized in carbapenem-resistant isolates 18. This gene encodes a 16S rRNA methylase that limits the access of aminoglycosides to the bacterial ribosome, causing high-level aminoglycoside resistance (HLAR) to gentamicin, tobramycin, amikacin, and kanamycin 1. Strikingly, among the 22 armA-positive A. baumannii isolates in this study, 21 (95.4%), 16 (72.7%), 18 (81.8%), and 21 (95.4%) isolates showed high-level resistance (MIC ≥256 µg/mL) to gentamicin, tobramycin, amikacin, and kanamycin, respectively, with an MIC50 of ≥128 µg/mL. Considering that most isolates in the present study were MDR, that a high proportion of strains harbored AME genes, and that the simultaneous presence of carbapenem-resistance genes and AME genes in A. baumannii has been documented 18,28, this assumption may also hold for our isolates; indeed, 75-97% of our isolates were resistant to carbapenems and other β-lactams. Other studies from South Korea, Iran, and North America have likewise reported armA production by A. baumannii 1,27,29, and other researchers have revealed the role of the armA gene in high-level resistance to amikacin and gentamicin 22,30.
Beyond the individual genes, the most important problem observed in our study was the simultaneous presence of several aminoglycoside resistance genes. We detected 22 gene profiles, while Nowak et al. detected only 3 combinations of AME genes among 61 carbapenem-resistant, aminoglycoside non-susceptible A. baumannii isolates 28. Our most prevalent combinations were APH(3')-VIa + ANT(2")-Ia (39 isolates), with 95-100% resistance rates against the aminoglycosides, and AAC(3)-IIa + AAC(6')-Ib + ANT(3")-Ia + APH(3')-VIa + ANT(2")-Ia (14 isolates), of which 93-100% were resistant to the aminoglycosides. The common point between our study and that of Nowak et al. was the presence of aphA6 in most of the isolates. Akers et al. detected 16 AME gene profiles, of which 12 (75%) represented combinations of genes 25; their most prevalent combination (38/107 isolates) was APH(3')-Ia + ANT(2'')-Ia, and 35 (92.1%) of these isolates were concurrently resistant to gentamicin, tobramycin, and amikacin. In our collection, 85% of the A. baumannii isolates carried more than one AME gene, of which 52 (61.1%) carried 2 AME genes concurrently, and most of these were resistant to all tested aminoglycosides. Moreover, we found that as the number of AME genes increased, the likelihood of resistance to the aminoglycosides, especially gentamicin, tobramycin, streptomycin, and kanamycin, also increased.
Given the high proportion of strains harboring AME genes, especially aph genes, it may be advisable to use phosphotransferase and acetyltransferase inhibitors, such as the bovine antimicrobial peptide indolicidin, as previously reported 31, in combination with aminoglycosides in our region of Iran.

CONCLUSIONS: The high-level aminoglycoside MIC ranges in isolates simultaneously carrying AME- and ArmA-encoding genes indicate the importance of these genes in the resistance of A. baumannii to aminoglycosides. Selection of an appropriate antibiotic based on antimicrobial susceptibility testing, together with combination therapy, would be effective in overcoming this problem in such countries. It is therefore necessary to collect data from monitoring studies for the prevention, treatment, and control of the infections caused by this microorganism.
[ "intro", "methods", null, null, null, null, "results", null, null, null, "discussion", "conclusions" ]
[ "Acinetobacter baumannii", "Aminoglycoside-modifying enzymes", "ArmA", "Aminoglycoside resistance" ]
INTRODUCTION: Acinetobacter baumannii, which lives in soil, water, and various hospital environments, is an important opportunistic pathogen that causes nosocomial infections such as pneumonia, urinary tract infections, intravenous catheter-associated infections, and ventilator-associated infections, particularly in intensive care units 1-4. The ability of this microorganism to persist in the hospital environment and to spread among patients, along with its resistance to several antibiotics, is the main driving force behind large-scale recurrent outbreaks in different countries 5. The major antibiotics used for the treatment of infections caused by this organism are beta-lactams, aminoglycosides, fluoroquinolones, and carbapenems; however, A. baumannii has shown varying rates of resistance to these antimicrobial agents 6-8. These infections are difficult, costly, and sometimes impossible to treat owing to the high capacity of A. baumannii to acquire antibiotic resistance genes and to develop multidrug-resistant (MDR) strains 9,10. Aminoglycosides are among the main drugs used for the treatment of Acinetobacter infections 11; however, the resistance of A. baumannii to these antibiotics has recently increased. The two main mechanisms of resistance to aminoglycosides are alteration of the ribosome structure caused by mutations in the ribosomal 16S rRNA and enzymatic modification 12. Enzymatic alteration of the aminoglycoside molecule at its -OH or -NH2 groups by aminoglycoside-modifying enzymes (AMEs) is the most important resistance mechanism 12-14. AMEs are classified into three major groups: aminoglycoside phosphotransferases (APH), aminoglycoside acetyltransferases (AAC), and aminoglycoside nucleotidyltransferases (ANT), the last also known as aminoglycoside adenylyltransferases (AAD) 5,13. Aminoglycoside acetyltransferases acetylate the -NH2 groups of aminoglycosides at the 1, 3, 2', and 6' positions using acetyl coenzyme A as the donor substrate 15. Aminoglycoside phosphotransferases phosphorylate the hydroxyl groups of aminoglycosides at the 4, 6, 9, 3', 2'', 3'', and 7'' positions (seven different groups) with the help of ATP; the largest enzymatic group in this family is APH(3′)-I 16. The aphA6 gene is widespread in A. baumannii, and its enzyme causes resistance to neomycin, amikacin, kanamycin, paromomycin, ribostamycin, butirosin, and isepamicin 17. Aminoglycoside nucleotidyltransferases are classified into 5 groups, and the genes encoding these enzymes can be chromosomal or transferred by plasmids and transposons 12; they transfer an AMP group from ATP to a hydroxyl group at the 2'', 3'', 4', 6, and 9 positions of the aminoglycoside molecule 13. In addition to AMEs, 16S rRNA methylation by the ArmA enzyme is a novel mechanism that contributes to high-level aminoglycoside resistance in A. baumannii, as reported in the Far East, Europe, and North America 5. This enzyme can be transferred by class 1 integrons and is often detected in carbapenem-resistant A. baumannii isolates 18. This study aimed to investigate the role of some important aminoglycoside-modifying enzymes and the 16S rRNA methylase ArmA in the resistance of A. baumannii clinical isolates to aminoglycosides in Sari, north of Iran.

METHODS: Sample collection and bacterial isolates:
This study was performed on A. baumannii isolated from patients admitted to different educational hospitals in Sari, north of Iran, over 6 months (April 2019 to September 2019). The clinical specimens included blood, urine, respiratory secretions (bronchial lavage and tracheal secretions), CSF, and ulcers (surgical and burn wounds). The clinical isolates were identified using conventional microbiological tests 19 and confirmed by polymerase chain reaction (PCR) amplification of the blaOXA-51 gene using specific primers 20; the reaction conditions are shown in Table 1.

TABLE 1: Primers used to amplify the blaOXA-51 and aminoglycoside resistance genes, along with the PCR conditions. For all primer pairs (reference 5): initial denaturation 94 °C for 2 min; denaturation 94 °C for 25 sec; annealing as listed, for 30 sec; extension 72 °C for 30 sec; final extension 72 °C for 5 min.

Target gene | Primer sequences (5'-3', forward / reverse) | Amplicon size (bp) | Annealing temperature
OXA-51 | TAATGCTTTGATCGGCCTTG / TGGATTGCACTTCATCTTGG | 353 | 51 °C
APH(3′)-VIa (aphA6) | CGGAAACAGCGTTTTAGA / TTCCTTTTGTCAGGTC | 717 | 49 °C
AAC(3)-IIa (aacC2) | ATGCATACGCGGAAGGC / TGCTGGCACGATCGGAG | 822 | 54 °C
AAC(6′)-Ib (aacA4) | TATGAGTGGCTAAATCGAT / CCCGCTTTCTCGTAGCA | 395 | 54 °C
ANT(2")-Ia (aadB) | ATCTGCCGCTCTGGAT / CGAGCCTGTAGGACT | 405 | 49 °C
ANT(3")-Ia (aadA1) | ATGAGGGAAGCGGTGATCG / TTATTTGCCGACTACCTTGGT | 792 | 62 °C
ArmA | ATTCTGCCTATCCTAATTGG / ACCTATACTTTATCGTCGTC | 315 | 49 °C
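As a quick sanity check on primer pairs like those in Table 1, approximate melting temperatures can be estimated from base composition. The sketch below uses the simple Wallace rule (2 °C per A/T, 4 °C per G/C), which is only a coarse guide for primers of this length; it is an illustration, not part of the study's protocol.

```python
# Rough primer Tm estimate via the Wallace rule: Tm = 2*(A+T) + 4*(G+C).
# Only a coarse approximation for primers in the 15-21 nt range.
def wallace_tm(primer: str) -> int:
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

# Forward primers taken from Table 1:
for name, seq in [("OXA-51", "TAATGCTTTGATCGGCCTTG"),
                  ("armA", "ATTCTGCCTATCCTAATTGG")]:
    print(f"{name}: Tm ~ {wallace_tm(seq)} °C")
```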
Antimicrobial susceptibility testing: The antibiotic susceptibility pattern of the isolates was determined by the disk agar diffusion method on Mueller-Hinton agar (Merck, Germany) according to the Clinical and Laboratory Standards Institute (CLSI) guidelines 21. The antibiotics included piperacillin (100 µg), piperacillin-tazobactam (100/10 µg), imipenem (10 µg), meropenem (10 µg), doripenem (10 µg), ciprofloxacin (5 µg), levofloxacin (5 µg), trimethoprim-sulfamethoxazole (1.25-23.75 µg), ceftazidime (30 µg), cefotaxime (30 µg), and cefepime (30 µg) (MAST Co., England). The susceptibility of the isolates to the aminoglycosides kanamycin, amikacin, spectinomycin, netilmicin, gentamicin, streptomycin, and tobramycin was determined using the micro-broth dilution method according to the CLSI guidelines 21. For interpretation of the minimum inhibitory concentration (MIC) values, we referred to the CLSI guidelines and previous studies 1,21,22. Escherichia coli ATCC 25922 and A. baumannii ATCC 19606 were used as control strains for antibiotic susceptibility testing.
DNA extraction, PCR, and multiplex-PCR: DNA was extracted from all A. baumannii isolates grown for 24 h using an alkaline lysis method with sodium dodecyl sulfate (SDS) and NaOH, as previously published 23, with few modifications. In brief, a lysis buffer was first prepared by dissolving 0.5 g of SDS and 0.4 g of NaOH in 200 µL of distilled water. Next, 4-6 colonies of the bacteria were suspended in 60 µL of lysis buffer and heated at 95 °C for 10 min. The suspension was then centrifuged at 13,000 rpm for 5 min, and 180 µL of distilled water was added to the microtubes. The obtained supernatant was frozen at −20 °C until use as the template DNA for PCR. Two sets of multiplex-PCR were used to detect the AME-encoding genes in the A. baumannii isolates with the specific primers shown in Table 1: APH(3′)-VIa (aphA6), ANT(2")-Ia (aadB), and armA were detected in the first set; AAC(6′)-Ib (aacA4) and AAC(3)-IIa (aacC2) in the second set; and the ANT(3")-Ia (aadA1) gene was detected by simplex PCR. The PCR and multiplex-PCR were performed in a final volume of 25 µL containing 12.5 µL of master mix (Ampliqon, Denmark), 10 pmol of each primer (Bioneer, South Korea), and 500 ng of template DNA; the reactions were brought to the final volume with distilled water. The genes were amplified under standard conditions using a thermocycler (Bio-Rad, USA), and all reactions were run for 34 cycles under the conditions shown in Table 1.
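For bench work, the 25 µL recipe above is usually scaled to the number of reactions plus an overage. The helper below is a hypothetical convenience sketch, assuming 10 µM primer stocks (so 1 µL delivers 10 pmol) and a 10% pipetting overage; neither assumption comes from the text.

```python
# Scale the 25 µL PCR recipe to n reactions with a 10% overage.
# Assumes 10 µM primer stocks (1 µL = 10 pmol); template DNA is added per tube.
PER_REACTION_UL = {"master_mix": 12.5, "primer_F": 1.0, "primer_R": 1.0}
FINAL_VOLUME_UL = 25.0
TEMPLATE_UL = 2.0  # hypothetical volume carrying 500 ng of template DNA

def master_mix(n_reactions: int, overage: float = 0.10) -> dict:
    n = n_reactions * (1 + overage)
    mix = {k: round(v * n, 1) for k, v in PER_REACTION_UL.items()}
    water_per_rxn = FINAL_VOLUME_UL - sum(PER_REACTION_UL.values()) - TEMPLATE_UL
    mix["water"] = round(water_per_rxn * n, 1)
    return mix

print(master_mix(24))  # volumes (µL) to pool for a 24-reaction run
```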
Statistical analysis: The data were analyzed using SPSS (version 21). Categorical data were analyzed using Fisher's exact test, and a P-value of less than 0.05 was considered statistically significant. In addition, an independent t-test was used to compare the mean age of the subjects.
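The same comparisons can be reproduced outside SPSS. The snippet below illustrates the independent t-test on mean age using synthetic data, since the per-patient ages are not published; the printed P-value is illustrative only.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Synthetic ages for two groups of 50 patients each; the real per-patient
# ages are not published (overall mean 42.08 +/- 25.08 years, range 0.5-88).
ages_male = rng.normal(42, 25, size=50).clip(0.5, 88)
ages_female = rng.normal(42, 25, size=50).clip(0.5, 88)

t_stat, p_value = ttest_ind(ages_male, ages_female)
print(f"t = {t_stat:.2f}, P = {p_value:.2f}")
```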
The most common type of specimen (73%) for isolation of the bacteria was the wound samples; however, 15% and 12% of other clinical isolates were obtained from urine and blood cultures, respectively. In this study, 100 non-duplicated A. baumannii clinical isolates were collected from 100 patients admitted to the teaching and educational hospitals of Sari, north of Iran. All isolates identified using the phenotypic method contained the blaOXA-51 gene according to the PCR results. The mean age of the patients was 42.08±25.08 years (minimum age: 6 months; highest age: 88 years), and 50% of the patients were male. There was no significant difference between men and women in terms of mean age (p=0.64). Most of the bacterial isolates (34%) were obtained from patients admitted to the burn wards, while 29%, 21%, and 16% of the isolates were collected from the ICU, surgery, and pediatric wards, respectively. The most common type of specimen (73%) for isolation of the bacteria was the wound samples; however, 15% and 12% of other clinical isolates were obtained from urine and blood cultures, respectively. Antimicrobial susceptibility pattern According to the results of the disk agar diffusion method, the most and least effective antibiotics in the present study were imipenem and ciprofloxacin, with resistance rates of 75% and 100%, respectively (Table 2). Moreover, 94% of the isolates were detected as multi-drug resistant (MDR), and the most MDR isolates were collected from wound samples. Table 2 presents the antibiotic resistance patterns of all A. baumannii clinical isolates in this study based on hospital wards, as well as sample types. Resistance to the tested antibiotics was not significantly correlated with the sample types and hospital wards where the samples were collected. TABLE 2:Antimicrobial susceptibility pattern of the Acinetobacter baumannii clinical isolates in disk agar diffusion method.AntibioticsNo. 
(%) of resistant, intermediate resistant or susceptible isolates in terms of SusceptibilityTotalHospital wards Sample types (n=100)BurnICUSurgeryPediatric P-value WoundUrineBlood P-value (n=34)(n=29)(n=21)s (n=16) (n=73)(n=15)(n=12) PIPR8628 (82.3)28 (96.5)19 (90.4)11 (68.7)0.41262 (84.9)14 (93.3)10 (83.3)0.917 I105 (14.7)1 (3.4)2 (9.5)2 (12.5) 8 (10.9)1 (6.6)1 (8.3) S41 (2.9)003 (18.7) 3 (4.1)01 (8.3) PIP-TAZR7826 (76.4)24 (82.7)16 (76.1)12 (75)0.10454 (73.9)14 (93.3)10 (83.3)0.372 I104 (11.7)2 (6.8)3 (14.2)1 (6.2) 8 (10.9)02 (16.6) S124 (11.7)3 (10.3)2 (9.5)3 (18.7) 11 (15)1 (6.6)0 CAZR7625 (73.5)22 (75.8)18 (85.7)11 (68.7)0.74352 (71.2)15 (100)9 (75)0.559 I00000 000 S249 (26.4)1 (3.4)3 (14.2)5 (31.2) 21 (28.7)03 (25) CTXR9332 (94.1)28 (96.5)19 (90.4)14 (87.5)0.76267 (91.7)15 (100)11 (91.6)0.618 I02 (5.8)1 (3.4)2 (9.5)2 (12.5) 6 (8.2)01 (8.3) S70000 000 CEFR9232 (94.1)28 (96.5)18 (85.7)14 (87.5)0.44867 (91.7)14 (93.3)11 (91.6)0.728 I41 (2.9)1 (3.4)2 (9.5)0 2 (2.7)1 (6.6)1 (8.3) S41 (2.9)01 (4.7)2 (12.5) 4 (5.4)00 IMIR7527 (79.4)25 (86.2)17 (80.9)10 (62.5)0.61755 (75.3)12 (80)8 (66.6)0.873 I114 (11.7)2 (6.8)2 (9.5)3 (18.7) 9 (12.3)1 (6.6)1 (8.3) S147 (20.5)2 (6.8)2 (9.5)3 (18.7) 9 (12.3)2 (13.3)3 (25) MERR9733 (97)28 (96.5)20 (95.2)16 (100)0.96470 (95.8)15 (100)12 100)0.667 I00000 000 S31 (2.9)1 (3.4)1 (4.7)0 3 (4.1)00 DORR9632 (94.1)28 (96.5)20 (95.2)16 (100)0.79769 (94.5)15 (100)12 (100)0.913 I21 (2.9)1 (3.4)00 2 (2.7)00 S21 (2.9)01 (5)0 2 (2.7)00 CIPR10034 (100)29 (100)21 (100)16 (100)0.10073 (100)15 (100)12 (100)0.100 I00000 000 S00000 000 LEVR9331 (91.1)27 (93.1)20 (95.2)15 (93.7)0.72567 (91.7)14 (93.3)12 (100)0.842 I32 (5.8)001 (6.2) 3 (4.1)00 S41 (2.9)2 (6.8)1 (4.7)0 3 (4.1)1 (6.6)0 SXTR9231 (91.1)27 (93.1)18 (85.7)16 (100)0.93568 (93.1)12 (80)12 (100)0.216 I31 (2.9)1 (3.4)1 (4.7)0 1 (1.3)2 (13.3)0 S52 (5.8)1 (3.4)2 (9.5)0 4 (5.4)1 (6.6)0 PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; and S: susceptible. PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; and S: susceptible. Moreover, according to the MIC results, the resistance rate against gentamicin, kanamycin, tobramycin, and streptomycin was 94%, while the highest susceptibility (20%) of A. baumannii isolates was observed against netilmicin. In contrast, 74%, 68%, and 78% of our clinical isolates were resistant to amikacin, netilmicin, and spectinomycin, respectively. The MIC ranges of aminoglycosides and their relationship with the presence of AMEs-encoding genes are shown in Table 3. TABLE 3:Aminoglycoside resistance pattern of the Acinetobacter baumannii clinical isolates in this study. AntibioticsMIC rangesNo. 
of the isolates with (µg/mL)Susceptibility APH(3′)-VIa ANT(2")-Ia ANT(3")-Ia AAC(6′)-Ib AAC(3)-IIa armA Total pattern (aphA6) (aadB) (aadA1) (aacA4) (aacC2) (n=22) (n=77)(n=73)(n=33)(n=33)(n=19) Gentamicin ≤4S------6 8I5311-1- 16-32R--- --94 64-128R5----- ≥256R677032321921 MIC50 5122565125122561024256 MIC90 102425610242562561024512Tobramycin≤4S33---14 8I1----22 16-32R2-1---94 64-128R-21423 ≥256R716831291716 MIC50 256512512256512512512 MIC90 10241024102451210245121024Amikacin≤16S1310662316 32I710665110 64R-1----74 128R-4---- ≥256R574821211218 MIC50 2561024512512512256512 MIC90 51251251210245122561024Netilmicin≤8S883-1220 16I6--4-612 32R810464268 64-128R571324 ≥256R50482520128 MIC50 25612864256256128128 MIC90 25612864512512128256Kanamycin≤16S42----4 32I22----2 64R-3-1--94 128R322-21 ≥256R686431321721 MIC50 643212864256128128 MIC90 6425625664256256256Streptomycin≤4S1-11--6 8S11-1-- 16S---- - 32I------- 64-128R33-32-94 ≥256R726932281718 MIC50 256128256256256512256 MIC90 2561285121024128512512Spectinomycin≤4S11---312 8S7522-- 16S - --- 32I83-1-510 64-128R21-22178 ≥256R596331281713 MIC50 2562565126464256256 MIC90 256512256128128256512 R: resistant; I: intermediate resistant; S: susceptible. Notes: MIC 50 : Minimum inhibitory concentration required to inhibit the growth of 50% of organisms; MIC 90: Minimum inhibitory concentration required to inhibit the growth of 90% of organisms. R: resistant; I: intermediate resistant; S: susceptible. Notes: MIC 50 : Minimum inhibitory concentration required to inhibit the growth of 50% of organisms; MIC 90: Minimum inhibitory concentration required to inhibit the growth of 90% of organisms. According to the results of the disk agar diffusion method, the most and least effective antibiotics in the present study were imipenem and ciprofloxacin, with resistance rates of 75% and 100%, respectively (Table 2). Moreover, 94% of the isolates were detected as multi-drug resistant (MDR), and the most MDR isolates were collected from wound samples. Table 2 presents the antibiotic resistance patterns of all A. baumannii clinical isolates in this study based on hospital wards, as well as sample types. Resistance to the tested antibiotics was not significantly correlated with the sample types and hospital wards where the samples were collected. TABLE 2:Antimicrobial susceptibility pattern of the Acinetobacter baumannii clinical isolates in disk agar diffusion method.AntibioticsNo. 
(%) of resistant, intermediate resistant or susceptible isolates in terms of SusceptibilityTotalHospital wards Sample types (n=100)BurnICUSurgeryPediatric P-value WoundUrineBlood P-value (n=34)(n=29)(n=21)s (n=16) (n=73)(n=15)(n=12) PIPR8628 (82.3)28 (96.5)19 (90.4)11 (68.7)0.41262 (84.9)14 (93.3)10 (83.3)0.917 I105 (14.7)1 (3.4)2 (9.5)2 (12.5) 8 (10.9)1 (6.6)1 (8.3) S41 (2.9)003 (18.7) 3 (4.1)01 (8.3) PIP-TAZR7826 (76.4)24 (82.7)16 (76.1)12 (75)0.10454 (73.9)14 (93.3)10 (83.3)0.372 I104 (11.7)2 (6.8)3 (14.2)1 (6.2) 8 (10.9)02 (16.6) S124 (11.7)3 (10.3)2 (9.5)3 (18.7) 11 (15)1 (6.6)0 CAZR7625 (73.5)22 (75.8)18 (85.7)11 (68.7)0.74352 (71.2)15 (100)9 (75)0.559 I00000 000 S249 (26.4)1 (3.4)3 (14.2)5 (31.2) 21 (28.7)03 (25) CTXR9332 (94.1)28 (96.5)19 (90.4)14 (87.5)0.76267 (91.7)15 (100)11 (91.6)0.618 I02 (5.8)1 (3.4)2 (9.5)2 (12.5) 6 (8.2)01 (8.3) S70000 000 CEFR9232 (94.1)28 (96.5)18 (85.7)14 (87.5)0.44867 (91.7)14 (93.3)11 (91.6)0.728 I41 (2.9)1 (3.4)2 (9.5)0 2 (2.7)1 (6.6)1 (8.3) S41 (2.9)01 (4.7)2 (12.5) 4 (5.4)00 IMIR7527 (79.4)25 (86.2)17 (80.9)10 (62.5)0.61755 (75.3)12 (80)8 (66.6)0.873 I114 (11.7)2 (6.8)2 (9.5)3 (18.7) 9 (12.3)1 (6.6)1 (8.3) S147 (20.5)2 (6.8)2 (9.5)3 (18.7) 9 (12.3)2 (13.3)3 (25) MERR9733 (97)28 (96.5)20 (95.2)16 (100)0.96470 (95.8)15 (100)12 100)0.667 I00000 000 S31 (2.9)1 (3.4)1 (4.7)0 3 (4.1)00 DORR9632 (94.1)28 (96.5)20 (95.2)16 (100)0.79769 (94.5)15 (100)12 (100)0.913 I21 (2.9)1 (3.4)00 2 (2.7)00 S21 (2.9)01 (5)0 2 (2.7)00 CIPR10034 (100)29 (100)21 (100)16 (100)0.10073 (100)15 (100)12 (100)0.100 I00000 000 S00000 000 LEVR9331 (91.1)27 (93.1)20 (95.2)15 (93.7)0.72567 (91.7)14 (93.3)12 (100)0.842 I32 (5.8)001 (6.2) 3 (4.1)00 S41 (2.9)2 (6.8)1 (4.7)0 3 (4.1)1 (6.6)0 SXTR9231 (91.1)27 (93.1)18 (85.7)16 (100)0.93568 (93.1)12 (80)12 (100)0.216 I31 (2.9)1 (3.4)1 (4.7)0 1 (1.3)2 (13.3)0 S52 (5.8)1 (3.4)2 (9.5)0 4 (5.4)1 (6.6)0 PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; and S: susceptible. PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; and S: susceptible. Moreover, according to the MIC results, the resistance rate against gentamicin, kanamycin, tobramycin, and streptomycin was 94%, while the highest susceptibility (20%) of A. baumannii isolates was observed against netilmicin. In contrast, 74%, 68%, and 78% of our clinical isolates were resistant to amikacin, netilmicin, and spectinomycin, respectively. The MIC ranges of aminoglycosides and their relationship with the presence of AMEs-encoding genes are shown in Table 3. TABLE 3:Aminoglycoside resistance pattern of the Acinetobacter baumannii clinical isolates in this study. AntibioticsMIC rangesNo. 
of the isolates with (µg/mL)Susceptibility APH(3′)-VIa ANT(2")-Ia ANT(3")-Ia AAC(6′)-Ib AAC(3)-IIa armA Total pattern (aphA6) (aadB) (aadA1) (aacA4) (aacC2) (n=22) (n=77)(n=73)(n=33)(n=33)(n=19) Gentamicin ≤4S------6 8I5311-1- 16-32R--- --94 64-128R5----- ≥256R677032321921 MIC50 5122565125122561024256 MIC90 102425610242562561024512Tobramycin≤4S33---14 8I1----22 16-32R2-1---94 64-128R-21423 ≥256R716831291716 MIC50 256512512256512512512 MIC90 10241024102451210245121024Amikacin≤16S1310662316 32I710665110 64R-1----74 128R-4---- ≥256R574821211218 MIC50 2561024512512512256512 MIC90 51251251210245122561024Netilmicin≤8S883-1220 16I6--4-612 32R810464268 64-128R571324 ≥256R50482520128 MIC50 25612864256256128128 MIC90 25612864512512128256Kanamycin≤16S42----4 32I22----2 64R-3-1--94 128R322-21 ≥256R686431321721 MIC50 643212864256128128 MIC90 6425625664256256256Streptomycin≤4S1-11--6 8S11-1-- 16S---- - 32I------- 64-128R33-32-94 ≥256R726932281718 MIC50 256128256256256512256 MIC90 2561285121024128512512Spectinomycin≤4S11---312 8S7522-- 16S - --- 32I83-1-510 64-128R21-22178 ≥256R596331281713 MIC50 2562565126464256256 MIC90 256512256128128256512 R: resistant; I: intermediate resistant; S: susceptible. Notes: MIC 50 : Minimum inhibitory concentration required to inhibit the growth of 50% of organisms; MIC 90: Minimum inhibitory concentration required to inhibit the growth of 90% of organisms. R: resistant; I: intermediate resistant; S: susceptible. Notes: MIC 50 : Minimum inhibitory concentration required to inhibit the growth of 50% of organisms; MIC 90: Minimum inhibitory concentration required to inhibit the growth of 90% of organisms. Gene profiles of the isolates The frequency of each aminoglycoside resistance gene and its relation with the MIC ranges are shown in Table 3. In total, the proportions of aminoglycoside resistance genes among our clinical isolates of A. baumannii were as follows: APH(3′)-VIa (aphA6) (77%), ANT(2")-Ia (aadB) (73%), ANT(3")-Ia (aadA1) (33%), AAC(6′)-Ib (aacA4) (33%), AAC(3)-IIa (aacC2) (19%), and ArmA (22%). The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the isolates is shown in Table 4. There was a significant association between the presence of all resistance genes and the non-susceptibility (resistance or intermediate resistance) to all aminoglycosides, except armA and resistance to netilmicin. Important data from this table indicates that in some groups, such as gentamicin- and tobramycin-resistant groups, all resistant isolates contained some AMEs-encoding genes such as aacC2, aacA4, and aadA1. TABLE 4:The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the A. baumannii clinical isolates.AntibioticsSusceptibility PatternNo. 
Patients, samples, and bacterial isolates: In this study, 100 non-duplicate A. baumannii clinical isolates were collected from 100 patients admitted to the teaching hospitals of Sari, in northern Iran. All isolates identified by phenotypic methods also carried the blaOXA-51 gene according to the PCR results. The mean age of the patients was 42.08±25.08 years (minimum: 6 months; maximum: 88 years), and 50% of the patients were male, with no significant difference between men and women in mean age (p=0.64). Most of the bacterial isolates (34%) were obtained from patients admitted to the burn wards, while 29%, 21%, and 16% were collected from the ICU, surgery, and pediatric wards, respectively. The most common specimen type (73%) was wound samples; the remaining isolates were obtained from urine (15%) and blood (12%) cultures.
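The age comparison reported above (p=0.64) is a standard two-sample test of means. A minimal sketch with Welch's t-test, using simulated ages because the per-patient data are not reproduced here:

```python
# Hypothetical illustration of the male-vs-female mean-age comparison (p = 0.64).
# The age vectors are simulated; the study reports only the summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age_male = rng.normal(loc=41.5, scale=25.0, size=50)    # 50 male patients (assumed)
age_female = rng.normal(loc=42.6, scale=25.2, size=50)  # 50 female patients (assumed)

# Welch's t-test does not assume equal variances between the groups
t_stat, p_value = stats.ttest_ind(age_male, age_female, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # a large p-value -> no significant difference
```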
Antimicrobial susceptibility pattern: According to the results of the disk agar diffusion method, the most and least effective antibiotics in the present study were imipenem and ciprofloxacin, with resistance rates of 75% and 100%, respectively (Table 2). Moreover, 94% of the isolates were multi-drug resistant (MDR), and most of the MDR isolates were collected from wound samples. Table 2 presents the antibiotic resistance patterns of all A. baumannii clinical isolates in this study by hospital ward and sample type. Resistance to the tested antibiotics was not significantly correlated with the sample types or the hospital wards where the samples were collected.

TABLE 2: Antimicrobial susceptibility pattern of the Acinetobacter baumannii clinical isolates in the disk agar diffusion method. Values are No. (%) of resistant (R), intermediate resistant (I), or susceptible (S) isolates.

| Antibiotic | Pattern | Total (n=100) | Burn (n=34) | ICU (n=29) | Surgery (n=21) | Pediatric (n=16) | P (ward) | Wound (n=73) | Urine (n=15) | Blood (n=12) | P (sample) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PIP | R | 86 | 28 (82.3) | 28 (96.5) | 19 (90.4) | 11 (68.7) | 0.412 | 62 (84.9) | 14 (93.3) | 10 (83.3) | 0.917 |
| | I | 10 | 5 (14.7) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 8 (10.9) | 1 (6.6) | 1 (8.3) | |
| | S | 4 | 1 (2.9) | 0 | 0 | 3 (18.7) | | 3 (4.1) | 0 | 1 (8.3) | |
| PIP-TAZ | R | 78 | 26 (76.4) | 24 (82.7) | 16 (76.1) | 12 (75) | 0.104 | 54 (73.9) | 14 (93.3) | 10 (83.3) | 0.372 |
| | I | 10 | 4 (11.7) | 2 (6.8) | 3 (14.2) | 1 (6.2) | | 8 (10.9) | 0 | 2 (16.6) | |
| | S | 12 | 4 (11.7) | 3 (10.3) | 2 (9.5) | 3 (18.7) | | 11 (15) | 1 (6.6) | 0 | |
| CAZ | R | 76 | 25 (73.5) | 22 (75.8) | 18 (85.7) | 11 (68.7) | 0.743 | 52 (71.2) | 15 (100) | 9 (75) | 0.559 |
| | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| | S | 24 | 9 (26.4) | 1 (3.4) | 3 (14.2) | 5 (31.2) | | 21 (28.7) | 0 | 3 (25) | |
| CTX | R | 93 | 32 (94.1) | 28 (96.5) | 19 (90.4) | 14 (87.5) | 0.762 | 67 (91.7) | 15 (100) | 11 (91.6) | 0.618 |
| | I | 7 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 2 (12.5) | | 6 (8.2) | 0 | 1 (8.3) | |
| | S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| CEF | R | 92 | 32 (94.1) | 28 (96.5) | 18 (85.7) | 14 (87.5) | 0.448 | 67 (91.7) | 14 (93.3) | 11 (91.6) | 0.728 |
| | I | 4 | 1 (2.9) | 1 (3.4) | 2 (9.5) | 0 | | 2 (2.7) | 1 (6.6) | 1 (8.3) | |
| | S | 4 | 1 (2.9) | 0 | 1 (4.7) | 2 (12.5) | | 4 (5.4) | 0 | 0 | |
| IMI | R | 75 | 27 (79.4) | 25 (86.2) | 17 (80.9) | 10 (62.5) | 0.617 | 55 (75.3) | 12 (80) | 8 (66.6) | 0.873 |
| | I | 11 | 4 (11.7) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 1 (6.6) | 1 (8.3) | |
| | S | 14 | 7 (20.5) | 2 (6.8) | 2 (9.5) | 3 (18.7) | | 9 (12.3) | 2 (13.3) | 3 (25) | |
| MER | R | 97 | 33 (97) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.964 | 70 (95.8) | 15 (100) | 12 (100) | 0.667 |
| | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| | S | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 3 (4.1) | 0 | 0 | |
| DOR | R | 96 | 32 (94.1) | 28 (96.5) | 20 (95.2) | 16 (100) | 0.797 | 69 (94.5) | 15 (100) | 12 (100) | 0.913 |
| | I | 2 | 1 (2.9) | 1 (3.4) | 0 | 0 | | 2 (2.7) | 0 | 0 | |
| | S | 2 | 1 (2.9) | 0 | 1 (5) | 0 | | 2 (2.7) | 0 | 0 | |
| CIP | R | 100 | 34 (100) | 29 (100) | 21 (100) | 16 (100) | 0.100 | 73 (100) | 15 (100) | 12 (100) | 0.100 |
| | I | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| | S | 0 | 0 | 0 | 0 | 0 | | 0 | 0 | 0 | |
| LEV | R | 93 | 31 (91.1) | 27 (93.1) | 20 (95.2) | 15 (93.7) | 0.725 | 67 (91.7) | 14 (93.3) | 12 (100) | 0.842 |
| | I | 3 | 2 (5.8) | 0 | 0 | 1 (6.2) | | 3 (4.1) | 0 | 0 | |
| | S | 4 | 1 (2.9) | 2 (6.8) | 1 (4.7) | 0 | | 3 (4.1) | 1 (6.6) | 0 | |
| SXT | R | 92 | 31 (91.1) | 27 (93.1) | 18 (85.7) | 16 (100) | 0.935 | 68 (93.1) | 12 (80) | 12 (100) | 0.216 |
| | I | 3 | 1 (2.9) | 1 (3.4) | 1 (4.7) | 0 | | 1 (1.3) | 2 (13.3) | 0 | |
| | S | 5 | 2 (5.8) | 1 (3.4) | 2 (9.5) | 0 | | 4 (5.4) | 1 (6.6) | 0 | |

PIP: piperacillin; PIP-TAZ: piperacillin-tazobactam; CAZ: ceftazidime; CTX: cefotaxime; CEF: cefepime; IMI: imipenem; MER: meropenem; DOR: doripenem; CIP: ciprofloxacin; LEV: levofloxacin; SXT: trimethoprim-sulfamethoxazole; R: resistant; I: intermediate resistant; S: susceptible.

Moreover, according to the MIC results, the resistance rate against gentamicin, kanamycin, tobramycin, and streptomycin was 94%, while the highest susceptibility (20%) of the A. baumannii isolates was observed against netilmicin.
In addition, 74%, 68%, and 78% of our clinical isolates were resistant to amikacin, netilmicin, and spectinomycin, respectively. The MIC ranges of the aminoglycosides and their relationship with the presence of AME-encoding genes are shown in Table 3.

TABLE 3: Aminoglycoside resistance pattern of the Acinetobacter baumannii clinical isolates in this study. Cell values are numbers of isolates carrying each gene within each MIC range; the Total column gives the overall number of isolates with that susceptibility pattern, pooled across the relevant MIC ranges.

| Antibiotic | MIC (µg/mL) | Pattern | aphA6 (n=77) | aadB (n=73) | aadA1 (n=33) | aacA4 (n=33) | aacC2 (n=19) | armA (n=22) | Total |
|---|---|---|---|---|---|---|---|---|---|
| Gentamicin | ≤4 | S | - | - | - | - | - | - | 6 |
| | 8 | I | 5 | 3 | 1 | 1 | - | 1 | - |
| | 16-32 | R | - | - | - | - | - | - | 94 |
| | 64-128 | R | 5 | - | - | - | - | - | |
| | ≥256 | R | 67 | 70 | 32 | 32 | 19 | 21 | |
| | MIC50 | | 512 | 256 | 512 | 512 | 256 | 1024 | 256 |
| | MIC90 | | 1024 | 256 | 1024 | 256 | 256 | 1024 | 512 |
| Tobramycin | ≤4 | S | 3 | 3 | - | - | - | 1 | 4 |
| | 8 | I | 1 | - | - | - | - | 2 | 2 |
| | 16-32 | R | 2 | - | 1 | - | - | - | 94 |
| | 64-128 | R | - | 2 | 1 | 4 | 2 | 3 | |
| | ≥256 | R | 71 | 68 | 31 | 29 | 17 | 16 | |
| | MIC50 | | 256 | 512 | 512 | 256 | 512 | 512 | 512 |
| | MIC90 | | 1024 | 1024 | 1024 | 512 | 1024 | 512 | 1024 |
| Amikacin | ≤16 | S | 13 | 10 | 6 | 6 | 2 | 3 | 16 |
| | 32 | I | 7 | 10 | 6 | 6 | 5 | 1 | 10 |
| | 64 | R | - | 1 | - | - | - | - | 74 |
| | 128 | R | - | 4 | - | - | - | - | |
| | ≥256 | R | 57 | 48 | 21 | 21 | 12 | 18 | |
| | MIC50 | | 256 | 1024 | 512 | 512 | 512 | 256 | 512 |
| | MIC90 | | 512 | 512 | 512 | 1024 | 512 | 256 | 1024 |
| Netilmicin | ≤8 | S | 8 | 8 | 3 | - | 1 | 2 | 20 |
| | 16 | I | 6 | - | - | 4 | - | 6 | 12 |
| | 32 | R | 8 | 10 | 4 | 6 | 4 | 2 | 68 |
| | 64-128 | R | 5 | 7 | 1 | 3 | 2 | 4 | |
| | ≥256 | R | 50 | 48 | 25 | 20 | 12 | 8 | |
| | MIC50 | | 256 | 128 | 64 | 256 | 256 | 128 | 128 |
| | MIC90 | | 256 | 128 | 64 | 512 | 512 | 128 | 256 |
| Kanamycin | ≤16 | S | 4 | 2 | - | - | - | - | 4 |
| | 32 | I | 2 | 2 | - | - | - | - | 2 |
| | 64 | R | - | 3 | - | 1 | - | - | 94 |
| | 128 | R | 3 | 2 | 2 | - | 2 | 1 | |
| | ≥256 | R | 68 | 64 | 31 | 32 | 17 | 21 | |
| | MIC50 | | 64 | 32 | 128 | 64 | 256 | 128 | 128 |
| | MIC90 | | 64 | 256 | 256 | 64 | 256 | 256 | 256 |
| Streptomycin | ≤4 | S | 1 | - | 1 | 1 | - | - | 6 |
| | 8 | S | 1 | 1 | - | 1 | - | - | |
| | 16 | S | - | - | - | - | - | - | |
| | 32 | I | - | - | - | - | - | - | - |
| | 64-128 | R | 3 | 3 | - | 3 | 2 | - | 94 |
| | ≥256 | R | 72 | 69 | 32 | 28 | 17 | 18 | |
| | MIC50 | | 256 | 128 | 256 | 256 | 256 | 512 | 256 |
| | MIC90 | | 256 | 128 | 512 | 1024 | 128 | 512 | 512 |
| Spectinomycin | ≤4 | S | 1 | 1 | - | - | - | 3 | 12 |
| | 8 | S | 7 | 5 | 2 | 2 | - | - | |
| | 16 | S | - | - | - | - | - | - | |
| | 32 | I | 8 | 3 | - | 1 | - | 5 | 10 |
| | 64-128 | R | 2 | 1 | - | 2 | 2 | 1 | 78 |
| | ≥256 | R | 59 | 63 | 31 | 28 | 17 | 13 | |
| | MIC50 | | 256 | 256 | 512 | 64 | 64 | 256 | 256 |
| | MIC90 | | 256 | 512 | 256 | 128 | 128 | 256 | 512 |

R: resistant; I: intermediate resistant; S: susceptible; -: no isolates. MIC50/MIC90: minimum inhibitory concentration required to inhibit the growth of 50%/90% of organisms.
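The MIC50 and MIC90 values in Table 3 are the 50th and 90th percentiles of the ordered MIC distribution, i.e., the lowest concentrations inhibiting 50% and 90% of isolates. A minimal sketch using hypothetical MIC values (not the study data):

```python
# Hypothetical illustration of how MIC50/MIC90 summaries are derived.
import numpy as np

mics = np.array([8, 16, 32, 64, 64, 128, 256, 256, 512, 1024])  # µg/mL, sorted

# method="lower" keeps the result at an actually tested dilution step
mic50 = np.percentile(mics, 50, method="lower")  # lowest MIC inhibiting >= 50%
mic90 = np.percentile(mics, 90, method="lower")  # lowest MIC inhibiting >= 90%
print(f"MIC50 = {mic50} µg/mL, MIC90 = {mic90} µg/mL")  # -> 64 and 512
```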
Gene profiles of the isolates: The frequency of each aminoglycoside resistance gene and its relation to the MIC ranges are shown in Table 3. In total, the proportions of aminoglycoside resistance genes among our clinical isolates of A. baumannii were as follows: APH(3′)-VIa (aphA6) (77%), ANT(2″)-Ia (aadB) (73%), ANT(3″)-Ia (aadA1) (33%), AAC(6′)-Ib (aacA4) (33%), AAC(3)-IIa (aacC2) (19%), and armA (22%). The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the isolates is shown in Table 4. There was a significant association between the presence of each resistance gene and non-susceptibility (resistance or intermediate resistance) to all aminoglycosides, with the single exception of armA and netilmicin. Notably, for some antibiotics, such as gentamicin and tobramycin, all isolates carrying aacC2, aacA4, or aadA1 were non-susceptible.

TABLE 4: The relationship between the presence of aminoglycoside resistance genes and the aminoglycoside susceptibility pattern of the A. baumannii clinical isolates. Values are No. (%) of gene-positive isolates; genes are APH(3′)-VIa (aphA6), ANT(2″)-Ia (aadB), ANT(3″)-Ia (aadA1), AAC(6′)-Ib (aacA4), AAC(3)-IIa (aacC2), and armA.

| Antibiotic | Pattern | aphA6 (n=77) | P | aadB (n=73) | P | aadA1 (n=33) | P | aacA4 (n=33) | P | aacC2 (n=19) | P | armA (n=22) | P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Gentamicin | NS | 77 (100) | 0.012 | 73 (100) | 0.023 | 33 (100) | 0.033 | 33 (100) | 0.033 | 19 (100) | 0.007 | 22 (100) | 0.021 |
| | S | 0 | | 0 | | 0 | | 0 | | 0 | | 0 | |
| Tobramycin | NS | 74 (96.1) | 0.019 | 70 (95.8) | 0.029 | 33 (100) | 0.003 | 33 (100) | 0.003 | 19 (100) | 0.007 | 21 (95.4) | 0.024 |
| | S | 3 (3.8) | | 3 (4.1) | | 0 | | 0 | | 0 | | 1 (4.5) | |
| Amikacin | NS | 64 (83.1) | 0.036 | 63 (86.3) | 0.037 | 27 (81.8) | 0.038 | 27 (81.8) | 0.038 | 17 (89.4) | 0.024 | 19 (86.3) | 0.033 |
| | S | 13 (16.8) | | 10 (13.6) | | 6 (18.1) | | 6 (18.1) | | 2 (10.5) | | 3 (13.6) | |
| Netilmicin | NS | 63 (81.8) | 0.042 | 65 (89.04) | 0.039 | 30 (90.9) | 0.014 | 29 (87.8) | 0.032 | 18 (94.7) | 0.018 | 14 (63.6) | 0.072 |
| | S | 15 (19.4) | | 8 (10.9) | | 3 (9.09) | | 4 (12.1) | | 1 (5.2) | | 8 (36.3) | |
| Kanamycin | NS | 73 (94.8) | 0.029 | 71 (97.2) | 0.023 | 33 (100) | 0.003 | 33 (100) | 0.003 | 19 (100) | 0.007 | 22 (100) | 0.021 |
| | S | 4 (5.1) | | 2 (2.7) | | 0 | | 0 | | 0 | | 0 | |
| Streptomycin | NS | 75 (97.4) | 0.015 | 72 (98.6) | 0.019 | 32 (96.9) | 0.019 | 31 (93.9) | 0.021 | 19 (100) | 0.007 | 18 (81.8) | 0.044 |
| | S | 2 (2.5) | | 1 (1.3) | | 1 (3.03) | | 2 (6.06) | | 0 | | 4 (18.1) | |
| Spectinomycin | NS | 69 (89.6) | 0.028 | 67 (91.7) | 0.037 | 31 (93.9) | 0.021 | 31 (93.9) | 0.021 | 19 (100) | 0.007 | 19 (86.3) | 0.033 |
| | S | 8 (10.3) | | 6 (8.2) | | 2 (6.06) | | 2 (6.06) | | 0 | | 3 (13.6) | |

NS: non-susceptible (resistant or intermediate resistant); S: susceptible.

In addition, we detected 22 gene profiles among the clinical isolates of A. baumannii (Table 5). The most prevalent combinations were: 1) APH(3′)-VIa + ANT(2″)-Ia, found in 39 isolates, of which 100% were resistant to kanamycin, almost 95% to netilmicin, and 97.4% to tobramycin and gentamicin; and 2) AAC(3)-IIa + AAC(6′)-Ib + ANT(3″)-Ia + APH(3′)-VIa + ANT(2″)-Ia, found in 14 isolates, of which 100% were resistant to gentamicin, kanamycin, and streptomycin, and almost 93% to tobramycin and spectinomycin. Fifteen isolates carried a single AME gene, and most of these were resistant to the tested aminoglycosides; the other AME-encoding gene profiles were detected at low rates (Table 5). Overall, 15, 52, 12, 5, 14, and 2 isolates contained 1, 2, 3, 4, 5, and 6 AME genes, respectively; the most prevalent profiles therefore involved the simultaneous presence of 2 genes, followed by 5 and 3 AME genes.
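The P-values in Table 4 test the association between carriage of each gene and non-susceptibility. A hedged sketch of one such 2x2 test using Fisher's exact test: the gene-positive counts come from Table 4 (amikacin x aphA6), but the split among the 23 gene-negative isolates is assumed for illustration, since the table reports carriers only.

```python
# Association between aphA6 carriage and amikacin non-susceptibility.
# Row 1 is taken from Table 4; row 2 (non-carriers) is an assumed split.
from scipy.stats import fisher_exact

#              non-susceptible  susceptible
table = [[64, 13],   # aphA6 present (n=77; Table 4)
         [10, 13]]   # aphA6 absent (n=23; assumed for illustration)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.1f}, p = {p_value:.4f}")
```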
TABLE 5: Gene profiles of the Acinetobacter baumannii clinical isolates and their association with aminoglycoside resistance. Values are No. (%) of isolates.

| Gene profile (No.) | GEN NS / S | TOB NS / S | AMI NS / S | NET NS / S | KAN NS / S | STR NS / S | SPE NS / S |
|---|---|---|---|---|---|---|---|
| armA (9) | 9 (100) / - | 9 (100) / - | 7 (77.7) / 2 (22.2) | 3 (33.3) / 6 (66.6) | 9 (100) / - | 7 (77.7) / 2 (22.2) | 9 (100) / - |
| aphA6 (2) | 2 (100) / - | 2 (100) / - | - / 2 (100) | - / 2 (100) | 1 (50) / 1 (50) | 2 (100) / - | - / 2 (100) |
| aadB (4) | 4 (100) / - | 3 (75) / 1 (25) | 3 (75) / 1 (25) | 2 (50) / 2 (50) | 3 (75) / 1 (25) | 4 (100) / - | 3 (75) / 1 (25) |
| armA + aphA6 (3) | 3 (100) / - | 3 (100) / - | 2 (66.6) / 1 (33.3) | 2 (66.6) / 1 (33.3) | 3 (100) / - | 3 (100) / - | 2 (66.6) / 1 (33.3) |
| aacA4 + aadA1 (2) | 2 (100) / - | 2 (100) / - | 2 (100) / - | 1 (50) / 1 (50) | 2 (100) / - | 2 (100) / - | 2 (100) / - |
| aadA1 + armA (2) | 2 (100) / - | 2 (100) / - | 1 (50) / 1 (50) | 2 (100) / - | 2 (100) / - | 1 (50) / 1 (50) | 2 (100) / - |
| aadA1 + aphA6 (1) | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - |
| aadB + aadA1 (2) | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - |
| aphA6 + aadB (39) | 38 (97.4) / 1 (2.56) | 38 (97.4) / 1 (2.56) | 34 (87.1) / 5 (12.8) | 37 (94.8) / 2 (5.1) | 39 (100) / - | 33 (84.6) / 6 (15.3) | 32 (82) / 7 (17.9) |
| aacA4 + aphA6 (3) | 3 (100) / - | 3 (100) / - | 3 (100) / - | 3 (100) / - | 3 (100) / - | 3 (100) / - | 3 (100) / - |
| aadB + aadA1 + aphA6 (2) | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - |
| aacA4 + aadB + aadA1 (1) | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - |
| aacA4 + aadB + aphA6 (1) | 1 (100) / - | 1 (100) / - | 1 (100) / - | - / 1 (100) | 1 (100) / - | - / 1 (100) | 1 (100) / - |
| aadB + armA + aphA6 (3) | 3 (100) / - | 3 (100) / - | 2 (66.6) / 1 (33.3) | 2 (66.6) / 1 (33.3) | 3 (100) / - | 3 (100) / - | 2 (66.6) / 1 (33.3) |
| aacA4 + aacC2 + aphA6 (1) | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - |
| aacA4 + armA + aphA6 (2) | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - |
| aacC2 + aacA4 + aadA1 (1) | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - |
| aacA4 + aadA1 + aphA6 (1) | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - |
| aacC2 + aacA4 + aadB + aphA6 (1) | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - | 1 (100) / - |
| aacA4 + aadB + aadA1 + aphA6 (4) | 4 (100) / - | 4 (100) / - | 4 (100) / - | 4 (100) / - | 4 (100) / - | 4 (100) / - | 4 (100) / - |
| aacC2 + aacA4 + aadA1 + aphA6 + aadB (14) | 14 (100) / - | 13 (92.8) / 1 (7.1) | 12 (85.7) / 2 (14.2) | 12 (85.7) / 2 (14.2) | 14 (100) / - | 14 (100) / - | 13 (92.8) / 1 (7.1) |
| aacC2 + aacA4 + aadA1 + aadB + armA + aphA6 (2) | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - | 2 (100) / - |

NS: non-susceptible (resistant or intermediate resistant); S: susceptible; GEN: gentamicin; TOB: tobramycin; AMI: amikacin; NET: netilmicin; KAN: kanamycin; STR: streptomycin; SPE: spectinomycin.

DISCUSSION: Overuse and misuse of antibiotics in the treatment of infections caused by A. baumannii have led to the emergence of MDR isolates in hospitals and health centers 24. The spread of AME-encoding genes among the clinical isolates of A. baumannii is an important concern for the prescription of these traditional and effective antibiotics, as 94% of our isolates were resistant to kanamycin, gentamicin, streptomycin, and tobramycin. We found that netilmicin was the most effective aminoglycoside, presumably because this antibiotic is not commonly used in the treatment of bacterial infections. This finding is similar to that of another Iranian study 1; however, their isolates revealed an MIC50 ≤8 µg/mL, whereas in the present study it was 128 µg/mL, indicating an increased resistance rate in our region. The MIC ranges of other clinically important aminoglycosides, such as amikacin, gentamicin, and tobramycin, were also significantly higher in the present study than those reported in previous studies in Iran and other countries 1, 5, 25, while they were almost similar to those reported by Yoo Jin Cho et al.
in 2009 22. The molecular analysis of AME-encoding genes in the present study showed a high frequency of the aphA6, aadB, aadA1, aacA4, aacC2, and armA genes, consistent with previous studies from Iran 1, 5. Given that AME-encoding genes can spread on transferable genetic elements 18, this high proportion is plausible, and these resistance genes may spread between different gram-negative bacteria such as Pseudomonas aeruginosa and the Enterobacteriaceae. Further support for this hypothesis comes from another Iranian study of clinical P. aeruginosa isolates, in which aadB and aacA4 were the most prevalent AME genes 26. In a study performed by Lee et al. in Korea, the highest frequencies were reported for aphA6 (71%), aacC1 (56%), and aadB (48%) 27. A high proportion of aphA6 and aadB was also reported by Akers et al. in the USA, in agreement with the present study; almost 42% of their isolates were collected from the burn ward and ICU, and the resistance rates to gentamicin and amikacin were 96.6% and 57.1%, respectively 25. aphA6 confers resistance to amikacin and kanamycin 17; interestingly, 74% and 88.3% of our aphA6-positive isolates exhibited MIC values of ≥256 µg/mL for amikacin and kanamycin, respectively. Moreover, a study carried out in Poland revealed that aphA6 was the second most prevalent AME gene (78.7%) among 61 A. baumannii isolates 28. aadB, the second most prevalent AME gene in the current study, confers resistance to gentamicin, tobramycin, and kanamycin in gram-negative bacteria 26; 95.8%, 93.1%, and 87.6% of our aadB-positive isolates showed MICs of ≥256 µg/mL for these antibiotics, respectively. In addition, we found that 33% of our isolates contained the aacA4 gene, whereas research performed in the USA detected only one isolate carrying this gene, from blood and wound infections, that was resistant to gentamicin, tobramycin, and amikacin 25. In our study, 96.9% of aacA4-positive isolates showed MICs of ≥256 µg/mL for gentamicin and kanamycin, and 87.8% showed this MIC range for tobramycin, whereas in a previous study in Iran this percentage was 26.4% 5. It is noteworthy that this gene was reported as the second most prevalent AME gene carried by class 1 integrons among clinical isolates of P. aeruginosa in Iran 26. Moreover, 83.6% of the aminoglycoside-resistant A. baumannii isolates in South Korea contained the aacA4 gene, with MICs ranging from 64 to greater than 1024 µg/mL 22. Sheikhalizadeh et al. reported the aadA1 gene in 27.6% of isolates, almost concordant with the present study 5, while another Iranian study detected this gene in 26.4%, 31%, and 54.5% of A. baumannii sequence groups (SG) 1, 2, and 3, respectively; no isolates belonging to SG4-9 contained this resistance gene 1. In addition, we detected aacC2 in 19% of our isolates, while Akers et al. reported a proportion of 3.7% for this gene 25, and another Iranian study detected a proportion of 8.04%, with all carriers non-susceptible to kanamycin 5. Among our aacC2-positive isolates, 89.4% showed MICs of ≥256 µg/mL for spectinomycin, streptomycin, kanamycin, and tobramycin, and 100% showed this MIC range for gentamicin. According to research by Hasani et al., this gene was detected in SG1-4 1, while Nowak et al.
did not detect this gene among their isolates 28. The armA gene, an important factor in the development of aminoglycoside resistance in A. baumannii, can be carried on plasmids and is frequently found in carbapenem-resistant isolates 18. This gene encodes a 16S rRNA methylase that limits the access of aminoglycosides to the bacterial ribosome, causing high-level aminoglycoside resistance (HLAR) to gentamicin, tobramycin, amikacin, and kanamycin 1. Among the 22 armA-positive A. baumannii isolates in this study, 21 (95.4%), 16 (72.7%), 18 (81.8%), and 21 (95.4%) showed high-level resistance (MIC ≥256 µg/mL) to gentamicin, tobramycin, amikacin, and kanamycin, respectively, with an MIC50 ≥128 µg/mL. Considering that most isolates in the present study were MDR, that a high proportion of strains harbored AME genes, and that the simultaneous presence of carbapenem-resistance genes and AME genes in A. baumannii has been documented 18, 28, this co-occurrence may also hold for our isolates; indeed, 75-97% of our isolates were resistant to carbapenems and other β-lactams. Other studies from South Korea, Iran, and North America have also reported armA production by A. baumannii 1, 27, 29, and other researchers have demonstrated the role of the armA gene in high-level resistance to amikacin and gentamicin 22, 30. An important additional finding in our study was the simultaneous presence of multiple aminoglycoside resistance genes. We detected 22 gene profiles, whereas Nowak et al. detected only 3 combinations of AME genes among 61 carbapenem-resistant, aminoglycoside non-susceptible A. baumannii isolates 28. Our most prevalent combinations were APH(3')-VIa + ANT(2")-Ia (39 isolates), with 95-100% resistance rates to the tested aminoglycosides, and AAC(3)-IIa + AAC(6')-Ib + ANT(3")-Ia + APH(3')-VIa + ANT(2")-Ia (14 isolates), of which 93-100% were resistant. The common point between our study and that of Nowak et al. was the presence of aphA6 in most isolates. Akers et al. detected 16 AME gene profiles, of which 12 (75%) involved combinations of genes 25; their most prevalent combination (38/107 isolates) was APH(3')-Ia + ANT(2'')-Ia, and 35 (92.1%) of these isolates were concurrently resistant to gentamicin, tobramycin, and amikacin. In our study, 85% of the A. baumannii isolates carried more than one AME gene, of which 52 (61.1%) carried 2 AME genes concurrently, and most of these were resistant to all tested aminoglycosides. Moreover, we found that as the number of AME genes increased, the likelihood of resistance to aminoglycosides, especially gentamicin, tobramycin, streptomycin, and kanamycin, also increased. Given the high proportion of strains harboring AME genes, especially aph genes, it may be advisable to use phosphotransferase and acetyltransferase inhibitors, such as the bovine antimicrobial peptide indolicidin, as previously reported 31, in combination with aminoglycosides in our region. CONCLUSIONS: High-level aminoglycoside MIC ranges in isolates simultaneously carrying AME- and armA-encoding genes indicate the importance of these genes in the resistance of A. baumannii to aminoglycosides. However, selection of the appropriate antibiotic based on antimicrobial susceptibility testing, together with the use of combination therapy, should be effective in overcoming this problem in such countries.
Therefore, it is necessary to collect data from monitoring studies for the prevention, treatment, and control of the infections caused by this microorganism.
Background: This study aimed to determine the role of genes encoding aminoglycoside-modifying enzymes (AMEs) and 16S rRNA methylase (ArmA) in Acinetobacter baumannii clinical isolates. Methods: We collected 100 clinical isolates of A. baumannii and identified and confirmed them using microbiological tests and assessment of the OXA-51 gene. Antibiotic susceptibility testing was carried out using disk agar diffusion and micro-broth dilution methods. The presence of AME genes and ArmA was detected by PCR and multiplex PCR. Results: The most and least effective antibiotics in this study were netilmicin and ciprofloxacin with 68% and 100% resistance rates, respectively. According to the minimum inhibitory concentration test, 94% of the isolates were resistant to gentamicin, tobramycin, and streptomycin, while the highest susceptibility (20%) was observed against netilmicin. The proportion of strains harboring the aminoglycoside resistance genes was as follows: APH(3')-VIa (aphA6) (77%), ANT(2")-Ia (aadB) (73%), ANT(3")-Ia (aadA1) (33%), AAC(6')-Ib (aacA4) (33%), ArmA (22%), and AAC(3)-IIa (aacC2) (19%). Among the 22 gene profiles detected in this study, the most prevalent profiles included APH(3')-VIa + ANT(2")-Ia (39 isolates, 100% of which were kanamycin-resistant), and AAC(3)-IIa + AAC(6')-Ib + ANT(3")-Ia + APH(3')-VIa + ANT(2")-Ia (14 isolates, all of which were resistant to gentamicin, kanamycin, and streptomycin). Conclusions: High minimum inhibitory concentration of aminoglycosides in isolates with the simultaneous presence of AME- and ArmA-encoding genes indicated the importance of these genes in resistance to aminoglycosides. However, control of their spread could be effective in the treatment of infections caused by A. baumannii.
INTRODUCTION: Acinetobacter baumannii, found in soil, water, and various hospital environments, is an important opportunistic pathogen that causes nosocomial infections such as pneumonia, urinary tract infections, intravenous catheter-associated infections, and ventilation-associated infections, particularly in intensive care units 1-4. The ability of this microorganism to persist in the hospital environment and to spread among patients, along with its resistance to several antibiotics, is the main driving force behind large-scale recurrent outbreaks in different countries 5. The major antibiotics used for the treatment of infections caused by this organism are beta-lactams, aminoglycosides, fluoroquinolones, and carbapenems; however, A. baumannii has shown varying rates of resistance to these antimicrobial agents 6-8. These infections are difficult, costly, and sometimes impossible to treat owing to the high capacity of A. baumannii to acquire antibiotic resistance genes and to develop multidrug-resistant (MDR) strains 9, 10. Aminoglycosides are among the main drugs used for the treatment of Acinetobacter infections 11; however, the resistance of A. baumannii to these antibiotics has recently increased. The two main mechanisms of resistance to aminoglycosides are alteration of the ribosome structure, caused by mutations in the ribosomal 16S rRNA, and enzymatic modification 12. Enzymatic alteration of the aminoglycoside molecule at -OH or -NH2 groups by aminoglycoside-modifying enzymes (AMEs) is the most important resistance mechanism 12-14. AMEs are classified into three major groups: aminoglycoside phosphotransferases (APH), aminoglycoside acetyltransferases (AAC), and aminoglycoside nucleotidyltransferases (ANT), the last also known as aminoglycoside adenylyltransferases (AAD) 5, 13. Aminoglycoside acetyltransferases acetylate the -NH2 groups of aminoglycosides at the 1, 3, 2', and 6' positions using acetyl coenzyme A as a donor substrate 15. Aminoglycoside phosphotransferases phosphorylate the hydroxyl groups of aminoglycosides at the 4, 6, 9, 3', 2'', 3'', and 7'' positions (seven different groups) using ATP; the largest enzymatic group in this family is the APH(3′)-I group 16. The aphA6 gene is widespread in A. baumannii, and its enzyme causes resistance to neomycin, amikacin, kanamycin, paromomycin, ribostamycin, butirosin, and isepamicin 17. Aminoglycoside nucleotidyltransferases are classified into 5 groups, and the genes encoding these enzymes can be located on chromosomes or transferred by plasmids and transposons 12. These enzymes transfer an AMP group from ATP to a hydroxyl group at the 2'', 3'', 4', 6, and 9 positions of the aminoglycoside molecule 13. In addition to AMEs, 16S rRNA methylation by the ArmA enzyme is a novel mechanism that contributes to high-level aminoglycoside resistance in A. baumannii, as reported in the Far East, Europe, and North America 5. This enzyme can be transferred by class 1 integrons and is often detected in carbapenem-resistant A. baumannii isolates 18. This study aimed to investigate the role of some important aminoglycoside-modifying enzymes and the 16S rRNA methylase ArmA in the resistance of A. baumannii clinical isolates to aminoglycosides in Sari, northern Iran.
13,797
334
[ 280, 215, 314, 54, 185, 1237, 1503 ]
12
[ "100", "100 100", "100 100 100", "isolates", "resistant", "resistance", "apha6", "12", "genes", "gene" ]
[ "introduction acinetobacter baumannii", "acinetobacter baumannii clinical", "baumannii antibiotics increased", "resistant baumannii isolates", "resistance aminoglycosides baumannii" ]
[CONTENT] Acinetobacter baumannii | Aminoglycoside-modifying enzymes | ArmA | Aminoglycoside resistance [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycoside-modifying enzymes | ArmA | Aminoglycoside resistance [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycoside-modifying enzymes | ArmA | Aminoglycoside resistance [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycoside-modifying enzymes | ArmA | Aminoglycoside resistance [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycoside-modifying enzymes | ArmA | Aminoglycoside resistance [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycoside-modifying enzymes | ArmA | Aminoglycoside resistance [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycosides | Anti-Bacterial Agents | Bacterial Proteins | Drug Resistance, Bacterial | Methyltransferases | Microbial Sensitivity Tests | RNA, Ribosomal, 16S [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycosides | Anti-Bacterial Agents | Bacterial Proteins | Drug Resistance, Bacterial | Methyltransferases | Microbial Sensitivity Tests | RNA, Ribosomal, 16S [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycosides | Anti-Bacterial Agents | Bacterial Proteins | Drug Resistance, Bacterial | Methyltransferases | Microbial Sensitivity Tests | RNA, Ribosomal, 16S [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycosides | Anti-Bacterial Agents | Bacterial Proteins | Drug Resistance, Bacterial | Methyltransferases | Microbial Sensitivity Tests | RNA, Ribosomal, 16S [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycosides | Anti-Bacterial Agents | Bacterial Proteins | Drug Resistance, Bacterial | Methyltransferases | Microbial Sensitivity Tests | RNA, Ribosomal, 16S [SUMMARY]
[CONTENT] Acinetobacter baumannii | Aminoglycosides | Anti-Bacterial Agents | Bacterial Proteins | Drug Resistance, Bacterial | Methyltransferases | Microbial Sensitivity Tests | RNA, Ribosomal, 16S [SUMMARY]
[CONTENT] introduction acinetobacter baumannii | acinetobacter baumannii clinical | baumannii antibiotics increased | resistant baumannii isolates | resistance aminoglycosides baumannii [SUMMARY]
[CONTENT] introduction acinetobacter baumannii | acinetobacter baumannii clinical | baumannii antibiotics increased | resistant baumannii isolates | resistance aminoglycosides baumannii [SUMMARY]
[CONTENT] introduction acinetobacter baumannii | acinetobacter baumannii clinical | baumannii antibiotics increased | resistant baumannii isolates | resistance aminoglycosides baumannii [SUMMARY]
[CONTENT] introduction acinetobacter baumannii | acinetobacter baumannii clinical | baumannii antibiotics increased | resistant baumannii isolates | resistance aminoglycosides baumannii [SUMMARY]
[CONTENT] introduction acinetobacter baumannii | acinetobacter baumannii clinical | baumannii antibiotics increased | resistant baumannii isolates | resistance aminoglycosides baumannii [SUMMARY]
[CONTENT] introduction acinetobacter baumannii | acinetobacter baumannii clinical | baumannii antibiotics increased | resistant baumannii isolates | resistance aminoglycosides baumannii [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | resistant | resistance | apha6 | 12 | genes | gene [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | resistant | resistance | apha6 | 12 | genes | gene [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | resistant | resistance | apha6 | 12 | genes | gene [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | resistant | resistance | apha6 | 12 | genes | gene [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | resistant | resistance | apha6 | 12 | genes | gene [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | resistant | resistance | apha6 | 12 | genes | gene [SUMMARY]
[CONTENT] aminoglycoside | infections | resistance | groups | enzymes | group | baumannii | enzyme | mechanism | enzymatic [SUMMARY]
[CONTENT] min | 30 | µg | sec5 | 30 sec30 | 30 sec30 sec5 | 30 sec30 sec5 min | min25 | sec30 | sec30 sec5 [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | 100 100 100 100 | resistant | apha6 100 100 | apha6 100 | 14 | 50 | isolates [SUMMARY]
[CONTENT] overcoming problem countries necessary | problem countries necessary | combination therapy effective overcoming | combination therapy effective | combination therapy | therapy effective overcoming problem | therapy effective overcoming | therapy effective | therapy | problem countries [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | 100 100 100 100 | µg | 30 | resistance | min | resistant [SUMMARY]
[CONTENT] 100 | 100 100 | 100 100 100 | isolates | 100 100 100 100 | µg | 30 | resistance | min | resistant [SUMMARY]
[CONTENT] 16S | ArmA [SUMMARY]
[CONTENT] 100 ||| Antibiotic ||| AME | ArmA | PCR | PCR [SUMMARY]
[CONTENT] 68% | 100% ||| 94% | streptomycin | 20% ||| 77% | 73% | 33% | 33% | ArmA | 22% | 19% ||| 22 | 39 | 100% | 14 | streptomycin [SUMMARY]
[CONTENT] ArmA ||| [SUMMARY]
[CONTENT] 16S | ArmA ||| 100 ||| Antibiotic ||| AME | ArmA | PCR | PCR ||| 68% | 100% ||| 94% | streptomycin | 20% ||| 77% | 73% | 33% | 33% | ArmA | 22% | 19% ||| 22 | 39 | 100% | 14 | streptomycin ||| ArmA ||| [SUMMARY]
[CONTENT] 16S | ArmA ||| 100 ||| Antibiotic ||| AME | ArmA | PCR | PCR ||| 68% | 100% ||| 94% | streptomycin | 20% ||| 77% | 73% | 33% | 33% | ArmA | 22% | 19% ||| 22 | 39 | 100% | 14 | streptomycin ||| ArmA ||| [SUMMARY]
Risk factors for maternal mortality in a Tertiary Hospital in Kenya: a case control study.
24447854
Maternal mortality is high in Africa, especially in Kenya where there is evidence of insufficient progress towards Millennium Development Goal (MDG) Five, which is to reduce the global maternal mortality rate by three quarters and provide universal access to reproductive health by 2015. This study aims to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya.
BACKGROUND
A manual review of records for 150 maternal deaths (cases) and 300 controls was undertaken using a standard audit form. The sample included pregnant women aged 15-49 years admitted to the Obstetric and Gynaecological wards at the Moi Teaching and Referral Hospital (MTRH) in Kenya from January 2004 and March 2011. Logistic regression analysis was used to assess risk factors for maternal mortality.
METHODS
Factors significantly associated with maternal mortality included: having no education relative to secondary education (OR 3.3, 95% CI 1.1-10.4, p = 0.0284), history of underlying medical conditions (OR 3.9, 95% CI 1.7-9.2, p = 0.0016), doctor attendance at birth (OR 4.6, 95% CI 2.1-10.1, p = 0.0001), having no antenatal visits (OR 4.1, 95% CI 1.6-10.4, p = 0.0007), being admitted with eclampsia (OR 10.9, 95% CI 3.7-31.9, p < 0.0001), being admitted with comorbidities (OR 9.0, 95% CI 4.2-19.3, p < 0.0001), having an elevated pulse on admission (OR 10.7, 95% CI 2.7-43.4, p = 0.0002), and being referred to MTRH (OR 2.1, 95% CI 1.0-4.3, p = 0.0459).
RESULTS
Antenatal care and maternal education are important risk factors for maternal mortality, even after adjusting for comorbidities and complications. Antenatal visits can provide opportunities for detecting risk factors for eclampsia, and other underlying illnesses but the visits need to be frequent and timely. Education enables access to information and helps empower women and their spouses to make appropriate decisions during pregnancy.
CONCLUSIONS
[ "Adolescent", "Adult", "Alcohol Drinking", "Case-Control Studies", "Comorbidity", "Delivery, Obstetric", "Eclampsia", "Educational Status", "Female", "Humans", "Kenya", "Maternal Mortality", "Medical Audit", "Middle Aged", "Parity", "Pregnancy", "Prenatal Care", "Referral and Consultation", "Retrospective Studies", "Risk Factors", "Tachycardia", "Tertiary Care Centers", "Young Adult" ]
3904405
Background
The maternal mortality ratio (MMR) is defined as "the ratio of the number of maternal deaths during a given period per 100,000 live births during the same time-period" (restated as a formula at the end of this section). The global MMR is 210 per 100,000 live births [1]. Despite worldwide declines since 1990, the MMR is 15 times higher in developing than developed regions [1]. Sub-Saharan Africa has the highest MMR, at 500 per 100,000 live births, whereas in developed regions the MMR is 16 per 100,000 live births [1]. The target for Millennium Development Goal (MDG) Five is to reduce the global MMR by three quarters and to achieve universal access to reproductive health by 2015 [2]. In Kenya, the MMR has remained at 400-600 per 100,000 live births over the past decade, resulting in little or no progress being made towards achieving MDG Five [1,3].

The main direct causes of maternal death in developing countries include haemorrhage, sepsis, obstructed labour and hypertensive disorders [4]. The risk of death from haemorrhage is one in 1,000 deliveries in developing countries, compared with one in 100,000 in developed countries, and haemorrhage accounts for one third of maternal deaths in Africa [5]. A study in Canada found an increased risk of eclampsia among women with existing heart disease and anaemia [6]. A retrospective study undertaken at a tertiary hospital in Nigeria in 2007 found that the most common risk factors for maternal mortality were primiparity, haemorrhage, anaemia, eclampsia and malaria [7]. Risk factors for complications arising from infections include birthing under unhygienic conditions, poor nutrition, anaemia, caesarean section, membrane rupture, prolonged labour, retained products and haemorrhage [8].

In developing countries, indirect causes of maternal death include both pre-existing diseases and diseases that develop during pregnancy. These include HIV, malaria, tuberculosis, diabetes, and cardiovascular disease, all of which have an enormous impact on maternal and fetal outcomes during pregnancy [4].

Many individual and socioeconomic factors have been associated with high maternal mortality, including lack of education, parity, previous obstetric history, employment, socioeconomic status, and care-seeking behaviours during pregnancy. There is also evidence of increased risk of death among women younger than 24 and older than 35 years [9]. A study in Tanzania found that a low level of spouse education was a risk factor for maternal mortality [10]. Lack of knowledge regarding the need for skilled attendants is a barrier to women seeking care, especially during birth emergencies: a survey conducted in Kenya in 2006 showed that 15% of pregnant women were not informed of the importance of hospital deliveries [11]. In Nigeria, a cross-sectional survey revealed that the most common risk factors for maternal death were primigravidity (19%) and unbooked status (19%) [12]. Poverty has also been associated with adverse maternal outcomes, not directly, but as a contributor to women's ability to access and utilise care when complications occur [13,14]. There is also evidence that contraceptive use achieves primary prevention of about 44% of maternal deaths in developing countries [15].

Antenatal care (ANC) is very important during pregnancy. International organizations recommend a minimum of four visits, the administration of two doses of tetanus toxoid, and folic acid supplementation during ANC attendance [16].
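For reference, the MMR definition quoted at the start of this Background can be written compactly as a formula (a restatement, not new material):

```latex
% Maternal mortality ratio, as defined in the opening paragraph
\[
\mathrm{MMR} \;=\; \frac{\text{maternal deaths during a given period}}
                        {\text{live births during the same period}} \times 100{,}000
\]
```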
When women receive good care during the pre-partum period, they have been shown to be at lower risk of maternal morbidity and mortality, since they have a higher likelihood of using a professional health facility during birth [10,17]. The Kenya Demographic and Health Survey (2008-2009) reported that 92% of women received ANC from a skilled provider (doctor, nurse, or midwife), especially those who were more educated and resided in urban areas [3]. The report further showed that 83% of women who visited public hospitals were required to pay for antenatal services, which may explain why only 47% of antenatal women attended the recommended four visits [3]. Women had also been required to pay for delivery services until June 2013, when the Kenyan government rolled out a program under which pregnant women receive free maternity services in public hospitals.

Health systems functioning with adequate equipment, resources and trained personnel to handle maternal complications can reduce the risk of mortality. In Africa, maternal deaths are associated with delayed referrals of women from lower-level facilities and with referral systems that are not well equipped to handle emergency obstetric care [18]. The presence of skilled attendants during birth is also important in managing life-threatening complications; in Kenya, the use of skilled attendants at delivery is currently 50% [19].

The Delay Model by McCarthy and Maine is a conceptual framework that has been used to assess factors contributing to maternal mortality in developing countries [20]. This framework attributes mortality to determinants that contribute to the delay in deciding to seek care, the delay in reaching a health facility, and the delay in receiving quality care upon reaching a health facility. In Kenya there has been insufficient progress towards achieving MDG Five. The aim of this study is to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya. Using a framework adapted from the Delay Model, this study analyses four sets of determinants: individual and socio-demographic, maternal history, reproductive or obstetric, and hospital admission/health system.
Methods
An unmatched case-control study of women who delivered between January 2004 and March 2011 was conducted at Moi Teaching and Referral Hospital (MTRH), located in the Western region of the Rift Valley Province, Kenya [21]. As the second largest national hospital in Kenya, with over 800 beds, MTRH provides a range of curative, preventive and rehabilitative health services to a population of about 400,000 inhabitants and an indigent referral population of 16 million from Northern and Western Kenya [21]. The Mother and Baby Unit at MTRH has an antenatal ward, a postnatal ward, a labour ward, a Newborn Unit (NBU) and two theatres dedicated to obstetrics. The bed capacity is approximately 20 for the antenatal and labour wards and 50 for the postnatal wards [21].

Cases (n = 150) were maternal deaths identified from a manual review of hospital records. Two controls (n = 300) were selected per case; controls were surviving women who were admitted immediately preceding and following cases. Cases were selected retrospectively and sequentially from the most recent delivery until the required sample size was achieved. Trained staff collected information using a standard audit form. Abortion-related deaths were excluded from the study. The outcome was maternal hospital death, a clearly defined adverse event certified by medical personnel.

The data collection form included: mother's age, mother's marital status, mother's education, spouse's education, mother's occupation, spouse's occupation, and the source of funding for the delivery. Information relating to the mother's medical history included: smoking, alcohol use, contraceptive use, previous abortion, previous twins, gravida, and pre-existing medical conditions. Obstetric or reproductive factors were pregnancy stage, labour stage, number of ANC visits, and place of ANC care. Health system factors included mode of delivery, qualification of birth attendant, and referral from another facility (yes/no). Information on the mother's admission comprised: clinical cause of death or diagnosis on admission (e.g. eclampsia, dystocia, haemorrhage, or comorbid causes), diastolic blood pressure (mm Hg), systolic blood pressure (mm Hg), haemoglobin level (g/dL), pulse rate (beats per minute), and temperature (°C). The primary obstetric cause of death was that documented in the patient hospital and post mortem records.

Statistical analyses: Analyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values were ≥ 0.1 on the Likelihood Ratio Test. The variables in each of the final models were then included in a combined model and removed where p-values were ≥ 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models.
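A minimal sketch (not the authors' code) of the screening-then-backward-elimination strategy described above, using Python's statsmodels; the DataFrame `df`, the outcome column, and the candidate variable names are hypothetical, and the columns are assumed to be numeric/binary coded:

```python
# Backward stepwise logistic regression, dropping variables whose likelihood
# ratio test (LRT) p-value is >= 0.1, mirroring the strategy described above.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def lrt_pvalue(full, reduced):
    """Likelihood ratio test p-value for two nested logistic models."""
    stat = 2 * (full.llf - reduced.llf)
    return chi2.sf(stat, df=full.df_model - reduced.df_model)

def backward_select(df, outcome, candidates, threshold=0.1):
    kept = list(candidates)
    while kept:
        full = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
        pvals = {}
        for var in kept:  # LRT p-value for dropping each variable in turn
            rest = [v for v in kept if v != var]
            X = (sm.add_constant(df[rest]) if rest
                 else pd.DataFrame({"const": 1.0}, index=df.index))
            pvals[var] = lrt_pvalue(full, sm.Logit(df[outcome], X).fit(disp=0))
        worst, worst_p = max(pvals.items(), key=lambda kv: kv[1])
        if worst_p < threshold:   # everything remaining is significant enough
            break
        kept.remove(worst)        # drop the weakest variable and refit
    return kept

# Usage sketch: variables screened at p < 0.2 in univariable models would be
# passed as `candidates`, e.g. backward_select(df, "died", screened_vars)
```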
The reference group was the category with the lowest expected risk of death or, if there were few cases in this category, the group with the majority of respondents. Assuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8. Ethical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya.
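As a hedged numerical check of the sample-size statement above, a two-proportion power calculation with statsmodels gives figures of the same order; the authors' exact method is not stated, so this is an approximation:

```python
# Sample-size check: 40% exposure in controls, 1:2 case:control ratio,
# 80% power, 5% two-sided alpha, target OR ~ 1.8 (assumptions from the text).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_controls = 0.40
target_or = 1.8
odds_cases = p_controls / (1 - p_controls) * target_or
p_cases = odds_cases / (1 + odds_cases)          # ~0.545 exposure among cases

h = proportion_effectsize(p_cases, p_controls)   # Cohen's h effect size
n_cases = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80,
                                       ratio=2, alternative="two-sided")
print(round(n_cases), "cases,", 2 * round(n_cases), "controls")
# -> roughly 137 cases and 274 controls, of the same order as the 150/300 used.
```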
Results
Table 1 shows the demographic factors associated with maternal mortality. In this model, mother's age and mother's education were significantly associated with mortality. Relative to controls, cases had three times the odds of being aged 35-45 years rather than 15-24 years (OR 3.1, 95% CI 1.5-6.2; p < 0.0001), and eight times the odds of having no education versus secondary education (OR 8.0, 95% CI 4.0-16.3; p < 0.0001).

[Table 1: Individual and socio-demographic risk factors for maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011. *Spouse's education was not included in the multiple regression model due to a high proportion of missing data and correlation with mother's education. aAdjusted for variables included in the final demographic model. †P-value for Likelihood Ratio Test in the adjusted model. Reference category for logistic regression represented by 1. Numbers may not add to the total sample due to missing values.]

Table 2 shows the association of maternal history of prevailing conditions, and of obstetric and reproductive factors, with maternal mortality. After adjusting for all other factors in the model, cases had higher odds than controls of a history of maternal alcohol use (OR 2.5, 95% CI 1.2-5.3; p = 0.018), more than five previous pregnancies (OR 2.6, 95% CI 1.4-4.8; p = 0.0049), and a history of pre-existing illnesses (OR 3.0, 95% CI 1.7-5.3; p < 0.0001). Contraceptive use was protective (OR 0.3, 95% CI 0.1-0.6; p = 0.0007). Table 2 also shows obstetric and reproductive characteristics associated with maternal mortality. Compared with controls, cases had higher odds of assisted or caesarean deliveries (OR 3.0, 95% CI 1.5-5.6; p < 0.0006), and of having a doctor, rather than a nurse or midwife, attend the birth (OR 4.1, 95% CI 2.2-7.6; p < 0.0001). Relative to controls, cases had almost nine times the odds of arriving at hospital at the puerperium stage (OR 8.9, 95% CI 3.5-22.7; p < 0.0001), and almost six times the odds of lacking antenatal care (OR 5.7, 95% CI 2.6-12.4; p < 0.0001).

[Table 2: Mother's history of prevailing conditions and obstetric characteristics associated with maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011. *These include: HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes. †P-value for Likelihood Ratio Test in the adjusted model. aAdjusted for variables included in the final demographic model. Reference category for logistic regression represented by 1. ††ANC place was not included because of high correlation with the number of ANC visits. Numbers may not add to the total sample due to missing values.]

Table 3 shows maternal admission factors associated with mortality. Admission with comorbid complications (OR 6.7, 95% CI 3.8-11.8; p < 0.0001), eclampsia (OR 4.7, 95% CI 1.6-13.7; p = 0.0038), non-normal blood pressure (OR 7.5, 95% CI 1.5-37.7; p = 0.0039), tachycardia (OR 16.5, 95% CI 4.8-57.3; p < 0.0001), and referral to MTRH (OR 3.3, 95% CI 1.9-5.7; p < 0.0001) were all statistically significant risk factors for maternal mortality.

[Table 3: Maternal admission factors associated with maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011. **Diastolic blood pressure was used as a proxy for systolic because of high correlation. †Temperature and haemoglobin were omitted from the adjusted model because of too many missing values. ††P-value for Likelihood Ratio Test in the adjusted model.
aAdjusted for variables included in the final demographic model. Reference category for logistic regression represented by 1. Numbers may not add to the total sample due to missing values.]

Table 4 shows the multivariable analysis combining all factors from the previous models. Statistically significant risk factors for maternal mortality included: no education, relative to secondary education (OR 3.3, 95% CI 1.1-10.4; p = 0.0284); a history of pre-existing medical conditions (OR 3.9, 95% CI 1.7-9.2; p = 0.0016); doctor attendance at birth (OR 4.6, 95% CI 2.1-10.1; p = 0.0001); having no antenatal visits (OR 4.1, 95% CI 1.6-10.4; p = 0.0007); being admitted with eclampsia (OR 10.9, 95% CI 3.7-31.9; p < 0.0001); having comorbid complications on admission (OR 9.0, 95% CI 4.2-19.3; p < 0.0001); having an elevated pulse (OR 10.7, 95% CI 2.7-43.4; p = 0.0002); and being referred to MTRH (OR 2.1, 95% CI 1.0-4.3; p = 0.0459).

[Table 4: Multivariable model showing risk factors for maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011. The final model included all variables with a p-value of 0.1 or less on the likelihood ratio test. Reference category for logistic regression represented by 1. †These include: HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes. Numbers may not add to the total sample due to missing values; the final model included 367 observations.]
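Each OR and 95% CI of the kind reported above can be derived from a 2x2 exposure table with the Woolf (log) method. A worked sketch with hypothetical counts, since the cell counts behind the reported ORs are not reproduced in this text:

```python
# Worked example: odds ratio and Woolf 95% CI from a 2x2 case-control table.
# Counts are hypothetical (150 cases, 300 controls, as in the study design).
import math

a, b = 30, 120   # cases: exposed vs unexposed (assumed counts)
c, d = 24, 276   # controls: exposed vs unexposed (assumed counts)

or_ = (a * d) / (b * c)                 # cross-product odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR), Woolf method
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # -> OR = 2.88 (1.61-5.12)
```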
Discussion
In the multivariable analysis of each of the four groups of risk factors (socio-demographic, maternal history, reproductive/obstetric, and admission factors), the variables significantly associated with maternal mortality included: age, education, alcohol use, contraceptive use, gravida, pre-existing medical conditions, mode of delivery, type of birth attendant, pregnancy stage, number of ANC visits, having comorbid complications on admission, eclampsia, diastolic blood pressure, elevated pulse, and referral status. However, in the final model combining only significant factors from the four separate sets of analyses into a parsimonious model, only education, underlying medical conditions, birth attendant, number of ANC visits, comorbid complications, eclampsia, elevated pulse on admission, and referral status remained significant risk factors for maternal mortality.
Cases had three times the odds of having no education versus secondary education compared with controls. This is in agreement with another study that also reported a higher risk of mortality among illiterate women [22]. This finding is important since it emphasizes the role of education for both the mother and her spouse in obtaining and understanding the benefits of good health and being able to make appropriate decisions during pregnancy. It is important to note that despite women’s weaker role in decision-making in African settings, education has a strong influence on mortality. In this study, we used mother’s education as a proxy for the spouse’s education; although there was considerable missing data for spouse’s education, the two education variables were correlated.
Having no antenatal care during pregnancy was associated with mortality in this study, a finding which corresponds with those of other studies [22,23]. Antenatal care is important in screening for pre-existing illnesses and complications in the early stages of pregnancy that could impact adversely on pregnancy and childbirth [24]. Since ANC coverage is high in Kenya, there is a need to scale up interventions that empower women to make at least the four visits during pregnancy recommended by international organizations [16,19].
The findings here were that comorbid conditions including HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes contributed to maternal deaths. This is contrary to other research showing that direct pregnancy complications are the leading causes of maternal deaths [4]. However, other research shows that the significant increase in MMRs in Sub-Saharan Africa is predominantly due to increasing HIV prevalence in that region [25]. The finding that the odds of comorbid conditions were higher in cases than controls also demonstrates the importance of ANC for screening, detection and management of underlying illnesses that could potentially pose a threat to the mother during pregnancy and childbirth.
Contraceptive use was protective for maternal mortality, which coincides with another study’s finding that maternal mortality would be 77% higher globally in the absence of family planning programs and contraceptive use [15]. Contraceptives help tackle maternal mortality by reducing exposure to pregnancy, lowering the hazards of high-parity pregnancies, reducing vulnerability to abortion risks, and postponing pregnancies, especially in countries with high fertility rates [15].
In this study, hypertensive disorders during pregnancy were more common among cases than controls. Our study demonstrated increased odds of eclampsia in cases, which is in agreement with another study that found that delays in the diagnosis, triage, transport and treatment of eclampsia increase the risk of maternal death [26]. There is evidence that screening for hypertensive conditions during the antenatal period plays a significant role in reducing the risk of death to the mother [13]. This study also found higher odds of an elevated pulse amongst cases, which could reflect the increased risk of death due to eclampsia. After adjusting for other factors, haemorrhage was not significantly associated with mortality, possibly as a result of hospital protocols for the management of haemorrhage.
This study found health care system related factors that identified cases as being at risk, including doctor attendance at birth and referral status. Cases had higher odds than controls of a doctor attending their delivery, potentially because they were diagnosed with the most difficult complications. This has been previously reported, especially in low resource settings where uptake of professional birth attendants is low, so women only seek help when the condition is critical or too late for a doctor to save their lives [4]. Cases had twice the odds of referral relative to controls, potentially because over half of the cases had been referred following complications of birth.
This study provides information that is important for the identification of risk factors that contribute to maternal mortality in the second largest referral hospital in Kenya. It also provides information that will aid in identifying areas for improving health facilities locally and nationally in terms of referrals, antenatal care, and the availability of skilled birth attendants who are able to manage pregnancy related complications. This study is timely given the roll out of the free maternity program in Kenya since June 2013. Importantly, these findings will inform policy makers about ways of strengthening the health system and promoting more hospital births.
This study has some limitations. Firstly, it only includes deaths that occurred during the hospital admission, and therefore the risk factors identified here are specifically associated with in-hospital mortality. Pregnancy related mortality that occurs outside hospital may have other risk factors that were not identified here. Secondly, bias may have resulted from misclassification of cause-of-death data and missing information in some fields.
Conclusions
This study highlights risk factors for mortality at a tertiary hospital in Kenya, showing the importance of antenatal care and maternal education in preventing maternal mortality. The findings are timely given Kenya’s limited progress towards achieving MDG Five by 2015. Antenatal visits provide opportunities for the detection of risk factors for eclampsia and other underlying illnesses that may put a mother at risk during birth. There is a need to focus on integrated care throughout pregnancy by improving women’s knowledge, empowering them to take an active role in their own health, and ensuring access to skilled care at birth and during pregnancy.
[ "Background", "Statistical analyses", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "The maternal mortality ratio (MMR) is defined as “the ratio of the number of maternal deaths during a given period per 100,000 live births during the same time-period”. The global MMR is 210 per 100,000 live births [1]. Despite worldwide declines since 1990, the MMR is 15 times higher in developing than developed regions [1]. Sub-Saharan Africa has the highest MMR at 500 per 100,000 live births. In developed regions the MMR is 16 per 100,000 live births [1]. The target for Millennium Development Goal (MDG) Five is to reduce the global MMR by three quarters and to achieve universal access to reproductive health by 2015 [2]. In Kenya, the MMR has remained at 400-600 per 100,000 live births over the past decade - resulting in little or no progress being made towards achieving MDG Five [1,3].\nThe main direct causes of maternal death in developing countries include haemorrhage, sepsis, obstructed labour and hypertensive disorders [4]. The risk of death from haemorrhage is one in 1,000 deliveries in developing countries, compared with one in 100,000 in developed countries, and accounts for one third of the maternal deaths in Africa [5]. A study in Canada found increased risk of eclampsia among women with existing heart disease and anaemia [6]. A retrospective study undertaken at a tertiary hospital in Nigeria in 2007 found that the most common risk factors for maternal mortality were primaparity, haemorrhage, anaemia, eclampsia and malaria [7]. Risk factors for complications arising from infections include birthing under unhygienic conditions, poor nutrition, anaemia, caesarean section, membrane rupture, prolonged labour, retained products and haemorrhage [8].\nIn developing countries, indirect causes of maternal death include both previously existing diseases and diseases that develop during pregnancy. These include HIV, malaria, tuberculosis, diabetes, and cardiovascular disease, all of which and have an enormous impact on maternal and fetal outcomes during pregnancy [4].\nMany individual and socioeconomic factors have been associated with high maternal mortality. These include lack of education, parity, previous obstetric history, employment, socioeconomic status, and types of care seeking behaviours during pregnancy. There is also evidence of increased risk of death among women who are less than 24 and older than 35 years [9]. A study in Tanzania found that low level of spouse education was a risk factor for maternal mortality [10]. Lack of knowledge regarding the need for skilled attendants is a barrier to women seeking care, especially during birth emergencies. A survey conducted in Kenya in 2006 showed that 15% of pregnant women were not informed of the importance of hospital deliveries [11]. In Nigeria, a cross-sectional survey revealed that the most common risk factors for maternal death were primigravidity (19%), and unbooked status (19%) [12]. Poverty has also been associated with adverse maternal outcomes, not directly, but as a contributor to maternal ability to access and utilise care where complications occur [13,14]. There is also evidence that contraceptive use is efficient for the primary prevention of maternal mortality in developing countries by about 44% [15].\nAntenatal care (ANC) is very important during pregnancy. International organizations recommend a minimum of four visits, the administration of two doses of tetanus toxoid and folic acid supplementation during ANC attendance [16]. 
When women receive good care during the pre-partum period, they have been shown to be at less risk of maternal morbidity and mortality, since they had a higher likelihood of using a professional health facility during birth [10,17].\nIn the Kenya Demographic and Health Survey (2008-2009), it was reported that 92% of women received ANC from a skilled provider (doctor, nurse, or midwife), especially those who were more educated and resided in urban areas [3]. The report further showed that 83% of women who visited public hospitals were required to pay for antenatal services, which may explain why only 47% of antenatal women attended the recommended four visits [3]. Women had also been required to pay for delivery services until June 2013, when the Kenyan government rolled out a program where pregnant women can receive free maternity services in public hospitals.\nHealth systems functioning with adequate equipment, resources and trained personnel to handle maternal complications can reduce the risks of mortality. In Africa maternal deaths are associated with delayed referrals for women from lower level facilities, and where referral systems are not well equipped to handle emergency obstetric care [18]. The presence of skilled attendants during birth is also important in managing life threatening complications. In Kenya, the use of skilled attendants at delivery is currently 50% [19].\nThe Delay Model by McCarthy and Maine is a conceptual framework that has been used to assess factors contributing to maternal mortality in developing countries [20]. This framework attributes mortality to certain determinants that contribute to the delay in deciding to seek care, the delay in reaching a health facility, and the delay in receiving quality care upon reaching a health facility. In Kenya there has been insufficient progress made towards achieving MDG Five. The aim of this study is to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya. Using a framework adapted from the Delay Model, this study analyses four sets of determinants: individual and socio-demographic, maternal history, reproductive or obstetric, and hospital admission/health system.", "Analyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model, (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values > = 0.1 on the Likelihood Ratio Test. The variables in each of the final models were then included in a combined model and removed where p-values > = 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models. 
The reference group was the category with the lowest expected risk of death, or if there were few cases in this category, the group with the majority of respondents.\nAssuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8.\nEthical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya.", "MDG: Millennium development goal; MMR: Maternal mortality ratio; ANC: Antenatal care; MTRH: Moi teaching and referral hospital; NBU: Newborn unit; WHO: World Health Organization; HIV: Human immunodeficiency virus.", "The authors declare that they have no competing interests.", "FY conceived the study. FY, CD, JB, JSW, and PN all contributed to the protocol design, questionnaire design and ethics application process. FY contributed in data collection and extraction. PN contributed in providing consultation and advice during data extraction. FY, CD, PN and JB contributed to data analysis and writing of the paper. CD, JB, FY and JSW contributed to the drafting and editing the paper. All authors contributed to reviewing the paper and approved the final version for publication.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2393/14/38/prepub\n" ]
[ null, null, null, null, null, null ]
[ "Background", "Methods", "Statistical analyses", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "The maternal mortality ratio (MMR) is defined as “the ratio of the number of maternal deaths during a given period per 100,000 live births during the same time-period”. The global MMR is 210 per 100,000 live births [1]. Despite worldwide declines since 1990, the MMR is 15 times higher in developing than developed regions [1]. Sub-Saharan Africa has the highest MMR at 500 per 100,000 live births. In developed regions the MMR is 16 per 100,000 live births [1]. The target for Millennium Development Goal (MDG) Five is to reduce the global MMR by three quarters and to achieve universal access to reproductive health by 2015 [2]. In Kenya, the MMR has remained at 400-600 per 100,000 live births over the past decade - resulting in little or no progress being made towards achieving MDG Five [1,3].\nThe main direct causes of maternal death in developing countries include haemorrhage, sepsis, obstructed labour and hypertensive disorders [4]. The risk of death from haemorrhage is one in 1,000 deliveries in developing countries, compared with one in 100,000 in developed countries, and accounts for one third of the maternal deaths in Africa [5]. A study in Canada found increased risk of eclampsia among women with existing heart disease and anaemia [6]. A retrospective study undertaken at a tertiary hospital in Nigeria in 2007 found that the most common risk factors for maternal mortality were primaparity, haemorrhage, anaemia, eclampsia and malaria [7]. Risk factors for complications arising from infections include birthing under unhygienic conditions, poor nutrition, anaemia, caesarean section, membrane rupture, prolonged labour, retained products and haemorrhage [8].\nIn developing countries, indirect causes of maternal death include both previously existing diseases and diseases that develop during pregnancy. These include HIV, malaria, tuberculosis, diabetes, and cardiovascular disease, all of which and have an enormous impact on maternal and fetal outcomes during pregnancy [4].\nMany individual and socioeconomic factors have been associated with high maternal mortality. These include lack of education, parity, previous obstetric history, employment, socioeconomic status, and types of care seeking behaviours during pregnancy. There is also evidence of increased risk of death among women who are less than 24 and older than 35 years [9]. A study in Tanzania found that low level of spouse education was a risk factor for maternal mortality [10]. Lack of knowledge regarding the need for skilled attendants is a barrier to women seeking care, especially during birth emergencies. A survey conducted in Kenya in 2006 showed that 15% of pregnant women were not informed of the importance of hospital deliveries [11]. In Nigeria, a cross-sectional survey revealed that the most common risk factors for maternal death were primigravidity (19%), and unbooked status (19%) [12]. Poverty has also been associated with adverse maternal outcomes, not directly, but as a contributor to maternal ability to access and utilise care where complications occur [13,14]. There is also evidence that contraceptive use is efficient for the primary prevention of maternal mortality in developing countries by about 44% [15].\nAntenatal care (ANC) is very important during pregnancy. International organizations recommend a minimum of four visits, the administration of two doses of tetanus toxoid and folic acid supplementation during ANC attendance [16]. 
When women receive good care during the pre-partum period, they have been shown to be at less risk of maternal morbidity and mortality, since they had a higher likelihood of using a professional health facility during birth [10,17].\nIn the Kenya Demographic and Health Survey (2008-2009), it was reported that 92% of women received ANC from a skilled provider (doctor, nurse, or midwife), especially those who were more educated and resided in urban areas [3]. The report further showed that 83% of women who visited public hospitals were required to pay for antenatal services, which may explain why only 47% of antenatal women attended the recommended four visits [3]. Women had also been required to pay for delivery services until June 2013, when the Kenyan government rolled out a program where pregnant women can receive free maternity services in public hospitals.\nHealth systems functioning with adequate equipment, resources and trained personnel to handle maternal complications can reduce the risks of mortality. In Africa maternal deaths are associated with delayed referrals for women from lower level facilities, and where referral systems are not well equipped to handle emergency obstetric care [18]. The presence of skilled attendants during birth is also important in managing life threatening complications. In Kenya, the use of skilled attendants at delivery is currently 50% [19].\nThe Delay Model by McCarthy and Maine is a conceptual framework that has been used to assess factors contributing to maternal mortality in developing countries [20]. This framework attributes mortality to certain determinants that contribute to the delay in deciding to seek care, the delay in reaching a health facility, and the delay in receiving quality care upon reaching a health facility. In Kenya there has been insufficient progress made towards achieving MDG Five. The aim of this study is to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya. Using a framework adapted from the Delay Model, this study analyses four sets of determinants: individual and socio-demographic, maternal history, reproductive or obstetric, and hospital admission/health system.", "An unmatched case control study of women who delivered between January 2004 and March 2011 was conducted at Moi Teaching and Referral Hospital (MTRH) located in the Western region of the Rift Valley Province, Kenya [21]. As the second largest national hospital in Kenya with over 800 beds, MTRH provides a range of curative, preventive and rehabilitative health services to a population of about 400,000 inhabitants, and an indigent referral population of 16 million from Northern and Western Kenya [21]. The Mother and Baby Unit at MTRH at has an antenatal ward, post natal ward, labour ward, Newborn Unit (NBU) and two theatres dedicated for obstetrics. The bed capacity is approximately 20 for the antenatal and labour wards, and 50 for the post natal wards [21].\nCases (n = 150) were maternal deaths identified from a manual review of hospital records. Two controls (n = 300) were selected per case. Controls were surviving women who were admitted immediately preceding and following cases. Cases were selected retrospectively and sequentially from the most recent delivery until the required sample size was achieved. Trained staff collected information using a standard audit form. Abortion related deaths were excluded from the study.\nMaternal hospital death was the outcome. 
This was a clearly defined adverse event certified by medical personnel. The data collection form included: mother’s age, mother’s marital status, mother’s education, spouse’s education, mother’s occupation, spouse’s occupation, and the source of funding for the delivery. Information relating to the mother’s medical history included: smoking, alcohol use, contraceptive use, previous abortion, previous twins, gravida, and pre-existing medical conditions. Obstetric or reproductive factors were pregnancy stage, labour stage, number of ANC visits, and place of ANC care. Health system factors included mode of delivery, qualification of birth attendant, and referral from another facility (yes/no). Information on the mother’s admission factors comprised: clinical cause of death or diagnosis on admission (e.g. eclampsia, dystocia haemorrhage, or comorbid causes), diastolic blood pressure (millimetres of mercury/mm Hg), systolic blood pressure (mm HG), haemoglobin level (grams per decilitre g/dL), pulse rate (beats per minute/bpm), and temperature (degrees Celsius/°C). The primary obstetric cause of death was that documented in the patient hospital and post mortem records.\n Statistical analyses Analyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model, (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values > = 0.1 on the Likelihood Ratio Test. The variables in each of the final models were then included in a combined model and removed where p-values > = 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models. The reference group was the category with the lowest expected risk of death, or if there were few cases in this category, the group with the majority of respondents.\nAssuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8.\nEthical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya.\nAnalyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model, (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values > = 0.1 on the Likelihood Ratio Test. 
The variables in each of the final models were then included in a combined model and removed where p-values > = 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models. The reference group was the category with the lowest expected risk of death, or if there were few cases in this category, the group with the majority of respondents.\nAssuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8.\nEthical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya.", "Analyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model, (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values > = 0.1 on the Likelihood Ratio Test. The variables in each of the final models were then included in a combined model and removed where p-values > = 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models. The reference group was the category with the lowest expected risk of death, or if there were few cases in this category, the group with the majority of respondents.\nAssuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8.\nEthical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya.", "Table 1 shows the demographic factors associated with maternal mortality. In this model, mother’s age and mother’s education were significantly associated with mortality. Relative to controls, cases had three times the odds of being aged 35-45 years rather than 15-24 years (OR 3.1, 95% CI 1.5- 6.2; p < 0.0001). Cases had eight times the odds of having no education versus secondary education compared with controls (OR 8.0, 95% CI 4.0-16.3; p < 0.0001).\nIndividual and Socio-demographic risk factors for maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011\n*Spouse’s education was not included in the multiple regression model due to high cases of missing data and correlation with mothers education.\naAdjusted for variables included in the final demographic model.\n†P- value for Likelihood Ratio Test in the adjusted model.\nReference category for logistic regression represented by 1.\nNumbers may not add to total sample due to missing values.\nTable 2 shows the association between maternal history of prevailing conditions and obstetric and reproductive factors with maternal mortality. 
After adjusting for all other factors in the model, cases had higher odds than controls of having a history of maternal alcohol use (OR 2.5, 95% CI 1.2-5.3; p = 0.018), more than five previous pregnancies (OR 2.6, 95% CI 1.4-4.8; p = 0.0049), and a history of pre-existing illnesses (OR 3.0, 95% CI 1.7-5.3; p < 0.0001). Contraceptive use was protective (OR 0.3 95% CI 0.1-0.6; p = 0.0007). Table 2 also shows obstetric and reproductive characteristics associated maternal mortality. Compared to controls, cases had higher odds of assisted or caesarean deliveries (OR 3.0 95% CI 1.5-5.6; p < 0.0006). Cases had higher odds than controls of having a doctor, rather than a nurse or midwife attend the birth (OR 4.1 95% CI 2.2-7.6; p < 0.0001). Relative to controls, cases had almost nine times the odds of arriving at hospital at the puerperium stage (OR 8.9, 95% CI 3.5, 22.7; p < 0.0001), and almost six times the odds of lack of antenatal care (OR 5.7, 95% CI 2.6-12.4; p < 0.0001).\nMother’s history of prevailing conditions and obstetric characteristics associated with maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011\n*These include: HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes.\n†P- value for Likelihood Ratio Test in the adjusted model.\naAdjusted for variables included in the final demographic model.\nReference category for logistic regression represented by 1.\n††Did not include ANC place because of high correlation with number of ANC visits.\nNumbers may not add to total sample due to missing values.\nTable 3 shows maternal admission factors associated with mortality. Admission from comorbid complications (OR 6.7, 95% CI 3.8-11.8; p < 0.0001), eclampsia (OR 4.7, 95% CI 1.6, 13.7; p = 0.0038), non-normal blood pressure (OR 7.5, 95% CI 1.5-37.7; p = 0.0039), tachycardia (OR 16.5, 95% CI 4.8-57.3; p < 0.0001), and being referred to MTRH (OR 3.3, 95% CI 1.9-5.7; p < 0.0001) were all statistically significant risk factors for maternal mortality.\nMaternal admission factors associated with maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011\n**Diastolic blood pressure was used as a proxy for systolic because of high correlation.\n†Temp and haemoglobin were omitted in the adjusted model because of too many missing values.\n††P- value for Likelihood Ratio Test in the adjusted model.\naAdjusted for variables included in the final demographic model.\nReference category for logistic regression represented by 1.\nNumbers may not add to total sample due to missing values.\nTable 4 shows the multivariable analysis combining all factors from the previous models. 
Statistically significant risk factors for maternal mortality included: no education, relative to secondary education (OR 3.3, 95% CI 1.1-10.4, p = 0.0284), history of pre-existing medical conditions (OR 3.9, 95% CI 1.7-9.2, p = 0.0016), doctor attendance at birth (OR 4.6, 95% CI 2.1-10.1, p = 0.0001), having no antenatal visits (OR 4.1, 95% CI 1.6-10.4, p = 0.0007), being admitted with eclampsia (OR 10.9 95% CI 3.7-31.9, p < 0.0001), having comorbid complications on admission (OR 9.0, 95% CI 4.2-19.3, p < 0.0001), having elevated pulse (OR 10.7, 95% CI 2.7-43.4, p = 0.0002), and being referred to MTRH (OR 2.1, 95% CI 1.0-4.3, p = 0.0459).\nMultivariable model showing risk factors for maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011\nFinal model included all variables with P-value of 0.1 or less on the likelihood ratio test.\nReference category for logistic regression represented by 1.\n†These include: HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes.\nNumbers may not add to total sample due to missing values; the final model included 367 observations.", "In the multivariable analysis of each of the four groups of risk factors (socio-demographic, maternal history, reproductive/ obstetric and admission factors) variables significantly associated with maternal mortality included: age, education, alcohol use, contraceptive use, gravida, pre-existing medical conditions, mode of delivery, type of birth attendant, pregnancy stage, number of ANC visits, having comorbid complications on admission, eclampsia, diastolic blood pressure, elevated pulse, and referral status. However, in the final model combining only significant factors from the four separate sets of analyses into a parsimonious model, only education, underlying medical conditions, birth attendant, number of ANC visits, having comorbid complications, eclampsia, having an elevated pulse on admission, and referral status were significant risk factors for maternal mortality.\nCases had three times the odds of having no education versus secondary education compared with controls. This is in agreement with another study that also reported a higher risk of mortality among illiterate women [22]. This finding is important since it emphasizes the role of education for both the mother and her spouse in obtaining and understanding the benefits of good health and being able to make appropriate decisions during pregnancy. It is important to note that despite the woman’s weaker role in decision-making in African settings, education has a strong influence on mortality. In this study, we used mother’s education as a proxy for the husband’s education. Although there was considerable missing data for spouse’s education, there was correlation between these two education variables.\nHaving no antenatal care during pregnancy was associated with mortality in this study, a finding which corresponds with those of other studies [22,23]. Antenatal care is important in screening for pre-existing illnesses and complications in the early stages of pregnancy that could impact adversely during pregnancy and childbirth [24]. Since ANC coverage is high in Kenya, there is a need to scale up interventions that empower women to make at least four visits during pregnancy as recommended by international organizations [16,19].\nThe findings here were that comorbid conditions including HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes contributed to maternal deaths. 
This is contrary to other research showing that direct pregnancy complications are the leading causes of maternal deaths [4]. However, other research shows that the significant increase in MMRs in Sub-Saharan Africa are predominantly due to increasing HIV prevalence in that region [25]. The finding that the odds of comorbid conditions were higher in cases than controls also demonstrates the importance of ANC for screening, detection and management of underlying illnesses that could potentially pose a threat to the mother during pregnancy and childbirth.\nContraceptive use was protective for maternal mortality, which coincides with findings from another study that found that maternal mortality would be 77% higher globally in the absence of family planning programs and contraceptive use [15]. The role of contraceptives in tackling maternal mortality has been through reducing exposure to incidence of pregnancy, lowering hazards of fragility from high parity pregnancies, reducing vulnerability to abortion risks, and postponing pregnancies, especially in countries with high fertility rates [15].\nIn this study, hypertensive disorders during pregnancy were higher among cases than controls. Our study demonstrated increased odds of eclampsia in cases, which is in agreement with another study that found that the delay in diagnosis, triage, transport and treatment of eclampsia increases the risk of maternal death [26]. There is evidence that screening for hypertensive conditions during the antenatal period plays a significant role in reducing the risk of death to the mother [13]. This study also found higher odds of elevated pulse amongst cases, which could explain the increased risk of death due to eclampsia. After adjusting for other factors, haemorrhage was not significantly associated with mortality possibly as a result of hospital protocols for management of haemorrhage.\nThis study found health care system related factors that identified cases as being at risk including doctor attendance at birth and referrals. Cases had higher odds than controls of a doctor attending their delivery, potentially because they were diagnosed with the most difficult complications. This has been previously reported, especially in low resource settings where uptake of professional birth attendants is low hence women only seek help when the condition is critical or too late for the doctor to save their lives [4]. Cases had twice the odds of referral relative to controls, potentially because the number of referrals represented over half of the cases who were referred following complications of birth.\nThis study provides information that is important for the identification of risk factors that contribute to maternal mortality in the second largest referral hospital in Kenya. It also provides information that will aid in identifying areas of improving health facilities locally and nationally in terms of referrals, antenatal care, and the availability of skilled birth attendants who are able to manage pregnancy related complications. This study is timely given the free maternity program roll out in Kenya since June 2013. Importantly, these findings will inform policy makers about ways of strengthening the health system and promoting more hospital births.\nThis study has some limitations. Firstly, it only includes deaths that occurred during the hospital admission and therefore the risk factors identified here were specifically associated with in-hospital mortality. 
Pregnancy related mortality that occurs outside hospital may have other risk factors that were not identified here. Secondly, bias may have resulted from the misclassification of causes of death data and missing information in some fields.", "This study highlights risk factors for mortality at a tertiary hospital in Kenya showing the importance of antenatal care and maternal education in preventing maternal mortality. The findings are timely given Kenya’s limited progress towards achieving MDG Five by 2015. Antenatal visits provide opportunities for the detection of risk factors for eclampsia and other underlying illnesses that may put a mother at risk during birth. There is need to focus on integrated care throughout the pregnancy by improving women’s knowledge and empowering them to take an active role in their own health as well as gaining access to skilled care at birth and during pregnancy.", "MDG: Millennium development goal; MMR: Maternal mortality ratio; ANC: Antenatal care; MTRH: Moi teaching and referral hospital; NBU: Newborn unit; WHO: World Health Organization; HIV: Human immunodeficiency virus.", "The authors declare that they have no competing interests.", "FY conceived the study. FY, CD, JB, JSW, and PN all contributed to the protocol design, questionnaire design and ethics application process. FY contributed in data collection and extraction. PN contributed in providing consultation and advice during data extraction. FY, CD, PN and JB contributed to data analysis and writing of the paper. CD, JB, FY and JSW contributed to the drafting and editing the paper. All authors contributed to reviewing the paper and approved the final version for publication.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2393/14/38/prepub\n" ]
[ null, "methods", null, "results", "discussion", "conclusions", null, null, null, null ]
[ "Maternal mortality", "Tertiary hospital", "Risk factors", "Kenya" ]
Background: The maternal mortality ratio (MMR) is defined as “the ratio of the number of maternal deaths during a given period per 100,000 live births during the same time-period”. The global MMR is 210 per 100,000 live births [1]. Despite worldwide declines since 1990, the MMR is 15 times higher in developing than developed regions [1]. Sub-Saharan Africa has the highest MMR at 500 per 100,000 live births. In developed regions the MMR is 16 per 100,000 live births [1]. The target for Millennium Development Goal (MDG) Five is to reduce the global MMR by three quarters and to achieve universal access to reproductive health by 2015 [2]. In Kenya, the MMR has remained at 400-600 per 100,000 live births over the past decade - resulting in little or no progress being made towards achieving MDG Five [1,3]. The main direct causes of maternal death in developing countries include haemorrhage, sepsis, obstructed labour and hypertensive disorders [4]. The risk of death from haemorrhage is one in 1,000 deliveries in developing countries, compared with one in 100,000 in developed countries, and accounts for one third of the maternal deaths in Africa [5]. A study in Canada found increased risk of eclampsia among women with existing heart disease and anaemia [6]. A retrospective study undertaken at a tertiary hospital in Nigeria in 2007 found that the most common risk factors for maternal mortality were primaparity, haemorrhage, anaemia, eclampsia and malaria [7]. Risk factors for complications arising from infections include birthing under unhygienic conditions, poor nutrition, anaemia, caesarean section, membrane rupture, prolonged labour, retained products and haemorrhage [8]. In developing countries, indirect causes of maternal death include both previously existing diseases and diseases that develop during pregnancy. These include HIV, malaria, tuberculosis, diabetes, and cardiovascular disease, all of which and have an enormous impact on maternal and fetal outcomes during pregnancy [4]. Many individual and socioeconomic factors have been associated with high maternal mortality. These include lack of education, parity, previous obstetric history, employment, socioeconomic status, and types of care seeking behaviours during pregnancy. There is also evidence of increased risk of death among women who are less than 24 and older than 35 years [9]. A study in Tanzania found that low level of spouse education was a risk factor for maternal mortality [10]. Lack of knowledge regarding the need for skilled attendants is a barrier to women seeking care, especially during birth emergencies. A survey conducted in Kenya in 2006 showed that 15% of pregnant women were not informed of the importance of hospital deliveries [11]. In Nigeria, a cross-sectional survey revealed that the most common risk factors for maternal death were primigravidity (19%), and unbooked status (19%) [12]. Poverty has also been associated with adverse maternal outcomes, not directly, but as a contributor to maternal ability to access and utilise care where complications occur [13,14]. There is also evidence that contraceptive use is efficient for the primary prevention of maternal mortality in developing countries by about 44% [15]. Antenatal care (ANC) is very important during pregnancy. International organizations recommend a minimum of four visits, the administration of two doses of tetanus toxoid and folic acid supplementation during ANC attendance [16]. 
When women receive good care during the pre-partum period, they have been shown to be at less risk of maternal morbidity and mortality, since they had a higher likelihood of using a professional health facility during birth [10,17]. In the Kenya Demographic and Health Survey (2008-2009), it was reported that 92% of women received ANC from a skilled provider (doctor, nurse, or midwife), especially those who were more educated and resided in urban areas [3]. The report further showed that 83% of women who visited public hospitals were required to pay for antenatal services, which may explain why only 47% of antenatal women attended the recommended four visits [3]. Women had also been required to pay for delivery services until June 2013, when the Kenyan government rolled out a program where pregnant women can receive free maternity services in public hospitals. Health systems functioning with adequate equipment, resources and trained personnel to handle maternal complications can reduce the risks of mortality. In Africa maternal deaths are associated with delayed referrals for women from lower level facilities, and where referral systems are not well equipped to handle emergency obstetric care [18]. The presence of skilled attendants during birth is also important in managing life threatening complications. In Kenya, the use of skilled attendants at delivery is currently 50% [19]. The Delay Model by McCarthy and Maine is a conceptual framework that has been used to assess factors contributing to maternal mortality in developing countries [20]. This framework attributes mortality to certain determinants that contribute to the delay in deciding to seek care, the delay in reaching a health facility, and the delay in receiving quality care upon reaching a health facility. In Kenya there has been insufficient progress made towards achieving MDG Five. The aim of this study is to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya. Using a framework adapted from the Delay Model, this study analyses four sets of determinants: individual and socio-demographic, maternal history, reproductive or obstetric, and hospital admission/health system. Methods: An unmatched case control study of women who delivered between January 2004 and March 2011 was conducted at Moi Teaching and Referral Hospital (MTRH) located in the Western region of the Rift Valley Province, Kenya [21]. As the second largest national hospital in Kenya with over 800 beds, MTRH provides a range of curative, preventive and rehabilitative health services to a population of about 400,000 inhabitants, and an indigent referral population of 16 million from Northern and Western Kenya [21]. The Mother and Baby Unit at MTRH at has an antenatal ward, post natal ward, labour ward, Newborn Unit (NBU) and two theatres dedicated for obstetrics. The bed capacity is approximately 20 for the antenatal and labour wards, and 50 for the post natal wards [21]. Cases (n = 150) were maternal deaths identified from a manual review of hospital records. Two controls (n = 300) were selected per case. Controls were surviving women who were admitted immediately preceding and following cases. Cases were selected retrospectively and sequentially from the most recent delivery until the required sample size was achieved. Trained staff collected information using a standard audit form. Abortion related deaths were excluded from the study. Maternal hospital death was the outcome. 
This was a clearly defined adverse event certified by medical personnel. The data collection form included: mother’s age, mother’s marital status, mother’s education, spouse’s education, mother’s occupation, spouse’s occupation, and the source of funding for the delivery. Information relating to the mother’s medical history included: smoking, alcohol use, contraceptive use, previous abortion, previous twins, gravida, and pre-existing medical conditions. Obstetric or reproductive factors were pregnancy stage, labour stage, number of ANC visits, and place of ANC care. Health system factors included mode of delivery, qualification of birth attendant, and referral from another facility (yes/no). Information on the mother’s admission factors comprised: clinical cause of death or diagnosis on admission (e.g. eclampsia, dystocia haemorrhage, or comorbid causes), diastolic blood pressure (millimetres of mercury/mm Hg), systolic blood pressure (mm HG), haemoglobin level (grams per decilitre g/dL), pulse rate (beats per minute/bpm), and temperature (degrees Celsius/°C). The primary obstetric cause of death was that documented in the patient hospital and post mortem records. Statistical analyses Analyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model, (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values > = 0.1 on the Likelihood Ratio Test. The variables in each of the final models were then included in a combined model and removed where p-values > = 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models. The reference group was the category with the lowest expected risk of death, or if there were few cases in this category, the group with the majority of respondents. Assuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8. Ethical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya. Analyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model, (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values > = 0.1 on the Likelihood Ratio Test. 
The variables in each of the final models were then included in a combined model and removed where p-values > = 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models. The reference group was the category with the lowest expected risk of death, or if there were few cases in this category, the group with the majority of respondents. Assuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8. Ethical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya. Statistical analyses: Analyses were performed using Stata version 10.0 (Stata-Corp, College Station, TX, USA). Following initial data checking and exploratory analysis, univariable logistic regression analysis was conducted for each potential risk factor. The multivariable models initially included all variables with p < 0.2 in the univariable models. Backward stepwise multiple logistic regression was undertaken separately for the four groups of risk factors in the framework adapted from the Delay Model, (individual and socio-demographic; maternal history; reproductive or obstetric; and admission). Variables were removed from the models where p-values > = 0.1 on the Likelihood Ratio Test. The variables in each of the final models were then included in a combined model and removed where p-values > = 0.1 in order to derive a final parsimonious model. Odds ratios (ORs), 95% confidence intervals and p-values are reported for all models. The reference group was the category with the lowest expected risk of death, or if there were few cases in this category, the group with the majority of respondents. Assuming the probability of exposure in controls was 40% and the ratio of cases to controls was 1:2, with 80% power and a 5% level of significance, a sample of approximately 450 women (150 cases and 300 controls) was needed to detect an odds ratio of approximately 0.5 or 1.8. Ethical approval was sought from the Human Research Ethics Committee (HREC) at the University of Newcastle and The Institute for Research and Ethics Committee (IREC) in Kenya. Results: Table 1 shows the demographic factors associated with maternal mortality. In this model, mother’s age and mother’s education were significantly associated with mortality. Relative to controls, cases had three times the odds of being aged 35-45 years rather than 15-24 years (OR 3.1, 95% CI 1.5- 6.2; p < 0.0001). Cases had eight times the odds of having no education versus secondary education compared with controls (OR 8.0, 95% CI 4.0-16.3; p < 0.0001). Individual and Socio-demographic risk factors for maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011 *Spouse’s education was not included in the multiple regression model due to high cases of missing data and correlation with mothers education. aAdjusted for variables included in the final demographic model. †P- value for Likelihood Ratio Test in the adjusted model. Reference category for logistic regression represented by 1. Numbers may not add to total sample due to missing values. 
Table 2 shows the association between maternal history of prevailing conditions and obstetric and reproductive factors with maternal mortality. After adjusting for all other factors in the model, cases had higher odds than controls of having a history of maternal alcohol use (OR 2.5, 95% CI 1.2-5.3; p = 0.018), more than five previous pregnancies (OR 2.6, 95% CI 1.4-4.8; p = 0.0049), and a history of pre-existing illnesses (OR 3.0, 95% CI 1.7-5.3; p < 0.0001). Contraceptive use was protective (OR 0.3 95% CI 0.1-0.6; p = 0.0007). Table 2 also shows obstetric and reproductive characteristics associated maternal mortality. Compared to controls, cases had higher odds of assisted or caesarean deliveries (OR 3.0 95% CI 1.5-5.6; p < 0.0006). Cases had higher odds than controls of having a doctor, rather than a nurse or midwife attend the birth (OR 4.1 95% CI 2.2-7.6; p < 0.0001). Relative to controls, cases had almost nine times the odds of arriving at hospital at the puerperium stage (OR 8.9, 95% CI 3.5, 22.7; p < 0.0001), and almost six times the odds of lack of antenatal care (OR 5.7, 95% CI 2.6-12.4; p < 0.0001). Mother’s history of prevailing conditions and obstetric characteristics associated with maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011 *These include: HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes. †P- value for Likelihood Ratio Test in the adjusted model. aAdjusted for variables included in the final demographic model. Reference category for logistic regression represented by 1. ††Did not include ANC place because of high correlation with number of ANC visits. Numbers may not add to total sample due to missing values. Table 3 shows maternal admission factors associated with mortality. Admission from comorbid complications (OR 6.7, 95% CI 3.8-11.8; p < 0.0001), eclampsia (OR 4.7, 95% CI 1.6, 13.7; p = 0.0038), non-normal blood pressure (OR 7.5, 95% CI 1.5-37.7; p = 0.0039), tachycardia (OR 16.5, 95% CI 4.8-57.3; p < 0.0001), and being referred to MTRH (OR 3.3, 95% CI 1.9-5.7; p < 0.0001) were all statistically significant risk factors for maternal mortality. Maternal admission factors associated with maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011 **Diastolic blood pressure was used as a proxy for systolic because of high correlation. †Temp and haemoglobin were omitted in the adjusted model because of too many missing values. ††P- value for Likelihood Ratio Test in the adjusted model. aAdjusted for variables included in the final demographic model. Reference category for logistic regression represented by 1. Numbers may not add to total sample due to missing values. Table 4 shows the multivariable analysis combining all factors from the previous models. Statistically significant risk factors for maternal mortality included: no education, relative to secondary education (OR 3.3, 95% CI 1.1-10.4, p = 0.0284), history of pre-existing medical conditions (OR 3.9, 95% CI 1.7-9.2, p = 0.0016), doctor attendance at birth (OR 4.6, 95% CI 2.1-10.1, p = 0.0001), having no antenatal visits (OR 4.1, 95% CI 1.6-10.4, p = 0.0007), being admitted with eclampsia (OR 10.9 95% CI 3.7-31.9, p < 0.0001), having comorbid complications on admission (OR 9.0, 95% CI 4.2-19.3, p < 0.0001), having elevated pulse (OR 10.7, 95% CI 2.7-43.4, p = 0.0002), and being referred to MTRH (OR 2.1, 95% CI 1.0-4.3, p = 0.0459). 
Multivariable model showing risk factors for maternal mortality in a tertiary hospital in Kenya from January 2004 to March 2011 Final model included all variables with P-value of 0.1 or less on the likelihood ratio test. Reference category for logistic regression represented by 1. †These include: HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes. Numbers may not add to total sample due to missing values; the final model included 367 observations. Discussion: In the multivariable analysis of each of the four groups of risk factors (socio-demographic, maternal history, reproductive/ obstetric and admission factors) variables significantly associated with maternal mortality included: age, education, alcohol use, contraceptive use, gravida, pre-existing medical conditions, mode of delivery, type of birth attendant, pregnancy stage, number of ANC visits, having comorbid complications on admission, eclampsia, diastolic blood pressure, elevated pulse, and referral status. However, in the final model combining only significant factors from the four separate sets of analyses into a parsimonious model, only education, underlying medical conditions, birth attendant, number of ANC visits, having comorbid complications, eclampsia, having an elevated pulse on admission, and referral status were significant risk factors for maternal mortality. Cases had three times the odds of having no education versus secondary education compared with controls. This is in agreement with another study that also reported a higher risk of mortality among illiterate women [22]. This finding is important since it emphasizes the role of education for both the mother and her spouse in obtaining and understanding the benefits of good health and being able to make appropriate decisions during pregnancy. It is important to note that despite the woman’s weaker role in decision-making in African settings, education has a strong influence on mortality. In this study, we used mother’s education as a proxy for the husband’s education. Although there was considerable missing data for spouse’s education, there was correlation between these two education variables. Having no antenatal care during pregnancy was associated with mortality in this study, a finding which corresponds with those of other studies [22,23]. Antenatal care is important in screening for pre-existing illnesses and complications in the early stages of pregnancy that could impact adversely during pregnancy and childbirth [24]. Since ANC coverage is high in Kenya, there is a need to scale up interventions that empower women to make at least four visits during pregnancy as recommended by international organizations [16,19]. The findings here were that comorbid conditions including HIV, malaria, rheumatic heart disease, cardiovascular disease, and diabetes contributed to maternal deaths. This is contrary to other research showing that direct pregnancy complications are the leading causes of maternal deaths [4]. However, other research shows that the significant increase in MMRs in Sub-Saharan Africa are predominantly due to increasing HIV prevalence in that region [25]. The finding that the odds of comorbid conditions were higher in cases than controls also demonstrates the importance of ANC for screening, detection and management of underlying illnesses that could potentially pose a threat to the mother during pregnancy and childbirth. 
Contraceptive use was protective for maternal mortality, which coincides with findings from another study that found that maternal mortality would be 77% higher globally in the absence of family planning programs and contraceptive use [15]. The role of contraceptives in tackling maternal mortality has been through reducing exposure to incidence of pregnancy, lowering hazards of fragility from high parity pregnancies, reducing vulnerability to abortion risks, and postponing pregnancies, especially in countries with high fertility rates [15]. In this study, hypertensive disorders during pregnancy were higher among cases than controls. Our study demonstrated increased odds of eclampsia in cases, which is in agreement with another study that found that the delay in diagnosis, triage, transport and treatment of eclampsia increases the risk of maternal death [26]. There is evidence that screening for hypertensive conditions during the antenatal period plays a significant role in reducing the risk of death to the mother [13]. This study also found higher odds of elevated pulse amongst cases, which could explain the increased risk of death due to eclampsia. After adjusting for other factors, haemorrhage was not significantly associated with mortality possibly as a result of hospital protocols for management of haemorrhage. This study found health care system related factors that identified cases as being at risk including doctor attendance at birth and referrals. Cases had higher odds than controls of a doctor attending their delivery, potentially because they were diagnosed with the most difficult complications. This has been previously reported, especially in low resource settings where uptake of professional birth attendants is low hence women only seek help when the condition is critical or too late for the doctor to save their lives [4]. Cases had twice the odds of referral relative to controls, potentially because the number of referrals represented over half of the cases who were referred following complications of birth. This study provides information that is important for the identification of risk factors that contribute to maternal mortality in the second largest referral hospital in Kenya. It also provides information that will aid in identifying areas of improving health facilities locally and nationally in terms of referrals, antenatal care, and the availability of skilled birth attendants who are able to manage pregnancy related complications. This study is timely given the free maternity program roll out in Kenya since June 2013. Importantly, these findings will inform policy makers about ways of strengthening the health system and promoting more hospital births. This study has some limitations. Firstly, it only includes deaths that occurred during the hospital admission and therefore the risk factors identified here were specifically associated with in-hospital mortality. Pregnancy related mortality that occurs outside hospital may have other risk factors that were not identified here. Secondly, bias may have resulted from the misclassification of causes of death data and missing information in some fields. Conclusions: This study highlights risk factors for mortality at a tertiary hospital in Kenya showing the importance of antenatal care and maternal education in preventing maternal mortality. The findings are timely given Kenya’s limited progress towards achieving MDG Five by 2015. 
Antenatal visits provide opportunities for the detection of risk factors for eclampsia and other underlying illnesses that may put a mother at risk during birth. There is a need to focus on integrated care throughout the pregnancy by improving women’s knowledge and empowering them to take an active role in their own health, as well as gaining access to skilled care at birth and during pregnancy. Abbreviations: MDG: Millennium Development Goal; MMR: Maternal mortality ratio; ANC: Antenatal care; MTRH: Moi Teaching and Referral Hospital; NBU: Newborn unit; WHO: World Health Organization; HIV: Human immunodeficiency virus. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: FY conceived the study. FY, CD, JB, JSW, and PN all contributed to the protocol design, questionnaire design and ethics application process. FY contributed to data collection and extraction. PN provided consultation and advice during data extraction. FY, CD, PN and JB contributed to data analysis and writing of the paper. CD, JB, FY and JSW contributed to drafting and editing the paper. All authors contributed to reviewing the paper and approved the final version for publication. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2393/14/38/prepub
Background: Maternal mortality is high in Africa, especially in Kenya, where there is evidence of insufficient progress towards Millennium Development Goal (MDG) Five, which is to reduce the global maternal mortality ratio by three quarters and provide universal access to reproductive health by 2015. This study aims to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya. Methods: A manual review of records for 150 maternal deaths (cases) and 300 controls was undertaken using a standard audit form. The sample included pregnant women aged 15-49 years admitted to the Obstetric and Gynaecological wards at the Moi Teaching and Referral Hospital (MTRH) in Kenya from January 2004 to March 2011. Logistic regression analysis was used to assess risk factors for maternal mortality. Results: Factors significantly associated with maternal mortality included: having no education relative to secondary education (OR 3.3, 95% CI 1.1-10.4, p = 0.0284), history of underlying medical conditions (OR 3.9, 95% CI 1.7-9.2, p = 0.0016), doctor attendance at birth (OR 4.6, 95% CI 2.1-10.1, p = 0.0001), having no antenatal visits (OR 4.1, 95% CI 1.6-10.4, p = 0.0007), being admitted with eclampsia (OR 10.9, 95% CI 3.7-31.9, p < 0.0001), being admitted with comorbidities (OR 9.0, 95% CI 4.2-19.3, p < 0.0001), having an elevated pulse on admission (OR 10.7, 95% CI 2.7-43.4, p = 0.0002), and being referred to MTRH (OR 2.1, 95% CI 1.0-4.3, p = 0.0459). Conclusions: Lack of antenatal care and lack of maternal education are important risk factors for maternal mortality, even after adjusting for comorbidities and complications. Antenatal visits can provide opportunities for detecting risk factors for eclampsia and other underlying illnesses, but the visits need to be frequent and timely. Education enables access to information and helps empower women and their spouses to make appropriate decisions during pregnancy.
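For orientation, the OR and 95% CI figures quoted in the abstract above have a simple unadjusted analogue that can be computed by hand from a 2×2 case-control table. The counts below are invented purely for illustration and are not the study's data (the abstract's ORs are additionally adjusted for the other covariates).

```python
# Worked example: unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
# Hypothetical counts, NOT the study's data.
from math import exp, log, sqrt

a, b = 40, 30    # exposed (e.g., no antenatal visits): cases, controls
c, d = 110, 270  # unexposed: cases, controls

odds_ratio = (a * d) / (b * c)
se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
lo = exp(log(odds_ratio) - 1.96 * se_log_or)
hi = exp(log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# -> OR = 3.27, 95% CI 1.94-5.52
```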
Background: The maternal mortality ratio (MMR) is defined as “the ratio of the number of maternal deaths during a given period per 100,000 live births during the same time-period”. The global MMR is 210 per 100,000 live births [1]. Despite worldwide declines since 1990, the MMR is 15 times higher in developing than developed regions [1]. Sub-Saharan Africa has the highest MMR, at 500 per 100,000 live births; in developed regions the MMR is 16 per 100,000 live births [1]. The target for Millennium Development Goal (MDG) Five is to reduce the global MMR by three quarters and to achieve universal access to reproductive health by 2015 [2]. In Kenya, the MMR has remained at 400-600 per 100,000 live births over the past decade, resulting in little or no progress being made towards achieving MDG Five [1,3]. The main direct causes of maternal death in developing countries include haemorrhage, sepsis, obstructed labour and hypertensive disorders [4]. The risk of death from haemorrhage is one in 1,000 deliveries in developing countries, compared with one in 100,000 in developed countries, and haemorrhage accounts for one third of the maternal deaths in Africa [5]. A study in Canada found an increased risk of eclampsia among women with existing heart disease and anaemia [6]. A retrospective study undertaken at a tertiary hospital in Nigeria in 2007 found that the most common risk factors for maternal mortality were primiparity, haemorrhage, anaemia, eclampsia and malaria [7]. Risk factors for complications arising from infections include birthing under unhygienic conditions, poor nutrition, anaemia, caesarean section, membrane rupture, prolonged labour, retained products and haemorrhage [8]. In developing countries, indirect causes of maternal death include both previously existing diseases and diseases that develop during pregnancy. These include HIV, malaria, tuberculosis, diabetes, and cardiovascular disease, all of which have an enormous impact on maternal and fetal outcomes during pregnancy [4]. Many individual and socioeconomic factors have been associated with high maternal mortality. These include lack of education, parity, previous obstetric history, employment, socioeconomic status, and types of care-seeking behaviours during pregnancy. There is also evidence of increased risk of death among women who are younger than 24 or older than 35 years [9]. A study in Tanzania found that a low level of spouse education was a risk factor for maternal mortality [10]. Lack of knowledge regarding the need for skilled attendants is a barrier to women seeking care, especially during birth emergencies. A survey conducted in Kenya in 2006 showed that 15% of pregnant women were not informed of the importance of hospital deliveries [11]. In Nigeria, a cross-sectional survey revealed that the most common risk factors for maternal death were primigravidity (19%) and unbooked status (19%) [12]. Poverty has also been associated with adverse maternal outcomes, not directly, but through its effect on women’s ability to access and utilise care when complications occur [13,14]. There is also evidence that contraceptive use prevents about 44% of maternal deaths in developing countries, making it an efficient means of primary prevention [15]. Antenatal care (ANC) is very important during pregnancy. International organizations recommend a minimum of four visits, the administration of two doses of tetanus toxoid, and folic acid supplementation during ANC attendance [16]. 
When women receive good care during the pre-partum period, they have been shown to be at lower risk of maternal morbidity and mortality, since they have a higher likelihood of using a professional health facility during birth [10,17]. In the Kenya Demographic and Health Survey (2008-2009), it was reported that 92% of women received ANC from a skilled provider (doctor, nurse, or midwife), especially those who were more educated and resided in urban areas [3]. The report further showed that 83% of women who visited public hospitals were required to pay for antenatal services, which may explain why only 47% of antenatal women attended the recommended four visits [3]. Women had also been required to pay for delivery services until June 2013, when the Kenyan government rolled out a program under which pregnant women can receive free maternity services in public hospitals. Health systems functioning with adequate equipment, resources and trained personnel to handle maternal complications can reduce the risks of mortality. In Africa, maternal deaths are associated with delayed referrals of women from lower-level facilities and with referral systems that are not well equipped to handle emergency obstetric care [18]. The presence of skilled attendants during birth is also important in managing life-threatening complications. In Kenya, the use of skilled attendants at delivery is currently 50% [19]. The Delay Model by McCarthy and Maine is a conceptual framework that has been used to assess factors contributing to maternal mortality in developing countries [20]. This framework attributes mortality to certain determinants that contribute to the delay in deciding to seek care, the delay in reaching a health facility, and the delay in receiving quality care upon reaching a health facility. In Kenya, there has been insufficient progress made towards achieving MDG Five. The aim of this study is to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya. Using a framework adapted from the Delay Model, this study analyses four sets of determinants: individual and socio-demographic, maternal history, reproductive or obstetric, and hospital admission/health system. Conclusions: This study highlights risk factors for mortality at a tertiary hospital in Kenya, showing the importance of antenatal care and maternal education in preventing maternal mortality. The findings are timely given Kenya’s limited progress towards achieving MDG Five by 2015. Antenatal visits provide opportunities for the detection of risk factors for eclampsia and other underlying illnesses that may put a mother at risk during birth. There is a need to focus on integrated care throughout the pregnancy by improving women’s knowledge and empowering them to take an active role in their own health, as well as gaining access to skilled care at birth and during pregnancy.
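Purely as a restatement of the definition quoted at the start of this Background (not an addition to it), the MMR can be written as:

```latex
\[
\mathrm{MMR} \;=\; \frac{\text{number of maternal deaths during a given period}}
                        {\text{number of live births during the same period}}
              \times 100{,}000
\]
```

Under this definition, Sub-Saharan Africa's MMR of 500 per 100,000 live births corresponds to about one maternal death per 200 live births, versus roughly one per 6,250 in developed regions (16 per 100,000).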
Background: Maternal mortality is high in Africa, especially in Kenya, where there is evidence of insufficient progress towards Millennium Development Goal (MDG) Five, which is to reduce the global maternal mortality ratio by three quarters and provide universal access to reproductive health by 2015. This study aims to identify risk factors associated with maternal mortality in a tertiary level hospital in Kenya. Methods: A manual review of records for 150 maternal deaths (cases) and 300 controls was undertaken using a standard audit form. The sample included pregnant women aged 15-49 years admitted to the Obstetric and Gynaecological wards at the Moi Teaching and Referral Hospital (MTRH) in Kenya from January 2004 to March 2011. Logistic regression analysis was used to assess risk factors for maternal mortality. Results: Factors significantly associated with maternal mortality included: having no education relative to secondary education (OR 3.3, 95% CI 1.1-10.4, p = 0.0284), history of underlying medical conditions (OR 3.9, 95% CI 1.7-9.2, p = 0.0016), doctor attendance at birth (OR 4.6, 95% CI 2.1-10.1, p = 0.0001), having no antenatal visits (OR 4.1, 95% CI 1.6-10.4, p = 0.0007), being admitted with eclampsia (OR 10.9, 95% CI 3.7-31.9, p < 0.0001), being admitted with comorbidities (OR 9.0, 95% CI 4.2-19.3, p < 0.0001), having an elevated pulse on admission (OR 10.7, 95% CI 2.7-43.4, p = 0.0002), and being referred to MTRH (OR 2.1, 95% CI 1.0-4.3, p = 0.0459). Conclusions: Lack of antenatal care and lack of maternal education are important risk factors for maternal mortality, even after adjusting for comorbidities and complications. Antenatal visits can provide opportunities for detecting risk factors for eclampsia and other underlying illnesses, but the visits need to be frequent and timely. Education enables access to information and helps empower women and their spouses to make appropriate decisions during pregnancy.
4,901
395
[ 1052, 302, 42, 10, 96, 16 ]
10
[ "maternal", "mortality", "risk", "factors", "cases", "95", "model", "maternal mortality", "95 ci", "ci" ]
[ "high maternal mortality", "risks mortality africa", "mortality developing countries", "maternal deaths africa", "mmr maternal mortality" ]
[CONTENT] Maternal mortality | Tertiary hospital | Risk factors | Kenya [SUMMARY]
[CONTENT] Maternal mortality | Tertiary hospital | Risk factors | Kenya [SUMMARY]
[CONTENT] Maternal mortality | Tertiary hospital | Risk factors | Kenya [SUMMARY]
[CONTENT] Maternal mortality | Tertiary hospital | Risk factors | Kenya [SUMMARY]
[CONTENT] Maternal mortality | Tertiary hospital | Risk factors | Kenya [SUMMARY]
[CONTENT] Maternal mortality | Tertiary hospital | Risk factors | Kenya [SUMMARY]
[CONTENT] Adolescent | Adult | Alcohol Drinking | Case-Control Studies | Comorbidity | Delivery, Obstetric | Eclampsia | Educational Status | Female | Humans | Kenya | Maternal Mortality | Medical Audit | Middle Aged | Parity | Pregnancy | Prenatal Care | Referral and Consultation | Retrospective Studies | Risk Factors | Tachycardia | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Alcohol Drinking | Case-Control Studies | Comorbidity | Delivery, Obstetric | Eclampsia | Educational Status | Female | Humans | Kenya | Maternal Mortality | Medical Audit | Middle Aged | Parity | Pregnancy | Prenatal Care | Referral and Consultation | Retrospective Studies | Risk Factors | Tachycardia | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Alcohol Drinking | Case-Control Studies | Comorbidity | Delivery, Obstetric | Eclampsia | Educational Status | Female | Humans | Kenya | Maternal Mortality | Medical Audit | Middle Aged | Parity | Pregnancy | Prenatal Care | Referral and Consultation | Retrospective Studies | Risk Factors | Tachycardia | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Alcohol Drinking | Case-Control Studies | Comorbidity | Delivery, Obstetric | Eclampsia | Educational Status | Female | Humans | Kenya | Maternal Mortality | Medical Audit | Middle Aged | Parity | Pregnancy | Prenatal Care | Referral and Consultation | Retrospective Studies | Risk Factors | Tachycardia | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Alcohol Drinking | Case-Control Studies | Comorbidity | Delivery, Obstetric | Eclampsia | Educational Status | Female | Humans | Kenya | Maternal Mortality | Medical Audit | Middle Aged | Parity | Pregnancy | Prenatal Care | Referral and Consultation | Retrospective Studies | Risk Factors | Tachycardia | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Alcohol Drinking | Case-Control Studies | Comorbidity | Delivery, Obstetric | Eclampsia | Educational Status | Female | Humans | Kenya | Maternal Mortality | Medical Audit | Middle Aged | Parity | Pregnancy | Prenatal Care | Referral and Consultation | Retrospective Studies | Risk Factors | Tachycardia | Tertiary Care Centers | Young Adult [SUMMARY]
[CONTENT] high maternal mortality | risks mortality africa | mortality developing countries | maternal deaths africa | mmr maternal mortality [SUMMARY]
[CONTENT] high maternal mortality | risks mortality africa | mortality developing countries | maternal deaths africa | mmr maternal mortality [SUMMARY]
[CONTENT] high maternal mortality | risks mortality africa | mortality developing countries | maternal deaths africa | mmr maternal mortality [SUMMARY]
[CONTENT] high maternal mortality | risks mortality africa | mortality developing countries | maternal deaths africa | mmr maternal mortality [SUMMARY]
[CONTENT] high maternal mortality | risks mortality africa | mortality developing countries | maternal deaths africa | mmr maternal mortality [SUMMARY]
[CONTENT] high maternal mortality | risks mortality africa | mortality developing countries | maternal deaths africa | mmr maternal mortality [SUMMARY]
[CONTENT] maternal | mortality | risk | factors | cases | 95 | model | maternal mortality | 95 ci | ci [SUMMARY]
[CONTENT] maternal | mortality | risk | factors | cases | 95 | model | maternal mortality | 95 ci | ci [SUMMARY]
[CONTENT] maternal | mortality | risk | factors | cases | 95 | model | maternal mortality | 95 ci | ci [SUMMARY]
[CONTENT] maternal | mortality | risk | factors | cases | 95 | model | maternal mortality | 95 ci | ci [SUMMARY]
[CONTENT] maternal | mortality | risk | factors | cases | 95 | model | maternal mortality | 95 ci | ci [SUMMARY]
[CONTENT] maternal | mortality | risk | factors | cases | 95 | model | maternal mortality | 95 ci | ci [SUMMARY]
[CONTENT] maternal | women | 100 | 100 000 | developing | 000 | mmr | mortality | countries | 000 live [SUMMARY]
[CONTENT] models | cases | controls | mother | included | values | approximately | variables | ratio | model [SUMMARY]
[CONTENT] ci | 95 ci | 95 | 0001 | model | mortality | maternal | factors | maternal mortality | having [SUMMARY]
[CONTENT] risk | care | pregnancy | mortality | birth | risk factors | kenya | antenatal | factors | progress achieving mdg 2015 [SUMMARY]
[CONTENT] maternal | mortality | risk | cases | factors | 95 ci | ci | maternal mortality | models | authors [SUMMARY]
[CONTENT] maternal | mortality | risk | cases | factors | 95 ci | ci | maternal mortality | models | authors [SUMMARY]
[CONTENT] Africa | Kenya | Millennium Development Goal | MDG | Five | three quarters | 2015 ||| tertiary | Kenya [SUMMARY]
[CONTENT] 150 | 300 ||| 15-49 years | the Moi Teaching and Referral Hospital | MTRH | Kenya | January 2004 and March 2011 ||| [SUMMARY]
[CONTENT] 3.3 | 95% | CI | 1.1-10.4 | 0.0284 | 3.9 | 95% | CI | 1.7-9.2 | 0.0016 | 4.6 | 95% | CI | 2.1-10.1 | 0.0001 | 4.1 | 95% | CI | 1.6-10.4 | eclampsia | 10.9 | 95% | CI | 3.7 | 9.0 | 95% | CI | 4.2-19.3 | 10.7 | 95% | CI | 2.7-43.4 | 0.0002 | MTRH | 2.1 | 95% | CI | 1.0-4.3 | 0.0459 [SUMMARY]
[CONTENT] ||| eclampsia ||| [SUMMARY]
[CONTENT] Africa | Kenya | Millennium Development Goal | MDG | Five | three quarters | 2015 ||| tertiary | Kenya ||| 150 | 300 ||| 15-49 years | the Moi Teaching and Referral Hospital | MTRH | Kenya | January 2004 and March 2011 ||| ||| ||| 3.3 | 95% | CI | 1.1-10.4 | 0.0284 | 3.9 | 95% | CI | 1.7-9.2 | 0.0016 | 4.6 | 95% | CI | 2.1-10.1 | 0.0001 | 4.1 | 95% | CI | 1.6-10.4 | eclampsia | 10.9 | 95% | CI | 3.7 | 9.0 | 95% | CI | 4.2-19.3 | 10.7 | 95% | CI | 2.7-43.4 | 0.0002 | MTRH | 2.1 | 95% | CI | 1.0-4.3 | 0.0459 ||| ||| eclampsia ||| [SUMMARY]
[CONTENT] Africa | Kenya | Millennium Development Goal | MDG | Five | three quarters | 2015 ||| tertiary | Kenya ||| 150 | 300 ||| 15-49 years | the Moi Teaching and Referral Hospital | MTRH | Kenya | January 2004 and March 2011 ||| ||| ||| 3.3 | 95% | CI | 1.1-10.4 | 0.0284 | 3.9 | 95% | CI | 1.7-9.2 | 0.0016 | 4.6 | 95% | CI | 2.1-10.1 | 0.0001 | 4.1 | 95% | CI | 1.6-10.4 | eclampsia | 10.9 | 95% | CI | 3.7 | 9.0 | 95% | CI | 4.2-19.3 | 10.7 | 95% | CI | 2.7-43.4 | 0.0002 | MTRH | 2.1 | 95% | CI | 1.0-4.3 | 0.0459 ||| ||| eclampsia ||| [SUMMARY]
Evaluation of patients admitted with musculoskeletal tuberculosis: sixteen years' experience from a single center in Turkey.
34126976
The aim of our study was to describe musculoskeletal system tuberculosis (TB) as a single-center experience.
BACKGROUND
This is a retrospective observational study conducted at a TB dispensary in the east Mediterranean part of Turkey between 2004 and 2020. The clinical and demographic characteristics, including age, gender, involvement location, duration of illness, presenting complaint, local examination findings, and treatment outcome, were retrieved from the case files and analyzed. Statistical analyses were performed using SPSS Statistics version 17.0 (IBM). The normality of the data was assessed using the Kolmogorov-Smirnov test. The descriptive statistics were reported as mean ± standard deviation, medians, and ranges (min-max).
METHODS
Overall, 31 patients (3.2 % of all TB cases) with a mean age of 44.2 ± 16.7 years had musculoskeletal tuberculosis. The mean duration of treatment was 12.9 ± 5.5 months. Of the 31 patients, six (19.4 %) had concomitant pulmonary TB. One of the patients was in the pediatric age group, and two were in the geriatric group. The most commonly affected site was the vertebra. The most common complaint was back pain, seen in 22 patients (70.9 %).
RESULTS
Physicians should maintain a high index of suspicion for musculoskeletal TB. If diagnosis and treatment are delayed, spinal damage and other consequences might be incurable.
CONCLUSIONS
[ "Adult", "Aged", "Back Pain", "Child", "Humans", "Middle Aged", "Musculoskeletal Diseases", "Retrospective Studies", "Tuberculosis", "Turkey" ]
8204496
Introduction
Tuberculosis (TB) is a chronic infectious disease mainly caused by Mycobacterium tuberculosis. TB is a public health problem, especially in underdeveloped or developing countries, and remains a major cause of morbidity and mortality [1]. As reported by the World Health Organization (WHO), the estimated global incidence of TB was 10.0 million cases in 2018 [2]. In Turkey, a developing country, the TB incidence (the number of new TB cases per 100,000 population per year) has decreased annually, from 29.8 in 2005 to 14.4 in 2018 [3, 4]. More than 80 % of TB cases are pulmonary TB. However, several manifestations of extra-pulmonary TB have been reported, including musculoskeletal system TB [5]. Although pulmonary TB has been extensively covered in the literature [6–8], publications on musculoskeletal TB are relatively limited and have mostly been case reports [9–11]. In addition, the diagnosis of musculoskeletal system TB can be challenging, particularly for physicians who are not familiar with it, because TB can present in atypical locations and mimic tumoral lesions [12]. Therefore, a biopsy should be taken to confirm the diagnosis. Histopathologically, the granulomatous inflammatory reaction that occurs against mycobacterial species is composed of epithelioid histiocytes, giant cells, and lymphocytes, in some cases with centrally located caseous necrosis. Granulomas can occur not only in TB but also against fungi as infectious agents, in sarcoidosis as an immunological reaction, or against a foreign body. Histochemically, Mycobacterium tuberculosis bacilli can be demonstrated inside the granuloma, mainly in necrotic areas, with Ehrlich-Ziehl-Neelsen (EZN) staining to confirm the diagnosis. The diagnosis can also be made by culture and by amplification of Mycobacterium tuberculosis DNA using the polymerase chain reaction (PCR) method. The diagnosis of musculoskeletal TB is usually tricky and can be delayed. A positive tuberculin skin test or an abnormal chest radiograph supports the diagnosis, though it is not excluded by negative results [13]. With respect to immunosuppression, the incidence of musculoskeletal TB and of all other extra-pulmonary TB infections depends on the degree of weakening of cellular immunity. People with human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) and latent TB are also far more likely to progress to active disease, with a nearly 10 % risk of developing active disease each year, compared with a lifelong reactivation risk of about 5 % in an HIV/AIDS-negative population. As impairment of the immune system progresses, patients living with HIV/AIDS are more likely to develop extra-pulmonary TB, such as musculoskeletal TB [14]. Keeping in mind the increased prevalence of TB in recent years [15], we deem it important to review musculoskeletal involvement in TB. Accordingly, the objective of this study was to describe musculoskeletal system TB as a single-center experience.
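As a back-of-the-envelope check on the Turkish incidence figures just cited (29.8 per 100,000 in 2005 down to 14.4 in 2018), the implied compound annual rate of decline can be computed as below. This is an illustrative calculation, not a number reported in the article.

```python
# Compound annual rate of decline implied by two incidence figures.
start, end, years = 29.8, 14.4, 2018 - 2005  # cases per 100,000; 13 years

annual_decline = 1 - (end / start) ** (1 / years)
print(f"~{annual_decline:.1%} average annual decline over {years} years")
# -> ~5.4% average annual decline over 13 years
```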
null
null
null
null
Conclusions
To sum up, although it is a very rare disease, surgeons and clinicians should consider the diagnosis of musculoskeletal TB infection, especially in the presence of pain not relieved despite physical therapy and of B symptoms such as fever and weight loss. Further, in these cases, which can be confused with cancer clinically and radiologically, it is appropriate to take a biopsy to make the diagnosis at an earlier stage and to exclude cancer. In addition, cases in which a granulomatous inflammatory reaction is reported should be referred to the dispensary immediately. If diagnosis and treatment are delayed, spinal damage, in addition to damage to the bones and joints, may be an inevitable consequence. Consequently, musculoskeletal TB needs a comprehensive approach and cooperation.
[ "Materials and methods", "Results", "Bursit case" ]
[ "This retrospective study was conducted as a descriptive single-center experience. Patients who were followed up in Osmaniye Tuberculosis Dispensary between 2004 and 2020 were screened from the dispensary records. This dispensary is a governmental institution located in the south of the country with a population of approximately 350,000 (~ 90 % urban and ~ 10 % rural) that provides free service (including medicines) to all patients. In addition, all TB patients living in this area have to apply to this dispensary to provide directly observed therapy since it is the only TB center in the city. Among them, cases with musculoskeletal system involvement were noted. All patients were diagnosed with tuberculosis according to the histopathologic examinations by a pathologist. The biopsy samples were evaluated grossly and processed using the routine histopathologic technique (fixation and paraffin embedding) and then stained with Haematoxylin-Eosin.\nDefinite diagnosis of musculoskeletal TB was made when either (I) joint aspiration or tissue biopsy/fine-needle aspiration cytology (FNAC) revealed acid-fast bacilli (AFB) detection by smear examination, (II) culture of the synovial fluid (or tissue aspirated) grew Mycobacterium tuberculosis, or (III) histopathology/FNAC revealed granulomatous lesions with or without caseation with AFB positivity. Probable TB was considered if there was no explicit evidence of AFB or granuloma, but clinical, radiological, and serological evidence suggested TB and the patient responded to empirical antitubercular therapy. Other tests include tuberculin skin sensitivity (PPD) test and TB interferon γ-assays. Radiology included X-ray, magnetic resonance imaging (MRI), or computed tomography (CT) of the organ involved. CT-guided biopsy or fine-needle aspiration and arthroscopic biopsies were also performed in some cases. The clinical and demographic characteristics, including age, gender, involvement location and duration of illness, presenting complaint, local examination findings, treatment outcome were retrieved and analyzed from the case files. TB is a notifiable disease, and data are recorded prospectively by our specialist TB nurses. Patients with missing data in terms of diagnosis, clinical follow-up, or treatment, and patients transferred to another tuberculosis center were excluded.\nTypical treatment of all cases included the use of a four-drug regimen: isoniazid, rifampicin, ethambutol, and pyrazinamide for the initiative phase of two months, followed by 4–10 months of continuation phase with isoniazid and rifampicin. If the case was a recurrence TB, streptomycin applied to the patients intravenously (IV) is added to the treatment regimen for first two months. The drug doses were adjusted according to the weight of the patients. Drug susceptibility testing for the first-line drugs was conducted for patients who did not respond to treatment, and regimens were changed based on the susceptibility results or strong clinical suspicion of unresponsiveness.\nStatistical analyses were performed using SPSS Statistics version 17.0 (IBM). The normality of data analyzed by using Kolmogorov-Smirnov. The descriptive statistics were reported as mean ± standard deviation, medians, and ranges (min-max).\nThe current study protocol was approved by the Local Ethics Committee of Adana City Training and Research Hospital (decision number: 321/2018). This study was exempted from informed consent form by this ethics committee, since it was a retrospective study. 
All authors confirmed that all methods were performed in accordance with the relevant guidelines and regulations.", "A total of 949 patients were followed up due to tuberculosis in our center between 2004 and 2020. Overall, 31 (3.2 % of all TB cases) patients (14 males, 17 females) with a mean age of 44.2 ± 16.7 years (ranges: 3 to 75 years) had musculoskeletal tuberculosis. Demographic features are summarized in Table 1.\nTable 1Demographic FeaturesVariablesResultsAge (years)a44.2± 16.749 (3-75)Gender n, (%) - Male14 (45.2) - Female17 (54.8)Age group (years) n, (%) - Pediatric (<18)1 (3.2) - Adult (18-64)28 (90.3) - Geriatric (≥ 65)2 (6.5)Race n, (%) - Turkish29 (93.5) - Afghanistan1 (3.2) - Syria1 (3.2)Occupation n, (%) - Housewife12 (38.7) - Butcher4 (12.9) - Student4 (12.9) - Retired4 (12.9) - Worker2 (6.5) - Farmer2 (6.5) - Cooker1 (3.2) - Craftsman1 (3.2) - Cooker1 (3.2)Family history of TB n, (%)6 (19.4)Concomitant Pulmonary TB n, (%)6 (19.4)TB Tuberculosisamean ± S.D and median (minimum-maximum)\nDemographic Features\nTB Tuberculosis\namean ± S.D and median (minimum-maximum)\nClinical presentations of the cases are given in Table 2. Back pain (n = 22, 70.9 %) and shoulder pain (n = 2, 6.5 %) were the most seen symptoms, and thoracic or lumbar vertebra was the most involved location (74.2 %). The example images of a shoulder and thoracic vertebra TB are also shown in Figs. 1 and 2, respectively. Most of the cases (58.1 %) were diagnosed in tertiary hospitals. The mean duration of treatment was 12.9 ± 5.5 months (range, 1.3 week to 27.7 months). Two of the patients had a recurrent illness and were treated with the HRZE + S (isoniazid, rifampicin, ethambutol, and pyrazinamide + IV streptomycin) regimen (Table 2).\nTable 2Clinical PresentationVariablen%Symptoms - Back pain2270.9 - Mass on back and neck13.2 - Knee pain13.2 - Wrist swelling13.2 - Chest pain13.2 - Hand pain13.2 - Shoulder pain26.5 - Abdominal/hip pain13.2Location - Vertebra2374.2 - Hand26.5 - Knee joint13.2 - Costa13.2 - Shoulder13.2 - Patella13.2 - Psoas muscle26.5Vertebra Location - Thoracic1032.3 - Lumbar1238.7 - Sacrococcygeal13.2Center of Diagnosis - Secondary1341.9 - Tertiary1858.1Treatment duration (months)a12.9± 5.5 & 12.0 (1.3-27.7)Outcome - Treatment completed2683.8 - Treatment is going on39.6 - Exitus13.2 - Leaved treatment13.2Primer TB2993.5Recurrence TB26.5Treatment Regimen - HRZE2993.5 - HRZE + S26.5TB Tuberculosis, HRZE isoniazid + rifampicin + ethambutol + pyrazinamide, HRZE + S isoniazid + rifampicin + ethambutol + pyrazinamide + streptomycinamean ± S.D & median (percentile 25-percentile 75)Fig. 1Granulomatosis inflammatory reaction against M. tuberculosis with caseous necrosis (white arrow) seen in muscle tissue (M) (×200, H&E)Fig. 2Granulomatous inflammatory reaction involving caseified necrosis that destroys bone tissue (x200, H&E)\nClinical Presentation\nTB Tuberculosis, HRZE isoniazid + rifampicin + ethambutol + pyrazinamide, HRZE + S isoniazid + rifampicin + ethambutol + pyrazinamide + streptomycin\namean ± S.D & median (percentile 25-percentile 75)\nGranulomatosis inflammatory reaction against M. tuberculosis with caseous necrosis (white arrow) seen in muscle tissue (M) (×200, H&E)\nGranulomatous inflammatory reaction involving caseified necrosis that destroys bone tissue (x200, H&E)\nBursit case The example images are from a 49-year-old man who presented with fatigue, left shoulder pain, and restriction in shoulder movements (Fig. 3). 
Laboratory investigations yielded increased levels of C-reactive protein and erythrocyte sedimentation rate. MRI showed joint effusion, synovial hypertrophy, bursitis, rice bodies, and cortical erosions. The pathological examination was consistent with lymphohistiocytic inflammatory response, including giant cells and caseified necrosis in adipose and connective tissue (i.e., granulomatous inflammatory response). The patient was received a 12 month-treatment (2 months HRZE + 10 months HR). After 12 months of treatment, the patient’s treatment file was closed as ‘treatment completion’ after his complaints completely disappeared.\nFig. 3Preoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nPreoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nThe example images are from a 49-year-old man who presented with fatigue, left shoulder pain, and restriction in shoulder movements (Fig. 3). Laboratory investigations yielded increased levels of C-reactive protein and erythrocyte sedimentation rate. MRI showed joint effusion, synovial hypertrophy, bursitis, rice bodies, and cortical erosions. The pathological examination was consistent with lymphohistiocytic inflammatory response, including giant cells and caseified necrosis in adipose and connective tissue (i.e., granulomatous inflammatory response). The patient was received a 12 month-treatment (2 months HRZE + 10 months HR). After 12 months of treatment, the patient’s treatment file was closed as ‘treatment completion’ after his complaints completely disappeared.\nFig. 3Preoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nPreoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case", "The example images are from a 49-year-old man who presented with fatigue, left shoulder pain, and restriction in shoulder movements (Fig. 3). Laboratory investigations yielded increased levels of C-reactive protein and erythrocyte sedimentation rate. MRI showed joint effusion, synovial hypertrophy, bursitis, rice bodies, and cortical erosions. The pathological examination was consistent with lymphohistiocytic inflammatory response, including giant cells and caseified necrosis in adipose and connective tissue (i.e., granulomatous inflammatory response). The patient was received a 12 month-treatment (2 months HRZE + 10 months HR). After 12 months of treatment, the patient’s treatment file was closed as ‘treatment completion’ after his complaints completely disappeared.\nFig. 3Preoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nPreoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case" ]
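The Methods above describe SPSS-based descriptive statistics with a Kolmogorov-Smirnov normality check. For readers without SPSS, a minimal Python sketch of the same workflow follows; the duration values are hypothetical, and note that estimating the normal parameters from the sample, as done here, does not apply the Lilliefors correction that SPSS uses in this situation, so the p-value is only approximate.

```python
# Minimal stand-in for the reported SPSS workflow: KS normality check plus
# mean ± SD and median (min-max). Values are hypothetical, not study data.
import numpy as np
from scipy import stats

months = np.array([12.0, 9.5, 14.0, 6.0, 18.5, 12.5, 27.7, 10.0])

# One-sample KS test against a normal with the sample's own mean and SD.
ks_stat, p = stats.kstest(months, "norm", args=(months.mean(), months.std(ddof=1)))

print(f"mean ± SD: {months.mean():.1f} ± {months.std(ddof=1):.1f}")
print(f"median (min-max): {np.median(months):.1f} "
      f"({months.min():.1f}-{months.max():.1f})")
print(f"Kolmogorov-Smirnov: D = {ks_stat:.3f}, p = {p:.3f}")
```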
[ null, null, null ]
[ "Introduction", "Materials and methods", "Results", "Bursit case", "Discussion", "Conclusions" ]
[ "Tuberculosis (TB) is a chronic infectious disease mainly caused by Mycobacterium tuberculosis. TB is a public health problem, especially in underdeveloped or developing countries, and remains a major cause of morbidity and mortality [1]. As reported by World Health Organization (WHO), the estimated global incidence of TB cases was 10.0 million in 2018 [2]. In Turkey, as a developing country, the TB incidence (the number of new TB cases per 100,000 populations per year) decreased annually, from 29.8 to 2005 to 14.4 in 2018 [3, 4].\nMore than 80 % of TB cases are pulmonary TB. However, several manifestations of extra-pulmonary TB have been reported, including musculoskeletal system TB [5]. Although pulmonary TB has been extensively covered in the literature [6–8], publications on musculoskeletal TB are relatively limited, and have been reported mostly in case reports [9–11]. In addition, the diagnosis of musculoskeletal system TB can be challenging, particularly for the physicians who are not familiar, because TB can present in atypical locations and mimic tumoral lesions [12]. Therefore, a biopsy should be taken to confirm the diagnosis.\nHistopathologically, a granulomatous inflammatory reaction that occurs against mycobacterial species is composed of epithelioid histiocytes, giant cells, lymphocytes, and some of them have centrally located caseous necrosis. The granuloma can not only occur against TB disease, but also against fungi as an infectious agent, sarcoidosis as an immunological reaction, or against a foreign body. Histochemically, Mycobacterium tuberculosis bacilli can be demonstrated inside the granuloma, mainly in necrotic areas with Erlich-Ziehl-Neelsen (EZN) stain to confirm the diagnosis. Also, the diagnosis can be made by culture and amplification of Mycobacterium tuberculosis DNA using the polymerase chain reaction (PCR) method. The diagnosis of musculoskeletal TB is usually tricky and could be delayed. A positive skin tuberculin test or abnormal chest radiograph will support the diagnosis though it is not excluded by negative results [13].\nRespecting immunosuppression, the incidence of musculoskeletal TB and all other extra-pulmonary TB infections depend on the grade of weakening cellular immunity. People with human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) and latent TB are also far more likely to progress active disease, with a nearly 10 % risk of developing active disease each year, in comparison with a lifelong reactivation risk of about 5 % in an HIV/AIDS-negative population. As impairment of the immune system progresses, patients living with HIV/AIDS are more likely to develop extra-pulmonary TB, just like musculoskeletal TB [14].\nKeeping in mind the increased prevalence of TB in recent years [15], we deem it important to review musculoskeletal involvement of TB. Accordingly, the objective of this study was to elaborate musculoskeletal system TB as a single-center experience.", "This retrospective study was conducted as a descriptive single-center experience. Patients who were followed up in Osmaniye Tuberculosis Dispensary between 2004 and 2020 were screened from the dispensary records. This dispensary is a governmental institution located in the south of the country with a population of approximately 350,000 (~ 90 % urban and ~ 10 % rural) that provides free service (including medicines) to all patients. 
In addition, all TB patients living in this area have to apply to this dispensary to provide directly observed therapy since it is the only TB center in the city. Among them, cases with musculoskeletal system involvement were noted. All patients were diagnosed with tuberculosis according to the histopathologic examinations by a pathologist. The biopsy samples were evaluated grossly and processed using the routine histopathologic technique (fixation and paraffin embedding) and then stained with Haematoxylin-Eosin.\nDefinite diagnosis of musculoskeletal TB was made when either (I) joint aspiration or tissue biopsy/fine-needle aspiration cytology (FNAC) revealed acid-fast bacilli (AFB) detection by smear examination, (II) culture of the synovial fluid (or tissue aspirated) grew Mycobacterium tuberculosis, or (III) histopathology/FNAC revealed granulomatous lesions with or without caseation with AFB positivity. Probable TB was considered if there was no explicit evidence of AFB or granuloma, but clinical, radiological, and serological evidence suggested TB and the patient responded to empirical antitubercular therapy. Other tests include tuberculin skin sensitivity (PPD) test and TB interferon γ-assays. Radiology included X-ray, magnetic resonance imaging (MRI), or computed tomography (CT) of the organ involved. CT-guided biopsy or fine-needle aspiration and arthroscopic biopsies were also performed in some cases. The clinical and demographic characteristics, including age, gender, involvement location and duration of illness, presenting complaint, local examination findings, treatment outcome were retrieved and analyzed from the case files. TB is a notifiable disease, and data are recorded prospectively by our specialist TB nurses. Patients with missing data in terms of diagnosis, clinical follow-up, or treatment, and patients transferred to another tuberculosis center were excluded.\nTypical treatment of all cases included the use of a four-drug regimen: isoniazid, rifampicin, ethambutol, and pyrazinamide for the initiative phase of two months, followed by 4–10 months of continuation phase with isoniazid and rifampicin. If the case was a recurrence TB, streptomycin applied to the patients intravenously (IV) is added to the treatment regimen for first two months. The drug doses were adjusted according to the weight of the patients. Drug susceptibility testing for the first-line drugs was conducted for patients who did not respond to treatment, and regimens were changed based on the susceptibility results or strong clinical suspicion of unresponsiveness.\nStatistical analyses were performed using SPSS Statistics version 17.0 (IBM). The normality of data analyzed by using Kolmogorov-Smirnov. The descriptive statistics were reported as mean ± standard deviation, medians, and ranges (min-max).\nThe current study protocol was approved by the Local Ethics Committee of Adana City Training and Research Hospital (decision number: 321/2018). This study was exempted from informed consent form by this ethics committee, since it was a retrospective study. All authors confirmed that all methods were performed in accordance with the relevant guidelines and regulations.", "A total of 949 patients were followed up due to tuberculosis in our center between 2004 and 2020. Overall, 31 (3.2 % of all TB cases) patients (14 males, 17 females) with a mean age of 44.2 ± 16.7 years (ranges: 3 to 75 years) had musculoskeletal tuberculosis. 
Demographic features are summarized in Table 1.\nTable 1Demographic FeaturesVariablesResultsAge (years)a44.2± 16.749 (3-75)Gender n, (%) - Male14 (45.2) - Female17 (54.8)Age group (years) n, (%) - Pediatric (<18)1 (3.2) - Adult (18-64)28 (90.3) - Geriatric (≥ 65)2 (6.5)Race n, (%) - Turkish29 (93.5) - Afghanistan1 (3.2) - Syria1 (3.2)Occupation n, (%) - Housewife12 (38.7) - Butcher4 (12.9) - Student4 (12.9) - Retired4 (12.9) - Worker2 (6.5) - Farmer2 (6.5) - Cooker1 (3.2) - Craftsman1 (3.2) - Cooker1 (3.2)Family history of TB n, (%)6 (19.4)Concomitant Pulmonary TB n, (%)6 (19.4)TB Tuberculosisamean ± S.D and median (minimum-maximum)\nDemographic Features\nTB Tuberculosis\namean ± S.D and median (minimum-maximum)\nClinical presentations of the cases are given in Table 2. Back pain (n = 22, 70.9 %) and shoulder pain (n = 2, 6.5 %) were the most seen symptoms, and thoracic or lumbar vertebra was the most involved location (74.2 %). The example images of a shoulder and thoracic vertebra TB are also shown in Figs. 1 and 2, respectively. Most of the cases (58.1 %) were diagnosed in tertiary hospitals. The mean duration of treatment was 12.9 ± 5.5 months (range, 1.3 week to 27.7 months). Two of the patients had a recurrent illness and were treated with the HRZE + S (isoniazid, rifampicin, ethambutol, and pyrazinamide + IV streptomycin) regimen (Table 2).\nTable 2Clinical PresentationVariablen%Symptoms - Back pain2270.9 - Mass on back and neck13.2 - Knee pain13.2 - Wrist swelling13.2 - Chest pain13.2 - Hand pain13.2 - Shoulder pain26.5 - Abdominal/hip pain13.2Location - Vertebra2374.2 - Hand26.5 - Knee joint13.2 - Costa13.2 - Shoulder13.2 - Patella13.2 - Psoas muscle26.5Vertebra Location - Thoracic1032.3 - Lumbar1238.7 - Sacrococcygeal13.2Center of Diagnosis - Secondary1341.9 - Tertiary1858.1Treatment duration (months)a12.9± 5.5 & 12.0 (1.3-27.7)Outcome - Treatment completed2683.8 - Treatment is going on39.6 - Exitus13.2 - Leaved treatment13.2Primer TB2993.5Recurrence TB26.5Treatment Regimen - HRZE2993.5 - HRZE + S26.5TB Tuberculosis, HRZE isoniazid + rifampicin + ethambutol + pyrazinamide, HRZE + S isoniazid + rifampicin + ethambutol + pyrazinamide + streptomycinamean ± S.D & median (percentile 25-percentile 75)Fig. 1Granulomatosis inflammatory reaction against M. tuberculosis with caseous necrosis (white arrow) seen in muscle tissue (M) (×200, H&E)Fig. 2Granulomatous inflammatory reaction involving caseified necrosis that destroys bone tissue (x200, H&E)\nClinical Presentation\nTB Tuberculosis, HRZE isoniazid + rifampicin + ethambutol + pyrazinamide, HRZE + S isoniazid + rifampicin + ethambutol + pyrazinamide + streptomycin\namean ± S.D & median (percentile 25-percentile 75)\nGranulomatosis inflammatory reaction against M. tuberculosis with caseous necrosis (white arrow) seen in muscle tissue (M) (×200, H&E)\nGranulomatous inflammatory reaction involving caseified necrosis that destroys bone tissue (x200, H&E)\nBursit case The example images are from a 49-year-old man who presented with fatigue, left shoulder pain, and restriction in shoulder movements (Fig. 3). Laboratory investigations yielded increased levels of C-reactive protein and erythrocyte sedimentation rate. MRI showed joint effusion, synovial hypertrophy, bursitis, rice bodies, and cortical erosions. The pathological examination was consistent with lymphohistiocytic inflammatory response, including giant cells and caseified necrosis in adipose and connective tissue (i.e., granulomatous inflammatory response). 
The patient was received a 12 month-treatment (2 months HRZE + 10 months HR). After 12 months of treatment, the patient’s treatment file was closed as ‘treatment completion’ after his complaints completely disappeared.\nFig. 3Preoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nPreoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nThe example images are from a 49-year-old man who presented with fatigue, left shoulder pain, and restriction in shoulder movements (Fig. 3). Laboratory investigations yielded increased levels of C-reactive protein and erythrocyte sedimentation rate. MRI showed joint effusion, synovial hypertrophy, bursitis, rice bodies, and cortical erosions. The pathological examination was consistent with lymphohistiocytic inflammatory response, including giant cells and caseified necrosis in adipose and connective tissue (i.e., granulomatous inflammatory response). The patient was received a 12 month-treatment (2 months HRZE + 10 months HR). After 12 months of treatment, the patient’s treatment file was closed as ‘treatment completion’ after his complaints completely disappeared.\nFig. 3Preoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nPreoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case", "The example images are from a 49-year-old man who presented with fatigue, left shoulder pain, and restriction in shoulder movements (Fig. 3). Laboratory investigations yielded increased levels of C-reactive protein and erythrocyte sedimentation rate. MRI showed joint effusion, synovial hypertrophy, bursitis, rice bodies, and cortical erosions. The pathological examination was consistent with lymphohistiocytic inflammatory response, including giant cells and caseified necrosis in adipose and connective tissue (i.e., granulomatous inflammatory response). The patient was received a 12 month-treatment (2 months HRZE + 10 months HR). After 12 months of treatment, the patient’s treatment file was closed as ‘treatment completion’ after his complaints completely disappeared.\nFig. 3Preoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case\nPreoperative (left) and postoperative (right) Magnetic Rezonans Imaging (sagittal plane T1) of bursitis case", "The majority of extra-pulmonary TB cases involve the pleura, musculoskeletal and lymphatic systems. Most cases verified to have a previous pulmonary TB origin [16]. TB bacilli spread over the pleural cavity, starting from a pulmonary infection focus on that occasion, immigrates through the blood vessels or lymphatics to other organs developing extrapulmonary TB.\nThe distribution of age and site of musculoskeletal TB has been previously studied. Overall, of all TB cases, 1-4 % showed musculoskeletal involvement. In a retrospective and observational study conducted in a tertiary area with the highest prevalence of TB worldwide, three peaks in the first, third, and sixth decade of life were found [17]. However, some reports highlighted a bimodal age distribution [18]. In a nine-year single-center experience, the rate of musculoskeletal TB was found as 4 % [19]. In our study, the rate of musculoskeletal system TB was 3.2 %, which was compatible with the previous reports. On the other hand, the mean age was 44 years, and 90 % of all cases were adult patients. 
We did not find an age peak in childhood or in the advanced age group. Musculoskeletal TB is distributed almost equally among males and females in our sample group.\nIn the literature, most of the TB cases present with spine involvement. More than 10 % of patients with extra-pulmonary TB have skeletal involvement [20]. The most common form of skeletal TB is the Pott’s disease, comprising approximately half of musculoskeletal TB cases, and followed by TB arthritis and extra-spinal TB osteomyelitis [21]. In the retrospective study done by Held et al. [17], 78 % of the cases had spine TB while the remaining 21.6 % had extra-spinal diseases, comprising hip, knee, foot/ankle, shoulder, elbow, wrist, and others. The thoracic region of the vertebral column is commonly involved [20]. Likewise, in our study, the prevalent anatomically affected location was the spine (23 of 31 patients; 75 %) as well as hands, knee joint, costa, shoulder, patella, and psoas muscle. Only two cases in this study were involved with psoas muscle. These two cases had both concomitant pulmonary TB. The psoas muscle is a retroperitoneal muscle that originates from the lateral borders of T12 to L5 vertebrae and ends as a tendon that inserts into the lesser trochanter. The primary TB of iliopsoas compartment abscess with occult cause rarely encountered in the clinical practice and is generally idiopathic.\nThe joints’ TB infection comes after hematogenous spread or direct invasion from neighboring tissue of TB osteomyelitis. Mostly monoarticular joints are involved such as the knee and hip. Oligoarticular/polyarticular patterns are very rare, ranging from 5 to 15 % of cases, sometimes with small joint involvement, and ordinary in immunosuppressed patients [22]. A patient in our series (male, 38-year-old, butcher), TB was detected in the biopsy performed a long time after the local infectious swelling developed with a knife sticking in the left hand at the workplace.\nThe delays in the diagnosis of musculoskeletal TB have been sufficiently presented [13]. This may be due to the patients’ uncertain histories, perhaps complicated by inaccurate stories of irrelevant trauma, and lack of presence of a concomitant pulmonary involvement. In the current study, complaints of 70.9 % of cases have been started as back pain. Therefore, these patients have been investigated in neurosurgery, orthopedic and traumatology, or algology clinics for a long time. The patients usually presented with nonspecific symptoms such as back pain, mass on back/neck, knee pain, wrist swelling, chest pain, shoulder pain, hand pain, or abdominal pain.\nThere were some limitations to our work. Primarily, because of the retrospective nature of our study, it was not possible to obtain detailed information of every patient. Furthermore, because the patient registry systems of dispensaries and hospitals are not integrated, monitoring and clinical information, such as misdiagnosis or delayed diagnosis for other reasons, that are not recorded in the ledger could not be accessed. In Turkey, all patients suspected with TB, whether they are pediatrics or adults, are directed to dispensaries to confirm their diagnosis and to give their treatment. However, dispensaries do not have a chance to diagnose possible overlooked cases unless clinicians from other branches express suspicion of TB. 
Even if all musculoskeletal TB patients living in this area have to apply to this dispensary as it is the only TB center in the city, the present results should not be generalized to the entire Turkey.", "To sum up, although it is a very rare disease, surgeons and clinicians should be aware of the diagnosis of musculoskeletal TB infection especially the presence of pain not relieved despite the physical therapy and B symptoms such as fever and weight loss. Further, in these cases, confused with cancer clinically and radiologically, it would be appropriate to take a biopsy to make the diagnosis at an earlier stage and exclude the cancer. In addition, cases in which granulomatous inflammatory reaction is reported should be referred to the dispensary immediately. On the assumption that the diagnosis and treatment is delayed, spinal damage in addition to the bones and joints of the patients may be an inevitable consequence. Consequently, musculoskeletal TB needs a complex approach and cooperation." ]
[ "introduction", null, null, null, "discussion", "conclusion" ]
[ "Bursitis", "Extrapulmonary", "Pott's spine", "Tuberculosis dispensary" ]
Introduction: Tuberculosis (TB) is a chronic infectious disease mainly caused by Mycobacterium tuberculosis. TB is a public health problem, especially in underdeveloped or developing countries, and remains a major cause of morbidity and mortality [1]. As reported by World Health Organization (WHO), the estimated global incidence of TB cases was 10.0 million in 2018 [2]. In Turkey, as a developing country, the TB incidence (the number of new TB cases per 100,000 populations per year) decreased annually, from 29.8 to 2005 to 14.4 in 2018 [3, 4]. More than 80 % of TB cases are pulmonary TB. However, several manifestations of extra-pulmonary TB have been reported, including musculoskeletal system TB [5]. Although pulmonary TB has been extensively covered in the literature [6–8], publications on musculoskeletal TB are relatively limited, and have been reported mostly in case reports [9–11]. In addition, the diagnosis of musculoskeletal system TB can be challenging, particularly for the physicians who are not familiar, because TB can present in atypical locations and mimic tumoral lesions [12]. Therefore, a biopsy should be taken to confirm the diagnosis. Histopathologically, a granulomatous inflammatory reaction that occurs against mycobacterial species is composed of epithelioid histiocytes, giant cells, lymphocytes, and some of them have centrally located caseous necrosis. The granuloma can not only occur against TB disease, but also against fungi as an infectious agent, sarcoidosis as an immunological reaction, or against a foreign body. Histochemically, Mycobacterium tuberculosis bacilli can be demonstrated inside the granuloma, mainly in necrotic areas with Erlich-Ziehl-Neelsen (EZN) stain to confirm the diagnosis. Also, the diagnosis can be made by culture and amplification of Mycobacterium tuberculosis DNA using the polymerase chain reaction (PCR) method. The diagnosis of musculoskeletal TB is usually tricky and could be delayed. A positive skin tuberculin test or abnormal chest radiograph will support the diagnosis though it is not excluded by negative results [13]. Respecting immunosuppression, the incidence of musculoskeletal TB and all other extra-pulmonary TB infections depend on the grade of weakening cellular immunity. People with human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) and latent TB are also far more likely to progress active disease, with a nearly 10 % risk of developing active disease each year, in comparison with a lifelong reactivation risk of about 5 % in an HIV/AIDS-negative population. As impairment of the immune system progresses, patients living with HIV/AIDS are more likely to develop extra-pulmonary TB, just like musculoskeletal TB [14]. Keeping in mind the increased prevalence of TB in recent years [15], we deem it important to review musculoskeletal involvement of TB. Accordingly, the objective of this study was to elaborate musculoskeletal system TB as a single-center experience. Materials and methods: This retrospective study was conducted as a descriptive single-center experience. Patients who were followed up in Osmaniye Tuberculosis Dispensary between 2004 and 2020 were screened from the dispensary records. This dispensary is a governmental institution located in the south of the country with a population of approximately 350,000 (~ 90 % urban and ~ 10 % rural) that provides free service (including medicines) to all patients. 
In addition, all TB patients living in this area have to apply to this dispensary to provide directly observed therapy since it is the only TB center in the city. Among them, cases with musculoskeletal system involvement were noted. All patients were diagnosed with tuberculosis according to the histopathologic examinations by a pathologist. The biopsy samples were evaluated grossly and processed using the routine histopathologic technique (fixation and paraffin embedding) and then stained with Haematoxylin-Eosin. Definite diagnosis of musculoskeletal TB was made when either (I) joint aspiration or tissue biopsy/fine-needle aspiration cytology (FNAC) revealed acid-fast bacilli (AFB) detection by smear examination, (II) culture of the synovial fluid (or tissue aspirated) grew Mycobacterium tuberculosis, or (III) histopathology/FNAC revealed granulomatous lesions with or without caseation with AFB positivity. Probable TB was considered if there was no explicit evidence of AFB or granuloma, but clinical, radiological, and serological evidence suggested TB and the patient responded to empirical antitubercular therapy. Other tests include tuberculin skin sensitivity (PPD) test and TB interferon γ-assays. Radiology included X-ray, magnetic resonance imaging (MRI), or computed tomography (CT) of the organ involved. CT-guided biopsy or fine-needle aspiration and arthroscopic biopsies were also performed in some cases. The clinical and demographic characteristics, including age, gender, involvement location and duration of illness, presenting complaint, local examination findings, treatment outcome were retrieved and analyzed from the case files. TB is a notifiable disease, and data are recorded prospectively by our specialist TB nurses. Patients with missing data in terms of diagnosis, clinical follow-up, or treatment, and patients transferred to another tuberculosis center were excluded. Typical treatment of all cases included the use of a four-drug regimen: isoniazid, rifampicin, ethambutol, and pyrazinamide for the initiative phase of two months, followed by 4–10 months of continuation phase with isoniazid and rifampicin. If the case was a recurrence TB, streptomycin applied to the patients intravenously (IV) is added to the treatment regimen for first two months. The drug doses were adjusted according to the weight of the patients. Drug susceptibility testing for the first-line drugs was conducted for patients who did not respond to treatment, and regimens were changed based on the susceptibility results or strong clinical suspicion of unresponsiveness. Statistical analyses were performed using SPSS Statistics version 17.0 (IBM). The normality of data analyzed by using Kolmogorov-Smirnov. The descriptive statistics were reported as mean ± standard deviation, medians, and ranges (min-max). The current study protocol was approved by the Local Ethics Committee of Adana City Training and Research Hospital (decision number: 321/2018). This study was exempted from informed consent form by this ethics committee, since it was a retrospective study. All authors confirmed that all methods were performed in accordance with the relevant guidelines and regulations. Results: A total of 949 patients were followed up due to tuberculosis in our center between 2004 and 2020. Overall, 31 (3.2 % of all TB cases) patients (14 males, 17 females) with a mean age of 44.2 ± 16.7 years (ranges: 3 to 75 years) had musculoskeletal tuberculosis. Demographic features are summarized in Table 1. 
Results: A total of 949 patients were followed up for tuberculosis at our center between 2004 and 2020. Overall, 31 patients (3.2 % of all TB cases; 14 males, 17 females) with a mean age of 44.2 ± 16.7 years (range: 3 to 75 years) had musculoskeletal tuberculosis. Demographic features are summarized in Table 1.
Table 1. Demographic Features
- Age (years)^a: 44.2 ± 16.7; 49 (3-75)
- Gender, n (%): Male 14 (45.2); Female 17 (54.8)
- Age group (years), n (%): Pediatric (<18) 1 (3.2); Adult (18-64) 28 (90.3); Geriatric (≥65) 2 (6.5)
- Nationality, n (%): Turkish 29 (93.5); Afghan 1 (3.2); Syrian 1 (3.2)
- Occupation, n (%): Housewife 12 (38.7); Butcher 4 (12.9); Student 4 (12.9); Retired 4 (12.9); Worker 2 (6.5); Farmer 2 (6.5); Cook 1 (3.2); Craftsman 1 (3.2); Cook 1 (3.2)
- Family history of TB, n (%): 6 (19.4)
- Concomitant pulmonary TB, n (%): 6 (19.4)
TB, tuberculosis. ^a Mean ± SD and median (minimum-maximum).
Clinical presentations of the cases are given in Table 2. Back pain (n = 22, 70.9 %) and shoulder pain (n = 2, 6.5 %) were the most common symptoms, and the thoracic or lumbar vertebrae were the most frequently involved location (74.2 %). Example images of shoulder and thoracic vertebral TB are shown in Figs. 1 and 2, respectively. Most of the cases (58.1 %) were diagnosed in tertiary hospitals. The mean duration of treatment was 12.9 ± 5.5 months (range, 1.3 to 27.7 months). Two of the patients had recurrent disease and were treated with the HRZE + S (isoniazid, rifampicin, ethambutol, and pyrazinamide + IV streptomycin) regimen (Table 2).
Table 2. Clinical Presentation
- Symptoms, n (%): Back pain 22 (70.9); Mass on back and neck 1 (3.2); Knee pain 1 (3.2); Wrist swelling 1 (3.2); Chest pain 1 (3.2); Hand pain 1 (3.2); Shoulder pain 2 (6.5); Abdominal/hip pain 1 (3.2)
- Location, n (%): Vertebra 23 (74.2); Hand 2 (6.5); Knee joint 1 (3.2); Costa 1 (3.2); Shoulder 1 (3.2); Patella 1 (3.2); Psoas muscle 2 (6.5)
- Vertebral location, n (%): Thoracic 10 (32.3); Lumbar 12 (38.7); Sacrococcygeal 1 (3.2)
- Center of diagnosis, n (%): Secondary 13 (41.9); Tertiary 18 (58.1)
- Treatment duration (months)^a: 12.9 ± 5.5 & 12.0 (1.3-27.7)
- Outcome, n (%): Treatment completed 26 (83.8); Treatment ongoing 3 (9.6); Exitus 1 (3.2); Abandoned treatment 1 (3.2)
- Primary TB, n (%): 29 (93.5); Recurrent TB, n (%): 2 (6.5)
- Treatment regimen, n (%): HRZE 29 (93.5); HRZE + S 2 (6.5)
TB, tuberculosis; HRZE, isoniazid + rifampicin + ethambutol + pyrazinamide; HRZE + S, HRZE + streptomycin. ^a Mean ± SD & median (percentile 25-percentile 75).
Fig. 1. Granulomatous inflammatory reaction against M. tuberculosis with caseous necrosis (white arrow) seen in muscle tissue (M) (×200, H&E).
Fig. 2. Granulomatous inflammatory reaction involving caseous necrosis that destroys bone tissue (×200, H&E).
Bursitis case: The example images are from a 49-year-old man who presented with fatigue, left shoulder pain, and restricted shoulder movements (Fig. 3). Laboratory investigations showed increased levels of C-reactive protein and an elevated erythrocyte sedimentation rate. MRI showed joint effusion, synovial hypertrophy, bursitis, rice bodies, and cortical erosions. The pathological examination was consistent with a lymphohistiocytic inflammatory response, including giant cells and caseous necrosis in adipose and connective tissue (i.e., a granulomatous inflammatory response). The patient received a 12-month treatment (2 months HRZE + 10 months HR). 
After 12 months of treatment, the patient’s file was closed as ‘treatment completed’ once his complaints had completely disappeared.
Fig. 3. Preoperative (left) and postoperative (right) magnetic resonance imaging (sagittal plane, T1) of the bursitis case.
Discussion: The majority of extra-pulmonary TB cases involve the pleura and the musculoskeletal and lymphatic systems. Most cases are verified to have a previous pulmonary TB origin [16]. TB bacilli spread from a pulmonary infection focus over the pleural cavity or migrate through the blood vessels or lymphatics to other organs, producing extra-pulmonary TB. The distribution of age and site of musculoskeletal TB has been studied previously. Overall, 1-4 % of all TB cases show musculoskeletal involvement. In a retrospective, observational study conducted at a tertiary center in the area with the highest prevalence of TB worldwide, three peaks were found, in the first, third, and sixth decades of life [17]. However, some reports have highlighted a bimodal age distribution [18]. In a nine-year single-center experience, the rate of musculoskeletal TB was 4 % [19]. In our study, the rate of musculoskeletal system TB was 3.2 %, which is compatible with previous reports. On the other hand, the mean age was 44 years, and 90 % of all cases were adult patients. We did not find an age peak in childhood or in the advanced age group. 
Musculoskeletal TB was distributed almost equally between males and females in our sample. In the literature, most musculoskeletal TB cases present with spine involvement. More than 10 % of patients with extra-pulmonary TB have skeletal involvement [20]. The most common form of skeletal TB is Pott’s disease, comprising approximately half of musculoskeletal TB cases, followed by TB arthritis and extra-spinal TB osteomyelitis [21]. In the retrospective study by Held et al. [17], 78 % of the cases had spinal TB, while the remaining 21.6 % had extra-spinal disease involving the hip, knee, foot/ankle, shoulder, elbow, wrist, and other sites. The thoracic region of the vertebral column is commonly involved [20]. Likewise, in our study, the most frequently affected anatomical location was the spine (23 of 31 patients; 74.2 %), followed by the hands, knee joint, costa, shoulder, patella, and psoas muscle. Only two cases in this study involved the psoas muscle, and both had concomitant pulmonary TB. The psoas muscle is a retroperitoneal muscle that originates from the lateral borders of the T12 to L5 vertebrae and ends as a tendon inserting into the lesser trochanter. Primary TB presenting as an iliopsoas compartment abscess with an occult cause is rarely encountered in clinical practice and is generally idiopathic. TB infection of the joints follows hematogenous spread or direct invasion from neighboring tissue affected by TB osteomyelitis. Involvement is mostly monoarticular, typically of the knee or hip. Oligoarticular/polyarticular patterns are very rare, accounting for 5 to 15 % of cases, sometimes with small-joint involvement, and occur typically in immunosuppressed patients [22]. In one patient in our series (a 38-year-old male butcher), TB was detected on a biopsy performed long after a local infectious swelling had developed following a knife injury to the left hand at the workplace. Delays in the diagnosis of musculoskeletal TB have been well documented [13]. This may be due to patients’ uncertain histories, perhaps complicated by inaccurate accounts of irrelevant trauma, and the absence of concomitant pulmonary involvement. In the current study, the complaints of 70.9 % of cases started as back pain. Therefore, these patients had been investigated in neurosurgery, orthopedics and traumatology, or algology clinics for a long time. The patients usually presented with nonspecific symptoms such as back pain, a mass on the back/neck, knee pain, wrist swelling, chest pain, shoulder pain, hand pain, or abdominal pain. There were some limitations to our work. Primarily, because of the retrospective nature of the study, it was not possible to obtain detailed information for every patient. Furthermore, because the patient registry systems of dispensaries and hospitals are not integrated, monitoring and clinical information not recorded in the ledger, such as misdiagnosis or delayed diagnosis for other reasons, could not be accessed. In Turkey, all patients suspected of having TB, whether pediatric or adult, are directed to dispensaries to confirm the diagnosis and receive treatment. However, dispensaries have no opportunity to diagnose possibly overlooked cases unless clinicians from other branches express suspicion of TB. Even though all musculoskeletal TB patients living in this area have to apply to this dispensary, as it is the only TB center in the city, the present results should not be generalized to the whole of Turkey. 
Conclusions: To sum up, although musculoskeletal TB is a very rare disease, surgeons and clinicians should be alert to the diagnosis, especially in the presence of pain not relieved despite physical therapy and of B symptoms such as fever and weight loss. Furthermore, in these cases, which can be confused with cancer clinically and radiologically, it is appropriate to take a biopsy to make the diagnosis at an earlier stage and to exclude cancer. In addition, cases in which a granulomatous inflammatory reaction is reported should be referred to the dispensary immediately. If diagnosis and treatment are delayed, damage to the spine as well as to the bones and joints may be an inevitable consequence. Consequently, musculoskeletal TB requires a comprehensive approach and multidisciplinary cooperation.
Background: The aim of our study was to describe musculoskeletal system tuberculosis (TB) as a single-center experience. Methods: This is a retrospective observational study conducted at a TB dispensary in the eastern Mediterranean part of Turkey between 2004 and 2020. The clinical and demographic characteristics, including age, gender, involvement location, duration of illness, presenting complaint, local examination findings, and treatment outcome, were retrieved from the case files and analyzed. Statistical analyses were performed using SPSS Statistics version 17.0 (IBM). The normality of the data was assessed using the Kolmogorov-Smirnov test. Descriptive statistics were reported as means ± standard deviations, medians, and ranges (min-max). Results: Overall, 31 patients (3.2 % of all TB cases) with a mean age of 44.2 ± 16.7 years had musculoskeletal tuberculosis. The mean duration of treatment was 12.9 ± 5.5 months. Of the 31 patients, six (19.4 %) had concomitant pulmonary TB. One patient was in the pediatric age group, and two were in the geriatric group. The most frequently affected area was the vertebra. The most common complaint was back pain, seen in 22 patients (70.9 %). Conclusions: Physicians should be alert to the diagnosis of musculoskeletal TB. If diagnosis and treatment are delayed, spinal damage and other consequences might be incurable.
Introduction: Tuberculosis (TB) is a chronic infectious disease mainly caused by Mycobacterium tuberculosis. TB is a public health problem, especially in underdeveloped or developing countries, and remains a major cause of morbidity and mortality [1]. As reported by the World Health Organization (WHO), the estimated global incidence of TB was 10.0 million cases in 2018 [2]. In Turkey, a developing country, the TB incidence (the number of new TB cases per 100,000 population per year) has decreased annually, from 29.8 in 2005 to 14.4 in 2018 [3, 4]. More than 80 % of TB cases are pulmonary TB. However, several manifestations of extra-pulmonary TB have been reported, including musculoskeletal system TB [5]. Although pulmonary TB has been extensively covered in the literature [6–8], publications on musculoskeletal TB are relatively limited and consist mostly of case reports [9–11]. In addition, the diagnosis of musculoskeletal system TB can be challenging, particularly for physicians who are unfamiliar with it, because TB can present in atypical locations and mimic tumoral lesions [12]. Therefore, a biopsy should be taken to confirm the diagnosis. Histopathologically, the granulomatous inflammatory reaction that occurs against mycobacterial species is composed of epithelioid histiocytes, giant cells, and lymphocytes, and some granulomas have centrally located caseous necrosis. Granulomas occur not only in TB but also against fungi as infectious agents, in sarcoidosis as an immunological reaction, or against a foreign body. Histochemically, Mycobacterium tuberculosis bacilli can be demonstrated inside the granuloma, mainly in necrotic areas, with the Ehrlich-Ziehl-Neelsen (EZN) stain to confirm the diagnosis. The diagnosis can also be made by culture and by amplification of Mycobacterium tuberculosis DNA using the polymerase chain reaction (PCR) method. The diagnosis of musculoskeletal TB is usually tricky and can be delayed. A positive tuberculin skin test or an abnormal chest radiograph supports the diagnosis, although negative results do not exclude it [13]. Regarding immunosuppression, the incidence of musculoskeletal TB and all other extra-pulmonary TB infections depends on the degree to which cellular immunity is weakened. People with human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) and latent TB are also far more likely to progress to active disease, with a nearly 10 % risk of developing active disease each year, compared with a lifelong reactivation risk of about 5 % in an HIV/AIDS-negative population. As impairment of the immune system progresses, patients living with HIV/AIDS are more likely to develop extra-pulmonary TB, including musculoskeletal TB [14]. Keeping in mind the increased prevalence of TB in recent years [15], we deem it important to review musculoskeletal involvement of TB. Accordingly, the objective of this study was to describe musculoskeletal system TB as a single-center experience. Conclusions: To sum up, although musculoskeletal TB is a very rare disease, surgeons and clinicians should be alert to the diagnosis, especially in the presence of pain not relieved despite physical therapy and of B symptoms such as fever and weight loss. Furthermore, in these cases, which can be confused with cancer clinically and radiologically, it is appropriate to take a biopsy to make the diagnosis at an earlier stage and to exclude cancer. 
In addition, cases in which a granulomatous inflammatory reaction is reported should be referred to the dispensary immediately. If diagnosis and treatment are delayed, damage to the spine as well as to the bones and joints may be an inevitable consequence. Consequently, musculoskeletal TB requires a comprehensive approach and multidisciplinary cooperation.
Background: The aim of our study was to describe musculoskeletal system tuberculosis (TB) as a single-center experience. Methods: This is a retrospective observational study conducted at a TB dispensary in the eastern Mediterranean part of Turkey between 2004 and 2020. The clinical and demographic characteristics, including age, gender, involvement location, duration of illness, presenting complaint, local examination findings, and treatment outcome, were retrieved from the case files and analyzed. Statistical analyses were performed using SPSS Statistics version 17.0 (IBM). The normality of the data was assessed using the Kolmogorov-Smirnov test. Descriptive statistics were reported as means ± standard deviations, medians, and ranges (min-max). Results: Overall, 31 patients (3.2 % of all TB cases) with a mean age of 44.2 ± 16.7 years had musculoskeletal tuberculosis. The mean duration of treatment was 12.9 ± 5.5 months. Of the 31 patients, six (19.4 %) had concomitant pulmonary TB. One patient was in the pediatric age group, and two were in the geriatric group. The most frequently affected area was the vertebra. The most common complaint was back pain, seen in 22 patients (70.9 %). Conclusions: Physicians should be alert to the diagnosis of musculoskeletal TB. If diagnosis and treatment are delayed, spinal damage and other consequences might be incurable.
3,515
278
[ 645, 1070, 183 ]
6
[ "tb", "cases", "patients", "musculoskeletal", "treatment", "tuberculosis", "diagnosis", "months", "pain", "musculoskeletal tb" ]
[ "musculoskeletal tb distributed", "musculoskeletal tb cases", "musculoskeletal tuberculosis", "diagnosis musculoskeletal tb", "incidence musculoskeletal tb" ]
null
null
[CONTENT] Bursitis | Extrapulmonary | Pott's spine | Tuberculosis dispensary [SUMMARY]
null
null
[CONTENT] Bursitis | Extrapulmonary | Pott's spine | Tuberculosis dispensary [SUMMARY]
[CONTENT] Bursitis | Extrapulmonary | Pott's spine | Tuberculosis dispensary [SUMMARY]
[CONTENT] Bursitis | Extrapulmonary | Pott's spine | Tuberculosis dispensary [SUMMARY]
[CONTENT] Adult | Aged | Back Pain | Child | Humans | Middle Aged | Musculoskeletal Diseases | Retrospective Studies | Tuberculosis | Turkey [SUMMARY]
null
null
[CONTENT] Adult | Aged | Back Pain | Child | Humans | Middle Aged | Musculoskeletal Diseases | Retrospective Studies | Tuberculosis | Turkey [SUMMARY]
[CONTENT] Adult | Aged | Back Pain | Child | Humans | Middle Aged | Musculoskeletal Diseases | Retrospective Studies | Tuberculosis | Turkey [SUMMARY]
[CONTENT] Adult | Aged | Back Pain | Child | Humans | Middle Aged | Musculoskeletal Diseases | Retrospective Studies | Tuberculosis | Turkey [SUMMARY]
[CONTENT] musculoskeletal tb distributed | musculoskeletal tb cases | musculoskeletal tuberculosis | diagnosis musculoskeletal tb | incidence musculoskeletal tb [SUMMARY]
null
null
[CONTENT] musculoskeletal tb distributed | musculoskeletal tb cases | musculoskeletal tuberculosis | diagnosis musculoskeletal tb | incidence musculoskeletal tb [SUMMARY]
[CONTENT] musculoskeletal tb distributed | musculoskeletal tb cases | musculoskeletal tuberculosis | diagnosis musculoskeletal tb | incidence musculoskeletal tb [SUMMARY]
[CONTENT] musculoskeletal tb distributed | musculoskeletal tb cases | musculoskeletal tuberculosis | diagnosis musculoskeletal tb | incidence musculoskeletal tb [SUMMARY]
[CONTENT] tb | cases | patients | musculoskeletal | treatment | tuberculosis | diagnosis | months | pain | musculoskeletal tb [SUMMARY]
null
null
[CONTENT] tb | cases | patients | musculoskeletal | treatment | tuberculosis | diagnosis | months | pain | musculoskeletal tb [SUMMARY]
[CONTENT] tb | cases | patients | musculoskeletal | treatment | tuberculosis | diagnosis | months | pain | musculoskeletal tb [SUMMARY]
[CONTENT] tb | cases | patients | musculoskeletal | treatment | tuberculosis | diagnosis | months | pain | musculoskeletal tb [SUMMARY]
[CONTENT] tb | musculoskeletal | pulmonary tb | pulmonary | diagnosis | hiv aids | hiv | incidence | aids | system [SUMMARY]
null
null
[CONTENT] cancer | diagnosis | addition | musculoskeletal tb | musculoskeletal | tb | cases | infection especially presence | inevitable | inevitable consequence consequently [SUMMARY]
[CONTENT] tb | treatment | musculoskeletal | cases | patients | months | diagnosis | musculoskeletal tb | tuberculosis | bursitis [SUMMARY]
[CONTENT] tb | treatment | musculoskeletal | cases | patients | months | diagnosis | musculoskeletal tb | tuberculosis | bursitis [SUMMARY]
[CONTENT] TB [SUMMARY]
null
null
[CONTENT] ||| [SUMMARY]
[CONTENT] TB ||| TB Dispensary | Mediterranean | Turkey | between 2004 and 2020 ||| ||| SPSS | 17.0 | IBM ||| Kolmogorov-Smirnov ||| ||| ||| 31 | 3.2 % | TB | 44.2 ± | 16.7 years ||| 12.9 ± | 5.5 months ||| 31 | six | 19.4 % | TB ||| One | two ||| ||| 22 | 70.9 % ||| ||| [SUMMARY]
[CONTENT] TB ||| TB Dispensary | Mediterranean | Turkey | between 2004 and 2020 ||| ||| SPSS | 17.0 | IBM ||| Kolmogorov-Smirnov ||| ||| ||| 31 | 3.2 % | TB | 44.2 ± | 16.7 years ||| 12.9 ± | 5.5 months ||| 31 | six | 19.4 % | TB ||| One | two ||| ||| 22 | 70.9 % ||| ||| [SUMMARY]
Hospital Variation in Non-Invasive Ventilation Use for Acute Respiratory Failure Due to COPD Exacerbation.
34824529
Non-invasive mechanical ventilation (NIV) use in patients admitted with acute respiratory failure due to COPD exacerbations (AECOPDs) varies significantly between hospitals. However, previous literature did not account for patients' illness severity. Our objective was to examine the variation in risk-standardized NIV use after adjusting for illness severity.
BACKGROUND
We retrospectively analyzed AECOPD hospitalizations from 2011 to 2017 at 106 acute-care Veterans Health Administration (VA) hospitals in the USA. We stratified hospitals based on the percentage of NIV use among patients who received ventilation support within the first 24 hours of admission into quartiles, and compared patient characteristics. We calculated the risk-standardized NIV % using hierarchical models adjusting for comorbidities and severity of illness. We then stratified the hospitals by risk-standardized NIV % into quartiles and compared hospital characteristics between quartiles. We also compared the risk-standardized NIV % between rural and urban hospitals.
METHODS
In 42,048 admissions for AECOPD over 6 years, the median risk-standardized initial NIV % was 57.3% (interquartile interval [IQI]=41.9-64.4%). Hospitals in the highest risk-standardized NIV % quartiles cared for more rural patients, used invasive ventilators less frequently, and had longer length of hospital stay, but had no difference in mortality relative to the hospitals in the lowest quartiles. The risk-standardized NIV % was 65.3% (IQI=34.2-84.2%) in rural and 55.1% (IQI=10.8-86.6%) in urban hospitals (p=0.047), but hospital mortality did not differ between the two groups.
RESULTS
NIV use varied significantly across hospitals, with rural hospitals having higher risk-standardized NIV % rates than urban hospitals. Further research should investigate the exact mechanism of variation in NIV use between rural and urban hospitals.
CONCLUSION
[ "Hospital Mortality", "Hospitals", "Humans", "Noninvasive Ventilation", "Pulmonary Disease, Chronic Obstructive", "Respiration, Artificial", "Respiratory Insufficiency", "Retrospective Studies" ]
8609200
Introduction
Chronic obstructive pulmonary disease (COPD) is a leading cause of mortality in the USA1 and is associated with high resource utilization.2,3 COPD patients experience exacerbations of the disease (AECOPD), defined as acute worsening of respiratory symptoms according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines.2 Standard therapy for moderate AECOPDs includes bronchodilators, glucocorticoids, and antibiotics. Severe AECOPDs, defined as those requiring an emergency room visit or hospitalization, have a 1-year mortality of 26.2% and are associated with high rates of hospital readmission.4 AECOPD-related hospitalizations are also associated with reduced quality of life and are responsible for 70% of the total direct health-care costs for COPD.5,6 Severe AECOPD may lead to acute respiratory failure requiring supplemental oxygen and/or ventilation support in addition to standard therapy. The most effective treatment for acute respiratory failure due to severe AECOPD is non-invasive ventilation (NIV). NIV reduces the need for invasive mechanical ventilation (IMV) by 64% and mortality by 46%, according to a 2017 meta-analysis.7 Despite the benefit of NIV in severe AECOPDs, NIV use in AECOPD-related hospitalizations varies significantly across the USA.8,9 This variation in NIV use may be due to hospital- and patient-level characteristics, such as the patient’s severity of illness (acuity). Facilities with high-acuity patients may use NIV more frequently. Nevertheless, prior studies examining NIV variation across US hospitals did not account for patient acuity.8,9 Variation in NIV use may also be related to the heterogeneity in its adoption across hospitals. High-resource hospitals may use NIV more frequently, as opposed to low-resource facilities, which may often transfer patients who require NIV because they do not feel comfortable treating these patients in case of NIV failure. We hypothesized that NIV use varies significantly across US hospitals, even after adjusting for patient acuity. Our primary objective was to investigate NIV use in AECOPD-related hospitalizations across the Veterans Health Administration (VA) and assess whether patient characteristics (eg, demographics, residential location, and comorbidities) or hospital characteristics (eg, location, volume, and resources) were associated with NIV use.
null
null
null
null
Conclusions
Among patients admitted with AECOPD in VA hospitals, NIV use varied significantly across hospitals, with rural hospitals having higher risk-standardized NIV % than urban ones. Hospitals with the highest NIV rates cared for more rural patients and had longer hospital length of stay than hospitals with the lowest NIV rates. Further research should investigate the exact mechanism of variation in NIV use between rural and urban hospitals.
[ "Methods", "Setting", "Definitions", "Statistical Analysis", "Results", "Discussion", "Conclusions" ]
[ "This retrospective cohort study included AECOPD-related hospitalizations to acute-care VA hospitals between October 1, 2011 and September 30, 2017. This work was approved by the Institutional Review Boards and Research and Development Committee at the Iowa City VA Health Care System [IRB 201712713] as part of a larger study with a previous publication.10 The study was conducted in accordance with the Declaration of Helsinki. A waiver of informed consent was granted for this retrospective study because the study examined only existing patient-level data. Patients’ data were kept confidential.\nSetting We obtained data from the Veterans Informatics and Computing Infrastructure (VINCI), an integrated system that includes VA’s electronic health records and administrative data. Admissions to VA acute-care hospitals were identified via the Corporate Data Warehouse (CDW) using the inpatient domain. These datasets contain patient demographics, including residential address and ZIP code, diagnosis and procedure codes during admission, admission source, and admission and discharge dates.\nWe obtained data from the Veterans Informatics and Computing Infrastructure (VINCI), an integrated system that includes VA’s electronic health records and administrative data. Admissions to VA acute-care hospitals were identified via the Corporate Data Warehouse (CDW) using the inpatient domain. These datasets contain patient demographics, including residential address and ZIP code, diagnosis and procedure codes during admission, admission source, and admission and discharge dates.\nDefinitions We identified patients hospitalized with AECOPD based on the International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modification (ICD-9-CM and ICD-10-CM) and using the following criteria: 1) a principal diagnosis of COPD (ICD-9-CM codes: 490.x, 491.xx, 492.xx, and 496.xx; or ICD-10-CM codes: J41, J43, J43.1, J43.2, J43.8, J43.9, J44.0, J44.1, J41.8, J42, or J44.9) or 2) a principal diagnosis of acute respiratory failure (518.81, 518.82, 518.84, or 799.1; ICD-10-CM: J96.00, J96.01, J96.02, J96.11, J96.12, J96.20, J96.21, J96.22, J96.90, J96.92, or R06.03) with a secondary AECOPD diagnosis (ICD-9-CM codes: 491.21, 491.22; ICD-10-CM codes: J44.1, J44.0).\nNIV use was defined using ICD-9-CM procedure code 93.90 or, for ICD-10, codes 5A09357, 5A09457, and 5A09557. IMV use was defined using ICD-9-CM procedure codes 96.04 and 96.70–96.72 or, for ICD-10, codes 0BH17EZ, 0BH18EZ, 5A1935Z, 5A1945Z, and 5A1955Z. Primary ventilation support was defined based on whether the patient received NIV and/or IMV within one day from the admission date.\nPatient rurality was defined using census tracts based on Rural Urban Commuting Area (RUCA) codes. RUCA codes reflect measures of urbanization, commuting, and population density.11–15 RUCA codes were further condensed to designate an area as urban (RUCA codes: 1 and 1.1), rural (RUCA codes: 2, 2.1, 3, 4, 4.1, 5, 5.1, 6, 7, 7.1, 7.2, 8, 8.1, 8.2, and 9), or isolated rural (RUCA codes: 10, 10.2, 10.2, and 10.3), using the categories defined by the VA Office of Rural Health.16 Travel time to the nearest VA hospital was determined from VA Planning Systems and Support Group (PSSG) geo-coded enrollment files. PSSG calculates distances to tertiary care VA sites for all enrolled patients using actual longitude and latitude coordinates of patient residences and the nearest VA hospitals. 
Comorbidities were defined using ICD-9-CM and ICD-10-CM diagnosis codes within 1 year prior to admission in the inpatient, outpatient, or fee-basis data files, except for pneumonia, which was defined based on the presence of the corresponding diagnosis code during the hospitalization, as previously described.17 Severity of illness was quantified using a modified APACHE score (mAPACHE), which includes vital signs and commonly obtained laboratory values,12,18,19 and has also been externally validated.20 In brief, mAPACHE uses the same scoring assignments as APACHE III and includes all the predictor variables from the APACHE III excluding the Glasgow Coma Scale, urine output, arterial blood gas, and mechanical ventilator components. mAPACHE variables include age, comorbidities, mean arterial blood pressure, heart rate, respiratory rate, temperature, hematocrit, white blood cells, sodium, blood urea nitrogen, creatinine, glucose, albumin, and bilirubin. We retrieved all the data from electronic medical records, as described previously.12,18,19 Obstructive sleep apnea was defined using ICD-9 diagnosis code 327.2 or 780.57, or ICD-10 diagnosis codes G47.30–G47.39.\nHospital rurality was defined using the same methods as patient rurality, but the facilities in rural and isolated rural areas were collapsed to one category (rural). Hospital complexity was defined by VA as 1 (high resource), 2 (medium resource), and 3 (low resource).21 Hospital COPD-case volume was classified as high (above the median) or low (below the median) based on the total COPD volume during the study period. Hospital length of stay was calculated from the admission and discharge dates, in days. Hospital mortality was defined using the date of death listed in the VA Vital Status File and the occurrence of this date between the admission and discharge dates inclusive.\nWe included admissions with AECOPD at 121 VA acute-care facilities from 2011 to 2017. We excluded records admitted to facilities with complexity 3 (low resource) and newly opened facilities because these facilities may not provide ventilator support. We also excluded patients in hospices (V667/Z51.5 code within 1 year prior to the hospitalization date) and admissions of patients who received no oral or intravenous glucocorticoids during the hospitalizations.
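To illustrate how the cohort and ventilation definitions above could be operationalized, here is a hedged Python sketch. The procedure-code lists are quoted from the Definitions, but the one-row-per-procedure record layout is a hypothetical simplification, not the VINCI/CDW schema.

```python
# Illustrative sketch of flagging primary (day 0-1) ventilation support from
# procedure codes, using the code lists quoted above. The DataFrame layout
# is hypothetical, not the VINCI/CDW schema.
import pandas as pd

NIV_CODES = {"93.90", "5A09357", "5A09457", "5A09557"}
IMV_CODES = {"96.04", "96.70", "96.71", "96.72",
             "0BH17EZ", "0BH18EZ", "5A1935Z", "5A1945Z", "5A1955Z"}

procs = pd.DataFrame({
    "admission_id": [1, 1, 2, 3],
    "proc_code": ["93.90", "96.04", "5A09357", "0BH17EZ"],
    "days_from_admit": [0, 2, 1, 0],  # procedure day relative to admission
})

# Primary ventilation support = NIV and/or IMV within one day of admission.
early = procs[procs["days_from_admit"] <= 1]
primary_niv = set(early.loc[early["proc_code"].isin(NIV_CODES), "admission_id"])
primary_imv = set(early.loc[early["proc_code"].isin(IMV_CODES), "admission_id"])
print(sorted(primary_niv), sorted(primary_imv))  # [1, 2] [3]
```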
Statistical Analysis\nAs NIV use may vary depending on the patient’s severity of illness (acuity), we calculated the risk-standardized NIV % to assess the NIV variation across the hospitals. We followed a similar approach to that used in a previous study.8 In brief, we created a hierarchical multivariable logistic regression model with NIV use within the first 24 hours of admission as the dependent variable (outcome) among records started on ventilation (records without NIV or invasive ventilation within first 24 hours were not used). Hospital was included as a random effect. Models were adjusted for severity of illness (mAPACHE) and several comorbidities (obstructive sleep apnea, presence of comorbid pneumonia, diabetes mellitus, congestive heart failure, hypertension, peripheral vascular disease, coronary artery disease, cancer, chronic kidney disease, stroke, and liver disease).22 Similarly to the approach used for publicly reported hospital performance metrics,23,24 we calculated the expected and predicted proportion of NIV use for each hospital using these models. Specifically, the expected rate is calculated based on hospital patient characteristics and does not include the hierarchical model hospital-specific random intercepts, whereas the predicted rate does incorporate hospital-specific random intercepts. The risk-standardized proportion of NIV use for each hospital was determined by multiplying the overall unadjusted proportion of NIV use by the ratio of the predicted to expected proportion for each hospital. Then, we stratified hospitals into quartiles based on hospital risk-standardized NIV % and compared patient characteristics among quartiles for trend and differences between Q1 and Q4. We used linear regression to examine trends between risk-standardized NIV % quartiles (Q1 to Q4) for continuous variables (p for trend), and a Cochran–Mantel–Haenszel test to examine trends between risk-standardized NIV % quartiles for categorical variables. The t-test (for continuous variables) and chi-squared test (for categorical variables) were also used to compare differences in characteristics between the lowest (Q1) and highest (Q4) quartiles of risk-standardized NIV %. Subsequently, we compared the risk-standardized NIV % using the Wilcoxon–Mann–Whitney test, and unadjusted hospital mortality using the chi-squared test between: 1) rural and urban, 2) low-volume and high-volume, and 3) complexity 2 (medium-resource) and complexity 1 (high-resource) hospitals. We also created a hierarchical logistic regression model to examine the association of hospital characteristics with hospital mortality adjusted for severity of illness and accounting for repeated admissions. All statistical analyses were conducted using SAS Enterprise Guide, 2014 (SAS Institute Inc)."
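The risk-standardization step described above reduces to a simple ratio once the hierarchical model has been fitted (the study used SAS). The Python sketch below assumes the per-record probabilities are already available: p_expected from the fixed effects alone and p_predicted additionally including each hospital's random intercept; all values shown are hypothetical.

```python
# Sketch of the risk-standardization arithmetic described above, assuming the
# hierarchical logistic model has already been fitted (the study used SAS).
# p_expected: per-record probability from the fixed effects alone;
# p_predicted: probability including the hospital random intercept.
# All values below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "hospital":    ["A", "A", "B", "B", "B"],
    "niv":         [1, 0, 1, 1, 0],      # initial NIV among ventilated records
    "p_expected":  [0.60, 0.50, 0.55, 0.60, 0.50],
    "p_predicted": [0.70, 0.60, 0.45, 0.50, 0.40],
})

overall_rate = df["niv"].mean()  # overall unadjusted proportion of NIV use
sums = df.groupby("hospital")[["p_predicted", "p_expected"]].sum()
# Risk-standardized NIV % = overall rate x (predicted / expected) per hospital.
rs_niv_pct = 100 * overall_rate * sums["p_predicted"] / sums["p_expected"]
print(rs_niv_pct.round(1))  # A ~70.9, B ~49.1
```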
, "We obtained data from the Veterans Informatics and Computing Infrastructure (VINCI), an integrated system that includes VA’s electronic health records and administrative data. Admissions to VA acute-care hospitals were identified via the Corporate Data Warehouse (CDW) using the inpatient domain. These datasets contain patient demographics, including residential address and ZIP code, diagnosis and procedure codes during admission, admission source, and admission and discharge dates.", "We identified patients hospitalized with AECOPD based on the International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modification (ICD-9-CM and ICD-10-CM) and using the following criteria: 1) a principal diagnosis of COPD (ICD-9-CM codes: 490.x, 491.xx, 492.xx, and 496.xx; or ICD-10-CM codes: J41, J43, J43.1, J43.2, J43.8, J43.9, J44.0, J44.1, J41.8, J42, or J44.9) or 2) a principal diagnosis of acute respiratory failure (518.81, 518.82, 518.84, or 799.1; ICD-10-CM: J96.00, J96.01, J96.02, J96.11, J96.12, J96.20, J96.21, J96.22, J96.90, J96.92, or R06.03) with a secondary AECOPD diagnosis (ICD-9-CM codes: 491.21, 491.22; ICD-10-CM codes: J44.1, J44.0).\nNIV use was defined using ICD-9-CM procedure code 93.90 or, for ICD-10, codes 5A09357, 5A09457, and 5A09557. 
IMV use was defined using ICD-9-CM procedure codes 96.04 and 96.70–96.72 or, for ICD-10, codes 0BH17EZ, 0BH18EZ, 5A1935Z, 5A1945Z, and 5A1955Z. Primary ventilation support was defined based on whether the patient received NIV and/or IMV within one day from the admission date.\nPatient rurality was defined using census tracts based on Rural Urban Commuting Area (RUCA) codes. RUCA codes reflect measures of urbanization, commuting, and population density.11–15 RUCA codes were further condensed to designate an area as urban (RUCA codes: 1 and 1.1), rural (RUCA codes: 2, 2.1, 3, 4, 4.1, 5, 5.1, 6, 7, 7.1, 7.2, 8, 8.1, 8.2, and 9), or isolated rural (RUCA codes: 10, 10.2, and 10.3), using the categories defined by the VA Office of Rural Health.16 Travel time to the nearest VA hospital was determined from VA Planning Systems and Support Group (PSSG) geo-coded enrollment files. PSSG calculates distances to tertiary care VA sites for all enrolled patients using actual longitude and latitude coordinates of patient residences and the nearest VA hospitals. Travel time to the nearest VA hospital was estimated using geospatial technologies, which reflect roads and average driving conditions.11 Comorbidities were defined using ICD-9-CM and ICD-10-CM diagnosis codes within 1 year prior to admission in the inpatient, outpatient, or fee-basis data files, except for pneumonia, which was defined based on the presence of the corresponding diagnosis code during the hospitalization, as previously described.17 Severity of illness was quantified using a modified APACHE score (mAPACHE), which includes vital signs and commonly obtained laboratory values,12,18,19 and has also been externally validated.20 In brief, mAPACHE uses the same scoring assignments as APACHE III and includes all the predictor variables from the APACHE III excluding the Glasgow Coma Scale, urine output, arterial blood gas, and mechanical ventilator components. mAPACHE variables include age, comorbidities, mean arterial blood pressure, heart rate, respiratory rate, temperature, hematocrit, white blood cells, sodium, blood urea nitrogen, creatinine, glucose, albumin, and bilirubin. We retrieved all the data from electronic medical records, as described previously.12,18,19 Obstructive sleep apnea was defined using ICD-9 diagnosis code 327.2 or 780.57, or ICD-10 diagnosis codes G47.30–G47.39.\nHospital rurality was defined using the same methods as patient rurality, but the facilities in rural and isolated rural areas were collapsed to one category (rural). Hospital complexity was defined by VA as 1 (high resource), 2 (medium resource), and 3 (low resource).21 Hospital COPD-case volume was classified as high (above the median) or low (below the median) based on the total COPD volume during the study period. Hospital length of stay was calculated from the admission and discharge dates, in days. Hospital mortality was defined using the date of death listed in the VA Vital Status File and the occurrence of this date between the admission and discharge dates inclusive.\nWe included admissions with AECOPD at 121 VA acute-care facilities from 2011 to 2017. We excluded records admitted to facilities with complexity 3 (low resource) and newly opened facilities because these facilities may not provide ventilator support. 
We also excluded patients in hospices (V667/Z51.5 code within 1 year prior to the hospitalization date) and admissions of patients who received no oral or intravenous glucocorticoids during the hospitalizations.", "As NIV use may vary depending on the patient’s severity of illness (acuity), we calculated the risk-standardized NIV % to assess the NIV variation across the hospitals. We followed a similar approach to that used in a previous study.8 In brief, we created a hierarchical multivariable logistic regression model with NIV use within the first 24 hours of admission as the dependent variable (outcome) among records started on ventilation (records without NIV or invasive ventilation within first 24 hours were not used). Hospital was included as a random effect. Models were adjusted for severity of illness (mAPACHE) and several comorbidities (obstructive sleep apnea, presence of comorbid pneumonia, diabetes mellitus, congestive heart failure, hypertension, peripheral vascular disease, coronary artery disease, cancer, chronic kidney disease, stroke, and liver disease).22 Similarly to the approach used for publicly reported hospital performance metrics,23,24 we calculated the expected and predicted proportion of NIV use for each hospital using these models. Specifically, the expected rate is calculated based on hospital patient characteristics and does not include the hierarchical model hospital-specific random intercepts, whereas the predicted rate does incorporate hospital-specific random intercepts. The risk-standardized proportion of NIV use for each hospital was determined by multiplying the overall unadjusted proportion of NIV use by the ratio of the predicted to expected proportion for each hospital. Then, we stratified hospitals into quartiles based on hospital risk-standardized NIV % and compared patient characteristics among quartiles for trend and differences between Q1 and Q4. We used linear regression to examine trends between risk-standardized NIV % quartiles (Q1 to Q4) for continuous variables (p for trend), and a Cochran–Mantel–Haenszel test to examine trends between risk-standardized NIV % quartiles for categorical variables. The t-test (for continuous variables) and chi-squared test (for categorical variables) were also used to compare differences in characteristics between the lowest (Q1) and highest (Q4) quartiles of risk-standardized NIV %. Subsequently, we compared the risk-standardized NIV % using the Wilcoxon–Mann–Whitney test, and unadjusted hospital mortality using the chi-squared test between: 1) rural and urban, 2) low-volume and high-volume, and 3) complexity 2 (medium-resource) and complexity 1 (high-resource) hospitals. We also created a hierarchical logistic regression model to examine the association of hospital characteristics with hospital mortality adjusted for severity of illness and accounting for repeated admissions. All statistical analyses were conducted using SAS Enterprise Guide, 2014 (SAS Institute Inc).", "Of 64,895 admissions with AECOPD exacerbations at 121 VA acute-care facilities from 2011 to 2017, 22,847 records met the exclusion criteria (Figure 1). The final cohort included 42,048 admissions in 106 VA hospitals and all of them were ICU admissions. We stratified these records by ventilator support within the first 24 hours of admission (primary ventilation support). 
Figure 1 also shows the outcomes of admissions stratified by ventilation support.
Figure 1. Patient flowchart and outcomes by ventilation support within one day from admission. Abbreviations: IMV, invasive mechanical ventilation; NIV, non-invasive ventilation.
The percentage of total admissions who received any ventilation support (NIV, IMV, or both) within the first 24 hours across all 106 hospitals ranged from 0.8% to 13.7%, with a median of 5.4% (interquartile interval [IQI]=3.7–7.8%) (Figure 2). The percentage of total admissions who received primary NIV ranged from 0% to 9%, with a median of 2.8% (IQI=1.2–4.5%). Among only those admissions who received ventilation support within the first 24 hours (primary NIV, IMV, or both), the unadjusted NIV % ranged from 0 to 100, with a median of 54.7% (IQI=34.8–68.2%).
Figure 2. Percentage of ventilatory support within one day from admission across the 106 facilities. Abbreviations: IMV, invasive mechanical ventilation; NIV, non-invasive ventilation.
The risk-standardized NIV % (the adjusted initial NIV % among admissions receiving ventilation support in the initial 24 hours) ranged from 10.8% to 86.6%, with a median of 58.0% (IQI=41.9–64.4%). Table 1 shows the characteristics of the hospitals stratified by risk-standardized NIV % into quartiles. Risk-standardized NIV % accounted for severity of illness and comorbidities. Quartile 1 (Q1) included hospitals with the lowest risk-standardized NIV %, while quartile 4 (Q4) included hospitals with the highest; other patient- and hospital-level characteristics are reported by risk-standardized NIV % quartiles. There were more black and rural patients in the highest risk-standardized NIV % quartile relative to lower quartiles. Patients in the highest risk-standardized NIV % quartile had longer travel time to the nearest VA hospitals than those in the lowest quartiles. 
The IMV rates were lower and hospital length of stay was longer in hospitals in the highest NIV quartiles relative to the lowest quartiles.
Table 1. Patient and Hospital Characteristics of 106 Hospitals Stratified by Risk-Standardized Non-Invasive Ventilation Percentage Within One Day from Admission
Variable | NIV Q1 (N=11,810) | NIV Q2 (N=10,031) | NIV Q3 (N=11,719) | NIV Q4 (N=8488) | p for trend | p Q1 vs Q4*
Risk-standardized NIV % | 10.8–41.9 | 42.2–56.7 | 58.0–64.3 | 64.4–86.6 | |
N hospitals | 26 | 27 | 27 | 26 | |
Age, mean (SD) | 70.29 (8.69) | 69.92 (9.01) | 70.49 (9.74) | 70.33 (9.21) | 0.10 | 0.79
Sex (female), N (%) | 507 (4.29) | 471 (4.70) | 476 (4.06) | 350 (4.12) | 0.11 | 0.55
Race, N (%) | | | | | <0.001 | <0.001
- White | 9126 (77.30) | 7505 (74.82) | 9047 (77.20) | 6249 (73.62) | |
- Black | 1896 (16.05) | 1780 (17.74) | 1989 (16.97) | 1637 (19.29) | |
- Other | 364 (3.08) | 365 (3.64) | 262 (2.24) | 290 (3.42) | |
- Missing | 421 (3.56) | 381 (3.80) | 421 (3.59) | 312 (3.68) | |
Patient residential location, N (%) | | | | | <0.001 | <0.001
- Urban | 7454 (63.12) | 6999 (69.77) | 7422 (63.33) | 5155 (60.73) | |
- Rural | 3549 (30.05) | 2416 (24.09) | 3525 (30.08) | 2717 (32.01) | |
- Isolated | 478 (4.05) | 347 (3.46) | 468 (3.99) | 354 (4.17) | |
- Missing | 329 (2.79) | 269 (2.68) | 304 (2.59) | 262 (3.09) | |
Obstructive sleep apnea | 4372 (37.02) | 3895 (38.83) | 4568 (38.98) | 3059 (36.04) | <0.001 | 0.15
Diabetes mellitus | 1964 (16.63) | 1643 (16.38) | 2349 (20.04) | 1614 (19.02) | <0.001 | <0.001
Congestive heart failure | 1532 (12.97) | 1386 (13.82) | 1759 (15.01) | 1179 (13.89) | <0.001 | 0.058
Pneumonia | 6377 (54.00) | 5532 (55.15) | 6472 (55.23) | 4783 (56.35) | 0.010 | <0.001
Hypertension | 665 (5.63) | 690 (6.88) | 1038 (8.86) | 795 (9.37) | <0.001 | <0.001
Peripheral vascular disease | 903 (7.65) | 746 (7.44) | 1028 (8.77) | 676 (7.96) | 0.001 | 0.40
Cancer | 931 (7.88) | 814 (8.11) | 1126 (9.61) | 788 (9.28) | <0.001 | <0.001
Coronary artery disease | 565 (4.78) | 569 (5.67) | 780 (6.66) | 530 (6.24) | <0.001 | <0.001
Chronic kidney disease | 770 (6.52) | 709 (7.07) | 925 (7.89) | 673 (7.93) | <0.001 | <0.001
Stroke | 627 (5.31) | 550 (5.48) | 727 (6.20) | 521 (6.14) | 0.006 | 0.012
Liver disease | 398 (3.37) | 338 (3.37) | 424 (3.62) | 320 (3.77) | 0.34 | 0.13
Admission source, N (%) | | | | | <0.001 | <0.001
- Hospital | 515 (4.36) | 337 (3.36) | 371 (3.17) | 238 (2.82) | |
- Nursing home | 140 (1.19) | 135 (1.35) | 170 (1.45) | 131 (1.55) | |
- Other facility | 20 (0.17) | 11 (0.11) | 12 (0.10) | 26 (0.31) | |
- Outpatient | 11,100 (93.99) | 9494 (94.65) | 11,041 (94.21) | 7963 (94.44) | |
- Unknown | 35 (0.30) | 54 (0.54) | 125 (1.07) | 74 (0.88) | |
Travel time to VAMC (min), mean (SD) | 67.26 (68.46) | 68.11 (73.11) | 72.51 (55.21) | 82.71 (76.47) | <0.001 | <0.001
Any IMV during hospital stay, N (%) | 604 (5.11) | 486 (4.84) | 437 (3.73) | 277 (3.26) | <0.001 | <0.001
mAPACHE, mean (SD) | 35.73 (11.95) | 35.00 (12.02) | 35.30 (11.91) | 35.63 (11.98) | 0.60 | 0.55
Hospital mortality, N (%) | 303 (2.57) | 303 (3.02) | 315 (2.69) | 238 (2.80) | 0.21 | 0.30
Hospital length of stay (days), mean (SD) | 5.54 (15.21) | 6.32 (27.04) | 7.13 (40.05) | 7.68 (43.30) | <0.001 | <0.001
Note: *Q1 vs Q4 using chi-squared or t-test. Abbreviations: IMV, invasive mechanical ventilation; NIV, non-invasive ventilation; Q, quartile; SD, standard deviation; VAMC, Veterans Health Administration Medical Center.
Table 2 shows a risk-standardized NIV % of 55.1% (min=10.8%; max=86.6%) in urban hospitals, while the risk-standardized NIV % was 65.3% (min=34.2%; max=84.2%) in rural hospitals (p=0.047). 
There was no difference in risk-standardized NIV % between high- and low-COPD-volume facilities (p=0.33), or between complexity 1 (high-resource) and complexity 2 (medium-resource) hospitals (p=0.45).
Table 2. Risk-Standardized Non-Invasive Ventilation Percentage Stratified by Hospital Characteristics
Group | Number of Hospitals | Number of Patients | Risk-Standardized NIV %, Median (Min–Max) | p value*
Overall | 106 | 42,048 | 57.3 (10.8–86.6) |
Rural/urban | | | | 0.047
- Rural | 10 | 2548 | 65.3 (34.2–84.2) |
- Urban | 96 | 39,500 | 55.1 (10.8–86.6) |
Hospital volume | | | | 0.33
- Low | 53 | 12,813 | 59.90 (10.8–86.6) |
- High | 53 | 29,235 | 53.43 (13.7–84.9) |
Hospital complexity | | | | 0.45
- 2 (Medium resource) | 15 | 2944 | 62.14 (20.4–86.6) |
- 1 (High resource) | 91 | 39,104 | 55.71 (10.8–84.9) |
Note: *Wilcoxon two-sample test.
Table 3 shows unadjusted mortality stratified by hospital characteristics. After adjusting for patient acuity and taking into account repeated admissions, admission at an urban hospital was not associated with mortality (odds ratio [OR]=0.90; 95% CI=0.68 to 1.19, p=0.46) relative to admission at a rural hospital. Similarly, admission to a low-volume hospital was not associated with mortality (OR=1.00; 95% CI=0.86 to 1.18, p=0.96) relative to admission at a high-volume hospital, but admission to a medium-resource facility (complexity 2) was associated with increased mortality (OR=1.55; 95% CI=1.20 to 2.00, p<0.001) relative to admission at a high-resource facility.
Table 3. Unadjusted Mortality Stratified by Hospital Characteristics
Group | Number of Hospitals | Number of Patients | Unadjusted Hospital Mortality, Deaths (%) | p value*
Overall | 106 | 42,048 | 1159 (2.8) |
Rural/urban | | | | 0.22
- Rural | 10 | 2548 | 80 (3.14) |
- Urban | 96 | 39,500 | 1079 (2.73) |
Hospital volume | | | | 0.004
- Low | 53 | 12,813 | 398 (3.11) |
- High | 53 | 29,235 | 761 (2.60) |
Hospital complexity | | | | <0.001
- 2 (Medium resource) | 15 | 2944 | 128 (4.35) |
- 1 (High resource) | 91 | 39,104 | 1031 (2.64) |
Note: *Chi-squared test."
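As a hedged sketch of the hospital-level comparisons reported in Tables 2 and 3, the snippet below uses SciPy in place of the SAS procedures actually used: a Wilcoxon-Mann-Whitney test on hospital-level risk-standardized NIV % and a chi-squared test on unadjusted mortality. The NIV % vectors are hypothetical, while the mortality counts are taken from Table 3.

```python
# Hedged sketch of the rural-vs-urban comparisons in Tables 2 and 3, using
# SciPy instead of the SAS procedures actually used. The hospital-level NIV %
# vectors are hypothetical; the mortality counts are taken from Table 3.
from scipy import stats

rural_niv = [65.3, 34.2, 84.2, 70.1, 58.8]          # hypothetical values
urban_niv = [55.1, 10.8, 86.6, 40.0, 62.3, 51.7]    # hypothetical values
u_stat, p_niv = stats.mannwhitneyu(rural_niv, urban_niv,
                                   alternative="two-sided")

# 2x2 table of deaths vs survivors: rural 80/2548, urban 1079/39,500.
table = [[80, 2548 - 80], [1079, 39500 - 1079]]
chi2, p_mort, dof, expected = stats.chi2_contingency(table)

print(f"NIV % Mann-Whitney p={p_niv:.3f}; mortality chi-squared p={p_mort:.3f}")
```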
Although this may merely represent pneumonia overdiagnosis or the use of an ICD code to order a chest X-ray, pneumonia and AECOPD frequently coexist.29,30 In our analysis, we included only admissions with a primary diagnosis of AECOPD. We also adjusted for concomitant pneumonia when we calculated the risk-standardized NIV use because pneumonia is associated with worse outcomes.30

Several observational studies conducted in various countries have reported approximately a five-fold increase in NIV use rates in the early 2010s compared to those in the early 2000s.31–33 Using the Premier inpatient database, Stefan et al showed that initial NIV use increased by 15.1% annually in the USA.33 In 2001, NIV use was 5.9%, but by 2011 it was 14.8%. This increase concurred with a decline in IMV use. The widespread adoption of NIV is partly due to its ease of application. This may be the reason why we observed an inverse association between NIV and IMV across hospitals, and not necessarily that NIV reduced the need for IMV.

A plethora of studies have shown significant variation in NIV use across hospitals.8,9,34 Mehta et al showed that the risk-standardized NIV % among 37,516 hospitalizations for AECOPD across 250 hospitals in California was 10% (IQI=1.6–35.7%).9 In another cross-sectional analysis of 77,576 hospitalizations across 386 US hospitals, the risk-standardized NIV % among those admissions requiring ventilation support ranged from 47.4% at the 10th percentile to 84.7% at the 90th percentile.8 We also observed significant variation even though we conducted the study in a single health system that has published guidelines to provide care for COPD patients.35

NIV variation across hospitals may be related to the case mix of patients admitted to those hospitals. Large tertiary centers may care for very sick patients, and for that reason, IMV (and not NIV) is more frequently used there than in small rural hospitals. Adjusting for patient illness severity is therefore crucial for studying NIV variation across hospitals. Prior reports studying risk-standardized NIV use were limited because patient acuity was not included in the analysis; instead, comorbidities identified using ICD-9 codes were used as surrogates for severity of illness. In our analysis, we not only adjusted for comorbidities, but also accounted for severity of illness. Adjusting for patient acuity is critical as NIV may be underused or overused. NIV may be unnecessary in low-acuity patients, but it may be harmful in high-acuity patients who may require IMV, and delays in initiating IMV in these patients may increase mortality.25,36 Our findings showed that rural hospitals used NIV more, which is in agreement with previous literature.8

Numerous hospital factors may also be responsible for variations in NIV use. In the early years of NIV, lack of equipment was the main reason for underuse.37 Subsequent surveys showed that insufficient training of physicians was the main reason for NIV underuse, followed by lack of equipment.38 Inadequate training of respiratory therapists was another contributor.
A survey of Canadian physicians conducted in 2003 did not show variation in practice among various facilities, but rather variation among specialties.39 A national survey of VA physicians and respiratory therapists in 2004 revealed that lack of training and experience was a significant factor in NIV underutilization.40 A qualitative study with in-depth interviews of 32 participants in seven US hospitals, including nurses, physicians, respiratory therapists, and leaders, showed that respiratory therapist autonomy, interdisciplinary teamwork, staff education, and the presence of policies and protocols were features of high-performing hospitals.41

Lindenauer et al demonstrated that among several hospital characteristics, including teaching status, staffing levels, and hospital location and volume, only the presence of intensivists and hospital rurality was associated with high risk-adjusted NIV use.8 Although we found that rural hospitals use NIV more frequently, we did not find any difference in risk-standardized NIV % between medium- and high-resource hospitals, or between low- and high-volume hospitals. We did find more black and rural patients in hospitals with higher risk-standardized NIV %, consistent with the quartile comparisons in Table 1. This finding may be related to how and where patients seek medical care (eg, rural patients may seek care in nearby low-resource hospitals). Racial disparities in treatment are also possible. A previous study in VA demonstrated that rates of IMV were higher in black than in white patients.42

Another finding of this study is that hospitals in the highest risk-standardized NIV % quartiles had longer hospital length of stay and received more rural patients, suggesting that these hospitals may delay discharging rural patients (perhaps because of a lack of skilled nursing and rehabilitation facilities in the area).

Our study has several limitations. It was conducted in a single health-care system, with most of the patients being male. Therefore, our findings should be generalized with caution. Because we did not have smoking exposure data or pulmonary function data, we cannot confirm the diagnosis of COPD. We cannot exclude the misdiagnosis of hypercapnic respiratory failure due to obesity hypoventilation as COPD-related respiratory failure. Nevertheless, we included only patients who received oral or intravenous glucocorticoids, and we took obstructive sleep apnea into account to calculate our risk-standardized NIV %. In addition, our previous work, which used less stringent ICD code criteria, showed an accuracy in identifying AECOPD of between 80% and 90%.10 We did not exclude patients with obstructive sleep apnea, as obstructive sleep apnea–COPD overlap is common, in particular among patients with advanced disease.43 Excluding those patients would have resulted in a sample that was not representative of the true population. We have no data regarding the NIV modes used (eg, BIPAP). Physiological parameters such as the partial pressure of CO2 in the arterial blood, which are indicators of patient acuity, were not available. Moreover, we did not have data from civilian hospitals; veterans may have been hospitalized at civilian hospitals owing to ease of access or convenience. There were no available data regarding the staffing levels of hospitals, including whether a board-certified critical care physician or ICU telemedicine was available, both of which are associated with improved outcomes.
The above limitations do not undermine the strengths of our study, which include the large sample size, the strict exclusion criteria, and the use of an illness severity scoring system to calculate the risk-standardized NIV %.

Conclusions: Among patients admitted with AECOPD in VA hospitals, NIV use varied significantly across hospitals, with rural hospitals having a higher risk-standardized NIV % than urban ones. Hospitals with the highest NIV rates cared for more rural patients and had longer hospital length of stay than hospitals with the lowest NIV rates. Further research should investigate the exact mechanism of variation in NIV use between rural and urban hospitals.
[ "pulmonary disease", "chronic obstructive", "epidemiology", "non-invasive ventilation" ]
Introduction: Chronic obstructive pulmonary disease (COPD) is a leading cause of mortality in the USA1 and is associated with high resource utilization.2,3 COPD patients experience exacerbations of the disease (AECOPD), defined as acute worsening of respiratory symptoms according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines.2 Standard therapy for moderate AECOPDs includes bronchodilators, glucocorticoids, and antibiotics.

Severe AECOPDs, defined as those requiring an emergency room visit or hospitalization, have a 1-year mortality of 26.2% and are associated with high rates of hospital readmission.4 AECOPD-related hospitalizations are also associated with reduced quality of life and are responsible for 70% of the total direct health-care costs for COPD.5,6 Severe AECOPD may lead to acute respiratory failure requiring supplemental oxygen and/or ventilation support in addition to standard therapy. The most effective treatment for acute respiratory failure due to severe AECOPD is non-invasive ventilation (NIV). NIV reduces the need for invasive mechanical ventilation (IMV) by 64% and mortality by 46%, according to a 2017 meta-analysis.7

Despite the benefit of NIV in severe AECOPDs, NIV use in AECOPD-related hospitalizations varies significantly across the USA.8,9 This variation in NIV use may be due to hospital- and patient-level characteristics, such as the patient's severity of illness (acuity). Facilities with high-acuity patients may use NIV more frequently. Nevertheless, prior studies examining NIV variation across US hospitals did not account for patient acuity.8,9 Variation in NIV use may also be related to the heterogeneity in its adoption across hospitals. High-resource hospitals may use NIV more frequently, as opposed to low-resource facilities, which may often transfer patients who require NIV because they do not feel comfortable treating these patients in case of NIV failure. We hypothesized that NIV varies significantly across US hospitals, even after adjusting for patient acuity. Our primary objective was to investigate NIV use in AECOPD-related hospitalizations across the Veterans Health Administration (VA) and assess whether patient characteristics (eg, demographics, residential location, and comorbidities) or hospital characteristics (eg, location, volume, and resources) were associated with NIV use.

Methods: This retrospective cohort study included AECOPD-related hospitalizations to acute-care VA hospitals between October 1, 2011 and September 30, 2017. This work was approved by the Institutional Review Boards and Research and Development Committee at the Iowa City VA Health Care System [IRB 201712713] as part of a larger study with a previous publication.10 The study was conducted in accordance with the Declaration of Helsinki. A waiver of informed consent was granted for this retrospective study because the study examined only existing patient-level data. Patients' data were kept confidential.

Setting: We obtained data from the Veterans Informatics and Computing Infrastructure (VINCI), an integrated system that includes VA's electronic health records and administrative data. Admissions to VA acute-care hospitals were identified via the Corporate Data Warehouse (CDW) using the inpatient domain. These datasets contain patient demographics, including residential address and ZIP code, diagnosis and procedure codes during admission, admission source, and admission and discharge dates.
Definitions: We identified patients hospitalized with AECOPD based on the International Classification of Diseases, Ninth and Tenth Revisions, Clinical Modification (ICD-9-CM and ICD-10-CM) and using the following criteria: 1) a principal diagnosis of COPD (ICD-9-CM codes: 490.x, 491.xx, 492.xx, and 496.xx; or ICD-10-CM codes: J41, J43, J43.1, J43.2, J43.8, J43.9, J44.0, J44.1, J41.8, J42, or J44.9) or 2) a principal diagnosis of acute respiratory failure (518.81, 518.82, 518.84, or 799.1; ICD-10-CM: J96.00, J96.01, J96.02, J96.11, J96.12, J96.20, J96.21, J96.22, J96.90, J96.92, or R06.03) with a secondary AECOPD diagnosis (ICD-9-CM codes: 491.21, 491.22; ICD-10-CM codes: J44.1, J44.0).

NIV use was defined using ICD-9-CM procedure code 93.90 or, for ICD-10, codes 5A09357, 5A09457, and 5A09557. IMV use was defined using ICD-9-CM procedure codes 96.04 and 96.70–96.72 or, for ICD-10, codes 0BH17EZ, 0BH18EZ, 5A1935Z, 5A1945Z, and 5A1955Z. Primary ventilation support was defined based on whether the patient received NIV and/or IMV within one day from the admission date. A sketch of how these code lists can be turned into analysis flags follows.
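As a rough illustration of how the ventilation definitions above translate into per-admission flags, here is a hedged Python sketch; the table layout (`admission_id`, `code`, `days_from_admission` columns) is an assumption for illustration, not the study's actual data model.

```python
# Hypothetical sketch: derive primary ventilation flags from procedure codes,
# using the ICD-9/ICD-10 code lists given above. The DataFrame layout is assumed.
import pandas as pd

NIV_CODES = {"93.90", "5A09357", "5A09457", "5A09557"}
IMV_CODES = {"96.04", "96.70", "96.71", "96.72",
             "0BH17EZ", "0BH18EZ", "5A1935Z", "5A1945Z", "5A1955Z"}

def primary_ventilation_flags(procedures: pd.DataFrame) -> pd.DataFrame:
    """Flag NIV/IMV use within one day of admission for each admission.
    Expects columns: admission_id, code, days_from_admission."""
    early = procedures[procedures["days_from_admission"] <= 1]
    return early.groupby("admission_id")["code"].agg(
        niv=lambda codes: bool(set(codes) & NIV_CODES),
        imv=lambda codes: bool(set(codes) & IMV_CODES),
    )
```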
Patient rurality was defined using census tracts based on Rural Urban Commuting Area (RUCA) codes. RUCA codes reflect measures of urbanization, commuting, and population density.11–15 RUCA codes were further condensed to designate an area as urban (RUCA codes: 1 and 1.1), rural (RUCA codes: 2, 2.1, 3, 4, 4.1, 5, 5.1, 6, 7, 7.1, 7.2, 8, 8.1, 8.2, and 9), or isolated rural (RUCA codes: 10, 10.2, and 10.3), using the categories defined by the VA Office of Rural Health16 (a compact sketch of this mapping appears at the end of this subsection). Travel time to the nearest VA hospital was determined from VA Planning Systems and Support Group (PSSG) geo-coded enrollment files. PSSG calculates distances to tertiary care VA sites for all enrolled patients using actual longitude and latitude coordinates of patient residences and the nearest VA hospitals. Travel time to the nearest VA hospital was estimated using geospatial technologies, which reflect roads and average driving conditions.11

Comorbidities were defined using ICD-9-CM and ICD-10-CM diagnosis codes within 1 year prior to admission in the inpatient, outpatient, or fee-basis data files, except for pneumonia, which was defined based on the presence of the corresponding diagnosis code during the hospitalization, as previously described.17 Severity of illness was quantified using a modified APACHE score (mAPACHE), which includes vital signs and commonly obtained laboratory values,12,18,19 and has also been externally validated.20 In brief, mAPACHE uses the same scoring assignments as APACHE III and includes all the predictor variables from the APACHE III excluding the Glasgow Coma Scale, urine output, arterial blood gas, and mechanical ventilator components. mAPACHE variables include age, comorbidities, mean arterial blood pressure, heart rate, respiratory rate, temperature, hematocrit, white blood cells, sodium, blood urea nitrogen, creatinine, glucose, albumin, and bilirubin. We retrieved all the data from electronic medical records, as described previously.12,18,19 Obstructive sleep apnea was defined using ICD-9 diagnosis code 327.2 or 780.57, or ICD-10 diagnosis codes G47.30–G47.39.

Hospital rurality was defined using the same methods as patient rurality, but the facilities in rural and isolated rural areas were collapsed to one category (rural). Hospital complexity was defined by VA as 1 (high resource), 2 (medium resource), and 3 (low resource).21 Hospital COPD-case volume was classified as high (above the median) or low (below the median) based on the total COPD volume during the study period. Hospital length of stay was calculated from the admission and discharge dates, in days. Hospital mortality was defined using the date of death listed in the VA Vital Status File and the occurrence of this date between the admission and discharge dates inclusive.

We included admissions with AECOPD at 121 VA acute-care facilities from 2011 to 2017. We excluded records admitted to facilities with complexity 3 (low resource) and newly opened facilities because these facilities may not provide ventilator support. We also excluded patients in hospices (V667/Z51.5 code within 1 year prior to the hospitalization date) and admissions of patients who received no oral or intravenous glucocorticoids during the hospitalizations.
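A compact way to express the RUCA collapsing described above is sketched below; representing the codes as strings is an assumption made for illustration, and missing codes would need separate handling.

```python
# Hypothetical sketch: collapse RUCA codes into the urban/rural/isolated
# residential categories defined above (VA Office of Rural Health groupings).
URBAN_RUCA = {"1", "1.1"}
ISOLATED_RUCA = {"10", "10.2", "10.3"}

def residential_category(ruca_code: str) -> str:
    if ruca_code in URBAN_RUCA:
        return "urban"
    if ruca_code in ISOLATED_RUCA:
        return "isolated"
    return "rural"  # RUCA 2-9 and their subcodes
```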
Statistical Analysis: Because NIV use may vary with the patient’s severity of illness (acuity), we calculated the risk-standardized NIV % to assess NIV variation across hospitals, following an approach similar to that of a previous study.8 In brief, we created a hierarchical multivariable logistic regression model with NIV use within the first 24 hours of admission as the dependent variable (outcome) among records started on ventilation (records without NIV or invasive ventilation within the first 24 hours were not used). Hospital was included as a random effect. Models were adjusted for severity of illness (mAPACHE) and several comorbidities (obstructive sleep apnea, presence of comorbid pneumonia, diabetes mellitus, congestive heart failure, hypertension, peripheral vascular disease, coronary artery disease, cancer, chronic kidney disease, stroke, and liver disease).22 Similar to the approach used for publicly reported hospital performance metrics,23,24 we calculated the expected and predicted proportion of NIV use for each hospital using these models.
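The study fit these models in SAS; as a rough Python analogue, a random-intercept logistic model can be sketched with statsmodels’ variational Bayesian mixed GLM. This is a sketch under stated assumptions: the data frame is simulated, the covariate list is abbreviated relative to the full list above, and BinomialBayesMixedGLM is a stand-in for, not a replica of, the procedure actually used. The expected/predicted distinction it feeds into is detailed next.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the analytic file: admissions that received NIV
# and/or IMV within 24 hours; niv = 1 for NIV, 0 for IMV only.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "hospital": rng.integers(0, 20, n).astype(str),
    "mapache": rng.normal(35, 12, n),
    "osa": rng.integers(0, 2, n),
    "pneumonia": rng.integers(0, 2, n),
})
logit = -0.5 + 0.01 * (df["mapache"] - 35) + 0.2 * df["osa"]
df["niv"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fixed effects for acuity and comorbidities (abbreviated), plus a
# hospital-level random intercept.
model = sm.BinomialBayesMixedGLM.from_formula(
    "niv ~ mapache + osa + pneumonia",
    {"hospital": "0 + C(hospital)"},
    data=df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```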
Specifically, the expected rate is calculated from patient characteristics alone and does not include the hierarchical model’s hospital-specific random intercepts, whereas the predicted rate does incorporate them. The risk-standardized proportion of NIV use for each hospital was determined by multiplying the overall unadjusted proportion of NIV use by the ratio of the predicted to the expected proportion for that hospital. We then stratified hospitals into quartiles of risk-standardized NIV % and compared patient characteristics across quartiles for trend and between Q1 and Q4. We used linear regression to examine trends across risk-standardized NIV % quartiles (Q1 to Q4) for continuous variables (p for trend), and a Cochran–Mantel–Haenszel test to examine trends across quartiles for categorical variables. The t-test (for continuous variables) and chi-squared test (for categorical variables) were also used to compare characteristics between the lowest (Q1) and highest (Q4) quartiles of risk-standardized NIV %. Subsequently, we compared the risk-standardized NIV % using the Wilcoxon–Mann–Whitney test, and unadjusted hospital mortality using the chi-squared test, between: 1) rural and urban, 2) low-volume and high-volume, and 3) complexity 2 (medium-resource) and complexity 1 (high-resource) hospitals. We also created a hierarchical logistic regression model to examine the association of hospital characteristics with hospital mortality, adjusted for severity of illness and accounting for repeated admissions. All statistical analyses were conducted using SAS Enterprise Guide, 2014 (SAS Institute Inc).
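A minimal sketch of the standardization arithmetic just described, assuming hospital-level expected and predicted rates have already been obtained from such a model; all numbers below are made up for illustration:

```python
import pandas as pd

# Hypothetical hospital-level summaries from the fitted hierarchical model:
# 'expected'  = mean predicted probability using fixed effects only;
# 'predicted' = mean probability including the hospital's random intercept.
by_hosp = pd.DataFrame({
    "hospital":  list("ABCDEFGH"),
    "expected":  [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49, 0.51],
    "predicted": [0.61, 0.35, 0.58, 0.44, 0.52, 0.70, 0.30, 0.49],
}).set_index("hospital")

overall_rate = 0.55  # overall unadjusted NIV proportion (illustrative)

# Risk-standardized NIV % = overall rate x (predicted / expected), per hospital.
by_hosp["rs_niv_pct"] = 100 * overall_rate * by_hosp["predicted"] / by_hosp["expected"]

# Stratify hospitals into quartiles (Q1 = lowest risk-standardized NIV %).
by_hosp["quartile"] = pd.qcut(by_hosp["rs_niv_pct"], 4,
                              labels=["Q1", "Q2", "Q3", "Q4"])
print(by_hosp.sort_values("rs_niv_pct"))
```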
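The between-group comparisons map onto standard tests; here is a sketch using scipy (the study itself used SAS). The mortality counts come from Table 3 below, while the rural/urban NIV values are invented for illustration:

```python
from scipy import stats

# Hypothetical risk-standardized NIV % values for rural vs urban hospitals.
rural = [65.3, 58.1, 72.4, 61.0, 80.2, 55.7, 69.9, 63.2, 77.5, 59.4]
urban = [55.1, 41.9, 64.4, 38.7, 52.3, 60.1, 47.8, 33.5, 58.8, 45.2]

# Wilcoxon-Mann-Whitney test comparing the two distributions.
u_stat, p_mw = stats.mannwhitneyu(rural, urban, alternative="two-sided")

# Chi-squared test on unadjusted hospital mortality (deaths vs survivors),
# using the rural/urban counts reported in Table 3.
table = [[80, 2468],      # rural: deaths, survivors (N = 2548)
         [1079, 38421]]   # urban: deaths, survivors (N = 39,500)
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(f"Mann-Whitney p = {p_mw:.3f}; chi-squared p = {p_chi2:.3f}")
```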
Results: Of 64,895 admissions with AECOPD at 121 VA acute-care facilities from 2011 to 2017, 22,847 records met the exclusion criteria (Figure 1). The final cohort included 42,048 admissions at 106 VA hospitals, all of which were ICU admissions. We stratified these records by ventilator support within the first 24 hours of admission (primary ventilation support). Figure 1 also shows the outcomes of admissions stratified by ventilation support.

Figure 1. Patient flowchart and outcomes by ventilation support within one day from admission. Abbreviations: IMV, invasive mechanical ventilation; NIV, non-invasive ventilation.

The percentage of total admissions that received any ventilation support (NIV, IMV, or both) within the first 24 hours across all 106 hospitals ranged from 0.8% to 13.7%, with a median of 5.4% (interquartile interval [IQI]=3.7–7.8%) (Figure 2). The percentage of total admissions that received primary NIV ranged from 0% to 9%, with a median of 2.8% (IQI=1.2–4.5%). Among only those admissions that received ventilation support within the first 24 hours (primary NIV, IMV, or both), the unadjusted NIV % ranged from 0% to 100%, with a median of 54.7% (IQI=34.8–68.2%).

Figure 2. Percentage of ventilatory support within one day from admission across the 106 facilities. Abbreviations: IMV, invasive mechanical ventilation; NIV, non-invasive ventilation.

The risk-standardized NIV % (the adjusted initial NIV % among admissions receiving ventilation support in the initial 24 hours) ranged from 10.8% to 86.6%, with a median of 58.0% (IQI=41.9–64.4%).
Table 1 shows the characteristics of the hospitals stratified by risk-standardized NIV % into quartiles; the risk-standardized NIV % accounted for severity of illness and comorbidities. Quartile 1 (Q1) included the hospitals with the lowest risk-standardized NIV % and quartile 4 (Q4) the hospitals with the highest; other patient- and hospital-level characteristics are reported by risk-standardized NIV % quartile. There were more black and rural patients in the highest risk-standardized NIV % quartile relative to the lower quartiles. Patients in the highest risk-standardized NIV % quartile had longer travel times to the nearest VA hospital than those in the lowest quartile. IMV rates were lower and hospital length of stay was longer in hospitals in the highest NIV quartiles relative to the lowest quartiles.

Table 1. Patient and Hospital Characteristics of 106 Hospitals Stratified by Risk-Standardized Non-Invasive Ventilation Percentage Within One Day from Admission

| Characteristic | NIV Q1 (N=11,810) | NIV Q2 (N=10,031) | NIV Q3 (N=11,719) | NIV Q4 (N=8488) | p for Trend | p Q1 vs Q4* |
| Risk-standardized NIV % | 10.8–41.9% | 42.2–56.7% | 58.0–64.3% | 64.4–86.6% | | |
| N hospital | 26 | 27 | 27 | 26 | | |
| Age, mean (SD) | 70.29 (8.69) | 69.92 (9.01) | 70.49 (9.74) | 70.33 (9.21) | 0.10 | 0.79 |
| Sex (female), N (%) | 507 (4.29) | 471 (4.70) | 476 (4.06) | 350 (4.12) | 0.11 | 0.55 |
| Race, N (%) | | | | | <0.001 | <0.001 |
|   White | 9126 (77.30) | 7505 (74.82) | 9047 (77.20) | 6249 (73.62) | | |
|   Black | 1896 (16.05) | 1780 (17.74) | 1989 (16.97) | 1637 (19.29) | | |
|   Other | 364 (3.08) | 365 (3.64) | 262 (2.24) | 290 (3.42) | | |
|   Missing | 421 (3.56) | 381 (3.80) | 421 (3.59) | 312 (3.68) | | |
| Patient residential location, N (%) | | | | | <0.001 | <0.001 |
|   Urban | 7454 (63.12) | 6999 (69.77) | 7422 (63.33) | 5155 (60.73) | | |
|   Rural | 3549 (30.05) | 2416 (24.09) | 3525 (30.08) | 2717 (32.01) | | |
|   Isolated | 478 (4.05) | 347 (3.46) | 468 (3.99) | 354 (4.17) | | |
|   Missing | 329 (2.79) | 269 (2.68) | 304 (2.59) | 262 (3.09) | | |
| Obstructive sleep apnea, N (%) | 4372 (37.02) | 3895 (38.83) | 4568 (38.98) | 3059 (36.04) | <0.001 | 0.15 |
| Diabetes mellitus, N (%) | 1964 (16.63) | 1643 (16.38) | 2349 (20.04) | 1614 (19.02) | <0.001 | <0.001 |
| Congestive heart failure, N (%) | 1532 (12.97) | 1386 (13.82) | 1759 (15.01) | 1179 (13.89) | <0.001 | 0.058 |
| Pneumonia, N (%) | 6377 (54.00) | 5532 (55.15) | 6472 (55.23) | 4783 (56.35) | 0.010 | <0.001 |
| Hypertension, N (%) | 665 (5.63) | 690 (6.88) | 1038 (8.86) | 795 (9.37) | <0.001 | <0.001 |
| Peripheral vascular disease, N (%) | 903 (7.65) | 746 (7.44) | 1028 (8.77) | 676 (7.96) | 0.001 | 0.40 |
| Cancer, N (%) | 931 (7.88) | 814 (8.11) | 1126 (9.61) | 788 (9.28) | <0.001 | <0.001 |
| Coronary artery disease, N (%) | 565 (4.78) | 569 (5.67) | 780 (6.66) | 530 (6.24) | <0.001 | <0.001 |
| Chronic kidney disease, N (%) | 770 (6.52) | 709 (7.07) | 925 (7.89) | 673 (7.93) | <0.001 | <0.001 |
| Stroke, N (%) | 627 (5.31) | 550 (5.48) | 727 (6.20) | 521 (6.14) | 0.006 | 0.012 |
| Liver disease, N (%) | 398 (3.37) | 338 (3.37) | 424 (3.62) | 320 (3.77) | 0.34 | 0.13 |
| Admission source, N (%) | | | | | <0.001 | <0.001 |
|   Hospital | 515 (4.36) | 337 (3.36) | 371 (3.17) | 238 (2.82) | | |
|   Nursing home | 140 (1.19) | 135 (1.35) | 170 (1.45) | 131 (1.55) | | |
|   Other facility | 20 (0.17) | 11 (0.11) | 12 (0.10) | 26 (0.31) | | |
|   Outpatient | 11,100 (93.99) | 9494 (94.65) | 11,041 (94.21) | 7963 (94.44) | | |
|   Unknown | 35 (0.30) | 54 (0.54) | 125 (1.07) | 74 (0.88) | | |
| Travel time to VAMC (min), mean (SD) | 67.26 (68.46) | 68.11 (73.11) | 72.51 (55.21) | 82.71 (76.47) | <0.001 | <0.001 |
| Any IMV during hospital stay, N (%) | 604 (5.11) | 486 (4.84) | 437 (3.73) | 277 (3.26) | <0.001 | <0.001 |
| mAPACHE, mean (SD) | 35.73 (11.95) | 35.00 (12.02) | 35.30 (11.91) | 35.63 (11.98) | 0.60 | 0.55 |
| Hospital mortality, N (%) | 303 (2.57) | 303 (3.02) | 315 (2.69) | 238 (2.80) | 0.21 | 0.30 |
| Hospital length of stay (days), mean (SD) | 5.54 (15.21) | 6.32 (27.04) | 7.13 (40.05) | 7.68 (43.30) | <0.001 | <0.001 |

Note: *Q1 vs Q4 using chi-squared or t-test. Abbreviations: IMV, invasive mechanical ventilation; NIV, non-invasive ventilation; Q, quartile; SD, standard deviation; VAMC, Veterans Health Administration Medical Center.
Table 2 shows a risk-standardized NIV % of 55.1% (min=10.8%; max=86.6%) in urban hospitals versus 65.3% (min=34.2%; max=84.2%) in rural hospitals (p=0.047). There was no difference in risk-standardized NIV % between high- and low-COPD-volume facilities (p=0.33), or between complexity 2 (medium-resource) and complexity 1 (high-resource) hospitals (p=0.45).

Table 2. Risk-Standardized Non-Invasive Ventilation Percentage Stratified by Hospital Characteristics

| | Number of Hospitals | Number of Patients | Risk-Standardized NIV %, Median (Min–Max) | p Value* |
| Overall | 106 | 42,048 | 57.3 (10.8–86.6) | |
| Rural/urban | | | | 0.047 |
|   Rural | 10 | 2548 | 65.3 (34.2–84.2) | |
|   Urban | 96 | 39,500 | 55.1 (10.8–86.6) | |
| Hospital volume | | | | 0.33 |
|   Low | 53 | 12,813 | 59.90 (10.8–86.6) | |
|   High | 53 | 29,235 | 53.43 (13.7–84.9) | |
| Hospital complexity | | | | 0.45 |
|   2 (Medium resource) | 15 | 2944 | 62.14 (20.4–86.6) | |
|   1 (High resource) | 91 | 39,104 | 55.71 (10.8–84.9) | |

Note: *Wilcoxon two-sample test.

Table 3 shows unadjusted mortality stratified by hospital characteristics. After adjusting for patient acuity and taking repeated admissions into account, admission at an urban hospital was not associated with mortality (odds ratio [OR]=0.90; 95% CI=0.68 to 1.19, p=0.46) relative to admission at a rural hospital. Similarly, admission to a low-volume hospital was not associated with mortality (OR=1.00; 95% CI=0.86 to 1.18, p=0.96) relative to admission at a high-volume hospital, but admission to a medium-resource facility (complexity 2) was associated with increased mortality (OR=1.55; 95% CI=1.20 to 2.00, p<0.001) relative to admission at a high-resource facility.

Table 3. Unadjusted Mortality Stratified by Hospital Characteristics

| | Number of Hospitals | Number of Patients | Unadjusted Hospital Mortality, Deaths (%) | p Value* |
| Overall | 106 | 42,048 | 1159 (2.8%) | |
| Rural/urban | | | | 0.22 |
|   Rural | 10 | 2548 | 80 (3.14%) | |
|   Urban | 96 | 39,500 | 1079 (2.73%) | |
| Hospital volume | | | | 0.004 |
|   Low | 53 | 12,813 | 398 (3.11%) | |
|   High | 53 | 29,235 | 761 (2.60%) | |
| Hospital complexity | | | | <0.001 |
|   2 (Medium resource) | 15 | 2944 | 128 (4.35%) | |
|   1 (High resource) | 91 | 39,104 | 1031 (2.64%) | |

Note: *Chi-squared test.

Discussion: In a cohort of 42,048 patients admitted with AECOPD to VA hospitals over 6 years, we observed considerable variation in NIV use across hospitals. After taking patient acuity into account using a validated critical illness severity score18,20,25 and comorbidities, we observed that hospitals in the highest NIV use quartile cared for more rural patients, used IMV less, and had longer hospital lengths of stay, but had no difference in mortality relative to hospitals in the lowest quartiles. The NIV use rate was higher in rural than in urban hospitals. Hospital mortality did not differ between rural and urban facilities.
It is well established that NIV reduces the need for IMV and hospital length of stay, and improves mortality, in acute respiratory failure due to AECOPD26 and heart failure,27 but its role in other causes of respiratory failure, such as pneumonia, is controversial.28 In a study using 2011 claims data from California, the most common reason for NIV (26.1%) was pneumonia. Among hospitalizations with NIV as the primary ventilator support, hospitals that used NIV for strong-evidence conditions (COPD and heart failure) had lower NIV failure rates.9 We found that concomitant pneumonia was present in more than 50% of the AECOPD patients requiring NIV and/or IMV within 24 hours of admission. Although this may merely represent pneumonia overdiagnosis or the use of an ICD code to order a chest X-ray, pneumonia and AECOPD frequently coexist.29,30 In our analysis, we included only admissions with a primary diagnosis of AECOPD. We also adjusted for concomitant pneumonia when calculating the risk-standardized NIV use because pneumonia is associated with worse outcomes.30

Several observational studies conducted in various countries have reported an approximately five-fold increase in NIV use rates in the early 2010s compared with the early 2000s.31–33 Using the Premier inpatient database, Stefan et al showed that initial NIV use increased by 15.1% annually in the USA: NIV use was 5.9% in 2001 but 14.8% by 2011.33 This increase concurred with a decline in IMV use. The widespread use of NIV is partly due to its ease of application, which may explain why we observed an inverse association between NIV and IMV across hospitals, rather than NIV necessarily reducing the need for IMV.

Numerous studies have shown significant variation in NIV use across hospitals.8,9,34 Mehta et al showed that the risk-standardized NIV rate among 37,516 hospitalizations for AECOPD across 250 hospitals in California was 10% (IQI=1.6–35.7%).9 In another cross-sectional analysis of 77,576 hospitalizations across 386 US hospitals, the risk-standardized NIV % among admissions requiring ventilation support ranged from 47.4% at the 10th percentile to 84.7% at the 90th percentile.8 We also observed significant variation even though we conducted the study in a single health system that has published guidelines for the care of COPD patients.35 NIV variation across hospitals may be related to the case mix of patients admitted to those hospitals: large tertiary centers may care for very sick patients, and for that reason IMV (rather than NIV) may be used there more frequently than in small rural hospitals. Adjusting for patient illness severity is therefore crucial when studying NIV variation across hospitals. Prior reports of risk-standardized NIV use were limited because patient acuity was not included in the analysis; instead, comorbidities identified using ICD-9 codes were used as surrogates for severity of illness. In our analysis, we adjusted not only for comorbidities but also for severity of illness. Adjusting for patient acuity is critical because NIV may be underused or overused: it may be unnecessary in low-acuity patients, but it may be harmful in high-acuity patients who require IMV, in whom delays in initiating IMV may increase mortality.25,36 Our finding that rural hospitals used NIV more is in agreement with the previous literature.8 Numerous hospital factors may also be responsible for variation in NIV use.
In the early years of NIV, lack of equipment was the main reason for underuse.37 Subsequent surveys showed that insufficient training of physicians was the main reason for NIV underuse, followed by lack of equipment.38 Inadequate training of respiratory therapists was another contributor. A survey of Canadian physicians conducted in 2003 did not show variation in practice among facilities, but rather variation among specialties.39 A national survey of VA physicians and respiratory therapists in 2004 revealed that lack of training and experience was a significant factor in NIV underutilization.40 A qualitative study with in-depth interviews of 32 participants in seven US hospitals, including nurses, physicians, respiratory therapists, and leaders, showed that respiratory therapist autonomy, interdisciplinary teamwork, staff education, and the presence of policies and protocols were features of high-performing hospitals.41 Lindenauer et al demonstrated that among several hospital characteristics, including teaching status, staffing levels, and hospital location and volume, only the presence of intensivists and hospital rurality were associated with high risk-adjusted NIV use.8 Although we found that rural hospitals used NIV more frequently, we did not find any difference in risk-standardized NIV % between medium- and high-resource hospitals, or between low- and high-volume hospitals. We did find more black and rural patients in hospitals with higher risk-standardized NIV % (Table 1). This finding may be related to how and where patients seek medical care (eg, rural patients may seek care in nearby low-resource hospitals). Racial disparities in treatment are also possible; a previous study in VA demonstrated that rates of IMV were higher in black than in white patients.42 Another finding of this study is that hospitals in the highest risk-standardized NIV % quartiles had longer hospital lengths of stay and received more rural patients, suggesting that these hospitals delay discharging rural patients (perhaps because of a lack of skilled nursing and rehabilitation facilities in the area).

Our study has several limitations. It was conducted in a single health-care system, with most of the patients being male; our findings should therefore be generalized with caution. Because we did not have smoking exposure or pulmonary function data, we could not confirm the diagnosis of COPD, and we cannot exclude the misdiagnosis of hypercapnic respiratory failure due to obesity hypoventilation as COPD-related respiratory failure. Nevertheless, we included only patients who received oral or intravenous glucocorticoids, and we took obstructive sleep apnea into account when calculating the risk-standardized NIV %. In addition, our previous work, which used less stringent ICD code criteria, showed an accuracy of 80–90% in identifying AECOPD.10 We did not exclude patients with obstructive sleep apnea, as obstructive sleep apnea–COPD overlap is common, particularly among patients with advanced disease43; excluding those patients would have yielded a sample that did not represent the true population well. We have no data on the NIV modes used (eg, BIPAP). Physiological parameters such as the partial pressure of CO2 in arterial blood, which are indicators of patient acuity, were not available. Moreover, we did not have data from civilian hospitals; veterans may have been hospitalized at civilian hospitals owing to ease of access or convenience.
There were no available data on hospital staffing, including whether a board-certified critical care physician or ICU telemedicine was available, both of which are associated with improved outcomes. These limitations do not undermine the strengths of our study: the large sample size, the strict exclusion criteria, and the use of an illness severity scoring system to calculate the risk-standardized NIV %.

Conclusions: Among patients admitted with AECOPD to VA hospitals, NIV use varied significantly across hospitals, with rural hospitals having a higher risk-standardized NIV % than urban ones. Hospitals with the highest NIV rates cared for more rural patients and had longer hospital lengths of stay than hospitals with the lowest NIV rates. Further research should investigate the exact mechanism of variation in NIV use between rural and urban hospitals.
Background: Non-invasive mechanical ventilation (NIV) use in patients admitted with acute respiratory failure due to COPD exacerbations (AECOPDs) varies significantly between hospitals. However, previous literature did not account for patients' illness severity. Our objective was to examine the variation in risk-standardized NIV use after adjusting for illness severity. Methods: We retrospectively analyzed AECOPD hospitalizations from 2011 to 2017 at 106 acute-care Veterans Health Administration (VA) hospitals in the USA. We stratified hospitals based on the percentage of NIV use among patients who received ventilation support within the first 24 hours of admission into quartiles, and compared patient characteristics. We calculated the risk-standardized NIV % using hierarchical models adjusting for comorbidities and severity of illness. We then stratified the hospitals by risk-standardized NIV % into quartiles and compared hospital characteristics between quartiles. We also compared the risk-standardized NIV % between rural and urban hospitals. Results: In 42,048 admissions for AECOPD over 6 years, the median risk-standardized initial NIV % was 57.3% (interquartile interval [IQI]=41.9-64.4%). Hospitals in the highest risk-standardized NIV % quartiles cared for more rural patients, used invasive ventilators less frequently, and had longer length of hospital stay, but had no difference in mortality relative to the hospitals in the lowest quartiles. The risk-standardized NIV % was 65.3% (IQI=34.2-84.2%) in rural and 55.1% (IQI=10.8-86.6%) in urban hospitals (p=0.047), but hospital mortality did not differ between the two groups. Conclusions: NIV use varied significantly across hospitals, with rural hospitals having higher risk-standardized NIV % rates than urban hospitals. Further research should investigate the exact mechanism of variation in NIV use between rural and urban hospitals.
Introduction: Chronic obstructive pulmonary disease (COPD) is a leading cause of mortality in the USA1 and is associated with high resource utilization.2,3 COPD patients experience exacerbations of the disease (AECOPD), defined as acute worsening of respiratory symptoms according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines.2 Standard therapy for moderate AECOPDs includes bronchodilators, glucocorticoids, and antibiotics. Severe AECOPDs, defined as those requiring an emergency room visit or hospitalization, have a 1-year mortality of 26.2% and are associated with high rates of hospital readmission.4 AECOPD-related hospitalizations are also associated with reduced quality of life and are responsible for 70% of the total direct health-care costs for COPD.5,6 Severe AECOPD may lead to acute respiratory failure requiring supplemental oxygen and/or ventilation support in addition to standard therapy. The most effective treatment for acute respiratory failure due to severe AECOPD is non-invasive ventilation (NIV). NIV reduces the need for invasive mechanical ventilation (IMV) by 64% and mortality by 46%, according to a 2017 meta-analysis.7 Despite the benefit of NIV in severe AECOPDs, NIV use in AECOPD-related hospitalizations varies significantly across the USA.8,9 This variation in NIV use may be due to hospital- and patient-level characteristics, such as the patient’s severity of illness (acuity). Facilities with high-acuity patients may use NIV more frequently. Nevertheless, prior studies examining NIV variation across US hospitals did not account for patient acuity.8,9 Variation in NIV use may also be related to heterogeneity in its adoption across hospitals. High-resource hospitals may use NIV more frequently, as opposed to low-resource facilities, which may often transfer patients who require NIV because they do not feel comfortable treating these patients in case of NIV failure. We hypothesized that NIV use varies significantly across US hospitals, even after adjusting for patient acuity. Our primary objective was to investigate NIV use in AECOPD-related hospitalizations across the Veterans Health Administration (VA) and to assess whether patient characteristics (eg, demographics, residential location, and comorbidities) or hospital characteristics (eg, location, volume, and resources) were associated with NIV use.
7,681
344
[ 2925, 79, 834, 493, 1424, 1412, 74 ]
8
[ "niv", "hospital", "hospitals", "codes", "10", "icd", "standardized", "use", "risk", "risk standardized" ]
[ "respiratory failure aecopd26", "pulmonary disease copd", "hospitalizations aecopd", "chronic obstructive pulmonary", "utilization copd patients" ]
[ "pulmonary disease", "chronic obstructive", "epidemiology", "non-invasive ventilation" ]
[ "Hospital Mortality", "Hospitals", "Humans", "Noninvasive Ventilation", "Pulmonary Disease, Chronic Obstructive", "Respiration, Artificial", "Respiratory Insufficiency", "Retrospective Studies" ]
The Assessment and Outcomes of Crossmatching in Lung Transplantation in Korean Patients.
35668687
In lung transplantation, human leukocyte antigen (HLA) compatibility is not included in the lung allocation score system or considered when placing donor allografts. However, HLA matching may affect the outcomes of lung transplantation. This study evaluated the current assessment status, prevalence, and effects of HLA crossmatching in lung transplantation in Korean patients using nationwide multicenter registry data.
BACKGROUND
Two hundred and twenty patients who received lung transplantation at six tertiary hospitals in South Korea between March 2015 and December 2019 were retrospectively reviewed. Clinical data, including general demographic characteristics, primary diagnosis, and pretransplant status of the recipients and donors registered by the Korean Organ Transplant Registry, were retrospectively analyzed. Survival analysis was performed using the Kaplan-Meier method with log-rank tests.
METHODS
Complement-dependent cytotoxic crossmatch (CDC-XM) was performed in 208 patients (94.5%) and flow cytometric crossmatch (flow-XM) was performed in 125 patients (56.8%). Among them, nine patients (4.1%) showed T cell- and/or B cell-positive crossmatches. The incidences of postoperative complications, including primary graft dysfunction, acute rejection, and chronic allograft dysfunction, in positively crossmatched patients did not differ significantly from those in patients without mismatches. However, Kaplan-Meier analyses showed poorer 1-year survival in patients with a positive crossmatch according to CDC-XM (P < 0.001) and T lymphocyte XM (P = 0.002) than in patients without mismatches.
RESULTS
Positive CDC and T lymphocyte crossmatching results should be considered in the allocation of donor lungs. If the results are unavailable at the time of allocation, they should be taken into account in postoperative management after lung transplantation.
CONCLUSION
[ "Graft Rejection", "Graft Survival", "HLA Antigens", "Histocompatibility Testing", "Humans", "Isoantibodies", "Kidney Transplantation", "Lung Transplantation", "Retrospective Studies" ]
9171353
INTRODUCTION
Crossmatching is the assessment of the immune compatibility of a particular donor and recipient. The association between positive crossmatches and hyperacute rejection was first demonstrated in the 1960s in renal transplantation.1 Over the following decades, a positive crossmatch result came to be considered a contraindication to transplantation because of its devastating postoperative effects.2,3 In particular, complement-dependent cytotoxicity (CDC) results against T lymphocytes are considered an absolute contraindication.3,4 However, the meaning of positive crossmatches has changed with the development of crossmatching methodology and immunosuppression strategies.5,6 Early research suggested that it was crucial to avoid the impact of positive T cell crossmatches on renal transplantation; however, because of their low specificity and sensitivity, recent analyses have found that avoidance is not mandatory.2 A combination of different crossmatching techniques has recently been recommended for solid organ transplantation.7 In lung transplantation, considering the limited number of available donors, the persistent demand for organs, and the risk of increasing wait-list mortality and morbidity, immunological “mismatches” have to be accepted despite the potential for the development of donor-specific antibodies (DSAs), which can trigger antibody-mediated rejection (AMR).8,9,10 A pretransplant crossmatch is not mandatory in the lung allocation system suggested by the International Society for Heart and Lung Transplantation (ISHLT); interpretation and decisions are left to the local protocols of each center, and no definitive guidelines are available.7,8,10 Under the Korean donor allocation system, pretransplant crossmatching is considered mandatory only for renal and pancreas transplantation; it does not play a role in nonrenal transplantation, including lung transplantation. According to a report from the Korean Network for Organ Sharing, there were no lung transplantations with positive crossmatching between March 2014 and February 2015.11 However, the total number of nationwide lung transplantations has almost tripled since 2014 (55 cases in 2014; 64 in 2015; 89 in 2016; 93 in 2017; 92 in 2018; 157 in 2019)12; thus, the incidence and outcomes of positive crossmatching are worthy of analysis. In this study, we aimed to investigate the rate of positive crossmatches in lung transplantation and the associated posttransplant outcomes using a multicenter nationwide cohort. In addition, we analyzed the impact of each crossmatching technique on the crossmatching results to clarify the meaning of a positive crossmatch.
METHODS
Lung transplant cohort: Clinical lung transplantation data were derived from patients who received lung transplantation from deceased donors at one of five tertiary centers that performed more than 10 cases annually between 2010 and 2012 in South Korea, via the Korean Organ Transplant Registry (KOTRY). The KOTRY was established in 2014 and began to organize the lung transplantation registry in 2015.13 We analyzed the data on registered lung transplantations performed between March 2015 and December 2019. Among these patients, ten who did not undergo crossmatching were excluded; 210 patients were finally included in the study. Clinical data, including the general demographic characteristics, primary diagnosis, and pretransplant status of the recipients and donors, were prospectively collected. The details of desensitization protocols, transplant operations, and postoperative follow-up results were also prospectively collected. All clinical data were collected and registered using a web-based report form by the attending physician. Because of the donor shortage, transplantation was performed regardless of the status of DSA screening. Moreover, according to the medical urgency-based allocation system in Korea, the results of human leukocyte antigen (HLA) crossmatching were not regarded as mandatory considerations.14 Most patients received induction therapy with high-dose steroids (methylprednisolone, 500 mg) or an interleukin-2 antagonist followed by a standard triple immunosuppressant regimen consisting of prednisolone, mycophenolate, and tacrolimus, except when there were contraindications to these medications. Pretransplant immunological results did not affect the choice of immunosuppressant regimen. Desensitization protocols, including plasma exchange and intravenous infusion of immunoglobulin, were considered after transplantation in patients with pretransplant DSA and high mean fluorescence intensity (MFI).
HLA crossmatching and other immunologic evaluation: The Korean Organ Donation Agency (KODA) laboratory performed crossmatching for registered lung transplantations. Both CDC crossmatch (CDC-XM) and flow cytometric crossmatch (flow-XM) were performed. Although virtual crossmatch techniques have gained influence in many countries, they are currently unavailable in Korea. For the CDC crossmatch, both T and B lymphocytes were isolated by negative selection using the EasySep HLA Total Lymphocyte Enrichment kit (STEMCELL Technologies Inc., Tukwila, WA, USA). Both the National Institutes of Health and antihuman globulin augmented methods were performed using standard protocols with some minor modifications.11 Cells and duplicate serum dilutions of 1:1 to 1:4, respectively, were incubated at 25°C for 30 minutes for the complement reaction. Cells were stained with a commercial staining reagent (FluoroQuench Stain; One Lambda, Canoga Park, CA, USA) and observed under an inverted fluorescent microscope. A positive crossmatch was recorded when the cytotoxic reaction resulted in more than 11% cell lysis. For flow-XM, both T and B lymphocytes were stained using three-color immunofluorescence staining in a single tube as previously described,14,15 with minor modifications. After incubation of cells and serum at 25°C for 15 minutes, a fluorescent conjugate reaction was performed at 25°C for 20 minutes using titrated goat F(ab’)2 antihuman immunoglobulin G fluorescein isothiocyanate (Jackson Immunoresearch Laboratories, West Grove, PA, USA), anti-CD3 PerCP (Becton Dickinson, San Jose, CA, USA), and anti-CD19 allophycocyanin conjugates (Becton Dickinson). Fluorescence was analyzed using a FACSCalibur Flow Cytometer with an HTS microplate acquisition system (Becton Dickinson). Fluorescence was considered positive when the MFI ratio to the negative reference test was over 2.0 for both T and B lymphocytes. Panel reactive antibody (PRA) class I and II identifications were performed before transplantation with an identification kit (One Lambda, Inc., West Hills, CA, USA, or Gen-Probe Inc., San Diego, CA, USA). PRA over 10% was considered positive, and over 50% was considered highly sensitized.16 Antibodies against donor HLA-A, B, DR, and DQ were defined as DSAs, and the strength of each DSA was quantified based on MFI.
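As a concrete restatement of the positivity thresholds above, here is a minimal sketch; the function and field names are hypothetical, while the thresholds are exactly those stated in the text (>11% lysis for CDC-XM, MFI ratio >2.0 for flow-XM, PRA >10% positive and >50% highly sensitized).

```python
def cdc_xm_positive(percent_lysis: float) -> bool:
    """CDC crossmatch: positive when the cytotoxic reaction exceeds 11% cell lysis."""
    return percent_lysis > 11.0

def flow_xm_positive(mfi_sample: float, mfi_negative_ref: float) -> bool:
    """Flow crossmatch: positive when the MFI ratio to the negative reference
    exceeds 2.0 (applied to T and B lymphocytes separately)."""
    return mfi_sample / mfi_negative_ref > 2.0

def pra_category(pra_percent: float) -> str:
    """PRA: >10% positive, >50% highly sensitized."""
    if pra_percent > 50:
        return "highly sensitized"
    if pra_percent > 10:
        return "positive"
    return "negative"

# Example: a T-cell flow crossmatch with MFI 820 against a negative reference of 310.
print(cdc_xm_positive(14.0), flow_xm_positive(820, 310), pra_category(35))
# -> True True positive
```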
Clinical outcomes: Clinical outcomes, including acute rejection, primary graft dysfunction (PGD), chronic lung allograft dysfunction (CLAD), and mortality, were analyzed. Acute cellular rejection (ACR) was diagnosed according to the ISHLT grading system with a transbronchial biopsy specimen.17 However, not all recipients treated for acute rejection were available for transbronchial biopsy for histopathologic confirmation; a clinical diagnosis of acute rejection was therefore also assumed when allograft dysfunction without another definite cause was responsive to steroid pulse therapy. Although there is no distinct definition of AMR in lung transplantation, AMR was diagnosed per the proposal published by the ISHLT in 2016, based on allograft dysfunction, lung histology, C4d positivity, and DSA without other explainable causes.8,18 PGD was defined as allograft dysfunction of the transplanted lung within the first 72 hours after the procedure and was diagnosed and graded according to the ISHLT criteria.19 Bronchiolitis obliterans syndrome (BOS) and restrictive allograft syndrome (RAS) were the two main phenotypes of CLAD, which represents an irreversible loss of lung function and a major cause of limited long-term survival.
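A hedged sketch of the PGD definition above: the 72-hour window comes from the text, while the grade cutoffs follow the widely used ISHLT convention (grading by PaO2/FiO2 ratio and the presence of diffuse pulmonary opacities). The function names are hypothetical, and the cutoffs are an assumption to be verified against the cited criteria before reuse.

```python
from datetime import datetime, timedelta

def within_pgd_window(transplant_time: datetime, assessment_time: datetime) -> bool:
    """PGD is assessed within the first 72 hours after transplantation."""
    return timedelta(0) <= assessment_time - transplant_time <= timedelta(hours=72)

def pgd_grade(pf_ratio: float, diffuse_opacities: bool) -> int:
    """ISHLT-style PGD grade (assumed convention; verify against reference 19):
    0 = no opacities; 1 = opacities with P/F > 300; 2 = P/F 200-300; 3 = P/F < 200."""
    if not diffuse_opacities:
        return 0
    if pf_ratio > 300:
        return 1
    if pf_ratio >= 200:
        return 2
    return 3

t0 = datetime(2019, 5, 1, 9, 0)
print(within_pgd_window(t0, datetime(2019, 5, 3, 9, 0)))  # True: 48 h post-op
print(pgd_grade(pf_ratio=180, diffuse_opacities=True))    # 3
```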
Statistical analysis

Descriptive statistics were used to summarize the baseline clinical characteristics of the study cohort. Categorical variables are presented as numbers and percentages and were compared using chi-squared or Fisher’s exact tests, as appropriate. Continuous variables were analyzed using Student’s t-test or the Mann-Whitney U test and are presented as the mean ± standard deviation or the median with interquartile range (IQR). Univariable and multivariable regression analyses were used to evaluate factors associated with clinical outcomes, and linear regression was applied to adjust for confounders when confirming independent associations between variables. Survival analysis was performed by the Kaplan-Meier method with log-rank tests. All statistical analyses were performed with SPSS (version 25.0; SPSS Inc., Chicago, IL, USA), and a P value < 0.05 was considered statistically significant.
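The study used SPSS; purely for illustration, the same family of tests can be reproduced with SciPy and lifelines. The sketch below uses invented toy data (not the study’s patient-level data) and mirrors the named tests: chi-squared/Fisher for categorical variables, t-test/Mann-Whitney for continuous variables, and Kaplan-Meier curves compared with a log-rank test.

```python
# Toy data throughout; this is a sketch of the analysis plan, not the study's
# actual computation (which was done in SPSS on patient-level data).
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Categorical variable: 2x2 table of outcome counts by crossmatch group
table = np.array([[3, 6],      # positive XM: outcome yes / no (toy counts)
                  [40, 161]])  # negative XM
chi2, p_chi2, _, _ = stats.chi2_contingency(table)
# Fisher's exact test is preferred when expected cell counts are small
odds_ratio, p_fisher = stats.fisher_exact(table)

# Continuous variable: parametric vs. nonparametric comparison
group_a = np.random.default_rng(0).normal(56, 10, 9)
group_b = np.random.default_rng(1).normal(56, 10, 201)
t_stat, p_t = stats.ttest_ind(group_a, group_b)
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)

# Survival: Kaplan-Meier estimate plus a log-rank comparison of two groups
time_a, event_a = [3, 12, 24, 30], [1, 1, 0, 0]   # months, 1 = death
time_b, event_b = [6, 18, 36, 48], [0, 1, 0, 0]
kmf = KaplanMeierFitter().fit(time_a, event_a, label="positive XM")
lr = logrank_test(time_a, time_b,
                  event_observed_A=event_a, event_observed_B=event_b)
print(p_chi2, p_fisher, p_t, p_u, lr.p_value)
```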
Ethics statement

This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Yonsei University (IRB No. 4-2021-0673). The IRB waived the requirement for obtaining informed consent from the patients.
RESULTS
Baseline characteristics

The baseline characteristics of the patients according to crossmatch results are shown in Table 1. Among the 210 included recipients, nine showed positive crossmatches. The median age was 56.4 years, and 134 patients (63.8%) were male; only one patient in the positive crossmatch group was male. No patient in the study cohort had an Rh-negative blood type. The most common primary diagnosis was idiopathic pulmonary fibrosis (114 patients, 54.3%), followed by connective tissue-related interstitial lung disease (CTD-ILD) (34 patients, 16.2%). The median time on the waiting list was 71.0 days, and 63 patients (30.0%) had received pretransplant extracorporeal membrane oxygenation support. Baseline characteristics did not differ significantly between the positive and negative crossmatch groups.

Values are presented as mean ± standard deviation or number (%).

XM = crossmatch, BMI = body mass index, COPD = chronic obstructive pulmonary disease, CTD-ILD = connective tissue-related interstitial lung disease, BOS = bronchiolitis obliterans syndrome, ECMO = extracorporeal membrane oxygenation, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, HSCT = hematopoietic stem cell transplantation.

Table 2 shows the results of the immunological evaluations, including crossmatching. CDC-XM was performed in 208 patients (99.0%) and flow-XM in 125 patients (59.5%). PRA was measured in 208 patients (99.0%); high levels of class I or class II PRA (> 50%) were detected in 18 of these patients (8.7%). The proportion of patients with high PRA was greater in the positive crossmatch group than in the negative crossmatch group (37.5% vs. 10.7%, P = 0.026).

Values are presented as number (%).

XM = crossmatch, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibody, HLA = human leukocyte antigen.

At the time of lung transplantation, most patients (201 patients, 95.7%) had undergone DSA screening, and 27 (13.4%) had detectable DSA. A higher proportion of patients in the positive crossmatch group had DSA than in the negative crossmatch group (7 patients, 87.5%, vs. 20 patients, 10.4%, P < 0.001). Desensitization was performed in 35 patients (16.7%), four of whom were in the positive crossmatch group.

The characteristics of the nine patients with positive crossmatches are shown in Table 3. Among these patients, the most common indication for lung transplantation was CTD-ILD. CDC-XM and flow-XM revealed positive crossmatches in five and six patients, respectively. DSA was detected in six patients, and three (patients 5, 6, and 7) had an MFI higher than 5,000. Five patients underwent desensitization before lung transplantation. Acute rejection was diagnosed in three patients (patients 3, 4, and 6), two of whom underwent transbronchial lung biopsy (patients 3 and 4). Patient 6 had a positive CDC-XM against both donor T and B lymphocytes, and class I DSA was detected with an MFI over 5,000. This patient was clinically diagnosed with AMR and received aggressive antirejection treatment, including steroid pulse therapy, plasmapheresis, and intravenous immunoglobulin; nevertheless, retransplantation was required on postoperative day 15 because of rejection.
XM = crossmatch, CTD-ILD = connective tissue-related interstitial lung disease, ECMO = extracorporeal membrane oxygenation, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibodies, PGD = primary graft dysfunction, MFI = mean fluorescence intensity, AMR = antibody-mediated rejection, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, IVIG = intravenous immunoglobulin.
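As a quick plausibility check of the DSA comparison reported above (87.5% vs. 10.4%, P < 0.001), the snippet below runs a Fisher’s exact test on the implied counts. The group sizes are inferred from the reported percentages (7 of 8 screened positive-crossmatch patients and 20 of 193 screened negative-crossmatch patients), so treat them as an assumption rather than tabulated study data.

```python
# Counts inferred from the reported percentages among the 201 screened
# patients; illustrative check, not the study's own computation.
from scipy.stats import fisher_exact

table = [[7, 8 - 7],        # positive crossmatch: DSA+, DSA-
         [20, 193 - 20]]    # negative crossmatch: DSA+, DSA-
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, P = {p_value:.2g}")  # P well below 0.001
```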
Operative details and postoperative outcomes

Most patients underwent bilateral lung transplantation (202 patients, 96.2%). Patients presenting with grade 3 PGD at 48 or 72 hours after surgery were regarded as having high-grade PGD. Forty-four patients showed high-grade PGD, with no statistically significant difference between the groups. The incidences of other postoperative complications, including postoperative bleeding, anastomotic dehiscence, pulmonary thromboembolism, pneumonia, and acute kidney injury, also did not differ significantly between the groups (Table 4).

Values are presented as number (%), mean ± standard deviation, or median (interquartile range).

XM = crossmatch, PGD = primary graft dysfunction, ECMO = extracorporeal membrane oxygenation, ICU = intensive care unit, AKI = acute kidney injury.

Acute rejection was diagnosed in 65 patients: 37 had transbronchial lung biopsy-proven ACR, and the remainder were diagnosed clinically. Although the incidence of acute rejection did not differ significantly by crossmatch result, the positive crossmatch group showed a shorter interval between transplantation and the diagnosis of rejection. Five patients had clinical presentations compatible with AMR, but none were confirmed by biopsy or capillary C4d deposition. Seven patients (3.3%) showed physiologic changes consistent with CLAD: BOS was diagnosed in five patients (2.4%) and RAS in three (1.4%). The median time from transplantation to CLAD diagnosis was 648 days (IQR, 488.0–759.0). All patients who developed CLAD were in the negative crossmatch group.
Survival analysis

As shown in Table 4, 55 patients (26.2%) died within 1 year of transplantation. Neither 1-year nor overall mortality differed statistically between the groups (Table 4). Fig. 1 shows the Kaplan-Meier curves for 1-year and overall mortality; survival did not differ significantly by crossmatch status.

We also performed a regression analysis to identify risk factors for poor 1-year survival. A positive CDC crossmatch was the only significant risk factor in the univariable analysis (odds ratio, 11.922; 95% confidence interval, 1.302–19.125; P = 0.018).

We then divided the patients by crossmatching method and repeated the survival analysis in subgroups. When a positive crossmatch was identified by CDC-XM, both post-transplant 1-year and overall mortality were worse than with negative results (P < 0.001 for both) (Fig. 2A and B). By the Kaplan-Meier method, a positive crossmatch by flow-XM was not associated with a significant difference in 1-year or overall mortality (Fig. 2C and D). When T and B lymphocyte results were considered separately, only a positive T lymphocyte crossmatch was related to mortality within 1 year after transplantation (P = 0.002) and to overall mortality (P = 0.006) (Fig. 3).

CDC = complement-dependent cytotoxicity.
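For readers who want to see the shape of the univariable step, the sketch below fits a logistic regression of 1-year death on a binary CDC-crossmatch indicator with statsmodels standing in for SPSS. The data are randomly generated placeholders; the study’s actual estimate (OR 11.922, 95% CI 1.302–19.125) comes from its own patient-level data, which are not reproduced here.

```python
# Toy univariable logistic regression; variable names and data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
cdc_xm_positive = rng.integers(0, 2, 210)   # hypothetical binary predictor
death_1yr = rng.integers(0, 2, 210)         # hypothetical binary outcome

X = sm.add_constant(cdc_xm_positive.astype(float))
fit = sm.Logit(death_1yr, X).fit(disp=False)

or_est = np.exp(fit.params[1])              # exponentiate the coefficient
ci_low, ci_high = np.exp(fit.conf_int()[1]) # exponentiate its 95% CI
print(f"OR = {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```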
[ "Lung transplant cohort", "HLA crossmatching and other immunologic evaluation", "Clinical outcomes", "Statistical analysis", "Ethics statement", "Baseline characteristics", "Operative details and postoperative outcomes", "Survival analysis" ]
[ "Clinical lung transplantation data were derived from patients who received lung transplantation from deceased donors at one of five tertiary centers which performed more than 10 cases annually between 2010 to 2012 in South Korea via the Korean Organ Transplant Registry (KOTRY). The KOTRY was established in 2014 and began to organize the lung transplantation registry in 2015.13 We analyzed the data on registered lung transplantations performed between March 2015 and December 2019. Among these patients, ten patients who did not undergo crossmatching were excluded. Finally, 210 patients were included in the study.\nClinical data, including the general demographic characteristics, primary diagnosis, and pretransplant status of the recipients and donors, were prospectively collected. The details of desensitization protocols, transplant operations, and postoperative follow-up results were also prospectively collected. All clinical data were collected and registered using a web-based report form by the attending physician.\nBecause of donor shortage, transplantation was performed regardless of the status of DSA screening. Moreover, according to a medical urgency-based allocation system in Korea, the results of human leukocyte antigen (HLA) crossmatching were not regarded as mandatory considerations.14 Most patients received induction therapy with high-dose steroids (methylprednisolone, 500 mg) or interleukin-2 antagonist followed by a standard triple immunosuppressant regimen consisting of prednisolone, mycophenolate, and tacrolimus, except when there were contraindications to these medications. Pretransplant immunological results did not affect the choice of immunosuppressant regimen. Desensitization protocols, including plasma exchange and intravenous infusion of immunoglobulin, were considered after transplantation in patients with pretransplant DSA and high mean fluorescence intensity (MFI).", "The Korean Organ Donation Agency (KODA) laboratory performed crossmatching of registered lung transplantations. Both CDC crossmatch (CDC-XM) and flow cytometric crossmatch (flow-XM) were performed. Although virtual crossmatch techniques have gained influence in many countries, they are currently unavailable in Korea.\nFor the CDC crossmatch, both T and B lymphocytes were isolated by negative selection methods using the EasySep HLA Total Lymphocyte Enrichment kit (STEMCELL Technologies Inc., Tukwila, WA, USA). Both the National Institutes of Health and antihuman globulin augmented methods were performed using standard protocols with some minor modifications.11 Cells and duplicate serum dilutions of 1:1 to 1:4, respectively, were incubated at 25°C for 30 minutes for complement reaction. Cells were stained with commercial staining reagent (FluoroQuench Stain; One Lambda, Canoga Park, CA, USA) and observed under an inverted fluorescent microscope. The positive crossmatch results were recorded when the cytotoxic reaction resulted in more than 11% cell lysis.\nFor flow-XM, both T and B lymphocytes were stained using three-color immunofluorescence staining in a single tube as previously described,1415 with minor modifications. 
After the incubation of cells and serum at 25°C for 15 minutes, a fluorescent conjugate reaction was performed at 25°C for 20 minutes using titrated goat F(ab’)2 antihuman immunoglobulin G fluorescein isothiocyanate (Jackson Immunoresearch Laboratories, West Grove, PA, USA), anti-CD3 PerCP (Becton Dickinson, San Jose, CA, USA), and anti-CD19 allophycocyanin conjugates (Becton Dickinson). Fluorescence was analyzed using a FACSCalibur Flow Cytometer with an HTS microplate acquisition system (Becton Dickinson). Fluorescence was considered positive when the MFI ratio to negative reference test was over 2.0 for both T and B lymphocytes.\nPanel reactive antibody (PRA) class I and II identifications were performed before transplantation with the identification kit (One Lambda, Inc., West Hills, CA, USA or Gen-Probe Inc., San Diego, CA, USA). PRA over 10% was considered positive, and over 50% was considered highly sensitized.16 Antibodies against donor HLA-A, B, DR, and DQ were defined DSAs, and the strength of each DSA was quantified based on MFI.", "Clinical outcomes, including acute rejection, primary graft dysfunction (PGD), chronic lung allograft dysfunction (CLAD), and mortality, were analyzed. Acute cellular rejection (ACR) was diagnosed according to the ISHLT grading system with a transbronchial biopsy specimen.17 However, not all treated recipients with acute rejection were available for transbronchial biopsies for histopathologic confirmation. As such, clinical diagnosis of acute rejection was also assumed when allograft dysfunction without definite entities was responsive to steroid pulse therapy. Though there is no distinct definition of AMR in lung transplantation, AMR was diagnosed as per a proposal published by the ISHLT in 2016, including allograft dysfunction, lung histology, C4d+, and DSA without other explainable causes.818\nPGD in lung transplantation was defined as an allograft dysfunction of the transplanted lung within the first 72 hours after the procedure. PGD was diagnosed and graded according to the ISHLT criteria.19 Bronchiolitis obliterans syndrome (BOS) and restrictive allograft syndrome (RAS) were the two main phenotypes of CLAD, which represented the irreversible loss of lung function and a major cause of limiting long-term survival. BOS was identified by the sustained and irreversible reduction in forced expiratory volume in 1 second (FEV1) compared with the post-lung transplant baseline FEV1 in the absence of any other etiologies.20 RAS is defined by restrictive physiology according to a pulmonary function test, positive findings of a radiologic study, the presence of ground-glass opacity, and interstitial fibrosis.21", "Descriptive statistics were used to demonstrate baseline clinical characteristics of the study cohort. For categorical variables, data are shown as numbers and percentages, and chi-squared or Fisher’s exact tests were used where appropriate. For continuous variables, data were analyzed using Student’s t-test or Mann-Whitney U test and presented as the mean ± standard deviation or the median, interquartile range (IQR). Univariable and multivariable regression analysis was used to evaluate the factors associated with clinical outcomes. To confirm the independent association between variables, linear regression was applied for confounders. Survival analysis was performed by the Kaplan-Meier method with log-rank tests. 
All statistical analysis was performed with SPSS (version 25.0; SPSS Inc., Chicago, IL, USA), and a P value < 0.05 was considered statistically significant.", "This study was approved by the Institutional Review Board (IRB) of the Severance Hospital of Yonsei University (IRB No. 4-2021-0673). The IRB waived the requirement for obtaining informed consent from the patients.", "The baseline characteristics of the patients according to the results of crossmatches are shown in Table 1. Among the included 210 recipients, nine patients showed positive crossmatches. The median age was 56.4 years, and 134 patients (63.8%) were male. Only one patient was male in the positive crossmatch group. There was no RH− blood type within the study cohort. The most common primary diagnosis was idiopathic pulmonary fibrosis (114 patients, 54.3%), followed by connective tissue-related interstitial lung disease (CTD-ILD) (34 patients, 16.2%). The median number of days on the waiting list was 71.0, and 63 patients (30.0%) had undergone pretransplant extracorporeal membrane oxygenation support. The baseline characteristics were not significantly different between the positive and negative crossmatch groups.\nValues are presented as mean ± standard deviation or number (%).\nXM = crossmatch, BMI = body mass index, COPD = chronic obstructive pulmonary disease, CTD-ILD = connective tissue-related interstitial lung disease, BOS = bronchiolitis obliterans syndrome, ECMO = extracorporeal membrane oxygenation, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, HSCT = hematopoietic stem cell transplantation.\n\nTable 2 shows the results of immunological evaluations, including crossmatching. CDC-XM was performed in 208 patients (99.0%) and flow-XM was performed in 125 patients (59.5%). PRA was identified in 208 patients (99.0%). Among these patients, high levels of class I or class II PRA (> 50%) were detected in 18 patients (8.7%). The proportion of patients with high PRA was higher in the positive crossmatch group than in the negative crossmatch group (37.5% vs. 10.7%, P = 0.026).\nValues are presented as number (%).\nXM = crossmatch, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibody, HLA = human leukocyte antigen.\nAt the time of lung transplantation, most patients (201 patients, 95.7%) had undergone the DSA screening test. Twenty-seven patients (13.4%) were revealed to have DSA. A higher proportion of the patients in the positive crossmatch group had DSA (7 patients, 87.5%, vs. 20 patients, 10.4%, P < 0.001) compared with that in the negative crossmatch group. Desensitization was performed in 35 patients (16.7%), and four patients in the positive crossmatch group underwent desensitization.\nThe characteristics of nine patients with positive crossmatch are shown in Table 3. Among these patients, the most common etiology for lung transplantation was CTD-ILD. CDC-XM and flow-XM revealed positive crossmatches in five and six patients, respectively. DSA was detected in six patients, and three patients (patient numbers 5, 6, and 7) had an MFI higher than 5,000. Five patients underwent desensitization before lung transplantation. Acute rejection was diagnosed in three patients (patient numbers 3, 4, and 6), and two patients underwent transbronchial lung biopsy (patient numbers 3 and 4). Patient number 6 was positively crossmatched to the donor as observed using CDC-XM with both T land B lymphocytes, and class I DSA was detected with MFI over 5,000. 
The patient was clinically diagnosed with AMR and received aggressive treatments for rejection, including steroid pulse, plasmapheresis, and intravenous infusion of immunoglobulin. However, the patient received retransplantation on postoperative day 15 due to rejection.\nXM = crossmatch, CTD-ILD = connective tissue-related interstitial lung disease, ECMO = extracorporeal membrane oxygenation, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibodies, PGD = primary graft dysfunction, MFI = mean fluorescence intensity, AMR = antibody-mediated rejection, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, IVIG = intravenous immunoglobulin.", "Most patients underwent bilateral lung transplantation (202 patients, 96.2%). According to the PGD grade, patients presenting with grade 3 at postoperative 48 hours or 72 hours were regarded as high-grade PGD. Forty-four patients showed high-grade PGD, and no statistical difference was observed between the groups. The incidences of other postoperative complications, including postoperative bleeding, anastomosis dehiscence, pulmonary thromboembolism, pneumonia, and acute kidney injury, were not statistically distinguishable between the groups (Table 4).\nValues are presented as number (%), mean ± standard deviation or number (interquartile range).\nXM = crossmatch, PGD = primary graft dysfunction, ECMO = extracorporeal membrane oxygenation, ICU = intensive care unit, AKI = acute kidney injury.\nAcute rejection was diagnosed in 65 patients. Transbronchial lung biopsy-proven ACR was diagnosed in 37 patients, and other patients were diagnosed clinically. Though the incidence of acute rejection with regard to crossmatching results was not statistically different between the groups, the positive crossmatch group showed shorter intervals between transplantation and diagnosis of rejection. Five patients had clinical presentations that were compatible with AMR. However, none of these patients were diagnosed using the biopsy specimen or capillary C4d composition. Seven patients (3.3%) showed physiologic changes that were considered as CLAD. BOS was diagnosed in five patients (2.4%), and RAS was diagnosed in three patients (1.4%). The median number of days from transplant to CLAD diagnosis was 648 (IQR, 488.0–759.0). All of the patients showed CLAD were in the negative crossmatch group.", "As shown in Table 4, 55 patients (26.2%) showed 1-year mortality. The 1-year and overall mortality rates were not statistically different between the groups (Table 4). Fig. 1 shows the Kaplan-Meier curves of 1-year and overall mortality rates of the patients. According to Kaplan-Meier analysis, there were no significant differences in the survival rates, regardless of crossmatching status.\nWe also performed a regression analysis to identify the risk factors for poor 1-year survival. Positive CDC crossmatching was the only significant risk factor for poor 1-year survival in the univariable analysis (odds ratio, 11.922, 95% confidence interval, 1.302–19.125, P = 0.018).\nWe further divided the patients based on crossmatching methods and performed survival analysis for the subgroups. When positive crossmatching was confirmed by CDC-XM, the post-transplant 1-year and overall mortality rates were poorer than the negative crossmatching results (P < 0.001 and P < 0.001, respectively) (Fig. 2A and B). 
According to the Kaplan-Meier method, positive crossmatching by the flow-XM method did not significantly differ in the 1-year and overall morality (Fig. 2C and D). When considering the detection of T and B lymphocytes individually, only positive crossmatching by T lymphocytes was related to mortality within 1-year after transplantation (P = 0.002) and overall mortality (P = 0.006) (Fig. 3).\nCDC = complement-dependent cytotoxicity." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Lung transplant cohort", "HLA crossmatching and other immunologic evaluation", "Clinical outcomes", "Statistical analysis", "Ethics statement", "RESULTS", "Baseline characteristics", "Operative details and postoperative outcomes", "Survival analysis", "DISCUSSION" ]
[ "Crossmatching is the assessment of the immune compatibility of a particular donor and the recipient. The association between positive crossmatches and hyperacute rejection was first demonstrated in the 1960s with renal transplantation.1 Over the following decades, a positive crossmatch result began to be considered a contraindication to transplantation because of its devastating postoperative effects.23 In particular, complement-dependent cytotoxicity (CDC) results against T lymphocytes are considered an absolute contraindication.34 However, the meaning of positive crossmatches has changed with the development of crossmatching methodology and immunosuppression strategies.56 Early research suggested that it was crucial to avoid the impact of positive T cell crossmatches on renal transplantation; however, because of their low specificity and sensitivity, recent analyses have found that avoidance is not mandatory.2 A combination of different crossmatching techniques has recently been recommended for solid organ transplantations.7\nIn lung transplantation, considering the limited number of available donors, persistent demand for required organs, and risk of increasing incidences of wait-list mortality and morbidity, immunological “mismatches” have to be accepted despite the potential for the development of donor-specific antibodies (DSAs), which can trigger antibody-mediated rejection (AMR).8910 A pretransplant crossmatch is not mandatory in the lung allocation system suggested by the International Society of Heart and Lung Transplantation (ISHLT); interpretation and decisions are left to local protocols of each center and no definitive guidelines are available.7810\nAccording to the Korea donor allocation system, pretransplant crossmatching is only considered mandatory for renal and pancreas transplantation; the system does not play a role in nonrenal transplantation, including lung transplantation. According to a report from Korean Network for Organ Sharing, there was no lung transplantation with positive crossmatching between March 2014 to February 2015.11 However, the total number of nationwide lung transplantations has almost tripled since 2014 (55 cases in 2014; 64 cases in 2015; 89 cases in 2016; 93 cases in 2017; 92 cases in 2018; 157 cases in 2019)12; thus, the incidences and outcomes of positive crossmatching are worthy of analysis. In this study, we aimed to investigate the positive crossmatch rate in lung transplantation and their posttransplant outcomes using a multicenter nationwide cohort. In addition, we analyzed the impact of each crossmatch technique on the crossmatching results to clarify the meaning of positive crossmatching.", "Lung transplant cohort Clinical lung transplantation data were derived from patients who received lung transplantation from deceased donors at one of five tertiary centers which performed more than 10 cases annually between 2010 to 2012 in South Korea via the Korean Organ Transplant Registry (KOTRY). The KOTRY was established in 2014 and began to organize the lung transplantation registry in 2015.13 We analyzed the data on registered lung transplantations performed between March 2015 and December 2019. Among these patients, ten patients who did not undergo crossmatching were excluded. Finally, 210 patients were included in the study.\nClinical data, including the general demographic characteristics, primary diagnosis, and pretransplant status of the recipients and donors, were prospectively collected. 
The details of desensitization protocols, transplant operations, and postoperative follow-up results were also prospectively collected. All clinical data were collected and registered using a web-based report form by the attending physician.\nBecause of donor shortage, transplantation was performed regardless of the status of DSA screening. Moreover, according to a medical urgency-based allocation system in Korea, the results of human leukocyte antigen (HLA) crossmatching were not regarded as mandatory considerations.14 Most patients received induction therapy with high-dose steroids (methylprednisolone, 500 mg) or interleukin-2 antagonist followed by a standard triple immunosuppressant regimen consisting of prednisolone, mycophenolate, and tacrolimus, except when there were contraindications to these medications. Pretransplant immunological results did not affect the choice of immunosuppressant regimen. Desensitization protocols, including plasma exchange and intravenous infusion of immunoglobulin, were considered after transplantation in patients with pretransplant DSA and high mean fluorescence intensity (MFI).\nClinical lung transplantation data were derived from patients who received lung transplantation from deceased donors at one of five tertiary centers which performed more than 10 cases annually between 2010 to 2012 in South Korea via the Korean Organ Transplant Registry (KOTRY). The KOTRY was established in 2014 and began to organize the lung transplantation registry in 2015.13 We analyzed the data on registered lung transplantations performed between March 2015 and December 2019. Among these patients, ten patients who did not undergo crossmatching were excluded. Finally, 210 patients were included in the study.\nClinical data, including the general demographic characteristics, primary diagnosis, and pretransplant status of the recipients and donors, were prospectively collected. The details of desensitization protocols, transplant operations, and postoperative follow-up results were also prospectively collected. All clinical data were collected and registered using a web-based report form by the attending physician.\nBecause of donor shortage, transplantation was performed regardless of the status of DSA screening. Moreover, according to a medical urgency-based allocation system in Korea, the results of human leukocyte antigen (HLA) crossmatching were not regarded as mandatory considerations.14 Most patients received induction therapy with high-dose steroids (methylprednisolone, 500 mg) or interleukin-2 antagonist followed by a standard triple immunosuppressant regimen consisting of prednisolone, mycophenolate, and tacrolimus, except when there were contraindications to these medications. Pretransplant immunological results did not affect the choice of immunosuppressant regimen. Desensitization protocols, including plasma exchange and intravenous infusion of immunoglobulin, were considered after transplantation in patients with pretransplant DSA and high mean fluorescence intensity (MFI).\nHLA crossmatching and other immunologic evaluation The Korean Organ Donation Agency (KODA) laboratory performed crossmatching of registered lung transplantations. Both CDC crossmatch (CDC-XM) and flow cytometric crossmatch (flow-XM) were performed. 
Although virtual crossmatch techniques have gained influence in many countries, they are currently unavailable in Korea.\nFor the CDC crossmatch, both T and B lymphocytes were isolated by negative selection methods using the EasySep HLA Total Lymphocyte Enrichment kit (STEMCELL Technologies Inc., Tukwila, WA, USA). Both the National Institutes of Health and antihuman globulin augmented methods were performed using standard protocols with some minor modifications.11 Cells and duplicate serum dilutions of 1:1 to 1:4, respectively, were incubated at 25°C for 30 minutes for complement reaction. Cells were stained with commercial staining reagent (FluoroQuench Stain; One Lambda, Canoga Park, CA, USA) and observed under an inverted fluorescent microscope. The positive crossmatch results were recorded when the cytotoxic reaction resulted in more than 11% cell lysis.\nFor flow-XM, both T and B lymphocytes were stained using three-color immunofluorescence staining in a single tube as previously described,1415 with minor modifications. After the incubation of cells and serum at 25°C for 15 minutes, a fluorescent conjugate reaction was performed at 25°C for 20 minutes using titrated goat F(ab’)2 antihuman immunoglobulin G fluorescein isothiocyanate (Jackson Immunoresearch Laboratories, West Grove, PA, USA), anti-CD3 PerCP (Becton Dickinson, San Jose, CA, USA), and anti-CD19 allophycocyanin conjugates (Becton Dickinson). Fluorescence was analyzed using a FACSCalibur Flow Cytometer with an HTS microplate acquisition system (Becton Dickinson). Fluorescence was considered positive when the MFI ratio to negative reference test was over 2.0 for both T and B lymphocytes.\nPanel reactive antibody (PRA) class I and II identifications were performed before transplantation with the identification kit (One Lambda, Inc., West Hills, CA, USA or Gen-Probe Inc., San Diego, CA, USA). PRA over 10% was considered positive, and over 50% was considered highly sensitized.16 Antibodies against donor HLA-A, B, DR, and DQ were defined DSAs, and the strength of each DSA was quantified based on MFI.\nThe Korean Organ Donation Agency (KODA) laboratory performed crossmatching of registered lung transplantations. Both CDC crossmatch (CDC-XM) and flow cytometric crossmatch (flow-XM) were performed. Although virtual crossmatch techniques have gained influence in many countries, they are currently unavailable in Korea.\nFor the CDC crossmatch, both T and B lymphocytes were isolated by negative selection methods using the EasySep HLA Total Lymphocyte Enrichment kit (STEMCELL Technologies Inc., Tukwila, WA, USA). Both the National Institutes of Health and antihuman globulin augmented methods were performed using standard protocols with some minor modifications.11 Cells and duplicate serum dilutions of 1:1 to 1:4, respectively, were incubated at 25°C for 30 minutes for complement reaction. Cells were stained with commercial staining reagent (FluoroQuench Stain; One Lambda, Canoga Park, CA, USA) and observed under an inverted fluorescent microscope. The positive crossmatch results were recorded when the cytotoxic reaction resulted in more than 11% cell lysis.\nFor flow-XM, both T and B lymphocytes were stained using three-color immunofluorescence staining in a single tube as previously described,1415 with minor modifications. 
After the incubation of cells and serum at 25°C for 15 minutes, a fluorescent conjugate reaction was performed at 25°C for 20 minutes using titrated goat F(ab’)2 antihuman immunoglobulin G fluorescein isothiocyanate (Jackson Immunoresearch Laboratories, West Grove, PA, USA), anti-CD3 PerCP (Becton Dickinson, San Jose, CA, USA), and anti-CD19 allophycocyanin conjugates (Becton Dickinson). Fluorescence was analyzed using a FACSCalibur Flow Cytometer with an HTS microplate acquisition system (Becton Dickinson). Fluorescence was considered positive when the MFI ratio to negative reference test was over 2.0 for both T and B lymphocytes.\nPanel reactive antibody (PRA) class I and II identifications were performed before transplantation with the identification kit (One Lambda, Inc., West Hills, CA, USA or Gen-Probe Inc., San Diego, CA, USA). PRA over 10% was considered positive, and over 50% was considered highly sensitized.16 Antibodies against donor HLA-A, B, DR, and DQ were defined DSAs, and the strength of each DSA was quantified based on MFI.\nClinical outcomes Clinical outcomes, including acute rejection, primary graft dysfunction (PGD), chronic lung allograft dysfunction (CLAD), and mortality, were analyzed. Acute cellular rejection (ACR) was diagnosed according to the ISHLT grading system with a transbronchial biopsy specimen.17 However, not all treated recipients with acute rejection were available for transbronchial biopsies for histopathologic confirmation. As such, clinical diagnosis of acute rejection was also assumed when allograft dysfunction without definite entities was responsive to steroid pulse therapy. Though there is no distinct definition of AMR in lung transplantation, AMR was diagnosed as per a proposal published by the ISHLT in 2016, including allograft dysfunction, lung histology, C4d+, and DSA without other explainable causes.818\nPGD in lung transplantation was defined as an allograft dysfunction of the transplanted lung within the first 72 hours after the procedure. PGD was diagnosed and graded according to the ISHLT criteria.19 Bronchiolitis obliterans syndrome (BOS) and restrictive allograft syndrome (RAS) were the two main phenotypes of CLAD, which represented the irreversible loss of lung function and a major cause of limiting long-term survival. BOS was identified by the sustained and irreversible reduction in forced expiratory volume in 1 second (FEV1) compared with the post-lung transplant baseline FEV1 in the absence of any other etiologies.20 RAS is defined by restrictive physiology according to a pulmonary function test, positive findings of a radiologic study, the presence of ground-glass opacity, and interstitial fibrosis.21\nClinical outcomes, including acute rejection, primary graft dysfunction (PGD), chronic lung allograft dysfunction (CLAD), and mortality, were analyzed. Acute cellular rejection (ACR) was diagnosed according to the ISHLT grading system with a transbronchial biopsy specimen.17 However, not all treated recipients with acute rejection were available for transbronchial biopsies for histopathologic confirmation. As such, clinical diagnosis of acute rejection was also assumed when allograft dysfunction without definite entities was responsive to steroid pulse therapy. 
Though there is no distinct definition of AMR in lung transplantation, AMR was diagnosed as per a proposal published by the ISHLT in 2016, including allograft dysfunction, lung histology, C4d+, and DSA without other explainable causes.818\nPGD in lung transplantation was defined as an allograft dysfunction of the transplanted lung within the first 72 hours after the procedure. PGD was diagnosed and graded according to the ISHLT criteria.19 Bronchiolitis obliterans syndrome (BOS) and restrictive allograft syndrome (RAS) were the two main phenotypes of CLAD, which represented the irreversible loss of lung function and a major cause of limiting long-term survival. BOS was identified by the sustained and irreversible reduction in forced expiratory volume in 1 second (FEV1) compared with the post-lung transplant baseline FEV1 in the absence of any other etiologies.20 RAS is defined by restrictive physiology according to a pulmonary function test, positive findings of a radiologic study, the presence of ground-glass opacity, and interstitial fibrosis.21\nStatistical analysis Descriptive statistics were used to demonstrate baseline clinical characteristics of the study cohort. For categorical variables, data are shown as numbers and percentages, and chi-squared or Fisher’s exact tests were used where appropriate. For continuous variables, data were analyzed using Student’s t-test or Mann-Whitney U test and presented as the mean ± standard deviation or the median, interquartile range (IQR). Univariable and multivariable regression analysis was used to evaluate the factors associated with clinical outcomes. To confirm the independent association between variables, linear regression was applied for confounders. Survival analysis was performed by the Kaplan-Meier method with log-rank tests. All statistical analysis was performed with SPSS (version 25.0; SPSS Inc., Chicago, IL, USA), and a P value < 0.05 was considered statistically significant.\nDescriptive statistics were used to demonstrate baseline clinical characteristics of the study cohort. For categorical variables, data are shown as numbers and percentages, and chi-squared or Fisher’s exact tests were used where appropriate. For continuous variables, data were analyzed using Student’s t-test or Mann-Whitney U test and presented as the mean ± standard deviation or the median, interquartile range (IQR). Univariable and multivariable regression analysis was used to evaluate the factors associated with clinical outcomes. To confirm the independent association between variables, linear regression was applied for confounders. Survival analysis was performed by the Kaplan-Meier method with log-rank tests. All statistical analysis was performed with SPSS (version 25.0; SPSS Inc., Chicago, IL, USA), and a P value < 0.05 was considered statistically significant.\nEthics statement This study was approved by the Institutional Review Board (IRB) of the Severance Hospital of Yonsei University (IRB No. 4-2021-0673). The IRB waived the requirement for obtaining informed consent from the patients.\nThis study was approved by the Institutional Review Board (IRB) of the Severance Hospital of Yonsei University (IRB No. 4-2021-0673). 
The IRB waived the requirement for obtaining informed consent from the patients.", "Clinical lung transplantation data were derived from patients who received lung transplantation from deceased donors at one of five tertiary centers which performed more than 10 cases annually between 2010 to 2012 in South Korea via the Korean Organ Transplant Registry (KOTRY). The KOTRY was established in 2014 and began to organize the lung transplantation registry in 2015.13 We analyzed the data on registered lung transplantations performed between March 2015 and December 2019. Among these patients, ten patients who did not undergo crossmatching were excluded. Finally, 210 patients were included in the study.\nClinical data, including the general demographic characteristics, primary diagnosis, and pretransplant status of the recipients and donors, were prospectively collected. The details of desensitization protocols, transplant operations, and postoperative follow-up results were also prospectively collected. All clinical data were collected and registered using a web-based report form by the attending physician.\nBecause of donor shortage, transplantation was performed regardless of the status of DSA screening. Moreover, according to a medical urgency-based allocation system in Korea, the results of human leukocyte antigen (HLA) crossmatching were not regarded as mandatory considerations.14 Most patients received induction therapy with high-dose steroids (methylprednisolone, 500 mg) or interleukin-2 antagonist followed by a standard triple immunosuppressant regimen consisting of prednisolone, mycophenolate, and tacrolimus, except when there were contraindications to these medications. Pretransplant immunological results did not affect the choice of immunosuppressant regimen. Desensitization protocols, including plasma exchange and intravenous infusion of immunoglobulin, were considered after transplantation in patients with pretransplant DSA and high mean fluorescence intensity (MFI).", "The Korean Organ Donation Agency (KODA) laboratory performed crossmatching of registered lung transplantations. Both CDC crossmatch (CDC-XM) and flow cytometric crossmatch (flow-XM) were performed. Although virtual crossmatch techniques have gained influence in many countries, they are currently unavailable in Korea.\nFor the CDC crossmatch, both T and B lymphocytes were isolated by negative selection methods using the EasySep HLA Total Lymphocyte Enrichment kit (STEMCELL Technologies Inc., Tukwila, WA, USA). Both the National Institutes of Health and antihuman globulin augmented methods were performed using standard protocols with some minor modifications.11 Cells and duplicate serum dilutions of 1:1 to 1:4, respectively, were incubated at 25°C for 30 minutes for complement reaction. Cells were stained with commercial staining reagent (FluoroQuench Stain; One Lambda, Canoga Park, CA, USA) and observed under an inverted fluorescent microscope. The positive crossmatch results were recorded when the cytotoxic reaction resulted in more than 11% cell lysis.\nFor flow-XM, both T and B lymphocytes were stained using three-color immunofluorescence staining in a single tube as previously described,1415 with minor modifications. 
After the incubation of cells and serum at 25°C for 15 minutes, a fluorescent conjugate reaction was performed at 25°C for 20 minutes using titrated goat F(ab’)2 antihuman immunoglobulin G fluorescein isothiocyanate (Jackson Immunoresearch Laboratories, West Grove, PA, USA), anti-CD3 PerCP (Becton Dickinson, San Jose, CA, USA), and anti-CD19 allophycocyanin conjugates (Becton Dickinson). Fluorescence was analyzed using a FACSCalibur Flow Cytometer with an HTS microplate acquisition system (Becton Dickinson). Fluorescence was considered positive when the MFI ratio to negative reference test was over 2.0 for both T and B lymphocytes.\nPanel reactive antibody (PRA) class I and II identifications were performed before transplantation with the identification kit (One Lambda, Inc., West Hills, CA, USA or Gen-Probe Inc., San Diego, CA, USA). PRA over 10% was considered positive, and over 50% was considered highly sensitized.16 Antibodies against donor HLA-A, B, DR, and DQ were defined DSAs, and the strength of each DSA was quantified based on MFI.", "Clinical outcomes, including acute rejection, primary graft dysfunction (PGD), chronic lung allograft dysfunction (CLAD), and mortality, were analyzed. Acute cellular rejection (ACR) was diagnosed according to the ISHLT grading system with a transbronchial biopsy specimen.17 However, not all treated recipients with acute rejection were available for transbronchial biopsies for histopathologic confirmation. As such, clinical diagnosis of acute rejection was also assumed when allograft dysfunction without definite entities was responsive to steroid pulse therapy. Though there is no distinct definition of AMR in lung transplantation, AMR was diagnosed as per a proposal published by the ISHLT in 2016, including allograft dysfunction, lung histology, C4d+, and DSA without other explainable causes.818\nPGD in lung transplantation was defined as an allograft dysfunction of the transplanted lung within the first 72 hours after the procedure. PGD was diagnosed and graded according to the ISHLT criteria.19 Bronchiolitis obliterans syndrome (BOS) and restrictive allograft syndrome (RAS) were the two main phenotypes of CLAD, which represented the irreversible loss of lung function and a major cause of limiting long-term survival. BOS was identified by the sustained and irreversible reduction in forced expiratory volume in 1 second (FEV1) compared with the post-lung transplant baseline FEV1 in the absence of any other etiologies.20 RAS is defined by restrictive physiology according to a pulmonary function test, positive findings of a radiologic study, the presence of ground-glass opacity, and interstitial fibrosis.21", "Descriptive statistics were used to demonstrate baseline clinical characteristics of the study cohort. For categorical variables, data are shown as numbers and percentages, and chi-squared or Fisher’s exact tests were used where appropriate. For continuous variables, data were analyzed using Student’s t-test or Mann-Whitney U test and presented as the mean ± standard deviation or the median, interquartile range (IQR). Univariable and multivariable regression analysis was used to evaluate the factors associated with clinical outcomes. To confirm the independent association between variables, linear regression was applied for confounders. Survival analysis was performed by the Kaplan-Meier method with log-rank tests. 
All statistical analysis was performed with SPSS (version 25.0; SPSS Inc., Chicago, IL, USA), and a P value < 0.05 was considered statistically significant.", "This study was approved by the Institutional Review Board (IRB) of the Severance Hospital of Yonsei University (IRB No. 4-2021-0673). The IRB waived the requirement for obtaining informed consent from the patients.", "Baseline characteristics The baseline characteristics of the patients according to the results of crossmatches are shown in Table 1. Among the included 210 recipients, nine patients showed positive crossmatches. The median age was 56.4 years, and 134 patients (63.8%) were male. Only one patient was male in the positive crossmatch group. There was no RH− blood type within the study cohort. The most common primary diagnosis was idiopathic pulmonary fibrosis (114 patients, 54.3%), followed by connective tissue-related interstitial lung disease (CTD-ILD) (34 patients, 16.2%). The median number of days on the waiting list was 71.0, and 63 patients (30.0%) had undergone pretransplant extracorporeal membrane oxygenation support. The baseline characteristics were not significantly different between the positive and negative crossmatch groups.\nValues are presented as mean ± standard deviation or number (%).\nXM = crossmatch, BMI = body mass index, COPD = chronic obstructive pulmonary disease, CTD-ILD = connective tissue-related interstitial lung disease, BOS = bronchiolitis obliterans syndrome, ECMO = extracorporeal membrane oxygenation, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, HSCT = hematopoietic stem cell transplantation.\n\nTable 2 shows the results of immunological evaluations, including crossmatching. CDC-XM was performed in 208 patients (99.0%) and flow-XM was performed in 125 patients (59.5%). PRA was identified in 208 patients (99.0%). Among these patients, high levels of class I or class II PRA (> 50%) were detected in 18 patients (8.7%). The proportion of patients with high PRA was higher in the positive crossmatch group than in the negative crossmatch group (37.5% vs. 10.7%, P = 0.026).\nValues are presented as number (%).\nXM = crossmatch, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibody, HLA = human leukocyte antigen.\nAt the time of lung transplantation, most patients (201 patients, 95.7%) had undergone the DSA screening test. Twenty-seven patients (13.4%) were revealed to have DSA. A higher proportion of the patients in the positive crossmatch group had DSA (7 patients, 87.5%, vs. 20 patients, 10.4%, P < 0.001) compared with that in the negative crossmatch group. Desensitization was performed in 35 patients (16.7%), and four patients in the positive crossmatch group underwent desensitization.\nThe characteristics of nine patients with positive crossmatch are shown in Table 3. Among these patients, the most common etiology for lung transplantation was CTD-ILD. CDC-XM and flow-XM revealed positive crossmatches in five and six patients, respectively. DSA was detected in six patients, and three patients (patient numbers 5, 6, and 7) had an MFI higher than 5,000. Five patients underwent desensitization before lung transplantation. Acute rejection was diagnosed in three patients (patient numbers 3, 4, and 6), and two patients underwent transbronchial lung biopsy (patient numbers 3 and 4). 
Statistical analysis

Descriptive statistics were used to summarize the baseline clinical characteristics of the study cohort. Categorical variables are shown as numbers and percentages and were compared with chi-squared or Fisher’s exact tests, as appropriate. Continuous variables were analyzed using Student’s t-test or the Mann-Whitney U test and are presented as the mean ± standard deviation or as the median with interquartile range (IQR). Univariable and multivariable regression analyses were used to evaluate the factors associated with clinical outcomes. To confirm independent associations between variables, linear regression was applied to adjust for confounders. Survival analysis was performed by the Kaplan-Meier method with log-rank tests. All statistical analyses were performed with SPSS (version 25.0; SPSS Inc., Chicago, IL, USA), and a P value < 0.05 was considered statistically significant.
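The study itself used SPSS, but the same workflow can be sketched with open-source tools. The cohort below is invented purely to make the snippet runnable; it assumes the pandas, SciPy, and lifelines packages.

    import pandas as pd
    from scipy import stats
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Toy cohort: one row per recipient (values invented for illustration).
    df = pd.DataFrame({
        "xm_positive": [True, True, False, False, False, False],
        "age":         [58.0, 61.0, 55.0, 49.0, 63.0, 57.0],
        "time_days":   [40, 300, 365, 365, 120, 365],
        "died":        [1, 0, 0, 0, 1, 0],
    })
    pos, neg = df[df.xm_positive], df[~df.xm_positive]

    # Continuous baseline variable: Student's t-test, with the
    # Mann-Whitney U test as the nonparametric alternative.
    t_stat, p_t = stats.ttest_ind(pos.age, neg.age)
    u_stat, p_u = stats.mannwhitneyu(pos.age, neg.age)

    # Categorical variable: 2x2 table with Fisher's exact test.
    odds, p_fisher = stats.fisher_exact(pd.crosstab(df.xm_positive, df.died))

    # Survival: Kaplan-Meier estimate per group, compared by log-rank test.
    km = KaplanMeierFitter().fit(pos.time_days, event_observed=pos.died)
    lr = logrank_test(pos.time_days, neg.time_days,
                      event_observed_A=pos.died, event_observed_B=neg.died)
    print(p_t, p_u, p_fisher, lr.p_value)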
Ethics statement

This study was approved by the Institutional Review Board (IRB) of Severance Hospital, Yonsei University (IRB No. 4-2021-0673). The IRB waived the requirement for obtaining informed consent from the patients.

RESULTS

Baseline characteristics

The baseline characteristics of the patients according to the results of crossmatches are shown in Table 1. Among the 210 included recipients, nine patients showed positive crossmatches. The median age was 56.4 years, and 134 patients (63.8%) were male. Only one patient in the positive crossmatch group was male. There was no Rh− blood type within the study cohort. The most common primary diagnosis was idiopathic pulmonary fibrosis (114 patients, 54.3%), followed by connective tissue disease-related interstitial lung disease (CTD-ILD) (34 patients, 16.2%). The median number of days on the waiting list was 71.0, and 63 patients (30.0%) had undergone pretransplant extracorporeal membrane oxygenation support. The baseline characteristics were not significantly different between the positive and negative crossmatch groups.

Values are presented as mean ± standard deviation or number (%). XM = crossmatch, BMI = body mass index, COPD = chronic obstructive pulmonary disease, CTD-ILD = connective tissue disease-related interstitial lung disease, BOS = bronchiolitis obliterans syndrome, ECMO = extracorporeal membrane oxygenation, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, HSCT = hematopoietic stem cell transplantation.
Table 2 shows the results of the immunological evaluations, including crossmatching. CDC-XM was performed in 208 patients (99.0%), and flow-XM was performed in 125 patients (59.5%). PRA was identified in 208 patients (99.0%). Among these patients, high levels of class I or class II PRA (> 50%) were detected in 18 patients (8.7%). The proportion of patients with high PRA was higher in the positive crossmatch group than in the negative crossmatch group (37.5% vs. 10.7%, P = 0.026).

Values are presented as number (%). XM = crossmatch, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibody, HLA = human leukocyte antigen.

At the time of lung transplantation, most patients (201 patients, 95.7%) had undergone DSA screening. Twenty-seven patients (13.4%) were revealed to have DSA. A higher proportion of patients in the positive crossmatch group had DSA than in the negative crossmatch group (7 patients, 87.5%, vs. 20 patients, 10.4%, P < 0.001). Desensitization was performed in 35 patients (16.7%), and four patients in the positive crossmatch group underwent desensitization.
The characteristics of the nine patients with positive crossmatches are shown in Table 3. Among these patients, the most common etiology for lung transplantation was CTD-ILD. CDC-XM and flow-XM revealed positive crossmatches in five and six patients, respectively. DSA was detected in six patients, and three patients (patient numbers 5, 6, and 7) had an MFI higher than 5,000. Five patients underwent desensitization before lung transplantation. Acute rejection was diagnosed in three patients (patient numbers 3, 4, and 6), and two of them underwent transbronchial lung biopsy (patient numbers 3 and 4). Patient number 6 was positively crossmatched to the donor by CDC-XM with both T and B lymphocytes, and class I DSA was detected with an MFI over 5,000. The patient was clinically diagnosed with AMR and received aggressive treatment for rejection, including steroid pulse therapy, plasmapheresis, and intravenous infusion of immunoglobulin. However, the patient underwent retransplantation on postoperative day 15 due to rejection.

XM = crossmatch, CTD-ILD = connective tissue disease-related interstitial lung disease, ECMO = extracorporeal membrane oxygenation, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibodies, PGD = primary graft dysfunction, MFI = mean fluorescence intensity, AMR = antibody-mediated rejection, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, IVIG = intravenous immunoglobulin.
Operative details and postoperative outcomes

Most patients underwent bilateral lung transplantation (202 patients, 96.2%). According to the PGD grade, patients presenting with grade 3 at postoperative 48 hours or 72 hours were regarded as having high-grade PGD. Forty-four patients showed high-grade PGD, and no statistical difference was observed between the groups. The incidences of other postoperative complications, including postoperative bleeding, anastomotic dehiscence, pulmonary thromboembolism, pneumonia, and acute kidney injury, were not statistically distinguishable between the groups (Table 4).

Values are presented as number (%), mean ± standard deviation, or median (interquartile range). XM = crossmatch, PGD = primary graft dysfunction, ECMO = extracorporeal membrane oxygenation, ICU = intensive care unit, AKI = acute kidney injury.

Acute rejection was diagnosed in 65 patients. Transbronchial lung biopsy-proven ACR was diagnosed in 37 patients, and the other patients were diagnosed clinically. Though the incidence of acute rejection did not differ statistically between the crossmatch groups, the positive crossmatch group showed a shorter interval between transplantation and the diagnosis of rejection. Five patients had clinical presentations compatible with AMR; however, none of these diagnoses were confirmed by biopsy specimens or capillary C4d deposition. Seven patients (3.3%) showed physiologic changes that were considered CLAD. BOS was diagnosed in five patients (2.4%), and RAS was diagnosed in three patients (1.4%). The median number of days from transplant to CLAD diagnosis was 648 (IQR, 488.0–759.0). All of the patients who developed CLAD were in the negative crossmatch group.
Survival analysis

As shown in Table 4, 55 patients (26.2%) died within 1 year. The 1-year and overall mortality rates were not statistically different between the groups (Table 4). Fig. 1 shows the Kaplan-Meier curves of the 1-year and overall mortality of the patients. According to the Kaplan-Meier analysis, there were no significant differences in survival rates regardless of crossmatching status.

We also performed a regression analysis to identify risk factors for poor 1-year survival. Positive CDC crossmatching was the only significant risk factor for poor 1-year survival in the univariable analysis (odds ratio, 11.922; 95% confidence interval, 1.302–19.125; P = 0.018).
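As a rough illustration of how such a univariable odds ratio is typically obtained, the following sketch uses statsmodels (not the software used in the study); the data are invented, so the printed numbers will not reproduce the values reported above.

    import numpy as np
    import statsmodels.api as sm

    # Toy data: positive CDC crossmatch (exposure) vs. death within 1 year.
    cdc_positive = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
    died_1yr     = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])

    X = sm.add_constant(cdc_positive)        # intercept plus one predictor
    fit = sm.Logit(died_1yr, X).fit(disp=0)  # univariable logistic model

    odds_ratio = np.exp(fit.params[1])           # exponentiated coefficient
    ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the OR scale
    print(odds_ratio, (ci_low, ci_high), fit.pvalues[1])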
We further divided the patients based on crossmatching methods and performed survival analyses for the subgroups. When positive crossmatching was confirmed by CDC-XM, the post-transplant 1-year and overall mortality rates were poorer than with negative crossmatching results (P < 0.001 and P < 0.001, respectively) (Fig. 2A and B). According to the Kaplan-Meier method, positive crossmatching by flow-XM did not significantly affect 1-year or overall mortality (Fig. 2C and D). When the detection of T and B lymphocytes was considered individually, only positive crossmatching by T lymphocytes was related to mortality within 1 year after transplantation (P = 0.002) and overall mortality (P = 0.006) (Fig. 3). CDC = complement-dependent cytotoxicity.

DISCUSSION

We conducted a multicenter nationwide study to analyze the prevalence of positive crossmatching and the outcomes of positively crossmatched patients in lung transplantation. Although the incidence of positive crossmatching was low (4.3%), positive CDC-XM and T lymphocyte crossmatching results were related to poor 1-year and overall survival.

It is controversial to consider HLA matching as an allocation factor. The ISHLT introduced the lung allocation score (LAS) in 2005 and revised it in 2015 and 2020.22 The LAS includes factors related to waiting-list mortality and post-transplant 1-year survival. HLA matching is not considered in the LAS system for several reasons. First, as lung transplantation is mostly performed under emergency settings, it is not always possible to conduct donor HLA typing prior to allocation. Furthermore, there is also a possibility of prolonging allograft ischemic time while waiting for crossmatching results.8,10 Brugière et al.23 analyzed the relative impact of mismatching and graft ischemic time; an allograft ischemic time of over 330 minutes worsened patient survival, even with well-matched donor and recipient pairs. Second, because of the shortage of possible donor organs and the extensive complexity of the HLA system, the probability of finding a matched recipient is extremely low.10 Furthermore, nonimmunologic factors are considered more important for 1-year survival. Early mortality resulting from ischemia and reperfusion injury has been shown to decrease after the introduction of low-potassium dextran lung preservation solution.24 Advances in immunosuppressive regimens and enhanced combinations of antibiotics may also be responsible for improved management of unstable hemodynamics during the early post-transplant period.25 After these nonimmunological factors for early graft dysfunction and mortality have been alleviated, the effect of HLA compatibility between donor and recipient becomes more important for early outcomes. In addition, improved methodology for HLA crossmatching has facilitated more accurate and rapid access to crossmatching results, leading to increased awareness of the importance of HLA crossmatching in lung transplantation.

Several studies have analyzed the impact of HLA mismatching on lung transplantation. It is well known that HLA mismatching can trigger graft damage or even mortality.10 Donor HLA molecules are recognized by the immune system of the recipient, and immunogenetic discrepancies can lead to ACR and AMR, which play major roles in graft dysfunction and loss.

In 2003, the Eurotransplant Foundation analyzed 590 patients who underwent cadaveric lung transplantation between January 1997 and December 1999. In this study, patients with more than four mismatches at HLA loci showed poorer 1-year survival than the others.26 A retrospective study by the United Network for Organ Sharing/ISHLT also confirmed HLA mismatches as a risk factor for 1-year mortality in lung transplantation.27 As routine pretransplant crossmatching is not considered feasible in lung transplantation, these studies did not analyze prospective crossmatching; nevertheless, their results confirmed that HLA mismatching is related to graft dysfunction and survival.

In our study, the incidence of rejection and 1-year survival were not statistically different according to the crossmatching results. This may be because of the low incidence of positive crossmatching in the cohort (4.3%) and the relatively short follow-up period of the positive group (median 129.0 vs. 386.5 days). Moreover, most centers involved in the registry tended to decline the allocation when positive crossmatching had been verified not only by the KODA laboratory but also by the center’s own testing. However, a relationship between 1-year survival and positive crossmatching was observed when the cohort was subdivided according to the crossmatching method.

Flow-XM is reportedly more sensitive for detecting anti-HLA antibodies than CDC-XM.28 Several studies have shown a relationship between positive flow-XM results and a higher risk of graft dysfunction and poor survival.2,3,7 However, as flow-XM is sometimes too sensitive, denying allocation because of a positive flow crossmatch is still controversial.29 Moreover, flow-XM is not a functional test, and the binding of antibodies to lymphocytes may not always reflect complement system activation.6 Thus, transplantation across a positive flow crossmatch with a negative CDC crossmatch is acceptable, whereas transplantation across a positive CDC result is not typically recommended. In our study, the Kaplan-Meier analysis also supported this principle. Patients with a positive CDC crossmatch showed significantly lower 1-year and overall survival, whereas a positive flow crossmatch did not significantly affect the outcome.

The impact of positive T cell crossmatches on poor graft outcomes is apparent in renal, cardiac, and liver transplantation.2,30 A negative lymphocytotoxic T cell result is generally considered grounds to proceed with transplantation. Here, the Kaplan-Meier analysis also demonstrated significantly poorer 1-year and overall survival in positive T-cell crossmatched patients, whereas positive B-cell crossmatching did not show a significant difference in graft survival. Positive T-cell crossmatching indicates the presence of DSAs against class I antigens, which can lead to antibody-mediated damage to the graft.3,7,8 This damage can cause hyperacute rejection or graft loss.

This study has certain limitations. First, although this study analyzed a nationwide cohort, it is a retrospective study with a relatively small number of patients and a short follow-up period. CLAD was not observed in the positively crossmatched patients, probably because of the short follow-up time (median 129.0 days). Thus, further follow-up for CLAD is needed in the positive crossmatch group.
Second, the incidence of positive crossmatching was too low to permit further statistical analyses. As mentioned above, we performed a regression analysis to identify risk factors for poor survival, and positive CDC crossmatching was the only meaningful risk factor for poor 1-year survival. Third, a nationwide multicenter study based on registry data does not reflect detailed information on each case. In particular, it is difficult to determine the actual immunologic consequences for graft survival.

In conclusion, this study analyzed the outcomes of positively crossmatched patients in lung transplantation and revealed the importance of crossmatching methods in lung transplantation using nationwide data. Positive results of CDC and T lymphocyte crossmatching may lead to devastating outcomes in lung transplantation. Although crossmatching is not part of the allocation system, the results of crossmatching should be considered with caution during postoperative management after lung transplantation.
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Lung Transplantation", "Crossmatching", "Allocation", "Histocompatibility" ]
Acute rejection was diagnosed in three patients (patient numbers 3, 4, and 6), and two patients underwent transbronchial lung biopsy (patient numbers 3 and 4). Patient number 6 was positively crossmatched to the donor as observed using CDC-XM with both T land B lymphocytes, and class I DSA was detected with MFI over 5,000. The patient was clinically diagnosed with AMR and received aggressive treatments for rejection, including steroid pulse, plasmapheresis, and intravenous infusion of immunoglobulin. However, the patient received retransplantation on postoperative day 15 due to rejection. XM = crossmatch, CTD-ILD = connective tissue-related interstitial lung disease, ECMO = extracorporeal membrane oxygenation, CDC = complement-dependent cytotoxicity, PRA = panel-reactive antibody, DSA = donor specific antibodies, PGD = primary graft dysfunction, MFI = mean fluorescence intensity, AMR = antibody-mediated rejection, ARDS = acute respiratory distress syndrome, IPF = idiopathic pulmonary fibrosis, IVIG = intravenous immunoglobulin. Operative details and postoperative outcomes: Most patients underwent bilateral lung transplantation (202 patients, 96.2%). According to the PGD grade, patients presenting with grade 3 at postoperative 48 hours or 72 hours were regarded as high-grade PGD. Forty-four patients showed high-grade PGD, and no statistical difference was observed between the groups. The incidences of other postoperative complications, including postoperative bleeding, anastomosis dehiscence, pulmonary thromboembolism, pneumonia, and acute kidney injury, were not statistically distinguishable between the groups (Table 4). Values are presented as number (%), mean ± standard deviation or number (interquartile range). XM = crossmatch, PGD = primary graft dysfunction, ECMO = extracorporeal membrane oxygenation, ICU = intensive care unit, AKI = acute kidney injury. Acute rejection was diagnosed in 65 patients. Transbronchial lung biopsy-proven ACR was diagnosed in 37 patients, and other patients were diagnosed clinically. Though the incidence of acute rejection with regard to crossmatching results was not statistically different between the groups, the positive crossmatch group showed shorter intervals between transplantation and diagnosis of rejection. Five patients had clinical presentations that were compatible with AMR. However, none of these patients were diagnosed using the biopsy specimen or capillary C4d composition. Seven patients (3.3%) showed physiologic changes that were considered as CLAD. BOS was diagnosed in five patients (2.4%), and RAS was diagnosed in three patients (1.4%). The median number of days from transplant to CLAD diagnosis was 648 (IQR, 488.0–759.0). All of the patients showed CLAD were in the negative crossmatch group. Survival analysis: As shown in Table 4, 55 patients (26.2%) showed 1-year mortality. The 1-year and overall mortality rates were not statistically different between the groups (Table 4). Fig. 1 shows the Kaplan-Meier curves of 1-year and overall mortality rates of the patients. According to Kaplan-Meier analysis, there were no significant differences in the survival rates, regardless of crossmatching status. We also performed a regression analysis to identify the risk factors for poor 1-year survival. Positive CDC crossmatching was the only significant risk factor for poor 1-year survival in the univariable analysis (odds ratio, 11.922, 95% confidence interval, 1.302–19.125, P = 0.018). 
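The survival comparisons above follow a standard right-censored workflow: fit Kaplan-Meier curves per crossmatch group and compare the hazard functions with a log-rank test. Below is a minimal sketch of that workflow in Python using the open-source lifelines package; the study reports no analysis code, so the file name and columns (recipients.csv, days, event, cdc_xm_positive) are hypothetical placeholders for a registry extract.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("recipients.csv")  # hypothetical registry extract
pos = df[df["cdc_xm_positive"] == 1]
neg = df[df["cdc_xm_positive"] == 0]

# Kaplan-Meier survival curve for each crossmatch group
kmf = KaplanMeierFitter()
ax = None
for label, grp in [("CDC-XM positive", pos), ("CDC-XM negative", neg)]:
    kmf.fit(grp["days"], event_observed=grp["event"], label=label)
    ax = kmf.plot_survival_function(ax=ax)

# Log-rank test comparing the two hazard functions
res = logrank_test(pos["days"], neg["days"],
                   event_observed_A=pos["event"],
                   event_observed_B=neg["event"])
print(f"log-rank P = {res.p_value:.4f}")
```

Truncating follow-up at 365 days before fitting would give the 1-year comparison (Fig. 2A), and the same pattern applies to the flow-XM and T/B lymphocyte subgroups in Fig. 2C-D and Fig. 3.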
DISCUSSION: We conducted a multicenter nationwide study to analyze the prevalence of positive crossmatching and the outcomes of positively crossmatched patients in lung transplantation. Although the incidence of positive crossmatching was low (4.3%), positive CDC-XM and T lymphocyte crossmatching results were related to poor 1-year and overall survival. It is controversial whether to consider HLA matching as an allocation factor. ISHLT introduced the lung allocation score (LAS) in 2005 and revised it in 2015 and 2020.22 The LAS includes factors related to waiting list mortality and post-transplant 1-year survival. HLA matching is not considered in the LAS system for several reasons. First, as lung transplantation is mostly performed under emergency settings, it is not always possible to conduct donor HLA typing prior to allocation. Furthermore, there is also a possibility of prolonging allograft ischemic time while waiting for crossmatching results.810 Brugière et al.23 analyzed the relative impact of mismatching and graft ischemic time; an allograft ischemic time of over 330 minutes worsened patient survival, even with well-matched donor and recipient pairs. Second, because of the shortage of possible donor organs and the extensive complexity of the HLA system, the probability of finding a matched recipient is extremely low.10 Furthermore, nonimmunologic factors are considered more important for 1-year survival. Early mortality resulting from ischemia and reperfusion injury has been shown to decrease after the introduction of low-potassium dextran lung preservation solution.24 Advances in immunosuppressive regimens and enhanced combinations of antibiotics may also be responsible for improved management of unstable hemodynamics during the early post-transplant period.25 After alleviating these nonimmunological factors for early graft dysfunction and mortality in lung transplantation, the effect of HLA compatibility between the donor and recipient becomes more important for early outcomes. In addition, improved methodology for HLA crossmatching has facilitated more accurate and rapid access to crossmatching results, leading to increased awareness of the importance of HLA crossmatching in lung transplantation. Several studies have analyzed the impact of HLA mismatching on lung transplantation. It is well known that HLA mismatching can trigger graft damage or even mortality.10 Donor HLA molecules are recognized by the immune system of the recipient. Immunogenetic discrepancies can lead to ACR and AMR, which play major roles in graft dysfunction and loss. In 2003, the Eurotransplant Foundation analyzed 590 patients who underwent cadaveric lung transplantation between January 1997 and December 1999.
In this study, patients with more than four mismatches of HLA loci showed poorer 1-year survival than the others.26 A retrospective study by the United Network for Organ Sharing/ISHLT also confirmed HLA mismatches as a risk factor for 1-year mortality in lung transplantation.27 As routine pretransplant crossmatching is not considered feasible in lung transplantation, these studies did not analyze prospective crossmatching; however, the results confirmed that HLA mismatching is related to graft dysfunction and survival. In our study, the incidence of rejection or 1-year survival was not statistically different according to the crossmatching results. This result may be because of the low incidence of positive crossmatching in the cohort (4.3%) and the relatively short follow-up period of the positive group (median 129.0 vs. 386.5 days). Moreover, most centers involved in the registry tended to decline the allocation when positive crossmatching had been verified not only by the KODA lab but also by the center’s own study. However, the relationship between 1-year survival and positive crossmatching was observed when the cohort was subdivided according to the crossmatching method. Flow-XM is reportedly more sensitive for detecting anti-HLA antibodies than CDC-XM.28 Several studies have shown a relationship between positive flow-XM results and a higher risk of graft dysfunction and poor survival.237 However, as flow-XM is sometimes too sensitive, denying the allocation because of a positive flow crossmatch is still controversial.29 Moreover, flow-XM is not a functional test, and the binding of antibodies to lymphocytes may not always reflect complement system activation.6 Thus, transplantation across a positive flow crossmatch and a negative CDC crossmatch is acceptable, whereas transplantation across a positive CDC result is not typically recommended. In our study, the Kaplan-Meier analysis also supported this principle. The patients with a positive CDC crossmatch showed significantly lower 1-year and overall survival, whereas a positive flow crossmatch did not significantly affect the outcome. The impact of positive T cell crossmatches on poor graft outcomes has been apparent in renal, cardiac, and liver transplantation.230 A negative lymphocytotoxic T cell result is generally considered grounds to proceed with transplantation. Here, the results of the Kaplan-Meier analysis also demonstrated significantly poorer 1-year and overall survival in positive T-cell crossmatched patients, whereas positive B-cell crossmatching did not show a significant difference in graft survival. Positive T-cell crossmatching indicates the presence of DSAs against class I antigens, which can lead to antibody-mediated damage to the graft.378 This damage can cause hyperacute rejection or graft loss. This study has certain limitations. First, although this study analyzed a nationwide cohort, it is a retrospective study with a relatively small number of patients and a short follow-up period. CLAD was not observed in the positively crossmatched patients, probably because of the short follow-up time (median 129.0 days). Thus, further follow-up for CLAD is needed for the positive crossmatch group. Second, the incidence of positive crossmatching was too low to perform further statistical analyses. As mentioned above, we performed regression analysis to identify the risk factors for poor survival, and positive CDC crossmatching was the only meaningful risk factor for poor 1-year survival.
Third, a nationwide multicenter study with registry data does not reflect detailed information on each case. In particular, it is difficult to determine the actual immunologic consequences for graft survival. In conclusion, this study analyzed the outcomes of positively crossmatched patients and revealed the importance of crossmatching methods in lung transplantation using nationwide data. Positive results of CDC and T lymphocyte crossmatching may lead to devastating outcomes in lung transplantation. Although crossmatching is not part of the allocation system, the results of crossmatching should be considered with caution during postoperative management after lung transplantation.
Background: In lung transplantation, human leukocyte antigen (HLA) compatibility is not included in the lung allocation score system or considered when placing donor allografts. However, HLA matching may affect the outcomes of lung transplantation. This study evaluated the current assessment status, prevalence, and effects of HLA crossmatching in lung transplantation in Korean patients using nationwide multicenter registry data. Methods: Two hundred and twenty patients who received lung transplantation at six tertiary hospitals in South Korea between March 2015 and December 2019 were retrospectively reviewed. Clinical data, including general demographic characteristics, primary diagnosis, and pretransplant status of the recipients and donors registered by the Korean Organ Transplant Registry, were retrospectively analyzed. Survival analysis was performed using the Kaplan-Meier method with log-rank tests. Results: Complement-dependent cytotoxic crossmatch (CDC-XM) was performed in 208 patients (94.5%) and flow cytometric crossmatch (flow-XM) was performed in 125 patients (56.8%). Among them, nine patients (4.1%) showed T cell- and/or B cell-positive crossmatches. The incidences of postoperative complications, including primary graft dysfunction, acute rejection, and chronic allograft dysfunction, in positively crossmatched patients were not significantly different from those in patients without mismatches. Moreover, Kaplan-Meier analyses showed poorer 1-year survival in patients with positive crossmatch according to CDC-XM (P < 0.001) and T lymphocyte XM (P = 0.002) than in patients without mismatches. Conclusions: Positive CDC and T lymphocyte crossmatching results should be considered in the allocation of donor lungs. If unavailable, the result should be considered for postoperative management in lung transplantation.
null
null
9,280
319
[ 299, 423, 270, 156, 43, 748, 304, 292 ]
12
[ "patients", "lung", "positive", "transplantation", "crossmatching", "crossmatch", "xm", "performed", "lung transplantation", "cdc" ]
[ "immunologic consequences graft", "crossmatching lung transplantation", "crossmatching methodology immunosuppression", "crossmatching immunologic evaluation", "crossmatches renal transplantation" ]
null
null
[CONTENT] Lung Transplantation | Crossmatching | Allocation | Histocompatibility [SUMMARY]
[CONTENT] Lung Transplantation | Crossmatching | Allocation | Histocompatibility [SUMMARY]
[CONTENT] Lung Transplantation | Crossmatching | Allocation | Histocompatibility [SUMMARY]
null
[CONTENT] Lung Transplantation | Crossmatching | Allocation | Histocompatibility [SUMMARY]
null
[CONTENT] Graft Rejection | Graft Survival | HLA Antigens | Histocompatibility Testing | Humans | Isoantibodies | Kidney Transplantation | Lung Transplantation | Retrospective Studies [SUMMARY]
[CONTENT] Graft Rejection | Graft Survival | HLA Antigens | Histocompatibility Testing | Humans | Isoantibodies | Kidney Transplantation | Lung Transplantation | Retrospective Studies [SUMMARY]
[CONTENT] Graft Rejection | Graft Survival | HLA Antigens | Histocompatibility Testing | Humans | Isoantibodies | Kidney Transplantation | Lung Transplantation | Retrospective Studies [SUMMARY]
null
[CONTENT] Graft Rejection | Graft Survival | HLA Antigens | Histocompatibility Testing | Humans | Isoantibodies | Kidney Transplantation | Lung Transplantation | Retrospective Studies [SUMMARY]
null
[CONTENT] immunologic consequences graft | crossmatching lung transplantation | crossmatching methodology immunosuppression | crossmatching immunologic evaluation | crossmatches renal transplantation [SUMMARY]
[CONTENT] immunologic consequences graft | crossmatching lung transplantation | crossmatching methodology immunosuppression | crossmatching immunologic evaluation | crossmatches renal transplantation [SUMMARY]
[CONTENT] immunologic consequences graft | crossmatching lung transplantation | crossmatching methodology immunosuppression | crossmatching immunologic evaluation | crossmatches renal transplantation [SUMMARY]
null
[CONTENT] immunologic consequences graft | crossmatching lung transplantation | crossmatching methodology immunosuppression | crossmatching immunologic evaluation | crossmatches renal transplantation [SUMMARY]
null
[CONTENT] patients | lung | positive | transplantation | crossmatching | crossmatch | xm | performed | lung transplantation | cdc [SUMMARY]
[CONTENT] patients | lung | positive | transplantation | crossmatching | crossmatch | xm | performed | lung transplantation | cdc [SUMMARY]
[CONTENT] patients | lung | positive | transplantation | crossmatching | crossmatch | xm | performed | lung transplantation | cdc [SUMMARY]
null
[CONTENT] patients | lung | positive | transplantation | crossmatching | crossmatch | xm | performed | lung transplantation | cdc [SUMMARY]
null
[CONTENT] cases | transplantation | crossmatching | positive | lung | renal | lung transplantation | 2014 | mandatory | crossmatch [SUMMARY]
[CONTENT] lung | usa | performed | clinical | data | allograft | allograft dysfunction | patients | transplantation | inc [SUMMARY]
[CONTENT] patients | crossmatch | crossmatch group | year | positive | xm | group | patient | diagnosed | number [SUMMARY]
null
[CONTENT] patients | lung | positive | crossmatching | transplantation | crossmatch | year | performed | lung transplantation | xm [SUMMARY]
null
[CONTENT] ||| HLA ||| Korean [SUMMARY]
[CONTENT] Two hundred and twenty | six | South Korea | March 2015 | December 2019 ||| the Korean Organ Transplant Registry ||| [SUMMARY]
[CONTENT] CDC-XM | 208 | 94.5% | 125 | 56.8% ||| nine | 4.1% ||| ||| ||| Kaplan-Meier | 1-year | CDC-XM | XM | 0.002 [SUMMARY]
null
[CONTENT] ||| HLA ||| Korean ||| Two hundred and twenty | six | South Korea | March 2015 | December 2019 ||| the Korean Organ Transplant Registry ||| ||| CDC-XM | 208 | 94.5% | 125 | 56.8% ||| nine | 4.1% ||| ||| ||| Kaplan-Meier | 1-year | CDC-XM | XM | 0.002 ||| CDC ||| [SUMMARY]
null
Psychosocial Work Environment Explains the Association of Job Dissatisfaction With Long-term Sickness Absence: A One-Year Prospect Study of Japanese Employees.
31308301
Using a 1-year prospective design, we examined the association of job dissatisfaction with long-term sickness absence lasting 1 month or more, before and after adjusting for psychosocial work environment (ie, quantitative job overload, job control, and workplace social support) in Japanese employees.
BACKGROUND
We surveyed 14,687 employees (7,343 men and 7,344 women) aged 20-66 years, who had not taken long-term sickness absence in the past 3 years, from a financial service company in Japan. The Brief Job Stress Questionnaire, including scales on job satisfaction and psychosocial work environment, was administered, and information on demographic and occupational characteristics (ie, age, gender, length of service, job type, and employment position) was obtained from the personnel records of the surveyed company at baseline (July-August 2015). Subsequently, information on the start dates of long-term sickness absences was obtained during the follow-up period (until July 2016) from the personnel records. Cox's proportional hazard regression analysis was conducted.
METHODS
After adjusting for demographic and occupational characteristics, those who perceived job dissatisfaction had a significantly higher hazard ratio of long-term sickness absence than those who perceived job satisfaction (hazard ratio 2.91; 95% confidence interval, 1.74-4.87). After additionally adjusting for psychosocial work environment, this association was weakened and no longer significant (hazard ratio 1.55; 95% confidence interval, 0.86-2.80).
RESULTS
Our findings suggest that the association of job dissatisfaction with long-term sickness absence is spurious and explained mainly via psychosocial work environment.
CONCLUSIONS
[ "Absenteeism", "Adult", "Aged", "Employment", "Female", "Humans", "Japan", "Job Satisfaction", "Longitudinal Studies", "Male", "Middle Aged", "Occupational Health", "Prospective Studies", "Risk Factors", "Sick Leave", "Social Environment", "Social Support", "Stress, Psychological", "Surveys and Questionnaires", "Workload", "Workplace", "Young Adult" ]
7429151
INTRODUCTION
Sickness absence is a major public health and economic problem in many countries.1,2 Among others, long-term sickness absence, often defined as sickness absence lasting 4 weeks/1 month or more,3 bears high costs for a variety of stakeholders, including employees, employers, insurance agencies, and society at large.4,5 The Organization for Economic Co-operation and Development (OECD) has reported that OECD member countries spend, on average, approximately 1.9% of the gross domestic product (GDP) on sickness absence benefits,6 most of which are accounted for by long-term sickness absence.2 Furthermore, long-term sickness absence has various adverse effects on employees, such as a lower probability of returning to work,7,8 a higher risk of social exclusion,9 and mortality.10–12 Therefore, identifying predictors of long-term sickness absence and preventing it are beneficial for both employees and society. In the occupational health research field, job dissatisfaction (ie, an unpleasant emotion when one’s work is frustrating and blocking the affirmation of their values)13 has been attracting attention as a predictor of sickness absence, as well as of poor mental health (ie, anxiety, burnout, depression, and low self-esteem) and physical health (ie, cardiovascular disease and musculoskeletal disorders).14 Several prospective studies in European countries have examined the association of job dissatisfaction with sickness absence15–24; the results have been inconsistent, and most of these studies focused mainly on short-term sickness absence lasting from a few days to a few weeks. To date, only three studies have focused on long-term sickness absence16,21,22; two, however, relied on self-reports rather than on personnel records or national register data for measuring sickness absence duration,21,22 which may have led to a less accurate association with job dissatisfaction.25 Furthermore, only one study conducted a survival analysis.23 In addition to the above, psychosocial work environment may explain the association of job dissatisfaction with sickness absence.26 In fact, major psychosocial work environment factors, such as those described in the job demands-control (JD-C) or demand-control-support (DCS) model,27,28 have been associated with job dissatisfaction.29,30 It is also known that a poor psychosocial work environment causes sickness absence.31 It is worth knowing how much unique impact job dissatisfaction has on long-term sickness absence independent of psychosocial work environment, because this would clarify whether an effective strategy to prevent long-term sickness absence should target job dissatisfaction per se or the psychosocial work environment. In contrast to European countries, the association between psychosocial work environment, job dissatisfaction, and long-term sickness absence has not been fully examined among Japanese employees. In Japan, approximately 60% of employees reported job-related distress due to psychosocial work environment such as job overload and workplace human relations.32 Furthermore, compared to European countries, Japanese employees have been found to have lower levels of job satisfaction,33 as well as of positive work-related states of mind, such as work engagement.34 On the other hand, because the social notion that “not taking time off and working hard are virtues” is still strongly rooted in the Japanese psyche,35 taking long-term sickness absence is a serious event for Japanese employees. 
Therefore, it is extremely valuable to clarify the association of job dissatisfaction with long-term sickness absence and the role of psychosocial work environment in this association among Japanese employees. To date, two cross-sectional studies have reported the association of job dissatisfaction with sickness absence among Japanese employees,36,37 while prospective evidence is lacking and the role of psychosocial work environment in the association is still unclear. The purpose of the present study was twofold. The first purpose was to examine the prospective association of job dissatisfaction with long-term sickness absence obtained from personnel records in a large sample of Japanese employees, conducting survival analysis. The second purpose was to examine whether psychosocial work environment explains the association of job dissatisfaction with long-term sickness absence. In the present study, we focused especially on financial service employees because they experience increased stress and worries due to greater time pressures, problems with ergonomics, conflicting roles, work demands, and difficult relationships with customers.38
null
null
RESULTS
Table 1 shows the detailed characteristics of the participants in the satisfied and dissatisfied groups. Compared to the satisfied group, the dissatisfied group was significantly younger, had greater proportions of women, claims service employees, and staff and temporary employees, and perceived significantly higher levels of quantitative job overload and lower levels of job control and workplace social support. SD, standard deviation. aStudent’s t test and Fisher’s exact test were used for the continuous and categorical variables, respectively. Figure 2 shows the Kaplan-Meier curves for the cumulative hazard of long-term sickness absence among the dissatisfied group compared to the satisfied group. The log-rank test showed that the dissatisfied group had a significantly higher incidence rate of long-term sickness absence compared to the satisfied group (P < 0.001). Table 2 shows the results of the Cox’s proportional hazard regression analysis. During 5,258,910 person-days (mean: 358 days, range: 3–373 days), 62 employees (32 men and 30 women) took long-term sickness absence (mental disorders: 51 cases, musculoskeletal disorders: 6 cases, cerebrovascular disease: 3 cases, and cardiovascular disease: 2 cases). After adjusting for demographic and occupational characteristics (models 1 and 2), the dissatisfied group had a significantly higher HR of long-term sickness absence than the satisfied group (HR 3.00; 95% CI, 1.80–5.00 and HR 2.91; 95% CI, 1.74–4.87 for models 1 and 2, respectively). However, after additionally adjusting for psychosocial work environment (model 3), this association was weakened and no longer significant (HR 1.55; 95% CI, 0.86–2.80). aAdjusted for age (and gender). bAdditionally adjusted for length of service, job type, and employment position. cAdditionally adjusted for quantitative job overload, job control, and workplace social support. For the Harman’s single-factor test, three factors with eigenvalues greater than 1.0 were extracted, and the first (largest) factor did not account for a majority of the variance (32.7%), indicating that overcontrol bias due to common method variance was not of great concern. When we conducted the gender-stratified analysis, a tendency similar to that in the main analysis was observed in both genders, although statistical significance was marginal among women for the log-rank test (P = 0.063) and for models 1 and 2 of the Cox’s proportional hazard regression analysis (Table 2).
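The censoring rule from the Methods (follow-up ends at absence onset, resignation/retirement, or July 31st, 2016, whichever comes first) and the incrementally adjusted models 1–3 map directly onto a Cox proportional hazards fit. The sketch below uses the Python package lifelines rather than the SPSS procedure the authors report; the file and column names are hypothetical, and gender is assumed to be coded 0/1.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("employees.csv",
                 parse_dates=["baseline_date", "absence_start",
                              "resignation_date"])  # hypothetical file

# Follow-up ends at absence onset, resignation/retirement (censored),
# or the administrative study end, whichever comes first.
study_end = pd.Timestamp("2016-07-31")
end = df[["absence_start", "resignation_date"]].min(axis=1)
end = end.fillna(study_end).clip(upper=study_end)
df["duration"] = (end - df["baseline_date"]).dt.days
df["event"] = (df["absence_start"] == end).astype(int)  # 1 = absence observed

# Dummy-code the categorical covariates before fitting.
df = pd.get_dummies(df, columns=["job_type", "employment_position"],
                    drop_first=True, dtype=float)
dummies = [c for c in df.columns
           if c.startswith(("job_type_", "employment_position_"))]

base = ["dissatisfied", "age", "gender"]
models = {
    "model 1": base,
    "model 2": base + ["length_of_service"] + dummies,
    "model 3": base + ["length_of_service"] + dummies
               + ["job_overload", "job_control", "social_support"],
}
for name, cols in models.items():
    cph = CoxPHFitter().fit(df[["duration", "event"] + cols],
                            duration_col="duration", event_col="event")
    print(name, "HR (dissatisfied):",
          round(cph.hazard_ratios_["dissatisfied"], 2))
```

With data like the study's, models 1 and 2 would reproduce HRs near 3.00 and 2.91 for the dissatisfied group, and model 3 the attenuated, nonsignificant HR of 1.55.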
null
null
[ "Participants", "Measures", "Job dissatisfaction", "Long-term sickness absence", "Psychosocial work environment", "Covariates", "Statistical analysis" ]
[ "A 1-year prospective study of employees from a financial service company listed on the major stock exchanges was conducted from July 2015 to July 2016. Information was gathered using a self-administered questionnaire and the personnel records of the surveyed company. At baseline (July–August 2015), all employees, except for board members; temporary transferred, overseas, and dispatched employees; and absentees (N = 15,615) were invited to participate in this study; a total of 14,711 employees completed the baseline questionnaire (response rate: 94.2%). After excluding 24 employees who had histories of long-term sickness absence in the past 3 years, 14,687 employees (7,343 men and 7,344 women) aged 20–66 years were followed for 1 year (until July 31st, 2016) (Figure 1). Informed consent was obtained from participants using the opt-out method for the secondary analysis of existing anonymous data. The study procedure was reviewed and approved by the Kitasato University Medical Ethics Organization (No. B15-113).", " Job dissatisfaction Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups.\nJob dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups.\n Long-term sickness absence Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first.\nInformation on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. 
Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first.\n Psychosocial work environment For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively.\nFor psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively.\n Covariates Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others.\nCovariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. 
Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others.", "Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups.", "Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first.", "For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively.", "Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others.", "We first conducted a descriptive analysis using Student’s t test or Fisher’s exact test to compare the demographic and occupational characteristics and the scale scores between the satisfied and dissatisfied groups. Afterwards, the cumulative hazard of long-term sickness absence was plotted as Kaplan-Meier curves and the log-rank test was conducted to compare the hazard functions between the satisfied and dissatisfied groups. 
Finally, using the satisfied group as a reference, Cox’s proportional hazard regression analysis was conducted to estimate the hazard ratio (HR) and its 95% confidence interval (CI) of the incidence of long-term sickness absence during the follow-up period in the dissatisfied group. In the series of analyses, we first adjusted for demographic characteristics (ie, age and gender) (model 1). Subsequently, we incrementally adjusted for occupational characteristics (ie, length of service, job type, and employment position) (model 2) and psychosocial work environment (ie, quantitative job overload, job control, and workplace social support) (model 3). For model 3, overcontrol bias due to common method variance might occur since the present study measured job dissatisfaction and psychosocial work environment simultaneously with the same self-administered questionnaire (ie, the BJSQ). Therefore, to test the presence of overcontrol bias due to common method variance, Harman’s single-factor test40 was conducted by entering items for job dissatisfaction, quantitative job overload, job control, and workplace social support (ie, a total of 13 items) into the unrotated principal component analysis. Furthermore, as sub-analyses, the log-rank test and the Cox’s proportional hazard regression analysis were conducted by gender because men and women are exposed to different work environments in Japan. The level of significance was 0.05 (two-tailed). The statistical analyses were conducted using IBM® SPSS® Statistics Version 23.0 for Windows (IBM Corp., Armonk, NY, USA)." ]
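Two scale diagnostics used above, Cronbach's alpha for the BJSQ subscales and Harman's single-factor test via unrotated principal component analysis, are straightforward to compute on item-level data. The following is a generic Python illustration under hypothetical item column names, not the authors' SPSS syntax; it assumes the data frame holds exactly the 13 analyzed items.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of item scores (rows = respondents)."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

df = pd.read_csv("bjsq_items.csv")  # hypothetical: exactly the 13 items

# Internal consistency of the three-item quantitative job overload scale
print("alpha (job overload):",
      round(cronbach_alpha(df[["overload_1", "overload_2", "overload_3"]]), 2))

# Harman's single-factor test: unrotated PCA over all items; a dominant
# first component (a majority of the variance) would suggest common
# method variance, ie, possible overcontrol bias in model 3.
z = (df - df.mean()) / df.std(ddof=1)               # standardize the items
eigvals = np.linalg.eigvalsh(np.cov(z.to_numpy(), rowvar=False))[::-1]
explained = eigvals / eigvals.sum()
print((eigvals > 1.0).sum(), "components with eigenvalue > 1;",
      f"first explains {explained[0]:.1%} of total variance")
```

On the study's data, this check yields the pattern reported in the Results: three eigenvalues above 1.0, with the first component explaining 32.7% of the variance.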
[ null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIAL AND METHODS", "Participants", "Measures", "Job dissatisfaction", "Long-term sickness absence", "Psychosocial work environment", "Covariates", "Statistical analysis", "RESULTS", "DISCUSSION" ]
[ "Sickness absence is a major public health and economic problem in many countries.1,2 Among others, long-term sickness absence, often defined as sickness absence lasting 4 weeks/1 month or more,3 bears high costs for a variety of stakeholders, including employees, employers, insurance agencies, and society at large.4,5 The Organization for Economic Co-operation and Development (OECD) has reported that OECD member countries spend, on average, approximately 1.9% of the gross domestic product (GDP) on sickness absence benefits,6 most of which are accounted for by long-term sickness absence.2 Furthermore, long-term sickness absence has various adverse effects on employees, such as lower probability of returning to work,7,8 a higher risk of social exclusion,9 and mortality.10–12 Therefore, identifying predictors of long-term sickness absence and preventing it are beneficial for both employees and society.\nIn the occupational health research field, job dissatisfaction (ie, an unpleasant emotion when one’s work is frustrating and blocking the affirmation of their values)13 has been attracting attention as a predictor of sickness absence, as well as of poor mental health (ie, anxiety, burnout, depression, and low self-esteem) and physical health (ie, cardiovascular disease and musculoskeletal disorders).14 Several prospective studies in European countries have examined the association of job dissatisfaction with sickness absence15–24; the results have been inconsistent, and most of these studies focused mainly on short-term sickness absence lasting from a few days to a few weeks. To date, only three studies focused on long-term sickness absence16,21,22; two, however, relied on self-reports rather than on personnel records or national register data for measuring sickness absence duration.21,22 This may have led to a less accurate association with job dissatisfaction.25 Furthermore, only one study conducted a survival analysis.23\nIn addition to the above, psychosocial work environment may explain the association of job dissatisfaction with sickness absence.26 In fact, major psychosocial work environment, such as described in the job demands-control (JD-C) or demand-control-support (DCS) model,27,28 has been associated with job dissatisfaction.29,30 It is also known that poor psychosocial wok environment causes sickness absence.31 It might be interesting to know how much unique impact job dissatisfaction has on long-term sickness absence independent of psychosocial work environment, because it would be relevant for developing an effective strategy to prevent long-term sickness absence whether targeting on job dissatisfaction per se or psychosocial work environment.\nContrary to European countries, the association between psychosocial work environment, job dissatisfaction, and long-term sickness absence has not been fully examined among Japanese employees. In Japan, approximately 60% of employees reported job-related distress due to psychosocial work environment such as job overload and workplace human relations.32 Furthermore, compared to European countries, Japanese employees have been found to have lower levels of job satisfaction,33 as well as positive work-related state of mind, such as work engagement.34 On the other hand, because the social notion that “not taking time off and working hard are virtues” is still strongly rooted in the Japanese psyche,35 taking long-term sickness absence is a serious event for Japanese employees. 
Therefore, it is extremely valuable to clarify the association of job dissatisfaction with long-term sickness absence and the role of psychosocial work environment in this association among Japanese employees. To date, two cross-sectional studies have reported the association of job dissatisfaction with sickness absence among Japanese employees,36,37 while prospective evidence is lacking and the role of psychosocial work environment in the association is still unclear.\nThe purpose of the present study was twofold. The first purpose was to examine the prospective association of job dissatisfaction with long-term sickness absence obtained from personnel records in a large sample of Japanese employees, conducting survival analysis. The second purpose was to examine whether psychosocial work environment explains the association of job dissatisfaction with long-term sickness absence. In the present study, we focused especially on financial service employees because they experience increased stress and worries due to greater time pressures, problems with ergonomics, conflicting roles, work demands, and difficult relationships with customers.38", " Participants A 1-year prospective study of employees from a financial service company listed on the major stock exchanges was conducted from July 2015 to July 2016. Information was gathered using a self-administered questionnaire and the personnel records of the surveyed company. At baseline (July–August 2015), all employees, except for board members; temporary transferred, overseas, and dispatched employees; and absentees (N = 15,615) were invited to participate in this study; a total of 14,711 employees completed the baseline questionnaire (response rate: 94.2%). After excluding 24 employees who had histories of long-term sickness absence in the past 3 years, 14,687 employees (7,343 men and 7,344 women) aged 20–66 years were followed for 1 year (until July 31st, 2016) (Figure 1). Informed consent was obtained from participants using the opt-out method for the secondary analysis of existing anonymous data. The study procedure was reviewed and approved by the Kitasato University Medical Ethics Organization (No. B15-113).\nA 1-year prospective study of employees from a financial service company listed on the major stock exchanges was conducted from July 2015 to July 2016. Information was gathered using a self-administered questionnaire and the personnel records of the surveyed company. At baseline (July–August 2015), all employees, except for board members; temporary transferred, overseas, and dispatched employees; and absentees (N = 15,615) were invited to participate in this study; a total of 14,711 employees completed the baseline questionnaire (response rate: 94.2%). After excluding 24 employees who had histories of long-term sickness absence in the past 3 years, 14,687 employees (7,343 men and 7,344 women) aged 20–66 years were followed for 1 year (until July 31st, 2016) (Figure 1). Informed consent was obtained from participants using the opt-out method for the secondary analysis of existing anonymous data. The study procedure was reviewed and approved by the Kitasato University Medical Ethics Organization (No. B15-113).\n Measures Job dissatisfaction Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). 
Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups.\nJob dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups.\n Long-term sickness absence Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first.\nInformation on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first.\n Psychosocial work environment For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). 
In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively.\nFor psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively.\n Covariates Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others.\nCovariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others.\n Job dissatisfaction Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups.\nJob dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). 
Psychosocial work environment: For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the job demands-control (JD-C) or demand-control-support (DCS) model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item scales for quantitative job overload, job control, supervisor support, and coworker support. Answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the score of each scale ranging from 3 to 12. For workplace social support, the total of the supervisor support and coworker support scores was calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively.
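Cronbach’s alpha for a scale with k items is k/(k − 1) × (1 − Σ item variances / variance of the total score). A small self-contained sketch of this computation, using simulated stand-in data rather than the study’s items:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; rows are respondents, columns are the items of one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Simulated stand-in for a three-item scale scored 1-4 (not the study's data).
rng = np.random.default_rng(0)
base = rng.integers(1, 5, size=(100, 1))
items = np.clip(base + rng.integers(-1, 2, size=(100, 3)), 1, 4).astype(float)
print(round(cronbach_alpha(items), 2))
```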
Covariates: Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender; age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others.
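For the regression models described below, categorical covariates such as job type and employment position would typically enter as dummy variables. A brief pandas illustration (the category labels follow the text above, but the actual coding scheme used in the analysis is not reported):

```python
import pandas as pd

df = pd.DataFrame({
    "job_type": ["sales", "claims service", "administrative", "others"],
    "position": ["manager", "staff", "senior employee", "temporary employee"],
})
# One-hot encode; drop_first keeps one reference category per covariate.
encoded = pd.get_dummies(df, columns=["job_type", "position"], drop_first=True)
print(encoded.columns.tolist())
```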
Statistical analysis: We first conducted a descriptive analysis using Student’s t test or Fisher’s exact test to compare the demographic and occupational characteristics and the scale scores between the satisfied and dissatisfied groups. Afterwards, the cumulative hazard of long-term sickness absence was plotted as Kaplan-Meier curves, and the log-rank test was conducted to compare the hazard functions of the satisfied and dissatisfied groups.
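As a schematic illustration of this step, the sketch below uses the lifelines library with toy time/event variables of the kind derived above; it reproduces the method in outline and is not the original SPSS analysis.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy data standing in for the derived time-to-event variables.
df = pd.DataFrame({
    "time_days": [350, 120, 365, 200, 365, 90],
    "event":     [0,   1,   0,   1,   0,   1],
    "group":     ["satisfied", "dissatisfied"] * 3,
})
sat = df[df["group"] == "satisfied"]
dis = df[df["group"] == "dissatisfied"]

kmf = KaplanMeierFitter()
kmf.fit(sat["time_days"], sat["event"], label="satisfied")
# kmf.plot_cumulative_density() draws one curve; refit with `dis` for the other.

result = logrank_test(sat["time_days"], dis["time_days"],
                      event_observed_A=sat["event"], event_observed_B=dis["event"])
print(result.p_value)
```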
Finally, using the satisfied group as the reference, Cox’s proportional hazard regression analysis was conducted to estimate the hazard ratio (HR) and its 95% confidence interval (CI) for the incidence of long-term sickness absence during the follow-up period in the dissatisfied group. In this series of analyses, we first adjusted for demographic characteristics (ie, age and gender) (model 1). Subsequently, we incrementally adjusted for occupational characteristics (ie, length of service, job type, and employment position) (model 2) and psychosocial work environment (ie, quantitative job overload, job control, and workplace social support) (model 3).
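The incremental adjustment can be mimicked by fitting the same Cox model with growing covariate sets. A sketch with lifelines on simulated data follows; all column names are hypothetical, and the job type/employment position dummies are omitted for brevity.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "time_days": rng.integers(30, 366, n),
    "event": rng.random(n) < 0.05,
    "dissatisfied": rng.integers(0, 2, n),
    "age": rng.normal(40, 10, n),
    "female": rng.integers(0, 2, n),
    "service_years": rng.normal(12, 8, n).clip(min=0),
    "overload": rng.integers(3, 13, n),
    "control": rng.integers(3, 13, n),
    "support": rng.integers(6, 25, n),
})

models = {
    "model 1": ["dissatisfied", "age", "female"],
    "model 2": ["dissatisfied", "age", "female", "service_years"],  # + job type/position dummies
    "model 3": ["dissatisfied", "age", "female", "service_years",
                "overload", "control", "support"],
}
for name, cols in models.items():
    cph = CoxPHFitter()
    cph.fit(df[cols + ["time_days", "event"]],
            duration_col="time_days", event_col="event")
    hr = cph.hazard_ratios_["dissatisfied"]
    lo, hi = np.exp(cph.confidence_intervals_.loc["dissatisfied"])  # HR-scale CI
    print(f"{name}: HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```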
For model 3, overcontrol bias due to common method variance might occur because job dissatisfaction and psychosocial work environment were measured simultaneously with the same self-administered questionnaire (ie, the BJSQ). Therefore, to test for such bias, Harman’s single-factor test40 was conducted by entering the items for job dissatisfaction, quantitative job overload, job control, and workplace social support (ie, a total of 13 items) into an unrotated principal component analysis.
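Harman’s single-factor test amounts to an unrotated principal component analysis of all items, checking how many components have eigenvalues above 1.0 and whether the first component accounts for a majority of the variance. A sketch with scikit-learn on standardized stand-in data (random items, not the study’s responses):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Stand-in for the 13 BJSQ items (dissatisfaction + overload + control + support).
items = rng.integers(1, 5, size=(1000, 13)).astype(float)

pca = PCA()  # unrotated principal components
pca.fit(StandardScaler().fit_transform(items))

eigenvalues = pca.explained_variance_
print("components with eigenvalue > 1.0:", int((eigenvalues > 1.0).sum()))
print(f"variance explained by the first component: {pca.explained_variance_ratio_[0]:.1%}")
```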
Furthermore, as sub-analyses, the log-rank test and the Cox’s proportional hazard regression analysis were conducted by gender, because men and women are exposed to different work environments in Japan. The level of significance was 0.05 (two-tailed). The statistical analyses were conducted using IBM® SPSS® Statistics Version 23.0 for Windows (IBM Corp., Armonk, NY, USA).
[ "intro", "materials|methods", null, null, null, null, null, null, null, "results", "discussion" ]
[ "absenteeism", "job satisfaction", "longitudinal studies", "psychosocial job characteristics", "survival analysis" ]
INTRODUCTION: Sickness absence is a major public health and economic problem in many countries.1,2 Among others, long-term sickness absence, often defined as sickness absence lasting 4 weeks/1 month or more,3 bears high costs for a variety of stakeholders, including employees, employers, insurance agencies, and society at large.4,5 The Organization for Economic Co-operation and Development (OECD) has reported that OECD member countries spend, on average, approximately 1.9% of gross domestic product (GDP) on sickness absence benefits,6 most of which is accounted for by long-term sickness absence.2 Furthermore, long-term sickness absence has various adverse effects on employees, such as a lower probability of returning to work,7,8 a higher risk of social exclusion,9 and mortality.10–12 Therefore, identifying predictors of long-term sickness absence and preventing it are beneficial for both employees and society.

In the occupational health research field, job dissatisfaction (ie, an unpleasant emotion when one’s work is frustrating and blocks the affirmation of one’s values)13 has been attracting attention as a predictor of sickness absence, as well as of poor mental health (ie, anxiety, burnout, depression, and low self-esteem) and physical health (ie, cardiovascular disease and musculoskeletal disorders).14 Several prospective studies in European countries have examined the association of job dissatisfaction with sickness absence15–24; the results have been inconsistent, and most of these studies focused mainly on short-term sickness absence lasting from a few days to a few weeks. To date, only three studies have focused on long-term sickness absence16,21,22; two of them, however, relied on self-reports rather than on personnel records or national register data for measuring sickness absence duration,21,22 which may have led to a less accurate association with job dissatisfaction.25 Furthermore, only one study conducted a survival analysis.23

In addition, psychosocial work environment may explain the association of job dissatisfaction with sickness absence.26 In fact, major aspects of the psychosocial work environment, such as those described in the job demands-control (JD-C) or demand-control-support (DCS) model,27,28 have been associated with job dissatisfaction.29,30 It is also known that poor psychosocial work environment causes sickness absence.31 It is therefore worth knowing how much unique impact job dissatisfaction has on long-term sickness absence independent of psychosocial work environment, because this determines whether an effective strategy to prevent long-term sickness absence should target job dissatisfaction per se or the psychosocial work environment.

Contrary to European countries, the association between psychosocial work environment, job dissatisfaction, and long-term sickness absence has not been fully examined among Japanese employees. In Japan, approximately 60% of employees report job-related distress due to psychosocial work environment factors such as job overload and workplace human relations.32 Furthermore, compared to European countries, Japanese employees have been found to have lower levels of job satisfaction,33 as well as of positive work-related states of mind such as work engagement.34 On the other hand, because the social notion that “not taking time off and working hard are virtues” is still strongly rooted in the Japanese psyche,35 taking long-term sickness absence is a serious event for Japanese employees. Therefore, it is extremely valuable to clarify the association of job dissatisfaction with long-term sickness absence, and the role of psychosocial work environment in this association, among Japanese employees. To date, two cross-sectional studies have reported an association of job dissatisfaction with sickness absence among Japanese employees,36,37 while prospective evidence is lacking and the role of psychosocial work environment in the association is still unclear.

The purpose of the present study was twofold. The first purpose was to examine the prospective association of job dissatisfaction with long-term sickness absence obtained from personnel records in a large sample of Japanese employees, using survival analysis. The second purpose was to examine whether psychosocial work environment explains the association of job dissatisfaction with long-term sickness absence. We focused especially on financial service employees because they experience increased stress and worries due to greater time pressures, problems with ergonomics, conflicting roles, work demands, and difficult relationships with customers.38

MATERIAL AND METHODS: Participants: A 1-year prospective study of employees of a financial service company listed on the major stock exchanges was conducted from July 2015 to July 2016. Information was gathered using a self-administered questionnaire and the personnel records of the surveyed company. At baseline (July–August 2015), all employees, except for board members; temporarily transferred, overseas, and dispatched employees; and absentees (N = 15,615) were invited to participate in this study; a total of 14,711 employees completed the baseline questionnaire (response rate: 94.2%). After excluding 24 employees who had histories of long-term sickness absence in the past 3 years, 14,687 employees (7,343 men and 7,344 women) aged 20–66 years were followed for 1 year (until July 31st, 2016) (Figure 1). Informed consent was obtained from participants using the opt-out method for the secondary analysis of existing anonymous data. The study procedure was reviewed and approved by the Kitasato University Medical Ethics Organization (No. B15-113).
Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups. Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups. Long-term sickness absence Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first. Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first. Psychosocial work environment For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). 
In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively. For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively. Covariates Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others. Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others. Job dissatisfaction Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups. Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups. 
Long-term sickness absence Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first. Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first. Psychosocial work environment For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively. For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). 
In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively. Covariates Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others. Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others. Statistical analysis We first conducted a descriptive analysis using Student’s t test or Fisher’s exact test to compare the demographic and occupational characteristics and the scale scores between the satisfied and dissatisfied groups. Afterwards, the cumulative hazard of long-term sickness absence was plotted as Kaplan-Meier curves and the log-rank test was conducted to compare the hazard functions between the satisfied and dissatisfied groups. Finally, using the satisfied group as a reference, Cox’s proportional hazard regression analysis was conducted to estimate the hazard ratio (HR) and its 95% confidence interval (CI) of the incidence of long-term sickness absence during the follow-up period in the dissatisfied group. In the series of analyses, we first adjusted for demographic characteristics (ie, age and gender) (model 1). Subsequently, we incrementally adjusted for occupational characteristics (ie, length of service, job type, and employment position) (model 2) and psychosocial work environment (ie, quantitative job overload, job control, and workplace social support) (model 3). For model 3, overcontrol bias due to common method variance might occur since the present study measured job dissatisfaction and psychosocial work environment simultaneously with the same self-administered questionnaire (ie, the BJSQ). Therefore, to test the presence of overcontrol bias due to common method variance, Harman’s single-factor test40 was conducted by entering items for job dissatisfaction, quantitative job overload, job control, and workplace social support (ie, a total of 13 items) into the unrotated principal component analysis. Furthermore, as sub-analyses, the log-rank test and the Cox’s proportional hazard regression analysis were conducted by gender because men and women are exposed to different work environment in Japan. The level of significance was 0.05 (two-tailed). The statistical analyses were conducted using IBM® SPSS® Statistics Version 23.0 for Windows (IBM Corp., Armonk, NY, USA). 
We first conducted a descriptive analysis using Student’s t test or Fisher’s exact test to compare the demographic and occupational characteristics and the scale scores between the satisfied and dissatisfied groups. Afterwards, the cumulative hazard of long-term sickness absence was plotted as Kaplan-Meier curves and the log-rank test was conducted to compare the hazard functions between the satisfied and dissatisfied groups. Finally, using the satisfied group as a reference, Cox’s proportional hazard regression analysis was conducted to estimate the hazard ratio (HR) and its 95% confidence interval (CI) of the incidence of long-term sickness absence during the follow-up period in the dissatisfied group. In the series of analyses, we first adjusted for demographic characteristics (ie, age and gender) (model 1). Subsequently, we incrementally adjusted for occupational characteristics (ie, length of service, job type, and employment position) (model 2) and psychosocial work environment (ie, quantitative job overload, job control, and workplace social support) (model 3). For model 3, overcontrol bias due to common method variance might occur since the present study measured job dissatisfaction and psychosocial work environment simultaneously with the same self-administered questionnaire (ie, the BJSQ). Therefore, to test the presence of overcontrol bias due to common method variance, Harman’s single-factor test40 was conducted by entering items for job dissatisfaction, quantitative job overload, job control, and workplace social support (ie, a total of 13 items) into the unrotated principal component analysis. Furthermore, as sub-analyses, the log-rank test and the Cox’s proportional hazard regression analysis were conducted by gender because men and women are exposed to different work environment in Japan. The level of significance was 0.05 (two-tailed). The statistical analyses were conducted using IBM® SPSS® Statistics Version 23.0 for Windows (IBM Corp., Armonk, NY, USA). Participants: A 1-year prospective study of employees from a financial service company listed on the major stock exchanges was conducted from July 2015 to July 2016. Information was gathered using a self-administered questionnaire and the personnel records of the surveyed company. At baseline (July–August 2015), all employees, except for board members; temporary transferred, overseas, and dispatched employees; and absentees (N = 15,615) were invited to participate in this study; a total of 14,711 employees completed the baseline questionnaire (response rate: 94.2%). After excluding 24 employees who had histories of long-term sickness absence in the past 3 years, 14,687 employees (7,343 men and 7,344 women) aged 20–66 years were followed for 1 year (until July 31st, 2016) (Figure 1). Informed consent was obtained from participants using the opt-out method for the secondary analysis of existing anonymous data. The study procedure was reviewed and approved by the Kitasato University Medical Ethics Organization (No. B15-113). Measures: Job dissatisfaction Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). 
Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups. Job dissatisfaction was measured using the Brief Job Stress Questionnaire (BJSQ). The BJSQ has high levels of internal consistency reliability and factor-based validity39 and includes a single-item summary measure of job satisfaction (“I am satisfied with my job”). Responses are provided on a four-point Likert scale (1 = Dissatisfied, 2 = Somewhat dissatisfied, 3 = Somewhat satisfied, and 4 = Satisfied). Participants were dichotomized into “dissatisfied” (those who answered 1 or 2) and “satisfied” (those who answered 3 or 4) groups. Long-term sickness absence Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first. Information on dates of application for invalidity benefits with medical certification for long-term sickness absence lasting 1 month or more was obtained from the personnel records of the surveyed company. In the surveyed company, it was mandatory for employees to submit medical certification from his/her attending physician to the human resource department when applying for invalidity benefits. Furthermore, the personnel records included information on resignation/retirement date. Based on this information, those who resigned/retired from the surveyed company during the follow-up period were treated as censored cases. The follow-up began on the date of response to the BJSQ and ended at the start date of long-term sickness absence (ie, the date of application for invalidity benefits), the resignation/retirement date, or July 31st, 2016, whichever came first. Psychosocial work environment For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively. 
Psychosocial work environment: For psychosocial work environment, we examined quantitative job overload, job control, and workplace social support, based on the JD-C or DCS model.27,28 These were measured using the BJSQ introduced above. The BJSQ includes three-item quantitative job overload, job control, supervisor support, and coworker support scales. The answers are provided on a four-point Likert scale (1 = Not at all, 2 = Somewhat, 3 = Moderately so, and 4 = Very much so for quantitative job overload and job control; 1 = Not at all, 2 = Somewhat, 3 = Very much, and 4 = Extremely for supervisor support and coworker support), with the scores of each scale ranging from 3–12. For workplace social support, total scores for supervisor support and coworker support were calculated (score range: 6–24). In this sample, the Cronbach’s alpha coefficients were 0.78, 0.70, and 0.88 for quantitative job overload, job control, and workplace social support, respectively.

Covariates: Covariates included demographic and occupational characteristics, all of which were obtained from the personnel records of the surveyed company. Demographic characteristics included age and gender. Age was used as a continuous variable. Occupational characteristics included length of service, job type, and employment position. Length of service was used as a continuous variable. Job type was classified into four groups: sales, claims service, administrative, and others. Employment position was classified into five groups: manager, staff, senior employee, temporary employee, and others.

Statistical analysis: We first conducted a descriptive analysis using Student’s t test or Fisher’s exact test to compare the demographic and occupational characteristics and the scale scores between the satisfied and dissatisfied groups. Afterwards, the cumulative hazard of long-term sickness absence was plotted as Kaplan-Meier curves and the log-rank test was conducted to compare the hazard functions between the satisfied and dissatisfied groups. Finally, using the satisfied group as a reference, Cox’s proportional hazard regression analysis was conducted to estimate the hazard ratio (HR) and its 95% confidence interval (CI) of the incidence of long-term sickness absence during the follow-up period in the dissatisfied group. In the series of analyses, we first adjusted for demographic characteristics (ie, age and gender) (model 1). Subsequently, we incrementally adjusted for occupational characteristics (ie, length of service, job type, and employment position) (model 2) and psychosocial work environment (ie, quantitative job overload, job control, and workplace social support) (model 3). For model 3, overcontrol bias due to common method variance might occur since the present study measured job dissatisfaction and psychosocial work environment simultaneously with the same self-administered questionnaire (ie, the BJSQ). Therefore, to test the presence of overcontrol bias due to common method variance, Harman’s single-factor test40 was conducted by entering items for job dissatisfaction, quantitative job overload, job control, and workplace social support (ie, a total of 13 items) into the unrotated principal component analysis.
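As a rough, self-contained illustration of the analyses just described (not the authors' code), the same steps could be sketched with the open-source lifelines and scikit-learn packages; the data frame below is synthetic, its column names are hypothetical, and categorical covariates are assumed to be already dummy-coded.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 1000  # synthetic stand-in for the employee-level data
df = pd.DataFrame({
    "dissatisfied": rng.integers(0, 2, n),
    "followup_days": rng.integers(3, 374, n),
    "event": (rng.random(n) < 0.05).astype(int),   # long-term absence indicator
    "age": rng.normal(43, 11, n), "gender": rng.integers(0, 2, n),
    "length_of_service": rng.normal(15, 9, n),
    "job_type": rng.integers(0, 2, n), "position": rng.integers(0, 2, n),
    "overload": rng.integers(3, 13, n), "control": rng.integers(3, 13, n),
    "support": rng.integers(6, 25, n),
})

# Kaplan-Meier curves and log-rank test: satisfied vs. dissatisfied groups.
sat, dis = df[df["dissatisfied"] == 0], df[df["dissatisfied"] == 1]
km = KaplanMeierFitter()
km.fit(sat["followup_days"], sat["event"], label="satisfied").plot_cumulative_density()
km.fit(dis["followup_days"], dis["event"], label="dissatisfied").plot_cumulative_density()
print("log-rank p =", logrank_test(sat["followup_days"], dis["followup_days"],
                                   sat["event"], dis["event"]).p_value)

# Cox proportional hazards models with incremental adjustment (models 1-3).
models = {
    1: ["age", "gender"],
    2: ["age", "gender", "length_of_service", "job_type", "position"],
    3: ["age", "gender", "length_of_service", "job_type", "position",
        "overload", "control", "support"],
}
for m, covars in models.items():
    cph = CoxPHFitter()
    cph.fit(df[["followup_days", "event", "dissatisfied"] + covars],
            duration_col="followup_days", event_col="event")
    print(f"model {m}: HR =", round(cph.hazard_ratios_["dissatisfied"], 2))

# Harman's single-factor test: unrotated PCA over the 13 pooled, standardized items
# (placeholder item scores here; eigenvalue > 1 mirrors the Kaiser criterion).
items = rng.integers(1, 5, size=(n, 13)).astype(float)
z = (items - items.mean(axis=0)) / items.std(axis=0)
pca = PCA().fit(z)
print("components with eigenvalue > 1:", int((pca.explained_variance_ > 1).sum()),
      "| first component explains", f"{pca.explained_variance_ratio_[0]:.1%}")
```

Common method variance would be flagged if a single component dominated the pooled items (conventionally, explaining a majority of the variance).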
Furthermore, as sub-analyses, the log-rank test and the Cox’s proportional hazard regression analysis were conducted by gender, because men and women are exposed to different work environments in Japan. The level of significance was 0.05 (two-tailed). The statistical analyses were conducted using IBM® SPSS® Statistics Version 23.0 for Windows (IBM Corp., Armonk, NY, USA).

RESULTS: Table 1 shows the detailed characteristics of the participants in the satisfied and dissatisfied groups. Compared to the satisfied group, the dissatisfied group was significantly younger; had greater proportions of women, claims-service workers, staff, and temporary employees; and perceived significantly higher levels of quantitative job overload and lower levels of job control and workplace social support. (Table 1 notes: SD, standard deviation; Student’s t test and Fisher’s exact test were used for the continuous and categorical variables, respectively.) Figure 2 shows the Kaplan-Meier curves for the cumulative hazard of long-term sickness absence among the dissatisfied group compared to the satisfied group. The log-rank test showed that the dissatisfied group had a significantly higher incidence rate of long-term sickness absence compared to the satisfied group (P < 0.001). Table 2 shows the results of the Cox’s proportional hazard regression analysis. During 5,258,910 person-days (mean: 358 days, range: 3–373 days), 62 employees (32 men and 30 women) took long-term sickness absence (mental disorders: 51 cases, musculoskeletal disorders: 6 cases, cerebrovascular disease: 3 cases, and cardiovascular disease: 2 cases). After adjusting for demographic and occupational characteristics (models 1 and 2), the dissatisfied group had a significantly higher HR of long-term sickness absence than the satisfied group (HR 3.00; 95% CI, 1.80–5.00 and HR 2.91; 95% CI, 1.74–4.87 for models 1 and 2, respectively). However, after additionally adjusting for psychosocial work environment (model 3), this association was weakened and no longer significant (HR 1.55; 95% CI, 0.86–2.80). (Table 2 notes: adjusted for age and gender in model 1; additionally adjusted for length of service, job type, and employment position in model 2; additionally adjusted for quantitative job overload, job control, and workplace social support in model 3.) For the Harman’s single-factor test, three factors with eigenvalues greater than 1.0 were extracted, and the first (largest) factor did not account for a majority of the variance (32.7%), indicating that overcontrol bias due to common method variance was not of great concern. In the gender-stratified analysis, a similar tendency to the main analysis was observed in both genders, although statistical significance was marginal for the log-rank test (P = 0.063) and for models 1 and 2 of the Cox’s proportional hazard regression analysis among women (Table 2).

DISCUSSION: The present study demonstrated that, after adjusting for demographic and occupational characteristics, those who perceived job dissatisfaction had a significantly higher risk of long-term sickness absence during the 1-year follow-up period than those who perceived job satisfaction. After additionally adjusting for psychosocial work environment based on the JD-C or DCS model, the risk was no longer significant. Job dissatisfaction was significantly associated with a higher risk of long-term sickness absence after adjusting for demographic and occupational characteristics.
This finding is consistent with previous prospective studies in European countries (ie, Norway and the Netherlands) that have reported a significant association of job dissatisfaction with long-term sickness absence in the crude model,22 as well as after adjusting for demographic and occupational characteristics (eg, age, gender, education, and affiliation).16,21 By using personnel records to measure long-term sickness absence and conducting a survival analysis, the present study extended this evidence beyond European countries. After additionally adjusting for psychosocial work environment based on the JD-C or DCS model, the association of job dissatisfaction with long-term sickness absence was weakened and no longer significant. This is consistent with previous studies, in which no significant association of job dissatisfaction with sickness absence (both short-term and long-term) was observed once psychosocial work environment was included in the model.16,17,20 Our findings suggest that the association of job dissatisfaction with long-term sickness absence is explained mainly by psychosocial work environment and that improving psychosocial work environment may be effective for the prevention of long-term sickness absence. However, although not statistically significant, the fully adjusted HR of job dissatisfaction was still approximately 1.5; therefore, there may be a unique effect of job dissatisfaction on long-term sickness absence independent of psychosocial work environment. Future research should examine more precisely the association between psychosocial work environment, job dissatisfaction, and sickness absence. Possible limitations of the present study should be considered. First, our sample was recruited from one financial service company in Japan; therefore, our findings should be interpreted with caution in light of limited generalizability. Second, job dissatisfaction was measured using a single-item question, which may limit its measurement validity; however, some researchers have argued that single-item questions are preferable for measuring overall job dissatisfaction because differences in individual scores are lost in the total mean scores of multi-item questions.41,42 Third, some employees may have transferred to another department in the surveyed company, which may have influenced job dissatisfaction and masked the true association; nevertheless, the frequency of transfer was unlikely to have been high over the 1-year follow-up. Finally, although some previous studies focused on workplace-level (in addition to individual-level) job dissatisfaction to examine its association with sickness absence,19 the present study could not take workplace-level job dissatisfaction into account due to a lack of information on the departments to which the individual participants belonged. In conclusion, the present study provided evidence that the association of job dissatisfaction with long-term sickness absence lasting 1 month or more is spurious and explained mainly via adverse psychosocial work environment. More detailed underlying mechanisms in the association between psychosocial work environment, job dissatisfaction, and sickness absence can be explored using mediation analysis.
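One rough way to read the residual HR of approximately 1.5, as a back-of-the-envelope check rather than the formal mediation analysis the authors call for, is the share of the excess hazard accounted for by psychosocial work environment, computed from the reported point estimates:

```python
# (HR_model2 - HR_model3) / (HR_model2 - 1): crude share of excess hazard explained.
hr_model2, hr_model3 = 2.91, 1.55
attenuation = (hr_model2 - hr_model3) / (hr_model2 - 1.0)
print(f"~{attenuation:.0%} of the excess hazard explained")  # ~71%
```

By this crude calculation, roughly seventy percent of the excess hazard is attributable to psychosocial work environment, leaving the residual that the authors note as a possible unique effect of job dissatisfaction.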
Background: Using a 1-year prospective design, we examined the association of job dissatisfaction with long-term sickness absence lasting 1 month or more, before and after adjusting for psychosocial work environment (ie, quantitative job overload, job control, and workplace social support) in Japanese employees. Methods: We surveyed 14,687 employees (7,343 men and 7,344 women) aged 20-66 years, who had not taken long-term sickness absence in the past 3 years, from a financial service company in Japan. The Brief Job Stress Questionnaire, including scales on job satisfaction and psychosocial work environment, was administered, and information on demographic and occupational characteristics (ie, age, gender, length of service, job type, and employment position) was obtained from the personnel records of the surveyed company at baseline (July-August 2015). Subsequently, information on the start dates of long-term sickness absences was obtained during the follow-up period (until July 2016) from the personnel records. Cox's proportional hazard regression analysis was conducted. Results: After adjusting for demographic and occupational characteristics, those who perceived job dissatisfaction had a significantly higher hazard ratio of long-term sickness absence than those who perceived job satisfaction (hazard ratio 2.91; 95% confidence interval, 1.74-4.87). After additionally adjusting for psychosocial work environment, this association was weakened and no longer significant (hazard ratio 1.55; 95% confidence interval, 0.86-2.80). Conclusions: Our findings suggest that the association of job dissatisfaction with long-term sickness absence is spurious and explained mainly via psychosocial work environment.
null
null
7,574
314
[ 195, 1131, 110, 156, 189, 99, 372 ]
11
[ "job", "support", "sickness", "absence", "sickness absence", "term", "term sickness", "long term", "long", "term sickness absence" ]
[ "sickness absence ie", "dissatisfaction sickness absence15", "sickness absence compared", "job dissatisfaction sickness", "sickness absence benefits" ]
null
null
null
[CONTENT] absenteeism | job satisfaction | longitudinal studies | psychosocial job characteristics | survival analysis [SUMMARY]
null
[CONTENT] absenteeism | job satisfaction | longitudinal studies | psychosocial job characteristics | survival analysis [SUMMARY]
null
[CONTENT] absenteeism | job satisfaction | longitudinal studies | psychosocial job characteristics | survival analysis [SUMMARY]
null
[CONTENT] Absenteeism | Adult | Aged | Employment | Female | Humans | Japan | Job Satisfaction | Longitudinal Studies | Male | Middle Aged | Occupational Health | Prospective Studies | Risk Factors | Sick Leave | Social Environment | Social Support | Stress, Psychological | Surveys and Questionnaires | Workload | Workplace | Young Adult [SUMMARY]
null
[CONTENT] Absenteeism | Adult | Aged | Employment | Female | Humans | Japan | Job Satisfaction | Longitudinal Studies | Male | Middle Aged | Occupational Health | Prospective Studies | Risk Factors | Sick Leave | Social Environment | Social Support | Stress, Psychological | Surveys and Questionnaires | Workload | Workplace | Young Adult [SUMMARY]
null
[CONTENT] Absenteeism | Adult | Aged | Employment | Female | Humans | Japan | Job Satisfaction | Longitudinal Studies | Male | Middle Aged | Occupational Health | Prospective Studies | Risk Factors | Sick Leave | Social Environment | Social Support | Stress, Psychological | Surveys and Questionnaires | Workload | Workplace | Young Adult [SUMMARY]
null
[CONTENT] sickness absence ie | dissatisfaction sickness absence15 | sickness absence compared | job dissatisfaction sickness | sickness absence benefits [SUMMARY]
null
[CONTENT] sickness absence ie | dissatisfaction sickness absence15 | sickness absence compared | job dissatisfaction sickness | sickness absence benefits [SUMMARY]
null
[CONTENT] sickness absence ie | dissatisfaction sickness absence15 | sickness absence compared | job dissatisfaction sickness | sickness absence benefits [SUMMARY]
null
[CONTENT] job | support | sickness | absence | sickness absence | term | term sickness | long term | long | term sickness absence [SUMMARY]
null
[CONTENT] job | support | sickness | absence | sickness absence | term | term sickness | long term | long | term sickness absence [SUMMARY]
null
[CONTENT] job | support | sickness | absence | sickness absence | term | term sickness | long term | long | term sickness absence [SUMMARY]
null
[CONTENT] sickness | absence | sickness absence | association | work | japanese | job | job dissatisfaction | dissatisfaction | japanese employees [SUMMARY]
null
[CONTENT] group | test | significantly | satisfied | dissatisfied | satisfied group | dissatisfied group | table | 95 ci | models [SUMMARY]
null
[CONTENT] job | support | sickness | absence | sickness absence | satisfied | term | dissatisfaction | job dissatisfaction | long [SUMMARY]
null
[CONTENT] 1-year | 1 month | Japanese [SUMMARY]
null
[CONTENT] 2.91 | 95% | 1.74 ||| 1.55 | 95% | 0.86 [SUMMARY]
null
[CONTENT] 1-year | 1 month | Japanese ||| 14,687 | 7,343 | 7,344 | 20-66 years | the past 3 years | Japan ||| The Brief Job Stress Questionnaire | July-August 2015 ||| July 2016 ||| Cox ||| 2.91 | 95% | 1.74 ||| 1.55 | 95% | 0.86 ||| [SUMMARY]
null
Response to treatment of myasthenia gravis according to clinical subtype.
27855632
We have previously reported using two-step cluster analysis to classify myasthenia gravis (MG) patients into the following five subtypes: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; anti-acetylcholine receptor antibody (AChR-Ab)-negative MG; and AChR-Ab-positive MG without thymic abnormalities. The objectives of the present study were to examine the reproducibility of this five-subtype classification using a new data set of MG patients and to identify additional characteristics of these subtypes, particularly in regard to response to treatment.
BACKGROUND
A total of 923 consecutive MG patients underwent two-step cluster analysis for the classification of subtypes. The variables used for classification were sex, age of onset, disease duration, presence of thymoma or thymic hyperplasia, positivity for AChR-Ab or anti-muscle-specific tyrosine kinase antibody, positivity for other concurrent autoantibodies, and disease condition at worst and current. The period from the start of treatment until the achievement of minimal manifestation status (early-stage response) was determined and then compared between subtypes using Kaplan-Meier analysis and the log-rank test. In addition, between subtypes, the rate of the number of patients who maintained minimal manifestations during the study period/that of patients who only achieved the status once (stability of improved status) was compared.
METHODS
As a result of two-step cluster analysis, 923 MG patients were classified into five subtypes as follows: ocular MG (AChR-Ab-positivity, 77%; histogram of onset age, skewed to older age); thymoma-associated MG (100%; normal distribution); MG with thymic hyperplasia (89%; skewed to younger age); AChR-Ab-negative MG (0%; normal distribution); and AChR-Ab-positive MG without thymic abnormalities (100%, skewed to older age). Furthermore, patients classified as ocular MG showed the best early-stage response to treatment and stability of improved status, followed by those classified as thymoma-associated MG and AChR-Ab-positive MG without thymic abnormalities; by contrast, those classified as AChR-Ab-negative MG showed the worst early-stage response to treatment and stability of improved status.
RESULTS
Differences were seen between the five subtypes in demographic characteristics, clinical severity, and therapeutic response. Our five-subtype classification approach would be beneficial not only to elucidate disease subtypes, but also to plan treatment strategies for individual MG patients.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Child", "Cluster Analysis", "Female", "Humans", "Japan", "Kaplan-Meier Estimate", "Male", "Middle Aged", "Myasthenia Gravis", "Reproducibility of Results", "Severity of Illness Index", "Thymoma", "Thymus Neoplasms", "Young Adult" ]
5114805
Background
Myasthenia gravis (MG) is a neurological disorder that manifests as fatigable and fluctuating weakness of voluntary muscles; this weakness is mediated by autoantibodies against neuromuscular junction proteins in skeletal muscle that impair neuromuscular transmission [1]. MG typically involves the ocular, bulbar, and extremity muscles, and, in severe cases, respiratory muscles. The clinical course and outcome in MG are affected by several different autoantibodies, thymic abnormalities, onset age, and disease severity, as well as response to treatment [2–4]. MG is distinguished according to the production of pathogenic autoantibodies such as anti-acetylcholine receptor antibody (AChR-Ab) and anti-muscle-specific tyrosine kinase antibody (MuSK-Ab) [1, 5, 6]. Clinically, MG is often classified into the following three subtypes based on thymic abnormalities and onset age: thymoma-associated MG; early-onset MG (onset age <50 years); and late-onset MG (onset age ≥50 years) [7]. Furthermore, a distinction is made in the clinical setting, for example between ocular and generalized MG, based on the distribution of symptoms. Previously, we reported classifying MG into the following five subtypes using two-step cluster analysis of a detailed cross-sectional data set of 640 consecutive patients (Japan MG Registry Study 2012): ocular MG; generalized thymoma-associated MG; generalized MG with thymic hyperplasia; generalized AChR-Ab-negative MG; and generalized AChR-Ab-positive MG without thymic abnormalities [8]. However, this five-subtype classification approach requires further confirmation, and its clinical relevance remains to be established. Therefore, in 2015, we conducted a larger cross-sectional survey to obtain clinical parameters from 1,088 consecutive MG patients. In the present study, using this new data set, we attempted to confirm the reproducibility of our five-subtype classification approach and to specify additional characteristics of these five subtypes, with a particular focus on response to treatment in the clinical setting.
null
null
Results
Two-step cluster analysis: Based on the results of two-step cluster analyses, all 923 MG patients could be classified into the same five subtypes described elsewhere [8]: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and other (in order of predicted importance). Among these five subtypes, the residual group "other" was the largest and could be defined as generalized AChR-Ab-positive MG without thymic abnormalities. These results were demonstrated repeatedly with several sets of variables, as shown in Table 2, which confirmed the high reliability and reproducibility of the classification system. Although the order among thymoma-associated MG, MG with thymic hyperplasia, and AChR-Ab-negative MG was unstable depending on the variable sets used, the differences in predicted importance were not large. These results were almost identical to those reported elsewhere [8], with only minor discrepancies in the order of selection priority (the order in the previous study was: ocular MG; MG with thymic hyperplasia; AChR-Ab-negative MG; thymoma-associated MG; and AChR-Ab-positive MG without thymic abnormalities). In the present study, the quality of clusterization under each set of variables, estimated using a previously reported interpretation model [14], was rated "fair" to "good" for all clusters, suggesting that the results were reasonable.

A total of 111 patients (10.2%) fit two of the five subtypes (Table 3). These patients were allocated to sole subtypes according to the separation priority in the two-step cluster analysis; for example, an ocular MG patient with thymoma was allocated to ocular MG. Under this criterion, the percentage of patients assigned to the five subtypes was as follows: ocular MG, 23.0%; thymoma-associated MG, 21.5%; MG with thymic hyperplasia, 12.9%; AChR-Ab-negative MG, 12.1%; and AChR-Ab-positive MG without thymic abnormalities, 30.5% (Table 4).

Table 3. Number of patients fitting two categories
Overlapping categories | Number of patients | Final assignment
Ocular MG and thymoma-associated MG | 32 (2.9%) | Ocular MG
Ocular MG and AChR-Ab-negative MG | 56 (5.1%) | Ocular MG
Ocular MG and MG with thymic hyperplasia | 8 (0.7%) | Ocular MG
MG with thymic hyperplasia and AChR-Ab-negative MG | 14 (1.3%) | THMG
Thymoma-associated MG and AChR-Ab-negative MG | 1 (0.09%) | TAMG
Values within the parentheses show the percentages of the total of 1,088 patients. AChR-Ab, anti-acetylcholine receptor antibody; MG, myasthenia gravis; TAMG, thymoma-associated MG; THMG, MG with thymic hyperplasia.

Table 4. Characteristics and severity for each of the five MG subtypes. Column order: ocular MG | thymoma-associated MG | MG with thymic hyperplasia | AChR-Ab-negative MG | (of which MuSK-Ab-positive) | AChR-Ab-positive MG without thymic abnormalities | total.
Patients (n): 250 | 234 | 140 | 132 | (22) | 332 | 1,088
Female, %: 52.0 | 67.5 | 81.4* | 81.1* | (81.8) | 61.1 | 65.4
Onset age, y: 51.0 ± 20.0, 53.0† | 51.0 ± 12.8, 51.0 | 33.3 ± 13.9, 31.0† | 39.9 ± 16.2, 41.5† | (38.6 ± 15.3, 42.0) | 50.7 ± 20.7, 55.0† | 47.3 ± 18.8, 48.1
Duration of disease, y: 10.4 ± 11.4, 6.4 | 9.5 ± 7.7, 8.0 | 17.4 ± 11.8, 14.5† | 10.8 ± 9.3, 8.0 | (11.0 ± 8.1, 9.8) | 11.6 ± 10.5, 8.2 | 11.6 ± 10.6, 8.2
AChR-Ab-positivity, %: 77.2 | 99.6 | 89.4 | 0.0 | (0.0) | 100.0 | 81.3
MuSK-Ab-positivity, %: 0.0 | 0.0 | 0.0 | 20.6 | (100.0) | 0.0 | 2.1
Thymectomy, %: 23.6* | 97.4* | 100.0* | 12.1* | (9.1) | 35.8* | 51.7
Worst condition, MGFA class I, %: 100 | 0.0 | 0.0 | 0.0 | (0.0) | 0.0 | 23.0
Worst condition, MGFA class II, %: 0.0 | 44.0 | 52.9 | 59.1 | (45.5) | 64.8 | 43.2
Worst condition, MGFA class III, %: 0.0 | 27.8 | 30.7 | 28.0 | (13.6) | 20.5 | 19.6
Worst condition, MGFA class IV, %: 0.0 | 7.7 | 7.9 | 4.5 | (9.1) | 4.2 | 4.5
Worst condition, MGFA class V, %: 0.0 | 20.5 | 8.6 | 8.3 | (31.8) | 10.5 | 9.7
Rate of MGFA class ≥ III, %: 0.0* | 56.0* | 47.1 | 40.9 | (54.5) | 35.2 | 33.8
Worst QMG score (n = 922): 6.6 ± 2.6, 6.0† (n = 225) | 17.1 ± 8.0, 15.5† (n = 194) | 15.8 ± 5.8, 15.0† (n = 107) | 14.7 ± 7.2, 13.0 (n = 114) | (18.1 ± 9.7, 15.5; n = 20) | 14.7 ± 7.0, 13.0 (n = 282) | 13.4 ± 7.5, 12.0
Current QMG score (n = 923): 4.2 ± 2.8, 4.0† (n = 208) | 6.8 ± 4.8, 6.0 (n = 198) | 7.8 ± 5.5, 7.0 (n = 125) | 8.4 ± 5.4, 8.0 (n = 106) | (8.3 ± 6.4, 7.0; n = 20) | 7.2 ± 4.8, 6.0 (n = 286) | 6.6 ± 4.8, 6.0
Current MGC score (n = 923): 1.9 ± 2.5, 1.0† (n = 208) | 4.5 ± 5.4, 3.0 (n = 198) | 5.4 ± 5.7, 3.0 (n = 125) | 6.5 ± 6.2, 5.0† (n = 106) | (6.2 ± 7.0, 4.5; n = 20) | 4.2 ± 4.6, 3.0 (n = 286) | 4.1 ± 5.0, 3.0
Current MG-QOL-15 (n = 923): 8.1 ± 9.0, 5.0† (n = 208) | 14.7 ± 13.6, 11.0 (n = 198) | 16.2 ± 13.7, 13.5† (n = 125) | 14.6 ± 12.6, 12.0 (n = 106) | (11.6 ± 8.8, 11.0; n = 20) | 14.1 ± 13.9, 9.0 (n = 286) | 13.2 ± 13.0, 9.0
All continuous data are expressed as the mean ± standard deviation (SD) and the median. AChR-Ab, anti-acetylcholine receptor antibody; MG, myasthenia gravis; MGC, MG composite scale; MGFA, MG Foundation of America; MG-QOL-15, 15-item MG-specific quality of life scale; MuSK-Ab-positive, MG patients with serum anti-muscle-specific kinase (MuSK) autoantibody among the AChR-Ab-negative patients; QMG, quantitative MG score. *p < 0.0001, chi-square test (compared to the others); †p < 0.0001, Mann–Whitney U test. Note that MG with thymic hyperplasia is only diagnosed in thymectomized patients; therefore, some non-thymectomized patients with thymic hyperplasia may be assigned to other subtypes, particularly AChR-Ab-positive MG without thymic abnormalities.
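SPSS's two-step clustering algorithm, used for the classification above, has no direct equivalent in common Python libraries; the following is only a rough analogue (one-hot encoding of categorical variables plus k-means with k = 5) on synthetic, hypothetical patient data, to illustrate the kind of mixed-variable clustering involved, not the study's actual procedure.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 200  # synthetic stand-in for the patient-level registry data
df = pd.DataFrame({
    "sex":                rng.choice(["F", "M"], n),
    "thymoma":            rng.integers(0, 2, n),
    "thymic_hyperplasia": rng.integers(0, 2, n),
    "achr_ab":            rng.integers(0, 2, n),
    "musk_ab":            rng.integers(0, 2, n),
    "onset_age":          rng.normal(47, 19, n),
    "duration_years":     rng.exponential(10, n),
})

categorical = ["sex", "thymoma", "thymic_hyperplasia", "achr_ab", "musk_ab"]
continuous = ["onset_age", "duration_years"]
prep = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),  # dummy-code categoricals
    ("num", StandardScaler(), continuous),                         # scale continuous variables
])
model = make_pipeline(prep, KMeans(n_clusters=5, n_init=10, random_state=0))
labels = model.fit_predict(df)
print(pd.Series(labels).value_counts())  # cluster sizes
```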
Clinical characteristics of each subtype: The clinical characteristics, including current and worst severity, for each of the five subtypes are shown in Table 4. Patients with MuSK-Ab were not separated out by the two-step cluster analysis because their number was not large enough for statistical evaluation. However, MG patients with MuSK-Ab show a distinct clinical manifestation and therapy responsiveness, reflecting a unique pathological mechanism [15]; therefore, details of the MG patients with MuSK-Ab (n = 22) are described individually next to the SNMG patients in Table 4. The percentage of females was significantly higher among MG with thymic hyperplasia and AChR-Ab-negative MG patients compared with the other three subtypes (p < 0.0001, chi-square test). Onset age was significantly younger in MG with thymic hyperplasia and AChR-Ab-negative MG patients (p < 0.0001, Mann–Whitney U test) and older in ocular MG and AChR-Ab-positive MG patients without thymic abnormalities (p < 0.0001, Mann–Whitney U test). Severity at the worst condition (MGFA classification and QMG) was significantly higher in thymoma-associated MG patients (p < 0.0001, Mann–Whitney U test). Patients with MuSK-Ab also showed worst-condition severity at the same level as thymoma-associated MG patients, although this result was not statistically significant because of the small number of patients. The severity scale (QMG and MGC) and QOL scale (MG-QOL-15) scores in the present survey were generally worse, although not significantly so, in MG with thymic hyperplasia and AChR-Ab-negative MG patients, both of which primarily comprise females with younger onset ages. On the other hand, as expected, ocular MG patients showed much lower clinical severity on all batteries at both the current and worst conditions (p < 0.0001, Mann–Whitney U test).
Onset age histograms of the five subtypes: Histograms of the onset age for each of the five subtypes are shown in Fig. 1a. These histograms were converted into approximate curves (sixth-order polynomial approximations) and superimposed in Fig. 1b. The peak onset ages were 60–64 years in ocular MG, 25–29 years in MG with thymic hyperplasia, 35–39 years in AChR-Ab-negative MG, 50–54 years in thymoma-associated MG, and 65–69 years in AChR-Ab-positive MG without thymic abnormalities. The histogram was skewed toward younger onset age in MG with thymic hyperplasia and toward older onset age in ocular MG and AChR-Ab-positive MG without thymic abnormalities. Regarding patients with MuSK-Ab (n = 22), the mean ± SD onset age was 38.6 ± 15.3 years (median, 42.0 years) and the ratio of females was 81.8%; however, neither of these findings was significantly different from the other AChR-Ab-negative MG patients.

Fig. 1. Histograms and approximate curves for onset age in the five MG subtypes. (a) Histograms for ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG), and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG). (b) Superimposed approximate curves for the five subtypes regarding the distribution of onset age. The vertical broken line indicates the cutoff onset age of 50 years between early- and late-onset MG. MG, myasthenia gravis.
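The curve-fitting step described for Fig. 1 can be sketched as follows (synthetic onset ages standing in for one subtype, not the registry data): bin the ages into 5-year intervals and fit a sixth-order polynomial to the bin counts.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
onset_ages = np.clip(rng.normal(50, 18, 300), 5, 90)  # placeholder for one subtype

counts, edges = np.histogram(onset_ages, bins=np.arange(0, 95, 5))  # 5-year bins
centers = (edges[:-1] + edges[1:]) / 2
coeffs = np.polyfit(centers, counts, deg=6)   # sixth-order polynomial approximation
smooth = np.polyval(coeffs, centers)

plt.bar(centers, counts, width=4, alpha=0.5)
plt.plot(centers, smooth)
plt.axvline(50, linestyle="--")               # early- vs late-onset cutoff (50 years)
plt.xlabel("Onset age (years)"); plt.ylabel("Number of patients")
plt.show()
```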
Early-stage response to treatment and stability of improved status among the five subtypes: Details of past immunotherapy for each of the five subtypes are shown at the top of Table 5.

Table 5. Details of past treatment and response to treatment for each of the five subtypes. Column order: ocular MG | thymoma-associated MG | MG with thymic hyperplasia | AChR-Ab-negative MG | (of which MuSK-Ab-positive) | AChR-Ab-positive MG without thymic abnormalities | total.
Past immunotherapy (n = 923)
Peak dose of oral PSL, mg/day: 9.2 ± 12.2, 5.0† | 28.5 ± 18.8, 30.0† | 29.7 ± 19.4, 30.0† | 18.8 ± 17.2, 15.0 | (32.6 ± 20.6, 30.0) | 23.7 ± 20.2, 20.0 | 21.5 ± 19.3, 15.0
Duration of PSL ≥20 mg/day, months: 0.0 ± 0.0, 0.0† | 12.0 ± 25.2, 5.0† | 13.0 ± 27.3, 6.0† | 3.8 ± 7.0, 0.0 | (7.2 ± 9.5, 4.0) | 8.2 ± 17.0, 2.0 | 7.9 ± 19.3, 1.0
CNIs, %: 24.0* | 68.2* | 54.0 | 67.4 | (72.7) | 58.1 | 52.9
PP, %: 2.0* | 48.1* | 22.1 | 46.0* | (54.5) | 37.2 | 27.3
IVIG, %: 6.1* | 36.1 | 29.9 | 42.5* | (27.3) | 24.7 | 15.0
Initial response to treatment (n = 923, see Fig. 2a)
Achievement of MM-or-better once, %: 79.8* | 73.5 | 66.1 | 56.2* | (75.0) | 67.8 | 70.2
Months to achieve MM-or-better in 50% of patients: 4.0‡ | 8.0 | 18.0‡ | 31.0‡ | (7.0) | 6.0 | 8.0
Stability of improved status (n = 923)
MM-or-better at present, %: 74.0* | 58.1 | 49.6 | 39.6* | (55.0) | 55.4 | 57.6
Maintaining rate of MM-or-better, %: 92.7* | 79.0 | 75.0 | 70.5 | (73.3) | 81.7 | 82.1
All continuous data are expressed as the mean ± standard deviation (SD) and the median. CNIs, calcineurin inhibitors; IVIG, intravenous immunoglobulin; MG, myasthenia gravis; MM-or-better ≥1 M, minimal manifestation or better status lasting more than one month; MuSK-Ab-positive, MG patients with serum anti-muscle-specific kinase (MuSK) autoantibody among the AChR-Ab-negative patients; PP, plasmapheresis; PSL, prednisolone. *p < 0.0001, chi-square test; †p < 0.0001, Mann–Whitney U test; ‡p < 0.0001, log-rank test.

Early-stage response to treatment (first achievement of MM-or-better ≥1 M): As shown in the middle of Table 5, the rate of patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test). Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes up to 10 years from initiating immunotherapy are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M was significantly different among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except MG with thymic hyperplasia versus AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment compared with the others (p < 0.0001, log-rank test; p < 0.001, chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of the patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG compared with ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).

Fig. 2. Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and in the three subtypes of early-onset, late-onset, and thymoma-associated MG. (a) Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG), and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. (b) Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis.
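The time-to-event comparison above could be sketched with the open-source lifelines package as follows; the frame below is synthetic and hypothetical (one row per patient: months from starting immunotherapy to first MM-or-better ≥1 M or last follow-up, an event indicator, and the assigned subtype), not the registry data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "subtype":     rng.choice(["OMG", "TAMG", "THMG", "SNMG", "SPMG"], n),
    "months":      rng.exponential(12, n).round(1),
    "achieved_mm": rng.integers(0, 2, n),  # 1 = MM-or-better >=1 M achieved
})

km = KaplanMeierFitter()
for subtype, grp in df.groupby("subtype"):
    km.fit(grp["months"], grp["achieved_mm"], label=subtype)
    km.plot_cumulative_density()  # share having achieved MM-or-better over time

res = multivariate_logrank_test(df["months"], df["subtype"], df["achieved_mm"])
print("log-rank p =", res.p_value)
```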
For comparison, Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in early-onset, late-onset, and thymoma-associated MG (three-type classification) are shown in Fig. 2b. Significant differences were observed between early- and late-onset MG (p < 0.01) and between early-onset and thymoma-associated MG (p < 0.01); however, no significant differences were found between late-onset and thymoma-associated MG (p ≥ 0.10).

Stability of improved status: The rates of patients with MM-or-better status during the survey and the stability of improved status are shown at the bottom of Table 5. Stability of improved status was significantly better in ocular MG compared with the other subtypes (p < 0.0001, chi-square test); however, no significant differences were observed among the subtypes other than ocular MG (p ≥ 0.10 for all pairs excluding ocular MG, chi-square test).
Conclusion
The results of the present study suggest that MG patients can be classified into the following five subtypes, in order of priority: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and AChR-Ab-positive MG without thymic abnormalities. All MG patients can be allocated to one of these subtypes based on the results of routine examinations. The five subtypes were shown to have distinct demographic features, clinical severity, and therapeutic responses. Therefore, our five-subtype classification is expected to be useful not only for elucidating disease subtypes but also for planning appropriate treatment for individual patients.
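To make the allocation rule concrete: patients fitting two categories were assigned by separation priority (ocular MG first, then thymoma-associated MG, MG with thymic hyperplasia, AChR-Ab-negative MG, and finally the residual group). Below is a minimal sketch of that rule, assuming boolean flags derived from routine examinations; the function and flag names are hypothetical, not part of the study.

```python
# Minimal sketch of the separation-priority allocation described in the
# Results; the boolean inputs are hypothetical stand-ins for routine
# examination findings (symptom distribution, thymic pathology, serology).
def allocate_subtype(ocular_only: bool, thymoma: bool,
                     thymic_hyperplasia: bool, achr_ab_positive: bool) -> str:
    if ocular_only:
        return "ocular MG"
    if thymoma:
        return "thymoma-associated MG"
    if thymic_hyperplasia:
        return "MG with thymic hyperplasia"
    if not achr_ab_positive:
        return "AChR-Ab-negative MG"
    return "AChR-Ab-positive MG without thymic abnormalities"

# e.g., an ocular MG patient with thymoma is allocated to ocular MG,
# matching the example given in the Results
assert allocate_subtype(True, True, False, True) == "ocular MG"
```

In the registry, 111 patients (10.2%) fit two categories and were resolved to a single subtype in exactly this priority order.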
[ "Patients and clinical factors", "Two-step cluster analysis", "Early-stage response to treatment and stability of improved status in each of the five subtypes", "Early-stage response to treatment", "Stability of improved status of MM-or-better ≥1 M", "Statistical analysis", "Two-step cluster analysis", "Clinical characteristics of each subtype", "Onset age histograms of the five subtypes", "Early-stage response to treatment and stability of improved status among the five subtypes", "Early-stage response to treatment (first achievement of MM-or-better ≥1 M)", "Stability of improved status" ]
[ "This survey was conducted by the Japan MG Registry Study Group, which comprises 13 neurological centers (Table 1). We evaluated patients with established MG between April and July 2015. To avoid potential bias, we enrolled consecutive patients over a short duration (4 months). All 1088 of these MG patients visited our hospitals, provided written informed consent, and underwent analysis. Among these 1088 patients, 331 (30.4%) were included in our previous survey in 2012 [8].Table 1Institutions participating in the Japan MG Registry Study 2015Department of Neurology, Sapporo Medical University Hospital, SapporoDepartment of Neurology, Hokkaido Medical Center, SapporoDepartment of Neurology, Hanamaki General Hospital, HanamakiDepartment of Neurology, Sendai Medical Center, SendaiDepartment of Neurology, Tohoku University Graduate School of Medicine, SendaiChiba Neurology Clinic, ChibaDepartment of Neurology, Chiba University School of Medicine, ChibaDepartment of Neurology, Tokyo Medical University, TokyoDepartment of Neurology, Toho University Medical Center Oh-hashi Hospital, TokyoDepartment of Neurology, Tokyo Women's Medical University, TokyoDepartment of Neurology, Kinki University School of Medicine, OsakaDepartment of Neurology, Graduate School of Medical Sciences, Kyushu University, FukuokaDepartment of Neurology and Strokology, Nagasaki University Hospital, Nagasaki\nAbbreviation: MG myasthenia gravis\n\nInstitutions participating in the Japan MG Registry Study 2015\n\nAbbreviation: MG myasthenia gravis\nThe following clinical parameters were obtained for all patients: sex; age; age at disease onset; duration of disease; duration of immunotherapy; history of bulbar symptoms; presence of thymoma or thymic hyperplasia in thymectomized patients; presence of serum AChR-Ab or MuSK-Ab; and presence of other non-MG-specific autoantibodies, such as anti-nuclear antibody, SS-A/SS-B antibody, TSH-receptor antibody, anti-thyroglobulin/thyroperoxidase antibody, and rheumatoid factor. In addition, the current and past disease status and details of treatment were surveyed for all patients. Clinical severity at the worst condition was determined according to the classification of the MG Foundation of America (MGFA) [9], and, in some patients, the MGFA quantitative MG score (QMG) [9, 10] from medical records. Clinical severity at the current condition was determined according to QMG and MG Composite (MGC) scores [11]. Furthermore, all patients completed the Japanese version of the 15-item Myasthenia Gravis Quality-of-Life Scale (MG-QOL-15), [12, 13] upon study entry.\nPrednisone and prednisolone are the global standard oral corticosteroids used to treat MG, and prednisolone is generally used in Japan. Therefore, the current use, peak dose [mg/day], and duration of prednisolone ≥20 mg/day were recorded for all patients, as was the use of calcineurin inhibitors, azathioprine, plasmapheresis, and intravenous immunoglobulin.\nFinally, the courses of current and past MGFA post-intervention statuses, particularly the time required to achieve first minimal manifestations (MM) or better status lasting more than one month (MM-or-better ≥1 M) [9], were determined as benchmarks for evaluating response to treatment in each patient. 
These clinical data were fully collected from 923 (84.8%) of the 1088 patients.", "To examine the reproducibility of the five-subtype classification in the same manner as reported elsewhere [8], we conducted two-step cluster analysis of the 923 patients using SPSS Statistics Base 22 software (IBM, Armonk, NY, USA). To avoid bias beset by the problem of multicollinearity, current or worst disease status was handled as a single variable (Table 2). The other variables evaluated were: sex; age of onset; disease duration; presence of thymoma; presence of thymic hyperplasia in thymectomized cases; positivity for AChR-Ab or MuSK-Ab; and positivity for other concurrent autoantibodies (Table 2).Table 2Set of variables used in the cluster analysesPatients’ backgroundsAutoantibody statusDisease status during the worst conditionCurrent disease statusSexAge of onsetDisease durationPresence of thymomaPresence of thymic hyperplasiaAChR-AbMuSK-AbNon-MG-specific antibodies\nOne of the following:\nThe worst MGFA classificationThe worst QMG\nOne of the following:\nCurrent PISCurrent QMGMG-QOL-15 score\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status\n\nSet of variables used in the cluster analyses\n\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status", " Early-stage response to treatment The time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\nThe time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\n Stability of improved status of MM-or-better ≥1 M As an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.\nAs an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.", "The time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. 
The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.", "As an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.", "All statistical analyses were performed using SPSS Statistics Base 22 software (IBM) and MATLAB R2015a (MathWorks, Natick, MA, USA). All continuous data are expressed as the mean ± standard deviation (SD) and the median.", "Based on the results of two-step cluster analyses, all 923 MG patients could be classified into the same five subtypes described elsewhere [8]: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and other (in order of predicted importance). Among these five subtypes, the residual patients group “other” was the largest, and could be defined as generalized AChR-Ab-positive MG without thymic abnormalities. These results were demonstrated repeatedly with several sets of variables, as shown in Table 2, which confirmed the high reliability and reproducibility of the classification system. Although the order among thymoma-associated MG, MG with thymic hyperplasia, and AChR-Ab-negative MG was unstable depending on the variable sets used, the differences in terms of predicted importance were not large. These results were almost identical to those reported elsewhere [8], with only minor discrepancies in regard to the order of selection priority (the order in the previous study was as follows: ocular MG; MG with thymic hyperplasia; AChR-Ab-negative MG; thymoma-associated MG; and AChR-Ab-positive MG without thymic abnormalities). In the present study, the quality of clusterization under each set of variables, which was estimated using a previously reported interpretation model [14], was indicated as “fair” to “good” for all clusters, suggesting that the results were reasonable.\nA total of 111 patients (10.2%) fit two of the five subtypes (Table 3). These patients were allocated to sole subtypes according to the separation priority in the two-step cluster analysis. For example, an ocular MG patient with thymoma was allocated into ocular MG. 
Under this criterion, the percentage of patients assigned to the five subtypes was as follows: ocular MG, 23.0%; thymoma-associated MG, 21.5%; MG with thymic hyperplasia, 12.9%; AChR-Ab-negative MG, 12.1%; and AChR-Ab-positive MG without thymic abnormalities, 30.5% (Table 4).Table 3Number of patients fitting two categoriesNumber of patientsFinal assignmentOcular MG and thymoma-associated MG32 (2.9%)Ocular MGOcular MG and AChR-Ab-negative MG56 (5.1%)Ocular MGOcular MG and MG with thymic hyperplasia8 (0.7%)Ocular MGMG with thymic hyperplasia and AChR-Ab-negative MG14 (1.3%)THMGThymoma-associated MG and AChR-Ab-negative MG1 (0.09%)TAMGValues within the parentheses show the percentages of the total of 1,088 patients\nAChR-Ab anti-acetylcholine receptor antibody, MG myasthenia gravis\nTable 4Characteristics and severity for each of the five MG subtypesOcular MGThymoma-associated MGMG with thymic hyperplasiaAChR-Ab-negative MG(MuSK-Ab-positive)AChR-Ab-positive MG without thymic abnormalitiesTotalPatients (n)250234140132(22)3321088Female,%52.067.581.4*81.1*(81.8)61.165.4Onset age, y51.0 ± 20.0, 53.0a\n51.0 ± 12.8, 51.033.3 ± 13.9, 31.0a\n39.9 ± 16.2, 41.5a\n(38.6 ± 15.3, 42.0)50.7 ± 20.7, 55.0a\n47.3 ± 18.8, 48.1Duration of disease, y10.4 ± 11.4, 6.49.5 ± 7.7, 8.017.4 ± 11.8, 14.5a\n10.8 ± 9.3, 8.0(11.0 ± 8.1, 9.8)11.6 ± 10.5, 8.211.6 ± 10.6, 8.2AChR-Ab-positivity,%77.299.689.40.0(0.0)100.081.3MuSK-Ab-positivity,%0.0%0.0%0.0%20.6%(100.0%)0.0%2.1%Thymectomy,%23.6%*97.4%*100.0%*12.1%*(9.1%)35.8%*51.7%The worst condition of the disease MGFA classification (n = 1088) I,%100%0.0%0.0%0.0%(0.0%)0.0%23.0% II,%0.0%44.0%52.9%59.1%(45.5%)64.8%43.2% III,%0.0%27.8%30.7%28.0%(13.6%)20.5%19.6% IV,%0.0%7.7%7.9%4.5%(9.1%)4.2%4.5% V,%0.0%20.5%8.6%8.3%(31.8%)10.5%9.7% Rate of MGFA > III,%0.0%*56.0%*47.1%40.9%(54.5%)35.2%33.8% QMG score (n = 922)6.6 ± 2.6, 6.0a\n(n = 225)17.1 ± 8.0,15.5a\n(n = 194)15.8 ± 5.8, 15.0a\n(n = 107)14.7 ± 7.2, 13.0(n = 114)18.1 ± 9.7, 15.5(n = 20)14.7 ± 7.0, 13.0(n = 282)13.4 ± 7.5, 12.0Current disease condition (mean ± SD, median) QMG score (n = 923)4.2 ± 2.8, 4.0a\n(n = 208)6.8 ± 4.8, 6.0(n = 198)7.8 ± 5.5, 7.0(n = 125)8.4 ± 5.4, 8.0(n = 106)(8.3 ± 6.4, 7.0)(n = 20)7.2 ± 4.8, 6.0(n = 286)6.6 ± 4.8, 6.0 MGC score (n = 923)1.9 ± 2.5, 1.0a\n(n = 208)4.5 ± 5.4, 3.0(n = 198)5.4 ± 5.7, 3.0(n = 125)6.5 ± 6.2, 5.0a\n(n = 106)(6.2 ± 7.0, 4.5)(n = 20)4.2 ± 4.6, 3.0(n = 286)4.1 ± 5.0, 3.0 MG-QOL-15 (n = 923)8.1 ± 9.0, 5.0a\n(n = 208)14.7 ± 13.6, 11.0(n = 198)16.2 ± 13.7, 13.5a\n(n = 125)14.6 ± 12.6, 12.0(n = 106)(11.6 ± 8.8, 11.0)(n = 20)14.1 ± 13.9, 9.0(n = 286)13.2 ± 13.0, 9.0All continuous data are expressed as the mean ± standard deviation (SD) and the median\nAChR-Ab anti-acetylcholine receptor antibody, MG myasthenia gravis, MGC MG composite scale, MGFA MG Foundation of America, MG-QOL-15 15-item MG-specific quality of life scale, MuSK-Ab-positive MG patients with serum anti-muscle specific kinase (MuSK) autoantibody in AChR-Ab-negative patients, QMG quantitative MG score, SD standard deviation*p < 0.0001, chi-square test (compared to the others), †\np < 0.0001, Mann–Whitney U test\n\nNumber of patients fitting two categories\nValues within the parentheses show the percentages of the total of 1,088 patients\n\nAChR-Ab anti-acetylcholine receptor antibody, MG myasthenia gravis\nCharacteristics and severity for each of the five MG subtypes\nAll continuous data are expressed as the mean ± standard deviation (SD) and the median\n\nAChR-Ab anti-acetylcholine receptor antibody, MG myasthenia 
gravis, MGC MG composite scale, MGFA MG Foundation of America, MG-QOL-15 15-item MG-specific quality of life scale, MuSK-Ab-positive MG patients with serum anti-muscle specific kinase (MuSK) autoantibody in AChR-Ab-negative patients, QMG quantitative MG score, SD standard deviation\n*p < 0.0001, chi-square test (compared to the others), †\np < 0.0001, Mann–Whitney U test\nMG with thymic hyperplasia is only diagnosed for thymectomized patients; therefore, some non-thymectomized patients with thymic hyperplasia may be assigned as other subtypes, particularly AChR-Ab-positive MG patients without thymic abnormalities.", "The clinical characteristics, including current and worst severity, for each of the five subtypes are shown in Table 4. The patients with MuSK-Ab were not separated by two-step cluster analysis because the number is not great enough for statistical evaluation. However, MG patients with MuSK-Ab showed a distinct clinical manifestation and therapy responsiveness reflecting the unique pathological mechanism [15]. Therefore, details of MG patients with MuSK-Ab (n = 22) are individually described next to SNMG patients in Table 4. The percentage of females was significantly higher among MG with thymic hyperplasia and AChR-Ab-negative MG patients compared with the other three subtypes (p < 0.0001, chi-square test). Onset age was significantly younger in MG with thymic hyperplasia and AChR-Ab-negative MG patients (p < 0.0001, Mann–Whitney U test) and older in ocular MG and AChR-Ab-positive MG patients without thymic abnormalities (p < 0.0001, Mann–Whitney U test).\nSeverity at the worst condition (MGFA classification and QMG) was significantly higher in thymoma-associated MG patients (p < 0.0001, Mann–Whitney U test). Patients with MuSK-Ab also showed worst severity at the same level as thymoma-associated MG patients, although this result was not statistically significant because of the small number of patients. The severity scales (QMG and MGC), and a QOL scale (MG-QOL-15) scores in the present survey were generally worse, although not statistically significant, in MG with thymic hyperplasia and AChR-Ab-negative MG patients, both of which primarily comprise females with younger onset ages. On the other hand, as a matter of course, ocular MG patients showed much lower clinical severity in all batteries at both current and worst conditions (p < 0.0001, Mann–Whitney U test).", "Histograms of the onset age for each of the five subtypes are shown in Fig. 1a. These histograms were converted into approximate curves (sixth-order polynomial approximations) and superimposed in Fig. 1b. The peak ages of the histogram were 60–64 years in ocular MG, 25–29 years in MG with thymic hyperplasia, 35–39 years in AChR-Ab-negative MG, 50–54 years in thymoma-associated MG, and 65–69 years in AChR-Ab-positive MG without thymic abnormalities. The histogram was skewed toward younger onset age in MG with thymic hyperplasia and toward older onset age in ocular MG and AChR-Ab-positive MG without thymic abnormalities. Regarding patients with MuSK-Ab (n = 22), the mean ± SD onset age was 38.6 ± 15.3 (median, 42.0 years), and the ratio of females was 81.8%; however, neither of these findings was significantly different from other AChR-Ab-negative MG patients.Fig. 1Histograms and approximate curves for onset age in the five MG subtypes. 
a Histograms for ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG). b Superimposed approximate curves for the five subtypes regarding the distribution of onset age. The vertical broken line indicates the cutoff onset age of 50 years between early- and late-onset MG. MG, myasthenia gravis\n\nHistograms and approximate curves for onset age in the five MG subtypes. a Histograms for ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG). b Superimposed approximate curves for the five subtypes regarding the distribution of onset age. The vertical broken line indicates the cutoff onset age of 50 years between early- and late-onset MG. MG, myasthenia gravis", "Details of past immunotherapy for each of the five subtypes are shown at the top of Table 5.Table 5Details of past treatment and response to treatment for each of the five subtypesOcular MGThymoma-associated MGMG with thymic hyperplasiaAChR-Ab-negative MG(MuSK-Ab-positive)AChR-Ab-positive MG without thymic abnormalitiesTotalPast immunotherapy (n = 923) Peak dose of oral PSL, mg/day9.2 ± 12.2, 5.0†\n28.5 ± 18.8, 30.0†\n29.7 ± 19.4, 30.0†\n18.8 ± 17.2, 15.032.6 ± 20.6, 30.023.7 ± 20.2, 20.021.5 ± 19.3, 15.0 Duration of PSL ≥20 mg/day, M0.0 ± 0.0, 0.0†\n12.0 ± 25.2, 5.0†\n13.0 ± 27.3, 6.0†\n3.8 ± 7.0, 0.07.2 ± 9.5, 4.08.2 ± 17.0, 2.07.9 ± 19.3, 1.0 CNIs,%24.0%*68.2%*54.0%67.4%(72.7%)58.1%52.9% PP,%2.0%*48.1%*22.1%46.0%*(54.5%)37.2%27.3% IVIG,%6.1%*36.1%29.9%42.5%*(27.3%)24.7%15.0%Initial response to treatment (n = 923, see Fig. 
2a) Achievement of MM-or-better once,%79.8%*73.5%66.1%56.2%*(75.0%)67.8%70.2% Months to achieve MM-or-better in 50% of patients4.0‡\n8.018.0‡\n31.0‡\n(7.0)6.08.0Stability of improved status (n = 923) MM-or-better at present,%74.0%*58.1%49.6%39.6%*(55.0%)55.4%57.6% Maintaining rate of MM-or-better, %92.7%*79.0%75.0%70.5%(73.3%)81.7%82.1%All continuous data are expressed as the mean ± standard deviation (SD) and the median\nCNIs calcineurin inhibitors, EAT early aggressive therapy, IVIG intravenous immunoglobulin, M months, MG myasthenia gravis, MM-or-better ≥1 M minimal manifestation or better status lasting more than one month, MuSK-Ab-positive MG patients with serum anti-muscle specific kinase (MuSK) autoantibody in AChR-Ab-negative patients, PP plasmapheresis, PSL prednisolone, SD standard deviation*p < 0.0001, chi-square test,†\np < 0.0001, Mann–Whitney U test,‡\np < 0.0001, log-rank test\n\nDetails of past treatment and response to treatment for each of the five subtypes\nAll continuous data are expressed as the mean ± standard deviation (SD) and the median\n\nCNIs calcineurin inhibitors, EAT early aggressive therapy, IVIG intravenous immunoglobulin, M months, MG myasthenia gravis, MM-or-better ≥1 M minimal manifestation or better status lasting more than one month, MuSK-Ab-positive MG patients with serum anti-muscle specific kinase (MuSK) autoantibody in AChR-Ab-negative patients, PP plasmapheresis, PSL prednisolone, SD standard deviation\n*p < 0.0001, chi-square test,†\np < 0.0001, Mann–Whitney U test,‡\np < 0.0001, log-rank test\n Early-stage response to treatment (first achievement of MM-or-better ≥1 M) As shown in the middle of Table 5, the rate of the patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test).\nKaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes up to 10 years from initiating immunotherapy are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M was significantly different among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of two subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except MG with thymic hyperplasia and AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment compared with others (p < 0.0001; log-rank test, p < 0.001; chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of the patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG compared with ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).Fig. 2Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. 
MM, minimal manifestations; MG, myasthenia gravis\n\nKaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis\nFor comparison, Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in early-onset, late-onset, and thymoma-associated MG (three-type classification) are shown in Fig. 2b. Significant differences were observed between early- and late-onset MG (p < 0.01) and between early-onset and thymoma-associated MG (p < 0.01); however, no significant differences were found between late-onset and thymoma-associated MG (p ≥ 0.10).\nAs shown in the middle of Table 5, the rate of the patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test).\nKaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes up to 10 years from initiating immunotherapy are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M was significantly different among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of two subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except MG with thymic hyperplasia and AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment compared with others (p < 0.0001; log-rank test, p < 0.001; chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of the patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG compared with ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).Fig. 2Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis\n\nKaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. 
MM, minimal manifestations; MG, myasthenia gravis\nFor comparison, Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in early-onset, late-onset, and thymoma-associated MG (three-type classification) are shown in Fig. 2b. Significant differences were observed between early- and late-onset MG (p < 0.01) and between early-onset and thymoma-associated MG (p < 0.01); however, no significant differences were found between late-onset and thymoma-associated MG (p ≥ 0.10).\n Stability of improved status The rates of patients with MM-or-better status during the survey and stability of improved status are shown in the bottom of Table 5. Stability of improved status was significantly better in ocular MG compared with other subtypes (p < 0.0001; chi-square test); however, no significant differences were observed among subtypes other than ocular MG (p ≥ 0.10 for all pairs, excluding ocular MG; chi-square test).\nThe rates of patients with MM-or-better status during the survey and stability of improved status are shown in the bottom of Table 5. Stability of improved status was significantly better in ocular MG compared with other subtypes (p < 0.0001; chi-square test); however, no significant differences were observed among subtypes other than ocular MG (p ≥ 0.10 for all pairs, excluding ocular MG; chi-square test).", "As shown in the middle of Table 5, the rate of the patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test).\nKaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes up to 10 years from initiating immunotherapy are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M was significantly different among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of two subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except MG with thymic hyperplasia and AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment compared with others (p < 0.0001; log-rank test, p < 0.001; chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of the patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG compared with ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).Fig. 2Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis\n\nKaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. 
a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis\nFor comparison, Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in early-onset, late-onset, and thymoma-associated MG (three-type classification) are shown in Fig. 2b. Significant differences were observed between early- and late-onset MG (p < 0.01) and between early-onset and thymoma-associated MG (p < 0.01); however, no significant differences were found between late-onset and thymoma-associated MG (p ≥ 0.10).", "The rates of patients with MM-or-better status during the survey and stability of improved status are shown in the bottom of Table 5. Stability of improved status was significantly better in ocular MG compared with other subtypes (p < 0.0001; chi-square test); however, no significant differences were observed among subtypes other than ocular MG (p ≥ 0.10 for all pairs, excluding ocular MG; chi-square test)." ]
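The categorical comparisons in the sections above (for example, the proportion of patients ever achieving MM-or-better ≥1 M in one subtype versus all the others) are chi-square tests on 2×2 contingency tables. Below is a minimal sketch with SciPy; the counts are illustrative placeholders, not the study's data.

```python
# Minimal sketch of a one-subtype-vs-rest chi-square comparison; the
# counts below are illustrative placeholders, not figures from the study.
from scipy.stats import chi2_contingency

#            achieved  not achieved
table = [[199,  51],   # hypothetical: one subtype
         [449, 224]]   # hypothetical: all other subtypes pooled
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}, dof = {dof}")
```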
[ null, null, null, null, null, null, null, null, null, null, null, null ]
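The stability indicator defined in the methods (the number of patients maintaining MM-or-better at the 2015 survey divided by the number who had ever achieved it) is a simple conditional proportion per subtype. A minimal sketch with pandas on toy data follows; all column names and values are hypothetical.

```python
# Minimal sketch of the "maintaining rate of MM-or-better" per subtype;
# the toy rows below are hypothetical, not registry data.
import pandas as pd

df = pd.DataFrame({
    "subtype": ["ocular MG", "ocular MG", "SPMG", "SPMG"],
    "achieved_mm_once": [True, True, True, False],
    "mm_at_survey": [True, False, True, False],
})

# Restrict to patients who achieved MM-or-better at least once, then
# take the share still at MM-or-better at the survey within each subtype.
achievers = df[df["achieved_mm_once"]]
maintaining_rate = achievers.groupby("subtype")["mm_at_survey"].mean()
print(maintaining_rate)  # ocular MG 0.5, SPMG 1.0 on this toy data
```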
[ "Background", "Methods", "Patients and clinical factors", "Two-step cluster analysis", "Early-stage response to treatment and stability of improved status in each of the five subtypes", "Early-stage response to treatment", "Stability of improved status of MM-or-better ≥1 M", "Statistical analysis", "Results", "Two-step cluster analysis", "Clinical characteristics of each subtype", "Onset age histograms of the five subtypes", "Early-stage response to treatment and stability of improved status among the five subtypes", "Early-stage response to treatment (first achievement of MM-or-better ≥1 M)", "Stability of improved status", "Discussion", "Conclusion" ]
[ "Myasthenia gravis (MG) is a neurological disorder that manifests as fatigable and fluctuating weakness of voluntary muscles, which are mediated by autoantibodies against neuromuscular junction proteins in skeletal muscle that impair neuromuscular transmission [1]. MG typically involves the ocular, bulbar, and extremity muscles, and, in severe cases, respiratory muscles. The clinical course and outcome in MG are affected by several different autoantibodies, thymic abnormalities, onset age and disease severity, as well as response to treatment [2–4]. MG is distinguished according to the production of pathogenic autoantibodies such as anti-acetylcholine receptor antibody (AChR-Ab) and anti–muscle-specific tyrosine kinase antibody (MuSK-Ab) [1, 5, 6]. Clinically, MG is often classified into the following three subtypes based on thymic abnormalities and onset age: thymoma-associated MG; early-onset MG (onset age <50 years); and late-onset MG (onset age ≥50 years) [7]. Furthermore, discrimination is observed in the clinical setting—for example, between ocular and generalized MG—based on the distribution of symptoms.\nPreviously, we reported classifying MG into the following five subtypes using two-step cluster analysis of a detailed cross-sectional data set of 640 consecutive patients (Japan MG Registry Study 2012): ocular MG; generalized thymoma-associated MG; generalized MG with thymic hyperplasia; generalized AChR-Ab-negative MG; and generalized AChR-Ab-positive MG without thymic abnormalities [8]. However, this five-subtype classification approach requires further confirmation, and its clinical relevance remains to be established.\nTherefore, in 2015, we conducted a larger cross-sectional survey to obtain clinical parameters from 1,088 consecutive MG patients. In the present study, using this new data set, we attempted to confirm the reproducibility of our five-subtype classification approach and to specify additional characteristics of these five subtypes with a particular focus on response to treatment in the clinical setting.", " Patients and clinical factors This survey was conducted by the Japan MG Registry Study Group, which comprises 13 neurological centers (Table 1). We evaluated patients with established MG between April and July 2015. To avoid potential bias, we enrolled consecutive patients over a short duration (4 months). All 1088 of these MG patients visited our hospitals, provided written informed consent, and underwent analysis. 
Among these 1088 patients, 331 (30.4%) were included in our previous survey in 2012 [8].Table 1Institutions participating in the Japan MG Registry Study 2015Department of Neurology, Sapporo Medical University Hospital, SapporoDepartment of Neurology, Hokkaido Medical Center, SapporoDepartment of Neurology, Hanamaki General Hospital, HanamakiDepartment of Neurology, Sendai Medical Center, SendaiDepartment of Neurology, Tohoku University Graduate School of Medicine, SendaiChiba Neurology Clinic, ChibaDepartment of Neurology, Chiba University School of Medicine, ChibaDepartment of Neurology, Tokyo Medical University, TokyoDepartment of Neurology, Toho University Medical Center Oh-hashi Hospital, TokyoDepartment of Neurology, Tokyo Women's Medical University, TokyoDepartment of Neurology, Kinki University School of Medicine, OsakaDepartment of Neurology, Graduate School of Medical Sciences, Kyushu University, FukuokaDepartment of Neurology and Strokology, Nagasaki University Hospital, Nagasaki\nAbbreviation: MG myasthenia gravis\n\nInstitutions participating in the Japan MG Registry Study 2015\n\nAbbreviation: MG myasthenia gravis\nThe following clinical parameters were obtained for all patients: sex; age; age at disease onset; duration of disease; duration of immunotherapy; history of bulbar symptoms; presence of thymoma or thymic hyperplasia in thymectomized patients; presence of serum AChR-Ab or MuSK-Ab; and presence of other non-MG-specific autoantibodies, such as anti-nuclear antibody, SS-A/SS-B antibody, TSH-receptor antibody, anti-thyroglobulin/thyroperoxidase antibody, and rheumatoid factor. In addition, the current and past disease status and details of treatment were surveyed for all patients. Clinical severity at the worst condition was determined according to the classification of the MG Foundation of America (MGFA) [9], and, in some patients, the MGFA quantitative MG score (QMG) [9, 10] from medical records. Clinical severity at the current condition was determined according to QMG and MG Composite (MGC) scores [11]. Furthermore, all patients completed the Japanese version of the 15-item Myasthenia Gravis Quality-of-Life Scale (MG-QOL-15), [12, 13] upon study entry.\nPrednisone and prednisolone are the global standard oral corticosteroids used to treat MG, and prednisolone is generally used in Japan. Therefore, the current use, peak dose [mg/day], and duration of prednisolone ≥20 mg/day were recorded for all patients, as was the use of calcineurin inhibitors, azathioprine, plasmapheresis, and intravenous immunoglobulin.\nFinally, the courses of current and past MGFA post-intervention statuses, particularly the time required to achieve first minimal manifestations (MM) or better status lasting more than one month (MM-or-better ≥1 M) [9], were determined as benchmarks for evaluating response to treatment in each patient. These clinical data were fully collected from 923 (84.8%) of the 1088 patients.\nThis survey was conducted by the Japan MG Registry Study Group, which comprises 13 neurological centers (Table 1). We evaluated patients with established MG between April and July 2015. To avoid potential bias, we enrolled consecutive patients over a short duration (4 months). All 1088 of these MG patients visited our hospitals, provided written informed consent, and underwent analysis. 
Among these 1088 patients, 331 (30.4%) were included in our previous survey in 2012 [8].Table 1Institutions participating in the Japan MG Registry Study 2015Department of Neurology, Sapporo Medical University Hospital, SapporoDepartment of Neurology, Hokkaido Medical Center, SapporoDepartment of Neurology, Hanamaki General Hospital, HanamakiDepartment of Neurology, Sendai Medical Center, SendaiDepartment of Neurology, Tohoku University Graduate School of Medicine, SendaiChiba Neurology Clinic, ChibaDepartment of Neurology, Chiba University School of Medicine, ChibaDepartment of Neurology, Tokyo Medical University, TokyoDepartment of Neurology, Toho University Medical Center Oh-hashi Hospital, TokyoDepartment of Neurology, Tokyo Women's Medical University, TokyoDepartment of Neurology, Kinki University School of Medicine, OsakaDepartment of Neurology, Graduate School of Medical Sciences, Kyushu University, FukuokaDepartment of Neurology and Strokology, Nagasaki University Hospital, Nagasaki\nAbbreviation: MG myasthenia gravis\n\nInstitutions participating in the Japan MG Registry Study 2015\n\nAbbreviation: MG myasthenia gravis\nThe following clinical parameters were obtained for all patients: sex; age; age at disease onset; duration of disease; duration of immunotherapy; history of bulbar symptoms; presence of thymoma or thymic hyperplasia in thymectomized patients; presence of serum AChR-Ab or MuSK-Ab; and presence of other non-MG-specific autoantibodies, such as anti-nuclear antibody, SS-A/SS-B antibody, TSH-receptor antibody, anti-thyroglobulin/thyroperoxidase antibody, and rheumatoid factor. In addition, the current and past disease status and details of treatment were surveyed for all patients. Clinical severity at the worst condition was determined according to the classification of the MG Foundation of America (MGFA) [9], and, in some patients, the MGFA quantitative MG score (QMG) [9, 10] from medical records. Clinical severity at the current condition was determined according to QMG and MG Composite (MGC) scores [11]. Furthermore, all patients completed the Japanese version of the 15-item Myasthenia Gravis Quality-of-Life Scale (MG-QOL-15), [12, 13] upon study entry.\nPrednisone and prednisolone are the global standard oral corticosteroids used to treat MG, and prednisolone is generally used in Japan. Therefore, the current use, peak dose [mg/day], and duration of prednisolone ≥20 mg/day were recorded for all patients, as was the use of calcineurin inhibitors, azathioprine, plasmapheresis, and intravenous immunoglobulin.\nFinally, the courses of current and past MGFA post-intervention statuses, particularly the time required to achieve first minimal manifestations (MM) or better status lasting more than one month (MM-or-better ≥1 M) [9], were determined as benchmarks for evaluating response to treatment in each patient. These clinical data were fully collected from 923 (84.8%) of the 1088 patients.\n Two-step cluster analysis To examine the reproducibility of the five-subtype classification in the same manner as reported elsewhere [8], we conducted two-step cluster analysis of the 923 patients using SPSS Statistics Base 22 software (IBM, Armonk, NY, USA). To avoid bias beset by the problem of multicollinearity, current or worst disease status was handled as a single variable (Table 2). 
The other variables evaluated were: sex; age of onset; disease duration; presence of thymoma; presence of thymic hyperplasia in thymectomized cases; positivity for AChR-Ab or MuSK-Ab; and positivity for other concurrent autoantibodies (Table 2).Table 2Set of variables used in the cluster analysesPatients’ backgroundsAutoantibody statusDisease status during the worst conditionCurrent disease statusSexAge of onsetDisease durationPresence of thymomaPresence of thymic hyperplasiaAChR-AbMuSK-AbNon-MG-specific antibodies\nOne of the following:\nThe worst MGFA classificationThe worst QMG\nOne of the following:\nCurrent PISCurrent QMGMG-QOL-15 score\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status\n\nSet of variables used in the cluster analyses\n\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status\nTo examine the reproducibility of the five-subtype classification in the same manner as reported elsewhere [8], we conducted two-step cluster analysis of the 923 patients using SPSS Statistics Base 22 software (IBM, Armonk, NY, USA). To avoid bias beset by the problem of multicollinearity, current or worst disease status was handled as a single variable (Table 2). The other variables evaluated were: sex; age of onset; disease duration; presence of thymoma; presence of thymic hyperplasia in thymectomized cases; positivity for AChR-Ab or MuSK-Ab; and positivity for other concurrent autoantibodies (Table 2).Table 2Set of variables used in the cluster analysesPatients’ backgroundsAutoantibody statusDisease status during the worst conditionCurrent disease statusSexAge of onsetDisease durationPresence of thymomaPresence of thymic hyperplasiaAChR-AbMuSK-AbNon-MG-specific antibodies\nOne of the following:\nThe worst MGFA classificationThe worst QMG\nOne of the following:\nCurrent PISCurrent QMGMG-QOL-15 score\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status\n\nSet of variables used in the cluster analyses\n\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status\n Early-stage response to treatment and stability of improved status in each of the five subtypes Early-stage response to treatment The time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\nThe time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. 
The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\n Stability of improved status of MM-or-better ≥1 M As an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.\nAs an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.\n Early-stage response to treatment The time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\nThe time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\n Stability of improved status of MM-or-better ≥1 M As an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.\nAs an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.\n Statistical analysis All statistical analyses were performed using SPSS Statistics Base 22 software (IBM) and MATLAB R2015a (MathWorks, Natick, MA, USA). All continuous data are expressed as the mean ± standard deviation (SD) and the median.\nAll statistical analyses were performed using SPSS Statistics Base 22 software (IBM) and MATLAB R2015a (MathWorks, Natick, MA, USA). All continuous data are expressed as the mean ± standard deviation (SD) and the median.", "This survey was conducted by the Japan MG Registry Study Group, which comprises 13 neurological centers (Table 1). We evaluated patients with established MG between April and July 2015. To avoid potential bias, we enrolled consecutive patients over a short duration (4 months). All 1088 of these MG patients visited our hospitals, provided written informed consent, and underwent analysis. 
Among these 1088 patients, 331 (30.4%) were included in our previous survey in 2012 [8].Table 1Institutions participating in the Japan MG Registry Study 2015Department of Neurology, Sapporo Medical University Hospital, SapporoDepartment of Neurology, Hokkaido Medical Center, SapporoDepartment of Neurology, Hanamaki General Hospital, HanamakiDepartment of Neurology, Sendai Medical Center, SendaiDepartment of Neurology, Tohoku University Graduate School of Medicine, SendaiChiba Neurology Clinic, ChibaDepartment of Neurology, Chiba University School of Medicine, ChibaDepartment of Neurology, Tokyo Medical University, TokyoDepartment of Neurology, Toho University Medical Center Oh-hashi Hospital, TokyoDepartment of Neurology, Tokyo Women's Medical University, TokyoDepartment of Neurology, Kinki University School of Medicine, OsakaDepartment of Neurology, Graduate School of Medical Sciences, Kyushu University, FukuokaDepartment of Neurology and Strokology, Nagasaki University Hospital, Nagasaki\nAbbreviation: MG myasthenia gravis\n\nInstitutions participating in the Japan MG Registry Study 2015\n\nAbbreviation: MG myasthenia gravis\nThe following clinical parameters were obtained for all patients: sex; age; age at disease onset; duration of disease; duration of immunotherapy; history of bulbar symptoms; presence of thymoma or thymic hyperplasia in thymectomized patients; presence of serum AChR-Ab or MuSK-Ab; and presence of other non-MG-specific autoantibodies, such as anti-nuclear antibody, SS-A/SS-B antibody, TSH-receptor antibody, anti-thyroglobulin/thyroperoxidase antibody, and rheumatoid factor. In addition, the current and past disease status and details of treatment were surveyed for all patients. Clinical severity at the worst condition was determined according to the classification of the MG Foundation of America (MGFA) [9], and, in some patients, the MGFA quantitative MG score (QMG) [9, 10] from medical records. Clinical severity at the current condition was determined according to QMG and MG Composite (MGC) scores [11]. Furthermore, all patients completed the Japanese version of the 15-item Myasthenia Gravis Quality-of-Life Scale (MG-QOL-15), [12, 13] upon study entry.\nPrednisone and prednisolone are the global standard oral corticosteroids used to treat MG, and prednisolone is generally used in Japan. Therefore, the current use, peak dose [mg/day], and duration of prednisolone ≥20 mg/day were recorded for all patients, as was the use of calcineurin inhibitors, azathioprine, plasmapheresis, and intravenous immunoglobulin.\nFinally, the courses of current and past MGFA post-intervention statuses, particularly the time required to achieve first minimal manifestations (MM) or better status lasting more than one month (MM-or-better ≥1 M) [9], were determined as benchmarks for evaluating response to treatment in each patient. These clinical data were fully collected from 923 (84.8%) of the 1088 patients.", "To examine the reproducibility of the five-subtype classification in the same manner as reported elsewhere [8], we conducted two-step cluster analysis of the 923 patients using SPSS Statistics Base 22 software (IBM, Armonk, NY, USA). To avoid bias beset by the problem of multicollinearity, current or worst disease status was handled as a single variable (Table 2). 
The other variables evaluated were: sex; age of onset; disease duration; presence of thymoma; presence of thymic hyperplasia in thymectomized cases; positivity for AChR-Ab or MuSK-Ab; and positivity for other concurrent autoantibodies (Table 2).Table 2Set of variables used in the cluster analysesPatients’ backgroundsAutoantibody statusDisease status during the worst conditionCurrent disease statusSexAge of onsetDisease durationPresence of thymomaPresence of thymic hyperplasiaAChR-AbMuSK-AbNon-MG-specific antibodies\nOne of the following:\nThe worst MGFA classificationThe worst QMG\nOne of the following:\nCurrent PISCurrent QMGMG-QOL-15 score\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status\n\nSet of variables used in the cluster analyses\n\nAChR-Ab anti-acetylcholine receptor antibody, MuSK-Ab anti-muscle specific kinase antibody, MG myasthenia gravis, MGFA MG Foundation of America, QMG quantitative MG, MG-QOL-15 15-item MG-specific quality of life scale, PIS post-intervention status", " Early-stage response to treatment The time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\nThe time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.\n Stability of improved status of MM-or-better ≥1 M As an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.\nAs an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.", "The time (months) from the start of the immunotherapy until achieving first MM-or-better ≥1 M was determined from medical records and compared between the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.", "As an indicator of stability of improved status, the rate of the number of patients who maintained minimal manifestations in the 2015 survey/that of patients who achieved the status at least once was calculated and compared among the five subtypes.", "All statistical analyses were performed using SPSS Statistics Base 22 software (IBM) and MATLAB R2015a (MathWorks, Natick, MA, USA). 
All continuous data are expressed as the mean ± standard deviation (SD) and the median.", " Two-step cluster analysis Based on the results of two-step cluster analyses, all 923 MG patients could be classified into the same five subtypes described elsewhere [8]: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and other (in order of predicted importance). Among these five subtypes, the residual patients group “other” was the largest, and could be defined as generalized AChR-Ab-positive MG without thymic abnormalities. These results were demonstrated repeatedly with several sets of variables, as shown in Table 2, which confirmed the high reliability and reproducibility of the classification system. Although the order among thymoma-associated MG, MG with thymic hyperplasia, and AChR-Ab-negative MG was unstable depending on the variable sets used, the differences in terms of predicted importance were not large. These results were almost identical to those reported elsewhere [8], with only minor discrepancies in regard to the order of selection priority (the order in the previous study was as follows: ocular MG; MG with thymic hyperplasia; AChR-Ab-negative MG; thymoma-associated MG; and AChR-Ab-positive MG without thymic abnormalities). In the present study, the quality of clusterization under each set of variables, which was estimated using a previously reported interpretation model [14], was indicated as “fair” to “good” for all clusters, suggesting that the results were reasonable.\nA total of 111 patients (10.2%) fit two of the five subtypes (Table 3). These patients were allocated to sole subtypes according to the separation priority in the two-step cluster analysis. For example, an ocular MG patient with thymoma was allocated into ocular MG. 
A total of 111 patients (10.2%) fit two of the five subtypes (Table 3). These patients were allocated to a single subtype according to the separation priority in the two-step cluster analysis; for example, an ocular MG patient with thymoma was allocated to ocular MG. Under this criterion, the percentages of patients assigned to the five subtypes were: ocular MG, 23.0%; thymoma-associated MG, 21.5%; MG with thymic hyperplasia, 12.9%; AChR-Ab-negative MG, 12.1%; and AChR-Ab-positive MG without thymic abnormalities, 30.5% (Table 4).

Table 3. Number of patients fitting two categories

| Overlapping categories | Number of patients | Final assignment |
|---|---|---|
| Ocular MG and thymoma-associated MG | 32 (2.9%) | Ocular MG |
| Ocular MG and AChR-Ab-negative MG | 56 (5.1%) | Ocular MG |
| Ocular MG and MG with thymic hyperplasia | 8 (0.7%) | Ocular MG |
| MG with thymic hyperplasia and AChR-Ab-negative MG | 14 (1.3%) | THMG |
| Thymoma-associated MG and AChR-Ab-negative MG | 1 (0.09%) | TAMG |

Values within parentheses show percentages of the total of 1,088 patients.
AChR-Ab anti-acetylcholine receptor antibody; MG myasthenia gravis; TAMG thymoma-associated MG; THMG MG with thymic hyperplasia.

Table 4. Characteristics and severity for each of the five MG subtypes

| | Ocular MG | TAMG | THMG | SNMG | (MuSK-Ab-positive) | SPMG | Total |
|---|---|---|---|---|---|---|---|
| Patients (n) | 250 | 234 | 140 | 132 | (22) | 332 | 1088 |
| Female, % | 52.0 | 67.5 | 81.4* | 81.1* | (81.8) | 61.1 | 65.4 |
| Onset age, y | 51.0 ± 20.0, 53.0† | 51.0 ± 12.8, 51.0 | 33.3 ± 13.9, 31.0† | 39.9 ± 16.2, 41.5† | (38.6 ± 15.3, 42.0) | 50.7 ± 20.7, 55.0† | 47.3 ± 18.8, 48.1 |
| Duration of disease, y | 10.4 ± 11.4, 6.4 | 9.5 ± 7.7, 8.0 | 17.4 ± 11.8, 14.5† | 10.8 ± 9.3, 8.0 | (11.0 ± 8.1, 9.8) | 11.6 ± 10.5, 8.2 | 11.6 ± 10.6, 8.2 |
| AChR-Ab positivity, % | 77.2 | 99.6 | 89.4 | 0.0 | (0.0) | 100.0 | 81.3 |
| MuSK-Ab positivity, % | 0.0 | 0.0 | 0.0 | 20.6 | (100.0) | 0.0 | 2.1 |
| Thymectomy, % | 23.6* | 97.4* | 100.0* | 12.1* | (9.1) | 35.8* | 51.7 |
| Worst condition, MGFA class I, % (n = 1088) | 100 | 0.0 | 0.0 | 0.0 | (0.0) | 0.0 | 23.0 |
| Worst condition, MGFA class II, % | 0.0 | 44.0 | 52.9 | 59.1 | (45.5) | 64.8 | 43.2 |
| Worst condition, MGFA class III, % | 0.0 | 27.8 | 30.7 | 28.0 | (13.6) | 20.5 | 19.6 |
| Worst condition, MGFA class IV, % | 0.0 | 7.7 | 7.9 | 4.5 | (9.1) | 4.2 | 4.5 |
| Worst condition, MGFA class V, % | 0.0 | 20.5 | 8.6 | 8.3 | (31.8) | 10.5 | 9.7 |
| Rate of MGFA ≥ III, % | 0.0* | 56.0* | 47.1 | 40.9 | (54.5) | 35.2 | 33.8 |
| Worst QMG score (n = 922) | 6.6 ± 2.6, 6.0† (n = 225) | 17.1 ± 8.0, 15.5† (n = 194) | 15.8 ± 5.8, 15.0† (n = 107) | 14.7 ± 7.2, 13.0 (n = 114) | 18.1 ± 9.7, 15.5 (n = 20) | 14.7 ± 7.0, 13.0 (n = 282) | 13.4 ± 7.5, 12.0 |
| Current QMG score (n = 923) | 4.2 ± 2.8, 4.0† (n = 208) | 6.8 ± 4.8, 6.0 (n = 198) | 7.8 ± 5.5, 7.0 (n = 125) | 8.4 ± 5.4, 8.0 (n = 106) | (8.3 ± 6.4, 7.0) (n = 20) | 7.2 ± 4.8, 6.0 (n = 286) | 6.6 ± 4.8, 6.0 |
| Current MGC score (n = 923) | 1.9 ± 2.5, 1.0† (n = 208) | 4.5 ± 5.4, 3.0 (n = 198) | 5.4 ± 5.7, 3.0 (n = 125) | 6.5 ± 6.2, 5.0† (n = 106) | (6.2 ± 7.0, 4.5) (n = 20) | 4.2 ± 4.6, 3.0 (n = 286) | 4.1 ± 5.0, 3.0 |
| Current MG-QOL-15 (n = 923) | 8.1 ± 9.0, 5.0† (n = 208) | 14.7 ± 13.6, 11.0 (n = 198) | 16.2 ± 13.7, 13.5† (n = 125) | 14.6 ± 12.6, 12.0 (n = 106) | (11.6 ± 8.8, 11.0) (n = 20) | 14.1 ± 13.9, 9.0 (n = 286) | 13.2 ± 13.0, 9.0 |

All continuous data are expressed as the mean ± standard deviation (SD) and the median.
AChR-Ab anti-acetylcholine receptor antibody; MG myasthenia gravis; MGC MG composite scale; MGFA MG Foundation of America; MG-QOL-15 15-item MG-specific quality of life scale; MuSK-Ab-positive, MG patients with serum anti-muscle-specific kinase (MuSK) autoantibody among AChR-Ab-negative patients; QMG quantitative MG score; SD standard deviation; TAMG thymoma-associated MG; THMG MG with thymic hyperplasia; SNMG AChR-Ab-negative MG; SPMG AChR-Ab-positive MG without thymic abnormalities.
*p < 0.0001, chi-square test (compared with the others); †p < 0.0001, Mann–Whitney U test.

Note that MG with thymic hyperplasia is diagnosed only in thymectomized patients; therefore, some non-thymectomized patients with thymic hyperplasia may have been assigned to other subtypes, particularly AChR-Ab-positive MG without thymic abnormalities.
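The allocation rule behind Table 3 amounts to a priority cascade: a patient fitting two subtypes receives the one ranked higher in the separation priority. A minimal sketch, assuming the priority order stated above:

```python
# Hedged sketch of the allocation rule implied by Table 3: a patient who
# fits two subtypes is assigned to the higher-priority one. The priority
# list and function name here are illustrative, not the study's code.
PRIORITY = [
    "ocular MG",
    "thymoma-associated MG",
    "MG with thymic hyperplasia",
    "AChR-Ab-negative MG",
    "AChR-Ab-positive MG without thymic abnormalities",
]

def assign_subtype(fitting_subtypes: set) -> str:
    """Return the highest-priority subtype among those the patient fits."""
    for subtype in PRIORITY:
        if subtype in fitting_subtypes:
            return subtype
    raise ValueError("patient fits no subtype")

# Example from the text: an ocular MG patient with thymoma -> ocular MG.
print(assign_subtype({"ocular MG", "thymoma-associated MG"}))
```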
Clinical characteristics of each subtype

The clinical characteristics, including current and worst severity, for each of the five subtypes are shown in Table 4. Patients with MuSK-Ab were not separated by the two-step cluster analysis because their number was too small for statistical evaluation. However, MG patients with MuSK-Ab show distinct clinical manifestations and therapy responsiveness, reflecting a unique pathological mechanism [15]. Therefore, the details of the MG patients with MuSK-Ab (n = 22) are described individually next to the SNMG patients in Table 4. The percentage of females was significantly higher among MG with thymic hyperplasia and AChR-Ab-negative MG patients than in the other three subtypes (p < 0.0001, chi-square test).
Onset age was significantly younger in MG with thymic hyperplasia and AChR-Ab-negative MG patients (p < 0.0001, Mann–Whitney U test) and older in ocular MG and AChR-Ab-positive MG patients without thymic abnormalities (p < 0.0001, Mann–Whitney U test).

Severity at the worst condition (MGFA classification and QMG) was significantly higher in thymoma-associated MG patients (p < 0.0001, Mann–Whitney U test). Patients with MuSK-Ab showed worst-condition severity at the same level as thymoma-associated MG patients, although this result was not statistically significant because of the small number of patients. The severity scale (QMG and MGC) and quality-of-life scale (MG-QOL-15) scores in the present survey were generally, although not significantly, worse in MG with thymic hyperplasia and AChR-Ab-negative MG patients, both of which primarily comprise females with younger onset ages. As expected, ocular MG patients showed much lower clinical severity on all batteries at both the current and worst conditions (p < 0.0001, Mann–Whitney U test).
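The between-subtype comparisons above rely on two standard tests: chi-square for proportions (e.g., the percentage of females) and the Mann–Whitney U test for continuous variables such as onset age. The sketch below shows how such tests look in Python with SciPy; the counts and samples are fabricated stand-ins chosen only to resemble the magnitudes in Table 4, not the study's data.

```python
# Illustrative SciPy versions of the two tests used above, run on
# fabricated stand-in data rather than the registry's records.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi-square test on a 2x2 table: females/males in one subtype vs. the rest.
#                       female  male
contingency = np.array([[114,   26],     # e.g., a THMG-like group
                        [598,  350]])    # all other subtypes combined
chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi-square p = {p:.4g}")

# Mann-Whitney U test: onset ages in one subtype vs. the rest.
rng = np.random.default_rng(1)
onset_subtype = rng.normal(33, 14, 140)  # younger-onset group
onset_others = rng.normal(50, 19, 780)
u, p = mannwhitneyu(onset_subtype, onset_others)
print(f"Mann-Whitney U p = {p:.4g}")
```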
Onset age histograms of the five subtypes

Histograms of the onset age for each of the five subtypes are shown in Fig. 1a. These histograms were converted into approximate curves (sixth-order polynomial approximations) and superimposed in Fig. 1b. The peak onset ages were 60–64 years in ocular MG, 25–29 years in MG with thymic hyperplasia, 35–39 years in AChR-Ab-negative MG, 50–54 years in thymoma-associated MG, and 65–69 years in AChR-Ab-positive MG without thymic abnormalities. The histogram was skewed toward younger onset age in MG with thymic hyperplasia and toward older onset age in ocular MG and AChR-Ab-positive MG without thymic abnormalities. Among patients with MuSK-Ab (n = 22), the mean ± SD onset age was 38.6 ± 15.3 years (median, 42.0 years) and the proportion of females was 81.8%; neither finding differed significantly from the other AChR-Ab-negative MG patients.

Fig. 1. Histograms and approximate curves for onset age in the five MG subtypes. (a) Histograms for ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG), and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG). (b) Superimposed approximate curves for the five subtypes regarding the distribution of onset age. The vertical broken line indicates the cutoff onset age of 50 years between early- and late-onset MG. MG, myasthenia gravis.
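As an illustration of the curve-fitting step behind Fig. 1b, the following sketch bins synthetic onset ages into 5-year intervals and fits a sixth-order polynomial to the bin counts; the ages are invented, and only the technique is the point.

```python
# Sketch of the sixth-order polynomial smoothing of an onset-age
# histogram (cf. Fig. 1b). Synthetic onset ages, not registry data.
import numpy as np

rng = np.random.default_rng(2)
onset_ages = rng.normal(33, 14, 140).clip(5, 90)  # a THMG-like sample

# Histogram with 5-year bins, as in Fig. 1a.
bins = np.arange(0, 95, 5)
counts, edges = np.histogram(onset_ages, bins=bins)
centers = (edges[:-1] + edges[1:]) / 2

# Sixth-order polynomial fitted to the bin counts, then evaluated on a
# fine grid to draw a smooth approximate curve.
coeffs = np.polyfit(centers, counts, deg=6)
grid = np.linspace(centers[0], centers[-1], 200)
curve = np.polyval(coeffs, grid)

peak = grid[np.argmax(curve)]
print(f"approximate peak onset age: {peak:.0f} years")
```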
Early-stage response to treatment and stability of improved status

Details of past immunotherapy for each of the five subtypes are shown at the top of Table 5.

Table 5. Details of past treatment and response to treatment for each of the five subtypes

| | Ocular MG | TAMG | THMG | SNMG | (MuSK-Ab-positive) | SPMG | Total |
|---|---|---|---|---|---|---|---|
| Past immunotherapy (n = 923) | | | | | | | |
| Peak dose of oral PSL, mg/day | 9.2 ± 12.2, 5.0† | 28.5 ± 18.8, 30.0† | 29.7 ± 19.4, 30.0† | 18.8 ± 17.2, 15.0 | 32.6 ± 20.6, 30.0 | 23.7 ± 20.2, 20.0 | 21.5 ± 19.3, 15.0 |
| Duration of PSL ≥20 mg/day, M | 0.0 ± 0.0, 0.0† | 12.0 ± 25.2, 5.0† | 13.0 ± 27.3, 6.0† | 3.8 ± 7.0, 0.0 | 7.2 ± 9.5, 4.0 | 8.2 ± 17.0, 2.0 | 7.9 ± 19.3, 1.0 |
| CNIs, % | 24.0* | 68.2* | 54.0 | 67.4 | (72.7) | 58.1 | 52.9 |
| PP, % | 2.0* | 48.1* | 22.1 | 46.0* | (54.5) | 37.2 | 27.3 |
| IVIG, % | 6.1* | 36.1 | 29.9 | 42.5* | (27.3) | 24.7 | 15.0 |
| Initial response to treatment (n = 923, see Fig. 2a) | | | | | | | |
| Achievement of MM-or-better once, % | 79.8* | 73.5 | 66.1 | 56.2* | (75.0) | 67.8 | 70.2 |
| Months to achieve MM-or-better in 50% of patients | 4.0‡ | 8.0 | 18.0‡ | 31.0‡ | (7.0) | 6.0 | 8.0 |
| Stability of improved status (n = 923) | | | | | | | |
| MM-or-better at present, % | 74.0* | 58.1 | 49.6 | 39.6* | (55.0) | 55.4 | 57.6 |
| Maintaining rate of MM-or-better, % | 92.7* | 79.0 | 75.0 | 70.5 | (73.3) | 81.7 | 82.1 |

All continuous data are expressed as the mean ± standard deviation (SD) and the median.
CNIs calcineurin inhibitors; EAT early aggressive therapy; IVIG intravenous immunoglobulin; M months; MG myasthenia gravis; MM-or-better ≥1 M, minimal manifestations or better status lasting more than one month; MuSK-Ab-positive, MG patients with serum anti-muscle-specific kinase (MuSK) autoantibody among AChR-Ab-negative patients; PP plasmapheresis; PSL prednisolone; SD standard deviation; TAMG thymoma-associated MG; THMG MG with thymic hyperplasia; SNMG AChR-Ab-negative MG; SPMG AChR-Ab-positive MG without thymic abnormalities.
*p < 0.0001, chi-square test; †p < 0.0001, Mann–Whitney U test; ‡p < 0.0001, log-rank test.

Early-stage response to treatment (first achievement of MM-or-better ≥1 M)

As shown in the middle of Table 5, the rate of patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test).

Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes, up to 10 years from initiating immunotherapy, are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M differed significantly among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except between MG with thymic hyperplasia and AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment (p < 0.0001, log-rank test; p < 0.001, chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG than in ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).

Fig. 2. Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and in the three subtypes of early-onset, late-onset, and thymoma-associated MG. (a) Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG), and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. (b) Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis.
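The time-to-event analysis just described (Kaplan-Meier estimation with log-rank and generalized Wilcoxon tests) can be sketched in Python with the lifelines library, as below. The durations, event indicators, and group sizes are synthetic placeholders, not the study's data.

```python
# Hedged sketch of the time-to-MM-or-better analysis with lifelines,
# using synthetic durations instead of the registry's records.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

# Months from start of immunotherapy to first MM-or-better >= 1 M;
# event flag True = achieved, False = censored at last follow-up.
t_ocular = rng.exponential(6, 200).clip(0.5, 120)
e_ocular = rng.random(200) < 0.8
t_snmg = rng.exponential(30, 100).clip(0.5, 120)
e_snmg = rng.random(100) < 0.55

kmf = KaplanMeierFitter()
kmf.fit(t_ocular, event_observed=e_ocular, label="ocular MG")
print("median months to MM-or-better:", kmf.median_survival_time_)

# Log-rank test between two subtypes.
res = logrank_test(t_ocular, t_snmg, event_observed_A=e_ocular,
                   event_observed_B=e_snmg)
print(f"log-rank p = {res.p_value:.4g}")

# Generalized (Gehan-Breslow) Wilcoxon test via a weighted log-rank.
res_w = logrank_test(t_ocular, t_snmg, event_observed_A=e_ocular,
                     event_observed_B=e_snmg, weightings="wilcoxon")
print(f"generalized Wilcoxon p = {res_w.p_value:.4g}")
```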
For comparison, Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in early-onset, late-onset, and thymoma-associated MG (the three-type classification) are shown in Fig. 2b. Significant differences were observed between early- and late-onset MG (p < 0.01) and between early-onset and thymoma-associated MG (p < 0.01), but not between late-onset and thymoma-associated MG (p ≥ 0.10).

Stability of improved status

The rates of patients with MM-or-better status during the survey and the stability of improved status are shown at the bottom of Table 5. Stability of improved status was significantly better in ocular MG than in the other subtypes (p < 0.0001, chi-square test); however, no significant differences were observed among the subtypes other than ocular MG (p ≥ 0.10 for all pairs excluding ocular MG, chi-square test).
Discussion

The present analyses, based on several sets of variables, classified 923 MG patients into the same five subtypes, with the same characteristics of the onset-age histograms, as reported in our previous study [8]: ocular MG (AChR-Ab positivity, 77%; onset-age histogram skewed to older age); thymoma-associated MG (100%; normal distribution); MG with thymic hyperplasia (89%; skewed to younger age); AChR-Ab-negative MG (0%; normal distribution); and AChR-Ab-positive MG without thymic abnormalities (100%; skewed to older age). The results from the two different samples demonstrated high reproducibility, which supports the reliability of our five-subtype classification method. The analyses suggested two points. First, the discrimination between ocular and generalized MG is more fundamental than discrimination according to onset age, thymus pathology, or AChR-Ab positivity. Second, AChR-Ab-negative MG shows a normal distribution of onset age that does not fit a discrimination based on onset age. Therefore, it is probably better to reserve the often-used three-type classification (early-onset, late-onset, and thymoma-associated MG) for generalized, AChR-Ab-positive phenotypes. Consistently, in our five-subtype classification, MG with thymic hyperplasia (early onset age), AChR-Ab-positive MG without thymic abnormalities (late onset age), and thymoma-associated MG were the generalized, AChR-Ab-positive phenotypes.

These statistically derived results are consistent with the recently reported classification of MG by Gilhus et al. [16, 17], which comprises early-onset MG; late-onset MG; thymoma-associated MG; MuSK-Ab-positive MG; low-density lipoprotein receptor-related protein 4 (LRP4)-Ab-positive MG; seronegative MG; and ocular MG. In addition, those authors commented that early- and late-onset MG should be distinguished according to onset age only for patients with generalized symptoms and AChR-Ab.
In the present study, because of their small numbers, MuSK-Ab-positive MG patients were not separated, and LRP4-Ab positivity was not systematically determined, although MG patients with MuSK-Ab or LRP4-Ab have distinct clinical manifestations and a unique pathological mechanism [15].

Ocular MG was found to have unique characteristics, such as a higher onset age, male predominance, and an ocular muscle-specific pathogenesis [18], which may be related to an aging-associated susceptibility of the ocular muscles to antibodies against the neuromuscular junction. Given that response to treatment and stability of improved status were substantially better in ocular MG than in the other four subtypes, it seems reasonable to conclude that ocular MG should be treated as a distinct subgroup of MG in the clinical setting.

Among the four generalized subtypes in the present classification, both early-stage response to treatment and stability of improved status were worst in AChR-Ab-negative MG, although symptoms at the worst condition were not particularly severe. Patients with MuSK-Ab-positive MG showed better results despite more severe worst conditions, which suggests that AChR-Ab-negative MG, excluding MuSK-Ab-positive MG, is distinct from the other generalized MG subtypes with respect to response to therapy. Overall, as shown in Fig. 2a, each of the five present subtypes showed a different level of response to treatment, whereas such differences among the three commonly used subtypes (early-onset, late-onset, and thymoma-associated MG) remain somewhat unclear (Fig. 2b). It would be more helpful in the clinical setting to elucidate the levels of response to particular types of medication or therapy (e.g., corticosteroids, non-steroidal immunosuppressants, intravenous immunoglobulin, and plasmapheresis) in the five subtypes. However, it was difficult to analyze such response levels because multiple treatment agents and methods were employed simultaneously in most individual patients. We are now analyzing response levels according to patterns of immune treatment (treatment strategies) in generalized MG patients [19]. Such an analysis should also be performed for the present five subtypes, but it could not be addressed in the present report.

The present study had some limitations. First, 331 (30.4%) of the 1,088 patients were included in our previous survey in 2012, which might have affected the reproducibility of the present five-subtype classification. Second, almost all of the MG patients in our database are Japanese; therefore, a race/ethnicity bias may have affected the results. Finally, some MG patients with thymic hyperplasia might have been classified as AChR-Ab-positive MG without thymic abnormalities, because the diagnosis of thymic hyperplasia is made based on pathological examination after thymectomy. However, the frequency of thymectomy for MG patients without thymoma has been decreasing [20]; therefore, some AChR-Ab-positive MG patients with younger onset age who had not undergone thymectomy could have had thymic hyperplasia.

Conclusions

The results of the present study suggest that MG patients can be classified into the following five subtypes, in order of priority: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and AChR-Ab-positive MG without thymic abnormalities. All MG patients can be allocated to one of these subtypes based on the results of routine examinations.
These five subtypes were shown to have distinct demographic features, clinical severity, and therapeutic responses. Therefore, our five-subtype classification method is expected to be useful not only for elucidating disease subtypes but also for planning appropriate treatment for individual patients.
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusion" ]
[ "Classification", "Cluster analysis", "Myasthenia", "Onset age", "Treatment" ]
Background:
Myasthenia gravis (MG) is a neurological disorder that manifests as fatigable and fluctuating weakness of voluntary muscles, mediated by autoantibodies against neuromuscular junction proteins in skeletal muscle that impair neuromuscular transmission [1]. MG typically involves the ocular, bulbar, and extremity muscles and, in severe cases, the respiratory muscles. The clinical course and outcome in MG are affected by several different autoantibodies, thymic abnormalities, onset age, and disease severity, as well as by response to treatment [2–4]. MG is distinguished according to the production of pathogenic autoantibodies such as anti-acetylcholine receptor antibody (AChR-Ab) and anti-muscle-specific tyrosine kinase antibody (MuSK-Ab) [1, 5, 6]. Clinically, MG is often classified into the following three subtypes based on thymic abnormalities and onset age: thymoma-associated MG; early-onset MG (onset age <50 years); and late-onset MG (onset age ≥50 years) [7]. A further distinction, between ocular and generalized MG, is made in the clinical setting based on the distribution of symptoms. Previously, we reported classifying MG into the following five subtypes using two-step cluster analysis of a detailed cross-sectional data set of 640 consecutive patients (Japan MG Registry Study 2012): ocular MG; generalized thymoma-associated MG; generalized MG with thymic hyperplasia; generalized AChR-Ab-negative MG; and generalized AChR-Ab-positive MG without thymic abnormalities [8]. However, this five-subtype classification approach requires further confirmation, and its clinical relevance remains to be established. Therefore, in 2015, we conducted a larger cross-sectional survey to obtain clinical parameters from 1,088 consecutive MG patients. In the present study, using this new data set, we attempted to confirm the reproducibility of our five-subtype classification approach and to specify additional characteristics of the five subtypes, with a particular focus on response to treatment in the clinical setting.

Methods:
Patients and clinical factors:
This survey was conducted by the Japan MG Registry Study Group, which comprises 13 neurological centers (Table 1). We evaluated patients with established MG between April and July 2015. To avoid potential bias, we enrolled consecutive patients over a short duration (4 months). All 1,088 of these MG patients visited our hospitals, provided written informed consent, and underwent analysis.
Among these 1,088 patients, 331 (30.4%) were included in our previous survey in 2012 [8].

Table 1. Institutions participating in the Japan MG Registry Study 2015 (MG, myasthenia gravis)
- Department of Neurology, Sapporo Medical University Hospital, Sapporo
- Department of Neurology, Hokkaido Medical Center, Sapporo
- Department of Neurology, Hanamaki General Hospital, Hanamaki
- Department of Neurology, Sendai Medical Center, Sendai
- Department of Neurology, Tohoku University Graduate School of Medicine, Sendai
- Chiba Neurology Clinic, Chiba
- Department of Neurology, Chiba University School of Medicine, Chiba
- Department of Neurology, Tokyo Medical University, Tokyo
- Department of Neurology, Toho University Medical Center Oh-hashi Hospital, Tokyo
- Department of Neurology, Tokyo Women's Medical University, Tokyo
- Department of Neurology, Kinki University School of Medicine, Osaka
- Department of Neurology, Graduate School of Medical Sciences, Kyushu University, Fukuoka
- Department of Neurology and Strokology, Nagasaki University Hospital, Nagasaki

The following clinical parameters were obtained for all patients: sex; age; age at disease onset; duration of disease; duration of immunotherapy; history of bulbar symptoms; presence of thymoma or thymic hyperplasia in thymectomized patients; presence of serum AChR-Ab or MuSK-Ab; and presence of other non-MG-specific autoantibodies, such as anti-nuclear antibody, SS-A/SS-B antibody, TSH-receptor antibody, anti-thyroglobulin/thyroperoxidase antibody, and rheumatoid factor. In addition, current and past disease status and details of treatment were surveyed for all patients. Clinical severity at the worst condition was determined according to the classification of the MG Foundation of America (MGFA) [9] and, in some patients, the MGFA quantitative MG score (QMG) [9, 10], taken from medical records. Clinical severity at the current condition was determined according to the QMG and MG Composite (MGC) scores [11]. Furthermore, all patients completed the Japanese version of the 15-item Myasthenia Gravis Quality-of-Life Scale (MG-QOL-15) [12, 13] upon study entry. Prednisone and prednisolone are the global standard oral corticosteroids used to treat MG, and prednisolone is generally used in Japan. Therefore, the current use, peak dose (mg/day), and duration of prednisolone ≥20 mg/day were recorded for all patients, as was the use of calcineurin inhibitors, azathioprine, plasmapheresis, and intravenous immunoglobulin. Finally, the courses of current and past MGFA post-intervention statuses, particularly the time required to first achieve minimal manifestations (MM) or better status lasting more than one month (MM-or-better ≥1 M) [9], were determined as benchmarks for evaluating response to treatment in each patient. These clinical data were fully collected from 923 (84.8%) of the 1,088 patients.
Two-step cluster analysis:
To examine the reproducibility of the five-subtype classification in the same manner as reported previously [8], we conducted a two-step cluster analysis of the 923 patients using SPSS Statistics Base 22 software (IBM, Armonk, NY, USA). To avoid bias from multicollinearity, current or worst disease status was handled as a single variable (Table 2).
The other variables evaluated were: sex; age of onset; disease duration; presence of thymoma; presence of thymic hyperplasia in thymectomized cases; positivity for AChR-Ab or MuSK-Ab; and positivity for other concurrent autoantibodies (Table 2).

Table 2. Set of variables used in the cluster analyses
- Patients' backgrounds: sex; age of onset; disease duration; presence of thymoma; presence of thymic hyperplasia
- Autoantibody status: AChR-Ab; MuSK-Ab; non-MG-specific antibodies
- Disease status during the worst condition (one of the following): worst MGFA classification; worst QMG
- Current disease status (one of the following): current PIS; current QMG; MG-QOL-15 score
Abbreviations: AChR-Ab, anti-acetylcholine receptor antibody; MuSK-Ab, anti-muscle-specific kinase antibody; MG, myasthenia gravis; MGFA, MG Foundation of America; QMG, quantitative MG; MG-QOL-15, 15-item MG-specific quality of life scale; PIS, post-intervention status
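The two-step clustering itself was run in SPSS, whose TwoStep procedure is proprietary. As a rough open-source stand-in (not the authors' procedure), a Gaussian mixture over the same Table 2 variables, with the number of clusters selected by BIC as TwoStep does internally, gives a comparable workflow; the file name and column names below are hypothetical.

```python
# Hypothetical stand-in for SPSS TwoStep clustering: fit Gaussian mixtures over
# the Table 2 variables and pick the number of clusters by BIC.
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("mg_registry.csv")  # hypothetical patient-level data set
features = ["sex", "onset_age", "disease_duration", "thymoma",
            "thymic_hyperplasia", "achr_ab", "musk_ab",
            "other_autoantibodies", "worst_qmg"]  # one "disease status" variable
X = StandardScaler().fit_transform(df[features].astype(float))

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
        for k in range(2, 9)}
best_k = min(fits, key=lambda k: fits[k].bic(X))  # lowest BIC wins
df["subtype"] = fits[best_k].predict(X)
print(best_k, df["subtype"].value_counts().to_dict())
```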
Early-stage response to treatment and stability of improved status in each of the five subtypes:
Early-stage response to treatment:
The time (months) from the start of immunotherapy until first achieving MM-or-better ≥1 M was determined from medical records and compared among the five subtypes using Kaplan-Meier analysis and the log-rank test with the Cochran-Mantel-Haenszel procedure. The time required to achieve first MM-or-better ≥1 M in 50% of patients was also compared among subtypes.
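A minimal sketch of this survival analysis, assuming a patient-level DataFrame like the hypothetical one above, with columns t (months from the start of immunotherapy), achieved (1 if MM-or-better ≥1 M was reached), and subtype:

```python
# Kaplan-Meier curves per subtype plus an overall log-rank test (lifelines).
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.read_csv("mg_registry.csv")  # hypothetical columns: t, achieved, subtype
ax = plt.subplot(111)
for name, grp in df.groupby("subtype"):
    # Note: the published figure plots cumulative achievement; 1 - S(t) gives
    # that view, while plot_survival_function shows the not-yet-achieved curve.
    KaplanMeierFitter().fit(grp["t"], grp["achieved"], label=str(name)) \
        .plot_survival_function(ax=ax)

overall = multivariate_logrank_test(df["t"], df["subtype"], df["achieved"])
print(overall.p_value)  # difference among all five subtypes
```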
Stability of improved status of MM-or-better ≥1 M:
As an indicator of the stability of improved status, the ratio of the number of patients maintaining MM-or-better status at the 2015 survey to the number of patients who had achieved that status at least once was calculated and compared among the five subtypes.
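A minimal sketch of this maintaining-rate indicator and of the chi-square comparison used throughout the paper, again with hypothetical column names:

```python
# Maintaining rate = patients at MM-or-better now / patients who ever achieved it.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("mg_registry.csv")  # hypothetical columns: subtype,
                                     # ever_mm_or_better, mm_now
ever = df[df["ever_mm_or_better"] == 1]
print(ever.groupby("subtype")["mm_now"].mean())  # maintaining rate per subtype

table = pd.crosstab(ever["subtype"], ever["mm_now"])  # subtypes x maintained?
chi2, p, dof, expected = chi2_contingency(table)
print(p)
```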
Statistical analysis:
All statistical analyses were performed using SPSS Statistics Base 22 software (IBM) and MATLAB R2015a (MathWorks, Natick, MA, USA). All continuous data are expressed as the mean ± standard deviation (SD) and the median.
Results:
Two-step cluster analysis:
Based on the results of the two-step cluster analyses, all 923 MG patients could be classified into the same five subtypes described previously [8]: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and "other" (in order of predicted importance). Among these five subtypes, the residual "other" group was the largest and could be defined as generalized AChR-Ab-positive MG without thymic abnormalities. These results were reproduced with several sets of variables, as shown in Table 2, which confirmed the high reliability and reproducibility of the classification system. Although the order among thymoma-associated MG, MG with thymic hyperplasia, and AChR-Ab-negative MG varied with the variable sets used, the differences in predicted importance were not large. These results were almost identical to those reported previously [8], with only minor discrepancies in the order of selection priority (the order in the previous study was: ocular MG; MG with thymic hyperplasia; AChR-Ab-negative MG; thymoma-associated MG; and AChR-Ab-positive MG without thymic abnormalities). In the present study, the quality of clustering under each set of variables, estimated using a previously reported interpretation model [14], was rated "fair" to "good" for all clusters, suggesting that the results were reasonable.

A total of 111 patients (10.2%) fit two of the five subtypes (Table 3). These patients were each assigned to a single subtype according to the separation priority in the two-step cluster analysis; for example, an ocular MG patient with thymoma was assigned to ocular MG. Under this criterion, the percentages of patients assigned to the five subtypes were as follows: ocular MG, 23.0%; thymoma-associated MG, 21.5%; MG with thymic hyperplasia, 12.9%; AChR-Ab-negative MG, 12.1%; and AChR-Ab-positive MG without thymic abnormalities, 30.5% (Table 4).

Table 3. Number of patients fitting two categories (values in parentheses are percentages of the total of 1,088 patients)
- Ocular MG and thymoma-associated MG: 32 (2.9%); final assignment: ocular MG
- Ocular MG and AChR-Ab-negative MG: 56 (5.1%); final assignment: ocular MG
- Ocular MG and MG with thymic hyperplasia: 8 (0.7%); final assignment: ocular MG
- MG with thymic hyperplasia and AChR-Ab-negative MG: 14 (1.3%); final assignment: THMG
- Thymoma-associated MG and AChR-Ab-negative MG: 1 (0.09%); final assignment: TAMG
Abbreviations: AChR-Ab, anti-acetylcholine receptor antibody; MG, myasthenia gravis; TAMG, thymoma-associated MG; THMG, MG with thymic hyperplasia
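The dual-fit assignments in Table 3 follow a fixed priority order. A tiny sketch of that rule follows; the priority list is inferred from Table 3 and should be treated as an assumption:

```python
# Assign a patient who fits two subtypes to the higher-priority one (Table 3).
PRIORITY = ["ocular", "TAMG", "THMG", "SNMG", "SPMG"]  # assumed order

def assign(candidates: set) -> str:
    """Return the highest-priority subtype among those the patient fits."""
    return min(candidates, key=PRIORITY.index)

print(assign({"ocular", "TAMG"}))  # -> ocular
print(assign({"THMG", "SNMG"}))    # -> THMG
```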
Table 4. Characteristics and severity for each of the five MG subtypes
Columns, in order: ocular MG | thymoma-associated MG | MG with thymic hyperplasia | AChR-Ab-negative MG (MuSK-Ab-positive subset in parentheses) | AChR-Ab-positive MG without thymic abnormalities | total
- Patients (n): 250 | 234 | 140 | 132 (22) | 332 | 1,088
- Female, %: 52.0 | 67.5 | 81.4* | 81.1* (81.8) | 61.1 | 65.4
- Onset age, y: 51.0 ± 20.0, 53.0† | 51.0 ± 12.8, 51.0 | 33.3 ± 13.9, 31.0† | 39.9 ± 16.2, 41.5† (38.6 ± 15.3, 42.0) | 50.7 ± 20.7, 55.0† | 47.3 ± 18.8, 48.1
- Duration of disease, y: 10.4 ± 11.4, 6.4 | 9.5 ± 7.7, 8.0 | 17.4 ± 11.8, 14.5† | 10.8 ± 9.3, 8.0 (11.0 ± 8.1, 9.8) | 11.6 ± 10.5, 8.2 | 11.6 ± 10.6, 8.2
- AChR-Ab positivity, %: 77.2 | 99.6 | 89.4 | 0.0 (0.0) | 100.0 | 81.3
- MuSK-Ab positivity, %: 0.0 | 0.0 | 0.0 | 20.6 (100.0) | 0.0 | 2.1
- Thymectomy, %: 23.6* | 97.4* | 100.0* | 12.1* (9.1) | 35.8* | 51.7
Worst condition of the disease, MGFA classification (n = 1,088):
- I, %: 100 | 0.0 | 0.0 | 0.0 (0.0) | 0.0 | 23.0
- II, %: 0.0 | 44.0 | 52.9 | 59.1 (45.5) | 64.8 | 43.2
- III, %: 0.0 | 27.8 | 30.7 | 28.0 (13.6) | 20.5 | 19.6
- IV, %: 0.0 | 7.7 | 7.9 | 4.5 (9.1) | 4.2 | 4.5
- V, %: 0.0 | 20.5 | 8.6 | 8.3 (31.8) | 10.5 | 9.7
- Rate of MGFA ≥ III, %: 0.0* | 56.0* | 47.1 | 40.9 (54.5) | 35.2 | 33.8
- Worst QMG score (n = 922): 6.6 ± 2.6, 6.0† (n = 225) | 17.1 ± 8.0, 15.5† (n = 194) | 15.8 ± 5.8, 15.0† (n = 107) | 14.7 ± 7.2, 13.0 (n = 114) (18.1 ± 9.7, 15.5; n = 20) | 14.7 ± 7.0, 13.0 (n = 282) | 13.4 ± 7.5, 12.0
Current disease condition (mean ± SD, median):
- QMG score (n = 923): 4.2 ± 2.8, 4.0† (n = 208) | 6.8 ± 4.8, 6.0 (n = 198) | 7.8 ± 5.5, 7.0 (n = 125) | 8.4 ± 5.4, 8.0 (n = 106) (8.3 ± 6.4, 7.0; n = 20) | 7.2 ± 4.8, 6.0 (n = 286) | 6.6 ± 4.8, 6.0
- MGC score (n = 923): 1.9 ± 2.5, 1.0† (n = 208) | 4.5 ± 5.4, 3.0 (n = 198) | 5.4 ± 5.7, 3.0 (n = 125) | 6.5 ± 6.2, 5.0† (n = 106) (6.2 ± 7.0, 4.5; n = 20) | 4.2 ± 4.6, 3.0 (n = 286) | 4.1 ± 5.0, 3.0
- MG-QOL-15 (n = 923): 8.1 ± 9.0, 5.0† (n = 208) | 14.7 ± 13.6, 11.0 (n = 198) | 16.2 ± 13.7, 13.5† (n = 125) | 14.6 ± 12.6, 12.0 (n = 106) (11.6 ± 8.8, 11.0; n = 20) | 14.1 ± 13.9, 9.0 (n = 286) | 13.2 ± 13.0, 9.0
All continuous data are expressed as the mean ± standard deviation (SD) and the median.
*p < 0.0001, chi-square test (compared with the others); †p < 0.0001, Mann–Whitney U test.
Abbreviations: AChR-Ab, anti-acetylcholine receptor antibody; MG, myasthenia gravis; MGC, MG composite scale; MGFA, MG Foundation of America; MG-QOL-15, 15-item MG-specific quality of life scale; MuSK-Ab, anti-muscle-specific kinase antibody; QMG, quantitative MG score.

Note that MG with thymic hyperplasia is diagnosed only in thymectomized patients; therefore, some non-thymectomized patients with thymic hyperplasia may have been assigned to other subtypes, particularly AChR-Ab-positive MG without thymic abnormalities.
Clinical characteristics of each subtype:
The clinical characteristics, including current and worst severity, for each of the five subtypes are shown in Table 4. Patients with MuSK-Ab were not separated by the two-step cluster analysis because their number was too small for statistical evaluation. However, MG patients with MuSK-Ab show a distinct clinical manifestation and therapy responsiveness, reflecting a unique pathological mechanism [15]. Therefore, details of the MG patients with MuSK-Ab (n = 22) are described separately, next to the AChR-Ab-negative (SNMG) patients, in Table 4.

The percentage of females was significantly higher among MG with thymic hyperplasia and AChR-Ab-negative MG patients than in the other three subtypes (p < 0.0001, chi-square test). Onset age was significantly younger in MG with thymic hyperplasia and AChR-Ab-negative MG patients (p < 0.0001, Mann–Whitney U test) and older in ocular MG and AChR-Ab-positive MG patients without thymic abnormalities (p < 0.0001, Mann–Whitney U test). Severity at the worst condition (MGFA classification and QMG) was significantly higher in thymoma-associated MG patients (p < 0.0001, Mann–Whitney U test). Patients with MuSK-Ab also showed worst-condition severity at the same level as thymoma-associated MG patients, although this result was not statistically significant because of the small number of patients. The severity scale (QMG and MGC) and quality-of-life (MG-QOL-15) scores in the present survey were generally worse, although not significantly so, in MG with thymic hyperplasia and AChR-Ab-negative MG patients, both of which primarily comprise females with younger onset ages. As expected, ocular MG patients showed much lower clinical severity on all scales at both the current and worst conditions (p < 0.0001, Mann–Whitney U test).
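A minimal sketch of these between-subtype comparisons (chi-square for proportions, Mann-Whitney U for continuous scores), with hypothetical column names:

```python
# One-vs-rest comparisons as in Table 4: female proportion and onset age.
import pandas as pd
from scipy.stats import chi2_contingency, mannwhitneyu

df = pd.read_csv("mg_registry.csv")  # hypothetical columns: subtype, female, onset_age
mask = df["subtype"] == "THMG"       # e.g., MG with thymic hyperplasia vs the rest

table = pd.crosstab(mask, df["female"])
print(chi2_contingency(table)[1])    # p-value, chi-square test

u = mannwhitneyu(df.loc[mask, "onset_age"], df.loc[~mask, "onset_age"])
print(u.pvalue)                      # p-value, Mann-Whitney U test
```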
Onset age histograms of the five subtypes:
Histograms of onset age for each of the five subtypes are shown in Fig. 1a. These histograms were converted into approximate curves (sixth-order polynomial approximations) and superimposed in Fig. 1b. The peak ages were 60–64 years in ocular MG, 25–29 years in MG with thymic hyperplasia, 35–39 years in AChR-Ab-negative MG, 50–54 years in thymoma-associated MG, and 65–69 years in AChR-Ab-positive MG without thymic abnormalities. The histogram was skewed toward younger onset age in MG with thymic hyperplasia and toward older onset age in ocular MG and AChR-Ab-positive MG without thymic abnormalities. Among patients with MuSK-Ab (n = 22), the mean ± SD onset age was 38.6 ± 15.3 years (median, 42.0 years), and the proportion of females was 81.8%; neither finding differed significantly from the other AChR-Ab-negative MG patients.

Fig. 1 Histograms and approximate curves for onset age in the five MG subtypes. a Histograms for ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG), and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG). b Superimposed approximate curves for the distribution of onset age in the five subtypes. The vertical broken line indicates the cutoff onset age of 50 years between early- and late-onset MG. MG, myasthenia gravis
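A minimal sketch of the Fig. 1b smoothing step (histogram counts fitted with a sixth-order polynomial); the bin width and column names are assumptions:

```python
# Per-subtype onset-age histograms smoothed with a 6th-order polynomial (Fig. 1b).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("mg_registry.csv")   # hypothetical columns: subtype, onset_age
bins = np.arange(0, 95, 5)            # assumed 5-year bins
centers = (bins[:-1] + bins[1:]) / 2
for name, grp in df.groupby("subtype"):
    counts, _ = np.histogram(grp["onset_age"], bins=bins)
    coeffs = np.polyfit(centers, counts, deg=6)
    plt.plot(centers, np.clip(np.polyval(coeffs, centers), 0, None), label=str(name))
plt.axvline(50, linestyle="--")       # early-/late-onset cutoff at 50 years
plt.xlabel("Onset age, years")
plt.legend()
plt.show()
```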
Early-stage response to treatment and stability of improved status among the five subtypes:
Details of past immunotherapy for each of the five subtypes are shown at the top of Table 5.

Table 5. Details of past treatment and response to treatment for each of the five subtypes
Columns, in order: ocular MG | thymoma-associated MG | MG with thymic hyperplasia | AChR-Ab-negative MG (MuSK-Ab-positive subset in parentheses) | AChR-Ab-positive MG without thymic abnormalities | total
Past immunotherapy (n = 923):
- Peak dose of oral PSL, mg/day: 9.2 ± 12.2, 5.0† | 28.5 ± 18.8, 30.0† | 29.7 ± 19.4, 30.0† | 18.8 ± 17.2, 15.0 (32.6 ± 20.6, 30.0) | 23.7 ± 20.2, 20.0 | 21.5 ± 19.3, 15.0
- Duration of PSL ≥20 mg/day, M: 0.0 ± 0.0, 0.0† | 12.0 ± 25.2, 5.0† | 13.0 ± 27.3, 6.0† | 3.8 ± 7.0, 0.0 (7.2 ± 9.5, 4.0) | 8.2 ± 17.0, 2.0 | 7.9 ± 19.3, 1.0
- CNIs, %: 24.0* | 68.2* | 54.0 | 67.4 (72.7) | 58.1 | 52.9
- PP, %: 2.0* | 48.1* | 22.1 | 46.0* (54.5) | 37.2 | 27.3
- IVIG, %: 6.1* | 36.1 | 29.9 | 42.5* (27.3) | 24.7 | 15.0
Initial response to treatment (n = 923, see Fig. 2a):
- Achievement of MM-or-better once, %: 79.8* | 73.5 | 66.1 | 56.2* (75.0) | 67.8 | 70.2
- Months to achieve MM-or-better in 50% of patients: 4.0‡ | 8.0 | 18.0‡ | 31.0‡ (7.0) | 6.0 | 8.0
Stability of improved status (n = 923):
- MM-or-better at present, %: 74.0* | 58.1 | 49.6 | 39.6* (55.0) | 55.4 | 57.6
- Maintaining rate of MM-or-better, %: 92.7* | 79.0 | 75.0 | 70.5 (73.3) | 81.7 | 82.1
All continuous data are expressed as the mean ± standard deviation (SD) and the median.
*p < 0.0001, chi-square test; †p < 0.0001, Mann–Whitney U test; ‡p < 0.0001, log-rank test.
Abbreviations: CNIs, calcineurin inhibitors; EAT, early aggressive therapy; IVIG, intravenous immunoglobulin; M, months; MG, myasthenia gravis; MM-or-better ≥1 M, minimal manifestations or better status lasting more than one month; PP, plasmapheresis; PSL, prednisolone.

Early-stage response to treatment (first achievement of MM-or-better ≥1 M):
As shown in the middle of Table 5, the rate of patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test). Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes, up to 10 years from the initiation of immunotherapy, are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M differed significantly among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except between MG with thymic hyperplasia and AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment (p < 0.0001, log-rank test; p < 0.001, chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG than in ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).

Fig. 2 Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis
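The pairwise subtype comparisons can be reproduced with lifelines' pairwise log-rank helper; a minimal sketch, reusing the hypothetical t/achieved/subtype columns from the Methods sketch:

```python
# Pairwise log-rank tests between all subtype pairs (lifelines).
import pandas as pd
from lifelines.statistics import pairwise_logrank_test

df = pd.read_csv("mg_registry.csv")  # hypothetical columns: t, achieved, subtype
pw = pairwise_logrank_test(df["t"], df["subtype"], df["achieved"])
print(pw.summary)  # one row of test statistics and p-values per subtype pair
```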
For comparison, Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in early-onset, late-onset, and thymoma-associated MG (three-type classification) are shown in Fig. 2b. Significant differences were observed between early- and late-onset MG (p < 0.01) and between early-onset and thymoma-associated MG (p < 0.01); however, no significant difference was found between late-onset and thymoma-associated MG (p ≥ 0.10).
Stability of improved status:
The rates of patients with MM-or-better status during the survey and the stability of improved status are shown at the bottom of Table 5. Stability of improved status was significantly better in ocular MG than in the other subtypes (p < 0.0001; chi-square test); however, no significant differences were observed among the subtypes other than ocular MG (p ≥ 0.10 for all pairs, excluding ocular MG; chi-square test).
2a) Achievement of MM-or-better once,%79.8%*73.5%66.1%56.2%*(75.0%)67.8%70.2% Months to achieve MM-or-better in 50% of patients4.0‡ 8.018.0‡ 31.0‡ (7.0)6.08.0Stability of improved status (n = 923) MM-or-better at present,%74.0%*58.1%49.6%39.6%*(55.0%)55.4%57.6% Maintaining rate of MM-or-better, %92.7%*79.0%75.0%70.5%(73.3%)81.7%82.1%All continuous data are expressed as the mean ± standard deviation (SD) and the median CNIs calcineurin inhibitors, EAT early aggressive therapy, IVIG intravenous immunoglobulin, M months, MG myasthenia gravis, MM-or-better ≥1 M minimal manifestation or better status lasting more than one month, MuSK-Ab-positive MG patients with serum anti-muscle specific kinase (MuSK) autoantibody in AChR-Ab-negative patients, PP plasmapheresis, PSL prednisolone, SD standard deviation*p < 0.0001, chi-square test,† p < 0.0001, Mann–Whitney U test,‡ p < 0.0001, log-rank test Details of past treatment and response to treatment for each of the five subtypes All continuous data are expressed as the mean ± standard deviation (SD) and the median CNIs calcineurin inhibitors, EAT early aggressive therapy, IVIG intravenous immunoglobulin, M months, MG myasthenia gravis, MM-or-better ≥1 M minimal manifestation or better status lasting more than one month, MuSK-Ab-positive MG patients with serum anti-muscle specific kinase (MuSK) autoantibody in AChR-Ab-negative patients, PP plasmapheresis, PSL prednisolone, SD standard deviation *p < 0.0001, chi-square test,† p < 0.0001, Mann–Whitney U test,‡ p < 0.0001, log-rank test Early-stage response to treatment (first achievement of MM-or-better ≥1 M) As shown in the middle of Table 5, the rate of the patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test). Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes up to 10 years from initiating immunotherapy are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M was significantly different among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of two subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except MG with thymic hyperplasia and AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment compared with others (p < 0.0001; log-rank test, p < 0.001; chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of the patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG compared with ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).Fig. 2Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. 
MM, minimal manifestations; MG, myasthenia gravis Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis For comparison, Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in early-onset, late-onset, and thymoma-associated MG (three-type classification) are shown in Fig. 2b. Significant differences were observed between early- and late-onset MG (p < 0.01) and between early-onset and thymoma-associated MG (p < 0.01); however, no significant differences were found between late-onset and thymoma-associated MG (p ≥ 0.10). As shown in the middle of Table 5, the rate of the patients achieving MM-or-better ≥1 M at least once was significantly higher for ocular MG (p < 0.001, chi-square test) and significantly lower for AChR-Ab-negative MG (p < 0.001, chi-square test). Kaplan-Meier curves for the time to first achieve MM-or-better ≥1 M in each of the five subtypes up to 10 years from initiating immunotherapy are shown in Fig. 2a. The time to first achieve MM-or-better ≥1 M was significantly different among the five subtypes (p < 0.0001; generalized Wilcoxon test and log-rank test). Significant differences were observed between all pairs of two subtypes (p < 0.01 for all pairs; generalized Wilcoxon test) except MG with thymic hyperplasia and AChR-Ab-negative MG (p ≥ 0.10). Patients with ocular MG showed the best early-stage response to treatment compared with others (p < 0.0001; log-rank test, p < 0.001; chi-square test). The time required to achieve MM-or-better ≥1 M in 50% of the patients was significantly longer in MG with thymic hyperplasia and AChR-Ab-negative MG compared with ocular MG, thymoma-associated MG, and AChR-Ab-positive MG without thymic abnormalities (p < 0.0001, log-rank test; middle of Table 5).Fig. 2Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. MM, minimal manifestations; MG, myasthenia gravis Kaplan-Meier curves for the first achievement of MM-or-better ≥1 M in the five subtypes and those in the three subtypes of early-onset, late-onset, and thymoma-associated MG. a Kaplan-Meier curves for the five subtypes [ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG)]. b Kaplan-Meier curves for the three subtypes of early-onset, late-onset, and thymoma-associated MG. 
Stability of improved status: The rates of patients with MM-or-better status at the time of the survey and the stability of improved status are shown at the bottom of Table 5. Stability of improved status was significantly better in ocular MG than in the other subtypes (p < 0.0001, chi-square test); no significant differences were observed among the subtypes other than ocular MG (p ≥ 0.10 for all pairs excluding ocular MG, chi-square test).
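A quick arithmetic check ties the two stability rows of Table 5 together. Assuming, as the abstract states, that the maintaining rate is the share of patients in MM-or-better at present divided by the share who achieved it at least once, the tabulated values are mutually consistent:

```python
# Consistency check of the "maintaining rate" in Table 5 (ocular MG shown).
achieved_once = 0.798   # 79.8% achieved MM-or-better >=1 M at least once
at_present = 0.740      # 74.0% were in MM-or-better at the present survey
maintaining = at_present / achieved_once
print(f"maintaining rate: {maintaining:.1%}")   # ~92.7%, matching Table 5
```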
Two-step cluster analysis: Based on the results of the two-step cluster analyses, all 923 MG patients could be classified into the same five subtypes described elsewhere [8]: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and other (in order of predicted importance). Among these five subtypes, the residual group "other" was the largest and could be defined as generalized AChR-Ab-positive MG without thymic abnormalities. These results were demonstrated repeatedly with several sets of variables, as shown in Table 2, confirming the high reliability and reproducibility of the classification system. Although the order among thymoma-associated MG, MG with thymic hyperplasia, and AChR-Ab-negative MG varied depending on the variable sets used, the differences in predicted importance were not large. These results were almost identical to those reported elsewhere [8], with only minor discrepancies in the order of selection priority (the order in the previous study was: ocular MG; MG with thymic hyperplasia; AChR-Ab-negative MG; thymoma-associated MG; and AChR-Ab-positive MG without thymic abnormalities). In the present study, the quality of clusterization under each set of variables, estimated using a previously reported interpretation model [14], was "fair" to "good" for all clusters, suggesting that the results were reasonable. A total of 111 patients (10.2%) fit two of the five subtypes (Table 3). These patients were allocated to a single subtype according to the separation priority in the two-step cluster analysis; for example, an ocular MG patient with thymoma was allocated to ocular MG. Under this criterion, the percentages of patients assigned to the five subtypes were as follows: ocular MG, 23.0%; thymoma-associated MG, 21.5%; MG with thymic hyperplasia, 12.9%; AChR-Ab-negative MG, 12.1%; and AChR-Ab-positive MG without thymic abnormalities, 30.5% (Table 4).
Table 3. Number of patients fitting two categories (number, final assignment):
Ocular MG and thymoma-associated MG: 32 (2.9%), assigned to ocular MG
Ocular MG and AChR-Ab-negative MG: 56 (5.1%), assigned to ocular MG
Ocular MG and MG with thymic hyperplasia: 8 (0.7%), assigned to ocular MG
MG with thymic hyperplasia and AChR-Ab-negative MG: 14 (1.3%), assigned to THMG
Thymoma-associated MG and AChR-Ab-negative MG: 1 (0.09%), assigned to TAMG
Values within parentheses show the percentages of the total of 1,088 patients. AChR-Ab, anti-acetylcholine receptor antibody; MG, myasthenia gravis.
Table 4. Characteristics and severity for each of the five MG subtypes. Columns: ocular MG | thymoma-associated MG | MG with thymic hyperplasia | AChR-Ab-negative MG | (MuSK-Ab-positive) | AChR-Ab-positive MG without thymic abnormalities | total.
Patients (n): 250 | 234 | 140 | 132 | (22) | 332 | 1088
Female, %: 52.0 | 67.5 | 81.4* | 81.1* | (81.8) | 61.1 | 65.4
Onset age, y: 51.0 ± 20.0, 53.0† | 51.0 ± 12.8, 51.0 | 33.3 ± 13.9, 31.0† | 39.9 ± 16.2, 41.5† | (38.6 ± 15.3, 42.0) | 50.7 ± 20.7, 55.0† | 47.3 ± 18.8, 48.1
Duration of disease, y: 10.4 ± 11.4, 6.4 | 9.5 ± 7.7, 8.0 | 17.4 ± 11.8, 14.5† | 10.8 ± 9.3, 8.0 | (11.0 ± 8.1, 9.8) | 11.6 ± 10.5, 8.2 | 11.6 ± 10.6, 8.2
AChR-Ab positivity, %: 77.2 | 99.6 | 89.4 | 0.0 | (0.0) | 100.0 | 81.3
MuSK-Ab positivity, %: 0.0 | 0.0 | 0.0 | 20.6 | (100.0) | 0.0 | 2.1
Thymectomy, %: 23.6* | 97.4* | 100.0* | 12.1* | (9.1) | 35.8* | 51.7
Worst condition of the disease, MGFA classification (n = 1088):
I, %: 100 | 0.0 | 0.0 | 0.0 | (0.0) | 0.0 | 23.0
II, %: 0.0 | 44.0 | 52.9 | 59.1 | (45.5) | 64.8 | 43.2
III, %: 0.0 | 27.8 | 30.7 | 28.0 | (13.6) | 20.5 | 19.6
IV, %: 0.0 | 7.7 | 7.9 | 4.5 | (9.1) | 4.2 | 4.5
V, %: 0.0 | 20.5 | 8.6 | 8.3 | (31.8) | 10.5 | 9.7
Rate of MGFA ≥ III, %: 0.0* | 56.0* | 47.1 | 40.9 | (54.5) | 35.2 | 33.8
QMG score at worst (n = 922): 6.6 ± 2.6, 6.0† (n = 225) | 17.1 ± 8.0, 15.5† (n = 194) | 15.8 ± 5.8, 15.0† (n = 107) | 14.7 ± 7.2, 13.0 (n = 114) | (18.1 ± 9.7, 15.5) (n = 20) | 14.7 ± 7.0, 13.0 (n = 282) | 13.4 ± 7.5, 12.0
Current disease condition (mean ± SD, median):
QMG score (n = 923): 4.2 ± 2.8, 4.0† (n = 208) | 6.8 ± 4.8, 6.0 (n = 198) | 7.8 ± 5.5, 7.0 (n = 125) | 8.4 ± 5.4, 8.0 (n = 106) | (8.3 ± 6.4, 7.0) (n = 20) | 7.2 ± 4.8, 6.0 (n = 286) | 6.6 ± 4.8, 6.0
MGC score (n = 923): 1.9 ± 2.5, 1.0† (n = 208) | 4.5 ± 5.4, 3.0 (n = 198) | 5.4 ± 5.7, 3.0 (n = 125) | 6.5 ± 6.2, 5.0† (n = 106) | (6.2 ± 7.0, 4.5) (n = 20) | 4.2 ± 4.6, 3.0 (n = 286) | 4.1 ± 5.0, 3.0
MG-QOL-15 (n = 923): 8.1 ± 9.0, 5.0† (n = 208) | 14.7 ± 13.6, 11.0 (n = 198) | 16.2 ± 13.7, 13.5† (n = 125) | 14.6 ± 12.6, 12.0 (n = 106) | (11.6 ± 8.8, 11.0) (n = 20) | 14.1 ± 13.9, 9.0 (n = 286) | 13.2 ± 13.0, 9.0
All continuous data are expressed as the mean ± standard deviation (SD) and the median. Abbreviations: AChR-Ab, anti-acetylcholine receptor antibody; MG, myasthenia gravis; MGC, MG composite scale; MGFA, MG Foundation of America; MG-QOL-15, 15-item MG-specific quality of life scale; MuSK-Ab-positive, MG patients with serum anti-muscle-specific kinase (MuSK) autoantibody among AChR-Ab-negative patients; QMG, quantitative MG score; SD, standard deviation. *p < 0.0001, chi-square test (compared with the others); †p < 0.0001, Mann–Whitney U test.
MG with thymic hyperplasia is diagnosed only in thymectomized patients; therefore, some non-thymectomized patients with thymic hyperplasia may have been assigned to other subtypes, particularly AChR-Ab-positive MG without thymic abnormalities.
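The study used a two-step cluster analysis (an SPSS-style procedure for mixed categorical and continuous variables). scikit-learn has no direct equivalent, so the sketch below only approximates the idea: categorical variables are one-hot encoded, continuous variables standardized, and the number of clusters chosen by BIC over Gaussian mixtures. The column names and input file are hypothetical, and this is not the authors' actual procedure.

```python
# Approximate, illustrative stand-in for a two-step cluster analysis.
# Assumes scikit-learn >= 1.2 (for OneHotEncoder's sparse_output argument).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("mg_patients.csv")  # hypothetical data set
categorical = ["sex", "thymoma", "thymic_hyperplasia", "achr_ab", "musk_ab"]
continuous = ["onset_age", "disease_duration", "worst_qmg", "current_qmg"]

X = ColumnTransformer([
    ("cat", OneHotEncoder(sparse_output=False), categorical),
    ("num", StandardScaler(), continuous),
]).fit_transform(df)

# Fit mixtures with 2-8 components and keep the BIC-optimal model.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(2, 9)}
best_k = min(models, key=lambda k: models[k].bic(X))
df["subtype"] = models[best_k].predict(X)
print(f"BIC-optimal number of clusters: {best_k}")
```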
Clinical characteristics of each subtype: The clinical characteristics, including current and worst severity, for each of the five subtypes are shown in Table 4. Patients with MuSK-Ab were not separated by the two-step cluster analysis because their number was too small for statistical evaluation. However, MG patients with MuSK-Ab show distinct clinical manifestations and therapy responsiveness, reflecting a unique pathological mechanism [15]. Therefore, details of the MG patients with MuSK-Ab (n = 22) are described individually next to the SNMG patients in Table 4. The percentage of females was significantly higher among MG with thymic hyperplasia and AChR-Ab-negative MG patients than among the other three subtypes (p < 0.0001, chi-square test). Onset age was significantly younger in MG with thymic hyperplasia and AChR-Ab-negative MG patients (p < 0.0001, Mann–Whitney U test) and older in ocular MG and AChR-Ab-positive MG patients without thymic abnormalities (p < 0.0001, Mann–Whitney U test). Severity at the worst condition (MGFA classification and QMG) was significantly higher in thymoma-associated MG patients (p < 0.0001, Mann–Whitney U test). Patients with MuSK-Ab also showed worst-condition severity at the same level as thymoma-associated MG patients, although this result was not statistically significant because of the small number of patients. The severity scale (QMG and MGC) and QOL scale (MG-QOL-15) scores in the present survey were generally worse, although not significantly so, in MG with thymic hyperplasia and AChR-Ab-negative MG patients, both of which primarily comprise females with younger onset ages. As expected, ocular MG patients showed much lower clinical severity on all batteries at both the current and worst conditions (p < 0.0001, Mann–Whitney U test).
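The subgroup comparisons above rest on two standard tests: a chi-square test on a contingency table for proportions (e.g., the female ratio) and a Mann–Whitney U test for skewed continuous variables (e.g., onset age). A minimal SciPy sketch follows; the counts and samples are placeholders, not values taken from Table 4.

```python
# Minimal sketch of the group comparisons reported above, using SciPy.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# 2x2 table: females/males in one subtype vs. the rest (hypothetical counts).
table = np.array([[114, 26],    # subtype of interest: female, male
                  [598, 350]])  # remaining patients:   female, male
chi2, p_chi, dof, _ = chi2_contingency(table)

# Onset-age comparison between two groups (placeholder samples).
rng = np.random.default_rng(0)
onset_a = rng.normal(33, 14, 140)
onset_b = rng.normal(50, 19, 948)
u_stat, p_mwu = mannwhitneyu(onset_a, onset_b, alternative="two-sided")
print(f"chi-square p = {p_chi:.3g}, Mann-Whitney p = {p_mwu:.3g}")
```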
Onset age histograms of the five subtypes: Histograms of the onset age for each of the five subtypes are shown in Fig. 1a. These histograms were converted into approximate curves (sixth-order polynomial approximations) and superimposed in Fig. 1b. The peak ages of the histograms were 60–64 years in ocular MG, 25–29 years in MG with thymic hyperplasia, 35–39 years in AChR-Ab-negative MG, 50–54 years in thymoma-associated MG, and 65–69 years in AChR-Ab-positive MG without thymic abnormalities. The histogram was skewed toward younger onset age in MG with thymic hyperplasia and toward older onset age in ocular MG and AChR-Ab-positive MG without thymic abnormalities. Regarding patients with MuSK-Ab (n = 22), the mean ± SD onset age was 38.6 ± 15.3 years (median, 42.0 years), and the proportion of females was 81.8%; neither finding differed significantly from the other AChR-Ab-negative MG patients.
Fig. 1 Histograms and approximate curves for onset age in the five MG subtypes. a Histograms for ocular MG, generalized thymoma-associated MG (TAMG), generalized MG with thymic hyperplasia (THMG), generalized AChR-Ab-negative MG (SNMG) and generalized AChR-Ab-positive MG without thymic abnormalities (SPMG). b Superimposed approximate curves for the five subtypes regarding the distribution of onset age. The vertical broken line indicates the cutoff onset age of 50 years between early- and late-onset MG. MG, myasthenia gravis
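The "approximate curve" step above is a polynomial least-squares fit to the binned counts. A short sketch of one such fit follows; the simulated onset ages stand in for one subtype's data, and the bin width (5 years, matching intervals such as 50–54 years) is taken from the text.

```python
# Sketch of the sixth-order polynomial approximation of an onset-age histogram.
import numpy as np

rng = np.random.default_rng(0)
onset_ages = rng.normal(51, 13, 234)        # placeholder for one subtype
bins = np.arange(0, 95, 5)                  # 5-year bins, e.g., 50-54 years
counts, edges = np.histogram(onset_ages, bins=bins)
centers = (edges[:-1] + edges[1:]) / 2

coeffs = np.polyfit(centers, counts, deg=6)  # sixth-order polynomial fit
curve = np.polyval(coeffs, centers)          # smoothed approximate curve
peak = centers[np.argmax(curve)]
print(f"approximate peak onset age: {peak:.0f} years")
```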
Early-stage response to treatment and stability of improved status among the five subtypes: Details of past immunotherapy for each of the five subtypes are shown at the top of Table 5 (columns as above: ocular MG | thymoma-associated MG | MG with thymic hyperplasia | AChR-Ab-negative MG | (MuSK-Ab-positive) | AChR-Ab-positive MG without thymic abnormalities | total).
Past immunotherapy (n = 923):
Peak dose of oral PSL, mg/day: 9.2 ± 12.2, 5.0† | 28.5 ± 18.8, 30.0† | 29.7 ± 19.4, 30.0† | 18.8 ± 17.2, 15.0 | (32.6 ± 20.6, 30.0) | 23.7 ± 20.2, 20.0 | 21.5 ± 19.3, 15.0
Duration of PSL ≥20 mg/day, M: 0.0 ± 0.0, 0.0† | 12.0 ± 25.2, 5.0† | 13.0 ± 27.3, 6.0† | 3.8 ± 7.0, 0.0 | (7.2 ± 9.5, 4.0) | 8.2 ± 17.0, 2.0 | 7.9 ± 19.3, 1.0
CNIs, %: 24.0* | 68.2* | 54.0 | 67.4 | (72.7) | 58.1 | 52.9
PP, %: 2.0* | 48.1* | 22.1 | 46.0* | (54.5) | 37.2 | 27.3
IVIG, %: 6.1* | 36.1 | 29.9 | 42.5* | (27.3) | 24.7 | 15.0
The initial response to treatment (n = 923; see Fig. 2a) and the stability of improved status for each subtype are given in the middle and bottom of Table 5, as presented above.
Discussion: The present analyses, based on several sets of variables, classified 923 MG patients into the same five subtypes, with the same onset-age histogram characteristics as reported in our previous study [8]: ocular MG (AChR-Ab positivity, 77%; histogram of onset age, skewed to older age); thymoma-associated MG (100%; normal distribution); MG with thymic hyperplasia (89%; skewed to younger age); AChR-Ab-negative MG (0%; normal distribution); and AChR-Ab-positive MG without thymic abnormalities (100%; skewed to older age). The results from the two different samples demonstrated high reproducibility, which suggests the reliability of our five-subtype classification method. The analyses suggested two points. First, the distinction between ocular and generalized MG is more fundamental than distinctions based on onset age, thymic pathology, or AChR-Ab positivity. Second, AChR-Ab-negative MG shows a normal distribution of onset age and therefore does not fit a classification based on onset age. Therefore, the often-used three-type classification (early-onset, late-onset, and thymoma-associated MG) is probably best reserved for generalized, AChR-Ab-positive phenotypes. Consistently, in our five-subtype classification, MG with thymic hyperplasia (with early-onset age), AChR-Ab-positive MG without thymic abnormalities (with late-onset age), and thymoma-associated MG were generalized, AChR-Ab-positive phenotypes. In fact, the results of our statistically derived classification are consistent with a recently reported classification of MG by Gilhus et al. [16, 17], which included the following categories: early-onset MG; late-onset MG; thymoma-associated MG; MuSK-Ab-positive MG; lipoprotein-related protein 4 (LRP4)-Ab-positive MG; seronegative MG; and ocular MG. In addition, they commented that early- and late-onset MG should be distinguished according to onset age only in patients with generalized symptoms and AChR-Ab. In the present study, because of their small numbers, MuSK-Ab-positive MG patients were not separated, and LRP4-Ab positivity was not systematically determined, although MG patients with MuSK-Ab or LRP4-Ab have distinct clinical manifestations and a unique pathological mechanism [15].
Ocular MG was found to have unique characteristics, such as a higher onset age, a male predominance, and an ocular muscle-specific pathogenesis [18], which may be related to an aging-associated susceptibility of the ocular muscles to antibodies against the neuromuscular junction. Given that response to treatment and stability of improved status were substantially better in ocular MG than in the other four subtypes, it seems reasonable to conclude that ocular MG should be treated as a distinct subgroup of MG in the clinical setting. Among the four generalized subtypes in the present classification, both early-stage response to treatment and stability of improved status were worst in AChR-Ab-negative MG, although symptoms at the worst condition were not particularly severe. Patients with MuSK-Ab-positive MG showed better results despite more severe worst conditions, which suggests that AChR-Ab-negative MG, excluding MuSK-Ab-positive MG, is distinct from the other generalized MG subtypes in terms of response to therapy. Overall, as shown in Fig. 2a, each of the five present subtypes showed a different level of response to treatment, whereas such differences among the three commonly used subtypes (early-onset, late-onset, and thymoma-associated MG) remain somewhat unclear (Fig. 2b). It would be more helpful in the clinical setting to elucidate the levels of response to particular types of medication or therapy (e.g., corticosteroids, non-steroid immunosuppressants, intravenous immunoglobulin, and plasmapheresis) in the five subtypes. However, it was difficult to analyze such response levels, as multiple treatment agents and methods were employed simultaneously in most individual patients. We are now analyzing response levels according to patterns of immune treatment (treatment strategies) in generalized MG patients [19]. Such an analysis should also be performed for the present five subtypes, but it could not be addressed in the present report. The present study had some limitations. First, 331 (30.4%) of the 1,088 patients were included in our previous survey in 2012, which might have affected the reproducibility of the present five-subtype classification. Second, almost all of the MG patients in our database are Japanese; therefore, a race/ethnicity bias may have affected the results. Finally, some MG patients with thymic hyperplasia might have been classified as AChR-Ab-positive MG without thymic abnormalities, because the diagnosis of thymic hyperplasia is based on pathological examination after thymectomy. However, the frequency of thymectomy for MG patients without thymoma has been decreasing [20]; therefore, some of the AChR-Ab-positive MG patients with younger onset age who had not undergone thymectomy could have had thymic hyperplasia. Conclusion: The results of the present study suggest that MG patients can be classified into the following five subtypes, in order of priority: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; AChR-Ab-negative MG; and AChR-Ab-positive MG without thymic abnormalities. All MG patients can be allocated to one of these subtypes based on the results of routine examinations. These five subtypes were shown to have characteristic demographic features, clinical severity, and therapeutic responses.
Therefore, our five-subtype classification method is expected to be beneficial not only for elucidating disease types, but also for planning proper treatment for individual patients.
Background: We have previously reported using two-step cluster analysis to classify myasthenia gravis (MG) patients into the following five subtypes: ocular MG; thymoma-associated MG; MG with thymic hyperplasia; anti-acetylcholine receptor antibody (AChR-Ab)-negative MG; and AChR-Ab-positive MG without thymic abnormalities. The objectives of the present study were to examine the reproducibility of this five-subtype classification using a new data set of MG patients and to identify additional characteristics of these subtypes, particularly in regard to response to treatment. Methods: A total of 923 consecutive MG patients underwent two-step cluster analysis for the classification of subtypes. The variables used for classification were sex, age of onset, disease duration, presence of thymoma or thymic hyperplasia, positivity for AChR-Ab or anti-muscle-specific tyrosine kinase antibody, positivity for other concurrent autoantibodies, and disease condition at worst and current. The period from the start of treatment until the achievement of minimal manifestation status (early-stage response) was determined and then compared between subtypes using Kaplan-Meier analysis and the log-rank test. In addition, between subtypes, the rate of the number of patients who maintained minimal manifestations during the study period/that of patients who only achieved the status once (stability of improved status) was compared. Results: As a result of two-step cluster analysis, 923 MG patients were classified into five subtypes as follows: ocular MG (AChR-Ab-positivity, 77%; histogram of onset age, skewed to older age); thymoma-associated MG (100%; normal distribution); MG with thymic hyperplasia (89%; skewed to younger age); AChR-Ab-negative MG (0%; normal distribution); and AChR-Ab-positive MG without thymic abnormalities (100%, skewed to older age). Furthermore, patients classified as ocular MG showed the best early-stage response to treatment and stability of improved status, followed by those classified as thymoma-associated MG and AChR-Ab-positive MG without thymic abnormalities; by contrast, those classified as AChR-Ab-negative MG showed the worst early-stage response to treatment and stability of improved status. Conclusions: Differences were seen between the five subtypes in demographic characteristics, clinical severity, and therapeutic response. Our five-subtype classification approach would be beneficial not only to elucidate disease subtypes, but also to plan treatment strategies for individual MG patients.
Background: Myasthenia gravis (MG) is a neurological disorder that manifests as fatigable and fluctuating weakness of voluntary muscles, mediated by autoantibodies against neuromuscular junction proteins in skeletal muscle that impair neuromuscular transmission [1]. MG typically involves the ocular, bulbar, and extremity muscles and, in severe cases, the respiratory muscles. The clinical course and outcome in MG are affected by several different autoantibodies, thymic abnormalities, onset age, and disease severity, as well as by response to treatment [2–4]. MG is distinguished according to the production of pathogenic autoantibodies such as anti-acetylcholine receptor antibody (AChR-Ab) and anti-muscle-specific tyrosine kinase antibody (MuSK-Ab) [1, 5, 6]. Clinically, MG is often classified into the following three subtypes based on thymic abnormalities and onset age: thymoma-associated MG; early-onset MG (onset age <50 years); and late-onset MG (onset age ≥50 years) [7]. Furthermore, a distinction is made in the clinical setting (for example, between ocular and generalized MG) based on the distribution of symptoms. Previously, we reported classifying MG into the following five subtypes using two-step cluster analysis of a detailed cross-sectional data set of 640 consecutive patients (Japan MG Registry Study 2012): ocular MG; generalized thymoma-associated MG; generalized MG with thymic hyperplasia; generalized AChR-Ab-negative MG; and generalized AChR-Ab-positive MG without thymic abnormalities [8]. However, this five-subtype classification approach requires further confirmation, and its clinical relevance remains to be established. Therefore, in 2015, we conducted a larger cross-sectional survey to obtain clinical parameters from 1,088 consecutive MG patients. In the present study, using this new data set, we attempted to confirm the reproducibility of our five-subtype classification approach and to specify additional characteristics of these five subtypes, with a particular focus on response to treatment in the clinical setting.
Keywords: Classification | Cluster analysis | Myasthenia | Onset age | Treatment
MeSH terms: Adolescent | Adult | Aged | Aged, 80 and over | Child | Cluster Analysis | Female | Humans | Japan | Kaplan-Meier Estimate | Male | Middle Aged | Myasthenia Gravis | Reproducibility of Results | Severity of Illness Index | Thymoma | Thymus Neoplasms | Young Adult
Effect of chronic lymphocytic thyroiditis on the efficacy and safety of ultrasound-guided radiofrequency ablation for papillary thyroid microcarcinoma.
31359613
Chronic lymphocytic thyroiditis (CLT) is an autoimmune disease commonly associated with papillary thyroid carcinoma and characterized by a smaller primary tumor size at presentation. The efficacy and safety of ultrasound-guided radiofrequency ablation (RFA) for papillary thyroid microcarcinoma (PTMC) coexisting with CLT are still unknown.
BACKGROUND
Sixty patients with unifocal PTMC were enrolled and classified into PTMC and PTMC+CLT groups (n = 30/group). CLT was diagnosed histopathologically. The ablation area exceeded the tumor margins, and was evaluated by US and contrast-enhanced US (CEUS) for residual tumor to prevent recurrence. Three months after ablation, US-guided core-needle biopsy was performed to assess the presence of residual and recurrent cancer. Preoperative and postoperative data on patients and tumors were recorded and analyzed.
METHODS
There were no differences between groups in age, sex, preoperative tumor volume, ablation time, or ablation power (P > 0.05). There was also no significant difference in postoperative ablation zone volume between the groups at the 1-, 3-, 6-, 12-, and 18-month follow-ups (P > 0.05). The volume reduction rate significantly differed between the two groups at month 3 (P = 0.03). The ablation area could not be identified on US and CEUS at 9.8 ± 5.0 and 10.0 ± 4.8 months in the PTMC and PTMC + CLT groups, respectively (P = 0.197). No serious complications occurred during and after ablation. No residual cancer cells were found on biopsy after ablation.
RESULTS
RFA was effective in patients with PTMC+CLT, and its therapeutic efficacy and safety were similar to those in patients with PTMC without CLT.
CONCLUSIONS
[ "Adult", "Carcinoma, Papillary", "Catheter Ablation", "Comorbidity", "Endoscopic Ultrasound-Guided Fine Needle Aspiration", "Female", "Hashimoto Disease", "Humans", "Male", "Neoplasm Recurrence, Local", "Survival Analysis", "Thyroid Neoplasms", "Treatment Outcome" ]
6746112
INTRODUCTION
Since the mid-1990s, the incidence of thyroid cancer has increased worldwide, and it is reportedly the fastest-growing cancer. In the United States, the overall incidence of thyroid cancer increased by 3% annually from 1974 to 2013 [1], and the number of new cases likely reached 53,900 in 2018 [2]. In China, the incidence of thyroid cancer is also increasing, and in 2014 it was among the top four cancers in terms of incidence among women [3]. Thyroid cancer includes several pathological types, of which papillary thyroid carcinoma (PTC) is the most common. Tumors ≤10 mm in diameter are defined as papillary thyroid microcarcinomas (PTMCs). These have been observed in 15.5% of autopsies in which whole-gland examination was performed and are typically associated with a good prognosis [4].
Chronic lymphocytic thyroiditis (CLT), also known as Hashimoto's thyroiditis, is an autoimmune disease characterized by widespread lymphocyte infiltration, fibrosis, and parenchymal atrophy of the thyroid tissue. The male-to-female incidence ratio of CLT is 1:5-20. CLT is commonly associated with PTC and is characterized by a smaller primary tumor size at presentation. Leni et al. [5] reported that one-third of PTC cases (33.3%, 168/505) have coexisting CLT. No association has been found between CLT and follicular, medullary, or anaplastic thyroid cancer [6]. The coexistence of PTC and CLT is reportedly associated with better prognoses and lower rates of lymph node metastasis, distant metastasis, and recurrence, particularly in patients aged ≥45 years [7]. There is controversy surrounding the treatment of PTC with coexisting CLT. For PTMC, some guidelines recommend surgery [8, 9]. However, traditional surgery results in injuries, prominent neck scarring, and a lowered quality of life, especially among older patients with chronic comorbidities. The Korean Society of Thyroid Radiology consensus statement [10] highlights the need for active surveillance rather than immediate surgery in adult patients with low-risk PTMC. However, a majority of patients experience severe anxiety if no treatment is provided. Therefore, minimally invasive therapy is utilized in many cases. Thermal tumor ablation (using microwave, radiofrequency, or laser ablation, or high-intensity focused ultrasound [US]) has been applied in clinical practice with good feedback [11, 12, 13]. Radiofrequency ablation (RFA) was found to be efficient and safe in the treatment of PTMC in our preliminary study [14]. In the present study, we aimed to investigate the therapeutic effect and safety of RFA in PTMC cases with coexisting CLT.
RESULTS
Preoperative patient and tumor characteristics: There were no significant differences in age, sex, or tumor volume between the PTMC and PTMC+CLT groups (P > 0.05) (Table 1). All thyroid nodules were hypoechoic; other characteristics are shown in Table 2. There were no significant differences in nodule location, margin, shape, height/width ratio, calcification, or CDFI type between the two groups (P > 0.05). The distance between the nodule and the trachea or common carotid artery (CCA) was <2 mm in 13 nodules, including eight near the trachea (0.117 ± 0.024 cm) and five near the CCA (0.094 ± 0.013 cm). There were no significant differences in free T3, free T4, or TSH values between the two groups (U = 0.434, P = 0.664; U = 0.452, P = 0.651; and U = 0.886, P = 0.376, respectively). All patients with CLT were positive for TgAb and TPOAb. The thyroid function test results of the patients with PTMC alone were in the normal range.
Table 1. Comparison of the preoperative data of the patients and tumor volume between the PTMC+CLT and PTMC groups. Abbreviations: CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma.
Table 2. Ultrasonic characteristics of the tumors before RFA in the PTMC+CLT and PTMC groups. Abbreviations: CDFI, color Doppler flow imaging; CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma.
Ablation power and time in the PTMC and PTMC+CLT groups: The ablation powers in the PTMC and PTMC+CLT groups were 0.9 ± 0.5 kJ and 0.7 ± 0.5 kJ, respectively (t = 1.453, P = 0.1515); the ablation times were 3.8 ± 2.1 min and 2.9 ± 2.1 min, respectively (t = 1.801, P = 0.0768).
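The t statistics reported above correspond to two-sample t-tests on the per-group measurements. A minimal SciPy sketch follows; the per-patient values are not available here, so the samples below are placeholders drawn to match the reported group means and SDs.

```python
# Sketch of the two-sample comparison of ablation energy between groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
energy_ptmc = rng.normal(0.9, 0.5, 30)   # kJ, placeholder for the PTMC group
energy_clt = rng.normal(0.7, 0.5, 30)    # kJ, placeholder for the PTMC+CLT group

t_stat, p_value = ttest_ind(energy_ptmc, energy_clt)  # Student's t-test
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
```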
Tumor volume and VRR after RFA: In terms of postoperative ablation volume, there was no significant difference between the two groups at 1, 3, 6, 12, and 18 months after RFA (P > 0.05) (Table 3, Figure 1). While there was a significant difference in VRR between the two groups 3 months after ablation (t = 1.28, P = 0.03), no difference was observed at any other time point (P > 0.05) (Table 4, Figure 2).
Table 3. Changes in the tumor volume between the PTMC+CLT and PTMC groups after RFA and at each follow-up. Abbreviations: CLT, chronic lymphocytic thyroiditis; M, mean; PTMC, papillary thyroid microcarcinoma; RFA, radiofrequency ablation; SD, standard deviation.
Table 4. Changes in the tumor volume reduction ratio between the PTMC+CLT and PTMC groups after RFA and at each follow-up. Abbreviations: CLT, chronic lymphocytic thyroiditis; M, mean; PTMC, papillary thyroid microcarcinoma; RFA, radiofrequency ablation; SD, standard deviation.
Figure 1. Radiofrequency ablation (RFA) treatment and follow-up of one case of papillary thyroid microcarcinoma with chronic lymphocytic thyroiditis. (A) A hypoechoic nodule sized 0.4 × 0.5 × 0.4 cm, with irregular margins and microcalcifications, was displayed in the right thyroid lobe (arrow). (B) Uneven and irregular hypo-enhancement in the nodule was observed by contrast-enhanced ultrasound (CEUS) (arrow, left image). (C) During RFA, the nodule was covered by a hyperechoic area (arrow) on US. (D) Immediately after RFA, the ablation area showed complete absence of enhancement on CEUS, and its size (0.7 × 1.1 × 1.0 cm) was larger than the initial nodule size. (E) One month after RFA, the ablation area decreased in size to 0.9 × 0.8 × 0.5 cm. (F) Three months after RFA, the ablation area decreased to 0.6 × 0.5 × 0.6 cm. (G) Six months after RFA, the ablation area decreased to 0.3 × 0.2 × 0.3 cm. (H) The ablation area could no longer be identified on either US or CEUS. (I) Before RFA, pathologic examination of this nodule showed papillary thyroid carcinoma accompanied by chronic lymphocytic thyroiditis. (J) Three months after RFA, pathology showed degenerated and necrotic follicular epithelia, interstitial fibrous tissue hyperplasia, and hyaline degeneration in the ablation lesion, with lymphocyte infiltration and a multinucleated giant cell reaction in the adjacent thyroid tissue. No residual cancer was found.
Figure 2. Changes in ablation zone volume in PTMC cases with and without CLT at each follow-up. PTMC, papillary thyroid microcarcinoma; CLT, chronic lymphocytic thyroiditis.
After RFA, the times to tumor disappearance were 9.8 ± 5.0 months and 10.0 ± 4.8 months in the PTMC and PTMC+CLT groups, respectively (t = 0.16, P = 0.88). A total of 43% of the nodules in the PTMC+CLT group resolved within 12 months, and 47% of those in the PTMC group resolved within 6 months. No significant differences were observed with respect to the number of ablation areas between the groups (U = 0.319, P > 0.05) (Table 5). Calcification was observed in 20 PTMC cases, with a time to tumor disappearance of 10.2 ± 5.1 months; no calcification was found in the other 40 cases, with a time to tumor disappearance of 9.8 ± 4.9 months; the difference was not significant (t = 0.28, P = 0.78). No residual cancer cells were found by CNB 3 months after ablation (Figure 1). No recurrent tumors or suspicious metastatic lymph nodes were detected.
Table 5. Number of patients with tumor disappearance in the PTMC+CLT and PTMC groups after RFA and at each follow-up. Abbreviations: CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma; RFA, radiofrequency ablation.
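The volume and volume reduction rate (VRR) figures above can be reproduced from the three orthogonal diameters measured on US. The methods section is not reproduced here, so this sketch assumes the usual conventions for thyroid nodules: the ellipsoid formula V = π·a·b·c/6 and VRR = (V_initial − V_followup)/V_initial × 100%. The diameters used in the example are those of the case shown in Figure 1.

```python
# Sketch of the volume and VRR computations behind Tables 3-4 (assumed formulas).
import math

def ellipsoid_volume(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Nodule volume in mL from three orthogonal diameters in cm."""
    return math.pi * a_cm * b_cm * c_cm / 6

def vrr(initial_ml: float, followup_ml: float) -> float:
    """Volume reduction rate as a percentage of the initial volume."""
    return (initial_ml - followup_ml) / initial_ml * 100

v0 = ellipsoid_volume(0.7, 1.1, 1.0)   # ablation zone right after RFA (Fig. 1D)
v3 = ellipsoid_volume(0.6, 0.5, 0.6)   # three months after RFA (Fig. 1F)
print(f"V0 = {v0:.3f} mL, V3m = {v3:.3f} mL, VRR = {vrr(v0, v3):.1f}%")
```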
Complications Slight voice hoarseness was observed in one patient (1.7%, 1/60) after RFA; voice recovery occurred without treatment in 1 week. The presence of moderate‐intensity pain was reported by two patients (3.3%, 2/60), and slight fever was noted in one patient (1.7%, 1/60); in all cases, spontaneous recovery was observed. Slight voice hoarseness was observed in one patient (1.7%, 1/60) after RFA; voice recovery occurred without treatment in 1 week. The presence of moderate‐intensity pain was reported by two patients (3.3%, 2/60), and slight fever was noted in one patient (1.7%, 1/60); in all cases, spontaneous recovery was observed.
[ "INTRODUCTION", "Patients", "Instrument and equipment", "Preablation assessment", "Ablation procedure", "Follow‐up", "Statistical analysis", "Preoperative patient and tumor characteristics", "Ablation power and time in the PTMC and PTMC+CLT groups", "Tumor volume and VRR after RFA", "Complications", "AUTHOR CONTRIBUTIONS" ]
[ "Since the mid‐1990s, the incidence of thyroid cancer has increased worldwide and is reportedly the fastest‐growing cancer. In the United States, the overall incidence of thyroid cancer increased by 3% annually from 1974 to 20131 and the number of new cases likely reached 53 900 in 2018.2 In China, the incidence of thyroid cancer is also increasing, and in 2014, it was among the top four cancers in terms of incidence among women.3 Thyroid cancer includes several pathological types, of which papillary thyroid carcinoma (PTC) is the most common. Tumors ≤10 mm in diameter are defined as papillary thyroid microcarcinomas (PTMCs). These have been observed in 15.5% of autopsies in which whole‐gland examination was performed and are typically associated with good prognosis.4\n\nChronic lymphocytic thyroiditis (CLT), also known as Hashimoto's thyroiditis, is an autoimmune disease characterized by widespread lymphocyte infiltration, fibrosis and parenchymal atrophy of the thyroid tissue. The male‐to‐female incidence ratio of CLT is 1:5‐20. CLT is commonly associated with PTC, and is characterized by a smaller primary tumor size at presentation. Leni et al5 reported that one‐third of PTC cases (33.3%, 168/505) have CLT coexistence. No association has been found between CLT and follicular, medullary, or anaplastic thyroid cancer.6 The coexistence of PTC and CLT is reportedly associated with better prognoses, lower rates of lymph node and distant metastases, and recurrence, particularly in patients aged ≥45 years.7 There is controversy surrounding the treatment of PTC with coexisting CLT. For PTMC, some guidelines recommend the performance of surgery.8, 9 However, traditional surgery results in injuries, prominent neck scarring, and a lowered quality of life, especially among older patients with chronic comorbidities. The Korean Society of Thyroid Radiology consensus statement10 highlights the need for active surveillance rather than immediate surgery in adult patients with low‐risk PTMC. However, a majority of patients experience severe anxiety if no treatment is provided. Therefore, minimally invasive therapy is utilized in many cases. Thermal tumor ablation (using microwave, radiofrequency, or laser ablation, or high‐intensity focused ultrasound [US]) has been applied in clinical practice with good feedback.11, 12, 13 Radiofrequency ablation (RFA) was found to be efficient and safe in the treatment of PTMC in our preliminary study.14 We aimed to investigate the therapeutic effect and safety of RFA in PTMC cases with CLT in this study.", "Patients who fulfilled the following criteria were enrolled: (1) presence of PTC confirmed by US‐guided CNB; (2) maximum diameter less than 1 cm; (3) absence of capsular infiltration and extrathyroidal invasion, and lack of LNM detection; (4) absence of neck irradiation history; and (5) unable or refused to receive surgery. 
"US and contrast‐enhanced US (CEUS) examinations were performed using a Siemens Acuson Sequoia 512 Ultrasound System (Siemens, Mountain View, CA) with a 15L8W linear array transducer. US‐guided RFA and CNB were performed using the same system with a 6L3 linear array transducer. CNB of each nodule was performed after RFA using an 18‐gauge biopsy needle (Biopty; Bard, Covington, GA). A bipolar RFA generator (CelonLabPOWER; Olympus Surgical Technologies Europe, Hamburg, Germany) and an 18‐gauge bipolar radiofrequency (RF) applicator with a 0.9‐cm active tip (CelonProSurge micro 100‐T09; Olympus Surgical Technologies Europe) were used for RFA treatment. During the application of RF energy, the electric impedance of the tissue between the two electrodes at the tip of the RF applicator was measured continuously by the generator, and the power was automatically reduced if the temperature at the electrodes reached 100°C.",
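The generator behavior just described, continuous impedance monitoring with an automatic power reduction at 100°C, can be pictured as a simple feedback loop. The sketch below is purely illustrative and is not the CelonLabPOWER firmware; the power levels and the sensor/actuator hooks are assumptions.

```python
# Illustrative sketch of temperature-limited RF power control.
# NOT the actual CelonLabPOWER firmware; thresholds and the
# read_electrode_temperature()/set_output_power() hooks are hypothetical.

MAX_ELECTRODE_TEMP_C = 100.0   # cutoff temperature described in the Methods
NOMINAL_POWER_W = 3.0          # initial power used in this study
REDUCED_POWER_W = 1.0          # assumed fallback level

def regulate_power(read_electrode_temperature, set_output_power):
    """One control tick: lower output power when the electrode overheats."""
    temp_c = read_electrode_temperature()
    if temp_c >= MAX_ELECTRODE_TEMP_C:
        set_output_power(REDUCED_POWER_W)   # back off to protect tissue
    else:
        set_output_power(NOMINAL_POWER_W)   # resume nominal delivery
    return temp_c
```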
"Careful history‐taking and thorough physical examinations were conducted for all patients in our department. All patients who qualified for RFA underwent thyroid US and CEUS, together with determination of the levels of free thyroid hormones T3 (normal reference range: 2.76‐6.30 pmol/L) and T4 (10.42‐24.32 pmol/L), thyroid‐stimulating hormone (TSH) (0.35‐5.5 mU/L), TPOAb (<60 IU/mL), and TgAb (<60 IU/mL). Patients were supine with the neck extended during the procedure, and an intravenous line was introduced into an elbow vein. US appearances were evaluated and recorded according to the multidisciplinary consensus statement for thyroid nodules.15 For each tumor, the size, volume, location, echogenicity, margin, shape (height/width), calcifications, and vascularity were evaluated by US. The volume of each tumor was calculated as V = πabc/6 (V: volume; a: transverse diameter; b: vertical diameter; c: anteroposterior diameter). CEUS with a low mechanical index (0.19‐0.24) was used to describe the blood supply of the tumor before and after RFA. The contrast agent was 59 mg of dry‐powder SonoVue reconstituted in 5 mL of normal saline. CEUS was performed after a bolus injection of SonoVue (2.4 mL), followed by a normal saline flush (5 mL). Real‐time microbubble perfusion within the tumor and surrounding tissues was observed for a minimum of 2 minutes and recorded electronically. Capsular and extrathyroidal invasion of thyroid cancer was evaluated by both US and CEUS. Extracapsular extension on the US image was defined as discontinuity of the anterior or posterior hyperechoic thyroid capsule. During real‐time, multi‐angle scanning, capsular infiltration and extrathyroidal invasion on CEUS appeared as low‐ or nonenhancing areas of the thyroid capsule invaded by malignant nodules.16",
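As a worked example of the ellipsoid volume formula V = πabc/6 used above, the following minimal Python sketch computes a tumor volume from three orthogonal diameters, using the pre‐ablation dimensions of the case shown in Figure 1 (0.4 × 0.5 × 0.4 cm).

```python
import math

def ellipsoid_volume_ml(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Tumor volume V = pi*a*b*c/6 from three orthogonal diameters (cm).

    1 cm^3 = 1 mL, so the result can be read directly in milliliters.
    """
    return math.pi * a_cm * b_cm * c_cm / 6.0

# Diameters of the Figure 1 nodule before ablation: 0.4 x 0.5 x 0.4 cm.
v0 = ellipsoid_volume_ml(0.4, 0.5, 0.4)
print(f"initial tumor volume ~= {v0:.3f} mL")  # ~0.042 mL
```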
"Conventional US was performed to evaluate the relationship between the tumor and critical structures in the neck, such as the trachea, esophagus, jugular vein, common carotid artery (CCA), and recurrent laryngeal nerves, in order to select the best insertion route. A local anesthetic (1% lidocaine) was injected at the subcutaneous puncture site and the thyroid anterior capsule. If the distance between the tumor and a critical neck structure was <5 mm, normal saline was injected through a second needle (23 gauge) to create a separation of at least 1 cm between the tumor and the critical structure and thus prevent thermal injury. RFA was performed using the moving‐shot technique. The initial radiofrequency power was 3 W and was increased to 5 W if a transient hyperechoic zone did not form at the electrode tip within 5‐10 s. To prevent residual tumor and recurrence, the ablation area exceeded the tumor margin, and the procedure ended when the tumor was completely covered by the hyperechoic zone in three‐dimensional space on US. The extent of ablation was evaluated by CEUS immediately afterward, and the electrode needle was withdrawn only after confirming the absence of residual tumor (complete absence of enhancement in the ablation area). If enhancement was seen in any part of the tumor, complementary ablation was applied promptly to avoid leaving residual cancer cells. Throughout the procedure, special attention was given to protecting critical neck structures to prevent significant complications such as hematoma or nerve injury. All complications occurring during and after RFA were carefully assessed according to patients' clinical signs and symptoms, and each patient was observed for 1‐2 hours after RFA.", "Patients were followed up with conventional US and CEUS at 1, 3, 6, and 12 months after RFA, and every 6 months thereafter. The ablation area was evaluated by CEUS to measure its volume and to provide a baseline for recurrence screening. The volume reduction ratio (VRR) was calculated as VRR (%) = [(initial volume − final volume)/initial volume] × 100%. Cervical lymph node involvement was evaluated by US; suspicious lymph nodes (globular shape, loss of the normal echogenic hilum, peripheral rather than hilar flow, and microcalcifications17) were biopsied. US‐guided CNB was performed at the center and margin of the ablation area and in the surrounding thyroid parenchyma 3 months after RFA. Complications during the follow‐up period were assessed according to the reporting standards of the Society of Interventional Radiology.18 All RFA procedures and preoperative examinations were performed by one experienced physician to exclude bias associated with different operators; postoperative CNB was performed by the same physician. Follow‐up conventional US and CEUS examinations were performed by two other experienced physicians who were blinded to the histological and imaging findings, and discrepancies were resolved by a third experienced physician who had specialized in thyroid US and CEUS for over 15 years.",
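To make the VRR definition concrete, this short sketch applies it to the ablation‐zone measurements reported for the Figure 1 case (0.7 × 1.1 × 1.0 cm immediately after RFA, 0.3 × 0.2 × 0.3 cm at 6 months), reusing the ellipsoid formula from the previous sketch; it is purely illustrative.

```python
import math

def ellipsoid_volume_ml(a_cm, b_cm, c_cm):
    # V = pi*a*b*c/6, with diameters in cm (1 cm^3 = 1 mL)
    return math.pi * a_cm * b_cm * c_cm / 6.0

def volume_reduction_ratio(initial_ml, final_ml):
    """VRR (%) = (initial - final) / initial * 100."""
    return (initial_ml - final_ml) / initial_ml * 100.0

# Ablation-zone sizes from the Figure 1 case:
v_immediate = ellipsoid_volume_ml(0.7, 1.1, 1.0)   # right after RFA
v_6_months = ellipsoid_volume_ml(0.3, 0.2, 0.3)    # 6 months later
print(f"VRR at 6 months ~= {volume_reduction_ratio(v_immediate, v_6_months):.1f}%")
```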
"All statistical analyses were performed using the SPSS software package, Version 13 for Windows (SPSS Inc, Chicago, IL). A chi‐squared test (χ2) was used to analyze categorical variables. Continuous data are reported as mean ± standard deviation (range). The volume and VRR of the ablation area before RFA and at each follow‐up were analyzed with the t test. The Wilcoxon signed rank test was used to compare tumor calcification, color Doppler flow imaging (CDFI) blood flow grades, and changes in the number of patients with tumor disappearance at each follow‐up between the PTMC+CLT and PTMC groups. The Wilcoxon rank sum test was used to compare free T3, T4, and TSH values in patients with and without CLT. P values <0.05 were considered statistically significant.", "There were no significant differences in age, sex, or tumor volume between the PTMC and PTMC+CLT groups (P > 0.05) (Table 1). All thyroid nodules were hypoechoic; other characteristics are shown in Table 2. There were no significant differences in nodule location, margin, shape (height/width), calcification, or CDFI type between the two groups (P > 0.05). The distance between the nodule and the trachea or common carotid artery (CCA) was <2 mm for 13 nodules, including eight near the trachea (0.117 ± 0.024 cm) and five near the CCA (0.094 ± 0.013 cm). There were no significant differences in free T3, T4, or TSH values between the two groups (U = 0.434, P = 0.664; U = 0.452, P = 0.651; and U = 0.886, P = 0.376, respectively). All patients with CLT had positive TgAb and TPOAb; the thyroid function test results of those with PTMC alone were within the normal range. Comparison of the preoperative data of the patients and tumor volume between the PTMC+CLT and PTMC groups. Abbreviations: CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma. Ultrasonic characteristics of the tumors before RFA in the PTMC+CLT and PTMC groups. Abbreviations: CDFI, color Doppler flow imaging; CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma.", "The total ablation energy delivered in the PTMC and PTMC+CLT groups was 0.9 ± 0.5 kJ and 0.7 ± 0.5 kJ, respectively (t = 1.453, P = 0.1515); ablation times were 3.8 ± 2.1 min and 2.9 ± 2.1 min, respectively (t = 1.801, P = 0.0768).", "In terms of postoperative ablation volume, there was no significant difference between the two groups at 1, 3, 6, 12, and 18 months after RFA (P > 0.05) (Table 3, Figure 1). While there was a significant difference in VRR between the two groups 3 months after ablation (t = 1.28, P = 0.03), no difference was observed at any other time point (P > 0.05) (Table 4, Figure 2).",
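The group comparisons reported above have roughly the following shape when reproduced with SciPy rather than SPSS. The arrays below are made‐up placeholders, since per‐patient measurements are not published here; only the structure of the tests is shown.

```python
# Sketch of the reported group comparisons using SciPy instead of SPSS.
# The arrays are made-up placeholders; per-patient values are not
# published in the paper, so only the analysis structure is shown.
import numpy as np
from scipy import stats

ptmc_vrr = np.array([92.0, 97.5, 88.3, 99.1])       # hypothetical VRR (%)
ptmc_clt_vrr = np.array([90.2, 95.0, 85.7, 98.4])   # hypothetical VRR (%)

# Two-sample t test for volume/VRR comparisons between groups.
t_stat, p_val = stats.ttest_ind(ptmc_vrr, ptmc_clt_vrr)

# Rank-sum (Mann-Whitney U) test, as used for free T3/T4/TSH comparisons.
u_stat, p_rank = stats.mannwhitneyu(ptmc_vrr, ptmc_clt_vrr)

# Chi-squared test for categorical variables (e.g., sex by group).
contingency = np.array([[6, 24], [6, 24]])          # hypothetical counts
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)

print(f"t = {t_stat:.2f}, P = {p_val:.2f}; U-test P = {p_rank:.2f}; chi2 P = {p_chi2:.2f}")
```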
"Yan Zhang contributed to the collection, analysis, and interpretation of data and wrote the initial draft. Mingbo Zhang performed the statistical analyses, had full access to all data, and takes responsibility for the accuracy of the data analysis. Ying Zhang contributed to the collection and analysis of data and interpreted the results. Jie Li contributed to the analysis and interpretation of the pathological figures. Jie Tang contributed to review and revision of the manuscript. Yukun Luo contributed to the study design and the initial draft, is the guarantor of the study, had full access to all data, and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors contributed to critical revisions and approved the final version." ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Instrument and equipment", "Preablation assessment", "Ablation procedure", "Follow‐up", "Statistical analysis", "RESULTS", "Preoperative patient and tumor characteristics", "Ablation power and time in the PTMC and PTMC+CLT groups", "Tumor volume and VRR after RFA", "Complications", "DISCUSSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTIONS" ]
[ "Since the mid‐1990s, the incidence of thyroid cancer has increased worldwide and is reportedly the fastest‐growing cancer. In the United States, the overall incidence of thyroid cancer increased by 3% annually from 1974 to 20131 and the number of new cases likely reached 53 900 in 2018.2 In China, the incidence of thyroid cancer is also increasing, and in 2014, it was among the top four cancers in terms of incidence among women.3 Thyroid cancer includes several pathological types, of which papillary thyroid carcinoma (PTC) is the most common. Tumors ≤10 mm in diameter are defined as papillary thyroid microcarcinomas (PTMCs). These have been observed in 15.5% of autopsies in which whole‐gland examination was performed and are typically associated with good prognosis.4\n\nChronic lymphocytic thyroiditis (CLT), also known as Hashimoto's thyroiditis, is an autoimmune disease characterized by widespread lymphocyte infiltration, fibrosis and parenchymal atrophy of the thyroid tissue. The male‐to‐female incidence ratio of CLT is 1:5‐20. CLT is commonly associated with PTC, and is characterized by a smaller primary tumor size at presentation. Leni et al5 reported that one‐third of PTC cases (33.3%, 168/505) have CLT coexistence. No association has been found between CLT and follicular, medullary, or anaplastic thyroid cancer.6 The coexistence of PTC and CLT is reportedly associated with better prognoses, lower rates of lymph node and distant metastases, and recurrence, particularly in patients aged ≥45 years.7 There is controversy surrounding the treatment of PTC with coexisting CLT. For PTMC, some guidelines recommend the performance of surgery.8, 9 However, traditional surgery results in injuries, prominent neck scarring, and a lowered quality of life, especially among older patients with chronic comorbidities. The Korean Society of Thyroid Radiology consensus statement10 highlights the need for active surveillance rather than immediate surgery in adult patients with low‐risk PTMC. However, a majority of patients experience severe anxiety if no treatment is provided. Therefore, minimally invasive therapy is utilized in many cases. Thermal tumor ablation (using microwave, radiofrequency, or laser ablation, or high‐intensity focused ultrasound [US]) has been applied in clinical practice with good feedback.11, 12, 13 Radiofrequency ablation (RFA) was found to be efficient and safe in the treatment of PTMC in our preliminary study.14 We aimed to investigate the therapeutic effect and safety of RFA in PTMC cases with CLT in this study.", "This study was approved by the ethics committee of our hospital, and informed consent was obtained from each patient before the performance of US‐guided core‐needle biopsy (CNB) and RFA. The informed consent form for RFA emphasized that surgery is the routine treatment procedure recommended by guidelines, and that RFA administration cannot prevent the development of recurrent PTMC and undetectable cervical lymph node metastasis (LNM).\n Patients Patients who fulfilled the following criteria were enrolled: (1) presence of PTC confirmed by US‐guided CNB; (2) maximum diameter less than 1 cm; (3) absence of capsular infiltration and extrathyroidal invasion, and lack of LNM detection; (4) absence of neck irradiation history; and (5) unable or refused to receive surgery. 
No significant differences were observed with respect to the number of ablation areas between the groups (u = 0.319, P > 0.05) (Table 5). Calcification was observed in 20 PTMC cases, with time to tumor disappearance of 10.2 ± 5.1 months; no calcification was found in 40 PTMC cases, with time to tumor disappearance of 9.8 ± 4.9 months; no significant difference in time was observed between the PTMC cases with and without calcification (t = 0.28, P = 0.78). No residual cancer cells were found by CNB 3 months after ablation (Figure 1). No recurrent tumors or suspicious metastatic lymph nodes were detected.\nNumber of patients with tumors disappearance in the PTMC+CLT and PTMC groups after RFA and at each follow‐up\nAbbreviations: CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma; RFA, radiofrequency ablation.", "Slight voice hoarseness was observed in one patient (1.7%, 1/60) after RFA; voice recovery occurred without treatment in 1 week. The presence of moderate‐intensity pain was reported by two patients (3.3%, 2/60), and slight fever was noted in one patient (1.7%, 1/60); in all cases, spontaneous recovery was observed.", "The feasibility of RFA in thyroid cancer treatment is still controversial, as some surgeons believe that it is difficult to detect tiny lymph node metastases using US19; in addition, the use of prophylactic central compartment neck dissection (CCND) is more reassuring for patients with clinically node‐negative PTC. However, with the increasing incidence of thyroid cancer, more patients with low‐risk microcarcinomas are receiving unnecessary extended thyroidectomy and prophylactic CCND. Previous studies have shown that the proportion of CCND with negative findings ranges from 57.6% to 84.8%,20, 21 and that excessive surgery decreases patients' quality of life (postoperative complications included transient hypoparathyroidism, permanent hypoparathyroidism, vocal cord palsy, and bleeding,). Therefore, the treatment approach for this disease is changing. In recent years, prophylactic CCND has not been recommended for PTMC patients without LNM.8, 22, 23, 24, 25 The median risk of local‐regional lymph node recurrence varies markedly by clinical staging in patients with pathologically proven neck LNM, with recurrence rates of 2% (range 0%‐9%) in patients with an initial N0 stage vs 22% (range 10%‐42%) in those with initially positive lymph nodes. Furthermore, the median risk of recurrence in LNM patients varies markedly by the number of positive nodes, with values of 4% (range 3%‐8%) in cases with <5 nodes and 19% (range 7%‐21%) in those with >5 nodes.23 At the 5‐year follow‐up, no difference was observed in the outcomes of patients treated with total thyroidectomy and those treated with total thyroidectomy + prophylactic CCND.24 The results of these studies support the use of RFA for PTMC.\nThe presence of multifocality and capsular infiltration indicates a high risk of cancer invasion and metastasis.25, 26 Therefore, cases with a single tumor in the thyroid parenchyma and within the thyroid capsule were eligible for RFA in this study. 
Our preliminary study showed that RFA is efficient with a low complication rate in PTMC treatment.14 CLT is a type of chronic inflammation, and its effect on the ability to recover from the heat damage caused by RFA is still unknown.\nNo differences were observed in the preoperative volume and US characteristics of the tumors between the PTMC and PTMC+CLT groups (P > 0.05), and the patients' age and sex did not differ between the two groups (P > 0.05). Our results showed that recovery after ablation in patients with PTMC+CLT is similar to that in patients with only PTMC. The ablation volume in the PTMC group decreased rapidly in the first 3 months, and the VRR in the PTMC group was greater than that in the PTMC+CLT group (P = 0.03). Tissue damage due to trauma induces acute inflammation. The features of acute and subacute inflammation include the expansion of blood vessels (vasodilation), increase in blood flow (hyperemia), capillary permeability, and migration of neutrophils into the damaged tissue. However, the composition of white blood cells changes rapidly, with macrophages and lymphocytes replacing neutrophils. The hallmark of chronic inflammation is the infiltration of the primary inflammatory cells into the site, leading to the production of inflammatory cytokines, growth factors, and enzymes, thereby leading to progression of tissue damage and secondary repair processes, such as fibrosis and granuloma formation.27 The thyroid parenchyma in the case of CLT is already infiltrated by diffuse chronic inflammatory cells, so inflammation of the ablation zone was more marked in the acute and subacute periods after RFA. Immune cells were activated to phagocytize and remove the necrotic tissue, activating the autoimmune system so we saw no significant difference in ablation zone outcomes between the PTMC and PTMC+CLT groups. Three months after ablation, US‐guided CNB was performed at the center and margin of the ablation zone and in the adjacent thyroid parenchyma; pathological results showed degenerated and necrotic follicular epithelia, interstitial fibrous tissue hyperplasia, and hyaline degeneration in the central and peripheral areas of the ablation lesion, with lymphocyte infiltration and multinucleated giant cell reaction in the adjacent thyroid tissue. No residual cancer was found. Previous studies28, 29 with larger sample size showed that PTC with CLT has a good prognosis, and that the recurrence rate was lower than that associated with only PTC. This may be attributed to the fact that inflammation inhibits cancer‐cell proliferation. CLT may be involved in the destruction of cancer cells that express thyroid‐specific antigens in PTC as a result of its autoimmune response to the thyroid‐specific antigens. Few studies30 have shown no relationship between CLT and PTC. This study indicated that RFA is efficient and safe for PTMC cases with CLT; CLT should not be regarded as a factor that excludes a patient from receiving RFA.\nIn terms of safety, adhesions are found between the thyroid and adjacent connective tissue and muscles after ablation, distorting local anatomy, which results in difficulty performing surgery after ablation; therefore some surgeons have a negative attitude towards ablation. However, liquid isolation zone injection can be utilized to separate the thyroid from critical neck structures; this method is effective in preventing the occurrence of significant complications such as hematoma, and tracheal and nerve injury. 
Postoperative examination by both US and CEUS has shown clear boundaries of the thyroid gland and surrounding structures without signs of adhesion. In this study, 13 PTMC cases in which the distance to the trachea or CCA was <2 mm were successfully treated with RFA as a result of liquid isolation zone injection without serious complications.\nThis study had the limitation of a relatively short follow‐up period; longest follow‐up was 4 years and the shortest 20 months. The outcomes of all these patients should be confirmed in studies with larger sample size and extended follow‐up period.\nIn conclusion, this study found that RFA was effective and safe in PTMC+CLT patients. As evidenced by the low recurrence and high survival rates of these patients, CLT may be regarded as a protective factor for patients enrolled in PTMC treatment using RFA. The present study provides a basis for the study of immune regulation mechanisms induced by thyroid cancer necrosis.", "The authors made no disclosures.", "Yan Zhang contributed to the collection, analysis, and interpretation of data and writing initial draft. Mingbo Zhang performed the statistical analyses, had full access to all data, and takes responsibility for the accuracy of the data analysis in the study. Ying Zhang contributed to the collection, analysis and interpreted the results. Jie Li: contributed to the pathological figures analysis and interpretation. Jie Tang contributed to writing–review and revisions. Yukun Luo contributed to the study design, writing–initial draft, guarantor of the study and had full access to all data in the study, and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors contributed to critical revisions and approved the final version." ]
[ null, "materials-and-methods", null, null, null, null, null, null, "results", null, null, null, null, "discussion", "COI-statement", null ]
[ "ablation", "contrast media", "radiofrequency", "thyroid carcinoma; ultrasonography" ]
INTRODUCTION: Since the mid-1990s, the incidence of thyroid cancer has increased worldwide, and it is reportedly the fastest-growing cancer. In the United States, the overall incidence of thyroid cancer increased by 3% annually from 1974 to 2013,1 and the number of new cases likely reached 53 900 in 2018.2 In China, the incidence of thyroid cancer is also increasing; in 2014, it was among the top four cancers in terms of incidence among women.3 Thyroid cancer includes several pathological types, of which papillary thyroid carcinoma (PTC) is the most common. Tumors ≤10 mm in diameter are defined as papillary thyroid microcarcinomas (PTMCs). These have been observed in 15.5% of autopsies in which whole-gland examination was performed and are typically associated with a good prognosis.4 Chronic lymphocytic thyroiditis (CLT), also known as Hashimoto's thyroiditis, is an autoimmune disease characterized by widespread lymphocyte infiltration, fibrosis, and parenchymal atrophy of the thyroid tissue. The male-to-female incidence ratio of CLT ranges from 1:5 to 1:20. CLT commonly coexists with PTC and is associated with a smaller primary tumor size at presentation. Leni et al5 reported that one-third of PTC cases (33.3%, 168/505) have coexisting CLT. No association has been found between CLT and follicular, medullary, or anaplastic thyroid cancer.6 The coexistence of PTC and CLT is reportedly associated with better prognoses and lower rates of lymph node metastasis, distant metastasis, and recurrence, particularly in patients aged ≥45 years.7 There is controversy surrounding the treatment of PTC with coexisting CLT. For PTMC, some guidelines recommend surgery.8, 9 However, traditional surgery results in injuries, prominent neck scarring, and a lowered quality of life, especially among older patients with chronic comorbidities. The Korean Society of Thyroid Radiology consensus statement10 highlights the need for active surveillance rather than immediate surgery in adult patients with low-risk PTMC. However, a majority of patients experience severe anxiety if no treatment is provided. Therefore, minimally invasive therapy is utilized in many cases. Thermal tumor ablation (using microwave, radiofrequency, or laser ablation, or high-intensity focused ultrasound [US]) has been applied in clinical practice with good feedback.11, 12, 13 Radiofrequency ablation (RFA) was found to be efficient and safe in the treatment of PTMC in our preliminary study.14 In this study, we aimed to investigate the therapeutic effect and safety of RFA in PTMC cases with coexisting CLT.

MATERIALS AND METHODS: This study was approved by the ethics committee of our hospital, and informed consent was obtained from each patient before US-guided core-needle biopsy (CNB) and RFA. The informed consent form for RFA emphasized that surgery is the routine treatment recommended by guidelines, and that RFA cannot prevent the development of recurrent PTMC or of undetectable cervical lymph node metastasis (LNM).

Patients: Patients who fulfilled the following criteria were enrolled: (1) presence of PTC confirmed by US-guided CNB; (2) maximum diameter less than 1 cm; (3) absence of capsular infiltration and extrathyroidal invasion, and lack of LNM detection; (4) absence of neck irradiation history; and (5) inability or refusal to undergo surgery.
Exclusion criteria were: (1) multifocal cancer; (2) aggressive histological variants of PTMC, such as tall cell, insular, or columnar cell carcinoma; (3) suspicious cervical LNM; (4) pregnancy or lactation; (5) severe coagulation disorders, respiratory failure, myocardial infarction, systemic infection, or uncontrolled diabetes; (6) neuropsychiatric disturbance or neck extension disorder leading to RFA nontolerance; (7) cardiac pacemaker implantation; (8) contralateral vocal cord paralysis; and (9) allergy to sulfur hexafluoride microbubbles (SonoVue, Bracco International, Milan, Italy). Between February 2013 and March 2017, 60 tumors in 60 patients (12 men and 48 women) treated with US-guided RFA were included, comprising 30 patients with PTMC alone and 30 with PTMC+CLT. CLT was confirmed by pathologic examination and serological testing, and was defined as diffuse lymphocytic and plasma cell infiltrates, lymphoid follicle formation with germinal centers, varying degrees of fibrosis, parenchymal atrophy, and the presence of large follicular cells with oxyphilic cell changes,7 together with positive thyroid peroxidase antibody (TPOAb) and thyroglobulin antibody (TgAb). A peritumoral inflammatory reaction alone was not considered CLT. All patients had complete records and were followed for more than 18 months.

Instrument and equipment: US and contrast-enhanced US (CEUS) examinations were performed using a Siemens Acuson Sequoia 512 Ultrasound System (Siemens, Mountain View, CA) with a 15L8W linear array transducer. US-guided RFA and CNB were performed using the same system with a 6L3 linear array transducer. CNB of each nodule was performed using an 18-gauge biopsy needle after RFA (Biopty; Bard, Covington, GA).
A bipolar RFA generator (CelonLabPOWER; Olympus Surgical Technologies Europe, Hamburg, Germany) and an 18-gauge bipolar radiofrequency (RF) applicator with a 0.9-cm active tip (CelonProSurge micro 100-T09; Olympus Surgical Technologies Europe) were used for RFA treatment. During the application of RF energy, the electric impedance of the tissue between the two electrodes at the tip of the RF applicator was measured continuously by the generator, and the power was automatically reduced if the temperature at the electrodes reached 100°C.

Preablation assessment: Careful history-taking and thorough physical examinations were conducted in all patients in our department. All patients who qualified for RFA underwent thyroid US and CEUS, and determination of the levels of free thyroid hormones T3 (normal reference range: 2.76-6.30 pmol/L) and T4 (10.42-24.32 pmol/L), thyroid-stimulating hormone (TSH) (0.35-5.5 mU/L), TPOAb (<60 IU/mL), and TgAb (<60 IU/mL). Patients were supine with the neck extended during the procedure. An intravenous line was introduced into an elbow vein. US appearances were evaluated and recorded according to the multidisciplinary consensus statement for thyroid nodules.15 For each tumor, the size, volume, location, echogenicity, margin, shape (height/width), calcifications, and vascularity were evaluated by US. The volume of each tumor was calculated as V = πabc/6 (V: volume; a: transverse diameter; b: vertical diameter; c: anteroposterior diameter). CEUS with a low mechanical index (0.19-0.24) was used to describe the blood supply of the tumor before and after RFA. The contrast agent was 59 mg of dry powder SonoVue constituted in 5 mL of normal saline. CEUS was performed after a bolus injection of SonoVue (2.4 mL), followed by a normal saline flush (5 mL). Real-time microbubble perfusion within the tumor and surrounding tissues was observed for a minimum of 2 minutes and recorded electronically. Capsular and extrathyroidal invasion of thyroid cancer were evaluated by both US and CEUS. Extracapsular extension on US was defined as discontinuity of the anterior or posterior hyperechoic thyroid capsule. During real-time, multi-angle scanning, capsular infiltration and extrathyroidal invasion on CEUS appeared as low- or nonenhancing areas of the thyroid capsule invaded by malignant nodules.16
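The ellipsoid volume formula given above (V = πabc/6) is straightforward to script. Below is a minimal Python sketch, not the authors' code; the function name is ours, and the example diameters are those of the nodule in Figure 1A.

```python
import math

def ellipsoid_volume(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Tumor volume V = pi*a*b*c/6, with orthogonal diameters in cm; result in mL (cm^3)."""
    return math.pi * a_cm * b_cm * c_cm / 6

# Nodule from Figure 1A: 0.4 x 0.5 x 0.4 cm
print(f"{ellipsoid_volume(0.4, 0.5, 0.4):.3f} mL")  # ~0.042 mL
```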
Ablation procedure: Conventional US was performed to evaluate the relationship between the tumor and critical structures in the neck, such as the trachea, esophagus, jugular vein, common carotid artery (CCA), and recurrent laryngeal nerves, in order to determine the optimal insertion route. A local anesthetic (1% lidocaine) was injected at the subcutaneous puncture site and the thyroid anterior capsule. If the distance between the tumor and a critical neck structure was <5 mm, normal saline was injected through a separate 23-gauge needle to create a separation of at least 1 cm between the tumor and that structure, preventing thermal injury. RFA was performed using the moving-shot technique; the initial radiofrequency power was 3 W, increased to 5 W if a transient hyperechoic zone did not form at the electrode tip within 5-10 s. To prevent residual tumor and recurrence, the RFA area exceeded the tumor margin. The ablation procedure ended when the tumor was completely covered by the hyperechoic zone in three-dimensional space on US. The extent of ablation was evaluated by CEUS immediately after ablation, and the electrode needle was withdrawn only after confirming the absence of residual tumor (complete lack of enhancement in the ablation area). If enhancement was seen in any part of the tumor, complementary ablation was applied promptly to avoid residual cancer cells. Throughout the procedure, special attention was given to protecting critical neck structures to prevent significant complications such as hematoma or nerve injury. All complications occurring during and after RFA were carefully assessed according to patients' clinical signs and symptoms. Each patient was observed for 1-2 hours after RFA.
Follow-up: Patients were followed up using conventional US and CEUS at 1, 3, 6, and 12 months after RFA, and every 6 months thereafter. The ablation area was evaluated by CEUS to measure the ablation volume and to provide a baseline for recurrence screening. The volume reduction ratio (VRR) was calculated as follows: VRR (%) = [(initial volume − final volume)/initial volume] × 100.
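As a worked example of the VRR formula, the following Python sketch (illustrative only, not the authors' code) uses the ablation zone sizes reported for the case in Figure 1, panels D and G:

```python
import math

def ellipsoid_volume(a, b, c):
    # V = pi*a*b*c/6; diameters in cm, volume in mL
    return math.pi * a * b * c / 6

def vrr_percent(initial_ml, final_ml):
    # VRR (%) = (initial volume - final volume) / initial volume * 100
    return (initial_ml - final_ml) / initial_ml * 100

v_initial = ellipsoid_volume(0.7, 1.1, 1.0)  # ablation zone immediately after RFA, ~0.403 mL
v_6months = ellipsoid_volume(0.3, 0.2, 0.3)  # ablation zone at 6 months, ~0.009 mL
print(f"VRR at 6 months: {vrr_percent(v_initial, v_6months):.1f}%")  # ~97.7%
```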
The involvement of cervical lymph nodes was evaluated by US; suspicious lymph nodes (globular shape, loss of the normal echogenic hilum, peripheral rather than hilar flow, and microcalcifications17) were biopsied. US-guided CNB was performed at the center and the margin of the ablation area, and in the surrounding thyroid parenchyma, 3 months after RFA. Complications during the follow-up period were assessed according to the reporting standards of the Society of Interventional Radiology.18 All RFA procedures and preoperative examinations were performed by one experienced physician to exclude bias associated with different operators. Postoperative CNB was also performed by the same physician who performed the RFA. Follow-up conventional US and CEUS examinations were performed by two other experienced physicians who were blinded to the histological and imaging findings. Discrepancies were resolved by the judgment of a third experienced physician who had specialized in thyroid US and CEUS for over 15 years.

Statistical analysis: All statistical analyses were performed using the SPSS software package, Version 13 for Windows (SPSS Inc, Chicago, IL). A chi-squared test (χ2) was used to analyze categorical variables. Continuous data were reported as mean ± standard deviation (range). The volume and VRR of the ablation area before RFA and at each follow-up were analyzed by the t test. The Wilcoxon signed rank test was used to compare tumor calcification, color Doppler flow imaging (CDFI) blood flow grades, and changes in the number of patients with tumor disappearance at each follow-up between the PTMC+CLT and PTMC groups. The Wilcoxon rank sum test was used to compare free T3, T4, and TSH values in patients with and without CLT. P values <0.05 were considered statistically significant.
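For readers who wish to reproduce comparisons of this kind, the sketch below shows the two main between-group tests in Python with SciPy (the authors used SPSS); the group arrays are synthetic placeholders, not study data, and the Wilcoxon rank sum test is run via its Mann-Whitney U equivalent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-ins for per-patient values in the two groups (n = 30 each)
ptmc = rng.normal(0.20, 0.08, 30)      # e.g., ablation-zone volume (mL), PTMC group
ptmc_clt = rng.normal(0.22, 0.08, 30)  # PTMC+CLT group

t_stat, p_t = stats.ttest_ind(ptmc, ptmc_clt)     # t test: volume and VRR comparisons
u_stat, p_u = stats.mannwhitneyu(ptmc, ptmc_clt)  # rank sum test: e.g., hormone values
print(f"t = {t_stat:.2f} (P = {p_t:.3f}); U = {u_stat:.0f} (P = {p_u:.3f})")
```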
RESULTS: Preoperative patient and tumor characteristics: There were no significant differences in age, sex, or tumor volume between the PTMC and PTMC+CLT groups (P > 0.05) (Table 1). All thyroid nodules were hypoechoic; other characteristics are shown in Table 2. There were no significant differences in nodule location, margin, shape, height/width ratio, calcification, or CDFI type between the two groups (P > 0.05). The distance between the nodule and the trachea or common carotid artery (CCA) was <2 mm in 13 nodules, including eight near the trachea (0.117 ± 0.024 cm) and five near the CCA (0.094 ± 0.013 cm). There were no significant differences in free T3, T4, and TSH values between the two groups (U = 0.434, P = 0.664; U = 0.452, P = 0.651; and U = 0.886, P = 0.376, respectively). All patients with CLT had positive TgAb and TPOAb. The thyroid function test results of patients with PTMC alone were within the normal range.

Table 1. Comparison of the preoperative data of the patients and tumor volume between the PTMC+CLT and PTMC groups. Abbreviations: CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma.

Table 2. Ultrasonic characteristics of the tumors before RFA in the PTMC+CLT and PTMC groups. Abbreviations: CDFI, color Doppler flow imaging; CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma.

Ablation energy and time in the PTMC and PTMC+CLT groups: The total ablation energies in the PTMC and PTMC+CLT groups were 0.9 ± 0.5 kJ and 0.7 ± 0.5 kJ, respectively (t = 1.453, P = 0.1515); ablation times were 3.8 ± 2.1 min and 2.9 ± 2.1 min, respectively (t = 1.801, P = 0.0768).
Tumor volume and VRR after RFA: In terms of postablation volume, there was no significant difference between the two groups at 1, 3, 6, 12, or 18 months after RFA (P > 0.05) (Table 3, Figure 1). While there was a significant difference in VRR between the two groups 3 months after ablation (t = 1.28, P = 0.03), no difference was observed at any other time point (P > 0.05) (Table 4, Figure 2).

Table 3. Changes in the tumor volume between the PTMC+CLT and PTMC groups after RFA and at each follow-up. Abbreviations: CLT, chronic lymphocytic thyroiditis; M, mean; PTMC, papillary thyroid microcarcinoma; RFA, radiofrequency ablation; SD, standard deviation.

Figure 1. Radiofrequency ablation (RFA) treatment and follow-up of one case of papillary thyroid microcarcinoma with chronic lymphocytic thyroiditis. (A) A hypoechoic nodule sized 0.4 × 0.5 × 0.4 cm, with irregular margins and microcalcifications, was displayed in the right thyroid lobe (arrow). (B) Uneven and irregular hypo-enhancement in the nodule was observed by contrast-enhanced ultrasound (CEUS) (arrow, left image). (C) During RFA, the nodule was covered by a hyperechoic area (arrow) on US. (D) Immediately after RFA, the ablation area showed no enhancement on CEUS, and its size (0.7 × 1.1 × 1.0 cm) was larger than the initial nodule size. (E) One month after RFA, the ablation area decreased in size to 0.9 × 0.8 × 0.5 cm. (F) Three months after RFA, the ablation area decreased to 0.6 × 0.5 × 0.6 cm. (G) Six months after RFA, the ablation area decreased to 0.3 × 0.2 × 0.3 cm. (H) The ablation area could no longer be identified on either US or CEUS. (I) Before RFA, pathologic examination of this nodule showed papillary thyroid carcinoma accompanied by chronic lymphocytic thyroiditis. (J) Three months after RFA, pathology showed degenerated and necrotic follicular epithelia, interstitial fibrous tissue hyperplasia, and hyaline degeneration in the ablation lesion, with lymphocyte infiltration and a multinucleated giant cell reaction in the adjacent thyroid tissue. No residual cancer was found.

Table 4. Changes in the tumor volume reduction ratio between the PTMC+CLT and PTMC groups after RFA and at each follow-up. Abbreviations: CLT, chronic lymphocytic thyroiditis; M, mean; PTMC, papillary thyroid microcarcinoma; RFA, radiofrequency ablation; SD, standard deviation.

Figure 2. Changes in ablation zone volume in PTMC cases with and without CLT at each follow-up. Abbreviations: PTMC, papillary thyroid microcarcinoma; CLT, chronic lymphocytic thyroiditis.

After RFA, the times to tumor disappearance were 9.8 ± 5.0 months and 10.0 ± 4.8 months in the PTMC and PTMC+CLT groups, respectively (t = 0.16, P = 0.88). A total of 43% of the nodules in the PTMC+CLT group resolved within 12 months, and 47% of those in the PTMC group resolved within 6 months. No significant difference was observed in the number of ablation areas between the groups (u = 0.319, P > 0.05) (Table 5). Calcification was present in 20 PTMC cases, with a time to tumor disappearance of 10.2 ± 5.1 months, and absent in 40 PTMC cases, with a time to tumor disappearance of 9.8 ± 4.9 months; the difference was not significant (t = 0.28, P = 0.78).
No residual cancer cells were found by CNB 3 months after ablation (Figure 1). No recurrent tumors or suspicious metastatic lymph nodes were detected.

Table 5. Number of patients with tumor disappearance in the PTMC+CLT and PTMC groups after RFA and at each follow-up. Abbreviations: CLT, chronic lymphocytic thyroiditis; PTMC, papillary thyroid microcarcinoma; RFA, radiofrequency ablation.
Complications: Slight voice hoarseness was observed in one patient (1.7%, 1/60) after RFA; the voice recovered without treatment within 1 week. Moderate pain was reported by two patients (3.3%, 2/60), and slight fever was noted in one patient (1.7%, 1/60); in all cases, recovery was spontaneous.
DISCUSSION
The feasibility of RFA in thyroid cancer treatment is still controversial, as some surgeons believe that it is difficult to detect tiny lymph node metastases using US19; in addition, prophylactic central compartment neck dissection (CCND) is more reassuring for patients with clinically node-negative PTC. However, with the increasing incidence of thyroid cancer, more patients with low-risk microcarcinomas are receiving unnecessarily extended thyroidectomy and prophylactic CCND. Previous studies have shown that the proportion of CCNDs with negative findings ranges from 57.6% to 84.8%,20, 21 and that excessive surgery decreases patients' quality of life (postoperative complications include transient hypoparathyroidism, permanent hypoparathyroidism, vocal cord palsy, and bleeding). Therefore, the treatment approach for this disease is changing. In recent years, prophylactic CCND has not been recommended for PTMC patients without LNM.8, 22, 23, 24, 25 The median risk of local-regional lymph node recurrence varies markedly by clinical staging in patients with pathologically proven neck LNM, with recurrence rates of 2% (range 0%-9%) in patients with an initial N0 stage vs 22% (range 10%-42%) in those with initially positive lymph nodes. Furthermore, the median risk of recurrence in LNM patients varies markedly by the number of positive nodes, with values of 4% (range 3%-8%) in cases with <5 nodes and 19% (range 7%-21%) in those with >5 nodes.23 At the 5-year follow-up, no difference was observed in the outcomes of patients treated with total thyroidectomy alone and those treated with total thyroidectomy plus prophylactic CCND.24 The results of these studies support the use of RFA for PTMC. The presence of multifocality and capsular infiltration indicates a high risk of cancer invasion and metastasis.25, 26 Therefore, only cases with a single tumor located in the thyroid parenchyma and within the thyroid capsule were eligible for RFA in this study. Our preliminary study showed that RFA is efficient, with a low complication rate, in PTMC treatment.14 CLT is a type of chronic inflammation, and its effect on the ability to recover from the heat damage caused by RFA is still unknown. No differences were observed in the preoperative volume and US characteristics of the tumors between the PTMC and PTMC+CLT groups (P > 0.05), and the patients' age and sex did not differ between the two groups (P > 0.05). Our results showed that recovery after ablation in patients with PTMC+CLT is similar to that in patients with PTMC alone. The ablation volume in the PTMC group decreased rapidly in the first 3 months, and the VRR in the PTMC group was greater than that in the PTMC+CLT group (P = 0.03). Tissue damage due to trauma induces acute inflammation. The features of acute and subacute inflammation include expansion of blood vessels (vasodilation), increased blood flow (hyperemia), increased capillary permeability, and migration of neutrophils into the damaged tissue. The composition of the infiltrating white blood cells then changes rapidly, with macrophages and lymphocytes replacing neutrophils.
The hallmark of chronic inflammation is the infiltration of primary inflammatory cells into the site, leading to the production of inflammatory cytokines, growth factors, and enzymes, and thereby to progression of tissue damage and secondary repair processes such as fibrosis and granuloma formation.27 In CLT, the thyroid parenchyma is already diffusely infiltrated by chronic inflammatory cells, so inflammation of the ablation zone was more marked in the acute and subacute periods after RFA. Immune cells were activated to phagocytize and remove the necrotic tissue, activating the autoimmune system; even so, no significant difference in ablation zone outcomes was observed between the PTMC and PTMC+CLT groups. Three months after ablation, US-guided CNB was performed at the center and margin of the ablation zone and in the adjacent thyroid parenchyma; pathological results showed degenerated and necrotic follicular epithelia, interstitial fibrous tissue hyperplasia, and hyaline degeneration in the central and peripheral areas of the ablation lesion, with lymphocyte infiltration and a multinucleated giant cell reaction in the adjacent thyroid tissue. No residual cancer was found. Previous studies28, 29 with larger sample sizes showed that PTC with CLT has a good prognosis and that the recurrence rate is lower than that associated with PTC alone. This may be attributed to the fact that inflammation inhibits cancer-cell proliferation. CLT may be involved in the destruction of cancer cells that express thyroid-specific antigens in PTC as a result of its autoimmune response to these antigens. A few studies30 have shown no relationship between CLT and PTC. This study indicated that RFA is efficient and safe for PTMC cases with CLT; CLT should not be regarded as a factor that excludes a patient from receiving RFA. In terms of safety, adhesions form between the thyroid and the adjacent connective tissue and muscles after ablation, distorting the local anatomy and making subsequent surgery more difficult; for this reason, some surgeons have a negative attitude towards ablation. However, injection of a liquid isolation zone can be used to separate the thyroid from critical neck structures; this method is effective in preventing significant complications such as hematoma and tracheal or nerve injury. Postoperative examination by both US and CEUS showed clear boundaries between the thyroid gland and the surrounding structures, without signs of adhesion. In this study, 13 PTMC cases in which the distance to the trachea or CCA was <2 mm were successfully treated with RFA, thanks to liquid isolation zone injection, without serious complications. This study was limited by its relatively short follow-up period; the longest follow-up was 4 years and the shortest was 20 months. The outcomes of these patients should be confirmed in studies with larger sample sizes and extended follow-up periods. In conclusion, this study found that RFA was effective and safe in PTMC+CLT patients. As evidenced by the low recurrence and high survival rates of these patients, CLT may be regarded as a protective factor for patients undergoing RFA treatment of PTMC. The present study provides a basis for the study of the immune regulation mechanisms induced by thyroid cancer necrosis.
CONFLICT OF INTEREST
The authors made no disclosures.
AUTHOR CONTRIBUTIONS
Yan Zhang contributed to the collection, analysis, and interpretation of data and to writing the initial draft.
Mingbo Zhang performed the statistical analyses, had full access to all data, and takes responsibility for the accuracy of the data analysis. Ying Zhang contributed to the collection and analysis of data and interpreted the results. Jie Li contributed to the analysis and interpretation of the pathological figures. Jie Tang contributed to review and revision of the manuscript. Yukun Luo contributed to the study design and the initial draft, is the guarantor of the study, had full access to all data in the study, and takes responsibility for the integrity of the data and the accuracy of the data analysis. All authors contributed to critical revisions and approved the final version.
Background: Chronic lymphocytic thyroiditis (CLT) is an autoimmune disease commonly associated with papillary thyroid carcinoma, characterized by a smaller primary tumor size at presentation. The efficacy and safety of ultrasound-guided radiofrequency ablation (RFA) for papillary thyroid microcarcinoma (PTMC) coexisting with CLT are still unknown. Methods: Sixty patients with unifocal PTMC were enrolled and classified into PTMC and PTMC+CLT groups (n = 30/group). CLT was diagnosed histopathologically. The ablation area exceeded the tumor margins and was evaluated by US and contrast-enhanced US (CEUS) for residual tumor to prevent recurrence. Three months after ablation, US-guided core-needle biopsy was performed to assess the presence of residual and recurrent cancer. Preoperative and postoperative data on patients and tumors were recorded and analyzed. Results: There were no differences between groups in age, sex, preoperative tumor volume, ablation time, or ablation power (P > 0.05). There was also no significant difference in postoperative ablation zone volume between the groups at the 1-, 3-, 6-, 12-, and 18-month follow-ups (P > 0.05). The volume reduction ratio significantly differed between the two groups at month 3 (P = 0.03). The ablation area could not be identified on US and CEUS at 9.8 ± 5.0 and 10.0 ± 4.8 months in the PTMC and PTMC+CLT groups, respectively (P = 0.197). No serious complications occurred during or after ablation. No residual cancer cells were found on biopsy after ablation. Conclusions: RFA was effective in patients with PTMC+CLT, and its therapeutic efficacy and safety were similar to those in patients with PTMC without CLT.
null
null
10,290
341
[ 447, 310, 174, 364, 312, 251, 147, 285, 75, 795, 68, 138 ]
16
[ "ptmc", "rfa", "ablation", "clt", "thyroid", "tumor", "patients", "months", "groups", "volume" ]
[ "thyroid carcinoma", "thyroid microcarcinoma clt", "thyroid microcarcinoma chronic", "thyroid microcarcinomas ptmcs", "papillary thyroid carcinoma" ]
null
null
null
[CONTENT] ablation | contrast media | radiofrequency | thyroid carcinoma; ultrasonography [SUMMARY]
null
[CONTENT] ablation | contrast media | radiofrequency | thyroid carcinoma; ultrasonography [SUMMARY]
null
[CONTENT] ablation | contrast media | radiofrequency | thyroid carcinoma; ultrasonography [SUMMARY]
null
[CONTENT] Adult | Carcinoma, Papillary | Catheter Ablation | Comorbidity | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Hashimoto Disease | Humans | Male | Neoplasm Recurrence, Local | Survival Analysis | Thyroid Neoplasms | Treatment Outcome [SUMMARY]
null
[CONTENT] Adult | Carcinoma, Papillary | Catheter Ablation | Comorbidity | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Hashimoto Disease | Humans | Male | Neoplasm Recurrence, Local | Survival Analysis | Thyroid Neoplasms | Treatment Outcome [SUMMARY]
null
[CONTENT] Adult | Carcinoma, Papillary | Catheter Ablation | Comorbidity | Endoscopic Ultrasound-Guided Fine Needle Aspiration | Female | Hashimoto Disease | Humans | Male | Neoplasm Recurrence, Local | Survival Analysis | Thyroid Neoplasms | Treatment Outcome [SUMMARY]
null
[CONTENT] thyroid carcinoma | thyroid microcarcinoma clt | thyroid microcarcinoma chronic | thyroid microcarcinomas ptmcs | papillary thyroid carcinoma [SUMMARY]
null
[CONTENT] thyroid carcinoma | thyroid microcarcinoma clt | thyroid microcarcinoma chronic | thyroid microcarcinomas ptmcs | papillary thyroid carcinoma [SUMMARY]
null
[CONTENT] thyroid carcinoma | thyroid microcarcinoma clt | thyroid microcarcinoma chronic | thyroid microcarcinomas ptmcs | papillary thyroid carcinoma [SUMMARY]
null
[CONTENT] ptmc | rfa | ablation | clt | thyroid | tumor | patients | months | groups | volume [SUMMARY]
null
[CONTENT] ptmc | rfa | ablation | clt | thyroid | tumor | patients | months | groups | volume [SUMMARY]
null
[CONTENT] ptmc | rfa | ablation | clt | thyroid | tumor | patients | months | groups | volume [SUMMARY]
null
[CONTENT] thyroid | incidence | clt | ptc | thyroid cancer | cancer | incidence thyroid | incidence thyroid cancer | cases | associated [SUMMARY]
null
[CONTENT] ptmc | ablation | clt | groups | months | rfa | thyroid | chronic lymphocytic thyroiditis | chronic lymphocytic | papillary [SUMMARY]
null
[CONTENT] ptmc | ablation | clt | rfa | thyroid | tumor | authors disclosures | disclosures | patients | authors [SUMMARY]
null
[CONTENT] CLT | thyroid carcinoma ||| thyroid microcarcinoma | CLT [SUMMARY]
null
[CONTENT] 0.05 ||| 18-month | 0.05 ||| two | month 3 | 0.03 ||| US | CEUS | 9.8 ± | 5.0 | 4.8 months | 0.197 ||| ||| [SUMMARY]
null
[CONTENT] CLT | thyroid carcinoma ||| thyroid microcarcinoma | CLT ||| Sixty ||| CLT ||| US | US ||| Three months | US ||| ||| 0.05 ||| 18-month | 0.05 ||| two | month 3 | 0.03 ||| US | CEUS | 9.8 ± | 5.0 | 4.8 months | 0.197 ||| ||| ||| CLT [SUMMARY]
null
Causes of death and demographic characteristics of victims of meteorological disasters in Korea from 1990 to 2008.
21943038
Meteorological disasters are an important component when considering climate change issues that impact morbidity and mortality rates. However, there are few epidemiological studies assessing the causes and characteristics of deaths from meteorological disasters. The present study aimed to analyze the causes of death associated with meteorological disasters in Korea, as well as demographic and geographic vulnerabilities and their changing trends, to establish effective measures for the adaptation to meteorological disasters.
BACKGROUND
Deaths associated with meteorological disasters were examined from 2,045 cases in Victim Survey Reports prepared by 16 local governments from 1990 to 2008. Specific causes of death were categorized as drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Death rates were analyzed according to the meteorological type, specific causes of death, and demographic and geographic characteristics.
METHODS
Drowning (60.3%) caused the greatest number of deaths in total, followed by landslide (19.7%) and structural collapse (10.1%). However, the causes of deaths differed between disaster types. The meteorological disaster associated with the greatest number of deaths has changed from flood to typhoon. Factors that raised vulnerability included living in coastal provinces (11.3 times higher than inland metropolitan), male gender (1.9 times higher than female), and older age.
RESULTS
Epidemiological analyses of the causes of death and vulnerability associated with meteorological disasters can provide the necessary information for establishing future adaptation measures against climate change. A more comprehensive system for assessing disaster epidemiology needs to be established.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Cause of Death", "Child", "Child, Preschool", "Disasters", "Female", "Humans", "Infant", "Male", "Middle Aged", "Republic of Korea", "Socioeconomic Factors", "Weather", "Young Adult" ]
3204281
Background
Patterns of meteorological elements such as temperature and precipitation have been altered due to climate change [1]. The frequency, intensity, and duration of meteorological disasters have also increased since the 1920s [2,3]. These phenomena are attributed to the accelerated rise in the sea level due to climate changes, increasing sea surface temperatures, more powerful tropical and temperate cyclones, changing characteristics of air pressure and precipitation, and acidification of the oceans [1,4,5]. Meteorological disasters, along with extreme heat, infectious diseases, water- and food-borne illnesses, and air pollution, are important components of climate change that impact morbidity and mortality rates [6-8]. Studies of damages caused by meteorological disasters, however, usually address economic loss and rarely deal with disaster epidemiology regarding victims and vulnerable groups [9]. It is estimated that climate change will worsen the frequency and severity of meteorological disasters in the future [7,10,11]. There are several examples of this trend, involving massive numbers of victims caused by powerful meteorological disasters in recent years, including the Sri Lanka tsunami of 2004 [12] and hurricane Katrina in the US in 2005 [13]. It is also estimated that poorer countries will have more victims as a result of meteorological disasters, because of the huge disease burden [10,14]. Thus, victims of meteorological disasters will emerge as a critical social and public health issue on a global level. Information regarding specific causes of death due to meteorological disasters is essential to identify vulnerable population groups and establish preventive measures against damages, and to recommend behavioral guidelines for citizens. Thus, it is important to analyze specific causes of death when we study the demographic and regional characteristics of deaths due to meteorological disasters according to countries and regions. Some studies have analyzed causes of death according to specific types of disasters such as floods, tsunamis, and hurricanes [12,15-17]. However, it is difficult to find research that concretely analyzes causes of death and sociodemographic characteristics over a long time-period beyond the accumulation of the number of victims by meteorological disaster. A previous study [2] examined trends in the number and rate of deaths for all types of disasters such as drought, flood, storm, and hurricane; Borden and Cutter [18] analyzed the spatial trends in death rates due to disasters in the US, and another study [9] analyzed various disasters in the US over 25 years, evaluating gender, race, and age characteristics, as well as vulnerable areas. However, these researchers did not assess the specific causes of death according to diverse types of disasters. It is also necessary to examine changing trends in the numbers and characteristics of deaths due to meteorological disasters by analyzing long-term data, in order to assess the influences of climate change on health, predict changes in future damages, and set up appropriate countermeasures. The present study analyzed specific causes, and demographic and regional characteristics of death due to meteorological disasters in South Korea from 1990 to 2008, compared those characteristics according to the types of meteorological disasters, and analyzed changing trends with regards to deaths.
Methods
Data collection
In Korea, the local governments dispatch their civil servants to check accident scenes and prepare and submit Victim Survey Reports when people are killed in meteorological disasters. Civil servants identify a death as caused by a meteorological disaster based on a relatively clear causal relationship between the meteorological disaster and the accident or damage that directly caused the victim's death. A Victim Survey Report is prepared on the basis of the accident and contains information concerning the time of death, place of accident, name, gender, and address of the deceased, details and cause of the accident, photos of the accident scene, meteorological conditions at the time of the accident, and measures taken by the local government. A uniform report form is used across the nation. The 16 metropolitan governments combine those reports and submit them to the National Emergency Management Agency (NEMA). Based on these reports, NEMA has collected all information on the victims of meteorological disasters. In the present study, information from all deaths from 1990 to 2008 was re-processed and analyzed for our specific research goals. After excluding data from 2001, for which some information was missing, a total of 2,045 deaths were examined for cause, location, and year of accident, as well as the gender and age of the victims. Deaths also included subjects who were missing for a long period of time.
Categories of Meteorological Disasters
The types of meteorological disasters, which could be the cause of death, were selected based on official annual meteorological reports by the Korea Meteorological Administration at the time of death. Due to the high level of complexity and ambiguity of the categories proposed by administration investigators, similar disaster types were combined into the same category in order to facilitate comparison of results with international studies. Meteorological disasters such as strong wind, storms, and wind and waves meet the criterion of wind speed of 14 m/s on land or 21 m/s at sea, and thus they were combined under the category of storm.
Heavy rain, defined by the Korea Meteorological Administration as precipitation of 80 mm within 12 hours or 150 mm, is referred to as flood in this study. Meteorological disasters caused by winter cold, such as heavy snow, snowstorms, and cold, are grouped together as cold. Since a typhoon is a phenomenon in which strong wind, wind and waves, and heavy rain occur together, those meteorological events were placed in the typhoon category (Table 1). These categories are similar to those used by Thacker et al. [9] for defining the categories of storm and flood, and cold and lightning, with the exception that, unlike their study, our study distinguishes storm from flood.
Categories of Meteorological Disasters
Causes of Death
Causes of death listed on victim survey reports do not follow the systematic categories used in medicine, but adopt diverse expressions to depict the varied situations surrounding the deaths. Therefore, based on the categories described in previous studies [9,16] and the Disaster-Related Mortality Surveillance [19] by the US Centers for Disease Control and Prevention, the present study categorized specific causes of death into drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Drowning was further categorized by place of occurrence into drowning in a river, the sea, a submerged house, a sunken vessel, or an urban facility such as a manhole, road, or sewer. When deaths occurred as a result of meteorological disasters causing the collapse of a bridge, wall, bungalow, stable, terrace, bank, or house, or of flooding causing the collapse of a house and/or road, these deaths were all combined in the category of structural collapse. When death occurred due to being hit by an object, having limbs severed, or being hit by a vehicle, these deaths were combined in the category of collision. Electrocution, lightning, fall, avalanche, deterioration of disease by disaster, and landslide were analyzed as separate causes of death, since the cause of death in such cases was clear (Table 2).
Causes of death by meteorological disasters
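A hypothetical coding sketch of the rules described above. The thresholds (wind ≥14 m/s on land or ≥21 m/s at sea for storm; ≥80 mm of rain within 12 hours or ≥150 mm for flood) come from the text, but the function, its parameters, and the decision order are illustrative assumptions; the study itself classified events from official Korea Meteorological Administration reports rather than with code:

```python
# Illustrative only: parameter names and decision order are assumptions.

def disaster_type(wind_land_ms: float, wind_sea_ms: float,
                  rain_12h_mm: float, rain_event_mm: float,
                  winter_cold_event: bool, typhoon_event: bool) -> str:
    if typhoon_event:
        # A typhoon combines strong wind, wind and waves, and heavy rain.
        return "typhoon"
    if rain_12h_mm >= 80 or rain_event_mm >= 150:
        return "flood"   # heavy rain
    if wind_land_ms >= 14 or wind_sea_ms >= 21:
        return "storm"   # strong wind, storm, wind and waves
    if winter_cold_event:
        return "cold"    # heavy snow, snowstorm, cold
    return "other"

# The ten specific causes of death used to code the Victim Survey Reports.
CAUSE_CATEGORIES = [
    "drowning", "structural collapse", "electrocution", "lightning",
    "fall", "collision", "landslide", "avalanche",
    "deterioration of disease by disaster", "others",
]
```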
Statistical Analysis
The specific causes of death were compared and analyzed according to the type of meteorological disaster. Demographic and regional characteristics were also analyzed to examine relative vulnerability. Victims 0-4 years of age were placed in the young childhood age group and those 5-19 years of age in the adolescence age group. Groups were formed in 10-year increments for victims ranging from 20 to 80 years of age. Regions were analyzed according to their geographic distribution based on 232 basic administrative units, including cities, guns, and gus, through a geographic information system (GIS).
Regions were also divided into metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland areas, and coastal regions. To calculate death rates per 1,000,000 individuals, the resident registration data for inhabitants over 5 years old of administrative districts such as dongs, eups, and myeons in 2000 from the National Statistics Office were used, along with resident registration data of administrative districts such as cities, guns, and gus. The annual average number of deaths was compared and analyzed according to the type of meteorological disaster between the 1990s and 2000s in order to determine whether there were changes in the types of meteorological disasters that produced the most deaths. Statistical analysis was carried out with the SPSS 12.0 program and Excel.
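As a rough illustration of the two derived quantities described in this subsection, the annual average death rate per 1,000,000 residents and the age grouping, here is a pandas sketch; the counts and populations are hypothetical placeholders, and the study itself used SPSS 12.0 and Excel, not Python:

```python
# Hypothetical numbers for illustration; not the study's data.
import pandas as pd

regions = pd.DataFrame({
    "region": ["coastal province", "inland province", "coastal metro", "inland metro"],
    "deaths_1990_2008": [700, 900, 60, 385],          # placeholder counts
    "population_2000": [6_000_000, 15_000_000, 3_400_000, 37_000_000],
})

years = 18  # 1990-2008 inclusive is 19 years, minus 2001, which was excluded
regions["rate_per_million"] = (regions["deaths_1990_2008"] / years
                               / regions["population_2000"] * 1_000_000)

# Age groups: 0-4 young childhood, 5-19 adolescence, then 10-year increments.
bins = [0, 5, 20, 30, 40, 50, 60, 70, 80, 200]
labels = ["0-4", "5-19", "20-29", "30-39", "40-49", "50-59", "60-69", "70-79", "80+"]
ages = pd.Series([3, 17, 34, 52, 66, 81])
print(pd.cut(ages, bins=bins, labels=labels, right=False).tolist())
```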
null
null
Conclusions
JYJ and HNM designed and coordinated the study. HNM analyzed and interpreted the data. JYJ and HNM wrote and revised the manuscript. Both authors approved the final manuscript.
[ "Background", "Data collection", "Categories of Meteorological Disasters", "Causes of Death", "Statistical Analysis", "Results", "Discussion", "Conclusions" ]
[ "Patterns of meteorological elements such as temperature and precipitation have been altered due to climate change [1]. The frequency, intensity, and duration of meteorological disasters have also increased since the 1920s [2,3]. These phenomena are attributed to the accelerated rise in the sea level due to climate changes, increasing sea surface temperatures, more powerful tropical and temperate cyclones, changing characteristics of air pressure and precipitation, and acidification of the oceans [1,4,5]. Meteorological disasters, along with extreme heat, infectious diseases, water- and food-borne illnesses, and air pollution, are important components of climate change that impact morbidity and mortality rates [6-8].\nStudies of damages caused by meteorological disasters, however, usually address economic loss and rarely deal with disaster epidemiology regarding victims and vulnerable groups [9]. It is estimated that climate change will worsen the frequency and severity of meteorological disasters in the future [7,10,11]. There are several examples of this trend, involving massive numbers of victims caused by powerful meteorological disasters in recent years, including the Sri Lanka tsunami of 2004 [12] and hurricane Katrina in the US in 2005 [13]. It is also estimated that poorer countries will have more victims as a result of meteorological disasters, because of the huge disease burden [10,14]. Thus, victims of meteorological disasters will emerge as a critical social and public health issue on a global level.\nInformation regarding specific causes of death due to meteorological disasters is essential to identify vulnerable population groups and establish preventive measures against damages, and to recommend behavioral guidelines for citizens. Thus, it is important to analyze specific causes of death when we study the demographic and regional characteristics of deaths due to meteorological disasters according to countries and regions.\nSome studies have analyzed causes of death according to specific types of disasters such as floods, tsunamis, and hurricanes [12,15-17]. However, it is difficult to find research that concretely analyzes causes of death and sociodemographic characteristics over a long time-period beyond the accumulation of the number of victims by meteorological disaster.\nA previous study [2] examined trends in the number and rate of deaths for all types of disasters such as drought, flood, storm, and hurricane; Borden and Cutter [18] analyzed the spatial trends in death rates due to disasters in the US, and another study [9] analyzed various disasters in the US over 25 years, evaluating gender, race, and age characteristics, as well as vulnerable areas. 
However, these researchers did not assess the specific causes of death according to diverse types of disasters.\nIt is also necessary to examine changing trends in the numbers and characteristics of deaths due to meteorological disasters by analyzing long-term data, in order to assess the influences of climate change on health, predict changes in future damages, and set up appropriate countermeasures.\nThe present study analyzed specific causes, and demographic and regional characteristics, of death due to meteorological disasters in South Korea from 1990 to 2008, compared those characteristics according to the types of meteorological disasters, and analyzed changing trends with regard to deaths.", "In Korea, the local governments dispatch their civil servants to check accident scenes and prepare and submit Victim Survey Reports when people are killed in meteorological disasters. Civil servants identify a death as caused by a meteorological disaster based on a relatively clear causal relationship between the meteorological disaster and the accident or damage that directly caused the victim's death.\nA Victim Survey Report is prepared on the basis of the accident and contains information concerning the time of death, place of accident, name, gender, and address of the deceased, details and cause of the accident, photos of the accident scene, meteorological conditions at the time of the accident, and measures taken by the local government. A uniform report form is used across the nation. The 16 metropolitan governments combine those reports and submit them to the National Emergency Management Agency (NEMA). Based on these reports, NEMA has collected all information on the victims of meteorological disasters. In the present study, information from all deaths from 1990 to 2008 was re-processed and analyzed for our specific research goals. After excluding data from 2001, for which some information was missing, a total of 2,045 deaths were examined for cause, location, and year of accident, as well as the gender and age of the victims. Deaths also included subjects who were missing for a long period of time.", "The types of meteorological disasters, which could be the cause of death, were selected based on official annual meteorological reports by the Korea Meteorological Administration at the time of death. Due to the high level of complexity and ambiguity of the categories proposed by administration investigators, similar disaster types were combined into the same category in order to facilitate comparison of results with international studies. Meteorological disasters such as strong wind, storms, and wind and waves meet the criterion of wind speed of 14 m/s on land or 21 m/s at sea, and thus they were combined under the category of storm. Heavy rain, defined by the Korea Meteorological Administration as precipitation of 80 mm within 12 hours or 150 mm, is referred to as flood in this study. Meteorological disasters caused by winter cold, such as heavy snow, snowstorms, and cold, are grouped together as cold. Since a typhoon is a phenomenon in which strong wind, wind and waves, and heavy rain occur together, those meteorological events were placed in the typhoon category (Table 1). These categories are similar to those used by Thacker et al. 
[9] for defining the categories of storm and flood, and cold and lightning, with the exception that, unlike their study, our study distinguishes storm from flood.\nCategories of Meteorological Disasters", "Causes of death listed on victim survey reports do not follow the systematic categories used in medicine, but adopt diverse expressions to depict the varied situations surrounding the deaths. Therefore, based on the categories described in previous studies [9,16] and the Disaster-Related Mortality Surveillance [19] by the US Centers for Disease Control and Prevention, the present study categorized specific causes of death into drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Drowning was further categorized by place of occurrence into drowning in a river, the sea, a submerged house, a sunken vessel, or an urban facility such as a manhole, road, or sewer. When deaths occurred as a result of meteorological disasters causing the collapse of a bridge, wall, bungalow, stable, terrace, bank, or house, or of flooding causing the collapse of a house and/or road, these deaths were all combined in the category of structural collapse. When death occurred due to being hit by an object, having limbs severed, or being hit by a vehicle, these deaths were combined in the category of collision. Electrocution, lightning, fall, avalanche, deterioration of disease by disaster, and landslide were analyzed as separate causes of death, since the cause of death in such cases was clear (Table 2).\nCauses of death by meteorological disasters", "The specific causes of death were compared and analyzed according to the type of meteorological disaster. Demographic and regional characteristics were also analyzed to examine relative vulnerability. Victims 0-4 years of age were placed in the young childhood age group and those 5-19 years of age in the adolescence age group. Groups were formed in 10-year increments for victims ranging from 20 to 80 years of age. Regions were analyzed according to their geographic distribution based on 232 basic administrative units, including cities, guns, and gus, through a geographic information system (GIS). Regions were also divided into metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland areas, and coastal regions. In order to calculate death rates per 1,000,000 individuals, the resident registration data for inhabitants over 5 years old of administrative districts such as dongs, eups, and myeons in 2000 from the National Statistics Office were used, along with resident registration data of administrative districts such as cities, guns, and gus. The annual average number of deaths was compared and analyzed according to the type of meteorological disaster between the 1990s and 2000s in order to determine whether there were changes in the types of meteorological disasters that produced the most deaths. Statistical analysis was carried out with the SPSS 12.0 program and Excel.", "Table 3 shows the causes of all deaths according to the types of meteorological disasters analyzed in the study. Deaths from flood were the greatest at 966 (47.2%), followed by typhoon at 748 (36.6%) and storm at 316 (15.5%). Drowning, at 1,234 (60.3%), was the leading cause of death among deaths by meteorological disasters, followed by landslide at 403 (19.7%) and structural collapse at 206 (10.1%). 
The percentages of deaths by landslide (26.0%) and structural collapse (14.4%) were higher in floods than in other types of meteorological disasters, and 96.6% of deaths in storms were due to drowning.\nCauses of death according to the type of meteorological disaster, 1990-2008\nDrowning was found to be the most important cause of death by meteorological disaster and was further analyzed according to location. Of the 1,234 people who drowned, those who drowned in a river and in a sunken vessel totaled 743 (60.1%) and 368 (29.8%), respectively. Of the 505 deaths by drowning in a flood, 478 (94.7%) occurred in a river. Of the 424 drowning deaths in typhoons, 263 (62.0%) occurred in a river, with the remaining individuals drowning in a sunken vessel or a submerged house. Most deaths by storm were due to drowning, and 93.1% of these occurred in a sunken vessel. As for gender, men accounted for 60.8% of deaths by \"drowning in a river,\" whereas women accounted for only 38.9%. Men accounted for 65.5%, 96.5%, and 54.1% of deaths by \"drowning in the sea,\" \"drowning in a sunken vessel,\" and \"landslide,\" respectively, with greater percentages for each category compared with women. Women accounted for 52.4% and 56.0% of deaths by \"structural collapse\" and \"drowning in a submerged house,\" respectively, percentages that were slightly greater than those for men. These findings show that in women, death occurred more frequently in residential areas, whereas in men, death occurred more frequently outdoors.\nTable 4 presents the results of the analysis of deaths by meteorological disaster according to demographic characteristics. While 3.11 per 1 million men died due to a meteorological disaster, the corresponding figure for women was 1.63, making the male death rate 1.9 times higher than the female death rate. The death rate increased with age; specifically, there were 1.71 deaths per 1 million individuals in their twenties, 2.71 in their forties, 3.68 in their fifties, and 4.70 in their sixties. A similar tendency was seen with flood and typhoon disasters. The number of males who died by storm was significantly higher than that of females, with the highest number of deaths in the forties and fifties age groups.\nDeath rate* according to the type of meteorological disaster by gender and age, 1990-2008\n* Annual average deaths per 1 million people calculated using the 2000 resident registration data for individuals over 5 years old of the Korean Statistical Information Service.\nFigure 1 shows the geographic distribution of death rates by meteorological disasters according to the basic administrative units of Korea, including cities, guns, and gus.\nDeath rates* in administrative units of cities, guns, and gus, 1990-2008. * Annual mean deaths per 1 million people calculated using the 2000 resident registration data of the Korean Statistical Information Service.\nTable 5 shows the comparison of death rates between metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland regions, and coastal regions of the 232 basic administrative units. The coastal regions of provinces (small- and medium-sized cities and agricultural and fishing towns) recorded the highest death rate due to disasters, at 6.19 per 1 million people, followed by the inland regions of provinces at 3.23, the coastal regions of metropolitan cities at 0.94, and the inland regions of metropolitan cities at 0.55. 
Death rates due to meteorological disasters were higher in provinces than in metropolitan cities, and in coastal regions than in inland regions. The overall death rate showed the same tendency as that for typhoons, which had a death rate of 0.26 in the inland regions of metropolitan cities, 0.39 in the coastal regions of metropolitan cities, 0.73 in the inland regions of provinces, and 3.11 in the coastal regions of provinces. In the case of storms, most deaths occurred in coastal regions, and the coastal regions of provinces accounted for the majority of deaths, at 2.17 per 1 million individuals. The death rate by flood was 0.35 and 0.29 in the coastal and inland regions of metropolitan cities, respectively, with the coastal regions recording a slightly higher death rate. In provinces, the death rate of inland regions was 2.47 per 1 million individuals, which was much higher than the 0.84 of the coastal regions.\nDeath rates* of regions according to meteorological disaster types, 1990-2008\n* Annual mean deaths per 1 million people calculated using the 2000 resident registration data of the Korean Statistical Information Service.\nFigure 2 shows the changes in annual mean deaths by meteorological disasters. Deaths from all meteorological disasters decreased considerably in the 2000s compared with the 1990s. Annual mean deaths by flood were 80.2 in the 1990s, decreasing to about one-quarter of that figure (20.5) in the 2000s; those by storm were 29.3 in the 1990s, decreasing by nine-tenths to 2.88 in the 2000s; but those by typhoon were 31.3 in the 1990s, increasing by 1.7 times to 54.4 in the 2000s.\nChange in the number of annual average deaths due to meteorological disasters, 1990-2008.", "The results of the present study indicate that, of all the meteorological disasters striking South Korea, floods caused the greatest number of deaths (966, 47.2%), with the majority due to drowning (505, 52.2%). These results are similar to those of previous studies, which reported that two-thirds of all deaths from floods were due to drowning. In addition to floods, drowning was also the biggest cause of death for typhoons and storms, at 426 (56.7%) and 305 (96.6%), respectively. In the drowning category, \"drowning in a river\" accounted for 60.1%, which suggests a need to set up specific measures focused on avoiding river drowning when establishing preventive guidelines for dealing with meteorological disasters. Meanwhile, in the case of storms, 93.1% of those who drowned did so in a sunken vessel, which also emphasizes the need for preventive measures to address this cause of death.\nA previous study in Australia [15] reported that men younger than 25 years of age and older than 59 years of age were vulnerable to flood damage. Another study in the US [9] reported that the male population 75 years of age or older was vulnerable to storms and floods. The fourth IPCC report also stated that the elderly were more vulnerable to damage and injury caused by meteorological disasters than younger age groups. On the other hand, a study on tsunamis [12] reported that women, infants, and young children were more vulnerable to this type of natural disaster. According to the results of the present study on the demographic vulnerability of victims of meteorological disasters, vulnerability was high among men and the elderly; in particular, \"drowning in a river\" or \"drowning in a sunken vessel\" was more common in men than in women, possibly because men tend to attempt more dangerous actions such as crossing an overflowing river. 
The fact that more men succumbed to \"drowning in a sunken vessel\" than women reflects the fact that in South Korea men typically work on the fishing vessels that often sink during meteorological disasters. Since vulnerability differs according to the type of meteorological disaster, future preventive measures against meteorological disasters should be based on vulnerability analysis results.\nRegarding geographic vulnerability, provinces (small- and medium-sized cities and agricultural and fishing towns) and coastal regions were relatively more vulnerable to meteorological disasters than metropolitan cities and inland regions. These results correspond to predictions that climate change will most likely affect coastal regions [1,5,20-22].\nThe results of the analyses of the number and characteristics of deaths by meteorological disasters over the evaluated years confirm that the most deadly disaster type has gradually moved from flood to typhoon. Annual mean deaths due to floods decreased in the 2000s compared with the 1990s, because 37 floods occurred in the 1990s, a frequency that decreased by one-half to 18 in the 2000s, and the mean deaths per occurrence were 21.7 in the 1990s, which decreased by more than half to 9.1 in the 2000s. In contrast, annual mean deaths by typhoons increased because the frequency of typhoons rose and the mean deaths per occurrence, 24.1 in the 1990s, increased by 1.8 times to 43.5 in the 2000s. The present study also confirmed that the seasonal period with the most deaths changed (data not shown). Annual mean deaths in August were 32.9 in the 1990s, decreasing to 11.0 in the 2000s, while those in September were 13.2 in the 1990s, increasing to 16.1 in the 2000s. The fact that the month with the most deaths moved from August (summer) to September (fall) is in line with the fact that more floods occur in the summer, whereas the fall is subject to more typhoons.\nThe data used in the present study have several strengths, in that the scenes where deaths occurred due to meteorological disasters were inspected first-hand. Our analysis included all deaths from accidents or injuries that had a relatively clear causal relationship with meteorological disasters in Korea. Previous studies gathered death data from different sources such as newspapers, articles, scientific and government reports, death certificates, or databases compiled from several national datasets. Thus, these researchers were not able to apply standardized death categories, and therefore those studies had limitations that stemmed from inconsistent hazard mortality data. However, our study gathered data from a single database and utilized the total number of deaths, with the advantage of identifying causes of death in a consistent and specific manner.\nAnother advantage of the present study lies in the fact that we analyzed causes of death according to all types of meteorological disasters in detail, unlike previous studies [16,23-25]. Not only did we analyze the causes of death by separating the places of drowning, a major cause of death, but we also assessed landslide and structural collapse, which accounted for approximately 40% of deaths other than drowning.\nHowever, the present study is not without limitations. First, the analysis period was relatively short, from 1990 to 2008, compared with a previous study [2], which analyzed data from 1900 to 2006, and another study in the US [9], which analyzed data from 1974 to 2004. 
Nevertheless, the period analyzed in the current study appears to be long enough to identify causes of death and determine the demographic characteristics of these deaths, as shown by the fact that changing trends in human casualties from meteorological disasters emerged in a clear and logical fashion by decade.\nSecond, the study failed to include social and environmental elements of deaths by meteorological disasters. Vulnerability to the health impacts of meteorological disasters depends on the personal characteristics (location of residence, age, income, education, and disability) and the social and environmental circumstances (level of preparation for disasters, health responses, and environmental collapse) of the individuals at risk [26-31]. Information concerning elements of individual behavior that lead to vulnerability, such as drinking and swimming skills, can be useful when developing behavioral guidelines for citizens. However, the present study could not reflect those elements due to the limitations of system data that did not consider disaster epidemiology. That limitation, however, applies to most previous studies on damage by meteorological disasters, not only to the present study. In the future, research on data system establishment and disaster epidemiology should be conducted to systematically collect personal characteristics and social and environmental information.", "The total number of deaths in Korea from meteorological disasters between 1990 and 2008 was 2,045. Floods caused the greatest number of deaths, but the greatest meteorological cause of death is slowly changing to typhoons. The most common cause of death was drowning. Factors associated with greater vulnerability were living in coastal provinces, older age, and male gender. A disaster epidemiology system is needed to establish effective measures for the adaptation to meteorological disasters." ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data collection", "Categories of Meteorological Disasters", "Causes of Death", "Statistical Analysis", "Results", "Discussion", "Conclusions" ]
[ "Patterns of meteorological elements such as temperature and precipitation have been altered due to climate change [1]. The frequency, intensity, and duration of meteorological disasters have also increased since the 1920s [2,3]. These phenomena are attributed to the accelerated rise in the sea level due to climate changes, increasing sea surface temperatures, more powerful tropical and temperate cyclones, changing characteristics of air pressure and precipitation, and acidification of the oceans [1,4,5]. Meteorological disasters, along with extreme heat, infectious diseases, water- and food-borne illnesses, and air pollution, are important components of climate change that impact morbidity and mortality rates [6-8].\nStudies of damages caused by meteorological disasters, however, usually address economic loss and rarely deal with disaster epidemiology regarding victims and vulnerable groups [9]. It is estimated that climate change will worsen the frequency and severity of meteorological disasters in the future [7,10,11]. There are several examples of this trend, involving massive numbers of victims caused by powerful meteorological disasters in recent years, including the Sri Lanka tsunami of 2004 [12] and hurricane Katrina in the US in 2005 [13]. It is also estimated that poorer countries will have more victims as a result of meteorological disasters, because of the huge disease burden [10,14]. Thus, victims of meteorological disasters will emerge as a critical social and public health issue on a global level.\nInformation regarding specific causes of death due to meteorological disasters is essential to identify vulnerable population groups and establish preventive measures against damages, and to recommend behavioral guidelines for citizens. Thus, it is important to analyze specific causes of death when we study the demographic and regional characteristics of deaths due to meteorological disasters according to countries and regions.\nSome studies have analyzed causes of death according to specific types of disasters such as floods, tsunamis, and hurricanes [12,15-17]. However, it is difficult to find research that concretely analyzes causes of death and sociodemographic characteristics over a long time-period beyond the accumulation of the number of victims by meteorological disaster.\nA previous study [2] examined trends in the number and rate of deaths for all types of disasters such as drought, flood, storm, and hurricane; Borden and Cutter [18] analyzed the spatial trends in death rates due to disasters in the US, and another study [9] analyzed various disasters in the US over 25 years, evaluating gender, race, and age characteristics, as well as vulnerable areas. 
However, these researchers did not assess the specific causes of death according to diverse types of disasters.\nIt is also necessary to examine changing trends in the numbers and characteristics of deaths due to meteorological disasters by analyzing long-term data, in order to assess the influences of climate change on health, predict changes in future damage, and set up appropriate countermeasures.\nThe present study analyzed specific causes, and demographic and regional characteristics, of death due to meteorological disasters in South Korea from 1990 to 2008, compared those characteristics according to the types of meteorological disasters, and analyzed changing trends with regard to deaths.", " Data collection In Korea, the local governments dispatch their civil servants to check accident scenes and prepare and submit Victim Survey Reports when people are killed in meteorological disasters. Civil servants identify death by a meteorological disaster based on the relatively clear causal relationship between the meteorological disaster and the accident or damage that directly caused the victim's death.\nA Victim Survey Report is prepared on the basis of the accident and contains information concerning the time of death, place of accident, name, gender, and address of the deceased, details and cause of the accident, photos of the accident scene, meteorological conditions at the time of the accident, and measures taken by the local government. A uniform report form is used across the nation. The 16 metropolitan governments combine those reports and submit them to the National Emergency Management Agency (NEMA). Based on these reports, NEMA has collected all information on the victims of meteorological disasters. In the present study, information from all deaths from 1990 to 2008 was re-processed and analyzed for our specific research goals. After excluding data from 2001, for which some information was missing, a total of 2,045 deaths were examined for cause, location, and year of accident, as well as gender and age of victims. Deaths also included subjects who were missing for a long period of time.\n Categories of Meteorological Disasters The types of meteorological disasters, which could be the cause of death, were selected based on official annual meteorological reports by the Korea Meteorological Administration at the time of death. Due to the high level of complexity and ambiguity of the categories proposed by administration investigators, similar types of disasters were combined into the same category in order to facilitate comparison of results with international studies. Meteorological disasters such as strong wind, storms, and wind and waves meet the criterion of wind speed of 14 m/s on land or 21 m/s at sea, and thus they were combined under the category of storm. Heavy rain, defined by the Korea Meteorological Administration as precipitation of 80 mm within 12 hours or 150 mm, is referred to as flood in this study. Meteorological disasters caused by winter cold, such as heavy snow, snowstorms, and cold, are grouped together as cold. Since a typhoon is a phenomenon of strong wind, wind and waves, and heavy rain occurring together, those meteorological events were placed in the typhoon category (Table 1). These categories are similar to those used by Thacker et al. [9] for defining the categories of storm and flood, and cold and lightning, with the exception that, unlike their study, our study distinguishes storm from flood.\nCategories of Meteorological Disasters\n Causes of Death Causes of death listed on victim survey reports do not follow the systematic categories used in medicine, but adopt diverse expressions to depict the varied situations surrounding the deaths. Therefore, based on the categories described in previous studies [9,16] and the Disaster-Related Mortality Surveillance [19] by the Centers for Disease Control and Prevention of the USA, the present study categorized specific causes of death into drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Drowning by place of occurrence was further categorized into drowning in a river, sea, submerged house, sunken vessel, or urban facility such as a manhole, road, or sewer. When deaths occurred as a result of meteorological disasters causing the collapse of a bridge, wall, bungalow, stable, terrace, bank, or house, or of flooding causing the collapse of a house and/or road, these deaths were all combined in the category of structural collapse. When death occurred due to being hit by an object, having limbs severed, or being hit by a vehicle, these deaths were combined in the category of collision. Electrocution, lightning, fall, avalanche, deterioration of disease by disaster, and landslide were analyzed as separate causes of death, since the cause of death in such cases was clear (Table 2).\nCauses of death by meteorological disasters\n Statistical Analysis The specific causes of death were compared and analyzed according to types of meteorological disaster. Demographic and regional characteristics were also analyzed to examine relative vulnerability. Victims 0-4 years of age were placed in the young childhood age group and those 5-19 years of age in the adolescence age group. Groups were formed in 10-year increments for those victims ranging from 20-80 years of age. Regions were analyzed according to their geographic distribution based on 232 basic administrative units, including cities, guns, and gus, through the geographic information system (GIS). Regions were also divided into metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland areas, and coastal regions. In order to calculate death rates per 1,000,000 individuals, the 2000 resident registration data of the National Statistics Office for inhabitants over 5 years old in administrative districts such as dongs, eups, and myeons were used, along with resident registration data for administrative districts such as cities, guns, and gus. The annual average of deaths was compared and analyzed according to types of meteorological disasters between the 1990s and 2000s in order to determine whether there were changes in the types of meteorological disasters that produced the most deaths. Statistical analysis was carried out with the SPSS 12.0 program and Excel.", "In Korea, the local governments dispatch their civil servants to check accident scenes and prepare and submit Victim Survey Reports when people are killed in meteorological disasters. Civil servants identify death by a meteorological disaster based on the relatively clear causal relationship between the meteorological disaster and the accident or damage that directly caused the victim's death.\nA Victim Survey Report is prepared on the basis of the accident and contains information concerning the time of death, place of accident, name, gender, and address of the deceased, details and cause of the accident, photos of the accident scene, meteorological conditions at the time of the accident, and measures taken by the local government. A uniform report form is used across the nation. The 16 metropolitan governments combine those reports and submit them to the National Emergency Management Agency (NEMA). Based on these reports, NEMA has collected all information on the victims of meteorological disasters. In the present study, information from all deaths from 1990 to 2008 was re-processed and analyzed for our specific research goals. After excluding data from 2001, for which some information was missing, a total of 2,045 deaths were examined for cause, location, and year of accident, as well as gender and age of victims. Deaths also included subjects who were missing for a long period of time.", "The types of meteorological disasters, which could be the cause of death, were selected based on official annual meteorological reports by the Korea Meteorological Administration at the time of death. 
Due to the high level of complexity and ambiguity of the categories proposed by administration investigators, similar types of disasters were combined into the same category in order to facilitate comparison of results with international studies. Meteorological disasters such as strong wind, storms, and wind and waves meet the criterion of wind speed of 14 m/s on land or 21 m/s at sea, and thus they were combined under the category of storm. Heavy rain, defined by the Korea Meteorological Administration as precipitation of 80 mm within 12 hours or 150 mm, is referred to as flood in this study. Meteorological disasters caused by winter cold, such as heavy snow, snowstorms, and cold, are grouped together as cold. Since a typhoon is a phenomenon of strong wind, wind and waves, and heavy rain occurring together, those meteorological events were placed in the typhoon category (Table 1). These categories are similar to those used by Thacker et al. [9] for defining the categories of storm and flood, and cold and lightning, with the exception that, unlike their study, our study distinguishes storm from flood.\nCategories of Meteorological Disasters", "Causes of death listed on victim survey reports do not follow the systematic categories used in medicine, but adopt diverse expressions to depict the varied situations surrounding the deaths. Therefore, based on the categories described in previous studies [9,16] and the Disaster-Related Mortality Surveillance [19] by the Centers for Disease Control and Prevention of the USA, the present study categorized specific causes of death into drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Drowning by place of occurrence was further categorized into drowning in a river, sea, submerged house, sunken vessel, or urban facility such as a manhole, road, or sewer. When deaths occurred as a result of meteorological disasters causing the collapse of a bridge, wall, bungalow, stable, terrace, bank, or house, or of flooding causing the collapse of a house and/or road, these deaths were all combined in the category of structural collapse. When death occurred due to being hit by an object, having limbs severed, or being hit by a vehicle, these deaths were combined in the category of collision. Electrocution, lightning, fall, avalanche, deterioration of disease by disaster, and landslide were analyzed as separate causes of death, since the cause of death in such cases was clear (Table 2).\nCauses of death by meteorological disasters", "The specific causes of death were compared and analyzed according to types of meteorological disaster. Demographic and regional characteristics were also analyzed to examine relative vulnerability. Victims 0-4 years of age were placed in the young childhood age group and those 5-19 years of age in the adolescence age group. Groups were formed in 10-year increments for those victims ranging from 20-80 years of age. Regions were analyzed according to their geographic distribution based on 232 basic administrative units, including cities, guns, and gus, through the geographic information system (GIS). Regions were also divided into metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland areas, and coastal regions. 
In order to calculate death rates per 1,000,000 individuals, the 2000 resident registration data of the National Statistics Office for inhabitants over 5 years old in administrative districts such as dongs, eups, and myeons were used, along with resident registration data for administrative districts such as cities, guns, and gus. The annual average of deaths was compared and analyzed according to types of meteorological disasters between the 1990s and 2000s in order to determine whether there were changes in the types of meteorological disasters that produced the most deaths. Statistical analysis was carried out with the SPSS 12.0 program and Excel.", "Table 3 shows the causes of all deaths according to the types of meteorological disasters analyzed in the study. Deaths from flood were the greatest at 966 (47.2%), followed by typhoon at 748 (36.6%) and storm at 316 (15.5%). Drowning at 1,234 (60.3%) was the leading cause of death among deaths by meteorological disasters, followed by landslide at 403 (19.7%) and structural collapse at 206 (10.1%). The percentages of deaths by landslide (26.0%) and structural collapse (14.4%) were higher in floods than in other types of meteorological disasters, and 96.6% of deaths in storms were due to drowning.\nCauses of death according to the type of meteorological disaster, 1990-2008\nDrowning was found to be the most important cause of death by meteorological disaster and was further analyzed according to location. Of the 1,234 people who drowned, those who drowned in a river and in a sunken vessel totaled 743 (60.1%) and 368 (29.8%), respectively. Of the 505 deaths by drowning in a flood, 478 (94.7%) occurred in a river. Of the 424 deaths by typhoon, 263 (62.0%) occurred in a river, with the remaining individuals drowning in a sunken vessel or submerged house. Most deaths by storm were due to drowning, and 93.1% occurred in a sunken vessel. As for gender, men accounted for 60.8% of deaths by "drowning in a river," whereas women accounted for only 38.9%. Men accounted for 65.5%, 96.5%, and 54.1% of deaths by "drowning in the sea," "drowning in a sunken vessel," and "landslide," respectively, with greater percentages for each category compared with women. Women accounted for 52.4% and 56.0% of deaths by "structural collapse" and "drowning in a submerged house," respectively, percentages that were slightly greater than those for men. These findings show that in women, death occurs more frequently in residential areas, whereas in men, death occurs more frequently outdoors.\nTable 4 presents the results of the analysis of deaths by meteorological disaster according to demographic characteristics. While 3.11 per 1 million men died due to a meteorological disaster, the number was 1.63 for women, making the male death rate 1.9 times higher than the female death rate. The death rate increased with age; specifically, there were 1.71 deaths per 1 million individuals in their twenties, 2.71 in their forties, 3.68 in their fifties, and 4.70 in their sixties. A similar tendency was seen with flood and typhoon disasters. 
The number of males who died by storm was significantly higher than that for females, with the highest number of deaths in the forties and fifties age groups.\nDeath rate* according to the type of meteorological disaster by gender and age, 1990-2008\n* Annual average deaths per 1 million people calculated using the 2000 resident registration data for individuals over 5 years old of the Korean Statistical Information Service.\nFigure 1 shows the geographic distribution of death rates by meteorological disasters according to the basic administrative units of Korea, including cities, guns, and gus.\nDeath rates* in administrative units of cities, guns, and gus, 1990-2008. * Annual mean deaths per 1 million people calculated using the 2000 resident registration data of the Korean Statistical Information Service.\nTable 5 shows the comparison of death rates between metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland regions, and coastal regions of the 232 basic administrative units. The coastal regions of provinces (small- and medium-sized cities and agricultural and fishing towns) recorded the highest death rate due to disasters at 6.19 per 1 million people, followed by the inland regions of provinces at 3.23, the coastal regions of metropolitan cities at 0.94, and the inland regions of metropolitan cities at 0.55. Death rates due to meteorological disasters were higher in provinces than in metropolitan cities, and in coastal regions than in inland regions. The overall death rate showed the same tendency as that for typhoons, which had a 0.26 death rate in the inland regions of metropolitan cities, 0.39 in the coastal regions of metropolitan cities, 0.73 in the inland regions of provinces, and 3.11 in the coastal regions of provinces. In the case of storms, most deaths occurred in coastal regions, and the coastal regions of provinces accounted for the majority of deaths, at 2.17 per 1 million individuals. The death rate by flood was 0.35 and 0.29 in the coastal and inland regions of metropolitan cities, respectively, with the coastal regions recording a slightly higher death rate. In provinces, the death rate of inland regions was 2.47 per 1 million individuals, which was much higher than the 0.84 of the coastal regions.\nDeath rates* of regions according to meteorological disaster types, 1990-2008\n* Annual mean deaths per 1 million people calculated using the 2000 resident registration data of the Korean Statistical Information Service.\nFigure 2 shows the changes in annual mean deaths by meteorological disasters. Deaths from all meteorological disasters decreased considerably in the 2000s compared to the 1990s. Annual mean deaths by flood were 80.2 in the 1990s, decreasing to roughly one-quarter of that figure, 20.5, in the 2000s; those by storm were 29.3 in the 1990s, decreasing by about nine-tenths to 2.88 in the 2000s; but those by typhoon were 31.3 in the 1990s, increasing by 1.7 times to 54.4 in the 2000s.\nChange in the number of annual average deaths due to meteorological disasters, 1990-2008.", "The results of the present study indicate that, of all the meteorological disasters striking South Korea, floods caused the greatest number of deaths (966, 47.2%), with the majority due to drowning (505, 52.2%). These results are similar to those of previous studies, which reported that two-thirds of all deaths from floods were due to drowning. 
In addition to floods, drowning was also the biggest cause of death for typhoons and storms, at 426 (56.7%) and 305 (96.6%), respectively. In the drowning category, "drowning in a river" accounted for 60.1%, which suggests a need to set up specific measures focused on avoiding river drowning when establishing preventive guidelines for dealing with meteorological disasters. Meanwhile, in the case of storms, 93.1% of those who drowned did so in a sunken vessel, which also emphasizes the need for preventive measures to address this cause of death.\nA previous study in Australia [15] reported that men younger than 25 years of age and older than 59 years of age were vulnerable to flood damage. Another study in the US [9] reported that the male population 75 years of age or older was vulnerable to storms and floods. The Fourth IPCC Assessment Report also noted that the elderly were more vulnerable to damage and injury caused by meteorological disasters than younger age groups. On the other hand, a study on tsunamis [12] reported that women, infants, and young children were more vulnerable to this type of natural disaster. According to the results of the present study on the demographic vulnerability of victims of meteorological disasters, vulnerability was high among men and the elderly; in particular, "drowning in a river" and "drowning in a sunken vessel" were more common among men than among women, possibly because men tend to attempt more dangerous actions such as crossing an overflowing river. The fact that more men succumbed to "drowning in a sunken vessel" than women reflects the fact that in South Korea men typically work on fishing vessels, which often sink during meteorological disasters. Since vulnerability differs according to the type of meteorological disaster, future preventive measures with regard to meteorological disasters should be based on vulnerability analysis results.\nRegarding geographic vulnerability, provinces (small- and medium-sized cities and agricultural and fishing towns) and coastal regions were relatively more vulnerable to meteorological disasters than metropolitan cities and inland regions. These results correspond to predictions that climate change will most likely affect coastal regions [1,5,20-22].\nThe results of the analyses of the number and characteristics of deaths by meteorological disasters over the evaluated years confirm that the most deadly disaster has gradually moved from flood to typhoon. Annual mean deaths due to floods decreased in the 2000s compared to the 1990s, because 37 floods occurred in the 1990s, a frequency that decreased by one-half to 18 in the 2000s, and the mean deaths per occurrence were 21.7 in the 1990s, decreasing by more than half to 9.1 in the 2000s. In contrast, annual mean deaths by typhoons increased, because the frequency of typhoons rose and the mean deaths per occurrence, 24.1 in the 1990s, increased by 1.8 times to 43.5 in the 2000s. The present study also confirmed that the seasonal period of most deaths changed (data not shown). Annual mean deaths in August were 32.9 in the 1990s, decreasing to 11.0 in the 2000s, while they were 13.2 in September in the 1990s, increasing to 16.1 in the 2000s. 
The fact that the month with the most deaths moved from August (summer) to September (fall) is in line with the fact that more floods occur in the summer, whereas the fall is subject to more typhoons.\nThe data used in the present study have several strengths, in that the scenes where deaths occurred due to meteorological disasters were inspected first-hand. Our analysis included all deaths from accidents or injuries that had a relatively clear causal relationship with meteorological disasters in Korea. Previous studies gathered death data from different sources such as newspapers, articles, scientific and government reports, death certificates, or databases compiled from several national datasets. Thus, these researchers were not able to apply standardized death categories, and therefore these studies had limitations that stemmed from inconsistent hazard mortality data. However, our study gathered data from the same database and utilized the total number of deaths, with the advantage of identifying causes of death in a consistent and specific manner.\nAnother advantage of the present study lies in the fact that we analyzed causes of death according to all types of meteorological disasters in detail, unlike previous studies [16,23-25]. Not only did we analyze the causes of death by separating the places of drowning, a major cause of death, but we also assessed the causes of landslide and structural collapse, which accounted for approximately 40% of deaths other than drowning.\nHowever, the present study is not without limitations. First, the analysis period was relatively short, from 1990 to 2008, compared to a previous study [2], which analyzed data from 1900 to 2006, and another study in the US [9], which analyzed data from 1974 to 2004. Nevertheless, the period analyzed in the current study appears to be long enough to identify causes of death and determine the demographic characteristics of these deaths, as demonstrated by the fact that changing trends in human casualties by meteorological disasters appeared in a clear and logical fashion according to decade.\nSecond, the study failed to include social and environmental elements of deaths by meteorological disasters. Vulnerability, such as the influence on health by meteorological disasters, depends on personal characteristics (location of residence, age, income, education, and disability) and social and environmental elements (level of preparation for disasters, responses for health, and environmental collapse) of individuals in danger [26-31]. Information concerning elements of individual behavior that lead to vulnerability, such as drinking and swimming skills, can be useful when developing behavioral guidelines for citizens. However, the present study failed to reflect those elements due to limitations of system data that did not consider disaster epidemiology. That limitation, however, applies to most of the previous studies on damage by meteorological disasters, as well as the present study. In the future, research on data system establishment and disaster epidemiology should be conducted to systematically collect personal characteristics and social and environmental information.", "The total number of deaths in Korea from meteorological disasters between 1990 and 2008 was 2,045. Floods caused the greatest number of deaths, but the deadliest type of meteorological disaster is slowly shifting to typhoons. The most common cause of death was drowning. 
Factors associated with greater vulnerability were living in coastal provinces, older age, and male gender. A disaster epidemiology system is needed to establish effective measures for adaptation to meteorological disasters." ]
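The per-million death-rate calculation described in the Statistical Analysis text above reduces to counting deaths, dividing by the number of observed years, and normalizing by a fixed year-2000 population denominator. The following is a minimal sketch of that arithmetic; the record layout, region names, sample figures, and the 18-year divisor (1990-2008 with 2001 excluded) are illustrative assumptions, not the study's actual code or data.

```python
from collections import defaultdict

# Hypothetical victim records: (year, region, disaster_type), one per death.
victims = [
    (1990, "province_coastal", "typhoon"),
    (1991, "metro_inland", "flood"),
    (2003, "province_coastal", "typhoon"),
]

# Hypothetical year-2000 resident registration counts (inhabitants over
# 5 years old), used as a fixed denominator as the study describes.
population = {"province_coastal": 5_000_000, "metro_inland": 20_000_000}

# 1990-2008 with 2001 excluded -> 18 observed years (an assumption about
# how the annual average was formed).
N_YEARS = 18

deaths = defaultdict(int)
for year, region, disaster in victims:
    deaths[region] += 1

# Annual average deaths per 1,000,000 residents, as in Tables 4 and 5.
for region, n in deaths.items():
    rate = (n / N_YEARS) / population[region] * 1_000_000
    print(f"{region}: {rate:.2f} deaths per million per year")
```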
[ null, "methods", null, null, null, null, null, null, null ]
[ "climate change", "disasters", "flood", "typhoon", "death rate", "vulnerability", "epidemiology" ]
Background: Patterns of meteorological elements such as temperature and precipitation have been altered due to climate change [1]. The frequency, intensity, and duration of meteorological disasters have also increased since the 1920s [2,3]. These phenomena are attributed to the accelerated rise in the sea level due to climate change, increasing sea surface temperatures, more powerful tropical and temperate cyclones, changing characteristics of air pressure and precipitation, and acidification of the oceans [1,4,5]. Meteorological disasters, along with extreme heat, infectious diseases, water- and food-borne illnesses, and air pollution, are important components of climate change that impact morbidity and mortality rates [6-8]. Studies of damage caused by meteorological disasters, however, usually address economic loss and rarely deal with disaster epidemiology regarding victims and vulnerable groups [9]. It is estimated that climate change will worsen the frequency and severity of meteorological disasters in the future [7,10,11]. There are several examples of this trend, involving massive numbers of victims of powerful meteorological disasters in recent years, including the Sri Lanka tsunami of 2004 [12] and Hurricane Katrina in the US in 2005 [13]. It is also estimated that poorer countries will have more victims as a result of meteorological disasters, because of their huge disease burden [10,14]. Thus, victims of meteorological disasters will emerge as a critical social and public health issue on a global level. Information regarding specific causes of death due to meteorological disasters is essential to identify vulnerable population groups, establish preventive measures against damage, and recommend behavioral guidelines for citizens. Thus, it is important to analyze specific causes of death when we study the demographic and regional characteristics of deaths due to meteorological disasters according to countries and regions. Some studies have analyzed causes of death according to specific types of disasters such as floods, tsunamis, and hurricanes [12,15-17]. However, it is difficult to find research that concretely analyzes causes of death and sociodemographic characteristics over a long time period, beyond the accumulation of the number of victims by meteorological disaster. A previous study [2] examined trends in the number and rate of deaths for all types of disasters such as drought, flood, storm, and hurricane; Borden and Cutter [18] analyzed the spatial trends in death rates due to disasters in the US; and another study [9] analyzed various disasters in the US over 25 years, evaluating gender, race, and age characteristics, as well as vulnerable areas. However, these researchers did not assess the specific causes of death according to diverse types of disasters. It is also necessary to examine changing trends in the numbers and characteristics of deaths due to meteorological disasters by analyzing long-term data, in order to assess the influences of climate change on health, predict changes in future damage, and set up appropriate countermeasures. The present study analyzed specific causes, and demographic and regional characteristics, of death due to meteorological disasters in South Korea from 1990 to 2008, compared those characteristics according to the types of meteorological disasters, and analyzed changing trends with regard to deaths. 
Methods: Data collection In Korea, the local governments dispatch their civil servants to check accident scenes and prepare and submit Victim Survey Reports when people are killed in meteorological disasters. Civil servants identify death by a meteorological disaster based on the relatively clear causal relationship between the meteorological disaster and the accident or damage that directly caused the victim's death. A Victim Survey Report is prepared on the basis of the accident and contains information concerning the time of death, place of accident, name, gender, and address of the deceased, details and cause of the accident, photos of the accident scene, meteorological conditions at the time of the accident, and measures taken by the local government. A uniform report form is used across the nation. The 16 metropolitan governments combine those reports and submit them to the National Emergency Management Agency (NEMA). Based on these reports, NEMA has collected all information on the victims of meteorological disasters. In the present study, information from all deaths from 1990 to 2008 was re-processed and analyzed for our specific research goals. After excluding data from 2001, for which some information was missing, a total of 2,045 deaths were examined for cause, location, and year of accident, as well as gender and age of victims. Deaths also included subjects who were missing for a long period of time. Categories of Meteorological Disasters The types of meteorological disasters, which could be the cause of death, were selected based on official annual meteorological reports by the Korea Meteorological Administration at the time of death. Due to the high level of complexity and ambiguity of the categories proposed by administration investigators, similar types of disasters were combined into the same category in order to facilitate comparison of results with international studies. Meteorological disasters such as strong wind, storms, and wind and waves meet the criterion of wind speed of 14 m/s on land or 21 m/s at sea, and thus they were combined under the category of storm. Heavy rain, defined by the Korea Meteorological Administration as precipitation of 80 mm within 12 hours or 150 mm, is referred to as flood in this study. Meteorological disasters caused by winter cold, such as heavy snow, snowstorms, and cold, are grouped together as cold. Since a typhoon is a phenomenon of strong wind, wind and waves, and heavy rain occurring together, those meteorological events were placed in the typhoon category (Table 1). These categories are similar to those used by Thacker et al. [9] for defining the categories of storm and flood, and cold and lightning, with the exception that, unlike their study, our study distinguishes storm from flood. Categories of Meteorological Disasters Causes of Death Causes of death listed on victim survey reports do not follow the systematic categories used in medicine, but adopt diverse expressions to depict the varied situations surrounding the deaths. Therefore, based on the categories described in previous studies [9,16] and the Disaster-Related Mortality Surveillance [19] by the Centers for Disease Control and Prevention of the USA, the present study categorized specific causes of death into drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Drowning by place of occurrence was further categorized into drowning in a river, sea, submerged house, sunken vessel, or urban facility such as a manhole, road, or sewer. When deaths occurred as a result of meteorological disasters causing the collapse of a bridge, wall, bungalow, stable, terrace, bank, or house, or of flooding causing the collapse of a house and/or road, these deaths were all combined in the category of structural collapse. When death occurred due to being hit by an object, having limbs severed, or being hit by a vehicle, these deaths were combined in the category of collision. Electrocution, lightning, fall, avalanche, deterioration of disease by disaster, and landslide were analyzed as separate causes of death, since the cause of death in such cases was clear (Table 2). Causes of death by meteorological disasters Statistical Analysis The specific causes of death were compared and analyzed according to types of meteorological disaster. Demographic and regional characteristics were also analyzed to examine relative vulnerability. Victims 0-4 years of age were placed in the young childhood age group and those 5-19 years of age in the adolescence age group. Groups were formed in 10-year increments for those victims ranging from 20-80 years of age. Regions were analyzed according to their geographic distribution based on 232 basic administrative units, including cities, guns, and gus, through the geographic information system (GIS). Regions were also divided into metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland areas, and coastal regions. In order to calculate death rates per 1,000,000 individuals, the 2000 resident registration data of the National Statistics Office for inhabitants over 5 years old in administrative districts such as dongs, eups, and myeons were used, along with resident registration data for administrative districts such as cities, guns, and gus. The annual average of deaths was compared and analyzed according to types of meteorological disasters between the 1990s and 2000s in order to determine whether there were changes in the types of meteorological disasters that produced the most deaths. Statistical analysis was carried out with the SPSS 12.0 program and Excel. Data collection: In Korea, the local governments dispatch their civil servants to check accident scenes and prepare and submit Victim Survey Reports when people are killed in meteorological disasters. Civil servants identify death by a meteorological disaster based on the relatively clear causal relationship between the meteorological disaster and the accident or damage that directly caused the victim's death. A Victim Survey Report is prepared on the basis of the accident and contains information concerning the time of death, place of accident, name, gender, and address of the deceased, details and cause of the accident, photos of the accident scene, meteorological conditions at the time of the accident, and measures taken by the local government. A uniform report form is used across the nation. The 16 metropolitan governments combine those reports and submit them to the National Emergency Management Agency (NEMA). Based on these reports, NEMA has collected all information on the victims of meteorological disasters. In the present study, information from all deaths from 1990 to 2008 was re-processed and analyzed for our specific research goals. After excluding data from 2001, for which some information was missing, a total of 2,045 deaths were examined for cause, location, and year of accident, as well as gender and age of victims. Deaths also included subjects who were missing for a long period of time. Categories of Meteorological Disasters: The types of meteorological disasters, which could be the cause of death, were selected based on official annual meteorological reports by the Korea Meteorological Administration at the time of death. Due to the high level of complexity and ambiguity of the categories proposed by administration investigators, similar types of disasters were combined into the same category in order to facilitate comparison of results with international studies. Meteorological disasters such as strong wind, storms, and wind and waves meet the criterion of wind speed of 14 m/s on land or 21 m/s at sea, and thus they were combined under the category of storm. Heavy rain, defined by the Korea Meteorological Administration as precipitation of 80 mm within 12 hours or 150 mm, is referred to as flood in this study. Meteorological disasters caused by winter cold, such as heavy snow, snowstorms, and cold, are grouped together as cold. Since a typhoon is a phenomenon of strong wind, wind and waves, and heavy rain occurring together, those meteorological events were placed in the typhoon category (Table 1). These categories are similar to those used by Thacker et al. [9] for defining the categories of storm and flood, and cold and lightning, with the exception that, unlike their study, our study distinguishes storm from flood. Categories of Meteorological Disasters Causes of Death: Causes of death listed on victim survey reports do not follow the systematic categories used in medicine, but adopt diverse expressions to depict the varied situations surrounding the deaths. Therefore, based on the categories described in previous studies [9,16] and the Disaster-Related Mortality Surveillance [19] by the Centers for Disease Control and Prevention of the USA, the present study categorized specific causes of death into drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Drowning by place of occurrence was further categorized into drowning in a river, sea, submerged house, sunken vessel, or urban facility such as a manhole, road, or sewer. When deaths occurred as a result of meteorological disasters causing the collapse of a bridge, wall, bungalow, stable, terrace, bank, or house, or of flooding causing the collapse of a house and/or road, these deaths were all combined in the category of structural collapse. When death occurred due to being hit by an object, having limbs severed, or being hit by a vehicle, these deaths were combined in the category of collision. Electrocution, lightning, fall, avalanche, deterioration of disease by disaster, and landslide were analyzed as separate causes of death, since the cause of death in such cases was clear (Table 2). Causes of death by meteorological disasters Statistical Analysis: The specific causes of death were compared and analyzed according to types of meteorological disaster. Demographic and regional characteristics were also analyzed to examine relative vulnerability. Victims 0-4 years of age were placed in the young childhood age group and those 5-19 years of age in the adolescence age group. Groups were formed in 10-year increments for those victims ranging from 20-80 years of age. Regions were analyzed according to their geographic distribution based on 232 basic administrative units, including cities, guns, and gus, through the geographic information system (GIS). Regions were also divided into metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland areas, and coastal regions. In order to calculate death rates per 1,000,000 individuals, the 2000 resident registration data of the National Statistics Office for inhabitants over 5 years old in administrative districts such as dongs, eups, and myeons were used, along with resident registration data for administrative districts such as cities, guns, and gus. The annual average of deaths was compared and analyzed according to types of meteorological disasters between the 1990s and 2000s in order to determine whether there were changes in the types of meteorological disasters that produced the most deaths. Statistical analysis was carried out with the SPSS 12.0 program and Excel. Results: Table 3 shows the causes of all deaths according to the types of meteorological disasters analyzed in the study. Deaths from flood were the greatest at 966 (47.2%), followed by typhoon at 748 (36.6%) and storm at 316 (15.5%). Drowning at 1,234 (60.3%) was the leading cause of death among deaths by meteorological disasters, followed by landslide at 403 (19.7%) and structural collapse at 206 (10.1%). 
The percentages of deaths by landslide (26.0%) and structural collapse (14.4%) were higher in floods than in other types of meteorological disasters, and 96.6% of deaths in storms were due to drowning. Causes of death according to the type of meteorological disaster, 1990-2008 Drowning was found to be the most important cause of death by meteorological disaster and was further analyzed according to location. Of the 1,234 people who drowned, those who drowned in a river and in a sunken vessel totaled 743 (60.1%) and 368 (29.8%), respectively. Of the 505 deaths by drowning in a flood, 478 (94.7%) occurred in a river. Of the 424 deaths by typhoon, 263 (62.0%) occurred in a river, with the remaining individuals drowning in a sunken vessel or submerged house. Most deaths by storm were due to drowning, and 93.1% occurred in a sunken vessel. As for gender, men accounted for 60.8% of deaths by "drowning in a river," whereas women accounted for only 38.9%. Men accounted for 65.5%, 96.5%, and 54.1% of deaths by "drowning in the sea," "drowning in a sunken vessel," and "landslide," respectively, with greater percentages for each category compared with women. Women accounted for 52.4% and 56.0% of deaths by "structural collapse" and "drowning in a submerged house," respectively, percentages that were slightly greater than those for men. These findings show that in women, death occurs more frequently in residential areas, whereas in men, death occurs more frequently outdoors. Table 4 presents the results of the analysis of deaths by meteorological disaster according to demographic characteristics. While 3.11 per 1 million men died due to a meteorological disaster, the number was 1.63 for women, making the male death rate 1.9 times higher than the female death rate. The death rate increased with age; specifically, there were 1.71 deaths per 1 million individuals in their twenties, 2.71 in their forties, 3.68 in their fifties, and 4.70 in their sixties. A similar tendency was seen with flood and typhoon disasters. The number of males who died by storm was significantly higher than that for females, with the highest number of deaths in the forties and fifties age groups. Death rate* according to the type of meteorological disaster by gender and age, 1990-2008 * Annual average deaths per 1 million people calculated using the 2000 resident registration data for individuals over 5 years old of the Korean Statistical Information Service. Figure 1 shows the geographic distribution of death rates by meteorological disasters according to the basic administrative units of Korea, including cities, guns, and gus. Death rates* in administrative units of cities, guns, and gus, 1990-2008. * Annual mean deaths per 1 million people calculated using the 2000 resident registration data of the Korean Statistical Information Service. Table 5 shows the comparison of death rates between metropolitan cities, provinces (small- and medium-sized cities and farming and fishing towns), inland regions, and coastal regions of the 232 basic administrative units. The coastal regions of provinces (small- and medium-sized cities and agricultural and fishing towns) recorded the highest death rate due to disasters at 6.19 per 1 million people, followed by the inland regions of provinces at 3.23, the coastal regions of metropolitan cities at 0.94, and the inland regions of metropolitan cities at 0.55. Death rates due to meteorological disasters were higher in provinces than in metropolitan cities, and in coastal regions than in inland regions. The overall death rate showed the same tendency as that for typhoons, which had a 0.26 death rate in the inland regions of metropolitan cities, 0.39 in the coastal regions of metropolitan cities, 0.73 in the inland regions of provinces, and 3.11 in the coastal regions of provinces. In the case of storms, most deaths occurred in coastal regions, and the coastal regions of provinces accounted for the majority of deaths, at 2.17 per 1 million individuals. The death rate by flood was 0.35 and 0.29 in the coastal and inland regions of metropolitan cities, respectively, with the coastal regions recording a slightly higher death rate. In provinces, the death rate of inland regions was 2.47 per 1 million individuals, which was much higher than the 0.84 of the coastal regions. Death rates* of regions according to meteorological disaster types, 1990-2008 * Annual mean deaths per 1 million people calculated using the 2000 resident registration data of the Korean Statistical Information Service. Figure 2 shows the changes in annual mean deaths by meteorological disasters. Deaths from all meteorological disasters decreased considerably in the 2000s compared to the 1990s. Annual mean deaths by flood were 80.2 in the 1990s, decreasing to roughly one-quarter of that figure, 20.5, in the 2000s; those by storm were 29.3 in the 1990s, decreasing by about nine-tenths to 2.88 in the 2000s; but those by typhoon were 31.3 in the 1990s, increasing by 1.7 times to 54.4 in the 2000s. Change in the number of annual average deaths due to meteorological disasters, 1990-2008. Discussion: The results of the present study indicate that, of all the meteorological disasters striking South Korea, floods caused the greatest number of deaths (966, 47.2%), with the majority due to drowning (505, 52.2%). These results are similar to those of previous studies, which reported that two-thirds of all deaths from floods were due to drowning. In addition to floods, drowning was also the biggest cause of death for typhoons and storms, at 426 (56.7%) and 305 (96.6%), respectively. In the drowning category, "drowning in a river" accounted for 60.1%, which suggests a need to set up specific measures focused on avoiding river drowning when establishing preventive guidelines for dealing with meteorological disasters. Meanwhile, in the case of storms, 93.1% of those who drowned did so in a sunken vessel, which also emphasizes the need for preventive measures to address this cause of death. A previous study in Australia [15] reported that men younger than 25 years of age and older than 59 years of age were vulnerable to flood damage. Another study in the US [9] reported that the male population 75 years of age or older was vulnerable to storms and floods. The Fourth IPCC Assessment Report also noted that the elderly were more vulnerable to damage and injury caused by meteorological disasters than younger age groups. On the other hand, a study on tsunamis [12] reported that women, infants, and young children were more vulnerable to this type of natural disaster. According to the results of the present study on the demographic vulnerability of victims of meteorological disasters, vulnerability was high among men and the elderly; in particular, "drowning in a river" and "drowning in a sunken vessel" were more common among men than among women, possibly because men tend to attempt more dangerous actions such as crossing an overflowing river. 
The fact that more men succumbed to "drowning in a sunken vessel" than women reflects the fact that in South Korea men typically work on fishing vessels, which often sink during meteorological disasters. Since vulnerability differs according to the type of meteorological disaster, future preventive measures with regard to meteorological disasters should be based on vulnerability analysis results. Regarding geographic vulnerability, provinces (small- and medium-sized cities and agricultural and fishing towns) and coastal regions were relatively more vulnerable to meteorological disasters than metropolitan cities and inland regions. These results correspond to predictions that climate change will most likely affect coastal regions [1,5,20-22]. The results of the analyses of the number and characteristics of deaths by meteorological disasters over the evaluated years confirm that the most deadly disaster has gradually moved from flood to typhoon. Annual mean deaths due to floods decreased in the 2000s compared to the 1990s, because 37 floods occurred in the 1990s, a frequency that decreased by one-half to 18 in the 2000s, and the mean deaths per occurrence were 21.7 in the 1990s, decreasing by more than half to 9.1 in the 2000s. In contrast, annual mean deaths by typhoons increased, because the frequency of typhoons rose and the mean deaths per occurrence, 24.1 in the 1990s, increased by 1.8 times to 43.5 in the 2000s. The present study also confirmed that the seasonal period of most deaths changed (data not shown). Annual mean deaths in August were 32.9 in the 1990s, decreasing to 11.0 in the 2000s, while they were 13.2 in September in the 1990s, increasing to 16.1 in the 2000s. The fact that the month with the most deaths moved from August (summer) to September (fall) is in line with the fact that more floods occur in the summer, whereas the fall is subject to more typhoons. The data used in the present study have several strengths, in that the scenes where deaths occurred due to meteorological disasters were inspected first-hand. Our analysis included all deaths from accidents or injuries that had a relatively clear causal relationship with meteorological disasters in Korea. Previous studies gathered death data from different sources such as newspapers, articles, scientific and government reports, death certificates, or databases compiled from several national datasets. Thus, these researchers were not able to apply standardized death categories, and therefore these studies had limitations that stemmed from inconsistent hazard mortality data. However, our study gathered data from the same database and utilized the total number of deaths, with the advantage of identifying causes of death in a consistent and specific manner. Another advantage of the present study lies in the fact that we analyzed causes of death according to all types of meteorological disasters in detail, unlike previous studies [16,23-25]. Not only did we analyze the causes of death by separating the places of drowning, a major cause of death, but we also assessed the causes of landslide and structural collapse, which accounted for approximately 40% of deaths other than drowning. However, the present study is not without limitations. First, the analysis period was relatively short, from 1990 to 2008, compared to a previous study [2], which analyzed data from 1900 to 2006, and another study in the US [9], which analyzed data from 1974 to 2004. 
The data used in the present study have several strengths, in that the scenes where deaths occurred due to meteorological disasters were inspected first-hand. Our analysis included all deaths from accidents or injuries with a relatively clear causal relationship to meteorological disasters in Korea. Previous studies gathered death data from disparate sources such as newspapers, articles, scientific and government reports, death certificates, or databases compiled from several national datasets; those researchers could not apply standardized death categories, so their studies suffered from inconsistent hazard mortality data. Our study, by contrast, gathered data from a single database and utilized the total number of deaths, with the advantage of identifying causes of death in a consistent and specific manner. Another advantage of the present study is that we analyzed causes of death according to all types of meteorological disasters in detail, unlike previous studies [16,23-25]. Not only did we analyze the causes of death by separating the places of drowning, a major cause of death, but we also assessed landslide and structural collapse, which accounted for approximately 40% of deaths other than drowning. However, the present study is not without limitations. First, the analysis period was relatively short, from 1990 to 2008, compared to a previous study [2] that analyzed data from 1900 to 2006 and another study in the US [9] that analyzed data from 1974 to 2004. Nevertheless, the period analyzed in the current study appears long enough to identify causes of death and determine the demographic characteristics of these deaths, as evidenced by the fact that changing trends in human casualties from meteorological disasters emerged clearly and consistently by decade. Second, the study did not include the social and environmental elements of deaths by meteorological disasters. Vulnerability to the health effects of meteorological disasters depends on the personal characteristics (location of residence, age, income, education, and disability) and the social and environmental circumstances (level of disaster preparedness, health responses, and environmental degradation) of individuals at risk [26-31]. Information concerning individual behavioral elements that contribute to vulnerability, such as drinking and swimming ability, can be useful when developing behavioral guidelines for citizens. The present study, however, could not reflect those elements because the data system did not consider disaster epidemiology; this limitation applies to most previous studies of damage from meteorological disasters as well. In the future, research on data system establishment and disaster epidemiology should be conducted to systematically collect personal characteristics and social and environmental information.

Conclusions: The total number of deaths in Korea from meteorological disasters between 1990 and 2008 was 2,045. Floods caused the greatest number of deaths, but the leading meteorological cause of death is slowly shifting to typhoons. The most common specific cause of death was drowning. Factors associated with greater vulnerability were living in coastal provinces, older age, and male gender. A disaster epidemiology system is needed to establish effective measures for adaptation to meteorological disasters.
Background: Meteorological disasters are an important component when considering climate change issues that impact morbidity and mortality rates. However, there are few epidemiological studies assessing the causes and characteristics of deaths from meteorological disasters. The present study aimed to analyze the causes of death associated with meteorological disasters in Korea, as well as demographic and geographic vulnerabilities and their changing trends, to establish effective measures for the adaptation to meteorological disasters. Methods: Deaths associated with meteorological disasters were examined from 2,045 cases in Victim Survey Reports prepared by 16 local governments from 1990 to 2008. Specific causes of death were categorized as drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Death rates were analyzed according to the meteorological type, specific causes of death, and demographic and geographic characteristics. Results: Drowning (60.3%) caused the greatest number of deaths in total, followed by landslide (19.7%) and structural collapse (10.1%). However, the causes of deaths differed between disaster types. The meteorological disaster associated with the greatest number of deaths has changed from flood to typhoon. Factors that raised vulnerability included living in coastal provinces (11.3 times higher than inland metropolitan), male gender (1.9 times higher than female), and older age. Conclusions: Epidemiological analyses of the causes of death and vulnerability associated with meteorological disasters can provide the necessary information for establishing future adaptation measures against climate change. A more comprehensive system for assessing disaster epidemiology needs to be established.
Background: Patterns of meteorological elements such as temperature and precipitation have been altered by climate change [1]. The frequency, intensity, and duration of meteorological disasters have also increased since the 1920s [2,3]. These phenomena are attributed to the accelerated rise in sea level due to climate change, increasing sea surface temperatures, more powerful tropical and temperate cyclones, changing characteristics of air pressure and precipitation, and acidification of the oceans [1,4,5]. Meteorological disasters, along with extreme heat, infectious diseases, water- and food-borne illnesses, and air pollution, are important components of climate change that impact morbidity and mortality rates [6-8]. Studies of the damage caused by meteorological disasters, however, usually address economic loss and rarely deal with disaster epidemiology regarding victims and vulnerable groups [9]. It is estimated that climate change will worsen the frequency and severity of meteorological disasters in the future [7,10,11]. Several recent examples of this trend involve massive numbers of victims of powerful meteorological disasters, including the Sri Lanka tsunami of 2004 [12] and Hurricane Katrina in the US in 2005 [13]. It is also estimated that poorer countries will have more victims of meteorological disasters because of their huge disease burden [10,14]. Thus, victims of meteorological disasters will emerge as a critical social and public health issue on a global level. Information regarding the specific causes of death due to meteorological disasters is essential to identify vulnerable population groups, establish preventive measures against damage, and recommend behavioral guidelines for citizens. It is therefore important to analyze specific causes of death when studying the demographic and regional characteristics of deaths due to meteorological disasters across countries and regions. Some studies have analyzed causes of death according to specific types of disasters such as floods, tsunamis, and hurricanes [12,15-17]. However, it is difficult to find research that concretely analyzes causes of death and sociodemographic characteristics over a long time-period, beyond accumulating the number of victims per meteorological disaster. A previous study [2] examined trends in the number and rate of deaths for all types of disasters such as drought, flood, storm, and hurricane; Borden and Cutter [18] analyzed spatial trends in death rates due to disasters in the US; and another study [9] analyzed various disasters in the US over 25 years, evaluating gender, race, and age characteristics as well as vulnerable areas. However, these researchers did not assess the specific causes of death according to the diverse types of disasters. It is also necessary to examine changing trends in the numbers and characteristics of deaths due to meteorological disasters by analyzing long-term data, in order to assess the influence of climate change on health, predict changes in future damage, and set up appropriate countermeasures. The present study analyzed specific causes and demographic and regional characteristics of deaths due to meteorological disasters in South Korea from 1990 to 2008, compared those characteristics according to the types of meteorological disasters, and analyzed changing trends with regard to deaths. Conclusions: JYJ and HNM designed and coordinated the study. HNM analyzed and interpreted the data.
JYJ and HNM wrote and revised the manuscript. Both authors approved the final manuscript.
Background: Meteorological disasters are an important component when considering climate change issues that impact morbidity and mortality rates. However, there are few epidemiological studies assessing the causes and characteristics of deaths from meteorological disasters. The present study aimed to analyze the causes of death associated with meteorological disasters in Korea, as well as demographic and geographic vulnerabilities and their changing trends, to establish effective measures for the adaptation to meteorological disasters. Methods: Deaths associated with meteorological disasters were examined from 2,045 cases in Victim Survey Reports prepared by 16 local governments from 1990 to 2008. Specific causes of death were categorized as drowning, structural collapse, electrocution, lightning, fall, collision, landslide, avalanche, deterioration of disease by disaster, and others. Death rates were analyzed according to the meteorological type, specific causes of death, and demographic and geographic characteristics. Results: Drowning (60.3%) caused the greatest number of deaths in total, followed by landslide (19.7%) and structural collapse (10.1%). However, the causes of deaths differed between disaster types. The meteorological disaster associated with the greatest number of deaths has changed from flood to typhoon. Factors that raised vulnerability included living in coastal provinces (11.3 times higher than inland metropolitan), male gender (1.9 times higher than female), and older age. Conclusions: Epidemiological analyses of the causes of death and vulnerability associated with meteorological disasters can provide the necessary information for establishing future adaptation measures against climate change. A more comprehensive system for assessing disaster epidemiology needs to be established.
6,094
298
[ 598, 250, 253, 266, 248, 1072, 1226, 82 ]
9
[ "meteorological", "death", "disasters", "deaths", "meteorological disasters", "study", "regions", "disaster", "drowning", "causes" ]
[ "rates meteorological disasters", "disasters deaths meteorological", "meteorological disasters depends", "meteorological disasters cause", "meteorological disasters increased" ]
null
[CONTENT] climate change | disasters | flood | typhoon | death rate | vulnerability | epidemiology [SUMMARY]
[CONTENT] climate change | disasters | flood | typhoon | death rate | vulnerability | epidemiology [SUMMARY]
null
[CONTENT] climate change | disasters | flood | typhoon | death rate | vulnerability | epidemiology [SUMMARY]
[CONTENT] climate change | disasters | flood | typhoon | death rate | vulnerability | epidemiology [SUMMARY]
[CONTENT] climate change | disasters | flood | typhoon | death rate | vulnerability | epidemiology [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cause of Death | Child | Child, Preschool | Disasters | Female | Humans | Infant | Male | Middle Aged | Republic of Korea | Socioeconomic Factors | Weather | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cause of Death | Child | Child, Preschool | Disasters | Female | Humans | Infant | Male | Middle Aged | Republic of Korea | Socioeconomic Factors | Weather | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cause of Death | Child | Child, Preschool | Disasters | Female | Humans | Infant | Male | Middle Aged | Republic of Korea | Socioeconomic Factors | Weather | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cause of Death | Child | Child, Preschool | Disasters | Female | Humans | Infant | Male | Middle Aged | Republic of Korea | Socioeconomic Factors | Weather | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Cause of Death | Child | Child, Preschool | Disasters | Female | Humans | Infant | Male | Middle Aged | Republic of Korea | Socioeconomic Factors | Weather | Young Adult [SUMMARY]
[CONTENT] rates meteorological disasters | disasters deaths meteorological | meteorological disasters depends | meteorological disasters cause | meteorological disasters increased [SUMMARY]
[CONTENT] rates meteorological disasters | disasters deaths meteorological | meteorological disasters depends | meteorological disasters cause | meteorological disasters increased [SUMMARY]
null
[CONTENT] rates meteorological disasters | disasters deaths meteorological | meteorological disasters depends | meteorological disasters cause | meteorological disasters increased [SUMMARY]
[CONTENT] rates meteorological disasters | disasters deaths meteorological | meteorological disasters depends | meteorological disasters cause | meteorological disasters increased [SUMMARY]
[CONTENT] rates meteorological disasters | disasters deaths meteorological | meteorological disasters depends | meteorological disasters cause | meteorological disasters increased [SUMMARY]
[CONTENT] meteorological | death | disasters | deaths | meteorological disasters | study | regions | disaster | drowning | causes [SUMMARY]
[CONTENT] meteorological | death | disasters | deaths | meteorological disasters | study | regions | disaster | drowning | causes [SUMMARY]
null
[CONTENT] meteorological | death | disasters | deaths | meteorological disasters | study | regions | disaster | drowning | causes [SUMMARY]
[CONTENT] meteorological | death | disasters | deaths | meteorological disasters | study | regions | disaster | drowning | causes [SUMMARY]
[CONTENT] meteorological | death | disasters | deaths | meteorological disasters | study | regions | disaster | drowning | causes [SUMMARY]
[CONTENT] disasters | meteorological | meteorological disasters | climate | characteristics | climate change | trends | causes | damages | types disasters [SUMMARY]
[CONTENT] meteorological | accident | death | disasters | wind | categories | meteorological disasters | deaths | reports | cold [SUMMARY]
null
[CONTENT] number deaths | greatest | number | meteorological | cause death | cause death drowning | establish effective measures adaptation | establish effective measures | establish effective | epidemiology system needed establish [SUMMARY]
[CONTENT] meteorological | disasters | death | deaths | meteorological disasters | accident | drowning | study | regions | causes [SUMMARY]
[CONTENT] meteorological | disasters | death | deaths | meteorological disasters | accident | drowning | study | regions | causes [SUMMARY]
[CONTENT] ||| ||| Korea [SUMMARY]
[CONTENT] 2,045 | Victim Survey Reports | 16 | from 1990 to 2008 ||| ||| [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| Korea ||| 2,045 | Victim Survey Reports | 16 | from 1990 to 2008 ||| ||| ||| 60.3% | 19.7% | 10.1% ||| ||| ||| 11.3 | 1.9 ||| ||| [SUMMARY]
[CONTENT] ||| ||| Korea ||| 2,045 | Victim Survey Reports | 16 | from 1990 to 2008 ||| ||| ||| 60.3% | 19.7% | 10.1% ||| ||| ||| 11.3 | 1.9 ||| ||| [SUMMARY]
Surgical Outcome of Chronic Pulmonary Aspergilloma: An Experience from Two Tertiary Referral Hospitals in Addis Ababa, Ethiopia.
33897212
Surgical management of pulmonary aspergillosis is challenging and controversial. This study was designed to assess the clinical profile, indications, and surgical outcomes of pulmonary aspergilloma.
BACKGROUND
A retrospective cross-sectional analysis of 72 patients who underwent pulmonary resection for pulmonary aspergilloma over the period from November 2014 to November 2019 was done. Data on demographic, clinical, and surgical outcome variables were retrieved. Analysis was done using SPSS version 23. The chi-square test was used to assess the significance of associations between variables and surgical outcome.
METHODS
There were 46(63.9%) male and 26(36.1%) female patients, with a mean age of 35.2±11.6 years (range 16-65 years). All patients had previously been treated for tuberculosis. Cough, hemoptysis, and shortness of breath were the main symptoms identified. A ball of fungus was removed together with the surrounding lung. Accordingly, 32(44.4%) lobectomies, 12(16.7%) pneumonectomies, 7(9.7%) bilobectomies, and 21(29.2%) cavernostomies were done. Intraoperative and postoperative complications were seen in 8(11.1%) and 21(29.1%) patients, respectively. Major morbidities encountered included massive intraoperative blood loss, prolonged air leak, empyema, air space, bronchopleural fistula, and wound infection. Hospital mortality was 3(4.2%), and the average hospital stay was 14.8 days. Postoperative complications were evaluated for differences across socio-demographic characteristics and other variables; a statistically significant difference was detected only for the location of the aspergilloma, the side of the lung involved, and the type of surgery done (P ≤ 0.05).
RESULTS
Pulmonary resection for pulmonary aspergilloma showed favorable outcomes when done with good patient selection, meticulous surgical technique, and good postoperative management. However, its long-term outcome and the role of antifungal treatment as adjunctive therapy to surgical resection need further investigation.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Cross-Sectional Studies", "Ethiopia", "Female", "Humans", "Lung", "Male", "Middle Aged", "Pulmonary Aspergillosis", "Retrospective Studies", "Tertiary Care Centers", "Treatment Outcome", "Young Adult" ]
8054460
Introduction
Pulmonary aspergilloma (PA) is a ball of fungus composed of Aspergillus hyphae, fibrin, mucus, and cellular debris within a pulmonary cavity (1). It usually arises within a preexisting pulmonary cavity that has become colonized with Aspergillus spp (2). The most common underlying pathologies predisposing to PA include pulmonary tuberculosis, chronic obstructive pulmonary disease (COPD), prior pneumothorax with associated bulla, and fibrocavitary sarcoidosis (3). Pulmonary cavities of ≥2 cm left after treatment of pulmonary tuberculosis carry about a 20% chance of developing PA (4). Modeling studies estimating the global burden of PA as a sequela of pulmonary tuberculosis reported rates ranging from ≤1 case/100,000 population in western Europe and the USA to 42.9/100,000 population in the Democratic Republic of Congo and Nigeria (5). Reflecting the high frequency of pulmonary tuberculosis, Ethiopia has an estimated rate of 14.5 cases/100,000 population (6). The natural history of PA has been poorly studied, and so far no consistent variable appears to predict its outcome (7). The course of the disease is extremely variable, ranging from spontaneous lysis (7-10%) to severe hemoptysis (8). In up to 30% of patients, minor hemoptysis can progress to life-threatening hemoptysis (9). Moreover, the severity of hemoptysis is related neither to the size or number of aspergillomas nor to the underlying lung disease (10). At present, there is considerable controversy about the optimal treatment for PA, primarily because the natural history of the lesion is not well defined, and high morbidity and mortality have been reported in some surgical series (11-13). Nevertheless, the currently accepted mainstay of treatment worldwide is surgery, and medical options have shown a limited role. In light of newer antifungal agents, drug treatment, though so far disappointing, requires further investigation. The purpose of this study was to evaluate the results of pulmonary resection performed for 72 consecutive patients with PA at Tikur Anbessa Specialized Hospital (TASH) and Menillik 2nd Hospital.
Patients and Methods
A retrospective review of the medical records and theatre operation register notes of patients operated for PA at Menillik 2nd Hospital and Tikur Anbessa Specialized Hospital (TASH) was done. Both are tertiary referral and teaching hospitals in Addis Ababa, Ethiopia. All patients diagnosed with PA who underwent surgical treatment over 5 years (Nov 30, 2014 - Nov 30, 2019) were included. Socio-demographic variables, underlying previous diseases, symptoms, the affected lobe of the lung, radiographic findings, indications for and types of surgery, intraoperative and postoperative complications, mortality, and the outcome of surgical interventions were analyzed. Most patients were referred for surgery from other regional hospitals after long-term and repeated treatment for pulmonary tuberculosis. Preoperative assessment and diagnosis of PA were based on clinical history, physical examination, blood urea nitrogen (BUN) and creatinine, chest radiography, and computed tomography of the chest. Preoperatively, for selected patients, pulmonary function tests and echocardiographic assessment of left and right ventricular function and of possible pulmonary hypertension (which, if present, is a contraindication for surgery) were performed. Immunodiffusion tests and sputum cultures for fungus were not routinely done. Surgical resection was considered for patients with symptoms justifying operation, such as recurrent hemoptysis, recurrent pneumonia, or shortness of breath (SOB), together with localized 'air-crescent' lesions on chest CT scan, local destruction of the lung, and a good pulmonary reserve. Patients with poor pulmonary reserve or at high risk for thoracotomy, and those with concomitant active pulmonary tuberculosis, were recommended conservative medical treatment. All patients were operated under general anesthesia. Depending on availability, either single- or double-lumen endobronchial intubation was done, and a posterolateral thoracotomy was used to access the thoracic cavity. The choice of resection (cavernostomy, lobectomy, bilobectomy, or pneumonectomy) was usually based on the general condition of the patient and the extent of the affected lung. After surgery, most patients were extubated in the operating room and sent to the Intensive Care Unit (ICU); some required ventilator support. A routine ICU and postoperative treatment protocol was followed during their hospital stay. Follow-up was subsequently done in the cardiothoracic surgical referral clinic with serial chest X-rays. A structured questionnaire was used to collect relevant data from patients' medical records and operation theater registry logbooks. Data were collected by final-year surgery residents. Treatment success for this study was defined as a patient operated for PA who fully improved, with no immediate complications seen during the hospital stay or subsequent follow-up visits to the cardiothoracic surgical referral clinic. Data were then cross-checked for completeness, accuracy, and consistency before entry into a software program. Descriptive statistics such as percentages, means, and ranges were computed. Data were expressed as mean ± standard deviation (SD) for normally distributed variables or as the median for those with a non-normal distribution. Characteristics of the study subgroups were compared using the Mann-Whitney U-test for continuous variables and the Pearson chi-square test for categorical variables. Differences in postoperative complications were assessed by univariate analysis.
Variables with significance on univariate analysis, together with previously known risk factors such as age, sex, severity of hemoptysis, past history of tuberculosis, affected side of the lung, and type of surgery done (14-16), were evaluated by multivariate analysis with a stepwise backward elimination method to determine the independent predictors of postoperative complications. Statistical analyses were performed using SPSS 23.0 (IBM Corporation, Armonk, NY, USA). For all tests, a value of P ≤ 0.05 was considered significant.
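For readers reproducing this workflow outside SPSS, the following is a minimal Python sketch of the same steps: a Pearson chi-square test for categorical comparisons, a Mann-Whitney U test for non-normally distributed continuous variables, and a multivariable logistic regression whose candidate predictors would then be pruned by backward elimination. The data file and column names are hypothetical, introduced only for illustration:

    import pandas as pd
    from scipy.stats import chi2_contingency, mannwhitneyu
    import statsmodels.formula.api as smf

    # Hypothetical per-patient dataset with a binary 'complication' outcome.
    df = pd.read_csv("aspergilloma_patients.csv")  # placeholder file name

    # Pearson chi-square: surgery type vs postoperative complication.
    table = pd.crosstab(df["surgery_type"], df["complication"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square p = {p:.3f}")

    # Mann-Whitney U: intraoperative blood loss by complication status.
    with_comp = df.loc[df["complication"] == 1, "blood_loss_ml"]
    without_comp = df.loc[df["complication"] == 0, "blood_loss_ml"]
    print(mannwhitneyu(with_comp, without_comp))

    # Multivariable logistic regression; in the study, variables were then
    # removed stepwise (backward) until the remaining ones met P <= 0.05.
    model = smf.logit(
        "complication ~ C(surgery_type) + C(lung_side) + intraop_complication",
        data=df,
    ).fit()
    print(model.summary())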
Results
Socio-demographic profile: Out of the 72 patients included in the study, 46(63.9%) were male and 26(36.1%) female, a male-to-female ratio of 1.8:1. The mean age was 35.2 ± 11.6 years (range 16-65), and the 25-49-year age group was the most affected, 53(73.6%). The majority of patients came from urban areas, 43(59.7%), and were married, 43(59.7%) (Table 1).

Table 1: Analysis of demographics and clinical features and their relation to postoperative complications, Tikur Anbessa Specialized Hospital and Menillik 2nd Hospital, Nov 30, 2014 - Nov 30, 2019 (1: Pearson chi-square; *: reference group used to calculate AOR).

Diagnosis: The diagnosis of PA was based on clinical features and diagnostic investigations. Cough, 72(100%), and hemoptysis, 72(100%), were the primary symptoms, reported in all patients. Other symptoms were pain, 50(69.4%), fever, 26(36.1%), and SOB, 36(50%). The majority of patients had mild hemoptysis, 40(55.6%); only 16(22.2%) had a history of severe hemoptysis. All patients had been treated for pulmonary tuberculosis, and 49(68%) of them were treated more than once. The time between the first episode of pulmonary tuberculosis and the subsequent development of aspergilloma was variable (range 1-20 years). Concomitant illnesses identified were diabetes mellitus, 2(2.8%), and HIV infection, 2(2.8%). Vital signs were deranged in 9(12.5%) patients, and 17(23.6%) patients had pallor (Table 1). An initial plain chest x-ray was done for 70(97.2%) patients and demonstrated a cavitary lesion with a hyperdensity within it. Subsequently, a CT scan was done for all patients, and the diagnosis of PA was made by identifying the air crescent sign (Figures 1, 2). The CT scan was also used to further characterize the lesion, to assess the involved side and lobe of the lung, and to assess concomitant lung disease. Almost all patients had upper lobe aspergilloma, 71(98.6%), with only one (1.4%) exception who had an isolated lower lobe lesion. When the performance status of the patient was doubtful, a pulmonary function test was used, in 17(23.6%) patients, to allow a better understanding of the general condition. For suspected malignant cavitary lesions, bronchoscopy and BAL cytology were done on 12(16.7%) patients, which demonstrated fungal hyphae. Due to the lack of test reagents, serologic examination to identify possible fungal infection was not done. During surgery, a ball of fungus was demonstrated in all patients, and histologic examination was made for only 11(15.3%) specimens suspected of malignancy; it demonstrated only A. fumigatus.

Figure 1: Left upper lobe cavity with internal soft tissue density and surrounding fibrosis. Figure 2: Left upper lobe intra-cavitary mass with an air crescent sign.
Operative procedures and follow-up: Double-lumen endotracheal intubation was used in 46(63.9%) patients; the remaining 26(36.1%) patients were operated with single-lumen endotracheal tubes. The left lung and right lung were affected in 45(62.5%) and 27(37.5%) patients, respectively. In all cases, the pleural surface was found extensively adherent to the chest wall and was difficult to dissect. The procedures done were lobectomy, 32(44.4%), pneumonectomy, 12(16.7%), bilobectomy, 7(9.7%), and cavernostomy, 21(29.2%). The estimated average intraoperative blood loss was 745 ± 307 ml (range 300-1600 ml) (Table 2).

Table 2: Analysis of the location of the aspergilloma, affected lobe, type of endotracheal tube used, type of surgery, and estimated blood loss, and their relation to postoperative complications, Tikur Anbessa Specialized Hospital and Menillik 2nd Hospital, Nov 30, 2014 - Nov 30, 2019 (Pearson chi-square; reference group used to calculate AOR).

During the initial 30 days of postoperative hospital stay, 21(29.1%) patients developed one or more complications. The major postoperative complications seen were prolonged air leak, 7(9.7%), pneumonia, 3(4.2%), empyema, 7(9.7%), wound infection, 3(4.2%), and others, 1(1.4%). Five (6.9%) patients had significant intraoperative bleeding, of whom one died of exsanguinating bleeding from a major vessel injury. The other two deaths were due to respiratory failure and sepsis, which occurred after pneumonectomy and lobectomy, respectively.
The overall mortality within 30 days of surgery was 3(4.2%); the remaining 95.8% of patients were discharged with improvement (Table 3).

Table 3: Frequency of intraoperative and postoperative complications, Tikur Anbessa Specialized Hospital and Menillik 2nd Hospital, Nov 30, 2014 - Nov 30, 2019.

Patients who improved were discharged and followed for a median of 5 months (range 2 weeks to 24 months) at the surgical referral clinic. During the follow-up period, no significant additional morbidity or mortality was identified. A few patients were noted to have received various antifungal treatments.

Factors predicting adverse outcome: The postoperative complication rate varied between 9.1% and 40% across surgery types. This was evaluated with the chi-square test, and a significant difference was detected (p-value=0.011). Patients who underwent pneumonectomy or lobectomy were more likely to develop complications than those who underwent cavernostomy or bilobectomy. In the bivariate logistic regression models, the affected side of the lung, the type of surgery done, and the presence of intraoperative complications were identified as having a significant association with postoperative complications.
Postoperative outcome (complication) was significantly affected by the location of the aspergilloma (P=0.022; AOR=−0.297; 95% CI), the type of procedure performed (P=0.011; AOR=−0.486; 95% CI), and the presence of intraoperative complications (P=0.020; AOR=5.875; 95% CI).
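A technical aside: an adjusted odds ratio cannot be negative, so the −0.297 and −0.486 quoted above most plausibly are raw logistic regression coefficients (B); exponentiating a coefficient yields the corresponding odds ratio. A sketch of that conversion, continuing the hypothetical statsmodels model from the Methods sketch above:

    import numpy as np

    # Adjusted odds ratios with 95% CIs from a fitted statsmodels logit model:
    # exponentiate the coefficients and their confidence bounds.
    conf = model.conf_int()          # lower and upper bounds of the 95% CI
    conf.columns = ["2.5%", "97.5%"]
    conf["AOR"] = model.params       # raw coefficients (log-odds scale)
    print(np.exp(conf))              # exp() maps coefficients to odds ratios
    # e.g. a coefficient of -0.486 corresponds to AOR = exp(-0.486) ~= 0.62,
    # i.e. lower odds of complication relative to the reference category.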
Discussion
Pulmonary aspergilloma develops most frequently in residual tuberculosis cavities. The British Thoracic and Tuberculosis Association reported that 6% of patients with a healed tuberculosis cavity developed pulmonary aspergilloma within 3 years (17). Many studies have also shown pulmonary tuberculosis to be the most common pre-existing cavitary lesion, accounting for 32-45% (28,29). Chen QK from China reported that 71.1% of their patients with PA had been treated for tuberculosis (9). In Ethiopia, due to the high incidence of tuberculosis, it is not uncommon to see patients with PA (6). In this study, all patients had previously been treated for tuberculosis, and the majority were placed on a repeated course of anti-tuberculosis treatment. The finding that patients were referred for surgery only after being treated multiple times for tuberculosis could indicate a misdirected effort to resolve PA. This pattern of delayed referral and possibly inappropriate treatment of PA requires further study. Similar to other studies, males were affected more than females (9). Unlike in this study, patients with PA in developed countries were typically middle-aged; in a series of 18 patients with chronic pulmonary aspergillosis in the United Kingdom, the median age was 59 years (2). According to United Nations data (April 12, 2020), the median age of the Ethiopian population is 19.5 years, and a correspondingly higher proportion of younger patients with PA was noticed in this study (17-19). The clinical presentation of PA ranges from an incidental radiological finding to exsanguinating hemoptysis (20). In our series, cough and hemoptysis were the primary symptoms, reported in all patients; chest pain, SOB, and fever were also reported in half of the cases. A similar study in India reported hemoptysis in 79.16% of patients (21). Bleeding from a pulmonary aspergilloma usually arises from the bronchial arteries and stops spontaneously (22,23). However, when the cavity erodes into an intercostal vessel, the hemoptysis is severe and unlikely to stop. All fungal balls in this series were found in the upper lobe, because healed tuberculosis cavities in the upper lobe serve as a good nidus for colonization with A. fumigatus. Many studies have reported a similar association (range 13%-89%) (21-23). Even though HIV infection is a common problem in our society, 73.6% of the patients in this study were not tested for HIV infection; among those tested, only 2/19 were positive. Similar studies showed that since the introduction of potent ART, the incidence of aspergilloma in HIV-infected patients is low. In a review of 342 cases of pulmonary aspergillosis and invasive disease with AIDS, only 14 patients were diagnosed with aspergillomas (12). In this study, a low association of PA with other diseases such as diabetes mellitus was also identified. The surgical management of pulmonary aspergilloma is challenging and controversial; no double-blind, placebo-controlled, or randomized trials have been undertaken. According to Shakil et al (24), among 30 patients operated for PA, 23 were symptomatic and were treated with preoperative antifungal treatment. In this study, although all patients were symptomatic, they were operated without receiving preoperative antifungal treatment. In 2008, a guideline for the treatment of aspergillosis was developed by the Infectious Diseases Society of America.
The guideline recommends an approach to therapy that distinguishes 'simple aspergilloma' from the more complex forms of 'chronic cavitary pulmonary aspergillosis' and 'chronic fibrosing pulmonary aspergillosis' (13). Nevertheless, this kind of diagnostic and therapeutic approach was not followed in the management of our patients. Surgical treatment for simple aspergilloma is done to prevent or treat potentially life-threatening complications and is usually curative (14,15). The role of antifungal treatment for simple aspergilloma is controversial. Many patients, such as those who are asymptomatic and have stable radiographic findings over many months, may not require therapy (14). However, if surgery is indicated for symptomatic PA, the use of pre- and post-operative antifungal therapy (usually voriconazole) is recommended, because during the operation most cavities are inadvertently opened, leading to spillage that increases the risk of postoperative aspergillosis (14,25,26). The use of postoperative antifungal treatment in this study was found to be inconsistent. Hence, based on current recommendations, a protocol on the use of antifungal treatment before and after surgery needs to be developed. In contrast to patients with simple aspergilloma, those with chronic cavitary pulmonary aspergillosis and chronic fibrosing pulmonary aspergillosis require life-long antifungal therapy, a practice that was not reported in the management of our cases (12-14). The evidence regarding the efficacy of long-term antifungal therapy for such disease is based on small case series and open-label non-comparative studies, according to which itraconazole and voriconazole are the preferred oral agents. These studies also reported that discontinuation of therapy could lead to a gradual return of symptoms, worsening chest radiography findings, and rising Aspergillus levels (14-16). Surgical outcomes for this category of disease were not as good as those for simple aspergillomas (12,13). In general, because of the need to treat massive hemoptysis, surgical resection remains the mainstay of treatment for PA; however, no agreed document exists on the extent of lung resection. In this study, patients with pneumonectomy/lobectomy did not have outcomes as favorable as the cavernostomy/bilobectomy group, a finding consistent with previous studies (9,19). Similarly, Shirakusa et al. advocate limited lung resection (27). However, other studies showed that radical resection of the affected areas effectively improved patient outcomes, and on this argument many case series advocate standard thoracotomy and lobectomy as the preferred surgical procedures (7-11,18-23). In this series, lobectomy was done in 44.4% of patients. Studies recommend pneumonectomy only when the affected lung is destroyed or the remaining lobe is severely fibrotic and small, unable to expand to fill the chest cavity (7). Cavernostomy was done primarily for either peripherally placed small PA or when the fungal ball lay close to the fissure or hilum and pneumonectomy was not an option because of the patient's general condition. Similar to other studies (19-21,28-32), the postoperative complications related to cavernostomy were significantly low (P=0.011). Cavernostomy is now considered a viable treatment alternative even for those who could undergo pulmonary resection. We investigated the risk factors for postoperative complications and the short-term outcomes of patients with PA.
Among the study population, 21(29.1%) of the patients developed postoperative complications. On risk-adjusted analysis, socio-demographic characteristics and clinical presentation showed no effect on the incidence of adverse events; however, the affected side of the lung, the type of surgery done, and the presence of intraoperative complications emerged as independent predictors of postoperative complications. The reason for the higher postoperative complication rate in right lung surgery is unclear. Previous studies reported operative mortality rates ranging from 4 to 22% (34-37) and postoperative complication rates of 15-78% (28); with current surgical and anesthetic advances, recent reports show improved outcomes (34). Several such series describe acceptable outcomes, with 2 to 5% mortality and a 25% overall complication rate (28-35). Overall, the results of this study compare favorably with other recent results. This study has several limitations. First, the size of the study population is relatively small. Second, because of the study design (retrospective cross-sectional), selection bias toward more symptomatic, referred patients over asymptomatic, non-referred patients is expected. Third, bronchoscopy and cytology were done for only 16.7% of patients; the diagnosis was primarily based on radiologic and intraoperative findings, and some diseases, such as actinomycosis, nocardiosis, intracavitary hematoma, and adenocarcinoma, can mimic PA (32). Fourth, because of the extended time between the first episode of tuberculosis and the subsequent development of aspergilloma, it is difficult to ascertain the exact interval, which may introduce recall bias. Fifth, the duration of follow-up was relatively short, so the long-term outcome of surgery is not known. Therefore, a well-designed prospective cohort study is recommended to confirm the associations between the different variables and both the immediate and long-term adverse outcomes. Finally, this study represents one of the largest surgical experiences in Sub-Saharan countries with a resource-limited setup, and an outcome comparable with many international studies from developed countries was achieved. It is therefore logical to recommend operative resection of the involved area of the lung, provided a thorough preoperative assessment and a careful evaluation of the risk/benefit ratio are made. In conclusion, since pulmonary aspergilloma represents a spectrum of different disease stages, and the risks and benefits of medical and surgical therapy vary with the manifestations of the disease and the patient's pulmonary status, the approach to therapy should be individualized. To improve our patients' outcomes, treating aspergilloma patients in line with the 2008 Infectious Diseases Society of America guidelines is recommended (13).
[ "Socio-demographic profile", "Diagnosis", "Operative procedures and follow-up", "Factors predicting adverse outcome", "Discussion" ]
[ "Out of the 72 patients included in the study, there were 46(63.9%) males and 26(36.1%) females with male-to-female ratio of 1.8:1. The mean age affected was 35.2 +/−11.6 years (range 16–65). The age group 25–49 years, 53(73.6%) was predominantly affected. Besides, the majority of patients came from urban areas, 43(59.7%), and were married, 43(59.7%) (Table 1).\nAnalysis of demographics, Clinical features and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menilik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019 1Pearson Chi-Square, *reference group used to calculate AOR", "The diagnosis of PA was based on clinical features and diagnostic investigations. Cough, 72(100%), and hemoptysis, 72(100%), were the primary symptoms reported in all patients. Other symptoms were pain, 50(69.4%), fever, 26(36.1%), and SOB, 36(50%). The majority of the patients had mild hemoptysis, 40(55.6%). Only 16(22.2%) had a history of severe hemoptysis.\nAll patients were treated for pulmonary tuberculosis, and 49(68%) of them were treated for more than one time. The time between the first incidence of pulmonary tuberculosis and subsequent development of aspergilloma was variable (range 1–20 years). Concomitant illnesses identified were diabetes mellitus, 2(2.8%), and HIV infection, 2(2.8%). The vital sign was deranged in 9(12.5%) patients, and 17(23.6%) patients had Pallor (Table 1).\nAn initial plain chest x-ray was done for 70(97.2%) patients and demonstrated a cavitary lesion with a hyperdensity within it. Subsequently, CT scan was done for all patients, and the diagnosis of PA was made by identifying air crescent sign (Figure 1,2). CT scan was also used to further characterize the lesion, to assess the involved site and lobe of the lung and to assess the concomitant disease of the lung. Almost all patients had upper lobe aspergilloma, 71(98.6%), with only one (1.4%) exception who had isolated lower lobe lesion.\nWhen the performance status of the patient was doubtful, the pulmonary function test was used in 17(23.6%) patients that help a better understanding of the general condition. For suspected malignant cavitary lesion, bronchoscopy and BAL cytology were done on 12(16.7%) patients which demonstrated fungal hyphae. Due to the lack of test reagent, the serologic examination to identify possible fungal infection was not done. During surgery, a ball of fungus was demonstrated in all patients, and histologic examination was made for only 11(15.3%) malignant suspected specimens and it demonstrated only A. fumigatus.\nLeft upper lobe cavity with internal soft tissue density with surrounding fibrosis\nFigure 2: Left upper lobe intra-cavitary mass, with an air crescent sign", "Double lumen Endo-tracheal intubation was used in 46(63.9%) patients. The rest, 26(36.1%) patients, were operated with single-lumen endotracheal tubes. The left lung and right lung were affected in 45(62.5%) and 27(37.5%) patients respectively. In all cases, the pleural surface was found extensively adherent to the chest wall and was difficult to dissect. The type of procedures done was Lobectomy, 32(44.4%), pneumonectomy, 12(16.7%), Bi-lobectomies, 7(9.7%), and cavernostomy, 21(29.2%). 
The estimated average intra-operative blood loss was 745 +/−307ml (range 300–1600ml) (Table 2).\nAnalysis of the location of aspergilloma, affected lobe, type of endotracheal tube used, type of surgery, the estimated blood loss and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30,2014–Nov 30, 2019\nPearson Chi-Square\nreference group used to calculate AOR\nDuring the initial 30 days of postoperative hospital stay, 21(29.1%) patients developed one or more complications. The major postoperative complications seen were prolonged air leak, 7(9.7%), Pneumonia, 3(4.2%), empyema, 7(9.7%), wound infection, 3 (4.2%), and others, 1(1.4). Five (6.9%) patients had significant intraoperative bleeding of which one died due to exsanguinating bleeding from major vessel injury. The other two patients died due to respiratory failure and sepsis which happened after pneumonectomy and lobectomy respectively. The overall mortality within 30 days of surgery was 3(4.2%), and the rest 95.8% of patients were discharged with improvement (Table 3).\nFrequency table of intraoperative and postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019\nThose patients who improved were discharged and followed for a median of 5 months (range 2 weeks to 24 months) at the surgical referral clinic. During the follow-up period, no significant addition of morbidity nor mortality was identified. Few patients were noticed to have received different antifungal treatment.", "Analysis of the postoperative complications between the type of surgery varied between 9.1% and 40%. It was further evaluated with the chi-square test and a significant difference was detected (p-value=0.011). Patients with pneumonectomy/ lobectomy were most likely to develop complications than those with cavernostomy or Bi-lobectomy. In the bivariate logistic regression models, the affected site of the lung, type of surgery done and the presence of intraoperative complications, were identified to have a significant association with postoperative complications. Postoperative outcome (complication) was significantly affected by the location of the aspergilloma [(P=0.022; AOR=−0.297; 95%CI)], the type of procedure performed [(P=0.011; AOR=−-0.486; 95%CI)] and the presence of intraoperative complications [P=0.020; AOR=5.875; 95% CI).", "Pulmonary aspergilloma develops most frequently in residual tuberculosis cavities. The British Thoracic and Tuberculosis Association reported 6% of patients with a healed tuberculosis cavity developing pulmonary aspergilloma within 3 years (17). Many studies also showed pulmonary tuberculosis as the most common pre-existing cavity lesion accounting for 32–45% (28,29). Chen QK from China reported that 71.1% of their patients with PA were treated for tuberculosis (9). In Ethiopia, due to the high incidence of tuberculosis, it is not uncommon to see patients with PA (6). In this study, all patients were previously treated for tuberculosis, and the majority of them placed on a repeated course of anti-tuberculosis treatment. This finding of patients being referred for surgery after treated multiple times for tuberculosis could indicate a wrong effort to resolve PA. This observation in delay referral and possibly wrong treatment of PA requires further study.\nSimilar to other studies, males are affected more than females (9). 
Unlike this study, patients with PA in developed countries typically were found in their middle ages. In a series of 18 patients with chronic pulmonary aspergillosis done in the United Kingdom, the median age affected was 59 years (2). According to United Nations data (April 12,2020), the median age in Ethiopian population is 19.5 years. A similar finding of higher proportion of the younger patients with PA was also noticed in this study (17–19).\nClinical presentation of PA ranges from an incidental radiological finding to exsanguinating hemoptysis (20). In our series, cough and hemoptysis were the primary symptoms reported in all patients. Chest pain, SOB, and fever were also reported in half of the cases. A similar study was done in India which reported hemoptysis in 79.16% patients (21). Bleeding during pulmonary aspergilloma usually arises from the bronchial arteries, and it stops spontaneously (22,23). However, when the cavity erodes into the intercostal vessel, the hemoptysis is severe and is unlikely to stop.\nAll fungal balls in this series were found in the upper lobe. This is because the healed tuberculosis cavities in the upper lobe serve as a good nidus for colonization with A. fumigatus. Many studies reported a similar association (range 13%–89%) (21–23).\nEven though HIV infection is a common problem in our society, 73.6% of the patients in this study were not tested for HIV infection. Among those tested, only 2/19 were tested positive. Similar studies showed that since the introduction of potent ART, the incidence of aspergilloma in HIV-infected patients is low. In a review of 342 cases of pulmonary aspergillosis and invasive disease with AIDS, only 14 patients were diagnosed with aspergillomas (12). In this study, a low association of PA with another disease like diabetes mellitus was also identified.\nThe surgical management of pulmonary aspergilloma is challenging and controversial. No double-blind, placebo-control or randomized trials have been undertaken. According to Shakil et al (24), among the 30 patients operated for PA, 23 were symptomatic and were treated with preoperative antifungal treatment. In this study, although all patients were symptomatic, they were operated without receiving preoperative antifungal treatment.\nIn 2008, a guideline for the treatment of aspergillosis was developed by the Infectious Diseases Society of America. The guideline recommends an approach to therapy by distinguishing ‘simple aspergilloma’ from the more complex forms of ‘chronic cavitary pulmonary aspergillosis’ and ‘chronic fibrosing pulmonary aspergillosis’ (13). Nevertheless, such kind of diagnostic and therapeutic approach was not followed in the management of our patients.\nSurgical treatment for simple aspergilloma is done to prevent or treat potentially life-threatening complications, which usually is curative (14,15). The role of antifungal treatment for simple aspergilloma is controversial. Many patients, such as those who are asymptomatic and have stable radiographic findings over many months, may not require therapy (14). However, if surgery is indicated for symptomatic PA, the use of pre-and post-operative antifungal therapy (usually variconazole) is recommended. This is because during operation, most cavities will inadvertently be opened that leads to spillage, which increases the risk of postoperative aspergillosis (14,25,26). The use of postoperative antifungal treatment in this study was found to be inconsistent. 
Hence, based on the current recommendation, a protocol on the use of antifungal treatment before and after surgery need to be developed.\nIn contrast to patients with simple aspergilloma, those with chronic cavitary pulmonary aspergillosis and chronic fibrosing pulmonary aspergillosis require life-long antifungal therapy, a practice that was not reported in our cases management (12–14). The evidence regarding the efficacy of long-term antifungal therapy for such diseases was based on small case series and open-label non-comparative studies. According to those studies, Itraconazole and voriconazole are the preferred oral agents. They also reported that discontinuation of therapy could lead to a gradual return of symptoms, manifested with worsening of chest radiography findings, and rising level of Aspergill (14–16). Surgical outcomes for this category of diseases were not as good as those for simple aspergillomas (12,13).\nIn general, because of the treatment of massive hemoptysis, surgical resection remains the mainstay of treatment for PA. However, no agreed document exists on the extent of lung resection. In this study, patients with pneumonectomy/lobectomy did not have favorable outcomes compared with those cavernostomy/bilobectomy group. This finding is consistent with a previous study (9,19,). Similarly, Shirakusa et al. advocate a limited lung resection (27). However, studies showed that radical resection of the affected areas had effectively improved patient outcomes. Because of this argument, many case series studies advocate a standard thoracotomy and lobectomy to be the preferred surgical procedures (7–11,18–23). In this series, lobectomy was done for 44.4% of patients. Studies recommend pneumonectomy only when the affected lung was destroyed or the remaining lobe was severely fibrotic and small, which cannot expand to fill the chest cavity (7). Cavernostomy was done primarily for either peripheral placed small PA or when the fungal ball lies close to the fissure or hilum and pneumonectomy was not an option because of the patient’s general condition. Similar to other studies, (19–21, 28–32), the postoperative complications related to it were significantly low (P=0.011). Cavernostomy is now considered as a viable treatment alternative even to those that can be submitted to pulmonary resection.\nWe investigated the risk factors of postoperative complications and short-term observation of patients with PA. Among the study population, 21(29.1%) of the patients developed postoperative complications. At risk-adjusted analysis, socio-demographic characteristics and clinical presentations showed no effect on the incidence of adverse events. However, the affected side of the lung, the type of surgery done, and the presence of intraoperative complications were revealed as the independent predictors of postoperative complications. The reason for the higher postoperative complication in the right lung surgery is unclear.\nPrevious studies reported operative mortality rates to range from 4 to 22%, (34–37) and postoperative complications of 15–78%, (28) with the current surgical and anesthetic advances; recent reports show an improved outcome (34). Several of such series describe acceptable outcomes with 2 to 5% mortality and 25% overall complication rate (28–35) Overall, the result of this study is equally favorable to other recent results.\nSome of the limitations of this study include the following. First the size of the study population is relatively small. 
This study has several limitations. First, the study population is relatively small. Second, because of the study design (retrospective cross-sectional), selection bias towards more symptomatic and referred patients, rather than asymptomatic, non-referred patients, is expected. Third, bronchoscopy and cytology were done for only 16.7% of patients, and the diagnosis was primarily based on radiologic and intraoperative findings; diseases such as actinomycosis, nocardiosis, intracavitary hematoma, and adenocarcinoma can mimic PA (32). Fourth, because of the extended period between the first episode of tuberculosis and the subsequent development of aspergilloma, it is difficult to ascertain the exact interval, which may result in recall bias. Fifth, the duration of follow-up was relatively short, so the long-term outcome of surgery is not known. A well-designed prospective cohort study is therefore recommended to confirm the associations between the different variables and both immediate and long-term adverse outcomes.\nFinally, this study represents one of the largest surgical experiences in Sub-Saharan countries with a resource-limited setup, and the outcomes achieved were comparable with those of many international studies done in developed countries. It is therefore logical to recommend operative resection of the involved area of the lung, provided that a thorough preoperative assessment and a careful evaluation of the risk/benefit ratio are made.\nIn conclusion, since pulmonary aspergilloma represents a spectrum of different disease stages, and the risks and benefits of medical and surgical therapy vary with the manifestations of the disease and the patient's pulmonary status, the approach to therapy should be individualized. To improve our patients’ outcomes, treating aspergilloma patients in line with the 2008 Infectious Diseases Society of America guidelines is recommended (13)." ]
[ "Introduction", "Patients And Methods", "Results", "Socio-demographic profile", "Diagnosis", "Operative procedures and follow-up", "Factors predicting adverse outcome", "Discussion" ]
[ "Pulmonary aspergilloma (PA) is a ball of fungus that is composed of Aspergillus hyphae, fibrin, mucus, and cellular debris within a pulmonary cavity (1). It usually arises within a preexisting pulmonary cavity that has become colonized with Aspergillus spp (2). The most common underlying pathology that predisposes to PA includes pulmonary tuberculosis, chronic obstructive pulmonary disease (COPD), prior pneumothorax with associated bulla and fibrocavitary sarcoidosis (3). Pulmonary cavities of ≥ 2cm which are left after the treatment of pulmonary tuberculosis have about 20% chance of developing PA (4).\nStudies were done using modeling to estimate the global burden of PA as a sequel of pulmonary tuberculosis and showed in the range of =1 case/100,000 populations in western Europe and the USA to 42.9/100,000 population in Democratic Republic of Congo and Nigeria (5). Reflecting on the high frequency of pulmonary tuberculosis, Ethiopia has an estimated rate of 14.5 cases/100,000 population (6).\nThe natural history of PA has been poorly studied. So far, there appears to be no consistent variable that helps to predict its outcome (7). The course of the disease is extremely variable, ranging from undergoing spontaneous lysis (710%) to causing severe hemoptysis (8). In up to 30% of patients, minor hemoptysis can go on to develop life-threatening hemoptysis (9). Besides, the severity of hemoptysis is also not related to the size, the number of aspergillomas nor to the underlying lung disease (10).\nAt present, there is a considerable controversy about the optimal treatment for PA, primarily because the natural history of the lesion is not well defined. Moreover, high morbidity and mortality have been reported from some surgical series (11–13). However, the current globally accepted mainstay of treatment is surgery and medical options have shown limited role. In light of the newer available anti-fungal agents, treating with drugs, though disappointing, requires further investigation.\nThe purpose of this study was to evaluate the results for all patients who underwent pulmonary resection at Tikur Anbessa Specialized Hospital (TASH) and Menillik 2nd Hospital done for 72 consecutive patients with PA.", "A retrospective review of medical records and theatre operation register notes of patients operated for PA at Menillik 2nd and Tikur Anbessa Specialized Hospital (TASH) was done. Both hospitals are tertiary referral and teaching hospitals found in Addis Ababa, Ethiopia.\nAll patients diagnosed to have PA that underwent surgical treatment throughout 5 years (between Nov 30, 2014 – Nov30, 2019) were included. Socio-demographic variables, underlying previous diseases, symptoms, affected lobe of the lung, radiographic findings, indications, and types of surgery, intra-operative and postoperative complications, mortality, and the outcome of surgical interventions were analyzed. Most patients were referred for surgery from other regional hospitals after long-term and repeated treatment of pulmonary tuberculosis. Preoperative assessment and diagnosis of PA were done based on clinical history, physical examination, blood urea nitrogen (BUN) and creatinine, Chest Radiography, and computed tomography of the chest. Pre-operatively for selected patients, pulmonary function tests and echocardiography assessment of left and right ventricular function and possible pulmonary hypertension (if present is a contraindication for surgery) were performed. 
Immune-diffusion tests and sputum culture for fungus are not routinely done.\nSurgical resection was considered for patients with symptoms justifying operation, such as recurrent hemoptysis, recurrent pneumonia, or shortness of breath (SOB), with localized ‘air-crescent’ lesions on chest CT scan, local destruction of the lung, and a good pulmonary reserve. Patients with poor pulmonary reserve or at high risk for thoracotomy, and those with concomitant active pulmonary tuberculosis, were recommended to have conservative medical treatment.\nAll patients were operated on under general anesthesia. Based on availability, either single- or double-lumen endobronchial intubation was done. Posterolateral thoracotomy was used to access the thoracic cavity. The choice of resection (cavernostomy, lobectomy, bilobectomy or pneumonectomy) was usually based on the general condition of the patient and the extent of the affected lung. After surgery, most patients were extubated in the operating room and sent to the Intensive Care Unit (ICU); some required ventilator support. Routine ICU and postoperative treatment protocols were followed during the hospital stay. Follow-up was subsequently done in the cardiothoracic surgical referral clinic with serial chest X-rays.\nA structured questionnaire was used to collect relevant data from patients’ medical records and operation theater registry logbooks. Data were collected by final-year surgery residents. Treatment success was defined as a patient operated on for PA who fully improved, with no immediate complications during the hospital stay or subsequent follow-up visits to the cardiothoracic surgical referral clinic. Data were cross-checked for completeness, accuracy, and consistency before entry into a software program. Descriptive statistics such as percentages, means, and ranges were computed. Data were expressed as mean ± standard deviation (SD) for normally distributed variables or as the median for those with a non-normal distribution. Characteristics of the study subgroups were compared using the Mann-Whitney U-test for continuous variables and the Pearson chi-square test for categorical variables. Differences in postoperative complications were assessed by univariate analysis. Variables significant on univariate analysis, together with previously known risk factors such as age, sex, severity of hemoptysis, past history of tuberculosis, affected side of the lung, and type of surgery done (14–16), were evaluated by multivariate logistic regression with a stepwise backward elimination method to determine the independent predictors of postoperative complications. Statistical analyses were performed using SPSS 23.0 (IBM Corporation, Armonk, NY, USA). For all tests, P-values ≤0.05 were considered significant.",
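The SPSS workflow described above maps directly onto standard open-source tools. The sketch below shows the equivalent steps in Python as a minimal illustration only; the file name, dataframe, and column names are hypothetical stand-ins, not the study's actual dataset or variables:

```python
# Minimal sketch of the analysis pipeline described above (the original
# analysis used SPSS 23.0). All data, file and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("aspergilloma_cohort.csv")  # hypothetical dataset

# Continuous variable vs complication status: Mann-Whitney U-test.
u_stat, p_mw = stats.mannwhitneyu(
    df.loc[df.complication == 1, "blood_loss_ml"],
    df.loc[df.complication == 0, "blood_loss_ml"],
)

# Categorical variable vs complication status: Pearson chi-square.
chi2, p_chi2, dof, _ = stats.chi2_contingency(
    pd.crosstab(df.surgery_type, df.complication)
)

# Multivariate logistic regression with stepwise backward elimination:
# start from all candidate predictors and repeatedly drop the least
# significant one until every remaining predictor has P <= 0.05.
predictors = ["age", "sex", "hemoptysis_severity", "tb_history",
              "affected_side", "surgery_type_code", "intraop_complication"]
X = sm.add_constant(df[predictors].astype(float))
while True:
    model = sm.Logit(df.complication, X).fit(disp=False)
    pvals = model.pvalues.drop("const")
    if pvals.empty or pvals.max() <= 0.05:
        break
    X = X.drop(columns=pvals.idxmax())  # eliminate the weakest predictor

# Adjusted odds ratios (AOR) with 95% CIs from the final model.
summary = pd.concat([np.exp(model.params).rename("AOR"),
                     np.exp(model.conf_int())], axis=1)
print(summary)
```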
"Socio-demographic profile Out of the 72 patients included in the study, there were 46(63.9%) males and 26(36.1%) females, a male-to-female ratio of 1.8:1. The mean age was 35.2 ± 11.6 years (range 16–65), and the 25–49-year age group, 53(73.6%), was predominantly affected. Besides, the majority of patients came from urban areas, 43(59.7%), and were married, 43(59.7%) (Table 1).\nTable 1: Analysis of demographics and clinical features and their relation to postoperative complications, Tikur Anbessa Specialized Hospital and Menilik the 2nd Hospital, Nov 30, 2014 – Nov 30, 2019. 1Pearson chi-square; *reference group used to calculate AOR.\nDiagnosis The diagnosis of PA was based on clinical features and diagnostic investigations. Cough, 72(100%), and hemoptysis, 72(100%), were the primary symptoms, reported in all patients. Other symptoms were pain, 50(69.4%), fever, 26(36.1%), and SOB, 36(50%). The majority of patients had mild hemoptysis, 40(55.6%); only 16(22.2%) had a history of severe hemoptysis.\nAll patients had been treated for pulmonary tuberculosis, and 49(68%) of them had been treated more than once. The time between the first episode of pulmonary tuberculosis and the subsequent development of aspergilloma was variable (range 1–20 years). Concomitant illnesses identified were diabetes mellitus, 2(2.8%), and HIV infection, 2(2.8%). Vital signs were deranged in 9(12.5%) patients, and 17(23.6%) patients had pallor (Table 1).\nAn initial plain chest x-ray, done for 70(97.2%) patients, demonstrated a cavitary lesion with a hyperdensity within it. Subsequently, a CT scan was done for all patients, and the diagnosis of PA was made by identifying the air crescent sign (Figures 1, 2). The CT scan was also used to further characterize the lesion, to assess the involved site and lobe of the lung, and to assess concomitant lung disease. Almost all patients had upper lobe aspergilloma, 71(98.6%), with only one (1.4%) exception who had an isolated lower lobe lesion.\nWhen the performance status of the patient was doubtful, a pulmonary function test was used, in 17(23.6%) patients, to gain a better understanding of the general condition. For suspected malignant cavitary lesions, bronchoscopy and BAL cytology were done in 12(16.7%) patients and demonstrated fungal hyphae. Due to a lack of test reagent, serologic examination for possible fungal infection was not done. During surgery, a ball of fungus was demonstrated in all patients; histologic examination, made only for the 11(15.3%) specimens suspected of malignancy, demonstrated only A. fumigatus.\nFigure 1: Left upper lobe cavity with internal soft-tissue density and surrounding fibrosis. Figure 2: Left upper lobe intra-cavitary mass with an air crescent sign.
Operative procedures and follow-up Double-lumen endotracheal intubation was used in 46(63.9%) patients; the remaining 26(36.1%) were operated on with single-lumen endotracheal tubes. The left and right lungs were affected in 45(62.5%) and 27(37.5%) patients, respectively. In all cases, the pleural surface was found extensively adherent to the chest wall and was difficult to dissect. The procedures done were lobectomy, 32(44.4%), pneumonectomy, 12(16.7%), bilobectomy, 7(9.7%), and cavernostomy, 21(29.2%). The estimated average intra-operative blood loss was 745 ± 307ml (range 300–1600ml) (Table 2).\nTable 2: Analysis of the location of aspergilloma, affected lobe, type of endotracheal tube used, type of surgery, and estimated blood loss, and their relation to postoperative complications, Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital, Nov 30, 2014 – Nov 30, 2019. 1Pearson chi-square; *reference group used to calculate AOR.\nDuring the initial 30 days of postoperative hospital stay, 21(29.1%) patients developed one or more complications. The major postoperative complications were prolonged air leak, 7(9.7%), pneumonia, 3(4.2%), empyema, 7(9.7%), wound infection, 3(4.2%), and others, 1(1.4%). Five (6.9%) patients had significant intraoperative bleeding, of whom one died of exsanguinating bleeding from a major vessel injury. Two other patients died of respiratory failure and of sepsis, after pneumonectomy and lobectomy respectively.
The overall mortality within 30 days of surgery was 3(4.2%); the remaining 95.8% of patients were discharged improved (Table 3).\nTable 3: Frequency of intraoperative and postoperative complications, Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital, Nov 30, 2014 – Nov 30, 2019.\nPatients who improved were discharged and followed for a median of 5 months (range 2 weeks to 24 months) at the surgical referral clinic. During the follow-up period, no significant additional morbidity or mortality was identified. A few patients were noted to have received various antifungal treatments.\nFactors predicting adverse outcome The rate of postoperative complications varied between 9.1% and 40% across the types of surgery, and the chi-square test detected a significant difference (P=0.011): patients with pneumonectomy or lobectomy were more likely to develop complications than those with cavernostomy or bilobectomy. In the bivariate logistic regression models, the affected side of the lung, the type of surgery done, and the presence of intraoperative complications were identified as having a significant association with postoperative complications.
Postoperative outcome (complication) was significantly affected by the location of the aspergilloma (P=0.022; AOR=−0.297; 95% CI), the type of procedure performed (P=0.011; AOR=−0.486; 95% CI), and the presence of intraoperative complications (P=0.020; AOR=5.875; 95% CI)." ]
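One interpretive note: an odds ratio is strictly positive, so negative values such as −0.297 are most plausibly the raw logistic-regression coefficients (log-odds) rather than odds ratios; the adjusted odds ratio is recovered by exponentiation, AOR = exp(B). The snippet below illustrates that conversion, treating the quoted values as coefficients, which is our assumption rather than a statement from the paper:

```python
# Converting logistic-regression coefficients (log-odds) to odds ratios.
# Assumption: the negative "AOR" values quoted above are raw coefficients B,
# since an odds ratio itself cannot be negative.
import math

coefficients = {"aspergilloma location": -0.297, "type of procedure": -0.486}
for name, b in coefficients.items():
    print(f"{name}: B = {b:+.3f}, implied AOR = exp(B) = {math.exp(b):.2f}")
```

Under that reading, the implied AORs are about 0.74 and 0.62, i.e., effects below 1, although the variable coding behind them is not given in the text.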
[ "intro", "methods", "results", null, null, null, null, null ]
[ "Pulmonary Aspergilloma", "Cavernostomy", "Hemoptysis" ]
Introduction: Pulmonary aspergilloma (PA) is a ball of fungus that is composed of Aspergillus hyphae, fibrin, mucus, and cellular debris within a pulmonary cavity (1). It usually arises within a preexisting pulmonary cavity that has become colonized with Aspergillus spp (2). The most common underlying pathology that predisposes to PA includes pulmonary tuberculosis, chronic obstructive pulmonary disease (COPD), prior pneumothorax with associated bulla and fibrocavitary sarcoidosis (3). Pulmonary cavities of ≥ 2cm which are left after the treatment of pulmonary tuberculosis have about 20% chance of developing PA (4). Studies were done using modeling to estimate the global burden of PA as a sequel of pulmonary tuberculosis and showed in the range of =1 case/100,000 populations in western Europe and the USA to 42.9/100,000 population in Democratic Republic of Congo and Nigeria (5). Reflecting on the high frequency of pulmonary tuberculosis, Ethiopia has an estimated rate of 14.5 cases/100,000 population (6). The natural history of PA has been poorly studied. So far, there appears to be no consistent variable that helps to predict its outcome (7). The course of the disease is extremely variable, ranging from undergoing spontaneous lysis (710%) to causing severe hemoptysis (8). In up to 30% of patients, minor hemoptysis can go on to develop life-threatening hemoptysis (9). Besides, the severity of hemoptysis is also not related to the size, the number of aspergillomas nor to the underlying lung disease (10). At present, there is a considerable controversy about the optimal treatment for PA, primarily because the natural history of the lesion is not well defined. Moreover, high morbidity and mortality have been reported from some surgical series (11–13). However, the current globally accepted mainstay of treatment is surgery and medical options have shown limited role. In light of the newer available anti-fungal agents, treating with drugs, though disappointing, requires further investigation. The purpose of this study was to evaluate the results for all patients who underwent pulmonary resection at Tikur Anbessa Specialized Hospital (TASH) and Menillik 2nd Hospital done for 72 consecutive patients with PA. Patients And Methods: A retrospective review of medical records and theatre operation register notes of patients operated for PA at Menillik 2nd and Tikur Anbessa Specialized Hospital (TASH) was done. Both hospitals are tertiary referral and teaching hospitals found in Addis Ababa, Ethiopia. All patients diagnosed to have PA that underwent surgical treatment throughout 5 years (between Nov 30, 2014 – Nov30, 2019) were included. Socio-demographic variables, underlying previous diseases, symptoms, affected lobe of the lung, radiographic findings, indications, and types of surgery, intra-operative and postoperative complications, mortality, and the outcome of surgical interventions were analyzed. Most patients were referred for surgery from other regional hospitals after long-term and repeated treatment of pulmonary tuberculosis. Preoperative assessment and diagnosis of PA were done based on clinical history, physical examination, blood urea nitrogen (BUN) and creatinine, Chest Radiography, and computed tomography of the chest. Pre-operatively for selected patients, pulmonary function tests and echocardiography assessment of left and right ventricular function and possible pulmonary hypertension (if present is a contraindication for surgery) were performed. 
Immune-diffusion tests or sputum culture for fungus is not routinely done. Surgical resection was considered for patients with symptoms that justify operations like recurrent hemoptysis, recurrent pneumonia, shortness of breath (SOB) with localized ‘aircrescent’ lesions on CT-Scan of the chest with local destruction of the lung and a good pulmonary reserve. Patients with poor pulmonary reserve or high risk to undergo thoracotomy, and those with concomitant active pulmonary tuberculosis were recommended to have conservative medical treatment. All patients were operated under general anesthesia. Based on the availability, either a single or double-lumen endobronchial intubation was done. Posterolateral thoracotomy was used to access the thoracic cavity. The choice of resection (cavernostomy, lobectomy, bilobectomy or pneumonectomy) is usually based on the general condition of the patient and the extent of the affected lung. After surgery, most of the patients were extubated in the operating room and sent to the Intensive Care Unit (ICU). Some patients required ventilator support. Routine ICU or postoperative treatment protocol was followed during their hospital stay. Follow-up was subsequently done in the Cardiothoracic surgical referral clinic with serial chest X-rays. A structured questionnaire was used to collect relevant data from patients’ medical records and operation theater registry logbooks. Data was collected by final year surgery residents. Treatment success for this study was defined as a patient operated for PA and fully improved with no immediate complications seen during their hospital stay or subsequent follow-up visit to the cardiothoracic surgical referral clinic. Data was then cross-checked for completeness, accuracy, and consistency before entry to a software program. Descriptive statistics as percentages, mean, and ranges were computed. Data were expressed as mean ± standard deviation (SD) for the variance of a normal distribution or as the median for those with non-normal distribution. Characteristics of the study subgroups were compared using Mann-Whitney U-test, for continuous variables, and Pearson Chi-Square test for categorical variables. The differences in postoperative complications were assessed by univariate analysis. Variables with significance on univariate analysis and previously known risk factors like age, sex, the severity of hemoptysis, past-history of tuberculosis, affected side of the lung and type of surgery done (14–16) were evaluated by multivariate analysis with stepwise backward elimination method to determine the independent predictors of postoperative complications. Statistical analyses were performed using SPSS 23.0 (IBM Corporation, Armonk, NY, USA). For all tests, a value of P=0.05 was considered significant. Results: Socio-demographic profile Out of the 72 patients included in the study, there were 46(63.9%) males and 26(36.1%) females with male-to-female ratio of 1.8:1. The mean age affected was 35.2 +/−11.6 years (range 16–65). The age group 25–49 years, 53(73.6%) was predominantly affected. Besides, the majority of patients came from urban areas, 43(59.7%), and were married, 43(59.7%) (Table 1). 
Analysis of demographics, Clinical features and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menilik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019 1Pearson Chi-Square, *reference group used to calculate AOR Out of the 72 patients included in the study, there were 46(63.9%) males and 26(36.1%) females with male-to-female ratio of 1.8:1. The mean age affected was 35.2 +/−11.6 years (range 16–65). The age group 25–49 years, 53(73.6%) was predominantly affected. Besides, the majority of patients came from urban areas, 43(59.7%), and were married, 43(59.7%) (Table 1). Analysis of demographics, Clinical features and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menilik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019 1Pearson Chi-Square, *reference group used to calculate AOR Diagnosis The diagnosis of PA was based on clinical features and diagnostic investigations. Cough, 72(100%), and hemoptysis, 72(100%), were the primary symptoms reported in all patients. Other symptoms were pain, 50(69.4%), fever, 26(36.1%), and SOB, 36(50%). The majority of the patients had mild hemoptysis, 40(55.6%). Only 16(22.2%) had a history of severe hemoptysis. All patients were treated for pulmonary tuberculosis, and 49(68%) of them were treated for more than one time. The time between the first incidence of pulmonary tuberculosis and subsequent development of aspergilloma was variable (range 1–20 years). Concomitant illnesses identified were diabetes mellitus, 2(2.8%), and HIV infection, 2(2.8%). The vital sign was deranged in 9(12.5%) patients, and 17(23.6%) patients had Pallor (Table 1). An initial plain chest x-ray was done for 70(97.2%) patients and demonstrated a cavitary lesion with a hyperdensity within it. Subsequently, CT scan was done for all patients, and the diagnosis of PA was made by identifying air crescent sign (Figure 1,2). CT scan was also used to further characterize the lesion, to assess the involved site and lobe of the lung and to assess the concomitant disease of the lung. Almost all patients had upper lobe aspergilloma, 71(98.6%), with only one (1.4%) exception who had isolated lower lobe lesion. When the performance status of the patient was doubtful, the pulmonary function test was used in 17(23.6%) patients that help a better understanding of the general condition. For suspected malignant cavitary lesion, bronchoscopy and BAL cytology were done on 12(16.7%) patients which demonstrated fungal hyphae. Due to the lack of test reagent, the serologic examination to identify possible fungal infection was not done. During surgery, a ball of fungus was demonstrated in all patients, and histologic examination was made for only 11(15.3%) malignant suspected specimens and it demonstrated only A. fumigatus. Left upper lobe cavity with internal soft tissue density with surrounding fibrosis Figure 2: Left upper lobe intra-cavitary mass, with an air crescent sign The diagnosis of PA was based on clinical features and diagnostic investigations. Cough, 72(100%), and hemoptysis, 72(100%), were the primary symptoms reported in all patients. Other symptoms were pain, 50(69.4%), fever, 26(36.1%), and SOB, 36(50%). The majority of the patients had mild hemoptysis, 40(55.6%). Only 16(22.2%) had a history of severe hemoptysis. All patients were treated for pulmonary tuberculosis, and 49(68%) of them were treated for more than one time. 
The time between the first incidence of pulmonary tuberculosis and subsequent development of aspergilloma was variable (range 1–20 years). Concomitant illnesses identified were diabetes mellitus, 2(2.8%), and HIV infection, 2(2.8%). The vital sign was deranged in 9(12.5%) patients, and 17(23.6%) patients had Pallor (Table 1). An initial plain chest x-ray was done for 70(97.2%) patients and demonstrated a cavitary lesion with a hyperdensity within it. Subsequently, CT scan was done for all patients, and the diagnosis of PA was made by identifying air crescent sign (Figure 1,2). CT scan was also used to further characterize the lesion, to assess the involved site and lobe of the lung and to assess the concomitant disease of the lung. Almost all patients had upper lobe aspergilloma, 71(98.6%), with only one (1.4%) exception who had isolated lower lobe lesion. When the performance status of the patient was doubtful, the pulmonary function test was used in 17(23.6%) patients that help a better understanding of the general condition. For suspected malignant cavitary lesion, bronchoscopy and BAL cytology were done on 12(16.7%) patients which demonstrated fungal hyphae. Due to the lack of test reagent, the serologic examination to identify possible fungal infection was not done. During surgery, a ball of fungus was demonstrated in all patients, and histologic examination was made for only 11(15.3%) malignant suspected specimens and it demonstrated only A. fumigatus. Left upper lobe cavity with internal soft tissue density with surrounding fibrosis Figure 2: Left upper lobe intra-cavitary mass, with an air crescent sign Operative procedures and follow-up Double lumen Endo-tracheal intubation was used in 46(63.9%) patients. The rest, 26(36.1%) patients, were operated with single-lumen endotracheal tubes. The left lung and right lung were affected in 45(62.5%) and 27(37.5%) patients respectively. In all cases, the pleural surface was found extensively adherent to the chest wall and was difficult to dissect. The type of procedures done was Lobectomy, 32(44.4%), pneumonectomy, 12(16.7%), Bi-lobectomies, 7(9.7%), and cavernostomy, 21(29.2%). The estimated average intra-operative blood loss was 745 +/−307ml (range 300–1600ml) (Table 2). Analysis of the location of aspergilloma, affected lobe, type of endotracheal tube used, type of surgery, the estimated blood loss and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30,2014–Nov 30, 2019 Pearson Chi-Square reference group used to calculate AOR During the initial 30 days of postoperative hospital stay, 21(29.1%) patients developed one or more complications. The major postoperative complications seen were prolonged air leak, 7(9.7%), Pneumonia, 3(4.2%), empyema, 7(9.7%), wound infection, 3 (4.2%), and others, 1(1.4). Five (6.9%) patients had significant intraoperative bleeding of which one died due to exsanguinating bleeding from major vessel injury. The other two patients died due to respiratory failure and sepsis which happened after pneumonectomy and lobectomy respectively. The overall mortality within 30 days of surgery was 3(4.2%), and the rest 95.8% of patients were discharged with improvement (Table 3). 
Frequency table of intraoperative and postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019 Those patients who improved were discharged and followed for a median of 5 months (range 2 weeks to 24 months) at the surgical referral clinic. During the follow-up period, no significant addition of morbidity nor mortality was identified. Few patients were noticed to have received different antifungal treatment. Double lumen Endo-tracheal intubation was used in 46(63.9%) patients. The rest, 26(36.1%) patients, were operated with single-lumen endotracheal tubes. The left lung and right lung were affected in 45(62.5%) and 27(37.5%) patients respectively. In all cases, the pleural surface was found extensively adherent to the chest wall and was difficult to dissect. The type of procedures done was Lobectomy, 32(44.4%), pneumonectomy, 12(16.7%), Bi-lobectomies, 7(9.7%), and cavernostomy, 21(29.2%). The estimated average intra-operative blood loss was 745 +/−307ml (range 300–1600ml) (Table 2). Analysis of the location of aspergilloma, affected lobe, type of endotracheal tube used, type of surgery, the estimated blood loss and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30,2014–Nov 30, 2019 Pearson Chi-Square reference group used to calculate AOR During the initial 30 days of postoperative hospital stay, 21(29.1%) patients developed one or more complications. The major postoperative complications seen were prolonged air leak, 7(9.7%), Pneumonia, 3(4.2%), empyema, 7(9.7%), wound infection, 3 (4.2%), and others, 1(1.4). Five (6.9%) patients had significant intraoperative bleeding of which one died due to exsanguinating bleeding from major vessel injury. The other two patients died due to respiratory failure and sepsis which happened after pneumonectomy and lobectomy respectively. The overall mortality within 30 days of surgery was 3(4.2%), and the rest 95.8% of patients were discharged with improvement (Table 3). Frequency table of intraoperative and postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019 Those patients who improved were discharged and followed for a median of 5 months (range 2 weeks to 24 months) at the surgical referral clinic. During the follow-up period, no significant addition of morbidity nor mortality was identified. Few patients were noticed to have received different antifungal treatment. Factors predicting adverse outcome Analysis of the postoperative complications between the type of surgery varied between 9.1% and 40%. It was further evaluated with the chi-square test and a significant difference was detected (p-value=0.011). Patients with pneumonectomy/ lobectomy were most likely to develop complications than those with cavernostomy or Bi-lobectomy. In the bivariate logistic regression models, the affected site of the lung, type of surgery done and the presence of intraoperative complications, were identified to have a significant association with postoperative complications. Postoperative outcome (complication) was significantly affected by the location of the aspergilloma [(P=0.022; AOR=−0.297; 95%CI)], the type of procedure performed [(P=0.011; AOR=−-0.486; 95%CI)] and the presence of intraoperative complications [P=0.020; AOR=5.875; 95% CI). 
Analysis of the postoperative complications between the type of surgery varied between 9.1% and 40%. It was further evaluated with the chi-square test and a significant difference was detected (p-value=0.011). Patients with pneumonectomy/ lobectomy were most likely to develop complications than those with cavernostomy or Bi-lobectomy. In the bivariate logistic regression models, the affected site of the lung, type of surgery done and the presence of intraoperative complications, were identified to have a significant association with postoperative complications. Postoperative outcome (complication) was significantly affected by the location of the aspergilloma [(P=0.022; AOR=−0.297; 95%CI)], the type of procedure performed [(P=0.011; AOR=−-0.486; 95%CI)] and the presence of intraoperative complications [P=0.020; AOR=5.875; 95% CI). Socio-demographic profile: Out of the 72 patients included in the study, there were 46(63.9%) males and 26(36.1%) females with male-to-female ratio of 1.8:1. The mean age affected was 35.2 +/−11.6 years (range 16–65). The age group 25–49 years, 53(73.6%) was predominantly affected. Besides, the majority of patients came from urban areas, 43(59.7%), and were married, 43(59.7%) (Table 1). Analysis of demographics, Clinical features and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menilik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019 1Pearson Chi-Square, *reference group used to calculate AOR Diagnosis: The diagnosis of PA was based on clinical features and diagnostic investigations. Cough, 72(100%), and hemoptysis, 72(100%), were the primary symptoms reported in all patients. Other symptoms were pain, 50(69.4%), fever, 26(36.1%), and SOB, 36(50%). The majority of the patients had mild hemoptysis, 40(55.6%). Only 16(22.2%) had a history of severe hemoptysis. All patients were treated for pulmonary tuberculosis, and 49(68%) of them were treated for more than one time. The time between the first incidence of pulmonary tuberculosis and subsequent development of aspergilloma was variable (range 1–20 years). Concomitant illnesses identified were diabetes mellitus, 2(2.8%), and HIV infection, 2(2.8%). The vital sign was deranged in 9(12.5%) patients, and 17(23.6%) patients had Pallor (Table 1). An initial plain chest x-ray was done for 70(97.2%) patients and demonstrated a cavitary lesion with a hyperdensity within it. Subsequently, CT scan was done for all patients, and the diagnosis of PA was made by identifying air crescent sign (Figure 1,2). CT scan was also used to further characterize the lesion, to assess the involved site and lobe of the lung and to assess the concomitant disease of the lung. Almost all patients had upper lobe aspergilloma, 71(98.6%), with only one (1.4%) exception who had isolated lower lobe lesion. When the performance status of the patient was doubtful, the pulmonary function test was used in 17(23.6%) patients that help a better understanding of the general condition. For suspected malignant cavitary lesion, bronchoscopy and BAL cytology were done on 12(16.7%) patients which demonstrated fungal hyphae. Due to the lack of test reagent, the serologic examination to identify possible fungal infection was not done. During surgery, a ball of fungus was demonstrated in all patients, and histologic examination was made for only 11(15.3%) malignant suspected specimens and it demonstrated only A. fumigatus. 
Left upper lobe cavity with internal soft tissue density with surrounding fibrosis Figure 2: Left upper lobe intra-cavitary mass, with an air crescent sign Operative procedures and follow-up: Double lumen Endo-tracheal intubation was used in 46(63.9%) patients. The rest, 26(36.1%) patients, were operated with single-lumen endotracheal tubes. The left lung and right lung were affected in 45(62.5%) and 27(37.5%) patients respectively. In all cases, the pleural surface was found extensively adherent to the chest wall and was difficult to dissect. The type of procedures done was Lobectomy, 32(44.4%), pneumonectomy, 12(16.7%), Bi-lobectomies, 7(9.7%), and cavernostomy, 21(29.2%). The estimated average intra-operative blood loss was 745 +/−307ml (range 300–1600ml) (Table 2). Analysis of the location of aspergilloma, affected lobe, type of endotracheal tube used, type of surgery, the estimated blood loss and its relation to Postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30,2014–Nov 30, 2019 Pearson Chi-Square reference group used to calculate AOR During the initial 30 days of postoperative hospital stay, 21(29.1%) patients developed one or more complications. The major postoperative complications seen were prolonged air leak, 7(9.7%), Pneumonia, 3(4.2%), empyema, 7(9.7%), wound infection, 3 (4.2%), and others, 1(1.4). Five (6.9%) patients had significant intraoperative bleeding of which one died due to exsanguinating bleeding from major vessel injury. The other two patients died due to respiratory failure and sepsis which happened after pneumonectomy and lobectomy respectively. The overall mortality within 30 days of surgery was 3(4.2%), and the rest 95.8% of patients were discharged with improvement (Table 3). Frequency table of intraoperative and postoperative complications done at Tikur Anbessa Specialized Hospital and Menillik the 2nd Hospital from Nov 30, 2014 – Nov30, 2019 Those patients who improved were discharged and followed for a median of 5 months (range 2 weeks to 24 months) at the surgical referral clinic. During the follow-up period, no significant addition of morbidity nor mortality was identified. Few patients were noticed to have received different antifungal treatment. Factors predicting adverse outcome: Analysis of the postoperative complications between the type of surgery varied between 9.1% and 40%. It was further evaluated with the chi-square test and a significant difference was detected (p-value=0.011). Patients with pneumonectomy/ lobectomy were most likely to develop complications than those with cavernostomy or Bi-lobectomy. In the bivariate logistic regression models, the affected site of the lung, type of surgery done and the presence of intraoperative complications, were identified to have a significant association with postoperative complications. Postoperative outcome (complication) was significantly affected by the location of the aspergilloma [(P=0.022; AOR=−0.297; 95%CI)], the type of procedure performed [(P=0.011; AOR=−-0.486; 95%CI)] and the presence of intraoperative complications [P=0.020; AOR=5.875; 95% CI). Discussion: Pulmonary aspergilloma develops most frequently in residual tuberculosis cavities. The British Thoracic and Tuberculosis Association reported 6% of patients with a healed tuberculosis cavity developing pulmonary aspergilloma within 3 years (17). Many studies also showed pulmonary tuberculosis as the most common pre-existing cavity lesion accounting for 32–45% (28,29). 
Chen QK from China reported that 71.1% of their patients with PA were treated for tuberculosis (9). In Ethiopia, due to the high incidence of tuberculosis, it is not uncommon to see patients with PA (6). In this study, all patients were previously treated for tuberculosis, and the majority of them were placed on repeated courses of anti-tuberculosis treatment. This finding of patients being referred for surgery after being treated multiple times for tuberculosis could indicate a misguided effort to resolve PA. This observation of delayed referral and possibly inappropriate treatment of PA requires further study. Similar to other studies, males are affected more than females (9). Unlike in this study, patients with PA in developed countries were typically middle-aged. In a series of 18 patients with chronic pulmonary aspergillosis done in the United Kingdom, the median age affected was 59 years (2). According to United Nations data (April 12, 2020), the median age of the Ethiopian population is 19.5 years. A similarly higher proportion of younger patients with PA was also noticed in this study (17–19). Clinical presentation of PA ranges from an incidental radiological finding to exsanguinating hemoptysis (20). In our series, cough and hemoptysis were the primary symptoms reported in all patients. Chest pain, SOB, and fever were also reported in half of the cases. A similar study done in India reported hemoptysis in 79.16% of patients (21). Bleeding in pulmonary aspergilloma usually arises from the bronchial arteries and stops spontaneously (22,23). However, when the cavity erodes into an intercostal vessel, the hemoptysis is severe and is unlikely to stop. All fungal balls in this series were found in the upper lobe. This is because the healed tuberculosis cavities in the upper lobe serve as a good nidus for colonization with A. fumigatus. Many studies reported a similar association (range 13%–89%) (21–23). Even though HIV infection is a common problem in our society, 73.6% of the patients in this study were not tested for HIV infection. Among those tested, only 2/19 tested positive. Similar studies showed that since the introduction of potent ART, the incidence of aspergilloma in HIV-infected patients is low. In a review of 342 cases of pulmonary aspergillosis and invasive disease with AIDS, only 14 patients were diagnosed with aspergillomas (12). In this study, a low association of PA with other diseases such as diabetes mellitus was also identified. The surgical management of pulmonary aspergilloma is challenging and controversial. No double-blind, placebo-controlled or randomized trials have been undertaken. According to Shakil et al (24), among the 30 patients operated on for PA, 23 were symptomatic and were treated with preoperative antifungal therapy. In this study, although all patients were symptomatic, they were operated on without receiving preoperative antifungal treatment. In 2008, a guideline for the treatment of aspergillosis was developed by the Infectious Diseases Society of America. The guideline recommends an approach to therapy by distinguishing ‘simple aspergilloma’ from the more complex forms of ‘chronic cavitary pulmonary aspergillosis’ and ‘chronic fibrosing pulmonary aspergillosis’ (13). Nevertheless, this kind of diagnostic and therapeutic approach was not followed in the management of our patients. Surgical treatment for simple aspergilloma is done to prevent or treat potentially life-threatening complications and is usually curative (14,15).
The role of antifungal treatment for simple aspergilloma is controversial. Many patients, such as those who are asymptomatic and have stable radiographic findings over many months, may not require therapy (14). However, if surgery is indicated for symptomatic PA, the use of pre- and post-operative antifungal therapy (usually voriconazole) is recommended. This is because, during the operation, most cavities will inadvertently be opened, leading to spillage, which increases the risk of postoperative aspergillosis (14,25,26). The use of postoperative antifungal treatment in this study was found to be inconsistent. Hence, based on the current recommendation, a protocol on the use of antifungal treatment before and after surgery needs to be developed. In contrast to patients with simple aspergilloma, those with chronic cavitary pulmonary aspergillosis and chronic fibrosing pulmonary aspergillosis require life-long antifungal therapy, a practice that was not reported in our case management (12–14). The evidence regarding the efficacy of long-term antifungal therapy for such diseases was based on small case series and open-label non-comparative studies. According to those studies, itraconazole and voriconazole are the preferred oral agents. They also reported that discontinuation of therapy could lead to a gradual return of symptoms, manifested by worsening chest radiography findings and rising Aspergillus levels (14–16). Surgical outcomes for this category of diseases were not as good as those for simple aspergillomas (12,13). In general, especially for the treatment of massive hemoptysis, surgical resection remains the mainstay of treatment for PA. However, there is no consensus on the extent of lung resection. In this study, patients with pneumonectomy/lobectomy did not have favorable outcomes compared with the cavernostomy/bi-lobectomy group. This finding is consistent with previous studies (9,19). Similarly, Shirakusa et al. advocate a limited lung resection (27). However, studies showed that radical resection of the affected areas had effectively improved patient outcomes. Because of this argument, many case series advocate standard thoracotomy and lobectomy as the preferred surgical procedures (7–11,18–23). In this series, lobectomy was done for 44.4% of patients. Studies recommend pneumonectomy only when the affected lung is destroyed or the remaining lobe is severely fibrotic and too small to expand and fill the chest cavity (7). Cavernostomy was done primarily for either a peripherally placed small PA or when the fungal ball lay close to the fissure or hilum and pneumonectomy was not an option because of the patient’s general condition. Similar to other studies (19–21, 28–32), the postoperative complications related to cavernostomy were significantly lower (P=0.011). Cavernostomy is now considered a viable treatment alternative even for patients who could undergo pulmonary resection. We investigated the risk factors for postoperative complications and the short-term outcomes of patients with PA. Among the study population, 21(29.1%) of the patients developed postoperative complications. At risk-adjusted analysis, socio-demographic characteristics and clinical presentations showed no effect on the incidence of adverse events. However, the affected side of the lung, the type of surgery done, and the presence of intraoperative complications were revealed as independent predictors of postoperative complications.
The reason for the higher postoperative complication rate in right lung surgery is unclear. Previous studies reported operative mortality rates ranging from 4 to 22% (34–37) and postoperative complication rates of 15–78% (28); with current surgical and anesthetic advances, recent reports show improved outcomes (34). Several such series describe acceptable outcomes, with 2 to 5% mortality and a 25% overall complication rate (28–35). Overall, the results of this study are comparable to other recent results. Some of the limitations of this study include the following. First, the size of the study population is relatively small. Second, because of the study design (retrospective cross-sectional), selection bias toward more symptomatic and referred patients over asymptomatic, non-referred patients is expected. Third, bronchoscopy and cytology were done for only 16.7% of patients. The diagnosis was primarily based on the radiologic and intraoperative findings. Hence, some diseases such as actinomycosis, nocardiosis, intracavitary hematoma, and adenocarcinoma could mimic PA (32). Fourth, because of the extended time period between the first incidence of tuberculosis and the subsequent development of aspergilloma, it is difficult to ascertain the exact interval, which may result in recall bias. Fifth, the duration of follow-up was relatively short, and the long-term outcome of surgery is not known. Therefore, a well-designed prospective cohort study is recommended to confirm the association between different variables and the immediate and long-term adverse outcomes. Finally, the study represents one of the largest surgical experiences in Sub-Saharan countries with a resource-limited setup. An outcome comparable with those of many international studies done in developed countries was achieved. It is, therefore, logical to recommend operative resection of the involved area of the lung, provided a thorough preoperative assessment and a careful evaluation of the risk/benefit ratio are made. In conclusion, since pulmonary aspergilloma represents a spectrum of different disease stages, and the risks and benefits of medical and surgical therapy vary with the manifestations of the disease and the patient's pulmonary status, the approach to therapy should be individualized. To improve our patients’ outcomes, treating aspergilloma patients in line with the 2008 Infectious Diseases Society of America guidelines is recommended (13).
Background: Surgical management of pulmonary aspergillosis is challenging and controversial. This study was designed to assess the clinical profile, indications and surgical outcome of pulmonary aspergilloma. Methods: A retrospective cross-sectional analysis of 72 patients who underwent pulmonary resection for pulmonary aspergilloma over the period from November 2014 to November 2019 was done. Data on demographics, clinical features and surgical outcome were retrieved. Analysis was done using SPSS version 23. The chi-square test was used to assess the significance of the association between variables and surgical outcome. Results: There were 46(63.9%) male and 26(36.1%) female patients with a mean age of 35.2 ± 11.6 years (range 16–65 years). All patients were previously treated for tuberculosis. Cough, hemoptysis, and shortness of breath were the main symptoms identified. A ball of fungus together with the surrounding lung was removed. Accordingly, 32(44.4%) lobectomies, 12(16.7%) pneumonectomies, 7(9.7%) bi-lobectomies, and 21(29.2%) cavernostomies were done. Intraoperative and postoperative complications were seen in 8(11.1%) and 21(29.1%) patients, respectively. Major morbidities encountered included massive intraoperative blood loss, prolonged air leak, empyema, air space, bronchopleural fistula, and wound infection. The hospital mortality was 3(4.2%) and the average hospital stay was 14.8 days. Postoperative complications were evaluated for differences across socio-demographic characteristics and other variables, and a statistically significant difference was detected only for the location of the aspergilloma, the site of the lung involved and the type of surgery done (P-value = 0.05). Conclusions: Pulmonary resection done for pulmonary aspergilloma showed favorable outcomes when done with good patient selection, meticulous surgical techniques, and good postoperative management. However, its long-term outcome and the role of antifungal treatment as adjunctive therapy for surgical resection need further investigation.
Introduction: Pulmonary aspergilloma (PA) is a ball of fungus that is composed of Aspergillus hyphae, fibrin, mucus, and cellular debris within a pulmonary cavity (1). It usually arises within a preexisting pulmonary cavity that has become colonized with Aspergillus spp (2). The most common underlying pathologies that predispose to PA include pulmonary tuberculosis, chronic obstructive pulmonary disease (COPD), prior pneumothorax with associated bulla and fibrocavitary sarcoidosis (3). Pulmonary cavities of ≥2 cm which are left after the treatment of pulmonary tuberculosis have about a 20% chance of developing PA (4). Studies done using modeling to estimate the global burden of PA as a sequel of pulmonary tuberculosis showed a range from ≤1 case/100,000 population in western Europe and the USA to 42.9/100,000 population in the Democratic Republic of Congo and Nigeria (5). Reflecting the high frequency of pulmonary tuberculosis, Ethiopia has an estimated rate of 14.5 cases/100,000 population (6). The natural history of PA has been poorly studied. So far, there appears to be no consistent variable that helps to predict its outcome (7). The course of the disease is extremely variable, ranging from spontaneous lysis (7–10%) to severe hemoptysis (8). In up to 30% of patients, minor hemoptysis can go on to develop into life-threatening hemoptysis (9). Besides, the severity of hemoptysis is not related to the size or number of aspergillomas, nor to the underlying lung disease (10). At present, there is considerable controversy about the optimal treatment for PA, primarily because the natural history of the lesion is not well defined. Moreover, high morbidity and mortality have been reported in some surgical series (11–13). However, the current globally accepted mainstay of treatment is surgery, and medical options have shown a limited role. In light of the newer available anti-fungal agents, treatment with drugs, though disappointing so far, requires further investigation. The purpose of this study was to evaluate the results of pulmonary resection done for 72 consecutive patients with PA at Tikur Anbessa Specialized Hospital (TASH) and Menillik 2nd Hospital. Discussion: Pulmonary aspergilloma develops most frequently in residual tuberculosis cavities. The British Thoracic and Tuberculosis Association reported 6% of patients with a healed tuberculosis cavity developing pulmonary aspergilloma within 3 years (17). Many studies also showed pulmonary tuberculosis as the most common pre-existing cavitary lesion, accounting for 32–45% (28,29). Chen QK from China reported that 71.1% of their patients with PA were treated for tuberculosis (9). In Ethiopia, due to the high incidence of tuberculosis, it is not uncommon to see patients with PA (6). In this study, all patients were previously treated for tuberculosis, and the majority of them were placed on repeated courses of anti-tuberculosis treatment. This finding of patients being referred for surgery after being treated multiple times for tuberculosis could indicate a misguided effort to resolve PA. This observation of delayed referral and possibly inappropriate treatment of PA requires further study. Similar to other studies, males are affected more than females (9). Unlike in this study, patients with PA in developed countries were typically middle-aged. In a series of 18 patients with chronic pulmonary aspergillosis done in the United Kingdom, the median age affected was 59 years (2).
According to United Nations data (April 12, 2020), the median age of the Ethiopian population is 19.5 years. A similarly higher proportion of younger patients with PA was also noticed in this study (17–19). Clinical presentation of PA ranges from an incidental radiological finding to exsanguinating hemoptysis (20). In our series, cough and hemoptysis were the primary symptoms reported in all patients. Chest pain, SOB, and fever were also reported in half of the cases. A similar study done in India reported hemoptysis in 79.16% of patients (21). Bleeding in pulmonary aspergilloma usually arises from the bronchial arteries and stops spontaneously (22,23). However, when the cavity erodes into an intercostal vessel, the hemoptysis is severe and is unlikely to stop. All fungal balls in this series were found in the upper lobe. This is because the healed tuberculosis cavities in the upper lobe serve as a good nidus for colonization with A. fumigatus. Many studies reported a similar association (range 13%–89%) (21–23). Even though HIV infection is a common problem in our society, 73.6% of the patients in this study were not tested for HIV infection. Among those tested, only 2/19 tested positive. Similar studies showed that since the introduction of potent ART, the incidence of aspergilloma in HIV-infected patients is low. In a review of 342 cases of pulmonary aspergillosis and invasive disease with AIDS, only 14 patients were diagnosed with aspergillomas (12). In this study, a low association of PA with other diseases such as diabetes mellitus was also identified. The surgical management of pulmonary aspergilloma is challenging and controversial. No double-blind, placebo-controlled or randomized trials have been undertaken. According to Shakil et al (24), among the 30 patients operated on for PA, 23 were symptomatic and were treated with preoperative antifungal therapy. In this study, although all patients were symptomatic, they were operated on without receiving preoperative antifungal treatment. In 2008, a guideline for the treatment of aspergillosis was developed by the Infectious Diseases Society of America. The guideline recommends an approach to therapy by distinguishing ‘simple aspergilloma’ from the more complex forms of ‘chronic cavitary pulmonary aspergillosis’ and ‘chronic fibrosing pulmonary aspergillosis’ (13). Nevertheless, this kind of diagnostic and therapeutic approach was not followed in the management of our patients. Surgical treatment for simple aspergilloma is done to prevent or treat potentially life-threatening complications and is usually curative (14,15). The role of antifungal treatment for simple aspergilloma is controversial. Many patients, such as those who are asymptomatic and have stable radiographic findings over many months, may not require therapy (14). However, if surgery is indicated for symptomatic PA, the use of pre- and post-operative antifungal therapy (usually voriconazole) is recommended. This is because, during the operation, most cavities will inadvertently be opened, leading to spillage, which increases the risk of postoperative aspergillosis (14,25,26). The use of postoperative antifungal treatment in this study was found to be inconsistent. Hence, based on the current recommendation, a protocol on the use of antifungal treatment before and after surgery needs to be developed.
In contrast to patients with simple aspergilloma, those with chronic cavitary pulmonary aspergillosis and chronic fibrosing pulmonary aspergillosis require life-long antifungal therapy, a practice that was not reported in our case management (12–14). The evidence regarding the efficacy of long-term antifungal therapy for such diseases was based on small case series and open-label non-comparative studies. According to those studies, itraconazole and voriconazole are the preferred oral agents. They also reported that discontinuation of therapy could lead to a gradual return of symptoms, manifested by worsening chest radiography findings and rising Aspergillus levels (14–16). Surgical outcomes for this category of diseases were not as good as those for simple aspergillomas (12,13). In general, especially for the treatment of massive hemoptysis, surgical resection remains the mainstay of treatment for PA. However, there is no consensus on the extent of lung resection. In this study, patients with pneumonectomy/lobectomy did not have favorable outcomes compared with the cavernostomy/bi-lobectomy group. This finding is consistent with previous studies (9,19). Similarly, Shirakusa et al. advocate a limited lung resection (27). However, studies showed that radical resection of the affected areas had effectively improved patient outcomes. Because of this argument, many case series advocate standard thoracotomy and lobectomy as the preferred surgical procedures (7–11,18–23). In this series, lobectomy was done for 44.4% of patients. Studies recommend pneumonectomy only when the affected lung is destroyed or the remaining lobe is severely fibrotic and too small to expand and fill the chest cavity (7). Cavernostomy was done primarily for either a peripherally placed small PA or when the fungal ball lay close to the fissure or hilum and pneumonectomy was not an option because of the patient’s general condition. Similar to other studies (19–21, 28–32), the postoperative complications related to cavernostomy were significantly lower (P=0.011). Cavernostomy is now considered a viable treatment alternative even for patients who could undergo pulmonary resection. We investigated the risk factors for postoperative complications and the short-term outcomes of patients with PA. Among the study population, 21(29.1%) of the patients developed postoperative complications. At risk-adjusted analysis, socio-demographic characteristics and clinical presentations showed no effect on the incidence of adverse events. However, the affected side of the lung, the type of surgery done, and the presence of intraoperative complications were revealed as independent predictors of postoperative complications. The reason for the higher postoperative complication rate in right lung surgery is unclear. Previous studies reported operative mortality rates ranging from 4 to 22% (34–37) and postoperative complication rates of 15–78% (28); with current surgical and anesthetic advances, recent reports show improved outcomes (34). Several such series describe acceptable outcomes, with 2 to 5% mortality and a 25% overall complication rate (28–35). Overall, the results of this study are comparable to other recent results. Some of the limitations of this study include the following. First, the size of the study population is relatively small.
Second, because of the study design (retrospective cross-sectional), selection bias toward more symptomatic and referred patients over asymptomatic, non-referred patients is expected. Third, bronchoscopy and cytology were done for only 16.7% of patients. The diagnosis was primarily based on the radiologic and intraoperative findings. Hence, some diseases such as actinomycosis, nocardiosis, intracavitary hematoma, and adenocarcinoma could mimic PA (32). Fourth, because of the extended time period between the first incidence of tuberculosis and the subsequent development of aspergilloma, it is difficult to ascertain the exact interval, which may result in recall bias. Fifth, the duration of follow-up was relatively short, and the long-term outcome of surgery is not known. Therefore, a well-designed prospective cohort study is recommended to confirm the association between different variables and the immediate and long-term adverse outcomes. Finally, the study represents one of the largest surgical experiences in Sub-Saharan countries with a resource-limited setup. An outcome comparable with those of many international studies done in developed countries was achieved. It is, therefore, logical to recommend operative resection of the involved area of the lung, provided a thorough preoperative assessment and a careful evaluation of the risk/benefit ratio are made. In conclusion, since pulmonary aspergilloma represents a spectrum of different disease stages, and the risks and benefits of medical and surgical therapy vary with the manifestations of the disease and the patient's pulmonary status, the approach to therapy should be individualized. To improve our patients’ outcomes, treating aspergilloma patients in line with the 2008 Infectious Diseases Society of America guidelines is recommended (13).
Background: Surgical management of pulmonary aspergillosis is challenging and controversial. This study was designed to assess the clinical profile, indications and surgical outcome of pulmonary aspergilloma. Methods: A retrospective cross-sectional analysis of 72 patients who underwent pulmonary resection for pulmonary aspergilloma over the period from November 2014 to November 2019 was done. Data on demographics, clinical features and surgical outcome were retrieved. Analysis was done using SPSS version 23. The chi-square test was used to assess the significance of the association between variables and surgical outcome. Results: There were 46(63.9%) male and 26(36.1%) female patients with a mean age of 35.2 ± 11.6 years (range 16–65 years). All patients were previously treated for tuberculosis. Cough, hemoptysis, and shortness of breath were the main symptoms identified. A ball of fungus together with the surrounding lung was removed. Accordingly, 32(44.4%) lobectomies, 12(16.7%) pneumonectomies, 7(9.7%) bi-lobectomies, and 21(29.2%) cavernostomies were done. Intraoperative and postoperative complications were seen in 8(11.1%) and 21(29.1%) patients, respectively. Major morbidities encountered included massive intraoperative blood loss, prolonged air leak, empyema, air space, bronchopleural fistula, and wound infection. The hospital mortality was 3(4.2%) and the average hospital stay was 14.8 days. Postoperative complications were evaluated for differences across socio-demographic characteristics and other variables, and a statistically significant difference was detected only for the location of the aspergilloma, the site of the lung involved and the type of surgery done (P-value = 0.05). Conclusions: Pulmonary resection done for pulmonary aspergilloma showed favorable outcomes when done with good patient selection, meticulous surgical techniques, and good postoperative management. However, its long-term outcome and the role of antifungal treatment as adjunctive therapy for surgical resection need further investigation.
6,235
352
[ 130, 416, 405, 149, 1766 ]
8
[ "patients", "complications", "pulmonary", "postoperative", "pa", "surgery", "lung", "postoperative complications", "affected", "hospital" ]
[ "pulmonary aspergillosis require", "cases pulmonary aspergillosis", "developing pulmonary aspergilloma", "pulmonary aspergilloma usually", "pulmonary aspergilloma pa" ]
[CONTENT] Pulmonary Aspergilloma | Cavernostomy | Hemoptysis [SUMMARY]
[CONTENT] Pulmonary Aspergilloma | Cavernostomy | Hemoptysis [SUMMARY]
[CONTENT] Pulmonary Aspergilloma | Cavernostomy | Hemoptysis [SUMMARY]
[CONTENT] Pulmonary Aspergilloma | Cavernostomy | Hemoptysis [SUMMARY]
[CONTENT] Pulmonary Aspergilloma | Cavernostomy | Hemoptysis [SUMMARY]
[CONTENT] Pulmonary Aspergilloma | Cavernostomy | Hemoptysis [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cross-Sectional Studies | Ethiopia | Female | Humans | Lung | Male | Middle Aged | Pulmonary Aspergillosis | Retrospective Studies | Tertiary Care Centers | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cross-Sectional Studies | Ethiopia | Female | Humans | Lung | Male | Middle Aged | Pulmonary Aspergillosis | Retrospective Studies | Tertiary Care Centers | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cross-Sectional Studies | Ethiopia | Female | Humans | Lung | Male | Middle Aged | Pulmonary Aspergillosis | Retrospective Studies | Tertiary Care Centers | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cross-Sectional Studies | Ethiopia | Female | Humans | Lung | Male | Middle Aged | Pulmonary Aspergillosis | Retrospective Studies | Tertiary Care Centers | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cross-Sectional Studies | Ethiopia | Female | Humans | Lung | Male | Middle Aged | Pulmonary Aspergillosis | Retrospective Studies | Tertiary Care Centers | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Cross-Sectional Studies | Ethiopia | Female | Humans | Lung | Male | Middle Aged | Pulmonary Aspergillosis | Retrospective Studies | Tertiary Care Centers | Treatment Outcome | Young Adult [SUMMARY]
[CONTENT] pulmonary aspergillosis require | cases pulmonary aspergillosis | developing pulmonary aspergilloma | pulmonary aspergilloma usually | pulmonary aspergilloma pa [SUMMARY]
[CONTENT] pulmonary aspergillosis require | cases pulmonary aspergillosis | developing pulmonary aspergilloma | pulmonary aspergilloma usually | pulmonary aspergilloma pa [SUMMARY]
[CONTENT] pulmonary aspergillosis require | cases pulmonary aspergillosis | developing pulmonary aspergilloma | pulmonary aspergilloma usually | pulmonary aspergilloma pa [SUMMARY]
[CONTENT] pulmonary aspergillosis require | cases pulmonary aspergillosis | developing pulmonary aspergilloma | pulmonary aspergilloma usually | pulmonary aspergilloma pa [SUMMARY]
[CONTENT] pulmonary aspergillosis require | cases pulmonary aspergillosis | developing pulmonary aspergilloma | pulmonary aspergilloma usually | pulmonary aspergilloma pa [SUMMARY]
[CONTENT] pulmonary aspergillosis require | cases pulmonary aspergillosis | developing pulmonary aspergilloma | pulmonary aspergilloma usually | pulmonary aspergilloma pa [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | surgery | lung | postoperative complications | affected | hospital [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | surgery | lung | postoperative complications | affected | hospital [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | surgery | lung | postoperative complications | affected | hospital [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | surgery | lung | postoperative complications | affected | hospital [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | surgery | lung | postoperative complications | affected | hospital [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | surgery | lung | postoperative complications | affected | hospital [SUMMARY]
[CONTENT] pulmonary | pa | 000 | 100 000 | tuberculosis | pulmonary tuberculosis | hemoptysis | 100 | aspergillus | natural [SUMMARY]
[CONTENT] patients | pulmonary | variables | data | hospitals | tests | surgical | treatment | surgery | pa [SUMMARY]
[CONTENT] patients | complications | postoperative | hospital | lobe | type | demonstrated | table | aor | affected [SUMMARY]
[CONTENT] patients | study | studies | aspergillosis | therapy | pulmonary | pa | treatment | tuberculosis | pulmonary aspergillosis [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | hospital | affected | postoperative complications | surgery | type [SUMMARY]
[CONTENT] patients | complications | pulmonary | postoperative | pa | hospital | affected | postoperative complications | surgery | type [SUMMARY]
[CONTENT] ||| Pulmonary [SUMMARY]
[CONTENT] 72 | the period | November, 2014 | November, 2019 ||| ||| SPSS | 23 ||| Chi-square [SUMMARY]
[CONTENT] 46(63.9% | 26(36.1% | age of 35.2+/-11.6 years | Range 16- 65 years ||| ||| ||| ||| 32(44,4% | 12(16.7% | 7(9.7% | 21(29.2% ||| 8(11.1% | 21(29.1% ||| ||| 3(4.2% | 14.8days ||| ||| 0.05 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| Pulmonary ||| 72 | the period | November, 2014 | November, 2019 ||| ||| SPSS | 23 ||| Chi-square ||| 46(63.9% | 26(36.1% | age of 35.2+/-11.6 years | Range 16- 65 years ||| ||| ||| ||| 32(44,4% | 12(16.7% | 7(9.7% | 21(29.2% ||| 8(11.1% | 21(29.1% ||| ||| 3(4.2% | 14.8days ||| ||| 0.05 ||| ||| [SUMMARY]
[CONTENT] ||| Pulmonary ||| 72 | the period | November, 2014 | November, 2019 ||| ||| SPSS | 23 ||| Chi-square ||| 46(63.9% | 26(36.1% | age of 35.2+/-11.6 years | Range 16- 65 years ||| ||| ||| ||| 32(44,4% | 12(16.7% | 7(9.7% | 21(29.2% ||| 8(11.1% | 21(29.1% ||| ||| 3(4.2% | 14.8days ||| ||| 0.05 ||| ||| [SUMMARY]
Impact of weight reduction on selected immune system response among Hepatitis C virus Saudi patients.
30602969
Recently, about 2.35% of the world population is estimated to be chronically infected with hepatitis C virus (HCV). Previous cohort studies indicated that obesity increases the risk of hepatic steatosis and fibrosis in non-diabetic patients with chronic hepatitis C infection due to a diminished response to anti-viral therapy; as a result, obesity is considered an important factor in the progression of chronic HCV. Moreover, there is a strong association between BMI and the human immune system among HCV patients.
BACKGROUND
One hundred obese Saudi patients with chronic HCV infection participated in this study; their age ranged from 50 to 58 years and their body mass index (BMI) ranged from 30 to 35 kg/m2. All subjects were divided into two groups: the first group received a weight reduction program in the form of treadmill aerobic exercises in addition to diet control, whereas the second group received no therapeutic intervention. CD3, CD4 and CD8 parameters were quantified; leukocyte and differential counts and BMI were measured before and after 3 months, at the end of the study.
MATERIAL AND METHODS
The mean values of BMI, white blood cells, total neutrophil count, monocytes, CD3, CD4 and CD8 were significantly decreased in the training group as a result of the weight loss program; however, the results of the control group were not significant. Also, there were significant differences between both groups at the end of the study.
RESULTS
Weight loss modulates immune system parameters of patients with HCV.
CONCLUSION
[ "Adult", "Antigens, CD", "Body Mass Index", "Cohort Studies", "Female", "Hepacivirus", "Hepatitis C, Chronic", "Humans", "Immune System", "Leukocytes", "Male", "Middle Aged", "Obesity", "Program Evaluation", "Saudi Arabia", "Weight Loss", "Weight Reduction Programs" ]
6306970
Introduction
Globally, an estimated 180 million people are chronically infected with HCV and 3 to 4 million are newly infected each year1,2. Hepatitis C virus (HCV) infection is one of the main causes of chronic liver disease worldwide3, and persistent infection occurs in 50 to 80% of those infected and may lead to the development of cirrhosis and subsequent hepatocellular carcinoma1. The progression of HCV involves changes in the cellular immunity of those affected4,5. Some studies have indicated that the cellular immunity of HCV patients undergoes alterations, leading to poor immunological responses or dysfunctions6–8. Obesity has also been associated with decreased immunocompetence, as it alters innate and adaptive immunity, and immunity deterioration is related to the grade of obesity9. Moreover, impaired immune responses have also been suggested to occur in obese humans. Studies indicated that the incidence and severity of certain infections are higher in obese individuals when compared to lean people10,11. Retrospective and prospective studies showed obesity to be an independent risk factor for infection after trauma12–14. In a prospective cohort study of critically ill trauma patients, obese patients had a more than twofold increased risk of acquiring infection12. Also, Renehan et al. demonstrated an association of obesity with 25–40% of certain malignancies in both obese men and women15. Many authors reported dysregulation and alteration in the number of immune cells in obese subjects. Obese subjects showed either increased or decreased total lymphocytes in peripheral blood populations16–19 and had a decreased CD8+ T cell population along with increased or decreased CD4+ T cells18,19. Moreover, many previous studies, such as that of Moulin et al., showed that obesity is associated with the modulation of immune parameters23, elevated numbers of circulating immune cells such as neutrophils, monocytes, leukocytes and total white blood cells (WBC)16,24, as well as elevated activation levels of certain WBC and suppressed immune cell function17. Also, several authors have reported a chronic inflammation status in individuals with higher BMI25–27, which was associated with elevated amounts of white blood cells, neutrophils, and monocytes in the blood of all participants with BMI higher than that of the control group28. Also, Kintscher et al. observed an increased number of CD3 and CD4 lymphocytes in the peripheral blood of obese women, correlating with body mass index (BMI)29. Finally, Antuna-Puente et al. found that BMI is positively correlated with the number of macrophages in adipose tissue30. As there are limited studies reporting the benefits of lifestyle modification on immune system response among hepatitis C virus Saudi patients, this study aimed to examine the effects of a weight reduction program on selected immune parameters among HCV Saudi patients.
null
null
Results
The demographic and clinical characteristics of the subjects are shown in Table 1. There were no significant differences between both groups regarding age, height, albumin, fasting blood glucose, hemoglobin, total bilirubin, systolic blood pressure, diastolic blood pressure, body weight, body mass index (BMI), waist circumference, fat mass, alanine aminotransferase (ALT) and HCV viral load. Comparison of clinical data between HCV patients in both groups. BMI: Body Mass Index; Hb: Hemoglobin; FPG: Fasting Blood Glucose; ALT: Alanine aminotransferase; SBP: Systolic blood pressure; DBP: Diastolic blood pressure. The mean values of BMI, white blood cells, total neutrophil count, monocytes, CD3, CD4 and CD8 were significantly decreased in the training group as a result of the weight loss program (Table 2 and Figure 2); however, the results of the control group were not significant (Table 3 and Figure 3). Also, there were significant differences between both groups at the end of the study (Table 4 and Figure 4). Mean value and significance of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) before and at the end of the study. BMI = Body Mass Index; * indicates a significant difference between the two groups, P < 0.05. Mean value of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) before and at the end of the study. Mean value and significance of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (B) before and at the end of the study. BMI = Body Mass Index; * indicates a significant difference between the two groups, P < 0.05. Mean value of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (B) before and at the end of the study. Mean value and significance of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) and group (B) at the end of the study. BMI = Body Mass Index; * indicates a significant difference between the two groups, P < 0.05. Mean value of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) and group (B) at the end of the study.
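A minimal sketch of the comparisons reported above, assuming paired t-tests for within-group change and an independent t-test between groups at the end of the study (matching the statistical analysis described in the methods). The CD4 counts below are hypothetical stand-ins, not the study data.

```python
# Illustrative sketch only: hypothetical CD4 counts standing in for the
# study's before/after (paired) and between-group (independent) comparisons.
from scipy.stats import ttest_rel, ttest_ind

cd4_before = [812, 905, 790, 1020, 860, 930]  # training group, baseline
cd4_after  = [745, 830, 710,  940, 800, 855]  # training group, after 3 months
cd4_ctrl   = [820, 910, 905,  990, 870, 940]  # control group, after 3 months

t_w, p_w = ttest_rel(cd4_before, cd4_after)   # within-group change
t_b, p_b = ttest_ind(cd4_after, cd4_ctrl)     # end-of-study group difference
print(f"within training group: t = {t_w:.2f}, p = {p_w:.4f}")
print(f"between groups:        t = {t_b:.2f}, p = {p_b:.4f}")
# A result is taken as significant at P < 0.05, as in the study.
```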
Conclusion
The current study provides evidence that weight loss modulates immune system parameters of patients with HCV.
[ "Subjects", "Measurements", "Procedures", "Statistical analysis" ]
[ "One hundred non-hypertensive, non-cirrhotic Saudi patients with chronic HCV infection;their age ranged from 50 to 38 (41.63 ± 4.57) years, were selected on referral to Gastroenterology and Hepatology Department, King Abdulaziz University Teaching Hospital, Saudi Arabia. All participants were anti HCV positive by enzyme-linked immunosorbent assay (ELISA). Exclusion criteria included other potential causes of liver disease, such as alcoholism or autoimmune phenomena. Only patients diagnosed with chronic HCV mono-infection and had anti HCV antibodies by ELISA were selected to undergo Real-Time polymerase chain reaction (RT-PCR) and were treated with combined pegylatedinterferon--alfa (PEG-IFNα)-ribavirin therapy. All participants were free to withdraw from the study at any time. Patients were divided in to two equal groups: Group (A): received weight reduction program in the form of treadmill aerobic exercises in addition to diet control, Whereas group (B): received no therapeutic intervention. The CONSORT diagram outlining the details of the screening, run-in and randomization phases of the study and reasons for participant exclusion illustrated in figure (1). Informed consent was obtained from all participants. This study was approved by the Scientific Research Ethical Committee, Faculty of Applied Medical Sciences at King Abdulaziz University.\nSubjects screening and recruitment CONSORT diagram.", "Clinical and laboratory analysis were performed by independent assessors who were blinded to group assignment and not involved in the routine treatment of the patients, however the following measurements were taken before the study and after 3 months at the end of the study:\nA. Real-Time polymerase chain reaction (RT-PCR): Ten milliliter blood samples were collected from each participant at study entry. The blood samples were obtained using disposable needles and heparinized vacuum syringes and stored at −70°C until assayed. Serum samples of all participants were tested for Real-Time polymerase chain reaction (RT-PCR) to detect serum HCV RNA levels by polymerase chain reaction using the COBAS TaqMan HCV test, v2.0 (Roche Diagnostics, Indianapolis, NJ, USA).\nB. Analysis of peripheral blood cells: The analysis of peripheral blood cells (e.g., total and differential count) was performed on a Beckman Coulter AcT 5diff hematology analyzer. The values are expressed in percentages and absolute numbers.\nC. Flow cytometry analysis: The human leukocyte differentiation antigens CD3, CD4 and CD8 (Beckman Coulter, Marseille, France) Five microliters of appropriate monoclonal antibody was added to 50 µL of a wholeblood sample and incubated for 15 minutes at room temperature. Thereafter, the erythrocytes were lysed with 125 µL of a lysing solution, OptiLyse C, for 10 minutes. The reaction was stopped by the addition of 250 µL phosphate-buffered saline.The samples were analyzed by flow cytometry using Cytomics FC 500 and CXP software (Beckman Coulter).The leukocyte subsets were defined by forward- and side-scatter pattern. The negative control value was determined by a fluorescence background and antibody-nonspecific staining.\nD. Body mass index (BMI): Weight and height scale (Metrotype -England) was used to measure weight and height to calculate the body mass index (BMI). Body mass index was calculated by dividing the weight in kilograms by the square of the height in meters (Kg/m2). 
According to the WHO classification, a BMI of <18.5 kg/m2 is under weight, 18.5–24.9 kg/m2 is normal 25–29.9 kg/m2 is overweight. A BMI of > 30 kg/m2 is classified as obese and this group is further divided into moderate obesity (30–34.9 kg/m2), severe obesity (35–39.9 kg/m2) and very severe obesity (≤40 kg /m2)31.", "Following the previous evaluation , all patients were divided randomly into the following groups:\n1. The training group (Group A): These were submitted to aerobic exercise training to complete a 12-week treadmill aerobic exercise (Enraf Nonium, Model display panel Standard, NR 1475.801, Holland) which was conducted according to recommendation of aerobic exercise application approved by the American College of Sports Medicine32. Training program included 5 minutes for warming-up in the form of range motion and stretching exercises, 30 minutes of aerobic exercise training with intensity equal 60–70% of the individual maximum heart rate followed by cooling down for 10 minutes ( on treadmill with low speed and without inclination). Participants had 3 sessions /week for 3 months with close supervision of physical therapist. Also, a dietician performed an interview-based food survey for all participants of group (A) for detection of feeding habits, abnormal dietary behavior and to prescribe the balanced low caloric diet33 that provided 1200 Kilocalories/day for 12 weeks. The same dietitian continuously monitored all participant caloric intakes through reviewing the detailed record of food intake every 2 weeks by the dietitian34,35.\n2. The control group (Group B): received no intervention.", "The mean values of the investigated parameters obtained before and after three months in both groups were compared using paired “t” test. Independent “t” test was used for the comparison between the two groups (P<0.05)." ]
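A minimal sketch of the BMI formula and the WHO cut-offs described in the measurements above; the example weight and height are hypothetical, not taken from the study.

```python
# Minimal sketch of the BMI calculation and the WHO categories quoted above.
def bmi(weight_kg: float, height_m: float) -> float:
    # BMI = weight in kilograms divided by the square of the height in meters.
    return weight_kg / height_m ** 2

def who_category(b: float) -> str:
    if b < 18.5: return "underweight"
    if b < 25.0: return "normal"
    if b < 30.0: return "overweight"
    if b < 35.0: return "moderate obesity"
    if b < 40.0: return "severe obesity"
    return "very severe obesity"

# Hypothetical example within the study's 30-35 kg/m2 inclusion band:
b = bmi(95, 1.70)
print(f"BMI = {b:.1f} kg/m2 -> {who_category(b)}")  # BMI = 32.9 kg/m2 -> moderate obesity
```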
[ null, null, null, null ]
[ "Introduction", "Patients and methods", "Subjects", "Measurements", "Procedures", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "Globally, an estimated 180 million people are chronically infected with HCV and 3 to 4 million are newly infected each year1,2. Hepatitis C virus (HCV) infection is one of the main causes of chronic liver disease worldwide3 and persistent infection occurs in 50 to 80% of those infected and may lead to the development of cirrhosis and subsequent hepatocellular carcinoma1.\nThe progression of HCV involves changes in the cellular immunity of those affected4,5. Some studies have indicated that the cellular immunity of HCV patients undergoes alterations, leading to poor immunological responses or dysfunctions6–8.\nObesity has also been associated with decreased immunocompetence as it alters innate and adaptive immunity and immunity deterioration is related to the grade of obesity9. Moreover, impaired immune responses have also been suggested to occur in obese humans. Studies indicated that the incidence and severity of certain infections are higher in obese individuals when compared to lean people10,11. Retrospective and prospective studies showed obesity to be an independent risk factor for infection after trauma12–14. In a prospective cohort study of critically ill trauma patients, obese patients had more than two fold increased risk of acquiring infection12. Also, Renehan et al. demonstrated an association of obesity with 25–40% of certain malignancies in both obese men and women15.\nMany authors reported dysregulation and alteration in number of immune cells in obese subjects. Obese subjects showed either increased or decreased total lymphocytes in peripheral blood populations16–19 and had decreased CD8+ T cell population along with increased or decreased CD4+T cells18,19. Moreover, many previous studies as Moulin et al. who showed in their study that obesity is associated with the modulation of immune parameters23, elevated numbers of circulating immune cells as neutrophil, monocyte, leukocyte and total white blood cells (WBC)16,24, as well as elevated activation levels of certain WBC and suppressed immune cell function17. Also, several authors have reported a chronic inflammation status in individuals with higher BMI25–27 which was associated with elevated amounts of white blood cells, neutrophils, and monocytes in the blood of all participants with BMI higher than that of the control group28. Also, Kintscher et al. observed an increased number of CD3 and CD4 lymphocytes in the peripheral blood of obese women correlating with body mass index (BMI)29. Finally, Antuna-Puente et al. found that BMI is positively correlated with the number of macrophages in adipose tissue30.\nAs there are limited studies reporting the benefits of lifestyle modification on immune system response among hepatitis C virus Saudi patients. This study aimed to examine effects of a weight reduction program on selected immune parameters among HCV Saudi patients.", " Subjects One hundred non-hypertensive, non-cirrhotic Saudi patients with chronic HCV infection;their age ranged from 50 to 38 (41.63 ± 4.57) years, were selected on referral to Gastroenterology and Hepatology Department, King Abdulaziz University Teaching Hospital, Saudi Arabia. All participants were anti HCV positive by enzyme-linked immunosorbent assay (ELISA). Exclusion criteria included other potential causes of liver disease, such as alcoholism or autoimmune phenomena. 
Only patients diagnosed with chronic HCV mono-infection and had anti HCV antibodies by ELISA were selected to undergo Real-Time polymerase chain reaction (RT-PCR) and were treated with combined pegylatedinterferon--alfa (PEG-IFNα)-ribavirin therapy. All participants were free to withdraw from the study at any time. Patients were divided in to two equal groups: Group (A): received weight reduction program in the form of treadmill aerobic exercises in addition to diet control, Whereas group (B): received no therapeutic intervention. The CONSORT diagram outlining the details of the screening, run-in and randomization phases of the study and reasons for participant exclusion illustrated in figure (1). Informed consent was obtained from all participants. This study was approved by the Scientific Research Ethical Committee, Faculty of Applied Medical Sciences at King Abdulaziz University.\nSubjects screening and recruitment CONSORT diagram.\nOne hundred non-hypertensive, non-cirrhotic Saudi patients with chronic HCV infection;their age ranged from 50 to 38 (41.63 ± 4.57) years, were selected on referral to Gastroenterology and Hepatology Department, King Abdulaziz University Teaching Hospital, Saudi Arabia. All participants were anti HCV positive by enzyme-linked immunosorbent assay (ELISA). Exclusion criteria included other potential causes of liver disease, such as alcoholism or autoimmune phenomena. Only patients diagnosed with chronic HCV mono-infection and had anti HCV antibodies by ELISA were selected to undergo Real-Time polymerase chain reaction (RT-PCR) and were treated with combined pegylatedinterferon--alfa (PEG-IFNα)-ribavirin therapy. All participants were free to withdraw from the study at any time. Patients were divided in to two equal groups: Group (A): received weight reduction program in the form of treadmill aerobic exercises in addition to diet control, Whereas group (B): received no therapeutic intervention. The CONSORT diagram outlining the details of the screening, run-in and randomization phases of the study and reasons for participant exclusion illustrated in figure (1). Informed consent was obtained from all participants. This study was approved by the Scientific Research Ethical Committee, Faculty of Applied Medical Sciences at King Abdulaziz University.\nSubjects screening and recruitment CONSORT diagram.\n Measurements Clinical and laboratory analysis were performed by independent assessors who were blinded to group assignment and not involved in the routine treatment of the patients, however the following measurements were taken before the study and after 3 months at the end of the study:\nA. Real-Time polymerase chain reaction (RT-PCR): Ten milliliter blood samples were collected from each participant at study entry. The blood samples were obtained using disposable needles and heparinized vacuum syringes and stored at −70°C until assayed. Serum samples of all participants were tested for Real-Time polymerase chain reaction (RT-PCR) to detect serum HCV RNA levels by polymerase chain reaction using the COBAS TaqMan HCV test, v2.0 (Roche Diagnostics, Indianapolis, NJ, USA).\nB. Analysis of peripheral blood cells: The analysis of peripheral blood cells (e.g., total and differential count) was performed on a Beckman Coulter AcT 5diff hematology analyzer. The values are expressed in percentages and absolute numbers.\nC. 
Flow cytometry analysis: The human leukocyte differentiation antigens CD3, CD4 and CD8 (Beckman Coulter, Marseille, France) Five microliters of appropriate monoclonal antibody was added to 50 µL of a wholeblood sample and incubated for 15 minutes at room temperature. Thereafter, the erythrocytes were lysed with 125 µL of a lysing solution, OptiLyse C, for 10 minutes. The reaction was stopped by the addition of 250 µL phosphate-buffered saline.The samples were analyzed by flow cytometry using Cytomics FC 500 and CXP software (Beckman Coulter).The leukocyte subsets were defined by forward- and side-scatter pattern. The negative control value was determined by a fluorescence background and antibody-nonspecific staining.\nD. Body mass index (BMI): Weight and height scale (Metrotype -England) was used to measure weight and height to calculate the body mass index (BMI). Body mass index was calculated by dividing the weight in kilograms by the square of the height in meters (Kg/m2). According to the WHO classification, a BMI of <18.5 kg/m2 is under weight, 18.5–24.9 kg/m2 is normal 25–29.9 kg/m2 is overweight. A BMI of > 30 kg/m2 is classified as obese and this group is further divided into moderate obesity (30–34.9 kg/m2), severe obesity (35–39.9 kg/m2) and very severe obesity (≤40 kg /m2)31.\nClinical and laboratory analysis were performed by independent assessors who were blinded to group assignment and not involved in the routine treatment of the patients, however the following measurements were taken before the study and after 3 months at the end of the study:\nA. Real-Time polymerase chain reaction (RT-PCR): Ten milliliter blood samples were collected from each participant at study entry. The blood samples were obtained using disposable needles and heparinized vacuum syringes and stored at −70°C until assayed. Serum samples of all participants were tested for Real-Time polymerase chain reaction (RT-PCR) to detect serum HCV RNA levels by polymerase chain reaction using the COBAS TaqMan HCV test, v2.0 (Roche Diagnostics, Indianapolis, NJ, USA).\nB. Analysis of peripheral blood cells: The analysis of peripheral blood cells (e.g., total and differential count) was performed on a Beckman Coulter AcT 5diff hematology analyzer. The values are expressed in percentages and absolute numbers.\nC. Flow cytometry analysis: The human leukocyte differentiation antigens CD3, CD4 and CD8 (Beckman Coulter, Marseille, France) Five microliters of appropriate monoclonal antibody was added to 50 µL of a wholeblood sample and incubated for 15 minutes at room temperature. Thereafter, the erythrocytes were lysed with 125 µL of a lysing solution, OptiLyse C, for 10 minutes. The reaction was stopped by the addition of 250 µL phosphate-buffered saline.The samples were analyzed by flow cytometry using Cytomics FC 500 and CXP software (Beckman Coulter).The leukocyte subsets were defined by forward- and side-scatter pattern. The negative control value was determined by a fluorescence background and antibody-nonspecific staining.\nD. Body mass index (BMI): Weight and height scale (Metrotype -England) was used to measure weight and height to calculate the body mass index (BMI). Body mass index was calculated by dividing the weight in kilograms by the square of the height in meters (Kg/m2). According to the WHO classification, a BMI of <18.5 kg/m2 is under weight, 18.5–24.9 kg/m2 is normal 25–29.9 kg/m2 is overweight. 
A BMI of > 30 kg/m2 is classified as obese and this group is further divided into moderate obesity (30–34.9 kg/m2), severe obesity (35–39.9 kg/m2) and very severe obesity (≤40 kg /m2)31.\n Procedures Following the previous evaluation , all patients were divided randomly into the following groups:\n1. The training group (Group A): These were submitted to aerobic exercise training to complete a 12-week treadmill aerobic exercise (Enraf Nonium, Model display panel Standard, NR 1475.801, Holland) which was conducted according to recommendation of aerobic exercise application approved by the American College of Sports Medicine32. Training program included 5 minutes for warming-up in the form of range motion and stretching exercises, 30 minutes of aerobic exercise training with intensity equal 60–70% of the individual maximum heart rate followed by cooling down for 10 minutes ( on treadmill with low speed and without inclination). Participants had 3 sessions /week for 3 months with close supervision of physical therapist. Also, a dietician performed an interview-based food survey for all participants of group (A) for detection of feeding habits, abnormal dietary behavior and to prescribe the balanced low caloric diet33 that provided 1200 Kilocalories/day for 12 weeks. The same dietitian continuously monitored all participant caloric intakes through reviewing the detailed record of food intake every 2 weeks by the dietitian34,35.\n2. The control group (Group B): received no intervention.\nFollowing the previous evaluation , all patients were divided randomly into the following groups:\n1. The training group (Group A): These were submitted to aerobic exercise training to complete a 12-week treadmill aerobic exercise (Enraf Nonium, Model display panel Standard, NR 1475.801, Holland) which was conducted according to recommendation of aerobic exercise application approved by the American College of Sports Medicine32. Training program included 5 minutes for warming-up in the form of range motion and stretching exercises, 30 minutes of aerobic exercise training with intensity equal 60–70% of the individual maximum heart rate followed by cooling down for 10 minutes ( on treadmill with low speed and without inclination). Participants had 3 sessions /week for 3 months with close supervision of physical therapist. Also, a dietician performed an interview-based food survey for all participants of group (A) for detection of feeding habits, abnormal dietary behavior and to prescribe the balanced low caloric diet33 that provided 1200 Kilocalories/day for 12 weeks. The same dietitian continuously monitored all participant caloric intakes through reviewing the detailed record of food intake every 2 weeks by the dietitian34,35.\n2. The control group (Group B): received no intervention.\n Statistical analysis The mean values of the investigated parameters obtained before and after three months in both groups were compared using paired “t” test. Independent “t” test was used for the comparison between the two groups (P<0.05).\nThe mean values of the investigated parameters obtained before and after three months in both groups were compared using paired “t” test. Independent “t” test was used for the comparison between the two groups (P<0.05).", "One hundred non-hypertensive, non-cirrhotic Saudi patients with chronic HCV infection;their age ranged from 50 to 38 (41.63 ± 4.57) years, were selected on referral to Gastroenterology and Hepatology Department, King Abdulaziz University Teaching Hospital, Saudi Arabia. 
All participants were anti-HCV positive by enzyme-linked immunosorbent assay (ELISA). Exclusion criteria included other potential causes of liver disease, such as alcoholism or autoimmune phenomena. Only patients diagnosed with chronic HCV mono-infection who had anti-HCV antibodies by ELISA were selected to undergo Real-Time polymerase chain reaction (RT-PCR) and were treated with combined pegylated interferon-alfa (PEG-IFNα) plus ribavirin therapy. All participants were free to withdraw from the study at any time. Patients were divided into two equal groups: Group (A) received a weight reduction program in the form of treadmill aerobic exercises in addition to diet control, whereas group (B) received no therapeutic intervention. The CONSORT diagram outlining the details of the screening, run-in and randomization phases of the study and the reasons for participant exclusion is illustrated in figure (1). Informed consent was obtained from all participants. This study was approved by the Scientific Research Ethical Committee, Faculty of Applied Medical Sciences at King Abdulaziz University.\nSubjects screening and recruitment CONSORT diagram.", "Clinical and laboratory analysis was performed by independent assessors who were blinded to group assignment and not involved in the routine treatment of the patients. The following measurements were taken before the study and after 3 months, at the end of the study:\nA. Real-Time polymerase chain reaction (RT-PCR): Ten-milliliter blood samples were collected from each participant at study entry. The blood samples were obtained using disposable needles and heparinized vacuum syringes and stored at −70°C until assayed. Serum samples of all participants were tested by Real-Time polymerase chain reaction (RT-PCR) to detect serum HCV RNA levels using the COBAS TaqMan HCV test, v2.0 (Roche Diagnostics, Indianapolis, NJ, USA).\nB. Analysis of peripheral blood cells: The analysis of peripheral blood cells (e.g., total and differential count) was performed on a Beckman Coulter AcT 5diff hematology analyzer. The values are expressed in percentages and absolute numbers.\nC. Flow cytometry analysis: The human leukocyte differentiation antigens CD3, CD4 and CD8 (Beckman Coulter, Marseille, France) were used. Five microliters of the appropriate monoclonal antibody was added to 50 µL of a whole-blood sample and incubated for 15 minutes at room temperature. Thereafter, the erythrocytes were lysed with 125 µL of a lysing solution, OptiLyse C, for 10 minutes. The reaction was stopped by the addition of 250 µL of phosphate-buffered saline. The samples were analyzed by flow cytometry using a Cytomics FC 500 and CXP software (Beckman Coulter). The leukocyte subsets were defined by forward- and side-scatter pattern. The negative control value was determined by fluorescence background and antibody-nonspecific staining.\nD. Body mass index (BMI): A weight and height scale (Metrotype, England) was used to measure weight and height to calculate the body mass index (BMI). Body mass index was calculated by dividing the weight in kilograms by the square of the height in meters (kg/m2). According to the WHO classification, a BMI of <18.5 kg/m2 is underweight, 18.5–24.9 kg/m2 is normal, and 25–29.9 kg/m2 is overweight.
A BMI of >30 kg/m2 is classified as obese, and this group is further divided into moderate obesity (30–34.9 kg/m2), severe obesity (35–39.9 kg/m2) and very severe obesity (≥40 kg/m2)31.", "Following the previous evaluation, all patients were randomly divided into the following groups:\n1. The training group (Group A): These patients were submitted to aerobic exercise training, completing a 12-week treadmill aerobic exercise program (Enraf Nonium, Model display panel Standard, NR 1475.801, Holland) conducted according to the recommendations for aerobic exercise application approved by the American College of Sports Medicine32. The training program included 5 minutes of warm-up in the form of range-of-motion and stretching exercises and 30 minutes of aerobic exercise training at an intensity equal to 60–70% of the individual maximum heart rate, followed by a 10-minute cool-down (on the treadmill at low speed and without inclination). Participants had 3 sessions/week for 3 months under the close supervision of a physical therapist. Also, a dietitian performed an interview-based food survey for all participants of group (A) to detect feeding habits and abnormal dietary behavior and to prescribe a balanced low-calorie diet33 that provided 1200 kilocalories/day for 12 weeks. The same dietitian continuously monitored all participants' caloric intake by reviewing their detailed food-intake records every 2 weeks34,35.\n2. The control group (Group B): received no intervention.", "The mean values of the investigated parameters obtained before and after three months in both groups were compared using the paired t-test. The independent t-test was used for the comparison between the two groups (P<0.05).", "The demographic and clinical characteristics of the subjects are shown in Table 1. There were no significant differences between the two groups regarding age, height, albumin, fasting blood glucose, hemoglobin, total bilirubin, systolic blood pressure, diastolic blood pressure, body weight, body mass index (BMI), waist circumference, fat mass, alanine aminotransferase (ALT) and HCV viral load.\nComparison of clinical data between HCV patients in both groups\nBMI: Body Mass Index; Hb: Hemoglobin; FPG: Fasting Blood Glucose; ALT: Alanine aminotransferase; SBP: Systolic blood pressure; DBP: Diastolic blood pressure.\nThe mean values of BMI, white blood cells, total neutrophil count, monocytes, CD3, CD4 and CD8 were significantly decreased in the training group as a result of the weight loss program (Table 2 and figure 2); however, the results of the control group were not significant (table 3 and figure 3).
Also, there were significant differences between both groups at the end of the study (table 4 and figure 4).\nMean value and significance of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) before and at the end of the study\nBMI = Body Mass Index\n* indicates a significant difference between the two groups, P < 0.05\nMean value of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) before and at the end of the study.\nMean value and significance of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (B) before and at the end of the study\nBMI = Body Mass Index\n* indicates a significant difference between the two groups, P < 0.05\nMean value of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (B) before and at the end of the study.\nMean value and significance of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) and group (B) at the end of the study\nBMI = Body Mass Index\n* indicates a significant difference between the two groups, P < 0.05.\nMean value of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) and group (B) at the end of the study.", "The best immune correlate of HCV control is a strong and broad CD4+ T-cell response to HCV antigens, which is often associated with an equally robust and broad HCV-specific CD8+ T-cell immune response36–38. It is believed that CD4+ T cells are necessary to ensure fully functional CD8+ T-cell responses, which can then clear the virus39. However, obesity is associated with the modulation of immune parameters40,41. To our knowledge, this is the first study of immune function measures in relation to previous intentional weight loss in HCV patients. Our results indicate that there may be long-term effects of intentional weight loss on immune function. These results are in line with several previous studies.\nShade and colleagues conducted a study on one hundred fourteen healthy, overweight, sedentary, post-menopausal women who were recruited for an exercise intervention study and were currently weight stable; those who had previously lost ≥10 pounds had lower measured natural killer (NK) cell cytotoxicity than those who had not (24.7%±12.1% vs 31.1%±14.7%, respectively). The frequency of weight loss episodes was also associated with differences in the number and proportion of NK cells. This study provides evidence that frequent intentional weight loss may have long-term effects on immune function42. However, Wasinski and colleagues conducted a study on mice that were submitted to chronic swimming training or a 30% caloric restriction after consuming a high-fat diet. The mice were subjected to swimming sessions 5 times per week for 6 weeks. The data demonstrated that exercise and caloric restriction modulate resident immune cells in adipose tissue, with a reduction observed in CD4+ and CD8+ T lymphocytes in adipose tissue (AT)43. Also, Carpenter and colleagues evaluated forced and voluntary exercise as weight-loss treatments in diet-induced obese (DIO) mice and assessed the effects of weight loss on monocyte concentration and cell-surface expression of Toll-like receptors. Results confirmed that short-term exercise and low-fat diet consumption over the 8 weeks caused significant weight loss and altered immune profiles44.
Viardot and colleagues looked at 13 obese people with Type 2 diabetes or pre-diabetes who were limited to a diet of between 1000 and 1600 calories a day for 24 weeks. Gastric banding was performed at 12 weeks to help restrict food intake further. Their results showed an 80% reduction of pro-inflammatory T-helper cells, as well as reduced activation of other circulating immune cells (T cells, monocytes and neutrophils) and decreased activation of macrophages in fat45. Wing and colleagues examined subjects after weight loss induced by a 14-day fast, who showed improvement in serum immunoglobulin levels, the bactericidal capacity of blood monocytes and NK cell cytotoxic activity46. Moreover, Tanaka and colleagues reported that caloric restriction improves parameters of immunity such as T cell counts, NK cell activity and the ability of mononuclear cells to produce pro-inflammatory cytokines47.\nThe possible mechanism of immune system modulation by weight reduction could be explained by the reduction of adipose tissue, which is not only a storage organ but also produces close to 100 cytokines. These secreted adipokines are directly correlated to the increased adipose tissue mass, play an intricate role in various aspects of the innate and adaptive immune response, and participate in a wide variety of physiological or physiopathological processes including food intake, insulin sensitivity and inflammation48. Also, weight reduction reduces the serum level of leptin. Leptin has pleiotropic effects on immune cell activity, as evidenced by the presence of leptin receptors on all immune cells of both arms of innate and adaptive immunity49. Leptin promotes macrophage phagocytosis by activating phospholipase. Leptin increases secretion of pro-inflammatory cytokines by macrophages. Leptin stimulates monocyte proliferation and upregulates the expression of activation markers including CD38, CD69, CD25 and CD7150. Leptin is involved in natural killer cell development, differentiation, proliferation, activation and cytotoxicity51.\nThe current study has important strengths and limitations. The major strength is the supervised nature of the study. Supervising food intake and physical activity removes the need to question compliance or to rely on food and activity questionnaires. Further, all exercise sessions were supervised and adherence to the diet and activities was essentially 100%. Moreover, the study was randomized; hence, we can extrapolate adherence to the general population. On the other hand, the major limitation is the small sample size in both groups, which may limit the generalizability of the findings of the present study. Finally, within the limits of this study, weight reduction is recommended for modulation of immune system parameters of patients with HCV. Further research is needed to explore the impact of weight reduction on quality of life and other biochemical parameters among patients with HCV.", "The current study provides evidence that weight loss modulates immune system parameters of patients with HCV." ]
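As a reproducibility aside: the statistical plan in the methods element above (paired t-tests within each group before vs. after three months, an independent t-test between groups, significance at P < 0.05) maps onto a few lines of SciPy. The sketch below is illustrative only; the array values are hypothetical placeholders, not the study's data.

# Minimal sketch of the paper's statistical plan; values are hypothetical.
import numpy as np
from scipy import stats

alpha = 0.05

# Hypothetical BMI values for the training group, before and after 3 months.
bmi_before = np.array([31.2, 33.5, 30.8, 34.1, 32.0])
bmi_after = np.array([29.0, 31.1, 29.5, 31.8, 30.2])

# Within-group comparison (before vs. after): paired t-test.
t_paired, p_paired = stats.ttest_rel(bmi_before, bmi_after)
print(f"paired t = {t_paired:.2f}, p = {p_paired:.4f}, significant = {p_paired < alpha}")

# Between-group comparison at the end of the study: independent t-test.
bmi_control_after = np.array([32.9, 34.0, 31.5, 33.7, 32.4])
t_ind, p_ind = stats.ttest_ind(bmi_after, bmi_control_after)
print(f"independent t = {t_ind:.2f}, p = {p_ind:.4f}, significant = {p_ind < alpha}")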
[ "intro", "subjects|methods", null, null, null, null, "results", "discussion", "conclusions" ]
[ "Hepatitis C virus", "obesity", "immune system", "weight reduction" ]
Introduction: Globally, an estimated 180 million people are chronically infected with hepatitis C virus (HCV) and 3 to 4 million are newly infected each year1,2. HCV infection is one of the main causes of chronic liver disease worldwide3, and persistent infection occurs in 50 to 80% of those infected and may lead to the development of cirrhosis and subsequent hepatocellular carcinoma1. The progression of HCV involves changes in the cellular immunity of those affected4,5. Some studies have indicated that the cellular immunity of HCV patients undergoes alterations, leading to poor immunological responses or dysfunctions6–8. Obesity has also been associated with decreased immunocompetence, as it alters innate and adaptive immunity, and immunity deterioration is related to the grade of obesity9. Moreover, impaired immune responses have also been suggested to occur in obese humans. Studies indicated that the incidence and severity of certain infections are higher in obese individuals when compared to lean people10,11. Retrospective and prospective studies showed obesity to be an independent risk factor for infection after trauma12–14. In a prospective cohort study of critically ill trauma patients, obese patients had a more than twofold increased risk of acquiring infection12. Also, Renehan et al. demonstrated an association of obesity with 25–40% of certain malignancies in both men and women15. Many authors reported dysregulation and alteration in the number of immune cells in obese subjects. Obese subjects showed either increased or decreased total lymphocytes in peripheral blood populations16–19 and had a decreased CD8+ T cell population along with increased or decreased CD4+ T cells18,19. Moreover, many previous studies, such as that of Moulin et al., showed that obesity is associated with the modulation of immune parameters23, elevated numbers of circulating immune cells such as neutrophils, monocytes, leukocytes and total white blood cells (WBC)16,24, as well as elevated activation levels of certain WBC and suppressed immune cell function17. Also, several authors have reported a chronic inflammation status in individuals with higher BMI25–27, which was associated with elevated amounts of white blood cells, neutrophils, and monocytes in the blood of all participants with BMI higher than that of the control group28. Also, Kintscher et al. observed an increased number of CD3 and CD4 lymphocytes in the peripheral blood of obese women correlating with body mass index (BMI)29. Finally, Antuna-Puente et al. found that BMI is positively correlated with the number of macrophages in adipose tissue30. However, there are limited studies reporting the benefits of lifestyle modification on immune system response among hepatitis C virus Saudi patients. This study aimed to examine the effects of a weight reduction program on selected immune parameters among HCV Saudi patients. Patients and methods:
Only patients diagnosed with chronic HCV mono-infection and had anti HCV antibodies by ELISA were selected to undergo Real-Time polymerase chain reaction (RT-PCR) and were treated with combined pegylatedinterferon--alfa (PEG-IFNα)-ribavirin therapy. All participants were free to withdraw from the study at any time. Patients were divided in to two equal groups: Group (A): received weight reduction program in the form of treadmill aerobic exercises in addition to diet control, Whereas group (B): received no therapeutic intervention. The CONSORT diagram outlining the details of the screening, run-in and randomization phases of the study and reasons for participant exclusion illustrated in figure (1). Informed consent was obtained from all participants. This study was approved by the Scientific Research Ethical Committee, Faculty of Applied Medical Sciences at King Abdulaziz University. Subjects screening and recruitment CONSORT diagram. One hundred non-hypertensive, non-cirrhotic Saudi patients with chronic HCV infection;their age ranged from 50 to 38 (41.63 ± 4.57) years, were selected on referral to Gastroenterology and Hepatology Department, King Abdulaziz University Teaching Hospital, Saudi Arabia. All participants were anti HCV positive by enzyme-linked immunosorbent assay (ELISA). Exclusion criteria included other potential causes of liver disease, such as alcoholism or autoimmune phenomena. Only patients diagnosed with chronic HCV mono-infection and had anti HCV antibodies by ELISA were selected to undergo Real-Time polymerase chain reaction (RT-PCR) and were treated with combined pegylatedinterferon--alfa (PEG-IFNα)-ribavirin therapy. All participants were free to withdraw from the study at any time. Patients were divided in to two equal groups: Group (A): received weight reduction program in the form of treadmill aerobic exercises in addition to diet control, Whereas group (B): received no therapeutic intervention. The CONSORT diagram outlining the details of the screening, run-in and randomization phases of the study and reasons for participant exclusion illustrated in figure (1). Informed consent was obtained from all participants. This study was approved by the Scientific Research Ethical Committee, Faculty of Applied Medical Sciences at King Abdulaziz University. Subjects screening and recruitment CONSORT diagram. Measurements Clinical and laboratory analysis were performed by independent assessors who were blinded to group assignment and not involved in the routine treatment of the patients, however the following measurements were taken before the study and after 3 months at the end of the study: A. Real-Time polymerase chain reaction (RT-PCR): Ten milliliter blood samples were collected from each participant at study entry. The blood samples were obtained using disposable needles and heparinized vacuum syringes and stored at −70°C until assayed. Serum samples of all participants were tested for Real-Time polymerase chain reaction (RT-PCR) to detect serum HCV RNA levels by polymerase chain reaction using the COBAS TaqMan HCV test, v2.0 (Roche Diagnostics, Indianapolis, NJ, USA). B. Analysis of peripheral blood cells: The analysis of peripheral blood cells (e.g., total and differential count) was performed on a Beckman Coulter AcT 5diff hematology analyzer. The values are expressed in percentages and absolute numbers. C. 
Subjects: One hundred non-hypertensive, non-cirrhotic Saudi patients with chronic HCV infection, whose ages ranged from 38 to 50 (41.63 ± 4.57) years, were selected on referral to the Gastroenterology and Hepatology Department, King Abdulaziz University Teaching Hospital, Saudi Arabia.
All participants were anti-HCV positive by enzyme-linked immunosorbent assay (ELISA). Exclusion criteria included other potential causes of liver disease, such as alcoholism or autoimmune phenomena. Only patients diagnosed with chronic HCV mono-infection who had anti-HCV antibodies by ELISA were selected to undergo Real-Time polymerase chain reaction (RT-PCR) and were treated with combined pegylated interferon-alfa (PEG-IFNα) plus ribavirin therapy. All participants were free to withdraw from the study at any time. Patients were divided into two equal groups: Group (A) received a weight reduction program in the form of treadmill aerobic exercises in addition to diet control, whereas group (B) received no therapeutic intervention. The CONSORT diagram outlining the details of the screening, run-in and randomization phases of the study and the reasons for participant exclusion is illustrated in figure (1). Informed consent was obtained from all participants. This study was approved by the Scientific Research Ethical Committee, Faculty of Applied Medical Sciences at King Abdulaziz University. Subjects screening and recruitment CONSORT diagram. Measurements: Clinical and laboratory analysis was performed by independent assessors who were blinded to group assignment and not involved in the routine treatment of the patients. The following measurements were taken before the study and after 3 months, at the end of the study: A. Real-Time polymerase chain reaction (RT-PCR): Ten-milliliter blood samples were collected from each participant at study entry. The blood samples were obtained using disposable needles and heparinized vacuum syringes and stored at −70°C until assayed. Serum samples of all participants were tested by Real-Time polymerase chain reaction (RT-PCR) to detect serum HCV RNA levels using the COBAS TaqMan HCV test, v2.0 (Roche Diagnostics, Indianapolis, NJ, USA). B. Analysis of peripheral blood cells: The analysis of peripheral blood cells (e.g., total and differential count) was performed on a Beckman Coulter AcT 5diff hematology analyzer. The values are expressed in percentages and absolute numbers. C. Flow cytometry analysis: The human leukocyte differentiation antigens CD3, CD4 and CD8 (Beckman Coulter, Marseille, France) were used. Five microliters of the appropriate monoclonal antibody was added to 50 µL of a whole-blood sample and incubated for 15 minutes at room temperature. Thereafter, the erythrocytes were lysed with 125 µL of a lysing solution, OptiLyse C, for 10 minutes. The reaction was stopped by the addition of 250 µL of phosphate-buffered saline. The samples were analyzed by flow cytometry using a Cytomics FC 500 and CXP software (Beckman Coulter). The leukocyte subsets were defined by forward- and side-scatter pattern. The negative control value was determined by fluorescence background and antibody-nonspecific staining. D. Body mass index (BMI): A weight and height scale (Metrotype, England) was used to measure weight and height to calculate the body mass index (BMI). Body mass index was calculated by dividing the weight in kilograms by the square of the height in meters (kg/m2). According to the WHO classification, a BMI of <18.5 kg/m2 is underweight, 18.5–24.9 kg/m2 is normal, and 25–29.9 kg/m2 is overweight. A BMI of >30 kg/m2 is classified as obese, and this group is further divided into moderate obesity (30–34.9 kg/m2), severe obesity (35–39.9 kg/m2) and very severe obesity (≥40 kg/m2)31.
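The BMI formula and WHO cut-offs in the Measurements section translate directly into a small function. This is a minimal sketch using the cut-offs quoted above; the function names are illustrative, not from the paper.

# BMI = weight (kg) / height (m)^2, classified with the WHO cut-offs above.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def who_class(b):
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    if b < 35:
        return "moderate obesity"
    if b < 40:
        return "severe obesity"
    return "very severe obesity"

print(who_class(bmi(95.0, 1.70)))  # 95 kg at 1.70 m -> BMI ~32.9, "moderate obesity"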
Procedures: Following the previous evaluation, all patients were randomly divided into the following groups: 1. The training group (Group A): These patients were submitted to aerobic exercise training, completing a 12-week treadmill aerobic exercise program (Enraf Nonium, Model display panel Standard, NR 1475.801, Holland) conducted according to the recommendations for aerobic exercise application approved by the American College of Sports Medicine32. The training program included 5 minutes of warm-up in the form of range-of-motion and stretching exercises and 30 minutes of aerobic exercise training at an intensity equal to 60–70% of the individual maximum heart rate, followed by a 10-minute cool-down (on the treadmill at low speed and without inclination). Participants had 3 sessions/week for 3 months under the close supervision of a physical therapist. Also, a dietitian performed an interview-based food survey for all participants of group (A) to detect feeding habits and abnormal dietary behavior and to prescribe a balanced low-calorie diet33 that provided 1200 kilocalories/day for 12 weeks. The same dietitian continuously monitored all participants' caloric intake by reviewing their detailed food-intake records every 2 weeks34,35. 2. The control group (Group B): received no intervention. Statistical analysis: The mean values of the investigated parameters obtained before and after three months in both groups were compared using the paired t-test. The independent t-test was used for the comparison between the two groups (P<0.05). Results: The demographic and clinical characteristics of the subjects are shown in Table 1. There were no significant differences between the two groups regarding age, height, albumin, fasting blood glucose, hemoglobin, total bilirubin, systolic blood pressure, diastolic blood pressure, body weight, body mass index (BMI), waist circumference, fat mass, alanine aminotransferase (ALT) and HCV viral load. Comparison of clinical data between HCV patients in both groups BMI: Body Mass Index; Hb: Hemoglobin; FPG: Fasting Blood Glucose; ALT: Alanine aminotransferase; SBP: Systolic blood pressure; DBP: Diastolic blood pressure. The mean values of BMI, white blood cells, total neutrophil count, monocytes, CD3, CD4 and CD8 were significantly decreased in the training group as a result of the weight loss program (Table 2 and figure 2); however, the results of the control group were not significant (table 3 and figure 3).
Mean value and significance of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) and group (B) at the end of the study. BMI = Body Mass Index; * indicates a significant difference between the two groups, P < 0.05. Mean value of body mass index, white blood cells, total neutrophil, monocytes, CD3, CD4 and CD8 count of group (A) and group (B) at the end of the study. Discussion: The best immune correlate of HCV control is a strong and broad CD4+ T-cell response to HCV antigens, which is often associated with an equally robust and broad HCV-specific CD8+ T-cell immune response36–38. It is believed that CD4+ T cells are necessary to ensure fully functional CD8+ T-cell responses, which can then clear the virus39. However, obesity is associated with the modulation of immune parameters40,41. To our knowledge, this is the first study of immune function measures in relation to previous intentional weight loss in HCV patients. Our results indicate that there may be long-term effects of intentional weight loss on immune function. These results are in line with several previous studies. Shade and colleagues conducted a study on one hundred fourteen healthy, overweight, sedentary, post-menopausal women who were recruited for an exercise intervention study and were currently weight stable; those who had previously lost ≥10 pounds had lower measured natural killer (NK) cell cytotoxicity than those who had not (24.7%±12.1% vs 31.1%±14.7%, respectively). The frequency of weight loss episodes was also associated with differences in the number and proportion of NK cells. This study provides evidence that frequent intentional weight loss may have long-term effects on immune function42. However, Wasinski and colleagues conducted a study on mice that were submitted to chronic swimming training or a 30% caloric restriction after consuming a high-fat diet. The mice were subjected to swimming sessions 5 times per week for 6 weeks. The data demonstrated that exercise and caloric restriction modulate resident immune cells in adipose tissue, with a reduction observed in CD4+ and CD8+ T lymphocytes in adipose tissue (AT)43. Also, Carpenter and colleagues evaluated forced and voluntary exercise as weight-loss treatments in diet-induced obese (DIO) mice and assessed the effects of weight loss on monocyte concentration and cell-surface expression of Toll-like receptors. Results confirmed that short-term exercise and low-fat diet consumption over the 8 weeks caused significant weight loss and altered immune profiles44.
The possible mechanism of immune system modulation by weight reduction could be explained by reduction of adipose tissues which is not only a storage organ, but produces close to 100 cytokines. These secreted adipokines are directly correlated to the increased adipose tissue mass and play an intricate role in various aspects of the innate and adaptive immune response and participate in a wide variety of physiological or physiopathological processes including food intake, insulin sensitivity and inflammation48. Also, weight reduction reduces serum level of leptin. Leptin has pleiotropic effects on immune cell activity as evidenced from the presence of leptin receptors on all immune cells of both the arms of innate and adaptive immunity49. Leptin promotes macrophage phagocytosis by activating phospholipase. Leptin increases secretion of pro-inflammatory cytokines by macrophages. Leptin stimulates monocyte proliferation andupregulates the expression of activation markers including CD38, CD69, CD25 and CD7150. Leptin is involved in the natural killer cell development, differentiation, proliferation, activation and cytotoxicity51. The current study has important strengths and limitations. The major strength is the supervised nature of the study. Supervising food intake and physical activity removes the need to question compliance or to rely on food and activity questionnaires. Further, all exercise sessions were supervised and adherence to the diet and activities was essentially 100%. Moreover, the study was randomized; hence, we can extrapolate adherence to the general population. In the other hand, the major limitation is the small sample size in both groups which may limit the possibility of generalization of the findings in the present study. Finally, within the limit of this study, weight reduction is recommended for modulation of immune system parameters of patients with HCV. Further researches are needed to explore the impact of weight reduction on quality of life and other biochemical parameters among patients with HCV. Conclusion: The current study provides evidence that weight loss modulates immune system parameters of patients with HCV.
Background: Recently, about 2.35% of the world population is estimated to be chronically infected with hepatitis C virus (HCV). Previous cohort studies indicated that obesity increases the risk of hepatic steatosis and fibrosis in non-diabetic patients with chronic hepatitis C infection due to a diminished response to anti-viral therapy, and as a result obesity is considered an important factor in the progression of chronic HCV. Moreover, there is a strong association between BMI and the human immune system among HCV patients. Methods: One hundred obese Saudi patients with chronic HCV infection participated in this study; their ages ranged from 50-58 years and their body mass index (BMI) ranged from 30-35 kg/m2. All subjects were included in two groups: the first group received a weight reduction program in the form of treadmill aerobic exercises in addition to diet control, whereas the second group received no therapeutic intervention. Parameters of CD3, CD4 and CD8 were quantified; leukocyte and differential counts and BMI were measured before and after 3 months, at the end of the study. Results: The mean values of BMI, white blood cells, total neutrophil count, monocytes, CD3, CD4 and CD8 were significantly decreased in the training group as a result of the weight loss program; however, the results of the control group were not significant. Also, there were significant differences between both groups at the end of the study. Conclusions: Weight loss modulates immune system parameters of patients with HCV.
Introduction: Globally, an estimated 180 million people are chronically infected with hepatitis C virus (HCV) and 3 to 4 million are newly infected each year1,2. HCV infection is one of the main causes of chronic liver disease worldwide3, and persistent infection occurs in 50 to 80% of those infected and may lead to the development of cirrhosis and subsequent hepatocellular carcinoma1. The progression of HCV involves changes in the cellular immunity of those affected4,5. Some studies have indicated that the cellular immunity of HCV patients undergoes alterations, leading to poor immunological responses or dysfunctions6–8. Obesity has also been associated with decreased immunocompetence, as it alters innate and adaptive immunity, and immunity deterioration is related to the grade of obesity9. Moreover, impaired immune responses have also been suggested to occur in obese humans. Studies indicated that the incidence and severity of certain infections are higher in obese individuals when compared to lean people10,11. Retrospective and prospective studies showed obesity to be an independent risk factor for infection after trauma12–14. In a prospective cohort study of critically ill trauma patients, obese patients had a more than twofold increased risk of acquiring infection12. Also, Renehan et al. demonstrated an association of obesity with 25–40% of certain malignancies in both men and women15. Many authors reported dysregulation and alteration in the number of immune cells in obese subjects. Obese subjects showed either increased or decreased total lymphocytes in peripheral blood populations16–19 and had a decreased CD8+ T cell population along with increased or decreased CD4+ T cells18,19. Moreover, many previous studies, such as that of Moulin et al., showed that obesity is associated with the modulation of immune parameters23, elevated numbers of circulating immune cells such as neutrophils, monocytes, leukocytes and total white blood cells (WBC)16,24, as well as elevated activation levels of certain WBC and suppressed immune cell function17. Also, several authors have reported a chronic inflammation status in individuals with higher BMI25–27, which was associated with elevated amounts of white blood cells, neutrophils, and monocytes in the blood of all participants with BMI higher than that of the control group28. Also, Kintscher et al. observed an increased number of CD3 and CD4 lymphocytes in the peripheral blood of obese women correlating with body mass index (BMI)29. Finally, Antuna-Puente et al. found that BMI is positively correlated with the number of macrophages in adipose tissue30. However, there are limited studies reporting the benefits of lifestyle modification on immune system response among hepatitis C virus Saudi patients. This study aimed to examine the effects of a weight reduction program on selected immune parameters among HCV Saudi patients. Conclusion: The current study provides evidence that weight loss modulates immune system parameters of patients with HCV.
Background: Recently, about 2.35% of the world population is estimated to be chronically infected with hepatitis C virus (HCV). Previous cohort studies indicated that obesity increases the risk of hepatic steatosis and fibrosis in non-diabetic patients with chronic hepatitis C infection due to a diminished response to anti-viral therapy, and as a result obesity is considered an important factor in the progression of chronic HCV. Moreover, there is a strong association between BMI and the human immune system among HCV patients. Methods: One hundred obese Saudi patients with chronic HCV infection participated in this study; their ages ranged from 50-58 years and their body mass index (BMI) ranged from 30-35 kg/m2. All subjects were included in two groups: the first group received a weight reduction program in the form of treadmill aerobic exercises in addition to diet control, whereas the second group received no therapeutic intervention. Parameters of CD3, CD4 and CD8 were quantified; leukocyte and differential counts and BMI were measured before and after 3 months, at the end of the study. Results: The mean values of BMI, white blood cells, total neutrophil count, monocytes, CD3, CD4 and CD8 were significantly decreased in the training group as a result of the weight loss program; however, the results of the control group were not significant. Also, there were significant differences between both groups at the end of the study. Conclusions: Weight loss modulates immune system parameters of patients with HCV.
4,808
288
[ 246, 447, 227, 43 ]
9
[ "study", "group", "hcv", "weight", "blood", "patients", "cells", "kg m2", "m2", "kg" ]
[ "patients hcv conclusion", "hepatitis virus hcv", "chronically infected hcv", "cellular immunity hcv", "immune parameters hcv" ]
null
[CONTENT] Hepatitis C virus | obesity | immune system | weight reduction [SUMMARY]
null
[CONTENT] Hepatitis C virus | obesity | immune system | weight reduction [SUMMARY]
[CONTENT] Hepatitis C virus | obesity | immune system | weight reduction [SUMMARY]
[CONTENT] Hepatitis C virus | obesity | immune system | weight reduction [SUMMARY]
[CONTENT] Hepatitis C virus | obesity | immune system | weight reduction [SUMMARY]
[CONTENT] Adult | Antigens, CD | Body Mass Index | Cohort Studies | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Immune System | Leukocytes | Male | Middle Aged | Obesity | Program Evaluation | Saudi Arabia | Weight Loss | Weight Reduction Programs [SUMMARY]
null
[CONTENT] Adult | Antigens, CD | Body Mass Index | Cohort Studies | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Immune System | Leukocytes | Male | Middle Aged | Obesity | Program Evaluation | Saudi Arabia | Weight Loss | Weight Reduction Programs [SUMMARY]
[CONTENT] Adult | Antigens, CD | Body Mass Index | Cohort Studies | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Immune System | Leukocytes | Male | Middle Aged | Obesity | Program Evaluation | Saudi Arabia | Weight Loss | Weight Reduction Programs [SUMMARY]
[CONTENT] Adult | Antigens, CD | Body Mass Index | Cohort Studies | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Immune System | Leukocytes | Male | Middle Aged | Obesity | Program Evaluation | Saudi Arabia | Weight Loss | Weight Reduction Programs [SUMMARY]
[CONTENT] Adult | Antigens, CD | Body Mass Index | Cohort Studies | Female | Hepacivirus | Hepatitis C, Chronic | Humans | Immune System | Leukocytes | Male | Middle Aged | Obesity | Program Evaluation | Saudi Arabia | Weight Loss | Weight Reduction Programs [SUMMARY]
[CONTENT] patients hcv conclusion | hepatitis virus hcv | chronically infected hcv | cellular immunity hcv | immune parameters hcv [SUMMARY]
null
[CONTENT] patients hcv conclusion | hepatitis virus hcv | chronically infected hcv | cellular immunity hcv | immune parameters hcv [SUMMARY]
[CONTENT] patients hcv conclusion | hepatitis virus hcv | chronically infected hcv | cellular immunity hcv | immune parameters hcv [SUMMARY]
[CONTENT] patients hcv conclusion | hepatitis virus hcv | chronically infected hcv | cellular immunity hcv | immune parameters hcv [SUMMARY]
[CONTENT] patients hcv conclusion | hepatitis virus hcv | chronically infected hcv | cellular immunity hcv | immune parameters hcv [SUMMARY]
[CONTENT] study | group | hcv | weight | blood | patients | cells | kg m2 | m2 | kg [SUMMARY]
null
[CONTENT] study | group | hcv | weight | blood | patients | cells | kg m2 | m2 | kg [SUMMARY]
[CONTENT] study | group | hcv | weight | blood | patients | cells | kg m2 | m2 | kg [SUMMARY]
[CONTENT] study | group | hcv | weight | blood | patients | cells | kg m2 | m2 | kg [SUMMARY]
[CONTENT] study | group | hcv | weight | blood | patients | cells | kg m2 | m2 | kg [SUMMARY]
[CONTENT] immune | obese | studies | increased | immunity | elevated | higher | certain | infected | decreased [SUMMARY]
null
[CONTENT] body | blood | mass index | index | body mass index | body mass | monocytes cd3 cd4 | total neutrophil | monocytes cd3 cd4 cd8 | cells total neutrophil [SUMMARY]
[CONTENT] provides evidence weight loss | loss modulates | study provides evidence weight | provides evidence weight | modulates immune system parameters | evidence weight | evidence weight loss | loss modulates immune system | loss modulates immune | modulates immune system [SUMMARY]
[CONTENT] immune | group | study | hcv | m2 | kg | kg m2 | weight | blood | groups [SUMMARY]
[CONTENT] immune | group | study | hcv | m2 | kg | kg m2 | weight | blood | groups [SUMMARY]
[CONTENT] about 2.35% ||| ||| BMI [SUMMARY]
null
[CONTENT] BMI ||| [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] about 2.35% ||| ||| BMI ||| One-hundred | obese | Saudi | 50-58 years | BMI | 30-35 kg ||| two | first | second ||| Parameters of CD3, CD4 | Leukocyte | BMI | 3 months ||| ||| BMI ||| ||| [SUMMARY]
[CONTENT] about 2.35% ||| ||| BMI ||| One-hundred | obese | Saudi | 50-58 years | BMI | 30-35 kg ||| two | first | second ||| Parameters of CD3, CD4 | Leukocyte | BMI | 3 months ||| ||| BMI ||| ||| [SUMMARY]
TMEM173 rs7447927 genetic polymorphism and susceptibility to severe enterovirus 71 infection in Chinese children.
36444630
This study was designed to explore the association between the TMEM173 polymorphism (rs7447927) and the severity of enterovirus 71 (EV71) infection among Chinese children.
INTRODUCTION
The TMEM173 polymorphism was identified in EV71-infected patients (n = 497) and healthy controls (n = 535) using the improved multiplex ligation detection reaction (iMLDR). Interferon-α (IFN-α) serum levels were detected using enzyme-linked immunosorbent assay (ELISA).
METHODS
The frequencies of the GG genotype and G allele of TMEM173 rs7447927 in the mild and severe EV71 infection groups were markedly higher than those in the control group. The GG genotype and G allele frequencies in severely infected EV71 patients were significantly higher than those in mildly infected EV71 patients. Severely infected EV71 patients with the GG genotype had higher white blood cell (WBC) counts, C-reactive protein (CRP) levels, and blood glucose (BG) levels, longer fever duration, more frequent vomiting, spirit changes, and electroencephalography (EEG) abnormalities. The IFN-α serum concentration in severely infected patients was significantly higher than in the mildly infected group. The IFN-α concentration in the GG genotype was significantly higher than in the GC and CC genotypes in severe cases.
RESULTS
The TMEM173 rs7447927 polymorphism was associated with EV71 infection susceptibility and severity. The G allele and GG genotype are susceptibility factors in the development of severe EV71 infection in Chinese children.
CONCLUSIONS
[ "Child", "Humans", "Enterovirus A, Human", "Polymorphism, Genetic", "Genotype", "Gene Frequency", "Interferon-alpha", "China" ]
9695089
INTRODUCTION
Enterovirus 71 (EV71), a single positive‐sense‐strand, neurovirulent RNA virus, is a well‐known pathogen that causes hand‐foot‐and‐mouth disease (HFMD). 1 , 2 Most children infected with EV71 manifest only mild symptoms (fever and rashes on the mouth, hands, feet, and hip); however, a few children have symptoms that are accompanied by serious central nervous system complications such as aseptic meningitis, encephalomyelitis, brainstem encephalitis, flaccid paralysis, and neurogenic pulmonary edema. 3 In 2008, a large‐scale Asia‐Pacific pandemic in China led to the infection of 490,000 children with EV71, of whom 126 died. 4 The clinical manifestations of the disease in infected children differ, depending on virulence and host immunity. Recent studies have mainly focused on exploring the relationship between EV71 infection and host immunity at the molecular level, and it has been found that human gene polymorphisms, such as those of CPT2, OAS2, and IL‐17F, are related to infection susceptibility and severity. 5 , 6 , 7 , 8 , 9 The transmembrane protein 173 (TMEM173) is also known as MPYS, mediator of interferon regulatory factor 3 (IRF3) activation (MITA), endoplasmic reticulum interferon stimulator (ERIS), stimulator of interferon genes (STING), and stimulator of interferon response cyclic guanosine monophosphate‐adenosine monophosphate (cGAMP) interactor 1 (STING1). Human TMEM173 is located on chromosome 5q31.2, contains eight exons, and encodes a transmembrane protein that traffics between the endoplasmic reticulum and the Golgi in response to cytosolic dsDNA, acting as an adaptor protein that mediates the production of type I interferon. 10 TMEM173 plays an important role in the cross‐talk between innate immunity, inflammation, autophagy, and cell death in response to invasive microbial pathogens or endogenous host damage. 11 Studies have shown that TMEM173 is involved in the disease progression of multiple viruses that infect the human body, but the relationship between EV71 and TMEM173 is not clear. The TMEM173 gene has rs7447927, rs13166214, rs55792153, and other single nucleotide polymorphism (SNP) loci, but only the rs7447927 locus has been reported to be associated with oesophageal cancer, dose‐related telomere damage, and other diseases. Therefore, this study aimed to explore the relationship between the TMEM173 rs7447927 locus and EV71 infection susceptibility and severity.
null
null
RESULTS
Study population A total of 535 controls (ages 5.16 [2.79–8.00] years, 268 males) and 504 EV71‐infected patients (ages 5.77 [3.62–7.84] years, 247 males) were examined. There were no significant differences in age (Z = 1.840, p = .066) or gender (χ2 = 0.001, p = .970) between EV71‐infected patients and controls. The EV71‐infected patients were divided into two groups: 169 severely infected patients (ages 5.66 [3.50–7.51] years, 89 males) and 328 mildly infected patients (ages 5.77 [3.62–7.84] years, 158 males). No significant differences in age (Z = 0.417, p = .677) or gender (χ2 = 0.900, p = .343) were observed between patients with severe or mild EV71 infections. Distribution of genotypes and alleles of TMEM173 The genotypic and allelic distributions of each group obeyed Hardy–Weinberg equilibrium (p > .05). EV71‐infected patients had significantly higher frequencies of the GG genotype (p < .001), the GG + GC genotypes (p = .001), and the G allele (p = .001) than the controls (Table 2). The same result was obtained when comparing severely infected EV71 patients with mildly infected patients (p < .001, p < .001, and p = .001, respectively) (Table 3). Genotype and allele frequencies of the TMEM173 rs7447927 polymorphism in EV71‐infected patients and controls Abbreviations: CI, confidence interval; OR, odds ratio. Genotype and allele frequencies of the TMEM173 rs7447927 polymorphism in mild and severe EV71‐infected cases Abbreviations: CI, confidence interval; OR, odds ratio. Analysis of clinical features Among EV71‐infected patients, those with the GG genotype had higher WBC counts (p < .001) and C‐reactive protein (CRP) levels (p < .001), and a longer fever duration (p = .002). There were no significant differences in the serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), cardiac creatine kinase‐MB fraction (CK‐MB), or BG among the different genotypes.
RESULTS

Study population
A total of 535 controls (ages 5.16 [2.79–8.00] years, 268 males) and 504 EV71-infected patients (ages 5.77 [3.62–7.84] years, 247 males) were examined. There were no significant differences in age (Z = 1.840, p = .066) or gender (χ² = 0.001, p = .970) between EV71-infected patients and controls. The EV71-infected patients were divided into two groups: 169 severely infected patients (ages 5.66 [3.50–7.51] years, 89 males) and 328 mildly infected patients (ages 5.77 [3.62–7.84] years, 158 males). No significant differences in age (Z = 0.417, p = .677) or gender (χ² = 0.900, p = .343) were observed between patients with severe and mild EV71 infections.

Distribution of genotypes and alleles of TMEM173
The genotypic and allelic distributions of each group obeyed Hardy–Weinberg equilibrium (p > .05). EV71-infected patients had significantly higher frequencies of the GG genotype (p < .001), the GG + GC genotypes (p = .001), and the G allele (p = .001) than the controls (Table 2: genotype and allele frequencies of the TMEM173 rs7447927 polymorphism in EV71-infected patients and controls). The same result was obtained when comparing severely infected patients with mildly infected patients (p < .001, p < .001, and p = .001, respectively) (Table 3: genotype and allele frequencies of the TMEM173 rs7447927 polymorphism in mild and severe EV71-infected cases). Abbreviations for both tables: CI, confidence interval; OR, odds ratio.
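The ORs behind these comparisons were obtained by logistic regression. As one plausible reconstruction, the sketch below expands hypothetical genotype counts into per-subject records and fits a logistic model with additive coding of the G allele using statsmodels; the counts and the additive coding are illustrative assumptions, not the article's exact specification.

import numpy as np
import statsmodels.api as sm

# Expand hypothetical genotype counts into per-subject records and fit the
# kind of logistic regression used for the ORs (additive G-allele coding).
# The counts are placeholders; the real numbers are in Table 2.
def expand(counts, label):
    g_dose = np.repeat([2, 1, 0], counts)   # GG=2, GC=1, CC=0 copies of G
    y = np.full(g_dose.shape, label)
    return g_dose, y

g_case, y_case = expand([130, 250, 124], 1)  # cases (assumed counts)
g_ctrl, y_ctrl = expand([90, 255, 190], 0)   # controls (assumed counts)

X = sm.add_constant(np.concatenate([g_case, g_ctrl]).astype(float))
y = np.concatenate([y_case, y_ctrl])

fit = sm.Logit(y, X).fit(disp=0)
or_per_allele = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"OR per G allele = {or_per_allele:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f}), p = {fit.pvalues[1]:.4f}")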
Analysis of clinical features
Among EV71-infected patients, those with the GG genotype had higher white blood cell (WBC) counts (p < .001), higher C-reactive protein (CRP) levels (p < .001), and longer fever duration (p = .002). There were no significant differences in serum alanine aminotransferase (ALT), aspartate aminotransferase (AST), cardiac creatine kinase-MB fraction (CK-MB), or blood glucose (BG) among the genotypes, nor in the frequencies of vomiting, seizures, mental status changes, brain magnetic resonance imaging (MRI) abnormalities, or electroencephalography (EEG) abnormalities among children with the GG, GC, and CC genotypes (Table 4: characteristics of the EV71-infected group by genotype; groups compared using the 2 × 3 χ² test, ANOVA [mean ± SD], or the Wilcoxon rank-sum test [median, 25th–75th percentiles]). In severely infected cases, children with the GG genotype had higher WBC (p < .001), CRP (p < .001), ALT (p = .049), AST (p = .041), and BG (p < .001) levels and longer fever duration (p < .001). In addition, the percentages of vomiting (p = .017), mental status changes (p = .045), and EEG abnormalities (p = .041) were significantly higher in patients with the GG genotype than in patients with the other two genotypes. There were no significant differences in gender, age, or EV71 load among the genotypes in either the mild or the severe infection group (Table 5: characteristics of the severe EV71 infection group by genotype; groups compared using the χ² test, ANOVA, or the Wilcoxon rank-sum test, as in Table 4).

Correlation between the TMEM173 rs7447927 polymorphism and serum IFN-α level
IFN-α serum levels were noticeably higher in EV71-infected patients (89.34 ± 7.45 pg/ml, p < .001) than in uninfected children (57.69 ± 3.57 pg/ml) (Figure 2A), and significantly higher in severely infected cases (95.92 ± 6.13 pg/ml, p < .001) than in mildly infected cases (83.92 ± 2.32 pg/ml) (Figure 2B). In the infected group, the serum IFN-α concentration in patients with the GG genotype (93.88 ± 9.28 pg/ml) was significantly higher than in patients with the GC (89.38 ± 6.07 pg/ml; p = .001) and CC (85.08 ± 4.47 pg/ml; p < .001) genotypes, with no obvious difference among the genotypes in controls (Figure 2C). In severely infected patients, serum IFN-α in those with the GG genotype (102.31 ± 3.84 pg/ml) was significantly higher than in those with the GC (94.20 ± 3.90 pg/ml; p < .001) and CC (89.93 ± 5.58 pg/ml; p < .001) genotypes, with no significant difference among the genotypes in mildly infected patients (Figure 2D).

Figure 2. IFN-α levels measured in peripheral blood from 99 controls, 73 mild EV71-infected cases, and 60 severe EV71-infected cases; values expressed as mean ± SD. (A) IFN-α in EV71-infected cases versus controls (p < .001). (B) IFN-α in severe versus mild cases (p < .001). (C) IFN-α by genotype in EV71-infected patients: GG higher than GC and CC (p < .01 and p < .001, respectively); no difference among genotypes in controls. (D) IFN-α by genotype in severe cases: GG higher than GC and CC (both p < .001); no difference among genotypes in mild cases. **p < .01, ***p < .001.
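The genotype comparisons of IFN-α rest on one-way ANOVA followed by Student's t tests. The sketch below demonstrates those tests on synthetic values drawn around the means and SDs reported for severe cases; the group sizes and the simulated data are assumptions, since individual-level measurements are not published.

import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)

# Synthetic IFN-alpha values (pg/ml) drawn around the reported means/SDs for
# severe cases; group sizes are assumed. These stand-ins only demonstrate
# the tests, not the actual data.
gg = rng.normal(102.31, 3.84, 25)
gc = rng.normal(94.20, 3.90, 20)
cc = rng.normal(89.93, 5.58, 15)

f_stat, p_overall = f_oneway(gg, gc, cc)  # one-way ANOVA across genotypes
print(f"ANOVA: F={f_stat:.1f}, p={p_overall:.2e}")

# Pairwise Student's t tests, as in the article's follow-up comparisons
for name, grp in (("GG vs GC", gc), ("GG vs CC", cc)):
    t_stat, p = ttest_ind(gg, grp)
    print(f"{name}: t={t_stat:.2f}, p={p:.2e}")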
[ "INTRODUCTION", "Case selection", "Genomic DNA isolation and genotyping of TMEM173 rs7447927\n", "Detection of EV71 loading", "Estimation of interferon‐α (IFN‐α) levels", "Statistical analysis", "Study population", "Distribution of genotypes and alleles of TMEM173\n", "Analysis of clinical features", "Correlation between the TMEM173 rs7447927 polymorphism and serum IFN‐a level", "AUTHOR CONTRIBUTIONS", "ETHICS STATEMENT" ]
[ "Enterovirus 71 (EV71), a single positive‐sense‐strand, neurovirulent RNA virus, is a well‐known pathogen that causes hand‐foot‐and‐mouth disease (HFMD).\n1\n, \n2\n Most children infected with EV71 manifest only mild symptoms (fever and rashes on the mouth, hands, feet, and hip); however, a few children have symptoms that are accompanied by serious central nervous system complications such as aseptic meningitis, encephalomyelitis, brainstem encephalitis, flaccid paralysis, and neurogenic pulmonary edema.\n3\n In 2008, a large‐scale Asia‐Pacific pandemic in China led to the infection of 490,000 children with EV71, where of 126 patients died.\n4\n The clinical manifestations of the disease in infected children differ, depending on virulence and host immunity. Recent studies have mainly focused on exploring the relationship between EV71 infection and host immunity at the molecular level, and it has been found that human gene polymorphisms, such as those of CPT2, OAS2, and IL‐17F, are related to infection susceptibility and severity.\n5\n, \n6\n, \n7\n, \n8\n, \n9\n\n\nThe transmembrane protein 173 (TMEM173), also known as MPYS, mediates the interferon regulatory factor 3 (IRF3) activation (MITA), endoplasmic reticulum interferon stimulator (ERIS), stimulator of interferon genes (STING), and stimulator of interferon response cyclic guanosine monophosphate‐adenosine monophosphate (cGAMP) interactor 1 (STING1). Human TMEM173 is located on chromosome 5q31.2, contains eight exons, and encodes a transmembrane protein between the endoplasmic reticulum and Golgi in response to the presence of cytosolic dsDNA as well as an adaptor protein that mediates the production of type I interferon.\n10\n\nTMEM173 plays an important role in the cross‐reaction between innate immunity, inflammation, autophagy, and cell death in response to invasive microbial pathogens or endogenous host damage.\n11\n Studies have shown that TMEM173 is involved in the disease progression of multiple viruses that infect the human body, but the relationship between EV71 and TMEM173 is not clear. TMEM173 gene has rs7447927, rs13166214, rs55792153, and other single nucleotide polymorphisms (SNP) loci, but only the rs7447927 locus has been reported to be associated with oesophageal cancer, dose‐related telomere damage, and other diseases. Therefore, this study aimed to explore the relationship between the TMEM173 rs7447927 locus and EV71 infection susceptibility and severity.", "We examined 535 healthy children undergoing physical examination and 504 EV71‐infected children in the Affiliated Hospital of Qingdao University, Qingdao Women & Children's Hospital, and Affiliated Hospital of Jining Medical University between 2013 and 2019. Seven cases in the infected group were excluded due to underlying diseases: congenital heart disease (n = 1), epilepsy (n = 3), and other immune system diseases (n = 3). EV71 infection was confirmed by both an analysis of the clinical features and a reverse transcription polymerase chain reaction (RT‐PCR). RNA was extracted from stool specimens obtained from the patients on the day after admission. Clinical and laboratory data were collected. Inclusion/exclusion criteria in our study are shown in Figure 1. 
The severity of EV71 infection was clarified according to the guidelines of HFMD diagnosis and treatment (2010) by the Chinese Ministry of Health.\n12\n\n\nFlowchart demonstrating the patients included in the study as well as the inclusion and exclusion criteria", "A commercial kit (Qiagen) was used to extract genomic DNA from peripheral blood. An iMLDR technique impoldered by Genesky Biotechnologies Inc. was used to detect the TMEM173 rs7447927 polymorphism genotype. The product was 201 bp in size. For rs7447927 gene polymorphism analysis, PCR was performed using the forward primer 5ʹ‐CAGGGCTAGGCATCAAGGGAGT−3ʹ and reverse primer 5ʹ‐ GACCCTTTGGGGGCTAGGAGAG −3ʹ. The PCR procedure was as follows: 95°C for 2 min; 11 cycles at 94°C for 20 s, 65–0.5°C/cycle for 40 s; 72°C for 90 s; 24 cycles at 94°C for 20 s, 59°C for 30 s; 72°C for 90 s; followed by 72°C for 2 min and held at 4°C. The PCR products were purified by digestion with 1U shrimp alkaline phosphatase at 37°C for 1 h and at 75°C for 15 min. The ligation reaction mixture contained 10× ligase buffer 2 μl, Taq DNA ligase 0.2 μl, probe mixture 1 μl, and the purified PCR product mixture 3 μl. In a double connection reaction, each site contained two 5′ ends of allele‐specific probes, followed by the 3′ end of an allele‐specific probe. Each allele‐specific interlinkage product was distinguished by its immune fluorescence, but different loci were distinguished by the different lengths added to the 3′ end of the allele‐specific probe. The probes used were as follows:\nrs1946519FC: TCTCTCGGGTCAATTCGTCCTTGCAGGGAGGCTAGGTGGTG, rs1946519FG: TGTTTGGGCCGGTTAGTGCAGGGAGGCTAGGTGGTGG, and rs1946519FP: ACCAGGTACCGGAGAGTGTGCTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT.\nThe ligation cycling step consisted of 35 cycles at 94°C for 1 min, 56°C for 4 min, followed by holding at 4°C. Alleles were genotyped using a 3130xl Genetic Analyzer (ABI), and the raw data were analysed using Gene Mapper 4.0 (Applied Biosystems) (Table 1).\nPrimers and PCR conditions", "Peripheral blood lymphocytes were extracted according to the manufacturer's instructions (Solarbio) on the second day after admission. Total RNA was purified by TRIzol reagent (Invitrogen) in accordance with the manufacturer's protocol, and the concentration and quality of the RNA were assessed by A QuickDrop spectrophotometer. The SYBR‑Green Real‑Time PCR kit (Vazyme) was used to reverse‐transcribe the mRNA into complementary DNA (cDNA). A standard curve for calculating the copy numbers of viral RNA was built by a series of dilutions (1 × 107, 1 × 106, 1 × 105, and 1 × 104 copies/μl of a DNA fragment) in various samples. The ligation cycling step consisted of 95°C for 15 min and 40 cycles of thermal cycling at 95°C for 10 s and 60°C for 60 s. Quantitative RT‐PCR was performed using a Linegene9660 system (Thermo Fisher Scientific). The specific primers used were as follows: EV71‐S: GTTCTTAACTCACATAGCA, EV71‐A: TTGCAAAAACTGAGGGTT (Table 1).", "Plasma IFN‐α levels were detected using ELISA kits (Thermo Fisher) in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. The sensitivity of IFN‐α detection was 3.2 pg/ml. Each sample was repeated three times, and these values were in the linear portion of the standard curve.", "A Hardy–Weinberg equilibrium (HWE) was used to test the proportion of genotypes in controls. The frequencies of genotypes and alleles in different groups were compared using the χ\n2 test. 
The relationships between rs7447927 polymorphism and EV71 susceptibility and severity of infection were evaluated by calculating the odds ratio (OR) and 95% confidence intervals (95% CI) using logistic regression. Parameters of normal distribution were expressed as means ± standard deviation (SD) and analysed via one‐way analysis of variance (ANOVA), followed by Student's t test. Nonparametric data were analysed using the Kruskal–Wallis test and expressed as median values (25th–75th percentile). All statistical analyses were performed using SPSS 21.0 (IBM SPSS software), and significance was set at p < .05.", "A total of 535 controls (ages 5.16 [2.79–8.00] years, 268 males) and 504 EV71‐infected patients (ages 5.77 [3.62–7.84] years, 247 males) were examined. There were no significant differences in age (Z = 1.840, p = .066) or gender (χ2 = 0.001, p = .970) between EV71‐infected patients and controls. The EV71‐infected patients were divided into two groups: 169 severely infected patients (ages 5.66 [3.50–7.51] years, 89 males) and 328 mildly infected patients (ages 5.77 [3.62–7.84] years, 158 males). No significant differences in age (Z = 0.417, p = .677) or gender (χ\n2 = 0.900, p = .343) were observed between patients with severe or mild EV71 infections.", "The genotypic and allelic distributions of each group obeyed Hardy–Weinberg equilibrium (p > .05). EV71‐infected patients had a significantly higher frequency of the GG (p < .001), GG + GC genotypes (p = .001), and G allele (p = .001), than the controls (Table 2). The same result was obtained when comparing severely infected EV71 patients with mildly infected patients (p < .001, p < .001, and p = .001, respectively) (Table 3).\nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in EV71‐infected patients and controls\nAbbreviations: CI, confidence interval; OR, odds ratio. \nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in mild and severe EV71‐infected cases\nAbbreviations: CI, confidence interval; OR, odds ratio.", "Among EV71‐infected patients, those with the GG genotype had higher WBC (p < .001), C‐reactive proteins (CRP) (p < .001), and longer fever duration (p = .002). There were no significant differences in the serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), cardiac creatine kinase‐MB fraction (CK‐MB), or BG among the different genotypes. There were no significant differences in the frequencies of vomiting, seizure, spirit change, brain magnetic resonance imaging (MRI) abnormalities, or abnormal electroencephalography (EEG) frequencies among children with GG, GC, and CC genotypes (Table 4). In severely infected cases, children with the GG genotype had higher WBC (p < .001), CRP (p < .001), ALT (p = .049), AST (p = .041), BG (p < .001) levels and longer fever duration (p < .001). In addition, the percentages of vomiting (p = .017), spirit change (p = .045), and EEG abnormalities (p = .041) in patients with the GG genotype were significantly higher than those in patients with the other two genotypes. 
There were no significant differences in gender, age, and EV71 load among the different genotypes in both the mild and severe infection groups (Table 5).\nCharacteristic of EV71‐infected group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP, C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using 2 × 3 χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).\nCharacteristic of severe EV71 infection group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP,C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).", "IFN‐α serum levels were noticeably higher in EV71‐infected patients (89.34 ± 7.45 pg/ml, p < .001) compared with those in uninfected children (57.69 ± 3.57 pg/ml) (Figure 2A). IFN‐α concentrations were significantly higher in severely infected cases (95.92 ± 6.13 pg/ml, p < .001) compared with those in mildly infected cases (83.92 ± 2.32 pg/ml) (Figure 2B). In the infected group, the serum concentration of IFN‐α in patients with the GG genotype (93.88 ± 9.28 pg/ml, p = .001 and p < .001, respectively) was significantly higher than those in patients with GC (89.38 ± 6.07 pg/ml) and CC (85.08 ± 4.47 pg/ml) genotypes, but there was no obvious difference among the genotypes in controls (Figure 2C). In severely infected patients, the serum level of IFN‐α in patients with the GG genotype (102.31 ± 3.84 pg/ml, p < .001 and p < .001, respectively) was significantly higher than in patients with the GC (94.20 ± 3.90 pg/ml) and CC genotypes (89.93 ± 5.58 pg/ml), but there was no significant difference among the genotypes in mildly infected patients (Figure 2D).\nLevel of interferon‐α (IFN‐α) in peripheral blood lymphocytes were detected in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. Values expressed as mean ± SD. (A) The level of IFN‐α in peripheral blood lymphocytes in EV71‐infected cases was significantly higher than in controls (p <  0.001). (B) The level of IFN‐α in peripheral blood lymphocytes in severe EV71‐infected cases was significantly higher than in mild cases (p < 0.001). (C). The level of IFN‐α in peripheral blood lymphocytes in GG genotypes in EV71 patients were significantly higher than in GC and CC genotypes patients (p < .01 and p < .001, respectively), but there was no difference in controls. (D). The level of IFN‐α in peripheral blood lymphocytes in GG genotypes in severe EV71‐infected patients were significantly higher than in GC and CC genotypes patients (p < .001 and p < .001, respectively), but there was no difference in mild cases. **p < .01, ***p < .001.", "\nJie Song: Data curation, writing – original draft, visualization, investigation. Yedan Liu: Data curation, methodology, software, writing – review & editing. Ya Guo: Validation, formal analysis, data curation, writing – review & editing. 
Peipei Liu: Validation, data curation, visualization. Fei Li: Validation, resources, data curation. Chengqing Yang: Validation, investigation, resources. Fan Fan: Validation, writing – review & editing. Zongbo Chen: Validation, writing – review & editing, supervision, project administration, funding acquisition. All authors contributed to the writing of the final manuscript. All members of our Study Team contributed to the management or administration of the trial.", "All the procedures in this study were in accordance with the standards of the Ethics Review Committee of the Affiliated Hospital of Qingdao University. We obtained informed consent from the children's parents." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Case selection", "Genomic DNA isolation and genotyping of TMEM173 rs7447927\n", "Detection of EV71 loading", "Estimation of interferon‐α (IFN‐α) levels", "Statistical analysis", "RESULTS", "Study population", "Distribution of genotypes and alleles of TMEM173\n", "Analysis of clinical features", "Correlation between the TMEM173 rs7447927 polymorphism and serum IFN‐a level", "DISCUSSION", "AUTHOR CONTRIBUTIONS", "CONFLICT OF INTEREST", "ETHICS STATEMENT" ]
[ "Enterovirus 71 (EV71), a single positive‐sense‐strand, neurovirulent RNA virus, is a well‐known pathogen that causes hand‐foot‐and‐mouth disease (HFMD).\n1\n, \n2\n Most children infected with EV71 manifest only mild symptoms (fever and rashes on the mouth, hands, feet, and hip); however, a few children have symptoms that are accompanied by serious central nervous system complications such as aseptic meningitis, encephalomyelitis, brainstem encephalitis, flaccid paralysis, and neurogenic pulmonary edema.\n3\n In 2008, a large‐scale Asia‐Pacific pandemic in China led to the infection of 490,000 children with EV71, where of 126 patients died.\n4\n The clinical manifestations of the disease in infected children differ, depending on virulence and host immunity. Recent studies have mainly focused on exploring the relationship between EV71 infection and host immunity at the molecular level, and it has been found that human gene polymorphisms, such as those of CPT2, OAS2, and IL‐17F, are related to infection susceptibility and severity.\n5\n, \n6\n, \n7\n, \n8\n, \n9\n\n\nThe transmembrane protein 173 (TMEM173), also known as MPYS, mediates the interferon regulatory factor 3 (IRF3) activation (MITA), endoplasmic reticulum interferon stimulator (ERIS), stimulator of interferon genes (STING), and stimulator of interferon response cyclic guanosine monophosphate‐adenosine monophosphate (cGAMP) interactor 1 (STING1). Human TMEM173 is located on chromosome 5q31.2, contains eight exons, and encodes a transmembrane protein between the endoplasmic reticulum and Golgi in response to the presence of cytosolic dsDNA as well as an adaptor protein that mediates the production of type I interferon.\n10\n\nTMEM173 plays an important role in the cross‐reaction between innate immunity, inflammation, autophagy, and cell death in response to invasive microbial pathogens or endogenous host damage.\n11\n Studies have shown that TMEM173 is involved in the disease progression of multiple viruses that infect the human body, but the relationship between EV71 and TMEM173 is not clear. TMEM173 gene has rs7447927, rs13166214, rs55792153, and other single nucleotide polymorphisms (SNP) loci, but only the rs7447927 locus has been reported to be associated with oesophageal cancer, dose‐related telomere damage, and other diseases. Therefore, this study aimed to explore the relationship between the TMEM173 rs7447927 locus and EV71 infection susceptibility and severity.", "Case selection We examined 535 healthy children undergoing physical examination and 504 EV71‐infected children in the Affiliated Hospital of Qingdao University, Qingdao Women & Children's Hospital, and Affiliated Hospital of Jining Medical University between 2013 and 2019. Seven cases in the infected group were excluded due to underlying diseases: congenital heart disease (n = 1), epilepsy (n = 3), and other immune system diseases (n = 3). EV71 infection was confirmed by both an analysis of the clinical features and a reverse transcription polymerase chain reaction (RT‐PCR). RNA was extracted from stool specimens obtained from the patients on the day after admission. Clinical and laboratory data were collected. Inclusion/exclusion criteria in our study are shown in Figure 1. 
The severity of EV71 infection was clarified according to the guidelines of HFMD diagnosis and treatment (2010) by the Chinese Ministry of Health.\n12\n\n\nFlowchart demonstrating the patients included in the study as well as the inclusion and exclusion criteria\nWe examined 535 healthy children undergoing physical examination and 504 EV71‐infected children in the Affiliated Hospital of Qingdao University, Qingdao Women & Children's Hospital, and Affiliated Hospital of Jining Medical University between 2013 and 2019. Seven cases in the infected group were excluded due to underlying diseases: congenital heart disease (n = 1), epilepsy (n = 3), and other immune system diseases (n = 3). EV71 infection was confirmed by both an analysis of the clinical features and a reverse transcription polymerase chain reaction (RT‐PCR). RNA was extracted from stool specimens obtained from the patients on the day after admission. Clinical and laboratory data were collected. Inclusion/exclusion criteria in our study are shown in Figure 1. The severity of EV71 infection was clarified according to the guidelines of HFMD diagnosis and treatment (2010) by the Chinese Ministry of Health.\n12\n\n\nFlowchart demonstrating the patients included in the study as well as the inclusion and exclusion criteria\nGenomic DNA isolation and genotyping of TMEM173 rs7447927\n A commercial kit (Qiagen) was used to extract genomic DNA from peripheral blood. An iMLDR technique impoldered by Genesky Biotechnologies Inc. was used to detect the TMEM173 rs7447927 polymorphism genotype. The product was 201 bp in size. For rs7447927 gene polymorphism analysis, PCR was performed using the forward primer 5ʹ‐CAGGGCTAGGCATCAAGGGAGT−3ʹ and reverse primer 5ʹ‐ GACCCTTTGGGGGCTAGGAGAG −3ʹ. The PCR procedure was as follows: 95°C for 2 min; 11 cycles at 94°C for 20 s, 65–0.5°C/cycle for 40 s; 72°C for 90 s; 24 cycles at 94°C for 20 s, 59°C for 30 s; 72°C for 90 s; followed by 72°C for 2 min and held at 4°C. The PCR products were purified by digestion with 1U shrimp alkaline phosphatase at 37°C for 1 h and at 75°C for 15 min. The ligation reaction mixture contained 10× ligase buffer 2 μl, Taq DNA ligase 0.2 μl, probe mixture 1 μl, and the purified PCR product mixture 3 μl. In a double connection reaction, each site contained two 5′ ends of allele‐specific probes, followed by the 3′ end of an allele‐specific probe. Each allele‐specific interlinkage product was distinguished by its immune fluorescence, but different loci were distinguished by the different lengths added to the 3′ end of the allele‐specific probe. The probes used were as follows:\nrs1946519FC: TCTCTCGGGTCAATTCGTCCTTGCAGGGAGGCTAGGTGGTG, rs1946519FG: TGTTTGGGCCGGTTAGTGCAGGGAGGCTAGGTGGTGG, and rs1946519FP: ACCAGGTACCGGAGAGTGTGCTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT.\nThe ligation cycling step consisted of 35 cycles at 94°C for 1 min, 56°C for 4 min, followed by holding at 4°C. Alleles were genotyped using a 3130xl Genetic Analyzer (ABI), and the raw data were analysed using Gene Mapper 4.0 (Applied Biosystems) (Table 1).\nPrimers and PCR conditions\nA commercial kit (Qiagen) was used to extract genomic DNA from peripheral blood. An iMLDR technique impoldered by Genesky Biotechnologies Inc. was used to detect the TMEM173 rs7447927 polymorphism genotype. The product was 201 bp in size. For rs7447927 gene polymorphism analysis, PCR was performed using the forward primer 5ʹ‐CAGGGCTAGGCATCAAGGGAGT−3ʹ and reverse primer 5ʹ‐ GACCCTTTGGGGGCTAGGAGAG −3ʹ. 
The PCR procedure was as follows: 95°C for 2 min; 11 cycles at 94°C for 20 s, 65–0.5°C/cycle for 40 s; 72°C for 90 s; 24 cycles at 94°C for 20 s, 59°C for 30 s; 72°C for 90 s; followed by 72°C for 2 min and held at 4°C. The PCR products were purified by digestion with 1U shrimp alkaline phosphatase at 37°C for 1 h and at 75°C for 15 min. The ligation reaction mixture contained 10× ligase buffer 2 μl, Taq DNA ligase 0.2 μl, probe mixture 1 μl, and the purified PCR product mixture 3 μl. In a double connection reaction, each site contained two 5′ ends of allele‐specific probes, followed by the 3′ end of an allele‐specific probe. Each allele‐specific interlinkage product was distinguished by its immune fluorescence, but different loci were distinguished by the different lengths added to the 3′ end of the allele‐specific probe. The probes used were as follows:\nrs1946519FC: TCTCTCGGGTCAATTCGTCCTTGCAGGGAGGCTAGGTGGTG, rs1946519FG: TGTTTGGGCCGGTTAGTGCAGGGAGGCTAGGTGGTGG, and rs1946519FP: ACCAGGTACCGGAGAGTGTGCTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT.\nThe ligation cycling step consisted of 35 cycles at 94°C for 1 min, 56°C for 4 min, followed by holding at 4°C. Alleles were genotyped using a 3130xl Genetic Analyzer (ABI), and the raw data were analysed using Gene Mapper 4.0 (Applied Biosystems) (Table 1).\nPrimers and PCR conditions\nDetection of EV71 loading Peripheral blood lymphocytes were extracted according to the manufacturer's instructions (Solarbio) on the second day after admission. Total RNA was purified by TRIzol reagent (Invitrogen) in accordance with the manufacturer's protocol, and the concentration and quality of the RNA were assessed by A QuickDrop spectrophotometer. The SYBR‑Green Real‑Time PCR kit (Vazyme) was used to reverse‐transcribe the mRNA into complementary DNA (cDNA). A standard curve for calculating the copy numbers of viral RNA was built by a series of dilutions (1 × 107, 1 × 106, 1 × 105, and 1 × 104 copies/μl of a DNA fragment) in various samples. The ligation cycling step consisted of 95°C for 15 min and 40 cycles of thermal cycling at 95°C for 10 s and 60°C for 60 s. Quantitative RT‐PCR was performed using a Linegene9660 system (Thermo Fisher Scientific). The specific primers used were as follows: EV71‐S: GTTCTTAACTCACATAGCA, EV71‐A: TTGCAAAAACTGAGGGTT (Table 1).\nPeripheral blood lymphocytes were extracted according to the manufacturer's instructions (Solarbio) on the second day after admission. Total RNA was purified by TRIzol reagent (Invitrogen) in accordance with the manufacturer's protocol, and the concentration and quality of the RNA were assessed by A QuickDrop spectrophotometer. The SYBR‑Green Real‑Time PCR kit (Vazyme) was used to reverse‐transcribe the mRNA into complementary DNA (cDNA). A standard curve for calculating the copy numbers of viral RNA was built by a series of dilutions (1 × 107, 1 × 106, 1 × 105, and 1 × 104 copies/μl of a DNA fragment) in various samples. The ligation cycling step consisted of 95°C for 15 min and 40 cycles of thermal cycling at 95°C for 10 s and 60°C for 60 s. Quantitative RT‐PCR was performed using a Linegene9660 system (Thermo Fisher Scientific). The specific primers used were as follows: EV71‐S: GTTCTTAACTCACATAGCA, EV71‐A: TTGCAAAAACTGAGGGTT (Table 1).\nEstimation of interferon‐α (IFN‐α) levels Plasma IFN‐α levels were detected using ELISA kits (Thermo Fisher) in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. The sensitivity of IFN‐α detection was 3.2 pg/ml. 
Each sample was repeated three times, and these values were in the linear portion of the standard curve.\nPlasma IFN‐α levels were detected using ELISA kits (Thermo Fisher) in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. The sensitivity of IFN‐α detection was 3.2 pg/ml. Each sample was repeated three times, and these values were in the linear portion of the standard curve.\nStatistical analysis A Hardy–Weinberg equilibrium (HWE) was used to test the proportion of genotypes in controls. The frequencies of genotypes and alleles in different groups were compared using the χ\n2 test. The relationships between rs7447927 polymorphism and EV71 susceptibility and severity of infection were evaluated by calculating the odds ratio (OR) and 95% confidence intervals (95% CI) using logistic regression. Parameters of normal distribution were expressed as means ± standard deviation (SD) and analysed via one‐way analysis of variance (ANOVA), followed by Student's t test. Nonparametric data were analysed using the Kruskal–Wallis test and expressed as median values (25th–75th percentile). All statistical analyses were performed using SPSS 21.0 (IBM SPSS software), and significance was set at p < .05.\nA Hardy–Weinberg equilibrium (HWE) was used to test the proportion of genotypes in controls. The frequencies of genotypes and alleles in different groups were compared using the χ\n2 test. The relationships between rs7447927 polymorphism and EV71 susceptibility and severity of infection were evaluated by calculating the odds ratio (OR) and 95% confidence intervals (95% CI) using logistic regression. Parameters of normal distribution were expressed as means ± standard deviation (SD) and analysed via one‐way analysis of variance (ANOVA), followed by Student's t test. Nonparametric data were analysed using the Kruskal–Wallis test and expressed as median values (25th–75th percentile). All statistical analyses were performed using SPSS 21.0 (IBM SPSS software), and significance was set at p < .05.", "We examined 535 healthy children undergoing physical examination and 504 EV71‐infected children in the Affiliated Hospital of Qingdao University, Qingdao Women & Children's Hospital, and Affiliated Hospital of Jining Medical University between 2013 and 2019. Seven cases in the infected group were excluded due to underlying diseases: congenital heart disease (n = 1), epilepsy (n = 3), and other immune system diseases (n = 3). EV71 infection was confirmed by both an analysis of the clinical features and a reverse transcription polymerase chain reaction (RT‐PCR). RNA was extracted from stool specimens obtained from the patients on the day after admission. Clinical and laboratory data were collected. Inclusion/exclusion criteria in our study are shown in Figure 1. The severity of EV71 infection was clarified according to the guidelines of HFMD diagnosis and treatment (2010) by the Chinese Ministry of Health.\n12\n\n\nFlowchart demonstrating the patients included in the study as well as the inclusion and exclusion criteria", "A commercial kit (Qiagen) was used to extract genomic DNA from peripheral blood. An iMLDR technique impoldered by Genesky Biotechnologies Inc. was used to detect the TMEM173 rs7447927 polymorphism genotype. The product was 201 bp in size. For rs7447927 gene polymorphism analysis, PCR was performed using the forward primer 5ʹ‐CAGGGCTAGGCATCAAGGGAGT−3ʹ and reverse primer 5ʹ‐ GACCCTTTGGGGGCTAGGAGAG −3ʹ. 
The PCR procedure was as follows: 95°C for 2 min; 11 cycles at 94°C for 20 s, 65–0.5°C/cycle for 40 s; 72°C for 90 s; 24 cycles at 94°C for 20 s, 59°C for 30 s; 72°C for 90 s; followed by 72°C for 2 min and held at 4°C. The PCR products were purified by digestion with 1U shrimp alkaline phosphatase at 37°C for 1 h and at 75°C for 15 min. The ligation reaction mixture contained 10× ligase buffer 2 μl, Taq DNA ligase 0.2 μl, probe mixture 1 μl, and the purified PCR product mixture 3 μl. In a double connection reaction, each site contained two 5′ ends of allele‐specific probes, followed by the 3′ end of an allele‐specific probe. Each allele‐specific interlinkage product was distinguished by its immune fluorescence, but different loci were distinguished by the different lengths added to the 3′ end of the allele‐specific probe. The probes used were as follows:\nrs1946519FC: TCTCTCGGGTCAATTCGTCCTTGCAGGGAGGCTAGGTGGTG, rs1946519FG: TGTTTGGGCCGGTTAGTGCAGGGAGGCTAGGTGGTGG, and rs1946519FP: ACCAGGTACCGGAGAGTGTGCTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT.\nThe ligation cycling step consisted of 35 cycles at 94°C for 1 min, 56°C for 4 min, followed by holding at 4°C. Alleles were genotyped using a 3130xl Genetic Analyzer (ABI), and the raw data were analysed using Gene Mapper 4.0 (Applied Biosystems) (Table 1).\nPrimers and PCR conditions", "Peripheral blood lymphocytes were extracted according to the manufacturer's instructions (Solarbio) on the second day after admission. Total RNA was purified by TRIzol reagent (Invitrogen) in accordance with the manufacturer's protocol, and the concentration and quality of the RNA were assessed by A QuickDrop spectrophotometer. The SYBR‑Green Real‑Time PCR kit (Vazyme) was used to reverse‐transcribe the mRNA into complementary DNA (cDNA). A standard curve for calculating the copy numbers of viral RNA was built by a series of dilutions (1 × 107, 1 × 106, 1 × 105, and 1 × 104 copies/μl of a DNA fragment) in various samples. The ligation cycling step consisted of 95°C for 15 min and 40 cycles of thermal cycling at 95°C for 10 s and 60°C for 60 s. Quantitative RT‐PCR was performed using a Linegene9660 system (Thermo Fisher Scientific). The specific primers used were as follows: EV71‐S: GTTCTTAACTCACATAGCA, EV71‐A: TTGCAAAAACTGAGGGTT (Table 1).", "Plasma IFN‐α levels were detected using ELISA kits (Thermo Fisher) in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. The sensitivity of IFN‐α detection was 3.2 pg/ml. Each sample was repeated three times, and these values were in the linear portion of the standard curve.", "A Hardy–Weinberg equilibrium (HWE) was used to test the proportion of genotypes in controls. The frequencies of genotypes and alleles in different groups were compared using the χ\n2 test. The relationships between rs7447927 polymorphism and EV71 susceptibility and severity of infection were evaluated by calculating the odds ratio (OR) and 95% confidence intervals (95% CI) using logistic regression. Parameters of normal distribution were expressed as means ± standard deviation (SD) and analysed via one‐way analysis of variance (ANOVA), followed by Student's t test. Nonparametric data were analysed using the Kruskal–Wallis test and expressed as median values (25th–75th percentile). 
All statistical analyses were performed using SPSS 21.0 (IBM SPSS software), and significance was set at p < .05.", "Study population A total of 535 controls (ages 5.16 [2.79–8.00] years, 268 males) and 504 EV71‐infected patients (ages 5.77 [3.62–7.84] years, 247 males) were examined. There were no significant differences in age (Z = 1.840, p = .066) or gender (χ2 = 0.001, p = .970) between EV71‐infected patients and controls. The EV71‐infected patients were divided into two groups: 169 severely infected patients (ages 5.66 [3.50–7.51] years, 89 males) and 328 mildly infected patients (ages 5.77 [3.62–7.84] years, 158 males). No significant differences in age (Z = 0.417, p = .677) or gender (χ\n2 = 0.900, p = .343) were observed between patients with severe or mild EV71 infections.\nA total of 535 controls (ages 5.16 [2.79–8.00] years, 268 males) and 504 EV71‐infected patients (ages 5.77 [3.62–7.84] years, 247 males) were examined. There were no significant differences in age (Z = 1.840, p = .066) or gender (χ2 = 0.001, p = .970) between EV71‐infected patients and controls. The EV71‐infected patients were divided into two groups: 169 severely infected patients (ages 5.66 [3.50–7.51] years, 89 males) and 328 mildly infected patients (ages 5.77 [3.62–7.84] years, 158 males). No significant differences in age (Z = 0.417, p = .677) or gender (χ\n2 = 0.900, p = .343) were observed between patients with severe or mild EV71 infections.\nDistribution of genotypes and alleles of TMEM173\n The genotypic and allelic distributions of each group obeyed Hardy–Weinberg equilibrium (p > .05). EV71‐infected patients had a significantly higher frequency of the GG (p < .001), GG + GC genotypes (p = .001), and G allele (p = .001), than the controls (Table 2). The same result was obtained when comparing severely infected EV71 patients with mildly infected patients (p < .001, p < .001, and p = .001, respectively) (Table 3).\nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in EV71‐infected patients and controls\nAbbreviations: CI, confidence interval; OR, odds ratio. \nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in mild and severe EV71‐infected cases\nAbbreviations: CI, confidence interval; OR, odds ratio.\nThe genotypic and allelic distributions of each group obeyed Hardy–Weinberg equilibrium (p > .05). EV71‐infected patients had a significantly higher frequency of the GG (p < .001), GG + GC genotypes (p = .001), and G allele (p = .001), than the controls (Table 2). The same result was obtained when comparing severely infected EV71 patients with mildly infected patients (p < .001, p < .001, and p = .001, respectively) (Table 3).\nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in EV71‐infected patients and controls\nAbbreviations: CI, confidence interval; OR, odds ratio. \nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in mild and severe EV71‐infected cases\nAbbreviations: CI, confidence interval; OR, odds ratio.\nAnalysis of clinical features Among EV71‐infected patients, those with the GG genotype had higher WBC (p < .001), C‐reactive proteins (CRP) (p < .001), and longer fever duration (p = .002). There were no significant differences in the serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), cardiac creatine kinase‐MB fraction (CK‐MB), or BG among the different genotypes. 
There were no significant differences in the frequencies of vomiting, seizure, spirit change, brain magnetic resonance imaging (MRI) abnormalities, or abnormal electroencephalography (EEG) frequencies among children with GG, GC, and CC genotypes (Table 4). In severely infected cases, children with the GG genotype had higher WBC (p < .001), CRP (p < .001), ALT (p = .049), AST (p = .041), BG (p < .001) levels and longer fever duration (p < .001). In addition, the percentages of vomiting (p = .017), spirit change (p = .045), and EEG abnormalities (p = .041) in patients with the GG genotype were significantly higher than those in patients with the other two genotypes. There were no significant differences in gender, age, and EV71 load among the different genotypes in both the mild and severe infection groups (Table 5).\nCharacteristic of EV71‐infected group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP, C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using 2 × 3 χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).\nCharacteristic of severe EV71 infection group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP,C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).\nAmong EV71‐infected patients, those with the GG genotype had higher WBC (p < .001), C‐reactive proteins (CRP) (p < .001), and longer fever duration (p = .002). There were no significant differences in the serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), cardiac creatine kinase‐MB fraction (CK‐MB), or BG among the different genotypes. There were no significant differences in the frequencies of vomiting, seizure, spirit change, brain magnetic resonance imaging (MRI) abnormalities, or abnormal electroencephalography (EEG) frequencies among children with GG, GC, and CC genotypes (Table 4). In severely infected cases, children with the GG genotype had higher WBC (p < .001), CRP (p < .001), ALT (p = .049), AST (p = .041), BG (p < .001) levels and longer fever duration (p < .001). In addition, the percentages of vomiting (p = .017), spirit change (p = .045), and EEG abnormalities (p = .041) in patients with the GG genotype were significantly higher than those in patients with the other two genotypes. 
There were no significant differences in gender, age, and EV71 load among the different genotypes in both the mild and severe infection groups (Table 5).\nCharacteristic of EV71‐infected group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP, C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using 2 × 3 χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).\nCharacteristic of severe EV71 infection group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP,C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).\nCorrelation between the TMEM173 rs7447927 polymorphism and serum IFN‐a level IFN‐α serum levels were noticeably higher in EV71‐infected patients (89.34 ± 7.45 pg/ml, p < .001) compared with those in uninfected children (57.69 ± 3.57 pg/ml) (Figure 2A). IFN‐α concentrations were significantly higher in severely infected cases (95.92 ± 6.13 pg/ml, p < .001) compared with those in mildly infected cases (83.92 ± 2.32 pg/ml) (Figure 2B). In the infected group, the serum concentration of IFN‐α in patients with the GG genotype (93.88 ± 9.28 pg/ml, p = .001 and p < .001, respectively) was significantly higher than those in patients with GC (89.38 ± 6.07 pg/ml) and CC (85.08 ± 4.47 pg/ml) genotypes, but there was no obvious difference among the genotypes in controls (Figure 2C). In severely infected patients, the serum level of IFN‐α in patients with the GG genotype (102.31 ± 3.84 pg/ml, p < .001 and p < .001, respectively) was significantly higher than in patients with the GC (94.20 ± 3.90 pg/ml) and CC genotypes (89.93 ± 5.58 pg/ml), but there was no significant difference among the genotypes in mildly infected patients (Figure 2D).\nLevel of interferon‐α (IFN‐α) in peripheral blood lymphocytes were detected in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. Values expressed as mean ± SD. (A) The level of IFN‐α in peripheral blood lymphocytes in EV71‐infected cases was significantly higher than in controls (p <  0.001). (B) The level of IFN‐α in peripheral blood lymphocytes in severe EV71‐infected cases was significantly higher than in mild cases (p < 0.001). (C). The level of IFN‐α in peripheral blood lymphocytes in GG genotypes in EV71 patients were significantly higher than in GC and CC genotypes patients (p < .01 and p < .001, respectively), but there was no difference in controls. (D). The level of IFN‐α in peripheral blood lymphocytes in GG genotypes in severe EV71‐infected patients were significantly higher than in GC and CC genotypes patients (p < .001 and p < .001, respectively), but there was no difference in mild cases. **p < .01, ***p < .001.\nIFN‐α serum levels were noticeably higher in EV71‐infected patients (89.34 ± 7.45 pg/ml, p < .001) compared with those in uninfected children (57.69 ± 3.57 pg/ml) (Figure 2A). 
IFN‐α concentrations were significantly higher in severely infected cases (95.92 ± 6.13 pg/ml, p < .001) compared with those in mildly infected cases (83.92 ± 2.32 pg/ml) (Figure 2B). In the infected group, the serum concentration of IFN‐α in patients with the GG genotype (93.88 ± 9.28 pg/ml, p = .001 and p < .001, respectively) was significantly higher than those in patients with GC (89.38 ± 6.07 pg/ml) and CC (85.08 ± 4.47 pg/ml) genotypes, but there was no obvious difference among the genotypes in controls (Figure 2C). In severely infected patients, the serum level of IFN‐α in patients with the GG genotype (102.31 ± 3.84 pg/ml, p < .001 and p < .001, respectively) was significantly higher than in patients with the GC (94.20 ± 3.90 pg/ml) and CC genotypes (89.93 ± 5.58 pg/ml), but there was no significant difference among the genotypes in mildly infected patients (Figure 2D).\nLevel of interferon‐α (IFN‐α) in peripheral blood lymphocytes were detected in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. Values expressed as mean ± SD. (A) The level of IFN‐α in peripheral blood lymphocytes in EV71‐infected cases was significantly higher than in controls (p <  0.001). (B) The level of IFN‐α in peripheral blood lymphocytes in severe EV71‐infected cases was significantly higher than in mild cases (p < 0.001). (C). The level of IFN‐α in peripheral blood lymphocytes in GG genotypes in EV71 patients were significantly higher than in GC and CC genotypes patients (p < .01 and p < .001, respectively), but there was no difference in controls. (D). The level of IFN‐α in peripheral blood lymphocytes in GG genotypes in severe EV71‐infected patients were significantly higher than in GC and CC genotypes patients (p < .001 and p < .001, respectively), but there was no difference in mild cases. **p < .01, ***p < .001.", "A total of 535 controls (ages 5.16 [2.79–8.00] years, 268 males) and 504 EV71‐infected patients (ages 5.77 [3.62–7.84] years, 247 males) were examined. There were no significant differences in age (Z = 1.840, p = .066) or gender (χ2 = 0.001, p = .970) between EV71‐infected patients and controls. The EV71‐infected patients were divided into two groups: 169 severely infected patients (ages 5.66 [3.50–7.51] years, 89 males) and 328 mildly infected patients (ages 5.77 [3.62–7.84] years, 158 males). No significant differences in age (Z = 0.417, p = .677) or gender (χ\n2 = 0.900, p = .343) were observed between patients with severe or mild EV71 infections.", "The genotypic and allelic distributions of each group obeyed Hardy–Weinberg equilibrium (p > .05). EV71‐infected patients had a significantly higher frequency of the GG (p < .001), GG + GC genotypes (p = .001), and G allele (p = .001), than the controls (Table 2). The same result was obtained when comparing severely infected EV71 patients with mildly infected patients (p < .001, p < .001, and p = .001, respectively) (Table 3).\nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in EV71‐infected patients and controls\nAbbreviations: CI, confidence interval; OR, odds ratio. \nGenotype and allele frequencies of the TMEM173 rs7447927 polymorphism in mild and severe EV71‐infected cases\nAbbreviations: CI, confidence interval; OR, odds ratio.", "Among EV71‐infected patients, those with the GG genotype had higher WBC (p < .001), C‐reactive proteins (CRP) (p < .001), and longer fever duration (p = .002). 
There were no significant differences in the serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), cardiac creatine kinase‐MB fraction (CK‐MB), or BG among the different genotypes. There were no significant differences in the frequencies of vomiting, seizure, spirit change, brain magnetic resonance imaging (MRI) abnormalities, or abnormal electroencephalography (EEG) frequencies among children with GG, GC, and CC genotypes (Table 4). In severely infected cases, children with the GG genotype had higher WBC (p < .001), CRP (p < .001), ALT (p = .049), AST (p = .041), BG (p < .001) levels and longer fever duration (p < .001). In addition, the percentages of vomiting (p = .017), spirit change (p = .045), and EEG abnormalities (p = .041) in patients with the GG genotype were significantly higher than those in patients with the other two genotypes. There were no significant differences in gender, age, and EV71 load among the different genotypes in both the mild and severe infection groups (Table 5).\nCharacteristic of EV71‐infected group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP, C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using 2 × 3 χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).\nCharacteristic of severe EV71 infection group according to different genotypes\nAbbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP,C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, Magnetic Resonance Imaging; WBC, white blood cell count.\nGroups compared using χ\n2 test.\nGroups compared using the ANOVA, values expressed as mean ± SD.\nGroups compared using the Wilcoxon rank‐sum test, values expressed as median (25th–75th percentile values).", "IFN‐α serum levels were noticeably higher in EV71‐infected patients (89.34 ± 7.45 pg/ml, p < .001) compared with those in uninfected children (57.69 ± 3.57 pg/ml) (Figure 2A). IFN‐α concentrations were significantly higher in severely infected cases (95.92 ± 6.13 pg/ml, p < .001) compared with those in mildly infected cases (83.92 ± 2.32 pg/ml) (Figure 2B). In the infected group, the serum concentration of IFN‐α in patients with the GG genotype (93.88 ± 9.28 pg/ml, p = .001 and p < .001, respectively) was significantly higher than those in patients with GC (89.38 ± 6.07 pg/ml) and CC (85.08 ± 4.47 pg/ml) genotypes, but there was no obvious difference among the genotypes in controls (Figure 2C). In severely infected patients, the serum level of IFN‐α in patients with the GG genotype (102.31 ± 3.84 pg/ml, p < .001 and p < .001, respectively) was significantly higher than in patients with the GC (94.20 ± 3.90 pg/ml) and CC genotypes (89.93 ± 5.58 pg/ml), but there was no significant difference among the genotypes in mildly infected patients (Figure 2D).\nLevel of interferon‐α (IFN‐α) in peripheral blood lymphocytes were detected in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. Values expressed as mean ± SD. (A) The level of IFN‐α in peripheral blood lymphocytes in EV71‐infected cases was significantly higher than in controls (p <  0.001). 
(B) The level of IFN‐α in peripheral blood lymphocytes in severe EV71‐infected cases was significantly higher than in mild cases (p < .001). (C) The level of IFN‐α in peripheral blood lymphocytes in patients with the GG genotype was significantly higher than in patients with the GC and CC genotypes (p < .01 and p < .001, respectively), but there was no difference in controls. (D) The level of IFN‐α in peripheral blood lymphocytes in severe EV71‐infected patients with the GG genotype was significantly higher than in patients with the GC and CC genotypes (p < .001 and p < .001, respectively), but there was no difference in mild cases. **p < .01, ***p < .001.", "Severe EV71 infection poses a serious threat to children's health. Although major breakthroughs have been made in EV71‐related vaccine research, targeted treatment measures are currently lacking.\n2\n Therefore, exploring the pathogenic mechanism of severe infections and effectively preventing their occurrence remain priorities for current research. Previous studies by Chen et al. have shown that SNPs in multiple host inflammation‐related genes, such as TLR3, IL‐8, CPT2, and IRAK4,\n6\n, \n7\n, \n13\n, \n14\n are associated with susceptibility to severe EV71 infection. It has been speculated that TMEM173, as an interferon‐stimulating and cGAMP‐interacting factor, may be involved in the immune, inflammatory, and cell death processes triggered by viral infections.\n15\n, \n16\n In this study, we aimed to explore the relationship between the TMEM173 rs7447927 locus and severe EV71 infections in children.\nWe found that infected patients, especially those with severe infection, had a significantly higher frequency of the G allele and GG genotype than the controls. Patients with the G allele and GG genotype showed increased EV71 infection susceptibility and severity. These results suggest that children with the TMEM173 rs7447927 G allele and GG genotype may be susceptible to severe EV71 infection in the Chinese population. Combined with the SNP sequencing results, rs7447927 is predicted to lie in the third exon of the TMEM173 gene and to be a synonymous variant. In rs7447927, the C allele is replaced by a G allele, which does not change the amino acid sequence of the protein but may influence protein activity. The effect of this change on protein activity has been shown in previous studies, which revealed that rs7447927 is similarly associated with telomere shortening.\n17\n\n\nWBC counts, CRP levels, and fever duration in children with the GG genotype were significantly higher than those in children with other genotypes, both in the infected and severely infected groups. As reported previously, a higher WBC count, a higher CRP level, and longer fever duration are markers of severe EV71 infection.\n18\n This further supports the involvement of the GG genotype in the process of severe infection. In addition, a higher BG concentration was found in patients with the GG genotype in the group with severe cases, which may reflect a more severe infection or complications.\n19\n Combined with the significant increase in the AST and ALT indices (which reflect liver function) and in the frequency of vomiting, mental status changes, and EEG abnormalities (which reflect nervous system involvement) in children with the GG genotype, we can further confirm that the GG genotype is involved in the process of severe EV71 infections.
Unlike previous studies, which showed that boys under 4 years of age were prone to severe infection, there were no significant differences in gender or age among children with different genotypes in this study.\n19\n This may be related to the fact that this study was conducted only on Asian Han children.\nTMEM173, which is located between the endoplasmic reticulum and Golgi, is a key adaptor protein through which macrophages and monocytes promote the production of type I interferon for the immune response.\n11\n After TMEM173 binds cGAMP, TMEM173 is stimulated, translocates to the Golgi, and activates the transcription factor IRF3 by recruiting TANK‐binding kinase 1 (TBK1).\n11\n Subsequently, active IRF3 translocates to the nucleus and induces the transcription of type I IFN genes, which are involved in infection and antitumor immune responses.\n11\n In our study, we found that the IFN‐α levels in the severely infected group were significantly higher than those in the mildly infected group, and the serum IFN‐α levels of patients with the GG genotype in the severely infected group were significantly higher than those of patients with the CC and GC genotypes. No significant differences were observed in children with mild EV71 disease. This suggests that TMEM173 rs7447927 affects the development of EV71 infection by altering the expression of IFN‐α.\nThe actual mechanism by which TMEM173 rs7447927 promotes EV71 infection severity is unclear, but the following hypotheses are plausible. First, after EV71 infection, its specific antigen (RNA) is recognized by the surface or internal receptors (Toll‐like receptors, NOD‐like receptors, RIG‐I‐like receptors, etc.) of innate immune cells (such as macrophages). Through the transmission of the intracellular signal molecule TMEM173, the IRF3/7 transcription factors are activated to initiate the expression of type I interferon, which plays an antiviral role.\n11\n, \n20\n Second, the TMEM173 rs7447927 G allele and GG genotype may cause abnormal activation of TMEM173 during EV71 infection, resulting in inflammation and an imbalance of the immune network, leading to severe infection. Third, rs7447927 may influence TMEM173 activity and further affect the recruitment of TGF‐β‐activated kinase 1 and IκB kinase, thereby activating the production of NF‐κB‐mediated inflammatory cytokines, such as TNF‐α, IL‐6, and CXCL10. Finally, the rs7447927 variant may affect TMEM173 activity and promote autophagy and cell death (including apoptosis and pyroptosis). Apoptosis of lymphocytes and dendritic cells releases large numbers of DAMPs (including HMGB1, cfDNA, and histones), which leads to cytokine storm, immunosuppression, and coagulation activation, and ultimately ends in multiple organ failure (involving organs such as the liver, heart, and brain).\n21\n, \n22\n The pathogenic mechanism of EV71 is related to virulence, host genetic background, and other factors. In this study, the relationship between the severity of EV71 infection and the genetic background of the host was examined, which will help clinicians understand severe EV71 infections from a genetic perspective, further explore their pathogenesis, and provide a theoretical basis for effective prevention and early intervention.\nThis study had some limitations. First, the samples collected were not sufficient to represent the whole population; therefore, the characteristics of the study sample reflect only a portion of the entire population.
Second, other cytokines and inflammatory factors in the serum related to immune disorders caused by EV71 infection were not detected or analysed. Lastly, this study did not genotype or sequence the EV71 virus, but C4 was the most prevalent circulating subgenotype during this period according to other reports.\n23\n, \n24\n, \n25\n\n", "\nJie Song: Data curation, writing – original draft, visualization, investigation. Yedan Liu: Data curation, methodology, software, writing – review & editing. Ya Guo: Validation, formal analysis, data curation, writing – review & editing. Peipei Liu: Validation, data curation, visualization. Fei Li: Validation, resources, data curation. Chengqing Yang: Validation, investigation, resources. Fan Fan: Validation, writing – review & editing. Zongbo Chen: Validation, writing – review & editing, supervision, project administration, funding acquisition. All authors contributed to the writing of the final manuscript. All members of our Study Team contributed to the management or administration of the trial.", "The authors declare no conflict of interest.", "All the procedures in this study were in accordance with the standards of the Ethics Review Committee of the Affiliated Hospital of Qingdao University. We obtained informed consent from the children's parents." ]
[ null, "materials-and-methods", null, null, null, null, null, "results", null, null, null, null, "discussion", null, "COI-statement", null ]
[ "TMEM173", "single‐nucleotide polymorphism", "IFN‐α", "EV71" ]
INTRODUCTION: Enterovirus 71 (EV71), a single positive‐sense‐strand, neurovirulent RNA virus, is a well‐known pathogen that causes hand‐foot‐and‐mouth disease (HFMD). 1 , 2 Most children infected with EV71 manifest only mild symptoms (fever and rashes on the mouth, hands, feet, and hips); however, a few children have symptoms that are accompanied by serious central nervous system complications such as aseptic meningitis, encephalomyelitis, brainstem encephalitis, flaccid paralysis, and neurogenic pulmonary edema. 3 In 2008, a large‐scale Asia‐Pacific pandemic in China led to the infection of 490,000 children with EV71, of whom 126 died. 4 The clinical manifestations of the disease in infected children differ, depending on virulence and host immunity. Recent studies have mainly focused on exploring the relationship between EV71 infection and host immunity at the molecular level, and it has been found that human gene polymorphisms, such as those of CPT2, OAS2, and IL‐17F, are related to infection susceptibility and severity. 5 , 6 , 7 , 8 , 9 Transmembrane protein 173 (TMEM173) is also known as MPYS, mediator of interferon regulatory factor 3 (IRF3) activation (MITA), endoplasmic reticulum interferon stimulator (ERIS), stimulator of interferon genes (STING), and stimulator of interferon response cyclic guanosine monophosphate–adenosine monophosphate (cGAMP) interactor 1 (STING1). Human TMEM173 is located on chromosome 5q31.2, contains eight exons, and encodes a transmembrane protein that shuttles between the endoplasmic reticulum and Golgi and, in response to cytosolic dsDNA, acts as an adaptor protein mediating the production of type I interferon. 10 TMEM173 plays an important role in the crosstalk between innate immunity, inflammation, autophagy, and cell death in response to invasive microbial pathogens or endogenous host damage. 11 Studies have shown that TMEM173 is involved in the disease progression of multiple viruses that infect the human body, but the relationship between EV71 and TMEM173 is not clear. The TMEM173 gene contains rs7447927, rs13166214, rs55792153, and other single‐nucleotide polymorphism (SNP) loci, but only the rs7447927 locus has been reported to be associated with oesophageal cancer, dose‐related telomere damage, and other conditions. Therefore, this study aimed to explore the relationship between the TMEM173 rs7447927 locus and EV71 infection susceptibility and severity. MATERIALS AND METHODS: Case selection We examined 535 healthy children undergoing physical examination and 504 EV71‐infected children in the Affiliated Hospital of Qingdao University, Qingdao Women & Children's Hospital, and the Affiliated Hospital of Jining Medical University between 2013 and 2019. Seven cases in the infected group were excluded due to underlying diseases: congenital heart disease (n = 1), epilepsy (n = 3), and other immune system diseases (n = 3). EV71 infection was confirmed by both an analysis of the clinical features and reverse transcription polymerase chain reaction (RT‐PCR). RNA was extracted from stool specimens obtained from the patients on the day after admission. Clinical and laboratory data were collected. The inclusion/exclusion criteria of our study are shown in Figure 1. The severity of EV71 infection was classified according to the guidelines for HFMD diagnosis and treatment (2010) issued by the Chinese Ministry of Health.
12 Flowchart demonstrating the patients included in the study as well as the inclusion and exclusion criteria Genomic DNA isolation and genotyping of TMEM173 rs7447927 A commercial kit (Qiagen) was used to extract genomic DNA from peripheral blood. An iMLDR technique developed by Genesky Biotechnologies Inc. was used to detect the TMEM173 rs7447927 polymorphism genotype. The product was 201 bp in size. For rs7447927 gene polymorphism analysis, PCR was performed using the forward primer 5ʹ‐CAGGGCTAGGCATCAAGGGAGT−3ʹ and the reverse primer 5ʹ‐GACCCTTTGGGGGCTAGGAGAG−3ʹ. The PCR procedure was as follows: 95°C for 2 min; 11 cycles at 94°C for 20 s, 65°C (−0.5°C/cycle) for 40 s, and 72°C for 90 s; 24 cycles at 94°C for 20 s, 59°C for 30 s, and 72°C for 90 s; followed by 72°C for 2 min and a hold at 4°C. The PCR products were purified by digestion with 1 U shrimp alkaline phosphatase at 37°C for 1 h and at 75°C for 15 min. The ligation reaction mixture contained 10× ligase buffer 2 μl, Taq DNA ligase 0.2 μl, probe mixture 1 μl, and the purified PCR product mixture 3 μl. In a double ligation reaction, each site contained the 5′ ends of two allele‐specific probes, followed by the 3′ end of an allele‐specific probe. Each allele‐specific ligation product was distinguished by its fluorescence, and different loci were distinguished by the different lengths added to the 3′ end of the allele‐specific probe. The probes used were as follows: rs1946519FC: TCTCTCGGGTCAATTCGTCCTTGCAGGGAGGCTAGGTGGTG, rs1946519FG: TGTTTGGGCCGGTTAGTGCAGGGAGGCTAGGTGGTGG, and rs1946519FP: ACCAGGTACCGGAGAGTGTGCTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT. The ligation cycling step consisted of 35 cycles at 94°C for 1 min and 56°C for 4 min, followed by a hold at 4°C. Alleles were genotyped using a 3130xl Genetic Analyzer (ABI), and the raw data were analysed using Gene Mapper 4.0 (Applied Biosystems) (Table 1). Primers and PCR conditions Detection of EV71 load Peripheral blood lymphocytes were extracted according to the manufacturer's instructions (Solarbio) on the second day after admission. Total RNA was purified with TRIzol reagent (Invitrogen) in accordance with the manufacturer's protocol, and the concentration and quality of the RNA were assessed with a QuickDrop spectrophotometer. The SYBR‑Green Real‑Time PCR kit (Vazyme) was used to reverse‐transcribe the mRNA into complementary DNA (cDNA). A standard curve for calculating the copy numbers of viral RNA was built from a series of dilutions (1 × 10^7, 1 × 10^6, 1 × 10^5, and 1 × 10^4 copies/μl of a DNA fragment) in various samples. The cycling conditions consisted of 95°C for 15 min and 40 cycles of thermal cycling at 95°C for 10 s and 60°C for 60 s. Quantitative RT‐PCR was performed using a Linegene9660 system (Thermo Fisher Scientific). The specific primers used were as follows: EV71‐S: GTTCTTAACTCACATAGCA, EV71‐A: TTGCAAAAACTGAGGGTT (Table 1).
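As an illustration of the absolute quantification described above, a log‐linear standard curve (Ct versus log10 copy number) can be fitted to the dilution series and inverted to read off sample copy numbers. The short Python sketch below shows one way to do this; the Ct readings and the helper name copies_from_ct are hypothetical assumptions for illustration, not values or code from the study.

```python
# Illustrative sketch: absolute quantification of EV71 RNA from a qPCR
# standard curve (Ct vs. log10 copy number). The Ct values below are
# hypothetical; the study's actual calibration data are not reported here.
import numpy as np

# Dilution series from the text: 1e7, 1e6, 1e5, 1e4 copies/ul
std_copies = np.array([1e7, 1e6, 1e5, 1e4])
std_ct = np.array([18.1, 21.5, 24.9, 28.3])   # assumed Ct readings

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

# Amplification efficiency from the slope (ideal slope ~ -3.32 => ~100 %)
efficiency = 10 ** (-1 / slope) - 1

def copies_from_ct(ct):
    """Invert the standard curve to estimate copies/ul for a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"sample Ct 23.0 -> {copies_from_ct(23.0):.3g} copies/ul")
```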
Estimation of interferon‐α (IFN‐α) levels Plasma IFN‐α levels were detected using ELISA kits (Thermo Fisher) in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. The sensitivity of IFN‐α detection was 3.2 pg/ml. Each sample was measured in triplicate, and these values fell in the linear portion of the standard curve. Statistical analysis The Hardy–Weinberg equilibrium (HWE) test was used to check the genotype proportions in controls. The frequencies of genotypes and alleles in different groups were compared using the χ2 test. The relationships between the rs7447927 polymorphism and EV71 infection susceptibility and severity were evaluated by calculating odds ratios (OR) and 95% confidence intervals (95% CI) using logistic regression. Normally distributed parameters were expressed as means ± standard deviation (SD) and analysed via one‐way analysis of variance (ANOVA), followed by Student's t test. Nonparametric data were analysed using the Kruskal–Wallis test and expressed as median values (25th–75th percentile). All statistical analyses were performed using SPSS 21.0 (IBM SPSS software), and significance was set at p < .05.
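As a concrete illustration of the Hardy–Weinberg check described above, the χ2 goodness‐of‐fit statistic can be computed directly from genotype counts. The sketch below (Python with SciPy, rather than the SPSS used in the study) uses hypothetical placeholder counts, not the study's data.

```python
# Minimal sketch of the Hardy-Weinberg equilibrium check described above.
# The genotype counts are hypothetical placeholders, not the study's data.
from scipy.stats import chi2

def hwe_chi2(n_cc, n_cg, n_gg):
    """Chi-square goodness-of-fit test for HWE at a biallelic locus."""
    n = n_cc + n_cg + n_gg
    p = (2 * n_cc + n_cg) / (2 * n)            # frequency of the C allele
    q = 1 - p                                  # frequency of the G allele
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_cc, n_cg, n_gg)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # df = 3 genotype classes - 1 - 1 estimated allele frequency = 1
    return stat, chi2.sf(stat, df=1)

stat, pval = hwe_chi2(n_cc=250, n_cg=220, n_gg=65)   # assumed counts
print(f"chi2 = {stat:.3f}, p = {pval:.3f}")          # p > .05 => consistent with HWE
```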
RESULTS: Study population A total of 535 controls (ages 5.16 [2.79–8.00] years, 268 males) and 504 EV71‐infected patients (ages 5.77 [3.62–7.84] years, 247 males) were examined. There were no significant differences in age (Z = 1.840, p = .066) or gender (χ2 = 0.001, p = .970) between EV71‐infected patients and controls. After exclusions, the remaining 497 EV71‐infected patients were divided into two groups: 169 severely infected patients (ages 5.66 [3.50–7.51] years, 89 males) and 328 mildly infected patients (ages 5.77 [3.62–7.84] years, 158 males). No significant differences in age (Z = 0.417, p = .677) or gender (χ2 = 0.900, p = .343) were observed between patients with severe or mild EV71 infections. Distribution of genotypes and alleles of TMEM173 The genotypic and allelic distributions of each group conformed to Hardy–Weinberg equilibrium (p > .05). EV71‐infected patients had significantly higher frequencies of the GG genotype (p < .001), GG + GC genotypes (p = .001), and G allele (p = .001) than the controls (Table 2). The same result was obtained when comparing severely infected EV71 patients with mildly infected patients (p < .001, p < .001, and p = .001, respectively) (Table 3). Genotype and allele frequencies of the TMEM173 rs7447927 polymorphism in EV71‐infected patients and controls Abbreviations: CI, confidence interval; OR, odds ratio. Genotype and allele frequencies of the TMEM173 rs7447927 polymorphism in mild and severe EV71‐infected cases Abbreviations: CI, confidence interval; OR, odds ratio.
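The odds ratios and 95% confidence intervals reported in Tables 2 and 3 come down to 2 × 2 comparisons of allele or genotype counts in cases versus controls. A minimal sketch of this calculation (using Woolf's logit method for the interval) is shown below; the counts are hypothetical placeholders, not the published frequencies.

```python
# Sketch of the odds-ratio calculation behind Tables 2-3: a 2x2 comparison
# of allele (or genotype) counts in cases vs. controls. Counts are
# hypothetical placeholders, not the published frequencies.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table: a/b in cases, c/d in controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's logit method
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# G vs. C allele, cases vs. controls (assumed counts)
or_, lo, hi = odds_ratio_ci(a=430, b=578, c=350, d=720)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```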
Analysis of clinical features Among EV71‐infected patients, those with the GG genotype had higher WBC counts (p < .001) and C‐reactive protein (CRP) levels (p < .001) and longer fever duration (p = .002). There were no significant differences in the serum levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), cardiac creatine kinase‐MB fraction (CK‐MB), or blood glucose (BG) among the different genotypes. There were no significant differences in the frequencies of vomiting, seizure, mental status change, brain magnetic resonance imaging (MRI) abnormalities, or abnormal electroencephalography (EEG) findings among children with the GG, GC, and CC genotypes (Table 4). In severely infected cases, children with the GG genotype had higher WBC counts (p < .001) and CRP (p < .001), ALT (p = .049), AST (p = .041), and BG (p < .001) levels, as well as longer fever duration (p < .001). In addition, the percentages of vomiting (p = .017), mental status change (p = .045), and EEG abnormalities (p = .041) in patients with the GG genotype were significantly higher than those in patients with the other two genotypes. There were no significant differences in gender, age, or EV71 load among the different genotypes in either the mild or the severe infection group (Table 5). Characteristics of the EV71‐infected group according to genotype Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP, C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, magnetic resonance imaging; WBC, white blood cell count. Groups compared using the 2 × 3 χ2 test. Groups compared using ANOVA; values expressed as mean ± SD. Groups compared using the Wilcoxon rank‐sum test; values expressed as median (25th–75th percentile values). Characteristics of the severe EV71 infection group according to genotype Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; BG, blood glucose; CRP, C‐reactive protein; CK‐MB, cardiac creatine kinase‐MB fraction; EEG, electroencephalography; MRI, magnetic resonance imaging; WBC, white blood cell count. Groups compared using the χ2 test. Groups compared using ANOVA; values expressed as mean ± SD. Groups compared using the Wilcoxon rank‐sum test; values expressed as median (25th–75th percentile values). Correlation between the TMEM173 rs7447927 polymorphism and serum IFN‐α level IFN‐α serum levels were significantly higher in EV71‐infected patients (89.34 ± 7.45 pg/ml, p < .001) than in uninfected children (57.69 ± 3.57 pg/ml) (Figure 2A). IFN‐α concentrations were significantly higher in severely infected cases (95.92 ± 6.13 pg/ml, p < .001) than in mildly infected cases (83.92 ± 2.32 pg/ml) (Figure 2B). In the infected group, the serum concentration of IFN‐α in patients with the GG genotype (93.88 ± 9.28 pg/ml; p = .001 and p < .001, respectively) was significantly higher than in patients with the GC (89.38 ± 6.07 pg/ml) and CC (85.08 ± 4.47 pg/ml) genotypes, but there was no significant difference among the genotypes in controls (Figure 2C). In severely infected patients, the serum level of IFN‐α in patients with the GG genotype (102.31 ± 3.84 pg/ml; p < .001 and p < .001, respectively) was significantly higher than in patients with the GC (94.20 ± 3.90 pg/ml) and CC (89.93 ± 5.58 pg/ml) genotypes, but there was no significant difference among the genotypes in mildly infected patients (Figure 2D). Levels of interferon‐α (IFN‐α) in peripheral blood lymphocytes were measured in 99 controls, 73 mild EV71‐infected cases, and 60 severe EV71‐infected cases. Values are expressed as mean ± SD. (A) The level of IFN‐α in peripheral blood lymphocytes in EV71‐infected cases was significantly higher than in controls (p < .001). (B) The level of IFN‐α in peripheral blood lymphocytes in severe EV71‐infected cases was significantly higher than in mild cases (p < .001). (C) The level of IFN‐α in peripheral blood lymphocytes in patients with the GG genotype was significantly higher than in patients with the GC and CC genotypes (p < .01 and p < .001, respectively), but there was no difference in controls. (D) The level of IFN‐α in peripheral blood lymphocytes in severe EV71‐infected patients with the GG genotype was significantly higher than in patients with the GC and CC genotypes (p < .001 and p < .001, respectively), but there was no difference in mild cases. **p < .01, ***p < .001. DISCUSSION: Severe EV71 infection poses a serious threat to children's health. Although major breakthroughs have been made in EV71‐related vaccine research, targeted treatment measures are currently lacking. 2 Therefore, exploring the pathogenic mechanism of severe infections and effectively preventing their occurrence remain priorities for current research. Previous studies by Chen et al. have shown that SNPs in multiple host inflammation‐related genes, such as TLR3, IL‐8, CPT2, and IRAK4, 6 , 7 , 13 , 14 are associated with susceptibility to severe EV71 infection. It has been speculated that TMEM173, as an interferon‐stimulating and cGAMP‐interacting factor, may be involved in the immune, inflammatory, and cell death processes triggered by viral infections. 15 , 16 In this study, we aimed to explore the relationship between the TMEM173 rs7447927 locus and severe EV71 infections in children. We found that infected patients, especially those with severe infection, had a significantly higher frequency of the G allele and GG genotype than the controls. Patients with the G allele and GG genotype showed increased EV71 infection susceptibility and severity. These results suggest that children with the TMEM173 rs7447927 G allele and GG genotype may be susceptible to severe EV71 infection in the Chinese population. Combined with the SNP sequencing results, rs7447927 is predicted to lie in the third exon of the TMEM173 gene and to be a synonymous variant. In rs7447927, the C allele is replaced by a G allele, which does not change the amino acid sequence of the protein but may influence protein activity. The effect of this change on protein activity has been shown in previous studies, which revealed that rs7447927 is similarly associated with telomere shortening. 17 WBC counts, CRP levels, and fever duration in children with the GG genotype were significantly higher than those in children with other genotypes, both in the infected and severely infected groups. As reported previously, a higher WBC count, a higher CRP level, and longer fever duration are markers of severe EV71 infection. 18 This further supports the involvement of the GG genotype in the process of severe infection. In addition, a higher BG concentration was found in patients with the GG genotype in the group with severe cases, which may reflect a more severe infection or complications. 19 Combined with the significant increase in the AST and ALT indices (which reflect liver function) and in the frequency of vomiting, mental status changes, and EEG abnormalities (which reflect nervous system involvement) in children with the GG genotype, we can further confirm that the GG genotype is involved in the process of severe EV71 infections. Unlike previous studies, which showed that boys under 4 years of age were prone to severe infection, there were no significant differences in gender or age among children with different genotypes in this study. 19 This may be related to the fact that this study was conducted only on Asian Han children. TMEM173, which is located between the endoplasmic reticulum and Golgi, is a key adaptor protein through which macrophages and monocytes promote the production of type I interferon for the immune response.
11 After TMEM173 binds cGAMP, TMEM173 is stimulated, translocates to the Golgi, and activates the transcription factor IRF3 by recruiting TANK‐binding kinase 1 (TBK1). 11 Subsequently, active IRF3 translocates to the nucleus and induces the transcription of type I IFN genes, which are involved in infection and antitumor immune responses. 11 In our study, we found that the IFN‐α levels in the severely infected group were significantly higher than those in the mildly infected group, and the serum IFN‐α levels of patients with the GG genotype in the severely infected group were significantly higher than those of patients with the CC and GC genotypes. No significant differences were observed in children with mild EV71 disease. This suggests that TMEM173 rs7447927 affects the development of EV71 infection by altering the expression of IFN‐α. The actual mechanism by which TMEM173 rs7447927 promotes EV71 infection severity is unclear, but the following hypotheses are plausible. First, after EV71 infection, its specific antigen (RNA) is recognized by the surface or internal receptors (Toll‐like receptors, NOD‐like receptors, RIG‐I‐like receptors, etc.) of innate immune cells (such as macrophages). Through the transmission of the intracellular signal molecule TMEM173, the IRF3/7 transcription factors are activated to initiate the expression of type I interferon, which plays an antiviral role. 11 , 20 Second, the TMEM173 rs7447927 G allele and GG genotype may cause abnormal activation of TMEM173 during EV71 infection, resulting in inflammation and an imbalance of the immune network, leading to severe infection. Third, rs7447927 may influence TMEM173 activity and further affect the recruitment of TGF‐β‐activated kinase 1 and IκB kinase, thereby activating the production of NF‐κB‐mediated inflammatory cytokines, such as TNF‐α, IL‐6, and CXCL10. Finally, the rs7447927 variant may affect TMEM173 activity and promote autophagy and cell death (including apoptosis and pyroptosis). Apoptosis of lymphocytes and dendritic cells releases large numbers of DAMPs (including HMGB1, cfDNA, and histones), which leads to cytokine storm, immunosuppression, and coagulation activation, and ultimately ends in multiple organ failure (involving organs such as the liver, heart, and brain). 21 , 22 The pathogenic mechanism of EV71 is related to virulence, host genetic background, and other factors. In this study, the relationship between the severity of EV71 infection and the genetic background of the host was examined, which will help clinicians understand severe EV71 infections from a genetic perspective, further explore their pathogenesis, and provide a theoretical basis for effective prevention and early intervention. This study had some limitations. First, the samples collected were not sufficient to represent the whole population; therefore, the characteristics of the study sample reflect only a portion of the entire population. Second, other cytokines and inflammatory factors in the serum related to immune disorders caused by EV71 infection were not detected or analysed. Lastly, this study did not genotype or sequence the EV71 virus, but C4 was the most prevalent circulating subgenotype during this period according to other reports. 23 , 24 , 25 AUTHOR CONTRIBUTIONS: Jie Song: Data curation, writing – original draft, visualization, investigation. Yedan Liu: Data curation, methodology, software, writing – review & editing. Ya Guo: Validation, formal analysis, data curation, writing – review & editing.
Peipei Liu: Validation, data curation, visualization. Fei Li: Validation, resources, data curation. Chengqing Yang: Validation, investigation, resources. Fan Fan: Validation, writing – review & editing. Zongbo Chen: Validation, writing – review & editing, supervision, project administration, funding acquisition. All authors contributed to the writing of the final manuscript. All members of our Study Team contributed to the management or administration of the trial. CONFLICT OF INTEREST: The authors declare no conflict of interest. ETHICS STATEMENT: All the procedures in this study were in accordance with the standards of the Ethics Review Committee of the Affiliated Hospital of Qingdao University. We obtained informed consent from the children's parents.
Background: This study was designed to explore the association between the TMEM173 polymorphism (rs7447927) and the severity of enterovirus 71 (EV71) infection among Chinese children. Methods: The TMEM173 polymorphism was identified in EV71-infected patients (n = 497) and healthy controls (n = 535) using the improved multiplex ligation detection reaction (iMLDR). Interferon-α (IFN-α) serum levels were detected using enzyme-linked immunosorbent assay (ELISA). Results: The frequencies of the GG genotype and G allele of TMEM173 rs7447927 in the mild and severe EV71 infection groups were markedly higher than those in the control group. The GG genotype and G allele frequencies in severely infected EV71 patients were significantly higher than those in mildly infected EV71 patients. Severely infected EV71 patients with the GG genotype had higher white blood cell (WBC) counts, C-reactive protein (CRP) levels, and blood glucose (BG) levels; longer fever duration; and higher frequencies of vomiting, mental status changes, and electroencephalography (EEG) abnormalities. The IFN-α serum concentration in severely infected patients was significantly higher than that in the mildly infected group. The IFN-α concentration in the GG genotype was significantly higher than in the GC and CC genotypes in severe cases. Conclusions: The TMEM173 rs7447927 polymorphism was associated with EV71 infection susceptibility and severity. The G allele and GG genotype are susceptibility factors for the development of severe EV71 infection in Chinese children.
null
null
8,866
289
[ 446, 190, 364, 200, 59, 152, 162, 173, 491, 505, 135, 35 ]
16
[ "ev71", "infected", "patients", "001", "genotypes", "ev71 infected", "higher", "ifn", "gg", "infected patients" ]
[ "ev71 infections genetic", "enterovirus", "ev71 infection susceptibility", "introduction enterovirus", "diseases ev71 infection" ]
null
null
null
[CONTENT] TMEM173 | single‐nucleotide polymorphism | IFN‐α | EV71 [SUMMARY]
null
[CONTENT] TMEM173 | single‐nucleotide polymorphism | IFN‐α | EV71 [SUMMARY]
null
[CONTENT] TMEM173 | single‐nucleotide polymorphism | IFN‐α | EV71 [SUMMARY]
null
[CONTENT] Child | Humans | Enterovirus A, Human | Polymorphism, Genetic | Genotype | Gene Frequency | Interferon-alpha | China [SUMMARY]
null
[CONTENT] Child | Humans | Enterovirus A, Human | Polymorphism, Genetic | Genotype | Gene Frequency | Interferon-alpha | China [SUMMARY]
null
[CONTENT] Child | Humans | Enterovirus A, Human | Polymorphism, Genetic | Genotype | Gene Frequency | Interferon-alpha | China [SUMMARY]
null
[CONTENT] ev71 infections genetic | enterovirus | ev71 infection susceptibility | introduction enterovirus | diseases ev71 infection [SUMMARY]
null
[CONTENT] ev71 infections genetic | enterovirus | ev71 infection susceptibility | introduction enterovirus | diseases ev71 infection [SUMMARY]
null
[CONTENT] ev71 infections genetic | enterovirus | ev71 infection susceptibility | introduction enterovirus | diseases ev71 infection [SUMMARY]
null
[CONTENT] ev71 | infected | patients | 001 | genotypes | ev71 infected | higher | ifn | gg | infected patients [SUMMARY]
null
[CONTENT] ev71 | infected | patients | 001 | genotypes | ev71 infected | higher | ifn | gg | infected patients [SUMMARY]
null
[CONTENT] ev71 | infected | patients | 001 | genotypes | ev71 infected | higher | ifn | gg | infected patients [SUMMARY]
null
[CONTENT] tmem173 | interferon | human | stimulator | immunity | response | host | relationship | ev71 | infection [SUMMARY]
null
[CONTENT] 001 | patients | infected | genotypes | infected patients | higher | ev71 | ev71 infected | ml | pg [SUMMARY]
null
[CONTENT] ev71 | infected | 001 | patients | ev71 infected | genotypes | infected patients | ifn | gg | tmem173 [SUMMARY]
null
[CONTENT] 71 | Chinese [SUMMARY]
null
[CONTENT] ||| ||| WBC | CRP | EEG ||| IFN ||| IFN | GC | CC [SUMMARY]
null
[CONTENT] 71 | Chinese ||| 497 | 535 ||| IFN | ELISA ||| ||| ||| ||| WBC | CRP | EEG ||| IFN ||| IFN | GC | CC ||| ||| Chinese [SUMMARY]
null
Preparation and characterization of a coacervate extended-release microparticulate delivery system for Lactobacillus rhamnosus.
21984867
The purpose of this study was to develop a mucoadhesive coacervate microparticulate system to deliver viable Lactobacillus rhamnosus cells into the gut for an extended period of time while maintaining high numbers of viable cells within the formulation throughout its shelf-life and during gastrointestinal transit.
BACKGROUND
Core coacervate mucoadhesive microparticles of L. rhamnosus were developed using several grades of hypromellose and were subsequently enteric-coated with hypromellose phthalate. Microparticles were evaluated for percent yield, entrapment efficiency, surface morphology, particle size, size distribution, zeta potential, flow properties, in vitro swelling, mucoadhesion properties, in vitro release profile and release kinetics, in vivo probiotic activity, and stability. The values for the kinetic constant and release exponent of model-dependent approaches, the difference factor, similarity factor, and Rescigno indices of model-independent approaches were determined for analyzing in vitro dissolution profiles.
METHODS
Experimental microparticles of all formulation batches were spherical, with percent yields of 41.24%-58.18%, entrapment efficiencies of 45.18%-64.16%, mean particle sizes of 33.10-49.62 μm, and zeta potentials around -11.5 mV, confirming adequate stability of L. rhamnosus at room temperature. The in vitro L. rhamnosus release profile follows zero-order kinetics and depends on the grade of hypromellose and the L. rhamnosus to hypromellose ratio.
RESULTS
Microparticles delivered L. rhamnosus in simulated intestinal conditions for an extended period, following zero-order kinetics, and exhibited appreciable mucoadhesion under these conditions.
CONCLUSION
[ "Adhesiveness", "Bacterial Load", "Drug Carriers", "Drug Delivery Systems", "Humans", "In Vitro Techniques", "Intestinal Mucosa", "Lacticaseibacillus rhamnosus", "Methylcellulose", "Microscopy, Electron, Scanning", "Microspheres", "Particle Size", "Probiotics", "Tablets, Enteric-Coated" ]
3184930
Introduction
Intake of viable Lactobacillus rhamnosus (LR) cells, at around 10^7 cfu,1,2 aids in the prevention of intestinal tract illnesses,3 suppresses bacterial infection in renal patients,4 safeguards the urogenital tract by excreting biosurfactants,5 stimulates antibody production, aids the immune system, assists the phagocytic process, helps the body to combat dangerous invasive bacteria, controls food-associated allergic inflammation,6 shortens the duration of diarrhea associated with rotavirus infection,7 and reduces the use of antibiotics to treat Helicobacter pylori infection.8 These reported therapeutic benefits are associated with the ability of LR to secrete coagulin, a bacteriocin active against a broad spectrum of enteric microbes.1 LR is well tolerated, with very rare side effects, and its regular intake can be effective in supplementing and maintaining digestive tract health. Processing conditions during formulation, and noncompliance with storage requirements during shipment and storage, result in loss of cell viability in the dosage formulation. Acidic conditions in the stomach, various hydrolytic enzymes, and bile salts in the gastrointestinal tract also adversely affect the viability of LR after ingestion.9–14 Microparticulate systems have been exploited not only to reduce the loss of cell viability during storage and transport but also to improve and maintain the number of viable cells arriving in the intestine.9–11 The reduced performance of microparticles is attributable to their short gastric retention time, a physiological limitation that can be addressed by coupling mucoadhesive properties to the microparticles; mucoadhesive microparticles simultaneously improve gastric retention time and bioavailability.15–17 Hypromellose and hypromellose phthalate are safe for human consumption, and hypromellose is preferred in mucoadhesive formulations because of its good mucoadhesive and release rate-controlling properties.16–19 These observations indicate a strong need for a dosage form that delivers LR into the gut with improved gastric retention time and adequate stability during storage and gastrointestinal transit, which can be achieved with extended-release mucoadhesive microparticles.
In vitro release kinetics, statistical evaluation, and data fitting
The mean value of three determinations at each time point was used to fit the in vitro viable cell release profiles of all formulation batches to different kinetic models so as to find their release exponents. The mean value of 12 determinations was used to estimate the difference factor (f1), the similarity factor (f2), and the two indices of Rescigno (ξ1 and ξ2).16,32 Statistical analysis of percent-released data and other data was performed using one-way analysis of variance at a significance level of 5% (P < 0.05). In vitro release kinetic studies, statistical evaluation, data fitting, nonlinear least squares curve fitting, simulation, and plotting were performed using Excel software (version 2007, Microsoft Corporation, Redmond, WA) to determine the parameters of each equation.
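For readers who want to reproduce the model-independent comparisons described above outside Excel, the sketch below computes the difference factor (f1), the similarity factor (f2), and a common discrete approximation of the Rescigno indices (ξ1, ξ2). The two example profiles are hypothetical placeholders; only the formulas follow the standard definitions cited in the text.

```python
import numpy as np

# Hypothetical reference (R) and test (T) cumulative %-released profiles
# at matched time points; values are illustrative only.
R = np.array([10, 22, 38, 55, 70, 82, 91, 97], dtype=float)
T = np.array([12, 25, 40, 52, 66, 79, 90, 96], dtype=float)

def f1(R, T):
    """Difference factor; 0-15 is conventionally taken as similar."""
    return 100.0 * np.sum(np.abs(R - T)) / np.sum(R)

def f2(R, T):
    """Similarity factor; 50-100 is conventionally taken as similar."""
    mse = np.sum((R - T) ** 2) / len(R)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + mse))

def rescigno(R, T, i):
    """Discrete Rescigno index of order i; 0 means identical profiles."""
    return (np.sum(np.abs(R - T) ** i) / np.sum(np.abs(R + T) ** i)) ** (1.0 / i)

print(f"f1 = {f1(R, T):.2f}, f2 = {f2(R, T):.2f}")
print(f"xi1 = {rescigno(R, T, 1):.4f}, xi2 = {rescigno(R, T, 2):.4f}")
```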
null
null
Conclusion
These experimental results suggest that this extended-release microparticulate system loaded with LR cells could be prepared by a conventional coacervation and phase separation technique. It has the potential to deliver viable LR cells to the gut for an extended period of time, while maintaining the viability of LR cells during storage and gastrointestinal transit, and could be viewed as an alternative to conventional dosage forms. However, extensive in vivo studies will be required to establish the use of a coacervate extended-release microparticulate system as an alternative to the conventional dosage form of LR.
[ "In-house LR specification compliance test", "Preparation of mucoadhesive microparticles", "Coating of microparticles", "Coating stage percent weight gain", "Percent yield study", "Measurement of viable cell number", "Direct microscopic count using dye exclusion test", "Viable plate counts", "Entrapment efficiency", "Morphology", "Particle size, size distribution, and zeta potential", "Flow properties", "In vitro swelling", "Mucoadhesion", "Ex vivo mucoadhesive strength", "In vitro washoff test", "In vitro release", "In vivo probiotic activity", "Accelerated stability", "Conclusion" ]
[ "A number of cell count tests (bacteriological, total aerobic bacteria, coliforms, enterobacteriaceae, other Gram-negative bacteria, yeast, molds), and tests to ensure the absence of contaminants (Escherichia coli, Staphylococcus aureus, and Salmonella), were performed as a compliance to the specifications of the certificate of analysis.", "Core mucoadhesive microparticles of LR were prepared aseptically with hypromellose employing coacervation and phase separation technique.16,20 Hypromellose 5 g was dissolved in 200 mL of cold deionized water (4°C ± 2°C). Polysorbate-80 2 g was dissolved in this solution under stirring, followed by aseptic filtration using a 0.45 μm PVDF filter membrane (Millipore Corporation, Bedford, MA). A calculated quantity of LR was dispersed in the above solution, sonicated at 20 kHz for 1 minute, and the temperature was raised gradually up to 30°C ± 2°C with stirring at 500 ± 25 rpm for 30 minutes. Acetone 50 mL was added dropwise under stirring and stirred for a further 10 minutes. Microparticles were collected by aseptic filtration of the dispersion with a 10 μm nylon filter (Millipore Corporation), followed by washing three times with sterile water for injection (30°C ± 2°C) and kept in a desiccator for 24 hours. All formulation batches having the composition described in Table 1 were prepared in triplicate. Aseptic processing was carried out on the bench using a horizontal laminar flow clean air work station (1500048-24-24, Klenzaids Bioclean Devices Ltd, Mumbai, India).16,20", "HP-50 solution 200 mL (10% w/w) was prepared with phosphate buffer21 at pH 6.8, and polyethylene glycol 200 4 g and polysorbate-80 2 g was dissolved in it. The solution was filtered aseptically using a 0.45 μm PVDF filter membrane followed by dispersing tare core microparticles in it under stirring at 300 ± 25 rpm, and then 40 mL of propan-2-ol was added dropwise. Stirring was continued for 30 minutes, then the coated microparticles were separated by aseptic filtration, washed three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours, followed by determination of the final weight, aseptically packed in glass vials, and stored in a refrigerator for further use.", "From the tare weight (WI) of the dried core microparticles that had been subjected to coating and the tare weight (WF) of the dried coated microparticles, the coating stage percent weight gain value was determined using equation 1.\n(1)Coating stage percent weight gain (% w/w)=WF−WIWI×100", "Calculation for percent yield values (w/w) of all batches were done using equation 2.\n(2)Percent yield=W1(weight of coated microparticles recovered)W2[Weight (drug (viable cell+nonviable cell)+polymer)]×100", "Measurement of viable cells in sample was done using the following methods.22\n Direct microscopic count using dye exclusion test A thoroughly mixed cell suspension (2–5 × 105 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette and the chamber was allowed to fill up by capillary action to avoid overfilling or underfilling. 
All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and was performed in triplicate.\n(3)Viable cells=average viable cells count per square× dilution factor×104\nA thoroughly mixed cell suspension (2–5 × 105 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette and the chamber was allowed to fill up by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and was performed in triplicate.\n(3)Viable cells=average viable cells count per square× dilution factor×104\n Viable plate counts One gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm size) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cell/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS, 21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source, thus LR cells will not proliferate in this media, and remain in a state of stasis until plated on media containing a carbon source.22\nDeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.\nSix plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.\n(4)Number of clu=Average number of coloniescounted per plateDilution factor×100\nOne gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm size) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cell/mL). 
The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS, 21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source, thus LR cells will not proliferate in this media, and remain in a state of stasis until plated on media containing a carbon source.22\nDeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.\nSix plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.\n(4)Number of clu=Average number of coloniescounted per plateDilution factor×100", "A thoroughly mixed cell suspension (2–5 × 105 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette and the chamber was allowed to fill up by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and was performed in triplicate.\n(3)Viable cells=average viable cells count per square× dilution factor×104", "One gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm size) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cell/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS, 21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source, thus LR cells will not proliferate in this media, and remain in a state of stasis until plated on media containing a carbon source.22\nDeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.\nSix plates were counted and the average count per plate was calculated. 
The number of cfu per unit (mL or g) of sample was calculated using equation 4.\n(4)Number of clu=Average number of coloniescounted per plateDilution factor×100", "In an aseptic manner, 500 mg of accurately weighed coated microparticles were kept with 25 mL of sterile simulated intestinal fluid in a hermetically sealed sterile glass vial at 4°C ± 2°C for 24 hours. The dispersion was subjected to a viable plate count (ie, a viable spore count value in cfu/g) and entrapment efficiency was calculated using equation 5.16\n(5)Percent entrapment efficiency=Practical viable spore count valueTheoretical viable spore count value×100", "The coated microparticles were mounted on aluminum stubs using double-sided adhesive tape. The stubs were then vacuum-coated with a thin layer of gold and examined with a scanning electron microscope (JSM 5610 LV, Jeol, Tokyo, Japan).23–28", "The core and coated microparticles were dispersed in deionized water (pH 6.8) and sonicated at 20 kHz for three minutes to get a homogenous dispersion (0.5% w/v). The dispersions were put into a small-volume disposable zeta cell and subjected to particle size study using photon correlation spectroscopy with an inbuilt Zetasizer (Nano ZS, Malvern Instruments, Worcestershire, UK) at 633 nm and 25°C ± 0.1°C. The electrophoretic mobility measured (in mm/sec) was converted to the zeta potential.16,25–30", "The flow properties of the coated microparticles were determined from the result of the study parameters, ie, angle of repose, Carr’s index, and the Hausner ratio.16,21", "An in vitro swelling test of the coated microparticles was conducted in simulated intestinal fluid. The size of the dried microparticles and those after incubation in simulated intestinal fluid for 5 hours were measured using a calibrated optical microscope (CX RIII, Labomed, Ambala, India). Percent swelling value was determined from the diameter of the microparticles at time t (DT) and initial time t= 0 (D0) using equation 6.16\n(6)Percent swelling=[DT−D0]/D0×100", "Following institutional animal ethical committee guidelines, the mucoadhesion affinity of the coated microparticles for intestinal mucosa was assessed by the following methods.\n Ex vivo mucoadhesive strength A suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (No) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16\n(7)Percent adhesive strength=[NS/N0]×100\nA suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (No) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. 
Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16\n(7)Percent adhesive strength=[NS/N0]×100\n In vitro washoff test A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31\n(8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100\nA strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31\n(8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100", "A suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (No) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16\n(7)Percent adhesive strength=[NS/N0]×100", "A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31\n(8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100", "In vitro release studies of the coated microparticles were done using a USP basket apparatus (TDT-06T, Electrolab, Mumbai, India) at 37°C ± 0.5°C and 100 rpm containing 900 mL of sterile dissolution medium, ie, simulated gastric fluid and simulated intestinal fluid with about 1 g of accurately weighed microparticles contained in the basket (wrapped with 100 mesh nylon cloth) of dissolution apparatus. 
At predetermined time points, 5 mL of dissolution medium was withdrawn for up to 14 hours, with immediate replacement of fresh dissolution medium, subjected to viable cell number determination, and the result was expressed as the percentage of viable LR cells released with respect to the practical viable spore count value.16", "The in vivo probiotic activity of the coated microparticles was evaluated using a mouse enterococci stool colonization method, following institutional animal ethical committee guidelines.16 One milliliter of coated microparticle dispersion (102 cfu/mL) in simulated intestinal fluid was fed to albino mice in groups of six. Stools were collected at 6-hourly intervals for up to 48 hours and subjected to an enterococci colonization density study.", "Following an International Conference on Harmonization guidelines, coated microparticles from all formulation batches were stored under a range of temperature and humidity conditions (30°C ± 2°C/65% ± 5% relative humidity and 40°C ± 2°C/75% ± 5% relative humidity) in a stability analysis chamber (Darwin Chambers Company, St Louis, MO) and in a refrigerator (2°C–8°C) for an accelerated stability study of up to six months.16,32,33", "These experimental results suggest that this extended-release microparticulate system loaded with LR cells could be prepared by a conventional coacervation and phase separation technique. It has the potential to deliver viable LR cells to the gut for an extended period of time, while maintaining the viability of LR cells during storage and gastrointestinal transit, and could be viewed as an alternative to conventional dosage forms. However, extensive in vivo studies will be required to establish the use of a coacervate extended-release microparticulate system as an alternative to the conventional dosage form of LR." ]
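The evaluation equations (1)-(8) quoted in the methods above are simple ratios, collected below as plain functions for convenience. Equation 4 is implemented exactly as printed in the source (average colonies divided by the dilution factor, times 100); all numeric inputs in the usage lines are hypothetical placeholders, not data from the study.

```python
def coating_weight_gain(wf, wi):
    """Eq 1: percent weight gain from core (wi) and coated (wf) tare weights."""
    return (wf - wi) / wi * 100.0

def percent_yield(w_recovered, w_input):
    """Eq 2: recovered coated-microparticle weight over total drug + polymer input."""
    return w_recovered / w_input * 100.0

def viable_cells_hemocytometer(avg_count_per_square, dilution_factor=5):
    """Eq 3: hemocytometer count; the study used a dilution factor of 5."""
    return avg_count_per_square * dilution_factor * 1e4

def cfu_per_unit(avg_colonies_per_plate, dilution_factor):
    """Eq 4, as printed in the source text."""
    return avg_colonies_per_plate / dilution_factor * 100.0

def entrapment_efficiency(practical_count, theoretical_count):
    """Eq 5: practical over theoretical viable spore count."""
    return practical_count / theoretical_count * 100.0

def percent_swelling(d_t, d_0):
    """Eq 6: diameter at time t versus initial diameter."""
    return (d_t - d_0) / d_0 * 100.0

def adhesive_strength(n_adhering, n_fed):
    """Eq 7: adhering microparticles (NS) over microparticles fed (N0)."""
    return n_adhering / n_fed * 100.0

def percent_mucoadhesion(w_applied, w_washed_off):
    """Eq 8: fraction retained on mucosa after the washoff test."""
    return (w_applied - w_washed_off) / w_applied * 100.0

# Illustrative usage with made-up numbers:
print(coating_weight_gain(1.15, 1.00))        # 15.0 % w/w
print(entrapment_efficiency(5.2e7, 9.0e7))    # ~57.8 %
print(percent_mucoadhesion(100.0, 22.0))      # 78.0 %
```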
[ null, null, null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Materials", "In-house LR specification compliance test", "Preparation of mucoadhesive microparticles", "Coating of microparticles", "Coating stage percent weight gain", "Percent yield study", "Measurement of viable cell number", "Direct microscopic count using dye exclusion test", "Viable plate counts", "Entrapment efficiency", "Morphology", "Particle size, size distribution, and zeta potential", "Flow properties", "In vitro swelling", "Mucoadhesion", "Ex vivo mucoadhesive strength", "In vitro washoff test", "In vitro release", "In vitro release kinetics, statistical evaluation, and data fitting", "In vivo probiotic activity", "Accelerated stability", "Results and discussion", "Conclusion" ]
[ "Intake of viable Lactobacillus rhamnosus (LR) cells, at around 107 cfu,1,2 aids in the prevention of intestinal tract illnesses,3 suppresses bacterial infection in renal patients,4 safeguards the urogenital tract by excreting biosurfactants,5 stimulates antibody production, aids the immune system, assists the phagocytic process, helps the body to combat dangerous invasive bacteria, controls food-associated allergic inflammation,6 shortens the duration of diarrhea associated with rotavirus infection,7 and reduces use of antibiotics to treat Helicobacter pylori infection.8\nReported therapeutic benefits are associated with the ability of LR to secret coagulin, a bacteriocin, which is active against a broad spectrum of enteric microbes.1 LR is well tolerated with very rare side effects, and its regular intake can be effective in supplementing and maintaining digestive tract health. Processing conditions during formulation and noncompliance with storage requirements during shipment and storage result in loss of cell viability in the dosage formulation. Acidic conditions in the stomach, various hydrolytic enzymes, and bile salts in the gastrointestinal tract also adversely affect the viability of LR after ingestion.9–14\nNowadays, microparticulate systems have been exploited, not only to reduce loss of cell viability during storage and transport, but also to improve and maintain viable cells arriving in the intestine.9–11 Decreased performance of microparticles is attributable to their short gastric retention time, a physiological limitation which can be improved by coupling mucoadhesion properties to the microparticles through developing mucoadhesive microparticles which will in turn simultaneously improve gastric retention time and bioavailability.15–17 Hypromellose and hypromellose phthalate are safe for human consumption, and because of the good mucoadhesive and release rate-controlling properties of hypromellose, it is preferred in mucoadhesive formulations.16–19 These observations indicate a strong need to develop a dosage form that will deliver LR into the gut with improved gastric retention time and adequate stability during storage and gastrointestinal transit, which can be achieved with extended-release mucoadhesive microparticles.", " Materials Freeze-dried LR R0011-150 powder was donated by Cipla Limited (Mumbai, India). Hypromellose phthalate (HP-50) was donated by Glenmark Pharmaceuticals Limited (Nasik, India). Different grades of hypromellose, ie, Methocel E5 Premium LV (E5), Methocel E50 Premium LV (E50), and Methocel E10 M Premium CR (E10 M), were donated by Indoco Remedies Limited (Mumbai, India). DeMann Rogosa Sharpe agar media and other analytical grade laboratory chemicals were purchased from HiMedia Laboratories Limited (Mumbai, India).\nFreeze-dried LR R0011-150 powder was donated by Cipla Limited (Mumbai, India). Hypromellose phthalate (HP-50) was donated by Glenmark Pharmaceuticals Limited (Nasik, India). Different grades of hypromellose, ie, Methocel E5 Premium LV (E5), Methocel E50 Premium LV (E50), and Methocel E10 M Premium CR (E10 M), were donated by Indoco Remedies Limited (Mumbai, India). 
DeMann Rogosa Sharpe agar media and other analytical grade laboratory chemicals were purchased from HiMedia Laboratories Limited (Mumbai, India).\n In-house LR specification compliance test A number of cell count tests (bacteriological, total aerobic bacteria, coliforms, enterobacteriaceae, other Gram-negative bacteria, yeast, molds), and tests to ensure the absence of contaminants (Escherichia coli, Staphylococcus aureus, and Salmonella), were performed as a compliance to the specifications of the certificate of analysis.\nA number of cell count tests (bacteriological, total aerobic bacteria, coliforms, enterobacteriaceae, other Gram-negative bacteria, yeast, molds), and tests to ensure the absence of contaminants (Escherichia coli, Staphylococcus aureus, and Salmonella), were performed as a compliance to the specifications of the certificate of analysis.\n Preparation of mucoadhesive microparticles Core mucoadhesive microparticles of LR were prepared aseptically with hypromellose employing coacervation and phase separation technique.16,20 Hypromellose 5 g was dissolved in 200 mL of cold deionized water (4°C ± 2°C). Polysorbate-80 2 g was dissolved in this solution under stirring, followed by aseptic filtration using a 0.45 μm PVDF filter membrane (Millipore Corporation, Bedford, MA). A calculated quantity of LR was dispersed in the above solution, sonicated at 20 kHz for 1 minute, and the temperature was raised gradually up to 30°C ± 2°C with stirring at 500 ± 25 rpm for 30 minutes. Acetone 50 mL was added dropwise under stirring and stirred for a further 10 minutes. Microparticles were collected by aseptic filtration of the dispersion with a 10 μm nylon filter (Millipore Corporation), followed by washing three times with sterile water for injection (30°C ± 2°C) and kept in a desiccator for 24 hours. All formulation batches having the composition described in Table 1 were prepared in triplicate. Aseptic processing was carried out on the bench using a horizontal laminar flow clean air work station (1500048-24-24, Klenzaids Bioclean Devices Ltd, Mumbai, India).16,20\nCore mucoadhesive microparticles of LR were prepared aseptically with hypromellose employing coacervation and phase separation technique.16,20 Hypromellose 5 g was dissolved in 200 mL of cold deionized water (4°C ± 2°C). Polysorbate-80 2 g was dissolved in this solution under stirring, followed by aseptic filtration using a 0.45 μm PVDF filter membrane (Millipore Corporation, Bedford, MA). A calculated quantity of LR was dispersed in the above solution, sonicated at 20 kHz for 1 minute, and the temperature was raised gradually up to 30°C ± 2°C with stirring at 500 ± 25 rpm for 30 minutes. Acetone 50 mL was added dropwise under stirring and stirred for a further 10 minutes. Microparticles were collected by aseptic filtration of the dispersion with a 10 μm nylon filter (Millipore Corporation), followed by washing three times with sterile water for injection (30°C ± 2°C) and kept in a desiccator for 24 hours. All formulation batches having the composition described in Table 1 were prepared in triplicate. Aseptic processing was carried out on the bench using a horizontal laminar flow clean air work station (1500048-24-24, Klenzaids Bioclean Devices Ltd, Mumbai, India).16,20\n Coating of microparticles HP-50 solution 200 mL (10% w/w) was prepared with phosphate buffer21 at pH 6.8, and polyethylene glycol 200 4 g and polysorbate-80 2 g was dissolved in it. 
The solution was filtered aseptically using a 0.45 μm PVDF filter membrane followed by dispersing tare core microparticles in it under stirring at 300 ± 25 rpm, and then 40 mL of propan-2-ol was added dropwise. Stirring was continued for 30 minutes, then the coated microparticles were separated by aseptic filtration, washed three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours, followed by determination of the final weight, aseptically packed in glass vials, and stored in a refrigerator for further use.\nHP-50 solution 200 mL (10% w/w) was prepared with phosphate buffer21 at pH 6.8, and polyethylene glycol 200 4 g and polysorbate-80 2 g was dissolved in it. The solution was filtered aseptically using a 0.45 μm PVDF filter membrane followed by dispersing tare core microparticles in it under stirring at 300 ± 25 rpm, and then 40 mL of propan-2-ol was added dropwise. Stirring was continued for 30 minutes, then the coated microparticles were separated by aseptic filtration, washed three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours, followed by determination of the final weight, aseptically packed in glass vials, and stored in a refrigerator for further use.\n Coating stage percent weight gain From the tare weight (WI) of the dried core microparticles that had been subjected to coating and the tare weight (WF) of the dried coated microparticles, the coating stage percent weight gain value was determined using equation 1.\n(1)Coating stage percent weight gain (% w/w)=WF−WIWI×100\nFrom the tare weight (WI) of the dried core microparticles that had been subjected to coating and the tare weight (WF) of the dried coated microparticles, the coating stage percent weight gain value was determined using equation 1.\n(1)Coating stage percent weight gain (% w/w)=WF−WIWI×100\n Percent yield study Calculation for percent yield values (w/w) of all batches were done using equation 2.\n(2)Percent yield=W1(weight of coated microparticles recovered)W2[Weight (drug (viable cell+nonviable cell)+polymer)]×100\nCalculation for percent yield values (w/w) of all batches were done using equation 2.\n(2)Percent yield=W1(weight of coated microparticles recovered)W2[Weight (drug (viable cell+nonviable cell)+polymer)]×100\n Measurement of viable cell number Measurement of viable cells in sample was done using the following methods.22\n Direct microscopic count using dye exclusion test A thoroughly mixed cell suspension (2–5 × 105 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette and the chamber was allowed to fill up by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. 
This is a simple and rapid method that provides an approximate result, and was performed in triplicate.\n(3)Viable cells=average viable cells count per square× dilution factor×104\nA thoroughly mixed cell suspension (2–5 × 105 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette and the chamber was allowed to fill up by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and was performed in triplicate.\n(3)Viable cells=average viable cells count per square× dilution factor×104\n Viable plate counts One gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm size) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cell/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS, 21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source, thus LR cells will not proliferate in this media, and remain in a state of stasis until plated on media containing a carbon source.22\nDeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.\nSix plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.\n(4)Number of clu=Average number of coloniescounted per plateDilution factor×100\nOne gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm size) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cell/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. 
Saline TS,21 simulated gastric fluid TS, 21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source, thus LR cells will not proliferate in this media, and remain in a state of stasis until plated on media containing a carbon source.22\nDeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.\nSix plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.\n(4)Number of clu=Average number of coloniescounted per plateDilution factor×100\nMeasurement of viable cells in sample was done using the following methods.22\n Direct microscopic count using dye exclusion test A thoroughly mixed cell suspension (2–5 × 105 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette and the chamber was allowed to fill up by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and was performed in triplicate.\n(3)Viable cells=average viable cells count per square× dilution factor×104\nA thoroughly mixed cell suspension (2–5 × 105 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette and the chamber was allowed to fill up by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and was performed in triplicate.\n(3)Viable cells=average viable cells count per square× dilution factor×104\n Viable plate counts One gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm size) with sterile saline TS and mixed thoroughly. 
Serial dilution was continued until a suitable dilution was achieved (approximately 100 cell/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS, 21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source, thus LR cells will not proliferate in this media, and remain in a state of stasis until plated on media containing a carbon source.22\nDeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.\nSix plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.\n(4)Number of clu=Average number of coloniescounted per plateDilution factor×100\nOne gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm size) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cell/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS, 21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source, thus LR cells will not proliferate in this media, and remain in a state of stasis until plated on media containing a carbon source.22\nDeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.\nSix plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.\n(4)Number of clu=Average number of coloniescounted per plateDilution factor×100\n Entrapment efficiency In an aseptic manner, 500 mg of accurately weighed coated microparticles were kept with 25 mL of sterile simulated intestinal fluid in a hermetically sealed sterile glass vial at 4°C ± 2°C for 24 hours. The dispersion was subjected to a viable plate count (ie, a viable spore count value in cfu/g) and entrapment efficiency was calculated using equation 5.16\n(5)Percent entrapment efficiency=Practical viable spore count valueTheoretical viable spore count value×100\nIn an aseptic manner, 500 mg of accurately weighed coated microparticles were kept with 25 mL of sterile simulated intestinal fluid in a hermetically sealed sterile glass vial at 4°C ± 2°C for 24 hours. 
The dispersion was subjected to a viable plate count (ie, a viable spore count value in cfu/g) and entrapment efficiency was calculated using equation 5.16\n(5)Percent entrapment efficiency=Practical viable spore count valueTheoretical viable spore count value×100\n Morphology The coated microparticles were mounted on aluminum stubs using double-sided adhesive tape. The stubs were then vacuum-coated with a thin layer of gold and examined with a scanning electron microscope (JSM 5610 LV, Jeol, Tokyo, Japan).23–28\nThe coated microparticles were mounted on aluminum stubs using double-sided adhesive tape. The stubs were then vacuum-coated with a thin layer of gold and examined with a scanning electron microscope (JSM 5610 LV, Jeol, Tokyo, Japan).23–28\n Particle size, size distribution, and zeta potential The core and coated microparticles were dispersed in deionized water (pH 6.8) and sonicated at 20 kHz for three minutes to get a homogenous dispersion (0.5% w/v). The dispersions were put into a small-volume disposable zeta cell and subjected to particle size study using photon correlation spectroscopy with an inbuilt Zetasizer (Nano ZS, Malvern Instruments, Worcestershire, UK) at 633 nm and 25°C ± 0.1°C. The electrophoretic mobility measured (in mm/sec) was converted to the zeta potential.16,25–30\nThe core and coated microparticles were dispersed in deionized water (pH 6.8) and sonicated at 20 kHz for three minutes to get a homogenous dispersion (0.5% w/v). The dispersions were put into a small-volume disposable zeta cell and subjected to particle size study using photon correlation spectroscopy with an inbuilt Zetasizer (Nano ZS, Malvern Instruments, Worcestershire, UK) at 633 nm and 25°C ± 0.1°C. The electrophoretic mobility measured (in mm/sec) was converted to the zeta potential.16,25–30\n Flow properties The flow properties of the coated microparticles were determined from the result of the study parameters, ie, angle of repose, Carr’s index, and the Hausner ratio.16,21\nThe flow properties of the coated microparticles were determined from the result of the study parameters, ie, angle of repose, Carr’s index, and the Hausner ratio.16,21\n In vitro swelling An in vitro swelling test of the coated microparticles was conducted in simulated intestinal fluid. The size of the dried microparticles and those after incubation in simulated intestinal fluid for 5 hours were measured using a calibrated optical microscope (CX RIII, Labomed, Ambala, India). Percent swelling value was determined from the diameter of the microparticles at time t (DT) and initial time t= 0 (D0) using equation 6.16\n(6)Percent swelling=[DT−D0]/D0×100\nAn in vitro swelling test of the coated microparticles was conducted in simulated intestinal fluid. The size of the dried microparticles and those after incubation in simulated intestinal fluid for 5 hours were measured using a calibrated optical microscope (CX RIII, Labomed, Ambala, India). Percent swelling value was determined from the diameter of the microparticles at time t (DT) and initial time t= 0 (D0) using equation 6.16\n(6)Percent swelling=[DT−D0]/D0×100\n Mucoadhesion Following institutional animal ethical committee guidelines, the mucoadhesion affinity of the coated microparticles for intestinal mucosa was assessed by the following methods.\n Ex vivo mucoadhesive strength A suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (No) was determined by optical microscopy. 
One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16\n(7)Percent adhesive strength=[NS/N0]×100\nA suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (No) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16\n(7)Percent adhesive strength=[NS/N0]×100\n In vitro washoff test A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31\n(8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100\nA strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31\n(8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100\nFollowing institutional animal ethical committee guidelines, the mucoadhesion affinity of the coated microparticles for intestinal mucosa was assessed by the following methods.\n Ex vivo mucoadhesive strength A suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (No) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. 
Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16\n(7)Percent adhesive strength=[NS/N0]×100\nA suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (No) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16\n(7)Percent adhesive strength=[NS/N0]×100\n In vitro washoff test A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31\n(8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100\nA strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31\n(8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100\n In vitro release In vitro release studies of the coated microparticles were done using a USP basket apparatus (TDT-06T, Electrolab, Mumbai, India) at 37°C ± 0.5°C and 100 rpm containing 900 mL of sterile dissolution medium, ie, simulated gastric fluid and simulated intestinal fluid with about 1 g of accurately weighed microparticles contained in the basket (wrapped with 100 mesh nylon cloth) of dissolution apparatus. At predetermined time points, 5 mL of dissolution medium was withdrawn for up to 14 hours, with immediate replacement of fresh dissolution medium, subjected to viable cell number determination, and the result was expressed as the percentage of viable LR cells released with respect to the practical viable spore count value.16\nIn vitro release studies of the coated microparticles were done using a USP basket apparatus (TDT-06T, Electrolab, Mumbai, India) at 37°C ± 0.5°C and 100 rpm containing 900 mL of sterile dissolution medium, ie, simulated gastric fluid and simulated intestinal fluid with about 1 g of accurately weighed microparticles contained in the basket (wrapped with 100 mesh nylon cloth) of dissolution apparatus. 
At predetermined time points, 5 mL of dissolution medium was withdrawn for up to 14 hours, with immediate replacement of fresh dissolution medium, subjected to viable cell number determination, and the result was expressed as the percentage of viable LR cells released with respect to the practical viable spore count value.16\n In vitro release kinetics, statistical evaluation, and data fitting A mean value of three determinations at each time point was used to fit an in vitro viable cell release profile of all formulation batches to different kinetic models so as to find their release exponents. The mean value of 12 determinations was used to estimate the difference factor (f1), the similarity factor (f2), and the two indices of Rescigno (ξ1 and ξ2).16,32 Statistical analysis of percent released data and other data were performed using one-way analysis of variance at a significance level of 5% (P < 0.05). In vitro release kinetic studies, statistical evaluation, data fitting, nonlinear least square curve fitting, simulation, and plotting were performed using Excel software (version 2007, Microsoft Software Inc, Redmond, WA) for determining the parameters of each equation.\nA mean value of three determinations at each time point was used to fit an in vitro viable cell release profile of all formulation batches to different kinetic models so as to find their release exponents. The mean value of 12 determinations was used to estimate the difference factor (f1), the similarity factor (f2), and the two indices of Rescigno (ξ1 and ξ2).16,32 Statistical analysis of percent released data and other data were performed using one-way analysis of variance at a significance level of 5% (P < 0.05). In vitro release kinetic studies, statistical evaluation, data fitting, nonlinear least square curve fitting, simulation, and plotting were performed using Excel software (version 2007, Microsoft Software Inc, Redmond, WA) for determining the parameters of each equation.\n In vivo probiotic activity The in vivo probiotic activity of the coated microparticles was evaluated using a mouse enterococci stool colonization method, following institutional animal ethical committee guidelines.16 One milliliter of coated microparticle dispersion (102 cfu/mL) in simulated intestinal fluid was fed to albino mice in groups of six. Stools were collected at 6-hourly intervals for up to 48 hours and subjected to an enterococci colonization density study.\nThe in vivo probiotic activity of the coated microparticles was evaluated using a mouse enterococci stool colonization method, following institutional animal ethical committee guidelines.16 One milliliter of coated microparticle dispersion (102 cfu/mL) in simulated intestinal fluid was fed to albino mice in groups of six. 
In vivo probiotic activity
The in vivo probiotic activity of the coated microparticles was evaluated using a mouse enterococci stool colonization method, following institutional animal ethical committee guidelines.16 One milliliter of coated microparticle dispersion (10^2 cfu/mL) in simulated intestinal fluid was fed to albino mice in groups of six. Stools were collected at 6-hourly intervals for up to 48 hours and subjected to an enterococci colonization density study.

Accelerated stability
Following International Conference on Harmonization guidelines, coated microparticles from all formulation batches were stored under a range of temperature and humidity conditions (30°C ± 2°C/65% ± 5% relative humidity and 40°C ± 2°C/75% ± 5% relative humidity) in a stability analysis chamber (Darwin Chambers Company, St Louis, MO) and in a refrigerator (2°C–8°C) for an accelerated stability study of up to six months.16,32,33

Freeze-dried LR R0011-150 powder was donated by Cipla Limited (Mumbai, India). Hypromellose phthalate (HP-50) was donated by Glenmark Pharmaceuticals Limited (Nasik, India). Different grades of hypromellose, ie, Methocel E5 Premium LV (E5), Methocel E50 Premium LV (E50), and Methocel E10 M Premium CR (E10 M), were donated by Indoco Remedies Limited (Mumbai, India). DeMann Rogosa Sharpe agar media and other analytical grade laboratory chemicals were purchased from HiMedia Laboratories Limited (Mumbai, India).

A number of cell count tests (bacteriological, total aerobic bacteria, coliforms, enterobacteriaceae, other Gram-negative bacteria, yeast, molds), and tests to ensure the absence of contaminants (Escherichia coli, Staphylococcus aureus, and Salmonella), were performed in compliance with the specifications of the certificate of analysis.

Core mucoadhesive microparticles of LR were prepared aseptically with hypromellose employing a coacervation and phase separation technique.16,20 Hypromellose 5 g was dissolved in 200 mL of cold deionized water (4°C ± 2°C). Polysorbate-80 2 g was dissolved in this solution under stirring, followed by aseptic filtration using a 0.45 μm PVDF filter membrane (Millipore Corporation, Bedford, MA). A calculated quantity of LR was dispersed in the above solution, sonicated at 20 kHz for 1 minute, and the temperature was raised gradually up to 30°C ± 2°C with stirring at 500 ± 25 rpm for 30 minutes. Acetone 50 mL was added dropwise under stirring and stirring was continued for a further 10 minutes. Microparticles were collected by aseptic filtration of the dispersion with a 10 μm nylon filter (Millipore Corporation), followed by washing three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours. All formulation batches having the composition described in Table 1 were prepared in triplicate. Aseptic processing was carried out on the bench using a horizontal laminar flow clean air work station (1500048-24-24, Klenzaids Bioclean Devices Ltd, Mumbai, India).16,20
HP-50 solution 200 mL (10% w/w) was prepared with phosphate buffer21 at pH 6.8, and polyethylene glycol 200 4 g and polysorbate-80 2 g were dissolved in it. The solution was filtered aseptically using a 0.45 μm PVDF filter membrane, the tared core microparticles were dispersed in it under stirring at 300 ± 25 rpm, and then 40 mL of propan-2-ol was added dropwise. Stirring was continued for 30 minutes, then the coated microparticles were separated by aseptic filtration, washed three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours, followed by determination of the final weight; they were then aseptically packed in glass vials and stored in a refrigerator for further use.

From the tare weight (WI) of the dried core microparticles that had been subjected to coating and the tare weight (WF) of the dried coated microparticles, the coating stage percent weight gain value was determined using equation 1.

(1) Coating stage percent weight gain (% w/w) = [(WF − WI)/WI] × 100

Calculation of percent yield values (w/w) of all batches was done using equation 2, where W1 is the weight of coated microparticles recovered and W2 is the combined weight of drug (viable plus nonviable cells) and polymer.

(2) Percent yield = [W1/W2] × 100

Measurement of viable cells in a sample was done using the following methods.22

Direct microscopic count using dye exclusion test
A thoroughly mixed cell suspension (2–5 × 10^5 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette, and the chamber was allowed to fill by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and it was performed in triplicate.

(3) Viable cells = average viable cell count per square × dilution factor × 10^4
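To make the arithmetic of equation 3 concrete, here is a minimal Python sketch (illustrative only; the square counts are invented) that converts hemocytometer counts into a viable cell concentration.

# Equation 3 as code; all counts below are hypothetical.

def viable_cells_per_ml(square_counts, dilution_factor=5):
    """Hemocytometer estimate: mean viable count per 1 mm square
    x dilution factor x 10^4 gives viable cells per mL."""
    average = sum(square_counts) / len(square_counts)
    return average * dilution_factor * 1e4

# Counts from the center square and the four corner squares.
counts = [48, 52, 45, 50, 47]
print(f"{viable_cells_per_ml(counts):.2e} viable cells/mL")  # ~2.42e+06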
Viable plate counts
One gram of sample (alternately, one mL of sample solution) containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cells/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS,21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source; thus LR cells will not proliferate in these media, and they remain in a state of stasis until plated on media containing a carbon source.22

DeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.

Six plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.

(4) Number of cfu = [average number of colonies counted per plate/dilution factor] × 100
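A small Python sketch of the plate-count arithmetic in equation 4 (illustrative only; the colony counts are invented, and the dilution factor is assumed to be expressed as a fraction of the original concentration, eg, 1e-6 for a 10^-6 dilution):

# Equation 4 as code; colony counts and the dilution are hypothetical.

def cfu_per_unit(colony_counts, dilution_factor):
    """Average the replicate plate counts and scale by the dilution factor
    (expressed as a fraction, eg 1e-6 for a 10^-6 dilution)."""
    average = sum(colony_counts) / len(colony_counts)
    return (average / dilution_factor) * 100  # x100 as written in equation 4

plates = [112, 98, 105, 110, 101, 99]  # colonies on six replicate plates
print(f"{cfu_per_unit(plates, 1e-6):.2e} cfu/g")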
Entrapment efficiency
In an aseptic manner, 500 mg of accurately weighed coated microparticles were kept with 25 mL of sterile simulated intestinal fluid in a hermetically sealed sterile glass vial at 4°C ± 2°C for 24 hours. The dispersion was subjected to a viable plate count (ie, a practical viable spore count value in cfu/g) and entrapment efficiency was calculated using equation 5.16

(5) Percent entrapment efficiency = [practical viable spore count value/theoretical viable spore count value] × 100

Morphology
The coated microparticles were mounted on aluminum stubs using double-sided adhesive tape. The stubs were then vacuum-coated with a thin layer of gold and examined with a scanning electron microscope (JSM 5610 LV, Jeol, Tokyo, Japan).23–28

Particle size, size distribution, and zeta potential
The core and coated microparticles were dispersed in deionized water (pH 6.8) and sonicated at 20 kHz for three minutes to obtain a homogeneous dispersion (0.5% w/v). The dispersions were placed in a small-volume disposable zeta cell and subjected to particle size study using photon correlation spectroscopy with an inbuilt Zetasizer (Nano ZS, Malvern Instruments, Worcestershire, UK) at 633 nm and 25°C ± 0.1°C. The measured electrophoretic mobility (in mm/sec) was converted to the zeta potential.16,25–30

Flow properties
The flow properties of the coated microparticles were determined from the results of the study parameters, ie, angle of repose, Carr's index, and the Hausner ratio.16,21

In vitro swelling
An in vitro swelling test of the coated microparticles was conducted in simulated intestinal fluid. The sizes of the dried microparticles and of those incubated in simulated intestinal fluid for 5 hours were measured using a calibrated optical microscope (CX RIII, Labomed, Ambala, India). Percent swelling was determined from the diameter of the microparticles at time t (DT) and at initial time t = 0 (D0) using equation 6.16

(6) Percent swelling = [(DT − D0)/D0] × 100
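Equations 5 and 6 are simple ratios; the following Python sketch (with invented values) shows both calculations side by side.

# Equations 5 and 6 as code; the numbers are hypothetical.

def percent_entrapment_efficiency(practical_cfu, theoretical_cfu):
    """Equation 5: practical vs theoretical viable spore count (cfu/g)."""
    return practical_cfu / theoretical_cfu * 100.0

def percent_swelling(d_t, d_0):
    """Equation 6: relative diameter increase after incubation."""
    return (d_t - d_0) / d_0 * 100.0

print(percent_entrapment_efficiency(5.8e6, 1.0e7))  # 58.0 %
print(percent_swelling(d_t=49.9, d_0=49.3))         # ~1.22 % (diameters in um)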
The coacervation and phase separation technique described here is a simple, rapid, two-step method which appears suitable for the preparation of coacervate extended-release mucoadhesive microparticles loaded with LR cells. It eliminates exposure of LR cells to high temperatures, organic solvents, and mechanical stress, while maintaining their viability during processing. Temperatures above 20°C and nonaqueous solvents adversely affect and decrease the viability of LR, which is the reason for commencing the developmental processes below 20°C in aqueous medium. Hypromellose is soluble in cold water, its aqueous solubility decreasing with increasing temperature, and it is insoluble in organic solvents such as chloroform, dichloromethane, ether, and acetone.19 Hypromellose has excellent rate-controlling and mucoadhesion properties.16,17,19 Hypromellose phthalate is soluble in aqueous alkali and insoluble in water and propan-2-ol.19 Hypromellose was selected as the mucoadhesive polymer and hypromellose phthalate as the coating polymer because of these properties, and both are considered safe for human consumption.16–19 Polysorbate-80 was incorporated in the formulation as a dispersing agent for homogeneous dispersion of LR cells, and polyethylene glycol 200 was incorporated into the coating solution as a plasticizer to impart plasticity to the coat and prevent it from splitting and cracking.

The LR cells used complied with the certificate of analysis specifications when tested in accordance with the method of analysis provided by the manufacturer. Coating stage percent weight gain values of the formulation batches were in the range of 10.1%–13.2% w/w.

The percent yield values of the formulation batches ranged from 41.24% to 58.18% w/w, varying with the grade of hypromellose used in the order E5 > E50 > E10 M; an increase in the LR to hypromellose ratio decreased the value, and the highest value was observed for the formulation containing E5 (Table 1). A similar trend was noticed for the entrapment efficiency values, which lay between 45.18% and 64.16% cfu/g.

Scanning electron micrographs (Figure 1) of formulations F1, F3, and F5 demonstrate the surface morphology and particle size of the coated microparticles. The microparticles of all formulation batches were spherical with a smooth surface, with the exception of those of formulation F5, whose surface was coarser and shriveled.
A coarser and shriveled surface texture will in turn improve adhesion through stronger mechanical interactions.17

The mean particle size values of all formulation batches were in the range of 33.10–49.62 μm (Table 1), increasing with polymer concentration and varying with the grade of hypromellose in the order E10 M > E50 > E5, with the highest value for microparticles prepared with E10 M (Table 1 and Figure 2). A nearly equal zeta potential of around −19.2 mV was observed for uncoated microparticles of all formulation batches, while coated microparticles had a nearly equal zeta potential of around −11.5 mV. This value is lower in magnitude than that of the uncoated microparticles, indicating the presence of hypromellose phthalate on the surface of the microparticles. The zeta potential report for the uncoated microparticles from formulation batch F1 is shown in Figure 3. The flow properties of the formulation batches lay within the passable to very poor range.

The percent swelling value of the formulation batches was 0.82%–1.38%, decreasing with increasing LR to hypromellose ratio. Variation in the grade of hypromellose also decreased the value in the order E5 > E50 > E10 M, with the highest value for the microparticles prepared with E5 (Table 1).

The percent adhesive strength of all formulation batches was 42.61%–73.36%, decreasing with an increase in the LR to hypromellose ratio (Table 1). A difference in the grade of hypromellose varied the value in the order E5 > E50 > E10 M, with the highest value seen for E5. A similar trend was noticed with the percent mucoadhesion value, which ranged between 44.43% and 75.92% for all formulation batches. These results indicate that the mucoadhesion properties of the microparticles varied according to the grade of hypromellose and the LR to hypromellose ratio, and that microparticles from formulation batch F1 had the highest mucoadhesion affinity for the intestinal mucosa, so may exhibit a high gastric retention time in comparison with the other batches.

The in vitro swelling test, ex vivo mucoadhesive strength determination, and in vitro washoff test results, as measures of the mucoadhesion affinity of the microparticles, reveal that the mechanism of mucoadhesion initially follows the adsorption theory34,35 and subsequently the diffusion theory.35

The amount of viable LR cells released from the microparticle system in simulated gastric fluid was negligible, but viable LR cell release was well regulated and extended in simulated intestinal fluid (Figure 4), indicating that the enteric coating of the microparticles competently protects cell viability at acidic pH, prevents cell release at gastric pH, and releases viable LR cells at intestinal pH.

The results of the in vitro swelling test, ex vivo mucoadhesive strength determination, in vitro washoff test, and in vitro release profile study demonstrate that, in the intestine (pH > 5.0), the coating of the microparticle dissolves, releasing the core microparticles. The liberated core microparticles swell in the intestine, resulting in intimate contact between the microparticles and the mucous membrane.
The mucoadhesive chains then penetrate into the crevices of the tissue surface and intermingle with ions in the mucus, with formation of hydrogen bonds between the carboxylic groups of the polymer chains (hypromellose) and mucin molecules, leading to adhesion of the microparticles to the mucous membrane lining the intestinal wall.36–38

The kinetic constant and release exponent values of the model-dependent approaches (Table 1) show that the mechanism of viable LR cell release from the coated microparticles follows a zero-order kinetics model, because the plot of cumulative percent viable cell release versus time was linear, with the highest regression coefficient (r²) in comparison with those of the other models. For all formulation batches, the zero-order kinetics model r² value ranged between 0.9834 and 0.9947. The shape parameter values for the Weibull model (Table 1) reveal that the curve is sigmoid or S-shaped, with upward curvature followed by a turning point, as β exceeded 1.16,32 The location parameter (Td) of the Weibull model (Table 1) characterizes the time interval necessary to dissolve or release 63.2% of the drug present in the delivery system;16,32 the Td of the formulation batches ranged from 7.7580 to 26.637 hours, with r² values from 0.9310 to 0.9670.

Model-independent release exponent values are listed in Table 2 and show that, for all formulation pairs, ie, the intrapolymer and interpolymer batches, the ξ1 values lie between 0.070 and 0.328, the ξ2 values between 0.240 and 0.407, the f1 values between 17.00 and 58.00, and the f2 values between 24.00 and 68.00, indicating dissimilarity in the product performance of the formulation batches.16,32

A plot of the in vitro viable LR cell release profiles following a zero-order kinetics model for all formulation batches in simulated intestinal fluid is shown in Figure 4, and demonstrates that the rate of viable LR cell release from the microparticles decreased significantly with an increase in the LR to polymer ratio, while variation in the grade of hypromellose influenced the release rate from the microparticles in the order E10 M > E50 > E5.
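The zero-order and Weibull fits described above can be reproduced with a short script. The Python sketch below is illustrative only: the release data are invented, and the original study performed its fitting in Excel. It fits a zero-order line through the origin and estimates the Weibull parameters by linearization.

import math

# Hypothetical cumulative percent-released data (time in hours).
times    = [1, 2, 4, 6, 8, 10, 12, 14]
released = [6.8, 13.9, 27.5, 41.8, 55.2, 69.0, 82.4, 95.9]

# Zero-order model: Q(t) = k0 * t, least-squares slope through the origin.
k0 = sum(t * q for t, q in zip(times, released)) / sum(t * t for t in times)

# Weibull model: Q(t)/100 = 1 - exp(-(t/Td)^beta), linearized as
# log(-ln(1 - Q/100)) = beta*log(t) - beta*log(Td).
xs = [math.log(t) for t in times]
ys = [math.log(-math.log(1.0 - q / 100.0)) for q in released]
n = len(xs)
sx, sy = sum(xs), sum(ys)
beta = (n * sum(x * y for x, y in zip(xs, ys)) - sx * sy) / (n * sum(x * x for x in xs) - sx ** 2)
intercept = (sy - beta * sx) / n
td = math.exp(-intercept / beta)   # time needed to release 63.2%

print(f"k0 ~ {k0:.2f} %/h, beta ~ {beta:.2f}, Td ~ {td:.1f} h")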
The in vivo probiotic activity evaluation shows that oral administration of the extended-release mucoadhesive microparticles of LR from all formulation batches resulted in statistically significant reductions in the density of enterococci colonization in the stools of albino mice for up to 24–36 hours.

The stability study results show adequate stability of the microparticles under storage conditions of 30°C ± 2°C/65% ± 5% relative humidity, with no change in color or texture and no statistically significant decrease in viable LR cell content with respect to the practical viable spore count, and also confirm that the LR cells were compatible with the excipients used in the formulation. A statistically significant decrease in viable LR cell content with respect to the practical viable spore count was observed at 40°C ± 2°C/75% ± 5% relative humidity, indicating product instability under these storage conditions.

The extended-release mucoadhesive microparticles from formulation batch F1 were found to be superior to the other prototype formulations because they exhibited the highest values of percent yield, entrapment efficiency, and mucoadhesion affinity, protected the viability of LR cells during storage and gastrointestinal transit, and released viable LR cells in the gut for an extended period of time, following zero-order kinetics.

These experimental results suggest that this extended-release microparticulate system loaded with LR cells can be prepared by a conventional coacervation and phase separation technique. It has the potential to deliver viable LR cells to the gut for an extended period of time while maintaining the viability of LR cells during storage and gastrointestinal transit, and could be viewed as an alternative to conventional dosage forms. However, extensive in vivo studies will be required to establish the use of a coacervate extended-release microparticulate system as an alternative to the conventional dosage form of LR.
[ "intro", "materials|methods", "materials", null, null, null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, "methods", null, null, "results|discussion", null ]
[ "\nLactobacillus rhamnosus\n", "mucoadhesive", "microparticles", "extended-release", "intestine" ]
Introduction: Intake of viable Lactobacillus rhamnosus (LR) cells, at around 10^7 cfu,1,2 aids in the prevention of intestinal tract illnesses,3 suppresses bacterial infection in renal patients,4 safeguards the urogenital tract by excreting biosurfactants,5 stimulates antibody production, aids the immune system, assists the phagocytic process, helps the body to combat dangerous invasive bacteria, controls food-associated allergic inflammation,6 shortens the duration of diarrhea associated with rotavirus infection,7 and reduces the use of antibiotics to treat Helicobacter pylori infection.8 The reported therapeutic benefits are associated with the ability of LR to secrete coagulin, a bacteriocin which is active against a broad spectrum of enteric microbes.1 LR is well tolerated, with very rare side effects, and its regular intake can be effective in supplementing and maintaining digestive tract health. Processing conditions during formulation, and noncompliance with storage requirements during shipment and storage, result in loss of cell viability in the dosage formulation. Acidic conditions in the stomach, various hydrolytic enzymes, and bile salts in the gastrointestinal tract also adversely affect the viability of LR after ingestion.9–14 Nowadays, microparticulate systems have been exploited not only to reduce loss of cell viability during storage and transport, but also to improve and maintain the number of viable cells arriving in the intestine.9–11 Decreased performance of microparticles is attributable to their short gastric retention time, a physiological limitation which can be overcome by coupling mucoadhesion properties to the microparticles, ie, by developing mucoadhesive microparticles, which in turn simultaneously improve gastric retention time and bioavailability.15–17 Hypromellose and hypromellose phthalate are safe for human consumption, and because of the good mucoadhesive and release rate-controlling properties of hypromellose, it is preferred in mucoadhesive formulations.16–19 These observations indicate a strong need to develop a dosage form that will deliver LR into the gut with improved gastric retention time and adequate stability during storage and gastrointestinal transit, which can be achieved with extended-release mucoadhesive microparticles.

Materials and methods: Materials
Freeze-dried LR R0011-150 powder was donated by Cipla Limited (Mumbai, India). Hypromellose phthalate (HP-50) was donated by Glenmark Pharmaceuticals Limited (Nasik, India). Different grades of hypromellose, ie, Methocel E5 Premium LV (E5), Methocel E50 Premium LV (E50), and Methocel E10 M Premium CR (E10 M), were donated by Indoco Remedies Limited (Mumbai, India). DeMann Rogosa Sharpe agar media and other analytical grade laboratory chemicals were purchased from HiMedia Laboratories Limited (Mumbai, India).
In-house LR specification compliance test
A number of cell count tests (bacteriological, total aerobic bacteria, coliforms, enterobacteriaceae, other Gram-negative bacteria, yeast, molds), and tests to ensure the absence of contaminants (Escherichia coli, Staphylococcus aureus, and Salmonella), were performed in compliance with the specifications of the certificate of analysis.

Preparation of mucoadhesive microparticles
Core mucoadhesive microparticles of LR were prepared aseptically with hypromellose employing a coacervation and phase separation technique.16,20 Hypromellose 5 g was dissolved in 200 mL of cold deionized water (4°C ± 2°C). Polysorbate-80 2 g was dissolved in this solution under stirring, followed by aseptic filtration using a 0.45 μm PVDF filter membrane (Millipore Corporation, Bedford, MA). A calculated quantity of LR was dispersed in the above solution, sonicated at 20 kHz for 1 minute, and the temperature was raised gradually up to 30°C ± 2°C with stirring at 500 ± 25 rpm for 30 minutes. Acetone 50 mL was added dropwise under stirring and stirring was continued for a further 10 minutes. Microparticles were collected by aseptic filtration of the dispersion with a 10 μm nylon filter (Millipore Corporation), followed by washing three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours. All formulation batches having the composition described in Table 1 were prepared in triplicate. Aseptic processing was carried out on the bench using a horizontal laminar flow clean air work station (1500048-24-24, Klenzaids Bioclean Devices Ltd, Mumbai, India).16,20
Coating of microparticles
HP-50 solution 200 mL (10% w/w) was prepared with phosphate buffer21 at pH 6.8, and polyethylene glycol 200 4 g and polysorbate-80 2 g were dissolved in it. The solution was filtered aseptically using a 0.45 μm PVDF filter membrane, the tared core microparticles were dispersed in it under stirring at 300 ± 25 rpm, and then 40 mL of propan-2-ol was added dropwise. Stirring was continued for 30 minutes, then the coated microparticles were separated by aseptic filtration, washed three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours, followed by determination of the final weight; they were then aseptically packed in glass vials and stored in a refrigerator for further use.

Coating stage percent weight gain
From the tare weight (WI) of the dried core microparticles that had been subjected to coating and the tare weight (WF) of the dried coated microparticles, the coating stage percent weight gain value was determined using equation 1.

(1) Coating stage percent weight gain (% w/w) = [(WF − WI)/WI] × 100

Percent yield study
Calculation of percent yield values (w/w) of all batches was done using equation 2, where W1 is the weight of coated microparticles recovered and W2 is the combined weight of drug (viable plus nonviable cells) and polymer.

(2) Percent yield = [W1/W2] × 100

Measurement of viable cell number
Measurement of viable cells in a sample was done using the following methods.22

Direct microscopic count using dye exclusion test
A thoroughly mixed cell suspension (2–5 × 10^5 cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette, and the chamber was allowed to fill by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and it was performed in triplicate.

(3) Viable cells = average viable cell count per square × dilution factor × 10^4
Viable plate counts
One gram of sample (alternately, one mL of sample solution) containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm) with sterile saline TS and mixed thoroughly. Serial dilution was continued until a suitable dilution was achieved (approximately 100 cells/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS,21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source; thus LR cells will not proliferate in these media, and they remain in a state of stasis until plated on media containing a carbon source.22

DeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification.

Six plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4.

(4) Number of cfu = [average number of colonies counted per plate/dilution factor] × 100
Entrapment efficiency
In an aseptic manner, 500 mg of accurately weighed coated microparticles were kept with 25 mL of sterile simulated intestinal fluid in a hermetically sealed sterile glass vial at 4°C ± 2°C for 24 hours. The dispersion was subjected to a viable plate count (ie, a practical viable spore count value in cfu/g) and entrapment efficiency was calculated using equation 5.16

(5) Percent entrapment efficiency = [practical viable spore count value/theoretical viable spore count value] × 100
Morphology
The coated microparticles were mounted on aluminum stubs using double-sided adhesive tape. The stubs were then vacuum-coated with a thin layer of gold and examined with a scanning electron microscope (JSM 5610 LV, Jeol, Tokyo, Japan).23–28

Particle size, size distribution, and zeta potential
The core and coated microparticles were dispersed in deionized water (pH 6.8) and sonicated at 20 kHz for three minutes to obtain a homogeneous dispersion (0.5% w/v). The dispersions were placed in a small-volume disposable zeta cell and subjected to particle size study using photon correlation spectroscopy with an inbuilt Zetasizer (Nano ZS, Malvern Instruments, Worcestershire, UK) at 633 nm and 25°C ± 0.1°C. The measured electrophoretic mobility (in mm/sec) was converted to the zeta potential.16,25–30
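The Zetasizer performs the mobility-to-zeta conversion internally; as a rough illustration of what such a conversion involves, the following Python sketch applies the Smoluchowski approximation (ζ = ημ/ε) with the properties of water at 25°C. All values are illustrative and this is not the instrument's exact algorithm.

# Illustrative Smoluchowski conversion of electrophoretic mobility to zeta
# potential; the instrument performs an equivalent calculation internally.

ETA = 0.8872e-3               # viscosity of water at 25 C (Pa*s)
EPSILON = 78.5 * 8.854e-12    # dielectric permittivity of water (F/m)

def zeta_from_mobility(mobility_m2_per_vs: float) -> float:
    """Smoluchowski approximation: zeta (V) = eta * mu / epsilon."""
    return ETA * mobility_m2_per_vs / EPSILON

# A mobility of about -1.5e-8 m^2/(V*s) corresponds to roughly -19 mV.
mu = -1.5e-8
print(f"zeta ~ {zeta_from_mobility(mu) * 1e3:.1f} mV")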
Flow properties
The flow properties of the coated microparticles were determined from the results of the study parameters, ie, angle of repose, Carr's index, and the Hausner ratio.16,21

In vitro swelling
An in vitro swelling test of the coated microparticles was conducted in simulated intestinal fluid. The sizes of the dried microparticles and of those incubated in simulated intestinal fluid for 5 hours were measured using a calibrated optical microscope (CX RIII, Labomed, Ambala, India). Percent swelling was determined from the diameter of the microparticles at time t (DT) and at initial time t = 0 (D0) using equation 6.16

(6) Percent swelling = [(DT − D0)/D0] × 100

Mucoadhesion
Following institutional animal ethical committee guidelines, the mucoadhesion affinity of the coated microparticles for intestinal mucosa was assessed by the following methods.

Ex vivo mucoadhesive strength
A suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (N0) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three), which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value, as a measure of the ex vivo mucoadhesive strength test, was calculated using equation 7.16

(7) Percent adhesive strength = [NS/N0] × 100

In vitro washoff test
A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid at 37°C ± 0.5°C was circulated at a rate of 1 mL/min over the microparticles adhering to the intestinal mucosa. The weight of washed-out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value, as a measure of the in vitro washoff test, was calculated using equation 8.31

(8) Percent mucoadhesion = [(Wa − Wf)/Wa] × 100
One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three) which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and the intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value as a measure of ex vivo mucoadhesive strength test was calculated using equation 7.16 (7)Percent adhesive strength=[NS/N0]×100 In vitro washoff test A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31 (8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100 A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid of 37°C ± 0.5°C was circulated at a rate of 1 mL/min to the cell over microparticles adhering to the intestinal mucosa. The weight of washed out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value as a measure of the in vitro washoff test was calculated using equation 8.31 (8)Percent mucoadhesion=[(Wa−Wf)/Wa]×100 In vitro release In vitro release studies of the coated microparticles were done using a USP basket apparatus (TDT-06T, Electrolab, Mumbai, India) at 37°C ± 0.5°C and 100 rpm containing 900 mL of sterile dissolution medium, ie, simulated gastric fluid and simulated intestinal fluid with about 1 g of accurately weighed microparticles contained in the basket (wrapped with 100 mesh nylon cloth) of dissolution apparatus. At predetermined time points, 5 mL of dissolution medium was withdrawn for up to 14 hours, with immediate replacement of fresh dissolution medium, subjected to viable cell number determination, and the result was expressed as the percentage of viable LR cells released with respect to the practical viable spore count value.16 In vitro release studies of the coated microparticles were done using a USP basket apparatus (TDT-06T, Electrolab, Mumbai, India) at 37°C ± 0.5°C and 100 rpm containing 900 mL of sterile dissolution medium, ie, simulated gastric fluid and simulated intestinal fluid with about 1 g of accurately weighed microparticles contained in the basket (wrapped with 100 mesh nylon cloth) of dissolution apparatus. 
At predetermined time points, 5 mL of dissolution medium was withdrawn for up to 14 hours, with immediate replacement of fresh dissolution medium, subjected to viable cell number determination, and the result was expressed as the percentage of viable LR cells released with respect to the practical viable spore count value.16 In vitro release kinetics, statistical evaluation, and data fitting A mean value of three determinations at each time point was used to fit an in vitro viable cell release profile of all formulation batches to different kinetic models so as to find their release exponents. The mean value of 12 determinations was used to estimate the difference factor (f1), the similarity factor (f2), and the two indices of Rescigno (ξ1 and ξ2).16,32 Statistical analysis of percent released data and other data were performed using one-way analysis of variance at a significance level of 5% (P < 0.05). In vitro release kinetic studies, statistical evaluation, data fitting, nonlinear least square curve fitting, simulation, and plotting were performed using Excel software (version 2007, Microsoft Software Inc, Redmond, WA) for determining the parameters of each equation. A mean value of three determinations at each time point was used to fit an in vitro viable cell release profile of all formulation batches to different kinetic models so as to find their release exponents. The mean value of 12 determinations was used to estimate the difference factor (f1), the similarity factor (f2), and the two indices of Rescigno (ξ1 and ξ2).16,32 Statistical analysis of percent released data and other data were performed using one-way analysis of variance at a significance level of 5% (P < 0.05). In vitro release kinetic studies, statistical evaluation, data fitting, nonlinear least square curve fitting, simulation, and plotting were performed using Excel software (version 2007, Microsoft Software Inc, Redmond, WA) for determining the parameters of each equation. In vivo probiotic activity The in vivo probiotic activity of the coated microparticles was evaluated using a mouse enterococci stool colonization method, following institutional animal ethical committee guidelines.16 One milliliter of coated microparticle dispersion (102 cfu/mL) in simulated intestinal fluid was fed to albino mice in groups of six. Stools were collected at 6-hourly intervals for up to 48 hours and subjected to an enterococci colonization density study. The in vivo probiotic activity of the coated microparticles was evaluated using a mouse enterococci stool colonization method, following institutional animal ethical committee guidelines.16 One milliliter of coated microparticle dispersion (102 cfu/mL) in simulated intestinal fluid was fed to albino mice in groups of six. Stools were collected at 6-hourly intervals for up to 48 hours and subjected to an enterococci colonization density study. 
Materials: Freeze-dried LR R0011-150 powder was donated by Cipla Limited (Mumbai, India). Hypromellose phthalate (HP-50) was donated by Glenmark Pharmaceuticals Limited (Nasik, India). Different grades of hypromellose, ie, Methocel E5 Premium LV (E5), Methocel E50 Premium LV (E50), and Methocel E10 M Premium CR (E10 M), were donated by Indoco Remedies Limited (Mumbai, India). DeMann Rogosa Sharpe agar media and other analytical grade laboratory chemicals were purchased from HiMedia Laboratories Limited (Mumbai, India). In-house LR specification compliance test: A number of cell count tests (bacteriological, total aerobic bacteria, coliforms, enterobacteriaceae, other Gram-negative bacteria, yeast, molds) and tests to ensure the absence of contaminants (Escherichia coli, Staphylococcus aureus, and Salmonella) were performed to confirm compliance with the specifications of the certificate of analysis. Preparation of mucoadhesive microparticles: Core mucoadhesive microparticles of LR were prepared aseptically with hypromellose employing a coacervation and phase separation technique.16,20 Hypromellose 5 g was dissolved in 200 mL of cold deionized water (4°C ± 2°C). Polysorbate-80 2 g was dissolved in this solution under stirring, followed by aseptic filtration using a 0.45 μm PVDF filter membrane (Millipore Corporation, Bedford, MA). A calculated quantity of LR was dispersed in the above solution and sonicated at 20 kHz for 1 minute, and the temperature was raised gradually up to 30°C ± 2°C with stirring at 500 ± 25 rpm for 30 minutes. Acetone 50 mL was added dropwise under stirring, and stirring was continued for a further 10 minutes. Microparticles were collected by aseptic filtration of the dispersion with a 10 μm nylon filter (Millipore Corporation), washed three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours. All formulation batches, with the compositions described in Table 1, were prepared in triplicate. Aseptic processing was carried out on the bench using a horizontal laminar flow clean air work station (1500048-24-24, Klenzaids Bioclean Devices Ltd, Mumbai, India).16,20 Coating of microparticles: HP-50 solution 200 mL (10% w/w) was prepared with phosphate buffer21 at pH 6.8, and polyethylene glycol 200 4 g and polysorbate-80 2 g were dissolved in it. The solution was filtered aseptically using a 0.45 μm PVDF filter membrane, the tared core microparticles were dispersed in it under stirring at 300 ± 25 rpm, and then 40 mL of propan-2-ol was added dropwise.
Stirring was continued for 30 minutes; the coated microparticles were then separated by aseptic filtration, washed three times with sterile water for injection (30°C ± 2°C), and kept in a desiccator for 24 hours. The final weight was determined, and the microparticles were aseptically packed in glass vials and stored in a refrigerator for further use. Coating stage percent weight gain: From the weight (WI) of the dried core microparticles subjected to coating and the weight (WF) of the dried coated microparticles, the coating stage percent weight gain value was determined using equation 1. (1) Coating stage percent weight gain (% w/w) = [(WF − WI)/WI] × 100 Percent yield study: Percent yield values (w/w) for all batches were calculated using equation 2. (2) Percent yield = [W1 (weight of coated microparticles recovered)/W2 (weight of drug (viable + nonviable cells) plus polymer)] × 100 Measurement of viable cell number: Viable cells in samples were measured using the following methods.22 Direct microscopic count using dye exclusion test A thoroughly mixed cell suspension (2–5 × 10⁵ cells/mL) was aseptically prepared to 1 mL with sterile phosphate buffer pH 6.8. Cell suspension 200 μL was mixed thoroughly with 300 μL of sterile phosphate buffer pH 6.8 and 500 μL of 0.4% Trypan blue solution in a 1.5 mL microfuge tube (creating a dilution factor of 5), and kept aside for five minutes. With a coverslip in place, a small volume of the Trypan blue cell suspension was transferred into the chamber of a hemocytometer using a Pasteur pipette, and the chamber was allowed to fill by capillary action to avoid overfilling or underfilling. All the cells (nonviable cells stain blue and viable cells remain opaque) in the 1 mm center square and the four corner squares were counted under a microscope. The number of viable cells per unit of sample (g or mL) was calculated using equation 3. This is a simple and rapid method that provides an approximate result, and was performed in triplicate. (3) Viable cells = average viable cell count per square × dilution factor × 10⁴ Viable plate counts One gram of sample, alternately one mL of sample solution, containing LR was transferred aseptically into a presterilized 10 mL volumetric flask containing 5 mL of sterile saline TS, sonicated at 20 kHz for one minute, and diluted to 10 mL with sterile saline TS. One mL of this suspension was diluted to 10 mL in an autoclaved test tube (25 mm × 150 mm) with sterile saline TS and mixed thoroughly.
Serial dilution was continued until a suitable dilution was achieved (approximately 100 cells/mL). The final dilution tube was allowed to stand in a water bath at 70°C for 30 minutes and was then cooled immediately to about 45°C. Saline TS,21 simulated gastric fluid TS,21 and simulated intestinal fluid TS21 contain inorganic salts but no carbon source; thus, LR cells will not proliferate in these media and remain in a state of stasis until plated on media containing a carbon source.22 DeMann Rogosa Sharpe agar medium was liquefied and cooled to 45°C on a water bath. One mL of sample from the heat-treated final dilution tube was transferred into sterile Petri dishes (six per sample), and 15 mL of molten medium was poured, mixed thoroughly, and then incubated in an inverted position at 40°C for 48 hours after solidification. Six plates were counted and the average count per plate was calculated. The number of cfu per unit (mL or g) of sample was calculated using equation 4. (4) Number of cfu = [average number of colonies counted per plate/dilution factor] × 100
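The counting arithmetic in equations 3 and 4 is easy to get wrong at the bench, so a short worked sketch may help. The following Python snippet (all variable names and example counts are hypothetical, not taken from the study) reproduces equation 3 for the hemocytometer count and equation 4 as printed for the plate count, where the dilution factor denotes the dilution of the plated sample.

# Sketch of the viable-count arithmetic in equations 3 and 4.
# Example numbers are illustrative only, not data from the study.

def viable_cells_hemocytometer(counts_per_square, dilution_factor=5):
    """Equation 3: viable cells = mean count per square x dilution factor x 10^4."""
    mean_count = sum(counts_per_square) / len(counts_per_square)
    return mean_count * dilution_factor * 1e4

def cfu_per_unit(colony_counts, dilution_factor):
    """Equation 4 as printed: cfu = (mean colonies per plate / dilution factor) x 100."""
    mean_colonies = sum(colony_counts) / len(colony_counts)
    return (mean_colonies / dilution_factor) * 100

# Five hemocytometer squares and six plates (hypothetical counts):
print(viable_cells_hemocytometer([42, 39, 45, 41, 43]))    # cells/mL
print(cfu_per_unit([112, 108, 115, 110, 109, 113], 1e-6))  # cfu/g at a 10^-6 dilution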
Entrapment efficiency: In an aseptic manner, 500 mg of accurately weighed coated microparticles was kept with 25 mL of sterile simulated intestinal fluid in a hermetically sealed sterile glass vial at 4°C ± 2°C for 24 hours. The dispersion was subjected to a viable plate count (ie, a viable spore count value in cfu/g), and entrapment efficiency was calculated using equation 5.16 (5) Percent entrapment efficiency = [practical viable spore count value/theoretical viable spore count value] × 100 Morphology: The coated microparticles were mounted on aluminum stubs using double-sided adhesive tape. The stubs were then vacuum-coated with a thin layer of gold and examined with a scanning electron microscope (JSM 5610 LV, Jeol, Tokyo, Japan).23–28 Particle size, size distribution, and zeta potential: The core and coated microparticles were dispersed in deionized water (pH 6.8) and sonicated at 20 kHz for three minutes to obtain a homogeneous dispersion (0.5% w/v). The dispersions were placed into a small-volume disposable zeta cell and subjected to particle size analysis using photon correlation spectroscopy with an inbuilt Zetasizer (Nano ZS, Malvern Instruments, Worcestershire, UK) at 633 nm and 25°C ± 0.1°C. The measured electrophoretic mobility (in mm/sec) was converted to the zeta potential.16,25–30 Flow properties: The flow properties of the coated microparticles were determined from the angle of repose, Carr's index, and the Hausner ratio.16,21 In vitro swelling: An in vitro swelling test of the coated microparticles was conducted in simulated intestinal fluid. The sizes of the dried microparticles and of those incubated in simulated intestinal fluid for 5 hours were measured using a calibrated optical microscope (CX RIII, Labomed, Ambala, India).
Percent swelling value was determined from the diameter of the microparticles at time t (DT) and at initial time t = 0 (D0) using equation 6.16 (6) Percent swelling = [(DT − D0)/D0] × 100 Mucoadhesion: Following institutional animal ethical committee guidelines, the mucoadhesion affinity of the coated microparticles for intestinal mucosa was assessed by the following methods. Ex vivo mucoadhesive strength A suspension of coated microparticles in simulated intestinal fluid was prepared, and the number of microparticles per mL (N0) was determined by optical microscopy. One mL of this suspension was fed to overnight-fasted albino rats of either gender (in groups of three), which were then sacrificed at hours 0, 4, 8, and 12 to isolate their stomach and intestinal regions. The number of microparticles adhering to the stomach and intestinal regions (NS) was counted after the regions were cut open longitudinally. Percent adhesive strength value, as a measure of ex vivo mucoadhesive strength, was calculated using equation 7.16 (7) Percent adhesive strength = [NS/N0] × 100 In vitro washoff test A strip of goat intestinal mucosa was mounted on a glass slide, on which a dispersion of accurately weighed microparticles (Wa) in simulated intestinal fluid was uniformly spread and incubated in a desiccator at 90% relative humidity for 15 minutes. The slide was then placed in a cell at an angle of 45°. Simulated intestinal fluid at 37°C ± 0.5°C was circulated at a rate of 1 mL/min over the microparticles adhering to the intestinal mucosa. The weight of washed-out microparticles (Wf) in the washings was determined by separation through centrifugation followed by drying at 50°C. The percent mucoadhesion value, as a measure of the in vitro washoff test, was calculated using equation 8.31 (8) Percent mucoadhesion = [(Wa − Wf)/Wa] × 100
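Equations 6 to 8 are all simple normalized differences; the sketch below (Python, with hypothetical inputs only) makes the three indices explicit.

def percent_swelling(d_t, d_0):
    """Equation 6: swelling from microparticle diameters at time t (DT) and t = 0 (D0)."""
    return (d_t - d_0) / d_0 * 100

def percent_adhesive_strength(n_s, n_0):
    """Equation 7: microparticles adhering (NS) out of those administered (N0)."""
    return n_s / n_0 * 100

def percent_mucoadhesion(w_a, w_f):
    """Equation 8: weight retained on the mucosa, from weight applied (Wa) and washed out (Wf)."""
    return (w_a - w_f) / w_a * 100

# Hypothetical worked example:
print(percent_swelling(34.2, 33.8))        # diameters in micrometers
print(percent_adhesive_strength(62, 100))  # particle counts
print(percent_mucoadhesion(500.0, 120.0))  # weights in mg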
In vitro release: In vitro release studies of the coated microparticles were performed using a USP basket apparatus (TDT-06T, Electrolab, Mumbai, India) at 37°C ± 0.5°C and 100 rpm, containing 900 mL of sterile dissolution medium (simulated gastric fluid and simulated intestinal fluid), with about 1 g of accurately weighed microparticles contained in the basket (wrapped with 100 mesh nylon cloth) of the dissolution apparatus. At predetermined time points for up to 14 hours, 5 mL of dissolution medium was withdrawn, with immediate replacement of fresh dissolution medium, and subjected to viable cell number determination; the result was expressed as the percentage of viable LR cells released with respect to the practical viable spore count value.16 In vitro release kinetics, statistical evaluation, and data fitting: The mean value of three determinations at each time point was used to fit the in vitro viable cell release profile of all formulation batches to different kinetic models so as to find their release exponents. The mean value of 12 determinations was used to estimate the difference factor (f1), the similarity factor (f2), and the two indices of Rescigno (ξ1 and ξ2)16,32 (see the sketch below). Statistical analysis of percent released data and other data was performed using one-way analysis of variance at a significance level of 5% (P < 0.05). In vitro release kinetic studies, statistical evaluation, data fitting, nonlinear least squares curve fitting, simulation, and plotting were performed using Excel software (version 2007, Microsoft Software Inc, Redmond, WA) to determine the parameters of each equation. In vivo probiotic activity: The in vivo probiotic activity of the coated microparticles was evaluated using a mouse enterococci stool colonization method, following institutional animal ethical committee guidelines.16 One milliliter of coated microparticle dispersion (10² cfu/mL) in simulated intestinal fluid was fed to albino mice in groups of six. Stools were collected at 6-hourly intervals for up to 48 hours and subjected to an enterococci colonization density study.
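For the model-independent metrics named above, the difference factor f1, similarity factor f2, and Rescigno indices have standard closed forms; the sketch below implements them in Python over two hypothetical percent-released profiles. The Rescigno index is given in a common discrete approximation, so treat it as an assumption rather than the authors' exact implementation.

import math

def f1_difference(ref, test):
    """Difference factor: f1 = 100 * sum|R - T| / sum(R) over matched time points."""
    return 100 * sum(abs(r - t) for r, t in zip(ref, test)) / sum(ref)

def f2_similarity(ref, test):
    """Similarity factor: f2 = 50 * log10(100 / sqrt(1 + mean squared difference))."""
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50 * math.log10(100 / math.sqrt(1 + msd))

def rescigno_index(ref, test, i=1):
    """Discrete approximation of the Rescigno index xi_i (i = 1 or 2)."""
    num = sum(abs(r - t) ** i for r, t in zip(ref, test))
    den = sum(abs(r + t) ** i for r, t in zip(ref, test))
    return (num / den) ** (1 / i)

# Hypothetical cumulative percent-released profiles at common time points:
ref = [10, 25, 42, 60, 78, 95]
test = [8, 20, 35, 52, 70, 88]
print(f1_difference(ref, test), f2_similarity(ref, test))
print(rescigno_index(ref, test, 1), rescigno_index(ref, test, 2))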
Accelerated stability: Following the International Conference on Harmonization guidelines, coated microparticles from all formulation batches were stored under a range of temperature and humidity conditions (30°C ± 2°C/65% ± 5% relative humidity and 40°C ± 2°C/75% ± 5% relative humidity) in a stability analysis chamber (Darwin Chambers Company, St Louis, MO) and in a refrigerator (2°C–8°C) for an accelerated stability study of up to six months.16,32,33 Results and discussion: The coacervation and phase separation technique described here is a simple, rapid, two-step method that appears suitable for the preparation of coacervate extended-release mucoadhesive microparticles loaded with LR cells. It eliminates exposure of LR cells to high temperatures, organic solvents, and mechanical stress, thereby maintaining their viability during processing. Temperatures above 20°C and nonaqueous solvents adversely affect and decrease the viability of LR, which is the reason for commencing the developmental processes below 20°C in aqueous medium. Hypromellose is soluble in cold water, with solubility decreasing as temperature increases, and it is insoluble in organic solvents such as chloroform, dichloromethane, ether, and acetone.19 Hypromellose has excellent rate-controlling and mucoadhesion properties.16,17,19 Hypromellose phthalate is soluble in aqueous alkali and insoluble in water and propan-2-ol.19 Hypromellose was selected as the mucoadhesive polymer and hypromellose phthalate as the coating polymer because of these properties, and both are considered safe for human consumption.16–19 Polysorbate-80 was incorporated in the formulation as a dispersing agent for homogeneous dispersion of LR cells, and polyethylene glycol 200 was incorporated into the coating solution as a plasticizer to impart plasticity to the coat and to prevent it from splitting and cracking. The LR cells used complied with certificate of analysis specifications when tested in accordance with the method of analysis provided by the manufacturer. Coating stage percent weight gain values of the formulation batches were in the range of 10.1%–13.2% w/w. The percent yield value of the formulation batches ranged from 41.24% to 58.18% w/w; it varied with the grade of hypromellose used, following the order E5 > E50 > E10 M, and decreased with an increase in the LR to hypromellose ratio, the highest value being observed for the formulation containing E5 (Table 1). A similar trend was noticed for the entrapment efficiency values, which lie between 45.18% and 64.16%. Scanning electron micrographs (Figure 1) of formulations F1, F3, and F5 demonstrate the surface morphology and particle size of the coated microparticles. The microparticles of all formulation batches were spherical in shape with a smooth surface, with the exception of microparticles belonging to formulation F5, the surface morphology of which was coarser and shriveled. A coarser and shriveled surface texture will in turn improve adhesion through stronger mechanical interactions.17 The mean particle size values of all formulation batches were in the range of 33.10–49.62 μm (Table 1); mean particle size increased with an increase in polymer concentration and varied with the grade of hypromellose in the order E10 M > E50 > E5, with the highest value for microparticles prepared with E10 M (Table 1 and Figure 2).
A nearly equal zeta potential of around −19.2 mV was observed for the uncoated microparticles of all formulation batches, while the coated microparticles had a nearly equal zeta potential of around −11.5 mV. This value is lower in magnitude than that of the uncoated microparticles, indicating the presence of hypromellose phthalate on the surface of the microparticles. The zeta potential report for the uncoated microparticles from formulation batch F1 is shown in Figure 3. The flow properties of the formulation batches lie within the passable to very poor ranges. The percent swelling value of the formulation batches was 0.82%–1.38%, and decreased with increasing LR to hypromellose ratio. A variation in the grade of hypromellose also decreased the value in the order E5 > E50 > E10 M, with the highest value for the microparticles prepared with E5 (Table 1). The percent adhesive strength of all formulation batches was 42.61%–73.36%, which decreased with an increase in the LR to hypromellose ratio (Table 1). A difference in the grade of hypromellose varied the value in the order E5 > E50 > E10 M, with the highest value seen for E5. A similar trend was noticed for the percent mucoadhesion value, which ranged between 44.43% and 75.92% for all formulation batches. These results indicate that the mucoadhesion properties of the microparticles varied according to the grade of hypromellose and the LR to hypromellose ratio, and that microparticles from formulation batch F1 had the highest mucoadhesion affinity for the intestinal mucosa, so they may exhibit a longer gastric retention time in comparison with the other batches. The in vitro swelling test, ex vivo mucoadhesive strength determination, and in vitro washoff test, as measures of the mucoadhesion affinity of the microparticles, reveal that the mechanism of mucoadhesion initially follows the adsorption theory34,35 and subsequently the diffusion theory.35 The amount of viable LR cells released from the microparticle system in simulated gastric fluid was negligible, but viable LR cell release was regulated and extended in simulated intestinal fluid (Figure 4), indicating that enteric coating of the microparticles competently protects cell viability at acidic pH, prevents cell release at gastric pH, and releases viable LR cells at intestinal pH. The results of the in vitro swelling test, ex vivo mucoadhesive strength determination, in vitro washoff test, and in vitro release profile study demonstrate that, in the intestine (pH > 5.0), the coating of the microparticles dissolves, thereby releasing the core microparticles. The liberated core microparticles swell in the intestine, resulting in intimate contact between the microparticles and the mucous membrane. The mucoadhesive chains then penetrate into the crevices of the tissue surface and intermingle with ions in the mucus, with formation of hydrogen bonds between the carboxylic groups of the polymer chains (hypromellose) and mucin molecules, leading to adhesion of the microparticles to the mucous membrane lining the intestinal wall.36–38 The kinetic constant and release exponent values of the model-dependent approaches (Table 1) show that the mechanism of viable LR cell release from the coated microparticles follows a zero-order kinetics model, because the plot of cumulative percent viable cell release versus time was linear, with the highest regression coefficient (r2) value in comparison with those of the other models.
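The authors performed this curve fitting in Excel; purely as an illustration of the same calculation, the sketch below fits the zero-order and Weibull models to a hypothetical release profile with SciPy and reports the zero-order r2 (the data, starting values, and variable names are placeholders, not study data).

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative percent-release data (time in hours).
t = np.array([1, 2, 4, 6, 8, 10, 12, 14], dtype=float)
q = np.array([7, 14, 27, 41, 55, 68, 80, 93], dtype=float)

def zero_order(t, k):
    # Zero-order model: Q(t) = k * t
    return k * t

def weibull(t, td, beta):
    # Weibull model: Q(t) = 100 * (1 - exp(-(t/Td)^beta)); Td is the time to 63.2% release.
    return 100 * (1 - np.exp(-(t / td) ** beta))

(k_fit,), _ = curve_fit(zero_order, t, q)
(td_fit, beta_fit), _ = curve_fit(weibull, t, q, p0=[8.0, 1.2])

# r2 for the zero-order fit, the statistic compared across models in the text.
ss_res = np.sum((q - zero_order(t, k_fit)) ** 2)
ss_tot = np.sum((q - q.mean()) ** 2)
print(k_fit, td_fit, beta_fit, 1 - ss_res / ss_tot)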
For all formulation batches, the zero-order kinetics model r2 value ranged between 0.9834 and 0.9947. Study of the shape parameter values for the Weibull model (Table 1) reveals that the curve is sigmoid or S-shaped, with upward curvature followed by a turning point, as β exceeded 1.16,32 Study of the location parameter (Td) for the Weibull model (Table 1), which characterizes the time interval necessary to dissolve or release 63.2% of the drug present in the delivery system,16,32 shows that the Td of the formulation batches ranges from 7.7580 to 26.637 hours and the r2 value from 0.9310 to 0.9670. Model-independent metric values are listed in Table 2 and show that, for all formulation pairs, ie, the intrapolymer and interpolymer batches, the ξ1 values lie between 0.070 and 0.328, the ξ2 values lie between 0.240 and 0.407, the f1 values lie between 17.00 and 58.00, and the f2 values lie between 24.00 and 68.00, indicating dissimilarity in product performance of the formulation batches.16,32 A plot of the in vitro viable LR cell release profile following a zero-order kinetics model for all formulation batches in simulated intestinal fluid is shown in Figure 4, and demonstrates that the rate of viable LR cell release from the microparticles decreased significantly with an increase in the LR to polymer ratio, while variation in the grade of hypromellose influenced the release rate from the microparticles, following the order E10 M > E50 > E5. The in vivo probiotic activity evaluation shows that oral administration of the extended-release mucoadhesive microparticles of LR from all the formulation batches resulted in statistically significant reductions in the density of enterococci colonization in the stool of albino mice for up to 24–36 hours. The stability study shows adequate stability of the microparticles under storage conditions of 30°C ± 2°C/65% ± 5% relative humidity, with no change in color and texture and no statistically significant decrease in viable LR cell content with respect to the viable spore count, and also confirms that the LR cells were compatible with the excipients used in the formulation. A statistically significant decrease in viable LR cell content was observed with respect to practical viable spore counts at 40°C ± 2°C/75% ± 5% relative humidity, indicating product instability under these storage conditions. The extended-release mucoadhesive microparticles from formulation batch F1 were found to be superior to the other prototype formulations because they exhibited the highest values of percent yield, entrapment efficiency, and mucoadhesion affinity, protected the viability of LR cells during storage and gastrointestinal transit, and released viable LR cells in the gut for an extended period of time, following zero-order kinetics. Conclusion: These experimental results suggest that this extended-release microparticulate system loaded with LR cells can be prepared by a conventional coacervation and phase separation technique. It has the potential to deliver viable LR cells to the gut for an extended period of time, while maintaining the viability of LR cells during storage and gastrointestinal transit, and could be viewed as an alternative to conventional dosage forms. However, extensive in vivo studies will be required to establish the use of a coacervate extended-release microparticulate system as an alternative to the conventional dosage form of LR.
Background: The purpose of this study was to develop a mucoadhesive coacervate microparticulate system to deliver viable Lactobacillus rhamnosus cells into the gut for an extended period of time while maintaining high numbers of viable cells within the formulation throughout its shelf-life and during gastrointestinal transit. Methods: Core coacervate mucoadhesive microparticles of L. rhamnosus were developed using several grades of hypromellose and were subsequently enteric-coated with hypromellose phthalate. Microparticles were evaluated for percent yield, entrapment efficiency, surface morphology, particle size, size distribution, zeta potential, flow properties, in vitro swelling, mucoadhesion properties, in vitro release profile and release kinetics, in vivo probiotic activity, and stability. The values for the kinetic constant and release exponent of model-dependent approaches, and the difference factor, similarity factor, and Rescigno indices of model-independent approaches, were determined for analyzing in vitro dissolution profiles. Results: Experimental microparticles of the formulation batches were spherical, with percent yields of 41.24%-58.18%, entrapment efficiency 45.18%-64.16%, mean particle size 33.10-49.62 μm, and zeta potential around -11.5 mV, confirming adequate stability of L. rhamnosus at room temperature. The in vitro L. rhamnosus release profile follows zero-order kinetics and depends on the grade of hypromellose and the L. rhamnosus to hypromellose ratio. Conclusions: Microparticles delivered L. rhamnosus in simulated intestinal conditions for an extended period, following zero-order kinetics, and exhibited appreciable mucoadhesion.
Introduction: Intake of viable Lactobacillus rhamnosus (LR) cells, at around 10⁷ cfu,1,2 aids in the prevention of intestinal tract illnesses,3 suppresses bacterial infection in renal patients,4 safeguards the urogenital tract by excreting biosurfactants,5 stimulates antibody production, aids the immune system, assists the phagocytic process, helps the body to combat dangerous invasive bacteria, controls food-associated allergic inflammation,6 shortens the duration of diarrhea associated with rotavirus infection,7 and reduces the use of antibiotics to treat Helicobacter pylori infection.8 The reported therapeutic benefits are associated with the ability of LR to secrete coagulin, a bacteriocin that is active against a broad spectrum of enteric microbes.1 LR is well tolerated with very rare side effects, and its regular intake can be effective in supplementing and maintaining digestive tract health. Processing conditions during formulation and noncompliance with storage requirements during shipment and storage result in loss of cell viability in the dosage formulation. Acidic conditions in the stomach, various hydrolytic enzymes, and bile salts in the gastrointestinal tract also adversely affect the viability of LR after ingestion.9–14 Nowadays, microparticulate systems have been exploited not only to reduce loss of cell viability during storage and transport, but also to improve and maintain the number of viable cells arriving in the intestine.9–11 Decreased performance of microparticles is attributable to their short gastric retention time, a physiological limitation which can be overcome by coupling mucoadhesion properties to the microparticles, ie, by developing mucoadhesive microparticles, which will in turn simultaneously improve gastric retention time and bioavailability.15–17 Hypromellose and hypromellose phthalate are safe for human consumption, and because of the good mucoadhesive and release rate-controlling properties of hypromellose, it is preferred in mucoadhesive formulations.16–19 These observations indicate a strong need to develop a dosage form that will deliver LR into the gut with improved gastric retention time and adequate stability during storage and gastrointestinal transit, which can be achieved with extended-release mucoadhesive microparticles.
Keywords: Lactobacillus rhamnosus | mucoadhesive | microparticles | extended-release | intestine
MeSH terms: Adhesiveness | Bacterial Load | Drug Carriers | Drug Delivery Systems | Humans | In Vitro Techniques | Intestinal Mucosa | Lacticaseibacillus rhamnosus | Methylcellulose | Microscopy, Electron, Scanning | Microspheres | Particle Size | Probiotics | Tablets, Enteric-Coated
Secukinumab Efficacy in Psoriatic Arthritis: Machine Learning and Meta-analysis of Four Phase 3 Trials.
32015257
Using a machine learning approach, the study investigated if specific baseline characteristics could predict which psoriatic arthritis (PsA) patients may gain additional benefit from a starting dose of secukinumab 300 mg over 150 mg. We also report results from individual patient efficacy meta-analysis (IPEM) in 2049 PsA patients from the FUTURE 2 to 5 studies to evaluate the efficacy of secukinumab 300 mg, 150 mg with and without loading regimen versus placebo at week 16 on achievement of several clinically relevant difficult-to-achieve (higher hurdle) endpoints.
BACKGROUND
Machine learning employed Bayesian elastic net to analyze baseline data of 2148 PsA patients investigating 275 predictors. For IPEM, results were presented as difference in response rates versus placebo at week 16.
METHODS
Machine learning showed secukinumab 300 mg has additional benefits in patients who are anti-tumor necrosis factor-naive, treated with 1 prior anti-tumor necrosis factor agent, not receiving methotrexate, with enthesitis at baseline, and with shorter PsA disease duration. For IPEM, at week 16, all secukinumab doses had greater treatment effect (%) versus placebo for higher hurdle endpoints in the overall population and in all subgroups; 300-mg dose had greater treatment effect than 150 mg for all endpoints in overall population and most subgroups.
RESULTS
Machine learning identified predictors for additional benefit of secukinumab 300 mg compared with 150 mg dose. Individual patient efficacy meta-analysis showed that secukinumab 300 mg provided greater improvements compared with 150 mg in higher hurdle efficacy endpoints in patients with active PsA in the overall population and most subgroups with various levels of baseline disease activity and psoriasis.
CONCLUSIONS
[ "Antibodies, Monoclonal", "Antibodies, Monoclonal, Humanized", "Arthritis, Psoriatic", "Bayes Theorem", "Clinical Trials, Phase III as Topic", "Double-Blind Method", "Humans", "Machine Learning" ]
8389345
METHODS
For this study, a machine learning exploratory approach was used to screen a more comprehensive set of baseline patient characteristics. Unlike traditional multivariate statistical regression methods, which tend to break down when the number of predictors is relatively large or the predictors are highly correlated, machine learning techniques allow us to evaluate much larger numbers of possibly highly correlated predictors. One such machine learning method is the Bayesian elastic net,10 which extends traditional regression methods by including large numbers of patient characteristics together with constraints on the magnitude of their associations with the response. These constraints effectively remove all those predictors with little or no association with the response, and by selecting the appropriate constraints, a parsimonious yet complete set of baseline characteristics associated with the response can be identified. One additional advantage of the Bayesian elastic net over other machine learning methods is that it supports statistical inferences (e.g., confidence intervals) on the coefficients of the identified predictors. We investigated a total of 275 predictors (based on disease characteristics and interaction terms) for association with the improvement of disease signs and symptoms, and the Bayesian elastic net algorithm10 identified predictors for enhanced benefit of secukinumab 300 mg as a starting dose versus 150 mg from the data of 2148 patients. The predictors thus identified were further subjected to evaluation of the treatment effect of secukinumab 300 versus 150 mg using individual patient efficacy meta-analysis (IPEM). Individual patient efficacy meta-analysis aims to summarize the evidence on a specific clinical question from multiple related studies. The statistical implementation of an IPEM fundamentally retains the clustering of patients within studies. It facilitates standardization of analyses across studies and direct derivation of the information desired, independent of significance or how it was reported.11 The IPEM included 2049 patients with PsA from four phase 3 studies: FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5. The designs and patient inclusion and exclusion criteria of the individual studies have been reported previously.2,6,7,12 Patients in each study were randomized to secukinumab or placebo at baseline. Secukinumab doses included subcutaneous 300 and 150 mg administered at baseline with a loading dose at weeks 1, 2, and 3, followed by a maintenance dose every 4 weeks starting at week 4; the placebo group was treated similarly. For the secukinumab no-load regimen (in FUTURE 4 and 5), secukinumab 150 mg was administered at baseline, with placebo at weeks 1, 2, and 3, followed by secukinumab 150 mg every 4 weeks starting at week 4. For IPEM, from a total of 2148 patients with active PsA who were originally randomized in the 4 phase 3 studies (397, 414, 341, and 996 patients in FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5, respectively), data from 2049 patients receiving secukinumab 300 mg, 150 mg load, and 150 mg no-load were included in the efficacy pool for subgroup analysis. A total of 99 patients who received secukinumab 75 mg were excluded.
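The study's Bayesian elastic net additionally provides interval estimates on the coefficients; a non-Bayesian analogue of the same selection mechanism can be sketched with scikit-learn's elastic-net-penalized logistic regression. Everything below (the synthetic data, penalty settings, and endpoint) is a placeholder for illustration, not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 2148 patients x 275 baseline covariates and
# covariate-by-treatment interaction terms, with a binary week 16 endpoint.
rng = np.random.default_rng(0)
X = rng.normal(size=(2148, 275))
y = rng.integers(0, 2, size=2148)

# Elastic net: the L1 part zeroes out uninformative predictors,
# the L2 part stabilizes groups of highly correlated ones.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=0.1, max_iter=5000),
)
model.fit(X, y)
kept = np.sum(model.named_steps["logisticregression"].coef_ != 0)
print(f"{kept} of 275 predictors retained")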
Achievement of several clinically relevant higher hurdle endpoints with secukinumab versus placebo at week 16 was assessed in the overall population and in a subgroup analysis by prior anti-TNF use (naive and IR), concomitant MTX use (with and without MTX), baseline DAS28-CRP levels (≤5.1 [LDA and/or REM] and >5.1 [active disease]), baseline CRP levels (≤10 and >10 mg/L), and baseline BSA with psoriasis (≥3%, <10%, and ≥10%). Outcome Measures Efficacy endpoints analyzed at week 16 using machine learning included ACR20/50 response, ACR-n score (an extension of the ACR response criteria defined as the lowest of the following three values: percentage change in the number of swollen joints, percentage change in the number of tender joints, and the median of the percentage change in the other five measures, which are part of the ACR criteria), PASI 75/90 response, PASDAS (change from baseline), resolution of dactylitis and enthesitis, improvement in MDA, and Health Assessment Questionnaire–Disability Index (HAQ-DI) response and HAQ-DI score. Individual patient efficacy meta-analysis was performed for outcomes including ACR50/70, PASI 90, PASDAS-LDA, and MDA at week 16 (placebo-controlled period) in the overall population. The ACR50, ACR70, and PASI 90 responses were also assessed in patients stratified by prior anti-TNF use, concomitant MTX use, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis. Statistical Analysis For analyses involving machine learning, a Bayesian elastic net algorithm was used to predict each endpoint from a set of ~40 of the 275 covariates, and their interactions with treatment were identified to form subgroups of covariates with substantial predictive power. A heat map was used to display the magnitude of common predictors across efficacy endpoints at week 16. Missing values for binary endpoints were imputed as nonresponders. To assess the performance of the Bayesian elastic net model, receiver operating characteristic curves were produced to quantify the trade-off between sensitivity and specificity for each modeled endpoint. The area under each receiver operating characteristic curve (AUC) was computed to summarize the overall discriminating ability of the model to classify patients (0/1) for each outcome. In particular, the AUC scores at week 16 were obtained by 5-fold cross validation within the FUTURE 2 to FUTURE 5 studies, randomly selecting 4 folds of the pooled patients to predict the fifth fold. This step was iterated 5 times to get the average.
For IPEM, model-based treatment effects versus placebo (%) for the meta-analysis of binary endpoints are expressed as least-squares means from logistic regression, with study, treatment, and anti-TNF status (removed when the subgroup is stratified by prior anti-TNF use) as factors and body weight as a covariate. Missing values were imputed as nonresponders.
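As a footnote to this model specification, the sketch below fits the same kind of logistic model (study, treatment, and anti-TNF status as factors; body weight as a covariate) to synthetic pooled data with statsmodels. Averaging predicted probabilities with treatment held fixed is used here as a simple stand-in for least-squares means; the arm labels, effect sizes, and sample sizes are invented for illustration.

```python
# Hedged sketch of an IPEM-style logistic model on synthetic pooled data.
# Model-based response rates per arm are approximated by averaging
# predicted probabilities with treatment fixed (a stand-in for LS means).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "study":  rng.choice(["FUTURE2", "FUTURE3", "FUTURE4", "FUTURE5"], size=n),
    "treat":  rng.choice(["placebo", "150mg", "300mg"], size=n),
    "tnf":    rng.choice(["naive", "IR"], size=n),
    "weight": rng.normal(85, 15, size=n),
})
# Synthetic binary endpoint with invented effect sizes.
lin = (0.9 * (df["treat"] == "300mg") + 0.6 * (df["treat"] == "150mg")
       - 0.3 * (df["tnf"] == "IR") - 0.01 * (df["weight"] - 85) - 0.8)
df["resp"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Study, treatment, and anti-TNF status as factors; weight as covariate.
fit = smf.logit("resp ~ C(study) + C(treat) + C(tnf) + weight",
                data=df).fit(disp=0)

for arm in ["placebo", "150mg", "300mg"]:
    rate = fit.predict(df.assign(treat=arm)).mean()
    print(f"model-based response rate, {arm}: {rate:.1%}")
```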
RESULTS
For machine learning analysis, the patient population was distributed almost equally across three groups of time since PsA diagnosis (33% of patients each with 0–2 years, 2–7 years, and >7 years since diagnosis).

Of the 2049 patients with PsA included in the IPEM, 461, 572, 335, and 681 patients received secukinumab 300 mg, 150 mg load, 150 mg no-load, and placebo, respectively. The majority of patients completed week 24 in all treatment groups (95.7%, 94.8%, 94.0%, and 90.3% of patients in the secukinumab 300-mg, 150-mg load, 150-mg no-load, and placebo groups, respectively). Demographic and baseline characteristics were comparable across the treatment groups (Supplementary Table 2, http://links.lww.com/RHU/A174). Approximately two-thirds of patients were anti-TNF–naive (68.5%–72.8%), and approximately half (47.4%–51.9%) were receiving concomitant MTX at baseline.

Predictor Outcome Analyses by Machine Learning
No single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. Heat map analysis showed that of 275 predictors, several predictors jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). The figure includes only baseline predictors associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). The color of each cell depicts the dose-response difference (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the named subgroup (e.g., patients without concomitant MTX use) than in its counterpart subgroup (e.g., patients with concomitant MTX); red depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the counterpart subgroup; and no color indicates a nearly equal difference in efficacy between secukinumab 300 and 150 mg in the named subgroup and its counterpart. The intensity of the color depicts the degree of association between the patient subgroup and the efficacy outcome and is in no way related to the inefficacy of any dose.

Heat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.

Patients without concomitant MTX use were estimated to have better responses with the secukinumab 300-mg than the 150-mg dose regimens for several endpoints. Patients previously treated with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than with 150 mg.

Covariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. The ACR50 responses were higher in patients treated with secukinumab 300 mg who had one of the following baseline characteristics: no concomitant MTX use or presence of enthesitis at baseline (Fig. 2A). PASDAS-LDA including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. The current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg who had one of the following characteristics reached PASDAS-LDA + REM compared with secukinumab 150 mg: no previous anti-TNF therapy (p < 0.05), no use of concomitant MTX (p < 0.01), presence of enthesitis at baseline (p < 0.001), and earlier time since PsA diagnosis (<2 or 2–7 years; p < 0.05) (Fig. 2B).

Covariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. Data presented from estimates of a logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors and baseline score and weight as covariates. PASDAS-LDA including REM defined as PASDAS score <3.2. LS indicates least squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary.
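For readers who want to see how a display like Fig. 1 can be assembled, the sketch below renders a hypothetical predictor-by-endpoint matrix with matplotlib, using a red-to-green colormap so that, as described above, green cells indicate greater 300-mg benefit in the named subgroup and red cells greater benefit in its counterpart. The row and column labels echo predictors and endpoints mentioned in the text, but the cell values are random placeholders, not study estimates.

```python
# Hedged sketch: rendering a predictor-by-endpoint heat map in the
# style described above. Cell values are random placeholders standing
# in for estimated 300 vs 150 mg response differences per subgroup.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
predictors = ["no concomitant MTX", "one prior anti-TNF",
              "enthesitis at baseline", "no systemic glucocorticoid"]
endpoints = ["ACR20", "ACR50", "PASI 90", "PASDAS", "HAQ-DI"]
effect = rng.uniform(-1.0, 1.0, size=(len(predictors), len(endpoints)))

fig, ax = plt.subplots(figsize=(6.0, 3.0))
# "RdYlGn" runs red -> green; intensity tracks the size of the difference.
im = ax.imshow(effect, cmap="RdYlGn", vmin=-1.0, vmax=1.0)
ax.set_xticks(range(len(endpoints)))
ax.set_xticklabels(endpoints, rotation=45, ha="right")
ax.set_yticks(range(len(predictors)))
ax.set_yticklabels(predictors)
fig.colorbar(im, ax=ax, label="300 mg vs 150 mg difference (placeholder)")
fig.tight_layout()
plt.show()
```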
IPEM Results (Overall Population and Subgroups)
Improvements were observed with secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens versus placebo for all endpoints at week 16 in the overall population (Fig. 3). Secukinumab 300 mg was associated with greater improvements compared with both 150-mg dose regimens for all higher hurdle endpoints. Secukinumab 150-mg load had higher ACR50, PASI 90, and MDA responses (%) than the no-load regimen at week 16.

Treatment effect with secukinumab versus placebo in the overall population at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA; PASDAS-LDA including remission is defined as PASDAS ≤3.2.

At week 16, greater treatment effects versus placebo were observed for ACR50/70 and PASI 90 responses with all secukinumab doses in anti-TNF–naive and anti–TNF-IR patients (Fig. 4A). For all three endpoints, greater response rates were observed with secukinumab 300 mg compared with the 150-mg load and no-load regimens in anti–TNF-naive and anti–TNF-IR patients. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimen in anti–TNF-naive and anti–TNF-IR patients.

Treatment effect with secukinumab versus placebo by (A) prior anti-TNF use and (B) concomitant MTX use at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA.

In patients with or without concomitant MTX use, all secukinumab regimens demonstrated greater treatment effects in ACR50/70 and PASI 90 responses versus placebo at week 16. Secukinumab 300 mg was associated with a numerically higher PASI 90 response than the 150-mg load and no-load regimens in patients with and without concomitant MTX. The ACR50 response was numerically similar in the 300-mg and 150-mg load regimens in patients with concomitant MTX use. Secukinumab 150-mg load showed higher responses than the no-load regimen in the concomitant MTX group (Fig. 4B).

The ACR50, ACR70, and PASI 90 response rates were superior to placebo at week 16 in all dose groups analyzed by baseline DAS28-CRP, baseline CRP level, and baseline BSA with psoriasis (Figs. 5A, B and Fig. 6). Higher responses were observed for secukinumab 300 mg compared with the 150-mg dose in all subgroups for most endpoints. Secukinumab 150-mg load showed higher responses than the no-load regimen for most endpoints across subgroups.

Treatment effect with secukinumab versus placebo by (A) baseline DAS28-CRP and (B) baseline CRP levels at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA.

Treatment effect with secukinumab versus placebo by baseline BSA (psoriasis subset) at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA.

Pooled Safety of Secukinumab
The safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab in the pooled population was consistent with what has been reported for the individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7
CONCLUSIONS
Machine learning predicted additional benefit of secukinumab 300 mg over 150 mg in anti–TNF-IR patients, patients treated without concomitant MTX, and patients with psoriasis. In addition, early PsA and the presence of enthesitis were identified as predictors of PASDAS-LDA including remission that warrant further research. Individual patient efficacy meta-analysis of 2049 patients in four phase 3 studies showed that secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens provided improvements in higher hurdle endpoints versus placebo in patients with active PsA. These results were observed in the overall population and across subgroups regardless of prior anti-TNF use, concomitant MTX use, and various levels of baseline disease activity and BSA of psoriasis. Secukinumab 300 mg was associated with greater improvements compared with the 150-mg dose in the overall population and in most subgroups.
[ "Outcome Measures", "Statistical Analysis", "RESULTS", "Predictor Outcome Analyses by Machine Learning", "Pooled Safety of Secukinumab" ]
[ "Efficacy endpoints analyzed at week 16 using machine learning included ACR20/50 response, ACR-n score (an extension of the ACR response criteria defined as the lowest of the following three values: percentage change in the number of swollen joints, percentage change in the number of tender joints, and the median of the percentage change in the other five measures, which are part of the ACR criteria), PASI 75/90 response, PASDAS (change from baseline), resolution of dactylitis and enthesitis, improvement in MDA, and Health Assessment Questionnaire–Disability Index (HAQ-DI) response and HAQ-DI score.\nIndividual patient efficacy meta-analysis was performed in outcomes including ACR50/70, PASI 90, PASDAS-LDA, and MDA at week 16 (placebo-controlled period) in the overall population. The ACR50, ACR70, and PASI 90 responses were also assessed in patients stratified by prior anti-TNF use, concomitant MTX use, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis.", "For analyses involving machine learning, a Bayesian elastic net algorithm was used to predict each endpoint from a set of ~40 of 275 covariates, and their interactions with treatment were identified to form subgroups of covariates with substantial predictive power. A heat map was used to display common predictors' magnitude across efficacy endpoints at week 16. Missing values for binary endpoints were imputed as nonresponders.\nTo assess the performance of the Bayesian elastic net model, receiver operating characteristic curves were produced to quantify the trade-off between sensitivity and specificity for each modeled endpoint. The area under each receiver operating characteristic curve (AUC) was computed to summarize the overall model discriminating ability to identify subgroups (0/1) of patients for each outcome. In particular, the AUC scores at week 16 used 5-fold cross validation within FUTURE 2 to FUTURE 5 studies that randomly selected 4-fold of the entire patients to predict the fifth fold. This step was iterated 5 times to get the average. The scores ranged from 0.75 for dactylitis to a high score of 0.81 corresponding to PASI 90. The AUC scores observed with Bayesian elastic net model were higher than those observed with multivariate logistic regression (0.45 for enthesitis to 0.58 for PASI 90) (Supplementary Table 1, http://links.lww.com/RHU/A174).\nFor IPEM, model-based treatment effects versus placebo (%) for the meta-analysis of binary endpoints are expressed as least-squares means from logistic regression, with study, treatment, and anti-TNF status (removed when the subgroup is stratified by prior anti-TNF use) as factors and body weight as a covariate. Missing values were imputed as nonresponders.", "For machine learning analysis, the patient population was distributed almost equally in three groups for time since diagnosis of PsA (33% patients each with time since diagnosis between 0 and 2 years, 2–7 years, and >7 years).\nOf the 2049 PsA patients included in the IPEM, 461, 572, 335, and 681 patients received secukinumab 300 mg, 150 mg, 150 mg no-load, and placebo, respectively. The majority of patients completed week 24 in all treatment groups (95.7%, 94.8%, 94.0%, and 90.3% patients in secukinumab 300 mg, 150 mg load, 150 mg no-load, and placebo, respectively). Demographic and baseline characteristics were comparable across the treatment groups (Supplementary Table 2, http://links.lww.com/RHU/A174). 
Approximately two-thirds of patients were anti-TNF–naive (68.5%–72.8%), and approximately half (47.4%–51.9%) were receiving concomitant MTX at baseline.\n Predictor Outcome Analyses by Machine Learning No single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. Heat map analysis showed that of 275 predictors, several predictors jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). This figure includes only baseline predictors that were found to be associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). The color of each cell depicts the association between the dose-response differences (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green color depicts greater efficacy response of secukinumab 300 versus 150 mg among patients in the mentioned subgroup (e.g., patients without concomitant MTX use) than their counter subgroup (e.g., patients with concomitant MTX), red color depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the counter subgroup, and no color exhibits almost equal difference in efficacy responses of secukinumab 300 and 150 mg in the mentioned subgroup and its counter subgroup. Intensity of the color depicts the degree of association between the subgroup of patients and the efficacy outcome and is in no way related to the inefficacy of any dose.\nHeat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.\nPatients without concomitant MTX use were estimated to have better responses with secukinumab 300-mg than 150-mg dose regimens for several endpoints. The patients treated prior with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than 150 mg.\nCovariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. The ACR50 responses were higher in patients treated with secukinumab 300 mg who had one of the following baseline characteristics: did not use concomitant MTX or had presence of enthesitis at baseline (Fig. 2A). PASDAS-LDA, including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. Current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg who had one of the following characteristics: no previous anti-TNF therapy (p < 0.05), no use of concomitant MTX (p < 0.01), presence of enthesitis at baseline (p < 0.001), and earlier time since PsA diagnosis (<2 or 2–7 years; p < 0.05), reached PASDAS-LDA + REM compared with secukinumab 150 mg (Fig. 
2B).\nCovariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. Data presented from estimates of logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors, baseline score and weight as a covariate. PASDAS-LDA including REM defined as PASDAS score <3.2. LS indicates lease squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary.\nNo single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. Heat map analysis showed that of 275 predictors, several predictors jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). This figure includes only baseline predictors that were found to be associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). The color of each cell depicts the association between the dose-response differences (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green color depicts greater efficacy response of secukinumab 300 versus 150 mg among patients in the mentioned subgroup (e.g., patients without concomitant MTX use) than their counter subgroup (e.g., patients with concomitant MTX), red color depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the counter subgroup, and no color exhibits almost equal difference in efficacy responses of secukinumab 300 and 150 mg in the mentioned subgroup and its counter subgroup. Intensity of the color depicts the degree of association between the subgroup of patients and the efficacy outcome and is in no way related to the inefficacy of any dose.\nHeat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.\nPatients without concomitant MTX use were estimated to have better responses with secukinumab 300-mg than 150-mg dose regimens for several endpoints. The patients treated prior with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than 150 mg.\nCovariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. The ACR50 responses were higher in patients treated with secukinumab 300 mg who had one of the following baseline characteristics: did not use concomitant MTX or had presence of enthesitis at baseline (Fig. 2A). PASDAS-LDA, including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. 
Current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg who had one of the following characteristics: no previous anti-TNF therapy (p < 0.05), no use of concomitant MTX (p < 0.01), presence of enthesitis at baseline (p < 0.001), and earlier time since PsA diagnosis (<2 or 2–7 years; p < 0.05), reached PASDAS-LDA + REM compared with secukinumab 150 mg (Fig. 2B).\nCovariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. Data presented from estimates of logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors, baseline score and weight as a covariate. PASDAS-LDA including REM defined as PASDAS score <3.2. LS indicates lease squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary.\n IPEM Results (Overall Population and Subgroups) Improvements were observed with secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens versus placebo for all endpoints at week 16 in the overall population (Fig. 3). Secukinumab 300 mg was associated with greater improvements as compared with both 150-mg dose regimens for all higher hurdle endpoints. Secukinumab 150-mg load had higher responses (%) in ACR50, PASI 90, and MDA responses as compared with no-load regimen at week 16.\nTreatment effect with secukinumab versus placebo in overall population at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA; PASDAS-LDA including remission is defined as PASDAS ≤3.2.\nAt week 16, greater treatment effects versus placebo were observed for ACR50/70 and PASI 90 responses with all secukinumab doses in anti-TNF–naive and anti–TNF-IR patients (Fig. 4A). For all three endpoints, greater response rates were observed with secukinumab 300 mg as compared with 150-mg load and no-load regimens in anti–TNF-naive and anti–TNF-IR patients. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimens in anti–TNF-naive and anti–TNF-IR patients.\nTreatment effect with secukinumab versus placebo by (A) prior anti-TNF use and (B) concomitant MTX use at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nIn patients with or without concomitant MTX use, all secukinumab regimens demonstrated greater treatment effect in ACR50/70 and PASI 90 responses versus placebo at week 16. Secukinumab 300 mg was associated with numerically higher PASI 90 response compared with 150-mg load and no-load dose regimens in patients with and without concomitant MTX. The ACR50 response was numerically similar in 300-mg and 150-mg load regimen in patients with concomitant MTX use. Secukinumab 150-mg load showed higher responses than the no-load regimen in the concomitant MTX group (Fig. 4B).\nThe ACR50, ACR70, and PASI 90 response rates were superior to placebo at week 16 in all dose groups analyzed by baseline DAS28-CRP, baseline CRP level, and baseline BSA with psoriasis (Figs. 5A, B and Fig. 6). Higher responses were observed for secukinumab 300 mg compared with the 150-mg dose in all subgroups for most endpoints (Figs. 5A, B and Fig. 6). Secukinumab 150-mg load showed higher responses than no-load regimen for most endpoints across subgroups (Fig. 5A, B and Fig. 
6).\nTreatment effect with secukinumab versus placebo by (A) baseline DAS28-CRP and (B) baseline CRP levels at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nTreatment effect with secukinumab versus placebo by baseline BSA (psoriasis subset) at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 body surface.\nImprovements were observed with secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens versus placebo for all endpoints at week 16 in the overall population (Fig. 3). Secukinumab 300 mg was associated with greater improvements as compared with both 150-mg dose regimens for all higher hurdle endpoints. Secukinumab 150-mg load had higher responses (%) in ACR50, PASI 90, and MDA responses as compared with no-load regimen at week 16.\nTreatment effect with secukinumab versus placebo in overall population at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA; PASDAS-LDA including remission is defined as PASDAS ≤3.2.\nAt week 16, greater treatment effects versus placebo were observed for ACR50/70 and PASI 90 responses with all secukinumab doses in anti-TNF–naive and anti–TNF-IR patients (Fig. 4A). For all three endpoints, greater response rates were observed with secukinumab 300 mg as compared with 150-mg load and no-load regimens in anti–TNF-naive and anti–TNF-IR patients. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimens in anti–TNF-naive and anti–TNF-IR patients.\nTreatment effect with secukinumab versus placebo by (A) prior anti-TNF use and (B) concomitant MTX use at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nIn patients with or without concomitant MTX use, all secukinumab regimens demonstrated greater treatment effect in ACR50/70 and PASI 90 responses versus placebo at week 16. Secukinumab 300 mg was associated with numerically higher PASI 90 response compared with 150-mg load and no-load dose regimens in patients with and without concomitant MTX. The ACR50 response was numerically similar in 300-mg and 150-mg load regimen in patients with concomitant MTX use. Secukinumab 150-mg load showed higher responses than the no-load regimen in the concomitant MTX group (Fig. 4B).\nThe ACR50, ACR70, and PASI 90 response rates were superior to placebo at week 16 in all dose groups analyzed by baseline DAS28-CRP, baseline CRP level, and baseline BSA with psoriasis (Figs. 5A, B and Fig. 6). Higher responses were observed for secukinumab 300 mg compared with the 150-mg dose in all subgroups for most endpoints (Figs. 5A, B and Fig. 6). Secukinumab 150-mg load showed higher responses than no-load regimen for most endpoints across subgroups (Fig. 5A, B and Fig. 6).\nTreatment effect with secukinumab versus placebo by (A) baseline DAS28-CRP and (B) baseline CRP levels at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nTreatment effect with secukinumab versus placebo by baseline BSA (psoriasis subset) at week 16. 
The PASI 90 response was analyzed in patients with psoriasis ≥3 body surface.\n Pooled Safety of Secukinumab The safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab was consistent in the pooled population with what has been reported for individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7\nThe safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab was consistent in the pooled population with what has been reported for individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7", "No single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. Heat map analysis showed that of 275 predictors, several predictors jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). This figure includes only baseline predictors that were found to be associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). The color of each cell depicts the association between the dose-response differences (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green color depicts greater efficacy response of secukinumab 300 versus 150 mg among patients in the mentioned subgroup (e.g., patients without concomitant MTX use) than their counter subgroup (e.g., patients with concomitant MTX), red color depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the counter subgroup, and no color exhibits almost equal difference in efficacy responses of secukinumab 300 and 150 mg in the mentioned subgroup and its counter subgroup. Intensity of the color depicts the degree of association between the subgroup of patients and the efficacy outcome and is in no way related to the inefficacy of any dose.\nHeat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.\nPatients without concomitant MTX use were estimated to have better responses with secukinumab 300-mg than 150-mg dose regimens for several endpoints. The patients treated prior with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than 150 mg.\nCovariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. 
The ACR50 responses were higher in patients treated with secukinumab 300 mg who had one of the following baseline characteristics: did not use concomitant MTX or had presence of enthesitis at baseline (Fig. 2A). PASDAS-LDA, including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. Current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg who had one of the following characteristics: no previous anti-TNF therapy (p < 0.05), no use of concomitant MTX (p < 0.01), presence of enthesitis at baseline (p < 0.001), and earlier time since PsA diagnosis (<2 or 2–7 years; p < 0.05), reached PASDAS-LDA + REM compared with secukinumab 150 mg (Fig. 2B).\nCovariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. Data presented from estimates of logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors, baseline score and weight as a covariate. PASDAS-LDA including REM defined as PASDAS score <3.2. LS indicates lease squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary.", "The safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab was consistent in the pooled population with what has been reported for individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7" ]
[ null, null, "results", null, null ]
[ "METHODS", "Outcome Measures", "Statistical Analysis", "RESULTS", "Predictor Outcome Analyses by Machine Learning", "IPEM Results (Overall Population and Subgroups)", "Pooled Safety of Secukinumab", "DISCUSSION", "CONCLUSIONS", "Supplementary Material" ]
[ "For this study, a machine learning exploratory approach was used to screen a more comprehensive set of baseline patient characteristics. Unlike traditional multivariate statistical regression methods that tend to break down when the number of predictors is relatively large or the predictors are highly correlated, machine learning techniques allow us to evaluate much larger numbers of possibly highly correlated predictors. One such machine learning method is the Bayesian elastic net10 that extends traditional regression method by including large numbers of patient characteristics together with constraints on the magnitude of their associations with the response. These constraints effectively remove all those predictors with little or no association with the response, and by selecting the appropriate constraints, a parsimonious yet complete set of baseline characteristics associated with the response can be identified. One additional advantage of the Bayesian elastic net over other machine learning methods is that it supports statistical inferences (e.g., confidence intervals) on the coefficients of the identified predictors.\nWe investigated a total of 275 predictors (based on disease characteristics and interaction terms) for association with the improvement of disease signs and symptoms, and the Bayesian elastic net algorithm10 identified predictors for enhanced benefit of secukinumab 300 mg as a starting dose versus 150 mg from the data of 2148 patients.\nThe predictors thus identified were further subjected to evaluation of the treatment effect of secukinumab 300 versus 150 mg using individual patient efficacy meta-analysis (IPEM).\nIndividual patient efficacy meta-analysis aims to summarize the evidence on a specific clinical question from multiple related studies. The statistical implementation of an IPEM fundamentally retains the clustering of patients within studies. It facilitates standardization of analyses across studies and direct derivation of the information desired, independent of significance or how it was reported.11\nThe IPEM included 2049 patients with PsA from four phase 3 studies; FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5. The designs and patient inclusion and exclusion criteria of the individual studies have been reported previously.2,6,7,12 Patients in each study were randomized to secukinumab and placebo at baseline. Secukinumab doses included subcutaneous 300 and 150 mg administered at baseline with loading dose at weeks 1, 2, and 3, followed by maintenance dose every 4 weeks starting at week 4; placebo group was treated similarly. For secukinumab no-load regimen (in FUTURE 4 and 5), secukinumab 150 mg was administered at baseline, with placebo at weeks 1, 2, and 3 followed by secukinumab 150 mg every 4 weeks starting at week 4. For IPEM, from a total of 2148 patients with active PsA who were originally randomized in 4 phase 3 studies (397, 414, 341, and 996 patients in FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5, respectively), data from 2049 patients receiving secukinumab 300 mg, 150 mg load, and 150 mg no-load were included in the efficacy pool for subgroup analysis. A total of 99 patients who received secukinumab 75 mg were excluded. 
Achievement of several clinically relevant higher hurdle endpoints with secukinumab versus placebo at week 16 was assessed in the overall population and in a subgroup analysis by prior anti-TNF use (naive and IR), concomitant MTX use (with and without MTX), baseline DAS28-CRP levels (≤5.1 [LDA and/or REM] and >5.1 [active disease]), baseline CRP levels (≤10 and >10 mg/L), and baseline BSA with psoriasis (≥3%, <10%, and ≥10%).\n Outcome Measures Efficacy endpoints analyzed at week 16 using machine learning included ACR20/50 response, ACR-n score (an extension of the ACR response criteria defined as the lowest of the following three values: percentage change in the number of swollen joints, percentage change in the number of tender joints, and the median of the percentage change in the other five measures, which are part of the ACR criteria), PASI 75/90 response, PASDAS (change from baseline), resolution of dactylitis and enthesitis, improvement in MDA, and Health Assessment Questionnaire–Disability Index (HAQ-DI) response and HAQ-DI score.\nIndividual patient efficacy meta-analysis was performed in outcomes including ACR50/70, PASI 90, PASDAS-LDA, and MDA at week 16 (placebo-controlled period) in the overall population. The ACR50, ACR70, and PASI 90 responses were also assessed in patients stratified by prior anti-TNF use, concomitant MTX use, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis.\nEfficacy endpoints analyzed at week 16 using machine learning included ACR20/50 response, ACR-n score (an extension of the ACR response criteria defined as the lowest of the following three values: percentage change in the number of swollen joints, percentage change in the number of tender joints, and the median of the percentage change in the other five measures, which are part of the ACR criteria), PASI 75/90 response, PASDAS (change from baseline), resolution of dactylitis and enthesitis, improvement in MDA, and Health Assessment Questionnaire–Disability Index (HAQ-DI) response and HAQ-DI score.\nIndividual patient efficacy meta-analysis was performed in outcomes including ACR50/70, PASI 90, PASDAS-LDA, and MDA at week 16 (placebo-controlled period) in the overall population. The ACR50, ACR70, and PASI 90 responses were also assessed in patients stratified by prior anti-TNF use, concomitant MTX use, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis.\n Statistical Analysis For analyses involving machine learning, a Bayesian elastic net algorithm was used to predict each endpoint from a set of ~40 of 275 covariates, and their interactions with treatment were identified to form subgroups of covariates with substantial predictive power. A heat map was used to display common predictors' magnitude across efficacy endpoints at week 16. Missing values for binary endpoints were imputed as nonresponders.\nTo assess the performance of the Bayesian elastic net model, receiver operating characteristic curves were produced to quantify the trade-off between sensitivity and specificity for each modeled endpoint. The area under each receiver operating characteristic curve (AUC) was computed to summarize the overall model discriminating ability to identify subgroups (0/1) of patients for each outcome. In particular, the AUC scores at week 16 used 5-fold cross validation within FUTURE 2 to FUTURE 5 studies that randomly selected 4-fold of the entire patients to predict the fifth fold. This step was iterated 5 times to get the average. 
The scores ranged from 0.75 for dactylitis to a high score of 0.81 corresponding to PASI 90. The AUC scores observed with Bayesian elastic net model were higher than those observed with multivariate logistic regression (0.45 for enthesitis to 0.58 for PASI 90) (Supplementary Table 1, http://links.lww.com/RHU/A174).\nFor IPEM, model-based treatment effects versus placebo (%) for the meta-analysis of binary endpoints are expressed as least-squares means from logistic regression, with study, treatment, and anti-TNF status (removed when the subgroup is stratified by prior anti-TNF use) as factors and body weight as a covariate. Missing values were imputed as nonresponders.\nFor analyses involving machine learning, a Bayesian elastic net algorithm was used to predict each endpoint from a set of ~40 of 275 covariates, and their interactions with treatment were identified to form subgroups of covariates with substantial predictive power. A heat map was used to display common predictors' magnitude across efficacy endpoints at week 16. Missing values for binary endpoints were imputed as nonresponders.\nTo assess the performance of the Bayesian elastic net model, receiver operating characteristic curves were produced to quantify the trade-off between sensitivity and specificity for each modeled endpoint. The area under each receiver operating characteristic curve (AUC) was computed to summarize the overall model discriminating ability to identify subgroups (0/1) of patients for each outcome. In particular, the AUC scores at week 16 used 5-fold cross validation within FUTURE 2 to FUTURE 5 studies that randomly selected 4-fold of the entire patients to predict the fifth fold. This step was iterated 5 times to get the average. The scores ranged from 0.75 for dactylitis to a high score of 0.81 corresponding to PASI 90. The AUC scores observed with Bayesian elastic net model were higher than those observed with multivariate logistic regression (0.45 for enthesitis to 0.58 for PASI 90) (Supplementary Table 1, http://links.lww.com/RHU/A174).\nFor IPEM, model-based treatment effects versus placebo (%) for the meta-analysis of binary endpoints are expressed as least-squares means from logistic regression, with study, treatment, and anti-TNF status (removed when the subgroup is stratified by prior anti-TNF use) as factors and body weight as a covariate. Missing values were imputed as nonresponders.", "Efficacy endpoints analyzed at week 16 using machine learning included ACR20/50 response, ACR-n score (an extension of the ACR response criteria defined as the lowest of the following three values: percentage change in the number of swollen joints, percentage change in the number of tender joints, and the median of the percentage change in the other five measures, which are part of the ACR criteria), PASI 75/90 response, PASDAS (change from baseline), resolution of dactylitis and enthesitis, improvement in MDA, and Health Assessment Questionnaire–Disability Index (HAQ-DI) response and HAQ-DI score.\nIndividual patient efficacy meta-analysis was performed in outcomes including ACR50/70, PASI 90, PASDAS-LDA, and MDA at week 16 (placebo-controlled period) in the overall population. 
The ACR50, ACR70, and PASI 90 responses were also assessed in patients stratified by prior anti-TNF use, concomitant MTX use, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis.", "For analyses involving machine learning, a Bayesian elastic net algorithm was used to predict each endpoint from a set of ~40 of 275 covariates, and their interactions with treatment were identified to form subgroups of covariates with substantial predictive power. A heat map was used to display common predictors' magnitude across efficacy endpoints at week 16. Missing values for binary endpoints were imputed as nonresponders.\nTo assess the performance of the Bayesian elastic net model, receiver operating characteristic curves were produced to quantify the trade-off between sensitivity and specificity for each modeled endpoint. The area under each receiver operating characteristic curve (AUC) was computed to summarize the overall model discriminating ability to identify subgroups (0/1) of patients for each outcome. In particular, the AUC scores at week 16 used 5-fold cross validation within FUTURE 2 to FUTURE 5 studies that randomly selected 4-fold of the entire patients to predict the fifth fold. This step was iterated 5 times to get the average. The scores ranged from 0.75 for dactylitis to a high score of 0.81 corresponding to PASI 90. The AUC scores observed with Bayesian elastic net model were higher than those observed with multivariate logistic regression (0.45 for enthesitis to 0.58 for PASI 90) (Supplementary Table 1, http://links.lww.com/RHU/A174).\nFor IPEM, model-based treatment effects versus placebo (%) for the meta-analysis of binary endpoints are expressed as least-squares means from logistic regression, with study, treatment, and anti-TNF status (removed when the subgroup is stratified by prior anti-TNF use) as factors and body weight as a covariate. Missing values were imputed as nonresponders.", "For machine learning analysis, the patient population was distributed almost equally in three groups for time since diagnosis of PsA (33% patients each with time since diagnosis between 0 and 2 years, 2–7 years, and >7 years).\nOf the 2049 PsA patients included in the IPEM, 461, 572, 335, and 681 patients received secukinumab 300 mg, 150 mg, 150 mg no-load, and placebo, respectively. The majority of patients completed week 24 in all treatment groups (95.7%, 94.8%, 94.0%, and 90.3% patients in secukinumab 300 mg, 150 mg load, 150 mg no-load, and placebo, respectively). Demographic and baseline characteristics were comparable across the treatment groups (Supplementary Table 2, http://links.lww.com/RHU/A174). Approximately two-thirds of patients were anti-TNF–naive (68.5%–72.8%), and approximately half (47.4%–51.9%) were receiving concomitant MTX at baseline.\n Predictor Outcome Analyses by Machine Learning No single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. Heat map analysis showed that of 275 predictors, several predictors jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). This figure includes only baseline predictors that were found to be associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). 
The color of each cell depicts the association between the dose-response differences (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green color depicts greater efficacy response of secukinumab 300 versus 150 mg among patients in the mentioned subgroup (e.g., patients without concomitant MTX use) than their counter subgroup (e.g., patients with concomitant MTX), red color depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the counter subgroup, and no color exhibits almost equal difference in efficacy responses of secukinumab 300 and 150 mg in the mentioned subgroup and its counter subgroup. Intensity of the color depicts the degree of association between the subgroup of patients and the efficacy outcome and is in no way related to the inefficacy of any dose.\nHeat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.\nPatients without concomitant MTX use were estimated to have better responses with secukinumab 300-mg than 150-mg dose regimens for several endpoints. The patients treated prior with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than 150 mg.\nCovariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. The ACR50 responses were higher in patients treated with secukinumab 300 mg who had one of the following baseline characteristics: did not use concomitant MTX or had presence of enthesitis at baseline (Fig. 2A). PASDAS-LDA, including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. Current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg who had one of the following characteristics: no previous anti-TNF therapy (p < 0.05), no use of concomitant MTX (p < 0.01), presence of enthesitis at baseline (p < 0.001), and earlier time since PsA diagnosis (<2 or 2–7 years; p < 0.05), reached PASDAS-LDA + REM compared with secukinumab 150 mg (Fig. 2B).\nCovariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. Data presented from estimates of logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors, baseline score and weight as a covariate. PASDAS-LDA including REM defined as PASDAS score <3.2. LS indicates lease squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary.\nNo single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. 
Heat map analysis showed that of 275 predictors, several predictors jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). This figure includes only baseline predictors that were found to be associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). The color of each cell depicts the association between the dose-response differences (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green color depicts greater efficacy response of secukinumab 300 versus 150 mg among patients in the mentioned subgroup (e.g., patients without concomitant MTX use) than their counter subgroup (e.g., patients with concomitant MTX), red color depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the counter subgroup, and no color exhibits almost equal difference in efficacy responses of secukinumab 300 and 150 mg in the mentioned subgroup and its counter subgroup. Intensity of the color depicts the degree of association between the subgroup of patients and the efficacy outcome and is in no way related to the inefficacy of any dose.\nHeat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.\nPatients without concomitant MTX use were estimated to have better responses with secukinumab 300-mg than 150-mg dose regimens for several endpoints. The patients treated prior with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than 150 mg.\nCovariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. The ACR50 responses were higher in patients treated with secukinumab 300 mg who had one of the following baseline characteristics: did not use concomitant MTX or had presence of enthesitis at baseline (Fig. 2A). PASDAS-LDA, including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. Current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg who had one of the following characteristics: no previous anti-TNF therapy (p < 0.05), no use of concomitant MTX (p < 0.01), presence of enthesitis at baseline (p < 0.001), and earlier time since PsA diagnosis (<2 or 2–7 years; p < 0.05), reached PASDAS-LDA + REM compared with secukinumab 150 mg (Fig. 2B).\nCovariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. 
Data presented from estimates of logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors, baseline score and weight as a covariate. PASDAS-LDA including REM defined as PASDAS score <3.2. LS indicates lease squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary.\n IPEM Results (Overall Population and Subgroups) Improvements were observed with secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens versus placebo for all endpoints at week 16 in the overall population (Fig. 3). Secukinumab 300 mg was associated with greater improvements as compared with both 150-mg dose regimens for all higher hurdle endpoints. Secukinumab 150-mg load had higher responses (%) in ACR50, PASI 90, and MDA responses as compared with no-load regimen at week 16.\nTreatment effect with secukinumab versus placebo in overall population at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA; PASDAS-LDA including remission is defined as PASDAS ≤3.2.\nAt week 16, greater treatment effects versus placebo were observed for ACR50/70 and PASI 90 responses with all secukinumab doses in anti-TNF–naive and anti–TNF-IR patients (Fig. 4A). For all three endpoints, greater response rates were observed with secukinumab 300 mg as compared with 150-mg load and no-load regimens in anti–TNF-naive and anti–TNF-IR patients. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimens in anti–TNF-naive and anti–TNF-IR patients.\nTreatment effect with secukinumab versus placebo by (A) prior anti-TNF use and (B) concomitant MTX use at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nIn patients with or without concomitant MTX use, all secukinumab regimens demonstrated greater treatment effect in ACR50/70 and PASI 90 responses versus placebo at week 16. Secukinumab 300 mg was associated with numerically higher PASI 90 response compared with 150-mg load and no-load dose regimens in patients with and without concomitant MTX. The ACR50 response was numerically similar in 300-mg and 150-mg load regimen in patients with concomitant MTX use. Secukinumab 150-mg load showed higher responses than the no-load regimen in the concomitant MTX group (Fig. 4B).\nThe ACR50, ACR70, and PASI 90 response rates were superior to placebo at week 16 in all dose groups analyzed by baseline DAS28-CRP, baseline CRP level, and baseline BSA with psoriasis (Figs. 5A, B and Fig. 6). Higher responses were observed for secukinumab 300 mg compared with the 150-mg dose in all subgroups for most endpoints (Figs. 5A, B and Fig. 6). Secukinumab 150-mg load showed higher responses than no-load regimen for most endpoints across subgroups (Fig. 5A, B and Fig. 6).\nTreatment effect with secukinumab versus placebo by (A) baseline DAS28-CRP and (B) baseline CRP levels at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nTreatment effect with secukinumab versus placebo by baseline BSA (psoriasis subset) at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 body surface.\nImprovements were observed with secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens versus placebo for all endpoints at week 16 in the overall population (Fig. 3). Secukinumab 300 mg was associated with greater improvements as compared with both 150-mg dose regimens for all higher hurdle endpoints. 
Secukinumab 150-mg load had higher responses (%) in ACR50, PASI 90, and MDA responses as compared with no-load regimen at week 16.\nTreatment effect with secukinumab versus placebo in overall population at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA; PASDAS-LDA including remission is defined as PASDAS ≤3.2.\nAt week 16, greater treatment effects versus placebo were observed for ACR50/70 and PASI 90 responses with all secukinumab doses in anti-TNF–naive and anti–TNF-IR patients (Fig. 4A). For all three endpoints, greater response rates were observed with secukinumab 300 mg as compared with 150-mg load and no-load regimens in anti–TNF-naive and anti–TNF-IR patients. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimens in anti–TNF-naive and anti–TNF-IR patients.\nTreatment effect with secukinumab versus placebo by (A) prior anti-TNF use and (B) concomitant MTX use at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nIn patients with or without concomitant MTX use, all secukinumab regimens demonstrated greater treatment effect in ACR50/70 and PASI 90 responses versus placebo at week 16. Secukinumab 300 mg was associated with numerically higher PASI 90 response compared with 150-mg load and no-load dose regimens in patients with and without concomitant MTX. The ACR50 response was numerically similar in 300-mg and 150-mg load regimen in patients with concomitant MTX use. Secukinumab 150-mg load showed higher responses than the no-load regimen in the concomitant MTX group (Fig. 4B).\nThe ACR50, ACR70, and PASI 90 response rates were superior to placebo at week 16 in all dose groups analyzed by baseline DAS28-CRP, baseline CRP level, and baseline BSA with psoriasis (Figs. 5A, B and Fig. 6). Higher responses were observed for secukinumab 300 mg compared with the 150-mg dose in all subgroups for most endpoints (Figs. 5A, B and Fig. 6). Secukinumab 150-mg load showed higher responses than no-load regimen for most endpoints across subgroups (Fig. 5A, B and Fig. 6).\nTreatment effect with secukinumab versus placebo by (A) baseline DAS28-CRP and (B) baseline CRP levels at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 BSA.\nTreatment effect with secukinumab versus placebo by baseline BSA (psoriasis subset) at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3 body surface.\n Pooled Safety of Secukinumab The safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab was consistent in the pooled population with what has been reported for individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7\nThe safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab was consistent in the pooled population with what has been reported for individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7", "No single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. Heat map analysis showed that of 275 predictors, several predictors jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). 
This figure includes only baseline predictors that were found to be associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). The color of each cell depicts the association between the dose-response differences (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green color depicts greater efficacy response of secukinumab 300 versus 150 mg among patients in the mentioned subgroup (e.g., patients without concomitant MTX use) than their counter subgroup (e.g., patients with concomitant MTX), red color depicts greater efficacy of secukinumab 300 versus 150 mg among patients in the counter subgroup, and no color exhibits almost equal difference in efficacy responses of secukinumab 300 and 150 mg in the mentioned subgroup and its counter subgroup. Intensity of the color depicts the degree of association between the subgroup of patients and the efficacy outcome and is in no way related to the inefficacy of any dose.\nHeat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.\nPatients without concomitant MTX use were estimated to have better responses with secukinumab 300-mg than 150-mg dose regimens for several endpoints. The patients treated prior with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than 150 mg.\nCovariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. The ACR50 responses were higher in patients treated with secukinumab 300 mg who had one of the following baseline characteristics: did not use concomitant MTX or had presence of enthesitis at baseline (Fig. 2A). PASDAS-LDA, including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. Current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg who had one of the following characteristics: no previous anti-TNF therapy (p < 0.05), no use of concomitant MTX (p < 0.01), presence of enthesitis at baseline (p < 0.001), and earlier time since PsA diagnosis (<2 or 2–7 years; p < 0.05), reached PASDAS-LDA + REM compared with secukinumab 150 mg (Fig. 2B).\nCovariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. Data presented from estimates of logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors, baseline score and weight as a covariate. PASDAS-LDA including REM defined as PASDAS score <3.2. 
LS indicates least squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary.", "Improvements were observed with secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens versus placebo for all endpoints at week 16 in the overall population (Fig. 3). Secukinumab 300 mg was associated with greater improvements as compared with both 150-mg dose regimens for all higher hurdle endpoints. Secukinumab 150-mg load had higher responses (%) in ACR50, PASI 90, and MDA responses as compared with the no-load regimen at week 16.\nTreatment effect with secukinumab versus placebo in overall population at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA; PASDAS-LDA including remission is defined as PASDAS ≤3.2.\nAt week 16, greater treatment effects versus placebo were observed for ACR50/70 and PASI 90 responses with all secukinumab doses in anti–TNF-naive and anti–TNF-IR patients (Fig. 4A). For all three endpoints, greater response rates were observed with secukinumab 300 mg as compared with 150-mg load and no-load regimens in anti–TNF-naive and anti–TNF-IR patients. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimen in anti–TNF-naive and anti–TNF-IR patients.\nTreatment effect with secukinumab versus placebo by (A) prior anti-TNF use and (B) concomitant MTX use at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA.\nIn patients with or without concomitant MTX use, all secukinumab regimens demonstrated greater treatment effect in ACR50/70 and PASI 90 responses versus placebo at week 16. Secukinumab 300 mg was associated with numerically higher PASI 90 response compared with 150-mg load and no-load dose regimens in patients with and without concomitant MTX. The ACR50 response was numerically similar in the 300-mg and 150-mg load regimens in patients with concomitant MTX use. Secukinumab 150-mg load showed higher responses than the no-load regimen in the concomitant MTX group (Fig. 4B).\nThe ACR50, ACR70, and PASI 90 response rates were superior to placebo at week 16 in all dose groups analyzed by baseline DAS28-CRP, baseline CRP level, and baseline BSA with psoriasis (Figs. 5A, B and Fig. 6). Higher responses were observed for secukinumab 300 mg compared with the 150-mg dose in all subgroups for most endpoints (Figs. 5A, B and Fig. 6). Secukinumab 150-mg load showed higher responses than the no-load regimen for most endpoints across subgroups (Figs. 5A, B and Fig. 6).\nTreatment effect with secukinumab versus placebo by (A) baseline DAS28-CRP and (B) baseline CRP levels at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA.\nTreatment effect with secukinumab versus placebo by baseline BSA (psoriasis subset) at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% body surface area.", "The safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab was consistent in the pooled population with what has been reported for individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7", "We used a machine learning approach to analyze predictors of response. 
Because machine learning is a novel and complex approach, the predictors identified were further validated by evaluating the treatment effect of secukinumab 300 versus 150 mg at week 16 for clinically important endpoints, including remission, in patients with active PsA, using IPEM with data from the FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5 studies.\nPsoriatic arthritis is a heterogeneous chronic inflammatory disease whose clinical manifestations include peripheral arthritis, dactylitis, enthesitis, or axial involvement.14 It has different disease courses (e.g., mild, intermittent, or with high structural damage and disability).15 Considering the heterogeneity of the disease, predictors of response and remission should be identified in order to individualize treatment and prevent further worsening of the disease.\nThe machine learning technique investigated whether there were specific baseline clinical characteristics that could predict which patients would gain additional benefit from the secukinumab 300-mg dose, using pooled data. The performance of the Bayesian elastic net model employed for predictor analysis was assessed by AUC scores. The closer the AUC score is to 1, the better the performance of the predicting model. The AUC scores ranged from 0.75 to 0.81 for the various outcomes, indicating the discriminatory power of the model used. The common key covariate that predicted higher response of secukinumab 300 mg over 150 mg for higher hurdle endpoints such as ACR50, PASI 90, and PASDAS (change from baseline) was the presence of enthesitis at baseline. Higher responses in most of the efficacy endpoints tested were predicted to be achieved with secukinumab 300 mg as compared with the 150-mg dose in patients with no concomitant use of MTX and patients with a diagnosis of PsA between 2 and 7 years. Patients with a diagnosis of PsA between 2 and 7 years (early PsA) and patients who have enthesitis were identified as being likely to achieve higher efficacy response rates, as assessed by PASDAS-LDA including remission, when treated with secukinumab 300 mg compared with 150 mg. Similarly, patients who had enthesitis and did not use concomitant MTX at baseline were predicted to have higher ACR50 response with secukinumab 300 mg compared with the 150-mg dose.\nMachine learning identified possible baseline variables for which starting patients on the 300-mg dose could be more beneficial than the 150-mg dose. These baseline variables identified via machine learning were further analyzed using subgroup analysis.\nIn the IPEM, all secukinumab dose regimens (300 mg, 150 mg load, and 150 mg no-load) showed superiority over placebo in improving clinical signs and symptoms and physical function in patients with PsA in the overall population. These findings corroborate those reported previously.2,6,7,16 Results showed that the secukinumab 300-mg dose provided additional benefits over the 150-mg dose for higher hurdle endpoints such as ACR50/70, PASI 90, PASDAS-LDA + REM, and MDA. Similar findings were reported earlier, further substantiating the results of this analysis.2,17,18\nAll secukinumab regimens versus placebo showed improvement in ACR50/70 and PASI 90 responses across subgroups regardless of prior anti-TNF status, use of concomitant MTX, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis. 
Secukinumab 300 mg was associated with higher responses than the 150-mg regimens across most subgroups, particularly in the anti–TNF-IR subset and in patients with psoriasis, which is consistent with previous reports.2,3,6,7,16 Patients who did not use concomitant MTX showed greater treatment effect in ACR50 and PASI 90 responses with secukinumab 300 mg than with the secukinumab 150-mg dose regimens. Secukinumab 300 mg showed higher ACR50/70 and PASI 90 responses than the secukinumab 150-mg dose regimens in patients with baseline DAS28-CRP levels of >5.1, baseline CRP levels of ≤10 mg/L, and ≥10% of baseline BSA with psoriasis.\nSecukinumab 150-mg load showed higher responses than the 150-mg no-load regimen for most endpoints across the various subgroups, supporting the use of the loading regimen to achieve more rapid responses by week 16, in line with the results previously reported in the FUTURE 5 study.7\nThe results of these analyses should be interpreted in light of associated limitations. It is important to understand that certain predictors may not have clinical relevance in specific patients. Therefore, meta-analysis models or machine learning techniques should not replace the medical or scientific judgment of trained professionals. The use of meta-analysis data and machine learning algorithms in combination with targeted clinical evaluations may be more effective in obtaining better treatment results and management of the disease.", "Machine learning predicted additional benefit of secukinumab 300 mg over 150 mg in anti–TNF-IR patients, patients treated without concomitant MTX, and patients with psoriasis. In addition, early PsA and the presence of enthesitis were identified as predictors of PASDAS-LDA including remission that warrant further research. Individual patient efficacy meta-analysis of 2049 patients in four phase 3 studies showed that secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens provided improvements in higher hurdle endpoints versus placebo in patients with active PsA. These results were observed in the overall population and across subgroups regardless of prior anti-TNF use, use of concomitant MTX, and various levels of baseline disease activity and BSA of psoriasis. Secukinumab 300 mg was associated with greater improvements compared with the 150-mg dose in the overall population and in most subgroups.", "" ]
[ "methods", null, null, "results", null, "results", null, "discussions", "conclusions", "supplementary-material" ]
[ "biologics", "efficacy", "interleukin", "machine learning", "TNF inhibitors" ]
METHODS: For this study, a machine learning exploratory approach was used to screen a more comprehensive set of baseline patient characteristics. Unlike traditional multivariate statistical regression methods, which tend to break down when the number of predictors is relatively large or the predictors are highly correlated, machine learning techniques allow us to evaluate much larger numbers of possibly highly correlated predictors. One such machine learning method is the Bayesian elastic net,10 which extends traditional regression methods by including large numbers of patient characteristics together with constraints on the magnitude of their associations with the response. These constraints effectively remove all those predictors with little or no association with the response, and by selecting the appropriate constraints, a parsimonious yet complete set of baseline characteristics associated with the response can be identified. One additional advantage of the Bayesian elastic net over other machine learning methods is that it supports statistical inferences (e.g., confidence intervals) on the coefficients of the identified predictors. We investigated a total of 275 predictors (based on disease characteristics and interaction terms) for association with the improvement of disease signs and symptoms, and the Bayesian elastic net algorithm10 identified predictors for enhanced benefit of secukinumab 300 mg as a starting dose versus 150 mg from the data of 2148 patients. The predictors thus identified were further subjected to evaluation of the treatment effect of secukinumab 300 versus 150 mg using individual patient efficacy meta-analysis (IPEM). Individual patient efficacy meta-analysis aims to summarize the evidence on a specific clinical question from multiple related studies. The statistical implementation of an IPEM fundamentally retains the clustering of patients within studies. It facilitates standardization of analyses across studies and direct derivation of the information desired, independent of significance or how it was reported.11 The IPEM included 2049 patients with PsA from four phase 3 studies: FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5. The designs and patient inclusion and exclusion criteria of the individual studies have been reported previously.2,6,7,12 Patients in each study were randomized to secukinumab and placebo at baseline. Secukinumab doses included subcutaneous 300 and 150 mg administered at baseline with a loading dose at weeks 1, 2, and 3, followed by a maintenance dose every 4 weeks starting at week 4; the placebo group was treated similarly. For the secukinumab no-load regimen (in FUTURE 4 and 5), secukinumab 150 mg was administered at baseline, with placebo at weeks 1, 2, and 3, followed by secukinumab 150 mg every 4 weeks starting at week 4. For IPEM, from a total of 2148 patients with active PsA who were originally randomized in the 4 phase 3 studies (397, 414, 341, and 996 patients in FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5, respectively), data from 2049 patients receiving secukinumab 300 mg, 150 mg load, and 150 mg no-load were included in the efficacy pool for subgroup analysis. A total of 99 patients who received secukinumab 75 mg were excluded. 
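The pooling step an IPEM implies can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline: the file names, column names, and treatment labels are hypothetical assumptions.

```python
# Sketch: assembling the IPEM pool while retaining each patient's study
# membership, as the text emphasizes (an IPEM "retains the clustering of
# patients within studies"). File and column names are hypothetical; the
# 75-mg arm is excluded, as in the analysis described above.
import pandas as pd

study_files = {"FUTURE2": "future2.csv", "FUTURE3": "future3.csv",
               "FUTURE4": "future4.csv", "FUTURE5": "future5.csv"}

frames = []
for study, path in study_files.items():
    d = pd.read_csv(path)      # one row per randomized patient (assumed layout)
    d["study"] = study         # keep the study label for clustered analysis
    frames.append(d)

pool = pd.concat(frames, ignore_index=True)
pool = pool[pool["treatment"] != "75mg"]  # drop the 99 patients on 75 mg
```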
Achievement of several clinically relevant higher hurdle endpoints with secukinumab versus placebo at week 16 was assessed in the overall population and in a subgroup analysis by prior anti-TNF use (naive and IR), concomitant MTX use (with and without MTX), baseline DAS28-CRP levels (≤5.1 [LDA and/or REM] and >5.1 [active disease]), baseline CRP levels (≤10 and >10 mg/L), and baseline BSA with psoriasis (≥3%, <10%, and ≥10%). Outcome Measures: Efficacy endpoints analyzed at week 16 using machine learning included ACR20/50 response, ACR-n score (an extension of the ACR response criteria defined as the lowest of the following three values: percentage change in the number of swollen joints, percentage change in the number of tender joints, and the median of the percentage change in the other five measures, which are part of the ACR criteria), PASI 75/90 response, PASDAS (change from baseline), resolution of dactylitis and enthesitis, improvement in MDA, and Health Assessment Questionnaire–Disability Index (HAQ-DI) response and HAQ-DI score. Individual patient efficacy meta-analysis was performed in outcomes including ACR50/70, PASI 90, PASDAS-LDA, and MDA at week 16 (placebo-controlled period) in the overall population. The ACR50, ACR70, and PASI 90 responses were also assessed in patients stratified by prior anti-TNF use, concomitant MTX use, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis. Statistical Analysis: For analyses involving machine learning, a Bayesian elastic net algorithm was used to predict each endpoint from a set of ~40 of 275 covariates, and their interactions with treatment were identified to form subgroups of covariates with substantial predictive power. A heat map was used to display common predictors' magnitude across efficacy endpoints at week 16. Missing values for binary endpoints were imputed as nonresponders. To assess the performance of the Bayesian elastic net model, receiver operating characteristic curves were produced to quantify the trade-off between sensitivity and specificity for each modeled endpoint. The area under each receiver operating characteristic curve (AUC) was computed to summarize the overall ability of the model to discriminate subgroups (0/1) of patients for each outcome. In particular, the AUC scores at week 16 used 5-fold cross-validation within the FUTURE 2 to FUTURE 5 studies, randomly selecting 4 folds of the patients to predict the fifth fold; this step was iterated 5 times and the results averaged. The scores ranged from 0.75 for dactylitis to a high of 0.81 for PASI 90. The AUC scores observed with the Bayesian elastic net model were higher than those observed with multivariate logistic regression (0.45 for enthesitis to 0.58 for PASI 90) (Supplementary Table 1, http://links.lww.com/RHU/A174). For IPEM, model-based treatment effects versus placebo (%) for the meta-analysis of binary endpoints are expressed as least-squares means from logistic regression, with study, treatment, and anti-TNF status (removed when the subgroup is stratified by prior anti-TNF use) as factors and body weight as a covariate. Missing values were imputed as nonresponders.
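As a rough illustration of the cross-validated AUC evaluation just described, the sketch below uses scikit-learn's frequentist elastic-net-penalized logistic regression as a stand-in for the Bayesian elastic net (which has no off-the-shelf scikit-learn implementation). X (a NumPy array of baseline covariates and treatment interactions), y (a nonresponder-imputed binary endpoint), the penalty mix, and the regularization strength are all illustrative assumptions, not the study's settings.

```python
# Sketch: elastic-net logistic regression with 5-fold cross-validated AUC,
# approximating the model evaluation described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cv_auc(X, y, n_splits=5, seed=0):
    """Average AUC over stratified 5-fold cross-validation."""
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000),
    )
    aucs = []
    for train, test in StratifiedKFold(n_splits, shuffle=True,
                                       random_state=seed).split(X, y):
        model.fit(X[train], y[train])
        p = model.predict_proba(X[test])[:, 1]
        aucs.append(roc_auc_score(y[test], p))
    return float(np.mean(aucs))
```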
RESULTS: For machine learning analysis, the patient population was distributed almost equally in three groups for time since diagnosis of PsA (33% of patients each, with time since diagnosis of 0–2 years, 2–7 years, and >7 years). Of the 2049 PsA patients included in the IPEM, 461, 572, 335, and 681 patients received secukinumab 300 mg, 150 mg load, 150 mg no-load, and placebo, respectively. The majority of patients completed week 24 in all treatment groups (95.7%, 94.8%, 94.0%, and 90.3% of patients in the secukinumab 300-mg, 150-mg load, 150-mg no-load, and placebo groups, respectively). Demographic and baseline characteristics were comparable across the treatment groups (Supplementary Table 2, http://links.lww.com/RHU/A174). Approximately two-thirds of patients were anti–TNF-naive (68.5%–72.8%), and approximately half (47.4%–51.9%) were receiving concomitant MTX at baseline. Predictor Outcome Analyses by Machine Learning: No single predictor alone could identify patients who would benefit from a starting dose of secukinumab 300 versus 150 mg for the endpoints assessed. Heat map analysis showed that, of 275 predictors, several jointly produced adequate predictions for achieving better treatment response with secukinumab 300 versus 150 mg (Fig. 1). This figure includes only baseline predictors that were found to be associated with 2 or more endpoints (these predictors suggest a treatment difference between 300 and 150 mg among subgroups of patients). Rows depict patients' baseline predictors (e.g., MTX use), and columns depict endpoints (e.g., ACR20). The color of each cell depicts the association between the dose-response differences (300 vs 150 mg) for each endpoint (column) among patient subgroups defined by the predictors (row) (e.g., patients with or without dactylitis at randomization). Green indicates greater efficacy of secukinumab 300 versus 150 mg among patients in the named subgroup (e.g., patients without concomitant MTX use) than in their counter subgroup (e.g., patients with concomitant MTX), red indicates greater efficacy of secukinumab 300 versus 150 mg among patients in the counter subgroup, and no color indicates a nearly equal difference in the efficacy responses of secukinumab 300 and 150 mg between the named subgroup and its counter subgroup. Intensity of the color depicts the degree of association between the subgroup of patients and the efficacy outcome and is in no way related to the inefficacy of any dose. Heat map showing common predictors of response to secukinumab 300 versus 150 mg across endpoints at week 16.
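The figure's encoding (rows = predictors, columns = endpoints, red-to-green shading by direction and strength of the 300- vs 150-mg difference) can be reproduced with a standard heat map. A minimal matplotlib sketch follows; the matrix values are invented placeholders, not the published associations.

```python
# Sketch: a predictor-by-endpoint heat map with the encoding described above.
# Cell values stand for the estimated 300- vs 150-mg response difference in a
# subgroup relative to its counter subgroup (positive = green, negative = red).
import numpy as np
import matplotlib.pyplot as plt

predictors = ["No concomitant MTX", "1 prior anti-TNF", "Enthesitis at baseline"]
endpoints = ["ACR50", "PASI 90", "PASDAS", "HAQ-DI"]
assoc = np.array([[0.6, 0.4, 0.3, 0.2],
                  [0.1, 0.0, 0.3, 0.5],
                  [0.5, 0.2, 0.6, 0.1]])  # placeholder values in [-1, 1]

fig, ax = plt.subplots()
im = ax.imshow(assoc, cmap="RdYlGn", vmin=-1, vmax=1)  # red-to-green diverging map
ax.set_xticks(range(len(endpoints)))
ax.set_xticklabels(endpoints)
ax.set_yticks(range(len(predictors)))
ax.set_yticklabels(predictors)
fig.colorbar(im, ax=ax, label="300 mg vs 150 mg association")
plt.tight_layout()
plt.show()
```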
Patients without concomitant MTX use were estimated to have better responses with secukinumab 300-mg than 150-mg dose regimens for several endpoints. Patients previously treated with one anti-TNF agent were predicted to show better improvement in HAQ-DI and PASDAS scores and ACR-n responses and higher resolution of enthesitis with secukinumab 300 mg compared with 150 mg. Similarly, the presence of enthesitis at baseline was a strong predictor of greater reduction in PASDAS score with secukinumab 300 versus 150 mg. Patients who did not use any systemic glucocorticoid achieved better PASI 90 responses with secukinumab 300 mg than 150 mg. Covariates that demonstrated greater response for secukinumab 300 versus 150 mg in terms of improvement of disease symptoms at week 16 are shown in Supplementary Table 3, http://links.lww.com/RHU/A174. The ACR50 responses were higher in patients treated with secukinumab 300 mg who had either of the following baseline characteristics: no concomitant MTX use or enthesitis at baseline (Fig. 2A). PASDAS-LDA including remission (PASDAS-LDA + REM; PASDAS score <3.2) is reported as a recommended disease activity target in clinical trials. The current analysis showed that significantly higher proportions of patients treated with secukinumab 300 mg than with 150 mg reached PASDAS-LDA + REM if they had one of the following characteristics: no previous anti-TNF therapy (p < 0.05), no concomitant MTX use (p < 0.01), enthesitis at baseline (p < 0.001), or shorter time since PsA diagnosis (<2 or 2–7 years; p < 0.05) (Fig. 2B). Covariates that predicted a higher proportion of patients achieving (A) ACR50 and (B) PASDAS-LDA + REM with secukinumab 300 versus 150 mg. *p < 0.0001, †p < 0.001, §p < 0.01, ‡p < 0.05 versus secukinumab 150 mg. Data reported as nonresponder imputation. Data presented from estimates of a logistic regression model with study, treatment, and randomization stratum (TNF status: naive or IR) as factors and baseline score and weight as covariates. PASDAS-LDA including REM defined as PASDAS score <3.2. LS indicates least squares; N, number of evaluable patients; SF-36 MCS, Short Form-36 Mental Component Summary. IPEM Results (Overall Population and Subgroups): Improvements were observed with secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens versus placebo for all endpoints at week 16 in the overall population (Fig. 3). Secukinumab 300 mg was associated with greater improvements as compared with both 150-mg dose regimens for all higher hurdle endpoints. Secukinumab 150-mg load had higher responses (%) in ACR50, PASI 90, and MDA responses as compared with the no-load regimen at week 16. Treatment effect with secukinumab versus placebo in overall population at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA; PASDAS-LDA including remission is defined as PASDAS ≤3.2. At week 16, greater treatment effects versus placebo were observed for ACR50/70 and PASI 90 responses with all secukinumab doses in anti–TNF-naive and anti–TNF-IR patients (Fig. 4A). For all three endpoints, greater response rates were observed with secukinumab 300 mg as compared with 150-mg load and no-load regimens in anti–TNF-naive and anti–TNF-IR patients. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimen in anti–TNF-naive and anti–TNF-IR patients. Treatment effect with secukinumab versus placebo by (A) prior anti-TNF use and (B) concomitant MTX use at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA. In patients with or without concomitant MTX use, all secukinumab regimens demonstrated greater treatment effect in ACR50/70 and PASI 90 responses versus placebo at week 16. Secukinumab 300 mg was associated with numerically higher PASI 90 response compared with 150-mg load and no-load dose regimens in patients with and without concomitant MTX. The ACR50 response was numerically similar in the 300-mg and 150-mg load regimens in patients with concomitant MTX use. Secukinumab 150-mg load showed higher responses than the no-load regimen in the concomitant MTX group (Fig. 4B). The ACR50, ACR70, and PASI 90 response rates were superior to placebo at week 16 in all dose groups analyzed by baseline DAS28-CRP, baseline CRP level, and baseline BSA with psoriasis (Figs. 5A, B and Fig. 6). Higher responses were observed for secukinumab 300 mg compared with the 150-mg dose in all subgroups for most endpoints (Figs. 5A, B and Fig. 6). Secukinumab 150-mg load showed higher responses than the no-load regimen for most endpoints across subgroups (Figs. 5A, B and Fig. 6). Treatment effect with secukinumab versus placebo by (A) baseline DAS28-CRP and (B) baseline CRP levels at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% BSA. Treatment effect with secukinumab versus placebo by baseline BSA (psoriasis subset) at week 16. The PASI 90 response was analyzed in patients with psoriasis ≥3% body surface area.
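The figure footnotes above spell out the pooled subgroup model: a logistic regression of a nonresponder-imputed binary endpoint on study, treatment, and the anti-TNF randomization stratum, with weight (and baseline score) as covariates. A toy sketch with statsmodels on simulated data follows; all column names are assumptions, and the baseline-score covariate is omitted for brevity.

```python
# Sketch: the pooled (IPEM) logistic model behind the subgroup figures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "study": rng.choice(["FUTURE2", "FUTURE3", "FUTURE4", "FUTURE5"], n),
    "treatment": rng.choice(["placebo", "150mg", "300mg"], n),
    "tnf_status": rng.choice(["naive", "IR"], n),
    "weight": rng.normal(80, 15, n),
    "acr50": rng.binomial(1, 0.3, n).astype(float),
})
df.loc[rng.random(n) < 0.05, "acr50"] = np.nan  # some missing outcomes
df["acr50"] = df["acr50"].fillna(0)             # nonresponder imputation

fit = smf.logit("acr50 ~ C(study) + C(treatment) + C(tnf_status) + weight",
                df).fit(disp=False)
# Model-based response rates per arm ("least-squares means"): predict at each
# treatment level with the observed distribution of the other covariates.
for arm in ["placebo", "150mg", "300mg"]:
    print(arm, fit.predict(df.assign(treatment=arm)).mean())
```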
Pooled Safety of Secukinumab: The safety profile of secukinumab over long-term treatment (up to 5 years) in patients with PsA, psoriasis, and ankylosing spondylitis has been previously reported.13 The safety profile of secukinumab was consistent in the pooled population with what has been reported for individual studies of secukinumab in PsA, and no dose-response relationship was observed.2,6,7 DISCUSSION: We used a machine learning approach to analyze predictors of response. Because machine learning is a novel and complex approach, the predictors identified were further validated by evaluating the treatment effect of secukinumab 300 versus 150 mg at week 16 for clinically important endpoints, including remission, in patients with active PsA, using IPEM with data from the FUTURE 2, FUTURE 3, FUTURE 4, and FUTURE 5 studies. Psoriatic arthritis is a heterogeneous chronic inflammatory disease whose clinical manifestations include peripheral arthritis, dactylitis, enthesitis, or axial involvement.14 It has different disease courses (e.g., mild, intermittent, or with high structural damage and disability).15 Considering the heterogeneity of the disease, predictors of response and remission should be identified in order to individualize treatment and prevent further worsening of the disease. The machine learning technique investigated whether there were specific baseline clinical characteristics that could predict which patients would gain additional benefit from the secukinumab 300-mg dose, using pooled data. The performance of the Bayesian elastic net model employed for predictor analysis was assessed by AUC scores. The closer the AUC score is to 1, the better the performance of the predicting model. The AUC scores ranged from 0.75 to 0.81 for the various outcomes, indicating the discriminatory power of the model used. The common key covariate that predicted higher response of secukinumab 300 mg over 150 mg for higher hurdle endpoints such as ACR50, PASI 90, and PASDAS (change from baseline) was the presence of enthesitis at baseline. Higher responses in most of the efficacy endpoints tested were predicted to be achieved with secukinumab 300 mg as compared with the 150-mg dose in patients with no concomitant use of MTX and patients with a diagnosis of PsA between 2 and 7 years. Patients with a diagnosis of PsA between 2 and 7 years (early PsA) and patients who have enthesitis were identified as being likely to achieve higher efficacy response rates, as assessed by PASDAS-LDA including remission, when treated with secukinumab 300 mg compared with 150 mg. Similarly, patients who had enthesitis and did not use concomitant MTX at baseline were predicted to have higher ACR50 response with secukinumab 300 mg compared with the 150-mg dose. Machine learning identified possible baseline variables for which starting patients on the 300-mg dose could be more beneficial than the 150-mg dose. These baseline variables identified via machine learning were further analyzed using subgroup analysis. In the IPEM, all secukinumab dose regimens (300 mg, 150 mg load, and 150 mg no-load) showed superiority over placebo in improving clinical signs and symptoms and physical function in patients with PsA in the overall population. These findings corroborate those reported previously.2,6,7,16 Results showed that the secukinumab 300-mg dose provided additional benefits over the 150-mg dose for higher hurdle endpoints such as ACR50/70, PASI 90, PASDAS-LDA + REM, and MDA. Similar findings were reported earlier, further substantiating the results of this analysis.2,17,18 All secukinumab regimens versus placebo showed improvement in ACR50/70 and PASI 90 responses across subgroups regardless of prior anti-TNF status, use of concomitant MTX, baseline DAS28-CRP, baseline CRP, and baseline BSA with psoriasis. Secukinumab 300 mg was associated with higher responses than the 150-mg regimens across most subgroups, particularly in the anti–TNF-IR subset and in patients with psoriasis, which is consistent with previous reports.2,3,6,7,16 Patients who did not use concomitant MTX showed greater treatment effect in ACR50 and PASI 90 responses with secukinumab 300 mg than with the secukinumab 150-mg dose regimens. Secukinumab 300 mg showed higher ACR50/70 and PASI 90 responses than the secukinumab 150-mg dose regimens in patients with baseline DAS28-CRP levels of >5.1, baseline CRP levels of ≤10 mg/L, and ≥10% of baseline BSA with psoriasis. Secukinumab 150-mg load showed higher responses than the 150-mg no-load regimen for most endpoints across the various subgroups, supporting the use of the loading regimen to achieve more rapid responses by week 16, in line with the results previously reported in the FUTURE 5 study.7 The results of these analyses should be interpreted in light of associated limitations. It is important to understand that certain predictors may not have clinical relevance in specific patients. Therefore, meta-analysis models or machine learning techniques should not replace the medical or scientific judgment of trained professionals. The use of meta-analysis data and machine learning algorithms in combination with targeted clinical evaluations may be more effective in obtaining better treatment results and management of the disease. CONCLUSIONS: Machine learning predicted additional benefit of secukinumab 300 mg over 150 mg in anti–TNF-IR patients, patients treated without concomitant MTX, and patients with psoriasis. In addition, early PsA and the presence of enthesitis were identified as predictors of PASDAS-LDA including remission that warrant further research. Individual patient efficacy meta-analysis of 2049 patients in four phase 3 studies showed that secukinumab 300-mg, 150-mg load, and 150-mg no-load regimens provided improvements in higher hurdle endpoints versus placebo in patients with active PsA. These results were observed in the overall population and across subgroups regardless of prior anti-TNF use, use of concomitant MTX, and various levels of baseline disease activity and BSA of psoriasis. Secukinumab 300 mg was associated with greater improvements compared with the 150-mg dose in the overall population and in most subgroups. Supplementary Material:
Background: Using a machine learning approach, the study investigated whether specific baseline characteristics could predict which psoriatic arthritis (PsA) patients may gain additional benefit from a starting dose of secukinumab 300 mg over 150 mg. We also report results from an individual patient efficacy meta-analysis (IPEM) of 2049 PsA patients from the FUTURE 2 to 5 studies to evaluate the efficacy of secukinumab 300 mg and 150 mg, with and without a loading regimen, versus placebo at week 16 on achievement of several clinically relevant difficult-to-achieve (higher hurdle) endpoints. Methods: Machine learning employed a Bayesian elastic net to analyze baseline data from 2148 PsA patients, investigating 275 predictors. For IPEM, results were presented as the difference in response rates versus placebo at week 16. Results: Machine learning showed that secukinumab 300 mg had additional benefits in patients who were anti-tumor necrosis factor-naive, treated with 1 prior anti-tumor necrosis factor agent, not receiving methotrexate, with enthesitis at baseline, and with shorter PsA disease duration. For IPEM, at week 16, all secukinumab doses had greater treatment effect (%) versus placebo for higher hurdle endpoints in the overall population and in all subgroups; the 300-mg dose had greater treatment effect than 150 mg for all endpoints in the overall population and most subgroups. Conclusions: Machine learning identified predictors for additional benefit of secukinumab 300 mg compared with the 150-mg dose. Individual patient efficacy meta-analysis showed that secukinumab 300 mg provided greater improvements compared with 150 mg in higher hurdle efficacy endpoints in patients with active PsA in the overall population and in most subgroups with various levels of baseline disease activity and psoriasis.
null
null
7,552
314
[ 192, 312, 2940, 729, 62 ]
10
[ "mg", "secukinumab", "patients", "150 mg", "150", "300", "secukinumab 300", "baseline", "response", "versus" ]
[ "2148 patients predictors", "clinical characteristics predict", "disease predictors response", "predictor identify patients", "patients baseline predictors" ]
null
null
null
[CONTENT] biologics | efficacy | interleukin | machine learning | TNF inhibitors [SUMMARY]
[CONTENT] biologics | efficacy | interleukin | machine learning | TNF inhibitors [SUMMARY]
[CONTENT] biologics | efficacy | interleukin | machine learning | TNF inhibitors [SUMMARY]
[CONTENT] biologics | efficacy | interleukin | machine learning | TNF inhibitors [SUMMARY]
null
null
[CONTENT] Antibodies, Monoclonal | Antibodies, Monoclonal, Humanized | Arthritis, Psoriatic | Bayes Theorem | Clinical Trials, Phase III as Topic | Double-Blind Method | Humans | Machine Learning [SUMMARY]
[CONTENT] Antibodies, Monoclonal | Antibodies, Monoclonal, Humanized | Arthritis, Psoriatic | Bayes Theorem | Clinical Trials, Phase III as Topic | Double-Blind Method | Humans | Machine Learning [SUMMARY]
[CONTENT] Antibodies, Monoclonal | Antibodies, Monoclonal, Humanized | Arthritis, Psoriatic | Bayes Theorem | Clinical Trials, Phase III as Topic | Double-Blind Method | Humans | Machine Learning [SUMMARY]
[CONTENT] Antibodies, Monoclonal | Antibodies, Monoclonal, Humanized | Arthritis, Psoriatic | Bayes Theorem | Clinical Trials, Phase III as Topic | Double-Blind Method | Humans | Machine Learning [SUMMARY]
null
null
[CONTENT] 2148 patients predictors | clinical characteristics predict | disease predictors response | predictor identify patients | patients baseline predictors [SUMMARY]
[CONTENT] 2148 patients predictors | clinical characteristics predict | disease predictors response | predictor identify patients | patients baseline predictors [SUMMARY]
[CONTENT] 2148 patients predictors | clinical characteristics predict | disease predictors response | predictor identify patients | patients baseline predictors [SUMMARY]
[CONTENT] 2148 patients predictors | clinical characteristics predict | disease predictors response | predictor identify patients | patients baseline predictors [SUMMARY]
null
null
[CONTENT] mg | secukinumab | patients | 150 mg | 150 | 300 | secukinumab 300 | baseline | response | versus [SUMMARY]
[CONTENT] mg | secukinumab | patients | 150 mg | 150 | 300 | secukinumab 300 | baseline | response | versus [SUMMARY]
[CONTENT] mg | secukinumab | patients | 150 mg | 150 | 300 | secukinumab 300 | baseline | response | versus [SUMMARY]
[CONTENT] mg | secukinumab | patients | 150 mg | 150 | 300 | secukinumab 300 | baseline | response | versus [SUMMARY]
null
null
[CONTENT] future | baseline | bayesian | bayesian elastic | elastic | change | elastic net | net | future future | bayesian elastic net [SUMMARY]
[CONTENT] mg | load | secukinumab | 150 | 150 mg | 150 mg load | mg load | fig | pasi 90 response | pasi [SUMMARY]
[CONTENT] mg | 150 mg | 150 | patients | overall population subgroups | population subgroups | 300 | 300 mg | secukinumab 300 mg | secukinumab 300 [SUMMARY]
[CONTENT] mg | secukinumab | 150 | 150 mg | patients | 300 | baseline | secukinumab 300 | response | load [SUMMARY]
null
null
[CONTENT] Bayesian | 2148 | 275 ||| IPEM | week 16 [SUMMARY]
[CONTENT] 300 | 1 prior ||| IPEM | week 16 | 300 | 150 [SUMMARY]
[CONTENT] 300 | 150 ||| 300 | 150 [SUMMARY]
[CONTENT] 300 mg over 150 mg ||| 2049 | 5 | 300 mg, | 150 | week 16 ||| Bayesian | 2148 | 275 ||| IPEM | week 16 ||| ||| 300 | 1 prior ||| IPEM | week 16 | 300 | 150 ||| ||| 300 | 150 ||| 300 | 150 [SUMMARY]
null
Association between coverage of maternal and child health interventions, and under-5 mortality: a repeated cross-sectional analysis of 35 sub-Saharan African countries.
25190448
Infant and child mortality rates are among the most important indicators of child health, nutrition, implementation of key survival interventions, and the overall social and economic development of a population. In this paper, we investigate the role of coverage of maternal and child health (MNCH) interventions in contributing to declines in child mortality in sub-Saharan Africa.
BACKGROUND
Data are from 81 Demographic and Health Surveys from 35 sub-Saharan African countries. Using ecological time-series and child-level regression models, we estimated the effect of MNCH interventions (summarized by the percent composite coverage index, or CCI) on child mortality within the first 5 years of life, net of temporal trends and covariates at the household, maternal, and child levels.
DESIGN
At the ecologic level, a unit increase in standardized CCI was associated with a reduction in under-5 child mortality rate (U5MR) of 29.0 per 1,000 (95% CI: -43.2, -14.7) after adjustment for survey period effects and country-level per capita gross domestic product (pcGDP). At the child level, a unit increase in standardized CCI was associated with an odds ratio of 0.86 for child mortality (95% CI: 0.82-0.90) after adjustment for survey period effect, country-level pcGDP, and a set of household-, maternal-, and child-level covariates.
RESULTS
MNCH interventions are important in reducing U5MR, while the effects of economic growth in sub-Saharan Africa remain weak and inconsistent. Improved coverage of proven life-saving interventions will likely contribute to further reductions in U5MR in sub-Saharan Africa.
CONCLUSIONS
[ "Africa South of the Sahara", "Child Mortality", "Child Welfare", "Child, Preschool", "Cross-Sectional Studies", "Developing Countries", "Diet", "Female", "Health Policy", "Health Services Accessibility", "Humans", "Infant", "Infant, Newborn", "Male", "Maternal Health Services", "Social Determinants of Health", "Socioeconomic Factors" ]
4155074
null
null
Methods
Data sources We extracted data from DHS surveys (38) conducted since 1990. DHS are household surveys that use nationally representative sampling plans and have special emphasis on fertility, child mortality, and indicators of MNCH (38). We selected standard surveys for each country that included birth histories (‘BR’ files from which child mortality rates could be calculated) of women aged 15–49 and MNCH coverage indicators. In total, 81 surveys were included, covering 35 countries and 93% of the population of sub-Saharan Africa (39). Twenty-four surveys were conducted between 1992 and 2000, 20 between 2000 and 2004, 21 between 2005 and 2008, and 16 since 2009. Twenty-four of the 35 countries conducted at least two surveys during this period, and 22 conducted three or more. Study population, data designs, and sample sizes The study population was structured as two distinct data designs. First, we examined the study population as an ecological time-series design with countries repeatedly observed over time. In this design, the lowest level of analysis was the survey period, nested within countries as a hierarchical structure. Second, we used a repeated cross-sectional design, with children at the lowest unit of analysis. A key substantive advantage of the second approach is the ability to account for within-country between-child factors that can influence both child mortality and the country-level economic development and coverage indicators. Further, the ecological time-series data structure assumes that the probability of dying (or U5MR) is the same for all children within a country period. This assumption is relaxed in the second data structure, although in doing so we are modeling the probability of a child dying before the fifth birthday, and not U5MR. In the ecological time-series design, 81 survey periods were available for analysis, covering 35 countries, with an average of 2.3 surveys per country. For the child-level analyses, children across all surveys were pooled, and the probability of child death was examined in the 3-year period immediately preceding the survey. In total, there was information on 395,493 children born within the reference period. After making exclusions for missing data on covariates, the final analytical sample size was 393,934.
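The two data designs map onto two regression structures. The sketch below is one plausible implementation on simulated stand-in data, following the models this study reports (standardized CCI effect on U5MR adjusted for survey period and per capita GDP, with countries as clusters; child-level odds of death). All column names are assumptions, and the real child-level model also adjusts for household-, maternal-, and child-level covariates.

```python
# Sketch: the two regression structures implied by the two data designs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Design 1 - ecological time series: country-period U5MR on standardized CCI.
eco = pd.DataFrame({
    "country": np.repeat([f"c{i}" for i in range(35)], 3),
    "period": np.tile([1, 2, 3], 35),
    "cci_std": rng.normal(size=105),
    "pcgdp": rng.normal(size=105),
})
eco["u5mr"] = 120 - 29 * eco["cci_std"] + rng.normal(0, 20, 105)  # simulated
m1 = smf.mixedlm("u5mr ~ cci_std + C(period) + pcgdp", eco,
                 groups="country").fit()

# Design 2 - repeated cross-sections: child-level odds of dying before age 5.
kids = pd.DataFrame({"cci_std": rng.normal(size=2000),
                     "pcgdp": rng.normal(size=2000)})
kids["died"] = rng.binomial(1, 1 / (1 + np.exp(2.5 + 0.15 * kids["cci_std"])))
m2 = smf.logit("died ~ cci_std + pcgdp", kids).fit(disp=False)

print(m1.params["cci_std"])          # U5MR change per unit standardized CCI
print(np.exp(m2.params["cci_std"]))  # odds ratio per unit standardized CCI
```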
Outcomes This study uses two outcomes, corresponding to the two data designs employed. In the ecological time-series design, the outcome is the U5MR for the 3-year reference period in each survey. In the child-level design, the outcome is the probability of a child death occurring within the 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages $x$ and $x+n$, which are derived from life tables and denoted by $_nq_x$ (40). The U5MR, also denoted $_5q_0$, is formally defined as the probability of a child death occurring between birth and the child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further decomposed into the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or $_1q_0$), and 12–59 months (child mortality, conditional on having reached the first birthday, or $_4q_1$) (41).
U5MRs were calculated using the DHS synthetic cohort life table methodology (42). This approach uses the age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, and 36–47 months (completed ages) to calculate the individual probabilities of dying, without adjustment for heaping of age at death at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly before or after 12 months of age are reported as an age at death of 1 year (42), so some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, covered all under-5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age at death imputed (T. Pullum, personal communication, 2014). The imputation procedure involved finding a range of dates within which the death could have occurred and then selecting a value randomly within that range, which would be unlikely to introduce any upward or downward bias. The calculation of the U5MR was based on the number of deaths to live-born children in the 3-year reference period preceding the survey. Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the complement of the product of the component survival probabilities, expressed as a rate per 1,000 live births.
In the child-level design, the outcome was defined as a child death occurring within the reference period. This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age.
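The final aggregation step of the synthetic cohort method reduces to multiplying the segment-specific survival probabilities and taking the complement. A minimal sketch, assuming the seven segment death probabilities have already been estimated from deaths and exposure in the 3-year reference period:

```python
import numpy as np

# Combine segment-specific death probabilities (ages 0, 1-2, 3-5, 6-11,
# 12-23, 24-35, and 36-47 completed months) into the U5MR.
def u5mr_from_segments(segment_q):
    q = np.asarray(segment_q, dtype=float)
    survival = np.prod(1.0 - q)        # probability of surviving every segment
    return (1.0 - survival) * 1000.0   # complement, per 1,000 live births

# Example with illustrative (not real) segment probabilities
print(u5mr_from_segments([0.030, 0.010, 0.008, 0.012, 0.020, 0.012, 0.008]))
```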
Exposure Our key exposure of interest was coverage of MNCH interventions. Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and that can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus vaccine (DPT3), measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, the following weighted average of the eight interventions (12):
$$\mathrm{CCI}=\frac{1}{4}\left(\mathrm{FPS}+\frac{\mathrm{SBA}+\mathrm{ANCS}}{2}+\frac{2\,\mathrm{DPT3}+\mathrm{MSL}+\mathrm{BCG}}{4}+\frac{\mathrm{ORT}+\mathrm{CPNM}}{2}\right)\qquad(1)$$
The CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way to summarize and compare coverage of MNCH interventions across countries and over time (12).
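Equation 1 is a direct computation once the eight coverage proportions are in hand. A minimal sketch, with illustrative coverage percentages in the example call:

```python
# Composite coverage index (CCI) of Equation 1: equal weight to the four
# domains (family planning, maternal/newborn care, immunization, and case
# management); inputs may be percentages or proportions.
def composite_coverage_index(fps, sba, ancs, dpt3, msl, bcg, ort, cpnm):
    family_planning = fps
    maternal_newborn = (sba + ancs) / 2.0
    immunization = (2.0 * dpt3 + msl + bcg) / 4.0
    case_management = (ort + cpnm) / 2.0
    return (family_planning + maternal_newborn
            + immunization + case_management) / 4.0

# Example with illustrative (not real) coverage percentages
print(composite_coverage_index(30, 45, 70, 75, 70, 80, 35, 40))
```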
Covariates At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. pcGDP was included in regression models as its logarithm (base 10). At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, and area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific, weighted linear combinations of these items were constructed, with weights for each item obtained from a principal component analysis (46). The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47).
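The wealth index construction amounts to a per-country principal component analysis of the asset indicators followed by a quintile cut. The sketch below uses scikit-learn's PCA as an assumed stand-in for the software behind the published weights, and the asset column names are illustrative:

```python
import pandas as pd
from sklearn.decomposition import PCA

# Illustrative asset indicator columns (0/1 per household)
ASSETS = ["flush_toilet", "refrigerator", "car", "motorcycle",
          "television", "washing_machine", "telephone"]

def wealth_quintiles(households: pd.DataFrame) -> pd.Series:
    """First principal component of assets, computed within each country,
    cut into fifths from poorest (1) to richest (5)."""
    def score_and_cut(group):
        score = PCA(n_components=1).fit_transform(
            group[ASSETS].astype(float))[:, 0]
        return pd.qcut(pd.Series(score, index=group.index), 5,
                       labels=[1, 2, 3, 4, 5])
    return households.groupby("country", group_keys=False).apply(score_and_cut)
```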
Statistical analyses We conducted two separate sets of analyses, corresponding to the two data structures described previously. For the ecological time-series data, we fit linear regression models of the form:
$$y_{ij}=\beta_0+BC_j+BS_{ij}+\beta_1 CCI_{ij}+e_{0ij}\qquad(2)$$
where $y_{ij}$ represents the U5MR at survey time $i$ in country $j$; $\beta_0$ represents the constant, or the average U5MR holding CCI constant and after accounting for country differences; $BC_j$ represents the country-specific dummy variables estimating differences in U5MR between countries; $BS_{ij}$ represents the effects associated with dummies for survey years; $\beta_1 CCI_{ij}$ represents the change in U5MR for a unit change in CCI; and $e_{0ij}$ represents the residuals at survey-year level $i$ in country $j$.
A second series of analyses was conducted on the child-level dataset. In these analyses, the basic model is a logistic regression model with a binary response ($y=1$ for a child death during the reference period, $y=0$ otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model ($BC_j$). The outcome of child mortality, $\Pr(y_{ij}=1)$, is assumed to be binomially distributed, $y_{ij}\sim\text{Binomial}(1,\pi_{ij})$, with the probability $\pi_{ij}$ related to the set of independent variables $X$ through a logit link function:
$$\text{logit}(\pi_{ij})=\beta_0+BC_j+BS_{ij}+\beta_1 CCI_{ij}+BX_{ij}\qquad(3)$$
The intercept $\beta_0$ represents the log odds of child mortality for the reference group, $BS_{ij}$ is a vector of coefficients for dummy variables for survey years, $\beta_1 CCI_{ij}$ represents the change in the log odds of child mortality for a one-unit increase in CCI, and $BX_{ij}$ represents a vector of coefficients for the change in the log odds of child mortality for a one-unit increase in each independent variable. Models were weighted, and standard errors were adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals.
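A minimal sketch of the two specifications, using statsmodels as an assumed stand-in for the authors' software. Clustering on the primary sampling unit is only a rough proxy for the full multistage design adjustment, and all column names are illustrative:

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_models(df_country, df_child):
    """df_country: one row per survey period (ecological file);
    df_child: pooled child file with sampling weights and PSU identifiers."""
    # Equation 2: U5MR on CCI with country and survey-year fixed effects
    ecological = smf.ols(
        "u5mr ~ C(country) + C(survey_year) + cci", data=df_country
    ).fit()

    # Equation 3: weighted logistic regression for child death, with
    # cluster-robust standard errors on the primary sampling unit
    child = smf.glm(
        "death ~ C(country) + C(survey_year) + cci + log_pcgdp",
        data=df_child,
        family=sm.families.Binomial(),
        freq_weights=df_child["weight"],
    ).fit(cov_type="cluster", cov_kwds={"groups": df_child["psu"]})

    odds_ratios = np.exp(child.params)  # exponentiated coefficients
    return ecological, child, odds_ratios
```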
Results
Between 1992 and 2012, the U5MR declined in a majority (19 of 24) of the sub-Saharan African countries where repeated DHS surveys were available, although the rate of change varied across countries (Table 1). The U5MR ranged from 62.8 deaths per 1,000 live births in Sao Tome to 305.8 per 1,000 in Niger in the initial round of surveys (median year: 1998), a five-fold difference across countries. In the most recent round of DHS surveys (median year: 2005), the U5MR ranged from 67.2 deaths per 1,000 live births in Senegal to 190.1 per 1,000 in Chad, a three-fold difference across countries. During this period, the CCI increased in 17 countries, from an average of 53.4% (SD 14.4) across all countries in the first survey period to 58.7% (SD 12.2) across the 24 countries with a most recent survey wave.
Table 1. Country, year of survey, sample size, % under-5 child deaths, U5MR, CCI, and log per capita GDP in 35 sub-Saharan African countries for the first and most recent survey during the period 1990–2012. Source: authors' calculations from DHS data; percent child deaths are weighted and calculated in a 3-year window preceding the start of the survey. U5MR = under-5 mortality rate; CCI = composite coverage index; log pcGDP = logarithm of per capita gross domestic product.
At both the baseline and repeated surveys, an inverse association was seen between country-level U5MR and CCI, indicating lower rates of under-5 mortality in countries with greater coverage of interventions (Pearson correlation −0.73 at both times, p<0.001, Fig. 2a and b). This association held when examining the average changes in U5MR and CCI over time in the subset of 24 countries with repeated surveys (Pearson correlation −0.74, p<0.001, Fig. 2c).
Fig. 2. Correlation between under-5 mortality rate (U5MR) and composite coverage index (CCI) at baseline (panel a, n=35 surveys) and repeated surveys (panel b, n=24), and correlation between the change in U5MR and the change in CCI from baseline (panel c, n=24).
At an ecological level, the regression analyses described in Equation 2 show that a standardized unit increase in CCI was associated with a reduction of 28.5 per 1,000 in U5MR (95% CI: −42.4, −14.6), after accounting for secular declines in U5MR as captured by survey period fixed effects (Table 2). Adding log pcGDP (Model 2) did not substantially alter this effect (β=−29.0, 95% CI: −43.2, −14.7).
Table 2. Coefficients of two ecological models predicting U5MR across 81 survey periods in 35 sub-Saharan African countries, 1992–2012.
Table 3 shows the sample sizes and the unadjusted and adjusted risks of child mortality by covariates for the child-level analyses. In these analyses, CCI was also associated with a reduction in mortality, with an odds ratio of 0.87 (95% CI: 0.84, 0.92) indicating a protective effect against under-5 mortality independent of survey period effects (Model 1 of Fig. 3a). The inclusion of log pcGDP in this model did not alter the effect size. In a third model that included all child and maternal covariates in addition to log pcGDP, CCI remained robustly associated with a reduction in child mortality (odds ratio: 0.86, 95% CI: 0.82–0.90). However, the effect was attenuated when individual-level indicators were included for whether the mother received skilled antenatal care during pregnancy and whether a skilled attendant was present at birth (odds ratio: 0.97, 95% CI: 0.92–1.01, Model 4 of Fig. 3a). Without the individual-level indicators of intervention utilization, there was a graded, inverse association between country-level CCI and the probability of child mortality: children from countries in the highest quartile of CCI coverage had the lowest probability of mortality conditional on all covariates (odds ratio 0.74, 95% CI: 0.67–0.82) (Fig. 3b).
Table 3. Risks, bivariate odds ratios (OR), and multivariable adjusted odds ratios (aOR) of child mortality according to child-, maternal-, and household-level covariates across 35 sub-Saharan African countries, 1992–2012.
Fig. 3. Odds ratios (OR) for the association of CCI with child mortality in 35 sub-Saharan African countries; OR per quartile increase in composite coverage index (CCI) (left) and per SD increase in CCI from four separate models (right). *Model 1 (M1) included country and survey period fixed effects; M2 added log pcGDP to M1; M3 added maternal- and child-level covariates to M2; M4 added indicators for whether the mother received skilled antenatal care during pregnancy and whether a skilled attendant was present at birth. **Model includes country and survey period fixed effects, log pcGDP, and maternal- and child-level covariates; quartiles of CCI defined at the country level.
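The baseline and change-on-change correlations reported here can be reproduced from a country-level table holding each country's first and latest U5MR and CCI. A minimal sketch, with illustrative column names:

```python
from scipy.stats import pearsonr

# Pearson correlations between CCI and U5MR at baseline, and between their
# changes across survey rounds; column names are illustrative.
def cci_u5mr_correlations(df):
    r_base, p_base = pearsonr(df["cci_first"], df["u5mr_first"])
    r_change, p_change = pearsonr(df["cci_last"] - df["cci_first"],
                                  df["u5mr_last"] - df["u5mr_first"])
    return {"baseline": (r_base, p_base), "change": (r_change, p_change)}
```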
null
null
[ "Data sources", "Study population, data designs, and sample sizes", "Outcomes", "Exposure", "Covariates", "Statistical analyses" ]
[ "We extracted data from DHS surveys (38) conducted since 1990. DHS are household surveys that use nationally representative sampling plans and have special emphasis on fertility, child mortality, and indicators of MNCH (38). We selected standard surveys for each country that included birth histories (‘BR’ files from which child mortality rates could be calculated) of women aged 15–49 and MNCH coverage indicators. In total, 81 surveys were included, covering 35 countries and 93% of the population of sub-Saharan Africa (39). Twenty-four surveys were conducted between 1992 and 2000, 20 between 2000 and 2004, 21 between 2005 and 2008, and 16 since 2009. Twenty-four of the 35 countries conducted at least two surveys during this period, and 22 conducted three or more.", "The study population was structured as two distinct data designs. First, we examined the study population as an ecological time-series design with countries repeatedly observed over time. In this design, the lowest level of analysis was the survey period, nested within countries as a hierarchical structure. Second, we used a repeated cross-sectional design, with children at the lowest unit of analysis. A key substantive advantage of the second approach is the ability to account for within-country between-child factors that can influence both child mortality and the country-level economic development and coverage indicators. Further, the ecological time-series data structure assumes that the probability of dying (or U5MR) is the same for all children within a country period. This assumption is relaxed in the second data structure, although in doing so we are modeling the probability of a child dying before the fifth birthday, and not U5MR.\nIn the ecological time-series design, 81 survey periods were available for analysis, covering 35 countries, with an average of 2.3 surveys per country. For the child-level analyses, children across all surveys were pooled, and the probability of child death was examined in the 3-year period immediately preceding the survey. In total, there was information on 395,493 children born within the reference period. After making exclusions for missing data on covariates, the final analytical sample size was 393,934.", "This study uses two outcomes, corresponding to the two data designs employed. In the ecologic time-series design, the outcome is U5MR for the 3 years reference period in each survey. In the child-level design, the outcome is the probability of child death occurring within 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages (x and x+n), which are derived from life tables and denoted by n\nq\nx\n(40). The U5MR, also denoted 5\nq\n0, is formally defined as the probability a child death occurring between birth and a child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further defined as the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or 1\nq\n0), and 12–59 months (child mortality, conditional on having reached the first birthday, or 4\nq\n1) (41).\nU5MRs were calculated using the DHS synthetic cohort life table methodology (42). 
This approach uses age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, 36–47 months (completed ages) for the calculation of the individual probabilities of dying, without adjustment for age of death heaping at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly prior to or after 12 months of age are reported as a 1 year age of death (42). Therefore, some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, were on all under 5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age of death imputed (T Pullum, year of personal communication was 2014). The imputation procedure involved finding a range of dates within which death could have occurred, and then selecting a value randomly within that range which would likely not introduce any upwards or downwards bias. The calculation of the U5MR was based on the number of deaths to live-born children in a 3-year reference period preceding the survey. Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the product of the component survival probabilities, and expressed as a rate per 1,000 live births.\nIn the child-level design, the outcome was defined as a child death occurring within the reference period. This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age.", "Our key exposure of interest was coverage of MNCH interventions. Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus (DPT3) vaccine, measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, which is based on the following weighed average of the eight interventions (12):1CCI=14(FPS+SBA+ANCS2+2DPT3+MSL+BCG4+ORT+CPNM2)\n\nThe CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way summarize and compare coverage of MNCH interventions across countries and over time (12).", "At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. Analysis of pcGDP was included in regression models as the logarithm (base 10) of pcGDP. At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). 
Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific and weighted linear combinations of these items were constructed with weights for each item obtained from a principal component analysis (46). The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47).", "We conducted two separate set of analyses corresponding to the two data structures described previously. For the ecological time-series data, we fit linear regression models of the form:2yij=β0+BCj+BSij+β1CCIij+e0ij\n\nwhere yij represents the U5MR for survey time i in country j; β0 represents the constant or the average U5MR holding CCI constant, and after accounting for country differences (BCj);BCj represents the country-specific dummy variables estimating differences in U5MR between countries; BSij represents the effects associated with dummies for survey years; β1CCIij represents the change in U5MR for a unit change in CCI; and e0ij represents the residuals at the survey-year level i in country j.\nA second series of analyses were conducted child-level dataset. In these analyses, the basic model is a logistic regression model with a binary response (y=1 for child death during the reference period, y=0 otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model (BCj). The outcome of child mortality, Pr(y\nij=1), is assumed to be binomially distributed yij~Binomial (1,πij) with probability πij related to the set of independent variables X and a random effect for each level by a logit link function:3Logit(πij)=β0+BCj+BSij+β1CCIij+BXij\n\nThe intercept, β0, represents the log odds of child mortality for the reference group, BSij is a vector of coefficients for dummy variables for survey years, β1CCIij represents the log odds of child mortality for a one-unit increase in CCI, and the BX represents a vector of coefficients for the log odds of child mortality for a one-unit increase for each independent variable. Models were weighted and standard errors adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals." ]
[ null, null, null, null, null, null ]
[ "Methods", "Data sources", "Study population, data designs, and sample sizes", "Outcomes", "Exposure", "Covariates", "Statistical analyses", "Results", "Discussion" ]
[ " Data sources We extracted data from DHS surveys (38) conducted since 1990. DHS are household surveys that use nationally representative sampling plans and have special emphasis on fertility, child mortality, and indicators of MNCH (38). We selected standard surveys for each country that included birth histories (‘BR’ files from which child mortality rates could be calculated) of women aged 15–49 and MNCH coverage indicators. In total, 81 surveys were included, covering 35 countries and 93% of the population of sub-Saharan Africa (39). Twenty-four surveys were conducted between 1992 and 2000, 20 between 2000 and 2004, 21 between 2005 and 2008, and 16 since 2009. Twenty-four of the 35 countries conducted at least two surveys during this period, and 22 conducted three or more.\nWe extracted data from DHS surveys (38) conducted since 1990. DHS are household surveys that use nationally representative sampling plans and have special emphasis on fertility, child mortality, and indicators of MNCH (38). We selected standard surveys for each country that included birth histories (‘BR’ files from which child mortality rates could be calculated) of women aged 15–49 and MNCH coverage indicators. In total, 81 surveys were included, covering 35 countries and 93% of the population of sub-Saharan Africa (39). Twenty-four surveys were conducted between 1992 and 2000, 20 between 2000 and 2004, 21 between 2005 and 2008, and 16 since 2009. Twenty-four of the 35 countries conducted at least two surveys during this period, and 22 conducted three or more.\n Study population, data designs, and sample sizes The study population was structured as two distinct data designs. First, we examined the study population as an ecological time-series design with countries repeatedly observed over time. In this design, the lowest level of analysis was the survey period, nested within countries as a hierarchical structure. Second, we used a repeated cross-sectional design, with children at the lowest unit of analysis. A key substantive advantage of the second approach is the ability to account for within-country between-child factors that can influence both child mortality and the country-level economic development and coverage indicators. Further, the ecological time-series data structure assumes that the probability of dying (or U5MR) is the same for all children within a country period. This assumption is relaxed in the second data structure, although in doing so we are modeling the probability of a child dying before the fifth birthday, and not U5MR.\nIn the ecological time-series design, 81 survey periods were available for analysis, covering 35 countries, with an average of 2.3 surveys per country. For the child-level analyses, children across all surveys were pooled, and the probability of child death was examined in the 3-year period immediately preceding the survey. In total, there was information on 395,493 children born within the reference period. After making exclusions for missing data on covariates, the final analytical sample size was 393,934.\nThe study population was structured as two distinct data designs. First, we examined the study population as an ecological time-series design with countries repeatedly observed over time. In this design, the lowest level of analysis was the survey period, nested within countries as a hierarchical structure. Second, we used a repeated cross-sectional design, with children at the lowest unit of analysis. 
A key substantive advantage of the second approach is the ability to account for within-country between-child factors that can influence both child mortality and the country-level economic development and coverage indicators. Further, the ecological time-series data structure assumes that the probability of dying (or U5MR) is the same for all children within a country period. This assumption is relaxed in the second data structure, although in doing so we are modeling the probability of a child dying before the fifth birthday, and not U5MR.\nIn the ecological time-series design, 81 survey periods were available for analysis, covering 35 countries, with an average of 2.3 surveys per country. For the child-level analyses, children across all surveys were pooled, and the probability of child death was examined in the 3-year period immediately preceding the survey. In total, there was information on 395,493 children born within the reference period. After making exclusions for missing data on covariates, the final analytical sample size was 393,934.\n Outcomes This study uses two outcomes, corresponding to the two data designs employed. In the ecologic time-series design, the outcome is U5MR for the 3 years reference period in each survey. In the child-level design, the outcome is the probability of child death occurring within 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages (x and x+n), which are derived from life tables and denoted by n\nq\nx\n(40). The U5MR, also denoted 5\nq\n0, is formally defined as the probability a child death occurring between birth and a child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further defined as the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or 1\nq\n0), and 12–59 months (child mortality, conditional on having reached the first birthday, or 4\nq\n1) (41).\nU5MRs were calculated using the DHS synthetic cohort life table methodology (42). This approach uses age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, 36–47 months (completed ages) for the calculation of the individual probabilities of dying, without adjustment for age of death heaping at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly prior to or after 12 months of age are reported as a 1 year age of death (42). Therefore, some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, were on all under 5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age of death imputed (T Pullum, year of personal communication was 2014). The imputation procedure involved finding a range of dates within which death could have occurred, and then selecting a value randomly within that range which would likely not introduce any upwards or downwards bias. The calculation of the U5MR was based on the number of deaths to live-born children in a 3-year reference period preceding the survey. 
Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the product of the component survival probabilities, and expressed as a rate per 1,000 live births.\nIn the child-level design, the outcome was defined as a child death occurring within the reference period. This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age.\nThis study uses two outcomes, corresponding to the two data designs employed. In the ecologic time-series design, the outcome is U5MR for the 3 years reference period in each survey. In the child-level design, the outcome is the probability of child death occurring within 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages (x and x+n), which are derived from life tables and denoted by n\nq\nx\n(40). The U5MR, also denoted 5\nq\n0, is formally defined as the probability a child death occurring between birth and a child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further defined as the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or 1\nq\n0), and 12–59 months (child mortality, conditional on having reached the first birthday, or 4\nq\n1) (41).\nU5MRs were calculated using the DHS synthetic cohort life table methodology (42). This approach uses age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, 36–47 months (completed ages) for the calculation of the individual probabilities of dying, without adjustment for age of death heaping at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly prior to or after 12 months of age are reported as a 1 year age of death (42). Therefore, some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, were on all under 5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age of death imputed (T Pullum, year of personal communication was 2014). The imputation procedure involved finding a range of dates within which death could have occurred, and then selecting a value randomly within that range which would likely not introduce any upwards or downwards bias. The calculation of the U5MR was based on the number of deaths to live-born children in a 3-year reference period preceding the survey. Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the product of the component survival probabilities, and expressed as a rate per 1,000 live births.\nIn the child-level design, the outcome was defined as a child death occurring within the reference period. This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age.\n Exposure Our key exposure of interest was coverage of MNCH interventions. 
Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus (DPT3) vaccine, measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, which is based on the following weighed average of the eight interventions (12):1CCI=14(FPS+SBA+ANCS2+2DPT3+MSL+BCG4+ORT+CPNM2)\n\nThe CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way summarize and compare coverage of MNCH interventions across countries and over time (12).\nOur key exposure of interest was coverage of MNCH interventions. Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus (DPT3) vaccine, measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, which is based on the following weighed average of the eight interventions (12):1CCI=14(FPS+SBA+ANCS2+2DPT3+MSL+BCG4+ORT+CPNM2)\n\nThe CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way summarize and compare coverage of MNCH interventions across countries and over time (12).\n Covariates At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. Analysis of pcGDP was included in regression models as the logarithm (base 10) of pcGDP. At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific and weighted linear combinations of these items were constructed with weights for each item obtained from a principal component analysis (46). 
The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47).\nAt the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. Analysis of pcGDP was included in regression models as the logarithm (base 10) of pcGDP. At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific and weighted linear combinations of these items were constructed with weights for each item obtained from a principal component analysis (46). The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47).\n Statistical analyses We conducted two separate set of analyses corresponding to the two data structures described previously. For the ecological time-series data, we fit linear regression models of the form:2yij=β0+BCj+BSij+β1CCIij+e0ij\n\nwhere yij represents the U5MR for survey time i in country j; β0 represents the constant or the average U5MR holding CCI constant, and after accounting for country differences (BCj);BCj represents the country-specific dummy variables estimating differences in U5MR between countries; BSij represents the effects associated with dummies for survey years; β1CCIij represents the change in U5MR for a unit change in CCI; and e0ij represents the residuals at the survey-year level i in country j.\nA second series of analyses were conducted child-level dataset. In these analyses, the basic model is a logistic regression model with a binary response (y=1 for child death during the reference period, y=0 otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model (BCj). The outcome of child mortality, Pr(y\nij=1), is assumed to be binomially distributed yij~Binomial (1,πij) with probability πij related to the set of independent variables X and a random effect for each level by a logit link function:3Logit(πij)=β0+BCj+BSij+β1CCIij+BXij\n\nThe intercept, β0, represents the log odds of child mortality for the reference group, BSij is a vector of coefficients for dummy variables for survey years, β1CCIij represents the log odds of child mortality for a one-unit increase in CCI, and the BX represents a vector of coefficients for the log odds of child mortality for a one-unit increase for each independent variable. 
Models were weighted and standard errors adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals.\nWe conducted two separate set of analyses corresponding to the two data structures described previously. For the ecological time-series data, we fit linear regression models of the form:2yij=β0+BCj+BSij+β1CCIij+e0ij\n\nwhere yij represents the U5MR for survey time i in country j; β0 represents the constant or the average U5MR holding CCI constant, and after accounting for country differences (BCj);BCj represents the country-specific dummy variables estimating differences in U5MR between countries; BSij represents the effects associated with dummies for survey years; β1CCIij represents the change in U5MR for a unit change in CCI; and e0ij represents the residuals at the survey-year level i in country j.\nA second series of analyses were conducted child-level dataset. In these analyses, the basic model is a logistic regression model with a binary response (y=1 for child death during the reference period, y=0 otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model (BCj). The outcome of child mortality, Pr(y\nij=1), is assumed to be binomially distributed yij~Binomial (1,πij) with probability πij related to the set of independent variables X and a random effect for each level by a logit link function:3Logit(πij)=β0+BCj+BSij+β1CCIij+BXij\n\nThe intercept, β0, represents the log odds of child mortality for the reference group, BSij is a vector of coefficients for dummy variables for survey years, β1CCIij represents the log odds of child mortality for a one-unit increase in CCI, and the BX represents a vector of coefficients for the log odds of child mortality for a one-unit increase for each independent variable. Models were weighted and standard errors adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals.", "We extracted data from DHS surveys (38) conducted since 1990. DHS are household surveys that use nationally representative sampling plans and have special emphasis on fertility, child mortality, and indicators of MNCH (38). We selected standard surveys for each country that included birth histories (‘BR’ files from which child mortality rates could be calculated) of women aged 15–49 and MNCH coverage indicators. In total, 81 surveys were included, covering 35 countries and 93% of the population of sub-Saharan Africa (39). Twenty-four surveys were conducted between 1992 and 2000, 20 between 2000 and 2004, 21 between 2005 and 2008, and 16 since 2009. Twenty-four of the 35 countries conducted at least two surveys during this period, and 22 conducted three or more.", "The study population was structured as two distinct data designs. First, we examined the study population as an ecological time-series design with countries repeatedly observed over time. In this design, the lowest level of analysis was the survey period, nested within countries as a hierarchical structure. Second, we used a repeated cross-sectional design, with children at the lowest unit of analysis. A key substantive advantage of the second approach is the ability to account for within-country between-child factors that can influence both child mortality and the country-level economic development and coverage indicators. 
Further, the ecological time-series data structure assumes that the probability of dying (or U5MR) is the same for all children within a country period. This assumption is relaxed in the second data structure, although in doing so we are modeling the probability of a child dying before the fifth birthday, and not U5MR.\nIn the ecological time-series design, 81 survey periods were available for analysis, covering 35 countries, with an average of 2.3 surveys per country. For the child-level analyses, children across all surveys were pooled, and the probability of child death was examined in the 3-year period immediately preceding the survey. In total, there was information on 395,493 children born within the reference period. After making exclusions for missing data on covariates, the final analytical sample size was 393,934.", "This study uses two outcomes, corresponding to the two data designs employed. In the ecologic time-series design, the outcome is U5MR for the 3 years reference period in each survey. In the child-level design, the outcome is the probability of child death occurring within 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages (x and x+n), which are derived from life tables and denoted by n\nq\nx\n(40). The U5MR, also denoted 5\nq\n0, is formally defined as the probability a child death occurring between birth and a child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further defined as the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or 1\nq\n0), and 12–59 months (child mortality, conditional on having reached the first birthday, or 4\nq\n1) (41).\nU5MRs were calculated using the DHS synthetic cohort life table methodology (42). This approach uses age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, 36–47 months (completed ages) for the calculation of the individual probabilities of dying, without adjustment for age of death heaping at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly prior to or after 12 months of age are reported as a 1 year age of death (42). Therefore, some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, were on all under 5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age of death imputed (T Pullum, year of personal communication was 2014). The imputation procedure involved finding a range of dates within which death could have occurred, and then selecting a value randomly within that range which would likely not introduce any upwards or downwards bias. The calculation of the U5MR was based on the number of deaths to live-born children in a 3-year reference period preceding the survey. Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the product of the component survival probabilities, and expressed as a rate per 1,000 live births.\nIn the child-level design, the outcome was defined as a child death occurring within the reference period. 
This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age.", "Our key exposure of interest was coverage of MNCH interventions. Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus (DPT3) vaccine, measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, which is based on the following weighed average of the eight interventions (12):1CCI=14(FPS+SBA+ANCS2+2DPT3+MSL+BCG4+ORT+CPNM2)\n\nThe CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way summarize and compare coverage of MNCH interventions across countries and over time (12).", "At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. Analysis of pcGDP was included in regression models as the logarithm (base 10) of pcGDP. At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific and weighted linear combinations of these items were constructed with weights for each item obtained from a principal component analysis (46). The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47).", "We conducted two separate set of analyses corresponding to the two data structures described previously. 
At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. pcGDP entered the regression models as its logarithm (base 10). At the child level, we used a set of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, and area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific weighted linear combinations of these items were constructed, with the weight for each item obtained from a principal component analysis (46). The index was then standardized and, using the quintiles of its distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47).
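As a purely illustrative sketch of such an asset-based index (Python/NumPy, using randomly generated asset indicators rather than DHS data), the first principal component of the centered indicators is standardized and cut into quintiles:

    import numpy as np

    rng = np.random.default_rng(0)
    # 1,000 hypothetical households x 7 binary asset indicators
    assets = rng.integers(0, 2, size=(1000, 7)).astype(float)

    centered = assets - assets.mean(axis=0)
    # First principal component via SVD of the centered indicator matrix;
    # in practice the component's orientation would be checked against the
    # asset loadings so that higher scores mean greater wealth
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    score = centered @ vt[0]
    score = (score - score.mean()) / score.std()  # standardized index

    # Quintiles: 1 = poorest fifth, ..., 5 = richest fifth
    cuts = np.quantile(score, [0.2, 0.4, 0.6, 0.8])
    quintile = np.searchsorted(cuts, score) + 1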
We conducted two separate sets of analyses, corresponding to the two data structures described previously. For the ecological time-series data, we fit linear regression models of the form:

$$y_{ij}=\beta_0+BC_j+BS_{ij}+\beta_1\mathrm{CCI}_{ij}+e_{0ij}\tag{2}$$

where $y_{ij}$ represents the U5MR for survey time $i$ in country $j$; $\beta_0$ represents the constant, that is, the average U5MR holding CCI constant and after accounting for country differences ($BC_j$); $BC_j$ represents the country-specific dummy variables estimating differences in U5MR between countries; $BS_{ij}$ represents the effects associated with dummies for survey years; $\beta_1$ represents the change in U5MR for a unit change in CCI; and $e_{0ij}$ represents the residual at the survey-year level $i$ in country $j$.

A second series of analyses was conducted on the child-level dataset. In these analyses, the basic model is a logistic regression with a binary response ($y=1$ for a child death during the reference period, $y=0$ otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model ($BC_j$). The outcome of child mortality, $\Pr(y_{ij}=1)$, is assumed to be binomially distributed, $y_{ij}\sim\mathrm{Binomial}(1,\pi_{ij})$, with the probability $\pi_{ij}$ related to the set of independent variables $X$ through a logit link function:

$$\mathrm{logit}(\pi_{ij})=\beta_0+BC_j+BS_{ij}+\beta_1\mathrm{CCI}_{ij}+BX_{ij}\tag{3}$$

The intercept $\beta_0$ represents the log odds of child mortality for the reference group, $BS_{ij}$ is a vector of coefficients for the survey-year dummy variables, $\beta_1$ represents the change in the log odds of child mortality for a one-unit increase in CCI, and $BX_{ij}$ represents a vector of coefficients giving the change in the log odds of child mortality for a one-unit increase in each independent variable. Models were weighted, and standard errors were adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals.
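For concreteness, a minimal sketch of the two specifications in Equations (2) and (3) (Python with the statsmodels formula interface, on small synthetic data; all variable names and values are hypothetical, and the survey weights and design-adjusted standard errors described above are omitted for brevity):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Toy ecological data: 3 survey periods in each of 5 hypothetical countries
    eco = pd.DataFrame({
        "country": np.repeat(list("ABCDE"), 3),
        "survey_year": list(range(3)) * 5,
        "cci": rng.uniform(30, 80, 15),
    })
    eco["u5mr"] = 250 - 1.5 * eco["cci"] + rng.normal(0, 10, 15)

    # Equation (2): linear model with country and survey-period fixed effects
    m_eco = smf.ols("u5mr ~ cci + C(country) + C(survey_year)", data=eco).fit()
    print(m_eco.params["cci"])

    # Toy child-level data and Equation (3): logistic model for child death
    kids = pd.DataFrame({
        "country": rng.choice(list("ABCDE"), 2000),
        "survey_year": rng.choice(3, 2000),
        "cci": rng.uniform(30, 80, 2000),
    })
    p = 1 / (1 + np.exp(1.5 + 0.02 * (kids["cci"] - 55)))  # protective CCI effect
    kids["death"] = rng.binomial(1, p)
    m_child = smf.logit("death ~ cci + C(country) + C(survey_year)",
                        data=kids).fit(disp=0)

    # Exponentiate the CCI coefficient into an odds ratio with its 95% CI
    print(np.exp(m_child.params["cci"]), np.exp(m_child.conf_int().loc["cci"]))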
Between 1992 and 2012, the U5MR declined in a majority (19 of 24) of the sub-Saharan African countries where repeated DHS surveys were available, although the rate of change varied across countries (Table 1). The U5MR ranged from 62.8 deaths per 1,000 live births in Sao Tome to 305.8 per 1,000 in Niger in the initial round of surveys (median year: 1998), a five-fold difference across countries. In the most recent round of DHS surveys (median year: 2005), the U5MR ranged from 67.2 deaths per 1,000 live births in Senegal to 190.1 per 1,000 in Chad, a three-fold difference. During this period, the CCI increased in 17 countries, from an average of 53.4% (SD 14.4) among all countries in the first survey period to 58.7% (SD 12.2) among the most recent wave of 24 countries.

Table 1. Country, year of survey, sample size, % under-5 child deaths, U5MR, CCI, and log per capita GDP in 35 sub-Saharan African countries for the first and most recent survey during the period 1990–2012. Source: Authors' calculations from DHS data; percent child deaths weighted and calculated in a 3-year window preceding the start of the survey. U5MR = under-5 mortality rate; CCI = composite coverage index; log pcGDP = logarithm of per capita gross domestic product.

At both the baseline and repeated surveys, an inverse association was seen between country-level U5MR and CCI coverage, indicating lower rates of under-5 mortality in countries with greater intervention coverage (Pearson correlation −0.73 at both times, p<0.001, Fig. 2a and b). This association held when examining the average changes in U5MR and CCI over time in the subset of 24 countries with repeated surveys (Pearson correlation −0.74, p<0.001, Fig. 2c).

Fig. 2. Correlation between under-5 mortality rate (U5MR) and composite coverage index (CCI) at baseline (panel a, n=35 surveys) and repeated surveys (panel b, n=24), and correlation between the change in U5MR and the change in CCI from baseline (panel c, n=24).

At the ecological level, the regression analyses described in Equation (2) show that a standardized unit increase in CCI was associated with a reduction of 28.5 per 1,000 in U5MR (95% CI: −42.4, −14.6), after accounting for secular declines in U5MR as captured by survey period fixed effects (Table 2). The inclusion of log pcGDP in Model 2 did not substantially alter this effect (β=−29.0, 95% CI: −43.2, −14.7).

Table 2. Coefficients of two ecological models predicting U5MR across 81 survey periods in 35 sub-Saharan African countries, 1992–2012.

Table 3 shows the sample sizes and the unadjusted and adjusted risks of child mortality by covariates for the child-level analyses. In these analyses, CCI was also associated with a reduction in mortality, with an odds ratio of 0.87 (95% CI: 0.84, 0.92), indicating a protective effect against under-5 mortality independent of survey period effects (Model 1 of Fig. 3a). The inclusion of log pcGDP in this model did not alter the effect size. In a third model that included all child and maternal covariates in addition to log pcGDP, CCI remained robustly associated with a reduction in child mortality (odds ratio: 0.86, 95% CI: 0.82–0.90). However, the effect was attenuated when individual-level indicators were included for whether the mother received skilled antenatal care during pregnancy and had a skilled attendant present at birth (odds ratio: 0.97, 95% CI: 0.92–1.01, Model 4 of Fig. 3a). Without the individual-level indicators of intervention utilization, there was a graded, inverse association between country-level CCI and the probability of child mortality: children from countries in the highest quartile of CCI coverage had the lowest probability of mortality conditional on all covariates (odds ratio 0.74, 95% CI: 0.67–0.82) (Fig. 3b).

Table 3. Risks, bivariate odds ratios (OR), and multivariable adjusted odds ratios (aOR) of child mortality according to child-, maternal-, and household-level covariates across 35 sub-Saharan African countries, 1992–2012.

Fig. 3. Odds ratios (OR) for the association of CCI with child mortality in 35 sub-Saharan African countries; OR per quartile increase in composite coverage index (CCI) (left) and per SD increase in CCI from four separate models (right). *Model 1 (M1) included country and survey period fixed effects; M2 added log pcGDP to M1; M3 added maternal- and child-level covariates to M2; M4 added indicators for whether the mother received skilled antenatal care during pregnancy and a skilled attendant was present at birth. **Model includes country and survey period fixed effects, log pcGDP, and maternal- and child-level covariates; quartiles of CCI defined at the country level.

In this study, we explored the contribution of the coverage of MNCH interventions to the declines in U5MR across 35 sub-Saharan African countries from 1990 to 2012. Improvements in the coverage of MNCH interventions were strongly associated with reductions in child mortality; this association was consistent across the two data structures analyzed and regardless of the statistical specification. Analyses at the individual level, however, demonstrated that the CCI may be only a proxy for maternal/child-level utilization of health interventions, as the protective effect of national-level CCI coverage attenuated after controlling for individual-level indicators of ANC and SBA.

This study has several limitations. First, given that DHS surveys are typically conducted only at intervals of 3–6 years (38), we were able to study only large changes in U5MR, and in some countries repeated surveys were not available, prohibiting a full time-series cross-sectional analysis. Further, some baseline surveys were conducted in different time periods, especially where countries joined the DHS program late and have had only one survey. We retained these countries in the levels analyses to make full use of the available data, although our primary focus was on the change in CCI and the change in U5MR over time, to which such countries did not contribute. All of our models included a survey-year variable to adjust for the different periods in which the surveys were conducted. Second, owing to sample size restrictions in the ecological analyses, we could not model each indicator of MNCH intervention separately in a multivariable model; instead we used the CCI, a composite index of eight key interventions. Other potentially relevant determinants of U5MR were not examined in this study. Third, we analyzed U5MR over the 3-year period preceding each survey. This method balanced the precision of the U5MR estimates against the ability to reveal recent trends in U5MR by shortening the traditional 5-year reference period (48). Fourth, the coverage of maternal, newborn, and child health interventions was also calculated from each survey, using the 3-year reference period before the survey, and coverage of interventions is self-reported by survey respondents. There are some limitations to the validity of such self-reports, although some studies have described acceptable validity of maternal reports of peripartum interventions in DHS/MICS surveys (49). Further, DHS combines maternal reports with other documentation, such as health cards, to gather information on vaccination uptake. Despite these limitations, DHS and other household surveys have generally been found to have reasonable, and perhaps better, validity than data officially reported by service providers (50). A related limitation is the difficulty of establishing the timing of exposures and outcomes, as both were measured contemporaneously in the same survey. Further, although we chose logistic regression for the child-level models, a hazard model would have been an alternative; regardless of the model choice, no additional information would be gained from the independent variables, given that the indicators were calculated using a 3-year window. Fifth, the analyses did not include indicators of the incidence (or prevalence) of childhood diseases. Given the way disease prevalence is captured in DHS (i.e. any diarrhea within the 2 weeks preceding the survey), we were not confident that these measures would be comparable across countries, especially since surveys may have been conducted at different times and in different seasons. Finally, a more general limitation is that this study was based on estimates of U5MR.
Any estimate of U5MR from survey data is subject to sampling error and will always be inferior to complete vital registration data (4). Countries where U5MR remains high and/or mortality decline is slow typically lack comprehensive vital registration systems (51). Strengthening such systems is likely to improve future assessments of the factors associated with declines in U5MR in sub-Saharan African countries.

The results presented in this study indicate a secular decline in U5MR in a majority of countries in sub-Saharan Africa over the past two decades. A large part of this decline can be explained by coverage of selected maternal, newborn, and child health interventions. On average, increases in CCI correlated with decreases in U5MR; however, not all countries fit this trend. For example, in Zimbabwe, Nigeria, and Zambia, the CCI decreased between the baseline and repeated survey, and in Zambia the U5MR decreased from 195.4 to 117.1 per 1,000 even though the CCI decreased from 67.4 to 61.8%. These findings suggest that other factors not considered here may also be influencing change in U5MR. Further, the CCI is a composite measure, and a decline in CCI may reflect a decrease in one component over time even while other components increased. We were not able to assess the association of each component of the CCI with U5MR, but it is likely that some components are more strongly associated than others. For example, our analysis presented in Fig. 3 suggests that antenatal care is particularly important in reducing U5MR. It is therefore possible that increases in the coverage of some interventions but not others may result in an improvement in U5MR without a corresponding improvement in CCI. Other social improvements, such as improved access to clean drinking water and sanitation facilities, may also play an important role (29). Our analyses did not fully account for the variation in U5MR or child-level mortality, suggesting that other factors related to health systems, as well as economic, social, or political factors, play a role in influencing U5MR in sub-Saharan Africa.

It has been suggested that effective implementation of available, cost-effective MNCH interventions could prevent much of the current burden of under-5 mortality in low-income settings (52). However, many countries in sub-Saharan Africa are not on track to reach MDG-4 (7), which is likely related in part to the low levels of coverage of key interventions in many countries in the 1990s (37, 53). In the 2000s, global health initiatives and resources for health increased, and with them came improvements in the coverage of life-saving child health interventions in several countries (18, 41). We would therefore expect progress toward MDG-4 in such settings, while lagging behind other areas, to continue into 2015 and beyond (4, 7).

It appears that health system improvements, including the scaling up of key MNCH interventions, are a key explanation for reductions in U5MR in sub-Saharan Africa. For example, in Tanzania between 1999 and 2004–05, the coverage of interventions relevant to child survival improved substantially (54).
In particular, vitamin A supplementation increased from 14% in 1999 to 85% in 2005, and other improvements were also seen: the proportion of children sleeping under insecticide-treated nets increased from 10 to 29%, ORT for children increased from 57 to 70%, and exclusive breastfeeding among those younger than 2 months increased from 58 to 70% (54).

Over this same period, Tanzania's national wealth (GDP per person) increased by 93 international dollars, from $819 to $912 per person (or US$256–US$303). The proportion of households living below the poverty line, educational attainment, and literacy rates improved only marginally during this time. It is therefore unlikely that growth in national wealth accounts for much of the reduction in mortality, especially since poverty rates in Tanzania and other sub-Saharan African countries did not fall dramatically over the study period.

Based on our child-level analyses, it appears that the coverage of health interventions has played a relatively more important role in reducing child mortality than economic growth has. However, it is not clear whether these improvements are driven by supply-side increases in the national or regional availability and coverage of health services and interventions, or by increased demand and access at the individual level. Our inclusion of individual-level analogues of two components of the CCI (ANC and SBA) was sufficient to attenuate the effect of the CCI, suggesting that individual-level demand for, and access to, interventions may be the pathway through which improvements in child health are gained. It also suggests that the projected gains in child mortality reduction from scaling up coverage of the various interventions may be overstated unless increases in coverage are translated into individual-level utilization (55).

Although recent gains have been made in reducing under-5 mortality in sub-Saharan Africa, U5MR in this region remains the highest globally. While sub-Saharan Africa as a whole has reduced U5MR by 30%, this is less than half of the MDG-4 target. As the global health community weighs the strong likelihood that the MDG-4 targets will not be met by 2015 (56, 57) and looks ahead to the post-MDG era (58), it is important to sustain efforts to reduce child mortality. For sub-Saharan Africa, a continued focus on fertility declines, improved health coverage, and greater equity in the coverage of proven life-saving interventions may be the key to reducing mortality.
[ "methods", null, null, null, null, null, null, "results", "discussion" ]
[ "Africa", "child mortality", "child health", "maternal and child health interventions", "low-income countries", "trends", "socioeconomic factors" ]
Methods: Data sources. We extracted data from DHS surveys (38) conducted since 1990. DHS are household surveys that use nationally representative sampling plans, with special emphasis on fertility, child mortality, and indicators of MNCH (38). We selected standard surveys for each country that included birth histories (the 'BR' files, from which child mortality rates can be calculated) of women aged 15–49 and MNCH coverage indicators. In total, 81 surveys were included, covering 35 countries and 93% of the population of sub-Saharan Africa (39). Twenty-four surveys were conducted between 1992 and 2000, 20 between 2000 and 2004, 21 between 2005 and 2008, and 16 since 2009. Twenty-four of the 35 countries conducted at least two surveys during this period, and 22 conducted three or more.
A key substantive advantage of the second approach is the ability to account for within-country between-child factors that can influence both child mortality and the country-level economic development and coverage indicators. Further, the ecological time-series data structure assumes that the probability of dying (or U5MR) is the same for all children within a country period. This assumption is relaxed in the second data structure, although in doing so we are modeling the probability of a child dying before the fifth birthday, and not U5MR. In the ecological time-series design, 81 survey periods were available for analysis, covering 35 countries, with an average of 2.3 surveys per country. For the child-level analyses, children across all surveys were pooled, and the probability of child death was examined in the 3-year period immediately preceding the survey. In total, there was information on 395,493 children born within the reference period. After making exclusions for missing data on covariates, the final analytical sample size was 393,934. Outcomes This study uses two outcomes, corresponding to the two data designs employed. In the ecologic time-series design, the outcome is U5MR for the 3 years reference period in each survey. In the child-level design, the outcome is the probability of child death occurring within 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages (x and x+n), which are derived from life tables and denoted by n q x (40). The U5MR, also denoted 5 q 0, is formally defined as the probability a child death occurring between birth and a child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further defined as the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or 1 q 0), and 12–59 months (child mortality, conditional on having reached the first birthday, or 4 q 1) (41). U5MRs were calculated using the DHS synthetic cohort life table methodology (42). This approach uses age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, 36–47 months (completed ages) for the calculation of the individual probabilities of dying, without adjustment for age of death heaping at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly prior to or after 12 months of age are reported as a 1 year age of death (42). Therefore, some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, were on all under 5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age of death imputed (T Pullum, year of personal communication was 2014). The imputation procedure involved finding a range of dates within which death could have occurred, and then selecting a value randomly within that range which would likely not introduce any upwards or downwards bias. The calculation of the U5MR was based on the number of deaths to live-born children in a 3-year reference period preceding the survey. 
Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the product of the component survival probabilities, and expressed as a rate per 1,000 live births. In the child-level design, the outcome was defined as a child death occurring within the reference period. This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age. This study uses two outcomes, corresponding to the two data designs employed. In the ecologic time-series design, the outcome is U5MR for the 3 years reference period in each survey. In the child-level design, the outcome is the probability of child death occurring within 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages (x and x+n), which are derived from life tables and denoted by n q x (40). The U5MR, also denoted 5 q 0, is formally defined as the probability a child death occurring between birth and a child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further defined as the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or 1 q 0), and 12–59 months (child mortality, conditional on having reached the first birthday, or 4 q 1) (41). U5MRs were calculated using the DHS synthetic cohort life table methodology (42). This approach uses age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, 36–47 months (completed ages) for the calculation of the individual probabilities of dying, without adjustment for age of death heaping at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly prior to or after 12 months of age are reported as a 1 year age of death (42). Therefore, some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, were on all under 5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age of death imputed (T Pullum, year of personal communication was 2014). The imputation procedure involved finding a range of dates within which death could have occurred, and then selecting a value randomly within that range which would likely not introduce any upwards or downwards bias. The calculation of the U5MR was based on the number of deaths to live-born children in a 3-year reference period preceding the survey. Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the product of the component survival probabilities, and expressed as a rate per 1,000 live births. In the child-level design, the outcome was defined as a child death occurring within the reference period. This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age. Exposure Our key exposure of interest was coverage of MNCH interventions. 
Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus (DPT3) vaccine, measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, which is based on the following weighed average of the eight interventions (12):1CCI=14(FPS+SBA+ANCS2+2DPT3+MSL+BCG4+ORT+CPNM2) The CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way summarize and compare coverage of MNCH interventions across countries and over time (12). Our key exposure of interest was coverage of MNCH interventions. Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus (DPT3) vaccine, measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, which is based on the following weighed average of the eight interventions (12):1CCI=14(FPS+SBA+ANCS2+2DPT3+MSL+BCG4+ORT+CPNM2) The CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way summarize and compare coverage of MNCH interventions across countries and over time (12). Covariates At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. Analysis of pcGDP was included in regression models as the logarithm (base 10) of pcGDP. At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific and weighted linear combinations of these items were constructed with weights for each item obtained from a principal component analysis (46). 
The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47). At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. Analysis of pcGDP was included in regression models as the logarithm (base 10) of pcGDP. At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific and weighted linear combinations of these items were constructed with weights for each item obtained from a principal component analysis (46). The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47). Statistical analyses We conducted two separate set of analyses corresponding to the two data structures described previously. For the ecological time-series data, we fit linear regression models of the form:2yij=β0+BCj+BSij+β1CCIij+e0ij where yij represents the U5MR for survey time i in country j; β0 represents the constant or the average U5MR holding CCI constant, and after accounting for country differences (BCj);BCj represents the country-specific dummy variables estimating differences in U5MR between countries; BSij represents the effects associated with dummies for survey years; β1CCIij represents the change in U5MR for a unit change in CCI; and e0ij represents the residuals at the survey-year level i in country j. A second series of analyses were conducted child-level dataset. In these analyses, the basic model is a logistic regression model with a binary response (y=1 for child death during the reference period, y=0 otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model (BCj). The outcome of child mortality, Pr(y ij=1), is assumed to be binomially distributed yij~Binomial (1,πij) with probability πij related to the set of independent variables X and a random effect for each level by a logit link function:3Logit(πij)=β0+BCj+BSij+β1CCIij+BXij The intercept, β0, represents the log odds of child mortality for the reference group, BSij is a vector of coefficients for dummy variables for survey years, β1CCIij represents the log odds of child mortality for a one-unit increase in CCI, and the BX represents a vector of coefficients for the log odds of child mortality for a one-unit increase for each independent variable. 
Models were weighted and standard errors adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals. We conducted two separate set of analyses corresponding to the two data structures described previously. For the ecological time-series data, we fit linear regression models of the form:2yij=β0+BCj+BSij+β1CCIij+e0ij where yij represents the U5MR for survey time i in country j; β0 represents the constant or the average U5MR holding CCI constant, and after accounting for country differences (BCj);BCj represents the country-specific dummy variables estimating differences in U5MR between countries; BSij represents the effects associated with dummies for survey years; β1CCIij represents the change in U5MR for a unit change in CCI; and e0ij represents the residuals at the survey-year level i in country j. A second series of analyses were conducted child-level dataset. In these analyses, the basic model is a logistic regression model with a binary response (y=1 for child death during the reference period, y=0 otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model (BCj). The outcome of child mortality, Pr(y ij=1), is assumed to be binomially distributed yij~Binomial (1,πij) with probability πij related to the set of independent variables X and a random effect for each level by a logit link function:3Logit(πij)=β0+BCj+BSij+β1CCIij+BXij The intercept, β0, represents the log odds of child mortality for the reference group, BSij is a vector of coefficients for dummy variables for survey years, β1CCIij represents the log odds of child mortality for a one-unit increase in CCI, and the BX represents a vector of coefficients for the log odds of child mortality for a one-unit increase for each independent variable. Models were weighted and standard errors adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals. Data sources: We extracted data from DHS surveys (38) conducted since 1990. DHS are household surveys that use nationally representative sampling plans and have special emphasis on fertility, child mortality, and indicators of MNCH (38). We selected standard surveys for each country that included birth histories (‘BR’ files from which child mortality rates could be calculated) of women aged 15–49 and MNCH coverage indicators. In total, 81 surveys were included, covering 35 countries and 93% of the population of sub-Saharan Africa (39). Twenty-four surveys were conducted between 1992 and 2000, 20 between 2000 and 2004, 21 between 2005 and 2008, and 16 since 2009. Twenty-four of the 35 countries conducted at least two surveys during this period, and 22 conducted three or more. Study population, data designs, and sample sizes: The study population was structured as two distinct data designs. First, we examined the study population as an ecological time-series design with countries repeatedly observed over time. In this design, the lowest level of analysis was the survey period, nested within countries as a hierarchical structure. Second, we used a repeated cross-sectional design, with children at the lowest unit of analysis. 
A key substantive advantage of the second approach is the ability to account for within-country between-child factors that can influence both child mortality and the country-level economic development and coverage indicators. Further, the ecological time-series data structure assumes that the probability of dying (or U5MR) is the same for all children within a country period. This assumption is relaxed in the second data structure, although in doing so we are modeling the probability of a child dying before the fifth birthday, and not U5MR. In the ecological time-series design, 81 survey periods were available for analysis, covering 35 countries, with an average of 2.3 surveys per country. For the child-level analyses, children across all surveys were pooled, and the probability of child death was examined in the 3-year period immediately preceding the survey. In total, there was information on 395,493 children born within the reference period. After making exclusions for missing data on covariates, the final analytical sample size was 393,934. Outcomes: This study uses two outcomes, corresponding to the two data designs employed. In the ecologic time-series design, the outcome is U5MR for the 3 years reference period in each survey. In the child-level design, the outcome is the probability of child death occurring within 3 years prior to the survey. At an aggregate level, child mortality is typically expressed as probabilities of dying between exact ages (x and x+n), which are derived from life tables and denoted by n q x (40). The U5MR, also denoted 5 q 0, is formally defined as the probability a child death occurring between birth and a child's fifth birthday, expressed as deaths per 1,000 live births (7, 40). U5MR is a composite measure of mortality occurring during the first 5 years, which can be further defined as the probability of dying within 1 month (neonatal mortality), 0–11 months (infant mortality, including neonatal deaths, or 1 q 0), and 12–59 months (child mortality, conditional on having reached the first birthday, or 4 q 1) (41). U5MRs were calculated using the DHS synthetic cohort life table methodology (42). This approach uses age segments 0, 1–2, 3–5, 6–11, 12–23, 24–35, 36–47 months (completed ages) for the calculation of the individual probabilities of dying, without adjustment for age of death heaping at 12 months. Such heaping may occur during fieldwork when deaths occurring slightly prior to or after 12 months of age are reported as a 1 year age of death (42). Therefore, some deaths that are actually infant deaths are shifted up to age 1. The analyses in this paper, however, were on all under 5 deaths, and any heaping would have little influence on the results. Imputation procedures were used for children with missing ages at death. On average, only small numbers of children in the DHS (about 1 in 1,000) reported to have died were not given an age at death and had the age of death imputed (T Pullum, year of personal communication was 2014). The imputation procedure involved finding a range of dates within which death could have occurred, and then selecting a value randomly within that range which would likely not introduce any upwards or downwards bias. The calculation of the U5MR was based on the number of deaths to live-born children in a 3-year reference period preceding the survey. 
Death probabilities were calculated for each of the age segments defined above and then combined into the mortality rate as the product of the component survival probabilities, and expressed as a rate per 1,000 live births. In the child-level design, the outcome was defined as a child death occurring within the reference period. This was expressed as a binary outcome: 1 for a death occurring in the child's first 5 years; 0 for survival through 37 months of age. Exposure: Our key exposure of interest was coverage of MNCH interventions. Based on prior literature, we selected eight established interventions that have sufficient evidence of an effect on reducing child mortality from the major causes of under-5 deaths and can be summarized as a composite index for comparability between countries and within countries over time (12, 22–37, 43). The interventions included were: FPS, skilled birth attendant at delivery (SBA), at least one antenatal care visit with a skilled provider (ANCS), three doses of diphtheria-pertussis-tetanus (DPT3) vaccine, measles vaccination (MSL), BCG (tuberculosis) vaccination (BCG), ORT for children with diarrhea, and CPNM. The coverage of these interventions at a country level was summarized using the CCI, which is based on the following weighed average of the eight interventions (12):1CCI=14(FPS+SBA+ANCS2+2DPT3+MSL+BCG4+ORT+CPNM2) The CCI gives equal weight to family planning, maternal and newborn care, immunization, and case management of sick children, and has been proposed as an effective way summarize and compare coverage of MNCH interventions across countries and over time (12). Covariates: At the country level, per capita gross domestic product (pcGDP) was used as the primary measure of a country's economic growth and development. These data were obtained from the Penn World Tables (44) and were lagged 2 years from the date at which the survey began. Analysis of pcGDP was included in regression models as the logarithm (base 10) of pcGDP. At the child level, we used a variety of theoretically important maternal and child characteristics as covariates (45). Age, sex, multiple/single birth, birth order, and preceding birth interval were included as child characteristics; age of the mother at birth, maternal education, household wealth quintile, area of residence were included as maternal/household-level characteristics. Household wealth was defined according to an index developed from indicators of household asset ownership and housing characteristics (e.g. whether the household had a flush toilet, refrigerator, car, moped/motorcycle, television, washing machine, or telephone). Country-specific and weighted linear combinations of these items were constructed with weights for each item obtained from a principal component analysis (46). The index was then standardized, and using the quintiles of this distribution, the survey population in each country was divided into fifths from poorest to richest. Similar measures have been developed in India and other settings and have been shown to be a consistent proxy for household income and expenditure (47). Statistical analyses: We conducted two separate set of analyses corresponding to the two data structures described previously. 
For the ecological time-series data, we fit linear regression models of the form:2yij=β0+BCj+BSij+β1CCIij+e0ij where yij represents the U5MR for survey time i in country j; β0 represents the constant or the average U5MR holding CCI constant, and after accounting for country differences (BCj);BCj represents the country-specific dummy variables estimating differences in U5MR between countries; BSij represents the effects associated with dummies for survey years; β1CCIij represents the change in U5MR for a unit change in CCI; and e0ij represents the residuals at the survey-year level i in country j. A second series of analyses were conducted child-level dataset. In these analyses, the basic model is a logistic regression model with a binary response (y=1 for child death during the reference period, y=0 otherwise). Countries are treated as fixed effects using country indicator variables in the fixed part of the model (BCj). The outcome of child mortality, Pr(y ij=1), is assumed to be binomially distributed yij~Binomial (1,πij) with probability πij related to the set of independent variables X and a random effect for each level by a logit link function:3Logit(πij)=β0+BCj+BSij+β1CCIij+BXij The intercept, β0, represents the log odds of child mortality for the reference group, BSij is a vector of coefficients for dummy variables for survey years, β1CCIij represents the log odds of child mortality for a one-unit increase in CCI, and the BX represents a vector of coefficients for the log odds of child mortality for a one-unit increase for each independent variable. Models were weighted and standard errors adjusted for the complex multistage sampling design of the surveys. Coefficients were exponentiated and presented as odds ratios with 95% confidence intervals. Results: Between 1992 and 2012, the U5MR declined in a majority (19 of 24) of sub-Saharan African countries where repeated DHS surveys were available, although the rate of change varied across countries (Table 1). The U5MR varied between 62.8 deaths per 1,000 live births in Sao Tome to 305.8 per 1,000 in Niger in the initial round of surveys (median year: 1998), corresponding to a five-fold difference across countries. In the most recent round of DHS surveys (median year: 2005), the U5MR ranged from 67.2 deaths per 1,000 live births in Senegal to 190.1 per 1,000 in Chad, indicating a three-fold difference across countries. During this period, the CCI increased in 17 countries from an average of 53.4% (SD 14.4) among all countries in the first survey period to 58.7% (SD 12.2) among the most recent wave in 24 countries. Country, year of survey, sample size, % under-5 child deaths, U5MR, CCI, and log per capita GDP in 35 sub-Saharan African countries for the first and most recent survey during the period 1990–2012 Source: Authors’ calculations from DHS data; percent child deaths weighted and calculated in a 3-year window preceding the start of the survey. U5MR=under-5 mortality rate; CCI=composite coverage index; log pcGDP=logarithm of per capita gross domestic product. At both the baseline and repeated surveys, an inverse association was seen between country-level U5MR and CCI coverage, indicating lower rates of under-5 mortality in countries with greater coverage of intervention (Pearson correlation −0.73 at both times, p<0.001, Fig. 2a and b). This association held when examining the average changes in U5MR and CCI over time in a subset of 24 countries with repeated surveys (Pearson correlation −0.74, p<0.001, Fig. 2c). 
Correlation between under-5 mortality rate (U5MR) and composite coverage index (CCI) at baseline (panel a, n=35 surveys) and repeated surveys (panel b, n=24) and correlation between the change in U5MR and change in CCI from baseline (panel c, n=24). At an ecologic level, the regression analyses described in Equation 2 show that a standardized unit increase in CCI was associated with a reduction of 28.5 per 1,000 in U5MR (95% CI: −42.4, −14.6), after accounting for secular declines in U5MR as captured by survey period fixed effects (Table 2). The inclusion of log pcGDP to Model 2 did not substantial alter this effect (β=−29.0, 95% CI: −43.2, −14.7). Risks, bivariate odds ratios (OR), and multivariable adjusted odds ratios (aOR) of child mortality according to child-, maternal-, and household-level covariates across 35 sub-Saharan African countries, 1992–2012 Table 3 shows the sample sizes, unadjusted and adjusted risks of child mortality by covariates for the child-level analyses. In these analyses, CCI was also associated with a reduction in mortality, with an odds ratio of 0.87 (95% CI: 0.84, 0.92) indicating a protective effect against under-5 mortality independent of survey period effects (Model 1 of Fig. 3a). The inclusion of log pcGDP to this model did not alter the effect size. In a third model that included all child and maternal covariates in addition to log pcGDP, CCI remained robustly associated with a reduction in child mortality (odds ratio: 0.86, 95% CI: 0.82–0.90). However, the effect became attenuated when individual-level indicators were included for whether the mother received skilled antenatal care during pregnancy and had the presence of a skilled attendant at birth (odds ratio: 0.97, 95% CI: 0.92–1.01, Model 4 of Fig. 3a). Without considering the individual-level indicators of intervention utilization, there was a graded and inverse association between CCI at the country level and probability of child mortality. Children from countries in the highest quartile of CCI coverage had the lowest probability of mortality conditional on all covariates (odds ratio 0.74, 95% CI: 0.67–0.82) (Fig. 3b). Odds ratios (OR) for the association of CCI with child mortality in 35 sub-Saharan African countries; OR per quartile increase in composite coverage index (CCI) (left) and per SD increase in CCI from 4 separate models (right). *Model 1 (M1) included country and survey period fixed effects; M2 added log pcGDP to M1; M3 added maternal- and child-level covariates to M2; M4 added indicators for whether mother received skilled antenatal care during pregnancy and presence of skilled attendant at birth. **Model includes country and survey period fixed effects, log pcGDP, and maternal- and child-level covariates; quartiles of CCI defined at the country level. Coefficients of two ecological models predicting U5MR across 81 survey periods in 35 sub-Saharan African countries, 1992–2012 Discussion: In this study, we explored the contribution of coverage of MNCH interventions to the declines in U5MR across 35 sub-Saharan African countries from 1990 to 2012. Improvements in MNCH coverage and interventions were strongly associated with reductions in child mortality; this association was universally consistent across the two types of data structures analyzed and regardless of the statistical specification. 
Analyses at the individual level, however, demonstrated that the CCI may only be a proxy for maternal/child-level utilization of health interventions as the protective effect of national-level CCI coverage attenuated after controlling for indicators of ANC and SBA. This study has several limitations. First, given that DHS surveys are typically conducted only at intervals of 3–6 years (38), we were only able to study large changes in U5MR, and in some countries repeated surveys were not available, prohibiting a full time-series cross-sectional analysis. Further, some baseline surveys were conducted at different time periods especially if countries were only involved later in the DHS program and thus have had only one survey conducted. We decided to retain these countries in the levels analyses in order to make the full use of available data although our primary focus was on change in CCI and change in U5MR over time and such countries did not contribute to the change analyses. Finally, all of our models accounted for a survey-year variable to adjust for the different periods in which the surveys were conducted. Second, due to sample size restrictions in the ecological analyses, we could not model each indicator of MCNH interventions separately in a multivariable model, and instead we chose the CCI which is a composite index of eight different key interventions. Other potentially relevant determinants of U5MR were not examined in this study. Third, we analyzed U5MR over the 3-year period preceding each survey. This method provided a balance between increasing precision in the estimates of U5MR but also allowing for some information on recent trends in U5MR to be revealed by shortening the traditional 5-year reference period (48). Fourth, the coverage of maternal, newborn, and child health interventions were also calculated from each of the surveys, using the 3-year reference period before each survey and coverage of interventions are self-reported by survey respondents. There are some limitations to the validity of self-reports of uptake of these interventions. Some studies have described acceptable validity of maternal reports for peripartum interventions in DHS/MICS surveys (49). Further, DHS uses a combination of maternal reports with other documentation, for example, the use of health cards to gather information on vaccination uptake. Despite these limitations, DHS and other household surveys have generally been found to have reasonable and perhaps better validity than officially reported data by service providers (50). A related limitation is that there remains some difficulty in establishing the timing of exposures and outcomes, as both were measured contemporaneously in the same survey. Further, although we chose a logistic regression analysis for the child-level models, a hazard model would have been another alternative. Regardless of model choice, there would be no additional information gained from the independent variables, given that the indicators were calculated using a 3-year window. Fifth, the analyses did not include indicators of the incidence (or prevalence) of childhood diseases. Given the method in which the prevalence of diseases is captured in DHS (i.e. any diarrhea within 2 weeks preceding the survey), we were not confident that these would be comparable across countries, especially since surveys may have been conducted at different times and in different seasons. Finally, a more general limitation is that this study was based on estimates of U5MR. 
Any estimate of U5MR from survey data is subjected to sampling errors and will always be inferior to complete vital registration data (4). Countries where U5MR remains high and/or rates of mortality decline are slow typically lack comprehensive vital registration systems (51). Strengthening such systems is likely to improve future assessments of factors associated with declines in U5MR in sub-Saharan African countries. The results presented in this study indicate a secular decline in U5MR in a majority of countries in sub-Saharan Africa over the past two decades. A large part of this decline can be explained by coverage of selected maternal, newborn, and child health interventions. On average, the increases in CCI correlated with decreases in U5MR; however, all countries did not fit this trend. For example, in Zimbabwe, Nigeria and Zambia, the CCI decreased between the baseline and repeated survey and in Zambia the U5MR decreased from 195.4 to 117.1 even though the CCI decreased from 67.4 to 61.8%. These findings suggest that other factors not considered here may also be influencing change in U5MR. Further, the CCI is a composite measure, and a decline in CCI may reflect that one of the components decreased over time while other components may have increased. We were not able to assess the association of each component of the CCI with U5MR, but it is likely that some components are more strongly associated than others. For example, our analysis presented in Fig. 3 suggests that antenatal care is particularly important in reducing U5MR. It is therefore possible that increases in coverage of some interventions but not other may result in an improvement in U5MR without a corresponding improvement in CCI. Other social improvements, such as improved access to clean drinking water and sanitation facilities, may also have an important role (29). Our analyses did not fully account for the variation in U5MR or child-level mortality, suggesting that other factors related to health systems as well as economic, social, or political factors play a role in influencing U5MR in sub-Saharan Africa. It has been suggested that effective implementation of available, cost-effective MNCH interventions can prevent much of the current burden of under-5 mortality in low-income settings (52). However, many countries in sub-Saharan Africa are not on track to reach MDG-4 (7), which is likely related in part to the low levels of coverage of key interventions in the 1990s in many countries (37, 53). In the 2000s, global health initiatives and resources for health increased, and along with such increases came improvements in coverage of life-saving child health interventions in several countries (18, 41). We would therefore expect that progress toward MDG-4 in such settings, while lagging behind other areas, might likely continue into 2015 and beyond (4, 7). It appears that health system improvements, including scaling up of key MNCH interventions, are a key explanation for reductions in U5MR in sub-Saharan Africa. For example, in Tanzania between 1999 and 2004–05, the coverage of interventions relevant to child survival improved substantially (54). In particular, vitamin A supplementation increased from 14% in 1999 to 85% in 2005, and other improvements also were seen: children sleeping under insecticide-treated nets increased from 10 to 29%, ORT for children increased from 57 to 70%, and exclusive breastfeeding for those younger than age 2 months increased from 58 to 70% (54). 
Over this same period, Tanzania's national wealth (GDP per person) increased by 93 international dollars, from $819 to $912 per person (or US$256 to US$303). The proportion of households living below the poverty line, educational attainment, and literacy rates improved only marginally during this time. It is therefore unlikely that growth in national wealth would account for much of the reduction in mortality, especially since poverty rates in Tanzania and other sub-Saharan African countries did not fall dramatically over the study period. Based on our child-level analyses, it appears that the coverage of health interventions has played a relatively more important role in reducing child mortality than economic growth. However, it is not clear whether these improvements are being driven by supply-side increases in the national or regional availability and coverage of health services and interventions or by increased demand and access at the individual level. Our inclusion of individual-level analogues of two components of the CCI (ANC and SBA) was sufficient to attenuate the effect of the CCI, suggesting that individual-level demand for, and access to, interventions may be the pathway through which improvements in child health are gained. It also suggests that the projected gains in child mortality reduction from scaling up coverage of the various interventions may be overstated unless increases in coverage are appropriately translated into individual-level utilization (55). Although recent gains have been made in reducing under-5 mortality in sub-Saharan Africa, U5MR in this region remains the highest globally. While sub-Saharan Africa as a whole has reduced U5MR by 30%, this is less than half of the MDG-4 target. As the global health community considers the strong likelihood that the MDG-4 targets will not be met by 2015 (56, 57) and looks ahead to the post-MDG era (58), it is important to sustain efforts to reduce child mortality. For sub-Saharan Africa, a continued focus on fertility declines, improved health coverage, and greater equity in the coverage of proven life-saving interventions might be the key to reducing mortality.
Background: Infant and child mortality rates are among the most important indicators of child health, nutrition, implementation of key survival interventions, and the overall social and economic development of a population. In this paper, we investigate the role of coverage of maternal, newborn, and child health (MNCH) interventions in contributing to declines in child mortality in sub-Saharan Africa. Methods: Data are from 81 Demographic and Health Surveys from 35 sub-Saharan African countries. Using ecological time-series and child-level regression models, we estimated the effect of MNCH interventions (summarized by the percent composite coverage index, or CCI) on child mortality within the first 5 years of life, net of temporal trends and covariates at the household, maternal, and child levels. Results: At the ecological level, a unit increase in standardized CCI was associated with a reduction in the under-5 child mortality rate (U5MR) of 29.0 per 1,000 (95% CI: -43.2, -14.7) after adjustment for survey period effects and country-level per capita gross domestic product (pcGDP). At the child level, a unit increase in standardized CCI was associated with an odds ratio of 0.86 for child mortality (95% CI: 0.82-0.90) after adjustment for survey period effects, country-level pcGDP, and a set of household-, maternal-, and child-level covariates. Conclusions: MNCH interventions are important in reducing U5MR, while the effects of economic growth in sub-Saharan Africa remain weak and inconsistent. Improved coverage of proven life-saving interventions will likely contribute to further reductions in U5MR in sub-Saharan Africa.
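The two model families summarized in this abstract (ecological linear models of U5MR and child-level logistic models of death before age 5) could be sketched as follows. This is a minimal illustration with statsmodels on simulated data; the column names, covariate set, and data-generating values are assumptions for illustration only, and a real analysis would also need to handle survey weights and clustering.

```python
# Minimal sketch of the two analyses described in the abstract, run on
# synthetic data standing in for the DHS-derived datasets.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Ecological model: 81 country-survey observations of U5MR (per 1,000)
# regressed on standardized CCI, adjusting for log pcGDP and survey period.
n = 81
df_eco = pd.DataFrame({
    "cci_std": rng.normal(size=n),
    "log_pcgdp": rng.normal(7, 0.5, size=n),
    "survey_year": rng.choice([1995, 2000, 2005, 2010], size=n),
})
df_eco["u5mr"] = 150 - 29 * df_eco["cci_std"] + rng.normal(0, 20, size=n)
eco_fit = smf.ols("u5mr ~ cci_std + log_pcgdp + C(survey_year)", data=df_eco).fit()
print(eco_fit.params["cci_std"])  # recovers roughly -29 per 1,000 per SD of CCI

# Child-level model: logistic regression of death before age 5 on
# standardized CCI plus illustrative covariates.
m = 5000
df_child = pd.DataFrame({
    "cci_std": rng.normal(size=m),
    "log_pcgdp": rng.normal(7, 0.5, size=m),
    "survey_year": rng.choice([1995, 2000, 2005, 2010], size=m),
    "maternal_edu": rng.integers(0, 3, size=m),
})
logit_p = -2.0 + np.log(0.86) * df_child["cci_std"]
df_child["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
child_fit = smf.logit(
    "died ~ cci_std + log_pcgdp + C(survey_year) + C(maternal_edu)",
    data=df_child).fit(disp=0)
print(np.exp(child_fit.params["cci_std"]))  # odds ratio near 0.86
```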
null
null
8,215
314
[ 152, 269, 567, 211, 272, 336 ]
9
[ "child", "u5mr", "mortality", "level", "countries", "survey", "country", "cci", "surveys", "period" ]
[ "countries survey period", "child mortality country", "africa 39 surveys", "mortality indicators mnch", "coverage selected maternal" ]
null
null
null
null
[CONTENT] Africa | child mortality | child health | maternal and child health interventions | low-income countries | trends | socioeconomic factors [SUMMARY]
[CONTENT] Africa | child mortality | child health | maternal and child health interventions | low-income countries | trends | socioeconomic factors [SUMMARY]
null
[CONTENT] Africa | child mortality | child health | maternal and child health interventions | low-income countries | trends | socioeconomic factors [SUMMARY]
null
null
[CONTENT] Africa South of the Sahara | Child Mortality | Child Welfare | Child, Preschool | Cross-Sectional Studies | Developing Countries | Diet | Female | Health Policy | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Maternal Health Services | Social Determinants of Health | Socioeconomic Factors [SUMMARY]
[CONTENT] Africa South of the Sahara | Child Mortality | Child Welfare | Child, Preschool | Cross-Sectional Studies | Developing Countries | Diet | Female | Health Policy | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Maternal Health Services | Social Determinants of Health | Socioeconomic Factors [SUMMARY]
null
[CONTENT] Africa South of the Sahara | Child Mortality | Child Welfare | Child, Preschool | Cross-Sectional Studies | Developing Countries | Diet | Female | Health Policy | Health Services Accessibility | Humans | Infant | Infant, Newborn | Male | Maternal Health Services | Social Determinants of Health | Socioeconomic Factors [SUMMARY]
null
null
[CONTENT] countries survey period | child mortality country | africa 39 surveys | mortality indicators mnch | coverage selected maternal [SUMMARY]
[CONTENT] countries survey period | child mortality country | africa 39 surveys | mortality indicators mnch | coverage selected maternal [SUMMARY]
null
[CONTENT] countries survey period | child mortality country | africa 39 surveys | mortality indicators mnch | coverage selected maternal [SUMMARY]
null
null
[CONTENT] child | u5mr | mortality | level | countries | survey | country | cci | surveys | period [SUMMARY]
[CONTENT] child | u5mr | mortality | level | countries | survey | country | cci | surveys | period [SUMMARY]
null
[CONTENT] child | u5mr | mortality | level | countries | survey | country | cci | surveys | period [SUMMARY]
null
null
[CONTENT] child | death | represents | age | country | mortality | survey | level | u5mr | design [SUMMARY]
[CONTENT] cci | countries | u5mr | ci | log pcgdp | 95 ci | odds | log | mortality | child [SUMMARY]
null
[CONTENT] child | u5mr | countries | interventions | country | mortality | surveys | level | survey | cci [SUMMARY]
null
null
[CONTENT] 81 | Demographic and Health Surveys | 35 | African ||| MNCH | CCI | the first 5 years [SUMMARY]
[CONTENT] CCI | U5MR | 29.0 | 1,000 | 95% | CI ||| CCI | 0.86 | 95% | CI | 0.82 [SUMMARY]
null
[CONTENT] ||| Africa ||| 81 | Demographic and Health Surveys | 35 | African ||| MNCH | CCI | the first 5 years ||| ||| CCI | U5MR | 29.0 | 1,000 | 95% | CI ||| CCI | 0.86 | 95% | CI | 0.82 ||| U5MR | Africa ||| U5MR | Africa [SUMMARY]
null
Prevalence of positive mental health and functioning among adults with sickle cell disease in Ghana.
33883773
Self-funded.
FUNDING
A quantitative cross-sectional survey design was implemented for data-gathering. A random sample of 62 adult SCD patients (21 to 56 years; mean age of 29 years) receiving treatment at the Sickle Cell Clinic of the Ghana Institute of Clinical Genetics at the Korle-Bu Teaching Hospital completed the Mental Health Continuum-Short Form (MHC-SF). Descriptive statistics and reliability indices were estimated for the MHC-SF. We implemented Keyes's criteria for the assessment and categorisation of levels of mental health to determine the prevalence of positive mental health and functioning.
METHODS
We found a high level of positive mental health (66% flourishing; 26% moderately mentally healthy; 8% languishing) and functioning, with no significant difference between the genders. A total of 34% of the participants were functioning at suboptimal levels and were at risk of psychopathology.
RESULTS
This study gives the first overview of the prevalence of positive mental health and functioning in a clinical population in Ghana. Although the majority of participants were flourishing, contextually appropriate positive psychological interventions are needed to promote the mental health of SCD patients who are functioning at suboptimal levels, which would, inherently, also buffer against psychopathology.
CONCLUSION
[ "Adolescent", "Adult", "Anemia, Sickle Cell", "Cross-Sectional Studies", "Female", "Ghana", "Humans", "Longitudinal Studies", "Male", "Mental Health", "Middle Aged", "Prevalence", "Quality of Life", "Reproducibility of Results", "Young Adult" ]
8042807
Introduction
Data from global epidemiological surveys show that sickle cell disease (SCD) remains the commonest genetic blood disorder, affecting about 20–25 million people1,2, with a further 400,000 children born annually with the disease1. SCD is prevalent among people of African and Hispanic descent, with approximately 80% of all SCD births occurring in West and Central Africa.1,2 Regional statistics suggest that about 25% of Africans are carriers of the abnormal hemoglobin gene.3 In Nigeria, for instance, about 2 to 3% of the population is affected with SCD and a projected 150,000 children are born each year with the condition.4 In Ghana, an estimated 2% of neonates are born with SCD annually.3 SCD is a hemoglobinopathy caused by a β-globin gene (βs) mutation at position 6. The mutation results in an abnormal hemoglobin (Hb) with altered physical properties (sickle hemoglobin).5,6 SCD is categorised into four major genotypes: sickle cell β0 thalassemia (Sβ0 thalassemia), sickle cell β+ thalassemia (Sβ+ thalassemia), sickle haemoglobin C (SC) disease, and homozygous sickle cell (SS) disease.6,7 Each of these subtypes presents with unique symptoms of varying severity. In particular, haemoglobin SS, where an individual inherits a sickle S gene from both parents, presents with the most severe clinical symptoms.8 The most reported genotypes in Ghana include the ‘SS’, ‘SC’, ‘SD’, and Sβ thalassemia.9 SCD differs from sickle cell trait (SCT), or carriage of the sickle cell gene. People who have SCT do not have the disease but inherit a normal haemoglobin (A) from one parent and an abnormal haemoglobin (SC) from the other parent. Carriers of the sickle cell gene do not often exhibit signs and symptoms of the disease but are capable of passing the gene to their children. When there is reduced oxygen tension, red blood cells (RBCs) with haemoglobin S (HbS) tend to sickle. This is because HbS becomes less soluble in a low-oxygen environment and forms tactoids and microfibrils along the long axis of the RBC. Sickled RBCs lose their deformability and thus occlude tiny capillaries. During this process, some of the RBCs are broken down, or lysed. As a result of this occlusion, the distal organs are deprived of oxygen, resulting in complications such as infarction, anaemia, priapism, splenomegaly, and dactylitis.10 SCD is associated with poor quality of life, depression, anxiety, and low levels of mental health functioning.11,12 Although there is limited data on SCD in Ghana at the national level, several studies report on the prevalence of SCD across various health facilities. A retrospective review of the medical records of all SCD patients (aged 13 and above) at the Ghana Institute of Clinical Genetics, Korle-Bu, from January 2013 to December 2014 shows that a total of 5,451 SCD patients accessed healthcare services at the facility, with 20,788 clinic visits.13 Another study estimates that about 2% of all newborns are diagnosed with SCD, and at least 25% of the Ghanaian populace carry the sickle cell gene.3 Keyes hypothesised a tripartite model of positive mental health that describes three dimensions of mental health, namely emotional (EWB), psychological (PWB), and social (SWB) well-being.15 The EWB dimension describes the presence of positive affect and satisfaction with life, while the PWB dimension describes an individual's intrapersonal and close interpersonal functioning.
Keyes conceptualised the SWB to refer to a sense of welfare and happiness and the experience of an individual's well-being in society, as well as satisfaction with their social structure and function.14,15 Keyes suggested that complete mental health functioning comprises a combination of all three dimensions: emotional, psychological, and social well-being. For its measurement, a high level of positive mental health, called flourishing, requires a combination of a high level of subjective well-being and an optimal level of psychological and social functioning. Likewise, Keyes termed an individual's experience of low levels of positive mental health and psychological and social functioning as languishing. Individuals who function between these two ends of the continuum, that is, neither flourishing nor languishing, are considered to be experiencing moderate mental health. Keyes' tripartite model, together with empirical evidence from Ryff's14 model of psychological well-being, formed the conceptual foundation that directed the development of the 40-item self-administered Mental Health Continuum Long-Form questionnaire.15,21 A shorter 14-item version, the Mental Health Continuum Short-Form (MHC-SF)21, was later constructed to assess the three dimensions of well-being (emotional, psychological, and social well-being) and to provide a categorical diagnosis of positive mental health. According to Keyes15,21, positive mental health and mental illness exist on two continua, where positive mental health correlates with, but is distinct from, mental illness. In this respect, an individual can present with symptoms of mental illness, such as a depressive episode, and simultaneously exhibit a high level of positive mental health (i.e., flourishing). A wealth of research demonstrates that flourishing states are associated with a range of positive outcomes, including good health, higher life expectancy, and satisfaction with life.17 The measurement of the prevalence of positive mental health remains a challenging endeavour, partly due to varied conceptualisations and operational definitions of mental well-being. However, in recent efforts, scholars have increasingly utilised the MHC-SF to assess positive mental health across various population groups.19 There is evidence suggesting that mental health is understood and experienced differently across cultures and religions.18 For instance, researchers found significant differences in the expression of well-being between collectivist East Asian and individualistic Western samples.20 Given the developmental challenges and disease-related complications associated with SCD, survivors require individually targeted biopsychosocial interventions to be able to lead quality, meaningful lives.23 In most cases, SCD patients endure a lifelong treatment regimen that can be expensive, complex, and multifaceted, with a high risk for poor mental health functioning, that is, for their ability to reach their optimal human potential and lead a meaningful life.24 Assessment of positive mental health remains an important approach for evaluating the prognosis and outcomes of chronically ill patients.25,26 Prevalence of positive mental health Scholarly interest in the use of the MHC-SF to assess the prevalence of positive mental health and functioning has increased exponentially in the last decade.
Researchers have successfully administered the MHC-SF to adults from South Africa27, Poland28, Canada22, Italy29, China30, the United States21 and Australia31, and to adolescents from Egypt32, India33 and South Korea34 to explore the prevalence of positive mental health and its correlates. Prevalence rates vary across the globe depending on the context and the sample involved. For instance, researchers found that only 11.7% of a sample of South Korean adolescents were flourishing, compared to 23.5% of Egyptian adolescents, 46.4% of Indian adolescents, and 76.9% of Canadian adolescents and adults.43 In Ghana, there are limited studies that explore the concept of positive mental health and its categorical prevalence levels. Largely, the prevalence of mental health has been conceptualised and measured in terms of mental disorders. To our knowledge, only Appiah and colleagues36 have measured the effect of a community-based positive psychology intervention in promoting the (positive) mental health of a rural adult sample using the MHC-SF. The researchers reported a flourishing rate of 25% at pre-test, 57.5% immediately after the 10-week intervention, and 77.5% three months post-intervention for programme participants. The majority of studies employed proxies in their measurement of subjective well-being, measuring individuals' standards of living rather than how they feel about their overall life.50 As far as could be established, little is presently known about the prevalence of positive mental health and functioning in any population group in Ghana. Prevalence rates of positive mental health and functioning of a population could serve as an important reference and resource for researchers and practitioners working to develop context-appropriate interventions to promote the overall mental well-being of the population involved.35
Methods
Setting and participants The study was conducted at the Sickle Cell Clinic of the Ghana Institute of Clinical Genetics (GICG) at the Korle-Bu Teaching Hospital. The GICG was established in 1974 by the Ministry of Health and the Managing Trustees of Volta Aluminium Company Limited and serves as a referral health facility for SCD patients across the country. The clinic attends to patients aged 12 and older on every working day of the week, with an average daily attendance of 50. Participants (N = 62) were English-speaking adult SCD patients, aged 18 years and above, of both genders, who were receiving treatment at the GICG for SCD.
Design and Procedure A quantitative survey design was implemented. We recruited a total of 62 adults who were receiving in- and out-patient treatment for SCD at the GICG in May and June 2018, by means of a simple random sampling (i.e., ballot) method. After an eligible patient had completed the required medical examination and treatment protocol, the Medical Officer who attended to the patient introduced the study and the research team to the patient. The research team thereafter provided further information on the study, addressed the patient's concerns, if any, and invited the patient to participate. Each consenting participant was asked to pick from a box containing pieces of paper with ‘Yes’ and ‘No’ written on them. The papers were thoroughly mixed in a container at the beginning of each day of the data collection period and again after each pick, and the process was repeated until the desired sample size was attained. All individuals who picked ‘Yes’ were contacted and scheduled for interview. Written informed consent was obtained from patients who agreed to participate. Each participant was assured of confidentiality and was informed of their right to withdraw from the study at any point in time without any consequences. Participants were ushered into a pre-arranged office where they self-administered the questionnaire.
Measure: Mental Health Continuum-Short Form21. The MHC-SF consists of 14 items that measure positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced a range of fourteen feelings by choosing one of six options: “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”. A categorisation of flourishing (presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiencing at least seven of the characteristics “every day” or “almost every day”, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied) and the others are from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (absence of positive mental health) is established when a participant reports that they “never” or “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic cluster and the others are from the eudaimonic clusters. A participant who does not fit the criteria for flourishing or languishing is said to be moderately mentally healthy. A large body of evidence supports the validity of these diagnoses.16,44,48 The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity. The sub-scales obtained reliability scores of 0.64 for the emotional well-being sub-scale, 0.57 for the psychological well-being sub-scale, and 0.71 for the social well-being sub-scale.37 Keyes and colleagues found Cronbach alphas of 0.74 for the overall MHC-SF scale, 0.67 for the PWB subscale, and 0.59 for the Social Well-Being (SWB) subscale in a sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF were recently examined within a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω = 0.97) and satisfactory reliabilities for the subscales.
Ethical consideration The study was approved by the Ethics and Protocol Review Committee of the School of Biomedical and Allied Health Sciences (SBAHS) of the University of Ghana (SBAHS – OT. /10517354/SA/2017-2018). Written informed consent was obtained from each study participant. The processes instituted to ensure confidentiality and anonymity of data were thoroughly explained.
Data analysis Descriptive statistics and reliability indices for the MHC-SF were computed with the statistical software SPSS 23.0. We applied Keyes' criteria for the categorisation of well-being to establish the prevalence of positive mental health and functioning.21,37
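Because the categorisation rule described under Measure is fully algorithmic, it can be expressed in a few lines of code. The sketch below is our illustration of the stated criteria, not the authors' SPSS procedure; the 0–5 item coding is an assumption about how responses are scored.

```python
# Keyes' MHC-SF diagnostic categorisation as described under "Measure".
# Assumed item coding: 0 = "never" ... 5 = "every day"; items 1-3 form the
# hedonic (emotional well-being) cluster, items 4-14 the eudaimonic clusters.

def classify_mhc_sf(responses):
    """Return the positive-mental-health category for one respondent."""
    assert len(responses) == 14
    hedonic, eudaimonic = responses[:3], responses[3:]
    high = lambda x: x >= 4   # "almost every day" or "every day"
    low = lambda x: x <= 1    # "never" or "once or twice"
    # Flourishing: at least seven characteristics at high frequency,
    # at least one of them hedonic (1 hedonic + 6 eudaimonic = 7).
    if sum(map(high, hedonic)) >= 1 and sum(map(high, eudaimonic)) >= 6:
        return "flourishing"
    # Languishing: the mirror criterion at low frequency.
    if sum(map(low, hedonic)) >= 1 and sum(map(low, eudaimonic)) >= 6:
        return "languishing"
    return "moderately mentally healthy"

print(classify_mhc_sf([5, 4, 3, 4, 5, 4, 2, 4, 5, 4, 3, 5, 4, 4]))  # flourishing
```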
Results
Sociodemographic characteristics of participants The study involved male and female SCD patients aged 21 to 56 years. The majority of participants were female, employed, Christian, and had obtained tertiary-level education. Details of the sociodemographic characteristics of participants are shown in Table 1 (Sociodemographic characteristics of participants, N = 62).
Descriptive statistics and reliability indices for the MHC-SF Table 2 shows the descriptive statistics and Cronbach's alpha coefficients of the MHC-SF for the sample. The mean scores and standard deviations for the scale are broadly in line with those reported in the literature. The Cronbach's alpha reliability coefficient for the total MHC-SF was 0.77, which is within the acceptable range of 0.70 and above for reliable instruments.38 The mean of the inter-item correlations was 0.25. According to Clark and Watson's guideline, the inter-item correlations of a standardised measure should range between 0.15 and 0.50, with a range of 0.15–0.20 for broad constructs and 0.40–0.50 for narrower constructs.39 We found a range of 0.02 to 0.68 for the item-total correlations of the MHC-SF in this sample. Items 9 to 14 (cluster 3 = eudaimonic; psychological well-being) recorded the highest mean scores (3.90–4.35), whereas Item 8 (social coherence/interest) presented the lowest mean of 3.15. (Table 2: Descriptive statistics for the sub- and total MHC-SF scales. Note: MHC-SF = Mental Health Continuum-Short Form; EWB = Emotional Well-Being; SWB = Social Well-Being; PWB = Psychological Well-Being.)
The prevalence levels of positive mental health Table 3 (Prevalence of positive mental health, N = 62) shows the prevalence of positive mental health among participants. The results show a high level of positive mental health (flourishing; 66%), a substantial level of moderate mental health (26%), and a relatively low level of languishing (8%). Overall, two-thirds of SCD patients have a high level of positive mental health, whereas about a third are not functioning optimally. We found no difference [t = 0.111, p > .05] between the genders in their levels of mental health (see Table 4: Gender differences in mental health of participants).
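The reliability statistics quoted above (Cronbach's alpha of 0.77, mean inter-item correlation of 0.25) can be computed directly from the item-level data with the standard formulas. The NumPy sketch below shows those formulas on a simulated response matrix; the simulated data are purely illustrative and will not reproduce the study's exact values.

```python
# Reliability statistics of the kind reported in Table 2, computed from a raw
# (n_respondents x n_items) score matrix.
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mean_interitem_correlation(items):
    """Average off-diagonal entry of the item correlation matrix."""
    r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    k = r.shape[0]
    return (r.sum() - k) / (k * (k - 1))

# Simulated 62 x 14 MHC-SF response matrix (0-5 scale) for illustration.
rng = np.random.default_rng(1)
common = rng.normal(size=(62, 1))  # shared trait component across items
items = np.clip(np.round(3 + common + rng.normal(0, 1.2, size=(62, 14))), 0, 5)
print(cronbach_alpha(items), mean_interitem_correlation(items))
```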
Conclusion
The findings from this study provide insight into the mental health functioning of patients with SCD in Ghana, as measured with the MHC-SF. We found a considerably high prevalence of positive mental health in our sample, with a third of the sample functioning at suboptimal levels mentally. It is important that researchers, practitioners, and policymakers explore and take advantage of the contextual and psychosocial factors that promote emotional, psychological, and social well-being in order to design context-appropriate mental health interventions for individuals in treatment for SCD and, more generally, for the Ghanaian people.
[ "Prevalence of positive mental health", "Setting and participants", "Design and Procedure", "Measure", "Mental Health Continuum-Short Form21", "Ethical consideration", "Data analysis", "Sociodemographic characteristics of participants", "Descriptive statistics and reliability indices for the MHC-SF", "The prevalence levels of positive mental health", "Study limitations" ]
[ "Scholarly interest in the use of the MHC-SF to assess the prevalence of positive mental health and functioning has increased exponentially in the last decade. Researchers have successfully administered the MHC-SF to adults from South Africa27, Poland28, Canada22, Italy29, China30, United States21 and Australia31, and to adolescents from Egypt32, India33 and South Korea34 to explore the prevalence of positive mental health and its correlates.\nPrevalence rates vary across the globe depending on the context and the sample involved. For instance, researchers found that only 11.7 % of a sample of South Korean adolescents were flourishing, compared to the 23.5% of Egyptian adolescents, 46.4% of Indian adolescents, and 76.9% of Canadian adolescents and adults.43\nIn Ghana, there are limited studies that explore the concept of positive mental health into its categorical prevalence levels. Largely, the prevalence of mental health has been conceptualised and measured in terms of mental disorders. To our knowledge, only Appiah and colleagues36 measured the effect of a community-based positive psychology intervention in promoting the (positive) mental health of a rural adult sample using the MHC-SF. The researchers reported a flourishing rate of 25% at pre-test, 57.5% immediately after the 10-week intervention, and 77.5% three months post-intervention for programme participants. The majority of studies employed proxies in their measurement to assess subjective well-being by measuring individuals' standards of living rather than how they feel about their overall life.50 As far as could be established, presently, little is known about the prevalence of positive mental health and functioning in any population group in Ghana. The outcomes of prevalence rates of positive mental health and functioning of a population could serve as an important reference and resource for researchers and practitioners working to develop context-ainterventions to promote the overall mental well-being of the population involved.35", "The study was conducted at the Sickle Cell Clinic of the Ghana Institute of Clinical Genetics (GICG) at the Korle-Bu Teaching Hospital. The GICG was established in 1974 by the Ministry of Health and the Managing Trustees of Volta Aluminium Company Limited and serves as a referral health facility for SCD patients across the country. The clinic attends to patients aged 12 and older on every working day of the week with an average daily attendance of 50. Participants (N = 62) were Englishspeaking adult SCD patients, aged 18 years and above, of both genders, and were receiving treatment at the GICG for SCD.", "A quantitative survey design was implemented. We randomly recruited a total of 62 adults who were receiving in- and out-patient treatment for SCD at the GICG in May and June 2018, by means of a simple random sampling (i.e., ballot) method. After an eligible patient had completed the required medical examination and treatment protocol, the Medical Officer who attended to the patient then introduced the study and the research team to the patient. The research team thereafter provided further information on the study, addressed the patient's concerns, if any, and offered an invitation to the patient to participate. Each consented participant was asked to pick from a box containing pieces of papers with ‘Yes’ and ‘No’ written on them. 
The papers were thoroughly mixed after each pick before the next individual was made to pick.\nThe papers were thoroughly mixed in a container at the beginning of each day of the data collection period, and the process repeated until the desired sample size was attained. All individuals who picked ‘Yes’ were contacted and scheduled for interview. Written informed consent was obtained from patients who agreed to participate. Each participant was assured of confidentiality and was informed of their right to withdraw from the study at any point in time without any consequences. Participants were ushered into a pre-arranged office where they selfadministered the questionnaire.", "Mental Health Continuum-Short Form21 The MHC-SF consists of 14 items that measures positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced a range of fourteen feelings by choosing one of six options “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”. A categorisation of flourishing (presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiences as “every day” or “almost every day” for at least seven of the characteristics, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied), and the others from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (absence of positive mental health) is established when a participant reports that they “never” or “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic (i.e., emotional well-being) cluster and the others from the eudaimonic clusters. A participant who does not fit the criteria for flourishing or languishing is said to be moderately mentally healthy.\nA large body of evidence exists that supports the validity of these diagnoses16,44,48 The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity. The sub-scales obtained a reliability score of 0.64 for the emotional well-being sub-scale, .57 for the psychological well-being sub-scale, to .71 for the social well-being sub-scale.37 Keyes and colleagues found Cronbach alpha of 0.74 for the overall MHC-SF Scale, 0.67 for the PWB subscale, and 0.59 for the Social Well-Being (SWB) subscale in sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF was recently examined within a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω =0.97) and satisfactory reliabilities for the subscales.\nThe MHC-SF consists of 14 items that measures positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced a range of fourteen feelings by choosing one of six options “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”. 
A categorisation of flourishing (presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiences as “every day” or “almost every day” for at least seven of the characteristics, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied), and the others from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (absence of positive mental health) is established when a participant reports that they “never” or “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic (i.e., emotional well-being) cluster and the others from the eudaimonic clusters. A participant who does not fit the criteria for flourishing or languishing is said to be moderately mentally healthy.\nA large body of evidence exists that supports the validity of these diagnoses16,44,48 The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity. The sub-scales obtained a reliability score of 0.64 for the emotional well-being sub-scale, .57 for the psychological well-being sub-scale, to .71 for the social well-being sub-scale.37 Keyes and colleagues found Cronbach alpha of 0.74 for the overall MHC-SF Scale, 0.67 for the PWB subscale, and 0.59 for the Social Well-Being (SWB) subscale in sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF was recently examined within a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω =0.97) and satisfactory reliabilities for the subscales.", "The MHC-SF consists of 14 items that measures positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced a range of fourteen feelings by choosing one of six options “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”. A categorisation of flourishing (presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiences as “every day” or “almost every day” for at least seven of the characteristics, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied), and the others from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (absence of positive mental health) is established when a participant reports that they “never” or “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic (i.e., emotional well-being) cluster and the others from the eudaimonic clusters. A participant who does not fit the criteria for flourishing or languishing is said to be moderately mentally healthy.\nA large body of evidence exists that supports the validity of these diagnoses16,44,48 The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity. 
The sub-scales obtained a reliability score of 0.64 for the emotional well-being sub-scale, .57 for the psychological well-being sub-scale, to .71 for the social well-being sub-scale.37 Keyes and colleagues found Cronbach alpha of 0.74 for the overall MHC-SF Scale, 0.67 for the PWB subscale, and 0.59 for the Social Well-Being (SWB) subscale in sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF was recently examined within a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω =0.97) and satisfactory reliabilities for the subscales.", "The study was approved by the Ethics and Protocol Review Committee of the School of Biomedical and Allied Health Sciences (SBAHS) of the University of Ghana (SBAHS – OT. /10517354/SA/2017-2018). Written informed consent was obtained from each study participant. The processes instituted to ensure confidentiality and anonymity of data were thoroughly explained.", "Descriptive statistics and reliability indices for the MHC-SF were established with the statistical software SPSS 23.0. We applied Keyes' criteria for categorisation of well-being to establish the prevalence of positive mental health and functioning21,37 using the SPSS.", "The study involved male and female SCD patients aged from 21 to 56 years. The majority of participants were females, employed, Christians, obtained tertiary level of education. Details of the sociodemographic characteristics of participants are shown in Table 1.\nSociodemographic characteristics of participants (N = 62)", "Table 2 shows the descriptive statistics and Cronbach's alpha coefficients of the MHC-SF for the sample. We report the mean scores and standard deviations for the scale, which are more or less in line with those reported in the literature. The Cronbach's alpha reliability coefficient for the total MHC-SF was 0.77, which is within the acceptable value of 0.70 and above for reliable instruments.38 The mean of the inter-item correlations was .25. According to Clark and Watson's guideline, the interitem correlations of a standardised measure should range between 0.15 – 0.50, with a range of 0.15 – 0.20 for broad constructs and a range of .40 – .50 for narrower constructs.39 We found a range of .02 and .68 for the itemtotal correlations for the MHC-SF for this sample. Items 9 to 14 (cluster 3 = eudaimonic; psychological well-being) recorded the highest mean scores (3.90–4.35), whereas Item 8 (Social coherence/interest) presented the lowest mean of 3.15.\nDescriptive Statistics for Sub and Total MHC-SF Scale\nNote. MHCSF = Mental Health Continuum-Short Form; EWB = Emotional Well-Being; SWB = Social Well-Being, PWB = Psychological Well-Being", "Table 3 shows the prevalence of positive mental health of participants. The results show a high level of positive mental health (flourishing; 66%), a significant level of moderate mental health (26%), and a relatively low level of languishing (8%) among participants. Overall, twothirds of SCD patients have high level of positive mental health, whereas about a third is not functioning optimally. We found no difference [t=0.111, p>.05] between the genders in their levels of mental health (see Table 4).\nPrevalence of positive mental health (N = 62)\nGender differences in mental health of participants", "This study may be considered in the light of a few limitations. 
First, the study was conducted in Accra – the nation's capital and an economically developed part of Ghana. It is plausible that family income might be higher among our participants than in less developed areas, considering the regional economic disparities in Ghana.46 Previous research demonstrates that economic status impacts levels of mental health functioning.49 Second, given that the information was self-reported, there is (as is often the case with self-reported responses) a risk of socially desirable responses. To minimise this risk, however, we implemented a data collection process that ensured that the self-administered questionnaire was answered anonymously. Lastly, this study recruited a small sample of 62 SCD patients from one healthcare facility. It is recommended that a large-scale longitudinal study using questionnaires validated in the languages and cultural contexts of the target population56 be carried out with samples representative of the 16 regions of Ghana to establish the level of positive mental health at the national level. National estimates of prevalence of mental health (or illness) have both policy and clinical implications." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Prevalence of positive mental health", "Methods", "Setting and participants", "Design and Procedure", "Measure", "Mental Health Continuum-Short Form21", "Ethical consideration", "Data analysis", "Results", "Sociodemographic characteristics of participants", "Descriptive statistics and reliability indices for the MHC-SF", "The prevalence levels of positive mental health", "Discussion", "Study limitations", "Conclusion" ]
[ "Data from global epidemiological surveys show that sickle cell disease (SCD) remains the commonest genetic blood disorder that affects about 20–25 million people1,2, with a further 400,000 children born annually with the disease1. SCD is prevalent among people of the African and Hispanic descents, with approximately 80% of all SCD births occurring in West and Central Africa.1,2 Regional statistics suggest that about 25% of Africans are carriers of the abnormal hemoglobin gene.3 In Nigeria, for instance, about 2 to 3% of the population is affected with SCD and a projected 150,000 children are born each year with the condition.4 In Ghana, an estimated 2% of neonates are born with SCD annually.3\nSCD is a hemoglobinopathy caused by a β-globin gene (βs) mutation at position 6. The mutation results in an abnormal hemoglobin (Hb) with altered physical properties (sickle hemoglobin).5,6\nSCD is categorised into four major genotypes: sickle cell β0 thalassemia (Sβ0 thalassemia), sickle cell β+ thalassemia (Sβ+ thalassemia), sickle haemoglobin C (SC) disease, and homozygous sickle cell (SS) disease.6,7 Each of these subtypes presents with unique symptoms with varying degree of severity. In particular, the haemoglobin SS, where an individual inherits a sickle S gene from both parents, presents the most severe of clinical symptoms.8 The most reported genotypes in Ghana include the ‘SS’, ‘SC’, ‘SD’, and Sβ thalassemia.9 SCD differs from sickle cell trait (SCT) or carriers of the sickle cell gene.\nPeople who have SCT do not have the disease but inherit a normal haemoglobin (A) from one parent and an abnormal haemoglobin (SC) from the other parent.\nCarriers of the sickle cell gene do not often exhibit signs and symptoms of the disease but are capable of passing the gene to their children. When there is reduced oxygen tension, red blood cells (RBCs) with haemoglobin S (HbS) tend to sickle. This is because HbS becomes less soluble in low oxygen environment and forms tactoids and microfibrils along the long axis of RBC. Sickled RBCs lose their deformability property and thus occlude tiny capillaries. During this process, some of the RBCs are broken down, or lysed. As a result of this occlusion, the distal organs are deprived of oxygen, resulting in complications such as infarction, anaemia, priapism, splenomegaly, and dactylitis.10 SCD is associated with poor quality of life, depression, anxiety, and low levels of mental health functioning.11,12 Although there is limited data on SCD in Ghana at the national level, several studies report on the prevalence of SCD across various health facilities. A retrospective review of the medical records of all SCD patients (aged 13 and above) at the Ghana Institute of Clinical Genetics, Korle-Bu, from January 2013 to December 2014 shows that a total of 5,451 SCD patients accessed healthcare services at the facility, with 20,788 clinic visits.13 Another study estimates that about 2% of all newborns are diagnosed with SCD, and at least 25% of the Ghanaian populace carry the sickle cell gene.3\nKeyes hypothesised a tripartite model of positive mental health that describes three dimensions of mental health, namely, emotional (EWB), psychological (PWB), and social (SWB) well-being.15 The EWB describes the presence of positive affect and satisfactions with life, while the PWB defines an individual's intrapersonal and close interpersonal functioning. 
Keyes conceptualised the SWB to refer to a sense of welfare and happiness, experiences of an individual's well-being in society, and satisfaction with their social structure and function.14,15 Keyes suggested that complete mental health functioning comprises a combination of all three dimensions: emotional, psychological, and social well-being. For its measurement, a high level of positive mental health, called flourishing, requires a combination of a high level of subjective well-being and an optimal level of psychological and social functioning. Likewise, Keyes termed an individual's experience of low levels of positive mental health and of psychological and social functioning languishing. Individuals who function between these two states, that is, neither flourishing nor languishing, are considered to be experiencing moderate mental health.
Keyes' tripartite model, together with empirical evidence from Ryff's14 model of psychological well-being, formed the conceptual foundation that directed the development of the 40-item self-administered Mental Health Continuum Long-Form questionnaire.15,21
A shorter 14-item version, the Mental Health Continuum Short-Form (MHC-SF)21, was later constructed to assess the three dimensions of well-being (emotional, psychological, and social well-being) and the categorical diagnosis of positive mental health. According to Keyes15,21, positive mental health and mental illness exist on two continua: positive mental health correlates with, but is distinct from, mental illness. In this respect, an individual can present with symptoms of mental illness, such as a depressive episode, and simultaneously exhibit a high level of positive mental health (i.e., flourishing). A wealth of research demonstrates that flourishing states are associated with a range of positive outcomes, including good health, higher life expectancy, and satisfaction with life.17 Measuring the prevalence of positive mental health remains a challenging endeavour, partly due to varied conceptualisations and operational definitions of mental well-being. In recent efforts, however, scholars have increasingly utilised the MHC-SF to assess positive mental health across various population groups.19 There is evidence that mental health is understood and experienced differently across cultures and religions.18 For instance, researchers found significant differences in the expressions of well-being between collectivist East Asian and individualistic Western samples.20
Given the developmental challenges and disease-related complications associated with SCD, survivors require individually targeted biopsychosocial interventions to be able to lead quality, meaningful lives.23 In most cases, SCD patients endure lifelong treatment regimens that can be expensive, complex, and multifaceted, with a high risk of poor mental health functioning, that is, a reduced ability to reach their optimal human potential and lead a meaningful life.24 Assessment of positive mental health remains an important approach for evaluating the prognosis and outcomes of chronically ill patients.25,26
Prevalence of positive mental health Scholarly interest in the use of the MHC-SF to assess the prevalence of positive mental health and functioning has increased exponentially in the last decade. 
Researchers have successfully administered the MHC-SF to adults from South Africa27, Poland28, Canada22, Italy29, China30, the United States21 and Australia31, and to adolescents from Egypt32, India33 and South Korea34 to explore the prevalence of positive mental health and its correlates.
Prevalence rates vary across the globe depending on the context and the sample involved. For instance, researchers found that only 11.7% of a sample of South Korean adolescents were flourishing, compared with 23.5% of Egyptian adolescents, 46.4% of Indian adolescents, and 76.9% of Canadian adolescents and adults.43
In Ghana, few studies have explored positive mental health in terms of its categorical prevalence levels. Largely, the prevalence of mental health has been conceptualised and measured in terms of mental disorders. To our knowledge, only Appiah and colleagues36 have measured the effect of a community-based positive psychology intervention in promoting the (positive) mental health of a rural adult sample using the MHC-SF. The researchers reported a flourishing rate of 25% at pre-test, 57.5% immediately after the 10-week intervention, and 77.5% three months post-intervention for programme participants. The majority of studies have employed proxies in their measurement of subjective well-being, assessing individuals' standards of living rather than how they feel about their overall life.50 As far as could be established, little is presently known about the prevalence of positive mental health and functioning in any population group in Ghana. Prevalence rates of positive mental health and functioning in a population could serve as an important reference and resource for researchers and practitioners working to develop context-appropriate interventions to promote the overall mental well-being of the population involved.35", "Scholarly interest in the use of the MHC-SF to assess the prevalence of positive mental health and functioning has increased exponentially in the last decade. Researchers have successfully administered the MHC-SF to adults from South Africa27, Poland28, Canada22, Italy29, China30, the United States21 and Australia31, and to adolescents from Egypt32, India33 and South Korea34 to explore the prevalence of positive mental health and its correlates.
Prevalence rates vary across the globe depending on the context and the sample involved. For instance, researchers found that only 11.7% of a sample of South Korean adolescents were flourishing, compared with 23.5% of Egyptian adolescents, 46.4% of Indian adolescents, and 76.9% of Canadian adolescents and adults.43
In Ghana, few studies have explored positive mental health in terms of its categorical prevalence levels. Largely, the prevalence of mental health has been conceptualised and measured in terms of mental disorders. To our knowledge, only Appiah and colleagues36 have measured the effect of a community-based positive psychology intervention in promoting the (positive) mental health of a rural adult sample using the MHC-SF. The researchers reported a flourishing rate of 25% at pre-test, 57.5% immediately after the 10-week intervention, and 77.5% three months post-intervention for programme participants. The majority of studies have employed proxies in their measurement of subjective well-being, assessing individuals' standards of living rather than how they feel about their overall life.50 As far as could be established, little is presently known about the prevalence of positive mental health and functioning in any population group in Ghana. Prevalence rates of positive mental health and functioning in a population could serve as an important reference and resource for researchers and practitioners working to develop context-appropriate interventions to promote the overall mental well-being of the population involved.35", "Setting and participants The study was conducted at the Sickle Cell Clinic of the Ghana Institute of Clinical Genetics (GICG) at the Korle-Bu Teaching Hospital. The GICG was established in 1974 by the Ministry of Health and the Managing Trustees of Volta Aluminium Company Limited and serves as a referral health facility for SCD patients across the country. The clinic attends to patients aged 12 and older on every working day of the week, with an average daily attendance of 50. Participants (N = 62) were English-speaking adult SCD patients, aged 18 years and above, of both genders, who were receiving treatment at the GICG for SCD.
Design and Procedure A quantitative survey design was implemented. We recruited a total of 62 adults who were receiving in- and out-patient treatment for SCD at the GICG in May and June 2018, by means of a simple random sampling (i.e., ballot) method. After an eligible patient had completed the required medical examination and treatment protocol, the Medical Officer who attended to the patient introduced the study and the research team to the patient. The research team thereafter provided further information on the study, addressed the patient's concerns, if any, and invited the patient to participate. Each consenting patient was asked to pick from a box containing pieces of paper with ‘Yes’ and ‘No’ written on them. The papers were thoroughly mixed after each pick before the next individual picked.
The papers were also thoroughly mixed in a container at the beginning of each day of the data collection period, and the process was repeated until the desired sample size was attained. All individuals who picked ‘Yes’ were contacted and scheduled for an interview. Written informed consent was obtained from patients who agreed to participate. Each participant was assured of confidentiality and informed of their right to withdraw from the study at any point without consequences. Participants were ushered into a pre-arranged office where they self-administered the questionnaire.
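The ballot draw just described amounts to an independent yes/no lot for each eligible patient, with the box fully re-mixed between picks. A rough sketch of the procedure in Python; the actual numbers of ‘Yes’ and ‘No’ slips in the box are not reported in the study, so the even split below is an assumption.

```python
import random

def ballot_draw(n_yes=5, n_no=5, rng=None):
    """One patient's draw from the box of 'Yes'/'No' slips.

    The box is thoroughly re-mixed after every pick, so each eligible
    patient draws independently with probability n_yes / (n_yes + n_no)
    of selection. NOTE: the study does not report how many slips of
    each kind the box held; the even 5/5 split is an assumption.
    """
    rng = rng or random.Random()
    box = ["Yes"] * n_yes + ["No"] * n_no
    rng.shuffle(box)  # "thoroughly mixed"
    return rng.choice(box)

def recruit(eligible_patients, target_n=62, rng=None):
    """Draw for each eligible patient until the target sample size is met."""
    rng = rng or random.Random()
    selected = []
    for patient in eligible_patients:
        if len(selected) == target_n:
            break
        if ballot_draw(rng=rng) == "Yes":
            selected.append(patient)
    return selected
```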
Measure Mental Health Continuum-Short Form21 The MHC-SF consists of 14 items that measure positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced each of fourteen feelings by choosing one of six options: “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”. A categorisation of flourishing (the presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiencing at least seven of the characteristics “every day” or “almost every day”, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied) and the others are from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (the absence of positive mental health) is established when a participant reports that they “never” or “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic cluster and the others are from the eudaimonic clusters. A participant who fits the criteria for neither flourishing nor languishing is said to be moderately mentally healthy.
A large body of evidence supports the validity of these diagnoses.16,44,48 The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity. The sub-scales obtained reliability scores of 0.64 for the emotional well-being sub-scale, 0.57 for the psychological well-being sub-scale, and 0.71 for the social well-being sub-scale.37 Keyes and colleagues found a Cronbach's alpha of 0.74 for the overall MHC-SF scale, 0.67 for the PWB subscale, and 0.59 for the Social Well-Being (SWB) subscale in a sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF were recently examined within a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω = 0.97) and satisfactory reliabilities for the subscales.
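The categorisation rules above translate directly into a small scoring routine. A minimal sketch, assuming the conventional 0–5 numeric coding of the six response options and the item clustering reported in the text (items 1–3 emotional, 4–8 social, 9–14 psychological); the paper itself gives no reference implementation.

```python
def classify_mhcsf(responses):
    """Categorise one respondent according to Keyes' criteria above.

    `responses`: the 14 item ratings coded 0-5 (0 = "never" ...
    5 = "every day") -- an assumed, conventional coding, since the
    paper does not print numeric codes. Items 1-3 form the hedonic
    (emotional) cluster; items 4-14 the eudaimonic (social and
    psychological) clusters, matching the clusters named in the text.
    """
    hedonic, eudaimonic = responses[:3], responses[3:]

    high_hed = sum(r >= 4 for r in hedonic)    # "almost every day"/"every day"
    high_eud = sum(r >= 4 for r in eudaimonic)
    low_hed = sum(r <= 1 for r in hedonic)     # "never"/"once or twice"
    low_eud = sum(r <= 1 for r in eudaimonic)

    # at least seven characteristics: one hedonic plus six eudaimonic
    if high_hed >= 1 and high_eud >= 6:
        return "flourishing"
    if low_hed >= 1 and low_eud >= 6:
        return "languishing"
    return "moderate mental health"
```

For example, a respondent who answers “every day” (coded 5) to one emotional item and to six psychological items would be classified as flourishing under this reading of the criteria.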
Ethical consideration The study was approved by the Ethics and Protocol Review Committee of the School of Biomedical and Allied Health Sciences (SBAHS) of the University of Ghana (SBAHS – OT. /10517354/SA/2017-2018). Written informed consent was obtained from each study participant. The processes instituted to ensure the confidentiality and anonymity of data were thoroughly explained.
Data analysis Descriptive statistics and reliability indices for the MHC-SF were established with the statistical software SPSS 23.0. We applied Keyes' criteria for the categorisation of well-being to establish the prevalence of positive mental health and functioning21,37 using SPSS.", "The study was conducted at the Sickle Cell Clinic of the Ghana Institute of Clinical Genetics (GICG) at the Korle-Bu Teaching Hospital. The GICG was established in 1974 by the Ministry of Health and the Managing Trustees of Volta Aluminium Company Limited and serves as a referral health facility for SCD patients across the country. The clinic attends to patients aged 12 and older on every working day of the week, with an average daily attendance of 50.
Participants (N = 62) were English-speaking adult SCD patients, aged 18 years and above, of both genders, who were receiving treatment at the GICG for SCD.", "A quantitative survey design was implemented. We recruited a total of 62 adults who were receiving in- and out-patient treatment for SCD at the GICG in May and June 2018, by means of a simple random sampling (i.e., ballot) method. After an eligible patient had completed the required medical examination and treatment protocol, the Medical Officer who attended to the patient introduced the study and the research team to the patient. The research team thereafter provided further information on the study, addressed the patient's concerns, if any, and invited the patient to participate. Each consenting patient was asked to pick from a box containing pieces of paper with ‘Yes’ and ‘No’ written on them. The papers were thoroughly mixed after each pick before the next individual picked.
The papers were also thoroughly mixed in a container at the beginning of each day of the data collection period, and the process was repeated until the desired sample size was attained. All individuals who picked ‘Yes’ were contacted and scheduled for an interview. Written informed consent was obtained from patients who agreed to participate. Each participant was assured of confidentiality and informed of their right to withdraw from the study at any point without consequences. Participants were ushered into a pre-arranged office where they self-administered the questionnaire.", "Mental Health Continuum-Short Form21 The MHC-SF consists of 14 items that measure positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced each of fourteen feelings by choosing one of six options: “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”. A categorisation of flourishing (the presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiencing at least seven of the characteristics “every day” or “almost every day”, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied) and the others are from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (the absence of positive mental health) is established when a participant reports that they “never” or “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic cluster and the others are from the eudaimonic clusters. A participant who fits the criteria for neither flourishing nor languishing is said to be moderately mentally healthy.
A large body of evidence supports the validity of these diagnoses.16,44,48 The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity.
The sub-scales obtained reliability scores of 0.64 for the emotional well-being sub-scale, 0.57 for the psychological well-being sub-scale, and 0.71 for the social well-being sub-scale.37 Keyes and colleagues found a Cronbach's alpha of 0.74 for the overall MHC-SF scale, 0.67 for the PWB subscale, and 0.59 for the Social Well-Being (SWB) subscale in a sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF were recently examined within a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω = 0.97) and satisfactory reliabilities for the subscales.", "The MHC-SF consists of 14 items that measure positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced each of fourteen feelings by choosing one of six options: “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”.
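Since the six response options recur throughout the scale description, it may help to see them as numeric codes. A small sketch; the 0–5 coding and the resulting 0–70 total are the conventional MHC-SF scoring, assumed here because the paper reports only item means (roughly 3.2–4.4 on this metric).

```python
# Conventional 0-5 coding of the six MHC-SF response options
# (assumed here; the paper does not print the numeric coding).
RESPONSE_CODES = {
    "never": 0,
    "once or twice": 1,
    "about once a week": 2,
    "2 or 3 times a week": 3,
    "almost every day": 4,
    "every day": 5,
}

def total_score(answers):
    """Sum the 14 coded items; totals then range from 0 to 70."""
    return sum(RESPONSE_CODES[a] for a in answers)
```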
A categorisation of flourishing (the presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiencing at least seven of the characteristics “every day” or “almost every day”, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied) and the others are from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (the absence of positive mental health) is established when a participant reports that they “never” or “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic cluster and the others are from the eudaimonic clusters. A participant who fits the criteria for neither flourishing nor languishing is said to be moderately mentally healthy.
A large body of evidence supports the validity of these diagnoses.16,44,48 The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity. The sub-scales obtained reliability scores of 0.64 for the emotional well-being sub-scale, 0.57 for the psychological well-being sub-scale, and 0.71 for the social well-being sub-scale.37 Keyes and colleagues found a Cronbach's alpha of 0.74 for the overall MHC-SF scale, 0.67 for the PWB subscale, and 0.59 for the Social Well-Being (SWB) subscale in a sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF were recently examined within a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω = 0.97) and satisfactory reliabilities for the subscales.", "The study was approved by the Ethics and Protocol Review Committee of the School of Biomedical and Allied Health Sciences (SBAHS) of the University of Ghana (SBAHS – OT. /10517354/SA/2017-2018). Written informed consent was obtained from each study participant. The processes instituted to ensure the confidentiality and anonymity of data were thoroughly explained.", "Descriptive statistics and reliability indices for the MHC-SF were established with the statistical software SPSS 23.0. We applied Keyes' criteria for the categorisation of well-being to establish the prevalence of positive mental health and functioning21,37 using SPSS.", "Sociodemographic characteristics of participants The study involved male and female SCD patients aged 21 to 56 years. The majority of participants were female, employed, Christian, and had obtained tertiary-level education. Details of the sociodemographic characteristics of participants are shown in Table 1.
Sociodemographic characteristics of participants (N = 62)
Descriptive statistics and reliability indices for the MHC-SF Table 2 shows the descriptive statistics and Cronbach's alpha coefficients of the MHC-SF for the sample. We report the mean scores and standard deviations for the scale, which are broadly in line with those reported in the literature. The Cronbach's alpha reliability coefficient for the total MHC-SF was 0.77, above the commonly accepted threshold of 0.70 for reliable instruments.38 The mean of the inter-item correlations was .25. According to Clark and Watson's guideline, the inter-item correlations of a standardised measure should range between 0.15 and 0.50, with a range of 0.15 – 0.20 for broad constructs and a range of .40 – .50 for narrower constructs.39 The item-total correlations for the MHC-SF ranged from .02 to .68 in this sample. Items 9 to 14 (cluster 3 = eudaimonic; psychological well-being) recorded the highest mean scores (3.90–4.35), whereas Item 8 (social coherence/interest) presented the lowest mean of 3.15.
Descriptive Statistics for Sub and Total MHC-SF Scale
Note. MHC-SF = Mental Health Continuum-Short Form; EWB = Emotional Well-Being; SWB = Social Well-Being; PWB = Psychological Well-Being
The prevalence levels of positive mental health Table 3 shows the prevalence of positive mental health among participants. The results show a high level of positive mental health (flourishing; 66%), a substantial level of moderate mental health (26%), and a relatively low level of languishing (8%) among participants. Overall, two-thirds of the SCD patients had a high level of positive mental health, whereas about a third were not functioning optimally. We found no significant difference [t = 0.111, p > .05] between the genders in their levels of mental health (see Table 4).
Prevalence of positive mental health (N = 62)
Gender differences in mental health of participants", "The study involved male and female SCD patients aged 21 to 56 years. The majority of participants were female, employed, Christian, and had obtained tertiary-level education. Details of the sociodemographic characteristics of participants are shown in Table 1.
Sociodemographic characteristics of participants (N = 62)"
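The reliability indices reported for this sample (α = 0.77, mean inter-item correlation .25, item-total correlations ranging .02–.68) were computed in SPSS, but the same quantities are straightforward to reproduce elsewhere. A hedged sketch in Python with pandas, assuming a participants-by-14-items data frame of 0–5 scores; this illustrates the standard formulas, not the authors' own code.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def item_diagnostics(items: pd.DataFrame):
    """Mean inter-item correlation and corrected item-total correlations,
    the two item-level indices reported for this sample."""
    corr = items.corr().to_numpy()
    mean_inter_item = corr[np.triu_indices_from(corr, k=1)].mean()
    item_total = {
        col: items[col].corr(items.drop(columns=col).sum(axis=1))
        for col in items.columns
    }
    return mean_inter_item, item_total
```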
, "Table 2 shows the descriptive statistics and Cronbach's alpha coefficients of the MHC-SF for the sample. We report the mean scores and standard deviations for the scale, which are broadly in line with those reported in the literature. The Cronbach's alpha reliability coefficient for the total MHC-SF was 0.77, above the commonly accepted threshold of 0.70 for reliable instruments.38 The mean of the inter-item correlations was .25. According to Clark and Watson's guideline, the inter-item correlations of a standardised measure should range between 0.15 and 0.50, with a range of 0.15 – 0.20 for broad constructs and a range of .40 – .50 for narrower constructs.39 The item-total correlations for the MHC-SF ranged from .02 to .68 in this sample. Items 9 to 14 (cluster 3 = eudaimonic; psychological well-being) recorded the highest mean scores (3.90–4.35), whereas Item 8 (social coherence/interest) presented the lowest mean of 3.15.
Descriptive Statistics for Sub and Total MHC-SF Scale
Note. MHC-SF = Mental Health Continuum-Short Form; EWB = Emotional Well-Being; SWB = Social Well-Being; PWB = Psychological Well-Being", "Table 3 shows the prevalence of positive mental health among participants. The results show a high level of positive mental health (flourishing; 66%), a substantial level of moderate mental health (26%), and a relatively low level of languishing (8%) among participants. Overall, two-thirds of the SCD patients had a high level of positive mental health, whereas about a third were not functioning optimally. We found no significant difference [t = 0.111, p > .05] between the genders in their levels of mental health (see Table 4).
Prevalence of positive mental health (N = 62)
Gender differences in mental health of participants", "This study set out to explore the prevalence of positive mental health and functioning in a sample of adults with SCD in Ghana. The results show, surprisingly, a high level of positive mental health in the sample, as indicated by the large percentage of participants (i.e., 66%) in the flourishing category, in spite of the high levels of depression40 and poor coping methods41 previously reported among SCD patients in other settings. A substantial percentage of the sample (i.e., 26%) was moderately mentally healthy, with a small number (i.e., 8%) languishing. While the sample for this study differs in many ways from those of other studies, the results are comparable with previous findings.
For instance, the percentage of individuals flourishing in this study is broadly comparable with the 53.1% found among college students in the United States42, but lower than the 76.9% reported for a large sample of adolescents and adults in Canada.22 In many instances, however, the level of flourishing (i.e., positive mental health) found in the current sample is far higher than the 8% flourishing reported for a sample of South Korean adults43, the 20% found among adult South Africans34, the 23% found among Egyptian adolescents32, the 26% found among a sample of Polish adults29, the 42% in a group of South African secondary school children44, and the 44% among Chinese adults.36 Of note, the participants in the present study are mostly adults who also present with a medical condition, unlike the participants in the aforementioned studies.
A possible explanation for the high prevalence of positive mental health in our sample could be the collectivistic cultural and social orientation of Ghanaian society.45 In the Ghanaian communal context, an individual is deeply connected to both the nuclear and the extended family system, and falls back on relatives and friends for social and financial support. Family and community members may be obligated to assist other individuals in need, particularly when confronted with economic or health crises.45 The availability of support (or the consciousness of its existence) can, in many ways, serve as a source of real or perceived hope for people during difficult times, which, in turn, can help them to build resilience and propel them to function at optimal levels.
Another explanation for the high level of positive mental health found in our sample might be the high level of religious involvement and spirituality46 in Ghana. Previous research shows that spirituality and religious practices are related to many dimensions of mental health and contribute to advancing the overall mental well-being of individuals and groups.47 High levels of spirituality and religious engagement have been associated with a positive mental state.47
We propose a further explanation for the high level of positive mental health and functioning found in this clinical sample, namely, that the general optimism and positive outlook typical of the Ghanaian people have the potential to protect against poor mental health.
Research evidence suggests that most Ghanaians are resilient52, optimistic53, and happy54, even in the midst of life's difficulties. These characteristics, fostered in Ghanaian collectivistic cultural values55 and high religiosity46, might have contributed to the high proportion of flourishers in this sample, in spite of their medical presentation.
The findings also showed that, overall, 34% of participants were not functioning at optimal levels. Moderate mental health and languishing are indications of suboptimal mental health and, indeed, risk factors for psychopathology.48 Although the majority of participants reported a high level of positive mental health, the third of the sample in the languishing and moderately mentally healthy categories could benefit from contextually appropriate positive psychology interventions (PPIs) to promote their mental health and functioning. PPIs have been found to protect against psychopathology.36
Our results did not demonstrate a significant difference between the genders.
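The gender comparison reported in the Results (t = 0.111, p > .05) is a standard independent-samples t-test on MHC-SF totals. A minimal sketch, assuming hypothetical column names 'gender' and 'mhcsf_total'; the authors ran the equivalent test in SPSS, so this is an illustration rather than their procedure.

```python
import pandas as pd
from scipy import stats

def compare_genders(df: pd.DataFrame):
    """Independent-samples t-test on MHC-SF total scores by gender.

    `df` is assumed to hold one row per participant with hypothetical
    columns 'gender' ('male'/'female') and 'mhcsf_total' (0-70).
    """
    male = df.loc[df["gender"] == "male", "mhcsf_total"]
    female = df.loc[df["gender"] == "female", "mhcsf_total"]
    return stats.ttest_ind(male, female)  # equal-variance t-test
```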
This finding is consistent with previous research reports that did not establish any significant difference in positive mental health between males and females.27–29,31 Nonetheless, more recently, Guo and colleagues found slightly higher positive mental health among females than males.35
Study limitations This study should be considered in the light of a few limitations. First, the study was conducted in Accra – the nation's capital and an economically developed part of Ghana. It is plausible that family income is higher among our participants than in less developed areas, considering the regional economic disparities in Ghana.46 Previous research demonstrates that economic status affects levels of mental health functioning.49 Second, given that the information was self-reported, there is (as is often the case with self-reported responses) a risk of socially desirable responding. To minimise this risk, however, we implemented a data collection process that ensured that the self-administered questionnaire was answered anonymously. Lastly, this study recruited a small sample of 62 SCD patients from one healthcare facility. It is recommended that a large-scale longitudinal study using questionnaires validated in the languages and cultural contexts of the target population56 be carried out with samples representative of the 16 regions of Ghana to establish the level of positive mental health at the national level. National estimates of the prevalence of mental health (or illness) have both policy and clinical implications.", "This study should be considered in the light of a few limitations. First, the study was conducted in Accra – the nation's capital and an economically developed part of Ghana. It is plausible that family income is higher among our participants than in less developed areas, considering the regional economic disparities in Ghana.46 Previous research demonstrates that economic status affects levels of mental health functioning.49 Second, given that the information was self-reported, there is (as is often the case with self-reported responses) a risk of socially desirable responding. To minimise this risk, however, we implemented a data collection process that ensured that the self-administered questionnaire was answered anonymously. Lastly, this study recruited a small sample of 62 SCD patients from one healthcare facility. It is recommended that a large-scale longitudinal study using questionnaires validated in the languages and cultural contexts of the target population56 be carried out with samples representative of the 16 regions of Ghana to establish the level of positive mental health at the national level. National estimates of the prevalence of mental health (or illness) have both policy and clinical implications.", "The findings from this study provide insight into the mental health functioning of patients with SCD in Ghana, as measured with the MHC-SF. We found a considerably high prevalence of positive mental health in our sample, with a third of the sample functioning at suboptimal levels of mental health. It is important that researchers, practitioners, and policymakers explore and take advantage of the contextual and psychosocial factors that promote emotional, psychological, and social well-being in order to design context-appropriate mental health interventions for individuals in treatment for SCD, and for the Ghanaian people more generally." ]
[ "intro", null, "methods", null, null, null, null, null, null, "results", null, null, null, "discussion", null, "conclusions" ]
[ "positive mental health", "flourishing", "psychological well-being", "sickle cell disease", "Ghana" ]
Introduction: Data from global epidemiological surveys show that sickle cell disease (SCD) remains the commonest genetic blood disorder, affecting about 20–25 million people1,2, with a further 400,000 children born annually with the disease1. SCD is prevalent among people of African and Hispanic descent, with approximately 80% of all SCD births occurring in West and Central Africa.1,2 Regional statistics suggest that about 25% of Africans are carriers of the abnormal haemoglobin gene.3 In Nigeria, for instance, about 2 to 3% of the population is affected by SCD, and a projected 150,000 children are born each year with the condition.4 In Ghana, an estimated 2% of neonates are born with SCD annually.3 SCD is a haemoglobinopathy caused by a β-globin gene (βs) mutation at position 6. The mutation results in an abnormal haemoglobin (Hb) with altered physical properties (sickle haemoglobin).5,6 SCD is categorised into four major genotypes: sickle cell β0 thalassemia (Sβ0 thalassemia), sickle cell β+ thalassemia (Sβ+ thalassemia), sickle haemoglobin C (SC) disease, and homozygous sickle cell (SS) disease.6,7 Each of these subtypes presents with unique symptoms of varying degrees of severity. In particular, haemoglobin SS, where an individual inherits a sickle S gene from both parents, presents with the most severe clinical symptoms.8 The most reported genotypes in Ghana include ‘SS’, ‘SC’, ‘SD’, and Sβ thalassemia.9 SCD differs from sickle cell trait (SCT), found in carriers of the sickle cell gene. People who have SCT do not have the disease but inherit a normal haemoglobin (A) from one parent and an abnormal haemoglobin (S) from the other parent. Carriers of the sickle cell gene do not often exhibit signs and symptoms of the disease but are capable of passing the gene to their children. When there is reduced oxygen tension, red blood cells (RBCs) with haemoglobin S (HbS) tend to sickle. This is because HbS becomes less soluble in a low-oxygen environment and forms tactoids and microfibrils along the long axis of the RBC. Sickled RBCs lose their deformability and thus occlude tiny capillaries. During this process, some of the RBCs are broken down, or lysed. As a result of this occlusion, the distal organs are deprived of oxygen, resulting in complications such as infarction, anaemia, priapism, splenomegaly, and dactylitis.10 SCD is associated with poor quality of life, depression, anxiety, and low levels of mental health functioning.11,12 Although there are limited data on SCD in Ghana at the national level, several studies report on the prevalence of SCD across various health facilities. A retrospective review of the medical records of all SCD patients (aged 13 and above) at the Ghana Institute of Clinical Genetics, Korle-Bu, from January 2013 to December 2014 shows that a total of 5,451 SCD patients accessed healthcare services at the facility, with 20,788 clinic visits.13 Another study estimates that about 2% of all newborns are diagnosed with SCD, and at least 25% of the Ghanaian populace carry the sickle cell gene.3 Keyes hypothesised a tripartite model of positive mental health that describes three dimensions of mental health, namely emotional (EWB), psychological (PWB), and social (SWB) well-being.15 The EWB dimension describes the presence of positive affect and satisfaction with life, while the PWB dimension defines an individual's intrapersonal and close interpersonal functioning. 
Keyes conceptualised the SWB to refer to a sense of welfare and happiness and experiences of an individual's well-being in society, as well as satisfaction with their social structure and function.14,15 Keyes suggested that complete mental health functioning comprised of a combination of all three dimensions: emotional, psychological, and social well-being. For its measurement, a high level of positive mental health, called flourishing, requires a combination of a high level of subjective well-being and an optimal level of psychological and social functioning. Likewise, Keyes termed an individual's experience of low levels of positive mental health and psychological and social functioning as languishing. Individuals who function between this continuum, that is, neither flourishing nor languishing, are considered to be experiencing moderate mental health. Keyes' tripartite model, together with empirical evidence from Ryff's14 model of psychological well-being, formed the conceptual foundation that directed the development of the 40-item self-administered Mental Health Continuum Long-Form questionnaire.15,21 A shorter 14-item version, the Mental Health Continuum Short-Form (MHC-SF)21, was later constructed to assess the three dimensions of well-being (emotional, psychological, and social well-being) and the categorical diagnosis of positive mental health. According to Keyes15,21, positive mental health and mental illness exist on two continua – where positive mental health correlates with, but is distinct from, mental illness. In this respect, an individual can present with symptoms of mental illness, such as depressive episode, and simultaneously exhibit a high level of positive mental health (i.e., flourishing). A wealth of research demonstrates that flourishing states are associated with a range of positive outcomes, including good health, higher levels of life expectancy, and satisfaction with life.17 The measurement of the prevalence of positive mental health remains a challenging endeavour, partly due to varied conceptualisations and operational definitions of mental well-being. However, in recent efforts, scholars have increasingly utilised the MHCSF to assess positive mental health across various population groups.19 There is evidence that suggests that mental health is understood and experienced differently across various cultures and religions.18 For instance, researchers found significant difference in the expressions of well-being between collectivist East Asian and individualistic Western samples.20 Given the developmental challenges and disease-related complications associated with SCD, survivors require individual-targeted biopsychosocial interventions to be able to lead quality and meaningful lives.23 In most cases, SCD patients endure lifelong treatment regimen that can be expensive, complex, and multifaceted, with a high risk for poor mental health functioning, that is, their ability to reach their optimal human potential and lead a meaningful life.24 Assessment of positive mental health remains an important approach for evaluating the prognosis and outcomes of chronically ill patients.25,26 Prevalence of positive mental health Scholarly interest in the use of the MHC-SF to assess the prevalence of positive mental health and functioning has increased exponentially in the last decade. 
Researchers have successfully administered the MHC-SF to adults from South Africa27, Poland28, Canada22, Italy29, China30, United States21 and Australia31, and to adolescents from Egypt32, India33 and South Korea34 to explore the prevalence of positive mental health and its correlates. Prevalence rates vary across the globe depending on the context and the sample involved. For instance, researchers found that only 11.7 % of a sample of South Korean adolescents were flourishing, compared to the 23.5% of Egyptian adolescents, 46.4% of Indian adolescents, and 76.9% of Canadian adolescents and adults.43 In Ghana, there are limited studies that explore the concept of positive mental health into its categorical prevalence levels. Largely, the prevalence of mental health has been conceptualised and measured in terms of mental disorders. To our knowledge, only Appiah and colleagues36 measured the effect of a community-based positive psychology intervention in promoting the (positive) mental health of a rural adult sample using the MHC-SF. The researchers reported a flourishing rate of 25% at pre-test, 57.5% immediately after the 10-week intervention, and 77.5% three months post-intervention for programme participants. The majority of studies employed proxies in their measurement to assess subjective well-being by measuring individuals' standards of living rather than how they feel about their overall life.50 As far as could be established, presently, little is known about the prevalence of positive mental health and functioning in any population group in Ghana. The outcomes of prevalence rates of positive mental health and functioning of a population could serve as an important reference and resource for researchers and practitioners working to develop context-ainterventions to promote the overall mental well-being of the population involved.35 Scholarly interest in the use of the MHC-SF to assess the prevalence of positive mental health and functioning has increased exponentially in the last decade. Researchers have successfully administered the MHC-SF to adults from South Africa27, Poland28, Canada22, Italy29, China30, United States21 and Australia31, and to adolescents from Egypt32, India33 and South Korea34 to explore the prevalence of positive mental health and its correlates. Prevalence rates vary across the globe depending on the context and the sample involved. For instance, researchers found that only 11.7 % of a sample of South Korean adolescents were flourishing, compared to the 23.5% of Egyptian adolescents, 46.4% of Indian adolescents, and 76.9% of Canadian adolescents and adults.43 In Ghana, there are limited studies that explore the concept of positive mental health into its categorical prevalence levels. Largely, the prevalence of mental health has been conceptualised and measured in terms of mental disorders. To our knowledge, only Appiah and colleagues36 measured the effect of a community-based positive psychology intervention in promoting the (positive) mental health of a rural adult sample using the MHC-SF. The researchers reported a flourishing rate of 25% at pre-test, 57.5% immediately after the 10-week intervention, and 77.5% three months post-intervention for programme participants. 
Methods: Setting and participants. The study was conducted at the Sickle Cell Clinic of the Ghana Institute of Clinical Genetics (GICG) at the Korle-Bu Teaching Hospital. The GICG was established in 1974 by the Ministry of Health and the Managing Trustees of the Volta Aluminium Company Limited and serves as a referral health facility for SCD patients across the country. The clinic attends to patients aged 12 and older on every working day of the week, with an average daily attendance of 50. Participants (N = 62) were English-speaking adult SCD patients, aged 18 years and above, of both genders, who were receiving treatment at the GICG for SCD.
Design and Procedure: A quantitative survey design was implemented. We recruited a total of 62 adults who were receiving in- and out-patient treatment for SCD at the GICG in May and June 2018, by means of a simple random sampling (i.e., ballot) method. After an eligible patient had completed the required medical examination and treatment protocol, the Medical Officer who attended to the patient introduced the study and the research team to the patient. The research team thereafter provided further information on the study, addressed any concerns the patient had, and invited the patient to participate. Each patient who agreed was asked to pick from a box containing pieces of paper with ‘Yes’ and ‘No’ written on them. The papers were thoroughly mixed in a container at the beginning of each day of the data collection period and again after each pick, and the process was repeated until the desired sample size was attained. All individuals who picked ‘Yes’ were contacted and scheduled for interview. Written informed consent was obtained from patients who agreed to participate. Each participant was assured of confidentiality and was informed of their right to withdraw from the study at any point without any consequences. Participants were ushered into a pre-arranged office where they self-administered the questionnaire.
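To make the ballot procedure concrete, the following is a minimal sketch of a single draw (hypothetical, not the authors' code); the counts of ‘Yes’ and ‘No’ slips are illustrative assumptions, as the paper does not report how many of each were used.

```python
import random

def ballot_draw(n_yes: int, n_no: int) -> str:
    """Simulate one patient's draw from a freshly mixed box of slips."""
    slips = ["Yes"] * n_yes + ["No"] * n_no
    random.shuffle(slips)  # slips are thoroughly mixed before each pick
    return slips[0]        # the patient draws a single slip

# Example: a day's box with 5 'Yes' and 5 'No' slips (illustrative numbers only).
print(ballot_draw(5, 5))
```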
Measure: Mental Health Continuum-Short Form21. The MHC-SF consists of 14 items that measure positive mental health in three dimensions: emotional, social, and psychological well-being. Respondents rate how often during the past thirty days they experienced each of fourteen feelings by choosing one of six options: “never”, “once or twice”, “about once a week”, “2 or 3 times a week”, “almost every day”, or “every day”. A categorisation of flourishing (the presence of high levels of social, emotional, and psychological well-being) is made when an individual reports experiencing at least seven of the fourteen characteristics “every day” or “almost every day”, where one of them is from the hedonic (i.e., emotional well-being) cluster (i.e., happy, interested in life, or satisfied) and the others are from the social and personal/psychological well-being (eudaimonic) clusters. A diagnosis of languishing (the absence of positive mental health) is established when a participant reports that they “never” or only “once or twice” experienced at least seven of the characteristics, where one of them is from the hedonic (i.e., emotional well-being) cluster and the others are from the eudaimonic clusters. A participant who fits the criteria for neither flourishing nor languishing is said to be moderately mentally healthy. A large body of evidence supports the validity of these diagnoses.16,44,48
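As an illustration of these diagnostic rules, here is a minimal sketch of Keyes' categorisation (illustrative, not the authors' code). It assumes responses are coded 0–5 (“never” = 0 through “every day” = 5) and that items follow the standard order, with items 1–3 emotional (hedonic), items 4–8 social, and items 9–14 psychological (together the eudaimonic cluster), consistent with the item clusters described in the Results.

```python
def categorise_mhc_sf(responses: list[int]) -> str:
    """Return 'flourishing', 'languishing', or 'moderate' for 14 MHC-SF item scores."""
    if len(responses) != 14:
        raise ValueError("The MHC-SF has exactly 14 items")
    hedonic, eudaimonic = responses[:3], responses[3:]

    # Flourishing: "every day" or "almost every day" (score >= 4) on at least
    # one hedonic item and at least six eudaimonic items (seven of fourteen).
    if sum(r >= 4 for r in hedonic) >= 1 and sum(r >= 4 for r in eudaimonic) >= 6:
        return "flourishing"

    # Languishing: "never" or "once or twice" (score <= 1) on at least one
    # hedonic item and at least six eudaimonic items.
    if sum(r <= 1 for r in hedonic) >= 1 and sum(r <= 1 for r in eudaimonic) >= 6:
        return "languishing"

    # Anyone who fits neither criterion is moderately mentally healthy.
    return "moderate"

# Example: "every day" on one hedonic item and six eudaimonic items -> flourishing.
print(categorise_mhc_sf([5, 2, 2, 5, 5, 5, 2, 2, 5, 5, 5, 2, 2, 2]))
```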
The MHC-SF total score has demonstrated good internal consistency (> 0.80) and discriminant validity. The sub-scales have obtained reliability coefficients of 0.64 for the emotional well-being sub-scale, 0.57 for the psychological well-being sub-scale, and 0.71 for the social well-being sub-scale.37 Keyes and colleagues found a Cronbach's alpha of 0.74 for the overall MHC-SF scale, 0.67 for the psychological well-being (PWB) subscale, and 0.59 for the social well-being (SWB) subscale in a sample of Setswana-speaking adults in the North-West province of South Africa.28 The structural validity and psychometric properties of a Twi version of the MHC-SF were recently examined in a sample of rural Ghanaian adults.51 The researchers found a high omega coefficient of reliability for the total scale (ω = 0.97) and satisfactory reliabilities for the subscales.
Ethical consideration: The study was approved by the Ethics and Protocol Review Committee of the School of Biomedical and Allied Health Sciences (SBAHS) of the University of Ghana (SBAHS – OT./10517354/SA/2017-2018). Written informed consent was obtained from each study participant, and the processes instituted to ensure the confidentiality and anonymity of the data were thoroughly explained.

Data analysis: Descriptive statistics and reliability indices for the MHC-SF were computed with the statistical software SPSS 23.0. We applied Keyes' criteria for the categorisation of well-being to establish the prevalence of positive mental health and functioning.21,37
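For readers without SPSS, the reliability indices reported below can be reproduced with a few lines of code. The sketch below is an illustration under stated assumptions, not the authors' pipeline: it computes Cronbach's alpha and the mean inter-item correlation from a hypothetical participants-by-items array of scores.

```python
import numpy as np

def cronbach_alpha(data: np.ndarray) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance(total score))."""
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1).sum()
    total_variance = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mean_inter_item_correlation(data: np.ndarray) -> float:
    """Mean of the off-diagonal entries of the item-by-item correlation matrix."""
    corr = np.corrcoef(data, rowvar=False)
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diagonal.mean()

# Example with random placeholder responses (62 participants x 14 items, scored 0-5);
# real analyses would substitute the observed item scores.
rng = np.random.default_rng(0)
scores = rng.integers(0, 6, size=(62, 14))
print(round(cronbach_alpha(scores), 2), round(mean_inter_item_correlation(scores), 2))
```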
Results: Sociodemographic characteristics of participants. The study involved male and female SCD patients aged from 21 to 56 years. The majority of participants were female, employed, Christian, and had obtained tertiary-level education. Details of the sociodemographic characteristics of participants are shown in Table 1 (Sociodemographic characteristics of participants, N = 62).

Descriptive statistics and reliability indices for the MHC-SF: Table 2 shows the descriptive statistics and Cronbach's alpha coefficients of the MHC-SF for the sample. The mean scores and standard deviations for the scale are broadly in line with those reported in the literature.
The Cronbach's alpha reliability coefficient for the total MHC-SF was 0.77, which is within the acceptable range of 0.70 and above for reliable instruments.38 The mean of the inter-item correlations was 0.25. According to Clark and Watson's guideline, the inter-item correlations of a standardised measure should range between 0.15 and 0.50, with a range of 0.15 – 0.20 for broad constructs and 0.40 – 0.50 for narrower constructs.39 The item-total correlations for the MHC-SF ranged from 0.02 to 0.68 in this sample. Items 9 to 14 (cluster 3 = eudaimonic; psychological well-being) recorded the highest mean scores (3.90 – 4.35), whereas Item 8 (social coherence/interest) presented the lowest mean of 3.15. (Table 2: Descriptive statistics for sub- and total MHC-SF scale. Note: MHC-SF = Mental Health Continuum-Short Form; EWB = Emotional Well-Being; SWB = Social Well-Being; PWB = Psychological Well-Being.)

The prevalence levels of positive mental health: Table 3 shows the prevalence of positive mental health among participants (N = 62). The results show a high level of positive mental health (flourishing; 66%), a substantial proportion with moderate mental health (26%), and a relatively low level of languishing (8%). Overall, two-thirds of the SCD patients had a high level of positive mental health, whereas about a third were not functioning optimally. We found no significant difference between the genders in their levels of mental health [t = 0.111, p > .05] (see Table 4: Gender differences in mental health of participants).
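The gender comparison above is a standard independent-samples t-test; a hedged illustration with SciPy is shown below. The group sizes and scores are randomly generated placeholders, since the raw data are not published and the authors ran their analysis in SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder total MHC-SF scores for two gender groups (hypothetical data,
# not the study's raw scores; group sizes are also assumptions).
male_scores = rng.integers(30, 71, size=25)
female_scores = rng.integers(30, 71, size=37)

t_stat, p_value = stats.ttest_ind(male_scores, female_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > .05 indicates no significant difference
```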
Discussion: This study set out to explore the prevalence of positive mental health and functioning in a sample of adults with SCD in Ghana. The results show, surprisingly, a high level of positive mental health in the sample, as indicated by the large percentage of participants (66%) in the flourishing category, in spite of the high levels of depression40 and poor coping methods41 previously reported among SCD patients in other settings. A substantial percentage of the sample (26%) was moderately mentally healthy, and a small number (8%) were languishing. While the sample for this study differs in many ways from those of other studies, the results are comparable with previous findings.
For instance, the percentage of individuals flourishing in this study is broadly comparable with the 53.1% found among college students in the United States42, but lower than the 76.9% reported for a large sample of adolescents and adults in Canada.22 In many instances, however, the level of flourishing (i.e., positive mental health) found in the current sample is far higher than the 8% reported for a sample of South Korean adults43, the 20% found among adult South Africans34, the 23% found among Egyptian adolescents32, the 26% found among a sample of Polish adults29, the 42% in a group of South African secondary school children44, and the 44% among Chinese adults.36 Of note, the participants in the present study are mostly adults who also present with a medical condition, unlike the participants in the aforementioned studies. A possible explanation for the high prevalence of positive mental health in our sample is the collectivistic cultural and social orientation of Ghanaian society.45 In the Ghanaian communal context, an individual is deeply connected to both the nuclear and the extended family system and relies on relatives and friends for social and financial support. Family and community members may be obligated to assist individuals in need, particularly those confronted with economic or health crises.45 The availability of support (or the consciousness of its existence) can, in many ways, serve as a source of real or perceived hope for people during difficult times, which, in turn, can help them build resilience and propel them to function at optimal levels. Another explanation for the high level of positive mental health found in our sample might be the high level of religious involvement and spirituality46 in Ghana. Previous research has shown that spirituality and religious practices are related to many dimensions of mental health and contribute to advancing the overall mental well-being of individuals and groups, and high levels of spirituality and religious engagement have been associated with a positive mental state.47 We propose a further explanation for the high level of positive mental health and functioning found in this clinical sample, namely, that the general optimism and positive outlook typical of the Ghanaian people have the potential to protect against poor mental health. Research evidence suggests that most Ghanaians are resilient52, optimistic53, and happy54, even in the midst of life's difficulties. These characteristics, fostered by Ghanaian collectivistic cultural values55 and high religiosity46, might have contributed to the high proportion of flourishers in this sample, in spite of their medical presentation. The findings also showed that, overall, 34% of participants were not functioning at optimal levels. Moderate mental health and languishing are indications of suboptimal mental health and, indeed, risk factors for psychopathology.48 Although the majority of participants reported a high level of positive mental health, the third of the sample in the languishing and moderately mentally healthy categories could benefit from contextually appropriate positive psychology interventions (PPIs) to promote their mental health and functioning; PPIs have been found to protect against psychopathology.36 Our results did not demonstrate a significant difference between the genders.
This finding is consistent with previous research reports that did not establish any significant difference in positive mental health between males and females.27–29,31 More recently, however, Guo and colleagues found slightly better positive mental health among females than males.35

Study limitations: This study should be considered in the light of a few limitations. First, the study was conducted in Accra, the nation's capital and an economically developed part of Ghana. It is plausible that family income is higher among our participants than in less developed areas, considering the regional economic disparities in Ghana.46 Previous research demonstrates that economic status affects levels of mental health functioning.49 Second, given that the information was self-reported, there is (as is often the case with self-reported responses) a risk of socially desirable responding. To minimise this risk, we implemented a data collection process that ensured that the self-administered questionnaire was answered anonymously. Lastly, this study recruited a small sample of 62 SCD patients from one healthcare facility. It is recommended that a large-scale longitudinal study, using questionnaires validated in the languages and cultural contexts of the target population56, be carried out with samples representative of the 16 regions of Ghana to establish the level of positive mental health at the national level. National estimates of the prevalence of mental health (or illness) have both policy and clinical implications.
Conclusion: The findings from this study provide insight into the mental health functioning of patients with SCD in Ghana, as measured with the MHC-SF. We found a considerably high prevalence of positive mental health in our sample, with a third of the sample functioning at suboptimal levels of mental health. It is important that researchers, practitioners, and policymakers explore and take advantage of the contextual and psychosocial factors that promote emotional, psychological, and social well-being in order to design context-appropriate mental health interventions for individuals in treatment for SCD and, more generally, for the Ghanaian people.
Abstract. Funding: Self-funded. Methods: A quantitative cross-sectional survey design was implemented for data gathering. A random sample of 62 adult SCD patients (21 to 56 years; mean age of 29 years) receiving treatment at the Sickle Cell Clinic of the Ghana Institute of Clinical Genetics at the Korle-Bu Teaching Hospital completed the Mental Health Continuum-Short Form (MHC-SF). Descriptive statistics and reliability indices were estimated for the MHC-SF. We implemented Keyes' criteria for the assessment and categorisation of levels of mental health to determine the prevalence of positive mental health and functioning. Results: We found a high level of positive mental health (66% flourishing; 26% moderately mentally healthy; 8% languishing) and functioning, with no significant difference between the genders. A total of 34% of the participants were functioning at suboptimal levels and were at risk of psychopathology. Conclusions: This study gives the first overview of the prevalence of positive mental health and functioning in a clinical population in Ghana. Although the majority of participants were flourishing, contextually appropriate positive psychological interventions are needed to promote the mental health of SCD patients who are functioning at suboptimal levels, which would, inherently, also buffer against psychopathology.
[CONTENT] mental | mental health | health | positive | positive mental | positive mental health | sf | mhc sf | mhc | level [SUMMARY]
[CONTENT] mental | mental health | health | positive | positive mental | positive mental health | sf | mhc sf | mhc | level [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| 62 | SCD | 21 to 56 years | age of 29 years | the Sickle Cell Clinic | Institute of Clinical Genetics | the Korle-Bu Teaching Hospital | the Mental Health Continuum-Short Form ||| the MHC-SF ||| Keyes [SUMMARY]
[CONTENT] 66% | 26% | 8% ||| 34% [SUMMARY]
[CONTENT] first | Ghana ||| SCD [SUMMARY]
[CONTENT] ||| ||| 62 | SCD | 21 to 56 years | age of 29 years | the Sickle Cell Clinic | Institute of Clinical Genetics | the Korle-Bu Teaching Hospital | the Mental Health Continuum-Short Form ||| the MHC-SF ||| Keyes ||| ||| 66% | 26% | 8% ||| 34% ||| first | Ghana ||| SCD [SUMMARY]
[CONTENT] ||| ||| 62 | SCD | 21 to 56 years | age of 29 years | the Sickle Cell Clinic | Institute of Clinical Genetics | the Korle-Bu Teaching Hospital | the Mental Health Continuum-Short Form ||| the MHC-SF ||| Keyes ||| ||| 66% | 26% | 8% ||| 34% ||| first | Ghana ||| SCD [SUMMARY]
Occupational health and health care in Russia and Russian Arctic: 1980-2010.
23519691
There is a paradox in Russia and its Arctic regions: reported rates of occupational diseases (ODs) are extremely low, far below those of other socially and economically advanced circumpolar countries, yet there is widespread disregard for occupational health regulations and neglect of basic occupational health services across many industrial enterprises.
BACKGROUND
This review article presents official statistics and summarises the results of a search of peer-reviewed scientific literature published in Russia on ODs and occupational health care in Russia and the Russian Arctic, within the period 1980-2010.
STUDY DESIGN AND METHODS
The "wild market" industrial restructuring of 1990-2000 brought a worsening economic situation, worker layoffs, the threat of unemployment and increased workloads, while the health and safety of workers were of little concern. Russian employers are not legally held accountable for neglecting safety rules and for underreporting of ODs. Almost 80% of all Russian industrial enterprises are considered dangerous or hazardous to health. Hygienic control of working conditions was minimised or eliminated in the majority of enterprises, and the health status of workers remains largely unknown. There is direct evidence of general degradation of the occupational health care system in Russia. The real levels of ODs in Russia are estimated to be at least 10-100 times higher than reported by official statistics. The low official rates are the result of deliberate concealment of ODs, lack of coverage of working personnel by properly conducted medical examinations, incompetent management and the poor quality of staff, facilities and equipment.
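For intuition on the 10-100 times estimate: the review later cites a Kola Peninsula assessment in which only about 2% of ODs were detected, with a true prevalence near 510 cases per 10,000 workers. A hedged back-of-envelope check, using illustrative numbers drawn from this review rather than new data:

# If only ~2% of ODs are detected (Kola estimate, ref. 21 in the review),
# a reported rate of ~10 cases/10,000 workers implies a true rate near
# 500/10,000 - consistent with the cited ~510/10,000, a ~50-fold under-count.
reported_per_10k = 10.0      # upper end of reported regional rates
detection_fraction = 0.02    # ~2% of cases detected
print(reported_per_10k / detection_fraction)  # 500.0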
RESULTS
Reform of the Russian occupational health care system is urgently needed, including the passing of strong occupational health legislation and their enforcement, the maintenance of credible health monitoring and effective health services for workers, improved training of occupational health personnel, protection of sanitary-hygienic laboratories in industrial enterprises, and support for research assessing occupational risk and the effectiveness of interventions.
CONCLUSIONS
[ "Arctic Regions", "Industry", "Occupational Diseases", "Occupational Health", "Occupational Health Services", "Population Surveillance", "Russia", "Safety Management" ]
3604356
null
null
null
null
null
null
Conclusions
This review presents official statistical data on occupational disease and summarises Russian published peer-reviewed scientific literature on occupational health care in Russia and the Russian Arctic in 1980–2010. If Russian official statistics are to be trusted, the country and its northern regions have extremely low rates of ODs, substantially lower (some 10–100 times) than other circumpolar countries with advanced social welfare systems and high occupational health and safety standards. This positive picture is at odds with the poor compliance with occupational health regulations, and the breakdown in occupational health monitoring and workers’ health programmes, which were exacerbated by the “wild market” economy since the 1990s and accompanying major industrial restructuring in the North and elsewhere. Reform is urgently needed across Russia. Legislation is needed to hold employers accountable for adherence to safety regulations and truthful reporting of ODs and injuries. Where monitoring and surveillance systems have been put in place, there has been a corresponding increase in reported ODs. Well-equipped and staffed hygienic laboratories and regular health screening of workers by qualified personnel in both state and private enterprises require adequate financial and administrative support. There is some evidence that the very poor state of occupational health and health care is receiving high-level governmental attention. What is needed is implementation of sound principles and policies.
[ "Recent governmental statements on occupational health", "Official statistics on occupational health: 1980–2010", "International comparison of occupational diseases rates", "History of occupational disease surveillance in USSR and Russia", "Review of published literature in Russian" ]
[ "The deplorable state of occupational health in the country was recognised by the Russian government and was highlighted in the National Report for 2005 (2).\nAnnually, in the Russian Federation, 190,000 workers employed in dangerous and hazardous industries die; 15,000 workers die from occupational injuries; and 180,000 workers retire early, many for health reasons. More than 30% of deaths nationally occur among people at working-age, 4.5 times higher than in other European countries. These figures were presented by Dmitry Medvedev, the President of the Russian Federation, in 2008, during a meeting with representatives of a Russian business organisation devoted to the problems of population and health care. The President told the assembled participants: “Problems threatening the safety of people still exist in Russia; they also exist in workplaces; high mortality is not only a medical but also social problem. It is necessary to improve the health care system, particularly the preventive services.”\nThe Concept of the Federal Program of Actions on Improvement of Labour Conditions and Safety for 2008–2010 (3) was published in 2008, which became a part of the Concept of Demographic Policy of the Russian Federation for the period until 2025 (4). Its assessment states, in summary: “In the Russian Federation annually about 180 thousand persons die of causes related to harmful and dangerous industrial practices, about 200 thousand persons are injured occupationally, more than 10 thousand cases of occupational diseases (ODs) are registered and more than 14 thousand persons become invalid because of occupational injuries and diseases” (3).\nNikolaj Izmerov, Director of the Research Institute of Labour Medicine, and Academician of the Russian Academy of Medical Sciences, provided more gloomy data specific to women workers: “In 2007 generally in Russia 66.8 million people were employed, 33.9 million of them women, of whom 20 million were of child-bearing age. About 4 million women (12%) worked in conditions which violated sanitary-hygienic regulations and promoted ODs. According to the data of periodical medical examinations, every second working woman was chronically ill. Among women who worked in conditions where they were exposed to heat or chemicals, every sixth woman suffered from infertility, and every seventh experienced spontaneous abortions” (5).", "Official statistical data on ODs were available for the USSR/Russia for the period 1980–2010 from the Russian Statistical Yearbooks (6–8) and for 7 northern and far eastern regions from regional statistical yearbooks of Arkhangelsk Oblast, Murmansk Oblast, the Republic of Karelia, the Republic of Komi, Chukotka Autonomous Okrug, Kamchatka Oblast and Magadan Oblast (9–15) (Fig. 1). Regional data are available mainly for the period 2000–2008. A more complete regional picture is not possible as not all regions produced their own statistical yearbooks, or they are difficult to locate. The 7 regions (4 far-western and 3 far-eastern) nevertheless demonstrate the scope of the problem during the first decade of the century.\nMap of northern regions of the Russian Federation. (Reproduced by permission from Young TK, ed. Circumpolar Health Atlas. Toronto: University of Toronto Press, 2012)\nThe rate of new cases of ODs in 6 northern regions – Arkhangelsk Oblast, Murmansk Oblast, the Republic of Komi, Chukotka Autonomous Okrug, Kamchatka Oblast and Magadan Oblast compared to the whole Russia are presented in Fig. 
2.\nRate of new cases of occupational diseases, including poisonings (per 10,000 workers), in Russia and selected Arctic regions.\nFigure 2 shows that in Russia the rate of ODs has been stable over the 30-year period, averaging 2 cases per 10,000 workers per year, which is quite low. Levels in Murmansk and Komi are 2–5 times higher than Russian rates (up to 10 cases/10,000), with Murmansk showing a steep rise and Komi a fall in rates. The situations in Arkhangelsk and Magadan are similar to that of Russia, but by mid-decade, the Arkhangelsk rate curves sharply upwards. The very low rates reported from the 2 far-eastern regions (Chukotka and Kamchatka), with almost no cases during 2003–2006, are unexpected and likely represent gross under-reporting, considering the heavy industrial activity in Chukotka, which annually produces 20–25 tons of pure gold alone, as well as other metals and coal.", "According to the Organisation for Economic Co-operation and Development (OECD), the rate of ODs in Russia is considerably lower (some 10–100 times lower) than in other circumpolar countries (16).\nWorld Health Organisation data for 2005 (Table I) show that Russia, countries that were formerly part of the USSR and former socialist countries in Eastern Europe all reported extremely low rates of ODs compared to Scandinavian countries (17).\nRate of new cases of occupational diseases (OD) in selected European countries, 2005\nSource: World Health Organisation (cited in 17).\nLatvia is unique among former USSR countries in having a high level of ODs. Latvia instituted reforms after independence (17), with legislation regulating relations between employers and employees in the field of social insurance, obligatory medical examination of workers (paid for by the employers), improvement in the training and certification of occupational health specialists, the creation of a state ODs register, and so on. The number of certified professionals in Latvia increased 8.4 times between 1996 and 2007. The number of ODs in Latvia increased 9 times during 1996–2004, from 20 to 185 cases per 100,000 workers. This increase is not the result of a worsening of workers’ health but of improved surveillance with more comprehensive coverage and accurate enumeration.", "The history of registration of ODs in the USSR/Russia is well described in the introduction of the doctoral dissertation by L.G. Zhavoronok from the Research Institute of Medicine of Labour, Russian Academy of Medical Sciences, Moscow (18).\nThe obligatory system of registration and reporting of ODs in Russia began in 1924 with the joint Decision of the National Commissariats of Work and of Public Health Services About the obligatory notice on occupational poisonings and diseases. Before that time, there were separate registrations for certain groups of occupational illnesses, mainly acute poisonings, occupational infections, caisson illness and various accidents.\nIn 1939, the National Commissariat of Public Health Services adopted a new Principle of the notice and registration of occupational poisonings and occupational diseases, which introduced a person-based system of patients with ODs. Changes were made to the reporting form over the years, but essentially the system established in 1939 existed until 1986, for almost 50 years. 
It did not contain data on the cause of occupational disease, adverse factors influencing the patient, character of performed work, actions directed at eliminating these factors or health status of the patient at the moment of disease investigation. Even basic data, such as sex, age, occupational route, work experience, and so on, were not captured. The system did not provide sufficient information to analyse causes and effects.\nA new system of obligatory registration of ODs was introduced in the USSR in 1986 and in Russia in 1991. In Russia, the state sanitary-epidemiological service (Gossanepidnadzor, now Rospotrebnadzor) of the Ministry of Health Care of the Russian Federation provides the centralised gathering of primary materials for registration of ODs. An important innovation is the unified form of “Sanitary-hygienic characteristic of workplace conditions in case of assumption of occupational disease (poisoning)” and instructions on its completion. The sanitary-hygienic characteristic of workplace is made by Rospotrebnadzor centre, taking into account the preliminary diagnosis of occupational disease, characteristics of all harmful factors of the occupational environment, labour process and modes of work, which could lead to occupational disease (poisoning). This is the major document confirming, or denying, the occupational character of disease (18).\nThe bureaucratic system and procedural complexity in establishing preliminary and definitive diagnosis of an occupational disease often lead to frequent occurrence of conflict situations, delay of a thorough medical examination of a patient in a specialised centre of occupational health, possibility of concealment of diseases and judicial proceedings.\nThe Federal centre of Rospotrebnadzor carries out the analysis of ODs in Russia, compiles annual reports and publishes newsletters. However, the published information does not provide sufficient details for different regions in Russia, separate sectors of the economy and different occupations. Different types of ODs and poisonings are presented only as a proportion of all ODs. Methodological and statistical errors occur frequently in the bulletins. There is no interaction between Rospotrebnadzor centres and the treatment and prevention facilities in which the diagnosis of an occupational disease is first established and patient supervision carried out. Reform of the current situation is badly needed (18).", "Russian hygienic scientific journals have not yet embraced the electronic age. For example, Gig Sanit (Hygiene and sanitation) published since 1922 has a web page only with a list of the names of articles from 1998 (abstracts since 2010), while Med Tr Prom Ekol (Medicine of Labour and Industrial Ecology) published since 1957 has a much shorter list – since 2008; both journals are indexed in Index Medicus and in many other web-based bibliographic systems, but full text articles are unavailable. Russian special medical web searching systems are only at the early stages of development; comprehensive thematic catalogues do not exist. PubMed does contain names of articles from almost all major Russian biomedical and public health journals since the end of 1960s, and thus could be an important resource for searching Russian language articles.\nThe libraries inside Russian research institutes have faced serious financial problems since 1990. Today, across the vast country, only the Russian National Library in St. 
Petersburg, Russian State Library in Moscow and several other libraries in the biggest cities remain the repositories of comprehensive Russian scientific literature. Stocks of these libraries are electronically catalogued only partially, and most full text articles are available only in hard copies.\nA systematic search of the Russian peer-reviewed literature on occupational health and disease was conducted for the period 1980–2010 using PubMed and electronic catalogues of the Russian National Library in St. Petersburg. The specific terms “Arctic” and “North” were not initially included to search as broadly as possible. Publications specific to the Russian Arctic, Siberia and Far East were then further identified.\nMost of the articles on occupational health are descriptive in nature, assessing single exposures (such as air pollutants, noise, vibration, temperature, etc.); others deal with physiological, cardiac, immune and neuro-hormonal status in occupational conditions but without analysis of exposure–effect association; others present data on time lost at work due to general sickness (such as colds and diarrhoea, etc.) at industrial enterprises. The review below is focused on critical evaluation of the occupational health care system in Russia.\nInitial timid attempts at criticism of the quality of medical examination of workers began to appear in Russian scientific journals in 1986, soon after Gorbachev's coming to power and the beginning of “perestroika”.\nIn 1982–1984, during the construction of gas pipelines from Urengoy (in the Yamal peninsula), there was on average one doctor for 328 workers engaged in dangerous and hazardous job conditions, and 83 patients from the general population. A mobile medical team moved with the workers as the construction progressed. A questionnaire survey in Urengoy showed that 11% of workers complained about difficulties in getting to see the doctor. Some 74% of workers considered themselves to be overexposed to noise and vibration and 78% to petrochemicals. Housing conditions were generally poor, with 81% living in wagons or “dog houses” (4–5 m2/person). There were serious lifestyle issues, with 79% smokers, and 77% consumed on average 250–500 ml of vodka in a single dose (19).\nBy the end of the 1990s, more critical articles on the system of diagnosis and registration of ODs appeared in Russian scientific journals. In 1997, the Research Institute of Hygiene and Occupational Pathology in Nizhniy Novgorod noted the substantial increase of ODs in Nizhniy Novgorod region since 1990, which the author attributed to more frequent visits of workers to doctors due to the worsening of the economic situation and the wild market economy, with 80% of enterprises in the region privatised (20). There was an increase in chronic OD rate, while the number of disabled workers and general degradation of the OD detection system were discussed in connection with dramatic changes towards the wild market economy. The author identified an urgent need of holding employers legally accountable for neglecting job safety rules and for under-reporting ODs (20).\nIn 1998 the nickel-cobalt industry in the Russian Arctic was the subject of an assessment by the Kola Research Laboratory of Occupational Health in Kirovsk city, Murmansk Oblast (21). Despite considerable financial inputs, substantial improvement of working conditions had not occurred. 
Occupational disease level among nickel-cobalt industry workers was very high, but it was estimated that only 2% of ODs were detected, based on detailed medical examination of the workers, with the true prevalence likely around 510 cases per 10,000 workers (21).\nBy the end of the 2000s, numerous articles disclosing and unmasking the Russian system of occupational health care and OD registration have appeared on the pages of the most respectable journals. Among the vocal critics was Gennady Onishchenko, the Chief of Sanitary-Epidemiological Surveillance (Chief State Sanitary Officer) of the Ministry of Health Care and Social Development, academician of the Russian Academy of Medical Sciences (22). Some of the main points of his article in the journal Hygiene and Sanitation are highlighted below.\nIn 2006, 23% of workers employed in main Russian industries worked in job conditions that violated sanitary-hygienic rules. Almost 80% of all Russian industrial enterprises were categorised as dangerous or hazardous for health. The worst labour conditions are found in coal mining, ship building, ferrous and non-ferrous metallurgy, agriculture, tractor and agricultural machinery construction, building materials production, lumbering and construction activities. The highest number of people who work in dangerous and hazardous conditions are employed at non-governmental enterprises. In many regions of Russia, the budget for improving labour conditions has been cut dramatically. In many enterprises, their hygienic laboratories were eliminated or their financing was reduced significantly (22).\nIndustrial recession and economic instability have affected many enterprises, resulting in dilapidation and disrepair of buildings, machines and equipment. Much of the Russian industry still relies on manual labour (accounting for 70% of production), with a low level of mechanisation and automation. Many companies blatantly ignore or violate occupational safety regulations relating to noise, vibration, temperature, air quality, illumination, personal protective gear, duration and schedule of work, limits for lifting and carrying of weights, and so on.\nIndustrial reorganisation involving partition and mergers of industrial resources and properties has resulted in the emergence of new legal entities with very limited or no responsibility for the health and safety of workers. Some of the new enterprises are located on temporary rented sites where employers do not want to invest in infrastructure improvement.\nIn Russia, official statistics show a continuing decline in the rate of ODs in the face of worsening labour conditions, a paradox that can be explained by inadequate monitoring and reporting. 
Among the factors responsible for this state of affairs are:\nno coverage and low quality of preventive health examination of workers;\nemployers’ disinterest in detecting ODs to maintain low insurance premiums and avoid costs associated with improving labour conditions and health care measures;\nlacklustre performance of medical examiners, particularly when the employer has a long-term arrangement with patient care institutions;\nworkers’ tendency to hide early symptoms of disease, which will affect their ability to continue employment; on the other hand, later detection after the onset of disability will bring in significant compensation for the family; and\ninadequate labour protection legislation.\nAdditional criticism of the problem was expressed by other specialists from the Research Institute of Labour Medicine in Moscow. The number of disabled workers in Russia increased dramatically from 7.9 million in 1997 to 13 million in 2007, when disabled workers constituted 11% of the adult population of Russia (23). The fears of unemployment among workers (a new phenomenon in Russia), the low regard for human life (an old phenomenon in Russia), and the quest for profits by employers jointly produced widespread worsening of job conditions and physical and psychological exhaustion among workers.\nOfficial statistics of Magadan Oblast show incredibly low values of OD (1.3–3.2 cases per 10,000 workers) in 2004–2008 (14). The Angarsk Research Institute of Labour Medicine and Human Ecology evaluated the registration of OD among miners in Magadan Oblast between 1981 and 2000 and found that the real situation is very different from official statistics. For the mining industry as a whole in Magadan Oblast, the OD rate among all workers fluctuated between 35 and 61 cases per 10,000 workers, 52 and 93 cases per 10,000 workers among above-ground workers and 266 and 398 cases per 10,000 workers among miners working underground. For specific conditions such as vibration disease, among underground miners who had been exposed to vibration for 5 or more years, 64% had signs of vibration disease. Vibration disease tends to develop about twice as fast in the Arctic as among workers in middle latitudes, a contributing factor being the below-freezing temperatures often found underground (24).\nAn assessment of Krasnoyarsk region by the regional centre of occupational pathology and Krasnoyarsk Medical University (25) found that in the 1990s, much of the previous gains in occupational health care had been lost – the number of medical units at enterprises was considerably reduced, preventive work decreased, and the quality of medical examinations and diagnosis of ODs declined. In Krasnoyarsk region, every second worker works in conditions below hygienic standards. 
With the intensification of medical examination of workers, the proportion of workers screened by surveys increased from 80% in 1998 to 91% in 2008 (25).\nBetween 1978 and 1988, official OD levels in Archangelsk city were comparable to those in Archangelsk Oblast and Russia as a whole (0.8–1.8 per 10,000 workers), but during the next decade (1989–2000), the rate for Archangelsk city increased dramatically to 14.1/10,000, which was more than 7 times higher than in Russia. This was the result of the launching of the department of occupational pathology in Archangelsk city and expert assessment of the quality of medical examination of workers. However, in 2000–2002, the OD level in Archangelsk city again fell to the level of the 1980s due to the cessation of expert work (26)." ]
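The systematic search described in this section (PubMed plus the electronic catalogues of the Russian National Library) can be partly reproduced on the PubMed side. Below is a minimal sketch using NCBI's public E-utilities esearch endpoint; the query string is an illustrative assumption, not the authors' recorded search strategy.

# Hedged sketch: PubMed esearch for Russian-language occupational-health
# articles published 1980-2010. The query term is illustrative only.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "occupational health AND Russia AND Russian[Language]",
    "datetype": "pdat", "mindate": "1980", "maxdate": "2010",
    "retmax": "100", "retmode": "json",
}
with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    result = json.load(resp)["esearchresult"]
print(result["count"], "matching records; first PMIDs:", result["idlist"][:5])

Dropping the geographic terms, as the authors did initially, broadens recall; publications specific to the Arctic, Siberia and the Far East can then be filtered in a second pass.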
[ null, null, null, null, null ]
[ "Recent governmental statements on occupational health", "Official statistics on occupational health: 1980–2010", "International comparison of occupational diseases rates", "History of occupational disease surveillance in USSR and Russia", "Review of published literature in Russian", "Conclusions" ]
[ "The deplorable state of occupational health in the country was recognised by the Russian government and was highlighted in the National Report for 2005 (2).\nAnnually, in the Russian Federation, 190,000 workers employed in dangerous and hazardous industries die; 15,000 workers die from occupational injuries; and 180,000 workers retire early, many for health reasons. More than 30% of deaths nationally occur among people at working-age, 4.5 times higher than in other European countries. These figures were presented by Dmitry Medvedev, the President of the Russian Federation, in 2008, during a meeting with representatives of a Russian business organisation devoted to the problems of population and health care. The President told the assembled participants: “Problems threatening the safety of people still exist in Russia; they also exist in workplaces; high mortality is not only a medical but also social problem. It is necessary to improve the health care system, particularly the preventive services.”\nThe Concept of the Federal Program of Actions on Improvement of Labour Conditions and Safety for 2008–2010 (3) was published in 2008, which became a part of the Concept of Demographic Policy of the Russian Federation for the period until 2025 (4). Its assessment states, in summary: “In the Russian Federation annually about 180 thousand persons die of causes related to harmful and dangerous industrial practices, about 200 thousand persons are injured occupationally, more than 10 thousand cases of occupational diseases (ODs) are registered and more than 14 thousand persons become invalid because of occupational injuries and diseases” (3).\nNikolaj Izmerov, Director of the Research Institute of Labour Medicine, and Academician of the Russian Academy of Medical Sciences, provided more gloomy data specific to women workers: “In 2007 generally in Russia 66.8 million people were employed, 33.9 million of them women, of whom 20 million were of child-bearing age. About 4 million women (12%) worked in conditions which violated sanitary-hygienic regulations and promoted ODs. According to the data of periodical medical examinations, every second working woman was chronically ill. Among women who worked in conditions where they were exposed to heat or chemicals, every sixth woman suffered from infertility, and every seventh experienced spontaneous abortions” (5).", "Official statistical data on ODs were available for the USSR/Russia for the period 1980–2010 from the Russian Statistical Yearbooks (6–8) and for 7 northern and far eastern regions from regional statistical yearbooks of Arkhangelsk Oblast, Murmansk Oblast, the Republic of Karelia, the Republic of Komi, Chukotka Autonomous Okrug, Kamchatka Oblast and Magadan Oblast (9–15) (Fig. 1). Regional data are available mainly for the period 2000–2008. A more complete regional picture is not possible as not all regions produced their own statistical yearbooks, or they are difficult to locate. The 7 regions (4 far-western and 3 far-eastern) nevertheless demonstrate the scope of the problem during the first decade of the century.\nMap of northern regions of the Russian Federation. (Reproduced by permission from Young TK, ed. Circumpolar Health Atlas. Toronto: University of Toronto Press, 2012)\nThe rate of new cases of ODs in 6 northern regions – Arkhangelsk Oblast, Murmansk Oblast, the Republic of Komi, Chukotka Autonomous Okrug, Kamchatka Oblast and Magadan Oblast compared to the whole Russia are presented in Fig. 
2.\nRate of new cases of occupational diseases, including poisonings (per 10,000 workers), in Russia and selected Arctic regions.\nFigure 2 shows that in Russia the rate of ODs has been stable over the 30-year period, averaging 2 cases per 10,000 workers per year, which is quite low. Levels in Murmansk and Komi are 2–5 times higher than Russian rates (up to 10 cases/10,000), with Murmansk showing a steep rise and Komi a fall in rates. The situations in Arkhangelsk and Magadan are similar to that of Russia, but by mid-decade, the Arkhangelsk rate curves sharply upwards. The very low rates reported from the 2 far-eastern regions (Chukotka and Kamchatka), with almost no cases during 2003–2006, are unexpected and likely represent gross under-reporting, considering the heavy industrial activity in Chukotka, which annually produces 20–25 tons of pure gold alone, as well as other metals and coal.", "According to the Organisation for Economic Co-operation and Development (OECD), the rate of ODs in Russia is considerably lower (some 10–100 times lower) than in other circumpolar countries (16).\nWorld Health Organisation data for 2005 (Table I) show that Russia, countries that were formerly part of the USSR and former socialist countries in Eastern Europe all reported extremely low rates of ODs compared to Scandinavian countries (17).\nRate of new cases of occupational diseases (OD) in selected European countries, 2005\nSource: World Health Organisation (cited in 17).\nLatvia is unique among former USSR countries in having a high level of ODs. Latvia instituted reforms after independence (17), with legislation regulating relations between employers and employees in the field of social insurance, obligatory medical examination of workers (paid for by the employers), improvement in the training and certification of occupational health specialists, the creation of a state ODs register, and so on. The number of certified professionals in Latvia increased 8.4 times between 1996 and 2007. The number of ODs in Latvia increased 9 times during 1996–2004, from 20 to 185 cases per 100,000 workers. This increase is not the result of a worsening of workers’ health but of improved surveillance with more comprehensive coverage and accurate enumeration.", "The history of registration of ODs in the USSR/Russia is well described in the introduction of the doctoral dissertation by L.G. Zhavoronok from the Research Institute of Medicine of Labour, Russian Academy of Medical Sciences, Moscow (18).\nThe obligatory system of registration and reporting of ODs in Russia began in 1924 with the joint Decision of the National Commissariats of Work and of Public Health Services About the obligatory notice on occupational poisonings and diseases. Before that time, there were separate registrations for certain groups of occupational illnesses, mainly acute poisonings, occupational infections, caisson illness and various accidents.\nIn 1939, the National Commissariat of Public Health Services adopted a new Principle of the notice and registration of occupational poisonings and occupational diseases, which introduced a person-based system of patients with ODs. Changes were made to the reporting form over the years, but essentially the system established in 1939 existed until 1986, for almost 50 years. 
It did not contain data on the cause of occupational disease, adverse factors influencing the patient, character of performed work, actions directed at eliminating these factors or health status of the patient at the moment of disease investigation. Even basic data, such as sex, age, occupational route, work experience, and so on, were not captured. The system did not provide sufficient information to analyse causes and effects.\nA new system of obligatory registration of ODs was introduced in the USSR in 1986 and in Russia in 1991. In Russia, the state sanitary-epidemiological service (Gossanepidnadzor, now Rospotrebnadzor) of the Ministry of Health Care of the Russian Federation provides the centralised gathering of primary materials for registration of ODs. An important innovation is the unified form of “Sanitary-hygienic characteristic of workplace conditions in case of assumption of occupational disease (poisoning)” and instructions on its completion. The sanitary-hygienic characteristic of workplace is made by Rospotrebnadzor centre, taking into account the preliminary diagnosis of occupational disease, characteristics of all harmful factors of the occupational environment, labour process and modes of work, which could lead to occupational disease (poisoning). This is the major document confirming, or denying, the occupational character of disease (18).\nThe bureaucratic system and procedural complexity in establishing preliminary and definitive diagnosis of an occupational disease often lead to frequent occurrence of conflict situations, delay of a thorough medical examination of a patient in a specialised centre of occupational health, possibility of concealment of diseases and judicial proceedings.\nThe Federal centre of Rospotrebnadzor carries out the analysis of ODs in Russia, compiles annual reports and publishes newsletters. However, the published information does not provide sufficient details for different regions in Russia, separate sectors of the economy and different occupations. Different types of ODs and poisonings are presented only as a proportion of all ODs. Methodological and statistical errors occur frequently in the bulletins. There is no interaction between Rospotrebnadzor centres and the treatment and prevention facilities in which the diagnosis of an occupational disease is first established and patient supervision carried out. Reform of the current situation is badly needed (18).", "Russian hygienic scientific journals have not yet embraced the electronic age. For example, Gig Sanit (Hygiene and sanitation) published since 1922 has a web page only with a list of the names of articles from 1998 (abstracts since 2010), while Med Tr Prom Ekol (Medicine of Labour and Industrial Ecology) published since 1957 has a much shorter list – since 2008; both journals are indexed in Index Medicus and in many other web-based bibliographic systems, but full text articles are unavailable. Russian special medical web searching systems are only at the early stages of development; comprehensive thematic catalogues do not exist. PubMed does contain names of articles from almost all major Russian biomedical and public health journals since the end of 1960s, and thus could be an important resource for searching Russian language articles.\nThe libraries inside Russian research institutes have faced serious financial problems since 1990. Today, across the vast country, only the Russian National Library in St. 
Petersburg, Russian State Library in Moscow and several other libraries in the biggest cities remain the repositories of comprehensive Russian scientific literature. Stocks of these libraries are electronically catalogued only partially, and most full text articles are available only in hard copies.\nA systematic search of the Russian peer-reviewed literature on occupational health and disease was conducted for the period 1980–2010 using PubMed and electronic catalogues of the Russian National Library in St. Petersburg. The specific terms “Arctic” and “North” were not initially included to search as broadly as possible. Publications specific to the Russian Arctic, Siberia and Far East were then further identified.\nMost of the articles on occupational health are descriptive in nature, assessing single exposures (such as air pollutants, noise, vibration, temperature, etc.); others deal with physiological, cardiac, immune and neuro-hormonal status in occupational conditions but without analysis of exposure–effect association; others present data on time lost at work due to general sickness (such as colds and diarrhoea, etc.) at industrial enterprises. The review below is focused on critical evaluation of the occupational health care system in Russia.\nInitial timid attempts at criticism of the quality of medical examination of workers began to appear in Russian scientific journals in 1986, soon after Gorbachev's coming to power and the beginning of “perestroika”.\nIn 1982–1984, during the construction of gas pipelines from Urengoy (in the Yamal peninsula), there was on average one doctor for 328 workers engaged in dangerous and hazardous job conditions, and 83 patients from the general population. A mobile medical team moved with the workers as the construction progressed. A questionnaire survey in Urengoy showed that 11% of workers complained about difficulties in getting to see the doctor. Some 74% of workers considered themselves to be overexposed to noise and vibration and 78% to petrochemicals. Housing conditions were generally poor, with 81% living in wagons or “dog houses” (4–5 m2/person). There were serious lifestyle issues, with 79% smokers, and 77% consumed on average 250–500 ml of vodka in a single dose (19).\nBy the end of the 1990s, more critical articles on the system of diagnosis and registration of ODs appeared in Russian scientific journals. In 1997, the Research Institute of Hygiene and Occupational Pathology in Nizhniy Novgorod noted the substantial increase of ODs in Nizhniy Novgorod region since 1990, which the author attributed to more frequent visits of workers to doctors due to the worsening of the economic situation and the wild market economy, with 80% of enterprises in the region privatised (20). There was an increase in chronic OD rate, while the number of disabled workers and general degradation of the OD detection system were discussed in connection with dramatic changes towards the wild market economy. The author identified an urgent need of holding employers legally accountable for neglecting job safety rules and for under-reporting ODs (20).\nIn 1998 the nickel-cobalt industry in the Russian Arctic was the subject of an assessment by the Kola Research Laboratory of Occupational Health in Kirovsk city, Murmansk Oblast (21). Despite considerable financial inputs, substantial improvement of working conditions had not occurred. 
Occupational disease level among nickel-cobalt industry workers was very high, but it was estimated that only 2% of ODs were detected, based on detailed medical examination of the workers, with the true prevalence likely around 510 cases per 10,000 workers (21).\nBy the end of the 2000s, numerous articles disclosing and unmasking the Russian system of occupational health care and OD registration have appeared on the pages of the most respectable journals. Among the vocal critics was Gennady Onishchenko, the Chief of Sanitary-Epidemiological Surveillance (Chief State Sanitary Officer) of the Ministry of Health Care and Social Development, academician of the Russian Academy of Medical Sciences (22). Some of the main points of his article in the journal Hygiene and Sanitation are highlighted below.\nIn 2006, 23% of workers employed in main Russian industries worked in job conditions that violated sanitary-hygienic rules. Almost 80% of all Russian industrial enterprises were categorised as dangerous or hazardous for health. The worst labour conditions are found in coal mining, ship building, ferrous and non-ferrous metallurgy, agriculture, tractor and agricultural machinery construction, building materials production, lumbering and construction activities. The highest number of people who work in dangerous and hazardous conditions are employed at non-governmental enterprises. In many regions of Russia, the budget for improving labour conditions has been cut dramatically. In many enterprises, their hygienic laboratories were eliminated or their financing was reduced significantly (22).\nIndustrial recession and economic instability have affected many enterprises, resulting in dilapidation and disrepair of buildings, machines and equipment. Much of the Russian industry still relies on manual labour (accounting for 70% of production), with a low level of mechanisation and automation. Many companies blatantly ignore or violate occupational safety regulations relating to noise, vibration, temperature, air quality, illumination, personal protective gear, duration and schedule of work, limits for lifting and carrying of weights, and so on.\nIndustrial reorganisation involving partition and mergers of industrial resources and properties has resulted in the emergence of new legal entities with very limited or no responsibility for the health and safety of workers. Some of the new enterprises are located on temporary rented sites where employers do not want to invest in infrastructure improvement.\nIn Russia, official statistics show a continuing decline in the rate of ODs in the face of worsening labour conditions, a paradox that can be explained by inadequate monitoring and reporting. 
Among the factors responsible for this state of affairs are:\nno coverage and low quality of preventive health examination of workers;\nemployers’ disinterest in detecting ODs to maintain low insurance premiums and avoid costs associated with improving labour conditions and health care measures;\nlacklustre performance of medical examiners, particularly when the employer has a long-term arrangement with patient care institutions;\nworkers’ tendency to hide early symptoms of disease, which will affect their ability to continue employment; on the other hand, later detection after the onset of disability will bring in significant compensation for the family; and\ninadequate labour protection legislation.\nAdditional criticism of the problem was expressed by other specialists from the Research Institute of Labour Medicine in Moscow. The number of disabled workers in Russia increased dramatically from 7.9 million in 1997 to 13 million in 2007, when disabled workers constituted 11% of the adult population of Russia (23). The fears of unemployment among workers (a new phenomenon in Russia), the low regard for human life (an old phenomenon in Russia), and the quest for profits by employers jointly produced widespread worsening of job conditions and physical and psychological exhaustion among workers.\nOfficial statistics of Magadan Oblast show incredibly low values of OD (1.3–3.2 cases per 10,000 workers) in 2004–2008 (14). The Angarsk Research Institute of Labour Medicine and Human Ecology evaluated the registration of OD among miners in Magadan Oblast between 1981 and 2000 and found that the real situation is very different from official statistics. For the mining industry as a whole in Magadan Oblast, the OD rate among all workers fluctuated between 35 and 61 cases per 10,000 workers, 52 and 93 cases per 10,000 workers among above-ground workers and 266 and 398 cases per 10,000 workers among miners working underground. For specific conditions such as vibration disease, among underground miners who had been exposed to vibration for 5 or more years, 64% had signs of vibration disease. Vibration disease tends to develop about twice as fast in the Arctic as among workers in middle latitudes, a contributing factor being the below-freezing temperatures often found underground (24).\nAn assessment of Krasnoyarsk region by the regional centre of occupational pathology and Krasnoyarsk Medical University (25) found that in the 1990s, much of the previous gains in occupational health care had been lost – the number of medical units at enterprises was considerably reduced, preventive work decreased, and the quality of medical examinations and diagnosis of ODs declined. In Krasnoyarsk region, every second worker works in conditions below hygienic standards. 
With the intensification of medical examination of workers, the proportion of workers screened by surveys increased from 80% in 1998 to 91% in 2008 (25).\nBetween 1978 and 1988, official OD levels in Archangelsk city were comparable to those in Archangelsk Oblast and Russia as a whole (0.8–1.8 per 10,000 workers), but during the next decade (1989–2000), the rate for Archangelsk city increased dramatically to 14.1/10,000, which was more than 7 times higher than in Russia. This was the result of the launching of the department of occupational pathology in Archangelsk city and expert assessment of the quality of medical examination of workers. However, in 2000–2002, the OD level in Archangelsk city again fell to the level of the 1980s due to the cessation of expert work (26).", "This review presents official statistical data on occupational disease and summarises Russian published peer-reviewed scientific literature on occupational health care in Russia and the Russian Arctic in 1980–2010.\nIf Russian official statistics are to be trusted, the country and its northern regions have extremely low rates of ODs, substantially lower (some 10–100 times) than other circumpolar countries with advanced social welfare systems and high occupational health and safety standards. This positive picture is at odds with the poor compliance with occupational health regulations, and the breakdown in occupational health monitoring and workers’ health programmes, which were exacerbated by the “wild market” economy since the 1990s and accompanying major industrial restructuring in the North and elsewhere.\nReform is urgently needed across Russia. Legislation is needed to hold employers accountable for adherence to safety regulations and truthful reporting of ODs and injuries. Where monitoring and surveillance systems have been put in place, there has been a corresponding increase in reported ODs. Well-equipped and staffed hygienic laboratories and regular health screening of workers by qualified personnel in both state and private enterprises require adequate financial and administrative support. There is some evidence that the very poor state of occupational health and health care is receiving high-level governmental attention. What is needed is implementation of sound principles and policies." ]
[ null, null, null, null, null, "conclusions" ]
[ "occupational diseases", "occupational health care", "occupational safety", "labour conditions", "Russian Arctic" ]
Recent governmental statements on occupational health: The deplorable state of occupational health in the country was recognised by the Russian government and was highlighted in the National Report for 2005 (2). Annually, in the Russian Federation, 190,000 workers employed in dangerous and hazardous industries die; 15,000 workers die from occupational injuries; and 180,000 workers retire early, many for health reasons. More than 30% of deaths nationally occur among people at working-age, 4.5 times higher than in other European countries. These figures were presented by Dmitry Medvedev, the President of the Russian Federation, in 2008, during a meeting with representatives of a Russian business organisation devoted to the problems of population and health care. The President told the assembled participants: “Problems threatening the safety of people still exist in Russia; they also exist in workplaces; high mortality is not only a medical but also social problem. It is necessary to improve the health care system, particularly the preventive services.” The Concept of the Federal Program of Actions on Improvement of Labour Conditions and Safety for 2008–2010 (3) was published in 2008, which became a part of the Concept of Demographic Policy of the Russian Federation for the period until 2025 (4). Its assessment states, in summary: “In the Russian Federation annually about 180 thousand persons die of causes related to harmful and dangerous industrial practices, about 200 thousand persons are injured occupationally, more than 10 thousand cases of occupational diseases (ODs) are registered and more than 14 thousand persons become invalid because of occupational injuries and diseases” (3). Nikolaj Izmerov, Director of the Research Institute of Labour Medicine, and Academician of the Russian Academy of Medical Sciences, provided more gloomy data specific to women workers: “In 2007 generally in Russia 66.8 million people were employed, 33.9 million of them women, of whom 20 million were of child-bearing age. About 4 million women (12%) worked in conditions which violated sanitary-hygienic regulations and promoted ODs. According to the data of periodical medical examinations, every second working woman was chronically ill. Among women who worked in conditions where they were exposed to heat or chemicals, every sixth woman suffered from infertility, and every seventh experienced spontaneous abortions” (5). Official statistics on occupational health: 1980–2010: Official statistical data on ODs were available for the USSR/Russia for the period 1980–2010 from the Russian Statistical Yearbooks (6–8) and for 7 northern and far eastern regions from regional statistical yearbooks of Arkhangelsk Oblast, Murmansk Oblast, the Republic of Karelia, the Republic of Komi, Chukotka Autonomous Okrug, Kamchatka Oblast and Magadan Oblast (9–15) (Fig. 1). Regional data are available mainly for the period 2000–2008. A more complete regional picture is not possible as not all regions produced their own statistical yearbooks, or they are difficult to locate. The 7 regions (4 far-western and 3 far-eastern) nevertheless demonstrate the scope of the problem during the first decade of the century. Map of northern regions of the Russian Federation. (Reproduced by permission from Young TK, ed. Circumpolar Health Atlas. 
Toronto: University of Toronto Press, 2012) The rates of new cases of ODs in 6 northern regions – Arkhangelsk Oblast, Murmansk Oblast, the Republic of Komi, Chukotka Autonomous Okrug, Kamchatka Oblast and Magadan Oblast – compared with Russia as a whole are presented in Fig. 2. Rate of new cases of occupational diseases, including poisonings (per 10,000 workers), in Russia and selected Arctic regions. Figure 2 shows that in Russia the rate of ODs has been stable over the 30-year period, averaging 2 cases per 10,000 workers per year, which is quite low. Levels in Murmansk and Komi are 2–5 times higher than Russian rates (up to 10 cases/10,000), with Murmansk showing a steep rise and Komi a fall in rates. The situations in Arkhangelsk and Magadan are similar to that of Russia, but by mid-decade, the Arkhangelsk rate curves sharply upwards. The very low rates reported from the 2 far-eastern regions (Chukotka and Kamchatka), with almost no cases during 2003–2006, are unexpected and likely represent gross under-reporting, considering the heavy industrial activity in Chukotka, which annually produces 20–25 tons of pure gold alone, as well as other metals and coal. International comparison of occupational diseases rates: According to the Organisation for Economic Co-operation and Development (OECD), the rate of ODs in Russia is considerably lower (some 10–100 times lower) than in other circumpolar countries (16). World Health Organisation data for 2005 (Table I) show that Russia, countries that were formerly part of the USSR and former socialist countries in Eastern Europe all reported extremely low rates of ODs compared to Scandinavian countries (17). Rate of new cases of occupational diseases (OD) in selected European countries, 2005 Source: World Health Organisation (cited in 17). Latvia is unique among former USSR countries in having a high level of ODs. Latvia instituted reforms after independence (17), with legislation regulating relations between employers and employees in the field of social insurance, obligatory medical examination of workers (paid for by the employers), improvement in the training and certification of occupational health specialists, the creation of a state ODs register, and so on. The number of certified professionals in Latvia increased 8.4 times between 1996 and 2007. The number of ODs in Latvia increased 9 times during 1996–2004, from 20 to 185 cases per 100,000 workers. This increase is not the result of a worsening of workers’ health but of improved surveillance with more comprehensive coverage and accurate enumeration. History of occupational disease surveillance in USSR and Russia: The history of registration of ODs in the USSR/Russia is well described in the introduction of the doctoral dissertation by L.G. Zhavoronok from the Research Institute of Medicine of Labour, Russian Academy of Medical Sciences, Moscow (18). The obligatory system of registration and reporting of ODs in Russia began in 1924 with the joint Decision of the National Commissariats of Work and of Public Health Services About the obligatory notice on occupational poisonings and diseases. Before that time, there were separate registrations for certain groups of occupational illnesses, mainly acute poisonings, occupational infections, caisson illness and various accidents. 
In 1939, the National Commissariat of Public Health Services adopted a new Principle of the notice and registration of occupational poisonings and occupational diseases, which introduced a person-based system for registering patients with ODs. Changes were made to the reporting form over the years, but essentially the system established in 1939 existed until 1986, for almost 50 years. It did not contain data on the cause of the occupational disease, the adverse factors affecting the patient, the nature of the work performed, the actions taken to eliminate these factors or the health status of the patient at the time the disease was investigated. Even basic data, such as sex, age, occupational history, work experience, and so on, were not captured. The system did not provide sufficient information to analyse causes and effects. A new system of obligatory registration of ODs was introduced in the USSR in 1986 and in Russia in 1991. In Russia, the state sanitary-epidemiological service (Gossanepidnadzor, now Rospotrebnadzor) of the Ministry of Health Care of the Russian Federation provides the centralised gathering of primary materials for the registration of ODs. An important innovation is the unified form “Sanitary-hygienic characteristic of workplace conditions in case of assumption of occupational disease (poisoning)” and the instructions for its completion. The sanitary-hygienic characteristic of the workplace is prepared by the Rospotrebnadzor centre, taking into account the preliminary diagnosis of occupational disease and the characteristics of all harmful factors of the occupational environment, labour process and modes of work which could lead to occupational disease (poisoning). This is the major document confirming, or denying, the occupational character of a disease (18). The bureaucratic system and procedural complexity in establishing the preliminary and definitive diagnosis of an occupational disease often lead to conflicts, delays in thorough medical examination of patients in specialised occupational health centres, opportunities for concealment of diseases, and judicial proceedings. The Federal centre of Rospotrebnadzor carries out the analysis of ODs in Russia, compiles annual reports and publishes newsletters. However, the published information does not provide sufficient detail on the different regions of Russia, separate sectors of the economy or different occupations. Different types of ODs and poisonings are presented only as a proportion of all ODs. Methodological and statistical errors occur frequently in the bulletins. There is no interaction between Rospotrebnadzor centres and the treatment and prevention facilities in which the diagnosis of an occupational disease is first established and patient supervision carried out. Reform of the current situation is badly needed (18). Review of published literature in Russian: Russian hygienic scientific journals have not yet embraced the electronic age. For example, Gig Sanit (Hygiene and Sanitation), published since 1922, has a web page listing only article titles since 1998 (with abstracts since 2010), while Med Tr Prom Ekol (Medicine of Labour and Industrial Ecology), published since 1957, has a much shorter list, covering only 2008 onwards; both journals are indexed in Index Medicus and in many other web-based bibliographic systems, but full-text articles are unavailable. Specialised Russian medical web search systems are only at an early stage of development; comprehensive thematic catalogues do not exist.
PubMed does contain the titles of articles from almost all major Russian biomedical and public health journals since the late 1960s, and thus can be an important resource for searching Russian-language articles. The libraries inside Russian research institutes have faced serious financial problems since 1990. Today, across the vast country, only the Russian National Library in St. Petersburg, the Russian State Library in Moscow and several other libraries in the biggest cities remain repositories of comprehensive Russian scientific literature. The holdings of these libraries are only partially catalogued electronically, and most full-text articles are available only in hard copy. A systematic search of the Russian peer-reviewed literature on occupational health and disease was conducted for the period 1980–2010 using PubMed and the electronic catalogues of the Russian National Library in St. Petersburg. The specific terms “Arctic” and “North” were not initially included, in order to search as broadly as possible. Publications specific to the Russian Arctic, Siberia and the Far East were then further identified. Most of the articles on occupational health are descriptive in nature, assessing single exposures (such as air pollutants, noise, vibration or temperature); others deal with physiological, cardiac, immune and neuro-hormonal status in occupational conditions but without analysis of exposure–effect associations; still others present data on time lost at work due to general sickness (such as colds and diarrhoea) at industrial enterprises. The review below focuses on critical evaluation of the occupational health care system in Russia. Initial timid attempts at criticising the quality of medical examination of workers began to appear in Russian scientific journals in 1986, soon after Gorbachev came to power and “perestroika” began. In 1982–1984, during the construction of gas pipelines from Urengoy (in the Yamal peninsula), there was on average one doctor for every 328 workers engaged in dangerous and hazardous jobs, plus 83 patients from the general population. A mobile medical team moved with the workers as the construction progressed. A questionnaire survey in Urengoy showed that 11% of workers complained about difficulties in getting to see the doctor. Some 74% of workers considered themselves to be overexposed to noise and vibration, and 78% to petrochemicals. Housing conditions were generally poor, with 81% living in wagons or “dog houses” (4–5 m2/person). There were serious lifestyle issues: 79% were smokers, and 77% consumed on average 250–500 ml of vodka in a single sitting (19). By the end of the 1990s, more critical articles on the system of diagnosis and registration of ODs appeared in Russian scientific journals. In 1997, the Research Institute of Hygiene and Occupational Pathology in Nizhniy Novgorod noted a substantial increase in ODs in the Nizhniy Novgorod region since 1990, which the author attributed to more frequent visits of workers to doctors due to the worsening economic situation and the wild market economy, with 80% of enterprises in the region privatised (20). The rise in the chronic OD rate, the growing number of disabled workers and the general degradation of the OD detection system were all discussed in connection with the dramatic shift towards the wild market economy. The author identified an urgent need to hold employers legally accountable for neglecting job safety rules and for under-reporting ODs (20).
In 1998, the nickel-cobalt industry in the Russian Arctic was the subject of an assessment by the Kola Research Laboratory of Occupational Health in Kirovsk city, Murmansk Oblast (21). Despite considerable financial inputs, substantial improvement of working conditions had not occurred. The level of ODs among nickel-cobalt industry workers was very high; based on detailed medical examination of the workers, it was estimated that only 2% of ODs were detected, with the true prevalence likely around 510 cases per 10,000 workers (21). By the end of the 2000s, numerous articles exposing the failings of the Russian system of occupational health care and OD registration had appeared in the pages of the most respected journals. Among the vocal critics was Gennady Onishchenko, the Chief of Sanitary-Epidemiological Surveillance (Chief State Sanitary Officer) of the Ministry of Health Care and Social Development and academician of the Russian Academy of Medical Sciences (22). Some of the main points of his article in the journal Hygiene and Sanitation are highlighted below. In 2006, 23% of workers employed in the main Russian industries worked in job conditions that violated sanitary-hygienic rules. Almost 80% of all Russian industrial enterprises were categorised as dangerous or hazardous for health. The worst labour conditions are found in coal mining, shipbuilding, ferrous and non-ferrous metallurgy, agriculture, tractor and agricultural machinery construction, building materials production, lumbering and construction. The largest number of people working in dangerous and hazardous conditions is employed at non-governmental enterprises. In many regions of Russia, the budget for improving labour conditions has been cut dramatically. In many enterprises, hygienic laboratories were eliminated or their financing was reduced significantly (22). Industrial recession and economic instability have affected many enterprises, resulting in dilapidation and disrepair of buildings, machines and equipment. Much of Russian industry still relies on manual labour (accounting for 70% of production), with a low level of mechanisation and automation. Many companies blatantly ignore or violate occupational safety regulations relating to noise, vibration, temperature, air quality, illumination, personal protective gear, duration and schedule of work, limits for lifting and carrying weights, and so on. Industrial reorganisation involving the partition and merger of industrial resources and properties has resulted in the emergence of new legal entities with very limited or no responsibility for the health and safety of workers. Some of the new enterprises are located on temporarily rented sites where employers do not want to invest in infrastructure improvement. In Russia, official statistics show a continuing decline in the rate of ODs in the face of worsening labour conditions, a paradox that can be explained by inadequate monitoring and reporting.
Among the factors responsible for this state of affairs are: poor coverage and low quality of preventive health examinations of workers; employers’ disinterest in detecting ODs, in order to maintain low insurance premiums and avoid the costs associated with improving labour conditions and health care measures; lacklustre performance of medical examiners, particularly when the employer has a long-term arrangement with the patient care institution; workers’ tendency to hide early symptoms of disease that would affect their ability to continue employment (whereas later detection, after the onset of disability, brings significant compensation for the family); and inadequate labour protection legislation. Additional criticism of the problem was expressed by other specialists from the Research Institute of Labour Medicine in Moscow. The number of disabled workers in Russia increased dramatically from 7.9 million in 1997 to 13 million in 2007, when disabled workers constituted 11% of the adult population of Russia (23). The fear of unemployment among workers (a new phenomenon in Russia), the low regard for human life (an old one) and employers’ quest for profit jointly produced a widespread worsening of job conditions and physical and psychological exhaustion among workers. Official statistics of Magadan Oblast show implausibly low rates of ODs (1.3–3.2 cases per 10,000 workers) in 2004–2008 (14). The Angarsk Research Institute of Labour Medicine and Human Ecology evaluated the registration of ODs among miners in Magadan Oblast between 1981 and 2000 and found that the real situation was very different from the official statistics. For the mining industry as a whole in Magadan Oblast, the OD rate fluctuated between 35 and 61 cases per 10,000 workers among all workers, between 52 and 93 cases per 10,000 among above-ground workers, and between 266 and 398 cases per 10,000 among miners working underground. As for specific conditions, 64% of underground miners who had been exposed to vibration for 5 or more years had signs of vibration disease. Vibration disease tends to develop about twice as fast in the Arctic as in middle latitudes, a contributing factor being the below-freezing temperatures often found underground (24). An assessment of Krasnoyarsk region by the regional centre of occupational pathology and Krasnoyarsk Medical University (25) found that in the 1990s much of the previous gains in occupational health care had been lost – the number of medical units at enterprises was considerably reduced, preventive work decreased, and the quality of medical examinations and diagnosis of ODs declined. In Krasnoyarsk region, every second worker works in conditions below hygienic standards.
With the intensification of medical examination of workers, the proportion of workers covered by screening increased from 80% in 1998 to 91% in 2008 (25). Between 1978 and 1988, official OD levels in Arkhangelsk city were comparable to those in Arkhangelsk Oblast and Russia as a whole (0.8–1.8 per 10,000 workers), but during the next decade (1989–2000) the rate for Arkhangelsk city increased dramatically to 14.1/10,000, more than 7 times the Russian rate. This was the result of the launch of a department of occupational pathology in Arkhangelsk city and expert assessment of the quality of medical examinations of workers. However, in 2000–2002 the OD level in Arkhangelsk city again fell to the level of the 1980s, due to the cessation of the expert work (26). Conclusions: This review presents official statistical data on occupational disease and summarises the Russian published peer-reviewed scientific literature on occupational health care in Russia and the Russian Arctic in 1980–2010. If Russian official statistics are to be trusted, the country and its northern regions have extremely low rates of ODs, substantially lower (some 10–100 times) than in other circumpolar countries with advanced social welfare systems and high occupational health and safety standards. This positive picture is at odds with the poor compliance with occupational health regulations and the breakdown in occupational health monitoring and workers’ health programmes, which were exacerbated by the “wild market” economy of the 1990s and the accompanying major industrial restructuring in the North and elsewhere. Reform is urgently needed across Russia. Legislation is needed to hold employers accountable for adherence to safety regulations and truthful reporting of ODs and injuries. Where monitoring and surveillance systems have been put in place, there has been a corresponding increase in reported ODs. Well-equipped and staffed hygienic laboratories and regular health screening of workers by qualified personnel in both state and private enterprises require adequate financial and administrative support. There is some evidence that the very poor state of occupational health and health care is receiving high-level governmental attention. What is needed now is the implementation of sound principles and policies.
Background: There is a paradox in Russia and its Arctic regions, which report extremely low rates of occupational diseases (ODs), far below those of other socially and economically advanced circumpolar countries, despite widespread disregard for occupational health regulations and neglect of basic occupational health services across many industrial enterprises. Methods: This review article presents official statistics and summarises the results of a search of peer-reviewed scientific literature published in Russia on ODs and occupational health care in Russia and the Russian Arctic within the period 1980-2010. Results: Worsening of the economic situation, layoffs of workers, the threat of unemployment and increased workloads accompanied the "wild market" industrial restructuring of 1990-2000, when the health and safety of workers were of little concern. Russian employers are not held legally accountable for neglecting safety rules or for under-reporting ODs. Almost 80% of all Russian industrial enterprises are considered dangerous or hazardous for health. Hygienic control of working conditions was minimised or eliminated in the majority of enterprises, and the health status of workers remains largely unknown. There is direct evidence of general degradation of the occupational health care system in Russia. The real levels of ODs in Russia are estimated to be at least 10-100 times higher than reported by official statistics. The low official rates are the result of deliberate hiding of ODs, lack of coverage of working personnel by properly conducted medical examinations, incompetent management and the poor quality of staff, facilities and equipment. Conclusions: Reform of the Russian occupational health care system is urgently needed, including the passing of strong occupational health legislation and its enforcement, the maintenance of credible health monitoring and effective health services for workers, improved training of occupational health personnel, protection of sanitary-hygienic laboratories in industrial enterprises, and support for research assessing occupational risk and the effectiveness of interventions.
null
null
3,944
359
[ 428, 391, 250, 586, 1994 ]
6
[ "occupational", "workers", "health", "russian", "ods", "russia", "conditions", "medical", "occupational health", "disease" ]
[ "workers russia increased", "labour medicine moscow", "disabled workers russia", "russian system occupational", "health care russian" ]
null
null
null
null
null
null
null
[CONTENT] occupational diseases | occupational health care | occupational safety | labour conditions | Russian Arctic [SUMMARY]
[CONTENT] occupational diseases | occupational health care | occupational safety | labour conditions | Russian Arctic [SUMMARY]
null
null
null
null
[CONTENT] Arctic Regions | Industry | Occupational Diseases | Occupational Health | Occupational Health Services | Population Surveillance | Russia | Safety Management [SUMMARY]
[CONTENT] Arctic Regions | Industry | Occupational Diseases | Occupational Health | Occupational Health Services | Population Surveillance | Russia | Safety Management [SUMMARY]
null
null
null
null
[CONTENT] workers russia increased | labour medicine moscow | disabled workers russia | russian system occupational | health care russian [SUMMARY]
[CONTENT] workers russia increased | labour medicine moscow | disabled workers russia | russian system occupational | health care russian [SUMMARY]
null
null
null
null
[CONTENT] occupational | workers | health | russian | ods | russia | conditions | medical | occupational health | disease [SUMMARY]
[CONTENT] occupational | workers | health | russian | ods | russia | conditions | medical | occupational health | disease [SUMMARY]
null
null
null
null
[CONTENT] health | occupational | occupational health | needed | poor | systems | monitoring | russian | safety | official [SUMMARY]
[CONTENT] occupational | health | workers | russian | ods | russia | oblast | countries | disease | rate [SUMMARY]
null
null
null
null
[CONTENT] Russian [SUMMARY]
[CONTENT] Russia | Arctic ||| ||| Russia | Russia | the Russian Arctic | 1980-2010 ||| ||| 1990-2000 ||| Russian ||| Almost 80% | Russian ||| ||| Russia ||| Russia | at least 10-100 ||| ||| Russian [SUMMARY]
null
Survey of Opioid Risk Tool Among Cancer Patients Receiving Opioid Analgesics.
35698838
The risk of opioid-related aberrant behavior (OAB) in Korean cancer patients has not been previously evaluated. The purpose of this study is to investigate the Opioid Risk Tool (ORT) in Korean cancer patients receiving opioid treatment.
BACKGROUND
Data were obtained from a multicenter, cross-sectional, nationwide observational study regarding breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. Patients were eligible if they had cancer-related pain within the past 7 days, which was treated with strong opioids in the previous 7 days.
METHODS
We analyzed ORT results of 946 patients. Only one patient in each sex (0.2%) was classified as high risk for OAB. Moderate risk was observed in 18 males (3.3%) and in three females (0.7%). Scores above 0 were primarily derived from positive responses for personal or familial history of alcohol abuse (in men), or depression (in women). In patients with an ORT score of 1 or higher (n = 132, 14%), the score primarily represented positive responses for personal history of depression (in females), personal or family history of alcohol abuse (in males), or 16-45 years age range. These patients had more severe worst and average pain intensity (proportion of numeric rating scale ≥ 4: 20.5% vs. 11.4%, P < 0.001) and used rescue analgesics more frequently than patients with ORT scores of 0. The proportion of moderate- or high-risk patients according to ORT was lower in patients receiving low doses of long-acting opioids than in those receiving high doses (2.0% vs. 6.6%, P = 0.031). Moderate or high risk was more frequent when ORT was completed in an isolated room than in an open, busy place (2.7% vs. 0.6%, P = 0.089).
RESULTS
ORT scores were very low in cancer patients receiving strong opioids for analgesia. Higher pain intensity may be associated with a positive response to one or more ORT items.
CONCLUSIONS
[ "Alcoholism", "Analgesics, Opioid", "Cross-Sectional Studies", "Female", "Humans", "Male", "Neoplasms", "Opioid-Related Disorders" ]
9194487
INTRODUCTION
Opioid analgesics are the most important drugs for controlling cancer pain. Nevertheless, their potential for dependency, misuse, and addiction has long been a major concern. Recently, the prescription of opioids has soared worldwide. Because of cultural conventions, opioid usage has traditionally been low in East Asia, but it has increased rapidly in recent years.1,2 In a large-scale cohort study, the proportion of opioid users in South Korea increased six to nine times from 2002 to 2015.3 Moreover, the rate of opioid-related chemical coping was 21% among South Korean patients receiving long-term opioid therapy for chronic non-cancer pain in a cross-sectional study.4 Thus, concerns about opioid-related aberrant behavior (OAB) are now greater than ever. The rate of opioid misuse is approximately 21% to 29% among patients with chronic pain, according to a systematic review of studies conducted in North America and Europe.5 To prescreen high-risk patients, many tools have been developed for predicting the risk of OAB.6 The Opioid Risk Tool (ORT), Current Opioid Misuse Measure (COMM), and Patient Medication Questionnaire (PMQ) are common risk assessment tools.7,8,9 Although no controlled study has directly compared the performance of these tools, ORT is the most widely used. In the original study describing the use of ORT, approximately 66% and 24% of patients with chronic pain were classified as having a moderate and high risk of aberrant opioid use, respectively. With respect to cancer patients, the risk of OAB varies considerably according to cancer type and treatment situation. Moderate to high risk of OAB predicted by ORT has been reported in approximately 15% to 43% of cancer patients.10,11,12 In a recent study using different screening tools, 10% to 39% of cancer patients receiving supportive care were noted to be at risk of OAB.13,14 In this paper, we describe the ORT results of a large multicenter, nationwide survey regarding breakthrough cancer pain in South Korean patients. The results of the entire study population will be published elsewhere.15 In this study, we used ORT to evaluate the risk of OAB in patients receiving opioids to control cancer pain.
METHODS
Study design: This study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study of breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects and will be published elsewhere. This paper reports the results of the subjects from the original study who completed the ORT. Patient eligibility: Patients were eligible if they met the following criteria: 1) aged 19 years or older, 2) histologically diagnosed cancer, 3) current or previous anti-cancer treatment (surgery, radiation, or systemic therapy) or palliative care, 4) cancer-related pain within 7 days before the date of written informed consent, 5) use of strong opioids within 7 days before the date of written informed consent, and 6) cognitive function sufficient to read and understand the informed consent form and study questionnaires. Data acquisition: We first collected details about cancer status and treatment history from the medical records of patients who provided written informed consent. Patients were asked to complete an ORT questionnaire written in Korean (Supplementary Data 1). If patients were unable to complete the questionnaire on their own, their caregiver was permitted to record the responses; in the absence of a caregiver, the clinical research coordinator recorded them. The identity of the person providing assistance was recorded, as was the physical location where the questionnaire was completed. ORT scores of 0–3 (low risk), 4–7 (moderate risk), or ≥ 8 (high risk) indicated the probability of OABs according to the original study.7
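To make the scoring rule concrete, here is a minimal sketch (ours, not the authors' code) that maps a total ORT score to the risk bands quoted above; the function name and the small self-checks are our own additions.

```python
# Risk bands from the original ORT study, as cited in the text:
# 0-3 low, 4-7 moderate, >= 8 high.
def ort_risk_category(total_score: int) -> str:
    """Map a total ORT score to its published risk band."""
    if total_score <= 3:
        return "low"
    if total_score <= 7:
        return "moderate"
    return "high"

# Quick self-checks (hypothetical scores).
assert ort_risk_category(0) == "low"
assert ort_risk_category(5) == "moderate"
assert ort_risk_category(9) == "high"
```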
Statistical analyses: Continuous variables are summarized as mean ± standard deviation (SD) or median (minimum, maximum). Frequency, percentage, and cumulative percentage are presented for categorical variables. To compare two continuous variables, the two-sample t-test was used for normally distributed data, and the Mann-Whitney U test for non-normally distributed variables. The chi-square test or Fisher's exact test was used to compare categorical variables between groups. All statistical analyses were performed using IBM SPSS Statistics version 26 and Microsoft Excel 2019. Ethical statement: The authors are accountable for all aspects of this work and ensure that questions related to the accuracy or integrity of the work are appropriately investigated and resolved. The protocol was performed in accordance with the principles of the Declaration of Helsinki (as revised in 2013) and the Good Clinical Practice Guidelines defined by the International Conference on Harmonization. After receiving approval from the KCSG Protocol Review Committee, this study was also approved by the Institutional Review Boards (IRBs) of each participating center (Pusan National University Yangsan Hospital IRB No. 04-2016-002). All patients provided written informed consent before enrollment.
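The test-selection logic in the Statistical analyses paragraph can be sketched in a few lines. The study used SPSS; the snippet below is our illustration of the same choices using scipy, with placeholder data and the conventional 0.05 normality threshold (both assumptions of ours, not from the paper).

```python
# Choose between a two-sample t-test and the Mann-Whitney U test based on
# normality, as described above. Values are hypothetical pain scores.
import numpy as np
from scipy import stats

group_a = np.array([4.0, 3.5, 5.1, 2.8, 4.4, 3.9])  # e.g., ORT >= 1
group_b = np.array([3.2, 2.9, 3.8, 3.0, 2.5, 3.4])  # e.g., ORT == 0

normal = (stats.shapiro(group_a).pvalue > 0.05
          and stats.shapiro(group_b).pvalue > 0.05)
if normal:
    stat, p = stats.ttest_ind(group_a, group_b)  # two-sample t-test
else:
    stat, p = stats.mannwhitneyu(group_a, group_b,
                                 alternative="two-sided")  # non-parametric
print(f"p = {p:.3f}")
```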
RESULTS
Patient characteristics: A total of 946 patients completed the ORT. The characteristics of the patients are summarized in Table 1. The majority of patients had stage IV cancer and a fair performance status, and most had received anti-cancer treatment within the previous 4 weeks. Values are number (percentage) or median (minimum, maximum). ECOG = Eastern Cooperative Oncology Group, PS = performance status. aOverlap was permitted. ORT results: The ORT results are presented in Table 2. In men, scores above 0 were primarily derived from positive responses for depression, personal or familial history of alcohol abuse, and age within the 16 to 45 years range. In women, the majority of scores above 0 were derived from positive responses for depression and the 16 to 45 years age range. Drug abuse and a history of preadolescent sexual abuse or psychological disease other than depression were extremely rare in both sexes. Fig. 1 is a color-coded depiction of the number of positive responses for each ORT item, as an easy-to-read presentation of the results. The mean total ORT score was very low in both men and women, with a median value of 0 in both sexes. The distribution of total scores for each sex is presented in Table 3. Most males were classified as low risk, and 18 (3.3%) were considered moderate risk. Likewise, most females were classified as low risk, and only three (0.7%) were classified as moderate risk. Only one subject (0.2%) of each sex was considered high risk according to ORT. The proportion of moderate- or high-risk patients was higher in men than in women (3.5% vs. 1.0%, P = 0.011 by Fisher’s exact test). Data for ORT items are numbers (percentage). NA = not available, ORT = Opioid Risk Tool, SD = standard deviation. F.Hx = family history, Hx = personal history. Values are presented as number (%). NA = not available, ORT = Opioid Risk Tool.
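A figure like the color-coded Fig. 1 can be produced with a simple heatmap. The sketch below is our illustration only; the item subset and the counts are made up, since the per-item counts live in Table 2 of the paper, not in this excerpt.

```python
# Hypothetical heatmap of 'YES' counts per ORT item, by sex (cf. Fig. 1).
import matplotlib.pyplot as plt
import numpy as np

items = ["Alcohol (family Hx)", "Alcohol (personal Hx)",
         "Depression", "Age 16-45 years"]
counts = np.array([[30, 5],    # men, women -- placeholder values
                   [25, 4],
                   [20, 28],
                   [15, 12]])

fig, ax = plt.subplots()
im = ax.imshow(counts, cmap="Reds")
ax.set_xticks([0, 1])
ax.set_xticklabels(["Men", "Women"])
ax.set_yticks(range(len(items)))
ax.set_yticklabels(items)
fig.colorbar(im, ax=ax, label="No. of positive responses")
plt.tight_layout()
plt.show()
```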
ORT scores and pain intensity: Because the vast majority of subjects had an ORT score of 0, we further analyzed the patients who answered ‘YES’ to any item of the ORT (n = 132, 14%). These patients had a higher average pain intensity score during the past week than those with an ORT score of 0 (mean ± SD: 4.04 ± 2.10 vs. 3.22 ± 1.94, P < 0.001 by t-test). Furthermore, patients with moderate or severe pain according to the average 1-week pain intensity score were more likely to answer ‘YES’ to at least one ORT item than those with weak pain intensity (20.5% vs. 11.4%, P < 0.001 by chi-squared test) (Table 4). Likewise, patients with an ORT score of 1 or more had a higher maximal pain intensity during the previous week than those with an ORT score of 0 (mean ± SD: 6.66 ± 2.53 vs. 6.00 ± 2.47, P = 0.013 by t-test). Patients with an ORT score of at least 1 also used short-acting opioids more frequently to control breakthrough pain (2.5 ± 1.6 times/day vs. 2.0 ± 1.6 times/day, P = 0.013 by t-test). Additionally, pain interfered with the enjoyment of life in the past 24 hours more in patients with an ORT score of at least 1 than in those with an ORT score of 0 (Brief Pain Inventory-Short Form, Korean version, scores: 6.7 ± 2.9 vs. 6.1 ± 2.9, P = 0.037 by t-test). Values are presented as number (%). ORT = Opioid Risk Tool. aBased on the number of ORT items with a ‘YES’ answer, excluding the age range item; bχ2 test. Opioid utilization and ORT score: The daily dose of total long-acting opioids was converted to oral morphine equivalents (OME) for each patient. The mean and median OMEs were 93.1 ± 246.8 mg and 60 mg (range 0–6,300), respectively. The mean daily OME of patients classified as moderate or high risk according to ORT was not statistically significantly higher than that of the low-risk patients (mean ± SD: 143.1 ± 194.6 mg vs. 91.9 ± 247.9 mg, P = 0.228 by t-test). However, the proportion of patients classified as moderate or high risk was lower in the low-dose group than in the high-dose group (2.1% vs. 6.6%, P = 0.031 by Fisher’s exact test) (Table 5). ORT = Opioid Risk Tool. aTotal daily dose of long-acting opioids (i.e., extended-release, controlled-release, and slow-release forms) for background pain converted to oral morphine equivalents. High dose was defined as a daily OME of 200 mg or higher; bFisher’s exact test.
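For readers unfamiliar with the OME conversion step mentioned above, here is an illustrative sketch. The conversion factors are common published approximations that we supply for illustration; they are not the study's exact conversion table and are not clinical guidance.

```python
# Convert daily long-acting opioid doses (mg) to oral morphine equivalents.
# Factors below are approximate, widely cited values -- assumptions of ours.
OME_FACTOR = {
    "oral_morphine": 1.0,
    "oral_oxycodone": 1.5,
    "oral_hydromorphone": 4.0,
}

def daily_ome_mg(doses_mg: dict) -> float:
    """Sum daily doses weighted by their morphine-equivalence factors."""
    return sum(OME_FACTOR[drug] * mg for drug, mg in doses_mg.items())

# e.g., 40 mg/day of controlled-release oxycodone -> 60 mg OME,
# which happens to equal the study's median OME.
print(daily_ome_mg({"oral_oxycodone": 40}))  # 60.0
```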
Circumstances during ORT completion: We further investigated whether the circumstances under which the ORT was completed affected the results. First, we compared the scores of self-completed ORTs (n = 420) with the scores of ORTs completed with the assistance of a caregiver (n = 56) or study staff (n = 471). Although all high-risk patients were in the self-completed group, there was no statistically significant difference in ORT scores according to whether the ORT was completed alone or with assistance (P = 0.111, likelihood-ratio chi-square). Second, we tested whether the environment in which patients completed the ORT correlated with ORT scores. ORT scores tended to be lower when the ORT was completed in an open, busy space (n = 154) than when it was completed in a quiet, isolated room (n = 792) (moderate or high risk in the open, busy location vs. the quiet, isolated location: 0.6% vs. 2.7%, P = 0.089 by Fisher’s exact test, Supplementary Table 1).
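The 2×2 comparisons above (sex, dose group, completion environment) were all tested with Fisher's exact test. As a self-contained illustration of how such a test is computed, the snippet below uses made-up counts, since the per-cell counts behind the reported percentages are not given in this excerpt; the study itself used SPSS.

```python
# Fisher's exact test on a hypothetical 2x2 table:
# rows = completion environment, columns = (moderate/high risk, low risk).
from scipy.stats import fisher_exact

open_busy      = [1, 153]   # placeholder counts
quiet_isolated = [21, 771]  # placeholder counts

odds_ratio, p = fisher_exact([open_busy, quiet_isolated])
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")
```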
null
null
[ "Study design", "Patient eligibility", "Data acquisition", "Statistical analyses", "Ethical statement", "Patient characteristics", "ORT results", "ORT scores and pain intensity", "Opioid utilization and ORT score", "Circumstances during ORT completion" ]
[ "This study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study about breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects which will be published elsewhere. This paper reports the results of subjects who completed the ORT from the original study.", "Patients were eligible if they met the following criteria: 1) aged 19 years or older, 2) histologically-diagnosed cancer, 3) current or previous anti-cancer treatment (surgery, radiation, or systemic therapy) or palliative care, 4) cancer-related pain within 7 days before the date of written informed consent, 5) use of strong opioids within 7 days before the date of written informed consent, and 6) cognitive function sufficient to read and understand the informed consent form and study questionnaires.", "We first collected details about cancer status and treatment history from the medical records of patients who provided written informed consent. Patients were requested to complete a questionnaire of ORT which was written in Korean (Supplementary Data 1). If patients were unable to complete the questionnaires on their own, their caregiver was permitted to record the responses. In the absence of a caregiver, the clinical research coordinator recorded the responses. The identity of the person providing assistance was recorded, as was the physical location where the questionnaires were completed. The ORT scores of 0–3 (low risk), 4–7 (moderate risk), or ≥ 8 (high risk), indicated the probability of OABs according to the original study.7", "Continuous variables are summarized as mean ± standard deviation (SD) or median (minimum, maximum). Frequency, percentage, and cumulative percentage are presented for categorical variables. To compare two continuous variables, the two-sample t-test was used for normally distributed data, and the Mann-Whitney U test was used for non-normally distributed variables. Chi-square test or Fisher's exact test was used to compare categorical variables between groups. All statistical analyses were performed using IBM SPSS Statistics version 26 and Microsoft Excel 2019.", "The authors are accountable for all aspects of this work. All authors are ensuring that questions related to the accuracy or integrity of the work are appropriately investigated and resolved. The protocol was performed in accordance with the principles of the Declaration of Helsinki (as revised in 2013) and the Good Clinical Practice Guidelines defined by the International Conference on Harmonization. After receiving approval from the KCSG Protocol Review Committee, this study was also approved by the Institutional Review Boards (IRBs) of each participating center (Pusan National University Yangsan Hospital IRB No. 04-2016-002). All patients provided written informed consent before enrollment.", "Total 946 patients completed the ORT. The characteristics of patients are summarized in Table 1. The majority of patients had stage IV cancer and a fair performance status, and most had received anti-cancer treatment within the previous 4 weeks.\nValues are number (percentage) or median (minimum, maximum).\nECOG = Eastern Cooperative Oncology Group, PS = performance status.\naOverlap was permitted.", "The ORT results are presented in Table 2. 
In men, scores above 0 were primarily derived from positive responses for depression, personal or familial history of alcohol abuse, and age within the 16 to 45 years range. In women, the majority of scores above 0 were derived from positive responses for depression and the 16 to 45 years age range. Drug abuse and a history of preadolescent sexual abuse or psychological disease other than depression were extremely rare in both sexes. Fig. 1 is a color-coded depiction of the number of positive responses for each ORT item as an easy-to-read presentation of the results. The mean total ORT score was very low in both men and women, with a median value of 0 in both sexes. The distribution of total scores for each sex is presented in Table 3. Most males were classified as low risk, and 18 (3.3%) were considered moderate risk. Likewise, most females were classified as low risk, and only three (0.7%) were classified as moderate risk. Only one subject (0.2%) in each sex was considered high risk according to ORT. The proportion of moderate- or high-risk patients was higher in men than in women (3.5% vs. 1.0%, P = 0.011 by Fisher’s exact test).\nData for ORT items are numbers (percentage).\nNA = not available, ORT = Opioid Risk Tool, SD = standard deviation.\nF.Hx = family history, Hx = personal history.\nValues are presented as number (%).\nNA = not available, ORT = Opioid Risk Tool.", "Because the vast majority of subjects had a 0 ORT score, we further analyzed the patients who answered ‘YES’ on any item of the ORT (n = 132, 14%). These patients had a higher average pain intensity score during the past week than those with a 0 ORT score (mean ± SD: 4.04 ± 2.10 vs. 3.22 ± 1.94, P < 0.001 by t-test). Furthermore, patients with moderate or severe pain according to the average 1-week pain intensity score were more likely to answer ‘YES’ to at least one ORT item than those with weak pain intensity (20.5% vs. 11.4%, P < 0.001 by chi-squared test) (Table 4). Likewise, patients with an ORT score of 1 or more had a higher maximal pain intensity during the previous 1 week than those with a 0 ORT score (mean ± SD: 6.66 ± 2.53 vs. 6.00 ± 2.47, P = 0.013 by t-test). Patients with an ORT score of at least 1 also used short-acting opioids more frequently to control breakthrough pain, compared to patients with an ORT score of 0 (2.5 ± 1.6 times/day vs. 2.0 ± 1.6 times/day, P = 0.013 by t-test). Additionally, pain interfered with the enjoyment of life in the past 24 hours more in patients with an ORT of least 1 than in those with an ORT score of 0 (Brief Pain Inventory-Short Form, Korean version, scores: 6.7 ± 2.9 vs. 6.1 ± 2.9, P = 0.037 by t-test).\nValues are presented as number (%).\nORT = Opioid Risk Tool.\naBased on the number of ORT items with a ‘YES’ answer, excluding the age range item; bχ2 test.", "The daily dose of total long-acting opioids was converted to oral morphine equivalents (OME) for each patient. The mean and median OMEs were 93.1 ± 246.8 mg and 60 mg (0, 6,300), respectively. The mean daily OME of patients classified as moderate or high risk according to ORT was not statistically significantly higher than that of the low-risk patients (mean ± SD: 143.1 ± 194.6 mg vs. 91.9 ± 247.9 mg, P = 0.228 by t-test). However, the proportion of patients who were classified as moderate or high risk was lower in the low-dose group than in the high-dose group (2.1% vs. 
6.6%, P = 0.031 by Fisher’s exact test) (Table 5).\nORT = Opioid Risk Tool.\naTotal daily dose of long-acting opioids (i.e., extended-release, controlled-release, and slow-release forms) for background pain converted to oral morphine equivalents. High dose was defined as daily OME of 200 mg or higher; bFisher’s exact test.", "We further investigated whether the circumstances when completing ORT affected the ORT results. First, we compared the scores of self-completed ORT (n = 420) with the scores of ORTs completed with the assistance of a caregiver (n = 56) or study staff (n = 471). Although all high-risk patients were in the self-completed group, there was no statistically significant difference in ORT scores according to whether the ORT was completed alone or with assistance (P = 0.111, Likelihood ratio Chi-Square). Second, we tested whether the environment where patients completed the ORT correlated with ORT scores. ORT scores tended to be lower when ORT was completed in an open, busy space (n = 154) than when it was completed in a quiet, isolated room (n = 792) (moderate or high risk in open, busy location vs. quiet, isolated location: 0.6% vs. 2.7%, P = 0.089 by Fisher’s exact test, Supplementary Table 1)." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study design", "Patient eligibility", "Data acquisition", "Statistical analyses", "Ethical statement", "RESULTS", "Patient characteristics", "ORT results", "ORT scores and pain intensity", "Opioid utilization and ORT score", "Circumstances during ORT completion", "DISCUSSION" ]
[ "Opioid analgesics are the most important drugs for controlling cancer pain. Nevertheless, their potential for dependency, misuse, and addiction has long been a major concern. Recently, the prescription of opioids has soared worldwide. Because of cultural conventions, opioid usage has been traditionally low in East Asia, but it has increased rapidly in recent years.12 In a large-scale cohort study, the proportion of opioid users in South Korea increased six to nine times from 2002 to 2015.3 Moreover, opioid-related chemical coping was 21% among South Korean patients receiving long-term opioid therapy for chronic non-cancer pain in a cross-sectional study.4 Thus, concerns about opioid-related aberrant behavior (OAB) are now greater than ever.\nThe rate of opioid misuse is approximately 21% to 29% among patients with chronic pain, according to a systematic review of studies conducted in North America and Europe.5 To prescreen high-risk patients, many tools have been developed for predicting the risk of OAB.6 The Opioid Risk Tool (ORT), Current Opioid Misuse Measure (COMM), and Patient Medication Questionnaire (PMQ) are common risk assessment tools.789 Although no controlled study has directly compared the performance of these tools, ORT is the most widely used. In the original study describing the use of ORT, approximately 66% and 24% of patients with chronic pain were classified as having a moderate and high risk of aberrant opioid use, respectively. With respect to cancer patients, the risk of OAB varies considerably according to cancer type and treatment situation. Moderate to high risk of OAB predicted by ORT has been reported in approximately 15% to 43% of cancer patients.101112 In a recent study using different screening tools, 10% to 39% of cancer patients receiving supportive care were noted to be at risk of OAB.1314\nIn this paper, we describe the ORT results of a large multicenter, nationwide survey regarding breakthrough cancer pain in South Korean patients. The results of the entire study population will be published elsewhere.15 In this study, we used ORT to evaluate the risk of OAB in patients receiving opioids to control cancer pain.", "Study design This study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study about breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects which will be published elsewhere. This paper reports the results of subjects who completed the ORT from the original study.\nThis study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study about breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects which will be published elsewhere. 
This paper reports the results of subjects who completed the ORT from the original study.\nPatient eligibility Patients were eligible if they met the following criteria: 1) aged 19 years or older, 2) histologically-diagnosed cancer, 3) current or previous anti-cancer treatment (surgery, radiation, or systemic therapy) or palliative care, 4) cancer-related pain within 7 days before the date of written informed consent, 5) use of strong opioids within 7 days before the date of written informed consent, and 6) cognitive function sufficient to read and understand the informed consent form and study questionnaires.\nData acquisition We first collected details about cancer status and treatment history from the medical records of patients who provided written informed consent. Patients were requested to complete a questionnaire of ORT which was written in Korean (Supplementary Data 1). If patients were unable to complete the questionnaires on their own, their caregiver was permitted to record the responses. In the absence of a caregiver, the clinical research coordinator recorded the responses. The identity of the person providing assistance was recorded, as was the physical location where the questionnaires were completed. The ORT scores of 0–3 (low risk), 4–7 (moderate risk), or ≥ 8 (high risk), indicated the probability of OABs according to the original study.7\nStatistical analyses Continuous variables are summarized as mean ± standard deviation (SD) or median (minimum, maximum). Frequency, percentage, and cumulative percentage are presented for categorical variables. To compare two continuous variables, the two-sample t-test was used for normally distributed data, and the Mann-Whitney U test was used for non-normally distributed variables. Chi-square test or Fisher's exact test was used to compare categorical variables between groups. All statistical analyses were performed using IBM SPSS Statistics version 26 and Microsoft Excel 2019.\nEthical statement The authors are accountable for all aspects of this work. All authors are ensuring that questions related to the accuracy or integrity of the work are appropriately investigated and resolved. The protocol was performed in accordance with the principles of the Declaration of Helsinki (as revised in 2013) and the Good Clinical Practice Guidelines defined by the International Conference on Harmonization. After receiving approval from the KCSG Protocol Review Committee, this study was also approved by the Institutional Review Boards (IRBs) of each participating center (Pusan National University Yangsan Hospital IRB No. 04-2016-002). All patients provided written informed consent before enrollment.", "This study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study about breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects which will be published elsewhere. This paper reports the results of subjects who completed the ORT from the original study.", "Patients were eligible if they met the following criteria: 1) aged 19 years or older, 2) histologically-diagnosed cancer, 3) current or previous anti-cancer treatment (surgery, radiation, or systemic therapy) or palliative care, 4) cancer-related pain within 7 days before the date of written informed consent, 5) use of strong opioids within 7 days before the date of written informed consent, and 6) cognitive function sufficient to read and understand the informed consent form and study questionnaires.", "We first collected details about cancer status and treatment history from the medical records of patients who provided written informed consent. Patients were requested to complete a questionnaire of ORT which was written in Korean (Supplementary Data 1). If patients were unable to complete the questionnaires on their own, their caregiver was permitted to record the responses. In the absence of a caregiver, the clinical research coordinator recorded the responses. The identity of the person providing assistance was recorded, as was the physical location where the questionnaires were completed.
Ethical statement: The authors are accountable for all aspects of this work and ensure that questions related to the accuracy or integrity of any part of it are appropriately investigated and resolved. The protocol was performed in accordance with the principles of the Declaration of Helsinki (as revised in 2013) and the Good Clinical Practice Guidelines defined by the International Conference on Harmonization. After receiving approval from the KCSG Protocol Review Committee, the study was also approved by the Institutional Review Board (IRB) of each participating center (Pusan National University Yangsan Hospital IRB No. 04-2016-002). All patients provided written informed consent before enrollment.

RESULTS:

Patient characteristics: In total, 946 patients completed the ORT. Their characteristics are summarized in Table 1 (values are number (percentage) or median (minimum, maximum); ECOG = Eastern Cooperative Oncology Group, PS = performance status; overlap between categories was permitted). The majority of patients had stage IV cancer and a fair performance status, and most had received anti-cancer treatment within the previous 4 weeks.

ORT results: The ORT results are presented in Table 2 (data for ORT items are numbers (percentage); NA = not available, SD = standard deviation, F.Hx = family history, Hx = personal history). In men, scores above 0 were primarily derived from positive responses for depression, a personal or familial history of alcohol abuse, and age within the 16 to 45 years range. In women, the majority of scores above 0 were derived from positive responses for depression and the 16 to 45 years age range. Drug abuse and a history of preadolescent sexual abuse or of psychological disease other than depression were extremely rare in both sexes. Fig. 1 presents the number of positive responses for each ORT item in a color-coded, easy-to-read format. The mean total ORT score was very low in both men and women, with a median value of 0 in both sexes. The distribution of total scores by sex is presented in Table 3 (values are number (%)). Most males were classified as low risk, and 18 (3.3%) were considered moderate risk. Likewise, most females were classified as low risk, and only three (0.7%) were classified as moderate risk. Only one subject (0.2%) of each sex was considered high risk according to ORT. The proportion of moderate- or high-risk patients was higher in men than in women (3.5% vs. 1.0%, P = 0.011 by Fisher's exact test).
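A per-item tabulation like Table 2 (and the color-coded Fig. 1) amounts to counting 'YES' responses for each ORT item within each sex. A sketch over a hypothetical long-format response table follows; the column names and toy values are ours, not the study's.

```python
import pandas as pd

# Hypothetical per-patient data: one row per patient, one boolean
# column per ORT item, plus sex ("M"/"F"). Toy values for illustration.
responses = pd.DataFrame({
    "sex":               ["M", "M", "F", "F"],
    "depression":        [True, False, True, False],
    "fhx_alcohol_abuse": [True, False, False, False],
    "age_16_45":         [False, True, True, False],
})

items = ["depression", "fhx_alcohol_abuse", "age_16_45"]
# Number of positive ('YES') responses per item within each sex --
# the quantity summarized in Table 2 and depicted in Fig. 1.
item_counts = responses.groupby("sex")[items].sum()
print(item_counts)
```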
ORT scores and pain intensity: Because the vast majority of subjects had an ORT score of 0, we further analyzed the patients who answered 'YES' to any ORT item (n = 132, 14%). These patients had a higher average pain intensity score during the past week than those with an ORT score of 0 (mean ± SD: 4.04 ± 2.10 vs. 3.22 ± 1.94, P < 0.001 by t-test). Furthermore, patients with moderate or severe pain according to the average 1-week pain intensity score were more likely to answer 'YES' to at least one ORT item than those with weak pain intensity (20.5% vs. 11.4%, P < 0.001 by chi-square test) (Table 4; values are number (%); grouping is based on the number of ORT items with a 'YES' answer, excluding the age-range item). Likewise, patients with an ORT score of 1 or more had a higher maximal pain intensity during the previous week than those with an ORT score of 0 (mean ± SD: 6.66 ± 2.53 vs. 6.00 ± 2.47, P = 0.013 by t-test). Patients with an ORT score of at least 1 also used short-acting opioids more frequently to control breakthrough pain (2.5 ± 1.6 vs. 2.0 ± 1.6 times/day, P = 0.013 by t-test). Additionally, pain interfered more with the enjoyment of life in the past 24 hours in patients with an ORT score of at least 1 than in those with a score of 0 (Brief Pain Inventory-Short Form, Korean version: 6.7 ± 2.9 vs. 6.1 ± 2.9, P = 0.037 by t-test).
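The dichotomized comparison (moderate/severe versus weak average pain against endorsement of any ORT item) is a 2×2 chi-square test. A sketch with simulated placeholder data: the NRS ≥ 4 cut-off follows the paper's abstract, but everything else here is made up for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
avg_pain = rng.integers(0, 11, size=200)   # placeholder 0-10 NRS scores
any_ort_item = rng.random(200) < 0.14      # ~14% endorsing >= 1 item

moderate_or_severe = avg_pain >= 4         # NRS >= 4 cut-off
table = np.array([
    [np.sum(moderate_or_severe & any_ort_item),
     np.sum(moderate_or_severe & ~any_ort_item)],
    [np.sum(~moderate_or_severe & any_ort_item),
     np.sum(~moderate_or_severe & ~any_ort_item)],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```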
Opioid utilization and ORT score: The total daily dose of long-acting opioids (i.e., extended-release, controlled-release, and slow-release forms used for background pain) was converted to oral morphine equivalents (OME) for each patient. The mean and median OMEs were 93.1 ± 246.8 mg and 60 mg (range 0, 6,300), respectively. The mean daily OME of patients classified as moderate or high risk by ORT was not statistically significantly higher than that of low-risk patients (mean ± SD: 143.1 ± 194.6 mg vs. 91.9 ± 247.9 mg, P = 0.228 by t-test). However, the proportion of patients classified as moderate or high risk was lower in the low-dose group than in the high-dose group, where high dose was defined as a daily OME of 200 mg or higher (2.1% vs. 6.6%, P = 0.031 by Fisher's exact test) (Table 5).
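Converting long-acting opioid regimens to OME multiplies each drug's daily dose by an equianalgesic factor. The factors below are commonly cited approximations, not values taken from this paper; institutional conversion tables differ, so the sketch is illustrative only.

```python
# Commonly cited oral-morphine-equivalent factors (illustrative only;
# this paper does not state which conversion table was used).
OME_FACTORS = {
    "morphine_oral_mg": 1.0,
    "oxycodone_oral_mg": 1.5,
    "hydromorphone_oral_mg": 4.0,
    "fentanyl_patch_mcg_per_hr": 2.4,  # mcg/hr patch -> mg/day oral morphine
}

def daily_ome(doses: dict) -> float:
    """Sum a patient's long-acting opioid doses as mg/day of oral morphine."""
    return sum(OME_FACTORS[drug] * amount for drug, amount in doses.items())

# Example: 40 mg/day oral oxycodone plus a 25 mcg/hr fentanyl patch.
print(daily_ome({"oxycodone_oral_mg": 40,
                 "fentanyl_patch_mcg_per_hr": 25}))  # -> 120.0 mg/day
```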
Circumstances during ORT completion: We further investigated whether the circumstances under which the ORT was completed affected its results. First, we compared the scores of self-completed ORTs (n = 420) with those of ORTs completed with the assistance of a caregiver (n = 56) or study staff (n = 471). Although all high-risk patients were in the self-completed group, there was no statistically significant difference in ORT scores according to whether the ORT was completed alone or with assistance (P = 0.111, likelihood-ratio chi-square). Second, we tested whether the environment in which patients completed the ORT correlated with their scores. ORT scores tended to be lower when the ORT was completed in an open, busy space (n = 154) than in a quiet, isolated room (n = 792) (moderate or high risk: 0.6% vs. 2.7%, P = 0.089 by Fisher's exact test; Supplementary Table 1).
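The likelihood-ratio chi-square used for the completion-mode comparison is available in SciPy as the log-likelihood variant of the contingency test. The 3×3 counts below are hypothetical, chosen only to resemble the reported group sizes, not the study's actual table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: self-completed / caregiver-assisted / staff-assisted.
# Columns: low / moderate / high ORT risk. Counts are illustrative.
table = np.array([
    [400, 18, 2],
    [ 55,  1, 0],
    [466,  5, 0],
])
g, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(f"G = {g:.2f}, df = {dof}, p = {p:.3f}")
```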
DISCUSSION: In this study, we investigated the risk of OAB by using the ORT to survey cancer patients who were already receiving strong opioids for pain control. To our knowledge, this is the first study to investigate ORT results in Korean patients with cancer. ORT scores were very low in these cancer patients receiving strong opioids for cancer-related pain. Moderate risk was rare, observed in 18 males (3.3%) and three females (0.7%). Among almost 1,000 patients, only one man and one woman were classified as high risk. This proportion (0.2%) was lower than the percentages previously reported in the literature.
Nearly 89% of males and 83% of females had ORT scores of 0. In patients with an ORT score of 1 or higher, the score primarily reflected positive responses for a history of depression, alcohol abuse, or age within the 16 to 45 years range.

This study yielded several new findings. Patients with an ORT score of 1 or more had higher average and worst pain intensities, reported more interference with their enjoyment of life because of pain, and used more short-acting opioids for breakthrough pain than patients with an ORT score of 0. Additionally, high-dose opioid users tended to be classified as moderate or high risk by ORT more frequently than low-dose opioid users. We also noted that the place where the ORT is completed may influence the results: patients were more frequently classified as moderate or high risk when the ORT was completed in a quiet, independent environment than when it was completed in a busy, open space. This suggests that if researchers use the ORT in a future study, it should be completed in a quiet, independent location.

Although ORT scores appeared to be associated with pain intensity, it is not clear that patients with more severe pain have a higher risk of OAB. However, the possibility that patients at higher risk for OAB demand more opioids cannot be ruled out. Conversely, positive responses to some ORT items may reflect a lower pain threshold. Many studies have reported substantial genetic influences in drug addiction, reflecting its heritability.161718 Such genetic traits may influence both ORT scores and opioid demands, and a genetic predisposition to drug addiction may be associated with a lowered pain threshold. Indeed, some studies have suggested that pain thresholds vary among individuals according to genetic predisposition.19 Several candidate genes (such as COMT, OPRM1, GCH1, TRPV1, and OPRD1), haplotypes, and single nucleotide polymorphisms have been investigated as contributing traits.2021222324 Because this study was cross-sectional, we cannot establish causality and must interpret these findings with caution; nevertheless, they may provide hints for future studies.

It is worth considering why our population had overall low ORT scores. One possibility is that the actual risk is low in patients with cancer pain. In the past, it was generally accepted that cancer patients had an extremely low likelihood of opioid addiction or OAB.252627 However, more recent studies have reported that the rate of opioid addiction or aberrant behavior is increasing in this population, with the prevalence of OAB reported as low as 7.7% and as high as 43%.101112131428 It is therefore necessary to consider the possibility that the risk of OAB in cancer patients is higher than previously known. Another possible explanation for the overall low risk observed in the current study is that patients may have wanted to be perceived as a 'good patient' by their doctors,2930 and consequently concealed elements of their history that they thought would have little effect, or even a negative impact, on their treatment. Although statistical significance was not reached, no one was classified as high risk when the ORT was completed with the assistance of medical staff; all high-risk patients completed the ORT by themselves.
A third, and most important, possibility is that the ORT is not a suitable risk-prediction tool in cancer patients who are already receiving opioid analgesics. The ORT was originally developed to predict the probability of aberrant behaviors indicative of abuse in patients with chronic pain,7 and the original study presented no information about the subjects' primary diagnoses. In cancer patients, the risk of OAB has been reported to vary considerably, from 10% to 40%.1011121314 In a study comparing the ORT results of cancer patients and heart failure patients, the proportion of moderate- to high-risk patients was higher among cancer patients (39% vs. 23%), although the difference was not statistically significant.31 Some studies have reported that the predictive performance of the ORT is relatively poor in various populations of patients with chronic non-cancer pain. The ORT scores of chronic non-cancer pain patients assessed with a physician-administered ORT in a tertiary care pain clinic were quite different from those reported in the original ORT study.32 In another study of patients with chronic pain, neither patient-generated nor physician-generated ORT was predictive of moderate-to-severe aberrant drug-related behavior.33 Thus, the ORT may not be an appropriate predictor of OAB, and some investigators have tried to simplify and improve the performance of this risk assessment tool.34

A final consideration is whether selection bias occurred in this study. This study enrolled subjects with various carcinomas under very simple inclusion/exclusion criteria; in fact, most cancer patients who had been prescribed opioids for cancer pain within the previous week were able to participate. In a nationwide study of 2003 Korean cancer patients, opioids were administered to 65% of patients with cancer pain.35 Because only patients who were prescribed opioids were enrolled here, subjects with relatively more severe pain may have been over-represented. However, the prescribed opioid dose (mean OME 93 mg/day; median 60 mg/day) does not differ substantially from the average daily doses of 100 mg to 250 mg reported in a systematic review.36 In a Korean study of a similarly advanced cancer population, the mean daily OME dose, albeit of hydromorphone, was 53 mg.37 The probability of selection bias therefore seems low.

This study has several limitations. First, it was not specifically designed to evaluate OAB; the ORT was completed as part of a study of cancer pain. Because the study's endpoint did not target the ORT, conclusions about OAB in cancer patients must be drawn with care. As discussed above, another screening tool for OAB might have performed better in this population, which could be explored in future studies. Second, the cancer stage was heterogeneous in our study population: at least 115 patients (12%) were at stage I to III, indicating that they may have had no residual tumor, and opioid prescribing may differ substantially between patients with and without viable tumor tissue. Third, the ORT was completed under varying circumstances, according to the study timeline and sites, and environment and location may affect its results. For example, the ORT asks for personal information that may be embarrassing to patients.
It is therefore probably more appropriate to complete this survey in an isolated space.

In summary, ORT scores were very low in Korean cancer patients receiving strong opioids for pain control. Patients with at least one positive ORT item (ORT score of 1 or higher) had higher average and worst pain intensities, reported more interference with enjoyment of life because of pain, and used short-acting opioids more frequently. The circumstances under which the ORT was completed may have influenced the results. In future studies evaluating the risk of OAB, risk-prediction tools should be selected according to the subjects' characteristics.
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "Opioid Risk Tool", "Opioid-Related Disorders", "Narcotic-Related Disorders", "Cancer Pain" ]
INTRODUCTION: Opioid analgesics are the most important drugs for controlling cancer pain. Nevertheless, their potential for dependency, misuse, and addiction has long been a major concern. Recently, the prescription of opioids has soared worldwide. Because of cultural conventions, opioid usage has been traditionally low in East Asia, but it has increased rapidly in recent years.12 In a large-scale cohort study, the proportion of opioid users in South Korea increased six to nine times from 2002 to 2015.3 Moreover, opioid-related chemical coping was 21% among South Korean patients receiving long-term opioid therapy for chronic non-cancer pain in a cross-sectional study.4 Thus, concerns about opioid-related aberrant behavior (OAB) are now greater than ever. The rate of opioid misuse is approximately 21% to 29% among patients with chronic pain, according to a systematic review of studies conducted in North America and Europe.5 To prescreen high-risk patients, many tools have been developed for predicting the risk of OAB.6 The Opioid Risk Tool (ORT), Current Opioid Misuse Measure (COMM), and Patient Medication Questionnaire (PMQ) are common risk assessment tools.789 Although no controlled study has directly compared the performance of these tools, ORT is the most widely used. In the original study describing the use of ORT, approximately 66% and 24% of patients with chronic pain were classified as having a moderate and high risk of aberrant opioid use, respectively. With respect to cancer patients, the risk of OAB varies considerably according to cancer type and treatment situation. Moderate to high risk of OAB predicted by ORT has been reported in approximately 15% to 43% of cancer patients.101112 In a recent study using different screening tools, 10% to 39% of cancer patients receiving supportive care were noted to be at risk of OAB.1314 In this paper, we describe the ORT results of a large multicenter, nationwide survey regarding breakthrough cancer pain in South Korean patients. The results of the entire study population will be published elsewhere.15 In this study, we used ORT to evaluate the risk of OAB in patients receiving opioids to control cancer pain. METHODS: Study design This study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study about breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects which will be published elsewhere. This paper reports the results of subjects who completed the ORT from the original study. This study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study about breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects which will be published elsewhere. This paper reports the results of subjects who completed the ORT from the original study. 
Patient eligibility Patients were eligible if they met the following criteria: 1) aged 19 years or older, 2) histologically-diagnosed cancer, 3) current or previous anti-cancer treatment (surgery, radiation, or systemic therapy) or palliative care, 4) cancer-related pain within 7 days before the date of written informed consent, 5) use of strong opioids within 7 days before the date of written informed consent, and 6) cognitive function sufficient to read and understand the informed consent form and study questionnaires. Patients were eligible if they met the following criteria: 1) aged 19 years or older, 2) histologically-diagnosed cancer, 3) current or previous anti-cancer treatment (surgery, radiation, or systemic therapy) or palliative care, 4) cancer-related pain within 7 days before the date of written informed consent, 5) use of strong opioids within 7 days before the date of written informed consent, and 6) cognitive function sufficient to read and understand the informed consent form and study questionnaires. Data acquisition We first collected details about cancer status and treatment history from the medical records of patients who provided written informed consent. Patients were requested to complete a questionnaire of ORT which was written in Korean (Supplementary Data 1). If patients were unable to complete the questionnaires on their own, their caregiver was permitted to record the responses. In the absence of a caregiver, the clinical research coordinator recorded the responses. The identity of the person providing assistance was recorded, as was the physical location where the questionnaires were completed. The ORT scores of 0–3 (low risk), 4–7 (moderate risk), or ≥ 8 (high risk), indicated the probability of OABs according to the original study.7 We first collected details about cancer status and treatment history from the medical records of patients who provided written informed consent. Patients were requested to complete a questionnaire of ORT which was written in Korean (Supplementary Data 1). If patients were unable to complete the questionnaires on their own, their caregiver was permitted to record the responses. In the absence of a caregiver, the clinical research coordinator recorded the responses. The identity of the person providing assistance was recorded, as was the physical location where the questionnaires were completed. The ORT scores of 0–3 (low risk), 4–7 (moderate risk), or ≥ 8 (high risk), indicated the probability of OABs according to the original study.7 Statistical analyses Continuous variables are summarized as mean ± standard deviation (SD) or median (minimum, maximum). Frequency, percentage, and cumulative percentage are presented for categorical variables. To compare two continuous variables, the two-sample t-test was used for normally distributed data, and the Mann-Whitney U test was used for non-normally distributed variables. Chi-square test or Fisher's exact test was used to compare categorical variables between groups. All statistical analyses were performed using IBM SPSS Statistics version 26 and Microsoft Excel 2019. Continuous variables are summarized as mean ± standard deviation (SD) or median (minimum, maximum). Frequency, percentage, and cumulative percentage are presented for categorical variables. To compare two continuous variables, the two-sample t-test was used for normally distributed data, and the Mann-Whitney U test was used for non-normally distributed variables. 
Chi-square test or Fisher's exact test was used to compare categorical variables between groups. All statistical analyses were performed using IBM SPSS Statistics version 26 and Microsoft Excel 2019. Ethical statement The authors are accountable for all aspects of this work. All authors are ensuring that questions related to the accuracy or integrity of the work are appropriately investigated and resolved. The protocol was performed in accordance with the principles of the Declaration of Helsinki (as revised in 2013) and the Good Clinical Practice Guidelines defined by the International Conference on Harmonization. After receiving approval from the KCSG Protocol Review Committee, this study was also approved by the Institutional Review Boards (IRBs) of each participating center (Pusan National University Yangsan Hospital IRB No. 04-2016-002). All patients provided written informed consent before enrollment. The authors are accountable for all aspects of this work. All authors are ensuring that questions related to the accuracy or integrity of the work are appropriately investigated and resolved. The protocol was performed in accordance with the principles of the Declaration of Helsinki (as revised in 2013) and the Good Clinical Practice Guidelines defined by the International Conference on Harmonization. After receiving approval from the KCSG Protocol Review Committee, this study was also approved by the Institutional Review Boards (IRBs) of each participating center (Pusan National University Yangsan Hospital IRB No. 04-2016-002). All patients provided written informed consent before enrollment. Study design: This study is a subgroup analysis of a multicenter, cross-sectional, nationwide observational study about breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. The full analysis set of the original study includes 956 subjects which will be published elsewhere. This paper reports the results of subjects who completed the ORT from the original study. Patient eligibility: Patients were eligible if they met the following criteria: 1) aged 19 years or older, 2) histologically-diagnosed cancer, 3) current or previous anti-cancer treatment (surgery, radiation, or systemic therapy) or palliative care, 4) cancer-related pain within 7 days before the date of written informed consent, 5) use of strong opioids within 7 days before the date of written informed consent, and 6) cognitive function sufficient to read and understand the informed consent form and study questionnaires. Data acquisition: We first collected details about cancer status and treatment history from the medical records of patients who provided written informed consent. Patients were requested to complete a questionnaire of ORT which was written in Korean (Supplementary Data 1). If patients were unable to complete the questionnaires on their own, their caregiver was permitted to record the responses. In the absence of a caregiver, the clinical research coordinator recorded the responses. The identity of the person providing assistance was recorded, as was the physical location where the questionnaires were completed. The ORT scores of 0–3 (low risk), 4–7 (moderate risk), or ≥ 8 (high risk), indicated the probability of OABs according to the original study.7 Statistical analyses: Continuous variables are summarized as mean ± standard deviation (SD) or median (minimum, maximum). Frequency, percentage, and cumulative percentage are presented for categorical variables. 
To compare two continuous variables, the two-sample t-test was used for normally distributed data, and the Mann-Whitney U test was used for non-normally distributed variables. Chi-square test or Fisher's exact test was used to compare categorical variables between groups. All statistical analyses were performed using IBM SPSS Statistics version 26 and Microsoft Excel 2019. Ethical statement: The authors are accountable for all aspects of this work. All authors are ensuring that questions related to the accuracy or integrity of the work are appropriately investigated and resolved. The protocol was performed in accordance with the principles of the Declaration of Helsinki (as revised in 2013) and the Good Clinical Practice Guidelines defined by the International Conference on Harmonization. After receiving approval from the KCSG Protocol Review Committee, this study was also approved by the Institutional Review Boards (IRBs) of each participating center (Pusan National University Yangsan Hospital IRB No. 04-2016-002). All patients provided written informed consent before enrollment. RESULTS: Patient characteristics Total 946 patients completed the ORT. The characteristics of patients are summarized in Table 1. The majority of patients had stage IV cancer and a fair performance status, and most had received anti-cancer treatment within the previous 4 weeks. Values are number (percentage) or median (minimum, maximum). ECOG = Eastern Cooperative Oncology Group, PS = performance status. aOverlap was permitted. Total 946 patients completed the ORT. The characteristics of patients are summarized in Table 1. The majority of patients had stage IV cancer and a fair performance status, and most had received anti-cancer treatment within the previous 4 weeks. Values are number (percentage) or median (minimum, maximum). ECOG = Eastern Cooperative Oncology Group, PS = performance status. aOverlap was permitted. ORT results The ORT results are presented in Table 2. In men, scores above 0 were primarily derived from positive responses for depression, personal or familial history of alcohol abuse, and age within the 16 to 45 years range. In women, the majority of scores above 0 were derived from positive responses for depression and the 16 to 45 years age range. Drug abuse and a history of preadolescent sexual abuse or psychological disease other than depression were extremely rare in both sexes. Fig. 1 is a color-coded depiction of the number of positive responses for each ORT item as an easy-to-read presentation of the results. The mean total ORT score was very low in both men and women, with a median value of 0 in both sexes. The distribution of total scores for each sex is presented in Table 3. Most males were classified as low risk, and 18 (3.3%) were considered moderate risk. Likewise, most females were classified as low risk, and only three (0.7%) were classified as moderate risk. Only one subject (0.2%) in each sex was considered high risk according to ORT. The proportion of moderate- or high-risk patients was higher in men than in women (3.5% vs. 1.0%, P = 0.011 by Fisher’s exact test). Data for ORT items are numbers (percentage). NA = not available, ORT = Opioid Risk Tool, SD = standard deviation. F.Hx = family history, Hx = personal history. Values are presented as number (%). NA = not available, ORT = Opioid Risk Tool. The ORT results are presented in Table 2. 
In men, scores above 0 were primarily derived from positive responses for depression, personal or familial history of alcohol abuse, and age within the 16 to 45 years range. In women, the majority of scores above 0 were derived from positive responses for depression and the 16 to 45 years age range. Drug abuse and a history of preadolescent sexual abuse or psychological disease other than depression were extremely rare in both sexes. Fig. 1 is a color-coded depiction of the number of positive responses for each ORT item as an easy-to-read presentation of the results. The mean total ORT score was very low in both men and women, with a median value of 0 in both sexes. The distribution of total scores for each sex is presented in Table 3. Most males were classified as low risk, and 18 (3.3%) were considered moderate risk. Likewise, most females were classified as low risk, and only three (0.7%) were classified as moderate risk. Only one subject (0.2%) in each sex was considered high risk according to ORT. The proportion of moderate- or high-risk patients was higher in men than in women (3.5% vs. 1.0%, P = 0.011 by Fisher’s exact test). Data for ORT items are numbers (percentage). NA = not available, ORT = Opioid Risk Tool, SD = standard deviation. F.Hx = family history, Hx = personal history. Values are presented as number (%). NA = not available, ORT = Opioid Risk Tool. ORT scores and pain intensity Because the vast majority of subjects had a 0 ORT score, we further analyzed the patients who answered ‘YES’ on any item of the ORT (n = 132, 14%). These patients had a higher average pain intensity score during the past week than those with a 0 ORT score (mean ± SD: 4.04 ± 2.10 vs. 3.22 ± 1.94, P < 0.001 by t-test). Furthermore, patients with moderate or severe pain according to the average 1-week pain intensity score were more likely to answer ‘YES’ to at least one ORT item than those with weak pain intensity (20.5% vs. 11.4%, P < 0.001 by chi-squared test) (Table 4). Likewise, patients with an ORT score of 1 or more had a higher maximal pain intensity during the previous 1 week than those with a 0 ORT score (mean ± SD: 6.66 ± 2.53 vs. 6.00 ± 2.47, P = 0.013 by t-test). Patients with an ORT score of at least 1 also used short-acting opioids more frequently to control breakthrough pain, compared to patients with an ORT score of 0 (2.5 ± 1.6 times/day vs. 2.0 ± 1.6 times/day, P = 0.013 by t-test). Additionally, pain interfered with the enjoyment of life in the past 24 hours more in patients with an ORT of least 1 than in those with an ORT score of 0 (Brief Pain Inventory-Short Form, Korean version, scores: 6.7 ± 2.9 vs. 6.1 ± 2.9, P = 0.037 by t-test). Values are presented as number (%). ORT = Opioid Risk Tool. aBased on the number of ORT items with a ‘YES’ answer, excluding the age range item; bχ2 test. Because the vast majority of subjects had a 0 ORT score, we further analyzed the patients who answered ‘YES’ on any item of the ORT (n = 132, 14%). These patients had a higher average pain intensity score during the past week than those with a 0 ORT score (mean ± SD: 4.04 ± 2.10 vs. 3.22 ± 1.94, P < 0.001 by t-test). Furthermore, patients with moderate or severe pain according to the average 1-week pain intensity score were more likely to answer ‘YES’ to at least one ORT item than those with weak pain intensity (20.5% vs. 11.4%, P < 0.001 by chi-squared test) (Table 4). 
Likewise, patients with an ORT score of 1 or more had a higher maximal pain intensity during the previous 1 week than those with a 0 ORT score (mean ± SD: 6.66 ± 2.53 vs. 6.00 ± 2.47, P = 0.013 by t-test). Patients with an ORT score of at least 1 also used short-acting opioids more frequently to control breakthrough pain, compared to patients with an ORT score of 0 (2.5 ± 1.6 times/day vs. 2.0 ± 1.6 times/day, P = 0.013 by t-test). Additionally, pain interfered with the enjoyment of life in the past 24 hours more in patients with an ORT of least 1 than in those with an ORT score of 0 (Brief Pain Inventory-Short Form, Korean version, scores: 6.7 ± 2.9 vs. 6.1 ± 2.9, P = 0.037 by t-test). Values are presented as number (%). ORT = Opioid Risk Tool. aBased on the number of ORT items with a ‘YES’ answer, excluding the age range item; bχ2 test. Opioid utilization and ORT score The daily dose of total long-acting opioids was converted to oral morphine equivalents (OME) for each patient. The mean and median OMEs were 93.1 ± 246.8 mg and 60 mg (0, 6,300), respectively. The mean daily OME of patients classified as moderate or high risk according to ORT was not statistically significantly higher than that of the low-risk patients (mean ± SD: 143.1 ± 194.6 mg vs. 91.9 ± 247.9 mg, P = 0.228 by t-test). However, the proportion of patients who were classified as moderate or high risk was lower in the low-dose group than in the high-dose group (2.1% vs. 6.6%, P = 0.031 by Fisher’s exact test) (Table 5). ORT = Opioid Risk Tool. aTotal daily dose of long-acting opioids (i.e., extended-release, controlled-release, and slow-release forms) for background pain converted to oral morphine equivalents. High dose was defined as daily OME of 200 mg or higher; bFisher’s exact test. The daily dose of total long-acting opioids was converted to oral morphine equivalents (OME) for each patient. The mean and median OMEs were 93.1 ± 246.8 mg and 60 mg (0, 6,300), respectively. The mean daily OME of patients classified as moderate or high risk according to ORT was not statistically significantly higher than that of the low-risk patients (mean ± SD: 143.1 ± 194.6 mg vs. 91.9 ± 247.9 mg, P = 0.228 by t-test). However, the proportion of patients who were classified as moderate or high risk was lower in the low-dose group than in the high-dose group (2.1% vs. 6.6%, P = 0.031 by Fisher’s exact test) (Table 5). ORT = Opioid Risk Tool. aTotal daily dose of long-acting opioids (i.e., extended-release, controlled-release, and slow-release forms) for background pain converted to oral morphine equivalents. High dose was defined as daily OME of 200 mg or higher; bFisher’s exact test. Circumstances during ORT completion We further investigated whether the circumstances when completing ORT affected the ORT results. First, we compared the scores of self-completed ORT (n = 420) with the scores of ORTs completed with the assistance of a caregiver (n = 56) or study staff (n = 471). Although all high-risk patients were in the self-completed group, there was no statistically significant difference in ORT scores according to whether the ORT was completed alone or with assistance (P = 0.111, Likelihood ratio Chi-Square). Second, we tested whether the environment where patients completed the ORT correlated with ORT scores. ORT scores tended to be lower when ORT was completed in an open, busy space (n = 154) than when it was completed in a quiet, isolated room (n = 792) (moderate or high risk in open, busy location vs. 
quiet, isolated location: 0.6% vs. 2.7%, P = 0.089 by Fisher’s exact test, Supplementary Table 1). We further investigated whether the circumstances when completing ORT affected the ORT results. First, we compared the scores of self-completed ORT (n = 420) with the scores of ORTs completed with the assistance of a caregiver (n = 56) or study staff (n = 471). Although all high-risk patients were in the self-completed group, there was no statistically significant difference in ORT scores according to whether the ORT was completed alone or with assistance (P = 0.111, Likelihood ratio Chi-Square). Second, we tested whether the environment where patients completed the ORT correlated with ORT scores. ORT scores tended to be lower when ORT was completed in an open, busy space (n = 154) than when it was completed in a quiet, isolated room (n = 792) (moderate or high risk in open, busy location vs. quiet, isolated location: 0.6% vs. 2.7%, P = 0.089 by Fisher’s exact test, Supplementary Table 1). Patient characteristics: Total 946 patients completed the ORT. The characteristics of patients are summarized in Table 1. The majority of patients had stage IV cancer and a fair performance status, and most had received anti-cancer treatment within the previous 4 weeks. Values are number (percentage) or median (minimum, maximum). ECOG = Eastern Cooperative Oncology Group, PS = performance status. aOverlap was permitted. ORT results: The ORT results are presented in Table 2. In men, scores above 0 were primarily derived from positive responses for depression, personal or familial history of alcohol abuse, and age within the 16 to 45 years range. In women, the majority of scores above 0 were derived from positive responses for depression and the 16 to 45 years age range. Drug abuse and a history of preadolescent sexual abuse or psychological disease other than depression were extremely rare in both sexes. Fig. 1 is a color-coded depiction of the number of positive responses for each ORT item as an easy-to-read presentation of the results. The mean total ORT score was very low in both men and women, with a median value of 0 in both sexes. The distribution of total scores for each sex is presented in Table 3. Most males were classified as low risk, and 18 (3.3%) were considered moderate risk. Likewise, most females were classified as low risk, and only three (0.7%) were classified as moderate risk. Only one subject (0.2%) in each sex was considered high risk according to ORT. The proportion of moderate- or high-risk patients was higher in men than in women (3.5% vs. 1.0%, P = 0.011 by Fisher’s exact test). Data for ORT items are numbers (percentage). NA = not available, ORT = Opioid Risk Tool, SD = standard deviation. F.Hx = family history, Hx = personal history. Values are presented as number (%). NA = not available, ORT = Opioid Risk Tool. ORT scores and pain intensity: Because the vast majority of subjects had a 0 ORT score, we further analyzed the patients who answered ‘YES’ on any item of the ORT (n = 132, 14%). These patients had a higher average pain intensity score during the past week than those with a 0 ORT score (mean ± SD: 4.04 ± 2.10 vs. 3.22 ± 1.94, P < 0.001 by t-test). Furthermore, patients with moderate or severe pain according to the average 1-week pain intensity score were more likely to answer ‘YES’ to at least one ORT item than those with weak pain intensity (20.5% vs. 11.4%, P < 0.001 by chi-squared test) (Table 4). 
Likewise, patients with an ORT score of 1 or more had a higher maximal pain intensity during the previous 1 week than those with a 0 ORT score (mean ± SD: 6.66 ± 2.53 vs. 6.00 ± 2.47, P = 0.013 by t-test). Patients with an ORT score of at least 1 also used short-acting opioids more frequently to control breakthrough pain, compared to patients with an ORT score of 0 (2.5 ± 1.6 times/day vs. 2.0 ± 1.6 times/day, P = 0.013 by t-test). Additionally, pain interfered with the enjoyment of life in the past 24 hours more in patients with an ORT of least 1 than in those with an ORT score of 0 (Brief Pain Inventory-Short Form, Korean version, scores: 6.7 ± 2.9 vs. 6.1 ± 2.9, P = 0.037 by t-test). Values are presented as number (%). ORT = Opioid Risk Tool. aBased on the number of ORT items with a ‘YES’ answer, excluding the age range item; bχ2 test. Opioid utilization and ORT score: The daily dose of total long-acting opioids was converted to oral morphine equivalents (OME) for each patient. The mean and median OMEs were 93.1 ± 246.8 mg and 60 mg (0, 6,300), respectively. The mean daily OME of patients classified as moderate or high risk according to ORT was not statistically significantly higher than that of the low-risk patients (mean ± SD: 143.1 ± 194.6 mg vs. 91.9 ± 247.9 mg, P = 0.228 by t-test). However, the proportion of patients who were classified as moderate or high risk was lower in the low-dose group than in the high-dose group (2.1% vs. 6.6%, P = 0.031 by Fisher’s exact test) (Table 5). ORT = Opioid Risk Tool. aTotal daily dose of long-acting opioids (i.e., extended-release, controlled-release, and slow-release forms) for background pain converted to oral morphine equivalents. High dose was defined as daily OME of 200 mg or higher; bFisher’s exact test. Circumstances during ORT completion: We further investigated whether the circumstances when completing ORT affected the ORT results. First, we compared the scores of self-completed ORT (n = 420) with the scores of ORTs completed with the assistance of a caregiver (n = 56) or study staff (n = 471). Although all high-risk patients were in the self-completed group, there was no statistically significant difference in ORT scores according to whether the ORT was completed alone or with assistance (P = 0.111, Likelihood ratio Chi-Square). Second, we tested whether the environment where patients completed the ORT correlated with ORT scores. ORT scores tended to be lower when ORT was completed in an open, busy space (n = 154) than when it was completed in a quiet, isolated room (n = 792) (moderate or high risk in open, busy location vs. quiet, isolated location: 0.6% vs. 2.7%, P = 0.089 by Fisher’s exact test, Supplementary Table 1). DISCUSSION: In this study, we investigated the risk of OAB by using ORT to survey cancer patients who were already receiving strong opioids for pain control. To our knowledge, this is the first study to investigate ORT results in Korean patients with cancer. In this study, the score of ORT was very low in cancer patients receiving strong opioids for cancer-related pain. Moderate risk was observed rare; in 18 males (3.3%) and in three females (0.7%). Among almost 1,000 patients, only one man and one woman were classified as high risk. This proportion (0.2%) was lower than the percentages previously reported in the literature. Nearly 89% of males and 83% of females had ORT scores of 0. 
Of those patients with an ORT score of 1 or higher, the score primarily reflected positive responses for a history of depression, alcohol abuse, and age with the 16 to 45 years range. In this study, we demonstrated several new findings. Patients with an ORT score of 1 or more had higher average and worst pain intensities, reported more interference with their enjoyment of life because of pain, and used more short-acting opioids for breakthrough pain, when compared to patients with an ORT score of 0. Additionally, high-dose opioid users tended to be classified as moderate or high risk according to ORT more frequently than low-dose opioid users. We also noted that the place where ORT is completed may influence the results. Patients were more frequently classified as moderate or high risk when ORT was completed in a quiet, independent environment than when it was compared in a busy, open space. This result suggests that if researchers use ORT in a future study, it should be completed in a quiet, independent location. Although ORT scores seemed be associated with pain intensity, it is not clear that patients with more severe pain have a higher risk of OAB. However, the likelihood that patients with a higher risk for OAB may demand more opioids cannot be ruled out. Conversely, consideration should be given to the possibility that positive response on some of the ORT items may reflect a lower pain threshold. Many studies have reported substantial genetic influences in drug addiction, reflecting the hereditability of addiction.161718 This genetic trait may influence both ORT scores and opioid demands. We can assume that genetic predisposition of drug addiction may have association with lowered pain threshold. Some studies already suggested that pain threshold may vary among individuals by genetic predisposition.19 Few candidate genes (such as COMT, OPRM1, GCH1, TRPV1, or OPRD1), haplotypes or single nucleotide polymorphisms are investigated as contributing traits.2021222324 Because this study was cross-sectional, we cannot check causality and should interpret this finding with caution. However, we believe that these findings could provide hints for future studies. It is worth considering why our population had an overall low score of ORT. One possibility is that the actual risk is low in patients with cancer pain. In the past, it was generally accepted that cancer patients had an extremely low likelihood of opioid addiction or OAB.252627 However, more recent studies have reported that the rate of opioid addiction or aberrant behavior is increasing in this patient population. The prevalence of OAB was reported as low as 7.7% and as high as 43% in these more recent studies.101112131428 Therefore, it is necessary to consider the possibility that the risk of OAB in cancer patients is higher than previously known. Another possible explanation for the overall low risk observed in the current study was that patients may have wanted to be perceived as a ‘good patient’ by their doctors.2930 Consequently, they may have concealed elements of their past history that they thought would have little effect, or even a negative impact, on their treatment. Although statistical significance was not achieved, no one was classified as high risk when ORT was completed with the assistance of medical staff. All high-risk patients were in the group of patients who completed ORT by themselves. 
A third, and most important, possibility is that ORT is not a suitable risk predicting tool in cancer patients who are already receiving opioid analgesics. ORT was originally developed to predict the probability of aberrant behaviors indicative of abuse in patients with chronic pain.7 The original study presented no information about the primary diagnosis of the subjects. In cancer patients, the risk of OAB has been reported to vary considerably, from 10% to 40%.1011121314 In a study comparing ORT of cancer patients and heart failure patients, the proportion of moderate- to high-risk patients was higher in cancer patients (39% versus 23%), although the difference was not statistically significant.31 Some studies have reported that the predictive performance of ORT is relatively poor in various populations of patients with chronic non-cancer pain. The ORT scores of chronic non-cancer pain patients using a physician-administered ORT in a tertiary care pain clinic were quite different from those reported in the original ORT study.32 In another study of patients with chronic pain, neither patient-generated nor physician-generated ORT was predictive of moderate-to-severe aberrant drug-related behavior.33 Thus, ORT may not be an appropriate predictor for OAB, and some investigators have tried to simplify and improve the performance of this risk assessment tool.34 A final consideration is whether selection bias occurred in this study. This study enrolled subjects of various carcinomas with very simple inclusion/exclusion criteria. In fact, most cancer patients who were prescribed opioids within one week for cancer pain were able to participate. In a nationwide study of 2003 Korean cancer patients, opioids were administered in 65% of patients with cancer pain.35 Whereas, only patients who were prescribed opioids were enrolled in this study, so it is possible that subjects with relatively more severe pain were included in this study. However, in terms of the prescribed opioids dose, the OME dose was 93 mg (mean, median is 60 mg) per day, which is not significantly different from the daily doses of an average of between 100 mg and 250 mg in a systematic review.36 In a Korean study, though it was hydromorphone, the mean daily OME dose was 53mg among a similarly advanced cancer patient population.37 Therefore, the probability of selection bias seems to be low. This study has several limitations, First, this study was not specifically designed to evaluate OAB; instead, ORT was completed as part of a study regarding cancer pain. Thus, because the study’s endpoint was not targeted for ORT, one must be careful about drawing conclusions regarding OAB in cancer patients. As discussed above, another screening tool for OAB may have performed better in this patient population, which could be explored in future studies. Second, the cancer stage was heterogenous in our study population. At least 115 patients (12%) were at stage I to III, indicating that they may have had no residual tumor. Opioid prescribing may differ substantially between patients with and without viable tumor tissue. Third, ORT was completed under varying circumstances, according to the study timeline and study sites, and environment and location may affect ORT results. For example, ORT contains personal information, which may be embarrassing to patients. Therefore, it is probably more appropriate to complete this survey in an isolated space. 
In summary, the ORT score was very low in Korean cancer patients receiving strong opioids for pain control. Patients with at least one ORT risk item (ORT score of 1 or higher) had higher average and worst pain intensities, reported more interference with enjoyment of life because of pain, and used short-acting opioids more frequently. The circumstances under which ORT was completed may have influenced the results. In future studies evaluating the risk of OAB, it is recommended that risk-prediction tools be selected according to the subjects’ characteristics.
Background: The risk of opioid-related aberrant behavior (OAB) in Korean cancer patients has not been previously evaluated. The purpose of this study was to investigate the Opioid Risk Tool (ORT) in Korean cancer patients receiving opioid treatment. Methods: Data were obtained from a multicenter, cross-sectional, nationwide observational study regarding breakthrough cancer pain. The study was conducted in 33 South Korean institutions from March 2016 to December 2017. Patients were eligible if they had cancer-related pain within the past 7 days, which was treated with strong opioids in the previous 7 days. Results: We analyzed ORT results of 946 patients. Only one patient of each sex (0.2%) was classified as high risk for OAB. Moderate risk was observed in 18 males (3.3%) and in three females (0.7%). Scores above 0 were primarily derived from positive responses for personal or familial history of alcohol abuse (in men) or depression (in women). In patients with an ORT score of 1 or higher (n = 132, 14%), the score primarily represented positive responses for personal history of depression (in females), personal or family history of alcohol abuse (in males), or the 16-45 years age range. These patients had more severe worst and average pain intensity (proportion of numeric rating scale ≥ 4: 20.5% vs. 11.4%, P < 0.001) and used rescue analgesics more frequently than patients with ORT scores of 0. The proportion of moderate- or high-risk patients according to ORT was lower in patients receiving low doses of long-acting opioids than in those receiving high doses (2.0% vs. 6.6%, P = 0.031). Moderate or high risk was more frequent when ORT was completed in an isolated room than in an open, busy place (2.7% vs. 0.6%, P = 0.089). Conclusions: The ORT score was very low in cancer patients receiving strong opioids for analgesia. Higher pain intensity may be associated with a positive response to one or more ORT items.
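The risk classes referred to above (low, moderate, high) come from summing sex-specific ORT item weights. Below is a minimal sketch of that scoring, assuming the commonly cited published ORT weights and cutoffs (low 0-3, moderate 4-7, high ≥8); this record does not itself list them, so the exact numbers and all names here should be read as assumptions rather than the study's own implementation.

```python
# Hypothetical sketch of ORT-style scoring, for illustration only.
# Item weights and cutoffs follow the commonly cited published ORT
# scheme; treat the exact numbers as assumptions.

ORT_WEIGHTS = {
    # item: (points if female, points if male)
    "family_history_alcohol": (1, 3),
    "family_history_illegal_drugs": (2, 3),
    "family_history_prescription_drugs": (4, 4),
    "personal_history_alcohol": (3, 3),
    "personal_history_illegal_drugs": (4, 4),
    "personal_history_prescription_drugs": (5, 5),
    "age_16_to_45": (1, 1),
    "preadolescent_sexual_abuse": (3, 0),
    "psych_adhd_ocd_bipolar_schizophrenia": (2, 2),
    "psych_depression": (1, 1),
}

def ort_score(responses: dict, sex: str) -> int:
    """Sum the sex-specific weights for every item answered 'yes'."""
    col = 0 if sex == "female" else 1
    return sum(ORT_WEIGHTS[item][col] for item, yes in responses.items() if yes)

def ort_risk_category(score: int) -> str:
    """Map a total score onto the assumed low/moderate/high bands."""
    if score >= 8:
        return "high"
    if score >= 4:
        return "moderate"
    return "low"

# Example: a 40-year-old man reporting a personal history of alcohol abuse.
example = {item: False for item in ORT_WEIGHTS}
example["personal_history_alcohol"] = True
example["age_16_to_45"] = True
score = ort_score(example, sex="male")
print(score, ort_risk_category(score))  # 4 -> moderate
```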
null
null
6,979
402
[ 71, 99, 133, 103, 118, 78, 312, 347, 205, 192 ]
14
[ "ort", "patients", "risk", "pain", "study", "cancer", "test", "score", "completed", "high" ]
[ "current opioid misuse", "opioids control cancer", "opioid misuse measure", "oab opioid risk", "cancer patients opioids" ]
null
null
[CONTENT] Opioid Risk Tool | Opioid-Related Disorders | Narcotic-Related Disorders | Cancer Pain [SUMMARY]
[CONTENT] Opioid Risk Tool | Opioid-Related Disorders | Narcotic-Related Disorders | Cancer Pain [SUMMARY]
[CONTENT] Opioid Risk Tool | Opioid-Related Disorders | Narcotic-Related Disorders | Cancer Pain [SUMMARY]
null
[CONTENT] Opioid Risk Tool | Opioid-Related Disorders | Narcotic-Related Disorders | Cancer Pain [SUMMARY]
null
[CONTENT] Alcoholism | Analgesics, Opioid | Cross-Sectional Studies | Female | Humans | Male | Neoplasms | Opioid-Related Disorders [SUMMARY]
[CONTENT] Alcoholism | Analgesics, Opioid | Cross-Sectional Studies | Female | Humans | Male | Neoplasms | Opioid-Related Disorders [SUMMARY]
[CONTENT] Alcoholism | Analgesics, Opioid | Cross-Sectional Studies | Female | Humans | Male | Neoplasms | Opioid-Related Disorders [SUMMARY]
null
[CONTENT] Alcoholism | Analgesics, Opioid | Cross-Sectional Studies | Female | Humans | Male | Neoplasms | Opioid-Related Disorders [SUMMARY]
null
[CONTENT] current opioid misuse | opioids control cancer | opioid misuse measure | oab opioid risk | cancer patients opioids [SUMMARY]
[CONTENT] current opioid misuse | opioids control cancer | opioid misuse measure | oab opioid risk | cancer patients opioids [SUMMARY]
[CONTENT] current opioid misuse | opioids control cancer | opioid misuse measure | oab opioid risk | cancer patients opioids [SUMMARY]
null
[CONTENT] current opioid misuse | opioids control cancer | opioid misuse measure | oab opioid risk | cancer patients opioids [SUMMARY]
null
[CONTENT] ort | patients | risk | pain | study | cancer | test | score | completed | high [SUMMARY]
[CONTENT] ort | patients | risk | pain | study | cancer | test | score | completed | high [SUMMARY]
[CONTENT] ort | patients | risk | pain | study | cancer | test | score | completed | high [SUMMARY]
null
[CONTENT] ort | patients | risk | pain | study | cancer | test | score | completed | high [SUMMARY]
null
[CONTENT] opioid | oab | risk oab | risk | cancer | tools | patients | study | pain | approximately [SUMMARY]
[CONTENT] variables | study | informed | informed consent | consent | written | written informed | written informed consent | cancer | questionnaires [SUMMARY]
[CONTENT] ort | score | risk | vs | patients | test | ort score | scores | pain | completed [SUMMARY]
null
[CONTENT] ort | patients | risk | study | cancer | pain | test | completed | score | variables [SUMMARY]
null
[CONTENT] Korean ||| the Opioid Risk Tool | Korean [SUMMARY]
[CONTENT] ||| 33 | South Korean | March 2016 to December 2017 ||| the past 7 days | the previous 7 days [SUMMARY]
[CONTENT] 946 ||| Only one | 0.2% | OAB ||| 18 | 3.3% | three | 0.7% ||| 0 ||| 1 | 132 | 14% | 16-45 years ||| ≥ | 4 | 20.5% | 11.4% | P < 0.001 | 0 ||| 2.0% | 6.6% | P = | 0.031 ||| 2.7% | 0.6% | 0.089 [SUMMARY]
null
[CONTENT] Korean ||| the Opioid Risk Tool | Korean ||| ||| 33 | South Korean | March 2016 to December 2017 ||| the past 7 days | the previous 7 days ||| 946 ||| Only one | 0.2% | OAB ||| 18 | 3.3% | three | 0.7% ||| 0 ||| 1 | 132 | 14% | 16-45 years ||| ≥ | 4 | 20.5% | 11.4% | P < 0.001 | 0 ||| 2.0% | 6.6% | P = | 0.031 ||| 2.7% | 0.6% | 0.089 ||| analgesia ||| one [SUMMARY]
null
Respiratory syncytial virus surveillance in the United States, 2007-2012: results from a national surveillance system.
24445835
Annual respiratory syncytial virus (RSV) outbreaks throughout the US exhibit variable patterns in onset, peak month of activity and duration of season. RSVAlert, a US surveillance system, collects and characterizes RSV test data at national, regional, state and local levels.
BACKGROUND
RSV test data from 296 to 666 laboratories in 50 states, the District of Columbia and Puerto Rico (as of 2010) were collected during the 2007-2008 to 2011-2012 RSV seasons. Data were collected from early August/September to the following August/September each season. Participating laboratories provided the total number and types of RSV tests performed each week and test results. RSV season onset and offset were defined as the first and last, respectively, of 2 consecutive weeks during which the mean percentage of specimens testing positive for RSV was ≥10%.
METHODS
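The season definition in the methods above (first and last of 2 consecutive weeks with ≥10% of specimens testing positive) is mechanical enough to sketch in code. A minimal illustration, assuming weekly aggregated (positives, total) counts in chronological order; function names are hypothetical, and the low-volume 2-of-10 provision described later in this record is collapsed into the plain percentage check for brevity.

```python
# A minimal sketch of the season onset/offset rule described above,
# assuming weekly (positives, total_tests) counts in chronological order.
# Function and variable names are illustrative, not from the source.

def percent_positive(positives: int, total: int) -> float:
    """Weekly percentage positivity: positives / total tests x 100."""
    return 100.0 * positives / total if total else 0.0

def season_onset_offset(weekly_counts, threshold=10.0):
    """Return (onset_index, offset_index) of the RSV season.

    Onset is the first of 2 consecutive weeks with >= threshold percent
    positivity; offset is the last of 2 consecutive weeks meeting the
    same criterion. Returns (None, None) if no such pair exists.
    """
    hot = [percent_positive(p, t) >= threshold for p, t in weekly_counts]
    onset = offset = None
    for i in range(len(hot) - 1):
        if hot[i] and hot[i + 1]:
            if onset is None:
                onset = i      # first week of the first qualifying pair
            offset = i + 1     # last week of the latest qualifying pair
    return onset, offset

# Example: a toy 10-week series; weeks 3-7 (0-indexed) exceed 10%.
weeks = [(2, 100), (5, 100), (8, 100), (15, 100), (30, 100),
         (40, 100), (25, 100), (12, 100), (6, 100), (3, 100)]
print(season_onset_offset(weeks))  # (3, 7)
```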
Nationally, the RSV season onset occurred in October/November of each year with offset occurring in March/April of the following year. The RSV season averaged 20 weeks and typically occurred earliest in the South and latest in the West. The onset, offset and duration varied considerably within the U.S. Department of Health and Human Services regions. RSV activity in Puerto Rico was elevated throughout the 2-year period studied. Median onsets in core-based statistical areas ranged from 2 weeks earlier to 5 weeks later than those in their corresponding states.
RESULTS
Substantial variability existed in the timing of RSV activity at all geographic strata analyzed. RSV actively circulated (ie, ≥10%) in many areas outside the traditionally defined RSV epidemic period of November to March.
CONCLUSIONS
[ "Disease Notification", "Disease Outbreaks", "Humans", "Laboratories", "Public Health Surveillance", "Respiratory Syncytial Virus Infections", "Seasons" ]
4025589
null
null
null
null
Regional Results
The RSV season typically began and ended earliest in the South and latest in the West over the 5 seasons studied (Table 2). Across the 4 census regions (Northeast, Midwest, South and West), the median onset varied across a 4-week period and the median offset varied between the groups by 3 weeks. The median peak in each of the groups varied over a range of 6 weeks. RSV activity peaked as early as the week ending December 29 (2007–2008 RSV season) in the South and as late as the week ending March 5 (2010–2011 RSV season) in the Midwest. The season duration ranged from 15 weeks (West in 2009–2010) to 19 weeks (Northeast in 2008–2009; Midwest in 2011–2012 and South in 2010–2011) and averaged 17–18 weeks in the 4 regions. As shown in Figure 3, onset, offset, peak and duration of the RSV seasons varied considerably between the HHS regions and FL compared with the national season characteristics. Across the 13 geographic groups (10 HHS regions, FL, national and national without FL and PR), the median onset varied across a 19-week period and the median offset varied between the groups by 6 weeks. The median peak in each of the groups varied over a range of 12 weeks. In Florida, an extended season was observed in each of the 5 seasons studied. The mean duration of the 5 RSV seasons in FL was approximately 29 weeks, 9 weeks longer than the national average (not including FL or PR) within the same period (Table 2). RSV activity in FL peaked as early as the week ending October 18 (2008–2009 RSV season) and as late as the week ending January 2 (2009–2010 RSV season). RSV season onset and offset range and median, national and HHS region levels* and FL, September 2007 to August 2012. HHS, US Department of Health and Human Services. Figure format source was taken from Mutuc and Langley.10 *Listed by region number and headquarters city. Region 1 (Boston): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. Region 2 (NY): New Jersey and NY. Region 3 (Philadelphia): Delaware, DC, Maryland, Pennsylvania, Virginia and West Virginia. Region 4 (Atlanta): Alabama, Georgia, Kentucky, Mississippi, North Carolina, South Carolina and Tennessee. Region 5 (Chicago): Illinois, Indiana, Michigan, Minnesota, OH and Wisconsin. Region 6 (Dallas): Arkansas, Louisiana, New Mexico, Oklahoma and Texas. Region 7 (Kansas City): Iowa, Kansas, Missouri and Nebraska. Region 8 (Denver): Colorado, Montana, North Dakota, South Dakota, Utah and Wyoming. Region 9 (San Francisco): Arizona, California, Hawaii and Nevada. Region 10 (Seattle): Alaska, Idaho, Oregon and Washington.
CONCLUSIONS
In summary, results from the RSVAlert program for 2007–2012 demonstrate substantial variability in the timing of RSV activity between and within geographic areas. This observation was noted at all geographic strata (census regions, HHS regions, states and CBSAs) over the 5 seasons studied. RSV actively circulated in many areas for periods of time that were either longer or shorter than the commonly described season (ie, onset in the fall and offset in the late winter or early spring). Local and regional variability in RSV circulation emphasizes the importance of local surveillance data in accurately assessing RSV activity.
[ "RESULTS", "National Results", "State and Local Trends", "ACKNOWLEDGMENTS" ]
[ " National Results RSV test data from 296 to 666 laboratories in all 50 states and DC were collected during the 2007–2008 through 2011–2012 RSV seasons (Table 1). Data collection and reporting from PR began with the 2010–2011 season. Weekly proportions of positive RSV tests in each of the 5 seasons are displayed in Figure 1. Across the 5 seasons, the total number of tests ranged from 527,653 to 678,649 and the total number of positive tests ranged from 67,819 to 84,094. Season onset and offset varied by up to 5 weeks over the 5 seasons. The median peak week was the week ending January 28 and occurred as early as the week ending January 3 and as late as February 5. Evaluation of the RSV season nationally showed that the duration of the RSV season ranged from 18 weeks in 2009–2010 to 20 weeks in 2007–2008, 2008–2009, 2010–2011 and 2011–2012 (Fig. 2). When data from FL and PR were excluded, the mean duration of the RSV season was shortened by 1 week over the 5 seasons with the onset time each season occurring during the same week or 1–2 weeks later (Table 2).\nRSVAlert Program Characteristics\nOnset, Offset and Duration of Significant RSV Activity by Region and Season\nProportions of RSV tests that were positive over 5 seasons, 2007–2008 through 2011–2012. Data from August 13 to August 6 of the following year are displayed for each season. If data collection for the RSVAlert program began or ended outside those weeks within a given season, data from those weeks are not displayed.\nRSV season characteristics based on RSV antigen, PCR and virus isolation test results, national level (including FL and PR). Black diamonds represent RSV season peak week.\nRSV test data from 296 to 666 laboratories in all 50 states and DC were collected during the 2007–2008 through 2011–2012 RSV seasons (Table 1). Data collection and reporting from PR began with the 2010–2011 season. Weekly proportions of positive RSV tests in each of the 5 seasons are displayed in Figure 1. Across the 5 seasons, the total number of tests ranged from 527,653 to 678,649 and the total number of positive tests ranged from 67,819 to 84,094. Season onset and offset varied by up to 5 weeks over the 5 seasons. The median peak week was the week ending January 28 and occurred as early as the week ending January 3 and as late as February 5. Evaluation of the RSV season nationally showed that the duration of the RSV season ranged from 18 weeks in 2009–2010 to 20 weeks in 2007–2008, 2008–2009, 2010–2011 and 2011–2012 (Fig. 2). When data from FL and PR were excluded, the mean duration of the RSV season was shortened by 1 week over the 5 seasons with the onset time each season occurring during the same week or 1–2 weeks later (Table 2).\nRSVAlert Program Characteristics\nOnset, Offset and Duration of Significant RSV Activity by Region and Season\nProportions of RSV tests that were positive over 5 seasons, 2007–2008 through 2011–2012. Data from August 13 to August 6 of the following year are displayed for each season. If data collection for the RSVAlert program began or ended outside those weeks within a given season, data from those weeks are not displayed.\nRSV season characteristics based on RSV antigen, PCR and virus isolation test results, national level (including FL and PR). Black diamonds represent RSV season peak week.\n Regional Results The RSV season typically began and ended earliest in the South and latest in the West over the 5 seasons studied (Table 2). 
Across the 4 census regions (Northeast, Midwest, South and West), the median onset varied across a 4-week period and the median offset varied between the groups by 3 weeks. The median peak in each of the groups varied over a range of 6 weeks. RSV activity peaked as early as the week ending December 29 (2007–2008 RSV season) in the South and as late as the week ending March 5 (2010–2011 RSV season) in the Midwest. The season duration ranged from 15 weeks (West in 2009–2010) to 19 weeks (Northeast in 2008–2009; Midwest in 2011–2012 and South in 2010–2011) and averaged 17–18 weeks in the 4 regions.\nAs shown in Figure 3, onset, offset, peak and duration of the RSV seasons varied considerably between the HHS regions and FL compared with the national season characteristics. Across the 13 geographic groups (10 HHS regions, FL, national and national without FL and PR), the median onset varied across a 19-week period and the median offset varied between the groups by 6 weeks. The median peak in each of the groups varied over a range of 12 weeks. In Florida, an extended season was observed in each of the 5 seasons studied. The mean duration of the 5 RSV seasons in FL was approximately 29 weeks, 9 weeks longer than the national average (not including FL or PR) within the same period (Table 2). RSV activity in FL peaked as early as the week ending October 18 (2008–2009 RSV season) and as late as the week ending January 2 (2009–2010 RSV season).\nRSV season onset and offset range and median, national and HHS region levels* and FL, September 2007 to August 2012. HHS, US Department of Health and Human Services. Figure format source was taken from Mutuc and Langley.10 *Listed by region number and headquarters city. Region 1 (Boston): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. Region 2 (NY): New Jersey and NY. Region 3 (Philadelphia): Delaware, DC, Maryland, Pennsylvania, Virginia and West Virginia. Region 4 (Atlanta): Alabama, Georgia, Kentucky, Mississippi, North Carolina, South Carolina and Tennessee. Region 5 (Chicago): Illinois, Indiana, Michigan, Minnesota, OH and Wisconsin. Region 6 (Dallas): Arkansas, Louisiana, New Mexico, Oklahoma and Texas. Region 7 (Kansas City): Iowa, Kansas, Missouri and Nebraska. Region 8 (Denver): Colorado, Montana, North Dakota, South Dakota, Utah and Wyoming. Region 9 (San Francisco): Arizona, California, Hawaii and Nevada. Region 10 (Seattle): Alaska, Idaho, Oregon and Washington.\nThe RSV season typically began and ended earliest in the South and latest in the West over the 5 seasons studied (Table 2). Across the 4 census regions (Northeast, Midwest, South and West), the median onset varied across a 4-week period and the median offset varied between the groups by 3 weeks. The median peak in each of the groups varied over a range of 6 weeks. RSV activity peaked as early as the week ending December 29 (2007–2008 RSV season) in the South and as late as the week ending March 5 (2010–2011 RSV season) in the Midwest. The season duration ranged from 15 weeks (West in 2009–2010) to 19 weeks (Northeast in 2008–2009; Midwest in 2011–2012 and South in 2010–2011) and averaged 17–18 weeks in the 4 regions.\nAs shown in Figure 3, onset, offset, peak and duration of the RSV seasons varied considerably between the HHS regions and FL compared with the national season characteristics. 
Across the 13 geographic groups (10 HHS regions, FL, national and national without FL and PR), the median onset varied across a 19-week period and the median offset varied between the groups by 6 weeks. The median peak in each of the groups varied over a range of 12 weeks. In Florida, an extended season was observed in each of the 5 seasons studied. The mean duration of the 5 RSV seasons in FL was approximately 29 weeks, 9 weeks longer than the national average (not including FL or PR) within the same period (Table 2). RSV activity in FL peaked as early as the week ending October 18 (2008–2009 RSV season) and as late as the week ending January 2 (2009–2010 RSV season).\nRSV season onset and offset range and median, national and HHS region levels* and FL, September 2007 to August 2012. HHS, US Department of Health and Human Services. Figure format source was taken from Mutuc and Langley.10 *Listed by region number and headquarters city. Region 1 (Boston): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. Region 2 (NY): New Jersey and NY. Region 3 (Philadelphia): Delaware, DC, Maryland, Pennsylvania, Virginia and West Virginia. Region 4 (Atlanta): Alabama, Georgia, Kentucky, Mississippi, North Carolina, South Carolina and Tennessee. Region 5 (Chicago): Illinois, Indiana, Michigan, Minnesota, OH and Wisconsin. Region 6 (Dallas): Arkansas, Louisiana, New Mexico, Oklahoma and Texas. Region 7 (Kansas City): Iowa, Kansas, Missouri and Nebraska. Region 8 (Denver): Colorado, Montana, North Dakota, South Dakota, Utah and Wyoming. Region 9 (San Francisco): Arizona, California, Hawaii and Nevada. Region 10 (Seattle): Alaska, Idaho, Oregon and Washington.\n State and Local Trends Several state and local trends were observed. In PR, where data collection began in the 2010–2011 season, the mean percentage of specimens testing positive for RSV was ≥10% (range, 12–62%) throughout each of the 2 seasons studied (Fig. 4).\nRSV activity in PR during the 2010–2011 and 2011–2012 seasons.\nWhen the median RSV season onsets from the CBSAs were compared with the median onsets of their respective states, results demonstrated onset differences between 1 week earlier and 8 weeks later than the median onset at the state level (Fig. 5). Average CBSA season durations ranged from 11 weeks shorter (the Youngstown-Warren-Boardman, OH-PA CBSA) to similar duration (the Dayton, OH CBSA) as their respective states. Among the 3 CBSAs that were analyzed in each state, the proximal CBSAs had a median onset that occurred from 3 weeks before (OH) to 6 weeks after (NY) the median onset of the index CBSAs. The average season duration for the proximal CBSAs was up to 3 weeks shorter (NY) to 6 weeks longer (OH) than the average season duration from the index CBSA. Differences similar to those between proximal and index CBSAs were observed between the distal and index CBSAs.\nRSV season onset and offset range and median, CBSA level. *Median onset and offset ranges overlap.\nSeveral state and local trends were observed. In PR, where data collection began in the 2010–2011 season, the mean percentage of specimens testing positive for RSV was ≥10% (range, 12–62%) throughout each of the 2 seasons studied (Fig. 4).\nRSV activity in PR during the 2010–2011 and 2011–2012 seasons.\nWhen the median RSV season onsets from the CBSAs were compared with the median onsets of their respective states, results demonstrated onset differences between 1 week earlier and 8 weeks later than the median onset at the state level (Fig. 
5). Average CBSA season durations ranged from 11 weeks shorter (the Youngstown-Warren-Boardman, OH-PA CBSA) to similar duration (the Dayton, OH CBSA) as their respective states. Among the 3 CBSAs that were analyzed in each state, the proximal CBSAs had a median onset that occurred from 3 weeks before (OH) to 6 weeks after (NY) the median onset of the index CBSAs. The average season duration for the proximal CBSAs was up to 3 weeks shorter (NY) to 6 weeks longer (OH) than the average season duration from the index CBSA. Differences similar to those between proximal and index CBSAs were observed between the distal and index CBSAs.\nRSV season onset and offset range and median, CBSA level. *Median onset and offset ranges overlap.", "RSV test data from 296 to 666 laboratories in all 50 states and DC were collected during the 2007–2008 through 2011–2012 RSV seasons (Table 1). Data collection and reporting from PR began with the 2010–2011 season. Weekly proportions of positive RSV tests in each of the 5 seasons are displayed in Figure 1. Across the 5 seasons, the total number of tests ranged from 527,653 to 678,649 and the total number of positive tests ranged from 67,819 to 84,094. Season onset and offset varied by up to 5 weeks over the 5 seasons. The median peak week was the week ending January 28 and occurred as early as the week ending January 3 and as late as February 5. Evaluation of the RSV season nationally showed that the duration of the RSV season ranged from 18 weeks in 2009–2010 to 20 weeks in 2007–2008, 2008–2009, 2010–2011 and 2011–2012 (Fig. 2). When data from FL and PR were excluded, the mean duration of the RSV season was shortened by 1 week over the 5 seasons with the onset time each season occurring during the same week or 1–2 weeks later (Table 2).\nRSVAlert Program Characteristics\nOnset, Offset and Duration of Significant RSV Activity by Region and Season\nProportions of RSV tests that were positive over 5 seasons, 2007–2008 through 2011–2012. Data from August 13 to August 6 of the following year are displayed for each season. If data collection for the RSVAlert program began or ended outside those weeks within a given season, data from those weeks are not displayed.\nRSV season characteristics based on RSV antigen, PCR and virus isolation test results, national level (including FL and PR). Black diamonds represent RSV season peak week.", "Several state and local trends were observed. In PR, where data collection began in the 2010–2011 season, the mean percentage of specimens testing positive for RSV was ≥10% (range, 12–62%) throughout each of the 2 seasons studied (Fig. 4).\nRSV activity in PR during the 2010–2011 and 2011–2012 seasons.\nWhen the median RSV season onsets from the CBSAs were compared with the median onsets of their respective states, results demonstrated onset differences between 1 week earlier and 8 weeks later than the median onset at the state level (Fig. 5). Average CBSA season durations ranged from 11 weeks shorter (the Youngstown-Warren-Boardman, OH-PA CBSA) to similar duration (the Dayton, OH CBSA) as their respective states. Among the 3 CBSAs that were analyzed in each state, the proximal CBSAs had a median onset that occurred from 3 weeks before (OH) to 6 weeks after (NY) the median onset of the index CBSAs. The average season duration for the proximal CBSAs was up to 3 weeks shorter (NY) to 6 weeks longer (OH) than the average season duration from the index CBSA. 
Differences similar to those between proximal and index CBSAs were observed between the distal and index CBSAs.\nRSV season onset and offset range and median, CBSA level. *Median onset and offset ranges overlap.", "The authors thank employees of MedImmune who worked collaboratively with the investigators in the design of the study, in analysis and interpretation of the data and reviewed and approved the article. Editorial assistance in formatting the article for submission was provided by John E. Fincke, PhD, and Gerard P. Johnson, PhD, of Complete Healthcare Communications, Inc. (Chadds Ford, PA) and funded by MedImmune. The authors would like to acknowledge Lisa Rose for her assistance with data acquisition and management and Christopher S. Ambrose, MD, for his critical review of the article.\nAuthors’ Contributions: All authors contributed to the study concept and design; analysis and interpretation of data; drafting of the article and critical revision of the article for important intellectual content. C.B.M., B.S. and L.E. contributed to the acquisition of data. All authors have read and approved the final article for submission." ]
[ "results", "results", null, null ]
[ "MATERIALS AND METHODS", "RESULTS", "National Results", "Regional Results", "State and Local Trends", "DISCUSSION", "CONCLUSIONS", "ACKNOWLEDGMENTS" ]
[ "The RSVAlert program is based on active data collection (ie, sites report even when no testing occurs and are reminded to report if the weekly reporting deadline is missed) from participating laboratories of RSV testing performed as a part of routine clinical practice. Laboratories with 1 or more of the following characteristics are invited to participate in the program on an annual basis: the laboratory is part of the National Association of Children’s Hospitals and Related Institutions; the laboratory is associated with a large children’s hospital; the laboratory is associated with a hospital that contains a neonatal and/or pediatric intensive care unit; the laboratory had a high volume of RSV tests reported in prior years (≥10 tests per week during RSV peak season) and had good reporting compliance (ie, at least 70%) in prior years. Geographic representation is also taken into account during the annual enrollment process. Certain sites were asked to participate to ensure representation across states and local community areas.\nFor each year analyzed, collection of surveillance data began in August or early September to the following August/September, with the exception of May 2010 to August 2010 when data were not collected due to a program reduction. Participating laboratories provided the total number of diagnostic RSV tests performed each week and the type of test performed [ie, antigen detection, virus isolation or polymerase chain reaction (PCR)]. If multiple (ie, confirmatory) tests were performed, laboratories were asked to report the initial test type and result. RSV test data from the previous week (ending on Saturday) were aggregated (weekly) according to national, state and local geographic regions. Data from each year were analyzed for seasonal trends at the national, regional, state and local levels. To characterize seasonal patterns on a large scale, data were stratified by the 4 census regions (Northeast, Midwest, South and West) and the 10 regions defined by the US Department of Health and Human Services (HHS). Data from Florida (FL) and PR were analyzed separately due to their uncharacteristically longer season durations.\nTo examine RSV seasonality at local levels, a sample of core-based statistical areas (CBSAs, formerly known as metropolitan statistical areas) were systematically selected for comparison. CBSAs consist of metropolitan (urban core of ≥50,000 people) and micropolitan (urban core of ≥10,000 to <50,000 people) statistical areas. Each area consists of ≥1 county and any adjacent counties with a high degree of social and economic integration with the urban core (as measured by commuting ties).14\nStates with ≥6 CBSAs reported in RSVAlert across all 5 seasons that each contained ≥2 participating laboratories within the CBSA (California, FL, Texas, North Carolina, New York (NY), Ohio (OH), Pennsylvania, South Carolina, Michigan and Washington) were identified. After excluding FL because it is examined in more detail elsewhere, 3 states were randomly selected by first assigning a random order rank 1–9 and then using a random number generator from 1 to 9 to determine that CBSAs within NY, North Carolina and OH would be reviewed. Within these states, trends between CBSAs with ≥2 participating laboratories were assessed. For this assessment, the most populous CBSA in a state was identified and designated as the index CBSA. Two additional CBSAs were selected for comparison to the index CBSA based on proximity to the index CBSA. 
The proximal CBSA was the most populous CBSA within 150 miles of the index CBSA. The distal CBSA was the most populous CBSA between 150 and 300 miles of the index CBSA.\nRSV season onset was derived from the National Respiratory and Enteric Virus Surveillance System definition10,15 and was the first of 2 consecutive weeks during which at least 2 of 10 specimens that were tested each week were positive for RSV or when at least 10% of ≥11 tests were reported to be RSV positive; season offset was defined as the last of 2 consecutive weeks when at least 2 of 10 specimens that were tested each week were positive for RSV or when at least 10% of ≥11 tests were reported RSV positive. Percentage positivity for a given week was calculated using the following formula: (number of RSV-positive tests that week ÷ total number of RSV tests performed that week) × 100.", " National Results RSV test data from 296 to 666 laboratories in all 50 states and DC were collected during the 2007–2008 through 2011–2012 RSV seasons (Table 1). Data collection and reporting from PR began with the 2010–2011 season. Weekly proportions of positive RSV tests in each of the 5 seasons are displayed in Figure 1. Across the 5 seasons, the total number of tests ranged from 527,653 to 678,649 and the total number of positive tests ranged from 67,819 to 84,094. Season onset and offset varied by up to 5 weeks over the 5 seasons. The median peak week was the week ending January 28 and occurred as early as the week ending January 3 and as late as February 5. Evaluation of the RSV season nationally showed that the duration of the RSV season ranged from 18 weeks in 2009–2010 to 20 weeks in 2007–2008, 2008–2009, 2010–2011 and 2011–2012 (Fig. 2). When data from FL and PR were excluded, the mean duration of the RSV season was shortened by 1 week over the 5 seasons with the onset time each season occurring during the same week or 1–2 weeks later (Table 2).\nRSVAlert Program Characteristics\nOnset, Offset and Duration of Significant RSV Activity by Region and Season\nProportions of RSV tests that were positive over 5 seasons, 2007–2008 through 2011–2012. Data from August 13 to August 6 of the following year are displayed for each season. If data collection for the RSVAlert program began or ended outside those weeks within a given season, data from those weeks are not displayed.\nRSV season characteristics based on RSV antigen, PCR and virus isolation test results, national level (including FL and PR). Black diamonds represent RSV season peak week.\n Regional Results The RSV season typically began and ended earliest in the South and latest in the West over the 5 seasons studied (Table 2). Across the 4 census regions (Northeast, Midwest, South and West), the median onset varied across a 4-week period and the median offset varied between the groups by 3 weeks. The median peak in each of the groups varied over a range of 6 weeks. RSV activity peaked as early as the week ending December 29 (2007–2008 RSV season) in the South and as late as the week ending March 5 (2010–2011 RSV season) in the Midwest. The season duration ranged from 15 weeks (West in 2009–2010) to 19 weeks (Northeast in 2008–2009; Midwest in 2011–2012 and South in 2010–2011) and averaged 17–18 weeks in the 4 regions.\nAs shown in Figure 3, onset, offset, peak and duration of the RSV seasons varied considerably between the HHS regions and FL compared with the national season characteristics. Across the 13 geographic groups (10 HHS regions, FL, national and national without FL and PR), the median onset varied across a 19-week period and the median offset varied between the groups by 6 weeks. The median peak in each of the groups varied over a range of 12 weeks. In Florida, an extended season was observed in each of the 5 seasons studied. The mean duration of the 5 RSV seasons in FL was approximately 29 weeks, 9 weeks longer than the national average (not including FL or PR) within the same period (Table 2). RSV activity in FL peaked as early as the week ending October 18 (2008–2009 RSV season) and as late as the week ending January 2 (2009–2010 RSV season).\nRSV season onset and offset range and median, national and HHS region levels* and FL, September 2007 to August 2012. HHS, US Department of Health and Human Services. Figure format source was taken from Mutuc and Langley.10 *Listed by region number and headquarters city. Region 1 (Boston): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. Region 2 (NY): New Jersey and NY. Region 3 (Philadelphia): Delaware, DC, Maryland, Pennsylvania, Virginia and West Virginia. Region 4 (Atlanta): Alabama, Georgia, Kentucky, Mississippi, North Carolina, South Carolina and Tennessee. Region 5 (Chicago): Illinois, Indiana, Michigan, Minnesota, OH and Wisconsin. Region 6 (Dallas): Arkansas, Louisiana, New Mexico, Oklahoma and Texas. Region 7 (Kansas City): Iowa, Kansas, Missouri and Nebraska. Region 8 (Denver): Colorado, Montana, North Dakota, South Dakota, Utah and Wyoming. Region 9 (San Francisco): Arizona, California, Hawaii and Nevada. Region 10 (Seattle): Alaska, Idaho, Oregon and Washington.\n State and Local Trends Several state and local trends were observed. In PR, where data collection began in the 2010–2011 season, the mean percentage of specimens testing positive for RSV was ≥10% (range, 12–62%) throughout each of the 2 seasons studied (Fig. 4).\nRSV activity in PR during the 2010–2011 and 2011–2012 seasons.\nWhen the median RSV season onsets from the CBSAs were compared with the median onsets of their respective states, results demonstrated onset differences between 1 week earlier and 8 weeks later than the median onset at the state level (Fig. 5). Average CBSA season durations ranged from 11 weeks shorter (the Youngstown-Warren-Boardman, OH-PA CBSA) to similar duration (the Dayton, OH CBSA) as their respective states. Among the 3 CBSAs that were analyzed in each state, the proximal CBSAs had a median onset that occurred from 3 weeks before (OH) to 6 weeks after (NY) the median onset of the index CBSAs. The average season duration for the proximal CBSAs was up to 3 weeks shorter (NY) to 6 weeks longer (OH) than the average season duration from the index CBSA. Differences similar to those between proximal and index CBSAs were observed between the distal and index CBSAs.\nRSV season onset and offset range and median, CBSA level. *Median onset and offset ranges overlap.", "RSV test data from 296 to 666 laboratories in all 50 states and DC were collected during the 2007–2008 through 2011–2012 RSV seasons (Table 1). Data collection and reporting from PR began with the 2010–2011 season. Weekly proportions of positive RSV tests in each of the 5 seasons are displayed in Figure 1. Across the 5 seasons, the total number of tests ranged from 527,653 to 678,649 and the total number of positive tests ranged from 67,819 to 84,094. Season onset and offset varied by up to 5 weeks over the 5 seasons. The median peak week was the week ending January 28 and occurred as early as the week ending January 3 and as late as February 5. Evaluation of the RSV season nationally showed that the duration of the RSV season ranged from 18 weeks in 2009–2010 to 20 weeks in 2007–2008, 2008–2009, 2010–2011 and 2011–2012 (Fig. 2). When data from FL and PR were excluded, the mean duration of the RSV season was shortened by 1 week over the 5 seasons with the onset time each season occurring during the same week or 1–2 weeks later (Table 2).\nRSVAlert Program Characteristics\nOnset, Offset and Duration of Significant RSV Activity by Region and Season\nProportions of RSV tests that were positive over 5 seasons, 2007–2008 through 2011–2012. Data from August 13 to August 6 of the following year are displayed for each season. If data collection for the RSVAlert program began or ended outside those weeks within a given season, data from those weeks are not displayed.\nRSV season characteristics based on RSV antigen, PCR and virus isolation test results, national level (including FL and PR). 
Black diamonds represent RSV season peak week.", "The RSV season typically began and ended earliest in the South and latest in the West over the 5 seasons studied (Table 2). Across the 4 census regions (Northeast, Midwest, South and West), the median onset varied across a 4-week period and the median offset varied between the groups by 3 weeks. The median peak in each of the groups varied over a range of 6 weeks. RSV activity peaked as early as the week ending December 29 (2007–2008 RSV season) in the South and as late as the week ending March 5 (2010–2011 RSV season) in the Midwest. The season duration ranged from 15 weeks (West in 2009–2010) to 19 weeks (Northeast in 2008–2009; Midwest in 2011–2012 and South in 2010–2011) and averaged 17–18 weeks in the 4 regions.\nAs shown in Figure 3, onset, offset, peak and duration of the RSV seasons varied considerably between the HHS regions and FL compared with the national season characteristics. Across the 13 geographic groups (10 HHS regions, FL, national and national without FL and PR), the median onset varied across a 19-week period and the median offset varied between the groups by 6 weeks. The median peak in each of the groups varied over a range of 12 weeks. In Florida, an extended season was observed in each of the 5 seasons studied. The mean duration of the 5 RSV seasons in FL was approximately 29 weeks, 9 weeks longer than the national average (not including FL or PR) within the same period (Table 2). RSV activity in FL peaked as early as the week ending October 18 (2008–2009 RSV season) and as late as the week ending January 2 (2009–2010 RSV season).\nRSV season onset and offset range and median, national and HHS region levels* and FL, September 2007 to August 2012. HHS, US Department of Health and Human Services. Figure format source was taken from Mutuc and Langley.10 *Listed by region number and headquarters city. Region 1 (Boston): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. Region 2 (NY): New Jersey and NY. Region 3 (Philadelphia): Delaware, DC, Maryland, Pennsylvania, Virginia and West Virginia. Region 4 (Atlanta): Alabama, Georgia, Kentucky, Mississippi, North Carolina, South Carolina and Tennessee. Region 5 (Chicago): Illinois, Indiana, Michigan, Minnesota, OH and Wisconsin. Region 6 (Dallas): Arkansas, Louisiana, New Mexico, Oklahoma and Texas. Region 7 (Kansas City): Iowa, Kansas, Missouri and Nebraska. Region 8 (Denver): Colorado, Montana, North Dakota, South Dakota, Utah and Wyoming. Region 9 (San Francisco): Arizona, California, Hawaii and Nevada. Region 10 (Seattle): Alaska, Idaho, Oregon and Washington.", "Several state and local trends were observed. In PR, where data collection began in the 2010–2011 season, the mean percentage of specimens testing positive for RSV was ≥10% (range, 12–62%) throughout each of the 2 seasons studied (Fig. 4).\nRSV activity in PR during the 2010–2011 and 2011–2012 seasons.\nWhen the median RSV season onsets from the CBSAs were compared with the median onsets of their respective states, results demonstrated onset differences between 1 week earlier and 8 weeks later than the median onset at the state level (Fig. 5). Average CBSA season durations ranged from 11 weeks shorter (the Youngstown-Warren-Boardman, OH-PA CBSA) to similar duration (the Dayton, OH CBSA) as their respective states. 
Among the 3 CBSAs that were analyzed in each state, the proximal CBSAs had a median onset that occurred from 3 weeks before (OH) to 6 weeks after (NY) the median onset of the index CBSAs. The average season duration for the proximal CBSAs was up to 3 weeks shorter (NY) to 6 weeks longer (OH) than the average season duration from the index CBSA. Differences similar to those between proximal and index CBSAs were observed between the distal and index CBSAs.\nRSV season onset and offset range and median, CBSA level. *Median onset and offset ranges overlap.", "This report describes RSV activity reported during 5 RSV seasons (2007–2008 through 2011–2012) at the local (ie, CBSA), state, regional and national levels. Our results underscore the variability in annual RSV season characteristics by season and at all geographic levels.\nAt the national level, RSV season onset and offset varied across the seasons by 0–5 weeks while season duration varied by 0–2 weeks in length. The findings are similar to those reported for RSVAlert during the 2004–2005 through 2006–2007 RSV seasons, which showed the season onset and offset varied by 1–4 weeks and duration by 4–5 weeks.12 Season onset and offset occurred later during 2009–2012 compared with 2007–2009, but this is not atypical; we previously reported onset occurring in the second half of November during the 2005–2006 RSV season.12 Preliminary data from RSVAlert and the CDC16 for the 2012–2013 RSV season also suggest a reversion to an earlier onset and offset, similar to that observed in the 2007–2009 RSV seasons. These observations highlight the variability in RSV season characteristics.\nVarying seasonal patterns of RSV activity were also observed between and within individual states. Analysis of CBSA-level data demonstrated a high degree of intra-state variability. Consistent with previous reports,15,17,18 our findings suggest that RSV circulation is primarily a local phenomenon. For example, previous reports10,19,20 have shown that the RSV season is longer in FL compared with all other states in the United States. In PR, RSV activity was year-round and lacked a discernable seasonal pattern (ie, the weekly percentage of specimens testing positive for RSV was ≥10% throughout the entire year in each of the 2 seasons studied). To our knowledge, this has not been reported elsewhere for RSV, although it is similar to other respiratory viruses, such as influenza, which have been shown to circulate seasonally in temperate regions of the world and aseasonally in tropical regions.21,22 RSV activity in PR should be monitored in the future to substantiate the observations reported here.\nA comparison of the current analysis with the data published by the CDC9,10,19,20 for the 2007–2009 and 2010–2012 RSV seasons (2009–2010 RSV season data were not published separately) showed that nationally (including FL) our data differed by up to 1 week for onset or offset times and up to 2 weeks in duration. When the CDC data from the HHS regions were compared (across the 2007–2009 and 2010–2012 RSV seasons and all HHS regions), our data differed by up to 4 weeks for onset or offset times and up to 5 weeks in duration. Comparisons at the CBSA level were not available because those data are not reported by the CDC. 
The primary difference between the methods of the CDC analysis and the current analysis is that the CDC defines RSV seasonal onset, offset and duration based solely on antigen test results,9,10,19,20 whereas RSVAlert uses the results of all RSV tests regardless of type (antigen, virus isolation and PCR). The minimal differences between the 2 approaches at the national level suggest that RSV test type does not significantly affect the determination of RSV seasonality. However, the larger differences at the regional level suggest that inclusion of all RSV test types may better characterize regional RSV seasonality. Differences between surveillance programs should be considered when comparing results.\nThe primary limitation of this analysis is the lack of standardization of clinical or laboratory methods across participating sites. The timing of RSV testing at individual sites may be influenced by provider perceptions of expected timing of RSV activity, which may bias RSV surveillance data patterns. The results of a large US study in 2006–2008 of infants presenting to emergency departments with lower respiratory tract illness or apnea suggested that RSV activity may be higher than generally appreciated in September to October in the MidAtlantic and Southeast regions.23 Although the current analysis demonstrated an early onset of RSV activity in the Northeast and South, it did not demonstrate high levels of RSV activity before late October. This difference suggests that surveillance data obtained via general clinical practices (ie, without protocols regarding whom to test and when) may not fully capture RSV activity. Although the recruitment strategy focused on capturing pediatric data, it is likely that there is a low proportion of data from nonpediatric hospitals, especially if sites were contacted to ensure geographic representation. Results may also be subject to nonresponse bias because program participation is voluntary; the data may be influenced by a participation bias and thus may differ from the total population. Moreover, participating laboratories in the RSVAlert program varied from season to season, which may account for some of the observed variability in RSV season characteristics.\nStrengths of the study include active data collection methods which allowed for more accurate, complete and timely data capture compared with methods that rely on voluntary reporting of data. Moreover, our large and geographically diverse sample of participating laboratories allows for a robust description of US RSV activity. Finally, the ability to examine local geographic areas (ie, CBSAs) provides additional insight into the nuances of geographic patterns of RSV activity in the United States.", "In summary, results from the RSVAlert program for 2007–2012 demonstrate substantial variability in the timing of RSV activity between and within geographic areas. This observation was noted at all geographic strata (census regions, HHS regions, states and CBSAs) over the 5 seasons studied. RSV actively circulated in many areas for periods of time that were either longer or shorter than the commonly described season (ie, onset in the fall and offset in the late winter or early spring). 
Local and regional variability in RSV circulation emphasizes the importance of local surveillance data in accurately assessing RSV activity.", "The authors thank employees of MedImmune who worked collaboratively with the investigators in the design of the study, in analysis and interpretation of the data and reviewed and approved the article. Editorial assistance in formatting the article for submission was provided by John E. Fincke, PhD, and Gerard P. Johnson, PhD, of Complete Healthcare Communications, Inc. (Chadds Ford, PA) and funded by MedImmune. The authors would like to acknowledge Lisa Rose for her assistance with data acquisition and management and Christopher S. Ambrose, MD, for his critical review of the article.\nAuthors’ Contributions: All authors contributed to the study concept and design; analysis and interpretation of data; drafting of the article and critical revision of the article for important intellectual content. C.B.M., B.S. and L.E. contributed to the acquisition of data. All authors have read and approved the final article for submission." ]
[ "materials|methods", "results", "results", "results", null, "discussion", "conclusions", null ]
[ "respiratory syncytial virus", "surveillance", "seasonality" ]
MATERIALS AND METHODS: The RSVAlert program is based on active data collection (ie, sites report even when no testing occurs and are reminded to report if the weekly reporting deadline is missed) from participating laboratories of RSV testing performed as a part of routine clinical practice. Laboratories with 1 or more of the following characteristics are invited to participate in the program on an annual basis: the laboratory is part of the National Association of Children’s Hospitals and Related Institutions; the laboratory is associated with a large children’s hospital; the laboratory is associated with a hospital that contains a neonatal and/or pediatric intensive care unit; the laboratory had a high volume of RSV tests reported in prior years (≥10 tests per week during RSV peak season) and had good reporting compliance (ie, at least 70%) in prior years. Geographic representation is also taken into account during the annual enrollment process. Certain sites were asked to participate to ensure representation across states and local community areas. For each year analyzed, collection of surveillance data began in August or early September to the following August/September, with the exception of May 2010 to August 2010 when data were not collected due to a program reduction. Participating laboratories provided the total number of diagnostic RSV tests performed each week and the type of test performed [ie, antigen detection, virus isolation or polymerase chain reaction (PCR)]. If multiple (ie, confirmatory) tests were performed, laboratories were asked to report the initial test type and result. RSV test data from the previous week (ending on Saturday) were aggregated (weekly) according to national, state and local geographic regions. Data from each year were analyzed for seasonal trends at the national, regional, state and local levels. To characterize seasonal patterns on a large scale, data were stratified by the 4 census regions (Northeast, Midwest, South and West) and the 10 regions defined by the US Department of Health and Human Services (HHS). Data from Florida (FL) and PR were analyzed separately due to their uncharacteristically longer season durations. To examine RSV seasonality at local levels, a sample of core-based statistical areas (CBSAs, formerly known as metropolitan statistical areas) were systematically selected for comparison. CBSAs consist of metropolitan (urban core of ≥50,000 people) and micropolitan (urban core of ≥10,000 to <50,000 people) statistical areas. Each area consists of ≥1 county and any adjacent counties with a high degree of social and economic integration with the urban core (as measured by commuting ties).14 States with ≥6 CBSAs reported in RSVAlert across all 5 seasons that each contained ≥2 participating laboratories within the CBSA (California, FL, Texas, North Carolina, New York (NY), Ohio (OH), Pennsylvania, South Carolina, Michigan and Washington) were identified. After excluding FL because it is examined in more detail elsewhere, 3 states were randomly selected by first assigning a random order rank 1–9 and then using a random number generator from 1 to 9 to determine that CBSAs within NY, North Carolina and OH would be reviewed. Within these states, trends between CBSAs with ≥2 participating laboratories were assessed. For this assessment, the most populous CBSA in a state was identified and designated as the index CBSA. Two additional CBSAs were selected for comparison to the index CBSA based on proximity to the index CBSA. 
The proximal CBSA was the most populous CBSA within 150 miles of the index CBSA. The distal CBSA was the most populous CBSA between 150 and 300 miles of the index CBSA. RSV season onset was derived from the National Respiratory and Enteric Virus Surveillance System definition10,15 and was the first of 2 consecutive weeks during which at least 2 of 10 specimens tested each week were positive for RSV or at least 10% of ≥11 tests were reported to be RSV positive; season offset was defined as the last of 2 consecutive weeks meeting the same criteria. Percentage positivity for a given week was calculated using the following formula: percentage positivity = (number of positive RSV tests in the week / total number of RSV tests performed in the week) × 100.
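To make this season rule concrete, here is a minimal Python sketch of the percentage-positivity formula and the onset/offset detection, under our reading of the definition above (a week qualifies when at least 2 of ≤10 tested specimens are positive, or when ≥10% of ≥11 tests are positive); the weekly counts are invented for illustration:

```python
# Minimal sketch of the NREVSS-derived season rule (hypothetical counts).
# A week shows "significant" RSV activity if >=2 of <=10 tested specimens
# are positive, or if >=10% of >=11 tests are positive; onset is the first
# of 2 consecutive qualifying weeks, offset the last of 2 consecutive ones.

def pct_positive(positive, total):
    """Percentage positivity = positive tests / total tests x 100."""
    return 100.0 * positive / total if total else 0.0

def week_qualifies(positive, total):
    if total <= 10:
        return positive >= 2
    return pct_positive(positive, total) >= 10.0

def season_onset_offset(weekly_counts):
    """weekly_counts: list of (positive, total) tuples in chronological
    order. Returns (onset_index, offset_index), or (None, None) if no
    2 consecutive weeks qualify."""
    active = [week_qualifies(p, t) for p, t in weekly_counts]
    onset = offset = None
    for i in range(len(active) - 1):
        if active[i] and active[i + 1]:
            if onset is None:
                onset = i        # first of the first consecutive pair
            offset = i + 1       # last of the latest consecutive pair
    return onset, offset

# Toy season: 8 reporting weeks of (positive, total) test counts
counts = [(0, 40), (3, 35), (6, 50), (9, 60), (12, 80), (7, 70), (2, 60), (1, 50)]
print(season_onset_offset(counts))  # -> (2, 5) for these invented counts
```

Applied at each geographic stratum, the same two-consecutive-week rule underlies the national, regional, state and CBSA onsets and offsets reported below.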
RESULTS: National Results
RSV test data from 296 to 666 laboratories in all 50 states and DC were collected during the 2007–2008 through 2011–2012 RSV seasons (Table 1). Data collection and reporting from PR began with the 2010–2011 season. Weekly proportions of positive RSV tests in each of the 5 seasons are displayed in Figure 1. Across the 5 seasons, the total number of tests ranged from 527,653 to 678,649 and the total number of positive tests ranged from 67,819 to 84,094. Season onset and offset varied by up to 5 weeks over the 5 seasons. The median peak week was the week ending January 28 and occurred as early as the week ending January 3 and as late as February 5. Evaluation of the RSV season nationally showed that the duration of the RSV season ranged from 18 weeks in 2009–2010 to 20 weeks in 2007–2008, 2008–2009, 2010–2011 and 2011–2012 (Fig. 2). When data from FL and PR were excluded, the mean duration of the RSV season was shortened by 1 week over the 5 seasons with the onset time each season occurring during the same week or 1–2 weeks later (Table 2).
[Table 1: RSVAlert Program Characteristics]
[Table 2: Onset, Offset and Duration of Significant RSV Activity by Region and Season]
[Figure 1: Proportions of RSV tests that were positive over 5 seasons, 2007–2008 through 2011–2012. Data from August 13 to August 6 of the following year are displayed for each season. If data collection for the RSVAlert program began or ended outside those weeks within a given season, data from those weeks are not displayed.]
[Figure 2: RSV season characteristics based on RSV antigen, PCR and virus isolation test results, national level (including FL and PR). Black diamonds represent RSV season peak week.]
Regional Results
The RSV season typically began and ended earliest in the South and latest in the West over the 5 seasons studied (Table 2). Across the 4 census regions (Northeast, Midwest, South and West), the median onset varied across a 4-week period and the median offset varied between the groups by 3 weeks. The median peak in each of the groups varied over a range of 6 weeks. RSV activity peaked as early as the week ending December 29 (2007–2008 RSV season) in the South and as late as the week ending March 5 (2010–2011 RSV season) in the Midwest. The season duration ranged from 15 weeks (West in 2009–2010) to 19 weeks (Northeast in 2008–2009; Midwest in 2011–2012 and South in 2010–2011) and averaged 17–18 weeks in the 4 regions. As shown in Figure 3, onset, offset, peak and duration of the RSV seasons varied considerably between the HHS regions and FL compared with the national season characteristics. Across the 13 geographic groups (10 HHS regions, FL, national and national without FL and PR), the median onset varied across a 19-week period and the median offset varied between the groups by 6 weeks. The median peak in each of the groups varied over a range of 12 weeks. In Florida, an extended season was observed in each of the 5 seasons studied. The mean duration of the 5 RSV seasons in FL was approximately 29 weeks, 9 weeks longer than the national average (not including FL or PR) within the same period (Table 2). RSV activity in FL peaked as early as the week ending October 18 (2008–2009 RSV season) and as late as the week ending January 2 (2009–2010 RSV season).
[Figure 3: RSV season onset and offset range and median, national and HHS region levels* and FL, September 2007 to August 2012. HHS, US Department of Health and Human Services. Figure format source was taken from Mutuc and Langley.10 *Listed by region number and headquarters city. Region 1 (Boston): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island and Vermont. Region 2 (NY): New Jersey and NY. Region 3 (Philadelphia): Delaware, DC, Maryland, Pennsylvania, Virginia and West Virginia. Region 4 (Atlanta): Alabama, Georgia, Kentucky, Mississippi, North Carolina, South Carolina and Tennessee. Region 5 (Chicago): Illinois, Indiana, Michigan, Minnesota, OH and Wisconsin. Region 6 (Dallas): Arkansas, Louisiana, New Mexico, Oklahoma and Texas. Region 7 (Kansas City): Iowa, Kansas, Missouri and Nebraska. Region 8 (Denver): Colorado, Montana, North Dakota, South Dakota, Utah and Wyoming. Region 9 (San Francisco): Arizona, California, Hawaii and Nevada. Region 10 (Seattle): Alaska, Idaho, Oregon and Washington.]
State and Local Trends
Several state and local trends were observed. In PR, where data collection began in the 2010–2011 season, the mean percentage of specimens testing positive for RSV was ≥10% (range, 12–62%) throughout each of the 2 seasons studied (Fig. 4). When the median RSV season onsets from the CBSAs were compared with the median onsets of their respective states, CBSA onsets ranged from 1 week earlier to 8 weeks later than the median onset at the state level (Fig. 5). Average CBSA season durations ranged from 11 weeks shorter than that of the respective state (the Youngstown-Warren-Boardman, OH-PA CBSA) to approximately the same duration (the Dayton, OH CBSA). Among the 3 CBSAs analyzed in each state, the proximal CBSAs had a median onset that occurred from 3 weeks before (OH) to 6 weeks after (NY) the median onset of the index CBSAs. The average season duration for the proximal CBSAs ranged from 3 weeks shorter (NY) to 6 weeks longer (OH) than that of the index CBSA. Differences similar to those between proximal and index CBSAs were observed between the distal and index CBSAs.
[Figure 4: RSV activity in PR during the 2010–2011 and 2011–2012 seasons.]
[Figure 5: RSV season onset and offset range and median, CBSA level. *Median onset and offset ranges overlap.]
DISCUSSION: This report describes RSV activity reported during 5 RSV seasons (2007–2008 through 2011–2012) at the local (ie, CBSA), state, regional and national levels. Our results underscore the variability in annual RSV season characteristics by season and at all geographic levels. At the national level, RSV season onset and offset varied across the seasons by 0–5 weeks, while season duration varied in length by 0–2 weeks. The findings are similar to those reported for RSVAlert during the 2004–2005 through 2006–2007 RSV seasons, which showed that season onset and offset varied by 1–4 weeks and duration by 4–5 weeks.12 Season onset and offset occurred later during 2009–2012 compared with 2007–2009, but this is not atypical; we previously reported onset occurring in the second half of November during the 2005–2006 RSV season.12 Preliminary data from RSVAlert and the CDC16 for the 2012–2013 RSV season also suggest a reversion to an earlier onset and offset, similar to that observed in the 2007–2009 RSV seasons. These observations highlight the variability in RSV season characteristics. Varying seasonal patterns of RSV activity were also observed between and within individual states. Analysis of CBSA-level data demonstrated a high degree of intra-state variability. Consistent with previous reports,15,17,18 our findings suggest that RSV circulation is primarily a local phenomenon. For example, previous reports10,19,20 have shown that the RSV season is longer in FL compared with all other states in the United States. In PR, RSV activity was year-round and lacked a discernible seasonal pattern (ie, the weekly percentage of specimens testing positive for RSV was ≥10% throughout the entire year in each of the 2 seasons studied). To our knowledge, this has not been reported elsewhere for RSV, although it is similar to other respiratory viruses, such as influenza, which have been shown to circulate seasonally in temperate regions of the world and aseasonally in tropical regions.21,22 RSV activity in PR should be monitored in the future to substantiate the observations reported here. A comparison of the current analysis with the data published by the CDC9,10,19,20 for the 2007–2009 and 2010–2012 RSV seasons (2009–2010 RSV season data were not published separately) showed that nationally (including FL) our data differed by up to 1 week for onset or offset times and up to 2 weeks in duration. When the CDC data from the HHS regions were compared (across the 2007–2009 and 2010–2012 RSV seasons and all HHS regions), our data differed by up to 4 weeks for onset or offset times and up to 5 weeks in duration. Comparisons at the CBSA level were not available because those data are not reported by the CDC. The primary difference between the methods of the CDC analysis and the current analysis is that the CDC defines RSV seasonal onset, offset and duration based solely on antigen test results,9,10,19,20 whereas RSVAlert uses the results of all RSV tests regardless of type (antigen, virus isolation and PCR). The minimal differences between the 2 approaches at the national level suggest that RSV test type does not significantly affect the determination of RSV seasonality. However, the larger differences at the regional level suggest that inclusion of all RSV test types may better characterize regional RSV seasonality. Differences between surveillance programs should be considered when comparing results.
The primary limitation of this analysis is the lack of standardization of clinical or laboratory methods across participating sites. The timing of RSV testing at individual sites may be influenced by provider perceptions of expected timing of RSV activity, which may bias RSV surveillance data patterns. The results of a large US study in 2006–2008 of infants presenting to emergency departments with lower respiratory tract illness or apnea suggested that RSV activity may be higher than generally appreciated in September to October in the Mid-Atlantic and Southeast regions.23 Although the current analysis demonstrated an early onset of RSV activity in the Northeast and South, it did not demonstrate high levels of RSV activity before late October. This difference suggests that surveillance data obtained via general clinical practices (ie, without protocols regarding whom to test and when) may not fully capture RSV activity. Although the recruitment strategy focused on capturing pediatric data, a small proportion of the data likely came from nonpediatric hospitals, especially where sites were recruited to ensure geographic representation. Results may also be subject to nonresponse and participation bias because program participation is voluntary; thus, the data may differ from those of the total population. Moreover, participating laboratories in the RSVAlert program varied from season to season, which may account for some of the observed variability in RSV season characteristics. Strengths of the study include active data collection methods, which allowed for more accurate, complete and timely data capture compared with methods that rely on voluntary reporting of data. Moreover, our large and geographically diverse sample of participating laboratories allows for a robust description of US RSV activity. Finally, the ability to examine local geographic areas (ie, CBSAs) provides additional insight into the nuances of geographic patterns of RSV activity in the United States. CONCLUSIONS: In summary, results from the RSVAlert program for 2007–2012 demonstrate substantial variability in the timing of RSV activity between and within geographic areas. This observation was noted at all geographic strata (census regions, HHS regions, states and CBSAs) over the 5 seasons studied. RSV actively circulated in many areas for periods of time that were either longer or shorter than the commonly described season (ie, onset in the fall and offset in the late winter or early spring). Local and regional variability in RSV circulation emphasizes the importance of local surveillance data in accurately assessing RSV activity. ACKNOWLEDGMENTS: The authors thank employees of MedImmune who worked collaboratively with the investigators in the design of the study and in the analysis and interpretation of the data, and who reviewed and approved the article. Editorial assistance in formatting the article for submission was provided by John E. Fincke, PhD, and Gerard P. Johnson, PhD, of Complete Healthcare Communications, Inc. (Chadds Ford, PA) and funded by MedImmune. The authors would like to acknowledge Lisa Rose for her assistance with data acquisition and management and Christopher S. Ambrose, MD, for his critical review of the article. Authors’ Contributions: All authors contributed to the study concept and design; analysis and interpretation of data; drafting of the article and critical revision of the article for important intellectual content. C.B.M., B.S. and L.E. contributed to the acquisition of data.
All authors have read and approved the final article for submission.
Background: Annual respiratory syncytial virus (RSV) outbreaks throughout the US exhibit variable patterns in onset, peak month of activity and duration of season. RSVAlert, a US surveillance system, collects and characterizes RSV test data at national, regional, state and local levels. Methods: RSV test data from 296 to 666 laboratories in 50 states, the District of Columbia and Puerto Rico (as of 2010) were collected during the 2007-2008 to 2011-2012 RSV seasons. Data were collected from early August/September to the following August/September each season. Participating laboratories provided the total number and types of RSV tests performed each week and test results. RSV season onset and offset were defined as the first and last, respectively, of 2 consecutive weeks during which the mean percentage of specimens testing positive for RSV was ≥10%. Results: Nationally, the RSV season onset occurred in October/November of each year with offset occurring in March/April of the following year. The RSV season averaged 20 weeks and typically occurred earliest in the South and latest in the West. The onset, offset and duration varied considerably within the U.S. Department of Health and Human Services regions. RSV activity in Puerto Rico was elevated throughout the 2-year period studied. Median onset in core-based statistical areas ranged from 2 weeks earlier to 5 weeks later than those in their corresponding states. Conclusions: Substantial variability existed in the timing of RSV activity at all geographic strata analyzed. RSV actively circulated (ie, ≥10%) in many areas outside the traditionally defined RSV epidemic period of November to March.
null
null
5,428
313
[ 2273, 320, 262, 165 ]
8
[ "rsv", "season", "weeks", "onset", "data", "rsv season", "week", "seasons", "median", "region" ]
[ "tests reported rsv", "assessing rsv", "laboratories rsv", "rsv tests seasons", "participating laboratories rsv" ]
null
null
null
null
null
[CONTENT] respiratory syncytial virus | surveillance | seasonality [SUMMARY]
[CONTENT] respiratory syncytial virus | surveillance | seasonality [SUMMARY]
[CONTENT] respiratory syncytial virus | surveillance | seasonality [SUMMARY]
null
null
null
[CONTENT] Disease Notification | Disease Outbreaks | Humans | Laboratories | Public Health Surveillance | Respiratory Syncytial Virus Infections | Seasons [SUMMARY]
[CONTENT] Disease Notification | Disease Outbreaks | Humans | Laboratories | Public Health Surveillance | Respiratory Syncytial Virus Infections | Seasons [SUMMARY]
[CONTENT] Disease Notification | Disease Outbreaks | Humans | Laboratories | Public Health Surveillance | Respiratory Syncytial Virus Infections | Seasons [SUMMARY]
null
null
null
[CONTENT] tests reported rsv | assessing rsv | laboratories rsv | rsv tests seasons | participating laboratories rsv [SUMMARY]
[CONTENT] tests reported rsv | assessing rsv | laboratories rsv | rsv tests seasons | participating laboratories rsv [SUMMARY]
[CONTENT] tests reported rsv | assessing rsv | laboratories rsv | rsv tests seasons | participating laboratories rsv [SUMMARY]
null
null
null
[CONTENT] rsv | season | weeks | onset | data | rsv season | week | seasons | median | region [SUMMARY]
[CONTENT] rsv | season | weeks | onset | data | rsv season | week | seasons | median | region [SUMMARY]
[CONTENT] rsv | season | weeks | onset | data | rsv season | week | seasons | median | region [SUMMARY]
null
null
null
[CONTENT] region | weeks | rsv | varied | median | groups | season | fl | south | rsv season [SUMMARY]
[CONTENT] rsv | variability | areas | regions | geographic | local | shorter commonly | activity geographic | studied rsv actively circulated | activity geographic areas [SUMMARY]
[CONTENT] rsv | season | weeks | median | data | region | onset | rsv season | week | seasons [SUMMARY]
null
null
null
[CONTENT] RSV | October/November of each year | March/April of the following year ||| RSV | 20 weeks | South | West ||| the U.S. Department of Health and Human Services ||| RSV | Puerto Rico | 2-year ||| 2 weeks earlier to 5 weeks later [SUMMARY]
[CONTENT] RSV ||| RSV | ie, ≥10% | RSV | November to March [SUMMARY]
[CONTENT] Annual | RSV | US | month | season ||| RSVAlert | US | RSV ||| RSV | 296 | 666 | 50 | the District of Columbia | Puerto Rico | 2010 | 2007-2008 | 2011-2012 ||| early August/September | August/September each season ||| RSV | each week ||| RSV | season | first | 2 consecutive weeks | RSV | ≥10% ||| ||| RSV | October/November of each year | March/April of the following year ||| RSV | 20 weeks | South | West ||| the U.S. Department of Health and Human Services ||| RSV | Puerto Rico | 2-year ||| 2 weeks earlier to 5 weeks later ||| RSV ||| RSV | ie, ≥10% | RSV | November to March [SUMMARY]
null
A novel nomogram provides improved accuracy for predicting biochemical recurrence after radical prostatectomy.
34133352
Various prediction tools have been developed to predict biochemical recurrence (BCR) after radical prostatectomy (RP); however, few of the previous prediction tools used serum prostate-specific antigen (PSA) nadir after RP and maximum tumor diameter (MTD) at the same time. In this study, a nomogram incorporating MTD and PSA nadir was developed to predict BCR-free survival (BCRFS).
BACKGROUND
A total of 337 patients who underwent RP between January 2010 and March 2017 were retrospectively enrolled in this study. The maximum diameter of the index lesion was measured on magnetic resonance imaging (MRI). Cox regression analysis was performed to evaluate independent predictors of BCR. A nomogram was subsequently developed for the prediction of BCRFS at 3 and 5 years after RP. Time-dependent receiver operating characteristic (ROC) curve and decision curve analyses were performed to identify the advantage of the new nomogram in comparison with the cancer of the prostate risk assessment post-surgical (CAPRA-S) score.
METHODS
A novel nomogram was developed to predict BCR by including PSA nadir, MTD, Gleason score, surgical margin (SM), and seminal vesicle invasion (SVI), considering these variables were significantly associated with BCR in both univariate and multivariate analyses (P < 0.05). In addition, a basic model including Gleason score, SM, and SVI was developed and used as a control to assess the incremental predictive power of the new model. The concordance index of our model was slightly higher than CAPRA-S model (0.76 vs. 0.70, P = 0.02) and it was significantly higher than that of the basic model (0.76 vs. 0.66, P = 0.001). Time-dependent ROC curve and decision curve analyses also demonstrated the advantages of the new nomogram.
RESULTS
PSA nadir after RP and MTD based on MRI before surgery are independent predictors of BCR. By incorporating PSA nadir and MTD into the conventional predictive model, our newly developed nomogram significantly improved the accuracy in predicting BCRFS after RP.
CONCLUSIONS
[ "Humans", "Male", "Neoplasm Grading", "Neoplasm Recurrence, Local", "Nomograms", "Prognosis", "Prostate-Specific Antigen", "Prostatectomy", "Prostatic Neoplasms", "Retrospective Studies", "Seminal Vesicles" ]
8280057
Introduction
Prostate cancer is among the most frequent cancers and the second leading cause of cancer death in men. It was estimated that there would be around 191,930 new cases of prostate cancer and 33,330 prostate cancer deaths in the United States in 2020.[1] Approximately 20% to 30% of patients experience biochemical recurrence (BCR) after radical prostatectomy (RP) during follow-up.[2–4] Various prediction tools for BCR have been developed to guide clinical decision-making for subsequent treatment. Most of these tools are based on clinical and pathological parameters such as pre-operative serum prostate-specific antigen (PSA), Gleason score, tumor stage, surgical margin (SM), extracapsular extension (ECE), seminal vesicle invasion (SVI), and lymph node invasion (LNI).[5–8] The cancer of the prostate risk assessment post-surgical (CAPRA-S) score is one of the most commonly used tools, with good discriminative accuracy and calibration.[7] However, only a few of these tools include tumor diameter and post-operative PSA nadir simultaneously, although the prognostic value of these two characteristics in predicting BCR has been verified.[9,10] Measurement of PSA is the cornerstone of post-operative follow-up. Serum PSA is expected to be undetectable within 6 weeks after RP, and a detectable PSA after RP is thought to be associated with residual cancer.[11] A persistent (detectable) PSA after RP has been shown to be a poor prognostic indicator of oncologic outcomes.[12] Magnetic resonance imaging (MRI) has been widely used for prostate cancer diagnosis, and the prognostic potential of MRI is constantly being explored with the advancement of radiographic technologies.[13,14] Maximum tumor diameter (MTD) has been demonstrated to be an independent predictor of BCR in patients after RP.[15] However, in most studies, MTD measurement was carried out on pathological specimens, and only a few measured MTD on MRI,[16] although the latter is considered more accurate and comparable. To our knowledge, no study addressing the relationship between MTD measured on MRI and BCR has been conducted. In this study, we aimed to assess the prognostic power of MTD from MRI in predicting BCR-free survival (BCRFS) after RP and to develop a new nomogram that incorporates MTD, PSA nadir, and other common perioperative variables.
Methods
Ethics approval
This study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee with a waiver of informed consent and compliant with the principles in the Declaration of Helsinki (S2019326).
Patients
Data of 542 patients who underwent laparoscopic RP for prostate cancer between January 2010 and March 2017 were retrospectively analyzed. The exclusion criteria were as follows: (1) patients with neoadjuvant therapy before surgery; (2) patients who had undergone transurethral resection of the prostate; (3) patients with unidentifiable lesions on MRI; (4) patients whose pathological results were not prostatic adenocarcinoma; and (5) incomplete follow-up data. Follow-ups were performed every 3 months for the first 2 years, semi-annually for the third and fourth year, and annually thereafter. The suspicious tumor lesions were identified according to comprehensive review of T2-weighted images, diffusion-weighted images, and apparent diffusion coefficient maps of MRI. MTD was defined as the largest tumor diameter of the index lesion on axial T2-weighted images. For multifocal cases, only the largest tumor nodule was measured for analysis. PSA nadir was defined as the lowest level of serum PSA in the first two follow-ups after RP without adjuvant androgen deprivation therapy or radiotherapy. BCR was defined as a post-operative PSA value >0.20 ng/mL in two consecutive measurements, and the recurrence date was assigned to the day when a PSA value >0.20 ng/mL was measured for the first time. BCRFS was calculated from the date of RP to the date of documented BCR or the date of last follow-up for patients who did not experience BCR. Other clinical and pathological data, such as age at RP, body mass index (BMI), pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI, were also collected for each patient.
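As a concrete illustration of the BCR and PSA nadir definitions above, here is a minimal Python sketch applied to a hypothetical post-operative PSA series (the 0.20 ng/mL threshold and the two-visit nadir window follow the definitions in the text; the series itself is invented):

```python
# Sketch of the BCR and PSA-nadir definitions for one hypothetical patient
# (PSA values in ng/mL, listed in follow-up order after RP).

BCR_THRESHOLD = 0.20   # BCR: PSA > 0.20 ng/mL in 2 consecutive measurements
UNDETECTABLE = 0.01    # undetectable PSA nadir: < 0.01 ng/mL

def psa_nadir(psa_series):
    """Lowest serum PSA over the first two follow-ups after RP."""
    return min(psa_series[:2])

def bcr_index(psa_series):
    """Index of the first of 2 consecutive PSA values > 0.20 ng/mL
    (the recurrence date is that first measurement), or None."""
    for i in range(len(psa_series) - 1):
        if psa_series[i] > BCR_THRESHOLD and psa_series[i + 1] > BCR_THRESHOLD:
            return i
    return None  # no documented BCR: censor at last follow-up

follow_up_psa = [0.005, 0.02, 0.09, 0.25, 0.31, 0.52]
nadir = psa_nadir(follow_up_psa)
print(f"nadir = {nadir}, undetectable = {nadir < UNDETECTABLE}")  # 0.005, True
print(f"BCR at follow-up index {bcr_index(follow_up_psa)}")       # 3
```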
Statistical analysis
Means, standard deviations, medians, and interquartile ranges (IQR) were reported for continuous variables. Numbers and proportions were reported for categorical variables. The Mann-Whitney U test and Chi-squared test were applied for between-group comparisons. BCRFS was estimated using Kaplan-Meier curves and the log-rank test. MTD was categorized into ≤2.9 and >2.9 cm; the cutoff value of MTD that best discriminated low and high risk for BCR was estimated by the maximally selected test with the “maxstat” package of R software.[17] PSA nadir was categorized into undetectable and detectable PSA, where an undetectable PSA was defined as a PSA nadir <0.01 ng/mL. Univariable and multivariable Cox proportional hazards regression models were used to identify significant predictors of BCR. A nomogram predicting BCRFS at 3 and 5 years after RP was developed based on the multivariable model. For validation of the nomogram, a bootstrap technique (1000 bootstrap resamples) was used for internal validation to assess discrimination and calibration. The concordance index (c-index) was used to assess discrimination. A calibration curve was plotted to assess calibration, graphically revealing the relationship between the predicted probability of BCR and the actually observed events. Additionally, we compared our newly developed nomogram with the CAPRA-S score using a one-shot non-parametric approach; comparison of the two models was performed with the “compareC” package of R software.[18] Time-dependent ROC curves were illustrated using the “survivalROC” package.[19] Decision curve analyses at 3 and 5 years were performed to ascertain the clinical value of the new nomogram. Statistical analyses were performed with R software (version 3.6.2, R Foundation for Statistical Computing, Vienna, Austria) and GraphPad Prism (version 7.00, GraphPad Software, San Diego, CA, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.
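The MTD cutoff search can be sketched as a maximally selected log-rank scan. The study used R's "maxstat" package; the Python version below (using lifelines, on synthetic data) is only an approximation of that approach, and it omits the p-value correction for maximal selection that maxstat provides:

```python
# Approximate maximally selected log-rank scan for an MTD cutoff
# (Python/lifelines stand-in for R's maxstat; data are synthetic).
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 337
df = pd.DataFrame({
    "mtd_cm": rng.uniform(1.0, 6.0, n),         # tumor diameter on MRI
    "months": rng.exponential(50, n).round(1),  # follow-up time
    "bcr": rng.integers(0, 2, n),               # 1 = biochemical recurrence
})

best_cut, best_stat = None, -np.inf
# Scan candidate cutoffs between the 10th and 90th MTD percentiles
for cut in np.quantile(df["mtd_cm"], np.linspace(0.10, 0.90, 33)):
    low, high = df[df["mtd_cm"] <= cut], df[df["mtd_cm"] > cut]
    res = logrank_test(low["months"], high["months"],
                       event_observed_A=low["bcr"],
                       event_observed_B=high["bcr"])
    if res.test_statistic > best_stat:
        best_cut, best_stat = cut, res.test_statistic

# Caution: the naive p-value of the best split is optimistic; maxstat
# applies a correction for the maximal selection that is skipped here.
print(f"best cutoff ~ {best_cut:.2f} cm (log-rank chi2 = {best_stat:.2f})")
```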
Results
Patients’ characteristics
Overall, 337 patients were included in this study, and their demographic and clinical characteristics are shown in Table 1. The median follow-up time was 42 months (IQR, 19–64 months), and 100 (29.7%) patients developed BCR during follow-up. The median age of all patients was 71 years (IQR, 65–75 years), with a median BMI of 24.6 kg/m2 (IQR, 22.8–26.6 kg/m2). The median pre-operative PSA was 10.8 ng/mL (IQR, 7.3–19.1 ng/mL), and patients were divided into three groups: <10 ng/mL, 10 to 20 ng/mL, and >20 ng/mL. The majority of patients had a PSA nadir <0.01 ng/mL (n = 242, 71.8%), while 95 (28.2%) patients had a PSA nadir ≥0.01 ng/mL. The median PSA nadir was 0 (IQR, 0–0.01) ng/mL. The median MTD was 3.09 cm (IQR, 2.24–3.91 cm), with 45.1% of patients having MTD ≤2.9 cm and 54.9% having MTD >2.9 cm. A comparison of clinical parameters between patients who did and did not experience BCR is shown in Table 2.
[Table 1: Characteristics of prostate cancer patients treated by RP (N = 337). BCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy; SD: Standard deviation; SM: Surgical margin; SVI: Seminal vesicle invasion.]
[Table 2: Comparison of clinical parameters between prostate cancer patients with or without BCR. ∗Z values. †χ2 values. BCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SD: Standard deviation; SM: Surgical margin; SVI: Seminal vesicle invasion.]
Development and evaluation of the novel nomogram
To identify significant predictors of BCR, we evaluated age, BMI, pre-operative PSA, Gleason score, SM, ECE, SVI, PSA nadir, and MTD in a univariable Cox proportional hazards regression model; the results are shown in Table 3.
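A univariable screen of this kind can be sketched in a few lines; the version below uses Python's lifelines on synthetic stand-in data (the study itself used R), so the variables and outputs are illustrative only:

```python
# Sketch: univariable Cox screen over candidate predictors of BCR
# (lifelines on synthetic data; illustrates the step, not the study results).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 337
df = pd.DataFrame({
    "age": rng.normal(71, 6, n).round(),
    "bmi": rng.normal(24.6, 2.8, n).round(1),
    "psa_nadir_detectable": rng.integers(0, 2, n),  # PSA nadir >= 0.01 ng/mL
    "mtd_gt_2_9cm": rng.integers(0, 2, n),          # MTD > 2.9 cm
    "months": rng.exponential(50, n).round(1),
    "bcr": rng.integers(0, 2, n),
})

# Fit one Cox model per candidate and report its hazard ratio and P value
for var in ["age", "bmi", "psa_nadir_detectable", "mtd_gt_2_9cm"]:
    cph = CoxPHFitter().fit(df[[var, "months", "bcr"]],
                            duration_col="months", event_col="bcr")
    hr = float(np.exp(cph.params_[var]))
    p = float(cph.summary.loc[var, "p"])
    print(f"{var}: HR = {hr:.2f}, P = {p:.3f}")
```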
Except for age and BMI, all predictors were statistically significantly associated with BCR after RP (P < 0.01).
[Table 3: Univariable and multivariable Cox regression analyses of BCRFS. BCRFS: BCR-free survival; BMI: Body mass index; CI: Confidence interval; ECE: Extracapsular extension; HR: Hazard ratio; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SM: Surgical margin; SVI: Seminal vesicle invasion.]
As shown in Figure 1, Kaplan-Meier curves were stratified by PSA nadir (<0.01 vs. ≥0.01 ng/mL) [Figure 1B], MTD (≤2.9 vs. >2.9 cm) [Figure 1C], and the combination of PSA nadir and MTD (0 risk factors: PSA nadir <0.01 ng/mL and MTD ≤2.9 cm; 1 risk factor: PSA nadir <0.01 ng/mL and MTD >2.9 cm, or PSA nadir ≥0.01 ng/mL and MTD ≤2.9 cm; 2 risk factors: PSA nadir ≥0.01 ng/mL and MTD >2.9 cm) [Figure 1D]; patients with a detectable PSA and/or MTD >2.9 cm had significantly shorter BCRFS (log-rank P < 0.001).
[Figure 1: (A) Kaplan-Meier curves of BCRFS for the whole patient population, (B) patients grouped by PSA nadir (<0.01 vs. ≥0.01 ng/mL), (C) MTD (≤2.9 vs. >2.9 cm), and (D) a combination of PSA nadir and MTD. BCR: Biochemical recurrence; BCRFS: BCR-free survival; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy.]
The predictors that were significant in univariable analyses were then assessed in a multivariable Cox regression model; pre-operative PSA and ECE did not retain their significance and were excluded (P > 0.05) [Table 3]. Finally, PSA nadir and MTD, as well as Gleason score, SM, and SVI, were independent predictors of BCR in the multivariable Cox regression analysis (P < 0.05). These variables were incorporated into a nomogram predicting BCRFS at 3 and 5 years after RP [Figure 2], which yielded a c-index of 0.76 (95% confidence interval [CI], 0.71–0.81). The calibration plots of the nomogram are shown in Figure 3, illustrating how the predicted probability of BCRFS compared with the actual outcomes.
[Figure 2: Nomogram predicting BCRFS at 3 and 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; PSA: Prostate-specific antigen; RP: Radical prostatectomy.]
[Figure 3: Calibration plot of the nomogram predicting BCRFS at (A) 3 years and (B) 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; RP: Radical prostatectomy.]
The c-index of the CAPRA-S score was 0.70 (95% CI, 0.64–0.75) in our study cohort, slightly lower than that of our nomogram (P = 0.022). To further verify the prognostic power of the combination of PSA nadir and MTD, we developed a basic model including Gleason score, SM, and SVI. It yielded a c-index of 0.66 (95% CI, 0.60–0.71), significantly lower than the c-index of the new nomogram (P = 0.001). Time-dependent ROC curve and decision curve analyses compared the new nomogram, the CAPRA-S score, and the basic model [Figures 4 and 5]. Our new nomogram showed an advantage in identifying patients with BCRFS in both the time-dependent ROC curve and decision curve analyses.
[Figure 4: Time-dependent ROC curves comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; ROC: Receiver operating characteristic; RP: Radical prostatectomy.]
[Figure 5: Decision curve analyses comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; RP: Radical prostatectomy.]
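For readers who want to reproduce this kind of evaluation, here is a minimal sketch of the multivariable fit plus a bootstrap-averaged c-index, again in Python/lifelines on synthetic data (the study used R with 1000 resamples; the published coefficients and the CAPRA-S comparison via "compareC" are not reproduced here):

```python
# Sketch: multivariable Cox model with a bootstrap-averaged c-index
# (synthetic data; mirrors the workflow, not the published model).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
n = 337
df = pd.DataFrame({
    "psa_nadir_detectable": rng.integers(0, 2, n),
    "mtd_gt_2_9cm": rng.integers(0, 2, n),
    "gleason_high": rng.integers(0, 2, n),
    "positive_sm": rng.integers(0, 2, n),
    "svi": rng.integers(0, 2, n),
    "months": rng.exponential(50, n).round(1),
    "bcr": rng.integers(0, 2, n),
})

cph = CoxPHFitter().fit(df, duration_col="months", event_col="bcr")
print(f"apparent c-index: {cph.concordance_index_:.3f}")

# Refit on bootstrap resamples and score each refit on the full cohort
# (200 resamples here for speed; the paper used 1000).
scores = []
for _ in range(200):
    boot = df.sample(n=len(df), replace=True)
    cb = CoxPHFitter().fit(boot, duration_col="months", event_col="bcr")
    risk = np.ravel(cb.predict_partial_hazard(df))
    # higher partial hazard means higher risk, so negate for concordance
    scores.append(concordance_index(df["months"], -risk, df["bcr"]))
print(f"bootstrap-averaged c-index: {np.mean(scores):.3f}")
```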
null
null
[ "Ethics approval", "Patients", "Statistical analysis", "Development and evaluation of the novel nomogram", "Acknowledgements", "Funding" ]
[ "This study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee with a waiver of informed consent and compliant with the principles in the Declaration of Helsinki (S2019326).", "Data of 542 patients who underwent laparoscopic RP for prostate cancer between January 2010 and March 2017 were retrospectively analyzed. The exclusion criteria were as follows: (1) patients with neoadjuvant therapy before surgery; (2) patients who had undergone transurethral resection of the prostate; (3) patients with unidentifiable lesions on MRI; (4) patients whose pathological results were not prostatic adenocarcinoma; and (5) incomplete follow-up data. Follow-ups were performed every 3 months for the first 2 years, semi-annually for the third and fourth year, and annually thereafter.\nThe suspicious tumor lesions were identified according to comprehensive understanding of T2-weighted images, diffusion weighted images, and apparent diffusion coefficient maps of MRI. MTD was defined as the largest tumor diameter of index lesion on axial T2-weighted images. For multifocal cases, only the largest tumor nodule was measured for analysis. PSA nadir was defined as the lowest level of serum PSA in the first two follow-ups after RP without adjuvant androgen deprivation therapy or radiotherapy. BCR was defined as post-operative PSA value >0.20 ng/mL in two consecutive measurements, and the recurrence date was assigned to the day when PSA value >0.20 ng/mL was measured for the first time. BCRFS was calculated from date of RP to date of documented BCR or date of last follow-up for those patients who did not experience BCR. Other clinical and pathological data, such as age at RP, body mass index (BMI), pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI, were also collected for each patient.", "Means, standard deviation, median, and interquartile ranges (IQR) were reported for continuous variables. Numbers and proportions were reported for categorical variables. The Mann-Whitney U test and Chi-square test were applied for between-group comparison. BCRFS was estimated using the Kaplan-Meier curves and log-rank test. MTD was categorized into ≤2.9 and >2.9 cm. The cutoff value of MTD that best discriminated low- and high-risk for BCR was estimated by maximally selected test with the “maxstat” package of R software.[17] PSA nadir was categorized into undetectable and detectable PSA. An undetectable PSA was defined as a PSA nadir <0.01 ng/mL. Univariable and multivariable Cox proportional hazards regression models were used to identify significant predictors of BCR. A nomogram predicting BCRFS at 3 and 5 years after RP was developed based on the multivariable model. For the validation of the nomogram, a bootstrap technique (1000 bootstrap resamples) was used for internal validation to assess the discrimination and calibration. The concordance index (c-index) was used to assess the discrimination. The calibration curve was made to assess the calibration which graphically revealed the relationship between predicted probability of BCR and actual observed events. Additionally, we compared our newly developed nomogram to the CAPRA-S score with one-shot non-parametric approach, and comparison of the two models was performed using the “compareC” package of R software.[18] Time-dependent ROC curves were illustrated using the “survivalROC” package.[19] Decision curve analyses at 3 and 5 years were performed to ascertain the clinical value of the new nomogram. 
Statistical analyses were performed with the R software (version 3.6.2, R Foundation for Statistical Computing, Vienna, Austria) and GraphPad Prism (version 7.00, GraphPad Software, San Diego, CA, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.", "To identify significant predictors of BCR, we evaluated age, BMI, pre-operative PSA, Gleason score, SM, ECE, SVI, PSA nadir, and MTD in a univariable Cox proportional hazards regression model and the results are shown in Table 3. Except for age and BMI, all predictors were statistically significantly associated with BCR after RP (P < 0.01).\nUnivariable and multivariable Cox regression analyses of BCRFS.\nBCRFS: BCR-free survival; BMI: Body mass index; CI: Confidence interval; ECE: Extracapsular extension; HR: Hazard ratio; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SM: Surgical margin; SVI: Seminal vesicle invasion.\nAs shown in Figure 1, Kaplan-Meier curves were stratified by PSA nadir (<0.01 vs. ≥0.01 ng/mL) [Figure 1B], MTD (≤2.9 vs. >2.9 cm) [Figure 1C], and the combination of PSA nadir and MTD (0 risk factor: PSA nadir <0.01 ng/mL and MTD ≤2.9 cm; one risk factor: PSA nadir <0.01 ng/mL and MTD >2.9 cm or PSA nadir ≥0.01 ng/mL and MTD ≤2.9 cm; two risk factors: PSA nadir ≥0.01 ng/mL and MTD >2.9 cm) [Figure 1D] and showed that the patients with detectable PSA or/and MTD >2.9 cm had significantly shorter BCRFS (log-rank P < 0.001).\n(A) Kaplan-Meier curves of BCRFS for the whole patient population, (B) patients grouped by PSA nadir (<0.01 vs. ≥0.01 ng/mL), (C) MTD (≤2.9 vs. >2.9 cm), and (D) a combination of PSA nadir and MTD. BCR: Biochemical recurrence; BCRFS: BCR-free survival; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy.\nThese significant predictors in univariable analyses were then assessed in a multivariable Cox regression model, and pre-operative PSA and ECE did not retain their significance and were excluded (P > 0.05) [Table 3]. Finally, PSA nadir and MTD, as well as Gleason score, SM, and SVI, were independent predictors of BCR in multivariable Cox regression analysis (P < 0.05). These variables were incorporated in a nomogram predicting BCRFS at 3 and 5 years after RP [Figure 2], which yielded a c-index of 0.76 (95% confidence interval [CI], 0.71–0.81). The calibration plots of the nomogram are shown in Figure 3 illustrating how the predicted probability of BCRFS compared with the actual outcomes.\nNomogram predicting BCRFS at 3 and 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; PSA: Prostate-specific antigen; RP: Radical prostatectomy.\nCalibration plot of the nomogram predicting BCRFS at (A) 3 years and (B) 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; RP: Radical prostatectomy.\nThe c-index of the CAPRA-S score was 0.70 (95% CI, 0.64–0.75) in our study cohort, which is slightly lower than that of our nomogram (P = 0.022). To further verify the prognostic power of the combination of PSA nadir and MTD, we developed a basic model including Gleason score, SM, and SVI. It yielded a c-index of 0.66 (95% CI, 0.60–0.71), which was significantly lower than the c-index of the new nomogram (P = 0.001). The time-dependent ROC curve and decision curve analyses compared the new nomogram, the CAPRA-S score, and the basic model [Figures 4 and 5]. 
Our new nomogram showed an advantage in identifying patients with BCRFS in both time-dependent ROC curve and decision curve analyses.\nTime-dependent ROC curves comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; ROC: Receiver operating characteristic; RP: Radical prostatectomy.\nDecision curve analyses comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; RP: Radical prostatectomy.", "The authors thank Mr. Li-Yuan Tao from the Research Center of Clinical Epidemiology of Peking University Third Hospital for the help in statistical analyses.", "This work was supported by grants from the Beijing Natural Science Foundation (No. Z200027), National Natural Science Foundation of China (No. 61871004), National Key R&D Program of China (No. 2018YFC0115900), Innovation & Transfer Fund of Peking University Third Hospital (No. BYSYZHKC2020111), and Peking University Medicine Fund of Fostering Young Scholars’ Scientific & Technological Innovation (No. BMU2020PYB002). Funds were used for the collection and analysis of data." ]
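The statistical-analysis text above reports an MTD cutoff of 2.9 cm chosen by a maximally selected test with the “maxstat” package. The sketch below shows how such a cutoff is typically estimated; it runs on simulated data, and the variable names (months, bcr, mtd) are illustrative assumptions rather than the study's actual columns.

```r
# Minimal sketch, assuming simulated data: estimating the MTD cutoff that
# best separates low- and high-risk patients for BCR with maximally
# selected rank statistics, as described for the "maxstat" package.
library(survival)
library(maxstat)

set.seed(42)
n      <- 337
mtd    <- round(runif(n, 0.5, 6.5), 2)               # tumor diameter on MRI, cm
months <- rexp(n, rate = 0.010 + 0.015 * (mtd > 3))  # shorter time-to-BCR for large tumors
bcr    <- rbinom(n, 1, 0.3)                          # 1 = biochemical recurrence observed
d      <- data.frame(months, bcr, mtd)

# Maximally selected log-rank statistic over all candidate cutoffs of MTD
mt <- maxstat.test(Surv(months, bcr) ~ mtd, data = d,
                   smethod = "LogRank", pmethod = "Lau92")
mt$estimate  # estimated cutpoint; the paper reports 2.9 cm on its own cohort
```

Kaplan-Meier curves for groups split at the estimated cutpoint can then be compared with survdiff() from the survival package, which is the log-rank test the text describes.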
[ null, "subjects", null, null, null, null ]
[ "Introduction", "Methods", "Ethics approval", "Patients", "Statistical analysis", "Results", "Patients’ characteristics", "Development and evaluation of the novel nomogram", "Discussion", "Acknowledgements", "Funding", "Conflicts of interest" ]
[ "Prostate cancer is among the most frequent cancers and the second leading cause of mortality in men. It is estimated that there might be around 191,930 new cases of prostate cancer and 33,330 deaths in the United States in 2020.[1] Approximately, 20% to 30% of the patients experience biochemical recurrence (BCR) after radical prostatectomy (RP) during follow-up.[2–4] Various prediction tools for BCR have been developed to guide the clinical decision-making for subsequent treatment. Most of these tools are developed based on clinical and pathological parameters such as pre-operative serum prostate-specific antigen (PSA), Gleason score, tumor stage, surgical margin (SM), extracapsular extension (ECE), seminal vesicle invasion (SVI), and lymph node invasion (LNI).[5–8] The cancer of the prostate risk assessment post-surgical (CAPRA-S) score is one of the most commonly used tools with good discriminative accuracy and calibration.[7] However, only few of these tools include tumor diameter and post-operative PSA nadir, simultaneously, although the prognostic value of these two characteristics in predicting BCR has been verified.[9,10]\nMeasurement of PSA is the cornerstone in post-operative follow-up. Serum PSA is expected to be undetectable within 6 weeks after RP and a detectable PSA in patients after RP is thought to be associated with residual cancer.[11] A persistent (detectable) PSA after RP has been proved to be a poor prognostic indicator of oncologic outcomes.[12]\nMagnetic resonance imaging (MRI) has been widely used for prostate cancer diagnosis, and the prognostic potential of MRI is constantly being explored with the advancement of radiographic technologies.[13,14] Maximum tumor diameter (MTD) has been demonstrated to be an independent predictor of BCR in patients after RP.[15] However, in most studies, MTD measurement was carried out on the pathological specimens and only few of them measured MTD on MRI,[16] while the latter is considered to be more accurate and comparable. To our knowledge, no study addressing the relationship between MTD measured on MRI and BCR was conducted.\nIn this study, we aim to assess the prognostic power of MTD from MRI in predicting BCR-free survival (BCRFS) after RP and develop a new nomogram that incorporates MTD, PSA nadir, and other common perioperative variables.", "Ethics approval This study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee with a waiver of informed consent and compliant with the principles in the Declaration of Helsinki (S2019326).\nThis study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee with a waiver of informed consent and compliant with the principles in the Declaration of Helsinki (S2019326).\nPatients Data of 542 patients who underwent laparoscopic RP for prostate cancer between January 2010 and March 2017 were retrospectively analyzed. The exclusion criteria were as follows: (1) patients with neoadjuvant therapy before surgery; (2) patients who had undergone transurethral resection of the prostate; (3) patients with unidentifiable lesions on MRI; (4) patients whose pathological results were not prostatic adenocarcinoma; and (5) incomplete follow-up data. 
Follow-ups were performed every 3 months for the first 2 years, semi-annually for the third and fourth year, and annually thereafter.\nThe suspicious tumor lesions were identified according to comprehensive understanding of T2-weighted images, diffusion weighted images, and apparent diffusion coefficient maps of MRI. MTD was defined as the largest tumor diameter of index lesion on axial T2-weighted images. For multifocal cases, only the largest tumor nodule was measured for analysis. PSA nadir was defined as the lowest level of serum PSA in the first two follow-ups after RP without adjuvant androgen deprivation therapy or radiotherapy. BCR was defined as post-operative PSA value >0.20 ng/mL in two consecutive measurements, and the recurrence date was assigned to the day when PSA value >0.20 ng/mL was measured for the first time. BCRFS was calculated from date of RP to date of documented BCR or date of last follow-up for those patients who did not experience BCR. Other clinical and pathological data, such as age at RP, body mass index (BMI), pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI, were also collected for each patient.\nStatistical analysis Means, standard deviation, median, and interquartile ranges (IQR) were reported for continuous variables. Numbers and proportions were reported for categorical variables. The Mann-Whitney U test and Chi-square test were applied for between-group comparison. BCRFS was estimated using the Kaplan-Meier curves and log-rank test. MTD was categorized into ≤2.9 and >2.9 cm. The cutoff value of MTD that best discriminated low- and high-risk for BCR was estimated by maximally selected test with the “maxstat” package of R software.[17] PSA nadir was categorized into undetectable and detectable PSA. An undetectable PSA was defined as a PSA nadir <0.01 ng/mL. 
Univariable and multivariable Cox proportional hazards regression models were used to identify significant predictors of BCR. A nomogram predicting BCRFS at 3 and 5 years after RP was developed based on the multivariable model. For the validation of the nomogram, a bootstrap technique (1000 bootstrap resamples) was used for internal validation to assess the discrimination and calibration. The concordance index (c-index) was used to assess the discrimination. The calibration curve was made to assess the calibration which graphically revealed the relationship between predicted probability of BCR and actual observed events. Additionally, we compared our newly developed nomogram to the CAPRA-S score with one-shot non-parametric approach, and comparison of the two models was performed using the “compareC” package of R software.[18] Time-dependent ROC curves were illustrated using the “survivalROC” package.[19] Decision curve analyses at 3 and 5 years were performed to ascertain the clinical value of the new nomogram. Statistical analyses were performed with the R software (version 3.6.2, R Foundation for Statistical Computing, Vienna, Austria) and GraphPad Prism (version 7.00, GraphPad Software, San Diego, CA, USA). 
All statistical tests were two-sided, and P < 0.05 was considered statistically significant.", "This study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee with a waiver of informed consent and compliant with the principles in the Declaration of Helsinki (S2019326).", "Data of 542 patients who underwent laparoscopic RP for prostate cancer between January 2010 and March 2017 were retrospectively analyzed. The exclusion criteria were as follows: (1) patients with neoadjuvant therapy before surgery; (2) patients who had undergone transurethral resection of the prostate; (3) patients with unidentifiable lesions on MRI; (4) patients whose pathological results were not prostatic adenocarcinoma; and (5) incomplete follow-up data. Follow-ups were performed every 3 months for the first 2 years, semi-annually for the third and fourth year, and annually thereafter.\nThe suspicious tumor lesions were identified according to comprehensive understanding of T2-weighted images, diffusion weighted images, and apparent diffusion coefficient maps of MRI. MTD was defined as the largest tumor diameter of index lesion on axial T2-weighted images. For multifocal cases, only the largest tumor nodule was measured for analysis. PSA nadir was defined as the lowest level of serum PSA in the first two follow-ups after RP without adjuvant androgen deprivation therapy or radiotherapy. BCR was defined as post-operative PSA value >0.20 ng/mL in two consecutive measurements, and the recurrence date was assigned to the day when PSA value >0.20 ng/mL was measured for the first time. BCRFS was calculated from date of RP to date of documented BCR or date of last follow-up for those patients who did not experience BCR. Other clinical and pathological data, such as age at RP, body mass index (BMI), pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI, were also collected for each patient.", "Means, standard deviation, median, and interquartile ranges (IQR) were reported for continuous variables. Numbers and proportions were reported for categorical variables. The Mann-Whitney U test and Chi-square test were applied for between-group comparison. BCRFS was estimated using the Kaplan-Meier curves and log-rank test. MTD was categorized into ≤2.9 and >2.9 cm. The cutoff value of MTD that best discriminated low- and high-risk for BCR was estimated by maximally selected test with the “maxstat” package of R software.[17] PSA nadir was categorized into undetectable and detectable PSA. An undetectable PSA was defined as a PSA nadir <0.01 ng/mL. Univariable and multivariable Cox proportional hazards regression models were used to identify significant predictors of BCR. A nomogram predicting BCRFS at 3 and 5 years after RP was developed based on the multivariable model. For the validation of the nomogram, a bootstrap technique (1000 bootstrap resamples) was used for internal validation to assess the discrimination and calibration. The concordance index (c-index) was used to assess the discrimination. The calibration curve was made to assess the calibration which graphically revealed the relationship between predicted probability of BCR and actual observed events. 
Additionally, we compared our newly developed nomogram to the CAPRA-S score with one-shot non-parametric approach, and comparison of the two models was performed using the “compareC” package of R software.[18] Time-dependent ROC curves were illustrated using the “survivalROC” package.[19] Decision curve analyses at 3 and 5 years were performed to ascertain the clinical value of the new nomogram. Statistical analyses were performed with the R software (version 3.6.2, R Foundation for Statistical Computing, Vienna, Austria) and GraphPad Prism (version 7.00, GraphPad Software, San Diego, CA, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant.", "Patients’ characteristics Overall, 337 patients were included in this study and the demographic and clinical characteristics of these patients are shown in Table 1. The median follow-up time was 42 months (IQR, 19–64 months) and 100 (29.7%) patients developed BCR during follow-up. The median age of all patients was 71 years (IQR, 65–75 years) with median BMI of 24.6 kg/m2 (IQR, 22.8–26.6 kg/m2). The median value of pre-operative PSA was 10.8 ng/mL (IQR, 7.3–19.1 ng/mL) and was divided into three groups: <10 ng/mL group, 10 to 20 ng/mL group, and >20 ng/mL group. The majority of the patients had PSA nadir <0.01 ng/mL (n = 242, 71.8%), while 95 (28.2%) patients had PSA nadir ≥0.01 ng/mL. Median PSA nadir was 0 (IQR, 0–0.01) ng/mL. The median MTD was 3.09 cm (IQR, 2.24–3.91 cm) with 45.1% of MTD ≤2.9 cm and 54.9% of MTD >2.9 cm. 
Comparison of clinical parameters between patients who experienced BCR or not is shown in Table 2.\nCharacteristics of prostate cancer patients treated by RP (N = 337).\nBCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy; SD: Standard deviation; SM: surgical margin; SVI: Seminal vesicle invasion.\nComparison of clinical parameters between prostate cancer patients with or without BCR.\n∗Z values. †χ2 values. BCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SD: Standard deviation; SM: Surgical margin; SVI: Seminal vesicle invasion.\nDevelopment and evaluation of the novel nomogram To identify significant predictors of BCR, we evaluated age, BMI, pre-operative PSA, Gleason score, SM, ECE, SVI, PSA nadir, and MTD in a univariable Cox proportional hazards regression model and the results are shown in Table 3. Except for age and BMI, all predictors were statistically significantly associated with BCR after RP (P < 0.01).\nUnivariable and multivariable Cox regression analyses of BCRFS.\nBCRFS: BCR-free survival; BMI: Body mass index; CI: Confidence interval; ECE: Extracapsular extension; HR: Hazard ratio; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SM: Surgical margin; SVI: Seminal vesicle invasion.\nAs shown in Figure 1, Kaplan-Meier curves were stratified by PSA nadir (<0.01 vs. ≥0.01 ng/mL) [Figure 1B], MTD (≤2.9 vs. >2.9 cm) [Figure 1C], and the combination of PSA nadir and MTD (0 risk factor: PSA nadir <0.01 ng/mL and MTD ≤2.9 cm; one risk factor: PSA nadir <0.01 ng/mL and MTD >2.9 cm or PSA nadir ≥0.01 ng/mL and MTD ≤2.9 cm; two risk factors: PSA nadir ≥0.01 ng/mL and MTD >2.9 cm) [Figure 1D] and showed that the patients with detectable PSA or/and MTD >2.9 cm had significantly shorter BCRFS (log-rank P < 0.001).\n(A) Kaplan-Meier curves of BCRFS for the whole patient population, (B) patients grouped by PSA nadir (<0.01 vs. ≥0.01 ng/mL), (C) MTD (≤2.9 vs. >2.9 cm), and (D) a combination of PSA nadir and MTD. BCR: Biochemical recurrence; BCRFS: BCR-free survival; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy.\nThese significant predictors in univariable analyses were then assessed in a multivariable Cox regression model, and pre-operative PSA and ECE did not retain their significance and were excluded (P > 0.05) [Table 3]. Finally, PSA nadir and MTD, as well as Gleason score, SM, and SVI, were independent predictors of BCR in multivariable Cox regression analysis (P < 0.05). These variables were incorporated in a nomogram predicting BCRFS at 3 and 5 years after RP [Figure 2], which yielded a c-index of 0.76 (95% confidence interval [CI], 0.71–0.81). The calibration plots of the nomogram are shown in Figure 3 illustrating how the predicted probability of BCRFS compared with the actual outcomes.\nNomogram predicting BCRFS at 3 and 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; PSA: Prostate-specific antigen; RP: Radical prostatectomy.\nCalibration plot of the nomogram predicting BCRFS at (A) 3 years and (B) 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; RP: Radical prostatectomy.\nThe c-index of the CAPRA-S score was 0.70 (95% CI, 0.64–0.75) in our study cohort, which is slightly lower than that of our nomogram (P = 0.022). 
To further verify the prognostic power of the combination of PSA nadir and MTD, we developed a basic model including Gleason score, SM, and SVI. It yielded a c-index of 0.66 (95% CI, 0.60–0.71), which was significantly lower than the c-index of the new nomogram (P = 0.001). The time-dependent ROC curve and decision curve analyses compared the new nomogram, the CAPRA-S score, and the basic model [Figures 4 and 5]. Our new nomogram showed an advantage in identifying patients with BCRFS in both time-dependent ROC curve and decision curve analyses.\nTime-dependent ROC curves comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; ROC: Receiver operating characteristic; RP: Radical prostatectomy.\nDecision curve analyses comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; RP: Radical prostatectomy.", "Overall, 337 patients were included in this study and the demographic and clinical characteristics of these patients are shown in Table 1. The median follow-up time was 42 months (IQR, 19–64 months) and 100 (29.7%) patients developed BCR during follow-up. The median age of all patients was 71 years (IQR, 65–75 years) with median BMI of 24.6 kg/m2 (IQR, 22.8–26.6 kg/m2). The median value of pre-operative PSA was 10.8 ng/mL (IQR, 7.3–19.1 ng/mL) and was divided into three groups: <10 ng/mL group, 10 to 20 ng/mL group, and >20 ng/mL group. The majority of the patients had PSA nadir <0.01 ng/mL (n = 242, 71.8%), while 95 (28.2%) patients had PSA nadir ≥0.01 ng/mL. Median PSA nadir was 0 (IQR, 0–0.01) ng/mL. The median MTD was 3.09 cm (IQR, 2.24–3.91 cm) with 45.1% of MTD ≤2.9 cm and 54.9% of MTD >2.9 cm. Comparison of clinical parameters between patients who experienced BCR or not is shown in Table 2.\nCharacteristics of prostate cancer patients treated by RP (N = 337).\nBCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy; SD: Standard deviation; SM: surgical margin; SVI: Seminal vesicle invasion.\nComparison of clinical parameters between prostate cancer patients with or without BCR.\n∗Z values. †χ2 values. BCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SD: Standard deviation; SM: Surgical margin; SVI: Seminal vesicle invasion.", "To identify significant predictors of BCR, we evaluated age, BMI, pre-operative PSA, Gleason score, SM, ECE, SVI, PSA nadir, and MTD in a univariable Cox proportional hazards regression model and the results are shown in Table 3. 
Except for age and BMI, all predictors were statistically significantly associated with BCR after RP (P < 0.01).\nUnivariable and multivariable Cox regression analyses of BCRFS.\nBCRFS: BCR-free survival; BMI: Body mass index; CI: Confidence interval; ECE: Extracapsular extension; HR: Hazard ratio; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SM: Surgical margin; SVI: Seminal vesicle invasion.\nAs shown in Figure 1, Kaplan-Meier curves were stratified by PSA nadir (<0.01 vs. ≥0.01 ng/mL) [Figure 1B], MTD (≤2.9 vs. >2.9 cm) [Figure 1C], and the combination of PSA nadir and MTD (0 risk factor: PSA nadir <0.01 ng/mL and MTD ≤2.9 cm; one risk factor: PSA nadir <0.01 ng/mL and MTD >2.9 cm or PSA nadir ≥0.01 ng/mL and MTD ≤2.9 cm; two risk factors: PSA nadir ≥0.01 ng/mL and MTD >2.9 cm) [Figure 1D] and showed that the patients with detectable PSA or/and MTD >2.9 cm had significantly shorter BCRFS (log-rank P < 0.001).\n(A) Kaplan-Meier curves of BCRFS for the whole patient population, (B) patients grouped by PSA nadir (<0.01 vs. ≥0.01 ng/mL), (C) MTD (≤2.9 vs. >2.9 cm), and (D) a combination of PSA nadir and MTD. BCR: Biochemical recurrence; BCRFS: BCR-free survival; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy.\nThese significant predictors in univariable analyses were then assessed in a multivariable Cox regression model, and pre-operative PSA and ECE did not retain their significance and were excluded (P > 0.05) [Table 3]. Finally, PSA nadir and MTD, as well as Gleason score, SM, and SVI, were independent predictors of BCR in multivariable Cox regression analysis (P < 0.05). These variables were incorporated in a nomogram predicting BCRFS at 3 and 5 years after RP [Figure 2], which yielded a c-index of 0.76 (95% confidence interval [CI], 0.71–0.81). The calibration plots of the nomogram are shown in Figure 3 illustrating how the predicted probability of BCRFS compared with the actual outcomes.\nNomogram predicting BCRFS at 3 and 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; PSA: Prostate-specific antigen; RP: Radical prostatectomy.\nCalibration plot of the nomogram predicting BCRFS at (A) 3 years and (B) 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; RP: Radical prostatectomy.\nThe c-index of the CAPRA-S score was 0.70 (95% CI, 0.64–0.75) in our study cohort, which is slightly lower than that of our nomogram (P = 0.022). To further verify the prognostic power of the combination of PSA nadir and MTD, we developed a basic model including Gleason score, SM, and SVI. It yielded a c-index of 0.66 (95% CI, 0.60–0.71), which was significantly lower than the c-index of the new nomogram (P = 0.001). The time-dependent ROC curve and decision curve analyses compared the new nomogram, the CAPRA-S score, and the basic model [Figures 4 and 5]. Our new nomogram showed an advantage in identifying patients with BCRFS in both time-dependent ROC curve and decision curve analyses.\nTime-dependent ROC curves comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; ROC: Receiver operating characteristic; RP: Radical prostatectomy.\nDecision curve analyses comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. 
BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; RP: Radical prostatectomy.", "In the present study, compared with conventional predictive models, we proposed a new nomogram incorporating MTD and PSA nadir, which showed improved accuracy of BCR prediction for patients after RP.\nAfter RP, PSA is expected to be undetectable within 6 weeks, and it is the most important parameter to be monitored post-operatively.[11] An elevated PSA level after RP indicates a high risk of local recurrence or metastasis.[12] If the post-operative PSA reaches 0.2 ng/mL, the patient is assigned the status of BCR,[20] a signal of cancer activity below the level of visible detection. The relationship between PSA nadir and BCR after RP has been extensively studied. A retrospective study reported that, compared with men with PSA <0.01 ng/mL after RP, the probability of BCRFS at 5 years dropped from 92.4% to 56.8% in patients with PSA ≥0.01 ng/mL.[21] In a study of 582 patients carried out by Matsumoto et al,[22] PSA persistence after RP was associated with increased BCR and overall mortality. These results are in line with the observations in the present study, in which 71.8% of patients had an undetectable PSA nadir and 28.2% had a detectable nadir during follow-up. PSA nadir after RP was an independent prognostic factor (P < 0.001) for BCR in univariable and multivariable analyses. Patients with PSA nadir <0.01 ng/mL had significantly longer BCRFS in our study cohort (log-rank P < 0.0001) [Figure 1B].\nIn our clinical experience, tumor burden should be associated with oncological outcomes. Tumor volume and MTD, as common indicators of tumor burden, have been studied and have proved to be independent prognostic factors of BCR.[15,23] However, prostate cancer is recognized as a multifocal disease, and the calculation of tumor volume and MTD is complicated.[24] In 2013, Billis et al[25] found that tumor extent in a surgical specimen should be estimated from the dominant tumor rather than the total tumor extent. They also reported the association of the dominant tumor with BCR prediction. Nonetheless, the calculation of tumor volume is time consuming and difficult. For these reasons, we chose MTD as the research target, defined as the maximum diameter of the dominant tumor. Unlike previous studies, we measured MTD on MRI instead of on pathological specimens. MRI has better repeatability and less deformation, whereas on pathological specimens deformation can vary greatly because tissues shrink after formalin fixation. Lee et al[16] measured the diameter of the suspicious tumor lesion on diffusion weighted images of MRI and demonstrated that tumor diameter could improve the prediction of insignificant prostate cancer in candidates for active surveillance. In the studies of Kozal et al[26] and Müller et al,[27] MTD was an independent prognostic factor for BCR, even though they measured MTD on pathological specimens. Based on their findings, we hypothesized that MTD on MRI could be an independent prognostic factor for prostate cancer; however, the relationship between MTD measured on MRI and BCR after RP has rarely been explored, either in these studies or in other previous work. 
As expected, the results of the present study showed that MTD on MRI was an independent predictor of BCR (P = 0.0340), and the Kaplan-Meier curve showed that men with MTD >2.9 cm had shorter BCRFS (log-rank P = 0.0003) [Figure 1C]. Interestingly, the median MTD in the present study was larger than that in previous studies.[28] We attribute this difference to tissue shrinkage after formalin fixation, which might decrease the MTD measured on specimens.[29] Additionally, in the present study, pathological tumor stage ≥T2c was reported in the majority of patients (n = 290, 86%) [Table 1], which might be another reason for the larger MTD; in the study of Eichelberger et al,[30] MTD was found to be associated with pathological stage. With the rapid development of radiographic technologies and artificial intelligence, the identification and measurement of prostate cancer on MRI are becoming more accurate and repeatable for prognostic evaluation.\nThe CAPRA-S score is a post-operative score created by Cooperberg et al,[7] based on pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI. The prognostic value of these variables was verified in our study cohort as well. All of them were significantly associated with BCR in univariable analysis, and Gleason score, SM, and SVI were independent predictors of BCR in multivariable analysis. The c-index of our newly developed nomogram was slightly higher than that of the CAPRA-S score in our study cohort. Moreover, our nomogram predictions closely approached the actual outcomes at both 3 and 5 years after RP, demonstrating good calibration, as depicted in the calibration plot. Comparing these two models, we found that our new nomogram consisted of two parts: one composed of the commonly used variables, namely Gleason score, SM, and SVI; the other composed of PSA nadir and MTD measured on MRI. In the present study, we observed that both PSA nadir and MTD were significantly associated with BCR in univariable analysis, and they remained independent prognostic factors after adjusting for pre-operative PSA, Gleason score, SM, ECE, and SVI. Kaplan-Meier curves showed that patients with both risk factors had the shortest BCRFS and patients with neither risk factor had the longest BCRFS (log-rank P < 0.0001) [Figure 1D]. However, few previous prediction tools have used MTD and PSA nadir at the same time. To verify the incremental predictive power of the combination of PSA nadir and MTD, we developed a basic model including Gleason score, SM, and SVI for comparison. The c-index decreased from 0.76 to 0.66 (P = 0.001) when PSA nadir and MTD were removed from our new nomogram. The time-dependent ROC curves illustrated the advantage of our new nomogram at both 3 and 5 years after RP. The decision curve analyses also showed the advantage of our new nomogram across the various threshold probabilities, and the new nomogram had greater net benefit than both the basic model and the CAPRA-S score in our study cohort. Our new nomogram is a promising tool to predict BCRFS and to guide follow-up and decision-making for adjuvant treatment. In addition, PSA nadir and MTD improved the accuracy of our new nomogram in predicting BCR.\nThis study has several limitations. First, it was a retrospective study, and the study population was relatively small compared with previous studies. 
Second, the present study has not yet been validated externally, and an analysis of overall survival was not possible because of the short follow-up duration.\nThe newly developed nomogram, which included PSA nadir, MTD measured on MRI, and several commonly used variables, shows excellent accuracy in predicting BCRFS after RP. This nomogram is a useful tool for risk stratification and follow-up planning. The combination of PSA nadir and MTD can improve the accuracy of BCR prediction.", "The authors thank Mr. Li-Yuan Tao from the Research Center of Clinical Epidemiology of Peking University Third Hospital for the help in statistical analyses.", "This work was supported by grants from the Beijing Natural Science Foundation (No. Z200027), National Natural Science Foundation of China (No. 61871004), National Key R&D Program of China (No. 2018YFC0115900), Innovation & Transfer Fund of Peking University Third Hospital (No. BYSYZHKC2020111), and Peking University Medicine Fund of Fostering Young Scholars’ Scientific & Technological Innovation (No. BMU2020PYB002). Funds were used for the collection and analysis of data.", "None." ]
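The multivariable Cox model, nomogram, and 1000-resample bootstrap validation described in the Methods and Results above can be reproduced along the following lines. The paper names only the “maxstat”, “compareC”, and “survivalROC” packages, so the use of rms here is an assumption, and the simulated data frame and variable names are illustrative stand-ins for the study data.

```r
# Hedged sketch, assuming the rms package: multivariable Cox model ->
# nomogram for 3- and 5-year BCRFS -> bootstrap validation (c-index)
# and calibration. Data and variable names are simulated assumptions.
library(rms)
library(survival)

set.seed(1)
n <- 337
d <- data.frame(
  months    = rexp(n, 0.02),                       # follow-up, months
  bcr       = rbinom(n, 1, 0.3),                   # 1 = BCR observed
  gleason   = factor(sample(c("<=6", "7", ">=8"), n, replace = TRUE)),
  sm        = rbinom(n, 1, 0.3),                   # positive surgical margin
  svi       = rbinom(n, 1, 0.2),                   # seminal vesicle invasion
  psa_nadir = rbinom(n, 1, 0.28),                  # 1 = detectable nadir (>=0.01 ng/mL)
  mtd_gt29  = rbinom(n, 1, 0.55)                   # 1 = MTD >2.9 cm on MRI
)
dd <- datadist(d); options(datadist = "dd")

fit <- cph(Surv(months, bcr) ~ gleason + sm + svi + psa_nadir + mtd_gt29,
           data = d, x = TRUE, y = TRUE, surv = TRUE, time.inc = 36)

# Nomogram mapping the predictors to 3- and 5-year BCR-free survival
surv <- Survival(fit)
nom  <- nomogram(fit,
                 fun = list(function(lp) surv(36, lp),
                            function(lp) surv(60, lp)),
                 funlabel = c("3-year BCRFS", "5-year BCRFS"))
plot(nom)

# Internal validation with 1000 bootstrap resamples; Dxy -> c-index
set.seed(2)
val <- validate(fit, method = "boot", B = 1000)
c_index <- 0.5 + abs(val["Dxy", "index.corrected"]) / 2

# Calibration curve at 36 months (u must equal time.inc set in cph)
cal <- calibrate(fit, method = "boot", B = 1000, u = 36)
plot(cal)
```

Within this pipeline, the linear predictor from such a fit is also the natural risk score to feed into the model-comparison steps reported above (c-index contrast against CAPRA-S, time-dependent ROC, decision curves).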
[ "intro", "methods", null, "subjects", null, "results", "subjects", null, "discussion", null, null, "COI-statement" ]
[ "Nomogram", "PSA nadir", "Tumor diameter", "Magnetic resonance imaging", "Biochemical recurrence", "Radical prostatectomy" ]
Introduction: Prostate cancer is among the most frequent cancers and the second leading cause of cancer-related mortality in men. An estimated 191,930 new cases of prostate cancer and 33,330 deaths were expected in the United States in 2020.[1] Approximately 20% to 30% of patients experience biochemical recurrence (BCR) after radical prostatectomy (RP) during follow-up.[2–4] Various prediction tools for BCR have been developed to guide clinical decision-making for subsequent treatment. Most of these tools are based on clinical and pathological parameters such as pre-operative serum prostate-specific antigen (PSA), Gleason score, tumor stage, surgical margin (SM), extracapsular extension (ECE), seminal vesicle invasion (SVI), and lymph node invasion (LNI).[5–8] The cancer of the prostate risk assessment post-surgical (CAPRA-S) score is one of the most commonly used tools, with good discriminative accuracy and calibration.[7] However, only a few of these tools include tumor diameter and post-operative PSA nadir simultaneously, although the prognostic value of these two characteristics in predicting BCR has been verified.[9,10] Measurement of PSA is the cornerstone of post-operative follow-up. Serum PSA is expected to be undetectable within 6 weeks after RP, and a detectable PSA after RP is thought to be associated with residual cancer.[11] A persistent (detectable) PSA after RP has proved to be a poor prognostic indicator of oncologic outcomes.[12] Magnetic resonance imaging (MRI) is widely used for prostate cancer diagnosis, and the prognostic potential of MRI is constantly being explored as radiographic technologies advance.[13,14] Maximum tumor diameter (MTD) has been demonstrated to be an independent predictor of BCR in patients after RP.[15] However, in most studies, MTD was measured on pathological specimens and only a few measured MTD on MRI,[16] although the latter is considered more accurate and comparable. To our knowledge, no study addressing the relationship between MTD measured on MRI and BCR has been conducted. In this study, we aimed to assess the prognostic power of MTD measured on MRI in predicting BCR-free survival (BCRFS) after RP and to develop a new nomogram incorporating MTD, PSA nadir, and other common perioperative variables. Methods: Ethics approval This study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee with a waiver of informed consent and compliant with the principles in the Declaration of Helsinki (S2019326). Patients Data of 542 patients who underwent laparoscopic RP for prostate cancer between January 2010 and March 2017 were retrospectively analyzed. The exclusion criteria were as follows: (1) patients with neoadjuvant therapy before surgery; (2) patients who had undergone transurethral resection of the prostate; (3) patients with unidentifiable lesions on MRI; (4) patients whose pathological results were not prostatic adenocarcinoma; and (5) incomplete follow-up data. Follow-ups were performed every 3 months for the first 2 years, semi-annually for the third and fourth year, and annually thereafter. 
The suspicious tumor lesions were identified according to comprehensive understanding of T2-weighted images, diffusion weighted images, and apparent diffusion coefficient maps of MRI. MTD was defined as the largest tumor diameter of index lesion on axial T2-weighted images. For multifocal cases, only the largest tumor nodule was measured for analysis. PSA nadir was defined as the lowest level of serum PSA in the first two follow-ups after RP without adjuvant androgen deprivation therapy or radiotherapy. BCR was defined as post-operative PSA value >0.20 ng/mL in two consecutive measurements, and the recurrence date was assigned to the day when PSA value >0.20 ng/mL was measured for the first time. BCRFS was calculated from date of RP to date of documented BCR or date of last follow-up for those patients who did not experience BCR. Other clinical and pathological data, such as age at RP, body mass index (BMI), pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI, were also collected for each patient. Statistical analysis Means, standard deviation, median, and interquartile ranges (IQR) were reported for continuous variables. Numbers and proportions were reported for categorical variables. The Mann-Whitney U test and Chi-square test were applied for between-group comparison. BCRFS was estimated using the Kaplan-Meier curves and log-rank test. MTD was categorized into ≤2.9 and >2.9 cm. The cutoff value of MTD that best discriminated low- and high-risk for BCR was estimated by maximally selected test with the “maxstat” package of R software.[17] PSA nadir was categorized into undetectable and detectable PSA. An undetectable PSA was defined as a PSA nadir <0.01 ng/mL. Univariable and multivariable Cox proportional hazards regression models were used to identify significant predictors of BCR. A nomogram predicting BCRFS at 3 and 5 years after RP was developed based on the multivariable model. For the validation of the nomogram, a bootstrap technique (1000 bootstrap resamples) was used for internal validation to assess the discrimination and calibration. The concordance index (c-index) was used to assess the discrimination. The calibration curve was made to assess the calibration which graphically revealed the relationship between predicted probability of BCR and actual observed events. Additionally, we compared our newly developed nomogram to the CAPRA-S score with one-shot non-parametric approach, and comparison of the two models was performed using the “compareC” package of R software.[18] Time-dependent ROC curves were illustrated using the “survivalROC” package.[19] Decision curve analyses at 3 and 5 years were performed to ascertain the clinical value of the new nomogram. Statistical analyses were performed with the R software (version 3.6.2, R Foundation for Statistical Computing, Vienna, Austria) and GraphPad Prism (version 7.00, GraphPad Software, San Diego, CA, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant. Ethics approval: This study was approved by the Peking University Third Hospital Medical Science Research Ethics Committee with a waiver of informed consent and compliant with the principles in the Declaration of Helsinki (S2019326). Patients: Data of 542 patients who underwent laparoscopic RP for prostate cancer between January 2010 and March 2017 were retrospectively analyzed. 
The exclusion criteria were as follows: (1) patients with neoadjuvant therapy before surgery; (2) patients who had undergone transurethral resection of the prostate; (3) patients with unidentifiable lesions on MRI; (4) patients whose pathological results were not prostatic adenocarcinoma; and (5) incomplete follow-up data. Follow-ups were performed every 3 months for the first 2 years, semi-annually for the third and fourth year, and annually thereafter. The suspicious tumor lesions were identified according to comprehensive understanding of T2-weighted images, diffusion weighted images, and apparent diffusion coefficient maps of MRI. MTD was defined as the largest tumor diameter of index lesion on axial T2-weighted images. For multifocal cases, only the largest tumor nodule was measured for analysis. PSA nadir was defined as the lowest level of serum PSA in the first two follow-ups after RP without adjuvant androgen deprivation therapy or radiotherapy. BCR was defined as post-operative PSA value >0.20 ng/mL in two consecutive measurements, and the recurrence date was assigned to the day when PSA value >0.20 ng/mL was measured for the first time. BCRFS was calculated from date of RP to date of documented BCR or date of last follow-up for those patients who did not experience BCR. Other clinical and pathological data, such as age at RP, body mass index (BMI), pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI, were also collected for each patient. Statistical analysis: Means, standard deviation, median, and interquartile ranges (IQR) were reported for continuous variables. Numbers and proportions were reported for categorical variables. The Mann-Whitney U test and Chi-square test were applied for between-group comparison. BCRFS was estimated using the Kaplan-Meier curves and log-rank test. MTD was categorized into ≤2.9 and >2.9 cm. The cutoff value of MTD that best discriminated low- and high-risk for BCR was estimated by maximally selected test with the “maxstat” package of R software.[17] PSA nadir was categorized into undetectable and detectable PSA. An undetectable PSA was defined as a PSA nadir <0.01 ng/mL. Univariable and multivariable Cox proportional hazards regression models were used to identify significant predictors of BCR. A nomogram predicting BCRFS at 3 and 5 years after RP was developed based on the multivariable model. For the validation of the nomogram, a bootstrap technique (1000 bootstrap resamples) was used for internal validation to assess the discrimination and calibration. The concordance index (c-index) was used to assess the discrimination. The calibration curve was made to assess the calibration which graphically revealed the relationship between predicted probability of BCR and actual observed events. Additionally, we compared our newly developed nomogram to the CAPRA-S score with one-shot non-parametric approach, and comparison of the two models was performed using the “compareC” package of R software.[18] Time-dependent ROC curves were illustrated using the “survivalROC” package.[19] Decision curve analyses at 3 and 5 years were performed to ascertain the clinical value of the new nomogram. Statistical analyses were performed with the R software (version 3.6.2, R Foundation for Statistical Computing, Vienna, Austria) and GraphPad Prism (version 7.00, GraphPad Software, San Diego, CA, USA). All statistical tests were two-sided, and P < 0.05 was considered statistically significant. 
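Before the Results that follow, it may help to see how the two comparison tools named just above are usually invoked: compareC for the one-shot non-parametric contrast of two correlated c-indices, and survivalROC for a time-dependent ROC curve at a fixed horizon. This is a sketch on simulated data; the score variables are hypothetical stand-ins for the nomogram and CAPRA-S risk scores.

```r
# Hedged sketch of the comparison calls named in the Methods: compareC
# for two correlated c-indices and survivalROC for a time-dependent ROC
# at 36 months. All data and score names are simulated assumptions.
library(compareC)
library(survivalROC)

set.seed(3)
n           <- 337
months      <- rexp(n, 0.02)
bcr         <- rbinom(n, 1, 0.3)
score_nomo  <- rnorm(n) + 1.5 * bcr   # stand-in risk score, new nomogram
score_capra <- rnorm(n) + 1.0 * bcr   # stand-in CAPRA-S score

# One-shot non-parametric comparison of the two c-indices
cmp <- compareC(timeX = months, statusX = bcr,
                scoreY = score_nomo, scoreZ = score_capra)
cmp$est.c  # estimated concordance of each score (see ?compareC for orientation)
cmp$pval   # p-value for the difference between the two c-indices

# Time-dependent ROC curve at 36 months, Kaplan-Meier estimator
roc36 <- survivalROC(Stime = months, status = bcr, marker = score_nomo,
                     predict.time = 36, method = "KM")
plot(roc36$FP, roc36$TP, type = "l",
     xlab = "1 - Specificity", ylab = "Sensitivity")
roc36$AUC  # area under the time-dependent ROC curve
```

The text does not name a package for the decision curve analyses, so that step is omitted from the sketch.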
Results: Patients’ characteristics Overall, 337 patients were included in this study and the demographic and clinical characteristics of these patients are shown in Table 1. The median follow-up time was 42 months (IQR, 19–64 months) and 100 (29.7%) patients developed BCR during follow-up. The median age of all patients was 71 years (IQR, 65–75 years) with median BMI of 24.6 kg/m2 (IQR, 22.8–26.6 kg/m2). The median value of pre-operative PSA was 10.8 ng/mL (IQR, 7.3–19.1 ng/mL) and was divided into three groups: <10 ng/mL group, 10 to 20 ng/mL group, and >20 ng/mL group. The majority of the patients had PSA nadir <0.01 ng/mL (n = 242, 71.8%), while 95 (28.2%) patients had PSA nadir ≥0.01 ng/mL. Median PSA nadir was 0 (IQR, 0–0.01) ng/mL. The median MTD was 3.09 cm (IQR, 2.24–3.91 cm) with 45.1% of MTD ≤2.9 cm and 54.9% of MTD >2.9 cm. Comparison of clinical parameters between patients who experienced BCR or not is shown in Table 2. Characteristics of prostate cancer patients treated by RP (N = 337). BCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy; SD: Standard deviation; SM: surgical margin; SVI: Seminal vesicle invasion. Comparison of clinical parameters between prostate cancer patients with or without BCR. ∗Z values. †χ2 values. BCR: Biochemical recurrence; BMI: Body mass index; ECE: Extracapsular extension; IQR: Interquartile range; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SD: Standard deviation; SM: Surgical margin; SVI: Seminal vesicle invasion. Development and evaluation of the novel nomogram To identify significant predictors of BCR, we evaluated age, BMI, pre-operative PSA, Gleason score, SM, ECE, SVI, PSA nadir, and MTD in a univariable Cox proportional hazards regression model and the results are shown in Table 3. 
Except for age and BMI, all predictors were significantly associated with BCR after RP (P < 0.01). Univariable and multivariable Cox regression analyses of BCRFS. BCRFS: BCR-free survival; BMI: Body mass index; CI: Confidence interval; ECE: Extracapsular extension; HR: Hazard ratio; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; SM: Surgical margin; SVI: Seminal vesicle invasion. As shown in Figure 1, Kaplan-Meier curves were stratified by PSA nadir (<0.01 vs. ≥0.01 ng/mL) [Figure 1B], MTD (≤2.9 vs. >2.9 cm) [Figure 1C], and the combination of PSA nadir and MTD (0 risk factors: PSA nadir <0.01 ng/mL and MTD ≤2.9 cm; one risk factor: PSA nadir <0.01 ng/mL and MTD >2.9 cm, or PSA nadir ≥0.01 ng/mL and MTD ≤2.9 cm; two risk factors: PSA nadir ≥0.01 ng/mL and MTD >2.9 cm) [Figure 1D], and showed that patients with detectable PSA and/or MTD >2.9 cm had significantly shorter BCRFS (log-rank P < 0.001). (A) Kaplan-Meier curves of BCRFS for the whole patient population, (B) patients grouped by PSA nadir (<0.01 vs. ≥0.01 ng/mL), (C) MTD (≤2.9 vs. >2.9 cm), and (D) a combination of PSA nadir and MTD. BCR: Biochemical recurrence; BCRFS: BCR-free survival; MTD: Maximum tumor diameter; PSA: Prostate-specific antigen; RP: Radical prostatectomy. The significant predictors from the univariable analyses were then assessed in a multivariable Cox regression model; pre-operative PSA and ECE did not retain their significance and were excluded (P > 0.05) [Table 3]. Finally, PSA nadir and MTD, together with Gleason score, SM, and SVI, were independent predictors of BCR in the multivariable Cox regression analysis (P < 0.05). These variables were incorporated into a nomogram predicting BCRFS at 3 and 5 years after RP [Figure 2], which yielded a c-index of 0.76 (95% confidence interval [CI], 0.71–0.81). The calibration plots of the nomogram, shown in Figure 3, illustrate how the predicted probability of BCRFS compared with the actual outcomes. Nomogram predicting BCRFS at 3 and 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; PSA: Prostate-specific antigen; RP: Radical prostatectomy. Calibration plot of the nomogram predicting BCRFS at (A) 3 years and (B) 5 years after RP. BCRFS: BCR-free survival; BCR: Biochemical recurrence; RP: Radical prostatectomy. The c-index of the CAPRA-S score was 0.70 (95% CI, 0.64–0.75) in our study cohort, slightly lower than that of our nomogram (P = 0.022). To further verify the prognostic power of the combination of PSA nadir and MTD, we developed a basic model including Gleason score, SM, and SVI. It yielded a c-index of 0.66 (95% CI, 0.60–0.71), significantly lower than that of the new nomogram (P = 0.001). Time-dependent ROC curve and decision curve analyses compared the new nomogram, the CAPRA-S score, and the basic model [Figures 4 and 5]. Our new nomogram showed an advantage in predicting BCRFS in both analyses. Time-dependent ROC curves comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP. BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; ROC: Receiver operating characteristic; RP: Radical prostatectomy. Decision curve analyses comparing the base model, the new nomogram, and the CAPRA-S score in predicting BCR at (A) 3 years and (B) 5 years after RP.
BCR: Biochemical recurrence; CAPRA-S: Cancer of the prostate risk assessment post-surgical; RP: Radical prostatectomy.
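As an illustration of the modeling pipeline reported above, the following is a minimal R sketch using the rms, compareC, and survivalROC packages, consistent with the methods; the data frame df, all column names, and the capra_s score column are hypothetical placeholders, not the authors' code.

## Sketch of nomogram construction, bootstrap validation, calibration,
## and c-index comparison, assuming the hypothetical data frame df.
library(rms)
library(compareC)
library(survivalROC)

dd <- datadist(df); options(datadist = "dd")

## Cox model on the five retained predictors (time.inc = 36 so that the
## 3-year calibration below is permitted by calibrate()).
fit <- cph(Surv(time, bcr) ~ psa_nadir + mtd_grp + gleason + sm + svi,
           data = df, x = TRUE, y = TRUE, surv = TRUE, time.inc = 36)

## Nomogram for 3- and 5-year BCRFS (36 and 60 months).
sv  <- Survival(fit)
nom <- nomogram(fit,
                fun = list(function(x) sv(36, x),
                           function(x) sv(60, x)),
                funlabel = c("3-year BCRFS", "5-year BCRFS"))
plot(nom)

## Internal validation with 1000 bootstrap resamples, plus calibration.
validate(fit, method = "boot", B = 1000)          # optimism-corrected Dxy
plot(calibrate(fit, u = 36, m = 50, B = 1000))    # 3-year calibration

## One-shot non-parametric comparison of c-indices between the nomogram's
## linear predictor and a CAPRA-S-style score stored in df$capra_s.
compareC(df$time, df$bcr, predict(fit), df$capra_s)

## Time-dependent ROC at 3 years for the nomogram's linear predictor.
roc36 <- survivalROC(Stime = df$time, status = df$bcr,
                     marker = predict(fit), predict.time = 36, method = "KM")
roc36$AUC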
Discussion: In the present study, compared with conventional predictive models, we propose a new nomogram incorporating MTD and PSA nadir, which showed improved accuracy in predicting BCR for patients after RP. After RP, PSA is expected to become undetectable within 6 weeks, and it is the most important parameter to monitor post-operatively.[11] An elevated PSA level after RP indicates a high risk of local recurrence or metastasis.[12] If the post-operative PSA reaches 0.2 ng/mL, the patient is assigned BCR status,[20] a signal of cancer activity below the level of visual detection. The relationship between PSA nadir and BCR after RP has been extensively studied. A retrospective study reported that, compared with men with PSA <0.01 ng/mL after RP, the probability of BCRFS at 5 years dropped from 92.4% to 56.8% in patients with PSA ≥0.01 ng/mL.[21] In a study of 582 patients by Matsumoto et al,[22] PSA persistence after RP was associated with increased BCR and overall mortality. These results are in line with the observations in the present study.
In the present study, 71.8% of patients had an undetectable PSA nadir and 28.2% had a detectable nadir during follow-up. PSA nadir after RP was an independent prognostic factor for BCR (P < 0.001) in both univariable and multivariable analyses. Patients with a PSA nadir <0.01 ng/mL had significantly longer BCRFS in our cohort (log-rank P < 0.0001) [Figure 1B]. In our clinical experience, tumor burden should be associated with oncological outcomes. Tumor volume and MTD, as common indicators of tumor burden, have been studied and shown to be independent prognostic factors of BCR.[15,23] However, prostate cancer is recognized as a multifocal disease, and the calculation of tumor volume and MTD is complicated.[24] In 2013, Billis et al[25] found that tumor extent in a surgical specimen should be estimated from the dominant tumor rather than the total tumor extent, and they reported an association between the dominant tumor and BCR prediction. Nonetheless, the calculation of tumor volume is time-consuming and difficult. For these reasons, we chose MTD as the research target, defined as the maximum diameter of the dominant tumor. Unlike previous studies, we measured MTD on MRI instead of on pathological specimens. MRI has better repeatability and less deformation, whereas on pathological specimens deformation can vary greatly because tissues shrink after formalin fixation. Lee et al[16] measured the diameter of the suspicious tumor lesion on diffusion-weighted MRI and demonstrated that tumor diameter could improve the prediction of insignificant prostate cancer in candidates for active surveillance. In the studies of Kozal et al[26] and Müller et al,[27] MTD was an independent prognostic factor for BCR, even though they measured MTD on pathological specimens. Based on their findings, we hypothesized that MTD on MRI could be an independent prognostic factor for prostate cancer; however, the relationship between MTD measured on MRI and BCR after RP has rarely been explored in previous studies. As expected, the results of the present study showed that MTD on MRI was an independent significant predictor of BCR (P = 0.0340), and the Kaplan-Meier curve showed that men with MTD >2.9 cm had shorter BCRFS (log-rank P = 0.0003) [Figure 1C]. Interestingly, the median MTD in the present study was larger than in previous studies.[28] We attribute this to tissue shrinkage after formalin fixation, which might decrease pathologically measured MTD.[29] Additionally, pathological tumor stage ≥T2c was reported in the majority of our patients (n = 290, 86%) [Table 1], which might be another reason for the larger MTD; in the study of Eichelberger et al,[30] MTD was associated with pathological stage. With the rapid development of radiographic technologies and artificial intelligence, the identification and measurement of prostate cancer on MRI are becoming more accurate and repeatable for prognostic evaluation. The CAPRA-S score is a post-operative score created by Cooperberg et al,[7] based on pre-operative PSA, Gleason score, SM, ECE, SVI, and LNI. The prognostic value of these variables was verified in our study cohort as well: all were significantly associated with BCR in univariable analysis, and Gleason score, SM, and SVI were independent predictors of BCR in multivariable analysis.
The c-index of our newly developed nomogram was slightly higher than that of the CAPRA-S score in our study cohort. Moreover, the nomogram's predictions closely approached the actual outcomes at both 3 and 5 years after RP, demonstrating good calibration, as depicted in the calibration plots. Comparing the two models, our new nomogram consists of two parts: one composed of the commonly used variables, namely Gleason score, SM, and SVI, and the other composed of PSA nadir and MTD measured on MRI. In the present study, both PSA nadir and MTD were significantly associated with BCR in univariable analysis, and they remained independent prognostic factors after adjusting for pre-operative PSA, Gleason score, SM, ECE, and SVI. The Kaplan-Meier curve showed that patients with both risk factors had the shortest BCRFS and patients with neither risk factor had the longest BCRFS (log-rank P < 0.0001) [Figure 1D]. However, only a few previous prediction tools used MTD and PSA nadir at the same time. To verify the incremental predictive power of the combination of PSA nadir and MTD, we developed a basic model including Gleason score, SM, and SVI for comparison. The c-index decreased from 0.76 to 0.66 (P = 0.001) when PSA nadir and MTD were removed from our new nomogram. The time-dependent ROC curves illustrated the advantage of our new nomogram at both 3 and 5 years after RP. The decision curve analyses also showed the advantage of our new nomogram: across the various threshold probabilities, the new nomogram had a greater net benefit than both the basic model and the CAPRA-S score in our study cohort. Our new nomogram is thus a promising tool to predict BCRFS and to guide follow-up and decision-making for adjuvant treatment, and PSA nadir and MTD improved its accuracy in predicting BCR. This study has several limitations. First, it was a retrospective study, and the population was relatively small compared with previous studies. Second, the study has not yet been validated externally, and an analysis of overall survival was lacking because of the short follow-up duration. The newly developed nomogram, which includes PSA nadir, MTD measured on MRI, and several commonly used variables, shows excellent accuracy in predicting BCRFS after RP. This nomogram is a useful tool for risk stratification and follow-up planning, and the combination of PSA nadir and MTD can improve the accuracy of BCR prediction. Acknowledgements: The authors thank Mr. Li-Yuan Tao from the Research Center of Clinical Epidemiology of Peking University Third Hospital for his help with the statistical analyses. Funding: This work was supported by grants from the Beijing Natural Science Foundation (No. Z200027), National Natural Science Foundation of China (No. 61871004), National Key R&D Program of China (No. 2018YFC0115900), Innovation & Transfer Fund of Peking University Third Hospital (No. BYSYZHKC2020111), and Peking University Medicine Fund of Fostering Young Scholars’ Scientific & Technological Innovation (No. BMU2020PYB002). Funds were used for the collection and analysis of data. Conflicts of interest: None.
Background: Various prediction tools have been developed to predict biochemical recurrence (BCR) after radical prostatectomy (RP); however, few of the previous tools used the serum prostate-specific antigen (PSA) nadir after RP and the maximum tumor diameter (MTD) at the same time. In this study, a nomogram incorporating MTD and PSA nadir was developed to predict BCR-free survival (BCRFS). Methods: A total of 337 patients who underwent RP between January 2010 and March 2017 were retrospectively enrolled in this study. The maximum diameter of the index lesion was measured on magnetic resonance imaging (MRI). Cox regression analysis was performed to evaluate independent predictors of BCR. A nomogram was subsequently developed for the prediction of BCRFS at 3 and 5 years after RP. Time-dependent receiver operating characteristic (ROC) curve and decision curve analyses were performed to identify the advantage of the new nomogram in comparison with the cancer of the prostate risk assessment post-surgical (CAPRA-S) score. Results: A novel nomogram was developed to predict BCR by including PSA nadir, MTD, Gleason score, surgical margin (SM), and seminal vesicle invasion (SVI), as these variables were significantly associated with BCR in both univariable and multivariable analyses (P < 0.05). In addition, a basic model including Gleason score, SM, and SVI was developed and used as a control to assess the incremental predictive power of the new model. The concordance index of our model was slightly higher than that of the CAPRA-S model (0.76 vs. 0.70, P = 0.02) and significantly higher than that of the basic model (0.76 vs. 0.66, P = 0.001). Time-dependent ROC curve and decision curve analyses also demonstrated the advantages of the new nomogram. Conclusions: PSA nadir after RP and MTD measured on MRI before surgery are independent predictors of BCR. By incorporating PSA nadir and MTD into the conventional predictive model, our newly developed nomogram significantly improved the accuracy of predicting BCRFS after RP.
null
null
8,008
397
[ 35, 316, 365, 897, 27, 88 ]
12
[ "psa", "bcr", "mtd", "rp", "patients", "nadir", "psa nadir", "nomogram", "ng", "ml" ]
[ "bcr radical prostatectomy", "prostate risk assessment", "radical prostatectomy calibration", "prognostic factor prostate", "prostate cancer candidates" ]
null
null
[CONTENT] Nomogram | PSA nadir | Tumor diameter | Magnetic resonance imaging | Biochemical recurrence | Radical prostatectomy [SUMMARY]
[CONTENT] Nomogram | PSA nadir | Tumor diameter | Magnetic resonance imaging | Biochemical recurrence | Radical prostatectomy [SUMMARY]
[CONTENT] Nomogram | PSA nadir | Tumor diameter | Magnetic resonance imaging | Biochemical recurrence | Radical prostatectomy [SUMMARY]
null
[CONTENT] Nomogram | PSA nadir | Tumor diameter | Magnetic resonance imaging | Biochemical recurrence | Radical prostatectomy [SUMMARY]
null
[CONTENT] Humans | Male | Neoplasm Grading | Neoplasm Recurrence, Local | Nomograms | Prognosis | Prostate-Specific Antigen | Prostatectomy | Prostatic Neoplasms | Retrospective Studies | Seminal Vesicles [SUMMARY]
[CONTENT] Humans | Male | Neoplasm Grading | Neoplasm Recurrence, Local | Nomograms | Prognosis | Prostate-Specific Antigen | Prostatectomy | Prostatic Neoplasms | Retrospective Studies | Seminal Vesicles [SUMMARY]
[CONTENT] Humans | Male | Neoplasm Grading | Neoplasm Recurrence, Local | Nomograms | Prognosis | Prostate-Specific Antigen | Prostatectomy | Prostatic Neoplasms | Retrospective Studies | Seminal Vesicles [SUMMARY]
null
[CONTENT] Humans | Male | Neoplasm Grading | Neoplasm Recurrence, Local | Nomograms | Prognosis | Prostate-Specific Antigen | Prostatectomy | Prostatic Neoplasms | Retrospective Studies | Seminal Vesicles [SUMMARY]
null
[CONTENT] bcr radical prostatectomy | prostate risk assessment | radical prostatectomy calibration | prognostic factor prostate | prostate cancer candidates [SUMMARY]
[CONTENT] bcr radical prostatectomy | prostate risk assessment | radical prostatectomy calibration | prognostic factor prostate | prostate cancer candidates [SUMMARY]
[CONTENT] bcr radical prostatectomy | prostate risk assessment | radical prostatectomy calibration | prognostic factor prostate | prostate cancer candidates [SUMMARY]
null
[CONTENT] bcr radical prostatectomy | prostate risk assessment | radical prostatectomy calibration | prognostic factor prostate | prostate cancer candidates [SUMMARY]
null
[CONTENT] psa | bcr | mtd | rp | patients | nadir | psa nadir | nomogram | ng | ml [SUMMARY]
[CONTENT] psa | bcr | mtd | rp | patients | nadir | psa nadir | nomogram | ng | ml [SUMMARY]
[CONTENT] psa | bcr | mtd | rp | patients | nadir | psa nadir | nomogram | ng | ml [SUMMARY]
null
[CONTENT] psa | bcr | mtd | rp | patients | nadir | psa nadir | nomogram | ng | ml [SUMMARY]
null
[CONTENT] tools | mri | psa | rp | mtd | bcr | prognostic | prostate | cancer | measurement [SUMMARY]
[CONTENT] psa | test | software | date | patients | performed | bcr | defined | statistical | package [SUMMARY]
[CONTENT] psa | mtd | bcr | ng ml | ml | ng | 01 | nadir | psa nadir | patients [SUMMARY]
null
[CONTENT] psa | bcr | mtd | patients | rp | nadir | nomogram | psa nadir | ng | ng ml [SUMMARY]
null
[CONTENT] BCR ||| MTD | PSA nadir | BCR [SUMMARY]
[CONTENT] 337 | January 2010 and March 2017 ||| ||| BCR ||| 3 and 5 years ||| ROC [SUMMARY]
[CONTENT] BCR | PSA nadir | MTD | Gleason | SM | SVI | BCR ||| Gleason | SM | SVI ||| 0.76 | 0.70 | 0.02 | 0.76 | 0.66 | 0.001 ||| ROC [SUMMARY]
null
[CONTENT] BCR ||| MTD | PSA nadir | BCR ||| 337 | January 2010 and March 2017 ||| ||| BCR ||| 3 and 5 years ||| ROC ||| ||| BCR | PSA nadir | MTD | Gleason | SM | SVI | BCR ||| Gleason | SM | SVI ||| 0.76 | 0.70 | 0.02 | 0.76 | 0.66 | 0.001 ||| ROC ||| RP | MTD | BCR ||| MTD | BCRFS [SUMMARY]
null
Phylogenetic and antigenic characterization of reassortant H9N2 avian influenza viruses isolated from wild waterfowl in the East Dongting Lake wetland in 2011-2012.
24779444
Wild waterfowl are recognized as the natural reservoir for influenza A viruses. Two distinct lineages, the American and Eurasian lineages, have been identified in wild birds. Gene flow between the two lineages is limited. The H9N2 virus has become prevalent in poultry throughout Eurasia, and mainly circulates in wild ducks and shorebirds in North America.
BACKGROUND
In this study, 22 H9N2 avian influenza viruses were isolated from wild waterfowl feces in East Dongting Lake Nature Reserve in November 2011 and March 2012. The phylogenetic, molecular, and antigenic characteristics of these viruses were analyzed based on analyses of the whole genome sequence of each isolate.
METHODS
Phylogenetic analyses indicated that these H9N2 viruses were generated by reassortment events. The HA, NA, PA, and NS genes were derived from the American gene pool, and the other four genes were derived from the Eurasian gene pool. Antigenic analyses indicated that these viruses were significantly different from the Eurasian lineage viruses.
RESULTS
This study presents the isolation of novel intercontinental recombinant H9N2 viruses from wild waterfowl in the East Dongting Lake wetland. The novel genotype H9N2 virus has not been detected in poultry in the region yet, and may be transmitted to naïve birds in poultry farms. Therefore, our results highlight the need for ongoing surveillance of wild birds and poultry in this region.
CONCLUSIONS
[ "Animals", "Antigens, Viral", "Birds", "China", "Evolution, Molecular", "Genome, Viral", "Influenza A Virus, H9N2 Subtype", "Influenza in Birds", "Lakes", "Molecular Sequence Data", "Phylogeny", "RNA, Viral", "Reassortant Viruses", "Sequence Analysis, DNA", "Wetlands" ]
4012091
Background
Influenza A viruses can be divided into different subtypes based upon the two surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA) [1,2]. Wild waterfowl are recognized as the natural reservoir of influenza A viruses, especially low pathogenic avian influenza viruses (LPAIVs) [3]. To date, all of the HA (H1–H16) and NA (N1–N9) subtype avian influenza viruses (AIVs) have been identified in wild waterfowl, with the exceptions of H17N10 and H18N11, which were isolated from bats [2,4,5]. Previous studies indicated that migratory birds play an important role in the emergence of epidemics in birds, pigs, horses, and humans [1]. Generally, AIVs are nonpathogenic in wild birds, although they sometimes cause significant morbidity and mortality when transmitted to domestic poultry [6-8]. The H9N2 subtype influenza virus was first isolated from a turkey in Wisconsin in 1966. In North America, it has been most prevalent in wild ducks and shorebirds and has shown no evidence of establishing a stable lineage in land-based poultry [9]. Since the 1990s, H9N2 subtype influenza viruses have become prevalent in land-based poultry across East Asia, Middle Asia, Europe, and Africa. Epidemiological and phylogenetic studies indicate that three distinct sublineages of H9N2 subtype influenza viruses have been established: Ck/Bj/94-like (A/chicken/Beijing/1/94 or A/duck/Hongkong/Y280/1997), G1-like (A/Quail/Hong Kong/G1/1997), and Korea-like (A/chicken/Y439/1997 or A/chicken/Korea/383490-p96323/1996) [10-12]. Poultry infected with H9N2 subtype AIVs show reduced egg production and moderate morbidity when co-infected with other viruses or bacteria [13]. Since 1999, some H9N2 viruses have been identified with potential human virus receptor specificity and have occasionally been transmitted to humans and swine [14-16]. Moreover, H9N2 viruses may have contributed internal genes to the highly pathogenic avian influenza (HPAI) H5N1 virus in Hong Kong in 1997 and to the novel H7N9 avian influenza virus in mainland China in 2013 [17,18]. Therefore, H9N2 subtype avian influenza viruses have been classified as candidate viruses with pandemic potential [19,20]. The threat posed by H9N2 subtype avian influenza viruses to the poultry industry and public health should not be ignored. Hunan East Dongting Lake Nature Reserve is one of the largest wetlands in mainland China and is an important overwintering area and staging site for migratory birds flying along the East Asia–Australia flyway [21]. Moreover, numerous duck farms surround the lake [22,23]. Every migrating season, wild birds congregate at the lake and share a common habitat with domestic ducks, which provides an opportunity for virus reassortment. During active surveillance of AIVs between 2011 and 2012, we isolated H9N2 viruses from wild waterfowl in the East Dongting Lake wetlands in November 2011 (7 isolates) and March 2012 (15 isolates). The whole genome sequences of all 22 isolates were obtained, and phylogenetic trees for each gene segment were generated to analyze the relationship of these isolates to other viruses circulating in wild birds or poultry. Furthermore, we performed antigenic analyses to investigate the antigenic characteristics of the isolates.
null
null
Results
Virus isolation and sequence analysis: In total, 6621 environmental samples were collected in the Hunan East Dongting Lake wetland. H9N2 subtype avian influenza viruses were isolated from wild waterfowl feces in November 2011 (7 isolates) and March 2012 (15 isolates) (Table 1). The whole genome sequence of each isolate was obtained. The complete viral genome consists of 8 gene segments of negative-sense, single-stranded RNA: PB2 (2341 bp), PB1 (2341 bp), PA (2233 bp), HA (1742 bp), NP (1565 bp), NA (1467 bp), M (1027 bp), and NS (890 bp). The 22 H9N2 AIVs isolated from wild waterfowl feces in the Hunan East Dongting Lake Nature Reserve in 2011–2012. Complete sequences of the 22 H9N2 viruses showed over 99% nucleotide identity across all eight gene segments. Therefore, we selected a representative virus, A/Wild waterfowl/Dongting/C2148/2011 (H9N2) (C2148), for further analysis. Each of the 8 gene segments of C2148 had the highest nucleotide identity (over 99%) with those of A/Egret/Hunan/1/2012 (H9N2), which was isolated in the same region [24]. Additionally, the following genes showed 99% homology with the indicated reference strains: the PB2 gene of C2148 with A/wild duck/Korea/SNU50-5/2009 (H5N1), the PB1 gene with A/wild duck/Korea/CSM4-28/2010 (H4N6), the PA gene with A/northern shoveler/California/2810/2011 (H11N2), the HA gene with A/northern shoveler/Interior Alaska/8BM3470/2008 (H9N2), the NP gene with A/duck/Nanjing/1102/2010 (H4N8), the NA gene with A/snow goose/Montana/466771-4/2006 (H5N2), and the M gene with A/wild duck/Korea/CSM4-28/2010 (H4N6). The NS gene showed 98% nucleotide similarity with that of A/surface water/Minnesota/W07-2241/2007 (H3N8) (Table 2). Nucleotide identity (%) of the A/Wild waterfowl/Dongting/C2148/2011 H9N2 virus with the most closely related isolates in the GenBank database. Comparisons were performed using BLAST searches of the Influenza Sequence Database.
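The pairwise identity figures in Table 2 came from BLAST searches; as a toy illustration of the underlying percent-identity calculation, the following base-R sketch compares two already-aligned sequences of equal length. The fragments shown are invented for illustration, not sequences from the study's isolates.

## Percent identity between two aligned sequences (gaps as "-");
## gap columns are excluded from the denominator.
percent_identity <- function(a, b) {
  x <- strsplit(toupper(a), "")[[1]]
  y <- strsplit(toupper(b), "")[[1]]
  stopifnot(length(x) == length(y))
  keep <- x != "-" & y != "-"          # ignore gap columns
  100 * sum(x[keep] == y[keep]) / sum(keep)
}

percent_identity("ATGGAGA-AATAGTG",
                 "ATGGAGACAATAGTA")    # toy fragments, not real PB2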
Phylogenetic analyses: To further characterize the evolution of the 22 H9N2 viruses isolated from wild waterfowl, phylogenetic trees were constructed for each gene. In the HA gene tree, the H9 subtype AIVs clustered into five distinct lineages: G1-like, Y280-like, Korea-like, and two American lineages. The strains that clustered with the G1-like, Y280-like, and Korea-like sublineages are known to have been endemic in domestic poultry in Eurasia and Africa [25]. All 22 isolates belonged to American lineage II, which includes viruses that are prevalent in wild birds or turkeys in North America. Some genetic exchange was observed between the North American and Eurasian strains: a strain isolated in Korea clustered with an American lineage, and several H9 subtype AIVs isolated in America clustered with the Korea-like sublineage (Figure 1a). Similar evolutionary patterns were observed in the N2 gene tree, which clustered into four lineages: G1-like, Y280-like, Korea-like, and American. The NA genes of all isolates clustered with the American lineage and were most closely related to an H5N2 subtype AIV isolated from a snow goose in Montana (Figure 1b). The PA and NS genes of the isolates both clustered with the American lineage and showed high nucleotide similarity with strains isolated from wild birds in North America (Figures 1e and h). By contrast, the PB2, PB1, NP, and M internal genes clustered with the Eurasian gene pool, but were distinct from the H9N2 viruses circulating in Eurasia and Africa and from the corresponding internal genes of the novel H7N9 AIV (Figures 1c, d, f, and g). Phylogenetic trees of HA (a), NA (b), PB2 (c), PB1 (d), PA (e), NP (f), M (g), and NS (h) genes of H9N2 subtype AIVs. Neighbor-joining (NJ) trees were generated using MEGA 5.01. Estimates of the phylogenies were calculated by performing 1000 neighbor-joining bootstrap replicates, all rooted to the sequence of Turkey/Wisconsin/1/66 (H9N2). Our 22 isolates are highlighted in red and representative strains are shown in blue.
Abbreviations: Agt, American green-winged teal; Aw, American wigeon; Bd, black duck; Bh, bufflehead; Bs, Bewick’s swan; Bt, baikal teal; Ch, chukar; Ck, chicken; Dk, duck; Eg, egret; En, environment; Ew, Eurasian wigeon; Fe, feces; Gw, gadwall; Gf, guinea fowl; Gs, goose; Gt, green-winged teal; Lg, laughing gull; Ltd, longtail duck; Md, mallard; Mud, Muscovy duck; Np, northern pintail; Ns, northern shoveler; Os, Ostrich; Pfg, pink-footed goose; Pi, pintail duck; Qa, quail; Rcp, red crested pochard; Rt, ruddy turnstone; Sb, Shorebird; Sg, snow goose; Sw, swine; Suw, surface water; Tbm, thick-billed murre; Ty, turkey; Wb, wild bird; Wd, wild duck; Wfg, white-fronted goose; AH, Anhui; AL, Alaska; ALB, Alberta; AR, Arkansas; BJ, Beijing; CA, California; DE, Delaware; EG, Egypt; GD, Guangdong; GE, Germany; GX, Guangxi; GZ, Guangzhou; HLJ, Heilongjiang; HK, Hong Kong; HN, Hunan; HO, Hokkaido; IA, Interior Alaska; IL, Illinois; IR, Iran; JX, Jiangxi; Js, Jiangsu; KR, Korea; LO, Louisiana; MG, Mongolia; MI, Minnesota; MIS, Missouri; MT, Montana; ML, Maryland; NanJ, Nanjing; NE, Netherlands; NJ, New Jersey; OH, Ohio; PA, Pakistan; QB, Quebec; RZ, Rizhao; SA, South Africa; SD, Shandong; SH, Shanghai; ST, Shantou; TX, Texas; VN, Vietnam; WA, Washington; Wi, Wisconsin; XH, Xianghai; ZJ, Zhejiang. Phylogenetic analyses indicated that these isolates were novel reassortant H9N2 viruses: four genes (HA, NA, PA, and NS) were derived from the American AIV gene pool and four genes (PB2, PB1, NP, and M) were derived from the Eurasian gene pool (Figure 2). A hypothetical reassortment pattern of the novel H9N2 virus isolates. Red gene segments indicate genes derived from the American AIV gene pool and blue gene segments indicate genes that originated in the Eurasian gene pool.
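The tree-building procedure summarized in the legend above (neighbor-joining, 1000 bootstrap replicates, rooted to Turkey/Wisconsin/1/66) was carried out in MEGA 5.01; a comparable analysis can be sketched in R with the ape package, where "ha_aligned.fasta" is a hypothetical file of MAFFT-aligned HA sequences, not a file from the study.

## Neighbor-joining tree with 1000 bootstrap replicates (ape package).
library(ape)

aln  <- read.dna("ha_aligned.fasta", format = "fasta")
d    <- dist.dna(aln, model = "K80")           # pairwise distances
tree <- nj(d)                                  # neighbor-joining tree

## Root to Turkey/Wisconsin/1/66 as in the figure legend (the label
## must match the FASTA header exactly).
tree <- root(tree, outgroup = "Turkey/Wisconsin/1/66", resolve.root = TRUE)

## 1000 bootstrap replicates over alignment columns.
bs <- boot.phylo(tree, aln, function(x) nj(dist.dna(x, model = "K80")),
                 B = 1000)
tree$node.label <- bs
plot(tree, cex = 0.6); nodelabels(bs, cex = 0.5)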
Molecular analysis: The molecular characteristics of the 22 H9N2 isolates were compared with those of representative H9N2 virus strains circulating in Eurasia and America. The amino acid sequence at the cleavage site of all isolates possessed a single basic amino acid (R) within the HA connecting peptide (VPELPKGR↓GLF), a typical feature of LPAIVs [1]. The HA receptor-binding pocket included the avian-like motif, Q226 and G228 (H3 numbering, Table 3), suggesting that these viruses preferentially bind to an avian-like receptor (α2,3-linked sialic acid) [26]. Analysis of potential HA protein N-X-S/T glycosylation site motifs revealed 8 sites, at positions 29, 82, 141, 218, 298, 305, 492, and 551 (Table 4). Amino acid sequences of specific sites in the HA, NA, and PB2 proteins of 22 H9N2 AIVs. aAccording to the H3 numbering. The comparison of HA amino acid sequences of H9N2 AIVs. There were no deletions in the NA stalk region, and no H274Y or N294S substitutions were observed in the NA protein, indicating that these viruses would be sensitive to NA inhibitors such as oseltamivir [27,28]. They encoded amino acids E and D at positions 627 and 701 of the PB2 protein, respectively, which are characteristic of AIVs [29-31].
No amino acid mutations associated with amantadine resistance were observed in the M2 ion channel protein. Additionally, no substitutions associated with increased virulence in mammals were detected in the PB2, PB1, or NS proteins (data not shown). Antigenic analysis: To assess the antigenic properties of our novel H9N2 isolates, we performed hemagglutination inhibition (HI) assays with 5 antisera raised against representative H9N2 viruses and two antisera raised against H9N2 isolates from this study (C2148 and PC2539). The HI antibody titers against the C2148 and PC2539 viruses were much higher than the titers against the other representative H9N2 viruses, including with the antiserum raised against a virus isolated from shorebirds in America (Sb/DE/249/06) (Table 5). These results indicate that the H9N2 viruses isolated in this study are antigenically distinct from previously identified H9N2 viruses. Cross-reactivity (HI-determined titers) of the H9N2 AIVs with sera against representative strains. Homologous titers are in bold.
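As a toy illustration of how a cross-HI table such as Table 5 can be read quantitatively, the following R sketch expresses heterologous titers as log2 fold-drops relative to the homologous titer, a common way of summarizing antigenic distance. All titer values below are invented for illustration and are not the study's data.

## Summarizing a cross-HI table: log2 fold-drop relative to the
## homologous titer in each antiserum column (values are invented).
hi <- matrix(c(1280,  40,
                 20, 640),
             nrow = 2, byrow = TRUE,
             dimnames = list(virus     = c("C2148", "Ck/Bj"),
                             antiserum = c("anti-C2148", "anti-Ck/Bj")))

homologous <- diag(hi)                       # titers against self
fold_drop  <- sweep(log2(hi), 2, log2(homologous), FUN = "-")
-fold_drop   # log2 units below the homologous titer; a drop of >=3
             # (8-fold) is a common rule of thumb for antigenic distinctness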
Conclusions
In summary, the 22 H9N2 viruses isolated from wild waterfowl in 2011–2012 were novel reassortant H9N2 subtype AIVs with similar genotypes. All isolates encoded proteins with low pathogenic characteristics. Determining whether these H9N2 AIVs can be transmitted to poultry or other animals, or can further adapt to new hosts, will require continuous monitoring. Our findings extend knowledge of the ecology of AIVs circulating in wild birds in the Dongting Lake region and highlight the importance of intercontinental AIV gene flow in migratory birds. We therefore emphasize the vital need for continued surveillance of AIVs in wild birds and poultry to prepare for and respond to potential influenza pandemics.
[ "Background", "Virus isolation and sequence analysis", "Phylogenetic analyses", "Molecular analysis", "Antigenic analysis", "Sample collection", "Virus detection and isolation", "Virus subtyping and sequencing", "Phylogenetic analyses", "Antigenic analyses", "Nucleotide sequence accession numbers", "Competing interests", "Authors’ contributions" ]
[ "Influenza A viruses can be divided into different subtypes based upon the two surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA)\n[1,2]. Wild waterfowl are recognized as the natural reservoir of influenza A viruses, especially low pathogenic avian influenza virus (LPAIV)\n[3]. To date, all of the HA (H1–H16) and NA (N1–N9) subtype avian influenza viruses (AIVs) have been identified in wild waterfowl, with the exceptions of H17N10 and H18N11, which were isolated from bats\n[2,4,5]. Previous studies indicated that migratory birds play an important role in the emergence of epidemics in birds, pigs, horses, and humans\n[1]. Generally, AIVs are nonpathogenic in wild birds, although they sometimes cause significant morbidity and mortality when transmitted to domestic poultry\n[6-8].\nThe H9N2 subtype influenza virus was first isolated from a turkey in Wisconsin in 1966. It has been most prevalent in wild ducks and shorebirds and has shown no evidence of establishing a stable lineage in land-based poultry in North America\n[9]. Since the l990s, H9N2 subtype influenza viruses have become prevalent in land-based poultry across East Asia, Middle Asia, Europe and Africa. Epidemiological and phylogenetic studies indicate that three distinct sublineages of H9N2 subtype influenza viruses have been established: Ck/Bj/94-like (A/chicken/Beijing/1/94 or A/duck/Hongkong/Y280/1997), G1-like (A/Quail/Hong Kong/G1/1997) and Korea-like (A/chicken/Y439/1997 or A/chicken/Korea/383490-p96323/1996)\n[10-12].\nPoultry infected with H9N2 subtype AIVs have reduced egg production and moderate morbidity when co-infected with other viruses or bacteria\n[13]. Since 1999, some H9N2 viruses have been identified with potential human virus receptor specificity, and have been occasionally transmitted to human and swine\n[14-16]. Moreover, H9N2 viruses may have contributed internal genes to the highly pathogenic avian influenza (HPAI) H5N1 virus in Hong Kong in 1997 and the novel H7N9 avian influenza virus in mainland China in 2013\n[17,18]. Therefore, the H9N2 subtype avian influenza viruses have been classified as candidate viruses with pandemic potential\n[19,20]. The threat to the poultry industry and public health posed by H9N2 subtype avian influenza viruses should not be ignored.\nHunan East Dongting Lake Nature Reserve is one of the largest wetlands in mainland China and is an important overwintering area and staging site for migratory birds that fly along the East Asia–Australia flyway\n[21]. Moreover, numerous duck farms are found around the lake\n[22,23]. Every migrating season, wild birds congregate at the lake where they share a common habitat with domestic ducks, which provides an opportunity for virus reassortment. During active surveillance of AIVs between 2011 and 2012, we isolated H9N2 viruses from wild waterfowl in East Dongting Lake wetlands in November 2011 (7 isolates) and March 2012 (15 isolates). The whole genome sequences of all 22 isolates were obtained, and phylogenetic trees for each gene segment were generated to analyze the relationship of these isolates with other circulating viruses in wild birds or poultry. Furthermore, we performed antigenic analyses to investigate the antigenic characteristics of the isolates.", "In total, 6621 environmental samples were collected in Hunan East Dongting Lake wetland. H9N2 subtype avian influenza viruses were isolated in November 2011 (7 isolates) and March 2012 (15 isolates) from wild waterfowl feces (Table \n1). 
The whole genome sequence of each isolate was obtained. The complete viral genome consists of 8 gene segments of negative-sense, single-stranded RNA, including PB2 (2341 bp), PB1 (2341 bp), PA (2233 bp), HA (1742 bp), NP (1565 bp), NA (1467 bp), M (1027 bp), and NS (890 bp).\nThe 22 H9N2 AIVs isolated from wild waterfowl feces in the Hunan East Dongting Lake Nature Reserve in 2011–2012\nComplete sequences of the 22 H9N2 viruses showed that they were over 99% nucleotide identity in all eight gene segments. Therefore, we selected a representative virus, A/Wild waterfowl/Dongting/C2148/2011 (H9N2) (C2148), for further analysis. Each of the 8 gene segments of C2148 had the highest nucleotide (over 99%) identity with those of A/Egret/Hunan/1/2012 (H9N2), which was isolated in the same region\n[24]. Additionally, the following genes showed 99% homology with the indicated reference strain: the PB2 gene of C2148 with A/wild duck/Korea/SNU50-5/2009 (H5N1), the PB1 gene of C2148 with A/wild duck/Korea/CSM4-28/2010 (H4N6), the PA gene with A/northern shoveler/California/2810/2011 (H11N2), the HA gene with A/northern shoveler/Interior Alaska/8BM3470/2008 (H9N2), the NP gene with A/duck/Nanjing/1102/2010 (H4N8), the NA gene with A/snow goose/Montana/466771-4/2006 (H5N2), and the M gene with A/wild duck/Korea/CSM4-28/2010 (H4N6). The NS gene showed 98% nucleotide similarity with that of A/surface water/Minnesota/W07-2241/2007 (H3N8) (Table \n2).\nNucleotide Identity (%) of A/Wild waterfowl/Dongting/C2148/2011 H9N2 virus with the closely related isolates in GenBank Database\nComparisons were performed by using the BLAST search in the Influenza Sequcence Database.", "To further characterize the evolution of the 22 H9N2 viruses isolated from wild waterfowl, phylogenetic trees for each gene were constructed. In the HA gene tree, the H9 subtype AIVs clustered into five distinct lineages: G1-like, Y280-like, Korea-like, and two American lineages. The strains that clustered with the G1-like, Y280-like, and Korea-like sublineages are known to have been endemic in domestic poultry in Eurasia and Africa\n[25]. All of the 22 isolates belonged to the American lineage II, which included viruses that are prevalent in wild birds or turkeys in North America. Some genetic exchange was observed between the North American and Eurasian strains, as a strain isolated in Korea clustered with an American lineage, and several H9 subtype AIVs isolated in America clustered with the Korea-like sublineage (Figure \n1a). Similar evolutionary patterns were observed in the N2 gene tree, which also clustered into four lineages: G1-like, Y280-like, Korea-like and American lineages. The NA genes of all isolates clustered with the American lineage, and were most closely related to an H5N2 AIV subtype isolated from a snow goose in Montana (Figure \n1b). The PA and NS genes of the isolates both clustered with the American lineage and showed high nucleotide similarity with strains isolated from wild birds in North America (Figure \n1e and h). By contrast, the PB2, PB1, NP, and M internal genes clustered with the Eurasian gene pool, but were distinct from the H9N2 viruses that circulated in Eurasia and Africa and the corresponding internal genes of the novel H7N9 AIV (Figure \n1c, d, f, and g).\nPhylogenetic trees of HA (a), NA (b), PB2 (c), PB1 (d), PA (e), NP (f), M (g), and NS (h) genes of H9N2 subtype AIVs. Neighbor-joining (NJ) trees were generated using MEGA 5.01. 
Estimates of the phylogenies were calculated by performing 1000 neighbor-joining bootstrap replicates, all rooted to the sequence of Turkey/Wisconsin/1/66 (H9N2). Our 22 isolates are highlighted in red and representative strains are shown in blue. Abbreviations: Agt, American green-winged teal; Aw, American wigeon; Bd, black duck; Bh, bufflehead; Bs, Bewick’s swan; Bt, baikal teal; Ch, chukar; Ck, chicken; Dk, duck; Eg, egret; En, environment; Ew, Eurasian wigeon; Fe, feces; Gw, gadwall; Gf, guinea fowl; Gs, goose; Gt, green-winged teal; Lg, laughing gull; Ltd, longtail duck; Md, mallard; Mud, Muscovy duck; Np, northern pintail; Ns, northern shoveler; Os, Ostrich; Pfg, pink-footed goose; Pi, pintail duck; Qa, quail; Rcp, red crested pochard; Rt, ruddy turnstone; Sb, Shorebird; Sg, snow goose; Sw, swine; Suw, surface water; Tbm, thick-billed murre; Ty, turkey; Wb, wild bird; Wd, wild duck; Wfg, white-fronted goose; AH, Anhui; AL, Alaska; ALB, Alberta; AR, Arkansas; BJ, Beijing; CA, California; DE, Delaware; EG, Egypt; GD, Guangdong; GE, Germany; GX, Guangxi; GZ, Guangzhou; HLJ, Heilongjiang; HK, Hong Kong; HN, Hunan; HO, Hokkaido; IA, Interior Alaska; IL, Illinois; IR, Iran; JX, Jiangxi; Js, Jiangsu; KR, Korea; LO, Louisiana; MG, Mongolia; MI, Minnesota; MIS, Missouri; MT, Montana; ML, Maryland; NanJ, Nanjing; NE, Netherlands; NJ, New Jersey; OH, Ohio; PA, Pakistan; QB, Quebec; RZ, Rizhao; SA, South Africa; SD, Shandong; SH, Shanghai; ST, Shantou; TX, Texas; VN, Vietnam; WA, Washington; Wi, Wisconsin; XH, Xianghai; ZJ, Zhejiang.\nPhylogenetic analyses indicated that these isolates were novel recombinant H9N2 viruses. In these viruses, 4 genes (HA, NA, PA, and NS) were derived from the American AIV gene pool and 4 genes (PB2, PB1, NP, and M) were derived from the Eurasian gene pool (Figure \n2).\nA hypothetical reassortment pattern of the novel H9N2 virus isolates. Red colored gene segments indicate genes that are derived from the American AIV gene pool and blue colored gene segments indicate genes that originated in the Eurasian gene pool.", "The molecular characteristics of the 22 H9N2 isolates were compared with representative H9N2 virus strains circulating in Eurasia and America. The amino acid sequence of the cleavage sites of all isolates possessed a single basic amino acid (R) within the HA connecting peptide (VPELPKGR↓GLF), which is a typical feature of LPAIVs\n[1]. The HA receptor-binding pocket included the avian-like motif, Q226 and G228 (H3 numbering, Table \n3), suggesting that these viruses could preferentially bind to an avian-like receptor (α2, 3-linked sialic acid)\n[26]. Analysis of potential HA protein N-X-S/T glycosylation site motifs revealed 8 sites at positions 29, 82, 141, 218, 298, 305, 492, and 551 (Table \n4).\nAmino acid sequences of specific sites in the HA, NA, and PB2 proteins of 22 H9N2 AIVs\naAccording to the H3 numbering.\nThe comparison of HA amino acid sequences of H9N2 AIVs\nThere were no deletions in the NA stalk region, and no H274Y or N294S substitutions were observed in the NA protein. These properties indicated that these viruses would be sensitive to NA inhibitors, such as oseltamivir\n[27,28]. They encoded amino acids E and D at positions 627 and 701 of the PB2 protein, respectively, which are characteristics of AIVs\n[29-31]. No amino acid mutations associated with amantadine resistance were observed in the M2 ion channel protein. 
"The molecular characteristics of the 22 H9N2 isolates were compared with representative H9N2 virus strains circulating in Eurasia and America. The cleavage site of all isolates possessed a single basic amino acid (R) within the HA connecting peptide (VPELPKGR↓GLF), a typical feature of LPAIVs [1]. The HA receptor-binding pocket included the avian-like motif Q226 and G228 (H3 numbering, Table 3), suggesting that these viruses preferentially bind an avian-like receptor (α2,3-linked sialic acid) [26]. Analysis of potential N-X-S/T glycosylation site motifs in the HA protein revealed 8 sites, at positions 29, 82, 141, 218, 298, 305, 492, and 551 (Table 4).\nAmino acid sequences of specific sites in the HA, NA, and PB2 proteins of 22 H9N2 AIVs\naAccording to the H3 numbering.\nComparison of the HA amino acid sequences of H9N2 AIVs\nThere were no deletions in the NA stalk region, and no H274Y or N294S substitutions were observed in the NA protein. These properties indicated that the viruses would be sensitive to NA inhibitors, such as oseltamivir [27,28]. They encoded amino acids E and D at positions 627 and 701 of the PB2 protein, respectively, which are characteristic of AIVs [29-31]. No amino acid mutations associated with amantadine resistance were observed in the M2 ion channel protein. Additionally, no substitutions associated with increased virulence in mammals were detected in the PB2, PB1, or NS proteins (data not shown).",
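The marker checks described above are straightforward to script. The sketch below is a simplified illustration, not the authors' pipeline: it scans a protein sequence for N-X-S/T glycosylation sequons and reads residues at positions of interest, using naive 1-based indexing, whereas the reported HA positions follow H3 numbering.

```python
# Minimal sketch: scanning deduced protein sequences for the markers
# discussed above. Positions are naive 1-based indices into the supplied
# sequence; no H3-numbering conversion is performed.
import re

SEQUON = re.compile(r"N[^P][ST]")          # N-X-S/T, where X is not proline

def glycosylation_sites(protein: str) -> list[int]:
    """1-based start positions of potential N-linked glycosylation sequons."""
    return [m.start() + 1 for m in SEQUON.finditer(protein.upper())]

def residue_at(protein: str, pos: int) -> str:
    """Residue at a 1-based position, with no numbering conversion."""
    return protein[pos - 1]

# Hypothetical usage, given PB2 and HA sequences `pb2` and `ha`:
#   residue_at(pb2, 627) == "E" and residue_at(pb2, 701) == "D"  # avian-like
#   glycosylation_sites(ha)  # compare with the 8 HA sites in Table 4
```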
"To assess the antigenic properties of our novel H9N2 isolates, we performed hemagglutination inhibition (HI) assays with five antisera raised against representative H9N2 viruses and two antisera raised against H9N2 isolates from this study (C2148 and PC2539). The HI antibody titers that recognized the C2148 and PC2539 viruses were much higher than the titers against the other representative H9N2 viruses, including those obtained with the antiserum raised against a virus isolated from shorebirds in America (Sb/DE/249/06) (Table 5). These results indicated that the H9N2 viruses isolated in this study were antigenically distinct from previously identified H9N2 viruses.\nCross-reactivity (HI titers) of the H9N2 AIVs with sera raised against representative strains\nHomologous titers are shown in bold.", "Between November 2011 and April 2012, we collected fresh fecal samples from wild birds and lake water in the Hunan East Dongting Lake Nature Reserve (28°58′–29°38′N, 112°43′–113°13′E), Yueyang, Hunan, China. This site, located in the middle reaches of the Yangtze River, is an important overwintering habitat for East Asian migratory birds. Fresh fecal samples were collected with sterile cotton swabs, following previously described protocols [39], and placed in 15 mL tubes containing 4 mL virus transport medium (VTM). The VTM contained Tissue culture medium 199 (Thermo Scientific Hyclone, Logan, UT, USA), 0.5% BSA (Roche, Mannheim, Germany), 10% glycerol, 2 × 10⁶ U/L penicillin G, 200 mg/L streptomycin, 2 × 10⁶ U/L polymyxin B sulfate, 250 mg/L gentamicin, 60 mg/L ofloxacin HCl, 0.2 g/L sulfamethoxazole, and 5 × 10⁵ U/L nystatin (Sigma, St Louis, MO, USA). Samples were immediately transported to the laboratory at 4°C and stored at -80°C.", "RNA was extracted from 200 μL of fecal suspension using the BioRobot Universal system with the QIAamp One-For-All Nucleic Acid Kit (Qiagen, Hilden, Germany) in accordance with the manufacturer’s instructions. Influenza A virus was screened for using a qPCR assay targeting the influenza matrix gene. Detection was performed on a Stratagene Mx3005P thermocycler with an AgPath RT-PCR Kit (Applied Biosystems, Foster City, CA, USA), using 5 μL of eluate in a 25 μL total reaction volume. Each run included 2 negative and 2 positive controls along with 92 samples. Samples positive by qPCR were inoculated into the allantoic cavity of 9-day-old specific pathogen free (SPF) embryonated chicken eggs (ECEs). The ECEs were incubated at 37°C for 48 h and then chilled at 4°C for 6–8 h. Allantoic fluids were harvested, and hemagglutination assays with 0.5% turkey red blood cells confirmed the presence of viruses.", "Viral RNA was extracted from infected allantoic fluid using the RNeasy Mini Kit (Qiagen) and reverse transcribed using the Uni12 primer (5′-AGCAAAAGCAGG-3′) with the SuperRT cDNA Kit (CWBIO, Beijing, China). Isolate subtyping was performed by PCR using 16 sets of HA (H1–H16) primers and 9 sets of NA (N1–N9) primers designed by the Chinese National Influenza Center. Complete genome amplification was performed using specific primers (primer sequences available on request) with the 2× Es Taq MasterMix Kit (CWBIO). PCR products of the expected sizes were purified using a QIAquick PCR Purification Kit (Qiagen). Sequencing was performed using the BigDye Terminator v3.1 Cycle Sequencing Kit on an ABI PRISM 3700xl DNA Analyzer (Applied Biosystems), following the manufacturer’s instructions.", "We performed multiple sequence alignments using MAFFT software, version 6. Sequences of representative H9N2 subtype influenza A virus strains circulating in America and Eurasia, together with homologous sequences that shared high nucleotide similarity with our H9N2 isolates, were included in the phylogenetic analyses. Preliminary phylogenetic trees were constructed to infer the overall topology, using more than 500 sequences for each gene. To more clearly define the phylogenetic relationships of the 22 H9N2 virus isolates, representative sequences of each cluster were then selected to generate neighbor-joining (NJ) trees. Phylogenetic estimates were calculated by performing 1000 neighbor-joining bootstrap replicates.", "Antigenic analyses were performed using 6 polyclonal ferret antisera against A/chicken/Beijing/1/1994 (H9N2) (Ck/Bj), A/quail/Hong Kong/G1/1997 (H9N2) (G1), A/chicken/Hong Kong/G9/1997 (H9N2) (G9), A/Hong Kong/1073/1999 (H9N2) (HK/1073), A/chicken/Hong Kong/NT101/2003 (H9N2) (HK/NT101), and A/shorebird/Delaware/249/2006 (H9N2) (DE/249), which were kindly provided by Dr. Webby. Polyclonal ferret antisera raised against A/wild waterfowl/Dongting/C2148/2011 (H9N2) (C2148) and A/wild waterfowl/Dongting/PC2539/2012 (H9N2) (PC2539), two representative influenza strains from this study, were also used. HI assays were performed as previously described. Briefly, all sera were treated with receptor destroying enzyme II (RDE) (Denka Seiken, Tokyo, Japan) to remove nonspecific inhibitors of hemagglutination by adding 3 volumes of RDE to tubes containing 1 volume of serum. Samples were incubated at 37°C for 16–18 h and then inactivated at 56°C for 30 min. After RDE inactivation, 6 volumes of phosphate buffered saline (PBS; Thermo Scientific Hyclone) were added. The treated sera were then serially diluted 2-fold in 25 μL PBS, and equal volumes of antigen (8 HA units/50 μL) were added to each well. The plates were gently mixed and incubated at room temperature for 20–30 min. Titers were determined by adding 50 μL of 0.5% turkey red blood cells to each well. The limit of detection for HI titers was ≤ 20.", "The nucleotide sequences generated in our study were deposited in GenBank under accession numbers KF971946 to KF972121.", "The findings and conclusions of this report are those of the authors and do not necessarily represent the views of the funding agency. We declare no conflicts of interest.", "YLS and SXH designed the research; YZ performed research and drafted the manuscript; TB, YWH, ZHD, HZ, ZYB, MDY, and JFH collected samples and performed research; LY and XZ analyzed data; and YLS and WFZ helped to draft and revise the manuscript. All authors read and approved the final manuscript." ]
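Given the two-fold dilution series in the HI protocol above, titers and cross-reactivity comparisons reduce to simple arithmetic. The sketch below is an illustration only: the 1:20 starting dilution is an assumption consistent with the stated detection limit, and the "three two-fold steps" threshold in the comment is a common convention, not a criterion taken from this paper.

```python
# Minimal sketch: deriving an HI titer from a 2-fold dilution series and
# comparing homologous vs. heterologous titers, as summarized in Table 5.
from math import log2

def hi_titer(wells_inhibited: list[bool], start_dilution: int = 20) -> int:
    """Reciprocal of the highest serum dilution that still inhibits
    hemagglutination; 0 if even the first well shows no inhibition."""
    titer, dilution = 0, start_dilution
    for inhibited in wells_inhibited:
        if not inhibited:
            break
        titer = dilution
        dilution *= 2
    return titer

def log2_fold_drop(homologous: int, heterologous: int) -> float:
    """Two-fold steps by which a heterologous titer falls below the
    homologous titer; >= 3 steps is a common rule of thumb for calling
    viruses antigenically distinct."""
    return log2(homologous / heterologous) if heterologous else float("inf")

print(hi_titer([True, True, True, True, False]))   # -> 160
print(log2_fold_drop(1280, 80))                    # -> 4.0
```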
[ "Background", "Results", "Virus isolation and sequence analysis", "Phylogenetic analyses", "Molecular analysis", "Antigenic analysis", "Discussion", "Conclusions", "Materials and methods", "Sample collection", "Virus detection and isolation", "Virus subtyping and sequencing", "Phylogenetic analyses", "Antigenic analyses", "Nucleotide sequence accession numbers", "Competing interests", "Authors’ contributions" ]
[ "Influenza A viruses can be divided into different subtypes based upon the two surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA)\n[1,2]. Wild waterfowl are recognized as the natural reservoir of influenza A viruses, especially low pathogenic avian influenza virus (LPAIV)\n[3]. To date, all of the HA (H1–H16) and NA (N1–N9) subtype avian influenza viruses (AIVs) have been identified in wild waterfowl, with the exceptions of H17N10 and H18N11, which were isolated from bats\n[2,4,5]. Previous studies indicated that migratory birds play an important role in the emergence of epidemics in birds, pigs, horses, and humans\n[1]. Generally, AIVs are nonpathogenic in wild birds, although they sometimes cause significant morbidity and mortality when transmitted to domestic poultry\n[6-8].\nThe H9N2 subtype influenza virus was first isolated from a turkey in Wisconsin in 1966. It has been most prevalent in wild ducks and shorebirds and has shown no evidence of establishing a stable lineage in land-based poultry in North America\n[9]. Since the l990s, H9N2 subtype influenza viruses have become prevalent in land-based poultry across East Asia, Middle Asia, Europe and Africa. Epidemiological and phylogenetic studies indicate that three distinct sublineages of H9N2 subtype influenza viruses have been established: Ck/Bj/94-like (A/chicken/Beijing/1/94 or A/duck/Hongkong/Y280/1997), G1-like (A/Quail/Hong Kong/G1/1997) and Korea-like (A/chicken/Y439/1997 or A/chicken/Korea/383490-p96323/1996)\n[10-12].\nPoultry infected with H9N2 subtype AIVs have reduced egg production and moderate morbidity when co-infected with other viruses or bacteria\n[13]. Since 1999, some H9N2 viruses have been identified with potential human virus receptor specificity, and have been occasionally transmitted to human and swine\n[14-16]. Moreover, H9N2 viruses may have contributed internal genes to the highly pathogenic avian influenza (HPAI) H5N1 virus in Hong Kong in 1997 and the novel H7N9 avian influenza virus in mainland China in 2013\n[17,18]. Therefore, the H9N2 subtype avian influenza viruses have been classified as candidate viruses with pandemic potential\n[19,20]. The threat to the poultry industry and public health posed by H9N2 subtype avian influenza viruses should not be ignored.\nHunan East Dongting Lake Nature Reserve is one of the largest wetlands in mainland China and is an important overwintering area and staging site for migratory birds that fly along the East Asia–Australia flyway\n[21]. Moreover, numerous duck farms are found around the lake\n[22,23]. Every migrating season, wild birds congregate at the lake where they share a common habitat with domestic ducks, which provides an opportunity for virus reassortment. During active surveillance of AIVs between 2011 and 2012, we isolated H9N2 viruses from wild waterfowl in East Dongting Lake wetlands in November 2011 (7 isolates) and March 2012 (15 isolates). The whole genome sequences of all 22 isolates were obtained, and phylogenetic trees for each gene segment were generated to analyze the relationship of these isolates with other circulating viruses in wild birds or poultry. Furthermore, we performed antigenic analyses to investigate the antigenic characteristics of the isolates.", " Virus isolation and sequence analysis In total, 6621 environmental samples were collected in Hunan East Dongting Lake wetland. 
"It is generally accepted that wild waterfowl are the natural reservoir for AIVs and play important roles in the perpetuation and dissemination of AIVs, especially LPAIVs. Previous surveillance studies indicated that AIVs circulate in diverse bird species, including Anseriformes (ducks, geese, and swans) and Charadriiformes (gulls and terns). AIVs preferentially infect cells lining the intestinal tract, and virus can be excreted at high concentrations in wild waterfowl feces. Influenza viruses remain infectious in lake water for up to 4 days at 22°C and for over 30 days at 0°C, so contaminated lake water may result in efficient transmission to naïve birds by the fecal-oral route [32-34]. Migrating birds travel annually between breeding and overwintering sites, so AIVs harbored by these birds can be distributed along the migration flyway. The Hunan East Dongting Lake Nature Reserve is located along the East Asia–Australia flyway and is a major staging and overwintering site for migratory birds. Moreover, many domestic duck farms in this region raise their ducks free-range [22]. Because wild waterfowl and domestic ducks may share common habitats, water, and food, genetic exchange between different subtypes of AIVs circulating in wild waterfowl and domestic poultry may be possible in this location [35].\nIn this study, we obtained 22 H9N2 subtype AIV isolates from wild waterfowl in the East Dongting Lake wetland in November 2011 and March 2012. Genetic analyses showed that the 22 H9N2 viruses shared over 99% nucleotide sequence identity in all eight gene segments with the strain A/Egret/Hunan/1/2012 (H9N2), which was isolated from an egret in the same region in January 2012. These findings indicated that these viruses had a common origin. Long-term surveillance studies in America and Eurasia demonstrated that the most prevalent AIVs in wild birds can be separated into Eurasian and American lineages. While intercontinental virus exchange existed among migrating birds, its frequency was limited and invasions by whole viruses were not observed [36]. Instances of invasion by the American lineage H9N2 virus into the Eurasian continent have been reported [24].
Notably, the 22 isolates reported in this study were novel reassortant H9N2 subtype AIVs that encoded genes derived from both the American and Eurasian gene pools circulating in wild birds. Therefore, genes that originated from the American lineage may have been carried by migratory birds from North America, leading to the generation of multiple AIV reassortants circulating in wild birds in Eurasia. Moreover, this reassortant genotype was detected both by us and by Chen and colleagues in different wild birds in three months during 2011–2012 (November 2011, January 2012, and March 2012), suggesting that this virus has been circulating in wild birds in this region [24]. Although H9N2 viruses of this genotype have not been reported in other regions, they may have been distributed to other places by migratory birds along their flyways.\nH9N2 AIVs have been endemic in domestic poultry across Eurasia and Africa since the late 1990s. The wide circulation of H9N2 viruses provides more opportunities for reassortment with other AIV subtypes in poultry. In the last few decades, H9N2 viruses may have provided internal genes to the HPAI H5N1 and the novel H7N9 avian influenza viruses [17,18]. Additionally, H9N2 viruses have occasionally been transmitted from poultry to mammals (including humans and swine), and some H9N2 viruses showed the ability to bind efficiently to α2,6-linked sialic acid, which can indicate human virus receptor specificity [15,26]. Further studies indicated that H9N2 viruses encoding Leu226 could replicate in ferrets and be transmitted by direct contact [37]. Therefore, many researchers believe that H9N2 viruses have the potential to cause a future pandemic. Even if they do not directly cause a pandemic, they could contribute to one indirectly by providing internal genes to a reassortant virus [17,18,38]. Our findings demonstrated that all 22 isolates were LPAIVs, and we observed neither G226L substitutions in the HA protein nor any other pathogenicity-associated mutations in other viral proteins. Nevertheless, the pathogenicity of these viruses should be further assessed in poultry and mammalian animal models.\nAntigenic analyses suggested that the antigenicity of the isolates was significantly different from that of the G1- and Y280-like viruses, which have been dominant in domestic poultry in Eurasia and Africa, and of the “America” virus, which is closely related to the Korea-like viruses. The antigenic and phylogenetic analyses were generally consistent with each other. Importantly, poultry in mainland China have been widely vaccinated with commercial inactivated vaccines that contain representative prevalent influenza strains circulating in poultry in Eurasia, but not strains of the American lineage. Therefore, the currently used commercial vaccines may not protect poultry or prevent the transmission of the novel H9N2 subtype viruses in China.", "In summary, the 22 H9N2 viruses isolated from wild waterfowl in 2011–2012 were novel reassortant H9N2 subtype AIVs with similar genotypes. All isolates encoded proteins with low-pathogenicity characteristics. Determining whether these H9N2 AIVs can transmit to poultry or other animals or further adapt to new hosts will require continuous monitoring. Our findings extend our knowledge of the ecology of AIVs circulating in wild birds in the Dongting Lake region and highlight the importance of intercontinental AIV gene flow in migratory birds.
Therefore, we emphasize the vital need for continued surveillance of AIVs in wild birds and poultry to prepare for and respond to potential influenza pandemics." ]
[ null, "results", null, null, null, null, "discussion", "conclusions", "materials|methods", null, null, null, null, null, null, null, null ]
[ "Avian influenza virus", "Wild waterfowl", "H9N2 subtype", "Dongting Lake", "Reassortant" ]
Background: Influenza A viruses can be divided into different subtypes based upon the two surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA) [1,2]. Wild waterfowl are recognized as the natural reservoir of influenza A viruses, especially low pathogenic avian influenza virus (LPAIV) [3]. To date, all of the HA (H1–H16) and NA (N1–N9) subtype avian influenza viruses (AIVs) have been identified in wild waterfowl, with the exceptions of H17N10 and H18N11, which were isolated from bats [2,4,5]. Previous studies indicated that migratory birds play an important role in the emergence of epidemics in birds, pigs, horses, and humans [1]. Generally, AIVs are nonpathogenic in wild birds, although they sometimes cause significant morbidity and mortality when transmitted to domestic poultry [6-8]. The H9N2 subtype influenza virus was first isolated from a turkey in Wisconsin in 1966. It has been most prevalent in wild ducks and shorebirds and has shown no evidence of establishing a stable lineage in land-based poultry in North America [9]. Since the l990s, H9N2 subtype influenza viruses have become prevalent in land-based poultry across East Asia, Middle Asia, Europe and Africa. Epidemiological and phylogenetic studies indicate that three distinct sublineages of H9N2 subtype influenza viruses have been established: Ck/Bj/94-like (A/chicken/Beijing/1/94 or A/duck/Hongkong/Y280/1997), G1-like (A/Quail/Hong Kong/G1/1997) and Korea-like (A/chicken/Y439/1997 or A/chicken/Korea/383490-p96323/1996) [10-12]. Poultry infected with H9N2 subtype AIVs have reduced egg production and moderate morbidity when co-infected with other viruses or bacteria [13]. Since 1999, some H9N2 viruses have been identified with potential human virus receptor specificity, and have been occasionally transmitted to human and swine [14-16]. Moreover, H9N2 viruses may have contributed internal genes to the highly pathogenic avian influenza (HPAI) H5N1 virus in Hong Kong in 1997 and the novel H7N9 avian influenza virus in mainland China in 2013 [17,18]. Therefore, the H9N2 subtype avian influenza viruses have been classified as candidate viruses with pandemic potential [19,20]. The threat to the poultry industry and public health posed by H9N2 subtype avian influenza viruses should not be ignored. Hunan East Dongting Lake Nature Reserve is one of the largest wetlands in mainland China and is an important overwintering area and staging site for migratory birds that fly along the East Asia–Australia flyway [21]. Moreover, numerous duck farms are found around the lake [22,23]. Every migrating season, wild birds congregate at the lake where they share a common habitat with domestic ducks, which provides an opportunity for virus reassortment. During active surveillance of AIVs between 2011 and 2012, we isolated H9N2 viruses from wild waterfowl in East Dongting Lake wetlands in November 2011 (7 isolates) and March 2012 (15 isolates). The whole genome sequences of all 22 isolates were obtained, and phylogenetic trees for each gene segment were generated to analyze the relationship of these isolates with other circulating viruses in wild birds or poultry. Furthermore, we performed antigenic analyses to investigate the antigenic characteristics of the isolates. Results: Virus isolation and sequence analysis In total, 6621 environmental samples were collected in Hunan East Dongting Lake wetland. 
H9N2 subtype avian influenza viruses were isolated in November 2011 (7 isolates) and March 2012 (15 isolates) from wild waterfowl feces (Table 1). The whole genome sequence of each isolate was obtained. The complete viral genome consists of 8 gene segments of negative-sense, single-stranded RNA: PB2 (2341 bp), PB1 (2341 bp), PA (2233 bp), HA (1742 bp), NP (1565 bp), NA (1467 bp), M (1027 bp), and NS (890 bp). The 22 H9N2 AIVs isolated from wild waterfowl feces in the Hunan East Dongting Lake Nature Reserve in 2011–2012 are listed in Table 1. Complete sequences of the 22 H9N2 viruses showed that they shared over 99% nucleotide identity in all eight gene segments. Therefore, we selected a representative virus, A/Wild waterfowl/Dongting/C2148/2011 (H9N2) (C2148), for further analysis. Each of the 8 gene segments of C2148 had the highest nucleotide identity (over 99%) with those of A/Egret/Hunan/1/2012 (H9N2), which was isolated in the same region [24]. Additionally, the following genes showed 99% homology with the indicated reference strains: the PB2 gene of C2148 with A/wild duck/Korea/SNU50-5/2009 (H5N1), the PB1 gene with A/wild duck/Korea/CSM4-28/2010 (H4N6), the PA gene with A/northern shoveler/California/2810/2011 (H11N2), the HA gene with A/northern shoveler/Interior Alaska/8BM3470/2008 (H9N2), the NP gene with A/duck/Nanjing/1102/2010 (H4N8), the NA gene with A/snow goose/Montana/466771-4/2006 (H5N2), and the M gene with A/wild duck/Korea/CSM4-28/2010 (H4N6). The NS gene showed 98% nucleotide similarity with that of A/surface water/Minnesota/W07-2241/2007 (H3N8) (Table 2). Table 2 gives the nucleotide identity (%) of the A/Wild waterfowl/Dongting/C2148/2011 (H9N2) virus with its most closely related isolates in the GenBank database; comparisons were performed using BLAST searches in the Influenza Sequence Database.
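The identity figures in Table 2 come from BLAST searches, but the underlying measure on an existing alignment is simple to express. Below is a minimal sketch of percent identity between two pre-aligned segments; the sequence fragments in the example are hypothetical, not actual C2148 data:

```python
def percent_identity(seq1: str, seq2: str) -> float:
    """Percent identity between two pre-aligned, equal-length nucleotide
    sequences; columns where both sequences have gaps are skipped."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" and b == "-":
            continue
        compared += 1
        matches += a == b
    return 100.0 * matches / compared

# Hypothetical aligned fragments, not actual isolate data:
print(round(percent_identity("ATGGAAGA-CT", "ATGGAAGAACT"), 1))  # -> 90.9
```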
Phylogenetic analyses To further characterize the evolution of the 22 H9N2 viruses isolated from wild waterfowl, phylogenetic trees for each gene were constructed. In the HA gene tree, the H9 subtype AIVs clustered into five distinct lineages: G1-like, Y280-like, Korea-like, and two American lineages. The strains that clustered with the G1-like, Y280-like, and Korea-like sublineages are known to have been endemic in domestic poultry in Eurasia and Africa [25]. All 22 isolates belonged to American lineage II, which includes viruses prevalent in wild birds or turkeys in North America. Some genetic exchange was observed between the North American and Eurasian strains, as a strain isolated in Korea clustered with an American lineage, and several H9 subtype AIVs isolated in America clustered with the Korea-like sublineage (Figure 1a). Similar evolutionary patterns were observed in the N2 gene tree, which clustered into four lineages: G1-like, Y280-like, Korea-like, and American. The NA genes of all isolates clustered with the American lineage and were most closely related to an H5N2 subtype AIV isolated from a snow goose in Montana (Figure 1b). The PA and NS genes of the isolates both clustered with the American lineage and showed high nucleotide similarity with strains isolated from wild birds in North America (Figure 1e and h). By contrast, the PB2, PB1, NP, and M internal genes clustered with the Eurasian gene pool, but were distinct from the H9N2 viruses that circulated in Eurasia and Africa and from the corresponding internal genes of the novel H7N9 AIV (Figure 1c, d, f, and g).
Phylogenetic trees of the HA (a), NA (b), PB2 (c), PB1 (d), PA (e), NP (f), M (g), and NS (h) genes of H9N2 subtype AIVs are shown in Figure 1. Neighbor-joining (NJ) trees were generated using MEGA 5.01. Estimates of the phylogenies were calculated by performing 1000 neighbor-joining bootstrap replicates, all rooted to the sequence of Turkey/Wisconsin/1/66 (H9N2). Our 22 isolates are highlighted in red and representative strains are shown in blue. Abbreviations: Agt, American green-winged teal; Aw, American wigeon; Bd, black duck; Bh, bufflehead; Bs, Bewick’s swan; Bt, baikal teal; Ch, chukar; Ck, chicken; Dk, duck; Eg, egret; En, environment; Ew, Eurasian wigeon; Fe, feces; Gw, gadwall; Gf, guinea fowl; Gs, goose; Gt, green-winged teal; Lg, laughing gull; Ltd, longtail duck; Md, mallard; Mud, Muscovy duck; Np, northern pintail; Ns, northern shoveler; Os, ostrich; Pfg, pink-footed goose; Pi, pintail duck; Qa, quail; Rcp, red-crested pochard; Rt, ruddy turnstone; Sb, shorebird; Sg, snow goose; Sw, swine; Suw, surface water; Tbm, thick-billed murre; Ty, turkey; Wb, wild bird; Wd, wild duck; Wfg, white-fronted goose; AH, Anhui; AL, Alaska; ALB, Alberta; AR, Arkansas; BJ, Beijing; CA, California; DE, Delaware; EG, Egypt; GD, Guangdong; GE, Germany; GX, Guangxi; GZ, Guangzhou; HLJ, Heilongjiang; HK, Hong Kong; HN, Hunan; HO, Hokkaido; IA, Interior Alaska; IL, Illinois; IR, Iran; JX, Jiangxi; JS, Jiangsu; KR, Korea; LO, Louisiana; MG, Mongolia; MI, Minnesota; MIS, Missouri; MT, Montana; ML, Maryland; NanJ, Nanjing; NE, Netherlands; NJ, New Jersey; OH, Ohio; PA, Pakistan; QB, Quebec; RZ, Rizhao; SA, South Africa; SD, Shandong; SH, Shanghai; ST, Shantou; TX, Texas; VN, Vietnam; WA, Washington; WI, Wisconsin; XH, Xianghai; ZJ, Zhejiang. Phylogenetic analyses indicated that these isolates were novel reassortant H9N2 viruses in which 4 genes (HA, NA, PA, and NS) were derived from the American AIV gene pool and 4 genes (PB2, PB1, NP, and M) were derived from the Eurasian gene pool (Figure 2). Figure 2 shows a hypothetical reassortment pattern of the novel H9N2 virus isolates: red gene segments indicate genes derived from the American AIV gene pool and blue gene segments indicate genes that originated in the Eurasian gene pool.
Molecular analysis The molecular characteristics of the 22 H9N2 isolates were compared with those of representative H9N2 virus strains circulating in Eurasia and America. The amino acid sequence at the cleavage site of all isolates possessed a single basic amino acid (R) within the HA connecting peptide (VPELPKGR↓GLF), which is a typical feature of LPAIVs [1]. The HA receptor-binding pocket included the avian-like motif Q226 and G228 (H3 numbering, Table 3), suggesting that these viruses preferentially bind an avian-like receptor (α2,3-linked sialic acid) [26]. Analysis of potential HA protein N-X-S/T glycosylation site motifs revealed 8 sites, at positions 29, 82, 141, 218, 298, 305, 492, and 551 (Table 4). Table 3 lists the amino acid sequences of specific sites in the HA, NA, and PB2 proteins of the 22 H9N2 AIVs (positions given according to H3 numbering), and Table 4 compares the HA amino acid sequences of H9N2 AIVs. There were no deletions in the NA stalk region, and no H274Y or N294S substitutions were observed in the NA protein. These properties indicate that these viruses would be sensitive to NA inhibitors such as oseltamivir [27,28]. The isolates encoded amino acids E and D at positions 627 and 701 of the PB2 protein, respectively, which are characteristic of AIVs [29-31].
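Both sequence-level checks described above (N-X-S/T sequon positions and the monobasic cleavage site) can be located with a simple pattern scan. The sketch below assumes the canonical sequon rule that X is not proline, which the paper does not state explicitly, and uses a hypothetical protein fragment; the cleavage-site check is a heuristic, not the actual analysis pipeline:

```python
import re
from typing import List

def glycosylation_sites(protein: str) -> List[int]:
    """1-based positions of potential N-X-S/T sequons; the canonical motif
    excludes proline at X, which is assumed here."""
    return [m.start() + 1 for m in re.finditer(r"N[^P][ST]", protein.upper())]

def looks_monobasic(cleavage_site: str) -> bool:
    """Heuristic LPAIV check: a single basic residue (R/K) directly before the
    cleavage point, with no adjacent run of basic residues upstream, as in the
    VPELPKGR|GLF motif reported for these isolates."""
    upstream = cleavage_site.split("|")[0]
    return upstream.endswith(("R", "K")) and not re.search(r"[RK]{2}", upstream[-4:])

print(glycosylation_sites("MNTQILVFALNVS"))   # hypothetical fragment -> [11]
print(looks_monobasic("VPELPKGR|GLF"))        # -> True (low pathogenic pattern)
```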
No amino acid mutations associated with amantadine resistance were observed in the M2 ion channel protein. Additionally, no substitutions associated with increased virulence in mammals were detected in the PB2, PB1, or NS proteins (data not shown). Antigenic analysis To assess the antigenic properties of our novel H9N2 isolates, we performed hemagglutination inhibition (HI) assays with 5 antisera raised against representative H9N2 viruses and two antisera raised against H9N2 isolates from this study (C2148 and PC2539). The HI antibody titers recognizing the C2148 and PC2539 viruses were much higher than the titers against the other representative H9N2 viruses, including the antiserum raised against a virus isolated from shorebirds in America (Sb/DE/249/06) (Table 5). These results indicated that the H9N2 viruses isolated in this study were antigenically distinct from previously identified H9N2 viruses. Table 5 shows the cross-reactivity (HI titers) of the H9N2 AIVs with sera against representative strains; homologous titers are in bold.
Discussion: It is generally accepted that wild waterfowl are the natural reservoir for AIVs and play important roles in the perpetuation and dissemination of AIVs, especially LPAIVs. Previous surveillance studies indicated that AIVs circulate in diverse bird species, including Anseriformes (ducks, geese, and swans) and Charadriiformes (gulls and terns). AIVs preferentially infect cells lining the intestinal tract, and virus can be excreted at high concentrations in wild waterfowl feces. Influenza viruses remain infectious in lake water for up to 4 days at 22°C and for over 30 days at 0°C, so contaminated lake water may lead to efficient transmission to naïve birds by the fecal-oral route [32-34]. Migrating birds travel annually between breeding and overwintering sites, so AIVs harbored by migrating birds can be distributed along the migration flyway. The Hunan East Dongting Lake Nature Reserve is located along the East Asia–Australia flyway and is a major staging and overwintering site for migratory birds. Moreover, many domestic duck farms in this region raise ducks free-range [22]. Because wild waterfowl and domestic ducks may share common habitats, water, and food, genetic exchange between different subtypes of AIVs circulating in wild waterfowl and domestic poultry is possible in this location [35]. In this study, we obtained 22 H9N2 subtype AIV isolates from wild waterfowl in the East Dongting Lake wetland in November 2011 and March 2012. Genetic analyses showed that the 22 H9N2 viruses shared over 99% nucleotide sequence homology in all eight gene segments with the strain A/Egret/Hunan/1/2012 (H9N2), which was isolated from an egret in the same region in January 2012. These findings indicate that these viruses had a common origin. Long-term surveillance studies in America and Eurasia demonstrated that the most prevalent AIVs in wild birds can be separated into Eurasian and American lineages. Although intercontinental virus exchange occurs in migrating birds, its frequency is limited, and invasions by whole viruses had not been observed [36]. Instances of invasion by American lineage H9N2 viruses into the Eurasian continent have, however, been reported [24]. Notably, the 22 isolates reported in this study were novel reassortant H9N2 subtype AIVs, which encoded genes derived from both the American and Eurasian gene pools circulating in wild birds.
Therefore, genes that originated from the American lineage may have been carried by migratory birds from North America, leading to the generation of multiple AIV reassortants circulating in wild birds in Eurasia. Moreover, this reassortant genotype was detected by both our group and Chen and colleagues in different wild birds in three separate months during 2011–2012 (November 2011, January 2012, and March 2012), suggesting that this virus has been circulating in wild birds in this region [24]. Although H9N2 viruses of this genotype have not been reported in other regions, they may have been distributed to other places by migratory birds along their flyways. H9N2 AIVs have been endemic in domestic poultry across Eurasia and Africa since the late 1990s. The wide circulation of H9N2 viruses provides more opportunities for reassortment with other AIV subtypes in poultry. In the last few decades, H9N2 viruses may have provided internal genes to the HPAI H5N1 and the novel H7N9 avian influenza viruses [17,18]. Additionally, H9N2 viruses have occasionally been transmitted from poultry to mammals (including humans and swine), and some H9N2 viruses showed the ability to bind efficiently to α2,6-linked sialic acid, which can indicate human virus receptor specificity [15,26]. Further studies indicated that H9N2 viruses encoding Leu226 could replicate in ferrets and be transmitted by direct contact [37]. Therefore, many researchers believe that H9N2 viruses have the potential to cause a future pandemic. Even if they do not directly cause a pandemic, they could contribute to one indirectly by donating internal genes to a reassortant virus [17,18,38]. Our findings demonstrated that all 22 isolates were LPAIVs, and we observed neither G226L substitutions in the HA proteins nor any other pathogenicity-associated mutations in other viral proteins. Nevertheless, the pathogenicity of these viruses should be further assessed in poultry and mammalian animal models. Antigenic analyses suggested that the antigenicity of the isolates was significantly different from that of the G1- and Y280-like viruses, which have been dominant in domestic poultry in Eurasia and Africa, and of the American lineage virus, which is closely related to the Korea-like viruses. The antigenic and phylogenetic analyses were generally consistent with each other. Importantly, poultry in mainland China have been widely vaccinated with commercial inactivated vaccines that contain representative prevalent influenza strains circulating in poultry in Eurasia, but not strains of the American lineage. Therefore, the currently used commercial vaccines may not protect poultry or prevent the transmission of the novel H9N2 subtype viruses in China. Conclusions: In summary, the 22 H9N2 viruses isolated from wild waterfowl in 2011–2012 were novel reassortant H9N2 subtype AIVs with similar genotypes. All isolates encoded genes for proteins with low pathogenic characteristics. Determining whether these H9N2 AIVs can transmit to poultry or other animals or further adapt to new hosts will require continuous monitoring in the future. Our findings extend our knowledge of the ecology of AIVs circulating in wild birds in the Dongting Lake region and highlight the importance of intercontinental AIV gene flow in migratory birds. Therefore, we emphasize the vital need for continued surveillance of AIVs in wild birds and poultry to prepare for and respond to potential influenza pandemics.
Materials and methods: Sample collection Between November 2011 and April 2012, we collected fresh fecal samples from wild birds and lake water in the Hunan East Dongting Lake Nature Reserve (28°58′–29°38′N, 112°43′–113°13′E), Yueyang, Hunan, China. This site, located in the middle reaches of the Yangtze River, is an important overwintering habitat for East Asian migratory birds. Fresh fecal samples were collected with sterile cotton swabs, following previously described protocols [39], and placed in 15 mL tubes containing 4 mL virus transport medium (VTM). The VTM contained tissue culture medium 199 (Thermo Scientific Hyclone, Logan, UT, USA), 0.5% BSA (Roche, Mannheim, Germany), 10% glycerol, 2 × 10⁶ U/L penicillin G, 200 mg/L streptomycin, 2 × 10⁶ U/L polymyxin B sulfate, 250 mg/L gentamicin, 60 mg/L ofloxacin HCl, 0.2 g/L sulfamethoxazole, and 5 × 10⁵ U/L nystatin (Sigma, St Louis, MO, USA). Samples were immediately transported to the laboratory at 4°C and stored at −80°C. Virus detection and isolation RNA was extracted from 200 μL fecal suspension using the BioRobot Universal system with the QIAamp One-For-All Nucleic Acid Kit (Qiagen, Hilden, Germany) in accordance with the manufacturer’s instructions. Influenza A virus was screened for using a qPCR assay targeting the influenza matrix (M) gene. Detection was performed on a Stratagene Mx3005P thermocycler with an AgPath RT-PCR Kit (Applied Biosystems, Foster City, CA, USA) using 5 μL eluate in a 25 μL total volume. Each run included 2 negative and 2 positive control samples along with 92 field samples. The qPCR-positive samples were inoculated into the allantoic cavity of 9-day-old specific pathogen free (SPF) embryonated chicken eggs (ECEs). The ECEs were incubated at 37°C for 48 h and then chilled at 4°C for 6–8 h. Allantoic fluids were harvested, and hemagglutination assays with 0.5% turkey red blood cells confirmed the presence of viruses.
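The screening design above runs 92 field samples per plate alongside 2 negative and 2 positive controls. A minimal sketch of the corresponding run-acceptance logic is below; the Ct cutoff of 38 is an assumed value, not stated in the paper, and the sample IDs are hypothetical:

```python
from typing import Dict, List, Optional

def screen_qpcr_run(control_cts: Dict[str, Optional[float]],
                    sample_cts: Dict[str, Optional[float]],
                    ct_cutoff: float = 38.0) -> List[str]:
    """Accept a plate only if both positive controls amplify below the cutoff
    and both negative controls do not amplify (Ct of None), then return the
    IDs of matrix-gene-positive field samples."""
    pos_ok = all(ct is not None and ct < ct_cutoff
                 for name, ct in control_cts.items() if name.startswith("pos"))
    neg_ok = all(ct is None
                 for name, ct in control_cts.items() if name.startswith("neg"))
    if not (pos_ok and neg_ok):
        raise ValueError("control failure: repeat the run")
    return [sid for sid, ct in sample_cts.items()
            if ct is not None and ct < ct_cutoff]

controls = {"pos1": 24.1, "pos2": 24.7, "neg1": None, "neg2": None}
print(screen_qpcr_run(controls, {"C2148": 27.3, "PC0001": None}))  # -> ['C2148']
```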
Virus subtyping and sequencing Viral RNA was extracted from infected allantoic fluid using the RNeasy Mini Kit (Qiagen) and reverse transcribed using the Uni12 primer (5′-AGCAAAAGCAGG-3′) with the SuperRT cDNA Kit (CWBIO, Beijing, China). Isolate subtyping was performed by PCR using 16 sets of HA (H1–H16) primers and 9 sets of NA (N1–N9) primers designed by the Chinese National Influenza Center. Complete genome amplification was performed using specific primers (primer sequences available on request) with the 2× Es Taq MasterMix Kit (CWBIO). PCR products of the expected sizes were purified using a QIAquick PCR purification kit (Qiagen). Sequencing was performed using the BigDye Terminator v3.1 Cycle Sequencing Kit on an ABI PRISM 3700xl DNA Analyzer (Applied Biosystems), following the manufacturer’s instructions. Phylogenetic analyses We performed multiple sequence alignments using MAFFT software, version 6. Sequences of representative H9N2 subtype influenza A virus strains circulating in America and Eurasia, together with homologous sequences sharing high nucleotide similarity with our H9N2 isolates, were included in the phylogenetic analyses. Preliminary phylogenetic trees were constructed to infer the overall topology, using more than 500 sequences for each gene. To more clearly define the phylogenetic relationships of the 22 H9N2 virus isolates, representative sequences of each cluster were then selected to generate neighbor-joining (NJ) trees. Phylogenetic estimates were calculated by performing 1000 neighbor-joining bootstrap replicates.
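A minimal sketch of the same overall workflow (MAFFT alignment followed by a bootstrapped neighbor-joining tree) is shown below using Biopython rather than the MEGA 5.01 software the study actually used; the file names are placeholders, the bootstrap count is reduced to 100 for speed, and mafft is assumed to be on the PATH:

```python
import subprocess
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Align one segment's sequences with MAFFT (placeholder input/output names).
with open("ha_aligned.fasta", "w") as out:
    subprocess.run(["mafft", "--auto", "ha_segments.fasta"], stdout=out, check=True)

alignment = AlignIO.read("ha_aligned.fasta", "fasta")

# Neighbor-joining trees from pairwise identity distances.
constructor = DistanceTreeConstructor(DistanceCalculator("identity"), "nj")

# Majority-rule consensus over bootstrap replicates (100 here; the paper
# performed 1000 replicates).
tree = bootstrap_consensus(alignment, 100, constructor, majority_consensus)
Phylo.draw_ascii(tree)
```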
Antigenic analyses Antigenic analyses were performed using 6 polyclonal ferret antisera against A/chicken/Beijing/1/1994 (H9N2) (Ck/Bj), A/quail/Hong Kong/G1/1997 (H9N2) (G1), A/chicken/Hong Kong/G9/1997 (H9N2) (G9), A/Hong Kong/1073/1999 (H9N2) (HK/1073), A/chicken/Hong Kong/NT101/2003 (H9N2) (HK/NT101), and A/shorebird/Delaware/249/2006 (H9N2) (DE/249), which were kindly provided by Dr. Webby. Polyclonal ferret antisera raised against A/wild waterfowl/Dongting/C2148/2011 (H9N2) (C2148) and A/wild waterfowl/Dongting/PC2539/2012 (H9N2) (PC2539), two representative influenza strains from this study, were also used. HI assays were performed as previously described. Briefly, all sera were treated with receptor destroying enzyme II (RDE) (Denka Seiken, Tokyo, Japan) to remove nonspecific inhibitors of hemagglutination by adding 3 volumes of RDE to tubes containing 1 volume of serum. Samples were incubated at 37°C for 16–18 h and then inactivated at 56°C for 30 min. After RDE inactivation, 6 volumes of phosphate buffered saline (PBS; Thermo Scientific Hyclone) were added. The treated sera were then serially diluted 2-fold in 25 μL PBS, and equal volumes of antigen (8 HA units/50 μL) were added to each well. The plates were gently mixed and incubated at room temperature for 20–30 min. Titers were determined by adding 50 μL 0.5% turkey red blood cells to each well. The limit of detection for HI titers was ≤ 20. Nucleotide sequence accession numbers The nucleotide sequences generated in our study were deposited in GenBank under accession numbers KF971946 to KF972121.
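Reading an endpoint from the 2-fold HI dilution series described above reduces to taking the reciprocal of the highest serum dilution that still fully inhibits hemagglutination. A minimal sketch, with hypothetical well readings:

```python
from typing import List, Optional

def hi_titer(inhibited_wells: List[bool], start_dilution: int = 20) -> Optional[int]:
    """Endpoint titer from a 2-fold dilution series starting at 1:start_dilution.
    inhibited_wells[i] is True when well i shows full inhibition of
    hemagglutination; the titer is the reciprocal of the last inhibited
    dilution, or None when even the first well fails (below detection)."""
    titer = None
    dilution = start_dilution
    for inhibited in inhibited_wells:
        if not inhibited:
            break
        titer = dilution
        dilution *= 2
    return titer

# Wells 1:20 through 1:320 inhibited, 1:640 not -> titer of 320.
print(hi_titer([True, True, True, True, True, False]))
```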
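The GenBank deposit noted above spans a contiguous accession range (176 records, matching 22 isolates × 8 segments), so the sequences can be retrieved programmatically. A minimal sketch using Biopython's Entrez interface; the e-mail address is a placeholder that must be replaced:

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

# KF971946-KF972121: 176 accessions, i.e. 22 isolates x 8 gene segments.
accessions = [f"KF{n}" for n in range(971946, 972122)]

handle = Entrez.efetch(db="nucleotide", id=",".join(accessions),
                       rettype="fasta", retmode="text")
records = handle.read()
handle.close()
print(records[:200])  # first lines of the downloaded FASTA
```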
Competing interests: The findings and conclusions of this report are those of the authors and do not necessarily represent the views of the funding agency. We declare no conflict of interest. Authors’ contributions: YLS and SXH designed the research; YZ performed research and drafted the manuscript; TB, YWH, ZHD, HZ, ZYB, MDY, and JFH collected samples and performed research; LY and XZ analyzed data; and YLS and WFZ helped to draft and revise the manuscript. All authors read and approved the final manuscript.
Background: Wild waterfowl are recognized as the natural reservoir for influenza A viruses. Two distinct lineages, the American and Eurasian lineages, have been identified in wild birds. Gene flow between the two lineages is limited. The H9N2 virus has become prevalent in poultry throughout Eurasia, and mainly circulates in wild ducks and shorebirds in North America. Methods: In this study, 22 H9N2 avian influenza viruses were isolated from wild waterfowl feces in East Dongting Lake Nature Reserve in November 2011 and March 2012. The phylogenetic, molecular, and antigenic characteristics of these viruses were analyzed based on analyses of the whole genome sequence of each isolate. Results: Phylogenetic analyses indicated that these H9N2 viruses were generated by reassortment events. The HA, NA, PA, and NS genes were derived from the American gene pool, and the other four genes were derived from the Eurasian gene pool. Antigenic analyses indicated that these viruses were significantly different from the Eurasian lineage viruses. Conclusions: This study presents the isolation of novel intercontinental recombinant H9N2 viruses from wild waterfowl in the East Dongting Lake wetland. The novel genotype H9N2 virus has not been detected in poultry in the region yet, and may be transmitted to naïve birds in poultry farms. Therefore, our results highlight the need for ongoing surveillance of wild birds and poultry in this region.
Background: Influenza A viruses can be divided into different subtypes based upon the two surface glycoproteins, hemagglutinin (HA) and neuraminidase (NA) [1,2]. Wild waterfowl are recognized as the natural reservoir of influenza A viruses, especially low pathogenic avian influenza virus (LPAIV) [3]. To date, all of the HA (H1–H16) and NA (N1–N9) subtype avian influenza viruses (AIVs) have been identified in wild waterfowl, with the exceptions of H17N10 and H18N11, which were isolated from bats [2,4,5]. Previous studies indicated that migratory birds play an important role in the emergence of epidemics in birds, pigs, horses, and humans [1]. Generally, AIVs are nonpathogenic in wild birds, although they sometimes cause significant morbidity and mortality when transmitted to domestic poultry [6-8]. The H9N2 subtype influenza virus was first isolated from a turkey in Wisconsin in 1966. It has been most prevalent in wild ducks and shorebirds and has shown no evidence of establishing a stable lineage in land-based poultry in North America [9]. Since the 1990s, H9N2 subtype influenza viruses have become prevalent in land-based poultry across East Asia, Central Asia, Europe, and Africa. Epidemiological and phylogenetic studies indicate that three distinct sublineages of H9N2 subtype influenza viruses have been established: Ck/Bj/94-like (A/chicken/Beijing/1/94 or A/duck/Hongkong/Y280/1997), G1-like (A/Quail/Hong Kong/G1/1997), and Korea-like (A/chicken/Y439/1997 or A/chicken/Korea/383490-p96323/1996) [10-12]. Poultry infected with H9N2 subtype AIVs show reduced egg production and moderate morbidity when co-infected with other viruses or bacteria [13]. Since 1999, some H9N2 viruses have been identified with potential human virus receptor specificity and have occasionally been transmitted to humans and swine [14-16]. Moreover, H9N2 viruses may have contributed internal genes to the highly pathogenic avian influenza (HPAI) H5N1 virus in Hong Kong in 1997 and the novel H7N9 avian influenza virus in mainland China in 2013 [17,18]. Therefore, H9N2 subtype avian influenza viruses have been classified as candidate viruses with pandemic potential [19,20]. The threat they pose to the poultry industry and public health should not be ignored. Hunan East Dongting Lake Nature Reserve is one of the largest wetlands in mainland China and is an important overwintering area and staging site for migratory birds that fly along the East Asia–Australia flyway [21]. Moreover, numerous duck farms are found around the lake [22,23]. Every migrating season, wild birds congregate at the lake and share a common habitat with domestic ducks, which provides an opportunity for virus reassortment. During active surveillance of AIVs between 2011 and 2012, we isolated H9N2 viruses from wild waterfowl in the East Dongting Lake wetlands in November 2011 (7 isolates) and March 2012 (15 isolates). The whole genome sequences of all 22 isolates were obtained, and phylogenetic trees for each gene segment were generated to analyze the relationship of these isolates with other viruses circulating in wild birds or poultry. Furthermore, we performed antigenic analyses to investigate the antigenic characteristics of the isolates. Conclusions: In summary, the 22 H9N2 viruses isolated from wild waterfowl in 2011–2012 were novel reassortant H9N2 subtype AIVs with similar genotypes. All isolates encoded genes for proteins with low pathogenic characteristics.
Determining whether these H9N2 AIVs can transmit to poultry or other animals or further adapt to new hosts will require continuous monitoring in the future. Our findings extend our knowledge of the ecology of AIVs circulating in wild birds in the Dongting Lake region and highlight the importance of intercontinental AIV gene flow in migratory birds. Therefore, we emphasize the vital need for continued surveillance of AIVs in wild birds and poultry to prepare for and respond to potential influenza pandemics.
Background: Wild waterfowl are recognized as the natural reservoir for influenza A viruses. Two distinct lineages, the American and Eurasian lineages, have been identified in wild birds. Gene flow between the two lineages is limited. The H9N2 virus has become prevalent in poultry throughout Eurasia, and mainly circulates in wild ducks and shorebirds in North America. Methods: In this study, 22 H9N2 avian influenza viruses were isolated from wild waterfowl feces in East Dongting Lake Nature Reserve in November 2011 and March 2012. The phylogenetic, molecular, and antigenic characteristics of these viruses were analyzed based on analyses of the whole genome sequence of each isolate. Results: Phylogenetic analyses indicated that these H9N2 viruses were generated by reassortment events. The HA, NA, PA, and NS genes were derived from the American gene pool, and the other four genes were derived from the Eurasian gene pool. Antigenic analyses indicated that these viruses were significantly different from the Eurasian lineage viruses. Conclusions: This study presents the isolation of novel intercontinental recombinant H9N2 viruses from wild waterfowl in the East Dongting Lake wetland. The novel genotype H9N2 virus has not been detected in poultry in the region yet, and may be transmitted to naïve birds in poultry farms. Therefore, our results highlight the need for ongoing surveillance of wild birds and poultry in this region.
10,417
256
[ 640, 447, 913, 314, 138, 231, 182, 148, 114, 325, 19, 31, 62 ]
17
[ "h9n2", "viruses", "gene", "wild", "isolates", "virus", "aivs", "like", "ha", "american" ]
[ "influenza viruses prevalent", "influenza viruses isolated", "subtype avian influenza", "pathogenic avian influenza", "h7n9 avian influenza" ]
null
[CONTENT] Avian influenza virus | Wild waterfowl | H9N2 subtype | Dongting Lake | Reassortant [SUMMARY]
null
[CONTENT] Avian influenza virus | Wild waterfowl | H9N2 subtype | Dongting Lake | Reassortant [SUMMARY]
[CONTENT] Avian influenza virus | Wild waterfowl | H9N2 subtype | Dongting Lake | Reassortant [SUMMARY]
[CONTENT] Avian influenza virus | Wild waterfowl | H9N2 subtype | Dongting Lake | Reassortant [SUMMARY]
[CONTENT] Avian influenza virus | Wild waterfowl | H9N2 subtype | Dongting Lake | Reassortant [SUMMARY]
[CONTENT] Animals | Antigens, Viral | Birds | China | Evolution, Molecular | Genome, Viral | Influenza A Virus, H9N2 Subtype | Influenza in Birds | Lakes | Molecular Sequence Data | Phylogeny | RNA, Viral | Reassortant Viruses | Sequence Analysis, DNA | Wetlands [SUMMARY]
null
[CONTENT] Animals | Antigens, Viral | Birds | China | Evolution, Molecular | Genome, Viral | Influenza A Virus, H9N2 Subtype | Influenza in Birds | Lakes | Molecular Sequence Data | Phylogeny | RNA, Viral | Reassortant Viruses | Sequence Analysis, DNA | Wetlands [SUMMARY]
[CONTENT] Animals | Antigens, Viral | Birds | China | Evolution, Molecular | Genome, Viral | Influenza A Virus, H9N2 Subtype | Influenza in Birds | Lakes | Molecular Sequence Data | Phylogeny | RNA, Viral | Reassortant Viruses | Sequence Analysis, DNA | Wetlands [SUMMARY]
[CONTENT] Animals | Antigens, Viral | Birds | China | Evolution, Molecular | Genome, Viral | Influenza A Virus, H9N2 Subtype | Influenza in Birds | Lakes | Molecular Sequence Data | Phylogeny | RNA, Viral | Reassortant Viruses | Sequence Analysis, DNA | Wetlands [SUMMARY]
[CONTENT] Animals | Antigens, Viral | Birds | China | Evolution, Molecular | Genome, Viral | Influenza A Virus, H9N2 Subtype | Influenza in Birds | Lakes | Molecular Sequence Data | Phylogeny | RNA, Viral | Reassortant Viruses | Sequence Analysis, DNA | Wetlands [SUMMARY]
[CONTENT] influenza viruses prevalent | influenza viruses isolated | subtype avian influenza | pathogenic avian influenza | h7n9 avian influenza [SUMMARY]
null
[CONTENT] influenza viruses prevalent | influenza viruses isolated | subtype avian influenza | pathogenic avian influenza | h7n9 avian influenza [SUMMARY]
[CONTENT] influenza viruses prevalent | influenza viruses isolated | subtype avian influenza | pathogenic avian influenza | h7n9 avian influenza [SUMMARY]
[CONTENT] influenza viruses prevalent | influenza viruses isolated | subtype avian influenza | pathogenic avian influenza | h7n9 avian influenza [SUMMARY]
[CONTENT] influenza viruses prevalent | influenza viruses isolated | subtype avian influenza | pathogenic avian influenza | h7n9 avian influenza [SUMMARY]
[CONTENT] h9n2 | viruses | gene | wild | isolates | virus | aivs | like | ha | american [SUMMARY]
null
[CONTENT] h9n2 | viruses | gene | wild | isolates | virus | aivs | like | ha | american [SUMMARY]
[CONTENT] h9n2 | viruses | gene | wild | isolates | virus | aivs | like | ha | american [SUMMARY]
[CONTENT] h9n2 | viruses | gene | wild | isolates | virus | aivs | like | ha | american [SUMMARY]
[CONTENT] h9n2 | viruses | gene | wild | isolates | virus | aivs | like | ha | american [SUMMARY]
[CONTENT] viruses | influenza | influenza viruses | avian influenza | h9n2 | poultry | avian | subtype | wild | birds [SUMMARY]
null
[CONTENT] h9n2 | gene | american | like | bp | clustered | duck | viruses | genes | isolates [SUMMARY]
[CONTENT] aivs | birds | wild | h9n2 | poultry | wild birds | emphasize vital need continued | isolated wild waterfowl 2011 | prepare | h9n2 subtype aivs similar [SUMMARY]
[CONTENT] h9n2 | viruses | wild | gene | isolates | aivs | birds | sequences | kit | influenza [SUMMARY]
[CONTENT] h9n2 | viruses | wild | gene | isolates | aivs | birds | sequences | kit | influenza [SUMMARY]
[CONTENT] ||| Two | American | Eurasian ||| two ||| Eurasia | North America [SUMMARY]
null
[CONTENT] ||| HA | NA | PA | NS | American | four | Eurasian ||| Eurasian [SUMMARY]
[CONTENT] the East Dongting Lake ||| ||| [SUMMARY]
[CONTENT] ||| Two | American | Eurasian ||| two ||| Eurasia | North America ||| 22 | avian | East Dongting Lake Nature Reserve | November 2011 and March 2012 ||| ||| ||| HA | NA | PA | NS | American | four | Eurasian ||| Eurasian ||| the East Dongting Lake ||| ||| [SUMMARY]
Prehospital Notification Using a Mobile Application Can Improve Regional Stroke Care System in a Metropolitan Area.
34904406
Acute ischemic stroke is a time-sensitive disease. Emergency medical service (EMS) prehospital notification of potential stroke patients could play an important role in improving the in-hospital medical response and the timely treatment of acute ischemic stroke. We analyzed the effects of FASTroke, a mobile app that EMS can use to notify hospitals of patients with suspected acute ischemic stroke at the prehospital stage.
BACKGROUND
We conducted a retrospective observational study of patients diagnosed with acute ischemic stroke at 5 major hospitals in metropolitan Daegu City, Korea, from February 2020 to January 2021. Clinical conditions and management times were compared according to whether EMS used the FASTroke app, and the FASTroke patients were further divided into subgroups according to whether the receiving hospital completed preregistration.
METHODS
Of the 563 patients diagnosed with acute ischemic stroke, FASTroke was activated for 200; of these, 93 were preregistered. FASTroke prenotification was associated with faster door-to-computed-tomography times (19 minutes vs. 25 minutes, P < 0.001), faster door-to-intravenous-thrombolysis times (37 minutes vs. 48 minutes, P < 0.001), and faster door-to-endovascular-thrombectomy times (82 minutes vs. 119 minutes, P < 0.001). These times were shortened further when the receiving hospital also completed preregistration.
RESULTS
The FASTroke app is an easy and useful prenotification tool for a regional stroke care system in a metropolitan area, reducing transport and acute ischemic stroke management times and increasing reperfusion treatment. The effect was more pronounced when preregistration was performed as well.
CONCLUSION
[ "Acute Disease", "Aged", "Aged, 80 and over", "Emergency Medical Services", "Female", "Fibrinolytic Agents", "Humans", "Ischemic Stroke", "Male", "Middle Aged", "Mobile Applications", "Odds Ratio", "Registries", "Retrospective Studies", "Thrombectomy", "Time-to-Treatment" ]
8668497
INTRODUCTION
Acute ischemic stroke is a highly time-sensitive disease that requires treatment as quickly as possible. Thrombolysis, which includes intravenous thrombolysis (IVT) and endovascular thrombectomy (EVT), is a crucial treatment for acute ischemic stroke.1 Thrombolysis within the early therapeutic window results in better neurological outcomes and reduced mortality.2,3 Before thrombolysis can be applied, however, the brain must be imaged to rule out intracerebral hemorrhage. Recent guidelines recommend brain computed tomography (CT) within 20 minutes of hospital arrival.1,4 Major efforts have been made to treat acute ischemic stroke more rapidly, and improvements in emergency medical services (EMS) at the prehospital stage play a significant role. Recent treatment guidelines from the American Heart Association/American Stroke Association (AHA/ASA) recommend that EMS notify an appropriate hospital before a patient with acute ischemic stroke arrives.5 EMS prehospital notification can reduce door-to-imaging and door-to-IVT times and increase the number of patients eligible for thrombolysis, thereby improving outcomes.6 Telephones have been widely used for prenotification, and EMS have also used them to confirm whether a contacted hospital can accept and treat a patient with acute ischemic stroke.7 Recently, smartphone use has surpassed that of cellular phones. An appropriate mobile app could therefore be a useful means of communicating acute ischemic stroke information between out-of-hospital and in-hospital medical teams, since it can deliver the necessary information to multiple medical staff and hospitals simultaneously. Because acute stroke treatment requires multidisciplinary management within a limited time, prenotification through a mobile app can play a pivotal role in reducing treatment time. We developed FASTroke, a mobile app that helps EMS personnel identify suspected stroke patients, prenotifies nearby treating hospitals, and preregisters patient data to facilitate the intra-hospital delivery system. The app was implemented in Daegu, a city in Korea with a population of approximately 2.5 million. We hypothesized that the prehospital notification system in this app could reduce door-to-CT, door-to-IVT, and door-to-EVT times for patients with acute ischemic stroke.
METHODS
Study design and participants This study was a retrospective, observational study conducted in Daegu, the fourth largest metropolitan city in South Korea, from February 2020 to January 2021. Although 2 regional and 4 local emergency centers operate in Daegu, 5 major hospitals (Kyungpook National University Hospital, Yeungnam University Medical Center, Keimyung University Dongsan Hospital, Daegu Catholic University Medical Center, and Daegu Fatima Hospital) participated in the FASTroke project; the remaining local emergency center was excluded because it could not perform EVT 24 hours a day, 365 days a year. All 48 fire safety centers in Daegu, including their 119 ambulances, participated in the study. We included adult patients with acute ischemic stroke (aged ≥ 18 years) who experienced the first abnormal symptoms (as perceived by the patient or a witness) within 6 hours of being treated by EMS and were transported to the emergency department by ambulance. We excluded patients who were candidates for reperfusion therapy but were not treated because they refused, as well as patients who could not undergo the prehospital stroke screening test because of altered mental status. To compare management time delays and clinical outcomes, the patients were categorized into FASTroke prenotification and no FASTroke prenotification groups based on FASTroke app use. FASTroke activation patients were further subgrouped according to whether they were preregistered before arriving at the hospital. Mobile application In December 2019, the Daegu Emergency Medicine Collaboration Committee (DEMCC) developed the FASTroke app as an identification, prenotification, and preregistration system for suspected stroke patients, intended to use the regional stroke care system efficiently and thereby improve the quality of acute stroke management. The app was introduced to all major regional emergency departments and fire departments throughout Daegu City.
For the FASTroke project as a regional acute stroke care system, the DEMCC organized a FASTroke team consisting of specialist physicians in emergency medicine, neurology, and neurosurgery involved in treating patients with acute ischemic stroke. The FASTroke app runs on iOS and Android and is free to download and apply for membership; however, DEMCC administrator approval is required for full membership. The smartphones used by the EMS, hospital stroke teams, and emergency department staff were identified and registered individually by the DEMCC. EMS could activate FASTroke at any time for a patient with suspected acute ischemic stroke, who was then transferred to the nearest acceptable hospital, provided the first abnormal time (FAT, defined as the time elapsed since the first neurological abnormalities were detected) was within 6 hours and the blood glucose level was at least 60 mg/dL. If these criteria are met, FASTroke can be activated when at least one of three symptoms (facial droop, unilateral limb weakness, or dysarthria) is present.8 Next, the patient's name and birth date are entered for hospital preregistration, along with the hospital to which the patient is to be transferred. After the EMS sends this information, the hospital's medical staff (stroke team and emergency department) receive notifications on their smartphones, allowing them to prepare for the patient and preregister the patient at the hospital. EMS can also enter the symptom onset time, blood pressure, previous diseases, and medication history, all shared with the hospital's medical staff. If a hospital cannot accommodate patients with acute ischemic stroke because reperfusion therapy is unavailable, the hospital's stroke team administrator can register the hospital's nonavailability through the app at any time, preventing EMS from selecting that hospital via FASTroke.
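The activation logic above amounts to a short eligibility check. The sketch below is a minimal, hypothetical rendering in Python of the three published criteria (FAT within 6 hours, blood glucose of at least 60 mg/dL, and at least one screening symptom); the function and parameter names are illustrative and do not come from the app's actual implementation.

```python
from datetime import datetime, timedelta

# Minimal sketch of the FASTroke activation criteria described above.
# Names are hypothetical; only the three published criteria are encoded.
def can_activate_fastroke(first_abnormal_time: datetime,
                          assessed_at: datetime,
                          glucose_mg_dl: float,
                          facial_droop: bool,
                          limb_weakness: bool,
                          dysarthria: bool) -> bool:
    within_window = assessed_at - first_abnormal_time <= timedelta(hours=6)
    glucose_ok = glucose_mg_dl >= 60
    has_symptom = facial_droop or limb_weakness or dysarthria
    return within_window and glucose_ok and has_symptom
```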
Outcome measurement After the patient's arrival at the hospital, the following times were measured for examination and treatment: door-to-CT, door-to-IVT, and door-to-EVT. The door time was defined as the time of reception at the emergency department's entrance; the CT scan time, as the time the first CT image was taken; the IVT time, as the time tissue plasminogen activator was injected; and the EVT time, as the time the catheter puncture procedure was started. For the neurological evaluation, the National Institutes of Health Stroke Scale (NIHSS) was measured at admission and discharge, and stroke severity was divided into minor (1–4 points), moderate (5–15 points), moderate to severe (16–20 points), and severe (21–42 points) according to the NIHSS score.9 To evaluate each patient's neurological improvement, the difference in NIHSS scores between admission and discharge was calculated. Data collection Data of patients with International Classification of Diseases, 10th Revision, Clinical Modification diagnostic codes I60–I64 were collected from the electronic medical records (EMR) of the five hospitals. The diagnostic code was entered on the basis of a final diagnosis by an on-duty neurology specialist, which rested on physical examinations and medical imaging results. Only patients transported to the hospital by 119 ambulance who arrived within a FAT of 6 hours were selected. Information on the selected patients was collected from the EMR of the five hospitals and the 119 run sheets of Daegu. We retrieved the patients' age, sex, past disease, smoking status, hospital admission time, mental status score, neurological examination, and treatment from their EMR. Arrival at the hospital (the door time) is the line that divides the prehospital stage from the hospital stage. The last normal time (LNT) is when the patient was last identified as being normal. The time taken to arrive at the hospital door was recorded based on a combination of FAT and LNT. The transport time, defined as the period between the ambulance's arrival at the scene and its arrival at the hospital door, was determined from the EMS run sheet.
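The severity banding in the outcome measurement above maps directly to a small lookup. The helper below is a hypothetical illustration, not code from the study; note that the published bands start at 1, so a score of 0 falls outside them.

```python
def nihss_severity(score: int) -> str:
    """Map an NIHSS score to the severity bands used in this study."""
    if 1 <= score <= 4:
        return "minor"
    if 5 <= score <= 15:
        return "moderate"
    if 16 <= score <= 20:
        return "moderate to severe"
    if 21 <= score <= 42:
        return "severe"
    # The text does not assign a band to 0 or to values above 42.
    raise ValueError(f"NIHSS score outside banded range: {score}")
```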
Statistical analysis Continuous variables are reported as medians with interquartile ranges and were compared using the Mann-Whitney U test or Student's t-test according to whether they were normally distributed. Categorical variables are reported as numbers and percentages and were compared using the χ2 test or Fisher's exact test. The associations of baseline characteristics and FASTroke app use with the time spent on management, including brain CT, IVT, and EVT, were first analyzed using univariate logistic regression. As recommended by the AHA/ASA guidelines, the door-to-CT time was dichotomized at 20 minutes and the door-to-IVT time at 60 minutes; the door-to-EVT time was dichotomized at 90 minutes, close to the median value in this study.1 To identify the factors affecting performance of CT scans and reperfusion treatment within the target times, the following variables were adjusted for in a multivariable logistic regression analysis: age, sex, previous disease, admission time, NIHSS at admission, and FASTroke use. Results are reported as odds ratios (ORs) with 95% confidence intervals (CIs). FASTroke prenotification was categorized by whether prenotification was employed, with no FASTroke prenotification analyzed as the reference. All statistical analyses were performed using SPSS version 25.0 for Windows (SPSS Inc., Armonk, NY, USA). Ethics statement The research protocol was approved by the Institutional Review Board of Kyungpook National University Hospital (2021-06-025) and exempted from prior consent requirements because of the study's retrospective nature.
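To make the modeling step in the statistical analysis concrete, the sketch below fits one of the target-time models (brain CT within the 20-minute target) using Python's statsmodels rather than the SPSS used in the study; the DataFrame and its column names are hypothetical stand-ins for the study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical sketch of the multivariable target-time model. `df` and its
# columns are illustrative stand-ins; the study itself used SPSS 25.0.
def fit_door_to_ct_model(df: pd.DataFrame) -> pd.DataFrame:
    y = (df["door_to_ct_min"] <= 20).astype(int)  # CT within the 20-min target
    covariates = ["age", "male", "hypertension", "diabetes", "prior_stroke",
                  "nihss_admission", "fastroke_no_prereg", "fastroke_prereg"]
    X = sm.add_constant(df[covariates])
    fit = sm.Logit(y, X).fit(disp=0)
    ci = np.exp(fit.conf_int())                   # CIs on the OR scale
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "CI 2.5%": ci.iloc[:, 0],
                         "CI 97.5%": ci.iloc[:, 1]})
```

The two FASTroke dummies correspond to prenotification without and with preregistration, with no prenotification as the reference, mirroring the grouping described above.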
RESULTS
During the study period, 563 patients with acute ischemic stroke within 6 hours of FAT were transported by 119 ambulances to the 5 participating hospitals. Of these, the hospitals were prenotified by FASTroke for 200 (35.5%) patients, 93 (46.5%) of whom were preregistered on the way to the hospital (Fig. 1). The mean age of the study patients was 72 years, and 321 (57.0%) were male. FASTroke use did not differ significantly by previous illness, although the FASTroke prenotification group had fewer patients with a history of stroke than the no FASTroke prenotification group (22.5% vs. 32.8%, P = 0.006). There was no difference in FASTroke activation between patients admitted at night and those admitted during the day (P = 0.502), nor between weekday and holiday admissions (P = 0.062). The symptom-onset-to-door time was shorter in the FASTroke prenotification group for both LNT and FAT (LNT-to-door, 110 minutes vs. 143 minutes, P = 0.001; FAT-to-door, 61 minutes vs. 71 minutes, P = 0.039), and the transport time from scene to door was also shorter (23 minutes vs. 24 minutes, P = 0.021). The time from LNT to the 119 call was shorter in the FASTroke prenotification group (82 minutes vs. 111 minutes, P = 0.001), but no significant difference was observed for FAT. The FASTroke prenotification group had a higher NIHSS score at admission (8 vs. 4, P < 0.001) and a larger improvement (the difference in scores between admission and discharge) than the no FASTroke prenotification group (2 vs. 0, P < 0.001). In terms of reperfusion therapy, the rates of IVT alone and combined IVT plus EVT were higher in the FASTroke prenotification group (IVT, 23.0% vs. 13.2%, P = 0.002; combined IVT plus EVT, 18.5% vs. 7.2%, P < 0.001), whereas the rate of EVT alone did not differ significantly (15.5% vs. 13.8%, P = 0.330) (Table 1, Fig. 2). Values are presented as number (%) or number (range). Afib = atrial fibrillation, EMS = emergency medical services, LNT = last normal time, FAT = first abnormal time, GCS = Glasgow Coma Scale, NIHSS = National Institutes of Health Stroke Scale, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy. aDifference in NIHSS score between admission and discharge. In the hospital, the FASTroke prenotification group had shorter door-to-CT (19 minutes vs. 25 minutes, P < 0.001) and door-to-magnetic-resonance-imaging (62 minutes vs. 80 minutes, P = 0.011) scan times, and shorter CT-to-IVT (17 minutes vs. 28 minutes, P = 0.002) and CT-to-EVT (66 minutes vs. 97 minutes, P < 0.001) times. Thus, door-to-IVT (37 minutes vs. 48 minutes, P < 0.001) and door-to-EVT (82 minutes vs. 119 minutes, P < 0.001) times were faster with FASTroke prenotification. Within the FASTroke prenotification group, preregistered patients had shorter door-to-CT times (17 minutes vs. 20 minutes, P = 0.007) and door-to-reperfusion times (IVT, 34 minutes vs. 41 minutes, P = 0.030; EVT, 73 minutes vs. 90 minutes, P = 0.042) than nonregistered patients (Table 2). Values are presented as median times (range; unit: minutes). CT = computed tomography, MRI = magnetic resonance imaging, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy. Table 3 shows the results of the multivariable logistic regression analysis after dividing the patients with acute ischemic stroke into groups by target time spent on CT, IVT, and EVT (CT, 20 minutes; IVT, 60 minutes; EVT, 90 minutes).
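The unadjusted time comparisons above are Mann-Whitney tests of the kind described in the statistical analysis. A toy illustration with fabricated values (not study data) might look like this:

```python
from scipy.stats import mannwhitneyu

# Toy illustration of the unadjusted group comparison (e.g., door-to-CT
# minutes, FASTroke vs. no FASTroke). Values are fabricated for the example.
fastroke_minutes = [17, 18, 19, 21, 22, 25]
control_minutes = [22, 24, 25, 26, 28, 31]
u_stat, p_value = mannwhitneyu(fastroke_minutes, control_minutes,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```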
The results were as follows: FASTroke without preregistration (adjusted OR, 1.88; 95% CI, 1.19–2.96; P = 0.006), FASTroke with preregistration (adjusted OR, 3.35; 95% CI, 2.04–5.53; P < 0.001), previous history of stroke (adjusted OR, 0.58; 95% CI, 0.39–0.86; P = 0.007), and NIHSS at admission (adjusted OR, 1.03; 95% CI, 1.00–1.06; P = 0.016) were independent factors affecting the time to brain CT scan. The factors affecting the door-to-IVT time were FASTroke with preregistration (adjusted OR, 5.56; 95% CI, 1.29–23.93; P = 0.021), previous history of stroke (adjusted OR, 0.23; 95% CI, 0.08–0.73; P = 0.012), and anticoagulation medication (adjusted OR, 0.07; 95% CI, 0.01–0.55; P = 0.012). The factors affecting the door-to-EVT time were FASTroke with preregistration (adjusted OR, 6.73; 95% CI, 2.53–17.87; P < 0.001) and NIHSS score at admission (adjusted OR, 1.12; 95% CI, 1.05–1.19; P < 0.001). Adjusted variables: age, sex, hypertension, diabetes, dyslipidemia, atrial fibrillation/flutter, coronary artery disease, cerebrovascular event, anticoagulation medication, visit time, visit day, NIHSS at visit, FASTroke use. OR = odds ratio, CI = confidence interval, CT = computed tomography, NIHSS = National Institutes of Health Stroke Scale, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy.
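As a consistency check on how these adjusted ORs relate to the underlying logistic coefficients (OR = exp(β); 95% CI = exp(β ± 1.96·SE)), the reported door-to-EVT interval can be regenerated from its own bounds. This is arithmetic on the published numbers, not part of the study's analysis.

```python
import math

# Recover the standard error implied by the reported door-to-EVT estimate
# (OR 6.73; 95% CI 2.53-17.87), then regenerate the bounds.
beta = math.log(6.73)                                 # ~1.907
se = (math.log(17.87) - math.log(2.53)) / (2 * 1.96)  # ~0.499
lower = math.exp(beta - 1.96 * se)                    # ~2.53
upper = math.exp(beta + 1.96 * se)                    # ~17.9
print(round(lower, 2), round(upper, 2))
```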
[ "Study design and participants", "Mobile application", "Outcome measurement", "Data collection", "Statistical analysis", "Ethics statement" ]
[ "This study was a retrospective, observational study conducted in Daegu, the fourth largest metropolitan city in South Korea, from February 2020 to January 2021. While 2 regional and 4 local emergency centers operate in Daegu, 5 major hospitals (Kyungpook National University Hospital, Yeungnam University Medical Center, Keimyung University Dongsan Hospital, Daegu Catholic University Medical Center, and Daegu Fatima Hospital) participated in FASTroke project, except for 1 local emergency center that could not perform EVT interventions 24 hours a day, 365 days a year. All 48 fire safety centers including 119 ambulances in Daegu participated in the study. We included adult patients with acute ischemic stroke (aged ≥ 18 years) who experienced the first abnormal symptoms (as perceived by the patient or witness) within 6 hours of being treated by EMS and were transported to the emergency department by ambulance. The study excluded patients who were candidates for reperfusion therapy but were not treated due to their refusal. Patients in the prehospital stage who could not take a stroke screening test due to mental change to predict acute ischemic stroke were also excluded. To compare the management time delay and clinical outcomes, the patients were categorized into FASTtroke prenotification and no FASTtroke prenotification based on the FASTroke app use. FASTroke activation patients were subgrouped according to whether they were preregistered prior to arriving at the hospital.", "In December 2019, the Daegu Emergency Medicine Collaboration Committee (DEMCC) developed the FASTroke app as an identification, prenotification, and preregistration system for suspected stroke patients, which can efficiently utilize the regional stroke care system and subsequently improve the quality of acute stroke management. The app was introduced to all major regional emergency departments and fire departments throughout Daegu City. For the FASTroke project as a regional acute stroke care system, DEMCC organized a FASTroke team consisting of specialist physicians in emergency medicine, neurology, and neurosurgery related to treating patients with acute ischemic stroke.\nThe FASTroke app can be used on iOS and Android systems and is free to download and apply for membership; however, DEMCC administrator approval is required for full membership. The smartphones used by the EMS, hospital stroke team, and emergency department staff were identified and registered individually by the DEMCC. EMS could activate FASTroke at any time in the event of a patient with suspected acute ischemic stroke who is then transferred to the nearest acceptable hospital, in the cases of first abnormal times (FAT, defined as the time elapsed since the first neurological abnormalities were detected) within 6 hours and blood glucose levels of at least 60 mg/dL. If these criteria are met, FASTroke can be activated if one of the symptoms (facial droop, unilateral limb weakness, and dysarthria) is present.8 Next, the patient's name and birth date is entered for hospital preregistration, and the hospital to which the patient is to be transferred is entered. After the EMS sends the information, the hospital's medical staff (stroke team and emergency department) receive notifications through their smartphones to prepare for the patients with acute ischemic stroke and to preregister the patients at the hospital. 
EMS can also enter the symptom onset time, blood pressure, previous diseases, and medication history, information shared with the hospital's medical staff. If a hospital cannot accommodate patients with acute ischemic stroke due to impossible reperfusion therapy, the hospital's stroke team administrator can register their hospital's nonavailability through the app at any time, preventing the EMS from activating that hospital via FASTroke.", "After the patient's arrival at the hospital, the following times were measured for the examination and treatment: door-to-CT, door-to-IVT, and door-to-EVT. The door time was defined as the time at reception at the emergency department's entrance, the CT scan time was when the first CT image was taken, the IVT time was when tissue plasminogen activator was injected, and the EVT time was when the catheter puncture procedure was started. For the neurological evaluation, the National Institutes of Health Stroke Scale (NIHSS) was measured at admission and discharge, and the severity of stroke was divided into minor (1–4 points), moderate (5–15 points), moderate to severe (16–20 points), and severe (21–42 points) according to NIHSS score.9 To evaluate the patient's neurological improvement, the difference in NIHSS scores between admission and discharge were calculated.", "Data of patients with the International Classification of Diseases, 10th Revision, Clinical Modification diagnostic codes I60–I64 were collected from the electronic medical records (EMR) of five hospitals. The diagnostic code input was made through a final diagnosis from an on-duty neurologic specialist. The diagnosis was based on physical examinations and medical imaging results. In total, 119 ambulances reached the hospital and only patients who entered the hospital within the FAT of 6 hours were selected. Information on the selected study group of patients was collected from the EMR of the five hospitals and the 119 run sheets of Daegu.\nWe retrieved the patients' age, sex, past disease, smoking status, hospital admission time, mental status score, neurological examination, and treatment from their EMR. Arrival at the hospital (also known as door time) is the line that divides the prehospital stage from the hospital stage. The last normal time (LNT) is when the patient was last identified as being normal. The time taken to arrive to the hospital door was recorded based on a combination of FAT and LNT. The transport time, defined as the period between the arrival of the ambulance at the scene and its arrival at the hospital door, was determined from the EMS run sheet.", "The continuous variables are reported as median and interquartile ranges and were compared using the Mann-Whitney U test and Student's t-test according to its normal/non-normal distribution. Categorical variables are reported as numbers and percentages and were compared using the χ2 test or Fisher's exact test. The associations between baseline characteristics and the use of the FASTroke app with the time spent on management including brain CT, IVT, and EVT were first analyzed using a univariate logistic regression analysis. 
As recommended by the AHA/ASA guidelines, the door-to-CT time was divided into 20 minutes, the door-to-IVT time was split into 2 parts based on 60 minutes, and the door-to-EVT time was divided into 90 minutes close to the median value in this study.1 To confirm the factors affecting the performance of CT scans and reperfusion treatment within the target time, the following variables were adjusted and analyzed using a multivariable logistic regression analysis: age, sex, previous disease, admission time, NIHSS at admission, and FASTroke use. The results were reported as odds ratios (ORs) and 95% confidence intervals (CIs). The FASTtroke prenotification was categorized by whether prenotification was employed, and the no FASTtroke prenotification was analyzed as a reference value. All statistical analyses were performed using SPSS version 25.0 for Windows (SPSS Inc., Armonk, NY, USA).", "The research protocol was approved by the Institutional Review Board of Kyungpook National University Hospital (2021-06-025) and exempted from prior consent requirements due to the study's retrospective nature." ]
[ "INTRODUCTION", "METHODS", "Study design and participants", "Mobile application", "Outcome measurement", "Data collection", "Statistical analysis", "Ethics statement", "RESULTS", "DISCUSSION" ]
[ "Acute ischemic stroke is a highly time-sensitive disease that requires treatment as quickly as possible. Thrombolysis, which includes intravenous thrombolysis (IVT) and endovascular thrombectomy (EVT), is a crucial treatment for acute ischemic stroke.1 Thrombolysis within the early therapeutic window results in better neurological outcomes and reduced mortality.23 Before thrombolysis can be applied; however, the brain must be imaged to rule out intracerebral hemorrhage. Recent guidelines have recommended brain computed tomography (CT) within 20 minutes of hospital arrival.14\nMajor efforts have been made to treat acute ischemic stroke more rapidly, and improvements in emergency medical services (EMS) at the prehospital stage play a significant role. Recent treatment guidelines from the American Heart Association/American Stroke Association (AHA/ASA) recommend that EMS conduct prehospital notifications before the arrival of patients with acute ischemic stroke to the appropriate hospitals.5 EMS prehospital notifications can reduce door-to-imaging times and door-to-IVT times for patients with acute ischemic stroke and increase the number of patients eligible for thrombolysis, thereby leading to positive outcomes.6 Telephones have been widely used for prenotification, and EMS have also confirmed whether patients with acute ischemic stroke can be accepted and treated in the contacted hospitals.7\nRecently, the use of smartphones has surpassed that of cellular phones. An appropriate mobile app could therefore be a useful means for communicating acute ischemic stroke information between out-of-hospital and in-hospital medical teams. The mobile app can simultaneously deliver necessary information to multiple medical staff and hospitals. In acute stroke treatment that requires multidisciplinary management in a limited time, prenotification through mobile app will play an epochal role in reducing treatment time. We developed FASTroke, a mobile app, which has major roles in identifying suspected stroke patients by EMS personnel, prenotification of suspected stroke patients in nearby treating hospitals, and preregistration of patient's data to facilitate the intra-hospital delivery system. The app was implemented in Daegu, a city in Korea with a population of approximately 2.5 million people. We hypothesized that the prehospital notification system in this app could reduce door-to-CT, door-to-IVT, and door-to-EVT times for patients with acute ischemic stroke.", "Study design and participants This study was a retrospective, observational study conducted in Daegu, the fourth largest metropolitan city in South Korea, from February 2020 to January 2021. While 2 regional and 4 local emergency centers operate in Daegu, 5 major hospitals (Kyungpook National University Hospital, Yeungnam University Medical Center, Keimyung University Dongsan Hospital, Daegu Catholic University Medical Center, and Daegu Fatima Hospital) participated in FASTroke project, except for 1 local emergency center that could not perform EVT interventions 24 hours a day, 365 days a year. All 48 fire safety centers including 119 ambulances in Daegu participated in the study. We included adult patients with acute ischemic stroke (aged ≥ 18 years) who experienced the first abnormal symptoms (as perceived by the patient or witness) within 6 hours of being treated by EMS and were transported to the emergency department by ambulance. 
The study excluded patients who were candidates for reperfusion therapy but were not treated due to their refusal. Patients in the prehospital stage who could not take a stroke screening test due to mental change to predict acute ischemic stroke were also excluded. To compare the management time delay and clinical outcomes, the patients were categorized into FASTtroke prenotification and no FASTtroke prenotification based on the FASTroke app use. FASTroke activation patients were subgrouped according to whether they were preregistered prior to arriving at the hospital.\nThis study was a retrospective, observational study conducted in Daegu, the fourth largest metropolitan city in South Korea, from February 2020 to January 2021. While 2 regional and 4 local emergency centers operate in Daegu, 5 major hospitals (Kyungpook National University Hospital, Yeungnam University Medical Center, Keimyung University Dongsan Hospital, Daegu Catholic University Medical Center, and Daegu Fatima Hospital) participated in FASTroke project, except for 1 local emergency center that could not perform EVT interventions 24 hours a day, 365 days a year. All 48 fire safety centers including 119 ambulances in Daegu participated in the study. We included adult patients with acute ischemic stroke (aged ≥ 18 years) who experienced the first abnormal symptoms (as perceived by the patient or witness) within 6 hours of being treated by EMS and were transported to the emergency department by ambulance. The study excluded patients who were candidates for reperfusion therapy but were not treated due to their refusal. Patients in the prehospital stage who could not take a stroke screening test due to mental change to predict acute ischemic stroke were also excluded. To compare the management time delay and clinical outcomes, the patients were categorized into FASTtroke prenotification and no FASTtroke prenotification based on the FASTroke app use. FASTroke activation patients were subgrouped according to whether they were preregistered prior to arriving at the hospital.\nMobile application In December 2019, the Daegu Emergency Medicine Collaboration Committee (DEMCC) developed the FASTroke app as an identification, prenotification, and preregistration system for suspected stroke patients, which can efficiently utilize the regional stroke care system and subsequently improve the quality of acute stroke management. The app was introduced to all major regional emergency departments and fire departments throughout Daegu City. For the FASTroke project as a regional acute stroke care system, DEMCC organized a FASTroke team consisting of specialist physicians in emergency medicine, neurology, and neurosurgery related to treating patients with acute ischemic stroke.\nThe FASTroke app can be used on iOS and Android systems and is free to download and apply for membership; however, DEMCC administrator approval is required for full membership. The smartphones used by the EMS, hospital stroke team, and emergency department staff were identified and registered individually by the DEMCC. EMS could activate FASTroke at any time in the event of a patient with suspected acute ischemic stroke who is then transferred to the nearest acceptable hospital, in the cases of first abnormal times (FAT, defined as the time elapsed since the first neurological abnormalities were detected) within 6 hours and blood glucose levels of at least 60 mg/dL. 
If these criteria are met, FASTroke can be activated if one of the symptoms (facial droop, unilateral limb weakness, and dysarthria) is present.8 Next, the patient's name and birth date is entered for hospital preregistration, and the hospital to which the patient is to be transferred is entered. After the EMS sends the information, the hospital's medical staff (stroke team and emergency department) receive notifications through their smartphones to prepare for the patients with acute ischemic stroke and to preregister the patients at the hospital. EMS can also enter the symptom onset time, blood pressure, previous diseases, and medication history, information shared with the hospital's medical staff. If a hospital cannot accommodate patients with acute ischemic stroke due to impossible reperfusion therapy, the hospital's stroke team administrator can register their hospital's nonavailability through the app at any time, preventing the EMS from activating that hospital via FASTroke.\nIn December 2019, the Daegu Emergency Medicine Collaboration Committee (DEMCC) developed the FASTroke app as an identification, prenotification, and preregistration system for suspected stroke patients, which can efficiently utilize the regional stroke care system and subsequently improve the quality of acute stroke management. The app was introduced to all major regional emergency departments and fire departments throughout Daegu City. For the FASTroke project as a regional acute stroke care system, DEMCC organized a FASTroke team consisting of specialist physicians in emergency medicine, neurology, and neurosurgery related to treating patients with acute ischemic stroke.\nThe FASTroke app can be used on iOS and Android systems and is free to download and apply for membership; however, DEMCC administrator approval is required for full membership. The smartphones used by the EMS, hospital stroke team, and emergency department staff were identified and registered individually by the DEMCC. EMS could activate FASTroke at any time in the event of a patient with suspected acute ischemic stroke who is then transferred to the nearest acceptable hospital, in the cases of first abnormal times (FAT, defined as the time elapsed since the first neurological abnormalities were detected) within 6 hours and blood glucose levels of at least 60 mg/dL. If these criteria are met, FASTroke can be activated if one of the symptoms (facial droop, unilateral limb weakness, and dysarthria) is present.8 Next, the patient's name and birth date is entered for hospital preregistration, and the hospital to which the patient is to be transferred is entered. After the EMS sends the information, the hospital's medical staff (stroke team and emergency department) receive notifications through their smartphones to prepare for the patients with acute ischemic stroke and to preregister the patients at the hospital. EMS can also enter the symptom onset time, blood pressure, previous diseases, and medication history, information shared with the hospital's medical staff. If a hospital cannot accommodate patients with acute ischemic stroke due to impossible reperfusion therapy, the hospital's stroke team administrator can register their hospital's nonavailability through the app at any time, preventing the EMS from activating that hospital via FASTroke.\nOutcome measurement After the patient's arrival at the hospital, the following times were measured for the examination and treatment: door-to-CT, door-to-IVT, and door-to-EVT. 
The door time was defined as the time at reception at the emergency department's entrance, the CT scan time was when the first CT image was taken, the IVT time was when tissue plasminogen activator was injected, and the EVT time was when the catheter puncture procedure was started. For the neurological evaluation, the National Institutes of Health Stroke Scale (NIHSS) was measured at admission and discharge, and the severity of stroke was divided into minor (1–4 points), moderate (5–15 points), moderate to severe (16–20 points), and severe (21–42 points) according to NIHSS score.9 To evaluate the patient's neurological improvement, the difference in NIHSS scores between admission and discharge were calculated.\nAfter the patient's arrival at the hospital, the following times were measured for the examination and treatment: door-to-CT, door-to-IVT, and door-to-EVT. The door time was defined as the time at reception at the emergency department's entrance, the CT scan time was when the first CT image was taken, the IVT time was when tissue plasminogen activator was injected, and the EVT time was when the catheter puncture procedure was started. For the neurological evaluation, the National Institutes of Health Stroke Scale (NIHSS) was measured at admission and discharge, and the severity of stroke was divided into minor (1–4 points), moderate (5–15 points), moderate to severe (16–20 points), and severe (21–42 points) according to NIHSS score.9 To evaluate the patient's neurological improvement, the difference in NIHSS scores between admission and discharge were calculated.\nData collection Data of patients with the International Classification of Diseases, 10th Revision, Clinical Modification diagnostic codes I60–I64 were collected from the electronic medical records (EMR) of five hospitals. The diagnostic code input was made through a final diagnosis from an on-duty neurologic specialist. The diagnosis was based on physical examinations and medical imaging results. In total, 119 ambulances reached the hospital and only patients who entered the hospital within the FAT of 6 hours were selected. Information on the selected study group of patients was collected from the EMR of the five hospitals and the 119 run sheets of Daegu.\nWe retrieved the patients' age, sex, past disease, smoking status, hospital admission time, mental status score, neurological examination, and treatment from their EMR. Arrival at the hospital (also known as door time) is the line that divides the prehospital stage from the hospital stage. The last normal time (LNT) is when the patient was last identified as being normal. The time taken to arrive to the hospital door was recorded based on a combination of FAT and LNT. The transport time, defined as the period between the arrival of the ambulance at the scene and its arrival at the hospital door, was determined from the EMS run sheet.\nData of patients with the International Classification of Diseases, 10th Revision, Clinical Modification diagnostic codes I60–I64 were collected from the electronic medical records (EMR) of five hospitals. The diagnostic code input was made through a final diagnosis from an on-duty neurologic specialist. The diagnosis was based on physical examinations and medical imaging results. In total, 119 ambulances reached the hospital and only patients who entered the hospital within the FAT of 6 hours were selected. 
Information on the selected study group of patients was collected from the EMR of the five hospitals and the 119 run sheets of Daegu.\nWe retrieved the patients' age, sex, past disease, smoking status, hospital admission time, mental status score, neurological examination, and treatment from their EMR. Arrival at the hospital (also known as door time) is the line that divides the prehospital stage from the hospital stage. The last normal time (LNT) is when the patient was last identified as being normal. The time taken to arrive to the hospital door was recorded based on a combination of FAT and LNT. The transport time, defined as the period between the arrival of the ambulance at the scene and its arrival at the hospital door, was determined from the EMS run sheet.\nStatistical analysis The continuous variables are reported as median and interquartile ranges and were compared using the Mann-Whitney U test and Student's t-test according to its normal/non-normal distribution. Categorical variables are reported as numbers and percentages and were compared using the χ2 test or Fisher's exact test. The associations between baseline characteristics and the use of the FASTroke app with the time spent on management including brain CT, IVT, and EVT were first analyzed using a univariate logistic regression analysis. As recommended by the AHA/ASA guidelines, the door-to-CT time was divided into 20 minutes, the door-to-IVT time was split into 2 parts based on 60 minutes, and the door-to-EVT time was divided into 90 minutes close to the median value in this study.1 To confirm the factors affecting the performance of CT scans and reperfusion treatment within the target time, the following variables were adjusted and analyzed using a multivariable logistic regression analysis: age, sex, previous disease, admission time, NIHSS at admission, and FASTroke use. The results were reported as odds ratios (ORs) and 95% confidence intervals (CIs). The FASTtroke prenotification was categorized by whether prenotification was employed, and the no FASTtroke prenotification was analyzed as a reference value. All statistical analyses were performed using SPSS version 25.0 for Windows (SPSS Inc., Armonk, NY, USA).\nThe continuous variables are reported as median and interquartile ranges and were compared using the Mann-Whitney U test and Student's t-test according to its normal/non-normal distribution. Categorical variables are reported as numbers and percentages and were compared using the χ2 test or Fisher's exact test. The associations between baseline characteristics and the use of the FASTroke app with the time spent on management including brain CT, IVT, and EVT were first analyzed using a univariate logistic regression analysis. As recommended by the AHA/ASA guidelines, the door-to-CT time was divided into 20 minutes, the door-to-IVT time was split into 2 parts based on 60 minutes, and the door-to-EVT time was divided into 90 minutes close to the median value in this study.1 To confirm the factors affecting the performance of CT scans and reperfusion treatment within the target time, the following variables were adjusted and analyzed using a multivariable logistic regression analysis: age, sex, previous disease, admission time, NIHSS at admission, and FASTroke use. The results were reported as odds ratios (ORs) and 95% confidence intervals (CIs). The FASTtroke prenotification was categorized by whether prenotification was employed, and the no FASTtroke prenotification was analyzed as a reference value. 
All statistical analyses were performed using SPSS version 25.0 for Windows (SPSS Inc., Armonk, NY, USA).\nEthics statement The research protocol was approved by the Institutional Review Board of Kyungpook National University Hospital (2021-06-025) and exempted from prior consent requirements due to the study's retrospective nature.\nThe research protocol was approved by the Institutional Review Board of Kyungpook National University Hospital (2021-06-025) and exempted from prior consent requirements due to the study's retrospective nature.", "This study was a retrospective, observational study conducted in Daegu, the fourth largest metropolitan city in South Korea, from February 2020 to January 2021. While 2 regional and 4 local emergency centers operate in Daegu, 5 major hospitals (Kyungpook National University Hospital, Yeungnam University Medical Center, Keimyung University Dongsan Hospital, Daegu Catholic University Medical Center, and Daegu Fatima Hospital) participated in FASTroke project, except for 1 local emergency center that could not perform EVT interventions 24 hours a day, 365 days a year. All 48 fire safety centers including 119 ambulances in Daegu participated in the study. We included adult patients with acute ischemic stroke (aged ≥ 18 years) who experienced the first abnormal symptoms (as perceived by the patient or witness) within 6 hours of being treated by EMS and were transported to the emergency department by ambulance. The study excluded patients who were candidates for reperfusion therapy but were not treated due to their refusal. Patients in the prehospital stage who could not take a stroke screening test due to mental change to predict acute ischemic stroke were also excluded. To compare the management time delay and clinical outcomes, the patients were categorized into FASTtroke prenotification and no FASTtroke prenotification based on the FASTroke app use. FASTroke activation patients were subgrouped according to whether they were preregistered prior to arriving at the hospital.", "In December 2019, the Daegu Emergency Medicine Collaboration Committee (DEMCC) developed the FASTroke app as an identification, prenotification, and preregistration system for suspected stroke patients, which can efficiently utilize the regional stroke care system and subsequently improve the quality of acute stroke management. The app was introduced to all major regional emergency departments and fire departments throughout Daegu City. For the FASTroke project as a regional acute stroke care system, DEMCC organized a FASTroke team consisting of specialist physicians in emergency medicine, neurology, and neurosurgery related to treating patients with acute ischemic stroke.\nThe FASTroke app can be used on iOS and Android systems and is free to download and apply for membership; however, DEMCC administrator approval is required for full membership. The smartphones used by the EMS, hospital stroke team, and emergency department staff were identified and registered individually by the DEMCC. EMS could activate FASTroke at any time in the event of a patient with suspected acute ischemic stroke who is then transferred to the nearest acceptable hospital, in the cases of first abnormal times (FAT, defined as the time elapsed since the first neurological abnormalities were detected) within 6 hours and blood glucose levels of at least 60 mg/dL. 
If these criteria are met, FASTroke can be activated if one of the symptoms (facial droop, unilateral limb weakness, and dysarthria) is present.8 Next, the patient's name and birth date is entered for hospital preregistration, and the hospital to which the patient is to be transferred is entered. After the EMS sends the information, the hospital's medical staff (stroke team and emergency department) receive notifications through their smartphones to prepare for the patients with acute ischemic stroke and to preregister the patients at the hospital. EMS can also enter the symptom onset time, blood pressure, previous diseases, and medication history, information shared with the hospital's medical staff. If a hospital cannot accommodate patients with acute ischemic stroke due to impossible reperfusion therapy, the hospital's stroke team administrator can register their hospital's nonavailability through the app at any time, preventing the EMS from activating that hospital via FASTroke.", "After the patient's arrival at the hospital, the following times were measured for the examination and treatment: door-to-CT, door-to-IVT, and door-to-EVT. The door time was defined as the time at reception at the emergency department's entrance, the CT scan time was when the first CT image was taken, the IVT time was when tissue plasminogen activator was injected, and the EVT time was when the catheter puncture procedure was started. For the neurological evaluation, the National Institutes of Health Stroke Scale (NIHSS) was measured at admission and discharge, and the severity of stroke was divided into minor (1–4 points), moderate (5–15 points), moderate to severe (16–20 points), and severe (21–42 points) according to NIHSS score.9 To evaluate the patient's neurological improvement, the difference in NIHSS scores between admission and discharge were calculated.", "Data of patients with the International Classification of Diseases, 10th Revision, Clinical Modification diagnostic codes I60–I64 were collected from the electronic medical records (EMR) of five hospitals. The diagnostic code input was made through a final diagnosis from an on-duty neurologic specialist. The diagnosis was based on physical examinations and medical imaging results. In total, 119 ambulances reached the hospital and only patients who entered the hospital within the FAT of 6 hours were selected. Information on the selected study group of patients was collected from the EMR of the five hospitals and the 119 run sheets of Daegu.\nWe retrieved the patients' age, sex, past disease, smoking status, hospital admission time, mental status score, neurological examination, and treatment from their EMR. Arrival at the hospital (also known as door time) is the line that divides the prehospital stage from the hospital stage. The last normal time (LNT) is when the patient was last identified as being normal. The time taken to arrive to the hospital door was recorded based on a combination of FAT and LNT. The transport time, defined as the period between the arrival of the ambulance at the scene and its arrival at the hospital door, was determined from the EMS run sheet.", "The continuous variables are reported as median and interquartile ranges and were compared using the Mann-Whitney U test and Student's t-test according to its normal/non-normal distribution. Categorical variables are reported as numbers and percentages and were compared using the χ2 test or Fisher's exact test. 
The associations between baseline characteristics and FASTroke app use and the time spent on management, including brain CT, IVT, and EVT, were first analyzed using univariate logistic regression. As recommended by the AHA/ASA guidelines, the door-to-CT time was dichotomized at 20 minutes and the door-to-IVT time at 60 minutes; the door-to-EVT time was dichotomized at 90 minutes, close to the median value in this study.1 To identify factors affecting the performance of CT scans and reperfusion treatment within the target times, the following variables were adjusted for in a multivariable logistic regression analysis: age, sex, previous disease, admission time, NIHSS at admission, and FASTroke use. The results are reported as odds ratios (ORs) and 95% confidence intervals (CIs). FASTroke prenotification was categorized by whether prenotification was employed, with no FASTroke prenotification analyzed as the reference. All statistical analyses were performed using IBM SPSS Statistics version 25.0 for Windows (IBM Corp., Armonk, NY, USA).", "The research protocol was approved by the Institutional Review Board of Kyungpook National University Hospital (2021-06-025) and exempted from the prior consent requirement because of the study's retrospective nature.", "During the study period, 563 patients with acute ischemic stroke within 6 hours of FAT were transported by 119 ambulances to the 5 participating hospitals. Of these, the hospitals were prenotified by FASTroke for 200 (35.5%) patients, of whom 93 (46.5%) were preregistered on the way to the hospital (Fig. 1).\nThe mean age of the study patients was 72 years, and 321 (57.0%) were male. FASTroke use did not differ significantly by previous illness; however, the FASTroke prenotification group had fewer patients with a history of stroke than the no FASTroke prenotification group (22.5% vs. 32.8%, P = 0.006). FASTroke activation did not differ between patients admitted at night and those admitted during the day (P = 0.502), or between weekday and holiday admissions (P = 0.062). The symptom-onset-to-door time was shorter in the FASTroke prenotification group for both LNT and FAT (LNT-to-door time, 110 minutes vs. 143 minutes, P = 0.001; FAT-to-door time, 61 minutes vs. 71 minutes, P = 0.039), and the transport time from scene to door was also shorter (23 minutes vs. 24 minutes, P = 0.021). The time from LNT to the 119 call was shorter in the FASTroke prenotification group (82 minutes vs. 111 minutes, P = 0.001), but no significant difference was observed for FAT. The FASTroke prenotification group had a higher mean NIHSS score at admission (8 vs. 4, P < 0.001) and a larger improvement (difference in scores between admission and discharge) than the no FASTroke prenotification group (2 vs. 0, P < 0.001). In terms of reperfusion therapy, the rates of IVT alone and combined IVT plus EVT were higher in the FASTroke prenotification group (IVT, 23.0% vs. 13.2%, P = 0.002; combined IVT plus EVT, 18.5% vs. 7.2%, P < 0.001). However, the rate of EVT alone did not differ significantly (15.5% vs. 13.8%, P = 0.330) (Table 1, Fig. 2).
Values are presented as number (%) or number (range).\nAfib = atrial fibrillation, EMS = emergency medical services, LNT = last normal time, FAT = first abnormal time, GCS = Glasgow Coma Scale, NIHSS = National Institutes of Health Stroke Scale, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy.\naDifference in NIHSS score between admission and discharge.\nIVT = intravenous thrombolysis, EVT = endovascular thrombectomy.\nIn the hospital, the FASTroke prenotification group had shorter door-to-CT (19 minutes vs. 25 minutes, P < 0.001) and door-to-magnetic-resonance-imaging (62 minutes vs. 80 minutes, P = 0.011) scan times, and shorter CT-to-IVT (17 minutes vs. 28 minutes, P = 0.002) and CT-to-EVT (66 minutes vs. 97 minutes, P < 0.001) times. Accordingly, door-to-IVT (37 minutes vs. 48 minutes, P < 0.001) and door-to-EVT (82 minutes vs. 119 minutes, P < 0.001) times were faster with FASTroke prenotification. Within the FASTroke prenotification group, preregistered patients had shorter door-to-CT times (17 minutes vs. 20 minutes, P = 0.007) and door-to-reperfusion times (IVT, 34 minutes vs. 41 minutes, P = 0.030; EVT, 73 minutes vs. 90 minutes, P = 0.042) than nonregistered patients (Table 2).\nValues are presented as median times (range; unit: minutes).\nCT = computed tomography, MRI = magnetic resonance imaging, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy.\nTable 3 shows the results of the multivariable logistic regression analysis after dividing the patients with acute ischemic stroke into groups by target time for CT, IVT, and EVT (CT, 20 minutes; IVT, 60 minutes; EVT, 90 minutes). FASTroke without preregistration (adjusted OR, 1.88; 95% CI, 1.19–2.96; P = 0.006), FASTroke with preregistration (adjusted OR, 3.35; 95% CI, 2.04–5.53; P < 0.001), previous history of stroke (adjusted OR, 0.58; 95% CI, 0.39–0.86; P = 0.007), and NIHSS at admission (adjusted OR, 1.03; 95% CI, 1.00–1.06; P = 0.016) were independent factors affecting the time to brain CT scan. The factors affecting the door-to-IVT time were FASTroke with preregistration (adjusted OR, 5.56; 95% CI, 1.29–23.93; P = 0.021), previous history of stroke (adjusted OR, 0.23; 95% CI, 0.08–0.73; P = 0.012), and anticoagulation medication (adjusted OR, 0.07; 95% CI, 0.01–0.55; P = 0.012). The factors affecting the door-to-EVT time were FASTroke with preregistration (adjusted OR, 6.73; 95% CI, 2.53–17.87; P < 0.001) and NIHSS score at admission (adjusted OR, 1.12; 95% CI, 1.05–1.19; P < 0.001).\nAdjusted variables: age, sex, hypertension, diabetes, dyslipidemia, atrial fibrillation/flutter, coronary artery disease, cerebrovascular event, anticoagulation medication, visit time, visit day, NIHSS at visit, FASTroke use.\nOR = odds ratio, CI = confidence interval, CT = computed tomography, NIHSS = National Institutes of Health Stroke Scale, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy.", "In this study, EMS prenotification of patients with acute ischemic stroke using the FASTroke app produced major time savings in patient management: door-to-CT times decreased by 6 minutes, door-to-IVT times by 11 minutes, and door-to-EVT times by 37 minutes.
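To make the target-time grouping behind Table 3 concrete, the sketch below derives the three binary outcomes (door-to-CT ≤ 20 minutes, door-to-IVT ≤ 60 minutes, door-to-EVT ≤ 90 minutes) from per-patient timestamps. The DataFrame and column names are illustrative assumptions, not the study's actual variables.

```python
import pandas as pd

# Hypothetical per-patient timestamps; column names are assumptions.
df = pd.DataFrame({
    "door": pd.to_datetime(["2020-03-01 10:00", "2020-03-02 21:15"]),
    "ct":   pd.to_datetime(["2020-03-01 10:18", "2020-03-02 21:40"]),
    "ivt":  pd.to_datetime(["2020-03-01 10:55", pd.NaT]),
    "evt":  pd.to_datetime(["2020-03-01 11:20", pd.NaT]),
})

def minutes(later: pd.Series, earlier: pd.Series) -> pd.Series:
    """Elapsed minutes between two timestamp columns."""
    return (later - earlier).dt.total_seconds() / 60

# AHA/ASA-style target-time indicators used as regression outcomes.
df["ct_within_20"]  = minutes(df["ct"],  df["door"]) <= 20
df["ivt_within_60"] = minutes(df["ivt"], df["door"]) <= 60
df["evt_within_90"] = minutes(df["evt"], df["door"]) <= 90
```

Note that in this toy setup a missing IVT or EVT timestamp evaluates to False; in the actual analysis, patients who did not receive a given therapy would be excluded from that model rather than counted as over-target.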
There were additional time savings in the in-hospital management of patients with acute ischemic stroke when the hospitals also performed preregistration.\nAcute ischemic stroke is highly time-sensitive, with neurons dying at a rate of 1.9 million/min, so prompt management is crucial.10 Numerous attempts have been made to reduce delays in treating acute ischemic stroke, including the Helsinki model, which led to improved patient outcomes.1112 The time to CT scan, the time to treatment decision, and the time to injection of a thrombolytic drug all improved in patients with acute ischemic stroke prenotified by EMS.1314 A previous report described a 5-minute reduction in the door-to-CT time and a 2-minute reduction in the door-to-IVT time as a result of prenotification.6 Our study showed a markedly larger reduction of 8 minutes in the door-to-CT time, 14 minutes in the door-to-IVT time, and 46 minutes in the door-to-EVT time for the FASTroke preregistered group compared with no FASTroke prenotification.\nStudies have recently examined remotely checking the clinical findings, data, and scans of patients through smartphones to provide better treatment for patients with acute ischemic stroke.151617 Despite the limitation of being small-scale research conducted in a single hospital, a recent study showed a 40-minute reduction in door-to-needle times using a mobile app in the prehospital setting.18 Consistent with prior studies, these results were confirmed in our study in a metropolitan setting: when the FASTroke app was activated, the percentage of patients who received reperfusion treatment was 9.8% higher for IVT alone and 11.3% higher for IVT plus EVT, and the median door-to-IVT time was 11 minutes shorter than for patients without app activation. Further time savings were achieved through hospital preregistration.\nThere are a number of limitations to using non-smart cell phones for prenotification in the prehospital stage. First, it is difficult to check a nearby hospital's capacity to accommodate patients with acute ischemic stroke in real time with older phones. With cell phones, communication is only possible one-on-one, and EMS may have difficulty determining in advance whether a hospital's medical staff are busy or unable to answer the phone. With one-on-one calls, the hospital's medical staff must contact all stroke team members individually after receiving the EMS prenotification. By using the FASTroke prenotification app, EMS can immediately identify the hospitals that can accommodate patients with acute ischemic stroke, and once the code has been activated, all stroke team staff at the receiving hospital are simultaneously informed by smartphone notifications. Hospitals that receive FASTroke prenotification can respond with a variety of procedures to manage patients more quickly. Special beds for patients with acute ischemic stroke can be prepared before the initial examination. Nurses can be ready to obtain vascular access at the entrance, and tissue plasminogen activator can be prepared for administration the moment the patient arrives. Radiologists can prepare CT rooms for rapid imaging, and neurologists can accompany patients through all tests until the diagnosis.
For reducing treatment time, EMS prenotification is essential, as is having the various hospital medical staff, such as radiologists, neurologists, and emergency physicians, standing by for the patient's arrival.19\nPreregistration shortens the reception time after hospital arrival. Hospital registration numbers can be prepared in advance when the EMS enters the patient's name and resident registration number into the FASTroke app. The stroke team doctor who receives the FASTroke preregistration notification can order the necessary examinations before the patient reaches the hospital. The time required to register for tests is thereby shortened; preregistered patients are flagged with unique markings on the electronic medical patient list and are recognized by all medical staff in the emergency room, enabling faster management.\nSeveral measures were taken to make FASTroke practical. It is not easy for the EMS to predict acute ischemic stroke from the patient's symptoms alone at the prehospital stage, so patients with altered consciousness who could not clearly express their symptoms were excluded from the activation indications. Although this exclusion criterion may miss some patients with acute ischemic stroke, activating the FASTroke system for patients with altered consciousness or vague neurological symptoms could produce many false positives and exhaust the medical staff. In addition, ruling out hypoglycemia, a stroke mimic that can be excluded immediately at the rescue scene, was made a prerequisite for FASTroke activation (blood glucose level of at least 60 mg/dL) to further reduce false positives. A total of 512 patients were FASTroke-activated during the study; of these, 312 were not enrolled. Among the 312 not-enrolled patients, however, 58 had an ischemic stroke with a FAT over 6 hours, 126 had a hemorrhagic stroke, and 12 had a transient ischemic attack. In other words, 116 patients (22.7%) had FASTroke alarms due to non-stroke diseases, a prediction performance that appears no different from a previously used prehospital stroke scale.20\nAcute ischemic stroke can occur anywhere, and patients must be treated quickly and safely. However, communication between EMS and hospitals is limited, and it is not easy to know in real time which hospitals can provide treatment. From the perspective of a regionalized emergency medical system, therefore, communication and cooperation between the prehospital EMS and hospital medical staff are essential.21 The FASTroke app allowed confirmation of acceptance for 24-hour acute hospital care and allowed conversations about the patient's state between EMS and hospital medical staff. As a result, an effective regional collaboration system was constructed through the FASTroke app. To the best of our knowledge, there has been no previous study of region-wide prehospital prenotification and communication through an app, so these results are considered very meaningful.\nA variety of factors affect the treatment of patients with acute ischemic stroke, including age, sex, underlying diseases, initial NIHSS scores, area of residence, transportation method, and time of onset.3422 In this study, higher NIHSS scores at admission were associated with a greater likelihood of the CT scan being performed within 20 minutes, whereas patients with a history of stroke were less likely to undergo a CT scan within 20 minutes. The higher the NIHSS score at admission, the shorter the door-to-EVT times.
IVT within 60 minutes was less frequent in patients with underlying conditions such as a previous stroke or anticoagulant medication. NIHSS is a critical factor in the prognosis of patients with acute ischemic stroke,2324 and higher scores indicate more prominent neurological symptoms, making it easier for medical staff to recognize acute ischemic stroke and activate FASTroke for faster treatment. In contrast, patients with previous underlying disease, especially prior stroke, are difficult to distinguish from those with pre-existing neurological symptoms, so predicting acute ischemic stroke in them is not easy. We assume that these patients had infrequent FASTroke activation and relatively slow management.\nThis study had several limitations. The first was its retrospective nature, with the biases inherent in retrospective data collection, analysis, and interpretation. Second, the use of the FASTroke app and preregistration was not applied consistently: although there were indications for FASTroke activation, the app was used according to the dispatched EMS personnel's prediction of acute ischemic stroke and their reporting preference, and preregistration was applied differently depending on the circumstances of the hospital and its medical staff. Third, given that the research period included the coronavirus disease 2019 pandemic, EMS and hospital treatment might have been restricted; the effect of the pandemic on this study could not be analyzed separately, although we confirmed in a previous study that the in-hospital examination and treatment of patients with acute ischemic stroke were not greatly affected during the pandemic.25 Fourth, due to its short follow-up period, this study could not determine the patients' long-term outcomes. Fifth, although reperfusion therapy was performed according to current guidelines, neurologists' decisions on reperfusion may have been influenced by multiple factors such as age, previous illness history, and performance status. These limitations will be fully considered and addressed in future research.\nIn conclusion, the FASTroke app is a useful tool for building a prenotification-based stroke care system in a metropolitan area. With FASTroke prenotification, the transport time of patients with acute ischemic stroke was shortened and the proportion receiving reperfusion treatment increased. The FASTroke app helped reduce management times such as door-to-brain-CT, door-to-IVT, and door-to-EVT, and it was most effective when the receiving hospital also performed preregistration." ]
[ "intro", "methods", null, null, null, null, null, null, "results", "discussion" ]
[ "Stroke", "Thrombolytic Therapy", "Emergency Medical Services" ]
INTRODUCTION: Acute ischemic stroke is a highly time-sensitive disease that requires treatment as quickly as possible. Thrombolysis, which includes intravenous thrombolysis (IVT) and endovascular thrombectomy (EVT), is a crucial treatment for acute ischemic stroke.1 Thrombolysis within the early therapeutic window results in better neurological outcomes and reduced mortality.23 Before thrombolysis can be applied, however, the brain must be imaged to rule out intracerebral hemorrhage; recent guidelines recommend brain computed tomography (CT) within 20 minutes of hospital arrival.14 Major efforts have been made to treat acute ischemic stroke more rapidly, and improvements in emergency medical services (EMS) at the prehospital stage play a significant role. Recent treatment guidelines from the American Heart Association/American Stroke Association (AHA/ASA) recommend that EMS issue prehospital notifications before patients with acute ischemic stroke arrive at the appropriate hospitals.5 EMS prehospital notifications can reduce door-to-imaging and door-to-IVT times for patients with acute ischemic stroke and increase the number of patients eligible for thrombolysis, thereby leading to positive outcomes.6 Telephones have been widely used for prenotification, with EMS also confirming whether patients with acute ischemic stroke can be accepted and treated at the contacted hospitals.7 Recently, the use of smartphones has surpassed that of cellular phones. An appropriate mobile app could therefore be a useful means of communicating acute ischemic stroke information between out-of-hospital and in-hospital medical teams, since it can simultaneously deliver the necessary information to multiple medical staff and hospitals. In acute stroke treatment, which requires multidisciplinary management within a limited time, prenotification through a mobile app can play a major role in reducing treatment time. We developed FASTroke, a mobile app whose major roles are the identification of suspected stroke patients by EMS personnel, prenotification of nearby treating hospitals, and preregistration of patient data to facilitate the intrahospital delivery system. The app was implemented in Daegu, a city in Korea with a population of approximately 2.5 million people. We hypothesized that the prehospital notification system in this app could reduce door-to-CT, door-to-IVT, and door-to-EVT times for patients with acute ischemic stroke. METHODS: Study design and participants This study was a retrospective, observational study conducted in Daegu, the fourth largest metropolitan city in South Korea, from February 2020 to January 2021. Although 2 regional and 4 local emergency centers operate in Daegu, 5 major hospitals (Kyungpook National University Hospital, Yeungnam University Medical Center, Keimyung University Dongsan Hospital, Daegu Catholic University Medical Center, and Daegu Fatima Hospital) participated in the FASTroke project; the remaining local emergency center was excluded because it could not perform EVT 24 hours a day, 365 days a year. All 48 fire safety centers in Daegu, together with their 119 ambulances, participated in the study. We included adult patients with acute ischemic stroke (aged ≥ 18 years) whose first abnormal symptoms (as perceived by the patient or a witness) had appeared within 6 hours of being treated by EMS and who were transported to the emergency department by ambulance.
The study excluded patients who were candidates for reperfusion therapy but declined treatment. Patients who could not complete a prehospital stroke screening test because of altered mental status were also excluded. To compare management time delays and clinical outcomes, the patients were categorized into FASTroke prenotification and no FASTroke prenotification groups according to FASTroke app use. FASTroke-activated patients were further subgrouped by whether they were preregistered before arriving at the hospital. Mobile application In December 2019, the Daegu Emergency Medicine Collaboration Committee (DEMCC) developed the FASTroke app as an identification, prenotification, and preregistration system for suspected stroke patients, designed to use the regional stroke care system efficiently and thereby improve the quality of acute stroke management. The app was introduced to all major regional emergency departments and fire departments throughout Daegu City. For the FASTroke project as a regional acute stroke care system, the DEMCC organized a FASTroke team of specialist physicians in emergency medicine, neurology, and neurosurgery involved in treating patients with acute ischemic stroke. The FASTroke app runs on iOS and Android and is free to download; anyone may apply for membership, but DEMCC administrator approval is required for full membership. The smartphones used by the EMS, hospital stroke teams, and emergency department staff were identified and registered individually by the DEMCC. EMS could activate FASTroke at any time for a patient with suspected acute ischemic stroke being transferred to the nearest acceptable hospital, provided the first abnormal time (FAT, defined as the time elapsed since the first neurological abnormalities were detected) was within 6 hours and the blood glucose level was at least 60 mg/dL.
If these criteria are met, FASTroke can be activated when at least one of the core symptoms (facial droop, unilateral limb weakness, or dysarthria) is present.8 Next, the patient's name and birth date are entered for hospital preregistration, along with the hospital to which the patient is to be transferred. After the EMS sends the information, the hospital's medical staff (stroke team and emergency department) receive notifications on their smartphones to prepare for the patient with acute ischemic stroke and to preregister the patient at the hospital. EMS can also enter the symptom onset time, blood pressure, previous diseases, and medication history; this information is shared with the hospital's medical staff. If a hospital cannot accommodate patients with acute ischemic stroke because it cannot provide reperfusion therapy, the hospital's stroke team administrator can register the hospital's nonavailability through the app at any time, preventing the EMS from selecting that hospital via FASTroke.
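The activation rule described above is a simple conjunction of a time window, a glucose floor, and a symptom check. The sketch below encodes it in Python; the class and field names are illustrative assumptions, not the FASTroke app's actual data model.

```python
from dataclasses import dataclass

@dataclass
class PrehospitalAssessment:
    # Field names are illustrative assumptions, not FASTroke's actual schema.
    fat_minutes: float    # minutes since the first abnormal time (FAT)
    glucose_mg_dl: float  # capillary blood glucose at the scene
    facial_droop: bool
    unilateral_weakness: bool
    dysarthria: bool

def can_activate_fastroke(a: PrehospitalAssessment) -> bool:
    """Activation criteria as described in the text: FAT within 6 hours,
    glucose at least 60 mg/dL, and at least one core stroke symptom."""
    within_window = a.fat_minutes <= 6 * 60
    glucose_ok = a.glucose_mg_dl >= 60
    has_core_symptom = a.facial_droop or a.unilateral_weakness or a.dysarthria
    return within_window and glucose_ok and has_core_symptom

# Example: weakness noted 90 minutes ago, glucose 110 mg/dL -> eligible.
assert can_activate_fastroke(PrehospitalAssessment(90, 110, False, True, False))
```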
Outcome measurement After the patient's arrival at the hospital, the following times were measured for examination and treatment: door-to-CT, door-to-IVT, and door-to-EVT. The door time was defined as the time of reception at the emergency department's entrance, the CT scan time as the time the first CT image was taken, the IVT time as the time tissue plasminogen activator was injected, and the EVT time as the time the catheter puncture procedure was started. For the neurological evaluation, the National Institutes of Health Stroke Scale (NIHSS) was measured at admission and discharge, and stroke severity was classified as minor (1–4 points), moderate (5–15 points), moderate to severe (16–20 points), or severe (21–42 points) according to the NIHSS score.9 To evaluate neurological improvement, the difference in NIHSS scores between admission and discharge was calculated. Data collection Data of patients with International Classification of Diseases, 10th Revision, Clinical Modification diagnostic codes I60–I64 were collected from the electronic medical records (EMR) of the five hospitals. The diagnostic code was entered on the basis of a final diagnosis by an on-duty neurology specialist, based on physical examinations and medical imaging results. Only patients transported by 119 ambulance who arrived at the hospital within a FAT of 6 hours were selected. Information on the selected patients was collected from the EMR of the five hospitals and the 119 run sheets of Daegu. We retrieved the patients' age, sex, past disease, smoking status, hospital admission time, mental status score, neurological examination, and treatment from their EMR. Arrival at the hospital (the door time) divides the prehospital stage from the hospital stage. The last normal time (LNT) is the time at which the patient was last known to be normal. The time taken to arrive at the hospital door was recorded using both FAT and LNT. The transport time, defined as the period between the ambulance's arrival at the scene and its arrival at the hospital door, was determined from the EMS run sheet.
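As a small illustration of the NIHSS severity bands and the improvement score defined in the Outcome measurement section, here is a sketch in Python; the function names are invented for this example. Note that a score of 0 (no measurable deficit) falls outside the published 1–42 bands, so it is labeled explicitly here as an assumption.

```python
def nihss_severity(score: int) -> str:
    """Severity bands used in the study: minor (1-4), moderate (5-15),
    moderate to severe (16-20), severe (21-42)."""
    if not 0 <= score <= 42:
        raise ValueError("NIHSS scores range from 0 to 42")
    if score == 0:
        return "no deficit"  # outside the published bands
    if score <= 4:
        return "minor"
    if score <= 15:
        return "moderate"
    if score <= 20:
        return "moderate to severe"
    return "severe"

def nihss_improvement(admission: int, discharge: int) -> int:
    """Neurological improvement as defined above: admission minus discharge,
    so larger positive values mean greater improvement."""
    return admission - discharge
```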
Statistical analysis Continuous variables are reported as medians and interquartile ranges and were compared using the Mann-Whitney U test or Student's t-test according to whether their distribution was normal. Categorical variables are reported as numbers and percentages and were compared using the χ2 test or Fisher's exact test. The associations between baseline characteristics and FASTroke app use and the time spent on management, including brain CT, IVT, and EVT, were first analyzed using univariate logistic regression. As recommended by the AHA/ASA guidelines, the door-to-CT time was dichotomized at 20 minutes and the door-to-IVT time at 60 minutes; the door-to-EVT time was dichotomized at 90 minutes, close to the median value in this study.1 To identify factors affecting the performance of CT scans and reperfusion treatment within the target times, the following variables were adjusted for in a multivariable logistic regression analysis: age, sex, previous disease, admission time, NIHSS at admission, and FASTroke use. The results are reported as odds ratios (ORs) and 95% confidence intervals (CIs). FASTroke prenotification was categorized by whether prenotification was employed, with no FASTroke prenotification analyzed as the reference. All statistical analyses were performed using IBM SPSS Statistics version 25.0 for Windows (IBM Corp., Armonk, NY, USA).
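For readers who want to reproduce this kind of adjusted analysis outside SPSS, the sketch below fits a multivariable logistic regression with statsmodels and converts coefficients to odds ratios with 95% CIs. The DataFrame `df`, the covariate names, and the binary target (e.g., `ct_within_20`) are assumptions for illustration; the no-prenotification group is the implicit reference because both FASTroke indicators are 0 for it.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Assumed covariate columns, mirroring the adjusted variables listed above.
COVARIATES = ["age", "male", "hypertension", "diabetes", "prior_stroke",
              "anticoagulation", "night_admission", "nihss_admission",
              "fastroke_no_prereg", "fastroke_prereg"]

def adjusted_odds_ratios(df: pd.DataFrame, target: str) -> pd.DataFrame:
    """Fit a multivariable logistic regression of `target` (0/1) on the
    covariates and return ORs with 95% confidence intervals and P values."""
    X = sm.add_constant(df[COVARIATES].astype(float))
    fit = sm.Logit(df[target].astype(int), X).fit(disp=False)
    ci = fit.conf_int()  # log-odds scale; columns 0 (lower) and 1 (upper)
    return pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
        "P": fit.pvalues,
    }).drop(index="const")

# Usage: adjusted_odds_ratios(df, "ct_within_20") for the door-to-CT model.
```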
Ethics statement The research protocol was approved by the Institutional Review Board of Kyungpook National University Hospital (2021-06-025) and exempted from the prior consent requirement because of the study's retrospective nature.
RESULTS: During the study period, 563 patients with acute ischemic stroke within 6 hours of FAT were transported by 119 ambulances to the 5 participating hospitals. Of these, the hospitals were prenotified by FASTroke for 200 (35.5%) patients, of whom 93 (46.5%) were preregistered on the way to the hospital (Fig. 1). The mean age of the study patients was 72 years, and 321 (57.0%) were male. FASTroke use did not differ significantly by previous illness; however, the FASTroke prenotification group had fewer patients with a history of stroke than the no FASTroke prenotification group (22.5% vs. 32.8%, P = 0.006). FASTroke activation did not differ between patients admitted at night and those admitted during the day (P = 0.502), or between weekday and holiday admissions (P = 0.062). The symptom-onset-to-door time was shorter in the FASTroke prenotification group for both LNT and FAT (LNT-to-door time, 110 minutes vs. 143 minutes, P = 0.001; FAT-to-door time, 61 minutes vs. 71 minutes, P = 0.039), and the transport time from scene to door was also shorter (23 minutes vs. 24 minutes, P = 0.021). The time from LNT to the 119 call was shorter in the FASTroke prenotification group (82 minutes vs. 111 minutes, P = 0.001), but no significant difference was observed for FAT. The FASTroke prenotification group had a higher mean NIHSS score at admission (8 vs. 4, P < 0.001) and a larger improvement (difference in scores between admission and discharge) than the no FASTroke prenotification group (2 vs. 0, P < 0.001). In terms of reperfusion therapy, the rates of IVT alone and combined IVT plus EVT were higher in the FASTroke prenotification group (IVT, 23.0% vs. 13.2%, P = 0.002; combined IVT plus EVT, 18.5% vs. 7.2%, P < 0.001). However, the rate of EVT alone did not differ significantly (15.5% vs. 13.8%, P = 0.330) (Table 1, Fig. 2). Values are presented as number (%) or number (range).
Afib = atrial fibrillation, EMS = emergency medical services, LNT = last normal time, FAT = first abnormal time, GCS = Glasgow Coma Scale, NIHSS = National Institutes of Health Stroke Scale, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy. aDifference in NIHSS score between admission and discharge. IVT = intravenous thrombolysis, EVT = endovascular thrombectomy. In the hospital, the FASTroke prenotification group had shorter door-to-CT (19 minutes vs. 25 minutes, P < 0.001) and door-to-magnetic-resonance-imaging (62 minutes vs. 80 minutes, P = 0.011) scan times, and shorter CT-to-IVT (17 minutes vs. 28 minutes, P = 0.002) and CT-to-EVT (66 minutes vs. 97 minutes, P < 0.001) times. Accordingly, door-to-IVT (37 minutes vs. 48 minutes, P < 0.001) and door-to-EVT (82 minutes vs. 119 minutes, P < 0.001) times were faster with FASTroke prenotification. Within the FASTroke prenotification group, preregistered patients had shorter door-to-CT times (17 minutes vs. 20 minutes, P = 0.007) and door-to-reperfusion times (IVT, 34 minutes vs. 41 minutes, P = 0.030; EVT, 73 minutes vs. 90 minutes, P = 0.042) than nonregistered patients (Table 2). Values are presented as median times (range; unit: minutes). CT = computed tomography, MRI = magnetic resonance imaging, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy. Table 3 shows the results of the multivariable logistic regression analysis after dividing the patients with acute ischemic stroke into groups by target time for CT, IVT, and EVT (CT, 20 minutes; IVT, 60 minutes; EVT, 90 minutes). FASTroke without preregistration (adjusted OR, 1.88; 95% CI, 1.19–2.96; P = 0.006), FASTroke with preregistration (adjusted OR, 3.35; 95% CI, 2.04–5.53; P < 0.001), previous history of stroke (adjusted OR, 0.58; 95% CI, 0.39–0.86; P = 0.007), and NIHSS at admission (adjusted OR, 1.03; 95% CI, 1.00–1.06; P = 0.016) were independent factors affecting the time to brain CT scan. The factors affecting the door-to-IVT time were FASTroke with preregistration (adjusted OR, 5.56; 95% CI, 1.29–23.93; P = 0.021), previous history of stroke (adjusted OR, 0.23; 95% CI, 0.08–0.73; P = 0.012), and anticoagulation medication (adjusted OR, 0.07; 95% CI, 0.01–0.55; P = 0.012). The factors affecting the door-to-EVT time were FASTroke with preregistration (adjusted OR, 6.73; 95% CI, 2.53–17.87; P < 0.001) and NIHSS score at admission (adjusted OR, 1.12; 95% CI, 1.05–1.19; P < 0.001). Adjusted variables: age, sex, hypertension, diabetes, dyslipidemia, atrial fibrillation/flutter, coronary artery disease, cerebrovascular event, anticoagulation medication, visit time, visit day, NIHSS at visit, FASTroke use. OR = odds ratio, CI = confidence interval, CT = computed tomography, NIHSS = National Institutes of Health Stroke Scale, IVT = intravenous thrombolysis, EVT = endovascular thrombectomy. DISCUSSION: In this study, EMS prenotification of patients with acute ischemic stroke using the FASTroke app produced major time savings in patient management: door-to-CT times decreased by 6 minutes, door-to-IVT times by 11 minutes, and door-to-EVT times by 37 minutes. There were additional time savings in the in-hospital management of patients with acute ischemic stroke when the hospitals also performed preregistration.
Acute ischemic stroke is highly time-sensitive, with neurons dying at a rate of 1.9 million/min, so prompt management is crucial.10 Numerous attempts have been made to reduce delays in treating acute ischemic stroke, including the Helsinki model, which led to improved patient outcomes.1112 The time to CT scan, the time to treatment decision, and the time to injection of a thrombolytic drug all improved in patients with acute ischemic stroke prenotified by EMS.1314 A previous report described a 5-minute reduction in the door-to-CT time and a 2-minute reduction in the door-to-IVT time as a result of prenotification.6 Our study showed a markedly larger reduction of 8 minutes in the door-to-CT time, 14 minutes in the door-to-IVT time, and 46 minutes in the door-to-EVT time for the FASTroke preregistered group compared with no FASTroke prenotification. Studies have recently examined remotely checking the clinical findings, data, and scans of patients through smartphones to provide better treatment for patients with acute ischemic stroke.151617 Despite the limitation of being small-scale research conducted in a single hospital, a recent study showed a 40-minute reduction in door-to-needle times using a mobile app in the prehospital setting.18 Consistent with prior studies, these results were confirmed in our study in a metropolitan setting: when the FASTroke app was activated, the percentage of patients who received reperfusion treatment was 9.8% higher for IVT alone and 11.3% higher for IVT plus EVT, and the median door-to-IVT time was 11 minutes shorter than for patients without app activation. Further time savings were achieved through hospital preregistration. There are a number of limitations to using non-smart cell phones for prenotification in the prehospital stage. First, it is difficult to check a nearby hospital's capacity to accommodate patients with acute ischemic stroke in real time with older phones. With cell phones, communication is only possible one-on-one, and EMS may have difficulty determining in advance whether a hospital's medical staff are busy or unable to answer the phone. With one-on-one calls, the hospital's medical staff must contact all stroke team members individually after receiving the EMS prenotification. By using the FASTroke prenotification app, EMS can immediately identify the hospitals that can accommodate patients with acute ischemic stroke, and once the code has been activated, all stroke team staff at the receiving hospital are simultaneously informed by smartphone notifications. Hospitals that receive FASTroke prenotification can respond with a variety of procedures to manage patients more quickly. Special beds for patients with acute ischemic stroke can be prepared before the initial examination. Nurses can be ready to obtain vascular access at the entrance, and tissue plasminogen activator can be prepared for administration the moment the patient arrives. Radiologists can prepare CT rooms for rapid imaging, and neurologists can accompany patients through all tests until the diagnosis. For reducing treatment time, EMS prenotification is essential, as is having the various hospital medical staff, such as radiologists, neurologists, and emergency physicians, standing by for the patient's arrival.19 Preregistration shortens the reception time after hospital arrival.
Hospital registration numbers can be prepared in advance when the EMS enters the patient's name and resident registration number into the FASTroke app. The stroke team doctor who receives the FASTroke preregistration notification can order the necessary examinations before the patient reaches the hospital. The time required to register for tests is thereby shortened; preregistered patients are flagged with unique markings on the electronic medical patient list and are recognized by all medical staff in the emergency room, enabling faster management. Several measures were taken to make FASTroke practical. It is not easy for the EMS to predict acute ischemic stroke from the patient's symptoms alone at the prehospital stage, so patients with altered consciousness who could not clearly express their symptoms were excluded from the activation indications. Although this exclusion criterion may miss some patients with acute ischemic stroke, activating the FASTroke system for patients with altered consciousness or vague neurological symptoms could produce many false positives and exhaust the medical staff. In addition, ruling out hypoglycemia, a stroke mimic that can be excluded immediately at the rescue scene, was made a prerequisite for FASTroke activation (blood glucose level of at least 60 mg/dL) to further reduce false positives. A total of 512 patients were FASTroke-activated during the study; of these, 312 were not enrolled. Among the 312 not-enrolled patients, however, 58 had an ischemic stroke with a FAT over 6 hours, 126 had a hemorrhagic stroke, and 12 had a transient ischemic attack. In other words, 116 patients (22.7%) had FASTroke alarms due to non-stroke diseases, a prediction performance that appears no different from a previously used prehospital stroke scale.20 Acute ischemic stroke can occur anywhere, and patients must be treated quickly and safely. However, communication between EMS and hospitals is limited, and it is not easy to know in real time which hospitals can provide treatment. From the perspective of a regionalized emergency medical system, therefore, communication and cooperation between the prehospital EMS and hospital medical staff are essential.21 The FASTroke app allowed confirmation of acceptance for 24-hour acute hospital care and allowed conversations about the patient's state between EMS and hospital medical staff. As a result, an effective regional collaboration system was constructed through the FASTroke app. To the best of our knowledge, there has been no previous study of region-wide prehospital prenotification and communication through an app, so these results are considered very meaningful. A variety of factors affect the treatment of patients with acute ischemic stroke, including age, sex, underlying diseases, initial NIHSS scores, area of residence, transportation method, and time of onset.3422 In this study, higher NIHSS scores at admission were associated with a greater likelihood of the CT scan being performed within 20 minutes, whereas patients with a history of stroke were less likely to undergo a CT scan within 20 minutes. The higher the NIHSS score at admission, the shorter the door-to-EVT times. IVT within 60 minutes was less frequent in patients with underlying conditions such as a previous stroke or anticoagulant medication.
NIHSS is a critical factor in the prognosis of patients with acute ischemic stroke,2324 and higher scores indicate more prominent neurological symptoms, making it easier for medical staff to recognize acute ischemic stroke and activate FASTroke for faster treatment. In contrast, patients with previous underlying disease, especially prior stroke, are difficult to distinguish from those with pre-existing neurological symptoms, so predicting acute ischemic stroke in them is not easy. We assume that these patients had infrequent FASTroke activation and relatively slow management. This study had several limitations. The first was its retrospective nature, with the biases inherent in retrospective data collection, analysis, and interpretation. Second, the use of the FASTroke app and preregistration was not applied consistently: although there were indications for FASTroke activation, the app was used according to the dispatched EMS personnel's prediction of acute ischemic stroke and their reporting preference, and preregistration was applied differently depending on the circumstances of the hospital and its medical staff. Third, given that the research period included the coronavirus disease 2019 pandemic, EMS and hospital treatment might have been restricted; the effect of the pandemic on this study could not be analyzed separately, although we confirmed in a previous study that the in-hospital examination and treatment of patients with acute ischemic stroke were not greatly affected during the pandemic.25 Fourth, due to its short follow-up period, this study could not determine the patients' long-term outcomes. Fifth, although reperfusion therapy was performed according to current guidelines, neurologists' decisions on reperfusion may have been influenced by multiple factors such as age, previous illness history, and performance status. These limitations will be fully considered and addressed in future research. In conclusion, the FASTroke app is a useful tool for building a prenotification-based stroke care system in a metropolitan area. With FASTroke prenotification, the transport time of patients with acute ischemic stroke was shortened and the proportion receiving reperfusion treatment increased. The FASTroke app helped reduce management times such as door-to-brain-CT, door-to-IVT, and door-to-EVT, and it was most effective when the receiving hospital also performed preregistration.
Background: Acute ischemic stroke is a time-sensitive disease. Emergency medical service (EMS) prehospital notification of potential stroke patients could play an important role in improving the in-hospital medical response and the timely treatment of patients with acute ischemic stroke. We analyzed the effects of FASTroke, a mobile app that EMS can use to notify hospitals of patients with suspected acute ischemic stroke at the prehospital stage. Methods: We conducted a retrospective observational study of patients diagnosed with acute ischemic stroke at 5 major hospitals in metropolitan Daegu City, Korea, from February 2020 to January 2021. Clinical conditions and management times were compared according to whether the EMS employed the FASTroke app, and the FASTroke group was further divided into subgroups according to whether the receiving hospital performed preregistration. Results: Of the 563 patients diagnosed with acute ischemic stroke, FASTroke was activated for 200; of these, 93 were preregistered. The FASTroke prenotification group showed faster door-to-computed-tomography times (19 minutes vs. 25 minutes, P < 0.001), faster door-to-intravenous-thrombolysis times (37 minutes vs. 48 minutes, P < 0.001), and faster door-to-endovascular-thrombectomy times (82 minutes vs. 119 minutes, P < 0.001). These times were further shortened when preregistration was conducted simultaneously by the receiving hospital. Conclusions: The FASTroke app is an easy and useful prenotification tool for a regional stroke care system in a metropolitan area, reducing transport and acute ischemic stroke management times and increasing reperfusion treatment. The effect was more significant when preregistration was performed jointly.
null
null
7,389
320
[ 250, 400, 173, 241, 267, 36 ]
10
[ "stroke", "time", "hospital", "patients", "fastroke", "door", "acute", "ischemic", "minutes", "ischemic stroke" ]
[ "prehospital stroke", "ischemic stroke hours", "acute stroke treatment", "ems hospital stroke", "stroke patients ems" ]
null
null
[CONTENT] Stroke | Thrombolytic Therapy | Emergency Medical Services [SUMMARY]
[CONTENT] Stroke | Thrombolytic Therapy | Emergency Medical Services [SUMMARY]
[CONTENT] Stroke | Thrombolytic Therapy | Emergency Medical Services [SUMMARY]
null
[CONTENT] Stroke | Thrombolytic Therapy | Emergency Medical Services [SUMMARY]
null
[CONTENT] Acute Disease | Aged | Aged, 80 and over | Emergency Medical Services | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Mobile Applications | Odds Ratio | Registries | Retrospective Studies | Thrombectomy | Time-to-Treatment [SUMMARY]
[CONTENT] Acute Disease | Aged | Aged, 80 and over | Emergency Medical Services | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Mobile Applications | Odds Ratio | Registries | Retrospective Studies | Thrombectomy | Time-to-Treatment [SUMMARY]
[CONTENT] Acute Disease | Aged | Aged, 80 and over | Emergency Medical Services | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Mobile Applications | Odds Ratio | Registries | Retrospective Studies | Thrombectomy | Time-to-Treatment [SUMMARY]
null
[CONTENT] Acute Disease | Aged | Aged, 80 and over | Emergency Medical Services | Female | Fibrinolytic Agents | Humans | Ischemic Stroke | Male | Middle Aged | Mobile Applications | Odds Ratio | Registries | Retrospective Studies | Thrombectomy | Time-to-Treatment [SUMMARY]
null
[CONTENT] prehospital stroke | ischemic stroke hours | acute stroke treatment | ems hospital stroke | stroke patients ems [SUMMARY]
[CONTENT] prehospital stroke | ischemic stroke hours | acute stroke treatment | ems hospital stroke | stroke patients ems [SUMMARY]
[CONTENT] prehospital stroke | ischemic stroke hours | acute stroke treatment | ems hospital stroke | stroke patients ems [SUMMARY]
null
[CONTENT] prehospital stroke | ischemic stroke hours | acute stroke treatment | ems hospital stroke | stroke patients ems [SUMMARY]
null
[CONTENT] stroke | time | hospital | patients | fastroke | door | acute | ischemic | minutes | ischemic stroke [SUMMARY]
[CONTENT] stroke | time | hospital | patients | fastroke | door | acute | ischemic | minutes | ischemic stroke [SUMMARY]
[CONTENT] stroke | time | hospital | patients | fastroke | door | acute | ischemic | minutes | ischemic stroke [SUMMARY]
null
[CONTENT] stroke | time | hospital | patients | fastroke | door | acute | ischemic | minutes | ischemic stroke [SUMMARY]
null
[CONTENT] stroke | acute | acute ischemic stroke | acute ischemic | ischemic stroke | ischemic | thrombolysis | patients | mobile app | app [SUMMARY]
[CONTENT] hospital | time | stroke | patients | fastroke | door | daegu | emergency | acute | test [SUMMARY]
[CONTENT] vs | minutes | minutes vs | 001 | ci | 95 ci | ivt | adjusted | 95 | evt [SUMMARY]
null
[CONTENT] stroke | hospital | patients | time | door | fastroke | acute | ischemic | ischemic stroke | minutes [SUMMARY]
null
[CONTENT] ||| EMS ||| FASTroke | EMS [SUMMARY]
[CONTENT] 5 | Daegu City | Korea | February 2020 to January 2021 ||| EMS | FASTroke | FASTroke [SUMMARY]
[CONTENT] 563 | FASTroke | 200 | 93 ||| FASTroke | 19 minutes | 25 minutes | P < 0.001 | 37 minutes | 48 minutes | P < 0.001 | 82 minutes | 119 minutes | P < 0.001 ||| [SUMMARY]
null
[CONTENT] ||| EMS ||| FASTroke | EMS ||| 5 | Daegu City | Korea | February 2020 to January 2021 ||| EMS | FASTroke | FASTroke ||| 563 | FASTroke | 200 | 93 ||| FASTroke | 19 minutes | 25 minutes | P < 0.001 | 37 minutes | 48 minutes | P < 0.001 | 82 minutes | 119 minutes | P < 0.001 ||| ||| FASTroke ||| [SUMMARY]
null
The use of frozen embryos and frozen sperm have complementary IVF outcomes: a retrospective analysis in couples experiencing IVF/Donor and IVF/Husband.
36258180
The cryopreservation of sperm or embryos has been an important strategy in the treatment of infertility. Recent studies have reported outcomes after IVF (in vitro fertilization) treatment with single-factor exposure to either frozen sperm or frozen embryos.
BACKGROUND
This retrospective study examined the effects of exposure to both frozen sperm and frozen embryos in couples undergoing IVF/H (in vitro fertilization using husbands' fresh sperm) or IVF/D (in vitro fertilization using donors' frozen sperm) treatment.
METHODS
The results showed that the clinical pregnancy rate (CPR), live birth rate (LBR) and low birth weight rate (LBW) increased to 63.2% (or 68.1%), 61.1% (or 66.4%) and 15.8% (or 16.2%), respectively, after frozen embryo transfer within Group IVF/H (or Group IVF/D). After the use of frozen sperm, the high-quality embryo rate (HER) increased to 52% and the baby-with-birth-defect rate (BDR) fell to 0% in subgroup D/ET compared with subgroup H/ET, whereas the fertilization rate (FER), cleavage rate (CLR), HER and multiple pregnancy rate (MUR) fell to 75%, 71%, 45% and 9.2% in subgroup D/FET compared with subgroup H/FET. Finally, our study found that cumulative frozen-gamete effects, involving both sperm and embryos, led to significant increases in the HER (p < 0.05), CPR (p < 0.001), LBR (p < 0.001) and LBW (p < 0.05) in subgroup D/FET compared with subgroup H/ET.
RESULTS
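As an illustration of how the reported between-subgroup proportions can be sanity-checked, the short sketch below back-calculates approximate counts from the published clinical pregnancy rates and subgroup sizes (39.8% of 133 H/ET couples vs. 63.2% of 440 H/FET couples, reported later in this record) and runs a chi-square test, which closely reproduces the reported statistic. This is an approximation for illustration, not a re-analysis of the raw data.

# Back-of-the-envelope check of the H/ET vs. H/FET clinical pregnancy rate
# comparison; counts are reconstructed from the published rates and sizes.
from scipy.stats import chi2_contingency

n_het, n_hfet = 133, 440
preg_het  = round(0.398 * n_het)     # ~53 clinical pregnancies
preg_hfet = round(0.632 * n_hfet)    # ~278 clinical pregnancies

table = [[preg_het,  n_het  - preg_het],
         [preg_hfet, n_hfet - preg_hfet]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.2e}")   # close to the reported 22.789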
The use of frozen embryos and frozen sperm has complementary effects on IVF outcomes. Our findings highlight distinct parental freezing effects, relevant not only to clinical studies but also to basic research on the mechanisms of cellular adaptation to cryopreservation.
CONCLUSION
[ "Pregnancy", "Female", "Male", "Humans", "Retrospective Studies", "Spouses", "Semen", "Fertilization in Vitro", "Pregnancy Rate", "Spermatozoa" ]
9578274
Introduction
To keep embryo development in step with the growth of the endometrium, the transfer of frozen embryos has been widely used in assisted reproductive technology (ART). Since the first successful report of frozen embryo transfer (FET) [1], the cryopreservation of embryos has been an important strategy in the treatment of infertility. FET strategies contribute an additional 25–50% chance of pregnancy for couples who have cryopreserved embryos [2–4]. However, FET is not free from the risk of a higher multipregnancy rate (MPR) and low birth weight rate (LBW), even though the live birth rate (LBR) of frozen-thawed embryos is usually higher than that of fresh transferred embryos [5–8]. On the other hand, when the male partner has azoospermia or sperm retrieval difficulties on the day of egg retrieval, in vitro fertilization with frozen spermatozoa is used to treat women who may also have tubal lesions or who have experienced failed cycles of artificial insemination with donor semen (AID) [9, 10]. Meanwhile, the clinical consequences of alterations in the DNA integrity of semen following cryopreservation remain a matter of debate [11, 12]. Indeed, the rate of babies without birth defects following pregnancies after IVF/D did not differ from that after IVF with fresh husband spermatozoa (97.3% vs. 97.4%) [13], but the clinical pregnancy rate (CPR) per transfer was higher with frozen donor semen than with the husbands' semen (27.6% vs. 21.9%, respectively) [13]. Above all, freezing sperm, eggs or embryos is a double-edged sword for IVF outcomes, and clinicians need to use it with caution. In current ART practice, a couple may need both frozen sperm and frozen embryos within the same treatment. There have been few studies of the effect of this double-factor freezing on IVF outcomes, including fertilization, pregnancy and childbirth; most published studies are retrospective and address either frozen embryo transfer or frozen donor spermatozoa alone. In this article, we cover the findings for both frozen embryo transfer (FET) and IVF/D. As far as ART procedures are concerned, FET and IVF/D raise several specific questions requiring answers: (i) the effects of single-factor exposure to frozen embryo transfer on pregnancy and neonatal characteristics; (ii) the effects of single-factor exposure to frozen donor semen on fertilization, pregnancy and neonatal characteristics; and (iii) the influence of the duration of sperm and embryo cryopreservation on pregnancy and newborn health.
null
null
Results
Participants and grouping

A total of 860 couples undergoing IVF-ET treatment were included in the analysis: 573 couples with husband sperm (Group IVF/H) and 287 couples with frozen donor sperm (Group IVF/D). According to whether frozen embryo transfer was performed, the IVF/H group was divided into the H/ET subgroup (133 couples treated by IVF/H and fresh embryo transfer) and the H/FET subgroup (440 couples treated by IVF/H and frozen embryo transfer); the IVF/D group was likewise divided into the D/ET subgroup (49 couples treated by IVF/D and fresh embryo transfer) and the D/FET subgroup (238 couples treated by IVF/D and frozen embryo transfer) (detailed in Fig. 1).

The baseline characteristics of the couples

Except for the female BMI in the FN subgroup, the variables were nonnormally distributed among the subgroups. Using the nonparametric test (Kruskal–Wallis model), we found that the median endometrial thickness (10 cm vs. 9 cm, p = 0.015), oocytes retrieved (8 vs. 11, p = 0.000) and MII oocytes (6 vs. 10, p = 0.000) differed within Group IVF/H, whereas only female age (30 years vs. 28 years, p = 0.002) differed within Group IVF/D. There were no other significant differences (p > 0.05) within Group IVF/H or Group IVF/D, including male age, semen parameters and female BMI (detailed in Table 1).

Table 1. The basic characteristics of the 860 couples in one IVF/H (or IVF/D) cycle (median [first quartile, third quartile])

Baseline parameter | H/ET(1) | H/FET(2) | P (H/ET vs. H/FET) | D/ET(3) | D/FET(4) | P (D/ET vs. D/FET)
Inclusion samples (n) | 133 | 440 | / | 49 | 238 | /
Male age (husband or donor) (a) | 32 [29, 34] | 32 [29, 34] | 0.731 | 23 [22, 26] | 23 [21, 27] | 0.967
Semen volume (ml) (a) | 3 [2, 3] | 3 [3] | 0.348 | 2 [2, 2] | 2 [2, 2] | 0.608
Semen concentration (10^6/ml) (a) | 56 [48, 64] | 54 [48, 62] | 0.734 | 46 [43, 53] | 47 [43, 51] | 0.974
Semen (PR + NP) (%) (a) | 56 [50, 60] | 54 [50, 60] | 0.931 | 46 [43, 54] | 48 [43, 52] | 0.574
Female age (a) | 33 [30, 35] | 32 [30, 35] | 0.644 | 30 [28, 35] | 28 [26, 32] | 0.002**
Female BMI (kg/m2) (a) | 21 [24, 25] | 21 [26, 27] | 0.862 | 22 [20, 23] | 21 [20, 24] | 0.862
Endometrial thickness (cm) (a) | 10 [8, 12] | 9 [7, 11] | 0.015* | 10 [8, 12] | 10 [8, 12] | 0.805
Oocytes retrieved (n) (a) | 8 [4, 11] | 11 [8, 16] | 0.000*** | 11 [8, 15] | 11 [8, 15] | 0.590
MII oocytes (n) (a) | 6 [4, 10] | 10 [6, 14] | 0.000*** | 9 [7, 13] | 9 [6, 12] | 0.721
MII oocytes (%) (a) | 93 [78, 100] | 91 [82, 100] | 0.660 | 81 [68, 96] | 86 [73, 96] | 0.503

Note: 1: H/ET subgroup = using fresh sperm and fresh embryo transfer; 2: H/FET subgroup = using fresh sperm and frozen embryo transfer; 3: D/ET subgroup = using frozen sperm and fresh embryo transfer; 4: D/FET subgroup = using frozen sperm and frozen embryo transfer; a: nonparametric test (Kruskal–Wallis model); * p < 0.05, ** p < 0.01, *** p < 0.001

Maternal factor exposure to frozen embryo transfer: the comparison of clinical outcomes within Group IVF/H or Group IVF/D

Using the nonparametric test (Kruskal–Wallis model), the outcomes of fertilization and embryo transfer were compared within Group IVF/H and Group IVF/D (see Table 2). Before embryo transfer, the medians of all the fertilization outcomes were higher in the H/FET subgroup than in the H/ET subgroup: the median [first quartile, third quartile] fertilization rate, cleavage rate and high-quality embryo rate were 80% [67, 91], 100% [80, 100] and 57% [36, 78], respectively, in the H/FET subgroup. In contrast, there was no significant difference between the D/ET and D/FET subgroups. After frozen embryo transfer, the CPR, LBR and multipregnancy rate were significantly higher in the H/FET subgroup than in the fresh-transfer H/ET subgroup (63.2 vs. 39.8, p = 0.000; 61.1 vs. 39.1, p = 0.000; 14.8 vs. 6.7, p = 0.016, respectively). The LBW was also significantly higher in the H/FET subgroup than in the H/ET subgroup (16.2 vs. 4.9, p = 0.021). In Group IVF/D, the CPR and LBR were significantly higher in the D/FET subgroup than in the fresh-transfer D/ET subgroup (68.1 vs. 49.0, p = 0.017; 66.4 vs. 47.0, p = 0.010, respectively). In addition, the BDR was significantly higher in the D/FET subgroup than in the D/ET subgroup (1.0 vs. 0.0, p = 0.000).

Table 2. The IVF outcomes of the 860 couples in one IVF/H (or IVF/D) cycle

Outcome indicator | H/ET(1) | H/FET(2) | (X^2) P (H/ET vs. H/FET) | D/ET(3) | D/FET(4) | (X^2) P (D/ET vs. D/FET)
After fertilization (median [first quartile, third quartile]):
Fertilization number (n) (a) | 5 [3, 8] | 8 [5, 11] | 0.000*** | 8 [6, 12] | 8 [5, 11] | 0.342
Fertilization rate (%) (a) | 71 [50, 89] | 80 [67, 91] | 0.001** | 67 [57, 88] | 75 [57, 84] | 0.822
Cleavage number (n) (a) | 4 [2, 8] | 8 [5, 11] | 0.000*** | 8 [6, 11] | 7 [4, 11] | 0.309
Cleavage rate (%) (a) | 67 [50, 88] | 100 [80, 100] | 0.000*** | 67 [57, 88] | 71 [55, 82] | 0.667
High-quality embryos (n) (a) | 2 [1, 5] | 4 [3, 7] | 0.000*** | 6 [3, 8] | 4 [2, 7] | 0.105
High-quality embryo rate (%) (a) | 33 [10, 54] | 57 [36, 78] | 0.000*** | 52 [33, 67] | 45 [28, 60] | 0.070
After transfer (%):
Biochemical pregnancy rate (b) | 0.0 | 1.4 | (2.377) 0.123 | 2.0 | 0.4 | (1.542) 0.214
Clinical pregnancy rate (b) | 39.8 | 63.2 | (22.789) 0.000*** | 49.0 | 68.1 | (5.734) 0.017*
Live birth rate (b) | 39.1 | 61.1 | (20.134) 0.000*** | 47.0 | 66.4 | (6.598) 0.010*
Multipregnancy rate (b) | 6.7 | 14.8 | (5.820) 0.016* | 14.3 | 9.2 | (1.137) 0.286
Miscarriage rate (b) | 0.8 | 2.0 | (0.997) 0.318 | 2.0 | 2.1 | (0.001) 0.979
Total baby sex ratio (b) | 69.4 | 114.1 | (3.129) 0.077 | 121.4 | 82.0 | (1.036) 0.309
Low birth weight rate (b) | 4.9 | 16.2 | (5.286) 0.021* | 19.4 | 15.8 | (0.029) 0.806
Baby with birth defect rate (b) | 1.6 | 1.5 | (0.007) 0.933 | 0.0 | 1.0 | (216.712) 0.000***

Note: subgroup definitions as in Table 1; a: nonparametric test (Kruskal–Wallis model); b: chi-square test for independent samples; * p < 0.05, ** p < 0.01, *** p < 0.001

Paternal factor exposure to frozen sperm fertilization: the comparison of clinical outcomes between Group IVF/H and Group IVF/D

Using the chi-square test for independent samples, the comparison of clinical outcomes between Group IVF/H and Group IVF/D is shown in Figs. 2 and 3. In the D/ET subgroup, which used frozen (donor) sperm for fertilization, the HER was higher than in the H/ET subgroup, which used fresh husband sperm (p < 0.05). In contrast, the FER, CLR and HER in the D/FET subgroup were lower than in the H/FET subgroup (all p < 0.05). After embryo transfer, the BDR in the D/ET subgroup was lower than in the H/ET subgroup (p < 0.001), and the MUR in the D/FET subgroup was lower than in the H/FET subgroup (p < 0.05).

Fig. 2. The comparison of fertilization outcomes between Group IVF/H and Group IVF/D. The FER, CLR and HER of the H/ET and H/FET subgroups are shown in the blue histogram and those of the D/ET and D/FET subgroups in the red histogram. Note: FER = fertilization rate; CLR = cleavage rate; HER = high-quality embryo rate; * p < 0.05, ** p < 0.01, *** p < 0.001

Fig. 3. The comparison of embryo transfer outcomes between Group IVF/H and Group IVF/D. The proportions of the BPR, CPR, LBR, MUR, MSR, TBSR, LBW and BDR of the H/ET and H/FET subgroups are shown in the blue histogram and those of the D/ET and D/FET subgroups in the red histogram. Note: BPR = biochemical pregnancy rate; CPR = clinical pregnancy rate; LBR = live birth rate; MUR = multipregnancy rate; MSR = miscarriage rate; TBSR = total baby sex ratio; LBW = low birth weight rate; BDR = baby with birth defect rate; * p < 0.05, ** p < 0.01, *** p < 0.001

Parental factors exposure to frozen sperm fertilization and frozen embryo transfer: the comparison of clinical outcomes between Group IVF/H and Group IVF/D

When both frozen sperm and frozen embryo transfer were used, in the D/FET subgroup, the HER was higher than with both fresh sperm and fresh embryo transfer in the H/ET subgroup (p < 0.05). After embryo transfer, the CPR and LBR in the D/FET subgroup were higher than in the H/ET subgroup (p < 0.001), and the LBW in the D/FET subgroup was higher than in the H/ET subgroup (p < 0.05).
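The subgroup comparisons above follow a simple pipeline: test each continuous variable for normality, fall back to the Kruskal–Wallis test when it is nonnormal, report medians with quartiles, and compare proportions with the chi-square test. Below is a minimal, hypothetical Python sketch of that pipeline; the data are simulated stand-ins, not the study's records.

# Minimal sketch of the analysis pipeline described above; data are simulated.
import numpy as np
from scipy.stats import shapiro, kruskal

rng = np.random.default_rng(1)
h_et  = rng.poisson(lam=8,  size=133)   # e.g. oocytes retrieved, H/ET (invented)
h_fet = rng.poisson(lam=11, size=440)   # e.g. oocytes retrieved, H/FET (invented)

for name, x in (("H/ET", h_et), ("H/FET", h_fet)):
    w, p_norm = shapiro(x)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3g}")  # small p -> nonnormal

h_stat, p_kw = kruskal(h_et, h_fet)
q = lambda x: np.percentile(x, [25, 50, 75]).round(1)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4g}")
print("H/ET [Q1, median, Q3]:", q(h_et), " H/FET:", q(h_fet))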
Conclusion
This study demonstrates that the use of frozen sperm or frozen embryo transfer has distinct effects at different stages of IVF. In particular, the use of frozen embryos and frozen sperm has complementary effects on IVF outcomes. Basic research on the mechanisms of cellular adaptation to cryopreservation is needed to support our findings.
[ "Methods", "The processing of ovulation, fertilization and embryo transfer were as follows", "Grouping and statistical analysis", "Participants and grouping", "The baseline characteristics of the couples", "Maternal factor exposure to frozen embryo transfer: the comparison of clinical outcomes within group IVF/H or group IVF/D", "Paternal factor exposure to frozen sperm fertilization: the comparison of clinical outcomes between Group IVF/H and Group IVF/D", "Parental factors exposure to frozen sperm fertilization and frozen embryo transfer: the comparison of clinical outcomes between Group IVF/H and Group IVF/D" ]
[ "This study was approved by the Ethical Committee of the institutional review board of the Ethical Committee of the Obstetrics and Gynaecology Hospital of Fudan University and written informed consent was obtained from all couples. All methods were performed in accordance with the relevant guidelines and regulations.\nWe retrospectively analyzed the IVF treatment of couples experiencing infertility with frozen-thawed donor sperm (Group IVF/D) or fresh husband sperm (Group IVF/H) at Shanghai Ji Ai Genetics and IVF Institute (Jan. 2013-Feb. 2019), after which we followed up until the birth of the baby. For Group IVF/D, the donor’s inclusion criteria were normal sperm parameters above World Health Organization guidelines [14]. For Group IVF/H, the husband’s inclusion criteria were also normal sperm parameters according to World Health Organization guidelines [14]. The inclusion criteria for women in both groups were those undergoing their first IVF-ET cycle caused only by tubal obstruction factors. Women with premature ovarian failure, polycystic ovarian syndrome, chromosome abnormalities, habitual abortion and other diseases that cause infertility were all excluded from this study. In addition, male patients with anti-sperm antibodies, high DNA fragmentation index, non-ejaculation, retrograde ejaculation, chromosome abnormalities and other diseases that cause infertility were also excluded.\nThe processing of the semen sample treatments was as follows: According to the World Health Organization guidelines [14], donors’ semen samples were obtained by masturbation. After liquefaction of the fresh ejaculate, the specimens’ characteristics were evaluated, such as volume, count and motility. The qualified donors followed the National Sperm Bank reference for semen parameters (before freezing: volume ≥ 2.0 ml, concentration ≥ 60 million/ml, progressive motility ≥ 60%, normal sperm morphology ≥ 70%; after thawing: concentration ≥ 15 million/ml, progressive motility ≥ 32%, normal sperm morphology ≥ 4%). On the day of egg collection by uterine surgery, the semen samples were thawed in a water bath at 37 ℃. Then, the semen sample was treated by the density gradient method (45% and 90% gradient solution, Vitrolife, Gothenburg, Sweden), by which the samples were centrifuged at 500 g for 20 min. After removing the supernatant, the precipitate was washed with washing solution (Vitrolife, Gothenburg, Sweden) and centrifuged at 300 g for 10 min. For the husband, all the semen samples were obtained by masturbation on the day of egg collection by uterine surgery, and the fresh sperm were used in the IVF-ET cycles.\n The processing of ovulation, fertilization and embryo transfer were as follows controlled ovarian hyperstimulation (COH) used a gonadotropin-releasing hormone agonist protocol (the following was a shortened version of the long protocol) or a GnRH antagonist protocol, both of which are effective in blocking a premature LH surge [15, 16]. Generally, for the long protocol, a GnRH agonist (Triptorelin Acetate, Ipsen Pharma Biotech, France) was administered by subcutaneous injection daily starting from the luteal phase of the menstrual cycle for 10–14 days, and then ovarian stimulation with rFSH (Gn-F, Merck Serono SA Aubonne Branch, Switzerland) commenced. For the GnRH antagonist protocol, ovarian stimulation began on the second day of the menstrual cycle, and on the fifth day, antagonist (Cetrorelix Acetate, Cetrotide, Serono Labs Inc., Switzerland) administration started. 
Once the leading two follicles reached 18 mm or larger in diameter, the hCG administration was ejected as a trigger on the same day. Thirty-five to forty hours later, a doctor punctured the follicles and collected the eggs with the guidance of an ultrasound instrument. The sperm and oocytes (10000:1) were added to 0.1 ml of prebalanced embryo culture medium into four-well plates covered with mineral oil. The evaluation of embryo quality was performed on the 3rd day after fertilization. The embryos were divided into four levels according to the characteristics of blastomeres, such as the number, the morphology and fragments. Grade I and Grade II embryos were defined as high-quality embryos, and the other grades were defined as low-quality embryos [17, 18]. When the number of high-quality embryos was two or lower and the woman was younger than 35 years old, the embryos were frozen for transfer. All transferred embryos were 4-cell embryos. The frozen embryos were thawed according to the rapid recovery method of vitrification. Embryos with a recovery rate above 50% could be used for transfer. After embryo transfer, the patients were followed up until birth.\ncontrolled ovarian hyperstimulation (COH) used a gonadotropin-releasing hormone agonist protocol (the following was a shortened version of the long protocol) or a GnRH antagonist protocol, both of which are effective in blocking a premature LH surge [15, 16]. Generally, for the long protocol, a GnRH agonist (Triptorelin Acetate, Ipsen Pharma Biotech, France) was administered by subcutaneous injection daily starting from the luteal phase of the menstrual cycle for 10–14 days, and then ovarian stimulation with rFSH (Gn-F, Merck Serono SA Aubonne Branch, Switzerland) commenced. For the GnRH antagonist protocol, ovarian stimulation began on the second day of the menstrual cycle, and on the fifth day, antagonist (Cetrorelix Acetate, Cetrotide, Serono Labs Inc., Switzerland) administration started. Once the leading two follicles reached 18 mm or larger in diameter, the hCG administration was ejected as a trigger on the same day. Thirty-five to forty hours later, a doctor punctured the follicles and collected the eggs with the guidance of an ultrasound instrument. The sperm and oocytes (10000:1) were added to 0.1 ml of prebalanced embryo culture medium into four-well plates covered with mineral oil. The evaluation of embryo quality was performed on the 3rd day after fertilization. The embryos were divided into four levels according to the characteristics of blastomeres, such as the number, the morphology and fragments. Grade I and Grade II embryos were defined as high-quality embryos, and the other grades were defined as low-quality embryos [17, 18]. When the number of high-quality embryos was two or lower and the woman was younger than 35 years old, the embryos were frozen for transfer. All transferred embryos were 4-cell embryos. The frozen embryos were thawed according to the rapid recovery method of vitrification. Embryos with a recovery rate above 50% could be used for transfer. After embryo transfer, the patients were followed up until birth.\n Grouping and statistical analysis According to the treatment of embryo transfer, Group IVF/D was divided into the D/ET subgroup (treated with frozen donor sperm and Fresh embryo transfer) and the D/FET subgroup (treated with frozen donor sperm and frozen embryo transfer). 
Group IVF/H was also divided into the H/ET subgroup (treated with Fresh husband sperm and Fresh embryo transfer) and the H/FET subgroup (treated with Fresh husband sperm and frozen embryo transfer). The detail of the research grouping flow chat is shown in Fig. 1.\n\nFig. 1The research grouping flow chat\n\nThe research grouping flow chat\nThe female-related factors, including age, GnRH injection days and dosage, the estrogen secretion peak of egg collection, endometrium thickness, the number of eggs retrieved, the number of fertilizations, the number of effective embryos, the number of high-quality embryos, etc., were followed up and analyzed by SPSS 19.0 software (SPSS Inc., Cambridge, MA, USA). First, all the parameters were checked by a normal distribution test (Kolmogorov–Smirnov model and Shapiro–Wilk model). Then, the medians (first quartile, third quartile) of the continuous parameters were calculated. Finally, the variation in clinical outcomes within or between the IVF/D or IVF/H groups was compared using the nonparametric test (Kruskal–Wallis model) or chi square test for independent samples (p < 0.05).\nAccording to the treatment of embryo transfer, Group IVF/D was divided into the D/ET subgroup (treated with frozen donor sperm and Fresh embryo transfer) and the D/FET subgroup (treated with frozen donor sperm and frozen embryo transfer). Group IVF/H was also divided into the H/ET subgroup (treated with Fresh husband sperm and Fresh embryo transfer) and the H/FET subgroup (treated with Fresh husband sperm and frozen embryo transfer). The detail of the research grouping flow chat is shown in Fig. 1.\n\nFig. 1The research grouping flow chat\n\nThe research grouping flow chat\nThe female-related factors, including age, GnRH injection days and dosage, the estrogen secretion peak of egg collection, endometrium thickness, the number of eggs retrieved, the number of fertilizations, the number of effective embryos, the number of high-quality embryos, etc., were followed up and analyzed by SPSS 19.0 software (SPSS Inc., Cambridge, MA, USA). First, all the parameters were checked by a normal distribution test (Kolmogorov–Smirnov model and Shapiro–Wilk model). Then, the medians (first quartile, third quartile) of the continuous parameters were calculated. Finally, the variation in clinical outcomes within or between the IVF/D or IVF/H groups was compared using the nonparametric test (Kruskal–Wallis model) or chi square test for independent samples (p < 0.05).", "controlled ovarian hyperstimulation (COH) used a gonadotropin-releasing hormone agonist protocol (the following was a shortened version of the long protocol) or a GnRH antagonist protocol, both of which are effective in blocking a premature LH surge [15, 16]. Generally, for the long protocol, a GnRH agonist (Triptorelin Acetate, Ipsen Pharma Biotech, France) was administered by subcutaneous injection daily starting from the luteal phase of the menstrual cycle for 10–14 days, and then ovarian stimulation with rFSH (Gn-F, Merck Serono SA Aubonne Branch, Switzerland) commenced. For the GnRH antagonist protocol, ovarian stimulation began on the second day of the menstrual cycle, and on the fifth day, antagonist (Cetrorelix Acetate, Cetrotide, Serono Labs Inc., Switzerland) administration started. Once the leading two follicles reached 18 mm or larger in diameter, the hCG administration was ejected as a trigger on the same day. 
Thirty-five to forty hours later, a doctor punctured the follicles and collected the eggs with the guidance of an ultrasound instrument. The sperm and oocytes (10000:1) were added to 0.1 ml of prebalanced embryo culture medium into four-well plates covered with mineral oil. The evaluation of embryo quality was performed on the 3rd day after fertilization. The embryos were divided into four levels according to the characteristics of blastomeres, such as the number, the morphology and fragments. Grade I and Grade II embryos were defined as high-quality embryos, and the other grades were defined as low-quality embryos [17, 18]. When the number of high-quality embryos was two or lower and the woman was younger than 35 years old, the embryos were frozen for transfer. All transferred embryos were 4-cell embryos. The frozen embryos were thawed according to the rapid recovery method of vitrification. Embryos with a recovery rate above 50% could be used for transfer. After embryo transfer, the patients were followed up until birth.", "According to the treatment of embryo transfer, Group IVF/D was divided into the D/ET subgroup (treated with frozen donor sperm and Fresh embryo transfer) and the D/FET subgroup (treated with frozen donor sperm and frozen embryo transfer). Group IVF/H was also divided into the H/ET subgroup (treated with Fresh husband sperm and Fresh embryo transfer) and the H/FET subgroup (treated with Fresh husband sperm and frozen embryo transfer). The detail of the research grouping flow chat is shown in Fig. 1.\n\nFig. 1The research grouping flow chat\n\nThe research grouping flow chat\nThe female-related factors, including age, GnRH injection days and dosage, the estrogen secretion peak of egg collection, endometrium thickness, the number of eggs retrieved, the number of fertilizations, the number of effective embryos, the number of high-quality embryos, etc., were followed up and analyzed by SPSS 19.0 software (SPSS Inc., Cambridge, MA, USA). First, all the parameters were checked by a normal distribution test (Kolmogorov–Smirnov model and Shapiro–Wilk model). Then, the medians (first quartile, third quartile) of the continuous parameters were calculated. Finally, the variation in clinical outcomes within or between the IVF/D or IVF/H groups was compared using the nonparametric test (Kruskal–Wallis model) or chi square test for independent samples (p < 0.05).", "A total of 860 couples undergoing IVF-ET treatment were included in the analysis, including 573 couples with husband sperm (Group IVF/H) and 287 couples with frozen donor sperm (Group IVF/D). According to whether or not frozen embryo transfer was performed, the IVF/H group was divided into the H/ET subgroup (133 couples treated by IVF/H and fresh embryo transfer) and the H/FET subgroup (440 couples treated by IVF/H and frozen embryo transfer); moreover, the IVF/D group was also divided into the D/ET subgroup (49 couples treated by IVF/D and fresh embryo transfer) and the D/FET subgroup (238 couples treated by IVF/D and frozen embryo transfer) (detailed in Fig. 1).", "Except for the female BMI in the FN subgroup, the variables were nonnormally distributed among the other subgroups. Thus, using the nonparametric test (Kruskal–Wallis model), we found that the median endometrial thickness (10 cm vs. 9 cm, p = 0.015), oocytes retrieved (8 vs. 11, p = 0.000) and MII oocytes (6 vs. 10, p = 0.000) were differentiated within Group IVF/H, but we found that only female age (30 years vs. 
28 years, p = 0.002) was differentiated within Group IVF/D. There were no significant differences (p > 0.05) within Group IVF/H or Group IVF/D, including the male age, semen parameters and female BMI (detailed in Table 1).\n\nTable 1The basic characteristics of the 860 couples in one IVF/H (or IVF/D) cycleBaseline ParametersGroup IVF/H\nP value\n(H/ET vs. H/FET)Group IVF/D\nP value\n(D/ET vs. D/FET)H/ET1 subgroupH/FET2 subgroupD/ET3 subgroupD/FET4 subgroupMedian [first quartile, third quartile]Median [first quartile, third quartile]Inclusion Samples (n)133440/49238/Male Age (husband or donor) a32 [29, 34]32 [29, 34]0.73123 [22, 26]23 [21, 27]0.967Semen Volume (Ml) a3 [2, 3]3 [3]0.3482 [2, 2]2 [2, 2]0.608Semen Concentration (10^6/ML) a56 [48, 64]54 [48, 62]0.73446 [43, 53]47 [43, 51]0.974Semen (PR + NP) (%)a56 [50, 60]54 [50, 60]0.93146 [43, 54]48 [43, 52]0.574Female Age a33 [30, 35]32 [30, 35]0.64430 [28, 35]28 [26, 32]0.002**Female BMI (kg/m2) a21 [24, 25]21 [26, 27]0.86222 [20, 23]21 [20, 24]0.862Endometrial Thickness (cm) a10 [8, 12]9 [7, 11]0.015*10 [8, 12]10 [8, 12]0.805Oocytes Retrieved (n) a8 [4, 11]11[8, 16]0.000***11 [8, 15]11 [8, 15]0.590MII Oocytes (n) a6 [4, 10]10 [6, 14]0.000***9 [7, 13]9 [6, 12]0.721MII Oocytes (%)a93 [78, 100]91 [82, 100]0.66081 [68, 96]86 [73, 96]0.503Note:1: H/ET subgroup = using fresh sperm and fresh embryo transfer;2: H/FET subgroup = using fresh sperm and frozen embryo transfer;3: D/ET subgroup = using frozen sperm and fresh embryo transfer;4: D/FET subgroup = using frozen sperm and frozen embryo transfer;a: Nonparametric test (Kruskal–Wallis Model)* p < 0.05 **p < 0.01 ***p < 0.001\n\nThe basic characteristics of the 860 couples in one IVF/H (or IVF/D) cycle\nNote:\n1: H/ET subgroup = using fresh sperm and fresh embryo transfer;\n2: H/FET subgroup = using fresh sperm and frozen embryo transfer;\n3: D/ET subgroup = using frozen sperm and fresh embryo transfer;\n4: D/FET subgroup = using frozen sperm and frozen embryo transfer;\na: Nonparametric test (Kruskal–Wallis Model)\n* p < 0.05 **p < 0.01 ***p < 0.001", "Using the nonparametric test (Kruskal–Wallis Model), the outcomes of fertilization and embryo transfer were compared within Group IVF/H and Group IVF/D (see Table 2). Before embryo transfer, the median of all the fertilization outcomes in the H/FET subgroup was higher than that in the H/ET subgroup. The median [first quartile, third quartile] fertilization rate (%), cleavage rate (%) and high-quality embryo rate (%) were 80 [67, 91], 100 [80, 100] and 57 [36, 78], respectively, in the H/FET subgroup. In contrast, there was no significant difference between the D/ET and D/FET subgroups. After frozen embryo transfer, the CPR and LBR were significantly higher in the H/FET subgroup than in the fresh embryo treatment subgroup and the H/ET subgroup (63.2 vs. 39.8, p = 0.000; 61.1 vs. 39.1, p = 0.000; 14.8 vs. 6.7, p = 0.016, respectively). However, the LBW was significantly higher in the H/FET subgroup than in the fresh embryo treatment subgroup in the H/ET subgroup (16.2 vs. 4.9, p = 0.021). In another IVF/D group, the CPR and LBR were significantly higher in the D/FET subgroup than in the fresh embryo treatment subgroup in the D/ET subgroup (68.1 vs. 49.0, p = 0.017; 66.4 vs. 47.0, p = 0.010, respectively). Otherwise, the BDR was significantly higher in the D/FET subgroup than in the fresh embryo treatment group in subgroup D/ET (1.0 vs. 
0.0, p = 0.000).\n\nTable 2The IVF outcomes of the 860 couples in one IVF/H (or IVF/D) cycleOutcome IndicatorsGroup IVF/H\n(Χ\n2\n)\n\nP value\n(H/ET vs. H/FET)Group IVF/D\n(Χ\n2\n)\n\nP value\n(D/ET vs. D/FET)H/ET1 subgroupH/FET2 subgroupD/ET3 subgroupD/FET4 subgroupMedian [first quartile, third quartile]Median [first quartile, third quartile]After FertilizationFertilization Number (n)a5 [3, 8]8 [5, 11]0.000***8 [6, 12]8 [5, 11]0.342Fertilization Rate (%)a71 [50,89]80 [67,91]0.001**67 [57,88]75 [57,84]0.822Cleavage Number (n) a4 [2, 8]8 [5, 11]0.000***8 [6, 11]7 [4, 11]0.309Cleavage Rate (%)a67 [50,88]100 [80,100]0.000***67 [57,88]71 [55,82]0.667High-Quality Embryos (n) a2 [1, 5]4 [3, 7]0.000***6 [3, 8]4 [2, 7]0.105High-Quality Embryos Rate (%)a33 [10,54]57 [36,78]0.000***52 [33,67]45 [28,60]0.070After TransferBiochemical Pregnancy Rate (%)b0.01.4(2.377) 0.1232.00.4(1.542) 0.214Clinical Pregnancy Rate (%)b39.863.2(22.789) 0.000***49.068.1(5.734) 0.017*Live Birth Rate (%)b39.161.1(20.134) 0.000***47.066.4(6.598) 0.010*Multipregnancy Rate (%)b6.714.8(5.820) 0.016*14.39.2(1.137) 0.286Miscarriage Rate (%)b0.82.0(0.997) 0.3182.02.1(0.001) 0.979Total Baby Sex Ratio (%)b69.4114.1(3.129) 0.077121.482.0(1.036) 0.309Low Birth Weight Rate (%)b4.916.2(5.286) 0.021*19.415.8(0.029) 0.806Baby with Birth Defect Rate (%)b1.61.5(0.007) 0.9330.01.0(216.712) 0.000***Note:1: H/ET subgroup = using fresh sperm and fresh embryo transfer;2: H/FET subgroup = using fresh sperm and frozen embryo transfer;3: D/ET subgroup = using frozen sperm and fresh embryo transfer;4: D/FET subgroup = using frozen sperm and frozen embryo transfer;a: Nonparametric Test (Kruskal–Wallis Model)b: chi square test for independent samples* p < 0.05 **p < 0.01 ***p < 0.001\n\nThe IVF outcomes of the 860 couples in one IVF/H (or IVF/D) cycle\nNote:\n1: H/ET subgroup = using fresh sperm and fresh embryo transfer;\n2: H/FET subgroup = using fresh sperm and frozen embryo transfer;\n3: D/ET subgroup = using frozen sperm and fresh embryo transfer;\n4: D/FET subgroup = using frozen sperm and frozen embryo transfer;\na: Nonparametric Test (Kruskal–Wallis Model)\nb: chi square test for independent samples\n* p < 0.05 **p < 0.01 ***p < 0.001", "Using the chi square test for independent samples, the comparison of clinical outcomes between Group IVF/H and Group IVF/D is shown in Figs. 2 and 3. Compared with the D/ET subgroup using frozen sperm (donor sperm) fertilization, the HER was higher than that of the H/ET subgroup using fresh husband sperm (p < 0.05). In contrast, the values of the FER, CLR and HER in the D/FET subgroup were lower than those of the H/FET subgroup using fresh husband sperm (all p < 0.05). After embryo transfer, the BDR in the D/ET subgroup was lower than that of the H/ET subgroup using fresh husband sperm (p < 0.001). Meanwhile, the MUR in the D/FET subgroup was lower than that of the H/FET subgroup using fresh husband sperm (p < 0.05).\n\nFig. 2The comparison of fertilization outcomes between Group IVF/H and Group IVF/D. The values of the FER, CLR and HER in subgroups H/ET and H/FET are shown in the separated blue histogram and those of the D/ET and D/FET subgroups are shown in the red histogram. Note: FER = fertilization rate; CLR = cleavage rate; HER = high-quality embryo rate; * p < 0.05 **p < 0.01 ***p < 0.001\n\nThe comparison of fertilization outcomes between Group IVF/H and Group IVF/D. 
The values of the FER, CLR and HER in subgroups H/ET and H/FET are shown in the separated blue histogram and those of the D/ET and D/FET subgroups are shown in the red histogram. Note: FER = fertilization rate; CLR = cleavage rate; HER = high-quality embryo rate; * p < 0.05 **p < 0.01 ***p < 0.001\n\nFig. 3The comparison of embryo transfer outcomes between Group IVF/H and Group IVF/D. The proportions of the BPR, CPR, LBR, MUR, MSR, TBSR, LBW and BDR in the H/ET and H/FET subgroups are shown in the combined blue histogram and those of the D/ET or D/FET subgroups are shown in the combined red histogram. Note: BPR = biochemical pregnancy rate; CPR = clinical pregnancy rate; LBR = live birth rate; MUR = multipregnancy rate; MSR = miscarriage rate; TBSR = total baby sex ratio; LBW = low birth weight rate; BDR = baby with birth defect rate;* p < 0.05 **p < 0.01 ***p < 0.001\n\nThe comparison of embryo transfer outcomes between Group IVF/H and Group IVF/D. The proportions of the BPR, CPR, LBR, MUR, MSR, TBSR, LBW and BDR in the H/ET and H/FET subgroups are shown in the combined blue histogram and those of the D/ET or D/FET subgroups are shown in the combined red histogram. Note: BPR = biochemical pregnancy rate; CPR = clinical pregnancy rate; LBR = live birth rate; MUR = multipregnancy rate; MSR = miscarriage rate; TBSR = total baby sex ratio; LBW = low birth weight rate; BDR = baby with birth defect rate;* p < 0.05 **p < 0.01 ***p < 0.001", "When both frozen sperm and frozen embryo transfer were used in the D/FET group, the HER was higher than that for the use of both fresh sperm and fresh embryo transfer in the H/ET group (p < 0.05). After embryo transfer, the CPR and LBR values in the D/FET subgroup were higher than those in the H/ET group (p < 0.001). Meanwhile, the LBW in the D/FET subgroup was higher than that in the H/ET subgroup (p < 0.05)." ]
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "The processing of ovulation, fertilization and embryo transfer were as follows", "Grouping and statistical analysis", "Results", "Participants and grouping", "The baseline characteristics of the couples", "Maternal factor exposure to frozen embryo transfer: the comparison of clinical outcomes within group IVF/H or group IVF/D", "Paternal factor exposure to frozen sperm fertilization: the comparison of clinical outcomes between Group IVF/H and Group IVF/D", "Parental factors exposure to frozen sperm fertilization and frozen embryo transfer: the comparison of clinical outcomes between Group IVF/H and Group IVF/D", "Discussion", "Conclusion" ]
[ "To keep the division of embryo the same pace with the growth of the endometrium, the transferring of frozen embryos have been widely used in assisted reproductive technology (ART). Since the first successful report of frozen embryo transfer (FET) [1], the cryopreservation of embryos has been an important strategy in the treatment of infertility. FET strategies contribute an additional 25–50% chance of pregnancy for couples who have cryopreserved embryos [2–4]. However, FET is not free from the risk of a higher multipregnancy rate (MPR) and low birth weight rate (LBW), even though the live birth rate (LBR) of frozen–thawed embryos is usually higher than that of fresh transferred embryos [5–8]. On the other hand, male due to azoospermia or sperm retrieval difficulties on the day of egg retrieval in vitro fertilization with frozen spermatozoa is used to treat female who might also have tubal lesions or those experiencing failure of prior artificial insemination with donor semen (AID) cycles [9, 10]. LBW Meanwhile, there is a matter of debate on the clinical outcomes caused by alterations in the DNA integrity of semen following cryopreservation [11, 12]. Indeed, the baby with no birth defect rate following pregnancies after IVF/D was not different (97.3% vs. 97.4%) [13] after IVF with fresh husband spermatozoa, but the clinical pregnancy rate (CPR) per transfer was higher after using frozen donor semen than that after using the husbands’ semen (27.6% vs. 21.9%, respectively)[13] Above all, taking advantage of freezing sperm, eggs or embryos is just like a double-edged sword for IVF outcomes, so doctors who use it need to be cautious.\nNowadays, the practical ART will also meet the clinical treatment of one couple not only need to freeze sperm, but also suitable for freezing embryos. There still have been less studies of this double-factors freezing on the IVF outcomes including fertilization, pregnancy, and child birth. Most of published studies are incomprehensive, they are retrospective or refer to cases involving either frozen embryo transfer or frozen donor spermatozoa. For this article, we cover the findings of both freezing embryos transfer (FET) and IVF/D. As far as ART procedures are concerned, FET and IVF/D lead to some specific questions requiring answers: (i) the effects on the pregnancy and neonate characteristics of single-factor exposure to frozen embryo transfer; (ii) the effects on the fertilization, pregnancy and neonate characteristics of single-factor exposure to frozen donor semen; and (iii) the influence of the duration of sperm and embryo cryopreservation on pregnancy and newborn health. Methods.", "This study was approved by the Ethical Committee of the institutional review board of the Ethical Committee of the Obstetrics and Gynaecology Hospital of Fudan University and written informed consent was obtained from all couples. All methods were performed in accordance with the relevant guidelines and regulations.\nWe retrospectively analyzed the IVF treatment of couples experiencing infertility with frozen-thawed donor sperm (Group IVF/D) or fresh husband sperm (Group IVF/H) at Shanghai Ji Ai Genetics and IVF Institute (Jan. 2013-Feb. 2019), after which we followed up until the birth of the baby. For Group IVF/D, the donor’s inclusion criteria were normal sperm parameters above World Health Organization guidelines [14]. For Group IVF/H, the husband’s inclusion criteria were also normal sperm parameters according to World Health Organization guidelines [14]. 
The inclusion criteria for women in both groups were those undergoing their first IVF-ET cycle caused only by tubal obstruction factors. Women with premature ovarian failure, polycystic ovarian syndrome, chromosome abnormalities, habitual abortion and other diseases that cause infertility were all excluded from this study. In addition, male patients with anti-sperm antibodies, high DNA fragmentation index, non-ejaculation, retrograde ejaculation, chromosome abnormalities and other diseases that cause infertility were also excluded.\nThe processing of the semen sample treatments was as follows: According to the World Health Organization guidelines [14], donors’ semen samples were obtained by masturbation. After liquefaction of the fresh ejaculate, the specimens’ characteristics were evaluated, such as volume, count and motility. The qualified donors followed the National Sperm Bank reference for semen parameters (before freezing: volume ≥ 2.0 ml, concentration ≥ 60 million/ml, progressive motility ≥ 60%, normal sperm morphology ≥ 70%; after thawing: concentration ≥ 15 million/ml, progressive motility ≥ 32%, normal sperm morphology ≥ 4%). On the day of egg collection by uterine surgery, the semen samples were thawed in a water bath at 37 ℃. Then, the semen sample was treated by the density gradient method (45% and 90% gradient solution, Vitrolife, Gothenburg, Sweden), by which the samples were centrifuged at 500 g for 20 min. After removing the supernatant, the precipitate was washed with washing solution (Vitrolife, Gothenburg, Sweden) and centrifuged at 300 g for 10 min. For the husband, all the semen samples were obtained by masturbation on the day of egg collection by uterine surgery, and the fresh sperm were used in the IVF-ET cycles.\n The processing of ovulation, fertilization and embryo transfer were as follows controlled ovarian hyperstimulation (COH) used a gonadotropin-releasing hormone agonist protocol (the following was a shortened version of the long protocol) or a GnRH antagonist protocol, both of which are effective in blocking a premature LH surge [15, 16]. Generally, for the long protocol, a GnRH agonist (Triptorelin Acetate, Ipsen Pharma Biotech, France) was administered by subcutaneous injection daily starting from the luteal phase of the menstrual cycle for 10–14 days, and then ovarian stimulation with rFSH (Gn-F, Merck Serono SA Aubonne Branch, Switzerland) commenced. For the GnRH antagonist protocol, ovarian stimulation began on the second day of the menstrual cycle, and on the fifth day, antagonist (Cetrorelix Acetate, Cetrotide, Serono Labs Inc., Switzerland) administration started. Once the leading two follicles reached 18 mm or larger in diameter, the hCG administration was ejected as a trigger on the same day. Thirty-five to forty hours later, a doctor punctured the follicles and collected the eggs with the guidance of an ultrasound instrument. The sperm and oocytes (10000:1) were added to 0.1 ml of prebalanced embryo culture medium into four-well plates covered with mineral oil. The evaluation of embryo quality was performed on the 3rd day after fertilization. The embryos were divided into four levels according to the characteristics of blastomeres, such as the number, the morphology and fragments. Grade I and Grade II embryos were defined as high-quality embryos, and the other grades were defined as low-quality embryos [17, 18]. 
When the number of high-quality embryos was two or lower and the woman was younger than 35 years old, the embryos were frozen for transfer. All transferred embryos were 4-cell embryos. The frozen embryos were thawed according to the rapid recovery method of vitrification. Embryos with a recovery rate above 50% could be used for transfer. After embryo transfer, the patients were followed up until birth.\n Grouping and statistical analysis According to the treatment of embryo transfer, Group IVF/D was divided into the D/ET subgroup (treated with frozen donor sperm and fresh embryo transfer) and the D/FET subgroup (treated with frozen donor sperm and frozen embryo transfer). Group IVF/H was also divided into the H/ET subgroup (treated with fresh husband sperm and fresh embryo transfer) and the H/FET subgroup (treated with fresh husband sperm and frozen embryo transfer). The detailed research grouping flow chart is shown in Fig. 1.\n\nFig. 1 The research grouping flow chart\n\nThe female-related factors, including age, GnRH injection days and dosage, the estrogen peak at egg collection, endometrial thickness, the number of eggs retrieved, the number of fertilizations, the number of effective embryos and the number of high-quality embryos, were followed up and analyzed with SPSS 19.0 software (SPSS Inc., Cambridge, MA, USA). First, all the parameters were checked with a normality test (Kolmogorov–Smirnov model and Shapiro–Wilk model). 
Then, the medians (first quartile, third quartile) of the continuous parameters were calculated. Finally, the variation in clinical outcomes within or between the IVF/D and IVF/H groups was compared using the nonparametric test (Kruskal–Wallis model) or the chi-square test for independent samples (p < 0.05).", "Controlled ovarian hyperstimulation (COH) used a gonadotropin-releasing hormone (GnRH) agonist protocol (described below as a shortened version of the long protocol) or a GnRH antagonist protocol, both of which are effective in blocking a premature LH surge [15, 16]. Generally, for the long protocol, a GnRH agonist (Triptorelin Acetate, Ipsen Pharma Biotech, France) was administered by subcutaneous injection daily, starting from the luteal phase of the menstrual cycle, for 10–14 days, and then ovarian stimulation with rFSH (Gn-F, Merck Serono SA Aubonne Branch, Switzerland) commenced. For the GnRH antagonist protocol, ovarian stimulation began on the second day of the menstrual cycle, and on the fifth day, antagonist (Cetrorelix Acetate, Cetrotide, Serono Labs Inc., Switzerland) administration started. Once the leading two follicles reached 18 mm or larger in diameter, hCG was injected as a trigger on the same day. Thirty-five to forty hours later, a doctor punctured the follicles and collected the eggs under the guidance of an ultrasound instrument. The sperm and oocytes (10000:1) were added to 0.1 ml of prebalanced embryo culture medium in four-well plates covered with mineral oil. The evaluation of embryo quality was performed on the 3rd day after fertilization. The embryos were divided into four levels according to the characteristics of the blastomeres, such as their number, morphology and fragmentation. Grade I and Grade II embryos were defined as high-quality embryos, and the other grades were defined as low-quality embryos [17, 18]. When the number of high-quality embryos was two or lower and the woman was younger than 35 years old, the embryos were frozen for transfer. All transferred embryos were 4-cell embryos. The frozen embryos were thawed according to the rapid recovery method of vitrification. 
Embryos with a recovery rate above 50% could be used for transfer. After embryo transfer, the patients were followed up until birth.", "According to the treatment of embryo transfer, Group IVF/D was divided into the D/ET subgroup (treated with frozen donor sperm and fresh embryo transfer) and the D/FET subgroup (treated with frozen donor sperm and frozen embryo transfer). Group IVF/H was also divided into the H/ET subgroup (treated with fresh husband sperm and fresh embryo transfer) and the H/FET subgroup (treated with fresh husband sperm and frozen embryo transfer). The detailed research grouping flow chart is shown in Fig. 1.\n\nFig. 1 The research grouping flow chart\n\nThe female-related factors, including age, GnRH injection days and dosage, the estrogen peak at egg collection, endometrial thickness, the number of eggs retrieved, the number of fertilizations, the number of effective embryos and the number of high-quality embryos, were followed up and analyzed with SPSS 19.0 software (SPSS Inc., Cambridge, MA, USA). First, all the parameters were checked with a normality test (Kolmogorov–Smirnov model and Shapiro–Wilk model). Then, the medians (first quartile, third quartile) of the continuous parameters were calculated. Finally, the variation in clinical outcomes within or between the IVF/D and IVF/H groups was compared using the nonparametric test (Kruskal–Wallis model) or the chi-square test for independent samples (p < 0.05).", " Participants and grouping A total of 860 couples undergoing IVF-ET treatment were included in the analysis, including 573 couples with husband sperm (Group IVF/H) and 287 couples with frozen donor sperm (Group IVF/D). According to whether or not frozen embryo transfer was performed, the IVF/H group was divided into the H/ET subgroup (133 couples treated by IVF/H and fresh embryo transfer) and the H/FET subgroup (440 couples treated by IVF/H and frozen embryo transfer); moreover, the IVF/D group was also divided into the D/ET subgroup (49 couples treated by IVF/D and fresh embryo transfer) and the D/FET subgroup (238 couples treated by IVF/D and frozen embryo transfer) (detailed in Fig. 1).\n The baseline characteristics of the couples Except for the female BMI in the FN subgroup, the variables were nonnormally distributed among the other subgroups. Thus, using the nonparametric test (Kruskal–Wallis model), we found that the median endometrial thickness (10 cm vs. 9 cm, p = 0.015), the number of oocytes retrieved (8 vs. 11, p = 0.000) and the number of MII oocytes (6 vs. 10, p = 0.000) differed within Group IVF/H, whereas only female age (30 years vs. 28 years, p = 0.002) differed within Group IVF/D. 
There were no significant differences (p > 0.05) within Group IVF/H or Group IVF/D in male age, semen parameters or female BMI (detailed in Table 1).\n\nTable 1 The basic characteristics of the 860 couples in one IVF/H (or IVF/D) cycle\nBaseline Parameters | H/ET1 | H/FET2 | P value (H/ET vs. H/FET) | D/ET3 | D/FET4 | P value (D/ET vs. D/FET)\nInclusion Samples (n) | 133 | 440 | / | 49 | 238 | /\nMale Age (husband or donor) a | 32 [29, 34] | 32 [29, 34] | 0.731 | 23 [22, 26] | 23 [21, 27] | 0.967\nSemen Volume (ml) a | 3 [2, 3] | 3 [3] | 0.348 | 2 [2, 2] | 2 [2, 2] | 0.608\nSemen Concentration (10^6/ml) a | 56 [48, 64] | 54 [48, 62] | 0.734 | 46 [43, 53] | 47 [43, 51] | 0.974\nSemen (PR + NP) (%) a | 56 [50, 60] | 54 [50, 60] | 0.931 | 46 [43, 54] | 48 [43, 52] | 0.574\nFemale Age a | 33 [30, 35] | 32 [30, 35] | 0.644 | 30 [28, 35] | 28 [26, 32] | 0.002**\nFemale BMI (kg/m2) a | 21 [24, 25] | 21 [26, 27] | 0.862 | 22 [20, 23] | 21 [20, 24] | 0.862\nEndometrial Thickness (cm) a | 10 [8, 12] | 9 [7, 11] | 0.015* | 10 [8, 12] | 10 [8, 12] | 0.805\nOocytes Retrieved (n) a | 8 [4, 11] | 11 [8, 16] | 0.000*** | 11 [8, 15] | 11 [8, 15] | 0.590\nMII Oocytes (n) a | 6 [4, 10] | 10 [6, 14] | 0.000*** | 9 [7, 13] | 9 [6, 12] | 0.721\nMII Oocytes (%) a | 93 [78, 100] | 91 [82, 100] | 0.660 | 81 [68, 96] | 86 [73, 96] | 0.503\nValues are median [first quartile, third quartile].\nNote:\n1: H/ET subgroup = using fresh sperm and fresh embryo transfer;\n2: H/FET subgroup = using fresh sperm and frozen embryo transfer;\n3: D/ET subgroup = using frozen sperm and fresh embryo transfer;\n4: D/FET subgroup = using frozen sperm and frozen embryo transfer;\na: Nonparametric test (Kruskal–Wallis model)\n* p < 0.05 **p < 0.01 ***p < 0.001\n Maternal factor exposure to frozen embryo transfer: the comparison of clinical outcomes within Group IVF/H or Group IVF/D Using the nonparametric test (Kruskal–Wallis model), the outcomes of fertilization and embryo transfer were compared within Group IVF/H and within Group IVF/D (see Table 2). Before embryo transfer, the medians of all the fertilization outcomes in the H/FET subgroup were higher than those in the H/ET subgroup. The median [first quartile, third quartile] fertilization rate (%), cleavage rate (%) and high-quality embryo rate (%) were 80 [67, 91], 100 [80, 100] and 57 [36, 78], respectively, in the H/FET subgroup. In contrast, there was no significant difference between the D/ET and D/FET subgroups. After frozen embryo transfer, the CPR, LBR and multipregnancy rate were significantly higher in the H/FET subgroup than in the H/ET subgroup treated with fresh embryos (63.2 vs. 39.8, p = 0.000; 61.1 vs. 39.1, p = 0.000; 14.8 vs. 6.7, p = 0.016, respectively). The LBW was also significantly higher in the H/FET subgroup than in the H/ET subgroup (16.2 vs. 4.9, p = 0.021). In the IVF/D group, the CPR and LBR were significantly higher in the D/FET subgroup than in the D/ET subgroup treated with fresh embryos (68.1 vs. 49.0, p = 0.017; 66.4 vs. 47.0, p = 0.010, respectively). In addition, the BDR was significantly higher in the D/FET subgroup than in the D/ET subgroup (1.0 vs. 0.0, p = 0.000).\n\nTable 2 The IVF outcomes of the 860 couples in one IVF/H (or IVF/D) cycle\nOutcome Indicators | H/ET1 | H/FET2 | (Χ²) P value (H/ET vs. H/FET) | D/ET3 | D/FET4 | (Χ²) P value (D/ET vs. D/FET)\nAfter fertilization:\nFertilization Number (n) a | 5 [3, 8] | 8 [5, 11] | 0.000*** | 8 [6, 12] | 8 [5, 11] | 0.342\nFertilization Rate (%) a | 71 [50, 89] | 80 [67, 91] | 0.001** | 67 [57, 88] | 75 [57, 84] | 0.822\nCleavage Number (n) a | 4 [2, 8] | 8 [5, 11] | 0.000*** | 8 [6, 11] | 7 [4, 11] | 0.309\nCleavage Rate (%) a | 67 [50, 88] | 100 [80, 100] | 0.000*** | 67 [57, 88] | 71 [55, 82] | 0.667\nHigh-Quality Embryos (n) a | 2 [1, 5] | 4 [3, 7] | 0.000*** | 6 [3, 8] | 4 [2, 7] | 0.105\nHigh-Quality Embryos Rate (%) a | 33 [10, 54] | 57 [36, 78] | 0.000*** | 52 [33, 67] | 45 [28, 60] | 0.070\nAfter transfer:\nBiochemical Pregnancy Rate (%) b | 0.0 | 1.4 | (2.377) 0.123 | 2.0 | 0.4 | (1.542) 0.214\nClinical Pregnancy Rate (%) b | 39.8 | 63.2 | (22.789) 0.000*** | 49.0 | 68.1 | (5.734) 0.017*\nLive Birth Rate (%) b | 39.1 | 61.1 | (20.134) 0.000*** | 47.0 | 66.4 | (6.598) 0.010*\nMultipregnancy Rate (%) b | 6.7 | 14.8 | (5.820) 0.016* | 14.3 | 9.2 | (1.137) 0.286\nMiscarriage Rate (%) b | 0.8 | 2.0 | (0.997) 0.318 | 2.0 | 2.1 | (0.001) 0.979\nTotal Baby Sex Ratio (%) b | 69.4 | 114.1 | (3.129) 0.077 | 121.4 | 82.0 | (1.036) 0.309\nLow Birth Weight Rate (%) b | 4.9 | 16.2 | (5.286) 0.021* | 19.4 | 15.8 | (0.029) 0.806\nBaby with Birth Defect Rate (%) b | 1.6 | 1.5 | (0.007) 0.933 | 0.0 | 1.0 | (216.712) 0.000***\nValues in the "After fertilization" rows are median [first quartile, third quartile].\nNote:\n1: H/ET subgroup = using fresh sperm and fresh embryo transfer;\n2: H/FET subgroup = using fresh sperm and frozen embryo transfer;\n3: D/ET subgroup = using frozen sperm and fresh embryo transfer;\n4: D/FET subgroup = using frozen sperm and frozen embryo transfer;\na: Nonparametric test (Kruskal–Wallis model)\nb: Chi-square test for independent samples\n* p < 0.05 **p < 0.01 ***p < 0.001\n Paternal factor exposure to frozen sperm fertilization: the comparison of clinical outcomes between Group IVF/H and Group IVF/D Using the chi-square test for independent samples, the comparison of clinical outcomes between Group IVF/H and Group IVF/D is shown in Figs. 2 and 3. In the D/ET subgroup using frozen (donor) sperm for fertilization, the HER was higher than that in the H/ET subgroup using fresh husband sperm (p < 0.05). In contrast, the FER, CLR and HER in the D/FET subgroup were lower than those in the H/FET subgroup using fresh husband sperm (all p < 0.05). After embryo transfer, the BDR in the D/ET subgroup was lower than that in the H/ET subgroup using fresh husband sperm (p < 0.001). Meanwhile, the MUR in the D/FET subgroup was lower than that in the H/FET subgroup using fresh husband sperm (p < 0.05).\n\nFig. 2 The comparison of fertilization outcomes between Group IVF/H and Group IVF/D. The values of the FER, CLR and HER in the H/ET and H/FET subgroups are shown in the blue histograms, and those of the D/ET and D/FET subgroups are shown in the red histograms. Note: FER = fertilization rate; CLR = cleavage rate; HER = high-quality embryo rate; * p < 0.05 **p < 0.01 ***p < 0.001\n\nFig. 3 The comparison of embryo transfer outcomes between Group IVF/H and Group IVF/D. The proportions of the BPR, CPR, LBR, MUR, MSR, TBSR, LBW and BDR in the H/ET and H/FET subgroups are shown in the blue histograms, and those of the D/ET and D/FET subgroups are shown in the red histograms. Note: BPR = biochemical pregnancy rate; CPR = clinical pregnancy rate; LBR = live birth rate; MUR = multipregnancy rate; MSR = miscarriage rate; TBSR = total baby sex ratio; LBW = low birth weight rate; BDR = baby with birth defect rate; * p < 0.05 **p < 0.01 ***p < 0.001\n Parental factors exposure to frozen sperm fertilization and frozen embryo transfer: the comparison of clinical outcomes between Group IVF/H and Group IVF/D When both frozen sperm and frozen embryo transfer were used (the D/FET subgroup), the HER was higher than when both fresh sperm and fresh embryo transfer were used (the H/ET subgroup) (p < 0.05). After embryo transfer, the CPR and LBR in the D/FET subgroup were higher than those in the H/ET subgroup (p < 0.001). Meanwhile, the LBW in the D/FET subgroup was higher than that in the H/ET subgroup (p < 0.05).", "A total of 860 couples undergoing IVF-ET treatment were included in the analysis, including 573 couples with husband sperm (Group IVF/H) and 287 couples with frozen donor sperm (Group IVF/D). According to whether or not frozen embryo transfer was performed, the IVF/H group was divided into the H/ET subgroup (133 couples treated by IVF/H and fresh embryo transfer) and the H/FET subgroup (440 couples treated by IVF/H and frozen embryo transfer); moreover, the IVF/D group was also divided into the D/ET subgroup (49 couples treated by IVF/D and fresh embryo transfer) and the D/FET subgroup (238 couples treated by IVF/D and frozen embryo transfer) (detailed in Fig. 1).", "Except for the female BMI in the FN subgroup, the variables were nonnormally distributed among the other subgroups. Thus, using the nonparametric test (Kruskal–Wallis model), we found that the median endometrial thickness (10 cm vs. 9 cm, p = 0.015), the number of oocytes retrieved (8 vs. 11, p = 0.000) and the number of MII oocytes (6 vs. 10, p = 0.000) differed within Group IVF/H, whereas only female age (30 years vs. 28 years, p = 0.002) differed within Group IVF/D. There were no significant differences (p > 0.05) within Group IVF/H or Group IVF/D in male age, semen parameters or female BMI (detailed in Table 1)."
, "Using the nonparametric test (Kruskal–Wallis model), the outcomes of fertilization and embryo transfer were compared within Group IVF/H and within Group IVF/D (see Table 2). Before embryo transfer, the medians of all the fertilization outcomes in the H/FET subgroup were higher than those in the H/ET subgroup. The median [first quartile, third quartile] fertilization rate (%), cleavage rate (%) and high-quality embryo rate (%) were 80 [67, 91], 100 [80, 100] and 57 [36, 78], respectively, in the H/FET subgroup. In contrast, there was no significant difference between the D/ET and D/FET subgroups. After frozen embryo transfer, the CPR, LBR and multipregnancy rate were significantly higher in the H/FET subgroup than in the H/ET subgroup treated with fresh embryos (63.2 vs. 39.8, p = 0.000; 61.1 vs. 39.1, p = 0.000; 14.8 vs. 6.7, p = 0.016, respectively). The LBW was also significantly higher in the H/FET subgroup than in the H/ET subgroup (16.2 vs. 4.9, p = 0.021). In the IVF/D group, the CPR and LBR were significantly higher in the D/FET subgroup than in the D/ET subgroup treated with fresh embryos (68.1 vs. 49.0, p = 0.017; 66.4 vs. 47.0, p = 0.010, respectively). In addition, the BDR was significantly higher in the D/FET subgroup than in the D/ET subgroup (1.0 vs. 0.0, p = 0.000) (detailed in Table 2).", "Using the chi-square test for independent samples, the comparison of clinical outcomes between Group IVF/H and Group IVF/D is shown in Figs. 2 and 3. In the D/ET subgroup using frozen (donor) sperm for fertilization, the HER was higher than that in the H/ET subgroup using fresh husband sperm (p < 0.05). In contrast, the FER, CLR and HER in the D/FET subgroup were lower than those in the H/FET subgroup using fresh husband sperm (all p < 0.05). After embryo transfer, the BDR in the D/ET subgroup was lower than that in the H/ET subgroup using fresh husband sperm (p < 0.001). Meanwhile, the MUR in the D/FET subgroup was lower than that in the H/FET subgroup using fresh husband sperm (p < 0.05) (see Figs. 2 and 3).", "When both frozen sperm and frozen embryo transfer were used (the D/FET subgroup), the HER was higher than when both fresh sperm and fresh embryo transfer were used (the H/ET subgroup) (p < 0.05). After embryo transfer, the CPR and LBR in the D/FET subgroup were higher than those in the H/ET subgroup (p < 0.001). Meanwhile, the LBW in the D/FET subgroup was higher than that in the H/ET subgroup (p < 0.05).", "The results of our study indicated that frozen sperm and frozen embryo transfer have different effects at different IVF stages: frozen sperm mainly increases the fertilization rate and reduces birth defects, while cryopreservation of embryos increases the pregnancy rate. For example, in the single-factor comparison, frozen embryo transfer was more conducive to the CPR and LBR, which increased to 63.2% and 61.1%, respectively, in the H/FET subgroup within the IVF/H group. The same pattern of IVF outcomes was also found within the IVF/D group. In recent years, there has been an accelerating trend toward the use of frozen embryo transfer, and several studies focusing on similar freezing strategies have been conducted [8, 9, 21, 24, 26]. Recently, Qianqian Z et al. collected a larger general population than other reports to explore the superiority of the freeze-embryo strategy in all IVF/H patients. The live birth rate (LBR) after the first complete IVF cycle was 50.74% with the freeze-embryo strategy, and for women younger than 31 years old, the LBR of that treatment cycle was 63.81% (95% CI: 62.80–64.80%). In our results, the LBR after frozen embryo transfer in the H/FET subgroup was 61.1% (the median age of the treated women was 32 years old); our report is therefore very close to that of Qianqian Z et al. Although meta-analyses now show clinical pregnancy rates and live birth rates after cryopreservation that are close to, or even higher than, those of fresh cycles [7, 9, 10], singletons born after FET have a higher risk of LBW [19]. In our data, the LBW (including singletons, twins and more) was significantly higher in the H/FET subgroup than in the H/ET subgroup (16.2 vs. 4.9, p = 0.021).\nOtherwise, frozen sperm fertilization had an advantage in the HER and BDR, which rose to 52% and fell to 0%, respectively, but only in the D/ET subgroup compared with the H/ET subgroup; a different pattern of IVF outcomes was found in the D/FET subgroup compared with the H/FET subgroup. Limited research has been performed on outcomes from IVF treatments with frozen donor sperm [5, 6, 25]. The French sperm bank network covers 22 centers of sperm cryopreservation working under the same rules: the CECOS. In that prospective study, 3689 pregnancies achieved after IVF with frozen donor spermatozoa (IVF/D) were followed and reported to the central CECOS between 1987 and 1994; the miscarriage rate (MSR) was 21.5%, the multipregnancy rate (MUR) was 29% (including twins, triplets and more), the low birth weight rate (LBW) was 4.7%, and the baby with birth defect rate (BDR) was 2.7%. In our study, most of the neonatal characteristics achieved after IVF/D were better, showing a lower MSR (2.0–2.1%), a lower MUR (9.2–14.3%) and a lower BDR (0.0–1.0%). The gaps between these two studies may be due to the younger donors (the median age was 23 years old) or to the rapid development of IVF technology itself over these decades, i.e., using frozen embryo transfer to avoid the deleterious effects of controlled ovarian stimulation on the endometrium [8, 9, 27]. Admittedly, the frozen spermatozoa came from donors with normal semen parameters, whereas the fresh spermatozoa came from the men of the couples undergoing IVF. Although our inclusion criteria required fresh spermatozoa with normal standard semen parameters, we still cannot exclude impaired sperm genomic or proteomic quality underlying the couples' diagnosis of infertility [20, 28].\nInterestingly, we found for the first time that exposure to both frozen sperm and frozen embryo transfer had a complementary effect compared with the use of fresh sperm and embryos. The combined data showed increases in the HER (p < 0.05), CPR (p < 0.001), LBR (p < 0.001) and LBW (p < 0.05) after the completion of IVF treatment. The HER gain may derive from the paternal effects of frozen sperm, while the other three values (CPR, LBR and LBW) may derive from the maternal effects of frozen embryo transfer. This complementary effect may be explained by the physiological roles of sperm and embryos: the sperm initiates the process of fertilization, while the embryo plays the key role in the pregnancy. Furthermore, the freezing of sperm acts as an artificial selection, eliminating damaged sperm (probably including those with defects of the acrosome, sperm membrane or sperm DNA) [29–32], and the cryopreservation of embryos keeps the transfer in step with the growth of the endometrium.\nOver the last 70 years, the cryobiology of reproductive cells (sperm and oocytes), embryos, blastomeres, and ovarian and testicular tissue has made rapid progress and has been widely used in human reproductive medicine [22]. Unfortunately, attention has focused mainly on the survival and viability of cells after the freezing and thawing processes, which can result in a live-born baby; little is known about the long-term development of newborns derived from paternal (or maternal) frozen gametes, or even about the genomic integrity of such frozen cells and tissues [23, 33]. More basic research on the mechanisms of the cellular adaptations to cryopreservation is needed. Eventually, we may uncover some of the cellular defense mechanisms that make cryopreserved sperm/embryos better able to survive.", "This study demonstrates that frozen sperm and frozen embryo transfer have distinct effects at different IVF stages. In particular, the use of frozen embryos and frozen sperm has complementary effects on IVF outcomes. Basic research on the mechanisms of the cellular adaptations to cryopreservation is needed to support our findings." ]
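For readers who want to replay the Methods' statistical pipeline outside SPSS, the sketch below mirrors the described steps (normality screen, median [Q1, Q3] summaries, then a Kruskal–Wallis test) in Python with scipy. The data frame and its column names are hypothetical stand-ins generated at random, not the study's records.

```python
# Minimal sketch of the statistical pipeline described in the Methods.
# The study used SPSS 19.0; scipy is shown here purely for illustration,
# and the data below are synthetic stand-ins for the per-couple records.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "subgroup": rng.choice(["H/ET", "H/FET"], size=200),
    "oocytes_retrieved": rng.poisson(lam=10, size=200),  # stand-in outcome
})

# Step 1: normality screen per subgroup (Shapiro-Wilk; the paper also
# mentions a Kolmogorov-Smirnov model as the other screen).
for name, grp in df.groupby("subgroup"):
    stat, p_norm = stats.shapiro(grp["oocytes_retrieved"])
    # Step 2: report median [first quartile, third quartile].
    q1, med, q3 = np.percentile(grp["oocytes_retrieved"], [25, 50, 75])
    print(f"{name}: {med:.0f} [{q1:.0f}, {q3:.0f}] (Shapiro p = {p_norm:.3f})")

# Step 3: nonnormal continuous outcome -> Kruskal-Wallis between subgroups.
samples = [g["oocytes_retrieved"].to_numpy() for _, g in df.groupby("subgroup")]
h_stat, p_kw = stats.kruskal(*samples)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
```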
[ "introduction", null, null, null, "results", null, null, null, null, null, "discussion", "conclusion" ]
[ "Cryopreservation", "IVF outcome", "Sperm", "Embryo", "Accumulative effect" ]
Introduction: To synchronize embryo development with the growth of the endometrium, the transfer of frozen embryos has been widely used in assisted reproductive technology (ART). Since the first successful report of frozen embryo transfer (FET) [1], the cryopreservation of embryos has been an important strategy in the treatment of infertility. FET strategies contribute an additional 25–50% chance of pregnancy for couples who have cryopreserved embryos [2–4]. However, FET is not free from the risk of a higher multipregnancy rate (MPR) and low birth weight rate (LBW), even though the live birth rate (LBR) of frozen–thawed embryos is usually higher than that of fresh transferred embryos [5–8]. On the other hand, in vitro fertilization with frozen spermatozoa is used when the male partner has azoospermia or sperm retrieval difficulties on the day of egg retrieval; the female partners may also have tubal lesions or a history of failed artificial insemination with donor semen (AID) cycles [9, 10]. Meanwhile, the clinical consequences of alterations in the DNA integrity of semen following cryopreservation remain a matter of debate [11, 12]. Indeed, the rate of babies born without birth defects after IVF/D did not differ from that after IVF with fresh husband spermatozoa (97.3% vs. 97.4%) [13], but the clinical pregnancy rate (CPR) per transfer was higher after using frozen donor semen than after using the husbands’ semen (27.6% vs. 21.9%, respectively) [13]. Above all, freezing sperm, eggs or embryos is a double-edged sword for IVF outcomes, and clinicians who use it need to be cautious. In current practice, ART increasingly involves couples who need not only sperm freezing but also embryo freezing. There have been few studies of this double-factor freezing on IVF outcomes, including fertilization, pregnancy and childbirth. Most published studies are incomplete: they are retrospective or refer to cases involving either frozen embryo transfer or frozen donor spermatozoa alone. In this article, we cover the findings of both frozen embryo transfer (FET) and IVF/D. As far as ART procedures are concerned, FET and IVF/D lead to some specific questions requiring answers: (i) the effects of single-factor exposure to frozen embryo transfer on pregnancy and neonatal characteristics; (ii) the effects of single-factor exposure to frozen donor semen on fertilization, pregnancy and neonatal characteristics; and (iii) the influence of the duration of sperm and embryo cryopreservation on pregnancy and newborn health. Methods: This study was approved by the institutional review board (Ethical Committee) of the Obstetrics and Gynaecology Hospital of Fudan University, and written informed consent was obtained from all couples. All methods were performed in accordance with the relevant guidelines and regulations. We retrospectively analyzed the IVF treatment of couples experiencing infertility with frozen-thawed donor sperm (Group IVF/D) or fresh husband sperm (Group IVF/H) at Shanghai Ji Ai Genetics and IVF Institute (Jan. 2013–Feb. 2019), after which we followed up until the birth of the baby. For Group IVF/D, the donor’s inclusion criteria were normal sperm parameters above World Health Organization guidelines [14]. 
For Group IVF/H, the husband’s inclusion criteria were also normal sperm parameters according to World Health Organization guidelines [14]. The inclusion criteria for women in both groups were those undergoing their first IVF-ET cycle with infertility caused only by tubal obstruction. Women with premature ovarian failure, polycystic ovarian syndrome, chromosome abnormalities, habitual abortion or other diseases that cause infertility were excluded from this study. In addition, male patients with anti-sperm antibodies, a high DNA fragmentation index, non-ejaculation, retrograde ejaculation, chromosome abnormalities or other diseases that cause infertility were also excluded. The semen samples were processed as follows. According to the World Health Organization guidelines [14], donors’ semen samples were obtained by masturbation. After liquefaction of the fresh ejaculate, the specimens’ characteristics, such as volume, count and motility, were evaluated. The qualified donors met the National Sperm Bank reference values for semen parameters (before freezing: volume ≥ 2.0 ml, concentration ≥ 60 million/ml, progressive motility ≥ 60%, normal sperm morphology ≥ 70%; after thawing: concentration ≥ 15 million/ml, progressive motility ≥ 32%, normal sperm morphology ≥ 4%). On the day of egg collection, the semen samples were thawed in a water bath at 37 ℃. Then, the semen sample was treated by the density gradient method (45% and 90% gradient solution, Vitrolife, Gothenburg, Sweden), by which the samples were centrifuged at 500 g for 20 min. After removing the supernatant, the precipitate was washed with washing solution (Vitrolife, Gothenburg, Sweden) and centrifuged at 300 g for 10 min. For the husbands, all semen samples were obtained by masturbation on the day of egg collection, and the fresh sperm were used in the IVF-ET cycles. Ovulation, fertilization and embryo transfer were processed as follows: controlled ovarian hyperstimulation (COH) used a gonadotropin-releasing hormone (GnRH) agonist protocol (described below as a shortened version of the long protocol) or a GnRH antagonist protocol, both of which are effective in blocking a premature LH surge [15, 16]. Generally, for the long protocol, a GnRH agonist (Triptorelin Acetate, Ipsen Pharma Biotech, France) was administered by subcutaneous injection daily, starting from the luteal phase of the menstrual cycle, for 10–14 days, and then ovarian stimulation with rFSH (Gn-F, Merck Serono SA Aubonne Branch, Switzerland) commenced. For the GnRH antagonist protocol, ovarian stimulation began on the second day of the menstrual cycle, and on the fifth day, antagonist (Cetrorelix Acetate, Cetrotide, Serono Labs Inc., Switzerland) administration started. Once the leading two follicles reached 18 mm or larger in diameter, hCG was injected as a trigger on the same day. Thirty-five to forty hours later, a doctor punctured the follicles and collected the eggs under the guidance of an ultrasound instrument. The sperm and oocytes (10000:1) were added to 0.1 ml of prebalanced embryo culture medium in four-well plates covered with mineral oil. The evaluation of embryo quality was performed on the 3rd day after fertilization. The embryos were divided into four levels according to the characteristics of the blastomeres, such as their number, morphology and fragmentation. 
Grade I and Grade II embryos were defined as high-quality embryos, and the other grades were defined as low-quality embryos [17, 18]. When the number of high-quality embryos was two or lower and the woman was younger than 35 years old, the embryos were frozen for transfer. All transferred embryos were 4-cell embryos. The frozen embryos were thawed according to the rapid recovery method of vitrification. Embryos with a recovery rate above 50% could be used for transfer. After embryo transfer, the patients were followed up until birth. Grouping and statistical analysis: According to the treatment of embryo transfer, Group IVF/D was divided into the D/ET subgroup (treated with frozen donor sperm and fresh embryo transfer) and the D/FET subgroup (treated with frozen donor sperm and frozen embryo transfer). Group IVF/H was also divided into the H/ET subgroup (treated with fresh husband sperm and fresh embryo transfer) and the H/FET subgroup (treated with fresh husband sperm and frozen embryo transfer). The detailed research grouping flow chart is shown in Fig. 1 (Fig. 1: The research grouping flow chart). The female-related factors, including age, GnRH injection days and dosage, the estrogen peak at egg collection, endometrial thickness, the number of eggs retrieved, the number of fertilizations, the number of effective embryos and the number of high-quality embryos, were followed up and analyzed with SPSS 19.0 software (SPSS Inc., Cambridge, MA, USA). 
First, all the parameters were checked by a normal distribution test (Kolmogorov–Smirnov model and Shapiro–Wilk model). Then, the medians (first quartile, third quartile) of the continuous parameters were calculated. Finally, the variation in clinical outcomes within or between the IVF/D or IVF/H groups was compared using the nonparametric test (Kruskal–Wallis model) or chi square test for independent samples (p < 0.05). According to the treatment of embryo transfer, Group IVF/D was divided into the D/ET subgroup (treated with frozen donor sperm and Fresh embryo transfer) and the D/FET subgroup (treated with frozen donor sperm and frozen embryo transfer). Group IVF/H was also divided into the H/ET subgroup (treated with Fresh husband sperm and Fresh embryo transfer) and the H/FET subgroup (treated with Fresh husband sperm and frozen embryo transfer). The detail of the research grouping flow chat is shown in Fig. 1. Fig. 1The research grouping flow chat The research grouping flow chat The female-related factors, including age, GnRH injection days and dosage, the estrogen secretion peak of egg collection, endometrium thickness, the number of eggs retrieved, the number of fertilizations, the number of effective embryos, the number of high-quality embryos, etc., were followed up and analyzed by SPSS 19.0 software (SPSS Inc., Cambridge, MA, USA). First, all the parameters were checked by a normal distribution test (Kolmogorov–Smirnov model and Shapiro–Wilk model). Then, the medians (first quartile, third quartile) of the continuous parameters were calculated. Finally, the variation in clinical outcomes within or between the IVF/D or IVF/H groups was compared using the nonparametric test (Kruskal–Wallis model) or chi square test for independent samples (p < 0.05). The processing of ovulation, fertilization and embryo transfer were as follows: controlled ovarian hyperstimulation (COH) used a gonadotropin-releasing hormone agonist protocol (the following was a shortened version of the long protocol) or a GnRH antagonist protocol, both of which are effective in blocking a premature LH surge [15, 16]. Generally, for the long protocol, a GnRH agonist (Triptorelin Acetate, Ipsen Pharma Biotech, France) was administered by subcutaneous injection daily starting from the luteal phase of the menstrual cycle for 10–14 days, and then ovarian stimulation with rFSH (Gn-F, Merck Serono SA Aubonne Branch, Switzerland) commenced. For the GnRH antagonist protocol, ovarian stimulation began on the second day of the menstrual cycle, and on the fifth day, antagonist (Cetrorelix Acetate, Cetrotide, Serono Labs Inc., Switzerland) administration started. Once the leading two follicles reached 18 mm or larger in diameter, the hCG administration was ejected as a trigger on the same day. Thirty-five to forty hours later, a doctor punctured the follicles and collected the eggs with the guidance of an ultrasound instrument. The sperm and oocytes (10000:1) were added to 0.1 ml of prebalanced embryo culture medium into four-well plates covered with mineral oil. The evaluation of embryo quality was performed on the 3rd day after fertilization. The embryos were divided into four levels according to the characteristics of blastomeres, such as the number, the morphology and fragments. Grade I and Grade II embryos were defined as high-quality embryos, and the other grades were defined as low-quality embryos [17, 18]. 
Results:

Participants and grouping:
A total of 860 couples undergoing IVF-ET treatment were included in the analysis: 573 couples using husband sperm (Group IVF/H) and 287 couples using frozen donor sperm (Group IVF/D). According to whether frozen embryo transfer was performed, the IVF/H group was divided into the H/ET subgroup (133 couples treated with IVF/H and fresh embryo transfer) and the H/FET subgroup (440 couples treated with IVF/H and frozen embryo transfer); likewise, the IVF/D group was divided into the D/ET subgroup (49 couples treated with IVF/D and fresh embryo transfer) and the D/FET subgroup (238 couples treated with IVF/D and frozen embryo transfer) (detailed in Fig. 1).

The baseline characteristics of the couples:
Except for the female BMI in the FN subgroup, the variables were nonnormally distributed among the other subgroups. Thus, using the nonparametric Kruskal–Wallis test, we found that the median endometrial thickness (10 cm vs. 9 cm, p = 0.015), oocytes retrieved (8 vs. 11, p = 0.000) and MII oocytes (6 vs. 10, p = 0.000) differed within Group IVF/H, whereas only female age (30 years vs. 28 years, p = 0.002) differed within Group IVF/D. There were no significant differences (p > 0.05) within Group IVF/H or Group IVF/D in male age, semen parameters or female BMI (detailed in Table 1).

Table 1. The basic characteristics of the 860 couples in one IVF/H (or IVF/D) cycle

Baseline parameter | H/ET (1) | H/FET (2) | P (H/ET vs. H/FET) | D/ET (3) | D/FET (4) | P (D/ET vs. D/FET)
Inclusion samples (n) | 133 | 440 | / | 49 | 238 | /
Male age (husband or donor) (a) | 32 [29, 34] | 32 [29, 34] | 0.731 | 23 [22, 26] | 23 [21, 27] | 0.967
Semen volume (ml) (a) | 3 [2, 3] | 3 [3] | 0.348 | 2 [2, 2] | 2 [2, 2] | 0.608
Semen concentration (10^6/ml) (a) | 56 [48, 64] | 54 [48, 62] | 0.734 | 46 [43, 53] | 47 [43, 51] | 0.974
Semen (PR + NP) (%) (a) | 56 [50, 60] | 54 [50, 60] | 0.931 | 46 [43, 54] | 48 [43, 52] | 0.574
Female age (a) | 33 [30, 35] | 32 [30, 35] | 0.644 | 30 [28, 35] | 28 [26, 32] | 0.002**
Female BMI (kg/m2) (a) | 21 [24, 25] | 21 [26, 27] | 0.862 | 22 [20, 23] | 21 [20, 24] | 0.862
Endometrial thickness (cm) (a) | 10 [8, 12] | 9 [7, 11] | 0.015* | 10 [8, 12] | 10 [8, 12] | 0.805
Oocytes retrieved (n) (a) | 8 [4, 11] | 11 [8, 16] | 0.000*** | 11 [8, 15] | 11 [8, 15] | 0.590
MII oocytes (n) (a) | 6 [4, 10] | 10 [6, 14] | 0.000*** | 9 [7, 13] | 9 [6, 12] | 0.721
MII oocytes (%) (a) | 93 [78, 100] | 91 [82, 100] | 0.660 | 81 [68, 96] | 86 [73, 96] | 0.503

Values are median [first quartile, third quartile]. Note: (1) H/ET subgroup = fresh sperm and fresh embryo transfer; (2) H/FET subgroup = fresh sperm and frozen embryo transfer; (3) D/ET subgroup = frozen sperm and fresh embryo transfer; (4) D/FET subgroup = frozen sperm and frozen embryo transfer; (a) nonparametric test (Kruskal–Wallis model). * p < 0.05, ** p < 0.01, *** p < 0.001.
Maternal factor exposure to frozen embryo transfer: the comparison of clinical outcomes within Group IVF/H or Group IVF/D:
Using the nonparametric Kruskal–Wallis test, the outcomes of fertilization and embryo transfer were compared within Group IVF/H and within Group IVF/D (see Table 2). Before embryo transfer, the medians of all the fertilization outcomes in the H/FET subgroup were higher than those in the H/ET subgroup: the median [first quartile, third quartile] fertilization rate, cleavage rate and high-quality embryo rate were 80% [67, 91], 100% [80, 100] and 57% [36, 78], respectively, in the H/FET subgroup. In contrast, there was no significant difference between the D/ET and D/FET subgroups. After transfer, the CPR, LBR and MUR were significantly higher in the H/FET subgroup (frozen embryo transfer) than in the H/ET subgroup (fresh embryo transfer) (63.2% vs. 39.8%, p = 0.000; 61.1% vs. 39.1%, p = 0.000; 14.8% vs. 6.7%, p = 0.016, respectively). The LBW was also significantly higher in the H/FET subgroup than in the H/ET subgroup (16.2% vs. 4.9%, p = 0.021). In the IVF/D group, the CPR and LBR were significantly higher in the D/FET subgroup than in the D/ET subgroup (68.1% vs. 49.0%, p = 0.017; 66.4% vs. 47.0%, p = 0.010, respectively), and the BDR was significantly higher in the D/FET subgroup than in the D/ET subgroup (1.0% vs. 0.0%, p = 0.000).

Table 2. The IVF outcomes of the 860 couples in one IVF/H (or IVF/D) cycle

Outcome indicator | H/ET (1) | H/FET (2) | (χ2) P (H/ET vs. H/FET) | D/ET (3) | D/FET (4) | (χ2) P (D/ET vs. D/FET)
After fertilization
Fertilization number (n) (a) | 5 [3, 8] | 8 [5, 11] | 0.000*** | 8 [6, 12] | 8 [5, 11] | 0.342
Fertilization rate (%) (a) | 71 [50, 89] | 80 [67, 91] | 0.001** | 67 [57, 88] | 75 [57, 84] | 0.822
Cleavage number (n) (a) | 4 [2, 8] | 8 [5, 11] | 0.000*** | 8 [6, 11] | 7 [4, 11] | 0.309
Cleavage rate (%) (a) | 67 [50, 88] | 100 [80, 100] | 0.000*** | 67 [57, 88] | 71 [55, 82] | 0.667
High-quality embryos (n) (a) | 2 [1, 5] | 4 [3, 7] | 0.000*** | 6 [3, 8] | 4 [2, 7] | 0.105
High-quality embryo rate (%) (a) | 33 [10, 54] | 57 [36, 78] | 0.000*** | 52 [33, 67] | 45 [28, 60] | 0.070
After transfer
Biochemical pregnancy rate (%) (b) | 0.0 | 1.4 | (2.377) 0.123 | 2.0 | 0.4 | (1.542) 0.214
Clinical pregnancy rate (%) (b) | 39.8 | 63.2 | (22.789) 0.000*** | 49.0 | 68.1 | (5.734) 0.017*
Live birth rate (%) (b) | 39.1 | 61.1 | (20.134) 0.000*** | 47.0 | 66.4 | (6.598) 0.010*
Multipregnancy rate (%) (b) | 6.7 | 14.8 | (5.820) 0.016* | 14.3 | 9.2 | (1.137) 0.286
Miscarriage rate (%) (b) | 0.8 | 2.0 | (0.997) 0.318 | 2.0 | 2.1 | (0.001) 0.979
Total baby sex ratio (%) (b) | 69.4 | 114.1 | (3.129) 0.077 | 121.4 | 82.0 | (1.036) 0.309
Low birth weight rate (%) (b) | 4.9 | 16.2 | (5.286) 0.021* | 19.4 | 15.8 | (0.029) 0.806
Baby with birth defect rate (%) (b) | 1.6 | 1.5 | (0.007) 0.933 | 0.0 | 1.0 | (216.712) 0.000***

Continuous values are median [first quartile, third quartile]. Note: (1) H/ET subgroup = fresh sperm and fresh embryo transfer; (2) H/FET subgroup = fresh sperm and frozen embryo transfer; (3) D/ET subgroup = frozen sperm and fresh embryo transfer; (4) D/FET subgroup = frozen sperm and frozen embryo transfer; (a) nonparametric test (Kruskal–Wallis model); (b) chi-square test for independent samples. * p < 0.05, ** p < 0.01, *** p < 0.001.
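The chi-square values reported after transfer can be sanity-checked by rebuilding approximate 2x2 contingency tables from the published percentages and subgroup sizes. The sketch below does this for the H/ET vs. H/FET clinical pregnancy comparison; the counts are back-calculated estimates, not the authors' raw data.

```python
# Reconstruct an approximate 2x2 table (pregnant vs. not) from the reported
# clinical pregnancy rates (H/ET 39.8% of 133; H/FET 63.2% of 440) and
# compare it with the chi-square statistic reported in Table 2 (22.789).
import numpy as np
from scipy.stats import chi2_contingency

pregnant_h_et = round(0.398 * 133)   # ~53 of 133
pregnant_h_fet = round(0.632 * 440)  # ~278 of 440
table = np.array([
    [pregnant_h_et, 133 - pregnant_h_et],
    [pregnant_h_fet, 440 - pregnant_h_fet],
])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.2e}")  # close to the reported (22.789) 0.000
```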
Paternal factor exposure to frozen sperm fertilization: the comparison of clinical outcomes between Group IVF/H and Group IVF/D:
Using the chi-square test for independent samples, the clinical outcomes of Group IVF/H and Group IVF/D were compared, as shown in Figs. 2 and 3. In the D/ET subgroup, which used frozen (donor) sperm for fertilization, the HER was higher than in the H/ET subgroup, which used fresh husband sperm (p < 0.05). In contrast, the FER, CLR and HER in the D/FET subgroup were lower than those in the H/FET subgroup using fresh husband sperm (all p < 0.05). After embryo transfer, the BDR in the D/ET subgroup was lower than that in the H/ET subgroup (p < 0.001), and the MUR in the D/FET subgroup was lower than that in the H/FET subgroup (p < 0.05).

Fig. 2. The comparison of fertilization outcomes between Group IVF/H and Group IVF/D. The values of the FER, CLR and HER in the H/ET and H/FET subgroups are shown in the blue histograms, and those of the D/ET and D/FET subgroups in the red histograms. Note: FER = fertilization rate; CLR = cleavage rate; HER = high-quality embryo rate; * p < 0.05, ** p < 0.01, *** p < 0.001.

Fig. 3. The comparison of embryo transfer outcomes between Group IVF/H and Group IVF/D. The proportions of the BPR, CPR, LBR, MUR, MSR, TBSR, LBW and BDR in the H/ET and H/FET subgroups are shown in the blue histograms, and those of the D/ET and D/FET subgroups in the red histograms. Note: BPR = biochemical pregnancy rate; CPR = clinical pregnancy rate; LBR = live birth rate; MUR = multipregnancy rate; MSR = miscarriage rate; TBSR = total baby sex ratio; LBW = low birth weight rate; BDR = baby with birth defect rate; * p < 0.05, ** p < 0.01, *** p < 0.001.

Parental factors exposure to frozen sperm fertilization and frozen embryo transfer: the comparison of clinical outcomes between Group IVF/H and Group IVF/D:
When both frozen sperm and frozen embryo transfer were used (the D/FET subgroup), the HER was higher than with both fresh sperm and fresh embryo transfer (the H/ET subgroup) (p < 0.05). After embryo transfer, the CPR and LBR in the D/FET subgroup were higher than those in the H/ET subgroup (p < 0.001), and the LBW in the D/FET subgroup was higher than that in the H/ET subgroup (p < 0.05).

Discussion:
The results of our study indicate that frozen sperm and frozen embryo transfer have different effects at different stages of IVF: frozen sperm mainly improved fertilization outcomes and reduced birth defects, while embryo cryopreservation mainly increased pregnancy rates. For example, in the single-factor comparison, frozen embryo transfer was more conducive to the CPR and LBR, which increased to 63.2% and 61.1% in the H/FET subgroup within the IVF/H group; the same pattern of IVF outcomes was found within the IVF/D group. In recent years, there has been an accelerating trend toward the use of frozen embryo transfer, and several studies of similar freezing strategies have been conducted [8, 9, 21, 24, 26]. Recently, Qianqian Z et al. collected a larger general population than other reports to explore the superiority of the freeze-embryo strategy in all IVF/H patients. The live birth rate (LBR) after the first complete IVF cycle was 50.74% with the freeze-embryo strategy, and for women younger than 31 years the LBR of that treatment cycle was 63.81% (95% CI: 62.80–64.80%). In our results, the LBR after frozen embryo transfer in the H/FET subgroup was 61.1% (the median age of the treated women was 32 years), extremely close to the report of Qianqian Z et al. Although meta-analyses now show clinical pregnancy rates and live birth rates after cryopreservation to be close to, or even higher than, those of fresh cycles [7, 9, 10], singletons born after FET have a higher risk of LBW [19].
In our data, the LBW (including singletons, twins and higher-order births) was significantly higher in the H/FET subgroup than in the H/ET subgroup (16.2% vs. 4.9%, p = 0.021). Conversely, frozen sperm fertilization showed an advantage in the HER and BDR, which rose to 52% and fell to 0%, respectively, but only in the D/ET subgroup compared with the H/ET subgroup; different IVF outcomes were found when comparing the D/FET subgroup with the H/FET subgroup. Limited research has been performed on outcomes of IVF treatment with frozen donor sperm [5, 6, 25]. The French sperm bank network covers 22 centers of sperm cryopreservation working under the same rules: the CECOS. In a prospective study, 3689 pregnancies achieved after IVF with frozen donor spermatozoa (IVF/D) were followed and reported to the central CECOS between 1987 and 1994. In that study, the miscarriage rate (MSR) was 21.5%, the multipregnancy rate (MUR) was 29% (including twins, triplets and more), the low birth weight rate (LBW) was 4.7%, and the baby with birth defect rate (BDR) was 2.7%. In our study, most of the neonatal outcomes achieved after IVF/D were better, with a lower MSR (2–2.1%), a lower MUR (9.2–14.3%) and a lower BDR (0.0–1.0%). The gaps between the two studies may be due to the younger donors (median age, 23 years) or to the rapid development of IVF technology over these decades, i.e., the use of frozen embryo transfer to avoid the deleterious effects of controlled ovarian stimulation on the endometrium [8, 9, 27]. A possible confounder is that the frozen spermatozoa came from donors with normal semen parameters, whereas the fresh spermatozoa came from the men of couples undergoing IVF. Although our inclusion criteria restricted fresh spermatozoa to samples with normal standard semen parameters, we still cannot exclude impaired sperm genomic or proteomic quality in couples diagnosed with infertility [20, 28]. Interestingly, we found for the first time that exposure to both frozen sperm and frozen embryo transfer had a complementary effect compared with the use of fresh sperm and fresh embryos: the combined data showed increases in the HER (p < 0.05), CPR (p < 0.001), LBR (p < 0.001) and LBW (p < 0.05) after completion of IVF treatment. The higher HER may derive from the paternal effect of frozen sperm, while the other three values (CPR, LBR and LBW) may derive from the maternal effect of frozen embryo transfer. This complementary effect may be explained by the physiological roles of sperm and embryos: the sperm initiates fertilization, while the embryo plays the key role in pregnancy. Furthermore, sperm freezing acts as a form of artificial selection that eliminates damaged sperm (probably including sperm with defects of the acrosome, membrane or DNA) [29–32], and embryo cryopreservation keeps the transfer in step with the growth of the endometrium. Over the last 70 years, the cryobiology of reproductive cells (sperm and oocytes), embryos, blastomeres, and ovarian and testicular tissue has made rapid progress and has been widely applied in human reproductive medicine [22]. Unfortunately, attention has focused mainly on the survival and viability of cells after freezing and thawing, i.e., on whether they can result in a live-born baby. However, little is known about the long-term development of newborns derived from paternal (or maternal) frozen gametes, or about the genomic integrity of such frozen cells and tissues [23, 33]. More basic research on the mechanisms of cellular adaptation to cryopreservation is needed; eventually, we may uncover some of the cellular defense mechanisms that make cryopreserved sperm and embryos better able to survive.

Conclusion:
This study demonstrates that frozen sperm and frozen embryo transfer have distinct effects at different stages of IVF. In particular, the use of frozen embryos and frozen sperm has complementary effects on IVF outcomes. Basic research on the mechanisms of cellular response and adaptation to cryopreservation is needed to support our findings.
Background: The cryopreservation of sperm or embryos has been an important strategy in the treatment of infertility. Recent studies have revealed the outcomes after IVF (in vitro fertilization) treatment for single-factor exposure to either frozen sperm or frozen embryos. Methods: This retrospective study aimed to uncover the effects of exposure to both frozen sperm and frozen embryos, using IVF/H (in vitro fertilization using husbands' fresh sperm) or IVF/D (in vitro fertilization using donors' frozen sperm) treatment. Results: The clinical pregnancy rate (CPR), live birth rate (LBR) and low birth weight rate (LBW) increased to 63.2% (or 68.1%), 61.1% (or 66.4%) and 16.2% (or 15.8%) after frozen embryo transfer within Group IVF/H (or Group IVF/D). After the use of frozen sperm, the high-quality embryo rate (HER) increased to 52% and the baby with birth defect rate (BDR) fell to 0% in subgroup D/ET compared with subgroup H/ET, while the fertilization rate (FER), cleavage rate (CLR), HER and multiple pregnancy rate (MUR) fell to 75%, 71%, 45% and 9.2% in subgroup D/FET compared with subgroup H/FET. Finally, our study found accumulative frozen-gamete effects, involving both sperm and embryos, that led to significant increases in the HER (p < 0.05), CPR (p < 0.001), LBR (p < 0.001) and LBW (p < 0.05) in subgroup D/FET compared with subgroup H/ET. Conclusions: The use of frozen embryos and frozen sperm has complementary effects on IVF outcomes. Our findings highlight the parents' distinct freezing effects, which matter not only for clinical studies but also for basic research on the mechanisms of cellular response and adaptation to cryopreservation.
Introduction: To keep the division of the embryo in step with the growth of the endometrium, the transfer of frozen embryos has been widely used in assisted reproductive technology (ART). Since the first successful report of frozen embryo transfer (FET) [1], the cryopreservation of embryos has been an important strategy in the treatment of infertility. FET strategies contribute an additional 25–50% chance of pregnancy for couples who have cryopreserved embryos [2–4]. However, FET is not free from the risk of a higher multipregnancy rate (MPR) and low birth weight rate (LBW), even though the live birth rate (LBR) of frozen–thawed embryos is usually higher than that of freshly transferred embryos [5–8]. On the other hand, in cases of male azoospermia or sperm retrieval difficulties on the day of egg retrieval, in vitro fertilization with frozen donor spermatozoa is used to treat couples in which the woman may also have tubal lesions or has experienced failure of prior artificial insemination with donor semen (AID) cycles [9, 10]. Meanwhile, the clinical consequences of alterations in the DNA integrity of semen following cryopreservation remain a matter of debate [11, 12]. Indeed, the rate of babies with no birth defect did not differ between pregnancies after IVF/D and pregnancies after IVF with fresh husband spermatozoa (97.3% vs. 97.4%) [13], but the clinical pregnancy rate (CPR) per transfer was higher after using frozen donor semen than after using the husbands' semen (27.6% vs. 21.9%, respectively) [13]. Above all, freezing sperm, eggs or embryos is a double-edged sword for IVF outcomes, so doctors who use it need to be cautious. In practice, ART also meets couples whose treatment requires not only freezing sperm but also freezing embryos, yet there have been few studies of this double-factor freezing on IVF outcomes, including fertilization, pregnancy and childbirth. Most published studies are incomprehensive: they are retrospective or refer to cases involving either frozen embryo transfer or frozen donor spermatozoa. In this article, we cover the findings of both frozen embryo transfer (FET) and IVF/D. As far as ART procedures are concerned, FET and IVF/D lead to some specific questions requiring answers: (i) the effects of single-factor exposure to frozen embryo transfer on pregnancy and neonatal characteristics; (ii) the effects of single-factor exposure to frozen donor semen on fertilization, pregnancy and neonatal characteristics; and (iii) the influence of the duration of sperm and embryo cryopreservation on pregnancy and newborn health. Conclusion: This study demonstrates that frozen sperm and frozen embryo transfer have distinct effects at different stages of IVF. In particular, the use of frozen embryos and frozen sperm has complementary effects on IVF outcomes. Basic research on the mechanisms of cellular response and adaptation to cryopreservation is needed to support our findings.
11,981
373
[ 1853, 375, 284, 156, 677, 857, 723, 109 ]
12
[ "ivf", "embryo", "subgroup", "transfer", "sperm", "frozen", "fet", "embryo transfer", "rate", "fresh" ]
[ "use frozen embryos", "embryo transfer frozen", "sperm embryo cryopreservation", "frozen sperm embryo", "transferring frozen embryos" ]
null
[CONTENT] Cryopreservation | IVF outcome | Sperm | Embryo | Accumulative effect [SUMMARY]
null
[CONTENT] Cryopreservation | IVF outcome | Sperm | Embryo | Accumulative effect [SUMMARY]
[CONTENT] Cryopreservation | IVF outcome | Sperm | Embryo | Accumulative effect [SUMMARY]
[CONTENT] Cryopreservation | IVF outcome | Sperm | Embryo | Accumulative effect [SUMMARY]
[CONTENT] Cryopreservation | IVF outcome | Sperm | Embryo | Accumulative effect [SUMMARY]
[CONTENT] Pregnancy | Female | Male | Humans | Retrospective Studies | Spouses | Semen | Fertilization in Vitro | Pregnancy Rate | Spermatozoa [SUMMARY]
null
[CONTENT] Pregnancy | Female | Male | Humans | Retrospective Studies | Spouses | Semen | Fertilization in Vitro | Pregnancy Rate | Spermatozoa [SUMMARY]
[CONTENT] Pregnancy | Female | Male | Humans | Retrospective Studies | Spouses | Semen | Fertilization in Vitro | Pregnancy Rate | Spermatozoa [SUMMARY]
[CONTENT] Pregnancy | Female | Male | Humans | Retrospective Studies | Spouses | Semen | Fertilization in Vitro | Pregnancy Rate | Spermatozoa [SUMMARY]
[CONTENT] Pregnancy | Female | Male | Humans | Retrospective Studies | Spouses | Semen | Fertilization in Vitro | Pregnancy Rate | Spermatozoa [SUMMARY]
[CONTENT] use frozen embryos | embryo transfer frozen | sperm embryo cryopreservation | frozen sperm embryo | transferring frozen embryos [SUMMARY]
null
[CONTENT] use frozen embryos | embryo transfer frozen | sperm embryo cryopreservation | frozen sperm embryo | transferring frozen embryos [SUMMARY]
[CONTENT] use frozen embryos | embryo transfer frozen | sperm embryo cryopreservation | frozen sperm embryo | transferring frozen embryos [SUMMARY]
[CONTENT] use frozen embryos | embryo transfer frozen | sperm embryo cryopreservation | frozen sperm embryo | transferring frozen embryos [SUMMARY]
[CONTENT] use frozen embryos | embryo transfer frozen | sperm embryo cryopreservation | frozen sperm embryo | transferring frozen embryos [SUMMARY]
[CONTENT] ivf | embryo | subgroup | transfer | sperm | frozen | fet | embryo transfer | rate | fresh [SUMMARY]
null
[CONTENT] ivf | embryo | subgroup | transfer | sperm | frozen | fet | embryo transfer | rate | fresh [SUMMARY]
[CONTENT] ivf | embryo | subgroup | transfer | sperm | frozen | fet | embryo transfer | rate | fresh [SUMMARY]
[CONTENT] ivf | embryo | subgroup | transfer | sperm | frozen | fet | embryo transfer | rate | fresh [SUMMARY]
[CONTENT] ivf | embryo | subgroup | transfer | sperm | frozen | fet | embryo transfer | rate | fresh [SUMMARY]
[CONTENT] embryos | pregnancy | frozen | semen | freezing | donor semen | art | rate | spermatozoa | ivf [SUMMARY]
null
[CONTENT] subgroup | ivf | rate | fet | embryo | group | group ivf | 000 | fresh | subgroup fresh [SUMMARY]
[CONTENT] frozen | frozen sperm | complementary ivf | outcomes basic research mechanism | especially use frozen | ivf stages especially use | especially use | especially | demonstrates | demonstrates frozen [SUMMARY]
[CONTENT] ivf | subgroup | embryo | frozen | transfer | sperm | fet | embryo transfer | rate | group [SUMMARY]
[CONTENT] ivf | subgroup | embryo | frozen | transfer | sperm | fet | embryo transfer | rate | group [SUMMARY]
[CONTENT] ||| IVF [SUMMARY]
null
[CONTENT] CPR | LBR | 63.2% | 68.1% | 61.1% | 66.4% | 15.8% | 16.2% | Group IVF/H | Group IVF/D ||| HER | 52% | BDR | 0% ||| FER | CLR | HER | MUR | 75% | 71% | 45% | 9.2% ||| HER | 0.05 | CPR | 0.001 | LBR | 0.001 | LBW (p < 0.05 [SUMMARY]
[CONTENT] IVF ||| [SUMMARY]
[CONTENT] ||| IVF ||| IVF/H ||| ||| CPR | LBR | 63.2% | 68.1% | 61.1% | 66.4% | 15.8% | 16.2% | Group IVF/H | Group IVF/D ||| HER | 52% | BDR | 0% ||| FER | CLR | HER | MUR | 75% | 71% | 45% | 9.2% ||| HER | 0.05 | CPR | 0.001 | LBR | 0.001 | LBW (p < 0.05 ||| IVF ||| [SUMMARY]
[CONTENT] ||| IVF ||| IVF/H ||| ||| CPR | LBR | 63.2% | 68.1% | 61.1% | 66.4% | 15.8% | 16.2% | Group IVF/H | Group IVF/D ||| HER | 52% | BDR | 0% ||| FER | CLR | HER | MUR | 75% | 71% | 45% | 9.2% ||| HER | 0.05 | CPR | 0.001 | LBR | 0.001 | LBW (p < 0.05 ||| IVF ||| [SUMMARY]
Anti-Cancer Effects of Carnosine-A Dipeptide Molecule.
33809496
Carnosine is a dipeptide molecule (β-alanyl-l-histidine) with anti-inflammatory, antioxidant, anti-glycation, and chelating properties. It is used in exercise physiology as a food supplement to increase performance; however, in vitro evidence suggests that carnosine may exhibit anti-cancer properties.
BACKGROUND
In this study, we investigated the effect of carnosine on breast, ovarian, colon, and leukemic cancer cell proliferation. We further examined U937 promonocytic, human myeloid leukemia cell phenotype, gene expression, and cytokine secretion to determine if these are linked to carnosine's anti-proliferative properties.
METHODS
Carnosine (1) inhibits breast, ovarian, colon, and leukemic cancer cell proliferation; (2) upregulates expression of pro-inflammatory molecules; (3) modulates cytokine secretion; and (4) alters U937 differentiation and phenotype.
RESULTS
These effects may have implications for a role for carnosine in anti-cancer therapy.
CONCLUSION
[ "Antineoplastic Agents", "Carnosine", "Cell Differentiation", "Cell Line, Tumor", "Cell Proliferation", "Cytokines", "Dipeptides", "Gene Expression", "HT29 Cells", "Humans", "Neoplasms", "U937 Cells" ]
8002160
1. Introduction
U937 is a promonocytic, human myeloid leukemia cell line that was originally isolated from a histiocytic lymphoma patient [1]. U937 cells are promonocytes that exhibit many characteristics of normal monocytes and so are commonly used as a model for peripheral blood mononuclear cells (PBMCs). Though the expression levels are variable, there is little overall difference in the pattern of cluster of differentiation (CD) marker expression between PBMCs and U937 cells [2]. U937 cells can also be differentiated in vitro using vitamin D3 or phorbol-12-myristate-13-acetate (PMA) to macrophages and dendritic cells [3]. Carnosine is a naturally occurring dipeptide molecule (β-alanyl-l-histidine) with anti-inflammatory, antioxidant, anti-glycation, and chelating properties [4]. It is found at endogenous concentrations of up to 20 mM in humans, predominantly in the brain and skeletal muscle [5]. Traditionally used in exercise physiology to increase performance, it is available over the counter as a food supplement. Animal studies suggest therapeutic benefits for many chronic diseases (e.g., type 2 diabetes, cardiovascular disease, stroke, Alzheimer's disease, and Parkinson's disease), but little in vivo evidence exists in humans [6]. There is, however, some in vitro evidence that suggests carnosine may exhibit anti-cancer properties. Carnosine can selectively inhibit proliferation of transformed cells [7]. Paradoxically, McFarland and Holliday had earlier reported that carnosine increased the Hayflick limit and slowed the growth of cultured human fibroblasts [8]. However, this may be explained by the metabolic differences between these two cell types; whereas most tumor cells rely heavily on glycolysis, in fibroblasts ATP synthesis predominantly occurs in the mitochondria [9]. In glioblastoma, carnosine inhibits proliferation of primary cell cultures derived from surgically removed tumors [10]. Carnosine also inhibits proliferation of gastric, colon, and ovarian cancer cell lines [11,12,13]. In this study, we sought to replicate these observations, extending them to breast cancer and promonocytic leukemia cells. We further determined promonocytic (U937) cell phenotype, gene expression, and cytokine secretion to determine whether these are linked to its anti-cancer properties and to identify potential pathways via which carnosine exerts its effects. In addition, U937 cells were differentiated to monocyte/macrophage-like cells, and changes in cell surface marker expression were determined using flow cytometry.
null
null
2. Results
2.1. Carnosine Inhibits Cancer Cell Proliferation
HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), and ZR-75-1 (breast) cancer cell lines were cultured with increasing concentrations of carnosine from 0 to 200 mM for 6 days. U937 cells were cultured with a carnosine dose range from 0 to 100 mM for 6 days. At 100–200 mM carnosine for each line, proliferation was notably inhibited by day 5, which was even more pronounced at day 6 (Figure 1). At 10–20 mM carnosine, there was no significant difference in proliferation between treated and untreated cells. However, in U937 cells, the proliferation curves show some evidence of titration. Therefore, this cell line was selected for further analysis.

2.2. Carnosine Upregulates Expression of Proinflammatory Molecules in U937 Cell
Gene array data showed differential regulation of a number of genes following exposure of U937 cells to carnosine (Figure 2A). Significant upregulation in gene expression was observed for IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNF (Figure 2B). IL-8, CCL2, and CCR5 are involved in chemotaxis; similarly, C3 is important for monocyte adhesion. CD86, IL-1β, Ly96, and TNFα are all involved in inflammatory pathways. We note that gene expression events that would result in changes in proliferation/protein expression would occur much earlier, and therefore, gene expression was assessed at an earlier time point, i.e., 24 h rather than 6 days. The results of the array were confirmed by RT-qPCR of CCL2, IL-8, and MAPK1 (Figure 2C). These genes were specifically selected for the validation study because CCL2 and IL-8 showed the largest fold-changes in the array, and MAPK1 by comparison exhibited no change. The RT-qPCR showed that CCL2 and IL-8 gene upregulation was observed at 25 mM carnosine, and that the effect became more pronounced with increasing concentrations in a dose-dependent manner.
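The RT-qPCR fold-changes used for validation in Section 2.2 are conventionally computed with the Livak 2^(-ΔΔCt) method; the sketch below shows that calculation. The Ct values (and the resulting fold-change) are invented for illustration and are not the study's measurements.

```python
# Relative gene expression by the standard 2^(-ddCt) (Livak) method.
# Ct values below are hypothetical.

def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Target-gene expression in treated vs. control cells, normalized
    to a reference (housekeeping) gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g., IL-8 in carnosine-treated vs. untreated U937 cells (hypothetical Cts)
print(fold_change(22.1, 18.0, 25.3, 18.2))  # -> 8.0, i.e., ~8-fold upregulation
```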
2.3. Carnosine Modulates Cytokine Secretion from U937 Cells
U937 cells were cultured in the presence or absence of 100 mM carnosine for 5 days, and culture supernatants were analyzed for the presence of cytokines. Once the raw data were adjusted to reflect the cell number in culture, it was found that carnosine increased the secretion of IL-10, GM-CSF, and TNF-α (Figure 3). IL-10 is well established as an anti-inflammatory cytokine. GM-CSF is primarily implicated in innate cell expansion, but also primes mature immune cells, such as neutrophils. TNFα is involved in acute inflammation and has a critical autocrine role for monocytes. Interestingly, carnosine also decreased IL-8 secretion (Figure 3). IL-8 is important in mediating adhesion of monocytes to the vascular endothelium, allowing recruitment to sites of inflammation.

2.4. Carnosine Alters U937 Differentiation and Phenotype
U937 cells can be differentiated to monocytes using VitD3. Upon culture for 72 h with VitD3, U937 cells exhibit increased side scatter (SSC), indicative of a monocyte-like phenotype. However, in the presence of carnosine, this population is significantly smaller, with an increase in a second population with larger cell size but lower SSC (Figure 4A). Upon further characterization, this carnosine-induced population is CD11b+CD11c+CD86+MHCII+, indicating a macrophage-like phenotype (Figure 4B). In the monocyte-like population (Figure 4C), carnosine decreases CD11b, CD86, and MHCII expression, whereas in the macrophage-like population (Figure 4B), CD11b, CD11c, CD86, and MHCII are increased. This may suggest that carnosine is promoting differentiation to macrophages, rather than to monocytes.
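Section 2.3 above notes that the raw cytokine readings were adjusted to reflect the cell number in culture before comparison. The study does not specify its exact adjustment, so the sketch below shows one common normalization (secretion per 10^6 viable cells) on hypothetical numbers.

```python
# Normalize cytokine concentrations by viable cell number so that cultures
# with different growth (e.g., carnosine-treated vs. untreated) are comparable.
# All values are hypothetical.
import numpy as np

raw_pg_per_ml = np.array([120.0, 85.0])   # e.g., IL-10: [control, carnosine]
cells_per_ml = np.array([1.2e6, 0.4e6])   # viable cell counts at harvest

adjusted = raw_pg_per_ml / (cells_per_ml / 1e6)  # pg/ml per 1e6 cells
print(adjusted)  # -> [100.0, 212.5]: higher per-cell secretion with carnosine
```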
3. Discussion
Herein, we present observations on the effect of carnosine on a number of cancer cell lines, with further examination of U937 monocyte-like cells. Carnosine (1) inhibits breast (ZR-75-1), ovarian (SKOV-3), colon (HT29, LIM2045), and leukemic (U937) cancer cell proliferation; (2) upregulates expression of pro-inflammatory molecules; (3) modulates cytokine secretion; and (4) alters U937 differentiation and phenotype. As with the other cancer cell lines, carnosine exerted its anti-cancer effect on U937 cells by inhibiting proliferation at 100 mM. Importantly, visual inspection of the cultures showed no evidence of cellular stress or cell death (i.e., blebbing or cell debris). Although the carnosine concentrations used may seem high, carnosine is found endogenously in human tissue at concentrations of up to 20–30 mM [14,15] and has been used in human clinical trials for a range of conditions (e.g., neurodegenerative disease, type 2 diabetes, cardiovascular disease, stroke) at doses of up to 1500–2000 mg/day [16,17,18]. Moreover, although carnosine affected several different cell lines, the concentration at which growth inhibition was observed varied, demonstrating cell-specific sensitivity. This may be explained by metabolic differences between cell types, specifically whether they rely more heavily on glycolysis or on mitochondrial ATP synthesis for energy [9].
U937 cells are a myeloid leukemia line and so can secrete many cytokines and chemokines, either constitutively or in response to specific stimuli. The genes upregulated in response to carnosine are largely inflammatory mediators and may contribute to its anti-cancer properties by making U937 cells more detectable to the immune system. IL-1β is a pro-inflammatory cytokine previously shown to have tumoricidal activity and to repress tumor growth [19]; its increased expression may therefore contribute to the anti-proliferative effects of carnosine. Expression of the chemoattractant CCL2, which exerts both pro- and anti-tumor effects [20], was also increased, as was expression of CCR5. CCR5 is commonly held to promote cancer growth, and many anti-cancer clinical studies aim to block CCR5 on cancer cells [21]; however, there is considerable controversy regarding its role in cancer progression, with studies showing either pro- or anti-cancer effects, a discrepancy that may reflect the type of cancer cell and the context in which it originates [22,23]. Here, upregulation of CCR5 in the presence of carnosine may be involved in anti-tumor effects, consistent with existing data. Complement C3 expression was also increased; although C3 is generally held to promote cancer cell growth, studies have shown a dual role for complement in cancer, including anti-cancer effects [24]. Interferon regulatory factor 7 (IRF7), which decreases cancer growth and metastasis, was likewise upregulated: silencing IRF7 in breast cancer cell lines enhances growth, restoring IRF7 expression reduces metastasis [25], and overexpression of IRF7 in a mouse prostate cancer model significantly reduces metastasis [26].
The cytokine secretion assay revealed that carnosine increased secretion of IL-10, GM-CSF, and TNFα and decreased secretion of IL-8. High levels of IL-10 in tumors have been reported to inhibit metastasis [27]; tumor cell lines transfected with IL-10 show growth inhibition through increased IFNγ from CD8+ T cells [28], and IL-10 transgenic mice stimulate CD8+ T cells and limit the growth of immunogenic tumor cells in vivo [29]. Thus, although IL-10 is known as an anti-inflammatory, immunosuppressive cytokine, it also has immunostimulatory activities that inhibit tumor cell growth; indeed, recombinant PEGylated IL-10 inhibits tumor growth in mice [30]. Increased IL-10 secretion by U937 cells in the presence of carnosine may therefore reflect an anti-cancer mechanism. Furthermore, GM-CSF secretion by tumor cells has been shown to boost anti-tumor immunity, and a number of human clinical studies use recombinant GM-CSF injected into tumors, GM-CSF fused with tumor-associated proteins in cancer vaccines, or anti-cancer DNA vaccines incorporating GM-CSF [31]. GM-CSF also regulates cancer cell growth, although it can contribute to immunosuppression in the tumor microenvironment; on balance, the increased secretion of GM-CSF in the presence of carnosine may contribute to its anti-proliferative potential. In addition, carnosine increased both the secretion and the gene expression of the pro-inflammatory cytokine TNFα, a known anti-cancer agent with the capacity to induce cancer cell death [32]. Although IL-8 is predominantly known for its neutrophil-chemoattractive properties, it has also been reported to promote tumor progression and metastasis in a number of human cancers by regulating angiogenesis and cytokine secretion from tumor-infiltrating macrophages in the tumor microenvironment [33]; the decreased IL-8 secretion by U937 cells in the presence of carnosine is therefore potentially inhibitory to cancer progression. Taken together, the increased gene expression of IL-1β, the increased secretion of IL-10, GM-CSF, and TNFα, and the decreased secretion of IL-8 likely contribute to the anti-proliferative effects of carnosine on U937 promonocytic leukemia cells.
Carnosine is also a known anti-glycating molecule with antioxidant properties and the capacity to act as a ROS scavenger [4]. These actions can potentially counteract the oxidative stress and resultant chronic inflammation that are hallmarks of cancer [34] and so may contribute to the anti-cancer effect observed in this study; this is an area worth exploring further.
Interestingly, the gene expression data in Figure 2 show that IL-8 expression was upregulated by carnosine, whereas IL-8 secretion (Figure 3) was decreased. However, gene expression was measured at a much earlier time point (24 h) than cytokine secretion (5 days), which may account for these seemingly disparate observations. It is also possible that IL-8 protein was expressed but stored intracellularly rather than secreted, potentially requiring a second stimulus for release.
Finally, the altered U937 phenotype and differentiation observed in Figure 4 may also contribute to the anti-cancer effects of carnosine. U937 cells are a promonocytic leukemia and so are characteristically undifferentiated. In carnosine-treated cells, differentiation and phenotype are altered, with increased expression of CD11b, CD11c, CD86, and MHCII, making the promonocytic leukemia cells more visible to the immune system.
[ "2.1. Carnosine Inhibits Cancer Cell Proliferation", "2.2. Carnosine Upregulates Expression of Proinflammatory Molecules in U937 Cell", "2.3. Carnosine Modulates Cytokine Secretion from U937 Cells", "2.4. Carnosine Alters U937 Differentiation and Phenotype", "4. Materials and Methods", "4.1. Cell Culture", "4.2. Proliferation Assay", "4.3. Gene Array", "4.4. Cytokine Secretion", "4.5. Flow Cytometry", "4.6. Statistical Analysis" ]
[ "HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), and ZR-75-1 (breast) cancer cell lines were cultured with increasing concentrations of carnosine from 0 to 200 mM for 6 days. U937 cells were cultured with a carnosine dose range from 0 to 100 mM for 6 days. At 100–200 mM carnosine for each line, proliferation was notably inhibited by day 5, which was even more pronounced at day 6 (Figure 1). At 10–20 mM carnosine, there was no significant difference in proliferation between treated and untreated cells. However, in U937 cells, the proliferation curves show some evidence of titration. Therefore, this cell line was selected for further analysis.", "Gene array data showed differential regulation of a number of genes following exposure of U937 cells to carnosine (Figure 2A). Significant upregulation in gene expression was observed for IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNF (Figure 2B). IL-8, CCL2, and CCR5 are involved in chemotaxis; similarly, C3 is important for monocyte adhesion. CD86, IL-1β, Ly96, and TNFα are all involved in inflammatory pathways. We note that gene expression events that would result in changes in proliferation/protein expression would occur much earlier, and therefore, gene expression was assessed at an earlier time point, i.e., 24 h rather than 6 days.\nThe results of the array were confirmed by RT-qPCR of CCL2, IL-8, and MAPK1 (Figure 2C). These genes were specifically selected for the validation study because CCL2 and IL-8 showed the largest fold-changes in the array, and MAPK1 by comparison exhibited no change. The RT-qPCR showed that CCL2 and IL-8 gene upregulation was observed at 25 mM carnosine, and that the effect became more pronounced with increasing concentrations in a dose-dependent manner.", "U937 cells were cultured in the presence or absence of 100 mM carnosine for 5 days, and culture supernatants were analyzed for the presence of cytokines. Once the raw data were adjusted to reflect the cell number in culture, it was found that carnosine increased the secretion of IL-10, GM-CSF, and TNF-α (Figure 3). IL-10 is well-established as an anti-inflammatory cytokine. GM-CSF is primarily implicated in innate cell expansion, but also primes mature immune cells, such as neutrophils. TNFα is involved in acute inflammation and has a critical autocrine role for monocytes. Interestingly, carnosine also decreased IL-8 secretion (Figure 3). IL-8 is important in mediating adhesion of monocytes to the vascular endothelium, allowing recruitment to sites of inflammation.", "U937 can be differentiated to monocytes using VitD3. Upon culture for 72 h with VitD3, U937 exhibit increased side scatter (SSC), indicative of a monocyte-like phenotype. However, in the presence of carnosine, this population is significantly smaller, with an increase in a second population with larger cell size, but lower SSC (Figure 4A). Upon further characterization, this carnosine-induced population is CD11b+CD11c+CD86+ MHCII+, indicating a macrophage-like phenotype (Figure 4B).\nIn the monocyte-like population (Figure 4C), carnosine decreases CD11b, CD86, and MHCII expression, whereas in the macrophage-like population (Figure 4B), CD11b, CD11c, CD86, and MHCII are increased. This may suggest that carnosine is promoting differentiation to macrophages, rather than to monocytes.", "4.1. 
4.2. Proliferation Assay
Cell lines were cultured in complete RPMI with carnosine at concentrations from 0 to 200 mM, dissolved directly in the culture medium, in quadruplicate plates for up to 6 days. Culture media were replenished at day 3 to maintain nutrient supply; the same method was applied to all cell lines. On days 3–6, an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; Sigma-Aldrich) assay was performed on one plate each day to quantitate proliferation [35]. Briefly, all but 50 μL of medium was carefully removed from each well, 50 μL of 5 mM MTT in phosphate-buffered saline (PBS) was added, and cells were resuspended by pipetting and incubated at 37 °C for 4 h. Dimethyl sulfoxide (100 μL; Sigma-Aldrich, Melbourne, VIC, Australia) was then added to each well for 10 min at 37 °C. Each well was mixed by pipetting before absorbance was read at 540 nm on a spectrophotometer (Bio-Rad microplate reader 6.0).
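The assay above reports proliferation as A540 absorbance, which is typically blank-corrected and scaled to the untreated wells before plotting growth curves. The paper does not give its exact calculation, so the following is a minimal Python sketch of one common approach; the plate readings, blank value, and function name are illustrative assumptions, not data from the study.

```python
# Minimal sketch (not the paper's exact calculation): blank-correct
# quadruplicate A540 readings and express them relative to the untreated
# (0 mM carnosine) wells. All numbers are illustrative assumptions.
import numpy as np

def relative_proliferation(a540, blank, untreated_mean):
    """Blank-correct raw readings and scale to the untreated control."""
    corrected = np.asarray(a540, dtype=float) - blank
    return corrected / untreated_mean

blank = 0.05                                    # medium-only well (assumed)
untreated = np.array([1.21, 1.18, 1.25, 1.19]) - blank
treated_100mM = [0.62, 0.58, 0.65, 0.60]        # hypothetical day-5 readings

rel = relative_proliferation(treated_100mM, blank, untreated.mean())
sem = rel.std(ddof=1) / np.sqrt(rel.size)
print(f"proliferation vs. control: {rel.mean():.2f} ± {sem:.2f} (mean ± SEM)")
```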
4.3. Gene Array
U937 cells were cultured in the presence or absence of 100 mM carnosine for 24 h, and RNA was extracted using TRIzol (Invitrogen, Thermo Fisher Scientific, VIC, Australia) followed by purification with the RNeasy Mini Kit (Qiagen, Germany), including on-column DNase treatment. RNA was quantified using the Qubit™ RNA BR Assay Kit, and RNA integrity numbers (RIN) were established on an Agilent 2100 Bioanalyzer (all RIN ≥ 9.5). Samples were processed according to the manufacturer's instructions using the RT2 Profiler™ PCR Array Human Innate and Adaptive Immune Responses (Qiagen). Expression levels were normalized to the mean of the reference genes present on the array (GAPDH, ACTB, B2M, HPRT1, and RPLP0), and fold change was calculated using the 2^(−ΔΔCt) method via the Qiagen GeneGlobe web portal.
The pheatmap package (version 1.0.12) for R (version 4.0.2) was used together with RColorBrewer (version 1.1-2) for heatmap analysis. Plots were generated from log2-transformed ΔCt values from two PCR arrays, and the clustering function was applied to group genes with similar expression.
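For readers unfamiliar with the 2^(−ΔΔCt) method named above, the following minimal Python sketch shows the arithmetic: target Ct values are first normalized to the mean Ct of the reference genes within each condition, and the treated-minus-control difference is then exponentiated. All Ct values here are hypothetical; the study itself performed this step in the Qiagen GeneGlobe portal.

```python
# Worked example of the 2^(−ΔΔCt) fold-change calculation; Ct values are
# hypothetical (the study used the Qiagen GeneGlobe portal for this step).
import numpy as np

# Mean Ct of the array's reference genes (GAPDH, ACTB, B2M, HPRT1, RPLP0),
# one mean per condition; the individual values below are made up.
ref_mean_treated = np.mean([18.2, 17.9, 19.1, 20.0, 16.8])
ref_mean_control = np.mean([18.0, 18.1, 19.0, 19.8, 16.9])

def fold_change(target_ct_treated, target_ct_control):
    dct_treated = target_ct_treated - ref_mean_treated   # ΔCt, treated
    dct_control = target_ct_control - ref_mean_control   # ΔCt, control
    ddct = dct_treated - dct_control                     # ΔΔCt
    return 2.0 ** (-ddct)

# A lower Ct under treatment means more transcript, i.e., fold change > 1.
print(f"IL-8 fold change: {fold_change(22.1, 25.4):.1f}")
```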
The differential expression of three genes, namely interleukin 8 (IL-8/CXCL8; NM_000584), C-C motif chemokine ligand 2 (CCL2; NM_002982), and mitogen-activated protein kinase 1 (MAPK1; NM_138957), was confirmed by RT-qPCR over a range of carnosine concentrations (0–100 mM). cDNA was reverse transcribed from DNase-treated RNA (1 µg) using SuperScript IV VILO Master Mix (Life Technologies, Thermo Fisher). PCR was performed using SsoAdvanced™ SYBR Green Supermix (Bio-Rad) with primers from Integrated DNA Technologies (IDT): CCL2 forward 5′-AGC AGC CAC CTT CAT TCC-3′, reverse 5′-GCC TCT GCA CTG AGA TCT TC-3′; CXCL8 forward 5′-GAG ACA GCA GAG CAC ACA AG-3′, reverse 5′-CTT CAC ACA GAG CTG CAG AA-3′; and MAPK1 forward 5′-CAT TCA GCT AAC GTT CTG CAC-3′, reverse 5′-GTG ATC ATG GTC TGG ATC TGC-3′. PCR analysis was performed in duplicate. Normalized relative expression (fold change) was determined relative to the control (0 mM carnosine) and normalized to two stably expressed reference genes (actin beta, ACTB, and glucuronidase beta, GusB) using the ΔΔCt analysis function in CFX™ Maestro (Bio-Rad), version 1.1.

4.4. Cytokine Secretion
U937 cells were cultured at an initial density of 5 × 10^3 cells/mL, with or without 100 mM carnosine, for 5 days. Supernatants were collected, centrifuged, and frozen at −80 °C until assayed. Cytokine secretion was assessed via Bioplex assay (Bio-Rad, USA) according to the manufacturer's instructions.
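Section 2.3 notes that raw cytokine concentrations were adjusted to reflect cell number before comparison. The exact adjustment is not specified in the text, so the sketch below shows one conventional normalization (concentration per 10^6 cells); the helper name and all values are assumptions for illustration only.

```python
# One conventional way to express cytokine output per cell (assumption:
# the paper adjusts for cell number but does not state its formula).
def per_million_cells(conc_pg_ml, cells_per_ml):
    """Concentration (pg/mL) normalized to 10^6 cells/mL of culture."""
    return conc_pg_ml / (cells_per_ml / 1e6)

# Hypothetical numbers: carnosine-treated cultures contain fewer cells,
# so similar raw concentrations imply greater secretion per cell.
il10_treated = per_million_cells(conc_pg_ml=85.0, cells_per_ml=2.5e5)
il10_control = per_million_cells(conc_pg_ml=60.0, cells_per_ml=9.0e5)
print(f"per-cell IL-10, treated/control: {il10_treated / il10_control:.1f}")
```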
4.5. Flow Cytometry
U937 cells were differentiated to monocytes/macrophages by exposure to 1,25-dihydroxyvitamin D3 (VitD3) for 72 h and then cultured in 100 mM carnosine for a subsequent 5 days (120 h). Cells were then harvested, labelled with CD14-BV421, CD11b-PE, CD11c-APC/Cy7, CD86-AlexaFluor488, and MHCII-BV510, and analyzed on a BD FACSCanto flow cytometer. Data analysis was performed using FlowJo software (Treestar, USA).
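Gating and marker quantitation were performed in FlowJo. Purely as an illustration of the Figure 4 analysis, the sketch below assumes events have been exported to a CSV with FSC-A/SSC-A and compensated marker columns; the file name, column names, and gate thresholds are all hypothetical.

```python
# Illustration only: the study gated populations in FlowJo. This assumes a
# hypothetical CSV export of events with FSC-A/SSC-A and marker columns.
import pandas as pd

events = pd.read_csv("u937_vitd3_carnosine.csv")  # hypothetical file name

# Hypothetical gates separating the two populations described in Figure 4:
# monocyte-like (higher SSC) vs. carnosine-induced macrophage-like
# (larger cells by FSC, lower SSC).
monocyte_like = events[(events["SSC-A"] > 60_000) & (events["FSC-A"] < 90_000)]
macrophage_like = events[(events["SSC-A"] <= 60_000) & (events["FSC-A"] >= 90_000)]

for name, pop in [("monocyte-like", monocyte_like),
                  ("macrophage-like", macrophage_like)]:
    frac = len(pop) / len(events)
    print(f"{name}: {frac:.1%} of events, "
          f"median CD11b-PE = {pop['CD11b-PE'].median():.0f}")
```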
4.6. Statistical Analysis
Statistical analyses for the cancer cell line proliferation assays and for cytokine secretion (9-plex human Bioplex assay) were performed using GraphPad Prism software, version 7.0e. Student's t-tests were performed using the Holm–Sidak method with alpha = 0.05, and p < 0.05 was regarded as statistically significant. Data are presented as mean ± standard error of the mean (SEM). The human innate and adaptive immune response gene data were analyzed with the RT2 Profiler PCR array software (Qiagen), based on normalization to reference genes.
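As a rough open-source analogue of the GraphPad procedure described above (multiple t-tests corrected by the Holm–Sidak method), the following Python sketch uses SciPy and statsmodels; the replicate values are placeholders, not data from the paper, and the choice of an unpaired pooled-variance t-test is an assumption.

```python
# Placeholder data: treated vs. control cytokine readouts for four analytes.
from scipy import stats
from statsmodels.stats.multitest import multipletests

comparisons = {
    "IL-10":  ([85.1, 92.4, 88.0], [60.2, 63.5, 58.9]),
    "GM-CSF": ([41.0, 39.2, 44.5], [30.1, 28.7, 33.0]),
    "TNF-a":  ([15.3, 17.1, 16.4], [12.8, 13.9, 13.1]),
    "IL-8":   ([198.5, 205.2, 210.0], [255.1, 260.3, 271.8]),
}

pvals = [stats.ttest_ind(treated, control).pvalue
         for treated, control in comparisons.values()]

# Holm-Sidak step-down correction across the family of tests, alpha = 0.05.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
for analyte, p, sig in zip(comparisons, p_adj, reject):
    print(f"{analyte}: adjusted p = {p:.4f}, significant = {sig}")
```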
[ "1. Introduction", "2. Results", "2.1. Carnosine Inhibits Cancer Cell Proliferation", "2.2. Carnosine Upregulates Expression of Proinflammatory Molecules in U937 Cell", "2.3. Carnosine Modulates Cytokine Secretion from U937 Cells", "2.4. Carnosine Alters U937 Differentiation and Phenotype", "3. Discussion", "4. Materials and Methods", "4.1. Cell Culture", "4.2. Proliferation Assay", "4.3. Gene Array", "4.4. Cytokine Secretion", "4.5. Flow Cytometry", "4.6. Statistical Analysis", "5. Conclusions" ]
[ "U937 is a promonocytic, human myeloid leukemia cell line that was originally isolated from a histiocytic lymphoma patient [1]. U937 cells are promonocytes that exhibit many characteristics of normal monocytes and so are commonly used as a model for peripheral blood mononuclear cells (PBMCs). Though the expression levels are variable, there is little overall difference in the pattern of cluster of differentiation (CD) marker expression between PBMC and U937 cells [2]. U937 cells can also be differentiated in vitro using vitamin D3 or phorbol-12-myristate-13-acetate (PMA) to macrophages and dendritic cells [3].\nCarnosine is a naturally occurring dipeptide molecule (β-alanyl-l-histidine) with anti-inflammatory, antioxidant, anti-glycation, and chelating properties [4]. It is found at endogenous concentrations of up to 20 mM in humans, predominantly in the brain and skeletal muscle [5]. Traditionally used in exercise physiology to increase performance, it is available over the counter as a food supplement. Animal studies suggest therapeutic benefits for many chronic diseases (e.g., type 2 diabetes, cardiovascular disease, stroke, Alzheimer’s disease, and Parkinson’s disease), but little in vivo evidence exists in humans [6].\nThere is, however, some in vitro evidence that suggests carnosine may exhibit anti-cancer properties. Carnosine can selectively inhibit proliferation of transformed cells [7]. Paradoxically, McFarland and Holliday had earlier reported that carnosine increased the Hayflick limit and slowed the growth of cultured human fibroblasts [8]. However, this may be explained by the metabolic differences between these two cell types; whereas most tumor cells rely heavily on glycolysis, in fibroblasts ATP synthesis predominantly occurs in the mitochondria [9]. In glioblastoma, carnosine inhibits proliferation of primary cell cultures derived from surgically removed tumors [10]. Carnosine also inhibits proliferation of gastric, colon, and ovarian cancer cell lines [11,12,13]. In this study, we sought to replicate these observations and also included breast cancer and promonocytic leukemia cells. We further determined promonocytic (U937) cell phenotype, gene expression, and cytokine secretion to determine whether these are linked to its anti-cancer properties and to identify potential pathways via which carnosine exerts its effects. In addition, U937 cells were differentiated to become monocyte/macrophage cells, and cell changes to cell surface expression were determined using flow cytometry.", "2.1. Carnosine Inhibits Cancer Cell Proliferation HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), and ZR-75-1 (breast) cancer cell lines were cultured with increasing concentrations of carnosine from 0 to 200 mM for 6 days. U937 cells were cultured with a carnosine dose range from 0 to 100 mM for 6 days. At 100–200 mM carnosine for each line, proliferation was notably inhibited by day 5, which was even more pronounced at day 6 (Figure 1). At 10–20 mM carnosine, there was no significant difference in proliferation between treated and untreated cells. However, in U937 cells, the proliferation curves show some evidence of titration. Therefore, this cell line was selected for further analysis.\nHT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), and ZR-75-1 (breast) cancer cell lines were cultured with increasing concentrations of carnosine from 0 to 200 mM for 6 days. U937 cells were cultured with a carnosine dose range from 0 to 100 mM for 6 days. 
At 100–200 mM carnosine for each line, proliferation was notably inhibited by day 5, which was even more pronounced at day 6 (Figure 1). At 10–20 mM carnosine, there was no significant difference in proliferation between treated and untreated cells. However, in U937 cells, the proliferation curves show some evidence of titration. Therefore, this cell line was selected for further analysis.\n2.2. Carnosine Upregulates Expression of Proinflammatory Molecules in U937 Cell Gene array data showed differential regulation of a number of genes following exposure of U937 cells to carnosine (Figure 2A). Significant upregulation in gene expression was observed for IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNF (Figure 2B). IL-8, CCL2, and CCR5 are involved in chemotaxis; similarly, C3 is important for monocyte adhesion. CD86, IL-1β, Ly96, and TNFα are all involved in inflammatory pathways. We note that gene expression events that would result in changes in proliferation/protein expression would occur much earlier, and therefore, gene expression was assessed at an earlier time point, i.e., 24 h rather than 6 days.\nThe results of the array were confirmed by RT-qPCR of CCL2, IL-8, and MAPK1 (Figure 2C). These genes were specifically selected for the validation study because CCL2 and IL-8 showed the largest fold-changes in the array, and MAPK1 by comparison exhibited no change. The RT-qPCR showed that CCL2 and IL-8 gene upregulation was observed at 25 mM carnosine, and that the effect became more pronounced with increasing concentrations in a dose-dependent manner.\nGene array data showed differential regulation of a number of genes following exposure of U937 cells to carnosine (Figure 2A). Significant upregulation in gene expression was observed for IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNF (Figure 2B). IL-8, CCL2, and CCR5 are involved in chemotaxis; similarly, C3 is important for monocyte adhesion. CD86, IL-1β, Ly96, and TNFα are all involved in inflammatory pathways. We note that gene expression events that would result in changes in proliferation/protein expression would occur much earlier, and therefore, gene expression was assessed at an earlier time point, i.e., 24 h rather than 6 days.\nThe results of the array were confirmed by RT-qPCR of CCL2, IL-8, and MAPK1 (Figure 2C). These genes were specifically selected for the validation study because CCL2 and IL-8 showed the largest fold-changes in the array, and MAPK1 by comparison exhibited no change. The RT-qPCR showed that CCL2 and IL-8 gene upregulation was observed at 25 mM carnosine, and that the effect became more pronounced with increasing concentrations in a dose-dependent manner.\n2.3. Carnosine Modulates Cytokine Secretion from U937 Cells U937 cells were cultured in the presence or absence of 100 mM carnosine for 5 days, and culture supernatants were analyzed for the presence of cytokines. Once the raw data were adjusted to reflect the cell number in culture, it was found that carnosine increased the secretion of IL-10, GM-CSF, and TNF-α (Figure 3). IL-10 is well-established as an anti-inflammatory cytokine. GM-CSF is primarily implicated in innate cell expansion, but also primes mature immune cells, such as neutrophils. TNFα is involved in acute inflammation and has a critical autocrine role for monocytes. Interestingly, carnosine also decreased IL-8 secretion (Figure 3). 
IL-8 is important in mediating adhesion of monocytes to the vascular endothelium, allowing recruitment to sites of inflammation.\nU937 cells were cultured in the presence or absence of 100 mM carnosine for 5 days, and culture supernatants were analyzed for the presence of cytokines. Once the raw data were adjusted to reflect the cell number in culture, it was found that carnosine increased the secretion of IL-10, GM-CSF, and TNF-α (Figure 3). IL-10 is well-established as an anti-inflammatory cytokine. GM-CSF is primarily implicated in innate cell expansion, but also primes mature immune cells, such as neutrophils. TNFα is involved in acute inflammation and has a critical autocrine role for monocytes. Interestingly, carnosine also decreased IL-8 secretion (Figure 3). IL-8 is important in mediating adhesion of monocytes to the vascular endothelium, allowing recruitment to sites of inflammation.\n2.4. Carnosine Alters U937 Differentiation and Phenotype U937 can be differentiated to monocytes using VitD3. Upon culture for 72 h with VitD3, U937 exhibit increased side scatter (SSC), indicative of a monocyte-like phenotype. However, in the presence of carnosine, this population is significantly smaller, with an increase in a second population with larger cell size, but lower SSC (Figure 4A). Upon further characterization, this carnosine-induced population is CD11b+CD11c+CD86+ MHCII+, indicating a macrophage-like phenotype (Figure 4B).\nIn the monocyte-like population (Figure 4C), carnosine decreases CD11b, CD86, and MHCII expression, whereas in the macrophage-like population (Figure 4B), CD11b, CD11c, CD86, and MHCII are increased. This may suggest that carnosine is promoting differentiation to macrophages, rather than to monocytes.\nU937 can be differentiated to monocytes using VitD3. Upon culture for 72 h with VitD3, U937 exhibit increased side scatter (SSC), indicative of a monocyte-like phenotype. However, in the presence of carnosine, this population is significantly smaller, with an increase in a second population with larger cell size, but lower SSC (Figure 4A). Upon further characterization, this carnosine-induced population is CD11b+CD11c+CD86+ MHCII+, indicating a macrophage-like phenotype (Figure 4B).\nIn the monocyte-like population (Figure 4C), carnosine decreases CD11b, CD86, and MHCII expression, whereas in the macrophage-like population (Figure 4B), CD11b, CD11c, CD86, and MHCII are increased. This may suggest that carnosine is promoting differentiation to macrophages, rather than to monocytes.", "HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), and ZR-75-1 (breast) cancer cell lines were cultured with increasing concentrations of carnosine from 0 to 200 mM for 6 days. U937 cells were cultured with a carnosine dose range from 0 to 100 mM for 6 days. At 100–200 mM carnosine for each line, proliferation was notably inhibited by day 5, which was even more pronounced at day 6 (Figure 1). At 10–20 mM carnosine, there was no significant difference in proliferation between treated and untreated cells. However, in U937 cells, the proliferation curves show some evidence of titration. Therefore, this cell line was selected for further analysis.", "Gene array data showed differential regulation of a number of genes following exposure of U937 cells to carnosine (Figure 2A). Significant upregulation in gene expression was observed for IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNF (Figure 2B). 
IL-8, CCL2, and CCR5 are involved in chemotaxis; similarly, C3 is important for monocyte adhesion. CD86, IL-1β, Ly96, and TNFα are all involved in inflammatory pathways. We note that gene expression events that would result in changes in proliferation/protein expression would occur much earlier, and therefore, gene expression was assessed at an earlier time point, i.e., 24 h rather than 6 days.\nThe results of the array were confirmed by RT-qPCR of CCL2, IL-8, and MAPK1 (Figure 2C). These genes were specifically selected for the validation study because CCL2 and IL-8 showed the largest fold-changes in the array, and MAPK1 by comparison exhibited no change. The RT-qPCR showed that CCL2 and IL-8 gene upregulation was observed at 25 mM carnosine, and that the effect became more pronounced with increasing concentrations in a dose-dependent manner.", "U937 cells were cultured in the presence or absence of 100 mM carnosine for 5 days, and culture supernatants were analyzed for the presence of cytokines. Once the raw data were adjusted to reflect the cell number in culture, it was found that carnosine increased the secretion of IL-10, GM-CSF, and TNF-α (Figure 3). IL-10 is well-established as an anti-inflammatory cytokine. GM-CSF is primarily implicated in innate cell expansion, but also primes mature immune cells, such as neutrophils. TNFα is involved in acute inflammation and has a critical autocrine role for monocytes. Interestingly, carnosine also decreased IL-8 secretion (Figure 3). IL-8 is important in mediating adhesion of monocytes to the vascular endothelium, allowing recruitment to sites of inflammation.", "U937 can be differentiated to monocytes using VitD3. Upon culture for 72 h with VitD3, U937 exhibit increased side scatter (SSC), indicative of a monocyte-like phenotype. However, in the presence of carnosine, this population is significantly smaller, with an increase in a second population with larger cell size, but lower SSC (Figure 4A). Upon further characterization, this carnosine-induced population is CD11b+CD11c+CD86+ MHCII+, indicating a macrophage-like phenotype (Figure 4B).\nIn the monocyte-like population (Figure 4C), carnosine decreases CD11b, CD86, and MHCII expression, whereas in the macrophage-like population (Figure 4B), CD11b, CD11c, CD86, and MHCII are increased. This may suggest that carnosine is promoting differentiation to macrophages, rather than to monocytes.", "Herein, we present observations on the effect of carnosine on a number of cancer cell lines, with further examination of U937 monocyte-like cells. Carnosine (1) inhibits breast (ZR-75-1), ovarian (SKOV-3), colon (HT29, LIM2045), and leukemic (U937) cancer cell proliferation; (2) upregulates expression of pro-inflammatory molecules; (3) modulates cytokine secretion; and (4) alters U937 differentiation and phenotype. Similar to its effect on other cancer cell lines, carnosine also exerts its anti-cancer effect on U937 by inhibiting proliferation at 100 mM. Importantly, upon visual observation of cell cultures, there was no evidence of cellular stress or cell death (i.e., blebbing or cell debris). Although the carnosine concentrations used may seem high, carnosine is found endogenously in human tissue at concentrations of up to 20–30 mM [14,15] and has been used in human clinical trials to treat a range of conditions (e.g., neurodegenerative disease, type 2 diabetes, cardiovascular disease, stroke) at doses of up to 1500–2000 mg/day [16,17,18]. 
Moreover, although carnosine exerted its effect on a several different cell lines, the concentration at which growth inhibition was observed varied, demonstrating sensitivity in a cell-specific manner. This may be explained by metabolic differences between the different cell types; more specifically, whether they rely more heavily on glycolysis or mitochondrial ATP synthesis for energy [9].\nU937 cells are a myeloid leukemia, so have the capacity to secrete many cytokines and chemokines either constitutively or in response to specific stimuli. The genes upregulated in response to carnosine are largely inflammatory mediators, so may contribute to its anti-cancer properties by making U937 more detectable to the immune system. In addition, IL-1β is a pro-inflammatory cytokine that has previously been shown to have tumoricidal activity and repress tumor growth [19]; thus, its increased gene expression may contribute to the anti-proliferative effects of carnosine. The gene expression of the chemoattractant CCL2 was also increased, and is known to exert both pro- and anti-tumor effects [20]. CCR5 gene expression was also increased, and CCR5 is commonly known to induce cancer growth and many anti-cancer clinical studies aim to block CCR5 expression on cancer cells [21]; however, there is considerable controversy regarding its role in cancer progression. A number of studies have shown either pro- or anti-cancer effects of CCR5, and this discrepancy may be a result of the type of cancer cells and the context in which cancer cells originate [22,23]. Here, we show that upregulation of CCR5 in the presence of carnosine might be involved in anti-tumor effects, concurrent with existing data. In addition, complement C3 gene expression was also increased in the presence of carnosine, and although C3 is generally known to promote cancer cell growth, there are studies showing a dual role of complement in cancer and its anti-cancer effects [24]. Furthermore, interferon regulatory factor 7 (IRF7) gene expression was upregulated in the presence of carnosine; IRF7 is known to decrease cancer growth and metastasis. In fact, silencing IRF7 in breast cancer cell lines enhances growth and restoring IRF7 expression reduces metastasis [25]; similarly, in prostate cancer in mice, overexpression of IRF7 significantly reduces metastasis [26].\nThe cytokine secretion assay revealed that carnosine increased secretion of IL-10, GM-CSF, and TNFα and decreased secretion of IL-8. It has been reported that high levels of IL-10 in tumors inhibit tumor metastasis [27]; tumor cell lines transfected with IL-10 show inhibition of cell growth by increasing IFNγ from CD8+ T cells [28], and IL-10 transgenic mice stimulate CD8+ T cells and limit the growth of immunogenic tumor cells in vivo [29]. Thus, although IL-10 is known as an anti-inflammatory, immunosuppressive cytokine, it has also been shown to have immunostimulatory activities that inhibit tumor cell growth. In fact, recombinant PEGylated IL-10 inhibits tumor cell growth in mice [30]. Therefore, increased levels of IL-10 by U937 cells in the presence of carnosine may suggest an anti-cancer mechanism. Furthermore, GM-CSF secretion by tumor cells has been shown to boost anti-tumor immunity, and a number of human clinical studies are using recombinant GM-CSF injections into the tumor, GM-CSF fused with tumor-associated proteins in cancer vaccines, and anti-cancer DNA vaccines incorporating GM-CSF [31]. 
GM-CSF regulates cancer cell growth, leading to immunosuppression in the tumor microenvironment. Thus, the increased secretion of GM-CSF in the presence of carnosine contributes to its anti-proliferative potential. In addition, carnosine increases the secretion and gene expression of pro-inflammatory cytokine TNFα, a known anti-cancer agent with the capacity to induce cancer cell death [32]. Although IL-8 is predominantly known for its immune (neutrophil) chemo-attractive properties, it has also been reported to play a role in tumor progression and metastasis in a number of human cancers by regulating angiogenesis and cytokine secretion from tumor infiltrating macrophages in the tumor microenvironment [33]. The decreased secretion of IL-8 noted by U937 cells in the presence of carnosine is potentially inhibitory to cancer progression. It is clear that the combined increased gene expression of IL-1β, the secretion of IL-10, GM-CSF, and TNFα, and decreased secretion of IL-8 contribute to the anti-proliferative effects of carnosine on U937 promonocytic leukemia cells.\nIn addition, carnosine is a known anti-glycating molecule with a number of antioxidant properties and the capacity to act as a ROS scavenger [4]. These actions can potentially counteract the oxidative stress and resultant chronic inflammation that have become known as hallmarks of cancer [34] and so may contribute to the anti-cancer effect observed in this study. This is an area worth exploring further.\nInterestingly, the gene expression data presented in Figure 2 show that IL-8 gene expression was upregulated by carnosine, whereas in Figure 3, IL-8 secretion was decreased. However, gene expression was measured at a much earlier time point (24 h) than cytokine secretion (5 days), which may account for these seemingly disparate observations. Furthermore, it is possible that the IL-8 protein was expressed but not secreted and stored intracellularly instead, potentially requiring a second stimulus for secretion. This would also account for any discrepancy.\nFinally, the altered U937 phenotype and differentiation observed in Figure 4 may also contribute to the anti-cancer effects of carnosine. U937 cells are a promonocytic leukemia and so are characteristically undifferentiated. In carnosine-treated cells, differentiation and phenotype are altered, increasing expression of CD11b, CD11c, CD86, and MHCII, thus enabling the promonocytic leukemia cells to be more visible to the immune system.", "4.1. Cell Culture HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), U937 (promonocytic leukemia), and ZR-75-1 (breast) cancer cell lines were maintained at sub-confluence in complete RPMI (Roswell Park Memorial Institute-1640 media supplemented with 10% fetal bovine serum, penicillin/streptomycin and l-glutamine; all purchased from Sigma-Aldrich, St Louis, MO, USA) and 5% CO2 at 37 °C.\nHT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), U937 (promonocytic leukemia), and ZR-75-1 (breast) cancer cell lines were maintained at sub-confluence in complete RPMI (Roswell Park Memorial Institute-1640 media supplemented with 10% fetal bovine serum, penicillin/streptomycin and l-glutamine; all purchased from Sigma-Aldrich, St Louis, MO, USA) and 5% CO2 at 37 °C.\n4.2. Proliferation Assay Cell lines were cultured in complete RPMI and varying concentrations of carnosine from 0 to 200 mM dissolved directly in the culture media in quadruplicate plates for up to 6 days. 
Culture media were replenished at day 3 to maintain nutrient supply; the same method was applied to all cell lines. On days 3–6, MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) (Sigma-Aldrich) assay was performed on one plate each day to quantitate proliferation [35]. Briefly, all except 50 μL media was carefully removed from each well. An additional 50 μL of 5 mM MTT in phosphate buffered saline (PBS) was added to each well, and cells were resuspended by pipetting and incubated in MTT at 37 °C for 4 h (h). Dimethyl sulfoxide (Sigma-Aldrich, Melbourne VIC Australia) (100 μL) was added to each well for 10 min at 37 °C. Each well was pipetted before absorbance was read at 540 nm using a spectrophotometer (Bio-Rad microplate reader 6.0).\nCell lines were cultured in complete RPMI and varying concentrations of carnosine from 0 to 200 mM dissolved directly in the culture media in quadruplicate plates for up to 6 days. Culture media were replenished at day 3 to maintain nutrient supply; the same method was applied to all cell lines. On days 3–6, MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) (Sigma-Aldrich) assay was performed on one plate each day to quantitate proliferation [35]. Briefly, all except 50 μL media was carefully removed from each well. An additional 50 μL of 5 mM MTT in phosphate buffered saline (PBS) was added to each well, and cells were resuspended by pipetting and incubated in MTT at 37 °C for 4 h (h). Dimethyl sulfoxide (Sigma-Aldrich, Melbourne VIC Australia) (100 μL) was added to each well for 10 min at 37 °C. Each well was pipetted before absorbance was read at 540 nm using a spectrophotometer (Bio-Rad microplate reader 6.0).\n4.3. Gene Array U937 cells were cultured in the presence or absence of 100 mM carnosine for 24 h and RNA was extracted using TRIzol (Invitrogen, Thermo Fisher Scientific, VIC Australia) followed by purification using RNeasy Mini Kit (Qiagen, Germany), including on-column DNase-treatment. RNA was quantified using Qubit™ RNA BR Assay Kit and RNA integrity number (RIN) established using an Agilent 2100 Bioanalyzer (all RIN were ≥9.5). Samples were processed according to manufacturer’s instructions using the RT2 Profiler™ PCR Array Human Innate and Adaptive Immune Responses (Qiagen). Expression levels were normalized to the mean of the reference genes present on the array (GAPDH, ACTB, B2M, HPRT1, and RPLP0) and fold change was calculated using the 2^ (-delta delta CT) formula via the Qiagen GeneGlobe web portal.\nThe pheatmap (version 1.0.12) data package for R software (version 4.0.2) was used together with RColorBrewer (version 1.1–2) for heatmap analysis. Plots were generated from log2 transformed DeltaCt values from 2 PCR arrays. The cluster function was applied to indicate genes that have expression similarity.\nThe differential expression of three genes—namely, interleukin 8 (IL-8/CXCL8) (NM_000584), C-C motif chemokine ligand 2 (CCL-2) (NM_002982), and mitogen-activated protein kinase 1 (MAPK1) (NM_138957)—was confirmed by RT-qPCR over a range of carnosine concentrations (0–100 mM). cDNA was reverse transcribed from DNase-treated RNA (1 µg) using SuperScript IV VILO Master Mix (Life Technologies, ThermoFisher). 
PCR was performed using SsoAdvanced™ SYBR Green Supermix (Bio-rad) using primers from Integrated DNA Technologies (IDT): CCL-2 forward 5′-AGC AGC CAC CTT CAT TCC-3′, reverse 5′-GCC TCT GCA CTG AGA TCT TC-3′; CXCL8 forward 5′-GAG ACA GCA GAG CAC ACA AG-3′, reverse 5′-CTT CAC ACA GAG CTG CAG AA-3′ and MAPK1 forward 5′-CAT TCA GCT AAC GTT CTG CAC-3′, reverse 5′-GTG ATC ATG GTC TGG ATC TGC-3′. PCR analysis was performed in duplicate. Normalized relative expression (fold change) was determined relative to control (0 mM carnosine) and normalized to two stably expressed reference genes (actin beta, ACTB and glucuronidase beta, GusB) using ΔΔCT data analysis function in CFX™ Maestro (Bio-rad), version 1.1.\nU937 cells were cultured in the presence or absence of 100 mM carnosine for 24 h and RNA was extracted using TRIzol (Invitrogen, Thermo Fisher Scientific, VIC Australia) followed by purification using RNeasy Mini Kit (Qiagen, Germany), including on-column DNase-treatment. RNA was quantified using Qubit™ RNA BR Assay Kit and RNA integrity number (RIN) established using an Agilent 2100 Bioanalyzer (all RIN were ≥9.5). Samples were processed according to manufacturer’s instructions using the RT2 Profiler™ PCR Array Human Innate and Adaptive Immune Responses (Qiagen). Expression levels were normalized to the mean of the reference genes present on the array (GAPDH, ACTB, B2M, HPRT1, and RPLP0) and fold change was calculated using the 2^ (-delta delta CT) formula via the Qiagen GeneGlobe web portal.\nThe pheatmap (version 1.0.12) data package for R software (version 4.0.2) was used together with RColorBrewer (version 1.1–2) for heatmap analysis. Plots were generated from log2 transformed DeltaCt values from 2 PCR arrays. The cluster function was applied to indicate genes that have expression similarity.\nThe differential expression of three genes—namely, interleukin 8 (IL-8/CXCL8) (NM_000584), C-C motif chemokine ligand 2 (CCL-2) (NM_002982), and mitogen-activated protein kinase 1 (MAPK1) (NM_138957)—was confirmed by RT-qPCR over a range of carnosine concentrations (0–100 mM). cDNA was reverse transcribed from DNase-treated RNA (1 µg) using SuperScript IV VILO Master Mix (Life Technologies, ThermoFisher). PCR was performed using SsoAdvanced™ SYBR Green Supermix (Bio-rad) using primers from Integrated DNA Technologies (IDT): CCL-2 forward 5′-AGC AGC CAC CTT CAT TCC-3′, reverse 5′-GCC TCT GCA CTG AGA TCT TC-3′; CXCL8 forward 5′-GAG ACA GCA GAG CAC ACA AG-3′, reverse 5′-CTT CAC ACA GAG CTG CAG AA-3′ and MAPK1 forward 5′-CAT TCA GCT AAC GTT CTG CAC-3′, reverse 5′-GTG ATC ATG GTC TGG ATC TGC-3′. PCR analysis was performed in duplicate. Normalized relative expression (fold change) was determined relative to control (0 mM carnosine) and normalized to two stably expressed reference genes (actin beta, ACTB and glucuronidase beta, GusB) using ΔΔCT data analysis function in CFX™ Maestro (Bio-rad), version 1.1.\n4.4. Cytokine Secretion U937 cells were cultured at an initial density of 5 × 103 cells/mL, with or without 100 mM carnosine for 5 days. Supernatants were collected, centrifuged, and frozen at −80 °C until assayed. Cytokine secretion was assessed via Bioplex assay (Bio-Rad, USA) according to manufacturer’s instructions.\nU937 cells were cultured at an initial density of 5 × 103 cells/mL, with or without 100 mM carnosine for 5 days. Supernatants were collected, centrifuged, and frozen at −80 °C until assayed. 
Cytokine secretion was assessed via Bioplex assay (Bio-Rad, USA) according to manufacturer’s instructions.\n4.5. Flow Cytometry U937 cells were differentiated to monocytes/macrophages by exposure to 1,25-dihydroxyvitamin D3 (VitD3) for 72 h and then cultured in 100 mM carnosine for a subsequent 5 days (120 h). Cells were then harvested and labelled using CD14-BV421, CD11b-PE, CD11c-APC/Cy7, CD86-AlexaFluor488, MHCII-BV510 and analyzed using a BD FACSCanto flow cytometer. Data analysis was performed using FlowJo software (Treestar, USA).\nU937 cells were differentiated to monocytes/macrophages by exposure to 1,25-dihydroxyvitamin D3 (VitD3) for 72 h and then cultured in 100 mM carnosine for a subsequent 5 days (120 h). Cells were then harvested and labelled using CD14-BV421, CD11b-PE, CD11c-APC/Cy7, CD86-AlexaFluor488, MHCII-BV510 and analyzed using a BD FACSCanto flow cytometer. Data analysis was performed using FlowJo software (Treestar, USA).\n4.6. Statistical Analysis The statistical analyses for the cancer cell line proliferation assays and cytokine secretion by 9-plex human bioplex assay were performed using GraphPad Prism software, version 7.0e. The student t-test was performed using the Holm–Sidak method with alpha = 0.05, where p < 0.05 was regarded as statistically significant. Data are presented as mean ± standard error of the mean (SEM). To analyze the human innate and adaptive immune response genes, RT2 Profiler PCR array (Qiagen software) was used based on normalization with reference genes.\nThe statistical analyses for the cancer cell line proliferation assays and cytokine secretion by 9-plex human bioplex assay were performed using GraphPad Prism software, version 7.0e. The student t-test was performed using the Holm–Sidak method with alpha = 0.05, where p < 0.05 was regarded as statistically significant. Data are presented as mean ± standard error of the mean (SEM). To analyze the human innate and adaptive immune response genes, RT2 Profiler PCR array (Qiagen software) was used based on normalization with reference genes.", "HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), U937 (promonocytic leukemia), and ZR-75-1 (breast) cancer cell lines were maintained at sub-confluence in complete RPMI (Roswell Park Memorial Institute-1640 media supplemented with 10% fetal bovine serum, penicillin/streptomycin and l-glutamine; all purchased from Sigma-Aldrich, St Louis, MO, USA) and 5% CO2 at 37 °C.", "Cell lines were cultured in complete RPMI and varying concentrations of carnosine from 0 to 200 mM dissolved directly in the culture media in quadruplicate plates for up to 6 days. Culture media were replenished at day 3 to maintain nutrient supply; the same method was applied to all cell lines. On days 3–6, MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) (Sigma-Aldrich) assay was performed on one plate each day to quantitate proliferation [35]. Briefly, all except 50 μL media was carefully removed from each well. An additional 50 μL of 5 mM MTT in phosphate buffered saline (PBS) was added to each well, and cells were resuspended by pipetting and incubated in MTT at 37 °C for 4 h (h). Dimethyl sulfoxide (Sigma-Aldrich, Melbourne VIC Australia) (100 μL) was added to each well for 10 min at 37 °C. 
Each well was pipetted before absorbance was read at 540 nm using a spectrophotometer (Bio-Rad microplate reader 6.0).", "U937 cells were cultured in the presence or absence of 100 mM carnosine for 24 h and RNA was extracted using TRIzol (Invitrogen, Thermo Fisher Scientific, VIC Australia) followed by purification using RNeasy Mini Kit (Qiagen, Germany), including on-column DNase-treatment. RNA was quantified using Qubit™ RNA BR Assay Kit and RNA integrity number (RIN) established using an Agilent 2100 Bioanalyzer (all RIN were ≥9.5). Samples were processed according to manufacturer’s instructions using the RT2 Profiler™ PCR Array Human Innate and Adaptive Immune Responses (Qiagen). Expression levels were normalized to the mean of the reference genes present on the array (GAPDH, ACTB, B2M, HPRT1, and RPLP0) and fold change was calculated using the 2^ (-delta delta CT) formula via the Qiagen GeneGlobe web portal.\nThe pheatmap (version 1.0.12) data package for R software (version 4.0.2) was used together with RColorBrewer (version 1.1–2) for heatmap analysis. Plots were generated from log2 transformed DeltaCt values from 2 PCR arrays. The cluster function was applied to indicate genes that have expression similarity.\nThe differential expression of three genes—namely, interleukin 8 (IL-8/CXCL8) (NM_000584), C-C motif chemokine ligand 2 (CCL-2) (NM_002982), and mitogen-activated protein kinase 1 (MAPK1) (NM_138957)—was confirmed by RT-qPCR over a range of carnosine concentrations (0–100 mM). cDNA was reverse transcribed from DNase-treated RNA (1 µg) using SuperScript IV VILO Master Mix (Life Technologies, ThermoFisher). PCR was performed using SsoAdvanced™ SYBR Green Supermix (Bio-rad) using primers from Integrated DNA Technologies (IDT): CCL-2 forward 5′-AGC AGC CAC CTT CAT TCC-3′, reverse 5′-GCC TCT GCA CTG AGA TCT TC-3′; CXCL8 forward 5′-GAG ACA GCA GAG CAC ACA AG-3′, reverse 5′-CTT CAC ACA GAG CTG CAG AA-3′ and MAPK1 forward 5′-CAT TCA GCT AAC GTT CTG CAC-3′, reverse 5′-GTG ATC ATG GTC TGG ATC TGC-3′. PCR analysis was performed in duplicate. Normalized relative expression (fold change) was determined relative to control (0 mM carnosine) and normalized to two stably expressed reference genes (actin beta, ACTB and glucuronidase beta, GusB) using ΔΔCT data analysis function in CFX™ Maestro (Bio-rad), version 1.1.", "U937 cells were cultured at an initial density of 5 × 103 cells/mL, with or without 100 mM carnosine for 5 days. Supernatants were collected, centrifuged, and frozen at −80 °C until assayed. Cytokine secretion was assessed via Bioplex assay (Bio-Rad, USA) according to manufacturer’s instructions.", "U937 cells were differentiated to monocytes/macrophages by exposure to 1,25-dihydroxyvitamin D3 (VitD3) for 72 h and then cultured in 100 mM carnosine for a subsequent 5 days (120 h). Cells were then harvested and labelled using CD14-BV421, CD11b-PE, CD11c-APC/Cy7, CD86-AlexaFluor488, MHCII-BV510 and analyzed using a BD FACSCanto flow cytometer. Data analysis was performed using FlowJo software (Treestar, USA).", "The statistical analyses for the cancer cell line proliferation assays and cytokine secretion by 9-plex human bioplex assay were performed using GraphPad Prism software, version 7.0e. The student t-test was performed using the Holm–Sidak method with alpha = 0.05, where p < 0.05 was regarded as statistically significant. Data are presented as mean ± standard error of the mean (SEM). 
To analyze the human innate and adaptive immune response genes, RT2 Profiler PCR array (Qiagen software) was used based on normalization with reference genes.", "In this study, we observed that carnosine significantly inhibits proliferation of breast, ovarian, colon, and leukemic cancer cell lines. Further studies revealed additional effects regarding U937 cell phenotype, gene expression, and cytokine secretion. Together, the observation of increased secretion of IL-10, GM-CSF, and TNFα, decreased secretion of IL-8, increased gene expression of IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNFα, and increased expression of cell surface markers CD11b, CD11c, CD86, and MHCII all have implications for the anti-proliferative properties of carnosine, which should be further explored in anti-cancer therapy." ]
[ "intro", "results", null, null, null, null, "discussion", null, null, null, null, null, null, null, "conclusions" ]
[ "carnosine", "anti-cancer", "cytokine", "β-alanyl-l-histidine", "immunomodulation" ]
1. Introduction: U937 is a promonocytic, human myeloid leukemia cell line that was originally isolated from a histiocytic lymphoma patient [1]. U937 cells are promonocytes that exhibit many characteristics of normal monocytes and so are commonly used as a model for peripheral blood mononuclear cells (PBMCs). Though expression levels are variable, there is little overall difference in the pattern of cluster of differentiation (CD) marker expression between PBMCs and U937 cells [2]. U937 cells can also be differentiated in vitro to macrophages and dendritic cells using vitamin D3 or phorbol-12-myristate-13-acetate (PMA) [3]. Carnosine is a naturally occurring dipeptide molecule (β-alanyl-l-histidine) with anti-inflammatory, antioxidant, anti-glycation, and chelating properties [4]. It is found at endogenous concentrations of up to 20 mM in humans, predominantly in the brain and skeletal muscle [5]. Traditionally used in exercise physiology to increase performance, it is available over the counter as a food supplement. Animal studies suggest therapeutic benefits for many chronic diseases (e.g., type 2 diabetes, cardiovascular disease, stroke, Alzheimer’s disease, and Parkinson’s disease), but little in vivo evidence exists in humans [6]. There is, however, some in vitro evidence that suggests carnosine may exhibit anti-cancer properties. Carnosine can selectively inhibit proliferation of transformed cells [7]. Paradoxically, McFarland and Holliday had earlier reported that carnosine increased the Hayflick limit and slowed the growth of cultured human fibroblasts [8]. However, this may be explained by the metabolic differences between these two cell types; whereas most tumor cells rely heavily on glycolysis, in fibroblasts ATP synthesis predominantly occurs in the mitochondria [9]. In glioblastoma, carnosine inhibits proliferation of primary cell cultures derived from surgically removed tumors [10]. Carnosine also inhibits proliferation of gastric, colon, and ovarian cancer cell lines [11,12,13]. In this study, we sought to replicate these observations and also included breast cancer and promonocytic leukemia cells. We further determined promonocytic (U937) cell phenotype, gene expression, and cytokine secretion to determine whether these are linked to carnosine's anti-cancer properties and to identify potential pathways via which carnosine exerts its effects. In addition, U937 cells were differentiated to monocyte/macrophage cells, and changes to cell surface marker expression were determined using flow cytometry.
2. Results: 2.1. Carnosine Inhibits Cancer Cell Proliferation: HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), and ZR-75-1 (breast) cancer cell lines were cultured with increasing concentrations of carnosine from 0 to 200 mM for 6 days. U937 cells were cultured with a carnosine dose range from 0 to 100 mM for 6 days. At 100–200 mM carnosine, proliferation of each line was notably inhibited by day 5, and the effect was even more pronounced at day 6 (Figure 1). At 10–20 mM carnosine, there was no significant difference in proliferation between treated and untreated cells. However, in U937 cells, the proliferation curves showed some evidence of titration (a graded dose response), so this cell line was selected for further analysis.
2.2. Carnosine Upregulates Expression of Proinflammatory Molecules in U937 Cells: Gene array data showed differential regulation of a number of genes following exposure of U937 cells to carnosine (Figure 2A). Significant upregulation in gene expression was observed for IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNF (Figure 2B). IL-8, CCL2, and CCR5 are involved in chemotaxis; similarly, C3 is important for monocyte adhesion. CD86, IL-1β, Ly96, and TNFα are all involved in inflammatory pathways. We note that gene expression events that would result in changes in proliferation/protein expression would occur much earlier, and therefore gene expression was assessed at an earlier time point, i.e., 24 h rather than 6 days. The results of the array were confirmed by RT-qPCR of CCL2, IL-8, and MAPK1 (Figure 2C). These genes were specifically selected for the validation study because CCL2 and IL-8 showed the largest fold-changes in the array, and MAPK1 by comparison exhibited no change. The RT-qPCR showed that CCL2 and IL-8 gene upregulation was observed at 25 mM carnosine, and that the effect became more pronounced with increasing concentrations in a dose-dependent manner.
2.3. Carnosine Modulates Cytokine Secretion from U937 Cells: U937 cells were cultured in the presence or absence of 100 mM carnosine for 5 days, and culture supernatants were analyzed for the presence of cytokines. Once the raw data were adjusted to reflect the cell number in culture, it was found that carnosine increased the secretion of IL-10, GM-CSF, and TNFα (Figure 3). IL-10 is well-established as an anti-inflammatory cytokine. GM-CSF is primarily implicated in innate cell expansion, but also primes mature immune cells, such as neutrophils. TNFα is involved in acute inflammation and has a critical autocrine role for monocytes. Interestingly, carnosine also decreased IL-8 secretion (Figure 3). IL-8 is important in mediating adhesion of monocytes to the vascular endothelium, allowing recruitment to sites of inflammation.
2.4. Carnosine Alters U937 Differentiation and Phenotype: U937 can be differentiated to monocytes using VitD3. Upon culture for 72 h with VitD3, U937 exhibit increased side scatter (SSC), indicative of a monocyte-like phenotype. However, in the presence of carnosine, this population is significantly smaller, with an increase in a second population with larger cell size but lower SSC (Figure 4A). Upon further characterization, this carnosine-induced population is CD11b+CD11c+CD86+MHCII+, indicating a macrophage-like phenotype (Figure 4B). In the monocyte-like population (Figure 4C), carnosine decreases CD11b, CD86, and MHCII expression, whereas in the macrophage-like population (Figure 4B), CD11b, CD11c, CD86, and MHCII are increased. This may suggest that carnosine promotes differentiation to macrophages rather than to monocytes.
3. Discussion: Herein, we present observations on the effect of carnosine on a number of cancer cell lines, with further examination of U937 monocyte-like cells. Carnosine (1) inhibits breast (ZR-75-1), ovarian (SKOV-3), colon (HT29, LIM2045), and leukemic (U937) cancer cell proliferation; (2) upregulates expression of pro-inflammatory molecules; (3) modulates cytokine secretion; and (4) alters U937 differentiation and phenotype. Similar to its effect on other cancer cell lines, carnosine also exerts its anti-cancer effect on U937 by inhibiting proliferation at 100 mM. Importantly, upon visual observation of cell cultures, there was no evidence of cellular stress or cell death (i.e., blebbing or cell debris). Although the carnosine concentrations used may seem high, carnosine is found endogenously in human tissue at concentrations of up to 20–30 mM [14,15] and has been used in human clinical trials to treat a range of conditions (e.g., neurodegenerative disease, type 2 diabetes, cardiovascular disease, stroke) at doses of up to 1500–2000 mg/day [16,17,18].
Moreover, although carnosine exerted its effect on several different cell lines, the concentration at which growth inhibition was observed varied, demonstrating sensitivity in a cell-specific manner. This may be explained by metabolic differences between the different cell types; more specifically, whether they rely more heavily on glycolysis or on mitochondrial ATP synthesis for energy [9]. U937 cells are a myeloid leukemia line and so have the capacity to secrete many cytokines and chemokines either constitutively or in response to specific stimuli. The genes upregulated in response to carnosine are largely inflammatory mediators and so may contribute to its anti-cancer properties by making U937 cells more detectable to the immune system. In addition, IL-1β is a pro-inflammatory cytokine that has previously been shown to have tumoricidal activity and repress tumor growth [19]; thus, its increased gene expression may contribute to the anti-proliferative effects of carnosine. Gene expression of the chemoattractant CCL2 was also increased; CCL2 is known to exert both pro- and anti-tumor effects [20]. CCR5 gene expression was likewise increased. CCR5 is commonly held to promote cancer growth, and many anti-cancer clinical studies aim to block CCR5 expression on cancer cells [21]; however, there is considerable controversy regarding its role in cancer progression. A number of studies have shown either pro- or anti-cancer effects of CCR5, and this discrepancy may be a result of the type of cancer cells and the context in which they originate [22,23]. Here, we show that upregulation of CCR5 in the presence of carnosine might be involved in anti-tumor effects, consistent with existing data. In addition, complement C3 gene expression was increased in the presence of carnosine; although C3 is generally known to promote cancer cell growth, there are studies showing a dual role of complement in cancer, including anti-cancer effects [24]. Furthermore, interferon regulatory factor 7 (IRF7) gene expression was upregulated in the presence of carnosine; IRF7 is known to decrease cancer growth and metastasis. In fact, silencing IRF7 in breast cancer cell lines enhances growth, and restoring IRF7 expression reduces metastasis [25]; similarly, in prostate cancer in mice, overexpression of IRF7 significantly reduces metastasis [26]. The cytokine secretion assay revealed that carnosine increased secretion of IL-10, GM-CSF, and TNFα and decreased secretion of IL-8. It has been reported that high levels of IL-10 in tumors inhibit tumor metastasis [27]; tumor cell lines transfected with IL-10 show inhibition of cell growth through increased IFNγ from CD8+ T cells [28], and IL-10 transgenic mice stimulate CD8+ T cells and limit the growth of immunogenic tumor cells in vivo [29]. Thus, although IL-10 is known as an anti-inflammatory, immunosuppressive cytokine, it has also been shown to have immunostimulatory activities that inhibit tumor cell growth. In fact, recombinant PEGylated IL-10 inhibits tumor cell growth in mice [30]. Therefore, the increased secretion of IL-10 by U937 cells in the presence of carnosine may point to an anti-cancer mechanism. Furthermore, GM-CSF secretion by tumor cells has been shown to boost anti-tumor immunity, and a number of human clinical studies are using recombinant GM-CSF injections into the tumor, GM-CSF fused with tumor-associated proteins in cancer vaccines, and anti-cancer DNA vaccines incorporating GM-CSF [31].
GM-CSF also regulates cancer cell growth and can lead to immunosuppression in the tumor microenvironment; even so, the increased secretion of GM-CSF in the presence of carnosine may contribute to its anti-proliferative potential. In addition, carnosine increases the secretion and gene expression of the pro-inflammatory cytokine TNFα, a known anti-cancer agent with the capacity to induce cancer cell death [32]. Although IL-8 is predominantly known for its immune (neutrophil) chemo-attractive properties, it has also been reported to play a role in tumor progression and metastasis in a number of human cancers by regulating angiogenesis and cytokine secretion from tumor-infiltrating macrophages in the tumor microenvironment [33]. The decreased secretion of IL-8 by U937 cells in the presence of carnosine is therefore potentially inhibitory to cancer progression. Taken together, the increased gene expression of IL-1β, the increased secretion of IL-10, GM-CSF, and TNFα, and the decreased secretion of IL-8 may all contribute to the anti-proliferative effects of carnosine on U937 promonocytic leukemia cells. In addition, carnosine is a known anti-glycating molecule with a number of antioxidant properties and the capacity to act as a ROS scavenger [4]. These actions can potentially counteract the oxidative stress and resultant chronic inflammation that have become known as hallmarks of cancer [34] and so may contribute to the anti-cancer effect observed in this study. This is an area worth exploring further. Interestingly, the gene expression data presented in Figure 2 show that IL-8 gene expression was upregulated by carnosine, whereas in Figure 3, IL-8 secretion was decreased. However, gene expression was measured at a much earlier time point (24 h) than cytokine secretion (5 days), which may account for these seemingly disparate observations. Furthermore, it is possible that the IL-8 protein was expressed but not secreted, being stored intracellularly instead and potentially requiring a second stimulus for secretion. This would also account for the discrepancy. Finally, the altered U937 phenotype and differentiation observed in Figure 4 may also contribute to the anti-cancer effects of carnosine. U937 cells are a promonocytic leukemia and so are characteristically undifferentiated. In carnosine-treated cells, differentiation and phenotype are altered, increasing expression of CD11b, CD11c, CD86, and MHCII and thus making the promonocytic leukemia cells more visible to the immune system.
4. Materials and Methods: 4.1. Cell Culture: HT29 (colon), LIM2045 (colon), SKOV-3 (ovarian), U937 (promonocytic leukemia), and ZR-75-1 (breast) cancer cell lines were maintained at sub-confluence in complete RPMI (Roswell Park Memorial Institute-1640 media supplemented with 10% fetal bovine serum, penicillin/streptomycin, and l-glutamine; all purchased from Sigma-Aldrich, St Louis, MO, USA) and 5% CO2 at 37 °C.
4.2. Proliferation Assay: Cell lines were cultured in complete RPMI with varying concentrations of carnosine from 0 to 200 mM, dissolved directly in the culture media, in quadruplicate plates for up to 6 days.
Culture media were replenished at day 3 to maintain nutrient supply; the same method was applied to all cell lines. On days 3–6, an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) (Sigma-Aldrich) assay was performed on one plate each day to quantitate proliferation [35]. Briefly, all but 50 μL of media was carefully removed from each well. An additional 50 μL of 5 mM MTT in phosphate-buffered saline (PBS) was added to each well, and cells were resuspended by pipetting and incubated in MTT at 37 °C for 4 h. Dimethyl sulfoxide (Sigma-Aldrich, Melbourne, VIC, Australia) (100 μL) was added to each well for 10 min at 37 °C. Each well was mixed by pipetting before absorbance was read at 540 nm using a spectrophotometer (Bio-Rad microplate reader 6.0).
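As a note on how MTT readouts of this kind are typically quantified: absorbance at 540 nm scales with viable cell number, and results are often expressed as a percentage of the untreated control. The sketch below illustrates that convention in Python with hypothetical A540 values; it is not necessarily the exact normalization used in this study.

```python
# Hedged illustration: converting MTT absorbance (A540) readings into
# percent-of-control proliferation. All readings below are hypothetical,
# and this convention is not necessarily the exact one used in the paper.
import numpy as np

# Quadruplicate A540 readings per carnosine dose on a single day (hypothetical).
a540 = {
    0:   np.array([1.21, 1.18, 1.25, 1.19]),  # 0 mM carnosine (untreated control)
    50:  np.array([0.95, 0.91, 0.98, 0.93]),
    100: np.array([0.42, 0.45, 0.40, 0.44]),
}

control_mean = a540[0].mean()
for dose_mm, readings in a540.items():
    pct_of_control = 100 * readings.mean() / control_mean
    print(f"{dose_mm:>3} mM carnosine: {pct_of_control:.0f}% of control")
```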
4.3. Gene Array: U937 cells were cultured in the presence or absence of 100 mM carnosine for 24 h, and RNA was extracted using TRIzol (Invitrogen, Thermo Fisher Scientific, VIC, Australia) followed by purification using the RNeasy Mini Kit (Qiagen, Germany), including on-column DNase treatment. RNA was quantified using the Qubit™ RNA BR Assay Kit, and the RNA integrity number (RIN) was established using an Agilent 2100 Bioanalyzer (all RIN were ≥9.5). Samples were processed according to the manufacturer’s instructions using the RT2 Profiler™ PCR Array Human Innate and Adaptive Immune Responses (Qiagen). Expression levels were normalized to the mean of the reference genes present on the array (GAPDH, ACTB, B2M, HPRT1, and RPLP0), and fold change was calculated using the 2^(−ΔΔCT) formula via the Qiagen GeneGlobe web portal. The pheatmap (version 1.0.12) package for R (version 4.0.2) was used together with RColorBrewer (version 1.1-2) for heatmap analysis. Plots were generated from log2-transformed ΔCt values from 2 PCR arrays. The cluster function was applied to indicate genes with similar expression. The differential expression of three genes—namely, interleukin 8 (IL-8/CXCL8) (NM_000584), C-C motif chemokine ligand 2 (CCL-2) (NM_002982), and mitogen-activated protein kinase 1 (MAPK1) (NM_138957)—was confirmed by RT-qPCR over a range of carnosine concentrations (0–100 mM). cDNA was reverse transcribed from DNase-treated RNA (1 µg) using SuperScript IV VILO Master Mix (Life Technologies, ThermoFisher). PCR was performed using SsoAdvanced™ SYBR Green Supermix (Bio-Rad) with primers from Integrated DNA Technologies (IDT): CCL-2 forward 5′-AGC AGC CAC CTT CAT TCC-3′, reverse 5′-GCC TCT GCA CTG AGA TCT TC-3′; CXCL8 forward 5′-GAG ACA GCA GAG CAC ACA AG-3′, reverse 5′-CTT CAC ACA GAG CTG CAG AA-3′; and MAPK1 forward 5′-CAT TCA GCT AAC GTT CTG CAC-3′, reverse 5′-GTG ATC ATG GTC TGG ATC TGC-3′. PCR analysis was performed in duplicate. Normalized relative expression (fold change) was determined relative to control (0 mM carnosine) and normalized to two stably expressed reference genes (actin beta, ACTB, and glucuronidase beta, GusB) using the ΔΔCT data analysis function in CFX™ Maestro (Bio-Rad), version 1.1.
4.4. Cytokine Secretion: U937 cells were cultured at an initial density of 5 × 10³ cells/mL, with or without 100 mM carnosine, for 5 days. Supernatants were collected, centrifuged, and frozen at −80 °C until assayed. Cytokine secretion was assessed via Bioplex assay (Bio-Rad, USA) according to the manufacturer’s instructions.
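The 2^(−ΔΔCT) relative-quantification step described above reduces to a few lines of arithmetic. Below is a minimal Python sketch of that calculation; the Ct values and helper names are hypothetical and ours, not part of the Qiagen GeneGlobe or CFX Maestro tooling.

```python
# Minimal sketch of the 2^(-ddCt) relative-quantification formula used in
# Section 4.3. All Ct values below are hypothetical.

def delta_ct(ct_target: float, ct_reference: float) -> float:
    """Normalize a target gene's Ct to a reference (housekeeping) gene."""
    return ct_target - ct_reference

def fold_change(dct_treated: float, dct_control: float) -> float:
    """2^-(ddCt): relative expression of the treated vs. control sample."""
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Example: a target gene in carnosine-treated vs. untreated U937 cells,
# normalized to the mean Ct of the reference genes (values hypothetical).
dct_treated = delta_ct(ct_target=22.1, ct_reference=17.0)
dct_control = delta_ct(ct_target=25.4, ct_reference=17.2)

print(f"fold change: {fold_change(dct_treated, dct_control):.2f}")  # 8.57
# A fold change > 1 indicates upregulation in the treated sample.
```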
4.5. Flow Cytometry: U937 cells were differentiated to monocytes/macrophages by exposure to 1,25-dihydroxyvitamin D3 (VitD3) for 72 h and then cultured in 100 mM carnosine for a subsequent 5 days (120 h). Cells were then harvested, labelled with CD14-BV421, CD11b-PE, CD11c-APC/Cy7, CD86-AlexaFluor488, and MHCII-BV510, and analyzed using a BD FACSCanto flow cytometer. Data analysis was performed using FlowJo software (Treestar, USA).
4.6. Statistical Analysis: The statistical analyses for the cancer cell line proliferation assays and for cytokine secretion measured by the 9-plex human Bioplex assay were performed using GraphPad Prism software, version 7.0e. Student's t-tests were performed with the Holm–Sidak correction for multiple comparisons (alpha = 0.05), and p < 0.05 was regarded as statistically significant. Data are presented as mean ± standard error of the mean (SEM). To analyze the human innate and adaptive immune response genes, the RT2 Profiler PCR array (Qiagen software) was used, based on normalization with reference genes.
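For readers who want to reproduce the Section 4.6 statistics outside GraphPad Prism, the same procedure (per-analyte two-sample t-tests with Holm–Sidak correction at alpha = 0.05) can be sketched in Python with SciPy and statsmodels; the cytokine values below are simulated, not study data.

```python
# Analogous sketch of the Section 4.6 statistics: per-analyte two-sample
# t-tests with Holm-Sidak multiple-comparison correction, alpha = 0.05.
# The study used GraphPad Prism; all values below are simulated.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Simulated cytokine readouts (pg/mL) for control vs. carnosine-treated wells.
analytes = ["IL-10", "GM-CSF", "TNFa", "IL-8"]
control = {a: rng.normal(100, 15, 4) for a in analytes}
treated = {a: rng.normal(130, 15, 4) for a in analytes}

pvals = [ttest_ind(treated[a], control[a]).pvalue for a in analytes]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")

for a, p, r in zip(analytes, p_adj, reject):
    print(f"{a}: adjusted p = {p:.4f} {'(significant)' if r else ''}")
```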
5. Conclusions: In this study, we observed that carnosine significantly inhibits proliferation of breast, ovarian, colon, and leukemic cancer cell lines. Further studies revealed additional effects on U937 cell phenotype, gene expression, and cytokine secretion. Together, the increased secretion of IL-10, GM-CSF, and TNFα; the decreased secretion of IL-8; the increased gene expression of IL-8, CCL2, CD86, IL-1β, CCR5, Ly96, IRF7, C3, and TNFα; and the increased expression of the cell surface markers CD11b, CD11c, CD86, and MHCII all have implications for the anti-proliferative properties of carnosine, which should be further explored in anti-cancer therapy.
Background: Carnosine is a dipeptide molecule (β-alanyl-l-histidine) with anti-inflammatory, antioxidant, anti-glycation, and chelating properties. It is used in exercise physiology as a food supplement to increase performance; however, in vitro evidence suggests that carnosine may exhibit anti-cancer properties. Methods: In this study, we investigated the effect of carnosine on breast, ovarian, colon, and leukemic cancer cell proliferation. We further examined U937 promonocytic, human myeloid leukemia cell phenotype, gene expression, and cytokine secretion to determine if these are linked to carnosine's anti-proliferative properties. Results: Carnosine (1) inhibits breast, ovarian, colon, and leukemic cancer cell proliferation; (2) upregulates expression of pro-inflammatory molecules; (3) modulates cytokine secretion; and (4) alters U937 differentiation and phenotype. Conclusions: These effects may have implications for a role for carnosine in anti-cancer therapy.
6,961
189
[ 132, 222, 147, 154, 1977, 85, 196, 440, 61, 88, 101 ]
15
[ "carnosine", "cells", "cell", "il", "u937", "expression", "cancer", "mm", "secretion", "figure" ]
[ "u937 differentiated monocytes", "carnosine u937 promonocytic", "u937 promonocytic leukemia", "pbmc u937 cells", "carnosine u937 cells" ]
[CONTENT] carnosine | anti-cancer | cytokine | β-alanyl-l-histidine | immunomodulation [SUMMARY]
[CONTENT] Antineoplastic Agents | Carnosine | Cell Differentiation | Cell Line, Tumor | Cell Proliferation | Cytokines | Dipeptides | Gene Expression | HT29 Cells | Humans | Neoplasms | U937 Cells [SUMMARY]
[CONTENT] u937 differentiated monocytes | carnosine u937 promonocytic | u937 promonocytic leukemia | pbmc u937 cells | carnosine u937 cells [SUMMARY]
[CONTENT] carnosine | cells | cell | il | u937 | expression | cancer | mm | secretion | figure [SUMMARY]
[CONTENT] cells | cell | disease | carnosine | anti | properties | u937 | promonocytic | cancer | expression [SUMMARY]
[CONTENT] il | figure | carnosine | population | ccl2 | like | gene | expression | u937 | cd86 [SUMMARY]
[CONTENT] il | increased | expression | secretion | secretion il | anti | gene expression | tnfα | cell | gene [SUMMARY]
[CONTENT] carnosine | il | cells | cell | expression | figure | u937 | mm | cancer | secretion [SUMMARY]
[CONTENT] Carnosine ||| [SUMMARY]
[CONTENT] Carnosine | 1 | 2 | 3 | 4 [SUMMARY]
[CONTENT] Carnosine ||| ||| ||| myeloid ||| 1 | 2 | 3 | 4 ||| [SUMMARY]
Comorbidities and cardiovascular risk factors in patients with psoriasis.
25184912
Psoriasis is a chronic inflammatory disease and its pathogenesis involves an interaction between genetic, environmental, and immunological factors. Recent studies have suggested that the chronic inflammatory nature of psoriasis may predispose to an association with other inflammatory diseases, especially cardiovascular diseases and metabolic disorders.
BACKGROUND
We conducted a cross-sectional study involving the assessment of 190 patients. Participants underwent history and physical examination. They also completed a specific questionnaire about epidemiological data, past medical history, and comorbidities. The cardiovascular risk profile was calculated using the Framingham risk score.
METHODS
Patients' mean age was 51.5 ± 14 years, and the predominant clinical presentation was plaque psoriasis (78.4%). We found an increased prevalence of systemic hypertension, type 2 diabetes, metabolic syndrome, and obesity. Increased waist circumference was also found in addition to a considerable prevalence of depression, smoking, and regular alcohol intake. Patients' cardiovascular risk was high according to the Framingham risk score, and 47.2% of patients had moderate or high risk of fatal and non-fatal coronary events in 10 years.
RESULTS
Patients had high prevalence of cardiovascular comorbidities, and high cardiovascular risk according to the Framingham risk score. Further epidemiological studies are needed in Brazil for validation of our results.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Brazil", "Cardiovascular Diseases", "Comorbidity", "Cross-Sectional Studies", "Diabetes Mellitus, Type 2", "Female", "Humans", "Male", "Metabolic Syndrome", "Middle Aged", "Obesity", "Prevalence", "Psoriasis", "Risk Assessment", "Risk Factors", "Severity of Illness Index", "Sex Distribution", "Smoking", "Young Adult" ]
4155951
INTRODUCTION
Psoriasis is a chronic and recurrent condition defined as an immunologically mediated inflammatory disease affecting mainly the skin and joints. The pathogenesis of psoriasis remains unclear, but it is known that genetic, environmental, and immunological factors are implicated [1]. In recent years, the association of psoriasis with several comorbidities, such as obesity, metabolic syndrome (MS), systemic hypertension (SH), dyslipidemia, type 2 diabetes, malignancies, and inflammatory bowel diseases, and with habits such as smoking and alcohol abuse, has been a matter of debate [2,3]. This association between psoriasis and comorbidities, especially cardiovascular and metabolic disorders, may be related to their shared chronic and inflammatory nature, particularly the increased pro-inflammatory cytokines that are part of the pathophysiology of such disorders [4,5]. Recently, the discussion has extended to the higher prevalence of atherosclerosis, arterial calcification, and mortality related to cardiovascular events, such as acute myocardial infarction (AMI) and stroke [6,7]. The chronic inflammatory process of psoriasis involves mechanisms that cause oxidative stress and free radical production, prompting the formation of atherosclerotic plaques on vessel walls and leading to an increased risk of cardiovascular disease [8]. Results of epidemiological reports and studies investigating the association with comorbidities vary depending on the population studied. For this reason, this study was designed with the purpose of defining the epidemiological, clinical, and laboratory profile of patients seen at a reference center for dermatology. Other objectives were to investigate the association with comorbidities and cardiovascular risk factors such as SH, type 2 diabetes, obesity, and dyslipidemia, and to define the profile of cardiovascular risk based on the Framingham risk score.
PATIENTS AND METHODS
We conducted an observational, cross-sectional study with convenience sampling. We assessed one hundred and ninety patients seen at the Department of Dermatology of the Hospital das Clínicas, Universidade Federal de Minas Gerais (HC/UFMG), Brazil, from July 2011 to May 2012. Patients older than 18 years who signed a written consent form were included in the study. This research project was approved by the Review Board of the institution. Patients' data such as age, sex, skin color, time since diagnosis, and family history of psoriasis in first- and second-degree relatives were collected. Personal history of other comorbidities was investigated, particularly SH, type 2 diabetes, dyslipidemia, depression, medications in continued use, and family history of cardiovascular events (angina, AMI, stroke, revascularization surgery, angioplasty). Patients completed the DLQI (Dermatology Life Quality Index), which has been translated into and validated for Brazilian Portuguese [9]. Smoking habits were investigated, and patients were divided into three categories: no smoking, previous smoking (smoking cessation more than a year before the date of the interview), and current smoking (calculating the number of pack-years) [10]. Current smokers were asked whether they started smoking before they were diagnosed with psoriasis. In terms of alcohol intake, patients were divided into those who do not consume alcohol regularly (less than once a month), those who currently consume alcohol regularly (more than once a month), and those who reported alcohol intake in the past. Patients reporting regular alcohol consumption were classified according to its frequency: low (less than once a week), moderate (one to three times a week), high (between four times a week and once a day), and very high (more than once a day) [11]. Skin lesions were graded according to their extension, erythema, infiltration, and desquamation using the Psoriasis Area Severity Index (PASI) [12]. Patients with a PASI score >10 and/or a DLQI score >10, based on Finlay's rule of tens, were considered to have severe psoriasis [13]. Patients currently on systemic medication were also included in the severe group, which is in agreement with the literature [14,15]; many of these patients had stable skin disease and would not otherwise have met the rule-of-tens criteria despite being on systemic drugs. Weight, height, waist circumference, and blood pressure were measured during physical examination. Body mass index (BMI) was calculated using the standard formula: BMI = weight (kg)/height (m)². Results of laboratory tests, such as total cholesterol (TC), high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, triglycerides (TG), and fasting blood glucose, were collected from medical records. We considered the latest tests and did not include those carried out more than six months before the date of the interview.
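Since the severity definition above is a simple decision rule, it can be made concrete in a few lines. The sketch below (Python, with illustrative names of our own) encodes the rule of tens plus this study's additional systemic-therapy criterion.

```python
# Minimal sketch of the "rule of tens" severity classification described
# above (PASI > 10 and/or DLQI > 10 -> severe); patients on systemic
# therapy were also classed as severe in this study. Names are illustrative.

def is_severe(pasi: float, dlqi: float, on_systemic_therapy: bool = False) -> bool:
    return pasi > 10 or dlqi > 10 or on_systemic_therapy

print(is_severe(pasi=3.4, dlqi=12))                           # True (DLQI > 10)
print(is_severe(pasi=2.3, dlqi=4))                            # False
print(is_severe(pasi=2.3, dlqi=4, on_systemic_therapy=True))  # True
```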
Dyslipidemia was defined by a previous diagnosis and treatment with lipid-lowering medications, or by TC ≥200 mg/dl and/or LDL-C ≥160 mg/dl and/or HDL-C <50 mg/dl for women or <40 mg/dl for men and/or TG ≥150 mg/dl [16]. The criteria of the National Cholesterol Education Program - Adult Treatment Panel (NCEP-ATP III) were used to define MS [17]. The assessment of cardiovascular risk was based on the Framingham risk score, which takes into account sex, age, TC, HDL, systolic and diastolic blood pressure, diabetes, and smoking, and estimates the risk of fatal or nonfatal coronary events in 10 years. Patients were stratified into risk categories: low (<10% events in 10 years), moderate (10-20%), and high (>20%) [18]. In the statistical analysis, qualitative variables were expressed as absolute and percentage values. Quantitative variables were expressed as mean, standard deviation, median, and interquartile range. The chi-square test was used to compare proportions; Student's t-test was used to compare two means; and when the variables did not have a Gaussian distribution, the medians were compared using the Kruskal-Wallis test. The threshold of statistical significance was set at p < 0.05. EPI INFO 6.04 (CDC, Atlanta, USA) was used for the statistical analysis.
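Likewise, the BMI formula, the dyslipidemia definition, and the Framingham risk strata above are threshold rules that can be encoded directly. The following Python sketch is illustrative only: the helper names are ours, and it classifies a precomputed 10-year risk percentage rather than computing the Framingham point score itself.

```python
# Illustrative encoding of the threshold rules stated in the Methods:
# BMI, the dyslipidemia definition (lipid values in mg/dl), and the
# Framingham 10-year risk strata. Helper names are our own, and this
# does not compute the Framingham point score itself.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

def is_dyslipidemic(tc, ldl, hdl, tg, sex, on_lipid_lowering=False) -> bool:
    """Dyslipidemia per the study: prior treatment, or any lipid threshold met."""
    hdl_cutoff = 50 if sex == "F" else 40
    return on_lipid_lowering or tc >= 200 or ldl >= 160 or hdl < hdl_cutoff or tg >= 150

def framingham_category(ten_year_risk_pct: float) -> str:
    """Low < 10%, moderate 10-20%, high > 20% risk of coronary events in 10 years."""
    if ten_year_risk_pct < 10:
        return "low"
    return "moderate" if ten_year_risk_pct <= 20 else "high"

print(round(bmi(80, 1.70), 1))                                    # 27.7 (overweight range)
print(is_dyslipidemic(tc=210, ldl=130, hdl=55, tg=120, sex="F"))  # True (TC >= 200)
print(framingham_category(15.0))                                  # 'moderate'
```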
RESULTS
General characteristics The sample's mean age was 51.5 years (±14), ranging from 18 to 92 years; 93 men (48.9%) and 97 women (51.1%) were evaluated. The mean diagnosis time was 15.8 years (±12.2), with median of 13 years (interquartile range between 6.0 and 23.0). Regarding skin color, 80 patients were caucasians (42.1%), 81 mulattos (42.6%), and 29 African descendents (15.3%). Of the 190 patients, 149 (78.4%) had a diagnosis of plaque psoriasis; 18 (9.5%) had palmoplantar psoriasis; 11 (5.8%) had guttate psoriasis; six (3.2%) had generalized pustular psoriasis; five (2.6%) had eryth-rodermic psoriasis; and only one (0.5%) had a diagnosis of Hallopeau's continuous acrodermatitis. Regarding therapeutic modality, 79 patients were only on an topical treatment (41.6%); 59 were on methotrexate (31.1%); 27 were on acitretin (14.2%); on phototherapy (10%); 16 on immunobiological medications (8.4%); and only one on cyclosporine (0.5%). With regard to the family history of psoriasis, 57 patients (30%) reported positive family history; 129 patients (67.9%) denied a family history of psoriasis, and four (2.1%) were not able to answer this item. In relation to the severity of the skin condition, 128 patients (67.4%) were considered to have severe psoriasis. Of the 190 participants, 37 (19.5%) had a previous diagnosis of psoriatic arthritis (PA). Of these, 17 (45.9%) were classified as asymmetric oligoarthritis; 10 (27%) as symmetric polyarthritis; six (16.2%) as interphalangeal arthritis; and four (10.8%) had spondylitis. No patient showed arthritis mutilans. The mean PASI score was 3.4 (±3.03), with a median of 2.3 (interquartile range = 0.9 to 4.5). Of the patients evaluated using the PASI, 17 (10.5%) has a PASI score ≥10. Regarding the DLQI, 26 patients (13.7%) had a DLQI score ≥10. One hundred and eighteen patients (62.1%) had nail involvement. The most frequent alteration was onycholysis (41.6% of patients), followed by nail pitting (21.6%), onychodystrophy (14.2%), subungual hyperkeratosis (11.6%), oil spot (4.2%), and subungual hemorrhage (11.1%). Of the 190 patients, 39 (20.5%) reported current smoking; 57 (30%) reported past smoking; and 94 (49.5%) reported they had never smoked. Most of the 96 patients reporting current or past smoking started smoking before the diagnosis of psoriasis (76.5%). The mean time of current or past smoking was 22.3 (±12 years), with a median of 20 years. The mean packyears was 21.3 (±16). Considering alcohol consumption, 105 patients (55.3%) reported they did not consume alcohol regularly, 31 (16.3%) reported alcohol intake in the past, and 54 patients (28.4%) reported current alcohol consumption. Of the 54 patients who regularly consumed alcohol, 36 (66.6%) reported low consumption (less than once a week) and 18 (33.4%) reported moderate consumption (one to three times a week). No patient reported high or very high consumption. The patients' clinical characteristics are summarized in table 1. Frequency distribution of clinical variables of the psoriasis patients of the Dermatology Service of HC-UFMG Categories do not exclude one another The sample's mean age was 51.5 years (±14), ranging from 18 to 92 years; 93 men (48.9%) and 97 women (51.1%) were evaluated. The mean diagnosis time was 15.8 years (±12.2), with median of 13 years (interquartile range between 6.0 and 23.0). Regarding skin color, 80 patients were caucasians (42.1%), 81 mulattos (42.6%), and 29 African descendents (15.3%). 
Of the 190 patients, 149 (78.4%) had a diagnosis of plaque psoriasis; 18 (9.5%) had palmoplantar psoriasis; 11 (5.8%) had guttate psoriasis; six (3.2%) had generalized pustular psoriasis; five (2.6%) had eryth-rodermic psoriasis; and only one (0.5%) had a diagnosis of Hallopeau's continuous acrodermatitis. Regarding therapeutic modality, 79 patients were only on an topical treatment (41.6%); 59 were on methotrexate (31.1%); 27 were on acitretin (14.2%); on phototherapy (10%); 16 on immunobiological medications (8.4%); and only one on cyclosporine (0.5%). With regard to the family history of psoriasis, 57 patients (30%) reported positive family history; 129 patients (67.9%) denied a family history of psoriasis, and four (2.1%) were not able to answer this item. In relation to the severity of the skin condition, 128 patients (67.4%) were considered to have severe psoriasis. Of the 190 participants, 37 (19.5%) had a previous diagnosis of psoriatic arthritis (PA). Of these, 17 (45.9%) were classified as asymmetric oligoarthritis; 10 (27%) as symmetric polyarthritis; six (16.2%) as interphalangeal arthritis; and four (10.8%) had spondylitis. No patient showed arthritis mutilans. The mean PASI score was 3.4 (±3.03), with a median of 2.3 (interquartile range = 0.9 to 4.5). Of the patients evaluated using the PASI, 17 (10.5%) has a PASI score ≥10. Regarding the DLQI, 26 patients (13.7%) had a DLQI score ≥10. One hundred and eighteen patients (62.1%) had nail involvement. The most frequent alteration was onycholysis (41.6% of patients), followed by nail pitting (21.6%), onychodystrophy (14.2%), subungual hyperkeratosis (11.6%), oil spot (4.2%), and subungual hemorrhage (11.1%). Of the 190 patients, 39 (20.5%) reported current smoking; 57 (30%) reported past smoking; and 94 (49.5%) reported they had never smoked. Most of the 96 patients reporting current or past smoking started smoking before the diagnosis of psoriasis (76.5%). The mean time of current or past smoking was 22.3 (±12 years), with a median of 20 years. The mean packyears was 21.3 (±16). Considering alcohol consumption, 105 patients (55.3%) reported they did not consume alcohol regularly, 31 (16.3%) reported alcohol intake in the past, and 54 patients (28.4%) reported current alcohol consumption. Of the 54 patients who regularly consumed alcohol, 36 (66.6%) reported low consumption (less than once a week) and 18 (33.4%) reported moderate consumption (one to three times a week). No patient reported high or very high consumption. The patients' clinical characteristics are summarized in table 1. Frequency distribution of clinical variables of the psoriasis patients of the Dermatology Service of HC-UFMG Categories do not exclude one another Comorbidities and laboratory data Eighty-three patients (43.7%) reported previous diagnosis of SH, 49 (25.8%) had received a previous diagnosis of depression, and 29 (15.3%) had been diagnosed with type 2 diabetes. Forty-eight patients (25.3%) reported previous diagnosis of dyslipidemia. Regarding laboratory tests, 18.5% of patients had increased TC, 16.9% had increased LDL, and 33.7% had increased TG. Sixty-six men (74.2%) and 15 women (16.9%) had low HDL. Sixty-eight patients (64.3%) had high BMI. The overweight frequency was 31.1%, and 33.2% of patients were considered obese (22.6% with grade I obesity, 7.4% with grade II obesity, and 3.2% with grade III obesity). In relation to waist circumference, 68.8% of men and 86.5% of women had increased measures (p=0.005). 
Considering the cutoff point suggested by the WHO as a significant biomarker for cardiovascular risk, 38.7% of men and 78.4% of women had alterations (p<0.0001). Eighty patients (44.9%) met the criteria for the diagnosis of MS according to the NCEP-ATP III (42.6% of men and 47.2% of women). The frequencies for the patients' comorbidities and laboratory alterations are summarized in table 2. Frequency distribution of comorbidities and laboratory alterations of psoriasis patients of the Dermatology Service of HC-UFMG Eighty-three patients (43.7%) reported previous diagnosis of SH, 49 (25.8%) had received a previous diagnosis of depression, and 29 (15.3%) had been diagnosed with type 2 diabetes. Forty-eight patients (25.3%) reported previous diagnosis of dyslipidemia. Regarding laboratory tests, 18.5% of patients had increased TC, 16.9% had increased LDL, and 33.7% had increased TG. Sixty-six men (74.2%) and 15 women (16.9%) had low HDL. Sixty-eight patients (64.3%) had high BMI. The overweight frequency was 31.1%, and 33.2% of patients were considered obese (22.6% with grade I obesity, 7.4% with grade II obesity, and 3.2% with grade III obesity). In relation to waist circumference, 68.8% of men and 86.5% of women had increased measures (p=0.005). Considering the cutoff point suggested by the WHO as a significant biomarker for cardiovascular risk, 38.7% of men and 78.4% of women had alterations (p<0.0001). Eighty patients (44.9%) met the criteria for the diagnosis of MS according to the NCEP-ATP III (42.6% of men and 47.2% of women). The frequencies for the patients' comorbidities and laboratory alterations are summarized in table 2. Frequency distribution of comorbidities and laboratory alterations of psoriasis patients of the Dermatology Service of HC-UFMG Framingham risk score The Framingham risk score was used to evaluate 159 patients. Of those, 38.4% had moderate risk and 8.8% had high risk for the development of coronary heart diseases in 10 years. The median score was 8.0 (interquartile range = 4.0 to 13.0). Male patients had a median score of 10.0, while among female patients the median score was 7.0. The difference between men and women was statistically significant (Kruskal-Wallis test = 8.6; p=0.0033). The mean was 10.5 ± 9.1. The Framingham risk score was used to evaluate 159 patients. Of those, 38.4% had moderate risk and 8.8% had high risk for the development of coronary heart diseases in 10 years. The median score was 8.0 (interquartile range = 4.0 to 13.0). Male patients had a median score of 10.0, while among female patients the median score was 7.0. The difference between men and women was statistically significant (Kruskal-Wallis test = 8.6; p=0.0033). The mean was 10.5 ± 9.1. Variable analysis regarding the severity of psoriasis Of the 190 patients, 128 were considered as having severe psoriasis. The statistically different variables in terms of severity of psoriasis were: - Time to diagnosis: severe patients had a mean diagnosis time of 17.5 (±12.3) years compared with a mean of 12.6 (±11.2) years among nonsevere patients. P-value = 0.0021. -Psoriatic arthritis (PsA): 34 (26.5%) patients with severe psoriasis had concurrent diagnosis of PsA compared with only three (4.8%) nonsevere patients. Pvalue = 0.0008. - Nail involvement: 91 (71.1%) severe patients had nail involvement, while only 27 (43.5%) of the nonsevere group had alterations of this type. P-value = 0.0004. 
Variable analysis regarding the severity of psoriasis

Of the 190 patients, 128 were considered as having severe psoriasis. The variables that differed significantly according to psoriasis severity were:

- Time since diagnosis: severe patients had a mean time since diagnosis of 17.5 (±12.3) years, compared with 12.6 (±11.2) years among nonsevere patients (p = 0.0021).
- Psoriatic arthritis: 34 patients (26.5%) with severe psoriasis had a concurrent diagnosis of PsA, compared with only three (4.8%) nonsevere patients (p = 0.0008).
- Nail involvement: 91 severe patients (71.1%) had nail involvement, versus only 27 (43.5%) in the nonsevere group (p = 0.0004).
- BMI: 89 patients (69.5%) in the severe group had BMI ≥ 25 (overweight + obesity), compared with 33 (53.2%) in the nonsevere group. When obesity alone was analyzed, however, the difference was not statistically significant.

The statistical analyses performed are described in table 3.

Table 3. Frequency distribution of clinical variables in relation to the severity of psoriasis in 190 patients of the Dermatology Service of HC-UFMG - statistically significant variables (chi-square and Kruskal-Wallis tests; IQR = interquartile range).
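These comparisons rest on the chi-square test for proportions. As an illustration, the PsA-by-severity contrast can be recomputed from the counts reported above; a chi-square with Yates continuity correction on the 2×2 table reproduces a p-value of about 0.0008, though the exact figure depends on the correction used.

```python
from scipy.stats import chi2_contingency

# 2x2 table from the reported counts:
# rows = severe / nonsevere psoriasis, columns = PsA / no PsA
table = [[34, 94],   # 34 of 128 severe patients had PsA
         [3, 59]]    # 3 of 62 nonsevere patients had PsA

# chi2_contingency applies Yates' continuity correction to 2x2 tables
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")  # roughly p = 0.0008
```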
Variable analysis regarding the classification of psoriasis

In terms of psoriasis classification, patients were divided into two groups (plaque psoriasis and other clinical forms), and the clinical and laboratory variables were compared between them. Only PsA showed statistical significance: of the 37 patients with a diagnosis of PsA, 35 had plaque psoriasis and only two had other clinical forms of psoriasis (23.5% vs. 4.9%; p=0.0146).

Relationship between nail changes and psoriatic arthritis

The association between PsA and nail changes was statistically significant: of the 37 patients with a diagnosis of PsA, 34 (91.2%) had nail involvement (p = 0.0001).

Prevalence of morbidities in the Brazilian population compared with our sample

Based on data from 2011 provided by the Instituto Brasileiro de Geografia e Estatística (IBGE), the prevalence of some morbidities in the Brazilian population was compared with our sample. A greatly increased prevalence of SH, type 2 diabetes, smoking, overweight, and obesity was found in the 190 psoriasis patients compared with the general population. The prevalence of type 2 diabetes is worth noting, being almost three times higher in patients with psoriasis (Table 4). The prevalence of MS in our sample was also compared with existing data on the Brazilian population, from a population-based study conducted in Vitória.19 The difference between the groups was statistically significant (p = 0.0310); the analysis by sex showed no statistical difference.

Table 4. Comparison of the prevalence of comorbidities between patients with psoriasis and the Brazilian population (source: Vigitel 2011). 95%CI = 95% confidence interval.
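The population comparisons above amount to testing the sample proportion against a fixed reference prevalence and attaching a normal-approximation 95% confidence interval. Below is a minimal sketch using the diabetes counts reported above (29 of 190 patients) against an illustrative population prevalence of 5.3%; the study's exact procedure may have differed.

```python
import math

def prop_with_ci(k: int, n: int, z: float = 1.96):
    """Sample proportion with a normal-approximation 95% CI."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)

def one_sample_prop_z(k: int, n: int, p0: float) -> float:
    """z statistic for H0: the true proportion equals p0."""
    p = k / n
    return (p - p0) / math.sqrt(p0 * (1 - p0) / n)

p, ci = prop_with_ci(29, 190)          # 29/190 = 15.3% with type 2 diabetes
z = one_sample_prop_z(29, 190, 0.053)  # vs. a 5.3% population prevalence
print(f"p = {p:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), z = {z:.1f}")
```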
DISCUSSION

The association between psoriasis and cardiovascular comorbidities has been widely described in recent years by epidemiological studies in different populations. However, the actual prevalence of these associations in the Brazilian population still lacks specific data, because even data on the epidemiology of psoriasis are scarce in Brazil.

The mean age of our patients was 51.5 (±14) years. This contributed to the findings of comorbidities such as SH, type 2 diabetes, and dyslipidemia, whose prevalence rates increase with age. The mean time since diagnosis was also high, at 15.8 years. Regarding skin color, psoriasis is known to be more prevalent in Caucasians, with a low prevalence in African Americans.1 Due to the high level of miscegenation of the Brazilian population, assessment of ethnicity and/or skin color is extremely difficult and controversial. Patients with white and brown skin predominated in our sample (42.1% and 42.6%, respectively), with a lower proportion of black skin (15.3%). In agreement with the literature, there was no significant difference between sexes, since our sample consisted of 48.9% men and 51.1% women.

The predominant clinical form was plaque psoriasis, accounting for most cases (78.4%), followed by the less frequent variants, as reported in epidemiological studies in other populations.1 The number of individuals with generalized pustular psoriasis and erythrodermic psoriasis, which are extremely rare variants, is worth noting, because these clinical forms accounted for 5.8% of patients. Because our study was conducted at a reference center for dermatology, it likely included many patients with more severe conditions.

Also in terms of psoriasis classification, we compared plaque psoriasis with the other clinical forms with regard to patients' clinical and laboratory characteristics and comorbidities. The only variable that showed statistical significance was PsA, which was much more frequent in patients with psoriasis vulgaris: of the 37 patients with PsA, 35 had plaque psoriasis, one had guttate psoriasis, and one had erythrodermic psoriasis. Psoriasis vulgaris is the main form of psoriasis associated with PsA, but pustular psoriasis and guttate psoriasis have also been correlated with PsA.

Nail involvement is a manifestation of psoriasis, affecting up to 80% of patients during their lifetime. We found nail involvement in 62.1% of our sample. In disagreement with the literature, onycholysis was the most frequent alteration, found in 67% of our patients with nail changes. Nail pitting, which, according to epidemiological studies, represents the most common type of nail lesion in psoriasis patients,1 was found in 41 individuals in our sample (21.6% of the total, or 34.7% of patients with nail changes). The large number of patients on systemic treatment in the present study might have influenced the frequency of nail changes. For example, acitretin may cause nail fragility with onychorrhexis, onychoschizia, and onycholysis, and phototherapy with oral psoralen may cause photo-onycholysis. Other drugs, such as methotrexate and immunobiological medications, may normalize keratinization and differentiation of epidermal cells, contributing to the reduction of changes in the nail matrix, such as nail pitting.

As for joint involvement, 19.5% of patients had a previous diagnosis of PsA, which is consistent with the prevalence reported in the literature.20 Many patients had joint complaints without having a diagnosis of PsA, and PsA may be underdiagnosed because of the difficulty in reaching a consensus on its diagnostic criteria. In addition to the diagnostic criteria, PsA classification according to the pattern of joint involvement is also controversial. Moll and Wright classified PsA into five subtypes: asymmetric oligoarthritis, symmetric polyarthritis similar to rheumatoid arthritis, arthritis with a predominance of the distal interphalangeal joints, arthritis mutilans, and spondylitic arthritis.21 In our study, asymmetric oligoarthritis was the most prevalent form, accounting for 45.9% of patients with PsA. The second most common form was symmetric polyarthritis (27%), followed by distal interphalangeal arthritis and spondylitic arthritis (16.2% and 10.8%, respectively). Arthritis mutilans is extremely rare and was not found in our patients. Many studies confirm our finding that oligoarthritis is the most prevalent form, including those by Moll and Wright and a Brazilian study conducted by Bonfiglioli et al.21,22 Other studies have shown polyarticular arthritis as the most frequent form, but clinical forms may overlap, which makes it difficult to classify these specific categories. Some authors claim that oligoarthritis is more frequent in the early stages of psoriasis, with the possibility of developing into more extensive and severe forms.23

The association between PsA and nail involvement is well documented in the literature, and the severity of nail involvement may correlate with the severity of the skin and joint conditions. Nail changes may be found in up to 90% of patients with an established diagnosis of PsA, especially when there is involvement of the distal interphalangeal joints.24 Our data corroborate this percentage, since 91.2% of our patients with arthritis had nail changes, with proven statistical significance.
[ "General characteristics", "Comorbidities and laboratory data", "Framingham risk score", "Variable analysis regarding the severity of psoriasis", "Variable analysis regarding the classification of psoriasis", "Relationship between nail changes and psoriatic arthritis", "Prevalence of morbidities in the Brazilian population compared with our\nsample" ]
[ "The sample's mean age was 51.5 years (±14), ranging from 18 to 92 years; 93 men\n(48.9%) and 97 women (51.1%) were evaluated. The mean diagnosis time was 15.8 years\n(±12.2), with median of 13 years (interquartile range between 6.0 and 23.0).\nRegarding skin color, 80 patients were caucasians (42.1%), 81 mulattos (42.6%), and\n29 African descendents (15.3%).\nOf the 190 patients, 149 (78.4%) had a diagnosis of plaque psoriasis; 18 (9.5%) had\npalmoplantar psoriasis; 11 (5.8%) had guttate psoriasis; six (3.2%) had generalized\npustular psoriasis; five (2.6%) had eryth-rodermic psoriasis; and only one (0.5%) had\na diagnosis of Hallopeau's continuous acrodermatitis.\nRegarding therapeutic modality, 79 patients were only on an topical treatment\n(41.6%); 59 were on methotrexate (31.1%); 27 were on acitretin (14.2%); on\nphototherapy (10%); 16 on immunobiological medications (8.4%); and only one on\ncyclosporine (0.5%). With regard to the family history of psoriasis, 57 patients\n(30%) reported positive family history; 129 patients (67.9%) denied a family history\nof psoriasis, and four (2.1%) were not able to answer this item.\nIn relation to the severity of the skin condition, 128 patients (67.4%) were\nconsidered to have severe psoriasis.\nOf the 190 participants, 37 (19.5%) had a previous diagnosis of psoriatic arthritis\n(PA). Of these, 17 (45.9%) were classified as asymmetric oligoarthritis; 10 (27%) as\nsymmetric polyarthritis; six (16.2%) as interphalangeal arthritis; and four (10.8%)\nhad spondylitis. No patient showed arthritis mutilans.\nThe mean PASI score was 3.4 (±3.03), with a median of 2.3 (interquartile range = 0.9\nto 4.5). Of the patients evaluated using the PASI, 17 (10.5%) has a PASI score ≥10.\nRegarding the DLQI, 26 patients (13.7%) had a DLQI score ≥10.\nOne hundred and eighteen patients (62.1%) had nail involvement. The most frequent\nalteration was onycholysis (41.6% of patients), followed by nail pitting (21.6%),\nonychodystrophy (14.2%), subungual hyperkeratosis (11.6%), oil spot (4.2%), and\nsubungual hemorrhage (11.1%).\nOf the 190 patients, 39 (20.5%) reported current smoking; 57 (30%) reported past\nsmoking; and 94 (49.5%) reported they had never smoked. Most of the 96 patients\nreporting current or past smoking started smoking before the diagnosis of psoriasis\n(76.5%). The mean time of current or past smoking was 22.3 (±12 years), with a median\nof 20 years. The mean packyears was 21.3 (±16).\nConsidering alcohol consumption, 105 patients (55.3%) reported they did not consume\nalcohol regularly, 31 (16.3%) reported alcohol intake in the past, and 54 patients\n(28.4%) reported current alcohol consumption. Of the 54 patients who regularly\nconsumed alcohol, 36 (66.6%) reported low consumption (less than once a week) and 18\n(33.4%) reported moderate consumption (one to three times a week). No patient\nreported high or very high consumption.\nThe patients' clinical characteristics are summarized in table 1.\nFrequency distribution of clinical variables of the psoriasis patients of the\nDermatology Service of HC-UFMG\n\nCategories do not exclude one another\n", "Eighty-three patients (43.7%) reported previous diagnosis of SH, 49 (25.8%) had\nreceived a previous diagnosis of depression, and 29 (15.3%) had been diagnosed with\ntype 2 diabetes. Forty-eight patients (25.3%) reported previous diagnosis of\ndyslipidemia. Regarding laboratory tests, 18.5% of patients had increased TC, 16.9%\nhad increased LDL, and 33.7% had increased TG. 
The relationship between psoriasis and SH has also been well documented. Of our 190 patients, 43.7% had a previous diagnosis of SH. Compared with the data for the Brazilian population reported by the Ministry of Health, this is a significant difference, almost twice as high (43.7% vs. 22.7%).26 The difference between sexes was also significant (19.5% vs. 39.8% in males and 27.4% vs. 47.4% in females). Five patients who denied a previous diagnosis of SH or use of antihypertensive medication showed increased blood pressure levels during physical examination; these patients were referred for specialist follow-up. Unlike the study by Kimball et al., we found no difference in the prevalence of SH in relation to psoriasis severity.27

In addition to SH, type 2 diabetes is another important independent risk factor for cardiovascular disease that is associated with psoriasis, probably due to their common pro-inflammatory state. In our study, 15.3% of patients had a previous diagnosis of type 2 diabetes. There was a significant difference when we compared these data with the Brazilian population, whose prevalence is around 5.3%, i.e., almost one-third of the value we found.26 Eight patients who did not have a diagnosis of type 2 diabetes showed increased glucose levels, but, as with SH, the diagnosis of type 2 diabetes requires at least two increased measurements. For these patients, we requested additional laboratory tests and referred them for clinical follow-up. Diabetes is the comorbidity with the largest number of studies reporting increased prevalence in proportion to the severity of psoriasis.15,27,28 However, this difference was not found in our study.

Several studies have confirmed the association between psoriasis and altered lipid metabolism, such as reduced HDL and increased TG.2,11 In our study, 48 patients (25.3%) reported a previous diagnosis of dyslipidemia. Regarding laboratory tests, the high rate of men with low HDL compared with women (74.2% vs. 16.9%) is worth noting. Among patients who were taking acitretin, only five showed an abnormal lipid profile, which probably did not affect the final data analysis.

MS is a set of cardiovascular risk factors, including SH, abdominal obesity, dyslipidemia, and glucose intolerance. Although it has recently been questioned whether MS represents an additional risk beyond its individual components, it remains a condition that aggregates multiple comorbidities that individually increase the patient's cardiovascular risk. Recent studies have shown a higher prevalence of MS in psoriasis patients, especially after the age of 40, based on the pro-inflammatory nature of the two diseases.11,29 In the present study, 44.9% of patients met the criteria for the diagnosis of MS according to the NCEP-ATP III. Compared with the estimated prevalence for the Brazilian population, which is 29.8%,19 this is a statistically significant difference (p = 0.0310). There was no correlation of MS with psoriasis severity, in disagreement with the findings of Langan et al., in which the prevalence of metabolic alterations was directly proportional to the severity and extent of skin psoriasis.30
[ "INTRODUCTION", "PATIENTS AND METHODS", "RESULTS", "General characteristics", "Comorbidities and laboratory data", "Framingham risk score", "Variable analysis regarding the severity of psoriasis", "Variable analysis regarding the classification of psoriasis", "Relationship between nail changes and psoriatic arthritis", "Prevalence of morbidities in the Brazilian population compared with our\nsample", "DISCUSSION", "CONCLUSIONS" ]
[ "Psoriasis is a chronic and recurrent condition defined as an immunologically mediated\ninflammatory disease affecting mainly the skin and joints. The pathogenesis of psoriasis\nremains unclear, but it is known that genetic, environmental, and immunological factors\nare implicated.1\nIn recent years, the association of psoriasis with several comorbidities, such as\nobesity, metabolic syndrome (MS), systemic hypertension (SH), dyslipidemia, type 2\ndiabetes, malignancies, inflammatory bowel diseases, and habits, such as smoking and\nalcohol abuse has been matter of debate.2,3 This association between\npsoriasis and comorbidities, especially cardiovascular and metabolic disorders, may be\nrelated to their chronic and inflammatory nature, especially due to increased\npro-inflammatory cytokines that are part of the pathophysiology of such\ndisorders.4,5 Recently the discussion has extended about higher\nprevalence of atherosclerosis, arterial calcification, and mortality related to\ncardiovascular events, such as acute myocardial infarction (AMI) and stroke.6,7\nThe chronic inflammatory process of psoriasis involves mechanisms that cause oxidative\nstress and free radical production, prompting the formation of atherosclerotic plaques\non vessel walls and leading to an increased risk of cardiovascular disease.8\nResults of epidemiological reports and studies investigating the association with\ncomorbidities vary depending on the population studied. For this reason this study was\ndesigned with the purpose of defining the epidemiological, clinical, and laboratory\nprofile of patients seen at a reference center for dermatology. Other objectives were to\ninvestigate the association with comorbidities and cardiovascular risk factors such as\nSH, type 2 diabetes, obesity, dyslipidemia, and to define the profile of cardiovascular\nrisk based on the Framingham risk score.", "We conducted an observational, cross-sectional study with convenience sampling. We\nassessed one hundred and ninety patients seen at the Department of Dermatology of the\nHospital das Clínicas, Universidade Federal de Minas Gerais (HC/UFMG), Brazil, from July\n2011 to May 2012. Patients older than 18 years who signed a written consent form were\nincluded in the study. This research project was approved by the Review Board of the\ninstitution.\nPatient's data such as age, sex, and skin color, time to diagnosis, and family history\nof psoriasis in first- and second-degree relatives were collected. Personal history of\nother comorbidities was investigated, particularly SH, type 2 diabetes, dyslipidemia,\ndepression, continued use medications, and family history of cardiovascular events\n(angina, AMI, stroke, revascularization surgery, angioplasty).\nPatients completed the DLQI (Dermatology Life Quality Index), which was translated into\nand validated for Brazilian Portuguese.9\nSmoking habits were investigated, and patients were divided into three categories: no\nsmoking, previous smoking (smoking cessation more than a year before the date of the\ninterview), and current smoking (calculating the number of pack-years).10 Current smokers were asked whether they\nstarted smoking before they were diagnosed with psoriasis.\nIn terms of alcohol intake, patients were divided in those who do not consume alcohol\nregularly (less than once a month), those who currently consume alcohol regularly (more\nthan once a month), and those who reported alcohol intake in the past. 
Similarly to smoking, the habit of drinking alcohol may start or become more frequent after the development of psoriasis, due mainly to changes related to quality of life.3 Some authors have also claimed that alcohol and its metabolites can stimulate keratinocyte proliferation and the production of inflammatory cytokines, which may contribute to the development of psoriasis.33 The quantification of alcohol consumption is extremely difficult and controversial, as there are different types of alcoholic beverages with varying alcohol content. In addition, the same amount of alcohol can have different effects on different individuals, depending on factors such as sex, age, and comorbidities. Several studies have reported an association of alcohol intake with psoriasis. Poikolainen et al. found a two-fold higher intake of alcohol in psoriasis patients compared with controls up to 12 months before the onset of psoriasis.34 In a prospective study, Qureshi et al. found 1.72 times the risk of developing psoriasis in alcoholic patients.35 Sommer et al. also described an association between alcohol abuse and psoriasis proportional to the amount of alcohol intake, of significant magnitude, with OR ranging from 2.78 to 3.61.11 Among our patients, only 28.4% reported a current habit of alcohol intake, and of these, no patient had a diagnosis of alcoholism according to the CAGE questionnaire (Cut down, Annoyed by criticism, Guilty, and Eye-opener).
Psoriasis, like other skin diseases, can affect the patient's self-esteem, interfering with all aspects of quality of life. This results in an increased prevalence of mood disorders, especially anxiety and depression, often unrelated to the extent of skin involvement. The prevalence of depression in psoriasis patients is about 20%, and it is believed that the onset of depression can increase the risk of cardiovascular disease.36 In our study, the prevalence of depression was 25.8%, but it was not related to the severity or extent of the skin condition.

Regarding major cardiovascular events, three patients reported past AMI requiring angioplasty, and one patient had a history of ischemic stroke. All four patients already had a diagnosis of psoriasis when these events occurred.

According to the severity criteria used, 67.4% of patients were classified as having severe psoriasis. Some variables showed statistical significance in relation to disease severity, such as time since diagnosis: severe patients had a mean time since diagnosis of 17.5 (±12.3) years compared with 12.6 (±11.2) years in nonsevere patients. This may be because severe patients remain in follow-up at the health care facility, whereas patients with milder and/or controlled skin conditions are lost to follow-up precisely because they improved. Skin biopsy was also more frequently performed in severe patients, probably due to the need for accurate diagnostic confirmation before starting more aggressive therapy. The presence of PsA and of nail involvement was also higher in severe patients, with high statistical significance (p = 0.0008 and p = 0.0004, respectively). These data are consistent with the literature.21

Another finding that was statistically significant regarding severity was overweight: patients with BMI ≥ 25 (overweight + obesity) tended to have more severe psoriasis. However, when obesity was analyzed separately, there was no statistical significance. Neimann et al. found a correlation between obesity and severity of psoriasis in a population-based study in the UK, even after adjustment for age and sex and multivariate analysis.15
Of the 37 patients with a diagnosis of PsA, 34 (91.2%) had nail\ninvolvement (p-value = 0.0001).\n Prevalence of morbidities in the Brazilian population compared with our\nsample Based on data from 2011 provided by the Instituto Brasileiro de Geografia e\nEstatística (IBGE), the prevalence of some morbidities in the Brazilian population\nwas compared with our sample. A greatly increased prevalence of SH, type 2 diabetes,\nsmoking, overweight, and obesity was found in the 190 psoriasis patients compared\nwith the general population. The prevalence of type 2 diabetes is worth noting, being\n3 times higher in patients with psoriasis (Table\n4). A comparison between the prevalence of MS in our sample and the\nexisting data on the Brazilian population was also carried out from a\npopulation-based study conducted in Vitória.19 The difference between the groups was statistically\nsignificant, with p = 0.0310. The analysis by sex did not show any statistical\ndifference.\nComparison of the prevalence of comorbidities between patients with psoriasis\nand the Brazilian population (Source: Vigitel 2011)\n95%CI = 95% confidence interval.\nBased on data from 2011 provided by the Instituto Brasileiro de Geografia e\nEstatística (IBGE), the prevalence of some morbidities in the Brazilian population\nwas compared with our sample. A greatly increased prevalence of SH, type 2 diabetes,\nsmoking, overweight, and obesity was found in the 190 psoriasis patients compared\nwith the general population. The prevalence of type 2 diabetes is worth noting, being\n3 times higher in patients with psoriasis (Table\n4). A comparison between the prevalence of MS in our sample and the\nexisting data on the Brazilian population was also carried out from a\npopulation-based study conducted in Vitória.19 The difference between the groups was statistically\nsignificant, with p = 0.0310. The analysis by sex did not show any statistical\ndifference.\nComparison of the prevalence of comorbidities between patients with psoriasis\nand the Brazilian population (Source: Vigitel 2011)\n95%CI = 95% confidence interval.", "The sample's mean age was 51.5 years (±14), ranging from 18 to 92 years; 93 men\n(48.9%) and 97 women (51.1%) were evaluated. The mean diagnosis time was 15.8 years\n(±12.2), with median of 13 years (interquartile range between 6.0 and 23.0).\nRegarding skin color, 80 patients were caucasians (42.1%), 81 mulattos (42.6%), and\n29 African descendents (15.3%).\nOf the 190 patients, 149 (78.4%) had a diagnosis of plaque psoriasis; 18 (9.5%) had\npalmoplantar psoriasis; 11 (5.8%) had guttate psoriasis; six (3.2%) had generalized\npustular psoriasis; five (2.6%) had eryth-rodermic psoriasis; and only one (0.5%) had\na diagnosis of Hallopeau's continuous acrodermatitis.\nRegarding therapeutic modality, 79 patients were only on an topical treatment\n(41.6%); 59 were on methotrexate (31.1%); 27 were on acitretin (14.2%); on\nphototherapy (10%); 16 on immunobiological medications (8.4%); and only one on\ncyclosporine (0.5%). With regard to the family history of psoriasis, 57 patients\n(30%) reported positive family history; 129 patients (67.9%) denied a family history\nof psoriasis, and four (2.1%) were not able to answer this item.\nIn relation to the severity of the skin condition, 128 patients (67.4%) were\nconsidered to have severe psoriasis.\nOf the 190 participants, 37 (19.5%) had a previous diagnosis of psoriatic arthritis\n(PA). 
Of these, 17 (45.9%) were classified as asymmetric oligoarthritis; 10 (27%) as\nsymmetric polyarthritis; six (16.2%) as interphalangeal arthritis; and four (10.8%)\nhad spondylitis. No patient showed arthritis mutilans.\nThe mean PASI score was 3.4 (±3.03), with a median of 2.3 (interquartile range = 0.9\nto 4.5). Of the patients evaluated using the PASI, 17 (10.5%) has a PASI score ≥10.\nRegarding the DLQI, 26 patients (13.7%) had a DLQI score ≥10.\nOne hundred and eighteen patients (62.1%) had nail involvement. The most frequent\nalteration was onycholysis (41.6% of patients), followed by nail pitting (21.6%),\nonychodystrophy (14.2%), subungual hyperkeratosis (11.6%), oil spot (4.2%), and\nsubungual hemorrhage (11.1%).\nOf the 190 patients, 39 (20.5%) reported current smoking; 57 (30%) reported past\nsmoking; and 94 (49.5%) reported they had never smoked. Most of the 96 patients\nreporting current or past smoking started smoking before the diagnosis of psoriasis\n(76.5%). The mean time of current or past smoking was 22.3 (±12 years), with a median\nof 20 years. The mean packyears was 21.3 (±16).\nConsidering alcohol consumption, 105 patients (55.3%) reported they did not consume\nalcohol regularly, 31 (16.3%) reported alcohol intake in the past, and 54 patients\n(28.4%) reported current alcohol consumption. Of the 54 patients who regularly\nconsumed alcohol, 36 (66.6%) reported low consumption (less than once a week) and 18\n(33.4%) reported moderate consumption (one to three times a week). No patient\nreported high or very high consumption.\nThe patients' clinical characteristics are summarized in table 1.\nFrequency distribution of clinical variables of the psoriasis patients of the\nDermatology Service of HC-UFMG\n\nCategories do not exclude one another\n", "Eighty-three patients (43.7%) reported previous diagnosis of SH, 49 (25.8%) had\nreceived a previous diagnosis of depression, and 29 (15.3%) had been diagnosed with\ntype 2 diabetes. Forty-eight patients (25.3%) reported previous diagnosis of\ndyslipidemia. Regarding laboratory tests, 18.5% of patients had increased TC, 16.9%\nhad increased LDL, and 33.7% had increased TG. Sixty-six men (74.2%) and 15 women\n(16.9%) had low HDL. Sixty-eight patients (64.3%) had high BMI. The overweight\nfrequency was 31.1%, and 33.2% of patients were considered obese (22.6% with grade I\nobesity, 7.4% with grade II obesity, and 3.2% with grade III obesity).\nIn relation to waist circumference, 68.8% of men and 86.5% of women had increased\nmeasures (p=0.005). Considering the cutoff point suggested by the WHO as a\nsignificant biomarker for cardiovascular risk, 38.7% of men and 78.4% of women had\nalterations (p<0.0001).\nEighty patients (44.9%) met the criteria for the diagnosis of MS according to the\nNCEP-ATP III (42.6% of men and 47.2% of women).\nThe frequencies for the patients' comorbidities and laboratory alterations are\nsummarized in table 2.\nFrequency distribution of comorbidities and laboratory alterations of psoriasis\npatients of the Dermatology Service of HC-UFMG", "The Framingham risk score was used to evaluate 159 patients. Of those, 38.4% had\nmoderate risk and 8.8% had high risk for the development of coronary heart diseases\nin 10 years. The median score was 8.0 (interquartile range = 4.0 to 13.0). Male\npatients had a median score of 10.0, while among female patients the median score was\n7.0. The difference between men and women was statistically significant\n(Kruskal-Wallis test = 8.6; p=0.0033). 
The mean was 10.5 ± 9.1.", "Of the 190 patients, 128 were considered as having severe psoriasis. The\nstatistically different variables in terms of severity of psoriasis were:\n- Time to diagnosis: severe patients had a mean diagnosis time of 17.5 (±12.3) years\ncompared with a mean of 12.6 (±11.2) years among nonsevere patients. P-value =\n0.0021.\n-Psoriatic arthritis (PsA): 34 (26.5%) patients with severe psoriasis had concurrent\ndiagnosis of PsA compared with only three (4.8%) nonsevere patients. Pvalue =\n0.0008.\n- Nail involvement: 91 (71.1%) severe patients had nail involvement, while only 27\n(43.5%) of the nonsevere group had alterations of this type. P-value = 0.0004.\n- BMI: 89 (69.5%) patients of the severe group had BMI ≥ 25 (considering overweight +\nobesity) compared with 33 (53.2%) of the nonsevere group. However, when we only\nanalyzed obesity, there was no statistical significance.\nThe statistical analyses performed are described in table 3.\nFrequency distribution of clinical variables in relation to the severity of\npsoriasis in 190 patients of the Dermatology Service of HC-UFMG - statistically\nsignificant variables\nChi-square test\nKruskal-Wallis test f Interquartile range\nInterquartile range", "In terms of psoriasis classification, patients were divided into two groups: plaque\npsoriasis, and other clinical forms, with analysis of the clinical and laboratory\nvariables between the two groups. Only PsA showed statistical significance: of the 37\npatients with a diagnosis of PsA, 35 had plaque psoriasis and only two had other\nclinical forms of psoriasis (23.5% vs. 4.9%, p-value=0.0146).", "The association between PsA and nail changes was analyzed, showing statistically\nsignificant results. Of the 37 patients with a diagnosis of PsA, 34 (91.2%) had nail\ninvolvement (p-value = 0.0001).", "Based on data from 2011 provided by the Instituto Brasileiro de Geografia e\nEstatística (IBGE), the prevalence of some morbidities in the Brazilian population\nwas compared with our sample. A greatly increased prevalence of SH, type 2 diabetes,\nsmoking, overweight, and obesity was found in the 190 psoriasis patients compared\nwith the general population. The prevalence of type 2 diabetes is worth noting, being\n3 times higher in patients with psoriasis (Table\n4). A comparison between the prevalence of MS in our sample and the\nexisting data on the Brazilian population was also carried out from a\npopulation-based study conducted in Vitória.19 The difference between the groups was statistically\nsignificant, with p = 0.0310. The analysis by sex did not show any statistical\ndifference.\nComparison of the prevalence of comorbidities between patients with psoriasis\nand the Brazilian population (Source: Vigitel 2011)\n95%CI = 95% confidence interval.", "The association between psoriasis and cardiovascular comorbidities has been widely\ndescribed in recent years by epidemiological studies in different populations. However,\nthe actual prevalence of associations in the Brazilian population still lacks specific\ndata because even data on the epidemiology of psoriasis are scarce in Brazil.\nThe mean age of our patients was 51.5 (± 14) years. This contributed to the findings of\ncomorbidities such as SH, type 2 diabetes, and dyslipidemia, whose prevalence rates\nincrease with age. The mean time to diagnosis was also high, 15.8 years. 
Regarding skin\ncolor, it is known that psoriasis is more prevalent in Caucasians, with a low prevalence\nin AfricanAmericans.1 Due to the high\nlevel of miscegenation of the Brazilian population, assessment of ethnicity and/or skin\ncolor is extremely difficult and controversial. The was a prevalence of patients with\nwhite and brow skin (42.1% and 42.6%, respectively) in our sample, with lower prevalence\nof black skin (15.3%). In agreement with the literature, there was no significant\ndifference between sexes, since our sample consisted of 48.9% of men and 51.1%\nwomen.\nThe predominant clinical form was plaque psoriasis, accounting for most cases (78.4%),\nfollowed by less frequent variables, as reported in epidemiological studies in other\npopulations.1 The number of\nindividuals with generalized pustular psoriasis and erythrodermic psoriasis, which are\nextremely rare variants, is worth noting because these clinical forms accounted for 5.8%\nof patients. Maybe because our study was conducted at a center of excellence for\ndermatology, it is likely to have many patients with more severe conditions.\nAlso in terms of psoriasis classification, we compared plaque psoriasis with other\nclinical forms with regard to clinical and laboratory characteristics of patients and\ncomorbidities. The only variable that showed statistical significance was PsA, which was\nmuch more frequent in patients with psoriasis vulgaris. Of the 37 patients with PsA, 35\nhad plaque psoriasis, one patient had guttate psoriasis, and one patient had\nerythrodermic psoriasis). Psoriasis vulgaris is the main form of psoriasis associated\nwith PsA, but pustular psoriasis and guttate psoriasis have also been correlated with\nPsA.\nNail involvement is a symptom of psoriasis, affecting up to 80% of patients during their\nlifetime. We found nail involvement in 62.1% of our sample. In disagreement with the\nliterature, onycholysis was the most frequent alteration, found in 67% of our patients\nwith nail changes. Nail pitting, which, according to epidemiological studies, represents\nthe most common type of nail lesions in psoriasis patients,1 was found in 41 individuals in our sample (21.5% of the\ntotal or 34.7% of patients with nail changes). The large number of patients on systemic\ntreatment in the present study might have influenced the frequency of nail changes. For\nexample, acitretin may cause nail fragility with onychorexis, onychoschizia, and\nonycholysis; and phototherapy with oral psoralen may cause photo-onycholysis. Other\ndrugs, such as methotrexate and immunobiological medication, may normalize\nkeratinization and differentiation of epidermal cells, contributing to the reduction of\nchanges in the nail matrix, such as nail pitting.\nAs for joint involvement, 19.5% of patients had a previous diagnosis of PsA, which is\nconsistent with the prevalence reported in the literature.20 Many patients had joint complaints without even having a\ndiagnosis of PsA. And this may be an underdiagnosed entity because of the difficulty in\nreaching a consensus on PsA diagnostic criteria. In addition to the diagnostic criteria,\nPsA classification according to the standards of joint involvement is also\ncontroversial. 
Moll and Wright classified PsA into five subtypes: asymmetric\noligoarthritis, symmetric polyarthritis similar to rheumatoid arthritis, arthritis with\na predominance of distal interphalangeal joints,arthritis mutilans, and spondylitic\narthritis.21 In our study,\nasymmetric oligoarthritis was the most prevalent form, accounting for 45.9% of patients\nwith PsA. The second most common form was symmetrical polyarthritis (27%), followed by\ndistal interphalangeal arthritis and spondylitic arthritis (16.2% and 10.8%,\nrespectively). Arthritis mutilans is extremely rare and was not found in our patients.\nMany studies confirm our findings that oligoarthritis is the most prevalent form,\nincluding studies by Moll and Wright and a Brazilian study conducted by Bonfiglioli\net al.21,22 Other studies have shown polyarticular\narthritis as the most frequent form, but there may be overlapped clinical forms, which\nmakes it difficult to classify these specific categories. Some authors claim that\noligoarthritis is more frequent in the early stages of psoriasis, with the possibility\nof developing into more extensive and severe forms.23\nThe association between PsA and nail involvement is well documented in the literature,\nand the severity of nail involvement may correlate with the severity of skin and joint\nconditions. Nail changes may be found in up to 90% of patients with a established\ndiagnosis of PsA, especially when there is involvement ofdistal interphalangeal\njoints.24 Our data corroborate\nthis percentage since 92% of our patients with arthritis had nail changes with proven\nstatistical significance.\nRegarding therapeutic modalities, it is worth noting the large number of patients on\nsystemic treatment in our study: only 41.5% were using only topical medications. About\n10% of our patients were on pho-totherapy, combined or not with other treatments, and\n54.1% were using systemic medication. It is also noteworthy that there was a large\nnumber of patients using immunobiological medication (8.5%), precisely because the study\nwas conducted at a center of excellence, where there were more severe cases. The most\ncommonly used systemic medication was methotrexate (31%), followed by acitretin (14%),\nand only one case of cyclosporine use.\nAmong the cardiovascular comorbidities, obesity is more intrinsically related to\npsoriasis. Behavioral changes and changes in the quality of life of psoriatic patients\ncould contribute to weight gain. Similarly, obesity could predispose to the onset of\nskin diseases due to common inflammatory mechanisms.2 In our study, 64.3% of patients had high BMI: the frequency of\noverweight was 31.1%, and 33.2% of patients were obese. Of the obese patients, 22.6% had\ngrade I obesity; 7.4% had grade II obesity; and 3.2% had grade III obesity. Because this\nwas a cross-sectional study, it is not possible to state whether skin diseases\npredisposed to obesity, or vice versa. Unlike the study by Neimann et\nal., the presence of obesity was not related to the severity of\npsoriasis.15\nVisceral obesity, represented by waist circumference, is known to be correlated with\nhigh metabolic and cardiovascular risk. The cutoff point commonly established for waist\ncircumference, 102 cm for men and 88 cm for women, has been questioned because it is not\nappropriate for people of different ethnicities. 
In some studies, lower levels - 94 cm\nfor men and 80 cm for women - have been considered more appropriate for our\npopulation.25 Considering these\nvalues, 68.8% of men and 86.5% of women had increased measures in our study. Considering\nthe cutoff point of 102 cm for men and 88 cm for women, which suggests an even higher\ncardiovascular risk, 38.7% of men and 78.4% of women had increased measures. We noticed\nthat, unlike men, most women remained in the highrisk group even when the cutoff point\nwas increased.\nThe relationship between psoriasis and SH has also been well documented. Of our 190\npatients, 43.7% had a previous diagnosis of SH. Comparing with the data of the Brazilian\npopulation, this is a significant difference, almost twice higher, according to the\nMinistry of Health (43.7% vs. 22.7%).26 The difference between sexes was also significant (19.5% vs. 39.8%\nin males and 27.4% vs. 47.4% in females). Five patients who denied having a previous\ndiagnosis of SH or use of antihypertensive medication showed increased blood pressure\nlevels during physical examination. These patients were referred for specialist\nfollow-up. Unlike the study by Kimball et al., we found no difference\nin the prevalence of SH in relation to the psoriasis severity.27\nIn addition to SH, type 2 diabetes is another important independent risk factor for\ncardiovascular disease that is associated with psoriasis, probably due to their common\npro-inflammatory state. In our study, 15.3% of patients had a previous diagnosis of type\n2 diabetes. There was a significant difference when we compared these data with the\nBrazilian population, whose prevalence is around 5.3%, i.e., almost onethird of the\nvalue we found.26 Eight patients who\ndid not have a diagnosis of type 2 diabetes showed increased glucose levels, but,\nsimilarly to SH, the diagnosis of type 2 diabetes requires at least two increased\nmeasures. In relation to these patients, we requested additional laboratory tests and\nreferred them to clinical follow-up. Diabetes is the comorbidity with the largest number\nof studies reporting its increased prevalence in proportion to the severity of\npsoriasis.15,27,28 However, this\ndifference was not found in our study.\nSeveral studies have confirmed the association between psoriasis and altered lipid\nmetabolism, such as reduced HDL and increased TG.2,11 In our study, 48\npatients (25.3%) reported a previous diagnosis of dyslipidemia. Regarding laboratory\ntests, it is worth noting the high rate of men with low HDL compared with women (74.2%\nvs. 16.9%). Among patients who were taking acitretin, only five showed abnormal lipid\nprofile, which probably did not affect the final data analysis.\nMS is a set of cardiovascular risk factors, including SH, abdominal obesity,\ndyslipidemia, and glucose intolerance. Despite the fact that SM has recently been\nquestioned whether or not it represents an additional risk, we cannot fail to consider\nthat it is a condition that aggregates multiple comorbidities that individually increase\nthe patient's cardiovascular risk. Recent studies have shown a higher prevalence of MS\nin psoriasis patients, especially after the age of 40, based on the pro-inflammatory\nnature of the two diseases.11,29 In the present study, 44.9% of patients\nmet the criteria for diagnosis of MS according to the NCEP-ATP III. When compared to\nestimated prevalence for the Brazilian population, which is 29.8%,19 there is a significant difference, which\nhas been statistically proven (p<0.0001). 
There was no correlation of MS with\npsoriasis severity, in disagreement with the findings by Langan et al.,\nwhere the prevalence of metabolic alterations was directly proportional to the severity\nand extent of skin psoriasis.30\nThe association between smoking and psoriasis has been widely reported, both based on\nchanges in the patient's quality of life and on the theory that smoking generates\noxidative and inflammatory mechanisms that could predispose to the development of\npsoriasis.31 In our study, 50.5%\nof patients reported past or current smoking, which is the majority of our sample (20.5%\nreported current smoking and 30% reported they smoked in the past). Among the 96\npatients who reported current or past smoking, the vast majority (76.5%) reported that\nthe onset of smoking occurred even before the diagnosis of psoriasis, i.e., maybe the\noxidative stress caused by nicotine plays a role in the pathogenesis of psoriasis.\nConsidering the time of smoking for smokers and exsmokers, the mean time was also high,\n22.3 ± 12 years. Comparing the data from our study with data from the Ministry of\nHealth, the prevalence of smoking was higher (20.5% in our sample vs. 14.8% in the\ngeneral population).26 Trying to\nquantify smoking, we calculated the number the \"pack-years\", that is, number of\ncigarettes smoked per day × number of years smoked (1 pack has 20 cigarettes). In our\nsample, mean packyears was 21.3 ±16, at least twice the value of 10 packyears\n(considered high tobacco intake by the Brazilian Society of Pulmonology), which\ndemonstrates a high consumption of our patients.32 Despite the high number of smokers, we found no relationship\nbetween smoking and psoriasis severity.\nSimilarly to smoking, the habit of drinking alcohol may start or even become more\nfrequent after the development of psoriasis due mainly to changes related to quality of\nlife.3 Some authors have also\nclaimed that alcohol and its metabolites can stimulate keratinocyte proliferation and\nthe production of inflammatory cytokines, which may contribute to the development of\npsoriasis.33 The quantification\nof alcohol consumption is extremely difficult and controversial, as there are different\ntypes of alcoholic beverages with varying alcohol content. In addition, the same amount\nof alcohol can have different effects on different individuals, depending on factors\nsuch as sex, age, and comorbidities.Several studies have reported the association of\nalcohol intake with psoriasis. Poikolainen et al. found a two-fold\nhigher intake of alcohol in psoriasis patients compared with controls up to 12 months\nbefore the onset of psoriasis.34 In a\nprospective study, Qureshi et al. found 1.72 times the risk of\ndeveloping psoriasis in alcoholic patients.35 Sommer et al. also described the association\nbetween alcohol abuse and psoriasis, in proportion to the amount of alcohol intake, in a\nsignificant magnitude, with OR ranging from 2.78 to 3.61.11 Among our patients, only 28.4% reported current habit of\nalcohol intake, and of these, no patient had a diagnosis of alcoholism according to the\nCAGE questionnaire (Cutdown, Annoyed by criticism, Guilty and Eye-opener).\nPsoriasis, as well as other skin diseases, can affect the patient's self-esteem,\ninterfering with all aspects of quality of life. This results in an increase in the\nprevalence of mood disorders, especially anxiety and depression, often unrelated to the\nextent of skin involvement. 
The prevalence of depression in psoriasis patients is about 20%, and it is believed that the onset of depression can increase the risk for cardiovascular disease.36 In our study, the prevalence of depression was 25.8%, but it was not related to the severity or extent of the skin condition.

Regarding major cardiovascular events, three patients reported a past AMI requiring angioplasty, and one patient had a history of ischemic stroke. All four patients had a diagnosis of psoriasis when these events occurred.

According to the severity criteria used, 67.4% of patients were classified as having severe psoriasis. Some variables showed statistical significance in relation to disease severity, such as time to diagnosis: severe patients had a mean time to diagnosis of 17.5 (± 12.3) years compared with 12.6 (± 11.2) years in nonsevere patients. This may be because severe patients remain under follow-up at the health care facility, whereas patients with milder and/or controlled skin conditions are lost to follow-up precisely because they improved. Skin biopsy was also more frequently performed in severe patients, probably due to the need for accurate diagnostic confirmation before starting a more aggressive therapy. The presence of PsA and nail involvement was also higher in patients with severe psoriasis, with high statistical significance (p = 0.0008 and p = 0.0004, respectively). These data are consistent with the literature.21 Another statistically significant finding regarding severity was overweight: patients with BMI ≥ 25 (overweight + obesity) tended to have more severe psoriasis. However, when obesity was analyzed separately, there was no statistical significance. Neimann et al. found a correlation between obesity and severity of psoriasis in a population-based study in the UK, even after adjustment for age and sex and multivariate analysis.15

Cardiovascular diseases are a major cause of morbidity and mortality in Brazil and worldwide. Therefore, it is essential to identify individuals at higher risk for developing heart disease and cerebrovascular disease. In this context, several risk scores for cardiovascular disease have been described, including the Framingham risk score, undoubtedly the most widely used. The final score estimates the risk of fatal or nonfatal coronary events in 10 years and stratifies patients into risk categories: low (<10% events in 10 years), moderate (10-20%), and high (>20%). This index estimates a prognosis and suggests the necessary clinical interventions. In the present study, the Framingham risk score was used to evaluate 159 patients, as it should only be used in patients 30-74 years old without coronary artery disease. Of those, 38.4% had moderate risk and 8.8% had high risk (47.2% combined) for the development of coronary heart disease in 10 years. Forty-five patients (28.3%) had a Framingham risk score higher than expected for an individual of the same sex and age. The mean Framingham score was 10.5 ± 9.1, which is considered high, especially when compared with other studies involving the Brazilian population.
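As a hedged sketch of the stratification step described above (the Framingham score itself, computed from sex, age, total cholesterol, HDL, blood pressure, diabetes, and smoking, is not reproduced here), the mapping from an estimated 10-year coronary event risk to the three categories is straightforward; the function name is hypothetical.

    def framingham_category(ten_year_risk_pct: float) -> str:
        """Map a 10-year coronary event risk (%) to low/moderate/high."""
        if ten_year_risk_pct > 20:
            return "high"
        if ten_year_risk_pct >= 10:
            return "moderate"
        return "low"

    # The study's mean score of 10.5 falls in the moderate band.
    print(framingham_category(10.5))  # -> 'moderate'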
Rodrigues et al., based on a cross-sectional study with 329 patients aged 31-70 years, found a mean Framingham risk score of 5.7%.37 Additionally, Landim et al., based on a sample of 107 drivers aged 30-74 years, found a mean score of 5%.38 In our study, there was a statistically significant difference between females and males, which can be explained by the fact that the Framingham score has lower discriminative power among women. With regard to studies on patients with psoriasis, our data are similar to the case-control study conducted by Gisondi et al.: in a study with 234 psoriasis patients and 234 controls, a higher mean score was found in patients than in controls (11.2 vs. 7.3, p<0.001).39 Likewise, there was no correlation of psoriasis severity and duration with the score. Fernandez et al., in a recent study of 395 patients, also found similar results: 30.5% of patients with a moderate risk score and 11.4% with a high risk score. Similarly to our study, there was no relationship between the scores and the severity of the skin condition.40", "We found an increased prevalence of SH, type 2 diabetes, MS, and obesity in patients compared with the general population. Increased waist circumference was also found, in addition to a considerable prevalence of depression and regular alcohol intake. An association with smoking was described, with a mean pack-years value well above the threshold considered high tobacco intake.

Patients' cardiovascular risk profile was high according to the Framingham risk score, and 47.2% of patients had moderate or high risk of fatal and non-fatal coronary events in 10 years. The mean Framingham score was considered high when compared with other studies involving the Brazilian population.

It is noteworthy that psoriasis may manifest as a multisystem disease not restricted to the skin and its appendages. The association of psoriasis with several comorbidities may occur due to various factors, such as the chronic inflammatory nature of the disease, genetic susceptibility, environmental factors, factors related to the patient's quality of life, and even adverse effects of drugs used for systemic therapy. Comorbidities associated with psoriasis greatly increase the morbidity and mortality of the disease. Therefore, the approach to the psoriatic patient should be comprehensive and multidisciplinary, implementing preventive and early therapeutic measures to reduce mortality and hospitalization rates and improve survival." ]
[ "intro", "methods", "results", null, null, null, null, null, null, null, "discussion", "conclusions" ]
[ "Comorbidity", "Cardiovascular diseases", "Psoriasis" ]
Background: Psoriasis is a chronic inflammatory disease and its pathogenesis involves an interaction between genetic, environmental, and immunological factors. Recent studies have suggested that the chronic inflammatory nature of psoriasis may predispose to an association with other inflammatory diseases, especially cardiovascular diseases and metabolic disorders. Methods: We conducted a cross-sectional study involving the assessment of 190 patients. Participants underwent history and physical examination. They also completed a specific questionnaire about epidemiological data, past medical history, and comorbidities. The cardiovascular risk profile was calculated using the Framingham risk score. Results: Patients' mean age was 51.5 ± 14 years, and the predominant clinical presentation was plaque psoriasis (78.4%). We found an increased prevalence of systemic hypertension, type 2 diabetes, metabolic syndrome, and obesity. Increased waist circumference was also found in addition to a considerable prevalence of depression, smoking, and regular alcohol intake. Patients' cardiovascular risk was high according to the Framingham risk score, and 47.2% of patients had moderate or high risk of fatal and non-fatal coronary events in 10 years. Conclusions: Patients had high prevalence of cardiovascular comorbidities, and high cardiovascular risk according to the Framingham risk score. Further epidemiological studies are needed in Brazil for validation of our results.
Background: Psoriasis is a chronic inflammatory disease whose pathogenesis involves an interaction between genetic, environmental, and immunological factors. Recent studies have suggested that the chronic inflammatory nature of psoriasis may predispose patients to an association with other inflammatory diseases, especially cardiovascular diseases and metabolic disorders. Methods: We conducted a cross-sectional study involving the assessment of 190 patients. Participants underwent history taking and physical examination. They also completed a specific questionnaire about epidemiological data, past medical history, and comorbidities. The cardiovascular risk profile was calculated using the Framingham risk score. Results: Patients' mean age was 51.5 ± 14 years, and the predominant clinical presentation was plaque psoriasis (78.4%). We found an increased prevalence of systemic hypertension, type 2 diabetes, metabolic syndrome, and obesity. Increased waist circumference was also found, in addition to a considerable prevalence of depression, smoking, and regular alcohol intake. Patients' cardiovascular risk was high according to the Framingham risk score, and 47.2% of patients had a moderate or high risk of fatal and non-fatal coronary events in 10 years. Conclusions: Patients had a high prevalence of cardiovascular comorbidities and a high cardiovascular risk according to the Framingham risk score. Further epidemiological studies are needed in Brazil to validate our results.
10,159
245
[ 740, 287, 101, 253, 79, 43, 182 ]
12
[ "patients", "psoriasis", "diagnosis", "reported", "years", "risk", "10", "score", "smoking", "prevalence" ]
[ "psoriasis involves mechanisms", "psoriasis comorbidities obesity", "role pathogenesis psoriasis", "psoriasis cardiovascular", "psoriasis cardiovascular comorbidities" ]
[CONTENT] Comorbidity | Cardiovascular diseases | Psoriasis [SUMMARY]
[CONTENT] Comorbidity | Cardiovascular diseases | Psoriasis [SUMMARY]
[CONTENT] Comorbidity | Cardiovascular diseases | Psoriasis [SUMMARY]
[CONTENT] Comorbidity | Cardiovascular diseases | Psoriasis [SUMMARY]
[CONTENT] Comorbidity | Cardiovascular diseases | Psoriasis [SUMMARY]
[CONTENT] Comorbidity | Cardiovascular diseases | Psoriasis [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brazil | Cardiovascular Diseases | Comorbidity | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Metabolic Syndrome | Middle Aged | Obesity | Prevalence | Psoriasis | Risk Assessment | Risk Factors | Severity of Illness Index | Sex Distribution | Smoking | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brazil | Cardiovascular Diseases | Comorbidity | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Metabolic Syndrome | Middle Aged | Obesity | Prevalence | Psoriasis | Risk Assessment | Risk Factors | Severity of Illness Index | Sex Distribution | Smoking | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brazil | Cardiovascular Diseases | Comorbidity | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Metabolic Syndrome | Middle Aged | Obesity | Prevalence | Psoriasis | Risk Assessment | Risk Factors | Severity of Illness Index | Sex Distribution | Smoking | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brazil | Cardiovascular Diseases | Comorbidity | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Metabolic Syndrome | Middle Aged | Obesity | Prevalence | Psoriasis | Risk Assessment | Risk Factors | Severity of Illness Index | Sex Distribution | Smoking | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brazil | Cardiovascular Diseases | Comorbidity | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Metabolic Syndrome | Middle Aged | Obesity | Prevalence | Psoriasis | Risk Assessment | Risk Factors | Severity of Illness Index | Sex Distribution | Smoking | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Brazil | Cardiovascular Diseases | Comorbidity | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Metabolic Syndrome | Middle Aged | Obesity | Prevalence | Psoriasis | Risk Assessment | Risk Factors | Severity of Illness Index | Sex Distribution | Smoking | Young Adult [SUMMARY]
[CONTENT] psoriasis involves mechanisms | psoriasis comorbidities obesity | role pathogenesis psoriasis | psoriasis cardiovascular | psoriasis cardiovascular comorbidities [SUMMARY]
[CONTENT] psoriasis involves mechanisms | psoriasis comorbidities obesity | role pathogenesis psoriasis | psoriasis cardiovascular | psoriasis cardiovascular comorbidities [SUMMARY]
[CONTENT] psoriasis involves mechanisms | psoriasis comorbidities obesity | role pathogenesis psoriasis | psoriasis cardiovascular | psoriasis cardiovascular comorbidities [SUMMARY]
[CONTENT] psoriasis involves mechanisms | psoriasis comorbidities obesity | role pathogenesis psoriasis | psoriasis cardiovascular | psoriasis cardiovascular comorbidities [SUMMARY]
[CONTENT] psoriasis involves mechanisms | psoriasis comorbidities obesity | role pathogenesis psoriasis | psoriasis cardiovascular | psoriasis cardiovascular comorbidities [SUMMARY]
[CONTENT] psoriasis involves mechanisms | psoriasis comorbidities obesity | role pathogenesis psoriasis | psoriasis cardiovascular | psoriasis cardiovascular comorbidities [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | reported | years | risk | 10 | score | smoking | prevalence [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | reported | years | risk | 10 | score | smoking | prevalence [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | reported | years | risk | 10 | score | smoking | prevalence [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | reported | years | risk | 10 | score | smoking | prevalence [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | reported | years | risk | 10 | score | smoking | prevalence [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | reported | years | risk | 10 | score | smoking | prevalence [SUMMARY]
[CONTENT] inflammatory | cardiovascular | chronic | association | risk | comorbidities | psoriasis | association comorbidities | association psoriasis comorbidities | chronic inflammatory [SUMMARY]
[CONTENT] dl | mg | mg dl | cholesterol | 10 | smoking | patients | alcohol | index | blood [SUMMARY]
[CONTENT] patients | psoriasis | reported | diagnosis | 10 | years | mean | median | nail | score [SUMMARY]
[CONTENT] disease | high | considered high | risk | mortality | factors | fatal | found | patient | intake [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | 10 | risk | psa | reported | score | nail | prevalence [SUMMARY]
[CONTENT] patients | psoriasis | diagnosis | 10 | risk | psa | reported | score | nail | prevalence [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 190 ||| ||| ||| Framingham [SUMMARY]
[CONTENT] 51.5 | 14 years | 78.4% ||| 2 ||| ||| Framingham | 47.2% | 10 years [SUMMARY]
[CONTENT] Framingham ||| Brazil [SUMMARY]
[CONTENT] ||| ||| 190 ||| ||| ||| Framingham ||| 51.5 | 14 years | 78.4% ||| 2 ||| ||| Framingham | 47.2% | 10 years ||| Framingham ||| Brazil [SUMMARY]
[CONTENT] ||| ||| 190 ||| ||| ||| Framingham ||| 51.5 | 14 years | 78.4% ||| 2 ||| ||| Framingham | 47.2% | 10 years ||| Framingham ||| Brazil [SUMMARY]
Circular RNA hsa_circ_0018189 drives non-small cell lung cancer growth by sequestering miR-656-3p and enhancing xCT expression.
36164726
Non-small cell lung cancer (NSCLC) is one of the cancers with a high mortality rate. CircRNAs have emerged as important regulatory factors in tumorigenesis in recent years. However, the detailed regulatory mechanism of circular RNA cullin 2 (hsa_circ_0018189) in NSCLC is still unclear.
BACKGROUND
RNA levels of hsa_circ_0018189, microRNA (miR)-656-3p, and Solute carrier family 7 member 11 (SLC7A11, xCT) were analyzed by real-time quantitative reverse transcription-polymerase chain reaction (RT-qPCR), and protein levels were assessed by Western blot and immunohistochemical assays. Enzyme-linked immunosorbent assay was conducted to detect cell glutamine metabolism. Effects of hsa_circ_0018189 on cell proliferation, apoptosis, migration, and invasion were analyzed by the corresponding assays. Luciferase reporter assay and RNA-immunoprecipitation assay confirmed the target relationship between miR-656-3p and hsa_circ_0018189 or xCT. The in vivo function of hsa_circ_0018189 was verified in xenograft mouse models.
METHODS
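The RT-qPCR readouts reported throughout this record are reduced to relative expression with the 2−ΔΔCt method, normalizing to β-actin or U6 (as stated in the methods text further below). A minimal sketch of that calculation, with Ct values invented purely for illustration:

```python
# Minimal sketch of 2^(-ΔΔCt) relative quantification for RT-qPCR, the method
# cited in this study's protocol. All Ct values below are invented examples.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene in a sample vs. a control condition,
    each normalized to a reference gene (e.g., beta-actin or U6)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: circRNA Ct 24.1 (tumor) vs 27.0 (normal), reference gene Ct ~18.
fold = relative_expression(24.1, 18.2, 27.0, 18.1)
print(f"relative expression: {fold:.2f}-fold")  # >1 means upregulated in tumor
```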
Hsa_circ_0018189 was overexpressed in NSCLC cells and patient samples. Deficiency of hsa_circ_0018189 lowered the proliferative, migratory, invasive, and glutamine metabolism capacities of NSCLC cells, and hsa_circ_0018189 silencing inhibited the growth of tumors in vivo. Hsa_circ_0018189 could up-regulate xCT by sponging miR-656-3p. MiR-656-3p downregulation or xCT overexpression partly overturned the repression of NSCLC cell malignancy mediated by hsa_circ_0018189 knockdown or the miR-656-3p mimic, respectively.
RESULTS
Hsa_circ_0018189 drove NSCLC growth by interacting with miR-656-3p and upregulating xCT.
CONCLUSION
[ "Animals", "Humans", "Mice", "Carcinoma, Non-Small-Cell Lung", "Cell Line, Tumor", "Cell Proliferation", "Gene Expression Regulation, Neoplastic", "Glutamine", "Lung Neoplasms", "MicroRNAs", "RNA, Circular", "Amino Acid Transport System y+" ]
9701851
BACKGROUND
Non-small cell lung cancer (NSCLC) is a typical type of lung cancer.1,2 NSCLC can be further subdivided into adenocarcinoma, squamous cell carcinoma, and large cell carcinoma, depending on the histological differences among the NSCLC subtypes.3,4,5 The main risk factor for NSCLC is smoking.6 Its poor prognosis and recurrence rate pose great challenges to the treatment of NSCLC.7,8 Therefore, research on NSCLC has become a hot topic today. Circular RNAs (circRNAs), a class of single-stranded RNA molecules with closed-loop structures, have become the focus of RNA and transcriptome research.9 In fact, circRNAs can be divided into non-coding circRNAs and coding circRNAs.10,11 Because of their unique structure, they exhibit high stability and resistance to exonucleases.12 More and more studies on circRNAs in NSCLC have proved the action of circRNAs in the occurrence and development of NSCLC.13,14,15 Hsa_circ_0018189 was suggested to exert a significant function in NSCLC,16 but its specific functional mechanism has not been described. MiRNAs adjust gene expression post-transcriptionally.17,18 MiRNAs are key actors in a variety of biological processes, and their dysregulation has been linked to many diseases, including cancer and autoimmune diseases.19,20,21 Dysregulation of miRNAs is common in NSCLC, and miRNAs have been suggested to participate in the regulation of the occurrence, progression, and metastasis of NSCLC by regulating target genes.22 Therefore, a string of studies has been conducted to explore the functional mechanisms of miRNAs in NSCLC, which may also allow miRNAs to become diagnostic and therapeutic tools.23,24 The functional mechanism of miR-656-3p in NSCLC is not well understood, which makes it of great interest to us. Solute carrier family 7 member 11 (SLC7A11), also known as xCT, is a cystine/glutamate antiporter that can transport cystine into cells and export glutamate at the same time.25,26 It has been reported that elevated xCT expression and glutamate release are commonly detected in cancer cells.27 xCT has a vital function in the progression of NSCLC, but its specific regulatory mechanism has rarely been reported. Accordingly, this research aimed to clarify the changes in hsa_circ_0018189, miR-656-3p, and xCT in NSCLC and to explore their relationships and functions. These studies will provide theoretical support for the future diagnosis and treatment of NSCLC.
null
null
RESULTS
Upregulation of hsa_circ_0018189 expression was detected in NSCLC Firstly, we analyzed the expression of hsa_circ_0018189 in NSCLC samples. Hsa_circ_0018189 was observed to be elevated in NSCLC samples (Figure 1A). Meanwhile, compared with BEAS2B cells, hsa_circ_0018189 (circCUL2) abundance was elevated in A549 and HCC44 cells (Figure 1B). Subsequently, we found that hsa_circ_0018189 was located on chromosome 10 and was derived from exons 6–12 of CUL2 mRNA (Figure 1C). We then verified the circular structure of hsa_circ_0018189: hsa_circ_0018189 abundance was not significantly changed, whereas linear CUL2 was reduced, in A549 and HCC44 cells after RNase R treatment (Figure 1D). Additionally, hsa_circ_0018189 abundance showed no significant change after actinomycin D treatment (Figure 1E). Subcellular fractionation with RT-qPCR analysis showed that hsa_circ_0018189 was mainly distributed in the cytoplasm of NSCLC cells (Figure 1F). In short, hsa_circ_0018189 was significantly upregulated in NSCLC. Upregulation of hsa_circ_0018189 expression was acquired in NSCLC. (A) RT-qPCR analyzed hsa_circ_0018189 abundance in NSCLC tissues and normal tissues; (B) Hsa_circ_0018189 abundance in BEAS2B, A549, and HCC44 cells; (C) The location and structure of hsa_circ_0018189 are shown; (D) After RNase R digestion, RT-qPCR was conducted to analyze hsa_circ_0018189 and linear CUL2 expression; (E) After the cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, RT-qPCR was conducted to detect hsa_circ_0018189 and linear CUL2 expression; (F) The localization of circCUL2 in NSCLC cells was analyzed by subcellular fractionation with RT-qPCR analysis. *p < .05
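The RNase R and actinomycin D experiments above are the standard checks of circRNA stability, and the study reports them qualitatively. As an illustration only, such an actinomycin D time course can be reduced to half-lives by fitting the exponential decay N(t) = N0·e^(−kt), with half-life ln(2)/k. The abundances below are invented, and scipy is assumed available:

```python
# Minimal sketch: estimate RNA half-life from an actinomycin D time course by
# fitting exponential decay N(t) = N0 * exp(-k * t); half-life = ln(2) / k.
# The abundances below are invented for illustration, not data from this study.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, n0, k):
    return n0 * np.exp(-k * t)

t = np.array([0, 4, 8, 12, 24], dtype=float)            # hours, as in the assay
linear_mrna = np.array([1.00, 0.55, 0.30, 0.17, 0.03])  # decays quickly
circ_rna = np.array([1.00, 0.97, 0.95, 0.93, 0.90])     # nearly stable

for name, y in [("linear CUL2", linear_mrna), ("circRNA", circ_rna)]:
    (n0, k), _ = curve_fit(decay, t, y, p0=(1.0, 0.1))
    print(f"{name}: half-life ≈ {np.log(2) / k:.1f} h")
```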
Hsa_circ_0018189 knockdown suppressed glutamine metabolism, cell proliferation, migration, and invasion, while enhancing cell apoptosis in NSCLC cells At first, we verified the effect of hsa_circ_0018189 knockdown, and the results suggested that si-hsa_circ_0018189#1 caused the greatest silencing efficiency of hsa_circ_0018189 in both cell lines (Figure 2A). Hence, we used si-hsa_circ_0018189#1 in the subsequent experiments. We found that knockdown of hsa_circ_0018189 could inhibit glutamine metabolism (Figure 2B, C, and D). CCK8 and EdU assays confirmed that lowered hsa_circ_0018189 expression restrained the proliferation of A549 and HCC44 cells (Figure 2E, F). In addition, cell apoptosis was increased after hsa_circ_0018189 knockdown (Figure 2G). Transwell assay results proved that si-hsa_circ_0018189#1 could suppress the migration and invasion of A549 and HCC44 cells (Figure 2H, I). E-cadherin and Vimentin are epithelial and mesenchymal markers, respectively, commonly used in scientific research on epithelial-mesenchymal transition (EMT). Knockdown of hsa_circ_0018189 promoted the E-cadherin level and restrained the protein level of Vimentin in A549 and HCC44 cells (Figure 2J). We also constructed an hsa_circ_0018189 overexpression plasmid, as shown in Figure S1. In contrast, hsa_circ_0018189 overexpression elevated glutamine metabolism, facilitated cell proliferation, migration, and invasion, and lowered cell apoptosis in NSCLC cells (Figure S1B-I). Consistently, hsa_circ_0018189 overexpression elevated Vimentin protein levels and lessened E-cadherin protein levels (Figure S1J and K). All in all, these data implied that hsa_circ_0018189 promoted the progression of NSCLC. Silencing of hsa_circ_0018189 could inhibit glutamine metabolism, proliferation, migration, and invasion, while promoting cell apoptosis in NSCLC cells. (A) The transfection efficiencies of si-hsa_circ_0018189#1, si-hsa_circ_0018189#2, and si-hsa_circ_0018189#3 were analyzed by RT-qPCR in A549 and HCC44 cells; (B-D) ELISA analyzed glutamine metabolism after transfection of si-NC or si-hsa_circ_0018189#1; (E) Cell viability in the above cells was analyzed by CCK8 assay; (F) Cell proliferation was analyzed by EdU assay; (G) Cell apoptosis was analyzed by flow cytometry; (H and I) Cell migration and invasion were analyzed by Transwell assay; (J) Proteins of E-cadherin and Vimentin were measured. *p < .05 Hsa_circ_0018189 was identified as a miR-656-3p molecular sponge Starbase and circinteractome analysis found that miR-656-3p and miR-888-5p were the most promising targets of hsa_circ_0018189 (Figure 3A). Hsa_circ_0018189 silencing resulted in an elevation of the miR-656-3p expression level (Figure 3B, C), so miR-656-3p was chosen for subsequent research. Using the Starbase database, we predicted the existence of targeted binding sites between hsa_circ_0018189 and miR-656-3p (Figure 3D). Luciferase reporter assay and RIP assay demonstrated the interaction between hsa_circ_0018189 and miR-656-3p in 293T cells (Figure 3E, F). MiR-656-3p expression was lower in NSCLC tissues (Figure 3G). Pearson correlation analysis showed that hsa_circ_0018189 and miR-656-3p were negatively correlated with each other in NSCLC tumors (Figure 3H). MiR-656-3p abundance was also lower in A549 and HCC44 cells (Figure 3I). All findings testified that hsa_circ_0018189 could target miR-656-3p. Hsa_circ_0018189 acted as a miR-656-3p sponge. (A) Starbase and circinteractome showed the predicted target miRNAs for hsa_circ_0018189; (B and C) RNA levels of miR-656-3p and miR-888-5p were determined after transfection of si-NC or si-hsa_circ_0018189#1; (D) The binding sites of hsa_circ_0018189 and miR-656-3p; (E and F) Luciferase reporter assay and RIP assay were utilized to analyze the interaction of hsa_circ_0018189 and miR-656-3p in 293T cells; (G) RT-qPCR was used to analyze the miR-656-3p level in NSCLC tissues and normal tissues; (H) Pearson correlation coefficient was used to analyze the correlation of miR-656-3p and hsa_circ_0018189; (I) RT-qPCR was used to analyze miR-656-3p expression in BEAS2B, A549, and HCC44 cells. *p < .05 Hsa_circ_0018189 sponged miR-656-3p to promote cell glutamine metabolism and malignancy in NSCLC cells The miR-656-3p inhibitor produced significant silencing of miR-656-3p (Figure 4A). The hsa_circ_0018189 knockdown-mediated elevation of miR-656-3p could be reversed by miR-656-3p silencing (Figure 4B). Deficiency of miR-656-3p could relieve the effect of si-hsa_circ_0018189#1 on glutamine metabolism (Figure 4C–E). The results of CCK8 and EdU assays also showed that miR-656-3p silencing recovered the si-hsa_circ_0018189#1-caused repression of A549 and HCC44 cell proliferation (Figure 4F, G). Moreover, transfection of the miR-656-3p inhibitor attenuated the effects of si-hsa_circ_0018189#1 on cell apoptosis, migration, and invasion (Figure 4H, I, and J). E-cadherin and Vimentin protein expression was influenced by si-hsa_circ_0018189#1, while the miR-656-3p inhibitor could abolish this influence (Figure 4K). Taken together, the evidence implied that hsa_circ_0018189 could interact with miR-656-3p to reinforce NSCLC cell glutamine metabolism and malignancy. MiR-656-3p could restore the effect of hsa_circ_0018189 knockdown on NSCLC progression. (A) MiR-656-3p abundance was detected after transfection of inhibitor NC and miR-656-3p inhibitor; (B-K) Cells were transfected with si-NC, si-hsa_circ_0018189#1, si-hsa_circ_0018189#1 + inhibitor NC, or si-hsa_circ_0018189#1 + miR-656-3p inhibitor. (B) MiR-656-3p expression was estimated after transfection; (C-E) ELISA analyzed glutamine metabolism after transfection; (F and G) Cell proliferation was determined; (H) Cell apoptosis was assessed after transfection; (I and J) Cell migrating and invading abilities were analyzed; (K) E-cadherin and Vimentin protein levels were determined. *p < .05 MiR-656-3p directly interacted with xCT Using TargetScan, we revealed that the xCT 3'UTR harbored putative target sequences for miR-656-3p (Figure 5A). A luciferase reporter experiment suggested that, in 293T cells, miR-656-3p could reduce the luciferase activity of the wild-type xCT 3'UTR group, while there was no significant difference for the mutant xCT 3'UTR (Figure 5B), confirming the targeting relationship between miR-656-3p and xCT. The mRNA and protein levels of xCT were predominantly raised in NSCLC rather than in normal tissues (Figure 5C, D). xCT mRNA expression in NSCLC samples was negatively correlated with miR-656-3p (Figure 5E). Besides, xCT protein levels were upregulated in A549 and HCC44 cells (Figure 5F). Additionally, xCT expression was decreased after knockdown of hsa_circ_0018189, while miR-656-3p downregulation could restore the effect of si-hsa_circ_0018189#1 on xCT expression in A549 and HCC44 cells (Figure 5G). In summary, our findings elucidated that miR-656-3p could target xCT, and hsa_circ_0018189 could regulate the expression of xCT by acting as a sponge of miR-656-3p in NSCLC. There was a target relationship between miR-656-3p and xCT. (A) The binding sites of miR-656-3p and xCT; (B) Luciferase reporter assay tested the luciferase activities of xCT 3'UTR wt and xCT 3'UTR mut in 293T cells after transfection of mimic NC or miR-656-3p mimic; (C) RNA levels of xCT in NSCLC samples; (D) Protein levels of xCT in NSCLC samples; (E) The correlation of miR-656-3p and xCT mRNA expression levels in NSCLC samples; (F) Protein levels of xCT in BEAS2B, A549, and HCC44 cells; (G) Protein levels of xCT were estimated after transfection of si-NC, si-hsa_circ_0018189#1, si-hsa_circ_0018189#1 + inhibitor NC, or si-hsa_circ_0018189#1 + miR-656-3p inhibitor. *p < .05 MiR-656-3p could inhibit cell glutamine metabolism and malignancy by reducing xCT expression in NSCLC Introduction of xCT upregulated xCT protein levels in A549 and HCC44 cells (Figure 6A). MiR-656-3p abundance was increased after transfection of miR-656-3p mimic (Figure 6B). The lowered xCT protein levels caused by miR-656-3p overexpression were recovered after xCT introduction (Figure 6C). Besides, xCT upregulation impaired the effect of the miR-656-3p mimic on factors associated with glutamine metabolism (Figure 6D, E, and F). Beyond that, CCK8 and EdU assay analyses manifested that introduction of pcDNA-xCT weakened the action of miR-656-3p on A549 and HCC44 cell proliferation and apoptosis (Figure 6G, H, and I). The lowered migrating and invading capacities driven by miR-656-3p were lessened after xCT overexpression (Figure 6J, K). Moreover, miR-656-3p-mediated changes in E-cadherin and Vimentin protein levels were recovered by co-transfection of xCT and miR-656-3p (Figure 6L). In a word, the effect of miR-656-3p in regulating the processes of NSCLC cells could be impaired by xCT. Overexpression of xCT could recover miR-656-3p-induced impacts on NSCLC progression. (A) Protein levels of xCT were tested after transfection of pcDNA and pcDNA-xCT; (B) MiR-656-3p abundance was determined after transfection of miR-656-3p mimic and mimic NC; (C) Protein levels of xCT were detected after transfection of mimic NC, miR-656-3p mimic, miR-656-3p mimic+pcDNA, or miR-656-3p mimic+pcDNA-xCT; (D-F) Glutamine metabolism in the above cells was analyzed; (G and H) Cell proliferation in the above cells was estimated; (I) Cell apoptosis in the above cells was detected; (J and K) Cell migration and invasion in the above cells were determined; (L) E-cadherin and Vimentin protein levels in the above cells were measured. *p < .05 Hsa_circ_0018189 could regulate tumor growth in mouse models A549 cells stably expressing sh-hsa_circ_0018189 were subcutaneously injected into nude mice, and tumor volume was measured. Hsa_circ_0018189 knockdown reduced tumor volume (Figure 7A) and tumor weight (Figure 7B). RNA levels of hsa_circ_0018189 and xCT were downregulated in xenograft tumor tissues from the sh-hsa_circ_0018189 group, but miR-656-3p expression was increased (Figure 7C). Western blot analysis suggested that xCT protein was decreased in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7D). IHC analysis revealed that ki-67 and xCT levels were reduced in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7E). All data indicated that knockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo. Knockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo. (A) The tumor volumes were measured at 1, 2, 3, and 4 weeks; (B) Tumor weights were tested; (C) RNA levels of hsa_circ_0018189, miR-656-3p, and xCT in xenograft tumor tissues were analyzed; (D) Protein levels of xCT in xenograft tumor tissues were determined; (E) IHC analysis of ki-67 and xCT protein levels in xenograft tumor tissues. *p < .05
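For reference, the methods section of this record defines xenograft tumor volume as 0.5 × width² × length, measured weekly for four weeks. A minimal sketch of that bookkeeping, with caliper measurements invented for illustration:

```python
# Minimal sketch of the caliper-based tumor volume bookkeeping used in the
# xenograft experiment: volume = 0.5 * width^2 * length (mm^3), measured
# weekly for four weeks. The measurements below are invented for illustration.

def tumor_volume(width_mm: float, length_mm: float) -> float:
    return 0.5 * width_mm ** 2 * length_mm

# (width, length) per week for one hypothetical mouse in each group
sh_nc = [(4.0, 5.0), (5.5, 7.0), (7.0, 9.0), (8.5, 11.0)]
sh_circ = [(3.8, 4.8), (4.5, 5.5), (5.0, 6.5), (5.5, 7.5)]

for week, (ctrl, kd) in enumerate(zip(sh_nc, sh_circ), start=1):
    print(f"week {week}: sh-NC {tumor_volume(*ctrl):7.1f} mm^3 | "
          f"sh-hsa_circ_0018189 {tumor_volume(*kd):7.1f} mm^3")
```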
CONCLUSION
In our study, hsa_circ_0018189 could regulate the tumorigenicity of NSCLC by serving as a molecular sponge for miR‐656‐3p and subsequently mediating xCT expression, which might make it a potential target for the treatment of NSCLC.
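Several of the conclusions above rest on dual-luciferase reporter readouts; the methods name pmirGLO vectors, which carry both firefly and Renilla luciferase, so a common reduction (assumed here, since the record does not spell out the normalization) is the firefly/Renilla ratio compared between mimic and control wells. A sketch with invented raw readings:

```python
# Minimal sketch of reducing dual-luciferase readings (pmirGLO carries both
# firefly and Renilla luciferase) to the relative activities plotted in
# reporter assays. The raw readings below are invented for illustration.
import statistics

def relative_activity(firefly, renilla):
    """Per-well firefly signal normalized to the Renilla transfection control."""
    return [f / r for f, r in zip(firefly, renilla)]

# Triplicate wells: wild-type xCT 3'UTR reporter with miR-656-3p mimic vs mimic NC
wt_mimic = relative_activity([5200, 4900, 5100], [21000, 20500, 20800])
wt_nc = relative_activity([11800, 12100, 11900], [20900, 21200, 21000])

fold = statistics.mean(wt_mimic) / statistics.mean(wt_nc)
print(f"wt reporter, mimic vs NC: {fold:.2f}x")  # < 1 suggests direct binding
```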
[ "Clinical samples", "Cell culture and transfection", "Real‐time quantitative reverse transcription‐polymerase chain reaction and validation of circular characteristics", "Glutamine metabolism analysis", "Cell Counting Kit‐8 assay", "5‐ethynyl‐2′‐deoxyuridine assay", "Cell apoptosis analysis", "Cell migration and invasion analysis", "Western blot", "Luciferase reporter assay and RNA‐immunoprecipitation assay", "Xenograft mice model", "Immunohistochemistry assay", "Statistical analysis", "Upregulation of hsa_circ_0018189 expression was detected in NSCLC\n", "Hsa_circ_0018189 knockdown suppressed glutamine metabolism, cell proliferation, migration, and invasion, while enhanced cell apoptosis in NSCLC cells", "Hsa_circ_0018189 was identified as a miR‐656‐3p molecular sponge", "Hsa_circ_0018189 sponged miR‐656‐3p to promote cell glutamine metabolism and malignancy in NSCLC cells", "\nMiR‐656‐3p directly interacted with xCT\n", "\nMiR‐656‐3p could inhibit cell glutamine metabolism and malignancy by reducing xCT expression in NSCLC\n", "Hsa_circ_0018189 could regulate tumor growth in mouse models", "FUNDING INFORMATION" ]
[ "NSCLC tumor tissues (N = 90) and adjacent normal tissues (N = 90) were gathered from patients with NSCLC who were diagnosed at Nanping First Hospital Affiliated to Fujian Medical University. The Declaration of Helsinki was referenced in experiments involving human samples. Prior to surgery, the patient was informed and signed informed consent. The experiment was also approved and supported by the Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. The tumor tissues were removed and preserved at −80°C.", "The BEAS2B cell line (CRL‐9609™, ATTC, Manassas, VA) and NSCLC cell lines HCC44 (CBP61182, COBIOER, Nanjing, China) and A549 (CBP60084, COBIOER) were cultured with RPMI‐1640 (Sigma‐Aldrich, St. Louis, MO) or F12K supplemented with 10% fetal bovine serum (FBS, GIBCO, Carlsbad, CA) in a humidified incubator containing 5% CO2 at 37°C.\nThree small interfering RNAs si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, si‐hsa_circ_0018189#3, and negative control (si‐NC), miR‐656‐3p inhibitor and inhibitor NC, miR‐656‐3p mimic and negative control mimic (mimic NC), pcDNA and pcDNA‐xCT, short hairpin RNA targeting hsa_circ_0018189 (sh‐hsa_circ_0018189), and negative control (sh‐NC) were synthesized and purchased from Sangon Biotech (Shanghai, China). The Lipofectamine 3000 reagent (Invitrogen, Carlsbad, CA) was utilized for transfection according to production instructions.", "The TRIzol® Reagent (Invitrogen) was utilized for isolation of total RNA. Complementary DNA (cDNA) was synthesized using First‐Strand cDNA Synthesis Kit (Takara, Dalian, China). SYBR® Premix DimerEraser Kit (Takara, Dalian, China) was utilized to detect relative expression. The 2−ΔΔCt method\n28\n was applied to process relative expression and β‐actin or U6 was applied for standardization.\n29\n, \n30\n The primers are shown in Table 1.\nPrimers sequences used for PCR\nThe extracted total RNA was digested by adding 10 μl RNase R for subsequent analysis. Meanwhile, cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, respectively, and RNA was extracted for subsequent analysis.", "Glutamine metabolism was assessed by intracellular levels of glutamine, glutamate, and α‐KG. In 6‐well plates, the transfected cells were fed. After 24 h, the intracellular correlation levels were determined and analyzed using ELISA kits (Beyotime, Shanghai, China) including glutamine, glutamate, and α‐KG according to the instructions.", "In 96‐well plates, the transfection of cells was cultured for 24 h, and 10 μl Cell Counting Kit‐8 (CCK8, Beyotime, Shanghai, China) reagent was added to each well to hatch for 24, 48, and 72 h. Then, optical density (OD) was recorded at 450 nm by a microplate reader.", "Cell proliferative capacity was estimated with the EdU Staining Proliferation Kit (Abcam, Cambridge, UK). After transfection, the cells were continued to be cultured in 6‐well plates, and 10 μl EdU (5‐ethynyl‐2′‐deoxyuridine) reagent was added into the cells for further incubation for 2 h. Then, the medium was discarded. The cells were stained with the stain for 30 min. Then DAPI was applied for staining for 20 min. Finally, the cells were cleaned and observed and analyzed under a microscope.", "Cell apoptotic rate was estimated with an Annexin V‐FITC/PI Staining Kit (Roche, Switzerland) on a flow cytometry (Beckman Coulter; Kraemer Boulevard, CA) following the processes offered by the manufacturer.", "The cells were prepared into cell suspension. 
For the migration experiment, transfected cells resuspended in 200 μl of serum-free medium were added to the upper chamber. For the invasion experiment, transfected cells were added to the upper chamber pre-coated with Matrigel (Corning, NY). 500 μl of medium containing 10% FBS was added to the lower chamber. Twenty-four hours later, cells that migrated or invaded were fixed with 4% paraformaldehyde and then stained with crystal violet for 30 min. Cells were counted using an inverted microscope.", "RIPA lysis buffer (Beyotime, Shanghai, China) supplemented with protease inhibitors (Roche, Switzerland) was used to isolate total protein from NSCLC tissues and cells. After treatment, proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The isolated protein was then transferred onto a PVDF (Millipore, Billerica, MA) membrane, and the protein-carrying membranes were probed with a primary antibody against E-cadherin (ab1416 at a dilution of 1:50, Abcam), Vimentin (ab8069, 1 μg/ml, Abcam), β-actin (ab8226, 1 μg/ml, Abcam), or xCT (ab175186 at a dilution of 1:1000, Abcam) for 2 h at room temperature, followed by incubation with a secondary antibody. Finally, an ECL Kit (Beyotime, Shanghai, China) was used for band observation and analysis.", "The sequences of hsa_circ_0018189 and the 3' untranslated region of xCT (xCT 3'UTR) containing the wild-type (wt) miR-656-3p binding site were cloned into pmirGLO Expression vectors (Promega, Madison, WI), and the mutant (mut) miR-656-3p binding site was generated from the hsa_circ_0018189 wt and xCT 3'UTR wt vectors via the QuikChange II Site-Directed Mutagenesis Kit (Stratagene, Santa Clara, CA). Co-transfection of the wt/mut reporter vector and miR-656-3p mimic or mimic NC was performed. The transfected cells were collected and fully lysed, followed by low-temperature, high-speed centrifugation to collect the supernatant, which was then plated into a 96-well plate in triplicate for the measurement of luciferase activity using the luciferase reporter system (Promega).\nThe RIP-Assay Kit (Millipore) was used to perform the RIP assay as per the manufacturer's instructions. After the beads were cleaned with washing buffer, pre-diluted anti-IgG (Abcam) or anti-Ago2 (Abcam) solution was added to them and incubated at low temperature for 2 h. Cells were fully lysed after 48 h of transfection. The supernatant was incubated with the antibody-conjugated beads at 4°C overnight. Then, the beads were collected by centrifugation at low temperature and low speed. RNA samples eluted from the beads were used for subsequent analysis of hsa_circ_0018189 and miR-656-3p.", "All procedures and animal experiments were approved by the Animal Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. A549 cells stably expressing sh-hsa_circ_0018189 or sh-NC were subcutaneously injected into four-week-old BALB/c nude mice (Vital River Laboratory Animal Technology [Beijing, China]). Tumor volumes (0.5 × width² × length) were measured every week for four weeks, and tumor weights were analyzed after the mice were euthanized.", "Freshly removed tissues were paraffin-sectioned, dewaxed, and subjected to antigen retrieval, followed by incubation with 0.3% hydrogen peroxide. After incubation with a primary antibody (ki-67 or xCT) and a secondary horseradish peroxidase antibody, color was developed using DAB solution. 
Five fields were selected for observation in each sample.", "All data were statistically examined with GraphPad 7.0. All the data were presented as mean ± standard deviation (SD). p < .05 was considered statistically significant. We used a two-tailed Student's t test (two groups) and one-way or two-way analysis of variance with Tukey's post hoc test (more than two groups) to compare significant differences.", "Firstly, we analyzed the expression of hsa_circ_0018189 in NSCLC samples. Hsa_circ_0018189 was observed to be elevated in NSCLC samples (Figure 1A). Meanwhile, compared with BEAS2B cells, hsa_circ_0018189 (circCUL2) abundance was elevated in A549 and HCC44 cells (Figure 1B). Subsequently, we found that hsa_circ_0018189 was located on chromosome 10 and was derived from exons 6–12 of CUL2 mRNA (Figure 1C). We then verified the circular structure of hsa_circ_0018189: hsa_circ_0018189 abundance was not significantly changed, whereas linear CUL2 was reduced, in A549 and HCC44 cells after RNase R treatment (Figure 1D). Additionally, hsa_circ_0018189 abundance showed no significant change after actinomycin D treatment (Figure 1E). Subcellular fractionation with RT-qPCR analysis showed that hsa_circ_0018189 was mainly distributed in the cytoplasm of NSCLC cells (Figure 1F). In short, hsa_circ_0018189 was significantly upregulated in NSCLC.\nUpregulation of hsa_circ_0018189 expression was acquired in NSCLC. (A) RT-qPCR analyzed hsa_circ_0018189 abundance in NSCLC tissues and normal tissues; (B) Hsa_circ_0018189 abundance in BEAS2B, A549, and HCC44 cells; (C) The location and structure of hsa_circ_0018189 are shown; (D) After RNase R digestion, RT-qPCR was conducted to analyze hsa_circ_0018189 and linear CUL2 expression; (E) After the cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, RT-qPCR was conducted to detect hsa_circ_0018189 and linear CUL2 expression; (F) The localization of circCUL2 in NSCLC cells was analyzed by subcellular fractionation with RT-qPCR analysis. *p < .05", "At first, we verified the effect of hsa_circ_0018189 knockdown, and the results suggested that si-hsa_circ_0018189#1 caused the greatest silencing efficiency of hsa_circ_0018189 in both cell lines (Figure 2A). Hence, we used si-hsa_circ_0018189#1 in the subsequent experiments. We found that knockdown of hsa_circ_0018189 could inhibit glutamine metabolism (Figure 2B, C, and D). CCK8 and EdU assays confirmed that lowered hsa_circ_0018189 expression restrained the cell proliferation of A549 and HCC44 cells (Figure 2E, F). In addition, cell apoptosis was increased after hsa_circ_0018189 knockdown (Figure 2G). Transwell assay results proved that si-hsa_circ_0018189#1 could suppress the cell migration and invasion of A549 and HCC44 cells (Figure 2H, I). E-cadherin and Vimentin are epithelial and mesenchymal markers, respectively, commonly used in scientific research on epithelial-mesenchymal transition (EMT). Knockdown of hsa_circ_0018189 promoted the E-cadherin level and restrained the protein level of Vimentin in A549 and HCC44 cells (Figure 2J). We also constructed an hsa_circ_0018189 overexpression plasmid, as shown in Figure S1. In contrast, hsa_circ_0018189 overexpression elevated glutamine metabolism, facilitated cell proliferation, migration, and invasion, and lowered cell apoptosis in NSCLC cells (Figure S1B-I). Consistently, hsa_circ_0018189 overexpression elevated Vimentin protein levels and lessened E-cadherin protein levels (Figure S1J and K). 
All in all, these data implied that hsa_circ_0018189 promoted the progression of NSCLC.\nSilencing of hsa_circ_0018189 could inhibit glutamine metabolism, proliferation, migration, and invasion, while promoting cell apoptosis in NSCLC cells. (A) The transfection efficiencies of si-hsa_circ_0018189#1, si-hsa_circ_0018189#2, and si-hsa_circ_0018189#3 were analyzed by RT-qPCR in A549 and HCC44 cells; (B-D) ELISA assay analyzed glutamine metabolism after transfection of si-NC or si-hsa_circ_0018189#1; (E) Cell viability in the above cells was analyzed by CCK8 assay; (F) Cell proliferation was analyzed by EdU assay; (G) Cell apoptosis was analyzed by flow cytometry assay; (H and I) Cell migration and invasion were analyzed by Transwell assay; (J) Proteins of E-cadherin and Vimentin were measured. *p < .05", "Starbase and circinteractome analysis found that miR-656-3p and miR-888-5p were the most promising targets of hsa_circ_0018189 (Figure 3A). Hsa_circ_0018189 silencing resulted in an elevation of the miR-656-3p expression level (Figure 3B, C), so miR-656-3p was chosen for subsequent research. Using the Starbase database, we predicted the existence of targeted binding sites between hsa_circ_0018189 and miR-656-3p (Figure 3D). Luciferase reporter assay and RIP assay demonstrated the interaction between hsa_circ_0018189 and miR-656-3p in 293T cells (Figure 3E, F). MiR-656-3p expression was lower in NSCLC tissues (Figure 3G). Pearson correlation analysis showed that hsa_circ_0018189 and miR-656-3p were negatively correlated with each other in NSCLC tumors (Figure 3H). MiR-656-3p abundance was also lower in A549 and HCC44 cells (Figure 3I). All findings testified that hsa_circ_0018189 could target miR-656-3p.\nHsa_circ_0018189 acted as a miR-656-3p sponge. (A) Starbase and circinteractome showed the predicted target miRNAs for hsa_circ_0018189; (B and C) RNA levels of miR-656-3p and miR-888-5p were determined after transfection of si-NC or si-hsa_circ_0018189#1; (D) The binding sites of hsa_circ_0018189 and miR-656-3p; (E and F) Luciferase reporter assay and RIP assay were utilized to analyze the interaction of hsa_circ_0018189 and miR-656-3p in 293T cells; (G) RT-qPCR was used to analyze the miR-656-3p level in NSCLC tissues and normal tissues; (H) Pearson correlation coefficient was used to analyze the correlation of miR-656-3p and hsa_circ_0018189; (I) RT-qPCR was used to analyze miR-656-3p expression in BEAS2B, A549, and HCC44 cells. *p < .05", "The miR-656-3p inhibitor produced significant silencing of miR-656-3p (Figure 4A). The hsa_circ_0018189 knockdown-mediated elevation of miR-656-3p could be reversed by miR-656-3p silencing (Figure 4B). Deficiency of miR-656-3p could relieve the effect of si-hsa_circ_0018189#1 on glutamine metabolism (Figure 4C–E). The results of CCK8 and EdU assays also showed that miR-656-3p silencing recovered the si-hsa_circ_0018189#1-caused repression of A549 and HCC44 cell proliferation (Figure 4F, G). Moreover, transfection of the miR-656-3p inhibitor attenuated the effects of si-hsa_circ_0018189#1 on cell apoptosis, migration, and invasion (Figure 4H, I, and J). E-cadherin and Vimentin protein expression was influenced by si-hsa_circ_0018189#1, while the miR-656-3p inhibitor could abolish this influence (Figure 4K). Taken together, the evidence implied that hsa_circ_0018189 could interact with miR-656-3p to reinforce NSCLC cell glutamine metabolism and malignancy.\nMiR-656-3p could restore the effect of hsa_circ_0018189 knockdown on NSCLC progression. 
(A) MiR-656-3p abundance was detected after transfection of inhibitor NC and miR-656-3p inhibitor; (B-K) Cells were transfected with si-NC, si-hsa_circ_0018189#1, si-hsa_circ_0018189#1 + inhibitor NC, or si-hsa_circ_0018189#1 + miR-656-3p inhibitor. (B) MiR-656-3p expression was estimated after transfection; (C-E) ELISA assay analyzed glutamine metabolism after transfection; (F and G) Cell proliferation was determined; (H) Cell apoptosis was assessed after transfection; (I and J) Cell migrating and invading abilities were analyzed; (K) E-cadherin and Vimentin protein levels were determined. *p < .05", "Using TargetScan, we revealed that the xCT 3'UTR harbored putative target sequences for miR-656-3p (Figure 5A). A luciferase reporter experiment suggested that, in 293T cells, miR-656-3p could reduce the luciferase activity of the wild-type xCT 3'UTR group, while there was no significant difference for the mutant xCT 3'UTR (Figure 5B), confirming the targeting relationship between miR-656-3p and xCT. The mRNA and protein levels of xCT were predominantly raised in NSCLC rather than in normal tissues (Figure 5C, D). xCT mRNA expression in NSCLC samples was negatively correlated with miR-656-3p (Figure 5E). Besides, xCT protein levels were upregulated in A549 and HCC44 cells (Figure 5F). Additionally, xCT expression was decreased after knockdown of hsa_circ_0018189, while miR-656-3p downregulation could restore the effect of si-hsa_circ_0018189#1 on xCT expression in A549 and HCC44 cells (Figure 5G). In summary, our findings elucidated that miR-656-3p could target xCT, and hsa_circ_0018189 could regulate the expression of xCT by acting as a sponge of miR-656-3p in NSCLC.\nThere was a target relationship between miR-656-3p and xCT. (A) The binding sites of miR-656-3p and xCT; (B) Luciferase reporter assay tested the luciferase activities of xCT 3'UTR wt and xCT 3'UTR mut in 293T cells after transfection of mimic NC or miR-656-3p mimic; (C) RNA levels of xCT in NSCLC samples; (D) Protein levels of xCT in NSCLC samples; (E) The correlation of miR-656-3p and xCT mRNA expression levels in NSCLC samples; (F) Protein levels of xCT in BEAS2B, A549, and HCC44 cells; (G) Protein levels of xCT were estimated after transfection of si-NC, si-hsa_circ_0018189#1, si-hsa_circ_0018189#1 + inhibitor NC, or si-hsa_circ_0018189#1 + miR-656-3p inhibitor. *p < .05", "Introduction of xCT upregulated xCT protein levels in A549 and HCC44 cells (Figure 6A). MiR-656-3p abundance was increased after transfection of miR-656-3p mimic (Figure 6B). The lowered xCT protein levels caused by miR-656-3p overexpression were recovered after xCT introduction (Figure 6C). Besides, xCT upregulation impaired the effect of the miR-656-3p mimic on factors associated with glutamine metabolism (Figure 6D, E, and F). Beyond that, CCK8 and EdU assay analyses manifested that introduction of pcDNA-xCT weakened the action of miR-656-3p on A549 and HCC44 cell proliferation and apoptosis (Figure 6G, H, and I). The lowered migrating and invading capacities driven by miR-656-3p were lessened after xCT overexpression (Figure 6J, K). Moreover, miR-656-3p-mediated changes in E-cadherin and Vimentin protein levels were recovered by co-transfection of xCT and miR-656-3p (Figure 6L). In a word, the effect of miR-656-3p in regulating the processes of NSCLC cells could be impaired by xCT.\nOverexpression of xCT could recover miR-656-3p-induced impacts on NSCLC progression. 
(A) Protein levels of xCT were tested after transfection of pcDNA and pcDNA-xCT; (B) MiR-656-3p abundance was determined after transfection of miR-656-3p mimic and mimic NC; (C) Protein levels of xCT were detected after transfection of mimic NC, miR-656-3p mimic, miR-656-3p mimic+pcDNA, or miR-656-3p mimic+pcDNA-xCT; (D-F) Glutamine metabolism in the above cells was analyzed; (G and H) Cell proliferation in the above cells was estimated; (I) Cell apoptosis in the above cells was detected; (J and K) Cell migration and invasion in the above cells were determined; (L) E-cadherin and Vimentin protein levels in the above cells were measured. *p < .05", "A549 cells stably expressing sh-hsa_circ_0018189 were subcutaneously injected into the nude mice, and tumor volume was measured. Hsa_circ_0018189 knockdown reduced tumor volume (Figure 7A) and tumor weight (Figure 7B). RNA levels of hsa_circ_0018189 and xCT were downregulated in xenograft tumor tissues from the sh-hsa_circ_0018189 group, but miR-656-3p expression was increased (Figure 7C). Western blot analysis suggested that xCT protein was decreased in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7D). IHC analysis revealed that ki-67 and xCT levels were reduced in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7E). All data indicated that knockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo.\nKnockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo. (A) The tumor volumes were measured at 1, 2, 3, and 4 weeks; (B) Tumor weights were tested; (C) RNA levels of hsa_circ_0018189, miR-656-3p, and xCT in xenograft tumor tissues were analyzed; (D) Protein levels of xCT in xenograft tumor tissues were determined; (E) IHC analysis of ki-67 and xCT protein levels in xenograft tumor tissues. *p < .05", "None." ]
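The statistical analysis item above (two-tailed Student's t test for two groups; one-way or two-way ANOVA with Tukey's post hoc test for more) maps directly onto standard library calls. A minimal sketch with invented measurements, assuming scipy and statsmodels are available:

```python
# Minimal sketch of the statistical comparisons described in the methods:
# two-tailed Student's t test for two groups, one-way ANOVA with Tukey's
# post hoc test for more than two. All measurements below are invented.
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
si_nc = rng.normal(1.00, 0.08, 6)    # e.g., normalized viability per replicate
si_circ = rng.normal(0.55, 0.08, 6)
rescue = rng.normal(0.85, 0.08, 6)   # si-circ + miR-656-3p inhibitor

# Two groups: two-tailed t test (scipy's default is two-sided)
t_stat, p = ttest_ind(si_nc, si_circ)
print(f"t test: t = {t_stat:.2f}, p = {p:.4f}")

# More than two groups: one-way ANOVA, then Tukey's HSD post hoc test
f_stat, p = f_oneway(si_nc, si_circ, rescue)
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")
groups = (["si-NC"] * 6) + (["si-circ"] * 6) + (["rescue"] * 6)
print(pairwise_tukeyhsd(np.concatenate([si_nc, si_circ, rescue]), groups))
```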
[ "BACKGROUND", "MATERIAL AND METHODS", "Clinical samples", "Cell culture and transfection", "Real‐time quantitative reverse transcription‐polymerase chain reaction and validation of circular characteristics", "Glutamine metabolism analysis", "Cell Counting Kit‐8 assay", "5‐ethynyl‐2′‐deoxyuridine assay", "Cell apoptosis analysis", "Cell migration and invasion analysis", "Western blot", "Luciferase reporter assay and RNA‐immunoprecipitation assay", "Xenograft mice model", "Immunohistochemistry assay", "Statistical analysis", "RESULTS", "Upregulation of hsa_circ_0018189 expression was detected in NSCLC\n", "Hsa_circ_0018189 knockdown suppressed glutamine metabolism, cell proliferation, migration, and invasion, while enhanced cell apoptosis in NSCLC cells", "Hsa_circ_0018189 was identified as a miR‐656‐3p molecular sponge", "Hsa_circ_0018189 sponged miR‐656‐3p to promote cell glutamine metabolism and malignancy in NSCLC cells", "\nMiR‐656‐3p directly interacted with xCT\n", "\nMiR‐656‐3p could inhibit cell glutamine metabolism and malignancy by reducing xCT expression in NSCLC\n", "Hsa_circ_0018189 could regulate tumor growth in mouse models", "DISCUSSION", "CONCLUSION", "FUNDING INFORMATION", "CONFLICT OF INTEREST", "Supporting information" ]
[ "Non‐small cell lung cancer (NSCLC) is a typical type of lung cancer.\n1\n, \n2\n NSCLC can be further subdivided into adenocarcinoma, squamous cell carcinoma, and large cell carcinoma, depending on the histological differences among the NSCLC subtypes.\n3\n, \n4\n, \n5\n The main risk for NSCLC is smoking.\n6\n Its poor prognosis and recurrence rate pose great challenges to the treatment of NSCLC.\n7\n, \n8\n Therefore, the research on NSCLC has also become a hot topic today.\nCircular RNAs (circRNAs), a class of single‐stranded RNA molecules with closed loop structures, have become the focus of RNA and transcriptome research.\n9\n In fact, circRNAs can be divided into non‐coding circRNAs and coding circRNAs.\n10\n, \n11\n Because of their unique structure, they exhibit high stability and resistance to exonucleases.\n12\n More and more studies on circRNAs in NSCLC have proved the action of circRNAs in the occurrence and development of NSCLC.\n13\n, \n14\n, \n15\n Hsa_circ_0018189 was suggested to exert a significant function in NSCLC,\n16\n but its specific functional mechanism has not been described.\nMiRNA adjusts post‐transcriptional gene expression.\n17\n, \n18\n MiRNAs, are key actors in a variety of biological processes, and their disorders have been linked to many diseases, including cancer and autoimmune diseases.\n19\n, \n20\n, \n21\n Dysregulation of miRNAs is common in NSCLC, and miRNAs have been suggested to participate in the regulation of the occurrence, progression, and metastasis of NSCLC by regulating target genes.\n22\n Therefore, a string of studies has been conducted to explore the functional mechanism of miRNAs in NSCLC, which will also provide the possibility for miRNAs to become a diagnostic and therapeutic tool.\n23\n, \n24\n The functional mechanism of miR‐656‐3p in NSCLC is not well understood, which makes it of great interest to us.\nSolute carrier family seven member 11 (SLC7A11), also named as xCT, is a cystine and glutamate anti‐transporter that can transport cystine into cells and export glutamate at the same time.\n25\n, \n26\n It has been reported that elevated xCT expression and glutamate release are commonly detected in cancer cells.\n27\n xCT has a vital function in the progression of NSCLC, but its specific regulatory mechanism has rarely been reported.\nAccordingly, the research was to clarify the changes in hsa_circ_0018189, miR‐656‐3p, and xCT in NSCLC, and their relationships and functions will be explored. These studies will provide theoretical support for future diagnosis and treatment of NSCLC.", " Clinical samples NSCLC tumor tissues (N = 90) and adjacent normal tissues (N = 90) were gathered from patients with NSCLC who were diagnosed at Nanping First Hospital Affiliated to Fujian Medical University. The Declaration of Helsinki was referenced in experiments involving human samples. Prior to surgery, the patient was informed and signed informed consent. The experiment was also approved and supported by the Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. The tumor tissues were removed and preserved at −80°C.\nNSCLC tumor tissues (N = 90) and adjacent normal tissues (N = 90) were gathered from patients with NSCLC who were diagnosed at Nanping First Hospital Affiliated to Fujian Medical University. The Declaration of Helsinki was referenced in experiments involving human samples. Prior to surgery, the patient was informed and signed informed consent. 
The experiment was also approved and supported by the Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. The tumor tissues were removed and preserved at −80°C.\n Cell culture and transfection The BEAS2B cell line (CRL‐9609™, ATTC, Manassas, VA) and NSCLC cell lines HCC44 (CBP61182, COBIOER, Nanjing, China) and A549 (CBP60084, COBIOER) were cultured with RPMI‐1640 (Sigma‐Aldrich, St. Louis, MO) or F12K supplemented with 10% fetal bovine serum (FBS, GIBCO, Carlsbad, CA) in a humidified incubator containing 5% CO2 at 37°C.\nThree small interfering RNAs si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, si‐hsa_circ_0018189#3, and negative control (si‐NC), miR‐656‐3p inhibitor and inhibitor NC, miR‐656‐3p mimic and negative control mimic (mimic NC), pcDNA and pcDNA‐xCT, short hairpin RNA targeting hsa_circ_0018189 (sh‐hsa_circ_0018189), and negative control (sh‐NC) were synthesized and purchased from Sangon Biotech (Shanghai, China). The Lipofectamine 3000 reagent (Invitrogen, Carlsbad, CA) was utilized for transfection according to production instructions.\nThe BEAS2B cell line (CRL‐9609™, ATTC, Manassas, VA) and NSCLC cell lines HCC44 (CBP61182, COBIOER, Nanjing, China) and A549 (CBP60084, COBIOER) were cultured with RPMI‐1640 (Sigma‐Aldrich, St. Louis, MO) or F12K supplemented with 10% fetal bovine serum (FBS, GIBCO, Carlsbad, CA) in a humidified incubator containing 5% CO2 at 37°C.\nThree small interfering RNAs si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, si‐hsa_circ_0018189#3, and negative control (si‐NC), miR‐656‐3p inhibitor and inhibitor NC, miR‐656‐3p mimic and negative control mimic (mimic NC), pcDNA and pcDNA‐xCT, short hairpin RNA targeting hsa_circ_0018189 (sh‐hsa_circ_0018189), and negative control (sh‐NC) were synthesized and purchased from Sangon Biotech (Shanghai, China). The Lipofectamine 3000 reagent (Invitrogen, Carlsbad, CA) was utilized for transfection according to production instructions.\n Real‐time quantitative reverse transcription‐polymerase chain reaction and validation of circular characteristics The TRIzol® Reagent (Invitrogen) was utilized for isolation of total RNA. Complementary DNA (cDNA) was synthesized using First‐Strand cDNA Synthesis Kit (Takara, Dalian, China). SYBR® Premix DimerEraser Kit (Takara, Dalian, China) was utilized to detect relative expression. The 2−ΔΔCt method\n28\n was applied to process relative expression and β‐actin or U6 was applied for standardization.\n29\n, \n30\n The primers are shown in Table 1.\nPrimers sequences used for PCR\nThe extracted total RNA was digested by adding 10 μl RNase R for subsequent analysis. Meanwhile, cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, respectively, and RNA was extracted for subsequent analysis.\nThe TRIzol® Reagent (Invitrogen) was utilized for isolation of total RNA. Complementary DNA (cDNA) was synthesized using First‐Strand cDNA Synthesis Kit (Takara, Dalian, China). SYBR® Premix DimerEraser Kit (Takara, Dalian, China) was utilized to detect relative expression. The 2−ΔΔCt method\n28\n was applied to process relative expression and β‐actin or U6 was applied for standardization.\n29\n, \n30\n The primers are shown in Table 1.\nPrimers sequences used for PCR\nThe extracted total RNA was digested by adding 10 μl RNase R for subsequent analysis. 
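To make the relative‑quantification step concrete, the following minimal Python sketch applies the 2^−ΔΔCt (Livak) calculation; all Ct values are hypothetical, with β‐actin standing in as the reference gene.

    # Minimal sketch of the 2^-ddCt (Livak) method; all Ct values are hypothetical.
    def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
        delta_ct_sample = ct_target - ct_ref              # normalize target to beta-actin/U6
        delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # same normalization in the control group
        ddct = delta_ct_sample - delta_ct_control         # ddCt relative to the control group
        return 2 ** (-ddct)                               # fold change

    # Example: hsa_circ_0018189 Ct 24.1 vs beta-actin 17.9 in tumor; 26.3 vs 18.0 in normal tissue.
    print(relative_expression(24.1, 17.9, 26.3, 18.0))    # ~4.3, i.e., upregulated in tumor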
 Glutamine metabolism analysis Glutamine metabolism was assessed by the intracellular levels of glutamine, glutamate, and α‐KG. The transfected cells were seeded in 6‐well plates. After 24 h, the intracellular levels of glutamine, glutamate, and α‐KG were determined and analyzed using the corresponding ELISA kits (Beyotime, Shanghai, China) according to the instructions.\n Cell Counting Kit‐8 assay Transfected cells were cultured in 96‐well plates for 24 h, and 10 μl of Cell Counting Kit‐8 (CCK8, Beyotime, Shanghai, China) reagent was added to each well and incubated for 24, 48, and 72 h. Then, the optical density (OD) was recorded at 450 nm by a microplate reader.\n 5‐ethynyl‐2′‐deoxyuridine assay Cell proliferative capacity was estimated with the EdU Staining Proliferation Kit (Abcam, Cambridge, UK). After transfection, the cells were further cultured in 6‐well plates, and 10 μl of EdU (5‐ethynyl‐2′‐deoxyuridine) reagent was added to the cells for a further 2 h of incubation. Then, the medium was discarded. The cells were stained with the EdU stain for 30 min, and DAPI was then applied for staining for 20 min. Finally, the cells were washed, observed, and analyzed under a microscope.\n Cell apoptosis analysis The cell apoptotic rate was estimated with an Annexin V‐FITC/PI Staining Kit (Roche, Switzerland) on a flow cytometer (Beckman Coulter, Brea, CA) following the procedures offered by the manufacturer.\n Cell migration and invasion analysis The cells were prepared as a cell suspension. For the migration experiment, transfected cells resuspended in 200 μl of serum‐free medium were added to the upper chamber. For the invasion experiment, transfected cells were added to an upper chamber pre‐coated with Matrigel (Corning, NY). 500 μl of medium containing 10% FBS was added to the lower chamber. Twenty‐four hours later, cells that migrated or invaded were fixed with 4% paraformaldehyde and then stained with crystal violet for 30 min. Cells were counted using an inverted microscope.\n Western blot RIPA lysis buffer (Beyotime, Shanghai, China) supplemented with protease inhibitors (Roche, Switzerland) was used to isolate total protein from NSCLC tissues and cells. After treatment, proteins were separated by sodium dodecyl sulfate‐polyacrylamide gel electrophoresis. The isolated protein was then transferred onto a PVDF (Millipore, Billerica, MA) membrane, and the protein‐carrying membranes were probed with a primary antibody against E‐cadherin (ab1416 at a dilution of 1:50, Abcam), Vimentin (ab8069, 1 μg/ml, Abcam), β‐actin (ab8226, 1 μg/ml, Abcam), or xCT (ab175186 at a dilution of 1:1000, Abcam) for 2 h at room temperature, followed by incubation with a secondary antibody. Finally, the ECL Kit (Beyotime, Shanghai, China) was used for band observation and analysis.\n Luciferase reporter assay and RNA‐immunoprecipitation assay The sequences of hsa_circ_0018189 and the 3' untranslated region of xCT (xCT 3'UTR) containing the wild‐type (wt) miR‐656‐3p binding site were cloned into pmirGLO Expression vectors (Promega, Madison, WI), and the mutant‐type (mut) miR‐656‐3p binding sites were generated from the hsa_circ_0018189 wt and xCT 3'UTR wt vectors via the QuikChange II Site‐Directed Mutagenesis Kit (Stratagene, Santa Clara, CA). Co‐transfection of the wt/mut reporter vector and miR‐656‐3p mimic or mimic NC was performed. The transfected cells were collected and fully lysed, followed by low‐temperature, high‐speed centrifugation to collect the supernatant, which was then plated into a 96‐well plate in triplicate for the measurement of luciferase activity using the luciferase reporter system (Promega).\nThe RIP‐Assay Kit (Millipore) was used to perform the RIP assay as per the manufacturer's instructions. After the beads were cleaned with washing buffer, pre‐diluted anti‐IgG (Abcam) or anti‐Ago2 (Abcam) solution was added to them and incubated at low temperature for 2 h. Cells were fully lysed after 48 h of transfection. The supernatant was incubated with the antibody‐conjugated beads at 4°C overnight. Then, the beads were collected by centrifugation at low temperature and low speed. RNA samples recovered from the beads were used for subsequent analysis of hsa_circ_0018189 and miR‐656‐3p.
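Because pmirGLO encodes both firefly and Renilla luciferase, the reported activity is typically the firefly signal normalized to the Renilla control in each replicate well; the short Python sketch below illustrates this under that assumption, with purely hypothetical plate readings.

    # Hedged sketch: normalize firefly luminescence to the Renilla control per well.
    def relative_activity(firefly, renilla):
        return [f / r for f, r in zip(firefly, renilla)]

    wt_plus_mimic = relative_activity([820, 790, 845], [1010, 980, 1005])   # xCT 3'UTR wt + miR-656-3p mimic
    wt_plus_nc = relative_activity([1580, 1620, 1555], [1000, 990, 1020])   # xCT 3'UTR wt + mimic NC
    print(sum(wt_plus_mimic) / 3, sum(wt_plus_nc) / 3)                      # lower ratio in the mimic group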
 Xenograft mice model All procedures and animal experiments were approved by the Animal Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. A549 cells stably expressing sh‐hsa_circ_0018189 or sh‐NC were subcutaneously injected into four‐week‐old BALB/c nude mice (Vital River Laboratory Animal Technology, Beijing, China). Tumor volumes (0.5 × width² × length) were measured every week for four weeks, and tumor weights were analyzed after the mice were euthanized.\n Immunohistochemistry assay Freshly removed tissues were paraffin sectioned, dewaxed, and subjected to antigen retrieval, followed by incubation with 0.3% hydrogen peroxide. After incubation with a primary antibody (ki‐67 or xCT) and a horseradish peroxidase‐conjugated secondary antibody, color was developed using DAB solution. Five fields per sample were selected for observation.\n Statistical analysis All data were statistically examined with GraphPad 7.0 and are presented as mean ± standard deviation (SD). p < .05 was considered statistically significant. We used a two‐tailed Student's t test (two groups) and one‐way or two‐way analysis of variance with Tukey's post hoc test (more than two groups) to compare significant differences.
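As a hedged illustration, the comparisons described above could be reproduced outside GraphPad with SciPy and statsmodels; all group measurements below are hypothetical.

    # Two-tailed Student's t test (two groups) and one-way ANOVA with Tukey's post hoc
    # test (more than two groups); the values are hypothetical relative expressions.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    si_nc = np.array([1.00, 0.95, 1.05])
    si_circ = np.array([0.42, 0.47, 0.40])
    rescue = np.array([0.88, 0.91, 0.85])

    t, p = stats.ttest_ind(si_nc, si_circ)                 # two groups
    print(f"t = {t:.2f}, p = {p:.4f}")

    print(stats.f_oneway(si_nc, si_circ, rescue))          # one-way ANOVA, three groups
    values = np.concatenate([si_nc, si_circ, rescue])
    groups = ["si-NC"] * 3 + ["si-circ#1"] * 3 + ["si-circ#1+inhibitor"] * 3
    print(pairwise_tukeyhsd(values, groups))               # Tukey's post hoc comparisons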
", "NSCLC tumor tissues (N = 90) and adjacent normal tissues (N = 90) were gathered from patients with NSCLC who were diagnosed at Nanping First Hospital Affiliated to Fujian Medical University. The Declaration of Helsinki was referenced in experiments involving human samples. Prior to surgery, each patient was informed and signed an informed consent form. The experiment was also approved and supported by the Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. The tumor tissues were removed and preserved at −80°C.", "The BEAS2B cell line (CRL‐9609™, ATCC, Manassas, VA) and the NSCLC cell lines HCC44 (CBP61182, COBIOER, Nanjing, China) and A549 (CBP60084, COBIOER) were cultured in RPMI‐1640 (Sigma‐Aldrich, St. Louis, MO) or F12K supplemented with 10% fetal bovine serum (FBS, GIBCO, Carlsbad, CA) in a humidified incubator containing 5% CO2 at 37°C. Three small interfering RNAs (si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, and si‐hsa_circ_0018189#3) and a negative control (si‐NC), miR‐656‐3p inhibitor and inhibitor NC, miR‐656‐3p mimic and negative control mimic (mimic NC), pcDNA and pcDNA‐xCT, and short hairpin RNA targeting hsa_circ_0018189 (sh‐hsa_circ_0018189) and its negative control (sh‐NC) were synthesized and purchased from Sangon Biotech (Shanghai, China). The Lipofectamine 3000 reagent (Invitrogen, Carlsbad, CA) was utilized for transfection according to the manufacturer's instructions.", "The TRIzol® Reagent (Invitrogen) was utilized for isolation of total RNA. Complementary DNA (cDNA) was synthesized using the First‐Strand cDNA Synthesis Kit (Takara, Dalian, China). The SYBR® Premix DimerEraser Kit (Takara, Dalian, China) was utilized to detect relative expression. The 2^−ΔΔCt method [28] was applied to calculate relative expression, with β‐actin or U6 applied for standardization [29, 30]. The primers are shown in Table 1 (Primer sequences used for PCR). The extracted total RNA was digested by adding 10 μl RNase R for subsequent analysis. Meanwhile, cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, respectively, and RNA was extracted for subsequent analysis.", "Glutamine metabolism was assessed by the intracellular levels of glutamine, glutamate, and α‐KG. The transfected cells were seeded in 6‐well plates. After 24 h, the intracellular levels were determined and analyzed using the corresponding ELISA kits (Beyotime, Shanghai, China) according to the instructions.", "Transfected cells were cultured in 96‐well plates for 24 h, and 10 μl of Cell Counting Kit‐8 (CCK8, Beyotime, Shanghai, China) reagent was added to each well and incubated for 24, 48, and 72 h. Then, the optical density (OD) was recorded at 450 nm by a microplate reader.", "Cell proliferative capacity was estimated with the EdU Staining Proliferation Kit (Abcam, Cambridge, UK). After transfection, the cells were further cultured in 6‐well plates, and 10 μl of EdU (5‐ethynyl‐2′‐deoxyuridine) reagent was added to the cells for a further 2 h of incubation.
Then, the medium was discarded. The cells were stained with the EdU stain for 30 min, and DAPI was then applied for staining for 20 min. Finally, the cells were washed, observed, and analyzed under a microscope.", "The cell apoptotic rate was estimated with an Annexin V‐FITC/PI Staining Kit (Roche, Switzerland) on a flow cytometer (Beckman Coulter, Brea, CA) following the procedures offered by the manufacturer.", "The cells were prepared as a cell suspension. For the migration experiment, transfected cells resuspended in 200 μl of serum‐free medium were added to the upper chamber. For the invasion experiment, transfected cells were added to an upper chamber pre‐coated with Matrigel (Corning, NY). 500 μl of medium containing 10% FBS was added to the lower chamber. Twenty‐four hours later, cells that migrated or invaded were fixed with 4% paraformaldehyde and then stained with crystal violet for 30 min. Cells were counted using an inverted microscope.", "RIPA lysis buffer (Beyotime, Shanghai, China) supplemented with protease inhibitors (Roche, Switzerland) was used to isolate total protein from NSCLC tissues and cells. After treatment, proteins were separated by sodium dodecyl sulfate‐polyacrylamide gel electrophoresis. The isolated protein was then transferred onto a PVDF (Millipore, Billerica, MA) membrane, and the protein‐carrying membranes were probed with a primary antibody against E‐cadherin (ab1416 at a dilution of 1:50, Abcam), Vimentin (ab8069, 1 μg/ml, Abcam), β‐actin (ab8226, 1 μg/ml, Abcam), or xCT (ab175186 at a dilution of 1:1000, Abcam) for 2 h at room temperature, followed by incubation with a secondary antibody. Finally, the ECL Kit (Beyotime, Shanghai, China) was used for band observation and analysis.", "The sequences of hsa_circ_0018189 and the 3' untranslated region of xCT (xCT 3'UTR) containing the wild‐type (wt) miR‐656‐3p binding site were cloned into pmirGLO Expression vectors (Promega, Madison, WI), and the mutant‐type (mut) miR‐656‐3p binding sites were generated from the hsa_circ_0018189 wt and xCT 3'UTR wt vectors via the QuikChange II Site‐Directed Mutagenesis Kit (Stratagene, Santa Clara, CA). Co‐transfection of the wt/mut reporter vector and miR‐656‐3p mimic or mimic NC was performed. The transfected cells were collected and fully lysed, followed by low‐temperature, high‐speed centrifugation to collect the supernatant, which was then plated into a 96‐well plate in triplicate for the measurement of luciferase activity using the luciferase reporter system (Promega). The RIP‐Assay Kit (Millipore) was used to perform the RIP assay as per the manufacturer's instructions. After the beads were cleaned with washing buffer, pre‐diluted anti‐IgG (Abcam) or anti‐Ago2 (Abcam) solution was added to them and incubated at low temperature for 2 h. Cells were fully lysed after 48 h of transfection. The supernatant was incubated with the antibody‐conjugated beads at 4°C overnight. Then, the beads were collected by centrifugation at low temperature and low speed. RNA samples recovered from the beads were used for subsequent analysis of hsa_circ_0018189 and miR‐656‐3p.", "All procedures and animal experiments were approved by the Animal Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. A549 cells stably expressing sh‐hsa_circ_0018189 or sh‐NC were subcutaneously injected into four‐week‐old BALB/c nude mice (Vital River Laboratory Animal Technology, Beijing, China). Tumor volumes (0.5 × width² × length) were measured every week for four weeks, and tumor weights were analyzed after the mice were euthanized.
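As a small worked example of the caliper formula above (0.5 × width² × length), with hypothetical weekly measurements in millimeters:

    # Tumor volume from caliper measurements; the weekly (width, length) pairs are hypothetical.
    def tumor_volume(width_mm, length_mm):
        return 0.5 * width_mm ** 2 * length_mm  # mm^3

    weekly = [(4.0, 6.0), (5.5, 8.0), (7.0, 10.5), (8.5, 13.0)]
    for week, (w, l) in enumerate(weekly, start=1):
        print(f"week {week}: {tumor_volume(w, l):.0f} mm^3")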
", "Freshly removed tissues were paraffin sectioned, dewaxed, and subjected to antigen retrieval, followed by incubation with 0.3% hydrogen peroxide. After incubation with a primary antibody (ki‐67 or xCT) and a horseradish peroxidase‐conjugated secondary antibody, color was developed using DAB solution. Five fields per sample were selected for observation.", "All data were statistically examined with GraphPad 7.0 and are presented as mean ± standard deviation (SD). p < .05 was considered statistically significant. We used a two‐tailed Student's t test (two groups) and one‐way or two‐way analysis of variance with Tukey's post hoc test (more than two groups) to compare significant differences.", " Upregulation of hsa_circ_0018189 expression was detected in NSCLC\n Firstly, we analyzed the expression of hsa_circ_0018189 in NSCLC samples. Hsa_circ_0018189 was observed to be elevated in NSCLC samples (Figure 1A). Meanwhile, compared with BEAS2B cells, hsa_circ_0018189 abundance was elevated in A549 and HCC44 cells (Figure 1B). Subsequently, we found that hsa_circ_0018189 is located on chromosome 10 and is derived from exons 6–12 of CUL2 mRNA (Figure 1C). We then verified the circular structure of hsa_circ_0018189: hsa_circ_0018189 abundance was not significantly changed, whereas linear CUL2 was reduced, in A549 and HCC44 cells after RNase R treatment (Figure 1D). Additionally, hsa_circ_0018189 abundance showed no difference after actinomycin D treatment (Figure 1E). Subcellular fractionation with RT‐qPCR analysis showed that hsa_circ_0018189 was mainly distributed in the cytoplasm of NSCLC cells (Figure 1F). In short, hsa_circ_0018189 was significantly upregulated in NSCLC.\nUpregulation of hsa_circ_0018189 expression was detected in NSCLC. 
(A) RT‐qPCR analyzed hsa_circ_0018189 abundance in NSCLC tissues and normal tissues; (B) Hsa_circ_0018189 abundance in BEAS2B, A549, and HCC44 cells; (C) The location and structure of hsa_circ_0018189 are shown; (D) After RNase R digestion, RT‐qPCR was conducted to analyze hsa_circ_0018189 and linear CUL2 expression; (E) After the cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, RT‐qPCR was conducted to detect hsa_circ_0018189 and linear CUL2 expression; (F) The localization of hsa_circ_0018189 in NSCLC cells was analyzed by subcellular fractionation with RT‐qPCR analysis. *p < .05\n Hsa_circ_0018189 knockdown suppressed glutamine metabolism, cell proliferation, migration, and invasion, while enhancing cell apoptosis in NSCLC cells At first, we verified the action of hsa_circ_0018189 knockdown, and the results suggested that si‐hsa_circ_0018189#1 caused the greatest silencing efficiency of hsa_circ_0018189 in both cells (Figure 2A). Hence, si‐hsa_circ_0018189#1 was used in the subsequent experiments. We found that knockdown of hsa_circ_0018189 could inhibit glutamine metabolism (Figure 2B, C, and D). CCK8 and EdU assays confirmed that lowered hsa_circ_0018189 expression restrained the proliferation of A549 and HCC44 cells (Figure 2E, F). 
In addition, cell apoptosis was increased after hsa_circ_0018189 knockdown (Figure 2G). Transwell assay results proved that si‐hsa_circ_0018189#1 could suppress the migration and invasion of A549 and HCC44 cells (Figure 2H, I). E‐cadherin (epithelial) and Vimentin (mesenchymal) are markers commonly used to assess epithelial–mesenchymal transition (EMT). Knockdown of hsa_circ_0018189 elevated the E‐cadherin level and restrained the Vimentin protein level in A549 and HCC44 cells (Figure 2J). We also constructed the hsa_circ_0018189 overexpression plasmid, as shown in Figure S1. In contrast, hsa_circ_0018189 overexpression elevated glutamine metabolism; facilitated cell proliferation, migration, and invasion; and lowered cell apoptosis in NSCLC cells (Figure S1B‐I). Consistently, hsa_circ_0018189 overexpression elevated Vimentin protein levels and lessened E‐cadherin protein levels (Figure S1J and K). All in all, these data implied that hsa_circ_0018189 promoted the progression of NSCLC.\nSilencing of hsa_circ_0018189 could inhibit glutamine metabolism, proliferation, migration, and invasion, while promoting cell apoptosis in NSCLC cells. (A) The transfection efficiencies of si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, and si‐hsa_circ_0018189#3 were analyzed by RT‐qPCR in A549 and HCC44 cells; (B‐D) ELISA analyzed glutamine metabolism after transfection of si‐NC or si‐hsa_circ_0018189#1; (E) Cell viability in the above cells was analyzed by CCK8 assay; (F) Cell proliferation was analyzed by EdU assay; (G) Cell apoptosis was analyzed by flow cytometry; (H and I) Cell migration and invasion were analyzed by Transwell assay; (J) Proteins of E‐cadherin and Vimentin were measured. *p < .05\n Hsa_circ_0018189 was identified as a miR‐656‐3p molecular sponge Starbase and CircInteractome analysis found that miR‐656‐3p and miR‐888‐5p were the most promising targets of hsa_circ_0018189 (Figure 3A). Hsa_circ_0018189 silencing elevated miR‐656‐3p expression (Figure 3B, C), so miR‐656‐3p was chosen for subsequent research. Using the Starbase database, we predicted the existence of targeted binding sites between hsa_circ_0018189 and miR‐656‐3p (Figure 3D). Luciferase reporter and RIP assays demonstrated the interaction between hsa_circ_0018189 and miR‐656‐3p in 293T cells (Figure 3E, F). MiR‐656‐3p expression was lower in NSCLC tissues (Figure 3G). Pearson correlation analysis showed that hsa_circ_0018189 and miR‐656‐3p were negatively correlated with each other in NSCLC tumors (Figure 3H). MiR‐656‐3p abundance was lower in A549 and HCC44 cells (Figure 3I). All findings testified that hsa_circ_0018189 could target miR‐656‐3p.\nHsa_circ_0018189 acted as a miR‐656‐3p sponge. (A) Starbase and CircInteractome showed the predicted target miRNAs for hsa_circ_0018189; (B and C) RNA levels of miR‐656‐3p and miR‐888‐5p were determined after transfection of si‐NC or si‐hsa_circ_0018189#1; (D) The binding sites of hsa_circ_0018189 and miR‐656‐3p; (E and F) Luciferase reporter assay and RIP assay were utilized to analyze the interaction of hsa_circ_0018189 and miR‐656‐3p in 293T cells; (G) RT‐qPCR was used to analyze miR‐656‐3p levels in NSCLC tissues and normal tissues; (H) Pearson correlation coefficient was used to analyze the correlation of miR‐656‐3p and hsa_circ_0018189; (I) RT‐qPCR was used to analyze miR‐656‐3p expression in BEAS2B, A549, and HCC44 cells. *p < .05
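As a hedged sketch of the Pearson correlation analyses used here (Figure 3H and, later, Figure 5E), the coefficient can be computed directly; the expression vectors below are hypothetical stand‑ins for the 90 paired tumor samples.

    # Pearson correlation between hsa_circ_0018189 and miR-656-3p expression (hypothetical data).
    from scipy import stats

    circ_levels = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]        # relative hsa_circ_0018189, per tumor
    mir_levels = [0.55, 0.31, 0.71, 0.22, 0.38, 0.28]   # relative miR-656-3p, per tumor
    r, p = stats.pearsonr(circ_levels, mir_levels)
    print(f"r = {r:.2f}, p = {p:.4f}")                  # a negative r matches the reported inverse correlation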
 Hsa_circ_0018189 sponged miR‐656‐3p to promote cell glutamine metabolism and malignancy in NSCLC cells The miR‐656‐3p inhibitor efficiently silenced miR‐656‐3p (Figure 4A). The hsa_circ_0018189 knockdown‐mediated elevation of miR‐656‐3p was reversed by the miR‐656‐3p inhibitor (Figure 4B). Deficiency of miR‐656‐3p could relieve the effect of si‐hsa_circ_0018189#1 on glutamine metabolism (Figure 4C–E). CCK8 and EdU assay results also showed that miR‐656‐3p silencing recovered the si‐hsa_circ_0018189#1‐caused repression of A549 and HCC44 cell proliferation (Figure 4F, G). Moreover, transfection of the miR‐656‐3p inhibitor attenuated the effects of si‐hsa_circ_0018189#1 on cell apoptosis, migration, and invasion (Figure 4H, I, and J). E‐cadherin and Vimentin protein expression was influenced by si‐hsa_circ_0018189#1, while miR‐656‐3p inhibition could abolish this influence (Figure 4K). Taken together, the evidence implied that hsa_circ_0018189 could interact with miR‐656‐3p to reinforce NSCLC cell glutamine metabolism and malignancy.\nMiR‐656‐3p could restore the effect of hsa_circ_0018189 knockdown on NSCLC progression. (A) MiR‐656‐3p abundance was detected after transfection of inhibitor NC and miR‐656‐3p inhibitor; (B‐K) Cells were transfected with si‐NC, si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#1 + inhibitor NC, or si‐hsa_circ_0018189#1 + miR‐656‐3p inhibitor. (B) MiR‐656‐3p expression was estimated after transfection; (C‐E) ELISA analyzed glutamine metabolism after transfection; (F and G) Cell proliferation was determined; (H) Cell apoptosis was assessed after transfection; (I and J) Cell migration and invasion abilities were analyzed; (K) E‐cadherin and Vimentin protein levels were determined. *p < .05
 MiR‐656‐3p directly interacted with xCT\n Using TargetScan, we revealed that the xCT 3'UTR harbored putative target sequences for miR‐656‐3p (Figure 5A). A luciferase reporter experiment suggested that, in 293T cells, miR‐656‐3p could reduce the luciferase activity of the wild‐type xCT 3'UTR reporter, whereas there was no significant difference in the mutant xCT 3'UTR group (Figure 5B), confirming the targeting relationship between miR‐656‐3p and xCT. The mRNA and protein levels of xCT were markedly raised in NSCLC tissues rather than in normal tissues (Figure 5C, D). xCT mRNA expression in NSCLC samples was negatively correlated with miR‐656‐3p (Figure 5E). Besides, xCT protein levels were upregulated in A549 and HCC44 cells (Figure 5F). Additionally, xCT expression was decreased after knockdown of hsa_circ_0018189, while miR‐656‐3p downregulation could restore the effect of si‐hsa_circ_0018189#1 on xCT expression in A549 and HCC44 cells (Figure 5G). In summary, our findings elucidated that miR‐656‐3p could target xCT, and hsa_circ_0018189 could regulate the expression of xCT by acting as a sponge of miR‐656‐3p in NSCLC.\nThere was a target relationship between miR‐656‐3p and xCT. (A) The binding sites of miR‐656‐3p and xCT; (B) Luciferase reporter assay tested the luciferase activities of xCT 3'UTR wt and xCT 3'UTR mut in 293T cells after transfection of mimic NC or miR‐656‐3p mimic; (C) RNA levels of xCT in NSCLC samples; (D) Protein levels of xCT in NSCLC samples; (E) The correlation of miR‐656‐3p and xCT mRNA expression levels in NSCLC samples; (F) Protein levels of xCT in BEAS2B, A549, and HCC44 cells; (G) Protein levels of xCT were estimated after transfection of si‐NC, si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#1 + inhibitor NC, or si‐hsa_circ_0018189#1 + miR‐656‐3p inhibitor. *p < .05
 MiR‐656‐3p could inhibit cell glutamine metabolism and malignancy by reducing xCT expression in NSCLC\n Introduction of xCT upregulated xCT protein levels in A549 and HCC44 cells (Figure 6A). MiR‐656‐3p abundance was increased after transfection of the miR‐656‐3p mimic (Figure 6B). The lowered xCT protein levels induced by miR‐656‐3p overexpression were recovered after xCT introduction (Figure 6C). Besides, xCT upregulation impaired the effect of the miR‐656‐3p mimic on factors associated with glutamine metabolism (Figure 6D, E, and F). Beyond that, CCK8 and EdU assay analyses manifested that introduction of pcDNA‐xCT weakened the action of miR‐656‐3p on A549 and HCC44 cell proliferation and apoptosis (Figure 6G, H, and I). The lowered migration and invasion capacities driven by miR‐656‐3p were lessened after xCT overexpression (Figure 6J, K). Moreover, miR‐656‐3p‐mediated changes in E‐cadherin and Vimentin protein levels were recovered by co‐transfection of xCT and miR‐656‐3p (Figure 6L). In summary, the effect of miR‐656‐3p on NSCLC cell processes could be impaired by xCT.\nOverexpression of xCT could recover miR‐656‐3p‐induced impacts on NSCLC progression. (A) Protein levels of xCT were tested after transfection of pcDNA and pcDNA‐xCT; (B) MiR‐656‐3p abundance was determined after transfection of miR‐656‐3p mimic and mimic NC; (C) Protein levels of xCT were detected after transfection of mimic NC, miR‐656‐3p mimic, miR‐656‐3p mimic+pcDNA, or miR‐656‐3p mimic+pcDNA‐xCT; (D‐F) Glutamine metabolism in the above cells was analyzed; (G and H) Cell proliferation in the above cells was estimated; (I) Cell apoptosis in the above cells was detected; (J and K) Cell migration and invasion in the above cells were determined; (L) E‐cadherin and Vimentin protein levels in the above cells were measured. 
*p < .05\n Hsa_circ_0018189 could regulate tumor growth in mouse models A549 cells stably expressing sh‐hsa_circ_0018189 were subcutaneously injected into nude mice, and tumor volume was measured weekly. Hsa_circ_0018189 knockdown reduced tumor volume (Figure 7A) and tumor weight (Figure 7B). RNA levels of hsa_circ_0018189 and xCT were downregulated in xenograft tumor tissues from the sh‐hsa_circ_0018189 group, but miR‐656‐3p expression was increased (Figure 7C). 
Western blot analysis suggested that xCT protein was decreased in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7D). IHC analysis revealed that ki‐67 and xCT levels were reduced in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7E). All data indicated that knockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo.\nKnockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo. (A) The tumor volumes were measured at 1, 2, 3, and 4 weeks; (B) Tumor weights were tested; (C) RNA levels of hsa_circ_0018189, miR‐656‐3p, and xCT in xenograft tumor tissues were analyzed; (D) Protein levels of xCT in xenograft tumor tissues were determined; (E) IHC analysis of ki‐67 and xCT protein levels in xenograft tumor tissues. *p < .05", "Firstly, we analyzed the expression of hsa_circ_0018189 in NSCLC samples. Hsa_circ_0018189 was observed to be elevated in NSCLC samples (Figure 1A). Meanwhile, compared with BEAS2B cells, hsa_circ_0018189 abundance was elevated in A549 and HCC44 cells (Figure 1B). Subsequently, we found that hsa_circ_0018189 is located on chromosome 10 and is derived from exons 6–12 of CUL2 mRNA (Figure 1C). We then verified the circular structure of hsa_circ_0018189: hsa_circ_0018189 abundance was not significantly changed, whereas linear CUL2 was reduced, in A549 and HCC44 cells after RNase R treatment (Figure 1D). Additionally, hsa_circ_0018189 abundance showed no difference after actinomycin D treatment (Figure 1E). Subcellular fractionation with RT‐qPCR analysis showed that hsa_circ_0018189 was mainly distributed in the cytoplasm of NSCLC cells (Figure 1F). In short, hsa_circ_0018189 was significantly upregulated in NSCLC.\nUpregulation of hsa_circ_0018189 expression was detected in NSCLC. (A) RT‐qPCR analyzed hsa_circ_0018189 abundance in NSCLC tissues and normal tissues; (B) Hsa_circ_0018189 abundance in BEAS2B, A549, and HCC44 cells; (C) The location and structure of hsa_circ_0018189 are shown; (D) After RNase R digestion, RT‐qPCR was conducted to analyze hsa_circ_0018189 and linear CUL2 expression; (E) After the cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, RT‐qPCR was conducted to detect hsa_circ_0018189 and linear CUL2 expression; (F) The localization of hsa_circ_0018189 in NSCLC cells was analyzed by subcellular fractionation with RT‐qPCR analysis. *p < .05", "At first, we verified the action of hsa_circ_0018189 knockdown, and the results suggested that si‐hsa_circ_0018189#1 caused the greatest silencing efficiency of hsa_circ_0018189 in both cells (Figure 2A). Hence, si‐hsa_circ_0018189#1 was used in the subsequent experiments. We found that knockdown of hsa_circ_0018189 could inhibit glutamine metabolism (Figure 2B, C, and D). CCK8 and EdU assays confirmed that lowered hsa_circ_0018189 expression restrained the proliferation of A549 and HCC44 cells (Figure 2E, F). In addition, cell apoptosis was increased after hsa_circ_0018189 knockdown (Figure 2G). Transwell assay results proved that si‐hsa_circ_0018189#1 could suppress the migration and invasion of A549 and HCC44 cells (Figure 2H, I). E‐cadherin (epithelial) and Vimentin (mesenchymal) are markers commonly used to assess epithelial–mesenchymal transition (EMT). Knockdown of hsa_circ_0018189 elevated the E‐cadherin level and restrained the Vimentin protein level in A549 and HCC44 cells (Figure 2J). We also constructed the hsa_circ_0018189 overexpression plasmid, as shown in Figure S1. 
In contrast, hsa_circ_0018189 overexpression elevated glutamine metabolism; facilitated cell proliferation, migration, and invasion; and lowered cell apoptosis in NSCLC cells (Figure S1B‐I). Consistently, hsa_circ_0018189 overexpression elevated Vimentin protein levels and lessened E‐cadherin protein levels (Figure S1J and K). All in all, these data implied that hsa_circ_0018189 promoted the progression of NSCLC.\nSilencing of hsa_circ_0018189 could inhibit glutamine metabolism, proliferation, migration, and invasion, while promoting cell apoptosis in NSCLC cells. (A) The transfection efficiencies of si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, and si‐hsa_circ_0018189#3 were analyzed by RT‐qPCR in A549 and HCC44 cells; (B‐D) ELISA analyzed glutamine metabolism after transfection of si‐NC or si‐hsa_circ_0018189#1; (E) Cell viability in the above cells was analyzed by CCK8 assay; (F) Cell proliferation was analyzed by EdU assay; (G) Cell apoptosis was analyzed by flow cytometry; (H and I) Cell migration and invasion were analyzed by Transwell assay; (J) Proteins of E‐cadherin and Vimentin were measured. *p < .05", "Starbase and CircInteractome analysis found that miR‐656‐3p and miR‐888‐5p were the most promising targets of hsa_circ_0018189 (Figure 3A). Hsa_circ_0018189 silencing elevated miR‐656‐3p expression (Figure 3B, C), so miR‐656‐3p was chosen for subsequent research. Using the Starbase database, we predicted the existence of targeted binding sites between hsa_circ_0018189 and miR‐656‐3p (Figure 3D). Luciferase reporter and RIP assays demonstrated the interaction between hsa_circ_0018189 and miR‐656‐3p in 293T cells (Figure 3E, F). MiR‐656‐3p expression was lower in NSCLC tissues (Figure 3G). Pearson correlation analysis showed that hsa_circ_0018189 and miR‐656‐3p were negatively correlated with each other in NSCLC tumors (Figure 3H). MiR‐656‐3p abundance was lower in A549 and HCC44 cells (Figure 3I). All findings testified that hsa_circ_0018189 could target miR‐656‐3p.\nHsa_circ_0018189 acted as a miR‐656‐3p sponge. (A) Starbase and CircInteractome showed the predicted target miRNAs for hsa_circ_0018189; (B and C) RNA levels of miR‐656‐3p and miR‐888‐5p were determined after transfection of si‐NC or si‐hsa_circ_0018189#1; (D) The binding sites of hsa_circ_0018189 and miR‐656‐3p; (E and F) Luciferase reporter assay and RIP assay were utilized to analyze the interaction of hsa_circ_0018189 and miR‐656‐3p in 293T cells; (G) RT‐qPCR was used to analyze miR‐656‐3p levels in NSCLC tissues and normal tissues; (H) Pearson correlation coefficient was used to analyze the correlation of miR‐656‐3p and hsa_circ_0018189; (I) RT‐qPCR was used to analyze miR‐656‐3p expression in BEAS2B, A549, and HCC44 cells. *p < .05", "The miR‐656‐3p inhibitor efficiently silenced miR‐656‐3p (Figure 4A). The hsa_circ_0018189 knockdown‐mediated elevation of miR‐656‐3p was reversed by the miR‐656‐3p inhibitor (Figure 4B). Deficiency of miR‐656‐3p could relieve the effect of si‐hsa_circ_0018189#1 on glutamine metabolism (Figure 4C–E). CCK8 and EdU assay results also showed that miR‐656‐3p silencing recovered the si‐hsa_circ_0018189#1‐caused repression of A549 and HCC44 cell proliferation (Figure 4F, G). Moreover, transfection of the miR‐656‐3p inhibitor attenuated the effects of si‐hsa_circ_0018189#1 on cell apoptosis, migration, and invasion (Figure 4H, I, and J). 
E‐cadherin and Vimentin protein expression was influenced by si‐hsa_circ_0018189#1, while miR‐656‐3p silencing could abolish this influence (Figure 4K). Taken together, the evidence implied that hsa_circ_0018189 could interact with miR‐656‐3p to reinforce NSCLC cell glutamine metabolism and malignancy.\nMiR‐656‐3p could restore the effect of hsa_circ_0018189 knockdown on NSCLC progression. (A) MiR‐656‐3p abundance was detected after transfection of inhibitor NC and miR‐656‐3p inhibitor; (B‐K) Cells were transfected with si‐NC, si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#1 + inhibitor NC, or si‐hsa_circ_0018189#1 + miR‐656‐3p inhibitor. (B) MiR‐656‐3p expression was estimated after transfection; (C‐E) ELISA assay analyzed glutamine metabolism after transfection; (F and G) Cell proliferation was determined; (H) Cell apoptosis was assessed after transfection; (I and J) Cell migrating and invading abilities were analyzed; (K) E‐cadherin and Vimentin protein levels were determined. *p < .05", "Using TargetScan, we found that the xCT 3'UTR harbored putative target sequences for miR‐656‐3p (Figure 5A). A luciferase reporter experiment in 293 T cells showed that miR‐656‐3p could reduce the luciferase activity of the wild‐type xCT 3'UTR reporter, while no significant difference was observed for the mutant xCT 3'UTR (Figure 5B), confirming the targeting relationship between miR‐656‐3p and xCT. The mRNA and protein levels of xCT were markedly higher in NSCLC tissues than in normal tissues (Figure 5C, D). xCT mRNA expression in NSCLC samples was negatively correlated with miR‐656‐3p (Figure 5E). Besides, xCT protein levels were upregulated in A549 and HCC44 cells (Figure 5F). Additionally, xCT expression was decreased after knockdown of hsa_circ_0018189, while miR‐656‐3p downregulation could restore the effect of si‐hsa_circ_0018189#1 on xCT expression in A549 and HCC44 cells (Figure 5G). In summary, our findings elucidated that miR‐656‐3p could target xCT, and hsa_circ_0018189 could regulate the expression of xCT by acting as a sponge of miR‐656‐3p in NSCLC.\nThere was a target relationship between miR‐656‐3p and xCT. (A) The binding sites of miR‐656‐3p and xCT; (B) Luciferase reporter assay tested the luciferase activities of xCT 3'UTR wt and xCT 3'UTR mut in 293 T cells after transfection of mimic NC or miR‐656‐3p mimic; (C) RNA levels of xCT in NSCLC samples; (D) Protein levels of xCT in NSCLC samples; (E) The correlation of miR‐656‐3p and xCT mRNA expression levels in NSCLC samples; (F) Protein levels of xCT in BEAS2B, A549, and HCC44 cells; (G) Protein levels of xCT were estimated after transfection of si‐NC, si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#1 + inhibitor NC, or si‐hsa_circ_0018189#1 + miR‐656‐3p inhibitor. *p < .05", "Introduction of xCT upregulated xCT protein levels in A549 and HCC44 cells (Figure 6A). MiR‐656‐3p abundance was increased after transfection of the miR‐656‐3p mimic (Figure 6B). The lowered xCT protein levels caused by miR‐656‐3p overexpression were recovered after xCT introduction (Figure 6C). Besides, xCT upregulation impaired the effect of the miR‐656‐3p mimic on factors associated with glutamine metabolism (Figure 6D, E and F). Beyond that, CCK8 assay and EdU assay analysis showed that introduction of pcDNA‐xCT weakened the effects of miR‐656‐3p on A549 and HCC44 cell proliferation and apoptosis (Figure 6G, H, and I). The lowered migrating and invading capacities driven by miR‐656‐3p were lessened after xCT overexpression (Figure 6J, K).
Moreover, miR‐656‐3p‐mediated changes in E‐cadherin and Vimentin protein levels were recovered by co‐transfection of xCT and miR‐656‐3p (Figure 6L). In short, xCT overexpression could impair the regulatory effects of miR‐656‐3p on the processes of NSCLC cells.\nOverexpression of xCT could recover miR‐656‐3p‐induced impacts on NSCLC progression. (A) Protein levels of xCT were tested after transfection of pcDNA and pcDNA‐xCT; (B) MiR‐656‐3p abundance was determined after transfection of miR‐656‐3p mimic and mimic NC; (C) Protein levels of xCT were detected after transfection of mimic NC, miR‐656‐3p mimic, miR‐656‐3p mimic+pcDNA, or miR‐656‐3p mimic+pcDNA‐xCT; (D‐F) Glutamine metabolism in the above cells was analyzed; (G and H) Cell proliferation in the above cells was estimated; (I) Cell apoptosis in the above cells was detected; (J and K) Cell migration and invasion in the above cells were determined; (L) E‐cadherin and Vimentin protein levels in the above cells were measured. *p < .05", "A549 cells stably expressing sh‐hsa_circ_0018189 were subcutaneously injected into nude mice, and tumor growth was monitored. Hsa_circ_0018189 knockdown reduced tumor volume (Figure 7A) and tumor weight (Figure 7B). RNA levels of hsa_circ_0018189 and xCT were downregulated in xenograft tumor tissues from the sh‐hsa_circ_0018189 group, but miR‐656‐3p expression was increased (Figure 7C). Western blot analysis suggested that xCT protein was decreased in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7D). IHC analysis revealed that ki‐67 and xCT levels were reduced in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7E). All data indicated that knockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo.\nKnockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo. (A) The tumor volumes were measured at 1, 2, 3, and 4 weeks; (B) Tumor weights were tested; (C) RNA levels of hsa_circ_0018189, miR‐656‐3p, and xCT in xenograft tumor tissues were analyzed; (D) Protein levels of xCT in xenograft tumor tissues were determined; (E) IHC analysis of ki‐67 and xCT protein levels in xenograft tumor tissues. *p < .05", "NSCLC is one of the cancers with the highest mortality worldwide.\n31\n The treatment of NSCLC has made great progress with the continuous development of science and technology.\n32\n However, poor prognosis is still a difficult problem in the treatment of lung cancer today. The functional mechanism of circRNAs in many cancers has been confirmed.\n33\n The high level of hsa_circ_0018189 in tissues and cells of NSCLC aroused our interest in the detailed functional mechanism of hsa_circ_0018189 in NSCLC. In the present research, we confirmed that hsa_circ_0018189 was highly expressed in NSCLC tissues and cells, which led us to speculate that the upregulation of hsa_circ_0018189 might be a key factor leading to tumorigenesis. Further investigation found that cell proliferation, migration, invasion, and glutamine metabolism were inhibited under the hsa_circ_0018189 knockdown condition, but cell apoptosis was significantly induced. In mouse xenograft experiments, hsa_circ_0018189 knockdown lessened NSCLC cell growth in vivo.
These results preliminarily confirmed that hsa_circ_0018189 could promote the development of NSCLC.\nThe regulatory role of hsa_circ_0018189 in NSCLC has been confirmed, but its specific mechanism has not been reported,\n16\n so we did further exploratory experiments. Starbase prediction revealed that miR‐656‐3p might interact with hsa_circ_0018189. Further experiments confirmed that hsa_circ_0018189 functioned as a miR‐656‐3p molecular sponge. Low levels of miR‐656‐3p have been reported in various tumors, such as hepatocellular carcinoma, nasopharyngeal cancer, and colorectal cancer.\n34\n, \n35\n, \n36\n A previous study showed that miR‐656‐3p was downregulated in NSCLC samples and cell lines, and that miR‐656‐3p overexpression lessened cell malignancy.\n37\n In this research, we also verified the downregulation of miR‐656‐3p in NSCLC samples and cell lines, and the reduced miR‐656‐3p expression was able to restore the effects of si‐hsa_circ_0018189#1 on NSCLC progression. These results confirmed that hsa_circ_0018189 could promote the development of NSCLC cells by targeting miR‐656‐3p.\nThe role of xCT as an intermediate glutamate transporter in NSCLC had also been well reported.\n38\n In our experiments, xCT abundance was significantly upregulated in NSCLC cells and samples, and miR‐656‐3p could target xCT. At the same time, it was also confirmed that hsa_circ_0018189 could regulate xCT expression through miR‐656‐3p. Further studies confirmed that miR‐656‐3p inhibited the tumorigenicity of cells by reducing the expression of xCT. Combining these results, our study verified the targeting relationships among hsa_circ_0018189, miR‐656‐3p, and xCT in NSCLC for the first time. In conclusion, our study uncovered a new regulatory pathway in the development of NSCLC by exploring the regulatory mechanism of a new circRNA‐miRNA‐mRNA axis. Our study also provides new ideas for future drug development for NSCLC.", "In our study, hsa_circ_0018189 could regulate the tumorigenicity of NSCLC by serving as a miR‐656‐3p molecular sponge and subsequently mediating xCT expression, which might be a potential target for the treatment of NSCLC.", "None.", "The authors declare that they have no financial conflicts of interest.", "\nFigure S1\nHsa_circ_0018189 overexpression increased glutamine metabolism, promoted proliferation, migration, and invasion, and lessened apoptosis in NSCLC cells. A: The overexpression efficiency of the hsa_circ_0018189 overexpression plasmid; B‐D: ELISA assay analyzed the glutamine metabolism of NSCLC cells transfected with vector or hsa_circ_0018189; E‐I: Cell viability, proliferation, apoptosis, migration, and invasion were analyzed; J and K: Proteins of E‐cadherin and Vimentin were measured in the above cells. *p < .05." ]
[ "background", "materials-and-methods", null, null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion", "conclusions", null, "COI-statement", "supplementary-material" ]
[ "Hsa_circ_0018189", "miR‐656‐3p", "NSCLC", "xCT" ]
BACKGROUND: Non‐small cell lung cancer (NSCLC) is a typical type of lung cancer. 1 , 2 NSCLC can be further subdivided into adenocarcinoma, squamous cell carcinoma, and large cell carcinoma, depending on the histological differences among the NSCLC subtypes. 3 , 4 , 5 The main risk factor for NSCLC is smoking. 6 Its poor prognosis and recurrence rate pose great challenges to the treatment of NSCLC. 7 , 8 Therefore, research on NSCLC has become a hot topic today. Circular RNAs (circRNAs), a class of single‐stranded RNA molecules with closed‐loop structures, have become a focus of RNA and transcriptome research. 9 In fact, circRNAs can be divided into non‐coding circRNAs and coding circRNAs. 10 , 11 Because of their unique structure, they exhibit high stability and resistance to exonucleases. 12 A growing number of studies on circRNAs in NSCLC have demonstrated their role in the occurrence and development of NSCLC. 13 , 14 , 15 Hsa_circ_0018189 was suggested to exert a significant function in NSCLC, 16 but its specific functional mechanism has not been described. MiRNAs adjust gene expression post‐transcriptionally. 17 , 18 MiRNAs are key actors in a variety of biological processes, and their dysregulation has been linked to many diseases, including cancer and autoimmune diseases. 19 , 20 , 21 Dysregulation of miRNAs is common in NSCLC, and miRNAs have been suggested to participate in the regulation of the occurrence, progression, and metastasis of NSCLC by regulating target genes. 22 Therefore, a number of studies have been conducted to explore the functional mechanisms of miRNAs in NSCLC, which also raises the possibility of miRNAs becoming diagnostic and therapeutic tools. 23 , 24 The functional mechanism of miR‐656‐3p in NSCLC is not well understood, which makes it of great interest to us. Solute carrier family seven member 11 (SLC7A11), also named xCT, is a cystine/glutamate antiporter that transports cystine into cells and exports glutamate at the same time. 25 , 26 It has been reported that elevated xCT expression and glutamate release are commonly detected in cancer cells. 27 xCT has a vital function in the progression of NSCLC, but its specific regulatory mechanism has rarely been reported. Accordingly, this research aimed to clarify the changes in hsa_circ_0018189, miR‐656‐3p, and xCT in NSCLC and to explore their relationships and functions. These studies will provide theoretical support for the future diagnosis and treatment of NSCLC.
MATERIAL AND METHODS: Clinical samples
NSCLC tumor tissues (N = 90) and adjacent normal tissues (N = 90) were gathered from patients with NSCLC who were diagnosed at Nanping First Hospital Affiliated to Fujian Medical University. The Declaration of Helsinki was followed in experiments involving human samples. Prior to surgery, patients were informed and signed informed consent. The experiment was also approved and supported by the Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. The tumor tissues were removed and preserved at −80°C.
Cell culture and transfection
The BEAS2B cell line (CRL‐9609™, ATCC, Manassas, VA) and the NSCLC cell lines HCC44 (CBP61182, COBIOER, Nanjing, China) and A549 (CBP60084, COBIOER) were cultured in RPMI‐1640 (Sigma‐Aldrich, St. Louis, MO) or F12K supplemented with 10% fetal bovine serum (FBS, GIBCO, Carlsbad, CA) in a humidified incubator containing 5% CO2 at 37°C. Three small interfering RNAs (si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, and si‐hsa_circ_0018189#3) and the negative control (si‐NC), miR‐656‐3p inhibitor and inhibitor NC, miR‐656‐3p mimic and negative control mimic (mimic NC), pcDNA and pcDNA‐xCT, and short hairpin RNA targeting hsa_circ_0018189 (sh‐hsa_circ_0018189) with its negative control (sh‐NC) were synthesized and purchased from Sangon Biotech (Shanghai, China). The Lipofectamine 3000 reagent (Invitrogen, Carlsbad, CA) was utilized for transfection according to the manufacturer's instructions.
Real‐time quantitative reverse transcription‐polymerase chain reaction and validation of circular characteristics
The TRIzol® Reagent (Invitrogen) was utilized for the isolation of total RNA. Complementary DNA (cDNA) was synthesized using the First‐Strand cDNA Synthesis Kit (Takara, Dalian, China). The SYBR® Premix DimerEraser Kit (Takara, Dalian, China) was utilized to detect relative expression. The 2^−ΔΔCt method 28 was applied to calculate relative expression, with β‐actin or U6 applied for normalization. 29 , 30 The primer sequences used for PCR are shown in Table 1. For validation of circular characteristics, the extracted total RNA was digested by adding 10 μl RNase R for subsequent analysis. Meanwhile, cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, respectively, and RNA was extracted for subsequent analysis.
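As a point of clarity, the 2^−ΔΔCt calculation referenced above can be sketched in a few lines. The following is a minimal illustration, not code from the study; the Ct values are hypothetical placeholders.

```python
# Minimal sketch of the 2^-(delta-delta Ct) relative-expression method.
# All Ct values below are illustrative placeholders, not data from this study.

def ddct_fold_change(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of a target gene in a sample versus a control,
    normalized to a reference gene such as beta-actin or U6."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample
    delta_ct_control = ct_target_control - ct_ref_control   # normalize control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: a hypothetical circRNA measured in tumor vs. normal tissue.
fold = ddct_fold_change(24.1, 18.0, 26.4, 18.1)
print(f"fold change = {fold:.2f}")  # > 1 indicates upregulation in the sample
```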
Glutamine metabolism analysis
Glutamine metabolism was assessed by the intracellular levels of glutamine, glutamate, and α‐KG. The transfected cells were seeded in 6‐well plates. After 24 h, the intracellular levels of glutamine, glutamate, and α‐KG were determined and analyzed using ELISA kits (Beyotime, Shanghai, China) according to the instructions.
Cell Counting Kit‐8 assay
In 96‐well plates, the transfected cells were cultured for 24 h, and 10 μl of Cell Counting Kit‐8 (CCK8, Beyotime, Shanghai, China) reagent was added to each well and incubated for 24, 48, and 72 h. Then, the optical density (OD) was recorded at 450 nm by a microplate reader.
5‐ethynyl‐2′‐deoxyuridine assay
Cell proliferative capacity was estimated with the EdU Staining Proliferation Kit (Abcam, Cambridge, UK). After transfection, the cells were further cultured in 6‐well plates, and 10 μl of EdU (5‐ethynyl‐2′‐deoxyuridine) reagent was added to the cells for a further 2 h of incubation. Then, the medium was discarded. The cells were stained with the EdU stain for 30 min, and DAPI was then applied for staining for 20 min. Finally, the cells were washed, observed, and analyzed under a microscope.
Cell apoptosis analysis
The cell apoptotic rate was estimated with an Annexin V‐FITC/PI Staining Kit (Roche, Switzerland) on a flow cytometer (Beckman Coulter; Kraemer Boulevard, CA) following the processes offered by the manufacturer.
Cell migration and invasion analysis
The cells were prepared into a cell suspension. For the migration experiment, transfected cells resuspended in 200 μl of serum‐free medium were added to the upper chamber. For the invasion experiment, transfected cells were added to the upper chamber pre‐coated with Matrigel (Corning, New York, Madison). 500 μl of medium containing 10% FBS was added to the lower chamber. Twenty‐four hours later, cells that migrated or invaded were fixed with 4% paraformaldehyde and then stained with crystal violet for 30 min. Cells were counted using an inverted microscope.
Western blot
RIPA lysis buffer (Beyotime, Shanghai, China) supplemented with protease inhibitors (Roche, Switzerland) was used to isolate total protein from NSCLC tissues and cells. After treatment, proteins were separated by sodium dodecyl sulfate‐polyacrylamide gel electrophoresis. The isolated protein was then transferred onto a PVDF (Millipore, Billerica, MA) membrane, and the protein‐carrying membranes were probed with a primary antibody against E‐cadherin (ab1416 at a dilution of 1:50, Abcam), Vimentin (ab8069, 1 μg/ml, Abcam), β‐actin (ab8226, 1 μg/ml, Abcam), or xCT (ab175186 at a dilution of 1:1000, Abcam) for 2 h at room temperature, followed by incubation with a secondary antibody. Finally, the ECL Kit (Beyotime, Shanghai, China) was used for band observation and analysis.
Luciferase reporter assay and RNA‐immunoprecipitation assay
The sequences of hsa_circ_0018189 and the 3'untranslated region of xCT (xCT 3'UTR) containing the wild‐type (wt) miR‐656‐3p binding site were cloned into pmirGLO Expression vectors (Promega, Madison, WI), and the mutant‐type (mut) miR‐656‐3p binding site was generated based on the hsa_circ_0018189 wt and xCT 3'UTR wt vectors via the QuikChange II Site‐Directed Mutagenesis Kit (Stratagene, Santa Clara, CA). Co‐transfection of the wt/mut reporter vector and miR‐656‐3p mimic or mimic NC was performed. The transfected cells were collected and fully lysed, followed by low‐temperature and high‐speed centrifugation to collect the supernatant, which was then plated into a 96‐well plate in triplicate for the measurement of luciferase activity using the luciferase reporter system (Promega). The RIP‐Assay Kit (Millipore) was used to perform the RIP assay as per the manufacturer's instructions. After the beads were washed with washing buffer, pre‐diluted anti‐IgG (Abcam) or anti‐Ago2 (Abcam) solution was added to them and incubated at low temperature for 2 h. Cells were fully lysed after 48 h of transfection. The supernatant was incubated with the antibody‐conjugated beads at 4°C overnight. Then, the beads were collected by centrifugation at low temperature and low speed. RNA samples recovered from the beads were used for subsequent analysis of hsa_circ_0018189 and miR‐656‐3p.
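To make the readout of the reporter assay concrete, the sketch below shows one common way such data can be summarized: each reporter's signal with the miRNA mimic is expressed relative to its mimic‐NC control. The readings and the roughly 50% drop for the wild‐type reporter are hypothetical placeholders, not results from this study.

```python
# Illustrative summary of a luciferase reporter assay: relative activity of
# wt and mut reporters with a miRNA mimic vs. the mimic-NC control.
# All readings are hypothetical placeholders.
from statistics import mean

def relative_activity(mimic_readings, nc_readings):
    """Mean luciferase signal with the miRNA mimic relative to mimic NC."""
    return mean(mimic_readings) / mean(nc_readings)

wt_mimic, wt_nc = [310.0, 295.0, 302.0], [610.0, 598.0, 605.0]
mut_mimic, mut_nc = [590.0, 601.0, 612.0], [602.0, 595.0, 607.0]

print(f"wt reporter:  {relative_activity(wt_mimic, wt_nc):.2f}")   # ~0.50: activity reduced
print(f"mut reporter: {relative_activity(mut_mimic, mut_nc):.2f}") # ~1.00: unaffected
```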
Xenograft mice model
All procedures and animal experiments were approved by the Animal Ethics Committee of Nanping First Hospital Affiliated to Fujian Medical University. A549 cells stably expressing sh‐hsa_circ_0018189 or sh‐NC were subcutaneously injected into four‐week‐old BALB/c nude mice (Vital River Laboratory Animal Technology [Beijing, China]). Tumor volumes (0.5 × width² × length) were measured every week for four weeks, and tumor weights were analyzed after the mice were euthanized.
Immunohistochemistry assay
Freshly removed tissues were paraffin sectioned, dewaxed, and subjected to antigen retrieval, followed by incubation with 0.3% hydrogen peroxide. After incubation with a primary antibody (ki‐67 or xCT) and a secondary horseradish peroxidase antibody, color was developed using DAB solution. Five fields were selected from each sample for observation.
Statistical analysis
All data were statistically examined with GraphPad 7.0 and are presented as mean ± standard deviation (SD). p < .05 was considered statistically significant. We used a two‐tailed Student's t test (two groups) and one‐way or two‐way analysis of variance with Tukey's post hoc test (more than two groups) to compare significant differences.
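As a minimal sketch of these comparisons, the snippet below computes tumor volumes with the 0.5 × width² × length formula from the xenograft model and then applies a two‐tailed t test and a one‐way ANOVA with Tukey's post hoc test. The measurements and group names are hypothetical placeholders, and SciPy/statsmodels stand in here for the GraphPad analyses described above.

```python
# Hypothetical sketch of the statistical comparisons described above.
# Measurements are placeholders, not data from this study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def tumor_volume(width: float, length: float) -> float:
    """Tumor volume as defined for the xenograft model: 0.5 * width^2 * length."""
    return 0.5 * width ** 2 * length

sh_nc = np.array([tumor_volume(w, l) for w, l in [(8.1, 12.0), (7.9, 11.5), (8.4, 12.3)]])
sh_circ = np.array([tumor_volume(w, l) for w, l in [(5.2, 8.0), (5.6, 8.4), (4.9, 7.7)]])

# Two groups: two-tailed Student's t test.
t_stat, p_value = stats.ttest_ind(sh_nc, sh_circ)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# More than two groups: one-way ANOVA followed by Tukey's post hoc test.
third_group = np.array([150.0, 162.0, 155.0])  # another hypothetical condition
print(stats.f_oneway(sh_nc, sh_circ, third_group))
values = np.concatenate([sh_nc, sh_circ, third_group])
labels = ["sh-NC"] * 3 + ["sh-circ"] * 3 + ["other"] * 3
print(pairwise_tukeyhsd(values, labels))
```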
RESULTS: Upregulation of hsa_circ_0018189 expression was detected in NSCLC
Firstly, we analyzed the expression of hsa_circ_0018189 in NSCLC samples. Hsa_circ_0018189 was observed to be boosted in NSCLC samples (Figure 1A). Meanwhile, compared with BEAS2B, circCUL2 (hsa_circ_0018189) abundance was boosted in A549 and HCC44 cells (Figure 1B). Subsequently, we found that hsa_circ_0018189 was located on chromosome 10 and was derived from exons 6–12 of CUL2 mRNA (Figure 1C). We then verified the circular structure of hsa_circ_0018189. Hsa_circ_0018189 abundance was not significantly changed, but linear CUL2 was reduced, in A549 and HCC44 cells with RNase R treatment (Figure 1D). Additionally, hsa_circ_0018189 abundance showed no difference after actinomycin D treatment (Figure 1E). Subcellular fractionation with RT‐qPCR analysis showed that hsa_circ_0018189 was mainly distributed in the cytoplasm of NSCLC cells (Figure 1F). In short, hsa_circ_0018189 was significantly upregulated in NSCLC.
Upregulation of hsa_circ_0018189 expression was observed in NSCLC. (A) RT‐qPCR analyzed hsa_circ_0018189 abundance in NSCLC tissues and normal tissues; (B) Hsa_circ_0018189 abundance in BEAS2B, A549, and HCC44 cells; (C) The location and structure of hsa_circ_0018189 were shown; (D) After RNase R digestion, RT‐qPCR was conducted to analyze hsa_circ_0018189 and linear CUL2 expression; (E) After the cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, RT‐qPCR was conducted to detect hsa_circ_0018189 and linear CUL2 expression; (F) The localization of circCUL2 in NSCLC cells was analyzed by subcellular fractionation with RT‐qPCR analysis. *p < .05
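The actinomycin D time course in Figure 1E is, in effect, an RNA stability assay: transcription is blocked and the remaining fraction of each transcript is tracked over time. A simple way to quantify such a course is to fit a log‐linear decay and derive a half‐life, as in the sketch below; the remaining‐fraction values are hypothetical placeholders, not the study's measurements.

```python
# Hypothetical sketch: estimate an RNA half-life from an actinomycin D time
# course by fitting log(remaining fraction) against time. Placeholder values.
import numpy as np

hours = np.array([0.0, 4.0, 8.0, 12.0, 24.0])
# Remaining fraction relative to t = 0 (e.g., from RT-qPCR); a labile linear
# transcript decays, while a stable circRNA would stay near 1.0 throughout.
linear_rna = np.array([1.00, 0.55, 0.30, 0.17, 0.03])

slope, intercept = np.polyfit(hours, np.log(linear_rna), 1)  # log-linear fit
half_life = np.log(2) / -slope
print(f"estimated half-life ≈ {half_life:.1f} h")  # ≈ 4.8 h for these values
```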
Hsa_circ_0018189 knockdown suppressed glutamine metabolism, cell proliferation, migration, and invasion, while enhancing cell apoptosis in NSCLC cells
At first, we verified the action of hsa_circ_0018189 knockdown, and the results suggested that si‐hsa_circ_0018189#1 caused the greatest silencing efficiency of hsa_circ_0018189 in both cell lines (Figure 2A). Hence, we used si‐hsa_circ_0018189#1 for the subsequent experiments. We found that knockdown of hsa_circ_0018189 could inhibit glutamine metabolism (Figure 2B, C, and D). CCK8 assay and EdU assay confirmed that lowered hsa_circ_0018189 expression restrained cell proliferation of A549 and HCC44 cells (Figure 2E, F). In addition, cell apoptosis was increased after hsa_circ_0018189 knockdown (Figure 2G). Transwell assay results proved that si‐hsa_circ_0018189#1 could suppress cell migration and invasion of A549 and HCC44 cells (Figure 2H, I). E‐cadherin (an epithelial marker) and Vimentin (a mesenchymal marker) are commonly used to assess epithelial–mesenchymal transition (EMT). Knockdown of hsa_circ_0018189 promoted the E‐cadherin level and restrained the protein level of Vimentin in A549 and HCC44 cells (Figure 2J). We also constructed the hsa_circ_0018189 overexpression plasmid, as shown in Figure S1. In contrast, hsa_circ_0018189 overexpression elevated glutamine metabolism, facilitated cell proliferation, migration, and invasion, and lowered cell apoptosis in NSCLC cells (Figure S1B‐I). Consistently, hsa_circ_0018189 overexpression elevated Vimentin protein levels and lessened E‐cadherin protein levels (Figure S1J and K). All in all, these data implied that hsa_circ_0018189 promoted the progression of NSCLC.
Silencing of hsa_circ_0018189 could inhibit glutamine metabolism, proliferation, migration, and invasion, while promoting cell apoptosis in NSCLC cells. (A) The transfection efficiency of si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, and si‐hsa_circ_0018189#3 was analyzed by RT‐qPCR in A549 and HCC44 cells; (B‐D) ELISA assay analyzed glutamine metabolism after transfection of si‐NC or si‐hsa_circ_0018189#1; (E) Cell viability in the above cells was analyzed by CCK8 assay; (F) Cell proliferation was analyzed by EdU assay; (G) Cell apoptosis was analyzed by flow cytometry assay; (H and I) Cell migration and invasion were analyzed by Transwell assay; (J) Proteins of E‐cadherin and Vimentin were measured. *p < .05
Hsa_circ_0018189 was identified as a miR‐656‐3p molecular sponge
Starbase and circinteractome analysis found that miR‐656‐3p and miR‐888‐5p were the most promising targets of hsa_circ_0018189 (Figure 3A). Hsa_circ_0018189 silencing resulted in an elevation of the miR‐656‐3p expression level (Figure 3B, C), so miR‐656‐3p was chosen for subsequent research. By using the Starbase database, we predicted the existence of targeted binding sites between hsa_circ_0018189 and miR‐656‐3p (Figure 3D). Luciferase reporter assay and RIP assay demonstrated the interaction between hsa_circ_0018189 and miR‐656‐3p in 293 T cells (Figure 3E, F). MiR‐656‐3p expression was lower in NSCLC tissues (Figure 3G). Pearson correlation analysis showed that hsa_circ_0018189 and miR‐656‐3p were negatively correlated with each other in NSCLC tumors (Figure 3H). MiR‐656‐3p abundance was also lower in A549 and HCC44 cells (Figure 3I). All findings indicated that hsa_circ_0018189 could target miR‐656‐3p.
Hsa_circ_0018189 acted as a miR‐656‐3p sponge. (A) Starbase and circinteractome showed the predicted target miRNAs for hsa_circ_0018189; (B and C) RNA levels of miR‐656‐3p and miR‐888‐5p were determined after transfection of si‐NC or si‐hsa_circ_0018189#1; (D) The binding sites of hsa_circ_0018189 and miR‐656‐3p; (E and F) Luciferase reporter assay and RIP assay were utilized to analyze the interaction of hsa_circ_0018189 and miR‐656‐3p in 293 T cells; (G) RT‐qPCR was used to analyze the miR‐656‐3p level in NSCLC tissues and normal tissues; (H) Pearson correlation coefficient was used to analyze the correlation of miR‐656‐3p and hsa_circ_0018189; (I) RT‐qPCR was used to analyze miR‐656‐3p expression in BEAS2B, A549 and HCC44 cells. *p < .05
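The negative correlation reported in Figure 3H comes from a Pearson correlation across paired per‐tumor measurements. A minimal sketch of that computation is shown below; the expression values are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch of the Pearson correlation between per-tumor expression
# levels of a circRNA and a miRNA. Values are placeholders.
from scipy import stats

circ_levels = [3.1, 2.4, 4.0, 1.8, 2.9, 3.6, 1.5, 2.2]    # relative expression
mirna_levels = [0.4, 0.7, 0.3, 0.9, 0.5, 0.35, 1.1, 0.8]

r, p = stats.pearsonr(circ_levels, mirna_levels)
print(f"r = {r:.2f}, p = {p:.4f}")  # r < 0 indicates a negative correlation
```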
Hsa_circ_0018189 sponged miR‐656‐3p to promote cell glutamine metabolism and malignancy in NSCLC cells
The miR‐656‐3p inhibitor efficiently silenced miR‐656‐3p (Figure 4A). The elevation of miR‐656‐3p mediated by hsa_circ_0018189 knockdown could be reversed by miR‐656‐3p silencing (Figure 4B). Deficiency of miR‐656‐3p could relieve the effect of si‐hsa_circ_0018189#1 on glutamine metabolism (Figure 4C–E). The results of CCK8 assay and EdU assay also showed that miR‐656‐3p silencing recovered the si‐hsa_circ_0018189#1‐caused repression of A549 and HCC44 cell proliferation (Figure 4F, G). Moreover, transfection of the miR‐656‐3p inhibitor attenuated the effects of si‐hsa_circ_0018189#1 on cell apoptosis, migration, and invasion (Figure 4H, I, and J). E‐cadherin and Vimentin protein expression was influenced by si‐hsa_circ_0018189#1, while miR‐656‐3p silencing could abolish this influence (Figure 4K). Taken together, the evidence implied that hsa_circ_0018189 could interact with miR‐656‐3p to reinforce NSCLC cell glutamine metabolism and malignancy.
MiR‐656‐3p could restore the effect of hsa_circ_0018189 knockdown on NSCLC progression. (A) MiR‐656‐3p abundance was detected after transfection of inhibitor NC and miR‐656‐3p inhibitor; (B‐K) Cells were transfected with si‐NC, si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#1 + inhibitor NC, or si‐hsa_circ_0018189#1 + miR‐656‐3p inhibitor. (B) MiR‐656‐3p expression was estimated after transfection; (C‐E) ELISA assay analyzed glutamine metabolism after transfection; (F and G) Cell proliferation was determined; (H) Cell apoptosis was assessed after transfection; (I and J) Cell migrating and invading abilities were analyzed; (K) E‐cadherin and Vimentin protein levels were determined. *p < .05
MiR‐656‐3p directly interacted with xCT
Using TargetScan, we found that the xCT 3'UTR harbored putative target sequences for miR‐656‐3p (Figure 5A). A luciferase reporter experiment in 293 T cells showed that miR‐656‐3p could reduce the luciferase activity of the wild‐type xCT 3'UTR reporter, while no significant difference was observed for the mutant xCT 3'UTR (Figure 5B), confirming the targeting relationship between miR‐656‐3p and xCT. The mRNA and protein levels of xCT were markedly higher in NSCLC tissues than in normal tissues (Figure 5C, D). xCT mRNA expression in NSCLC samples was negatively correlated with miR‐656‐3p (Figure 5E). Besides, xCT protein levels were upregulated in A549 and HCC44 cells (Figure 5F). Additionally, xCT expression was decreased after knockdown of hsa_circ_0018189, while miR‐656‐3p downregulation could restore the effect of si‐hsa_circ_0018189#1 on xCT expression in A549 and HCC44 cells (Figure 5G). In summary, our findings elucidated that miR‐656‐3p could target xCT, and hsa_circ_0018189 could regulate the expression of xCT by acting as a sponge of miR‐656‐3p in NSCLC.
There was a target relationship between miR‐656‐3p and xCT. (A) The binding sites of miR‐656‐3p and xCT; (B) Luciferase reporter assay tested the luciferase activities of xCT 3'UTR wt and xCT 3'UTR mut in 293 T cells after transfection of mimic NC or miR‐656‐3p mimic; (C) RNA levels of xCT in NSCLC samples; (D) Protein levels of xCT in NSCLC samples; (E) The correlation of miR‐656‐3p and xCT mRNA expression levels in NSCLC samples; (F) Protein levels of xCT in BEAS2B, A549, and HCC44 cells; (G) Protein levels of xCT were estimated after transfection of si‐NC, si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#1 + inhibitor NC, or si‐hsa_circ_0018189#1 + miR‐656‐3p inhibitor. *p < .05
MiR‐656‐3p could inhibit cell glutamine metabolism and malignancy by reducing xCT expression in NSCLC
Introduction of xCT upregulated xCT protein levels in A549 and HCC44 cells (Figure 6A). MiR‐656‐3p abundance was increased after transfection of the miR‐656‐3p mimic (Figure 6B). The lowered xCT protein levels caused by miR‐656‐3p overexpression were recovered after xCT introduction (Figure 6C). Besides, xCT upregulation impaired the effect of the miR‐656‐3p mimic on factors associated with glutamine metabolism (Figure 6D, E and F). Beyond that, CCK8 assay and EdU assay analysis showed that introduction of pcDNA‐xCT weakened the effects of miR‐656‐3p on A549 and HCC44 cell proliferation and apoptosis (Figure 6G, H, and I). The lowered migrating and invading capacities driven by miR‐656‐3p were lessened after xCT overexpression (Figure 6J, K). Moreover, miR‐656‐3p‐mediated changes in E‐cadherin and Vimentin protein levels were recovered by co‐transfection of xCT and miR‐656‐3p (Figure 6L). In short, xCT overexpression could impair the regulatory effects of miR‐656‐3p on the processes of NSCLC cells.
Overexpression of xCT could recover miR‐656‐3p‐induced impacts on NSCLC progression. (A) Protein levels of xCT were tested after transfection of pcDNA and pcDNA‐xCT; (B) MiR‐656‐3p abundance was determined after transfection of miR‐656‐3p mimic and mimic NC; (C) Protein levels of xCT were detected after transfection of mimic NC, miR‐656‐3p mimic, miR‐656‐3p mimic+pcDNA, or miR‐656‐3p mimic+pcDNA‐xCT; (D‐F) Glutamine metabolism in the above cells was analyzed; (G and H) Cell proliferation in the above cells was estimated; (I) Cell apoptosis in the above cells was detected; (J and K) Cell migration and invasion in the above cells were determined; (L) E‐cadherin and Vimentin protein levels in the above cells were measured. *p < .05
Hsa_circ_0018189 could regulate tumor growth in mouse models
A549 cells stably expressing sh‐hsa_circ_0018189 were subcutaneously injected into nude mice, and tumor growth was monitored. Hsa_circ_0018189 knockdown reduced tumor volume (Figure 7A) and tumor weight (Figure 7B). RNA levels of hsa_circ_0018189 and xCT were downregulated in xenograft tumor tissues from the sh‐hsa_circ_0018189 group, but miR‐656‐3p expression was increased (Figure 7C). Western blot analysis suggested that xCT protein was decreased in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7D). IHC analysis revealed that ki‐67 and xCT levels were reduced in xenograft tumor tissues with hsa_circ_0018189 silencing (Figure 7E). All data indicated that knockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo.
Knockdown of hsa_circ_0018189 could inhibit the growth of tumors in vivo. (A) The tumor volumes were measured at 1, 2, 3, and 4 weeks; (B) Tumor weights were tested; (C) RNA levels of hsa_circ_0018189, miR‐656‐3p, and xCT in xenograft tumor tissues were analyzed; (D) Protein levels of xCT in xenograft tumor tissues were determined; (E) IHC analysis of ki‐67 and xCT protein levels in xenograft tumor tissues. *p < .05
(A) The tumor volumes were measured at 1, 2, 3, and 4 weeks; (B) Tumor weights were tested; (C) RNA levels of hsa_circ_0018189, miR‐656‐3p, and xCT in xenograft tumor tissues were analyzed; (D) Protein levels of xCT in xenograft tumor tissues were determined; (E) IHC analysis of Ki‐67 and xCT protein levels in xenograft tumor tissues. *p < .05 Upregulation of hsa_circ_0018189 expression was detected in NSCLC: First, we analyzed the expression of hsa_circ_0018189 in NSCLC samples and found it elevated compared with normal tissues (Figure 1A). Likewise, compared with BEAS2B cells, hsa_circ_0018189 (circCUL2) abundance was higher in A549 and HCC44 cells (Figure 1B). Hsa_circ_0018189 is located on chromosome 10 and is derived from exons 6–12 of CUL2 mRNA (Figure 1C). We then verified the circular structure of hsa_circ_0018189: after RNase R treatment, hsa_circ_0018189 abundance was not significantly changed, whereas linear CUL2 was reduced in A549 and HCC44 cells (Figure 1D), and hsa_circ_0018189 abundance showed no difference after actinomycin D treatment (Figure 1E). Subcellular fractionation with RT‐qPCR analysis showed that hsa_circ_0018189 was mainly distributed in the cytoplasm of NSCLC cells (Figure 1F). In short, hsa_circ_0018189 was significantly upregulated in NSCLC. Upregulation of hsa_circ_0018189 expression was detected in NSCLC. (A) RT‐qPCR analyzed hsa_circ_0018189 abundance in NSCLC tissues and normal tissues; (B) Hsa_circ_0018189 abundance in BEAS2B, A549, and HCC44 cells; (C) The location and structure of hsa_circ_0018189 were shown; (D) After RNase R digestion, RT‐qPCR was conducted to analyze hsa_circ_0018189 and linear CUL2 expression; (E) After the cells were treated with actinomycin D for 0, 4, 8, 12, and 24 h, RT‐qPCR was conducted to detect hsa_circ_0018189 and linear CUL2 expression; (F) The localization of hsa_circ_0018189 (circCUL2) in NSCLC cells was analyzed by subcellular fractionation with RT‐qPCR analysis.
*p < .05 Hsa_circ_0018189 knockdown suppressed glutamine metabolism, cell proliferation, migration, and invasion, while enhancing cell apoptosis in NSCLC cells: First, we verified the knockdown efficiency of hsa_circ_0018189; si‐hsa_circ_0018189#1 produced the greatest silencing in both cell lines (Figure 2A) and was therefore used for the subsequent experiments. Knockdown of hsa_circ_0018189 inhibited glutamine metabolism (Figure 2B–D). CCK8 and EdU assays confirmed that lowered hsa_circ_0018189 expression restrained the proliferation of A549 and HCC44 cells (Figure 2E, F), and cell apoptosis was increased after hsa_circ_0018189 knockdown (Figure 2G). Transwell assays showed that si‐hsa_circ_0018189#1 suppressed the migration and invasion of A549 and HCC44 cells (Figure 2H, I). E‐cadherin (an epithelial marker) and Vimentin (a mesenchymal marker) are commonly used to assess epithelial–mesenchymal transition (EMT); knockdown of hsa_circ_0018189 raised the E‐cadherin level and reduced the Vimentin protein level in A549 and HCC44 cells (Figure 2J). We also constructed a hsa_circ_0018189 overexpression plasmid, as shown in Figure S1. Conversely, hsa_circ_0018189 overexpression elevated glutamine metabolism, facilitated cell proliferation, migration, and invasion, and lowered cell apoptosis in NSCLC cells (Figure S1B–I). Consistently, hsa_circ_0018189 overexpression elevated Vimentin protein levels and lessened E‐cadherin protein levels (Figure S1J, K). All in all, these data implied that hsa_circ_0018189 promotes the progression of NSCLC. Silencing of hsa_circ_0018189 could inhibit glutamine metabolism, proliferation, migration, and invasion, while promoting cell apoptosis in NSCLC cells. (A) The transfection efficiencies of si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#2, and si‐hsa_circ_0018189#3 were analyzed by RT‐qPCR in A549 and HCC44 cells; (B–D) ELISA assay analyzed glutamine metabolism after transfection of si‐NC or si‐hsa_circ_0018189#1; (E) Cell viability in the above cells was analyzed by CCK8 assay; (F) Cell proliferation was analyzed by EdU assay; (G) Cell apoptosis was analyzed by flow cytometry assay; (H and I) Cell migration and invasion were analyzed by Transwell assay; (J) Proteins of E‐cadherin and Vimentin were measured. *p < .05 Hsa_circ_0018189 was identified as a miR‐656‐3p molecular sponge: Starbase and circinteractome analyses identified miR‐656‐3p and miR‐888‐5p as the most promising targets of hsa_circ_0018189 (Figure 3A). Hsa_circ_0018189 silencing elevated the miR‐656‐3p expression level (Figure 3B, C), so miR‐656‐3p was chosen for subsequent research. Using the Starbase database, we predicted targeted binding sites between hsa_circ_0018189 and miR‐656‐3p (Figure 3D). Luciferase reporter and RIP assays demonstrated the interaction between hsa_circ_0018189 and miR‐656‐3p in 293T cells (Figure 3E, F). MiR‐656‐3p expression was lower in NSCLC tissues (Figure 3G), and Pearson correlation analysis showed that hsa_circ_0018189 and miR‐656‐3p were negatively correlated in NSCLC tumors (Figure 3H). MiR‐656‐3p abundance was also lower in A549 and HCC44 cells (Figure 3I). All of these findings indicate that hsa_circ_0018189 targets miR‐656‐3p. Hsa_circ_0018189 acted as a miR‐656‐3p sponge.
(A) Starbase and circinteractome showed the predicted target miRNAs for hsa_circ_0018189; (B and C) RNA levels of miR‐656‐3p and miR‐888‐5p were determined after transfection of si‐NC or si‐hsa_circ_0018189#1; (D) The binding sites of hsa_circ_0018189 and miR‐656‐3p; (E and F) Luciferase reporter assay and RIP assay were utilized to analyze the interaction of hsa_circ_0018189 and miR‐656‐3p in 293T cells; (G) RT‐qPCR was used to analyze the miR‐656‐3p level in NSCLC tissues and normal tissues; (H) Pearson correlation coefficient was used to analyze the correlation of miR‐656‐3p and hsa_circ_0018189; (I) RT‐qPCR was used to analyze miR‐656‐3p expression in BEAS2B, A549, and HCC44 cells. *p < .05 Hsa_circ_0018189 sponged miR‐656‐3p to promote cell glutamine metabolism and malignancy in NSCLC cells: The miR‐656‐3p inhibitor produced significant silencing of miR‐656‐3p (Figure 4A), and the elevation of miR‐656‐3p mediated by hsa_circ_0018189 knockdown was reversed by miR‐656‐3p silencing (Figure 4B). Deficiency of miR‐656‐3p relieved the effect of si‐hsa_circ_0018189#1 on glutamine metabolism (Figure 4C–E). CCK8 and EdU assays likewise showed that miR‐656‐3p silencing recovered the si‐hsa_circ_0018189#1‐caused repression of A549 and HCC44 cell proliferation (Figure 4F, G). Moreover, transfection of the miR‐656‐3p inhibitor attenuated the effects of si‐hsa_circ_0018189#1 on cell apoptosis, migration, and invasion (Figure 4H–J). E‐cadherin and Vimentin protein expression was influenced by si‐hsa_circ_0018189#1, and miR‐656‐3p inhibition abolished this influence (Figure 4K). Taken together, the evidence implies that hsa_circ_0018189 interacts with miR‐656‐3p to reinforce NSCLC cell glutamine metabolism and malignancy. MiR‐656‐3p could restore the effect of hsa_circ_0018189 knockdown on NSCLC progression. (A) MiR‐656‐3p abundance was detected after transfection of inhibitor NC or miR‐656‐3p inhibitor; (B–K) Cells were transfected with si‐NC, si‐hsa_circ_0018189#1, si‐hsa_circ_0018189#1 + inhibitor NC, or si‐hsa_circ_0018189#1 + miR‐656‐3p inhibitor. (B) MiR‐656‐3p expression was estimated after transfection; (C–E) ELISA assay analyzed glutamine metabolism after transfection; (F and G) Cell proliferation was determined; (H) Cell apoptosis was assessed after transfection; (I and J) Cell migrating and invading abilities were analyzed; (K) E‐cadherin and Vimentin protein levels were determined. *p < .05
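The negative correlations reported above (Figure 3H and Figure 5E) are Pearson coefficients computed on paired expression values from the same NSCLC samples. The following is a minimal sketch, not the study's code: the sample values are made up and merely stand in for RT-qPCR-derived relative expression per tumor.

# Minimal Pearson correlation on paired per-sample expression values (made-up data).
from scipy.stats import pearsonr

# Hypothetical relative expression (e.g., 2^-ddCt) measured in the same tumors
hsa_circ_0018189 = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 1.5, 2.6]
mir_656_3p = [0.6, 0.3, 0.8, 0.2, 0.4, 0.3, 0.9, 0.5]

r, p = pearsonr(hsa_circ_0018189, mir_656_3p)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # a negative r indicates inverse expression

A negative r with p below the significance threshold is what underlies the statement that hsa_circ_0018189 and miR‐656‐3p are inversely expressed in NSCLC tumors.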
DISCUSSION: NSCLC is one of the cancers with the highest mortality worldwide. 31 Treatment of NSCLC has made great progress with the continuous development of science and technology. 32 However, poor prognosis remains a difficult problem in the treatment of lung cancer today. The functional mechanisms of circRNAs in many cancers have been confirmed. 33 The high level of hsa_circ_0018189 in NSCLC tissues and cells aroused our interest in its detailed functional mechanism in NSCLC. In the present research, we confirmed that hsa_circ_0018189 was highly expressed in NSCLC tissues and cells, which led us to speculate that upregulation of hsa_circ_0018189 might be a key factor in tumorigenesis. Further investigation found that cell proliferation, migration, invasion, and glutamine metabolism were inhibited under hsa_circ_0018189 knockdown, whereas cell apoptosis was significantly induced. In mouse xenograft experiments, hsa_circ_0018189 knockdown lessened NSCLC cell growth in vivo. These results preliminarily confirmed that hsa_circ_0018189 promotes the development of NSCLC. The regulatory role of hsa_circ_0018189 in NSCLC had been reported, but its specific mechanism had not, 16 so we carried out further exploratory experiments. Starbase prediction revealed that miR‐656‐3p might interact with hsa_circ_0018189, and further experiments confirmed that hsa_circ_0018189 functions as a miR‐656‐3p molecular sponge. Low levels of miR‐656‐3p have been reported in various tumors, such as hepatocellular carcinoma, nasopharyngeal cancer, and colorectal cancer. 34 , 35 , 36 A previous study showed that miR‐656‐3p was downregulated in NSCLC samples and cell lines and that miR‐656‐3p overexpression lessened cell malignancy. 37 In this research, we also verified the downregulation of miR‐656‐3p in NSCLC samples and cell lines, and reduced miR‐656‐3p expression was able to restore the effects of si‐hsa_circ_0018189#1 on NSCLC progression. These results confirmed that hsa_circ_0018189 promotes the development of NSCLC cells by targeting miR‐656‐3p. The role of xCT as an intermediate glutamate transporter in NSCLC has also been well documented. 38 In our experiments, xCT abundance was significantly upregulated in NSCLC cells and samples, and miR‐656‐3p targeted xCT. At the same time, we confirmed that hsa_circ_0018189 regulates xCT expression through miR‐656‐3p, and further studies confirmed that miR‐656‐3p inhibits the tumorigenicity of cells by reducing xCT expression. Combining these results, our study verified the targeting relationships among hsa_circ_0018189, miR‐656‐3p, and xCT in NSCLC for the first time. In conclusion, by exploring a new circRNA‐miRNA‐mRNA regulatory mechanism, our study uncovered a new regulatory pathway in the development of NSCLC and provides new ideas for future drug development in NSCLC.
CONCLUSION: In our study, hsa_circ_0018189 regulated the tumorigenicity of NSCLC by serving as a miR‐656‐3p molecular sponge and thereby mediating xCT expression, which might be a potential target for the treatment of NSCLC. FUNDING INFORMATION: None. CONFLICT OF INTEREST: The authors declare that they have no financial conflicts of interest. Supporting information: Figure S1 Hsa_circ_0018189 overexpression increased glutamine metabolism, promoted proliferation, migration, and invasion, and lessened apoptosis in NSCLC cells. A: The overexpression efficiency of the hsa_circ_0018189 overexpression plasmid; B‐D: ELISA assay analyzed the glutamine metabolism of NSCLC cells transfected with vector or hsa_circ_0018189; E‐I: Cell viability, proliferation, apoptosis, migration, and invasion were analyzed; J and K: Proteins of E‐cadherin and Vimentin were measured in the above cells. *p < .05.
Background: Non-small cell lung cancer (NSCLC) is one of the cancers with a high mortality rate. CircRNAs have emerged as important regulatory factors in tumorigenesis in recent years. However, the detailed regulatory mechanism of the circular RNA cullin 2 (circCUL2, hsa_circ_0018189) is still unclear in NSCLC. Methods: RNA levels of hsa_circ_0018189, microRNA (miR)-656-3p, and solute carrier family 7 member 11 (SLC7A11, xCT) were analyzed by real-time quantitative reverse transcription-polymerase chain reaction (RT-qPCR), and protein levels were assessed by Western blot and immunohistochemical assays. Enzyme-linked immunosorbent assay was conducted to detect cell glutamine metabolism. Effects of hsa_circ_0018189 on cell proliferation, apoptosis, migration, and invasion were analyzed by corresponding assays. Luciferase reporter and RNA-immunoprecipitation assays confirmed the target relationship between miR-656-3p and hsa_circ_0018189 or xCT. The in vivo function of hsa_circ_0018189 was verified in xenograft mouse models. Results: Hsa_circ_0018189 abundance was elevated in NSCLC cells and samples. Deficiency of hsa_circ_0018189 lowered NSCLC cell proliferative, migrating, invading, and glutamine metabolism capacities, and hsa_circ_0018189 silencing inhibited the growth of tumors in vivo. Hsa_circ_0018189 could upregulate xCT by sponging miR-656-3p, and miR-656-3p downregulation or xCT overexpression partly overturned hsa_circ_0018189 knockdown- or miR-656-3p mimic-mediated repression of NSCLC cell malignancy. Conclusions: Hsa_circ_0018189 drove NSCLC growth by interacting with miR-656-3p and upregulating xCT.
BACKGROUND: Non‐small cell lung cancer (NSCLC) is a typical type of lung cancer. 1 , 2 NSCLC can be further subdivided into adenocarcinoma, squamous cell carcinoma, and large cell carcinoma, depending on the histological differences among the NSCLC subtypes. 3 , 4 , 5 The main risk factor for NSCLC is smoking. 6 Its poor prognosis and recurrence rate pose great challenges to treatment. 7 , 8 Therefore, research on NSCLC has become a hot topic today. Circular RNAs (circRNAs), a class of single‐stranded RNA molecules with closed-loop structures, have become a focus of RNA and transcriptome research. 9 CircRNAs can be divided into non‐coding circRNAs and coding circRNAs. 10 , 11 Because of their unique structure, they exhibit high stability and resistance to exonucleases. 12 A growing number of studies on circRNAs in NSCLC have demonstrated their role in the occurrence and development of the disease. 13 , 14 , 15 Hsa_circ_0018189 was suggested to exert a significant function in NSCLC, 16 but its specific functional mechanism has not been described. MiRNAs adjust post‐transcriptional gene expression. 17 , 18 They are key actors in a variety of biological processes, and their dysregulation has been linked to many diseases, including cancer and autoimmune diseases. 19 , 20 , 21 Dysregulation of miRNAs is common in NSCLC, and miRNAs have been suggested to participate in regulating the occurrence, progression, and metastasis of NSCLC by regulating target genes. 22 Therefore, a string of studies has explored the functional mechanisms of miRNAs in NSCLC, which may also make miRNAs possible diagnostic and therapeutic tools. 23 , 24 The functional mechanism of miR‐656‐3p in NSCLC is not well understood, which makes it of great interest to us. Solute carrier family 7 member 11 (SLC7A11), also known as xCT, is a cystine/glutamate antiporter that transports cystine into cells while exporting glutamate. 25 , 26 It has been reported that elevated xCT expression and glutamate release are commonly detected in cancer cells. 27 xCT has a vital function in the progression of NSCLC, but its specific regulatory mechanism has rarely been reported. Accordingly, this research aimed to clarify the changes in hsa_circ_0018189, miR‐656‐3p, and xCT in NSCLC and to explore their relationships and functions. These studies will provide theoretical support for the future diagnosis and treatment of NSCLC. CONCLUSION: In our study, hsa_circ_0018189 could regulate the tumorigenicity of NSCLC by serving as a miR‐656‐3p molecular sponge and subsequently mediating xCT expression, which might be a potential target for the treatment of NSCLC.
12,229
281
[ 98, 166, 147, 60, 66, 98, 39, 105, 160, 245, 85, 54, 71, 292, 390, 296, 284, 339, 329, 233, 2 ]
28
[ "hsa_circ_0018189", "mir", "656", "mir 656", "3p", "656 3p", "mir 656 3p", "cells", "xct", "figure" ]
[ "circular rnas", "rnas circrnas class", "circrnas nsclc", "circrnas cancers confirmed", "mechanism circrnas cancers" ]
null
[CONTENT] Hsa_circ_0018189 | miR‐656‐3p | NSCLC | xCT [SUMMARY]
null
[CONTENT] Hsa_circ_0018189 | miR‐656‐3p | NSCLC | xCT [SUMMARY]
[CONTENT] Hsa_circ_0018189 | miR‐656‐3p | NSCLC | xCT [SUMMARY]
[CONTENT] Hsa_circ_0018189 | miR‐656‐3p | NSCLC | xCT [SUMMARY]
[CONTENT] Hsa_circ_0018189 | miR‐656‐3p | NSCLC | xCT [SUMMARY]
[CONTENT] Animals | Humans | Mice | Carcinoma, Non-Small-Cell Lung | Cell Line, Tumor | Cell Proliferation | Gene Expression Regulation, Neoplastic | Glutamine | Lung Neoplasms | MicroRNAs | RNA, Circular | Amino Acid Transport System y+ [SUMMARY]
null
[CONTENT] Animals | Humans | Mice | Carcinoma, Non-Small-Cell Lung | Cell Line, Tumor | Cell Proliferation | Gene Expression Regulation, Neoplastic | Glutamine | Lung Neoplasms | MicroRNAs | RNA, Circular | Amino Acid Transport System y+ [SUMMARY]
[CONTENT] Animals | Humans | Mice | Carcinoma, Non-Small-Cell Lung | Cell Line, Tumor | Cell Proliferation | Gene Expression Regulation, Neoplastic | Glutamine | Lung Neoplasms | MicroRNAs | RNA, Circular | Amino Acid Transport System y+ [SUMMARY]
[CONTENT] Animals | Humans | Mice | Carcinoma, Non-Small-Cell Lung | Cell Line, Tumor | Cell Proliferation | Gene Expression Regulation, Neoplastic | Glutamine | Lung Neoplasms | MicroRNAs | RNA, Circular | Amino Acid Transport System y+ [SUMMARY]
[CONTENT] Animals | Humans | Mice | Carcinoma, Non-Small-Cell Lung | Cell Line, Tumor | Cell Proliferation | Gene Expression Regulation, Neoplastic | Glutamine | Lung Neoplasms | MicroRNAs | RNA, Circular | Amino Acid Transport System y+ [SUMMARY]
[CONTENT] circular rnas | rnas circrnas class | circrnas nsclc | circrnas cancers confirmed | mechanism circrnas cancers [SUMMARY]
null
[CONTENT] circular rnas | rnas circrnas class | circrnas nsclc | circrnas cancers confirmed | mechanism circrnas cancers [SUMMARY]
[CONTENT] circular rnas | rnas circrnas class | circrnas nsclc | circrnas cancers confirmed | mechanism circrnas cancers [SUMMARY]
[CONTENT] circular rnas | rnas circrnas class | circrnas nsclc | circrnas cancers confirmed | mechanism circrnas cancers [SUMMARY]
[CONTENT] circular rnas | rnas circrnas class | circrnas nsclc | circrnas cancers confirmed | mechanism circrnas cancers [SUMMARY]
[CONTENT] hsa_circ_0018189 | mir | 656 | mir 656 | 3p | 656 3p | mir 656 3p | cells | xct | figure [SUMMARY]
null
[CONTENT] hsa_circ_0018189 | mir | 656 | mir 656 | 3p | 656 3p | mir 656 3p | cells | xct | figure [SUMMARY]
[CONTENT] hsa_circ_0018189 | mir | 656 | mir 656 | 3p | 656 3p | mir 656 3p | cells | xct | figure [SUMMARY]
[CONTENT] hsa_circ_0018189 | mir | 656 | mir 656 | 3p | 656 3p | mir 656 3p | cells | xct | figure [SUMMARY]
[CONTENT] hsa_circ_0018189 | mir | 656 | mir 656 | 3p | 656 3p | mir 656 3p | cells | xct | figure [SUMMARY]
[CONTENT] nsclc | circrnas | mirnas | mechanism | cancer | functional | studies | functional mechanism | glutamate | research [SUMMARY]
null
[CONTENT] hsa_circ_0018189 | mir | mir 656 | 656 | 3p | mir 656 3p | 656 3p | figure | xct | si [SUMMARY]
[CONTENT] potential target treatment nsclc | serving mir 656 3p | molecular subsequently mediating | nsclc serving mir 656 | nsclc serving mir | nsclc serving | molecular subsequently | serving | serving mir | serving mir 656 [SUMMARY]
[CONTENT] hsa_circ_0018189 | mir | mir 656 | 656 | mir 656 3p | 656 3p | 3p | xct | cells | nsclc [SUMMARY]
[CONTENT] hsa_circ_0018189 | mir | mir 656 | 656 | mir 656 3p | 656 3p | 3p | xct | cells | nsclc [SUMMARY]
[CONTENT] NSCLC ||| CircRNAs | recent years ||| RNA | 2 | hsa_circ_0018189 | hsa_circ_0018189 | NSCLC [SUMMARY]
null
[CONTENT] NSCLC ||| NSCLC | hsa_circ_0018189 ||| xCT | 3p ||| 3p | xCT | 3p | NSCLC [SUMMARY]
[CONTENT] NSCLC | 3p | xCT [SUMMARY]
[CONTENT] NSCLC ||| CircRNAs | recent years ||| RNA | 2 | hsa_circ_0018189 | hsa_circ_0018189 | NSCLC ||| RNA | hsa_circ_0018189 | Solute | seven | 11 | xCT | RT-qPCR ||| ||| ||| RNA | 3p | hsa_circ_0018189 | xCT ||| hsa_circ_0018189 | xenograft ||| NSCLC ||| NSCLC | hsa_circ_0018189 ||| xCT | 3p ||| 3p | xCT | 3p | NSCLC ||| NSCLC | 3p | xCT [SUMMARY]
[CONTENT] NSCLC ||| CircRNAs | recent years ||| RNA | 2 | hsa_circ_0018189 | hsa_circ_0018189 | NSCLC ||| RNA | hsa_circ_0018189 | Solute | seven | 11 | xCT | RT-qPCR ||| ||| ||| RNA | 3p | hsa_circ_0018189 | xCT ||| hsa_circ_0018189 | xenograft ||| NSCLC ||| NSCLC | hsa_circ_0018189 ||| xCT | 3p ||| 3p | xCT | 3p | NSCLC ||| NSCLC | 3p | xCT [SUMMARY]
[Weight loss and malnutrition risk in geriatric patients].
33954833
Malnutrition is a major challenge in routine clinical practice and is associated with increased mortality.
BACKGROUND
Anonymized data from nursing home residents with at least a 3-day hospital stay were analyzed. The study included a total of 2058 residents from 19 nursing homes. The malnutrition risk was assessed by the combined MUST/PEMU (Malnutrition Universal Screening Tool/Nursing Measurement of Malnutrition and its Causes) screening and malnutrition by ESPEN (European Society for Clinical Nutrition and Metabolism) criteria.
MATERIAL AND METHODS
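The combined screening summarized in the methods above maps onto a small rule set. Below is a minimal illustrative sketch in Python, not the study's actual code: it implements only the two MUST components available in this data set (BMI score and unplanned weight-loss score) and the ESPEN consensus criteria in their BMI/weight-loss form (the fat-free mass index criterion was not assessable); the PEMU result enters as an externally supplied flag, and the choice to count a MUST score of at least 1 as a positive screen is an assumption, as are all function and variable names.

# Illustrative reconstruction of the combined MUST/PEMU and ESPEN logic; not the study's code.

def must_score(bmi: float, weight_loss_pct: float) -> int:
    """Sum of the two MUST components used here: BMI score and unplanned
    weight-loss score over the past 3-6 months (acute-disease score omitted)."""
    if bmi < 18.5:
        bmi_score = 2
    elif bmi <= 20.0:
        bmi_score = 1
    else:
        bmi_score = 0
    if weight_loss_pct > 10.0:
        loss_score = 2
    elif weight_loss_pct >= 5.0:
        loss_score = 1
    else:
        loss_score = 0
    return bmi_score + loss_score

def malnutrition_risk(bmi: float, weight_loss_pct: float, pemu_positive: bool) -> bool:
    """Positive if either screen flags the resident (assumption: MUST >= 1 counts)."""
    return bool(pemu_positive) or must_score(bmi, weight_loss_pct) >= 1

def espen_malnutrition(bmi: float, age_years: float,
                       loss_pct_3mo: float, loss_pct_total: float) -> bool:
    """ESPEN 2015 consensus criteria, BMI/weight-loss arms only (no FFMI data):
    BMI < 18.5, or pronounced weight loss combined with an age-adapted low BMI."""
    if bmi < 18.5:
        return True
    pronounced_loss = loss_pct_total > 10.0 or loss_pct_3mo > 5.0
    low_bmi_for_age = bmi < (20.0 if age_years < 70 else 22.0)
    return pronounced_loss and low_bmi_for_age

For example, a resident aged 84 with a BMI of 19.2 kg/m² and a 6% weight loss over 3 months would screen positive on MUST (score 2) and would also meet the ESPEN weight-loss arm, since 19.2 kg/m² is below the age-adapted threshold of 22 kg/m².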
Of the residents 36.2% (n = 744) had an initial risk of malnutrition and 12.7% (n = 262) were already malnourished. The proportions increased to 48.6% (n = 881) and 14.3% (n = 259), respectively, at discharge. The logistic regression analysis showed a significantly increasing probability of developing a malnutrition risk during the hospital stay with the diagnosis groups diseases of the respiratory system (OR 2.686; CI 95 1.111-4.575) and chondropathy and osteopathy (OR 1.892; CI 95 1.149-3.115), as well as with a higher BMI (OR 1.108; CI 95 1.038-1.181), more positive weight changes in the 6 months before hospital admission (OR 1.055; CI 95 1.017-1.094), and an increasing length of hospital stay (OR 1.048; CI 95 1.029-1.067).
RESULTS
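The odds ratios above come from multivariable logistic regression; the study used SPSS 24. As a purely illustrative re-implementation, the same kind of estimate can be obtained in Python with statsmodels. The data frame and all column names here are hypothetical, and the real models were additionally adjusted for survey period, treating hospital, and intervention/control group.

# Hypothetical sketch of the OR estimation; the data frame and columns are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Logistic model for a 0/1 outcome 'new_malnutrition_risk'; returns
    odds ratios with 95% confidence intervals, as reported in the abstract."""
    predictors = [
        "dx_respiratory",       # admission diagnosis group (0/1)
        "dx_chondro_osteo",     # secondary diagnosis group (0/1)
        "bmi_admission",        # kg/m^2
        "weight_change_6mo",    # % change in the 6 months before admission
        "treatment_days",       # length of stay in days
        "age_years", "female",  # adjustment variables (subset, for brevity)
    ]
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df["new_malnutrition_risk"], X).fit(disp=False)
    ci = fit.conf_int()
    return pd.DataFrame({
        "OR": np.exp(fit.params),
        "CI95_low": np.exp(ci[0]),
        "CI95_high": np.exp(ci[1]),
        "p": fit.pvalues,
    }).drop(index="const")

Since odds ratios are exponentiated coefficients, an OR of 1.048 for treatment days means each additional day multiplies the odds of developing a malnutrition risk by roughly 1.05; this is also why a reported OR must lie inside its own confidence interval (1.108 within 1.038-1.181 for BMI).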
The identification of pre-existing malnutrition and the prevention of a newly developing malnutrition risk represent major challenges in clinical practice. Both are equally necessary.
CONCLUSION
[ "Aged", "Geriatric Assessment", "Humans", "Malnutrition", "Nursing Homes", "Nutrition Assessment", "Nutritional Status", "Weight Loss" ]
8636424
null
null
null
null
null
null
Practical conclusions
A high proportion of patients admitted from a nursing home already have a malnutrition risk or manifest malnutrition at admission. It is essential to recognize patients who are admitted already malnourished, in order to enable targeted nutritional therapy. Malnutrition screening must be repeated during the hospital stay, because a risk, or manifest malnutrition, often only develops during the stay. In addition, nutritional management should be organized so that it generally supports the prevention of malnutrition.
[ "Einführung und Hintergrund", "Methodik und Datenbasis", "Ergebnisse", "Ernährungsstatus und Veränderungen", "Regression", "Diskussion", "" ]
[ "Mangelernährung stellt im klinischen Alltag ein häufig auftretendes Problem dar. So sind nach einer Auswertung von nutritionDay-Daten deutscher Krankenhäuser 37,4 % der eingeschlossenen PatientInnen als manifest mangelernährt einzustufen. Lediglich 11,6 % werden aber tatsächlich auch als mangelernährt erkannt [22]. Vor allem ältere PatientInnen sind überproportional häufig betroffen [1, 4, 12]. Eine Mangelernährung und Verschlechterungen des Ernährungsstatus sind bei Älteren sowohl im Krankenhaus [9, 16] als auch in der Langzeitpflege [8, 21] mit einer erhöhten Mortalität assoziiert. Allerdings werden in diesen Untersuchungen die Veränderungen des Ernährungsstatus und die Entwicklung eines Mangelernährungsrisikos im Verlauf des Klinikaufenthaltes nur sehr selten untersucht, sondern meist zu Beginn der Behandlung festgestellt und anschließend damit zusammenhängende klinische Effekte untersucht. Das Essverhalten und Gründe für eine reduzierte Nahrungsaufnahme im Krankenhaus hingegen wurden durch die jährlich stattfindenden nutritionDays ausführlich analysiert [17]. Aber auch in diesen Erhebungen gibt es keine zwei Messpunkte für etwa das Gewicht, sodass Veränderungen des Ernährungsstatus im Verlauf der Behandlung nicht abbildbar sind.\nIm vom Bundesministerium für Bildung und Forschung (BMBF) geförderten Forschungsprojekt Prävention und Behandlung von Mangelernährung bei geriatrischen Patienten im Krankenhaus wurden in Zusammenarbeit mit 2 Stuttgarter Kliniken Praxiskonzepte entwickelt, um einer Mangelernährung bei geriatrischen PatientInnen vorzubeugen, diese zu erkennen und zu behandeln [23]. Die Konzepte sollten mithilfe von Routinedaten von PflegeheimbewohnerInnen mit mindestens 3‑tägigem Klinikaufenthalt auf ihre Wirksamkeit getestet werden. Hierfür wurde ein Vorher-nachher-Design mit Interventions- und Kontrollgruppe entwickelt [11]. Bei der Analyse der Daten vor und nach der Konzeptentwicklung (noch nicht veröffentlicht) zeigten sich deutliche Unterschiede beim Ernährungsstatus der Kohorten in den beiden Erhebungszeiträumen. Tatsächlich hatten die PatientInnen im zweiten Erhebungszeitraum signifikant weniger Gewicht vor der Einweisung und anschließend in der Klinik verloren. Aus den wenigen bisher veröffentlichten Daten und der im Projekt gemachten Beobachtung ergibt sich eine erhebliche Forschungslücke. Ziel ist es daher, die Ursachen eines in der Klinik erworbenen Mangelernährungsrisikos zu ermitteln.", "Die Stichprobe umfasst Daten von PflegeheimwohnerInnen mit Klinikaufenthalt in den Jahren 2015 und 2016 sowie im Zeitraum November 2018 bis November 2019. Die routinemäßigen Gewichtserhebungen in den Einrichtungen erlauben so die Betrachtung von Gewichtsverläufen vor und im Zusammenhang mit Klinikaufenthalten.\nIn die Auswertung flossen Daten von 19 Pflegeeinrichtungen im Raum Stuttgart ein. Die vorliegenden Daten wurden retrospektiv erhoben und erst nach einer gründlichen Anonymisierung in engem Austausch mit den jeweiligen Datenschutzbeauftragten ausgewertet. Einschlusskriterien sind lediglich ein Alter ≥ 65 Jahre und ein mindestens 3‑tägiger Klinikaufenthalt. Die Routinedaten umfassen Gewichts‑, BMI-Verläufe und Mangelernährungsscreenings 6 Monate vor bis unmittelbar nach dem Klinikaufenthalt, Alter, Geschlecht, Pflegegrad, Stürze ggf. Versterben sowie Dauer der Klinikaufenthalte. 
Den Entlassbriefen wurden Einweisungsdiagnosen sowie Nebendiagnosen entnommen, entsprechend dem ICD-10-Katalog (International Classification of Diseases and Related Health Problems) gruppiert und der Charlson-Komorbiditätsindex in der modifizierten Fassung nach Quan ermittelt [13]. Informationen zu verabreichten Medikamenten, erfolgten Operationen oder intensivmedizinischen Behandlungen liegen nur für eine kleine Gruppe vor und konnten daher nicht berücksichtigt werden.\nGewicht, BMI und Screeningergebnis des Instrumentes Pflegerische Erfassung von Mangelernährung und deren Ursachen in der stationären Langzeit‑/Altenpflege (PEMU) wurden, ausgehend vom Aufnahmedatum, für 6 Monate vorher, 3 Monate vorher, kurz vorher (max. 14 Tage) und unmittelbar nach dem Klinikaufenthalt erfasst. Nach der Entlassung wurden alle BewohnerInnen innerhalb eines Tages in den Einrichtungen gewogen.\nUm BewohnerInnen mit Risiko für eine Mangelernährung zu identifizieren, werden die Kriterien Gewichtsverlust und niedriger BMI des validen Screeninginstrumentes Malnutrition Universal Screening Tool (MUST) mit den Ergebnissen des in den Einrichtungen verwendeten Instrumentes PEMU kombiniert ausgewertet [18, 20, 20]. Dieses Vorgehen wird gewählt, da anhand der Daten das MUST-Kriterium Nahrungskarenz nicht erfasst werden kann, in der PEMU-Einschätzung jedoch enthalten ist. Gleichzeitig wird die PEMU in den Einrichtungen nicht so regelmäßig durchgeführt, dass diese Einschätzung ausreichen würde, um BewohnerInnen mit einem initialen Mangelernährungsrisiko sicher zu erfassen. Ist mindestens eine Einschätzung, per PEMU oder MUST unmittelbar vor bzw. nach dem Klinikaufenthalt positiv, wird dies jeweils als Risiko für eine Mangelernährung eingestuft. Zur Identifikation einer manifesten Mangelernährung werden die Konsenskriterien der European Society for Clinical Nutrition and Metabolism (ESPEN) herangezogen und auf alle Fälle angewendet [5]. Allerdings können nur die enthaltenen Kriterien niedriger BMI bzw. altersadaptierter BMI in Kombination mit einem ausgeprägten Gewichtsverlust berücksichtigt werden. Messungen der Körperzusammensetzung, zur Bestimmung eines niedrigen Fettfreie-Masse-Index (FFMI), als weiteres Kriterium einer manifesten Mangelernährung liegen nicht vor (Zusatzmaterial online: Tab. 1). Hauptaugenmerk zur Beschreibung von Veränderungen des Ernährungsstatus im Zuge des Klinikaufenthaltes liegen darum auf einem erheblichen Gewichtsverlust im Zusammenhang mit dem Klinikaufenthalt (≥ 5 %) sowie einem neu aufgetretenen Mangelernährungsrisiko.Stichprobe n–2058Geschlecht, n (%)Männer763 (37,1)Frauen1295 (62,9)Alter, MW ± SDIn Jahren83,53 ± 7,49Behandlungstage, MW ± SD–10,75 ± 9,58Gewicht vor KH, MW ± SDIn kg66,45 ± 15,15BMI vor KH, MW ± SDIn kg/m224,75 ± 5,06Einweisungsdiagnosen nach ICD-10-Gruppen, n (%)Verletzungen, Vergiftungen und bestimmte andere Folgen äußerer Ursachen294 (14,3)Krankheiten des Atmungssystems265 (12,9)Krankheiten des Verdauungssystems260 (12,6)Symptome und abnorme klinische Befunde205 (10,0)Krankheiten des Kreislaufsystems204 (9,9)Unbekannt245 (11,9)Mortalität im KH, n (%)–245 (11,9)Pflegegrad n (%)011 (0,5)165 (3,2)2392 (19,0)3563 (27,4)4608 (29,6)5418 (20,3)Komorbiditätsindex nach Charlson, MW ± SD0–29 Punkte7,16 ± 2,11Chronische Erkrankungen, n (%)Arterielle Hypertonie1288 (62,6)Demenz879 (42,7)Niereninsuffizienz641 (31,1)Diabetes mellitus630 (30,6)Z. n. 
Apoplex417 (20,3)Ernährungsstatus bei Aufnahme, n (%)Mangelernährungsrisiko (MUST/PEMU)744 (36,2)Manifeste ME (ESPEN)262 (12,7)Ernährungsstatus bei Entlassung, n (%)Mangelernährungsrisiko (MUST/PEMU)a881 (48,6)Manifeste ME (ESPEN)a259 (14,3)Gestellte ME-Diagnosen (Mangelernährung, Kachexie und Sarkopenie)90 (4,4)n = 1813ESPEN European Society for Clinical Nutrition and Metabolism, ICD-10 International Classification of Diseases and Related Health Problems, KH Krankenhaus, ME Mangelernährung, SD Standardabweichung, Z. n. Zustand nachaOhne in der Klinik Verstorbene\nn = 1813\nESPEN European Society for Clinical Nutrition and Metabolism, ICD-10 International Classification of Diseases and Related Health Problems, KH Krankenhaus, ME Mangelernährung, SD Standardabweichung, Z. n. Zustand nach\naOhne in der Klinik Verstorbene\nUrsprünglich beinhaltet der Datensatz 2721 Fälle; nach Bereinigung um Fälle ohne Angaben zu Diagnosen verbleiben 2556. Um den Einfluss von Gewichtschwankungen aufgrund des Ausschwemmens von Ödemen zu minimieren, werden in dieser Analyse PatientInnen mit der Nebendiagnose Herzinsuffizienz oder der Aufnahmediagnose dekompensierte Herzinsuffizienz ausgeschlossen, wodurch sich der Datensatz auf 2058 Fälle reduziert. Die Datenanalyse erfolgt mit SPSS 24®. Zur Identifikation von Risikofaktoren für Gewichtsverlust und Entwicklung eines Mangelernährungsrisikos während des Klinikaufenthaltes wird eine logistische Regressionsanalyse angewendet. Die eingeschlossenen Variablen wurden anhand theoretischer Vorüberlegungen ausgewählt, wie etwa höhere Prävalenzraten bei bestimmten chronischen Erkrankungen. Die Modelle wurden um die Faktoren Erhebungszeitraum, behandelnde Klinik, Interventions- und Kontrollgruppe, Charlson-Komorbiditätsindex, hoher Pflegegrad, Alter sowie Geschlecht adjustiert. Der Einschluss der Variablen in das Modell erfolgte gleichzeitig. Fälle mit fehlenden Daten wurden aus der Regression ausgeschlossen. Ein p < 0,05 wird als signifikant betrachtet.\nFür das gesamte Forschungsprojekt liegt ein positives Ethikvotum des Ethikkomitees der Deutschen Gesellschaft für Pflegewissenschaften in Witten vor (Antrag-Nr. 17-005).", "In die Auswertung fließen insgesamt 2058 Fälle ein. Die Stichprobe (Tab. 1) ist entsprechend den Auswahlkriterien betagt (83,5 Jahre; SD ± 7,5) und mit 62,9 % (n = 1295) eher weiblich. Mit 49,9 % (n = 1016) weist fast die Hälfte einen Pflegegrad von 4 oder 5 auf; der Charlson-Komorbiditätsindex liegt bei 7,2 (SD ± 2,1).\nErnährungsstatus und Veränderungen Ein Großteil der PatientInnen verliert bereits vor Klinikeinweisung an Gewicht. Im Zeitraum 6 Monate vor dem Klinikaufenthalt beträgt dieser Verlust im Mittel −1,8 % (SD ± 7,7; n = 1166), 3 Monate vor der Aufnahme −1,3 % (SD ± 5,9; 1443) des Körpergewichtes. In den 3 Monaten vor Klinikeinweisung haben 20,1 % (n = 288) mindestens 5 % ihres Körpergewichtes verloren, 13,2 % (n = 154) mindestens 10 % in den vorhergehenden 6 Monaten. Dementsprechend weisen 36,2 % (n = 744) bei der Aufnahme ein Mangelernährungsrisiko und 12,7 % (n = 262) eine manifeste Mangelernährung auf. Im Zusammenhang mit Klinikaufenthalten verlieren die PatientInnen im Mittel 1,7 kg (SD ± 4,1 kg) oder 2,3 % (SD ± 6,) ihres Körpergewichtes, 21,9 % (n = 396) verlieren ≥ 5 % ihres Körpergewichtes. Der Anteil an PatientInnen mit Mangelernährungsrisiko steigt nach dem Klinikaufenthalt auf 48,6 % (n = 881), der der manifesten Mangelernährung auf 14,3 % (n = 259).\nEin Großteil der PatientInnen verliert bereits vor Klinikeinweisung an Gewicht. 
"The logistic regression models for weight loss ≥ 5% (chi-square(27) = 97.775, p < 0.001, n = 859) and for a new malnutrition risk (chi-square(28) = 136.318, p < 0.001, n = 1012) each show highly significant risk factors with strong effects (online supplementary material: Tables 2 and 3). The risk of weight loss ≥ 5% was significantly increased by admission diagnoses in the ICD-10 groups endocrine, nutritional and metabolic diseases (OR 2.768; p = 0.048), diseases of the nervous system (OR 2.934; p = 0.030), diseases of the respiratory system (OR 2.274; p = 0.0161), and diseases of the circulatory system (OR 2.159; p = 0.033), by the secondary diagnosis dementia (OR 1.490; p = 0.026), by more positive weight trajectories in the 3 months before admission (OR 1.063; p = 0.010), and by an increasing number of treatment days (OR 1.063; p < 0.001). The probability of developing a malnutrition risk during the hospital stay rises significantly with admission diagnoses in the group diseases of the respiratory system (OR 2.156; p = 0.033), the secondary diagnoses osteopathies and chondropathies (OR 1.892; p = 0.012), a higher BMI (OR 1.108; p = 0.002), more positive weight changes in the 6 months before hospital admission (OR 1.055; p = 0.004), and an increasing number of treatment days (OR 1.048; p < 0.001). (A brief code sketch of the outcome definitions underlying these models follows the section texts below.)",
"Both the literature and the analyses carried out so far in the project clearly show that malnutrition is a frequent problem in the clinical setting, remains undetected owing to inadequate or absent screening, and then has a negative effect on outcome [7, 8, 15, 22]. This underlines the importance of detecting malnutrition early and, where possible, counteracting it in advance. The prevention of malnutrition in the various care settings is taken up as a central goal by the nursing expert standard 'Nutritional management to promote oral nutrition in nursing care' [2, 6].
How well this ultimately succeeds, and to what extent unintended weight loss or deterioration of nutritional status in nursing home residents is part of a natural aging process, often remains unclear. What is clear, however, is that a considerable proportion of nursing home residents are admitted to hospital already at risk of malnutrition. At the same time, studies show that individualized nutritional therapy can substantially improve outcome [19]. In Sanson et al., for example, 37.2% of internal medicine patients had a high malnutrition risk [15]. If a nutritional-medicine intervention then takes place, malnutrition can also be coded accordingly [3]. This can only succeed, however, if the patients who are already malnourished or at risk on admission are actually recognized. It cannot be assumed that this currently happens as a rule, since malnutrition diagnoses hardly appear in the diagnosis lists (4.4%; n = 90). As explained at the outset, the data set analyzed here is composed of two surveys. For the smaller cohort of the second survey (n = 604), information on operations, medications at discharge, intensive care stays, and the treating specialist departments could also be recorded, which unfortunately was not the case for the first survey phase. These variables, however, revealed no significant changes in risk, although this would be conceivable in a larger cohort. Not being able to adequately assess these potential risk factors is a major limitation of this data analysis. Furthermore, no distinction or separate calculations were made between the intervention and control groups, since the influence of the concept development is negligible here; in the period after the introduction of the concept, only 59 cases in the group studied can be assigned to the intervention wards. The regression analysis was nevertheless adjusted accordingly.
In the overall group examined here, the specific admission diagnoses also did not change the risk, presumably because some admission diagnoses are relatively rare, e.g., stroke at 3% (n = 62), chronic wounds at 2% (n = 41), myocardial infarction at 0.7% (n = 15), and pulmonary embolism at 0.7% (n = 14). Only the grouping by ICD-10 main groups points in part to increased risks. Interestingly, the effect of most chronic diseases, or of the overall morbidity index, on the risk of weight loss or of developing a malnutrition risk is rather small. This is surprising at first but can be explained by the generally similar level of morbidity in the cohort: 69.6% (n = 1433) have a Charlson comorbidity index between 5 and 8 points.
How weight changes during the hospital stay has been examined relatively rarely. Grass et al. recorded various parameters 10 days before and 30 days after a surgical procedure; 46% lost more than 5% of their body weight, with a low baseline BMI, low upper arm circumference, upper gastrointestinal procedures, female sex, and elevated CRP identified as significant risk factors [10]. The increased risk found here for diseases of the respiratory system likewise suggests substantial effects of acute inflammatory processes.
In the study by Alvarez et al.,
patients over 70 years of age experienced a significant weight loss of −1.7 kg; the proportion at risk of malnutrition rose slightly from 54.6% to 57.5%, while the proportion with manifest malnutrition fell from 25.5% to 21.3%. However, no risk factors were determined [1]. In another study, by contrast, a smaller cohort showed only minor mean weight changes but a considerable loss of fat-free mass, and thus presumably above all of muscle mass; the phase angle, as a sign of poor nutrient supply, also deteriorated during the hospital stay. That deterioration of nutritional status was significantly associated with higher age and pre-existing malnutrition at baseline [14], which was not confirmed in the cohort analyzed here.
The present study shows a comparatively pronounced weight loss in connection with the hospital stays. It must be borne in mind that the cohort consists exclusively of residents of nursing facilities, so the results cannot be generalized to geriatric or older patients in general. Since nursing home residents presumably have comparatively greater morbidity and need for care, a greater vulnerability to malnutrition and unintended weight loss would be plausible; comparable surveys with such a cohort, however, have not yet been conducted. The association of a longer hospital stay with a rising risk of weight loss or of developing a malnutrition risk emerges clearly. What remains unclear here is cause and effect: does the longer hospital stay lead to increasing weight loss, does the weight loss lead to a longer hospital stay, or is a mixture of both effects most likely? In contrast to the hypothesis suggested at the outset, a comparatively better initial nutritional status did not protect against weight loss during inpatient treatment in this cohort; a better baseline status tended rather to lead to a weight loss ≥ 5%. One possible explanation is that initial malnutrition is associated with more frequent death, as an earlier analysis of the T0 data set clearly showed [8]. Since in the present analysis weight changes in hospital were calculated from the weight difference before/after the hospital stay, all patients who died in hospital are not captured here, and this circumstance may have contributed to this surprising effect. Other seemingly plausible risk factors such as age, sex, morbidity, and a pronounced need for care likewise play no significant role here. These surprising findings should be re-examined, if need be, with a more targeted study design and larger groups. A possible explanation for the barely detectable influence of morbidity or need for care would be, first, that those affected are often admitted with an already reduced nutritional status, which then perhaps cannot deteriorate much further; second, these patients may be comparatively well looked after, since nursing staff under strained staffing conditions may tend to concentrate primarily on the patients most in need of help.
These hypotheses, too, should be pursued in further studies. The greatest difficulties or limitations of this work stem from the analysis of routine data, which show gaps that are in part inexplicable. For example, an above-average number of discharge letters are missing for the second survey period, with the result that admission diagnoses, among other things, are partly unavailable. It is also problematic that a weight before the hospital stay is not always available, since most hospital admissions are unplanned and the last measured weight may then be too far in the past. At the same time, some residents had moved into long-term care only in the months before, so that in these cases weight trajectories before hospital admission could not be examined. These, however, are needed to assess malnutrition risk and manifest malnutrition, since the PEMU instrument used in long-term care is applied relatively rarely. As a result, the true prevalence of malnutrition, or of a malnutrition risk, in this cohort may be even underestimated.", "\n\n" ]
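The outcome definitions used in the results above (relative weight change over fixed look-back windows, weight loss ≥ 5% in connection with the stay, and a newly developed malnutrition risk) can be stated precisely in a few lines. The sketch below is illustrative only; the field names are invented, and the screening flags are assumed to come from the combined MUST/PEMU screen sketched earlier in this record.

# Illustrative outcome definitions; field names are invented.
from dataclasses import dataclass

@dataclass
class StayRecord:
    weight_6mo_before: float   # kg, about 6 months before admission
    weight_admission: float    # kg, at most 14 days before admission
    weight_discharge: float    # kg, weighed within one day after return
    risk_at_admission: bool    # combined MUST/PEMU screen before the stay
    risk_at_discharge: bool    # combined MUST/PEMU screen after the stay

def relative_change_pct(before: float, after: float) -> float:
    """Relative weight change in percent; negative values mean loss."""
    return (after - before) / before * 100.0

def lost_5pct_during_stay(r: StayRecord) -> bool:
    """Outcome 1: weight loss of at least 5% in connection with the stay."""
    return relative_change_pct(r.weight_admission, r.weight_discharge) <= -5.0

def new_malnutrition_risk(r: StayRecord) -> bool:
    """Outcome 2: hospital-acquired risk, i.e. negative screen on admission but positive after."""
    return (not r.risk_at_admission) and r.risk_at_discharge

For example, a resident weighed at 70.0 kg shortly before admission and 66.0 kg after discharge has a relative change of about −5.7% and would count toward the 21.9% of patients with a loss ≥ 5%.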
[ null, null, null, null, null, null, null ]
[ "Einführung und Hintergrund", "Methodik und Datenbasis", "Ergebnisse", "Ernährungsstatus und Veränderungen", "Regression", "Diskussion", "Fazit für die Praxis", "Supplementary Information", "" ]
[ "Mangelernährung stellt im klinischen Alltag ein häufig auftretendes Problem dar. So sind nach einer Auswertung von nutritionDay-Daten deutscher Krankenhäuser 37,4 % der eingeschlossenen PatientInnen als manifest mangelernährt einzustufen. Lediglich 11,6 % werden aber tatsächlich auch als mangelernährt erkannt [22]. Vor allem ältere PatientInnen sind überproportional häufig betroffen [1, 4, 12]. Eine Mangelernährung und Verschlechterungen des Ernährungsstatus sind bei Älteren sowohl im Krankenhaus [9, 16] als auch in der Langzeitpflege [8, 21] mit einer erhöhten Mortalität assoziiert. Allerdings werden in diesen Untersuchungen die Veränderungen des Ernährungsstatus und die Entwicklung eines Mangelernährungsrisikos im Verlauf des Klinikaufenthaltes nur sehr selten untersucht, sondern meist zu Beginn der Behandlung festgestellt und anschließend damit zusammenhängende klinische Effekte untersucht. Das Essverhalten und Gründe für eine reduzierte Nahrungsaufnahme im Krankenhaus hingegen wurden durch die jährlich stattfindenden nutritionDays ausführlich analysiert [17]. Aber auch in diesen Erhebungen gibt es keine zwei Messpunkte für etwa das Gewicht, sodass Veränderungen des Ernährungsstatus im Verlauf der Behandlung nicht abbildbar sind.\nIm vom Bundesministerium für Bildung und Forschung (BMBF) geförderten Forschungsprojekt Prävention und Behandlung von Mangelernährung bei geriatrischen Patienten im Krankenhaus wurden in Zusammenarbeit mit 2 Stuttgarter Kliniken Praxiskonzepte entwickelt, um einer Mangelernährung bei geriatrischen PatientInnen vorzubeugen, diese zu erkennen und zu behandeln [23]. Die Konzepte sollten mithilfe von Routinedaten von PflegeheimbewohnerInnen mit mindestens 3‑tägigem Klinikaufenthalt auf ihre Wirksamkeit getestet werden. Hierfür wurde ein Vorher-nachher-Design mit Interventions- und Kontrollgruppe entwickelt [11]. Bei der Analyse der Daten vor und nach der Konzeptentwicklung (noch nicht veröffentlicht) zeigten sich deutliche Unterschiede beim Ernährungsstatus der Kohorten in den beiden Erhebungszeiträumen. Tatsächlich hatten die PatientInnen im zweiten Erhebungszeitraum signifikant weniger Gewicht vor der Einweisung und anschließend in der Klinik verloren. Aus den wenigen bisher veröffentlichten Daten und der im Projekt gemachten Beobachtung ergibt sich eine erhebliche Forschungslücke. Ziel ist es daher, die Ursachen eines in der Klinik erworbenen Mangelernährungsrisikos zu ermitteln.", "Die Stichprobe umfasst Daten von PflegeheimwohnerInnen mit Klinikaufenthalt in den Jahren 2015 und 2016 sowie im Zeitraum November 2018 bis November 2019. Die routinemäßigen Gewichtserhebungen in den Einrichtungen erlauben so die Betrachtung von Gewichtsverläufen vor und im Zusammenhang mit Klinikaufenthalten.\nIn die Auswertung flossen Daten von 19 Pflegeeinrichtungen im Raum Stuttgart ein. Die vorliegenden Daten wurden retrospektiv erhoben und erst nach einer gründlichen Anonymisierung in engem Austausch mit den jeweiligen Datenschutzbeauftragten ausgewertet. Einschlusskriterien sind lediglich ein Alter ≥ 65 Jahre und ein mindestens 3‑tägiger Klinikaufenthalt. Die Routinedaten umfassen Gewichts‑, BMI-Verläufe und Mangelernährungsscreenings 6 Monate vor bis unmittelbar nach dem Klinikaufenthalt, Alter, Geschlecht, Pflegegrad, Stürze ggf. Versterben sowie Dauer der Klinikaufenthalte. 
Admission and secondary diagnoses were taken from the discharge letters, grouped according to the ICD-10 catalogue (International Classification of Diseases and Related Health Problems), and the Charlson comorbidity index was calculated in the modified version according to Quan [13]. Information on administered medication, operations performed, or intensive care treatment was available only for a small group and could therefore not be taken into account.
Weight, BMI, and the screening result of the instrument Nursing assessment of malnutrition and its causes in inpatient long-term/geriatric care (PEMU) were recorded, relative to the admission date, for 6 months before, 3 months before, shortly before (max. 14 days), and immediately after the hospital stay. After discharge, all residents were weighed in the facilities within one day.
To identify residents at risk of malnutrition, the criteria weight loss and low BMI from the validated screening instrument Malnutrition Universal Screening Tool (MUST) are evaluated in combination with the results of the PEMU instrument used in the facilities [18, 20]. This approach is chosen because the MUST criterion of food abstinence cannot be captured from the data but is included in the PEMU assessment. At the same time, the PEMU is not performed regularly enough in the facilities for that assessment alone to reliably identify residents with an initial malnutrition risk. If at least one assessment, by PEMU or MUST, immediately before or after the hospital stay is positive, this is classified as a malnutrition risk. To identify manifest malnutrition, the consensus criteria of the European Society for Clinical Nutrition and Metabolism (ESPEN) are applied to all cases [5]; however, only the included criteria of low BMI or age-adapted BMI in combination with pronounced weight loss can be taken into account. Measurements of body composition for determining a low fat-free mass index (FFMI), as a further criterion of manifest malnutrition, are not available (online supplement: Table 1). The main focus for describing changes in nutritional status in the course of the hospital stay is therefore on substantial weight loss in connection with the hospital stay (≥ 5%) and on a newly developed malnutrition risk.
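The combined classification rule just described can be summarized in a few lines of code. The sketch below is a simplified reading under stated assumptions: the MUST component is reduced to the two criteria actually available here (weight loss and low BMI, using the standard published MUST cut-offs), the PEMU result is treated as an externally supplied boolean, and the ESPEN check implements only the BMI/age-adapted-BMI-plus-weight-loss pathway, since FFMI measurements were unavailable. All function and parameter names are hypothetical.

```python
def must_positive(bmi, pct_weight_loss_3to6m):
    """MUST restricted to the two criteria available in this dataset.

    Standard MUST scoring (assumed): BMI < 18.5 -> 2 points, 18.5-20 -> 1;
    unplanned weight loss 5-10% -> 1, > 10% -> 2. A score >= 1 counts as
    a positive screening in this simplified reading.
    """
    score = 0
    if bmi < 18.5:
        score += 2
    elif bmi < 20.0:
        score += 1
    if pct_weight_loss_3to6m > 10.0:
        score += 2
    elif pct_weight_loss_3to6m >= 5.0:
        score += 1
    return score >= 1

def malnutrition_risk(bmi, pct_weight_loss_3to6m, pemu_positive):
    # Risk if at least one assessment (PEMU or MUST) is positive.
    return pemu_positive or must_positive(bmi, pct_weight_loss_3to6m)

def manifest_malnutrition_espen(bmi, age, pct_loss_3m, pct_loss_any):
    """ESPEN 2015 consensus, BMI pathway only (FFMI pathway omitted).

    Assumed cut-offs: BMI < 18.5 alone, or pronounced weight loss
    (> 10% over an indefinite period, or > 5% over 3 months) combined
    with age-adapted low BMI (< 20 under 70 years, < 22 at >= 70 years).
    """
    if bmi < 18.5:
        return True
    pronounced_loss = pct_loss_any > 10.0 or pct_loss_3m > 5.0
    low_bmi = bmi < (22.0 if age >= 70 else 20.0)
    return pronounced_loss and low_bmi
```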
Table 1 (sample characteristics):
- Sample, n: 2058
- Sex, n (%): men 763 (37.1); women 1295 (62.9)
- Age in years, mean ± SD: 83.53 ± 7.49
- Treatment days, mean ± SD: 10.75 ± 9.58
- Weight before hospital in kg, mean ± SD: 66.45 ± 15.15
- BMI before hospital in kg/m², mean ± SD: 24.75 ± 5.06
- Admission diagnoses by ICD-10 group, n (%): injuries, poisoning, and certain other consequences of external causes 294 (14.3); diseases of the respiratory system 265 (12.9); diseases of the digestive system 260 (12.6); symptoms and abnormal clinical findings 205 (10.0); diseases of the circulatory system 204 (9.9); unknown 245 (11.9)
- In-hospital mortality, n (%): 245 (11.9)
- Care level, n (%): 0: 11 (0.5); 1: 65 (3.2); 2: 392 (19.0); 3: 563 (27.4); 4: 608 (29.6); 5: 418 (20.3)
- Charlson comorbidity index (0-29 points), mean ± SD: 7.16 ± 2.11
- Chronic diseases, n (%): arterial hypertension 1288 (62.6); dementia 879 (42.7); renal insufficiency 641 (31.1); diabetes mellitus 630 (30.6); previous stroke 417 (20.3)
- Nutritional status at admission, n (%): malnutrition risk (MUST/PEMU) 744 (36.2); manifest malnutrition (ESPEN) 262 (12.7)
- Nutritional status at discharge, n (%), excluding patients who died in hospital (n = 1813): malnutrition risk (MUST/PEMU) 881 (48.6); manifest malnutrition (ESPEN) 259 (14.3); coded malnutrition diagnoses (malnutrition, cachexia, and sarcopenia) 90 (4.4)
ESPEN European Society for Clinical Nutrition and Metabolism, ICD-10 International Classification of Diseases and Related Health Problems, SD standard deviation.
The dataset originally contained 2721 cases; after removing cases without diagnosis information, 2556 remained. To minimize the influence of weight fluctuations caused by the flushing out of edema, patients with the secondary diagnosis heart failure or the admission diagnosis decompensated heart failure are excluded from this analysis, reducing the dataset to 2058 cases. Data analysis is performed with SPSS 24®. Logistic regression analysis is used to identify risk factors for weight loss and for developing a malnutrition risk during the hospital stay. The included variables were selected on the basis of theoretical considerations, such as higher prevalence rates in certain chronic diseases. The models were adjusted for survey period, treating hospital, intervention versus control group, Charlson comorbidity index, high care level, age, and sex. The variables were entered into the model simultaneously. Cases with missing data were excluded from the regression. A p < 0.05 is considered significant.
A positive ethics vote from the ethics committee of the German Society of Nursing Science in Witten is available for the entire research project (application no. 17-005).
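The regressions were run in SPSS 24; purely as an illustration, the sketch below shows how the described setup (simultaneous entry of all predictors, adjustment covariates, listwise exclusion of missing data, odds ratios as exponentiated coefficients) could be reproduced in Python with statsmodels. The file name and all variable names are invented for this sketch and do not come from the project.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical case-level table; column names are illustrative only.
df = pd.read_csv("cases.csv")

model_vars = ["weight_loss_ge5", "icd_respiratory", "dementia",
              "weight_change_3m_pre", "treatment_days", "survey_period",
              "hospital", "intervention_group", "charlson",
              "high_care_level", "age", "sex"]

# Listwise deletion mirrors "cases with missing data were excluded".
data = df[model_vars].dropna()

# All predictors are entered simultaneously (single-step entry).
formula = ("weight_loss_ge5 ~ icd_respiratory + dementia"
           " + weight_change_3m_pre + treatment_days + survey_period"
           " + C(hospital) + intervention_group + charlson"
           " + high_care_level + age + C(sex)")
fit = smf.logit(formula, data=data).fit()

odds_ratios = np.exp(fit.params)  # exponentiated coefficients = odds ratios
print(odds_ratios)
```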
Results: In total, 2058 cases are included in the analysis. In line with the selection criteria, the sample (Table 1) is old (83.5 years; SD ± 7.5) and predominantly female at 62.9% (n = 1295). At 49.9% (n = 1016), almost half have a care level of 4 or 5; the Charlson comorbidity index is 7.2 (SD ± 2.1).
Nutritional status and changes: A large proportion of patients lose weight even before hospital admission. In the 6 months before the hospital stay, this loss averages −1.8% (SD ± 7.7; n = 1166) of body weight, and in the 3 months before admission −1.3% (SD ± 5.9; n = 1443). In the 3 months before admission, 20.1% (n = 288) lost at least 5% of their body weight, and 13.2% (n = 154) lost at least 10% in the preceding 6 months. Accordingly, 36.2% (n = 744) present a malnutrition risk at admission and 12.7% (n = 262) manifest malnutrition. In connection with hospital stays, patients lose on average 1.7 kg (SD ± 4.1 kg) or 2.3% (SD ± 6.) of their body weight, and 21.9% (n = 396) lose ≥ 5% of their body weight. After the hospital stay, the proportion of patients with a malnutrition risk rises to 48.6% (n = 881), and the proportion with manifest malnutrition to 14.3% (n = 259).
Regression: The logistic regression models for weight loss ≥ 5% (chi-square(27) = 97.775, p < 0.001, n = 859) and for a new malnutrition risk (chi-square(28) = 136.318, p < 0.001, n = 1012) each show highly significant risk factors with strong effects (online supplement: Tables 2 and 3). The risk of a weight loss ≥ 5% is significantly increased by admission diagnoses in the ICD-10 groups endocrine, nutritional, and metabolic diseases (OR 2.768; p = 0.048), diseases of the nervous system (OR 2.934; p = 0.030), diseases of the respiratory system (OR 2.274; p = 0.0161), and diseases of the circulatory system (OR 2.159; p = 0.033), by the secondary diagnosis dementia (OR 1.490; p = 0.026), by more positive weight trajectories in the 3 months before admission (OR 1.063; p = 0.010), and by a rising number of treatment days (OR 1.063; p < 0.001). The probability of developing a malnutrition risk during the hospital stay rises significantly with admission diagnoses in the group diseases of the respiratory system (OR 2.156; p = 0.033), with the secondary diagnoses osteopathies and chondropathies (OR 1.892; p = 0.012), with a higher BMI (OR 1.108; p = 0.002), with more positive weight changes in the 6 months before hospital (OR 1.055; p = 0.004), and with a rising number of treatment days (OR 1.048; p < 0.001).
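Because logistic regression is multiplicative on the odds scale, a per-day odds ratio such as OR 1.048 compounds over longer stays. The short calculation below makes this concrete; the 30% baseline probability is an arbitrary assumption chosen only for illustration.

```python
import math

or_per_day = 1.048        # reported OR per additional treatment day
extra_days = 10
or_total = or_per_day ** extra_days  # ~1.60: odds multiplier for 10 extra days

# Illustrative only: assume a 30% baseline probability of developing
# a malnutrition risk, then convert probability -> odds -> probability.
p0 = 0.30
odds0 = p0 / (1 - p0)
odds1 = odds0 * or_total
p1 = odds1 / (1 + odds1)
print(f"odds multiplier: {or_total:.2f}, risk: {p0:.0%} -> {p1:.0%}")  # 30% -> ~41%
```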
Discussion: Both the literature and the analyses conducted so far in the project clearly show that malnutrition is a frequent problem in the clinical setting, remains undetected because of inadequate or absent screening, and then has a negative effect on outcome [7, 8, 15, 22]. This underlines the importance of recognizing malnutrition early and, where possible, counteracting it in advance. The prevention of malnutrition across the various care settings is taken up as a central goal by the nursing expert standard Nutrition management for promoting oral nutrition in nursing care [2, 6].
How well this ultimately succeeds, and to what extent unintended weight loss or deterioration of nutritional status in nursing home residents is part of a natural aging process, often remains unclear. What is clear, however, is that a considerable proportion of nursing home residents are already admitted to hospital with a malnutrition risk. At the same time, studies show that individualized nutritional therapy can substantially improve outcome [19]. In Sanson et al., for example, 37.2% of internal medicine patients had a high malnutrition risk [15]. If a nutritional-medicine intervention then takes place, malnutrition can also be coded accordingly [3]. This can only succeed, however, if patients who are already malnourished or at risk at admission are actually recognized, and it cannot be assumed that this currently happens routinely, since malnutrition diagnoses hardly appear in the diagnosis lists (4.4%; n = 90). As explained at the outset, the dataset analyzed here is composed of two surveys. For the smaller cohort of the second survey (n = 604), information on operations, medication at discharge, intensive care stays, and the treating departments could also be recorded, which was unfortunately not possible in the first survey phase. These variables, however, revealed no significant changes in risk; in a larger cohort this would be conceivable. Not being able to assess these potential risk factors adequately is a major limitation of this data analysis. In addition, no distinction or separate calculations were made between the intervention and control groups, since the influence of the concept development is negligible here: in the period after the concepts were introduced, only 59 cases in the study group can be assigned to the intervention wards. The regression analysis was nevertheless adjusted accordingly.
In the overall group examined here, the specific admission diagnoses also do not change the risk, presumably because some admission diagnoses are relatively rare, e.g., stroke at 3% (n = 62), chronic wounds at 2% (n = 41), myocardial infarction at 0.7% (n = 15), and pulmonary embolism at 0.7% (n = 14). Only the grouping by ICD-10 main groups points in part to increased risks. Interestingly, the effect of most chronic diseases, and of the overall morbidity index, on the risk of weight loss or of developing a malnutrition risk is rather small. This is surprising at first but can be explained by the generally similar level of morbidity in the cohort: 69.6% (n = 1433) have a Charlson comorbidity index between 5 and 8 points.
How weight changes during the hospital stay is examined relatively rarely. Grass et al. recorded various parameters 10 days before and 30 days after a surgical procedure; 46% lost more than 5% of their body weight, with a low baseline BMI, a low upper arm circumference, procedures in the upper gastrointestinal tract, female sex, and an elevated CRP identified as significant risk factors [10]. The increased risk found here with diseases of the respiratory system likewise suggests substantial effects of acute inflammatory processes.
Conclusions for practice:
- A high proportion of patients admitted from a nursing home already present a malnutrition risk or manifest malnutrition on admission.
- It is essential to recognize patients who are admitted already malnourished, in order to enable targeted nutritional therapy.
- Malnutrition screenings need to be repeated over the course of the stay, because a risk or a manifest malnutrition often only develops during the stay. Nutrition management should furthermore be organized so that the prevention of malnutrition is supported in general.
[ "Routinedaten", "Prävention", "Krankenhaus", "Mangelernährungsscreening", "Morbidität", "Prevention", "Routine data", "Hospital", "Malnutrition screening", "Morbidity" ]
Background: Malnutrition is a major challenge in routine clinical practice and is associated with increased mortality. Methods: Anonymized data from nursing home residents with at least a 3-day hospital stay were analyzed. The study included a total of 2058 residents from 19 nursing homes. Malnutrition risk was assessed by the combined MUST/PEMU (Malnutrition Universal Screening Tool/Nursing Measurement of Malnutrition and its Causes) screening, and malnutrition by ESPEN (European Society for Clinical Nutrition and Metabolism) criteria. Results: Of the residents, 36.2% (n = 744) had an initial risk of malnutrition and 12.7% (n = 262) were already malnourished. The proportions increased to 48.6% (n = 881) and 14.3% (n = 259) at discharge, respectively. The logistic regression analysis showed a significantly increasing probability of developing a malnutrition risk during the hospital stay with the diagnosis groups diseases of the respiratory system (OR 2.686; 95% CI 1.111-4.575) and chondropathies and osteopathies (OR 1.892; 95% CI 1.149-3.115), a higher BMI (OR 1.108; 95% CI 1.038-1.181), more positive weight changes in the 6 months before hospital admission (OR 1.055; 95% CI 1.017-1.094), and an increasing length of hospital stay (OR 1.048; 95% CI 1.029-1.067). Conclusions: The identification of initial malnutrition and the prevention of developing a malnutrition risk represent major challenges in clinical practice. Both are equally necessary.
[ "der", "und", "die", "mit", "des", "den", "mangelernährung", "im", "zu", "ein" ]
[ "diagnosen mangelernährung", "auswertung von nutritionday", "stattfindenden nutritiondays", "patientinnen mit mangelernährungsrisiko", "mangelernährung und verschlechterungen" ]
[ "Aged", "Geriatric Assessment", "Humans", "Malnutrition", "Nursing Homes", "Nutrition Assessment", "Nutritional Status", "Weight Loss" ]
Clinicopathological characteristics and prognosis of 232 patients with poorly differentiated gastric neuroendocrine neoplasms.
34135560
Poorly differentiated gastric neuroendocrine neoplasms (PDGNENs) include gastric neuroendocrine carcinoma (NEC) and mixed adenoneuroendocrine carcinoma, which are highly malignant and rare tumors, and their incidence has increased over the past few decades. However, the clinicopathological features and outcomes of patients with PDGNENs have not been completely elucidated.
BACKGROUND
The data from seven centers in China from March 2007 to November 2019 were analyzed retrospectively.
METHODS
Among the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 ± 9.11 years. One hundred and thirteen (49.34%) of 229 patients had stage III disease and 86 (37.55%) had stage IV disease. Three (1.58%) of 190 patients had no clinical symptoms, while 187 (98.42%) presented with clinical symptoms. The tumors were mainly solitary (89.17%) and located in the upper third of the stomach (cardia and fundus of stomach: 115/215, 53.49%). Most lesions were ulcers (157/232, 67.67%), with an average diameter of 4.66 ± 2.77 cm. In terms of tumor invasion, the majority of tumors invaded the serosa (116/198, 58.58%). The median survival time of the 232 patients was 13.50 mo (interquartile range 7-31 mo), and the overall 1-year, 3-year, and 5-year survival rates were 49%, 19%, and 5%, respectively. According to univariate analysis, tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were prognostic factors for patients with PDGNENs. Multivariate analysis showed that tumor number, tumor diameter, AJCC stage, and distant metastasis status were independent prognostic factors for patients with PDGNENs.
RESULTS
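For reference, the survival rates and hazard ratios quoted above come from the two standard estimators named in the methods (the study itself ran them in SPSS); in textbook notation, with d_i deaths among the n_i patients still at risk at event time t_i:

```latex
% Kaplan-Meier product-limit estimator (gives the 1-, 3- and 5-year rates):
\hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)

% Cox proportional hazards model; the reported HRs are exponentiated
% coefficients with Wald 95% confidence intervals:
h(t \mid x) = h_0(t) \, e^{\beta^{\top} x}, \qquad
\mathrm{HR} = e^{\hat{\beta}}, \qquad
\mathrm{CI}_{95\%} = e^{\hat{\beta} \pm 1.96 \, \mathrm{SE}(\hat{\beta})}
```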
The overall prognosis of patients with PDGNENs is poor. The outcomes of patients with a tumor diameter > 5 cm, multiple tumors, and stage IV tumors are worse than those of other patients.
CONCLUSION
[ "Aged", "China", "Female", "Humans", "Male", "Middle Aged", "Neoplasm Staging", "Neuroendocrine Tumors", "Prognosis", "Retrospective Studies", "Stomach Neoplasms" ]
8173377
INTRODUCTION
Gastric neuroendocrine neoplasms (G-NENs) are a group of heterogeneous and rare malignant tumors originating from peptidergic neurons and neuroendocrine cells. Neuroendocrine neoplasms can occur throughout the body, such as in the gastrointestinal tract, pancreas, liver and gallbladder, thymus, and lung, but the gastrointestinal tract is the most commonly affected site[1]. According to the Surveillance, Epidemiology, and End Results report, the incidence of G-NENs has increased 15-fold in recent decades and reached 4.85/1000000 in 2014, mainly due to the wide application of gastroscopy and the improvement of pathological diagnosis techniques[2,3]. The incidence of poorly differentiated gastric neuroendocrine neoplasms (PDGNENs) is approximately 1/1000000 people, accounting for 16.4% of G-NENs, while the incidence of other tumors in the stomach shows a decreasing trend compared with G-NENs[4]. PDGNENs are divided into functional and nonfunctional tumors according to whether the tumor secretes active hormones and causes characteristic clinical manifestations, and the most common clinical tumors are nonfunctional tumors. The clinical symptoms of nonfunctional PDGNENs lack specificity, and early diagnosis is difficult. Generally, the clinical symptoms are caused by the tumor size or metastasis, mainly including abdominal pain and abdominal distension. Few functional PDGNENs secrete bioactive amines that cause carcinoid syndrome, including skin flushing, diarrhea, wheezing, etc. PDGNENs have a worse prognosis than neuroendocrine tumors (NETs), and tumor diameter, stage, location, and treatment are significantly correlated with the prognosis[4]. The 5-year survival rate of patients with well-differentiated G-NENs presenting with distant metastases may reach 35%, while the 5-year survival rate of patients with PDGNENs accompanied by distant metastasis is only 4%[5]. Relatively few large-cohort studies have assessed PDGNENs and limited data is available on their clinicopathological features and outcomes. Therefore, we collected the data of 232 patients with PDGNENs from multiple centers in China and analyzed the clinicopathological characteristics and prognostic factors of these patients, aiming to provide a reference for clinical work on PDGNENs.
MATERIALS AND METHODS
Patient selection From March 2007 to November 2019, a total of 232 patients with PDGNENs from seven centers in China were enrolled (China-Japan Friendship Hospital, n = 71; Sun Yat-Sen University Cancer Center, n = 54; The Fourth Affiliated Hospital of Hebei Medical University, n = 49; The First Affiliated Hospital Sun Yat-Sen University, n = 39; Fudan University Shanghai Cancer Center, n = 10; The Fifth Medical Center of the PLA General Hospital, n = 8; and Yunnan Tumor Hospital, n = 1). The inclusion criteria were as follows: (1) All patients were confirmed to have neuroendocrine carcinoma (NEC) or mixed adenoneuroendocrine carcinoma (MANEC) based on pathology; (2) The clinical data of the patients were relatively complete; (3) The patients underwent regular follow-up; and (4) The patients had no other tumors. This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Clinical Research Ethics Committee of China-Japan Friendship Hospital (No. 2019-24-K18-1). Tumor grade and stage The pathological grading standard used in this study adopted the 2019 World Health Organization classification and grading criteria for gastrointestinal pancreatic neuroendocrine tumors[6] and updated the pathological classification and grading of gastroenteropancreatic neuroendocrine tumors. Well-differentiated gastric neuroendocrine tumors were classified into three types. NEC was no longer graded and was divided into only two subtypes: Large cell NEC (LCNEC) and small cell NEC (SCNEC). In addition, MANEC has also been replaced by mixed neuroendocrine and nonneuroendocrine tumors (MiNENs), which contain a wider range of contents. Most of the NEN components in MiNEN are NEC and may also be NETs; in addition to adenocarcinoma, other components, such as squamous cell carcinoma, may also occur in non-NEN components, but each component must account for more than 30% and be classified when reporting[7]. The 8th edition gastric cancer tumor-node-metastasis staging system was used for staging[8]. Follow-up The patients were followed regularly by an outpatient review, inpatient medical record review, and telephone interview. The starting point was the time when the patient’s histopathology yielded a diagnosis of PDGNENs. The end point of the follow-up was the time of death. Statistical methods Measurement data are presented as the mean ± SD, and the follow-up time is reported as the median (interquartile range). Count data are presented as numbers of cases (percentages). The Kaplan-Meier method was used for the survival analysis, and comparisons were performed using the log-rank test. Multivariable survival analyses were also performed to exclude dependent variables using Cox proportional hazards regression models. When the two-tailed P value was less than 0.05, the difference was considered statistically significant. Data were analyzed using SPSS 25.0 statistical analysis software (IBM, Chicago, IL, United States).
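The statistical-methods paragraph above is a standard survival-analysis pipeline: Kaplan-Meier estimation, log-rank group comparison, then a multivariable Cox model. A minimal sketch of the same steps, assuming the Python lifelines library and a tiny made-up table; the study used SPSS 25.0, and the column names months, death, multiple, and diam_gt5 are illustrative only:

```python
# Hedged sketch of the described pipeline with the `lifelines` library.
# The toy DataFrame stands in for per-patient follow-up records.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":   [3, 8, 13, 14, 20, 31, 40, 55],  # follow-up time (mo)
    "death":    [1, 1, 1, 0, 1, 1, 0, 1],        # 1 = death observed
    "multiple": [0, 1, 0, 0, 1, 0, 1, 0],        # multiple vs solitary tumor
    "diam_gt5": [1, 0, 1, 0, 1, 0, 0, 1],        # tumor diameter > 5 cm
})

# Kaplan-Meier curve and median overall survival.
kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["death"])
print("median OS (mo):", kmf.median_survival_time_)

# Log-rank test between two groups (here: diameter > 5 cm vs <= 5 cm).
g = df["diam_gt5"] == 1
lr = logrank_test(df.loc[g, "months"], df.loc[~g, "months"],
                  event_observed_A=df.loc[g, "death"],
                  event_observed_B=df.loc[~g, "death"])
print("log-rank p:", lr.p_value)

# Multivariable Cox model; the exp(coef) column gives hazard ratios with
# 95% CIs. A small penalizer keeps the fit stable on this tiny sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```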
null
null
CONCLUSION
In summary, the majority of patients with PDGNENs had already developed lymph node or distant metastasis at the time of diagnosis, and the prognosis was poor, with a 5-year survival rate of 5% and a median survival time of 13.50 mo. Electronic gastroscopy and pathological diagnostic technology have been widely popularized. When people experience the aforementioned symptoms, gastroscopy should be performed in a timely manner. If a lesion is detected, pathology should be performed to determine a clear diagnosis. Clinicians should pay more attention to patients with lesions greater than 5 cm because their prognosis is the worst, with a 5-year survival rate of 5%. Patients are recommended to undergo AC after surgery. In addition, tumor number, tumor diameter, AJCC stage, and distant metastasis are independent factors affecting the prognosis of patients with PDGNENs.
[ "INTRODUCTION", "Patient selection", "Tumor grade and stage", "Follow-up", "Statistical methods", "RESULTS", "Clinicopathological characteristics", "Treatment", "Follow-up and outcomes", "DISCUSSION", "CONCLUSION" ]
[ "Gastric neuroendocrine neoplasms (G-NENs) are a group of heterogeneous and rare malignant tumors originating from peptidergic neurons and neuroendocrine cells. Neuroendocrine neoplasms can occur throughout the body, such as in the gastrointestinal tract, pancreas, liver and gallbladder, thymus, and lung, but the gastrointestinal tract is the most commonly affected site[1]. According to the Surveillance, Epidemiology, and End Results report, the incidence of G-NENs has increased 15-fold in recent decades and reached 4.85/1000000 in 2014, mainly due to the wide application of gastroscopy and the improvement of pathological diagnosis techniques[2,3]. The incidence of poorly differentiated gastric neuroendocrine neoplasms (PDGNENs) is approximately 1/1000000 people, accounting for 16.4% of G-NENs, while the incidence of other tumors in the stomach shows a decreasing trend compared with G-NENs[4].\nPDGNENs are divided into functional and nonfunctional tumors according to whether the tumor secretes active hormones and causes characteristic clinical manifestations, and the most common clinical tumors are nonfunctional tumors. The clinical symptoms of nonfunctional PDGNENs lack specificity, and early diagnosis is difficult. Generally, the clinical symptoms are caused by the tumor size or metastasis, mainly including abdominal pain and abdominal distension. Few functional PDGNENs secrete bioactive amines that cause carcinoid syndrome, including skin flushing, diarrhea, wheezing, etc. PDGNENs have a worse prognosis than neuroendocrine tumors (NETs), and tumor diameter, stage, location, and treatment are significantly correlated with the prognosis[4]. The 5-year survival rate of patients with well-differentiated G-NENs presenting with distant metastases may reach 35%, while the 5-year survival rate of patients with PDGNENs accompanied by distant metastasis is only 4%[5].\nRelatively few large-cohort studies have assessed PDGNENs and limited data is available on their clinicopathological features and outcomes. Therefore, we collected the data of 232 patients with PDGNENs from multiple centers in China and analyzed the clinicopathological characteristics and prognostic factors of these patients, aiming to provide a reference for clinical work on PDGNENs.", "From March 2007 to November 2019, a total of 232 patients with PDGNENs from seven centers in China were enrolled (China-Japan Friendship Hospital, n = 71; Sun Yat-Sen University Cancer Center, n = 54; The Fourth Affiliated Hospital of Hebei Medical University, n = 49; The First Affiliated Hospital Sun Yat-Sen University, n = 39; Fudan University Shanghai Cancer Center, n = 10; The Fifth Medical Center of the PLA General Hospital, n = 8; and Yunnan Tumor Hospital, n = 1). The inclusion criteria were as follows: (1) All patients were confirmed to have neuroendocrine carcinoma (NEC) or mixed adenoneuroendocrine carcinoma (MANEC) based on pathology; (2) The clinical data of the patients were relatively complete; (3) The patients underwent regular follow-up; and (4) The patients had no other tumors. This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Clinical Research Ethics Committee of China-Japan Friendship Hospital (No. 
2019-24-K18-1).", "The pathological grading standard used in this study adopted the 2019 World Health Organization classification and grading criteria for gastrointestinal pancreatic neuroendocrine tumors[6] and updated the pathological classification and grading of gastroenteropancreatic neuroendocrine tumors. Well-differentiated gastric neuroendocrine tumors were classified into three types. NEC was no longer graded and was divided into only two subtypes: Large cell NEC (LCNEC) and small cell NEC (SCNEC). In addition, MANEC has also been replaced by mixed neuroendocrine and nonneuroendocrine tumors (MiNENs), which contain a wider range of contents. Most of the NEN components in MiNEN are NEC and may also be NETs; in addition to adenocarcinoma, other components, such as squamous cell carcinoma, may also occur in non-NEN components, but each component must account for more than 30% and be classified when reporting[7]. The 8th edition gastric cancer tumor-node-metastasis staging system was used for staging[8].", "The patients were followed regularly by an outpatient review, inpatient medical record review, and telephone interview. The starting point was the time when the patient’s histopathology yielded a diagnosis of PDGNENs. The end point of the follow-up was the time of death.", "Measurement data are presented as the mean ± SD, and the follow-up time is reported as the median (interquartile range). Count data are presented as numbers of cases (percentages). The Kaplan-Meier method was used for the survival analysis, and comparisons were performed using the log-rank test. Multivariable survival analyses were also performed to exclude dependent variables using Cox proportional hazards regression models. When the two-tailed P value was less than 0.05, the difference was considered statistically significant. Data were analyzed using SPSS 25.0 statistical analysis software (IBM, Chicago, IL, United States).", "Clinicopathological characteristics Among the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 years (range: 30-85 years), and the elderly accounted for 65.5% of the patients (Table 1). The average diameter of the tumor was 4.66 ± 2.77 cm, and the lesions mainly invaded the serosa (116/198, 58.58%). The lymph nodes were positive in 175 (79.78%) of 225 patients, and 86 (37.39%) of 230 patients exhibited distant metastases. In addition, 113 (49.34%) patients presented with a stage II disease and 86 (37.55%) presented with stage IV disease. In terms of endoscopic performance, most of the tumors were mainly located in the cardia (65/215, 30.23%). The majority of tumors were solitary (215/219, 89.17%). In addition, the lesions were mainly ulcers (157/232, 67.67%). Typical gastroscopic findings are shown in Figure 1.\n\nEndoscopic detection of poorly differentiated gastric neuroendocrine neoplasms. A: Circumferential raised lesions on the cardia with uneven surfaces; B: Irregular bumps on the side of the minor curvature of the cardia, accompanied by erosions, ulcers, and unclear boundaries that bled easily when contacted; C: A raised ulcer with a diameter of 4 cm was observed in the small curvature of the antrum; D: A deep ulcer with a diameter of approximately 0.5 cm on the posterior wall of the gastric fundus was observed, and the base was not clear.\nClinicopathological characteristics of 232 patients with gastric neuroendocrine carcinoma and univariate analysis\nNR: Not reported. \nM1: Mucosa; S: Submucosa. 
\nMP2: Muscularis propria.\nC + F: Cardia + Fundus; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; AJCC: American Joint Committee on Cancer.\nThe pathological classification of 232 patients was G3. Of these patients, 41 had LCNEC (17.67%) (Figure 2), 26 had SCNEC (12.21%) (Figure 3), and 38 had MANEC (16.38%) (Figure 4); the subtype was not reported for 127 (54.74%) patients. The average Ki-67 index was 65.34% (Table 1).\n\nMorphology and immunohistochemical staining of large cell neuroendocrine carcinoma in the cardia. A: Morphology (hematoxylin and eosin staining, × 400); B: Chromogranin A-positive staining [immunohistochemical (IHC), × 200]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).\n\nMorphology and immunohistochemical staining of gastric small cell neuroendocrine carcinoma. A: Morphology (hematoxylin and eosin staining, × 400); B: CD56-positive staining [immunohistochemical (IHC), × 400]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).\n\nMorphology and immunohistochemical staining of mixed adenoneuroendocrine carcinoma. A and B: Morphology of gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 400); C: Gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 200). Alcian blue/periodic acid–Schiff (AB-PAS) staining (× 200): The left side of the picture shows adenocarcinoma (AB-PAS positive); D: Chromogranin A-positive staining (× 200). The right side of the picture shows small cell neuroendocrine carcinoma.\nThe clinical symptoms of 190 patients were recorded. One hundred and eighty-seven (98.42%) patients experienced symptoms, while three (1.58%) patients had no clinical symptoms. The main symptoms were abdominal pain (105/190, 55.26%), abdominal distension (67/190, 35.26%), weight loss (41/190, 21.58%), poor appetite (38/190, 20.00%), and gastrointestinal bleeding (31/190, 16.32%) (Figure 5).\n\nDistribution of nonspecific clinical symptoms in 190 patients with poorly differentiated gastric neuroendocrine neoplasms.\n\nAmong the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 years (range: 30-85 years), and the elderly accounted for 65.5% of the patients (Table 1). The average diameter of the tumor was 4.66 ± 2.77 cm, and the lesions mainly invaded the serosa (116/198, 58.58%). The lymph nodes were positive in 175 (79.78%) of 225 patients, and 86 (37.39%) of 230 patients exhibited distant metastases. In addition, 113 (49.34%) patients presented with a stage II disease and 86 (37.55%) presented with stage IV disease. In terms of endoscopic performance, most of the tumors were mainly located in the cardia (65/215, 30.23%). The majority of tumors were solitary (215/219, 89.17%). In addition, the lesions were mainly ulcers (157/232, 67.67%). Typical gastroscopic findings are shown in Figure 1.\n\nEndoscopic detection of poorly differentiated gastric neuroendocrine neoplasms. 
A: Circumferential raised lesions on the cardia with uneven surfaces; B: Irregular bumps on the side of the minor curvature of the cardia, accompanied by erosions, ulcers, and unclear boundaries that bled easily when contacted; C: A raised ulcer with a diameter of 4 cm was observed in the small curvature of the antrum; D: A deep ulcer with a diameter of approximately 0.5 cm on the posterior wall of the gastric fundus was observed, and the base was not clear.\nClinicopathological characteristics of 232 patients with gastric neuroendocrine carcinoma and univariate analysis\nNR: Not reported. \nM1: Mucosa; S: Submucosa. \nMP2: Muscularis propria.\nC + F: Cardia + Fundus; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; AJCC: American Joint Committee on Cancer.\nThe pathological classification of 232 patients was G3. Of these patients, 41 had LCNEC (17.67%) (Figure 2), 26 had SCNEC (12.21%) (Figure 3), and 38 had MANEC (16.38%) (Figure 4); the subtype was not reported for 127 (54.74%) patients. The average Ki-67 index was 65.34% (Table 1).\n\nMorphology and immunohistochemical staining of large cell neuroendocrine carcinoma in the cardia. A: Morphology (hematoxylin and eosin staining, × 400); B: Chromogranin A-positive staining [immunohistochemical (IHC), × 200]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).\n\nMorphology and immunohistochemical staining of gastric small cell neuroendocrine carcinoma. A: Morphology (hematoxylin and eosin staining, × 400); B: CD56-positive staining [immunohistochemical (IHC), × 400]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).\n\nMorphology and immunohistochemical staining of mixed adenoneuroendocrine carcinoma. A and B: Morphology of gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 400); C: Gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 200). Alcian blue/periodic acid–Schiff (AB-PAS) staining (× 200): The left side of the picture shows adenocarcinoma (AB-PAS positive); D: Chromogranin A-positive staining (× 200). The right side of the picture shows small cell neuroendocrine carcinoma.\nThe clinical symptoms of 190 patients were recorded. One hundred and eighty-seven (98.42%) patients experienced symptoms, while three (1.58%) patients had no clinical symptoms. The main symptoms were abdominal pain (105/190, 55.26%), abdominal distension (67/190, 35.26%), weight loss (41/190, 21.58%), poor appetite (38/190, 20.00%), and gastrointestinal bleeding (31/190, 16.32%) (Figure 5).\n\nDistribution of nonspecific clinical symptoms in 190 patients with poorly differentiated gastric neuroendocrine neoplasms.\n\nTreatment Among the 232 patients with PDGNENs, 86 (37.07%) were treated by surgery, 40 (17.24%) were treated with chemotherapy, 92 (38.79%) were treated by surgery plus chemotherapy, and 14 (6.03%) were treated with other treatments (somatostatin analogs, targeted therapy, immunotherapy, traditional Chinese medicine treatment, etc.). One hundred and forty-three patients had no distant metastasis or resectable tumors (tumor stage I: 5 cases; stage II: 25 cases; and stage III: 113 cases), of whom 75 were treated by surgery, 6 were treated by chemotherapy, 55 were treated by surgery combined with chemotherapy, and 7 were treated with other treatments. 
This study analyzed patients without distant metastases and found that the median survival time of the surgery alone group was 18 mo, while the median survival time of the chemotherapy alone group was 11 mo and that of the surgery combined with chemotherapy group was 23 mo (P < 0.001) (Figure 6).\n\nKaplan-Meier survival analysis (P < 0.05). A: Tumor number (P = 0.01); B: Tumor diameter (P = 0.01); C: Invasion (P = 0.02); D: American Joint Committee on Cancer stage (P < 0.001); E: Distant metastasis (P < 0.001); F: Treatment for patients without distant metastases (P < 0.001). AJCC: American Joint Committee on Cancer; OS: Overall survival.\nAmong the 232 patients with PDGNENs, 86 (37.07%) were treated by surgery, 40 (17.24%) were treated with chemotherapy, 92 (38.79%) were treated by surgery plus chemotherapy, and 14 (6.03%) were treated with other treatments (somatostatin analogs, targeted therapy, immunotherapy, traditional Chinese medicine treatment, etc.). One hundred and forty-three patients had no distant metastasis or resectable tumors (tumor stage I: 5 cases; stage II: 25 cases; and stage III: 113 cases), of whom 75 were treated by surgery, 6 were treated by chemotherapy, 55 were treated by surgery combined with chemotherapy, and 7 were treated with other treatments. This study analyzed patients without distant metastases and found that the median survival time of the surgery alone group was 18 mo, while the median survival time of the chemotherapy alone group was 11 mo and that of the surgery combined with chemotherapy group was 23 mo (P < 0.001) (Figure 6).\n\nKaplan-Meier survival analysis (P < 0.05). A: Tumor number (P = 0.01); B: Tumor diameter (P = 0.01); C: Invasion (P = 0.02); D: American Joint Committee on Cancer stage (P < 0.001); E: Distant metastasis (P < 0.001); F: Treatment for patients without distant metastases (P < 0.001). AJCC: American Joint Committee on Cancer; OS: Overall survival.\nFollow-up and outcomes With a median follow-up time of 13.50 mo (range, 7-31 mo), the overall 1-year, 3-year, and 5-year survival rates were 47%, 15%, and 5%, respectively. The median survival time was 14 mo. The univariate analysis showed that tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were correlated with the prognosis (log-rank test P < 0.05; Table 1), while age, sex, lymph node metastasis, pathological type, Ki-67 index, and tumor site were not related to the prognosis (log-rank test P > 0.05; Table 1 and Figure 6). In the multivariate analysis, the tumor number {multiple vs solitary, hazard ratio (HR) [95% confidence interval (CI)]: 3.89 (1.66-9.11), P < 0.001}, tumor diameter [5 cm vs 0-5 cm, HR (95%CI): 1.56 (1.01-2.41), P = 0.04], tumor stage [IV vs I, HR (95%CI): 5.98 (1.78-20.60), P < 0.001; III vs I, HR (95%CI): 3.582 (1.07-11.88), P = 0.03], and distant metastasis status [yes vs no, HR (95%CI): 2.16 (1.41-3.31), P < 0.001] were independent risk factors affecting the prognosis (Table 2). The overall 1-year, 3-year, and 5-year survival rates of the 232 patients with PDGNENs are shown in Table 3.\nMultivariate Cox regression analysis of poorly differentiated gastric neuroendocrine neoplasms\nAJCC: American Joint Committee on Cancer.\nSurvival analysis of related factors of 232 patients with poorly-differentiated gastric neoplasms\nS + C: Surgery plus Chemotherapy; C + F: Cardia + Fundus. \nM1: Mucosa; S: Submucosa. \nMP2: Muscularis propria. 
\nNEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; NR: Not reported; AJCC: American Joint Committee on Cancer.\nWith a median follow-up time of 13.50 mo (range, 7-31 mo), the overall 1-year, 3-year, and 5-year survival rates were 47%, 15%, and 5%, respectively. The median survival time was 14 mo. The univariate analysis showed that tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were correlated with the prognosis (log-rank test P < 0.05; Table 1), while age, sex, lymph node metastasis, pathological type, Ki-67 index, and tumor site were not related to the prognosis (log-rank test P > 0.05; Table 1 and Figure 6). In the multivariate analysis, the tumor number {multiple vs solitary, hazard ratio (HR) [95% confidence interval (CI)]: 3.89 (1.66-9.11), P < 0.001}, tumor diameter [5 cm vs 0-5 cm, HR (95%CI): 1.56 (1.01-2.41), P = 0.04], tumor stage [IV vs I, HR (95%CI): 5.98 (1.78-20.60), P < 0.001; III vs I, HR (95%CI): 3.582 (1.07-11.88), P = 0.03], and distant metastasis status [yes vs no, HR (95%CI): 2.16 (1.41-3.31), P < 0.001] were independent risk factors affecting the prognosis (Table 2). The overall 1-year, 3-year, and 5-year survival rates of the 232 patients with PDGNENs are shown in Table 3.\nMultivariate Cox regression analysis of poorly differentiated gastric neuroendocrine neoplasms\nAJCC: American Joint Committee on Cancer.\nSurvival analysis of related factors of 232 patients with poorly-differentiated gastric neoplasms\nS + C: Surgery plus Chemotherapy; C + F: Cardia + Fundus. \nM1: Mucosa; S: Submucosa. \nMP2: Muscularis propria. \nNEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; NR: Not reported; AJCC: American Joint Committee on Cancer.", "Among the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 years (range: 30-85 years), and the elderly accounted for 65.5% of the patients (Table 1). The average diameter of the tumor was 4.66 ± 2.77 cm, and the lesions mainly invaded the serosa (116/198, 58.58%). The lymph nodes were positive in 175 (79.78%) of 225 patients, and 86 (37.39%) of 230 patients exhibited distant metastases. In addition, 113 (49.34%) patients presented with a stage II disease and 86 (37.55%) presented with stage IV disease. In terms of endoscopic performance, most of the tumors were mainly located in the cardia (65/215, 30.23%). The majority of tumors were solitary (215/219, 89.17%). In addition, the lesions were mainly ulcers (157/232, 67.67%). Typical gastroscopic findings are shown in Figure 1.\n\nEndoscopic detection of poorly differentiated gastric neuroendocrine neoplasms. A: Circumferential raised lesions on the cardia with uneven surfaces; B: Irregular bumps on the side of the minor curvature of the cardia, accompanied by erosions, ulcers, and unclear boundaries that bled easily when contacted; C: A raised ulcer with a diameter of 4 cm was observed in the small curvature of the antrum; D: A deep ulcer with a diameter of approximately 0.5 cm on the posterior wall of the gastric fundus was observed, and the base was not clear.\nClinicopathological characteristics of 232 patients with gastric neuroendocrine carcinoma and univariate analysis\nNR: Not reported. \nM1: Mucosa; S: Submucosa. 
\nMP2: Muscularis propria.\nC + F: Cardia + Fundus; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; AJCC: American Joint Committee on Cancer.\nThe pathological classification of 232 patients was G3. Of these patients, 41 had LCNEC (17.67%) (Figure 2), 26 had SCNEC (12.21%) (Figure 3), and 38 had MANEC (16.38%) (Figure 4); the subtype was not reported for 127 (54.74%) patients. The average Ki-67 index was 65.34% (Table 1).\n\nMorphology and immunohistochemical staining of large cell neuroendocrine carcinoma in the cardia. A: Morphology (hematoxylin and eosin staining, × 400); B: Chromogranin A-positive staining [immunohistochemical (IHC), × 200]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).\n\nMorphology and immunohistochemical staining of gastric small cell neuroendocrine carcinoma. A: Morphology (hematoxylin and eosin staining, × 400); B: CD56-positive staining [immunohistochemical (IHC), × 400]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).\n\nMorphology and immunohistochemical staining of mixed adenoneuroendocrine carcinoma. A and B: Morphology of gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 400); C: Gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 200). Alcian blue/periodic acid–Schiff (AB-PAS) staining (× 200): The left side of the picture shows adenocarcinoma (AB-PAS positive); D: Chromogranin A-positive staining (× 200). The right side of the picture shows small cell neuroendocrine carcinoma.\nThe clinical symptoms of 190 patients were recorded. One hundred and eighty-seven (98.42%) patients experienced symptoms, while three (1.58%) patients had no clinical symptoms. The main symptoms were abdominal pain (105/190, 55.26%), abdominal distension (67/190, 35.26%), weight loss (41/190, 21.58%), poor appetite (38/190, 20.00%), and gastrointestinal bleeding (31/190, 16.32%) (Figure 5).\n\nDistribution of nonspecific clinical symptoms in 190 patients with poorly differentiated gastric neuroendocrine neoplasms.\n", "Among the 232 patients with PDGNENs, 86 (37.07%) were treated by surgery, 40 (17.24%) were treated with chemotherapy, 92 (38.79%) were treated by surgery plus chemotherapy, and 14 (6.03%) were treated with other treatments (somatostatin analogs, targeted therapy, immunotherapy, traditional Chinese medicine treatment, etc.). One hundred and forty-three patients had no distant metastasis or resectable tumors (tumor stage I: 5 cases; stage II: 25 cases; and stage III: 113 cases), of whom 75 were treated by surgery, 6 were treated by chemotherapy, 55 were treated by surgery combined with chemotherapy, and 7 were treated with other treatments. This study analyzed patients without distant metastases and found that the median survival time of the surgery alone group was 18 mo, while the median survival time of the chemotherapy alone group was 11 mo and that of the surgery combined with chemotherapy group was 23 mo (P < 0.001) (Figure 6).\n\nKaplan-Meier survival analysis (P < 0.05). A: Tumor number (P = 0.01); B: Tumor diameter (P = 0.01); C: Invasion (P = 0.02); D: American Joint Committee on Cancer stage (P < 0.001); E: Distant metastasis (P < 0.001); F: Treatment for patients without distant metastases (P < 0.001). 
AJCC: American Joint Committee on Cancer; OS: Overall survival.", "With a median follow-up time of 13.50 mo (range, 7-31 mo), the overall 1-year, 3-year, and 5-year survival rates were 47%, 15%, and 5%, respectively. The median survival time was 14 mo. The univariate analysis showed that tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were correlated with the prognosis (log-rank test P < 0.05; Table 1), while age, sex, lymph node metastasis, pathological type, Ki-67 index, and tumor site were not related to the prognosis (log-rank test P > 0.05; Table 1 and Figure 6). In the multivariate analysis, the tumor number {multiple vs solitary, hazard ratio (HR) [95% confidence interval (CI)]: 3.89 (1.66-9.11), P < 0.001}, tumor diameter [5 cm vs 0-5 cm, HR (95%CI): 1.56 (1.01-2.41), P = 0.04], tumor stage [IV vs I, HR (95%CI): 5.98 (1.78-20.60), P < 0.001; III vs I, HR (95%CI): 3.582 (1.07-11.88), P = 0.03], and distant metastasis status [yes vs no, HR (95%CI): 2.16 (1.41-3.31), P < 0.001] were independent risk factors affecting the prognosis (Table 2). The overall 1-year, 3-year, and 5-year survival rates of the 232 patients with PDGNENs are shown in Table 3.\nMultivariate Cox regression analysis of poorly differentiated gastric neuroendocrine neoplasms\nAJCC: American Joint Committee on Cancer.\nSurvival analysis of related factors of 232 patients with poorly-differentiated gastric neoplasms\nS + C: Surgery plus Chemotherapy; C + F: Cardia + Fundus. \nM1: Mucosa; S: Submucosa. \nMP2: Muscularis propria. \nNEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; NR: Not reported; AJCC: American Joint Committee on Cancer.", "PDGNENs are rare tumors that are highly malignant, accounting for 6.9% of gastroenteropancreatic neuroendocrine tumors and 0.3%-1.8% of all malignant gastric tumors[9,10]. According to the Korean literature, PDGNENs account for 2.84% of all NENs and 40% of G-NENs[11]. In this article, the male-to-female ratio reached 4.66:1, and significantly more male patients were identified than female patients. Kim et al[12] reported that among 63 patients with G-NECs, 48 were male and 15 were female. Other studies[13-15] have indicated that G-NECs have a sex bias favoring males, but the reason has not been clarified.\nPDGNENs are mostly nonfunctional and often detected incidentally. Among the 232 patients with PDGNENs analyzed in our study, all patients had nonfunctional lesions. The main symptoms were similar to those reported in several other studies[15-17]. Early PDGNENs are asymptomatic or have no specific symptoms, such as anemia, abdominal pain, and dyspepsia, and they are unintentionally identified through routine upper gastrointestinal endoscopy. Indeed, regular upper gastrointestinal endoscopy is important and can help to detect PDGNENs, especially in patients with early-stage disease. Advanced PDGNENs are easily detected, mainly because patients show tumor-related features, such as obstruction, bleeding, weight loss, and pain due to infiltration or distant metastasis[16].\nMost patients with PDGNENs exhibit advanced tumors at the time of diagnosis, and patients with advanced tumors have worse outcomes than those with early-stage tumors. In our study, the majority of patients had advanced stage tumors at the time of diagnosis; 199 (85.78%) patients exhibited lymph node metastasis or distant metastasis, and these values are similar to those of several other studies[18]. 
Additionally, the survival analysis showed significant differences between patients with stages I-IV. The median survival time of patients with stage I disease was 88 mo, while that of stage IV patients was only 11 mo, and the 3-year overall survival rates were 100% and 11%, respectively. Patients with PDGNENs often benefit if the disease is detected at an early stage. Ishida et al[18] reported that the 5-year survival rates of 51 patients with stage I, II, III, or IV G-NECs were 66.7%, 49.3%, 64.3%, and 7.7%, respectively. Tierney et al[19] found that the median survival times of patients with stages I-II, III, and IV tumors were 40 mo, 31 mo, and 6 mo, respectively[19]. Furthermore, our study suggested that distant metastasis is an independent prognostic factor, which is consistent with relevant reports[18-21].\nTumor diameter may be relevant to the outcomes in our study. According to the study conducted by Liang et al[20], a gastric neuroendocrine tumor diameter greater than 4.2 cm was a poor prognostic factor for patients. Fang et al[22] analyzed 156 patients with G-NECs. Univariate analysis revealed a significant difference between patients with a tumor diameter less than 4.5 cm and those with a tumor diameter greater than 4.5 cm, and the 5-year survival rates were 57.9% and 29.3%, respectively. Our data suggest that a tumor diameter greater than 5 cm was a risk factor affecting the prognosis. In clinical practice, the tumor diameter of PDGNENs may be useful to predict the outcome, and patients with a tumor diameter greater than 5 cm should receive more attention.\nTumor site may also be related to the prognosis. In our study, PDGNENs were mainly located in the upper third of the stomach (total: 53.95%), while only 27 (12.56%) PDGNENs were found in the antrum. The values are similar to those of other reports[23-25]. In addition, our data showed a longer median survival time of patients with lesions in the cardia (15.97 mo) and fundus of the stomach (20.00 mo) than that of patients with lesions in the gastric antrum (12.50 mo). Hu et al[4] reported a median survival time of patients with G-NECs in the cardia and fundus of 20 mo, which was longer than that of patients with tumors in the antrum (13 mo). Bukhari et al[14] observed a better prognosis for patients with tumors in the cardia region than that of patients with tumors in the gastric antrum (median survival: 48.0 mo vs 19.0 mo)[14]. To a certain extent, we postulate that the prognosis of PDGNENs in the upper part of the stomach is better than that of tumors in the lower part.\nThe European Neuroendocrine Tumor Society proposed guidelines for the treatment of NEC in 2016. For patients with no distant metastasis or resectable tumors, surgical treatment can be selected, and adjuvant chemotherapy (AC) or radiotherapy can be selected after surgery. Surgery is the only curative treatment for resectable PDGNENs, but the prognosis of patients who undergo surgery alone remains very poor[26]. This study analyzed patients without distant metastases and found that the median survival time of patients treated by surgery combined with chemotherapy (23 mo) was longer than that of patients who were treated by surgery alone (18 mo) or chemotherapy alone (11 mo). Their 3-year overall survival rates were 35%, 22%, and 4%, respectively. Patients with early-stage tumors should choose the proper treatment method, which may improve the quality of life and prolong the survival time. 
According to a retrospective study including 69 patients with G-NECs in China, the overall 3-year survival rate of patients receiving surgery combined with chemotherapy was 68.8%, while that of patients who received surgery alone was only 3.8%[27]. Bukhari et al[14] assessed 43 patients with G-NECs. Five patients did not undergo postoperative chemotherapy, and the median survival time of these patients was 15 mo. The median survival time of the remaining 34 patients who received postoperative chemotherapy was 44 mo[14]. Mao et al[28] analyzed 806 patients diagnosed with nonmetastatic poorly differentiated colorectal NECs, and 394 (48.9%) of these patients received AC. Kaplan-Meier curves showed that the median overall survival (OS) was significantly longer for patients treated with AC vs observation (57.4 mo vs 38.2 mo; P = 0.007). The Cox proportional hazards regression analysis showed that AC was associated with a significant OS benefit [HR = 0.73, P < 0.001][28]. Surgery combined with chemotherapy has advantages and may improve the prognosis of patients compared with treatment with either approach alone.\nAs a retrospective study, this study provides novel insights into the diagnosis and treatment of PDGNENs and the risk factors related to the prognosis. However, some limitations should be noted. First, some patients had incomplete basic information. Second, some patients had poor compliance, and a certain number of patients were lost to follow-up (30%). In the future, prospective, multicenter, large-scale trials are still needed to identify independent risk factors that affect the prognosis of patients with PDGNENs.", "In summary, the majority of patients with PDGNENs had already developed lymph node or distant metastasis at the time of diagnosis, and the prognosis was poor, with a 5-year survival rate of 5% and a median survival time of 13.50 mo. Electronic gastroscopy and pathological diagnostic technology have been widely popularized. When people experience the aforementioned symptoms, gastroscopy should be performed in a timely manner. If a lesion is detected, pathology should be performed to determine a clear diagnosis. Clinicians should pay more attention to patients with lesions greater than 5 cm because their prognosis is the worst, with a 5-year survival rate of 5%. Patients are recommended to undergo AC after surgery. In addition, tumor number, tumor diameter, AJCC stage, and distant metastasis are independent factors affecting the prognosis of patients with PDGNENs." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patient selection", "Tumor grade and stage", "Follow-up", "Statistical methods", "RESULTS", "Clinicopathological characteristics", "Treatment", "Follow-up and outcomes", "DISCUSSION", "CONCLUSION" ]
[ "Gastric neuroendocrine neoplasms (G-NENs) are a group of heterogeneous and rare malignant tumors originating from peptidergic neurons and neuroendocrine cells. Neuroendocrine neoplasms can occur throughout the body, such as in the gastrointestinal tract, pancreas, liver and gallbladder, thymus, and lung, but the gastrointestinal tract is the most commonly affected site[1]. According to the Surveillance, Epidemiology, and End Results report, the incidence of G-NENs has increased 15-fold in recent decades and reached 4.85/1000000 in 2014, mainly due to the wide application of gastroscopy and the improvement of pathological diagnosis techniques[2,3]. The incidence of poorly differentiated gastric neuroendocrine neoplasms (PDGNENs) is approximately 1/1000000 people, accounting for 16.4% of G-NENs, while the incidence of other tumors in the stomach shows a decreasing trend compared with G-NENs[4].\nPDGNENs are divided into functional and nonfunctional tumors according to whether the tumor secretes active hormones and causes characteristic clinical manifestations, and the most common clinical tumors are nonfunctional tumors. The clinical symptoms of nonfunctional PDGNENs lack specificity, and early diagnosis is difficult. Generally, the clinical symptoms are caused by the tumor size or metastasis, mainly including abdominal pain and abdominal distension. Few functional PDGNENs secrete bioactive amines that cause carcinoid syndrome, including skin flushing, diarrhea, wheezing, etc. PDGNENs have a worse prognosis than neuroendocrine tumors (NETs), and tumor diameter, stage, location, and treatment are significantly correlated with the prognosis[4]. The 5-year survival rate of patients with well-differentiated G-NENs presenting with distant metastases may reach 35%, while the 5-year survival rate of patients with PDGNENs accompanied by distant metastasis is only 4%[5].\nRelatively few large-cohort studies have assessed PDGNENs and limited data is available on their clinicopathological features and outcomes. Therefore, we collected the data of 232 patients with PDGNENs from multiple centers in China and analyzed the clinicopathological characteristics and prognostic factors of these patients, aiming to provide a reference for clinical work on PDGNENs.", "Patient selection From March 2007 to November 2019, a total of 232 patients with PDGNENs from seven centers in China were enrolled (China-Japan Friendship Hospital, n = 71; Sun Yat-Sen University Cancer Center, n = 54; The Fourth Affiliated Hospital of Hebei Medical University, n = 49; The First Affiliated Hospital Sun Yat-Sen University, n = 39; Fudan University Shanghai Cancer Center, n = 10; The Fifth Medical Center of the PLA General Hospital, n = 8; and Yunnan Tumor Hospital, n = 1). The inclusion criteria were as follows: (1) All patients were confirmed to have neuroendocrine carcinoma (NEC) or mixed adenoneuroendocrine carcinoma (MANEC) based on pathology; (2) The clinical data of the patients were relatively complete; (3) The patients underwent regular follow-up; and (4) The patients had no other tumors. This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Clinical Research Ethics Committee of China-Japan Friendship Hospital (No. 
2019-24-K18-1).\nTumor grade and stage The pathological grading standard used in this study adopted the 2019 World Health Organization classification and grading criteria for gastrointestinal pancreatic neuroendocrine tumors[6] and updated the pathological classification and grading of gastroenteropancreatic neuroendocrine tumors. Well-differentiated gastric neuroendocrine tumors were classified into three types. NEC was no longer graded and was divided into only two subtypes: Large cell NEC (LCNEC) and small cell NEC (SCNEC). In addition, MANEC has also been replaced by mixed neuroendocrine and nonneuroendocrine tumors (MiNENs), which contain a wider range of contents. Most of the NEN components in MiNEN are NEC and may also be NETs; in addition to adenocarcinoma, other components, such as squamous cell carcinoma, may also occur in non-NEN components, but each component must account for more than 30% and be classified when reporting[7]. The 8th edition gastric cancer tumor-node-metastasis staging system was used for staging[8].\nFollow-up The patients were followed regularly by an outpatient review, inpatient medical record review, and telephone interview. The starting point was the time when the patient’s histopathology yielded a diagnosis of PDGNENs. The end point of the follow-up was the time of death.\nStatistical methods Measurement data are presented as the mean ± SD, and the follow-up time is reported as the median (interquartile range). Count data are presented as numbers of cases (percentages). The Kaplan-Meier method was used for the survival analysis, and comparisons were performed using the log-rank test. Multivariable survival analyses were also performed to exclude dependent variables using Cox proportional hazards regression models. When the two-tailed P value was less than 0.05, the difference was considered statistically significant. Data were analyzed using SPSS 25.0 statistical analysis software (IBM, Chicago, IL, United States).", "From March 2007 to November 2019, a total of 232 patients with PDGNENs from seven centers in China were enrolled (China-Japan Friendship Hospital, n = 71; Sun Yat-Sen University Cancer Center, n = 54; The Fourth Affiliated Hospital of Hebei Medical University, n = 49; The First Affiliated Hospital Sun Yat-Sen University, n = 39; Fudan University Shanghai Cancer Center, n = 10; The Fifth Medical Center of the PLA General Hospital, n = 8; and Yunnan Tumor Hospital, n = 1). The inclusion criteria were as follows: (1) All patients were confirmed to have neuroendocrine carcinoma (NEC) or mixed adenoneuroendocrine carcinoma (MANEC) based on pathology; (2) The clinical data of the patients were relatively complete; (3) The patients underwent regular follow-up; and (4) The patients had no other tumors. This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Clinical Research Ethics Committee of China-Japan Friendship Hospital (No. 2019-24-K18-1).", "The pathological grading standard used in this study adopted the 2019 World Health Organization classification and grading criteria for gastrointestinal pancreatic neuroendocrine tumors[6] and updated the pathological classification and grading of gastroenteropancreatic neuroendocrine tumors. Well-differentiated gastric neuroendocrine tumors were classified into three types. NEC was no longer graded and was divided into only two subtypes: Large cell NEC (LCNEC) and small cell NEC (SCNEC). In addition, MANEC has also been replaced by mixed neuroendocrine and nonneuroendocrine tumors (MiNENs), which contain a wider range of contents. Most of the NEN components in MiNEN are NEC and may also be NETs; in addition to adenocarcinoma, other components, such as squamous cell carcinoma, may also occur in non-NEN components, but each component must account for more than 30% and be classified when reporting[7]. 
RESULTS

Clinicopathological characteristics
Among the 232 patients with PDGNENs, 191 (82.3%) were male, the average age was 62.83 years (range: 30-85 years), and elderly patients accounted for 65.5% of the cohort (Table 1). The average tumor diameter was 4.66 ± 2.77 cm, and the lesions most often invaded the serosa (116/198, 58.58%). Lymph nodes were positive in 175 (77.78%) of 225 patients, and 86 (37.39%) of 230 patients had distant metastases. In addition, 113 (49.34%) patients presented with stage III disease and 86 (37.55%) presented with stage IV disease. On endoscopy, the tumors were most often located in the cardia (65/215, 30.23%), the majority were solitary (215/219, 98.17%), and the lesions were mainly ulcerative (157/232, 67.67%). Typical gastroscopic findings are shown in Figure 1.

Figure 1. Endoscopic detection of poorly differentiated gastric neuroendocrine neoplasms. A: Circumferential raised lesions on the cardia with uneven surfaces; B: Irregular bumps on the side of the minor curvature of the cardia, accompanied by erosions, ulcers, and unclear boundaries that bled easily on contact; C: A raised ulcer with a diameter of 4 cm in the small curvature of the antrum; D: A deep ulcer with a diameter of approximately 0.5 cm on the posterior wall of the gastric fundus, with an indistinct base.

Table 1. Clinicopathological characteristics of 232 patients with gastric neuroendocrine carcinoma and univariate analysis. NR: Not reported; M1: Mucosa; S: Submucosa; MP2: Muscularis propria; C + F: Cardia + Fundus; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; AJCC: American Joint Committee on Cancer.

All 232 patients had grade G3 tumors. Of these, 41 had LCNEC (17.67%) (Figure 2), 26 had SCNEC (11.21%) (Figure 3), and 38 had MANEC (16.38%) (Figure 4); the subtype was not reported for 127 (54.74%) patients. The average Ki-67 index was 65.34% (Table 1).

Figure 2. Morphology and immunohistochemical staining of large cell neuroendocrine carcinoma in the cardia. A: Morphology (hematoxylin and eosin staining, × 400); B: Chromogranin A-positive staining [immunohistochemical (IHC), × 200]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).

Figure 3. Morphology and immunohistochemical staining of gastric small cell neuroendocrine carcinoma. A: Morphology (hematoxylin and eosin staining, × 400); B: CD56-positive staining (IHC, × 400); C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200).

Figure 4. Morphology and immunohistochemical staining of mixed adenoneuroendocrine carcinoma. A and B: Morphology of gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 400); C: Gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 200), with Alcian blue/periodic acid-Schiff (AB-PAS) staining (× 200) showing adenocarcinoma (AB-PAS positive) on the left side of the image; D: Chromogranin A-positive staining (× 200), with small cell neuroendocrine carcinoma on the right side of the image.

Clinical symptoms were recorded for 190 patients: 187 (98.42%) were symptomatic, while three (1.58%) had no clinical symptoms. The main symptoms were abdominal pain (105/190, 55.26%), abdominal distension (67/190, 35.26%), weight loss (41/190, 21.58%), poor appetite (38/190, 20.00%), and gastrointestinal bleeding (31/190, 16.32%) (Figure 5).

Figure 5. Distribution of nonspecific clinical symptoms in 190 patients with poorly differentiated gastric neuroendocrine neoplasms.
Treatment
Among the 232 patients with PDGNENs, 86 (37.07%) were treated with surgery alone, 40 (17.24%) with chemotherapy alone, 92 (39.66%) with surgery plus chemotherapy, and 14 (6.03%) with other treatments (somatostatin analogs, targeted therapy, immunotherapy, traditional Chinese medicine, etc.). One hundred and forty-three patients had no distant metastasis and had resectable tumors (stage I: 5 cases; stage II: 25 cases; stage III: 113 cases); of these, 75 were treated with surgery, 6 with chemotherapy, 55 with surgery combined with chemotherapy, and 7 with other treatments. Among the patients without distant metastases, the median survival time was 18 mo in the surgery alone group, 11 mo in the chemotherapy alone group, and 23 mo in the surgery plus chemotherapy group (P < 0.001) (Figure 6).

Figure 6. Kaplan-Meier survival analysis (P < 0.05). A: Tumor number (P = 0.01); B: Tumor diameter (P = 0.01); C: Invasion (P = 0.02); D: American Joint Committee on Cancer stage (P < 0.001); E: Distant metastasis (P < 0.001); F: Treatment for patients without distant metastases (P < 0.001). AJCC: American Joint Committee on Cancer; OS: Overall survival.
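The three-group comparison just described is a k-sample log-rank test. A minimal sketch, under the same assumptions as the earlier example and with a hypothetical treatment column holding the three group labels:

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import multivariate_logrank_test

    df = pd.read_csv("pdgnen_cohort.csv")   # same hypothetical file as above
    m0 = df[df["distant_metastasis"] == 0]  # patients without distant metastases

    # k-sample log-rank test across the three treatment groups
    result = multivariate_logrank_test(m0["time_months"], m0["treatment"],
                                       m0["death_observed"])
    print(result.p_value)  # reported as P < 0.001 in this cohort

    # Per-group Kaplan-Meier fits to read off the median survival times
    for name, grp in m0.groupby("treatment"):
        kmf = KaplanMeierFitter()
        kmf.fit(grp["time_months"], grp["death_observed"], label=str(name))
        print(name, kmf.median_survival_time_)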
Follow-up and outcomes
With a median follow-up time of 13.50 mo (interquartile range: 7-31 mo), the overall 1-year, 3-year, and 5-year survival rates were 47%, 15%, and 5%, respectively, and the median survival time was 14 mo. The univariate analysis showed that tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were correlated with the prognosis (log-rank test P < 0.05; Table 1), while age, sex, lymph node metastasis, pathological type, Ki-67 index, and tumor site were not (log-rank test P > 0.05; Table 1 and Figure 6). In the multivariate analysis, tumor number {multiple vs solitary, hazard ratio (HR) [95% confidence interval (CI)]: 3.89 (1.66-9.11), P < 0.001}, tumor diameter [> 5 cm vs 0-5 cm, HR (95%CI): 1.56 (1.01-2.41), P = 0.04], tumor stage [IV vs I, HR (95%CI): 5.98 (1.78-20.60), P < 0.001; III vs I, HR (95%CI): 3.58 (1.07-11.88), P = 0.03], and distant metastasis [yes vs no, HR (95%CI): 2.16 (1.41-3.31), P < 0.001] were independent risk factors for the prognosis (Table 2). The overall 1-year, 3-year, and 5-year survival rates of the 232 patients with PDGNENs are shown in Table 3.

Table 2. Multivariate Cox regression analysis of poorly differentiated gastric neuroendocrine neoplasms. AJCC: American Joint Committee on Cancer.

Table 3. Survival analysis of related factors in 232 patients with poorly differentiated gastric neuroendocrine neoplasms. S + C: Surgery plus chemotherapy; C + F: Cardia + Fundus; M1: Mucosa; S: Submucosa; MP2: Muscularis propria; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; NR: Not reported; AJCC: American Joint Committee on Cancer.
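To illustrate how a multivariable model like the one behind Table 2 can be specified, the sketch below fits a Cox proportional hazards model with the four independent factors identified above. The 0/1 flags and the dummy coding of AJCC stage (stage I as reference) are assumptions for illustration, not the authors' actual variable coding.

    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("pdgnen_cohort.csv")  # same hypothetical file as above

    cols = ["time_months", "death_observed", "multiple_tumors",
            "diameter_over_5cm", "distant_metastasis", "ajcc_stage"]
    # Expand AJCC stage into dummy variables, keeping stage I as the reference
    model_df = pd.get_dummies(df[cols], columns=["ajcc_stage"],
                              drop_first=True, dtype=int)

    cph = CoxPHFitter()
    cph.fit(model_df, duration_col="time_months", event_col="death_observed")
    cph.print_summary()  # the exp(coef) column gives hazard ratios with 95% CIs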
DISCUSSION

PDGNENs are rare, highly malignant tumors, accounting for 6.9% of gastroenteropancreatic neuroendocrine tumors and 0.3%-1.8% of all malignant gastric tumors[9,10]. According to the Korean literature, PDGNENs account for 2.84% of all NENs and 40% of G-NENs[11]. In our cohort, the male-to-female ratio reached 4.66:1, with significantly more male than female patients. Kim et al[12] reported that among 63 patients with G-NECs, 48 were male and 15 were female. Other studies[13-15] have also indicated that G-NECs show a sex bias favoring males, but the reason has not been clarified.

PDGNENs are mostly nonfunctional and often detected incidentally. All 232 patients with PDGNENs analyzed in our study had nonfunctional lesions, and the main symptoms were similar to those reported in several other studies[15-17]. Early PDGNENs are asymptomatic or cause only nonspecific symptoms, such as anemia, abdominal pain, and dyspepsia, and are identified incidentally during routine upper gastrointestinal endoscopy. Regular upper gastrointestinal endoscopy is therefore important and can help to detect PDGNENs, especially in patients with early-stage disease. Advanced PDGNENs are more easily detected, mainly because patients show tumor-related features, such as obstruction, bleeding, weight loss, and pain due to infiltration or distant metastasis[16].

Most patients with PDGNENs have advanced tumors at the time of diagnosis, and these patients have worse outcomes than those with early-stage tumors. In our study, the majority of patients had advanced-stage tumors at diagnosis; 199 (85.78%) patients exhibited lymph node or distant metastasis, values similar to those of several other studies[18]. Additionally, the survival analysis showed significant differences among patients with stage I-IV disease. The median survival time of patients with stage I disease was 88 mo, while that of stage IV patients was only 11 mo, and the 3-year overall survival rates were 100% and 11%, respectively. Patients with PDGNENs therefore often benefit when the disease is detected at an early stage. Ishida et al[18] reported that the 5-year survival rates of 51 patients with stage I, II, III, or IV G-NECs were 66.7%, 49.3%, 64.3%, and 7.7%, respectively.
Tierney et al[19] found that the median survival times of patients with stage I-II, III, and IV tumors were 40 mo, 31 mo, and 6 mo, respectively. Furthermore, our study suggested that distant metastasis is an independent prognostic factor, which is consistent with previous reports[18-21].

Tumor diameter was also relevant to the outcomes in our study. According to Liang et al[20], a gastric neuroendocrine tumor diameter greater than 4.2 cm was a poor prognostic factor. Fang et al[22] analyzed 156 patients with G-NECs; univariate analysis revealed a significant difference between patients with a tumor diameter less than 4.5 cm and those with a diameter greater than 4.5 cm, with 5-year survival rates of 57.9% and 29.3%, respectively. Our data suggest that a tumor diameter greater than 5 cm is a risk factor for poor prognosis. In clinical practice, the tumor diameter of PDGNENs may help predict the outcome, and patients with a tumor diameter greater than 5 cm should receive closer attention.

Tumor site may also be related to the prognosis. In our study, PDGNENs were mainly located in the upper third of the stomach (53.95% in total), while only 27 (12.56%) were found in the antrum, values similar to those of other reports[23-25]. In addition, our data showed a longer median survival time for patients with lesions in the cardia (15.97 mo) and fundus (20.00 mo) than for patients with lesions in the gastric antrum (12.50 mo). Hu et al[4] reported a median survival time of 20 mo for patients with G-NECs in the cardia and fundus, longer than that of patients with tumors in the antrum (13 mo). Bukhari et al[14] likewise observed a better prognosis for patients with tumors in the cardia region than for those with tumors in the gastric antrum (median survival: 48.0 mo vs 19.0 mo). We therefore postulate that the prognosis of PDGNENs in the upper part of the stomach is better than that of tumors in the lower part.

The European Neuroendocrine Tumor Society proposed guidelines for the treatment of NEC in 2016. For patients with no distant metastasis and resectable tumors, surgical treatment can be selected, followed by adjuvant chemotherapy (AC) or radiotherapy. Surgery is the only curative treatment for resectable PDGNENs, but the prognosis of patients who undergo surgery alone remains very poor[26]. Our analysis of patients without distant metastases found that the median survival time of patients treated with surgery combined with chemotherapy (23 mo) was longer than that of patients treated with surgery alone (18 mo) or chemotherapy alone (11 mo); the corresponding 3-year overall survival rates were 35%, 22%, and 4%, respectively. Patients with early-stage tumors should therefore receive the proper treatment, which may improve quality of life and prolong survival. In a retrospective study of 69 patients with G-NECs in China, the overall 3-year survival rate of patients receiving surgery combined with chemotherapy was 68.8%, while that of patients who received surgery alone was only 3.8%[27]. Bukhari et al[14] assessed 43 patients with G-NECs: five patients did not undergo postoperative chemotherapy and had a median survival time of 15 mo, while the remaining 34 patients who received postoperative chemotherapy had a median survival time of 44 mo.
Mao et al[28] analyzed 806 patients diagnosed with nonmetastatic poorly differentiated colorectal NECs, of whom 394 (48.9%) received AC. Kaplan-Meier curves showed that the median overall survival (OS) was significantly longer for patients treated with AC than for those under observation (57.4 mo vs 38.2 mo; P = 0.007), and Cox proportional hazards regression showed that AC was associated with a significant OS benefit (HR = 0.73, P < 0.001)[28]. Surgery combined with chemotherapy thus has advantages and may improve the prognosis compared with either approach alone.

As a retrospective study, this work provides novel insights into the diagnosis and treatment of PDGNENs and the risk factors related to the prognosis. However, some limitations should be noted. First, some patients had incomplete baseline information. Second, some patients had poor compliance, and a substantial proportion (30%) were lost to follow-up. Prospective, multicenter, large-scale trials are still needed to identify independent risk factors that affect the prognosis of patients with PDGNENs.

CONCLUSION

In summary, the majority of patients with PDGNENs had already developed lymph node or distant metastasis at the time of diagnosis, and the prognosis was poor, with a 5-year survival rate of 5% and a median survival time of 14 mo. Electronic gastroscopy and pathological diagnostic techniques are now widely available; when patients experience the aforementioned symptoms, gastroscopy should be performed in a timely manner, and any detected lesion should be examined pathologically to establish a clear diagnosis. Clinicians should pay particular attention to patients with lesions greater than 5 cm, because their prognosis is the worst, with a 5-year survival rate of 5%. AC after surgery is recommended. In addition, tumor number, tumor diameter, AJCC stage, and distant metastasis are independent factors affecting the prognosis of patients with PDGNENs.
[ "Poorly differentiated gastric neuroendocrine neoplasms", "Clinicopathological characteristics", "Prognosis", "Distant metastasis", "Tumor diameter" ]
INTRODUCTION: Gastric neuroendocrine neoplasms (G-NENs) are a group of heterogeneous and rare malignant tumors originating from peptidergic neurons and neuroendocrine cells. Neuroendocrine neoplasms can occur throughout the body, such as in the gastrointestinal tract, pancreas, liver and gallbladder, thymus, and lung, but the gastrointestinal tract is the most commonly affected site[1]. According to the Surveillance, Epidemiology, and End Results report, the incidence of G-NENs has increased 15-fold in recent decades and reached 4.85/1000000 in 2014, mainly due to the wide application of gastroscopy and the improvement of pathological diagnosis techniques[2,3]. The incidence of poorly differentiated gastric neuroendocrine neoplasms (PDGNENs) is approximately 1/1000000 people, accounting for 16.4% of G-NENs, while the incidence of other tumors in the stomach shows a decreasing trend compared with G-NENs[4]. PDGNENs are divided into functional and nonfunctional tumors according to whether the tumor secretes active hormones and causes characteristic clinical manifestations, and the most common clinical tumors are nonfunctional tumors. The clinical symptoms of nonfunctional PDGNENs lack specificity, and early diagnosis is difficult. Generally, the clinical symptoms are caused by the tumor size or metastasis, mainly including abdominal pain and abdominal distension. Few functional PDGNENs secrete bioactive amines that cause carcinoid syndrome, including skin flushing, diarrhea, wheezing, etc. PDGNENs have a worse prognosis than neuroendocrine tumors (NETs), and tumor diameter, stage, location, and treatment are significantly correlated with the prognosis[4]. The 5-year survival rate of patients with well-differentiated G-NENs presenting with distant metastases may reach 35%, while the 5-year survival rate of patients with PDGNENs accompanied by distant metastasis is only 4%[5]. Relatively few large-cohort studies have assessed PDGNENs and limited data is available on their clinicopathological features and outcomes. Therefore, we collected the data of 232 patients with PDGNENs from multiple centers in China and analyzed the clinicopathological characteristics and prognostic factors of these patients, aiming to provide a reference for clinical work on PDGNENs. MATERIALS AND METHODS: Patient selection From March 2007 to November 2019, a total of 232 patients with PDGNENs from seven centers in China were enrolled (China-Japan Friendship Hospital, n = 71; Sun Yat-Sen University Cancer Center, n = 54; The Fourth Affiliated Hospital of Hebei Medical University, n = 49; The First Affiliated Hospital Sun Yat-Sen University, n = 39; Fudan University Shanghai Cancer Center, n = 10; The Fifth Medical Center of the PLA General Hospital, n = 8; and Yunnan Tumor Hospital, n = 1). The inclusion criteria were as follows: (1) All patients were confirmed to have neuroendocrine carcinoma (NEC) or mixed adenoneuroendocrine carcinoma (MANEC) based on pathology; (2) The clinical data of the patients were relatively complete; (3) The patients underwent regular follow-up; and (4) The patients had no other tumors. This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Clinical Research Ethics Committee of China-Japan Friendship Hospital (No. 2019-24-K18-1). 
From March 2007 to November 2019, a total of 232 patients with PDGNENs from seven centers in China were enrolled (China-Japan Friendship Hospital, n = 71; Sun Yat-Sen University Cancer Center, n = 54; The Fourth Affiliated Hospital of Hebei Medical University, n = 49; The First Affiliated Hospital Sun Yat-Sen University, n = 39; Fudan University Shanghai Cancer Center, n = 10; The Fifth Medical Center of the PLA General Hospital, n = 8; and Yunnan Tumor Hospital, n = 1). The inclusion criteria were as follows: (1) All patients were confirmed to have neuroendocrine carcinoma (NEC) or mixed adenoneuroendocrine carcinoma (MANEC) based on pathology; (2) The clinical data of the patients were relatively complete; (3) The patients underwent regular follow-up; and (4) The patients had no other tumors. This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Clinical Research Ethics Committee of China-Japan Friendship Hospital (No. 2019-24-K18-1). Tumor grade and stage The pathological grading standard used in this study adopted the 2019 World Health Organization classification and grading criteria for gastrointestinal pancreatic neuroendocrine tumors[6] and updated the pathological classification and grading of gastroenteropancreatic neuroendocrine tumors. Well-differentiated gastric neuroendocrine tumors were classified into three types. NEC was no longer graded and was divided into only two subtypes: Large cell NEC (LCNEC) and small cell NEC (SCNEC). In addition, MANEC has also been replaced by mixed neuroendocrine and nonneuroendocrine tumors (MiNENs), which contain a wider range of contents. Most of the NEN components in MiNEN are NEC and may also be NETs; in addition to adenocarcinoma, other components, such as squamous cell carcinoma, may also occur in non-NEN components, but each component must account for more than 30% and be classified when reporting[7]. The 8th edition gastric cancer tumor-node-metastasis staging system was used for staging[8]. The pathological grading standard used in this study adopted the 2019 World Health Organization classification and grading criteria for gastrointestinal pancreatic neuroendocrine tumors[6] and updated the pathological classification and grading of gastroenteropancreatic neuroendocrine tumors. Well-differentiated gastric neuroendocrine tumors were classified into three types. NEC was no longer graded and was divided into only two subtypes: Large cell NEC (LCNEC) and small cell NEC (SCNEC). In addition, MANEC has also been replaced by mixed neuroendocrine and nonneuroendocrine tumors (MiNENs), which contain a wider range of contents. Most of the NEN components in MiNEN are NEC and may also be NETs; in addition to adenocarcinoma, other components, such as squamous cell carcinoma, may also occur in non-NEN components, but each component must account for more than 30% and be classified when reporting[7]. The 8th edition gastric cancer tumor-node-metastasis staging system was used for staging[8]. Follow-up The patients were followed regularly by an outpatient review, inpatient medical record review, and telephone interview. The starting point was the time when the patient’s histopathology yielded a diagnosis of PDGNENs. The end point of the follow-up was the time of death. The patients were followed regularly by an outpatient review, inpatient medical record review, and telephone interview. The starting point was the time when the patient’s histopathology yielded a diagnosis of PDGNENs. 
The end point of the follow-up was the time of death. Statistical methods Measurement data are presented as the mean ± SD, and the follow-up time is reported as the median (interquartile range). Count data are presented as numbers of cases (percentages). The Kaplan-Meier method was used for the survival analysis, and comparisons were performed using the log-rank test. Multivariable survival analyses were also performed to exclude dependent variables using Cox proportional hazards regression models. When the two-tailed P value was less than 0.05, the difference was considered statistically significant. Data were analyzed using SPSS 25.0 statistical analysis software (IBM, Chicago, IL, United States). Measurement data are presented as the mean ± SD, and the follow-up time is reported as the median (interquartile range). Count data are presented as numbers of cases (percentages). The Kaplan-Meier method was used for the survival analysis, and comparisons were performed using the log-rank test. Multivariable survival analyses were also performed to exclude dependent variables using Cox proportional hazards regression models. When the two-tailed P value was less than 0.05, the difference was considered statistically significant. Data were analyzed using SPSS 25.0 statistical analysis software (IBM, Chicago, IL, United States). Patient selection: From March 2007 to November 2019, a total of 232 patients with PDGNENs from seven centers in China were enrolled (China-Japan Friendship Hospital, n = 71; Sun Yat-Sen University Cancer Center, n = 54; The Fourth Affiliated Hospital of Hebei Medical University, n = 49; The First Affiliated Hospital Sun Yat-Sen University, n = 39; Fudan University Shanghai Cancer Center, n = 10; The Fifth Medical Center of the PLA General Hospital, n = 8; and Yunnan Tumor Hospital, n = 1). The inclusion criteria were as follows: (1) All patients were confirmed to have neuroendocrine carcinoma (NEC) or mixed adenoneuroendocrine carcinoma (MANEC) based on pathology; (2) The clinical data of the patients were relatively complete; (3) The patients underwent regular follow-up; and (4) The patients had no other tumors. This study was conducted in accordance with the provisions of the Declaration of Helsinki and approved by the Clinical Research Ethics Committee of China-Japan Friendship Hospital (No. 2019-24-K18-1). Tumor grade and stage: The pathological grading standard used in this study adopted the 2019 World Health Organization classification and grading criteria for gastrointestinal pancreatic neuroendocrine tumors[6] and updated the pathological classification and grading of gastroenteropancreatic neuroendocrine tumors. Well-differentiated gastric neuroendocrine tumors were classified into three types. NEC was no longer graded and was divided into only two subtypes: Large cell NEC (LCNEC) and small cell NEC (SCNEC). In addition, MANEC has also been replaced by mixed neuroendocrine and nonneuroendocrine tumors (MiNENs), which contain a wider range of contents. Most of the NEN components in MiNEN are NEC and may also be NETs; in addition to adenocarcinoma, other components, such as squamous cell carcinoma, may also occur in non-NEN components, but each component must account for more than 30% and be classified when reporting[7]. The 8th edition gastric cancer tumor-node-metastasis staging system was used for staging[8]. Follow-up: The patients were followed regularly by an outpatient review, inpatient medical record review, and telephone interview. 
The starting point was the time when the patient’s histopathology yielded a diagnosis of PDGNENs. The end point of the follow-up was the time of death. Statistical methods: Measurement data are presented as the mean ± SD, and the follow-up time is reported as the median (interquartile range). Count data are presented as numbers of cases (percentages). The Kaplan-Meier method was used for the survival analysis, and comparisons were performed using the log-rank test. Multivariable survival analyses were also performed to exclude dependent variables using Cox proportional hazards regression models. When the two-tailed P value was less than 0.05, the difference was considered statistically significant. Data were analyzed using SPSS 25.0 statistical analysis software (IBM, Chicago, IL, United States). RESULTS: Clinicopathological characteristics Among the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 years (range: 30-85 years), and the elderly accounted for 65.5% of the patients (Table 1). The average diameter of the tumor was 4.66 ± 2.77 cm, and the lesions mainly invaded the serosa (116/198, 58.58%). The lymph nodes were positive in 175 (79.78%) of 225 patients, and 86 (37.39%) of 230 patients exhibited distant metastases. In addition, 113 (49.34%) patients presented with a stage II disease and 86 (37.55%) presented with stage IV disease. In terms of endoscopic performance, most of the tumors were mainly located in the cardia (65/215, 30.23%). The majority of tumors were solitary (215/219, 89.17%). In addition, the lesions were mainly ulcers (157/232, 67.67%). Typical gastroscopic findings are shown in Figure 1. Endoscopic detection of poorly differentiated gastric neuroendocrine neoplasms. A: Circumferential raised lesions on the cardia with uneven surfaces; B: Irregular bumps on the side of the minor curvature of the cardia, accompanied by erosions, ulcers, and unclear boundaries that bled easily when contacted; C: A raised ulcer with a diameter of 4 cm was observed in the small curvature of the antrum; D: A deep ulcer with a diameter of approximately 0.5 cm on the posterior wall of the gastric fundus was observed, and the base was not clear. Clinicopathological characteristics of 232 patients with gastric neuroendocrine carcinoma and univariate analysis NR: Not reported. M1: Mucosa; S: Submucosa. MP2: Muscularis propria. C + F: Cardia + Fundus; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; AJCC: American Joint Committee on Cancer. The pathological classification of 232 patients was G3. Of these patients, 41 had LCNEC (17.67%) (Figure 2), 26 had SCNEC (12.21%) (Figure 3), and 38 had MANEC (16.38%) (Figure 4); the subtype was not reported for 127 (54.74%) patients. The average Ki-67 index was 65.34% (Table 1). Morphology and immunohistochemical staining of large cell neuroendocrine carcinoma in the cardia. A: Morphology (hematoxylin and eosin staining, × 400); B: Chromogranin A-positive staining [immunohistochemical (IHC), × 200]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200). Morphology and immunohistochemical staining of gastric small cell neuroendocrine carcinoma. A: Morphology (hematoxylin and eosin staining, × 400); B: CD56-positive staining [immunohistochemical (IHC), × 400]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200). Morphology and immunohistochemical staining of mixed adenoneuroendocrine carcinoma. 
A and B: Morphology of gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 400); C: Gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 200). Alcian blue/periodic acid–Schiff (AB-PAS) staining (× 200): The left side of the picture shows adenocarcinoma (AB-PAS positive); D: Chromogranin A-positive staining (× 200). The right side of the picture shows small cell neuroendocrine carcinoma. The clinical symptoms of 190 patients were recorded. One hundred and eighty-seven (98.42%) patients experienced symptoms, while three (1.58%) patients had no clinical symptoms. The main symptoms were abdominal pain (105/190, 55.26%), abdominal distension (67/190, 35.26%), weight loss (41/190, 21.58%), poor appetite (38/190, 20.00%), and gastrointestinal bleeding (31/190, 16.32%) (Figure 5). Distribution of nonspecific clinical symptoms in 190 patients with poorly differentiated gastric neuroendocrine neoplasms. Among the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 years (range: 30-85 years), and the elderly accounted for 65.5% of the patients (Table 1). The average diameter of the tumor was 4.66 ± 2.77 cm, and the lesions mainly invaded the serosa (116/198, 58.58%). The lymph nodes were positive in 175 (79.78%) of 225 patients, and 86 (37.39%) of 230 patients exhibited distant metastases. In addition, 113 (49.34%) patients presented with a stage II disease and 86 (37.55%) presented with stage IV disease. In terms of endoscopic performance, most of the tumors were mainly located in the cardia (65/215, 30.23%). The majority of tumors were solitary (215/219, 89.17%). In addition, the lesions were mainly ulcers (157/232, 67.67%). Typical gastroscopic findings are shown in Figure 1. Endoscopic detection of poorly differentiated gastric neuroendocrine neoplasms. A: Circumferential raised lesions on the cardia with uneven surfaces; B: Irregular bumps on the side of the minor curvature of the cardia, accompanied by erosions, ulcers, and unclear boundaries that bled easily when contacted; C: A raised ulcer with a diameter of 4 cm was observed in the small curvature of the antrum; D: A deep ulcer with a diameter of approximately 0.5 cm on the posterior wall of the gastric fundus was observed, and the base was not clear. Clinicopathological characteristics of 232 patients with gastric neuroendocrine carcinoma and univariate analysis NR: Not reported. M1: Mucosa; S: Submucosa. MP2: Muscularis propria. C + F: Cardia + Fundus; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; AJCC: American Joint Committee on Cancer. The pathological classification of 232 patients was G3. Of these patients, 41 had LCNEC (17.67%) (Figure 2), 26 had SCNEC (12.21%) (Figure 3), and 38 had MANEC (16.38%) (Figure 4); the subtype was not reported for 127 (54.74%) patients. The average Ki-67 index was 65.34% (Table 1). Morphology and immunohistochemical staining of large cell neuroendocrine carcinoma in the cardia. A: Morphology (hematoxylin and eosin staining, × 400); B: Chromogranin A-positive staining [immunohistochemical (IHC), × 200]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200). Morphology and immunohistochemical staining of gastric small cell neuroendocrine carcinoma. 
A: Morphology (hematoxylin and eosin staining, × 400); B: CD56-positive staining [immunohistochemical (IHC), × 400]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200). Morphology and immunohistochemical staining of mixed adenoneuroendocrine carcinoma. A and B: Morphology of gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 400); C: Gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 200). Alcian blue/periodic acid–Schiff (AB-PAS) staining (× 200): The left side of the picture shows adenocarcinoma (AB-PAS positive); D: Chromogranin A-positive staining (× 200). The right side of the picture shows small cell neuroendocrine carcinoma. The clinical symptoms of 190 patients were recorded. One hundred and eighty-seven (98.42%) patients experienced symptoms, while three (1.58%) patients had no clinical symptoms. The main symptoms were abdominal pain (105/190, 55.26%), abdominal distension (67/190, 35.26%), weight loss (41/190, 21.58%), poor appetite (38/190, 20.00%), and gastrointestinal bleeding (31/190, 16.32%) (Figure 5). Distribution of nonspecific clinical symptoms in 190 patients with poorly differentiated gastric neuroendocrine neoplasms. Treatment Among the 232 patients with PDGNENs, 86 (37.07%) were treated by surgery, 40 (17.24%) were treated with chemotherapy, 92 (38.79%) were treated by surgery plus chemotherapy, and 14 (6.03%) were treated with other treatments (somatostatin analogs, targeted therapy, immunotherapy, traditional Chinese medicine treatment, etc.). One hundred and forty-three patients had no distant metastasis or resectable tumors (tumor stage I: 5 cases; stage II: 25 cases; and stage III: 113 cases), of whom 75 were treated by surgery, 6 were treated by chemotherapy, 55 were treated by surgery combined with chemotherapy, and 7 were treated with other treatments. This study analyzed patients without distant metastases and found that the median survival time of the surgery alone group was 18 mo, while the median survival time of the chemotherapy alone group was 11 mo and that of the surgery combined with chemotherapy group was 23 mo (P < 0.001) (Figure 6). Kaplan-Meier survival analysis (P < 0.05). A: Tumor number (P = 0.01); B: Tumor diameter (P = 0.01); C: Invasion (P = 0.02); D: American Joint Committee on Cancer stage (P < 0.001); E: Distant metastasis (P < 0.001); F: Treatment for patients without distant metastases (P < 0.001). AJCC: American Joint Committee on Cancer; OS: Overall survival. Among the 232 patients with PDGNENs, 86 (37.07%) were treated by surgery, 40 (17.24%) were treated with chemotherapy, 92 (38.79%) were treated by surgery plus chemotherapy, and 14 (6.03%) were treated with other treatments (somatostatin analogs, targeted therapy, immunotherapy, traditional Chinese medicine treatment, etc.). One hundred and forty-three patients had no distant metastasis or resectable tumors (tumor stage I: 5 cases; stage II: 25 cases; and stage III: 113 cases), of whom 75 were treated by surgery, 6 were treated by chemotherapy, 55 were treated by surgery combined with chemotherapy, and 7 were treated with other treatments. This study analyzed patients without distant metastases and found that the median survival time of the surgery alone group was 18 mo, while the median survival time of the chemotherapy alone group was 11 mo and that of the surgery combined with chemotherapy group was 23 mo (P < 0.001) (Figure 6). 
Kaplan-Meier survival analysis (P < 0.05). A: Tumor number (P = 0.01); B: Tumor diameter (P = 0.01); C: Invasion (P = 0.02); D: American Joint Committee on Cancer stage (P < 0.001); E: Distant metastasis (P < 0.001); F: Treatment for patients without distant metastases (P < 0.001). AJCC: American Joint Committee on Cancer; OS: Overall survival. Follow-up and outcomes With a median follow-up time of 13.50 mo (range, 7-31 mo), the overall 1-year, 3-year, and 5-year survival rates were 47%, 15%, and 5%, respectively. The median survival time was 14 mo. The univariate analysis showed that tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were correlated with the prognosis (log-rank test P < 0.05; Table 1), while age, sex, lymph node metastasis, pathological type, Ki-67 index, and tumor site were not related to the prognosis (log-rank test P > 0.05; Table 1 and Figure 6). In the multivariate analysis, the tumor number {multiple vs solitary, hazard ratio (HR) [95% confidence interval (CI)]: 3.89 (1.66-9.11), P < 0.001}, tumor diameter [5 cm vs 0-5 cm, HR (95%CI): 1.56 (1.01-2.41), P = 0.04], tumor stage [IV vs I, HR (95%CI): 5.98 (1.78-20.60), P < 0.001; III vs I, HR (95%CI): 3.582 (1.07-11.88), P = 0.03], and distant metastasis status [yes vs no, HR (95%CI): 2.16 (1.41-3.31), P < 0.001] were independent risk factors affecting the prognosis (Table 2). The overall 1-year, 3-year, and 5-year survival rates of the 232 patients with PDGNENs are shown in Table 3. Multivariate Cox regression analysis of poorly differentiated gastric neuroendocrine neoplasms AJCC: American Joint Committee on Cancer. Survival analysis of related factors of 232 patients with poorly-differentiated gastric neoplasms S + C: Surgery plus Chemotherapy; C + F: Cardia + Fundus. M1: Mucosa; S: Submucosa. MP2: Muscularis propria. NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; NR: Not reported; AJCC: American Joint Committee on Cancer. With a median follow-up time of 13.50 mo (range, 7-31 mo), the overall 1-year, 3-year, and 5-year survival rates were 47%, 15%, and 5%, respectively. The median survival time was 14 mo. The univariate analysis showed that tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were correlated with the prognosis (log-rank test P < 0.05; Table 1), while age, sex, lymph node metastasis, pathological type, Ki-67 index, and tumor site were not related to the prognosis (log-rank test P > 0.05; Table 1 and Figure 6). In the multivariate analysis, the tumor number {multiple vs solitary, hazard ratio (HR) [95% confidence interval (CI)]: 3.89 (1.66-9.11), P < 0.001}, tumor diameter [5 cm vs 0-5 cm, HR (95%CI): 1.56 (1.01-2.41), P = 0.04], tumor stage [IV vs I, HR (95%CI): 5.98 (1.78-20.60), P < 0.001; III vs I, HR (95%CI): 3.582 (1.07-11.88), P = 0.03], and distant metastasis status [yes vs no, HR (95%CI): 2.16 (1.41-3.31), P < 0.001] were independent risk factors affecting the prognosis (Table 2). The overall 1-year, 3-year, and 5-year survival rates of the 232 patients with PDGNENs are shown in Table 3. Multivariate Cox regression analysis of poorly differentiated gastric neuroendocrine neoplasms AJCC: American Joint Committee on Cancer. Survival analysis of related factors of 232 patients with poorly-differentiated gastric neoplasms S + C: Surgery plus Chemotherapy; C + F: Cardia + Fundus. M1: Mucosa; S: Submucosa. 
MP2: Muscularis propria. NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; NR: Not reported; AJCC: American Joint Committee on Cancer. Clinicopathological characteristics: Among the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 years (range: 30-85 years), and the elderly accounted for 65.5% of the patients (Table 1). The average diameter of the tumor was 4.66 ± 2.77 cm, and the lesions mainly invaded the serosa (116/198, 58.58%). The lymph nodes were positive in 175 (79.78%) of 225 patients, and 86 (37.39%) of 230 patients exhibited distant metastases. In addition, 113 (49.34%) patients presented with a stage II disease and 86 (37.55%) presented with stage IV disease. In terms of endoscopic performance, most of the tumors were mainly located in the cardia (65/215, 30.23%). The majority of tumors were solitary (215/219, 89.17%). In addition, the lesions were mainly ulcers (157/232, 67.67%). Typical gastroscopic findings are shown in Figure 1. Endoscopic detection of poorly differentiated gastric neuroendocrine neoplasms. A: Circumferential raised lesions on the cardia with uneven surfaces; B: Irregular bumps on the side of the minor curvature of the cardia, accompanied by erosions, ulcers, and unclear boundaries that bled easily when contacted; C: A raised ulcer with a diameter of 4 cm was observed in the small curvature of the antrum; D: A deep ulcer with a diameter of approximately 0.5 cm on the posterior wall of the gastric fundus was observed, and the base was not clear. Clinicopathological characteristics of 232 patients with gastric neuroendocrine carcinoma and univariate analysis NR: Not reported. M1: Mucosa; S: Submucosa. MP2: Muscularis propria. C + F: Cardia + Fundus; NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; AJCC: American Joint Committee on Cancer. The pathological classification of 232 patients was G3. Of these patients, 41 had LCNEC (17.67%) (Figure 2), 26 had SCNEC (12.21%) (Figure 3), and 38 had MANEC (16.38%) (Figure 4); the subtype was not reported for 127 (54.74%) patients. The average Ki-67 index was 65.34% (Table 1). Morphology and immunohistochemical staining of large cell neuroendocrine carcinoma in the cardia. A: Morphology (hematoxylin and eosin staining, × 400); B: Chromogranin A-positive staining [immunohistochemical (IHC), × 200]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200). Morphology and immunohistochemical staining of gastric small cell neuroendocrine carcinoma. A: Morphology (hematoxylin and eosin staining, × 400); B: CD56-positive staining [immunohistochemical (IHC), × 400]; C: Synaptophysin-positive staining (IHC, × 200); D: Ki-67 index: 90% (IHC, × 200). Morphology and immunohistochemical staining of mixed adenoneuroendocrine carcinoma. A and B: Morphology of gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 400); C: Gastric small cell neuroendocrine carcinoma mixed with adenocarcinoma (hematoxylin and eosin staining, × 200). Alcian blue/periodic acid–Schiff (AB-PAS) staining (× 200): The left side of the picture shows adenocarcinoma (AB-PAS positive); D: Chromogranin A-positive staining (× 200). The right side of the picture shows small cell neuroendocrine carcinoma. The clinical symptoms of 190 patients were recorded. One hundred and eighty-seven (98.42%) patients experienced symptoms, while three (1.58%) patients had no clinical symptoms. 
The main symptoms were abdominal pain (105/190, 55.26%), abdominal distension (67/190, 35.26%), weight loss (41/190, 21.58%), poor appetite (38/190, 20.00%), and gastrointestinal bleeding (31/190, 16.32%) (Figure 5). Distribution of nonspecific clinical symptoms in 190 patients with poorly differentiated gastric neuroendocrine neoplasms. Treatment: Among the 232 patients with PDGNENs, 86 (37.07%) were treated by surgery, 40 (17.24%) were treated with chemotherapy, 92 (38.79%) were treated by surgery plus chemotherapy, and 14 (6.03%) were treated with other treatments (somatostatin analogs, targeted therapy, immunotherapy, traditional Chinese medicine treatment, etc.). One hundred and forty-three patients had no distant metastasis or resectable tumors (tumor stage I: 5 cases; stage II: 25 cases; and stage III: 113 cases), of whom 75 were treated by surgery, 6 were treated by chemotherapy, 55 were treated by surgery combined with chemotherapy, and 7 were treated with other treatments. This study analyzed patients without distant metastases and found that the median survival time of the surgery alone group was 18 mo, while the median survival time of the chemotherapy alone group was 11 mo and that of the surgery combined with chemotherapy group was 23 mo (P < 0.001) (Figure 6). Kaplan-Meier survival analysis (P < 0.05). A: Tumor number (P = 0.01); B: Tumor diameter (P = 0.01); C: Invasion (P = 0.02); D: American Joint Committee on Cancer stage (P < 0.001); E: Distant metastasis (P < 0.001); F: Treatment for patients without distant metastases (P < 0.001). AJCC: American Joint Committee on Cancer; OS: Overall survival. Follow-up and outcomes: With a median follow-up time of 13.50 mo (range, 7-31 mo), the overall 1-year, 3-year, and 5-year survival rates were 47%, 15%, and 5%, respectively. The median survival time was 14 mo. The univariate analysis showed that tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were correlated with the prognosis (log-rank test P < 0.05; Table 1), while age, sex, lymph node metastasis, pathological type, Ki-67 index, and tumor site were not related to the prognosis (log-rank test P > 0.05; Table 1 and Figure 6). In the multivariate analysis, the tumor number {multiple vs solitary, hazard ratio (HR) [95% confidence interval (CI)]: 3.89 (1.66-9.11), P < 0.001}, tumor diameter [5 cm vs 0-5 cm, HR (95%CI): 1.56 (1.01-2.41), P = 0.04], tumor stage [IV vs I, HR (95%CI): 5.98 (1.78-20.60), P < 0.001; III vs I, HR (95%CI): 3.582 (1.07-11.88), P = 0.03], and distant metastasis status [yes vs no, HR (95%CI): 2.16 (1.41-3.31), P < 0.001] were independent risk factors affecting the prognosis (Table 2). The overall 1-year, 3-year, and 5-year survival rates of the 232 patients with PDGNENs are shown in Table 3. Multivariate Cox regression analysis of poorly differentiated gastric neuroendocrine neoplasms AJCC: American Joint Committee on Cancer. Survival analysis of related factors of 232 patients with poorly-differentiated gastric neoplasms S + C: Surgery plus Chemotherapy; C + F: Cardia + Fundus. M1: Mucosa; S: Submucosa. MP2: Muscularis propria. NEC: Neuroendocrine carcinoma; MANEC: Mixed adenoneuroendocrine carcinoma; NR: Not reported; AJCC: American Joint Committee on Cancer. DISCUSSION: PDGNENs are rare tumors that are highly malignant, accounting for 6.9% of gastroenteropancreatic neuroendocrine tumors and 0.3%-1.8% of all malignant gastric tumors[9,10]. 
DISCUSSION: PDGNENs are rare, highly malignant tumors, accounting for 6.9% of gastroenteropancreatic neuroendocrine tumors and 0.3%-1.8% of all malignant gastric tumors[9,10]. According to the Korean literature, PDGNENs account for 2.84% of all NENs and 40% of G-NENs[11]. In our cohort, the male-to-female ratio reached 4.66:1, with males clearly outnumbering females. Kim et al[12] reported that among 63 patients with G-NECs, 48 were male and 15 were female, and other studies[13-15] have likewise indicated that G-NECs show a male predominance, although the reason has not been clarified. PDGNENs are mostly nonfunctional and often detected incidentally; all 232 patients with PDGNENs analyzed in our study had nonfunctional lesions, and the main symptoms were similar to those reported in several other studies[15-17]. Early PDGNENs are asymptomatic or cause only nonspecific symptoms, such as anemia, abdominal pain, and dyspepsia, and are often identified incidentally during routine upper gastrointestinal endoscopy. Indeed, regular upper gastrointestinal endoscopy is important and can help detect PDGNENs, especially in patients with early-stage disease. Advanced PDGNENs are more easily detected, mainly because patients show tumor-related features, such as obstruction, bleeding, weight loss, and pain due to infiltration or distant metastasis[16]. Most patients with PDGNENs have advanced tumors at the time of diagnosis, and these patients have worse outcomes than those with early-stage tumors. In our study, the majority of patients had advanced-stage tumors at diagnosis; 199 (85.78%) exhibited lymph node or distant metastasis, values similar to those of several other studies[18]. Additionally, the survival analysis showed significant differences between patients with stage I-IV disease: the median survival time of patients with stage I disease was 88 mo, while that of stage IV patients was only 11 mo, and the 3-year overall survival rates were 100% and 11%, respectively. Patients with PDGNENs therefore often benefit if the disease is detected at an early stage. Ishida et al[18] reported that the 5-year survival rates of 51 patients with stage I, II, III, or IV G-NECs were 66.7%, 49.3%, 64.3%, and 7.7%, respectively. Tierney et al[19] found that the median survival times of patients with stage I-II, III, and IV tumors were 40 mo, 31 mo, and 6 mo, respectively. Furthermore, our study suggested that distant metastasis is an independent prognostic factor, which is consistent with relevant reports[18-21]. Tumor diameter was also related to outcome in our study. According to Liang et al[20], a gastric neuroendocrine tumor diameter greater than 4.2 cm was a poor prognostic factor. Fang et al[22] analyzed 156 patients with G-NECs; univariate analysis revealed a significant difference between patients with a tumor diameter less than 4.5 cm and those with a diameter greater than 4.5 cm, with 5-year survival rates of 57.9% and 29.3%, respectively. Our data suggest that a tumor diameter greater than 5 cm was a risk factor for poor prognosis. In clinical practice, the tumor diameter of PDGNENs may be useful for predicting outcome, and patients with a tumor diameter greater than 5 cm should receive more attention. Tumor site may also be related to prognosis. In our study, PDGNENs were mainly located in the upper third of the stomach (53.95% in total), while only 27 (12.56%) were found in the antrum, values similar to those of other reports[23-25].
In addition, our data showed a longer median survival time for patients with lesions in the cardia (15.97 mo) and fundus (20.00 mo) than for patients with lesions in the gastric antrum (12.50 mo). Hu et al[4] reported a median survival time of 20 mo for patients with G-NECs in the cardia and fundus, longer than that of patients with tumors in the antrum (13 mo). Bukhari et al[14] likewise observed a better prognosis for patients with tumors in the cardia region than for those with tumors in the gastric antrum (median survival: 48.0 mo vs 19.0 mo). To a certain extent, we therefore postulate that the prognosis of PDGNENs in the upper part of the stomach is better than that of tumors in the lower part. The European Neuroendocrine Tumor Society proposed guidelines for the treatment of NEC in 2016: for patients with no distant metastasis and resectable tumors, surgical treatment can be selected, followed by adjuvant chemotherapy (AC) or radiotherapy. Surgery is the only curative treatment for resectable PDGNENs, but the prognosis of patients who undergo surgery alone remains very poor[26]. In our analysis of patients without distant metastases, the median survival time of patients treated with surgery combined with chemotherapy (23 mo) was longer than that of patients treated with surgery alone (18 mo) or chemotherapy alone (11 mo); the corresponding 3-year overall survival rates were 35%, 22%, and 4%. Choosing the proper treatment for patients with early-stage tumors may improve quality of life and prolong survival. In a retrospective study of 69 patients with G-NECs in China, the overall 3-year survival rate of patients receiving surgery combined with chemotherapy was 68.8%, while that of patients who received surgery alone was only 3.8%[27]. Bukhari et al[14] assessed 43 patients with G-NECs: five patients did not undergo postoperative chemotherapy and had a median survival time of 15 mo, whereas the 34 patients who received postoperative chemotherapy had a median survival time of 44 mo. Mao et al[28] analyzed 806 patients with nonmetastatic poorly differentiated colorectal NECs, 394 (48.9%) of whom received AC; Kaplan-Meier curves showed that median overall survival (OS) was significantly longer with AC than with observation (57.4 mo vs 38.2 mo; P = 0.007), and Cox proportional hazards regression showed that AC was associated with a significant OS benefit (HR = 0.73, P < 0.001). Surgery combined with chemotherapy therefore has advantages and may improve prognosis compared with either approach alone. Although retrospective, this study provides novel insights into the diagnosis and treatment of PDGNENs and the risk factors related to prognosis. Some limitations should nevertheless be noted: first, some patients had incomplete baseline information; second, some patients had poor compliance, and approximately 30% were lost to follow-up. In the future, prospective, multicenter, large-scale trials are still needed to identify independent risk factors that affect the prognosis of patients with PDGNENs. CONCLUSION: In summary, the majority of patients with PDGNENs had already developed lymph node or distant metastasis at the time of diagnosis, and the prognosis was poor, with a 5-year survival rate of 5% and a median survival time of 13.50 mo.
Gastroscopy and pathological diagnostic techniques are now widely available. Patients who experience the aforementioned symptoms should undergo gastroscopy in a timely manner, and any detected lesion should be examined pathologically to establish a clear diagnosis. Clinicians should pay particular attention to patients with lesions greater than 5 cm, as their prognosis is the worst, with a 5-year survival rate of 5%. Adjuvant chemotherapy is recommended after surgery. In addition, tumor number, tumor diameter, AJCC stage, and distant metastasis are independent factors affecting the prognosis of patients with PDGNENs.
Background: Poorly differentiated gastric neuroendocrine neoplasms (PDGNENs) include gastric neuroendocrine carcinoma (NEC) and mixed adenoneuroendocrine carcinoma, which are highly malignant and rare tumors, and their incidence has increased over the past few decades. However, the clinicopathological features and outcomes of patients with PDGNENs have not been completely elucidated. Methods: The data from seven centers in China from March 2007 to November 2019 were analyzed retrospectively. Results: Among the 232 patients with PDGNENs, 191 (82.3%) were male, with an average age of 62.83 ± 9.11 years. One hundred and thirteen (49.34%) of 229 patients had stage III disease and 86 (37.55%) had stage IV disease. Three (1.58%) of 190 patients had no clinical symptoms, while 187 (98.42%) patients presented with clinical symptoms. The tumors were mainly (89.17%) solitary and located in the upper third of the stomach (cardia and fundus of stomach: 115/215, 53.49%). Most lesions were ulcers (157/232, 67.67%), with an average diameter of 4.66 ± 2.77 cm. In terms of tumor invasion, the majority of tumors invaded the serosa (116/198, 58.58%). The median survival time of the 232 patients was 13.50 mo (7, 31 mo), and the overall 1-year, 3-year, and 5-year survival rates were 49%, 19%, and 5%, respectively. According to univariate analysis, tumor number, tumor diameter, gastric invasion status, American Joint Committee on Cancer (AJCC) stage, and distant metastasis status were prognostic factors for patients with PDGNENs. Multivariate analysis showed that tumor number, tumor diameter, AJCC stage, and distant metastasis status were independent prognostic factors for patients with PDGNENs. Conclusions: The overall prognosis of patients with PDGNENs is poor. The outcomes of patients with a tumor diameter > 5 cm, multiple tumors, and stage IV tumors are worse than those of other patients.
INTRODUCTION: Gastric neuroendocrine neoplasms (G-NENs) are a group of heterogeneous and rare malignant tumors originating from peptidergic neurons and neuroendocrine cells. Neuroendocrine neoplasms can occur throughout the body, such as in the gastrointestinal tract, pancreas, liver and gallbladder, thymus, and lung, but the gastrointestinal tract is the most commonly affected site[1]. According to the Surveillance, Epidemiology, and End Results report, the incidence of G-NENs has increased 15-fold in recent decades and reached 4.85/1000000 in 2014, mainly due to the wide application of gastroscopy and the improvement of pathological diagnosis techniques[2,3]. The incidence of poorly differentiated gastric neuroendocrine neoplasms (PDGNENs) is approximately 1/1000000 people, accounting for 16.4% of G-NENs, whereas the incidence of other gastric tumors shows a decreasing trend relative to G-NENs[4]. PDGNENs are divided into functional and nonfunctional tumors according to whether the tumor secretes active hormones and causes characteristic clinical manifestations; nonfunctional tumors are the most common in clinical practice. The clinical symptoms of nonfunctional PDGNENs lack specificity, and early diagnosis is difficult. Generally, the clinical symptoms are caused by the tumor size or metastasis, mainly including abdominal pain and abdominal distension. The few functional PDGNENs secrete bioactive amines that cause carcinoid syndrome, including skin flushing, diarrhea, and wheezing. PDGNENs have a worse prognosis than neuroendocrine tumors (NETs), and tumor diameter, stage, location, and treatment are significantly correlated with the prognosis[4]. The 5-year survival rate of patients with well-differentiated G-NENs presenting with distant metastases may reach 35%, while the 5-year survival rate of patients with PDGNENs accompanied by distant metastasis is only 4%[5]. Relatively few large-cohort studies have assessed PDGNENs, and limited data are available on their clinicopathological features and outcomes. Therefore, we collected the data of 232 patients with PDGNENs from multiple centers in China and analyzed the clinicopathological characteristics and prognostic factors of these patients, aiming to provide a reference for clinical work on PDGNENs. ACKNOWLEDGMENTS: We are grateful to all patients and all relevant medical workers who participated in this study.
8,058
384
[ 383, 212, 174, 50, 117, 2965, 784, 285, 408, 1338, 157 ]
12
[ "patients", "neuroendocrine", "tumor", "survival", "tumors", "carcinoma", "gastric", "pdgnens", "staining", "mo" ]
[ "prognosis neuroendocrine tumors", "neuroendocrine neoplasms occur", "neuroendocrine neoplasms nens", "gastric neuroendocrine carcinoma", "gastric neuroendocrine tumor" ]
null
[CONTENT] Poorly differentiated gastric neuroendocrine neoplasms | Clinicopathological characteristics | Prognosis | Distant metastasis | Tumor diameter [SUMMARY]
[CONTENT] Poorly differentiated gastric neuroendocrine neoplasms | Clinicopathological characteristics | Prognosis | Distant metastasis | Tumor diameter [SUMMARY]
null
[CONTENT] Poorly differentiated gastric neuroendocrine neoplasms | Clinicopathological characteristics | Prognosis | Distant metastasis | Tumor diameter [SUMMARY]
[CONTENT] Poorly differentiated gastric neuroendocrine neoplasms | Clinicopathological characteristics | Prognosis | Distant metastasis | Tumor diameter [SUMMARY]
[CONTENT] Poorly differentiated gastric neuroendocrine neoplasms | Clinicopathological characteristics | Prognosis | Distant metastasis | Tumor diameter [SUMMARY]
[CONTENT] Aged | China | Female | Humans | Male | Middle Aged | Neoplasm Staging | Neuroendocrine Tumors | Prognosis | Retrospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] Aged | China | Female | Humans | Male | Middle Aged | Neoplasm Staging | Neuroendocrine Tumors | Prognosis | Retrospective Studies | Stomach Neoplasms [SUMMARY]
null
[CONTENT] Aged | China | Female | Humans | Male | Middle Aged | Neoplasm Staging | Neuroendocrine Tumors | Prognosis | Retrospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] Aged | China | Female | Humans | Male | Middle Aged | Neoplasm Staging | Neuroendocrine Tumors | Prognosis | Retrospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] Aged | China | Female | Humans | Male | Middle Aged | Neoplasm Staging | Neuroendocrine Tumors | Prognosis | Retrospective Studies | Stomach Neoplasms [SUMMARY]
[CONTENT] prognosis neuroendocrine tumors | neuroendocrine neoplasms occur | neuroendocrine neoplasms nens | gastric neuroendocrine carcinoma | gastric neuroendocrine tumor [SUMMARY]
[CONTENT] prognosis neuroendocrine tumors | neuroendocrine neoplasms occur | neuroendocrine neoplasms nens | gastric neuroendocrine carcinoma | gastric neuroendocrine tumor [SUMMARY]
null
[CONTENT] prognosis neuroendocrine tumors | neuroendocrine neoplasms occur | neuroendocrine neoplasms nens | gastric neuroendocrine carcinoma | gastric neuroendocrine tumor [SUMMARY]
[CONTENT] prognosis neuroendocrine tumors | neuroendocrine neoplasms occur | neuroendocrine neoplasms nens | gastric neuroendocrine carcinoma | gastric neuroendocrine tumor [SUMMARY]
[CONTENT] prognosis neuroendocrine tumors | neuroendocrine neoplasms occur | neuroendocrine neoplasms nens | gastric neuroendocrine carcinoma | gastric neuroendocrine tumor [SUMMARY]
[CONTENT] patients | neuroendocrine | tumor | survival | tumors | carcinoma | gastric | pdgnens | staining | mo [SUMMARY]
[CONTENT] patients | neuroendocrine | tumor | survival | tumors | carcinoma | gastric | pdgnens | staining | mo [SUMMARY]
null
[CONTENT] patients | neuroendocrine | tumor | survival | tumors | carcinoma | gastric | pdgnens | staining | mo [SUMMARY]
[CONTENT] patients | neuroendocrine | tumor | survival | tumors | carcinoma | gastric | pdgnens | staining | mo [SUMMARY]
[CONTENT] patients | neuroendocrine | tumor | survival | tumors | carcinoma | gastric | pdgnens | staining | mo [SUMMARY]
[CONTENT] nens | pdgnens | incidence | tumors | clinical | nonfunctional | neuroendocrine | neoplasms | neuroendocrine neoplasms | 1000000 [SUMMARY]
[CONTENT] hospital | university | nec | center | components | grading | data | patients | neuroendocrine | tumors [SUMMARY]
null
[CONTENT] prognosis | gastroscopy | patients | year survival rate | survival rate | rate | performed | survival | year survival | diagnosis [SUMMARY]
[CONTENT] patients | survival | neuroendocrine | tumors | tumor | hospital | pdgnens | mo | staining | time [SUMMARY]
[CONTENT] patients | survival | neuroendocrine | tumors | tumor | hospital | pdgnens | mo | staining | time [SUMMARY]
[CONTENT] gastric | NEC | the past few decades ||| [SUMMARY]
[CONTENT] seven | China | March 2007 to November 2019 [SUMMARY]
null
[CONTENT] ||| 5 cm | IV [SUMMARY]
[CONTENT] gastric | NEC | the past few decades ||| ||| seven | China | March 2007 to November 2019 ||| ||| 232 | 191 | 82.3% | 62.83 ± | 9.11 years ||| One hundred and thirteen | 49.34% | 229 | 86 | 37.55% | IV ||| Three | 1.58% | 190 | 187 | 98.42% ||| ||| 89.17% | third | 115/215 | 53.49% ||| 157/232 | 67.67% | 4.66 | 2.77 cm ||| 116/198 | 58.58% ||| 232 | 13.50 mo | 7, 31 | 1-year | 3-year | 5-year | 49% | 19% | 5% ||| American Joint Committee on Cancer | AJCC ||| AJCC ||| ||| 5 cm | IV [SUMMARY]
[CONTENT] gastric | NEC | the past few decades ||| ||| seven | China | March 2007 to November 2019 ||| ||| 232 | 191 | 82.3% | 62.83 ± | 9.11 years ||| One hundred and thirteen | 49.34% | 229 | 86 | 37.55% | IV ||| Three | 1.58% | 190 | 187 | 98.42% ||| ||| 89.17% | third | 115/215 | 53.49% ||| 157/232 | 67.67% | 4.66 | 2.77 cm ||| 116/198 | 58.58% ||| 232 | 13.50 mo | 7, 31 | 1-year | 3-year | 5-year | 49% | 19% | 5% ||| American Joint Committee on Cancer | AJCC ||| AJCC ||| ||| 5 cm | IV [SUMMARY]
Daylight saving time as a potential public health intervention: an observational study of evening daylight and objectively-measured physical activity among 23,000 children from 9 countries.
25341643
It has been proposed that introducing daylight saving measures could increase children's physical activity, but there exists little research on this issue. This study therefore examined associations between time of sunset and activity levels, including using the bi-annual 'changing of the clocks' as a natural experiment.
BACKGROUND
23,188 children aged 5-16 years from 15 studies in nine countries were brought together in the International Children's Accelerometry Database. 439 of these children were of particular interest for our analyses as they contributed data both immediately before and after the clocks changed. All children provided objectively-measured physical activity data from Actigraph accelerometers, and we used their average physical activity level (accelerometer counts per minute) as our primary outcome. Date of accelerometer data collection was matched to time of sunset, and to weather characteristics including daily precipitation, humidity, wind speed and temperature.
METHODS
Adjusting for child and weather covariates, we found that longer evening daylight was independently associated with a small increase in daily physical activity. Consistent with a causal interpretation, the magnitude of these associations was largest in the late afternoon and early evening and these associations were also evident when comparing the same child just before and just after the clocks changed. These associations were, however, only consistently observed in the five mainland European, four English and two Australian samples (adjusted, pooled effect sizes 0.03-0.07 standard deviations per hour of additional evening daylight). In some settings there was some evidence of larger associations between daylength and physical activity in boys. There was no evidence of interactions with weight status or maternal education, and inconsistent findings for interactions with age.
RESULTS
In Europe and Australia, evening daylight seems to play a causal role in increasing children's activity in a relatively equitable manner. Although the average increase in activity is small in absolute terms, these increases apply across all children in a population. Moreover, these small effect sizes actually compare relatively favourably with the typical effect of intensive, individual-level interventions. We therefore conclude that, by shifting the physical activity mean of the entire population, the introduction of additional daylight saving measures could yield worthwhile public health benefits.
CONCLUSIONS
[ "Accelerometry", "Activities of Daily Living", "Adolescent", "Body Weight", "Child", "Child, Preschool", "Cross-Sectional Studies", "Female", "Humans", "Male", "Motor Activity", "Photoperiod", "Play and Playthings", "Public Health", "Socioeconomic Factors" ]
4364628
Background
Physical activity confers substantial physical and mental health benefits in children [1–5], but most children around the world do not meet current activity guidelines [6]. For children as for adults, successfully promoting physical activity is likely to require both individual-level and population-level interventions [7]. The latter are important because, following the insights of Geoffrey Rose [8], even a small shift in a population mean can yield important public health benefits. One potential population-level measure which has received some policy attention in recent years concerns the introduction of additional daylight saving measures [9]. Although the total number of hours of daylight in the day is fixed, many countries modify when those hours fall by ‘changing the clocks’ – for example, putting the clocks forward in the summer to shift daylight hours from the very early morning to the evening. Recent decades have seen recurrent political debates surrounding daylight saving measures in several countries. For example, several Australian states have held repeated referenda on the topic, and the issue even spawned the creation in 2008 of the single-issue political party ‘Daylight Saving for South East Queensland’. Similarly a Bill was debated in the British Parliament between 2010 and 2012 which proposed to shift the clocks forward by an additional hour year round. This change would have given British children an estimated average of 200 extra waking daylight hours per year [10], and the logo of the associated civil society campaign depicted children playing outdoors in the evening sunlight. The Bill’s accompanying research paper listed “increased opportunities for outdoor activity” alongside other potential health and environmental benefits, such as reducing road traffic crashes and cutting domestic energy use [11]. A similar argument about leisure-time activity has featured in the Australian debate [12]. The British Bill’s research paper did not, however, cite any evidence to support its claims about physical activity, and nor does much evidence exist regarding likely impacts on children. Many studies have certainly reported that children’s physical activity is generally higher in the summer than in the winter, as reviewed in [13–15]. Very few studies, however, examine whether seasonal differences persist after adjustment for weather conditions, or whether the seasonal patterning of physical activity across the day is consistent with a causal effect of evening daylight. One study which did examine these issues in detail found that seasonal differences in physical activity were greatest in the late afternoon and early evening, which is what one would expect if time of sunset did play a causal role [16]. This study had some major limitations, however, including its small sample size (N = 325), its restriction to a single setting in south-east England, and its failure to adjust for temperature. This paper therefore revisited this question in a much larger, international sample. Our first broad aim was to test the hypotheses that (i) longer evening daylight is associated with higher total physical activity, even after adjusting for weather conditions; and (ii) these overall differences in physical activity are greatest in the late afternoon and early evening. Given our uniquely large sample size, we were also able to use countries’ bi-annual changing of the clocks as a natural experiment, i.e. 
as an event or intervention not designed for research purposes but which can nevertheless provide valuable research opportunities [17]. Specifically, we tested the hypothesis that the same child measured immediately before and immediately after the clocks changed would be more active on the days where sunset had been moved an hour later. Our second broad aim was to examine whether any associations between evening daylight and activity levels differed by study setting, sex, age, weight status or socio-economic position.
Methods
Study design: The International Children's Accelerometry Database (http://www.mrc-epid.cam.ac.uk/research/studies/icad/) was established to pool objectively-measured physical activity data from studies using the Actigraph accelerometer in children worldwide. The aims, design and methods of ICAD have been described in detail elsewhere [18]. Formal data sharing agreements were established and all partners consulted their individual research board to confirm that sufficient ethical approval had been attained for contributing data.

Participants: The full ICAD database pools accelerometer data from 20 studies conducted in ten countries between 1997 and 2009 [18]. In this paper, we excluded four studies which focussed on pre-school children and one study for which date of measurement was not available. We used baseline data from all of the 15 remaining studies, plus follow-up measurements in the seven longitudinal studies and one natural experimental study (Additional file 1: Table A1). We also used follow-up measurements from the control group of one of the two randomised controlled trials, as for this study it was possible to distinguish intervention and control groups. Among 23,354 individuals aged 5–16 years old in the 15 eligible studies, we excluded 1.7% of measurement days (0.7% of individuals) because of missing data on age, sex, weight status or weather conditions. Our resulting study population consisted of 23,188 participants who between them provided 158,784 days of valid data across 31,378 time points (Table 1). Although our full study population included children providing data from any part of the year, one of our analyses was limited to 439 children who were sampled during a week which spanned the clock change (51% female, age range 5–16, 1830 measurement days).

Table 1. Descriptive characteristics of study participants

                                N (%) participants   N (%) valid days
Full sample                     23,188 (100%)        158,784 (100%)
Sex
  Male                          8819 (38%)           62,745 (40%)
  Female                        14,369 (62%)†        96,039 (60%)
Age
  5-6 years                     1800 (8%)            7855 (5%)
  7-8 years                     711 (3%)             4963 (3%)
  9-10 years                    5769 (25%)           30,702 (19%)
  11-12 years                   9616 (41%)           61,352 (39%)
  13-14 years                   4206 (18%)           46,530 (29%)
  15-16 years                   1086 (5%)            7382 (5%)
Country [No. studies]
  Australia [N = 2]             2459 (11%)           18,679 (12%)
  Brazil [N = 1]                453 (2%)             1577 (1%)
  Denmark [N = 2]               2031 (9%)            11,030 (7%)
  England [N = 4]               10,284 (44%)         83,420 (53%)
  Estonia [N = 1]               656 (3%)             2537 (2%)
  Madeira [N = 1]               1214 (5%)            4899 (3%)
  Norway [N = 1]                384 (2%)             1459 (1%)
  Switzerland [N = 1]           404 (2%)             2569 (2%)
  United States [N = 2]         5303 (23%)           32,614 (21%)
Weight status
  Normal/underweight            17,573 (76%)         121,350 (76%)
  Overweight                    4116 (18%)           27,967 (18%)
  Obese                         1499 (6%)            9467 (6%)
Mother's education
  Up to high school             7422 (48%)           54,547 (48%)
  College/vocational            2656 (17%)           19,352 (17%)
  University level              5251 (34%)           38,723 (34%)

For individuals measured more than once, the first column gives age and weight status at baseline while the second column gives age and weight status during the measurement period in question. Numbers add up to less than the total for mother's education because this variable was only collected in 11 of the 15 studies, and was also subject to some missing data within those 11 studies (see Additional file 1: Tables A1 and A2). †Proportion of girls 52% after excluding one large American study that measured girls only.

Measurement of physical activity: All physical activity measurements were made with uniaxial, waist-worn Actigraph accelerometers (models 7164, 71256 and GT1M); these are a family of accelerometers that have been shown to provide reliable and valid measurement of physical activity in children and adolescents [19–21]. All raw accelerometer data files were re-analysed to provide physical activity outcome variables that could be directly compared across studies (see [18] for details). Data files were reintegrated to a 60 second epoch where necessary and processed using commercially available software (KineSoft v3.3.20, Saskatchewan, Canada). Non-wear time was defined as 60 minutes of consecutive zeros allowing for 2 minutes of non-zero interruptions [22]. We restricted our analysis of activity data to the time period 07:00 to 22:59, and defined a valid measurement day as one recording at least 500 minutes of wear time during this time period (18% of days excluded as invalid). When examining the pattern of physical activity across the day, we only included hours with at least 30 minutes of measured wear time. Each participating child provided an average of 5.1 days across the week in which they were measured (range 1–7); we did not require a minimum number of valid days of accelerometer data per child because days, not children, were our primary units of analysis. Although we sought to limit our analyses to activity during waking hours, we unfortunately lacked reliable data on the time children went to sleep or woke up. While most children took their accelerometers off to sleep, on 6% of days there was evidence of overnight wear, defined as ≥5 minutes of weartime between 01:00 and 04:59. On these days, we assumed the child was in fact sleeping during any hour between 21:00 and 07:59 for which the mean accelerometer counts per minute (cpm) was below 50. Mean cpm values of under 50 were observed for 90% of hours recorded between 03:00 and 03:59 but only 3% of hours recorded between 19:00 and 19:59, suggesting this cut-point provided a reasonable proxy for sleeping time among children for whom we had reason to suspect overnight wear. Our findings were unchanged in sensitivity analyses which instead used thresholds of 30 cpm or 100 cpm to exclude suspected sleeping time, or which excluded altogether the 6% of days with suspected overnight wear. Our pre-specified primary outcome measure was the child's average counts per minute. Substantive findings were similar in sensitivity analyses which instead used percent time spent in moderate-to-vigorous physical activity (MVPA), defined either as ≥3000 cpm [23] or ≥2296 cpm [24]. For our key findings, we present these MVPA results (using the ≥3000 cpm cut-off) alongside the results for mean cpm.
In order to facilitate interpretation of these MVPA results, we additionally convert the observed percentage times into approximate absolute minutes on the assumption of a 14-hour average waking day [25].
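The cleaning rules above (non-wear bouts, the 07:00-22:59 window, the 500-minute validity threshold, and the ≥3000 cpm MVPA cut-point) are concrete enough to sketch in code. The following Python fragment is a minimal interpretation of those stated rules, not the authors' KineSoft pipeline; it assumes minute-level counts in a pandas Series with a DatetimeIndex.

```python
# Minimal sketch of the stated cleaning rules, assuming minute-level Actigraph
# counts in a pandas Series indexed by timestamp. This interprets the rules in
# the text; it is not the authors' actual processing pipeline.
import pandas as pd

def nonwear_mask(cpm: pd.Series, window: int = 60, tolerance: int = 2) -> pd.Series:
    """Flag minutes inside runs of >= `window` minutes of zero counts,
    allowing up to `tolerance` minutes of non-zero interruptions."""
    mask = pd.Series(False, index=cpm.index)
    values = cpm.to_numpy()
    i = 0
    while i < len(values):
        j, interruptions = i, 0
        while j < len(values) and (values[j] == 0 or interruptions < tolerance):
            if values[j] != 0:
                interruptions += 1
            j += 1
        if j - i >= window:
            mask.iloc[i:j] = True  # whole bout flagged, interruptions included
        i = max(j, i + 1)
    return mask

def summarise_day(day_cpm: pd.Series) -> dict:
    """Day-level rules: 07:00-22:59 only; valid if >= 500 wear minutes."""
    day = day_cpm.between_time("07:00", "22:59")
    wear = day[~nonwear_mask(day)]
    return {
        "valid": len(wear) >= 500,                  # valid measurement day
        "mean_cpm": wear.mean(),                    # primary outcome
        "mvpa_minutes": int((wear >= 3000).sum()),  # >= 3000 cpm cut-point
    }
```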
Time of sunset and covariates: For each day of accelerometer wear, we used http://www.timeanddate.com to assign time of sunset on that specific date in the city in which, or nearest which, data collection took place. We also used the date and the city of data collection to assign six weather variables to each day: total precipitation across the day, mean humidity across the day, maximum daily wind speed, mean daily temperature, maximum departure of temperature above the daily mean, and maximum departure of temperature below the daily mean. We accessed these data using Mathematica 9 (Wolfram Research), which compiles daily information from a wide range of weather stations run by states, international bodies or public-private partnerships [26]. The correlation between hour of sunset and mean temperature was moderately but not prohibitively high (r = 0.59), while correlations with other weather covariates were modest (r < 0.30). The child's height and weight were measured in the original studies using standardized clinical procedures, and we used these to calculate body mass index (kg/m2). Participants were categorized as underweight/normal weight, overweight or obese according to age and sex-specific cut points [27]. Maternal education was assessed in 11/15 studies, and was re-coded to distinguish between 'high school or lower' education versus 'college or university' education (Additional file 1: Table A2).
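Matching each measurement date to local sunset can also be scripted rather than looked up manually. The sketch below uses the third-party astral package as a stand-in for the timeanddate.com lookup described above; the city, coordinates and date are illustrative, and computed times may differ from the website's by a minute or two.

```python
# Sketch: compute local sunset for a given study city and measurement date.
# The astral package is a stand-in for the manual lookup described above;
# the location and date here are illustrative placeholders.
import datetime
from zoneinfo import ZoneInfo
from astral import LocationInfo
from astral.sun import sun

city = LocationInfo("Cambridge", "England", "Europe/London", 52.205, 0.119)
date = datetime.date(2008, 6, 21)
sunset = sun(city.observer, date=date, tzinfo=ZoneInfo(city.timezone))["sunset"]
print(sunset)  # timezone-aware datetime of local sunset on that date
```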
Statistical analyses: Both time of sunset and weather vary between individual days, and we therefore used days not children as our units of analysis. We adjusted for the clustering of days within children using robust standard errors. All analyses used Stata 13.1. To address our first aim, we fit linear regression models with the outcome being daily or hourly activity cpm. Time of sunset was the primary explanatory variable of interest, with adjustment for study population, age, sex, weight status, day of the week and the six weather covariates. When using the changing of the clocks as a natural experiment, we restricted our analyses to the 439 children with at least one valid school day measurement both in the week before and in the week after the clocks changed (e.g. Wednesday, Thursday and Friday before the clocks changed and Monday and Tuesday afterwards). To address our second aim, we calculated the adjusted effect size of evening daylight separately for each study population. We used forest plots to present the fifteen resulting effect sizes, together with an I2 value representing between-study heterogeneity and with an overall pooled effect size estimated using random effects meta-analysis [28]. We sometimes converted pooled estimates into standardised effect sizes by dividing by the standard deviation of activity cpm for the population in question. We then proceeded to fit interaction terms between evening daylight and the four pre-specified characteristics of sex, age, weight status and maternal education. These four characteristics were selected a priori as characteristics that we felt to be of interest and that were relatively consistently measured across the ICAD studies. We fit these interaction terms after stratifying by study population, and calculated I2 values and pooled effect sizes. When examining interactions with age, we restricted our analyses to children aged 9–15 as most measurement days (91%) were of children between these ages.
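The paper's first-aim specification (day-level linear regression with standard errors clustered on child) was run in Stata 13.1; a roughly equivalent model can be written with statsmodels, as sketched below. The file name (icad_days.csv) and all variable names are hypothetical placeholders for the day-level dataset described above.

```python
# Sketch of the first-aim day-level regression: daily mean cpm on hour of
# sunset, adjusting for the stated covariates, with child-clustered standard
# errors. Mirrors the Stata specification; all names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

days = pd.read_csv("icad_days.csv")  # one row per valid measurement day
model = smf.ols(
    "mean_cpm ~ sunset_hour + C(study) + age + C(sex) + C(weight_status)"
    " + C(weekday) + precipitation + humidity + wind_speed + mean_temp"
    " + temp_above + temp_below",
    data=days,
).fit(cov_type="cluster", cov_kwds={"groups": days["child_id"]})
print(model.summary())  # coefficient on sunset_hour is the quantity of interest
```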
Results
The characteristics of the participants are summarised in Table 1. Of the measurement days, 66% were schooldays and 38% of days had no precipitation. The average daily temperature was 12°C (range -21 to 33°C, inter-quartile range 7 to 16°C). Mean daily weartime was 773 minutes (12.9 hours), and this was similar regardless of time of sunset (e.g. regression coefficient +1.40 minutes for days with sunset 18:00–19:59 versus pre-18:00 after adjusting for study population, age and sex; and -2.5 minutes for days with sunset post-20:00 versus pre-18:00).

Evening daylight and overall activity levels: A later hour of sunset (i.e. extended evening daylight) was associated with increased daily activity across the full range of time of sunset, and this association was only partly attenuated after adjusting for the six weather covariates (Figure 1). Here and for all findings reported subsequently, substantive findings were similar in sensitivity analyses which instead used percent time spent in MVPA. The adjusted difference in overall daily activity between days with sunset after 21:00 vs. before 17:00 was 75 cpm (95% CI 67, 84). The equivalent difference for percent daily time in MVPA was 0.72% (95% CI 0.60, 0.84) using the ≥3000 cpm cut-point, which translates into around 6 minutes. To put the values on the y-axis in context, participants had a mean daily activity count of 560 cpm (649 in boys, 503 in girls), and spent an average of 4.0% of their day, or 33 minutes, in MVPA (5.2%/43 minutes in boys, 3.1%/26 minutes in girls). The adjusted differences between the days with more versus less evening daylight are therefore modest but not trivial in relation to children's overall activity levels.

Figure 1: Association between time of sunset and total daily activity. CI = confidence interval, cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Minimally-adjusted analyses adjust for age, sex and study population; additionally-adjusted analyses also include day of the week, weight status and (most importantly) the six weather covariates. Hour of sunset is rounded down, e.g. '18' covers '18:00–18:59', and the reference group is sunset before 17:00.
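As a check on the percent-to-minutes conversions used here and in the next subsection, the stated 14-hour (840-minute) waking-day assumption implies 8.4 minutes per percentage point of the day, and the four-hour 17:00-20:59 window implies 2.4 minutes per point:

```latex
% Worked conversions under the stated assumptions:
%   whole day  = 14 h = 840 min;  window 17:00-20:59 = 4 h = 240 min
\Delta t_{\text{day}} \approx \frac{0.72}{100} \times 840\ \text{min} \approx 6\ \text{min},
\qquad
\Delta t_{17\text{--}21} \approx \frac{0.84}{100} \times 240\ \text{min} \approx 2\ \text{min}
```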
Evening daylight and the patterning of activity across the day: Consistent with a causal interpretation, hour-by-hour analyses indicated that it was in the late afternoon and evening that the duration of evening daylight was most strongly associated with hourly physical activity levels (Figure 2). This was true on both schooldays and weekend/holiday days, with the period of the day when physical activity fell fastest corresponding to the timing of sunset (e.g. falling fastest between 18:00 and 19:00 on days when the sun also set between those hours). Similarly, when comparing the subsample of 439 children who were measured on schooldays immediately before and immediately after the changing of the clocks, there was strong evidence that children were more active during the evening of the days with later sunset (Figure 3). Between 17:00 and 20:59 the mean increase in physical activity on the days with later sunset was 94 cpm per hour (95% CI 62, 125); the equivalent increase in percent of time spent in MVPA was 0.84% (95% CI 0.40%, 1.28%) or 2.0 minutes.

Figure 2: Adjusted mean activity counts per minute across the hours of the day, according to the time of sunset. cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Analyses adjust for study population, age, sex, day of the week, weight status and the six weather variables, with a reference group of 09:00 on days with sunset before 18:00. Hours are rounded down, e.g. '18' covers '18:00–18:59'. Confidence intervals not presented as they are generally too narrow to be clearly visible: Additional file 1: Figure A1 includes a version of this graph with confidence intervals.

Figure 3: Mean physical activity across the hours of the day, comparing children either side of the changing of the clocks. CI = confidence interval, cpm = counts per minute. Analysis based on 1830 schooldays from 439 children from 11 studies in 9 countries. Analyses restricted to children with at least one valid schoolday measurement day both before and after the clocks changed; to increase power, data from across the spring and autumn clock changes are pooled. Hours are grouped into two-hour time periods to increase power and are rounded down, e.g. '7-8' covers '07:00–08:59'. Adjustment was not essential as each child serves as his or her own control, but the results were similar in adjusted analyses.

Importantly, Figure 2 and Figure 3 show no association between hour of sunset and activity levels in the morning, and generally no association in the early afternoon (with the exception of a modest effect on weekend/holiday days as early as 14:00 in Figure 2). This suggests that the association between evening daylight and physical activity cannot readily be explained by residual confounding by weather conditions, since any effects of weather would generally be expected to operate more evenly across the day [16]. These findings also provide no suggestion that later sunrise is associated with reduced activity in the morning, including on days when the sun set before 18:00 and on which the average time of sunrise was not until 07:27 (inter-quartile range 07:05 to 07:54).
Examining differences by place, sex, age, weight and maternal education

As shown in Figure 4, there was strong evidence that the association between evening daylight and physical activity varied systematically between settings (I2 = 75%, p < 0.001, for overall heterogeneity between the 15 studies). Specifically, there was relatively consistent evidence that evening daylight was associated with higher average physical activity in mainland Europe, England and (to a lesser extent) Australia. The pooled point estimates of the increase in daily mean activity in these three settings were 20.2 cpm, 15.7 cpm and 8.2 cpm per additional hour of evening daylight; these changes translate into standardised effect sizes of 0.07, 0.06 and 0.03, respectively. The equivalent effect sizes in terms of percent of daily time spent in MVPA were 0.19%, 0.20% and 0.05% per additional hour of evening daylight, corresponding to around 1.6 minutes, 1.7 minutes and 0.4 minutes respectively. By contrast, there was little or no consistent evidence of associations with evening daylight in the American samples or in the Madeiran and Brazilian samples, with standardised effect sizes ranging from -0.02 to +0.01 and in all cases non-significant. A post-hoc univariable meta-regression analysis provided some evidence that the smaller magnitude of the associations in these latter settings might reflect their higher maximum temperatures (adjusted R2 = 51%, p = 0.01; see Additional file 1: Figure A2 and accompanying text).

Figure 4: Association between evening daylight and physical activity across study populations, and pooled effect sizes for interactions by sex, age, weight status and maternal education. cpm = counts per minute. Analysis based on 23,188 children from 15 studies in 9 countries, except for the comparison of maternal education, which is based on 15,563 children in 11 studies in 8 countries (see Additional file 1: Tables A1 and A2 for details of studies providing maternal education data). On the left, random-effects pooled estimates are presented by country/region, together with 95% confidence intervals. Points to the right of the line indicate that longer evening daylight is associated with increased mean daily cpm; points to the left indicate the reverse. On the right, pooled effect sizes and 95% confidence intervals are shown following tests for interaction, with the adjusted interaction term representing the difference that the interaction variable (e.g. sex) makes to the effect size for evening daylight upon total daily activity measured in cpm. For interaction terms stratified by study population see Additional file 1: Figures A3 and A4.

Although associations with evening daylight varied markedly between settings, there was less convincing evidence for interactions with the child’s characteristics (Figure 4, plus Additional file 1: Figures A3-A4). This lack of an interaction was clearest for weight status and maternal education, with neither variable showing any significant interaction in any of the five study settings, and with the overall pooled effect sizes also non-significant. By contrast, the non-significant pooled interaction terms for sex and age were harder to interpret, as in both cases there was some evidence of between-study heterogeneity (0.01 ≤ p ≤ 0.02). With respect to sex, this heterogeneity reflected the fact that the association between evening daylight and physical activity tended to be stronger in boys than in girls in some European and English samples, but this was not the case in the other settings (Additional file 1: Figure A3). With respect to age, there was no very obvious pattern: the magnitude of the association with evening daylight was greater among younger children in Denmark, was greater among older children in Australia, and did not differ according to age in the remaining three settings for which sufficient data were available.
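Study-specific estimates were pooled by random-effects meta-analysis, with I2 summarising between-study heterogeneity. The sketch below shows the standard DerSimonian–Laird computation on made-up inputs; it illustrates the method named in the text, not the authors' Stata code.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooled estimate, its standard error and I^2,
    from per-study effect sizes and standard errors."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                            # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)     # fixed-effect mean
    q = np.sum(w * (effects - mu_fe) ** 2)      # Cochran's Q
    k = len(effects)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    w_re = 1.0 / (ses**2 + tau2)                # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return mu_re, se_re, i2

# Hypothetical per-study estimates (cpm per extra hour of evening daylight)
effects = [20.2, 15.7, 8.2, -1.0, 2.0]
ses     = [ 3.0,  2.5, 3.5,  3.0, 4.0]
mu, se, i2 = dersimonian_laird(effects, ses)
print(f"pooled = {mu:.1f} cpm (95% CI {mu - 1.96*se:.1f}, {mu + 1.96*se:.1f}), I^2 = {i2:.0f}%")
```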
Conclusions
This study provides the strongest evidence to date that, in European and Australian settings, evening daylight plays a causal role in increasing physical activity in the late afternoon and early evening – a period which has been described as the ‘critical hours’ for children’s physical activity [34]. In these settings, it seems possible that additional daylight saving measures could shift mean population levels of children's physical activity by an amount which, although small in absolute terms, would not be trivial relative to what can feasibly be achieved through other approaches. Moreover, our findings also suggest that this effect might operate in a relatively equitable way. As such, while daylight saving proposals such as those recently considered in Britain would not solve the problem of inadequate levels of child physical activity, this paper indicates that they could represent a small step in the right direction.
[ "Background", "Study design", "Participants", "Measurement of physical activity", "Time of sunset and covariates", "Statistical analyses", "Evening daylight and overall activity levels", "Evening daylight and the patterning of activity across the day", "Examining differences by place, sex, age, weight and maternal education", "Limitations and directions for future research", "Implications for policy and practice", "" ]
[ "Physical activity confers substantial physical and mental health benefits in children\n[1–5], but most children around the world do not meet current activity guidelines\n[6]. For children as for adults, successfully promoting physical activity is likely to require both individual-level and population-level interventions\n[7]. The latter are important because, following the insights of Geoffrey Rose\n[8], even a small shift in a population mean can yield important public health benefits.\nOne potential population-level measure which has received some policy attention in recent years concerns the introduction of additional daylight saving measures\n[9]. Although the total number of hours of daylight in the day is fixed, many countries modify when those hours fall by ‘changing the clocks’ – for example, putting the clocks forward in the summer to shift daylight hours from the very early morning to the evening. Recent decades have seen recurrent political debates surrounding daylight saving measures in several countries. For example, several Australian states have held repeated referenda on the topic, and the issue even spawned the creation in 2008 of the single-issue political party ‘Daylight Saving for South East Queensland’. Similarly a Bill was debated in the British Parliament between 2010 and 2012 which proposed to shift the clocks forward by an additional hour year round. This change would have given British children an estimated average of 200 extra waking daylight hours per year\n[10], and the logo of the associated civil society campaign depicted children playing outdoors in the evening sunlight. The Bill’s accompanying research paper listed “increased opportunities for outdoor activity” alongside other potential health and environmental benefits, such as reducing road traffic crashes and cutting domestic energy use\n[11]. A similar argument about leisure-time activity has featured in the Australian debate\n[12].\nThe British Bill’s research paper did not, however, cite any evidence to support its claims about physical activity, and nor does much evidence exist regarding likely impacts on children. Many studies have certainly reported that children’s physical activity is generally higher in the summer than in the winter, as reviewed in\n[13–15]. Very few studies, however, examine whether seasonal differences persist after adjustment for weather conditions, or whether the seasonal patterning of physical activity across the day is consistent with a causal effect of evening daylight. One study which did examine these issues in detail found that seasonal differences in physical activity were greatest in the late afternoon and early evening, which is what one would expect if time of sunset did play a causal role\n[16]. This study had some major limitations, however, including its small sample size (N = 325), its restriction to a single setting in south-east England, and its failure to adjust for temperature.\nThis paper therefore revisited this question in a much larger, international sample. Our first broad aim was to test the hypotheses that (i) longer evening daylight is associated with higher total physical activity, even after adjusting for weather conditions; and (ii) these overall differences in physical activity are greatest in the late afternoon and early evening. Given our uniquely large sample size, we were also able to use countries’ bi-annual changing of the clocks as a natural experiment, i.e. 
as an event or intervention not designed for research purposes but which can nevertheless provide valuable research opportunities\n[17]. Specifically, we tested the hypothesis that the same child measured immediately before and immediately after the clocks changed would be more active on the days where sunset had been moved an hour later. Our second broad aim was to examine whether any associations between evening daylight and activity levels differed by study setting, sex, age, weight status or socio-economic position.", "The International Children’s Accelerometry Database (http://www.mrc-epid.cam.ac.uk/research/studies/icad/) was established to pool objectively-measured physical activity data from studies using the Actigraph accelerometer in children worldwide. The aims, design and methods of ICAD have been described in detail elsewhere\n[18]. Formal data sharing agreements were established and all partners consulted their individual research board to confirm that sufficient ethical approval had been attained for contributing data.", "The full ICAD database pools accelerometer data from 20 studies conducted in ten countries between 1997 and 2009\n[18]. In this paper, we excluded four studies which focussed on pre-school children and one study for which date of measurement was not available. We used baseline data from all of the 15 remaining studies, plus follow-up measurements in the seven longitudinal studies and one natural experimental study (Additional file\n1: Table A1). We also used follow-up measurements from the control group of one of the two randomised controlled trials, as for this study it was possible to distinguish intervention and control groups.\nAmong 23,354 individuals aged 5–16 years old in the 15 eligible studies, we excluded 1.7% of measurement days (0.7% of individuals) because of missing data on age, sex, weight status or weather conditions. Our resulting study population consisted of 23,188 participants who between them provided 158,784 days of valid data across 31,378 time points (Table \n1). Although our full study population included children providing data from any part of the year, one of our analyses was limited to 439 children who were sampled during a week which spanned the clock change (51% female, age range 5–16, 1830 measurement days).Table 1\nDescriptive characteristics of study participants\nN (%) participantsN (%) valid daysFull sample23,188 (100%)158,784 (100%)SexMale8819 (38%)62,745 (40%)Female14,369 (62%)†\n96,039 (60%)Age5-6 years1800 (8%)7855 (5%)7-8 years711 (3%)4963 (3%)9-10 years5769 (25%)30,702 (19%)11-12 years9616 (41%)61,352 (39%)13-14 years4206 (18%)46,530 (29%)15-16 years1086 (5%)7382 (5%)Country [No. studies]Australia [N = 2]2459 (11%)18,679 (12%)Brazil [N = 1]453 (2%)1577 (1%)Denmark [N = 2]2031 (9%)11,030 (7%)England [N = 4]10,284 (44%)83,420 (53%)Estonia [N = 1]656 (3%)2537 (2%)Madeira [N = 1]1214 (5%)4899 (3%)Norway [N = 1]384 (2%)1459 (1%)Switzerland [N = 1]404 (2%)2569 (2%)United States [N = 2]5303 (23%)32,614 (21%)Weight statusNormal/underweight17,573 (76%)121,350 (76%)Overweight4116 (18%)27,967 (18%)Obese1499 (6%)9467 (6%)Mother’s educationUp to high school7422 (48%)54,547 (48%)College/vocational2656 (17%)19,352 (17%)University level5251 (34%)38,723 (34%)For individuals measured more than once, the first column gives age and weight status at baseline while the second column gives age and weight status during the measurement period in question. 
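Weight status in Table 1 derives from measured height and weight: BMI (kg/m2) classified against age- and sex-specific cut points [27], as described under the covariates below. The sketch that follows shows the shape of that classification step; the numeric cut points are placeholders for a hypothetical 10-year-old boy, not values quoted in the paper.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m**2

def weight_status(bmi_value: float, overweight_cut: float, obese_cut: float) -> str:
    """Classify BMI against externally supplied age- and sex-specific
    cut points (passed in rather than hard-coded, since the real
    thresholds vary by age and sex)."""
    if bmi_value >= obese_cut:
        return "obese"
    if bmi_value >= overweight_cut:
        return "overweight"
    return "normal/underweight"

# Placeholder cut points for a hypothetical 10-year-old boy
print(weight_status(bmi(40.0, 1.40), overweight_cut=19.8, obese_cut=24.0))
```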
Measurement of physical activity

All physical activity measurements were made with uniaxial, waist-worn Actigraph accelerometers (models 7164, 71256 and GT1M); these are a family of accelerometers that have been shown to provide reliable and valid measurement of physical activity in children and adolescents [19–21]. All raw accelerometer data files were re-analysed to provide physical activity outcome variables that could be directly compared across studies (see [18] for details). Data files were reintegrated to a 60 second epoch where necessary and processed using commercially available software (KineSoft v3.3.20, Saskatchewan, Canada). Non-wear time was defined as 60 minutes of consecutive zeros, allowing for 2 minutes of non-zero interruptions [22].

We restricted our analysis of activity data to the time period 07:00 to 22:59, and defined a valid measurement day as one recording at least 500 minutes of wear time during this time period (18% of days excluded as invalid). When examining the pattern of physical activity across the day, we only included hours with at least 30 minutes of measured wear time. Each participating child provided an average of 5.1 days across the week in which they were measured (range 1–7); we did not require a minimum number of valid days of accelerometer data per child because days, not children, were our primary units of analysis.

Although we sought to limit our analyses to activity during waking hours, we unfortunately lacked reliable data on the time children went to sleep or woke up. While most children took their accelerometers off to sleep, on 6% of days there was evidence of overnight wear, defined as ≥5 minutes of weartime between 01:00 and 04:59. On these days, we assumed the child was in fact sleeping during any hour between 21:00 and 07:59 for which the mean accelerometer counts per minute (cpm) was below 50. Mean cpm values of under 50 were observed for 90% of hours recorded between 03:00 and 03:59 but only 3% of hours recorded between 19:00 and 19:59, suggesting this cut-point provided a reasonable proxy for sleeping time among children for whom we had reason to suspect overnight wear. Our findings were unchanged in sensitivity analyses which instead used thresholds of 30 cpm or 100 cpm to exclude suspected sleeping time, or which excluded altogether the 6% of days with suspected overnight wear.

Our pre-specified primary outcome measure was the child’s average counts per minute. Substantive findings were similar in sensitivity analyses which instead used percent time spent in moderate-to-vigorous physical activity (MVPA), defined either as ≥3000 cpm [23] or ≥2296 cpm [24]. For our key findings, we present these MVPA results (using the ≥3000 cpm cut-off) alongside the results for mean cpm. In order to facilitate interpretation of these MVPA results, we additionally convert the observed percentage times into approximate absolute minutes on the assumption of a 14-hour average waking day [25].
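The non-wear rule (60 minutes of consecutive zeros, allowing 2 minutes of non-zero interruptions) is an algorithm one can state precisely. The sketch below is one reading of that definition applied to a minute-level count vector; the original processing was done in KineSoft, so implementation details may differ.

```python
def flag_non_wear(counts, window=60, spike_tolerance=2):
    """Return a boolean list marking minutes inside a non-wear bout:
    >= `window` minutes of zero counts, allowing up to
    `spike_tolerance` interrupting non-zero minutes within the bout."""
    n = len(counts)
    non_wear = [False] * n
    i = 0
    while i < n:
        if counts[i] != 0:
            i += 1
            continue
        # Grow a candidate bout from i, tolerating brief interruptions;
        # `end` tracks the last zero-count minute still inside the bout.
        j, spikes, end = i, 0, i
        while j < n:
            if counts[j] == 0:
                end = j
            else:
                spikes += 1
                if spikes > spike_tolerance:
                    break
            j += 1
        if end - i + 1 >= window:
            for k in range(i, end + 1):
                non_wear[k] = True
        i = end + 1
    return non_wear

# Toy example: 30 zeros, a 1-minute interruption, then 40 more zeros
# forms a single 71-minute non-wear bout.
counts = [0] * 30 + [5] + [0] * 40
print(sum(flag_non_wear(counts)))  # 71
```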
Time of sunset and covariates

For each day of accelerometer wear, we used http://www.timeanddate.com to assign time of sunset on that specific date in the city in which, or nearest which, data collection took place. We also used the date and the city of data collection to assign six weather variables to each day: total precipitation across the day, mean humidity across the day, maximum daily wind speed, mean daily temperature, maximum departure of temperature above the daily mean, and maximum departure of temperature below the daily mean. We accessed these data using Mathematica 9 (Wolfram Research), which compiles daily information from a wide range of weather stations run by states, international bodies or public-private partnerships [26]. The correlation between hour of sunset and mean temperature was moderately but not prohibitively high (r = 0.59), while correlations with other weather covariates were modest (r < 0.30).

The child’s height and weight were measured in the original studies using standardized clinical procedures, and we used these to calculate body mass index (kg/m2). Participants were categorized as underweight/normal weight, overweight or obese according to age- and sex-specific cut points [27]. Maternal education was assessed in 11 of the 15 studies, and was re-coded to distinguish between ‘high school or lower’ education and ‘college or university’ education (Additional file 1: Table A2).

Statistical analyses

Both time of sunset and weather vary between individual days, and we therefore used days, not children, as our units of analysis. We adjusted for the clustering of days within children using robust standard errors. All analyses used Stata 13.1.

To address our first aim, we fit linear regression models with the outcome being daily or hourly activity cpm. Time of sunset was the primary explanatory variable of interest, with adjustment for study population, age, sex, weight status, day of the week and the six weather covariates. When using the changing of the clocks as a natural experiment, we restricted our analyses to the 439 children with at least one valid school day measurement both in the week before and in the week after the clocks changed (e.g. Wednesday, Thursday and Friday before the clocks changed and Monday and Tuesday afterwards).

To address our second aim, we calculated the adjusted effect size of evening daylight separately for each study population. We used forest plots to present the fifteen resulting effect sizes, together with an I2 value representing between-study heterogeneity and with an overall pooled effect size estimated using random-effects meta-analysis [28]. We sometimes converted pooled estimates into standardised effect sizes by dividing by the standard deviation of activity cpm for the population in question. We then proceeded to fit interaction terms between evening daylight and the four pre-specified characteristics of sex, age, weight status and maternal education. These four characteristics were selected a priori as characteristics that we felt to be of interest and that were relatively consistently measured across the ICAD studies. We fit these interaction terms after stratifying by study population, and calculated I2 values and pooled effect sizes. When examining interactions with age, we restricted our analyses to children aged 9–15, as most measurement days (91%) were of children between these ages.
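The core model is a linear regression of daily cpm on hour of sunset and covariates, with standard errors robust to the clustering of days within children. The original analyses used Stata 13.1; the sketch below shows an equivalent specification in Python's statsmodels on synthetic data, with a reduced covariate set for brevity (the real model also includes study population, weight status, day of the week and all six weather variables).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the analysis dataset: one row per measurement day.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "child_id": rng.integers(0, 100, n),      # days cluster within children
    "sunset_hour": rng.integers(17, 22, n),   # hour of sunset, rounded down
    "age": rng.integers(5, 17, n),
    "sex": rng.choice(["M", "F"], n),
    "temp_mean": rng.normal(12, 6, n),
})
df["cpm"] = 400 + 15 * (df["sunset_hour"] - 17) + rng.normal(0, 150, n)

# Daily cpm regressed on categorical hour of sunset plus covariates,
# with cluster-robust standard errors by child.
model = smf.ols(
    "cpm ~ C(sunset_hour) + age + C(sex) + temp_mean", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["child_id"]})
print(model.params.filter(like="sunset_hour"))
```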
Evening daylight and overall activity levels

A later hour of sunset (i.e. extended evening daylight) was associated with increased daily activity across the full range of time of sunset, and this association was only partly attenuated after adjusting for the six weather covariates (Figure 1). Here and for all findings reported subsequently, substantive findings were similar in sensitivity analyses which instead used percent time spent in MVPA. The adjusted difference in overall daily activity between days with sunset after 21:00 vs. before 17:00 was 75 cpm (95% CI 67, 84). The equivalent difference for percent daily time in MVPA was 0.72% (95% CI 0.60, 0.84) using the ≥3000 cpm cut-point, which translates into around 6 minutes. To put the values on the y-axis in context, participants had a mean daily activity count of 560 cpm (649 in boys, 503 in girls), and spent an average of 4.0% of their day, or 33 minutes, in MVPA (5.2%/43 minutes in boys, 3.1%/26 minutes in girls). The adjusted differences between the days with more versus less evening daylight are therefore modest but not trivial in relation to children’s overall activity levels.

Figure 1: Association between time of sunset and total daily activity. CI = confidence interval, cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Minimally-adjusted analyses adjust for age, sex and study population; additionally-adjusted analyses also include day of the week, weight status and (most importantly) the six weather covariates. Hour of sunset is rounded down, e.g. ‘18’ covers ‘18:00–18:59’, and the reference group is sunset before 17:00.
Limitations and directions for future research

This study substantially extends previous analyses of some subsets of this data, which have at most only provided a relatively brief examination of physical activity differences by season [29, 30]. It also addresses several recognised limitations of the existing literature [14], including small sample sizes, inconsistent accelerometer protocols and little or no examination of interactions with factors such as age, sex or weight status. In addition, our large sample size allowed us to use the bi-annual changing of the clocks as a natural experiment, and to show significant differences in children’s activity levels either side of the clock change. This observation considerably strengthens the case for a causal interpretation of the association between evening daylight and physical activity, as does the fact that the fastest decrease in children’s evening physical activity coincided with sunset throughout the year.

This study does, however, also have several important limitations. First, our data were largely cross-sectional rather than longitudinal: although we could follow the same child across the week when the clocks changed, we could not follow children across a full year. We have, however, no reason to believe that children sampled at different times of the year differed systematically within or between studies.

A second set of limitations involves data not available to us. For one thing, although we adjusted for observed weather conditions on each day of measurement, the timing of some physically active events may instead reflect expected weather conditions (e.g. some schools may routinely schedule sports days on summer afternoons in the hope that it will be warm and dry).
Failing to adjust for such social expectations may mean that our effect estimates are still subject to some residual confounding by weather, and this may partly account for why small differences in activity levels were seen as early as 14:00 on weekend/holiday days. In addition, we lacked any data on the behavioural mediators of the observed activity differences. As such, we cannot examine how far one can generalise the findings of one previous, small English study which found that associations between day length and activity levels were largely mediated by outdoor play [16]. This is one useful direction for future research, perhaps particularly as it becomes increasingly possible to substitute or complement detailed activity diaries with data from global positioning system (GPS) monitors [31]. We also lacked systematic information on area-level factors, such as neighbourhood safety or the availability of green space, which might plausibly moderate the effect of evening daylight upon physical activity; again, this would be one useful direction for future research. Also of interest would be an examination of how a wider range of behaviours vary with day length; these were largely beyond the scope of what is possible in the ICAD database, although the lack of any association between time of sunset and accelerometer weartime provides some indirect evidence against an effect of evening daylight on children’s duration of sleep.

Finally, most of our study populations came from Europe and almost all came from high-income settings, meaning that more research would be needed to establish how far the observed associations apply across other settings. Our data do, however, give some hints that daylight saving measures might not increase activity in hot settings, perhaps because high temperatures may inhibit summertime activity.

Implications for policy and practice

The British parliament recently debated a Bill proposing new daylight saving measures which would shift the clocks forward by one additional hour year round [10]. If the adjusted, pooled effect size we observed in this study were fully causal, one would expect the proposed daylight saving measures to generate a 0.06 standard deviation increase in the total physical activity of English children, corresponding to an estimated 1.7 extra minutes of MVPA per day. The equivalent standardised effect sizes in mainland Europe and in Australia were 0.07 and 0.03, respectively. As such, introducing additional daylight saving measures in any of these settings would be likely only to have a small-to-very-small average effect upon each child. Such measures would, however, have far greater reach than most other potential policy initiatives, with these small average effects applying every day to each and every child in the country. This is important because even small changes to the population mean can have important public health consequences [8]. Moreover, although these population-level effect sizes are small in absolute terms, the English and mainland European effect estimates actually compare relatively favourably to individual-level approaches, despite the latter generally being much more intensive (and expensive). For example, one recent meta-analysis of 22 randomised controlled comparisons reported a standardised pooled effect size of 0.12 (95% CI 0.04, 0.20) for interventions seeking to promote child or adolescent physical activity [32].
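To see how the standardised effect sizes quoted here arise, divide the cpm estimate by the population standard deviation of daily cpm. The SD in the sketch below is illustrative only, chosen so that the arithmetic reproduces the reported England figure of roughly 0.06; the per-population SDs are not quoted in this passage.

```python
def standardised_effect(cpm_effect: float, sd_cpm: float) -> float:
    """Express a cpm difference as a fraction of the population SD of daily cpm."""
    return cpm_effect / sd_cpm

# England: 15.7 cpm per extra hour of evening daylight. An illustrative SD
# of ~260 cpm (an assumption, not a figure from the paper) reproduces the
# reported standardised effect of ~0.06, i.e. about half the 0.12 pooled
# effect of the individual-level interventions cited in the text.
print(round(standardised_effect(15.7, 260.0), 2))  # ~0.06
```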
Notably, the association between longer evening daylight and higher physical activity was observed irrespective of weight status or maternal education. This contrasts with one previous Australian survey in which daylight saving measures seemed to have the largest effects among normal-weight adults from socio-economically advantaged groups [33]. Further research in adults would be useful to confirm this finding, ideally using objectively-measured activity data. Speculatively, however, a relatively wide range of children may respond to longer evening daylight by playing more outdoors, whereas among adults the effect may primarily be confined to the groups with the highest propensity to exercise.

Electronic supplementary material

Additional file 1: Additional methods and results; fuller details on the studies included in the analyses, and additional results. (PDF 759 KB)
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study design", "Participants", "Measurement of physical activity", "Time of sunset and covariates", "Statistical analyses", "Results", "Evening daylight and overall activity levels", "Evening daylight and the patterning of activity across the day", "Examining differences by place, sex, age, weight and maternal education", "Discussion", "Limitations and directions for future research", "Implications for policy and practice", "Conclusions", "Electronic supplementary material", "" ]
[ "Physical activity confers substantial physical and mental health benefits in children\n[1–5], but most children around the world do not meet current activity guidelines\n[6]. For children as for adults, successfully promoting physical activity is likely to require both individual-level and population-level interventions\n[7]. The latter are important because, following the insights of Geoffrey Rose\n[8], even a small shift in a population mean can yield important public health benefits.\nOne potential population-level measure which has received some policy attention in recent years concerns the introduction of additional daylight saving measures\n[9]. Although the total number of hours of daylight in the day is fixed, many countries modify when those hours fall by ‘changing the clocks’ – for example, putting the clocks forward in the summer to shift daylight hours from the very early morning to the evening. Recent decades have seen recurrent political debates surrounding daylight saving measures in several countries. For example, several Australian states have held repeated referenda on the topic, and the issue even spawned the creation in 2008 of the single-issue political party ‘Daylight Saving for South East Queensland’. Similarly a Bill was debated in the British Parliament between 2010 and 2012 which proposed to shift the clocks forward by an additional hour year round. This change would have given British children an estimated average of 200 extra waking daylight hours per year\n[10], and the logo of the associated civil society campaign depicted children playing outdoors in the evening sunlight. The Bill’s accompanying research paper listed “increased opportunities for outdoor activity” alongside other potential health and environmental benefits, such as reducing road traffic crashes and cutting domestic energy use\n[11]. A similar argument about leisure-time activity has featured in the Australian debate\n[12].\nThe British Bill’s research paper did not, however, cite any evidence to support its claims about physical activity, and nor does much evidence exist regarding likely impacts on children. Many studies have certainly reported that children’s physical activity is generally higher in the summer than in the winter, as reviewed in\n[13–15]. Very few studies, however, examine whether seasonal differences persist after adjustment for weather conditions, or whether the seasonal patterning of physical activity across the day is consistent with a causal effect of evening daylight. One study which did examine these issues in detail found that seasonal differences in physical activity were greatest in the late afternoon and early evening, which is what one would expect if time of sunset did play a causal role\n[16]. This study had some major limitations, however, including its small sample size (N = 325), its restriction to a single setting in south-east England, and its failure to adjust for temperature.\nThis paper therefore revisited this question in a much larger, international sample. Our first broad aim was to test the hypotheses that (i) longer evening daylight is associated with higher total physical activity, even after adjusting for weather conditions; and (ii) these overall differences in physical activity are greatest in the late afternoon and early evening. Given our uniquely large sample size, we were also able to use countries’ bi-annual changing of the clocks as a natural experiment, i.e. 
as an event or intervention not designed for research purposes but which can nevertheless provide valuable research opportunities\n[17]. Specifically, we tested the hypothesis that the same child measured immediately before and immediately after the clocks changed would be more active on the days where sunset had been moved an hour later. Our second broad aim was to examine whether any associations between evening daylight and activity levels differed by study setting, sex, age, weight status or socio-economic position.", " Study design The International Children’s Accelerometry Database (http://www.mrc-epid.cam.ac.uk/research/studies/icad/) was established to pool objectively-measured physical activity data from studies using the Actigraph accelerometer in children worldwide. The aims, design and methods of ICAD have been described in detail elsewhere\n[18]. Formal data sharing agreements were established and all partners consulted their individual research board to confirm that sufficient ethical approval had been attained for contributing data.\nThe International Children’s Accelerometry Database (http://www.mrc-epid.cam.ac.uk/research/studies/icad/) was established to pool objectively-measured physical activity data from studies using the Actigraph accelerometer in children worldwide. The aims, design and methods of ICAD have been described in detail elsewhere\n[18]. Formal data sharing agreements were established and all partners consulted their individual research board to confirm that sufficient ethical approval had been attained for contributing data.\n Participants The full ICAD database pools accelerometer data from 20 studies conducted in ten countries between 1997 and 2009\n[18]. In this paper, we excluded four studies which focussed on pre-school children and one study for which date of measurement was not available. We used baseline data from all of the 15 remaining studies, plus follow-up measurements in the seven longitudinal studies and one natural experimental study (Additional file\n1: Table A1). We also used follow-up measurements from the control group of one of the two randomised controlled trials, as for this study it was possible to distinguish intervention and control groups.\nAmong 23,354 individuals aged 5–16 years old in the 15 eligible studies, we excluded 1.7% of measurement days (0.7% of individuals) because of missing data on age, sex, weight status or weather conditions. Our resulting study population consisted of 23,188 participants who between them provided 158,784 days of valid data across 31,378 time points (Table \n1). Although our full study population included children providing data from any part of the year, one of our analyses was limited to 439 children who were sampled during a week which spanned the clock change (51% female, age range 5–16, 1830 measurement days).Table 1\nDescriptive characteristics of study participants\nN (%) participantsN (%) valid daysFull sample23,188 (100%)158,784 (100%)SexMale8819 (38%)62,745 (40%)Female14,369 (62%)†\n96,039 (60%)Age5-6 years1800 (8%)7855 (5%)7-8 years711 (3%)4963 (3%)9-10 years5769 (25%)30,702 (19%)11-12 years9616 (41%)61,352 (39%)13-14 years4206 (18%)46,530 (29%)15-16 years1086 (5%)7382 (5%)Country [No. 
studies]Australia [N = 2]2459 (11%)18,679 (12%)Brazil [N = 1]453 (2%)1577 (1%)Denmark [N = 2]2031 (9%)11,030 (7%)England [N = 4]10,284 (44%)83,420 (53%)Estonia [N = 1]656 (3%)2537 (2%)Madeira [N = 1]1214 (5%)4899 (3%)Norway [N = 1]384 (2%)1459 (1%)Switzerland [N = 1]404 (2%)2569 (2%)United States [N = 2]5303 (23%)32,614 (21%)Weight statusNormal/underweight17,573 (76%)121,350 (76%)Overweight4116 (18%)27,967 (18%)Obese1499 (6%)9467 (6%)Mother’s educationUp to high school7422 (48%)54,547 (48%)College/vocational2656 (17%)19,352 (17%)University level5251 (34%)38,723 (34%)For individuals measured more than once, the first column gives age and weight status at baseline while the second column gives age and weight status during the measurement period in question. Numbers add up to less than the total for mother’s education because this variable was only collected in 11 of the 15 studies, and was also subject to some missing data within those 11 studies (see Additional file\n1: Tables A1 and A2). †Proportion of girls 52% after excluding one large American study that measured girls only.\n\nDescriptive characteristics of study participants\n\nFor individuals measured more than once, the first column gives age and weight status at baseline while the second column gives age and weight status during the measurement period in question. Numbers add up to less than the total for mother’s education because this variable was only collected in 11 of the 15 studies, and was also subject to some missing data within those 11 studies (see Additional file\n1: Tables A1 and A2). †Proportion of girls 52% after excluding one large American study that measured girls only.\nThe full ICAD database pools accelerometer data from 20 studies conducted in ten countries between 1997 and 2009\n[18]. In this paper, we excluded four studies which focussed on pre-school children and one study for which date of measurement was not available. We used baseline data from all of the 15 remaining studies, plus follow-up measurements in the seven longitudinal studies and one natural experimental study (Additional file\n1: Table A1). We also used follow-up measurements from the control group of one of the two randomised controlled trials, as for this study it was possible to distinguish intervention and control groups.\nAmong 23,354 individuals aged 5–16 years old in the 15 eligible studies, we excluded 1.7% of measurement days (0.7% of individuals) because of missing data on age, sex, weight status or weather conditions. Our resulting study population consisted of 23,188 participants who between them provided 158,784 days of valid data across 31,378 time points (Table \n1). Although our full study population included children providing data from any part of the year, one of our analyses was limited to 439 children who were sampled during a week which spanned the clock change (51% female, age range 5–16, 1830 measurement days).Table 1\nDescriptive characteristics of study participants\nN (%) participantsN (%) valid daysFull sample23,188 (100%)158,784 (100%)SexMale8819 (38%)62,745 (40%)Female14,369 (62%)†\n96,039 (60%)Age5-6 years1800 (8%)7855 (5%)7-8 years711 (3%)4963 (3%)9-10 years5769 (25%)30,702 (19%)11-12 years9616 (41%)61,352 (39%)13-14 years4206 (18%)46,530 (29%)15-16 years1086 (5%)7382 (5%)Country [No. 
studies]Australia [N = 2]2459 (11%)18,679 (12%)Brazil [N = 1]453 (2%)1577 (1%)Denmark [N = 2]2031 (9%)11,030 (7%)England [N = 4]10,284 (44%)83,420 (53%)Estonia [N = 1]656 (3%)2537 (2%)Madeira [N = 1]1214 (5%)4899 (3%)Norway [N = 1]384 (2%)1459 (1%)Switzerland [N = 1]404 (2%)2569 (2%)United States [N = 2]5303 (23%)32,614 (21%)Weight statusNormal/underweight17,573 (76%)121,350 (76%)Overweight4116 (18%)27,967 (18%)Obese1499 (6%)9467 (6%)Mother’s educationUp to high school7422 (48%)54,547 (48%)College/vocational2656 (17%)19,352 (17%)University level5251 (34%)38,723 (34%)For individuals measured more than once, the first column gives age and weight status at baseline while the second column gives age and weight status during the measurement period in question. Numbers add up to less than the total for mother’s education because this variable was only collected in 11 of the 15 studies, and was also subject to some missing data within those 11 studies (see Additional file\n1: Tables A1 and A2). †Proportion of girls 52% after excluding one large American study that measured girls only.\n\nDescriptive characteristics of study participants\n\nFor individuals measured more than once, the first column gives age and weight status at baseline while the second column gives age and weight status during the measurement period in question. Numbers add up to less than the total for mother’s education because this variable was only collected in 11 of the 15 studies, and was also subject to some missing data within those 11 studies (see Additional file\n1: Tables A1 and A2). †Proportion of girls 52% after excluding one large American study that measured girls only.\n Measurement of physical activity All physical activity measurements were made with uniaxial, waist-worn Actigraph accelerometers (models 7164, 71256 and GT1M); these are a family of accelerometers that have been shown to provide reliable and valid measurement of physical activity in children and adolescents\n[19–21]. All raw accelerometer data files were re-analysed to provide physical activity outcome variables that could be directly compared across studies (see\n[18] for details). Data files were reintegrated to a 60 second epoch where necessary and processed using commercially available software (KineSoft v3.3.20, Saskatchewan, Canada). Non-wear time was defined as 60 minutes of consecutive zeros allowing for 2 minutes of non-zero interruptions\n[22].\nWe restricted our analysis of activity data to the time period 07:00 and 22:59, and defined a valid measurement day as one recording at least 500 minutes of wear time during this time period (18% days excluded as invalid). When examining the pattern of physical activity across the day, we only included hours with at least 30 minutes of measured wear time. Each participating child provided an average of 5.1 days across the week in which they were measured (range 1–7); we did not require a minimum number of valid days of accelerometer data per child because days, not children, were our primary units of analysis.\nAlthough we sought to limit our analyses to activity during waking hours, we unfortunately lacked reliable data on the time children went to sleep or woke up. While most children took their accelerometers off to sleep, on 6% of days there was evidence of overnight wear, defined as ≥5 minutes of weartime between 1:00 and 04:59. 
Although we sought to limit our analyses to activity during waking hours, we unfortunately lacked reliable data on the times children went to sleep or woke up. While most children took their accelerometers off to sleep, on 6% of days there was evidence of overnight wear, defined as ≥5 minutes of wear time between 01:00 and 04:59. On these days, we assumed the child was in fact sleeping during any hour between 21:00 and 07:59 for which the mean accelerometer counts per minute (cpm) was below 50. Mean cpm values of under 50 were observed for 90% of hours recorded between 03:00 and 03:59 but only 3% of hours recorded between 19:00 and 19:59, suggesting this cut-point provided a reasonable proxy for sleeping time among children for whom we had reason to suspect overnight wear. Our findings were unchanged in sensitivity analyses which instead used thresholds of 30 cpm or 100 cpm to exclude suspected sleeping time, or which excluded altogether the 6% of days with suspected overnight wear.
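Concretely, the overnight-wear screen described above could be implemented along the following lines for one child-day of minute-level data. This is a sketch under stated assumptions (a DatetimeIndex plus `counts` and `non_wear` columns), not the authors’ actual processing code.

```python
import pandas as pd

def mask_suspected_sleep(day, cpm_threshold=50):
    """`day`: minute-level DataFrame for one child-day with a DatetimeIndex,
    a `counts` column and a boolean `non_wear` column. Drops probable sleep
    hours on days showing evidence of overnight wear."""
    overnight_wear = (~day.between_time("01:00", "04:59")["non_wear"]).sum()
    if overnight_wear < 5:
        return day                     # no evidence of overnight wear
    hourly_cpm = day["counts"].resample("h").mean()
    quiet = set(hourly_cpm[hourly_cpm < cpm_threshold].index.hour)
    window = set(range(21, 24)) | set(range(0, 8))    # 21:00 to 07:59
    drop = day.index.hour.isin(sorted(quiet & window))
    return day[~drop]
```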
Our pre-specified primary outcome measure was the child’s average counts per minute. Substantive findings were similar in sensitivity analyses which instead used percent time spent in moderate-to-vigorous physical activity (MVPA), defined either as ≥3000 cpm [23] or ≥2296 cpm [24]. For our key findings, we present these MVPA results (using the ≥3000 cpm cut-off) alongside the results for mean cpm. To facilitate interpretation of the MVPA results, we additionally convert the observed percentage times into approximate absolute minutes on the assumption of a 14-hour average waking day [25].
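The percent-to-minutes conversion is plain arithmetic: a 14-hour waking day is 840 minutes, so, for example, the 0.72% difference reported later in the Results corresponds to roughly 6 minutes. A one-line helper makes the assumption explicit:

```python
def mvpa_minutes(percent_of_waking_time, waking_minutes=14 * 60):
    """Convert percent of waking time in MVPA to approximate minutes per
    day, assuming a 14-hour (840-minute) average waking day [25]."""
    return percent_of_waking_time / 100 * waking_minutes

print(round(mvpa_minutes(0.72), 1))  # 6.0
```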
Time of sunset and covariates

For each day of accelerometer wear, we used http://www.timeanddate.com to assign the time of sunset on that specific date in the city in which, or nearest which, data collection took place. We also used the date and city of data collection to assign six weather variables to each day: total precipitation across the day, mean humidity across the day, maximum daily wind speed, mean daily temperature, maximum departure of temperature above the daily mean, and maximum departure of temperature below the daily mean. We accessed these data using Mathematica 9 (Wolfram Research), which compiles daily information from a wide range of weather stations run by states, international bodies or public-private partnerships [26]. The correlation between hour of sunset and mean temperature was moderately but not prohibitively high (r = 0.59), while correlations with the other weather covariates were modest (r < 0.30).

The child’s height and weight were measured in the original studies using standardized clinical procedures, and we used these to calculate body mass index (kg/m²). Participants were categorized as underweight/normal weight, overweight or obese according to age- and sex-specific cut-points [27]. Maternal education was assessed in 11 of the 15 studies, and was re-coded to distinguish ‘high school or lower’ education from ‘college or university’ education (Additional file 1: Table A2).
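In data-management terms this step amounts to a left join of day-level sunset and weather data onto the measurement days, keyed on city and date. The sketch below is purely illustrative: the file and column names are assumptions, and the original linkage drew on timeanddate.com and Mathematica 9 rather than local files.

```python
import pandas as pd

days = pd.read_csv("measurement_days.csv", parse_dates=["date"])    # one row per child-day
sunset = pd.read_csv("sunset_times.csv", parse_dates=["date"])      # city, date, sunset_hour
weather = pd.read_csv("daily_weather.csv", parse_dates=["date"])    # city, date, six covariates

days = (days
        .merge(sunset, on=["city", "date"], how="left", validate="many_to_one")
        .merge(weather, on=["city", "date"], how="left", validate="many_to_one"))

# Body mass index; the age- and sex-specific weight-status cut-points [27]
# would be applied via a lookup table, omitted here.
days["bmi"] = days["weight_kg"] / days["height_m"] ** 2
```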
Statistical analyses

Both time of sunset and weather vary between individual days, and we therefore used days, not children, as our units of analysis. We adjusted for the clustering of days within children using robust standard errors. All analyses used Stata 13.1.

To address our first aim, we fit linear regression models with the outcome being daily or hourly activity cpm. Time of sunset was the primary explanatory variable of interest, with adjustment for study population, age, sex, weight status, day of the week and the six weather covariates. When using the changing of the clocks as a natural experiment, we restricted our analyses to the 439 children with at least one valid school day measurement both in the week before and in the week after the clocks changed (e.g. Wednesday, Thursday and Friday before the clocks changed and Monday and Tuesday afterwards).
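The models were fitted in Stata, but a broadly equivalent day-level specification with child-clustered robust standard errors can be sketched in Python with statsmodels (variable names are assumptions; `days` is the day-level dataset assembled above):

```python
import statsmodels.formula.api as smf

model = smf.ols(
    "daily_cpm ~ sunset_hour + C(study) + age + C(sex) + C(weight_status)"
    " + C(day_of_week) + precipitation + humidity + max_wind"
    " + mean_temp + temp_above_mean + temp_below_mean",
    data=days,
)
# Cluster-robust standard errors, clustering days within children.
fit = model.fit(cov_type="cluster", cov_kwds={"groups": days["child_id"]})
print(fit.params["sunset_hour"], fit.bse["sunset_hour"])
```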
To address our second aim, we calculated the adjusted effect size of evening daylight separately for each study population. We used forest plots to present the fifteen resulting effect sizes, together with an I² value representing between-study heterogeneity and an overall pooled effect size estimated using random-effects meta-analysis [28]. We sometimes converted pooled estimates into standardised effect sizes by dividing by the standard deviation of activity cpm for the population in question.
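The paper cites [28] for its random-effects methods. The DerSimonian-Laird estimator below is one standard implementation of this kind of pooling, shown together with the I² heterogeneity statistic; it is a sketch of the general technique, not necessarily the exact routine used:

```python
import numpy as np

def dersimonian_laird(estimates, std_errors):
    """Random-effects pooling of per-study effect sizes (DerSimonian-Laird).
    Returns the pooled estimate, its standard error, and I² (percent)."""
    y, se = np.asarray(estimates, float), np.asarray(std_errors, float)
    w = 1 / se**2                                  # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
    df = len(y) - 1
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = 1 / (se**2 + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / w_star.sum()
    pooled_se = np.sqrt(1 / w_star.sum())
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled_se, i2
```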
We then fit interaction terms between evening daylight and four pre-specified characteristics: sex, age, weight status and maternal education. These characteristics were selected a priori as being of interest and relatively consistently measured across the ICAD studies. We fit these interaction terms after stratifying by study population, and again calculated I² values and pooled effect sizes. When examining interactions with age, we restricted our analyses to children aged 9–15, as most measurement days (91%) were of children between these ages.
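Within one study population, such an interaction is a product term in the day-level regression; pooling the per-study interaction coefficients then reuses the random-effects helper above. A minimal sketch with an abbreviated covariate list (variable names again assumed):

```python
est, se = [], []
for study, sub in days.groupby("study"):
    m = smf.ols(
        "daily_cpm ~ sunset_hour * C(sex) + age + C(weight_status)"
        " + C(day_of_week) + mean_temp",
        data=sub,
    ).fit(cov_type="cluster", cov_kwds={"groups": sub["child_id"]})
    term = "sunset_hour:C(sex)[T.male]"   # exact label depends on category coding
    est.append(m.params[term])
    se.append(m.bse[term])

pooled, pooled_se, i2 = dersimonian_laird(est, se)
```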
The characteristics of the participants are summarised in Table 1. Of the measurement days, 66% were schooldays and 38% had no precipitation. The average daily temperature was 12°C (range -21 to 33°C, inter-quartile range 7 to 16°C). Mean daily wear time was 773 minutes (12.9 hours), and this was similar regardless of time of sunset (e.g. after adjusting for study population, age and sex, the regression coefficient was +1.4 minutes for days with sunset 18:00–19:59 versus before 18:00, and -2.5 minutes for days with sunset after 20:00 versus before 18:00).

Evening daylight and overall activity levels

A later hour of sunset (i.e. extended evening daylight) was associated with increased daily activity across the full range of times of sunset, and this association was only partly attenuated after adjusting for the six weather covariates (Figure 1). Here and for all findings reported subsequently, substantive findings were similar in sensitivity analyses which instead used percent time spent in MVPA. The adjusted difference in overall daily activity between days with sunset after 21:00 versus before 17:00 was 75 cpm (95% CI 67, 84). The equivalent difference for percent daily time in MVPA was 0.72% (95% CI 0.60, 0.84) using the ≥3000 cpm cut-point, which translates into around 6 minutes. To put the values on the y-axis in context, participants had a mean daily activity count of 560 cpm (649 in boys, 503 in girls), and spent an average of 4.0% of their day, or 33 minutes, in MVPA (5.2%/43 minutes in boys, 3.1%/26 minutes in girls). The adjusted differences between days with more versus less evening daylight are therefore modest but not trivial in relation to children’s overall activity levels.

Figure 1: Association between time of sunset and total daily activity. CI = confidence interval, cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Minimally-adjusted analyses adjust for age, sex and study population; additionally-adjusted analyses also include day of the week, weight status and (most importantly) the six weather covariates. Hour of sunset is rounded down, e.g. ‘18’ covers ‘18:00–18:59’, and the reference group is sunset before 17:00.
Evening daylight and the patterning of activity across the day

Consistent with a causal interpretation, hour-by-hour analyses indicated that it was in the late afternoon and evening that the duration of evening daylight was most strongly associated with hourly physical activity levels (Figure 2). This was true on both schooldays and weekend/holiday days, with the period of the day when physical activity fell fastest corresponding to the timing of sunset (e.g. falling fastest between 18:00 and 19:00 on days when the sun also set between those hours). Similarly, when comparing the subsample of 439 children who were measured on schooldays immediately before and immediately after the changing of the clocks, there was strong evidence that children were more active during the evenings of days with later sunset (Figure 3). Between 17:00 and 20:59, the mean increase in physical activity on the days with later sunset was 94 cpm per hour (95% CI 62, 125); the equivalent increase in percent of time spent in MVPA was 0.84% (95% CI 0.40%, 1.28%), or 2.0 minutes.

Figure 2: Adjusted mean activity counts per minute across the hours of the day, according to the time of sunset. cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Analyses adjust for study population, age, sex, day of the week, weight status and the six weather variables, with a reference group of 09:00 on days with sunset before 18:00. Hours are rounded down, e.g. ‘18’ covers ‘18:00–18:59’. Confidence intervals are not presented as they are generally too narrow to be clearly visible; Additional file 1: Figure A1 includes a version of this graph with confidence intervals.

Figure 3: Mean physical activity across the hours of the day, comparing children either side of the changing of the clocks. CI = confidence interval, cpm = counts per minute. Analysis based on 1830 schooldays from 439 children from 11 studies in 9 countries. Analyses are restricted to children with at least one valid schoolday measurement both before and after the clocks changed; to increase power, data from the spring and autumn clock changes are pooled. Hours are grouped into two-hour periods to increase power and are rounded down, e.g. ‘7-8’ covers ‘07:00–08:59’. Adjustment was not essential as each child serves as his or her own control, but the results were similar in adjusted analyses.
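The descriptive skeleton of Figure 2 is a two-way summary: mean counts per minute for each hour of the day, stratified by hour of sunset, keeping only child-day-hours with at least 30 minutes of wear. The published figure shows regression-adjusted values, but the underlying tabulation might look like this (a sketch; `minutes` is an assumed minute-level table with a per-day `sunset_band` label and a `child_day` identifier):

```python
wear = minutes[~minutes["non_wear"]].copy()
wear["hour"] = wear.index.hour

# Keep only child-day-hours with at least 30 minutes of measured wear time.
enough_wear = wear.groupby(["child_day", "hour"])["counts"].transform("size") >= 30

hourly_pattern = (wear[enough_wear]
                  .groupby(["sunset_band", "hour"])["counts"]
                  .mean()
                  .unstack("sunset_band"))  # rows: hour of day; columns: sunset band
```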
Importantly, Figures 2 and 3 show no association between hour of sunset and activity levels in the morning, and generally no association in the early afternoon (with the exception of a modest effect on weekend/holiday days from as early as 14:00 in Figure 2). This suggests that the association between evening daylight and physical activity cannot readily be explained by residual confounding by weather conditions, since any effects of weather would generally be expected to operate more evenly across the day [16]. These findings also provide no suggestion that later sunrise is associated with reduced activity in the morning, including on days when the sun set before 18:00 and on which the average time of sunrise was not until 07:27 (inter-quartile range 07:05 to 07:54).
Examining differences by place, sex, age, weight and maternal education

As shown in Figure 4, there was strong evidence that the association between evening daylight and physical activity varied systematically between settings (I² = 75%, p < 0.001, for overall heterogeneity between the 15 studies). Specifically, there was relatively consistent evidence that evening daylight was associated with higher average physical activity in mainland Europe, England and (to a lesser extent) Australia. The pooled point estimates of the increase in daily mean activity in these three settings were 20.2 cpm, 15.7 cpm and 8.2 cpm per additional hour of evening daylight; these translate into standardised effect sizes of 0.07, 0.06 and 0.03, respectively. The equivalent effect sizes in terms of percent of daily time spent in MVPA were 0.19%, 0.20% and 0.05% per additional hour of evening daylight, corresponding to around 1.6, 1.7 and 0.4 minutes respectively. By contrast, there was little or no consistent evidence of associations with evening daylight in the American, Madeiran and Brazilian samples, with standardised effect sizes ranging from -0.02 to +0.01 and in all cases non-significant. A post-hoc univariable meta-regression analysis provided some evidence that the smaller magnitude of the associations in these latter settings might reflect their higher maximum temperatures (adjusted R² = 51%, p = 0.01; see Additional file 1: Figure A2 and accompanying text).

Figure 4: Association between evening daylight and physical activity across study populations, and pooled effect sizes for interactions by sex, age, weight status and maternal education. cpm = counts per minute. Analysis based on 23,188 children from 15 studies in 9 countries, except for the comparison of maternal education, which is based on 15,563 children in 11 studies in 8 countries (see Additional file 1: Tables A1 and A2 for details of studies providing maternal education data). On the left, random-effects pooled estimates are presented by country/region, together with 95% confidence intervals. Points to the right of the line indicate that longer evening daylight is associated with increased mean daily cpm; points to the left indicate the reverse. On the right, pooled effect sizes and 95% confidence intervals are shown following tests for interaction, with the adjusted interaction term representing the difference that the interaction variable (e.g. sex) makes to the effect size of evening daylight upon total daily activity measured in cpm. For interaction terms stratified by study population see Additional file 1: Figures A3 and A4.
Although associations with evening daylight varied markedly between settings, there was less convincing evidence for interactions with the child’s characteristics (Figure 4, plus Additional file 1: Figures A3-A4). This lack of interaction was clearest for weight status and maternal education: neither variable showed a significant interaction in any of the five study settings, and the overall pooled effect sizes were also non-significant. By contrast, the non-significant pooled interaction terms for sex and age were harder to interpret, as in both cases there was some evidence of between-study heterogeneity (0.01 ≤ p ≤ 0.02). With respect to sex, this heterogeneity reflected the fact that the association between evening daylight and physical activity tended to be stronger in boys than in girls in some European and English samples, but this was not the case in the other settings (Additional file 1: Figure A3). With respect to age, there was no obvious pattern: the magnitude of the association with evening daylight was greater among younger children in Denmark, greater among older children in Australia, and did not differ according to age in the remaining three settings for which sufficient data were available.
Among over 23,000 school-age children from 9 countries, we found strong evidence that longer evening daylight was associated with a small increase in daily physical activity, even after adjusting for weather conditions. Consistent with a causal interpretation, the magnitude of this association was largest in the late afternoon and early evening, including when the same child was measured immediately before and after the clocks went forward or back. These associations were, however, only consistently observed in the European and Australian samples. There was inconsistent evidence that the magnitude of the association with evening daylight was greater in boys; no evidence of any differences in the magnitude of the association according to weight status or maternal education; and inconsistent findings for interactions with age.

Limitations and directions for future research

This study substantially extends previous analyses of subsets of these data, which at most provided a relatively brief examination of physical activity differences by season [29, 30]. It also addresses several recognised limitations of the existing literature [14], including small sample sizes, inconsistent accelerometer protocols and little or no examination of interactions with factors such as age, sex or weight status. In addition, our large sample size allowed us to use the bi-annual changing of the clocks as a natural experiment, and to show significant differences in children’s activity levels either side of the clock change. This observation considerably strengthens the case for a causal interpretation of the association between evening daylight and physical activity, as does the fact that the fastest decrease in children’s evening physical activity coincided with sunset throughout the year.

This study does, however, also have several important limitations. First, our data were largely cross-sectional rather than longitudinal: although we could follow the same child across the week when the clocks changed, we could not follow children across a full year. We have, however, no reason to believe that children sampled at different times of the year differed systematically within or between studies.

A second set of limitations involves data that were not available to us.
some schools may routinely schedule sports days on summer afternoons in the hope that it will be warm and dry). Failing to adjust for such social expectations may mean that our effect estimates are still subject to some residual confounding by weather, and this may partly account for why small differences in activity levels were seen as early as 14:00 on weekend/holidays. In addition, we lacked any data on the behavioural mediators of the observed activity differences. As such, we cannot examine how far one can generalise the findings of one previous, small English study which found that associations between day length and activity levels were largely mediated by outdoor play\n[16]. This is one useful direction for future research, perhaps particularly as it becomes increasingly possible to substitute or complement detailed activity diaries with data from global positioning systems (GPS) monitors\n[31]. We also lacked systematic information on area-level factors such as neighbourhood safety or the availability of green space which might plausibly moderate the effect of evening daylight upon physical activity; again, this would be one useful direction for future research. Also of interest would be an examination of how a wider range of behaviours vary with daylength; these were largely beyond the scope of what is possible in the ICAD database, although the lack of any association between time of sunset and accelerometer weartime provides some indirect evidence against an effect of evening daylight on children’s duration of sleep.\nFinally, most of our study populations came from Europe and almost all came from high-income settings, meaning that more research would be needed to establish how far the observed associations apply across other settings. Our data do, however, give some hints that daylight saving measures might not increase activity in hot settings, perhaps because high temperatures may inhibit summertime activity.\nThis study substantially extends previous analyses of some subsets of this data, which have at most only provided a relatively brief examination of physical activity differences by season\n[29, 30]. It also addresses several recognised limitations of the existing literature\n[14], including small sample sizes, inconsistent accelerometer protocols and little or no examination of interactions with factors such as age, sex or weight status. In addition, our large sample size allowed us to use the bi-annual changing of the clocks as a natural experiment, and to show significant differences in children’s activity levels either side of the clock change. This observation considerably strengthens the case for a causal interpretation of the association between evening daylight and physical activity, as does the fact that the fastest decrease in children’s evening physical activity coincided with sunset throughout the year.\nThis study does, however, also have several important limitations. First, our data were largely cross-sectional rather than longitudinal: although we could follow the same child across the week when the clocks changed, we could not follow children across a full year. We have, however, no reason to believe that children sampled at different times of the year differed systematically within or between studies.\nA second set of limitations involves data not available to us. 
Implications for policy and practice

The British parliament recently debated a Bill proposing new daylight saving measures which would shift the clocks forward by one additional hour year round [10]. If the adjusted, pooled effect size we observed in this study were fully causal, one would expect the proposed daylight saving measures to generate a 0.06 standard deviation increase in the total physical activity of English children, corresponding to an estimated 1.7 extra minutes of MVPA per day. The equivalent standardised effect sizes in mainland Europe and in Australia were 0.07 and 0.03, respectively. As such, introducing additional daylight saving measures in any of these settings would be likely to have only a small-to-very-small average effect upon each child. Such measures would, however, have far greater reach than most other potential policy initiatives, with these small average effects applying every day to each and every child in the country. This is important because even small changes to the population mean can have important public health consequences [8]. Moreover, although these population-level effect sizes are small in absolute terms, the English and mainland European effect estimates actually compare relatively favourably to individual-level approaches, despite the latter generally being much more intensive (and expensive). For example, one recent meta-analysis of 22 randomised controlled comparisons reported a standardised pooled effect size of 0.12 (95% CI 0.04, 0.20) for interventions seeking to promote child or adolescent physical activity [32].

Notably, the association between longer evening daylight and higher physical activity was observed irrespective of weight status or maternal education. This contrasts with one previous Australian survey in which daylight saving measures seemed to have the largest effects among normal-weight adults from socio-economically advantaged groups [33]. Further research in adults would be useful to confirm this finding, ideally using objectively-measured activity data. Speculatively, however, a relatively wide range of children may respond to longer evening daylight by playing more outdoors, whereas among adults the effect may primarily be confined to the groups with the highest propensity to exercise.
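The minutes figures above follow from simple arithmetic on the percent-of-day estimates, using the 14-hour average waking day assumed in the Methods. A minimal sketch of the conversion, using the English estimate of 0.20% of daily time in MVPA per additional hour of evening daylight:

    # Convert a percent-of-waking-day difference in MVPA into minutes/day,
    # assuming the paper's 14-hour average waking day.
    WAKING_MINUTES = 14 * 60                       # 840 minutes

    mvpa_percent_per_hour_daylight = 0.20          # English estimate: % of day per extra hour
    extra_minutes = mvpa_percent_per_hour_daylight / 100 * WAKING_MINUTES
    print(round(extra_minutes, 1))                 # ~1.7 minutes of MVPA per day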
Conclusions

This study provides the strongest evidence to date that, in European and Australian settings, evening daylight plays a causal role in increasing physical activity in the late afternoon and early evening – a period which has been described as the 'critical hours' for children's physical activity [34]. In these settings, it seems possible that additional daylight saving measures could shift mean population child physical activity levels by an amount which, although small in absolute terms, would not be trivial relative to what can feasibly be achieved through other approaches. Moreover, our findings also suggest that this effect might operate in a relatively equitable way. As such, while daylight saving proposals such as those recently considered in Britain would not solve the problem of inadequate levels of child physical activity, this paper indicates that they could represent a small step in the right direction.

Additional file 1: Additional methods and results; presentation of fuller details on the studies included in the analyses, and additional results. (PDF 759 KB)
[ null, "methods", null, null, null, null, null, "results", null, null, null, "discussion", null, null, "conclusions", "supplementary-material", null ]
[ "Child", "Adolescent", "Physical activity", "Day length", "Seasons" ]
Background:

Physical activity confers substantial physical and mental health benefits in children [1–5], but most children around the world do not meet current activity guidelines [6]. For children as for adults, successfully promoting physical activity is likely to require both individual-level and population-level interventions [7]. The latter are important because, following the insights of Geoffrey Rose [8], even a small shift in a population mean can yield important public health benefits. One potential population-level measure which has received some policy attention in recent years concerns the introduction of additional daylight saving measures [9]. Although the total number of hours of daylight in the day is fixed, many countries modify when those hours fall by 'changing the clocks' – for example, putting the clocks forward in the summer to shift daylight hours from the very early morning to the evening.

Recent decades have seen recurrent political debates surrounding daylight saving measures in several countries. For example, several Australian states have held repeated referenda on the topic, and the issue even spawned the creation in 2008 of the single-issue political party 'Daylight Saving for South East Queensland'. Similarly, a Bill was debated in the British Parliament between 2010 and 2012 which proposed to shift the clocks forward by an additional hour year round. This change would have given British children an estimated average of 200 extra waking daylight hours per year [10], and the logo of the associated civil society campaign depicted children playing outdoors in the evening sunlight. The Bill's accompanying research paper listed "increased opportunities for outdoor activity" alongside other potential health and environmental benefits, such as reducing road traffic crashes and cutting domestic energy use [11]. A similar argument about leisure-time activity has featured in the Australian debate [12].

The British Bill's research paper did not, however, cite any evidence to support its claims about physical activity, and nor does much evidence exist regarding likely impacts on children. Many studies have certainly reported that children's physical activity is generally higher in the summer than in the winter, as reviewed in [13–15]. Very few studies, however, examine whether seasonal differences persist after adjustment for weather conditions, or whether the seasonal patterning of physical activity across the day is consistent with a causal effect of evening daylight. One study which did examine these issues in detail found that seasonal differences in physical activity were greatest in the late afternoon and early evening, which is what one would expect if time of sunset did play a causal role [16]. This study had some major limitations, however, including its small sample size (N = 325), its restriction to a single setting in south-east England, and its failure to adjust for temperature.

This paper therefore revisited this question in a much larger, international sample. Our first broad aim was to test the hypotheses that (i) longer evening daylight is associated with higher total physical activity, even after adjusting for weather conditions; and (ii) these overall differences in physical activity are greatest in the late afternoon and early evening. Given our uniquely large sample size, we were also able to use countries' bi-annual changing of the clocks as a natural experiment, i.e.
as an event or intervention not designed for research purposes but which can nevertheless provide valuable research opportunities [17]. Specifically, we tested the hypothesis that the same child measured immediately before and immediately after the clocks changed would be more active on the days where sunset had been moved an hour later. Our second broad aim was to examine whether any associations between evening daylight and activity levels differed by study setting, sex, age, weight status or socio-economic position.

Methods:

Study design: The International Children's Accelerometry Database (http://www.mrc-epid.cam.ac.uk/research/studies/icad/) was established to pool objectively-measured physical activity data from studies using the Actigraph accelerometer in children worldwide. The aims, design and methods of ICAD have been described in detail elsewhere [18]. Formal data sharing agreements were established and all partners consulted their individual research board to confirm that sufficient ethical approval had been attained for contributing data.

Participants: The full ICAD database pools accelerometer data from 20 studies conducted in ten countries between 1997 and 2009 [18]. In this paper, we excluded four studies which focussed on pre-school children and one study for which date of measurement was not available. We used baseline data from all of the 15 remaining studies, plus follow-up measurements in the seven longitudinal studies and one natural experimental study (Additional file 1: Table A1). We also used follow-up measurements from the control group of one of the two randomised controlled trials, as for this study it was possible to distinguish intervention and control groups. Among 23,354 individuals aged 5–16 years old in the 15 eligible studies, we excluded 1.7% of measurement days (0.7% of individuals) because of missing data on age, sex, weight status or weather conditions. Our resulting study population consisted of 23,188 participants who between them provided 158,784 days of valid data across 31,378 time points (Table 1). Although our full study population included children providing data from any part of the year, one of our analyses was limited to 439 children who were sampled during a week which spanned the clock change (51% female, age range 5–16, 1830 measurement days).

Table 1. Descriptive characteristics of study participants

                                 N (%) participants    N (%) valid days
Full sample                      23,188 (100%)         158,784 (100%)
Sex
  Male                           8819 (38%)            62,745 (40%)
  Female                         14,369 (62%)†         96,039 (60%)
Age
  5-6 years                      1800 (8%)             7855 (5%)
  7-8 years                      711 (3%)              4963 (3%)
  9-10 years                     5769 (25%)            30,702 (19%)
  11-12 years                    9616 (41%)            61,352 (39%)
  13-14 years                    4206 (18%)            46,530 (29%)
  15-16 years                    1086 (5%)             7382 (5%)
Country [No. studies]
  Australia [N = 2]              2459 (11%)            18,679 (12%)
  Brazil [N = 1]                 453 (2%)              1577 (1%)
  Denmark [N = 2]                2031 (9%)             11,030 (7%)
  England [N = 4]                10,284 (44%)          83,420 (53%)
  Estonia [N = 1]                656 (3%)              2537 (2%)
  Madeira [N = 1]                1214 (5%)             4899 (3%)
  Norway [N = 1]                 384 (2%)              1459 (1%)
  Switzerland [N = 1]            404 (2%)              2569 (2%)
  United States [N = 2]          5303 (23%)            32,614 (21%)
Weight status
  Normal/underweight             17,573 (76%)          121,350 (76%)
  Overweight                     4116 (18%)            27,967 (18%)
  Obese                          1499 (6%)             9467 (6%)
Mother's education
  Up to high school              7422 (48%)            54,547 (48%)
  College/vocational             2656 (17%)            19,352 (17%)
  University level               5251 (34%)            38,723 (34%)

For individuals measured more than once, the first column gives age and weight status at baseline while the second column gives age and weight status during the measurement period in question. Numbers add up to less than the total for mother's education because this variable was only collected in 11 of the 15 studies, and was also subject to some missing data within those 11 studies (see Additional file 1: Tables A1 and A2). †Proportion of girls 52% after excluding one large American study that measured girls only.

Measurement of physical activity: All physical activity measurements were made with uniaxial, waist-worn Actigraph accelerometers (models 7164, 71256 and GT1M); these are a family of accelerometers that have been shown to provide reliable and valid measurement of physical activity in children and adolescents [19–21]. All raw accelerometer data files were re-analysed to provide physical activity outcome variables that could be directly compared across studies (see [18] for details). Data files were reintegrated to a 60-second epoch where necessary and processed using commercially available software (KineSoft v3.3.20, Saskatchewan, Canada). Non-wear time was defined as 60 minutes of consecutive zeros, allowing for 2 minutes of non-zero interruptions [22]. We restricted our analysis of activity data to the time period 07:00 to 22:59, and defined a valid measurement day as one recording at least 500 minutes of wear time during this time period (18% of days excluded as invalid). When examining the pattern of physical activity across the day, we only included hours with at least 30 minutes of measured wear time. Each participating child provided an average of 5.1 days across the week in which they were measured (range 1–7); we did not require a minimum number of valid days of accelerometer data per child because days, not children, were our primary units of analysis. Although we sought to limit our analyses to activity during waking hours, we unfortunately lacked reliable data on the time children went to sleep or woke up. While most children took their accelerometers off to sleep, on 6% of days there was evidence of overnight wear, defined as ≥5 minutes of weartime between 01:00 and 04:59. On these days, we assumed the child was in fact sleeping during any hour between 21:00 and 07:59 for which the mean accelerometer counts per minute (cpm) was below 50.
Mean cpm values of under 50 were observed for 90% of hours recorded between 03:00 and 03:59 but only 3% of hours recorded between 19:00 and 19:59, suggesting this cut-point provided a reasonable proxy for sleeping time among children for whom we had reason to suspect overnight wear. Our findings were unchanged in sensitivity analyses which instead used thresholds of 30 cpm or 100 cpm to exclude suspected sleeping time, or which excluded altogether the 6% of days with suspected overnight wear. Our pre-specified primary outcome measure was the child's average counts per minute. Substantive findings were similar in sensitivity analyses which instead used percent time spent in moderate-to-vigorous physical activity (MVPA), defined either as ≥3000 cpm [23] or ≥2296 cpm [24]. For our key findings, we present these MVPA results (using the ≥3000 cpm cut-off) alongside the results for mean cpm. In order to facilitate interpretation of these MVPA results, we additionally convert the observed percentage times into approximate absolute minutes on the assumption of a 14-hour average waking day [25].
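To illustrate the screening rules just described, the sketch below re-implements them on toy minute-level data: the 60-minute non-wear rule allowing 2 minutes of non-zero interruption, and the overnight heuristic treating hours starting between 21:00 and 07:59 with mean counts below 50 cpm as sleep. This is a simplified illustration under those stated assumptions, not the KineSoft processing actually used.

    import numpy as np

    def nonwear_flags(cpm, min_run=60, max_interrupt=2):
        # Flag minutes in a non-wear run: >= min_run consecutive zero-count
        # minutes, allowing up to max_interrupt non-zero minutes inside the
        # run (simplified re-implementation of the rule cited as [22]).
        n = len(cpm)
        flags = np.zeros(n, dtype=bool)
        i = 0
        while i < n:
            if cpm[i] != 0:
                i += 1
                continue
            j, interrupts, last_zero = i, 0, i
            while j < n:
                if cpm[j] == 0:
                    last_zero = j
                elif interrupts < max_interrupt:
                    interrupts += 1
                else:
                    break
                j += 1
            run_end = last_zero + 1           # trim trailing non-zero minutes
            if run_end - i >= min_run:
                flags[i:run_end] = True
            i = max(run_end, i + 1)
        return flags

    def suspected_sleep(hour_starts, hourly_mean_cpm, threshold=50):
        # Overnight heuristic: an hour starting between 21:00 and 07:59
        # counts as suspected sleep when its mean cpm is below threshold.
        hour_starts = np.asarray(hour_starts)
        night = (hour_starts >= 21) | (hour_starts <= 7)
        return night & (np.asarray(hourly_mean_cpm) < threshold)

    day = np.zeros(120, dtype=int); day[:30] = 500    # toy minute-level counts
    print(nonwear_flags(day).sum())                   # 90 minutes flagged as non-wear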
Time of sunset and covariates: For each day of accelerometer wear, we used http://www.timeanddate.com to assign time of sunset on that specific date in the city in which, or nearest which, data collection took place. We also used the date and the city of data collection to assign six weather variables to each day: total precipitation across the day, mean humidity across the day, maximum daily wind speed, mean daily temperature, maximum departure of temperature above the daily mean, and maximum departure of temperature below the daily mean. We accessed these data using Mathematica 9 (Wolfram Research), which compiles daily information from a wide range of weather stations run by states, international bodies or public-private partnerships [26]. The correlation between hour of sunset and mean temperature was moderately but not prohibitively high (r = 0.59), while correlations with other weather covariates were modest (r < 0.30). The child's height and weight were measured in the original studies using standardized clinical procedures, and we used these to calculate body mass index (kg/m2). Participants were categorized as underweight/normal weight, overweight or obese according to age- and sex-specific cut points [27]. Maternal education was assessed in 11 of the 15 studies, and was re-coded to distinguish between 'high school or lower' education versus 'college or university' education (Additional file 1: Table A2).
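In data-handling terms, this linkage is a join of each measurement day onto a (city, date) lookup table of sunset times and weather covariates. A minimal pandas sketch with hypothetical city names, dates and column names (the study itself used timeanddate.com and Mathematica 9 for these lookups):

    import pandas as pd

    # days: one row per child-day of accelerometer wear (toy data)
    days = pd.DataFrame({
        "child_id": [1, 1, 2],
        "city":     ["Norwich", "Norwich", "Odense"],
        "date":     pd.to_datetime(["2007-06-01", "2007-06-02", "2007-01-15"]),
    })

    # lookup: one row per city-date, with sunset hour and weather covariates
    lookup = pd.DataFrame({
        "city":        ["Norwich", "Norwich", "Odense"],
        "date":        pd.to_datetime(["2007-06-01", "2007-06-02", "2007-01-15"]),
        "sunset_hour": [21.2, 21.2, 16.5],
        "mean_temp":   [14.0, 15.5, 1.0],
        # ... plus precipitation, humidity, wind and temperature departures ...
    })

    # each day matches exactly one city-date row
    days = days.merge(lookup, on=["city", "date"], how="left", validate="m:1")
    print(days[["child_id", "sunset_hour", "mean_temp"]])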
Statistical analyses: Both time of sunset and weather vary between individual days, and we therefore used days, not children, as our units of analysis. We adjusted for the clustering of days within children using robust standard errors. All analyses used Stata 13.1. To address our first aim, we fit linear regression models with the outcome being daily or hourly activity cpm. Time of sunset was the primary explanatory variable of interest, with adjustment for study population, age, sex, weight status, day of the week and the six weather covariates. When using the changing of the clocks as a natural experiment, we restricted our analyses to the 439 children with at least one valid school-day measurement both in the week before and in the week after the clocks changed (e.g. Wednesday, Thursday and Friday before the clocks changed, and Monday and Tuesday afterwards). To address our second aim, we calculated the adjusted effect size of evening daylight separately for each study population. We used forest plots to present the fifteen resulting effect sizes, together with an I2 value representing between-study heterogeneity and an overall pooled effect size estimated using random-effects meta-analysis [28]. We sometimes converted pooled estimates into standardised effect sizes by dividing by the standard deviation of activity cpm for the population in question. We then fit interaction terms between evening daylight and the four pre-specified characteristics of sex, age, weight status and maternal education. These four characteristics were selected a priori as characteristics that we felt to be of interest and that were relatively consistently measured across the ICAD studies.
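The core model is thus an ordinary least-squares regression of daily cpm on hour of sunset plus covariates, with sandwich standard errors clustered on the child (in Stata, regress with the vce(cluster) option). A minimal Python analogue on toy data, with hypothetical variable names and only a subset of the covariates actually used:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    days = pd.DataFrame({
        "child_id":    rng.integers(0, 40, n),        # days cluster within children
        "sunset_hour": rng.uniform(16, 22, n),
        "age":         rng.integers(5, 17, n),
        "sex":         rng.choice(["M", "F"], n),
        "temp_mean":   rng.uniform(-5, 25, n),
    })
    days["mean_cpm"] = 400 + 20 * days["sunset_hour"] + rng.normal(0, 150, n)

    # OLS with sandwich SEs clustered on child; the real models also adjusted
    # for study population, weight status, day of week and six weather terms.
    fit = smf.ols("mean_cpm ~ sunset_hour + age + C(sex) + temp_mean",
                  data=days).fit(cov_type="cluster",
                                 cov_kwds={"groups": days["child_id"]})
    print(fit.params["sunset_hour"], fit.bse["sunset_hour"])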
We fit these interaction terms after stratifying by study population, and calculated I2 values and pooled effect sizes. When examining interactions with age, we restricted our analyses to children aged 9–15, as most measurement days (91%) were of children between these ages.
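The pooling step can be sketched with the standard DerSimonian–Laird formulas for the random-effects estimate, its confidence interval and the I2 statistic. The implementation below is a generic textbook version, not the Stata routine used in the paper; the point estimates in the usage line echo the three setting-level cpm values reported in the Results, while the standard errors are hypothetical.

    import numpy as np

    def dersimonian_laird(est, se):
        # Random-effects pooled estimate, 95% CI and I2 (DerSimonian-Laird).
        est, se = np.asarray(est, float), np.asarray(se, float)
        w = 1.0 / se**2                           # fixed-effect weights
        fixed = np.sum(w * est) / np.sum(w)
        q = np.sum(w * (est - fixed) ** 2)        # Cochran's Q
        df = len(est) - 1
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)             # between-study variance
        w_re = 1.0 / (se**2 + tau2)               # random-effects weights
        pooled = np.sum(w_re * est) / np.sum(w_re)
        se_pooled = np.sqrt(1.0 / np.sum(w_re))
        i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
        return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

    print(dersimonian_laird([20.2, 15.7, 8.2], [5.0, 5.5, 6.0]))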
Results:

The characteristics of the participants are summarised in Table 1. Of the measurement days, 66% were schooldays and 38% of days had no precipitation. The average daily temperature was 12°C (range -21 to 33°C, inter-quartile range 7 to 16°C). Mean daily weartime was 773 minutes (12.9 hours), and this was similar regardless of time of sunset (e.g. regression coefficient +1.40 minutes for days with sunset 18:00–19:59 versus pre-18:00 after adjusting for study population, age and sex; and -2.5 minutes for days with sunset post-20:00 versus pre-18:00).

Evening daylight and overall activity levels: A later hour of sunset (i.e. extended evening daylight) was associated with increased daily activity across the full range of time of sunset, and this association was only partly attenuated after adjusting for the six weather covariates (Figure 1). Here and for all findings reported subsequently, substantive findings were similar in sensitivity analyses which instead used percent time spent in MVPA. The adjusted difference in overall daily activity between days with sunset after 21:00 vs. before 17:00 was 75 cpm (95% CI 67, 84). The equivalent difference for percent daily time in MVPA was 0.72% (95% CI 0.60, 0.84) using the ≥3000 cpm cut-point, which translates into around 6 minutes. To put the values on the y-axis in context, participants had a mean daily activity count of 560 cpm (649 in boys, 503 in girls), and spent an average of 4.0% of their day, or 33 minutes, in MVPA (5.2%/43 minutes in boys, 3.1%/26 minutes in girls). The adjusted differences between the days with more versus less evening daylight are therefore modest but not trivial in relation to children's overall activity levels.

Figure 1: Association between time of sunset and total daily activity. CI = confidence interval, cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Minimally-adjusted analyses adjust for age, sex and study population; additionally-adjusted analyses also include day of the week, weight status and (most importantly) the six weather covariates. Hour of sunset is rounded down, e.g. '18' covers '18:00–18:59', and the reference group is sunset before 17:00.
The equivalent difference for percent daily time in MVPA was 0.72% (95% CI 0.60, 0.84) using the ≥3000 cpm cut-point, which translates into around 6 minutes. To put the values on the y-axis in context, participants had a mean daily activity count of 560 cpm (649 in boys, 503 in girls), and spent an average of 4.0% of their day, or 33 minutes, in MVPA (5.2%/43 minutes in boys, 3.1%/26 minutes in girls). The adjusted differences between the days with more versus less evening daylight are therefore modest but not trivial in relation to children’s overall activity levels.Figure 1 Association between time of sunset and total daily activity. CI = confidence interval, cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Minimally-adjusted analyses adjust for age, sex and study population; additionally-adjusted analyses also include day of the week, weight status and (most importantly) the six weather covariates. Hour of sunset is rounded down, e.g. ‘18’ covers ‘18:00–18:59’, and the reference group is sunset before 17:00. Association between time of sunset and total daily activity. CI = confidence interval, cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Minimally-adjusted analyses adjust for age, sex and study population; additionally-adjusted analyses also include day of the week, weight status and (most importantly) the six weather covariates. Hour of sunset is rounded down, e.g. ‘18’ covers ‘18:00–18:59’, and the reference group is sunset before 17:00. Evening daylight and the patterning of activity across the day Consistent with a causal interpretation, hour-by-hour analyses indicated that it was in the late afternoon and evening that the duration of evening daylight was most strongly associated with hourly physical activity levels (Figure  2). This was true on both schooldays and weekend/holiday days, with the period of the day when physical activity fell fastest corresponding to the timing of sunset (e.g. falling fastest between 18:00 and 19:00 on days when the sun also set between those hours). Similarly, when comparing the subsample of 439 children who were measured on schooldays immediately before and immediately after the changing of the clocks, there was strong evidence that children were more active during the evening of the days with later sunset (Figure  3). Between 17:00 and 20:59 the mean increase in physical activity on the days with later sunset was 94 cpm per hour (95% CI 62, 125); the equivalent increase in percent of time spent in MVPA was 0.84% (95% CI 0.40%, 1.28%) or 2.0 minutes.Figure 2 Adjusted mean activity counts per minute across the hours of the day, according to the time of sunset. cpm = counts per minute. Analysis based on 158,784 measurement days from 23,188 children from 15 studies in 9 countries. Analyses adjust for study population, age, sex, day of the week, weight status and the six weather variables, with a reference group of 09:00 on days with sunset before 18:00. Hours are rounded down, e.g. ‘18’ covers ‘18:00–18:59’. Confidence intervals not presented as they are generally too narrow to be clearly visible: Additional file 1: Figure A1 includes a version of this graph with confidence intervals.Figure 3 Mean physical activity across the hours of the day, comparing children either side of the changing of the clocks. CI = confidence interval, cpm = counts per minute. 
Figure 3. Mean physical activity across the hours of the day, comparing children either side of the changing of the clocks. CI = confidence interval, cpm = counts per minute. Analysis based on 1830 schooldays from 439 children from 11 studies in 9 countries. Analyses are restricted to children with at least one valid schoolday measurement day both before and after the clocks changed; to increase power, data from across the spring and autumn clock changes are pooled. Hours are grouped into two-hour time periods to increase power and are rounded down, e.g. ‘7-8’ covers ‘07:00–08:59’. Adjustment was not essential as each child serves as his or her own control, but the results were similar in adjusted analyses.

Importantly, Figure 2 and Figure 3 show no association between hour of sunset and activity levels in the morning, and generally no association in the early afternoon (with the exception of a modest effect on weekend/holiday days as early as 14:00 in Figure 2). This suggests that the association between evening daylight and physical activity cannot readily be explained by residual confounding by weather conditions, since any effects of weather would generally be expected to operate more evenly across the day [16]. These findings also provide no suggestion that later sunrise is associated with reduced activity in the morning, including on days when the sun set before 18:00 and on which the average time of sunrise was not until 07:27 (inter-quartile range 07:05 to 07:54).
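A minimal sketch of the clock-change comparison in Figure 3, assuming paired arrays of evening (17:00–20:59) mean cpm for the same children on either side of the clock change, with each child as his or her own control. The data below are simulated around the reported 94 cpm shift; they are not study data.

```python
# Sketch: the clock-change natural experiment, with each child serving as his
# or her own control. Assumes paired arrays of evening (17:00-20:59) mean cpm
# for the same children on days with earlier vs later sunset.
import numpy as np

def paired_difference(later, earlier):
    """Mean within-child difference in evening cpm, with a ~95% CI."""
    d = later - earlier
    mean = d.mean()
    se = d.std(ddof=1) / np.sqrt(len(d))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

rng = np.random.default_rng(0)
earlier = rng.normal(400, 120, size=439)          # evening cpm, earlier sunset
later = earlier + rng.normal(94, 150, size=439)   # ~94 cpm higher, later sunset
print(paired_difference(later, earlier))
```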
Examining differences by place, sex, age, weight and maternal education: As shown in Figure 4, there was strong evidence that the association between evening daylight and physical activity varied systematically between settings (I² = 75%, p < 0.001, for overall heterogeneity between the 15 studies). Specifically, there was relatively consistent evidence that evening daylight was associated with higher average physical activity in mainland Europe, England and (to a lesser extent) Australia. The pooled point estimates of the increase in daily mean activity in these three settings were 20.2 cpm, 15.7 cpm and 8.2 cpm per additional hour of evening daylight; these changes translate into standardised effect sizes of 0.07, 0.06 and 0.03, respectively. The equivalent effect sizes in terms of percent of daily time spent in MVPA were 0.19%, 0.20% and 0.05% per additional hour of evening daylight, corresponding to around 1.6 minutes, 1.7 minutes and 0.4 minutes respectively. By contrast, there was little or no consistent evidence of associations with evening daylight in the American samples or in the Madeiran and Brazilian samples, with standardised effect sizes ranging from -0.02 to +0.01 and in all cases non-significant. A post-hoc univariable meta-regression analysis provided some evidence that the smaller magnitude of the associations in these latter settings might reflect their higher maximum temperatures (adjusted R² = 51%, p = 0.01: see Additional file 1: Figure A2 and accompanying text).

Figure 4. Association between evening daylight and physical activity across study populations, and pooled effect sizes for interactions by sex, age, weight status and maternal education. cpm = counts per minute. Analysis based on 23,188 children from 15 studies in 9 countries, except for the comparison of maternal education which is based on 15,563 children in 11 studies in 8 countries (see Additional file 1: Tables A1 and A2 for details of studies providing maternal education data). On the left, random-effects pooled estimates are presented by country/region, together with 95% confidence intervals. Points to the right of the line indicate that longer evening daylight is associated with increased mean daily cpm; points to the left indicate the reverse. On the right, pooled effect sizes and 95% confidence intervals are shown following tests for interaction, with the adjusted interaction term representing the difference that the interaction variable (e.g. sex) makes to the effect size for evening daylight upon total daily activity measured in cpm. For interaction terms stratified by study population see Additional file 1: Figures A3 and A4.
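The random-effects pooling and I² heterogeneity statistic reported here can be illustrated with a standard DerSimonian-Laird estimator. This is a generic sketch with illustrative inputs, not the study's actual per-study estimates or analysis code.

```python
# Sketch: DerSimonian-Laird random-effects pooling with an I^2 statistic, one
# standard way to combine per-study estimates and quantify heterogeneity as in
# Figure 4. The inputs are illustrative, not the study's actual estimates.
import numpy as np

def dersimonian_laird(effects, variances):
    w = 1.0 / variances                        # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (variances + tau2)          # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2

effects = np.array([20.2, 15.7, 8.2, -1.0, 2.0])  # cpm per extra hour (toy)
variances = np.array([4.0, 3.0, 5.0, 6.0, 8.0])   # squared standard errors
print(dersimonian_laird(effects, variances))
```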
Although associations with evening daylight varied markedly between settings, there was less convincing evidence for interactions with the child’s characteristics (Figure 4, plus Additional file 1: Figures A3-A4). This lack of an interaction was clearest for weight status and maternal education, with neither variable showing any significant interaction in any of the five study settings, and with the overall pooled effect sizes also non-significant. By contrast, the non-significant pooled interaction terms for sex and age were harder to interpret, as in both cases there was some evidence of between-study heterogeneity (0.01 ≤ p ≤ 0.02). With respect to sex, this heterogeneity reflected the fact that the association between evening daylight and physical activity tended to be stronger in boys than in girls in some European and English samples, but this was not the case in the other settings (Additional file 1: Figure A3). With respect to age, there was no very obvious pattern: the magnitude of the association with evening daylight was greater among younger children in Denmark, greater among older children in Australia, and did not differ according to age in the remaining three settings for which sufficient data were available.
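To show the mechanics of the interaction tests summarised on the right of Figure 4, the sketch below fits a daylight-by-sex interaction term in a toy OLS model. The study's actual models also handled clustering and the full covariate set; every number here is simulated for illustration only.

```python
# Sketch: the mechanics of an interaction test (daylight x sex) on simulated
# data. All numbers here are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    'daylight_h': rng.uniform(0, 5, n),   # hours of evening daylight
    'male': rng.integers(0, 2, n),        # 1 = boy, 0 = girl
})
# Simulate a steeper slope in boys (15 vs 10 cpm per extra hour), echoing the
# pattern seen in some European samples:
df['cpm'] = (500 + 10 * df['daylight_h']
             + 5 * df['daylight_h'] * df['male']
             + 100 * df['male']
             + rng.normal(0, 150, n))

fit = smf.ols('cpm ~ daylight_h * male', data=df).fit()
print(fit.params[['daylight_h', 'daylight_h:male']])  # main effect + interaction
```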
Discussion: Among 23,000 school-age children from 9 countries, we found strong evidence that longer evening daylight was associated with a small increase in daily physical activity, even after adjusting for weather conditions. Consistent with a causal interpretation, the magnitude of this association was largest in the late afternoon and early evening, including when the same child was measured immediately before and after the clocks went forward or back. These associations were, however, only consistently observed in the European and Australian samples. There was inconsistent evidence that the magnitude of the association with evening daylight was greater in boys; no evidence of any differences in the magnitude of the association according to weight status or maternal education; and inconsistent findings for interactions with age.

Limitations and directions for future research: This study substantially extends previous analyses of some subsets of these data, which have at most provided a relatively brief examination of physical activity differences by season [29, 30]. It also addresses several recognised limitations of the existing literature [14], including small sample sizes, inconsistent accelerometer protocols and little or no examination of interactions with factors such as age, sex or weight status. In addition, our large sample size allowed us to use the bi-annual changing of the clocks as a natural experiment, and to show significant differences in children’s activity levels either side of the clock change. This observation considerably strengthens the case for a causal interpretation of the association between evening daylight and physical activity, as does the fact that the fastest decrease in children’s evening physical activity coincided with sunset throughout the year. This study does, however, also have several important limitations. First, our data were largely cross-sectional rather than longitudinal: although we could follow the same child across the week when the clocks changed, we could not follow children across a full year. We have, however, no reason to believe that children sampled at different times of the year differed systematically within or between studies. A second set of limitations involves data not available to us.
For one thing, although we adjusted for observed weather conditions on each day of measurement, the timing of some physically active events may instead reflect expected weather conditions (e.g. some schools may routinely schedule sports days on summer afternoons in the hope that it will be warm and dry). Failing to adjust for such social expectations may mean that our effect estimates are still subject to some residual confounding by weather, and this may partly account for why small differences in activity levels were seen as early as 14:00 on weekends/holidays. In addition, we lacked any data on the behavioural mediators of the observed activity differences. As such, we cannot examine how far one can generalise the findings of one previous, small English study which found that associations between day length and activity levels were largely mediated by outdoor play [16]. This is one useful direction for future research, perhaps particularly as it becomes increasingly possible to substitute or complement detailed activity diaries with data from global positioning system (GPS) monitors [31]. We also lacked systematic information on area-level factors, such as neighbourhood safety or the availability of green space, which might plausibly moderate the effect of evening daylight upon physical activity; again, this would be one useful direction for future research. Also of interest would be an examination of how a wider range of behaviours vary with daylength; these were largely beyond the scope of what is possible in the ICAD database, although the lack of any association between time of sunset and accelerometer weartime provides some indirect evidence against an effect of evening daylight on children’s duration of sleep. Finally, most of our study populations came from Europe and almost all came from high-income settings, meaning that more research would be needed to establish how far the observed associations apply across other settings. Our data do, however, give some hints that daylight saving measures might not increase activity in hot settings, perhaps because high temperatures may inhibit summertime activity.
Implications for policy and practice: The British parliament recently debated a Bill proposing new daylight saving measures which would shift the clocks forward by one additional hour year round [10]. If the adjusted, pooled effect size we observed in this study were fully causal, one would expect the proposed daylight saving measures to generate a 0.06 standard deviation increase in the total physical activity of English children, corresponding to an estimated 1.7 extra minutes of MVPA per day. The equivalent standardised effect sizes in mainland Europe and in Australia were 0.07 and 0.03, respectively. As such, introducing additional daylight saving measures in any of these settings would be likely to have only a small-to-very-small average effect upon each child. Such measures would, however, have far greater reach than most other potential policy initiatives, with these small average effects applying every day to each and every child in the country. This is important because even small changes to the population mean can have important public health consequences [8]. Moreover, although these population-level effect sizes are small in absolute terms, the English and mainland European effect estimates actually compare relatively favourably to individual-level approaches, despite the latter generally being much more intensive (and expensive).
For example, one recent meta-analysis of 22 randomised controlled comparisons reported a standardised pooled effect size of 0.12 (95% CI 0.04, 0.20) for interventions seeking to promote child or adolescent physical activity [32]. Notably, the association between longer evening daylight and higher physical activity was observed irrespective of weight status or maternal education. This contrasts with one previous Australian survey in which daylight saving measures seemed to have the largest effects among normal-weight adults from socio-economically advantaged groups [33]. Further research in adults would be useful to confirm this finding, ideally using objectively-measured activity data. Speculatively, however, a relatively wide range of children may respond to longer evening daylight by playing more outdoors, whereas among adults the effect may be confined primarily to the groups with the highest propensity to exercise.
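To make the population-reach argument above concrete, a back-of-envelope calculation: taking the estimated 1.7 extra minutes of MVPA per English child per day at face value and applying it to a hypothetical round-number population (our assumption, not a figure from the paper) gives the aggregate below.

```python
# Sketch: a back-of-envelope for the population-reach argument. The 1.7
# minutes/day figure is the paper's estimate for English children; the
# population size is a hypothetical round number of ours.
extra_min_per_child_per_day = 1.7
children = 10_000_000   # hypothetical school-age population
days = 365

total_hours_per_year = extra_min_per_child_per_day * children * days / 60
print(f"{total_hours_per_year:,.0f} extra child-hours of MVPA per year")
# ~103 million hours: tiny per child, large in aggregate.
```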
Conclusions: This study provides the strongest evidence to date that, in European and Australian settings, evening daylight plays a causal role in increasing physical activity in the late afternoon and early evening – a period which has been described as the ‘critical hours’ for children’s physical activity [34]. In these settings, it seems possible that additional daylight saving measures could shift mean population child physical activity levels by an amount which, although small in absolute terms, would not be trivial relative to what can feasibly be achieved through other approaches. Moreover, our findings also suggest that this effect might operate in a relatively equitable way. As such, while daylight saving proposals such as those recently considered in Britain would not solve the problem of inadequate levels of child physical activity, this paper indicates that they could represent a small step in the right direction.

Electronic supplementary material: Additional file 1: Additional methods and results; presentation of fuller details on the studies included in the analyses, and additional results. (PDF 759 KB)
Background: It has been proposed that introducing daylight saving measures could increase children's physical activity, but there exists little research on this issue. This study therefore examined associations between time of sunset and activity levels, including using the bi-annual 'changing of the clocks' as a natural experiment. Methods: 23,188 children aged 5-16 years from 15 studies in nine countries were brought together in the International Children's Accelerometry Database. 439 of these children were of particular interest for our analyses as they contributed data both immediately before and after the clocks changed. All children provided objectively-measured physical activity data from Actigraph accelerometers, and we used their average physical activity level (accelerometer counts per minute) as our primary outcome. Date of accelerometer data collection was matched to time of sunset, and to weather characteristics including daily precipitation, humidity, wind speed and temperature. Results: Adjusting for child and weather covariates, we found that longer evening daylight was independently associated with a small increase in daily physical activity. Consistent with a causal interpretation, the magnitude of these associations was largest in the late afternoon and early evening and these associations were also evident when comparing the same child just before and just after the clocks changed. These associations were, however, only consistently observed in the five mainland European, four English and two Australian samples (adjusted, pooled effect sizes 0.03-0.07 standard deviations per hour of additional evening daylight). In some settings there was some evidence of larger associations between daylength and physical activity in boys. There was no evidence of interactions with weight status or maternal education, and inconsistent findings for interactions with age. Conclusions: In Europe and Australia, evening daylight seems to play a causal role in increasing children's activity in a relatively equitable manner. Although the average increase in activity is small in absolute terms, these increases apply across all children in a population. Moreover, these small effect sizes actually compare relatively favourably with the typical effect of intensive, individual-level interventions. We therefore conclude that, by shifting the physical activity mean of the entire population, the introduction of additional daylight saving measures could yield worthwhile public health benefits.
Background: Physical activity confers substantial physical and mental health benefits in children [1–5], but most children around the world do not meet current activity guidelines [6]. For children as for adults, successfully promoting physical activity is likely to require both individual-level and population-level interventions [7]. The latter are important because, following the insights of Geoffrey Rose [8], even a small shift in a population mean can yield important public health benefits. One potential population-level measure which has received some policy attention in recent years concerns the introduction of additional daylight saving measures [9]. Although the total number of hours of daylight in the day is fixed, many countries modify when those hours fall by ‘changing the clocks’ – for example, putting the clocks forward in the summer to shift daylight hours from the very early morning to the evening. Recent decades have seen recurrent political debates surrounding daylight saving measures in several countries. For example, several Australian states have held repeated referenda on the topic, and the issue even spawned the creation in 2008 of the single-issue political party ‘Daylight Saving for South East Queensland’. Similarly a Bill was debated in the British Parliament between 2010 and 2012 which proposed to shift the clocks forward by an additional hour year round. This change would have given British children an estimated average of 200 extra waking daylight hours per year [10], and the logo of the associated civil society campaign depicted children playing outdoors in the evening sunlight. The Bill’s accompanying research paper listed “increased opportunities for outdoor activity” alongside other potential health and environmental benefits, such as reducing road traffic crashes and cutting domestic energy use [11]. A similar argument about leisure-time activity has featured in the Australian debate [12]. The British Bill’s research paper did not, however, cite any evidence to support its claims about physical activity, and nor does much evidence exist regarding likely impacts on children. Many studies have certainly reported that children’s physical activity is generally higher in the summer than in the winter, as reviewed in [13–15]. Very few studies, however, examine whether seasonal differences persist after adjustment for weather conditions, or whether the seasonal patterning of physical activity across the day is consistent with a causal effect of evening daylight. One study which did examine these issues in detail found that seasonal differences in physical activity were greatest in the late afternoon and early evening, which is what one would expect if time of sunset did play a causal role [16]. This study had some major limitations, however, including its small sample size (N = 325), its restriction to a single setting in south-east England, and its failure to adjust for temperature. This paper therefore revisited this question in a much larger, international sample. Our first broad aim was to test the hypotheses that (i) longer evening daylight is associated with higher total physical activity, even after adjusting for weather conditions; and (ii) these overall differences in physical activity are greatest in the late afternoon and early evening. Given our uniquely large sample size, we were also able to use countries’ bi-annual changing of the clocks as a natural experiment, i.e. 
as an event or intervention not designed for research purposes but which can nevertheless provide valuable research opportunities [17]. Specifically, we tested the hypothesis that the same child measured immediately before and immediately after the clocks changed would be more active on the days where sunset had been moved an hour later. Our second broad aim was to examine whether any associations between evening daylight and activity levels differed by study setting, sex, age, weight status or socio-economic position.
16,912
419
[ 726, 75, 639, 579, 269, 350, 432, 880, 921, 623, 392, 32 ]
17
[ "activity", "children", "daylight", "evening", "studies", "days", "study", "data", "physical", "physical activity" ]
[ "surrounding daylight saving", "australian survey daylight", "daylight saving measures", "daylight savings proposals", "proposed daylight savings" ]
[CONTENT] Child | Adolescent | Physical activity | Day length | Seasons [SUMMARY]
[CONTENT] Child | Adolescent | Physical activity | Day length | Seasons [SUMMARY]
[CONTENT] Child | Adolescent | Physical activity | Day length | Seasons [SUMMARY]
[CONTENT] Child | Adolescent | Physical activity | Day length | Seasons [SUMMARY]
[CONTENT] Child | Adolescent | Physical activity | Day length | Seasons [SUMMARY]
[CONTENT] Child | Adolescent | Physical activity | Day length | Seasons [SUMMARY]
[CONTENT] Accelerometry | Activities of Daily Living | Adolescent | Body Weight | Child | Child, Preschool | Cross-Sectional Studies | Female | Humans | Male | Motor Activity | Photoperiod | Play and Playthings | Public Health | Socioeconomic Factors [SUMMARY]
[CONTENT] Accelerometry | Activities of Daily Living | Adolescent | Body Weight | Child | Child, Preschool | Cross-Sectional Studies | Female | Humans | Male | Motor Activity | Photoperiod | Play and Playthings | Public Health | Socioeconomic Factors [SUMMARY]
[CONTENT] Accelerometry | Activities of Daily Living | Adolescent | Body Weight | Child | Child, Preschool | Cross-Sectional Studies | Female | Humans | Male | Motor Activity | Photoperiod | Play and Playthings | Public Health | Socioeconomic Factors [SUMMARY]
[CONTENT] Accelerometry | Activities of Daily Living | Adolescent | Body Weight | Child | Child, Preschool | Cross-Sectional Studies | Female | Humans | Male | Motor Activity | Photoperiod | Play and Playthings | Public Health | Socioeconomic Factors [SUMMARY]
[CONTENT] Accelerometry | Activities of Daily Living | Adolescent | Body Weight | Child | Child, Preschool | Cross-Sectional Studies | Female | Humans | Male | Motor Activity | Photoperiod | Play and Playthings | Public Health | Socioeconomic Factors [SUMMARY]
[CONTENT] Accelerometry | Activities of Daily Living | Adolescent | Body Weight | Child | Child, Preschool | Cross-Sectional Studies | Female | Humans | Male | Motor Activity | Photoperiod | Play and Playthings | Public Health | Socioeconomic Factors [SUMMARY]
[CONTENT] surrounding daylight saving | australian survey daylight | daylight saving measures | daylight savings proposals | proposed daylight savings [SUMMARY]
[CONTENT] surrounding daylight saving | australian survey daylight | daylight saving measures | daylight savings proposals | proposed daylight savings [SUMMARY]
[CONTENT] surrounding daylight saving | australian survey daylight | daylight saving measures | daylight savings proposals | proposed daylight savings [SUMMARY]
[CONTENT] surrounding daylight saving | australian survey daylight | daylight saving measures | daylight savings proposals | proposed daylight savings [SUMMARY]
[CONTENT] surrounding daylight saving | australian survey daylight | daylight saving measures | daylight savings proposals | proposed daylight savings [SUMMARY]
[CONTENT] surrounding daylight saving | australian survey daylight | daylight saving measures | daylight savings proposals | proposed daylight savings [SUMMARY]
[CONTENT] activity | children | daylight | evening | studies | days | study | data | physical | physical activity [SUMMARY]
[CONTENT] activity | children | daylight | evening | studies | days | study | data | physical | physical activity [SUMMARY]
[CONTENT] activity | children | daylight | evening | studies | days | study | data | physical | physical activity [SUMMARY]
[CONTENT] activity | children | daylight | evening | studies | days | study | data | physical | physical activity [SUMMARY]
[CONTENT] activity | children | daylight | evening | studies | days | study | data | physical | physical activity [SUMMARY]
[CONTENT] activity | children | daylight | evening | studies | days | study | data | physical | physical activity [SUMMARY]
[CONTENT] activity | daylight | physical | physical activity | evening | seasonal | benefits | clocks | children | hours [SUMMARY]
[CONTENT] data | days | cpm | time | study | studies | children | wear | measurement | 11 [SUMMARY]
[CONTENT] 00 | figure | cpm | 18 | activity | evening | sunset | confidence | daylight | evening daylight [SUMMARY]
[CONTENT] child physical | child physical activity | physical activity | physical | activity | daylight | small | settings | levels | recently considered [SUMMARY]
[CONTENT] activity | children | daylight | additional | physical | data | physical activity | cpm | evening | effect [SUMMARY]
[CONTENT] activity | children | daylight | additional | physical | data | physical activity | cpm | evening | effect [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 23,188 | 5-16 years | 15 | nine | the International Children's Accelerometry Database ||| 439 ||| Actigraph ||| daily [SUMMARY]
[CONTENT] evening | daily ||| early evening ||| five | European | four | English | two | Australian | 0.03 | deviations per hour of additional evening ||| ||| [SUMMARY]
[CONTENT] Europe | Australia | evening ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| 23,188 | 5-16 years | 15 | nine | the International Children's Accelerometry Database ||| 439 ||| Actigraph ||| daily ||| ||| evening | daily ||| early evening ||| five | European | four | English | two | Australian | 0.03 | deviations per hour of additional evening ||| ||| ||| Europe | Australia | evening ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| 23,188 | 5-16 years | 15 | nine | the International Children's Accelerometry Database ||| 439 ||| Actigraph ||| daily ||| ||| evening | daily ||| early evening ||| five | European | four | English | two | Australian | 0.03 | deviations per hour of additional evening ||| ||| ||| Europe | Australia | evening ||| ||| ||| [SUMMARY]
The Psychological Impact of COVID-19 Pandemic on Women's Mental Health during Pregnancy: A Rapid Evidence Review.
34281049
The perinatal period is a particularly vulnerable period in women's lives that implies significant physiological and psychological changes that can place women at higher risk for depression and anxiety symptoms. In addition, the ongoing pandemic of coronavirus disease 2019 (COVID-19) is likely to increase this vulnerability and the prevalence of mental health problems. This review aimed to investigate the existing literature on the psychological impact of the COVID-19 pandemic on women during pregnancy and the first year postpartum.
BACKGROUND
The literature search was conducted using the following databases: Pubmed, Scopus, WOS (Web of Science), PsycInfo and Google Scholar. Of the 116 initially retrieved papers, 17 were included in the final review, according to the inclusion criteria.
METHOD
The reviewed contributions report a moderate to severe impact of the COVID-19 outbreak on the mental health of pregnant women, mainly in the form of a significant increase in depression (up to 58% in Spain) and anxiety symptoms (up to 72% in Canada). In addition to the common psychological symptoms, COVID-19-specific worries emerged with respect to its potential effects on the pregnancy and the well-being of the unborn child. Social support and engagement in regular physical activity appear to be protective factors able to buffer against the effects of the pandemic on maternal mental health.
RESULTS
Despite the limitations of the study design, the evidence suggests that it is essential to provide appropriate psychological support to pregnant women during the emergency in order to protect their mental health and to minimize the risks of long-term effects on child development.
CONCLUSIONS
[ "Anxiety", "COVID-19", "Canada", "Child", "Depression", "Female", "Humans", "Mental Health", "Pandemics", "Parturition", "Pregnancy", "SARS-CoV-2", "Spain", "Stress, Psychological" ]
8297318
1. Introduction
On 11 March 2020, the World Health Organization (WHO) officially declared coronavirus disease 2019 (COVID-19), which originated in Wuhan in December 2019, a pandemic. In the course of most infectious disease outbreaks, restrictive measures can be necessary to contain the virus. With the aim of limiting Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) propagation, governments around the world have imposed restrictions such as national lockdowns and social distancing. A recent review [1] suggested that restrictive measures are often associated with negative psychological effects that can still be identified months or years later, and highlighted the impact of quarantine and isolation on mental health. Indeed, the current outbreak is leading to psychological distress and increased mental health problems, such as stress, anxiety, depressive symptoms, insomnia, denial, anger and fear [2]. Psychological distress and mood disorders seem most likely in more vulnerable populations [3,4,5], such as pregnant women. Maternal mental health is particularly important to consider, due to the increased risk for depression and anxiety [6]. Pregnancy and the postpartum period, especially for first-time mothers, have been identified as delicate periods in a woman's life that are accompanied by significant social, psychological and physiological changes [7,8], and for this reason pregnant women have been considered a high-risk population. Several studies have reported that the perinatal period is a time characterized by increased risk for emotional disorders such as depression, anxiety, and trauma-related disorders, especially in the presence of stress conditions [8,9,10]. This is also true for pregnant and postpartum women and their infants in the face of emergencies or natural disasters [11,12]. Indeed, during the SARS outbreak, pregnant women may have had concerns about their own health and about the health of their unborn babies, and may have displayed fears relating to pregnancy, to childbirth, or both. Additionally, feelings of uncertainty (characteristic of an epidemic) represent a significant stressor that can increase distress in pregnant women [13]. Overall, these complex and multiple variables may affect both mothers' and their children's physical and psychological health over the short, medium and long term [14,15,16,17]. Therefore, the conditions of the COVID-19 pandemic and associated factors could produce additional stress for women during perinatality and accentuate this predisposition [3,18]. For these reasons, and due to the negative effect of psychological distress during pregnancy on the health of mothers and their offspring, priority should be given to supporting maternal mental health in the perinatal period [19,20]. These issues suggest that research is necessary to explore the effects of the COVID-19 pandemic on women during perinatality. The current review was designed to summarize the existing literature on the psychological impact of the COVID-19 pandemic on pregnant women.
null
null
3. Results
3.1. Maternal Mental Health All of the identified papers suggest that the COVID-19 pandemic can have a significant impact on maternal mental health, mainly in the form of anxiety and depressive symptoms. The prevalence of depression and anxiety in pregnant women has significantly increased since the spread of COVID-19 disease. Pregnant women during the COVID-19 pandemic reported more psychological symptomatology compared to pregnant women before the COVID-19 outbreak. 3.2. Countries The studies included in the rapid review consider participants from China [22,23,24], Canada [18,25,26,27], Turkey [28,29], Argentina [3], Iran [30], Qatar [31], Spain [32], Italy [33,34], Pakistan [35], and Japan [36]. 3.3. Participants All the studies involved women who were at least 18 years old. Most of the papers concerned studies addressing women during pregnancy [3,18,22,23,25,26,33], two of which were case–control studies comparing pregnant and non-pregnant women [3,23]. Only one study [3] longitudinally monitored the population throughout the lockdown. In this case, participants were divided into two groups: 102 pregnant women and a control group of 102 non-pregnant women. One study was a case–control study [18] that considered pregnant women before the COVID-19 pandemic and pregnant women during the pandemic; finally, one contribution [27] considered pregnant and first-year postpartum women, assessed before and during the pandemic. A Chinese survey [22] compared the mental health status of pregnant women before the declaration of the COVID-19 epidemic and after. Only two studies [27,32] considered both pregnancy and the postpartum period. 3.4. Instruments As regards the administered instruments, all studies adopted self-reports; seven studies delivered only one questionnaire, the rest multiple measures.
As concerns depression, seven studies applied the Edinburgh Postnatal Depression Scale (EPDS) [37], a 10-item self-report questionnaire addressing perinatal depressive symptoms within the last 7 days. The overall score is computed by summing items rated on a four-point Likert scale; higher scores reflect more depressive symptoms. Three others applied self-report depression symptom scales, although these were not specific to pregnancy and the postpartum period: the Center for Epidemiological Studies Depression Scale [38]; the Beck Depression Inventory II [39]; the Patient Health Questionnaire 9 [40]. With respect to anxiety, only two studies evaluated perinatal anxiety: one study used a questionnaire including ten items specifically addressing feelings about the health of the baby and her/his birth [41], and the other administered the Cambridge Worry Scale (CWS) [42] to assess pregnancy-specific anxiety as well as general anxiety, whereas the majority of scholars applied generic anxiety questionnaires. Four administered the State–Trait Anxiety Inventory (STAI) [43]; one study applied the Generalized Anxiety Disorder Scale 7 (GAD-7) [44], one the Patient-Reported Outcomes Measurement Information System (PROMIS) Anxiety Adult seven-item short form [45], and one the Visual Analog Scale (VAS) for anxiety [46]. Some studies used a combined measure for depression and anxiety: two applied the Hospital Anxiety and Depression Scale (HADS) [47], and one the Patient Health Questionnaire Anxiety–Depression Scale (PHQ-ADS) [48]. One study administered the Depression Anxiety Stress Scales 21 (DASS 21) [49] to distinguish between the affective syndromes of depression, anxiety and tension/stress. Three studies resorted to the Positive and Negative Affect Schedule (PANAS) [50] to evaluate mood or emotion. Global psychological distress was also measured through the Kessler Psychological Distress Scale (K10) [51] in two papers, and the Symptom Checklist-90 (SCL-90) [52] in one paper. Finally, specific measures of other variables included in the contributions were evaluated, but none were specific to perinatality, with the exception of one study measuring the infants' APGAR (Adaptability, Partnership, Growth, Affection, and Resolve) [53]. 3.5. Prevalence of Depression and Anxiety Symptoms The prevalence of depression and anxiety reported was similar across all the studies considered. With regard to depression: in Qatar, for example, 39.2% of pregnant women presented depressive symptomatology [31]; in Turkey, the prevalence was 56.3% [28]; in Iran, 32.7% of the participants had symptoms of depression [30]; 58% was reported in Spain [32]; in Canada, the studies indicated values close to 40% (37% [25]; 40.7% [27]); and in China, 29.6% of women in Wu's study [22] and 33.71% of the 2883 women involved in Sun et al.'s survey [24] reported symptoms of depression. Concerning anxiety symptoms, in Qatar a 34.4% prevalence of anxiety was identified [31]; 51% was reported in Spain [32]; in Canada, rates from 56.6% [25] to 72% [27] were detected, close to the Italian prevalence of 68% [33]; and a Turkish study found a rate of 64.5% [28] and an Iranian study one of 43.9% [30]. 3.6. Comparison between Pre- and Post-COVID Depressive and Anxiety Symptoms More specifically, one of the four studies from Canada (n = 1987) found significantly higher anxiety and depressive symptoms compared to the scores of pregnant women before the COVID-19 pandemic, with 37% self-reporting clinical levels of depression and 57% self-reporting clinical levels of anxiety [25].
In Davenport et al.'s investigation [27], 900 women were involved: 58% were pregnant and 42% were in the first year postpartum. Pre-pandemic and current values were assessed for each group. An EPDS score > 13 was self-reported by 15% of mothers pre-pandemic and by 40.7% during the COVID-19 pandemic. Moderate to high anxiety (STAI-state score > 40) was reported by 29% of women before the pandemic vs. 72% of women during its course. In another Canadian survey, two cohorts of pregnant women were recruited: one prior to the COVID-19 pandemic (n = 496), while the other (n = 1258) was enrolled online during the pandemic [18]. Researchers have shown that the latter reported more depressive and anxiety symptoms than pregnant women assessed before the COVID-19 pandemic. In addition, the COVID-19 women reported higher levels of dissociative symptoms and of post-traumatic stress disorder symptoms, and also described more negative affectivity and less positive affectivity than the pre-COVID-19 cohort did. In addition, this study showed that pregnant women assessed within the pandemic context with a previous psychiatric diagnosis or coming from a low-income background were more inclined to develop psychiatric symptoms. The latter result contrasts with evidence from another study [31]: despite the main findings of Farrel's research revealing that 34.4% of women reached clinical levels for anxiety and 39.2% for depression, these analyses did not reveal any association between these symptoms and previous mental health, occupation, pregnancy complications or gestational age. These results highlight that the worsening of psychiatric symptoms could be attributed to the psychological impact of the pandemic and to the containment measures. Similarly, Effati-Daryani et al. [30] showed that among their sample of 205 women, based on the scores obtained in the DASS-21, 67.3% were in the normal range and 32.7% were identified as having symptoms of depression (12.7% mild, 10.7% moderate, 7.3% severe and 2.0% extremely severe), while 56.1% were in the normal range for anxiety and 43.9% had anxiety symptoms (17.6% mild, 12.2% moderate, 6.3% severe and 7.8% very severe). As in Farrel's aforementioned study, the evidence showed no statistically significant relationship between gestational age and depression, stress, and anxiety levels (p > 0.05). A multi-center cross-sectional study [22] provided the opportunity to compare the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Pregnant women assessed after the abovementioned declaration had significantly increased rates of depressive symptoms (26.0% vs. 29.6%) compared to women evaluated before the declaration. Additionally, the prevalence of depressive symptoms increased along with the increase in the number of newly confirmed cases, suspected infections and deaths. This evidence is consistent with Sun et al.'s study [24], which demonstrated that the prevalence of perinatal depression increased with the increasing number of confirmed COVID-19 cases. In particular, among the 2883 women involved in the survey, 33.71% had depressive symptoms, 27.02% showed mild depression, 5.24% moderate depression, and 1.46% severe depression. 3.7. Comparison between Pregnant versus Non-Pregnant Women's Mental Health Regarding the prevalence of anxiety and depressive symptoms during the pandemic compared between pregnant and non-pregnant women, discordant results emerged from the two studies considered [3,23].
The first one [3] demonstrated that, during quarantine, both pregnant and non-pregnant women showed a gradual increase in psychopathological measures and a decline in positive affect. However, the group of pregnant women showed a more pronounced increase in depression, anxiety and negative affect than the non-pregnant women did, as well as a more evident decrease in positive affect. On the contrary, in the other study [23], pregnant women seemed to have an advantage when facing mental health problems; indeed, they showed lower levels of depression, anxiety, insomnia and post-traumatic stress disorder (PTSD) than non-pregnant women. Specifically, 5.3%, 6.8%, 2.4%, 2.6%, and 0.9% of pregnant women, respectively, presented symptoms of depression, anxiety, physical discomfort, insomnia, and PTSD, whereas the corresponding prevalences among non-pregnant women were 17.5% (depression), 17.5% (anxiety), 2.5% (physical discomfort), 5.4% (insomnia), and 5.7% (PTSD). Taken together, the data that emerged from the papers included in this review suggest that the COVID-19 outbreak had a moderate to severe effect on the mental health of pregnant women; indeed, the prevalence of psychological symptoms (mainly depression and anxiety) has significantly increased with the spread of COVID-19. 3.8. Beyond Depression and Anxiety: Specific Maternal Worries and Fears In addition to the common psychiatric symptoms of depression and anxiety, some of the included studies also reported a high prevalence of fear, which represents the most reported symptom in pregnant women [25,26,29,31,33,34,35,36]. The concerns regarding infection were mainly for the pregnancy and for their families and children. Many women in the reviewed studies from different countries expressed worries about their own health and that of their unborn children in relation to the pandemic [25,29,31,33,35,36], concerns about delivery (e.g., whether their partner would be present at the birth) and the baby's health (e.g., something being wrong with the infant) [26].
In particular, two studies reported evidence that pregnant women experience great anxiety regarding the fear of transmitting the virus vertically to their baby [29,33]. Saccone et al. [33] pointed out that almost half of the women (46%) had worries about transmitting the infection to their infants. In the survey headed by Akgor et al. [29], the authors found that 82.5% (n = 245) of the pregnant women involved in the research reported high anxiety regarding the vertical transmission of the disease to their babies during delivery if they were infected with COVID-19. Consistent with this, in another survey [31] of 552 mothers, 353 (64%) women were highly aware of and worried about the COVID-19 pandemic (i.e., fears of carrying the virus, vertical transmission causing harm to fetuses, vulnerability). This finding emerged despite the fact that 64% of respondents did not acknowledge any impact of the COVID-19 pandemic on their mental well-being. In a cross-sectional survey of 288 women accessing maternity services in Qatar, Farrel et al. [31] identified worries about pregnancy in 143 women and concerns about family and children in 189 of them. A high prevalence of fear of abnormal perinatal consequences was also detected; in one study conducted in Italy, over half of the mothers were worried that COVID-19 could cause a fetal structural anomaly, fetal growth restriction or preterm delivery [34]. Additionally, more than half of the pregnant women involved in another survey (66%, n = 196) were worried about pregnancy problems if their visits to the hospital were delayed or cancelled [29]. Furthermore, Lebel et al. [25] identified that the elevated depression and anxiety symptoms during the COVID-19 pandemic were significantly associated with COVID-19-related concerns, i.e., threats to their baby's health and to their own lives, worries about not receiving enough care during pregnancy, and also worries due to social isolation. These levels are much higher than what is typical for pregnant women and what was reported by the rest of the community during the COVID-19 pandemic [25]. 3.9. Protective Factors However, several of the reviewed studies also focused on some possible factors that may mediate/moderate the impact of the pandemic on women's mental health. Some scholars reported that increased perceived social support and support effectiveness were associated with lower mental health symptoms, and appeared to be protective factors against depression and anxiety [25,26,27]. Similarly, a Japanese survey including 1777 pregnant women demonstrated that a lack of social support is significantly related to depressive symptoms [36]. These results are in line with previous literature showing that better social support was related to decreased depression and anxiety symptoms during both pregnancy and postpartum [54,55]. As is known, life during a pandemic is characterized by isolation, social distancing, restrictive measures and the limitation of movement, all of which can lead women to experience a lack of social support from friends, relatives, and partners [56], with negative consequences for mental health, as mentioned above. Physical activity has also been investigated in terms of its protective function against psychological symptoms. Specifically, four studies showed that physical activity is related to reduced mental health problems. Being involved in regular physical activity during the COVID-19 pandemic represents a protective factor against the onset of anxiety and depressive symptoms in pregnant [22,25,27,28] or postpartum women [27], as confirmed by Lebel et al. [25].
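The screening cutoffs cited above (an EPDS total > 13 and a STAI-state total > 40) lend themselves to a small worked example. The Python sketch below is purely illustrative, not analysis code from any of the reviewed studies; the function names are hypothetical, and the score-range checks assume the standard instrument formats (EPDS: 10 items scored 0–3, totals 0–30; STAI state form: 20 items scored 1–4, totals 20–80).

```python
# Illustrative sketch only: flag questionnaire totals against the cutoffs
# reported in the reviewed studies (EPDS > 13; STAI-state > 40).
# Function names and range checks are assumptions of this sketch.

def epds_above_cutoff(total: int) -> bool:
    """EPDS: 10 items scored 0-3, totals 0-30; > 13 flags clinically relevant depressive symptoms."""
    if not 0 <= total <= 30:
        raise ValueError("EPDS totals range from 0 to 30")
    return total > 13

def stai_state_above_cutoff(total: int) -> bool:
    """STAI state form: 20 items scored 1-4, totals 20-80; > 40 flags moderate-to-high anxiety."""
    if not 20 <= total <= 80:
        raise ValueError("STAI-state totals range from 20 to 80")
    return total > 40

# A respondent scoring 15 on the EPDS and 38 on the STAI state form would be
# flagged for depressive symptoms but not for anxiety.
print(epds_above_cutoff(15), stai_state_above_cutoff(38))  # True False
```

Read this way, the 40.7% and 72% figures from Davenport et al. [27] are simply the shares of respondents whose totals exceed these two cutoffs.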
6. Conclusions
The present review provides valuable clinical indications about what should be carefully monitored when evaluating women during the perinatal period. In fact, the COVID-19 pandemic adds numerous risk factors for the mental health of mothers during the perinatal period. Longitudinal, multicenter cohort studies should be carried out in order to promote standardized screening and intervention guidelines to support pregnant and postpartum women during the COVID-19 outbreak, and to promote healthy family functioning. The identification of risk and protective factors during the current pandemic is particularly important, especially considering the long-term effect that maternal mental health has on a child's development. Finally, despite the acknowledged distress linked to such a situation, the pandemic may offer the possibility to develop pioneering online methods to detect psychological problems and deliver early mental health interventions to mothers and their infants.
[ "2. Material and Methods", "2.1. Search Strategy", "2.2. Population", "2.3. Intervention/Exposure", "2.4. Comparison", "2.5. Outcomes", "2.6. Study Design", "2.7. Selection Criteria", "2.8. Data Extraction", "3.1. Maternal Mental Health", "3.2. Countries", "3.4. Instruments", "3.5. Prevalence of Depression and Anxiety Symptoms", "3.6. Comparison between Pre- and Post-COVID Depressive and Anxiety Symptoms", "3.7. Comparison between Pregnant versus Non-Pregnant Women’s Mental Health", "3.8. Beyond Depression and Anxiety: Specific Maternal Worries and Fears", "3.9. Protective Factors", "5. Limitations" ]
[ "This research was conducted as a rapid review. Rapid reviews follow the guidelines for systematic reviews, but are simplified in order to accelerate the process of traditional reviews to produce rapid evidence [21].\n2.1. Search Strategy The Pubmed, Scopus, WOS—web of science, PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Following the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of identified publications. Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria.\nThe Pubmed, Scopus, WOS—web of science, PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Following the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of identified publications. Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria.\n2.2. Population Women who were pregnant at the time of the first wave of COVID-19 outbreak in their country.\nWomen who were pregnant at the time of the first wave of COVID-19 outbreak in their country.\n2.3. Intervention/Exposure Studies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic.\nStudies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic.\n2.4. Comparison This is not applicable for the aim of this rapid review.\nThis is not applicable for the aim of this rapid review.\n2.5. Outcomes We looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder).\nWe looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder).\n2.6. Study Design We included studies with primary data collection.\nWe included studies with primary data collection.\n2.7. Selection Criteria The inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. 
After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce the selection bias.\nFinally, of a total of 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes.\nThe study selection process is illustrated by the PRISMA flow chart shown in Figure 1.\nThe inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce the selection bias.\nFinally, of a total of 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes.\nThe study selection process is illustrated by the PRISMA flow chart shown in Figure 1.\n2.8. Data Extraction The study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results.\nThe study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results.", "The Pubmed, Scopus, WOS—web of science, PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Following the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of identified publications. 
Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria.", "Women who were pregnant at the time of the first wave of COVID-19 outbreak in their country.", "Studies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic.", "This is not applicable for the aim of this rapid review.", "We looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder).", "We included studies with primary data collection.", "The inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce the selection bias.\nFinally, of a total of 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes.\nThe study selection process is illustrated by the PRISMA flow chart shown in Figure 1.", "The study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results.", "All of the identified papers suggest that the COVID-19 pandemic can have a significant impact on maternal mental health, mainly in the form of anxiety and depressive symptoms. The prevalence of depression and anxiety in pregnant women has significantly increased since the spread of COVID-19 disease. Pregnant women during the COVID-19 pandemic reported more psychological symptomatology compared to pregnant women before the COVID-19 outbreak.", "The studies included in the rapid review consider participants from China [22,23,24], Canada [18,25,26,27], Turkey [28,29], Argentina [3], Iran [30], Qatar [31], Spain [32], Italy [33,34], Pakistan [35], and Japan [36].", "As regards the administered instruments, all studies adopted self-reports; seven studies delivered only one questionnaire, the rest multiple measures.\nAs concerns depression, seven studies applied the Edinburgh Postnatal Depression Scale (EPDS) [37], a 10-item self-report questionnaire addressing perinatal depressive symptoms within the last 7 days. The overall score is computed by adding items on a four-point Likert scale. 
Higher scores reflect more depressive symptoms.\nThree others applied self-report depression symptom scales, although these were not specific to pregnancy and the postpartum period: the Center for Epidemiological Studies Depression Scale [38]; the Beck Depression Inventory II [39]; the Patient Health Questionnaire 9 [40].\nWith respect to anxiety, only two studies evaluated perinatal anxiety: one study used a questionnaire including ten items specifically addressing feelings about the health of the baby and her/his birth [41], and the other administered the Cambridge Worry Scale (CWS) [42] to assess pregnancy-specific anxiety as well as general anxiety, whereas the majority of scholars applied generic anxiety questionnaires. Four administered the State–Trait Anxiety Inventory (STAI) [43]; one study applied the Generalized Anxiety Disorder Scale 7 (GAD-7) [44], one the Patient-Reported Outcomes Measurement Information System (PROMIS) Anxiety Adult seven-item short form [45], and one the Visual Analog Scale (VAS) for anxiety [46].\nSome studies used a combined measure for depression and anxiety: two applied the Hospital Anxiety and Depression Scale (HADS) [47], and one the Patient Health Questionnaire Anxiety–Depression Scale (PHQ-ADS) [48].\nOne study administered the Depression Anxiety Stress Scales 21 (DASS 21) [49] to distinguish between the affective syndromes of depression, anxiety and tension/stress.\nThree studies resorted to the Positive and Negative Affect Schedule (PANAS) [50] to evaluate mood or emotion.\nGlobal psychological distress was also measured through the Kessler Psychological Distress Scale (K10) [51] in two papers, and the Symptom Checklist-90 (SCL-90) [52] in one paper.\nFinally, specific measures of other variables included in the contributions were evaluated, but none were specific to perinatality, with the exception of one study measuring the infants' APGAR (Adaptability, Partnership, Growth, Affection, and Resolve) [53].", "The prevalence of depression and anxiety reported was similar across all the studies considered. With regard to depression: in Qatar, for example, 39.2% of pregnant women presented depressive symptomatology [31]; in Turkey, the prevalence was 56.3% [28]; in Iran, 32.7% of the participants had symptoms of depression [30]; 58% was reported in Spain [32]; in Canada, the studies indicated values close to 40% (37% [25]; 40.7% [27]); and in China, 29.6% of women in Wu's study [22] and 33.71% of the 2883 women involved in Sun et al.'s survey [24] reported symptoms of depression. Concerning anxiety symptoms, in Qatar a 34.4% prevalence of anxiety was identified [31]; 51% was reported in Spain [32]; in Canada, rates from 56.6% [25] to 72% [27] were detected, close to the Italian prevalence of 68% [33]; and a Turkish study found a rate of 64.5% [28] and an Iranian study one of 43.9% [30].", "More specifically, one of the four studies from Canada (n = 1987) found significantly higher anxiety and depressive symptoms compared to the scores of pregnant women before the COVID-19 pandemic, with 37% self-reporting clinical levels of depression and 57% self-reporting clinical levels of anxiety [25]. In Davenport et al.'s investigation [27], 900 women were involved: 58% were pregnant and 42% were in the first year postpartum. Pre-pandemic and current values were assessed for each group. An EPDS score > 13 was self-reported by 15% of mothers pre-pandemic and by 40.7% during the COVID-19 pandemic. 
Moderate to high anxiety (STAI-state score > 40) was reported by 29% of women before the pandemic vs. 72% of women during its course.\nIn another Canadian survey, two cohorts of pregnant women were recruited: one prior to the COVID-19 pandemic (n = 496), while the other (n = 1258) was enrolled online during the pandemic [18]. Researchers have shown that the latter reported more depressive and anxiety symptoms than pregnant women assessed before the COVID-19 pandemic. In addition, the COVID-19 women reported higher levels of dissociative symptoms and of post-traumatic stress disorder symptoms, and also described more negative affectivity and less positive affectivity than the pre-COVID-19 cohort did. In addition, this study showed that pregnant women assessed within the pandemic context with a previous psychiatric diagnosis or coming from a low-income background were more inclined to develop psychiatric symptoms.\nThe latter result contrasts with evidence from another study [31]: despite the main findings of Farrel's research revealing that 34.4% of women reached clinical levels for anxiety and 39.2% for depression, these analyses did not reveal any association between these symptoms and previous mental health, occupation, pregnancy complications or gestational age. These results highlight that the worsening of psychiatric symptoms could be attributed to the psychological impact of the pandemic and to the containment measures. Similarly, Effati-Daryani et al. [30] showed that among their sample of 205 women, based on the scores obtained in the DASS-21, 67.3% were in the normal range and 32.7% were identified as having symptoms of depression (12.7% mild, 10.7% moderate, 7.3% severe and 2.0% extremely severe), while 56.1% were in the normal range for anxiety and 43.9% had anxiety symptoms (17.6% mild, 12.2% moderate, 6.3% severe and 7.8% very severe). As in Farrel's aforementioned study, the evidence showed no statistically significant relationship between gestational age and depression, stress, and anxiety levels (p > 0.05).\nA multi-center cross-sectional study [22] provided the opportunity to compare the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Pregnant women assessed after the abovementioned declaration had significantly increased rates of depressive symptoms (26.0% vs. 29.6%) compared to women evaluated before the declaration. Additionally, the prevalence of depressive symptoms increased along with the increase in the number of newly confirmed cases, suspected infections and deaths. This evidence is consistent with Sun et al.'s study [24], which demonstrated that the prevalence of perinatal depression increased with the increasing number of confirmed COVID-19 cases. In particular, among the 2883 women involved in the survey, 33.71% had depressive symptoms, 27.02% showed mild depression, 5.24% moderate depression, and 1.46% severe depression.", "Regarding the prevalence of anxiety and depressive symptoms during the pandemic compared between pregnant and non-pregnant women, discordant results emerged from the two studies considered [3,23]. The first one [3] demonstrated that, during quarantine, both pregnant and non-pregnant women showed a gradual increase in psychopathological measures and a decline in positive affect. However, the group of pregnant women showed a more pronounced increase in depression, anxiety and negative affect than the non-pregnant women did. 
In addition, pregnant women showed a more evident decrease in positive affect. On the contrary, in the other study [23], pregnant women seemed to have an advantage in facing mental health problems; indeed, they showed lower levels of depression, anxiety, insomnia and post-traumatic stress disorder (PTSD) than non-pregnant women. Specifically, 5.3%, 6.8%, 2.4%, 2.6%, and 0.9% of pregnant women, respectively, presented symptoms of depression, anxiety, physical discomfort, insomnia, and PTSD, whereas non-pregnant women’s prevalences were 17.5% (depression), 17.5% (anxiety), 2.5% (physical discomfort), 5.4% (insomnia), and 5.7% (PTSD).\nTaken together, the data that emerged from the papers included in this review suggest that the COVID-19 outbreak had a moderate to severe effect on the mental health of pregnant women; indeed, the prevalence of psychological symptoms (mainly depression and anxiety) has significantly increased with the diffusion of COVID-19.", "In addition to the common psychiatric symptoms of depression and anxiety, some of the included studies also reported a high prevalence of fear, which represents the most reported symptom in pregnant women [25,26,29,31,33,34,35,36].\nThe concerns regarding infection mainly related to the pregnancy and to the women’s families and children. Many women in the reviewed studies from different countries expressed worries about their own health and that of their unborn children in relation to the pandemic [25,29,31,33,35,36], concerns about delivery (e.g., whether their partner would be present at the birth) and the baby’s health (e.g., something being wrong with the infant) [26].\nIn particular, two studies reported evidence that pregnant women experience great anxiety regarding the fear of transmitting the virus vertically to their baby [29,33]. Saccone et al. [33] pointed out that almost half of the women (46%) had worries about transmitting the infection to their infants. In the survey headed by Akgor et al. [29], the authors found that 82.5% (n = 245) of the pregnant women involved in the research reported high anxiety regarding the vertical transmission of the disease to their babies during delivery if they were infected with COVID-19.\nConsistent with this, in another survey [31] on 552 mothers, 353 (64%) women were highly aware of and worried about the COVID-19 pandemic (i.e., fears of carrying the virus, vertical transmission causing harm to fetuses, vulnerability). This finding emerged despite the fact that 64% of respondents did not acknowledge any impact of the COVID-19 pandemic on their mental well-being.\nIn a cross-sectional survey on 288 women accessing maternity services in Qatar, Farrel et al. [31] identified worries about pregnancy in 143 women and concerns about family and children in 189 of them. A high prevalence of fear of abnormal perinatal consequences was also detected; in one study conducted in Italy, over half of the mothers were worried that COVID-19 could cause a fetal structural anomaly, fetal growth restriction or preterm delivery [34]. Additionally, more than half of the pregnant women involved in another survey (66%, n = 196) were worried about pregnancy problems if their visits to the hospital were delayed or cancelled [29].\nFurthermore, Lebel et al. 
[25] identified that the elevated depression and anxiety symptoms during the COVID-19 pandemic were significantly associated with COVID-19-related concerns, i.e., threats to their baby’s health and to their own lives, worries about not receiving enough care during pregnancy, and also worries due to social isolation. These levels are much higher than what is typical for pregnant women and than those reported by the rest of the community during the COVID-19 pandemic [25].", "However, several of the reviewed studies also focused on some possible factors that may mediate/moderate the impact of the pandemic on women’s mental health. Some scholars reported that increased perceived social support and support effectiveness were associated with lower mental health symptoms, and appeared to be protective factors against depression and anxiety [25,26,27]. Similarly, a Japanese survey including 1777 pregnant women demonstrated that a lack of social support is significantly related to depressive symptoms [36].\nThese results are in line with previous literature showing that better social support was related to decreased depression and anxiety symptoms during both pregnancy and the postpartum period [54,55]. As is known, life during a pandemic is characterized by isolation, social distancing, restrictive measures and the limitation of movement, all of which can lead women to experience a lack of social support from friends, relatives, and partners [56], with negative consequences for mental health, as mentioned above.\nPhysical activity has also been investigated in terms of its protective function against psychological symptoms. Specifically, four studies showed that physical activity is related to reduced mental health problems. Being involved in regular physical activity during the COVID-19 pandemic represents a protective factor against the onset of anxiety and depressive symptoms in pregnant [22,25,27,28] or postpartum women [27], as confirmed by Lebel et al. [25].", "The findings of this review have to be seen in light of some limitations. First, grey literature was excluded, and the articles included were limited to those in the English language identified via the selected keywords and databases. For this reason, the review cannot claim to be representative of all studies addressing the topic under investigation; therefore, the evidence that emerged could be overestimated or underestimated.\nAdditionally, only two surveys compared mental health outcomes between pregnant and non-pregnant women during the pandemic, only one study compared pregnant women before the COVID-19 pandemic with pregnant women during the pandemic, and only one considered pregnant and first-year postpartum women assessed before and during the pandemic. The paucity of studies makes it difficult to point out the differences between being pregnant during a pandemic and in another period.\nMoreover, no standardized quality appraisal of the included papers was carried out, as is usual in rapid evidence reviews [62]. This necessitates great caution in the interpretation of the review’s findings. Indeed, the reviewed studies diverged with respect to enrollment modalities and the samples’ characteristics. Additionally, there was significant variability in the assessment measures, which limits the generalizability of our findings; moreover, in some cases, different symptom rates emerged even though the same questionnaire was administered. 
Therefore, to improve screening and prevention/intervention programs, a more rigorous study design is required, which should include the calculation of effect sizes, control groups, and a longitudinal perspective [14,63].\nNotably, every study relied on self-report questionnaires. Even though self-report measures are commonly administered in studies addressing maternal psychological functioning in the perinatal period, biased responses cannot be excluded. Furthermore, only a few studies included instruments specific to the pre- and postpartum period, which may lead to misleading conclusions. Besides this, such instruments do not allow us to distinguish between transient maternal malaise and more structured psychopathology, a distinction that is important for intervention; that said, they make a crucial contribution to prevention programs.\nMoreover, even though some of the reviewed studies considered additional variables (i.e., social support, physical activity) that may buffer the impact of the COVID-19 pandemic on mothers’ psychological symptomatology [22,25,26,27,28,36], future studies should consider the many risk factors that have been identified in the literature as relevant intervening variables, such as maternal SES and education, childbirth experiences, comorbidity, romantic couple adjustment, infant temperament, and breastfeeding [64,65,66,67,68]. From a research perspective, the interrelationships between these variables should be investigated through path analysis and linear structural relations modeling to understand their contributions to the outcomes for mothers and children." ]
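As an editorial aside on the effect-size calculation recommended above, the minimal sketch below computes Cohen's h, a standard effect size for the difference between two proportions, applied to the 26.0% vs. 29.6% depressive-symptom rates discussed earlier [22]; the reviewed studies did not report this statistic, so the example is purely illustrative.

```python
# Minimal sketch: Cohen's h effect size for the difference between two
# prevalences (e.g., pre- vs. during-pandemic depressive symptoms).
# Illustrative only; not an analysis performed by the reviewed studies.
import math

def cohens_h(p1: float, p2: float) -> float:
    # h = 2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2))
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

h = cohens_h(0.296, 0.260)  # 29.6% after vs. 26.0% before the declaration [22]
print(f"Cohen's h = {h:.3f}")  # ~0.081
```

By Cohen's conventions the result falls below the small-effect benchmark of 0.2, which illustrates why reporting effect sizes alongside significance tests, as urged above, changes how such pre/post differences are read.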
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Material and Methods", "2.1. Search Strategy", "2.2. Population", "2.3. Intervention/Exposure", "2.4. Comparison", "2.5. Outcomes", "2.6. Study Design", "2.7. Selection Criteria", "2.8. Data Extraction", "3. Results", "3.1. Maternal Mental Health", "3.2. Countries", "3.3. Participants", "3.4. Instruments", "3.5. Prevalence of Depression and Anxiety Symptoms", "3.6. Comparison between Pre- and Post-COVID Depressive and Anxiety Symptoms", "3.7. Comparison between Pregnant versus Non-Pregnant Women’s Mental Health", "3.8. Beyond Depression and Anxiety: Specific Maternal Worries and Fears", "3.9. Protective Factors", "4. Discussion", "5. Limitations", "6. Conclusions" ]
[ "On 12 January 2020, the World Health Organization (WHO) officially announced the coronavirus disease 2019 (COVID-19), originating in Wuhan in December 2019, as a pandemic. \nIn the course of most infectious disease outbreaks, restrictive measures can be necessary to stop the virus. With the aim of limiting Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) propagation, governments around the world have imposed some restrictions, such us national lockdowns and social distancing. A recent review [1] suggested that restrictive measures are often associated with negative psychological effects that can still be identified months or years later, and highlighted the impact of quarantine and isolation on mental health. \nIndeed, the actual outbreak is leading to psychological distress and increased mental health problems, such as stress, anxiety, depressive symptoms, insomnia, denial, anger and fear [2]. Psychological distress and mood disorders seem most likely in more vulnerable populations [3,4,5], such as pregnant women.\nMaternal mental health is particularly important to consider, due to the increased risk for depression and anxiety [6]. Pregnancy and the postpartum period, especially for first time mothers, have been identified as delicate periods in a woman’s life that are accompanied by significant social, psychological and also physiological changes [7,8], and for this reason pregnant women have been considered a high-risk population.\nSeveral studies have reported that the perinatal period is a time characterized by increased risk for emotional disorders such as depression, anxiety, and trauma-related disorders, especially in the presence of stress conditions [8,9,10]. This is also true for pregnant and postpartum women and their infants in the face of emergencies or natural disasters [11,12].\nIndeed, during the SARS outbreak, pregnant women may have concerns about their own health and about the health of their unborn babies, and may display fears relating to pregnancy, to childbirth, or both. Additionally, feelings of uncertainty (characteristic of an epidemic) represent a significant stressor that can increase distress in pregnant women [13].\nOverall, these complex and multiple variables may affect both mothers and their children’s physical and psychological health, in short-, medium- and long-term periods [14,15,16,17]. Therefore, the condition of the COVID-19 pandemic and associated factors could produce additional stress for women during perinatality and accentuate this predisposition [3,18]. For these reasons and due to the negative effect of psychological distress during pregnancy on the health of mothers and their offspring, priority should be given to support maternal mental health in the perinatal period [19,20]. These issues suggest that research is necessary to explore the effects of the COVID-19 pandemic on women during perinatality. The current review was designed to summarize the existing literature on the psychological impact of the COVID-19 pandemic on pregnant women.", "This research was conducted as a rapid review. Rapid reviews follow the guidelines for systematic reviews, but are simplified in order to accelerate the process of traditional reviews to produce rapid evidence [21].\n2.1. 
Search Strategy The Pubmed, Scopus, WOS (Web of Science), PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Given the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of the identified publications. Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria.\n2.2. Population Women who were pregnant at the time of the first wave of the COVID-19 outbreak in their country.\n2.3. Intervention/Exposure Studies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic.\n2.4. Comparison This is not applicable for the aim of this rapid review.\n2.5. Outcomes We looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder).\n2.6. Study Design We included studies with primary data collection.\n2.7. Selection Criteria The inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. 
After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce selection bias.\nFinally, of the 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes.\nThe study selection process is illustrated by the PRISMA flow chart shown in Figure 1.\n2.8. Data Extraction The study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results.", "The Pubmed, Scopus, WOS (Web of Science), PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Given the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of the identified publications. 
Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria.", "Women who were pregnant at the time of the first wave of the COVID-19 outbreak in their country.", "Studies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic.", "This is not applicable for the aim of this rapid review.", "We looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder).", "We included studies with primary data collection.", "The inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce selection bias.\nFinally, of the 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes.\nThe study selection process is illustrated by the PRISMA flow chart shown in Figure 1.", "The study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results.", "3.1. Maternal Mental Health All of the identified papers suggest that the COVID-19 pandemic can have a significant impact on maternal mental health, mainly in the form of anxiety and depressive symptoms. The prevalence of depression and anxiety in pregnant women has significantly increased since the spread of COVID-19. Pregnant women during the COVID-19 pandemic reported more psychological symptomatology compared to pregnant women before the COVID-19 outbreak.\n3.2. Countries The studies included in the rapid review consider participants from China [22,23,24], Canada [18,25,26,27], Turkey [28,29], Argentina [3], Iran [30], Qatar [31], Spain [32], Italy [33,34], Pakistan [35], and Japan [36].\n3.3. 
Participants All the studies involved women who were at least 18 years old. Most of the papers concerned studies addressing women during pregnancy [3,18,22,23,25,26,33], two of which were case–control studies comparing pregnant and non-pregnant women [3,23]. Only one study [3] longitudinally monitored the population throughout the lockdown. In this case, participants were divided into two groups: 102 pregnant women and a control group of 102 non-pregnant women. One study was a case–control study [18] that considered pregnant women before the COVID-19 pandemic and pregnant women during the pandemic; finally, one contribution [27] considered pregnant and first-year postpartum women, assessed before and during the pandemic. A Chinese survey [22] compared the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Only two studies [27,32] considered both pregnancy and the postpartum period.\n3.4. Instruments As regards the administered instruments, all studies adopted self-reports; seven studies delivered only one questionnaire, while the rest administered multiple measures.\nAs concerns depression, seven studies applied the Edinburgh Postnatal Depression Scale (EPDS) [37], a 10-item self-report questionnaire addressing perinatal depressive symptoms within the last 7 days. The overall score is computed by summing the items, each rated on a four-point (0–3) Likert scale. Higher scores reflect more depressive symptoms.\nThree others applied self-report depression symptom scales, although these were not specific to pregnancy and the postpartum period: the Center for Epidemiological Studies Depression Scale [38]; the Beck Depression Inventory II [39]; the Patient Health Questionnaire 9 [40].\nWith respect to anxiety, only two studies evaluated perinatal anxiety: one study used a questionnaire including ten items specifically addressing feelings about the health of the baby and her/his birth [41], and the other administered the Cambridge Worry Scale (CWS) [42] to assess pregnancy-specific anxiety as well as general anxiety, whereas the majority of scholars applied generic anxiety questionnaires. 
Four administered the State–Trait Anxiety Inventory (STAI) [43]; one study applied the Generalized Anxiety Disorder Scale 7 (GAD-7) [44], one the Patient-Reported Outcomes Measurement Information System (PROMIS) Anxiety Adult seven-item short form [45], and one the Visual Analog Scale (VAS) for anxiety [46].\nSome studies used a combined measure for depression and anxiety: two applied the Hospital Anxiety and Depression Scale (HADS) [47], and one the Patient Health Questionnaire Anxiety–Depression Scale (PHQ-ADS) [48].\nOne study administered the Depression Anxiety Stress Scales 21 (DASS-21) [49] to distinguish between the affective syndromes of depression, anxiety and tension/stress.\nThree studies used the Positive and Negative Affect Schedule (PANAS) [50] to evaluate mood or emotion.\nGlobal psychological distress was also measured through the Kessler Psychological Distress Scale (K10) [51] in two papers, and the Symptom Checklist-90 (SCL-90) [52] in one paper.\nFinally, the measures of other variables included in the contributions were examined, but none were specific to perinatality, with the exception of one study recording the infants’ APGAR scores (Appearance, Pulse, Grimace, Activity, Respiration) [53].
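To make the EPDS scoring just described concrete, here is a minimal, purely illustrative Python sketch (the function name and the item responses are ours, not from any reviewed study); it sums ten 0–3 ratings and applies the > 13 cut-off that several of the reviewed papers used to flag clinically relevant depressive symptoms.

```python
# Minimal sketch of EPDS scoring: ten items rated 0-3 are summed, and
# higher totals indicate more depressive symptoms. The > 13 cut-off
# mirrors the threshold cited by the reviewed studies; the responses
# below are hypothetical.
from typing import Sequence

def epds_total(items: Sequence[int]) -> int:
    if len(items) != 10 or any(not 0 <= i <= 3 for i in items):
        raise ValueError("EPDS expects ten items scored 0-3")
    return sum(items)

responses = [1, 2, 0, 3, 1, 2, 1, 0, 2, 2]  # hypothetical answers
score = epds_total(responses)
print(score, "probable depression" if score > 13 else "below cut-off")
```

Note that the published instrument reverse-scores several items before summing; the sketch assumes that recoding has already been applied to the 0–3 values.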
3.5. Prevalence of Depression and Anxiety Symptoms The prevalence of depression and anxiety reported was similar across all studies considered. With regard to the prevalence of depression, in Qatar, for example, 39.2% of pregnant women presented depressive symptomatology [31]; in Turkey, the prevalence was 56.3% [28]; in Iran, 32.7% of the participants had symptoms of depression [30]; 58% in Spain [32]; in Canada, the studies indicated values close to 40% (37% [25]; 40.7% [27]); in China, 29.6% of women in Wu’s study [22] and 33.71% of the 2883 women involved in Sun et al.’s survey [24] reported symptoms of depression. Concerning anxiety symptoms, in Qatar, a 34.4% prevalence of anxiety was identified [31]; 51% was reported in Spain [32]; in Canada, rates from 56.6% [25] to 72% [27] were detected, which are close to the Italian prevalence of 68% [33], while a Turkish study found a rate of 64.5% [28] and an Iranian study one of 43.9% [30].\n3.6. Comparison between Pre- and Post-COVID Depressive and Anxiety Symptoms More specifically, one of the four studies from Canada (n = 1987) found significantly higher anxiety and depressive symptoms compared to the scores of pregnant women before the COVID-19 pandemic, with 37% self-reporting clinical levels of depression and 57% self-reporting clinical levels of anxiety [25]. 
In Davenport et al.’s investigation [27], 900 women were involved: 58% were pregnant and 42% were in the first year postpartum. Pre-pandemic and current values were assessed for each group. It emerged that an EPDS score > 13 was self-reported by 15% of mothers pre-pandemic and by 40.7% during the COVID-19 pandemic. Moderate to high anxiety (STAI-state score > 40) was reported in 29% of women before the pandemic vs. 72% of women during its course.\nIn another Canadian survey, two cohorts of pregnant women were recruited, one prior to the COVID-19 pandemic (n = 496) and the other (n = 1258) enrolled online during the pandemic [18]. The researchers showed that the latter cohort reported more depressive and anxiety symptoms than the pregnant women assessed before the COVID-19 pandemic. In addition, the women assessed during COVID-19 reported higher levels of dissociative symptoms and of post-traumatic stress disorder symptoms, and also described more negative affectivity and less positive affectivity than the pre-COVID-19 cohort did. Moreover, this study showed that pregnant women assessed within the pandemic context who had a previous psychiatric diagnosis or came from a low-income background were more inclined to develop psychiatric symptoms.\nThe latter result contrasts with evidence from another study [31]: although Farrel’s main findings revealed that 34.4% of women reached clinical levels for anxiety and 39.2% for depression, the analyses did not reveal any association between these symptoms and previous mental health, occupation, pregnancy complications or gestational age. These results suggest that the worsening of psychiatric symptoms could be attributed to the psychological impact of the pandemic and of the containment measures. Similarly, Effati-Daryani et al. [30] showed that, among their sample of 205 women, based on the DASS-21 scores, 67.3% were in the normal range for depression and 32.7% had symptoms of depression (12.7% mild, 10.7% moderate, 7.3% severe and 2.0% extremely severe), while 56.1% were in the normal range for anxiety and 43.9% had symptoms of anxiety (17.6% mild, 12.2% moderate, 6.3% severe and 7.8% extremely severe). As in Farrel’s aforementioned study, no statistically significant relationship emerged between gestational age and depression, stress, and anxiety levels (p > 0.05).\nA multi-center cross-sectional study [22] provided the opportunity to compare the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Pregnant women assessed after this declaration had significantly increased rates of depressive symptoms (26.0% vs. 29.6%) compared to women evaluated before the declaration. Additionally, the prevalence of depressive symptoms increased along with the increase in the number of newly confirmed cases, suspected infections and deaths. This evidence is consistent with Sun et al.’s study [24], which demonstrated that the prevalence of perinatal depression increased with the increasing number of confirmed COVID-19 cases. 
In particular, among the 2883 women involved in the survey, 33.71% had depressive symptoms, 27.02% showed mild depression, 5.24% moderate depression, and 1.46% severe depression.
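The DASS-21 severity labels reported above (normal, mild, moderate, severe, extremely severe) are produced by fixed cut-offs on each subscale. The sketch below illustrates the banding logic for the depression subscale, assuming the conventional Lovibond cut-offs (the reviewed study does not state which cut-offs were used, and the responses are hypothetical).

```python
# Minimal sketch: DASS-21 depression subscale banding. Seven items rated
# 0-3 are summed and doubled, then mapped to the conventional Lovibond
# severity bands (an assumption here; the reviewed study does not state
# its cut-offs). Responses are hypothetical.
BANDS = [(9, "normal"), (13, "mild"), (20, "moderate"),
         (27, "severe"), (float("inf"), "extremely severe")]

def dass21_depression(items: list[int]) -> tuple[int, str]:
    assert len(items) == 7 and all(0 <= i <= 3 for i in items)
    score = 2 * sum(items)  # DASS-21 totals are doubled onto the DASS-42 metric
    label = next(name for upper, name in BANDS if score <= upper)
    return score, label

print(dass21_depression([1, 0, 2, 1, 0, 1, 1]))  # -> (12, 'mild')
```

Doubling the 21-item total mirrors the usual practice of reporting DASS-21 scores on the full-scale DASS-42 metric before applying severity bands.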
3.7. Comparison between Pregnant versus Non-Pregnant Women’s Mental Health Regarding the prevalence of anxiety and depressive symptoms during the pandemic compared between pregnant and non-pregnant women, discordant results emerged from the two studies considered [3,23]. The first one [3] demonstrated that, during quarantine, both pregnant and non-pregnant women showed a gradual increase in psychopathological measures and a decline in positive affect. However, the group of pregnant women showed a more pronounced increase in depression, anxiety and negative affect than the non-pregnant women did. In addition, pregnant women showed a more evident decrease in positive affect. On the contrary, in the other study [23], pregnant women seemed to have an advantage in facing mental health problems; indeed, they showed lower levels of depression, anxiety, insomnia and post-traumatic stress disorder (PTSD) than non-pregnant women. Specifically, 5.3%, 6.8%, 2.4%, 2.6%, and 0.9% of pregnant women, respectively, presented symptoms of depression, anxiety, physical discomfort, insomnia, and PTSD, whereas non-pregnant women’s prevalences were 17.5% (depression), 17.5% (anxiety), 2.5% (physical discomfort), 5.4% (insomnia), and 5.7% (PTSD).\nTaken together, the data that emerged from the papers included in this review suggest that the COVID-19 outbreak had a moderate to severe effect on the mental health of pregnant women; indeed, the prevalence of psychological symptoms (mainly depression and anxiety) significantly increased with the diffusion of COVID-19.\n3.8. 
Beyond Depression and Anxiety: Specific Maternal Worries and Fears In addition to the common psychiatric symptoms of depression and anxiety, some of the included studies also reported a high prevalence of fear, which represents the most reported symptom in pregnant women [25,26,29,31,33,34,35,36].\nThe concerns regarding infection mainly related to the pregnancy and to the women’s families and children. Many women in the reviewed studies from different countries expressed worries about their own health and that of their unborn children in relation to the pandemic [25,29,31,33,35,36], concerns about delivery (e.g., whether their partner would be present at the birth) and the baby’s health (e.g., something being wrong with the infant) [26].\nIn particular, two studies reported evidence that pregnant women experience great anxiety regarding the fear of transmitting the virus vertically to their baby [29,33]. Saccone et al. [33] pointed out that almost half of the women (46%) had worries about transmitting the infection to their infants. In the survey headed by Akgor et al. [29], the authors found that 82.5% (n = 245) of the pregnant women involved in the research reported high anxiety regarding the vertical transmission of the disease to their babies during delivery if they were infected with COVID-19.\nConsistent with this, in another survey [31] on 552 mothers, 353 (64%) women were highly aware of and worried about the COVID-19 pandemic (i.e., fears of carrying the virus, vertical transmission causing harm to fetuses, vulnerability). This finding emerged despite the fact that 64% of respondents did not acknowledge any impact of the COVID-19 pandemic on their mental well-being.\nIn a cross-sectional survey on 288 women accessing maternity services in Qatar, Farrel et al. [31] identified worries about pregnancy in 143 women and concerns about family and children in 189 of them. A high prevalence of fear of abnormal perinatal consequences was also detected; in one study conducted in Italy, over half of the mothers were worried that COVID-19 could cause a fetal structural anomaly, fetal growth restriction or preterm delivery [34]. Additionally, more than half of the pregnant women involved in another survey (66%, n = 196) were worried about pregnancy problems if their visits to the hospital were delayed or cancelled [29].\nFurthermore, Lebel et al. [25] identified that the elevated depression and anxiety symptoms during the COVID-19 pandemic were significantly associated with COVID-19-related concerns, i.e., threats to their baby’s health and to their own lives, worries about not receiving enough care during pregnancy, and also worries due to social isolation. These levels are much higher than what is typical for pregnant women and than those reported by the rest of the community during the COVID-19 pandemic [25].
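Because the worry figures above are raw proportions from surveys of different sizes, an interval estimate helps when comparing them. As an editorial aside, the sketch below computes a proportion with a Wilson 95% confidence interval, using Akgor et al.'s reported 82.5% (n = 245) as input; the denominator of 297 is back-calculated from those two figures and is therefore an assumption, not a number taken from the study.

```python
# Minimal sketch: proportion with a Wilson 95% CI, useful when comparing
# survey-based worry prevalences across samples of different sizes. The
# total of 297 is back-calculated from the reported 82.5% (n = 245) and
# is an assumption, not a figure stated in the study.
import math

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

p, lo, hi = wilson_ci(245, 297)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # ~82.5% (95% CI 77.8%-86.4%)
```

The Wilson interval is preferred here over the simple normal approximation because it behaves sensibly for proportions close to 0% or 100%, which several of the worry figures above approach.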
3.9. Protective Factors However, several of the reviewed studies also focused on some possible factors that may mediate/moderate the impact of the pandemic on women’s mental health. Some scholars reported that increased perceived social support and support effectiveness were associated with lower mental health symptoms, and appeared to be protective factors against depression and anxiety [25,26,27]. Similarly, a Japanese survey including 1777 pregnant women demonstrated that a lack of social support is significantly related to depressive symptoms [36].\nThese results are in line with previous literature showing that better social support was related to decreased depression and anxiety symptoms during both pregnancy and the postpartum period [54,55]. 
As is known, life during a pandemic is characterized by isolation, social distancing, restrictive measures and the limitation of movement, all of which can lead women to experience a lack of social support from friends, relatives, and partners [56], with negative consequences for mental health, as mentioned above.\nPhysical activity has also been investigated in terms of its protective function against psychological symptoms. Specifically, four studies showed that physical activity is related to reduced mental health problems. Being involved in regular physical activity during the COVID-19 pandemic represents a protective factor against the onset of anxiety and depressive symptoms in pregnant [22,25,27,28] or postpartum women [27], as confirmed by Lebel et al. [25].", "All of the identified papers suggest that the COVID-19 pandemic can have a significant impact on maternal mental health, mainly in the form of anxiety and depressive symptoms. The prevalence of depression and anxiety in pregnant women has significantly increased since the spread of COVID-19. Pregnant women during the COVID-19 pandemic reported more psychological symptomatology compared to pregnant women before the COVID-19 outbreak.", "The studies included in the rapid review consider participants from China [22,23,24], Canada [18,25,26,27], Turkey [28,29], Argentina [3], Iran [30], Qatar [31], Spain [32], Italy [33,34], Pakistan [35], and Japan [36].", "All the studies involved women who were at least 18 years old. Most of the papers concerned studies addressing women during pregnancy [3,18,22,23,25,26,33], two of which were case–control studies comparing pregnant and non-pregnant women [3,23]. Only one study [3] longitudinally monitored the population throughout the lockdown. In this case, participants were divided into two groups: 102 pregnant women and a control group of 102 non-pregnant women. 
One study was a case–control study [18] that considered pregnant women before the COVID-19 pandemic and pregnant women during the pandemic; finally, one contribution [27] considered pregnant and first-year postpartum women, assessed before and during the pandemic. A Chinese survey [22] compared the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Only two studies [27,32] considered both pregnancy and the postpartum period.", "As regards the administered instruments, all studies adopted self-reports; seven studies delivered only one questionnaire, while the rest administered multiple measures.\nAs concerns depression, seven studies applied the Edinburgh Postnatal Depression Scale (EPDS) [37], a 10-item self-report questionnaire addressing perinatal depressive symptoms within the last 7 days. The overall score is computed by summing the items, each rated on a four-point (0–3) Likert scale. Higher scores reflect more depressive symptoms.\nThree others applied self-report depression symptom scales, although these were not specific to pregnancy and the postpartum period: the Center for Epidemiological Studies Depression Scale [38]; the Beck Depression Inventory II [39]; the Patient Health Questionnaire 9 [40].\nWith respect to anxiety, only two studies evaluated perinatal anxiety: one study used a questionnaire including ten items specifically addressing feelings about the health of the baby and her/his birth [41], and the other administered the Cambridge Worry Scale (CWS) [42] to assess pregnancy-specific anxiety as well as general anxiety, whereas the majority of scholars applied generic anxiety questionnaires. Four administered the State–Trait Anxiety Inventory (STAI) [43]; one study applied the Generalized Anxiety Disorder Scale 7 (GAD-7) [44], one the Patient-Reported Outcomes Measurement Information System (PROMIS) Anxiety Adult seven-item short form [45], and one the Visual Analog Scale (VAS) for anxiety [46].\nSome studies used a combined measure for depression and anxiety: two applied the Hospital Anxiety and Depression Scale (HADS) [47], and one the Patient Health Questionnaire Anxiety–Depression Scale (PHQ-ADS) [48].\nOne study administered the Depression Anxiety Stress Scales 21 (DASS-21) [49] to distinguish between the affective syndromes of depression, anxiety and tension/stress.\nThree studies used the Positive and Negative Affect Schedule (PANAS) [50] to evaluate mood or emotion.\nGlobal psychological distress was also measured through the Kessler Psychological Distress Scale (K10) [51] in two papers, and the Symptom Checklist-90 (SCL-90) [52] in one paper.\nFinally, the measures of other variables included in the contributions were examined, but none were specific to perinatality, with the exception of one study recording the infants’ APGAR scores (Appearance, Pulse, Grimace, Activity, Respiration) [53].", "The prevalence of depression and anxiety reported was similar across all studies considered. With regard to the prevalence of depression, in Qatar, for example, 39.2% of pregnant women presented depressive symptomatology [31]; in Turkey, the prevalence was 56.3% [28]; in Iran, 32.7% of the participants had symptoms of depression [30]; 58% in Spain [32]; in Canada, the studies indicated values close to 40% (37% [25]; 40.7% [27]); in China, 29.6% of women in Wu’s study [22] and 33.71% of the 2883 women involved in Sun et al.’s survey [24] reported symptoms of depression. 
Concerning anxiety symptoms, in Qatar, a 34.4% prevalence of anxiety was identified [31]; 51% was reported in Spain [32]; in Canada, rates from 56.6% [25] to 72% [27] were detected, which are close to the Italian prevalence of 68% [33], while a Turkish study found a rate of 64.5% [28] and an Iranian study one of 43.9% [30].", "More specifically, one of the four studies from Canada (n = 1987) found significantly higher anxiety and depressive symptoms compared to the scores of pregnant women before the COVID-19 pandemic, with 37% self-reporting clinical levels of depression and 57% self-reporting clinical levels of anxiety [25]. In Davenport et al.’s investigation [27], 900 women were involved: 58% were pregnant and 42% were in the first year postpartum. Pre-pandemic and current values were assessed for each group. It emerged that an EPDS score > 13 was self-reported by 15% of mothers pre-pandemic and by 40.7% during the COVID-19 pandemic. Moderate to high anxiety (STAI-state score > 40) was reported in 29% of women before the pandemic vs. 72% of women during its course.\nIn another Canadian survey, two cohorts of pregnant women were recruited, one prior to the COVID-19 pandemic (n = 496) and the other (n = 1258) enrolled online during the pandemic [18]. The researchers showed that the latter cohort reported more depressive and anxiety symptoms than the pregnant women assessed before the COVID-19 pandemic. In addition, the women assessed during COVID-19 reported higher levels of dissociative symptoms and of post-traumatic stress disorder symptoms, and also described more negative affectivity and less positive affectivity than the pre-COVID-19 cohort did. Moreover, this study showed that pregnant women assessed within the pandemic context who had a previous psychiatric diagnosis or came from a low-income background were more inclined to develop psychiatric symptoms.\nThe latter result contrasts with evidence from another study [31]: although Farrel’s main findings revealed that 34.4% of women reached clinical levels for anxiety and 39.2% for depression, the analyses did not reveal any association between these symptoms and previous mental health, occupation, pregnancy complications or gestational age. These results suggest that the worsening of psychiatric symptoms could be attributed to the psychological impact of the pandemic and of the containment measures. Similarly, Effati-Daryani et al. [30] showed that, among their sample of 205 women, based on the DASS-21 scores, 67.3% were in the normal range for depression and 32.7% had symptoms of depression (12.7% mild, 10.7% moderate, 7.3% severe and 2.0% extremely severe), while 56.1% were in the normal range for anxiety and 43.9% had symptoms of anxiety (17.6% mild, 12.2% moderate, 6.3% severe and 7.8% extremely severe). As in Farrel’s aforementioned study, no statistically significant relationship emerged between gestational age and depression, stress, and anxiety levels (p > 0.05).\nA multi-center cross-sectional study [22] provided the opportunity to compare the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Pregnant women assessed after this declaration had significantly increased rates of depressive symptoms (26.0% vs. 29.6%) compared to women evaluated before the declaration. Additionally, the prevalence of depressive symptoms increased along with the increase in the number of newly confirmed cases, suspected infections and deaths. 
Wu et al.'s evidence [22] is consistent with Sun et al.'s study [24], which demonstrated that the prevalence of perinatal depression increased with the growing number of confirmed COVID-19 cases. In particular, among the 2883 women involved in the survey, 33.71% had depressive symptoms: 27.02% showed mild depression, 5.24% moderate depression, and 1.46% severe depression.

Regarding the comparison of anxiety and depressive symptom prevalence between pregnant and non-pregnant women during the pandemic, the two studies considered reported discordant results [3,23]. The first one [3] demonstrated that, during quarantine, both pregnant and non-pregnant women showed a gradual increment in psychopathological measures and a decline in positive affect. However, the group of pregnant women showed a more pronounced increase in depression, anxiety and negative affect than the non-pregnant women did, as well as a more evident decrease in positive affect. By contrast, in the other study [23], pregnant women seemed to fare better when facing mental problems: they showed lower levels of depression, anxiety, insomnia and post-traumatic stress disorder (PTSD) than non-pregnant women. Specifically, 5.3%, 6.8%, 2.4%, 2.6%, and 0.9% of pregnant women, respectively, presented symptoms of depression, anxiety, physical discomfort, insomnia, and PTSD, whereas non-pregnant women's prevalences were 17.5% (depression), 17.5% (anxiety), 2.5% (physical discomfort), 5.4% (insomnia), and 5.7% (PTSD).
Taken together, the data that emerged from the papers included in this review suggest that the COVID-19 outbreak had a moderate to severe effect on the mental health of pregnant women; indeed, the prevalence of psychological symptoms (mainly depression and anxiety) increased significantly with the spread of COVID-19.

In addition to the common psychiatric symptoms of depression and anxiety, some of the included studies also reported a high prevalence of fear, which was the most frequently reported symptom in pregnant women [25,26,29,31,33,34,35,36].
The concerns regarding infection were mainly for the pregnancy and for their families and children. Many women in the reviewed studies from different countries expressed worries about their own health and that of their unborn children in relation to the pandemic [25,29,31,33,35,36], as well as concerns about delivery (e.g., whether their partner would be present at the birth) and the baby's health (e.g., something being wrong with the infant) [26].
In particular, two studies reported evidence that pregnant women experience great anxiety regarding the fear of transmitting the virus vertically to their baby [29,33]. Saccone et al. [33] pointed out that almost half of the women (46%) had worries about transmitting the infection to their infants. In the survey headed by Akgor et al. [29], the authors found that 82.5% (n = 245) of the pregnant women involved in the research reported high anxiety regarding the vertical transmission of the disease to their babies during delivery if they were infected with COVID-19.
Consistently with this, in another survey [31] of 552 mothers, 353 (64%) women were highly aware of and worried about the COVID-19 pandemic (i.e., fears of carrying the virus, vertical transmission causing harm to fetuses, vulnerability).
This finding emerged despite the fact that 64% of respondents did not acknowledge any impact of the COVID-19 pandemic on their mental well-being.
In a cross-sectional survey of 288 women accessing maternity services in Qatar, Farrel et al. [31] identified worries about pregnancy in 143 women and concerns about family and children in 189 of them. A high prevalence of fear of abnormal perinatal outcomes was also detected; in one study conducted in Italy, over half of the mothers were worried that COVID-19 could cause a fetal structural anomaly, fetal growth restriction or preterm delivery [34]. Additionally, about two-thirds of the pregnant women involved in another survey (66%, n = 196) were worried about pregnancy problems if their visits to the hospital were delayed or cancelled [29].
Furthermore, Lebel et al. [25] identified that the elevated depression and anxiety symptoms during the COVID-19 pandemic were significantly associated with COVID-19-related concerns, i.e., threats to their baby's health and to their own lives, worries about not receiving enough care during pregnancy, and worries due to social isolation. These symptom levels are much higher than those typical for pregnant women and those reported by the rest of the community during the COVID-19 pandemic [25].

However, several of the reviewed studies also focused on possible factors that may mediate or moderate the impact of the pandemic on women's mental health. Some scholars reported that increased perceived social support and support effectiveness were associated with fewer mental health symptoms, and appeared to be protective factors against depression and anxiety [25,26,27]. Similarly, a Japanese survey including 1777 pregnant women demonstrated that a lack of social support is significantly related to depressive symptoms [36].
These results are in line with previous literature showing that better social support was related to decreased depression and anxiety symptoms during both pregnancy and the postpartum period [54,55]. Life during a pandemic is characterized by isolation, social distancing, restrictive measures and the limitation of movement, all of which can lead women to experience a lack of social support from friends, relatives, and partners [56], with negative consequences for mental health, as mentioned above.
Physical activity has also been investigated in terms of its protective function against psychological symptoms. Specifically, four studies showed that regular physical activity during the COVID-19 pandemic is related to reduced mental health problems and represents a protective factor against the onset of anxiety and depressive symptoms in pregnant [22,25,27,28] and postpartum women [27], as confirmed by Lebel et al. [25].

The present rapid review aimed to describe the current scientific evidence on the psychological impact of the COVID-19 outbreak on mothers' mental health in the perinatal period. We chose a rapid review with the aim of providing evidence in a "timely and cost-effective manner", as stated by the WHO [21]. Indeed, perinatality should be considered a priority in the primary care system. Effective identification of the condition of women and their infants, from pregnancy through the child's first year of life, may inform the management of potential mental health disorders and guide efficacious preventive interventions.
Although the current review may not be considered exhaustive, our findings confirm that the COVID-19 pandemic has a considerable impact on the psychological health of pregnant and postpartum women. Indeed, although multicenter studies are lacking, research from different countries and cultures has shown an increased prevalence of depression and anxiety among mothers during COVID-19 compared to similar pre-pandemic samples of mothers [18,22,25,26,27,28,31,33,34]. Hence, an accurate screening approach should be implemented for women in the peripartum. This is especially important where healthcare systems are unable to respond to the progressive increase in the demand for services, as under the pressure of the COVID-19 emergency: screening can help reduce the workload by referring only the most vulnerable women for targeted intervention.
It is noteworthy that most studies were carried out through web-based questionnaires. This modality seems particularly useful in the abovementioned low-resource contexts. Computerized screening should also be favored since it has been shown that people tend to reveal more personal information through the computer and feel a greater sense of anonymity, increasing the likelihood of participation [57]. Some studies also detected, in addition to depressive and anxiety symptoms, higher percentages of post-traumatic stress disorder, dissociation, and distress [18], and higher levels of negative affectivity [3,18]. Moreover, independently of the identified psychological symptomatology, high levels of awareness of and concern about the COVID-19 pandemic emerged, especially fears of carrying the virus and of vertical transmission causing harm to fetuses [25,29,31,33,35,36]. These are relevant issues in that maternal malaise is not limited to routinely screened psychological problems (i.e., depression and anxiety). Traumatic responses and emotional dysregulation may also affect mothers and their infants after pregnancy, with relevant long-term psychophysiological effects [58,59,60,61]. Specific attention must be paid to these vulnerabilities in order to provide efficacious interventions.

The findings of this review have to be seen in light of some limitations. First, grey literature was excluded, and the articles included were limited to those in the English language within the selected keywords and databases. For this reason, the review cannot claim to be representative of all studies addressing the topic under investigation; therefore, the evidence that emerged could be overestimated or underestimated.
Additionally, only two surveys compared mental health outcomes between pregnant and non-pregnant women during the pandemic, only one study compared pregnant women before the COVID-19 pandemic with pregnant women during the pandemic, and only one considered pregnant and first-year postpartum women assessed before and during the pandemic. The paucity of studies makes it difficult to delineate the differences between being pregnant during a pandemic and in another period.
Moreover, no standardized quality appraisal of the included papers was carried out, as is common practice in rapid evidence reviews [62]. This necessitates great caution in the interpretation of the review's findings. Indeed, the reviewed studies diverged with respect to enrollment modalities and the samples' characteristics.
Additionally, there was considerable variability in the assessment measures, which limits the generalizability of our findings; moreover, in some cases different symptom rates emerged even when the same questionnaire was administered. Therefore, to improve screening and prevention/intervention programs, a more rigorous study design is required, which should include the calculation of effect sizes, control groups, and a longitudinal perspective [14,63].
Virtually every study relied on self-report questionnaires. Even though self-report measures are commonly administered in studies addressing maternal psychological functioning in the perinatal period, biased responses cannot be excluded. Furthermore, only a few studies included instruments specific to the pre- and postpartum period, which may lead to misleading conclusions. Such instruments also do not allow us to distinguish between transient maternal malaise and more structured psychopathology, a distinction that is important for intervention; that said, they make a crucial contribution to prevention programs.
Moreover, even though some of the reviewed studies considered additional variables (i.e., social support, physical activity) that may buffer the impact of the COVID-19 pandemic on mothers' psychological symptomatology [22,25,26,27,28,36], future studies should consider the many risk factors that have been identified in the literature as relevant intervening variables, such as maternal SES and education, childbirth experiences, comorbidity, romantic couple adjustment, infant temperament, and breastfeeding [64,65,66,67,68]. From a research perspective, the interrelationships between these variables should be investigated through path analysis and structural equation modeling to understand their contributions to the outcomes for mothers and children.

The present review provides valuable clinical indications regarding variables that should be carefully monitored during the evaluation of women in the perinatal period.
In fact, the COVID-19 pandemic adds numerous risk factors for the mental health of mothers during the perinatal period. Longitudinal, multicenter cohort studies should be carried out in order to promote standardized screening and intervention guidelines to support pregnant and postpartum women during the COVID-19 outbreak, and to promote healthy family functioning. The identification of risk and protective factors during the current pandemic is particularly important, especially considering the long-term effect that maternal mental health has on a child's development. Finally, despite the acknowledged distress linked to such a situation, the pandemic may offer the possibility to develop pioneering online methods to detect psychological problems and deliver early mental health interventions to mothers and their infants.
Keywords: COVID-19; maternal mental health; anxiety; depression; perinatality
1. Introduction: On 12 January 2020, the World Health Organization (WHO) officially announced the coronavirus disease 2019 (COVID-19), originating in Wuhan in December 2019, as a pandemic. In the course of most infectious disease outbreaks, restrictive measures can be necessary to stop the virus. With the aim of limiting Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) propagation, governments around the world have imposed some restrictions, such us national lockdowns and social distancing. A recent review [1] suggested that restrictive measures are often associated with negative psychological effects that can still be identified months or years later, and highlighted the impact of quarantine and isolation on mental health. Indeed, the actual outbreak is leading to psychological distress and increased mental health problems, such as stress, anxiety, depressive symptoms, insomnia, denial, anger and fear [2]. Psychological distress and mood disorders seem most likely in more vulnerable populations [3,4,5], such as pregnant women. Maternal mental health is particularly important to consider, due to the increased risk for depression and anxiety [6]. Pregnancy and the postpartum period, especially for first time mothers, have been identified as delicate periods in a woman’s life that are accompanied by significant social, psychological and also physiological changes [7,8], and for this reason pregnant women have been considered a high-risk population. Several studies have reported that the perinatal period is a time characterized by increased risk for emotional disorders such as depression, anxiety, and trauma-related disorders, especially in the presence of stress conditions [8,9,10]. This is also true for pregnant and postpartum women and their infants in the face of emergencies or natural disasters [11,12]. Indeed, during the SARS outbreak, pregnant women may have concerns about their own health and about the health of their unborn babies, and may display fears relating to pregnancy, to childbirth, or both. Additionally, feelings of uncertainty (characteristic of an epidemic) represent a significant stressor that can increase distress in pregnant women [13]. Overall, these complex and multiple variables may affect both mothers and their children’s physical and psychological health, in short-, medium- and long-term periods [14,15,16,17]. Therefore, the condition of the COVID-19 pandemic and associated factors could produce additional stress for women during perinatality and accentuate this predisposition [3,18]. For these reasons and due to the negative effect of psychological distress during pregnancy on the health of mothers and their offspring, priority should be given to support maternal mental health in the perinatal period [19,20]. These issues suggest that research is necessary to explore the effects of the COVID-19 pandemic on women during perinatality. The current review was designed to summarize the existing literature on the psychological impact of the COVID-19 pandemic on pregnant women. 2. Material and Methods: This research was conducted as a rapid review. Rapid reviews follow the guidelines for systematic reviews, but are simplified in order to accelerate the process of traditional reviews to produce rapid evidence [21]. 2.1. 
Search Strategy The Pubmed, Scopus, WOS—web of science, PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Following the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of identified publications. Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria. The Pubmed, Scopus, WOS—web of science, PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Following the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of identified publications. Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria. 2.2. Population Women who were pregnant at the time of the first wave of COVID-19 outbreak in their country. Women who were pregnant at the time of the first wave of COVID-19 outbreak in their country. 2.3. Intervention/Exposure Studies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic. Studies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic. 2.4. Comparison This is not applicable for the aim of this rapid review. This is not applicable for the aim of this rapid review. 2.5. Outcomes We looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder). We looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder). 2.6. Study Design We included studies with primary data collection. We included studies with primary data collection. 2.7. Selection Criteria The inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce the selection bias. 
Finally, of a total of 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes. The study selection process is illustrated by the PRISMA flow chart shown in Figure 1. The inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce the selection bias. Finally, of a total of 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes. The study selection process is illustrated by the PRISMA flow chart shown in Figure 1. 2.8. Data Extraction The study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results. The study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results. 2.1. Search Strategy: The Pubmed, Scopus, WOS—web of science, PsycInfo and Google Scholar indexed databases were searched using the terms COVID-19, Coronavirus, mental health, anxiety, depression, and well-being crossed with perinatality-related terms (i.e., pregnancy, maternal mental health, maternal mental disorder, perinatal period). Following the need to accelerate the searches, as rapid reviews require, they were performed in the period from December 2020 to January 2021. The selection of material followed the reading of the titles and abstracts of identified publications. Articles were included if they fulfilled the following PICOS (population, intervention or exposure, comparison, outcomes, study design) eligibility criteria. 2.2. Population: Women who were pregnant at the time of the first wave of COVID-19 outbreak in their country. 2.3. Intervention/Exposure: Studies focusing on mental health outcomes (e.g., depression, anxiety, insomnia, post-traumatic stress disorder) in the target population during the COVID-19 pandemic. 2.4. Comparison: This is not applicable for the aim of this rapid review. 2.5. Outcomes: We looked at the following outcomes: psychological symptomatology (e.g., self-reported depression, anxiety, insomnia, post-traumatic stress disorder). 2.6. Study Design: We included studies with primary data collection. 2.7. 
Selection Criteria: The inclusion criteria were being published in English, reporting primary data and having the full-length text available, being original articles with at least 100 participants, being about the new coronavirus pandemic (COVID-19), and referring exclusively to its psychological consequences for women who were pregnant during the outbreak or were within the first year postpartum. The exclusion criteria were being editorials, letters or commentaries. We excluded articles that did not consider psychological aspects during pregnancy and abstracts without the full text available. A total of 116 articles were found in the initial search. After duplicates and papers without full texts available were removed, 41 full texts of possibly pertinent studies were assessed for eligibility and were independently screened by both authors to reduce the selection bias. Finally, of a total of 116 publications found, 17 manuscripts met the aforementioned inclusion criteria; therefore, they were considered eligible and were included in the rapid review. Narrative synthesis was applied to analyze the relevant papers grouped under themes. The study selection process is illustrated by the PRISMA flow chart shown in Figure 1. 2.8. Data Extraction: The study characteristics of the included papers were extracted by the two authors independently, and relevant information is shown in Table 1, including country, population, number of participants, study design, measurement tools and main results. 3. Results: 3.1. Maternal Mental Health All of the identified papers suggest that the COVID-19 pandemic can have a significant impact on maternal mental health, mainly in the form of anxiety and depressive symptoms. The prevalence of depression and anxiety in pregnant women has significantly increased since the spread of COVID-19 disease. Pregnant women during the COVID-19 pandemic reported more psychological symptomatology compared to pregnant women before the COVID-19 outbreak. All of the identified papers suggest that the COVID-19 pandemic can have a significant impact on maternal mental health, mainly in the form of anxiety and depressive symptoms. The prevalence of depression and anxiety in pregnant women has significantly increased since the spread of COVID-19 disease. Pregnant women during the COVID-19 pandemic reported more psychological symptomatology compared to pregnant women before the COVID-19 outbreak. 3.2. Countries The studies included in the rapid review consider participants from China [22,23,24], Canada [18,25,26,27], Turkey [28,29], Argentina [3], Iran [30], Qatar [31], Spain [32], Italy [33,34], Pakistan [35], and Japan [36]. The studies included in the rapid review consider participants from China [22,23,24], Canada [18,25,26,27], Turkey [28,29], Argentina [3], Iran [30], Qatar [31], Spain [32], Italy [33,34], Pakistan [35], and Japan [36]. 3.3. Participants All the studies involved women who were at least 18 years old. Most of the papers concerned studies addressing women during pregnancy [3,18,22,23,25,26,33], two of which were case–control studies comparing pregnant and non-pregnant women [3,23]. Only one study [3] longitudinally monitored the population throughout the lockdown. In this case, participants were divided into two groups: 102 pregnant women and a control group of 102 non-pregnant women. 
One study was a case–control study [18] that considered pregnant women before the COVID-19 pandemic and pregnant women during the pandemic; finally, one contribution [27] considered pregnant and first-year postpartum women, assessed before and during the pandemic. A Chinese survey [22] compared the mental health status of pregnant women before the declaration of the COVID-19 epidemic and after. Only two studies [27,32] considered both pregnancy and the postpartum period. All the studies involved women who were at least 18 years old. Most of the papers concerned studies addressing women during pregnancy [3,18,22,23,25,26,33], two of which were case–control studies comparing pregnant and non-pregnant women [3,23]. Only one study [3] longitudinally monitored the population throughout the lockdown. In this case, participants were divided into two groups: 102 pregnant women and a control group of 102 non-pregnant women. One study was a case–control study [18] that considered pregnant women before the COVID-19 pandemic and pregnant women during the pandemic; finally, one contribution [27] considered pregnant and first-year postpartum women, assessed before and during the pandemic. A Chinese survey [22] compared the mental health status of pregnant women before the declaration of the COVID-19 epidemic and after. Only two studies [27,32] considered both pregnancy and the postpartum period. 3.4. Instruments As regards the administered instruments, all studies adopted self-reports; seven studies delivered only one questionnaire, the rest multiple measures. As concerns depression, seven studies applied the Edinburgh Postnatal Depression Scale (EPDS) [37], a 10-item self-report questionnaire addressing perinatal depressive symptoms within the last 7 days. The overall score is computed by adding items on a four-point Likert scale. Higher scores reflect more depressive symptoms. Three others applied self-report depression symptoms scales, although these were not specific for pregnancy and the postpartum period: the Center for Epidemiological Studies Depression Scale [38]; the Beck Depression Inventory II [39]; the Patient Health Questionnaire 9 [40]. With respect to anxiety, only two studies evaluated perinatal anxiety: one study with a questionnaire including ten items specifically addressing feelings about the health of the baby and her/his birth [41], and the other administered the Cambridge Worry Scale (CWS) [42] to assess pregnancy-specific anxiety as well as general anxiety, whereas the majority of scholars applied generic anxiety questionnaires. Four administered the State–Trait Anxiety Inventory (STAI) [43]; one study applied the Generalized Anxiety Disorder Scale 7 (GAD-7) [44], one the Patient-Reported Outcomes Measurement Information System (PROMIS) Anxiety Adult seven-item short form [45], and one the Visual Analog Scale (VAS) for anxiety [46]. Some studies used a combined measure for depression and anxiety: two applied the Hospital Anxiety and Depression Scale (HADS) [47], and one the Patient Health Questionnaire Anxiety–Depression Scale (PHQ-ADS) [48]. One study administered the Depression Anxiety Stress Scales 21 (DASS 21) [49] to distinguish between the affective syndromes of depression, anxiety and tension/stress. Three studies resorted to the the Positive and Negative Affect Schedule (PANAS) [50] to evaluate mood or emotion. 
Global psychological distress was also measured through the Kessler Psychological Distress Scale (K10) [51] in two papers, and the Symptom Checklist—90 (SCL-90) [52] in one paper. Finally, specific measures of other variables included in the contributions were evaluated, but none were specific to perinatality, with the exception of one study measuring the infants’ APGAR (Adaptability, Partnership, Growth, Affection, and Resolve)[53]. As regards the administered instruments, all studies adopted self-reports; seven studies delivered only one questionnaire, the rest multiple measures. As concerns depression, seven studies applied the Edinburgh Postnatal Depression Scale (EPDS) [37], a 10-item self-report questionnaire addressing perinatal depressive symptoms within the last 7 days. The overall score is computed by adding items on a four-point Likert scale. Higher scores reflect more depressive symptoms. Three others applied self-report depression symptoms scales, although these were not specific for pregnancy and the postpartum period: the Center for Epidemiological Studies Depression Scale [38]; the Beck Depression Inventory II [39]; the Patient Health Questionnaire 9 [40]. With respect to anxiety, only two studies evaluated perinatal anxiety: one study with a questionnaire including ten items specifically addressing feelings about the health of the baby and her/his birth [41], and the other administered the Cambridge Worry Scale (CWS) [42] to assess pregnancy-specific anxiety as well as general anxiety, whereas the majority of scholars applied generic anxiety questionnaires. Four administered the State–Trait Anxiety Inventory (STAI) [43]; one study applied the Generalized Anxiety Disorder Scale 7 (GAD-7) [44], one the Patient-Reported Outcomes Measurement Information System (PROMIS) Anxiety Adult seven-item short form [45], and one the Visual Analog Scale (VAS) for anxiety [46]. Some studies used a combined measure for depression and anxiety: two applied the Hospital Anxiety and Depression Scale (HADS) [47], and one the Patient Health Questionnaire Anxiety–Depression Scale (PHQ-ADS) [48]. One study administered the Depression Anxiety Stress Scales 21 (DASS 21) [49] to distinguish between the affective syndromes of depression, anxiety and tension/stress. Three studies resorted to the the Positive and Negative Affect Schedule (PANAS) [50] to evaluate mood or emotion. Global psychological distress was also measured through the Kessler Psychological Distress Scale (K10) [51] in two papers, and the Symptom Checklist—90 (SCL-90) [52] in one paper. Finally, specific measures of other variables included in the contributions were evaluated, but none were specific to perinatality, with the exception of one study measuring the infants’ APGAR (Adaptability, Partnership, Growth, Affection, and Resolve)[53]. 3.5. Prevalence of Depression and Anxiety Symptoms The prevalence of depression and anxiety reported was similar for all studies considered. With regards to the prevalence of depression in Qatar, for example, 39.2% of pregnant women presented depressive symptomatology [31]; in Turkey, the prevalence was 56.3% [28]; in Iran, 32.7% of the participants had symptoms of depression [30]; 58% in Spain [32]; in Canada, the studies indicated values close to 40% (37% [25]; 40.7% [27]); in China, 29.6% of women in Wu’s study [22] and 33.71% among the 2883 women involved in Sun et al.’s survey [24] referred to symptoms of depression. 
Concerning anxiety symptoms, in Qatar, a 34.4% prevalence of anxiety was identified [31]; 51% has been reported in Spain [32]; in Canada rates from 56.6% [25] to 72% [27] were detected, which are close to the Italian prevalence of 68% [33], while two Turkish studies found rates of 64.5% [28] and 43.9% [30], respectively. The prevalence of depression and anxiety reported was similar for all studies considered. With regards to the prevalence of depression in Qatar, for example, 39.2% of pregnant women presented depressive symptomatology [31]; in Turkey, the prevalence was 56.3% [28]; in Iran, 32.7% of the participants had symptoms of depression [30]; 58% in Spain [32]; in Canada, the studies indicated values close to 40% (37% [25]; 40.7% [27]); in China, 29.6% of women in Wu’s study [22] and 33.71% among the 2883 women involved in Sun et al.’s survey [24] referred to symptoms of depression. Concerning anxiety symptoms, in Qatar, a 34.4% prevalence of anxiety was identified [31]; 51% has been reported in Spain [32]; in Canada rates from 56.6% [25] to 72% [27] were detected, which are close to the Italian prevalence of 68% [33], while two Turkish studies found rates of 64.5% [28] and 43.9% [30], respectively. 3.6. Comparison between Pre- and Post-COVID Depressive and Anxiety Symptoms More specifically, one of the four studies from Canada (n =1987) found significantly higher anxiety and depressive symptoms compared to the scores in pregnant women before the COVID-19 pandemic, with 37% self-referring clinical levels of depression and 57% self-referring clinical levels of anxiety [25]. In Davenport et al.’s investigation [27], 900 women were involved: 58% were pregnant and 42% were in the first year postpartum. Pre-pandemic and current values were assessed for each group. It emerged that an EPDS score > 13 was self-reported in 15% pre-pandemic mothers and in 40.7% during the COVID 19. Moderate to high anxiety (STAI-state score > 40) was reported in 29% of women before the pandemic vs. 72% of women during its course. In another Canadian survey, two cohorts of pregnant women were recruited, one prior to the COVID-19 pandemic (n = 496) and the other one (n = 1258) was enrolled online during the pandemic [18]. Researchers have shown that the latter reported more depressive and anxiety symptoms than pregnant women assessed before the COVID-19 pandemic. In addition, the COVID-19 women reported higher levels of dissociative symptoms and of post-traumatic stress disorder symptoms, and also described more negative affectivity and less positive affectivity than the pre-COVID-19 cohort did. In addition, this study showed that pregnant women assessed within the pandemic context with a previous psychiatric diagnosis or coming from a low-income background were more inclined to develop psychiatric symptoms. The latter result contrasts with evidence from another study [31]: despite the main findings of Farrel’s research revealing that 34.4% of women reached clinical levels for anxiety and 39.2% for depression, these analyses did not reveal any association between these symptoms and previous mental health, occupation, pregnancy complications and gestational age. These results highlight that the worsening of psychiatric symptoms could be attributed to the psychological impact of the pandemic and to the containment measures. Similarly, Effati-Daryani et al. 
[30] showed that among their sample of 205 women, based on the scores obtained in DASS-21, 67.3% were in the normal range, 32.7% were identified to have symptoms of depression (12.7% mild, 10.7% moderate, 7.3% severe and 2.0% extremely), 56.1% were in the normal range, and 43.9% had symptoms of anxiety (17.6% mild, 12.2% moderate, 6.3% severe and 7.8% very severe). As emerged from Farrel’s aforementioned study, the evidence showed no statistically significant relationship between gestational age and depression, stress, and anxiety levels (p > 0.05). A multi-center cross-sectional study [22] provided the opportunity to compare the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Pregnant women assessed after the abovementioned declaration had significantly increased rates of depressive symptoms (26.0% vs. 29.6%) compared to women evaluated before the declaration. Additionally, the prevalence of depressive symptoms increased along with the increase in the number of newly confirmed cases, suspected infections and deaths. This evidence is consistent with Sun et al.’s study [24] that demonstrated that the prevalence of perinatal depression increased with the increasing number of confirmed cases of COVID-19 patients. In particular, among the 2883 women involved in the survey, 33.71% had depressive symptoms, 27.02% showed mild depression, 5.24% moderate depression, and 1.46% severe depression. More specifically, one of the four studies from Canada (n =1987) found significantly higher anxiety and depressive symptoms compared to the scores in pregnant women before the COVID-19 pandemic, with 37% self-referring clinical levels of depression and 57% self-referring clinical levels of anxiety [25]. In Davenport et al.’s investigation [27], 900 women were involved: 58% were pregnant and 42% were in the first year postpartum. Pre-pandemic and current values were assessed for each group. It emerged that an EPDS score > 13 was self-reported in 15% pre-pandemic mothers and in 40.7% during the COVID 19. Moderate to high anxiety (STAI-state score > 40) was reported in 29% of women before the pandemic vs. 72% of women during its course. In another Canadian survey, two cohorts of pregnant women were recruited, one prior to the COVID-19 pandemic (n = 496) and the other one (n = 1258) was enrolled online during the pandemic [18]. Researchers have shown that the latter reported more depressive and anxiety symptoms than pregnant women assessed before the COVID-19 pandemic. In addition, the COVID-19 women reported higher levels of dissociative symptoms and of post-traumatic stress disorder symptoms, and also described more negative affectivity and less positive affectivity than the pre-COVID-19 cohort did. In addition, this study showed that pregnant women assessed within the pandemic context with a previous psychiatric diagnosis or coming from a low-income background were more inclined to develop psychiatric symptoms. The latter result contrasts with evidence from another study [31]: despite the main findings of Farrel’s research revealing that 34.4% of women reached clinical levels for anxiety and 39.2% for depression, these analyses did not reveal any association between these symptoms and previous mental health, occupation, pregnancy complications and gestational age. These results highlight that the worsening of psychiatric symptoms could be attributed to the psychological impact of the pandemic and to the containment measures. Similarly, Effati-Daryani et al. 
[30] showed that among their sample of 205 women, based on the scores obtained in DASS-21, 67.3% were in the normal range, 32.7% were identified to have symptoms of depression (12.7% mild, 10.7% moderate, 7.3% severe and 2.0% extremely), 56.1% were in the normal range, and 43.9% had symptoms of anxiety (17.6% mild, 12.2% moderate, 6.3% severe and 7.8% very severe). As emerged from Farrel’s aforementioned study, the evidence showed no statistically significant relationship between gestational age and depression, stress, and anxiety levels (p > 0.05). A multi-center cross-sectional study [22] provided the opportunity to compare the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Pregnant women assessed after the abovementioned declaration had significantly increased rates of depressive symptoms (26.0% vs. 29.6%) compared to women evaluated before the declaration. Additionally, the prevalence of depressive symptoms increased along with the increase in the number of newly confirmed cases, suspected infections and deaths. This evidence is consistent with Sun et al.’s study [24] that demonstrated that the prevalence of perinatal depression increased with the increasing number of confirmed cases of COVID-19 patients. In particular, among the 2883 women involved in the survey, 33.71% had depressive symptoms, 27.02% showed mild depression, 5.24% moderate depression, and 1.46% severe depression. 3.7. Comparison between Pregnant versus Non-Pregnant Women’s Mental Health Regarding the prevalence of anxiety and depressive symptoms during the pandemic compared between pregnant and non-pregnant women, discordant results emerged from the two studies considered [3,23]. The first one [3] demonstrated that, during quarantine, both pregnant and non-pregnant women showed a gradual increment in psychopathological measures and a decline in positive affect. However, the group of pregnant women showed a more pronounced increase in depression, anxiety and negative affect than the non-pregnant women did. In addition, pregnant women showed a more evident decrease in positive affect. On the contrary, in the other study [23], pregnant women seemed to have an advantage when facing mental problems; really, they showed lower levels of depression, anxiety, insomnia and post-traumatic stress disorder (PTSD) than non-pregnant women. Specifically, 5.3%, 6.8%, 2.4%, 2.6%, and 0.9% of pregnant women, respectively, presented symptoms of depression, anxiety, physical discomfort, insomnia, and PTSD, whereas non-pregnant women’s prevalences were 17.5% (depression), 17.5% (anxiety), 2.5% (physical discomfort), 5.4% (insomnia), and 5.7% (PTSD). Taken together, the data that emerged from the papers included in this review suggest that the COVID-19 outbreak had a moderate to severe effect on the mental health of pregnant women; actually, the prevalence of psychological symptoms (mainly depression and anxiety) has significantly increased with the diffusion of COVID-19. Regarding the prevalence of anxiety and depressive symptoms during the pandemic compared between pregnant and non-pregnant women, discordant results emerged from the two studies considered [3,23]. The first one [3] demonstrated that, during quarantine, both pregnant and non-pregnant women showed a gradual increment in psychopathological measures and a decline in positive affect. However, the group of pregnant women showed a more pronounced increase in depression, anxiety and negative affect than the non-pregnant women did. 
In addition, pregnant women showed a more evident decrease in positive affect. On the contrary, in the other study [23], pregnant women seemed to have an advantage when facing mental problems; really, they showed lower levels of depression, anxiety, insomnia and post-traumatic stress disorder (PTSD) than non-pregnant women. Specifically, 5.3%, 6.8%, 2.4%, 2.6%, and 0.9% of pregnant women, respectively, presented symptoms of depression, anxiety, physical discomfort, insomnia, and PTSD, whereas non-pregnant women’s prevalences were 17.5% (depression), 17.5% (anxiety), 2.5% (physical discomfort), 5.4% (insomnia), and 5.7% (PTSD). Taken together, the data that emerged from the papers included in this review suggest that the COVID-19 outbreak had a moderate to severe effect on the mental health of pregnant women; actually, the prevalence of psychological symptoms (mainly depression and anxiety) has significantly increased with the diffusion of COVID-19. 3.8. Beyond Depression and Anxiety: Specific Maternal Worries and Fears In addition to the common psychiatric symptoms of depression and anxiety, some of the included studies also reported a high prevalence of fear, which represents the most reported symptom in pregnant women [25,26,29,31,33,34,35,36]. The concerns regarding infection were mainly for the pregnancy and for their families and children. Many women in the reviewed studies from different countries expressed worries about their own health and that of their unborn children in relation to the pandemic [25,29,31,33,35,36], concerns about delivery (e.g., whether their partner will be present, giving birth) and the baby’s health (e.g., something being wrong with the infant) [26]. In particular, two studies reported evidence that pregnant women experience great anxiety regarding the fear of transmitting the virus vertically to their baby [29,33]. Saccone et al. [33] pointed out that almost half of the women (46%) had worries about transmitting the infection to their infants. In the survey headed by Akgor et al. [29], the authors found that 82.5% (n = 245) of the pregnant women involved in the research reported high anxiety regarding the vertical transmission of the disease to their babies during delivery if they were infected with COVID-19 [29]. Consistently with this, in another survey [31] on 552 mothers, 353 (64%) women were highly aware and worried about the COVID-19 pandemic (i.e., fears of carrying the virus, vertical transmission causing harm to fetuses, vulnerability). This finding emerged despite the fact that 64% of respondents did not acknowledge any impact of the COVID-19 pandemic on their mental well-being. In a cross-sectional survey on 288 women accessing maternity services in Qatar, Farrel et al. [31] identified worries about pregnancy in 143 women and concerns about family and children in 189 of them. A high prevalence of fear of abnormal perinatal consequence was also detected; in one study conducted in Italy, over half of the mothers were worried that COVID-19 could cause a fetal structural anomaly, fetal growth restriction or preterm delivery [34]. Additionally, more than half of the pregnant women involved in another survey (66%, n = 196) were worried about pregnancy problems if their visits to the hospital were delayed or cancelled [29]. Furthermore, Lebel et al. 
[25] identified that the elevated depression and anxiety symptoms during the COVID-19 pandemic were significantly associated with COVID-19-related concerns, i.e., threats to their baby’s health and to their own lives, worries about not receiving enough care during pregnancy, and also worries due to social isolation. These levels are much higher than what is typical for pregnant women and those reported by the rest of the community during the COVID-19 pandemic [25]. In addition to the common psychiatric symptoms of depression and anxiety, some of the included studies also reported a high prevalence of fear, which represents the most reported symptom in pregnant women [25,26,29,31,33,34,35,36]. The concerns regarding infection were mainly for the pregnancy and for their families and children. Many women in the reviewed studies from different countries expressed worries about their own health and that of their unborn children in relation to the pandemic [25,29,31,33,35,36], concerns about delivery (e.g., whether their partner will be present, giving birth) and the baby’s health (e.g., something being wrong with the infant) [26]. In particular, two studies reported evidence that pregnant women experience great anxiety regarding the fear of transmitting the virus vertically to their baby [29,33]. Saccone et al. [33] pointed out that almost half of the women (46%) had worries about transmitting the infection to their infants. In the survey headed by Akgor et al. [29], the authors found that 82.5% (n = 245) of the pregnant women involved in the research reported high anxiety regarding the vertical transmission of the disease to their babies during delivery if they were infected with COVID-19 [29]. Consistently with this, in another survey [31] on 552 mothers, 353 (64%) women were highly aware and worried about the COVID-19 pandemic (i.e., fears of carrying the virus, vertical transmission causing harm to fetuses, vulnerability). This finding emerged despite the fact that 64% of respondents did not acknowledge any impact of the COVID-19 pandemic on their mental well-being. In a cross-sectional survey on 288 women accessing maternity services in Qatar, Farrel et al. [31] identified worries about pregnancy in 143 women and concerns about family and children in 189 of them. A high prevalence of fear of abnormal perinatal consequence was also detected; in one study conducted in Italy, over half of the mothers were worried that COVID-19 could cause a fetal structural anomaly, fetal growth restriction or preterm delivery [34]. Additionally, more than half of the pregnant women involved in another survey (66%, n = 196) were worried about pregnancy problems if their visits to the hospital were delayed or cancelled [29]. Furthermore, Lebel et al. [25] identified that the elevated depression and anxiety symptoms during the COVID-19 pandemic were significantly associated with COVID-19-related concerns, i.e., threats to their baby’s health and to their own lives, worries about not receiving enough care during pregnancy, and also worries due to social isolation. These levels are much higher than what is typical for pregnant women and those reported by the rest of the community during the COVID-19 pandemic [25]. 3.9. Protective Factors However, several of the reviewed studies also focused on some possible factors that may mediate/moderate the impact of the pandemic on women’s mental health. 
Some scholars reported that increased perceived social support and support effectiveness were associated with lower mental health symptoms, and appeared to be protective factors against depression and anxiety [25,26,27]. Similarly, a Japanese survey including 1777 pregnant women demonstrated that a lack of social support is significantly related to depressive symptoms [36]. These results are in line with previous literature that proved that better social support was related to decreased depression and anxiety symptoms during both pregnancy and postpartum [54,55]. As is known, life during a pandemic is characterized by isolation, social distancing, restrictive measures and the limitation of movement, all of which can lead women to experience a lack of social support from friends, relatives, and partners [56], with negative consequences for mental health, as mentioned above. Physical activity has also been investigated in terms of its protective function for psychological symptoms. Specifically, four studies showed that physical activity is related to reduced mental health problems. Being involved in regular physical activity during the COVID-19 pandemic represents a protective factor for the onset of anxiety and depressive symptoms in pregnant [22,25,27,28] or postpartum women [27], as confirmed by Lebel et al. [25]. However, several of the reviewed studies also focused on some possible factors that may mediate/moderate the impact of the pandemic on women’s mental health. Some scholars reported that increased perceived social support and support effectiveness were associated with lower mental health symptoms, and appeared to be protective factors against depression and anxiety [25,26,27]. Similarly, a Japanese survey including 1777 pregnant women demonstrated that a lack of social support is significantly related to depressive symptoms [36]. These results are in line with previous literature that proved that better social support was related to decreased depression and anxiety symptoms during both pregnancy and postpartum [54,55]. As is known, life during a pandemic is characterized by isolation, social distancing, restrictive measures and the limitation of movement, all of which can lead women to experience a lack of social support from friends, relatives, and partners [56], with negative consequences for mental health, as mentioned above. Physical activity has also been investigated in terms of its protective function for psychological symptoms. Specifically, four studies showed that physical activity is related to reduced mental health problems. Being involved in regular physical activity during the COVID-19 pandemic represents a protective factor for the onset of anxiety and depressive symptoms in pregnant [22,25,27,28] or postpartum women [27], as confirmed by Lebel et al. [25]. 3.1. Maternal Mental Health: All of the identified papers suggest that the COVID-19 pandemic can have a significant impact on maternal mental health, mainly in the form of anxiety and depressive symptoms. The prevalence of depression and anxiety in pregnant women has significantly increased since the spread of COVID-19 disease. Pregnant women during the COVID-19 pandemic reported more psychological symptomatology compared to pregnant women before the COVID-19 outbreak. 3.2. Countries: The studies included in the rapid review consider participants from China [22,23,24], Canada [18,25,26,27], Turkey [28,29], Argentina [3], Iran [30], Qatar [31], Spain [32], Italy [33,34], Pakistan [35], and Japan [36]. 3.3. 
Participants: All the studies involved women who were at least 18 years old. Most of the papers concerned studies addressing women during pregnancy [3,18,22,23,25,26,33], two of which were case–control studies comparing pregnant and non-pregnant women [3,23]. Only one study [3] longitudinally monitored the population throughout the lockdown. In this case, participants were divided into two groups: 102 pregnant women and a control group of 102 non-pregnant women. One study was a case–control study [18] that considered pregnant women before the COVID-19 pandemic and pregnant women during the pandemic; finally, one contribution [27] considered pregnant and first-year postpartum women, assessed before and during the pandemic. A Chinese survey [22] compared the mental health status of pregnant women before the declaration of the COVID-19 epidemic and after. Only two studies [27,32] considered both pregnancy and the postpartum period. 3.4. Instruments: As regards the administered instruments, all studies adopted self-reports; seven studies delivered only one questionnaire, the rest multiple measures. As concerns depression, seven studies applied the Edinburgh Postnatal Depression Scale (EPDS) [37], a 10-item self-report questionnaire addressing perinatal depressive symptoms within the last 7 days. The overall score is computed by adding items on a four-point Likert scale. Higher scores reflect more depressive symptoms. Three others applied self-report depression symptoms scales, although these were not specific for pregnancy and the postpartum period: the Center for Epidemiological Studies Depression Scale [38]; the Beck Depression Inventory II [39]; the Patient Health Questionnaire 9 [40]. With respect to anxiety, only two studies evaluated perinatal anxiety: one study with a questionnaire including ten items specifically addressing feelings about the health of the baby and her/his birth [41], and the other administered the Cambridge Worry Scale (CWS) [42] to assess pregnancy-specific anxiety as well as general anxiety, whereas the majority of scholars applied generic anxiety questionnaires. Four administered the State–Trait Anxiety Inventory (STAI) [43]; one study applied the Generalized Anxiety Disorder Scale 7 (GAD-7) [44], one the Patient-Reported Outcomes Measurement Information System (PROMIS) Anxiety Adult seven-item short form [45], and one the Visual Analog Scale (VAS) for anxiety [46]. Some studies used a combined measure for depression and anxiety: two applied the Hospital Anxiety and Depression Scale (HADS) [47], and one the Patient Health Questionnaire Anxiety–Depression Scale (PHQ-ADS) [48]. One study administered the Depression Anxiety Stress Scales 21 (DASS 21) [49] to distinguish between the affective syndromes of depression, anxiety and tension/stress. Three studies resorted to the the Positive and Negative Affect Schedule (PANAS) [50] to evaluate mood or emotion. Global psychological distress was also measured through the Kessler Psychological Distress Scale (K10) [51] in two papers, and the Symptom Checklist—90 (SCL-90) [52] in one paper. Finally, specific measures of other variables included in the contributions were evaluated, but none were specific to perinatality, with the exception of one study measuring the infants’ APGAR (Adaptability, Partnership, Growth, Affection, and Resolve)[53]. 3.5. Prevalence of Depression and Anxiety Symptoms: The prevalence of depression and anxiety reported was similar for all studies considered. 
3.5. Prevalence of Depression and Anxiety Symptoms: The prevalence of depression and anxiety reported was similar across the studies considered. With regard to the prevalence of depression, in Qatar, for example, 39.2% of pregnant women presented depressive symptomatology [31]; in Turkey, the prevalence was 56.3% [28]; in Iran, 32.7% of the participants had symptoms of depression [30]; in Spain, 58% [32]; in Canada, the studies indicated values close to 40% (37% [25]; 40.7% [27]); in China, 29.6% of the women in Wu's study [22] and 33.71% of the 2883 women involved in Sun et al.'s survey [24] reported symptoms of depression. Concerning anxiety symptoms, in Qatar a 34.4% prevalence of anxiety was identified [31]; 51% was reported in Spain [32]; in Canada, rates from 56.6% [25] to 72% [27] were detected, close to the Italian prevalence of 68% [33], while a Turkish study found a rate of 64.5% [28] and an Iranian study one of 43.9% [30]. 3.6. Comparison between Pre- and Post-COVID Depressive and Anxiety Symptoms: More specifically, one of the four studies from Canada (n = 1987) found significantly higher anxiety and depressive symptoms compared to the scores of pregnant women before the COVID-19 pandemic, with 37% self-reporting clinical levels of depression and 57% self-reporting clinical levels of anxiety [25]. In Davenport et al.'s investigation [27], 900 women were involved: 58% were pregnant and 42% were in the first year postpartum. Pre-pandemic and current values were assessed for each group. An EPDS score > 13 was self-reported by 15% of mothers pre-pandemic and by 40.7% during the COVID-19 pandemic. Moderate to high anxiety (STAI-state score > 40) was reported by 29% of women before the pandemic vs. 72% of women during its course. In another Canadian survey, two cohorts of pregnant women were recruited, one prior to the COVID-19 pandemic (n = 496) and one enrolled online during the pandemic (n = 1258) [18]. The researchers showed that the latter cohort reported more depressive and anxiety symptoms than the pregnant women assessed before the COVID-19 pandemic. In addition, the women assessed during the pandemic reported higher levels of dissociative symptoms and of post-traumatic stress disorder symptoms, and also described more negative affectivity and less positive affectivity than the pre-COVID-19 cohort did. This study also showed that pregnant women assessed within the pandemic context who had a previous psychiatric diagnosis or came from a low-income background were more inclined to develop psychiatric symptoms. The latter result contrasts with evidence from another study [31]: although the main findings of Farrel's research revealed that 34.4% of women reached clinical levels for anxiety and 39.2% for depression, the analyses did not reveal any association between these symptoms and previous mental health, occupation, pregnancy complications or gestational age. These results suggest that the worsening of psychiatric symptoms could be attributed to the psychological impact of the pandemic and to the containment measures. Similarly, Effati-Daryani et al. [30] showed that, among their sample of 205 women and based on DASS-21 scores, 67.3% were in the normal range for depression while 32.7% had symptoms of depression (12.7% mild, 10.7% moderate, 7.3% severe and 2.0% extremely severe), and 56.1% were in the normal range for anxiety while 43.9% had symptoms of anxiety (17.6% mild, 12.2% moderate, 6.3% severe and 7.8% very severe). As in Farrel's aforementioned study, no statistically significant relationship emerged between gestational age and depression, stress, or anxiety levels (p > 0.05).
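The DASS-21 severity bands reported by Effati-Daryani et al. [30] can be illustrated with a short sketch. Note that the cutoffs below are the conventional DASS manual thresholds, applied after doubling the 7-item subscale score; they are an assumption, since the paper does not state which thresholds were used.

```python
def dass21_depression_band(raw_subscale: int) -> str:
    """Classify a DASS-21 depression subscale score into severity bands.

    The raw subscale (7 items, each 0-3) is doubled to match full DASS-42
    norms; the band boundaries follow the conventional DASS manual and are
    an assumption on our part.
    """
    score = raw_subscale * 2
    if score <= 9:
        return "normal"
    if score <= 13:
        return "mild"
    if score <= 20:
        return "moderate"
    if score <= 27:
        return "severe"
    return "extremely severe"

print(dass21_depression_band(4))  # normal (raw 4 -> scaled 8)
print(dass21_depression_band(8))  # moderate (raw 8 -> scaled 16)
```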
A multi-center cross-sectional study [22] provided the opportunity to compare the mental health status of pregnant women before and after the declaration of the COVID-19 epidemic. Pregnant women assessed after the declaration had significantly higher rates of depressive symptoms (29.6% vs. 26.0%) than women evaluated before it. Additionally, the prevalence of depressive symptoms increased along with the increase in the number of newly confirmed cases, suspected infections and deaths. This evidence is consistent with Sun et al.'s study [24], which demonstrated that the prevalence of perinatal depression increased with the increasing number of confirmed COVID-19 cases. In particular, among the 2883 women involved in the survey, 33.71% had depressive symptoms: 27.02% showed mild depression, 5.24% moderate depression, and 1.46% severe depression. 3.7. Comparison between Pregnant versus Non-Pregnant Women's Mental Health: Regarding the prevalence of anxiety and depressive symptoms during the pandemic compared between pregnant and non-pregnant women, discordant results emerged from the two studies considered [3,23]. The first one [3] demonstrated that, during quarantine, both pregnant and non-pregnant women showed a gradual increment in psychopathological measures and a decline in positive affect. However, the group of pregnant women showed a more pronounced increase in depression, anxiety and negative affect than the non-pregnant women did, as well as a more evident decrease in positive affect. On the contrary, in the other study [23], pregnant women seemed to have an advantage when facing mental problems; indeed, they showed lower levels of depression, anxiety, insomnia and post-traumatic stress disorder (PTSD) than non-pregnant women. Specifically, 5.3%, 6.8%, 2.4%, 2.6%, and 0.9% of pregnant women, respectively, presented symptoms of depression, anxiety, physical discomfort, insomnia, and PTSD, whereas non-pregnant women's prevalences were 17.5% (depression), 17.5% (anxiety), 2.5% (physical discomfort), 5.4% (insomnia), and 5.7% (PTSD). Taken together, the data that emerged from the papers included in this review suggest that the COVID-19 outbreak had a moderate to severe effect on the mental health of pregnant women; indeed, the prevalence of psychological symptoms (mainly depression and anxiety) significantly increased with the diffusion of COVID-19. 3.8. Beyond Depression and Anxiety: Specific Maternal Worries and Fears: In addition to the common psychiatric symptoms of depression and anxiety, some of the included studies also reported a high prevalence of fear, which represents the most reported symptom in pregnant women [25,26,29,31,33,34,35,36]. The concerns regarding infection were mainly for the pregnancy and for their families and children. Many women in the reviewed studies from different countries expressed worries about their own health and that of their unborn children in relation to the pandemic [25,29,31,33,35,36], concerns about delivery (e.g., whether their partner would be present at the birth) and the baby's health (e.g., something being wrong with the infant) [26]. In particular, two studies reported evidence that pregnant women experience great anxiety regarding the fear of transmitting the virus vertically to their baby [29,33]. Saccone et al. [33] pointed out that almost half of the women (46%) had worries about transmitting the infection to their infants.
In the survey headed by Akgor et al. [29], the authors found that 82.5% (n = 245) of the pregnant women involved in the research reported high anxiety regarding the vertical transmission of the disease to their babies during delivery if they were infected with COVID-19. Consistent with this, in another survey [31] of 552 mothers, 353 (64%) women were highly aware of and worried about the COVID-19 pandemic (i.e., fears of carrying the virus, of vertical transmission causing harm to fetuses, and of vulnerability). This finding emerged despite the fact that 64% of respondents did not acknowledge any impact of the COVID-19 pandemic on their mental well-being. In a cross-sectional survey of 288 women accessing maternity services in Qatar, Farrel et al. [31] identified worries about pregnancy in 143 women and concerns about family and children in 189 of them. A high prevalence of fear of abnormal perinatal consequences was also detected; in one study conducted in Italy, over half of the mothers were worried that COVID-19 could cause a fetal structural anomaly, fetal growth restriction or preterm delivery [34]. Additionally, more than half of the pregnant women involved in another survey (66%, n = 196) were worried about pregnancy problems if their visits to the hospital were delayed or cancelled [29]. Furthermore, Lebel et al. [25] identified that the elevated depression and anxiety symptoms during the COVID-19 pandemic were significantly associated with COVID-19-related concerns, i.e., threats to their baby's health and to their own lives, worries about not receiving enough care during pregnancy, and worries due to social isolation. These levels are much higher than those typically reported by pregnant women and by the rest of the community during the COVID-19 pandemic [25]. 3.9. Protective Factors: However, several of the reviewed studies also focused on possible factors that may mediate or moderate the impact of the pandemic on women's mental health. Some scholars reported that increased perceived social support and support effectiveness were associated with lower mental health symptoms, and appeared to be protective factors against depression and anxiety [25,26,27]. Similarly, a Japanese survey including 1777 pregnant women demonstrated that a lack of social support is significantly related to depressive symptoms [36]. These results are in line with previous literature showing that better social support was related to decreased depression and anxiety symptoms during both pregnancy and postpartum [54,55]. As is known, life during a pandemic is characterized by isolation, social distancing, restrictive measures and the limitation of movement, all of which can lead women to experience a lack of social support from friends, relatives, and partners [56], with negative consequences for mental health, as mentioned above. Physical activity has also been investigated in terms of its protective function against psychological symptoms. Specifically, four studies showed that physical activity is related to reduced mental health problems: being involved in regular physical activity during the COVID-19 pandemic represents a protective factor against the onset of anxiety and depressive symptoms in pregnant [22,25,27,28] or postpartum women [27], as confirmed by Lebel et al. [25]. 4. Discussion: The present rapid review was aimed at describing the current scientific evidence on the psychological impact of the COVID-19 outbreak on mothers' mental health in the perinatal period.
We chose a rapid review with the aim of providing evidence in a “timely and cost-effective manner”, as stated by the WHO [21]. Indeed, perinatality should be considered a priority in the primary care system. An effective means of identifying the condition of women and their infants, from pregnancy through the child's first year of life, may inform the management of potential mental health disorders and guide efficacious preventive interventions. Although the current review may not be considered exhaustive, our findings confirm that the COVID-19 pandemic has a considerable impact on the psychological health of pregnant and postpartum women. Indeed, although multicentered studies are lacking, research from different countries and cultures has shown an increased prevalence of depression and anxiety among mothers during COVID-19 compared to similar pre-COVID-19 samples of mothers [18,22,25,26,27,28,31,33,34]. Hence, an accurate screening approach should be implemented for women in the peripartum. This is especially true for healthcare systems that are unable to respond to the progressive increase in the demand for services; under the pressure of the COVID-19 pandemic emergency, screening can help reduce the workload by referring only the most vulnerable women for targeted intervention. It is noteworthy that most studies were carried out through web-based questionnaires. This modality seems particularly useful in the abovementioned low-resource contexts. Computerized screening should also be favored, since it has been shown that people tend to reveal more personal information through the computer and feel a greater sense of anonymity, increasing the likelihood of participation [57]. Some studies also detected, in addition to depressive and anxiety symptoms, higher percentages of post-traumatic stress disorder, dissociation, and distress [18], and higher levels of negative affectivity [3,18]. Moreover, independently of the identified psychological symptomatology, high levels of awareness of and concern about the COVID-19 pandemic emerged, especially fears of carrying the virus and of vertical transmission causing harm to fetuses [25,29,31,33,35,36]. These are relevant issues in that maternal malaise is not limited to routinely screened psychological problems (i.e., depression and anxiety); traumatic responses and emotional dysregulation may also affect mothers and their infants after pregnancy, with relevant long-term psychophysiological effects [58,59,60,61]. Specific attention must be paid to these vulnerabilities in order to provide efficacious interventions. 5. Limitations: The findings of this review have to be seen in light of some limitations. First, grey literature was excluded, and the articles included were limited to those in the English language retrieved with the selected keywords and databases. For this reason, the review cannot claim to be representative of all studies addressing the topic under investigation; therefore, the evidence that emerged could be overestimated or underestimated. Additionally, only two surveys compared mental health outcomes for pregnant women with those of non-pregnant women during the pandemic, only one study compared pregnant women before the COVID-19 pandemic with pregnant women during the pandemic, and only one considered pregnant and first-year postpartum women assessed before and during the pandemic.
The paucity of studies makes it difficult to pinpoint the differences between being pregnant during a pandemic and during another period. Moreover, no standardized quality appraisal of the included papers was carried out, as is usual in rapid evidence reviews [62]. This necessitates great caution in the interpretation of the review's findings. In fact, the reviewed studies diverged with respect to enrollment modalities and sample characteristics. Additionally, there was significant variability in the assessment measures, which limits the generalizability of our findings; moreover, in some cases, differences in the symptoms emerged even though the same questionnaire was administered. Therefore, to improve screening and prevention/intervention programs, a more rigorous study design is required, which should include the calculation of effect sizes, control groups, and a longitudinal perspective [14,63]. Essentially, every study resorted to self-report questionnaires. Even though self-report measures are commonly administered in studies addressing maternal psychological functioning in the perinatal period, biased responses cannot be excluded. Furthermore, only a few studies included instruments specific to the pre- and postpartum period, which may lead to misleading conclusions. Besides this, such instruments do not allow us to distinguish between transient maternal malaise and more structured psychopathology, a distinction that is important for intervention; that said, they make a crucial contribution to prevention programs. Moreover, even though some of the reviewed studies considered additional variables (i.e., social support, physical activity) that may buffer the impact of the COVID-19 pandemic on mothers' psychological symptomatology [22,25,26,27,28,36], future studies should consider the many risk factors that have been identified in the literature as relevant intervening variables, such as maternal SES and education, childbirth experiences, comorbidity, romantic couple adjustment, infant temperament, and breastfeeding [64,65,66,67,68]. From a research perspective, the interrelationships between these variables should be investigated through path analysis and linear structural relations modeling to understand their contributions to the outcomes for mothers and children. 6. Conclusions: The present review provides valuable clinical indications regarding what should be carefully monitored during the evaluation of women during perinatality. In fact, the COVID-19 pandemic adds numerous risk factors for the mental health of mothers during the perinatal period. Longitudinal, cohort, multicenter studies should be carried out in order to promote standardized screening and intervention guidelines to support pregnant and postpartum women during the COVID-19 outbreak, and to promote healthy family functioning. The identification of risk and protective factors during the current pandemic is particularly important, especially considering the long-term effect that maternal mental health has on a child's development. Finally, despite the acknowledged distress linked to such a situation, it may offer the possibility to develop pioneering online methods to detect psychological problems and deliver early mental health interventions to mothers and their infants.
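As a concrete illustration of the path-analysis approach recommended in the limitations above, the following sketch specifies a simple structural model in Python. It assumes the third-party semopy package is available; the variable names, the model structure, and the data file are hypothetical, not taken from any of the reviewed studies.

```python
import pandas as pd
import semopy  # third-party SEM package; assumed to be installed

# Hypothetical path model: social support and physical activity buffer
# pandemic-related stress, which in turn predicts maternal depressive
# symptoms (lavaan-style syntax).
model_desc = """
stress ~ social_support + physical_activity
depression ~ stress + social_support
"""

df = pd.read_csv("perinatal_survey.csv")  # hypothetical dataset
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```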
Background: The perinatal period is a particularly vulnerable period in women's lives that implies significant physiological and psychological changes that can place women at higher risk for depression and anxiety symptoms. In addition, the ongoing pandemic of coronavirus disease 2019 (COVID-19) is likely to increase this vulnerability and the prevalence of mental health problems. This review aimed to investigate the existing literature on the psychological impact of the COVID-19 pandemic on women during pregnancy and the first year postpartum. Methods: The literature search was conducted using the following databases: PubMed, Scopus, WOS (Web of Science), PsycInfo and Google Scholar. Out of the total of 116 initially selected papers, 17 were included in the final work, according to the inclusion criteria. Results: The reviewed contributions report a moderate to severe impact of the COVID-19 outbreak on the mental health of pregnant women, mainly in the form of a significant increase in depression (up to 58% in Spain) and anxiety symptoms (up to 72% in Canada). In addition to the common psychological symptoms, COVID-19-specific worries emerged with respect to its potential effects on pregnancy and the well-being of the unborn child. Social support and being engaged in regular physical activity appear to be protective factors able to buffer the effects of the pandemic on maternal mental health. Conclusions: Despite the limitations of the study design, the evidence suggests that it is essential to provide appropriate psychological support to pregnant women during the emergency in order to protect their mental health and to minimize the risks of long-term effects on child development.
1. Introduction: On 11 March 2020, the World Health Organization (WHO) officially declared coronavirus disease 2019 (COVID-19), which originated in Wuhan in December 2019, a pandemic. In the course of most infectious disease outbreaks, restrictive measures can be necessary to stop the virus. With the aim of limiting Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) propagation, governments around the world have imposed restrictions such as national lockdowns and social distancing. A recent review [1] suggested that restrictive measures are often associated with negative psychological effects that can still be identified months or years later, and highlighted the impact of quarantine and isolation on mental health. Indeed, the current outbreak is leading to psychological distress and increased mental health problems, such as stress, anxiety, depressive symptoms, insomnia, denial, anger and fear [2]. Psychological distress and mood disorders seem most likely in more vulnerable populations [3,4,5], such as pregnant women. Maternal mental health is particularly important to consider, due to the increased risk for depression and anxiety [6]. Pregnancy and the postpartum period, especially for first-time mothers, have been identified as delicate periods in a woman's life that are accompanied by significant social, psychological and also physiological changes [7,8], and for this reason pregnant women have been considered a high-risk population. Several studies have reported that the perinatal period is a time characterized by increased risk for emotional disorders such as depression, anxiety, and trauma-related disorders, especially in the presence of stress conditions [8,9,10]. This is also true for pregnant and postpartum women and their infants in the face of emergencies or natural disasters [11,12]. Indeed, during the SARS outbreak, pregnant women may have had concerns about their own health and about the health of their unborn babies, and may have displayed fears relating to pregnancy, to childbirth, or both. Additionally, feelings of uncertainty (characteristic of an epidemic) represent a significant stressor that can increase distress in pregnant women [13]. Overall, these complex and multiple variables may affect both mothers' and their children's physical and psychological health over the short, medium and long term [14,15,16,17]. Therefore, the COVID-19 pandemic and associated factors could produce additional stress for women during perinatality and accentuate this predisposition [3,18]. For these reasons, and due to the negative effect of psychological distress during pregnancy on the health of mothers and their offspring, priority should be given to supporting maternal mental health in the perinatal period [19,20]. These issues suggest that research is necessary to explore the effects of the COVID-19 pandemic on women during perinatality. The current review was designed to summarize the existing literature on the psychological impact of the COVID-19 pandemic on pregnant women.
Keywords: COVID-19 | maternal mental health | anxiety | depression | perinatality
MeSH terms: Anxiety | COVID-19 | Canada | Child | Depression | Female | Humans | Mental Health | Pandemics | Parturition | Pregnancy | SARS-CoV-2 | Spain | Stress, Psychological
PMID: 34283002
CONTEXT: Pseudocedrela kotschyi (Schweinf.) Harms (Meliaceae) is an important medicinal plant found in tropical and subtropical countries of Africa. Traditionally, P. kotschyi is used in the treatment of various diseases including diabetes, malaria, abdominal pain and diarrhoea.
METHODS: Through interpreting already published scientific manuscripts retrieved from different scientific search engines, namely, Medline, PubMed, EMBASE, Science Direct and Google Scholar, an up-to-date review of the medicinal potential of P. kotschyi, covering the period from inception until September 2020, was compiled. 'Pseudocedrela kotschyi', 'traditional uses', 'pharmacological properties' and 'chemical constituents' were used as search words.
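A search of this kind can be reproduced programmatically; the following minimal sketch queries PubMed through Biopython's Entrez module, assuming Biopython is installed. The e-mail address is a placeholder, and the query string only approximates the stated search words.

```python
from Bio import Entrez  # Biopython; assumed to be installed

Entrez.email = "researcher@example.org"  # placeholder; required by NCBI

# Approximate the review's search terms against PubMed.
query = ('"Pseudocedrela kotschyi" AND (traditional uses OR '
         'pharmacological properties OR chemical constituents)')
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # number of matching records
print(record["IdList"])  # PMIDs to screen against the inclusion criteria
```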
RESULTS: At present, more than 30 chemical constituents have been isolated and identified from the root and stem bark of P. kotschyi, among which limonoids and triterpenes are the main active constituents. Based on prior research, P. kotschyi has been reported to possess anti-inflammatory, analgesic, antipyretic, anthelminthic, antimalarial, anti-leishmanial, anti-trypanosomal, hepatoprotective, antioxidant, antidiabetic, antidiarrheal, antimicrobial, and anticancer effects.
CONCLUSIONS: P. kotschyi is reported to be effective in treating a variety of diseases. Current phytochemical and pharmacological studies mainly focus on the antimalarial, anti-leishmanial, anti-trypanosomal and anticancer potential of the root and stem bark of P. kotschyi. Although experimental data support the beneficial medicinal properties of this plant, there is still a paucity of information on its toxicity profile. Nonetheless, this review provides the basis for future research work.
[ "Ethnopharmacology", "Medicine, Traditional", "Meliaceae", "Phytochemicals", "Phytotherapy", "Plant Extracts", "Plants, Medicinal" ]
PMCID: 8293955
Introduction: Traditional medicinal plants have been an essential source of remedies for various illnesses since ancient times. Preparations of plant materials such as infusions, decoctions, powders, or pastes have been used in various traditional practices in different parts of the world. People living in Africa and Asia make use of herbal medications to supplement conventional medical practice (Ekor 2014). There has been increasing interest in the usage of herbal medicines in recent years: about 80% of the world's population uses phytotherapeutic medicines (Balekundri and Mannur 2020). The WHO estimated that the size of the global market for herbal products was USD 60 billion in the year 2000, and this is expected to grow at 7% per annum towards USD 5 trillion by the year 2050 (Tan et al. 2020). Several analyses have clearly verified the traditional claims of numerous medicinal plants, leading to the commercialisation of many herbal products and their nomination as leads in the development of pharmaceutical drugs (Williams 2021). Many clinically useful drugs have been discovered based on knowledge derived from the ethnomedicinal applications of various herbal materials (Balunas and Kinghorn 2005). Pseudocedrela kotschyi (Schweinf.) Harms (Meliaceae) is an important medicinal plant found in the tropical and subtropical countries of Africa. This plant has been extensively used in the African traditional medicine system for the treatment of a variety of diseases, particularly as an analgesic, antimicrobial, antimalarial, anthelminthic, and antidiarrheal agent. The main focus of this review was to establish the ethnopharmacological uses and medicinal characteristics of P. kotschyi and highlight its potential as a source of future drugs for the treatment of various tropical diseases.
Conclusions: P. kotschyi is an important medicinal plant which is used in the traditional treatment of different ailments. Based on its ethnomedicinal claims, extensive pharmacological and phytochemical investigations have been carried out, leading to the isolation and characterisation of several bioactive constituents. Results from the pharmacological investigations of this plant and its phytoconstituents have demonstrated its high therapeutic potential in the treatment of cancer and tropical diseases, particularly malaria, leishmaniasis and trypanosomiasis. Although experimental data support the beneficial medicinal properties of P. kotschyi, there are insufficient data on the toxicity and safety profile of the plant. Nonetheless, this review provides the foundation for future work. Considering the amount of knowledge so far obtained on the medicinal properties of P. kotschyi, further studies on this plant should be directed towards establishing its safety profile, as well as the design and development of drug products, either as single chemical entities or as standardised herbal preparations. Tropical diseases are among the most neglected health problems in the world, and the pharmaceutical industry shows little research and development interest in this area despite the devastating effects of such diseases. Therefore, research findings of this nature should be advanced towards the development of useful medicinal products.
[ "Methodology", "Geographical distribution", "Ethnomedicinal uses", "Phytochemistry", "Pharmacological activity", "Anti-inflammatory, analgesic and antipyretic activities", "Antiparasitic activity", "Antimicrobial activity", "Antioxidant and hepatoprotective activities", "Hypoglycaemic and digestive enzyme inhibitory activities", "Antiproliferative activity", "Antidiarrheal activity", "Toxicity" ]
[ "Scientific manuscripts on P. kotschyi were retrieved from different scientific search engines. Literature search was carried out on PUBMED using Pseudocedrela kotschyi as key words. Additional literature searches on Medline, EMBASE, Science Direct and Google scholar databases were done using pharmacological activity, chemical constituents and traditional uses of Pseudocedrela kotschyi as search terms. Literature published on the topic in English language from inception until September, 2020 were collected, analysed and an up-to-date review on the medicinal potential of P. kotschyi was compiled.\nGeographical distribution P. kotschyi (common name: dry zone cedar) is a medicinal plant. Other common names of P. kotschyi are Tuna (Hausa) and Emi gbegi in Yoruba. It is found in tropical and subtropical countries of Africa which include Nigeria, Cote d’Ivoire, Senegal, Ghana, Democratic Republic of Congo, and Uganda. The plant often grows as a medium sized tree of about 12–20 ft high (Ayo et al. 2010; Alain et al. 2014; Alhassan et al. 2014). Below is the taxonomical classification of P. kotschyi (Hassler 2019).\n\n\nClassification\n\n\nKingdom  Plantae\nPhylum  Tracheophyta\nClass   Magnoliopsida\nOrder    Sapindales\nFamily   Meliaceae\nGenus    Pseudocedrela\nSpecies   kotschyi\nP. kotschyi (common name: dry zone cedar) is a medicinal plant. Other common names of P. kotschyi are Tuna (Hausa) and Emi gbegi in Yoruba. It is found in tropical and subtropical countries of Africa which include Nigeria, Cote d’Ivoire, Senegal, Ghana, Democratic Republic of Congo, and Uganda. The plant often grows as a medium sized tree of about 12–20 ft high (Ayo et al. 2010; Alain et al. 2014; Alhassan et al. 2014). Below is the taxonomical classification of P. kotschyi (Hassler 2019).\n\n\nClassification\n\n\nKingdom  Plantae\nPhylum  Tracheophyta\nClass   Magnoliopsida\nOrder    Sapindales\nFamily   Meliaceae\nGenus    Pseudocedrela\nSpecies   kotschyi\nEthnomedicinal uses Different parts of P. kotschyi are used in the traditional treatment of various diseases. The root is used in the treatment of leprosy (Pedersen et al. 2009), epilepsy, dementia (Kantati et al. 2016), diabetes (Salihu Shinkafi et al. 2015), malaria, abdominal pain, diarrhoea (Ahua et al. 2007), toothache and gingivitis (Tapsoba and Deschamps 2006). The root is also used as a chewing stick for tooth cleaning and enhancement of oral health (Wolinsky and Sote 1984; Olabanji et al. 2007; Adeniyi et al. 2010). The leaf is used in the treatment of female infertility (Olabanji et al. 2007), intestinal worms (Koné et al. 2005) and malaria (Asase et al. 2005). The stem bark is used in the treatment of cancer (Saidu et al. 2015), infantile dermatitis (Erinoso et al. 2016), stomach ache (Asase et al. 2005), toothache (Kayode and Sanni 2016), high blood-pressure, skin diseases, and haemorrhoids (Nadembega et al. 2011).\nDifferent parts of P. kotschyi are used in the traditional treatment of various diseases. The root is used in the treatment of leprosy (Pedersen et al. 2009), epilepsy, dementia (Kantati et al. 2016), diabetes (Salihu Shinkafi et al. 2015), malaria, abdominal pain, diarrhoea (Ahua et al. 2007), toothache and gingivitis (Tapsoba and Deschamps 2006). The root is also used as a chewing stick for tooth cleaning and enhancement of oral health (Wolinsky and Sote 1984; Olabanji et al. 2007; Adeniyi et al. 2010). The leaf is used in the treatment of female infertility (Olabanji et al. 2007), intestinal worms (Koné et al. 
2005) and malaria (Asase et al. 2005). The stem bark is used in the treatment of cancer (Saidu et al. 2015), infantile dermatitis (Erinoso et al. 2016), stomach ache (Asase et al. 2005), toothache (Kayode and Sanni 2016), high blood-pressure, skin diseases, and haemorrhoids (Nadembega et al. 2011).\nPhytochemistry Phytochemical investigations revealed that P. kotschyi contains a variety of pharmacological active secondary metabolites. A total of 32 compounds have so far reported to have been isolated from the plant which mainly include limonoids, triterpenes, and flavonoids.\nLimonoids are modified triterpenes which are highly oxygenated and have a typical furanylsteroid as their core structure (Roy and Saraf 2006). They are also known as tetraterpenoids. Limonoids are rare natural products which occur mainly in plants of Meliaceae and Rutaceae families and less frequently in the Cneoraceae family (Tan and Luo 2011).\nSeveral phragmalin-type limonoid orthoacetates (Figure 1) have reportedly been isolated from the of roots of this plant, namely, kotschyins A–H (1–8) (Hay et al. 2007; Dal Piaz et al. 2012). These compounds are complex with a very high degree of oxidation and rearrangement as compared to the parent limonoid structure.\nPhragmalin-type limonoid orthoacetates isolated from the roots of P. kotschyi.\nOther limonoid derivatives (Figure 2) found in the roots and stem bark of P. kotschyi are 7-deacetylgedunin (9), 7-deacetyl-7-oxogedunin (10) (Hay et al. 2007), 1α,7α epoxy-gedunin (11), gedunin (12) (Dal Piaz et al. 2012), kostchyienones A (13) and B (14), andirobin (15) methylangolensate (16) (Sidjui et al. 2018). Additional limonoids derivatives (Figure 3) that were isolated from the P. kotschyi bark include pseudrelones A–C (17–19) (Taylor 1979). The pseudrelones also have a phragmalin nucleus with orthoacetate function but they have a lesser degree of oxidation than the kotschyins.\nLimonoid derivatives isolated from the roots of P. kotschyi.\nLimonoid derivatives isolated from the bark of P. kotschyi.\nThe steroids isolated from this plant (Figure 4) include odoratone (20), spicatin (21), 11-acetil-odoratol (22) (Dal Piaz et al. 2012), β-sitosterol (23), 3-O-β-d-glucopyranosyl β-sitosterol (24) stigmasterol (25), 3-O-β-d-glucopyranosyl stigmasterol (26) betulinic acid (27) (Sidjui et al. 2018). Three secotirucallane triterpenes were also isolated from the stem bark of P. kotschyi. These include, 4-hydroxy-3,4-secotirucalla-7,24-dien-3,21-dioic acid (28), 3,4-secotirucalla-4(29),7,24-trien-3,21-dioic acid (29) and 3-methyl ester 3,4-secotirucalla-4(28),7,24-trien-3,21-dioic (30) (Figure 4) (Mambou et al. 2018). Two flavonoids, namely, 3,6,8-trihydroxy-2-(3,4-dihydroxylphenyl)-4H-chrom-4-one (31) and quercetin, 3,4′,7-trimethyl ether (32) (Figure 4) have also been isolated from the roots of this plant (Sidjui et al. 2018).\nTriterpenes and flavonoids isolated from the roots of P. kotschyi.\nThe GCMS analysis of essential oils from root and stem of P. kotschyi indicated that both oils contain mainly sesquiterpenoids. 
These include, α-cubebene, α-copaene, β-elemene, β-caryophyllene, trans-α-bergamotene, aromadendrene, (E)-β-farnesene, α-humulene, allo-aromadendrene, γ-muurolene, farnesene, germacrene D, β-selinene, α-selinene, α-muurolene, γ-cadinene, calamenene, δ-cadinene, cadina-1,4-diene, α-calacorene, α-cadinene, β-calacorene, germacrene B, cadalene, epi-cubebol, cubebol, spathulenol, globulol, humulene oxide II, epi-α-cadinol, epi-α-muurolol, α-muurolol, selin-11-en-4-α-ol, α-cadinol and juniper camphor. The stem bark oil was found to comprise largely of sesquiterpene hydrocarbons (79.6%), with δ-cadinene (31.3%) as the major constituents. While the oxygenated sesquiterpenes were found to be abundant in the root with cubebols (32.5%) and cadinols (17.9%) as the major constituents (Boyom et al. 2004).\nPhytochemical investigations revealed that P. kotschyi contains a variety of pharmacological active secondary metabolites. A total of 32 compounds have so far reported to have been isolated from the plant which mainly include limonoids, triterpenes, and flavonoids.\nLimonoids are modified triterpenes which are highly oxygenated and have a typical furanylsteroid as their core structure (Roy and Saraf 2006). They are also known as tetraterpenoids. Limonoids are rare natural products which occur mainly in plants of Meliaceae and Rutaceae families and less frequently in the Cneoraceae family (Tan and Luo 2011).\nSeveral phragmalin-type limonoid orthoacetates (Figure 1) have reportedly been isolated from the of roots of this plant, namely, kotschyins A–H (1–8) (Hay et al. 2007; Dal Piaz et al. 2012). These compounds are complex with a very high degree of oxidation and rearrangement as compared to the parent limonoid structure.\nPhragmalin-type limonoid orthoacetates isolated from the roots of P. kotschyi.\nOther limonoid derivatives (Figure 2) found in the roots and stem bark of P. kotschyi are 7-deacetylgedunin (9), 7-deacetyl-7-oxogedunin (10) (Hay et al. 2007), 1α,7α epoxy-gedunin (11), gedunin (12) (Dal Piaz et al. 2012), kostchyienones A (13) and B (14), andirobin (15) methylangolensate (16) (Sidjui et al. 2018). Additional limonoids derivatives (Figure 3) that were isolated from the P. kotschyi bark include pseudrelones A–C (17–19) (Taylor 1979). The pseudrelones also have a phragmalin nucleus with orthoacetate function but they have a lesser degree of oxidation than the kotschyins.\nLimonoid derivatives isolated from the roots of P. kotschyi.\nLimonoid derivatives isolated from the bark of P. kotschyi.\nThe steroids isolated from this plant (Figure 4) include odoratone (20), spicatin (21), 11-acetil-odoratol (22) (Dal Piaz et al. 2012), β-sitosterol (23), 3-O-β-d-glucopyranosyl β-sitosterol (24) stigmasterol (25), 3-O-β-d-glucopyranosyl stigmasterol (26) betulinic acid (27) (Sidjui et al. 2018). Three secotirucallane triterpenes were also isolated from the stem bark of P. kotschyi. These include, 4-hydroxy-3,4-secotirucalla-7,24-dien-3,21-dioic acid (28), 3,4-secotirucalla-4(29),7,24-trien-3,21-dioic acid (29) and 3-methyl ester 3,4-secotirucalla-4(28),7,24-trien-3,21-dioic (30) (Figure 4) (Mambou et al. 2018). Two flavonoids, namely, 3,6,8-trihydroxy-2-(3,4-dihydroxylphenyl)-4H-chrom-4-one (31) and quercetin, 3,4′,7-trimethyl ether (32) (Figure 4) have also been isolated from the roots of this plant (Sidjui et al. 2018).\nTriterpenes and flavonoids isolated from the roots of P. kotschyi.\nThe GCMS analysis of essential oils from root and stem of P. 
kotschyi indicated that both oils contain mainly sesquiterpenoids. These include, α-cubebene, α-copaene, β-elemene, β-caryophyllene, trans-α-bergamotene, aromadendrene, (E)-β-farnesene, α-humulene, allo-aromadendrene, γ-muurolene, farnesene, germacrene D, β-selinene, α-selinene, α-muurolene, γ-cadinene, calamenene, δ-cadinene, cadina-1,4-diene, α-calacorene, α-cadinene, β-calacorene, germacrene B, cadalene, epi-cubebol, cubebol, spathulenol, globulol, humulene oxide II, epi-α-cadinol, epi-α-muurolol, α-muurolol, selin-11-en-4-α-ol, α-cadinol and juniper camphor. The stem bark oil was found to comprise largely of sesquiterpene hydrocarbons (79.6%), with δ-cadinene (31.3%) as the major constituents. While the oxygenated sesquiterpenes were found to be abundant in the root with cubebols (32.5%) and cadinols (17.9%) as the major constituents (Boyom et al. 2004).\nPharmacological activity The ethnomedicinal claims for the efficacy of P. kotschyi in the treatment of various diseases have been confirmed by numerous relevant scientific studies. Several pharmacological investigations have been carried out to confirm the traditional medicinal uses of the roots, stem bark, and leaves of P. kotschyi. A wide range of pharmacological activities such as analgesic, antipyretic, anthelminthic, antimalaria, anti-leishmaniasis, hepatoprotective, antioxidant, and antimicrobial, have been reported by researchers so far.\nThe ethnomedicinal claims for the efficacy of P. kotschyi in the treatment of various diseases have been confirmed by numerous relevant scientific studies. Several pharmacological investigations have been carried out to confirm the traditional medicinal uses of the roots, stem bark, and leaves of P. kotschyi. A wide range of pharmacological activities such as analgesic, antipyretic, anthelminthic, antimalaria, anti-leishmaniasis, hepatoprotective, antioxidant, and antimicrobial, have been reported by researchers so far.\nAnti-inflammatory, analgesic and antipyretic activities Inflammation is an adaptive response that is triggered by noxious stimuli or conditions, such as tissue injury and infection (Medzhitov 2008; Ahmed 2011). Inflammatory response involves the secretion of several chemical mediators and signalling molecules such as nitric oxide (NO), and proinflammatory cytokines, including tumour necrosis, factor-α (TNF-α), interferon-γ (IFNγ), lipopolysaccharides (LPS) and interleukins (Medzhitov 2008). Even though inflammatory response is meant to be a beneficial process of restoring homeostasis, it is often associated with some disorders like pain and pyrexia due to the secretion of the chemical mediators (Bielefeldt et al. 2009; Garami et al. 2018). Chronic secretion of proinflammatory cytokines is also associated with development of diseases such as cancer and diabetes. Hence, anti-inflammatory agents represent an important class of medicines. Extracts and phytoconstituents of P. kotschyi have been reported to possess anti-inflammatory, analgesic and antipyretic properties.\nThe administration of methanol crude extract of P. kotschyi stem bark at a dose of 200 mg/kg/day and its butanol and chloroform fractions have been shown to produce significant analgesic activity when evaluated with a mice model. The extracts and fractions decreased the number of writhes by 88–92% during the acetic acid induced writhing assay (Abubakar et al. 2016). Akuodor et al. (2013) investigated the antipyretic activity of ethanol extract of P. 
kotschyi leaves on yeast and amphetamine induced hyperpyrexia in rats. They reported that the leaf extract (50, 100 and 150 mg/kg i.p.) displayed a significant (p < 0.05) dose-dependent decrease in pyrexia.\nScientific investigations have shown that 7-deacetylgedunin (9) had significant anti-inflammatory activity. Compound 9 was reported to significantly inhibit lipopolysaccharide induced nitric oxide in murine macrophage RAW 264.7 cells with an IC50 of 4.9 ± 0.1 μM. It also produced the downregulation of mRNA and protein expression of inducible nitric oxide synthase (iNOS) at a dose of 10 µM (Sarigaputi et al. 2015). These findings suggest that compound 9 produces its anti-inflammatory effect through the modulation of NO production. Chen et al. (2017) investigated the anti-inflammatory activity of compound 9 in C57BL/6 mice. Their result showed that its intraperitoneal administration at a dose of 5 mg/kg body weight for two consecutive days significantly decreased LPS-induced mice mortality by 40%. The above findings demonstrate that compound 9 is a promising anti-inflammatory agent from this plant. The anti-inflammatory effect of this compound perhaps accounts for the analgesic and antipyretic properties of the P. kotschyi extracts.\nInflammation is an adaptive response that is triggered by noxious stimuli or conditions, such as tissue injury and infection (Medzhitov 2008; Ahmed 2011). Inflammatory response involves the secretion of several chemical mediators and signalling molecules such as nitric oxide (NO), and proinflammatory cytokines, including tumour necrosis, factor-α (TNF-α), interferon-γ (IFNγ), lipopolysaccharides (LPS) and interleukins (Medzhitov 2008). Even though inflammatory response is meant to be a beneficial process of restoring homeostasis, it is often associated with some disorders like pain and pyrexia due to the secretion of the chemical mediators (Bielefeldt et al. 2009; Garami et al. 2018). Chronic secretion of proinflammatory cytokines is also associated with development of diseases such as cancer and diabetes. Hence, anti-inflammatory agents represent an important class of medicines. Extracts and phytoconstituents of P. kotschyi have been reported to possess anti-inflammatory, analgesic and antipyretic properties.\nThe administration of methanol crude extract of P. kotschyi stem bark at a dose of 200 mg/kg/day and its butanol and chloroform fractions have been shown to produce significant analgesic activity when evaluated with a mice model. The extracts and fractions decreased the number of writhes by 88–92% during the acetic acid induced writhing assay (Abubakar et al. 2016). Akuodor et al. (2013) investigated the antipyretic activity of ethanol extract of P. kotschyi leaves on yeast and amphetamine induced hyperpyrexia in rats. They reported that the leaf extract (50, 100 and 150 mg/kg i.p.) displayed a significant (p < 0.05) dose-dependent decrease in pyrexia.\nScientific investigations have shown that 7-deacetylgedunin (9) had significant anti-inflammatory activity. Compound 9 was reported to significantly inhibit lipopolysaccharide induced nitric oxide in murine macrophage RAW 264.7 cells with an IC50 of 4.9 ± 0.1 μM. It also produced the downregulation of mRNA and protein expression of inducible nitric oxide synthase (iNOS) at a dose of 10 µM (Sarigaputi et al. 2015). These findings suggest that compound 9 produces its anti-inflammatory effect through the modulation of NO production. Chen et al. 
(2017) investigated the anti-inflammatory activity of compound 9 in C57BL/6 mice. Their result showed that its intraperitoneal administration at a dose of 5 mg/kg body weight for two consecutive days significantly decreased LPS-induced mice mortality by 40%. The above findings demonstrate that compound 9 is a promising anti-inflammatory agent from this plant. The anti-inflammatory effect of this compound perhaps accounts for the analgesic and antipyretic properties of the P. kotschyi extracts.\nAntiparasitic activity Parasitic diseases are among the foremost health problems today, especially in tropical countries of Africa and Asia. Diseases such as malaria, leishmaniasis, trypanosomiasis and helminthiasis affect millions of people each year causing high morbidity and mortality, particularly, in developing countries (Hotez and Kamath 2009). Hence, there is an urgent need for new drugs to treat and control of these diseases.\nExtracts obtained from different parts of P. kotschyi have been reported to possess activity against several human parasites. Ahua et al. (2007) investigated the anti-leishmaniasis activity of P. kotschyi including several other plants against Leishmania major. The dichloromethane extract of P. kotschyi roots (at a dose of 75 µg/mL) exhibited a marked activity (>90% mortality) against the intracellular form of the parasite which is pathogenically significant for humans.\nIn another study, the anthelminthic activity of an ethanol extract of P. kotschyi roots against Haemonchus contortus (a pathogenic nematode found in small ruminants) was evaluated. The researchers discovered that the ethanol extract possessed larvicidal activity against the helminth with a LC100 of 0.02 µg/mL (Koné et al. 2005). The aqueous stem bark extract of this plant (50 mg/mL) has also been demonstrated to exert anthelminthic activity against Lumbricus terrestris with 25.4 min death time (Ukwubile et al. 2017).\nThe antimalarial effect of P. kotschyi on malaria parasite has been reported in several research manuscripts. Christian et al. (2015) investigated the suppressive and curative effect of ethanol extract of P. kotschyi leaves against malaria in Plasmodium berghei berghei infected mice. The results obtained showed that oral administration of the extract (100–400 mg/kg/day) exhibited a significant antimalarial effect which is evident by the suppression of parasitemia and prolong life of infected animals. In an another study, methanol extract of the P. kotschyi leaves at an oral dose of 200 mg/kg/day was found to reduce parasitemia by 90.70% in P. berghei berghei infected mice after four consecutive days of treatment (Dawet and Stephen 2014). However, the ethanol and aqueous extracts of the P. kotschyi stem bark exhibited lower activity against the malaria parasite (39.43% and 28.36% reduction in parasitemia, respectively) (Dawet and Yakubu 2014).\nThe limonoid derivatives 9 and 10 were reported to display significant in vitro activity against chloroquine-resistant Plasmodium falciparum with IC50 values of 1.36 and 1.77 µg/mL, respectively (Hay et al. 2007). The two compounds also displayed significant antiparasitic activity against Leishmania donovani, Trypanosoma brucei rhodesiense with a low-range IC50 of 0.99–3.4 µg/mL. In contrast, the orthoacetatate kotschyin A was found to be inactive against all the tested parasites (Hay et al. 2007). In related work, Sidjui et al. (2018) evaluated the in vitro antiplasmodial activity of 14 compounds isolated from P. kotschyi. 
Their findings showed that the limonoid derivatives 9, 10, 13, 14 and 15 exhibited very significant activity against both chloroquine-sensitive (Pf3D7) and chloroquine-resistant (PfINDO) strains of genus Plasmodium with IC50 values ranging from 0.75 to 9.05 µg/mL.\nSteverding et al. (2020) investigated the trypanocidal and leishmanicidal activities of six limonoids, namely, 9, 10, 13, 14, 15 and 16, against bloodstream forms of Trypanosoma brucei and promastigotes of Leishmania major. All the six compounds showed anti-trypanosomal activity with IC50 values ranging from 3.18 to 14.5 µM. Compounds 9, 10, 13 and 14 also displayed leishmanicidal activity with IC50 of 11.60, 7.63, 2.86 and 14.90 µM, respectively, while 15 and 16 were inactive.\nThe antiplasmodial, trypanocidal, and leishmanicidal activities of these compounds provide justification for the use of crude extract of P. kotschyi in the traditional treatment of malaria and other parasitic infectious diseases.\nParasitic diseases are among the foremost health problems today, especially in tropical countries of Africa and Asia. Diseases such as malaria, leishmaniasis, trypanosomiasis and helminthiasis affect millions of people each year causing high morbidity and mortality, particularly, in developing countries (Hotez and Kamath 2009). Hence, there is an urgent need for new drugs to treat and control of these diseases.\nExtracts obtained from different parts of P. kotschyi have been reported to possess activity against several human parasites. Ahua et al. (2007) investigated the anti-leishmaniasis activity of P. kotschyi including several other plants against Leishmania major. The dichloromethane extract of P. kotschyi roots (at a dose of 75 µg/mL) exhibited a marked activity (>90% mortality) against the intracellular form of the parasite which is pathogenically significant for humans.\nIn another study, the anthelminthic activity of an ethanol extract of P. kotschyi roots against Haemonchus contortus (a pathogenic nematode found in small ruminants) was evaluated. The researchers discovered that the ethanol extract possessed larvicidal activity against the helminth with a LC100 of 0.02 µg/mL (Koné et al. 2005). The aqueous stem bark extract of this plant (50 mg/mL) has also been demonstrated to exert anthelminthic activity against Lumbricus terrestris with 25.4 min death time (Ukwubile et al. 2017).\nThe antimalarial effect of P. kotschyi on malaria parasite has been reported in several research manuscripts. Christian et al. (2015) investigated the suppressive and curative effect of ethanol extract of P. kotschyi leaves against malaria in Plasmodium berghei berghei infected mice. The results obtained showed that oral administration of the extract (100–400 mg/kg/day) exhibited a significant antimalarial effect which is evident by the suppression of parasitemia and prolong life of infected animals. In an another study, methanol extract of the P. kotschyi leaves at an oral dose of 200 mg/kg/day was found to reduce parasitemia by 90.70% in P. berghei berghei infected mice after four consecutive days of treatment (Dawet and Stephen 2014). However, the ethanol and aqueous extracts of the P. 
Antimicrobial activity

Antimicrobial agents are among the most commonly used medications. The prevalence of antimicrobial resistance in recent years has led to renewed efforts to discover newer antimicrobial agents for the treatment of infectious diseases (Hobson et al. 2021). Extracts of P. kotschyi have been reported to display appreciable activity against some pathogenic microorganisms.

Ayo et al. (2010) investigated the antimicrobial activity of petroleum ether, ethyl acetate and methanol extracts of P. kotschyi leaves against Staphylococcus aureus, Salmonella typhi, Streptococcus pyogenes, Candida albicans and Escherichia coli. The ethyl acetate extract exhibited antibacterial activity against all the tested organisms, with MIC values of 10–20 mg/mL. In a similar study, the crude methanol extract of the stem bark of this plant was shown to exhibit good activity against a panel of pathogenic bacteria and fungi, including methicillin-resistant S. aureus (MRSA), S. aureus, S. pyogenes, Corynebacterium ulcerans, Bacillus subtilis, E. coli, S. typhi, Shigella dysenteriae, Klebsiella pneumoniae, Neisseria gonorrhoeae, Pseudomonas aeruginosa, C. albicans, C. krusei and C. tropicalis, with MIC values of 3.75–10.0 mg/mL (Alhassan et al. 2014). The methanol extract of the woody stem was also found to possess antifungal activity against C. krusei ATCC 6825, with an MIC of 6.25 mg/mL (Adeniyi et al. 2010).

The secotirucallane triterpenes (compounds 28, 29 and 30) isolated from the bark of P. kotschyi have been reported to possess significant antibacterial activity against Staphylococcus aureus ATCC 25923, Escherichia coli S2(1) and Pseudomonas aeruginosa, with MIC values ranging from 6 to 64 µg/mL. Compound 29 exhibited the highest antibacterial activity while 30 had the lowest (Mambou et al. 2018). The presence of these compounds is likely responsible for the antimicrobial property of P. kotschyi extracts and justifies the ethnomedicinal use of this plant as a chewing stick for tooth cleaning and the enhancement of oral health.
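MIC values such as those above are conventionally read from two-fold broth-microdilution series. The minimal sketch below shows the bookkeeping involved, assuming growth is scored by a simple optical-density cutoff; the threshold, concentrations and readings are hypothetical and do not reproduce the cited authors' protocols.

    def read_mic(concs_mg_per_ml, od_readings, od_threshold=0.05):
        """Return the lowest concentration at which this well and all
        higher concentrations show no growth (OD below threshold)."""
        mic = None
        for conc, od in sorted(zip(concs_mg_per_ml, od_readings), reverse=True):
            if od < od_threshold:
                mic = conc   # growth still inhibited; keep walking down the series
            else:
                break        # growth resumes below this concentration
        return mic

    concs = [1.25, 2.5, 5.0, 10.0, 20.0]   # mg/mL, two-fold dilution series
    ods = [0.41, 0.38, 0.22, 0.03, 0.02]   # hypothetical absorbance readings
    print(read_mic(concs, ods))            # -> 10.0, i.e. MIC = 10 mg/mL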
Antioxidant and hepatoprotective activities

The ethanol extract of P. kotschyi stem bark has been reported to possess DPPH radical scavenging activity, with an IC50 of 4 µg/mL (Alain et al. 2014).

A study on the hepatoprotective activity of methanol and aqueous extracts of P. kotschyi leaves revealed that both extracts (at a dose of 750 mg/kg/day) were able to protect the liver against paracetamol-induced oxidative damage (Eleha et al. 2016). A similar study by Nchouwet et al. (2018) showed that two weeks of pre-treatment with aqueous and methanol extracts of P. kotschyi stem bark (150 mg/kg/day) significantly suppressed the development of paracetamol-induced hepatotoxicity in experimental rats.
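An IC50 such as the 4 µg/mL quoted above follows from the standard DPPH formula, % scavenging = (A_control − A_sample) / A_control × 100, with the IC50 read off by interpolation at 50% scavenging. A minimal Python sketch with hypothetical absorbance readings:

    import numpy as np

    a_control = 0.80                                   # DPPH plus solvent only
    concs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])       # ug/mL, hypothetical
    a_sample = np.array([0.66, 0.55, 0.39, 0.22, 0.10])

    scavenging = (a_control - a_sample) / a_control * 100.0

    # IC50 by linear interpolation of % scavenging against log10(concentration)
    ic50 = 10 ** np.interp(50.0, scavenging, np.log10(concs))
    print(round(ic50, 1))                              # ~3.8 ug/mL for these readings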
Hypoglycaemic and digestive enzyme inhibitory activities

Diabetes mellitus is a disorder of glucose metabolism resulting from insulin insufficiency or dysfunction. It is one of the major non-communicable diseases, affecting millions of people globally. Scientific investigations have revealed that P. kotschyi extracts possess some antidiabetic properties. Georgewill and Georgewill (2019) investigated the hypoglycaemic effect of an aqueous extract of P. kotschyi leaves in alloxan-induced diabetic rats. Oral administration of the extract (200 mg/kg/day for 14 days) caused a significant hypoglycaemic effect in the experimental animals. The ethanol extract of the roots of this plant was also reported to exhibit inhibitory activity against α-glucosidase (IC50 = 5.0 ± 0.2 μg/mL), an important digestive enzyme targeted in diabetes treatment (Bothon et al. 2013).
Antiproliferative activity

Cancer is characterised by the abnormal, rapid proliferation of cells that invade and destroy other tissues (Alhassan et al. 2018). It is a major public health problem throughout the world. Pharmacological studies have shown that P. kotschyi possesses anticancer potential.

Kassim et al. (2015) investigated the antiproliferative activity and apoptosis-inducing effect of an aqueous extract of P. kotschyi roots against a panel of prostate cancer cell lines, namely PC3, DU-145, LNCaP and CWR-22. Results from the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) assay showed that all four cell lines exhibited a dose-dependent decrease in proliferation and viability after treatment with the aqueous extract, with IC50 values ranging from 12 to 42 µg/mL. The LNCaP, PC3, DU-145 and CWR-22 cell lines had 42, 35, 33 and 24% induced apoptotic cells, respectively, after treatment with the same extract. Both the antiproliferative and the apoptosis assays indicated that the LNCaP cells were the most sensitive to the P. kotschyi extract.

Heat shock protein 90 (Hsp90) is a molecular chaperone involved in the folding, activation and assembly of several proteins, including oncoproteins such as HER2, Survivin, EGFR, Akt, Raf-1 and mutant p53 (Calderwood et al. 2006; Dal Piaz et al. 2012). Hsp90 is often overexpressed in cancer cells and has been demonstrated to play a vital role in tumour progression, malignancy and resistance to chemotherapeutic agents (Zhang et al. 2019). Hence, Hsp90 is now considered a viable molecular target for the development of new anticancer drugs (Gupta et al. 2019). Phytoconstituents of P. kotschyi have been shown to possess significant Hsp90 inhibitory activity. Dal Piaz et al. (2012) investigated the Hsp90-binding capability of several compounds using a surface plasmon resonance (SPR) approach. They found that the limonoid orthoacetates (1–6) displayed good binding capability to the protein, with compound 4 being the most effective. Compound 4 also exhibited significant antiproliferative activity against three cancer cell lines, namely PC-3 (human prostate cancer), A2780 (human ovarian carcinoma) and MCF-7 (human breast adenocarcinoma), with IC50 values of 62 ± 0.4, 38 ± 0.7 and 25 ± 1.2 µM, respectively. These findings suggest that Hsp90 inhibition is a mechanism of action for the antiproliferative effects of the limonoid orthoacetates from P. kotschyi, and they provide a scientific basis for the future development of new anticancer agents from this plant, whether as a standardised herbal preparation or as a pure chemical entity.
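IC50 values such as those from the MTT assay are normally estimated by fitting a sigmoidal dose-response model to the viability data. The sketch below fits a four-parameter logistic (Hill) curve with SciPy; the viability readings are hypothetical and are not Kassim et al.'s raw data.

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, top, bottom, ic50, hill):
        """Four-parameter logistic: viability as a function of concentration."""
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

    conc = np.array([3.1, 6.2, 12.5, 25.0, 50.0, 100.0])       # ug/mL
    viability = np.array([95.0, 84.0, 62.0, 38.0, 21.0, 9.0])  # % of control

    params, _ = curve_fit(four_pl, conc, viability, p0=[100.0, 0.0, 20.0, 1.0])
    top, bottom, ic50, hill = params
    print(f"IC50 = {ic50:.1f} ug/mL, Hill slope = {hill:.2f}")  # ~18 ug/mL here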
Antidiarrheal activity

The treatment of diarrhoea is one of the common ethnomedicinal uses of P. kotschyi. To verify this claim, Essiet et al. (2016) investigated the antidiarrheal property of an ethanol extract of P. kotschyi leaves in Wistar albino rats in which diarrhoea had been induced with castor oil. Oral administration of the extract (100, 200 and 400 mg/kg) produced significant (p < 0.05) dose-dependent inhibition of the induced diarrhoea (67–91%). The administered doses of the extract also decreased intestinal transit time by 57–66%, while intestinal fluid accumulation was decreased by 68–82%. These findings support the traditional use of this plant in the treatment of diarrhoea.
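The percentage figures in such protection assays reduce to the standard formula, inhibition (%) = (control − treated) / control × 100. A short Python illustration, with hypothetical group means chosen only to land within the reported 67–91% range:

    def percent_inhibition(control_mean: float, treated_mean: float) -> float:
        """Percent reduction relative to the untreated control group."""
        return (control_mean - treated_mean) / control_mean * 100.0

    wet_faeces_control = 22.0  # mean count, castor-oil control group (assumed)
    wet_faeces_treated = 2.0   # mean count at the highest dose (assumed)
    print(round(percent_inhibition(wet_faeces_control, wet_faeces_treated), 1))  # 90.9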
Toxicity

There is a general perception that plant-based medicinal products are natural and thus very safe for human consumption. This notion is mistaken, however, because several plants have been shown to produce a wide range of adverse reactions, some of which are capable of causing serious injuries, life-threatening conditions and even death (Ekor 2014). It is therefore of paramount importance to investigate the toxicity profiles of traditional medicinal plants, as well as their phytoconstituents, in order to establish their safety. The study of toxicity is an essential component of the new drug development process (Hornberg et al. 2014).

Some toxicological studies have been carried out on extracts of P. kotschyi. Nchouwet et al. (2017) investigated the acute and sub-chronic toxicity of a P. kotschyi stem bark aqueous extract in albino rats. In the acute toxicity study, the LD50 was found to be greater than 2000 mg/kg body weight. Sub-chronic administration of the aqueous extract at a dose of 400 mg/kg body weight/day for 28 days caused a significant increase in total protein and HDL-cholesterol, with a concomitant decrease in LDL-cholesterol, while the other biochemical and hematological parameters remained within the normal range. However, histological examination revealed inflammation and necrosis in the kidney and liver tissues of animals treated with 400 mg/kg body weight/day of the extract, while tissue samples from animals treated at lower doses remained normal. This implies that the extract may exert some toxic effect on the kidney and liver at 400 mg/kg body weight while being relatively safe at lower doses.

Kabiru et al. (2015) conducted a sub-chronic toxicity evaluation of a crude methanol extract of the leaves in Sprague-Dawley rats at doses of 40, 200 and 1000 mg/kg body weight/day for 4 weeks. The extract did not produce any significant alteration in hematological or biochemical parameters when compared with standard controls, implying that it was relatively non-toxic at the tested doses. Ezeokpo et al. (2020) carried out a similar study with an ethanol extract of P. kotschyi leaves in Wistar rats; the extract (400 mg/kg body weight/day) did not produce any significant derangement in hematological or biochemical parameters after 28 days of treatment. These findings indicate that the methanol and ethanol extracts of P. kotschyi leaves are relatively non-toxic at higher doses compared with the aqueous stem bark extract, although more detailed research is still required to corroborate this conclusion. While most of the pharmacological activities and chemical constituents reported for this plant have been obtained from leaf extracts, the toxicity of the ethanol, methanol and chloroform extracts of the roots and stem bark of P. kotschyi has yet to be evaluated and reported. Hence, further toxicity studies on the different extracts, fractions and chemical constituents of the root and stem bark of P. kotschyi are still required to fully establish the safety of the plant.
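For context on such mg/kg doses, rodent figures are often projected to a human equivalent dose (HED) by body-surface-area scaling, HED = animal dose × (animal Km / human Km), using the standard Km factors tabulated in the FDA (2005) guidance. The sketch below is purely illustrative and makes no safety claim about P. kotschyi.

    # Standard body-surface-area conversion factors (Km) from the FDA guidance
    KM = {"mouse": 3.0, "rat": 6.0, "human": 37.0}

    def human_equivalent_dose(dose_mg_per_kg: float, species: str) -> float:
        """Project an animal dose to a human equivalent dose in mg/kg."""
        return dose_mg_per_kg * KM[species] / KM["human"]

    # e.g. the 1000 mg/kg/day rat dose tolerated in the Kabiru et al. study:
    print(round(human_equivalent_dose(1000.0, "rat"), 1))  # ~162.2 mg/kg in humans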
[ "Introduction", "Methodology", "Geographical distribution", "Ethnomedicinal uses", "Phytochemistry", "Pharmacological activity", "Anti-inflammatory, analgesic and antipyretic activities", "Antiparasitic activity", "Antimicrobial activity", "Antioxidant and hepatoprotective activities", "Hypoglycaemic and digestive enzyme inhibitory activities", "Antiproliferative activity", "Antidiarrheal activity", "Toxicity", "Conclusions" ]
[ "Traditional medicinal plants have been an essential source of remedy for various illnesses since ancient times. Preparations of plant materials such as infusion, decoction, powder, or paste have been used in various traditional practices in different parts of the world. People living in Africa and Asia make use of herbal medications to supplement the conventional medicine practice (Ekor 2014). There has been an increasing interest in the usage of herbal medicines in recent years. About 80% of the world’s population is using phytotherapeutic medicines (Balekundri and Mannur 2020). The WHO estimated that the size of the global market for herbal products was USD 60 billion in the year 2000 and this is expected to grow 7% per annum towards USD 5 trillion by the year 2050 (Tan et al. 2020). Several analyses have clearly verified traditional claims of numerous medicinal plants leading to the commercialisation of the many herbal products and their nomination as leads within the development of pharmaceutical medication (Williams 2021). Many clinically useful drugs have been discovered based on the knowledge derived from the ethnomedicinal applications of various herbal materials (Balunas and Kinghorn 2005).\nPseudocedrela kotschyi (Schweinf.) Harms (Meliaceae) is an important medicinal plant found in the tropical and subtropical countries of Africa. This plant has been extensively used in the African traditional medicine system for the treatment of a variety of diseases, particularly as analgesic, antimicrobial, antimalarial, anthelminthic, and antidiarrheal agents. The main focus of this review was to establish the ethnopharmacological uses and medicinal characteristics of P. kotschyi and highlight its potential as future drug for the treatment of various tropical diseases.", "Scientific manuscripts on P. kotschyi were retrieved from different scientific search engines. Literature search was carried out on PUBMED using Pseudocedrela kotschyi as key words. Additional literature searches on Medline, EMBASE, Science Direct and Google scholar databases were done using pharmacological activity, chemical constituents and traditional uses of Pseudocedrela kotschyi as search terms. Literature published on the topic in English language from inception until September, 2020 were collected, analysed and an up-to-date review on the medicinal potential of P. kotschyi was compiled.\nGeographical distribution P. kotschyi (common name: dry zone cedar) is a medicinal plant. Other common names of P. kotschyi are Tuna (Hausa) and Emi gbegi in Yoruba. It is found in tropical and subtropical countries of Africa which include Nigeria, Cote d’Ivoire, Senegal, Ghana, Democratic Republic of Congo, and Uganda. The plant often grows as a medium sized tree of about 12–20 ft high (Ayo et al. 2010; Alain et al. 2014; Alhassan et al. 2014). Below is the taxonomical classification of P. kotschyi (Hassler 2019).\n\n\nClassification\n\n\nKingdom  Plantae\nPhylum  Tracheophyta\nClass   Magnoliopsida\nOrder    Sapindales\nFamily   Meliaceae\nGenus    Pseudocedrela\nSpecies   kotschyi\nP. kotschyi (common name: dry zone cedar) is a medicinal plant. Other common names of P. kotschyi are Tuna (Hausa) and Emi gbegi in Yoruba. It is found in tropical and subtropical countries of Africa which include Nigeria, Cote d’Ivoire, Senegal, Ghana, Democratic Republic of Congo, and Uganda. The plant often grows as a medium sized tree of about 12–20 ft high (Ayo et al. 2010; Alain et al. 2014; Alhassan et al. 2014). 
Below is the taxonomical classification of P. kotschyi (Hassler 2019).\n\n\nClassification\n\n\nKingdom  Plantae\nPhylum  Tracheophyta\nClass   Magnoliopsida\nOrder    Sapindales\nFamily   Meliaceae\nGenus    Pseudocedrela\nSpecies   kotschyi\nEthnomedicinal uses Different parts of P. kotschyi are used in the traditional treatment of various diseases. The root is used in the treatment of leprosy (Pedersen et al. 2009), epilepsy, dementia (Kantati et al. 2016), diabetes (Salihu Shinkafi et al. 2015), malaria, abdominal pain, diarrhoea (Ahua et al. 2007), toothache and gingivitis (Tapsoba and Deschamps 2006). The root is also used as a chewing stick for tooth cleaning and enhancement of oral health (Wolinsky and Sote 1984; Olabanji et al. 2007; Adeniyi et al. 2010). The leaf is used in the treatment of female infertility (Olabanji et al. 2007), intestinal worms (Koné et al. 2005) and malaria (Asase et al. 2005). The stem bark is used in the treatment of cancer (Saidu et al. 2015), infantile dermatitis (Erinoso et al. 2016), stomach ache (Asase et al. 2005), toothache (Kayode and Sanni 2016), high blood-pressure, skin diseases, and haemorrhoids (Nadembega et al. 2011).\nDifferent parts of P. kotschyi are used in the traditional treatment of various diseases. The root is used in the treatment of leprosy (Pedersen et al. 2009), epilepsy, dementia (Kantati et al. 2016), diabetes (Salihu Shinkafi et al. 2015), malaria, abdominal pain, diarrhoea (Ahua et al. 2007), toothache and gingivitis (Tapsoba and Deschamps 2006). The root is also used as a chewing stick for tooth cleaning and enhancement of oral health (Wolinsky and Sote 1984; Olabanji et al. 2007; Adeniyi et al. 2010). The leaf is used in the treatment of female infertility (Olabanji et al. 2007), intestinal worms (Koné et al. 2005) and malaria (Asase et al. 2005). The stem bark is used in the treatment of cancer (Saidu et al. 2015), infantile dermatitis (Erinoso et al. 2016), stomach ache (Asase et al. 2005), toothache (Kayode and Sanni 2016), high blood-pressure, skin diseases, and haemorrhoids (Nadembega et al. 2011).\nPhytochemistry Phytochemical investigations revealed that P. kotschyi contains a variety of pharmacological active secondary metabolites. A total of 32 compounds have so far reported to have been isolated from the plant which mainly include limonoids, triterpenes, and flavonoids.\nLimonoids are modified triterpenes which are highly oxygenated and have a typical furanylsteroid as their core structure (Roy and Saraf 2006). They are also known as tetraterpenoids. Limonoids are rare natural products which occur mainly in plants of Meliaceae and Rutaceae families and less frequently in the Cneoraceae family (Tan and Luo 2011).\nSeveral phragmalin-type limonoid orthoacetates (Figure 1) have reportedly been isolated from the of roots of this plant, namely, kotschyins A–H (1–8) (Hay et al. 2007; Dal Piaz et al. 2012). These compounds are complex with a very high degree of oxidation and rearrangement as compared to the parent limonoid structure.\nPhragmalin-type limonoid orthoacetates isolated from the roots of P. kotschyi.\nOther limonoid derivatives (Figure 2) found in the roots and stem bark of P. kotschyi are 7-deacetylgedunin (9), 7-deacetyl-7-oxogedunin (10) (Hay et al. 2007), 1α,7α epoxy-gedunin (11), gedunin (12) (Dal Piaz et al. 2012), kostchyienones A (13) and B (14), andirobin (15) methylangolensate (16) (Sidjui et al. 2018). Additional limonoids derivatives (Figure 3) that were isolated from the P. 
kotschyi bark include pseudrelones A–C (17–19) (Taylor 1979). The pseudrelones also have a phragmalin nucleus with orthoacetate function but they have a lesser degree of oxidation than the kotschyins.\nLimonoid derivatives isolated from the roots of P. kotschyi.\nLimonoid derivatives isolated from the bark of P. kotschyi.\nThe steroids isolated from this plant (Figure 4) include odoratone (20), spicatin (21), 11-acetil-odoratol (22) (Dal Piaz et al. 2012), β-sitosterol (23), 3-O-β-d-glucopyranosyl β-sitosterol (24) stigmasterol (25), 3-O-β-d-glucopyranosyl stigmasterol (26) betulinic acid (27) (Sidjui et al. 2018). Three secotirucallane triterpenes were also isolated from the stem bark of P. kotschyi. These include, 4-hydroxy-3,4-secotirucalla-7,24-dien-3,21-dioic acid (28), 3,4-secotirucalla-4(29),7,24-trien-3,21-dioic acid (29) and 3-methyl ester 3,4-secotirucalla-4(28),7,24-trien-3,21-dioic (30) (Figure 4) (Mambou et al. 2018). Two flavonoids, namely, 3,6,8-trihydroxy-2-(3,4-dihydroxylphenyl)-4H-chrom-4-one (31) and quercetin, 3,4′,7-trimethyl ether (32) (Figure 4) have also been isolated from the roots of this plant (Sidjui et al. 2018).\nTriterpenes and flavonoids isolated from the roots of P. kotschyi.\nThe GCMS analysis of essential oils from root and stem of P. kotschyi indicated that both oils contain mainly sesquiterpenoids. These include, α-cubebene, α-copaene, β-elemene, β-caryophyllene, trans-α-bergamotene, aromadendrene, (E)-β-farnesene, α-humulene, allo-aromadendrene, γ-muurolene, farnesene, germacrene D, β-selinene, α-selinene, α-muurolene, γ-cadinene, calamenene, δ-cadinene, cadina-1,4-diene, α-calacorene, α-cadinene, β-calacorene, germacrene B, cadalene, epi-cubebol, cubebol, spathulenol, globulol, humulene oxide II, epi-α-cadinol, epi-α-muurolol, α-muurolol, selin-11-en-4-α-ol, α-cadinol and juniper camphor. The stem bark oil was found to comprise largely of sesquiterpene hydrocarbons (79.6%), with δ-cadinene (31.3%) as the major constituents. While the oxygenated sesquiterpenes were found to be abundant in the root with cubebols (32.5%) and cadinols (17.9%) as the major constituents (Boyom et al. 2004).\nPhytochemical investigations revealed that P. kotschyi contains a variety of pharmacological active secondary metabolites. A total of 32 compounds have so far reported to have been isolated from the plant which mainly include limonoids, triterpenes, and flavonoids.\nLimonoids are modified triterpenes which are highly oxygenated and have a typical furanylsteroid as their core structure (Roy and Saraf 2006). They are also known as tetraterpenoids. Limonoids are rare natural products which occur mainly in plants of Meliaceae and Rutaceae families and less frequently in the Cneoraceae family (Tan and Luo 2011).\nSeveral phragmalin-type limonoid orthoacetates (Figure 1) have reportedly been isolated from the of roots of this plant, namely, kotschyins A–H (1–8) (Hay et al. 2007; Dal Piaz et al. 2012). These compounds are complex with a very high degree of oxidation and rearrangement as compared to the parent limonoid structure.\nPhragmalin-type limonoid orthoacetates isolated from the roots of P. kotschyi.\nOther limonoid derivatives (Figure 2) found in the roots and stem bark of P. kotschyi are 7-deacetylgedunin (9), 7-deacetyl-7-oxogedunin (10) (Hay et al. 2007), 1α,7α epoxy-gedunin (11), gedunin (12) (Dal Piaz et al. 2012), kostchyienones A (13) and B (14), andirobin (15) methylangolensate (16) (Sidjui et al. 2018). 
Additional limonoids derivatives (Figure 3) that were isolated from the P. kotschyi bark include pseudrelones A–C (17–19) (Taylor 1979). The pseudrelones also have a phragmalin nucleus with orthoacetate function but they have a lesser degree of oxidation than the kotschyins.\nLimonoid derivatives isolated from the roots of P. kotschyi.\nLimonoid derivatives isolated from the bark of P. kotschyi.\nThe steroids isolated from this plant (Figure 4) include odoratone (20), spicatin (21), 11-acetil-odoratol (22) (Dal Piaz et al. 2012), β-sitosterol (23), 3-O-β-d-glucopyranosyl β-sitosterol (24) stigmasterol (25), 3-O-β-d-glucopyranosyl stigmasterol (26) betulinic acid (27) (Sidjui et al. 2018). Three secotirucallane triterpenes were also isolated from the stem bark of P. kotschyi. These include, 4-hydroxy-3,4-secotirucalla-7,24-dien-3,21-dioic acid (28), 3,4-secotirucalla-4(29),7,24-trien-3,21-dioic acid (29) and 3-methyl ester 3,4-secotirucalla-4(28),7,24-trien-3,21-dioic (30) (Figure 4) (Mambou et al. 2018). Two flavonoids, namely, 3,6,8-trihydroxy-2-(3,4-dihydroxylphenyl)-4H-chrom-4-one (31) and quercetin, 3,4′,7-trimethyl ether (32) (Figure 4) have also been isolated from the roots of this plant (Sidjui et al. 2018).\nTriterpenes and flavonoids isolated from the roots of P. kotschyi.\nThe GCMS analysis of essential oils from root and stem of P. kotschyi indicated that both oils contain mainly sesquiterpenoids. These include, α-cubebene, α-copaene, β-elemene, β-caryophyllene, trans-α-bergamotene, aromadendrene, (E)-β-farnesene, α-humulene, allo-aromadendrene, γ-muurolene, farnesene, germacrene D, β-selinene, α-selinene, α-muurolene, γ-cadinene, calamenene, δ-cadinene, cadina-1,4-diene, α-calacorene, α-cadinene, β-calacorene, germacrene B, cadalene, epi-cubebol, cubebol, spathulenol, globulol, humulene oxide II, epi-α-cadinol, epi-α-muurolol, α-muurolol, selin-11-en-4-α-ol, α-cadinol and juniper camphor. The stem bark oil was found to comprise largely of sesquiterpene hydrocarbons (79.6%), with δ-cadinene (31.3%) as the major constituents. While the oxygenated sesquiterpenes were found to be abundant in the root with cubebols (32.5%) and cadinols (17.9%) as the major constituents (Boyom et al. 2004).\nPharmacological activity The ethnomedicinal claims for the efficacy of P. kotschyi in the treatment of various diseases have been confirmed by numerous relevant scientific studies. Several pharmacological investigations have been carried out to confirm the traditional medicinal uses of the roots, stem bark, and leaves of P. kotschyi. A wide range of pharmacological activities such as analgesic, antipyretic, anthelminthic, antimalaria, anti-leishmaniasis, hepatoprotective, antioxidant, and antimicrobial, have been reported by researchers so far.\nThe ethnomedicinal claims for the efficacy of P. kotschyi in the treatment of various diseases have been confirmed by numerous relevant scientific studies. Several pharmacological investigations have been carried out to confirm the traditional medicinal uses of the roots, stem bark, and leaves of P. kotschyi. A wide range of pharmacological activities such as analgesic, antipyretic, anthelminthic, antimalaria, anti-leishmaniasis, hepatoprotective, antioxidant, and antimicrobial, have been reported by researchers so far.\nAnti-inflammatory, analgesic and antipyretic activities Inflammation is an adaptive response that is triggered by noxious stimuli or conditions, such as tissue injury and infection (Medzhitov 2008; Ahmed 2011). 
The inflammatory response involves the secretion of several chemical mediators and signalling molecules, such as nitric oxide (NO) and proinflammatory cytokines, including tumour necrosis factor-α (TNF-α), interferon-γ (IFN-γ) and interleukins, typically in response to stimuli such as bacterial lipopolysaccharide (LPS) (Medzhitov 2008). Even though the inflammatory response is meant to be a beneficial process that restores homeostasis, it is often accompanied by disorders such as pain and pyrexia owing to the secretion of these chemical mediators (Bielefeldt et al. 2009; Garami et al. 2018). Chronic secretion of proinflammatory cytokines is also associated with the development of diseases such as cancer and diabetes. Hence, anti-inflammatory agents represent an important class of medicines. Extracts and phytoconstituents of P. kotschyi have been reported to possess anti-inflammatory, analgesic and antipyretic properties.
The administration of a crude methanol extract of P. kotschyi stem bark at 200 mg/kg/day, as well as its butanol and chloroform fractions, produced significant analgesic activity in a mouse model, decreasing the number of writhes by 88–92% in the acetic acid induced writhing assay (Abubakar et al. 2016). Akuodor et al. (2013) investigated the antipyretic activity of an ethanol extract of P. kotschyi leaves against yeast- and amphetamine-induced hyperpyrexia in rats and reported that the extract (50, 100 and 150 mg/kg i.p.) produced a significant (p < 0.05) dose-dependent decrease in pyrexia.
Scientific investigations have shown that 7-deacetylgedunin (9) has significant anti-inflammatory activity. Compound 9 was reported to significantly inhibit lipopolysaccharide-induced nitric oxide production in murine macrophage RAW 264.7 cells, with an IC50 of 4.9 ± 0.1 μM, and to downregulate the mRNA and protein expression of inducible nitric oxide synthase (iNOS) at 10 µM (Sarigaputi et al. 2015). These findings suggest that compound 9 exerts its anti-inflammatory effect through the modulation of NO production. Chen et al. (2017) investigated the anti-inflammatory activity of compound 9 in C57BL/6 mice and showed that intraperitoneal administration at 5 mg/kg body weight for two consecutive days decreased LPS-induced mortality by 40%. These findings mark compound 9 as a promising anti-inflammatory agent from this plant, and its anti-inflammatory effect may account for the analgesic and antipyretic properties of P. kotschyi extracts.
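IC50 values such as the 4.9 μM figure above are generally estimated by fitting a sigmoidal concentration-response (Hill) model to the inhibition data. A minimal sketch, using SciPy and entirely hypothetical data points (not those of the cited study):

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (Hill) model commonly used for IC50 estimation.
def hill(conc, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1 + (conc / ic50) ** slope)

# Hypothetical NO-inhibition data (% inhibition vs. concentration in µM).
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])
inhibition = np.array([2, 8, 20, 42, 68, 88, 96])

# Fit with rough initial guesses: 0-100% range, IC50 near 5 µM, slope 1.
params, _ = curve_fit(hill, conc, inhibition, p0=[0, 100, 5, 1])
print(f"Estimated IC50 = {params[2]:.1f} µM")
```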
Antiparasitic activity
Parasitic diseases are among the foremost health problems today, especially in the tropical countries of Africa and Asia. Diseases such as malaria, leishmaniasis, trypanosomiasis and helminthiasis affect millions of people each year, causing high morbidity and mortality, particularly in developing countries (Hotez and Kamath 2009). Hence, there is an urgent need for new drugs to treat and control these diseases.
Extracts obtained from different parts of P. kotschyi have been reported to possess activity against several human parasites. Ahua et al. (2007) investigated the anti-leishmanial activity of P. kotschyi, among several other plants, against Leishmania major. The dichloromethane extract of P. kotschyi roots (at 75 µg/mL) exhibited marked activity (>90% mortality) against the intracellular form of the parasite, which is the pathogenically significant form in humans.
In another study, the anthelminthic activity of an ethanol extract of P. kotschyi roots against Haemonchus contortus (a pathogenic nematode of small ruminants) was evaluated; the extract possessed larvicidal activity against the helminth, with an LC100 of 0.02 µg/mL (Koné et al. 2005). The aqueous stem bark extract of this plant (50 mg/mL) has also been shown to exert anthelminthic activity against Lumbricus terrestris, with a death time of 25.4 min (Ukwubile et al. 2017).
The antimalarial effect of P. kotschyi has been reported in several studies. Christian et al. (2015) investigated the suppressive and curative effects of an ethanol extract of P. kotschyi leaves against malaria in Plasmodium berghei berghei infected mice.
Oral administration of the extract (100–400 mg/kg/day) produced a significant antimalarial effect, evidenced by the suppression of parasitemia and prolonged survival of the infected animals. In another study, a methanol extract of P. kotschyi leaves at an oral dose of 200 mg/kg/day reduced parasitemia by 90.70% in P. berghei berghei infected mice after four consecutive days of treatment (Dawet and Stephen 2014). In contrast, ethanol and aqueous extracts of the stem bark exhibited lower activity against the malaria parasite (39.43% and 28.36% reduction in parasitemia, respectively) (Dawet and Yakubu 2014).
The limonoid derivatives 9 and 10 were reported to display significant in vitro activity against chloroquine-resistant Plasmodium falciparum, with IC50 values of 1.36 and 1.77 µg/mL, respectively (Hay et al. 2007). The two compounds also displayed significant antiparasitic activity against Leishmania donovani and Trypanosoma brucei rhodesiense, with IC50 values in the low range of 0.99–3.4 µg/mL. In contrast, the orthoacetate kotschyin A was found to be inactive against all the tested parasites (Hay et al. 2007). In related work, Sidjui et al. (2018) evaluated the in vitro antiplasmodial activity of 14 compounds isolated from P. kotschyi and showed that the limonoid derivatives 9, 10, 13, 14 and 15 exhibited highly significant activity against both chloroquine-sensitive (Pf3D7) and chloroquine-resistant (PfINDO) strains of Plasmodium, with IC50 values ranging from 0.75 to 9.05 µg/mL.
Steverding et al. (2020) investigated the trypanocidal and leishmanicidal activities of six limonoids, namely 9, 10, 13, 14, 15 and 16, against bloodstream forms of Trypanosoma brucei and promastigotes of Leishmania major. All six compounds showed anti-trypanosomal activity, with IC50 values ranging from 3.18 to 14.5 µM. Compounds 9, 10, 13 and 14 also displayed leishmanicidal activity, with IC50 values of 11.60, 7.63, 2.86 and 14.90 µM, respectively, while 15 and 16 were inactive.
The antiplasmodial, trypanocidal, and leishmanicidal activities of these compounds provide justification for the use of crude extracts of P. kotschyi in the traditional treatment of malaria and other parasitic diseases.
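Percent suppression figures of this kind (e.g., the 90.70% above) are conventionally derived from mean parasitemia in treated versus untreated groups, as in the Peters four-day suppressive test. A minimal sketch with hypothetical group means:

```python
# Hypothetical mean % parasitemia values; not the data from Dawet and Stephen (2014).
mean_parasitemia_control = 32.2   # untreated (vehicle) group
mean_parasitemia_treated = 3.0    # extract-treated group

# % suppression = (control - treated) / control x 100
suppression = 100 * (mean_parasitemia_control - mean_parasitemia_treated) / mean_parasitemia_control
print(f"Parasitemia suppression: {suppression:.1f}%")
```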
Antimicrobial activity
Antimicrobial agents are among the most commonly used medications. The rising prevalence of antimicrobial resistance in recent years has led to renewed efforts to discover new antimicrobial agents for the treatment of infectious diseases (Hobson et al. 2021). Extracts of P. kotschyi have been reported to display appreciable activity against some pathogenic microorganisms.
Ayo et al. (2010) investigated the antimicrobial activity of petroleum ether, ethyl acetate and methanol extracts of P. kotschyi leaves against Staphylococcus aureus, Salmonella typhi, Streptococcus pyogenes, Candida albicans, and Escherichia coli.
The ethyl acetate extract exhibited antibacterial activity against all the tested organisms, with MIC values of 10–20 mg/mL. In a similar study, a crude methanol extract of the stem bark exhibited good activity against a panel of pathogenic bacteria and fungi, including methicillin-resistant S. aureus (MRSA), S. aureus, S. pyogenes, Corynebacterium ulcerans, Bacillus subtilis, E. coli, S. typhi, Shigella dysenteriae, Klebsiella pneumoniae, Neisseria gonorrhoeae, Pseudomonas aeruginosa, C. albicans, C. krusei, and C. tropicalis, with MIC values of 3.75–10.0 mg/mL (Alhassan et al. 2014). A methanol extract of the woody stem was also found to possess antifungal activity against C. krusei ATCC 6825, with an MIC of 6.25 mg/mL (Adeniyi et al. 2010).
The secotirucallane triterpenes (compounds 28, 29 and 30) isolated from the bark of P. kotschyi have been reported to possess significant antibacterial activity against Staphylococcus aureus ATCC 25923, Escherichia coli S2(1) and Pseudomonas aeruginosa, with MICs ranging from 6 to 64 µg/mL. Compound 29 exhibited the highest antibacterial activity, while 30 had the lowest (Mambou et al. 2018). The presence of these compounds is likely responsible for the antimicrobial properties of P. kotschyi extracts and justifies the ethnomedicinal use of this plant as a chewing stick for tooth cleaning and oral health.
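MIC values such as these are normally read from a two-fold broth microdilution series, the MIC being the lowest concentration at which no visible growth occurs. A minimal sketch with hypothetical growth readings:

```python
# Two-fold dilution series starting from a hypothetical 64 µg/mL stock.
concentrations = [64 / (2 ** i) for i in range(8)]  # 64, 32, ..., 0.5 µg/mL

# Hypothetical visible-growth readings per well (True = growth observed).
growth = {64: False, 32: False, 16: False, 8: False, 4: True, 2: True, 1: True, 0.5: True}

# MIC = lowest concentration at which no visible growth occurs.
mic = min(c for c in concentrations if not growth[c])
print(f"MIC = {mic} µg/mL")
```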
Antioxidant and hepatoprotective activities
The ethanol extract of P. kotschyi stem bark has been reported to possess DPPH radical scavenging activity, with an IC50 of 4 µg/mL (Alain et al. 2014).
A study of the hepatoprotective activity of methanol and aqueous extracts of P. kotschyi leaves revealed that both extracts (at a dose of 750 mg/kg/day) protected the liver against paracetamol-induced oxidative damage (Eleha et al. 2016). Similarly, Nchouwet et al. (2018) showed that two weeks of pre-treatment with aqueous and methanol extracts of P. kotschyi stem bark (150 mg/kg/day) significantly suppressed the development of paracetamol-induced hepatotoxicity in rats.
Hypoglycaemic and digestive enzyme inhibitory activities
Diabetes mellitus is a disorder of glucose metabolism resulting from insulin insufficiency or dysfunction. It is one of the major non-communicable diseases, affecting millions of people globally. Scientific investigations have revealed that P. kotschyi extracts possess some antidiabetic properties. Georgewill and Georgewill (2019) investigated the hypoglycaemic effect of an aqueous extract of P. kotschyi leaves in alloxan-induced diabetic rats and found that oral administration of the extract (200 mg/kg/day for 14 days) produced a significant hypoglycaemic effect. The ethanol extract of the roots was also reported to inhibit α-glucosidase (IC50 = 5.0 ± 0.2 μg/mL), an important digestive enzyme targeted in diabetes treatment (Bothon et al. 2013).
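The DPPH scavenging and α-glucosidase inhibition results above rest on the same percent-inhibition arithmetic: activity is computed from assay absorbance with and without the test sample, and the IC50 is the concentration giving 50% inhibition. A minimal sketch with hypothetical absorbances (not those of the cited studies):

```python
# Hypothetical absorbance readings for a DPPH-style assay.
abs_control = 0.82                            # assay without extract
abs_sample = {2: 0.62, 4: 0.42, 8: 0.20}      # extract conc. (µg/mL) -> absorbance

for conc, a in abs_sample.items():
    # % inhibition = (A_control - A_sample) / A_control x 100
    inhibition = 100 * (abs_control - a) / abs_control
    print(f"{conc} µg/mL: {inhibition:.0f}% inhibition")
# The IC50 is then read off (by interpolation or curve fitting) as the
# concentration at which inhibition crosses 50%.
```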
Antiproliferative activity
Cancer is characterised by the abnormally rapid proliferation of cells that invade and destroy other tissues (Alhassan et al. 2018), and it is a major public health problem throughout the world. Pharmacological studies have shown that P. kotschyi possesses anticancer potential.
Kassim et al. (2015) investigated the antiproliferative and apoptosis-inducing effects of an aqueous extract of P. kotschyi roots against a panel of prostate cancer cell lines, namely PC3, DU-145, LNCaP and CWR-22. Results from the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide (MTT) assay showed that all four cell lines exhibited a dose-dependent decrease in proliferation and viability after treatment with the extract, with IC50 values ranging from 12 to 42 µg/mL. Treatment with the same extract induced apoptosis in 42, 35, 33 and 24% of LNCaP, PC3, DU-145 and CWR-22 cells, respectively. Both the antiproliferative and the apoptosis assays indicated that LNCaP cells were the most sensitive to the P. kotschyi extract.
Heat shock protein 90 (Hsp90) is a molecular chaperone involved in the folding, activation and assembly of several proteins, including oncoproteins such as HER2, Survivin, EGFR, Akt, Raf-1 and mutant p53 (Calderwood et al. 2006; Dal Piaz et al. 2012). Hsp90 is often overexpressed in cancer cells and has been shown to play a vital role in tumour progression, malignancy and resistance to chemotherapeutic agents (Zhang et al. 2019). Hence, Hsp90 is now considered a viable molecular target for the development of new anticancer drugs (Gupta et al. 2019). Phytoconstituents of P. kotschyi have been shown to possess significant Hsp90 inhibitory activity. Dal Piaz et al. (2012) investigated the Hsp90 binding capability of several compounds using a surface plasmon resonance (SPR) approach and found that the limonoid orthoacetates (1–6) displayed good binding to the protein, with compound 4 being the most effective. Compound 4 also exhibited significant antiproliferative activity against three cancer cell lines, namely PC-3 (human prostate cancer), A2780 (human ovarian carcinoma) and MCF-7 (human breast adenocarcinoma), with IC50 values of 62 ± 0.4, 38 ± 0.7 and 25 ± 1.2 µM, respectively. These findings suggest that Hsp90 inhibition is a mechanism of action for the antiproliferative effects of the limonoid orthoacetates from P. kotschyi, and they provide a scientific basis for the future development of new anticancer agents from this plant, whether as a standardised herbal preparation or as a pure chemical entity.
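MTT readouts of the kind reported above are typically converted to percent viability by normalising the absorbance of treated wells to untreated controls after blank subtraction. A minimal sketch with hypothetical optical densities (OD):

```python
# Hypothetical MTT optical densities (not from Kassim et al. 2015).
od_blank = 0.05                                # medium-only wells
od_control = 1.25                              # untreated cells
od_treated = {10: 0.95, 25: 0.62, 50: 0.33}    # extract conc. (µg/mL) -> OD

for conc, od in od_treated.items():
    # % viability = (OD_treated - OD_blank) / (OD_control - OD_blank) x 100
    viability = 100 * (od - od_blank) / (od_control - od_blank)
    print(f"{conc} µg/mL: {viability:.0f}% viable")
```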
Antidiarrheal activity
The treatment of diarrhoea is one of the common ethnomedicinal uses of P. kotschyi. To verify this claim, Essiet et al. (2016) investigated the antidiarrheal properties of an ethanol extract of P. kotschyi leaves in Wistar albino rats in which diarrhoea had been induced with castor oil. Oral administration of the extract (100, 200 and 400 mg/kg) produced significant (p < 0.05) dose-dependent inhibition of the induced diarrhoea (67–91%), decreased intestinal transit time by 57–66%, and reduced intestinal fluid accumulation by 68–82%. These findings support the traditional use of this plant in the treatment of diarrhoea.
Toxicity
There is a general perception that plant-based medicinal products are natural and therefore safe for human consumption. This notion is mistaken: several plants have been shown to produce a wide range of adverse reactions, some of which can cause serious injury, life-threatening conditions, and even death (Ekor 2014). Hence, it is of paramount importance to investigate the toxicity profiles of traditional medicinal plants and their phytoconstituents in order to establish their safety. Toxicity studies are an essential component of the drug development process (Hornberg et al. 2014).
Some toxicological studies have been carried out on extracts of P. kotschyi. Nchouwet et al. (2017) investigated the acute and sub-chronic toxicity of an aqueous extract of P. kotschyi stem bark in albino rats. In the acute study, the LD50 was greater than 2000 mg/kg body weight. Sub-chronic administration of the extract at 400 mg/kg body weight/day for 28 days caused a significant increase in total protein and HDL-cholesterol, with a concomitant decrease in LDL-cholesterol, while the other biochemical and haematological parameters remained within the normal range. However, histological examination revealed inflammation and necrosis in the kidney and liver tissues of animals treated at 400 mg/kg body weight/day, whereas tissues from animals treated at lower doses remained normal. This implies that the extract may exert some toxic effects on the kidney and liver at 400 mg/kg body weight while being relatively safe at lower doses.
Kabiru et al. (2015) conducted a sub-chronic toxicity evaluation of a crude methanol extract of the leaves in Sprague-Dawley rats at doses of 40, 200 and 1000 mg/kg body weight/day for 4 weeks. The extract did not produce any significant alteration in haematological or biochemical parameters compared with controls, implying that it was relatively non-toxic at the tested doses. Ezeokpo et al. (2020) carried out a similar study with an ethanol extract of P. kotschyi leaves in Wistar rats and found that the extract (400 mg/kg body weight/day) did not produce any significant derangement of haematological or biochemical parameters after 28 days of treatment. These findings indicate that the methanol and ethanol leaf extracts are relatively non-toxic even at high doses, in contrast to the aqueous stem bark extract; however, more detailed research is required to corroborate this. Although most of the pharmacological activities and chemical constituents reported for this plant were obtained from leaf extracts, the toxicity of the ethanol, methanol and chloroform extracts of the roots and stem bark of P. kotschyi has yet to be evaluated. Hence, further toxicity studies on the different extracts, fractions and chemical constituents of the root and stem bark are still required to establish the overall safety of the plant.
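For context, animal doses such as the 2000 mg/kg limit dose are often translated to a human-equivalent dose (HED) using standard body-surface-area factors (Km of about 6 for rats and 37 for humans). This conversion is illustrative and not part of the cited studies; a minimal sketch:

```python
# Body-surface-area conversion of an animal dose to a human-equivalent dose (HED).
# Standard Km factors: rat ~6, human ~37 (as used in regulatory starting-dose guidance).
def hed_mg_per_kg(animal_dose_mg_per_kg: float, km_animal: float = 6, km_human: float = 37) -> float:
    return animal_dose_mg_per_kg * km_animal / km_human

# Example: a 2000 mg/kg rat dose corresponds to roughly 324 mg/kg in humans.
print(f"HED = {hed_mg_per_kg(2000):.0f} mg/kg")
```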
Traditional uses
Different parts of P. kotschyi are used in the traditional treatment of various diseases. The root is used in the treatment of leprosy (Pedersen et al. 2009), epilepsy, dementia (Kantati et al. 2016), diabetes (Salihu Shinkafi et al. 2015), malaria, abdominal pain, diarrhoea (Ahua et al. 2007), toothache and gingivitis (Tapsoba and Deschamps 2006). The root is also used as a chewing stick for tooth cleaning and the enhancement of oral health (Wolinsky and Sote 1984; Olabanji et al. 2007; Adeniyi et al. 2010). The leaf is used in the treatment of female infertility (Olabanji et al. 2007), intestinal worms (Koné et al. 2005) and malaria (Asase et al. 2005). The stem bark is used in the treatment of cancer (Saidu et al. 2015), infantile dermatitis (Erinoso et al. 2016), stomach ache (Asase et al. 2005), toothache (Kayode and Sanni 2016), high blood pressure, skin diseases, and haemorrhoids (Nadembega et al. 2011).
Conclusions
P. kotschyi is an important medicinal plant used in the traditional treatment of different ailments. Based on its ethnomedicinal claims, extensive pharmacological and phytochemical investigations have been carried out, leading to the isolation and characterisation of several bioactive constituents. Pharmacological investigations of the plant and its phytoconstituents have demonstrated its high therapeutic potential in the treatment of cancer and tropical diseases, particularly malaria, leishmaniasis and trypanosomiasis. Although experimental data support the beneficial medicinal properties of P. kotschyi, data on its toxicity and safety profile remain insufficient. Nonetheless, this review provides a foundation for future work. Considering the amount of knowledge so far obtained on the medicinal properties of P. kotschyi, further studies should be directed towards establishing its safety profile and towards the design and development of drug products, either as single chemical entities or as standardised herbal preparations. Tropical diseases are among the most neglected health problems in the world, and the pharmaceutical industry shows little research and development interest in this area despite the devastating effects of such diseases. Research findings of this nature should therefore be advanced towards the development of useful medicinal products.
[ "intro", null, null, null, null, null, null, null, null, null, null, null, null, null, "conclusions" ]
[ "Traditional uses", "scientific claims", "bioactive compounds", "limonoid orthoacetates", "kotschyins", "kostchyienones", "pseudrelones", "toxicity" ]
Introduction: Traditional medicinal plants have been an essential source of remedies for various illnesses since ancient times. Preparations of plant materials such as infusions, decoctions, powders or pastes have been used in traditional practice in different parts of the world, and people in Africa and Asia use herbal medications to supplement conventional medical practice (Ekor 2014). Interest in herbal medicines has increased in recent years: about 80% of the world's population uses phytotherapeutic medicines (Balekundri and Mannur 2020). The WHO estimated the global market for herbal products at USD 60 billion in the year 2000, and this is expected to grow at 7% per annum towards USD 5 trillion by 2050 (Tan et al. 2020). Several analyses have verified the traditional claims of numerous medicinal plants, leading to the commercialisation of many herbal products and their selection as leads in the development of pharmaceutical medicines (Williams 2021), and many clinically useful drugs have been discovered based on knowledge derived from the ethnomedicinal applications of various herbal materials (Balunas and Kinghorn 2005). Pseudocedrela kotschyi (Schweinf.) Harms (Meliaceae) is an important medicinal plant found in the tropical and subtropical countries of Africa. It has been used extensively in the African traditional medicine system for the treatment of a variety of diseases, particularly as an analgesic, antimicrobial, antimalarial, anthelminthic and antidiarrheal agent. The main focus of this review is to establish the ethnopharmacological uses and medicinal characteristics of P. kotschyi and to highlight its potential as a source of future drugs for the treatment of various tropical diseases.
Methodology: Scientific manuscripts on P. kotschyi were retrieved from different scientific search engines. A literature search was carried out on PubMed using 'Pseudocedrela kotschyi' as the key word. Additional searches of the Medline, EMBASE, Science Direct and Google Scholar databases were performed using the pharmacological activity, chemical constituents and traditional uses of Pseudocedrela kotschyi as search terms. Literature published on the topic in English from inception until September 2020 was collected and analysed, and an up-to-date review of the medicinal potential of P. kotschyi was compiled.
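For readers who wish to reproduce or update such a search programmatically, the sketch below queries NCBI's public E-utilities esearch endpoint for the same term and date window. It is illustrative only; the review itself was compiled from manual database searches, and the retmax value and the assumed "inception" start date are placeholders chosen to mirror the stated methodology.

# Illustrative sketch: reproducing the PubMed part of the search with
# NCBI E-utilities. Not the method used by the review's authors.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": "Pseudocedrela kotschyi",
    "datetype": "pdat",        # filter by publication date
    "mindate": "1900/01/01",   # assumed stand-in for "inception"
    "maxdate": "2020/09/30",   # "until September 2020"
    "retmax": "200",           # assumed upper bound on hits
    "retmode": "json",
}

with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    result = json.load(resp)["esearchresult"]

print(f"{result['count']} PubMed records found")
print("PMIDs:", result["idlist"])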
Geographical distribution: P. kotschyi (common name: dry zone cedar) is a medicinal plant; other common names are Tuna (Hausa) and Emi gbegi (Yoruba). It is found in tropical and subtropical countries of Africa, including Nigeria, Cote d'Ivoire, Senegal, Ghana, the Democratic Republic of Congo and Uganda. The plant often grows as a medium-sized tree of about 12–20 ft in height (Ayo et al. 2010; Alain et al. 2014; Alhassan et al. 2014). The taxonomical classification of P. kotschyi is as follows (Hassler 2019): Kingdom: Plantae; Phylum: Tracheophyta; Class: Magnoliopsida; Order: Sapindales; Family: Meliaceae; Genus: Pseudocedrela; Species: kotschyi.
Ethnomedicinal uses: Different parts of P. kotschyi are used in the traditional treatment of various diseases. The root is used in the treatment of leprosy (Pedersen et al. 2009), epilepsy, dementia (Kantati et al. 2016), diabetes (Salihu Shinkafi et al. 2015), malaria, abdominal pain, diarrhoea (Ahua et al. 2007), toothache and gingivitis (Tapsoba and Deschamps 2006). The root is also used as a chewing stick for tooth cleaning and the enhancement of oral health (Wolinsky and Sote 1984; Olabanji et al. 2007; Adeniyi et al. 2010).
The leaf is used in the treatment of female infertility (Olabanji et al. 2007), intestinal worms (Koné et al. 2005) and malaria (Asase et al. 2005). The stem bark is used in the treatment of cancer (Saidu et al. 2015), infantile dermatitis (Erinoso et al. 2016), stomach ache (Asase et al. 2005), toothache (Kayode and Sanni 2016), high blood pressure, skin diseases and haemorrhoids (Nadembega et al. 2011).
Phytochemistry: Phytochemical investigations have revealed that P. kotschyi contains a variety of pharmacologically active secondary metabolites. A total of 32 compounds have so far been reported from the plant, mainly limonoids, triterpenes and flavonoids. Limonoids are modified triterpenes which are highly oxygenated and have a typical furanylsteroid core structure (Roy and Saraf 2006); they are also known as tetranortriterpenoids. Limonoids are rare natural products which occur mainly in plants of the Meliaceae and Rutaceae families and less frequently in the Cneoraceae (Tan and Luo 2011). Several phragmalin-type limonoid orthoacetates (Figure 1) have been isolated from the roots of this plant, namely kotschyins A–H (1–8) (Hay et al. 2007; Dal Piaz et al. 2012). These compounds are complex, with a very high degree of oxidation and rearrangement compared with the parent limonoid structure. Other limonoid derivatives (Figure 2) found in the roots and stem bark of P. kotschyi are 7-deacetylgedunin (9), 7-deacetyl-7-oxogedunin (10) (Hay et al. 2007), 1α,7α-epoxygedunin (11), gedunin (12) (Dal Piaz et al. 2012), kostchyienones A (13) and B (14), andirobin (15) and methylangolensate (16) (Sidjui et al. 2018). Additional limonoid derivatives (Figure 3) isolated from the bark include pseudrelones A–C (17–19) (Taylor 1979); the pseudrelones also have a phragmalin nucleus with an orthoacetate function but a lesser degree of oxidation than the kotschyins. The steroids isolated from this plant (Figure 4) include odoratone (20), spicatin (21), 11-acetyl-odoratol (22) (Dal Piaz et al. 2012), β-sitosterol (23), 3-O-β-d-glucopyranosyl β-sitosterol (24), stigmasterol (25), 3-O-β-d-glucopyranosyl stigmasterol (26) and betulinic acid (27) (Sidjui et al. 2018). Three secotirucallane triterpenes were also isolated from the stem bark: 4-hydroxy-3,4-secotirucalla-7,24-dien-3,21-dioic acid (28), 3,4-secotirucalla-4(29),7,24-trien-3,21-dioic acid (29) and the 3-methyl ester 3,4-secotirucalla-4(28),7,24-trien-3,21-dioic acid (30) (Figure 4) (Mambou et al. 2018). Two flavonoids, namely 3,6,8-trihydroxy-2-(3,4-dihydroxylphenyl)-4H-chrom-4-one (31) and quercetin 3,4′,7-trimethyl ether (32) (Figure 4), have also been isolated from the roots (Sidjui et al. 2018).
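As an organisational aid, the compound numbering used throughout this review groups the 32 isolated constituents by chemical class. The compact summary below simply restates that numbering from the paragraph above; it contains no data beyond what is already listed.

# Compound numbering used in this review, grouped by chemical class
# (restated from the text above; no additional data).
compound_classes = {
    "phragmalin-type limonoid orthoacetates (kotschyins A-H)": range(1, 9),
    "other limonoid derivatives": range(9, 17),
    "pseudrelones A-C": range(17, 20),
    "steroids and betulinic acid": range(20, 28),
    "secotirucallane triterpenes": range(28, 31),
    "flavonoids": range(31, 33),
}

for cls, nums in compound_classes.items():
    print(f"{cls}: compounds {nums.start}-{nums.stop - 1}")

# Sanity check: the classes partition compounds 1-32 exactly.
assert sorted(n for r in compound_classes.values() for n in r) == list(range(1, 33))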
GC-MS analysis of the essential oils from the root and stem of P. kotschyi indicated that both oils consist mainly of sesquiterpenoids, including α-cubebene, α-copaene, β-elemene, β-caryophyllene, trans-α-bergamotene, aromadendrene, (E)-β-farnesene, α-humulene, allo-aromadendrene, γ-muurolene, farnesene, germacrene D, β-selinene, α-selinene, α-muurolene, γ-cadinene, calamenene, δ-cadinene, cadina-1,4-diene, α-calacorene, α-cadinene, β-calacorene, germacrene B, cadalene, epi-cubebol, cubebol, spathulenol, globulol, humulene oxide II, epi-α-cadinol, epi-α-muurolol, α-muurolol, selin-11-en-4-α-ol, α-cadinol and juniper camphor. The stem bark oil comprised largely sesquiterpene hydrocarbons (79.6%), with δ-cadinene (31.3%) as the major constituent, while oxygenated sesquiterpenes were abundant in the root oil, with cubebols (32.5%) and cadinols (17.9%) as the major constituents (Boyom et al. 2004).
Pharmacological activity: The ethnomedicinal claims for the efficacy of P. kotschyi in the treatment of various diseases have been supported by numerous scientific studies. Several pharmacological investigations have been carried out to confirm the traditional medicinal uses of the roots, stem bark and leaves of P. kotschyi, and a wide range of activities, including analgesic, antipyretic, anthelminthic, antimalarial, antileishmanial, hepatoprotective, antioxidant and antimicrobial effects, has been reported so far.
Anti-inflammatory, analgesic and antipyretic activities: Inflammation is an adaptive response triggered by noxious stimuli or conditions such as tissue injury and infection (Medzhitov 2008; Ahmed 2011). The inflammatory response involves the secretion of several chemical mediators and signalling molecules, such as nitric oxide (NO) and proinflammatory cytokines, including tumour necrosis factor-α (TNF-α), interferon-γ (IFN-γ) and interleukins, released in response to stimuli such as bacterial lipopolysaccharide (LPS) (Medzhitov 2008). Although the inflammatory response is meant to be a beneficial process that restores homeostasis, it is often associated with disorders such as pain and pyrexia owing to the secretion of these chemical mediators (Bielefeldt et al. 2009; Garami et al. 2018), and chronic secretion of proinflammatory cytokines is associated with the development of diseases such as cancer and diabetes. Hence, anti-inflammatory agents represent an important class of medicines. Extracts and phytoconstituents of P. kotschyi have been reported to possess anti-inflammatory, analgesic and antipyretic properties. Administration of a crude methanol extract of P. kotschyi stem bark at 200 mg/kg/day, and of its butanol and chloroform fractions, produced significant analgesic activity in a mouse model, decreasing the number of writhes by 88–92% in the acetic acid-induced writhing assay (Abubakar et al. 2016). Akuodor et al. (2013) investigated the antipyretic activity of an ethanol extract of P. kotschyi leaves against yeast- and amphetamine-induced hyperpyrexia in rats and reported that the leaf extract (50, 100 and 150 mg/kg i.p.) displayed a significant (p < 0.05) dose-dependent decrease in pyrexia. Scientific investigations have also shown that 7-deacetylgedunin (9) has significant anti-inflammatory activity: it inhibited lipopolysaccharide-induced nitric oxide production in murine macrophage RAW 264.7 cells with an IC50 of 4.9 ± 0.1 μM and downregulated the mRNA and protein expression of inducible nitric oxide synthase (iNOS) at 10 µM (Sarigaputi et al. 2015). These findings suggest that compound 9 produces its anti-inflammatory effect through the modulation of NO production. Chen et al. (2017) investigated the anti-inflammatory activity of compound 9 in C57BL/6 mice and showed that intraperitoneal administration at 5 mg/kg body weight for two consecutive days significantly decreased LPS-induced mortality by 40%. These findings indicate that compound 9 is a promising anti-inflammatory agent from this plant, and its anti-inflammatory effect perhaps accounts for the analgesic and antipyretic properties of the P. kotschyi extracts.
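Many of the potency figures quoted in this review (the IC50 for NO inhibition here, and those for the antiplasmodial, antiproliferative and DPPH assays elsewhere) are conventionally obtained by fitting a sigmoidal concentration-response curve to assay readouts. The sketch below shows one common way to do this with a four-parameter logistic (Hill) model; the data points are invented placeholders, and none of this reproduces the cited authors' actual analyses.

# Hedged sketch: estimating an IC50 by fitting a four-parameter
# logistic (Hill) curve to concentration-response data. The data
# below are invented placeholders, not values from any cited study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])     # µM (placeholder)
resp = np.array([98.0, 95.0, 80.0, 48.0, 18.0, 5.0])  # % of control (placeholder)

popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 3.0, 1.0])
bottom, top, ic50, hill = popt
print(f"Estimated IC50 ≈ {ic50:.2f} µM (Hill slope {hill:.2f})")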
Antiparasitic activity: Parasitic diseases are among the foremost health problems today, especially in the tropical countries of Africa and Asia. Diseases such as malaria, leishmaniasis, trypanosomiasis and helminthiasis affect millions of people each year, causing high morbidity and mortality, particularly in developing countries (Hotez and Kamath 2009). Hence, there is an urgent need for new drugs to treat and control these diseases. Extracts obtained from different parts of P. kotschyi have been reported to possess activity against several human parasites. Ahua et al. (2007) investigated the antileishmanial activity of P. kotschyi, among several other plants, against Leishmania major; the dichloromethane extract of P. kotschyi roots (75 µg/mL) exhibited marked activity (>90% mortality) against the intracellular form of the parasite, which is the pathogenically significant form in humans. In another study, an ethanol extract of P. kotschyi roots showed larvicidal activity against Haemonchus contortus (a pathogenic nematode of small ruminants) with an LC100 of 0.02 µg/mL (Koné et al. 2005). The aqueous stem bark extract (50 mg/mL) has also been demonstrated to exert anthelminthic activity against Lumbricus terrestris, with a death time of 25.4 min (Ukwubile et al. 2017). The antimalarial effect of P. kotschyi has been reported in several research manuscripts. Christian et al. (2015) investigated the suppressive and curative effects of an ethanol extract of P. kotschyi leaves against malaria in Plasmodium berghei berghei-infected mice; oral administration of the extract (100–400 mg/kg/day) produced a significant antimalarial effect, as evidenced by suppression of parasitemia and prolonged survival of the infected animals. In another study, a methanol extract of the leaves at an oral dose of 200 mg/kg/day reduced parasitemia by 90.70% in P. berghei berghei-infected mice after four consecutive days of treatment (Dawet and Stephen 2014), whereas the ethanol and aqueous extracts of the stem bark exhibited lower activity against the parasite (39.43% and 28.36% reductions in parasitemia, respectively) (Dawet and Yakubu 2014). The limonoid derivatives 9 and 10 displayed significant in vitro activity against chloroquine-resistant Plasmodium falciparum, with IC50 values of 1.36 and 1.77 µg/mL, respectively, and also showed significant antiparasitic activity against Leishmania donovani and Trypanosoma brucei rhodesiense, with IC50 values in the low range of 0.99–3.4 µg/mL; in contrast, the orthoacetate kotschyin A was inactive against all the tested parasites (Hay et al. 2007). In related work, Sidjui et al. (2018) evaluated the in vitro antiplasmodial activity of 14 compounds isolated from P. kotschyi and found that the limonoid derivatives 9, 10, 13, 14 and 15 exhibited significant activity against both chloroquine-sensitive (Pf3D7) and chloroquine-resistant (PfINDO) strains of P. falciparum, with IC50 values ranging from 0.75 to 9.05 µg/mL. Steverding et al. (2020) investigated the trypanocidal and leishmanicidal activities of six limonoids (9, 10, 13, 14, 15 and 16) against bloodstream forms of Trypanosoma brucei and promastigotes of Leishmania major. All six compounds showed anti-trypanosomal activity, with IC50 values ranging from 3.18 to 14.5 µM; compounds 9, 10, 13 and 14 also displayed leishmanicidal activity, with IC50 values of 11.60, 7.63, 2.86 and 14.90 µM, respectively, while 15 and 16 were inactive. The antiplasmodial, trypanocidal and leishmanicidal activities of these compounds justify the use of crude extracts of P. kotschyi in the traditional treatment of malaria and other parasitic infectious diseases.
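The percentage-suppression figures quoted for the P. berghei experiments follow the usual suppressive-test arithmetic, comparing mean parasitemia in treated and control groups. A minimal sketch is given below; the parasitemia values are hypothetical placeholders, not data from the cited studies.

# Minimal sketch of the percentage-suppression arithmetic used in
# rodent malaria (Plasmodium berghei) suppressive tests. The numbers
# are hypothetical placeholders, not data from the cited studies.

def percent_suppression(control_parasitemia: float,
                        treated_parasitemia: float) -> float:
    """Suppression of parasitemia relative to the untreated control."""
    return (control_parasitemia - treated_parasitemia) / control_parasitemia * 100.0

control = 32.0  # mean % parasitized erythrocytes, untreated group
treated = 3.0   # mean % parasitized erythrocytes, extract-treated group

print(f"Parasitemia suppression: {percent_suppression(control, treated):.1f}%")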
In related work, Sidjui et al. (2018) evaluated the in vitro antiplasmodial activity of 14 compounds isolated from P. kotschyi. Their findings showed that the limonoid derivatives 9, 10, 13, 14 and 15 exhibited highly significant activity against both chloroquine-sensitive (Pf3D7) and chloroquine-resistant (PfINDO) strains of Plasmodium falciparum, with IC50 values ranging from 0.75 to 9.05 µg/mL. Steverding et al. (2020) investigated the trypanocidal and leishmanicidal activities of six limonoids, namely 9, 10, 13, 14, 15 and 16, against bloodstream forms of Trypanosoma brucei and promastigotes of Leishmania major. All six compounds showed anti-trypanosomal activity, with IC50 values ranging from 3.18 to 14.5 µM. Compounds 9, 10, 13 and 14 also displayed leishmanicidal activity, with IC50 values of 11.60, 7.63, 2.86 and 14.90 µM, respectively, while 15 and 16 were inactive. The antiplasmodial, trypanocidal, and leishmanicidal activities of these compounds justify the use of crude extracts of P. kotschyi in the traditional treatment of malaria and other parasitic infectious diseases. Antimicrobial activity: Antimicrobial agents are among the most commonly used medications. The prevalence of antimicrobial resistance in recent years has led to a renewed effort to discover new antimicrobial agents for the treatment of infectious diseases (Hobson et al. 2021). Extracts of P. kotschyi have been reported to display appreciable activity against some pathogenic microorganisms. Ayo et al. (2010) investigated the antimicrobial activity of petroleum ether, ethyl acetate and methanol extracts of P. kotschyi leaves against Staphylococcus aureus, Salmonella typhi, Streptococcus pyogenes, Candida albicans, and Escherichia coli. The ethyl acetate extract exhibited antibacterial activity against all the tested organisms, with MIC values of 10–20 mg/mL. In another similar study, the crude methanol extract of the stem bark of this plant was shown to exhibit good activity against a panel of pathogenic bacteria and fungi, including methicillin-resistant S. aureus (MRSA), S. aureus, S. pyogenes, Corynebacterium ulcerans, Bacillus subtilis, E. coli, S. typhi, Shigella dysenteriae, Klebsiella pneumoniae, Neisseria gonorrhoeae, Pseudomonas aeruginosa, C. albicans, C. krusei, and C. tropicalis, with MIC values of 3.75–10.0 mg/mL (Alhassan et al. 2014). The methanol extract of the woody stem was also found to possess antifungal activity against C. krusei ATCC 6825, with an MIC of 6.25 mg/mL (Adeniyi et al. 2010). The secotirucallane triterpenes (compounds 28, 29 and 30) isolated from the bark of P. kotschyi have been reported to possess significant antibacterial activity against Staphylococcus aureus (ATCC 25923), Escherichia coli S2(1) and Pseudomonas aeruginosa, with MICs ranging from 6 to 64 µg/mL. Compound 29 exhibited the highest antibacterial activity, while 30 had the lowest (Mambou et al. 2018). The presence of these compounds is likely responsible for the antimicrobial property of P. kotschyi extracts and justifies the ethnomedicinal use of this plant as a chewing stick for tooth cleaning and the enhancement of oral health. Antioxidant and hepatoprotective activities: The ethanol extract of P. kotschyi stem bark has been reported to possess DPPH radical scavenging activity, with an IC50 of 4 µg/mL (Alain et al. 2014). A study on the hepatoprotective activity of methanol and aqueous extracts of the P. 
kotschyi leaves revealed that both extracts (at a dose of 750 mg/kg/day) protected the liver against paracetamol-induced oxidative damage (Eleha et al. 2016). A similar study conducted by Nchouwet et al. (2018) showed that 2 weeks of pre-treatment with aqueous and methanol extracts of P. kotschyi stem bark (150 mg/kg/day) significantly suppressed the development of paracetamol-induced hepatotoxicity in experimental rats. Hypoglycaemic and digestive enzyme inhibitory activities: Diabetes mellitus is a disorder of glucose metabolism resulting from insulin insufficiency or dysfunction. It is one of the major non-communicable diseases, affecting millions of people globally. Scientific investigation has revealed that P. kotschyi extracts possess some antidiabetic properties. Georgewill and Georgewill (2019) investigated the hypoglycaemic effect of an aqueous extract of P. kotschyi leaves in alloxan-induced diabetic rats; oral administration of the extract (200 mg/kg/day for 14 days) caused a significant hypoglycaemic effect in the experimental animals. The ethanol extract of the roots of this plant was also reported to exhibit inhibitory activity against α-glucosidase (IC50 = 5.0 ± 0.2 μg/mL), an important digestive enzyme targeted in diabetes treatment (Bothon et al. 2013). Antiproliferative activity: Cancer is an important disease characterised by the abnormally rapid proliferation of cells that invade and destroy other tissues (Alhassan et al. 2018). It is a major public health problem throughout the world. Pharmacological studies have shown that P. kotschyi possesses anticancer potential. Kassim et al. (2015) investigated the antiproliferative activity and apoptosis-inducing effect of an aqueous extract of P. kotschyi roots against a panel of prostate cancer cell lines, namely PC3, DU-145, LNCaP and CWR-22. Results from the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay showed that all four cancer cell lines exhibited a dose-dependent decrease in cell proliferation and viability after treatment with the aqueous extract, with IC50 values ranging from 12 to 42 µg/mL. The LNCaP, PC3, DU-145, and CWR-22 cell lines showed 42%, 35%, 33% and 24% apoptotic cells, respectively, after treatment with the same extract. The results of both the antiproliferative and apoptosis assays indicated that the LNCaP cells were the most sensitive to the P. kotschyi extract. Heat shock protein 90 (Hsp90) is a molecular chaperone involved in the folding, activation and assembly of several proteins, including oncoproteins such as HER2, Survivin, EGFR, Akt, Raf-1 and mutant p53 (Calderwood et al. 2006; Dal Piaz et al. 2012). Hsp90 is often overexpressed in cancer cells and has been demonstrated to play a vital role in tumour progression, malignancy and resistance to chemotherapeutic agents (Zhang et al. 2019). Hence, Hsp90 is now considered a viable molecular target for the development of new anticancer drugs (Gupta et al. 2019). Phytoconstituents of P. kotschyi have been shown to possess significant Hsp90 inhibitory activity. Dal Piaz et al. (2012) investigated the Hsp90-binding capability of several compounds using a surface plasmon resonance (SPR) approach. They found that the limonoid orthoacetates (1–6) displayed good binding to the protein, with compound 4 being the most effective. 
Compound 4 also exhibited significant antiproliferative activity against three cancer cell lines, namely PC-3 (human prostate cancer cells), A2780 (human ovarian carcinoma cells), and MCF-7 (human breast adenocarcinoma cells), with IC50 values of 62 ± 0.4, 38 ± 0.7 and 25 ± 1.2 µM, respectively. These findings suggest that Hsp90 inhibition is a mechanism of action underlying the antiproliferative effects of the limonoid orthoacetates from P. kotschyi, and they provide a scientific basis for the future development of new anticancer agents from P. kotschyi, whether in the form of a standardised herbal preparation or as a pure chemical entity. Antidiarrheal activity: Treatment of diarrhoea is one of the common ethnomedicinal uses of P. kotschyi. To verify this claim, Essiet et al. (2016) investigated the antidiarrheal property of an ethanol extract of P. kotschyi leaves in Wistar albino rats. Diarrhoea was induced in the animals using castor oil. Oral administration of the extract (100, 200 and 400 mg/kg) produced significant (p < 0.05) dose-dependent inhibition of the induced diarrhoea (67–91%). The administered doses of the extract also decreased intestinal transit time by 57–66%, while intestinal fluid accumulation was decreased by 68–82%. This finding supports the traditional use of this plant in the treatment of diarrhoea. Toxicity: There is a general perception that plant-based medicinal products are natural and thus very safe for human consumption. However, this notion is mistaken, because several plants have been shown to produce a wide range of adverse reactions, some of which are capable of causing serious injury, life-threatening conditions, and even death (Ekor 2014). Hence, it is of paramount importance to investigate the toxicity profiles of traditional medicinal plants, as well as of their phytoconstituents, in order to establish their safety. The study of toxicity is an essential component of the new drug development process (Hornberg et al. 2014). Some toxicological studies have been carried out on the extracts of P. kotschyi. Nchouwet et al. (2017) investigated the acute and sub-chronic toxicity of P. kotschyi stem bark aqueous extract in albino rats. In the acute toxicity study, the LD50 was found to be greater than 2000 mg/kg body weight. Sub-chronic administration of the aqueous extract at a dose of 400 mg/kg body weight/day for 28 days caused a significant increase in total protein and HDL-cholesterol, with a concomitant decrease in LDL-cholesterol, while other biochemical and hematological parameters remained within the normal range. However, histological examination revealed inflammation and necrosis in the kidney and liver tissues of animals treated with 400 mg/kg body weight/day of the extract, while tissue samples from animals treated at lower doses remained normal. This implies that the extract may exert some toxic effect on the kidney and liver at 400 mg/kg body weight, while being relatively safe at lower doses. Kabiru et al. (2015) conducted a sub-chronic toxicity evaluation of a crude methanol extract of the leaves in Sprague-Dawley rats at doses of 40, 200 and 1000 mg/kg body weight/day for 4 weeks. The extract did not produce any significant alteration in hematological or biochemical parameters when compared with standard controls, implying that it was relatively non-toxic at the tested doses. Ezeokpo et al. 
(2020) carried out a similar study with an ethanol extract of P. kotschyi leaves in Wistar rats. Their results revealed that the extract (400 mg/kg body weight/day) did not produce any significant derangement in hematological or biochemical parameters after 28 days of treatment. The above findings indicate that the methanol and ethanol extracts of P. kotschyi leaves are relatively non-toxic at higher doses compared with the aqueous stem bark extract; however, more detailed research is still required to corroborate this finding. Although most of the pharmacological activities and chemical constituents reported for this plant were obtained from leaf extracts, the toxicity of the ethanol, methanol and chloroform extracts of the roots and stem bark of P. kotschyi has yet to be evaluated and reported. Hence, further toxicity studies on different extracts, fractions and chemical constituents of the root and stem bark of P. kotschyi are still required to fully establish the safety of the plant. Conclusions: P. kotschyi is an important medicinal plant used in the traditional treatment of different ailments. Based on its ethnomedicinal claims, extensive pharmacological and phytochemical investigations have been carried out, leading to the isolation and characterisation of several bioactive constituents. Pharmacological investigations of this plant and its phytoconstituents have demonstrated its high therapeutic potential in the treatment of cancer and tropical diseases, particularly malaria, leishmaniasis and trypanosomiasis. Although experimental data support the beneficial medicinal properties of P. kotschyi, data on the toxicity and safety profile of the plant remain insufficient. Nonetheless, this review provides the foundation for future work. Considering the amount of knowledge so far obtained on the medicinal properties of P. kotschyi, further studies on this plant should be directed towards establishing its safety profile as well as the design and development of drug products, either as single chemical entities or as standardised herbal preparations. Tropical diseases are among the most neglected health problems in the world, and the pharmaceutical industry shows little research and development interest in this area despite the devastating effects of such diseases. Therefore, research findings of this nature should be advanced towards the development of useful medicinal products.
Background: Pseudocedrela kotschyi (Schweinf.) Harms (Meliaceae) is an important medicinal plant found in tropical and subtropical countries of Africa. Traditionally, P. kotschyi is used in the treatment of various diseases, including diabetes, malaria, abdominal pain and diarrhoea. Methods: By interpreting published scientific manuscripts retrieved from different scientific search engines, namely the Medline, PubMed, EMBASE, Science Direct and Google Scholar databases, an up-to-date review of the medicinal potential of P. kotschyi from inception until September 2020 was compiled. 'Pseudocedrela kotschyi', 'traditional uses', 'pharmacological properties' and 'chemical constituents' were used as search words. Results: At present, more than 30 chemical constituents have been isolated and identified from the root and stem bark of P. kotschyi, among which limonoids and triterpenes are the main active constituents. Based on prior research, P. kotschyi has been reported to possess anti-inflammatory, analgesic, antipyretic, anthelminthic, antimalarial, anti-leishmanial, anti-trypanosomal, hepatoprotective, antioxidant, antidiabetic, antidiarrheal, antimicrobial, and anticancer effects. Conclusions: P. kotschyi is reported to be effective in treating a variety of diseases. Current phytochemical and pharmacological studies mainly focus on the antimalarial, anti-leishmanial, anti-trypanosomal and anticancer potential of the root and stem bark of P. kotschyi. Although experimental data support the beneficial medicinal properties of this plant, there is still a paucity of information on its toxicity profile. Nonetheless, this review provides the basis for future research work.
Introduction: Traditional medicinal plants have been an essential source of remedies for various illnesses since ancient times. Preparations of plant materials such as infusions, decoctions, powders, or pastes have been used in various traditional practices in different parts of the world. People living in Africa and Asia use herbal medications to supplement conventional medical practice (Ekor 2014). There has been increasing interest in the use of herbal medicines in recent years, and about 80% of the world's population uses phytotherapeutic medicines (Balekundri and Mannur 2020). The WHO estimated the size of the global market for herbal products at USD 60 billion in the year 2000, and the market is expected to grow at 7% per annum towards USD 5 trillion by the year 2050 (Tan et al. 2020). Several analyses have verified the traditional claims of numerous medicinal plants, leading to the commercialisation of many herbal products and their nomination as leads in the development of pharmaceutical drugs (Williams 2021). Many clinically useful drugs have been discovered based on knowledge derived from the ethnomedicinal applications of various herbal materials (Balunas and Kinghorn 2005). Pseudocedrela kotschyi (Schweinf.) Harms (Meliaceae) is an important medicinal plant found in the tropical and subtropical countries of Africa. This plant has been extensively used in the African traditional medicine system for the treatment of a variety of diseases, particularly as an analgesic, antimicrobial, antimalarial, anthelminthic, and antidiarrheal agent. The main focus of this review is to establish the ethnopharmacological uses and medicinal characteristics of P. kotschyi and to highlight its potential as a source of future drugs for the treatment of various tropical diseases. Conclusions: P. kotschyi is an important medicinal plant used in the traditional treatment of different ailments. Based on its ethnomedicinal claims, extensive pharmacological and phytochemical investigations have been carried out, leading to the isolation and characterisation of several bioactive constituents. Pharmacological investigations of this plant and its phytoconstituents have demonstrated its high therapeutic potential in the treatment of cancer and tropical diseases, particularly malaria, leishmaniasis and trypanosomiasis. Although experimental data support the beneficial medicinal properties of P. kotschyi, data on the toxicity and safety profile of the plant remain insufficient. Nonetheless, this review provides the foundation for future work. Considering the amount of knowledge so far obtained on the medicinal properties of P. kotschyi, further studies on this plant should be directed towards establishing its safety profile as well as the design and development of drug products, either as single chemical entities or as standardised herbal preparations. Tropical diseases are among the most neglected health problems in the world, and the pharmaceutical industry shows little research and development interest in this area despite the devastating effects of such diseases. Therefore, research findings of this nature should be advanced towards the development of useful medicinal products.
Background: Pseudocedrela kotschyi (Schweinf.) Harms (Meliaceae) is an important medicinal plant found in tropical and subtropical countries of Africa. Traditionally, P. kotschyi is used in the treatment of various diseases, including diabetes, malaria, abdominal pain and diarrhoea. Methods: By interpreting published scientific manuscripts retrieved from different scientific search engines, namely the Medline, PubMed, EMBASE, Science Direct and Google Scholar databases, an up-to-date review of the medicinal potential of P. kotschyi from inception until September 2020 was compiled. 'Pseudocedrela kotschyi', 'traditional uses', 'pharmacological properties' and 'chemical constituents' were used as search words. Results: At present, more than 30 chemical constituents have been isolated and identified from the root and stem bark of P. kotschyi, among which limonoids and triterpenes are the main active constituents. Based on prior research, P. kotschyi has been reported to possess anti-inflammatory, analgesic, antipyretic, anthelminthic, antimalarial, anti-leishmanial, anti-trypanosomal, hepatoprotective, antioxidant, antidiabetic, antidiarrheal, antimicrobial, and anticancer effects. Conclusions: P. kotschyi is reported to be effective in treating a variety of diseases. Current phytochemical and pharmacological studies mainly focus on the antimalarial, anti-leishmanial, anti-trypanosomal and anticancer potential of the root and stem bark of P. kotschyi. Although experimental data support the beneficial medicinal properties of this plant, there is still a paucity of information on its toxicity profile. Nonetheless, this review provides the basis for future research work.
14,047
296
[ 9011, 142, 227, 803, 85, 514, 746, 378, 140, 152, 515, 142, 585 ]
15
[ "kotschyi", "extract", "activity", "mg", "bark", "plant", "stem", "extracts", "treatment", "significant" ]
[ "medicinal plant", "herbal products", "medicinal plant traditional", "usage herbal medicines", "herbal medicines recent" ]
null
null
[CONTENT] Traditional uses | scientific claims | bioactive compounds | limonoid orthoacetates | kotschyins | kostchyienones | pseudrelones | toxicity [SUMMARY]
null
null
[CONTENT] Traditional uses | scientific claims | bioactive compounds | limonoid orthoacetates | kotschyins | kostchyienones | pseudrelones | toxicity [SUMMARY]
[CONTENT] Traditional uses | scientific claims | bioactive compounds | limonoid orthoacetates | kotschyins | kostchyienones | pseudrelones | toxicity [SUMMARY]
[CONTENT] Traditional uses | scientific claims | bioactive compounds | limonoid orthoacetates | kotschyins | kostchyienones | pseudrelones | toxicity [SUMMARY]
[CONTENT] Ethnopharmacology | Medicine, Traditional | Meliaceae | Phytochemicals | Phytotherapy | Plant Extracts | Plants, Medicinal [SUMMARY]
null
null
[CONTENT] Ethnopharmacology | Medicine, Traditional | Meliaceae | Phytochemicals | Phytotherapy | Plant Extracts | Plants, Medicinal [SUMMARY]
[CONTENT] Ethnopharmacology | Medicine, Traditional | Meliaceae | Phytochemicals | Phytotherapy | Plant Extracts | Plants, Medicinal [SUMMARY]
[CONTENT] Ethnopharmacology | Medicine, Traditional | Meliaceae | Phytochemicals | Phytotherapy | Plant Extracts | Plants, Medicinal [SUMMARY]
[CONTENT] medicinal plant | herbal products | medicinal plant traditional | usage herbal medicines | herbal medicines recent [SUMMARY]
null
null
[CONTENT] medicinal plant | herbal products | medicinal plant traditional | usage herbal medicines | herbal medicines recent [SUMMARY]
[CONTENT] medicinal plant | herbal products | medicinal plant traditional | usage herbal medicines | herbal medicines recent [SUMMARY]
[CONTENT] medicinal plant | herbal products | medicinal plant traditional | usage herbal medicines | herbal medicines recent [SUMMARY]
[CONTENT] kotschyi | extract | activity | mg | bark | plant | stem | extracts | treatment | significant [SUMMARY]
null
null
[CONTENT] kotschyi | extract | activity | mg | bark | plant | stem | extracts | treatment | significant [SUMMARY]
[CONTENT] kotschyi | extract | activity | mg | bark | plant | stem | extracts | treatment | significant [SUMMARY]
[CONTENT] kotschyi | extract | activity | mg | bark | plant | stem | extracts | treatment | significant [SUMMARY]
[CONTENT] herbal | medicinal | traditional | materials | usd | herbal products | medicine | medicines | year | medicinal plants [SUMMARY]
null
null
[CONTENT] medicinal | safety profile | medicinal properties | medicinal properties kotschyi | data | tropical diseases | plant | development | safety | profile [SUMMARY]
[CONTENT] kotschyi | extract | activity | mg | extracts | plant | treatment | bark | kg | mg kg [SUMMARY]
[CONTENT] kotschyi | extract | activity | mg | extracts | plant | treatment | bark | kg | mg kg [SUMMARY]
[CONTENT] Meliaceae | Africa ||| [SUMMARY]
null
null
[CONTENT] ||| ||| ||| [SUMMARY]
[CONTENT] Pseudocedrela | Meliaceae | Africa ||| ||| Medline | PubMed | Science Direct | Google | September, 2020 ||| Pseudocedrela ||| more than 30 ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] Pseudocedrela | Meliaceae | Africa ||| ||| Medline | PubMed | Science Direct | Google | September, 2020 ||| Pseudocedrela ||| more than 30 ||| ||| ||| ||| ||| [SUMMARY]
Prediction of poorly differentiated hepatocellular carcinoma using contrast computed tomography.
25608454
Percutaneous radiofrequency ablation (RFA) is a well-established local treatment for small hepatocellular carcinoma (HCC). However, poor differentiation is a risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. The present study aimed to develop a method for predicting poorly differentiated HCC using contrast computed tomography (CT) for safe and effective RFA.
BACKGROUND
Among patients with histologically diagnosed HCC, 223 patients with 226 HCCs showing tumor enhancement on contrast CT were analyzed. The tumor enhancement pattern was classified into two categories, with and without non-enhanced areas, and tumor stain that disappeared during the venous or equilibrium phase, with the tumor becoming hypodense, was categorized as positive for washout.
METHODS
The 226 HCCs were evaluated as well differentiated (w-) in 56, moderately differentiated (m-) in 137, and poorly differentiated (p-) in 33. The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The percentage with heterogeneous enhancement in all HCCs was 13% in w-HCCs, 29% in m-HCCs, and 85% in p-HCCs. The percentage with tumor stain washout in the venous phase was 29% in w-HCCs, 63% in m-HCCs, and 94% in p-HCCs. The percentage with heterogeneous enhancement in small HCCs was 10% in w-HCCs, 10% in m-HCCs, and 75% in p-HCCs. The percentage with tumor stain washout in the venous phase in small HCCs was 23% in w-HCCs, 58% in m-HCCs, and 100% in p-HCCs. Significant correlations were seen for each factor (p < 0.001 each). Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for prediction of poor differentiation in small HCCs by tumor enhancement with non-enhanced areas were 75%, 90%, 48%, 97%, and 88%, respectively; for tumor stain washout in the venous phase, these were 100%, 55%, 22%, 100%, and 60%, respectively.
RESULTS
Tumor enhancement patterns were associated with poor histological differentiation even in small HCCs. Tumor enhancement with non-enhanced areas was valuable for predicting poorly differentiated HCC.
CONCLUSIONS
[ "Aged", "Carcinoma, Hepatocellular", "Contrast Media", "Female", "Humans", "Liver Neoplasms", "Male", "Middle Aged", "Radiographic Image Enhancement", "Tomography, X-Ray Computed" ]
4331839
Background
Percutaneous radiofrequency ablation (RFA), a repeatable and safe procedure, is a well-established local treatment for unresectable small hepatocellular carcinoma (HCC). Currently, RFA is considered the standard of care for patients with Barcelona-Clinic Liver Cancer (BCLC) 0-A tumors not suitable for surgery [1]. Recently, Forner et al. [2] proposed RFA instead of resection in patients with very early (<2 cm) HCC. However, several investigators have reported the risk of seeding [3-5], intrahepatic dissemination [6,7], and aggressive recurrence after RFA [8-10]. Some investigators reported that these critical recurrences were related to poor differentiation [4,7]. Poor differentiation is therefore a likely risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. Furthermore, the prognosis of patients with poorly differentiated HCCs is worse even with radical therapy [11-13]. Along with de-differentiation from well to moderately/poorly differentiated HCC, even small HCCs show a greater tendency for vascular invasion and intrahepatic metastasis [14,15]. Fukuda et al. [16] recommended that, when hepatic function is well preserved, hepatic resection should be the first choice for local control, especially in cases of moderately to poorly differentiated HCC. Therefore, predicting poorly differentiated HCC before therapy is crucial for deciding the optimal therapeutic strategy and for safe and effective RFA, even for small HCCs. Contrast computed tomography (CT) is commonly used for the definitive diagnosis of HCC on imaging [17]. However, the differential diagnosis of poorly differentiated HCC using contrast CT has not been sufficiently established. In the present study, the correlations between the enhancement pattern on contrast CT and histological differentiation, and the ability to predict poorly differentiated HCC using contrast CT, were analyzed.
Methods
Patients: In our hospital's HCC database, 310 patients with 315 HCC nodules were histologically diagnosed by tumor biopsy or surgical resection between May 2001 and December 2010. The flowchart of patient enrollment is shown in Figure 1. Of the 226 HCC nodules analyzed, 165 were diagnosed by tumor biopsy and 61 by resection. The patients' characteristics are summarized in Table 5. This retrospective study was approved by our ethics committee and conformed to the Helsinki Declaration; the need for patients to give written, informed consent was waived by our ethics committee. (Figure 1. Patient enrollment flowchart. Table 5. Patients' characteristics. SD, standard deviation; HCV, hepatitis C virus; AFP, alpha-fetoprotein; AFP-L3, lens culinaris agglutinin-reactive alpha-fetoprotein; DCP, des-gamma-carboxyprothrombin.) Technique and analysis: All contrast CT examinations were performed with multi-detector row CT scanners having at least 4 detectors (Aquilion, Toshiba Medical Systems, Tochigi, Japan, or LightSpeed VCT, GE Medical Systems, Milwaukee, WI, USA) with a section thickness of 5 mm. In addition to plain images, arterial phase images were obtained 40 seconds after the start of bolus administration; from 2005 onward, the arterial phase was scanned with an automatic bolus-tracking program. Venous and equilibrium phase images were obtained at 70 seconds and 180 seconds, respectively. All patients received a non-ionic iodinated contrast medium at a dose of 580 mgI/kg, administered by an automated power injector over 30 seconds (19.3 mgI/kg/s). Contrast CT findings related to tumor enhancement pattern and washout were categorized as follows. The tumor enhancement pattern in the arterial phase was classified into two categories, with and without non-enhanced areas (Figures 2 and 3). Tumor stain obtained during the arterial phase that disappeared during the venous or equilibrium phase, with the tumor becoming hypodense, was categorized as positive for washout. Images obtained by contrast CT were independently analyzed using the above criteria, without reference to histological differentiation, by two readers each with more than 20 years of experience in liver imaging; any disagreements in interpretation were resolved by consensus. (Figure 2. Tumor enhancement without non-enhanced areas: the pre-contrast image (a) shows an iso-density tumor; in comparison, the tumor stain has no non-enhanced areas in the arterial phase (b); the tumor is indicated by arrows. Figure 3. Tumor enhancement with non-enhanced areas: the pre-contrast image (a) shows a low-density tumor; in comparison, the tumor stain has non-enhanced areas in the arterial phase (b); the tumor is indicated by arrows.) Needle biopsies of tumors were performed using an 18-gauge needle (Bard Monopty®, C.R. Bard Inc., Covington, GA, USA), and liver biopsy was performed using a 16-gauge needle. Histological findings were classified using the METAVIR scoring system [28]. All biopsy and resected specimens were examined by two pathologists, each with more than 20 years of experience in liver pathology, without reference to the CT findings of the tumors and surrounding livers. According to the International Working Party classification [29], HCC histology was classified into three types: well differentiated (w-), moderately differentiated (m-), and poorly differentiated (p-) HCCs. If heterogeneous differentiation was found in the obtained HCC tissue, the differentiation grade was assigned based on the lowest differentiated grade; any discrepancies between the two pathologists were resolved by discussion to reach consensus. Statistical analysis: Values are expressed as means ± standard deviation (SD). The correlation between tumor size and histological differentiation was analyzed using the Jonckheere-Terpstra test. The correlation between the enhancement pattern and histological differentiation was analyzed using Fisher's exact test or the chi-square test of independence. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for the diagnosis of p-HCC were calculated according to the findings on contrast CT. Accuracy between groups was compared using the McNemar test. A p value less than 0.05 was considered significant. All analyses were performed using the SPSS 20.0 software package (SPSS, Inc., Chicago, IL, USA).
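For reference, the diagnostic indices named above follow directly from the counts of true positives (TP), false negatives (FN), false positives (FP), and true negatives (TN) in a 2 × 2 table of CT finding against histological grade. These are the standard definitions, stated here for clarity rather than quoted from the paper:

$$\mathrm{sensitivity}=\frac{TP}{TP+FN},\qquad \mathrm{specificity}=\frac{TN}{TN+FP},\qquad \mathrm{PPV}=\frac{TP}{TP+FP},$$
$$\mathrm{NPV}=\frac{TN}{TN+FN},\qquad \mathrm{accuracy}=\frac{TP+TN}{TP+FN+FP+TN}.$$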
Results
Correlation of tumor size and histological differentiation: The histological classification was w-HCC in 56, m-HCC in 137, and p-HCC in 33. The mean diameter by histological classification was 26 ± 13 mm in w-HCCs, 33 ± 20 mm in m-HCCs, and 44 ± 33 mm in p-HCCs. Tumor size was significantly larger as the histological differentiation grade advanced (p = 0.03). In pairwise comparisons, tumor size was significantly smaller for w-HCCs than for m-HCCs and p-HCCs (p = 0.003 and p = 0.001). However, there was no significant difference between m-HCCs and p-HCCs (p = 1.000). The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The proportions of w-HCCs, m-HCCs, and p-HCCs among small HCCs were 33% (48/145), 56% (81/145), and 11% (16/145), respectively. Correlation between tumor enhancement patterns and histological differentiation: The correlation between tumor enhancement patterns in the arterial phase and histological differentiation is shown in Table 1. The percentage of tumors with tumor stain with non-enhanced areas was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there was a significant difference between m-HCCs and p-HCCs, but no significant difference between w-HCCs and m-HCCs. As in all HCCs, there was also a significant correlation in small HCCs (3 cm or less in diameter). (Table 1. Correlation between tumor enhancement pattern in the arterial phase and histological differentiation. HCC, hepatocellular carcinoma. The p-value was calculated for each differentiation group, with versus without non-enhanced areas, using Fisher's exact test.) Correlation between tumor stain washout and histological differentiation: The correlation between tumor stain washout in the venous phase and histological differentiation is shown in Table 2. The percentage of tumors with tumor stain washout in the venous phase was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there were significant differences among all groups (p < 0.05). As in all HCCs, there were also significant correlations in small HCCs. (Table 2. Correlation between tumor stain washout in the venous phase and histological differentiation. HCC, hepatocellular carcinoma. The p-value was calculated for each differentiation group, positive versus negative tumor stain washout, using Fisher's exact test.) The correlation between tumor stain washout in the equilibrium phase and histological differentiation is shown in Table 3. The percentage of tumors with tumor stain washout in the equilibrium phase was higher as the histological differentiation grade advanced (p < 0.001). However, in pairwise comparisons, no significant difference was observed between m-HCCs and p-HCCs. As in all HCCs, there were also significant correlations in small HCCs between tumor stain washout in the equilibrium phase and histological differentiation. (Table 3. Correlation between tumor stain washout in the equilibrium phase and histological differentiation. HCC, hepatocellular carcinoma. The p-value was calculated for each differentiation group, positive versus negative tumor stain washout, using Fisher's exact test.) Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for the prediction of poorly differentiated HCC: Sensitivity, specificity, PPV, NPV, and accuracy for the prediction of p-HCC by each CT finding are shown in Table 4. Sensitivity for p-HCC was lower for tumor enhancement with non-enhanced areas than for tumor stain washout in the venous phase, whereas specificity and accuracy were higher for tumor enhancement with non-enhanced areas. The same pattern was seen in small HCCs. Although the accuracies for p-HCC in both all HCCs and small HCCs were slightly improved by combining tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase, these improvements were not significant. (Table 4. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of prediction for poorly differentiated hepatocellular carcinoma using CT findings. CT, computed tomography; PPV, positive predictive value; NPV, negative predictive value; HCC, hepatocellular carcinoma.)
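As a cross-check of the reported indices, the small-HCC 2 × 2 table for arterial enhancement with non-enhanced areas can be reconstructed from the proportions given above (75% of the 16 p-HCCs, and roughly 10% of the 129 w-/m-HCCs). The minimal Python sketch below is illustrative only: the study itself used SPSS 20.0, and the reconstructed counts are inferred from reported percentages, so they may differ from the paper's actual tables by rounding.

```python
# Illustrative reconstruction, not the authors' code or data tables.
from scipy.stats import fisher_exact

tp, fn = 12, 4    # p-HCCs with / without non-enhanced areas (75% of 16)
fp, tn = 13, 116  # w-/m-HCCs with / without non-enhanced areas (~10% of 129)

sensitivity = tp / (tp + fn)                 # 0.75  -> reported 75%
specificity = tn / (tn + fp)                 # 0.899 -> reported 90%
ppv = tp / (tp + fp)                         # 0.48  -> reported 48%
npv = tn / (tn + fn)                         # 0.967 -> reported 97%
accuracy = (tp + tn) / (tp + fn + fp + tn)   # 0.883 -> reported 88%

# Fisher's exact test on the same table, as named in the Methods.
odds_ratio, p_value = fisher_exact([[tp, fp], [fn, tn]])
print(f"sens={sensitivity:.2f}, spec={specificity:.2f}, PPV={ppv:.2f}, "
      f"NPV={npv:.2f}, acc={accuracy:.2f}, Fisher p={p_value:.2e}")
```

Run as written, the computed values reproduce the five indices reported for small HCCs, which supports the internal consistency of the reconstructed table.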
Conclusions
In conclusion, arterial tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase were associated with poor histological differentiation even in small HCCs, and tumor enhancement with non-enhanced areas was the most valuable finding in the prediction of poorly differentiated HCC. For safe and effective RFA for small HCCs, systematic resection should be considered as the treatment of first choice for small HCCs with arterial tumor enhancement with non-enhanced areas, because the prevalence of microscopic vascular invasion or intrahepatic metastasis is quite high in p-HCCs. If unresectable, combinations of RFA with transcatheter arterial chemo-embolization should be considered as alternative treatment strategies. However, further study and analysis are required to determine whether this approach actually helps improve the prognosis of small p-HCCs.
[ "Background", "Correlation of tumor size and histological differentiation", "Correlation between tumor enhancement patterns and histological differentiation", "Correlation between tumor stain washout and histological differentiation", "Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for the prediction of poorly differentiated HCC", "Patients", "Technique and analysis", "Statistical analysis", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Percutaneous radiofrequency ablation (RFA) is a well-established local treatment for unresectable small hepatocellular carcinoma (HCC), which is a repeatable and safe procedure. Currently, RFA is considered the standard of care for patients with Barcelona-Clinic Liver Cancer (BCLC) 0-A tumors not suitable for surgery [1]. Recently, Forner et al. [2] proposed RFA instead of resection in patients with very early (<2 cm) HCC.\nHowever, several investigators have reported the risk of seeding [3-5], intrahepatic dissemination [6,7], and aggressive recurrence after RFA [8-10]. Some investigators reported that these critical recurrences were related to poor differentiation [4,7]. Therefore, poor differentiation would be a risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. Furthermore, the prognosis of patients with poorly differentiated HCCs is worse even with radical therapy [11-13]. Along with de-differentiation from well to moderately/poorly differentiated HCC, even small HCCs have a greater tendency for vascular invasion and intrahepatic metastasis [14,15]. Fukuda et al. [16] recommended that, when hepatic function is well preserved, hepatic resection should be the first choice for local control, especially in cases of moderately to poorly differentiated HCC. Therefore, the prediction of poorly differentiated HCC before therapy is crucial for deciding the optimal therapeutic strategy and for safe and effective RFA even for small HCCs.\nContrast computed tomography (CT) is commonly used for definite diagnosis of HCCs on imaging [17]. However, the differential diagnosis of poorly differentiated HCC using contrast CT has not been sufficiently established. In the present study, correlations between the enhancement pattern on contrast CT and histological differentiation, and the ability to predict poorly differentiated HCC using contrast CT were analyzed.", "The histological classification was w-HCC in 56, m-HCC in 137, and p-HCC in 33. Mean diameter by histological classification was 26 ± 13 mm in w-HCCs, 33 ± 20 mm in m-HCCs, and 44 ± 33 mm in p-HCCs. The tumor size was significantly larger as the histological differentiation grade advanced (p = 0.03). In pairwise comparisons, tumor size was significantly smaller for w-HCCs than for m-HCCs and p-HCCs (P = 0.003 and p = 0.001). However, there was no significant difference between m-HCCs and p-HCCs (p = 1.000). The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The proportions of w-HCCs, m-HCCs, and p-HCCs in small HCCs were 33% (48/145), 56% (81/145), and 11% (16/145), respectively.", "The correlation between tumor enhancement patterns in the arterial phase and histological differentiation is shown in Table 1. The percentage of tumors with tumor stain with non-enhanced areas was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there was a significant difference between m-HCCs and p-HCCs. However, there was no significant difference between w-HCCs and m-HCCs. 
As in all HCCs, there was also a significant correlation even in small HCCs (3 cm or less in diameter).\nCorrelation between tumor enhancement pattern in the arterial phase and histological differentiation\nHCC, hepatocellular carcinoma.\nThe p-value was calculated among each differentiation group by with versus without non-enhanced area using Fisher’s exact test.", "The correlation between tumor stain washout in the venous phase and histological differentiation is shown in Table 2. The percentage of tumors with tumor stain washout in the venous phase was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there were significant differences among all groups (p < 0.05). As in all HCCs, there were also significant correlations even in small HCCs.\nCorrelation between tumor stain washout in the venous phase and histological differentiation\nHCC, hepatocellular carcinoma.\nThe p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test.\nThe correlation between tumor stain washout in the equilibrium phase and histological differentiation is shown in Table 3. The percentage of tumors with tumor stain washout in the equilibrium phase was higher as the histological differentiation grade advanced (p < 0.001). However, in comparisons of each pair, no significant difference was observed between m-HCCs and p-HCCs. As in all HCCs, there were also significant correlations even in small HCCs between tumor stain washout in the equilibrium phase and histological differentiation.\nCorrelation between tumor stain washout in the equilibrium phase and histological differentiation\nHCC, hepatocellular carcinoma.\nThe p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test.", "Sensitivity, specificity, PPV, NPV, and accuracy for the prediction of p-HCC by each CT finding are shown in Table 4. Sensitivity for p-HCC was inferior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. However, specificity and accuracy for p-HCC were superior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. These findings were seen even in small HCCs. Although accuracies for p-HCC in both all HCCs and small HCCs were slightly improved by the combination of tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase, these improvements were not significant.\nSensitivity, specificity, positive predictive value, negative predictive value, and accuracy of prediction for poorly differentiated hepatocellular carcinoma using CT findings\nCT, computed tomography; PPV, positive predictive value; NPV, negative predictive value; HCC, hepatocellular carcinoma.", "In our hospital’s HCC database, 310 patients with 315 HCC nodules were histologically diagnosed by tumor biopsy or surgical resection between May 2001 and December 2010. The flowchart of patient enrollment is shown in Figure 1. Of 226 HCC nodules, 165 were diagnosed by tumor biopsy, and 61 were diagnosed by resection. The patients’ characteristics are summarized in Table 5. This retrospective study was approved by our ethics committee and conformed to the Helsinki Declaration. 
The need for patients to give written, informed consent was waived by our ethics committee.\nPatient enrollment flowchart.\nPatients’ characteristics\nSD, standard deviation; HCV, hepatitis C virus; AFP, alpha-fetoprotein; AFP-L3, lens culinaris agglutinin-reactive alpha-fetoprotein; DCP, Des-gamma-carboxyprothrombin.", "All contrast CT examinations were performed with multi-detector row CT scanners having at least 4 detectors (Aquilion, Toshiba Medical Systems, Tochigi, Japan or Light speed VCT, GE Medical Systems, Milwaukee, WI, USA) with a section thickness of 5 mm. In addition to plain images, arterial phase images were obtained 40 seconds after the start of bolus administration. From 2005 onward, the arterial phase was scanned with an automatic bolus-tracking program. Venous and equilibrium phase images were obtained at 70 seconds and 180 seconds, respectively. All patients received a non-ionic iodinated contrast medium at a dose of 580 mgI/kg; it was administered to all patients by an automated power injector for 30 seconds (19.3 mgI/kg/s).\nContrast CT findings related to tumor enhancement pattern and washout were categorized as follows. Tumor enhancement pattern in the arterial phase was classified into two categories, with and without non-enhanced areas (Figures 2 and 3). Tumor stain obtained during the arterial phase that disappeared during the venous or equilibrium phase, with the tumor becoming hypodense, was categorized as positive for washout. Images obtained by contrast CT were independently analyzed using the above criteria of enhancement patterns without reference to histological differentiation by two experienced readers with more than 20 years of experience in liver imaging. Any disagreements in interpretation were resolved by consensus.\nTumor enhancement without non-enhanced areas. The pre-contrast image (a) shows an iso-density tumor. In comparison with pre-contrast image, the tumor stain has no non-enhanced areas in the arterial phase (b). The tumor is indicated by arrows.\nTumor enhancement with non-enhanced areas. The pre-contrast image (a) shows a low-density tumor. In comparison with the pre-contrast image, the tumor stain has non-enhanced areas in the arterial phase (b). The tumor is indicated by arrows.\nNeedle biopsies of tumors were performed using an 18-gauge needle (Bard Monopty® C.R. Bard Inc., Covington, GA, USA). Liver biopsy was performed using a 16-gauge needle. Histological findings were classified using the METAVIR scoring system [28]. All biopsy and resected specimens were examined by two experienced pathologists, without reference to the CT findings of their tumors and surrounding livers. According to the International Working Party classification [29], HCC histology was classified into three types: well differentiated (w-), moderately differentiated (m-), and poorly differentiated (p-) HCCs. If heterogeneous differentiation was found in the obtained HCC tissue, differentiation grade was classified based on the lowest differentiated grade. Any discrepancies between the two pathologists with more than 20 years of experience in liver pathology were resolved by discussion to reach consensus.", "Values are expressed as means ± standard deviation (SD). The correlation between tumor size and histological differentiation was analyzed using the Jonckheere-Terpstra test. The correlation between the enhancement pattern and histological differentiation was analyzed using Fisher’s exact test or the chi-square test of independence. 
Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for diagnosis of p-HCC were calculated according to findings on contrast CT. Accuracy between groups was compared using the McNemar test. A p value less than 0.05 was considered significant. All analyses were performed using the SPSS 20.0 software package (SPSS, Inc., Chicago, IL, USA).", "HCC: Hepatocellular carcinoma; RFA: Percutaneous radiofrequency ablation; CT: Computed tomography; BCLC: Barcelona-clinic liver cancer; w: Well differentiated; m: Moderately differentiated; p: Poorly differentiated; SD: Standard deviation; PPV: positive predictive value; NPV: Negative predictive value.", "The authors declare that they have no competing interests.", "NK, TH and IM designed and proposed the research; all authors approved the analysis and participated in drafting the article; MY, SN, MK, DH, UK, II, MT, IM, and KJ collected the clinical data; TH and IM analyzed imaging examinations; NK and TH performed the statistical analysis; NK and TH wrote the manuscript. All authors read and approved the final manuscript." ]
[ "Background", "Results", "Correlation of tumor size and histological differentiation", "Correlation between tumor enhancement patterns and histological differentiation", "Correlation between tumor stain washout and histological differentiation", "Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for the prediction of poorly differentiated HCC", "Discussion", "Conclusions", "Methods", "Patients", "Technique and analysis", "Statistical analysis", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Percutaneous radiofrequency ablation (RFA) is a well-established local treatment for unresectable small hepatocellular carcinoma (HCC), which is a repeatable and safe procedure. Currently, RFA is considered the standard of care for patients with Barcelona-Clinic Liver Cancer (BCLC) 0-A tumors not suitable for surgery [1]. Recently, Forner et al. [2] proposed RFA instead of resection in patients with very early (<2 cm) HCC.\nHowever, several investigators have reported the risk of seeding [3-5], intrahepatic dissemination [6,7], and aggressive recurrence after RFA [8-10]. Some investigators reported that these critical recurrences were related to poor differentiation [4,7]. Therefore, poor differentiation would be a risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. Furthermore, the prognosis of patients with poorly differentiated HCCs is worse even with radical therapy [11-13]. Along with de-differentiation from well to moderately/poorly differentiated HCC, even small HCCs have a greater tendency for vascular invasion and intrahepatic metastasis [14,15]. Fukuda et al. [16] recommended that, when hepatic function is well preserved, hepatic resection should be the first choice for local control, especially in cases of moderately to poorly differentiated HCC. Therefore, the prediction of poorly differentiated HCC before therapy is crucial for deciding the optimal therapeutic strategy and for safe and effective RFA even for small HCCs.\nContrast computed tomography (CT) is commonly used for definite diagnosis of HCCs on imaging [17]. However, the differential diagnosis of poorly differentiated HCC using contrast CT has not been sufficiently established. In the present study, correlations between the enhancement pattern on contrast CT and histological differentiation, and the ability to predict poorly differentiated HCC using contrast CT were analyzed.", " Correlation of tumor size and histological differentiation The histological classification was w-HCC in 56, m-HCC in 137, and p-HCC in 33. Mean diameter by histological classification was 26 ± 13 mm in w-HCCs, 33 ± 20 mm in m-HCCs, and 44 ± 33 mm in p-HCCs. The tumor size was significantly larger as the histological differentiation grade advanced (p = 0.03). In pairwise comparisons, tumor size was significantly smaller for w-HCCs than for m-HCCs and p-HCCs (P = 0.003 and p = 0.001). However, there was no significant difference between m-HCCs and p-HCCs (p = 1.000). The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The proportions of w-HCCs, m-HCCs, and p-HCCs in small HCCs were 33% (48/145), 56% (81/145), and 11% (16/145), respectively.\nThe histological classification was w-HCC in 56, m-HCC in 137, and p-HCC in 33. Mean diameter by histological classification was 26 ± 13 mm in w-HCCs, 33 ± 20 mm in m-HCCs, and 44 ± 33 mm in p-HCCs. The tumor size was significantly larger as the histological differentiation grade advanced (p = 0.03). In pairwise comparisons, tumor size was significantly smaller for w-HCCs than for m-HCCs and p-HCCs (P = 0.003 and p = 0.001). However, there was no significant difference between m-HCCs and p-HCCs (p = 1.000). The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. 
Correlation between tumor enhancement patterns and histological differentiation
The correlation between tumor enhancement patterns in the arterial phase and histological differentiation is shown in Table 1. The percentage of tumors showing stain with non-enhanced areas was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there was a significant difference between m-HCCs and p-HCCs, but not between w-HCCs and m-HCCs. As in all HCCs, the correlation was also significant in small HCCs (3 cm or less in diameter).
Table 1. Correlation between tumor enhancement pattern in the arterial phase and histological differentiation. HCC, hepatocellular carcinoma. The p-value was calculated for each differentiation group, with versus without non-enhanced areas, using Fisher’s exact test.
Correlation between tumor stain washout and histological differentiation
The correlation between tumor stain washout in the venous phase and histological differentiation is shown in Table 2. The percentage of tumors with washout in the venous phase was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there were significant differences among all groups (p < 0.05). As in all HCCs, the correlations were also significant in small HCCs.
Table 2. Correlation between tumor stain washout in the venous phase and histological differentiation. HCC, hepatocellular carcinoma. The p-value was calculated for each differentiation group, positive versus negative washout, using Fisher’s exact test.
The correlation between tumor stain washout in the equilibrium phase and histological differentiation is shown in Table 3. The percentage of tumors with washout in the equilibrium phase was higher as the histological differentiation grade advanced (p < 0.001); however, in pairwise comparisons, no significant difference was observed between m-HCCs and p-HCCs. As in all HCCs, the correlation between equilibrium-phase washout and histological differentiation was also significant in small HCCs.
Table 3. Correlation between tumor stain washout in the equilibrium phase and histological differentiation. HCC, hepatocellular carcinoma. The p-value was calculated for each differentiation group, positive versus negative washout, using Fisher’s exact test.
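The comparisons above reduce to an overall test across a 2×3 contingency table plus pairwise 2×2 Fisher's exact tests between grades. The sketch below shows that structure with SciPy. Group totals mirror the reported cohort (56/137/33), but the positive/negative splits are hypothetical placeholders, not the values in Tables 1-3, and the analysis itself was run in SPSS.

```python
# Overall chi-square across grades plus pairwise Fisher's exact tests.
from scipy.stats import fisher_exact, chi2_contingency

# (washout-positive, washout-negative) counts per grade -- hypothetical splits.
counts = {
    "w": (20, 36),
    "m": (90, 47),
    "p": (30, 3),
}

# Overall association across the 2x3 table.
table = [[counts[g][0] for g in ("w", "m", "p")],
         [counts[g][1] for g in ("w", "m", "p")]]
chi2, p_overall, dof, _ = chi2_contingency(table)
print(f"overall chi-square p = {p_overall:.4f}")

# Pairwise 2x2 Fisher's exact tests between grades.
for a, b in [("w", "m"), ("m", "p"), ("w", "p")]:
    sub = [[counts[a][0], counts[b][0]],
           [counts[a][1], counts[b][1]]]
    _, p_pair = fisher_exact(sub)
    print(f"{a}-HCC vs {b}-HCC: Fisher p = {p_pair:.4f}")
```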
Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for the prediction of poorly differentiated HCC
Sensitivity, specificity, PPV, NPV, and accuracy for the prediction of p-HCC by each CT finding are shown in Table 4. Sensitivity for p-HCC was lower with tumor enhancement with non-enhanced areas than with venous-phase washout, whereas specificity and accuracy were higher; the same pattern was seen in small HCCs. Although the accuracy for p-HCC, in all HCCs and in small HCCs alike, improved slightly when tumor enhancement with non-enhanced areas was combined with venous-phase washout, the improvement was not significant.
Table 4. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of prediction for poorly differentiated hepatocellular carcinoma using CT findings. CT, computed tomography; PPV, positive predictive value; NPV, negative predictive value; HCC, hepatocellular carcinoma.
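For readers less familiar with how the indices in Table 4 are derived, the sketch below computes them from a 2×2 table of CT finding versus histology, and runs the McNemar test used to compare the accuracy of two findings on the same tumors. The totals echo the cohort size (226 nodules, 33 p-HCCs), but every cell value is a hypothetical placeholder, not the study data, and the original analysis used SPSS rather than this Python code.

```python
# Diagnostic indices from a 2x2 table, plus McNemar's test for paired accuracies.
from statsmodels.stats.contingency_tables import mcnemar

def diagnostic_indices(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: "enhancement with non-enhanced areas" vs p-HCC histology.
print(diagnostic_indices(tp=20, fp=25, fn=13, tn=168))

# McNemar's test on paired correct/incorrect calls of two CT findings:
# rows = finding A correct/incorrect, columns = finding B correct/incorrect;
# only the discordant cells (20 and 10 here) drive the test. Hypothetical counts.
paired = [[150, 20],
          [10, 46]]
result = mcnemar(paired, exact=True)
print(f"McNemar p = {result.pvalue:.4f}")
```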
Discussion
With respect to the hemodynamic change from m-HCC to p-HCC, Asayama et al. [18] reported that the arterial blood supply decreases significantly. Although hypervascular tumors predominate among p-HCCs, the proportion of hypovascular tumors has been reported to be higher in w-HCCs and p-HCCs than in m-HCCs on contrast ultrasonography [19] and contrast CT [20]. However, Jang et al. [19] found no significant difference in arterial vascularity between w-HCCs and p-HCCs on contrast ultrasonography, and Lee et al. [20] found no significant difference in the prevalence of atypical arterial enhancement, such as hypoattenuation, between w-HCCs and p-HCCs on contrast CT. These studies did not analyze the diagnostic value of arterial hypovascularity for p-HCC. Sanda et al. [21] demonstrated that even small HCCs (up to 2 cm in diameter) in which hypovascular and hypervascular areas were intermingled in the arterial phase of contrast CT showed a contiguous multinodular type and included p-HCC components. Kawamura et al. [22] reported that heterogeneous enhancement with irregular ring-like structures in the arterial phase of contrast CT is a significant independent predictor of p-HCC; such a pattern falls within the present criteria for tumor enhancement with non-enhanced areas. Taken together, these observations suggest that a hemodynamic change from hypervascularity to hypovascularity in overt HCC signals that p-HCC components have arisen within the tumor. Accordingly, the present classification of arterial tumor enhancement, with or without non-enhanced areas, is reasonable for identifying hypervascular HCCs that include p-HCC components.
With respect to other contrast CT enhancement findings associated with p-HCC, washout of tumor stain in the venous phase has been reported. Nishie et al. [23] indicated that p-HCCs show faster washout on contrast CT than non-p-HCCs.
On contrast magnetic resonance imaging, venous-phase washout has likewise been reported more frequently in p-HCCs [24,25], and on contrast-enhanced ultrasonography, washout time was significantly shorter in p-HCCs [19,26]. These contrast studies and the present results together leave little doubt that tumor stain washout becomes faster as the histological differentiation of HCC advances.
In the present study, the diagnostic accuracy for p-HCC of tumor enhancement with non-enhanced areas in the arterial phase of contrast CT was high, even in small HCCs, whereas the accuracy of venous-phase washout was not as high. Both findings were associated with poor histological differentiation even in small HCCs, but combining them improved the accuracy for p-HCC only slightly, in all HCCs and in small HCCs, and the improvement was not significant. Tumor enhancement with non-enhanced areas therefore appears to be the most valuable contrast CT finding for predicting poorly differentiated HCC.
The present study had some limitations. First, it was retrospective, and HCCs with no arterial-phase enhancement on contrast CT were not evaluated, so the results cannot be generalized to HCCs without arterial enhancement. Second, needle biopsy samples were used; the differentiation grade assessed from a biopsy sample may not reflect the least differentiated component in the tumor. Nevertheless, the present results suggest that the contrast CT enhancement pattern facilitates assessment of histological malignant potential. Third, this study could not address the correlation between the contrast CT enhancement pattern and prognosis. Kawamura et al. [27] reported that a heterogeneous enhancement pattern with irregular ring-like structures on dynamic CT is associated with tumor recurrence after RFA, but their study was very small. A large-scale cohort study should be conducted to investigate whether these findings help predict outcome after RFA.

Conclusions
Arterial tumor enhancement with non-enhanced areas and venous-phase tumor stain washout were associated with poor histological differentiation even in small HCCs, and tumor enhancement with non-enhanced areas was the most valuable finding for predicting poorly differentiated HCC. For safe and effective RFA of small HCCs, systematic resection should be considered the first-choice treatment for small HCCs showing arterial enhancement with non-enhanced areas, because the prevalence of microscopic vascular invasion and intrahepatic metastasis is high in p-HCCs. If the tumor is unresectable, combining RFA with transcatheter arterial chemoembolization should be considered as an alternative strategy. Further study is required to determine whether this approach actually improves the prognosis of small p-HCCs.
[ null, "results", null, null, null, null, "discussion", "conclusions", "methods", null, null, null, null, null, null ]
[ "Contrast computed tomography", "Histological differentiation", "Poorly differentiated hepatocellular carcinoma" ]
Background: Percutaneous radiofrequency ablation (RFA) is a well-established local treatment for unresectable small hepatocellular carcinoma (HCC), which is a repeatable and safe procedure. Currently, RFA is considered the standard of care for patients with Barcelona-Clinic Liver Cancer (BCLC) 0-A tumors not suitable for surgery [1]. Recently, Forner et al. [2] proposed RFA instead of resection in patients with very early (<2 cm) HCC. However, several investigators have reported the risk of seeding [3-5], intrahepatic dissemination [6,7], and aggressive recurrence after RFA [8-10]. Some investigators reported that these critical recurrences were related to poor differentiation [4,7]. Therefore, poor differentiation would be a risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. Furthermore, the prognosis of patients with poorly differentiated HCCs is worse even with radical therapy [11-13]. Along with de-differentiation from well to moderately/poorly differentiated HCC, even small HCCs have a greater tendency for vascular invasion and intrahepatic metastasis [14,15]. Fukuda et al. [16] recommended that, when hepatic function is well preserved, hepatic resection should be the first choice for local control, especially in cases of moderately to poorly differentiated HCC. Therefore, the prediction of poorly differentiated HCC before therapy is crucial for deciding the optimal therapeutic strategy and for safe and effective RFA even for small HCCs. Contrast computed tomography (CT) is commonly used for definite diagnosis of HCCs on imaging [17]. However, the differential diagnosis of poorly differentiated HCC using contrast CT has not been sufficiently established. In the present study, correlations between the enhancement pattern on contrast CT and histological differentiation, and the ability to predict poorly differentiated HCC using contrast CT were analyzed. Results: Correlation of tumor size and histological differentiation The histological classification was w-HCC in 56, m-HCC in 137, and p-HCC in 33. Mean diameter by histological classification was 26 ± 13 mm in w-HCCs, 33 ± 20 mm in m-HCCs, and 44 ± 33 mm in p-HCCs. The tumor size was significantly larger as the histological differentiation grade advanced (p = 0.03). In pairwise comparisons, tumor size was significantly smaller for w-HCCs than for m-HCCs and p-HCCs (P = 0.003 and p = 0.001). However, there was no significant difference between m-HCCs and p-HCCs (p = 1.000). The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The proportions of w-HCCs, m-HCCs, and p-HCCs in small HCCs were 33% (48/145), 56% (81/145), and 11% (16/145), respectively. The histological classification was w-HCC in 56, m-HCC in 137, and p-HCC in 33. Mean diameter by histological classification was 26 ± 13 mm in w-HCCs, 33 ± 20 mm in m-HCCs, and 44 ± 33 mm in p-HCCs. The tumor size was significantly larger as the histological differentiation grade advanced (p = 0.03). In pairwise comparisons, tumor size was significantly smaller for w-HCCs than for m-HCCs and p-HCCs (P = 0.003 and p = 0.001). However, there was no significant difference between m-HCCs and p-HCCs (p = 1.000). The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The proportions of w-HCCs, m-HCCs, and p-HCCs in small HCCs were 33% (48/145), 56% (81/145), and 11% (16/145), respectively. 
Correlation between tumor enhancement patterns and histological differentiation The correlation between tumor enhancement patterns in the arterial phase and histological differentiation is shown in Table 1. The percentage of tumors with tumor stain with non-enhanced areas was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there was a significant difference between m-HCCs and p-HCCs. However, there was no significant difference between w-HCCs and m-HCCs. As in all HCCs, there was also a significant correlation even in small HCCs (3 cm or less in diameter). Correlation between tumor enhancement pattern in the arterial phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by with versus without non-enhanced area using Fisher’s exact test. The correlation between tumor enhancement patterns in the arterial phase and histological differentiation is shown in Table 1. The percentage of tumors with tumor stain with non-enhanced areas was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there was a significant difference between m-HCCs and p-HCCs. However, there was no significant difference between w-HCCs and m-HCCs. As in all HCCs, there was also a significant correlation even in small HCCs (3 cm or less in diameter). Correlation between tumor enhancement pattern in the arterial phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by with versus without non-enhanced area using Fisher’s exact test. Correlation between tumor stain washout and histological differentiation The correlation between tumor stain washout in the venous phase and histological differentiation is shown in Table 2. The percentage of tumors with tumor stain washout in the venous phase was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there were significant differences among all groups (p < 0.05). As in all HCCs, there were also significant correlations even in small HCCs. Correlation between tumor stain washout in the venous phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test. The correlation between tumor stain washout in the equilibrium phase and histological differentiation is shown in Table 3. The percentage of tumors with tumor stain washout in the equilibrium phase was higher as the histological differentiation grade advanced (p < 0.001). However, in comparisons of each pair, no significant difference was observed between m-HCCs and p-HCCs. As in all HCCs, there were also significant correlations even in small HCCs between tumor stain washout in the equilibrium phase and histological differentiation. Correlation between tumor stain washout in the equilibrium phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test. The correlation between tumor stain washout in the venous phase and histological differentiation is shown in Table 2. The percentage of tumors with tumor stain washout in the venous phase was significantly higher as the histological differentiation grade advanced (p < 0.001). 
In pairwise comparisons, there were significant differences among all groups (p < 0.05). As in all HCCs, there were also significant correlations even in small HCCs. Correlation between tumor stain washout in the venous phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test. The correlation between tumor stain washout in the equilibrium phase and histological differentiation is shown in Table 3. The percentage of tumors with tumor stain washout in the equilibrium phase was higher as the histological differentiation grade advanced (p < 0.001). However, in comparisons of each pair, no significant difference was observed between m-HCCs and p-HCCs. As in all HCCs, there were also significant correlations even in small HCCs between tumor stain washout in the equilibrium phase and histological differentiation. Correlation between tumor stain washout in the equilibrium phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for the prediction of poorly differentiated HCC Sensitivity, specificity, PPV, NPV, and accuracy for the prediction of p-HCC by each CT finding are shown in Table 4. Sensitivity for p-HCC was inferior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. However, specificity and accuracy for p-HCC were superior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. These findings were seen even in small HCCs. Although accuracies for p-HCC in both all HCCs and small HCCs were slightly improved by the combination of tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase, these improvements were not significant. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of prediction for poorly differentiated hepatocellular carcinoma using CT findings CT, computed tomography; PPV, positive predictive value; NPV, negative predictive value; HCC, hepatocellular carcinoma. Sensitivity, specificity, PPV, NPV, and accuracy for the prediction of p-HCC by each CT finding are shown in Table 4. Sensitivity for p-HCC was inferior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. However, specificity and accuracy for p-HCC were superior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. These findings were seen even in small HCCs. Although accuracies for p-HCC in both all HCCs and small HCCs were slightly improved by the combination of tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase, these improvements were not significant. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of prediction for poorly differentiated hepatocellular carcinoma using CT findings CT, computed tomography; PPV, positive predictive value; NPV, negative predictive value; HCC, hepatocellular carcinoma. Correlation of tumor size and histological differentiation: The histological classification was w-HCC in 56, m-HCC in 137, and p-HCC in 33. 
Mean diameter by histological classification was 26 ± 13 mm in w-HCCs, 33 ± 20 mm in m-HCCs, and 44 ± 33 mm in p-HCCs. The tumor size was significantly larger as the histological differentiation grade advanced (p = 0.03). In pairwise comparisons, tumor size was significantly smaller for w-HCCs than for m-HCCs and p-HCCs (P = 0.003 and p = 0.001). However, there was no significant difference between m-HCCs and p-HCCs (p = 1.000). The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The proportions of w-HCCs, m-HCCs, and p-HCCs in small HCCs were 33% (48/145), 56% (81/145), and 11% (16/145), respectively. Correlation between tumor enhancement patterns and histological differentiation: The correlation between tumor enhancement patterns in the arterial phase and histological differentiation is shown in Table 1. The percentage of tumors with tumor stain with non-enhanced areas was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there was a significant difference between m-HCCs and p-HCCs. However, there was no significant difference between w-HCCs and m-HCCs. As in all HCCs, there was also a significant correlation even in small HCCs (3 cm or less in diameter). Correlation between tumor enhancement pattern in the arterial phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by with versus without non-enhanced area using Fisher’s exact test. Correlation between tumor stain washout and histological differentiation: The correlation between tumor stain washout in the venous phase and histological differentiation is shown in Table 2. The percentage of tumors with tumor stain washout in the venous phase was significantly higher as the histological differentiation grade advanced (p < 0.001). In pairwise comparisons, there were significant differences among all groups (p < 0.05). As in all HCCs, there were also significant correlations even in small HCCs. Correlation between tumor stain washout in the venous phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test. The correlation between tumor stain washout in the equilibrium phase and histological differentiation is shown in Table 3. The percentage of tumors with tumor stain washout in the equilibrium phase was higher as the histological differentiation grade advanced (p < 0.001). However, in comparisons of each pair, no significant difference was observed between m-HCCs and p-HCCs. As in all HCCs, there were also significant correlations even in small HCCs between tumor stain washout in the equilibrium phase and histological differentiation. Correlation between tumor stain washout in the equilibrium phase and histological differentiation HCC, hepatocellular carcinoma. The p-value was calculated among each differentiation group by positive versus negative tumor stain washout using Fisher’s exact test. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for the prediction of poorly differentiated HCC: Sensitivity, specificity, PPV, NPV, and accuracy for the prediction of p-HCC by each CT finding are shown in Table 4. Sensitivity for p-HCC was inferior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. 
However, specificity and accuracy for p-HCC were superior by tumor enhancement with non-enhanced areas than by tumor stain washout in the venous phase. These findings were seen even in small HCCs. Although accuracies for p-HCC in both all HCCs and small HCCs were slightly improved by the combination of tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase, these improvements were not significant. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of prediction for poorly differentiated hepatocellular carcinoma using CT findings CT, computed tomography; PPV, positive predictive value; NPV, negative predictive value; HCC, hepatocellular carcinoma. Discussion: With respect to a hemodynamic change from m-HCC to p-HCC, Asayama et al. [18] reported that the arterial blood supply decreases significantly. Furthermore, it was also found that, although hypervascular tumor was predominant in p-HCCs, the proportion of hypovascular tumors was higher in w-HCCs and p-HCCs than in m-HCCs on contrast ultrasonography [19] and contrast CT [20]. However, Jang et al. [19] indicated that there was no significant difference in arterial vascularity between w-HCCs and p-HCCs on contrast ultrasonography. Lee et al. [20] also demonstrated that no significant difference was seen in the prevalence of atypical arterial enhancement such as hypoattenuation between w-HCCs and p-HCCs on contrast CT. These studies did not analyze the diagnostic values for p-HCC using arterial hypovascularity. Sanda et al. [21] demonstrated that even small HCCs (diameter up to 2 cm) intermingled with hypovascular areas and hypervascular areas on the arterial phase of contrast CT showed contiguous multinodular type and included p-HCC components. Kawamura et al. [22] reported that heterogeneous enhancement with irregular ring-like structures in the arterial phase of contrast CT is a significant independent predictor of p-HCC. Of course, their heterogeneous enhancement pattern with irregular ring-like structures was included in the criteria of tumor enhancement with non-enhanced areas in the present study. From the above, it is assumed that a hemodynamic change from hypervascularity to hypovascularity in overt HCC means that p-HCC components have been generated in the HCC. Accordingly, the present arterial tumor enhancement classification with or without non-enhanced areas is reasonable for predicting hypervascular HCC including p-HCC components. With respect to other enhancement pattern findings of contrast CT associated with p-HCC, tumor stain washout in the venous phase has been reported. Nishie et al. [23] indicated that p-HCCs are considered to show faster tumor stain washout on contrast CT than non-p-HCCs. On contrast magnetic resonance imaging, it has also been reported that tumor stain washout in the venous phase was more frequently seen in p-HCCs [24,25]. Furthermore, on contrast-enhanced ultrasonography, tumor stain washout time was significantly less in p-HCCs [19,26]. From the various above contrast studies and the present results, there is no doubt that tumor stain washout becomes faster as the histological differentiation of HCC advances. In the present study, the diagnostic accuracy for p-HCC using tumor enhancement with non-enhanced areas in the arterial phase of contrast CT was high even in small HCCs. On the other hand, the accuracy for p-HCC by tumor stain washout in the venous phase of contrast CT was not as high. 
In the present results, tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase were associated with poor histological differentiation even in small HCCs. However, the improvement of accuracies for p-HCC in both all HCCs and small HCCs by the combination of tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase were slight, and these improvements were not significant. Therefore, tumor enhancement with non-enhanced areas appears to be the most valuable finding on contrast CT in the prediction of poorly differentiated HCC. The present study had some limitations. First, the present study was retrospective. Furthermore, HCCs with no enhancement in the arterial phase of contrast CT were not evaluated. Therefore, the present results cannot be generalized to HCCs without arterial enhancement on contrast CT. Second, needle biopsy samples were used. The assessment of histological differentiation grade using biopsy samples may not reflect the lowest differentiated component in the tumor. Nevertheless, the present results could suggest that the contrast CT enhancement pattern facilitates assessment of histological malignant potential. Third, this study could not show the correlation between the contrast CT enhancement pattern and the prognosis. Kawamura et al. [27] reported that a heterogeneous enhancement pattern with irregular ringed-like structures on dynamic CT is associated with tumor recurrence after RFA. However, their study was very small. In the future, a large-scale cohort study should be conducted to investigate whether these findings will contribute to predicting outcome after RFA. Conclusions: In conclusion, arterial tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase were associated with poor histological differentiation even in small HCCs, and tumor enhancement with non-enhanced areas was the most valuable finding in the prediction of poorly differentiated HCC. For safe and effective RFA for small HCCs, systematic resection should be considered as the treatment of first choice for small HCCs with arterial tumor enhancement with non-enhanced areas, because the prevalence of microscopic vascular invasion or intrahepatic metastasis is quite high in p-HCCs. If unresectable, combinations of RFA with transcatheter arterial chemo-embolization should be considered as alternative treatment strategies. However, further study and analysis are required to determine whether this approach actually helps improve the prognosis of small p-HCCs. Methods: Patients In our hospital’s HCC database, 310 patients with 315 HCC nodules were histologically diagnosed by tumor biopsy or surgical resection between May 2001 and December 2010. The flowchart of patient enrollment is shown in Figure 1. Of 226 HCC nodules, 165 were diagnosed by tumor biopsy, and 61 were diagnosed by resection. The patients’ characteristics are summarized in Table 5. This retrospective study was approved by our ethics committee and conformed to the Helsinki Declaration. The need for patients to give written, informed consent was waived by our ethics committee. Patient enrollment flowchart. Patients’ characteristics SD, standard deviation; HCV, hepatitis C virus; AFP, alpha-fetoprotein; AFP-L3, lens culinaris agglutinin-reactive alpha-fetoprotein; DCP, Des-gamma-carboxyprothrombin. 
In our hospital’s HCC database, 310 patients with 315 HCC nodules were histologically diagnosed by tumor biopsy or surgical resection between May 2001 and December 2010. The flowchart of patient enrollment is shown in Figure 1. Of 226 HCC nodules, 165 were diagnosed by tumor biopsy, and 61 were diagnosed by resection. The patients’ characteristics are summarized in Table 5. This retrospective study was approved by our ethics committee and conformed to the Helsinki Declaration. The need for patients to give written, informed consent was waived by our ethics committee. Patient enrollment flowchart. Patients’ characteristics SD, standard deviation; HCV, hepatitis C virus; AFP, alpha-fetoprotein; AFP-L3, lens culinaris agglutinin-reactive alpha-fetoprotein; DCP, Des-gamma-carboxyprothrombin. Technique and analysis All contrast CT examinations were performed with multi-detector row CT scanners having at least 4 detectors (Aquilion, Toshiba Medical Systems, Tochigi, Japan or Light speed VCT, GE Medical Systems, Milwaukee, WI, USA) with a section thickness of 5 mm. In addition to plain images, arterial phase images were obtained 40 seconds after the start of bolus administration. From 2005 onward, the arterial phase was scanned with an automatic bolus-tracking program. Venous and equilibrium phase images were obtained at 70 seconds and 180 seconds, respectively. All patients received a non-ionic iodinated contrast medium at a dose of 580 mgI/kg; it was administered to all patients by an automated power injector for 30 seconds (19.3 mgI/kg/s). Contrast CT findings related to tumor enhancement pattern and washout were categorized as follows. Tumor enhancement pattern in the arterial phase was classified into two categories, with and without non-enhanced areas (Figures 2 and 3). Tumor stain obtained during the arterial phase that disappeared during the venous or equilibrium phase, with the tumor becoming hypodense, was categorized as positive for washout. Images obtained by contrast CT were independently analyzed using the above criteria of enhancement patterns without reference to histological differentiation by two experienced readers with more than 20 years of experience in liver imaging. Any disagreements in interpretation were resolved by consensus. Tumor enhancement without non-enhanced areas. The pre-contrast image (a) shows an iso-density tumor. In comparison with pre-contrast image, the tumor stain has no non-enhanced areas in the arterial phase (b). The tumor is indicated by arrows. Tumor enhancement with non-enhanced areas. The pre-contrast image (a) shows a low-density tumor. In comparison with the pre-contrast image, the tumor stain has non-enhanced areas in the arterial phase (b). The tumor is indicated by arrows. Needle biopsies of tumors were performed using an 18-gauge needle (Bard Monopty® C.R. Bard Inc., Covington, GA, USA). Liver biopsy was performed using a 16-gauge needle. Histological findings were classified using the METAVIR scoring system [28]. All biopsy and resected specimens were examined by two experienced pathologists, without reference to the CT findings of their tumors and surrounding livers. According to the International Working Party classification [29], HCC histology was classified into three types: well differentiated (w-), moderately differentiated (m-), and poorly differentiated (p-) HCCs. If heterogeneous differentiation was found in the obtained HCC tissue, differentiation grade was classified based on the lowest differentiated grade. 
Statistical analysis: Values are expressed as means ± standard deviation (SD). The correlation between tumor size and histological differentiation was analyzed using the Jonckheere-Terpstra test. The correlation between the enhancement pattern and histological differentiation was analyzed using Fisher's exact test or the chi-square test of independence. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for the diagnosis of p-HCC were calculated according to the findings on contrast CT. Accuracy between groups was compared using the McNemar test. A p value less than 0.05 was considered significant. All analyses were performed using the SPSS 20.0 software package (SPSS, Inc., Chicago, IL, USA).
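The Jonckheere-Terpstra test named above for the size-versus-grade trend is not built into SciPy, so the sketch below implements the standard normal-approximation form (without tie correction). The tumor-size samples are hypothetical; only the group sizes (56 w-, 137 m-, 33 p-HCCs) are taken from the article.

```python
import numpy as np
from scipy.stats import norm

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra trend test for ordered groups.

    groups: list of 1-D arrays ordered by the hypothesized trend
    (here w-, m-, p-HCC). Returns (J, z, two-sided p) using the
    normal approximation without tie correction.
    """
    J = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                # count pairs where the later group exceeds x; ties count 1/2
                J += np.sum(groups[j] > x) + 0.5 * np.sum(groups[j] == x)
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N**2 - np.sum(n**2)) / 4.0
    var = (N**2 * (2 * N + 3) - np.sum(n**2 * (2 * n + 3))) / 72.0
    z = (J - mean) / np.sqrt(var)
    return J, z, 2 * norm.sf(abs(z))

# Hypothetical tumor diameters (cm); group sizes match the article.
rng = np.random.default_rng(0)
w = rng.normal(2.2, 0.8, 56)
m = rng.normal(3.0, 1.2, 137)
p = rng.normal(3.5, 1.5, 33)
print(jonckheere_terpstra([w, m, p]))
```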
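The diagnostic indices and the McNemar comparison just described can be reproduced from 2 × 2 counts. In the sketch below, the cell counts for the non-enhanced-areas criterion in small HCCs are chosen to be consistent with the published percentages (75%/90%/48%/97%/88%) but are not reported directly in the article, and the discordant-pair counts for the McNemar test are purely hypothetical; the chi-square form with continuity correction stands in for whichever SPSS routine the authors used.

```python
from scipy.stats import chi2

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def mcnemar_chi2(b, c):
    """McNemar test on discordant pairs (b: only criterion A correct,
    c: only criterion B correct), chi-square with continuity correction."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Counts consistent with the reported small-HCC results for enhancement
# with non-enhanced areas (16 p-HCCs, 129 non-p-HCCs):
print(diagnostic_metrics(tp=12, fp=13, fn=4, tn=116))
# Hypothetical discordant pairs when comparing the two CT criteria:
print(mcnemar_chi2(b=9, c=15))
```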
Abbreviations: HCC: Hepatocellular carcinoma; RFA: Percutaneous radiofrequency ablation; CT: Computed tomography; BCLC: Barcelona-Clinic Liver Cancer; w: Well differentiated; m: Moderately differentiated; p: Poorly differentiated; SD: Standard deviation; PPV: Positive predictive value; NPV: Negative predictive value.

Competing interests: The authors declare that they have no competing interests.

Authors' contributions: NK, TH, and IM designed and proposed the research; all authors approved the analysis and participated in drafting the article; MY, SN, MK, DH, UK, II, MT, IM, and KJ collected the clinical data; TH and IM analyzed the imaging examinations; NK and TH performed the statistical analysis; NK and TH wrote the manuscript. All authors read and approved the final manuscript.
Background: Percutaneous radiofrequency ablation (RFA) is a well-established local treatment for small hepatocellular carcinoma (HCC). However, poor differentiation is a risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. The present study aimed to develop a method for predicting poorly differentiated HCC using contrast computed tomography (CT) for safe and effective RFA. Methods: Of HCCs diagnosed histologically, 223 patients with 226 HCCs showing tumor enhancement on contrast CT were analyzed. The tumor enhancement pattern was classified into two categories, with and without non-enhanced areas, and tumor stain that disappeared during the venous or equilibrium phase with the tumor becoming hypodense was categorized as positive for washout. Results: The 226 HCCs were evaluated as well differentiated (w-) in 56, moderately differentiated (m-) in 137, and poorly differentiated (p-) in 33. The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The percentage with heterogeneous enhancement in all HCCs was 13% in w-HCCs, 29% in m-HCCs, and 85% in p-HCCs. The percentage with tumor stain washout in the venous phase was 29% in w-HCCs, 63% in m-HCCs, and 94% in p-HCCs. The percentage with heterogeneous enhancement in small HCCs was 10% in w-HCCs, 10% in m-HCCs, and 75% in p-HCCs. The percentage with tumor stain washout in the venous phase in small HCCs was 23% in w-HCCs, 58% in m-HCCs, and 100% in p-HCCs. Significant correlations were seen for each factor (p < 0.001 each). Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for prediction of poor differentiation in small HCCs by tumor enhancement with non-enhanced areas were 75%, 90%, 48%, 97%, and 88%, respectively; for tumor stain washout in the venous phase, these were 100%, 55%, 22%, 100%, and 60%, respectively. Conclusions: Tumor enhancement patterns were associated with poor histological differentiation even in small HCCs. Tumor enhancement with non-enhanced areas was valuable for predicting poorly differentiated HCC.
Background: Percutaneous radiofrequency ablation (RFA) is a well-established local treatment for unresectable small hepatocellular carcinoma (HCC), which is a repeatable and safe procedure. Currently, RFA is considered the standard of care for patients with Barcelona-Clinic Liver Cancer (BCLC) 0-A tumors not suitable for surgery [1]. Recently, Forner et al. [2] proposed RFA instead of resection in patients with very early (<2 cm) HCC. However, several investigators have reported the risk of seeding [3-5], intrahepatic dissemination [6,7], and aggressive recurrence after RFA [8-10]. Some investigators reported that these critical recurrences were related to poor differentiation [4,7]. Therefore, poor differentiation would be a risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. Furthermore, the prognosis of patients with poorly differentiated HCCs is worse even with radical therapy [11-13]. Along with de-differentiation from well to moderately/poorly differentiated HCC, even small HCCs have a greater tendency for vascular invasion and intrahepatic metastasis [14,15]. Fukuda et al. [16] recommended that, when hepatic function is well preserved, hepatic resection should be the first choice for local control, especially in cases of moderately to poorly differentiated HCC. Therefore, the prediction of poorly differentiated HCC before therapy is crucial for deciding the optimal therapeutic strategy and for safe and effective RFA even for small HCCs. Contrast computed tomography (CT) is commonly used for definite diagnosis of HCCs on imaging [17]. However, the differential diagnosis of poorly differentiated HCC using contrast CT has not been sufficiently established. In the present study, correlations between the enhancement pattern on contrast CT and histological differentiation, and the ability to predict poorly differentiated HCC using contrast CT were analyzed. Conclusions: In conclusion, arterial tumor enhancement with non-enhanced areas and tumor stain washout in the venous phase were associated with poor histological differentiation even in small HCCs, and tumor enhancement with non-enhanced areas was the most valuable finding in the prediction of poorly differentiated HCC. For safe and effective RFA for small HCCs, systematic resection should be considered as the treatment of first choice for small HCCs with arterial tumor enhancement with non-enhanced areas, because the prevalence of microscopic vascular invasion or intrahepatic metastasis is quite high in p-HCCs. If unresectable, combinations of RFA with transcatheter arterial chemo-embolization should be considered as alternative treatment strategies. However, further study and analysis are required to determine whether this approach actually helps improve the prognosis of small p-HCCs.
Background: Percutaneous radiofrequency ablation (RFA) is a well-established local treatment for small hepatocellular carcinoma (HCC). However, poor differentiation is a risk factor for tumor seeding or intrahepatic dissemination after RFA for HCC. The present study aimed to develop a method for predicting poorly differentiated HCC using contrast computed tomography (CT) for safe and effective RFA. Methods: Of HCCs diagnosed histologically, 223 patients with 226 HCCs showing tumor enhancement on contrast CT were analyzed. The tumor enhancement pattern was classified into two categories, with and without non-enhanced areas, and tumor stain that disappeared during the venous or equilibrium phase with the tumor becoming hypodense was categorized as positive for washout. Results: The 226 HCCs were evaluated as well differentiated (w-) in 56, moderately differentiated (m-) in 137, and poorly differentiated (p-) in 33. The proportions of small HCCs (3 cm or less) in w-HCCs, m-HCCs, and p-HCCs were 86% (48/56), 59% (81/137), and 48% (16/33), respectively. The percentage with heterogeneous enhancement in all HCCs was 13% in w-HCCs, 29% in m-HCCs, and 85% in p-HCCs. The percentage with tumor stain washout in the venous phase was 29% in w-HCCs, 63% in m-HCCs, and 94% in p-HCCs. The percentage with heterogeneous enhancement in small HCCs was 10% in w-HCCs, 10% in m-HCCs, and 75% in p-HCCs. The percentage with tumor stain washout in the venous phase in small HCCs was 23% in w-HCCs, 58% in m-HCCs, and 100% in p-HCCs. Significant correlations were seen for each factor (p < 0.001 each). Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for prediction of poor differentiation in small HCCs by tumor enhancement with non-enhanced areas were 75%, 90%, 48%, 97%, and 88%, respectively; for tumor stain washout in the venous phase, these were 100%, 55%, 22%, 100%, and 60%, respectively. Conclusions: Tumor enhancement patterns were associated with poor histological differentiation even in small HCCs. Tumor enhancement with non-enhanced areas was valuable for predicting poorly differentiated HCC.
6,616
473
[ 349, 232, 152, 266, 181, 152, 544, 133, 56, 10, 77 ]
15
[ "tumor", "hccs", "hcc", "differentiation", "phase", "histological", "tumor stain", "stain", "enhancement", "histological differentiation" ]
[ "hcc tumor enhancement", "hccs tumor size", "hepatocellular carcinoma sensitivity", "intrahepatic dissemination rfa", "liver cancer differentiated" ]
[CONTENT] Contrast computed tomography | Histological differentiation | Poorly differentiated hepatocellular carcinoma [SUMMARY]
[CONTENT] Contrast computed tomography | Histological differentiation | Poorly differentiated hepatocellular carcinoma [SUMMARY]
[CONTENT] Contrast computed tomography | Histological differentiation | Poorly differentiated hepatocellular carcinoma [SUMMARY]
[CONTENT] Contrast computed tomography | Histological differentiation | Poorly differentiated hepatocellular carcinoma [SUMMARY]
[CONTENT] Contrast computed tomography | Histological differentiation | Poorly differentiated hepatocellular carcinoma [SUMMARY]
[CONTENT] Contrast computed tomography | Histological differentiation | Poorly differentiated hepatocellular carcinoma [SUMMARY]
[CONTENT] Aged | Carcinoma, Hepatocellular | Contrast Media | Female | Humans | Liver Neoplasms | Male | Middle Aged | Radiographic Image Enhancement | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Aged | Carcinoma, Hepatocellular | Contrast Media | Female | Humans | Liver Neoplasms | Male | Middle Aged | Radiographic Image Enhancement | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Aged | Carcinoma, Hepatocellular | Contrast Media | Female | Humans | Liver Neoplasms | Male | Middle Aged | Radiographic Image Enhancement | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Aged | Carcinoma, Hepatocellular | Contrast Media | Female | Humans | Liver Neoplasms | Male | Middle Aged | Radiographic Image Enhancement | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Aged | Carcinoma, Hepatocellular | Contrast Media | Female | Humans | Liver Neoplasms | Male | Middle Aged | Radiographic Image Enhancement | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Aged | Carcinoma, Hepatocellular | Contrast Media | Female | Humans | Liver Neoplasms | Male | Middle Aged | Radiographic Image Enhancement | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] hcc tumor enhancement | hccs tumor size | hepatocellular carcinoma sensitivity | intrahepatic dissemination rfa | liver cancer differentiated [SUMMARY]
[CONTENT] hcc tumor enhancement | hccs tumor size | hepatocellular carcinoma sensitivity | intrahepatic dissemination rfa | liver cancer differentiated [SUMMARY]
[CONTENT] hcc tumor enhancement | hccs tumor size | hepatocellular carcinoma sensitivity | intrahepatic dissemination rfa | liver cancer differentiated [SUMMARY]
[CONTENT] hcc tumor enhancement | hccs tumor size | hepatocellular carcinoma sensitivity | intrahepatic dissemination rfa | liver cancer differentiated [SUMMARY]
[CONTENT] hcc tumor enhancement | hccs tumor size | hepatocellular carcinoma sensitivity | intrahepatic dissemination rfa | liver cancer differentiated [SUMMARY]
[CONTENT] hcc tumor enhancement | hccs tumor size | hepatocellular carcinoma sensitivity | intrahepatic dissemination rfa | liver cancer differentiated [SUMMARY]
[CONTENT] tumor | hccs | hcc | differentiation | phase | histological | tumor stain | stain | enhancement | histological differentiation [SUMMARY]
[CONTENT] tumor | hccs | hcc | differentiation | phase | histological | tumor stain | stain | enhancement | histological differentiation [SUMMARY]
[CONTENT] tumor | hccs | hcc | differentiation | phase | histological | tumor stain | stain | enhancement | histological differentiation [SUMMARY]
[CONTENT] tumor | hccs | hcc | differentiation | phase | histological | tumor stain | stain | enhancement | histological differentiation [SUMMARY]
[CONTENT] tumor | hccs | hcc | differentiation | phase | histological | tumor stain | stain | enhancement | histological differentiation [SUMMARY]
[CONTENT] tumor | hccs | hcc | differentiation | phase | histological | tumor stain | stain | enhancement | histological differentiation [SUMMARY]
[CONTENT] rfa | poorly differentiated hcc | differentiated hcc | differentiated | poorly differentiated | poorly | hcc | intrahepatic | contrast | patients [SUMMARY]
[CONTENT] tumor | contrast | patients | obtained | phase | arterial phase | arterial | classified | images | pre [SUMMARY]
[CONTENT] hccs | tumor | tumor stain washout | stain washout | hccs hccs | differentiation | stain | tumor stain | washout | histological [SUMMARY]
[CONTENT] hccs | small | small hccs | arterial tumor enhancement non | tumor enhancement non enhanced | arterial | tumor enhancement non | enhancement non enhanced | enhancement non enhanced areas | enhancement non [SUMMARY]
[CONTENT] hccs | tumor | phase | hcc | differentiation | contrast | hccs hccs | histological | enhancement | tumor stain [SUMMARY]
[CONTENT] hccs | tumor | phase | hcc | differentiation | contrast | hccs hccs | histological | enhancement | tumor stain [SUMMARY]
[CONTENT] ||| HCC ||| RFA [SUMMARY]
[CONTENT] 223 | 226 | CT ||| two [SUMMARY]
[CONTENT] 226 | 56 | 137 | 33 ||| 3 cm | 86% | 48/56 | 59% | 48% | 16/33 ||| 13% | 29% | 85% ||| 29% | 63% | 94% ||| 10% | 10% | 75% ||| 23% | 58% | 100% ||| 0.001 ||| 75% | 90% | 48% | 97% | 88% | 100% | 55% | 22% | 100% | 60% [SUMMARY]
[CONTENT] ||| Tumor [SUMMARY]
[CONTENT] ||| HCC ||| RFA ||| 223 | 226 | CT ||| two ||| ||| 226 | 56 | 137 | 33 ||| 3 cm | 86% | 48/56 | 59% | 48% | 16/33 ||| 13% | 29% | 85% ||| 29% | 63% | 94% ||| 10% | 10% | 75% ||| 23% | 58% | 100% ||| 0.001 ||| 75% | 90% | 48% | 97% | 88% | 100% | 55% | 22% | 100% | 60% ||| ||| Tumor [SUMMARY]
[CONTENT] ||| HCC ||| RFA ||| 223 | 226 | CT ||| two ||| ||| 226 | 56 | 137 | 33 ||| 3 cm | 86% | 48/56 | 59% | 48% | 16/33 ||| 13% | 29% | 85% ||| 29% | 63% | 94% ||| 10% | 10% | 75% ||| 23% | 58% | 100% ||| 0.001 ||| 75% | 90% | 48% | 97% | 88% | 100% | 55% | 22% | 100% | 60% ||| ||| Tumor [SUMMARY]
Correlations between serum levels of microRNA-148a-3p and microRNA-485-5p and the progression and recurrence of prostate cancer.
36434610
Unpredicted postoperative recurrence of prostate cancer, one of the most common malignancies among males worldwide, has become a prominent issue affecting patients after treatment. Here, we investigated the correlation between serum miR-148a-3p and miR-485-5p expression levels and cancer recurrence in PCa patients, aiming to identify new biomarkers for diagnosing prostate cancer and predicting its postoperative recurrence.
BACKGROUND
A total of 198 male PCa cases treated with surgery, postoperative radiotherapy, and chemotherapy were included in the present study. Serum levels of miR-148a-3p and miR-485-5p were measured before the initial operation, and the cases were then followed up for two years to monitor cancer recurrence and to divide them into recurrence and non-recurrence groups. The relative expressions of serum miR-148a-3p and miR-485-5p were compared and related to other clinicopathological features.
METHODS
Pre-surgery serum levels of miR-148a-3p in patients with TNM Classification of Malignant Tumors (TNM) stage cT1-2a prostate cancer (Gleason score < 7) were significantly lower (P < 0.05) than levels in patients with TNM stage cT2b and higher prostate cancer (Gleason score ≥ 7). Pre-surgery serum levels of miR-485-5p in patients with TNM stage cT1-2a prostate cancer (Gleason score < 7) were significantly higher (P < 0.05) than in patients with TNM stage cT2b and higher cancer (Gleason score ≥ 7). The serum miR-148a-3p level in the recurrence group was higher than in the non-recurrence group (P < 0.05), while the serum miR-485-5p level in the recurrence group was lower than in the non-recurrence group (P < 0.05). ROC curve analysis showed that the AUCs of miR-148a-3p, miR-485-5p, and combined detection for predicting recurrence of prostate cancer were 0.825 (95% CI 0.765-0.875, P < 0.0001), 0.790 (95% CI 0.726-0.844, P < 0.0001), and 0.913 (95% CI 0.865-0.948, P < 0.0001), respectively.
RESULTS
The pre-surgery serum miR-148a-3p level correlates positively, and the miR-485-5p level negatively, with prostate cancer progression and postoperative recurrence. Both molecules show potential for predicting postoperative recurrence, individually or in combination.
CONCLUSION
[ "Humans", "Male", "Prostatic Neoplasms", "Prostate", "Postoperative Period", "ROC Curve", "MicroRNAs" ]
9701040
Background
As the fifth most frequent cause of male death worldwide, prostate cancer (PCa) accounts for more than 30% of all newly diagnosed cancers, with an approximate 5-year survival rate of 9% [1, 2]. Advanced prostate cancer exhibits signs and symptoms, typically including slow urination, difficulty emptying the bladder, blood in the urine, and back pain [1, 3]. Screening and diagnosis of PCa are important for detecting cases, predicting disease outcomes, guiding clinical management decisions, and avoiding overtreatment [4–6]. Thus, rapid, sensitive diagnostic methods are in demand. Including measurements of serum prostate-specific antigens (PSA/KLK3) in the diagnosis of prostate cancer has led to a significant increase in the detection of early-stage PCa (Gleason < 6) [6–10]. However, the specificity and sensitivity of serum PSA are still not high enough for detecting early-stage prostate cancer or precisely evaluating the progression or severity of cancer, since elevated PSA levels are also detected in benign prostatic hyperplasia, inflammation, and other urinary tract diseases [11]. Thus, sensitive and specific biomarkers for prostate cancer diagnosis are in high demand. Apart from the difficulties in the early diagnosis of prostate cancer, postoperative recurrence is another major issue threatening patients' health and recovery. Postoperative recurrence is when surviving prostate cancer cells become evident again after initial treatment, such as surgery or radiation therapy, has documented the removal of cancer cells [12]. It occurs in around 15% of patients and always requires a second cancer treatment at least 6 months after surgery [13]. Clinically, the prediction of postoperative recurrence is based on serum prostate-specific antigens: a second increase following the initial drop-off after surgery or radiation therapy usually indicates the recurrence of prostate cancer cells. However, due to the low sensitivity and specificity of the currently used prostate-specific antigens, predicting cancer recurrence faces the same difficulty as early diagnosis of prostate cancer. Therefore, identifying new biomarkers is needed. Recent studies have reported the potential of microRNAs (miRNAs) as diagnostic biomarkers and therapeutic targets of PCa. MicroRNAs are small single-stranded non-coding RNA molecules that regulate gene expression through complementary base pairing with target mRNAs, affecting the post-transcriptional processing of ~ 60% of the human genome [6, 14]. The involvement of miRNAs in the development and progression of numerous tumors has been widely reported, highlighting their potential for cancer diagnosis and treatment [15]. The association of miRNAs with prostate cancer was initially reported in a large-scale miRNA analysis using a large collection of samples, including prostate cancers [6]. At least 12 miRNAs were identified as overexpressed in prostate cancer, and further studies confirmed the role of miRNAs as new biomarkers for prostate cancer [16]. Since then, numerous reports have suggested essential roles of miRNAs in PCa formation and progression [17]. The expression of these miRNAs changes significantly during prostate tumor development, indicating their potential as clinical biomarkers [18, 19]. miR-148a-3p and miR-485-5p are miRNAs showing particular diagnostic potential related to prostate cancer. miR-148a-3p, located on the 7p15.2 region of human chromosomes, has a well-established role in tumor development [20].
Published studies suggested that miR-148a-3p is significantly decreased in various cancer cells, such as bladder cancer cells [21], suggesting its potential as a cancer biomarker. miR-485-5p, on the other hand, has been identified as a tumor suppressor with a significant inhibitory effect on the proliferation and differentiation of gastric cancer [22, 23]. miR-485-5p inhibits the expression of hypoxia-inducible factor 1 and impedes hepatocellular carcinoma cell differentiation, inhibiting the growth of liver cancer cells [24]. However, whether miR-148a-3p and miR-485-5p participate in the progression of prostate cancer is still unknown, and their potential as biomarkers for diagnosing and predicting prostate cancer recurrence has not been examined. In the present study, we examined the changes in serum miR-148a-3p and miR-485-5p levels in patients with prostate cancer at different stages and with recurrence, and evaluated the correlations between serum miR-148a-3p and miR-485-5p levels and cancer progression and recurrence, aiming to identify new diagnostic biomarkers for prostate cancer.
null
null
Results
Characteristics of the study population: The baseline clinical characteristics of the study population are summarized in Table 1. Briefly, there were no statistically significant differences (P > 0.05) in age, body mass index, hypertension, diabetes, smoking history, or family history of PCa between the recurrence and non-recurrence groups. Notably, the average age of the involved cases was 64.05 ± 10.11 years; the average age of patients without recurrence was 63.8 ± 8.4 years, and the average age of patients with recurrence was 64.0 ± 8.1 years.

Table 1. Clinical characteristics and perioperative data
Parameters | Non-recurrence group (n = 153) | Recurrence group (n = 45) | t/χ2 | P
Age (years) | 63.8 ± 8.4 | 64.0 ± 8.1 | 0.638 | 0.524
BMI (kg/m2) | 22.3 ± 2.4 | 22.5 ± 2.6 | 1.554 | 0.152
Hypertension (%) | 49 (32.02) | 14 (31.11) | 0.095 | 0.763
Diabetes (%) | 17 (11.11) | 4 (8.89) | 0.157 | 0.741
Smoking history (%) | 84 (54.90) | 19 (42.22) | 3.548 | 0.052
Family history of PCa (%) | 9 (5.88) | 2 (4.44) | 1.146 | 0.288
Data are shown as means ± SD. BMI, body mass index; χ2/t, result of the chi-square test or Student's t-test. P < 0.05 is considered significant.
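The categorical comparisons in Table 1 are chi-square tests on 2 × 2 contingency tables; a minimal sketch of one such comparison, using the smoking-history counts from the table, follows. Note that the statistic obtained this way need not exactly reproduce the published χ2 value, which may come from a different test variant or correction.

```python
from scipy.stats import chi2_contingency

# Smoking history vs. recurrence, counts taken from Table 1:
# rows = non-recurrence / recurrence, cols = smoker / non-smoker.
table = [[84, 153 - 84],
         [19, 45 - 19]]

chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2_stat:.3f}, p = {p_value:.3f}, dof = {dof}")
```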
Serum miR-148a-3p increases while miR-485-5p decreases in the late stages of prostate cancer compared with earlier stages: We first examined the correlation between pre-surgery serum levels of miR-148a-3p and miR-485-5p and other clinical features in the involved cases before any operation (Table 2). We compared the levels of miR-148a-3p and miR-485-5p in patients at different stages based on either TNM staging or Gleason scores. According to TNM staging, the involved cases were separated into two groups: 150 cases at the cT1-2a stage and 48 cases at cT2b or higher stages. Pre-surgery serum levels of miR-148a-3p in patients at the cT1-2a stage (4.31 ± 2.01) were significantly lower than in patients at cT2b or higher stages (8.52 ± 2.28), while pre-surgery serum levels of miR-485-5p in patients at the cT1-2a stage (4.02 ± 1.25) were significantly higher than in patients at cT2b or higher stages (0.42 ± 0.28). According to the Gleason score, the involved cases were divided into two groups: 143 Gleason < 7 cases and 55 Gleason ≥ 7 cases. Pre-surgery serum levels of miR-148a-3p in Gleason < 7 cases (3.98 ± 1.25) were dramatically lower than in Gleason ≥ 7 cases (7.05 ± 3.47), while pre-surgery serum levels of miR-485-5p in Gleason < 7 cases (1.40 ± 0.58) were significantly higher than in Gleason ≥ 7 cases (0.52 ± 0.23). Serum levels of miR-148a-3p and miR-485-5p showed no significant differences (P > 0.05) related to age (≥ 60 years compared with < 60 years), index lesion diameter (no index compared with index lesion(s) ≤ 7 mm), or serum PSA level (> 10 ng/mL compared with ≤ 10 ng/mL).

Table 2. The levels of miR-148a-3p and miR-485-5p in preoperative serum of PCa patients
Parameters | Number (n) | miR-148a-3p | t | P | miR-485-5p | t | P
Age | | | 0.525 | 0.584 | | 0.952 | 0.350
≥ 60 years | 127 | 4.85 ± 1.66 | | | 1.12 ± 0.54 | |
< 60 years | 71 | 4.73 ± 1.58 | | | 1.18 ± 0.41 | |
Index lesion diameter | | | 0.341 | 0.772 | | 1.525 | 0.080
No index | 103 | 4.85 ± 1.63 | | | 1.10 ± 0.44 | |
≤ 7 mm | 95 | 4.77 ± 1.82 | | | 1.18 ± 0.51 | |
PSA level | | | 0.711 | 0.500 | | 1.915 | 0.083
> 10 ng/mL | 165 | 4.85 ± 2.12 | | | 1.11 ± 0.51 | |
≤ 10 ng/mL | 33 | 4.66 ± 1.98 | | | 1.32 ± 0.62 | |
Gleason score | | | 7.520 | < 0.001 | | 17.58 | < 0.001
< 7 points | 143 | 3.98 ± 1.25 | | | 1.40 ± 0.58 | |
≥ 7 points | 55 | 7.05 ± 3.47 | | | 0.52 ± 0.23 | |
TNM staging | | | 7.113 | < 0.001 | | 18.225 | < 0.001
cT1-2a | 121 | 4.31 ± 2.01 | | | 4.02 ± 1.25 | |
cT2b and higher | 77 | 8.52 ± 2.28 | | | 0.42 ± 0.28 | |
Data are shown as means ± SD. Results of Student's t-test; P < 0.05 is considered significant.
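The group comparisons in Table 2 are two-sample t-tests. Since only summary statistics are reported, they can be approximately rechecked with scipy.stats.ttest_ind_from_stats, as in the sketch below for the miR-148a-3p Gleason comparison; the resulting statistic will not necessarily match the published t = 7.520, which may reflect a different test variant.

```python
from scipy.stats import ttest_ind_from_stats

# miR-148a-3p by Gleason score, summary statistics from Table 2.
res = ttest_ind_from_stats(
    mean1=7.05, std1=3.47, nobs1=55,    # Gleason >= 7
    mean2=3.98, std2=1.25, nobs2=143,   # Gleason < 7
    equal_var=False,                    # Welch's t-test; variances differ widely
)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.2e}")
```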
Serum miR-148a-3p increases while miR-485-5p decreases in postoperative recurrence compared with the non-recurrence group: We next examined the possible correlation between serum levels of miR-148a-3p and miR-485-5p and the recurrence of prostate cancer. Serum levels of miR-148a-3p in the recurrence group (5.97 ± 0.18) were significantly higher than those of the non-recurrence group (4.53 ± 0.07) (t = 8.502, P < 0.001). In contrast, serum miR-485-5p levels in the recurrence group (0.27 ± 0.03) were lower than those of the non-recurrence group (1.22 ± 0.02) (t = 27.72, P < 0.001) (Fig. 1).

Fig. 1. Dot plots showing increased miR-148a-3p and decreased miR-485-5p in the recurrence group compared with the non-recurrence group. The levels of serum miR-148a-3p and miR-485-5p expression in the non-recurrence group (n = 153) and the recurrence group (n = 45) were determined via qRT-PCR analysis.
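The expression values in Fig. 1 come from qRT-PCR normalized to the GAPDH and U6 internal controls (see Methods). The exact normalization formula is not given in the text, so the sketch below shows the standard Livak 2^(−ΔΔCt) calculation as one plausible implementation; all Ct values are hypothetical.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^(-ddCt) relative quantification.

    ct_target / ct_ref: mean Ct of the miRNA and the internal control
    (e.g., U6) in the sample of interest; *_cal are the same quantities
    in a calibrator (e.g., a reference serum pool).
    """
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-dd_ct)

# Hypothetical technical-triplicate Ct values for one serum sample:
ct_mir148a = np.mean([27.8, 27.9, 27.7])
ct_u6 = np.mean([24.1, 24.0, 24.2])
# Hypothetical calibrator Ct values:
print(relative_expression(ct_mir148a, ct_u6,
                          ct_target_cal=29.5, ct_ref_cal=24.1))
```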
Both miR-148a-3p and miR-485-5p demonstrate great potential as sensitive and specific biomarkers for prostate cancer: We then evaluated the sensitivity and specificity of using serum levels of miR-148a-3p and miR-485-5p to distinguish non-recurrence from recurrence. We performed a receiver operating characteristic (ROC) curve analysis of the variations in serum levels of miR-148a-3p and miR-485-5p and calculated the area under the ROC curve (AUC) and optimal cut-off values (Fig. 2; Table 3). Both miR-148a-3p and miR-485-5p demonstrated significant sensitivity and specificity in distinguishing non-recurrence from recurrence. The AUC for recurrence with combined detection was greater than with individual detection (Z = 2.42, P = 0.0155; Z = 3.234, P = 0.0012). Thus, using the two molecules combined to predict prostate cancer recurrence was even more specific and sensitive than single detection.

Fig. 2. Receiver operating characteristic (ROC) curve analysis of the variations of miR-148a-3p and miR-485-5p in PCa patients. P < 0.05 was considered statistically significant.

Table 3. ROC curve analysis of the variations of miR-148a-3p and miR-485-5p in recurrent PCa
Marker | AUC | 95% CI | Sensitivity (%) | Specificity (%) | BCV | P
miR-148a-3p | 0.825 | 0.765–0.875 | 82.22 | 73.2 | 5.01 | < 0.0001
miR-485-5p | 0.790 | 0.726–0.844 | 64.44 | 94.7 | 0.70 | < 0.0001
miR-148a-3p & miR-485-5p | 0.913 | 0.865–0.948 | 91.11 | 81.7 | 0.18 | < 0.0001
AUC: area under the ROC curve; BCV: best critical value. P < 0.05 was considered statistically significant.
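A minimal sketch of the ROC workflow above, run on simulated marker values since the per-patient data are not published: AUCs via scikit-learn, a Youden-index cut-off (the usual way a "best critical value" is chosen), and a logistic-regression score standing in for however the authors combined the two markers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Simulated serum levels: 153 non-recurrence (label 0), 45 recurrence
# (label 1), loosely mimicking the reported group means.
y = np.r_[np.zeros(153), np.ones(45)]
mir148a = np.r_[rng.normal(4.53, 0.9, 153), rng.normal(5.97, 1.2, 45)]
mir485 = np.r_[rng.normal(1.22, 0.25, 153), rng.normal(0.27, 0.30, 45)]

print("AUC miR-148a-3p:", roc_auc_score(y, mir148a))
print("AUC miR-485-5p :", roc_auc_score(y, -mir485))  # lower values mark recurrence

# Youden-index cut-off for miR-148a-3p:
fpr, tpr, thr = roc_curve(y, mir148a)
best = np.argmax(tpr - fpr)
print("cut-off:", thr[best], "sens:", tpr[best], "spec:", 1 - fpr[best])

# Combined detection: logistic-regression score over both markers.
X = np.c_[mir148a, mir485]
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print("AUC combined:", roc_auc_score(y, combined))
```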
Conclusion
In conclusion, our study demonstrated that pre-surgery serum levels of miR-148a-3p and miR-485-5p could be used as sensitive and specific biomarkers for diagnosis, evaluating cancer progression stages, and anticipating postoperative recurrence.
[ "Background", "Study population", "Treatment and follow-up", "Clinical data collection", "Measurement of serum prostate-specific antigen levels", "Measurement of serum miR-148a-3p, mir-485-5p levels", "Reagents and equipment", "Specimen collection", "Determination of serum miR-148a-3p, miR-485-5p levels", "Statistical analysis", "Characteristics of the study population", "Serum miR-148a-3p increases while miR-485-5p decreases in the late stages of prostate cancer compared with earlier stages", "Serum miR-148a-3p increases while miR-485-5p decreases in postoperative recurrence compared with the non-recurrence group", "Both miR-148a-3p and miR-485-5p demonstrate great potential as sensitive and specific biomarkers for prostate cancer", "Limitations" ]
[ "As the fifth most frequent cause of male death worldwide, prostate cancer (PCa) counts for more than 30% of all newly diagnosed cancers with an approximately 5-year survival rate of 9% [1, 2]. Advanced prostate cancer exhibits signs and symptoms, including typically slow urination, difficulty emptying the bladder, blood in the urine, and back pain [1, 3]. Screening and diagnosis of PCa are important for detecting cases, predicting disease outcomes, guiding clinical management decisions, and avoiding overtreatment [4–6]. Thus, rapid sensitive diagnostic methods are in demand. Including measurements of serum prostate-specific antigens (PSA/KLK3) in the diagnosis of prostate cancer has led to a significant increase in the detection of early-stage PCa (Gleason < 6) [6–10]. However, the specificity and sensitivity of serum PSA are still not high enough for detecting early-stage prostate cancer or precisely evaluating the progression or severity of cancer since elevated PSA levels are also detected in benign prostatic hyperplasia, inflammation, and other urinary tract diseases [11]. Thus, sensitive and specific biomarkers for prostate cancer diagnosis are in high demand.\nApart from the difficulties in the early diagnosis of prostate cancer, postoperative recurrence is another major issue threatening the patients’ health and recovery. Postoperative recurrence is when the surviving prostate cancer cells become evident again after initial treatment, such as surgery or radiation therapy documenting the removal of cancer cells [12]. It occurs in around 15% of patients and always requires a second cancer treatment at least 6 months after surgery [13]. Clinically, the prediction of postoperative recurrence is based on serum prostate-specific antigens. A second increase following the initial drop-off after surgery or radiation therapy usually indicates the recurrence of prostate cancer cells. However, due to the low sensitivity and specificity of the currently used prostate-specific antigens, predicting cancer recurrence faces the same difficulty as early diagnosis of prostate cancer. Therefore, identifying new biomarkers is in need.\nRecent studies have reported the potential of microRNAs (miRNAs) as diagnostic biomarkers and therapeutic targets of PCa. MicroRNAs are small single-strand non-coding RNA molecules regulating gene expression through complementary base pairing with target mRNAs, affecting the post-transcription processing of ~ 60% of the human genome through base-pairing with target mRNAs [6, 14]. The involvement of miRNAs in the development and progression of numerous tumors has been widely reported, highlighting the potential for cancer diagnosis and treatments [15]. The Association of miRNAs with prostate cancers was initially reported by a large-scale miRNA analysis using a large collection of samples, including prostate cancers [6]. At least 12 miRNAs were identified as overexpressed in prostate cancer, and further studies confirmed the role of miRNAs as new biomarkers for prostate cancer [16]. After that, numerous reports have suggested the essential roles of microRNAs (miRNAs) in PCa formation and progression [17]. The expression of those miRNAs constantly changes significantly during prostate tumors, indicating their potential as clinical biomarkers [18, 19]. miR-148a-3p and miR-485-5p are miRNAs showing particular diagnostic potential related to prostate cancer. 
miR-148a-3p, located on the 7p15.2 region of human chromosomes, has a well-established role in tumor development [20]. Published studies suggested that miR-148a-3p is significantly decreased in various cancer cells, such as bladder cancer cells [21], suggesting its potential as a cancer biomarker. miR-485-5p, on the other hand, has been identified as a tumor suppressor with a significant inhibitory effect on the proliferation and differentiation of gastric cancer [22, 23]. miR-485-5p inhibits the expression of hypoxia-inducible factor 1 and impedes hepatocellular carcinoma cell differentiation, inhibiting the growth of liver cancer cells [24]. However, whether miR-148a-3p and miR-485-5p participate in the progression of prostate cancer is still unknown, and their potential as biomarkers for diagnosing and predicting prostate cancer recurrence has not been examined.\nIn the present study, we examined the changes in serum miR-148a-3p and miR-485-5p levels in patients with prostate cancer at different stages and with recurrence, and evaluated the correlations between serum miR-148a-3p and miR-485-5p levels and cancer progression and recurrence, aiming to identify new diagnostic biomarkers for prostate cancer.", "This retrospective study was conducted on patients (aged 64.05 ± 10.11 years) with prostate cancer admitted to the Department of Urology, Yan'an People's Hospital. Clinical data of patients were collected following approved protocols of the Committee of Yan'an People's Hospital, with written informed agreements obtained from patients. All patients adhered to the study protocol, and informed consent was obtained from all patients. A total of 198 cases were enrolled, all of which satisfied the following criteria.\nInclusion criteria: (1) diagnosed with prostate cancer by surgical histopathological examination; (2) no history of prostate surgery; (3) with a traceable clinical history.\nExclusion criteria: (1) with severe prostatic hyperplasia; (2) with another malignant tumor; (3) with severe urinary system disease; (4) with coagulation dysfunction.", "All involved cases underwent radical prostatectomy. Radiotherapy (adjuvant radiotherapy for patients with low or intermediate risk) was routinely given according to the China guideline for the screening and early detection of prostate cancer (2022, Beijing) [25]. The cases were followed up postoperatively for two years. Re-examination of serum PSA, ultrasonography, and MRI was performed once every 2 months. Recurrence or metastasis was determined based on the following criteria: (1) continuous serum PSA ≥ 0.2 ng/mL; (2) MRI diffusion-weighted imaging sequences show an obvious abnormal high-signal shadow in other organs (mainly bone). Forty-five cases were determined as recurrence, and 153 cases as non-recurrence.", "The clinical history data collected included age, body mass index, hypertension, diabetes, smoking history, and family history of prostate cancer. 
According to EAU guidelines 2022, preoperative Gleason scores were used to evaluate the risk of PCa (low-risk PCa: Gleason < 7; high-risk PCa: Gleason ≥ 7), together with the following criteria: serum PSA level: low-risk PCa (≤ 10 ng/mL), intermediate/high-risk PCa (> 10 ng/mL); TNM classification for the staging of PCa: low-risk PCa (cT1-2a), intermediate/high-risk PCa (cT2b and higher); index lesion diameter: low-risk PCa (no index), intermediate/high-risk PCa (≤ 7 mm).", "Serum levels of PSA (Human KLK3/PSA (Sandwich ELISA) ELISA Kit; Catalog#: LS-F22855-1; Detection Range: 0.938-60 ng/ml; Intra-assay CV% <10%; Inter-assay CV% <10%) were measured according to the manufacturer's instructions.", "Reagents and equipment: RNA extraction reagent: TRI Reagent® BD (TB 126), Catalog # TB126, Interassay CV: <10%, Intraassay CV: <10%, MRCGENE. Real-time fluorescence quantitative PCR kit: One-Step TB Green PrimeScript RT-PCR Kit II (Perfect Real Time), Catalog # RR086B, Interassay CV: <10%, Intraassay CV: <10%, Takara Bio. Reverse transcription kit for efficient preparation of cDNA: PrimeScript Reverse Transcriptase, Catalog # 2680B, Interassay CV: <10%, Intraassay CV: <10%, Takara Bio. Wizard2 2-Detector Gamma Counter, 550 samples, Catalog # C 2470-0020, PerkinElmer, Germany. PCR instrument: ABI GeneAmp 9700 PCR Thermal Cycler, Catalog # ABI-97, ABI, USA. NanoDrop™ 2000/2000c Spectrophotometers, Catalog # ND2000CLAPTOP, Thermo Fisher Scientific, USA.\nSpecimen collection: Before the operation, 5 ml of fasting venous blood was collected, placed in an anticoagulation tube, and centrifuged at 2500 rpm for 10 min to collect supernatants. All samples were stored at − 80 °C.\nDetermination of serum miR-148a-3p and miR-485-5p levels: Total RNA was extracted from 200 µl of aliquoted plasma using the RNA extraction reagent. An ND2000C UV spectrophotometer was used to determine the concentration and purity of the RNA. RNA samples with A260/A280 between 1.8 and 2.1 were used for cDNA synthesis with the reverse transcription kit. Relative miRNA levels were measured by RT-qPCR using the real-time fluorescence quantitative PCR kit with the following setup: PCR reaction system: cDNA template 3.0 µL, Premix Ex Taq™ 10 µL, upstream primer 1 µL, downstream primer 1 µL, and ddH2O 5.0 µL. Reaction conditions: 94 °C for 5 min; 94 °C for 30 s, 58 °C for 30 s, 72 °C for 10 min; 40 cycles. Technical triplicates were performed. 
GAPDH and U6 were measured as internal controls to calculate the relative expression levels of miR-148a-3p and miR-485-5p.\nFor miRNA measurements, the primer design and synthesis were completed by Guangzhou Ruibo Biotechnology Co., Ltd. miR-148a-3p: Forward: AGC TCT GCT ACT GAG ATG CG, Reverse: GAC TGC CAG CTA TCA TCG; miR-485-5p: Forward: CTG GAA CGG TGA AGG TGA CA, Reverse: AAG GGA CTT CCT GTA ACA ACG CA; U6: Forward: GCT TCG GCA GCA CAT ATA CTA AAA T, Reverse: CGC TTC ACG AAT TTG CGT GTC AT; GAPDH: Forward: AAG GTG AAG GTC GGA GTC A, Reverse: GGA AGA TGG TGA TGG GAT TT.", "RNA extraction reagent: TRI Reagent® BD (TB 126), Catalog # TB126, Interassay CV: <10%, Intraassay CV: <10%, MRCGENE. Real-time fluorescence quantitative PCR kit: One-Step TB Green PrimeScript RT-PCR Kit II (Perfect Real Time), Catalog # RR086B, Interassay CV: <10%, Intraassay CV: <10%, Takara Bio. Reverse transcription kit for efficient preparation of cDNA: PrimeScript Reverse Transcriptase, Catalog # 2680B, Interassay CV: <10%, Intraassay CV: <10%, Takara Bio. Wizard2 2-Detector Gamma Counter, 550 samples, Catalog # C 2470-0020, PerkinElmer, Germany. PCR instrument: ABI GeneAmp 9700 PCR Thermal Cycler, Catalog # ABI-97, ABI, USA. NanoDrop™ 2000/2000c Spectrophotometers, Catalog # ND2000CLAPTOP, Thermo Fisher Scientific, USA.", "Before the operation, 5 ml of fasting venous blood was collected, placed in an anticoagulation tube, and centrifuged at 2500 rpm for 10 min to collect supernatants. All samples were stored at − 80 °C.", "Total RNA was extracted from 200 µl of aliquoted plasma using the RNA extraction reagent. An ND2000C UV spectrophotometer was used to determine the concentration and purity of the RNA. RNA samples with A260/A280 between 1.8 and 2.1 were used for cDNA synthesis with the reverse transcription kit. Relative miRNA levels were measured by RT-qPCR using the real-time fluorescence quantitative PCR kit with the following setup: PCR reaction system: cDNA template 3.0 µL, Premix Ex Taq™ 10 µL, upstream primer 1 µL, downstream primer 1 µL, and ddH2O 5.0 µL. Reaction conditions: 94 °C for 5 min; 94 °C for 30 s, 58 °C for 30 s, 72 °C for 10 min; 40 cycles. Technical triplicates were performed. 
"SPSS 22.0 software was used for statistical analysis. Data are expressed as the mean ± standard deviation (x̄ ± s). Comparisons between two groups were performed with the independent-samples t-test, and count data were compared using the χ2 test. Receiver operating characteristic (ROC) curve analysis was used to evaluate the sensitivity and specificity of miR-148a-3p and miR-485-5p as biomarkers. P < 0.05 was considered to indicate a statistically significant difference.",
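As a concrete illustration of this analysis plan, the sketch below reproduces the two hypothesis tests with SciPy rather than SPSS; the group sizes match the study (153 vs 45), but the values are randomly generated placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder measurement data standing in for the two groups.
non_recurrence = rng.normal(4.53, 0.9, 153)   # e.g. serum miR-148a-3p
recurrence = rng.normal(5.97, 1.2, 45)

# Independent-samples t-test for measurement data (mean ± SD comparison).
t_stat, p_t = stats.ttest_ind(recurrence, non_recurrence)

# Chi-square test for count data, e.g. hypertension: 14/45 vs 49/153 (Table 1).
table = np.array([[14, 45 - 14], [49, 153 - 49]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.3f}, P = {p_t:.4f}")
print(f"chi2 = {chi2:.3f}, P = {p_chi:.4f}")
```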
"The baseline clinical characteristics of the study population are summarized in Table 1. Briefly, there were no statistically significant differences (P > 0.05) in age, body mass index, hypertension, diabetes, smoking history, or family history of PCa between the recurrence and non-recurrence groups. Notably, the average age of all involved cases was 64.05 ± 10.11 years; the average age of patients without recurrence was 63.8 ± 8.4 years, and that of patients with recurrence was 64.0 ± 8.1 years.

Table 1 Clinical characteristics and perioperative data
Parameter | Non-recurrence group (n = 153) | Recurrence group (n = 45) | t/χ2 | P
Age (years) | 63.8 ± 8.4 | 64.0 ± 8.1 | 0.638 | 0.524
BMI (kg/m2) | 22.3 ± 2.4 | 22.5 ± 2.6 | 1.554 | 0.152
Hypertension (%) | 49 (32.02) | 14 (31.11) | 0.095 | 0.763
Diabetes (%) | 17 (11.11) | 4 (8.89) | 0.157 | 0.741
Smoking history (%) | 84 (54.90) | 19 (42.22) | 3.548 | 0.052
Family history of PCa (%) | 9 (5.88) | 2 (4.44) | 1.146 | 0.288
Data are shown as mean ± SD. BMI, body mass index; t/χ2, Student's t-test or chi-square statistic. P < 0.05 is considered significant.", "We first examined the correlation between pre-surgery serum levels of miR-148a-3p and miR-485-5p and other clinical features of the involved cases before any operation (Table 2). We compared the levels of miR-148a-3p and miR-485-5p in patients at different stages based on either TNM staging or Gleason score. According to TNM staging, the involved cases were separated into two groups: 121 cases at stage cT1-2a and 77 cases at stage cT2b or higher. Pre-surgery serum levels of miR-148a-3p in patients at stage cT1-2a (4.31 ± 2.01) were significantly lower than in patients at stage cT2b or higher (8.52 ± 2.28), while pre-surgery serum levels of miR-485-5p at stage cT1-2a (4.02 ± 1.25) were significantly higher than at stage cT2b or higher (0.42 ± 0.28). According to the Gleason score, the involved cases were divided into two groups: 143 cases with Gleason < 7 and 55 cases with Gleason ≥ 7. Pre-surgery serum levels of miR-148a-3p in Gleason < 7 cases (3.98 ± 1.25) were markedly lower than in Gleason ≥ 7 cases (7.05 ± 3.47), while pre-surgery serum levels of miR-485-5p in Gleason < 7 cases (1.40 ± 0.58) were significantly higher than in Gleason ≥ 7 cases (0.52 ± 0.23). Neither serum miR-148a-3p nor miR-485-5p levels differed significantly (P > 0.05) by age (≥ 60 vs < 60 years), index lesion diameter (no index lesion vs index lesion(s) ≤ 7 mm), or serum PSA level (> 10 vs ≤ 10 ng/mL).

Table 2 The levels of miR-148a-3p and miR-485-5p in preoperative serum of PCa patients
Parameter | n | miR-148a-3p | t | P | miR-485-5p | t | P
Age ≥ 60 years | 127 | 4.85 ± 1.66 | 0.525 | 0.584 | 1.12 ± 0.54 | 0.952 | 0.350
Age < 60 years | 71 | 4.73 ± 1.58 | | | 1.18 ± 0.41 | |
No index lesion | 103 | 4.85 ± 1.63 | 0.341 | 0.772 | 1.10 ± 0.44 | 1.525 | 0.080
Index lesion ≤ 7 mm | 95 | 4.77 ± 1.82 | | | 1.18 ± 0.51 | |
PSA > 10 ng/mL | 165 | 4.85 ± 2.12 | 0.711 | 0.500 | 1.11 ± 0.51 | 1.915 | 0.083
PSA ≤ 10 ng/mL | 33 | 4.66 ± 1.98 | | | 1.32 ± 0.62 | |
Gleason < 7 points | 143 | 3.98 ± 1.25 | 7.520 | < 0.001 | 1.40 ± 0.58 | 17.58 | < 0.001
Gleason ≥ 7 points | 55 | 7.05 ± 3.47 | | | 0.52 ± 0.23 | |
TNM cT1-2a | 121 | 4.31 ± 2.01 | 7.113 | < 0.001 | 4.02 ± 1.25 | 18.225 | < 0.001
TNM cT2b and higher | 77 | 8.52 ± 2.28 | | | 0.42 ± 0.28 | |
Data are shown as mean ± SD (Student's t-test). P < 0.05 is considered significant.", "We next examined the possible correlation between serum levels of miR-148a-3p and miR-485-5p and the recurrence of prostate cancer. Serum levels of miR-148a-3p in the recurrence group (5.97 ± 0.18) were significantly higher than in the non-recurrence group (4.53 ± 0.07) (t = 8.502, P < 0.001). In contrast, serum miR-485-5p levels in the recurrence group (0.27 ± 0.03) were lower than in the non-recurrence group (1.22 ± 0.02) (t = 27.72, P < 0.001) (Fig. 1).

Fig. 1 Dot plots showing increased miR-148a-3p and decreased miR-485-5p in the recurrence group compared with the non-recurrence group. The levels of serum miR-148a-3p and miR-485-5p expression in the non-recurrence group (n = 153) and recurrence group (n = 45) were determined by qRT-PCR analysis.", "We then evaluated the sensitivity and specificity of serum miR-148a-3p and miR-485-5p levels in distinguishing non-recurrence from recurrence. We performed receiver operating characteristic (ROC) curve analysis of the variations in serum miR-148a-3p and miR-485-5p levels and calculated the area under the ROC curve (AUC) and optimal cut-off values (Fig. 2; Table 3). Both miR-148a-3p and miR-485-5p showed significant sensitivity and specificity in distinguishing non-recurrence from recurrence, and the AUC for recurrence with combined detection was greater than that of either individual marker (Z = 2.42, P = 0.0155; Z = 3.234, P = 0.0012). Thus, using the two molecules in combination to predict prostate cancer recurrence was even more specific and sensitive than single detection.
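The ROC step can be made concrete with scikit-learn. The paper reports a "best critical value" for each marker but does not say how it was chosen; the Youden index (J = sensitivity + specificity − 1) used below is one common convention, and the scores here are simulated placeholders with the study's group sizes.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def roc_with_cutoff(y_true, scores):
    """Return AUC plus the cut-off maximizing the Youden index."""
    auc = roc_auc_score(y_true, scores)
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    best = int(np.argmax(tpr - fpr))          # Youden index J = tpr - fpr
    return auc, thresholds[best], tpr[best], 1 - fpr[best]

rng = np.random.default_rng(1)
y = np.r_[np.ones(45), np.zeros(153)]         # 1 = recurrence
mir148 = np.r_[rng.normal(5.97, 0.9, 45), rng.normal(4.53, 0.9, 153)]

auc, cutoff, sens, spec = roc_with_cutoff(y, mir148)
print(f"AUC = {auc:.3f}, cut-off = {cutoff:.2f}, "
      f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```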
Fig. 2 Receiver operating characteristic (ROC) curve analysis of the variations of miR-148a-3p and miR-485-5p in PCa patients. P < 0.05 was considered statistically significant.

Table 3 ROC curve analysis of the variations of miR-148a-3p and miR-485-5p in recurrent PCa
Marker | AUC | 95% CI | Sensitivity (%) | Specificity (%) | BCV | P
miR-148a-3p | 0.825 | 0.765–0.875 | 82.22 | 73.2 | 5.01 | < 0.0001
miR-485-5p | 0.790 | 0.726–0.844 | 64.44 | 94.77 | 0.70 | < 0.0001
miR-148a-3p & miR-485-5p | 0.913 | 0.865–0.948 | 91.11 | 81.7 | 0.18 | < 0.0001
AUC, area under the ROC curve; BCV, best critical value. P < 0.05 was considered statistically significant.", "Although our study identified promising biomarkers, it still has some limitations: (a) it enrolled only patients with PCa from the Shaanxi area and thus may not fully represent the global PCa population; (b) it focused on only two miRNAs, miR-148a-3p and miR-485-5p, and a comprehensive analysis using a high-throughput technique to screen miRNA profiles is still needed to identify diagnostic markers; (c) it measured miR-148a-3p and miR-485-5p in a small population, and future studies are needed to validate miR-148a-3p and miR-485-5p as biomarkers in a larger population." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Materials and methods", "Study population", "Treatment and follow-up", "Clinical data collection", "Measurement of serum prostate-specific antigen levels", "Measurement of serum miR-148a-3p, mir-485-5p levels", "Reagents and equipment", "Specimen collection", "Determination of serum miR-148a-3p, miR-485-5p levels", "Statistical analysis", "Results", "Characteristics of the study population", "Serum miR-148a-3p increases while miR-485-5p decreases in the late stages of prostate cancer compared with earlier stages", "Serum miR-148a-3p increases while miR-485-5p decreases in postoperative recurrence compared with the non-recurrence group", "Both miR-148a-3p and miR-485-5p demonstrate great potential as sensitive and specific biomarkers for prostate cancer", "Discussion", "Limitations", "Conclusion" ]
[ "As the fifth most frequent cause of male death worldwide, prostate cancer (PCa) counts for more than 30% of all newly diagnosed cancers with an approximately 5-year survival rate of 9% [1, 2]. Advanced prostate cancer exhibits signs and symptoms, including typically slow urination, difficulty emptying the bladder, blood in the urine, and back pain [1, 3]. Screening and diagnosis of PCa are important for detecting cases, predicting disease outcomes, guiding clinical management decisions, and avoiding overtreatment [4–6]. Thus, rapid sensitive diagnostic methods are in demand. Including measurements of serum prostate-specific antigens (PSA/KLK3) in the diagnosis of prostate cancer has led to a significant increase in the detection of early-stage PCa (Gleason < 6) [6–10]. However, the specificity and sensitivity of serum PSA are still not high enough for detecting early-stage prostate cancer or precisely evaluating the progression or severity of cancer since elevated PSA levels are also detected in benign prostatic hyperplasia, inflammation, and other urinary tract diseases [11]. Thus, sensitive and specific biomarkers for prostate cancer diagnosis are in high demand.\nApart from the difficulties in the early diagnosis of prostate cancer, postoperative recurrence is another major issue threatening the patients’ health and recovery. Postoperative recurrence is when the surviving prostate cancer cells become evident again after initial treatment, such as surgery or radiation therapy documenting the removal of cancer cells [12]. It occurs in around 15% of patients and always requires a second cancer treatment at least 6 months after surgery [13]. Clinically, the prediction of postoperative recurrence is based on serum prostate-specific antigens. A second increase following the initial drop-off after surgery or radiation therapy usually indicates the recurrence of prostate cancer cells. However, due to the low sensitivity and specificity of the currently used prostate-specific antigens, predicting cancer recurrence faces the same difficulty as early diagnosis of prostate cancer. Therefore, identifying new biomarkers is in need.\nRecent studies have reported the potential of microRNAs (miRNAs) as diagnostic biomarkers and therapeutic targets of PCa. MicroRNAs are small single-strand non-coding RNA molecules regulating gene expression through complementary base pairing with target mRNAs, affecting the post-transcription processing of ~ 60% of the human genome through base-pairing with target mRNAs [6, 14]. The involvement of miRNAs in the development and progression of numerous tumors has been widely reported, highlighting the potential for cancer diagnosis and treatments [15]. The Association of miRNAs with prostate cancers was initially reported by a large-scale miRNA analysis using a large collection of samples, including prostate cancers [6]. At least 12 miRNAs were identified as overexpressed in prostate cancer, and further studies confirmed the role of miRNAs as new biomarkers for prostate cancer [16]. After that, numerous reports have suggested the essential roles of microRNAs (miRNAs) in PCa formation and progression [17]. The expression of those miRNAs constantly changes significantly during prostate tumors, indicating their potential as clinical biomarkers [18, 19]. miR-148a-3p and miR-485-5p are miRNAs showing particular diagnostic potential related to prostate cancer. 
miR-148a-3p, located in the 7p15.2 region of the human genome, has a well-established role in tumor development [20]. Published studies suggest that miR-148a-3p is significantly decreased in various cancer cells, such as bladder cancer cells [21], supporting its potential as a cancer biomarker. miR-485-5p, on the other hand, has been identified as a tumor suppressor with a significant inhibitory effect on the proliferation and differentiation of gastric cancer [22, 23]. miR-485-5p inhibits the expression of hypoxia-inducible factor 1 and impedes hepatocellular carcinoma cell differentiation, inhibiting the growth of liver cancer cells [24]. However, whether miR-148a-3p and miR-485-5p participate in the progression of prostate cancer is still unknown, and their potential as biomarkers for diagnosing and predicting prostate cancer recurrence has not been examined.
In the present study, we examined the changes in serum miR-148a-3p and miR-485-5p levels in patients with prostate cancer at different stages and with or without recurrence, and we evaluated the correlations between these levels and cancer progression and recurrence, aiming to identify new diagnostic biomarkers for prostate cancer.", "Study population Retrospectively, this study was conducted on patients (aged 64.05 ± 10.11 years) with prostate cancer admitted to the Department of Urology, Yan'an People's Hospital. Clinical data were collected following protocols approved by the Committee of Yan'an People's Hospital, and written informed consent was obtained from all patients. All patients adhered to the study protocol. A total of 198 cases were enrolled, and all cases satisfied the following criteria.
Inclusion criteria: (1) diagnosed with prostate cancer by surgical histopathological examination; (2) no history of prostate surgery; (3) a traceable clinical history.
Exclusion criteria: (1) severe prostatic hyperplasia; (2) another malignant tumor; (3) severe urinary system disease; (4) coagulation dysfunction.
Treatment and follow-up All involved cases underwent radical prostatectomy. Radiotherapy (adjuvant radiotherapy for patients with low or intermediate risk) was routinely given according to the China guideline for the screening and early detection of prostate cancer (2022, Beijing) [25]. The cases were followed up postoperatively for two years, with re-examination of serum PSA, ultrasonography, and MRI once every 2 months. Recurrence or metastasis was determined based on the following criteria: (1) continuous serum PSA ≥ 0.2 ng/mL; (2) MRI diffusion-weighted imaging sequences showing an obvious abnormal high-signal shadow in other organs (mainly bone). Forty-five cases were classified as recurrence and 153 cases as non-recurrence.
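For clarity, the follow-up rule can be expressed as a small decision function. This is a sketch of one reading of the criteria: "continuous serum PSA ≥ 0.2 ng/mL" is interpreted here as at least two consecutive re-examinations at or above the threshold, which is an assumption the paper does not spell out.

```python
def is_recurrence(psa_series_ng_ml, mri_abnormal_high_signal):
    """Apply the study's recurrence/metastasis criteria to one patient.

    psa_series_ng_ml: PSA values from the 2-monthly post-operative
    re-examinations, in chronological order.
    mri_abnormal_high_signal: True if diffusion-weighted MRI shows an
    obvious abnormal high-signal shadow in other organs (mainly bone).
    """
    # Assumption: "continuous PSA >= 0.2" means the last two readings both qualify.
    persistent_psa = (len(psa_series_ng_ml) >= 2
                      and all(v >= 0.2 for v in psa_series_ng_ml[-2:]))
    return persistent_psa or mri_abnormal_high_signal

print(is_recurrence([0.05, 0.21, 0.35], mri_abnormal_high_signal=False))  # True
```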
Clinical data collection The clinical history data collected included age, body mass index, hypertension, diabetes, smoking history, and family history of prostate cancer. According to EAU guidelines 2022, preoperative Gleason scores were used to evaluate the risk of PCa (low-risk PCa: Gleason < 7; high-risk PCa: Gleason ≥ 7) together with the following criteria: serum PSA level: low-risk PCa (≤ 10 ng/mL), intermediate/high-risk PCa (> 10 ng/mL); TNM classification for the staging of PCa: low-risk PCa (cT1-2a), intermediate/high-risk PCa (cT2b and higher); index lesion diameter: low-risk PCa (no index lesion), intermediate/high-risk PCa (index lesion(s) ≤ 7 mm).
Measurement of serum prostate-specific antigen levels Serum levels of PSA (Human KLK3/PSA (Sandwich ELISA) ELISA Kit; Catalog #: LS-F22855-1; detection range: 0.938–60 ng/mL; intra-assay CV < 10%; inter-assay CV < 10%) were measured according to the manufacturer's instructions.
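The grouping criteria above can be summarized as a simple classifier. This sketch mirrors only the coarse low- versus intermediate/high-risk split used in this study, not the full EAU risk table; the function name and the set of stages counted as cT1-2a are illustrative assumptions.

```python
def risk_group(gleason_score, psa_ng_ml, tnm_stage):
    """Coarse PCa risk split following the study's criteria."""
    low_stage = tnm_stage in {"cT1", "cT1a", "cT1b", "cT1c", "cT2a"}  # cT1-2a
    if gleason_score < 7 and psa_ng_ml <= 10 and low_stage:
        return "low risk"
    return "intermediate/high risk"

print(risk_group(6, 8.2, "cT2a"))   # low risk
print(risk_group(7, 8.2, "cT2a"))   # intermediate/high risk
```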
Measurement of serum miR-148a-3p, miR-485-5p levels
Reagents and equipment RNA extraction reagent: TRI Reagent® BD, Catalog # TB126, inter-assay CV < 10%, intra-assay CV < 10%, MRCGENE. Real-time fluorescence quantitative PCR kit: One-Step TB Green PrimeScript RT-PCR Kit II (Perfect Real Time), Catalog # RR086B, inter-assay CV < 10%, intra-assay CV < 10%, Takara Bio. Reverse transcription kit for efficient preparation of cDNA: PrimeScript Reverse Transcriptase, Catalog # 2680B, inter-assay CV < 10%, intra-assay CV < 10%, Takara Bio. Wizard2 2-Detector Gamma Counter (550 samples), Catalog # C2470-0020, PerkinElmer, Germany. PCR instrument: ABI GeneAmp 9700 Thermal Cycler, Catalog # ABI-97, ABI, USA. NanoDrop™ 2000/2000c spectrophotometer, Catalog # ND2000CLAPTOP, Thermo Fisher Scientific, USA.
Specimen collection Before the operation, 5 mL of fasting venous blood was collected into an anticoagulation tube and centrifuged at 2500 rpm for 10 min to collect the supernatant. All samples were stored at −80 °C.
Determination of serum miR-148a-3p, miR-485-5p levels Total RNA was extracted from 200 µL of aliquoted plasma using the RNA extraction reagent. The concentration and purity of the RNA were determined with the NanoDrop 2000c UV spectrophotometer, and samples with an A260/A280 ratio of 1.8–2.1 were used for cDNA synthesis with the reverse transcription kit. Relative miRNA levels were measured by RT-qPCR using the real-time fluorescence quantitative PCR kit with the following setup. Reaction system: cDNA template 3.0 µL, Premix Ex Taq™ 10 µL, forward primer 1 µL, reverse primer 1 µL, and ddH2O 5.0 µL. Reaction conditions: 94 °C for 5 min, followed by 40 cycles of 94 °C for 30 s, 58 °C for 30 s, and 72 °C for 10 min. Technical triplicates were performed. GAPDH and U6 were measured as internal controls to calculate the relative expression levels of miR-148a-3p and miR-485-5p.
For miRNA measurements, primer design and synthesis were completed by Guangzhou Ruibo Biotechnology Co., Ltd. miR-148a-3p: Forward: AGC TCT GCT ACT GAG ATG CG, Reverse: GAC TGC CAG CTA TCA TCG; miR-485-5p: Forward: CTG GAA CGG TGA AGG TGA CA, Reverse: AAG GGA CTT CCT GTA ACA ACG CA; U6: Forward: GCT TCG GCA GCA CAT ATA CTA AAA T, Reverse: CGC TTC ACG AAT TTG CGT GTC AT; GAPDH: Forward: AAG GTG AAG GTC GGA GTC A, Reverse: GGA AGA TGG TGA TGG GAT TT.
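A small helper makes the purity gate explicit; this is a trivial sketch of the A260/A280 acceptance window described above, with an illustrative function name.

```python
def passes_purity_qc(a260: float, a280: float) -> bool:
    """True if the RNA sample falls in the A260/A280 window (1.8-2.1)
    used before cDNA synthesis."""
    return 1.8 <= a260 / a280 <= 2.1

print(passes_purity_qc(0.42, 0.22))  # ratio ~1.91 -> True
```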
Statistical analysis SPSS 22.0 software was used for statistical analysis. Data are expressed as the mean ± standard deviation (x̄ ± s). Comparisons between two groups were performed with the independent-samples t-test, and count data were compared using the χ2 test. Receiver operating characteristic (ROC) curve analysis was used to evaluate the sensitivity and specificity of miR-148a-3p and miR-485-5p as biomarkers. P < 0.05 was considered to indicate a statistically significant difference.",
"Characteristics of the study population The baseline clinical characteristics of the study population are summarized in Table 1. Briefly, there were no statistically significant differences (P > 0.05) in age, body mass index, hypertension, diabetes, smoking history, or family history of PCa between the recurrence and non-recurrence groups.
Notably, the average age of all involved cases was 64.05 ± 10.11 years; the average age of patients without recurrence was 63.8 ± 8.4 years, and that of patients with recurrence was 64.0 ± 8.1 years.

Table 1 Clinical characteristics and perioperative data
Parameter | Non-recurrence group (n = 153) | Recurrence group (n = 45) | t/χ2 | P
Age (years) | 63.8 ± 8.4 | 64.0 ± 8.1 | 0.638 | 0.524
BMI (kg/m2) | 22.3 ± 2.4 | 22.5 ± 2.6 | 1.554 | 0.152
Hypertension (%) | 49 (32.02) | 14 (31.11) | 0.095 | 0.763
Diabetes (%) | 17 (11.11) | 4 (8.89) | 0.157 | 0.741
Smoking history (%) | 84 (54.90) | 19 (42.22) | 3.548 | 0.052
Family history of PCa (%) | 9 (5.88) | 2 (4.44) | 1.146 | 0.288
Data are shown as mean ± SD. BMI, body mass index; t/χ2, Student's t-test or chi-square statistic. P < 0.05 is considered significant.
Serum miR-148a-3p increases while miR-485-5p decreases in the late stages of prostate cancer compared with earlier stages We first examined the correlation between pre-surgery serum levels of miR-148a-3p and miR-485-5p and other clinical features of the involved cases before any operation (Table 2). We compared the levels of miR-148a-3p and miR-485-5p in patients at different stages based on either TNM staging or Gleason score. According to TNM staging, the involved cases were separated into two groups: 121 cases at stage cT1-2a and 77 cases at stage cT2b or higher. Pre-surgery serum levels of miR-148a-3p in patients at stage cT1-2a (4.31 ± 2.01) were significantly lower than in patients at stage cT2b or higher (8.52 ± 2.28), while pre-surgery serum levels of miR-485-5p at stage cT1-2a (4.02 ± 1.25) were significantly higher than at stage cT2b or higher (0.42 ± 0.28). According to the Gleason score, the involved cases were divided into two groups: 143 cases with Gleason < 7 and 55 cases with Gleason ≥ 7.
Pre-surgery serum levels of miR-148a-3p in Gleason < 7 cases (3.98 ± 1.25) were markedly lower than in Gleason ≥ 7 cases (7.05 ± 3.47), while pre-surgery serum levels of miR-485-5p in Gleason < 7 cases (1.40 ± 0.58) were significantly higher than in Gleason ≥ 7 cases (0.52 ± 0.23). Neither serum miR-148a-3p nor miR-485-5p levels differed significantly (P > 0.05) by age (≥ 60 vs < 60 years), index lesion diameter (no index lesion vs index lesion(s) ≤ 7 mm), or serum PSA level (> 10 vs ≤ 10 ng/mL).

Table 2 The levels of miR-148a-3p and miR-485-5p in preoperative serum of PCa patients
Parameter | n | miR-148a-3p | t | P | miR-485-5p | t | P
Age ≥ 60 years | 127 | 4.85 ± 1.66 | 0.525 | 0.584 | 1.12 ± 0.54 | 0.952 | 0.350
Age < 60 years | 71 | 4.73 ± 1.58 | | | 1.18 ± 0.41 | |
No index lesion | 103 | 4.85 ± 1.63 | 0.341 | 0.772 | 1.10 ± 0.44 | 1.525 | 0.080
Index lesion ≤ 7 mm | 95 | 4.77 ± 1.82 | | | 1.18 ± 0.51 | |
PSA > 10 ng/mL | 165 | 4.85 ± 2.12 | 0.711 | 0.500 | 1.11 ± 0.51 | 1.915 | 0.083
PSA ≤ 10 ng/mL | 33 | 4.66 ± 1.98 | | | 1.32 ± 0.62 | |
Gleason < 7 points | 143 | 3.98 ± 1.25 | 7.520 | < 0.001 | 1.40 ± 0.58 | 17.58 | < 0.001
Gleason ≥ 7 points | 55 | 7.05 ± 3.47 | | | 0.52 ± 0.23 | |
TNM cT1-2a | 121 | 4.31 ± 2.01 | 7.113 | < 0.001 | 4.02 ± 1.25 | 18.225 | < 0.001
TNM cT2b and higher | 77 | 8.52 ± 2.28 | | | 0.42 ± 0.28 | |
Data are shown as mean ± SD (Student's t-test). P < 0.05 is considered significant.
Serum miR-148a-3p increases while miR-485-5p decreases in postoperative recurrence compared with the non-recurrence group We next examined the possible correlation between serum levels of miR-148a-3p and miR-485-5p and the recurrence of prostate cancer. Serum levels of miR-148a-3p in the recurrence group (5.97 ± 0.18) were significantly higher than in the non-recurrence group (4.53 ± 0.07) (t = 8.502, P < 0.001). In contrast, serum miR-485-5p levels in the recurrence group (0.27 ± 0.03) were lower than in the non-recurrence group (1.22 ± 0.02) (t = 27.72, P < 0.001) (Fig. 1).

Fig. 1 Dot plots showing increased miR-148a-3p and decreased miR-485-5p in the recurrence group compared with the non-recurrence group. The levels of serum miR-148a-3p and miR-485-5p expression in the non-recurrence group (n = 153) and recurrence group (n = 45) were determined by qRT-PCR analysis.
Both miR-148a-3p and miR-485-5p demonstrate great potential as sensitive and specific biomarkers for prostate cancer We then evaluated the sensitivity and specificity of serum miR-148a-3p and miR-485-5p levels in distinguishing non-recurrence from recurrence. We performed receiver operating characteristic (ROC) curve analysis of the variations in serum miR-148a-3p and miR-485-5p levels and calculated the area under the ROC curve (AUC) and optimal cut-off values (Fig. 2; Table 3). Both miR-148a-3p and miR-485-5p showed significant sensitivity and specificity in distinguishing non-recurrence from recurrence, and the AUC for recurrence with combined detection was greater than that of either individual marker (Z = 2.42, P = 0.0155; Z = 3.234, P = 0.0012). Thus, using the two molecules in combination to predict prostate cancer recurrence was even more specific and sensitive than single detection.

Fig. 2 Receiver operating characteristic (ROC) curve analysis of the variations of miR-148a-3p and miR-485-5p in PCa patients. P < 0.05 was considered statistically significant.

Table 3 ROC curve analysis of the variations of miR-148a-3p and miR-485-5p in recurrent PCa
Marker | AUC | 95% CI | Sensitivity (%) | Specificity (%) | BCV | P
miR-148a-3p | 0.825 | 0.765–0.875 | 82.22 | 73.2 | 5.01 | < 0.0001
miR-485-5p | 0.790 | 0.726–0.844 | 64.44 | 94.77 | 0.70 | < 0.0001
miR-148a-3p & miR-485-5p | 0.913 | 0.865–0.948 | 91.11 | 81.7 | 0.18 | < 0.0001
AUC, area under the ROC curve; BCV, best critical value. P < 0.05 was considered statistically significant.",
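The paper does not state how the two markers were combined for "combined detection"; a standard choice is to fit a logistic regression on both markers and use its predicted probability as the combined score, as sketched below on simulated placeholder data with the study's group sizes. Note that miR-485-5p is lower in recurrence, so its raw values are sign-flipped for a direct AUC readout.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
y = np.r_[np.ones(45), np.zeros(153)]                       # 1 = recurrence
mir148 = np.r_[rng.normal(5.97, 0.9, 45), rng.normal(4.53, 0.9, 153)]
mir485 = np.r_[rng.normal(0.27, 0.10, 45), rng.normal(1.22, 0.30, 153)]

# Combined score: predicted recurrence probability from both markers.
X = np.column_stack([mir148, mir485])
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

for name, score in [("miR-148a-3p", mir148),
                    ("miR-485-5p (inverted)", -mir485),
                    ("combined", combined)]:
    print(f"{name}: AUC = {roc_auc_score(y, score):.3f}")
```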
", "In the presented study, we examined the correlation between pre-surgery serum levels of miR-148a-3p and miR-485-5p and prostate cancer progression (indicated by the Gleason score and TNM staging) and postoperative recurrence in 198 patients with prostate cancer who underwent surgery. We also evaluated the sensitivity and specificity of miR-148a-3p and miR-485-5p as diagnostic markers. Our results suggested that serum levels of miR-148a-3p correlated positively, and miR-485-5p negatively, with the Gleason score and TNM staging in PCa patients. Patients diagnosed with postoperative recurrence tended to have higher serum levels of miR-148a-3p and lower levels of miR-485-5p. Moreover, our ROC analysis suggested that the combined detection of miR-148a-3p and miR-485-5p is a potentially sensitive and specific diagnostic biomarker for PCa postoperative recurrence.

Despite the advances in prostate cancer treatments [26], prostate cancer (PCa) has always been a significant threat to male health. One of the reasons is the lack of early diagnosis methods and of ways to precisely evaluate the severity and progression stage of prostate cancer, which is critical for deciding on the appropriate treatment. Primarily, the goal of PCa diagnosis is to discover clinically significant PCa as early as possible, to maximize oncological outcomes and minimize functional side effects [26, 27]. Thus, efficient and sensitive screening using biomarkers of PCa is an ideal strategy. Although the addition of biomarkers such as serum prostate-specific antigen levels to prostate cancer diagnosis has increased diagnosis at relatively early stages (Gleason < 7), the current diagnostic algorithm still presents several limitations, and the low sensitivity and specificity of PSA still highlight the urgent demand for sensitive and specific biomarkers. MicroRNAs (miRNAs) could be prospective biomarkers of PCa. miRNAs are endogenous non-coding RNAs that regulate post-transcriptional gene expression and participate in cell proliferation, differentiation, and apoptosis [28]. The involvement of miRNAs in PCa has been widely reported, both as diagnostic biomarkers [15, 29–31] and as therapeutic targets [6, 32]. In the presented study, we examined the potential of using two miRNAs, miR-148a-3p and miR-485-5p, to diagnose and evaluate prostate cancer. miR-148a-3p regulates myoblast differentiation into myotubes and accelerates the proliferation of myoblasts in the G1 phase of the cell cycle [33, 34]. It has also been shown to be related to cellular apoptosis and to promote tumor progression by protecting tumors from immune attack [35]. Our study showed that serum miR-148a-3p levels increased in late-stage prostate cancer, suggesting a positive correlation with prostate cancer progression, which agrees with the published tumor-promoting function of miR-148a-3p [19].
miR-485-5p, on the other hand, is a tumor suppressor in various malignant tumors [24]. Published studies showed that miR-485-5p induced multidrug resistance in cisplatin-resistant cell lines [25] and activated the protein kinase B signaling pathway in breast cancer cells [26] to suppress tumor progression. Our results suggested that serum miR-485-5p levels correlated negatively with prostate cancer progression, with decreased values in late-stage prostate cancer patients, recapitulating the published tumor-suppressive function. Together, our study suggests the potential of miR-148a-3p and miR-485-5p for diagnosing and evaluating prostate cancer.

Another significant difficulty in curing prostate cancer is postoperative recurrence, that is, the reappearance of prostate cancer cells after the initial treatment has removed them. Prostate cancer is mainly treated with surgery combined with radiotherapy, chemotherapy, and endocrine therapy [36, 37]. Although this is considered a comprehensive treatment, the recurrence and metastasis rate is still higher than 15%. Moreover, owing to the difficulties in diagnosing prostate cancer, recurrence is always hard to predict. In our study, we examined the potential of using serum levels of biomarkers measured pre-surgery to predict the probability of recurrence after treatment, allowing doctors to better anticipate patients’ recovery and be more prepared for potential recurrence. Our results suggested that patients who eventually developed prostate cancer recurrence had higher serum levels of miR-148a-3p and lower serum levels of miR-485-5p even before the initial treatment was performed, highlighting the possibility of using pre-surgery measurements to anticipate recurrence. Furthermore, our ROC analysis indicates the high sensitivity and specificity of using pre-surgery miR-148a-3p and miR-485-5p levels to distinguish recurrence from non-recurrence patients, suggesting the potential of predicting recurrence based on these two molecules. In addition, our results suggest that combining the pre-surgery serum levels of miR-148a-3p and miR-485-5p gives a significantly better prediction. Our studies suggest that measuring pre-surgery serum levels of miR-148a-3p and miR-485-5p may help with diagnosis, precise evaluation of cancer stages, and anticipation of postoperative recurrence in PCa patients.", "Although our study identified promising biomarkers, it still has some limitations: (a) it enrolled only patients with PCa from the Shaanxi area and thus may not fully represent PCa patients globally; (b) it focused on only two miRNAs, miR-148a-3p and miR-485-5p, and a comprehensive analysis using a high-throughput technique to screen miRNA profiles is still needed to identify diagnostic markers; and (c) it measured miR-148a-3p and miR-485-5p in a small population, and future studies applying our findings to a larger population are needed to evaluate miR-148a-3p and miR-485-5p as biomarkers.", "In conclusion, our study demonstrated that pre-surgery serum levels of miR-148a-3p and miR-485-5p could be used as sensitive and specific biomarkers for diagnosis, for evaluating cancer progression stages, and for anticipating postoperative recurrence." ]
[ null, "materials|methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", null, "conclusion" ]
[ "Prostate cancer", "miR-148a-3p", "miR-485-5p", "Gleason score", "TNM staging", "Recurrence" ]
Background: As the fifth most frequent cause of male death worldwide, prostate cancer (PCa) accounts for more than 30% of all newly diagnosed cancers, with a 5-year survival rate of approximately 9% [1, 2]. Advanced prostate cancer exhibits signs and symptoms including slow urination, difficulty emptying the bladder, blood in the urine, and back pain [1, 3]. Screening and diagnosis of PCa are important for detecting cases, predicting disease outcomes, guiding clinical management decisions, and avoiding overtreatment [4–6]. Thus, rapid and sensitive diagnostic methods are in demand. Including measurements of serum prostate-specific antigen (PSA/KLK3) in the diagnosis of prostate cancer has led to a significant increase in the detection of early-stage PCa (Gleason < 6) [6–10]. However, the specificity and sensitivity of serum PSA are still not high enough for detecting early-stage prostate cancer or precisely evaluating the progression or severity of the disease, since elevated PSA levels are also detected in benign prostatic hyperplasia, inflammation, and other urinary tract diseases [11]. Thus, sensitive and specific biomarkers for prostate cancer diagnosis are in high demand. Apart from the difficulties in the early diagnosis of prostate cancer, postoperative recurrence is another major issue threatening patients’ health and recovery. Postoperative recurrence is when surviving prostate cancer cells become evident again after initial treatment, such as surgery or radiation therapy, has documented the removal of cancer cells [12]. It occurs in around 15% of patients and always requires a second cancer treatment at least 6 months after surgery [13]. Clinically, the prediction of postoperative recurrence is based on serum prostate-specific antigen: a second increase following the initial drop-off after surgery or radiation therapy usually indicates the recurrence of prostate cancer cells. However, due to the low sensitivity and specificity of the currently used prostate-specific antigens, predicting cancer recurrence faces the same difficulty as the early diagnosis of prostate cancer. Therefore, new biomarkers need to be identified. Recent studies have reported the potential of microRNAs (miRNAs) as diagnostic biomarkers and therapeutic targets of PCa. MicroRNAs are small single-stranded non-coding RNA molecules that regulate gene expression through complementary base pairing with target mRNAs, affecting the post-transcriptional processing of approximately 60% of the human genome [6, 14]. The involvement of miRNAs in the development and progression of numerous tumors has been widely reported, highlighting their potential for cancer diagnosis and treatment [15]. The association of miRNAs with prostate cancer was initially reported by a large-scale miRNA analysis using a large collection of samples, including prostate cancers [6]. At least 12 miRNAs were identified as overexpressed in prostate cancer, and further studies confirmed the role of miRNAs as new biomarkers for prostate cancer [16]. Since then, numerous reports have suggested essential roles of miRNAs in PCa formation and progression [17]. The expression of these miRNAs changes significantly during prostate tumor development, indicating their potential as clinical biomarkers [18, 19]. miR-148a-3p and miR-485-5p are miRNAs showing particular diagnostic potential related to prostate cancer.
miR-148a-3p, located in the 7p15.2 region of the human chromosome, has a well-established role in tumor development [20]. Published studies suggested that miR-148a-3p is significantly decreased in various cancer cells, such as bladder cancer cells [21], suggesting its potential as a cancer biomarker. miR-485-5p, on the other hand, has been identified as a tumor suppressor with a significant inhibitory effect on the proliferation and differentiation of gastric cancer [22, 23]. miR-485-5p inhibits the expression of hypoxia-inducible factor 1 and impedes hepatocellular carcinoma cell differentiation, inhibiting the growth of liver cancer cells [24]. However, whether miR-148a-3p and miR-485-5p participate in the progression of prostate cancer is still unknown, and their potential as biomarkers for diagnosing and predicting prostate cancer recurrence has not been examined. In the presented study, we examined the changes in serum miR-148a-3p and miR-485-5p levels in patients with prostate cancer at different stages and with or without recurrence, and we evaluated the correlations between serum miR-148a-3p and miR-485-5p levels and cancer progression and recurrence, aiming to identify new diagnostic biomarkers for prostate cancer.

Materials and methods:

Study population: Retrospectively, this study was conducted on patients (aged 64.05 ± 10.11 years) with prostate cancer admitted to the Department of Urology, Yan’an People’s Hospital. Clinical data of the patients were collected following approved protocols of the Committee of Yan’an People’s Hospital, with written informed agreements obtained from the patients. All patients showed the best adherence to the protocol, and informed consent was obtained from all patients. A total of 198 cases were enrolled, and all cases satisfied the following criteria. Inclusion criteria: (1) diagnosed with prostate cancer by surgical histopathological examination; (2) no history of prostate surgery; (3) a traceable clinical history. Exclusion criteria: (1) severe prostatic hyperplasia; (2) another malignant tumor; (3) severe urinary system disease; (4) coagulation dysfunction.

Treatment and follow-up: All involved cases underwent radical prostatectomy. Radiotherapy (adjuvant radiotherapy for patients with low or intermediate risk) was routinely given according to the China guideline for the screening and early detection of prostate cancer (2022, Beijing) [25]. The cases were followed up postoperatively for two years; re-examination of serum PSA, ultrasonography, and MRI was performed once every 2 months.
Recurrence or metastasis was determined based on the following criteria: (1) continuous serum PSA ≥ 0.2 ng/mL; (2) MRI diffusion-weighted imaging sequences showing an obvious abnormal high-signal shadow in other organs (mainly bone). Forty-five cases were determined as recurrence and 153 cases as non-recurrence.

Clinical data collection: The clinical history data collected include age, body mass index, hypertension, diabetes, smoking history, and family history of prostate cancer. According to the EAU guidelines 2022, preoperative Gleason scores were used to evaluate the risk of PCa (low-risk PCa: Gleason < 7; high-risk PCa: Gleason ≥ 7), together with the following criteria: serum PSA level: low-risk PCa (≤ 10 ng/mL), intermediate/high-risk PCa (> 10 ng/mL); tumor-node-metastasis (TNM) classification for the staging of PCa: low-risk PCa (cT1-2a), intermediate/high-risk PCa (cT2b and higher); index lesion diameter: low-risk PCa (no index), intermediate/high-risk PCa (≤ 7 mm).

Measurement of serum prostate-specific antigen levels: Serum levels of PSA (Human KLK3/PSA (Sandwich ELISA) ELISA Kit; Catalog #: LS-F22855-1; detection range: 0.938–60 ng/mL; intra-assay CV < 10%; inter-assay CV < 10%) were measured according to the manufacturer’s instructions.

Measurement of serum miR-148a-3p and miR-485-5p levels:

Reagents and equipment: RNA extraction reagent: TRI Reagent® BD (TB 126), Catalog # TB126, interassay CV < 10%, intraassay CV < 10%, MRCGENE. Real-time fluorescence quantitative PCR kit: One-Step TB Green PrimeScript RT-PCR Kit II (Perfect Real Time), Catalog # RR086B, interassay CV < 10%, intraassay CV < 10%, Takara Bio. Reverse transcription kit for efficient preparation of cDNA: PrimeScript Reverse Transcriptase, Catalog # 2680B, interassay CV < 10%, intraassay CV < 10%, Takara Bio. Wizard2 2-Detector Gamma Counter, 550 samples, Catalog # C 2470-0020, PerkinElmer, Germany. PCR instrument: ABI GeneAmp 9700 PCR Thermal Cycler, Catalog # ABI-97, ABI, USA. NanoDrop™ 2000/2000c Spectrophotometer, Catalog # ND2000CLAPTOP, Thermo Fisher Scientific, USA.

Specimen collection: Before the operation, 5 mL of fasting venous blood was collected, placed in an anticoagulation tube, and centrifuged at 2500 rpm for 10 min to collect the supernatant. All samples were stored at − 80 °C.

Determination of serum miR-148a-3p and miR-485-5p levels: Total RNA was extracted from 200 µL of aliquoted plasma using the RNA extraction reagent. An ND2000C UV spectrophotometer was used to determine the concentration and purity of the RNA. RNA samples with an A260/A280 ratio between 1.8 and 2.1 were used for cDNA synthesis with the reverse transcription kit. Relative miRNA levels were measured by RT-qPCR performed with the real-time fluorescence quantitative PCR kit using the following setup. PCR reaction system: cDNA template 3.0 µL, Premix Ex Taq™ 10 µL, upstream primer 1 µL, downstream primer 1 µL, and ddH2O 5.0 µL. Reaction conditions: 94 °C for 5 min, followed by 40 cycles of 94 °C for 30 s, 58 °C for 30 s, and 72 °C for 10 min. Technical triplicates were performed. GAPDH and U6 were measured as internal controls to calculate the relative expression levels of miR-148a-3p and miR-485-5p. For the miRNA measurements, primer design and synthesis were completed by Guangzhou Ruibo Biotechnology Co., Ltd.
miR-148a-3p: Forward: AGC TCT GCT ACT GAG ATG CG, Reverse: GAC TGC CAG CTA TCA TCG; miR-485-5p: Forward: CTG GAA CGG TGA AGG TGA CA, Reverse: AAG GGA CTT CCT GTA ACA ACG CA; U6: Forward: GCT TCG GCA GCA CAT ATA CTA AAA T, Reverse: CGC TTC ACG AAT TTG CGT GTC AT; GAPDH: Forward: AAG GTG AAG GTC GGA GTC A, Reverse: GGA AGA TGG TGA TGG GAT TT.
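The relative quantification formula is not spelled out beyond naming GAPDH and U6 as internal controls. As a hedged illustration, the sketch below computes relative expression with the standard Livak 2^(-ΔΔCt) method in Python, assuming normalization to the mean of the two reference Cts and to a calibrator group; the exact scheme used in the study may differ, and all Ct values shown are hypothetical.

    # Hedged sketch: Livak 2^(-ddCt) relative quantification. The normalization
    # scheme (mean of U6/GAPDH Cts, calibrator-group dCt) is an assumption;
    # all Ct values below are hypothetical.
    from statistics import mean

    def relative_expression(ct_target, ct_refs, calibrator_dct):
        """Fold change of a target miRNA for one sample."""
        dct = ct_target - mean(ct_refs)        # normalize to internal controls
        ddct = dct - calibrator_dct            # normalize to calibrator group
        return 2 ** -ddct

    ct_mir148a = mean([24.1, 24.3, 24.2])      # technical triplicate
    print(relative_expression(ct_mir148a, ct_refs=[18.9, 19.4], calibrator_dct=5.6))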
Statistical analysis: SPSS 22.0 software was used for the statistical analysis. Measurement data are expressed as the mean ± standard deviation (x̄ ± s), and comparisons between the two groups were performed using the independent-samples t-test. Count data were compared using the χ2 test. A receiver operating characteristic (ROC) curve model was used to evaluate the sensitivity and specificity of miR-148a-3p and miR-485-5p as biomarkers. P < 0.05 was considered to indicate a statistically significant difference.
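For readers reproducing these comparisons outside SPSS, a minimal sketch with SciPy is shown below; the independent-samples t-test and χ2 test mirror the analyses described above, and the input arrays are hypothetical placeholders generated from the reported summary statistics, not the study data.

    # Minimal sketch of the described comparisons with SciPy instead of SPSS;
    # the generated arrays are hypothetical placeholders for the measurements.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    age_nonrec = rng.normal(63.8, 8.4, 153)    # non-recurrence group ages
    age_rec = rng.normal(64.0, 8.1, 45)        # recurrence group ages

    # Continuous data: independent-samples t-test
    t, p = stats.ttest_ind(age_nonrec, age_rec)
    print(f"age: t = {t:.3f}, P = {p:.3f}")

    # Count data (hypertension yes/no by group): chi-square test
    table = [[49, 153 - 49], [14, 45 - 14]]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"hypertension: chi2 = {chi2:.3f}, P = {p:.3f}")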
Results:

Characteristics of the study population: The baseline clinical characteristics of the study population are summarized in Table 1. Briefly, there was no statistically significant difference (P > 0.05) in age, body mass index, hypertension, diabetes, smoking history, or family history of PCa between the recurrence and non-recurrence groups. Notably, the average age of all involved cases was 64.05 ± 10.11 years; the average age of the non-recurrence patients was 63.8 ± 8.4 years, and that of the recurrence patients was 64.0 ± 8.1 years.
Table 1. Clinical characteristics and perioperative data

Parameter                   Non-recurrence group (n = 153)  Recurrence group (n = 45)  t/χ2   P
Age (years)                 63.8 ± 8.4                      64.0 ± 8.1                 0.638  0.524
BMI (kg/m2)                 22.3 ± 2.4                      22.5 ± 2.6                 1.554  0.152
Hypertension (%)            49 (32.02)                      14 (31.11)                 0.095  0.763
Diabetes (%)                17 (11.11)                      4 (8.89)                   0.157  0.741
Smoking history (%)         84 (54.90)                      19 (42.22)                 3.548  0.052
Family history of PCa (%)   9 (5.88)                        2 (4.44)                   1.146  0.288

Data are shown as means ± SD. BMI: body mass index; χ2/t: results of the chi-square test or Student’s t-test. P < 0.05 was considered significant.

Serum miR-148a-3p increases while miR-485-5p decreases in the late stages of prostate cancer compared with earlier stages: We first examined the correlation between pre-surgery serum levels of miR-148a-3p and miR-485-5p and other clinical features of the involved cases before any operation (Table 2). We compared the levels of miR-148a-3p and miR-485-5p in patients at different stages based on either TNM staging or the Gleason score. According to TNM staging, the involved cases were separated into two groups: 150 cases at the cT1-2a stage and 48 cases at cT2b and higher stages. Pre-surgery serum levels of miR-148a-3p in patients at the cT1-2a stage (4.31 ± 2.01) were significantly lower than in patients at cT2b or higher stages (8.52 ± 2.28), while pre-surgery serum levels of miR-485-5p in patients at the cT1-2a stage (4.02 ± 1.25) were significantly higher than in patients at cT2b or higher stages (0.42 ± 0.28). According to the Gleason score, the involved cases were divided into two groups: 143 Gleason < 7 cases and 55 Gleason ≥ 7 cases. Pre-surgery serum levels of miR-148a-3p in Gleason < 7 cases (3.98 ± 1.25) were dramatically lower than in Gleason ≥ 7 cases (7.05 ± 3.47), while pre-surgery serum levels of miR-485-5p in Gleason < 7 cases (1.40 ± 0.58) were significantly higher than in Gleason ≥ 7 cases (0.52 ± 0.23).
Both serum levels of miR-148a-3p and miR-485-5p showed no significant differences (P > 0.05) with respect to age (≥ 60 years compared with < 60 years), index lesion diameter (no index compared with index lesion(s) ≤ 7 mm), or serum PSA level (> 10 ng/mL compared with ≤ 10 ng/mL).

Table 2. The levels of miR-148a-3p and miR-485-5p in preoperative serum of PCa patients

Parameter               n    miR-148a-3p  t      P        miR-485-5p   t       P
Age                                       0.525  0.584                 0.952   0.350
  ≥ 60 years            127  4.85 ± 1.66                  1.12 ± 0.54
  < 60 years            71   4.73 ± 1.58                  1.18 ± 0.41
Index lesion diameter                     0.341  0.772                 1.525   0.080
  No index              103  4.85 ± 1.63                  1.10 ± 0.44
  ≤ 7 mm                95   4.77 ± 1.82                  1.18 ± 0.51
PSA level                                 0.711  0.500                 1.915   0.083
  > 10 ng/mL            165  4.85 ± 2.12                  1.11 ± 0.51
  ≤ 10 ng/mL            33   4.66 ± 1.98                  1.32 ± 0.62
Gleason score                             7.520  < 0.001               17.58   < 0.001
  < 7 points            143  3.98 ± 1.25                  1.40 ± 0.58
  ≥ 7 points            55   7.05 ± 3.47                  0.52 ± 0.23
TNM staging                               7.113  < 0.001               18.225  < 0.001
  cT1-2a                121  4.31 ± 2.01                  4.02 ± 1.25
  cT2b and higher       77   8.52 ± 2.28                  0.42 ± 0.28

Data are shown as means ± SD; Student’s t-test. P < 0.05 was considered significant.
Serum miR-148a-3p increases while miR-485-5p decreases in postoperative recurrence compared with the non-recurrence group: We next examined the possible correlation between serum levels of miR-148a-3p and miR-485-5p and the recurrence of prostate cancer. Serum levels of miR-148a-3p in the recurrence group (5.97 ± 0.18) were significantly higher than those of the non-recurrence group (4.53 ± 0.07) (t = 8.502, P < 0.001). In contrast, serum miR-485-5p levels in the recurrence group (0.27 ± 0.03) were significantly lower than those of the non-recurrence group (1.22 ± 0.02) (t = 27.72, P < 0.001) (Fig. 1).

Fig. 1. Dot plots showing increased miR-148a-3p and decreased miR-485-5p in the recurrence group compared with the non-recurrence group. The levels of serum miR-148a-3p and miR-485-5p expression in the non-recurrence group (n = 153) and the recurrence group (n = 45) were determined via qRT-PCR analysis.

Both miR-148a-3p and miR-485-5p demonstrate great potential as sensitive and specific biomarkers for prostate cancer: We then evaluated the sensitivity and specificity of using serum levels of miR-148a-3p and miR-485-5p to distinguish non-recurrence from recurrence. We performed a receiver operating characteristic (ROC) curve analysis of the variations of serum miR-148a-3p and miR-485-5p levels and calculated the area under the ROC curve (AUC) and the optimal cut-off values (Fig. 2; Table 3). Both miR-148a-3p and miR-485-5p demonstrated significant sensitivity and specificity in distinguishing non-recurrence from recurrence, and the AUC of the combined detection was greater than that of each marker detected individually (Z = 2.42, P = 0.0155; Z = 3.234, P = 0.0012). Thus, using the two molecules in combination to predict prostate cancer recurrence was even more specific and sensitive than single-marker detection.

Fig. 2. Receiver operating characteristic (ROC) curve analysis of the variations of miR-148a-3p and miR-485-5p in PCa patients. P < 0.05 was considered statistically significant.
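The Z-statistics above point to a test for correlated AUCs (commonly DeLong's method). Since the paper does not name its test, the sketch below uses a bootstrap comparison as an accessible stand-in, resampling patients to estimate the standard error of the AUC difference; the labels and scores are hypothetical, simulated loosely from the reported group means.

    # Hedged stand-in for the reported Z-test on correlated AUCs: a bootstrap
    # comparison of combined vs. single-marker AUC. All data are hypothetical.
    import numpy as np
    from scipy.stats import norm
    from sklearn.metrics import roc_auc_score

    def auc_diff_z(y, s_combined, s_single, n_boot=2000, seed=0):
        rng = np.random.default_rng(seed)
        obs = roc_auc_score(y, s_combined) - roc_auc_score(y, s_single)
        diffs = []
        while len(diffs) < n_boot:
            idx = rng.integers(0, len(y), len(y))
            if y[idx].min() == y[idx].max():   # resample lost one class; redraw
                continue
            diffs.append(roc_auc_score(y[idx], s_combined[idx]) -
                         roc_auc_score(y[idx], s_single[idx]))
        z = obs / np.std(diffs, ddof=1)
        return z, 2 * norm.sf(abs(z))          # two-sided P-value

    rng = np.random.default_rng(1)
    y = np.r_[np.zeros(153, int), np.ones(45, int)]   # 1 = recurrence
    single = np.where(y == 1, rng.normal(5.97, 0.8, 198), rng.normal(4.53, 0.8, 198))
    combined = single + y * 1.0 + rng.normal(0, 0.5, 198)  # combined score helps positives
    print(auc_diff_z(y, combined, single))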
Table 3. ROC curve analysis of the variations of miR-148a-3p and miR-485-5p in recurrent PCa

Marker                     AUC    95% CI       Sensitivity (%)  Specificity (%)  BCV   P
miR-148a-3p                0.825  0.765–0.875  82.22            73.2             5.01  < 0.0001
miR-485-5p                 0.790  0.726–0.844  64.44            94.77            0.70  < 0.0001
miR-148a-3p & miR-485-5p   0.913  0.865–0.948  91.11            81.7             0.18  < 0.0001

AUC: area under the ROC curve; BCV: best critical value; P < 0.05 was considered statistically significant.
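As a companion to Table 3, the hedged sketch below reproduces the ROC workflow with scikit-learn: per-marker AUC with a Youden-index cutoff (a common choice for a "best critical value"; the paper does not state its criterion) and a logistic-regression score for the combined detection. The marker values are simulated from the reported group means, not the study data.

    # Hedged sketch of the Table 3 workflow: AUC, Youden-index cutoff, and a
    # logistic-regression combined score. Values are simulated, not study data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(42)
    y = np.r_[np.zeros(153, int), np.ones(45, int)]   # 1 = recurrence
    mir148 = np.r_[rng.normal(4.53, 0.9, 153), rng.normal(5.97, 0.9, 45)]
    mir485 = np.r_[rng.normal(1.22, 0.25, 153), rng.normal(0.27, 0.25, 45)]

    def roc_summary(y, score):
        fpr, tpr, thr = roc_curve(y, score)
        best = (tpr - fpr).argmax()            # Youden's J = sens + spec - 1
        return roc_auc_score(y, score), thr[best], tpr[best], 1 - fpr[best]

    print("miR-148a-3p:", roc_summary(y, mir148))
    print("miR-485-5p :", roc_summary(y, -mir485))   # negated: low values mean
                                                     # recurrence, so the cutoff
                                                     # is reported on -x
    X = np.column_stack([mir148, mir485])
    combo = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    print("combined   :", roc_summary(y, combo))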
Discussion: In the presented study, we examined the correlation between pre-surgery serum levels of miR-148a-3p and miR-485-5p and prostate cancer progression (indicated by the Gleason score and TNM staging) and postoperative recurrence in 198 patients with prostate cancer who underwent surgery. We also evaluated the sensitivity and specificity of miR-148a-3p and miR-485-5p as diagnostic markers. Our results suggested that serum levels of miR-148a-3p correlated positively, and miR-485-5p negatively, with the Gleason score and TNM staging in PCa patients. Patients diagnosed with postoperative recurrence tended to have higher serum levels of miR-148a-3p and lower levels of miR-485-5p. Moreover, our ROC analysis suggested that the combined detection of miR-148a-3p and miR-485-5p is a potentially sensitive and specific diagnostic biomarker for PCa postoperative recurrence. Despite the advances in prostate cancer treatments [26], prostate cancer (PCa) has always been a significant threat to male health. One of the reasons is the lack of early diagnosis methods and of ways to precisely evaluate the severity and progression stage of prostate cancer, which is critical for deciding on the appropriate treatment.
Discussion: In the presented study, we examined the correlation between pre-surgery serum levels of miR-148a-3p and miR-485-5p and prostate cancer progression (indicated by Gleason score and TNM stage) and postoperative recurrence in 198 patients with prostate cancer who underwent surgery. We also evaluated the sensitivity and specificity of miR-148a-3p and miR-485-5p as diagnostic markers. Our results suggested that serum levels of miR-148a-3p positively correlated, while miR-485-5p negatively correlated, with Gleason score and TNM staging in PCa patients. Patients diagnosed with postoperative recurrence tended to have higher serum levels of miR-148a-3p and lower levels of miR-485-5p. In addition, our ROC analysis suggested that combined detection of miR-148a-3p and miR-485-5p is a potentially sensitive and specific diagnostic tool for PCa postoperative recurrence.

Despite the advances in prostate cancer treatments [26], prostate cancer (PCa) remains a significant threat to male health. One of the reasons is the lack of early diagnosis methods and of ways to precisely evaluate the severity and progression stage of prostate cancer, which is critical for deciding the appropriate treatment.

Primarily, the goal of PCa diagnosis is to detect clinically significant PCa as early as possible to maximize oncological outcomes and minimize functional side effects [26, 27]. Thus, efficient and sensitive screening using PCa biomarkers is an ideal strategy. Although the addition of biomarkers such as serum prostate-specific antigen (PSA) levels to prostate cancer diagnosis has increased diagnosis at relatively early stages (Gleason < 7), the current diagnostic algorithm still presents several limitations. Moreover, the low sensitivity and specificity of PSA still highlight the urgent demand for sensitive and specific biomarkers. MicroRNAs (miRNAs) could be prospective biomarkers of PCa. miRNAs are endogenous non-coding RNAs regulating post-transcriptional gene expression, which participate in cell proliferation, differentiation, and apoptosis [28]. The involvement of miRNAs in PCa has been widely reported, both as diagnostic biomarkers [15, 29–31] and as therapeutic targets [6, 32].

In the presented study, we examined the potential of two miRNAs, miR-148a-3p and miR-485-5p, for diagnosing and evaluating prostate cancer. miR-148a-3p regulates myoblast differentiation into myotubes and accelerates the proliferation of myoblasts in the G1 phase of the cell cycle [33, 34]. It has also been shown to be related to cellular apoptosis and to promote tumor progression by protecting tumors from immune attack [35]. Our study showed that serum miR-148a-3p levels increased in late-stage prostate cancer, suggesting a positive correlation with prostate cancer progression, which agrees with the published tumor-promoting function of miR-148a-3p [19]. miR-485-5p, on the other hand, acts as a tumor suppressor in various malignant tumors [24]. Published studies showed that miR-485-5p induced multidrug resistance in cisplatin-resistant cell lines [25] and activated the protein kinase B signaling pathway in breast cancer cells [26] to suppress tumor progression. Our results suggested that serum miR-485-5p levels negatively correlated with prostate cancer progression, with decreased values in late-stage prostate cancer patients, which recapitulates the published tumor-repressing function. Together, our study suggests the potential of miR-148a-3p and miR-485-5p for diagnosing and evaluating prostate cancer.

Another significant difficulty in curing prostate cancer is postoperative recurrence, i.e., the reappearance of prostate cancer cells after initial treatment has removed them. Prostate cancer is mainly treated with surgery combined with radiotherapy, chemotherapy, and endocrine therapy [36, 37]. Although this is considered a comprehensive treatment, the recurrence and metastasis rate still exceeds 15%. Moreover, due to the difficulties in diagnosing prostate cancer, recurrence is always hard to predict. In our study, we examined the potential of using serum biomarker levels measured pre-surgery to predict the probability of recurrence after treatment, allowing doctors to better anticipate patients' recovery and be more prepared for potential recurrence. Our results suggested that patients who eventually developed prostate cancer recurrence had higher serum levels of miR-148a-3p and lower serum levels of miR-485-5p even before the initial treatment was performed, highlighting the possibility of using pre-surgery measures to anticipate recurrence.
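The "combined detection" evaluated above can be implemented by feeding both serum levels into a simple classifier whose predicted probability serves as the combined score; logistic regression is a common choice, though the paper does not state its combination rule. A minimal sketch on synthetic values:

```python
# Illustrative sketch: combining two serum markers into one recurrence score.
# Logistic regression is an assumed combination rule; all values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
y = np.r_[np.zeros(153, dtype=int), np.ones(45, dtype=int)]  # 1 = recurrence
mir148a = np.r_[rng.normal(4.5, 0.9, 153), rng.normal(6.0, 1.2, 45)]
mir485 = np.r_[rng.normal(1.2, 0.3, 153), rng.normal(0.3, 0.1, 45)]

X = np.column_stack([mir148a, mir485])
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

print("AUC miR-148a-3p:", round(roc_auc_score(y, mir148a), 3))
print("AUC miR-485-5p :", round(roc_auc_score(y, -mir485), 3))  # lower = worse
print("AUC combined   :", round(roc_auc_score(y, combined), 3))
```

Note that fitting and scoring on the same patients, as in this sketch, optimistically biases the combined AUC; cross-validation or a held-out cohort would be needed for an honest estimate, and a DeLong-type test (consistent with the Z statistics reported above) is the usual way to compare the correlated AUCs.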
In addition, our ROC analysis indicates the high sensitivity and specificity of pre-surgery miR-148a-3p and miR-485-5p levels for distinguishing recurrence from non-recurrence patients, suggesting their potential for predicting recurrence. Furthermore, our results suggest that combining the pre-surgery serum levels of miR-148a-3p and miR-485-5p gives a significantly better prediction. Overall, our study suggests that measuring pre-surgery serum levels of miR-148a-3p and miR-485-5p may help with diagnosis, with precisely evaluating cancer stages, and with anticipating postoperative recurrence in PCa patients.

Limitations: Although our study identified promising biomarkers, it has several limitations: (a) it enrolled only patients with PCa from the Shaanxi area and thus may not fully represent PCa patients globally; (b) it focused on only two miRNAs, miR-148a-3p and miR-485-5p, and a comprehensive high-throughput screen of miRNA profiles is still needed to identify further diagnostic markers; (c) it measured miR-148a-3p and miR-485-5p in a small population, and future studies in larger populations are needed to validate miR-148a-3p and miR-485-5p as biomarkers.

Conclusion: In conclusion, our study demonstrated that pre-surgery serum levels of miR-148a-3p and miR-485-5p could be used as sensitive and specific biomarkers for diagnosis, for evaluating cancer progression stages, and for anticipating postoperative recurrence.
Background: Unpredicted postoperative recurrence of prostate cancer, one of the most common malignancies among males worldwide, has become a prominent issue affecting patients after treatment. Here, we investigated the correlation between serum miR-148a-3p and miR-485-5p expression levels and cancer recurrence in PCa patients, aiming to identify new biomarkers for diagnosing and predicting postoperative recurrence of prostate cancer. Methods: A total of 198 male PCa cases treated with surgery, postoperative radiotherapy, and chemotherapy were involved in the presented study. Serum levels of miR-148a-3p and miR-485-5p were measured before the initial operation, and the cases were then followed up for two years to monitor cancer recurrence and to split them into recurrence and non-recurrence groups. The relative expression of serum miR-148a-3p and miR-485-5p was compared between groups and related to other clinicopathological features. Results: Pre-surgery serum levels of miR-148a-3p in patients with TNM Classification of Malignant Tumors (TNM) stage cT1-2a prostate cancer (Gleason score < 7) were significantly lower (P < 0.05) than in patients with TNM stage cT2b and higher prostate cancer (Gleason score ≥ 7). Pre-surgery serum levels of miR-485-5p in patients with TNM stage cT1-2a prostate cancer (Gleason score < 7) were significantly higher (P < 0.05) than in patients with TNM stage cT2b and higher cancer (Gleason score ≥ 7). The serum miR-148a-3p level in the recurrence group was higher than in the non-recurrence group (P < 0.05), while the serum miR-485-5p level in the recurrence group was lower than in the non-recurrence group (P < 0.05). ROC curve analysis showed that the AUCs of miR-148a-3p, miR-485-5p, and combined detection for predicting recurrence of prostate cancer were 0.825 (95% CI 0.765–0.875, P < 0.0001), 0.790 (95% CI 0.726–0.844, P < 0.0001), and 0.913 (95% CI 0.865–0.948, P < 0.0001), respectively. Conclusions: The pre-surgery serum miR-148a-3p level positively correlates, while the miR-485-5p level negatively correlates, with prostate cancer progression and postoperative recurrence. Both molecules show potential for predicting postoperative recurrence, individually or combined.
Background: As the fifth most frequent cause of male death worldwide, prostate cancer (PCa) accounts for more than 30% of all newly diagnosed cancers, with a 5-year survival rate of approximately 9% [1, 2]. Advanced prostate cancer exhibits signs and symptoms including slow urination, difficulty emptying the bladder, blood in the urine, and back pain [1, 3]. Screening and diagnosis of PCa are important for detecting cases, predicting disease outcomes, guiding clinical management decisions, and avoiding overtreatment [4–6]. Thus, rapid and sensitive diagnostic methods are in demand. Including measurements of serum prostate-specific antigen (PSA/KLK3) in the diagnosis of prostate cancer has led to a significant increase in the detection of early-stage PCa (Gleason < 6) [6–10]. However, the specificity and sensitivity of serum PSA are still not high enough for detecting early-stage prostate cancer or precisely evaluating the progression or severity of cancer, since elevated PSA levels are also detected in benign prostatic hyperplasia, inflammation, and other urinary tract diseases [11]. Thus, sensitive and specific biomarkers for prostate cancer diagnosis are in high demand. Apart from the difficulties in the early diagnosis of prostate cancer, postoperative recurrence is another major issue threatening patients' health and recovery. Postoperative recurrence occurs when surviving prostate cancer cells become evident again after initial treatment, such as surgery or radiation therapy, has documented the removal of cancer cells [12]. It occurs in around 15% of patients and always requires a second cancer treatment at least 6 months after surgery [13]. Clinically, the prediction of postoperative recurrence is based on serum prostate-specific antigen: a second increase following the initial drop-off after surgery or radiation therapy usually indicates the recurrence of prostate cancer cells. However, due to the low sensitivity and specificity of the currently used prostate-specific antigens, predicting cancer recurrence faces the same difficulty as early diagnosis of prostate cancer. Therefore, new biomarkers need to be identified. Recent studies have reported the potential of microRNAs (miRNAs) as diagnostic biomarkers and therapeutic targets of PCa. MicroRNAs are small single-stranded non-coding RNA molecules that regulate gene expression through complementary base pairing with target mRNAs, affecting the post-transcriptional processing of ~60% of the human genome [6, 14]. The involvement of miRNAs in the development and progression of numerous tumors has been widely reported, highlighting their potential for cancer diagnosis and treatment [15]. The association of miRNAs with prostate cancer was initially reported by a large-scale miRNA analysis using a large collection of samples, including prostate cancers [6]. At least 12 miRNAs were identified as overexpressed in prostate cancer, and further studies confirmed the role of miRNAs as new biomarkers for prostate cancer [16]. Since then, numerous reports have suggested essential roles for miRNAs in PCa formation and progression [17]. The expression of these miRNAs changes significantly during prostate tumor development, indicating their potential as clinical biomarkers [18, 19]. miR-148a-3p and miR-485-5p are miRNAs showing particular diagnostic potential related to prostate cancer.
miR-148a-3p, located at the 7p15.2 region of the human genome, has a well-established role in tumor development [20]. Published studies have shown that miR-148a-3p is significantly decreased in various cancer cells, such as bladder cancer cells [21], suggesting its potential as a cancer biomarker. miR-485-5p, on the other hand, has been identified as a tumor suppressor with a significant inhibitory effect on the proliferation and differentiation of gastric cancer [22, 23]. miR-485-5p inhibits the expression of hypoxia-inducible factor 1 and impedes hepatocellular carcinoma cell differentiation, inhibiting the growth of liver cancer cells [24]. However, whether miR-148a-3p and miR-485-5p participate in the progression of prostate cancer is still unknown, and their potential as biomarkers for diagnosing and predicting prostate cancer recurrence has not been examined. In the presented study, we examined the changes in serum miR-148a-3p and miR-485-5p levels in patients with prostate cancer at different stages and with recurrence, and evaluated the correlations between serum miR-148a-3p and miR-485-5p levels and cancer progression and recurrence, aiming to identify new diagnostic biomarkers for prostate cancer. Conclusion: In conclusion, our study demonstrated that pre-surgery serum levels of miR-148a-3p and miR-485-5p could be used as sensitive and specific biomarkers for diagnosis, evaluating cancer progression stages, and anticipating postoperative recurrence.
Background: Unpredicted postoperative recurrence of prostate cancer, one of the most common malignancies among males worldwide, has become a prominent issue affecting patients after treatment. Here, we investigated the correlation between serum miR-148a-3p and miR-485-5p expression levels and cancer recurrence in PCa patients, aiming to identify new biomarkers for diagnosing and predicting postoperative recurrence of prostate cancer. Methods: A total of 198 male PCa cases treated with surgery, postoperative radiotherapy, and chemotherapy were involved in the presented study. Serum levels of miR-148a-3p and miR-485-5p were measured before the initial operation, and the cases were then followed up for two years to monitor cancer recurrence and to split them into recurrence and non-recurrence groups. The relative expression of serum miR-148a-3p and miR-485-5p was compared between groups and related to other clinicopathological features. Results: Pre-surgery serum levels of miR-148a-3p in patients with TNM Classification of Malignant Tumors (TNM) stage cT1-2a prostate cancer (Gleason score < 7) were significantly lower (P < 0.05) than in patients with TNM stage cT2b and higher prostate cancer (Gleason score ≥ 7). Pre-surgery serum levels of miR-485-5p in patients with TNM stage cT1-2a prostate cancer (Gleason score < 7) were significantly higher (P < 0.05) than in patients with TNM stage cT2b and higher cancer (Gleason score ≥ 7). The serum miR-148a-3p level in the recurrence group was higher than in the non-recurrence group (P < 0.05), while the serum miR-485-5p level in the recurrence group was lower than in the non-recurrence group (P < 0.05). ROC curve analysis showed that the AUCs of miR-148a-3p, miR-485-5p, and combined detection for predicting recurrence of prostate cancer were 0.825 (95% CI 0.765–0.875, P < 0.0001), 0.790 (95% CI 0.726–0.844, P < 0.0001), and 0.913 (95% CI 0.865–0.948, P < 0.0001), respectively. Conclusions: The pre-surgery serum miR-148a-3p level positively correlates, while the miR-485-5p level negatively correlates, with prostate cancer progression and postoperative recurrence. Both molecules show potential for predicting postoperative recurrence, individually or combined.
11,915
447
[ 824, 159, 137, 161, 61, 989, 166, 43, 276, 127, 300, 586, 251, 322, 127 ]
19
[ "mir", "485", "148a", "mir 485", "mir 148a", "148a 3p", "3p", "5p", "mir 148a 3p", "mir 485 5p" ]
[ "diagnose evaluate prostate", "prostate cancer serum", "diagnosing prostate cancer", "new biomarkers prostate", "biomarkers prostate cancer" ]
null
[CONTENT] Prostate cancer | miR-148a-3p | miR-485-5p | Gleason score | TNM staging | Recurrence [SUMMARY]
null
[CONTENT] Prostate cancer | miR-148a-3p | miR-485-5p | Gleason score | TNM staging | Recurrence [SUMMARY]
[CONTENT] Prostate cancer | miR-148a-3p | miR-485-5p | Gleason score | TNM staging | Recurrence [SUMMARY]
[CONTENT] Prostate cancer | miR-148a-3p | miR-485-5p | Gleason score | TNM staging | Recurrence [SUMMARY]
[CONTENT] Prostate cancer | miR-148a-3p | miR-485-5p | Gleason score | TNM staging | Recurrence [SUMMARY]
[CONTENT] Humans | Male | Prostatic Neoplasms | Prostate | Postoperative Period | ROC Curve | MicroRNAs [SUMMARY]
null
[CONTENT] Humans | Male | Prostatic Neoplasms | Prostate | Postoperative Period | ROC Curve | MicroRNAs [SUMMARY]
[CONTENT] Humans | Male | Prostatic Neoplasms | Prostate | Postoperative Period | ROC Curve | MicroRNAs [SUMMARY]
[CONTENT] Humans | Male | Prostatic Neoplasms | Prostate | Postoperative Period | ROC Curve | MicroRNAs [SUMMARY]
[CONTENT] Humans | Male | Prostatic Neoplasms | Prostate | Postoperative Period | ROC Curve | MicroRNAs [SUMMARY]
[CONTENT] diagnose evaluate prostate | prostate cancer serum | diagnosing prostate cancer | new biomarkers prostate | biomarkers prostate cancer [SUMMARY]
null
[CONTENT] diagnose evaluate prostate | prostate cancer serum | diagnosing prostate cancer | new biomarkers prostate | biomarkers prostate cancer [SUMMARY]
[CONTENT] diagnose evaluate prostate | prostate cancer serum | diagnosing prostate cancer | new biomarkers prostate | biomarkers prostate cancer [SUMMARY]
[CONTENT] diagnose evaluate prostate | prostate cancer serum | diagnosing prostate cancer | new biomarkers prostate | biomarkers prostate cancer [SUMMARY]
[CONTENT] diagnose evaluate prostate | prostate cancer serum | diagnosing prostate cancer | new biomarkers prostate | biomarkers prostate cancer [SUMMARY]
[CONTENT] mir | 485 | 148a | mir 485 | mir 148a | 148a 3p | 3p | 5p | mir 148a 3p | mir 485 5p [SUMMARY]
null
[CONTENT] mir | 485 | 148a | mir 485 | mir 148a | 148a 3p | 3p | 5p | mir 148a 3p | mir 485 5p [SUMMARY]
[CONTENT] mir | 485 | 148a | mir 485 | mir 148a | 148a 3p | 3p | 5p | mir 148a 3p | mir 485 5p [SUMMARY]
[CONTENT] mir | 485 | 148a | mir 485 | mir 148a | 148a 3p | 3p | 5p | mir 148a 3p | mir 485 5p [SUMMARY]
[CONTENT] mir | 485 | 148a | mir 485 | mir 148a | 148a 3p | 3p | 5p | mir 148a 3p | mir 485 5p [SUMMARY]
[CONTENT] cancer | prostate | prostate cancer | diagnosis | cells | cancer cells | mir | mirnas | potential | recurrence [SUMMARY]
null
[CONTENT] mir | recurrence | group | recurrence group | 485 | 148a | mir 485 | 148a 3p | mir 148a | 3p [SUMMARY]
[CONTENT] progression stages anticipating postoperative | specific biomarkers diagnosis | progression stages anticipating | conclusion study demonstrated pre | 5p sensitive specific biomarkers | conclusion study demonstrated | conclusion study | conclusion | diagnosis evaluating | biomarkers diagnosis evaluating cancer [SUMMARY]
[CONTENT] mir | recurrence | 148a | 485 | mir 485 | 148a 3p | 3p | mir 148a | 5p | 485 5p [SUMMARY]
[CONTENT] mir | recurrence | 148a | 485 | mir 485 | 148a 3p | 3p | mir 148a | 5p | 485 5p [SUMMARY]
[CONTENT] one ||| [SUMMARY]
null
[CONTENT] TNM | 2a | Gleason | 0.05 | TNM Classification of Malignant Tumors | TNM | Gleason | ≥ ||| TNM | 2a | Gleason | 0.05 | TNM | Gleason | ≥ ||| 0.05 | serum miR-485-5p | 0.05 ||| ROC | 0.825 | 95% | CI | 0.765-0.875 | 0.0001 | 0.790 | 95% | CI | 0.726 | 0.0001 | 0.913 | 95% | CI | 0.865-0.948 | 0.0001 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] one ||| ||| 198 ||| two years ||| ||| TNM | 2a | Gleason | 0.05 | TNM Classification of Malignant Tumors | TNM | Gleason | ≥ ||| TNM | 2a | Gleason | 0.05 | TNM | Gleason | ≥ ||| 0.05 | serum miR-485-5p | 0.05 ||| ROC | 0.825 | 95% | CI | 0.765-0.875 | 0.0001 | 0.790 | 95% | CI | 0.726 | 0.0001 | 0.913 | 95% | CI | 0.865-0.948 | 0.0001 ||| ||| [SUMMARY]
[CONTENT] one ||| ||| 198 ||| two years ||| ||| TNM | 2a | Gleason | 0.05 | TNM Classification of Malignant Tumors | TNM | Gleason | ≥ ||| TNM | 2a | Gleason | 0.05 | TNM | Gleason | ≥ ||| 0.05 | serum miR-485-5p | 0.05 ||| ROC | 0.825 | 95% | CI | 0.765-0.875 | 0.0001 | 0.790 | 95% | CI | 0.726 | 0.0001 | 0.913 | 95% | CI | 0.865-0.948 | 0.0001 ||| ||| [SUMMARY]
Bone mineral density predictors in long-standing type 1 and type 2 diabetes mellitus.
34362364
Despite the increased fracture risk, bone mineral density (BMD) is variable in type 1 (T1D) and type 2 (T2D) diabetes mellitus. We aimed at comparing independent BMD predictors in T1D, T2D and control subjects, respectively.
BACKGROUND
Cross-sectional case-control study enrolling 30 T1D, 39 T2D and 69 age, sex and body mass index (BMI) - matched controls that underwent clinical examination, dual-energy X-ray absorptiometry (BMD at the lumbar spine and femoral neck) and serum determination of HbA1c and parameters of calcium and phosphate metabolism.
METHODS
T2D patients had similar BMD compared to T1D individuals (after adjusting for age, BMI and disease duration) and to matched controls, respectively. In multiple regression analysis, diabetes duration - but not HbA1c - negatively predicted femoral neck BMD in T1D (β = -0.39, p = 0.014), while BMI was a positive predictor for lumbar spine (β = 0.46, p = 0.006) and femoral neck BMD (β = 0.44, p = 0.007) in T2D, besides gender influence. Age negatively predicted BMD in controls, but not in patients with diabetes.
RESULTS
Long-standing diabetes and female gender particularly increase the risk for low bone mass in T1D. An increased body weight partially hinders BMD loss in T2D. The impact of age appears to be surpassed by that of other bone regulating factors in both T1D and T2D patients.
CONCLUSIONS
[ "Adult", "Biomarkers", "Blood Glucose", "Body Mass Index", "Bone Density", "Case-Control Studies", "Cross-Sectional Studies", "Diabetes Mellitus, Type 1", "Diabetes Mellitus, Type 2", "Female", "Follow-Up Studies", "Fractures, Bone", "Glycated Hemoglobin", "Humans", "Male", "Middle Aged", "Osteoporosis", "Prognosis", "Romania" ]
8344168
Background
Diabetes mellitus is a chronic whole-body disease leading to a wide range of complications, such as cardiovascular disease, retinopathy, nephropathy, neuropathy and also “sweet bone” disease [1]. Although the underlying pathophysiological background is very different, type 1 (T1D) and type 2 diabetes (T2D) are both associated with an increased fracture risk - which is multifactorial and only partially explained by falls and bone mineral density (BMD) [2]. The most consistent effect is upon the hip fracture risk, ranging between 2.4- and 7-fold increase in T1D [3] and being two to three times higher in T2D compared to the general population [1]. Diabetic osteopathy in T1D and T2D is characterized by low serum vitamin D, negative calcium balance, low bone turnover and high sclerostin levels [4]. However, bone mass may differ to some extent in T1D when compared to T2D [5], but not in all studies [6]. Low BMD occurs early after disease onset due to the deleterious effects of insulinopenia upon bone turnover and bone mass accrual in T1D, remaining rather stable afterwards [7]. Reported BMD in T2D varies from unaltered bone density [8, 9] to a paradoxically higher BMD [5] compared to controls. Low bone mass was also found in the later stages of T2D, possibly linked to microvascular disease [10]. Skeletal fragility is nevertheless described in both T1D and T2D, independently of BMD [2]. Advanced glycation end products (AGEs) alter the structure of the collagen, promote oxidative stress and inflammation, and also contribute to low bone turnover [1, 3]. The effect of glycemic control - reflected by HbA1c levels - upon bone is inconsistent, with some studies reporting an elevated fracture risk with increasing HbA1c [11, 12], while bone density evolution appears rather independent of HbA1c levels [5, 6]. In T2D, the protective effect of an increased body weight and hyperinsulinemia upon bone are counterbalanced by the negative impact of increased visceral adiposity and insulin resistance, an inadequate adaptation of bone strength to increased mechanical load, the long duration of disease evolution and various anti-diabetic drugs (e.g., thiazolidinediones or sodium-glucose cotransporter type 2 inhibitors – SGLT-2) [1]. We aimed at investigating independent predictors of BMD in T1D and T2D patients compared to controls with regard to general and diabetes - specific parameters.
null
null
Results
Despite being younger and having a mean BMI within the normal reference range, T1D patients had a longer duration of diabetes and poorer glycemic control, with more diabetes complications, compared with the older, rather obese T2D patients, who had better glycemic control (Table 1). Serum calcium, phosphate, PTH and thyroid status were similar between T1D and T2D patients, although serum PTH tended to be higher in T2D subjects (Table 1). All T1D patients were receiving exogenous insulin treatment, while all T2D patients were under metformin treatment: 16 were on metformin monotherapy, 13 were taking metformin together with a sulfonylurea drug and 10 combined incretin therapy with metformin (none were using thiazolidinediones or sodium-glucose co-transporter-2 inhibitors).

Table 1. Characteristics of the study patients

Parameter | T1D (n = 30) | Control (n = 30) | T2D (n = 39) | Control (n = 39) | T1D vs. T2D (p) | T1D vs. control (p) | T2D vs. control (p)
Gender: women (menopause) / men | 18 (60%) (6) / 12 (40%) | 18 (60%) (6) / 12 (40%) | 19 (48.7%) (16) / 20 (51.3%) | 19 (48.7%) (16) / 20 (51.3%) | - | - | -
Age (y) | 40.27 ± 2.7 | 42.4 ± 3.12 | 62.39 ± 1.21 | 60.03 ± 1.31 | < 0.001 | 0.61 | 0.2
BMI (kg/m2) | 24.27 ± 0.81 | 25.66 ± 0.73 | 30.56 ± 0.78 | 30.77 ± 0.78 | < 0.001 | 0.22 | 0.84
Duration of diabetes (y) | 14.23 ± 1.89 | - | 9.49 ± 0.8 | - | 0.026 | - | -
HbA1c (%) | 8.86 ± 0.35 | - | 6.86 ± 0.15 | - | < 0.001 | - | -
Complications (n) | 12 (40%) | - | 9 (23%) | - | - | - | -
Calcium (mg/dl) | 9.63 ± 0.7 | - | 9.69 ± 0.6 | - | 0.49 | - | -
Phosphate (mg/dl) | 3.43 ± 0.12 | - | 3.25 ± 0.08 | - | 0.19 | - | -
TSH (uUI/ml) | 3.3 ± 0.8 | - | 2.23 ± 0.013 | - | 0.14 | - | -
PTH (pg/ml) | 37.98 ± 2.24 | 37.11 ± 2.33 | 47.47 ± 3.9 | 48.42 ± 1.6 | 0.055 | 0.79 | 0.84

Data are mean ± SEM. BMI, body mass index; PTH, parathyroid hormone; SEM, standard error of the mean; TSH, thyroid stimulating hormone.

BMD values at the lumbar spine and femoral neck were similar between T1D and T2D patients, as well as between T1D patients and controls and between T2D patients and controls, both in the whole group and according to sex (Table 2). After adjusting for age, BMI and disease duration, BMD did not vary significantly between T1D and T2D patients. However, fewer patients in the T2D group exhibited low bone mass compared to matched controls, while the number of subjects with low bone mass was similar in T1D patients and matched controls (Table 2).
Table 2. BMD values in diabetes patients and controls

Measure | T1D | Control | T2D | Control | T1D vs. T2D (p) | T1D vs. control (p) | T2D vs. control (p)

Women (T1D n = 18; control n = 18; T2D n = 19; control n = 19)
Lumbar BMD (g/cm2) | 0.97 ± 0.02 | 0.97 ± 0.03 | 0.93 ± 0.04 | 0.93 ± 0.03 | 0.47 | 0.99 | 0.9
Lumbar T/Z-score | -0.8 ± 0.2 | -0.7 ± 0.2 | -1 ± 0.3 | -1.1 ± 0.3 | 0.59 | 0.83 | 0.91
Femoral neck BMD (g/cm2) | 0.74 ± 0.02 | 0.78 ± 0.03 | 0.77 ± 0.03 | 0.76 ± 0.03 | 0.52 | 0.29 | 0.98
Femoral neck T/Z-score | -1.1 ± 0.2 | -0.6 ± 0.3 | -0.6 ± 0.3 | -0.8 ± 0.2 | 0.21 | 0.16 | 0.79

Men (T1D n = 12; control n = 12; T2D n = 20; control n = 20)
Lumbar BMD (g/cm2) | 1.09 ± 0.03 | 1 ± 0.05 | 1.08 ± 0.04 | 1.03 ± 0.03 | 0.91 | 0.15 | 0.3
Lumbar T/Z-score | -0.2 ± 0.3 | -0.8 ± 0.5 | -0.1 ± 0.3 | -0.6 ± 0.3 | 0.76 | 0.26 | 0.3
Femoral neck BMD (g/cm2) | 0.94 ± 0.03 | 0.9 ± 0.04 | 0.9 ± 0.04 | 0.92 ± 0.03 | 0.38 | 0.48 | 0.52
Femoral neck T/Z-score | 0.1 ± 0.3 | -0.2 ± 0.4 | -0.3 ± 0.3 | 0 ± 0.2 | 0.38 | 0.55 | 0.42

Total (T1D n = 30; control n = 30; T2D n = 39; control n = 39)
Lumbar BMD (g/cm2) | 1.01 ± 0.02 | 0.98 ± 0.03 | 1.01 ± 0.03 | 0.98 ± 0.02 | 0.88 | 0.3 | 0.89
Lumbar T/Z-score | -0.6 ± 0.2 | -0.8 ± 0.2 | -0.6 ± 0.3 | -0.8 ± 0.2 | 0.85 | 0.5 | 0.94
Femoral neck BMD (g/cm2) | 0.82 ± 0.03 | 0.83 ± 0.28 | 0.83 ± 0.03 | 0.85 ± 0.02 | 0.79 | 0.81 | 0.79
Femoral neck T/Z-score | -0.6 ± 0.2 | -0.4 ± 0.21 | -0.5 ± 0.2 | -0.4 ± 0.2 | 0.61 | 0.53 | 0.55

Adjusted BMD* (LSMEAN ± SE; T1D n = 30, T2D n = 39)
Lumbar BMD (g/cm2) | 1.04 ± 0.04 | - | 0.99 ± 0.03 | - | 0.46 | - | -
Femoral neck BMD (g/cm2) | 0.84 ± 0.04 | - | 0.82 ± 0.03 | - | 0.7 | - | -

Low bone mass (WHO criteria), n | 8 | 8 | 13 | 17 | | |

*Adjusted for age, body mass index and diabetes duration. BMD, bone mineral density; LSMEAN, least square mean; SE, standard error; SEM, standard error of the mean; WHO, World Health Organization.
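The adjusted BMD rows above are ANCOVA least-square means, i.e., group means re-estimated with age, BMI and diabetes duration held at common values. A minimal sketch of that computation with statsmodels; the data frame and its column names are synthetic stand-ins, not the study database:

```python
# Illustrative ANCOVA sketch: group BMD means adjusted for age, BMI and
# diabetes duration ("least square means"). All values are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n1, n2 = 30, 39
df = pd.DataFrame({
    "group": ["T1D"] * n1 + ["T2D"] * n2,
    "age": np.r_[rng.normal(40, 12, n1), rng.normal(62, 8, n2)],
    "bmi": np.r_[rng.normal(24, 4, n1), rng.normal(31, 4, n2)],
    "duration": np.r_[rng.normal(14, 8, n1), rng.normal(9, 4, n2)].clip(1),
})
df["lumbar_bmd"] = (1.1 - 0.002 * df["age"] + 0.004 * df["bmi"]
                    + rng.normal(0, 0.08, n1 + n2))

model = smf.ols("lumbar_bmd ~ C(group) + age + bmi + duration", data=df).fit()

# Least-square means: predict each group's BMD with the covariates fixed at
# the overall sample means.
grid = pd.DataFrame({"group": ["T1D", "T2D"],
                     "age": df["age"].mean(),
                     "bmi": df["bmi"].mean(),
                     "duration": df["duration"].mean()})
print(grid.assign(adjusted_bmd=model.predict(grid)))
```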
BMD predictors in T1D, T2D and controls: General parameters (age, gender, BMI and PTH) and diabetes-specific parameters (disease duration, HbA1c) were introduced as independent variables in multiple regression analyses with lumbar and femoral neck BMD as outcome variables (Table 3). Gender independently predicted BMD across all models: compared to men, female sex was an independent risk factor for low BMD in T1D patients, T2D patients and controls, respectively. In T1D patients, diabetes duration was also a negative independent predictor of femoral neck BMD, while BMI was a positive independent predictor of both lumbar and femoral neck BMD in the T2D group, but not in controls. Age was a negative independent predictor of BMD in controls (both T1D- and T2D-matched controls), but not in patients with diabetes (Table 3).

Table 3. Multiple regression analysis of BMD predictors in patients with type 1 and type 2 diabetes mellitus and controls

Diabetes models
Model 1, Lumbar BMD (T1D: R2 = 0.43, P = 0.038; T2D: R2 = 0.37, P = 0.014)
Predictor | T1D β (p) | T2D β (p)
Age | 0.15 (0.5) | -0.03 (0.87)
Gender | -0.66 (0.002) | -0.52 (0.002)
Diabetes duration | -0.36 (0.075) | 0.16 (0.33)
HbA1c | 0.07 (0.7) | 0.02 (0.89)
BMI | -0.3 (0.16) | 0.46 (0.006)
PTH | 0.27 (0.21) | -0.1 (0.51)

Model 2, Femoral neck BMD (T1D: R2 = 0.68, P < 0.001; T2D: R2 = 0.42, P = 0.006)
Predictor | T1D β (p) | T2D β (p)
Age | -0.08 (0.64) | -0.26 (0.11)
Gender | -0.66 (< 0.001) | -0.55 (0.001)
Diabetes duration | -0.39 (0.014) | 0.27 (0.09)
HbA1c | 0.13 (0.33) | -0.11 (0.46)
BMI | 0.12 (0.45) | 0.44 (0.007)
PTH | 0.08 (0.63) | 0.09 (0.55)

Control models
Model 1, Lumbar BMD (control T1D: R2 = 0.27, P = 0.036; control T2D: R2 = 0.32, P = 0.01)
Predictor | Control T1D β (p) | Control T2D β (p)
Age | -0.45 (0.011) | -0.41 (0.013)
Gender | -0.31 (0.05) | -0.45 (0.005)
BMI | 0.04 (0.81) | 0.17 (0.36)
PTH | 0.05 (0.75) | -0.17 (0.33)

Model 2, Femoral neck BMD (control T1D: R2 = 0.35, P = 0.006; control T2D: R2 = 0.51, P < 0.001)
Predictor | Control T1D β (p) | Control T2D β (p)
Age | -0.31 (0.05) | -0.39 (0.006)
Gender | -0.5 (0.002) | -0.67 (< 0.001)
BMI | 0.04 (0.81) | 0.15 (0.35)
PTH | -0.06 (0.69) | -0.23 (0.12)

BMD, bone mineral density; BMI, body mass index; PTH, parathyroid hormone; T1D, type 1 diabetes; T2D, type 2 diabetes.
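A minimal sketch of the multiple regression setup behind Table 3, using statsmodels on synthetic data; continuous variables are z-scored so the coefficients come out as standardized betas comparable to the table (all column names and values are illustrative assumptions):

```python
# Illustrative multiple-regression sketch for femoral neck BMD predictors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 30
df = pd.DataFrame({
    "age": rng.normal(40, 12, n),
    "female": rng.integers(0, 2, n),     # binary coding: male = 0, female = 1
    "duration": rng.uniform(1, 30, n),
    "hba1c": rng.normal(8.9, 1.9, n),
    "bmi": rng.normal(24, 4, n),
    "pth": rng.normal(38, 12, n),
})
df["fn_bmd"] = (0.9 - 0.003 * df["duration"] - 0.05 * df["female"]
                + rng.normal(0, 0.05, n))

# z-score the continuous variables so coefficients are standardized betas
for col in ["age", "duration", "hba1c", "bmi", "pth", "fn_bmd"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

fit = smf.ols("fn_bmd ~ age + female + duration + hba1c + bmi + pth", df).fit()
print(fit.params.round(2))
print(fit.pvalues.round(3))
print(f"R^2 = {fit.rsquared:.2f}")
```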
Subgroup analysis - The presence of diabetic complications: T1D patients with diabetes complications (n = 12) had similar lumbar BMD values compared to T1D patients without complications (n = 18) and controls (n = 30) (p = 0.47); however, they tended to have a lower femoral neck BMD, although the difference did not reach statistical significance (0.764 g/cm2 in T1D patients with complications versus 0.858 g/cm2 in T1D patients without complications versus 0.829 g/cm2 in matched controls; p = 0.21). Moreover, T1D patients with complications had a longer disease duration (20.5 ± 3.3 versus 10.1 ± 1.68 years, p = 0.012) than T1D patients without complications, despite similar HbA1c and BMI (data not shown). BMD did not differ significantly between T2D patients with (n = 9) and without (n = 30) complications and controls (n = 39) (Fig. 1).

Fig. 1. BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD, bone mineral density; T1D, type 1 diabetes mellitus; T2D, type 2 diabetes mellitus. T1D patients with complications, n = 12; T1D patients without complications, n = 18; T2D patients with complications, n = 9; T2D patients without complications, n = 30; T1D controls, n = 30; T2D controls, n = 39.
Drugs in T2D: In the T2D group, we compared BMD across different treatment categories.
Although patients taking metformin + sulfonylurea tended to have a somewhat lower bone density than the other subgroups, neither lumbar BMD (1.03 ± 0.05 versus 0.95 ± 0.05 versus 1.04 ± 0.04, p = 0.42) nor femoral neck BMD (0.84 ± 0.04 versus 0.8 ± 0.05 versus 0.86 ± 0.04) differed significantly among metformin monotherapy (n = 16), metformin + sulfonylurea (n = 13) and metformin + incretin therapy (n = 10), respectively.
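Per the statistical methods, comparisons across three or more categories use one-way ANOVA. A minimal sketch of the three-subgroup comparison with scipy on synthetic BMD values (the numbers only mimic the reported subgroup means):

```python
# Illustrative one-way ANOVA across T2D treatment subgroups. Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
metformin = rng.normal(1.03, 0.18, 16)           # metformin monotherapy
metformin_su = rng.normal(0.95, 0.18, 13)        # metformin + sulfonylurea
metformin_incretin = rng.normal(1.04, 0.13, 10)  # metformin + incretin

f_stat, p_value = stats.f_oneway(metformin, metformin_su, metformin_incretin)
print(f"F = {f_stat:.2f}, p = {p_value:.2f}")
```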
Conclusions
Female sex and long-standing diabetes particularly increase the risk of low BMD in T1D, with special concern for the femoral neck. An increased BMI partially contributes to BMD preservation in T2D, independently of age; however, accurately assessing bone mass in T2D remains difficult because of the many factors contributing to bone changes.
[ "Background", "Methods", "Study design and subjects", "Evaluation and measurements", "Statistical analysis", "BMD predictors in T1D, T2D and controls", "Subgroup analysis", "The presence of diabetic complications", "Drugs in T2D" ]
[ "Diabetes mellitus is a chronic whole-body disease leading to a wide range of complications, such as cardiovascular disease, retinopathy, nephropathy, neuropathy and also “sweet bone” disease [1]. Although the underlying pathophysiological background is very different, type 1 (T1D) and type 2 diabetes (T2D) are both associated with an increased fracture risk - which is multifactorial and only partially explained by falls and bone mineral density (BMD) [2]. The most consistent effect is upon the hip fracture risk, ranging between 2.4- and 7-fold increase in T1D [3] and being two to three times higher in T2D compared to the general population [1].\nDiabetic osteopathy in T1D and T2D is characterized by low serum vitamin D, negative calcium balance, low bone turnover and high sclerostin levels [4]. However, bone mass may differ to some extent in T1D when compared to T2D [5], but not in all studies [6]. Low BMD occurs early after disease onset due to the deleterious effects of insulinopenia upon bone turnover and bone mass accrual in T1D, remaining rather stable afterwards [7]. Reported BMD in T2D varies from unaltered bone density [8, 9] to a paradoxically higher BMD [5] compared to controls. Low bone mass was also found in the later stages of T2D, possibly linked to microvascular disease [10].\nSkeletal fragility is nevertheless described in both T1D and T2D, independently of BMD [2]. Advanced glycation end products (AGEs) alter the structure of the collagen, promote oxidative stress and inflammation, and also contribute to low bone turnover [1, 3]. The effect of glycemic control - reflected by HbA1c levels - upon bone is inconsistent, with some studies reporting an elevated fracture risk with increasing HbA1c [11, 12], while bone density evolution appears rather independent of HbA1c levels [5, 6]. In T2D, the protective effect of an increased body weight and hyperinsulinemia upon bone are counterbalanced by the negative impact of increased visceral adiposity and insulin resistance, an inadequate adaptation of bone strength to increased mechanical load, the long duration of disease evolution and various anti-diabetic drugs (e.g., thiazolidinediones or sodium-glucose cotransporter type 2 inhibitors – SGLT-2) [1].\nWe aimed at investigating independent predictors of BMD in T1D and T2D patients compared to controls with regard to general and diabetes - specific parameters.", "Study design and subjects Patients diagnosed with diabetes (T1D and T2D) were consecutively recruited during routine follow-up visits for disease monitoring in the Diabetes, Nutrition and Metabolic Diseases Clinic of “Sf. Spiridon” Clinical Emergency Hospital Iasi (Romania) between January and December 2017. 
Patients aged between 18 and 80 years old were included if they had a well-established diagnosis of diabetes according to standardized criteria [13] in their original medical record (HbA1c > 6.5 % on two separate tests; T1D: new-onset hyperglycemia accompanied by ketonuria at debut, low serum levels of insulin and peptide C and requiring insulin treatment for control and survival - antibodies to glutamic acid decarboxylase were also tested where the clinical phenotype was rather non-specific, such as slow onset of symptoms, BMI ≥ 25 kg/m2 or age over 40 with normal BMI and requiring insulin treatment from the time of diagnosis [14]; T2D: two fasting blood sugar levels ≥ 126 mg/dl or an oral glucose tolerance test showing serum glucose ≥ 200 mg/dl accompanied by a phenotype of insulin resistance and not requiring insulin), were more than 1 year after disease onset, were receiving antidiabetic treatment (without any changes in medication type in the past six months), were at their first bone evaluation, and had an estimated glomerular filtration rate (eGFR) ≥ 60 ml/min/1.73 m2 (serum creatinine was measured and eGFR was calculated using the CKD-EPI equation). Age, sex and body mass index (BMI) 1:1 matched apparently healthy volunteer controls (CTL) referred by the general practitioner to the outpatient department in our hospital for general investigations were enrolled in the same period as the patients. Exclusion criteria for both groups were represented by calcium and vitamin D supplementation, bone active therapy (antiresorptive/bone-forming therapy), liver disease, moderate and severe chronic kidney disease (CKD; stage G3 to end-stage renal disease), history of parathyroid or rheumatological disease, oral corticosteroid use > 5 mg prednisone equivalent in the past 3 months or endogenous hypercortisolism, hypo- and hyperthyroidism, inflammatory bowel disease, hypogonadism (other than menopause), smoking (both regular and heavy) and heavy drinking (more than 2 drinks per day or more than 15 drinks per week for men and more than 1 drink per day or more than 8 drinks per week for women). Subjects exhibiting hyperglycemia (an abnormal fasting blood sugar, impaired glucose tolerance or diabetes) were further excluded from the CTL group.\nSixty-nine patients (30 T1D and 39 T2D) and 69 age, sex and BMI-matched CTL that were willing to participate and met the study inclusion and exclusion criteria were recruited in the Diabetes and Endocrinology outpatient clinics, after giving written informed consent and were enrolled in this cross-sectional case-control study. The study adhered to the Declaration of Helsinki and was approved by the institutional Ethics Committee.\nPatients diagnosed with diabetes (T1D and T2D) were consecutively recruited during routine follow-up visits for disease monitoring in the Diabetes, Nutrition and Metabolic Diseases Clinic of “Sf. Spiridon” Clinical Emergency Hospital Iasi (Romania) between January and December 2017. 
Patients aged between 18 and 80 years old were included if they had a well-established diagnosis of diabetes according to standardized criteria [13] in their original medical record (HbA1c > 6.5 % on two separate tests; T1D: new-onset hyperglycemia accompanied by ketonuria at debut, low serum levels of insulin and peptide C and requiring insulin treatment for control and survival - antibodies to glutamic acid decarboxylase were also tested where the clinical phenotype was rather non-specific, such as slow onset of symptoms, BMI ≥ 25 kg/m2 or age over 40 with normal BMI and requiring insulin treatment from the time of diagnosis [14]; T2D: two fasting blood sugar levels ≥ 126 mg/dl or an oral glucose tolerance test showing serum glucose ≥ 200 mg/dl accompanied by a phenotype of insulin resistance and not requiring insulin), were more than 1 year after disease onset, were receiving antidiabetic treatment (without any changes in medication type in the past six months), were at their first bone evaluation, and had an estimated glomerular filtration rate (eGFR) ≥ 60 ml/min/1.73 m2 (serum creatinine was measured and eGFR was calculated using the CKD-EPI equation). Age, sex and body mass index (BMI) 1:1 matched apparently healthy volunteer controls (CTL) referred by the general practitioner to the outpatient department in our hospital for general investigations were enrolled in the same period as the patients. Exclusion criteria for both groups were represented by calcium and vitamin D supplementation, bone active therapy (antiresorptive/bone-forming therapy), liver disease, moderate and severe chronic kidney disease (CKD; stage G3 to end-stage renal disease), history of parathyroid or rheumatological disease, oral corticosteroid use > 5 mg prednisone equivalent in the past 3 months or endogenous hypercortisolism, hypo- and hyperthyroidism, inflammatory bowel disease, hypogonadism (other than menopause), smoking (both regular and heavy) and heavy drinking (more than 2 drinks per day or more than 15 drinks per week for men and more than 1 drink per day or more than 8 drinks per week for women). Subjects exhibiting hyperglycemia (an abnormal fasting blood sugar, impaired glucose tolerance or diabetes) were further excluded from the CTL group.\nSixty-nine patients (30 T1D and 39 T2D) and 69 age, sex and BMI-matched CTL that were willing to participate and met the study inclusion and exclusion criteria were recruited in the Diabetes and Endocrinology outpatient clinics, after giving written informed consent and were enrolled in this cross-sectional case-control study. The study adhered to the Declaration of Helsinki and was approved by the institutional Ethics Committee.\nEvaluation and measurements Complete medical history (anamnesis and medical charts) was recorded for all patients and CTL. The presence of microvascular complications was defined as: [1] nephropathy: positive albumin:creatinine ratio (≥ 30 mg/g) on two or more occasions, [2] retinopathy: positive ophthalmologic fundus examination, [3] polineuropathy: clinical measurement of vibration. 
Macrovascular complications were defined based on the recorded history of coronary heart disease, stroke, myocardial infarction, or peripheral vascular disease, respectively.\nAfter clinical examination (height and weight were recorded and BMI was calculated as weight (kg) divided by square height (m)), all patients underwent dual-energy X-ray absorptiometry (DXA; Hologic Delphi A, software version 12.7.3.2 Hologic Inc., USA) scanning to measure BMD at the lumbar spine and hip (femoral neck was reported due to lower values compared to total hip, according to the recommendations of the ISCD [15]). Coefficient of variation was 0.39 % for lumbar anterior-posterior spine and 1 % for femoral neck BMD. Measurements were made by two trained technicians certified by the International Society for Clinical Densitometry (ISCD), according to standard protocol and with daily calibration. Least significant change (LSC) was 0.008 g/cm2 for lumbar BMD and 0.0104 g/cm2 for femoral neck BMD, respectively. According to the Adult Official Positions of the ISCD [15], T-scores were reported for postmenopausal women and men ≥ 50 years of age (“low bone mass” was defined as T-score <-1 in this category) while Z-scores were recorded for premenopausal women and men < 50 years (“low bone mass” was defined as Z-score ≤-2). Also, if there was a more than 1.0 T-score difference between adjacent vertebrae, the questioned vertebra was excluded from the analysis, while the BMD of the remaining vertebrae was used to derive T-score [15]. Menopause was defined as more than 12 months since natural cessation of menstrual cycles.\nOn the same day as the clinical and DXA examinations, blood samples were collected after overnight fasting in all study participants. Biochemical analysis of standard clinical parameters included HbA1c determination (ion-exchange high-performance liquid chromatography (HPLC) method), serum calcium and phosphate (colorimetry; Cobas 6000 analyzer, Roche), serum thyroid stimulating hormone (TSH) and parathyroid hormone (PTH) (intact PTH second-generation chemiluminescent enzyme immunometric assay; Immulite 2000 Immunoassay System, Siemens).\nComplete medical history (anamnesis and medical charts) was recorded for all patients and CTL. The presence of microvascular complications was defined as: [1] nephropathy: positive albumin:creatinine ratio (≥ 30 mg/g) on two or more occasions, [2] retinopathy: positive ophthalmologic fundus examination, [3] polineuropathy: clinical measurement of vibration. Macrovascular complications were defined based on the recorded history of coronary heart disease, stroke, myocardial infarction, or peripheral vascular disease, respectively.\nAfter clinical examination (height and weight were recorded and BMI was calculated as weight (kg) divided by square height (m)), all patients underwent dual-energy X-ray absorptiometry (DXA; Hologic Delphi A, software version 12.7.3.2 Hologic Inc., USA) scanning to measure BMD at the lumbar spine and hip (femoral neck was reported due to lower values compared to total hip, according to the recommendations of the ISCD [15]). Coefficient of variation was 0.39 % for lumbar anterior-posterior spine and 1 % for femoral neck BMD. Measurements were made by two trained technicians certified by the International Society for Clinical Densitometry (ISCD), according to standard protocol and with daily calibration. Least significant change (LSC) was 0.008 g/cm2 for lumbar BMD and 0.0104 g/cm2 for femoral neck BMD, respectively. 
According to the Adult Official Positions of the ISCD [15], T-scores were reported for postmenopausal women and men ≥ 50 years of age (“low bone mass” was defined as T-score <-1 in this category) while Z-scores were recorded for premenopausal women and men < 50 years (“low bone mass” was defined as Z-score ≤-2). Also, if there was a more than 1.0 T-score difference between adjacent vertebrae, the questioned vertebra was excluded from the analysis, while the BMD of the remaining vertebrae was used to derive T-score [15]. Menopause was defined as more than 12 months since natural cessation of menstrual cycles.\nOn the same day as the clinical and DXA examinations, blood samples were collected after overnight fasting in all study participants. Biochemical analysis of standard clinical parameters included HbA1c determination (ion-exchange high-performance liquid chromatography (HPLC) method), serum calcium and phosphate (colorimetry; Cobas 6000 analyzer, Roche), serum thyroid stimulating hormone (TSH) and parathyroid hormone (PTH) (intact PTH second-generation chemiluminescent enzyme immunometric assay; Immulite 2000 Immunoassay System, Siemens).\nStatistical analysis SPSS (SPSS Statistics version 20.0 for Windows) was employed for statistical analysis. Data are expressed as mean ± SEM (standard error of the mean). Comparisons between groups (T1D versus T2D, T1D versus controls and T2D versus controls, respectively) were made using Student’s t-test (for normally distributed data) or the non-parametric Mann-Whitney U test (for skewed data), after checking for normal distribution (Shapiro-Wilk test). Analysis of variance (ANOVA) was employed for comparisons between 3 or more categories. The analysis of covariance (ANCOVA) was used to calculate age, BMI and diabetes duration - adjusted BMD values in T1D compared to T2D (least square means ± standard error are reported). Multiple regression analysis was performed to assess independent predictors of bone mass in T1D, T2D and matched-CTL, respectively, as follows: continuous variables potentially influencing BMD variation, such as age, diabetes duration, HbA1c, BMI and PTH, as well as categorical variables (introduced after binary coding 0/1), such as gender (male = 0, female = 1) were introduced as independent variables in regression models with lumbar BMD and femoral neck BMD as the outcome variables, respectively. The level of significance was established for p-value < 0.05.\nSPSS (SPSS Statistics version 20.0 for Windows) was employed for statistical analysis. Data are expressed as mean ± SEM (standard error of the mean). Comparisons between groups (T1D versus T2D, T1D versus controls and T2D versus controls, respectively) were made using Student’s t-test (for normally distributed data) or the non-parametric Mann-Whitney U test (for skewed data), after checking for normal distribution (Shapiro-Wilk test). Analysis of variance (ANOVA) was employed for comparisons between 3 or more categories. The analysis of covariance (ANCOVA) was used to calculate age, BMI and diabetes duration - adjusted BMD values in T1D compared to T2D (least square means ± standard error are reported). 
Study design and subjects
Patients diagnosed with diabetes (T1D and T2D) were consecutively recruited during routine follow-up visits for disease monitoring in the Diabetes, Nutrition and Metabolic Diseases Clinic of “Sf. Spiridon” Clinical Emergency Hospital Iasi (Romania) between January and December 2017. Patients aged between 18 and 80 years were included if they: had a well-established diagnosis of diabetes in their original medical record according to standardized criteria [13] (HbA1c > 6.5 % on two separate tests; T1D: new-onset hyperglycemia accompanied by ketonuria at onset, low serum levels of insulin and C-peptide, and insulin treatment required for control and survival - antibodies to glutamic acid decarboxylase were also tested where the clinical phenotype was non-specific, such as slow onset of symptoms, BMI ≥ 25 kg/m2, or age over 40 with normal BMI and insulin required from the time of diagnosis [14]; T2D: two fasting blood glucose levels ≥ 126 mg/dl, or an oral glucose tolerance test showing serum glucose ≥ 200 mg/dl, accompanied by a phenotype of insulin resistance and not requiring insulin); were more than 1 year after disease onset; were receiving antidiabetic treatment without any change in medication type in the past six months; were at their first bone evaluation; and had an estimated glomerular filtration rate (eGFR) ≥ 60 ml/min/1.73 m2 (serum creatinine was measured and eGFR was calculated using the CKD-EPI equation). Age-, sex- and body mass index (BMI)-matched (1:1) apparently healthy volunteer controls (CTL), referred by their general practitioner to the outpatient department of our hospital for general investigations, were enrolled in the same period. Exclusion criteria for both groups were calcium and vitamin D supplementation, bone-active therapy (antiresorptive or bone-forming), liver disease, moderate or severe chronic kidney disease (CKD; stage G3 to end-stage renal disease), history of parathyroid or rheumatological disease, oral corticosteroid use > 5 mg prednisone equivalent in the past 3 months or endogenous hypercortisolism, hypo- and hyperthyroidism, inflammatory bowel disease, hypogonadism (other than menopause), smoking (both regular and heavy), and heavy drinking (more than 2 drinks per day or more than 15 drinks per week for men; more than 1 drink per day or more than 8 drinks per week for women). Subjects exhibiting hyperglycemia (abnormal fasting blood glucose, impaired glucose tolerance or diabetes) were further excluded from the CTL group.

Sixty-nine patients (30 T1D and 39 T2D) and 69 age-, sex- and BMI-matched CTL who were willing to participate and met the inclusion and exclusion criteria were recruited in the Diabetes and Endocrinology outpatient clinics, gave written informed consent, and were enrolled in this cross-sectional case-control study.
The study adhered to the Declaration of Helsinki and was approved by the institutional Ethics Committee.
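The eGFR inclusion threshold above relies on the CKD-EPI creatinine equation. As a sketch only, here is the 2009 CKD-EPI formula in Python (the race coefficient of the original equation is omitted here, and the study laboratory may have used its own implementation):

```python
def ckd_epi_egfr(creatinine_mg_dl: float, age: float, female: bool) -> float:
    """2009 CKD-EPI creatinine equation, in ml/min/1.73 m2 (sketch)."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411
    ratio = creatinine_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    return egfr * 1.018 if female else egfr

# Inclusion required ckd_epi_egfr(scr, age, female) >= 60.
```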
[ "Diabetes mellitus is a chronic whole-body disease leading to a wide range of complications, such as cardiovascular disease, retinopathy, nephropathy, neuropathy and also “sweet bone” disease [1]. Although the underlying pathophysiological background is very different, type 1 (T1D) and type 2 diabetes (T2D) are both associated with an increased fracture risk - which is multifactorial and only partially explained by falls and bone mineral density (BMD) [2]. The most consistent effect is upon the hip fracture risk, ranging between 2.4- and 7-fold increase in T1D [3] and being two to three times higher in T2D compared to the general population [1].\nDiabetic osteopathy in T1D and T2D is characterized by low serum vitamin D, negative calcium balance, low bone turnover and high sclerostin levels [4]. However, bone mass may differ to some extent in T1D when compared to T2D [5], but not in all studies [6]. Low BMD occurs early after disease onset due to the deleterious effects of insulinopenia upon bone turnover and bone mass accrual in T1D, remaining rather stable afterwards [7]. Reported BMD in T2D varies from unaltered bone density [8, 9] to a paradoxically higher BMD [5] compared to controls. Low bone mass was also found in the later stages of T2D, possibly linked to microvascular disease [10].\nSkeletal fragility is nevertheless described in both T1D and T2D, independently of BMD [2]. Advanced glycation end products (AGEs) alter the structure of the collagen, promote oxidative stress and inflammation, and also contribute to low bone turnover [1, 3]. The effect of glycemic control - reflected by HbA1c levels - upon bone is inconsistent, with some studies reporting an elevated fracture risk with increasing HbA1c [11, 12], while bone density evolution appears rather independent of HbA1c levels [5, 6]. In T2D, the protective effect of an increased body weight and hyperinsulinemia upon bone are counterbalanced by the negative impact of increased visceral adiposity and insulin resistance, an inadequate adaptation of bone strength to increased mechanical load, the long duration of disease evolution and various anti-diabetic drugs (e.g., thiazolidinediones or sodium-glucose cotransporter type 2 inhibitors – SGLT-2) [1].\nWe aimed at investigating independent predictors of BMD in T1D and T2D patients compared to controls with regard to general and diabetes - specific parameters.", "Study design and subjects Patients diagnosed with diabetes (T1D and T2D) were consecutively recruited during routine follow-up visits for disease monitoring in the Diabetes, Nutrition and Metabolic Diseases Clinic of “Sf. Spiridon” Clinical Emergency Hospital Iasi (Romania) between January and December 2017. 
Results
Despite being younger and having a mean BMI within the normal reference range, T1D patients had a longer duration of diabetes, poorer glycemic control and more diabetes complications than the older, generally obese but better-controlled T2D patients (Table 1). Serum calcium, phosphate, PTH and thyroid status were similar between T1D and T2D patients, although serum PTH tended to be higher in T2D subjects (Table 1). All T1D patients were receiving exogenous insulin, while all T2D patients were on metformin: 16 were on metformin monotherapy, 13 were taking metformin together with a sulfonylurea and 10 combined incretin therapy with metformin (none were using thiazolidinediones or sodium-glucose co-transporter-2 inhibitors).

Table 1 Characteristics of the study patients (mean ± SEM)

Parameter | T1D (n = 30) | Control (n = 30) | T2D (n = 39) | Control (n = 39) | p, T1D vs T2D | p, T1D vs CTL | p, T2D vs CTL
Women, n (menopause) | 18 (60 %) (6) | 18 (60 %) (6) | 19 (48.7 %) (16) | 19 (48.7 %) (16) | - | - | -
Men, n | 12 (40 %) | 12 (40 %) | 20 (51.3 %) | 20 (51.3 %) | - | - | -
Age (y) | 40.27 ± 2.7 | 42.4 ± 3.12 | 62.39 ± 1.21 | 60.03 ± 1.31 | < 0.001 | 0.61 | 0.2
BMI (kg/m2) | 24.27 ± 0.81 | 25.66 ± 0.73 | 30.56 ± 0.78 | 30.77 ± 0.78 | < 0.001 | 0.22 | 0.84
Duration of diabetes (y) | 14.23 ± 1.89 | - | 9.49 ± 0.8 | - | 0.026 | - | -
HbA1c (%) | 8.86 ± 0.35 | - | 6.86 ± 0.15 | - | < 0.001 | - | -
Complications, n | 12 (40 %) | - | 9 (23 %) | - | - | - | -
Calcium (mg/dl) | 9.63 ± 0.7 | - | 9.69 ± 0.6 | - | 0.49 | - | -
Phosphate (mg/dl) | 3.43 ± 0.12 | - | 3.25 ± 0.08 | - | 0.19 | - | -
TSH (µIU/ml) | 3.3 ± 0.8 | - | 2.23 ± 0.013 | - | 0.14 | - | -
PTH (pg/ml) | 37.98 ± 2.24 | 37.11 ± 2.33 | 47.47 ± 3.9 | 48.42 ± 1.6 | 0.055 | 0.79 | 0.84

BMI body mass index, PTH parathyroid hormone, SEM standard error of the mean, TSH thyroid stimulating hormone

BMD values at the lumbar spine and femoral neck were similar between T1D and T2D patients, between T1D patients and controls, and between T2D patients and controls, both in the whole group and according to sex (Table 2). After adjusting for age, BMI and disease duration, BMD did not vary significantly between T1D and T2D patients (Table 2).
However, fewer patients in the T2D group exhibited low bone mass compared to matched controls, while the number of subjects with low bone mass was similar in T1D patients and matched controls (Table 2).

Table 2 BMD values in diabetes patients and controls (mean ± SEM)

Parameter | T1D | Control | T2D | Control | p, T1D vs T2D | p, T1D vs CTL | p, T2D vs CTL
Women (T1D n = 18, CTL n = 18, T2D n = 19, CTL n = 19)
Lumbar BMD (g/cm2) | 0.97 ± 0.02 | 0.97 ± 0.03 | 0.93 ± 0.04 | 0.93 ± 0.03 | 0.47 | 0.99 | 0.9
Lumbar T/Z-score | -0.8 ± 0.2 | -0.7 ± 0.2 | -1 ± 0.3 | -1.1 ± 0.3 | 0.59 | 0.83 | 0.91
Femoral neck BMD (g/cm2) | 0.74 ± 0.02 | 0.78 ± 0.03 | 0.77 ± 0.03 | 0.76 ± 0.03 | 0.52 | 0.29 | 0.98
Femoral neck T/Z-score | -1.1 ± 0.2 | -0.6 ± 0.3 | -0.6 ± 0.3 | -0.8 ± 0.2 | 0.21 | 0.16 | 0.79
Men (T1D n = 12, CTL n = 12, T2D n = 20, CTL n = 20)
Lumbar BMD (g/cm2) | 1.09 ± 0.03 | 1 ± 0.05 | 1.08 ± 0.04 | 1.03 ± 0.03 | 0.91 | 0.15 | 0.3
Lumbar T/Z-score | -0.2 ± 0.3 | -0.8 ± 0.5 | -0.1 ± 0.3 | -0.6 ± 0.3 | 0.76 | 0.26 | 0.3
Femoral neck BMD (g/cm2) | 0.94 ± 0.03 | 0.9 ± 0.04 | 0.9 ± 0.04 | 0.92 ± 0.03 | 0.38 | 0.48 | 0.52
Femoral neck T/Z-score | 0.1 ± 0.3 | -0.2 ± 0.4 | -0.3 ± 0.3 | 0 ± 0.2 | 0.38 | 0.55 | 0.42
Total (T1D n = 30, CTL n = 30, T2D n = 39, CTL n = 39)
Lumbar BMD (g/cm2) | 1.01 ± 0.02 | 0.98 ± 0.03 | 1.01 ± 0.03 | 0.98 ± 0.02 | 0.88 | 0.3 | 0.89
Lumbar T/Z-score | -0.6 ± 0.2 | -0.8 ± 0.2 | -0.6 ± 0.3 | -0.8 ± 0.2 | 0.85 | 0.5 | 0.94
Femoral neck BMD (g/cm2) | 0.82 ± 0.03 | 0.83 ± 0.28 | 0.83 ± 0.03 | 0.85 ± 0.02 | 0.79 | 0.81 | 0.79
Femoral neck T/Z-score | -0.6 ± 0.2 | -0.4 ± 0.21 | -0.5 ± 0.2 | -0.4 ± 0.2 | 0.61 | 0.53 | 0.55
Adjusted BMD* (LSMEAN ± SE; T1D n = 30, T2D n = 39)
Lumbar BMD (g/cm2) | 1.04 ± 0.04 | - | 0.99 ± 0.03 | - | 0.46 | - | -
Femoral neck BMD (g/cm2) | 0.84 ± 0.04 | - | 0.82 ± 0.03 | - | 0.7 | - | -
Low bone mass (WHO criteria), n | 8 | 8 | 13 | 17 | - | - | -

*adjusted for age, body mass index and diabetes duration
BMD bone mineral density, LSMEAN least-squares mean, SE standard error, SEM standard error of the mean, WHO World Health Organization
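The adjusted BMD rows of Table 2 come from the ANCOVA described in the statistical methods. A sketch with statsmodels follows (data-frame, column and group names are hypothetical): the least-squares means are the model's predictions for each diabetes type with all covariates held at their sample means.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df pools T1D and T2D patients; "group" is "T1D" or "T2D".
model = smf.ols("lumbar_bmd ~ C(group) + age + bmi + duration", data=df).fit()

grid = pd.DataFrame({
    "group": ["T1D", "T2D"],
    "age": df["age"].mean(),
    "bmi": df["bmi"].mean(),
    "duration": df["duration"].mean(),
})
ls_means = model.predict(grid)  # covariate-adjusted (least-squares) means
```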
BMD predictors in T1D, T2D and controls
General parameters (age, gender, BMI and PTH) and diabetes-specific parameters (disease duration, HbA1c) were introduced as independent variables in multiple regression analyses with lumbar and femoral neck BMD as outcome variables (Table 3). Gender independently predicted BMD across all models: compared to men, female sex was an independent risk factor for low BMD in T1D patients, T2D patients and controls. In T1D patients, diabetes duration was also a negative independent predictor of femoral neck BMD, while BMI was a positive independent predictor of both lumbar and femoral neck BMD in the T2D group - but not in controls. Age was a negative independent predictor of BMD in controls (both T1D and T2D controls), but not in patients with diabetes (Table 3).

Table 3 Multiple regression analysis of BMD predictors in patients with type 1 and type 2 diabetes mellitus and controls

Patients
Outcome / Predictor | T1D Beta | T1D p-value | T2D Beta | T2D p-value
Model 1: Lumbar BMD (T1D: R2 = 0.43, P = 0.038; T2D: R2 = 0.37, P = 0.014)
Age | 0.15 | 0.5 | -0.03 | 0.87
Gender | -0.66 | 0.002 | -0.52 | 0.002
Diabetes duration | -0.36 | 0.075 | 0.16 | 0.33
HbA1c | 0.07 | 0.7 | 0.02 | 0.89
BMI | -0.3 | 0.16 | 0.46 | 0.006
PTH | 0.27 | 0.21 | -0.1 | 0.51
Model 2: Femoral neck BMD (T1D: R2 = 0.68, P < 0.001; T2D: R2 = 0.42, P = 0.006)
Age | -0.08 | 0.64 | -0.26 | 0.11
Gender | -0.66 | < 0.001 | -0.55 | 0.001
Diabetes duration | -0.39 | 0.014 | 0.27 | 0.09
HbA1c | 0.13 | 0.33 | -0.11 | 0.46
BMI | 0.12 | 0.45 | 0.44 | 0.007
PTH | 0.08 | 0.63 | 0.09 | 0.55

Controls
Outcome / Predictor | Control T1D Beta | Control T1D p-value | Control T2D Beta | Control T2D p-value
Model 1: Lumbar BMD (Control T1D: R2 = 0.27, P = 0.036; Control T2D: R2 = 0.32, P = 0.01)
Age | -0.45 | 0.011 | -0.41 | 0.013
Gender | -0.31 | 0.05 | -0.45 | 0.005
BMI | 0.04 | 0.81 | 0.17 | 0.36
PTH | 0.05 | 0.75 | -0.17 | 0.33
Model 2: Femoral neck BMD (Control T1D: R2 = 0.35, P = 0.006; Control T2D: R2 = 0.51, P < 0.001)
Age | -0.31 | 0.05 | -0.39 | 0.006
Gender | -0.5 | 0.002 | -0.67 | < 0.001
BMI | 0.04 | 0.81 | 0.15 | 0.35
PTH | -0.06 | 0.69 | -0.23 | 0.12

BMD bone mineral density, BMI body mass index, PTH parathyroid hormone, T1D type 1 diabetes, T2D type 2 diabetes
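For illustration, regression models of this form can be reproduced with statsmodels; to obtain standardized coefficients comparable to the Beta values in Table 3, all variables are z-scored before fitting. The data-frame column names below are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["age", "gender", "diabetes_duration", "hba1c", "bmi", "pth"]

def bmd_model(df: pd.DataFrame, outcome: str):
    """OLS of BMD on the predictors above; 'gender' is binary-coded
    (male = 0, female = 1). Z-scoring yields standardized betas."""
    data = df[PREDICTORS + [outcome]].astype(float)
    data = (data - data.mean()) / data.std()  # standardize all columns
    X = sm.add_constant(data[PREDICTORS])
    return sm.OLS(data[outcome], X).fit()

# e.g. print(bmd_model(t1d_df, "femoral_neck_bmd").summary())
```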
Subgroup analysis
The presence of diabetic complications
T1D patients with diabetes complications (n = 12) had lumbar BMD values similar to those of T1D patients without complications (n = 18) and controls (n = 30) (p = 0.47); they tended to have lower femoral neck BMD, but the differences did not reach statistical significance (0.764 g/cm2 in T1D patients with complications versus 0.858 g/cm2 in T1D patients without complications versus 0.829 g/cm2 in matched controls, p = 0.21). Moreover, T1D patients with complications had a longer disease duration than T1D patients without complications (20.5 ± 3.3 versus 10.1 ± 1.68 years, p = 0.012), despite similar HbA1c and BMI (data not shown). BMD did not differ significantly between T2D patients with (n = 9) and without (n = 30) complications and controls (n = 39) (Fig. 1).

Fig. 1 BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. T1D patients with complications: n = 12; T1D patients without complications: n = 18; T2D patients with complications: n = 9; T2D patients without complications: n = 30; T1D controls: n = 30; T2D controls: n = 39
Drugs in T2D
In the T2D group, we compared BMD across treatment categories. Although patients taking metformin + sulfonylurea tended to have somewhat lower bone density than the other subgroups, neither lumbar BMD (1.03 ± 0.05 versus 0.95 ± 0.05 versus 1.04 ± 0.04 g/cm2, p = 0.42) nor femoral neck BMD (0.84 ± 0.04 versus 0.80 ± 0.05 versus 0.86 ± 0.04 g/cm2) differed significantly among metformin monotherapy (n = 16), metformin + sulfonylurea (n = 13) and metformin + incretin therapy (n = 10).
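The three-subgroup comparison above corresponds to the one-way ANOVA described in the statistical methods. A minimal SciPy sketch follows, with made-up illustrative values (placeholders, not the study measurements):

```python
from scipy import stats

# Illustrative lumbar BMD values (g/cm2) per T2D treatment subgroup;
# these numbers are placeholders, not the study data.
bmd_metformin = [1.01, 1.05, 0.98, 1.10, 1.02]
bmd_metformin_su = [0.92, 0.97, 0.95, 0.99, 0.93]
bmd_metformin_incretin = [1.03, 1.06, 1.00, 1.08, 1.04]

f_stat, p_value = stats.f_oneway(bmd_metformin, bmd_metformin_su,
                                 bmd_metformin_incretin)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```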
Age was a negative independent predictor of BMD in controls (both T1D and T2D controls), but not in patients with diabetes (Table 3).\nTable 3Multiple regression analysis of BMD predictors in patients with type 1 and type 2 diabetes mellitus and controls, respectivelyModelDependent variableT1D modelT2 modelPredictorT1DT2DBetap-valueBetap-value1Lumbar BMDR2 = 0.43P = 0.038R2 = 0.37P = 0.014AgeGenderDiabetes durationHbA1cBMIPTH0.15-0.66-0.360.07-0.30.270.50.0020.0750.70.160.21-0.03-0.520.160.020.46-0.10.870.0020.330.890.0060.512Femoral Neck BMDR2 = 0.68P < 0.001R2 = 0.42P = 0.006AgeGenderDiabetes durationHbA1cBMIPTH-0.08-0.66-0.390.130.120.080.64< 0.0010.0140.330.450.63-0.26-0.550.27-0.110.440.090.110.0010.090.460.0070.55ModelDependent variableControl T1D modelControl T2 modelPredictorControl T1DControl T2DBetap-valueBetap-value1Lumbar BMDR2 = 0.27P = 0.036R2 = 0.32P = 0.01AgeGenderBMIPTH-0.45-0.310.040.050.0110.050.810.75-0.41-0.450.17-0.170.0130.0050.360.332Femoral Neck BMDR2 = 0.35P = 0.006R2 = 0.51P < 0.001AgeGenderBMIPTH-0.31-0.50.04-0.060.050.0020.810.69-0.39-0.670.15-0.230.006< 0.0010.350.12BMD bone mineral density, BMI body mass index, PTH parathyroid hormone, T1D type 1 diabetes, T2D type 2 diabetes\nMultiple regression analysis of BMD predictors in patients with type 1 and type 2 diabetes mellitus and controls, respectively\nR2 = 0.43\nP = 0.038\nR2 = 0.37\nP = 0.014\nAge\nGender\nDiabetes duration\nHbA1c\nBMI\nPTH\n0.15\n-0.66\n-0.36\n0.07\n-0.3\n0.27\n0.5\n0.002\n0.075\n0.7\n0.16\n0.21\n-0.03\n-0.52\n0.16\n0.02\n0.46\n-0.1\n0.87\n0.002\n0.33\n0.89\n0.006\n0.51\nR2 = 0.68\nP < 0.001\nR2 = 0.42\nP = 0.006\nAge\nGender\nDiabetes duration\nHbA1c\nBMI\nPTH\n-0.08\n-0.66\n-0.39\n0.13\n0.12\n0.08\n0.64\n< 0.001\n0.014\n0.33\n0.45\n0.63\n-0.26\n-0.55\n0.27\n-0.11\n0.44\n0.09\n0.11\n0.001\n0.09\n0.46\n0.007\n0.55\nR2 = 0.27\nP = 0.036\nR2 = 0.32\nP = 0.01\nAge\nGender\nBMI\nPTH\n-0.45\n-0.31\n0.04\n0.05\n0.011\n0.05\n0.81\n0.75\n-0.41\n-0.45\n0.17\n-0.17\n0.013\n0.005\n0.36\n0.33\nR2 = 0.35\nP = 0.006\nR2 = 0.51\nP < 0.001\nAge\nGender\nBMI\nPTH\n-0.31\n-0.5\n0.04\n-0.06\n0.05\n0.002\n0.81\n0.69\n-0.39\n-0.67\n0.15\n-0.23\n0.006\n< 0.001\n0.35\n0.12\nBMD bone mineral density, BMI body mass index, PTH parathyroid hormone, T1D type 1 diabetes, T2D type 2 diabetes", "The presence of diabetic complications T1D patients with diabetes complications (n = 12) had similar lumbar BMD values compared to T1D patients without complications (n = 18) and controls (n = 30), respectively (p = 0.47); however, they tended to have a rather lower femoral neck BMD, but the differences did not reach statistical significance (0.764 g/cm2 in T1D patients with complications versus 0.858 g/cm2 in T1D patients without complications versus 0.829 g/cm2 in matched controls, respectively, p = 0.21). More so, T1D patients with diabetes complications had longer disease duration (20.5 ± 3.3 versus 10.1 ± 1.68 years, p = 0.012) compared to T1D patients without complications, despite similar HbA1c and BMI (data not shown). BMD did not differ significantly between T2D patients with (n = 9) and without (n = 30) complications and controls (n = 39), respectively (Fig. 1).\nFig. 1BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. 
T1D and T2D patients had BMD similar to that of their respective controls. Diabetes duration, but not HbA1c, negatively predicted femoral neck BMD in T1D, but not in T2D patients. In the T2D group, BMI was an independent predictor of bone density, while female gender was independently associated with low BMD in both T1D and T2D. Unlike in patients with diabetes, age was the other major independent predictor of bone mass in controls, in addition to gender.

Although BMD underestimates fracture risk in patients with diabetes, it remains the cornerstone of bone evaluation in this particular group due to its high accessibility and low cost [16].
Data reporting BMD values in T1D and T2D patients are very heterogeneous and rather inconsistent with regard to BMD predictive factors.

An older meta-analysis [5] reported negative Z-scores for T1D patients and positive Z-scores for T2D subjects, concluding that T1D is associated with lower bone mass, while T2D patients generally have higher BMD. More recent studies, however, reported similar BMD values both between patients with diabetes and controls and between T1D and T2D patients [6, 17]. Another recent study investigating bone mass in long-standing (longer than 50 years) T1D patients with good glycemic control and low rates of vascular complications reported similar or even better BMD, expressed as Z-score, compared to an age-, gender- and race-matched population [18]. We found lower bone mass scores at the femoral neck in T1D women compared to T2D women and controls; nevertheless, the differences did not reach statistical significance, probably due to the limited number of patients. Also, more women in the T2D group were postmenopausal compared to the T1D group, which may account for the lack of a statistical difference in BMD between types of diabetes. Furthermore, the different stages of disease evolution and management captured in various studies, potentially erroneous diabetes classification and adjustment for various confounding variables may account for the variability of reported data. The early and rather acute insulinopenia associated with diabetes onset impairs bone mass accrual and negatively impacts peak bone mass; thus, bone mass acquisition is hampered in the early stages of T1D [3]. Nonetheless, bone density has been shown to stabilize or even increase once exogenous insulin treatment is well established, with studies reporting age- and gender-expected bone density measurements [19]. Moreover, T1D patients with low bone mass are reported to follow lower insulin dose regimens compared to those with normal bone mass [20].

Disease duration, and not age, proved to be one of the main independent predictors of low femoral neck BMD in T1D patients in our study, suggesting that diabetes-related factors such as disease duration may be more important for bone. Indeed, T1D patients experiencing diabetes-specific complications had a longer disease history and also a tendency towards lower femoral neck BMD. Low rates of vascular complications have been linked to preserved BMD in long-lasting T1D [18]. According to recent consensus in the field, diabetes-specific risk factors for fracture include age, low BMD, the presence of complications of diabetes, disease duration, previous fractures and glycemic control (particularly in T1D with HbA1c > 8–9 %) [21]. Our results are in agreement with other studies reporting long-lasting disease as a risk factor for fragility fractures [11, 22]. The presence of micro- and macrovascular complications is associated with low BMD [5, 6] and has also been reported to increase fracture risk [22, 23]. Microvascular complications, resulting from collagen glycation and impaired bone turnover due to AGEs, are thought to compromise bone quality and material properties, thereby significantly increasing fracture risk [2, 24, 25]. Although we and others [26] failed to find any significant BMD variation according to the presence of complications, microvascular damage has been demonstrated to alter bone microarchitecture, possibly via VEGF, linking diabetic complications and skeletal health.
This may explain why fracture risk is disproportionately higher in T1D than in T2D relative to the differences in BMD [27]. Complications are also associated with a longer disease history, itself an independent factor for low BMD, once again supporting the link between changes in bone mass and microarchitecture and long exposure to the diabetic milieu. Similar to other studies [6, 20], we failed to find a significant effect of HbA1c (which reflects only the severity of recent glycemic dysregulation) upon BMD. However, we did not assess fracture risk, as long-standing poor glycemic control is known to be associated with increased fracture risk, independently of bone mass [20].

T2D patients in the current study had BMD similar to matched controls, although fewer patients in the diabetes group exhibited low bone mass. T2D patients are generally reported to have increased BMD compared to reference populations, although not in all studies [5, 28]. Potential disease misclassification, lack of a matched control group and inability to adjust for covariates are important sources of bias and heterogeneity [28]. Diabetes duration is an important confounder: the osteoanabolic effects of the hyperinsulinemia secondary to insulin resistance may explain the apparently higher bone density in early T2D, while insulinopenia in T1D and late T2D is accompanied by sarcopenia and low bone mass [1]. Despite using metformin, which is known to positively impact bone mass and reduce fracture risk [29, 30], the T2D patients in the current study had a rather long disease history of approximately 10 years, with one quarter also experiencing complications. A diabetes duration longer than 5 years is a risk factor for low bone mass [31], and the presence of microvascular complications in T2D is associated with lower cortical volumetric BMD and altered bone microarchitecture, namely increased cortical porosity and diminished cortical thickness at the radius [32]. BMD did not differ between patients with complicated T2D and T2D patients without complications in the current study. At the same time, our T2D patients had good glycemic control, while an increased HbA1c is associated with increased BMD according to the meta-analysis of Ma et al. [28]. Other meta-analyses failed to find a significant correlation of HbA1c with BMD in T2D [5]. Also, BMD progressively increases across clinical cutoffs for fasting glucose (normal, impaired and overt T2D) [33]. However, this increased BMD may be explained by the diminished bone mineral area of these patients, who also exhibit low bone turnover as assessed by serum markers such as osteocalcin or cross-laps [33].

An increased BMI is a protective factor against osteoporosis in all populations, including patients with diabetes [34], via increased mechanical loading. Obesity is a risk factor for insulin resistance and diabetes [35], while at the same time being associated with higher areal and volumetric BMD and improved cortical bone structure [1, 35]. It also contributes to the higher BMD in T2D patients, as demonstrated by the current study and many others [28]. At the same time, diabetes and obesity are associated with systemic inflammation and adipokine dysregulation, all contributing to impaired bone metabolism [36].
Despite variable BMD, alterations in cortical bone microarchitecture are reported in T2D patients, explaining their higher fracture risk compared to the reference population [37].

None of the patients in our study were under anti-diabetic therapy known to negatively impact bone mass, such as thiazolidinediones or SGLT-2 inhibitors. While all T2D patients were using the “bone-friendly” metformin, the subgroup also using a sulfonylurea tended to have a lower BMD, without reaching statistical significance. This remains to be clarified, as the mechanisms of action of the sulfonylurea class upon bone have not yet been elucidated [38].

Age and gender (female sex was associated with a lower BMD compared to men, independently of other factors) were the main independent BMD predictors in the reference populations in our study. Interestingly, the effect of age, unanimously recognized as a risk factor for low bone mass, was not detected in the T1D and T2D patients in our study. It is possible that other factors surpass the effect of aging upon BMD in diabetes, particularly in young or obese patients.

Our study is limited by the relatively small number of patients and the lack of assessment of bone microarchitecture, bone turnover markers and fracture risk. The effect of vitamin D levels, known to be altered in individuals with diabetes [4], was also not assessed. Nonetheless, the presence of matched control groups for T1D and T2D subjects, together with the evaluation of BMD predictors in patients with diabetes versus matched healthy individuals, are important study strengths.", "Female sex and long-standing diabetes particularly increase the risk for low BMD in T1D, with special concern for the femoral neck. An increased BMI partially contributes to BMD preservation in T2D, independently of age; however, appreciating bone mass to its real extent is rather difficult in T2D due to the various factors contributing to bone changes." ]
[ null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusion" ]
[ "Diabetes mellitus", "Bone mineral density", "HbA1c" ]
Background: Diabetes mellitus is a chronic whole-body disease leading to a wide range of complications, such as cardiovascular disease, retinopathy, nephropathy, neuropathy and also “sweet bone” disease [1]. Although the underlying pathophysiological background is very different, type 1 (T1D) and type 2 diabetes (T2D) are both associated with an increased fracture risk, which is multifactorial and only partially explained by falls and bone mineral density (BMD) [2]. The most consistent effect is upon hip fracture risk, ranging from a 2.4- to a 7-fold increase in T1D [3] and being two to three times higher in T2D than in the general population [1]. Diabetic osteopathy in T1D and T2D is characterized by low serum vitamin D, negative calcium balance, low bone turnover and high sclerostin levels [4]. However, bone mass may differ to some extent in T1D when compared to T2D [5], though not in all studies [6]. In T1D, low BMD occurs early after disease onset due to the deleterious effects of insulinopenia upon bone turnover and bone mass accrual, remaining rather stable afterwards [7]. Reported BMD in T2D varies from unaltered bone density [8, 9] to a paradoxically higher BMD [5] compared to controls. Low bone mass was also found in the later stages of T2D, possibly linked to microvascular disease [10]. Skeletal fragility is nevertheless described in both T1D and T2D, independently of BMD [2]. Advanced glycation end products (AGEs) alter the structure of collagen, promote oxidative stress and inflammation, and also contribute to low bone turnover [1, 3]. The effect of glycemic control, reflected by HbA1c levels, upon bone is inconsistent, with some studies reporting an elevated fracture risk with increasing HbA1c [11, 12], while bone density evolution appears rather independent of HbA1c levels [5, 6]. In T2D, the protective effects of an increased body weight and hyperinsulinemia upon bone are counterbalanced by the negative impact of increased visceral adiposity and insulin resistance, an inadequate adaptation of bone strength to increased mechanical load, the long duration of disease evolution and various anti-diabetic drugs (e.g., thiazolidinediones or sodium-glucose cotransporter type 2 inhibitors – SGLT-2) [1]. We aimed to investigate independent predictors of BMD in T1D and T2D patients compared to controls, with regard to general and diabetes-specific parameters. Methods: Study design and subjects. Patients diagnosed with diabetes (T1D and T2D) were consecutively recruited during routine follow-up visits for disease monitoring in the Diabetes, Nutrition and Metabolic Diseases Clinic of “Sf. Spiridon” Clinical Emergency Hospital Iasi (Romania) between January and December 2017.
Patients aged between 18 and 80 years were included if they had a well-established diagnosis of diabetes according to standardized criteria [13] in their original medical record (HbA1c > 6.5 % on two separate tests; T1D: new-onset hyperglycemia accompanied by ketonuria at debut, low serum levels of insulin and C-peptide, and requiring insulin treatment for control and survival; antibodies to glutamic acid decarboxylase were also tested where the clinical phenotype was rather non-specific, such as slow onset of symptoms, BMI ≥ 25 kg/m2, or age over 40 with normal BMI and requiring insulin treatment from the time of diagnosis [14]; T2D: two fasting blood glucose levels ≥ 126 mg/dl or an oral glucose tolerance test showing serum glucose ≥ 200 mg/dl, accompanied by a phenotype of insulin resistance and not requiring insulin), were more than 1 year after disease onset, were receiving antidiabetic treatment (without any change in medication type in the past six months), were at their first bone evaluation, and had an estimated glomerular filtration rate (eGFR) ≥ 60 ml/min/1.73 m2 (serum creatinine was measured and eGFR was calculated using the CKD-EPI equation). Age-, sex- and body mass index (BMI)-matched (1:1) apparently healthy volunteer controls (CTL), referred by their general practitioner to the outpatient department of our hospital for general investigations, were enrolled in the same period as the patients. Exclusion criteria for both groups were calcium and vitamin D supplementation, bone-active therapy (antiresorptive/bone-forming therapy), liver disease, moderate or severe chronic kidney disease (CKD; stage G3 to end-stage renal disease), history of parathyroid or rheumatological disease, oral corticosteroid use > 5 mg prednisone equivalent in the past 3 months or endogenous hypercortisolism, hypo- and hyperthyroidism, inflammatory bowel disease, hypogonadism (other than menopause), smoking (both regular and heavy) and heavy drinking (more than 2 drinks per day or more than 15 drinks per week for men; more than 1 drink per day or more than 8 drinks per week for women). Subjects exhibiting hyperglycemia (abnormal fasting blood glucose, impaired glucose tolerance or diabetes) were further excluded from the CTL group. Sixty-nine patients (30 T1D and 39 T2D) and 69 age-, sex- and BMI-matched CTL who were willing to participate and met the study inclusion and exclusion criteria were recruited in the Diabetes and Endocrinology outpatient clinics after giving written informed consent and were enrolled in this cross-sectional case-control study. The study adhered to the Declaration of Helsinki and was approved by the institutional Ethics Committee.
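The eGFR inclusion criterion above relies on the CKD-EPI equation. As a minimal illustration, the sketch below implements the 2009 CKD-EPI creatinine equation; the paper does not state which CKD-EPI version or software was used, so the constants and the function/argument names here are assumptions of this example, not the study's code.

```python
# Hedged sketch of the 2009 CKD-EPI creatinine equation (assumed version;
# the paper only says "the CKD-EPI equation"). Result in ml/min/1.73 m2.
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    kappa = 0.7 if female else 0.9          # sex-specific creatinine cutoff
    alpha = -0.329 if female else -0.411    # sex-specific exponent below the cutoff
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# e.g. a 55-year-old woman with serum creatinine 0.9 mg/dl clears the
# >= 60 ml/min/1.73 m2 inclusion threshold:
print(round(egfr_ckd_epi_2009(0.9, 55, female=True), 1))
```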
Evaluation and measurements

Complete medical history (anamnesis and medical charts) was recorded for all patients and CTL. The presence of microvascular complications was defined as: (1) nephropathy: albumin:creatinine ratio ≥ 30 mg/g on two or more occasions; (2) retinopathy: positive ophthalmologic fundus examination; (3) polyneuropathy: clinical measurement of vibration. Macrovascular complications were defined based on a recorded history of coronary heart disease, stroke, myocardial infarction or peripheral vascular disease.
After clinical examination (height and weight were recorded and BMI was calculated as weight (kg) divided by height squared (m2)), all patients underwent dual-energy X-ray absorptiometry (DXA; Hologic Delphi A, software version 12.7.3.2, Hologic Inc., USA) scanning to measure BMD at the lumbar spine and hip (the femoral neck was reported due to its lower values compared to the total hip, according to the recommendations of the ISCD [15]). The coefficient of variation was 0.39 % for lumbar anterior-posterior spine BMD and 1 % for femoral neck BMD. Measurements were made by two trained technicians certified by the International Society for Clinical Densitometry (ISCD), according to standard protocol and with daily calibration. Least significant change (LSC) was 0.008 g/cm2 for lumbar BMD and 0.0104 g/cm2 for femoral neck BMD. According to the Adult Official Positions of the ISCD [15], T-scores were reported for postmenopausal women and men ≥ 50 years of age (“low bone mass” was defined as T-score < -1 in this category), while Z-scores were recorded for premenopausal women and men < 50 years (“low bone mass” was defined as Z-score ≤ -2). Also, if there was a more than 1.0 T-score difference between adjacent vertebrae, the vertebra in question was excluded from the analysis, and the BMD of the remaining vertebrae was used to derive the T-score [15]. Menopause was defined as more than 12 months since the natural cessation of menstrual cycles. On the same day as the clinical and DXA examinations, blood samples were collected after overnight fasting from all study participants. Biochemical analysis of standard clinical parameters included HbA1c (ion-exchange high-performance liquid chromatography (HPLC) method), serum calcium and phosphate (colorimetry; Cobas 6000 analyzer, Roche), serum thyroid stimulating hormone (TSH) and parathyroid hormone (PTH; intact PTH second-generation chemiluminescent enzyme immunometric assay; Immulite 2000 Immunoassay System, Siemens).
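The ISCD reporting rule above is essentially a two-branch classification: T-scores (low bone mass if T < -1) for postmenopausal women and men ≥ 50, Z-scores (low bone mass if Z ≤ -2) otherwise. The following sketch encodes that rule for illustration only; the function and argument names are ours, not the densitometer software's.

```python
# Hedged sketch of the ISCD low-bone-mass rule described above.
def low_bone_mass(score: float, age: float, female: bool, postmenopausal: bool) -> bool:
    uses_t_score = (female and postmenopausal) or (not female and age >= 50)
    if uses_t_score:
        return score < -1.0   # "score" is a T-score in this branch
    return score <= -2.0      # "score" is a Z-score in this branch

print(low_bone_mass(-1.3, 62, female=True, postmenopausal=True))    # True (T < -1)
print(low_bone_mass(-1.3, 35, female=False, postmenopausal=False))  # False (Z > -2)
```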
Statistical analysis

SPSS (SPSS Statistics version 20.0 for Windows) was employed for statistical analysis. Data are expressed as mean ± SEM (standard error of the mean). Comparisons between groups (T1D versus T2D, T1D versus controls, and T2D versus controls) were made using Student's t-test (for normally distributed data) or the non-parametric Mann-Whitney U test (for skewed data), after checking for normal distribution (Shapiro-Wilk test). Analysis of variance (ANOVA) was employed for comparisons among three or more categories. Analysis of covariance (ANCOVA) was used to calculate age-, BMI- and diabetes duration-adjusted BMD values in T1D compared to T2D (least square means ± standard error are reported).
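As a minimal sketch of the test-selection logic just described (Shapiro-Wilk first, then Student's t-test for normal data or Mann-Whitney U otherwise): the arrays below are hypothetical, since the original analysis was done in SPSS, not Python.

```python
# Hedged sketch: normality check followed by parametric/non-parametric test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t1d_bmd = rng.normal(1.01, 0.10, 30)   # hypothetical lumbar BMD, T1D group
ctl_bmd = rng.normal(0.98, 0.10, 30)   # hypothetical lumbar BMD, matched controls

normal = (stats.shapiro(t1d_bmd).pvalue > 0.05 and
          stats.shapiro(ctl_bmd).pvalue > 0.05)
if normal:
    stat, p = stats.ttest_ind(t1d_bmd, ctl_bmd)       # Student's t-test
else:
    stat, p = stats.mannwhitneyu(t1d_bmd, ctl_bmd)    # Mann-Whitney U
print(f"p = {p:.3f}")
```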
Multiple regression analysis was performed to assess independent predictors of bone mass in T1D patients, T2D patients and matched CTL, as follows: continuous variables potentially influencing BMD variation (age, diabetes duration, HbA1c, BMI and PTH) and categorical variables introduced after binary coding (gender: male = 0, female = 1) were entered as independent variables in regression models with lumbar BMD and femoral neck BMD as the outcome variables, respectively. The level of significance was set at p < 0.05.
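Since the model specification above is concrete (six predictors, gender binary-coded, BMD as outcome), a sketch of an equivalent fit is shown below. The study used SPSS; this statsmodels version with hypothetical data is only illustrative. Variables are z-scored first so the coefficients come out as standardized Betas, comparable in spirit to those later reported in Table 3.

```python
# Hedged sketch of the regression specification above (hypothetical data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "fn_bmd":   [0.94, 0.71, 0.88, 0.76, 0.92, 0.69, 0.85, 0.80, 0.90, 0.74],
    "age":      [35, 42, 29, 51, 38, 47, 33, 44, 30, 49],
    "gender":   [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # male = 0, female = 1
    "duration": [8, 22, 5, 19, 12, 25, 7, 15, 6, 21],   # years of diabetes
    "hba1c":    [7.9, 9.1, 8.2, 9.5, 8.0, 9.8, 7.5, 8.8, 8.1, 9.2],
    "bmi":      [24.1, 23.0, 25.2, 22.5, 24.8, 23.9, 26.0, 23.3, 25.5, 22.8],
    "pth":      [35, 41, 30, 44, 38, 40, 33, 39, 36, 42],
})

z = (df - df.mean()) / df.std()   # z-scoring makes the coefficients standardized Betas
fit = smf.ols("fn_bmd ~ age + gender + duration + hba1c + bmi + pth", data=z).fit()
print(fit.rsquared)   # cf. the per-model R2 values in Table 3
print(fit.params)     # standardized Betas per predictor
print(fit.pvalues)    # per-predictor p-values
```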
Results

Despite being younger and having a mean BMI within the normal reference range, T1D patients had a longer duration of diabetes, poorer glycemic control and more diabetes complications compared to the older, rather obese, but better-controlled T2D patients (Table 1). Serum calcium, phosphate, PTH and thyroid status were similar between T1D and T2D patients, although serum PTH tended to be higher in T2D subjects (Table 1). All T1D patients were receiving exogenous insulin treatment, while all T2D patients were under metformin treatment: 16 were on metformin monotherapy, 13 were taking metformin together with a sulfonylurea and 10 combined incretin therapy with metformin (none were using thiazolidinediones or sodium-glucose co-transporter-2 inhibitors).

Table 1 Characteristics of the study patients

Parameter | T1D (n = 30) | Control (n = 30) | T2D (n = 39) | Control (n = 39) | T1D vs. T2D | T1D vs. control | T2D vs. control
Gender: women (menopause) / men | 18 (60 %) (6) / 12 (40 %) | 18 (60 %) (6) / 12 (40 %) | 19 (48.7 %) (16) / 20 (51.3 %) | 19 (48.7 %) (16) / 20 (51.3 %) | - | - | -
Age (y) | 40.27 ± 2.7 | 42.4 ± 3.12 | 62.39 ± 1.21 | 60.03 ± 1.31 | < 0.001 | 0.61 | 0.2
BMI (kg/m2) | 24.27 ± 0.81 | 25.66 ± 0.73 | 30.56 ± 0.78 | 30.77 ± 0.78 | < 0.001 | 0.22 | 0.84
Duration of diabetes (y) | 14.23 ± 1.89 | - | 9.49 ± 0.8 | - | 0.026 | - | -
HbA1c (%) | 8.86 ± 0.35 | - | 6.86 ± 0.15 | - | < 0.001 | - | -
Complications (n) | 12 (40 %) | - | 9 (23 %) | - | - | - | -
Calcium (mg/dl) | 9.63 ± 0.7 | - | 9.69 ± 0.6 | - | 0.49 | - | -
Phosphate (mg/dl) | 3.43 ± 0.12 | - | 3.25 ± 0.08 | - | 0.19 | - | -
TSH (µIU/ml) | 3.3 ± 0.8 | - | 2.23 ± 0.013 | - | 0.14 | - | -
PTH (pg/ml) | 37.98 ± 2.24 | 37.11 ± 2.33 | 47.47 ± 3.9 | 48.42 ± 1.6 | 0.055 | 0.79 | 0.84
Values are mean ± SEM. BMI body mass index, PTH parathyroid hormone, SEM standard error of the mean, TSH thyroid stimulating hormone

BMD values at the lumbar spine and femoral neck were similar between T1D and T2D patients, between T1D patients and controls, and between T2D patients and controls, both in the whole group and by sex (Table 2). After adjusting for age, BMI and disease duration, BMD did not vary significantly between T1D and T2D patients (Table 2). However, fewer patients in the T2D group exhibited low bone mass compared to matched controls, while the number of subjects with low bone mass was similar in T1D patients and matched controls (Table 2).
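The age-, BMI- and duration-adjusted comparison above corresponds to the ANCOVA described in the methods, with adjusted values reported as least-squares means (the "Adjusted BMD" rows of Table 2 below). A sketch of how such adjusted means can be computed follows; the data and column names are hypothetical, and the study itself used SPSS.

```python
# Hedged sketch: ANCOVA-style adjusted (least-squares) group means, i.e.
# group predictions at the covariate means of the pooled sample.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "lumbar_bmd": [1.02, 0.97, 1.05, 0.99, 1.01, 0.95, 1.03, 0.98],
    "group":      ["T1D", "T1D", "T1D", "T1D", "T2D", "T2D", "T2D", "T2D"],
    "age":        [38, 45, 33, 41, 61, 58, 65, 60],
    "bmi":        [24.0, 23.5, 25.1, 24.4, 31.2, 29.8, 30.5, 31.0],
    "duration":   [15, 12, 18, 10, 9, 11, 8, 10],   # years of diabetes
})

fit = smf.ols("lumbar_bmd ~ C(group) + age + bmi + duration", data=df).fit()

cov_means = df[["age", "bmi", "duration"]].mean()   # pooled covariate means
for g in ["T1D", "T2D"]:
    row = pd.DataFrame({"group": [g], **{k: [v] for k, v in cov_means.items()}})
    print(g, round(float(fit.predict(row).iloc[0]), 3))   # LS mean per group
```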
Table 2 BMD values in diabetes patients and controls

Women (T1D n = 18, control n = 18, T2D n = 19, control n = 19)
Parameter | T1D | Control | T2D | Control | T1D vs. T2D (p) | T1D vs. control (p) | T2D vs. control (p)
Lumbar BMD (g/cm2) | 0.97 ± 0.02 | 0.97 ± 0.03 | 0.93 ± 0.04 | 0.93 ± 0.03 | 0.47 | 0.99 | 0.9
Lumbar T/Z-score | -0.8 ± 0.2 | -0.7 ± 0.2 | -1 ± 0.3 | -1.1 ± 0.3 | 0.59 | 0.83 | 0.91
Femoral neck BMD (g/cm2) | 0.74 ± 0.02 | 0.78 ± 0.03 | 0.77 ± 0.03 | 0.76 ± 0.03 | 0.52 | 0.29 | 0.98
Femoral neck T/Z-score | -1.1 ± 0.2 | -0.6 ± 0.3 | -0.6 ± 0.3 | -0.8 ± 0.2 | 0.21 | 0.16 | 0.79

Men (T1D n = 12, control n = 12, T2D n = 20, control n = 20)
Lumbar BMD (g/cm2) | 1.09 ± 0.03 | 1 ± 0.05 | 1.08 ± 0.04 | 1.03 ± 0.03 | 0.91 | 0.15 | 0.3
Lumbar T/Z-score | -0.2 ± 0.3 | -0.8 ± 0.5 | -0.1 ± 0.3 | -0.6 ± 0.3 | 0.76 | 0.26 | 0.3
Femoral neck BMD (g/cm2) | 0.94 ± 0.03 | 0.9 ± 0.04 | 0.9 ± 0.04 | 0.92 ± 0.03 | 0.38 | 0.48 | 0.52
Femoral neck T/Z-score | 0.1 ± 0.3 | -0.2 ± 0.4 | -0.3 ± 0.3 | 0 ± 0.2 | 0.38 | 0.55 | 0.42

Total (T1D n = 30, control n = 30, T2D n = 39, control n = 39)
Lumbar BMD (g/cm2) | 1.01 ± 0.02 | 0.98 ± 0.03 | 1.01 ± 0.03 | 0.98 ± 0.02 | 0.88 | 0.3 | 0.89
Lumbar T/Z-score | -0.6 ± 0.2 | -0.8 ± 0.2 | -0.6 ± 0.3 | -0.8 ± 0.2 | 0.85 | 0.5 | 0.94
Femoral neck BMD (g/cm2) | 0.82 ± 0.03 | 0.83 ± 0.28 | 0.83 ± 0.03 | 0.85 ± 0.02 | 0.79 | 0.81 | 0.79
Femoral neck T/Z-score | -0.6 ± 0.2 | -0.4 ± 0.21 | -0.5 ± 0.2 | -0.4 ± 0.2 | 0.61 | 0.53 | 0.55

Adjusted BMD* (LSMEAN ± SE; T1D n = 30, T2D n = 39)
Lumbar BMD (g/cm2) | 1.04 ± 0.04 | - | 0.99 ± 0.03 | - | 0.46 | - | -
Femoral neck BMD (g/cm2) | 0.84 ± 0.04 | - | 0.82 ± 0.03 | - | 0.7 | - | -

Low bone mass (WHO criteria, n) | 8 | 8 | 13 | 17 | - | - | -

*adjusted for age, body mass index and diabetes duration. Values are mean ± SEM. BMD bone mineral density, LSMEAN least square mean, SE standard error, SEM standard error of the mean, WHO World Health Organization
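Since Table 2 reports both absolute BMD and T/Z-scores, it may help to recall how densitometric scores derive from BMD: a T-score standardizes the measurement against a young-adult reference mean and SD, while a Z-score uses an age- and sex-matched reference. The sketch below uses made-up reference values, not the Hologic reference database.

```python
# Hedged sketch of the standard T- and Z-score formulas; reference values
# below are illustrative placeholders only.
YOUNG_ADULT_MEAN, YOUNG_ADULT_SD = 1.047, 0.110   # hypothetical lumbar reference (g/cm2)

def t_score(bmd: float) -> float:
    return (bmd - YOUNG_ADULT_MEAN) / YOUNG_ADULT_SD

def z_score(bmd: float, age_matched_mean: float, age_matched_sd: float) -> float:
    return (bmd - age_matched_mean) / age_matched_sd

print(round(t_score(0.93), 1))             # about -1.1, cf. the women's lumbar values above
print(round(z_score(0.93, 0.99, 0.11), 1)) # Z-score against a hypothetical matched reference
```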
BMD predictors in T1D, T2D and controls

General (age, gender, BMI and PTH) and diabetes-specific parameters (disease duration, HbA1c) were introduced as independent variables in multiple regression analyses with lumbar and femoral neck BMD as outcome variables, respectively (Table 3). Gender independently predicted BMD across all models: compared to men, female sex was an independent risk factor for low BMD in T1D patients, T2D patients and controls. In T1D patients, diabetes duration was also a negative independent predictor of femoral neck BMD, while BMI was a positive independent predictor of both lumbar and femoral neck BMD in the T2D group, but not in controls. Age was a negative independent predictor of BMD in controls (both T1D and T2D controls), but not in patients with diabetes (Table 3).

Table 3 Multiple regression analysis of BMD predictors in patients with type 1 and type 2 diabetes mellitus and controls, respectively

Patients – Model 1, Lumbar BMD (T1D: R2 = 0.43, P = 0.038; T2D: R2 = 0.37, P = 0.014)
Predictor         | T1D Beta | T1D p-value | T2D Beta | T2D p-value
Age               | 0.15     | 0.5         | -0.03    | 0.87
Gender            | -0.66    | 0.002       | -0.52    | 0.002
Diabetes duration | -0.36    | 0.075       | 0.16     | 0.33
HbA1c             | 0.07     | 0.7         | 0.02     | 0.89
BMI               | -0.3     | 0.16        | 0.46     | 0.006
PTH               | 0.27     | 0.21        | -0.1     | 0.51

Patients – Model 2, Femoral neck BMD (T1D: R2 = 0.68, P < 0.001; T2D: R2 = 0.42, P = 0.006)
Predictor         | T1D Beta | T1D p-value | T2D Beta | T2D p-value
Age               | -0.08    | 0.64        | -0.26    | 0.11
Gender            | -0.66    | < 0.001     | -0.55    | 0.001
Diabetes duration | -0.39    | 0.014       | 0.27     | 0.09
HbA1c             | 0.13     | 0.33        | -0.11    | 0.46
BMI               | 0.12     | 0.45        | 0.44     | 0.007
PTH               | 0.08     | 0.63        | 0.09     | 0.55

Controls – Model 1, Lumbar BMD (T1D controls: R2 = 0.27, P = 0.036; T2D controls: R2 = 0.32, P = 0.01)
Predictor | T1D-control Beta | p-value | T2D-control Beta | p-value
Age       | -0.45            | 0.011   | -0.41            | 0.013
Gender    | -0.31            | 0.05    | -0.45            | 0.005
BMI       | 0.04             | 0.81    | 0.17             | 0.36
PTH       | 0.05             | 0.75    | -0.17            | 0.33

Controls – Model 2, Femoral neck BMD (T1D controls: R2 = 0.35, P = 0.006; T2D controls: R2 = 0.51, P < 0.001)
Predictor | T1D-control Beta | p-value | T2D-control Beta | p-value
Age       | -0.31            | 0.05    | -0.39            | 0.006
Gender    | -0.5             | 0.002   | -0.67            | < 0.001
BMI       | 0.04             | 0.81    | 0.15             | 0.35
PTH       | -0.06            | 0.69    | -0.23            | 0.12

BMD bone mineral density, BMI body mass index, PTH parathyroid hormone, T1D type 1 diabetes, T2D type 2 diabetes

Subgroup analysis

The presence of diabetic complications

T1D patients with diabetes complications (n = 12) had lumbar BMD values similar to those of T1D patients without complications (n = 18) and controls (n = 30) (p = 0.47); however, they tended to have a lower femoral neck BMD, although the differences did not reach statistical significance (0.764 g/cm2 in T1D patients with complications versus 0.858 g/cm2 in T1D patients without complications versus 0.829 g/cm2 in matched controls, p = 0.21). Moreover, T1D patients with diabetes complications had a longer disease duration than T1D patients without complications (20.5 ± 3.3 versus 10.1 ± 1.68 years, p = 0.012), despite similar HbA1c and BMI (data not shown). BMD did not differ significantly among T2D patients with (n = 9) and without (n = 30) complications and controls (n = 39) (Fig. 1).

Fig. 1 BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. T1D patients with complications – n = 12, T1D patients without complications – n = 18, T2D patients with complications – n = 9, T2D patients without complications – n = 30, T1D controls – n = 30, T2D controls – n = 39

Drugs in T2D

In the T2D group, we compared BMD across treatment categories. Although patients taking metformin + sulfonylurea tended to have a lower bone density than the other subgroups, neither lumbar BMD (1.03 ± 0.05 versus 0.95 ± 0.05 versus 1.04 ± 0.04 g/cm2, p = 0.42) nor femoral neck BMD (0.84 ± 0.04 versus 0.8 ± 0.05 versus 0.86 ± 0.04 g/cm2) differed significantly among metformin monotherapy (n = 16), metformin + sulfonylurea (n = 13) and metformin + incretin therapy (n = 10).
0.36 0.33 R2 = 0.35 P = 0.006 R2 = 0.51 P < 0.001 Age Gender BMI PTH -0.31 -0.5 0.04 -0.06 0.05 0.002 0.81 0.69 -0.39 -0.67 0.15 -0.23 0.006 < 0.001 0.35 0.12 BMD bone mineral density, BMI body mass index, PTH parathyroid hormone, T1D type 1 diabetes, T2D type 2 diabetes Subgroup analysis: The presence of diabetic complications T1D patients with diabetes complications (n = 12) had similar lumbar BMD values compared to T1D patients without complications (n = 18) and controls (n = 30), respectively (p = 0.47); however, they tended to have a rather lower femoral neck BMD, but the differences did not reach statistical significance (0.764 g/cm2 in T1D patients with complications versus 0.858 g/cm2 in T1D patients without complications versus 0.829 g/cm2 in matched controls, respectively, p = 0.21). More so, T1D patients with diabetes complications had longer disease duration (20.5 ± 3.3 versus 10.1 ± 1.68 years, p = 0.012) compared to T1D patients without complications, despite similar HbA1c and BMI (data not shown). BMD did not differ significantly between T2D patients with (n = 9) and without (n = 30) complications and controls (n = 39), respectively (Fig. 1). Fig. 1BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. T1D patients with complications – n = 12, T1D patients without complications – n = 18, T2D patients with complications – n = 9, T2D patients without complications – n = 30, T1D controls – n = 30, T2D controls – n = 39 BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. T1D patients with complications – n = 12, T1D patients without complications – n = 18, T2D patients with complications – n = 9, T2D patients without complications – n = 30, T1D controls – n = 30, T2D controls – n = 39 T1D patients with diabetes complications (n = 12) had similar lumbar BMD values compared to T1D patients without complications (n = 18) and controls (n = 30), respectively (p = 0.47); however, they tended to have a rather lower femoral neck BMD, but the differences did not reach statistical significance (0.764 g/cm2 in T1D patients with complications versus 0.858 g/cm2 in T1D patients without complications versus 0.829 g/cm2 in matched controls, respectively, p = 0.21). More so, T1D patients with diabetes complications had longer disease duration (20.5 ± 3.3 versus 10.1 ± 1.68 years, p = 0.012) compared to T1D patients without complications, despite similar HbA1c and BMI (data not shown). BMD did not differ significantly between T2D patients with (n = 9) and without (n = 30) complications and controls (n = 39), respectively (Fig. 1). Fig. 1BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. T1D patients with complications – n = 12, T1D patients without complications – n = 18, T2D patients with complications – n = 9, T2D patients without complications – n = 30, T1D controls – n = 30, T2D controls – n = 39 BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. 
T1D patients with complications – n = 12, T1D patients without complications – n = 18, T2D patients with complications – n = 9, T2D patients without complications – n = 30, T1D controls – n = 30, T2D controls – n = 39 Drugs in T2D In the T2D group, we compared BMD across different treatment categories. Although patients taking metformin + sulfonylurea tended to have a rather lower bone density compared to the other subgroups, lumbar BMD (1.03 ± 0.05 versus 0.95 ± 0.05 versus 1.04 ± 0.04, p = 0.42) and femoral neck BMD (0.84 ± 0.04 versus 0.8 ± 0.05 versus 0.86 ± 0.04, respectively) did not differ significantly among metformin monotherapy (n = 16), metformin + sulfonylurea (n = 13) and metformin + incretin therapy (n = 10), respectively. In the T2D group, we compared BMD across different treatment categories. Although patients taking metformin + sulfonylurea tended to have a rather lower bone density compared to the other subgroups, lumbar BMD (1.03 ± 0.05 versus 0.95 ± 0.05 versus 1.04 ± 0.04, p = 0.42) and femoral neck BMD (0.84 ± 0.04 versus 0.8 ± 0.05 versus 0.86 ± 0.04, respectively) did not differ significantly among metformin monotherapy (n = 16), metformin + sulfonylurea (n = 13) and metformin + incretin therapy (n = 10), respectively. The presence of diabetic complications: T1D patients with diabetes complications (n = 12) had similar lumbar BMD values compared to T1D patients without complications (n = 18) and controls (n = 30), respectively (p = 0.47); however, they tended to have a rather lower femoral neck BMD, but the differences did not reach statistical significance (0.764 g/cm2 in T1D patients with complications versus 0.858 g/cm2 in T1D patients without complications versus 0.829 g/cm2 in matched controls, respectively, p = 0.21). More so, T1D patients with diabetes complications had longer disease duration (20.5 ± 3.3 versus 10.1 ± 1.68 years, p = 0.012) compared to T1D patients without complications, despite similar HbA1c and BMI (data not shown). BMD did not differ significantly between T2D patients with (n = 9) and without (n = 30) complications and controls (n = 39), respectively (Fig. 1). Fig. 1BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. T1D patients with complications – n = 12, T1D patients without complications – n = 18, T2D patients with complications – n = 9, T2D patients without complications – n = 30, T1D controls – n = 30, T2D controls – n = 39 BMD values according to the presence of complications in type 1 and type 2 diabetes patients compared to controls. BMD = bone mineral density, T1D = type 1 diabetes mellitus, T2D = type 2 diabetes mellitus. T1D patients with complications – n = 12, T1D patients without complications – n = 18, T2D patients with complications – n = 9, T2D patients without complications – n = 30, T1D controls – n = 30, T2D controls – n = 39 Drugs in T2D: In the T2D group, we compared BMD across different treatment categories. Although patients taking metformin + sulfonylurea tended to have a rather lower bone density compared to the other subgroups, lumbar BMD (1.03 ± 0.05 versus 0.95 ± 0.05 versus 1.04 ± 0.04, p = 0.42) and femoral neck BMD (0.84 ± 0.04 versus 0.8 ± 0.05 versus 0.86 ± 0.04, respectively) did not differ significantly among metformin monotherapy (n = 16), metformin + sulfonylurea (n = 13) and metformin + incretin therapy (n = 10), respectively. 
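To make the regression setup behind Table 3 concrete, the following is a minimal Python sketch (not the authors' actual code) of a multiple linear regression yielding standardized beta coefficients of the kind reported above. The data frame and column names are hypothetical, and gender is assumed to be coded numerically (e.g., 0 = male, 1 = female).

```python
import pandas as pd
import statsmodels.api as sm

def standardized_betas(df: pd.DataFrame, outcome: str, predictors: list):
    """Fit OLS on z-scored variables so the coefficients are standardized betas."""
    cols = [outcome] + predictors
    z = (df[cols] - df[cols].mean()) / df[cols].std()  # z-score outcome and predictors
    X = sm.add_constant(z[predictors])
    fit = sm.OLS(z[outcome], X, missing="drop").fit()
    return fit.params[predictors], fit.pvalues[predictors], fit.rsquared

# Hypothetical usage for a T1D femoral neck model:
# betas, pvals, r2 = standardized_betas(
#     t1d_df, "femoral_neck_bmd",
#     ["age", "gender", "diabetes_duration", "hba1c", "bmi", "pth"])
```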
Discussion: T1D and T2D patients had BMD similar to that of their respective controls. Diabetes duration, but not HbA1c, was found to negatively predict femoral neck BMD in T1D, but not in T2D patients. In the T2D group, BMI was an independent predictor of bone density, while female gender was independently associated with low BMD in both T1D and T2D. In controls, unlike in patients with diabetes, age was the other major independent predictor of bone mass, in addition to gender. Although BMD underestimates fracture risk in patients with diabetes, it remains the cornerstone of bone evaluation in this particular group due to its high accessibility and low cost [16]. Data reporting BMD values in T1D and T2D patients are heterogeneous and rather inconsistent with regard to BMD predictive factors. An older meta-analysis [5] reported negative Z-scores for T1D patients and positive Z-scores for T2D subjects, concluding that T1D is associated with lower bone mass, while T2D patients generally have higher BMD. More recent studies, however, reported similar BMD values both between patients with diabetes and controls and between T1D and T2D patients, respectively [6, 17]. Another recent study investigating bone mass in long-standing (longer than 50 years) T1D patients with good glycemic control and low rates of vascular complications reported similar or even better BMD, expressed as Z-score, compared with an age-, gender- and race-matched population [18]. We found lower bone mass scores at the femoral neck in T1D women compared with T2D women and controls; nevertheless, the differences did not reach statistical significance, probably due to the limited number of patients. Also, more women in the T2D group were postmenopausal compared with the T1D group, which may account for the lack of a statistical difference in BMD between types of diabetes. Moreover, the different stages of disease evolution and management captured in various studies, potentially erroneous diabetes classification and the adjustment for various confounding variables may account for the variability of the reported data. The early and rather acute insulinopenia associated with diabetes onset impairs bone mass accrual and negatively impacts peak bone mass. Thus, bone mass acquisition is hampered in the early stages of T1D [3]. Nonetheless, bone density has been shown to stabilize or even increase once exogenous insulin treatment is well established, with studies reporting age- and gender-expected bone density measurements [19]. Moreover, T1D patients with low bone mass are reported to follow lower insulin dose regimens than those with normal bone mass [20]. Disease duration, and not age, proved to be one of the main independent predictors of low femoral neck BMD in T1D patients in our study, suggesting that diabetes-related factors, such as diabetes duration, may be more important for bone. Indeed, T1D patients experiencing diabetes-specific complications had a longer disease history and also a tendency towards lower femoral neck BMD. Low rates of vascular complications have been linked to preserved BMD in long-lasting T1D [18]. According to recent consensus in the field, diabetes-specific risk factors for fracture include age, low BMD, the presence of complications of diabetes, disease duration, previous fractures and glycemic control (particularly in T1D with HbA1c > 8-9 %) [21].
Our results are in agreement with other studies reporting long-lasting disease as a risk factor for fragility fractures [11, 22]. The presence of micro- and macrovascular complications is associated with low BMD [5, 6] and has also been reported to increase fracture risk [22, 23]. Microvascular complications, resulting from collagen glycation and impaired bone turnover due to AGEs, are thought to compromise bone quality and material properties, thereby significantly increasing fracture risk [2, 24, 25]. Although we and others [26] failed to find any significant BMD variation according to the presence of complications, microvascular damage has been demonstrated to alter bone microarchitecture, possibly via VEGF, linking diabetic complications and skeletal health. This may explain the disproportionate fracture risk in T1D versus T2D compared with the differences in BMD [27]. Complications are also associated with a longer disease history, an independent factor for low BMD, once again supporting the link between changes in bone mass and microarchitecture and long exposure to the diabetic milieu. Similar to other studies [6, 20], we failed to find a significant effect upon BMD of HbA1c, which reflects only the severity of recent diabetes dysregulation. However, we did not assess fracture risk, as long-standing poor glycemic control is known to be associated with increased fracture risk, independently of bone mass [20]. T2D patients in the current study had BMD similar to matched controls, although fewer patients in the diabetes group exhibited low bone mass. T2D patients are generally reported to have increased BMD compared with reference populations, although not in all studies [5, 28]. Potential disease misclassification, lack of a matched control group and inability to adjust for covariates are important sources of bias and heterogeneity [28]. Diabetes duration is an important confounder: the osteoanabolic effects of the hyperinsulinemia secondary to insulin resistance may explain the apparently higher bone density in early T2D, while insulinopenia in T1D and late T2D is accompanied by sarcopenia and low bone mass [1]. Despite using metformin, which is known to positively impact bone mass and reduce fracture risk [29, 30], the T2D patients in the current study had a rather long disease history of approximately 10 years, with one quarter also experiencing complications. A diabetes duration longer than 5 years is a risk factor for low bone mass [31], and the presence of microvascular complications in T2D is associated with lower cortical volumetric BMD and altered bone microarchitecture, namely increased cortical porosity and diminished cortical thickness at the radius [32]. BMD did not differ between patients with complicated T2D and T2D patients without complications in the current study. At the same time, our T2D patients had good glycemic control, while an increased HbA1c is associated with increased BMD according to the meta-analysis of Ma et al. [28]. Other meta-analyses failed to find a significant correlation of HbA1c with BMD in T2D [5]. Also, BMD progressively increases across clinical cutoffs for fasting glucose (normal, impaired and overt T2D) [33]. However, this increased BMD may be explained by the diminished bone mineral area of these patients, who also exhibit low bone turnover as assessed by serum markers such as osteocalcin or cross-laps [33].
An increased BMI is a protective factor against osteoporosis in all populations, including patients with diabetes [34], via increased mechanical loading. Obesity is a risk factor for insulin resistance and diabetes [35], while at the same time being associated with higher areal and volumetric BMD and improved cortical bone structure [1, 35]. It also contributes to higher BMD in T2D patients, as demonstrated by the current study and many others [28]. At the same time, diabetes and obesity are associated with systemic inflammation and adipokine dysregulation, all contributing to impaired bone metabolism [36]. Despite variable BMD, alterations in cortical bone microarchitecture are reported in T2D patients, explaining the higher fracture risk compared with the reference population [37]. None of the patients in our study were under anti-diabetic therapy known to negatively impact bone mass, such as thiazolidinediones or SGLT-2 inhibitors. While all T2D patients were using the “bone-friendly” metformin, the subgroup also using a sulfonylurea drug tended to have a lower BMD, without reaching statistical significance. This remains to be clarified, as the mechanisms of action of the sulfonylurea class of medication upon bone are still unelucidated to date [38]. Age and gender (female sex was associated with lower BMD compared with men, independently of other factors) were the main independent BMD predictors in the reference populations in our study. Interestingly, the effect of age, universally recognized as a risk factor for low bone mass, was not detected in the T1D and T2D patients in our study. It is possible that other factors surpass the effect of aging upon BMD in diabetes, particularly in young or obese patients. Our study is limited by the relatively small number of patients and the lack of assessment of bone microarchitecture, bone turnover markers or fracture risk. The effect of vitamin D levels, known to be altered in individuals with diabetes [4], was also not assessed. Nonetheless, the presence of matched control groups for T1D and T2D subjects, respectively, together with the evaluation of BMD predictors in patients with diabetes versus matched healthy individuals, are important study strengths. Conclusions: Female sex and long-standing diabetes particularly increase the risk for low BMD in T1D, with special concern for the femoral neck. An increased BMI partially contributes to BMD preservation in T2D, independently of age; however, accurately assessing bone mass is rather difficult in T2D due to the various factors contributing to bone changes.
Background: Despite the increased fracture risk, bone mineral density (BMD) is variable in type 1 (T1D) and type 2 (T2D) diabetes mellitus. We aimed at comparing independent BMD predictors in T1D, T2D and control subjects, respectively. Methods: Cross-sectional case-control study enrolling 30 T1D, 39 T2D and 69 age-, sex- and body mass index (BMI)-matched controls that underwent clinical examination, dual-energy X-ray absorptiometry (BMD at the lumbar spine and femoral neck) and serum determination of HbA1c and parameters of calcium and phosphate metabolism. Results: T2D patients had similar BMD compared to T1D individuals (after adjusting for age, BMI and disease duration) and to matched controls, respectively. In multiple regression analysis, diabetes duration, but not HbA1c, negatively predicted femoral neck BMD in T1D (β = -0.39, p = 0.014), while BMI was a positive predictor for lumbar spine (β = 0.46, p = 0.006) and femoral neck BMD (β = 0.44, p = 0.007) in T2D, besides gender influence. Age negatively predicted BMD in controls, but not in patients with diabetes. Conclusions: Long-standing diabetes and female gender particularly increase the risk for low bone mass in T1D. An increased body weight partially hinders BMD loss in T2D. The impact of age appears to be surpassed by that of other bone-regulating factors in both T1D and T2D patients.
Background: Diabetes mellitus is a chronic whole-body disease leading to a wide range of complications, such as cardiovascular disease, retinopathy, nephropathy, neuropathy and also “sweet bone” disease [1]. Although the underlying pathophysiological background is very different, type 1 (T1D) and type 2 diabetes (T2D) are both associated with an increased fracture risk, which is multifactorial and only partially explained by falls and bone mineral density (BMD) [2]. The most consistent effect is upon hip fracture risk, with a 2.4- to 7-fold increase in T1D [3] and a two- to three-fold increase in T2D compared to the general population [1]. Diabetic osteopathy in T1D and T2D is characterized by low serum vitamin D, negative calcium balance, low bone turnover and high sclerostin levels [4]. However, bone mass may differ to some extent in T1D when compared to T2D [5], but not in all studies [6]. Low BMD occurs early after disease onset due to the deleterious effects of insulinopenia upon bone turnover and bone mass accrual in T1D, remaining rather stable afterwards [7]. Reported BMD in T2D varies from unaltered bone density [8, 9] to a paradoxically higher BMD [5] compared to controls. Low bone mass was also found in the later stages of T2D, possibly linked to microvascular disease [10]. Skeletal fragility is nevertheless described in both T1D and T2D, independently of BMD [2]. Advanced glycation end products (AGEs) alter collagen structure, promote oxidative stress and inflammation, and also contribute to low bone turnover [1, 3]. The effect of glycemic control, reflected by HbA1c levels, upon bone is inconsistent, with some studies reporting an elevated fracture risk with increasing HbA1c [11, 12], while bone density evolution appears rather independent of HbA1c levels [5, 6]. In T2D, the protective effects of an increased body weight and hyperinsulinemia upon bone are counterbalanced by the negative impact of increased visceral adiposity and insulin resistance, an inadequate adaptation of bone strength to increased mechanical load, the long duration of disease evolution and various anti-diabetic drugs (e.g., thiazolidinediones or sodium-glucose cotransporter type 2 inhibitors, SGLT-2) [1]. We aimed at investigating independent predictors of BMD in T1D and T2D patients compared to controls with regard to general and diabetes-specific parameters. Conclusions: Female sex and long-standing diabetes particularly increase the risk for low BMD in T1D, with special concern for the femoral neck. An increased BMI partially contributes to BMD preservation in T2D, independently of age; however, accurately assessing bone mass is rather difficult in T2D due to the various factors contributing to bone changes.
Background: Despite the increased fracture risk, bone mineral density (BMD) is variable in type 1 (T1D) and type 2 (T2D) diabetes mellitus. We aimed at comparing independent BMD predictors in T1D, T2D and control subjects, respectively. Methods: Cross-sectional case-control study enrolling 30 T1D, 39 T2D and 69 age-, sex- and body mass index (BMI)-matched controls that underwent clinical examination, dual-energy X-ray absorptiometry (BMD at the lumbar spine and femoral neck) and serum determination of HbA1c and parameters of calcium and phosphate metabolism. Results: T2D patients had similar BMD compared to T1D individuals (after adjusting for age, BMI and disease duration) and to matched controls, respectively. In multiple regression analysis, diabetes duration, but not HbA1c, negatively predicted femoral neck BMD in T1D (β = -0.39, p = 0.014), while BMI was a positive predictor for lumbar spine (β = 0.46, p = 0.006) and femoral neck BMD (β = 0.44, p = 0.007) in T2D, besides gender influence. Age negatively predicted BMD in controls, but not in patients with diabetes. Conclusions: Long-standing diabetes and female gender particularly increase the risk for low bone mass in T1D. An increased body weight partially hinders BMD loss in T2D. The impact of age appears to be surpassed by that of other bone-regulating factors in both T1D and T2D patients.
13,469
289
[ 467, 2670, 573, 496, 259, 687, 1119, 423, 131 ]
12
[ "patients", "bmd", "t1d", "t2d", "complications", "diabetes", "controls", "bone", "type", "patients complications" ]
[ "insulinopenia bone turnover", "bone t1d patients", "bone mineral density", "population diabetic osteopathy", "diabetic complications skeletal" ]
null
[CONTENT] Diabetes mellitus | Bone mineral density | HbA1c [SUMMARY]
null
[CONTENT] Diabetes mellitus | Bone mineral density | HbA1c [SUMMARY]
[CONTENT] Diabetes mellitus | Bone mineral density | HbA1c [SUMMARY]
[CONTENT] Diabetes mellitus | Bone mineral density | HbA1c [SUMMARY]
[CONTENT] Diabetes mellitus | Bone mineral density | HbA1c [SUMMARY]
[CONTENT] Adult | Biomarkers | Blood Glucose | Body Mass Index | Bone Density | Case-Control Studies | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Diabetes Mellitus, Type 2 | Female | Follow-Up Studies | Fractures, Bone | Glycated Hemoglobin | Humans | Male | Middle Aged | Osteoporosis | Prognosis | Romania [SUMMARY]
null
[CONTENT] Adult | Biomarkers | Blood Glucose | Body Mass Index | Bone Density | Case-Control Studies | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Diabetes Mellitus, Type 2 | Female | Follow-Up Studies | Fractures, Bone | Glycated Hemoglobin | Humans | Male | Middle Aged | Osteoporosis | Prognosis | Romania [SUMMARY]
[CONTENT] Adult | Biomarkers | Blood Glucose | Body Mass Index | Bone Density | Case-Control Studies | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Diabetes Mellitus, Type 2 | Female | Follow-Up Studies | Fractures, Bone | Glycated Hemoglobin | Humans | Male | Middle Aged | Osteoporosis | Prognosis | Romania [SUMMARY]
[CONTENT] Adult | Biomarkers | Blood Glucose | Body Mass Index | Bone Density | Case-Control Studies | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Diabetes Mellitus, Type 2 | Female | Follow-Up Studies | Fractures, Bone | Glycated Hemoglobin | Humans | Male | Middle Aged | Osteoporosis | Prognosis | Romania [SUMMARY]
[CONTENT] Adult | Biomarkers | Blood Glucose | Body Mass Index | Bone Density | Case-Control Studies | Cross-Sectional Studies | Diabetes Mellitus, Type 1 | Diabetes Mellitus, Type 2 | Female | Follow-Up Studies | Fractures, Bone | Glycated Hemoglobin | Humans | Male | Middle Aged | Osteoporosis | Prognosis | Romania [SUMMARY]
[CONTENT] insulinopenia bone turnover | bone t1d patients | bone mineral density | population diabetic osteopathy | diabetic complications skeletal [SUMMARY]
null
[CONTENT] insulinopenia bone turnover | bone t1d patients | bone mineral density | population diabetic osteopathy | diabetic complications skeletal [SUMMARY]
[CONTENT] insulinopenia bone turnover | bone t1d patients | bone mineral density | population diabetic osteopathy | diabetic complications skeletal [SUMMARY]
[CONTENT] insulinopenia bone turnover | bone t1d patients | bone mineral density | population diabetic osteopathy | diabetic complications skeletal [SUMMARY]
[CONTENT] insulinopenia bone turnover | bone t1d patients | bone mineral density | population diabetic osteopathy | diabetic complications skeletal [SUMMARY]
[CONTENT] patients | bmd | t1d | t2d | complications | diabetes | controls | bone | type | patients complications [SUMMARY]
null
[CONTENT] patients | bmd | t1d | t2d | complications | diabetes | controls | bone | type | patients complications [SUMMARY]
[CONTENT] patients | bmd | t1d | t2d | complications | diabetes | controls | bone | type | patients complications [SUMMARY]
[CONTENT] patients | bmd | t1d | t2d | complications | diabetes | controls | bone | type | patients complications [SUMMARY]
[CONTENT] patients | bmd | t1d | t2d | complications | diabetes | controls | bone | type | patients complications [SUMMARY]
[CONTENT] bone | t2d | increased | t1d | disease | bone turnover | turnover | fracture | fracture risk | effect [SUMMARY]
null
[CONTENT] patients | complications | patients complications | t1d | t1d patients | type | bmd | diabetes | t1d patients complications | controls [SUMMARY]
[CONTENT] neck increased | bmd t1d special concern | bmd preservation t2d | bmd preservation t2d independently | t2d independently age appreciating | t2d independently age | extent difficult t2d contributing | extent difficult t2d | extent difficult | long standing diabetes particularly [SUMMARY]
[CONTENT] patients | bmd | t1d | complications | t2d | diabetes | patients complications | bone | t1d patients | controls [SUMMARY]
[CONTENT] patients | bmd | t1d | complications | t2d | diabetes | patients complications | bone | t1d patients | controls [SUMMARY]
[CONTENT] BMD | 1 | 2 ||| BMD | T1D | T2D [SUMMARY]
null
[CONTENT] BMD | BMI ||| BMD | 0.014 | BMI | 0.46 | 0.006 | BMD | 0.44 | 0.007 ||| BMD [SUMMARY]
[CONTENT] BMD | T2D. [SUMMARY]
[CONTENT] BMD | 1 | 2 ||| BMD | T1D | T2D ||| 30 | 39 | 69 | BMI | BMD ||| ||| BMD | BMI ||| BMD | 0.014 | BMI | 0.46 | 0.006 | BMD | 0.44 | 0.007 ||| BMD ||| BMD | T2D. [SUMMARY]
[CONTENT] BMD | 1 | 2 ||| BMD | T1D | T2D ||| 30 | 39 | 69 | BMI | BMD ||| ||| BMD | BMI ||| BMD | 0.014 | BMI | 0.46 | 0.006 | BMD | 0.44 | 0.007 ||| BMD ||| BMD | T2D. [SUMMARY]
ANTHELMINTIC EFFECTS OF DRIED GROUND BANANA PLANT LEAVES
28480391
Helminths are endoparasites that cause major losses to profitable sheep production in Brazil. The increasing development of resistant strains of endoparasites has driven the search for sustainable alternatives. The aim of this paper was to provide information on endoparasite control with banana leaves in infected sheep as an alternative control strategy and to assess its viability.
BACKGROUND
In this study, we performed two trials to investigate the anthelmintic properties of banana leaves against endoparasites in sheep. In Trial 1, twelve sheep were artificially infected with Trichostrongylus colubriformis; in Trial 2, eleven sheep were artificially infected with Haemonchus contortus. Clinical examinations, packed cell volume, total protein, faecal egg counts (FECs) and egg hatchability tests (EHTs) were performed. At the end of the trials, the sheep were humanely slaughtered, and total worm counts were performed.
MATERIALS AND METHODS
In Trials 1 and 2, no significant FEC decreases were noted, but significant differences in EHTs were observed. Total worm counts and clinical and haematological parameters did not reveal significant changes between the treatment and control groups. These results suggest that feeding dried ground banana plant leaves to sheep may reduce the viability of Trichostrongylus colubriformis eggs, and this anthelmintic activity is potentially exploitable as part of an integrated parasite management programme.
RESULTS
However, further investigation is needed to establish the optimal dosage, develop a convenient delivery form and confirm the economic feasibility of using banana plantation byproducts as feed for ruminant species. Abbreviations: coproculture test (CT); faecal egg count (FEC); egg hatchability test (EHT).
CONCLUSION
[ "Animals", "Anthelmintics", "Feces", "Female", "Haemonchus", "Larva", "Male", "Musa", "Parasite Egg Count", "Plant Extracts", "Plant Leaves", "Sheep", "Sheep Diseases", "Trichostrongylosis", "Trichostrongylus" ]
5411864
Introduction
Gastrointestinal helminthiases are a major sanitary concern in sheep herds. Classical clinical signs include loss of appetite, weight loss, melena and anaemia in the case of Haemonchus, whereas severe Trichostrongylus infections may result in diarrhoea (West et al., 2009). The parasitological control of nematodes is based on repeated treatments with anthelmintic drugs (Amarante et al., 1997). The practice of repeated treatments favors the emergence of helminths resistant to existing medication (Torres-Acosta and Hoste, 2008). Currently, endoparasites show widespread resistance to the available anthelmintics (Max et al., 2010), generating increasing concern about the presence of residues in meat and milk and their influence on the environment (Athanasiadou et al., 2008). Anthelmintics derived from bioactive plants can be an alternative for the treatment of parasitic infections (Akhtar et al., 2000). Several studies confirm that tanniniferous plants have been used for the control of gastrointestinal nematodes in small ruminants, using extracts, leaves, fruits or seeds from different regions of the world obtained with different techniques (Min et al., 2004; Martinez-Ortiz-De-Montellano et al., 2010; Novobilský et al., 2013). Sheep that grazed on French honeysuckle (Hedysarum coronarium), a source of condensed tannins, exhibited lower FECs and reduced nematode burdens at post-mortem examination compared with animals that grazed on lucerne (Medicago sativa) (Niezen et al., 1995). Analyzing the effect of Acacia molissima extract on lambs naturally infected with H. contortus and T. colubriformis, Minho et al. (2008) reported a reduction in egg counts per gram of faeces and in the parasite load of H. contortus in the abomasum, a fact not observed in the parasite load of T. colubriformis in the intestine. Nery et al. (2012) studied the efficacy of extracts of immature mango against ovine gastrointestinal nematodes and showed that this fruit could be used for ovine nematode control. Cala et al. (2014) used Artemisia annua L. extracts in naturally infected sheep and found that this plant combined with commercial anthelmintics showed potential synergism in controlling Haemonchus contortus. Knowledge of the traditional use of the banana plant (Musa spp.) in medicine in Asia reached Europe as early as the XVI century (Touwaide and Appetiti, 2013). Currently, bananas are one of the most important fruit crops in the world, with a global production of approximately 102 million tons (FAO, 2012). Banana plants yield approximately 13 tons/ha/year of leaves and pseudostems, a biomass that could be exploited as fodder for ruminants (Foulkes et al., 1977). The crude protein content of banana leaves ranges from 14.6 g to 17.9 g (Heuzé and Tran, 2013), and its use as a feed source for cattle in the tropics has been explored with positive results (Llosa, 1950). The anthelmintic effect of the banana plant has been studied both in vitro and in vivo. Aqueous extracts from the leaves, false stems and flowering stalks of banana plants significantly reduced the in vitro recovery of Haemonchus larvae from pooled faeces from naturally infected sheep, with the average reduction in worm recovery ranging from 98.5 to 100% (Oliveira et al., 2010). Lambs that were artificially infected with H. contortus and fed banana plant leaves for 21 consecutive days demonstrated a significant FEC reduction compared with the control group, suggesting that the treatment reduced the fecundity of the worms (Marie-Magdalene et al., 2010). 
Nogueira et al. (2012) reported anthelmintic properties of aqueous extracts of banana leaves, stems and heart: in an in vitro test, the extracts inhibited the hatching of H. contortus eggs. In an in vivo test using naturally infected sheep, the authors reported a reduction in egg elimination. These studies have demonstrated that feeding banana plant to sheep may have some efficacy in controlling nematode infections, although it is unclear how it could be used in a feasible and effective manner at the farm level. Recommendations to use the banana plant as fodder date back to the 1950s (Vaitsman, 1954), but its use has not reached any significant level. In addition, gaps persist in the literature regarding its economic viability and effective dosage. The aerial parts of the banana plant are commonly used to cover and protect the soil (Borges, 2006), and the productive chain lacks reliable information to support any efforts towards the use of this biomass in animal nutrition. This study describes two trials designed to assess the efficacy and safety of a daily dose of 150 g of dried ground banana plant leaves given for ten consecutive days against artificial Haemonchus and Trichostrongylus infections in sheep by comparing physical, haematological and parasitological parameters.
null
null
Results
Analyses of the fresh and dried ground Musa spp. leaves used in Trials 1 and 2 are presented in Table 1. [Table 1 caption: Analysis of the fresh and dried ground Musa spp. leaves used in the trials. Footnotes: values expressed as g/kg of dry matter; as gram equivalents of tannic acid per kg dry matter; as gram equivalents of leucocyanidin per kg dry matter.] No significant difference was noted between the weights of the animals in the control and treatment groups in Trial 1 (monospecific infection with T. colubriformis). In addition, no significant difference was noted in the packed cell volume of the control and treatment groups. Although the sheep in the control group exhibited a higher total protein concentration at Week 3 than the sheep in the treatment group (P = 0.001), all values were within the normal range for the species. In Trial 1, egg hatchability was significantly decreased in the group fed dried ground banana plant leaves (Musa spp.) (Table 2). Larval hatchability was not inhibited in the control or treatment groups infected with H. contortus (Trial 2) (Table 2). [Table 2 caption: Egg hatchability tests (EHTs) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus). Trial 1: control group n = 6, treatment group n = 6; Trial 2: control group n = 5, treatment group n = 6.] No significant difference was noted in FECs between the control and treatment groups infected with T. colubriformis (Trial 1) or H. contortus (Trial 2) (Table 3). [Table 3 caption: Faecal egg counts (number of eggs per g of faeces, original data) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus) in sheep treated with or without banana leaves; fec = faecal egg count; mean ± standard deviation.] Larvae were differentiated into males and females after treatment in animals monospecifically infected with T. colubriformis (Trial 1) (Table 4). [Table 4 caption: Total worm counts (original data) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus) in sheep treated with or without banana leaves; (TW)* = total worms.] No significant difference was observed in the weights of the animals in the control and treatment groups infected with H. contortus throughout the experiment. There was no significant difference in the total numbers of male and female worms between control and treated sheep with H. contortus monospecific infections (Trial 2) (Table 4).
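The FECs and hatchability results reported above lend themselves to a small worked example. Below is a minimal Python sketch of the arithmetic commonly used with the McMaster counting technique cited in the methods and with egg hatch assays; the x50 multiplication factor and the hatch-rate formula are stated assumptions (the actual factor depends on the dilution used, and the authors do not spell out their EHT calculation).

```python
def mcmaster_epg(eggs_counted: int, multiplication_factor: int = 50) -> int:
    """Eggs per gram of faeces: eggs seen in the counting chambers times the factor."""
    return eggs_counted * multiplication_factor

def hatch_rate(larvae: int, unhatched_eggs: int) -> float:
    """Percentage of eggs that hatched into larvae in an egg hatch assay."""
    return 100.0 * larvae / (larvae + unhatched_eggs)

# e.g., 24 eggs counted across both McMaster chambers -> 1200 eggs per gram
assert mcmaster_epg(24) == 1200
# e.g., 90 larvae and 10 unhatched eggs -> 90.0 % hatchability
assert hatch_rate(90, 10) == 90.0
```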
null
null
[]
[]
[]
[ "Introduction", "Materials and Methods", "Results", "Discussion" ]
[ "Gastrointestinal helminthiases are a major sanitary concern in sheep herds. Classical clinical signs include loss of appetite, weight loss, melena and anaemia in the case of Haemonchus, whereas severe Trichostrongylus infections may result in diarrhoea (West et al., 2009). The parasitological control of nematodes is based on repeated treatments with anthelmintic drugs (Amarante et al., 1997). The practice of repeated treatments favors the emergence of resistant helminth to existing medication (Torres-Acosta e Hoste, 2008). Currently, there is very resistant to endoparasites anthelmintics available. (Max et al., 2010) generating increasing concern about the presence of residues in meat and milk influencing the environment (Athanasiadou et al., 2008). Anthelmintics derived bioactive plants can be an alternative for the treatment of parasitic infections (Akhtar et al. 2000). Several studies confirm that tanninipherous plants have been used for control of gastrointestinal nematodes in small ruminants, using extracts, leaves, fruits or seeds from different regions of the world and obtained with different techniques (Min et al., 2004, Martinez-Ortiz-De-Montellano et al., 2010, Novobilský et al, 2013)Sheep that grazed on French honeysuckle (Hedysarum coronarium), a source of condensed tannins, exhibited lower FECs and reduced nematode burdens at post-mortem examination compared with animals that grazed on lucerne (Medicago sativa) (Niezen et al., 1995).\nAnalyzing the effect of Acacia molissima extract lambs naturally infected with H. contortus and T. colubriformis, Minho et al. (2008) reported that there was a reduction in egg counts per gram of feces and parasite load of H. contortus in the abomasum, a fact not observed in the parasite load of T. colubriformis in intestine. Nery et al., 2012 studied the efficacy of extracts of immature mango on ovine gastrointestinal nematodes and proved that this fruit could be used to ovine nematodes control. Cala et al., 2014 used Artemisia annua L. extacts in naturally infected sheep and discovered that this plant combined with commercial anthelminths have been one potential synergism to control Haemonchus contortus.\nKnowledge of the traditional use of the banana plant (Musa spp.) in medicine in Asia reached Europe as early as the XVI century (Touwaide and Appetiti, 2013). Currently, bananas are one of the most important fruit crops in the world, with a global production of approximately 102 million tons (FAO, 2012). Banana plants yield approximately 13 tons/ha/year of leaves and pseudostems, a biomass that could be exploited as fodder for ruminants (Foulkes et al., 1977). The crude protein content of banana leaves ranges from 14.6 g to 17.9 g (Heuzé and Tran, 2013), and its use as a feed source for cattle in the tropics has been explored with positive results (Llosa, 1950). The anthelmintic effect of the banana plant has been studied both in vitro and in vivo. Aqueous extracts from the leaves, false stems and flowering stalks of banana plants significantly reduced the in vitro recovery of Haemonchus larvae from pooled faeces from naturally infected sheep, with the average reduction in worm recovery ranging from 98.5 to 100% (Oliveira et al., 2010). Lambs that were artificially infected with H. contortus and fed with banana plant leaves for 21 consecutive days demonstrated significant FECs reduction compared with the control group, suggesting that the treatment reduced the fecundity of the worms (Marie-Magdalene et al., 2010). 
The anthelmintic properties of aqueous extract of leaves, stems and heart banana reported by Nogueira et al. (2012) state that in vitro test was inhibition of hatching eggs of H. contortus. In the in vivo test using naturally infected sheep, the authors report a reduction in the elimination of eggs. These studies have demonstrated that feeding banana plant to sheep may have some efficacy in controlling nematode infections, although it is unclear how it could be used in a feasible and effective manner at the farm level. Recommendations to use the banana plant as fodder date back to the 1950s (Vaitsman, 1954), but its use has not reached any significant level. In addition, gaps persist in the literature regarding its economic viability and effective dosage. The aerial parts of the banana plant are commonly used to cover and protect the soil (Borges, 2006), and the productive chain lacks reliable information to subside any efforts towards the use of this biomass in animal nutrition. This study describes two trials designed to assess the efficacy and safety of a daily dose of 150 g of dried ground banana plant leaves during ten consecutive days against artificial Haemonchus and Trichostrongylus infections in sheep by comparing physical, haematological and parasitological parameters.", "The trials were conducted at the Veterinary Hospital of FMVZ - USP, Section in accordance with internationally accepted principles for laboratory animal use and care expressed in Brazilian Federal Law No. 11794/2008 and its regulations, as approved by the local Institutional Animal Care and Use Committee. Fresh banana plant leaves were collected from plantations in Registro city, São Paulo state and the leaves were then stored in an open shed, chopped in the direction of the fibres and oven dried at 40 °C (as higher temperatures can damage condensed tannins). The dried material was then coarsely ground to be fed to the sheep later. Quantification of the condensed tannins in the dried leaves was performed at the Animal Nutrition Laboratory of the Centre for Nuclear Energy in Agriculture - CENA - Luís de Queiróz Superior School of Agriculture - ESALQ - USP. Twenty-three Santa Inês wethers ranging from 6 months to 1 year in age were used in the trials. The animals were obtained from free-range farms located in the state of São Paulo. The average weights of the twelve sheep obtained for Trial 1 and the eleven animals obtained for Trial 2 were 29.0 ± 4 kg and 16.0 ± 4 kg, respectively.\nTo ensure that the animals were parasite-free before the start of the trials, the sheep were treated with levamisole hydrochloride (Ripercol L, Fort Dodge) at 10 mg/kg b.w. and albendazole (Valbazen, Pfizer) at 10 mg/kg b.w. for three consecutive days. Next, the animals were treated with trichlorfon (Neguvon, Bayer Health Care, Sîo Paulo-SP, Brazil) at 100 mg/kg b.w. for another three consecutive days. FECs were performed for all sheep seven days after the last anthelmintic treatment. FECs were conducted according to the McMaster technique (Whitlock, 1948). The same protocol was applied to all animals that still carried worms until gastrointestinal nematodes were completely eliminated.\nOnce the sheep were worm-free, they were artificially infected with one of two parasites. For Trial 1, twelve animals were infected with T. colubriformis, and eleven animals were infected with H. contortus for Trial 2. 
Infective larvae from both species were obtained from the Department of Parasitology, Institute of Biosciences, Paulista State University - UNESP, Botucatu Campus. Each animal received 3,000 larvae per week by gavage for two weeks. After 21 days, FECs revealed that the artificial infections failed in both groups. Therefore, the sheep were injected with dexamethasone (Dexin, Fagra) at 0.5 mg/kg b.w. twice a week to induce immunosuppression, and the larvae were re-administered until patent monospecific infections were achieved, as confirmed by FECs of 1,000 eggs or higher. Coproculture tests (CTs) were conducted to confirm the monospecific infections. CTs were conducted according to a modified O’Sullivan technique (Roberts and O’Sullivan, 1950).\nIn each trial, once a patent monospecific infection was achieved, the sheep were randomly assigned to the control or treatment groups. In Trial 1 (monospecific infection with T. colubriformis), the control group was composed of 6 animals, and the treatment group contained 6 animals. In Trial 2 (monospecific infection with H. contortus), there were 5 sheep in the control group and 6 animals in the treatment group.\nUntil the trials began, the sheep were fed 300 g of concentrated feed formulated to deliver 15 % raw protein and coast cross (Cynodon dactylon) hay ad libitum. Beginning on Day 0 for both trials, the treatment groups were fed 150 g of dried ground banana plant leaves mixed with 150 g of concentrated feed/animal/day for 10 days plus coast cross hay ad libitum. Sheep in the control groups were fed the initial diet throughout the experimental period. Throughout the trials, the animals were subjected to physical examination by a veterinarian as described by Dirksen et al. (1983). The packed cell volume and total plasma protein were determined at Days 0, +12 and +17 in Trial 1 and at Days 0, +9, +15 and +17 in Trial 2. To evaluate the anthelmintic effect of Musa spp. leaves, FECs and egg hatchability tests (EHTs) were performed at Days 0, +5, +9, +12 and +17 in Trial 1, whereas faecal exams were conducted at Days 0, +6, +9, +12, +15 and +17 in Trial 2. EHTs were conducted according to a technique adapted from Coles et al. (1992) using pooled faeces. At the end of the trial, the sheep were humanely slaughtered, and the gastrointestinal contents and mucosae were collected for total worm counts.\nStatistical analyses were performed using SAS 9.3 (SAS Institute, USA). In both trials, the design of this experiment was completely randomised with two treatments (with or without banana leaves) and animals were the experimental unit. The data were tested for normality and variance homogeneity for the proc univariate normal procedure. The analysis of variance was performed using the proc mixed with repeated measurements procedure in SAS®. The FEC and worm count were transformed by log10 (x + 10). Comparisons of means were carried out by pdiff. The probability level for acceptance or rejection of the hypothesis was 5 %.", "Analysis of the fresh and dried ground Musa spp. leaves used in Trials 1 and 2 are presented in Table 1.\nAnalysis of the fresh and dried ground Musa spp. leaves used in the trials.\nValues expressed as g/kg of dry matter\nValues expressed as gram equivalents of tannic acid per kg dry matter\nValues expressed as gram equivalents of leucocyanidin per kg dry matter\nNo significant difference was noted between the weight of the animals in the control and treatment groups in Trial 1 and 2 (monospecific infection with T. colubriformis). 
In addition, no significant difference was noted in the packed cell volume of the control and treatment groups. Although the sheep in the control group exhibited a higher total protein concentration at Week 3 than the sheep in the treatment group (P=0.001), all values were within the normal range for the species. In Trial 1, the EHT was significantly decreased in the group fed dried ground banana plant leaves (Musa spp.) (Table 2). Larval hatchability was not inhibited in the control or treatment groups infected with H. contortus (Trial 2) (Table 2).\nEgg hatchability tests (EHTs) for Trial 1 (artificial infection with\nTrial 1: Control group n=6, Treatment group n=6; Trial 2: Control group n=5, Treatment group n=6.\nNo significant difference was noted in FECs between the control and treatment groups infected with T. colubriformis (Trial 1) and H. contortus (Trial 2) (Table 3).\nFaecal egg counts (number eggs per g faeces - original date) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus) in sheep treated with or without banana leaves.\nfec = faecal egg count; mean± standard deviation\nWe made the differentiation of larvae between male and female, after treatment in animals mono-specifically infected with T. colubriformis (Trial 1) (Table 4).\nTotal worm counts (original date) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus) in sheep treated with or without banana leaves.\n(TW)*=Total worms\nNo significant difference was observed in the weight of the animals in the control and treatment groups infected with H. contortus throughout the duration of the experiment.\nThere was no significant difference in the total number of males and females in control sheep and sheep treated with H. contortus monospecific infections (Trial 2) (Table 4).", "The banana extract evaluated in concentration was well accepted by the animals, since the offered material was ingested by them, demonstrating having good palatability. In the trial period, the animals showed no diarrhea and changes in clinical parameters. The results strongly suggest that the ingestion of dried ground banana does not cause any deleterious effects\nThe effectiveness of Acacia extract was tested in vivo by Cenci et al. (2007) and Max (2010when they used naturally infected sheep and demonstrated a decrease in OPG count and reduction in parasite burden of gastrointestinal nematodes. Minho et al. (2008) analyzed, in vitro, the effect of condensed tannins from Acacia molissima on lambs naturally infected with H. contortus and T. colubriformis and reported that there was a reduction in OPG count and parasite load of H. contortus in the abomasum when they received 1.6 g / kg body weight extract. The anthelmintic effect of condensed tannins in gastrointestinal nematodes in sheep was confirmed by the authors. Manolaraki et al. (2010) tested the action anthelmintic of condensed tannins present in Pistacia lentiscus, Quercus coccifera and Ceratonia soliqua in sheep infected with H. contortus and T. colubriformis. There was a reduction in the elimination of eggs brought about mainly by the decrease in fecundity of females of both species. These results suggested that tannins could be an alternative method for controlling endoparasites in ruminants raised in a pasture management system, thereby reducing reliance on antihelmintics. 
We did not observe a similar reduction in FECs in either of the trials reported herein, most likely because banana plant leaves have low tannin concentrations compared with other tanniniferous plants. Ribas et al. (2009) evaluated the in vivo efficacy of banana leaves in the control of gastrointestinal worms in small ruminants and observed no reduction in FECs in treated animals compared with the control group. According to the authors, this may be due to the short period of administration of banana leaves or to the restricted supply of 1 kg/animal/day. Assessing the anthelmintic efficacy of banana crop waste against gastrointestinal nematodes of sheep, Nogueira et al. (2012) reported that, in the in vivo test, the extract showed no efficacy, whereas the leaf extract showed moderate efficacy. The authors suggested that the low efficacy may be related to the dose, the extraction process or the frequency of administration. These studies also reported an absence of anthelmintic action of banana extracts in in vivo tests, corroborating the results found in the present study.\nIn this study, there was a significant reduction in the hatching of Trichostrongylus eggs, but no such reduction was observed for Haemonchus eggs. In in vitro tests, Nogueira et al. (2012) evaluated the effectiveness of an aqueous banana extract on the hatching of Haemonchus spp. eggs at a concentration of 2.5 mg/mL and reported significant inhibition compared with the control treatment. Batatinha et al. (2004) evaluated the effectiveness of an aqueous extract of banana leaves on cultured gastrointestinal nematode larvae from goats and reported that, at a concentration of 130.6 mg/mL, the reductions in the numbers of Trichostrongylus, Oesophagostomum and Haemonchus larvae were 71.52, 95.80 and 97.92 %, respectively. Those results showed a greater reduction in the number of Haemonchus larvae than in larvae of the genus Trichostrongylus. The anthelmintic action of aqueous banana extracts observed in such in vitro tests may be related to the developmental phase of the parasites, since, according to Paolini et al. (2003), the divergence of effects depends on the developmental stage of the parasite exposed to the extract. In addition to the developmental phase of the parasite, Marie-Magdalene et al. (2010) suggested that other compounds, such as terpenoids and flavonoids, potentially play a role in the anthelmintic effect exhibited by Musa spp. both in vitro and in vivo. The aqueous extraction method used by Oliveira et al. (2010) reinforces this hypothesis because it required heating the samples at 60 °C for one hour, which would denature most condensed tannins. Although we did not observe a decrease in FECs in either trial, a significant reduction in the in vitro EHT was observed in the animals infected with T. colubriformis, demonstrating an inhibitory effect of the banana leaf on egg development. It is likely that the eggs counted at these time points were not viable, suggesting that dried ground banana plant leaves could be used to decrease pasture contamination, even at a dose of 150 g per sheep.\nThe use of natural products would further reduce the presence of chemical residues in foods of animal origin, especially in small ruminants, which have short meat production cycles. Given the increasing pressure from consumer markets for foods that are free of chemical residues, this type of management correlates well with global efforts to reduce environmental pollution. 
The diversity of the Brazilian flora allows for the possibility of utilising various plant products to control parasitic diseases in livestock. A collective, systematic effort is necessary to incorporate functional or therapeutic foods into feed for small ruminants to control worm infections.\nOur results suggest that Musa spp. has anthelmintic properties, as the treatment completely inhibited Trichostrongylus colubriformis larval hatchability in vitro at two consecutive time points.\nThe tannins present in Musa spp. may promote animal health. However, side effects depend on the concentration and on the method used to extract these metabolites. Thus, further studies are needed to define the mode of use, extraction methods, secondary metabolite profiles and doses in order to facilitate the use of these compounds in nematode control and, consequently, to increase the productivity of the sheep industry. Therefore, biopanning for bioactive compounds and the development of an anthelmintic product containing condensed tannins would have great commercial potential." ]
[ "intro", "materials|methods", "results", "discussion" ]
[ "alternative parasite control", "endoparasites", "small ruminant" ]
Introduction: Gastrointestinal helminthiases are a major sanitary concern in sheep herds. Classical clinical signs include loss of appetite, weight loss, melena and anaemia in the case of Haemonchus, whereas severe Trichostrongylus infections may result in diarrhoea (West et al., 2009). The parasitological control of nematodes is based on repeated treatments with anthelmintic drugs (Amarante et al., 1997). The practice of repeated treatments favours the emergence of helminths resistant to existing medications (Torres-Acosta and Hoste, 2008). Currently, endoparasites show high resistance to the available anthelmintics (Max et al., 2010), generating increasing concern about the presence of residues in meat and milk and their influence on the environment (Athanasiadou et al., 2008). Anthelmintics derived from bioactive plants can be an alternative for the treatment of parasitic infections (Akhtar et al., 2000). Several studies confirm that tanniniferous plants have been used for the control of gastrointestinal nematodes in small ruminants, using extracts, leaves, fruits or seeds from different regions of the world obtained with different techniques (Min et al., 2004; Martinez-Ortiz-De-Montellano et al., 2010; Novobilský et al., 2013). Sheep that grazed on French honeysuckle (Hedysarum coronarium), a source of condensed tannins, exhibited lower FECs and reduced nematode burdens at post-mortem examination compared with animals that grazed on lucerne (Medicago sativa) (Niezen et al., 1995). Analysing the effect of Acacia molissima extract in lambs naturally infected with H. contortus and T. colubriformis, Minho et al. (2008) reported a reduction in egg counts per gram of faeces and in the parasite load of H. contortus in the abomasum, a fact not observed for the parasite load of T. colubriformis in the intestine. Nery et al. (2012) studied the efficacy of extracts of immature mango against ovine gastrointestinal nematodes and showed that this fruit could be used for the control of ovine nematodes. Cala et al. (2014) used Artemisia annua L. extracts in naturally infected sheep and found that this plant, combined with commercial anthelmintics, has potential synergism for the control of Haemonchus contortus. Knowledge of the traditional use of the banana plant (Musa spp.) in medicine in Asia reached Europe as early as the 16th century (Touwaide and Appetiti, 2013). Currently, bananas are one of the most important fruit crops in the world, with a global production of approximately 102 million tons (FAO, 2012). Banana plants yield approximately 13 tons/ha/year of leaves and pseudostems, a biomass that could be exploited as fodder for ruminants (Foulkes et al., 1977). The crude protein content of banana leaves ranges from 14.6 g to 17.9 g (Heuzé and Tran, 2013), and its use as a feed source for cattle in the tropics has been explored with positive results (Llosa, 1950). The anthelmintic effect of the banana plant has been studied both in vitro and in vivo. Aqueous extracts from the leaves, false stems and flowering stalks of banana plants significantly reduced the in vitro recovery of Haemonchus larvae from pooled faeces from naturally infected sheep, with the average reduction in worm recovery ranging from 98.5 to 100% (Oliveira et al., 2010). Lambs that were artificially infected with H. contortus and fed banana plant leaves for 21 consecutive days demonstrated a significant FEC reduction compared with the control group, suggesting that the treatment reduced the fecundity of the worms (Marie-Magdalene et al., 2010). 
Nogueira et al. (2012) reported anthelmintic properties of aqueous extracts of banana leaves, stems and hearts, stating that in the in vitro test there was inhibition of the hatching of H. contortus eggs. In the in vivo test using naturally infected sheep, the authors reported a reduction in the elimination of eggs. These studies have demonstrated that feeding banana plants to sheep may have some efficacy in controlling nematode infections, although it is unclear how this could be done in a feasible and effective manner at the farm level. Recommendations to use the banana plant as fodder date back to the 1950s (Vaitsman, 1954), but its use has not reached any significant level. In addition, gaps persist in the literature regarding its economic viability and effective dosage. The aerial parts of the banana plant are commonly used to cover and protect the soil (Borges, 2006), and the productive chain lacks reliable information to support any efforts towards the use of this biomass in animal nutrition. This study describes two trials designed to assess the efficacy and safety of a daily dose of 150 g of dried ground banana plant leaves given for ten consecutive days against artificial Haemonchus and Trichostrongylus infections in sheep by comparing physical, haematological and parasitological parameters. Materials and Methods: The trials were conducted at the Veterinary Hospital of FMVZ - USP in accordance with internationally accepted principles for laboratory animal use and care expressed in Brazilian Federal Law No. 11794/2008 and its regulations, as approved by the local Institutional Animal Care and Use Committee. Fresh banana plant leaves were collected from plantations in Registro city, São Paulo state, and the leaves were then stored in an open shed, chopped in the direction of the fibres and oven dried at 40 °C (as higher temperatures can damage condensed tannins). The dried material was then coarsely ground to be fed to the sheep later. Quantification of the condensed tannins in the dried leaves was performed at the Animal Nutrition Laboratory of the Centre for Nuclear Energy in Agriculture - CENA - Luís de Queiróz Superior School of Agriculture - ESALQ - USP. Twenty-three Santa Inês wethers ranging from 6 months to 1 year in age were used in the trials. The animals were obtained from free-range farms located in the state of São Paulo. The average weights of the twelve sheep obtained for Trial 1 and the eleven animals obtained for Trial 2 were 29.0 ± 4 kg and 16.0 ± 4 kg, respectively. To ensure that the animals were parasite-free before the start of the trials, the sheep were treated with levamisole hydrochloride (Ripercol L, Fort Dodge) at 10 mg/kg b.w. and albendazole (Valbazen, Pfizer) at 10 mg/kg b.w. for three consecutive days. Next, the animals were treated with trichlorfon (Neguvon, Bayer Health Care, São Paulo-SP, Brazil) at 100 mg/kg b.w. for another three consecutive days. FECs were performed for all sheep seven days after the last anthelmintic treatment. FECs were conducted according to the McMaster technique (Whitlock, 1948). The same protocol was applied to all animals that still carried worms until gastrointestinal nematodes were completely eliminated. Once the sheep were worm-free, they were artificially infected with one of two parasites. For Trial 1, twelve animals were infected with T. colubriformis, and for Trial 2, eleven animals were infected with H. contortus. 
Infective larvae from both species were obtained from the Department of Parasitology, Institute of Biosciences, Paulista State University - UNESP, Botucatu Campus. Each animal received 3,000 larvae per week by gavage for two weeks. After 21 days, FECs revealed that the artificial infections failed in both groups. Therefore, the sheep were injected with dexamethasone (Dexin, Fagra) at 0.5 mg/kg b.w. twice a week to induce immunosuppression, and the larvae were re-administered until patent monospecific infections were achieved, as confirmed by FECs of 1,000 eggs or higher. Coproculture tests (CTs) were conducted to confirm the monospecific infections. CTs were performed according to a modified O’Sullivan technique (Roberts and O’Sullivan, 1950). In each trial, once a patent monospecific infection was achieved, the sheep were randomly assigned to the control or treatment groups. In Trial 1 (monospecific infection with T. colubriformis), the control group was composed of 6 animals, and the treatment group contained 6 animals. In Trial 2 (monospecific infection with H. contortus), there were 5 sheep in the control group and 6 animals in the treatment group. Until the trials began, the sheep were fed 300 g of concentrated feed formulated to deliver 15 % crude protein and coast cross (Cynodon dactylon) hay ad libitum. Beginning on Day 0 for both trials, the treatment groups were fed 150 g of dried ground banana plant leaves mixed with 150 g of concentrated feed/animal/day for 10 days plus coast cross hay ad libitum. Sheep in the control groups were fed the initial diet throughout the experimental period. Throughout the trials, the animals were subjected to physical examination by a veterinarian as described by Dirksen et al. (1983). The packed cell volume and total plasma protein were determined at Days 0, +12 and +17 in Trial 1 and at Days 0, +9, +15 and +17 in Trial 2. To evaluate the anthelmintic effect of Musa spp. leaves, FECs and egg hatchability tests (EHTs) were performed at Days 0, +5, +9, +12 and +17 in Trial 1, whereas faecal exams were conducted at Days 0, +6, +9, +12, +15 and +17 in Trial 2. EHTs were conducted according to a technique adapted from Coles et al. (1992) using pooled faeces. At the end of the trial, the sheep were humanely slaughtered, and the gastrointestinal contents and mucosae were collected for total worm counts. Statistical analyses were performed using SAS 9.3 (SAS Institute, USA). In both trials, the experimental design was completely randomised with two treatments (with or without banana leaves), and the animal was the experimental unit. The data were tested for normality and variance homogeneity using the PROC UNIVARIATE procedure. The analysis of variance was performed using the PROC MIXED repeated-measures procedure in SAS®. The FEC and worm count data were transformed by log10 (x + 10). Comparisons of means were carried out using the PDIFF option. The probability level for acceptance or rejection of the hypothesis was 5 %. Results: Analyses of the fresh and dried ground Musa spp. leaves used in Trials 1 and 2 are presented in Table 1. Analysis of the fresh and dried ground Musa spp. leaves used in the trials. Values expressed as g/kg of dry matter Values expressed as gram equivalents of tannic acid per kg of dry matter Values expressed as gram equivalents of leucocyanidin per kg of dry matter No significant difference was noted in the weight of the animals between the control and treatment groups in Trial 1 (monospecific infection with T. colubriformis). 
In addition, no significant difference was noted in the packed cell volume of the control and treatment groups. Although the sheep in the control group exhibited a higher total protein concentration at Week 3 than the sheep in the treatment group (P=0.001), all values were within the normal range for the species. In Trial 1, egg hatchability was significantly decreased in the group fed dried ground banana plant leaves (Musa spp.) (Table 2). Larval hatchability was not inhibited in the control or treatment groups infected with H. contortus (Trial 2) (Table 2). Egg hatchability tests (EHTs) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus) in sheep treated with or without banana leaves. Trial 1: Control group n=6, Treatment group n=6; Trial 2: Control group n=5, Treatment group n=6. No significant difference was noted in FECs between the control and treatment groups infected with T. colubriformis (Trial 1) and H. contortus (Trial 2) (Table 3). Faecal egg counts (number of eggs per g of faeces - original data) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus) in sheep treated with or without banana leaves. FEC = faecal egg count; mean ± standard deviation The worms recovered were differentiated into males and females after treatment in the animals monospecifically infected with T. colubriformis (Trial 1) (Table 4). Total worm counts (original data) for Trial 1 (artificial infection with Trichostrongylus colubriformis) and Trial 2 (artificial infection with Haemonchus contortus) in sheep treated with or without banana leaves. TW* = total worms No significant difference was observed in the weight of the animals in the control and treatment groups infected with H. contortus throughout the duration of the experiment. There was no significant difference in the total number of males and females between control and treated sheep with H. contortus monospecific infections (Trial 2) (Table 4). Discussion: The dried ground banana leaves at the concentration evaluated were well accepted by the animals, since the offered material was fully ingested, demonstrating good palatability. During the trial period, the animals showed no diarrhoea or changes in clinical parameters. These results strongly suggest that the ingestion of dried ground banana leaves does not cause any deleterious effects. The effectiveness of Acacia extract was tested in vivo by Cenci et al. (2007) and Max (2010), who used naturally infected sheep and demonstrated a decrease in FECs and a reduction in the parasite burden of gastrointestinal nematodes. Minho et al. (2008) analysed the effect of condensed tannins from Acacia molissima in lambs naturally infected with H. contortus and T. colubriformis and reported a reduction in FECs and in the parasite load of H. contortus in the abomasum when the lambs received 1.6 g/kg body weight of extract. The authors thereby confirmed the anthelmintic effect of condensed tannins against gastrointestinal nematodes in sheep. Manolaraki et al. (2010) tested the anthelmintic action of condensed tannins present in Pistacia lentiscus, Quercus coccifera and Ceratonia siliqua in sheep infected with H. contortus and T. colubriformis. There was a reduction in the elimination of eggs, brought about mainly by a decrease in the fecundity of females of both species. These results suggested that tannins could be an alternative method for controlling endoparasites in ruminants raised in a pasture management system, thereby reducing reliance on anthelmintics. 
We did not observe a similar reduction in FECs in either of the trials reported herein, most likely because banana plant leaves have low tannin concentrations compared with other tanniniferous plants. Ribas et al. (2009) evaluated the in vivo efficacy of banana leaves in the control of gastrointestinal worms in small ruminants and observed no reduction in FECs in treated animals compared with the control group. According to the authors, this may be due to the short period of administration of banana leaves or to the restricted supply of 1 kg/animal/day. Assessing the anthelmintic efficacy of banana crop waste against gastrointestinal nematodes of sheep, Nogueira et al. (2012) reported that, in the in vivo test, the extract showed no efficacy, whereas the leaf extract showed moderate efficacy. The authors suggested that the low efficacy may be related to the dose, the extraction process or the frequency of administration. These studies also reported an absence of anthelmintic action of banana extracts in in vivo tests, corroborating the results found in the present study. In this study, there was a significant reduction in the hatching of Trichostrongylus eggs, but no such reduction was observed for Haemonchus eggs. In in vitro tests, Nogueira et al. (2012) evaluated the effectiveness of an aqueous banana extract on the hatching of Haemonchus spp. eggs at a concentration of 2.5 mg/mL and reported significant inhibition compared with the control treatment. Batatinha et al. (2004) evaluated the effectiveness of an aqueous extract of banana leaves on cultured gastrointestinal nematode larvae from goats and reported that, at a concentration of 130.6 mg/mL, the reductions in the numbers of Trichostrongylus, Oesophagostomum and Haemonchus larvae were 71.52, 95.80 and 97.92 %, respectively. Those results showed a greater reduction in the number of Haemonchus larvae than in larvae of the genus Trichostrongylus. The anthelmintic action of aqueous banana extracts observed in such in vitro tests may be related to the developmental phase of the parasites, since, according to Paolini et al. (2003), the divergence of effects depends on the developmental stage of the parasite exposed to the extract. In addition to the developmental phase of the parasite, Marie-Magdalene et al. (2010) suggested that other compounds, such as terpenoids and flavonoids, potentially play a role in the anthelmintic effect exhibited by Musa spp. both in vitro and in vivo. The aqueous extraction method used by Oliveira et al. (2010) reinforces this hypothesis because it required heating the samples at 60 °C for one hour, which would denature most condensed tannins. Although we did not observe a decrease in FECs in either trial, a significant reduction in the in vitro EHT was observed in the animals infected with T. colubriformis, demonstrating an inhibitory effect of the banana leaf on egg development. It is likely that the eggs counted at these time points were not viable, suggesting that dried ground banana plant leaves could be used to decrease pasture contamination, even at a dose of 150 g per sheep. The use of natural products would further reduce the presence of chemical residues in foods of animal origin, especially in small ruminants, which have short meat production cycles. Given the increasing pressure from consumer markets for foods that are free of chemical residues, this type of management correlates well with global efforts to reduce environmental pollution. 
The diversity of the Brazilian flora allows for the possibility of utilising various plant products to control parasitic diseases in livestock. A collective, systematic effort is necessary to incorporate functional or therapeutic foods into feed for small ruminants to control worm infections. Our results suggest that Musa spp. has anthelmintic properties, as the treatment completely inhibited Trichostrongylus colubriformis larval hatchability in vitro at two consecutive time points. The tannins present in Musa spp. may promote animal health. However, side effects depend on the concentration and on the method used to extract these metabolites. Thus, further studies are needed to define the mode of use, extraction methods, secondary metabolite profiles and doses in order to facilitate the use of these compounds in nematode control and, consequently, to increase the productivity of the sheep industry. Therefore, biopanning for bioactive compounds and the development of an anthelmintic product containing condensed tannins would have great commercial potential.
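To make the log10 (x + 10) transformation described in the Materials and Methods concrete, here is a minimal Python sketch of that preprocessing step together with a simple two-group comparison. The arrays are hypothetical placeholders, not study data, and the sketch does not reproduce the paper's actual analysis, which used a repeated-measures mixed model in SAS (PROC MIXED).

```python
import numpy as np
from scipy import stats

# Hypothetical FEC values (eggs per gram of faeces). Zero counts are common,
# so 10 is added before taking log10 to keep the transform defined.
fec_control = np.array([1200, 0, 850, 2300, 400, 1500])
fec_treated = np.array([900, 1100, 0, 700, 1900, 600])

log_control = np.log10(fec_control + 10)
log_treated = np.log10(fec_treated + 10)

# A plain two-group comparison on the transformed scale; the paper's
# repeated-measures PROC MIXED analysis is not reproduced here.
t_stat, p_value = stats.ttest_ind(log_control, log_treated)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # significant if p < 0.05
```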
Background: Helminths are endoparasites that cause major losses to profitable sheep production in Brazil. The increasing development of resistant strains of endoparasites has forced the search for sustainable alternatives. The aim of this paper was to provide information about endoparasite control with banana leaves in infected sheep as an alternative control strategy and to assess its viability. Methods: In this study, we performed two trials to investigate the anthelmintic properties of banana leaves on endoparasites in sheep. In Trial 1, twelve sheep were artificially infected with Trichostrongylus colubriformis; in Trial 2, eleven sheep were artificially infected with Haemonchus contortus. Clinical examinations, packed cell volume, total protein, faecal egg counts (FECs) and egg hatchability tests (EHTs) were performed. At the end of the trials, the sheep were humanely slaughtered, and total worm counts were performed. Results: In Trials 1 and 2, no significant FEC decreases were noted, but a significant difference in EHTs was observed. Total worm counts and clinical and haematological parameters did not reveal significant changes between the treatment and control groups. These results suggest that feeding dried ground banana plant leaves to sheep may reduce the viability of Trichostrongylus colubriformis eggs, and this anthelmintic activity is potentially exploitable as part of an integrated parasite management programme. Conclusions: However, further investigation is needed to establish the optimal dosage, develop a convenient delivery form and confirm the economic feasibility of using banana plantation byproducts as feed for ruminant species. Abbreviations: Coproculture test (CT); faecal egg count (FEC); egg hatchability test (EHT).
null
null
3,439
301
[]
4
[ "sheep", "banana", "trial", "control", "leaves", "animals", "treatment", "contortus", "infected", "plant" ]
[ "anthelmintic efficacy", "anthelmintics derived bioactive", "gastrointestinal nematodes minho", "gastrointestinal nematodes sheep", "tannins gastrointestinal nematodes" ]
null
null
null
[CONTENT] alternative parasite control | endoparasites | small ruminant [SUMMARY]
null
[CONTENT] alternative parasite control | endoparasites | small ruminant [SUMMARY]
null
[CONTENT] alternative parasite control | endoparasites | small ruminant [SUMMARY]
null
[CONTENT] Animals | Anthelmintics | Feces | Female | Haemonchus | Larva | Male | Musa | Parasite Egg Count | Plant Extracts | Plant Leaves | Sheep | Sheep Diseases | Trichostrongylosis | Trichostrongylus [SUMMARY]
null
[CONTENT] Animals | Anthelmintics | Feces | Female | Haemonchus | Larva | Male | Musa | Parasite Egg Count | Plant Extracts | Plant Leaves | Sheep | Sheep Diseases | Trichostrongylosis | Trichostrongylus [SUMMARY]
null
[CONTENT] Animals | Anthelmintics | Feces | Female | Haemonchus | Larva | Male | Musa | Parasite Egg Count | Plant Extracts | Plant Leaves | Sheep | Sheep Diseases | Trichostrongylosis | Trichostrongylus [SUMMARY]
null
[CONTENT] anthelmintic efficacy | anthelmintics derived bioactive | gastrointestinal nematodes minho | gastrointestinal nematodes sheep | tannins gastrointestinal nematodes [SUMMARY]
null
[CONTENT] anthelmintic efficacy | anthelmintics derived bioactive | gastrointestinal nematodes minho | gastrointestinal nematodes sheep | tannins gastrointestinal nematodes [SUMMARY]
null
[CONTENT] anthelmintic efficacy | anthelmintics derived bioactive | gastrointestinal nematodes minho | gastrointestinal nematodes sheep | tannins gastrointestinal nematodes [SUMMARY]
null
[CONTENT] sheep | banana | trial | control | leaves | animals | treatment | contortus | infected | plant [SUMMARY]
null
[CONTENT] sheep | banana | trial | control | leaves | animals | treatment | contortus | infected | plant [SUMMARY]
null
[CONTENT] sheep | banana | trial | control | leaves | animals | treatment | contortus | infected | plant [SUMMARY]
null
[CONTENT] banana | plant | banana plant | sheep | leaves | use | reduction | naturally infected | 2010 | plants [SUMMARY]
null
[CONTENT] trial | table | trial artificial | significant difference | trial artificial infection | artificial infection | difference | infection | control | treatment [SUMMARY]
null
[CONTENT] trial | sheep | banana | control | leaves | animals | treatment | reduction | contortus | extract [SUMMARY]
null
[CONTENT] Brazil ||| ||| [SUMMARY]
null
[CONTENT] 1 | FEC ||| ||| Trichostrongylus [SUMMARY]
null
[CONTENT] Brazil ||| ||| ||| two ||| 1 | twelve | Trichostrongylus | Trial 2 | eleven | Haemonchus ||| ||| ||| ||| 1 | FEC ||| ||| Trichostrongylus ||| ||| Egg | EHT [SUMMARY]
null
Carotid intima-media thickness, hypertension, and dyslipidemia in obese adolescents.
33708303
Obesity is a global health problem with growing prevalence in developing countries. Obesity causes chronic inflammation due to imbalances between pro- and anti-inflammatory cytokines. This causes metabolic complications such as dyslipidemia, hypertension, and cardiovascular disorders. Carotid intima-media thickness (CIMT) is a predictor of atherosclerosis which can be measured easily and non-invasively. Early detection of cardiovascular diseases in obese adolescents at risk is hoped to improve outcomes.
INTRODUCTION
This is a cross-sectional study on obese adolescents aged 13-16 years old at the Pediatric Clinic of Dr. Soetomo General Hospital. Obesity is defined as a Body Mass Index higher than the 95th percentile according to the CDC (2000). Dyslipidemia is diagnosed when either an increase in cholesterol, LDL, or triglyceride or a decrease in HDL level is found, as recommended by the NCPE and the American Academy of Pediatrics. Hypertension is defined as an increase of blood pressure > P95 according to age and gender. The differences in CIMT based on dyslipidemia, hypertension, and gender were analyzed with the Wilcoxon Mann Whitney test with a significant p value (p < 0.05).
METHODS
This study included 59 obese adolescents, consisting of 32 (54.2%) male adolescents and 27 (45.8%) female adolescents. Dyslipidemia was found in 38 (64.4%) adolescents and hypertension was found in 35 (59.3%) adolescents. No difference in CIMT was found between obese adolescents with and without dyslipidemia and with and without hypertension based on gender (p > 0.05).
RESULTS
No difference in CIMT based on gender was found among adolescents aged below 18. The high prevalence of dyslipidemia and hypertension in obese adolescents warrants early detection of cardiovascular complications.
CONCLUSION
[ "Adolescent", "Carotid Intima-Media Thickness", "Cross-Sectional Studies", "Dyslipidemias", "Female", "Humans", "Hypertension", "Indonesia", "Male", "Pediatric Obesity", "Sex Factors" ]
7906559
Introduction
Obesity is a global problem which increases the risk of early death. Obesity causes chronic inflammation through an increase in the production of pro-inflammatory cytokines, which causes metabolic diseases [1]. Adolescents who have risk factors such as obesity, dyslipidemia, hypertension, and diabetes mellitus have a higher risk of cardiovascular diseases as adults [2]. Metabolic syndrome increases mortality risk up to 1.5 times [3]. The process of atherosclerosis starts early in obese children and adolescents. Carotid intima media thickness (CIMT) is a subclinical marker of atherosclerosis [2]. Measurement of CIMT is a modality that could be used to assess cardiovascular risk factors non-invasively and has been done since the 1980s. Obese adolescents with cardiovascular risk factors have higher CIMT [4]. The majority of studies of CIMT as a risk factor for cardiovascular disease have been performed in adults and in developed countries. In developing countries, studies of CIMT in obese adolescents are limited. The aim of this study is to analyze the difference in CIMT between obese adolescents with and without dyslipidemia and with and without hypertension based on gender.
Methods
A cross-sectional study was done on obese adolescents aged 13-16 years old at the Pediatric Clinic of Dr. Soetomo General Hospital, Surabaya, Indonesia. Subjects who had a Body Mass Index higher than the 95th percentile based on the BMI percentile of the CDC curve according to age and sex were included in this study. Subjects who had consumed corticosteroids within 6 months before the study, underwent hormonal therapy or consumed dyslipidemia drugs within 3 months before the study, smoked, consumed alcohol, or had an endocrine disorder were excluded. Anthropometric measurements, including body weight and height, were done by trained health workers. Body weight was measured without footwear, accessories, and with clothes that weighed less than 0.1 kg using a digital scale (Seca, Germany). Body height was measured without footwear or headwear in an erect position using a stadiometer (Seca, Germany). Body mass index (BMI) was calculated with the formula of body weight (kg) divided by squared body height (meter). Obesity is defined as a BMI higher than the 95th percentile according to age and gender based on the CDC curve (2000). Blood pressure was measured in a sitting position after the subject had rested for 10 minutes. Hypertension is defined as blood pressure higher than the 95th percentile according to age and gender. CIMT measurement was done using high-resolution B-mode ultrasonography (Toshiba, Japan) by a cardiologist. Subjects were examined in a supine position, with the neck minimally extended and the probe placed in an anterolateral position. Imaging was done on the left common carotid artery. Lipid profile testing was done using the ELISA method. The triglyceride test was done using the Autosera S TG-N Kit (Sekisui Medical Co., Ltd., Japan). LDL, HDL, and total cholesterol tests were done using Cholestest®LDL, Cholestest®N HDL, and Pureauto®S CHO-N (Sekisui Medical Co., Ltd., Japan). Dyslipidemia is diagnosed when either an increase in cholesterol, LDL, or triglyceride or a decrease in HDL level is found, as recommended by the NCPE and the American Academy of Pediatrics. Statistical methods: quantitative variables are described as mean and standard deviation. CIMT differences based on dyslipidemia, hypertension, and gender were analyzed using the Wilcoxon Mann Whitney test with a significant p of < 0.05. Analysis was done using SPSS. This study was approved by the Ethical Committee in Health Research of Dr. Soetomo General Hospital (ref. No. 0698/KEPK/X/2018). Parents of the subjects provided informed consent before the study. All data obtained from the subjects were anonymized.
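As a concrete illustration of the anthropometric definitions above: BMI = weight (kg) / height (m)^2, with obesity flagged when BMI exceeds the age- and sex-specific 95th percentile. A minimal Python sketch follows; the 95th-percentile cut-off is a hypothetical placeholder, which in practice would be read from the CDC (2000) growth reference for the subject's age and sex.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by squared height in metres."""
    return weight_kg / (height_m ** 2)

def is_obese(bmi_value: float, p95_cutoff: float) -> bool:
    """Obesity per the study's definition: BMI above the 95th percentile
    for age and sex (cut-off taken from the CDC 2000 reference curves)."""
    return bmi_value > p95_cutoff

# Hypothetical subject; the cut-off of 26.0 kg/m^2 is a placeholder value,
# not taken from the paper or the CDC tables.
subject_bmi = bmi(weight_kg=78.0, height_m=1.62)
print(f"BMI = {subject_bmi:.1f}, obese: {is_obese(subject_bmi, p95_cutoff=26.0)}")
```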
Results
This study included 59 obese adolescents, consisting of 32 (54.2%) male adolescents and 27 (45.8%) female adolescents. Dyslipidemia was found in 38 (64.4%) adolescents and hypertension was found in 35 (59.3%) adolescents. Characteristics of the subjects are shown in Table 1. There was no difference in CIMT between female and male adolescents (mean = 0.51 ± 0.12 vs 0.51 ± 0.07; p = 0.50). There was also no difference in CIMT between obese adolescents with or without dyslipidemia (female p = 0.974; male p = 0.313) and with or without hypertension (female p = 0.321; male p = 0.833) based on gender (Table 2). Characteristics of the study's subjects. Correlation between variables. SD = standard deviation
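To make the group comparisons behind these p values concrete, here is a minimal sketch of the Wilcoxon Mann-Whitney test named in the Methods, applied to CIMT values. The arrays are hypothetical placeholders, not the study's individual measurements (the paper reported group means around 0.51 mm).

```python
from scipy.stats import mannwhitneyu

# Hypothetical CIMT values (mm) for obese adolescents with and
# without dyslipidemia.
cimt_dyslipidemia = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53]
cimt_no_dyslipidemia = [0.49, 0.51, 0.46, 0.54, 0.50, 0.48]

# Two-sided Mann-Whitney U test, as used for the CIMT comparisons.
stat, p = mannwhitneyu(cimt_dyslipidemia, cimt_no_dyslipidemia,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # a difference is significant only if p < 0.05
```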
Conclusion
No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender. Age below 18 may not affect CIMT, possibly because progressive thickening of the wall of the common carotid artery has not yet started. There was a high prevalence of dyslipidemia and hypertension in obese adolescents. Further studies with greater numbers of subjects and control subjects are needed to assess the risk of cardiovascular disease in obese adolescents. What is known about this topic: No difference in CIMT was found between obese adolescents based on gender; Obese adolescents can suffer from hypertension and dyslipidemia; Hypertension and dyslipidemia can influence the CIMT. What this study adds: No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender; Progressive thickening of the wall of the common carotid artery has not yet started below the age of 18, which may explain the absence of differences in CIMT; The prevalence of dyslipidemia and hypertension in obese adolescents is high in developing countries, such as Indonesia.
[ "What is known about this topic", "What this study adds" ]
[ "No difference in CIMT was found between obese adolescents based on gender;\nObese adolescents can suffered from hypertension and dyslipidemia;\nHyperension and dyslipidemia can influence the CIMT.", "No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender;\nUnstarted progressive thickening of the wall of the common carotid artery at the age below 18 can cause no differences in CIMT;\nPrevalence of dyslipidemia and hypertension in obese adolescents are high in developing country, such as Indonesia." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Obesity is a global problem which increases the risk of early death. Obesity causes chronic inflammation through an increase in the production of pro-inflammatory cytokines, which causes metabolic diseases [1]. Adolescents who have risk factors such as obesity, dyslipidemia, hypertension, and diabetes mellitus have higher risk of cardiovascular diseases as adults [2]. Metabolic syndrome increases mortality risk up to 1.5 times [3]. The process of atherosclerosis starts early in obese children and adolescents. Carotid intima media thickness (CIMT) is a subclinical marker of atherosclerosis [2]. Measurement of CIMT is a modality that could be used to assess cardiovascular risk factors non-invasively and has been done since 1980s. Obese adolescents with cardiovascular risk factors have higher CIMT [4]. Majority of study in CIMT as a risk factor of cardiovascular disease are performed in adult and developed country. In developing country, there is a limited study in CIMT of obese adolescents. The aim of this study is to analyze the difference of CIMT between obese adolescents with and without dyslipidemia and with and without hypertension based on gender.", "A cross-sectional study was done on obese adolescents aged 13-16 year old at Pediatric Clinic Dr. Soetomo General Hospital, Surabaya, Indonesia. Subjects who had Body Mass Index higher than 95th percentile based on BMI presentile in CDC curve according to age and sex were included in this study. Subjects who had consumed corticosteroids within 6 months before study, underwent hormonal therapy or consumed dyslipidemia drugs within 3 months before study, smoked, consumed alcohol, or had endocrine disorder were excluded.\nAnthropometry measurement, including body weight and height, was done by trained health workers. Body weight was measured without footwear, accessories, and with clothes that weighed less than 0.1kg using digital scale (Seca, Germany). Body height was measured without footwear or headwear in erect position using stadiometer (Seca, Germany). Body mass index (BMI) was calculated with the formula of body weight (kg) divided by squared body height (meter). Obesity is defined as BMI higher that 95th percentile according to age and gender based on CDC curve (2000). Blood pressure was measured in sitting position after the subject had rested for 10 minutes. Hypertension is defined as blood pressure higher than 95th percentile according to age and gender.\nCIMT measurement was done using high-resolution B-mode ultrasonography (Toshiba, Japan) by a cardiologist. Subjects were examined in supine position, with neck minimally extended and probe placed in anterolateral position. Imaging was done on the left common carotid artery. Lipid profile test was done using ELISA method. Triglyceride test was done using Autosera S TG-N Kit (Sekisui Medical Co., Ltd., Japan). LDL, HDL, and total cholesterol tests were done using Cholestest®LDL, Cholestest®N HDL, dan Pureauto®S CHO-N (Sekisui Medical Co., Ltd., Japan). Dyslipidemia is diagnosed when either an increase in cholesterol, LDL, triglyceride or a decrease in HDL level is found, as recommended by NCPE and American Academy of Pediatrics.\nStatistic methods: quantitative variables are described in mean and standard deviation. CIMT differences based on dyslipidemia, hypertension, and gender were analyzed using Wilcoxon Mann Whitney with significant p of < 0.05. Analysis was done using SPSS. This study was approved by Ethical Committee in health research of Dr. Soetomo General Hospital (ref. 
No. 0698/KEPK/X/2018). Parents of the subjects provided informed consent before the study. All data obtained from the subjects were anonymized.", "This study included 59 obese adolescents, consisting of 32 (54.2%) male adolescents and 27 (45.8%) female adolescents. Dyslipidemia was found in 38 (64.4%) adolescents and hypertension was found in 35 (59.3%) adolescents. Characteristics of the subjects are shown in Table 1. There was no difference in CIMT between female and male adolescents (mean = 0.51 ± 0.12 vs 0.51 ± 0.07; p = 0.50). There was also no difference in CIMT between obese adolescents with or without dyslipidemia (female p = 0.974; male p = 0.313) and with or without hypertension (female p = 0.321; male p = 0.833) based on gender (Table 2).\nCharacteristics of the study's subjects\nCorrelation between variables\nSD = standard deviation", "Obesity is related to inflammation as a result of imbalances between pro- and anti-inflammatory cytokines [5]. Inflammation in obesity is marked by an increase in TNF and hsCRP [6]. Storage of abdominal adipose tissue causes cell dysfunction and cardiometabolic diseases in adulthood [7, 8]. The higher the body fat percentage, the higher the risk of cardiovascular diseases [9]. The atherosclerosis process starts early among children and adolescents with obesity. CIMT is a subclinical atherosclerosis marker [2]. Obese adolescents have higher CIMT compared with adolescents with a normal BMI [10]. A previous study on healthy subjects showed no difference in CIMT between female and male adolescents, but CIMT tends to increase after 10 years of age [11]. A study on obese adolescents showed no difference in CIMT between males and females, but an association between CIMT and arterial stiffness was found, especially among female adolescents [12]. The results of the present study are in accordance with that study. When a child turns 10 years old, puberty starts, and hormonal changes could cause changes in body fat composition [13].\nIn this study, the majority of subjects had dyslipidemia and hypertension (Table 1). CIMT increases when dyslipidemia, hypertension, and diabetes mellitus are present [14]. Obese adolescents have a 6.5 times higher risk of pre-hypertension or hypertension [15]. Blood pressure has a direct effect on CIMT [4]. Hypertension causes hypertrophy of the tunica media of blood vessels, therefore increasing CIMT [16]. However, another study on children and adolescents showed that there is no difference in CIMT between subjects with obesity and those with metabolic syndrome [2]. This study did not find a difference in CIMT between subjects with and without hypertension based on gender. A previous study mentioned that CIMT is associated with age, but not blood pressure [10]. No difference in CIMT between subjects with and without dyslipidemia based on gender was found (Table 2). A previous study reported that obesity could affect CIMT if cardiovascular risk factors, such as dyslipidemia, are present [4]. High levels of triglyceride are associated with increased CIMT [2]. Gender does not affect the wall of the normal common carotid artery until the age of 18, when progressive thickening of the vessel wall begins [17].\nThis study had some limitations. First, the number of subjects was limited. Second, the low sensitivity of high-resolution B-mode ultrasonography could affect CIMT measurement and might not be able to detect small differences, as in a previous study [11]. Studies of CIMT in obese adolescents in developing countries remain limited. 
Early detection of cardiovascular disease risk in obese adolescents is needed. Further studies with greater numbers of subjects and control subjects with a normal BMI are needed to assess the risk of cardiovascular disease using CIMT in obese adolescents.", "No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender. Age below 18 may not affect CIMT, possibly because progressive thickening of the wall of the common carotid artery has not yet started. There was a high prevalence of dyslipidemia and hypertension in obese adolescents. Further studies with greater numbers of subjects and control subjects are needed to assess the risk of cardiovascular disease in obese adolescents.\nWhat is known about this topic: No difference in CIMT was found between obese adolescents based on gender;\nObese adolescents can suffer from hypertension and dyslipidemia;\nHypertension and dyslipidemia can influence the CIMT.\nWhat this study adds: No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender;\nProgressive thickening of the wall of the common carotid artery has not yet started below the age of 18, which may explain the absence of differences in CIMT;\nThe prevalence of dyslipidemia and hypertension in obese adolescents is high in developing countries, such as Indonesia.", "No difference in CIMT was found between obese adolescents based on gender;\nObese adolescents can suffer from hypertension and dyslipidemia;\nHypertension and dyslipidemia can influence the CIMT.", "No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender;\nProgressive thickening of the wall of the common carotid artery has not yet started below the age of 18, which may explain the absence of differences in CIMT;\nThe prevalence of dyslipidemia and hypertension in obese adolescents is high in developing countries, such as Indonesia.", "The authors declare no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null, "COI-statement" ]
[ "Obesity", "adolescents", "CIMT", "dyslipidemia", "hypertension" ]
Introduction: Obesity is a global problem which increases the risk of early death. Obesity causes chronic inflammation through an increase in the production of pro-inflammatory cytokines, which causes metabolic diseases [1]. Adolescents who have risk factors such as obesity, dyslipidemia, hypertension, and diabetes mellitus have a higher risk of cardiovascular diseases as adults [2]. Metabolic syndrome increases mortality risk up to 1.5 times [3]. The process of atherosclerosis starts early in obese children and adolescents. Carotid intima media thickness (CIMT) is a subclinical marker of atherosclerosis [2]. Measurement of CIMT is a modality that could be used to assess cardiovascular risk factors non-invasively and has been done since the 1980s. Obese adolescents with cardiovascular risk factors have higher CIMT [4]. The majority of studies of CIMT as a risk factor for cardiovascular disease have been performed in adults and in developed countries. In developing countries, studies of CIMT in obese adolescents are limited. The aim of this study is to analyze the difference in CIMT between obese adolescents with and without dyslipidemia and with and without hypertension based on gender. Methods: A cross-sectional study was done on obese adolescents aged 13-16 years old at the Pediatric Clinic of Dr. Soetomo General Hospital, Surabaya, Indonesia. Subjects who had a Body Mass Index higher than the 95th percentile based on the BMI percentile of the CDC curve according to age and sex were included in this study. Subjects who had consumed corticosteroids within 6 months before the study, underwent hormonal therapy or consumed dyslipidemia drugs within 3 months before the study, smoked, consumed alcohol, or had an endocrine disorder were excluded. Anthropometric measurements, including body weight and height, were done by trained health workers. Body weight was measured without footwear, accessories, and with clothes that weighed less than 0.1 kg using a digital scale (Seca, Germany). Body height was measured without footwear or headwear in an erect position using a stadiometer (Seca, Germany). Body mass index (BMI) was calculated with the formula of body weight (kg) divided by squared body height (meter). Obesity is defined as a BMI higher than the 95th percentile according to age and gender based on the CDC curve (2000). Blood pressure was measured in a sitting position after the subject had rested for 10 minutes. Hypertension is defined as blood pressure higher than the 95th percentile according to age and gender. CIMT measurement was done using high-resolution B-mode ultrasonography (Toshiba, Japan) by a cardiologist. Subjects were examined in a supine position, with the neck minimally extended and the probe placed in an anterolateral position. Imaging was done on the left common carotid artery. Lipid profile testing was done using the ELISA method. The triglyceride test was done using the Autosera S TG-N Kit (Sekisui Medical Co., Ltd., Japan). LDL, HDL, and total cholesterol tests were done using Cholestest®LDL, Cholestest®N HDL, and Pureauto®S CHO-N (Sekisui Medical Co., Ltd., Japan). Dyslipidemia is diagnosed when either an increase in cholesterol, LDL, or triglyceride or a decrease in HDL level is found, as recommended by the NCPE and the American Academy of Pediatrics. Statistical methods: quantitative variables are described as mean and standard deviation. CIMT differences based on dyslipidemia, hypertension, and gender were analyzed using the Wilcoxon Mann Whitney test with a significant p of < 0.05. Analysis was done using SPSS. This study was approved by the Ethical Committee in Health Research of Dr. 
Soetomo General Hospital (ref. No. 0698/KEPK/X/2018). Parents of the subjects provided informed consent before the study. All data obtained from the subjects were anonymized. Results: This study included 59 obese adolescents, consisting of 32 (54.2%) male adolescents and 27 (45.8%) female adolescents. Dyslipidemia was found in 38 (64.4%) adolescents and hypertension was found in 35 (59.3%) adolescents. Characteristics of the subjects are shown in Table 1. There was no difference in CIMT between female and male adolescents (mean = 0.51 ± 0.12 vs 0.51 ± 0.07; p = 0.50). There was also no difference in CIMT between obese adolescents with or without dyslipidemia (female p = 0.974; male p = 0.313) and with or without hypertension (female p = 0.321; male p = 0.833) based on gender (Table 2). Characteristics of the study's subjects. Correlation between variables. SD = standard deviation. Discussion: Obesity is related to inflammation as a result of imbalances between pro- and anti-inflammatory cytokines [5]. Inflammation in obesity is marked by an increase in TNF and hsCRP [6]. Storage of abdominal adipose tissue causes cell dysfunction and cardiometabolic diseases in adulthood [7, 8]. The higher the body fat percentage, the higher the risk of cardiovascular diseases [9]. The atherosclerosis process starts early among children and adolescents with obesity. CIMT is a subclinical atherosclerosis marker [2]. Obese adolescents have higher CIMT compared with adolescents with a normal BMI [10]. A previous study on healthy subjects showed no difference in CIMT between female and male adolescents, but CIMT tends to increase after 10 years of age [11]. A study on obese adolescents showed no difference in CIMT between males and females, but an association between CIMT and arterial stiffness was found, especially among female adolescents [12]. The results of the present study are in accordance with that study. When a child turns 10 years old, puberty starts, and hormonal changes could cause changes in body fat composition [13]. In this study, the majority of subjects had dyslipidemia and hypertension (Table 1). CIMT increases when dyslipidemia, hypertension, and diabetes mellitus are present [14]. Obese adolescents have a 6.5 times higher risk of pre-hypertension or hypertension [15]. Blood pressure has a direct effect on CIMT [4]. Hypertension causes hypertrophy of the tunica media of blood vessels, therefore increasing CIMT [16]. However, another study on children and adolescents showed that there is no difference in CIMT between subjects with obesity and those with metabolic syndrome [2]. This study did not find a difference in CIMT between subjects with and without hypertension based on gender. A previous study mentioned that CIMT is associated with age, but not blood pressure [10]. No difference in CIMT between subjects with and without dyslipidemia based on gender was found (Table 2). A previous study reported that obesity could affect CIMT if cardiovascular risk factors, such as dyslipidemia, are present [4]. High levels of triglyceride are associated with increased CIMT [2]. Gender does not affect the wall of the normal common carotid artery until the age of 18, when progressive thickening of the vessel wall begins [17]. This study had some limitations. First, the number of subjects was limited. Second, the low sensitivity of high-resolution B-mode ultrasonography could affect CIMT measurement and might not be able to detect small differences, as in a previous study [11]. 
Studies of CIMT in obese adolescents in developing countries remain limited. Early detection of cardiovascular disease risk in obese adolescents is needed. Further studies with greater numbers of subjects and control subjects with a normal BMI are needed to assess the risk of cardiovascular disease using CIMT in obese adolescents. Conclusion: No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender. Age below 18 may not affect CIMT, possibly because progressive thickening of the wall of the common carotid artery has not yet started. There was a high prevalence of dyslipidemia and hypertension in obese adolescents. Further studies with greater numbers of subjects and control subjects are needed to assess the risk of cardiovascular disease in obese adolescents. What is known about this topic: No difference in CIMT was found between obese adolescents based on gender; Obese adolescents can suffer from hypertension and dyslipidemia; Hypertension and dyslipidemia can influence the CIMT. What this study adds: No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender; Progressive thickening of the wall of the common carotid artery has not yet started below the age of 18, which may explain the absence of differences in CIMT; The prevalence of dyslipidemia and hypertension in obese adolescents is high in developing countries, such as Indonesia. Competing interests: The authors declare no competing interests.
Background: Obesity is a global health problem with growing prevalence in developing countries. Obesity causes chronic inflammation due to imbalances between pro- and anti-inflammatory cytokines. This causes metabolic complications such as dyslipidemia, hypertension, and cardiovascular disorders. Carotid intima-media thickness (CIMT) is a predictor of atherosclerosis which can be measured easily and non-invasively. Early detection of cardiovascular diseases in obese adolescents at risk is hoped to improve outcomes. Methods: This is a cross-sectional study on obese adolescents aged 13-16 years old at the Pediatric Clinic of Dr. Soetomo General Hospital. Obesity is defined as a Body Mass Index higher than the 95th percentile according to the CDC (2000). Dyslipidemia is diagnosed when either an increase in cholesterol, LDL, or triglyceride or a decrease in HDL level is found, as recommended by the NCPE and the American Academy of Pediatrics. Hypertension is defined as an increase of blood pressure > P95 according to age and gender. The differences in CIMT based on dyslipidemia, hypertension, and gender were analyzed with the Wilcoxon Mann Whitney test with a significant p value (p < 0.05). Results: This study included 59 obese adolescents, consisting of 32 (54.2%) male adolescents and 27 (45.8%) female adolescents. Dyslipidemia was found in 38 (64.4%) adolescents and hypertension was found in 35 (59.3%) adolescents. No difference in CIMT was found between obese adolescents with and without dyslipidemia and with and without hypertension based on gender (p > 0.05). Conclusions: No difference in CIMT based on gender was found among adolescents aged below 18. The high prevalence of dyslipidemia and hypertension in obese adolescents warrants early detection of cardiovascular complications.
Introduction: Obesity is a global problem which increases the risk of early death. Obesity causes chronic inflammation through an increase in the production of pro-inflammatory cytokines, which causes metabolic diseases [1]. Adolescents who have risk factors such as obesity, dyslipidemia, hypertension, and diabetes mellitus have a higher risk of cardiovascular diseases as adults [2]. Metabolic syndrome increases mortality risk up to 1.5 times [3]. The process of atherosclerosis starts early in obese children and adolescents. Carotid intima-media thickness (CIMT) is a subclinical marker of atherosclerosis [2]. Measurement of CIMT is a modality that can be used to assess cardiovascular risk factors non-invasively and has been performed since the 1980s. Obese adolescents with cardiovascular risk factors have higher CIMT [4]. The majority of studies on CIMT as a risk factor for cardiovascular disease have been performed in adults and in developed countries; in developing countries, studies on CIMT in obese adolescents are limited. The aim of this study is to analyze the difference in CIMT between obese adolescents with and without dyslipidemia and with and without hypertension based on gender. Conclusion: No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender. Age below 18 may not affect CIMT thickness because progressive thickening of the wall of the common carotid artery has not yet started. There was a high number of dyslipidemia and hypertension cases in obese adolescents. Further studies with a greater number of subjects and control subjects are needed to assess the risk of cardiovascular disease in obese adolescents. What is known about this topic: No difference in CIMT was found between obese adolescents based on gender; Obese adolescents can suffer from hypertension and dyslipidemia; Hypertension and dyslipidemia can influence CIMT. What this study adds: No difference in CIMT was found between obese adolescents with and without hypertension and with and without dyslipidemia based on gender; The absence of progressive thickening of the wall of the common carotid artery below the age of 18 can explain the lack of differences in CIMT; The prevalence of dyslipidemia and hypertension in obese adolescents is high in developing countries such as Indonesia.
Background: Obesity is a global health problem with growing prevalence in developing countries. Obesity causes chronic inflammation due to imbalances between pro- and anti-inflammatory cytokines. This causes metabolic complications such as dyslipidemia, hypertension, and cardiovascular disorders. Carotid intima-media thickness (CIMT) is a predictor of atherosclerosis which can be measured easily and non-invasively. Early detection of cardiovascular diseases in obese adolescents at risk is hoped to improve outcomes. Methods: This is a cross-sectional study of obese adolescents aged 13-16 years old at the Pediatric Clinic of Dr. Soetomo General Hospital. Obesity is defined as a body mass index higher than the 95th percentile according to the CDC (2000). Dyslipidemia is diagnosed when either an increase in cholesterol, LDL, or triglyceride or a decrease in HDL level is found, as recommended by the NCEP and the American Academy of Pediatrics. Hypertension is defined as an increase of blood pressure > P95 according to age and gender. Differences in CIMT based on dyslipidemia, hypertension, and gender were analyzed with the Wilcoxon-Mann-Whitney test, with significance set at p < 0.05. Results: This study included 59 obese adolescents, consisting of 32 (54.2%) male adolescents and 27 (45.8%) female adolescents. Dyslipidemia was found in 38 (64.4%) adolescents and hypertension was found in 35 (59.3%) adolescents. No difference in CIMT was found between obese adolescents with and without dyslipidemia and with and without hypertension based on gender (p > 0.05). Conclusions: No difference in CIMT based on gender was found among adolescents aged below 18. The high prevalence of dyslipidemia and hypertension in obese adolescents warrants early detection of cardiovascular complications.
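To make the group comparison in the methods above concrete, here is a minimal sketch (not the authors' code) of a two-sided Mann-Whitney U test on CIMT measurements split by dyslipidemia status, using SciPy; all values and group sizes are hypothetical placeholders.

```python
# Minimal sketch of the CIMT group comparison described above (hypothetical data).
from scipy.stats import mannwhitneyu

cimt_dyslipidemia = [0.48, 0.52, 0.50, 0.55, 0.49]      # hypothetical CIMT values (mm)
cimt_no_dyslipidemia = [0.47, 0.49, 0.51, 0.46, 0.50]   # hypothetical CIMT values (mm)

stat, p = mannwhitneyu(cimt_dyslipidemia, cimt_no_dyslipidemia,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # p > 0.05 means no detectable difference at the 5% level
```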
1,821
321
[ 32, 65 ]
8
[ "cimt", "adolescents", "obese", "study", "obese adolescents", "dyslipidemia", "hypertension", "gender", "difference", "based" ]
[ "dyslipidemia hypertension obese", "adolescents obesity cimt", "obese adolescents cardiovascular", "cimt subclinical atherosclerosis", "atherosclerosis measurement cimt" ]
[CONTENT] Obesity | adolescents | CIMT | dyslipidemia | hypertension [SUMMARY]
[CONTENT] Obesity | adolescents | CIMT | dyslipidemia | hypertension [SUMMARY]
[CONTENT] Obesity | adolescents | CIMT | dyslipidemia | hypertension [SUMMARY]
[CONTENT] Obesity | adolescents | CIMT | dyslipidemia | hypertension [SUMMARY]
[CONTENT] Obesity | adolescents | CIMT | dyslipidemia | hypertension [SUMMARY]
[CONTENT] Obesity | adolescents | CIMT | dyslipidemia | hypertension [SUMMARY]
[CONTENT] Adolescent | Carotid Intima-Media Thickness | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Hypertension | Indonesia | Male | Pediatric Obesity | Sex Factors [SUMMARY]
[CONTENT] Adolescent | Carotid Intima-Media Thickness | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Hypertension | Indonesia | Male | Pediatric Obesity | Sex Factors [SUMMARY]
[CONTENT] Adolescent | Carotid Intima-Media Thickness | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Hypertension | Indonesia | Male | Pediatric Obesity | Sex Factors [SUMMARY]
[CONTENT] Adolescent | Carotid Intima-Media Thickness | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Hypertension | Indonesia | Male | Pediatric Obesity | Sex Factors [SUMMARY]
[CONTENT] Adolescent | Carotid Intima-Media Thickness | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Hypertension | Indonesia | Male | Pediatric Obesity | Sex Factors [SUMMARY]
[CONTENT] Adolescent | Carotid Intima-Media Thickness | Cross-Sectional Studies | Dyslipidemias | Female | Humans | Hypertension | Indonesia | Male | Pediatric Obesity | Sex Factors [SUMMARY]
[CONTENT] dyslipidemia hypertension obese | adolescents obesity cimt | obese adolescents cardiovascular | cimt subclinical atherosclerosis | atherosclerosis measurement cimt [SUMMARY]
[CONTENT] dyslipidemia hypertension obese | adolescents obesity cimt | obese adolescents cardiovascular | cimt subclinical atherosclerosis | atherosclerosis measurement cimt [SUMMARY]
[CONTENT] dyslipidemia hypertension obese | adolescents obesity cimt | obese adolescents cardiovascular | cimt subclinical atherosclerosis | atherosclerosis measurement cimt [SUMMARY]
[CONTENT] dyslipidemia hypertension obese | adolescents obesity cimt | obese adolescents cardiovascular | cimt subclinical atherosclerosis | atherosclerosis measurement cimt [SUMMARY]
[CONTENT] dyslipidemia hypertension obese | adolescents obesity cimt | obese adolescents cardiovascular | cimt subclinical atherosclerosis | atherosclerosis measurement cimt [SUMMARY]
[CONTENT] dyslipidemia hypertension obese | adolescents obesity cimt | obese adolescents cardiovascular | cimt subclinical atherosclerosis | atherosclerosis measurement cimt [SUMMARY]
[CONTENT] cimt | adolescents | obese | study | obese adolescents | dyslipidemia | hypertension | gender | difference | based [SUMMARY]
[CONTENT] cimt | adolescents | obese | study | obese adolescents | dyslipidemia | hypertension | gender | difference | based [SUMMARY]
[CONTENT] cimt | adolescents | obese | study | obese adolescents | dyslipidemia | hypertension | gender | difference | based [SUMMARY]
[CONTENT] cimt | adolescents | obese | study | obese adolescents | dyslipidemia | hypertension | gender | difference | based [SUMMARY]
[CONTENT] cimt | adolescents | obese | study | obese adolescents | dyslipidemia | hypertension | gender | difference | based [SUMMARY]
[CONTENT] cimt | adolescents | obese | study | obese adolescents | dyslipidemia | hypertension | gender | difference | based [SUMMARY]
[CONTENT] risk | factors | risk factors | cardiovascular | cimt | adolescents | obesity | cardiovascular risk factors | obese | study [SUMMARY]
[CONTENT] body | position | study | height | measured | body weight | ldl | according age | according | japan [SUMMARY]
[CONTENT] male | female | adolescents | 59 | characteristics | 35 59 | 35 | 51 | table | adolescents dyslipidemia [SUMMARY]
[CONTENT] obese adolescents | obese | adolescents | cimt | dyslipidemia | found obese adolescents | difference cimt found | found obese | difference cimt found obese | hypertension dyslipidemia [SUMMARY]
[CONTENT] adolescents | cimt | obese | obese adolescents | dyslipidemia | hypertension | study | difference | risk | difference cimt [SUMMARY]
[CONTENT] adolescents | cimt | obese | obese adolescents | dyslipidemia | hypertension | study | difference | risk | difference cimt [SUMMARY]
[CONTENT] ||| ||| ||| ||| [SUMMARY]
[CONTENT] obese | 13-16 year old | Pediatric Clinic of Dr. Soetomo General Hospital ||| 95th | CDC | 2000 ||| HDL | NCPE | American Academy of Pediatrics ||| ||| P95 ||| Wilcoxon Mann Whitney [SUMMARY]
[CONTENT] 59 | 32 | 54.2% | 35 | 59.3% ||| 38 | 64.4% | 35 | 59.3% ||| obese | 0.05 [SUMMARY]
[CONTENT] below 18 ||| obese [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| obese | 13-16 year old | Pediatric Clinic of Dr. Soetomo General Hospital ||| 95th | CDC | 2000 ||| HDL | NCPE | American Academy of Pediatrics ||| ||| P95 ||| Wilcoxon Mann Whitney ||| ||| 59 | 32 | 54.2% | 35 | 59.3% ||| 38 | 64.4% | 35 | 59.3% ||| obese | 0.05 ||| below 18 ||| obese [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| obese | 13-16 year old | Pediatric Clinic of Dr. Soetomo General Hospital ||| 95th | CDC | 2000 ||| HDL | NCPE | American Academy of Pediatrics ||| ||| P95 ||| Wilcoxon Mann Whitney ||| ||| 59 | 32 | 54.2% | 35 | 59.3% ||| 38 | 64.4% | 35 | 59.3% ||| obese | 0.05 ||| below 18 ||| obese [SUMMARY]
An examination of domestic partner violence and its justification in the Republic of Georgia.
24180483
Little research on Intimate Partner Violence (IPV) and social perceptions toward this behavior has been disseminated from Eastern Europe. This study explores the prevalence and risk factors of IPV and the justification of this behavior among women in the Republic of Georgia. It seeks to better understand how IPV and IPV justification relate and how social justification of IPV differs across socio-economic measures among this population of women.
BACKGROUND
This study utilizes a national sample of ever-married women from the Republic of Georgia (N = 4,302). We describe the factors that predict IPV justification among these women and the relationship between the acceptability of IPV and victimization, overall and across socio-demographic factors.
METHODS
While the overall lifetime prevalence of IPV in this sample was relatively low (4%), these women were two to four times more likely to justify IPV compared with women who had no experience of being abused by a partner. Just under one-quarter of the sample agreed that IPV was justified in at least one scenario, namely when the wife was unfaithful. Georgian women who were poor, from a rural community, had lower education, were not working, and who experienced child abuse or IPV among their parents were more likely to justify this behavior.
RESULTS
These findings begin to fill a gap in our understanding of IPV experienced by women in Eastern Europe. In addition, these findings emphasize the need for researchers, practitioners and policy makers to contextualize IPV in terms of the justification of this behavior among the population being considered as this can play an important role in perpetration, victimization and response.
CONCLUSIONS
[ "Adolescent", "Adult", "Adult Survivors of Child Abuse", "Attitude", "Educational Status", "Employment", "Female", "Georgia (Republic)", "Humans", "Rural Population", "Socioeconomic Factors", "Spouse Abuse", "Women", "Young Adult" ]
3828390
Background
With little to no published research on intimate partner violence disseminated from Eastern Europe, this study focuses in particular on IPV and the social justification of this behavior among women from Georgia. Georgia has experienced a tremendous amount of social and economic turmoil in the past two decades following its independence from the Soviet Union in 1991. Upon gaining sovereignty, Georgia has undergone several armed conflicts, resulting in the splitting of the country and the establishment of the de facto separated regions of South Ossetia and Abkhazia. Disagreement over reforms resulted in violent conflicts and ethnic cleansing. Furthermore, socioeconomic changes during the intervening decades have created an environment conducive to increasing IPV. Prior to the upheaval Georgia experienced relative income equality, but after 1990 increased inflation drove the poverty rate up. Although the United Nations Commission on Human Rights adopted its Declaration on the Elimination of Violence against Women in 1993 [1], it was not until 2006 that Georgia instituted its first law on domestic violence and defined it to include acts of physical, psychological, economic and sexual violence between family members. Furthermore, domestic violence laws in Georgia are argued to be vague as they are enveloped into general criminal codes against violence, tending to ignore domestic violence as a unique condition and paying no regard to psychological violence [2]. Interestingly, the IPV data that have been reported show the prevalence in Georgia as being notably low -- fewer than 8% of women report ever experiencing any victimization [3,4]. For context, a multi-country study on women’s health and domestic violence in ten countries between the years of 2000 and 2003 conducted by the World Health Organization (WHO) obtained IPV prevalence rates ranging from 15%-71%, with only two countries with a prevalence of under 25% [5]. This lower IPV prevalence in Georgia seems to contradict research identifying a typically higher rate of IPV in countries highly impacted by conflict [6,7]. While there have been high levels of conflict experienced by the populace, Georgians are more often the victims than the oppressors. Yet on an individual level, gender power differentials have been featured in Georgia’s past, including bridal kidnappings where men would kidnap virgin women to rape them in order to keep them from marrying any man except for the offender [8]. Given these experiences, it is interesting to contextualize IPV in Georgia within a larger framework of social justification of this behavior. The examination of partner violence justification and its impact on IPV victimization has seen some growth internationally [5,9-13]. These studies have found that the rates of justification in many countries are quite high and can vary by the reason for the abuse (e.g. neglect of a child, infidelity). Furthermore, these studies found that women tended to approve of IPV at a greater rate than males, and factors reflecting lower socio-economic status resulted in typically higher acceptance of IPV. Drawing from a small but growing body of international research, conducted primarily over the last decade, a clearer understanding of the interaction between social justification of partner violence and its incidence has developed.
“Injunctive” social norms, or consensus about IPV being acceptable or not within a community [12,14], recognize that while partner violence often occurs in the privacy of one’s home, it is informed by the attitudes of the larger society. Furthermore, cultural differences in the incidence of partner violence have been argued to be a reflection of the attitudes shared by the group that govern interactions within each culture [15]. The conceptualization of the intersection between social norming and IPV is integrated within the earliest conceptualizations of this problem. Two of the most often cited theoretical frameworks applied to understand or distinguish types of partner violence are common couple violence theory and the theory of patriarchal terrorism; each conceptualization centers strongly on the role of social perceptions of violence and/or women [16,17]. The common couple violence theory posits that general social attitudes toward violence are central to producing a more violent society where IPV can exist [18]. While the core of this framework is that partner violence is gender symmetric, an argument that stems from the idea that the behaviors of both males and females reflect their larger culture’s endorsement of violence, these general attitudes about the acceptability or non-acceptability of violent behaviors can influence the justification of partner violence in subtle ways. A key element of the common couple violence framework includes the nature of what behaviors we define as violent; for example, lower-threshold violence may encompass more female perpetrators while acts of domestic homicide are primarily perpetrated by males. In one British study it was found that there was not full agreement as to what behaviors constituted partner violence, finding that 16% of urban participants did not feel slapping denoted partner violence and 5% did not feel getting punched was an act of partner violence [19]. Clearly, if a behavior is not seen as violent, it will be deemed more acceptable by those who experience, perpetrate and respond to it. The patriarchal terrorism theory points to social attitudes around the role of women in relation to men as the source of partner violence [20]. The role of patriarchy within a culture plays an intricate role in social perceptions of partner violence toward women as it can support attitudes that men are not responsible for their behaviors whereas women are to blame [21], that women’s behaviors are the triggers of violence by partners [22], and that women secretly desire this exertion of power [9]. These attitudes interchange with the level of tolerance an individual will feel toward IPV, which in turn can play an important role in influencing whether these violent acts are reported to a third party such as the police [23,24]. Examples of patriarchal hegemony in Georgia do exist. While the actual prevalence is not known, there was a time when bridal kidnapping, where single men would kidnap virgin women to rape them in order to keep them from marrying any man except for the offender, was not uncommon [8]. Yet, for over a decade, the rate of reported partner violence in Georgia has been consistently low [3,4]. Georgia provides an interesting case study as it conceptually represents a society like the U.S. and Europe of the past, in which women may be viewed as inferior, but families are often intact and multigenerational, and the influences of the western world are only recently emerging.
Within this context it is further relevant to understand the roles that socio-demographics such as age, geography, marital status, education, work status and wealth play in this dynamic, as we recognize these factors influence IPV consistently across the globe [25]. To further explore the relationship between IPV victimization and the acceptance of this behavior among Georgian women, our study takes advantage of the justification scenarios built into the most recent Women’s Reproductive Health Survey in Georgia (2010). To better understand the factors associated with the justification of partner violence in Georgia, this study seeks to answer the following questions: 1) What factors are associated with Georgian women justifying partner violence against women? and 2) To what extent do experiences with domestic violence predict IPV justification?
Methods
Dataset: This study is a secondary data analysis of the 2010 Women’s Reproductive Health Survey (RHS) in Georgia (GERHS10). Permission to use this data set came from the Georgian National Center for Disease Control (NCDC). The principal purpose of the original RHS is to examine factors related to pregnancy and fertility among Georgian women. Previous RHSs in Georgia took place in 1999 and 2005. Surveys are conducted by the Georgian Ministry of Labor, Health and Social Affairs (MoLHSA) in collaboration with the Georgian National Center for Disease Control (NCDC) Division of Reproductive Health, Centers for Disease Control and Prevention (DRH/CDC), the United Nations Population Fund (UNFPA), and the United States Agency for International Development (USAID). These surveys utilize large nationally representative probability samples of women aged 15–44 years. Women are interviewed in their homes. A total of 6,292 women were interviewed in 2010. For further details, refer to the RHS summary report [4]. Only those women who were ever married were asked about justification for IPV (i.e., is a man ever justified in beating his wife). Thus, the sample studied here was restricted to the ever-married women. In addition, with the focus on IPV justification, women were excluded if they answered “don’t know” or had a missing response for any of the justification scenarios, as these measures were dichotomized and aggregated (see below). We also excluded women who had a missing response to any of the past or current domestic violence experience questions to allow our raw totals for the prevalence estimates to be the same. These exclusions resulted in the loss of only 185 women (4% of the data) and resulted in a final sample of 4,301. This sample consisted of mostly currently married women between the ages of 24 and 44. This human subject research study was in compliance with the Helsinki Declaration and approved by the Institutional Review Boards (IRBs) of both the Georgian National Center for Disease Control and Public Health and the United States Centers for Disease Control and Prevention (CDC).
Measurements: Included demographic measures are those factors identified as typically associated with intimate partner violence risk through a World Health Organization (WHO) multi-country study [25]. These include: age (15–24, 25–34, 35–44); residence (rural, urban); marital status (not currently married, currently married); education (did not complete secondary, completed secondary only, completed university or technical college); and working status (yes, no). Wealth (low, middle, high) was measured as a composite index based on ownership of a television, automobile, refrigerator, videocassette recorder, cell phone, land-line phone, flush toilet, heating system, vacation home and having more than one room per household member. The experience of IPV was measured using the modified eight-item Conflict Tactics Scale [26], which includes acts of physical, psychological (e.g., insults and threats of harm) and sexual abuse of ever-married women during her lifetime and within the past year. This study focused on “prior to past year” and “past year” reports of physical violence only, which included being pushed, shoved, slapped, kicked, hit with the fist or an object, beaten up, and threatened with a knife or other weapon. While lifetime and past year partner violence are commonly explored as distinct outcomes, since these were both included in the same model as covariates, this produces issues with collinearity. As a result, those who experienced violence in the past year, with or without exposure before, were classified within “past year,” while those who reported lifetime abuse, but none in the past year, were classified within “prior to past year.” In addition, experience with domestic violence as a child (yes/no) was measured using two questions. The first asked the respondent if, as a child or adolescent, she had witnessed physical abuse between her parents; this first question was only asked of women who reported being raised in a household with both parents. The second asked the respondent if she experienced abuse from her parents as a child or adolescent. Partner violence justification was measured in the GERHS10 by assessing agreement (yes/no) to seven scenarios. Specifically, women were asked if they felt a husband would be justified in beating his wife if: 1) she goes out without telling him; 2) she neglects the children; 3) she argues with him; 4) she refuses to have sex with him; 5) he is not happy with her household work or food provisions; 6) she asks him whether he has other girlfriends; 7) he finds out that she has been unfaithful. For this study, each scenario is examined separately. In addition, responses are dichotomized to those who responded affirmatively to any one scenario compared with those who responded negatively to all. This “any justification” variable was created to allow for comparisons across other similar studies which utilize this grouped measure.
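As an illustration of the dichotomization just described, the sketch below (not taken from the survey's actual processing code; the field names are hypothetical) collapses the seven yes/no scenario items into the aggregated "any justification" indicator.

```python
# Minimal sketch of the "any justification" aggregation (hypothetical field names).
SCENARIOS = ["goes_out_without_telling", "neglects_children", "argues_with_him",
             "refuses_sex", "neglects_housework", "asks_about_girlfriends",
             "unfaithful"]

def any_justification(responses: dict) -> bool:
    """True if the respondent endorsed at least one of the seven scenarios."""
    return any(responses[s] for s in SCENARIOS)

respondent = dict.fromkeys(SCENARIOS, False)
respondent["unfaithful"] = True   # hypothetical answer pattern
print(any_justification(respondent))  # True
```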
Statistical analysis: Prevalences were weighted, adjusting for sample design. Weighted prevalences were calculated by Stata's “SVY: TABULATE” command, which employs the standard formula for weighted means [27].
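For reference, the standard weighted-mean formula referred to here computes the weighted prevalence from survey weights $w_i$ and binary outcome indicators $y_i \in \{0, 1\}$:

$$\hat{p} = \frac{\sum_{i=1}^{n} w_i \, y_i}{\sum_{i=1}^{n} w_i}$$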
Descriptive statistics, including frequencies and percentages, were computed and tested using categorical chi-square tests for statistical significance between measured factors and justification of IPV in any scenario, as well as between IPV victimization status and each specific IPV justification scenario. Prediction models were created for each scenario and for the combination of any scenario of IPV justification using survey log-binomial regression to estimate crude and adjusted prevalence ratios. For each adjusted model, all measured covariates were included except where noted in the table. Exclusions of covariates were only made when the models that included those variables failed to converge. Log-binomial regression was utilized because the odds ratio generated through logistic regression tends to overestimate the prevalence ratio for common outcomes [28].
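Since the authors used Stata, the following is only an illustrative sketch in Python (statsmodels) of the kind of log-binomial model described: a GLM with a binomial family and log link, whose exponentiated coefficients are prevalence ratios rather than odds ratios. The data and variable names are hypothetical. Note that log-binomial models can fail to converge, which matches the covariate exclusions noted in the tables.

```python
# Minimal sketch of a log-binomial model yielding prevalence ratios (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "justifies_ipv": [1, 0, 0, 1, 0, 1, 1, 0],  # hypothetical binary outcome
    "rural":         [1, 0, 1, 1, 0, 1, 0, 0],  # hypothetical binary covariate
})
X = sm.add_constant(df[["rural"]])
model = sm.GLM(df["justifies_ipv"], X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
result = model.fit()
print(np.exp(result.params))  # exp(coef) is a prevalence ratio, not an odds ratio
```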
Results
Demographics: The total sample of 4,302 ever-married women aged 15 to 44 years included primarily women over 25 (Table 1). There was a relatively equal distribution across wealth terciles, and just over half of the women reside in a rural area. While this sample includes only women who were ever married, a few (9%) were not currently married. Over half of the women had completed university or technical college; however, over three quarters were not working. Fewer than 9% reported ever witnessing IPV among their parents or experiencing abuse as a child. Just under 4% reported experiencing IPV victimization prior to the past year and close to 2% reported past year IPV victimization. Significant differences in the justification of IPV overall were apparent across each socio-demographic measure except for age and partner violence history. Sample descriptives: ever-married women of reproductive age in the country of Georgia. Note: Limited to women who were ever married as these were the only ones asked about IPV justification. The general sample lifetime prevalence is 7%. Overall, just over 19% of the Georgian women in this sample agreed that a husband is justified in beating his wife in at least one scenario. This was primarily driven by the scenario “when a wife is unfaithful,” which justified IPV according to 18.6% of the sample. The remaining scenarios were viewed as cause for IPV by five percent or less of the women, specifically and in order of decreasing prevalence: when a wife neglects her children (5.0%); when a wife argues back (3.3%); when a wife asks about other girlfriends (2.4%); when a wife goes out without her husband’s permission (1.8%); when a wife refuses sex (1.6%); and when a wife neglects housework (1.3%).
Predicting justification of intimate partner violence: Predictive models were examined to better understand the factors most associated with justification of IPV under at least one circumstance, as well as across specific circumstances, for Georgian women. Tables 2, 3 and 4 present both crude and adjusted results, as in many cases the adjusted models resulted in a factor showing no significance due to sparse cells. Table 2 focuses on the associations between socio-economic factors and the justification of IPV. It was found that women in the lowest household income tercile were two to twenty-four times more likely to justify IPV when compared with women of the highest income, depending on the scenario. In terms of education, the lower a woman’s education, the higher the likelihood that she would justify partner violence across each scenario. Lastly, women who were not currently working consistently showed a higher likelihood to justify IPV across each scenario when compared with women who were currently working. Socio-economic predictors of IPV justification. * Omitted from model as inclusion resulted in a failure to converge. Demographic predictors of IPV justification. * Omitted from model as inclusion resulted in a failure to converge. Associations between experiences with domestic violence and IPV justification. * Omitted from model as inclusion resulted in a failure to converge. Table 3 explores the relationships between the socio-demographic factors of geography, age and marital status and a Georgian woman’s likelihood to justify IPV. A consistent pattern emerges that women from rural communities have an increased (two to five times) likelihood to justify IPV, depending on the scenario. In terms of age, it appears that younger women (under 25) are more likely to justify IPV when a woman refuses sex when compared with women 25–34, but under no other circumstance. Alternatively, older women are more likely to justify IPV around issues of infidelity or going out without permission compared to the middle age group. As described above, only “ever married” women were asked questions regarding IPV justification in this survey. There are two circumstances where a currently married woman is significantly more likely to justify IPV compared with a woman who is divorced, separated or widowed: when a woman goes out without permission or is unfaithful. Table 4 explores the role of past and current exposure to domestic violence in a Georgian woman’s justification of IPV. Consistently, women who experienced violence in their homes while growing up, in the form of parental IPV or child abuse, show an increased likelihood of supporting IPV compared to those women who did not experience this violence. Furthermore, having been abused as a child is consistently a strong predictor of current justification of IPV among this sample. Similarly, those women who experienced IPV prior to the past year were more likely to justify this behavior across four of the seven scenarios. Obtaining significance for past year IPV was challenging given the low prevalence of this among the sample. However, the unadjusted estimates reveal that currently abused women show approximately a three-fold increased likelihood to justify IPV when a woman fails in her household duties, as compared to non-currently abused women.
Conclusions
The Republic of Georgia has experienced both recent and threatened conflicts, which conceivably would increase the level of IPV [6]; however, given that IPV justification is more likely historic, based largely on social norms, there is little reason to believe that justification of IPV is impacted by these conflicts. In Georgia, violence against women is historically unacceptable; with extended families often living together and strong family social support, it is difficult to keep this behavior secret. Perhaps this social disapproval counteracts the impact of living in a state of conflict. Lastly, as found in other countries [12], the most consistent predictors of IPV justification among this sample were lower socio-economic status, lower education, living in a rural area and having personal experiences with domestic violence. These findings again provide insight into the higher level of IPV victimization among these multi-marginal populations that lack a social barrier against the abuse of women by their intimate partners. It has been recognized that the justification of IPV directly influences both our ideas about who is to blame and our definition of the behaviors that identify this problem as a public health and a legal issue. Furthermore, partner violence justification can reveal important information about the perpetration, victimization and response to this type of violence in a specific region. As Georgian society moves toward a more Western model (teens and young women are becoming more sexually active before marriage, and with a growing economy it will be easier for young couples to live on their own), it is likely that risks will also shift, albeit slowly. Georgia may serve as a case example that as historic protections are loosened (extended families living together, women being sexually active only with their husbands), there will be a growing need to improve other types of protection, in this case education relative to the welfare of women. Thus, as Georgia becomes more “modern,” changing attitudes about justification for IPV may become more important to prevent increases in IPV among this population.
[ "Background", "Dataset", "Measurements", "Statistical analysis", "Demographics", "Predicting justification of intimate partner violence", "Consent", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information", "Pre-publication history" ]
[ "With little to no published research on intimate partner violence disseminated from Eastern Europe, this study pays particular focus on IPV and the social justification of this behavior among women from Georgia. Georgia has experienced a tremendous amount of social and economic turmoil in the past two decades following its independence from the Soviet Union in 1991. Upon gaining sovereignty, Georgia has undergone several armed conflicts resulting in splitting the country and establishing de-facto separated regions of South Ossetia and Abkhazia. Disagreement over reforms resulted in violent conflicts and ethnic cleansing. Furthermore, socioeconomic changes during the intervening decades have created an environment conducive to increasing IPV. Prior to the upheaval Georgia experienced relative income equality, but after 1990 increased inflation drove the poverty rate up. Although the United Nations Commission on Humans Rights adopted their Declaration on the Elimination of Violence against Women in 1993 [1], it was not until 2006 that Georgia instituted its first law on domestic violence and defined it to include acts of physical, psychological, economic and sexual violence between family members. Furthermore, domestic violence laws in Georgia are argued to be vague as they are enveloped into general criminal codes against violence, tending to ignore domestic violence as a unique condition and paying no regard to psychological violence [2]. Interestingly, the IPV data that have been reported show the prevalence in Georgia as being notably low -- fewer than 8% of women report ever experiencing any victimization [3,4]. For context, a multi-county study on women’s health and domestic violence in ten countries between the years of 2000 and 2003 conducted by the World Health Organization (WHO) obtained IPV prevalence rates ranging from 15%-71% with only two countries with a prevalence of under 25% [5]. This lower IPV prevalence in Georgia seems to contradict research identifying a typically higher rate of IPV in countries highly impacted by conflict [6,7].\nWhile there have been high levels of conflict experienced by the populace, Georgians are more often the victims than the oppressors. Yet on an individual level, gender power differentials have been featured in Georgia’s past including bridal kidnappings where men would kidnap virgin women to rape them in order to keep them from marrying any man except for the offender [8]. Given these experiences, it is interesting to contextualize IPV in Georgia within a larger framework of social justification of this behavior. The examination of partner violence justification and its impact on IPV victimization has seen some growth internationally [5,9-13]. These studies have found that the rate of justification in many countries are quite high and can vary by the reason for abuse (e.g. neglect of child, infidelity). Furthermore, these studies found that women tended to approve of IPV at a greater rate than males and factors reflecting lower socio-economic status resulted in typically higher acceptance of IPV.\nDrawing from a small but growing body of international research, conducted primarily over the last decade, a clearer understanding of the interaction between social justification of partner violence and its incidence has developed. 
“Injunctive” social norms, or consensus about IPV being acceptable or not within a community [12,14], recognize that while partner violence often occurs in the privacy of one’s home, it is informed by the attitudes of the larger society. Furthermore, cultural differences in the incidence of partner violence have been argued to be a reflection of the attitudes shared by the group that govern interactions within each culture [15]. The conceptualization of the intersection between social norming and IPV is integrated within the earliest conceptualizations of this problem. Two of the most often cited theoretical frameworks applied to understand or distinguish types of partner violence are common couple violence theory and the theory of patriarchal terrorism; each conceptualization centers strongly on the role of social perceptions of violence and/or women [16,17].\nThe common couple violence theory posits that general social attitudes toward violence are central to producing a more violent society where IPV can exist [18]. While the core of this framework is that partner violence is gender symmetric, an argument that stems from the idea that the behaviors of both males and females reflect their larger culture’s endorsement of violence, these general attitudes about the acceptability or non-acceptability of violent behaviors can influence the justification of partner violence in subtle ways. A key element of the common couple violence framework includes the nature of what behaviors we define as violent; for example, lower-threshold violence may encompass more female perpetrators while acts of domestic homicide are primarily perpetrated by males. In one British study it was found that there was not full agreement as to what behaviors constituted partner violence, finding that 16% of urban participants did not feel slapping denoted partner violence and 5% did not feel getting punched was an act of partner violence [19]. Clearly, if a behavior is not seen as violent, it will be deemed more acceptable by those who experience, perpetrate and respond to it.\nThe patriarchal terrorism theory points to social attitudes around the role of women in relation to men as the source of partner violence [20]. The role of patriarchy within a culture plays an intricate role in social perceptions of partner violence toward women as it can support attitudes that men are not responsible for their behaviors whereas women are to blame [21], that women’s behaviors are the triggers of violence by partners [22], and that women secretly desire this exertion of power [9]. These attitudes interchange with the level of tolerance an individual will feel toward IPV, which in turn can play an important role in influencing whether these violent acts are reported to a third party such as the police [23,24].\nExamples of patriarchal hegemony in Georgia do exist. While the actual prevalence is not known, there was a time when bridal kidnapping, where single men would kidnap virgin women to rape them in order to keep them from marrying any man except for the offender, was not uncommon [8]. Yet, for over a decade, the rate of reported partner violence in Georgia has been consistently low [3,4]. Georgia provides an interesting case study as it conceptually represents a society like the U.S. and Europe of the past, in which women may be viewed as inferior, but families are often intact and multigenerational, and the influences of the western world are only recently emerging.
Within this context it is further relevant to understand the roles that socio-demographics such as age, geography, marital status, education, work status and wealth play in this dynamic, as we recognize these factors influence IPV consistently across the globe [25].\nTo further explore the relationship between IPV victimization and the acceptance of this behavior among Georgian women, our study takes advantage of the justification scenarios built into the most recent Women’s Reproductive Health Survey in Georgia (2010). To better understand the factors associated with the justification of partner violence in Georgia, this study seeks to answer the following questions: 1) What factors are associated with Georgian women justifying partner violence against women? and 2) To what extent do experiences with domestic violence predict IPV justification?", "This study is a secondary data analysis of the 2010 Women’s Reproductive Health Survey (RHS) in Georgia (GERHS10). Permission to use this data set came from the Georgian National Center for Disease Control (NCDC). The principal purpose of the original RHS is to examine factors related to pregnancy and fertility among Georgian women. Previous RHSs in Georgia took place in 1999 and 2005. Surveys are conducted by the Georgian Ministry of Labor, Health and Social Affairs (MoLHSA) in collaboration with the Georgian National Center for Disease Control (NCDC) Division of Reproductive Health, Centers for Disease Control and Prevention (DRH/CDC), the United Nations Population Fund (UNFPA), and the United States Agency for International Development (USAID). These surveys utilize large nationally representative probability samples of women aged 15–44 years. Women are interviewed in their homes. A total of 6,292 women were interviewed in 2010. For further details, refer to the RHS summary report [4]. Only those women who were ever married were asked about justification for IPV (i.e., is a man ever justified in beating his wife). Thus, the sample studied here was restricted to the ever-married women. In addition, with the focus on IPV justification, women were excluded if they answered “don’t know” or had a missing response for any of the justification scenarios, as these measures were dichotomized and aggregated (see below). We also excluded women who had a missing response to any of the past or current domestic violence experience questions to allow our raw totals for the prevalence estimates to be the same. These exclusions resulted in the loss of only 185 women (4% of the data) and resulted in a final sample of 4,301. This sample consisted of mostly currently married women between the ages of 24 and 44. This human subject research study was in compliance with the Helsinki Declaration and approved by the Institutional Review Boards (IRBs) of both the Georgian National Center for Disease Control and Public Health and the United States Centers for Disease Control and Prevention (CDC).", "Included demographic measures are those factors identified as typically associated with intimate partner violence risk through a World Health Organization (WHO) multi-country study [25]. These include: age (15–24, 25–34, 35–44); residence (rural, urban); marital status (not currently married, currently married); education (did not complete secondary, completed secondary only, completed university or technical college); and working status (yes, no).
Wealth (low, middle, high) was measured as a composite index based on ownership of a television, automobile, refrigerator, videocassette recorder, cell phone, land-line phone, flush toilet, heating system, vacation home and having more than one room per household member.\nThe experience of IPV was measured using the modified eight-item Conflict Tactics Scale [26], which includes acts of physical, psychological (e.g., insults and threats of harm) and sexual abuse of ever-married women during her lifetime and within the past year. This study focused on “prior to past year” and “past year” reports of physical violence only, which included being pushed, shoved, slapped, kicked, hit with the fist or an object, beaten up, and threatened with a knife or other weapon. While lifetime and past year partner violence are commonly explored as distinct outcomes, since these were both included in the same model as covariates, this produces issues with collinearity. As a result, those who experienced violence in the past year, with or without exposure before, were classified within “past year,” while those who reported lifetime abuse, but none in the past year, were classified within “prior to past year.” In addition, experience with domestic violence as a child (yes/no) was measured using two questions. The first asked the respondent if, as a child or adolescent, she had witnessed physical abuse between her parents; this first question was only asked of women who reported being raised in a household with both parents. The second asked the respondent if she experienced abuse from her parents as a child or adolescent.\nPartner violence justification was measured in the GERHS10 by assessing agreement (yes/no) to seven scenarios. Specifically, women were asked if they felt a husband would be justified in beating his wife if: 1) she goes out without telling him; 2) she neglects the children; 3) she argues with him; 4) she refuses to have sex with him; 5) he is not happy with her household work or food provisions; 6) she asks him whether he has other girlfriends; 7) he finds out that she has been unfaithful. For this study, each scenario is examined separately. In addition, responses are dichotomized to those who responded affirmatively to any one scenario compared with those who responded negatively to all. This “any justification” variable was created to allow for comparisons across other similar studies which utilize this grouped measure.", "Prevalences were weighted, adjusting for sample design. Weighted prevalences were calculated by Stata's “SVY: TABULATE” command, which employs the standard formula for weighted means [27]. Descriptive statistics, including frequencies and percentages, were computed and tested using categorical chi-square tests for statistical significance between measured factors and justification of IPV in any scenario, as well as between IPV victimization status and each specific IPV justification scenario. Prediction models were created for each scenario and for the combination of any scenario of IPV justification using survey log-binomial regression to estimate crude and adjusted prevalence ratios. For each adjusted model, all measured covariates were included except where noted in the table. Exclusions of covariates were only made when the models that included those variables failed to converge.
"Prevalences were weighted, adjusting for the sample design. Weighted prevalences were calculated with Stata's “svy: tabulate” command, which employs the standard formula for weighted means [27]. Descriptive statistics, including frequencies and percentages, were computed and tested using categorical chi-square tests for statistical significance between measured factors and justification of IPV in any scenario, as well as between IPV victimization status and each specific IPV justification scenario. Prediction models were created for each scenario, and for the combination of any scenario of IPV justification, using survey log-binomial regression to estimate crude and adjusted prevalence ratios. For each adjusted model, all measured covariates were included except where noted in the table. Covariates were excluded only when the models that included them failed to converge. Log-binomial regression was utilized because the odds ratio generated through logistic regression tends to overestimate the prevalence ratio for common outcomes [28].", 
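For reference, the weighted prevalence referred to above is the design-weighted mean p̂ = Σ(w_i · y_i) / Σ(w_i) for outcome indicators y_i and sampling weights w_i. The sketch below illustrates, under simplifying assumptions, both this quantity and a log-binomial model whose exponentiated coefficients are prevalence ratios. It is not the study's code: the original analysis used Stata's svy-prefixed commands, which additionally account for the complex sample design (strata and clustering); here hypothetical variable names and integer pseudo-weights are used, and the design adjustment is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data under the stated assumptions: hypothetical variable names and
# integer pseudo-weights standing in for the survey design weights.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "any_justification": rng.binomial(1, 0.2, n),
    "low_wealth": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
    "weight": rng.integers(1, 4, n),
})

# Weighted prevalence: sum(w * y) / sum(w), the quantity `svy: tabulate` reports.
prev = np.average(df["any_justification"], weights=df["weight"])

# Log-binomial model: binomial family with a log link, so exp(beta) is a
# prevalence ratio rather than an odds ratio. These models can fail to
# converge, which is why some covariates were dropped from the adjusted models.
X = sm.add_constant(df[["low_wealth", "rural"]].astype(float))
fit = sm.GLM(df["any_justification"], X,
             family=sm.families.Binomial(link=sm.families.links.Log()),
             freq_weights=df["weight"].to_numpy()).fit()

print(f"weighted prevalence: {prev:.3f}")
print(np.exp(fit.params))  # crude prevalence ratios
```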
"The total sample of 4,302 ever-married women aged 15 to 44 years comprised primarily women over 25 (Table 1). There was a relatively equal distribution across wealth terciles, and just over half of the women resided in a rural area. While this sample includes only women who were ever married, a few (9%) were not currently married. Over half of the women had completed university or technical college; however, over three quarters were not working. Fewer than 9% reported ever witnessing IPV between their parents or experiencing abuse as a child. Just under 4% reported experiencing IPV victimization prior to the past year, and close to 2% reported past-year IPV victimization. Significant differences in the justification of IPV overall were apparent across each socio-demographic measure except for age and partner violence history.\nSample descriptives: ever-married women of reproductive age in the country of Georgia\nNote: Limited to women who were ever married, as these were the only ones asked about IPV justification. The general sample lifetime prevalence is 7%.\nOverall, just over 19% of the Georgian women in this sample agreed that a husband is justified in beating his wife in at least one scenario. This was primarily driven by the scenario “when a wife is unfaithful,” which was seen as justifying IPV by 18.6% of the sample. The remaining scenarios were viewed as cause for IPV by five percent or fewer of the women, specifically, in decreasing order of prevalence: when a wife neglects her children (5.0%); when a wife argues back (3.3%); when a wife asks about other girlfriends (2.4%); when a wife goes out without her husband’s permission (1.8%); when a wife refuses sex (1.6%); and when a wife neglects housework (1.3%).", "Predictive models were examined to better understand the factors most associated with justification of IPV under at least one circumstance, as well as under specific circumstances, for Georgian women. Tables 2, 3 and 4 present both crude and adjusted results because, in many cases, the adjusted models showed no significance for a factor due to sparse cells. Table 2 focuses on the associations between socio-economic factors and the justification of IPV. Women in the lowest household income tercile were two to twenty-four times more likely to justify IPV than women in the highest tercile, depending on the scenario. In terms of education, the lower a woman’s education, the higher the likelihood that she would justify partner violence across each scenario. Lastly, women who were not currently working consistently showed a higher likelihood of justifying IPV across each scenario when compared with women who were currently working.\nSocio-economic predictors of IPV justification\n* omitted from model as inclusion resulted in a failure to converge.\nDemographic predictors of IPV justification\n* omitted from model as inclusion resulted in a failure to converge.\nAssociations between experiences with domestic violence and IPV justification\n* omitted from model as inclusion resulted in a failure to converge.\nTable 3 explores the relationships between the socio-demographic factors of geography, age and marital status and a Georgian woman’s likelihood of justifying IPV. A consistent pattern emerges that women from rural communities have a two- to five-fold increased likelihood of justifying IPV, depending on the scenario. In terms of age, younger women (under 25) appear more likely to justify IPV when a woman refuses sex, compared with women 25–34, but under no other circumstance. Alternatively, older women are more likely to justify IPV around issues of infidelity or going out without permission, compared with the middle age group. As described above, only “ever married” women were asked questions regarding IPV justification in this survey. There are two circumstances in which a currently married woman is significantly more likely to justify IPV compared with a woman who is divorced, separated or widowed: when a woman goes out without permission or is unfaithful.\nTable 4 explores the role of past and current exposure to domestic violence in a Georgian woman’s justification of IPV. Consistently, women who experienced violence in their homes while growing up, in the form of parental IPV or child abuse, show an increased likelihood of supporting IPV compared with women who did not experience this violence. Furthermore, having been abused as a child is consistently a strong predictor of current justification of IPV in this sample. Similarly, women who experienced IPV prior to the past year were more likely to justify this behavior across four of the seven scenarios. Obtaining significance for past-year IPV was challenging given its low prevalence in the sample. However, the unadjusted estimates reveal that currently abused women show an approximately three-fold increased likelihood of justifying IPV when a woman fails in her household duties, as compared with non-currently-abused women.", "Written informed consent was obtained from the participants for the publication of this report and any accompanying images.", "DRH/CDC: Division of Reproductive Health, Centers for Disease Control and Prevention; GERHS10: Women’s Reproductive Health Survey in Georgia; IPV: Intimate Partner Violence; MoLHSA: Georgian Ministry of Labor, Health and Social Affairs; NCDC: Georgian National Center for Disease Control; RHS: Women’s Reproductive Health Survey; UNFPA: United Nations Population Fund; USAID: United States Agency for International Development; WHO: World Health Organization.", "The authors declare that they have no competing interests.", "EW contributed to the conceptualization, design, analysis and interpretation. MB contributed to the conceptualization, design, acquisition of data and interpretation. NA contributed to the acquisition of data and interpretation. SS contributed to the analysis and interpretation. LAM contributed to the conceptualization, design, analysis and interpretation. All authors contributed to drafting and revising the manuscript and gave final approval of this manuscript.", "Eve Waltermaurer and Maia Butsashvili are co-first authors.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6874/13/44/prepub\n" ]
[ "Background", "Methods", "Dataset", "Measurements", "Statistical analysis", "Results", "Demographics", "Predicting justification of intimate partner violence", "Discussion", "Conclusions", "Consent", "Abbreviations", "Competing interests", "Authors’ contributions", "Authors’ information", "Pre-publication history" ]
[ "With little to no published research on intimate partner violence disseminated from Eastern Europe, this study pays particular focus on IPV and the social justification of this behavior among women from Georgia. Georgia has experienced a tremendous amount of social and economic turmoil in the past two decades following its independence from the Soviet Union in 1991. Upon gaining sovereignty, Georgia has undergone several armed conflicts resulting in splitting the country and establishing de-facto separated regions of South Ossetia and Abkhazia. Disagreement over reforms resulted in violent conflicts and ethnic cleansing. Furthermore, socioeconomic changes during the intervening decades have created an environment conducive to increasing IPV. Prior to the upheaval Georgia experienced relative income equality, but after 1990 increased inflation drove the poverty rate up. Although the United Nations Commission on Humans Rights adopted their Declaration on the Elimination of Violence against Women in 1993 [1], it was not until 2006 that Georgia instituted its first law on domestic violence and defined it to include acts of physical, psychological, economic and sexual violence between family members. Furthermore, domestic violence laws in Georgia are argued to be vague as they are enveloped into general criminal codes against violence, tending to ignore domestic violence as a unique condition and paying no regard to psychological violence [2]. Interestingly, the IPV data that have been reported show the prevalence in Georgia as being notably low -- fewer than 8% of women report ever experiencing any victimization [3,4]. For context, a multi-county study on women’s health and domestic violence in ten countries between the years of 2000 and 2003 conducted by the World Health Organization (WHO) obtained IPV prevalence rates ranging from 15%-71% with only two countries with a prevalence of under 25% [5]. This lower IPV prevalence in Georgia seems to contradict research identifying a typically higher rate of IPV in countries highly impacted by conflict [6,7].\nWhile there have been high levels of conflict experienced by the populace, Georgians are more often the victims than the oppressors. Yet on an individual level, gender power differentials have been featured in Georgia’s past including bridal kidnappings where men would kidnap virgin women to rape them in order to keep them from marrying any man except for the offender [8]. Given these experiences, it is interesting to contextualize IPV in Georgia within a larger framework of social justification of this behavior. The examination of partner violence justification and its impact on IPV victimization has seen some growth internationally [5,9-13]. These studies have found that the rate of justification in many countries are quite high and can vary by the reason for abuse (e.g. neglect of child, infidelity). Furthermore, these studies found that women tended to approve of IPV at a greater rate than males and factors reflecting lower socio-economic status resulted in typically higher acceptance of IPV.\nDrawing from a small but growing body of international research, conducted primarily over the last decade, a clearer understanding of the interaction between social justification of partner violence and its incidence has developed. 
“Injunctive” social norms, or consensus about IPV being acceptable or not within a community [12,14] recognizes that while partner violence often occurs in the privacy of one’s home, it is informed by the attitudes of the larger society. Furthermore, cultural differences in the incidence of partner violence has been argued as a reflection the attitudes shared by the group that governs interactions within each culture [15]. The conceptualization of the intersection between social norming and IPV is integrated within the earliest conceptualizations of this problem. Two of the most often cited theoretical frameworks applied to understand or distinguish types of partner violence are common couple violence theory and the theory of patriarchal terrorism; each conceptualization centers strongly on the role of social perceptions of violence and/or women [16,17].\nThe common couple violence theory posits that general social attitudes toward violence are central to producing a more violent society where IPV can exist [18]. While the core of this framework is on partner violence being gender symmetric, an argument that stems from the idea that behaviors of both males and females reflect that of their larger culture’s endorsement of violence, these general attitudes about the acceptability or non-acceptability of violent behaviors can influence the justification of partner violence in subtle ways. A key element of the common couple violence framework includes the nature of what behaviors we define as violent, for example lower threshold violence may encompasses greater female perpetrators while acts of domestic homicide primarily are perpetrated by males. In one British study it was found that there was not full agreement as to what behaviors constituted partner violence finding that 16% of urban participants did not feel slapping denoted partner violence and 5% did not feel getting punched was an act of partner violence [19]. Clearly if a behavior is not seen as violent, it will be deemed more acceptable by those who experience, perpetrate and respond to it.\nThe patriarchal terrorism theory points to social attitudes around the role of women in relation to men as the source of partner violence [20]. The role of patriarchy within a culture plays an intricate role in social perceptions of partner violence toward women as it can support attitudes that men are not responsible for their behaviors whereas women are to blame [21], women’s behaviors are the triggers of violence by partners [22], and that women secretly desire this exertion of power [9]. These attitudes interchange with the level of tolerance an individual will feel toward IPV which in turn can play an important role in influencing whether these violent acts are reported to a third party such as the police [23,24].\nExamples of patriarchal hegemony in Georgia do exist. While the actual prevalence is not known, there was a time when bridal kidnapping, where single men would kidnap virgin women to rape them in order to keep them from marrying any man except for the offender were not uncommon [8]. Yet, for over a decade, the rate of reported partner violence is Georgia is consistently low [3,4]. Georgia provides an interesting case study as it conceptually represents a society like the U.S and Europe of the past in which women may be viewed as inferior, but families are often intact and multigenerational, and the influences of the western world are only recently emerging. 
Within this context, it is further relevant to understand the role that socio-demographics such as age, geography, marital status, education, work status and wealth play in this dynamic, as we recognize that these factors influence IPV consistently across the globe [25].\nTo further explore the relationship between IPV victimization and the acceptance of this behavior among Georgian women, our study takes advantage of the justification scenarios built into the most recent Women’s Reproductive Health Survey in Georgia (2010). To better understand the factors associated with the justification of partner violence in Georgia, this study seeks to answer the following questions: 1) What factors are associated with Georgian women justifying partner violence against women? and 2) To what extent do experiences with domestic violence predict IPV justification?", " Dataset This study is a secondary data analysis of the 2010 Women’s Reproductive Health Survey (RHS) in Georgia (GERHS10). Permission to use this data set came from the Georgian National Center for Disease Control (NCDC). The principal purpose of the original RHS is to examine factors related to pregnancy and fertility among Georgian women. Previous RHSs in Georgia took place in 1999 and 2005. Surveys are conducted by the Georgian Ministry of Labor, Health and Social Affairs (MoLHSA) in collaboration with the Georgian National Center for Disease Control (NCDC), the Division of Reproductive Health of the Centers for Disease Control and Prevention (DRH/CDC), the United Nations Population Fund (UNFPA), and the United States Agency for International Development (USAID). These surveys utilize large nationally representative probability samples of women aged 15–44 years. Women are interviewed in their homes. A total of 6,292 women were interviewed in 2010. For further details, refer to the RHS summary report [4]. Only those women who were ever married were asked about justification of IPV (i.e., whether a man is ever justified in beating his wife). Thus, the sample studied here was restricted to ever-married women. In addition, given the focus on IPV justification, women were excluded if they answered “don’t know” or had a missing response for any of the justification scenarios, as these measures were dichotomized and aggregated (see below). We also excluded women who had a missing response to any of the past or current domestic violence experience questions, so that the raw totals for the prevalence estimates would be the same. These exclusions resulted in the loss of only 185 women (4% of the data) and in a final sample of 4,301. This sample consisted mostly of currently married women between the ages of 24 and 44. This human subject research study was in compliance with the Helsinki Declaration and was approved by the Institutional Review Boards (IRBs) of both the Georgian National Center for Disease Control and Public Health and the United States Centers for Disease Control and Prevention (CDC).
\n Measurements Demographic measures included are those factors identified as typically associated with intimate partner violence risk in a World Health Organization (WHO) multi-country study [25]. These include: age (15–24, 25–34, 35–44); residence (rural, urban); marital status (not currently married, currently married); education (did not complete secondary, completed secondary only, completed university or technical college); and working status (yes, no). Wealth (low, middle, high) was measured as a composite index based on ownership of a television, automobile, refrigerator, videocassette recorder, cell phone, land-line phone, flush toilet, heating system and vacation home, and on having more than one room per household member.\nThe experience of IPV was measured using the modified eight-item Conflict Tactics Scale [26], which includes acts of physical, psychological (e.g., insults and threats of harm) and sexual abuse of ever-married women during their lifetime and within the past year. This study focused on “prior to past year” and “past year” reports of physical violence only, which included being pushed, shoved, slapped, kicked, hit with the fist or an object, beaten up, and threatened with a knife or other weapon. While lifetime and past-year partner violence are commonly explored as distinct outcomes, including both in the same model as covariates produces issues with collinearity. As a result, those who experienced violence in the past year, with or without earlier exposure, were classified within “past year,” while those who reported lifetime abuse but none in the past year were classified within “prior to past year.” In addition, experience with domestic violence as a child (yes/no) was measured using two questions. The first asked the respondent if, as a child or adolescent, she had witnessed physical abuse between her parents; this question was asked only of women who reported being raised in a household with both parents. The second asked the respondent if she had experienced abuse from her parents as a child or adolescent.\nPartner violence justification was measured in the GERHS10 by assessing agreement (yes/no) with seven scenarios. Specifically, women were asked if they felt a husband would be justified in beating his wife if: 1) she goes out without telling him; 2) she neglects the children; 3) she argues with him; 4) she refuses to have sex with him; 5) he is not happy with her household work or food provisions; 6) she asks him whether he has other girlfriends; 7) he finds out that she has been unfaithful. For this study, each scenario is examined separately. In addition, responses are dichotomized into those who responded affirmatively to any one scenario versus those who responded negatively to all. This “any justification” variable was created to allow for comparisons with other similar studies which utilize this grouped measure.
\n Statistical analysis Prevalences were weighted, adjusting for the sample design. Weighted prevalences were calculated with Stata's “svy: tabulate” command, which employs the standard formula for weighted means [27]. Descriptive statistics, including frequencies and percentages, were computed and tested using categorical chi-square tests for statistical significance between measured factors and justification of IPV in any scenario, as well as between IPV victimization status and each specific IPV justification scenario. Prediction models were created for each scenario, and for the combination of any scenario of IPV justification, using survey log-binomial regression to estimate crude and adjusted prevalence ratios. For each adjusted model, all measured covariates were included except where noted in the table. Covariates were excluded only when the models that included them failed to converge. Log-binomial regression was utilized because the odds ratio generated through logistic regression tends to overestimate the prevalence ratio for common outcomes [28].", " Demographics The total sample of 4,302 ever-married women aged 15 to 44 years comprised primarily women over 25 (Table 1). There was a relatively equal distribution across wealth terciles, and just over half of the women resided in a rural area. While this sample includes only women who were ever married, a few (9%) were not currently married. Over half of the women had completed university or technical college; however, over three quarters were not working. Fewer than 9% reported ever witnessing IPV between their parents or experiencing abuse as a child. Just under 4% reported experiencing IPV victimization prior to the past year, and close to 2% reported past-year IPV victimization. Significant differences in the justification of IPV overall were apparent across each socio-demographic measure except for age and partner violence history.\nSample descriptives: ever-married women of reproductive age in the country of Georgia\nNote: Limited to women who were ever married, as these were the only ones asked about IPV justification. The general sample lifetime prevalence is 7%.\nOverall, just over 19% of the Georgian women in this sample agreed that a husband is justified in beating his wife in at least one scenario. This was primarily driven by the scenario “when a wife is unfaithful,” which was seen as justifying IPV by 18.6% of the sample. The remaining scenarios were viewed as cause for IPV by five percent or fewer of the women, specifically, in decreasing order of prevalence: when a wife neglects her children (5.0%); when a wife argues back (3.3%); when a wife asks about other girlfriends (2.4%); when a wife goes out without her husband’s permission (1.8%); when a wife refuses sex (1.6%); and when a wife neglects housework (1.3%).
\n Predicting justification of intimate partner violence Predictive models were examined to better understand the factors most associated with justification of IPV under at least one circumstance, as well as under specific circumstances, for Georgian women. Tables 2, 3 and 4 present both crude and adjusted results because, in many cases, the adjusted models showed no significance for a factor due to sparse cells. Table 2 focuses on the associations between socio-economic factors and the justification of IPV. Women in the lowest household income tercile were two to twenty-four times more likely to justify IPV than women in the highest tercile, depending on the scenario. In terms of education, the lower a woman’s education, the higher the likelihood that she would justify partner violence across each scenario. Lastly, women who were not currently working consistently showed a higher likelihood of justifying IPV across each scenario when compared with women who were currently working.\nSocio-economic predictors of IPV justification\n* omitted from model as inclusion resulted in a failure to converge.\nDemographic predictors of IPV justification\n* omitted from model as inclusion resulted in a failure to converge.\nAssociations between experiences with domestic violence and IPV justification\n* omitted from model as inclusion resulted in a failure to converge.\nTable 3 explores the relationships between the socio-demographic factors of geography, age and marital status and a Georgian woman’s likelihood of justifying IPV. A consistent pattern emerges that women from rural communities have a two- to five-fold increased likelihood of justifying IPV, depending on the scenario. In terms of age, younger women (under 25) appear more likely to justify IPV when a woman refuses sex, compared with women 25–34, but under no other circumstance. Alternatively, older women are more likely to justify IPV around issues of infidelity or going out without permission, compared with the middle age group. As described above, only “ever married” women were asked questions regarding IPV justification in this survey. There are two circumstances in which a currently married woman is significantly more likely to justify IPV compared with a woman who is divorced, separated or widowed: when a woman goes out without permission or is unfaithful.\nTable 4 explores the role of past and current exposure to domestic violence in a Georgian woman’s justification of IPV. Consistently, women who experienced violence in their homes while growing up, in the form of parental IPV or child abuse, show an increased likelihood of supporting IPV compared with women who did not experience this violence. Furthermore, having been abused as a child is consistently a strong predictor of current justification of IPV in this sample. Similarly, women who experienced IPV prior to the past year were more likely to justify this behavior across four of the seven scenarios. Obtaining significance for past-year IPV was challenging given its low prevalence in the sample. However, the unadjusted estimates reveal that currently abused women show an approximately three-fold increased likelihood of justifying IPV when a woman fails in her household duties, as compared with non-currently-abused women.", 
"Georgian women report comparatively lower rates of IPV and lower justification of this behavior. For example, looking at the closest bordering countries where data were available, in the second most common scenario in which IPV was justified, neglect of a child, fewer than 5% of Georgian women felt this scenario justified IPV, as compared with 26% of the women in Kazakhstan and 27% of the women in Armenia [29]. However, as found elsewhere, these two factors, victimization and justification of this behavior, are strongly associated in this population. This association may lend insight into the lower levels of IPV found in Georgia; the lower tolerance of this behavior may act as a social deterrent even without strong legislation.\nAs noted in other countries [26,30,31], Georgian women who were poor, from rural areas, had lower education and were not working had the greatest likelihood of justifying partner violence. Prior research restricted to males found conflicting results in terms of the association between IPV justification and educational levels [32]. However, in that study the two opposing samples also differed by age, marital status, education and, potentially, the content and character of the educational system and the endorsement of patriarchal ideals. These findings support the WHO results and others identifying that lower socio-economic status influences social perceptions toward partner violence. Lastly, while it was difficult to ascertain the impact of current IPV on justification given the small number of recently victimized women in this sample, experiences of violence, whether as a partner, as a child or as a witness to IPV among parents, increased the likelihood that a Georgian woman would justify this behavior. While past research has identified that increased justification is associated with increased risk of IPV [30], contextualizing the influence of abuse felt or witnessed during childhood further illuminates the influence of experienced violence on perceptions of acceptability.\nIn this cross-sectional survey the temporal order of IPV victimization and IPV justification could not be established. However, conceptually these two factors likely have a dynamic relationship, with each impacting the other. As an analysis of secondary data, this study was limited to crude measures of physical violence with no contextual factors. Furthermore, the original researchers excluded never-married women from the IPV justification questions. A small body of research has found that younger women are up to twice as likely to justify IPV compared to older women [13,33-37]. Whereas primarily the younger women fall into the “never married” category, a comparative analysis is not possible here. Another methodological issue is that only those women who reported being raised in a household with both parents were asked if they witnessed IPV among their parents growing up. While in the US a high degree of IPV among divorced and separated parents is recognized, in Georgia this familial experience (with or without violence) is rare. For our sample, with only 24 women overall skipped (less than 1%), there is little reason to expect this skip to have biased our findings. Lastly, as a multi-country study, situational measures for justification were predefined to allow international comparisons by the WHO. It is possible that scenarios developed specifically for this population would have revealed the behaviors that show stronger justification of abuse. Further research on this relationship applying culturally specific measures is warranted.", "The Republic of Georgia has experienced both recent and threatened conflicts, which conceivably would increase the level of IPV [6]; however, given that IPV justification is more likely historic, based largely on social norms, there is little reason to believe that justification of IPV is impacted by these conflicts. In Georgia, violence against women is historically unacceptable; with extended families often living together and strong family social support, it is difficult to keep this behavior secret. Perhaps this social disapproval counteracts the impact of living in a state of conflict. Lastly, as found in other countries [12], the most consistent predictors of IPV justification among this sample were lower socio-economic status, lower education, living in a rural area and having personal experiences with domestic violence. These findings again provide insight into the higher level of IPV victimization among these multi-marginal populations that lack a social barrier against the abuse of women by their intimate partners.\nIt has been recognized that the justification of IPV directly influences both our ideas about who is to blame and our definition of the behaviors that identify this problem as a public health and a legal issue. Furthermore, partner violence justification can reveal important information about the perpetration, victimization and response to this type of violence in a specific region. As Georgian society moves toward a more Western one (teens and young women are becoming more sexually active before marriage, and with a growing economy it will be easier for young couples to live on their own), it is likely that risks will also shift, albeit slowly. Georgia may serve as a case example that, as historic protections are loosened (extended families living together, women being sexually active only with their husbands), there will be a growing need to improve other types of protection, in this case education relative to the welfare of women. Thus, as Georgia becomes more “modern,” changing attitudes about justification for IPV may become more important to prevent increases in IPV among this population.", "Written informed consent was obtained from the participants for the publication of this report and any accompanying images.", "DRH/CDC: Division of Reproductive Health, Centers for Disease Control and Prevention; GERHS10: Women’s Reproductive Health Survey in Georgia; IPV: Intimate Partner Violence; MoLHSA: Georgian Ministry of Labor, Health and Social Affairs; NCDC: Georgian National Center for Disease Control; RHS: Women’s Reproductive Health Survey; UNFPA: United Nations Population Fund; USAID: United States Agency for International Development; WHO: World Health Organization.", "The authors declare that they have no competing interests.", "EW contributed to the conceptualization, design, analysis and interpretation. MB contributed to the conceptualization, design, acquisition of data and interpretation. NA contributed to the acquisition of data and interpretation. SS contributed to the analysis and interpretation. LAM contributed to the conceptualization, design, analysis and interpretation. All authors contributed to drafting and revising the manuscript and gave final approval of this manuscript.", "Eve Waltermaurer and Maia Butsashvili are co-first authors.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6874/13/44/prepub\n" ]
[ null, "methods", null, null, null, "results", null, null, "discussion", "conclusions", null, null, null, null, null, null ]
[ "Partner violence", "Social norms", "Women", "Gender", "Republic of Georgia" ]
Background: With little to no published research on intimate partner violence disseminated from Eastern Europe, this study pays particular focus on IPV and the social justification of this behavior among women from Georgia. Georgia has experienced a tremendous amount of social and economic turmoil in the past two decades following its independence from the Soviet Union in 1991. Upon gaining sovereignty, Georgia has undergone several armed conflicts resulting in splitting the country and establishing de-facto separated regions of South Ossetia and Abkhazia. Disagreement over reforms resulted in violent conflicts and ethnic cleansing. Furthermore, socioeconomic changes during the intervening decades have created an environment conducive to increasing IPV. Prior to the upheaval Georgia experienced relative income equality, but after 1990 increased inflation drove the poverty rate up. Although the United Nations Commission on Humans Rights adopted their Declaration on the Elimination of Violence against Women in 1993 [1], it was not until 2006 that Georgia instituted its first law on domestic violence and defined it to include acts of physical, psychological, economic and sexual violence between family members. Furthermore, domestic violence laws in Georgia are argued to be vague as they are enveloped into general criminal codes against violence, tending to ignore domestic violence as a unique condition and paying no regard to psychological violence [2]. Interestingly, the IPV data that have been reported show the prevalence in Georgia as being notably low -- fewer than 8% of women report ever experiencing any victimization [3,4]. For context, a multi-county study on women’s health and domestic violence in ten countries between the years of 2000 and 2003 conducted by the World Health Organization (WHO) obtained IPV prevalence rates ranging from 15%-71% with only two countries with a prevalence of under 25% [5]. This lower IPV prevalence in Georgia seems to contradict research identifying a typically higher rate of IPV in countries highly impacted by conflict [6,7]. While there have been high levels of conflict experienced by the populace, Georgians are more often the victims than the oppressors. Yet on an individual level, gender power differentials have been featured in Georgia’s past including bridal kidnappings where men would kidnap virgin women to rape them in order to keep them from marrying any man except for the offender [8]. Given these experiences, it is interesting to contextualize IPV in Georgia within a larger framework of social justification of this behavior. The examination of partner violence justification and its impact on IPV victimization has seen some growth internationally [5,9-13]. These studies have found that the rate of justification in many countries are quite high and can vary by the reason for abuse (e.g. neglect of child, infidelity). Furthermore, these studies found that women tended to approve of IPV at a greater rate than males and factors reflecting lower socio-economic status resulted in typically higher acceptance of IPV. Drawing from a small but growing body of international research, conducted primarily over the last decade, a clearer understanding of the interaction between social justification of partner violence and its incidence has developed. 
“Injunctive” social norms, or the consensus within a community about whether IPV is acceptable [12,14], recognize that while partner violence often occurs in the privacy of one’s home, it is informed by the attitudes of the larger society. Furthermore, cultural differences in the incidence of partner violence have been argued to reflect the attitudes, shared by the group, that govern interactions within each culture [15]. The conceptualization of the intersection between social norming and IPV is integrated within the earliest conceptualizations of this problem. Two of the most often cited theoretical frameworks applied to understand or distinguish types of partner violence are common couple violence theory and the theory of patriarchal terrorism; each conceptualization centers strongly on the role of social perceptions of violence and/or women [16,17]. The common couple violence theory posits that general social attitudes toward violence are central to producing a more violent society where IPV can exist [18]. While the core of this framework is the claim that partner violence is gender symmetric, an argument that stems from the idea that the behaviors of both males and females reflect their larger culture’s endorsement of violence, these general attitudes about the acceptability or non-acceptability of violent behaviors can influence the justification of partner violence in subtle ways. A key element of the common couple violence framework is the nature of what behaviors we define as violent; for example, lower-threshold violence may encompass more female perpetrators, while acts of domestic homicide are primarily perpetrated by males. One British study found that there was not full agreement as to which behaviors constituted partner violence: 16% of urban participants did not feel slapping denoted partner violence and 5% did not feel getting punched was an act of partner violence [19]. Clearly, if a behavior is not seen as violent, it will be deemed more acceptable by those who experience, perpetrate and respond to it. The patriarchal terrorism theory points to social attitudes around the role of women in relation to men as the source of partner violence [20]. The role of patriarchy within a culture plays an intricate part in social perceptions of partner violence toward women, as it can support attitudes that men are not responsible for their behaviors whereas women are to blame [21], that women’s behaviors are the triggers of violence by partners [22], and that women secretly desire this exertion of power [9]. These attitudes interact with the level of tolerance an individual will feel toward IPV, which in turn can play an important role in influencing whether these violent acts are reported to a third party such as the police [23,24]. Examples of patriarchal hegemony in Georgia do exist. While the actual prevalence is not known, there was a time when bridal kidnappings, in which single men would kidnap virgin women and rape them in order to keep them from marrying any man except the offender, were not uncommon [8]. Yet, for over a decade, the rate of reported partner violence in Georgia has been consistently low [3,4]. Georgia provides an interesting case study as it conceptually resembles the U.S. and Europe of the past, in which women may be viewed as inferior, but families are often intact and multigenerational, and the influences of the Western world are only recently emerging. 
Within this context it is further relevant to understand the role that socio-demographics such as age, geography, marital status, education, work status and wealth play in this dynamic, as we recognize these factors influence IPV consistently across the globe [25]. To further explore the relationship between IPV victimization and the acceptance of this behavior among Georgian women, our study takes advantage of the justification scenarios built into the most recent Women’s Reproductive Health Survey in Georgia (2010). To better understand the factors associated with the justification of partner violence in Georgia, this study seeks to answer the following questions: 1) What factors are associated with Georgian women justifying partner violence against women? and 2) To what extent do experiences with domestic violence predict IPV justification? Methods: Dataset This study is a secondary data analysis of the 2010 Women’s Reproductive Health Survey (RHS) in Georgia (GERHS10). Permission to use this data set came from the Georgian National Center for Disease Control (NCDC). The principal purpose of the original RHS is to examine factors related to pregnancy and fertility among Georgian women. Previous RHSs in Georgia took place in 1999 and 2005. Surveys are conducted by the Georgian Ministry of Labor, Health and Social Affairs (MoLHSA) in collaboration with the Georgian National Center for Disease Control (NCDC), the Division of Reproductive Health, Centers for Disease Control and Prevention (DRH/CDC), the United Nations Population Fund (UNFPA), and the United States Agency for International Development (USAID). These surveys utilize large nationally representative probability samples of women aged 15–44 years. Women are interviewed in their homes. A total of 6,292 women were interviewed in 2010. For further details, refer to the RHS summary report [4]. Only those women who were ever married were asked about justification for IPV (i.e., whether a man is ever justified in beating his wife). Thus, the sample studied here was restricted to ever-married women. In addition, given the focus on IPV justification, women were excluded if they answered “don’t know” or had a missing response for any of the justification scenarios, as these measures were dichotomized and aggregated (see below). We also excluded women who had a missing response to any of the past or current domestic violence experience questions, to allow our raw totals for the prevalence estimates to be the same. These exclusions resulted in the loss of only 185 women (4% of the data) and a final sample of 4,301. This sample consisted mostly of currently married women between the ages of 24 and 44. This human subject research study was in compliance with the Helsinki Declaration and was approved by the Institutional Review Boards (IRBs) of both the Georgian National Center for Disease Control and Public Health and the United States Centers for Disease Control and Prevention (CDC).
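To make the sample restrictions concrete, the following Stata sketch mirrors the exclusion steps described above. The file name (gerhs10_women.dta), all variable names (evermar, justify1-justify7, ipv_lifetime, ipv_pastyear) and the “don’t know” code of 8 are hypothetical placeholders, not the actual GERHS10 names or codes.

    * Hypothetical sketch of the analytic-sample restrictions
    use gerhs10_women.dta, clear
    keep if evermar == 1                    // justification items asked of ever-married women only
    forvalues i = 1/7 {
        drop if justify`i' == 8 | missing(justify`i')   // "don't know" (assumed code 8) or missing scenario
    }
    drop if missing(ipv_lifetime, ipv_pastyear)         // missing violence-experience items
    count                                               // sketch target: roughly 4,301 women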
Measurements Included demographic measures are those factors identified as typically associated with intimate partner violence risk in a World Health Organization (WHO) multi-country study [25]. These include: age (15–24, 25–34, 35–44); residence (rural, urban); marital status (not currently married, currently married); education (did not complete secondary, completed secondary only, completed university or technical college); and working status (yes, no). Wealth (low, middle, high) was measured as a composite index based on ownership of a television, automobile, refrigerator, videocassette recorder, cell phone, land-line phone, flush toilet, heating system, and vacation home, and on having more than one room per household member. The experience of IPV was measured using the modified eight-item Conflict Tactics Scale [26], which includes acts of physical, psychological (e.g., insults and threats of harm) and sexual abuse of ever-married women during their lifetime and within the past year. This study focused on “prior to past year” and “past year” reports of physical violence only, which included being pushed, shoved, slapped, kicked, hit with the fist or an object, beaten up, and threatened with a knife or other weapon. While lifetime and past-year partner violence are commonly explored as distinct outcomes, including both in the same model as covariates produces issues with collinearity. As a result, those who experienced violence in the past year, with or without earlier exposure, were classified as “past year,” while those who reported lifetime abuse but none in the past year were classified as “prior to past year.” In addition, experience with domestic violence as a child (yes/no) was measured using two questions. 
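A short Stata sketch of how the derived covariates described above could be constructed, continuing the earlier sketch. The asset indicator names (each coded 0/1) and the IPV flags are assumptions for illustration; the survey’s actual variables and scoring may differ.

    * Composite wealth score from ten 0/1 asset indicators, then terciles
    egen wealth_score = rowtotal(tv car fridge vcr cellphone landline ///
        flushtoilet heating vacationhome extra_room)
    xtile wealth3 = wealth_score, nq(3)    // terciles: 1 = low, 2 = middle, 3 = high
    * Mutually exclusive timing categories, with past-year exposure taking
    * precedence, avoid entering two collinear IPV covariates in one model
    gen byte ipv_time = 0                                           // no physical IPV
    replace ipv_time = 1 if ipv_lifetime == 1 & ipv_pastyear == 0   // prior to past year only
    replace ipv_time = 2 if ipv_pastyear == 1                       // past year, with or without earlier abuse
    label define ipvt 0 "none" 1 "prior to past year" 2 "past year"
    label values ipv_time ipvt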
The first question asked the respondent if, as a child or adolescent, she had witnessed physical abuse between her parents; this question was asked only of women who reported being raised in a household with both parents. The second asked the respondent if she had experienced abuse from her parents as a child or adolescent. Partner violence justification was measured in the GERHS10 by assessing agreement (yes/no) with seven scenarios. Specifically, women were asked if they felt a husband would be justified in beating his wife if: 1) she goes out without telling him; 2) she neglects the children; 3) she argues with him; 4) she refuses to have sex with him; 5) he is not happy with her household work or food provisions; 6) she asks him whether he has other girlfriends; 7) he finds out that she has been unfaithful. For this study, each scenario is examined separately. In addition, responses are dichotomized into those who responded affirmatively to any one scenario versus those who responded negatively to all. This “any justification” variable was created to allow for comparisons with other similar studies which utilize this grouped measure. 
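As a sketch, the grouped “any justification” indicator can be built by summing the seven dichotomized scenario items, again using the hypothetical names justify1-justify7 (each coded 0/1):

    * 1 if the respondent endorsed at least one of the seven scenarios, else 0
    egen n_justify = rowtotal(justify1-justify7)
    gen byte justify_any = n_justify > 0
    label variable justify_any "Wife-beating justified in at least one scenario"
    tab justify_any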
Statistical analysis Prevalences were weighted, adjusting for the sample design. Weighted prevalences were calculated with Stata’s “SVY: TABULATE” command, which employs the standard formula for weighted means [27]. Descriptive statistics, including frequencies and percentages, were computed and tested using categorical chi-square tests for statistical significance between measured factors and justification of IPV in any scenario, as well as between IPV victimization status and each specific IPV justification scenario. Prediction models were created for each scenario, and for the combination of any scenario of IPV justification, using survey log-binomial regression to estimate crude and adjusted prevalence ratios. For each adjusted model all measured covariates were included, except where noted in the table. Exclusions of covariates were made only when the models that included those variables failed to converge. Log-binomial regression was utilized because the odds ratio generated through logistic regression tends to overestimate the prevalence ratio for common outcomes [28].
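The weighted prevalence referred to above is simply the design-weighted mean of a 0/1 indicator, p = (sum_i w_i * y_i) / (sum_i w_i). The following hedged Stata sketch shows the full estimation sequence, reusing the variables sketched earlier; psu, stratum and sampwt are stand-ins for the actual GERHS10 design variables, and the covariate list is illustrative rather than the authors’ exact specification.

    * Declare the complex survey design once
    svyset psu [pweight = sampwt], strata(stratum)
    * Design-weighted prevalence of any justification (SVY: TABULATE)
    svy: tabulate justify_any, percent
    * Crude prevalence ratio for one factor; eform exponentiates the log-link
    * coefficients, so estimates are prevalence ratios, not odds ratios
    svy: glm justify_any i.wealth3, family(binomial) link(log) eform
    * Adjusted model with the measured covariates (illustrative list)
    svy: glm justify_any i.wealth3 i.educ3 i.age3 i.rural i.working i.married ///
        i.parent_ipv i.child_abuse i.ipv_time, family(binomial) link(log) eform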
Results: Demographics The total sample of 4,302 ever-married women aged 15 to 44 years consisted primarily of women over 25 (Table 1). There was a relatively equal distribution across wealth terciles, and just over half of the women resided in a rural area. While this sample includes only women who were ever married, a few (9%) were not currently married. Over half of the women had completed university or technical college; however, over three quarters were not working. Fewer than 9% reported ever witnessing IPV between their parents or experiencing abuse as a child. Just under 4% reported experiencing IPV victimization prior to the past year and close to 2% reported past-year IPV victimization. Significant differences in the justification of IPV overall were apparent across each socio-demographic measure except for age and partner violence history. Sample descriptives: ever-married women of reproductive age in the country of Georgia. Note: limited to women who were ever married, as these were the only ones asked about IPV justification. The general sample lifetime prevalence is 7%. Overall, just over 19% of the Georgian women in this sample agreed that a husband is justified in beating his wife in at least one scenario. 
This was driven primarily by the scenario “when a wife is unfaithful,” which justified IPV according to 18.6% of the sample. The remaining scenarios were viewed as cause for IPV by five percent or fewer of the women, specifically, in order of decreasing prevalence: when a wife neglects her children (5.0%); when a wife argues back (3.3%); when a wife asks about other girlfriends (2.4%); when a wife goes out without her husband’s permission (1.8%); when a wife refuses sex (1.6%); and when a wife neglects housework (1.3%). Predicting justification of intimate partner violence Predictive models were examined to better understand the factors most associated with justification of IPV under at least one circumstance, as well as under specific circumstances, for Georgian women. Tables 2, 3 and 4 present both crude and adjusted results because, in many cases, the adjusted models resulted in a factor showing no significance due to sparse cells. Table 2 focuses on the associations between socio-economic factors and the justification of IPV. Women in the lowest household wealth tercile were two to twenty-four times more likely to justify IPV, depending on the scenario, when compared with women in the highest tercile. In terms of education, the lower a woman’s education, the higher the likelihood that she would justify partner violence across each scenario. Lastly, women who were not currently working consistently showed a higher likelihood of justifying IPV across each scenario when compared with women who were currently working. Socio-economic predictors of IPV justification: * omitted from model as inclusion resulted in a failure to converge. Demographic predictors of IPV justification: * omitted from model as inclusion resulted in a failure to converge. 
Associations between experiences of domestic violence and IPV justification: * omitted from model as inclusion resulted in a failure to converge. Table 3 explores the relationships between the socio-demographic factors of geography, age and marital status and a Georgian woman’s likelihood of justifying IPV. A consistent pattern emerges that women from rural communities have an increased likelihood (two to five times) of justifying IPV, depending on the scenario. In terms of age, younger women (under 25) are more likely to justify IPV when a woman refuses sex, compared with women 25–34, but under no other circumstance. Alternatively, older women are more likely than the middle age group to justify IPV around issues of infidelity or going out without permission. As described above, only ever-married women were asked questions regarding IPV justification in this survey. There are two circumstances in which a currently married woman is significantly more likely to justify IPV compared with a woman who is divorced, separated or widowed: when a woman goes out without permission or is unfaithful. Table 4 explores the role of past and current exposure to domestic violence in a Georgian woman’s justification of IPV. Consistently, women who experienced violence in their homes while growing up, in the form of parental IPV or child abuse, show an increased likelihood of supporting IPV compared with women who did not experience this violence. Furthermore, having been abused as a child is consistently a strong predictor of current justification of IPV in this sample. Similarly, women who experienced IPV prior to the past year were more likely to justify this behavior across four of the seven scenarios. Obtaining significance for past-year IPV was challenging given its low prevalence in the sample. However, the unadjusted estimates reveal that currently abused women show approximately a threefold increased likelihood of justifying IPV when a woman fails in her household duties, compared with non-currently abused women. 
Discussion: Georgian women report comparatively lower rates of IPV and lower justification of this behavior. For example, looking at the closest bordering countries where data were available, in the second most common scenario in which IPV was justified, neglect of a child, fewer than 5% of Georgian women felt this scenario justified IPV, compared with 26% of women in Kazakhstan and 27% of women in Armenia [29]. However, as found elsewhere, these two factors, victimization and justification of this behavior, are strongly associated in this population. This association may lend insight into the lower levels of IPV found in Georgia; the lower tolerance of this behavior may act as a social deterrent even without strong legislation. As noted in other countries [26,30,31], Georgian women who were poor, from rural areas, had lower education and were not working had the greatest likelihood of justifying partner violence. Prior research restricted to male respondents found conflicting results regarding the association between IPV justification and educational level [32]. However, in that study the two opposing samples also differed by age, marital status, education and, potentially, the content and character of the educational system and the endorsement of patriarchal ideals. These findings support the WHO results and others identifying that lower socio-economic status influences social perceptions of partner violence. Lastly, while it was difficult to ascertain the impact of current IPV on justification given the small number of recently victimized women in this sample, experiences of violence, whether as a partner, as a child or as a witness to IPV between parents, increased the likelihood that a Georgian woman would justify this behavior. While past research has identified that increased justification is associated with increased risk of IPV [30], contextualizing the influence of abuse felt or witnessed during childhood further illuminates the influence of experienced violence on perceptions of acceptability. In this cross-sectional survey the temporal order of IPV victimization and IPV justification could not be established. However, conceptually these two factors likely have a dynamic relationship, with each impacting the other. As an analysis of secondary data, this study was limited to crude measures of physical violence with no contextual factors. Furthermore, the original researchers excluded never-married women from the IPV justification questions. A small body of research has found that younger women are up to twice as likely to justify IPV compared with older women [13,33-37]. Because the younger women fall primarily into the “never married” category, a comparative analysis is not possible here. Another methodological issue was the decision to ask only those women who reported being raised in a household with both parents whether they had witnessed IPV between their parents growing up. While in the US the high degree of IPV among divorced and separated parents is recognized, in Georgia this familial experience (with or without violence) is rare. For our sample, with only 24 women skipped overall (less than 1%), there is little reason to expect this skip to have biased our findings. 
Lastly, because the survey was part of a multi-country effort, the situational measures for justification were predefined by the WHO to allow international comparisons. It is possible that scenarios developed specifically for this population would have revealed the behaviors that show stronger justification of abuse. Further research on this relationship applying culturally specific measures is warranted. Conclusions: The Republic of Georgia has experienced both recent and threatened conflicts, which conceivably would increase the level of IPV [6]; however, given that IPV justification is more likely historic, based largely on social norms, there is little reason to believe that justification of IPV is impacted by these conflicts. In Georgia, violence against women is historically unacceptable; with extended families often living together and strong family social support, it is difficult to keep this behavior secret. Perhaps this social disapproval counteracts the impact of living in a state of conflict. Lastly, as found in other countries [12], the most consistent predictors of IPV justification in this sample were lower socio-economic status, lower education, living in a rural area and having personal experiences with domestic violence. These findings again provide insight into the higher level of IPV victimization among multi-marginal populations that lack a social barrier against the abuse of women by their intimate partners. It has been recognized that the justification of IPV directly influences both our ideas about who is to blame and our definition of the behaviors that identify this problem as a public health and legal issue. Furthermore, partner violence justification can reveal important information about the perpetration, victimization and response to this type of violence in a specific region. As Georgian society moves toward a more Western model -- teens and young women are becoming more sexually active before marriage, and with a growing economy it will be easier for young couples to live on their own -- it is likely that risks will also shift, albeit slowly. Georgia may serve as a case example that as historic protections are loosened (extended families living together, women being sexually active only with their husbands) there will be a growing need to improve other types of protection, in this case education relative to the welfare of women. Thus, as Georgia becomes more “modern,” changing attitudes about justification for IPV may become more important to prevent increases in IPV in this population. Consent: Written informed consent was obtained from the participants for the publication of this report and any accompanying images. Abbreviations: DRH/CDC: Division of Reproductive Health, Centers for Disease Control and Prevention; GERHS10: Women’s Reproductive Health Survey in Georgia; IPV: Intimate Partner Violence; MoLHSA: Georgian Ministry of Labor, Health and Social Affairs; NCDC: Georgian National Center for Disease Control; RHS: Women’s Reproductive Health Survey; UNFPA: United Nations Population Fund; USAID: United States Agency for International Development; WHO: World Health Organization. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: EW contributed to the conceptualization, design, analysis and interpretation. MB contributed to the conceptualization, design, acquisition of data and interpretation. NA contributed to the acquisition of data and interpretation. 
SS contributed to the analysis and interpretation. LAM contributed to the conceptualization, design, analysis and interpretation. All authors contributed to drafting and revisions and gave final approval of this manuscript. Authors’ information: Eve Waltermaurer and Maia Butsashvili are co-first authors. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6874/13/44/prepub
Background: Little research on Intimate Partner Violence (IPV) and social perceptions toward this behavior has been disseminated from Eastern Europe. This study explores the prevalence and risk factors of IPV and the justification of this behavior among women in the Republic of Georgia. It seeks to better understand how IPV and IPV justification relate and how social justification of IPV differs across socio-economic measures among this population of women. Methods: This study utilizes a national sample of ever-married women from the Republic of Georgia (N = 4,302). We describe the factors that predict IPV justification among these women and the relationship between the acceptability of IPV and victimization overall and across socio-demographic factors. Results: While the overall lifetime prevalence of IPV in this sample was relatively low (4%), women who had experienced IPV were two to four times more likely to justify it compared with women who had no experience of being abused by a partner. Just under one-quarter of the sample agreed that IPV was justified in at least one scenario, namely when the wife was unfaithful. Georgian women who were poor, from a rural community, had lower education, were not working and who experienced child abuse or IPV among their parents were more likely to justify this behavior. Conclusions: These findings begin to fill a gap in our understanding of IPV experienced by women in Eastern Europe. In addition, these findings emphasize the need for researchers, practitioners and policy makers to contextualize IPV in terms of the justification of this behavior among the population being considered, as this can play an important role in perpetration, victimization and response.
Background: With little to no published research on intimate partner violence disseminated from Eastern Europe, this study focuses in particular on IPV and the social justification of this behavior among women from Georgia. Georgia has experienced a tremendous amount of social and economic turmoil in the two decades following its independence from the Soviet Union in 1991. Upon gaining sovereignty, Georgia underwent several armed conflicts that split the country and established the de facto separated regions of South Ossetia and Abkhazia. Disagreement over reforms resulted in violent conflicts and ethnic cleansing. Furthermore, socioeconomic changes during the intervening decades have created an environment conducive to increasing IPV. Prior to the upheaval Georgia experienced relative income equality, but after 1990 increased inflation drove the poverty rate up. Although the United Nations Commission on Human Rights adopted its Declaration on the Elimination of Violence against Women in 1993 [1], it was not until 2006 that Georgia instituted its first law on domestic violence and defined it to include acts of physical, psychological, economic and sexual violence between family members. Furthermore, domestic violence laws in Georgia are argued to be vague, as they are enveloped into general criminal codes against violence, tending to ignore domestic violence as a unique condition and paying no regard to psychological violence [2]. Interestingly, the IPV data that have been reported show the prevalence in Georgia to be notably low -- fewer than 8% of women report ever experiencing any victimization [3,4]. For context, a multi-country study on women’s health and domestic violence in ten countries between 2000 and 2003 conducted by the World Health Organization (WHO) obtained IPV prevalence rates ranging from 15% to 71%, with only two countries reporting a prevalence under 25% [5]. This lower IPV prevalence in Georgia seems to contradict research identifying a typically higher rate of IPV in countries highly impacted by conflict [6,7]. While there have been high levels of conflict experienced by the populace, Georgians are more often the victims than the oppressors. Yet on an individual level, gender power differentials have been featured in Georgia’s past, including bridal kidnappings in which men would kidnap virgin women and rape them in order to keep them from marrying any man except the offender [8]. Given these experiences, it is useful to contextualize IPV in Georgia within a larger framework of social justification of this behavior. The examination of partner violence justification and its impact on IPV victimization has seen some growth internationally [5,9-13]. These studies have found that the rate of justification in many countries is quite high and can vary by the reason for abuse (e.g. neglect of a child, infidelity). Furthermore, these studies found that women tended to approve of IPV at a greater rate than males and that factors reflecting lower socio-economic status typically resulted in higher acceptance of IPV. Drawing from a small but growing body of international research, conducted primarily over the last decade, a clearer understanding of the interaction between social justification of partner violence and its incidence has developed. 
“Injunctive” social norms, or the consensus within a community about whether IPV is acceptable [12,14], recognize that while partner violence often occurs in the privacy of one’s home, it is informed by the attitudes of the larger society. Furthermore, cultural differences in the incidence of partner violence have been argued to reflect the attitudes, shared by the group, that govern interactions within each culture [15]. The conceptualization of the intersection between social norming and IPV is integrated within the earliest conceptualizations of this problem. Two of the most often cited theoretical frameworks applied to understand or distinguish types of partner violence are common couple violence theory and the theory of patriarchal terrorism; each conceptualization centers strongly on the role of social perceptions of violence and/or women [16,17]. The common couple violence theory posits that general social attitudes toward violence are central to producing a more violent society where IPV can exist [18]. While the core of this framework is the claim that partner violence is gender symmetric, an argument that stems from the idea that the behaviors of both males and females reflect their larger culture’s endorsement of violence, these general attitudes about the acceptability or non-acceptability of violent behaviors can influence the justification of partner violence in subtle ways. A key element of the common couple violence framework is the nature of what behaviors we define as violent; for example, lower-threshold violence may encompass more female perpetrators, while acts of domestic homicide are primarily perpetrated by males. One British study found that there was not full agreement as to which behaviors constituted partner violence: 16% of urban participants did not feel slapping denoted partner violence and 5% did not feel getting punched was an act of partner violence [19]. Clearly, if a behavior is not seen as violent, it will be deemed more acceptable by those who experience, perpetrate and respond to it. The patriarchal terrorism theory points to social attitudes around the role of women in relation to men as the source of partner violence [20]. The role of patriarchy within a culture plays an intricate part in social perceptions of partner violence toward women, as it can support attitudes that men are not responsible for their behaviors whereas women are to blame [21], that women’s behaviors are the triggers of violence by partners [22], and that women secretly desire this exertion of power [9]. These attitudes interact with the level of tolerance an individual will feel toward IPV, which in turn can play an important role in influencing whether these violent acts are reported to a third party such as the police [23,24]. Examples of patriarchal hegemony in Georgia do exist. While the actual prevalence is not known, there was a time when bridal kidnappings, in which single men would kidnap virgin women and rape them in order to keep them from marrying any man except the offender, were not uncommon [8]. Yet, for over a decade, the rate of reported partner violence in Georgia has been consistently low [3,4]. Georgia provides an interesting case study as it conceptually resembles the U.S. and Europe of the past, in which women may be viewed as inferior, but families are often intact and multigenerational, and the influences of the Western world are only recently emerging. 
Within this context it is further relevant to understand the role that socio-demographics such as age, geography, marital status, education, work status and wealth play in this dynamic, as we recognize these factors influence IPV consistently across the globe [25]. To further explore the relationship between IPV victimization and the acceptance of this behavior among Georgian women, our study takes advantage of the justification scenarios built into the most recent Women’s Reproductive Health Survey in Georgia (2010). To better understand the factors associated with the justification of partner violence in Georgia, this study seeks to answer the following questions: 1) What factors are associated with Georgian women justifying partner violence against women? and 2) To what extent do experiences with domestic violence predict IPV justification? Conclusions: The Republic of Georgia has experienced both recent and threatened conflicts, which conceivably would increase the level of IPV [6]; however, given that IPV justification is more likely historic, based largely on social norms, there is little reason to believe that justification of IPV is impacted by these conflicts. In Georgia, violence against women is historically unacceptable; with extended families often living together and strong family social support, it is difficult to keep this behavior secret. Perhaps this social disapproval counteracts the impact of living in a state of conflict. Lastly, as found in other countries [12], the most consistent predictors of IPV justification in this sample were lower socio-economic status, lower education, living in a rural area and having personal experiences with domestic violence. These findings again provide insight into the higher level of IPV victimization among multi-marginal populations that lack a social barrier against the abuse of women by their intimate partners. It has been recognized that the justification of IPV directly influences both our ideas about who is to blame and our definition of the behaviors that identify this problem as a public health and legal issue. Furthermore, partner violence justification can reveal important information about the perpetration, victimization and response to this type of violence in a specific region. As Georgian society moves toward a more Western model -- teens and young women are becoming more sexually active before marriage, and with a growing economy it will be easier for young couples to live on their own -- it is likely that risks will also shift, albeit slowly. Georgia may serve as a case example that as historic protections are loosened (extended families living together, women being sexually active only with their husbands) there will be a growing need to improve other types of protection, in this case education relative to the welfare of women. Thus, as Georgia becomes more “modern,” changing attitudes about justification for IPV may become more important to prevent increases in IPV in this population.
Background: Little research on Intimate Partner Violence (IPV) and social perceptions toward this behavior has been disseminated from Eastern Europe. This study explores the prevalence and risk factors of IPV and the justification of this behavior among women in the Republic of Georgia. It seeks to better understand how IPV and IPV justification relate and how social justification of IPV differs across socio-economic measures among this population of women. Methods: This study utilizes a national sample of ever-married women from the Republic of Georgia (N = 4,302). We describe the factors that predict IPV justification among these women and the relationship between the acceptability of IPV and victimization overall and across socio-demographic factors. Results: While the overall lifetime prevalence of IPV in this sample was relatively low (4%), women who had experienced IPV were two to four times more likely to justify it compared with women who had no experience of being abused by a partner. Just under one-quarter of the sample agreed that IPV was justified in at least one scenario, namely when the wife was unfaithful. Georgian women who were poor, from a rural community, had lower education, were not working and who experienced child abuse or IPV among their parents were more likely to justify this behavior. Conclusions: These findings begin to fill a gap in our understanding of IPV experienced by women in Eastern Europe. In addition, these findings emphasize the need for researchers, practitioners and policy makers to contextualize IPV in terms of the justification of this behavior among the population being considered, as this can play an important role in perpetration, victimization and response.
8,876
312
[ 1341, 394, 570, 168, 349, 589, 19, 89, 10, 71, 11, 16 ]
16
[ "women", "ipv", "violence", "justification", "past", "married", "sample", "partner", "scenario", "partner violence" ]
[ "violence georgian woman", "violence georgia consistently", "conflicts georgia violence", "violence laws georgia", "domestic violence georgian" ]
[CONTENT] Partner violence | Social norms | Women | Gender | Republic of Georgia [SUMMARY]
[CONTENT] Partner violence | Social norms | Women | Gender | Republic of Georgia [SUMMARY]
[CONTENT] Partner violence | Social norms | Women | Gender | Republic of Georgia [SUMMARY]
[CONTENT] Partner violence | Social norms | Women | Gender | Republic of Georgia [SUMMARY]
[CONTENT] Partner violence | Social norms | Women | Gender | Republic of Georgia [SUMMARY]
[CONTENT] Partner violence | Social norms | Women | Gender | Republic of Georgia [SUMMARY]
[CONTENT] Adolescent | Adult | Adult Survivors of Child Abuse | Attitude | Educational Status | Employment | Female | Georgia (Republic) | Humans | Rural Population | Socioeconomic Factors | Spouse Abuse | Women | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Adult Survivors of Child Abuse | Attitude | Educational Status | Employment | Female | Georgia (Republic) | Humans | Rural Population | Socioeconomic Factors | Spouse Abuse | Women | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Adult Survivors of Child Abuse | Attitude | Educational Status | Employment | Female | Georgia (Republic) | Humans | Rural Population | Socioeconomic Factors | Spouse Abuse | Women | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Adult Survivors of Child Abuse | Attitude | Educational Status | Employment | Female | Georgia (Republic) | Humans | Rural Population | Socioeconomic Factors | Spouse Abuse | Women | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Adult Survivors of Child Abuse | Attitude | Educational Status | Employment | Female | Georgia (Republic) | Humans | Rural Population | Socioeconomic Factors | Spouse Abuse | Women | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Adult Survivors of Child Abuse | Attitude | Educational Status | Employment | Female | Georgia (Republic) | Humans | Rural Population | Socioeconomic Factors | Spouse Abuse | Women | Young Adult [SUMMARY]
[CONTENT] violence georgian woman | violence georgia consistently | conflicts georgia violence | violence laws georgia | domestic violence georgian [SUMMARY]
[CONTENT] violence georgian woman | violence georgia consistently | conflicts georgia violence | violence laws georgia | domestic violence georgian [SUMMARY]
[CONTENT] violence georgian woman | violence georgia consistently | conflicts georgia violence | violence laws georgia | domestic violence georgian [SUMMARY]
[CONTENT] violence georgian woman | violence georgia consistently | conflicts georgia violence | violence laws georgia | domestic violence georgian [SUMMARY]
[CONTENT] violence georgian woman | violence georgia consistently | conflicts georgia violence | violence laws georgia | domestic violence georgian [SUMMARY]
[CONTENT] violence georgian woman | violence georgia consistently | conflicts georgia violence | violence laws georgia | domestic violence georgian [SUMMARY]
[CONTENT] women | ipv | violence | justification | past | married | sample | partner | scenario | partner violence [SUMMARY]
[CONTENT] women | ipv | violence | justification | past | married | sample | partner | scenario | partner violence [SUMMARY]
[CONTENT] women | ipv | violence | justification | past | married | sample | partner | scenario | partner violence [SUMMARY]
[CONTENT] women | ipv | violence | justification | past | married | sample | partner | scenario | partner violence [SUMMARY]
[CONTENT] women | ipv | violence | justification | past | married | sample | partner | scenario | partner violence [SUMMARY]
[CONTENT] women | ipv | violence | justification | past | married | sample | partner | scenario | partner violence [SUMMARY]
[CONTENT] violence | women | georgia | partner | partner violence | ipv | attitudes | social | violent | rate [SUMMARY]
[CONTENT] women | year | past year | past | measured | disease control | disease | control | justification | scenario [SUMMARY]
[CONTENT] ipv | women | justify | justify ipv | woman | wife | justification | likelihood | compared | sample [SUMMARY]
[CONTENT] living | ipv | justification | social | women | georgia | women sexually active | women sexually | sexually active | young [SUMMARY]
[CONTENT] women | ipv | justification | violence | authors | married | scenario | health | past | year [SUMMARY]
[CONTENT] women | ipv | justification | violence | authors | married | scenario | health | past | year [SUMMARY]
[CONTENT] Intimate Partner Violence | Eastern Europe ||| IPV | the Republic of Georgia ||| IPV | IPV | IPV [SUMMARY]
[CONTENT] the Republic of Georgia | 4,302 ||| IPV | IPV [SUMMARY]
[CONTENT] IPV | 4% | two to four | IPV | Just under one-quarter | IPV | at least one ||| Georgian | IPV [SUMMARY]
[CONTENT] IPV | Eastern Europe ||| IPV [SUMMARY]
[CONTENT] Intimate Partner Violence | Eastern Europe ||| IPV | the Republic of Georgia ||| IPV | IPV | IPV ||| the Republic of Georgia | 4,302 ||| IPV | IPV ||| IPV | 4% | two to four | IPV | Just under one-quarter | IPV | at least one ||| Georgian | IPV ||| IPV | Eastern Europe ||| IPV [SUMMARY]
[CONTENT] Intimate Partner Violence | Eastern Europe ||| IPV | the Republic of Georgia ||| IPV | IPV | IPV ||| the Republic of Georgia | 4,302 ||| IPV | IPV ||| IPV | 4% | two to four | IPV | Just under one-quarter | IPV | at least one ||| Georgian | IPV ||| IPV | Eastern Europe ||| IPV [SUMMARY]
The mitochondrial genome of Paragyrodactylus variegatus (Platyhelminthes: Monogenea): differences in major non-coding region and gene order compared to Gyrodactylus.
25130627
Paragyrodactylus Gvosdev and Martechov, 1953, a viviparous genus of ectoparasites within the Gyrodactylidae, contains three nominal species, all of which infect Asian river loaches. The group is suspected to be a basal lineage within Gyrodactylus Nordmann, 1832 sensu lato, although this remains unclear. Further molecular study, beyond characterization of the standard Internal Transcribed Spacer region, is needed to clarify the evolutionary relationships within the family and the placement of this genus.
BACKGROUND
The mitochondrial genome of Paragyrodactylus variegatus You, King, Ye and Cone, 2014 was amplified in six parts from a single worm, sequenced using primer walking, annotated and analyzed using bioinformatic tools.
METHODS
The mitochondrial genome of P. variegatus is 14,517 bp, containing 12 protein-coding genes (PCGs), 22 transfer RNA (tRNA) genes, two ribosomal RNA (rRNA) genes and a major non-coding region (NCR). The overall A + T content of the mitochondrial genome is 76.3%, which is higher than that of all reported mitochondrial genomes of monogeneans. All 22 tRNAs have the typical cloverleaf secondary structure, except tRNACys, tRNASer1 and tRNASer2, which lack the dihydrouridine (DHU) arm. The inferred secondary structures of the large ribosomal subunit (rrnL) and small ribosomal subunit (rrnS) contain six domains (domain III is absent) and three domains, respectively. The NCR includes six 40 bp tandem repeat units and contains two identical poly-T stretches, a stem-loop structure and some surrounding structural elements. The gene order (the positions of tRNAGln, tRNAMet and the NCR) differs from that of the mitochondrial genomes reported from Gyrodactylus spp.
RESULTS
Together, the Duplication and Random Loss Model and the Recombination Model are the most plausible explanation for the variation in gene order. Both morphological characters and characteristics of the mitochondrial genome support Paragyrodactylus as a genus distinct from Gyrodactylus. Considering their specific distribution and known hosts, we believe that Paragyrodactylus is a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on closely related river loaches.
CONCLUSION
[ "Animals", "Base Sequence", "DNA, Mitochondrial", "Gene Order", "Genome, Mitochondrial", "Helminth Proteins", "Nucleic Acid Conformation", "Polymerase Chain Reaction", "RNA, Ribosomal", "RNA, Transfer", "Species Specificity", "Trematoda" ]
4150975
Background
Gyrodactylids are widespread parasites of freshwater and marine fishes, typically inhabiting the skin and gills of their hosts. Their direct life-cycle and hyperviviparous method of reproduction facilitate rapid population growth. Some species are pathogenic to their host (e.g. Gyrodactylus salaris Malmberg, 1957) [1] and capable of causing high host mortality, resulting in serious ecological and economic consequences [2]. Over twenty genera and 400 species of gyrodactylids have been described [3], most of them identified by comparative morphology of the opisthaptoral hard parts. This traditional approach to identifying gyrodactylids gives limited information for detailed phylogenetic analysis. Recently, the nuclear ribosomal DNA (rDNA) and the internal transcribed spacers (ITS) of rDNA have been incorporated into the molecular taxonomy of the group [4, 5]. In addition, mitochondrial markers (COI and COII) have been confirmed as DNA barcodes for Gyrodactylus Nordmann, 1832 [6, 7]. However, more polymorphic molecular markers suitable for different taxonomic categories are still needed for studying the taxonomy and phylogeny of these parasites. Paragyrodactylus Gvosdev and Martechov, 1953 is a genus of Gyrodactylidae comprising three nominal species, Paragyrodactylus iliensis Gvosdev and Martechov, 1953 (=P. dogieli Osmanov, 1965), Paragyrodactylus barbatuli Ergens, 1970 and Paragyrodactylus variegatus You, King, Ye and Cone, 2014, all of which infect river loaches (Nemacheilidae) inhabiting streams in central Asia [8]. The relationship between Paragyrodactylus and Gyrodactylus has been recently explored. Kritsky and Boeger reported that the two genera had a close relationship based on morphological characters [9]. Bakke et al. believed the complexity of the attachment apparatus separates Paragyrodactylus from Gyrodactylus and pondered whether these differences were fundamental or a local diversification within Gyrodactylus [3]. Furthermore, You et al., using morphology and molecular data, presented the hypothesis that Paragyrodactylus was a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on river loaches [8]. The ambiguous relationship between Paragyrodactylus and Gyrodactylus emphasizes the need for further molecular study of these genera. Due to their higher rate of base substitution, maternal inheritance, evolutionarily conserved gene products and low recombination [10, 11], mitochondrial genomes provide powerful markers for phylogenetic analysis, biological identification and population studies. In addition, mitochondrial genomes can provide genome-level characters such as gene order for deep-level phylogenetic analysis [12, 13]. To date, the complete mitochondrial DNA sequences of only nine monogeneans are available, including three species of Gyrodactylus. In the present study, the first mitochondrial genome for Paragyrodactylus, P. variegatus, is sequenced and characterized. We report on its genome organization, base composition, gene order, codon usage, ribosomal and transfer RNA gene features and major non-coding region. Additionally, we provide a preliminary comparison of the gene arrangement within both Paragyrodactylus and Gyrodactylus.
Methods
Specimen collection and DNA extraction
Specimens of P. variegatus were collected from the skin and fins of wild Homatula variegata (Dabry de Thiersant, 1874) in the Qinling Mountain region of central China. Upon capture the specimens were immediately preserved in 99% ethanol and stored at 4°C. The DNA from one parasite was extracted using a TIANamp Micro DNA Kit (Tiangen Biotech, Beijing, China) according to the manufacturer’s protocol.
PCR and sequencing
The complete mitochondrial genome of P. variegatus was amplified in six parts using a combination of existing primers and newly developed primers generated by primer walking (primers listed in Table 1). For short fragments (<2 kb), PCR reactions were performed in a total volume of 25 μl containing 3.0 mM MgCl2, 10 mM Tris–HCl (pH 8.3), 50 mM KCl, 0.25 mM of each dNTP, 1.25 U rTaq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 93°C, followed by 40 cycles of 10 sec at 92°C, 1.5 min at 52–54°C and 2 min at 60°C, with a final extension of 6 min at 72°C. For long fragments (>2 kb), the 25 μl PCR reaction consisted of 2.5 mM MgCl2, 2.5 μl 10 × LA PCR Buffer II (Mg2+ free), 0.4 mM of each dNTP, 1.25 U LA Taq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 94°C, followed by 40 cycles of 20 sec at 93°C, 30 sec at 53–54°C and 4–7 min at 68°C, with a final extension of 10 min at 68°C. All PCR products were purified with a PCR Purification Kit (Sangon Biotech, Shanghai, China) and sequenced using multiple primers, including those which generated the PCR product and new internal primers developed by primer walking.

Table 1. List of PCR primer combinations used to amplify the mitochondrial genome of Paragyrodactylus variegatus
Primer name | Gene | Sequence (5′–3′) | Source
1F (UND1F)* | ND1 | CGHAAGGGNCCNAAHAAGGT | Huyse et al. (2007) [17]
1R* | COI | TAAACTTCTGGATGWCCAAAAAAT | This study
2F (UNAD5F) | ND5 | TTRGARGCNATGCGBGCHCC | Huyse et al. (2007) [17]
2R | COIII | YCARCCTGAGCGAATTCARGCKGG | This study
3F (U12SF)* | rrnS | CAGTGCCAGCAKYYGCGGTTA | Huyse et al. (2007) [17]
3R (UNAD5R)* | ND5 | GGWGCMCGCATNGCYTCYAA | Huyse et al. (2007) [17]
4F | ND5 | ATGTGATTTTTAGAGTTATGCTT | This study
4R (6RNAD5) | ND5 | AGGHTCTCTAACTTGGAAAGWTAGTAT | Huyse et al. (2008) [24]
5F* | COIII | TCTTCWRTTACAGYAACDTCCTA | This study
5R* | ND1 | AAACCTCATACCTAACTGCG | This study
6F* | COI | CTCCTTTATCTGGTGCTCTGGG | This study
6R* | rrnS | GACGGGCGGTATGTACCTCTCT | This study
F236 | COIII | TTGTTTTTGATTCCGTGA | This study
F930 | CYTB | TTATCTTTGTGGTTCGTTCG | This study
F1568 | CYTB | AGGTCAAAGATAGGTGGGTTAG | This study
F2174 | ND4 | TATAGGAATTTTACCATTATTTA | This study
F2855 | ND4 | CATGGCTTATCAGTTTG | This study
F3302 | tRNA-Gln | GGTAGCATAGGAGGTAAGGTTC | This study
F8330 | COI | TTTAGCGGGTATTTCAAGTA | This study
F8920 | COI | GTATTATTCACTATAGGAGGGGTA | This study
R4662 | ATP6 | ACGAAATAATAAAAATATAAAAAGT | This study
R5283 | ND2 | TCCAGAAACTAACAATAAAGCAC | This study
R6003 | tRNA-Val | ACCTAATGCTTGTAATG | This study
R6599 | ND1 | AAACCTCATACCTAACTGCG | This study
R7212 | tRNA-Pro | GCAGCCCTATCAGTAAGACC | This study
R7941 | COI | ACCAAGCCCTACAAAACCTG | This study
R10014 | rrnL | TCCCCATTCAGACAATCCTC | This study
R10652 | rrnS | GCTGGCACTGTGACTTATCCTA | This study
R11375 | COII | ATTGTAGGTAAAAAGGTTCAC | This study
R12090 | ND6 | AAAAAGACAATAAGACCCACTA | This study
R12752 | tRNA-Leu(UUR) | AACACTTTGTATTTGACGCT | This study
R14014 | ND5 | AGGTTCAAGTAATGGTAGGTCT | This study
*Primers used for the long PCR fragments (>2 kb).

Sequence analysis
Contiguous sequence fragments were assembled using SeqMan (DNAStar) and the Staden Package v1.7.0 [14]. Protein-coding genes (PCGs) and ribosomal RNA (rRNA) genes were initially identified using BLAST (Basic Local Alignment Search Tool) searches on GenBank, then by alignment with the published mitochondrial genomes of Gyrodactylus derjavinoides Malmberg, Collins, Cunningham and Jalali, 2007 (GenBank no. EU293891), G. salaris (GenBank no. DQ988931) and Gyrodactylus thymalli Zitnan, 1960 (GenBank no. EF527269). The secondary structures of the two rRNA genes were determined mainly by comparison with the published rRNA secondary structures of Dugesia japonica Ichikawa and Kawakatsu, 1964 (GenBank no. NC_016439) [15]. Protein-coding regions were translated with the echinoderm mitochondrial genetic code. The program tRNAscan-SE v1.21 (http://lowelab.ucsc.edu/tRNAscan-SE/) was used to identify transfer RNA (tRNA) genes and their structures [16], using the mito/chloroplast codon set and a cove cutoff score of one. The tRNAs not detected by tRNAscan-SE v1.21 were identified by comparison with Gyrodactylus sequences [17, 18]. Tandem Repeats Finder v4.07 was used to identify tandem repeats in non-coding regions [19]. Base composition, codon usage and genetic distances were calculated with MEGA v5.1 [20]. Nonsynonymous (Ka)/synonymous (Ks) values were estimated with KaKs_Calculator v1.2 using the MA method [21].
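As a concrete illustration of the translation step above, the following is a minimal sketch (not the authors' pipeline) using Biopython's Seq.translate with NCBI translation table 9, the echinoderm and flatworm mitochondrial code named in the Methods. The input sequence is a hypothetical placeholder, not a fragment of the actual genome.

```python
# Minimal sketch: translating a mitochondrial protein-coding region with the
# echinoderm/flatworm mitochondrial genetic code (NCBI table 9).
# Requires Biopython; the sequence below is an invented in-frame fragment.
from Bio.Seq import Seq

cds = Seq("ATGTTAAGATGAATATAA")  # hypothetical placeholder sequence
print(cds.translate(table=9))   # -> MLSWI*  (AGA->Ser, TGA->Trp, ATA->Ile)
```

Under table 9, AGA/AGG encode Serine, TGA encodes Tryptophan, ATA encodes Isoleucine and AAA encodes Asparagine, which matches the codon assignments seen in Table 4 below.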
Results
Genome organization, base composition and gene order
The circular mitochondrial genome of P. variegatus is 14,517 bp in size (GenBank no. KM067269) and contains 12 PCGs, 22 tRNAs, two rRNAs and a single major non-coding region (NCR) (Figure 1). It lacks the ATP8 gene, and all the genes are transcribed from the same strand. The overall nucleotide composition is: T (45.8%), C (9.5%), A (30.4%), G (14.2%), with an overall A + T content of 76.3% (Table 2).

Figure 1. The gene map for the mitochondrial genome of Paragyrodactylus variegatus.

Table 2. Base composition of the mitochondrial genome of Paragyrodactylus variegatus.

The arrangement of rRNA and protein coding genes of P. variegatus is typical for gyrodactylids. However, the gene order of some tRNA genes is different: there are three tRNAs (tRNAGln, tRNAPhe, tRNAMet) between ND4 and the major non-coding region and five tRNAs (tRNATyr, tRNALeu1, tRNASer2, tRNALeu2, tRNAArg) between ND6 and ND5 in P. variegatus, while Gyrodactylus spp. have one tRNA (tRNAPhe) and seven tRNAs (tRNATyr, tRNALeu1, tRNAGln, tRNAMet, tRNASer2, tRNALeu2, tRNAArg) in the same locations, respectively.

Protein coding genes and codon usage
The total length of all 12 PCGs is 9,990 bp. The average A + T content of the PCGs is 75.7% (Table 2), ranging from 70.9% (COI) to 82.9% (ND2). ATG is the typical start codon, except for ND1 and COII, which begin with GTG and TTG, respectively (Table 3). All PCGs terminate with the stop codon TAA, except ND5, which uses TAG. Incomplete stop codons were not observed in P. variegatus.

Table 3. The organization of the mitochondrial genome of Paragyrodactylus variegatus
Gene | From | To | Size (bp) | Start/stop codon | Anticodon | Intergenic nucleotides
COIII | 1 | 639 | 639 | ATG/TAA | - | -
tRNA-His (H) | 651 | 713 | 63 | - | GTG | 11
CYTB | 719 | 1798 | 1080 | ATG/TAA | - | 5
ND4L | 1803 | 2057 | 255 | ATG/TAA | - | 4
ND4 | 2030 | 3238 | 1209 | ATG/TAA | - | -28
tRNA-Gln (Q) | 3245 | 3311 | 67 | - | TTG | 6
tRNA-Phe (F) | 3331 | 3397 | 67 | - | GAA | 19
tRNA-Met (M) | 3410 | 3476 | 67 | - | CAT | 12
NCR | 3477 | 4569 | 1093 | - | - | 0
ATP6 | 4570 | 5082 | 513 | ATG/TAA | - | 0
ND2 | 5084 | 5959 | 876 | ATG/TAA | - | 1
tRNA-Val (V) | 5974 | 6040 | 67 | - | TAC | 14
tRNA-Ala (A) | 6047 | 6112 | 66 | - | TGC | 6
tRNA-Asp (D) | 6114 | 6178 | 65 | - | GTC | 1
ND1 | 6183 | 7073 | 891 | GTG/TAA | - | 4
tRNA-Asn (N) | 7087 | 7155 | 69 | - | GTT | 13
tRNA-Pro (P) | 7159 | 7221 | 63 | - | TGG | 3
tRNA-Ile (I) | 7216 | 7283 | 68 | - | GAT | -6
tRNA-Lys (K) | 7288 | 7352 | 65 | - | CTT | 4
ND3 | 7361 | 7711 | 351 | ATG/TAA | - | 8
tRNA-Ser(AGN) (S1) | 7726 | 7782 | 57 | - | TCT | 14
tRNA-Trp (W) | 7792 | 7858 | 67 | - | TCA | 9
COI | 7862 | 9409 | 1548 | ATG/TAA | - | 3
tRNA-Thr (T) | 9418 | 9484 | 67 | - | TGT | 8
rrnL (16S) | 9484 | 10443 | 960 | - | - | -1
tRNA-Cys (C) | 10444 | 10503 | 60 | - | GCA | 0
rrnS (12S) | 10505 | 11216 | 712 | - | - | 1
COII | 11223 | 11804 | 582 | TTG/TAA | - | 6
tRNA-Glu (E) | 11955 | 12018 | 64 | - | TTC | 150
ND6 | 12025 | 12501 | 477 | ATG/TAA | - | 6
tRNA-Tyr (Y) | 12507 | 12573 | 67 | - | GTA | 5
tRNA-Leu(CUN) (L1) | 12585 | 12650 | 66 | - | TAG | 11
tRNA-Ser(UCN) (S2) | 12657 | 12716 | 60 | - | TGA | 6
tRNA-Leu(UUR) (L2) | 12719 | 12788 | 70 | - | TAA | 2
tRNA-Arg (R) | 12794 | 12860 | 67 | - | TCG | 5
ND5 | 12865 | 14433 | 1569 | ATG/TAG | - | 4
tRNA-Gly (G) | 14446 | 14513 | 68 | - | TCC | 12

The codon usage and relative synonymous codon usage (RSCU) values are summarized in Table 4. The most frequent amino acids in the PCGs of P. variegatus are Leucine (16.43%), Phenylalanine (13.23%), Serine (12.48%) and Isoleucine (10.67%). The frequency of Glutamine is especially low (0.69%). The codons TTA (Leucine; 12.09%) and TTT (Phenylalanine; 11.48%) are the most frequently used. At the third position of the fourfold degenerate amino acids, codons ending with T are the most frequent.

Table 4. Codon usage for the 12 mitochondrial proteins of Paragyrodactylus variegatus
Codon (AA) | N | % | RSCU || Codon (AA) | N | % | RSCU
UUU (F) | 381 | 11.48 | 1.74 || UAU (Y) | 180 | 5.42 | 1.72
UUC (F) | 58 | 1.75 | 0.26 || UAC (Y) | 29 | 0.87 | 0.28
UUA (L) | 401 | 12.09 | 4.41 || UAA (*) | 0 | 0.00 | 0
UUG (L) | 39 | 1.18 | 0.43 || UAG (*) | 0 | 0.00 | 0
CUU (L) | 68 | 2.05 | 0.75 || CAU (H) | 45 | 1.36 | 1.7
CUC (L) | 7 | 0.21 | 0.08 || CAC (H) | 8 | 0.24 | 0.3
CUA (L) | 27 | 0.81 | 0.3 || CAA (Q) | 14 | 0.42 | 1.22
CUG (L) | 3 | 0.09 | 0.03 || CAG (Q) | 9 | 0.27 | 0.78
AUU (I) | 175 | 5.27 | 1.48 || AAU (N) | 103 | 3.10 | 1.67
AUC (I) | 11 | 0.33 | 0.09 || AAC (N) | 18 | 0.54 | 0.29
AUA (I) | 168 | 5.06 | 1.42 || AAA (N) | 64 | 1.93 | 1.04
AUG (M) | 68 | 2.05 | 1 || AAG (K) | 48 | 1.45 | 1
GUU (V) | 150 | 4.52 | 2.4 || GAU (D) | 54 | 1.63 | 1.59
GUC (V) | 8 | 0.24 | 0.13 || GAC (D) | 14 | 0.42 | 0.41
GUA (V) | 81 | 2.44 | 1.3 || GAA (E) | 37 | 1.12 | 1.32
GUG (V) | 11 | 0.33 | 0.18 || GAG (E) | 19 | 0.57 | 0.68
UCU (S) | 114 | 3.44 | 2.2 || UGU (C) | 65 | 1.96 | 1.83
UCC (S) | 9 | 0.27 | 0.17 || UGC (C) | 6 | 0.18 | 0.17
UCA (S) | 65 | 1.96 | 1.26 || UGA (W) | 58 | 1.75 | 1.55
UCG (S) | 3 | 0.09 | 0.06 || UGG (W) | 17 | 0.51 | 0.45
CCU (P) | 38 | 1.15 | 2.03 || CGU (R) | 33 | 0.99 | 3
CCC (P) | 2 | 0.06 | 0.11 || CGC (R) | 4 | 0.12 | 0.36
CCA (P) | 34 | 1.02 | 1.81 || CGA (R) | 5 | 0.15 | 0.45
CCG (P) | 1 | 0.03 | 0.05 || CGG (R) | 2 | 0.06 | 0.18
ACU (T) | 58 | 1.75 | 2.37 || AGU (S) | 104 | 3.13 | 2.01
ACC (T) | 9 | 0.27 | 0.37 || AGC (S) | 12 | 0.36 | 0.23
ACA (T) | 30 | 0.90 | 1.22 || AGA (S) | 81 | 2.44 | 1.57
ACG (T) | 1 | 0.03 | 0.04 || AGG (S) | 26 | 0.78 | 0.5
GCU (A) | 33 | 0.99 | 1.97 || GGU (G) | 90 | 2.71 | 2.05
GCC (A) | 7 | 0.21 | 0.42 || GGC (G) | 18 | 0.54 | 0.41
GCA (A) | 25 | 0.75 | 1.49 || GGA (G) | 46 | 1.39 | 1.05
GCG (A) | 2 | 0.06 | 0.12 || GGG (G) | 22 | 0.66 | 0.5
A total of 3318 codons for P. variegatus were analyzed, excluding the stop codons. AA, amino acid; N, number of used codons; % = N/3318; RSCU, relative synonymous codon usage.
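For readers wanting to reproduce statistics like those in Table 4, the following is a minimal sketch (not the authors' code) of computing codon counts and RSCU values from in-frame coding sequences, grouping synonymous codons with Biopython's representation of translation table 9. The function and variable names are illustrative only.

```python
# Minimal sketch: codon counts and relative synonymous codon usage (RSCU).
# RSCU = (observed count of a codon) * (synonymous family size) / (total count
# of all synonymous codons for that amino acid); stops excluded, as in Table 4.
from collections import Counter
from Bio.Data.CodonTable import unambiguous_dna_by_id

def codon_usage(cds_list):
    counts = Counter()
    for cds in cds_list:
        s = cds.upper().replace("U", "T")
        counts.update(s[i:i + 3] for i in range(0, len(s) - len(s) % 3, 3))
    families = {}  # amino acid -> its synonymous codons under table 9
    for codon, aa in unambiguous_dna_by_id[9].forward_table.items():
        families.setdefault(aa, []).append(codon)
    rscu = {}
    for aa, codons in families.items():
        total = sum(counts[c] for c in codons)
        for c in codons:
            rscu[c] = len(codons) * counts[c] / total if total else 0.0
    return counts, rscu
```

Applied to the 12 concatenated PCGs, this kind of tally is what yields values such as RSCU = 4.41 for TTA (Leucine), since that codon dominates its six-member synonymous family.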
Ribosomal and transfer RNA genes
The lengths of the large ribosomal subunit (rrnL) and small ribosomal subunit (rrnS) genes of P. variegatus are 960 bp and 712 bp, respectively (Table 3). The A + T contents of the rrnL and rrnS are 75.0% and 75.3%, respectively. The predicted secondary structures of the rrnL and rrnS of P. variegatus are shown in Figures 2 and 3; they contain six and three structural domains, respectively. However, domain I of the rrnL lacks a large region at the 5′ end of the gene, and domain III is absent from the secondary structure of the rrnL of P. variegatus.

Figure 2. Inferred secondary structure of the mitochondrial rrnL gene for Paragyrodactylus variegatus.

Figure 3. Inferred secondary structure of the mitochondrial rrnS gene for Paragyrodactylus variegatus.

The 22 tRNA genes of P. variegatus vary in length from 57 to 70 nucleotides. The sequences of the tRNAIle and tRNAThr genes overlap with neighboring genes (Table 3). All 22 tRNAs have the typical cloverleaf secondary structure, except for tRNACys, tRNASer1 and tRNASer2, each of which has an unpaired dihydrouridine (DHU) arm.

Synonymous and nonsynonymous substitutions and genetic distance
The Ka/Ks values for all 12 PCGs of P. variegatus versus Gyrodactylus spp. are all less than 0.3. The highest average Ka/Ks value is that of ND2 (0.29), while the Ka/Ks ratios of half the PCGs are low (Ka/Ks < 0.1). The genetic distances between P. variegatus and the three reported species of Gyrodactylus (G. thymalli, G. salaris and G. derjavinoides) are much greater than those among the three species of Gyrodactylus (Figure 4). The maximum divergence occurs in the ND5 gene (48.9%) between P. variegatus and G. salaris. In addition, the genetic distances of the rRNA genes are lower than those of the protein genes (Figure 4).

Figure 4. The genetic distance of protein and rRNA genes of Paragyrodactylus variegatus and Gyrodactylus spp.

Non-coding regions
The major non-coding region is 1,093 bp in size and is highly enriched in AT (83.4%). This non-coding region can be subdivided by sequence pattern into six parts, including three junctions (Figure 5). The sequences of part I and part II are homologous, with 81.7% sequence identity. Part III contains six identical repeat units of a 40 bp sequence with some modifications: one substitution at the fifth position (the initial repeat unit), three substitutions at the 223rd, 227th and 237th positions and two insertions at the 222nd and 225th positions (the terminal repeat unit). The repeat unit of part III is able to fold into a stem-loop secondary structure. Some predicted structural elements were also found in the sequences of parts I and II (Figure 6). In addition, 30 short non-coding regions, all < 151 bp, occur in the mitochondrial genome of P. variegatus (Table 3).

Figure 5. Organization of the mitochondrial major non-coding region of Paragyrodactylus variegatus.

Figure 6. Predicted structural elements for the mitochondrial major non-coding region of Paragyrodactylus variegatus. ‘(G)’ is the variation in the identical pattern of part II.
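The genetic distances behind Figure 4 were computed in MEGA; the article does not specify which distance model was used, so the sketch below shows the simplest option, an uncorrected p-distance over the ungapped sites of a pairwise alignment. The aligned strings are hypothetical placeholders.

```python
# Minimal sketch: uncorrected p-distance between two aligned sequences,
# i.e. the proportion of differing sites, skipping alignment gaps.
def p_distance(seq1, seq2):
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a != "-" and b != "-"]
    if not pairs:
        raise ValueError("no comparable sites")
    return sum(a != b for a, b in pairs) / len(pairs)

print(p_distance("ATGTTAAGA", "ATGCTAAGT"))  # 2 differences / 9 sites ~= 0.222
```

On this scale, the reported 48.9% maximum divergence in ND5 between P. variegatus and G. salaris corresponds to nearly half of the compared sites differing.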
Conclusions
The characteristics of the mitochondrial genome of P. variegatus are notably different from those of Gyrodactylus spp., including the gene order, which is similar to that of other monopisthocotylids. The overall average genetic distance between Paragyrodactylus and Gyrodactylus, based on the rRNA and 12 protein coding genes, is markedly greater than that within Gyrodactylus. All of these features support Paragyrodactylus as a distinct genus. Considering their specific distribution and hosts, we tend towards the view of You et al. [8] that Paragyrodactylus is a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on closely related river loaches.
[ "Background", "Specimen collection and DNA extraction", "PCR and sequencing", "Sequence analysis", "Genome organization, base composition and gene order", "Protein coding genes and codon usage", "Ribosomal and transfer RNA genes", "Synonymous and nonsynonymous substitutions and genetic distance", "Non-coding regions", "Characteristics of the mitochondrial genome", "The major non-coding region", "Gene arrangements and possible evolutionary mechanisms" ]
[ "Gyrodactylids are widespread parasites of freshwater and marine fishes, typically inhabiting the skin and gills of their hosts. Their direct life-cycle and hyperviviparous method of reproduction facilitates rapid population growth. Some species are pathogenic to their host (e.g. Gyrodactylus salaris Malmberg, 1957) [1] and capable of causing high host mortality resulting in serious ecological and economical consequences [2]. Over twenty genera and 400 species of gyrodactylids have been described [3], most of them being identified by comparative morphology of the opisthaptoral hard parts. This traditional approach for identification of gyrodactylids gives limited information for detailed phylogenetic analysis. Recently, the nuclear ribosomal DNA (rDNA) and the internal transcribed spacers (ITS) of rDNA have been incorporated into the molecular taxonomy of the group [4, 5]. In addition, mitochondrial markers (COI and COII) are also confirmed to be DNA barcoding for Gyrodactylus Nordmann, 1832 [6, 7]. But more polymorphic molecular markers suitable for different taxonomic categories are still needed for studying the taxonomy and phylogeny of these parasites.\nParagyrodactylus Gvosdev and Martechov, 1953 is a genus of Gyrodactylidae comprising three nominal species, Paragyrodactylus iliensis Gvosdev and Martechov, 1953 (=P. dogieli Osmanov, 1965), Paragyrodactylus barbatuli Ergens, 1970 and Paragyrodactylus variegatus You, King, Ye and Cone, 2014, all of which infect river loaches (Nemacheilidae) inhabiting streams in central Asia [8]. The relationship between Paragyrodactylus and Gyrodactylus has been recently explored. Kritsky and Boeger reported the two genera had a close relationship based on morphological characters [9]. Bakke et al. believed the complexity of the attachment apparatus separates Paragyrodactylus from Gyrodactylus and pondered whether these differences were fundamental or a local diversification within Gyrodactylus\n[3]. Furthermore, You et al., using morphology and molecular data, presented the hypothesis that Paragyrodactylus was a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on river loaches [8]. The ambiguous relationship between Paragyrodactylus and Gyrodactylus emphasizes the need for further molecular study of these genera.\nDue to its higher rate of base substitution, maternal inheritance, evolutionary conserved gene products and low recombination [10, 11], mitochondrial genomes provide powerful markers for phylogenetic analysis, biological identification and population studies. In addition, mitochondrial genomes can provide genome-level characters such as gene order for deep-level phylogenetic analysis [12, 13]. To date, the complete mitochondrial DNA sequences of only nine monogeneans are available, including three species of Gyrodactylus.\nIn the present study, the first mitochondrial genome for Paragyrodactylus, P. variegatus, is sequenced and characterized. We report on its genome organization, base composition, gene order, codon usage, ribosomal and transfer RNA gene features and major non-coding region. Additionally, we provide a preliminary comparison of the gene arrangement within both Paragyrodactylus and Gyrodactylus.", "Specimens of P. variegatus were collected from the skin and fins of wild Homatula variegata (Dabry de Thiersant, 1874) in the Qinling Mountain region of central China. Upon capture the specimens were immediately preserved in 99% ethanol and stored at 4°C. 
The DNA from one parasite was extracted using a TIANamp Micro DNA Kit (Tiangen Biotech, Beijing, China) according to the manufacturer’s protocol.", "The complete mitochondrial genome of P. variegatus was amplified in six parts using a combination of existing primers and newly developed primers generated by primer walking (primers listed in Table 1). For short fragments (<2 kb), PCR reactions were performed in a total volume of 25 μl containing 3.0 mM MgCl2, 10 mM Tris–HCl (pH 8.3), 50 mM KCl, 0.25 mM of each dNTP, 1.25 U rTaq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 93°C, followed by 40 cycles of 10 sec at 92°C, 1.5 min at 52–54°C and 2 min at 60°C, with a final extension of 6 min at 72°C. For long fragments (>2 kb), the 25 μl PCR reaction consisted of 2.5 mM MgCl2, 2.5 μl 10 × LA PCR Buffer II (Mg2+ free), 0.4 mM of each dNTP, 1.25 U LA Taq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 94°C, followed by 40 cycles of 20 sec at 93°C, 30 sec at 53–54°C and 4–7 min at 68°C, with a final extension of 10 min at 68°C. All PCR products were purified with a PCR Purification Kit (Sangon Biotech, Shanghai, China) and sequenced using multiple primers, including those which generated the PCR product and new internal primers developed by primer walking. Table 1. List of PCR primer combinations used to amplify the mitochondrial genome of Paragyrodactylus variegatus (primer names, target genes, sequences 5′–3′ and sources; asterisks mark primers for the long PCR fragments >2 kb; reproduced in full in the Methods section above).", "Contiguous sequence fragments were assembled using SeqMan (DNAStar) and the Staden Package v1.7.0 [14].
Protein-coding genes (PCGs) and ribosomal RNA (rRNA) genes were initially identified using BLAST (Basic Local Alignment Search Tool) searches on GenBank, then by alignment with the published mitochondrial genomes of Gyrodactylus derjavinoides Malmberg, Collins, Cunningham and Jalali, 2007 (GenBank no. EU293891), G. salaris (GenBank no. DQ988931) and Gyrodactylus thymalli Zitnan, 1960 (GenBank no. EF527269). The secondary structures of the two rRNA genes were determined mainly by comparison with the published rRNA secondary structures of Dugesia japonica Ichikawa and Kawakatsu, 1964 (GenBank no. NC_016439) [15]. Protein-coding regions were translated with the echinoderm mitochondrial genetic code. The program tRNAscan-SE v1.21 (http://lowelab.ucsc.edu/tRNAscan-SE/) was used to identify transfer RNA (tRNA) genes and their structures [16], using the mito/chloroplast codon set and a cove cutoff score of one. The tRNAs not detected by tRNAscan-SE v1.21 were identified by comparison with Gyrodactylus sequences [17, 18]. Tandem Repeats Finder v4.07 was used to identify tandem repeats in non-coding regions [19]. Base composition, codon usage and genetic distances were calculated with MEGA v5.1 [20]. Nonsynonymous (Ka)/synonymous (Ks) values were estimated with KaKs_Calculator v1.2 using the MA method [21].", "The circular mitochondrial genome of P. variegatus is 14,517 bp in size (GenBank no. KM067269) and contains 12 PCGs, 22 tRNAs, two rRNAs and a single major non-coding region (NCR) (Figure 1). It lacks the ATP8 gene, and all the genes are transcribed from the same strand. The overall nucleotide composition is: T (45.8%), C (9.5%), A (30.4%), G (14.2%), with an overall A + T content of 76.3% (Table 2). Figure 1. The gene map for the mitochondrial genome of Paragyrodactylus variegatus. Table 2. Base composition of the mitochondrial genome of Paragyrodactylus variegatus. The arrangement of rRNA and protein coding genes of P. variegatus is typical for gyrodactylids. However, the gene order of some tRNA genes is different: there are three tRNAs (tRNAGln, tRNAPhe, tRNAMet) between ND4 and the major non-coding region and five tRNAs (tRNATyr, tRNALeu1, tRNASer2, tRNALeu2, tRNAArg) between ND6 and ND5 in P. variegatus, while Gyrodactylus spp. have one tRNA (tRNAPhe) and seven tRNAs (tRNATyr, tRNALeu1, tRNAGln, tRNAMet, tRNASer2, tRNALeu2, tRNAArg) in the same locations, respectively.", "The total length of all 12 PCGs is 9,990 bp. The average A + T content of the PCGs is 75.7% (Table 2), ranging from 70.9% (COI) to 82.9% (ND2). ATG is the typical start codon, except for ND1 and COII, which begin with GTG and TTG, respectively (Table 3). All PCGs terminate with the stop codon TAA, except ND5, which uses TAG. Incomplete stop codons were not observed in P.
variegatus. Table 3. The organization of the mitochondrial genome of Paragyrodactylus variegatus (gene positions, sizes, start/stop codons, anticodons and intergenic nucleotides; reproduced in full in the Results section above). The codon usage and relative synonymous codon usage (RSCU) values are summarized in Table 4. The most frequent amino acids in the PCGs of P. variegatus are Leucine (16.43%), Phenylalanine (13.23%), Serine (12.48%) and Isoleucine (10.67%). The frequency of Glutamine is especially low (0.69%). The codons TTA (Leucine; 12.09%) and TTT (Phenylalanine; 11.48%) are the most frequently used. At the third position of the fourfold degenerate amino acids, codons ending with T are the most frequent. Table 4. Codon usage for the 12 mitochondrial proteins of Paragyrodactylus variegatus (a total of 3318 codons analyzed, excluding stop codons; AA, amino acid; N, number of used codons; % = N/3318; RSCU, relative synonymous codon usage; reproduced in full in the Results section above).", "The lengths of the large ribosomal subunit (rrnL) and small ribosomal subunit (rrnS) genes of P. variegatus are 960 bp and 712 bp, respectively (Table 3). The A + T contents of the rrnL and rrnS of P. variegatus are 75.0% and 75.3%, respectively.
The predicted secondary structures of the rrnL and rrnS of P. variegatus are shown in Figures 2 and 3 and contain six and three structural domains, respectively. However, domain I of the rrnL lacks a large region at the 5′ end of the gene, and domain III is absent from the secondary structure of the rrnL of P. variegatus. Figure 2. Inferred secondary structure of the mitochondrial rrnL gene for Paragyrodactylus variegatus. Figure 3. Inferred secondary structure of the mitochondrial rrnS gene for Paragyrodactylus variegatus. The 22 tRNA genes of P. variegatus vary in length from 57 to 70 nucleotides. The sequences of the tRNAIle and tRNAThr genes overlap with neighboring genes (Table 3). All 22 tRNAs have the typical cloverleaf secondary structure, except for tRNACys, tRNASer1 and tRNASer2, each of which has an unpaired dihydrouridine (DHU) arm.", "The Ka/Ks values for all 12 PCGs of P. variegatus versus Gyrodactylus spp. are all less than 0.3. The highest average Ka/Ks value is that of ND2 (0.29), while the Ka/Ks ratios of half the PCGs are low (Ka/Ks < 0.1). The genetic distances between P. variegatus and the three reported species of Gyrodactylus (G. thymalli, G. salaris and G. derjavinoides) are much greater than those among the three species of Gyrodactylus (Figure 4). The maximum divergence occurs in the ND5 gene (48.9%) between P. variegatus and G. salaris. In addition, the genetic distances of the rRNA genes are lower than those of the protein genes (Figure 4). Figure 4. The genetic distance of protein and rRNA genes of Paragyrodactylus variegatus and Gyrodactylus spp.", "The major non-coding region is 1,093 bp in size and is highly enriched in AT (83.4%). This non-coding region can be subdivided by sequence pattern into six parts, including three junctions (Figure 5). The sequences of part I and part II are homologous, with 81.7% sequence identity. Part III contains six identical repeat units of a 40 bp sequence with some modifications: one substitution at the fifth position (the initial repeat unit), three substitutions at the 223rd, 227th and 237th positions and two insertions at the 222nd and 225th positions (the terminal repeat unit). The repeat unit of part III is able to fold into a stem-loop secondary structure. Some predicted structural elements were also found in the sequences of parts I and II (Figure 6). In addition, 30 short non-coding regions, all < 151 bp, occur in the mitochondrial genome of P. variegatus (Table 3). Figure 5. Organization of the mitochondrial major non-coding region of Paragyrodactylus variegatus. Figure 6. Predicted structural elements for the mitochondrial major non-coding region of Paragyrodactylus variegatus; ‘(G)’ is the variation in the identical pattern of part II.", "The mitochondrial genome of P. variegatus is 222 bp shorter than that of G. derjavinoides, but well within the length range of parasitic flatworms [22, 23].
Differences in the number and length of the major non-coding regions are the main factor contributing to this difference in genome size. The overall A + T content of P. variegatus is higher than that of all reported mitochondrial genomes of monogeneans. The average Ka/Ks values of the genes encoding the three subunits of cytochrome c oxidase and the cytochrome b subunit of the cytochrome bc1 complex are lower than those of the genes encoding subunits of the NADH dehydrogenase complex (with the exception of ND1), especially the COI and Cytb genes. This demonstrates that the COI, COII, COIII and Cytb genes are more strongly affected by purifying selection than the NADH dehydrogenase subunit genes (except ND1), which is similar to the findings of Huyse et al. [24] for Gyrodactylus derjavinoides. The degree of functional constraint might explain the corresponding degree of sequence variation among the protein genes. The low Ka/Ks values and genetic distances of the COI and Cytb genes also imply that both genes could be useful markers for analyses at higher taxonomic levels. Although the sizes of the rrnL and rrnS are very similar among Gyrodactylus spp. and P. variegatus, the sequence similarities are not high. These discrepancies may reflect the variable helices or loops that exist in the rRNA structure.", "The mitochondrial genome of P. variegatus includes one major non-coding region, which has been frequently observed in other invertebrates. It has a high A + T content and tandem repeat sequences, features not found in the large non-coding regions (>500 bp) of the published mitochondrial genomes of monopisthocotyleans. We found that the length and number of tandem repeat units are similar to those observed in Microcotyle sebastis Goto, 1894 [25], contradicting the study of Zhang et al. [26], which reported that the length and number of repeated motifs differed between the mitochondrial non-coding regions of monopisthocotylids and polyopisthocotylids.\nA non-coding region with high A + T content and pertinent elements usually corresponds to the control region for replication and transcription initiation. In the major non-coding region of P. variegatus, we found identical patterns within part I and part II. The patterns have only two nucleotide modifications, with 2.3% sequence discrepancy; however, the overall difference between the whole sequences of part I and part II is 18.3%. The highly conserved part of the non-coding region is believed to have a functional role. The patterns contain poly-T stretches, a stem-loop structure and some surrounding structural elements (an A + T-rich segment and G[A]nT) (Figure 6), which are typical of control regions in insects [27–30]. Although typical control regions are not readily identifiable within the mitochondrial genomes of flatworms [17], the predicted secondary structure, conserved elements, repeat sequences and high A + T content of the major non-coding region of P. variegatus imply that this region might play an important role in the initiation of replication and transcription.\nIn addition, through alignment of the non-coding region sequences of Gyrodactylus spp. and P. variegatus, we found some conserved motifs in each species, with an overall similarity among them of 72.1%. The conserved motifs (>5 bp) mainly occurred in the A + T-rich segment and the G + A-rich segment.
However, whether the conserved motifs are present in other species of Gyrodactylidae needs to be assessed with a broader taxon sample.", "Five available mitochondrial gene arrangements of monopisthocotylids are shown in Figure 7. The arrangement of all rRNA and protein-coding genes is identical across all samples; the tRNA genes, however, differ in arrangement, showing several translocations, particularly long-range translocations. No notable rearrangement hot spot could be found in the gene arrangements of monopisthocotylids, whereas the major changes of gene arrangement among polyopisthocotylids are confined to the COIII-ND5 junction, a gene rearrangement hot spot [26]. Two gene clusters (tRNAAsn-tRNAPro-tRNAIle-tRNALys and rrnL-tRNACys-rrnS) were found to be conserved in all mitochondrial genomes of monopisthocotyleans. Nevertheless, tRNALys and tRNACys were found in the gene rearrangement hot spot of polyopisthocotyleans. The conserved gene clusters could potentially serve as markers to help define the Polyopisthocotylea and Monopisthocotylea within the Monogenea, as well as providing information for a deeper understanding of the evolution of monogenean mitochondrial genomes.\nFigure 7. Gene arrangements of ten monogenean species. Gene and genome size are not to scale. All genes are transcribed in the same direction (from left to right). Red and black boxes show the conserved gene clusters and the gene rearrangement hot spot, respectively. Non-coding regions (>500 bp) are denoted by NCR. The shared gene arrangement of the three Gyrodactylus species (G. salaris, G. derjavinoides and G. thymalli) is shown as Gyrodactylus spp.\nGene rearrangement is mainly explained by three mechanisms: the Duplication and Random Loss Model [31, 32], the Duplication and Nonrandom Loss Model [33] and the Recombination Model [34]. The variation in mitochondrial gene order between P. variegatus and Gyrodactylus spp. (involving tRNAGln, tRNAMet and the NCR) can be explained most parsimoniously by a combination of the duplication and random loss model and the recombination model. We propose that the process comprised three steps: a tandem duplication, random loss, and subsequent intramitochondrial recombination (Figure 8). We prefer this mechanism for the following reasons: the duplicated NCRs in the mitochondrial genomes of most metazoans can be explained by the duplication and random loss model alone, but the stepwise mechanism described above better accounts for the duplicated NCRs and the long-range translocation while the rest of the genes remain in their original state. Furthermore, there are several examples of mitochondrial recombination in animals [35–38], and a similar mechanism accounts for the gene rearrangements of other metazoans [39, 40]. In addition, the tRNAMet genes of Gyrodactylus spp. are clearly homologous to the tRNAMet gene of P. variegatus, with 80.6% sequence similarity. However, the tRNAGln region has low sequence similarity (66.2%) between the mitochondrial genomes of Gyrodactylus spp. and P.
variegatus, so we cannot be certain that the translocation event occurred. As more mitochondrial genomes of gyrodactylids become available, all of the above hypotheses should be tested against the observed gene orders.\nFigure 8. Possible mechanism of mitochondrial gene rearrangements occurring in Paragyrodactylus variegatus and Gyrodactylus spp.\n" ]
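The three-step scenario proposed above (tandem duplication, random loss, intramitochondrial recombination) can be made concrete with a toy gene-order model. The sketch below is an illustration of the mechanics only: the gene labels are abbreviated, and the duplication and translocation coordinates are arbitrary choices, not the inferred historical events.

```python
import random

# Toy illustration of the rearrangement mechanisms discussed above.
# Gene orders are simplified placeholders, not the full mitochondrial maps.

def tandem_duplicate(order, i, j):
    """Duplicate the block order[i:j] in place (duplication step)."""
    return order[:j] + order[i:j] + order[j:]

def random_loss(order):
    """Randomly delete surplus copies of each duplicated gene (loss step)."""
    result = list(order)
    for gene in set(order):
        while result.count(gene) > 1:
            copies = [k for k, g in enumerate(result) if g == gene]
            del result[random.choice(copies)]
    return result

def recombine(order, i, j, k):
    """Move block order[i:j] to position k (recombination-style translocation)."""
    block = order[i:j]
    rest = order[:i] + order[j:]
    return rest[:k] + block + rest[k:]

if __name__ == "__main__":
    random.seed(1)
    ancestral = ["ND4", "trnQ", "trnF", "trnM", "NCR", "ATP6"]
    dup = tandem_duplicate(ancestral, 1, 5)   # duplicate the trnQ..NCR block
    reduced = random_loss(dup)                # keep one copy of each gene
    derived = recombine(reduced, 1, 2, 3)     # translocate one tRNA
    print("ancestral:        ", ancestral)
    print("after duplication:", dup)
    print("after random loss:", reduced)
    print("after recombination:", derived)
```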
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Specimen collection and DNA extraction", "PCR and sequencing", "Sequence analysis", "Results", "Genome organization, base composition and gene order", "Protein coding genes and codon usage", "Ribosomal and transfer RNA genes", "Synonymous and nonsynonymous substitutions and genetic distance", "Non-coding regions", "Discussion", "Characteristics of the mitochondrial genome", "The major non-coding region", "Gene arrangements and possible evolutionary mechanisms", "Conclusions" ]
[ "Gyrodactylids are widespread parasites of freshwater and marine fishes, typically inhabiting the skin and gills of their hosts. Their direct life-cycle and hyperviviparous method of reproduction facilitates rapid population growth. Some species are pathogenic to their host (e.g. Gyrodactylus salaris Malmberg, 1957) [1] and capable of causing high host mortality resulting in serious ecological and economical consequences [2]. Over twenty genera and 400 species of gyrodactylids have been described [3], most of them being identified by comparative morphology of the opisthaptoral hard parts. This traditional approach for identification of gyrodactylids gives limited information for detailed phylogenetic analysis. Recently, the nuclear ribosomal DNA (rDNA) and the internal transcribed spacers (ITS) of rDNA have been incorporated into the molecular taxonomy of the group [4, 5]. In addition, mitochondrial markers (COI and COII) are also confirmed to be DNA barcoding for Gyrodactylus Nordmann, 1832 [6, 7]. But more polymorphic molecular markers suitable for different taxonomic categories are still needed for studying the taxonomy and phylogeny of these parasites.\nParagyrodactylus Gvosdev and Martechov, 1953 is a genus of Gyrodactylidae comprising three nominal species, Paragyrodactylus iliensis Gvosdev and Martechov, 1953 (=P. dogieli Osmanov, 1965), Paragyrodactylus barbatuli Ergens, 1970 and Paragyrodactylus variegatus You, King, Ye and Cone, 2014, all of which infect river loaches (Nemacheilidae) inhabiting streams in central Asia [8]. The relationship between Paragyrodactylus and Gyrodactylus has been recently explored. Kritsky and Boeger reported the two genera had a close relationship based on morphological characters [9]. Bakke et al. believed the complexity of the attachment apparatus separates Paragyrodactylus from Gyrodactylus and pondered whether these differences were fundamental or a local diversification within Gyrodactylus\n[3]. Furthermore, You et al., using morphology and molecular data, presented the hypothesis that Paragyrodactylus was a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on river loaches [8]. The ambiguous relationship between Paragyrodactylus and Gyrodactylus emphasizes the need for further molecular study of these genera.\nDue to its higher rate of base substitution, maternal inheritance, evolutionary conserved gene products and low recombination [10, 11], mitochondrial genomes provide powerful markers for phylogenetic analysis, biological identification and population studies. In addition, mitochondrial genomes can provide genome-level characters such as gene order for deep-level phylogenetic analysis [12, 13]. To date, the complete mitochondrial DNA sequences of only nine monogeneans are available, including three species of Gyrodactylus.\nIn the present study, the first mitochondrial genome for Paragyrodactylus, P. variegatus, is sequenced and characterized. We report on its genome organization, base composition, gene order, codon usage, ribosomal and transfer RNA gene features and major non-coding region. Additionally, we provide a preliminary comparison of the gene arrangement within both Paragyrodactylus and Gyrodactylus.", " Specimen collection and DNA extraction Specimens of P. variegatus were collected from the skin and fins of wild Homatula variegata (Dabry de Thiersant, 1874) in the Qinling Mountain region of central China. 
Upon capture, the specimens were immediately preserved in 99% ethanol and stored at 4°C. The DNA from one parasite was extracted using a TIANamp Micro DNA Kit (Tiangen Biotech, Beijing, China) according to the manufacturer’s protocol.\n PCR and sequencing The complete mitochondrial genome of P. variegatus was amplified in six parts using a combination of existing primers and newly developed primers generated by primer walking (Table 1). For short fragments (<2 kb), PCR reactions were performed in a total volume of 25 μl containing 3.0 mM MgCl2, 10 mM Tris–HCl (pH 8.3), 50 mM KCl, 0.25 mM of each dNTP, 1.25 U rTaq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 93°C, followed by 40 cycles of 10 sec at 92°C, 1.5 min at 52–54°C and 2 min at 60°C, with a final extension of 6 min at 72°C. For long fragments (>2 kb), the 25 μl PCR reaction consisted of 2.5 mM MgCl2, 2.5 μl 10 × LA PCR Buffer II (Mg2+ free), 0.4 mM of each dNTP, 1.25 U LA Taq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 94°C, followed by 40 cycles of 20 sec at 93°C, 30 sec at 53–54°C and 4–7 min at 68°C, with a final extension of 10 min at 68°C. All PCR products were purified with a PCR Purification Kit (Sangon Biotech, Shanghai, China) and sequenced using multiple primers, including those which generated the PCR product and new internal primers developed by primer walking.\nTable 1. List of PCR primer combinations used to amplify the mitochondrial genome of Paragyrodactylus variegatus (primer name | gene | sequence 5′–3′ | source):\n1 F (UND1F)* | ND1 | CGHAAGGGNCCNAAHAAGGT | Huyse et al. (2007) [17]\n1R* | COI | TAAACTTCTGGATGWCCAAAAAAT | This study\n2 F (UNAD5F) | ND5 | TTRGARGCNATGCGBGCHCC | Huyse et al. (2007) [17]\n2R | COIII | YCARCCTGAGCGAATTCARGCKGG | This study\n3 F (U12SF)* | rrnS | CAGTGCCAGCAKYYGCGGTTA | Huyse et al. (2007) [17]\n3R (UNAD5R)* | ND5 | GGWGCMCGCATNGCYTCYAA | Huyse et al. (2007) [17]\n4 F | ND5 | ATGTGATTTTTAGAGTTATGCTT | This study\n4R (6RNAD5) | ND5 | AGGHTCTCTAACTTGGAAAGWTAGTAT | Huyse et al. (2008) [24]\n5 F* | COIII | TCTTCWRTTACAGYAACDTCCTA | This study\n5R* | ND1 | AAACCTCATACCTAACTGCG | This study\n6 F* | COI | CTCCTTTATCTGGTGCTCTGGG | This study\n6R* | rrnS | GACGGGCGGTATGTACCTCTCT | This study\nF236 | COIII | TTGTTTTTGATTCCGTGA | This study\nF930 | CYTB | TTATCTTTGTGGTTCGTTCG | This study\nF1568 | CYTB | AGGTCAAAGATAGGTGGGTTAG | This study\nF2174 | ND4 | TATAGGAATTTTACCATTATTTA | This study\nF2855 | ND4 | CATGGCTTATCAGTTTG | This study\nF3302 | tRNAGln | GGTAGCATAGGAGGTAAGGTTC | This study\nF8330 | COI | TTTAGCGGGTATTTCAAGTA | This study\nF8920 | COI | GTATTATTCACTATAGGAGGGGTA | This study\nR4662 | ATP6 | ACGAAATAATAAAAATATAAAAAGT | This study\nR5283 | ND2 | TCCAGAAACTAACAATAAAGCAC | This study\nR6003 | tRNAVal | ACCTAATGCTTGTAATG | This study\nR6599 | ND1 | AAACCTCATACCTAACTGCG | This study\nR7212 | tRNAPro | GCAGCCCTATCAGTAAGACC | This study\nR7941 | COI | ACCAAGCCCTACAAAACCTG | This study\nR10014 | rrnL | TCCCCATTCAGACAATCCTC | This study\nR10652 | rrnS | GCTGGCACTGTGACTTATCCTA | This study\nR11375 | COII | ATTGTAGGTAAAAAGGTTCAC | This study\nR12090 | ND6 | AAAAAGACAATAAGACCCACTA | This study\nR12752 | tRNALeu(UUR) | AACACTTTGTATTTGACGCT | This study\nR14014 | ND5 | AGGTTCAAGTAATGGTAGGTCT | This study\n*The PCR primers for the long PCR fragment (>2 kb).\n Sequence analysis Contiguous sequence fragments were assembled using SeqMan (DNAStar) and the Staden Package v1.7.0 [14]. Protein-coding genes (PCGs) and ribosomal RNA (rRNA) genes were initially identified using BLAST (Basic Local Alignment Search Tool) searches on GenBank, then by alignment with the published mitochondrial genomes of Gyrodactylus derjavinoides Malmberg, Collins, Cunningham and Jalali, 2007 (GenBank no. EU293891), G. salaris (GenBank no. DQ988931) and Gyrodactylus thymalli Zitnan, 1960 (GenBank no. EF527269). The secondary structures of the two rRNA genes were determined mainly by comparison with the published rRNA secondary structures of Dugesia japonica Ichikawa and Kawakatsu, 1964 (GenBank no. NC_016439) [15]. Protein-coding regions were translated with the echinoderm mitochondrial genetic code.
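The echinoderm mitochondrial genetic code corresponds to NCBI translation table 9 (shared by flatworm mitochondria), which Biopython exposes directly. If Biopython is available, the translation step can be reproduced as below; the ORF shown is a made-up fragment, not taken from P. variegatus.

```python
# Minimal sketch: translating a mitochondrial ORF with NCBI table 9,
# the echinoderm/flatworm mitochondrial code. Requires Biopython.
from Bio.Seq import Seq

orf = Seq("ATGTTTAGAAGATTATAA")   # hypothetical fragment, not from this genome
protein = orf.translate(table=9)  # under table 9, AGA/AGG encode Ser
print(protein)                    # -> MFSSL* (stop shown as '*')
```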
The program tRNAscan-SE v1.21 (http://lowelab.ucsc.edu/tRNAscan-SE/) was used to identify transfer RNA (tRNA) genes and their structures [16], using the mito/chloroplast genetic code and setting the cove cutoff score to one. The tRNAs that were not detected by tRNAscan-SE v1.21 were identified by comparing the sequence to Gyrodactylus [17, 18]. Tandem Repeats Finder v4.07 was used to identify tandem repeats in non-coding regions [19]. Base composition, codon usage and genetic distances were calculated with MEGA v5.1 [20]. The nonsynonymous (Ka)/synonymous (Ks) values were estimated with KaKs_Calculator v1.2 using the MA method [21].", "Specimens of P. variegatus were collected from the skin and fins of wild Homatula variegata (Dabry de Thiersant, 1874) in the Qinling Mountain region of central China. Upon capture, the specimens were immediately preserved in 99% ethanol and stored at 4°C. The DNA from one parasite was extracted using a TIANamp Micro DNA Kit (Tiangen Biotech, Beijing, China) according to the manufacturer’s protocol.", "The complete mitochondrial genome of P. variegatus was amplified in six parts using a combination of existing primers and newly developed primers generated by primer walking (Table 1). For short fragments (<2 kb), PCR reactions were performed in a total volume of 25 μl containing 3.0 mM MgCl2, 10 mM Tris–HCl (pH 8.3), 50 mM KCl, 0.25 mM of each dNTP, 1.25 U rTaq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 93°C, followed by 40 cycles of 10 sec at 92°C, 1.5 min at 52–54°C and 2 min at 60°C, with a final extension of 6 min at 72°C. For long fragments (>2 kb), the 25 μl PCR reaction consisted of 2.5 mM MgCl2, 2.5 μl 10 × LA PCR Buffer II (Mg2+ free), 0.4 mM of each dNTP, 1.25 U LA Taq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: an initial denaturation of 1 min at 94°C, followed by 40 cycles of 20 sec at 93°C, 30 sec at 53–54°C and 4–7 min at 68°C, with a final extension of 10 min at 68°C. All PCR products were purified with a PCR Purification Kit (Sangon Biotech, Shanghai, China) and sequenced using multiple primers, including those which generated the PCR product and new internal primers developed by primer walking.\nTable 1. List of PCR primer combinations used to amplify the mitochondrial genome of Paragyrodactylus variegatus (primer name | gene | sequence 5′–3′ | source):\n1 F (UND1F)* | ND1 | CGHAAGGGNCCNAAHAAGGT | Huyse et al. (2007) [17]\n1R* | COI | TAAACTTCTGGATGWCCAAAAAAT | This study\n2 F (UNAD5F) | ND5 | TTRGARGCNATGCGBGCHCC | Huyse et al. (2007) [17]\n2R | COIII | YCARCCTGAGCGAATTCARGCKGG | This study\n3 F (U12SF)* | rrnS | CAGTGCCAGCAKYYGCGGTTA | Huyse et al. (2007) [17]\n3R (UNAD5R)* | ND5 | GGWGCMCGCATNGCYTCYAA | Huyse et al. (2007) [17]\n4 F | ND5 | ATGTGATTTTTAGAGTTATGCTT | This study\n4R (6RNAD5) | ND5 | AGGHTCTCTAACTTGGAAAGWTAGTAT | Huyse et al. (2008) [24]\n5 F* | COIII | TCTTCWRTTACAGYAACDTCCTA | This study\n5R* | ND1 | AAACCTCATACCTAACTGCG | This study\n6 F* | COI | CTCCTTTATCTGGTGCTCTGGG | This study\n6R* | rrnS | GACGGGCGGTATGTACCTCTCT | This study\nF236 | COIII | TTGTTTTTGATTCCGTGA | This study\nF930 | CYTB | TTATCTTTGTGGTTCGTTCG | This study\nF1568 | CYTB | AGGTCAAAGATAGGTGGGTTAG | This study\nF2174 | ND4 | TATAGGAATTTTACCATTATTTA | This study\nF2855 | ND4 | CATGGCTTATCAGTTTG | This study\nF3302 | tRNAGln | GGTAGCATAGGAGGTAAGGTTC | This study\nF8330 | COI | TTTAGCGGGTATTTCAAGTA | This study\nF8920 | COI | GTATTATTCACTATAGGAGGGGTA | This study\nR4662 | ATP6 | ACGAAATAATAAAAATATAAAAAGT | This study\nR5283 | ND2 | TCCAGAAACTAACAATAAAGCAC | This study\nR6003 | tRNAVal | ACCTAATGCTTGTAATG | This study\nR6599 | ND1 | AAACCTCATACCTAACTGCG | This study\nR7212 | tRNAPro | GCAGCCCTATCAGTAAGACC | This study\nR7941 | COI | ACCAAGCCCTACAAAACCTG | This study\nR10014 | rrnL | TCCCCATTCAGACAATCCTC | This study\nR10652 | rrnS | GCTGGCACTGTGACTTATCCTA | This study\nR11375 | COII | ATTGTAGGTAAAAAGGTTCAC | This study\nR12090 | ND6 | AAAAAGACAATAAGACCCACTA | This study\nR12752 | tRNALeu(UUR) | AACACTTTGTATTTGACGCT | This study\nR14014 | ND5 | AGGTTCAAGTAATGGTAGGTCT | This study\n*The PCR primers for the long PCR fragment (>2 kb).", "Contiguous sequence fragments were assembled using SeqMan (DNAStar) and the Staden Package v1.7.0 [14]. Protein-coding genes (PCGs) and ribosomal RNA (rRNA) genes were initially identified using BLAST (Basic Local Alignment Search Tool) searches on GenBank, then by alignment with the published mitochondrial genomes of Gyrodactylus derjavinoides Malmberg, Collins, Cunningham and Jalali, 2007 (GenBank no. EU293891), G. salaris (GenBank no. DQ988931) and Gyrodactylus thymalli Zitnan, 1960 (GenBank no. EF527269). The secondary structures of the two rRNA genes were determined mainly by comparison with the published rRNA secondary structures of Dugesia japonica Ichikawa and Kawakatsu, 1964 (GenBank no. NC_016439) [15]. Protein-coding regions were translated with the echinoderm mitochondrial genetic code. The program tRNAscan-SE v1.21 (http://lowelab.ucsc.edu/tRNAscan-SE/) was used to identify transfer RNA (tRNA) genes and their structures [16], using the mito/chloroplast genetic code and setting the cove cutoff score to one. The tRNAs that were not detected by tRNAscan-SE v1.21 were identified by comparing the sequence to Gyrodactylus [17, 18]. Tandem Repeats Finder v4.07 was used to identify tandem repeats in non-coding regions [19]. Base composition, codon usage and genetic distances were calculated with MEGA v5.1 [20]. The nonsynonymous (Ka)/synonymous (Ks) values were estimated with KaKs_Calculator v1.2 using the MA method [21].", " Genome organization, base composition and gene order The circular mitochondrial genome of P. variegatus is 14,517 bp in size (GenBank no. KM067269) and contains 12 PCGs, 22 tRNAs, two rRNAs and a single major non-coding region (NCR) (Figure 1). It lacks the ATP8 gene, and all genes are transcribed from the same strand. The overall nucleotide composition is T (45.8%), C (9.5%), A (30.4%) and G (14.2%), with an overall A + T content of 76.3% (Table 2).\nFigure 1. The gene map for the mitochondrial genome of Paragyrodactylus variegatus.\nTable 2. Base composition of the mitochondrial genome of Paragyrodactylus variegatus.\nThe arrangement of the rRNA and protein-coding genes of P. variegatus is typical for gyrodactylids.
However, the gene order of some tRNA genes differs: there are three tRNAs (tRNAGln, tRNAPhe, tRNAMet) between ND4 and the major non-coding region and five tRNAs (tRNATyr, tRNALeu1, tRNASer2, tRNALeu2, tRNAArg) between ND6 and ND5 in P. variegatus, whereas Gyrodactylus spp. have one tRNA (tRNAPhe) and seven tRNAs (tRNATyr, tRNALeu1, tRNAGln, tRNAMet, tRNASer2, tRNALeu2, tRNAArg) at the same locations, respectively.\n Protein coding genes and codon usage The total length of all 12 PCGs is 9,990 bp. The average A + T content of the PCGs is 75.7% (Table 2), ranging from 70.9% (COI) to 82.9% (ND2). ATG is the typical start codon, except for ND1 and COII, which begin with GTG and TTG, respectively (Table 3). All PCGs terminate with the stop codon TAA, except ND5, which uses TAG. Incomplete stop codons were not observed in P. variegatus.\nTable 3. The organization of the mitochondrial genome of Paragyrodactylus variegatus (gene | position from–to | size (bp) | start/stop codon or anticodon | intergenic nucleotides preceding the gene):\nCOIII | 1–639 | 639 | ATG/TAA | /\ntRNA-His (H) | 651–713 | 63 | GTG | 11\nCYTB | 719–1798 | 1080 | ATG/TAA | 5\nND4L | 1803–2057 | 255 | ATG/TAA | 4\nND4 | 2030–3238 | 1209 | ATG/TAA | -28\ntRNA-Gln (Q) | 3245–3311 | 67 | TTG | 6\ntRNA-Phe (F) | 3331–3397 | 67 | GAA | 19\ntRNA-Met (M) | 3410–3476 | 67 | CAT | 12\nNCR | 3477–4569 | 1093 | | 0\nATP6 | 4570–5082 | 513 | ATG/TAA | 0\nND2 | 5084–5959 | 876 | ATG/TAA | 1\ntRNA-Val (V) | 5974–6040 | 67 | TAC | 14\ntRNA-Ala (A) | 6047–6112 | 66 | TGC | 6\ntRNA-Asp (D) | 6114–6178 | 65 | GTC | 1\nND1 | 6183–7073 | 891 | GTG/TAA | 4\ntRNA-Asn (N) | 7087–7155 | 69 | GTT | 13\ntRNA-Pro (P) | 7159–7221 | 63 | TGG | 3\ntRNA-Ile (I) | 7216–7283 | 68 | GAT | -6\ntRNA-Lys (K) | 7288–7352 | 65 | CTT | 4\nND3 | 7361–7711 | 351 | ATG/TAA | 8\ntRNA-Ser(AGN) (S1) | 7726–7782 | 57 | TCT | 14\ntRNA-Trp (W) | 7792–7858 | 67 | TCA | 9\nCOI | 7862–9409 | 1548 | ATG/TAA | 3\ntRNA-Thr (T) | 9418–9484 | 67 | TGT | 8\nrrnL (16S) | 9484–10443 | 960 | | -1\ntRNA-Cys (C) | 10444–10503 | 60 | GCA | 0\nrrnS (12S) | 10505–11216 | 712 | | 1\nCOII | 11223–11804 | 582 | TTG/TAA | 6\ntRNA-Glu (E) | 11955–12018 | 64 | TTC | 150\nND6 | 12025–12501 | 477 | ATG/TAA | 6\ntRNA-Tyr (Y) | 12507–12573 | 67 | GTA | 5\ntRNA-Leu(CUN) (L1) | 12585–12650 | 66 | TAG | 11\ntRNA-Ser(UCN) (S2) | 12657–12716 | 60 | TGA | 6\ntRNA-Leu(UUR) (L2) | 12719–12788 | 70 | TAA | 2\ntRNA-Arg (R) | 12794–12860 | 67 | TCG | 5\nND5 | 12865–14433 | 1569 | ATG/TAG | 4\ntRNA-Gly (G) | 14446–14513 | 68 | TCC | 12\nThe codon usage and relative synonymous codon usage (RSCU) values are summarized in Table 4. The most frequent amino acids in the PCGs of P. variegatus are Leucine (16.43%), Phenylalanine (13.23%), Serine (12.48%) and Isoleucine (10.67%). The frequency of Glutamine is especially low (0.69%).
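The RSCU values reported in Table 4 are each codon's observed count divided by the mean count of its synonymous family. A minimal sketch of the calculation is given below, seeded with the Phe and Leu counts from Table 4 (all other codons omitted for brevity); it reproduces the published values of 1.74 for UUU and 4.41 for UUA.

```python
from collections import defaultdict

# Minimal RSCU sketch: observed codon count divided by the mean count of
# its synonymous family. Only the Phe/Leu slice of the code is included.

SYNONYMS = {
    "UUU": "F", "UUC": "F",
    "UUA": "L", "UUG": "L", "CUU": "L", "CUC": "L", "CUA": "L", "CUG": "L",
}

def rscu(counts: dict) -> dict:
    by_aa = defaultdict(list)
    for codon, aa in SYNONYMS.items():
        by_aa[aa].append(codon)
    values = {}
    for aa, codons in by_aa.items():
        total = sum(counts.get(c, 0) for c in codons)
        expected = total / len(codons)  # equal-usage expectation
        for c in codons:
            values[c] = counts.get(c, 0) / expected if expected else 0.0
    return values

if __name__ == "__main__":
    # Counts taken from Table 4 of this study.
    table4_counts = {"UUU": 381, "UUC": 58, "UUA": 401, "UUG": 39,
                     "CUU": 68, "CUC": 7, "CUA": 27, "CUG": 3}
    for codon, v in sorted(rscu(table4_counts).items()):
        print(codon, round(v, 2))   # UUU -> 1.74, UUA -> 4.41, ...
```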
The codons TTA (Leucine; 12.09%) and TTT (Phenylalanine; 11.48%) are the most frequently used. At the third position of fourfold degenerate codons, codons ending in T are the most frequent.\nTable 4. Codon usage for the 12 mitochondrial proteins of Paragyrodactylus variegatus (codon (amino acid) | N | % | RSCU):\nUUU (F) | 381 | 11.48 | 1.74\nUAU (Y) | 180 | 5.42 | 1.72\nUUC (F) | 58 | 1.75 | 0.26\nUAC (Y) | 29 | 0.87 | 0.28\nUUA (L) | 401 | 12.09 | 4.41\nUAA (*) | 0 | 0.00 | 0\nUUG (L) | 39 | 1.18 | 0.43\nUAG (*) | 0 | 0.00 | 0\nCUU (L) | 68 | 2.05 | 0.75\nCAU (H) | 45 | 1.36 | 1.7\nCUC (L) | 7 | 0.21 | 0.08\nCAC (H) | 8 | 0.24 | 0.3\nCUA (L) | 27 | 0.81 | 0.3\nCAA (Q) | 14 | 0.42 | 1.22\nCUG (L) | 3 | 0.09 | 0.03\nCAG (Q) | 9 | 0.27 | 0.78\nAUU (I) | 175 | 5.27 | 1.48\nAAU (N) | 103 | 3.10 | 1.67\nAUC (I) | 11 | 0.33 | 0.09\nAAC (N) | 18 | 0.54 | 0.29\nAUA (I) | 168 | 5.06 | 1.42\nAAA (N) | 64 | 1.93 | 1.04\nAUG (M) | 68 | 2.05 | 1\nAAG (K) | 48 | 1.45 | 1\nGUU (V) | 150 | 4.52 | 2.4\nGAU (D) | 54 | 1.63 | 1.59\nGUC (V) | 8 | 0.24 | 0.13\nGAC (D) | 14 | 0.42 | 0.41\nGUA (V) | 81 | 2.44 | 1.3\nGAA (E) | 37 | 1.12 | 1.32\nGUG (V) | 11 | 0.33 | 0.18\nGAG (E) | 19 | 0.57 | 0.68\nUCU (S) | 114 | 3.44 | 2.2\nUGU (C) | 65 | 1.96 | 1.83\nUCC (S) | 9 | 0.27 | 0.17\nUGC (C) | 6 | 0.18 | 0.17\nUCA (S) | 65 | 1.96 | 1.26\nUGA (W) | 58 | 1.75 | 1.55\nUCG (S) | 3 | 0.09 | 0.06\nUGG (W) | 17 | 0.51 | 0.45\nCCU (P) | 38 | 1.15 | 2.03\nCGU (R) | 33 | 0.99 | 3\nCCC (P) | 2 | 0.06 | 0.11\nCGC (R) | 4 | 0.12 | 0.36\nCCA (P) | 34 | 1.02 | 1.81\nCGA (R) | 5 | 0.15 | 0.45\nCCG (P) | 1 | 0.03 | 0.05\nCGG (R) | 2 | 0.06 | 0.18\nACU (T) | 58 | 1.75 | 2.37\nAGU (S) | 104 | 3.13 | 2.01\nACC (T) | 9 | 0.27 | 0.37\nAGC (S) | 12 | 0.36 | 0.23\nACA (T) | 30 | 0.90 | 1.22\nAGA (S) | 81 | 2.44 | 1.57\nACG (T) | 1 | 0.03 | 0.04\nAGG (S) | 26 | 0.78 | 0.5\nGCU (A) | 33 | 0.99 | 1.97\nGGU (G) | 90 | 2.71 | 2.05\nGCC (A) | 7 | 0.21 | 0.42\nGGC (G) | 18 | 0.54 | 0.41\nGCA (A) | 25 | 0.75 | 1.49\nGGA (G) | 46 | 1.39 | 1.05\nGCG (A) | 2 | 0.06 | 0.12\nGGG (G) | 22 | 0.66 | 0.5\nA total of 3,318 codons for P. variegatus were analyzed, excluding the stop codons. AA, amino acid; N, number of times the codon is used; %, N/3318; RSCU, relative synonymous codon usage.\n Ribosomal and transfer RNA genes The lengths of the large ribosomal subunit (rrnL) and small ribosomal subunit (rrnS) genes of P. variegatus are 960 bp and 712 bp, respectively (Table 3). The A + T contents of the rrnL and rrnS of P. variegatus are 75.0% and 75.3%, respectively.
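Base-composition figures such as these A + T contents reduce to simple counting. A minimal sketch is shown below, run on a short hypothetical sequence rather than the rrnL or rrnS sequence itself.

```python
# Minimal sketch: nucleotide composition and A+T content of a sequence.
# The input is a hypothetical placeholder, not a sequence from this study.

def composition(seq: str) -> dict:
    seq = seq.upper()
    counts = {base: seq.count(base) for base in "ACGT"}
    total = sum(counts.values())
    percents = {base: 100.0 * n / total for base, n in counts.items()}
    percents["A+T"] = percents["A"] + percents["T"]
    return percents

if __name__ == "__main__":
    demo = "ATTTAATGGTACTTAATTTAGCATTTA"
    for label, value in composition(demo).items():
        print(f"{label}: {value:.1f}%")
```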
The predicted secondary structures of the rrnL and rrnS of P. variegatus are shown in Figure 2 and Figure 3; they contain six and three structural domains, respectively. However, domain I of the rrnL lacks a large region at the 5′ end of the gene, and domain III is absent from the secondary structure of the rrnL of P. variegatus.\nFigure 2. Inferred secondary structure of the mitochondrial rrnL gene for Paragyrodactylus variegatus.\nFigure 3. Inferred secondary structure of the mitochondrial rrnS gene for Paragyrodactylus variegatus.\nThe 22 tRNA genes of P. variegatus vary in length from 57 to 70 nucleotides. The tRNAIle and tRNAThr genes overlap with neighboring genes (Table 3). All 22 tRNAs have the typical cloverleaf secondary structure, except for tRNACys, tRNASer1 and tRNASer2, each of which has an unpaired dihydrouridine (DHU) arm.\n Synonymous and nonsynonymous substitutions and genetic distance The Ka/Ks values for all 12 PCGs of P. variegatus versus Gyrodactylus spp. are all less than 0.3. The highest average Ka/Ks value is that of ND2 (0.29), while the Ka/Ks ratios of half the PCGs are low (Ka/Ks < 0.1). The genetic distances between P. variegatus and the three reported species of Gyrodactylus (G. thymalli, G. salaris and G. derjavinoides) are much greater than those among the three Gyrodactylus species (Figure 4). The maximum divergence occurs in the ND5 gene (48.9%), between P. variegatus and G. salaris. In addition, the genetic distances of the rRNA genes are lower than those of the protein-coding genes (Figure 4).\nFigure 4. The genetic distance of protein and rRNA genes of Paragyrodactylus variegatus and Gyrodactylus spp.\n Non-coding regions The major non-coding region is 1,093 bp in size and is highly enriched in AT (83.4%). It can be subdivided into six parts, including three junctions, on the basis of sequence pattern (Figure 5). The sequences of part I and part II are homologous, with 81.7% sequence identity. Part III contains six nearly identical repeat units of a 40 bp sequence, with a few modifications: one substitution at the fifth position (the initial repeat unit), three substitutions at the 223rd, 227th and 237th positions and two insertions at the 222nd and 225th positions (the terminal repeat unit). The repeat unit of part III can fold into a stem-loop secondary structure.
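Repeat structures like the six 40 bp units of part III were located with Tandem Repeats Finder (see Methods). The sketch below illustrates the underlying idea for the simplified case of perfect repeats of a known unit length; the demo sequence and the 10 bp unit are hypothetical, and real detection must also tolerate the substitutions and insertions described above.

```python
# Minimal sketch: find perfect tandem repeats of a fixed unit length.
# Real tools (e.g. Tandem Repeats Finder) also allow mismatches and indels,
# which the ~40 bp units described above actually contain.

def tandem_repeats(seq: str, unit_len: int, min_copies: int = 2):
    hits = []
    i = 0
    while i + unit_len * min_copies <= len(seq):
        unit = seq[i:i + unit_len]
        copies = 1
        while seq[i + copies * unit_len: i + (copies + 1) * unit_len] == unit:
            copies += 1
        if copies >= min_copies:
            hits.append((i, unit, copies))
            i += copies * unit_len
        else:
            i += 1
    return hits

if __name__ == "__main__":
    demo = "GG" + "ATTTACGTTA" * 4 + "CCAT"   # hypothetical 10 bp unit, 4 copies
    for start, unit, copies in tandem_repeats(demo, unit_len=10):
        print(f"unit {unit} x{copies} at position {start}")
```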
Some predicted structural elements were also found in the sequences of parts I and II (Figure 6). In addition, 30 short non-coding regions, all < 151 bp, occur in the mitochondrial genome of P. variegatus (Table 3).\nFigure 5. Organization of the mitochondrial major non-coding region of Paragyrodactylus variegatus.\nFigure 6. Predicted structural elements for the mitochondrial major non-coding region of Paragyrodactylus variegatus. ‘(G)’ is the variation in the identical pattern of part II.", "The circular mitochondrial genome of P. variegatus is 14,517 bp in size (GenBank no. KM067269) and contains 12 PCGs, 22 tRNAs, two rRNAs and a single major non-coding region (NCR) (Figure 1). It lacks the ATP8 gene, and all genes are transcribed from the same strand. The overall nucleotide composition is T (45.8%), C (9.5%), A (30.4%) and G (14.2%), with an overall A + T content of 76.3% (Table 2).\nFigure 1. The gene map for the mitochondrial genome of Paragyrodactylus variegatus.\nTable 2. Base composition of the mitochondrial genome of Paragyrodactylus variegatus.\nThe arrangement of the rRNA and protein-coding genes of P. variegatus is typical for gyrodactylids. However, the gene order of some tRNA genes differs: there are three tRNAs (tRNAGln, tRNAPhe, tRNAMet) between ND4 and the major non-coding region and five tRNAs (tRNATyr, tRNALeu1, tRNASer2, tRNALeu2, tRNAArg) between ND6 and ND5 in P. variegatus, whereas Gyrodactylus spp. have one tRNA (tRNAPhe) and seven tRNAs (tRNATyr, tRNALeu1, tRNAGln, tRNAMet, tRNASer2, tRNALeu2, tRNAArg) at the same locations, respectively.",
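The tRNA rearrangement described in this entry can be checked mechanically by comparing neighbour relationships in the two gene orders. The sketch below encodes only the two junctions discussed above, in abbreviated labels; it is an illustration of the comparison, not the complete mitochondrial maps.

```python
# Minimal sketch: compare two gene orders and report genes whose
# downstream neighbour differs. Only the ND4-NCR and ND6-ND5 junctions
# discussed in the text are encoded, with abbreviated labels.

P_VARIEGATUS = ["ND4", "trnQ", "trnF", "trnM", "NCR", "ATP6",
                "ND6", "trnY", "trnL1", "trnS2", "trnL2", "trnR", "ND5"]
GYRODACTYLUS = ["ND4", "trnF", "NCR", "ATP6",
                "ND6", "trnY", "trnL1", "trnQ", "trnM", "trnS2", "trnL2",
                "trnR", "ND5"]

def neighbour_map(order):
    """Map each gene to the gene immediately downstream of it."""
    return {gene: order[i + 1] for i, gene in enumerate(order[:-1])}

def moved_genes(a, b):
    na, nb = neighbour_map(a), neighbour_map(b)
    return sorted(g for g in set(na) & set(nb) if na[g] != nb[g])

if __name__ == "__main__":
    print("genes with a changed downstream neighbour:",
          moved_genes(P_VARIEGATUS, GYRODACTYLUS))
```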
"The total length of all 12 PCGs is 9,990 bp. The average A + T content of the PCGs is 75.7% (Table 2), ranging from 70.9% (COI) to 82.9% (ND2). ATG is the typical start codon, except for ND1 and COII, which begin with GTG and TTG, respectively (Table 3). All PCGs terminate with the stop codon TAA, except ND5, which uses TAG. Incomplete stop codons were not observed in P. variegatus.\nTable 3. The organization of the mitochondrial genome of Paragyrodactylus variegatus (gene | position from–to | size (bp) | start/stop codon or anticodon | intergenic nucleotides preceding the gene):\nCOIII | 1–639 | 639 | ATG/TAA | /\ntRNA-His (H) | 651–713 | 63 | GTG | 11\nCYTB | 719–1798 | 1080 | ATG/TAA | 5\nND4L | 1803–2057 | 255 | ATG/TAA | 4\nND4 | 2030–3238 | 1209 | ATG/TAA | -28\ntRNA-Gln (Q) | 3245–3311 | 67 | TTG | 6\ntRNA-Phe (F) | 3331–3397 | 67 | GAA | 19\ntRNA-Met (M) | 3410–3476 | 67 | CAT | 12\nNCR | 3477–4569 | 1093 | | 0\nATP6 | 4570–5082 | 513 | ATG/TAA | 0\nND2 | 5084–5959 | 876 | ATG/TAA | 1\ntRNA-Val (V) | 5974–6040 | 67 | TAC | 14\ntRNA-Ala (A) | 6047–6112 | 66 | TGC | 6\ntRNA-Asp (D) | 6114–6178 | 65 | GTC | 1\nND1 | 6183–7073 | 891 | GTG/TAA | 4\ntRNA-Asn (N) | 7087–7155 | 69 | GTT | 13\ntRNA-Pro (P) | 7159–7221 | 63 | TGG | 3\ntRNA-Ile (I) | 7216–7283 | 68 | GAT | -6\ntRNA-Lys (K) | 7288–7352 | 65 | CTT | 4\nND3 | 7361–7711 | 351 | ATG/TAA | 8\ntRNA-Ser(AGN) (S1) | 7726–7782 | 57 | TCT | 14\ntRNA-Trp (W) | 7792–7858 | 67 | TCA | 9\nCOI | 7862–9409 | 1548 | ATG/TAA | 3\ntRNA-Thr (T) | 9418–9484 | 67 | TGT | 8\nrrnL (16S) | 9484–10443 | 960 | | -1\ntRNA-Cys (C) | 10444–10503 | 60 | GCA | 0\nrrnS (12S) | 10505–11216 | 712 | | 1\nCOII | 11223–11804 | 582 | TTG/TAA | 6\ntRNA-Glu (E) | 11955–12018 | 64 | TTC | 150\nND6 | 12025–12501 | 477 | ATG/TAA | 6\ntRNA-Tyr (Y) | 12507–12573 | 67 | GTA | 5\ntRNA-Leu(CUN) (L1) | 12585–12650 | 66 | TAG | 11\ntRNA-Ser(UCN) (S2) | 12657–12716 | 60 | TGA | 6\ntRNA-Leu(UUR) (L2) | 12719–12788 | 70 | TAA | 2\ntRNA-Arg (R) | 12794–12860 | 67 | TCG | 5\nND5 | 12865–14433 | 1569 | ATG/TAG | 4\ntRNA-Gly (G) | 14446–14513 | 68 | TCC | 12\nThe codon usage and relative synonymous codon usage (RSCU) values are summarized in Table 4. The most frequent amino acids in the PCGs of P. variegatus are Leucine (16.43%), Phenylalanine (13.23%), Serine (12.48%) and Isoleucine (10.67%). The frequency of Glutamine is especially low (0.69%). The codons TTA (Leucine; 12.09%) and TTT (Phenylalanine; 11.48%) are the most frequently used. At the third position of fourfold degenerate codons, codons ending in T are the most frequent.\nTable 4. Codon usage for the 12 mitochondrial proteins of Paragyrodactylus variegatus (codon (amino acid) | N | % | RSCU):\nUUU (F) | 381 | 11.48 | 1.74\nUAU (Y) | 180 | 5.42 | 1.72\nUUC (F) | 58 | 1.75 | 0.26\nUAC (Y) | 29 | 0.87 | 0.28\nUUA (L) | 401 | 12.09 | 4.41\nUAA (*) | 0 | 0.00 | 0\nUUG (L) | 39 | 1.18 | 0.43\nUAG (*) | 0 | 0.00 | 0\nCUU (L) | 68 | 2.05 | 0.75\nCAU (H) | 45 | 1.36 | 1.7\nCUC (L) | 7 | 0.21 | 0.08\nCAC (H) | 8 | 0.24 | 0.3\nCUA (L) | 27 | 0.81 | 0.3\nCAA (Q) | 14 | 0.42 | 1.22\nCUG (L) | 3 | 0.09 | 0.03\nCAG (Q) | 9 | 0.27 | 0.78\nAUU (I) | 175 | 5.27 | 1.48\nAAU (N) | 103 | 3.10 | 1.67\nAUC (I) | 11 | 0.33 | 0.09\nAAC (N) | 18 | 0.54 | 0.29\nAUA (I) | 168 | 5.06 | 1.42\nAAA (N) | 64 | 1.93 | 1.04\nAUG (M) | 68 | 2.05 | 1\nAAG (K) | 48 | 1.45 | 1\nGUU (V) | 150 | 4.52 | 2.4\nGAU (D) | 54 | 1.63 | 1.59\nGUC (V) | 8 | 0.24 | 0.13\nGAC (D) | 14 | 0.42 | 0.41\nGUA (V) | 81 | 2.44 | 1.3\nGAA (E) | 37 | 1.12 | 1.32\nGUG (V) | 11 | 0.33 | 0.18\nGAG (E) | 19 | 0.57 | 0.68\nUCU (S) | 114 | 3.44 | 2.2\nUGU (C) | 65 | 1.96 | 1.83\nUCC (S) | 9 | 0.27 | 0.17\nUGC (C) | 6 | 0.18 | 0.17\nUCA (S) | 65 | 1.96 | 1.26\nUGA (W) | 58 | 1.75 | 1.55\nUCG (S) | 3 | 0.09 | 0.06\nUGG (W) | 17 | 0.51 | 0.45\nCCU (P) | 38 | 1.15 | 2.03\nCGU (R) | 33 | 0.99 | 3\nCCC (P) | 2 | 0.06 | 0.11\nCGC (R) | 4 | 0.12 | 0.36\nCCA (P) | 34 | 1.02 | 1.81\nCGA (R) | 5 | 0.15 | 0.45\nCCG (P) | 1 | 0.03 | 0.05\nCGG (R) | 2 | 0.06 | 0.18\nACU (T) | 58 | 1.75 | 2.37\nAGU (S) | 104 | 3.13 | 2.01\nACC (T) | 9 | 0.27 | 0.37\nAGC (S) | 12 | 0.36 | 0.23\nACA (T) | 30 | 0.90 | 1.22\nAGA (S) | 81 | 2.44 | 1.57\nACG (T) | 1 | 0.03 | 0.04\nAGG (S) | 26 | 0.78 | 0.5\nGCU (A) | 33 | 0.99 | 1.97\nGGU (G) | 90 | 2.71 | 2.05\nGCC (A) | 7 | 0.21 | 0.42\nGGC (G) | 18 | 0.54 | 0.41\nGCA (A) | 25 | 0.75 | 1.49\nGGA (G) | 46 | 1.39 | 1.05\nGCG (A) | 2 | 0.06 | 0.12\nGGG (G) | 22 | 0.66 | 0.5\nA total of 3,318 codons for P. variegatus were analyzed, excluding the stop codons.
AA, amino acid; N, number of times the codon is used; %, N/3318; RSCU, relative synonymous codon usage.", "The lengths of the large ribosomal subunit (rrnL) and small ribosomal subunit (rrnS) genes of P. variegatus are 960 bp and 712 bp, respectively (Table 3). The A + T contents of the rrnL and rrnS of P. variegatus are 75.0% and 75.3%, respectively. The predicted secondary structures of the rrnL and rrnS of P. variegatus are shown in Figure 2 and Figure 3; they contain six and three structural domains, respectively. However, domain I of the rrnL lacks a large region at the 5′ end of the gene, and domain III is absent from the secondary structure of the rrnL of P. variegatus.\nFigure 2. Inferred secondary structure of the mitochondrial rrnL gene for Paragyrodactylus variegatus.\nFigure 3. Inferred secondary structure of the mitochondrial rrnS gene for Paragyrodactylus variegatus.\nThe 22 tRNA genes of P. variegatus vary in length from 57 to 70 nucleotides. The tRNAIle and tRNAThr genes overlap with neighboring genes (Table 3). All 22 tRNAs have the typical cloverleaf secondary structure, except for tRNACys, tRNASer1 and tRNASer2, each of which has an unpaired dihydrouridine (DHU) arm.", "The Ka/Ks values for all 12 PCGs of P. variegatus versus Gyrodactylus spp. are all less than 0.3. The highest average Ka/Ks value is that of ND2 (0.29), while the Ka/Ks ratios of half the PCGs are low (Ka/Ks < 0.1). The genetic distances between P. variegatus and the three reported species of Gyrodactylus (G. thymalli, G. salaris and G. derjavinoides) are much greater than those among the three Gyrodactylus species (Figure 4). The maximum divergence occurs in the ND5 gene (48.9%), between P. variegatus and G. salaris. In addition, the genetic distances of the rRNA genes are lower than those of the protein-coding genes (Figure 4).\nFigure 4. The genetic distance of protein and rRNA genes of Paragyrodactylus variegatus and Gyrodactylus spp.", "The major non-coding region is 1,093 bp in size and is highly enriched in AT (83.4%). It can be subdivided into six parts, including three junctions, on the basis of sequence pattern (Figure 5). The sequences of part I and part II are homologous, with 81.7% sequence identity. Part III contains six nearly identical repeat units of a 40 bp sequence, with a few modifications: one substitution at the fifth position (the initial repeat unit), three substitutions at the 223rd, 227th and 237th positions and two insertions at the 222nd and 225th positions (the terminal repeat unit). The repeat unit of part III can fold into a stem-loop secondary structure. Some predicted structural elements were also found in the sequences of parts I and II (Figure 6). In addition, 30 short non-coding regions, all < 151 bp, occur in the mitochondrial genome of P. variegatus (Table 3).\nFigure 5. Organization of the mitochondrial major non-coding region of Paragyrodactylus variegatus.\nFigure 6. Predicted structural elements for the mitochondrial major non-coding region of Paragyrodactylus variegatus. ‘(G)’ is the variation in the identical pattern of part II.",
" Characteristics of the mitochondrial genome The mitochondrial genome of P. variegatus is 222 bp shorter than that of G. derjavinoides, but well within the length range of parasitic flatworms [22, 23]. Differences in the number and length of the major non-coding regions are the main factor contributing to this difference in genome size. The overall A + T content of P. variegatus is higher than that of all reported mitochondrial genomes of monogeneans. The average Ka/Ks values of the genes encoding the three subunits of cytochrome c oxidase and the cytochrome b subunit of the cytochrome bc1 complex are lower than those of the genes encoding subunits of the NADH dehydrogenase complex (with the exception of ND1); this is especially true of the COI and Cytb genes. This feature demonstrates that the COI, COII, COIII and Cytb genes are more strongly affected by purifying selection than the NADH dehydrogenase subunit genes (except ND1), which is similar to the findings of Huyse et al. [24] for Gyrodactylus derjavinoides. The degree of functional constraint may explain the corresponding degree of sequence variation among the protein-coding genes. The low Ka/Ks values and genetic distances of the COI and Cytb genes also imply that both genes could serve as useful markers for analyses at higher taxonomic levels. Although the sizes of the rrnL and rrnS are very similar among Gyrodactylus spp. and P. variegatus, the sequence similarities are not high. These discrepancies may reflect the variable helices and loops in the rRNA structures.\n The major non-coding region The mitochondrial genome of P. variegatus includes one major non-coding region, a feature frequently observed in other invertebrates.
It contains a high A + T content and tandem repeat sequences, which are not found in the large non-coding regions (>500 bp) of the published mitochondrial genomes of monopisthocotyleans. We found that the length and number of tandem repeat units are similar to those observed in Microcotyle sebastis Goto, 1894 [25], contradicting the study of Zhang et al. [26], which reported that the length and number of repeated motifs differed between the mitochondrial non-coding regions of monopisthocotylids and polyopisthocotylids.\nA non-coding region with a high A + T content and pertinent elements usually corresponds to the control region for replication and transcription initiation. In the major non-coding region of P. variegatus, we found identical patterns within part I and part II. The patterns have only two nucleotide modifications, a 2.3% sequence discrepancy; by contrast, the overall difference between the whole sequences of part I and part II is 18.3%. The highly conserved part of a non-coding region is believed to have a functional role. The patterns contain poly-T stretches, a stem-loop structure and some surrounding structural elements (an A + T-rich segment and G[A]nT) (Figure 6), which are typical of control regions in insects [27–30]. Although typical control regions are not readily identifiable within the mitochondrial genomes of flatworms [17], the predicted secondary structure, conserved elements, repeat sequences and high A + T content of the major non-coding region of P. variegatus imply that this region might play an important role in the initiation of replication and transcription.\nIn addition, through alignment of the non-coding region sequences of Gyrodactylus spp. and P. variegatus, we found some conserved motifs in each species, with an overall similarity among them of 72.1%. The conserved motifs (>5 bp) mainly occurred in the A + T-rich segment and the G + A-rich segment. However, whether the conserved motifs are present in other species of Gyrodactylidae needs to be assessed with a broader taxon sample.
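Elements such as the poly-T stretches and the G[A]nT motif map naturally onto regular expressions, which is one way to scan a candidate control region for them. In the sketch below, the scanned fragment is hypothetical and the minimum run lengths are arbitrary illustration choices, not thresholds from this study.

```python
import re

# Minimal sketch: scan a candidate control region for poly-T stretches
# and G(A)nT elements of the kind described above. The sequence is a
# hypothetical placeholder, not the P. variegatus non-coding region.

def scan_motifs(seq: str):
    seq = seq.upper()
    polyt = [(m.start(), m.group()) for m in re.finditer(r"T{5,}", seq)]
    gant = [(m.start(), m.group()) for m in re.finditer(r"GA{2,}T", seq)]
    return polyt, gant

if __name__ == "__main__":
    demo = "ATTTTTTTAGAAATCCGAAAAATTTTTTG"
    polyt, gant = scan_motifs(demo)
    print("poly-T runs:", polyt)
    print("G(A)nT elements:", gant)
```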
 Gene arrangements and possible evolutionary mechanisms Five available mitochondrial gene arrangements of monopisthocotylids are shown in Figure 7. The arrangement of all rRNA and protein-coding genes is identical across all samples; the tRNA genes, however, differ in arrangement, showing several translocations, particularly long-range translocations. No notable rearrangement hot spot could be found in the gene arrangements of monopisthocotylids, whereas the major changes of gene arrangement among polyopisthocotylids are confined to the COIII-ND5 junction, a gene rearrangement hot spot [26]. Two gene clusters (tRNAAsn-tRNAPro-tRNAIle-tRNALys and rrnL-tRNACys-rrnS) were found to be conserved in all mitochondrial genomes of monopisthocotyleans. Nevertheless, tRNALys and tRNACys were found in the gene rearrangement hot spot of polyopisthocotyleans. The conserved gene clusters could potentially serve as markers to help define the Polyopisthocotylea and Monopisthocotylea within the Monogenea, as well as providing information for a deeper understanding of the evolution of monogenean mitochondrial genomes.\nFigure 7. Gene arrangements of ten monogenean species. Gene and genome size are not to scale. All genes are transcribed in the same direction (from left to right). Red and black boxes show the conserved gene clusters and the gene rearrangement hot spot, respectively. Non-coding regions (>500 bp) are denoted by NCR. The shared gene arrangement of the three Gyrodactylus species (G. salaris, G. derjavinoides and G. thymalli) is shown as Gyrodactylus spp.\nGene rearrangement is mainly explained by three mechanisms: the Duplication and Random Loss Model [31, 32], the Duplication and Nonrandom Loss Model [33] and the Recombination Model [34]. The variation in mitochondrial gene order between P. variegatus and Gyrodactylus spp. (involving tRNAGln, tRNAMet and the NCR) can be explained most parsimoniously by a combination of the duplication and random loss model and the recombination model. We propose that the process comprised three steps: a tandem duplication, random loss, and subsequent intramitochondrial recombination (Figure 8).
Gene rearrangements can mainly be explained by three mechanisms: the duplication and random loss model [31, 32], the duplication and nonrandom loss model [33] and the recombination model [34]. The variation in mitochondrial gene order (tRNAGln, tRNAMet and the NCR) between P. variegatus and Gyrodactylus spp. can be explained by the duplication and random loss model and the recombination model together, under the most parsimonious scenario. We assume that the process comprises three steps: one tandem duplication and random loss, followed by intramitochondrial recombination (Figure 8). We prefer this mechanism for the following reasons: duplicate NCRs in the mitochondrial genomes of most metazoans can be explained by the duplication and random loss model, but the stepwise mechanism described above better accounts for the duplicated NCRs and the long-range translocation while the remaining genes stay in their original state. Furthermore, there are several examples of mitochondrial recombination in animals [35–38], and a similar mechanism accounts for the gene rearrangements of other metazoans [39, 40]. In addition, the tRNAMet genes of Gyrodactylus spp. are clearly homologous to the tRNAMet gene of P. variegatus, with 80.6% sequence similarity. However, the tRNAGln region has low sequence similarity (66.2%) between the mitochondrial genomes of Gyrodactylus spp. and P. variegatus, so we cannot be certain that the translocation event happened. As more mitochondrial genomes of gyrodactylids become available, all of the above hypotheses should be tested against the observed gene orders.

Figure 8. Possible mechanism of mitochondrial gene rearrangements occurring in Paragyrodactylus variegatus and Gyrodactylus spp.
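The first two steps of this scenario (tandem duplication, then random loss of one copy of each duplicated gene) can be illustrated on a toy gene order. The segment duplicated and the copies lost below are illustrative assumptions, and the final intramitochondrial recombination step is not modeled.

```python
# Sketch of the duplication and random loss model on a toy gene order.
# This does not reconstruct the actual ancestral gyrodactylid genome;
# genes and the duplicated segment are illustrative.

import random

def tandem_duplicate(order, start, end):
    """Insert a tandem copy of order[start:end] immediately after it."""
    return order[:end] + order[start:end] + order[end:]

def random_loss(order, duplicated_genes):
    """For each duplicated gene, randomly delete one of its two copies."""
    out = list(order)
    for gene in duplicated_genes:
        copies = [i for i, g in enumerate(out) if g == gene]
        out.pop(random.choice(copies))  # lose one copy at random
    return out

ancestral = ["ND4", "Q", "M", "F", "NCR", "ATP6"]       # toy order
duplicated = tandem_duplicate(ancestral, 1, 5)          # ... Q M F NCR Q M F NCR ...
derived = random_loss(duplicated, ["Q", "M", "F", "NCR"])
print(derived)  # one of several possible post-loss gene orders
```

Depending on which copies are lost, different derived orders result, which is why the model can generate tRNA translocations while leaving the protein-coding genes in place.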
Conclusions: The characteristics of the mitochondrial genome of P. variegatus are notably different from those of Gyrodactylus spp., including the gene order, which is similar to that of other monopisthocotylids. The overall average genetic distance between Paragyrodactylus and Gyrodactylus, based on the rRNA and 12 protein-coding genes, is remarkably greater than that within Gyrodactylus. All of these features support Paragyrodactylus as a distinct genus. Considering their specific distribution and hosts, we tend towards the view of You et al. [8] that Paragyrodactylus is a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on closely related river loaches.
[ null, "methods", null, null, null, "results", null, null, null, null, null, "discussion", null, null, null, "conclusions" ]
[ "\nParagyrodactylus variegatus\n", "Mitochondrial genome", "\nGyrodactylus\n", "Gyrodactylidae", "Monogenea", "\nHomatula variegata\n", "China" ]
Background: Gyrodactylids are widespread parasites of freshwater and marine fishes, typically inhabiting the skin and gills of their hosts. Their direct life-cycle and hyperviviparous mode of reproduction facilitate rapid population growth. Some species are pathogenic to their hosts (e.g. Gyrodactylus salaris Malmberg, 1957) [1] and capable of causing high host mortality, with serious ecological and economic consequences [2]. Over twenty genera and 400 species of gyrodactylids have been described [3], most of them identified by comparative morphology of the opisthaptoral hard parts. This traditional approach to identifying gyrodactylids gives limited information for detailed phylogenetic analysis. Recently, nuclear ribosomal DNA (rDNA) and the internal transcribed spacers (ITS) of rDNA have been incorporated into the molecular taxonomy of the group [4, 5]. In addition, mitochondrial markers (COI and COII) have been confirmed as DNA barcodes for Gyrodactylus Nordmann, 1832 [6, 7]. However, more polymorphic molecular markers suitable for different taxonomic categories are still needed for studying the taxonomy and phylogeny of these parasites.

Paragyrodactylus Gvosdev and Martechov, 1953 is a genus of Gyrodactylidae comprising three nominal species, Paragyrodactylus iliensis Gvosdev and Martechov, 1953 (=P. dogieli Osmanov, 1965), Paragyrodactylus barbatuli Ergens, 1970 and Paragyrodactylus variegatus You, King, Ye and Cone, 2014, all of which infect river loaches (Nemacheilidae) inhabiting streams in central Asia [8]. The relationship between Paragyrodactylus and Gyrodactylus has recently been explored. Kritsky and Boeger reported that the two genera have a close relationship based on morphological characters [9]. Bakke et al. believed that the complexity of the attachment apparatus separates Paragyrodactylus from Gyrodactylus and pondered whether these differences were fundamental or a local diversification within Gyrodactylus [3]. Furthermore, You et al., using morphology and molecular data, presented the hypothesis that Paragyrodactylus is a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on river loaches [8]. The ambiguous relationship between Paragyrodactylus and Gyrodactylus emphasizes the need for further molecular study of these genera.

Due to their higher rate of base substitution, maternal inheritance, evolutionarily conserved gene products and low recombination [10, 11], mitochondrial genomes provide powerful markers for phylogenetic analysis, biological identification and population studies. In addition, mitochondrial genomes can provide genome-level characters, such as gene order, for deep-level phylogenetic analysis [12, 13]. To date, the complete mitochondrial DNA sequences of only nine monogeneans are available, including three species of Gyrodactylus. In the present study, the first mitochondrial genome for Paragyrodactylus, P. variegatus, is sequenced and characterized. We report on its genome organization, base composition, gene order, codon usage, ribosomal and transfer RNA gene features and major non-coding region. Additionally, we provide a preliminary comparison of the gene arrangement within both Paragyrodactylus and Gyrodactylus.

Methods: Specimen collection and DNA extraction

Specimens of P. variegatus were collected from the skin and fins of wild Homatula variegata (Dabry de Thiersant, 1874) in the Qinling Mountain region of central China.
Upon capture, the specimens were immediately preserved in 99% ethanol and stored at 4°C. The DNA from one parasite was extracted using a TIANamp Micro DNA Kit (Tiangen Biotech, Beijing, China) according to the manufacturer's protocol.

PCR and sequencing

The complete mitochondrial genome of P. variegatus was amplified in six parts using a combination of existing primers and newly developed primers generated by primer walking (primers listed in Table 1). For short fragments (<2 kb), PCR reactions were performed in a total volume of 25 μl containing 3.0 mM MgCl2, 10 mM Tris–HCl (pH 8.3), 50 mM KCl, 0.25 mM of each dNTP, 1.25 U rTaq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: initial denaturation for 1 min at 93°C, followed by 40 cycles of 10 sec at 92°C, 1.5 min at 52–54°C and 2 min at 60°C, with a final extension of 6 min at 72°C. For long fragments (>2 kb), the 25 μl PCR reaction consisted of 2.5 mM MgCl2, 2.5 μl 10× LA PCR Buffer II (Mg2+ free), 0.4 mM of each dNTP, 1.25 U LA Taq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer and 45 ng gDNA. Cycling conditions were: initial denaturation for 1 min at 94°C, followed by 40 cycles of 20 sec at 93°C, 30 sec at 53–54°C and 4–7 min at 68°C, with a final extension of 10 min at 68°C. All PCR products were purified with a PCR Purification Kit (Sangon Biotech, Shanghai, China) and sequenced using multiple primers, including those which generated the PCR product and new internal primers developed by primer walking.

Table 1. List of PCR primer combinations used to amplify the mitochondrial genome of Paragyrodactylus variegatus

Primer name | Gene | Sequence (5′–3′) | Source
1F (UND1F)* | ND1 | CGHAAGGGNCCNAAHAAGGT | Huyse et al. (2007) [17]
1R* | COI | TAAACTTCTGGATGWCCAAAAAAT | This study
2F (UNAD5F) | ND5 | TTRGARGCNATGCGBGCHCC | Huyse et al. (2007) [17]
2R | COIII | YCARCCTGAGCGAATTCARGCKGG | This study
3F (U12SF)* | rrnS | CAGTGCCAGCAKYYGCGGTTA | Huyse et al. (2007) [17]
3R (UNAD5R)* | ND5 | GGWGCMCGCATNGCYTCYAA | Huyse et al. (2007) [17]
4F | ND5 | ATGTGATTTTTAGAGTTATGCTT | This study
4R (6RNAD5) | ND5 | AGGHTCTCTAACTTGGAAAGWTAGTAT | Huyse et al. (2008) [24]
5F* | COIII | TCTTCWRTTACAGYAACDTCCTA | This study
5R* | ND1 | AAACCTCATACCTAACTGCG | This study
6F* | COI | CTCCTTTATCTGGTGCTCTGGG | This study
6R* | rrnS | GACGGGCGGTATGTACCTCTCT | This study
F236 | COIII | TTGTTTTTGATTCCGTGA | This study
F930 | CYTB | TTATCTTTGTGGTTCGTTCG | This study
F1568 | CYTB | AGGTCAAAGATAGGTGGGTTAG | This study
F2174 | ND4 | TATAGGAATTTTACCATTATTTA | This study
F2855 | ND4 | CATGGCTTATCAGTTTG | This study
F3302 | tRNAGln | GGTAGCATAGGAGGTAAGGTTC | This study
F8330 | COI | TTTAGCGGGTATTTCAAGTA | This study
F8920 | COI | GTATTATTCACTATAGGAGGGGTA | This study
R4662 | ATP6 | ACGAAATAATAAAAATATAAAAAGT | This study
R5283 | ND2 | TCCAGAAACTAACAATAAAGCAC | This study
R6003 | tRNAVal | ACCTAATGCTTGTAATG | This study
R6599 | ND1 | AAACCTCATACCTAACTGCG | This study
R7212 | tRNAPro | GCAGCCCTATCAGTAAGACC | This study
R7941 | COI | ACCAAGCCCTACAAAACCTG | This study
R10014 | rrnL | TCCCCATTCAGACAATCCTC | This study
R10652 | rrnS | GCTGGCACTGTGACTTATCCTA | This study
R11375 | COII | ATTGTAGGTAAAAAGGTTCAC | This study
R12090 | ND6 | AAAAAGACAATAAGACCCACTA | This study
R12752 | tRNALeu(UUR) | AACACTTTGTATTTGACGCT | This study
R14014 | ND5 | AGGTTCAAGTAATGGTAGGTCT | This study

*PCR primers for the long PCR fragments (>2 kb).
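For quick reference, the two cycling profiles can be summarized as a small data structure. This is only a restatement of the conditions above, not software used in the study.

```python
# Summary of the two PCR cycling profiles described in the text.
# Values are transcribed from the protocol; ranges reflect the
# per-fragment adjustments reported by the authors.

pcr_profiles = {
    "short_fragment_<2kb": {
        "initial_denaturation": ("93C", "1 min"),
        "cycles": 40,
        "denaturation": ("92C", "10 s"),
        "annealing": ("52-54C", "1.5 min"),
        "extension": ("60C", "2 min"),
        "final_extension": ("72C", "6 min"),
    },
    "long_fragment_>2kb": {
        "initial_denaturation": ("94C", "1 min"),
        "cycles": 40,
        "denaturation": ("93C", "20 s"),
        "annealing": ("53-54C", "30 s"),
        "extension": ("68C", "4-7 min"),
        "final_extension": ("68C", "10 min"),
    },
}
```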
Sequence analysis

Contiguous sequence fragments were assembled using SeqMan (DNAStar) and the Staden Package v1.7.0 [14].
Protein-coding genes (PCGs) and ribosomal RNA (rRNA) genes were initially identified using BLAST (Basic Local Alignment Search Tool) searches on GenBank, then by alignment with the published mitochondrial genomes of Gyrodactylus derjavinoides Malmberg, Collins, Cunningham and Jalali, 2007 (GenBank no. EU293891), G. salaris (GenBank no. DQ988931) and Gyrodactylus thymalli Zitnan, 1960 (GenBank no. EF527269). The secondary structures of the two rRNA genes were determined mainly by comparison with the published rRNA secondary structures of Dugesia japonica Ichikawa and Kawakatsu, 1964 (GenBank no. NC_016439) [15]. Protein-coding regions were translated with the echinoderm mitochondrial genetic code. The program tRNAscan-SE v1.21 (http://lowelab.ucsc.edu/tRNAscan-SE/) was used to identify transfer RNA (tRNA) genes and their structures [16], using the mito/chloroplast model and setting the cove cutoff score to one. tRNAs not detected by tRNAscan-SE v1.21 were identified by comparing the sequence to Gyrodactylus [17, 18]. Tandem Repeats Finder v4.07 was used to identify tandem repeats in non-coding regions [19]. Base composition, codon usage and genetic distances were calculated with MEGA v5.1 [20]. Nonsynonymous (Ka)/synonymous (Ks) substitution ratios were estimated with KaKs_Calculator v1.2 using the MA (model averaging) method [21].
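As an illustration of the translation step, Biopython exposes the echinoderm/flatworm mitochondrial code as NCBI translation table 9. The short open reading frame below is a hypothetical placeholder, not a real P. variegatus gene.

```python
# Sketch: translating a coding sequence with NCBI translation table 9
# ("Echinoderm Mitochondrial; Flatworm Mitochondrial"), in which
# AAA -> Asn, AGA -> Ser and TGA -> Trp, unlike the standard code.

from Bio.Seq import Seq

orf = Seq("ATGTTTAAAAGATGA")        # placeholder coding sequence
protein = orf.translate(table=9)    # echinoderm/flatworm mito code
print(protein)                      # MFNSW under table 9
```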
For short fragments (<2 kb), PCR reactions were performed in a total volume of 25 μl, containing 3.0 mM MgCl2, 10 mM Tris–HCl (pH 8.3), 50 mM KCl, 0.25 mM of each dNTP , 1.25 U rTaq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer, 45 ng gDNA. Cycling conditions were: an initial denaturation for 1 min at 93°C, followed by 40 cycles of 10 sec at 92°C, 1.5 min at 52–54°C, 2 min at 60°C, and final extension of 6 min at 72°C. For long fragments (>2 kb), the 25 μl PCR reaction consisted of 2.5 mM MgCl2, 2.5 μl 10 × LA PCR Buffer II (Mg2+ free), 0.4 mM of each dNTP, 1.25 U LA Taq polymerase (TaKaRa, Dalian, China), 0.4 μM of each primer, 45 ng gDNA. Cycling conditions were: an initial denaturation for 1 min at 94°C, followed by 40 cycles of 20 sec at 93°C, 30 sec at 53–54°C, 4–7 min at 68°C, and final extension of 10 min at 68°C. All PCR products were purified with a PCR Purification Kit (Sangon Biotech, Shanghai, China) and sequenced using multiple primers including those which generated the PCR product and new internal primers developed by primer walking.Table 1 List of PCR primer combinations used to amplify the mitochondrial genome of Paragyrodactylus variegatus Primer nameGeneSequence(5′ – 3′)Source1 F(UND1F)*ND1CGHAAGGGNCCNAAHAAGGTHuyse et al. (2007) [17]1R*COITAAACTTCTGGATGWCCAAAAAATThis study2 F(UNAD5F)ND5TTRGARGCNATGCGBGCHCCHuyse et al. (2007) [17]2RCOIIIYCARCCTGAGCGAATTCARGCKGGThis study3 F(U12SF)*rrnSCAGTGCCAGCAKYYGCGGTTAHuyse et al. (2007) [17]3R(UNAD5R)*ND5GGWGCMCGCATNGCYTCYAAHuyse et al. (2007) [17]4 FND5ATGTGATTTTTAGAGTTATGCTTThis study4R(6RNAD5)ND5AGGHTCTCTAACTTGGAAAGWTAGTATHuyse et al. (2008) [24]5 F*COIIITCTTCWRTTACAGYAACDTCCTAThis study5R*ND1AAACCTCATACCTAACTGCGThis study6 F*COICTCCTTTATCTGGTGCTCTGGGThis study6R*rrnSGACGGGCGGTATGTACCTCTCTThis studyF236COIIITTGTTTTTGATTCCGTGAThis studyF930CYTBTTATCTTTGTGGTTCGTTCGThis studyF1568CYTBAGGTCAAAGATAGGTGGGTTAGThis studyF2174ND4TATAGGAATTTTACCATTATTTAThis studyF2855ND4CATGGCTTATCAGTTTGThis studyF3302tRNAGln GGTAGCATAGGAGGTAAGGTTCThis studyF8330COITTTAGCGGGTATTTCAAGTAThis studyF8920COIGTATTATTCACTATAGGAGGGGTAThis studyR4662ATP6ACGAAATAATAAAAATATAAAAAGTThis studyR5283ND2TCCAGAAACTAACAATAAAGCACThis studyR6003tRNAVal ACCTAATGCTTGTAATGThis studyR6599ND1AAACCTCATACCTAACTGCGThis studyR7212tRNAPro GCAGCCCTATCAGTAAGACCThis studyR7941COIACCAAGCCCTACAAAACCTGThis studyR10014rrnLTCCCCATTCAGACAATCCTCThis studyR10652rrnSGCTGGCACTGTGACTTATCCTAThis studyR11375COIIATTGTAGGTAAAAAGGTTCACThis studyR12090ND6AAAAAGACAATAAGACCCACTAThis studyR12752tRNALeu(UUR) AACACTTTGTATTTGACGCTThis studyR14014ND5AGGTTCAAGTAATGGTAGGTCTThis study*The PCR primers for the long PCR fragment (>2 kb). List of PCR primer combinations used to amplify the mitochondrial genome of Paragyrodactylus variegatus *The PCR primers for the long PCR fragment (>2 kb). Sequence analysis: Contiguous sequence fragments were assembled using SeqMan (DNAStar) and Staden Package v1.7.0 [14]. Protein-coding (PCGs) and ribosomal RNA (rRNA) genes were initially identified using BLAST (Basic Local Alignment Search Tool) searches on GenBank, then by alignment with the published mitochondrial genomes of Gyrodactylus derjavinoides Malmberg, Collins, Cunningham and Jalali, 2007 (GenBank no. EU293891), G. salaris (GenBank no. DQ988931) and Gyrodactylus thymalli Zitnan, 1960 (GenBank no. EF527269). The secondary structure of the two rRNA genes was determined mainly by comparison with the published rRNA secondary structures of Dugesia japonica Ichikawa and Kawakatsu, 1964 (GenBank no. NC_016439) [15]. 
Results: Genome organization, base composition and gene order

The circular mitochondrial genome of P. variegatus is 14,517 bp in size (GenBank no. KM067269) and contains 12 PCGs, 22 tRNAs, two rRNAs and a single major non-coding region (NCR) (Figure 1). It lacks the ATP8 gene, and all genes are transcribed from the same strand. The overall nucleotide composition is T (45.8%), C (9.5%), A (30.4%) and G (14.2%), giving an overall A + T content of 76.3% (Table 2).

Figure 1. The gene map for the mitochondrial genome of Paragyrodactylus variegatus.

Table 2. Base composition of the mitochondrial genome of Paragyrodactylus variegatus.

The arrangement of the rRNA and protein-coding genes of P. variegatus is typical for gyrodactylids. However, the order of some tRNA genes differs: there are three tRNAs (tRNAGln, tRNAPhe, tRNAMet) between ND4 and the major non-coding region and five tRNAs (tRNATyr, tRNALeu1, tRNASer2, tRNALeu2, tRNAArg) between ND6 and ND5 in P. variegatus, whereas Gyrodactylus spp. have one tRNA (tRNAPhe) and seven tRNAs (tRNATyr, tRNALeu1, tRNAGln, tRNAMet, tRNASer2, tRNALeu2, tRNAArg) at the same locations, respectively.
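The base-composition figures in Table 2 are straightforward counts over the genome sequence. A minimal sketch, with a short placeholder string standing in for the 14,517 bp sequence:

```python
# Sketch of the base-composition calculation reported in Table 2
# (T 45.8%, C 9.5%, A 30.4%, G 14.2%; A+T 76.3%). The input here is a
# placeholder, not the actual genome sequence.

from collections import Counter

def base_composition(seq: str) -> dict:
    """Percentage of each base and the combined A+T content."""
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "ACGT")
    comp = {b: 100.0 * counts[b] / total for b in "TCAG"}
    comp["A+T"] = comp["A"] + comp["T"]
    return comp

mtdna = "TTATTAATTGCATTTAAAGGGTTATAA"  # placeholder
print(base_composition(mtdna))
```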
Protein coding genes and codon usage

The total length of all 12 PCGs is 9,990 bp. The average A + T content of the PCGs is 75.7% (Table 2), ranging from 70.9% (COI) to 82.9% (ND2). ATG is the typical start codon, except for ND1 and COII, which begin with GTG and TTG, respectively (Table 3). All PCGs terminate with the stop codon TAA, except ND5, which uses TAG. Incomplete stop codons were not observed in P. variegatus.

Table 3. The organization of the mitochondrial genome of Paragyrodactylus variegatus

Gene | Position (from–to) | Size (bp) | Start/stop codons | Anticodon | Intergenic nucleotides
COIII | 1–639 | 639 | ATG/TAA | – | –
tRNA-His (H) | 651–713 | 63 | – | GTG | 11
CYTB | 719–1798 | 1080 | ATG/TAA | – | 5
ND4L | 1803–2057 | 255 | ATG/TAA | – | 4
ND4 | 2030–3238 | 1209 | ATG/TAA | – | −28
tRNA-Gln (Q) | 3245–3311 | 67 | – | TTG | 6
tRNA-Phe (F) | 3331–3397 | 67 | – | GAA | 19
tRNA-Met (M) | 3410–3476 | 67 | – | CAT | 12
NCR | 3477–4569 | 1093 | – | – | 0
ATP6 | 4570–5082 | 513 | ATG/TAA | – | 0
ND2 | 5084–5959 | 876 | ATG/TAA | – | 1
tRNA-Val (V) | 5974–6040 | 67 | – | TAC | 14
tRNA-Ala (A) | 6047–6112 | 66 | – | TGC | 6
tRNA-Asp (D) | 6114–6178 | 65 | – | GTC | 1
ND1 | 6183–7073 | 891 | GTG/TAA | – | 4
tRNA-Asn (N) | 7087–7155 | 69 | – | GTT | 13
tRNA-Pro (P) | 7159–7221 | 63 | – | TGG | 3
tRNA-Ile (I) | 7216–7283 | 68 | – | GAT | −6
tRNA-Lys (K) | 7288–7352 | 65 | – | CTT | 4
ND3 | 7361–7711 | 351 | ATG/TAA | – | 8
tRNA-Ser(AGN) (S1) | 7726–7782 | 57 | – | TCT | 14
tRNA-Trp (W) | 7792–7858 | 67 | – | TCA | 9
COI | 7862–9409 | 1548 | ATG/TAA | – | 3
tRNA-Thr (T) | 9418–9484 | 67 | – | TGT | 8
rrnL (16S) | 9484–10443 | 960 | – | – | −1
tRNA-Cys (C) | 10444–10503 | 60 | – | GCA | 0
rrnS (12S) | 10505–11216 | 712 | – | – | 1
COII | 11223–11804 | 582 | TTG/TAA | – | 6
tRNA-Glu (E) | 11955–12018 | 64 | – | TTC | 150
ND6 | 12025–12501 | 477 | ATG/TAA | – | 6
tRNA-Tyr (Y) | 12507–12573 | 67 | – | GTA | 5
tRNA-Leu(CUN) (L1) | 12585–12650 | 66 | – | TAG | 11
tRNA-Ser(UCN) (S2) | 12657–12716 | 60 | – | TGA | 6
tRNA-Leu(UUR) (L2) | 12719–12788 | 70 | – | TAA | 2
tRNA-Arg (R) | 12794–12860 | 67 | – | TCG | 5
ND5 | 12865–14433 | 1569 | ATG/TAG | – | 4
tRNA-Gly (G) | 14446–14513 | 68 | – | TCC | 12

The codon usage and relative synonymous codon usage (RSCU) values are summarized in Table 4. The most frequent amino acids in the PCGs of P. variegatus are leucine (16.43%), phenylalanine (13.23%), serine (12.48%) and isoleucine (10.67%). The frequency of glutamine is especially low (0.69%). The codons TTA (leucine; 12.09%) and TTT (phenylalanine; 11.48%) are the most frequently used. At the third position of fourfold-degenerate codons, T is the most frequent base.

Table 4. Codon usage for the 12 mitochondrial proteins of Paragyrodactylus variegatus

Codon (AA)  N  %  RSCU | Codon (AA)  N  %  RSCU
UUU (F)  381  11.48  1.74 | UAU (Y)  180  5.42  1.72
UUC (F)  58  1.75  0.26 | UAC (Y)  29  0.87  0.28
UUA (L)  401  12.09  4.41 | UAA (*)  0  0.00  0.00
UUG (L)  39  1.18  0.43 | UAG (*)  0  0.00  0.00
CUU (L)  68  2.05  0.75 | CAU (H)  45  1.36  1.70
CUC (L)  7  0.21  0.08 | CAC (H)  8  0.24  0.30
CUA (L)  27  0.81  0.30 | CAA (Q)  14  0.42  1.22
CUG (L)  3  0.09  0.03 | CAG (Q)  9  0.27  0.78
AUU (I)  175  5.27  1.48 | AAU (N)  103  3.10  1.67
AUC (I)  11  0.33  0.09 | AAC (N)  18  0.54  0.29
AUA (I)  168  5.06  1.42 | AAA (N)  64  1.93  1.04
AUG (M)  68  2.05  1.00 | AAG (K)  48  1.45  1.00
GUU (V)  150  4.52  2.40 | GAU (D)  54  1.63  1.59
GUC (V)  8  0.24  0.13 | GAC (D)  14  0.42  0.41
GUA (V)  81  2.44  1.30 | GAA (E)  37  1.12  1.32
GUG (V)  11  0.33  0.18 | GAG (E)  19  0.57  0.68
UCU (S)  114  3.44  2.20 | UGU (C)  65  1.96  1.83
UCC (S)  9  0.27  0.17 | UGC (C)  6  0.18  0.17
UCA (S)  65  1.96  1.26 | UGA (W)  58  1.75  1.55
UCG (S)  3  0.09  0.06 | UGG (W)  17  0.51  0.45
CCU (P)  38  1.15  2.03 | CGU (R)  33  0.99  3.00
CCC (P)  2  0.06  0.11 | CGC (R)  4  0.12  0.36
CCA (P)  34  1.02  1.81 | CGA (R)  5  0.15  0.45
CCG (P)  1  0.03  0.05 | CGG (R)  2  0.06  0.18
ACU (T)  58  1.75  2.37 | AGU (S)  104  3.13  2.01
ACC (T)  9  0.27  0.37 | AGC (S)  12  0.36  0.23
ACA (T)  30  0.90  1.22 | AGA (S)  81  2.44  1.57
ACG (T)  1  0.03  0.04 | AGG (S)  26  0.78  0.50
GCU (A)  33  0.99  1.97 | GGU (G)  90  2.71  2.05
GCC (A)  7  0.21  0.42 | GGC (G)  18  0.54  0.41
GCA (A)  25  0.75  1.49 | GGA (G)  46  1.39  1.05
GCG (A)  2  0.06  0.12 | GGG (G)  22  0.66  0.50

A total of 3,318 codons for P. variegatus were analyzed, excluding stop codons. AA, amino acid; N, number of times the codon is used; %, N/3,318; RSCU, relative synonymous codon usage.
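RSCU is the observed count of a codon divided by the count expected if all synonymous codons for that amino acid were used equally. The sketch below is restricted to the phenylalanine and leucine families (the full table 9 synonym families are larger) and uses the counts from Table 4; it reproduces the published RSCU values of 1.74 for UUU and 4.41 for UUA.

```python
# Sketch of the RSCU calculation behind Table 4:
#   RSCU(c) = N(c) / (sum of N over the synonym family / family size).
# Only two synonym families are shown; a full implementation would use
# all families of the echinoderm/flatworm mitochondrial code.

from collections import Counter

FAMILIES = {
    "F": ["TTT", "TTC"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
}

def rscu(codon_counts: Counter) -> dict:
    out = {}
    for aa, codons in FAMILIES.items():
        total = sum(codon_counts[c] for c in codons)
        if total == 0:
            continue
        expected = total / len(codons)  # equal-usage expectation
        for c in codons:
            out[c] = round(codon_counts[c] / expected, 2)
    return out

counts = Counter({"TTT": 381, "TTC": 58,                 # Phe, from Table 4
                  "TTA": 401, "TTG": 39, "CTT": 68,      # Leu, from Table 4
                  "CTC": 7, "CTA": 27, "CTG": 3})
print(rscu(counts))  # {'TTT': 1.74, 'TTC': 0.26, 'TTA': 4.41, ...}
```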
Ribosomal and transfer RNA genes

The lengths of the large ribosomal subunit (rrnL) and small ribosomal subunit (rrnS) genes of P. variegatus are 960 bp and 712 bp, respectively (Table 3).
The A + T contents of the rrnL and rrnS of P. variegatus are 75.0% and 75.3%, respectively. The predicted secondary structures of the rrnL and rrnS of P. variegatus are shown in Figures 2 and 3; they contain six and three structural domains, respectively. However, domain I of the rrnL lacks a large region at the 5′ end of the gene, and domain III is absent from the secondary structure of the rrnL of P. variegatus.

Figure 2. Inferred secondary structure of the mitochondrial rrnL gene for Paragyrodactylus variegatus.

Figure 3. Inferred secondary structure of the mitochondrial rrnS gene for Paragyrodactylus variegatus.

The 22 tRNA genes of P. variegatus vary in length from 57 to 70 nucleotides. The tRNAIle and tRNAThr genes overlap with neighboring genes (Table 3). All 22 tRNAs have the typical cloverleaf secondary structure, except for tRNACys, tRNASer1 and tRNASer2, each of which has an unpaired dihydrouridine (DHU) arm.

Synonymous and nonsynonymous substitutions and genetic distance

The Ka/Ks values for all 12 PCGs of P. variegatus versus Gyrodactylus spp. are all less than 0.3.
The highest average Ka/Ks value is that of ND2 (0.29), while the Ka/Ks ratios of half of the PCGs are low (Ka/Ks < 0.1). The genetic distances between P. variegatus and the three reported species of Gyrodactylus (G. thymalli, G. salaris and G. derjavinoides) are much greater than those among the three species of Gyrodactylus (Figure 4). The maximum divergence occurs in the ND5 gene (48.9%), between P. variegatus and G. salaris. In addition, the genetic distances of the rRNA genes are lower than those of the protein-coding genes (Figure 4).

Figure 4. The genetic distance of protein and rRNA genes of Paragyrodactylus variegatus and Gyrodactylus spp.

Non-coding regions

The major non-coding region is 1,093 bp in size and is highly enriched in A + T (83.4%). It can be subdivided into six parts, including three junctions, according to the sequence pattern (Figure 5). The sequences of part I and part II are homologous, with 81.7% sequence identity. Part III contains six nearly identical repeat units of a 40 bp sequence with some modifications: one substitution at the fifth position (in the initial repeat unit), three substitutions at positions 223, 227 and 237, and two insertions at positions 222 and 225 (in the terminal repeat unit). The repeat unit of part III is able to fold into a stem-loop secondary structure. Some predicted structural elements were also found in the sequences of parts I and II (Figure 6). In addition, 30 short non-coding regions, all <151 bp, occur in the mitochondrial genome of P. variegatus (Table 3).

Figure 5. Organization of the mitochondrial major non-coding region of Paragyrodactylus variegatus.

Figure 6. Predicted structural elements for the mitochondrial major non-coding region of Paragyrodactylus variegatus. '(G)' is the variation in the identical pattern of part II.
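Tools such as Tandem Repeats Finder locate repeats statistically; purely as a conceptual illustration, a naive exact-match detector can be sketched as follows. The real part III unit is 40 bp and tolerates substitutions and insertions, which this toy does not model, and overlapping hits are reported without merging.

```python
# Naive sketch of tandem-repeat detection: slide candidate unit lengths
# along the sequence and report units that repeat consecutively at
# least min_copies times. Exact matches only; illustrative, not a
# substitute for Tandem Repeats Finder.

def find_tandem_repeats(seq: str, min_unit: int = 4, min_copies: int = 3):
    hits = []
    for unit_len in range(min_unit, len(seq) // min_copies + 1):
        for start in range(len(seq) - unit_len * min_copies + 1):
            unit = seq[start:start + unit_len]
            copies, pos = 1, start + unit_len
            while seq[pos:pos + unit_len] == unit:
                copies += 1
                pos += unit_len
            if copies >= min_copies:
                hits.append((start, unit, copies))
    return hits

# Placeholder sequence with four exact copies of "TAGC".
print(find_tandem_repeats("AATTGGCCTAGCTAGCTAGCTAGCAATT"))
```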
Discussion: Characteristics of the mitochondrial genome

The mitochondrial genome of P. variegatus is 222 bp shorter than that of G. derjavinoides, but well within the length range of parasitic flatworms [22, 23].
Non-coding regions: The major non-coding region is 1,093 bp in size and highly enriched in A + T (83.4%). Based on its sequence pattern, this non-coding region can be subdivided into six parts, including three junctions (Figure 5). The sequences of part I and part II are homologous, with 81.7% sequence identity. Part III contains six tandem repeats of a 40 bp unit, identical except for a few modifications: one substitution at the fifth position (in the initial repeat unit), and three substitutions at positions 223, 227 and 237 plus two insertions at positions 222 and 225 (in the terminal repeat unit). The repeat unit of part III can fold into a stem-loop secondary structure. Some predicted structural elements were also found in the sequences of parts I and II (Figure 6). In addition, 30 short non-coding regions, all < 151 bp, occur in the mitochondrial genome of P. variegatus (Table 3).

Figure 5 Organization of the mitochondrial major non-coding region of Paragyrodactylus variegatus.
Figure 6 Predicted structural elements for the mitochondrial major non-coding region of Paragyrodactylus variegatus. '(G)' is the variation in the identical pattern of part II.
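A tandem repeat of the kind found in part III can be located by scoring how well the sequence matches a copy of itself shifted by a candidate period. A minimal sketch with a made-up 40 bp unit (dedicated tools such as Tandem Repeats Finder would normally be used):

```python
# Sketch: find the period at which a sequence best matches a shifted
# copy of itself -- a crude detector for tandem repeats like part III.
def match_fraction(seq: str, period: int) -> float:
    pairs = list(zip(seq, seq[period:]))
    return sum(a == b for a, b in pairs) / len(pairs)

def best_period(seq: str, min_p: int = 10, max_p: int = 60) -> int:
    return max(range(min_p, max_p + 1), key=lambda p: match_fraction(seq, p))

# Toy sequence: six copies of a made-up 40 bp unit, one substitution.
unit = "ACGTTTAGCATTTGGCAATTCCGATTAACGGTATCCAGTT"
seq = unit * 3 + unit.replace("G", "C", 1) + unit * 2
print(best_period(seq))  # -> 40
```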
Discussion

Characteristics of the mitochondrial genome: The mitochondrial genome of P. variegatus is 222 bp shorter than that of G. derjavinoides, but well within the length range of parasitic flatworms [22, 23]. The differing number and length of major non-coding regions is the main factor contributing to this difference in genome size. The overall A + T content of P. variegatus is higher than that of all reported mitochondrial genomes of monogeneans. The average Ka/Ks values of the genes encoding the three subunits of cytochrome c oxidase and the cytochrome b subunit of the cytochrome bc1 complex are lower than those of the genes encoding subunits of the NADH dehydrogenase complex (with the exception of ND1); this is especially true of the COI and Cytb genes. This indicates that the COI, COII, COIII and Cytb genes are more strongly affected by purifying selection than the NADH dehydrogenase genes (except ND1), similar to the findings of Huyse et al. [24] for Gyrodactylus derjavinoides. The degree of functional constraint may explain the corresponding degree of sequence variation among the protein-coding genes. The low Ka/Ks values and genetic distances of the COI and Cytb genes also imply that both could serve as useful markers for analyses at higher taxonomic levels. Although the sizes of rrnL and rrnS are very similar between Gyrodactylus spp. and P. variegatus, the sequence similarities are not high. These discrepancies may reflect the variable helices or loops in the rRNA structure.
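The A + T content comparisons made above and in the Results reduce to a one-line count; a minimal sketch with a toy sequence:

```python
# Sketch: overall A + T content of a nucleotide sequence, in percent.
def at_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * sum(seq.count(b) for b in "AT") / len(seq)

print(round(at_content("AATTTGATTAATGCAT"), 1))  # toy sequence -> 81.2
```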
The non-coding region (>500 bp) is denoted by the NCR. The same gene arrangement of three Gyrodactylus species (G. salaris, G. derjavinoides and G. thymalli) is shown as Gyrodactylus spp. Gene arrangements of ten monogenean species. Gene and genome size are not to scale. All genes are transcribed in the same direction (form left to right). Red and black box shows the conserved gene cluster and gene rearrangement hot spot, respectively. The non-coding region (>500 bp) is denoted by the NCR. The same gene arrangement of three Gyrodactylus species (G. salaris, G. derjavinoides and G. thymalli) is shown as Gyrodactylus spp. Gene rearrangement can be mainly explained by three mechanisms: Duplication and Random Loss Model [31, 32], Duplication and Nonrandom Loss Model [33] and Recombination Model [34]. The variation (tRNAGln, tRNAMet and NCR) of mitochondrial gene order occurring between P. variegatus and Gyrodactylus spp. could be explained by the duplication and random loss model and recombination model together with the parsimonious scenario. We assume that the process contains three steps: one tandem duplication, random loss, followed by intramitochondrial recombination (Figure 8). We prefer this mechanism for the following reasons: the duplicate NCRs in the mitochondrial genomes of most metazoans can be explained by the duplication and random loss model, but the stepwise mechanism described above is more appropriate to interpret the duplicated NCRs and long-range translocation, meanwhile the rest of the genes remain in their original state. Furthermore, there are several examples of mitochondrial recombination in animals [35–38], and a similar mechanism accounts for the gene rearrangement of other metazoans [39, 40]. In addition, the tRNAMet genes of Gyrodactylus spp. are clearly homologous to the tRNAMet gene of P. variegatus with 80.6% sequence similarity. However, the tRNAGln region does have low sequence similarity (66.2%) between the mitochondrial genomes of Gyrodactylus spp. and P. variegatus, so we cannot be certain that the translocation event happened. As more mitochondrial genomes of gyrodactylids become available, all of the above hypotheses should be tested with respect to gene orders.Figure 8 Possible mechanism of mitochondrial gene rearrangements occurring in Paragyrodactylus variegatus and Gyrodactylus spp. Possible mechanism of mitochondrial gene rearrangements occurring in Paragyrodactylus variegatus and Gyrodactylus spp. Five available mitochondrial gene arrangements of monopisthocotylids are shown in Figure 7. The arrangement of all rRNA and protein coding genes are identical throughout all samples, however, the tRNA genes differ in arrangement showing some translocation, particularly long-range translocation. No notable rearrangement hot spot could be found in gene arrangements of monopisthocotylids, however, the major change of gene arrangement among polyopisthocotylids is limited in the COIII-ND5 junction as a gene rearrangement hot spot [26]. Two gene clusters (tRNAAsn-tRNAPro-tRNAIle-tRNALys and rrnL-tRNACys-rrnS) were found to be conserved in all mitochondrial genomes of monopisthocotyleans. Nevertheless, the tRNALys and tRNACys were found in the gene rearrangement hot spot of polyopisthocotyleans. 
Gene arrangements and possible evolutionary mechanisms: The five available mitochondrial gene arrangements of monopisthocotylids are shown in Figure 7. The arrangement of all rRNA and protein-coding genes is identical across all samples; however, the tRNA genes differ in arrangement, showing some translocations, particularly long-range translocations. No notable rearrangement hot spot could be found in the gene arrangements of monopisthocotylids, whereas the major changes in gene arrangement among polyopisthocotylids are limited to the COIII-ND5 junction, a gene rearrangement hot spot [26]. Two gene clusters (tRNAAsn-tRNAPro-tRNAIle-tRNALys and rrnL-tRNACys-rrnS) are conserved in all mitochondrial genomes of monopisthocotyleans. Nevertheless, tRNALys and tRNACys fall within the gene rearrangement hot spot of polyopisthocotyleans. The conserved gene clusters could potentially serve as markers to help define the Polyopisthocotylea and Monopisthocotylea within the Monogenea, as well as provide information for a deeper understanding of the evolution of monogenean mitochondrial genomes.

Figure 7 Gene arrangements of ten monogenean species. Gene and genome size are not to scale. All genes are transcribed in the same direction (from left to right). Red and black boxes show the conserved gene clusters and the gene rearrangement hot spot, respectively. Non-coding regions (>500 bp) are denoted by NCR. The shared gene arrangement of the three Gyrodactylus species (G. salaris, G. derjavinoides and G. thymalli) is shown as Gyrodactylus spp.
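Conserved clusters such as tRNAAsn-tRNAPro-tRNAIle-tRNALys can be recovered programmatically by intersecting the sets of adjacent gene pairs across genomes. A minimal sketch; the gene orders below are hypothetical, not the published arrangements:

```python
# Sketch: gene adjacencies conserved across a set of genomes.
# Gene orders here are hypothetical, not the published arrangements.
def adjacencies(order):
    return {(order[i], order[i + 1]) for i in range(len(order) - 1)}

def conserved_adjacencies(orders):
    shared = adjacencies(orders[0])
    for o in orders[1:]:
        shared &= adjacencies(o)
    return shared

orders = [
    ["cox1", "trnN", "trnP", "trnI", "trnK", "nad2", "rrnL", "trnC", "rrnS"],
    ["cox1", "nad2", "trnN", "trnP", "trnI", "trnK", "rrnL", "trnC", "rrnS"],
]
# Shared pairs: trnN-trnP, trnP-trnI, trnI-trnK, rrnL-trnC, trnC-rrnS;
# conserved clusters fall out as runs of shared adjacencies.
print(conserved_adjacencies(orders))
```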
Gene rearrangement can be explained mainly by three mechanisms: the Duplication and Random Loss Model [31, 32], the Duplication and Nonrandom Loss Model [33] and the Recombination Model [34]. The variation in mitochondrial gene order (tRNAGln, tRNAMet and the NCR) between P. variegatus and Gyrodactylus spp. can be explained by the duplication and random loss model and the recombination model together, under the most parsimonious scenario. We assume that the process comprises three steps: one tandem duplication and random loss, followed by intramitochondrial recombination (Figure 8). We prefer this mechanism for the following reasons: duplicate NCRs in the mitochondrial genomes of most metazoans can be explained by the duplication and random loss model, but the stepwise mechanism described above better accounts for the duplicated NCRs and the long-range translocation while the remaining genes stay in their original state. Furthermore, there are several examples of mitochondrial recombination in animals [35–38], and a similar mechanism accounts for gene rearrangement in other metazoans [39, 40]. In addition, the tRNAMet genes of Gyrodactylus spp. are clearly homologous to the tRNAMet gene of P. variegatus, with 80.6% sequence similarity. However, the tRNAGln region has low sequence similarity (66.2%) between the mitochondrial genomes of Gyrodactylus spp. and P. variegatus, so we cannot be certain that the translocation event happened. As more mitochondrial genomes of gyrodactylids become available, all of the above hypotheses should be tested with respect to gene order.

Figure 8 Possible mechanism of mitochondrial gene rearrangements occurring in Paragyrodactylus variegatus and Gyrodactylus spp.
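The tandem duplication and random loss step invoked above has a simple operational form: a block of genes is duplicated in place, and one copy of each duplicated gene is then lost at random. A minimal sketch of that single step (illustrative gene names; the recombination step is not modeled):

```python
# Sketch: one tandem duplication-random loss (TDRL) event on a gene order.
import random

def tdrl(order, start, end, rng=random):
    """Duplicate order[start:end] in tandem, then keep one randomly chosen
    copy of each duplicated gene; genes outside the block are untouched."""
    block = order[start:end]
    doubled = block + block                      # tandem duplication
    keep_first = {g: rng.random() < 0.5 for g in block}
    seen, survivors = set(), []
    for g in doubled:
        is_first_copy = g not in seen
        seen.add(g)
        if is_first_copy == keep_first[g]:       # random loss of the other copy
            survivors.append(g)
    return order[:start] + survivors + order[end:]

# Duplicating a trnQ-trnM-NCR block and losing copies at random can
# reorder exactly those elements, e.g. ['NCR', 'trnQ', 'trnM', 'nad2', 'cox1'];
# which copy survives is random.
print(tdrl(["trnQ", "trnM", "NCR", "nad2", "cox1"], 0, 3))
```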
Conclusions: The characteristics of the mitochondrial genome of P. variegatus are notably different from those of Gyrodactylus spp., including the gene order, which is nevertheless similar to that of other monopisthocotylids. The overall average genetic distance between Paragyrodactylus and Gyrodactylus, based on the rRNA and 12 protein-coding genes, is remarkably greater than that within Gyrodactylus. All of these features support Paragyrodactylus as a distinct genus. Considering their specific distribution and hosts, we tend towards the view of You et al. [8] that Paragyrodactylus is a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on closely related river loaches.
Background: Paragyrodactylus Gvosdev and Martechov, 1953, a viviparous genus of ectoparasite within the Gyrodactylidae, contains three nominal species all of which infect Asian river loaches. The group is suspected to be a basal lineage within Gyrodactylus Nordmann, 1832 sensu lato although this remains unclear. Further molecular study, beyond characterization of the standard Internal Transcribed Spacer region, is needed to clarify the evolutionary relationships within the family and the placement of this genus. Methods: The mitochondrial genome of Paragyrodactylus variegatus You, King, Ye and Cone, 2014 was amplified in six parts from a single worm, sequenced using primer walking, annotated and analyzed using bioinformatic tools. Results: The mitochondrial genome of P. variegatus is 14,517 bp, containing 12 protein-coding genes (PCGs), 22 transfer RNA (tRNA) genes, two ribosomal RNA (rRNA) genes and a major non-coding region (NCR). The overall A + T content of the mitochondrial genome is 76.3%, which is higher than all reported mitochondrial genomes of monogeneans. All of the 22 tRNAs have the typical cloverleaf secondary structure, except tRNACys, tRNASer1 and tRNASer2 that lack the dihydrouridine (DHU) arm. There are six domains (domain III is absent) and three domains in the inferred secondary structures of the large ribosomal subunit (rrnL) and small ribosomal subunit (rrnS), respectively. The NCR includes six 40 bp tandem repeat units and has the double identical poly-T stretches, stem-loop structure and some surrounding structure elements. The gene order (tRNAGln, tRNAMet and NCR) differs in arrangement compared to the mitochondrial genomes reported from Gyrodactylus spp. Conclusions: The Duplication and Random Loss Model and Recombination Model together are the most plausible explanations for the variation in gene order. Both morphological characters and characteristics of the mitochondrial genome support Paragyrodactylus as a distinct genus from Gyrodactylus. Considering their specific distribution and known hosts, we believe that Paragyrodactylus is a relict freshwater lineage of viviparous monogenean isolated in the high plateaus of central Asia on closely related river loaches.
Background: Gyrodactylids are widespread parasites of freshwater and marine fishes, typically inhabiting the skin and gills of their hosts. Their direct life-cycle and hyperviviparous mode of reproduction facilitate rapid population growth. Some species are pathogenic to their hosts (e.g. Gyrodactylus salaris Malmberg, 1957) [1] and capable of causing high host mortality, resulting in serious ecological and economic consequences [2]. Over twenty genera and 400 species of gyrodactylids have been described [3], most of them identified by comparative morphology of the opisthaptoral hard parts. This traditional approach to identifying gyrodactylids gives limited information for detailed phylogenetic analysis. Recently, nuclear ribosomal DNA (rDNA) and the internal transcribed spacers (ITS) of rDNA have been incorporated into the molecular taxonomy of the group [4, 5]. In addition, mitochondrial markers (COI and COII) have been confirmed as DNA barcodes for Gyrodactylus Nordmann, 1832 [6, 7]. However, more polymorphic molecular markers suitable for different taxonomic categories are still needed for studying the taxonomy and phylogeny of these parasites. Paragyrodactylus Gvosdev and Martechov, 1953 is a genus of Gyrodactylidae comprising three nominal species, Paragyrodactylus iliensis Gvosdev and Martechov, 1953 (=P. dogieli Osmanov, 1965), Paragyrodactylus barbatuli Ergens, 1970 and Paragyrodactylus variegatus You, King, Ye and Cone, 2014, all of which infect river loaches (Nemacheilidae) inhabiting streams in central Asia [8]. The relationship between Paragyrodactylus and Gyrodactylus has been explored recently. Kritsky and Boeger reported that the two genera have a close relationship based on morphological characters [9]. Bakke et al. believed that the complexity of the attachment apparatus separates Paragyrodactylus from Gyrodactylus and pondered whether these differences were fundamental or a local diversification within Gyrodactylus [3]. Furthermore, You et al., using morphological and molecular data, presented the hypothesis that Paragyrodactylus is a relict freshwater lineage of viviparous monogeneans isolated in the high plateaus of central Asia on river loaches [8]. The ambiguous relationship between Paragyrodactylus and Gyrodactylus emphasizes the need for further molecular study of these genera. Due to their higher rate of base substitution, maternal inheritance, evolutionarily conserved gene products and low recombination [10, 11], mitochondrial genomes provide powerful markers for phylogenetic analysis, biological identification and population studies. In addition, mitochondrial genomes can provide genome-level characters such as gene order for deep-level phylogenetic analysis [12, 13]. To date, the complete mitochondrial DNA sequences of only nine monogeneans are available, including three species of Gyrodactylus. In the present study, the first mitochondrial genome for Paragyrodactylus, P. variegatus, is sequenced and characterized. We report on its genome organization, base composition, gene order, codon usage, ribosomal and transfer RNA gene features and major non-coding region. Additionally, we provide a preliminary comparison of the gene arrangement within both Paragyrodactylus and Gyrodactylus.
11,795
396
[ 547, 77, 497, 274, 255, 441, 257, 178, 293, 269, 427, 681 ]
16
[ "variegatus", "mitochondrial", "gene", "genes", "coding", "paragyrodactylus", "gyrodactylus", "non coding", "non", "region" ]
[ "genomes gyrodactylids", "genes gyrodactylus spp", "1953 genus gyrodactylidae", "400 species gyrodactylids", "gyrodactylids widespread parasites" ]
[CONTENT] Paragyrodactylus variegatus | Mitochondrial genome | Gyrodactylus | Gyrodactylidae | Monogenea | Homatula variegata | China [SUMMARY]
[CONTENT] Paragyrodactylus variegatus | Mitochondrial genome | Gyrodactylus | Gyrodactylidae | Monogenea | Homatula variegata | China [SUMMARY]
[CONTENT] Paragyrodactylus variegatus | Mitochondrial genome | Gyrodactylus | Gyrodactylidae | Monogenea | Homatula variegata | China [SUMMARY]
[CONTENT] Paragyrodactylus variegatus | Mitochondrial genome | Gyrodactylus | Gyrodactylidae | Monogenea | Homatula variegata | China [SUMMARY]
[CONTENT] Paragyrodactylus variegatus | Mitochondrial genome | Gyrodactylus | Gyrodactylidae | Monogenea | Homatula variegata | China [SUMMARY]
[CONTENT] Paragyrodactylus variegatus | Mitochondrial genome | Gyrodactylus | Gyrodactylidae | Monogenea | Homatula variegata | China [SUMMARY]
[CONTENT] Animals | Base Sequence | DNA, Mitochondrial | Gene Order | Genome, Mitochondrial | Helminth Proteins | Nucleic Acid Conformation | Polymerase Chain Reaction | RNA, Ribosomal | RNA, Transfer | Species Specificity | Trematoda [SUMMARY]
[CONTENT] Animals | Base Sequence | DNA, Mitochondrial | Gene Order | Genome, Mitochondrial | Helminth Proteins | Nucleic Acid Conformation | Polymerase Chain Reaction | RNA, Ribosomal | RNA, Transfer | Species Specificity | Trematoda [SUMMARY]
[CONTENT] Animals | Base Sequence | DNA, Mitochondrial | Gene Order | Genome, Mitochondrial | Helminth Proteins | Nucleic Acid Conformation | Polymerase Chain Reaction | RNA, Ribosomal | RNA, Transfer | Species Specificity | Trematoda [SUMMARY]
[CONTENT] Animals | Base Sequence | DNA, Mitochondrial | Gene Order | Genome, Mitochondrial | Helminth Proteins | Nucleic Acid Conformation | Polymerase Chain Reaction | RNA, Ribosomal | RNA, Transfer | Species Specificity | Trematoda [SUMMARY]
[CONTENT] Animals | Base Sequence | DNA, Mitochondrial | Gene Order | Genome, Mitochondrial | Helminth Proteins | Nucleic Acid Conformation | Polymerase Chain Reaction | RNA, Ribosomal | RNA, Transfer | Species Specificity | Trematoda [SUMMARY]
[CONTENT] Animals | Base Sequence | DNA, Mitochondrial | Gene Order | Genome, Mitochondrial | Helminth Proteins | Nucleic Acid Conformation | Polymerase Chain Reaction | RNA, Ribosomal | RNA, Transfer | Species Specificity | Trematoda [SUMMARY]
[CONTENT] genomes gyrodactylids | genes gyrodactylus spp | 1953 genus gyrodactylidae | 400 species gyrodactylids | gyrodactylids widespread parasites [SUMMARY]
[CONTENT] genomes gyrodactylids | genes gyrodactylus spp | 1953 genus gyrodactylidae | 400 species gyrodactylids | gyrodactylids widespread parasites [SUMMARY]
[CONTENT] genomes gyrodactylids | genes gyrodactylus spp | 1953 genus gyrodactylidae | 400 species gyrodactylids | gyrodactylids widespread parasites [SUMMARY]
[CONTENT] genomes gyrodactylids | genes gyrodactylus spp | 1953 genus gyrodactylidae | 400 species gyrodactylids | gyrodactylids widespread parasites [SUMMARY]
[CONTENT] genomes gyrodactylids | genes gyrodactylus spp | 1953 genus gyrodactylidae | 400 species gyrodactylids | gyrodactylids widespread parasites [SUMMARY]
[CONTENT] genomes gyrodactylids | genes gyrodactylus spp | 1953 genus gyrodactylidae | 400 species gyrodactylids | gyrodactylids widespread parasites [SUMMARY]
[CONTENT] variegatus | mitochondrial | gene | genes | coding | paragyrodactylus | gyrodactylus | non coding | non | region [SUMMARY]
[CONTENT] variegatus | mitochondrial | gene | genes | coding | paragyrodactylus | gyrodactylus | non coding | non | region [SUMMARY]
[CONTENT] variegatus | mitochondrial | gene | genes | coding | paragyrodactylus | gyrodactylus | non coding | non | region [SUMMARY]
[CONTENT] variegatus | mitochondrial | gene | genes | coding | paragyrodactylus | gyrodactylus | non coding | non | region [SUMMARY]
[CONTENT] variegatus | mitochondrial | gene | genes | coding | paragyrodactylus | gyrodactylus | non coding | non | region [SUMMARY]
[CONTENT] variegatus | mitochondrial | gene | genes | coding | paragyrodactylus | gyrodactylus | non coding | non | region [SUMMARY]
[CONTENT] paragyrodactylus | molecular | paragyrodactylus gyrodactylus | gyrodactylus | markers | phylogenetic analysis | phylogenetic | genera | provide | relationship [SUMMARY]
[CONTENT] pcr | primer | primers | min | mm | 2007 | china | genbank | 25 | v1 [SUMMARY]
[CONTENT] variegatus | codons | codon | paragyrodactylus variegatus | paragyrodactylus | figure | mitochondrial | genes | gene | secondary [SUMMARY]
[CONTENT] gyrodactylus | paragyrodactylus | notably | distinct | related river | related river loaches | genes remarkably | considering | 12 protein coding genes | 12 protein coding [SUMMARY]
[CONTENT] variegatus | gene | genes | mitochondrial | gyrodactylus | paragyrodactylus | coding | non coding | non | pcr [SUMMARY]
[CONTENT] variegatus | gene | genes | mitochondrial | gyrodactylus | paragyrodactylus | coding | non coding | non | pcr [SUMMARY]
[CONTENT] Paragyrodactylus Gvosdev | Martechov | 1953 | three | Asian ||| Gyrodactylus Nordmann | 1832 | lato ||| Internal Transcribed Spacer [SUMMARY]
[CONTENT] Paragyrodactylus | Cone | 2014 | six [SUMMARY]
[CONTENT] 14,517 | 12 | 22 | RNA | two | RNA | NCR ||| 76.3% ||| 22 | DHU ||| six | three ||| NCR | six 40 | tandem ||| tRNAGln | tRNAMet | NCR | Gyrodactylus [SUMMARY]
[CONTENT] ||| Paragyrodactylus | Gyrodactylus ||| Paragyrodactylus | Asia [SUMMARY]
[CONTENT] Paragyrodactylus Gvosdev | Martechov | 1953 | three | Asian ||| Gyrodactylus Nordmann | 1832 | lato ||| Internal Transcribed Spacer ||| Paragyrodactylus | Cone | 2014 | six ||| ||| 14,517 | 12 | 22 | RNA | two | RNA | NCR ||| 76.3% ||| 22 | DHU ||| six | three ||| NCR | six 40 | tandem ||| tRNAGln | tRNAMet | NCR | Gyrodactylus ||| ||| Paragyrodactylus | Gyrodactylus ||| Paragyrodactylus | Asia [SUMMARY]
[CONTENT] Paragyrodactylus Gvosdev | Martechov | 1953 | three | Asian ||| Gyrodactylus Nordmann | 1832 | lato ||| Internal Transcribed Spacer ||| Paragyrodactylus | Cone | 2014 | six ||| ||| 14,517 | 12 | 22 | RNA | two | RNA | NCR ||| 76.3% ||| 22 | DHU ||| six | three ||| NCR | six 40 | tandem ||| tRNAGln | tRNAMet | NCR | Gyrodactylus ||| ||| Paragyrodactylus | Gyrodactylus ||| Paragyrodactylus | Asia [SUMMARY]
Drinking carrot juice increases total antioxidant status and decreases lipid peroxidation in adults.
21943297
High prevalence of obesity and cardiovascular disease is attributable to sedentary lifestyle and eating diets high in fat and refined carbohydrate while eating diets low in fruit and vegetables. Epidemiological studies have confirmed a strong association between eating diets rich in fruits and vegetables and cardiovascular health. The aim of this pilot study was to determine whether drinking fresh carrot juice influences antioxidant status and cardiovascular risk markers in subjects not modifying their eating habits.
BACKGROUND
An experiment was conducted to evaluate the effects of consuming 16 fl oz of freshly squeezed carrot juice daily for three months on cardiovascular risk markers: C-reactive protein, insulin, leptin, interleukin-1α, body fat percentage, body mass index (BMI), blood pressure, antioxidant status, and malondialdehyde production. Fasting blood samples were collected pre-test and again 90 days later, at the conclusion of the study.
METHODS
Drinking carrot juice did not affect (P > 0.1) the plasma cholesterol, triglycerides, Apo A, Apo B, LDL, HDL, body fat percentage, insulin, leptin, interleukin-1α, or C-reactive protein. Drinking carrot juice decreased (P = 0.06) systolic pressure, but did not influence diastolic pressure. Drinking carrot juice significantly (P < 0.05) increased the plasma total antioxidant capacity and decreased (P < 0.05) the plasma malondialdehyde production.
RESULTS
Drinking carrot juice may protect the cardiovascular system by increasing total antioxidant status and by decreasing lipid peroxidation independent of any of the cardiovascular risk markers measured in the study.
CONCLUSION
[ "Adult", "Antioxidants", "Beverages", "Blood Pressure", "Body Mass Index", "C-Reactive Protein", "Cardiovascular Diseases", "Daucus carota", "Drinking", "Female", "Humans", "Insulin", "Interleukin-1alpha", "Leptin", "Lipid Metabolism", "Lipid Peroxidation", "Male", "Pilot Projects", "Risk" ]
3192732
Background
Despite a gradual decline in mortality from cardiovascular disease (CVD), it is still the leading cause of both morbidity and mortality in the United States [1]. Approximately 864,000 Americans die each year from CVD, a figure that makes up 35% of the total deaths in the United States [2]. In recent years, there have been disturbing increases in the prevalence of CVD risk factors like diabetes, obesity, and the metabolic syndrome, which collectively may negate the downward trends in CVD mortality [1,3,4]. Obesity elevates the risk of developing metabolic syndrome, diabetes, and CVD [5-7]. Obesity is thought to initiate a cascade of events leading to systemic inflammation and increases in circulating C-reactive protein, insulin resistance, and dyslipidemia [7]. In 1998, the American Heart Association considered obesity to be one of the major risk factors for coronary heart disease [8]. Other risk factors for CVD include a higher body mass index (BMI), the marker commonly used to establish obesity, which has been shown to be independently associated with hypertension, elevated total cholesterol and low-density lipoprotein (LDL) cholesterol levels, and lower high-density lipoprotein (HDL) cholesterol [9-12]. The optimal BMI for adults 18 to 85 years of age is from 23 to 25 for most races [13]. Evidence has demonstrated that people with elevated BMI are at higher risk of developing CVD compared with those of normal BMI [14,15]. Additionally, the presence of metabolic syndrome is predicted to shorten life, leading to death at younger ages [16]. Diets high in fat and cholesterol are major factors contributing to CVD. Dietary modification and lifestyle changes are suggested to be effective strategies to prevent CVD [17]. The National Heart, Lung, and Blood Institute (NHLBI) recommends the Therapeutic Lifestyle Change (TLC) diet to improve heart health in individuals at risk for CVD. The TLC diet consists of reducing the intake of saturated and total fat from animal products and increasing the intake of fibrous vegetables, fruits, whole grains, and legumes [18]. Increased fruit and vegetable consumption has been found to play a key role in preventing heart disease. In a follow-up of the Nurses' Health Study, an additional serving per day of fruits and vegetables was associated with a 4% reduction in the risk of coronary heart disease [19]. Fruits and vegetables contain many nutrients which may be associated with reduced risk for heart disease, including fiber, vitamins, minerals, and phytochemicals. Phytochemicals found in fruits and vegetables have been shown to reduce inflammation, oxidative stress, and other markers of CVD [20]. Among common fruits and vegetables, carrots are high in fiber, carotenoids, vitamins C and E, and phenolics such as p-coumaric, chlorogenic, and caffeic acids [21]. Consuming foods containing phenolic compounds has decreased the risk of vascular diseases in previous studies [22,23]. Phenolic compounds are dietary antioxidants found in plants that have been shown to inhibit LDL oxidation, inhibit platelet aggregation and adhesion, decrease total and LDL cholesterol, and induce endothelium-dependent vaso-relaxation [24-26]. Oral intake of carrot juice also displays other beneficial physiological effects, including reduced oxidative DNA damage [27], increased levels of plasma antioxidants [28], and reduced inflammation [29].
In the Lipid Research Clinics Coronary Primary Prevention Trial (LRC-CPPT), men were tracked over 13 years, and the results revealed that those with the highest plasma carotenoid levels had a lower risk of coronary heart disease [30]. In a 12-year follow-up of the Prospective Basel Study, Eicholzer and colleagues found that the risk of ischemic heart disease was increased (relative risk 1.53) in those with the lowest plasma carotene concentrations [31]. Inflammation has been shown to be a strong predictor of CVD, and serum β-carotene inversely correlates with C-reactive protein and interleukin-6 [29,32]. Previous studies have shown beneficial effects of eating high-fiber diets on lowering cardiovascular risk factors [33-35]. One other study has shown a potential benefit of a fruit and vegetable juice concentrate in cardiovascular prevention [36]. The goal of the present study was to evaluate the potential role of drinking 16 fl oz of freshly squeezed carrot juice (equivalent to one pound of fresh carrots) daily in lowering cardiovascular risk markers in adults with elevated cholesterol and triglycerides.
Methods
Subjects: In this study, eight males and nine females from the Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz of fresh carrot juice daily for the three-month duration of the study. The research study was approved by the TAMUK IRB board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form agreeing to the rules and regulations of the study. Carrots were juiced and delivered to research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days, when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500 × g for 15 min. The blood chemistry and lipid panels were analyzed using the Modular Analytics D 2400 system and Cobas Integra 800 of Roche Diagnostics Corp., Indianapolis, IN.
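The BMI status mentioned above is weight in kilograms divided by height in meters squared; a minimal sketch:

```python
# Sketch: body mass index from the measured weight and height.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(82.0, 1.75), 1))  # made-up measurements -> 26.8
```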
Outcome Measures

Blood Pressure Collection: Three random blood pressure readings from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure reading was repeated if: 1) an error occurred in the reading, 2) the subject seemed anxious or nervous, or 3) the measurement was above the normal range (120 mm Hg/80 mm Hg).

Bioelectrical Impedance Analysis: The Bioelectrical Impedance Analysis (BIA) technique was used to estimate the percentage of body fat using the Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers, and the Quantum II measured the voltage changes (RJL Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.

Inflammatory Markers and Hormones: The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D Systems, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index of inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin-resistant state (Linco Research, Inc., St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc., St. Charles, MI).

Total Antioxidant Status and Malondialdehyde Production: The plasma was collected and an aliquot was refrigerated for total antioxidant status, measured using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest Life Science (Vancouver, WA, USA).
Analysis: The experimental design was a pre-test/post-test, and a comparison Student's t-test was performed to determine the effects of drinking 16 fl oz of fresh carrot juice daily, as the independent variable, on the variables of interest at baseline and 90 days after the completion of the study, as outlined by Steel and Torrie [37].
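For a pre-test/post-test design on the same subjects, the natural form of the comparison is a paired t-test. A minimal sketch with made-up values, using scipy's related-samples test (the paper does not specify the software used):

```python
# Sketch: paired (related-samples) t-test on pre- vs post-test values
# from the same subjects, with made-up numbers.
from scipy import stats

pre  = [3.1, 2.8, 3.5, 3.0, 2.9, 3.3, 3.2, 2.7]   # e.g. baseline values
post = [2.6, 2.5, 3.1, 2.8, 2.4, 3.0, 2.9, 2.5]   # same subjects, day 90

t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.2f}, P = {p:.4f}")
```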
null
null
Conclusion
ASP coordinated and implemented the study, conducted laboratory analysis, and helped to draft the manuscript. SF conducted laboratory analysis. AS drafted the manuscript. BSP designed the study and drafted the manuscript. FD designed and coordinated the study, performed the statistical analyses, and helped to draft the manuscript. All authors read and approved the final manuscript.
[ "Background", "Subjects", "Outcome Measures", "Blood Pressure Collection", "Bioelectrical Impedance Analysis", "Inflammatory Markers and Hormones", "Total Antioxidant Status and Malondialdehyde Production", "Analysis", "Results", "Discussion", "Conclusion" ]
In the Lipid Research Clinics Coronary Primary Prevention Trial (LRC-CPPT), men were tracked over 13 years and results revealed that those with the highest plasma carotenoid levels had lower risk of coronary heart disease [30]. In a 12-year follow-up of the Prospective Basel Study, Eicholzer and colleagues found that the risk of ischemic heart disease is increased by 1.53 Relative Risk in those with the lowest plasma carotene concentrations [31]. Inflammation has been shown to be a strong predictor of CVD and serum β-carotene inversely correlates with C-reactive protein and interleukin-6 [29,32].\nPrevious studies have shown beneficial effects of eating high fiber diets on lowering cardiovascular risk factors [33-35]. One other study has shown potential benefit of fruit and vegetable juice concentrate on cardiovascular prevention [36]. The goal of present study was to evaluate the potential role of drinking 16 ounces fresh squeezed carrot juice (equivalent to one pound of fresh carrot) daily on lowering cardiovascular risk markers in adults with elevated cholesterol and triglycerides.", "In this study, eight males and nine females from Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz fresh carrot juices daily for the three month duration of the study. The research study was approved by TAMUK IRB-board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form to agree to rules and regulations of the study. Carrots were juiced and delivered to research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500x g for 15 min. The blood chemistry and lipid panels were analyzed using Modular Analytics D 2400 system and Cobas Integra 800 of Roche Diagnostics Corp. Indianapolis, IN.", " Blood Pressure Collection Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\nThree random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\n Bioelectrical Impedance Analysis The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. 
The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\nThe Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\n Inflammatory Markers and Hormones The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\nThe plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\n Total Antioxidant Status and Malondialdehyde Production The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).\nThe plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).", "Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).", "The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. 
The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.", "The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).", "The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).", "The experimental design was a pre-test/post-test and a comparison student t-test was performed to determine the effects of drinking 16 fl oz fresh carrot juice daily as an independent variable on variables of interest at baseline and 90 days after the completion of the study as outlined by Steel and Torrie [37].", "Drinking 16 fl oz of carrot juice daily for three months did not (P > 0.1) alter subject's weight, body fat percentage, or BMI (Table 1). While drinking carrot juice numerically lowered systolic pressure (P = 0.06) while diastolic pressure was unchanged (Table 1). The fasting plasma chemistry including glucose, triglycerides, cholesterol, HDL, LDL, VLDL, the hormones insulin and leptin, and the inflammatory markers interleukin-1α and C-reactive protein were unaffected from consuming carrot juice (Table 1). Plasma Apo A and Apo B were not (P > 0.1) significantly affected by drinking carrot juice (Table 1). However, the plasma antioxidant status was significantly (P < 0.05) increased while the plasma malondialdehyde production was significantly (P < 0.05) decreased after drinking carrot juice (Table 1).\nEffects of drinking carrot juice on anthropometrics, blood Pressure, blood chemistry, lipid panel, hormones, inflammatory markers, and antioxidant status\na,bMeans with unlike superscript are significantly (P ≤ 0.05) different from each other.\nComparing the males to the females, men weighed more and were taller than the female subjects (Table 2). The plasma chemistry, lipid panel, blood pressure, and plasma antioxidant capacity were not (P > 0.1) different between men and women participants (Tables 2 and 3). However, the plasma interleukin-1α, Apo B, and malondialdehyde were significantly (P < 0.05) higher in men (Table 3), while the percentage of body fat, insulin, and leptin were higher (P < 0.05) in women (Tables 2 and 3). 
It is interesting to note that at baseline, the men had significantly higher (P < 0.05) plasma malondialdhyde concentration than the women, but after drinking carrot juice for 90 days, the plasma malondialdhyde concentration was significantly reduced (P < 0.05) and there was no difference in the plasma malondialdehyde concentration between men and women participants at the end of the study.\nGender differences on anthropometrics, blood Pressure, blood chemistry, and insulin\na,bMeans with unlike superscript are significantly (P ≤ 0.05) different from each other.\nGender differences on lipid panel, leptin, inflammatory markers, and antioxidant status\na,cMeans with unlike superscript are significantly (P ≤ 0.05) different from each other.", "In the present study, the lack of effects of carrot juice on cardiovascular markers may well be attributed to the fact that no change in eating pattern was requested [38]. The subjects were told to continue with their daily lifestyle and drink one glass containing 16 fl oz (480 mL) of carrot juice as a morning snack.\nDespite no change in anthropometric or lipid panels, carrot juice contributed to a 5% reduction in systolic blood pressure. The trend in systolic blood pressure lowering effect of carrot juice is consistent with the National Heart, Lung, and Blood Institute's recommendations to increase intake of vegetables as part of a dietary pattern [39]. It is also consistent with the finding that systolic blood pressure, rather than diastolic, is more sensitive to nutrient intake modifications [40]. The nutrients present in carrot juice, including fiber, potassium, nitrates, and vitamin C could have contributed to the effect seen in lowering systolic blood pressure [41]. Elevated blood pressure is a primary risk factor for the development of CVD and is described as a component of the metabolic syndrome [42]. Carrots are a rich source of nitrates, which may be converted into nitric oxide to increase vasodilation, possibly decreasing blood pressure. The Mediterranean diet, known for relatively low rates of CVD compared to the typical Western diet, is rich in dietary nitrates and green leafy vegetables containing potassium [43]. Carrot juice is also a rich source of potassium which may in part have contributed to lowering systolic blood pressure [44].\nThe lack of effect from drinking carrot juice on the plasma inflammatory markers as well as insulin and leptin may have been because insulin, leptin, interleukin-1α, and C-reactive protein were all within normal ranges. Our report is inconsistent with an earlier finding suggesting that elevated C-reactive protein was independently associated with higher BMI and higher triglycerides [45]. However, our results are consistent with those who reported that the mean values of cholesterol, triglycerides, and LDL cholesterol were not affected by increased C-reactive protein and insulin in obese women when compared to normal weight controls suggesting that the increase in inflammatory markers have no effect on lipid panel status [46]. Therefore, we conclude that the high mean cholesterol and triglyceride concentrations are attributed to the diet and the eating habits rather than to inflammatory stimulation.\nIt is interesting to note that the increase in the plasma antioxidant status and the decrease in the plasma malondialdehyde may have been related to drinking carrot juice which is a good source of β-carotenes and α-carotenes with antioxidant activity. 
In a similar study, drinking kale juice significantly improved antioxidant status without affecting the concentration of malondialdehyde [47]. The inconsistency in results is probably due to the amount of juice consumed. In the current study, the participants consumed 16 fl oz (480 mL) of fresh carrot juice everyday while in the study reported by Kim and colleagues, the participants drank 5 ounces (150 mL) of Kale juice daily [47].\nOur results support the notion that physiological differences exist between the male and female subjects. One of the physiological differences between the participants are sex hormones causing men to weigh more and have a lower body fat percentage then the female counterpart [48,49]. Furthermore, the higher plasma leptin and insulin concentrations in females agrees with earlier reports and the differences could also be attributed to sex hormones as observed in one other study [50]. However, the elevated plasma interleukin-1α, Apo B, and malondialdehyde in men suggest that men are at a greater risk for developing CVD than women. Moreover, decreased lipid peroxidation evident from drinking carrot juice is associated with increased antioxidant status independent of inflammatory markers, hormones, or increased cholesterol and triglyceride concentrations.", "In conclusion, drinking 16 fl oz of fresh carrot juice daily significantly increased antioxidant status and suppressed lipid peroxidation without affecting the plasma cholesterol and triglyceride status. It appears both dietary and lifestyle change are required to positively alter lipid profiles in place of increasing fruits or vegetable serving sizes without comprehensive dietary modifications. However, future studies are needed to either refute or validate the results and conclusion of this study." ]
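The Analysis entry above describes a pre-test/post-test design compared with a Student's t-test at baseline and after 90 days. As a minimal sketch of how such a before/after comparison can be computed, assuming one paired measurement per subject, the snippet below uses scipy.stats.ttest_rel; the variable names and values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the pre-test/post-test comparison described in the
# Analysis entry above. All values are hypothetical placeholders, not
# the study's data.
from scipy import stats

# One measurement per subject at baseline and after 90 days of carrot juice
# (e.g., plasma malondialdehyde in illustrative units).
baseline = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7]
day_90 = [1.6, 1.5, 1.9, 1.7, 1.6, 1.8, 1.9, 1.4]

# Paired t-test: each subject serves as their own control.
t_stat, p_value = stats.ttest_rel(baseline, day_90)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")  # P < 0.05 would indicate a significant change
```

A paired test is the natural fit for this design because the same 17 subjects were measured twice; the original text says only "Student's t-test", so the paired form is an assumption of this sketch.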
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Subjects", "Outcome Measures", "Blood Pressure Collection", "Bioelectrical Impedance Analysis", "Inflammatory Markers and Hormones", "Total Antioxidant Status and Malondialdehyde Production", "Analysis", "Results", "Discussion", "Conclusion" ]
[ "Despite a gradual decline in mortality from cardiovascular disease (CVD), it is still the leading cause of both morbidity and mortality in the United States [1]. Approximately 864,000 Americans die each year from CVD and this figure makes up 35% of the total deaths in the United States [2]. In recent years, there have been disturbing increases in the prevalence of CVD risk factors like diabetes, obesity, and the metabolic syndrome which collectively may negate the downward trends in CVD mortality [1,3,4].\nObesity elevates the risk of developing metabolic syndrome, diabetes, and CVD [5-7]. Obesity is thought to initiate a cascade of events leading to systemic inflammation and increases in circulating C-reactive protein, insulin resistance, and dyslipidemia [7]. In 1998, the American Heart Association considered obesity to be one of the major risk factors for coronary heart disease [8]. Other risk factors for CVD include a higher body mass index (BMI), the marker commonly used to establish obesity, which has been shown to be independently associated with hypertension, elevated total cholesterol and low-density lipoprotein (LDL) cholesterol levels, and lower high density lipoprotein (HDL) cholesterol [9-12]. The optimal BMI for adults 18 to 85 years of age is from 23 to 25 for most races [13]. Evidence has demonstrated that people with elevated BMI are at higher risk of developing CVD compared with those of normal BMI [14,15]. Additionally, the presence of metabolic syndrome is predicted to shorten life leading to death at younger ages [16].\nDiets high in fat and cholesterol are the major factors contributing to CVD. Dietary modification and lifestyle changes are suggested to be effective strategies to prevent CVD [17]. The National Heart, Lung, and Blood Institute (NHLBI) recommend the Therapeutic Lifestyle Change (TLC) diet to improve heart health in individuals at risk for CVD. The TLC diet consists of reducing intake of saturated and total fat from animal products and increasing the intake of fibrous vegetables, fruits, whole grains, and legumes [18]. Increased fruit and vegetable consumption has been found to play a key role in preventing heart disease. In a follow up of the Nurse's Health Study, an additional serving per day of fruits and vegetables was associated with a 4% reduction in the risk of coronary heart disease [19]. Fruits and vegetables contain many nutrients which may be associated with reduced risk for heart disease, including fiber, vitamins, minerals, and phytochemicals. Phytochemicals found in fruits and vegetables have been shown to reduce inflammation, oxidative stress, and other markers of CVD [20].\nAmong common fruits and vegetables, carrots are high in fibers, carotenoids, vitamins C and E, and phenolics such as p-coumaric, chlorogenic, and caffeic acids [21]. Consuming foods containing phenolic compounds has decreased the risk of vascular diseases in previous studies [22,23]. Phenolic compounds are dietary antioxidants found in plants that are shown to inhibit LDL oxidation, inhibit platelet aggregation and adhesion, decrease total and LDL cholesterol, and induce endothelium-dependent vaso-relaxation [24-26]. Oral intake of carrot juice also displays other beneficial physiological effects including reduced oxidative DNA damage [27], increased levels of plasma antioxidants [28], and reduced inflammation [29]. 
In the Lipid Research Clinics Coronary Primary Prevention Trial (LRC-CPPT), men were tracked over 13 years and results revealed that those with the highest plasma carotenoid levels had lower risk of coronary heart disease [30]. In a 12-year follow-up of the Prospective Basel Study, Eicholzer and colleagues found that the risk of ischemic heart disease is increased by 1.53 Relative Risk in those with the lowest plasma carotene concentrations [31]. Inflammation has been shown to be a strong predictor of CVD and serum β-carotene inversely correlates with C-reactive protein and interleukin-6 [29,32].\nPrevious studies have shown beneficial effects of eating high fiber diets on lowering cardiovascular risk factors [33-35]. One other study has shown potential benefit of fruit and vegetable juice concentrate on cardiovascular prevention [36]. The goal of present study was to evaluate the potential role of drinking 16 ounces fresh squeezed carrot juice (equivalent to one pound of fresh carrot) daily on lowering cardiovascular risk markers in adults with elevated cholesterol and triglycerides.", " Subjects In this study, eight males and nine females from Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz fresh carrot juices daily for the three month duration of the study. The research study was approved by TAMUK IRB-board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form to agree to rules and regulations of the study. Carrots were juiced and delivered to research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500x g for 15 min. The blood chemistry and lipid panels were analyzed using Modular Analytics D 2400 system and Cobas Integra 800 of Roche Diagnostics Corp. Indianapolis, IN.\nIn this study, eight males and nine females from Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz fresh carrot juices daily for the three month duration of the study. The research study was approved by TAMUK IRB-board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form to agree to rules and regulations of the study. Carrots were juiced and delivered to research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500x g for 15 min. The blood chemistry and lipid panels were analyzed using Modular Analytics D 2400 system and Cobas Integra 800 of Roche Diagnostics Corp. Indianapolis, IN.\n Outcome Measures Blood Pressure Collection Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. 
The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\nThree random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\n Bioelectrical Impedance Analysis The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\nThe Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\n Inflammatory Markers and Hormones The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\nThe plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\n Total Antioxidant Status and Malondialdehyde Production The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. 
The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).\nThe plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).\n Blood Pressure Collection Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\nThree random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\n Bioelectrical Impedance Analysis The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\nThe Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\n Inflammatory Markers and Hormones The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\nThe plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). 
Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\n Total Antioxidant Status and Malondialdehyde Production The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).\nThe plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).\n Analysis The experimental design was a pre-test/post-test and a comparison student t-test was performed to determine the effects of drinking 16 fl oz fresh carrot juice daily as an independent variable on variables of interest at baseline and 90 days after the completion of the study as outlined by Steel and Torrie [37].\nThe experimental design was a pre-test/post-test and a comparison student t-test was performed to determine the effects of drinking 16 fl oz fresh carrot juice daily as an independent variable on variables of interest at baseline and 90 days after the completion of the study as outlined by Steel and Torrie [37].", "In this study, eight males and nine females from Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz fresh carrot juices daily for the three month duration of the study. The research study was approved by TAMUK IRB-board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form to agree to rules and regulations of the study. Carrots were juiced and delivered to research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500x g for 15 min. The blood chemistry and lipid panels were analyzed using Modular Analytics D 2400 system and Cobas Integra 800 of Roche Diagnostics Corp. Indianapolis, IN.", " Blood Pressure Collection Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\nThree random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).\n Bioelectrical Impedance Analysis The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). 
The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\nThe Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.\n Inflammatory Markers and Hormones The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\nThe plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).\n Total Antioxidant Status and Malondialdehyde Production The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).\nThe plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).", "Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg).", "The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. 
After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study.", "The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI).", "The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA).", "The experimental design was a pre-test/post-test and a comparison student t-test was performed to determine the effects of drinking 16 fl oz fresh carrot juice daily as an independent variable on variables of interest at baseline and 90 days after the completion of the study as outlined by Steel and Torrie [37].", "Drinking 16 fl oz of carrot juice daily for three months did not (P > 0.1) alter subject's weight, body fat percentage, or BMI (Table 1). While drinking carrot juice numerically lowered systolic pressure (P = 0.06) while diastolic pressure was unchanged (Table 1). The fasting plasma chemistry including glucose, triglycerides, cholesterol, HDL, LDL, VLDL, the hormones insulin and leptin, and the inflammatory markers interleukin-1α and C-reactive protein were unaffected from consuming carrot juice (Table 1). Plasma Apo A and Apo B were not (P > 0.1) significantly affected by drinking carrot juice (Table 1). However, the plasma antioxidant status was significantly (P < 0.05) increased while the plasma malondialdehyde production was significantly (P < 0.05) decreased after drinking carrot juice (Table 1).\nEffects of drinking carrot juice on anthropometrics, blood Pressure, blood chemistry, lipid panel, hormones, inflammatory markers, and antioxidant status\na,bMeans with unlike superscript are significantly (P ≤ 0.05) different from each other.\nComparing the males to the females, men weighed more and were taller than the female subjects (Table 2). The plasma chemistry, lipid panel, blood pressure, and plasma antioxidant capacity were not (P > 0.1) different between men and women participants (Tables 2 and 3). However, the plasma interleukin-1α, Apo B, and malondialdehyde were significantly (P < 0.05) higher in men (Table 3), while the percentage of body fat, insulin, and leptin were higher (P < 0.05) in women (Tables 2 and 3). 
It is interesting to note that at baseline, the men had significantly higher (P < 0.05) plasma malondialdhyde concentration than the women, but after drinking carrot juice for 90 days, the plasma malondialdhyde concentration was significantly reduced (P < 0.05) and there was no difference in the plasma malondialdehyde concentration between men and women participants at the end of the study.\nGender differences on anthropometrics, blood Pressure, blood chemistry, and insulin\na,bMeans with unlike superscript are significantly (P ≤ 0.05) different from each other.\nGender differences on lipid panel, leptin, inflammatory markers, and antioxidant status\na,cMeans with unlike superscript are significantly (P ≤ 0.05) different from each other.", "In the present study, the lack of effects of carrot juice on cardiovascular markers may well be attributed to the fact that no change in eating pattern was requested [38]. The subjects were told to continue with their daily lifestyle and drink one glass containing 16 fl oz (480 mL) of carrot juice as a morning snack.\nDespite no change in anthropometric or lipid panels, carrot juice contributed to a 5% reduction in systolic blood pressure. The trend in systolic blood pressure lowering effect of carrot juice is consistent with the National Heart, Lung, and Blood Institute's recommendations to increase intake of vegetables as part of a dietary pattern [39]. It is also consistent with the finding that systolic blood pressure, rather than diastolic, is more sensitive to nutrient intake modifications [40]. The nutrients present in carrot juice, including fiber, potassium, nitrates, and vitamin C could have contributed to the effect seen in lowering systolic blood pressure [41]. Elevated blood pressure is a primary risk factor for the development of CVD and is described as a component of the metabolic syndrome [42]. Carrots are a rich source of nitrates, which may be converted into nitric oxide to increase vasodilation, possibly decreasing blood pressure. The Mediterranean diet, known for relatively low rates of CVD compared to the typical Western diet, is rich in dietary nitrates and green leafy vegetables containing potassium [43]. Carrot juice is also a rich source of potassium which may in part have contributed to lowering systolic blood pressure [44].\nThe lack of effect from drinking carrot juice on the plasma inflammatory markers as well as insulin and leptin may have been because insulin, leptin, interleukin-1α, and C-reactive protein were all within normal ranges. Our report is inconsistent with an earlier finding suggesting that elevated C-reactive protein was independently associated with higher BMI and higher triglycerides [45]. However, our results are consistent with those who reported that the mean values of cholesterol, triglycerides, and LDL cholesterol were not affected by increased C-reactive protein and insulin in obese women when compared to normal weight controls suggesting that the increase in inflammatory markers have no effect on lipid panel status [46]. Therefore, we conclude that the high mean cholesterol and triglyceride concentrations are attributed to the diet and the eating habits rather than to inflammatory stimulation.\nIt is interesting to note that the increase in the plasma antioxidant status and the decrease in the plasma malondialdehyde may have been related to drinking carrot juice which is a good source of β-carotenes and α-carotenes with antioxidant activity. 
In a similar study, drinking kale juice significantly improved antioxidant status without affecting the concentration of malondialdehyde [47]. The inconsistency in results is probably due to the amount of juice consumed. In the current study, the participants consumed 16 fl oz (480 mL) of fresh carrot juice everyday while in the study reported by Kim and colleagues, the participants drank 5 ounces (150 mL) of Kale juice daily [47].\nOur results support the notion that physiological differences exist between the male and female subjects. One of the physiological differences between the participants are sex hormones causing men to weigh more and have a lower body fat percentage then the female counterpart [48,49]. Furthermore, the higher plasma leptin and insulin concentrations in females agrees with earlier reports and the differences could also be attributed to sex hormones as observed in one other study [50]. However, the elevated plasma interleukin-1α, Apo B, and malondialdehyde in men suggest that men are at a greater risk for developing CVD than women. Moreover, decreased lipid peroxidation evident from drinking carrot juice is associated with increased antioxidant status independent of inflammatory markers, hormones, or increased cholesterol and triglyceride concentrations.", "In conclusion, drinking 16 fl oz of fresh carrot juice daily significantly increased antioxidant status and suppressed lipid peroxidation without affecting the plasma cholesterol and triglyceride status. It appears both dietary and lifestyle change are required to positively alter lipid profiles in place of increasing fruits or vegetable serving sizes without comprehensive dietary modifications. However, future studies are needed to either refute or validate the results and conclusion of this study." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[ "Antioxidant status", "cardiovascular disease", "lipid peroxidation" ]
Background: Despite a gradual decline in mortality from cardiovascular disease (CVD), it is still the leading cause of both morbidity and mortality in the United States [1]. Approximately 864,000 Americans die each year from CVD and this figure makes up 35% of the total deaths in the United States [2]. In recent years, there have been disturbing increases in the prevalence of CVD risk factors like diabetes, obesity, and the metabolic syndrome which collectively may negate the downward trends in CVD mortality [1,3,4]. Obesity elevates the risk of developing metabolic syndrome, diabetes, and CVD [5-7]. Obesity is thought to initiate a cascade of events leading to systemic inflammation and increases in circulating C-reactive protein, insulin resistance, and dyslipidemia [7]. In 1998, the American Heart Association considered obesity to be one of the major risk factors for coronary heart disease [8]. Other risk factors for CVD include a higher body mass index (BMI), the marker commonly used to establish obesity, which has been shown to be independently associated with hypertension, elevated total cholesterol and low-density lipoprotein (LDL) cholesterol levels, and lower high density lipoprotein (HDL) cholesterol [9-12]. The optimal BMI for adults 18 to 85 years of age is from 23 to 25 for most races [13]. Evidence has demonstrated that people with elevated BMI are at higher risk of developing CVD compared with those of normal BMI [14,15]. Additionally, the presence of metabolic syndrome is predicted to shorten life leading to death at younger ages [16]. Diets high in fat and cholesterol are the major factors contributing to CVD. Dietary modification and lifestyle changes are suggested to be effective strategies to prevent CVD [17]. The National Heart, Lung, and Blood Institute (NHLBI) recommend the Therapeutic Lifestyle Change (TLC) diet to improve heart health in individuals at risk for CVD. The TLC diet consists of reducing intake of saturated and total fat from animal products and increasing the intake of fibrous vegetables, fruits, whole grains, and legumes [18]. Increased fruit and vegetable consumption has been found to play a key role in preventing heart disease. In a follow up of the Nurse's Health Study, an additional serving per day of fruits and vegetables was associated with a 4% reduction in the risk of coronary heart disease [19]. Fruits and vegetables contain many nutrients which may be associated with reduced risk for heart disease, including fiber, vitamins, minerals, and phytochemicals. Phytochemicals found in fruits and vegetables have been shown to reduce inflammation, oxidative stress, and other markers of CVD [20]. Among common fruits and vegetables, carrots are high in fibers, carotenoids, vitamins C and E, and phenolics such as p-coumaric, chlorogenic, and caffeic acids [21]. Consuming foods containing phenolic compounds has decreased the risk of vascular diseases in previous studies [22,23]. Phenolic compounds are dietary antioxidants found in plants that are shown to inhibit LDL oxidation, inhibit platelet aggregation and adhesion, decrease total and LDL cholesterol, and induce endothelium-dependent vaso-relaxation [24-26]. Oral intake of carrot juice also displays other beneficial physiological effects including reduced oxidative DNA damage [27], increased levels of plasma antioxidants [28], and reduced inflammation [29]. 
In the Lipid Research Clinics Coronary Primary Prevention Trial (LRC-CPPT), men were tracked over 13 years and results revealed that those with the highest plasma carotenoid levels had lower risk of coronary heart disease [30]. In a 12-year follow-up of the Prospective Basel Study, Eicholzer and colleagues found that the risk of ischemic heart disease is increased by 1.53 Relative Risk in those with the lowest plasma carotene concentrations [31]. Inflammation has been shown to be a strong predictor of CVD and serum β-carotene inversely correlates with C-reactive protein and interleukin-6 [29,32]. Previous studies have shown beneficial effects of eating high fiber diets on lowering cardiovascular risk factors [33-35]. One other study has shown potential benefit of fruit and vegetable juice concentrate on cardiovascular prevention [36]. The goal of present study was to evaluate the potential role of drinking 16 ounces fresh squeezed carrot juice (equivalent to one pound of fresh carrot) daily on lowering cardiovascular risk markers in adults with elevated cholesterol and triglycerides. Methods: Subjects In this study, eight males and nine females from Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz fresh carrot juices daily for the three month duration of the study. The research study was approved by TAMUK IRB-board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form to agree to rules and regulations of the study. Carrots were juiced and delivered to research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500x g for 15 min. The blood chemistry and lipid panels were analyzed using Modular Analytics D 2400 system and Cobas Integra 800 of Roche Diagnostics Corp. Indianapolis, IN. In this study, eight males and nine females from Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz fresh carrot juices daily for the three month duration of the study. The research study was approved by TAMUK IRB-board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form to agree to rules and regulations of the study. Carrots were juiced and delivered to research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500x g for 15 min. The blood chemistry and lipid panels were analyzed using Modular Analytics D 2400 system and Cobas Integra 800 of Roche Diagnostics Corp. Indianapolis, IN. Outcome Measures Blood Pressure Collection Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. 
The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg). Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg). Bioelectrical Impedance Analysis The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study. The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study. Inflammatory Markers and Hormones The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI). The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI). Total Antioxidant Status and Malondialdehyde Production The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA). 
The plasma was collected and an aliquot was refrigerated for total antioxidant status using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest life science (Vancouver, WA, USA). Blood Pressure Collection Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg). Three random blood pressures from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure was taken from each subject and repeated if: 1) an error occurred in the reading, 2) the subjects seemed anxious or nervous, or 3) the blood pressure measurement was above the normal range (120 mm Hg/80 mm Hg). Bioelectrical Impedance Analysis The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study. The Bioelectrical Impedance Analysis (BIA) technique was used to estimate percentage of body fat using Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode site was cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surface of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers and the Quantum II measures the voltage changes (RLJ Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study. Inflammatory Markers and Hormones The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI). The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D System, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index for inflammation (Life Diagnostics, Westchester, PA, USA). Insulin was analyzed to determine an insulin resistant state (Linco Research, Inc. St. Charles, MI). Leptin was analyzed as an adiposity hormone (Linco Research, Inc. St. Charles, MI). 
Subjects: In this study, eight males and nine females from the Texas A & M University-Kingsville (TAMUK) faculty and staff (n = 17) with elevated plasma cholesterol and triglyceride levels were the study participants. Each subject was asked to drink 16 fl oz of fresh carrot juice daily for the three-month duration of the study. The research study was approved by the TAMUK IRB board prior to initiation of the study. Participants agreed to drink the carrot juice daily and signed a consent form agreeing to the rules and regulations of the study. Carrots were juiced and delivered to the research participants daily. Weight and height were measured both pre-test and post-test to calculate BMI status. Fasting blood samples were collected by a licensed nurse at the start of the study and after 90 days when the study was concluded. The blood samples were collected in tubes and centrifuged at 1,500 × g for 15 min. The blood chemistry and lipid panels were analyzed using the Modular Analytics D 2400 system and the Cobas Integra 800 of Roche Diagnostics Corp., Indianapolis, IN.
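BMI here is the standard weight/height² index computed at pre-test and post-test. A minimal sketch in R; the values and variable names are hypothetical, not taken from the study data.

```r
# BMI = weight (kg) / height (m)^2, computed pre-test and post-test.
# Hypothetical values for one subject; the study's individual data are not shown here.
weight_kg <- c(pre = 82.5, post = 81.9)
height_m  <- 1.75

bmi <- weight_kg / height_m^2
round(bmi, 1)   # pre-test and post-test BMI
```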
Outcome Measures: Blood Pressure Collection: Three random blood pressure readings from the right upper arm were taken using the OMRON Model #HEM 711. Blood pressure was measured at the beginning and end of the study. The blood pressure measurement was repeated if: 1) an error occurred in the reading, 2) the subject seemed anxious or nervous, or 3) the measurement was above the normal range (120/80 mm Hg). Bioelectrical Impedance Analysis: The Bioelectrical Impedance Analysis (BIA) technique was used to estimate the percentage of body fat using the Quantum II (RJL Systems, 2006, Clinton Twp., MI). The BIA test was performed while each subject was lying supine with their arms and legs spread open. After the electrode sites were cleaned with isopropyl alcohol, electrode patches with self-adhesive conducting gel were attached to the dorsal surfaces of the right foot and right hand. The electrodes introduced an alternating current (50 kHz) at the base of the toes and fingers, and the Quantum II measured the voltage changes (RJL Systems, 2006, Clinton Twp., MI). The BIA was measured at the beginning and end of the study. Inflammatory Markers and Hormones: The plasma interleukin-1α was determined as a pro-inflammatory marker (R&D Systems, Minneapolis, MN). The plasma C-reactive protein was analyzed using a rat C-reactive protein ELISA kit as an index of inflammation (Life Diagnostics, West Chester, PA, USA). Insulin was analyzed to determine the insulin-resistant state (Linco Research, Inc., St. Charles, MO). Leptin was analyzed as an adiposity hormone (Linco Research, Inc., St. Charles, MO). Total Antioxidant Status and Malondialdehyde Production: The plasma was collected and an aliquot was refrigerated for total antioxidant status measurement using a commercially available kit (Calbiochem, San Diego, CA, USA) as a quantitative measure of circulating antioxidant status. The plasma malondialdehyde was evaluated as an indicator of lipid peroxidation using a kit from Northwest Life Science (Vancouver, WA, USA). Analysis: The experimental design was pre-test/post-test, and a comparison Student's t-test was performed to determine the effect of drinking 16 fl oz of fresh carrot juice daily, as the independent variable, on the variables of interest at baseline and at 90 days upon completion of the study, as outlined by Steel and Torrie [37].
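A sketch of the pre-test/post-test comparison in R, assuming a paired design with baseline and day-90 measurements per subject; the data below are simulated for illustration and the outcome name is arbitrary.

```r
# Pre-test/post-test comparison for one outcome (e.g., plasma total antioxidant status).
# Simulated paired data for n = 17 subjects; illustration only.
set.seed(1)
baseline <- rnorm(17, mean = 1.10, sd = 0.15)
day90    <- baseline + rnorm(17, mean = 0.12, sd = 0.10)

# Paired Student's t-test on the within-subject change
t.test(day90, baseline, paired = TRUE)
```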
Results: Drinking 16 fl oz of carrot juice daily for three months did not (P > 0.1) alter the subjects' weight, body fat percentage, or BMI (Table 1). Drinking carrot juice numerically lowered systolic pressure (P = 0.06), while diastolic pressure was unchanged (Table 1). The fasting plasma chemistry, including glucose, triglycerides, cholesterol, HDL, LDL, and VLDL, the hormones insulin and leptin, and the inflammatory markers interleukin-1α and C-reactive protein were unaffected by consuming carrot juice (Table 1). Plasma Apo A and Apo B were not (P > 0.1) significantly affected by drinking carrot juice (Table 1). However, the plasma antioxidant status was significantly (P < 0.05) increased, while the plasma malondialdehyde production was significantly (P < 0.05) decreased, after drinking carrot juice (Table 1). Table 1: Effects of drinking carrot juice on anthropometrics, blood pressure, blood chemistry, lipid panel, hormones, inflammatory markers, and antioxidant status. a,bMeans with unlike superscripts are significantly (P ≤ 0.05) different from each other. Comparing the males to the females, the men weighed more and were taller than the female subjects (Table 2). The plasma chemistry, lipid panel, blood pressure, and plasma antioxidant capacity were not (P > 0.1) different between the men and women participants (Tables 2 and 3). However, the plasma interleukin-1α, Apo B, and malondialdehyde were significantly (P < 0.05) higher in men (Table 3), while the percentage of body fat, insulin, and leptin were higher (P < 0.05) in women (Tables 2 and 3).
It is interesting to note that at baseline the men had a significantly higher (P < 0.05) plasma malondialdehyde concentration than the women, but after drinking carrot juice for 90 days, the plasma malondialdehyde concentration was significantly reduced (P < 0.05) and there was no difference in the plasma malondialdehyde concentration between the men and women participants at the end of the study. Table 2: Gender differences in anthropometrics, blood pressure, blood chemistry, and insulin. a,bMeans with unlike superscripts are significantly (P ≤ 0.05) different from each other. Table 3: Gender differences in lipid panel, leptin, inflammatory markers, and antioxidant status. a,cMeans with unlike superscripts are significantly (P ≤ 0.05) different from each other. Discussion: In the present study, the lack of effect of carrot juice on cardiovascular markers may well be attributed to the fact that no change in eating pattern was requested [38]. The subjects were told to continue with their daily lifestyle and drink one glass containing 16 fl oz (480 mL) of carrot juice as a morning snack. Despite no change in anthropometrics or lipid panels, carrot juice contributed to a 5% reduction in systolic blood pressure. The trend toward a systolic blood pressure-lowering effect of carrot juice is consistent with the National Heart, Lung, and Blood Institute's recommendation to increase intake of vegetables as part of a dietary pattern [39]. It is also consistent with the finding that systolic blood pressure, rather than diastolic, is more sensitive to nutrient intake modifications [40]. The nutrients present in carrot juice, including fiber, potassium, nitrates, and vitamin C, could have contributed to the observed lowering of systolic blood pressure [41]. Elevated blood pressure is a primary risk factor for the development of CVD and is described as a component of the metabolic syndrome [42]. Carrots are a rich source of nitrates, which may be converted into nitric oxide to increase vasodilation, possibly decreasing blood pressure. The Mediterranean diet, known for relatively low rates of CVD compared to the typical Western diet, is rich in dietary nitrates and green leafy vegetables containing potassium [43]. Carrot juice is also a rich source of potassium, which may in part have contributed to lowering systolic blood pressure [44]. The lack of effect of drinking carrot juice on the plasma inflammatory markers, as well as insulin and leptin, may have been because insulin, leptin, interleukin-1α, and C-reactive protein were all within normal ranges. Our report is inconsistent with an earlier finding suggesting that elevated C-reactive protein was independently associated with higher BMI and higher triglycerides [45]. However, our results are consistent with a report that the mean values of cholesterol, triglycerides, and LDL cholesterol were not affected by increased C-reactive protein and insulin in obese women when compared to normal-weight controls, suggesting that an increase in inflammatory markers has no effect on lipid panel status [46]. Therefore, we conclude that the high mean cholesterol and triglyceride concentrations are attributable to the diet and eating habits rather than to inflammatory stimulation. It is interesting to note that the increase in the plasma antioxidant status and the decrease in the plasma malondialdehyde may have been related to drinking carrot juice, which is a good source of β-carotene and α-carotene with antioxidant activity.
In a similar study, drinking kale juice significantly improved antioxidant status without affecting the concentration of malondialdehyde [47]. The inconsistency in results is probably due to the amount of juice consumed. In the current study, the participants consumed 16 fl oz (480 mL) of fresh carrot juice every day, while in the study reported by Kim and colleagues, the participants drank 5 fl oz (150 mL) of kale juice daily [47]. Our results support the notion that physiological differences exist between the male and female subjects. One of the physiological differences between the participants is sex hormones, which cause men to weigh more and have a lower body fat percentage than their female counterparts [48,49]. Furthermore, the higher plasma leptin and insulin concentrations in females agree with earlier reports, and the differences could also be attributed to sex hormones, as observed in one other study [50]. However, the elevated plasma interleukin-1α, Apo B, and malondialdehyde in men suggest that men are at a greater risk for developing CVD than women. Moreover, the decreased lipid peroxidation evident from drinking carrot juice is associated with increased antioxidant status, independent of inflammatory markers, hormones, or increased cholesterol and triglyceride concentrations. Conclusion: In conclusion, drinking 16 fl oz of fresh carrot juice daily significantly increased antioxidant status and suppressed lipid peroxidation without affecting the plasma cholesterol and triglyceride status. It appears that both dietary and lifestyle changes are required to positively alter lipid profiles, rather than increasing fruit or vegetable serving sizes without comprehensive dietary modifications. However, future studies are needed to either refute or validate the results and conclusions of this study.
Background: A high prevalence of obesity and cardiovascular disease is attributable to a sedentary lifestyle and to eating diets high in fat and refined carbohydrates and low in fruits and vegetables. Epidemiological studies have confirmed a strong association between diets rich in fruits and vegetables and cardiovascular health. The aim of this pilot study was to determine whether drinking fresh carrot juice influences antioxidant status and cardiovascular risk markers in subjects not modifying their eating habits. Methods: An experiment was conducted to evaluate the effects of consuming 16 fl oz of freshly squeezed carrot juice daily for three months on cardiovascular risk markers: C-reactive protein, insulin, leptin, interleukin-1α, body fat percentage, body mass index (BMI), blood pressure, antioxidant status, and malondialdehyde production. Fasting blood samples were collected at pre-test and 90 days afterward to conclude the study. Results: Drinking carrot juice did not affect (P > 0.1) the plasma cholesterol, triglycerides, Apo A, Apo B, LDL, HDL, body fat percentage, insulin, leptin, interleukin-1α, or C-reactive protein. Drinking carrot juice decreased (P = 0.06) systolic pressure, but did not influence diastolic pressure. Drinking carrot juice significantly (P < 0.05) increased the plasma total antioxidant capacity and decreased (P < 0.05) the plasma malondialdehyde production. Conclusions: Drinking carrot juice may protect the cardiovascular system by increasing total antioxidant status and by decreasing lipid peroxidation, independent of any of the cardiovascular risk markers measured in the study.
Background: Despite a gradual decline in mortality from cardiovascular disease (CVD), it is still the leading cause of both morbidity and mortality in the United States [1]. Approximately 864,000 Americans die each year from CVD, a figure that makes up 35% of the total deaths in the United States [2]. In recent years, there have been disturbing increases in the prevalence of CVD risk factors like diabetes, obesity, and the metabolic syndrome, which collectively may negate the downward trends in CVD mortality [1,3,4]. Obesity elevates the risk of developing metabolic syndrome, diabetes, and CVD [5-7]. Obesity is thought to initiate a cascade of events leading to systemic inflammation and increases in circulating C-reactive protein, insulin resistance, and dyslipidemia [7]. In 1998, the American Heart Association considered obesity to be one of the major risk factors for coronary heart disease [8]. Other risk factors for CVD include a higher body mass index (BMI), the marker commonly used to establish obesity, which has been shown to be independently associated with hypertension, elevated total cholesterol and low-density lipoprotein (LDL) cholesterol levels, and lower high-density lipoprotein (HDL) cholesterol [9-12]. The optimal BMI for adults 18 to 85 years of age is from 23 to 25 for most races [13]. Evidence has demonstrated that people with elevated BMI are at higher risk of developing CVD compared with those of normal BMI [14,15]. Additionally, the presence of metabolic syndrome is predicted to shorten life, leading to death at younger ages [16]. Diets high in fat and cholesterol are major factors contributing to CVD. Dietary modification and lifestyle changes are suggested to be effective strategies to prevent CVD [17]. The National Heart, Lung, and Blood Institute (NHLBI) recommends the Therapeutic Lifestyle Change (TLC) diet to improve heart health in individuals at risk for CVD. The TLC diet consists of reducing the intake of saturated and total fat from animal products and increasing the intake of fibrous vegetables, fruits, whole grains, and legumes [18]. Increased fruit and vegetable consumption has been found to play a key role in preventing heart disease. In a follow-up of the Nurses' Health Study, an additional serving per day of fruits and vegetables was associated with a 4% reduction in the risk of coronary heart disease [19]. Fruits and vegetables contain many nutrients which may be associated with reduced risk for heart disease, including fiber, vitamins, minerals, and phytochemicals. Phytochemicals found in fruits and vegetables have been shown to reduce inflammation, oxidative stress, and other markers of CVD [20]. Among common fruits and vegetables, carrots are high in fiber, carotenoids, vitamins C and E, and phenolics such as p-coumaric, chlorogenic, and caffeic acids [21]. Consuming foods containing phenolic compounds has decreased the risk of vascular diseases in previous studies [22,23]. Phenolic compounds are dietary antioxidants found in plants that have been shown to inhibit LDL oxidation, inhibit platelet aggregation and adhesion, decrease total and LDL cholesterol, and induce endothelium-dependent vasorelaxation [24-26]. Oral intake of carrot juice also displays other beneficial physiological effects, including reduced oxidative DNA damage [27], increased levels of plasma antioxidants [28], and reduced inflammation [29].
In the Lipid Research Clinics Coronary Primary Prevention Trial (LRC-CPPT), men were tracked over 13 years and the results revealed that those with the highest plasma carotenoid levels had a lower risk of coronary heart disease [30]. In a 12-year follow-up of the Prospective Basel Study, Eicholzer and colleagues found that the risk of ischemic heart disease was increased (relative risk: 1.53) in those with the lowest plasma carotene concentrations [31]. Inflammation has been shown to be a strong predictor of CVD, and serum β-carotene inversely correlates with C-reactive protein and interleukin-6 [29,32]. Previous studies have shown beneficial effects of eating high-fiber diets on lowering cardiovascular risk factors [33-35]. One other study has shown a potential benefit of fruit and vegetable juice concentrate for cardiovascular prevention [36]. The goal of the present study was to evaluate the potential role of drinking 16 ounces of freshly squeezed carrot juice (equivalent to one pound of fresh carrots) daily in lowering cardiovascular risk markers in adults with elevated cholesterol and triglycerides. Authors' contributions: ASP coordinated and implemented the study, conducted laboratory analysis, and helped to draft the manuscript. SF conducted laboratory analysis. AS drafted the manuscript. BSP designed the study and drafted the manuscript. FD designed and coordinated the study, performed the statistical analyses, and helped to draft the manuscript. All authors read and approved the final manuscript.
5,628
289
[ 848, 197, 770, 82, 135, 91, 63, 63, 452, 721, 76 ]
12
[ "blood", "study", "plasma", "pressure", "blood pressure", "juice", "carrot", "status", "mi", "antioxidant" ]
[ "mortality obesity elevates", "cardiovascular risk", "cardiovascular risk factors", "cvd obesity", "diabetes cvd obesity" ]
null
[CONTENT] Antioxidant status | cardiovascular disease | lipid peroxidation [SUMMARY]
[CONTENT] Antioxidant status | cardiovascular disease | lipid peroxidation [SUMMARY]
null
[CONTENT] Antioxidant status | cardiovascular disease | lipid peroxidation [SUMMARY]
[CONTENT] Antioxidant status | cardiovascular disease | lipid peroxidation [SUMMARY]
[CONTENT] Antioxidant status | cardiovascular disease | lipid peroxidation [SUMMARY]
[CONTENT] Adult | Antioxidants | Beverages | Blood Pressure | Body Mass Index | C-Reactive Protein | Cardiovascular Diseases | Daucus carota | Drinking | Female | Humans | Insulin | Interleukin-1alpha | Leptin | Lipid Metabolism | Lipid Peroxidation | Male | Pilot Projects | Risk [SUMMARY]
[CONTENT] Adult | Antioxidants | Beverages | Blood Pressure | Body Mass Index | C-Reactive Protein | Cardiovascular Diseases | Daucus carota | Drinking | Female | Humans | Insulin | Interleukin-1alpha | Leptin | Lipid Metabolism | Lipid Peroxidation | Male | Pilot Projects | Risk [SUMMARY]
null
[CONTENT] Adult | Antioxidants | Beverages | Blood Pressure | Body Mass Index | C-Reactive Protein | Cardiovascular Diseases | Daucus carota | Drinking | Female | Humans | Insulin | Interleukin-1alpha | Leptin | Lipid Metabolism | Lipid Peroxidation | Male | Pilot Projects | Risk [SUMMARY]
[CONTENT] Adult | Antioxidants | Beverages | Blood Pressure | Body Mass Index | C-Reactive Protein | Cardiovascular Diseases | Daucus carota | Drinking | Female | Humans | Insulin | Interleukin-1alpha | Leptin | Lipid Metabolism | Lipid Peroxidation | Male | Pilot Projects | Risk [SUMMARY]
[CONTENT] Adult | Antioxidants | Beverages | Blood Pressure | Body Mass Index | C-Reactive Protein | Cardiovascular Diseases | Daucus carota | Drinking | Female | Humans | Insulin | Interleukin-1alpha | Leptin | Lipid Metabolism | Lipid Peroxidation | Male | Pilot Projects | Risk [SUMMARY]
[CONTENT] mortality obesity elevates | cardiovascular risk | cardiovascular risk factors | cvd obesity | diabetes cvd obesity [SUMMARY]
[CONTENT] mortality obesity elevates | cardiovascular risk | cardiovascular risk factors | cvd obesity | diabetes cvd obesity [SUMMARY]
null
[CONTENT] mortality obesity elevates | cardiovascular risk | cardiovascular risk factors | cvd obesity | diabetes cvd obesity [SUMMARY]
[CONTENT] mortality obesity elevates | cardiovascular risk | cardiovascular risk factors | cvd obesity | diabetes cvd obesity [SUMMARY]
[CONTENT] mortality obesity elevates | cardiovascular risk | cardiovascular risk factors | cvd obesity | diabetes cvd obesity [SUMMARY]
[CONTENT] blood | study | plasma | pressure | blood pressure | juice | carrot | status | mi | antioxidant [SUMMARY]
[CONTENT] blood | study | plasma | pressure | blood pressure | juice | carrot | status | mi | antioxidant [SUMMARY]
null
[CONTENT] blood | study | plasma | pressure | blood pressure | juice | carrot | status | mi | antioxidant [SUMMARY]
[CONTENT] blood | study | plasma | pressure | blood pressure | juice | carrot | status | mi | antioxidant [SUMMARY]
[CONTENT] blood | study | plasma | pressure | blood pressure | juice | carrot | status | mi | antioxidant [SUMMARY]
[CONTENT] risk | cvd | heart | disease | heart disease | shown | obesity | factors | vegetables | fruits [SUMMARY]
[CONTENT] blood | mi | study | analyzed | bia | blood pressure | test | pressure | kit | usa [SUMMARY]
null
[CONTENT] conclusion | dietary | status | status appears | dietary lifestyle change required | triglyceride status appears | triglyceride status | refute | refute validate | refute validate results [SUMMARY]
[CONTENT] blood | plasma | study | pressure | blood pressure | carrot | status | mi | juice | test [SUMMARY]
[CONTENT] blood | plasma | study | pressure | blood pressure | carrot | status | mi | juice | test [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] 16 | daily | three months | leptin | BMI ||| 90 days [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| ||| 16 | daily | three months | leptin | BMI ||| 90 days ||| ||| 0.1 | Apo A | Apo B | HDL | leptin ||| 0.06 ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| 16 | daily | three months | leptin | BMI ||| 90 days ||| ||| 0.1 | Apo A | Apo B | HDL | leptin ||| 0.06 ||| ||| [SUMMARY]
Machine learning predicts portal vein thrombosis after splenectomy in patients with portal hypertension: Comparative analysis of three practical models.
36157936
For patients with portal hypertension (PH), portal vein thrombosis (PVT) is a fatal complication after splenectomy. Postoperative platelet elevation is considered the foremost reason for PVT. However, the value of postoperative platelet elevation rate (PPER) in predicting PVT has never been studied.
BACKGROUND
We retrospectively reviewed 483 patients with PH related to hepatitis B virus who underwent splenectomy between July 2011 and September 2018, and they were randomized into either a training (n = 338) or a validation (n = 145) cohort. The generalized linear (GL) method, least absolute shrinkage and selection operator (LASSO), and random forest (RF) were used to construct models. The receiver operating characteristic curves (ROC), calibration curve, decision curve analysis (DCA), and clinical impact curve (CIC) were used to evaluate the robustness and clinical practicability of the GL model (GLM), LASSO model (LSM), and RF model (RFM).
METHODS
Multivariate analysis exhibited that the first and third days for PPER (PPER1, PPER3) were strongly associated with PVT [odds ratio (OR): 1.78, 95% confidence interval (CI): 1.24-2.62, P = 0.002; OR: 1.43, 95%CI: 1.16-1.77, P < 0.001, respectively]. The areas under the ROC curves of the GLM, LSM, and RFM in the training cohort were 0.83 (95%CI: 0.79-0.88), 0.84 (95%CI: 0.79-0.88), and 0.84 (95%CI: 0.79-0.88), respectively; and were 0.77 (95%CI: 0.69-0.85), 0.83 (95%CI: 0.76-0.90), and 0.78 (95%CI: 0.70-0.85) in the validation cohort, respectively. The calibration curves showed satisfactory agreement between prediction by models and actual observation. DCA and CIC indicated that all models conferred high clinical net benefits.
RESULTS
PPER1 and PPER3 are effective indicators for postoperative prediction of PVT. We have successfully developed PPER-based practical models to accurately predict PVT, which would conveniently help clinicians rapidly differentiate individuals at high risk of PVT, and thus guide the adoption of timely interventions.
CONCLUSION
[ "Humans", "Hypertension, Portal", "Liver Cirrhosis", "Machine Learning", "Portal Vein", "Retrospective Studies", "Risk Factors", "Splenectomy", "Venous Thrombosis" ]
9476873
INTRODUCTION
Liver cirrhosis is recognized as an extremely important and rapidly increasing disease burden in the world[1]. In the progressive stage of liver cirrhosis, the complications caused by portal hypertension (PH), including esophagogastric variceal bleeding and hypersplenism, pose a great threat to patients' life and health[2,3]. Liver transplantation is currently recommended as a curative treatment for liver cirrhosis combined with PH; however, due to the shortage of donor livers and the high transplantation cost, its clinical practicability is limited[4,5]. Transjugular intrahepatic portosystemic shunt appears to be a boon for PH, but unfortunately, restenosis and/or hepatic encephalopathy will occur in more than 60% of patients[6,7]. In Asia, splenectomy (alone or combined with devascularization) has been widely adopted as an effective treatment for hypersplenism or esophageal and gastric variceal bleeding caused by PH[8,9]. Portal vein thrombosis (PVT) is often defined as thrombosis within the portal vein trunk or intrahepatic portal branches, with or without splenic vein or superior mesenteric vein involvement[10,11]. PVT is considered a dreaded complication after splenectomy for patients with PH[12], and the probability of PVT has been reported to be 4.8%-51.5%[13-15]. For patients with acute PVT, and for PVT extending to superior mesenteric vein thrombosis, it has been reported that PVT may be closely associated with acute liver failure and could influence mortality[16]. Hence, strategies are needed to prevent PVT in patients who undergo splenectomy. In clinical practice, anticoagulation is a critical method for the prevention and treatment of PVT in patients after splenectomy. However, when anticoagulation therapy should be started remains controversial. Early anticoagulation may result in life-threatening bleeding events for patients with liver cirrhosis, so whether anticoagulant therapy should be prescribed to all patients after splenectomy deserves careful consideration. In addition, the majority of patients with PVT are asymptomatic and only a few experience abdominal discomfort[12]. Therefore, there is an urgent need for effective diagnostic methods to identify, early and rapidly, individuals at high risk of PVT after splenectomy, and then further guide clinicians to take intervention measures. Color Doppler ultrasonography and/or contrast-enhanced computed tomography (CT) is commonly applied for the final diagnosis of PVT[17]; however, these modalities are of little use for screening out the high-risk individuals who are vulnerable to PVT. Given this, many scholars have attempted to investigate the risk factors closely related to the occurrence of PVT after splenectomy[8,18-21]. Several investigators have noted that a low preoperative platelet count (PLT) and a high postoperative PLT may be crucial predictors of the risk of postoperative PVT[19,22]. Generally speaking, patients with PH experience rebounding rises in PLT after splenectomy[23], combined with hemodynamic changes in the portal venous system, and thus these patients are highly prone to developing PVT[24]. However, the effect of the amplitude of sharp postoperative rises in PLT on PVT has received little attention. We speculated that the postoperative platelet elevation rate (PPER) should be an important predictor of PVT. To the best of our knowledge, there are no reports on the relationship between PPER and PVT.
In recent years, to meet the urgent demand for effective methods to predict PVT after splenectomy, several studies have attempted to construct predictive models for PVT after splenectomy in patients with cirrhosis using multivariate regression analysis[25,26]. However, few clinical variables were included in those analyses, and the accuracy of these prediction models is still unsatisfactory. Therefore, there is an urgent need for an efficient and accurate visualization model. Nowadays, novel machine learning algorithms based on more clinical features have shown great potential in various aspects of medical research, especially in the construction of predictive models, and the features screened for model construction are clinically interpretable[27-29]. Gao et al[28] constructed four machine learning models based on 53 raw clinical features of coronavirus disease 2019 patients to distinguish individuals at high risk of mortality, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.976. Kawakami et al[29] developed seven supervised machine learning classifiers based on 32 clinical parameters, among which the random forest (RF) model showed the best performance in distinguishing epithelial ovarian cancer from benign ovarian tumors, with an AUC of 0.968. The wide range of applications of machine learning methods has surpassed conventional statistical analysis owing to their higher accuracy, which might enable machine learning to be increasingly applied in the field of medical research[30-32]. Although machine learning algorithms have overwhelming advantages over traditional multivariate analysis methods in constructing clinical prediction models, so far only Wang et al[33] have tried to construct a prediction model of PVT after splenectomy in cirrhotic patients with PH using machine learning algorithms. The model that they constructed greatly improved prediction efficiency compared with the traditional models; however, the clinical parameters involved in its construction are extremely complex, which limits its clinical use. Therefore, the purpose of this study was to evaluate the predictive value of PPER for the risk of PVT after splenectomy in patients with PH. In addition, we sought to build simple, efficient, and accurate practical models for predicting PVT with machine learning algorithms, to help clinicians identify individuals at high risk of PVT early after splenectomy and take intervention measures in time. We present the following article in accordance with the TRIPOD reporting checklist.
MATERIALS AND METHODS
Study population: We retrospectively recruited 944 consecutive patients aged no less than 18 years who underwent splenectomy at our institution between July 4, 2011 and September 7, 2018. Patients with the following conditions were excluded: (1) Splenic space-occupying lesion; (2) Hematological disease; (3) PH caused by non-hepatitis B virus (HBV) related etiologies, such as schistosomiasis, hepatitis C virus, or other unknown causes; (4) Presence of PVT confirmed by preoperative imaging; (5) Previous history of endoscopic therapy, splenic embolization, shunt surgery, or anticoagulants; (6) Incomplete clinical features; (7) Unelevated PLT on the first (PLT1) and third day (PLT3) after the operation compared to the preoperative values; and (8) Receipt of prophylactic anticoagulant therapy after splenectomy. Finally, a total of 483 patients with PH related to HBV were included in this study. The flow diagram of patient selection and study design is shown in Figure 1A. The study was approved by the Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Owing to the retrospective nature of this study, written informed consent was waived. Figure 1. Flow chart and correlation chart. A: Flow diagram of patient selection and study design; B: Correlation matrix between candidate variables. The size and color of the circle in the matrix reflect the correlation between the corresponding variables. The darker the blue, the stronger the positive correlation between variables; the darker the red, the stronger the negative correlation. HCV: Hepatitis C virus; HBV: Hepatitis B virus; PLT1 and PLT3: Platelet counts on the first and third days after operation; PVT: Portal vein thrombosis; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.
Data collection: All the patients' clinical features were acquired from the electronic medical record system of our institution, mainly including sex, age, smoking and drinking history, previous treatment history, etiologies, blood biochemical parameters, and imaging information. The blood biochemical parameters included routine blood tests [red blood cells (RBC), reference interval: 4.30-5.80 × 10¹²/L; hemoglobin, reference interval: 130.0-175.0 g/L; white blood cells (WBC), reference interval: 3.50-9.50 × 10⁹/L; neutrophil count (N), reference interval: 1.80-6.30 × 10⁹/L; lymphocyte count (L), reference interval: 1.10-3.20 × 10⁹/L; neutrophil to lymphocyte ratio (NLR); PLT, reference interval: 125.0-350.0 × 10⁹/L; platelet to lymphocyte ratio], coagulation function [prothrombin time, reference interval: 11.5-14.5 s; prothrombin activity (PTA), reference interval: 75.0%-125.0%; international normalized ratio, reference interval: 0.80-1.20; fibrinogen, reference interval: 2.00-4.00 g/L; activated partial thromboplastin time, reference interval: 29.0-42.0 s], and liver function [alanine aminotransaminase (ALT), reference interval: ≤ 41 U/L; aspartate aminotransaminase (AST), reference interval: ≤ 40 U/L; serum albumin, reference interval: 35.0-52.0 g/L; total serum bilirubin, reference interval: ≤ 26 μmol/L] within 7 d before surgery, as well as PLT1 and PLT3. The preoperative Child-Pugh grade was divided into three levels of A, B, and C[34], with grade C excluded. Information on esophageal and gastric varices (EGV), spleen thickness (SPT), diameter of the portal vein (DPV), and preoperative blood transfusion (PBT) within 7 d before the operation was also collected.
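Two of the candidate variables above are derived ratios rather than direct measurements. As a hedged illustration, the sketch below computes NLR and PLR in R from a hypothetical data frame; the column names are assumptions, not the study's actual schema.

```r
# Derived ratios among the candidate variables:
#   NLR = neutrophil count / lymphocyte count
#   PLR = platelet count / lymphocyte count
# 'labs' and its column names are hypothetical illustrations.
labs <- data.frame(N = c(3.2, 5.1), L = c(1.4, 0.9), PLT = c(76, 54))  # counts in 10^9/L

labs$NLR <- labs$N / labs$L
labs$PLR <- labs$PLT / labs$L
labs
```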
Definition of variables: We diagnosed PVT by color Doppler ultrasound examination[35], and contrast-enhanced CT was applied as an auxiliary examination when the diagnosis was in question[36]. In this study, abdominal ultrasound and contrast-enhanced CT examinations were routinely performed within 7 d before the operation. Routine ultrasonography was performed on the 7th day after the operation[19,20], or at any time when there were suspected clinical symptoms of PVT such as fever, severe abdominal pain, vomiting, abnormal liver function, and leukocytosis[12]. According to the definition of varices[8], EGV was divided into EGV without varices and EGV with varices in this study. SPT was defined as the vertical distance between the splenic hilum and the cut point of the lateral margin, and DPV was measured as the largest anteroposterior diameter at the point of intersection with the hepatic artery, during the patient's breath holding[37]. The PPER was calculated from the preoperative PLT and the postoperative PLT. For example, PPER1 (at the first day) was calculated as (PLT1 - PLT)/PLT × 100%, and PPER3 (at the third day) was calculated as (PLT3 - PLT)/PLT × 100%.
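The PPER definition above translates directly into code. A minimal R sketch; the function and argument names are ours rather than the authors'.

```r
# PPER: percentage rise of a postoperative platelet count over the preoperative count,
# per the formula (PLTpost - PLTpre)/PLTpre x 100%. Names are illustrative.
pper <- function(plt_post, plt_pre) (plt_post - plt_pre) / plt_pre * 100

# Hypothetical example: preoperative PLT of 50 x 10^9/L, PLT1 = 120, PLT3 = 210
pper(120, 50)  # PPER1 = 140 (%)
pper(210, 50)  # PPER3 = 320 (%)
```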
Development of models: All candidates were randomly divided into two parts by using the "caret" package, of which 70% were assigned to a training cohort and 30% to a validation cohort. All model building was performed in the training cohort. Multivariate forward stepwise logistic regression analysis was used to select valuable variables to construct the generalized linear model (GLM). The least absolute shrinkage and selection operator (LASSO) is a well-established shrinkage method that can effectively screen meaningful variables from a large set of variables with potential multicollinearity; it was used to develop the LASSO model (LSM) and was implemented with the "glmnet" package[38]. RF is composed of a great number of individual decision trees operating as an ensemble[39]; these decision trees were applied for the construction of the RF model (RFM)[40]. The importance of candidate variables was reflected by the mean decreased Gini (MDG) score. Evaluation of models: The robustness and clinical practicability of the models were assessed using the ROC curve, calibration curve, decision curve analysis (DCA), and clinical impact curve (CIC). The AUCs were used to estimate the discrimination of each model by using the "rms" package. The calibration curves were applied to examine the calibration ability of each model and were calibrated with 1000 bootstrap samples to reduce overfitting bias. The clinical applicability of each model was informed by DCA and CIC using the "rms" and "rmda" packages.
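A condensed sketch of this pipeline under stated assumptions: a data frame `dat` with a binary outcome column `PVT` (0/1) and the candidate predictors, the packages named in the text (caret, glmnet, randomForest), and pROC for the validation AUC (the authors cite the rms package for that step). Tuning details beyond what the text states are guesses, not the authors' settings.

```r
library(caret)
library(glmnet)
library(randomForest)
library(pROC)

set.seed(2022)
idx   <- createDataPartition(dat$PVT, p = 0.7, list = FALSE)  # 70/30 split
train <- dat[idx, ]; valid <- dat[-idx, ]

# GLM: forward stepwise logistic regression starting from the null model
null_fit <- glm(PVT ~ 1, data = train, family = binomial)
full_fit <- glm(PVT ~ ., data = train, family = binomial)
glm_fit  <- step(null_fit, scope = formula(full_fit), direction = "forward", trace = 0)

# LSM: LASSO-penalized logistic regression with cross-validated lambda
x      <- model.matrix(PVT ~ ., data = train)[, -1]
cv_fit <- cv.glmnet(x, train$PVT, family = "binomial", alpha = 1)

# RFM: random forest; variable importance as mean decrease in Gini (MDG)
rf_fit <- randomForest(factor(PVT) ~ ., data = train, importance = TRUE)
head(importance(rf_fit)[, "MeanDecreaseGini", drop = FALSE])

# Validation-cohort AUCs for the three models
xv <- model.matrix(PVT ~ ., data = valid)[, -1]
auc(roc(valid$PVT, predict(glm_fit, valid, type = "response")))
auc(roc(valid$PVT, as.numeric(predict(cv_fit, xv, type = "response", s = "lambda.min"))))
auc(roc(valid$PVT, predict(rf_fit, valid, type = "prob")[, 2]))
```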
Statistical analysis: Statistical analyses were performed with R Statistical Software (version 4.1.2, https://www.r-project.org/). Continuous variables were tested for normality. Those with normality are described as the mean ± SD, while those without normality are described as the median and interquartile range. Continuous variables were compared using the Student's t-test or non-parametric rank-sum test (Kruskal-Wallis test) as appropriate. Categorical variables are described as numbers (percentages) and were compared using the Chi-square test or Fisher's exact test as appropriate. Correlations between candidate variables were determined by Spearman's correlation coefficient. All statistical tests were two-tailed, and P < 0.05 was considered significant.
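Base R covers every test named in this paragraph. A sketch assuming the same hypothetical `dat` as above, with an illustrative continuous variable `age` and categorical variable `sex`:

```r
# Group comparisons between PVT (1) and non-PVT (0) patients; 'dat' is hypothetical.
shapiro.test(dat$age)                                 # normality check
t.test(age ~ PVT, data = dat, var.equal = TRUE)       # Student's t-test if normal
kruskal.test(age ~ PVT, data = dat)                   # rank-sum test otherwise
chisq.test(table(dat$sex, dat$PVT))                   # categorical variables
cor(dat$PPER1, dat$PPER3, method = "spearman")        # Spearman correlation
```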
null
null
CONCLUSION
According to our experience, patients with a more remarkable increase in platelet count in the first 3 d after the operation have a higher probability of PVT and should be prioritized for prophylactic anticoagulation.
[ "INTRODUCTION", "Study population", "Data collection", "Definition of variables", "Development of models", "Evaluation of models", "Statistical analysis", "RESULTS", "Patient demographics and characteristics", "Logistic regression analysis", "Establishment of PPER-based models", "Assessment and verification of PPER-based models", "Importance of PPER for models", "Comparative analysis of PPER-based models", "DISCUSSION", "CONCLUSION" ]
[ "Liver cirrhosis is recognized as an extremely important and rapidly increasing disease burden in the world[1]. In the progressive stage of liver cirrhosis, the complications caused by portal hypertension (PH), including esophagogastric variceal bleeding and hypersplenism, pose a great threat to the patients’ life and health[2,3]. Liver transplantation is currently recommended as a curative treatment for liver cirrhosis combined with PH; however, due to the shortage of liver sources and high trans-plantation cost, its clinical practicability is limited[4,5]. Transjugular intrahepatic portosystemic shunt seems to be a gospel for PH, but unfortunately, restenosis or/and hepatic encephalopathy will occur in more than 60% of patients[6,7]. In Asia, splenectomy (or combined with devascularization) has been widely adopted as an effective treatment for hypersplenism or esophageal and gastric variceal bleeding caused by PH[8,9].\nPortal vein thrombosis (PVT) is often defined as thrombosis within the portal vein trunk or intrahepatic portal branches, with or without the splenic vein or superior mesenteric vein involvement[10,11]. PVT is considered a dreaded complication after splenectomy for patients with PH[12], and the probability of PVT has been reported to be 4.8%-51.5%[13-15]. For patients with acute PVT and PVT resulting in superior mesenteric vein thrombosis, it has been reported that PVT may be closely associated with acute liver failure and could influence mortality[16]. Hence, strategies are needed to prevent PVT in patients who underwent splenectomy. In clinical practice, anticoagulation is a critical method for the prevention and treatment of PVT in patients after splenectomy. However, when the anticoagulation therapy should be started remains controversial. Early anticoagulation may result in life-threatening bleeding events for patients with liver cirrhosis. Whether anticoagulant therapy should be prescribed to all patients after splenectomy deserves careful consideration. In addition, the majority of patients with PVT are asymptomatic and only a few experience abdominal discomfort[12]. Therefore, there is an urgent requirement to find effective diagnostic methods to early and rapidly identify individuals with high risk of PVT after splenectomy, and then further guide clinicians to take intervention measures. Color Doppler ultrasonography and/or contrast-enhanced computed tomography (CT) is commonly applied for the final diagnosis of PVT[17]; however, they seem to be useless for screening out the high-risk individuals who are vulnerable to PVT. Given this, many scholars attempted to investigate the risk factors closely related to the occurrence of PVT after splenectomy[8,18-21]. Several investigators paid attention to the fact that preoperative low platelet count (PLT) and postoperative high PLT may be crucial predictors of the risk of PVT postoperatively[19,22].\nGenerally speaking, patients with PH will experience rebounding rises in PLT after splenectomy[23], combined with hemodynamic changes in the portal venous system, and thus these patients are highly prone to developing PVT[24]. However, the effect of the amplitude of sharp postoperative rises in PLT on PVT has received little attention. We speculated that the postoperative platelet elevation rate (PPER) should be an important predictor of PVT. 
To the best of our knowledge, there are no reports on the relationship between PPER and PVT.\nIn recent years, to meet the urgent demand of finding effective methods to predict PVT after splenectomy, several studies have attempted to construct predictive models for PVT after splenectomy in patients with cirrhosis using multivariate regression analysis[25,26]. However, there are few clinical variables included in the analysis and the accuracy of these prediction models is still unsatisfactory. Therefore, there is an urgent need for an efficient and accurate visualization model.\nNowadays, novel machine learning algorithms based on more clinical features have shown great potential in various aspects of medical research, especially in the construction of predictive models, and the features screened for model construction are clinically interpretable[27-29]. Gao et al[28] constructed four machine learning models based on 53 raw clinical features of coronavirus disease 2019 patients to distinguish individuals at high risk for mortality, with an area under the receiver operating characteristics (ROC) curve (AUC) of 0.976. Kawakami et al[29] developed seven supervised machine learning classifiers based on 32 clinical parameters, among which the random forest (RF) model showed the best performance in distinguishing epithelial ovarian cancer from benign ovarian tumors with an AUC of 0.968. The wide range of applications of machine learning methods has surpassed conventional statistical analysis due to their higher accuracy, which might enable machine learning to be increasingly applied in the field of medical research[30-32]. Although compared with traditional multivariate analysis methods, machine learning algorithms have overwhelming advantages in constructing clinical prediction models, so far, only Wang et al[33] have tried to construct a prediction model of PVT after splenectomy in cirrhotic patients with PH using machine learning algorithms. The model that they constructed has greatly improved the prediction efficiency compared with the traditional models. However, the clinical parameters involved in the construction of the model are extremely complex, which limits its clinical use.\nTherefore, the purpose of this study was to evaluate the predictive value of PPER for the risk of PVT after splenectomy for patients with PH. In addition, we sought to build simple, efficient, and accurate practical models for predicting PVT with machine learning algorithms to facilitate assisting clinicians in the early identification of individuals at high risk of PVT after splenectomy and taking intervention measures in time. We present the following article in accordance with the TRIPOD reporting checklist.", "We retrospectively recruited 944 consecutive patients aged no less than 18 years who underwent splenectomy at our institution between July 4, 2011 and September 7, 2018. The patients with the following conditions were excluded: (1) Splenic space-occupying lesion; (2) Hematological disease; (3) PH caused by non-hepatitis B virus (HBV) related etiologies, such as schistosome, hepatitis C virus, or other unknown causes; (4) Presence of PVT confirmed by preoperative imaging; (5) Previous history of endoscopic therapy, splenic embolization, shunt surgery, or anticoagulants; (6) Incomplete clinical features; (7) Unelevated PLT on the first (PLT1) and third day (PLT3) after the operation compared to the preoperative values; and (8) Receiving prophylactic anti-coagulant therapy after splenectomy. 
Finally, a total of 483 patients with PH related to HBV were included in this study. The flow diagram of patient selection and study design is shown in Figure 1A. The study was approved by the Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Owing to the retrospective nature of this study, written informed consent was waived. \n\nFlow chart and correlation chart. A: Flow diagram of patient selection and study design; B: Correlation matrix between candidate variables. The size and color of each circle in the matrix reflect the correlation between the corresponding variables: the darker the blue, the stronger the positive correlation, and the darker the red, the stronger the negative correlation. HCV: Hepatitis C virus; HBV: Hepatitis B virus; PLT1 and PLT3: Platelet counts on the first and third days after operation; PVT: Portal vein thrombosis; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.", "All the patients’ clinical features were acquired from the electronic medical record system of our institution, mainly including sex, age, smoking and drinking history, previous treatment history, etiologies, blood biochemical parameters, and imaging information. The blood biochemical parameters, measured within 7 d before surgery, included routine blood tests [red blood cells (RBC), reference interval: 4.30-5.80 × 10¹²/L; hemoglobin, reference interval: 130.0-175.0 g/L; white blood cells (WBC), reference interval: 3.50-9.50 × 10⁹/L; neutrophil count (N), reference interval: 1.80-6.30 × 10⁹/L; lymphocyte count (L), reference interval: 1.10-3.20 × 10⁹/L; neutrophil to lymphocyte ratio (NLR); PLT, reference interval: 125.0-350.0 × 10⁹/L; platelet to lymphocyte ratio], coagulation function [prothrombin time, reference interval: 11.5-14.5 s; prothrombin activity (PTA), reference interval: 75.0%-125.0%; international normalized ratio, reference interval: 0.80-1.20; fibrinogen, reference interval: 2.00-4.00 g/L; activated partial thromboplastin time, reference interval: 29.0-42.0 s], and liver function [alanine aminotransaminase (ALT), reference interval: ≤ 41 U/L; aspartate aminotransaminase (AST), reference interval: ≤ 40 U/L; serum albumin, reference interval: 35.0-52.0 g/L; serum total bilirubin, reference interval: ≤ 26 μmol/L], as well as PLT1 and PLT3. The preoperative Child-Pugh grade was divided into three levels of A, B, and C[34], with grade C excluded. Information on the esophageal and gastric varices (EGV), spleen thickness (SPT), diameter of the portal vein (DPV), and preoperative blood transfusion (PBT) within 7 d before the operation was also collected.", "PVT was diagnosed by color Doppler ultrasound examination[35], with contrast-enhanced CT applied as an auxiliary examination when the diagnosis was in question[36]. In this study, abdominal ultrasound and contrast-enhanced CT examinations were routinely performed within 7 d before the operation. Routine ultrasonography was performed on the 7th day after the operation[19,20], or at any time when there were suspected clinical symptoms of PVT such as fever, severe abdominal pain, vomiting, abnormal liver function, or leukocytosis[12]. \nAccording to the definition of varices[8], patients in this study were classified as EGV without varices or EGV with varices. SPT was defined as the vertical distance between the splenic hilum and the cut point of the lateral margin, and DPV was measured as the largest anteroposterior diameter at the point of intersection with the hepatic artery, during the patient’s breath holding[37]. \nThe PPER was calculated from the preoperative PLT and the postoperative PLT: PPER1 (at the first day) was calculated as (PLT1 - PLT)/PLT × 100%, and PPER3 (at the third day) was calculated as (PLT3 - PLT)/PLT × 100%.",
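Expressed as code, the PPER definition above reduces to one line. The following is a minimal R illustration; the vectors plt, plt1, and plt3 are hypothetical stand-ins for the preoperative and postoperative platelet counts, not objects from the authors’ scripts:

```r
# Postoperative platelet elevation rate (PPER), as defined above:
# PPER = (postoperative PLT - preoperative PLT) / preoperative PLT * 100%
pper <- function(plt_post, plt_pre) {
  (plt_post - plt_pre) / plt_pre * 100
}

pper1 <- pper(plt1, plt)  # PPER at postoperative day 1 (%)
pper3 <- pper(plt3, plt)  # PPER at postoperative day 3 (%)
```

For example, a patient whose PLT rises from 50 × 10⁹/L preoperatively to 125 × 10⁹/L on day 3 would have a PPER3 of 150%.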
"All candidates were randomly divided into two parts by using the “caret” package, with 70% assigned to a training cohort and 30% to a validation cohort. All model building was performed in the training cohort. Multivariate forward stepwise logistic regression analysis was used to select valuable variables to construct the generalized linear model (GLM). The least absolute shrinkage and selection operator (LASSO) is a well-established shrinkage method that can effectively screen meaningful variables from a large set of variables with potential multicollinearity; it was used to develop the LASSO model (LSM)[38] and was implemented with the “glmnet” package. An RF is composed of a large number of individual decision trees running as a whole[39]; these decision trees were applied for the construction of the RF model (RFM)[40]. The importance of candidate variables was reflected by the mean decreased Gini (MDG) score.",
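The development workflow just described maps onto a few standard R calls. The sketch below is a minimal, illustrative reconstruction under stated assumptions (a data frame df holding the 31 candidate variables plus the binary outcome PVT, and an arbitrary seed), not the authors’ actual code:

```r
library(caret)         # createDataPartition()
library(glmnet)        # cv.glmnet() for the LASSO
library(randomForest)  # randomForest(), importance()

set.seed(2022)  # assumed seed, for a reproducible 70/30 split
idx   <- createDataPartition(df$PVT, p = 0.70, list = FALSE)
train <- df[idx, ]
valid <- df[-idx, ]

# GLM: multivariate forward stepwise logistic regression
null_fit <- glm(PVT ~ 1, data = train, family = binomial)
full_fit <- glm(PVT ~ ., data = train, family = binomial)
glm_fit  <- step(null_fit, scope = formula(full_fit),
                 direction = "forward", trace = 0)

# LSM: LASSO with 3-fold cross-validation; lambda.1se corresponds to the
# "one error away from the minimum" rule described for Figure 2B
x      <- model.matrix(PVT ~ ., data = train)[, -1]
cv_fit <- cv.glmnet(x, train$PVT, family = "binomial",
                    alpha = 1, nfolds = 3)
coef(cv_fit, s = "lambda.1se")  # non-zero coefficients = selected variables

# RFM: random forest with 133 trees; variable importance ranked by the
# mean decreased Gini (MDG) score
rf_fit <- randomForest(factor(PVT) ~ ., data = train,
                       ntree = 133, importance = TRUE)
importance(rf_fit, type = 2)    # type = 2 returns MeanDecreaseGini
```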
"The robustness and clinical practicability of the models were assessed using the ROC curve, calibration curve, decision curve analysis (DCA), and clinical impact curve (CIC). AUCs were used to estimate the discrimination of each model by using the “rms” package. Calibration curves were applied to examine the calibration ability of each model and were calibrated with 1000 bootstrap samples to reduce overfitting bias. The clinical applicability of each model was informed by DCA and CIC using the “rms” and “rmda” packages.",
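As an illustration of this evaluation step, the sketch below uses lrm/calibrate from “rms” and decision_curve/plot_clinical_impact from “rmda”. The AUC is computed here with the “pROC” package, which is an assumption of this sketch (the paper reports AUCs via “rms”), and all object names are illustrative:

```r
library(rms)    # lrm(), calibrate()
library(pROC)   # roc(), ci.auc()
library(rmda)   # decision_curve(), plot_decision_curve(), plot_clinical_impact()

# Logistic model fitted on the training cohort (x/y needed for calibrate)
fit <- lrm(PVT ~ L + SPT + DPV + PPER1 + PPER3, data = train,
           x = TRUE, y = TRUE)

# Discrimination: ROC curve and AUC with 95%CI in the validation cohort
p_valid <- predict(fit, newdata = valid, type = "fitted")
roc_obj <- roc(valid$PVT, p_valid)
ci.auc(roc_obj)

# Calibration with 1000 bootstrap resamples to reduce overfitting bias
cal <- calibrate(fit, method = "boot", B = 1000)
plot(cal)

# Decision curve analysis and clinical impact curve
valid$risk <- p_valid
dc <- decision_curve(PVT ~ risk, data = valid,
                     fitted.risk = TRUE,  # risk is already a probability
                     thresholds = seq(0, 1, by = 0.01))
plot_decision_curve(dc, curve.names = "PPER-based model")
plot_clinical_impact(dc, population.size = 1000)
```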
"Statistical analyses were performed with R Statistical Software (version 4.1.2, https://www.r-project.org/). Continuous variables were tested for normality; those with a normal distribution are described as the mean ± SD, while those without are described as the median and interquartile range. Continuous variables were compared using Student’s t-test or a non-parametric rank-sum test (Kruskal-Wallis test), as appropriate. Categorical variables are described as numbers (percentages) and were compared using the Chi-square test or Fisher’s exact test, as appropriate. Correlations between candidate variables were determined by Spearman’s correlation coefficient. All statistical tests were two-tailed, and P < 0.05 was considered significant.",
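For concreteness, these univariate comparisons correspond to base-R calls like the following (variable names are illustrative; df is the assumed analysis data frame with PVT coded 0/1):

```r
shapiro.test(df$SPT)                  # normality check for a continuous variable
t.test(SPT ~ PVT, data = df)          # normally distributed: Student's t-test
kruskal.test(SPT ~ PVT, data = df)    # non-normal: rank-sum (Kruskal-Wallis) test
chisq.test(table(df$EGV, df$PVT))     # categorical: Chi-square test
fisher.test(table(df$PBT, df$PVT))    # sparse cells: Fisher's exact test
cor.test(df$PLT, df$PPER1, method = "spearman")  # Spearman correlation
```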
"Patient demographics and characteristics: The detailed clinical characteristics of the 483 patients with PH are summarized in Table 1. All participants were randomly and automatically divided into a training cohort (n = 338, 70%) and a validation cohort (n = 145, 30%). PVT was diagnosed in 200 (41.4%), 135 (39.9%), and 65 (44.8%) cases in the overall, training, and validation cohorts, respectively. Consistent with the results of the intergroup comparison, 14 of the 31 candidate variables were associated with PVT, namely, RBC, WBC, L, NLR, PLT, PTA, ALT, AST, EGV, SPT, DPV, PBT, PPER1, and PPER3 (Figure 1B and Supplementary Table 1), which indicated that PPER1 and PPER3 were highly likely to be potential predictors of PVT.\nDetailed clinical characteristics of 483 patients with portal hypertension\nContinuous variables are presented as the median and interquartile range (IQR). \nPH: Portal hypertension; PVT: Portal vein thrombosis; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.\nLogistic regression analysis: Univariate and multivariate logistic regression analyses of risk factors associated with PVT in the overall cohort are presented in Table 2. In the univariate analysis, 11 variables with P < 0.05 were carried into the multivariate analysis. Six variables were ultimately revealed to be closely associated with the occurrence of PVT: L [odds ratio (OR): 0.28, 95% confidence interval (CI): 0.14-0.54, P < 0.001], EGV (OR: 0.51, 95%CI: 0.32-0.79, P = 0.003), SPT (OR: 1.22, 95%CI: 1.06-1.40, P = 0.005), DPV (OR: 3.57, 95%CI: 1.86-7.03, P < 0.001), PPER1 (OR: 1.78, 95%CI: 1.24-2.62, P = 0.002), and PPER3 (OR: 1.43, 95%CI: 1.16-1.77, P < 0.001). This result demonstrated that PPER1 and PPER3 were independent risk factors for the occurrence of PVT.\nUnivariate and multivariate logistic regression analyses for risk factors associated with portal vein thrombosis in the overall cohort\nForward stepwise analysis. \nContinuous variables. \nPVT: Portal vein thrombosis; OR: Odds ratio; CI: Confidence interval; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.
\nEstablishment of PPER-based models: As shown in Supplementary Table 2, the following five variables strongly associated with PVT were chosen to construct the GLM: L (OR: 0.34, 95%CI: 0.14-0.77, P = 0.01), SPT (OR: 1.21, 95%CI: 1.02-1.44, P = 0.02), DPV (OR: 5.85, 95%CI: 2.57-14.05, P < 0.001), PPER1 (OR: 1.77, 95%CI: 1.13-2.82, P = 0.01), and PPER3 (OR: 1.42, 95%CI: 1.12-1.84, P = 0.005). The optimal LSM was obtained when the 31 candidate variables were shrunk to 10 through the LASSO (Figure 2A and B); these comprised L, NLR, PLT, PTA, AST, EGV, SPT, DPV, PPER1, and PPER3. In the RF, the total sample group had its smallest error, 24.56%, when the number of random trees was 133 (Figure 2C). Accordingly, 133 random trees were set and passed through five iterations, and the importance scores of the candidate variables are presented in Figure 2D. Ultimately, the nine variables with the highest MDG scores were selected to construct the RFM.\n\nFeatures selection. A: Least absolute shrinkage and selection operator variable trace profiles of the 31 features; 3-fold cross-validation was employed. B: Mean square error (MSE) plots of the models under different lambda values; the lambda corresponding to one MSE away from the minimum MSE was the optimal lambda value of 0.043, at which the target variables shrunk to 10. C: Relationship between the error and the number of trees; the green line represents error in the positive event group, the red line error in the negative event group, and the black line error in the total sample group. D: Importance scores of the candidate features. RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3; MDG: Mean decreased Gini.
\nAssessment and verification of PPER-based models: The ROC curves of the GLM, LSM, and RFM in the training cohort are shown in Figure 3A; their AUCs were 0.83 (95%CI: 0.79-0.88), 0.84 (95%CI: 0.79-0.88), and 0.84 (95%CI: 0.79-0.88), respectively. All models had excellent calibration ability in the training cohort (Figure 3B), and DCA and CIC revealed that all conferred high clinical net benefits (Figures 3C and 4A-C). In the validation cohort, the ROC curves of the models are presented in Figure 3D; their AUCs were 0.77 (95%CI: 0.69-0.85), 0.83 (95%CI: 0.76-0.90), and 0.78 (95%CI: 0.70-0.85), respectively. All models again demonstrated highly satisfactory calibration capability and clinical functionality (Figures 3E and F and 4D-F).\n\nEvaluation and validation of the postoperative platelet elevation rate-based models in the training cohort and validation cohort. A and D: Receiver operating characteristic curves of the postoperative platelet elevation rate (PPER)-based models; B and E: Calibration curves of the PPER-based models; C and F: Decision curve analysis of the PPER-based models. GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; AUC: Area under the receiver operating characteristic curve.\n\nClinical impact curves of the postoperative platelet elevation rate-based models in the training cohort and validation cohort. A and D: Clinical impact curves for the generalized linear model; B and E: Clinical impact curves for the least absolute shrinkage and selection operator model; C and F: Clinical impact curves for the random forest model.
\nImportance of PPER for models: As shown in Figure 5A, the nomogram for the GLM recruited five variables, L, SPT, DPV, PPER1, and PPER3, which happened to be the intersection variables of the GLM, LSM, and RFM (Figure 5B). It thus appears that these variables were significant predictors of the occurrence of PVT and contributed markedly to the construction of the models. Moreover, among the variables shared by the GLM, LSM, and RFM, the order of weight from high to low was DPV, PPER1, PPER3, SPT, and L (Figure 5C), which underscores the predictive value of the PPER (PPER1 and PPER3) for PVT in all models.\n\nNomogram for the generalized linear model and weighting of variables. A: Nomogram for the generalized linear model (GLM); B: Intersection variables among the GLM, least absolute shrinkage and selection operator model (LSM), and random forest model (RFM); C: Weights of the intersection variables in the GLM, LSM, and RFM, respectively. SPT: Spleen thickness; L: Lymphocyte count; DPV: Diameter of portal vein; PPER1 and PPER3: Postoperative platelet elevation rates at days 1 and 3; PVT: Portal vein thrombosis; PLR: Platelet to lymphocyte ratio; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PTA: Prothrombin activity; EGV: Esophageal and gastric varices; AST: Aspartate aminotransaminase.
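A nomogram like that in Figure 5A can be drawn directly from the fitted five-variable logistic model with the “rms” package; the following is a brief sketch under the same assumed objects as earlier, not the authors’ code:

```r
library(rms)  # lrm(), nomogram(), datadist()

dd <- datadist(train); options(datadist = "dd")

# Five-variable logistic model underlying the GLM nomogram
fit <- lrm(PVT ~ L + SPT + DPV + PPER1 + PPER3, data = train)

# Map each predictor to points, and total points to predicted PVT risk
nom <- nomogram(fit, fun = plogis, funlabel = "Risk of PVT")
plot(nom)
```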
\nComparative analysis of PPER-based models: The performance of the three PPER-based models in predicting PVT in the different cohorts is shown in Table 3. The accuracy of the GLM, LSM, and RFM was 76.2%, 77.4%, and 77.4%, respectively, in the overall cohort; 79.6%, 79.0%, and 78.7%, respectively, in the training cohort; and 74.5%, 79.3%, and 76.6%, respectively, in the validation cohort. When the other metrics of the models, such as AUC, sensitivity, specificity, positive predictive value, negative predictive value, kappa values, and Brier scores, were comprehensively considered, the LSM and RFM appeared to be slightly superior to the GLM.\nPerformance of models for portal vein thrombosis risk prediction in different cohorts\nAUC: Area under the receiver operating characteristics curve; CI: Confidence interval; PPV: Positive predictive value; NPV: Negative predictive value; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model.",
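The comparison metrics in Table 3 can all be derived from a model’s predicted probabilities. A minimal sketch, in which obs (observed PVT status), prob (predicted risk), and the 0.5 classification threshold are assumptions of this illustration:

```r
library(caret)  # confusionMatrix(): accuracy, kappa, sens/spec, PPV/NPV

pred <- factor(ifelse(prob >= 0.5, 1, 0), levels = c(0, 1))
obs  <- factor(obs, levels = c(0, 1))
confusionMatrix(pred, obs, positive = "1")

# Brier score: mean squared difference between predicted risk and outcome
brier <- mean((prob - as.numeric(as.character(obs)))^2)
```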
"Undoubtedly, PVT is a lethal complication after splenectomy in cirrhotic patients with PH[12]. Once PVT develops, elevated portal venous pressure, ischemic bowel necrosis, progressive impairment of liver function, and even liver failure may follow, which can ultimately be life-threatening[41,42]. Therefore, research on optimizing the early detection of individuals at high risk of PVT after splenectomy is urgently needed. In this study, we successfully constructed PPER-based models for predicting PVT by machine learning, which should be conducive to the early identification of the population at high risk of PVT.\nIn the present study, the conventional generalized linear (CGL) method and machine learning methods (the LASSO and RF) were applied separately to screen out the variables that most affected PVT prediction. The CGL method is characterized by strong interpretability, especially when multifactorial forward stepwise regression is used, and it has therefore been widely applied as the traditional method of constructing predictive models[43]. However, with the rapid progress of artificial intelligence technology, novel prediction models based on machine learning have emerged with higher accuracy, which has led some clinicians to question the value of the CGL model in individualized clinical application[44]. Consistent with this, our results showed that the performance of the LSM and RFM seemed to be slightly better than that of the GLM.\nInterestingly, the PPER-based models contained the following five intersecting factors, namely, SPT, DPV, L, PPER1, and PPER3, which sufficiently illustrates that these were the main contributors to the higher incidence of PVT. Previous studies found that preoperative SPT and DPV were important predictors of the formation of PVT after splenectomy in patients with PH[8,45], which is highly consistent with our findings. A very reasonable explanation is that a wide preoperative DPV and a thick SPT lead to slow portal vein blood flow, which is closely related to postoperative thrombosis[45,46].
\nIn most cases, platelet, erythrocyte, and leukocyte counts rise dramatically within a short time after splenectomy in patients with PH, and the blood becomes hypercoagulable[8]. Accordingly, previous studies suggested that low preoperative platelet and leukocyte counts were predictors of postoperative PVT formation[47]. The present study revealed that the preoperative L was an influential factor for postoperative PVT, which coincides with that view. Of note, this study employed the PPER to reflect the magnitude of the dynamic change between preoperative and postoperative PLT, and found that the PPER had high predictive value for the risk of postoperative PVT, which had not been addressed previously.\nStamou et al[48] reported that the median time to the formation of PVT after splenectomy in patients with PH was the 6th day (range, 3-11), and Lu et al[8] concluded that 49.19% of patients developed PVT within 7 d after splenectomy. Accordingly, scholars have routinely applied ultrasonography to diagnose PVT on the 7th day after splenectomy[8,19,20]. In this study, by combining the preoperative predictors with the PPER, the models that we constructed could effectively discriminate individuals at high risk of PVT as early as the first 3 d after the operation, which is extremely important for guiding clinicians’ treatment strategies.\nCurrently, there is no standard preventive regimen for PVT after splenectomy in cirrhotic patients with PH[49]. Most scholars have recently advocated administering prophylactic anticoagulant therapy early postoperatively, which is more helpful in reducing the incidence of PVT[50,51]. However, this should be chosen cautiously, because the risk of bleeding cannot be avoided in patients with liver cirrhosis[51]. In addition, if preventive regimens are routinely adopted for all individuals with PH after splenectomy, concerns about overtreatment inevitably arise. Notably, in the present study, the accuracy of the PPER-based models in predicting PVT was up to 80%, so they can distinguish individuals at high risk of PVT with high efficiency and thus guide clinicians to take targeted, individualized preventive measures in time.\nThe present study has some limitations. First, owing to its retrospective nature, selection bias cannot be eliminated. Second, uncommon preoperative factors that may influence the formation of PVT, such as splenic vein diameter, spleen volume, and portal vein flow velocity[8,19], were not routinely measured in our institution and thus could not be included; however, the SPT and DPV in this study can indirectly reflect these indicators to a certain extent[45,46]. Third, this was a single-center study, and although the PPER-based models demonstrated excellent performance in predicting PVT, they still lack verification in external cohorts. Therefore, large-scale prospective multicenter studies are warranted, which would promote the wider adoption and application of the PPER-based models.", "PPER1 and PPER3 are effective indicators for postoperative prediction of PVT. We successfully developed PPER-based practical models for predicting PVT, which could help clinicians identify individuals at high risk of PVT early and efficiently, and thus guide timely intervention measures." ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study population", "Data collection", "Definition of variables", "Development of models", "Evaluation of models", "Statistical analysis", "RESULTS", "Patient demographics and characteristics", "Logistic regression analysis", "Establishment of PPER-based models", "Assessment and verification of PPER-based models", "Importance of PPER for models", "Comparative analysis of PPER-based models", "DISCUSSION", "CONCLUSION" ]
[ "Liver cirrhosis is recognized as an extremely important and rapidly increasing disease burden in the world[1]. In the progressive stage of liver cirrhosis, the complications caused by portal hypertension (PH), including esophagogastric variceal bleeding and hypersplenism, pose a great threat to the patients’ life and health[2,3]. Liver transplantation is currently recommended as a curative treatment for liver cirrhosis combined with PH; however, due to the shortage of liver sources and high trans-plantation cost, its clinical practicability is limited[4,5]. Transjugular intrahepatic portosystemic shunt seems to be a gospel for PH, but unfortunately, restenosis or/and hepatic encephalopathy will occur in more than 60% of patients[6,7]. In Asia, splenectomy (or combined with devascularization) has been widely adopted as an effective treatment for hypersplenism or esophageal and gastric variceal bleeding caused by PH[8,9].\nPortal vein thrombosis (PVT) is often defined as thrombosis within the portal vein trunk or intrahepatic portal branches, with or without the splenic vein or superior mesenteric vein involvement[10,11]. PVT is considered a dreaded complication after splenectomy for patients with PH[12], and the probability of PVT has been reported to be 4.8%-51.5%[13-15]. For patients with acute PVT and PVT resulting in superior mesenteric vein thrombosis, it has been reported that PVT may be closely associated with acute liver failure and could influence mortality[16]. Hence, strategies are needed to prevent PVT in patients who underwent splenectomy. In clinical practice, anticoagulation is a critical method for the prevention and treatment of PVT in patients after splenectomy. However, when the anticoagulation therapy should be started remains controversial. Early anticoagulation may result in life-threatening bleeding events for patients with liver cirrhosis. Whether anticoagulant therapy should be prescribed to all patients after splenectomy deserves careful consideration. In addition, the majority of patients with PVT are asymptomatic and only a few experience abdominal discomfort[12]. Therefore, there is an urgent requirement to find effective diagnostic methods to early and rapidly identify individuals with high risk of PVT after splenectomy, and then further guide clinicians to take intervention measures. Color Doppler ultrasonography and/or contrast-enhanced computed tomography (CT) is commonly applied for the final diagnosis of PVT[17]; however, they seem to be useless for screening out the high-risk individuals who are vulnerable to PVT. Given this, many scholars attempted to investigate the risk factors closely related to the occurrence of PVT after splenectomy[8,18-21]. Several investigators paid attention to the fact that preoperative low platelet count (PLT) and postoperative high PLT may be crucial predictors of the risk of PVT postoperatively[19,22].\nGenerally speaking, patients with PH will experience rebounding rises in PLT after splenectomy[23], combined with hemodynamic changes in the portal venous system, and thus these patients are highly prone to developing PVT[24]. However, the effect of the amplitude of sharp postoperative rises in PLT on PVT has received little attention. We speculated that the postoperative platelet elevation rate (PPER) should be an important predictor of PVT. 
To the best of our knowledge, there are no reports on the relationship between PPER and PVT.\nIn recent years, to meet the urgent demand of finding effective methods to predict PVT after splenectomy, several studies have attempted to construct predictive models for PVT after splenectomy in patients with cirrhosis using multivariate regression analysis[25,26]. However, there are few clinical variables included in the analysis and the accuracy of these prediction models is still unsatisfactory. Therefore, there is an urgent need for an efficient and accurate visualization model.\nNowadays, novel machine learning algorithms based on more clinical features have shown great potential in various aspects of medical research, especially in the construction of predictive models, and the features screened for model construction are clinically interpretable[27-29]. Gao et al[28] constructed four machine learning models based on 53 raw clinical features of coronavirus disease 2019 patients to distinguish individuals at high risk for mortality, with an area under the receiver operating characteristics (ROC) curve (AUC) of 0.976. Kawakami et al[29] developed seven supervised machine learning classifiers based on 32 clinical parameters, among which the random forest (RF) model showed the best performance in distinguishing epithelial ovarian cancer from benign ovarian tumors with an AUC of 0.968. The wide range of applications of machine learning methods has surpassed conventional statistical analysis due to their higher accuracy, which might enable machine learning to be increasingly applied in the field of medical research[30-32]. Although compared with traditional multivariate analysis methods, machine learning algorithms have overwhelming advantages in constructing clinical prediction models, so far, only Wang et al[33] have tried to construct a prediction model of PVT after splenectomy in cirrhotic patients with PH using machine learning algorithms. The model that they constructed has greatly improved the prediction efficiency compared with the traditional models. However, the clinical parameters involved in the construction of the model are extremely complex, which limits its clinical use.\nTherefore, the purpose of this study was to evaluate the predictive value of PPER for the risk of PVT after splenectomy for patients with PH. In addition, we sought to build simple, efficient, and accurate practical models for predicting PVT with machine learning algorithms to facilitate assisting clinicians in the early identification of individuals at high risk of PVT after splenectomy and taking intervention measures in time. We present the following article in accordance with the TRIPOD reporting checklist.", "Study population We retrospectively recruited 944 consecutive patients aged no less than 18 years who underwent splenectomy at our institution between July 4, 2011 and September 7, 2018. 
The patients with the following conditions were excluded: (1) Splenic space-occupying lesion; (2) Hematological disease; (3) PH caused by non-hepatitis B virus (HBV) related etiologies, such as schistosome, hepatitis C virus, or other unknown causes; (4) Presence of PVT confirmed by preoperative imaging; (5) Previous history of endoscopic therapy, splenic embolization, shunt surgery, or anticoagulants; (6) Incomplete clinical features; (7) Unelevated PLT on the first (PLT1) and third day (PLT3) after the operation compared to the preoperative values; and (8) Receiving prophylactic anti-coagulant therapy after splenectomy. Finally, a total of 483 patients with PH interrelated to HBV were included in this study. The flow diagram of patient selection and study design is shown in Figure 1A. The study was approved by the Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Owing to the retrospective nature of this study, written informed consent was waived. \n\nFlow chart and correlation chart. A: Flow diagram of patient selection and study design; B: Correlation matrix between candidate variables. The size and color of the circle in the matrix reflect the correlation between the corresponding variables. The darker the blue, the stronger the positive correlation between variables, and the darker the red, the stronger the negative correlation between variables. HCV: Hepatitis C virus; HBV: Hepatitis B virus; PLT1 and PLT3: Platelet counts on the first and third days after operation; PVT: Portal vein thrombosis; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.\nWe retrospectively recruited 944 consecutive patients aged no less than 18 years who underwent splenectomy at our institution between July 4, 2011 and September 7, 2018. The patients with the following conditions were excluded: (1) Splenic space-occupying lesion; (2) Hematological disease; (3) PH caused by non-hepatitis B virus (HBV) related etiologies, such as schistosome, hepatitis C virus, or other unknown causes; (4) Presence of PVT confirmed by preoperative imaging; (5) Previous history of endoscopic therapy, splenic embolization, shunt surgery, or anticoagulants; (6) Incomplete clinical features; (7) Unelevated PLT on the first (PLT1) and third day (PLT3) after the operation compared to the preoperative values; and (8) Receiving prophylactic anti-coagulant therapy after splenectomy. Finally, a total of 483 patients with PH interrelated to HBV were included in this study. The flow diagram of patient selection and study design is shown in Figure 1A. The study was approved by the Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. 
Owing to the retrospective nature of this study, written informed consent was waived. \n\nFlow chart and correlation chart. A: Flow diagram of patient selection and study design; B: Correlation matrix between candidate variables. The size and color of the circle in the matrix reflect the correlation between the corresponding variables. The darker the blue, the stronger the positive correlation between variables, and the darker the red, the stronger the negative correlation between variables. HCV: Hepatitis C virus; HBV: Hepatitis B virus; PLT1 and PLT3: Platelet counts on the first and third days after operation; PVT: Portal vein thrombosis; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.\nData collection All the patients’ clinical features were acquired from the electronic medical record system in our institution, which mainly included sex, age, smoking and drinking history, previous treatment history, etiologies, blood biochemical parameters, and imaging information. The blood biochemical parameters included routine blood tests [red blood cells (RBC), reference interval: 4.30-5.80 × 1012/L; hemoglobin, reference interval: 130.0-175.0 g/L; white blood cells (WBC), reference interval: 3.50-9.50 × 109/L; neutrophil count (N), reference interval: 1.80-6.30 × 109/L; lymphocyte count (L), reference interval: 1.10-3.20 × 109/L; neutrophil to lymphocyte ratio (NLR); PLT, reference interval: 125.0-350.0 × 109/L; platelet to lymphocyte ratio], coagulation function [prothrombin time, reference interval: 11.5-14.5 s; prothrombin activity (PTA), reference interval: 75.0%-125.0%; international normalized ratio, reference interval: 0.80-1.20; fibrinogen, reference interval: 2.00-4.00 g/L; activated partial thromboplastin time, reference interval: 29.0-42.0 s], and liver function [alanine aminotransaminase (ALT), reference interval: ≤ 41 U/L; aspartate aminotransaminase (AST), reference interval: ≤ 40 U/L; serum albumin, reference interval: 35.0-52.0 g/L; serum total bilirubin, reference interval: ≤ 26 μmol/L] within 7 d before surgery, as well as PLT1 and PLT3. The preoperative Child-Pugh grade was divided into three levels of A, B, and C[34], with grade C excluded. Information on the esophageal and gastric varices (EGV), spleen thickness (SPT), diameter of the portal vein (DPV), and preoperative blood transfusion (PBT) within 7 d before the operation was also collected.\nAll the patients’ clinical features were acquired from the electronic medical record system in our institution, which mainly included sex, age, smoking and drinking history, previous treatment history, etiologies, blood biochemical parameters, and imaging information. 
The blood biochemical parameters included routine blood tests [red blood cells (RBC), reference interval: 4.30-5.80 × 1012/L; hemoglobin, reference interval: 130.0-175.0 g/L; white blood cells (WBC), reference interval: 3.50-9.50 × 109/L; neutrophil count (N), reference interval: 1.80-6.30 × 109/L; lymphocyte count (L), reference interval: 1.10-3.20 × 109/L; neutrophil to lymphocyte ratio (NLR); PLT, reference interval: 125.0-350.0 × 109/L; platelet to lymphocyte ratio], coagulation function [prothrombin time, reference interval: 11.5-14.5 s; prothrombin activity (PTA), reference interval: 75.0%-125.0%; international normalized ratio, reference interval: 0.80-1.20; fibrinogen, reference interval: 2.00-4.00 g/L; activated partial thromboplastin time, reference interval: 29.0-42.0 s], and liver function [alanine aminotransaminase (ALT), reference interval: ≤ 41 U/L; aspartate aminotransaminase (AST), reference interval: ≤ 40 U/L; serum albumin, reference interval: 35.0-52.0 g/L; serum total bilirubin, reference interval: ≤ 26 μmol/L] within 7 d before surgery, as well as PLT1 and PLT3. The preoperative Child-Pugh grade was divided into three levels of A, B, and C[34], with grade C excluded. Information on the esophageal and gastric varices (EGV), spleen thickness (SPT), diameter of the portal vein (DPV), and preoperative blood transfusion (PBT) within 7 d before the operation was also collected.\nDefinition of variables We diagnosed PVT by color Doppler ultrasound examination[35] and contrast-enhanced CT would be applied as an auxiliary examination when its diagnosis was questioned[36]. In this study, abdominal ultrasound and contrast-enhanced CT examinations were routinely performed within 7 d before the operation. Routine ultrasonography was performed on the 7th day after the operation[19,20], or at any time when there were suspected clinical symptoms of PVT such as fever, severe abdominal pain, vomiting, abnormal liver function, and leukocytosis[12]. \nAccording to the definition of varices[8], EGV was divided into EGV without varices and EGV with varices in this study. SPT was defined as the vertical distance between the splenic hilum and the cut point of the lateral margin, and DPV was measured as the largest anteroposterior diameter at the point of intersection with the hepatic artery, during the patient’s breath holding[37]. \nThe PPER was calculated from the preoperative PLT and postoperative PLT. For example, PPER1 (at the first day) was calculated as (PLT1 - PLT)/ PLT × 100%, and PPER3 (at the third day) was calculated as (PLT3 - PLT)/ PLT × 100%.\nWe diagnosed PVT by color Doppler ultrasound examination[35] and contrast-enhanced CT would be applied as an auxiliary examination when its diagnosis was questioned[36]. In this study, abdominal ultrasound and contrast-enhanced CT examinations were routinely performed within 7 d before the operation. Routine ultrasonography was performed on the 7th day after the operation[19,20], or at any time when there were suspected clinical symptoms of PVT such as fever, severe abdominal pain, vomiting, abnormal liver function, and leukocytosis[12]. \nAccording to the definition of varices[8], EGV was divided into EGV without varices and EGV with varices in this study. SPT was defined as the vertical distance between the splenic hilum and the cut point of the lateral margin, and DPV was measured as the largest anteroposterior diameter at the point of intersection with the hepatic artery, during the patient’s breath holding[37]. 
Development of models
All patients were randomly split with the “caret” package, with 70% assigned to a training cohort and 30% to a validation cohort. All model building was performed in the training cohort. Multivariate forward stepwise logistic regression was used to select informative variables for the generalized linear model (GLM). The least absolute shrinkage and selection operator (LASSO), a well-established shrinkage method that can effectively screen meaningful variables from a large set with potential multicollinearity, was implemented with the “glmnet” package to develop the LASSO model (LSM)[38]. A random forest (RF) consists of a large number of individual decision trees acting as an ensemble[39]; these decision trees were used to construct the RF model (RFM)[40]. The importance of candidate variables was quantified by the mean decreased Gini (MDG) score.
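A minimal sketch of the split and the two regression-based pipelines, assuming a data frame `dat` with a binary outcome `PVT`; the seed and the exact formulas are illustrative, not the authors’ settings, and the 3-fold cross-validation follows the Figure 2 legend.

```r
library(caret)
library(glmnet)

set.seed(123)                                   # illustrative seed, not the authors' value
idx   <- createDataPartition(dat$PVT, p = 0.70, list = FALSE)
train <- dat[idx, ]
valid <- dat[-idx, ]

# GLM: multivariate forward stepwise logistic regression, starting from a null model.
preds   <- setdiff(names(train), "PVT")
glm_fit <- step(glm(PVT ~ 1, data = train, family = binomial),
                scope = reformulate(preds, response = "PVT"),
                direction = "forward", trace = FALSE)

# LSM: LASSO-penalized logistic regression with 3-fold cross-validation.
x      <- model.matrix(PVT ~ ., data = train)[, -1]
cv_fit <- cv.glmnet(x, train$PVT, family = "binomial",
                    alpha = 1, nfolds = 3, type.measure = "mse")
coef(cv_fit, s = "lambda.1se")                  # variables retained at the 1-SE lambda
```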
Evaluation of models
The robustness and clinical practicability of the models were assessed using the ROC curve, calibration curve, decision curve analysis (DCA), and clinical impact curve (CIC). The AUC was used to estimate the discrimination of each model (“rms” package). Calibration curves, generated with 1000 bootstrap samples to reduce overfitting bias, were used to examine each model’s calibration. The clinical applicability of each model was assessed by DCA and CIC using the “rms” and “rmda” packages.
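With the “rms” and “rmda” packages named above, the evaluation step might look like the following sketch; the five-variable formula is the GLM’s (see Results), and `PVT` is assumed to be coded 0/1 in `train`.

```r
library(rms)
library(rmda)

fit <- lrm(PVT ~ L + SPT + DPV + PPER1 + PPER3,     # the GLM's five variables (see Results)
           data = train, x = TRUE, y = TRUE)
fit$stats["C"]                                      # concordance index, i.e., the AUC

cal <- calibrate(fit, method = "boot", B = 1000)    # 1000 bootstrap resamples
plot(cal)

dc <- decision_curve(PVT ~ L + SPT + DPV + PPER1 + PPER3,
                     data = train, family = binomial(link = "logit"))
plot_decision_curve(dc)                             # DCA
plot_clinical_impact(dc)                            # CIC
```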
Statistical analysis
Statistical analyses were performed with R Statistical Software (version 4.1.2, https://www.r-project.org/). Continuous variables were tested for normality; those with a normal distribution are described as the mean ± SD, and those without as the median and interquartile range. Continuous variables were compared using Student’s t-test or the non-parametric rank-sum test (Kruskal-Wallis test), as appropriate. Categorical variables are described as numbers (percentages) and were compared using the Chi-square test or Fisher’s exact test, as appropriate. Correlations between candidate variables were determined by Spearman’s correlation coefficient. All statistical tests were two-tailed, and P < 0.05 was considered statistically significant.
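For completeness, the base-R calls corresponding to this analysis plan are sketched below; `x` and `grp` are hypothetical names for a continuous variable and the PVT grouping.

```r
# Continuous variable x by group grp (hypothetical column names).
shapiro.test(train$x)                       # normality check (test not named by the authors)
t.test(x ~ grp, data = train)               # if approximately normally distributed
kruskal.test(x ~ grp, data = train)         # non-parametric rank-sum test otherwise

# Categorical variables: Chi-square or Fisher's exact test, as appropriate.
tab <- table(train$EGV, train$grp)
chisq.test(tab)
fisher.test(tab)

# Spearman correlations between candidate variables (cf. Figure 1B).
cor(train[, c("PLT", "PPER1", "PPER3")], method = "spearman")
```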
RESULTS

Patient demographics and characteristics
The detailed clinical characteristics of the 483 patients with PH are summarized in Table 1. All participants were randomly and automatically divided into a training cohort (n = 338, 70%) and a validation cohort (n = 145, 30%). PVT was diagnosed in 200 (41.4%), 135 (39.9%), and 65 (44.8%) cases in the overall, training, and validation cohorts, respectively.
Consistent with the results of the intergroup comparison, 14 of the 31 candidate variables were associated with PVT, namely, RBC, WBC, L, NLR, PLT, PTA, ALT, AST, EGV, SPT, DPV, PBT, PPER1, and PPER3 (Figure 1B and Supplementary Table 1), indicating that PPER1 and PPER3 were highly likely to be potential predictors of PVT.

Table 1 Detailed clinical characteristics of 483 patients with portal hypertension. Continuous variables are presented as the median and interquartile range (IQR). PH: Portal hypertension; PVT: Portal vein thrombosis; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.

Logistic regression analysis
Univariate and multivariate logistic regression analyses of risk factors associated with PVT in the overall cohort are presented in Table 2. In the univariate analysis, 11 variables with P < 0.05 were carried forward into the multivariate analysis.
Finally, the following six variables were revealed to be closely associated with the occurrence of PVT: L [odds ratio (OR): 0.28, 95% confidence interval (CI): 0.14-0.54, P < 0.001], EGV (OR: 0.51, 95%CI: 0.32-0.79, P = 0.003), SPT (OR: 1.22, 95%CI: 1.06-1.40, P = 0.005), DPV (OR: 3.57, 95%CI: 1.86-7.03, P < 0.001), PPER1 (OR: 1.78, 95%CI: 1.24-2.62, P = 0.002), and PPER3 (OR: 1.43, 95%CI: 1.16-1.77, P < 0.001). This result demonstrated that PPER1 and PPER3 were independent risk factors for the occurrence of PVT.

Table 2 Univariate and multivariate logistic regression analyses for risk factors associated with portal vein thrombosis in the overall cohort.
Forward stepwise analysis.
Continuous variables.
PVT: Portal vein thrombosis; OR: Odds ratio; CI: Confidence interval; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.
Establishment of PPER-based models
As shown in Supplementary Table 2, the following five variables strongly associated with PVT were chosen to construct the GLM: L (OR: 0.34, 95%CI: 0.14-0.77, P = 0.01), SPT (OR: 1.21, 95%CI: 1.02-1.44, P = 0.02), DPV (OR: 5.85, 95%CI: 2.57-14.05, P < 0.001), PPER1 (OR: 1.77, 95%CI: 1.13-2.82, P = 0.01), and PPER3 (OR: 1.42, 95%CI: 1.12-1.84, P = 0.005). The optimal LSM was obtained when the 31 candidate variables were shrunk to 10 through the LASSO (Figure 2A and B): L, NLR, PLT, PTA, AST, EGV, SPT, DPV, PPER1, and PPER3. In the RF, the total sample group had the smallest error, 24.56%, when the number of random trees was 133 (Figure 2C). Accordingly, 133 random trees were set and passed through five iterations, and the importance scores of the candidate variables are presented in Figure 2D. Ultimately, the nine variables with the highest MDG scores were selected to construct the RFM.

Figure 2 Features selection. A: Least absolute shrinkage and selection operator variable trace profiles of the 31 features; 3-fold cross-validation was employed; B: Mean square error (MSE) plots of models under different lambda values. The lambda corresponding to one MSE away from the minimum MSE was the optimal value of 0.043, at which the target variables shrunk to 10; C: Relationship between the error and the number of trees. Green represents error in the positive event group, red represents error in the negative event group, and black represents error in the total sample group; D: Importance scores of the candidate features. RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3; MDG: Mean decreased Gini.
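A sketch of the random forest step with the reported tree count; the “randomForest” package is an assumption (the text does not name the implementation), and the seed is illustrative.

```r
library(randomForest)

set.seed(123)                                        # illustrative seed
rf_fit <- randomForest(PVT ~ ., data = train,        # PVT assumed coded as a factor
                       ntree = 133)                  # tree count reported in the text

plot(rf_fit)                                         # error vs number of trees (cf. Figure 2C)
imp <- importance(rf_fit)                            # mean decreased Gini for classification
head(sort(imp[, "MeanDecreaseGini"], decreasing = TRUE), 9)   # top nine variables by MDG
```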
Assessment and verification of PPER-based models
The ROC curves of the GLM, LSM, and RFM in the training cohort are shown in Figure 3A; their AUCs were 0.83 (95%CI: 0.79-0.88), 0.84 (95%CI: 0.79-0.88), and 0.84 (95%CI: 0.79-0.88), respectively.
All models had excellent calibration ability in the training cohort (Figure 3B), and DCA and CIC revealed that all of them conferred high clinical net benefits (Figures 3C and 4A-C). In the validation cohort, the ROC curves of the models are presented in Figure 3D; their AUCs were 0.77 (95%CI: 0.69-0.85), 0.83 (95%CI: 0.76-0.90), and 0.78 (95%CI: 0.70-0.85), respectively. All models again demonstrated highly satisfactory calibration and clinical utility (Figures 3E and F and 4D-F).

Figure 3 Evaluation and validation of the postoperative platelet elevation rate-based models in the training cohort and validation cohort. A and D: Receiver operating characteristic curves of the postoperative platelet elevation rate (PPER)-based models; B and E: Calibration curves of the PPER-based models; C and F: Decision curve analysis of the PPER-based models. GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; AUC: Area under the receiver operating characteristic curve.

Figure 4 Clinical impact curves of the postoperative platelet elevation rate-based models in the training cohort and validation cohort. A and D: Clinical impact curves for the generalized linear model; B and E: Clinical impact curves for the least absolute shrinkage and selection operator model; C and F: Clinical impact curves for the random forest model.
Importance of PPER for models
As shown in Figure 5A, the nomogram for the GLM incorporated five variables, L, SPT, DPV, PPER1, and PPER3, which were exactly the intersection variables of the GLM, LSM, and RFM (Figure 5B). This indicates that these variables were significant predictors of the occurrence of PVT and contributed substantially to the construction of the models. Moreover, among the variables shared by the GLM, LSM, and RFM, the order of weight from high to low was DPV, PPER1, PPER3, SPT, and L (Figure 5C), underscoring the predictive value of the PPER (PPER1 and PPER3) for PVT in all models.

Figure 5 Nomogram for the generalized linear model and weighting of variables. A: Nomogram for the generalized linear model (GLM); B: Intersection variables among the GLM, least absolute shrinkage and selection operator model (LSM), and random forest model (RFM); C: Weights of the intersection variables in the GLM, LSM, and RFM, respectively. SPT: Spleen thickness; L: Lymphocyte count; DPV: Diameter of the portal vein; PPER1 and PPER3: Platelet elevation rates at postoperative days 1 and 3; PVT: Portal vein thrombosis; PLR: Platelet to lymphocyte ratio; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PTA: Prothrombin activity; EGV: Esophageal and gastric varices; AST: Aspartate aminotransaminase.
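A Figure 5A-style nomogram can be drawn from the fitted GLM with the “rms” package, as sketched below; this is an assumed implementation, since the authors only state that the nomogram derives from the GLM.

```r
library(rms)

dd <- datadist(train); options(datadist = "dd")      # rms requires a datadist object
fit <- lrm(PVT ~ L + SPT + DPV + PPER1 + PPER3, data = train)

nom <- nomogram(fit, fun = plogis,                   # map total points to predicted risk
                funlabel = "Risk of PVT")
plot(nom)
```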
Comparative analysis of PPER-based models
The performance of the three PPER-based models in predicting PVT in the different cohorts is shown in Table 3. The accuracy of the GLM, LSM, and RFM was 76.2%, 77.4%, and 77.4% in the overall cohort; 79.6%, 79.0%, and 78.7% in the training cohort; and 74.5%, 79.3%, and 76.6% in the validation cohort, respectively. When the other metrics, such as AUC, sensitivity, specificity, positive predictive value, negative predictive value, kappa value, and Brier score, were comprehensively considered, the LSM and RFM appeared to be slightly superior to the GLM.

Table 3 Performance of models for portal vein thrombosis risk prediction in different cohorts. AUC: Area under the receiver operating characteristic curve; CI: Confidence interval; PPV: Positive predictive value; NPV: Negative predictive value; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model.
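The Table 3 metrics can be computed from each model’s predicted probabilities; this sketch uses caret’s confusionMatrix with a hypothetical 0.5 cutoff (the authors’ threshold is not stated) plus a hand-rolled Brier score, taking the stepwise GLM fit `glm_fit` from the earlier sketch as an example.

```r
library(caret)

prob <- predict(glm_fit, newdata = valid, type = "response")   # predicted PVT probability
pred <- factor(ifelse(prob > 0.5, "PVT", "noPVT"),             # hypothetical 0.5 cutoff
               levels = c("noPVT", "PVT"))
obs  <- factor(ifelse(valid$PVT == 1, "PVT", "noPVT"),
               levels = c("noPVT", "PVT"))

cm <- confusionMatrix(pred, obs, positive = "PVT")
cm$overall[c("Accuracy", "Kappa")]
cm$byClass[c("Sensitivity", "Specificity", "Pos Pred Value", "Neg Pred Value")]

mean((prob - as.numeric(obs == "PVT"))^2)                      # Brier score
```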
DISCUSSION
Undoubtedly, PVT is a lethal complication after splenectomy in cirrhotic patients with PH[12]. Once PVT develops, elevated portal venous pressure, ischemic bowel necrosis, progressive impairment of liver function, and even liver failure may follow, which can ultimately be life-threatening[41,42]. Therefore, research on optimizing the early detection of individuals at high risk of PVT after splenectomy is urgently needed. In this study, we successfully constructed PPER-based models for predicting PVT by machine/deep learning, which should facilitate the early identification of patients at high risk of PVT.

In the present study, the conventional generalized linear (CGL) method and machine/deep learning (the LASSO and RF) were applied separately to screen the variables that most affected PVT prediction. The CGL method is characterized by strong interpretability, especially with multifactorial forward stepwise regression, and has therefore been widely applied as the traditional approach to constructing predictive models[43]. However, with the rapid progress of artificial intelligence, novel prediction models based on machine/deep learning have emerged with higher accuracy, leading some clinicians to question the value of CGL models for individualized clinical application[44]. Consistent with this, our results indicated that the performance of the LSM and RFM was slightly better than that of the GLM.

Interestingly, the PPER-based models shared the following five intersecting factors, SPT, DPV, L, PPER1, and PPER3, indicating that these were the main contributors to the higher incidence of PVT. Previous studies found that preoperative SPT and DPV were important predictors of PVT formation after splenectomy in patients with PH[8,45], which is highly consistent with our findings. A plausible explanation is that a wide preoperative DPV and a thick spleen lead to slow portal vein blood flow, which is closely related to postoperative thrombosis[45,46].
In most cases, platelet, erythrocyte, and leukocyte counts rise dramatically within a short time after splenectomy in patients with PH, and the blood becomes hypercoagulable[8]. Accordingly, previous studies suggested that low preoperative platelet and leukocyte counts were predictors of postoperative PVT formation[47]. This study revealed that the preoperative lymphocyte count was an influential factor for postoperative PVT, in line with that view. Of note, the present study employed the PPER to reflect the magnitude of the dynamic change between preoperative and postoperative PLT and found that the PPER had high predictive value for the risk of postoperative PVT, which had not been addressed previously.

Stamou et al[48] reported that the median time to PVT formation after splenectomy in patients with PH was the 6th postoperative day (range, 3-11 d), and Lu et al[8] concluded that 49.19% of patients developed PVT within 7 d after splenectomy. Accordingly, scholars have routinely applied ultrasonography to diagnose PVT on the 7th day after splenectomy[8,19,20]. In this study, by combining the preoperative predictors with the PPER, the models we constructed could effectively discriminate individuals at high risk of PVT as early as the first 3 d after the operation, which is extremely valuable for guiding clinicians’ treatment strategies.

Currently, there is no standard preventive regimen for PVT after splenectomy in cirrhotic patients with PH[49]. Most scholars have recently advocated administering prophylactic anticoagulant therapy early postoperatively, which may be more helpful in reducing the incidence of PVT[50,51]. However, it should be chosen cautiously because, in patients with liver cirrhosis, the risk of bleeding cannot be avoided[51]. In addition, if preventive regimens were routinely adopted for all individuals with PH after splenectomy, concerns about overtreatment would inevitably arise. Notably, in the present study, the accuracy of the PPER-based models in predicting PVT approached 80%, allowing individuals at high risk of PVT to be distinguished with high efficiency and thus guiding clinicians to take targeted, individualized preventive measures in time.

The present study has some limitations. First, due to its retrospective nature, selection bias cannot be eliminated. Second, less common preoperative factors that may influence PVT formation, such as splenic vein diameter, spleen volume, and portal vein flow velocity[8,19], were not routinely measured in our institution and thus could not be included; however, the SPT and DPV in this study can indirectly reflect these indicators to a certain extent[45,46]. Third, this was a monocentric study; although the PPER-based models demonstrated excellent performance for predicting PVT, they still lack verification in external cohorts. Therefore, large-scale prospective multicenter studies are warranted to support the generalization and application of the PPER-based models.

CONCLUSION
PPER1 and PPER3 are effective indicators for postoperative prediction of PVT. We have successfully developed PPER-based practical models for predicting PVT, which could help clinicians identify individuals at high risk of PVT early and efficiently and thus guide timely intervention measures.
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Portal hypertension", "Splenectomy", "Portal vein thrombosis", "Postoperative platelet elevation rate", "Practical model", "Machine learning" ]
INTRODUCTION
Liver cirrhosis is recognized as an extremely important and rapidly increasing disease burden worldwide[1]. In the progressive stage of liver cirrhosis, the complications caused by portal hypertension (PH), including esophagogastric variceal bleeding and hypersplenism, pose a great threat to patients’ life and health[2,3]. Liver transplantation is currently recommended as a curative treatment for liver cirrhosis combined with PH; however, due to the shortage of donor livers and the high cost of transplantation, its clinical practicability is limited[4,5]. Transjugular intrahepatic portosystemic shunt seems to be a boon for PH, but unfortunately, restenosis and/or hepatic encephalopathy occur in more than 60% of patients[6,7]. In Asia, splenectomy (alone or combined with devascularization) has been widely adopted as an effective treatment for hypersplenism or esophageal and gastric variceal bleeding caused by PH[8,9].

Portal vein thrombosis (PVT) is often defined as thrombosis within the portal vein trunk or intrahepatic portal branches, with or without involvement of the splenic vein or superior mesenteric vein[10,11]. PVT is considered a dreaded complication after splenectomy for patients with PH[12], with a reported incidence of 4.8%-51.5%[13-15]. For patients with acute PVT and PVT extending into the superior mesenteric vein, it has been reported that PVT may be closely associated with acute liver failure and could influence mortality[16]. Hence, strategies are needed to prevent PVT in patients who undergo splenectomy. In clinical practice, anticoagulation is a critical method for the prevention and treatment of PVT after splenectomy. However, when anticoagulation therapy should be started remains controversial: early anticoagulation may result in life-threatening bleeding events in patients with liver cirrhosis, and whether anticoagulant therapy should be prescribed to all patients after splenectomy deserves careful consideration. In addition, the majority of patients with PVT are asymptomatic, and only a few experience abdominal discomfort[12]. Therefore, there is an urgent need for effective diagnostic methods to identify, early and rapidly, individuals at high risk of PVT after splenectomy, and thereby guide clinicians to take intervention measures.

Color Doppler ultrasonography and/or contrast-enhanced computed tomography (CT) is commonly applied for the final diagnosis of PVT[17]; however, these modalities are of little use for screening out the high-risk individuals who are vulnerable to PVT. Given this, many scholars have investigated the risk factors closely related to the occurrence of PVT after splenectomy[8,18-21]. Several investigators noted that a low preoperative platelet count (PLT) and a high postoperative PLT may be crucial predictors of postoperative PVT risk[19,22]. Generally speaking, patients with PH experience a rebounding rise in PLT after splenectomy[23], combined with hemodynamic changes in the portal venous system, and are thus highly prone to developing PVT[24]. However, the effect of the amplitude of this sharp postoperative rise in PLT on PVT has received little attention. We speculated that the postoperative platelet elevation rate (PPER) should be an important predictor of PVT. To the best of our knowledge, there are no reports on the relationship between PPER and PVT.
In recent years, to meet the urgent demand for effective methods to predict PVT after splenectomy, several studies have attempted to construct predictive models for PVT after splenectomy in patients with cirrhosis using multivariate regression analysis[25,26]. However, few clinical variables were included in those analyses, and the accuracy of the resulting models remains unsatisfactory; an efficient and accurate visualizable model is therefore urgently needed. Nowadays, novel machine learning algorithms based on richer sets of clinical features have shown great potential in many areas of medical research, especially the construction of predictive models, and the features screened for model construction are clinically interpretable[27-29]. Gao et al[28] constructed four machine learning models based on 53 raw clinical features of coronavirus disease 2019 patients to distinguish individuals at high risk of mortality, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.976. Kawakami et al[29] developed seven supervised machine learning classifiers based on 32 clinical parameters, among which the random forest (RF) model performed best in distinguishing epithelial ovarian cancer from benign ovarian tumors, with an AUC of 0.968. The wide range of applications of machine learning has surpassed conventional statistical analysis owing to its higher accuracy, which may lead to its increasing adoption in medical research[30-32]. Although machine learning algorithms have overwhelming advantages over traditional multivariate analysis in constructing clinical prediction models, so far only Wang et al[33] have tried to construct a prediction model of PVT after splenectomy in cirrhotic patients with PH using machine learning. Their model greatly improved prediction efficiency compared with traditional models; however, the clinical parameters involved in its construction are extremely complex, which limits its clinical use.

Therefore, the purpose of this study was to evaluate the predictive value of the PPER for the risk of PVT after splenectomy in patients with PH. In addition, we sought to build simple, efficient, and accurate practical models for predicting PVT with machine learning algorithms, to assist clinicians in the early identification of individuals at high risk of PVT after splenectomy and in taking timely intervention measures. We present this article in accordance with the TRIPOD reporting checklist.
Finally, a total of 483 patients with PH interrelated to HBV were included in this study. The flow diagram of patient selection and study design is shown in Figure 1A. The study was approved by the Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Owing to the retrospective nature of this study, written informed consent was waived. Flow chart and correlation chart. A: Flow diagram of patient selection and study design; B: Correlation matrix between candidate variables. The size and color of the circle in the matrix reflect the correlation between the corresponding variables. The darker the blue, the stronger the positive correlation between variables, and the darker the red, the stronger the negative correlation between variables. HCV: Hepatitis C virus; HBV: Hepatitis B virus; PLT1 and PLT3: Platelet counts on the first and third days after operation; PVT: Portal vein thrombosis; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3. We retrospectively recruited 944 consecutive patients aged no less than 18 years who underwent splenectomy at our institution between July 4, 2011 and September 7, 2018. The patients with the following conditions were excluded: (1) Splenic space-occupying lesion; (2) Hematological disease; (3) PH caused by non-hepatitis B virus (HBV) related etiologies, such as schistosome, hepatitis C virus, or other unknown causes; (4) Presence of PVT confirmed by preoperative imaging; (5) Previous history of endoscopic therapy, splenic embolization, shunt surgery, or anticoagulants; (6) Incomplete clinical features; (7) Unelevated PLT on the first (PLT1) and third day (PLT3) after the operation compared to the preoperative values; and (8) Receiving prophylactic anti-coagulant therapy after splenectomy. Finally, a total of 483 patients with PH interrelated to HBV were included in this study. The flow diagram of patient selection and study design is shown in Figure 1A. The study was approved by the Medical Ethics Committee of Tongji Medical College, Huazhong University of Science and Technology. Owing to the retrospective nature of this study, written informed consent was waived. Flow chart and correlation chart. A: Flow diagram of patient selection and study design; B: Correlation matrix between candidate variables. The size and color of the circle in the matrix reflect the correlation between the corresponding variables. The darker the blue, the stronger the positive correlation between variables, and the darker the red, the stronger the negative correlation between variables. 
Data collection: All patients' clinical features were acquired from the electronic medical record system of our institution, mainly including sex, age, smoking and drinking history, previous treatment history, etiologies, blood biochemical parameters, and imaging information. The blood biochemical parameters included routine blood tests [red blood cells (RBC), reference interval: 4.30-5.80 × 10¹²/L; hemoglobin, reference interval: 130.0-175.0 g/L; white blood cells (WBC), reference interval: 3.50-9.50 × 10⁹/L; neutrophil count (N), reference interval: 1.80-6.30 × 10⁹/L; lymphocyte count (L), reference interval: 1.10-3.20 × 10⁹/L; neutrophil to lymphocyte ratio (NLR); PLT, reference interval: 125.0-350.0 × 10⁹/L; platelet to lymphocyte ratio], coagulation function [prothrombin time, reference interval: 11.5-14.5 s; prothrombin activity (PTA), reference interval: 75.0%-125.0%; international normalized ratio, reference interval: 0.80-1.20; fibrinogen, reference interval: 2.00-4.00 g/L; activated partial thromboplastin time, reference interval: 29.0-42.0 s], and liver function [alanine aminotransaminase (ALT), reference interval: ≤ 41 U/L; aspartate aminotransaminase (AST), reference interval: ≤ 40 U/L; serum albumin, reference interval: 35.0-52.0 g/L; serum total bilirubin, reference interval: ≤ 26 μmol/L] within 7 d before surgery, as well as PLT1 and PLT3. The preoperative Child-Pugh grade was divided into three levels, A, B, and C[34], with grade C excluded. Information on esophageal and gastric varices (EGV), spleen thickness (SPT), diameter of the portal vein (DPV), and preoperative blood transfusion (PBT) within 7 d before the operation was also collected.
Definition of variables: PVT was diagnosed by color Doppler ultrasound examination[35], with contrast-enhanced CT applied as an auxiliary examination when the diagnosis was in question[36]. In this study, abdominal ultrasound and contrast-enhanced CT examinations were routinely performed within 7 d before the operation. Routine ultrasonography was performed on the 7th day after the operation[19,20], or at any time when there were suspected clinical symptoms of PVT such as fever, severe abdominal pain, vomiting, abnormal liver function, and leukocytosis[12]. According to the definition of varices[8], EGV was classified as EGV without varices or EGV with varices in this study. SPT was defined as the vertical distance between the splenic hilum and the cut point of the lateral margin, and DPV was measured as the largest anteroposterior diameter at the point of intersection with the hepatic artery, during the patient's breath holding[37]. The PPER was calculated from the preoperative and postoperative PLT: PPER1 (postoperative day 1) was calculated as (PLT1 − PLT)/PLT × 100%, and PPER3 (postoperative day 3) as (PLT3 − PLT)/PLT × 100%.
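The PPER definition above maps directly onto a one-line helper; a minimal sketch, assuming a data frame df with columns PLT, PLT1, and PLT3:

```r
# PPER = percentage rise of the postoperative platelet count over the
# preoperative count (PLT).
pper <- function(plt_pre, plt_post) (plt_post - plt_pre) / plt_pre * 100

df$PPER1 <- pper(df$PLT, df$PLT1)   # postoperative day 1
df$PPER3 <- pper(df$PLT, df$PLT3)   # postoperative day 3
```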
Development of models: All candidates were randomly divided into two parts using the "caret" package, with 70% assigned to a training cohort and 30% to a validation cohort. All model building was performed in the training cohort. Multivariate forward stepwise logistic regression analysis was used to select valuable variables to construct the generalized linear model (GLM). The least absolute shrinkage and selection operator (LASSO) is a well-established shrinkage method that can effectively screen meaningful variables from a large set of variables with potential multicollinearity; it was used to develop the LASSO model (LSM)[38] and was implemented with the "glmnet" package. RF is an ensemble of a large number of individual decision trees operating as a whole[39]; these decision trees were used to construct the RF model (RFM)[40]. The importance of candidate variables was reflected by the mean decreased Gini (MDG) score.
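A compact sketch of the three modelling pipelines just described. The caret and glmnet calls follow the packages named in the text; the randomForest package, the 0/1 coding of PVT, and the seed are assumptions added to make the example self-contained.

```r
library(caret)          # data partitioning, as in the text
library(glmnet)         # LASSO, as in the text
library(randomForest)   # RF implementation (package choice is ours)

set.seed(42)                                            # illustrative seed
idx   <- createDataPartition(factor(df$PVT), p = 0.7, list = FALSE)
train <- df[idx, ]
valid <- df[-idx, ]

# GLM: multivariate forward stepwise logistic regression
null_fit <- glm(PVT ~ 1, data = train, family = binomial)
full_fit <- glm(PVT ~ ., data = train, family = binomial)
glm_fit  <- step(null_fit, scope = formula(full_fit), direction = "forward")

# LSM: LASSO with cross-validation (3-fold, per the Figure 2 legend)
x      <- model.matrix(PVT ~ ., data = train)[, -1]
cv_fit <- cv.glmnet(x, train$PVT, family = "binomial",
                    nfolds = 3, type.measure = "mse")
coef(cv_fit, s = "lambda.1se")     # lambda "one MSE from the minimum"

# RFM: random forest with the reported optimum of 133 trees
rf_fit <- randomForest(factor(PVT) ~ ., data = train,
                       ntree = 133, importance = TRUE)
```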
Evaluation of models: The robustness and clinical practicability of the models were assessed using the ROC curve, calibration curve, decision curve analysis (DCA), and clinical impact curve (CIC). The AUC was used to estimate the discrimination of each model, computed with the "rms" package. Calibration curves were used to examine the calibration ability of each model and were calibrated with 1000 bootstrap samples to reduce overfitting bias. The clinical applicability of each model was assessed by DCA and CIC using the "rms" and "rmda" packages.
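A sketch of this evaluation workflow under the same assumptions as the previous block; pROC is our choice for the AUC step, while calibration and DCA/CIC use the rms and rmda packages named above. The five-variable formula mirrors the GLM reported in the Results.

```r
library(pROC)   # ROC/AUC (package choice is ours)
library(rms)    # calibration, as in the text
library(rmda)   # DCA and clinical impact curves, as in the text

p_valid <- predict(glm_fit, newdata = valid, type = "response")

# Discrimination: ROC curve with AUC and 95%CI
roc_obj <- roc(valid$PVT, p_valid)
auc(roc_obj)
ci.auc(roc_obj)

# Calibration: 1000 bootstrap resamples (rms needs an lrm fit with x, y kept)
lrm_fit <- lrm(PVT ~ L + SPT + DPV + PPER1 + PPER3, data = train,
               x = TRUE, y = TRUE)
plot(calibrate(lrm_fit, B = 1000))

# Clinical utility: decision curve analysis and clinical impact curve
dc <- decision_curve(PVT ~ L + SPT + DPV + PPER1 + PPER3, data = train,
                     family = binomial(link = "logit"))
plot_decision_curve(dc)
plot_clinical_impact(dc)
```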
Statistical analysis: Statistical analyses were performed with R Statistical Software (version 4.1.2, https://www.r-project.org/). Continuous variables were tested for normality; normally distributed variables are described as the mean ± SD, and non-normally distributed variables as the median and interquartile range. Continuous variables were compared using Student's t-test or the non-parametric rank-sum test (Kruskal-Wallis test), as appropriate. Categorical variables are described as numbers (percentages) and were compared using the Chi-square test or Fisher's exact test, as appropriate. Correlations between candidate variables were determined by Spearman's correlation coefficient. All statistical tests were two-tailed, and P < 0.05 was considered significant.
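These univariate comparisons correspond to standard base-R tests; a minimal sketch with illustrative variable names, assuming PVT is the two-level grouping variable:

```r
shapiro.test(df$PPER1)                    # normality check for a continuous variable

t.test(PPER1 ~ PVT, data = df)            # normally distributed variables
kruskal.test(PPER1 ~ PVT, data = df)      # non-normally distributed variables

chisq.test(table(df$EGV, df$PVT))         # categorical variables
fisher.test(table(df$PBT, df$PVT))        # sparse categorical variables

cor.test(df$PPER1, df$PPER3, method = "spearman")  # Spearman's correlation
```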
RESULTS: Patient demographics and characteristics: The detailed clinical characteristics of the 483 patients with PH are summarized in Table 1. All participants were randomly divided into a training cohort (n = 338, 70%) and a validation cohort (n = 145, 30%). PVT was diagnosed in 200 (41.4%), 135 (39.9%), and 65 (44.8%) cases in the overall, training, and validation cohorts, respectively.
Consistent with the results of the intergroup comparison, 14 of the 31 candidate variables were associated with PVT: RBC, WBC, L, NLR, PLT, PTA, ALT, AST, EGV, SPT, DPV, PBT, PPER1, and PPER3 (Figure 1B and Supplementary Table 1), indicating that PPER1 and PPER3 were highly likely to be potential predictors of PVT. Detailed clinical characteristics of 483 patients with portal hypertension. Continuous variables are presented as the median and interquartile range (IQR). PH: Portal hypertension; PVT: Portal vein thrombosis; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3. Logistic regression analysis: Univariate and multivariate logistic regression analyses for risk factors associated with PVT in the overall cohort are presented in Table 2. In the univariate analysis, 11 variables with P < 0.05 were entered into the multivariate analysis.
Finally, the following six variables were closely associated with the occurrence of PVT: L [odds ratio (OR): 0.28, 95% confidence interval (CI): 0.14-0.54, P < 0.001], EGV (OR: 0.51, 95%CI: 0.32-0.79, P = 0.003), SPT (OR: 1.22, 95%CI: 1.06-1.40, P = 0.005), DPV (OR: 3.57, 95%CI: 1.86-7.03, P < 0.001), PPER1 (OR: 1.78, 95%CI: 1.24-2.62, P = 0.002), and PPER3 (OR: 1.43, 95%CI: 1.16-1.77, P < 0.001). This result demonstrates that PPER1 and PPER3 were independent risk factors for the occurrence of PVT. Univariate and multivariate logistic regression analyses for risk factors associated with portal vein thrombosis in the overall cohort. Forward stepwise analysis; continuous variables. PVT: Portal vein thrombosis; OR: Odds ratio; CI: Confidence interval; RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; De: Devascularization; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3.
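ORs and 95%CIs of this kind are what exponentiating a logistic model's coefficients and profile-likelihood confidence limits return; a sketch under the same df assumption as earlier blocks:

```r
# Multivariate logistic model with the six retained variables
fit <- glm(PVT ~ L + EGV + SPT + DPV + PPER1 + PPER3,
           data = df, family = binomial)
round(exp(cbind(OR = coef(fit), confint(fit))), 2)  # ORs with 95%CIs
```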
Establishment of PPER-based models: As shown in Supplementary Table 2, the following five variables strongly associated with PVT were chosen to construct the GLM: L (OR: 0.34, 95%CI: 0.14-0.77, P = 0.01), SPT (OR: 1.21, 95%CI: 1.02-1.44, P = 0.02), DPV (OR: 5.85, 95%CI: 2.57-14.05, P < 0.001), PPER1 (OR: 1.77, 95%CI: 1.13-2.82, P = 0.01), and PPER3 (OR: 1.42, 95%CI: 1.12-1.84, P = 0.005). The optimal LSM was obtained when the 31 candidate variables were shrunk to 10 through the LASSO (Figure 2A and B); these were L, NLR, PLT, PTA, AST, EGV, SPT, DPV, PPER1, and PPER3. In the RF, the total sample group reached its smallest error (24.56%) when the number of random trees was 133 (Figure 2C). Accordingly, 133 random trees were set and passed through five iterations, and the importance scores of the candidate variables are presented in Figure 2D. Ultimately, the nine variables with the highest MDG scores were selected to construct the RFM. Features selection. A: Least absolute shrinkage and selection operator variable trace profiles of the 31 features; 3-fold cross-validation was employed; B: Mean square error (MSE) plots of the models under different lambda values; the lambda corresponding to one MSE away from the minimum MSE yielded the optimal lambda value of 0.043, at which the target variables shrank to 10; C: Relationship between the error and the number of trees; the green line represents error in the positive event group, the red line error in the negative event group, and the black line error in the total sample group; D: Importance scores of the candidate features. RBC: Red blood cells; HLB: Hemoglobin; WBC: White blood cells; N: Neutrophil count; L: Lymphocyte count; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PLR: Platelet to lymphocyte ratio; PT: Prothrombin time; PTA: Prothrombin activity; INR: International normalized ratio; FIB: Fibrinogen; APTT: Activated partial thromboplastin time; ALT: Alanine aminotransaminase; AST: Aspartate aminotransaminase; ALB: Serum albumin; TBIL: Total serum bilirubin; Child: Child-Pugh grade; EGV: Esophageal and gastric varices; SPT: Spleen thickness; DPV: Diameter of the portal vein; PBT: Preoperative blood transfusion; PLT1: Platelet count at postoperative day 1; PLT3: Platelet count at postoperative day 3; PPER1: Platelet elevation rate at postoperative day 1; PPER3: Platelet elevation rate at postoperative day 3; MDG: Mean decreased Gini.
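Continuing the sketch from the Methods (the cv_fit and rf_fit objects), the Figure 2 panels correspond to the standard diagnostic plots of these objects:

```r
# Panels A/B: LASSO coefficient traces and cross-validated MSE against
# log(lambda); the dashed line at lambda.1se is the "one MSE from the
# minimum" rule.
plot(cv_fit$glmnet.fit, xvar = "lambda")   # coefficient trace profiles
plot(cv_fit)                               # CV error curve
cv_fit$lambda.1se                          # reported optimum: 0.043

# Panel C: OOB error against number of trees; plot.randomForest draws one
# line for the whole sample and one per outcome class (three lines in all).
plot(rf_fit)

# Panel D: mean decreased Gini (MDG) scores used to rank candidate features
sort(importance(rf_fit)[, "MeanDecreaseGini"], decreasing = TRUE)
```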
Assessment and verification of PPER-based models: The ROC curves of the GLM, LSM, and RFM in the training cohort are shown in Figure 3A; their AUCs were 0.83 (95%CI: 0.79-0.88), 0.84 (95%CI: 0.79-0.88), and 0.84 (95%CI: 0.79-0.88), respectively. All models had excellent calibration ability in the training cohort (Figure 3B). DCA and CIC revealed that all models conferred high clinical net benefits (Figures 3C and 4A-C). Evaluation and validation of the postoperative platelet elevation rate-based models in the training cohort and validation cohort. A and D: Receiver operating characteristic curves of the postoperative platelet elevation rate (PPER)-based models; B and E: Calibration curves of the PPER-based models; C and F: Decision curve analysis of the PPER-based models. GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model; AUC: Area under the receiver operating characteristic curve. Clinical impact curves of the postoperative platelet elevation rate-based models in the training cohort and validation cohort. A and D: Clinical impact curves for the generalized linear model; B and E: Clinical impact curves for the least absolute shrinkage and selection operator model; C and F: Clinical impact curves for the random forest model. In the validation cohort, the ROC curves of all models are presented in Figure 3D; their AUCs were 0.77 (95%CI: 0.69-0.85), 0.83 (95%CI: 0.76-0.90), and 0.78 (95%CI: 0.70-0.85), respectively. All models demonstrated highly satisfactory calibration capability and clinical utility (Figures 3E and F and 4D-F).
Importance of PPER for models: As shown in Figure 5A, the nomogram for the GLM recruited five variables (L, SPT, DPV, PPER1, and PPER3), which happened to be the intersection variables of the GLM, LSM, and RFM (Figure 5B). This suggests that these variables were significant predictors of the occurrence of PVT and contributed substantially to the construction of the models. Moreover, among the variables shared by the GLM, LSM, and RFM, the order of weight from high to low was DPV, PPER1, PPER3, SPT, and L (Figure 5C), which underscores the predictive value of the PPER (PPER1 and PPER3) for PVT in all models. Nomogram for the generalized linear model and weighting of variables. A: Nomogram for the generalized linear model (GLM); B: Intersection variables among the GLM, least absolute shrinkage and selection operator model (LSM), and random forest model (RFM); C: Weights of the intersection variables in the GLM, LSM, and RFM, respectively. SPT: Spleen thickness; L: Lymphocyte count; DPV: Diameter of portal vein; PPER1 and PPER3: Postoperative platelet elevation rate on the first and third days; PVT: Portal vein thrombosis; PLR: Platelet to lymphocyte ratio; NLR: Neutrophil to lymphocyte ratio; PLT: Platelet count; PTA: Prothrombin activity; EGV: Esophageal and gastric varices; AST: Aspartate aminotransaminase.
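A sketch of how a Figure 5A-style nomogram can be drawn. Using the rms package here is our assumption (the text names rms for calibration, not explicitly for the nomogram); train and the variable names follow the earlier sketches.

```r
library(rms)

# rms needs a datadist object describing the predictors' distributions
dd <- datadist(train)
options(datadist = "dd")

glm_lrm <- lrm(PVT ~ L + SPT + DPV + PPER1 + PPER3, data = train)
nom <- nomogram(glm_lrm, fun = plogis, funlabel = "Risk of PVT")
plot(nom)
```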
Comparative analysis of PPER-based models: The performance of the three PPER-based models in predicting PVT in different cohorts is shown in Table 3. The accuracy of the GLM, LSM, and RFM was 76.2%, 77.4%, and 77.4%, respectively, in the overall cohort; 79.6%, 79.0%, and 78.7%, respectively, in the training cohort; and 74.5%, 79.3%, and 76.6%, respectively, in the validation cohort. When other metrics, such as AUC, sensitivity, specificity, positive predictive value, negative predictive value, kappa values, and Brier scores, were comprehensively considered, the LSM and RFM appeared slightly superior to the GLM. Performance of models for portal vein thrombosis risk prediction in different cohorts. AUC: Area under the receiver operating characteristics curve; CI: Confidence interval; PPV: Positive predictive value; NPV: Negative predictive value; GLM: Generalized linear model; LSM: Least absolute shrinkage and selection operator model; RFM: Random forest model.
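Metrics of this kind can be derived from predicted probabilities; a sketch assuming a 0.5 classification threshold (the authors' threshold is not stated) and the p_valid vector from the evaluation sketch:

```r
library(caret)

pred <- factor(as.integer(p_valid > 0.5), levels = c(0, 1))
obs  <- factor(valid$PVT, levels = c(0, 1))

confusionMatrix(pred, obs, positive = "1")     # accuracy, kappa, sensitivity,
                                               # specificity, PPV, NPV
mean((p_valid - as.numeric(as.character(obs)))^2)  # Brier score
```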
DISCUSSION: Undoubtedly, PVT is a lethal complication after splenectomy in cirrhotic patients with PH[12]. Once PVT develops, elevated portal venous pressure, ischemic bowel necrosis, progressive impairment of liver function, and even liver failure may follow, which can eventually be life-threatening[41,42]. Therefore, research on optimizing the early detection of individuals at high risk of PVT after splenectomy is urgently needed. In this study, we successfully constructed PPER-based models for predicting PVT with machine learning, which should be conducive to early identification of the population at high risk of PVT. In the present study, the conventional generalized linear (CGL) method and machine learning approaches (the LASSO and RF) were applied separately to screen out the variables that most strongly affected PVT prediction.
The CGL method is characterized by strong interpretability, especially when multifactorial forward stepwise regression is used, and it has therefore been widely applied as the traditional method for constructing predictive models[43]. However, with the rapid progress of artificial intelligence, novel prediction models based on machine learning have emerged with higher accuracy, leading some clinicians to question the value of CGL models for individualized clinical application[44]. Consistent with this, our results showed that the LSM and RFM performed slightly better than the GLM. Interestingly, the PPER-based models shared five intersecting factors, namely, SPT, DPV, L, PPER1, and PPER3, indicating that these were the main contributors to a higher incidence of PVT. Previous studies found that preoperative SPT and DPV were important predictors of PVT formation after splenectomy in patients with PH[8,45], which is highly consistent with our findings. A plausible explanation is that a wide preoperative DPV and a thick spleen lead to slow portal vein blood flow, which is closely related to postoperative thrombosis[45,46]. In most cases, platelet, erythrocyte, and leukocyte counts rise dramatically within a short time after splenectomy in patients with PH, and the blood becomes hypercoagulable[8]. Accordingly, previous studies suggested that low preoperative platelet and leukocyte counts predispose to postoperative PVT formation[47]. This study revealed that preoperative L was an influential factor for postoperative PVT, which coincides with that view. Of note, the present study employed the PPER to reflect the magnitude of the dynamic change between preoperative and postoperative PLT, and found that the PPER had high predictive value for the risk of postoperative PVT, which had not been addressed previously. Stamou et al[48] reported that the median time to PVT formation after splenectomy in patients with PH was the 6th day (range, 3-11), and Lu et al[8] concluded that 49.19% of patients developed PVT within 7 d after splenectomy; accordingly, ultrasonography has routinely been applied to diagnose PVT on the 7th day after splenectomy[8,19,20]. In this study, by combining the preoperative predictors with the PPER, the models we constructed can effectively discriminate individuals at high risk of PVT as early as the first 3 d after the operation, which is critical for guiding clinicians' treatment strategies. Currently, there is no standard preventive regimen for PVT after splenectomy in cirrhotic patients with PH[49]. Most scholars have recently advocated administering prophylactic anticoagulant therapy earlier postoperatively, which may be more helpful in reducing the incidence of PVT[50,51]. However, it should be chosen cautiously, because the risk of bleeding cannot be avoided in patients with liver cirrhosis[51]. In addition, routinely adopting preventive regimens for all individuals with PH after splenectomy would raise the suspicion of overtreatment. Encouragingly, in the present study, the accuracy of the PPER-based models in predicting PVT approached 80%, allowing individuals at high risk of PVT to be distinguished efficiently, and thus guiding clinicians to take targeted, individualized preventive measures in time. The present study has some limitations.
First, owing to the retrospective nature of the study, selection bias cannot be eliminated. Second, uncommon preoperative factors that may influence PVT formation, such as splenic vein diameter, spleen volume, and portal vein flow velocity[8,19], were not routinely measured in our institution and thus could not be included; however, the SPT and DPV in this study can indirectly reflect these indicators to a certain extent[45,46]. Third, this was a monocentric study. Although the PPER-based models demonstrated excellent performance for predicting PVT, they still lack external validation. Therefore, large-scale prospective multicenter studies are warranted, which would benefit the popularization and application of the PPER-based models. CONCLUSION: PPER1 and PPER3 are effective indicators for postoperative prediction of PVT. We have successfully developed PPER-based practical models for predicting PVT, which could help clinicians identify individuals at high risk of PVT early and efficiently and thus guide timely intervention. According to our experience, patients with a more marked rise in platelet count in the first 3 d after the operation have a higher probability of PVT and should be prioritized for prophylactic anticoagulation.
Background: For patients with portal hypertension (PH), portal vein thrombosis (PVT) is a fatal complication after splenectomy. Postoperative platelet elevation is considered the foremost reason for PVT; however, the value of the postoperative platelet elevation rate (PPER) in predicting PVT has never been studied. Methods: We retrospectively reviewed 483 patients with hepatitis B virus-related PH who underwent splenectomy between July 2011 and September 2018; they were randomized into either a training (n = 338) or a validation (n = 145) cohort. The generalized linear (GL) method, least absolute shrinkage and selection operator (LASSO), and random forest (RF) were used to construct models. Receiver operating characteristic (ROC) curves, calibration curves, decision curve analysis (DCA), and clinical impact curves (CIC) were used to evaluate the robustness and clinical practicability of the GL model (GLM), LASSO model (LSM), and RF model (RFM). Results: Multivariate analysis showed that the PPER on the first and third postoperative days (PPER1, PPER3) was strongly associated with PVT [odds ratio (OR): 1.78, 95% confidence interval (CI): 1.24-2.62, P = 0.002; OR: 1.43, 95%CI: 1.16-1.77, P < 0.001, respectively]. The areas under the ROC curves of the GLM, LSM, and RFM were 0.83 (95%CI: 0.79-0.88), 0.84 (95%CI: 0.79-0.88), and 0.84 (95%CI: 0.79-0.88) in the training cohort, and 0.77 (95%CI: 0.69-0.85), 0.83 (95%CI: 0.76-0.90), and 0.78 (95%CI: 0.70-0.85) in the validation cohort, respectively. The calibration curves showed satisfactory agreement between model predictions and actual observations. DCA and CIC indicated that all models conferred high clinical net benefits. Conclusions: PPER1 and PPER3 are effective indicators for postoperative prediction of PVT. We have successfully developed PPER-based practical models to accurately predict PVT, which can conveniently help clinicians rapidly differentiate individuals at high risk of PVT and thus guide the adoption of timely interventions.
INTRODUCTION: Liver cirrhosis is recognized as an extremely important and rapidly increasing disease burden worldwide[1]. In the progressive stage of liver cirrhosis, the complications caused by portal hypertension (PH), including esophagogastric variceal bleeding and hypersplenism, pose a great threat to patients’ life and health[2,3]. Liver transplantation is currently recommended as a curative treatment for liver cirrhosis combined with PH; however, due to the shortage of donor livers and the high cost of transplantation, its clinical practicability is limited[4,5]. Transjugular intrahepatic portosystemic shunt has been regarded as a promising option for PH, but unfortunately, restenosis and/or hepatic encephalopathy occurs in more than 60% of patients[6,7]. In Asia, splenectomy (alone or combined with devascularization) has been widely adopted as an effective treatment for hypersplenism or esophageal and gastric variceal bleeding caused by PH[8,9]. Portal vein thrombosis (PVT) is often defined as thrombosis within the portal vein trunk or intrahepatic portal branches, with or without involvement of the splenic vein or superior mesenteric vein[10,11]. PVT is considered a dreaded complication after splenectomy for patients with PH[12], with a reported incidence of 4.8%-51.5%[13-15]. In patients with acute PVT, or with PVT extending to the superior mesenteric vein, PVT has been reported to be closely associated with acute liver failure and may influence mortality[16]. Hence, strategies are needed to prevent PVT in patients who undergo splenectomy. In clinical practice, anticoagulation is a critical method for the prevention and treatment of PVT after splenectomy. However, when anticoagulation therapy should be started remains controversial. Early anticoagulation may result in life-threatening bleeding events in patients with liver cirrhosis, and whether anticoagulant therapy should be prescribed to all patients after splenectomy deserves careful consideration. In addition, the majority of patients with PVT are asymptomatic, and only a few experience abdominal discomfort[12]. Therefore, effective diagnostic methods are urgently needed to identify individuals at high risk of PVT after splenectomy early and rapidly, and thus to guide clinicians in taking intervention measures. Color Doppler ultrasonography and/or contrast-enhanced computed tomography (CT) is commonly applied for the final diagnosis of PVT[17]; however, these modalities are of little use for screening out the high-risk individuals who are vulnerable to PVT. Given this, many scholars have investigated the risk factors closely related to the occurrence of PVT after splenectomy[8,18-21]. Several investigators noted that a low preoperative platelet count (PLT) and a high postoperative PLT may be crucial predictors of postoperative PVT risk[19,22]. Generally speaking, patients with PH experience rebounding rises in PLT after splenectomy[23], combined with hemodynamic changes in the portal venous system, and are thus highly prone to developing PVT[24]. However, the effect of the amplitude of these sharp postoperative rises in PLT on PVT has received little attention. We speculated that the postoperative platelet elevation rate (PPER) might be an important predictor of PVT. To the best of our knowledge, there are no reports on the relationship between PPER and PVT.
In recent years, to meet the urgent demand for effective methods to predict PVT after splenectomy, several studies have attempted to construct predictive models for PVT after splenectomy in patients with cirrhosis using multivariate regression analysis[25,26]. However, few clinical variables were included in those analyses, and the accuracy of these prediction models remains unsatisfactory. Therefore, there is an urgent need for an efficient and accurate visualization model. Nowadays, novel machine learning algorithms based on larger sets of clinical features have shown great potential in various aspects of medical research, especially in the construction of predictive models, and the features screened for model construction are clinically interpretable[27-29]. Gao et al[28] constructed four machine learning models based on 53 raw clinical features of coronavirus disease 2019 patients to distinguish individuals at high risk of mortality, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.976. Kawakami et al[29] developed seven supervised machine learning classifiers based on 32 clinical parameters, among which the random forest (RF) model showed the best performance in distinguishing epithelial ovarian cancer from benign ovarian tumors, with an AUC of 0.968. Owing to their higher accuracy, machine learning methods have in many applications surpassed conventional statistical analysis, which may lead to their increasing adoption in medical research[30-32]. Although machine learning algorithms have clear advantages over traditional multivariate analysis for constructing clinical prediction models, so far only Wang et al[33] have tried to construct a prediction model of PVT after splenectomy in cirrhotic patients with PH using machine learning algorithms. Their model greatly improved prediction efficiency compared with traditional models; however, the clinical parameters involved in its construction are extremely complex, which limits its clinical use. Therefore, the purpose of this study was to evaluate the predictive value of PPER for the risk of PVT after splenectomy in patients with PH. In addition, we sought to build simple, efficient, and accurate practical models for predicting PVT with machine learning algorithms, to assist clinicians in the early identification of individuals at high risk of PVT after splenectomy and in taking timely intervention measures. We present this article in accordance with the TRIPOD reporting checklist. CONCLUSION: According to our experience, patients with a more remarkable increase in platelet count in the first 3 d after the operation have a higher probability of PVT and should be prioritized for prophylactic anticoagulation.
Background: For patients with portal hypertension (PH), portal vein thrombosis (PVT) is a fatal complication after splenectomy. Postoperative platelet elevation is considered the foremost reason for PVT. However, the value of the postoperative platelet elevation rate (PPER) in predicting PVT has never been studied. Methods: We retrospectively reviewed 483 patients with PH related to hepatitis B virus who underwent splenectomy between July 2011 and September 2018; they were randomized into either a training (n = 338) or a validation (n = 145) cohort. The generalized linear (GL) method, least absolute shrinkage and selection operator (LASSO), and random forest (RF) were used to construct models. Receiver operating characteristic (ROC) curves, calibration curves, decision curve analysis (DCA), and clinical impact curves (CIC) were used to evaluate the robustness and clinical practicability of the GL model (GLM), LASSO model (LSM), and RF model (RFM). Results: Multivariate analysis showed that PPER on the first and third postoperative days (PPER1, PPER3) was strongly associated with PVT [odds ratio (OR): 1.78, 95% confidence interval (CI): 1.24-2.62, P = 0.002; OR: 1.43, 95%CI: 1.16-1.77, P < 0.001, respectively]. The areas under the ROC curves of the GLM, LSM, and RFM in the training cohort were 0.83 (95%CI: 0.79-0.88), 0.84 (95%CI: 0.79-0.88), and 0.84 (95%CI: 0.79-0.88), respectively, and were 0.77 (95%CI: 0.69-0.85), 0.83 (95%CI: 0.76-0.90), and 0.78 (95%CI: 0.70-0.85) in the validation cohort, respectively. The calibration curves showed satisfactory agreement between model predictions and actual observations. DCA and CIC indicated that all models conferred high clinical net benefit. Conclusions: PPER1 and PPER3 are effective indicators for postoperative prediction of PVT. We have successfully developed PPER-based practical models to accurately predict PVT, which could help clinicians rapidly identify individuals at high risk of PVT and thus guide the adoption of timely interventions.
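The decision curve analysis (DCA) cited in the abstract scores a model by its net benefit across threshold probabilities. A minimal sketch of the standard net-benefit formula is given below; this is the generic DCA calculation, not the authors' own code, and the variable names are illustrative:

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold: float) -> float:
    """Net benefit at threshold probability p_t:
    NB = TP/n - FP/n * p_t / (1 - p_t), with 0 < p_t < 1.
    """
    y_true = np.asarray(y_true)
    pred = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

# Hypothetical usage: sweep thresholds to trace a decision curve.
# curve = [net_benefit(y_va, probs, t) for t in np.arange(0.05, 0.95, 0.05)]
```

Plotting this curve against the "treat all" and "treat none" strategies is what lets DCA show whether a model confers a clinical net benefit, as reported for the GLM, LSM, and RFM.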
12,912
421
[ 1015, 501, 366, 219, 172, 96, 127, 4254, 363, 396, 521, 329, 285, 211, 914, 49 ]
17
[ "pvt", "platelet", "variables", "model", "models", "count", "postoperative", "cohort", "ratio", "ci" ]
[ "ph portal hypertension", "health liver transplantation", "portal vein thrombosis", "transjugular intrahepatic portosystemic", "splenectomy patients cirrhosis" ]
null
[CONTENT] Portal hypertension | Splenectomy | Portal vein thrombosis | Postoperative platelet elevation rate | Practical model | Machine learning [SUMMARY]
[CONTENT] Portal hypertension | Splenectomy | Portal vein thrombosis | Postoperative platelet elevation rate | Practical model | Machine learning [SUMMARY]
null
[CONTENT] Portal hypertension | Splenectomy | Portal vein thrombosis | Postoperative platelet elevation rate | Practical model | Machine learning [SUMMARY]
[CONTENT] Portal hypertension | Splenectomy | Portal vein thrombosis | Postoperative platelet elevation rate | Practical model | Machine learning [SUMMARY]
[CONTENT] Portal hypertension | Splenectomy | Portal vein thrombosis | Postoperative platelet elevation rate | Practical model | Machine learning [SUMMARY]
[CONTENT] Humans | Hypertension, Portal | Liver Cirrhosis | Machine Learning | Portal Vein | Retrospective Studies | Risk Factors | Splenectomy | Venous Thrombosis [SUMMARY]
[CONTENT] Humans | Hypertension, Portal | Liver Cirrhosis | Machine Learning | Portal Vein | Retrospective Studies | Risk Factors | Splenectomy | Venous Thrombosis [SUMMARY]
null
[CONTENT] Humans | Hypertension, Portal | Liver Cirrhosis | Machine Learning | Portal Vein | Retrospective Studies | Risk Factors | Splenectomy | Venous Thrombosis [SUMMARY]
[CONTENT] Humans | Hypertension, Portal | Liver Cirrhosis | Machine Learning | Portal Vein | Retrospective Studies | Risk Factors | Splenectomy | Venous Thrombosis [SUMMARY]
[CONTENT] Humans | Hypertension, Portal | Liver Cirrhosis | Machine Learning | Portal Vein | Retrospective Studies | Risk Factors | Splenectomy | Venous Thrombosis [SUMMARY]
[CONTENT] ph portal hypertension | health liver transplantation | portal vein thrombosis | transjugular intrahepatic portosystemic | splenectomy patients cirrhosis [SUMMARY]
[CONTENT] ph portal hypertension | health liver transplantation | portal vein thrombosis | transjugular intrahepatic portosystemic | splenectomy patients cirrhosis [SUMMARY]
null
[CONTENT] ph portal hypertension | health liver transplantation | portal vein thrombosis | transjugular intrahepatic portosystemic | splenectomy patients cirrhosis [SUMMARY]
[CONTENT] ph portal hypertension | health liver transplantation | portal vein thrombosis | transjugular intrahepatic portosystemic | splenectomy patients cirrhosis [SUMMARY]
[CONTENT] ph portal hypertension | health liver transplantation | portal vein thrombosis | transjugular intrahepatic portosystemic | splenectomy patients cirrhosis [SUMMARY]
[CONTENT] pvt | platelet | variables | model | models | count | postoperative | cohort | ratio | ci [SUMMARY]
[CONTENT] pvt | platelet | variables | model | models | count | postoperative | cohort | ratio | ci [SUMMARY]
null
[CONTENT] pvt | platelet | variables | model | models | count | postoperative | cohort | ratio | ci [SUMMARY]
[CONTENT] pvt | platelet | variables | model | models | count | postoperative | cohort | ratio | ci [SUMMARY]
[CONTENT] pvt | platelet | variables | model | models | count | postoperative | cohort | ratio | ci [SUMMARY]
[CONTENT] pvt | splenectomy | patients | machine learning | learning | machine | pvt splenectomy | ph | liver | cirrhosis [SUMMARY]
[CONTENT] reference interval | reference | interval | variables | blood | correlation | model | test | study | plt [SUMMARY]
null
[CONTENT] pvt | pvt help clinicians | pvt successfully | based practical models predicting | pvt early efficiently | pvt early efficiently guide | pvt successfully developed pper | pvt successfully developed | postoperative prediction pvt successfully | help clinicians [SUMMARY]
[CONTENT] pvt | variables | model | platelet | cohort | reference interval | reference | ci | interval | 95 [SUMMARY]
[CONTENT] pvt | variables | model | platelet | cohort | reference interval | reference | ci | interval | 95 [SUMMARY]
[CONTENT] ||| PVT ||| PVT [SUMMARY]
[CONTENT] 483 | between July 2011 and September 2018 | 338 | 145 ||| GL | LASSO ||| ROC | CIC | GL | GLM | LASSO | LSM | RF [SUMMARY]
null
[CONTENT] PPER1 | PPER3 | PVT ||| clinicians | PVT [SUMMARY]
[CONTENT] ||| PVT ||| PVT ||| 483 | between July 2011 and September 2018 | 338 | 145 ||| GL | LASSO ||| ROC | CIC | GL | GLM | LASSO | LSM | RF ||| ||| the first and third days | PPER1 | PPER3 | PVT ||| 1.78 | 95% | CI | 1.24 | 0.002 | 1.43 | 1.16-1.77 | P < 0.001 ||| ROC | GLM | LSM | 0.83 | 0.79-0.88 | 0.84 | 0.79-0.88 | 0.84 | 0.79-0.88 | 0.77 | 0.69-0.85 | 0.83 | 0.76-0.90 | 0.78 | 0.70-0.85 ||| ||| CIC ||| PPER1 | PPER3 | PVT ||| clinicians | PVT [SUMMARY]
[CONTENT] ||| PVT ||| PVT ||| 483 | between July 2011 and September 2018 | 338 | 145 ||| GL | LASSO ||| ROC | CIC | GL | GLM | LASSO | LSM | RF ||| ||| the first and third days | PPER1 | PPER3 | PVT ||| 1.78 | 95% | CI | 1.24 | 0.002 | 1.43 | 1.16-1.77 | P < 0.001 ||| ROC | GLM | LSM | 0.83 | 0.79-0.88 | 0.84 | 0.79-0.88 | 0.84 | 0.79-0.88 | 0.77 | 0.69-0.85 | 0.83 | 0.76-0.90 | 0.78 | 0.70-0.85 ||| ||| CIC ||| PPER1 | PPER3 | PVT ||| clinicians | PVT [SUMMARY]
Clinical Profile of Neonates Admitted with Sepsis to Neonatal Intensive Care Unit of Jimma Medical Center, A Tertiary Hospital in Ethiopia.
34483605
Globally, over 3 million newborns die each year, one million of these deaths attributed to infections. The objective of this study was to determine the etiologies and clinical characteristics of sepsis in neonates admitted to the intensive care unit of a tertiary hospital in Ethiopia.
BACKGROUND
A longitudinal hospital-based cohort study was conducted from April 1 to October 31, 2018 at the neonatal intensive care unit of Jimma Medical Center, southwest Ethiopia. The diagnosis of sepsis was established using the World Health Organization's case definition. Structured questionnaires and case-specific recording formats were used to capture the relevant data. Venous blood and cerebrospinal fluid were collected from neonates suspected to have sepsis.
METHODS
Out of 304 neonates enrolled in the study, 195 (64.1%) had clinical evidence of sepsis, the majority (84.1%; 164/195) of them having early onset neonatal sepsis. The three most frequent presenting signs and symptoms were fast breathing (62.6%; 122/195), fever (46.7%; 91/195) and altered feeding (39.0%; 76/195). Etiologic agents were detected from the blood culture of 61.2% (115/188) of neonates. Bacterial pathogens accounted for 94.8% (109/115); the rest were fungal etiologies. Coagulase negative staphylococci (25.7%; 28/109), Staphylococcus aureus (22.0%; 24/109) and Klebsiella species (16.5%; 18/109) were the most commonly isolated bacteria.
RESULTS
The majority of the neonates had early onset neonatal sepsis. The major etiologies isolated in our study deviate markedly from the usual organisms causing neonatal sepsis. Multicentre studies and continuous surveillance are essential to tackle the current challenge of reducing neonatal mortality due to sepsis in Ethiopia.
CONCLUSION
[ "Anti-Bacterial Agents", "Cohort Studies", "Ethiopia", "Humans", "Infant, Newborn", "Intensive Care Units, Neonatal", "Sepsis", "Tertiary Care Centers" ]
8365478
Introduction
In 2018, 5.3 million children died before reaching their fifth birthday, of which 2.5 million (47%) died in the first month of life. Sub-Saharan Africa had the highest under-five mortality rate, with a neonatal mortality rate of 28 deaths per 1,000 live births. A significant proportion of neonatal mortality is attributable to neonatal infections (1,2). In Ethiopia, the current neonatal mortality rate is 30 per 1,000 live births, and neonatal infection is one of the top three causes of neonatal mortality (3). Besides increased mortality, neonatal sepsis predisposes to several neuro-developmental complications (4–6). Diagnosis and management of neonatal sepsis are among the greatest challenges health workers face. Among these challenges are the lack of specific signs and symptoms and the unavailability of the necessary laboratory investigations, particularly in developing countries (7,8). This necessitates the initiation of empirical antibiotic therapy until sepsis is either ruled out or confirmed and specific organisms are isolated. The ever-changing patterns of the etiologic agents of sepsis and the dramatically increasing rate of multidrug-resistant organisms pose additional challenges to these empiric treatment regimens, delaying the effective treatment of these infections (8,9). Ethiopia is no exception, and similar challenges and practices are encountered in many of the health facilities in the country. Neonatal sepsis is usually classified into early onset neonatal sepsis (EONS) if it occurs in the first 7 days of life and late onset neonatal sepsis (LONS) if it occurs between 7 and 28 days of life. Huge variations are observed in the etiology of sepsis depending on host and environmental factors. Etiological data from low- and middle-income countries (LMICs) are very limited, even though some studies conducted in these regions have demonstrated the most common causative agents of neonatal sepsis to be S. aureus, Escherichia coli and Klebsiella spp. (10–14). In Ethiopia, only a few studies have been published on neonatal sepsis, which have indicated the common etiologies to be S. aureus, coagulase negative staphylococci (CONS), Klebsiella spp. and E. coli (15,16), necessitating further studies to characterise the clinical findings and etiologies of neonatal sepsis in the local context. Hence, this study was done with the aim of describing the etiology, clinical characteristics and outcome of neonates admitted to Jimma Medical Center (JMC) with neonatal sepsis. The current study is a sub-study of a larger study conducted to determine the magnitude, clinical characteristics, etiologies and antimicrobial susceptibility pattern of the isolates of neonatal sepsis. A separate article on the antimicrobial susceptibility of the isolates will be prepared.
Methods
Study settings: The study was conducted at the neonatal intensive care unit (NICU) of JMC, a tertiary hospital in Southwest Ethiopia. Study design and period: A longitudinal hospital-based cohort study comparing neonates with signs/symptoms of sepsis to those without was conducted from April 1 to October 31, 2018. In addition, in the subgroup of neonates with signs/symptoms of sepsis, we conducted a descriptive microbiological sub-study highlighting the spectrum of infectious agents in these neonates. Selection of study participants: Included in this study were newborns younger than 28 days admitted to the NICU during the study period, enrolled consecutively after their parents or caregivers gave informed consent. After enrolment, the neonates were followed until discharge to assess the discharge outcome. Data collection procedures: Patients were categorized on admission into those with a clinical diagnosis of sepsis and those without. A case-specific recording format was used to capture relevant variables. Both groups of patients were followed according to the protocols of the unit. Neonates without sepsis on admission who developed signs and symptoms of sepsis after admission were later categorized with the sepsis group. The neonates were evaluated and treated by the treating physicians as per the standard protocol at the hospital. Clinical diagnosis of sepsis was made based on the World Health Organization's (WHO) criteria of presence of one or more of the following symptoms: temperature instability (>38°C or <35.5°C), tachypnea (≥60 breaths/minute), poor feeding/inability to feed, respiratory distress, convulsions, and decreased movement or no movement at all (2). Additionally, the presence of risk factors for the development of sepsis was used to support the diagnosis. Clinical data were collected by trained nurses and physicians working in the NICU (the nurses collected the relevant data and the physicians verified the data collected by the nurses); blood samples were collected by the nurses and cerebrospinal fluid (CSF) by the treating physicians. Specimen collection and processing: In the subgroup of neonates suspected to have sepsis, 1–3 ml of venous blood was collected and inoculated into aerobic BACTEC bottles (BACTEC Peds Plus/F medium, Becton Dickinson, USA), which were then incubated in a BACTEC FX40 automated machine. Samples flagged as positive on BACTEC were subcultured onto Columbia 5% sheep blood, MacConkey, and chocolate agar (Oxoid, Basingstoke, United Kingdom). Additionally, 2–3 ml of CSF was collected under aseptic conditions for analysis and culture. Isolation and identification of bacterial pathogens were performed according to standard microbiological techniques (17). In the subgroup of neonates without clinical suspicion of sepsis, no analysis of microbiological specimens was performed. Data processing, analysis and interpretation: Data were entered into EpiData version 3.1 and then exported to and analyzed with SPSS version 20.0. Descriptive statistics such as frequencies and proportions were calculated to describe the data, and results are presented as narrations and using tables and figures. Ethical considerations: Ethical clearance was obtained from the Institutional Review Board of Jimma University Institute of Health in Ethiopia (IHRPGD/274/2018) and the Ethics Committee at the Medical Faculty of Ludwig-Maximilians-Universität of Munich, Germany. Written informed consent was obtained from parents or caretakers.
All study procedures were conducted in accordance with good clinical practice guidelines. All laboratory results were communicated to the treating physicians in a timely manner so that the treatment of the neonates could be adjusted accordingly.
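The WHO case definition used in the methods reduces to a simple any-of check over vital signs and symptoms. A hedged sketch of that check follows; the function and parameter names are hypothetical, chosen only to encode the criteria as stated above:

```python
def clinical_sepsis(temp_c: float, resp_rate: int, poor_feeding: bool,
                    respiratory_distress: bool, convulsions: bool,
                    decreased_movement: bool) -> bool:
    """WHO-style clinical sepsis flag as described in the methods:
    one or more of temperature instability (>38 or <35.5 deg C),
    tachypnea (>=60 breaths/min), poor feeding, respiratory distress,
    convulsions, or decreased/no movement."""
    return (
        temp_c > 38.0 or temp_c < 35.5
        or resp_rate >= 60
        or poor_feeding
        or respiratory_distress
        or convulsions
        or decreased_movement
    )

# Hypothetical example: afebrile neonate breathing at 64/min -> flagged.
assert clinical_sepsis(36.8, 64, False, False, False, False)
```

As the methods note, the presence of risk factors was used to support, not replace, this symptom-based flag.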
Results
Background characteristics of the neonates: A total of 304 neonates, 57.9% (176) of them male, were enrolled in the study. About 63.0% (188) were delivered through spontaneous vaginal delivery. Most of the neonates (258, 86.3%) were younger than 7 days at admission. Only 48.0% (146) had their weight determined at birth, 75 (51.4%) of whom were of low birth weight (<2500 g). Gestational age was determined by the New Ballard Score (NBS) for 65.5% (199/304) of the neonates, 41.7% (83/199) of whom were preterm. One fifth (63/304, 20.7%) of the neonates were resuscitated at birth (Table 1). No significant association was seen between these variables and blood culture results. Background characteristics of neonates admitted to neonatal intensive care unit of JMC New Ballard Score Maternal socio-demographic, obstetric and medical characteristics: The majority of the mothers were between 18 and 35 years of age (284/304, 93.3%) and about half (146/304, 48.0%) were illiterate. Regarding the obstetric characteristics, 44.9% (136/304) of the mothers were primipara; most (291/304, 95.8%) had at least one ANC follow-up visit, and the majority delivered at health facilities (284/304, 94.3%). Only a few of the mothers had associated medical illnesses, with hypertension, human immunodeficiency virus (HIV), congestive heart failure and diabetes mellitus seen in 10 (3.2%), 2 (0.7%), 2 (0.7%) and 1 (0.3%), respectively (Tables 2 and 3). Maternal socio-demographic characteristics in neonates admitted to JMC Maternal obstetric and medical conditions in neonates admitted to JMC SVD: Spontaneous vaginal delivery; CS: Cesarean section; ID: Instrumental delivery Among maternal obstetric conditions, prolonged labor (more than 24 hours) was seen in 13.9% (42/304), whereas prolonged rupture of membranes (more than 18 hours) and meconium-stained amniotic fluid were reported in 11.9% (36/304) and 7.4% (22/304) of the mothers, respectively. Moreover, urinary tract infection (4.5%; 14/304), fever in the last 7 days before delivery (3.6%; 11/304), chorioamnionitis (3.3%; 10/304) and foul-smelling amniotic fluid (3.0%; 9/304) were documented in a small proportion of the mothers (Tables 2 and 3). Clinical profile of neonates admitted with sepsis: Out of the 304 neonates included in this study, 195 (64.1%) had sepsis according to the clinical definition, the majority (84.1%; 164/195) of them having EONS. The most frequently observed signs and symptoms were fast breathing (62.6%; 122/195), fever (46.7%; 91/195), altered feeding (39.0%; 76/195), respiratory distress (33.8%; 66/195) and hypothermia (31.8%; 62/195) (Figure 1). Clinical presentation of neonates admitted with sepsis to NICU of JMC [*Include apnea (7), skin pustules (7), pallor (6), eye discharge (5), abdominal distension (5), umbilical discharge (3), and bulged fontanels (2)] Laboratory investigations: Microbiological investigations were conducted only in the subgroup of neonates with signs of sepsis and included in this work. Blood culture was done for 96.4% (188/195) of the neonates with suspected sepsis. Of these, 61.2% (115/188) were positive; 109 were bacteria whereas 6 were fungi. In two neonates, multiple organisms were detected on blood culture. Coagulase negative staphylococci (CONS) (25.7%; 28/109), S. aureus (22.0%; 24/109) and Klebsiella spp. (16.5%; 18/109) were the three predominant bacteria isolated. Other gram-positive bacteria isolated were Micrococcus spp.
(3/109, 2.8%), Group B streptococcus (3/109, 2.8%) and Listeria monocytogenes (1/109, 0.9%), whereas the other gram-negative bacteria isolated were E. coli (2/109, 1.8%), Enterobacter spp. (2/109, 1.8%), Providencia spp. (2/109, 1.8%), Proteus mirabilis (1/109, 0.9%) and Serratia spp. (1/109, 0.9%). Lumbar puncture and CSF culture were performed for 68.2% (133/195) and 72.9% (97/133) of neonates with sepsis, respectively. Pleocytosis (white cell count of ≥15 cells/mm3) was detected in 8.3% (11/133), whereas culture was positive in only 4.1% (4/97). No microorganism was detected on CSF gram stain (microscopy). The organisms isolated from the CSF cultures were Citrobacter spp. (1), K. pneumoniae (2) and Acinetobacter spp. (1). Treatment given to those with sepsis: The majority of the newborns with sepsis, 90% (171/190), received a combination of ampicillin and gentamicin, whereas 16/190 (8.4%) received a combination of ceftriaxone and gentamicin. Only a few received a combination of ceftazidime and vancomycin (3/190, 1.6%). Clinical outcome of the neonates: Overall, the majority of the neonates were discharged with improvement (233, 76.9%), whereas 43 (14.2%) died in the hospital. Of the 43 deaths, 19 (44.2%) had sepsis, and 14 of these 19 deaths with sepsis (73.7%) had positive blood culture results. This gives a death rate of 9.7% (19/195) among those with clinical sepsis and 12.2% (14/115) among those with culture-confirmed sepsis. The death rate among neonates with gram-negative isolates (10/52, 19.2%) was almost 4 times that among neonates with gram-positive isolates (3/59, 5.1%) (Figure 2). Discharge status of neonates admitted to NICU of JMC LAMA – leave against medical advice
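The mortality figures in the results can be reproduced directly from the reported counts. A quick arithmetic check, using only the numbers stated above:

```python
# Death rates recomputed from the counts reported in the results.
rates = {
    "clinical sepsis":        19 / 195,  # -> 9.7%
    "culture-confirmed":      14 / 115,  # -> 12.2%
    "gram-negative isolates": 10 / 52,   # -> 19.2%
    "gram-positive isolates": 3 / 59,    # -> 5.1%
}
for label, rate in rates.items():
    print(f"{label}: {rate:.1%}")

# Ratio of gram-negative to gram-positive death rates: about 3.8x,
# consistent with "almost 4 times" in the text.
print(round((10 / 52) / (3 / 59), 1))  # 3.8
```

All four recomputed percentages match the figures quoted in the text.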
null
null
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion" ]
[ "In 2018, 5.3 million children died before reaching their fifth birthday of which 2.5 million (47%) died in the first month of life. Sub-Saharan Africa had the highest under five mortality rate, with a neonatal mortality rate of 28 deaths per 1,000 live births. A significant proportion of neonatal mortality is attributable to neonatal infections (1,2). In Ethiopia, the current neonatal mortality rate is 30 per 1000 live births and neonatal infections is one of the top three causes of neonatal mortality (3). Besides increased mortality, neonatal sepsis predisposes to several neuro-developmental complications (4–6).\nDiagnosis and management of neonatal sepsis are among the greatest challenges health workers face. Some of the challenges are lack of specific signs and symptoms and unavailability of the necessary laboratory investigations, particularly in developing countries (7,8). This necessitates the initiation of empirical antibiotic therapy till sepsis is either ruled out or confirmed and until specific organisms are isolated. The ever changing patterns of the etiologic agents of sepsis and the dramatically increasing rate of multidrug resistant organisms are also additional challenges on the use of these empiric treatment regimens, delaying the effective treatment of these infections (8,9). Ethiopia is not an exception to these and similar challenges and practices are encountered in many of the health facilities in the country.\nNeonatal sepsis is usually classified into early onset neonatal sepsis (EONS) if it occurs in the first 7 days of life and late onset neonatal sepsis (LONS) if it occurs between 7 and 28 days of life. Huge variations are observed in the etiology of sepsis depending on host and environmental factors. Etiological data from low and middle- income countries (LMICs), are very limited, even if some studies conducted in these regions have demonstrated the most common causative agents of neonatal sepsis to be S. aureus, Escherichia coli and Klebsiella spp. (10–14).\nIn Ethiopia, only few studies have been published on neonatal sepsis which have indicated the common etiologies to be S. aureus, Coagulase negative staphylococci (CONS), Klebsiella spp and E. Coli (15,16), necessitating further studies to be conducted to characterise the clinical findings and etiologies of neonatal sepsis in the local context. Hence, this study was done with the aim of describing the etiology, clinical characteristics and outcome of neonates admitted to Jimma Medical Center (JMC) with neonatal sepsis. The current study is a sub-study of a large study conducted to determine the magnitude, clinical characteristics, etiologies and antimicrobial susceptibility pattern of the isolates of neonatal sepsis. A separate article on the antimicrobial susceptibility of the isolates will be prepared", "Study settings: The study was conducted at the neonatal intensive care unit (NICU) of JMC, a tertiary hospital in Southwest Ethiopia.\nStudy design and period: A longitudinal hospital based cohort study comparing neonates with signs/symptoms of sepsis to those without signs/symptoms of sepsis was conducted from April 1 to October 31, 2018. 
In addition, in the subgroup of neonates with signs/symptoms of sepsis, we conducted a descriptive microbiological sub-study that is highlighting the spectrum of infectious agents in neonates with signs/symptoms of sepsis.\nSelection of study participants: Included in this study were newborns younger than 28 days admitted to the NICU during the study period, enrolled consecutively after their parents or care givers gave informed consent. After enrolment, the neonates were followed until discharge to assess the discharge outcome.\nData collection procedures: Patients were categorized on admission into those with clinical diagnosis of sepsis and those without. Case specific recording format was used to capture relevant variables. Both groups of patients were followed according to the protocols of the unit. Neonates with no sepsis on admission who developed signs and symptoms of sepsis after admission were later categorized with the group with sepsis.\nThe neonates were evaluated and treated by the treating physicians as per the standard protocol at the hospital. Clinical diagnosis of sepsis was made based on the World Health Organization's (WHO) criteria of presence of one or more of the following symptoms: temperature instability (>38 or <35.5°C), tachypnea (≥60breaths/minute), poor feeding/unable to feed, respiratory distress, convulsions, decreased movement or no movement at all (2). Additionally, the presence of risk factors for the development of sepsis was also used to support the diagnosis. Clinical data were collected by trained nurses and physicians working in the NICU (the nurses collected the relevant data and the physicians verified the data collected by the nurses); blood samples were collected by the nurses and cerebrospinal fluid (CSF) by the treating physicians.\nSpecimen collection and processing: In the subgroup of neonates suspected to have sepsis, 1–3ml venous blood was collected and inoculated into aerobic BACTEC bottles (BACTEC Peds Plus/F medium, Becton Dickinson, USA) which were then incubated in BACTEC FX40 automated machine. Samples flagged as positive on BACTEC were subcultured onto Columbia 5% sheep blood, MacConkey, and Chocolate agar (Oxoid, Basingstoke, United Kingdom). Additionally, 2–3ml of CSF was collected under aseptic conditions for analysis and culture. Isolation and identification of bacterial pathogens was performed according to standard microbiological techniques (17). In the subgroup of neonates without clinical suspicion of sepsis, no analysis of microbiological specimen was performed.\nData processing, analysis and interpretation: Data was entered into epidata version 3.1 and then exported to and analyzed with SPSS version 20.0. Descriptive statistics like frequency and proportion was carried out to describe the data and results are presented as narrations and using tables and figures.\nEthical considerations: Ethical clearance was obtained from Institutional Review Board of Jimma University Institute of Health in Ethiopia (IHRPGD/274/2018) and The Ethics Committee at the Medical Faculty of Ludwig-Maximilians-Universität of Munich, Germany. Written informed consent was obtained from parents or care takers. All study procedures were conducted as per the guidelines of good clinical practices. 
All laboratory results were timely communicated with the treating physicians so that the treatment of the neonates could be adjusted accordingly.", "Background characteristics of the neonates: A total of 304 neonates, 57.9% (176) being male, were enrolled in the study. About 63.0% (188) of them were delivered through spontaneous vaginal delivery. Most of the neonates (258, 86.3%) were younger than 7 days at admission. Only 48.0% (146) of them had their weight determined at birth, 75 (51.4%) being low birth weight (< 2500 g). Gestational age was determined by the New Ballard Score (NBS) for 65.5% (199/304) of the neonates, 41.7% (83/199) being preterm. One fifth (63/304, 20.7%) of the neonates were resuscitated at birth (Table 1). There was no any significant association seen between these variables and blood culture result.\nBackground characteristics of neonates admitted to neonatal intensive care unit of JMC\nNew Ballard Score\nMaternal socio-demographic, obstetric and medical characteristics: The majority of the mothers were between 18 and 35 years (284/304, 93.3%) and about half (146/304, 48.0%) were illiterate. Regarding the obstetric characteristics, 44.9% (136/304) of the mothers were primipara; most (291/304, 95.8%) had at least one ANC follow up visit, and the majority delivered at health facilities (284/304, 94.3%). Only very few of the mothers had associated medical illnesses with hypertension, human immunodeficiency virus (HIV), congestive heart failure and diabetes mellitus seen in 10 (3.2%), 2 (0.7%), 2 (0.7%) and 1 (0.3%) respectively (Table 2 and 3).\nMaternal socio-demographic characteristics in neonates admitted to JMC\nMaternal obstetric and medical conditions in neonates admitted to JMC\nSVD: Spontaneous vaginal delivery; CS: Cesarean section; ID: Instrumental delivery\nAmong maternal obstetric conditions, prolonged labor (more than 24hours) was seen in 13.9% (42/304) whereas prolonged rupture of membrane (more than 18hours) and meconium stained amniotic fluid were reported in 11.9% (36/304) and 7.4% (22/304) of the mothers respectively. Moreover, urinary tract infection (4.5%; 14/304), fever in the last 7 days before delivery (3.6%; 11/304), chorioamnionitis (3.3%; 10/304) and foul-smelling amniotic fluid (3.0%; 9/304) were documented in small proportion of the mothers (Tables 2 and 3).\nClinical profile of neonates admitted with sepsis: Out of the 304 neonates included in this study, 195 (64.1%) had sepsis according to the clinical definition, majority (84.1%; 164/195) of them having EONS. The most frequently observed signs and symptoms were fast breathing (62.6%; 122/195), fever (46.7%; 91/195), altered feeding (39.0%, 76/195), respiratory distress (33.8%; 66/195) and hypothermia (31.8%; 62/195) (Figure 1).\nClinical presentation of neonates admitted with sepsis to NICU of JMC\n[*Include apnea (7), skin pustules (7), pallor (6), eye discharge (5), abdominal distension (5), umbilical discharge (3), and bulged fontanels (2)]\nLaboratory investigations: Microbiological investigations only in the subgroup of neonates with signs of sepsis were conducted and included in this work. Blood culture was done for 96.4% (188/195) of the neonates with suspected sepsis. Of these, 61.2% (115/188) were positive; 109 were bacteria whereas 6 were fungi. In two neonates, multiple organisms were detected on blood culture. Coagulase Negative Staphylococcus (CONS) (25.7%; 28/109), S. aureus (22.0%; 24/109) and Klebsiella spp. (16.5%; 18/109) were the three predominant bacteria isolated. 
Other gram positive bacteria isolated were micrococcus spp. (3/109, 2.8%), Group B streptococcus (3/109, 2.8%) and Listeria monocytogenes (1/109, 0.9%) whereas the other gram negative bacteria isolated were E. coli (2/109, 1.8%), Enterobacter spp. (2/109, 1.8%), Providentia spp. (2/109, 1.8%), Proteus mirabilis (1/109, 0.9%) and Serratia spp. (1/109, 0.9%).\nLumbar puncture and CSF culture were performed for 68.2% (133/195) and 72.9% (97/133) of neonates with sepsis respectively. Pleocytosis (white cell count of ≥15cells/mm3) was detected in 8.3% (11/133) whereas culture was positive in only 4.1% (4/97). No microorganism was detected on CSF gram stain (microscopy). The organisms isolated from the CSF cultures were Citrobacter spp. (1), K. pneumoniae (2) and Acinetobacter spp. (1).\nTreatment given to those with sepsis: Majority of the newborns with sepsis, 90% (171/190), received combination of ampicillin and gentamicin whereas 16/190 (8.4%) received combination of ceftriaxone and gentamicin. Only few of them received combination of ceftazidime and vancomycin (3/190, 1.6%).\nClinical outcome of the neonates: Overall, the majority of the neonates were discharged with improvement (233, 76.9%), whereas 43 (14.2%) died in the hospital. Out of the 43 deaths, 19 (44.2%) had sepsis and 14 of these 19 deaths with sepsis (73.7%) had positive blood culture results. This gives a death rate among those with clinical sepsis 9.7% (19/195) whereas the death rate among the culture confirmed ones is 12.2% (14/115). The death rate among the gram negative isolates is almost 4 times (10/52, 19.2%) higher than that seen in neonates with gram-positive isolates (3/59, 5.1%) (Figure 2).\nDischarge status of neonates admitted to NICU of JMC\nLAMA – leave against medical advice", "Almost two thirds (195/304, 64.1%) of the neonates enrolled into this study had clinical sepsis. EONS was 4 times more common than LONS which is similar with several other studies in which EONS was at least 3 times as common as LONS (10,13,15).\nThe most frequent clinical manifestations seen in our study were similar to findings of a study done in Northern Ethiopia, in which fever, irregular respirations and tachypnea were the three commonest symptoms of illness (18). Similarly, other studies done in other countries have reported respiratory distress, poor feeding and fever as the most frequent symptoms of illness (19–21).\nThe three most frequently seen obstetric conditions in the neonates with sepsis in this study were prolonged labor, premature rupture of membrane and meconium stained amniotic fluid which is similar to a study done in Southern Ethiopia (22). The frequency of seeing the well known risk factors for neonatal sepsis like maternal urinary tract infection (UTI), fever during pregnancy and short before delivery, chorioamnionitis, and foul smelling amniotic fluid among those with sepsis was very low, each accounting for only <5.0%. This is similar to a study done in Northern Ethiopia where only 4% of the mothers had UTI (18) and a study from Southern Ethiopia where only <5% of the mothers had UTI, chorioamnionitis and foul smelling liquor (22). But it is in contrast to other studies done in Ethiopia which have reported higher rate of maternal fever, UTI/sexually transmitted infections (STIs) and premature rupture of membrane (15,18,23). 
This difference might be explained by the difference in the study design used, the timing of the studies and the background characteristics of the patient cohort.\nMost of the newborns in this study were born in health facilities. A higher proportion of those born in a health facility were diagnosed with sepsis compared to those delivered at home. This is also reported from other studies (18,22). Among the neonates with sepsis, 30 (15.4%) were resuscitated at birth, which is lower than a report from Mekelle, Northern Ethiopia, in which 44.9% of neonates with sepsis had been resuscitated (23) but higher than what has been reported from Felege Hiwot Hospital where only 3.6% of the neonates with sepsis had been resuscitated (18). This might give some clue about the possible additional risk factors of neonatal sepsis, given the fact that most of our health facilities are deprived of the basic infection prevention and patient safety practices and the fact that the predominant organisms in our study are common floras of the health facilities.\nMoreover, this finding may also reflect that diagnosis of neonatal sepsis in newborns born at home or low level health facilities are likely to be missed or delayed and only those born at hospitals are detected and treated. As the rate of home delivery remains very high in Ethiopia (24), findings reported in our study and other hospital based studies in Ethiopia may not reflect the real burden of neonatal morbidities in general and neonatal sepsis in particular. This may be one reason why neonatal mortality in LMICs remains very high.\nThe three predominant bacteria isolated in the current study were CONS, S. aureus, and Klebsiella spp. which are entirely different from the usual etiologies of neonatal sepsis (Listeria monocytogenes, Group B streptococcus and gram negatives like E. coli). This finding is similar to a study done in North-western Ethiopia where S. aureus, CONS and K. pneumoniae were the predominant organisms (15). The result is also in line with other literatures which indicate Klebsiella spp. S. aureus and CONS to be responsible for neonatal sepsis in developing countries (25). Hence, large, facility and community based studies are required to see the etiologies of neonatal sepsis in order to make the necessary modifications of antimicrobial treatment.\nAs we tried to highlight above, our study which is limited to a university hospital may not reflect the real burden of neonatal sepsis in Ethiopia because of the fact that most mothers still give birth at home and most institutional deliveries happen at low level healthcare facilities. The other limitation in our study is that we only focused on bacterial etiologies whereas neonatal sepsis could be due to other etiologies including viruses. Furthermore, as we collected only one blood culture, the possibilities of contamination cannot be ruled out. Finally, some of the neonates might have received antimicrobials prior to collection of blood for microbiologic testing which might have affected the blood culture positivity rate and hence, probably the type of microorganisms responsible for sepsis in the study site. Nevertheless, we believe that our finding, along with similar recent studies in the country, may help improve empiric management of neonates with clinical suspicion of sepsis.\nIn this study, we have demonstrated that majority of the neonates had EONS and were born in a health facility, the latter of which is thought to reduce the burden of neonatal infections. 
Additionally, we have shown that the bacterial etiologies isolated in our study, predominantly CONS, S. aureus and Klebsiella spp, differ from reports from both high and low income settings. The fact that these organisms are predominantly nosocomial in origin and that many of the neonates were born in health facilities (the potential sources of these organisms) highlights the importance of infection prevention and control practices of the health facilities, particularly in the labor and delivery units, including the process of neonatal resuscitation. Additionally, continuous surveillance of the etiologies of neonatal sepsis and possible revision of the empiric antimicrobial treatment of neonatal sepsis targeting these commonest etiologies is recommended." ]
[ "intro", "methods", "results", "discussion" ]
[ "Neonatal mortality", "neonatal sepsis", "isolates", "blood culture", "Ethiopia" ]
Introduction: In 2018, 5.3 million children died before reaching their fifth birthday of which 2.5 million (47%) died in the first month of life. Sub-Saharan Africa had the highest under five mortality rate, with a neonatal mortality rate of 28 deaths per 1,000 live births. A significant proportion of neonatal mortality is attributable to neonatal infections (1,2). In Ethiopia, the current neonatal mortality rate is 30 per 1000 live births and neonatal infections is one of the top three causes of neonatal mortality (3). Besides increased mortality, neonatal sepsis predisposes to several neuro-developmental complications (4–6). Diagnosis and management of neonatal sepsis are among the greatest challenges health workers face. Some of the challenges are lack of specific signs and symptoms and unavailability of the necessary laboratory investigations, particularly in developing countries (7,8). This necessitates the initiation of empirical antibiotic therapy till sepsis is either ruled out or confirmed and until specific organisms are isolated. The ever changing patterns of the etiologic agents of sepsis and the dramatically increasing rate of multidrug resistant organisms are also additional challenges on the use of these empiric treatment regimens, delaying the effective treatment of these infections (8,9). Ethiopia is not an exception to these and similar challenges and practices are encountered in many of the health facilities in the country. Neonatal sepsis is usually classified into early onset neonatal sepsis (EONS) if it occurs in the first 7 days of life and late onset neonatal sepsis (LONS) if it occurs between 7 and 28 days of life. Huge variations are observed in the etiology of sepsis depending on host and environmental factors. Etiological data from low and middle- income countries (LMICs), are very limited, even if some studies conducted in these regions have demonstrated the most common causative agents of neonatal sepsis to be S. aureus, Escherichia coli and Klebsiella spp. (10–14). In Ethiopia, only few studies have been published on neonatal sepsis which have indicated the common etiologies to be S. aureus, Coagulase negative staphylococci (CONS), Klebsiella spp and E. Coli (15,16), necessitating further studies to be conducted to characterise the clinical findings and etiologies of neonatal sepsis in the local context. Hence, this study was done with the aim of describing the etiology, clinical characteristics and outcome of neonates admitted to Jimma Medical Center (JMC) with neonatal sepsis. The current study is a sub-study of a large study conducted to determine the magnitude, clinical characteristics, etiologies and antimicrobial susceptibility pattern of the isolates of neonatal sepsis. A separate article on the antimicrobial susceptibility of the isolates will be prepared Methods: Study settings: The study was conducted at the neonatal intensive care unit (NICU) of JMC, a tertiary hospital in Southwest Ethiopia. Study design and period: A longitudinal hospital based cohort study comparing neonates with signs/symptoms of sepsis to those without signs/symptoms of sepsis was conducted from April 1 to October 31, 2018. In addition, in the subgroup of neonates with signs/symptoms of sepsis, we conducted a descriptive microbiological sub-study that is highlighting the spectrum of infectious agents in neonates with signs/symptoms of sepsis. 
Selection of study participants: Included in this study were newborns younger than 28 days admitted to the NICU during the study period, enrolled consecutively after their parents or care givers gave informed consent. After enrolment, the neonates were followed until discharge to assess the discharge outcome. Data collection procedures: Patients were categorized on admission into those with clinical diagnosis of sepsis and those without. Case specific recording format was used to capture relevant variables. Both groups of patients were followed according to the protocols of the unit. Neonates with no sepsis on admission who developed signs and symptoms of sepsis after admission were later categorized with the group with sepsis. The neonates were evaluated and treated by the treating physicians as per the standard protocol at the hospital. Clinical diagnosis of sepsis was made based on the World Health Organization's (WHO) criteria of presence of one or more of the following symptoms: temperature instability (>38 or <35.5°C), tachypnea (≥60breaths/minute), poor feeding/unable to feed, respiratory distress, convulsions, decreased movement or no movement at all (2). Additionally, the presence of risk factors for the development of sepsis was also used to support the diagnosis. Clinical data were collected by trained nurses and physicians working in the NICU (the nurses collected the relevant data and the physicians verified the data collected by the nurses); blood samples were collected by the nurses and cerebrospinal fluid (CSF) by the treating physicians. Specimen collection and processing: In the subgroup of neonates suspected to have sepsis, 1–3ml venous blood was collected and inoculated into aerobic BACTEC bottles (BACTEC Peds Plus/F medium, Becton Dickinson, USA) which were then incubated in BACTEC FX40 automated machine. Samples flagged as positive on BACTEC were subcultured onto Columbia 5% sheep blood, MacConkey, and Chocolate agar (Oxoid, Basingstoke, United Kingdom). Additionally, 2–3ml of CSF was collected under aseptic conditions for analysis and culture. Isolation and identification of bacterial pathogens was performed according to standard microbiological techniques (17). In the subgroup of neonates without clinical suspicion of sepsis, no analysis of microbiological specimen was performed. Data processing, analysis and interpretation: Data was entered into epidata version 3.1 and then exported to and analyzed with SPSS version 20.0. Descriptive statistics like frequency and proportion was carried out to describe the data and results are presented as narrations and using tables and figures. Ethical considerations: Ethical clearance was obtained from Institutional Review Board of Jimma University Institute of Health in Ethiopia (IHRPGD/274/2018) and The Ethics Committee at the Medical Faculty of Ludwig-Maximilians-Universität of Munich, Germany. Written informed consent was obtained from parents or care takers. All study procedures were conducted as per the guidelines of good clinical practices. All laboratory results were timely communicated with the treating physicians so that the treatment of the neonates could be adjusted accordingly. Results: Background characteristics of the neonates: A total of 304 neonates, 57.9% (176) being male, were enrolled in the study. About 63.0% (188) of them were delivered through spontaneous vaginal delivery. Most of the neonates (258, 86.3%) were younger than 7 days at admission. 
Only 48.0% (146) of them had their weight determined at birth, 75 (51.4%) being low birth weight (< 2500 g). Gestational age was determined by the New Ballard Score (NBS) for 65.5% (199/304) of the neonates, 41.7% (83/199) being preterm. One fifth (63/304, 20.7%) of the neonates were resuscitated at birth (Table 1). There was no any significant association seen between these variables and blood culture result. Background characteristics of neonates admitted to neonatal intensive care unit of JMC New Ballard Score Maternal socio-demographic, obstetric and medical characteristics: The majority of the mothers were between 18 and 35 years (284/304, 93.3%) and about half (146/304, 48.0%) were illiterate. Regarding the obstetric characteristics, 44.9% (136/304) of the mothers were primipara; most (291/304, 95.8%) had at least one ANC follow up visit, and the majority delivered at health facilities (284/304, 94.3%). Only very few of the mothers had associated medical illnesses with hypertension, human immunodeficiency virus (HIV), congestive heart failure and diabetes mellitus seen in 10 (3.2%), 2 (0.7%), 2 (0.7%) and 1 (0.3%) respectively (Table 2 and 3). Maternal socio-demographic characteristics in neonates admitted to JMC Maternal obstetric and medical conditions in neonates admitted to JMC SVD: Spontaneous vaginal delivery; CS: Cesarean section; ID: Instrumental delivery Among maternal obstetric conditions, prolonged labor (more than 24hours) was seen in 13.9% (42/304) whereas prolonged rupture of membrane (more than 18hours) and meconium stained amniotic fluid were reported in 11.9% (36/304) and 7.4% (22/304) of the mothers respectively. Moreover, urinary tract infection (4.5%; 14/304), fever in the last 7 days before delivery (3.6%; 11/304), chorioamnionitis (3.3%; 10/304) and foul-smelling amniotic fluid (3.0%; 9/304) were documented in small proportion of the mothers (Tables 2 and 3). Clinical profile of neonates admitted with sepsis: Out of the 304 neonates included in this study, 195 (64.1%) had sepsis according to the clinical definition, majority (84.1%; 164/195) of them having EONS. The most frequently observed signs and symptoms were fast breathing (62.6%; 122/195), fever (46.7%; 91/195), altered feeding (39.0%, 76/195), respiratory distress (33.8%; 66/195) and hypothermia (31.8%; 62/195) (Figure 1). Clinical presentation of neonates admitted with sepsis to NICU of JMC [*Include apnea (7), skin pustules (7), pallor (6), eye discharge (5), abdominal distension (5), umbilical discharge (3), and bulged fontanels (2)] Laboratory investigations: Microbiological investigations only in the subgroup of neonates with signs of sepsis were conducted and included in this work. Blood culture was done for 96.4% (188/195) of the neonates with suspected sepsis. Of these, 61.2% (115/188) were positive; 109 were bacteria whereas 6 were fungi. In two neonates, multiple organisms were detected on blood culture. Coagulase Negative Staphylococcus (CONS) (25.7%; 28/109), S. aureus (22.0%; 24/109) and Klebsiella spp. (16.5%; 18/109) were the three predominant bacteria isolated. Other gram positive bacteria isolated were micrococcus spp. (3/109, 2.8%), Group B streptococcus (3/109, 2.8%) and Listeria monocytogenes (1/109, 0.9%) whereas the other gram negative bacteria isolated were E. coli (2/109, 1.8%), Enterobacter spp. (2/109, 1.8%), Providentia spp. (2/109, 1.8%), Proteus mirabilis (1/109, 0.9%) and Serratia spp. (1/109, 0.9%). 
Lumbar puncture and CSF culture were performed for 68.2% (133/195) and 72.9% (97/133) of neonates with sepsis respectively. Pleocytosis (white cell count of ≥15cells/mm3) was detected in 8.3% (11/133) whereas culture was positive in only 4.1% (4/97). No microorganism was detected on CSF gram stain (microscopy). The organisms isolated from the CSF cultures were Citrobacter spp. (1), K. pneumoniae (2) and Acinetobacter spp. (1). Treatment given to those with sepsis: Majority of the newborns with sepsis, 90% (171/190), received combination of ampicillin and gentamicin whereas 16/190 (8.4%) received combination of ceftriaxone and gentamicin. Only few of them received combination of ceftazidime and vancomycin (3/190, 1.6%). Clinical outcome of the neonates: Overall, the majority of the neonates were discharged with improvement (233, 76.9%), whereas 43 (14.2%) died in the hospital. Out of the 43 deaths, 19 (44.2%) had sepsis and 14 of these 19 deaths with sepsis (73.7%) had positive blood culture results. This gives a death rate among those with clinical sepsis 9.7% (19/195) whereas the death rate among the culture confirmed ones is 12.2% (14/115). The death rate among the gram negative isolates is almost 4 times (10/52, 19.2%) higher than that seen in neonates with gram-positive isolates (3/59, 5.1%) (Figure 2). Discharge status of neonates admitted to NICU of JMC LAMA – leave against medical advice Discussion: Almost two thirds (195/304, 64.1%) of the neonates enrolled into this study had clinical sepsis. EONS was 4 times more common than LONS which is similar with several other studies in which EONS was at least 3 times as common as LONS (10,13,15). The most frequent clinical manifestations seen in our study were similar to findings of a study done in Northern Ethiopia, in which fever, irregular respirations and tachypnea were the three commonest symptoms of illness (18). Similarly, other studies done in other countries have reported respiratory distress, poor feeding and fever as the most frequent symptoms of illness (19–21). The three most frequently seen obstetric conditions in the neonates with sepsis in this study were prolonged labor, premature rupture of membrane and meconium stained amniotic fluid which is similar to a study done in Southern Ethiopia (22). The frequency of seeing the well known risk factors for neonatal sepsis like maternal urinary tract infection (UTI), fever during pregnancy and short before delivery, chorioamnionitis, and foul smelling amniotic fluid among those with sepsis was very low, each accounting for only <5.0%. This is similar to a study done in Northern Ethiopia where only 4% of the mothers had UTI (18) and a study from Southern Ethiopia where only <5% of the mothers had UTI, chorioamnionitis and foul smelling liquor (22). But it is in contrast to other studies done in Ethiopia which have reported higher rate of maternal fever, UTI/sexually transmitted infections (STIs) and premature rupture of membrane (15,18,23). This difference might be explained by the difference in the study design used, the timing of the studies and the background characteristics of the patient cohort. Most of the newborns in this study were born in health facilities. A higher proportion of those born in a health facility were diagnosed with sepsis compared to those delivered at home. This is also reported from other studies (18,22). 
Among the neonates with sepsis, 30 (15.4%) were resuscitated at birth, which is lower than a report from Mekelle, Northern Ethiopia, in which 44.9% of neonates with sepsis had been resuscitated (23), but higher than what has been reported from Felege Hiwot Hospital, where only 3.6% of the neonates with sepsis had been resuscitated (18). This might give some clue about possible additional risk factors for neonatal sepsis, given that most of our health facilities lack basic infection prevention and patient safety practices and that the predominant organisms in our study are common flora of health facilities. Moreover, this finding may also reflect that the diagnosis of neonatal sepsis in newborns born at home or in low-level health facilities is likely to be missed or delayed, and that only those born at hospitals are detected and treated. As the rate of home delivery remains very high in Ethiopia (24), the findings reported in our study and other hospital-based studies in Ethiopia may not reflect the real burden of neonatal morbidities in general and neonatal sepsis in particular. This may be one reason why neonatal mortality in LMICs remains very high. The three predominant bacteria isolated in the current study were CONS, S. aureus, and Klebsiella spp., which are entirely different from the usual etiologies of neonatal sepsis (Listeria monocytogenes, Group B streptococcus and gram-negatives such as E. coli). This finding is similar to a study done in North-western Ethiopia where S. aureus, CONS and K. pneumoniae were the predominant organisms (15). The result is also in line with other literature indicating that Klebsiella spp., S. aureus and CONS are responsible for neonatal sepsis in developing countries (25). Hence, large facility- and community-based studies are required to identify the etiologies of neonatal sepsis in order to make the necessary modifications to antimicrobial treatment. As we tried to highlight above, our study, which is limited to a university hospital, may not reflect the real burden of neonatal sepsis in Ethiopia because most mothers still give birth at home and most institutional deliveries happen at low-level healthcare facilities. Another limitation of our study is that we focused only on bacterial etiologies, whereas neonatal sepsis could be due to other etiologies, including viruses. Furthermore, as we collected only one blood culture, the possibility of contamination cannot be ruled out. Finally, some of the neonates might have received antimicrobials prior to collection of blood for microbiologic testing, which might have affected the blood culture positivity rate and hence, probably, the type of microorganisms identified as responsible for sepsis in the study site. Nevertheless, we believe that our findings, along with those of similar recent studies in the country, may help improve the empiric management of neonates with clinical suspicion of sepsis. In this study, we have demonstrated that the majority of the neonates had EONS and were born in a health facility, the latter of which is thought to reduce the burden of neonatal infections. Additionally, we have shown that the bacterial etiologies isolated in our study, predominantly CONS, S. aureus and Klebsiella spp., differ from reports from both high- and low-income settings. 
The fact that these organisms are predominantly nosocomial in origin and that many of the neonates were born in health facilities (the potential sources of these organisms) highlights the importance of infection prevention and control practices in health facilities, particularly in labor and delivery units, including during neonatal resuscitation. Additionally, continuous surveillance of the etiologies of neonatal sepsis, and possible revision of the empiric antimicrobial treatment of neonatal sepsis to target the commonest etiologies, are recommended.
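As a quick arithmetic check, the case-fatality figures reported above follow directly from the stated counts; the short plain-Python sketch below simply recomputes them (all counts are taken from the results, while the variable names are ours):

    # Case-fatality rates recomputed from the counts reported above
    deaths_clinical, clinical_sepsis = 19, 195      # deaths among neonates with clinical sepsis
    deaths_confirmed, culture_confirmed = 14, 115   # deaths among culture-confirmed cases
    print(f"clinical sepsis: {deaths_clinical / clinical_sepsis:.1%}")       # -> 9.7%
    print(f"culture-confirmed: {deaths_confirmed / culture_confirmed:.1%}")  # -> 12.2%

    # Gram-negative versus gram-positive case fatality
    gram_negative = 10 / 52    # 19.2%
    gram_positive = 3 / 59     # 5.1%
    print(f"ratio: {gram_negative / gram_positive:.1f}x")  # -> 3.8x, i.e., almost four times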
Background: Globally, over 3 million newborns die each year, one million of these deaths being attributed to infections. The objective of this study was to determine the etiologies and clinical characteristics of sepsis in neonates admitted to the intensive care unit of a tertiary hospital in Ethiopia. Methods: A longitudinal hospital-based cohort study was conducted from April 1 to October 31, 2018 at the neonatal intensive care unit of Jimma Medical Center, southwest Ethiopia. Diagnosis of sepsis was established using the World Health Organization's case definition. Structured questionnaires and case-specific recording formats were used to capture the relevant data. Venous blood and cerebrospinal fluid were collected from neonates suspected to have sepsis. Results: Out of 304 neonates enrolled in the study, 195 (64.1%) had clinical evidence of sepsis, the majority (84.1%; 164/195) of them having early onset neonatal sepsis. The three most frequent presenting signs and symptoms were fast breathing (62.6%; 122/195), fever (46.7%; 91/195) and altered feeding (39.0%; 76/195). Etiologic agents were detected in the blood cultures of 61.2% (115/188) of the neonates tested. Bacterial pathogens accounted for 94.8% (109/115), the rest being fungal etiologies. Coagulase negative staphylococci (25.7%; 28/109), Staphylococcus aureus (22.0%; 24/109) and Klebsiella species (16.5%; 18/109) were the most commonly isolated bacteria. Conclusions: The majority of the neonates had early onset neonatal sepsis. The major etiologies isolated in our study markedly deviate from the usual organisms causing neonatal sepsis. Multicentre studies and continuous surveillance are essential to tackle the current challenge of reducing neonatal mortality due to sepsis in Ethiopia.
null
null
3,360
320
[]
4
[ "sepsis", "neonates", "study", "neonatal", "neonatal sepsis", "304", "clinical", "ethiopia", "health", "spp" ]
[ "country neonatal sepsis", "etiologies neonatal sepsis", "neonates sepsis study", "mortality neonatal sepsis", "neonatal sepsis ethiopia" ]
null
null
[CONTENT] Neonatal mortality | neonatal sepsis | isolates | blood culture | Ethiopia [SUMMARY]
[CONTENT] Neonatal mortality | neonatal sepsis | isolates | blood culture | Ethiopia [SUMMARY]
[CONTENT] Neonatal mortality | neonatal sepsis | isolates | blood culture | Ethiopia [SUMMARY]
null
[CONTENT] Neonatal mortality | neonatal sepsis | isolates | blood culture | Ethiopia [SUMMARY]
null
[CONTENT] Anti-Bacterial Agents | Cohort Studies | Ethiopia | Humans | Infant, Newborn | Intensive Care Units, Neonatal | Sepsis | Tertiary Care Centers [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Cohort Studies | Ethiopia | Humans | Infant, Newborn | Intensive Care Units, Neonatal | Sepsis | Tertiary Care Centers [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Cohort Studies | Ethiopia | Humans | Infant, Newborn | Intensive Care Units, Neonatal | Sepsis | Tertiary Care Centers [SUMMARY]
null
[CONTENT] Anti-Bacterial Agents | Cohort Studies | Ethiopia | Humans | Infant, Newborn | Intensive Care Units, Neonatal | Sepsis | Tertiary Care Centers [SUMMARY]
null
[CONTENT] country neonatal sepsis | etiologies neonatal sepsis | neonates sepsis study | mortality neonatal sepsis | neonatal sepsis ethiopia [SUMMARY]
[CONTENT] country neonatal sepsis | etiologies neonatal sepsis | neonates sepsis study | mortality neonatal sepsis | neonatal sepsis ethiopia [SUMMARY]
[CONTENT] country neonatal sepsis | etiologies neonatal sepsis | neonates sepsis study | mortality neonatal sepsis | neonatal sepsis ethiopia [SUMMARY]
null
[CONTENT] country neonatal sepsis | etiologies neonatal sepsis | neonates sepsis study | mortality neonatal sepsis | neonatal sepsis ethiopia [SUMMARY]
null
[CONTENT] sepsis | neonates | study | neonatal | neonatal sepsis | 304 | clinical | ethiopia | health | spp [SUMMARY]
[CONTENT] sepsis | neonates | study | neonatal | neonatal sepsis | 304 | clinical | ethiopia | health | spp [SUMMARY]
[CONTENT] sepsis | neonates | study | neonatal | neonatal sepsis | 304 | clinical | ethiopia | health | spp [SUMMARY]
null
[CONTENT] sepsis | neonates | study | neonatal | neonatal sepsis | 304 | clinical | ethiopia | health | spp [SUMMARY]
null
[CONTENT] neonatal | neonatal sepsis | sepsis | mortality | challenges | neonatal mortality | mortality rate | life | rate | etiologies [SUMMARY]
[CONTENT] sepsis | data | signs symptoms sepsis | physicians | symptoms sepsis | collected | study | neonates | nurses | bactec [SUMMARY]
[CONTENT] 304 | 109 | neonates | 195 | sepsis | neonates admitted | spp | culture | spp 109 | gram [SUMMARY]
null
[CONTENT] sepsis | neonatal | neonates | neonatal sepsis | study | 304 | 109 | ethiopia | studies | clinical [SUMMARY]
null
[CONTENT] over 3 million | each year | one million ||| tertiary | Ethiopia [SUMMARY]
[CONTENT] April 1 to October 31, 2018 | Jimma Medical Center | Ethiopia ||| the World Health Organization's ||| Structured ||| [SUMMARY]
[CONTENT] 304 | 195 | 64.1% | 84.1% | 164/195 ||| three | 64.6% | 122/195 | 48.1% | 91/195 | 39.0% | 76/195 ||| Etiologic | 61.2% | 115/195 ||| 94.8% | 109/115 ||| 25.7% | 28/109 | Staphylococcus aureus | 22.1% | 24/109 | Klebsiella | 16.5% | 18/109 [SUMMARY]
null
[CONTENT] over 3 million | each year | one million ||| tertiary | Ethiopia ||| April 1 to October 31, 2018 | Jimma Medical Center | Ethiopia ||| the World Health Organization's ||| Structured ||| ||| 304 | 195 | 64.1% | 84.1% | 164/195 ||| three | 64.6% | 122/195 | 48.1% | 91/195 | 39.0% | 76/195 ||| Etiologic | 61.2% | 115/195 ||| 94.8% | 109/115 ||| 25.7% | 28/109 | Staphylococcus aureus | 22.1% | 24/109 | Klebsiella | 16.5% | 18/109 ||| ||| ||| Multicentre | Ethiopia [SUMMARY]
null
Fatty liver with metabolic disorder, such as metabolic dysfunction-associated fatty liver disease, indicates high risk for developing diabetes mellitus.
35167194
Nonalcoholic fatty liver disease (NAFLD) is diagnosed after excluding other liver diseases. The pathogenesis of NAFLD when complicated by other liver diseases has not been established completely. Metabolic dysfunction-associated fatty liver disease (MAFLD) involves more metabolic factors than NAFLD, regardless of complications with other diseases. This study aimed to clarify the effects of fatty liver occurring with metabolic disorders, such as MAFLD without diabetes mellitus (DM), on the development of DM.
INTRODUCTION
We retrospectively assessed 9,459 participants who underwent two or more annual health check-ups. The participants were divided into the MAFLD group (fatty liver disease with overweight/obesity or non-overweight/obesity complicated by metabolic disorders), simple fatty liver group (fatty liver disease other than MAFLD group), metabolic disorder group (metabolic disorder without fatty liver disease), and normal group (all other participants).
MATERIALS AND METHODS
The DM onset rates in the normal, simple fatty liver, metabolic disorder, and MAFLD groups were 0.51, 1.85, 2.52, and 7.36%, respectively. In the multivariate analysis, the MAFLD group showed a significantly higher risk of DM onset compared with the other three groups (P < 0.01). Additionally, the risk of DM onset was significantly increased in fatty liver disease with overweight/obesity or pre-diabetes (P < 0.01).
RESULTS
Fatty liver with metabolic disorders, such as MAFLD, can be used to identify patients with fatty liver disease who are at high risk of developing DM. Additionally, patients with fatty liver disease complicated by overweight/obesity or pre-diabetes are at an increased risk of DM onset and should receive more attention.
CONCLUSIONS
[ "Body Mass Index", "Diabetes Mellitus", "Humans", "Metabolic Diseases", "Non-alcoholic Fatty Liver Disease", "Obesity", "Overweight", "Retrospective Studies" ]
9248428
INTRODUCTION
Certain changes to a person’s lifestyle, including increased dietary and fat intake and decreased physical activity, can induce various metabolic diseases. Among them, diabetes mellitus (DM) increases the incidence of cardiovascular disease and cancer (1–4); furthermore, it is considered to have a significant effect on healthy life expectancy (5). Therefore, there is a need to identify high‐risk populations for DM and to provide lifestyle interventions. Nonalcoholic fatty liver disease (NAFLD) affects the development of DM (6–10). However, since NAFLD is diagnosed after excluding other liver diseases, the pathogenesis of NAFLD when complicated by other liver diseases has not been established completely. Furthermore, since there are numerous patients with fatty liver disease, providing interventions for all of them is difficult; moreover, there is a need to stratify their risk of the development of DM. Metabolic dysfunction‐associated fatty liver disease (MAFLD), which is defined as fatty liver disease combined with obesity, diabetes, and metabolic diseases, regardless of the presence of other liver diseases, was proposed by the American Gastroenterological Association and the European Association for the Study of the Liver in 2020 (11, 12). MAFLD is considered a fatty liver disease containing more metabolic factors than NAFLD; however, the impacts of MAFLD on the development of various metabolic disorders remain unclear. Therefore, we aimed to clarify the effects of fatty liver with some metabolic diseases, including MAFLD, on the development of DM.
null
null
RESULTS
Characteristics of the participants: Table 1 shows the baseline characteristics of each group. The onset rates of diabetes mellitus in the normal, simple fatty liver, metabolic disorder, and MAFLD groups were 0.51, 1.85, 2.52, and 7.36%, respectively (Table 1). Compared with the normal group, the other groups had a significantly higher proportion of males; were older; and had higher BMIs; increased BP, FPG, HbA1c, AST, ALT, Cre and TG levels; lower HDL‐C levels; a lower proportion of snacking habits; and a higher proportion of current smokers (Table 1). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 1). There were 128 patients with HBs‐Ag, 91 patients with anti‐HCV, and 752 drinkers (Table 1). Baseline characteristics Data are presented as median (interquartile range) or number (percentage). The Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05 (*normal vs simple fatty liver, †normal vs metabolic disorder, ‡normal vs MAFLD, §simple fatty liver vs metabolic disorder, ||simple fatty liver vs MAFLD, ¶metabolic disorder vs MAFLD). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week. DM onset was defined as fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes treatment. The endpoint characteristics were similar to the baseline characteristics (Table 2). Compared with the normal group, the other groups had higher BMIs; increased BP, FPG, HbA1c, AST, ALT, Cre and TG levels; lower HDL‐C levels; a lower proportion of individuals with snacking habits; and a higher proportion of current smokers (Table 2). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 2). Endpoint characteristics Data are presented as median (interquartile range) or number (percentage). The Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05 (*normal vs simple fatty liver, †normal vs metabolic disorder, ‡normal vs MAFLD, §simple fatty liver vs metabolic disorder, ||simple fatty liver vs MAFLD, ¶metabolic disorder vs MAFLD). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week. The comparisons of characteristics between baseline and endpoint revealed that BMI, DBP, and AST levels at endpoint were higher than at baseline among the normal, simple fatty liver, and metabolic disorder groups (Table 3). The FPG levels at endpoint were higher than at baseline among the normal, metabolic disorder, and MAFLD groups (Table 3). 
In all groups, Cre levels at endpoint were higher than at baseline, and HDL‐C levels at endpoint were lower than at baseline (Table 3). The proportion of individuals with periodic exercise habits was higher at endpoint than at baseline among all groups, and the proportion of current smokers was lower at endpoint than at baseline among the normal, metabolic disorder, and MAFLD groups (Table 3). The differences in characteristics between baseline and endpoint The Wilcoxon signed rank test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05. Data are presented as median (interquartile range) or number (percentage). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week. 
Risk of DM onset with simple fatty liver, metabolic disorder, and MAFLD: Univariate analysis revealed that simple fatty liver, metabolic disorder, and MAFLD were significant risk factors for DM compared with the normal group. Further, MAFLD had a higher risk of DM onset than simple fatty liver and metabolic disorder (simple fatty liver, HR: 3.61, 95% CI: 1.38–9.43; metabolic disorder, HR: 5.41, 95% CI: 3.39–8.62; MAFLD, HR: 18.46, 95% CI: 12.06–28.27) (P‐trend <0.01; Table 4). In addition, multivariate analysis adjusted for sex, age, BMI, Cre, exercise habit, snacking habit, drinking habit, smoking habit, and family history of diabetes showed a significantly increased risk of DM onset in the other three groups compared with the normal group (simple fatty liver, aHR: 2.76, 95% CI: 1.05–7.23; metabolic disorder, aHR: 3.62, 95% CI: 2.24–5.85; MAFLD, aHR: 11.03, 95% CI: 7.03–17.28) (P‐trend <0.01; Table 4). The risk for the onset of diabetes mellitus Differences were considered statistically significant for P < 0.05. BMI, body mass index; CI, confidence interval; Cre, creatinine; HR, hazard ratio; SBP, systolic blood pressure. 
Multivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes. In the normal, simple fatty liver, metabolic disorder, and MAFLD groups, the 1‐year DM onset rates were 0.04%, 0%, 0.03%, and 0.5%, respectively; the 3‐year DM onset rates were 0.11%, 0.43%, 1.06%, and 3.15%, respectively; the 5‐year DM onset rates were 0.44%, 1.64%, 2.07%, and 5.37%, respectively; and the 10‐year DM onset rates were 0.69%, 3.72%, 5.25%, and 16.68%, respectively. Risk of fatty liver disease combined with other factors: Next, we examined the factors that exacerbated the risk of DM in patients with fatty liver disease. Fatty liver disease with overweight/obesity or pre‐diabetes significantly increased the risk of DM onset compared with fatty liver disease with other metabolic diseases (overweight/obesity, HR: 3.21, 95% CI: 1.75–5.94; pre‐diabetes, HR: 8.54, 95% CI: 4.92–14.82, respectively; Table 5). Additionally, multivariate analysis adjusted for sex; age; BMI; Cre levels; exercise habit; snacking habit; drinking habit; smoking habit; family history of diabetes; and metabolic disorders, including overweight/obesity, hypertension, high TG levels, low HDL‐C levels, or pre‐diabetes, showed a significantly increased risk of DM onset in fatty liver disease with overweight/obesity and pre‐diabetes (overweight/obesity, HR: 2.18, 95% CI: 1.15–4.13; pre‐diabetes, HR: 7.82, 95% CI: 4.37–13.99, respectively; Table 5). In the fatty liver with overweight/obesity and pre‐diabetes groups, the 1‐year DM onset rates were 0.54 and 0.07%, respectively; the 3‐year DM onset rates were 3.28 and 4.65%, respectively; the 5‐year DM onset rates were 5.53 and 7.89%, respectively; and the 10‐year DM onset rates were 16.99 and 24.1%, respectively. 
Risk of fatty liver in combination with other factors Differences were considered statistically significant for P < 0.05. BMI, body mass index; CI, confidence interval; Cre, creatinine; HDL‐C, high‐density lipoprotein cholesterol; HR, hazard ratio; SBP, systolic blood pressure; TG, triglycerides. Multivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes, along with metabolic disorders as follows: (a) hypertension, high TG levels, low HDL‐C levels, and pre‐diabetes; (b) overweight/obese, high TG levels, low HDL‐C levels, and pre‐diabetes; (c) overweight/obese, hypertension, low HDL‐C levels, and pre‐diabetes; (d) overweight/obese, hypertension, high TG levels, and low HDL‐C levels.
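The hazard ratios above were estimated with univariate and multivariate Cox proportional hazards regression; the authors report using JMP, so the following is only an illustrative sketch of how an equivalently adjusted model could be fit in Python with pandas and lifelines. The file name and column names are hypothetical, and the sex, habit, smoking and family-history columns are assumed to be coded 0/1:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical layout: one row per participant.
    #   follow_years : observation time in years
    #   dm_onset     : 1 if FPG >= 6.99 mM, HbA1c >= 6.5%, or anti-diabetes
    #                  treatment started during follow-up, else 0
    #   group        : 'normal', 'simple_fatty_liver', 'metabolic_disorder', 'mafld'
    df = pd.read_csv("checkup_cohort.csv")

    covariates = ["sex", "age", "bmi", "sbp", "cre", "exercise_habit",
                  "snacking_habit", "drinking_habit", "smoking", "family_history_dm"]

    # One-hot encode the study groups and keep 'normal' as the reference level.
    X = pd.get_dummies(df[["follow_years", "dm_onset", "group"] + covariates],
                       columns=["group"]).drop(columns=["group_normal"])

    cph = CoxPHFitter()
    cph.fit(X, duration_col="follow_years", event_col="dm_onset")
    cph.print_summary()  # the exp(coef) column gives adjusted HRs with 95% CIs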
null
null
[ "INTRODUCTION", "Definitions", "Characteristics of the participants", "Risk of DM onset with simple fatty liver, metabolic disorder, and MAFLD", "Risk of fatty liver disease combined with other factors", "DISCLOSURE" ]
[ "Certain changes to a person’s lifestyle, including increased dietary and fat intake and decreased physical activity, can induce various metabolic diseases. Among them, diabetes mellitus (DM) increases the incidence of cardiovascular disease and cancer\n1\n, \n2\n, \n3\n, \n4\n; furthermore, it is considered to have a significant effect on healthy life expectancy\n5\n. Therefore, there is a need to identify high‐risk populations for DM and to provide lifestyle interventions.\nNonalcoholic fatty liver disease (NAFLD) affects the development of DM\n6\n, \n7\n, \n8\n, \n9\n, \n10\n. However, since NAFLD is diagnosed after excluding other liver diseases, the pathogenesis of NAFLD when complicated by other liver diseases has not been established completely. Furthermore, since there are numerous patients with fatty liver disease, providing interventions for all of them is difficult; moreover, there is a need to stratify their risk of the development of DM.\nMetabolic dysfunction‐associated fatty liver disease (MAFLD), which is defined as fatty liver disease combined with obesity, diabetes, and metabolic diseases, regardless of the presence of other liver diseases, was proposed by the American Gastroenterological Association and the European Association for the Study of the Liver in 2020\n11\n, \n12\n. MAFLD is considered a fatty liver disease containing more metabolic factors than NAFLD; however, the impacts of MAFLD on the development of various metabolic disorders remain unclear. Therefore, we aimed to clarify the effects of fatty liver with some metabolic diseases, including MAFLD, on the development of DM.", "The MAFLD group comprised participants with fatty liver disease who were overweight/obese (BMI ≥23 kg/m2) or non‐overweight/obese with a metabolic disorder. The criteria for metabolic disorders were defined, based in part, on the criteria for MAFLD\n11\n, and the diagnosis was based on the presence of two or more of the following metabolic risks: hypertension: blood pressure ≥130/85 mmHg or anti‐hypertension drug treatment; high TG levels (TG ≥1.70 mM or lipid‐lowering drug treatment; low HDL‐C levels (HDL‐C <1.04 mM for men and <1.30 mM for women); pre‐diabetes: impaired fasting glucose (fasting glucose levels 5.55–6.94 mM) or HbA1c between 5.7% and 6.4% without anti‐diabetes treatment. The simple fatty liver group comprised participants with fatty liver disease other than the MAFLD group, while the metabolic disorder group comprised participants with metabolic disorders without fatty liver disease. The remaining participants were included in the normal group.\nDiabetes mellitus onset was identified when the blood test examination and questionnaire at the health check‐up visit revealed fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes drug treatment.\nAfter examination of the medical records, 359 patients with diabetes were excluded using the following exclusion criteria: currently receiving antidiabetic medications (n = 120), fasting glucose levels ≥6.99 mM (n = 267), or HbA1c ≥6.5% (n = 310) (Figure 1). The final analysis comprised 9,459 participants (4,509 men and 4,950 women) of Ehime University Hospital (Figure 1). The observed mean duration was 5.53 ± 3.53 years (men: 5.32 ± 3.53 years, women: 5.73 ± 3.51 years). 
This study was approved by the Research Ethics Committee of Ehime University Hospital following the tenets of the Declaration of Helsinki and its later amendments (approval number: 1709007; University Hospital Medical Information Network ID: UMIN000011953), and was conducted in compliance with the Guidelines for Good Clinical Practice and local ethical and legal requirements. All participants were allocated a numerical code to ensure their anonymity. Additionally, all data were preserved in a secure database.\nStudy flowchart.\nJMP version 14.2.0 software (SAS Institute Japan, Tokyo, Japan) was used for statistical analyses. Assumptions of normal distribution were assessed using the Kolmogorov‐Smirnov‐Lilliefors test. Since the continuous variables proved to be non‐normally distributed, they were analyzed using the Steel‐Dwass test. Categorical variables were analyzed using the χ2 test. The Wilcoxon signed‐rank test was used to compare continuous variables representing baseline and endpoint characteristics. We performed univariate and multivariate Cox proportional hazards regression analyses to assess hazard ratios (HRs) and 95% confidence intervals (CIs) for DM development. Multivariate analyses were adjusted for the following variables: age; sex; Cre levels; exercise, snacking, drinking, and smoking habits; and family history of diabetes. The combined risk of fatty liver disease for DM was assessed using univariate and multivariate Cox proportional hazards regression analyses that were adjusted for sex; age; Cre; exercise, snacking, drinking, and smoking habits; family history of diabetes; and metabolic disorders, including overweight/obesity, hypertension, high TG levels, low HDL‐c levels, or pre‐diabetes. All data are expressed as the median (interquartile range) or number (percentage). Statistical significance was determined with P values <0.05.", "Table 1 shows the baseline characteristics of each group. The onset rates of diabetes mellitus in the normal, simple fatty liver, metabolic disorder, and MAFLD groups were 0.51, 1.85, 2.52, and 7.36%, respectively (Table 1). Compared with the normal group, the other groups had a significantly higher proportion of males; were older; and had higher BMIs; increased BP and FPG, HbA1c, AST, ALT, Cre levels, and TG; lower HDL‐C levels; a lower proportion of snacking habits; and a higher proportion of current smokers (Table 1). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 1). There were 128 patients with HBs‐Ag, 91 patients with anti‐HCV, and 752 drinkers (Table 1).\nBaseline characteristics\nData are presented as median (interquartile range) or number (percentage). The Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. 
Differences were considered significant at P < 0.05 (*normal vs simple fatty liver, †normal vs metabolic disorder, ‡normal vs MAFLD, §simple fatty liver vs metabolic disorder, ||simple fatty liver vs MAFLD, ¶metabolic disorder vs MAFLD).\nALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides.\nExercise habit: no habit or awareness of exercise vs periodic exercise.\nSnacking habit: no snacking vs snacking ≥1 time/day.\nDrinker: men: ≥210 g/week, women: ≥140 g/week)\nDM onset was defined as fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes treatment.\nThe endpoint characteristics were similar to the baseline characteristics (Table 2). Compared with the normal group, the other groups had higher BMIs; increased BP and FPG, HbA1c, AST, ALT, Cre, and TG levels; lower HDL‐C levels; a lower proportion of individuals with snacking habits; and higher proportion of current smokers (Table 2). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 2).\nEndpoint characteristics\nData are presented as median (interquartile range) or number (percentage).\nThe Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05 (*normal vs simple fatty liver, †normal vs metabolic disorder, ‡normal vs MAFLD, §simple fatty liver vs metabolic disorder, ||simple fatty liver vs MAFLD, ¶metabolic disorder vs MAFLD).\nALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides.\nExercise habit: no habit or awareness of exercise vs periodic exercise.\nSnacking habit: no snacking vs snacking ≥1 time/day.\nDrinker: men: ≥210 g/week, women: ≥140 g/week).\nThe comparisons of characteristics between baseline and endpoint revealed that BMI, DBP, and AST levels at endpoint were higher than at baseline among the normal, simple fatty liver, and metabolic disorder groups (Table 3). The FPG levels at endpoint were higher than at baseline among normal, metabolic disorder, and MAFLD groups (Table 3). In all groups, Cre levels at endpoint were higher than at baseline, and HDL‐c levels at endpoint were lower than at baseline (Table 3). The proportion of individuals with periodic exercise habits was higher at endpoint than at baseline among all groups and the proportion of current smokers was lower at endpoint than at baseline among normal, metabolic disorder, and MAFLD groups (Table 3).\nThe differences in characteristics between baseline and endpoint\nThe Wilcoxon signed rank test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05. 
Data are presented as median (interquartile range) or number (percentage).\nALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides.\nExercise habit: no habit or awareness of exercise vs periodic exercise.\nSnacking habit: no snacking vs snacking ≥1 time/day.\nDrinker: men: ≥210 g/week, women: ≥140 g/week)", "Univariate analysis revealed that simple fatty liver, metabolic disorder, and MAFLD were significant risk factors for DM compared with the normal group. Further, MAFLD had a higher risk of DM onset than simple fatty liver and metabolic disorder (simple fatty liver, HR: 3.61, 95% CI: 1.38–9.43; metabolic disorder, HR: 5.41, 95% CI: 3.39–8.62; MAFLD, HR: 18.46, 95% CI: 12.06–28.27) (P‐trend <0.01; Table 4). In addition, multivariate analysis adjusted for sex, age, BMI, Cre, exercise habit, snacking habit, drinking habit, smoking habit, and family history of diabetes showed a significantly increased risk of DM onset in the other three groups compared with the normal group (simple fatty liver, aHR: 2.76, 95% CI: 1.05–7.23; metabolic disorder, aHR: 3.62, 95% CI: 2.24–5.85; MAFLD, aHR: 11.03, 95% CI: 7.03–17.28) (P‐trend <0.01; Table 4).\nThe risk for the onset of diabetes mellitus\nDifferences were considered statistically significant for P < 0.05.\nBMI, body mass index; CI, confidence interval; Cre, creatinine; HR, hazard ratio; SBP, systolic blood pressure.\nMultivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes.\nIn the normal, simple fatty liver, metabolic disorder, and MAFLD groups, the 1‐year DM onset rates were 0.04%, 0%, 0.03%, and 0.5%, respectively; the 3‐year DM onset rates were 0.11%, 0.43%, 1.06%, and 3.15%, respectively; the 5‐year DM onset rates were 0.44%, 1.64%, 2.07%, and 5.37%, respectively; and the 10‐year DM onset rates were 0.69%, 3.72%, 5.25%, and 16.68%, respectively.", "Next, we examined the factors that exacerbated the risk of fatty liver disease in DM. Fatty liver disease with overweight/obesity or pre‐diabetes significantly increased the risk of DM onset compared with fatty liver disease with other metabolic diseases (overweight/obesity, HR: 3.21, 95% CI: 1.75–5.94; pre‐diabetes, HR: 8.54, 95% CI: 4.92–14.82, respectively; Table 5). Additionally, multivariate analysis adjusted for sex; age; BMI; Cre levels; exercise habit; snacking habit; drinking habit; smoking habit; family history of diabetes; and metabolic disorders, including overweight/obesity, hypertension, high TG levels, low HDL‐c levels, or pre‐diabetes, showed a significantly increased risk of DM onset in fatty liver disease with overweight/obesity and pre‐diabetes (overweight/obesity, HR: 2.18, 95% CI: 1.15–4.13; pre‐diabetes, HR: 7.82, 95% CI: 4.37–13.99, respectively; Table 5). 
In the fatty liver with overweight/obesity and pre‐diabetes groups, the 1‐year DM onset rates were 0.54 and 0.07%, respectively; the 3‐year DM onset rates were 3.28 and 4.65%, respectively; the 5‐year DM onset rates were 5.53 and 7.89%, respectively; and the 10‐year DM onset rates were 16.99 and 24.1%, respectively.\nRisk of fatty liver in combination with other factors\nDifferences were considered statistically significant for P < 0.05.\nBMI, body mass index; CI, confidence interval; Cre, creatinine; HDL‐C, high‐density lipoprotein cholesterol; HR, hazard ratio; SBP, systolic blood pressure; TG, triglycerides.\nMultivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes, along with metabolic disorders as follows: (a) hypertension, high TG levels, low HDL‐C levels, and pre‐diabetes; (b) overweight/obese, high TG levels, low HDL‐C levels, and pre‐diabetes; (c) overweight/obese, hypertension, low HDL‐C levels, and pre‐diabetes; (d) overweight/obese, hypertension, high TG levels, and low HDL‐C levels.", "The authors declare that they have no conflicts of interest.\nApproval of the research protocol: This study was approved by the Research Ethics Committee of Ehime University Hospital (approval number: 1709007).\nInformed consent: N/A. Since this study protocol was retrospective in manner and all the participants’ data were de‐identified, it was not necessary to obtain informed consent from participants in this study.\nRegistry and the registration no. of the study/trial: October 3, 2013 University Hospital Medical Information Network ID: UMIN000011953.\nAnimal studies: N/A." ]
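Because the group assignment described in the Definitions subsection is purely rule-based, it can be expressed as a short decision function. The sketch below is our own illustrative rendering of those rules, not the authors' code; the field names are hypothetical, units follow the paper (BP in mmHg; TG, HDL-C and glucose in mM; HbA1c in %), and "blood pressure ≥130/85 mmHg" is read as SBP ≥130 or DBP ≥85. Participants already meeting the DM definition at baseline are assumed to have been excluded beforehand, as in the study:

    # Illustrative rule-based grouping from the Definitions subsection
    def metabolic_risk_count(p: dict) -> int:
        """Count the four metabolic risks listed in the paper."""
        return sum([
            p["sbp"] >= 130 or p["dbp"] >= 85 or p["on_bp_drugs"],   # hypertension
            p["tg"] >= 1.70 or p["on_lipid_drugs"],                  # high TG
            p["hdl"] < (1.04 if p["sex"] == "M" else 1.30),          # low HDL-C
            (5.55 <= p["fpg"] <= 6.94 or 5.7 <= p["hba1c"] <= 6.4)
            and not p["on_dm_drugs"],                                # pre-diabetes
        ])

    def classify(p: dict) -> str:
        """Assign one participant to a study group."""
        has_metabolic_disorder = metabolic_risk_count(p) >= 2        # two or more risks
        if p["fatty_liver"]:
            if p["bmi"] >= 23 or has_metabolic_disorder:
                return "MAFLD"
            return "simple fatty liver"
        return "metabolic disorder" if has_metabolic_disorder else "normal"

For example, a participant with fatty liver and a BMI of 24 kg/m2 is assigned to the MAFLD group regardless of the other risks, matching the overweight/obesity arm of the definition.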
[ null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Definitions", "RESULTS", "Characteristics of the participants", "Risk of DM onset with simple fatty liver, metabolic disorder, and MAFLD", "Risk of fatty liver disease combined with other factors", "DISCUSSION", "DISCLOSURE" ]
[ "Certain changes to a person’s lifestyle, including increased dietary and fat intake and decreased physical activity, can induce various metabolic diseases. Among them, diabetes mellitus (DM) increases the incidence of cardiovascular disease and cancer\n1\n, \n2\n, \n3\n, \n4\n; furthermore, it is considered to have a significant effect on healthy life expectancy\n5\n. Therefore, there is a need to identify high‐risk populations for DM and to provide lifestyle interventions.\nNonalcoholic fatty liver disease (NAFLD) affects the development of DM\n6\n, \n7\n, \n8\n, \n9\n, \n10\n. However, since NAFLD is diagnosed after excluding other liver diseases, the pathogenesis of NAFLD when complicated by other liver diseases has not been established completely. Furthermore, since there are numerous patients with fatty liver disease, providing interventions for all of them is difficult; moreover, there is a need to stratify their risk of the development of DM.\nMetabolic dysfunction‐associated fatty liver disease (MAFLD), which is defined as fatty liver disease combined with obesity, diabetes, and metabolic diseases, regardless of the presence of other liver diseases, was proposed by the American Gastroenterological Association and the European Association for the Study of the Liver in 2020\n11\n, \n12\n. MAFLD is considered a fatty liver disease containing more metabolic factors than NAFLD; however, the impacts of MAFLD on the development of various metabolic disorders remain unclear. Therefore, we aimed to clarify the effects of fatty liver with some metabolic diseases, including MAFLD, on the development of DM.", "This community‐based longitudinal cohort study examined the medical records of 9,817 Japanese participants (4,793 men and 5,024 women). The participants were aged 21–78 years and underwent two or more annual health examinations at the Ehime General Health Care Association from April 2003 to March 2017. Annual health examinations were conducted to record the medical history and prescribed medications, perform body measurements, and assess routine biochemical variables.\nThe body mass index (BMI; kg/m2) was calculated using body weight and height, measured wearing only a light gown. Blood pressure was assessed with an automatic sphygmomanometer in a seated position. Blood samples were collected after fasting for >10 h. The risk of diabetes was determined based on the levels of fasting plasma glucose (FPG) and hemoglobin A1c (HbA1c). Liver enzymes, including aspartate aminotransferase (AST) and alanine aminotransferase (ALT), were analyzed. Lipid profiles were determined by assessing triglyceride (TG) and high‐density lipoprotein cholesterol (HDL‐C) levels. Furthermore, creatinine (Cre), hepatitis B surface antigen (HBs‐Ag), and hepatitis C antibody (anti‐HCV) levels were measured. Before the physical examination, health workers asked the participants to complete a questionnaire assessing their medical history; prescribed medications; family history of DM to the second degree; health‐related behaviors, including exercise (no habit or awareness of exercise vs periodic exercise) and snacking habits (no snacking vs snacking ≥1 time/day); alcohol consumption (men: ≥210 g/week, women: ≥140 g/week), and smoking status\n13\n. An experienced technician diagnosed fatty liver disease by abdominal ultrasonography without considering the participants’ data. 
Of the four fatty liver disease diagnostic criteria (hepatorenal echocardiographic contrast, liver brightness, deep attenuation, and vascular blurring), fatty liver disease was diagnosed based on hepatorenal contrast and liver brightness (14, 15).\nDefinitions: The MAFLD group comprised participants with fatty liver disease who were overweight/obese (BMI ≥23 kg/m2) or non‐overweight/obese with a metabolic disorder. The criteria for metabolic disorders were defined, based in part, on the criteria for MAFLD (11), and the diagnosis was based on the presence of two or more of the following metabolic risks: hypertension: blood pressure ≥130/85 mmHg or anti‐hypertension drug treatment; high TG levels (TG ≥1.70 mM or lipid‐lowering drug treatment); low HDL‐C levels (HDL‐C <1.04 mM for men and <1.30 mM for women); pre‐diabetes: impaired fasting glucose (fasting glucose levels 5.55–6.94 mM) or HbA1c between 5.7% and 6.4% without anti‐diabetes treatment. The simple fatty liver group comprised participants with fatty liver disease other than the MAFLD group, while the metabolic disorder group comprised participants with metabolic disorders without fatty liver disease. The remaining participants were included in the normal group.\nDiabetes mellitus onset was identified when the blood test examination and questionnaire at the health check‐up visit revealed fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes drug treatment.\nAfter examination of the medical records, 359 patients with diabetes were excluded using the following exclusion criteria: currently receiving antidiabetic medications (n = 120), fasting glucose levels ≥6.99 mM (n = 267), or HbA1c ≥6.5% (n = 310) (Figure 1). The final analysis comprised 9,459 participants (4,509 men and 4,950 women) of Ehime University Hospital (Figure 1). The observed mean duration was 5.53 ± 3.53 years (men: 5.32 ± 3.53 years, women: 5.73 ± 3.51 years). This study was approved by the Research Ethics Committee of Ehime University Hospital following the tenets of the Declaration of Helsinki and its later amendments (approval number: 1709007; University Hospital Medical Information Network ID: UMIN000011953), and was conducted in compliance with the Guidelines for Good Clinical Practice and local ethical and legal requirements. All participants were allocated a numerical code to ensure their anonymity. Additionally, all data were preserved in a secure database.\nStudy flowchart.\nJMP version 14.2.0 software (SAS Institute Japan, Tokyo, Japan) was used for statistical analyses. Assumptions of normal distribution were assessed using the Kolmogorov‐Smirnov‐Lilliefors test. Since the continuous variables proved to be non‐normally distributed, they were analyzed using the Steel‐Dwass test. Categorical variables were analyzed using the χ2 test. The Wilcoxon signed‐rank test was used to compare continuous variables representing baseline and endpoint characteristics. We performed univariate and multivariate Cox proportional hazards regression analyses to assess hazard ratios (HRs) and 95% confidence intervals (CIs) for DM development. Multivariate analyses were adjusted for the following variables: age; sex; Cre levels; exercise, snacking, drinking, and smoking habits; and family history of diabetes. 
The combined risk of fatty liver disease for DM was assessed using univariate and multivariate Cox proportional hazards regression analyses that were adjusted for sex; age; Cre; exercise, snacking, drinking, and smoking habits; family history of diabetes; and metabolic disorders, including overweight/obesity, hypertension, high TG levels, low HDL‐c levels, or pre‐diabetes. All data are expressed as the median (interquartile range) or number (percentage). Statistical significance was determined with P values <0.05.
", "The MAFLD group comprised participants with fatty liver disease who were overweight/obese (BMI ≥23 kg/m2) or non‐overweight/obese with a metabolic disorder. The criteria for metabolic disorders were defined, based in part, on the criteria for MAFLD (11), and the diagnosis was based on the presence of two or more of the following metabolic risks: hypertension: blood pressure ≥130/85 mmHg or anti‐hypertension drug treatment; high TG levels (TG ≥1.70 mM or lipid‐lowering drug treatment); low HDL‐C levels (HDL‐C <1.04 mM for men and <1.30 mM for women); pre‐diabetes: impaired fasting glucose (fasting glucose levels 5.55–6.94 mM) or HbA1c between 5.7% and 6.4% without anti‐diabetes treatment. The simple fatty liver group comprised participants with fatty liver disease other than the MAFLD group, while the metabolic disorder group comprised participants with metabolic disorders without fatty liver disease. The remaining participants were included in the normal group.\nDiabetes mellitus onset was identified when the blood test examination and questionnaire at the health check‐up visit revealed fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes drug treatment.\nAfter examination of the medical records, 359 patients with diabetes were excluded using the following exclusion criteria: currently receiving antidiabetic medications (n = 120), fasting glucose levels ≥6.99 mM (n = 267), or HbA1c ≥6.5% (n = 310) (Figure 1). The final analysis comprised 9,459 participants (4,509 men and 4,950 women) of Ehime University Hospital (Figure 1). The observed mean duration was 5.53 ± 3.53 years (men: 5.32 ± 3.53 years, women: 5.73 ± 3.51 years). This study was approved by the Research Ethics Committee of Ehime University Hospital following the tenets of the Declaration of Helsinki and its later amendments (approval number: 1709007; University Hospital Medical Information Network ID: UMIN000011953), and was conducted in compliance with the Guidelines for Good Clinical Practice and local ethical and legal requirements. All participants were allocated a numerical code to ensure their anonymity. Additionally, all data were preserved in a secure database.\nStudy flowchart.\nJMP version 14.2.0 software (SAS Institute Japan, Tokyo, Japan) was used for statistical analyses. Assumptions of normal distribution were assessed using the Kolmogorov‐Smirnov‐Lilliefors test. Since the continuous variables proved to be non‐normally distributed, they were analyzed using the Steel‐Dwass test. Categorical variables were analyzed using the χ2 test. The Wilcoxon signed‐rank test was used to compare continuous variables representing baseline and endpoint characteristics. 
We performed univariate and multivariate Cox proportional hazards regression analyses to assess hazard ratios (HRs) and 95% confidence intervals (CIs) for DM development. Multivariate analyses were adjusted for the following variables: age; sex; Cre levels; exercise, snacking, drinking, and smoking habits; and family history of diabetes. The combined risk of fatty liver disease for DM was assessed using univariate and multivariate Cox proportional hazards regression analyses that were adjusted for sex; age; Cre; exercise, snacking, drinking, and smoking habits; family history of diabetes; and metabolic disorders, including overweight/obesity, hypertension, high TG levels, low HDL‐c levels, or pre‐diabetes. All data are expressed as the median (interquartile range) or number (percentage). Statistical significance was determined with P values <0.05.", "Characteristics of the participants Table 1 shows the baseline characteristics of each group. The onset rates of diabetes mellitus in the normal, simple fatty liver, metabolic disorder, and MAFLD groups were 0.51, 1.85, 2.52, and 7.36%, respectively (Table 1). Compared with the normal group, the other groups had a significantly higher proportion of males; were older; and had higher BMIs; increased BP and FPG, HbA1c, AST, ALT, Cre levels, and TG; lower HDL‐C levels; a lower proportion of snacking habits; and a higher proportion of current smokers (Table 1). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 1). There were 128 patients with HBs‐Ag, 91 patients with anti‐HCV, and 752 drinkers (Table 1).\nBaseline characteristics\nData are presented as median (interquartile range) or number (percentage). The Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05 (*normal vs simple fatty liver, †normal vs metabolic disorder, ‡normal vs MAFLD, §simple fatty liver vs metabolic disorder, ||simple fatty liver vs MAFLD, ¶metabolic disorder vs MAFLD).\nALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides.\nExercise habit: no habit or awareness of exercise vs periodic exercise.\nSnacking habit: no snacking vs snacking ≥1 time/day.\nDrinker: men: ≥210 g/week, women: ≥140 g/week)\nDM onset was defined as fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes treatment.\nThe endpoint characteristics were similar to the baseline characteristics (Table 2). Compared with the normal group, the other groups had higher BMIs; increased BP and FPG, HbA1c, AST, ALT, Cre, and TG levels; lower HDL‐C levels; a lower proportion of individuals with snacking habits; and higher proportion of current smokers (Table 2). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 2).\nEndpoint characteristics\nData are presented as median (interquartile range) or number (percentage).\nThe Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. 
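As a rough illustration of this analysis pipeline in open-source tooling (the study itself used JMP), the sketch below screens a variable for normality and fits the multivariate Cox model with the normal group as the reference. The library calls are from statsmodels and lifelines; the DataFrame layout (years, dm_event, group, and covariate columns) is assumed, not taken from the paper.

```python
import pandas as pd
from lifelines import CoxPHFitter
from statsmodels.stats.diagnostic import lilliefors

def fit_dm_cox(df: pd.DataFrame) -> CoxPHFitter:
    # Normality screen (Kolmogorov-Smirnov-Lilliefors): a small p-value
    # indicates non-normality, motivating the rank-based group tests.
    _, p_norm = lilliefors(df["fpg_mm"].dropna(), dist="norm")
    print(f"Lilliefors P = {p_norm:.3g}")

    # Make 'normal' the reference category so each hazard ratio is
    # interpreted relative to the normal group.
    df = df.copy()
    df["group"] = pd.Categorical(
        df["group"],
        categories=["normal", "simple fatty liver",
                    "metabolic disorder", "MAFLD"],
    )
    covars = ["age", "sex_male", "cre_um", "exercise", "snacking",
              "drinking", "smoking", "family_history_dm"]
    model_df = pd.get_dummies(
        df[["years", "dm_event", "group"] + covars],
        columns=["group"], drop_first=True,  # drops the 'normal' level
    )
    cph = CoxPHFitter()
    cph.fit(model_df, duration_col="years", event_col="dm_event")
    cph.print_summary()  # exp(coef) gives the HR with its 95% CI
    return cph
```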
Characteristics of the participants

Table 1 shows the baseline characteristics of each group. The onset rates of diabetes mellitus in the normal, simple fatty liver, metabolic disorder, and MAFLD groups were 0.51, 1.85, 2.52, and 7.36%, respectively (Table 1). Compared with the normal group, the other groups had a significantly higher proportion of men; were older; had higher BMIs and higher blood pressure, FPG, HbA1c, AST, ALT, Cre, and TG levels; had lower HDL‐C levels; included a lower proportion of participants with snacking habits; and included a higher proportion of current smokers (Table 1). The metabolic disorder group had a higher proportion of participants who exercised periodically and who were drinkers than the other groups (Table 1). The cohort included 128 participants positive for HBs‐Ag, 91 positive for anti‐HCV, and 752 drinkers (Table 1).

Table 1: Baseline characteristics. Data are presented as median (interquartile range) or number (percentage). The Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05 (*normal vs simple fatty liver; †normal vs metabolic disorder; ‡normal vs MAFLD; §simple fatty liver vs metabolic disorder; ||simple fatty liver vs MAFLD; ¶metabolic disorder vs MAFLD). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men ≥210 g/week, women ≥140 g/week. DM onset was defined as fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes treatment.

The endpoint characteristics were similar to the baseline characteristics (Table 2). Compared with the normal group, the other groups had higher BMIs; higher blood pressure and FPG, HbA1c, AST, ALT, Cre, and TG levels; lower HDL‐C levels; a lower proportion of individuals with snacking habits; and a higher proportion of current smokers (Table 2). The metabolic disorder group had a higher proportion of participants who exercised periodically and who were drinkers than the other groups (Table 2).

Table 2: Endpoint characteristics. Data are presented as median (interquartile range) or number (percentage); statistical tests, significance markers, abbreviations, and habit definitions as in Table 1.

The comparisons between baseline and endpoint revealed that BMI, DBP, and AST levels at endpoint were higher than at baseline in the normal, simple fatty liver, and metabolic disorder groups (Table 3). FPG levels at endpoint were higher than at baseline in the normal, metabolic disorder, and MAFLD groups (Table 3). In all groups, Cre levels were higher and HDL‐C levels were lower at endpoint than at baseline (Table 3). The proportion of individuals with periodic exercise habits was higher at endpoint than at baseline in all groups, and the proportion of current smokers was lower at endpoint than at baseline in the normal, metabolic disorder, and MAFLD groups (Table 3).

Table 3: Differences in characteristics between baseline and endpoint. The Wilcoxon signed‐rank test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05. Data are presented as median (interquartile range) or number (percentage); abbreviations and habit definitions as in Table 1.
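For readers reproducing these paired comparisons, a minimal sketch with scipy is shown below. It assumes a wide DataFrame with hypothetical *_baseline/*_endpoint columns and mirrors the paper's choice of tests (Wilcoxon signed-rank for continuous variables, χ2 for categorical ones).

```python
import pandas as pd
from scipy import stats

def compare_baseline_endpoint(grp: pd.DataFrame) -> None:
    """Paired baseline-vs-endpoint comparisons within one study group."""
    # Continuous variable: Wilcoxon signed-rank test on paired values.
    w, p_w = stats.wilcoxon(grp["bmi_baseline"], grp["bmi_endpoint"])
    print(f"BMI: W = {w:.1f}, P = {p_w:.3g}")

    # Categorical variable: chi-square test on the counts of current
    # smokers at the two time points (2 x 2 contingency table).
    counts = pd.DataFrame({
        "baseline": grp["smoker_baseline"].value_counts(),
        "endpoint": grp["smoker_endpoint"].value_counts(),
    }).fillna(0)
    chi2, p_c, dof, _ = stats.chi2_contingency(counts)
    print(f"Smoking: chi2 = {chi2:.2f} (dof = {dof}), P = {p_c:.3g}")
```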
Risk of DM onset with simple fatty liver, metabolic disorder, and MAFLD

Univariate analysis revealed that simple fatty liver, metabolic disorder, and MAFLD were significant risk factors for DM compared with the normal group, and MAFLD carried a higher risk of DM onset than simple fatty liver or metabolic disorder (simple fatty liver, HR: 3.61, 95% CI: 1.38–9.43; metabolic disorder, HR: 5.41, 95% CI: 3.39–8.62; MAFLD, HR: 18.46, 95% CI: 12.06–28.27; P‐trend <0.01; Table 4). Multivariate analysis adjusted for sex, age, BMI, Cre, exercise habit, snacking habit, drinking habit, smoking habit, and family history of diabetes likewise showed a significantly increased risk of DM onset in the three groups compared with the normal group (simple fatty liver, aHR: 2.76, 95% CI: 1.05–7.23; metabolic disorder, aHR: 3.62, 95% CI: 2.24–5.85; MAFLD, aHR: 11.03, 95% CI: 7.03–17.28; P‐trend <0.01; Table 4).

Table 4: The risk for the onset of diabetes mellitus. Differences were considered statistically significant at P < 0.05. BMI, body mass index; CI, confidence interval; Cre, creatinine; HR, hazard ratio; SBP, systolic blood pressure. The multivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes.

In the normal, simple fatty liver, metabolic disorder, and MAFLD groups, the 1‐year DM onset rates were 0.04, 0, 0.03, and 0.5%; the 3‐year rates were 0.11, 0.43, 1.06, and 3.15%; the 5‐year rates were 0.44, 1.64, 2.07, and 5.37%; and the 10‐year rates were 0.69, 3.72, 5.25, and 16.68%, respectively.

Risk of fatty liver disease combined with other factors

Next, we examined the factors that exacerbated the risk of DM onset among participants with fatty liver disease. Fatty liver disease with overweight/obesity or with pre‐diabetes significantly increased the risk of DM onset compared with fatty liver disease combined with the other metabolic disorders (overweight/obesity, HR: 3.21, 95% CI: 1.75–5.94; pre‐diabetes, HR: 8.54, 95% CI: 4.92–14.82; Table 5). Multivariate analysis adjusted for sex; age; BMI; Cre levels; exercise, snacking, drinking, and smoking habits; family history of diabetes; and the remaining metabolic disorders (overweight/obesity, hypertension, high TG levels, low HDL‐C levels, or pre‐diabetes) likewise showed a significantly increased risk of DM onset for fatty liver disease with overweight/obesity and with pre‐diabetes (overweight/obesity, HR: 2.18, 95% CI: 1.15–4.13; pre‐diabetes, HR: 7.82, 95% CI: 4.37–13.99; Table 5). In the fatty liver with overweight/obesity and fatty liver with pre‐diabetes groups, the 1‐year DM onset rates were 0.54 and 0.07%; the 3‐year rates were 3.28 and 4.65%; the 5‐year rates were 5.53 and 7.89%; and the 10‐year rates were 16.99 and 24.1%, respectively.

Table 5: Risk of fatty liver in combination with other factors. Differences were considered statistically significant at P < 0.05. BMI, body mass index; CI, confidence interval; Cre, creatinine; HDL‐C, high‐density lipoprotein cholesterol; HR, hazard ratio; SBP, systolic blood pressure; TG, triglycerides. The multivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes, along with the following sets of metabolic disorders: (a) hypertension, high TG levels, low HDL‐C levels, and pre‐diabetes; (b) overweight/obese, high TG levels, low HDL‐C levels, and pre‐diabetes; (c) overweight/obese, hypertension, low HDL‐C levels, and pre‐diabetes; (d) overweight/obese, hypertension, high TG levels, and low HDL‐C levels.
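Fixed-horizon onset rates such as the 1-, 3-, 5-, and 10-year figures above can be read off a Kaplan-Meier curve. Below is a minimal sketch with lifelines, assuming the per-participant follow-up time and event columns used in the earlier sketches; it is an illustration of the calculation, not the authors' procedure.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

def onset_rates(df: pd.DataFrame, horizons=(1, 3, 5, 10)) -> pd.DataFrame:
    """Cumulative DM onset (1 - survival) at fixed horizons, per group."""
    rows = {}
    for name, grp in df.groupby("group"):
        kmf = KaplanMeierFitter()
        kmf.fit(grp["years"], event_observed=grp["dm_event"])
        surv = kmf.survival_function_at_times(horizons)
        rows[name] = 1.0 - surv.to_numpy()
    return pd.DataFrame(rows, index=[f"{t}-year" for t in horizons]).T
```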
We observed that fatty liver with metabolic disorders, that is, MAFLD, was associated with a higher risk of DM onset than either simple fatty liver or metabolic disorders alone. Additionally, fatty liver disease in the presence of overweight/obesity or pre‐diabetes carried a higher risk of DM onset than fatty liver disease with the other metabolic disorders. Therefore, MAFLD, and especially fatty liver disease with overweight/obesity or pre‐diabetes, is an appropriate disease concept for assessing the risk of DM development.

Nonalcoholic fatty liver disease is a significant risk factor for DM, and NAFLD is strongly associated with DM 6, 7, 8, 9, 10. Additionally, a previous examination of the relationship between changes in fatty liver status over time and the risk of DM onset indicated that, although improvement of fatty liver did not reduce the risk of developing DM to the level seen in patients without fatty liver, exacerbation of fatty liver disease notably increased the risk of DM onset 16. It is therefore important to intervene in cases of fatty liver disease to reduce the risk of DM onset. However, because NAFLD is diagnosed after excluding other liver diseases, it is difficult to properly evaluate the various diseases that can complicate its pathogenesis, including viral hepatitis, alcoholic liver disease, and autoimmune liver diseases. In 2020, the concept of MAFLD was proposed, which facilitates evaluation of fatty liver disease regardless of coexisting liver disease 11, 12. Although our cohort included 128 participants with HBs‐Ag, 91 with anti‐HCV, and 752 drinkers, the relationship between fatty liver disease and diabetes could still be assessed appropriately. Additionally, MAFLD is a good predictor of hepatic fibrosis 17 and mortality 18, 19. Future studies on MAFLD are needed to confirm whether it also carries an increased risk of other diseases 20.

Our findings confirmed that fatty liver disease can be classified according to the risk of DM. Liang et al. 21 reported that, compared with participants without fatty liver disease, patients with MAFLD and NAFLD had an increased risk of DM onset (estimated risk ratio [RR] 2.08, 95% CI 1.72–2.52; RR 2.01, 95% CI 1.65–2.46, respectively) after adjustment for age, sex, educational background, smoking status, and leisure‐time exercise. Moreover, changing the definition from NAFLD to MAFLD had little effect on the association with diabetes. However, because that study compared disease concepts, the NAFLD and MAFLD populations overlapped. Our findings are therefore important in that we stratified participants by their risk of DM development and demonstrated that a diagnosis of MAFLD can identify patients at higher risk of developing DM. Additionally, we analyzed the factors that specifically increased this risk among patients with fatty liver disease.

A strength of our study is that only five data points were missing from the entire cohort of patients and visits (one BMI measurement, and two points each for exercise and snacking habits). However, this study has several limitations. First, because we did not collect data on waist circumference, insulin levels, or high‐sensitivity C‐reactive protein levels, we could not assess all the MAFLD components; the number of patients with MAFLD may therefore have been underestimated. Second, although abdominal ultrasonography is a reliable diagnostic method with high sensitivity and specificity for fatty liver disease 22, 23, we could not determine the extent of fatty liver and fibrosis, and thus could not assess how disease severity affects DM development. Third, self‐reported data were used for some of the surveyed factors, which may compromise their accuracy. Fourth, the data are not truly continuous because they were collected only once a year at annual health check‐ups. Fifth, some participants who developed DM after a health check‐up may have missed the next check‐up, so the number of patients who developed diabetes may have been underestimated. Finally, because we studied only a Japanese population, studies in other populations are needed to confirm the generalizability of our results.

In conclusion, our findings demonstrated that a diagnosis of MAFLD can be used to identify a high risk of DM onset among patients with fatty liver disease. Additionally, among patients with fatty liver disease, those who are also overweight/obese or have pre‐diabetes are at a higher risk of developing DM. Therefore, clinicians should be particularly vigilant when treating patients with fatty liver complicated by metabolic disorders, such as MAFLD, to prevent the development of DM. Stratifying the risk of fatty liver can allow the identification of, and interventions for, patients at a high risk of diabetes development.

The authors declare that they have no conflicts of interest.

Approval of the research protocol: This study was approved by the Research Ethics Committee of Ehime University Hospital (approval number: 1709007).

Informed consent: N/A. Because this study was retrospective and all participants' data were de‐identified, informed consent was not required.

Registry and registration no. of the study/trial: University Hospital Medical Information Network ID UMIN000011953 (registered October 3, 2013).

Animal studies: N/A.
Keywords: Diabetes mellitus; Fatty liver; Metabolic disease.
INTRODUCTION: Certain lifestyle changes, including increased dietary and fat intake and decreased physical activity, can induce various metabolic diseases. Among them, diabetes mellitus (DM) increases the incidence of cardiovascular disease and cancer 1, 2, 3, 4, and it is considered to have a significant effect on healthy life expectancy 5. Therefore, there is a need to identify populations at high risk of DM and to provide lifestyle interventions. Nonalcoholic fatty liver disease (NAFLD) affects the development of DM 6, 7, 8, 9, 10. However, because NAFLD is diagnosed after excluding other liver diseases, its pathogenesis when complicated by other liver diseases has not been established completely. Furthermore, because there are numerous patients with fatty liver disease, providing interventions for all of them is difficult, and there is a need to stratify their risk of developing DM. Metabolic dysfunction‐associated fatty liver disease (MAFLD), defined as fatty liver disease combined with obesity, diabetes, or other metabolic diseases, regardless of the presence of other liver diseases, was proposed by the American Gastroenterological Association and the European Association for the Study of the Liver in 2020 11, 12. MAFLD is a fatty liver disease concept that incorporates more metabolic factors than NAFLD; however, the impact of MAFLD on the development of various metabolic disorders remains unclear. Therefore, we aimed to clarify the effects of fatty liver combined with metabolic diseases, including MAFLD, on the development of DM.

MATERIALS AND METHODS: This community‐based longitudinal cohort study examined the medical records of 9,817 Japanese participants (4,793 men and 5,024 women). The participants were aged 21–78 years and underwent two or more annual health examinations at the Ehime General Health Care Association from April 2003 to March 2017. The annual health examinations recorded medical history and prescribed medications, body measurements, and routine biochemical variables. Body mass index (BMI; kg/m2) was calculated from body weight and height, measured while wearing only a light gown. Blood pressure was assessed with an automatic sphygmomanometer in a seated position. Blood samples were collected after fasting for >10 h. The risk of diabetes was determined based on fasting plasma glucose (FPG) and hemoglobin A1c (HbA1c) levels. Liver enzymes, including aspartate aminotransferase (AST) and alanine aminotransferase (ALT), were analyzed, and lipid profiles were determined by assessing triglyceride (TG) and high‐density lipoprotein cholesterol (HDL‐C) levels. Furthermore, creatinine (Cre), hepatitis B surface antigen (HBs‐Ag), and hepatitis C antibody (anti‐HCV) levels were measured. Before the physical examination, health workers asked the participants to complete a questionnaire assessing their medical history; prescribed medications; family history of DM to the second degree; and health‐related behaviors, including exercise habits (no habit or awareness of exercise vs periodic exercise), snacking habits (no snacking vs snacking ≥1 time/day), alcohol consumption (men: ≥210 g/week; women: ≥140 g/week), and smoking status 13. An experienced technician, blinded to the participants' other data, diagnosed fatty liver disease by abdominal ultrasonography.
Of the four ultrasonographic criteria for fatty liver disease (hepatorenal echo contrast, liver brightness, deep attenuation, and vascular blurring), fatty liver disease was diagnosed based on hepatorenal contrast and liver brightness 14, 15.

Definitions

The MAFLD group comprised participants with fatty liver disease who were overweight/obese (BMI ≥23 kg/m2) or non‐overweight/obese with a metabolic disorder. The criteria for metabolic disorders were defined, based in part, on the criteria for MAFLD 11, and the diagnosis required the presence of two or more of the following metabolic risks: hypertension (blood pressure ≥130/85 mmHg or anti‐hypertension drug treatment); high TG levels (TG ≥1.70 mM or lipid‐lowering drug treatment); low HDL‐C levels (HDL‐C <1.04 mM for men and <1.30 mM for women); or pre‐diabetes (impaired fasting glucose [fasting glucose 5.55–6.94 mM] or HbA1c between 5.7% and 6.4% without anti‐diabetes treatment). The simple fatty liver group comprised participants with fatty liver disease who did not meet the MAFLD criteria, and the metabolic disorder group comprised participants with metabolic disorders without fatty liver disease. The remaining participants were included in the normal group.

Diabetes mellitus onset was identified when the blood test and questionnaire at a health check‐up visit revealed fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes drug treatment.

After examination of the medical records, 359 participants with diabetes were excluded according to the following criteria, which were not mutually exclusive: current use of antidiabetic medications (n = 120), fasting glucose levels ≥6.99 mM (n = 267), or HbA1c ≥6.5% (n = 310) (Figure 1). The final analysis comprised 9,459 participants (4,509 men and 4,950 women) of Ehime University Hospital (Figure 1). The mean observation period was 5.53 ± 3.53 years (men: 5.32 ± 3.53 years; women: 5.73 ± 3.51 years). This study was approved by the Research Ethics Committee of Ehime University Hospital following the tenets of the Declaration of Helsinki and its later amendments (approval number: 1709007; University Hospital Medical Information Network ID: UMIN000011953), and was conducted in compliance with the Guidelines for Good Clinical Practice and local ethical and legal requirements. All participants were allocated a numerical code to ensure their anonymity, and all data were preserved in a secure database.

Study flowchart (Figure 1).
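Because the three exclusion criteria overlap, the union of excluded participants (359) is smaller than the sum of the per-criterion counts (120 + 267 + 310). A minimal pandas sketch of this filtering step, with hypothetical column names:

```python
import pandas as pd

def apply_exclusions(baseline: pd.DataFrame) -> pd.DataFrame:
    """Drop participants with diabetes at baseline, mirroring Figure 1."""
    has_dm = (
        baseline["dm_drugs"]               # antidiabetic medications (n = 120)
        | (baseline["fpg_mm"] >= 6.99)     # fasting glucose >= 6.99 mM (n = 267)
        | (baseline["hba1c_pct"] >= 6.5)   # HbA1c >= 6.5% (n = 310)
    )
    # The OR of the three masks removes each participant once, so the
    # excluded union (here, 359) can be smaller than the summed counts.
    return baseline.loc[~has_dm].copy()
```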
JMP version 14.2.0 software (SAS Institute Japan, Tokyo, Japan) was used for statistical analyses. The assumption of normality was assessed using the Kolmogorov‐Smirnov‐Lilliefors test. Because the continuous variables proved to be non‐normally distributed, they were analyzed using the Steel‐Dwass test; categorical variables were analyzed using the χ2 test. The Wilcoxon signed‐rank test was used to compare continuous variables between baseline and endpoint. Univariate and multivariate Cox proportional hazards regression analyses were performed to assess hazard ratios (HRs) and 95% confidence intervals (CIs) for DM development. Multivariate analyses were adjusted for age; sex; Cre levels; exercise, snacking, drinking, and smoking habits; and family history of diabetes. The combined risk of fatty liver disease for DM was assessed using univariate and multivariate Cox proportional hazards regression analyses additionally adjusted for metabolic disorders, including overweight/obesity, hypertension, high TG levels, low HDL‐C levels, or pre‐diabetes. All data are expressed as the median (interquartile range) or number (percentage). Statistical significance was defined as P < 0.05.
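The Steel-Dwass all-pairs comparison is not available in scipy, but the scikit-posthocs package ships the closely related Dwass-Steel-Critchlow-Fligner procedure. A sketch is below, reusing the long-format DataFrame assumed in the earlier sketches; the exact function name and signature should be checked against the installed scikit-posthocs version.

```python
import scikit_posthocs as sp

# df: analysis DataFrame with 'group' and 'fpg_mm' columns (see the
# earlier sketches). The result is a symmetric matrix of pairwise
# p-values across the four study groups.
pvals = sp.posthoc_dscf(df, val_col="fpg_mm", group_col="group")
print(pvals.round(4))
```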
ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week). The comparisons of characteristics between baseline and endpoint revealed that BMI, DBP, and AST levels at endpoint were higher than at baseline among the normal, simple fatty liver, and metabolic disorder groups (Table 3). The FPG levels at endpoint were higher than at baseline among normal, metabolic disorder, and MAFLD groups (Table 3). In all groups, Cre levels at endpoint were higher than at baseline, and HDL‐c levels at endpoint were lower than at baseline (Table 3). The proportion of individuals with periodic exercise habits was higher at endpoint than at baseline among all groups and the proportion of current smokers was lower at endpoint than at baseline among normal, metabolic disorder, and MAFLD groups (Table 3). The differences in characteristics between baseline and endpoint The Wilcoxon signed rank test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05. Data are presented as median (interquartile range) or number (percentage). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week) Table 1 shows the baseline characteristics of each group. The onset rates of diabetes mellitus in the normal, simple fatty liver, metabolic disorder, and MAFLD groups were 0.51, 1.85, 2.52, and 7.36%, respectively (Table 1). Compared with the normal group, the other groups had a significantly higher proportion of males; were older; and had higher BMIs; increased BP and FPG, HbA1c, AST, ALT, Cre levels, and TG; lower HDL‐C levels; a lower proportion of snacking habits; and a higher proportion of current smokers (Table 1). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 1). There were 128 patients with HBs‐Ag, 91 patients with anti‐HCV, and 752 drinkers (Table 1). Baseline characteristics Data are presented as median (interquartile range) or number (percentage). The Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05 (*normal vs simple fatty liver, †normal vs metabolic disorder, ‡normal vs MAFLD, §simple fatty liver vs metabolic disorder, ||simple fatty liver vs MAFLD, ¶metabolic disorder vs MAFLD). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. 
Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week) DM onset was defined as fasting glucose levels ≥6.99 mM, HbA1c ≥6.5%, or initiation of anti‐diabetes treatment. The endpoint characteristics were similar to the baseline characteristics (Table 2). Compared with the normal group, the other groups had higher BMIs; increased BP and FPG, HbA1c, AST, ALT, Cre, and TG levels; lower HDL‐C levels; a lower proportion of individuals with snacking habits; and higher proportion of current smokers (Table 2). The metabolic disorder group had a higher proportion of participants who performed periodic exercise and were drinkers than the other groups (Table 2). Endpoint characteristics Data are presented as median (interquartile range) or number (percentage). The Steel‐Dwass test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05 (*normal vs simple fatty liver, †normal vs metabolic disorder, ‡normal vs MAFLD, §simple fatty liver vs metabolic disorder, ||simple fatty liver vs MAFLD, ¶metabolic disorder vs MAFLD). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week). The comparisons of characteristics between baseline and endpoint revealed that BMI, DBP, and AST levels at endpoint were higher than at baseline among the normal, simple fatty liver, and metabolic disorder groups (Table 3). The FPG levels at endpoint were higher than at baseline among normal, metabolic disorder, and MAFLD groups (Table 3). In all groups, Cre levels at endpoint were higher than at baseline, and HDL‐c levels at endpoint were lower than at baseline (Table 3). The proportion of individuals with periodic exercise habits was higher at endpoint than at baseline among all groups and the proportion of current smokers was lower at endpoint than at baseline among normal, metabolic disorder, and MAFLD groups (Table 3). The differences in characteristics between baseline and endpoint The Wilcoxon signed rank test and χ2 test were used to analyze continuous and categorical variables, respectively. Differences were considered significant at P < 0.05. Data are presented as median (interquartile range) or number (percentage). ALT, alanine aminotransferase; AST, aspartate aminotransferase; BMI, body mass index; Cre, creatinine; DBP, diastolic blood pressure; FPG, fasting plasma glucose; HbA1c, hemoglobin A1c; HDL‐C, high‐density lipoprotein cholesterol; SBP, systolic blood pressure; TG, triglycerides. Exercise habit: no habit or awareness of exercise vs periodic exercise. Snacking habit: no snacking vs snacking ≥1 time/day. Drinker: men: ≥210 g/week, women: ≥140 g/week) Risk of DM onset with simple fatty liver, metabolic disorder, and MAFLD Univariate analysis revealed that simple fatty liver, metabolic disorder, and MAFLD were significant risk factors for DM compared with the normal group. 
Further, MAFLD had a higher risk of DM onset than simple fatty liver and metabolic disorder (simple fatty liver, HR: 3.61, 95% CI: 1.38–9.43; metabolic disorder, HR: 5.41, 95% CI: 3.39–8.62; MAFLD, HR: 18.46, 95% CI: 12.06–28.27) (P‐trend <0.01; Table 4). In addition, multivariate analysis adjusted for sex, age, BMI, Cre, exercise habit, snacking habit, drinking habit, smoking habit, and family history of diabetes showed a significantly increased risk of DM onset in the other three groups compared with the normal group (simple fatty liver, aHR: 2.76, 95% CI: 1.05–7.23; metabolic disorder, aHR: 3.62, 95% CI: 2.24–5.85; MAFLD, aHR: 11.03, 95% CI: 7.03–17.28) (P‐trend <0.01; Table 4). The risk for the onset of diabetes mellitus Differences were considered statistically significant for P < 0.05. BMI, body mass index; CI, confidence interval; Cre, creatinine; HR, hazard ratio; SBP, systolic blood pressure. Multivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes. In the normal, simple fatty liver, metabolic disorder, and MAFLD groups, the 1‐year DM onset rates were 0.04%, 0%, 0.03%, and 0.5%, respectively; the 3‐year DM onset rates were 0.11%, 0.43%, 1.06%, and 3.15%, respectively; the 5‐year DM onset rates were 0.44%, 1.64%, 2.07%, and 5.37%, respectively; and the 10‐year DM onset rates were 0.69%, 3.72%, 5.25%, and 16.68%, respectively. Univariate analysis revealed that simple fatty liver, metabolic disorder, and MAFLD were significant risk factors for DM compared with the normal group. Further, MAFLD had a higher risk of DM onset than simple fatty liver and metabolic disorder (simple fatty liver, HR: 3.61, 95% CI: 1.38–9.43; metabolic disorder, HR: 5.41, 95% CI: 3.39–8.62; MAFLD, HR: 18.46, 95% CI: 12.06–28.27) (P‐trend <0.01; Table 4). In addition, multivariate analysis adjusted for sex, age, BMI, Cre, exercise habit, snacking habit, drinking habit, smoking habit, and family history of diabetes showed a significantly increased risk of DM onset in the other three groups compared with the normal group (simple fatty liver, aHR: 2.76, 95% CI: 1.05–7.23; metabolic disorder, aHR: 3.62, 95% CI: 2.24–5.85; MAFLD, aHR: 11.03, 95% CI: 7.03–17.28) (P‐trend <0.01; Table 4). The risk for the onset of diabetes mellitus Differences were considered statistically significant for P < 0.05. BMI, body mass index; CI, confidence interval; Cre, creatinine; HR, hazard ratio; SBP, systolic blood pressure. Multivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes. In the normal, simple fatty liver, metabolic disorder, and MAFLD groups, the 1‐year DM onset rates were 0.04%, 0%, 0.03%, and 0.5%, respectively; the 3‐year DM onset rates were 0.11%, 0.43%, 1.06%, and 3.15%, respectively; the 5‐year DM onset rates were 0.44%, 1.64%, 2.07%, and 5.37%, respectively; and the 10‐year DM onset rates were 0.69%, 3.72%, 5.25%, and 16.68%, respectively. Risk of fatty liver disease combined with other factors Next, we examined the factors that exacerbated the risk of fatty liver disease in DM. 
Fatty liver disease with overweight/obesity or pre-diabetes significantly increased the risk of DM onset compared with fatty liver disease with other metabolic diseases (overweight/obesity, HR: 3.21, 95% CI: 1.75–5.94; pre-diabetes, HR: 8.54, 95% CI: 4.92–14.82; Table 5). Additionally, multivariate analysis adjusted for sex; age; BMI; Cre levels; exercise habit; snacking habit; drinking habit; smoking habit; family history of diabetes; and metabolic disorders, including overweight/obesity, hypertension, high TG levels, low HDL-C levels, or pre-diabetes, showed a significantly increased risk of DM onset in fatty liver disease with overweight/obesity and pre-diabetes (overweight/obesity, HR: 2.18, 95% CI: 1.15–4.13; pre-diabetes, HR: 7.82, 95% CI: 4.37–13.99; Table 5). In the fatty liver with overweight/obesity and pre-diabetes groups, the 1-year DM onset rates were 0.54% and 0.07%, respectively; the 3-year rates were 3.28% and 4.65%; the 5-year rates were 5.53% and 7.89%; and the 10-year rates were 16.99% and 24.1%. Risk of fatty liver in combination with other factors: Differences were considered statistically significant for P < 0.05. BMI, body mass index; CI, confidence interval; Cre, creatinine; HDL-C, high-density lipoprotein cholesterol; HR, hazard ratio; SBP, systolic blood pressure; TG, triglycerides. Multivariate Cox proportional hazards regression analysis was adjusted for sex, age (years), BMI (kg/m2), SBP (mmHg), Cre (µM), exercise habits, snacking habits, drinking habits, smoking status, and family history of diabetes, along with metabolic disorders as follows: (a) hypertension, high TG levels, low HDL-C levels, and pre-diabetes; (b) overweight/obesity, high TG levels, low HDL-C levels, and pre-diabetes; (c) overweight/obesity, hypertension, low HDL-C levels, and pre-diabetes; (d) overweight/obesity, hypertension, high TG levels, and low HDL-C levels. DISCUSSION: We observed that fatty liver with metabolic disorders, such as MAFLD, indicated an increased risk of DM onset compared with simple fatty liver and metabolic disorders alone. Additionally, fatty liver disease in the presence of overweight/obesity or pre-diabetes showed a higher risk of DM onset compared with fatty liver disease with other metabolic disorders. Therefore, MAFLD, especially fatty liver disease with overweight/obesity or pre-diabetes, is an appropriate disease concept for assessing the risk of DM development. Nonalcoholic fatty liver disease is a significant risk factor for DM; moreover, NAFLD is strongly associated with DM [6-10]. Additionally, a previous examination of the relationship between changes in fatty liver status over time and the risk of DM onset indicated that, although improvement of fatty liver did not reduce the risk of developing DM to a level as low as that of non-fatty liver patients, exacerbation of the severity of fatty liver disease notably increased the risk of DM onset [16].
Therefore, it is important to intervene in cases of fatty liver disease to reduce the risk of DM onset. However, since NAFLD is diagnosed after excluding other liver diseases, it is difficult to properly evaluate the various conditions that can coexist with fatty liver, including viral hepatitis, alcoholic liver disease, and autoimmune liver diseases. In 2020, the concept of MAFLD was proposed, which defines fatty liver disease by its metabolic pathogenesis rather than by exclusion of other liver diseases [11, 12]. Although we included 128 patients with HBs-Ag, 91 patients with anti-HCV, and 752 drinkers, the relationship between fatty liver disease and diabetes could be accurately represented. Additionally, MAFLD is a good predictive factor of hepatic fibrosis [17] and mortality [18, 19]. Future studies on MAFLD are needed to confirm whether it also carries an increased risk of other diseases [20]. Our findings confirmed that fatty liver disease can be classified according to the risk of DM. Liang et al. [21] reported that, compared with participants without fatty liver disease, patients with MAFLD and NAFLD had an increased risk of DM onset (estimated risk ratio [RR] 2.08, 95% CI 1.72–2.52; RR 2.01, 95% CI 1.65–2.46, respectively) after adjustment for age, sex, educational background, smoking status, and leisure-time exercise. Moreover, the change from NAFLD to MAFLD had no effect on the relationship with diabetes. However, since this previous study compared disease concepts, there was an overlap between the participants with NAFLD and those with MAFLD. Therefore, our findings are important because we stratified participants based on their risk of DM development, demonstrating that a diagnosis of MAFLD can be used to identify patients at a higher risk of developing DM. Additionally, we analyzed the factors that specifically increased this risk in fatty liver disease. A strength of our study is that only five data points were missing from the entire cohort of patients and visits (one BMI measurement, and two points each for exercise and snacking habits). However, this study has several limitations. First, because we did not collect data pertaining to waist circumference, insulin levels, or high-sensitivity C-reactive protein levels, we could not assess all the MAFLD components. Therefore, the number of patients with MAFLD may have been underestimated. Second, although abdominal ultrasonography is a reliable diagnostic method with high sensitivity and specificity for fatty liver disease [22, 23], we could not assess the effects of the severity of fatty liver disease on DM development since we could not determine the extent of steatosis and fibrosis. Third, self-reported data were used for some of the surveyed factors, which may compromise the accuracy of the results. Fourth, the data are not truly continuous because they were collected only once a year at annual health check-ups. Fifth, some participants who developed DM after a health check-up may have missed the next check-up; therefore, the number of patients who developed diabetes may have been underestimated. Finally, since we only studied a Japanese population, studies in other populations are needed to confirm the generalizability of our results. In conclusion, our findings demonstrated that a diagnosis of MAFLD can be used to identify a high risk of DM onset among patients with fatty liver disease.
Additionally, among patients with fatty liver disease, those who are also overweight/obese or have pre-diabetes are at a higher risk of developing DM. Therefore, clinicians should be particularly vigilant when treating patients with fatty liver complicated by metabolic disorders, such as MAFLD, to prevent the development of DM. Stratifying the risk of fatty liver can allow the identification of, and interventions for, patients at a high risk of diabetes development. DISCLOSURE: The authors declare that they have no conflicts of interest. Approval of the research protocol: This study was approved by the Research Ethics Committee of Ehime University Hospital (approval number: 1709007). Informed consent: N/A. Since this study was retrospective in nature and all participant data were de-identified, it was not necessary to obtain informed consent from participants. Registry and registration no. of the study/trial: University Hospital Medical Information Network, ID UMIN000011953 (registered October 3, 2013). Animal studies: N/A.
Background: Nonalcoholic fatty liver disease (NAFLD) is diagnosed after excluding other liver diseases. The pathogenesis of NAFLD when complicated by other liver diseases has not been established completely. Metabolic dysfunction-associated fatty liver disease (MAFLD) involves more metabolic factors than NAFLD, regardless of complications with other diseases. This study aimed to clarify the effects of fatty liver occurring with metabolic disorders, such as MAFLD without diabetes mellitus (DM), on the development of DM. Methods: We retrospectively assessed 9,459 participants who underwent two or more annual health check-ups. The participants were divided into the MAFLD group (fatty liver disease with overweight/obesity or non-overweight/obesity complicated by metabolic disorders), simple fatty liver group (fatty liver disease other than MAFLD), metabolic disorder group (metabolic disorder without fatty liver disease), and normal group (all other participants). Results: The DM onset rates in the normal, simple fatty liver, metabolic disorder, and MAFLD groups were 0.51, 1.85, 2.52, and 7.36%, respectively. In the multivariate analysis, the MAFLD group showed a significantly higher risk of DM onset compared with the other three groups (P < 0.01). Additionally, the risk of DM onset was significantly increased in fatty liver disease with overweight/obesity or pre-diabetes (P < 0.01). Conclusions: Fatty liver with metabolic disorders, such as MAFLD, can be used to identify patients with fatty liver disease who are at high risk of developing DM. Additionally, patients with fatty liver disease complicated with overweight/obesity or pre-diabetes are at an increased risk of DM onset and should receive more attention.
Understanding the influence of substrate when growing tumorspheres.
33722191
Cancer stem cells are important for the development of many solid tumors. These cells receive promoting and inhibitory signals that depend on the nature of their environment (their niche) and determine cell dynamics. Mechanical stresses are crucial to the initiation and interpretation of these signals.
BACKGROUND
A two-population mathematical model of tumorsphere growth is used to interpret the results of a series of experiments recently carried out in Tianjin, China, and extract information about the intraspecific and interspecific interactions between cancer stem cell and differentiated cancer cell populations.
METHODS
The model allows us to reconstruct the time evolution of the cancer stem cell fraction, which was not directly measured. We find that, in the presence of stem cell growth factors, the interspecific cooperation between cancer stem cells and differentiated cancer cells induces a positive feedback loop that determines growth, independently of substrate hardness. In a frustrated attempt to reconstitute the stem cell niche, the number of cancer stem cells increases continuously with a reproduction rate that is enhanced by a hard substrate. For growth on soft agar, intraspecific interactions are always inhibitory, but on hard agar the interactions between stem cells are collaborative while those between differentiated cells are strongly inhibitory. Evidence also suggests that a hard substrate brings about a large fraction of asymmetric stem cell divisions. In the absence of stem cell growth factors, the barrier to differentiation is broken and overall growth is faster, even if the stem cell number is conserved.
RESULTS
Our interpretation of the experimental results validates the centrality of the concept of stem cell niche when tumor growth is fueled by cancer stem cells. Niche memory is found to be responsible for the characteristic population dynamics observed in tumorspheres. The model also shows why substratum stiffness has a deep influence on the behavior of cancer stem cells, stiffer substrates leading to a larger proportion of asymmetric doublings. A specific condition for the growth of the cancer stem cell number is also obtained.
CONCLUSIONS
[ "Cell Differentiation", "Cell Proliferation", "Culture Media", "Humans", "Models, Biological", "Neoplasms", "Neoplastic Stem Cells", "Spheroids, Cellular", "Stem Cell Niche", "Stress, Mechanical", "Surface Properties", "Tumor Cells, Cultured" ]
7962376
Background
For some time, it has been known that the presence of cancer stem cells (CSCs) is important for the development of many solid tumors [1–6]. According to the CSC hypothesis these cells are often crucial for the development of resistance to therapeutic interventions [7, 8]. In healthy tissues the proportion of stem cells is small; homeostatic equilibrium is maintained through the signals that the stem cells receive from their niches. The onset of cancer is likely to destroy this equilibrium and cancerous tissues may exhibit a higher proportion of stem cells than normal tissues [9]. This increased proportion of cancer stem cells may underlie the aggressive behavior of high-grade tumors [2, 10]. As recently explained by Taniguchi et al. [11], the cross-talk between tumor initiating (stem) cells and their niche microenvironment is a possible therapeutic target. Understanding the nature of the interactions between CSCs and their environment is therefore important for the development of effective intervention procedures. Interesting mathematical models have been developed to explain various properties of stem-cell-driven tissue growth. Stiehl and Marciniak-Czochra proposed a mathematical model of cancer stem cell dynamics to describe the time evolution of a leukemic cell line competing with healthy hematopoiesis [12]. This group later provided evidence that the influence of leukemic stem cells on the course of the disease is stronger than that of non-stem leukemic cells [13]. Yang, Plikus and Komarova used stochastic modeling to explore the relative importance of symmetric and asymmetric stem cell divisions, showing that tight homeostatic control is not necessarily associated with purely asymmetric divisions and that symmetric divisions can help to stabilize mouse paw epidermis lineage [14]. Recently, Bessonov and coworkers developed a model that allowed them to determine the influence of the population dynamics on the time-varying probabilities of different cell fates and the ascertainment of the cell-cell communication factors influencing these probabilities [15]. These authors suggest that a coordinated dynamical change of the cell behavior parameters occurs in response to a biochemical signal, which they describe as an underlying field. Here we will describe the effects of cellular interactions using nonlinear terms instead. Live cells are generally sensitive to substratum rigidity and texture [16]. A growing tumor must compete for space with the surrounding environment; the resulting mechanical stresses generate signals that impact on the tumor cells. Cells integrate these mechanical cues and respond in ways that are related to their phenotype. Their active response may also lead to phenotype modifications [17, 18]; in fact, mechanical cues generated by the environment can trigger cancer cell invasion [19]. Environmental stiffness may then be associated with tumor progression, a process that can also be promoted by mechanically activated ion channels [20]. What is the influence of the mechanical environment on cancer stem cells? At each generation, CSCs divide symmetrically, generating either two new CSCs or two differentiated cancer cells (DCCs), or asymmetrically, generating one CSC and one differentiated cancer cell [7, 21]. Quorum sensing controls differentiation of healthy stem cells, but it is thought to be altered in cancer stem cells [22]. 
Mechanical inputs are an important component of the altered control mechanism and can be assumed to play a role in the fate of the cancer stem cells. In vitro experiments have been designed to probe the influence of mechanical stresses of various types on tumor cells. The solid-stress inhibition of multicellular spheroid growth was already demonstrated by Helmlinger and coworkers in 1997 [23]. The results of these experiments were shown to follow allometric laws [24]. Interestingly, Koike et al. showed that spheroid formation with Dunning R3327 rat prostate carcinoma AT3.1 cells is facilitated by solid stress [25]. A study by Cheng et al. suggested how tumors grow in confined locations where levels of stress are high, showing that growth-induced solid stress can affect cell phenotype [26]. Using spheroid cell aggregates, Montel et al. showed that applied pressure may be used to modulate tumor growth [27] and observed that cells are blocked by compressive stresses at the G1 checkpoint [28]. The organization of cells in a spheroid is modified by physical confinement [29], which likewise modifies the proliferation gradient [30]. The stiffness of hydrogels has been shown to determine the shape of tumor cells, with high stiffnesses leading to spheroidal cells, a feature known to occur in in vivo tumors [31]. By studying the behavior of adult neural stem cells under various mechanical cues, Saha et al. showed that soft gels favored differentiation into neurons while harder gels promoted glial cultures. Importantly, they also showed that low substrate stiffness inhibited cell spreading, self-renewal, and differentiation [32]. Osteocyte-like cells were shown to significantly induce compaction of tumor spheroids formed using breast cancer cells [33]. Matrix stiffness was shown to affect, through mechanotransduction events, the osteogenic outcome of human mesenchymal stem cell differentiation [34]. HeLa cells were used to show that both an attractive contact force and a substrate-controlled remote force contribute to the formation of large-scale multicellular structures in cancer [35]. Fifteen years ago, Discher, Janmey, and Wang not only explained that the stiffness of the anchoring substrate can have a strong influence on the cell state, but they also indicated that stem cell differentiation may be influenced by the nature of the substrate [36]. It is relevant that naive mesenchymal stem cells were shown to commit to various phenotypes with high sensitivity to tissue elasticity: They differentiate preferably into neurons and osteocytes if they are cultured on soft and rigid matrices, respectively [37]. On the other hand, human mesenchymal stem cells adhere onto precalcified bones, which are softer than calcified bones [16]. It is also known that hydrodynamic shear stress promotes the conversion of primary patient epithelial tumor cells into specific cancer stem-like cells [38]. Smith et al. found that the mechanical context of the differentiation niche can drive endothelial cell identity from human-induced pluripotent stem cells, showing that stiffness drives mesodermal differentiation, leading to endothelial commitment [39]. Thus, microenvironments help specify stem cell lineages, although it may be difficult to decouple the influence of mechanical interactions and surface topography and stiffness from biochemical effects [16, 39]. 
Since they are grown in the absence of the complex signaling system prevalent in the environment of real tumors, tumorspheres, spheroids formed by clonal proliferation out of permanent cell lines, tumor tissue, or blood [40], are suitable candidates to probe the influence of mechanical stimuli on stem-cell-fueled cancer growth. Wang et al. cultured breast CSCs on soft and hard agar matrix surfaces, investigating the effects that substrate stiffness has on cell state and proliferation [41]. These authors showed that breast cancer stem cells can be kept in states of differentiation, proliferation or quiescence depending on a combination of adherent growth and stem cell growth factors, but they focused on the experimental possibilities and did not draw conclusions about how these agencies may modify the stem cell niche to lead to the observed behavior. Recently, we developed a two-population tumorsphere model to identify the role of the intraspecific and interspecific interactions that determine tumorsphere growth [18]. Application of our model to three breast cancer cell lines studied by Chen and coworkers [9] indicates that while intraspecific interactions are inhibitory, interspecific interactions promote growth. This feature of interspecific interactions was interpreted in terms of the stimulation by CSCs of the growth of DCCs in order to consolidate their niches and of the plasticity of the DCCs to dedifferentiate into CSCs [18]. Here we use this model to analyze the experimental results of Wang et al. [41], discussing how substrate stiffness influences growth and finding that the concept of cancer stem cell niche is central for its understanding. In the next section we review the model of Ref. [18], and in the following sections we apply it to the results of Ref. [41] and discuss their implications.
Results
The initial stages
First, let us answer the following question: given that we start tumorsphere growth from a small CSC seed, what is the minimum size Sm needed for this seed to guarantee CSC population growth? By setting D = 0 in Eq. (1b), we see that there are two cases. If differentiation is inhibited, ps > pd, as in the soft and hard experiments discussed below, the linear term dominates and the initial seed may be arbitrarily small: a single cancer stem cell may generate a tumorsphere. If ps < pd, the condition on the initial CSC number is that the quadratic term be large enough, i.e. S0 > Sm, with

$$ S_m = \frac{p_s - p_d}{\alpha_{SS}\, p_s}. \qquad (2) $$

We thus need αSS < 0: the CSCs must cooperate to yield additional cancer stem cells starting from a pure CSC seed. A larger cooperative interaction implies that we can use a smaller seed. In this case, the intraspecific interaction coefficient αSS plays a key role in determining growth from the very beginning of the process. It is worth mentioning that in the experiments discussed here the conditions ps < pd and S0 > Sm are never satisfied simultaneously. Usually, ps is smaller than pd, and there is a minimum number of stem cells required to ensure stem cell growth. But if a differentiation-inhibiting agent is added to the system, increasing ps, a single cancer stem cell may suffice to generate growth.

As shown in Additional file 1, we can linearize our equations to describe the initial evolution of a small system, finding that the trajectory in the S-D plane starts as

$$ D(t) = \frac{D_0 + S_0}{S_0}\, S(t)^{1/(p_s - p_d)} - S(t), \qquad (3) $$

where S(0) = S0 and D(0) = D0. Initially, if pd > ps, the number of differentiated cells increases while the number of stem cells decreases, and the representative point approaches the D-axis. If there is growth in the stem cell subpopulation, it is due to the nonlinear terms.

Our model therefore generates a simple analytical description of the early stages of tumorsphere evolution and specifies the conditions for a successful implantation of the initial cancer stem cell seed. Unless a potent anti-differentiation agent is added to the growth medium, we expect the differentiation probability pd to be larger than ps. If so, Eq. (2) predicts the minimum number of stem cells needed to initiate successful spheroid growth. This number depends only on ps, pd, and the intraspecific interaction between cancer stem cells, which must be cooperative. Weak cooperation or a small ps would mean that the tumorsphere must be started from a large nucleus. In the next section we review the experimental results reported in [41] and determine the model parameters.
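To make the threshold of Eq. (2) concrete, here is a minimal numerical sketch; the function and all parameter values are illustrative assumptions, not fitted quantities from this work.

```python
def seed_threshold(ps, pd, alpha_ss):
    """Minimum CSC seed size S_m = (ps - pd) / (alpha_ss * ps), Eq. (2).

    A positive threshold requires pd > ps together with a cooperative
    CSC-CSC interaction, alpha_ss < 0.
    """
    if alpha_ss >= 0:
        raise ValueError("Eq. (2) needs a cooperative interaction (alpha_ss < 0)")
    return (ps - pd) / (alpha_ss * ps)

# Hypothetical values: differentiation dominates (pd > ps).
print(seed_threshold(ps=0.2, pd=0.4, alpha_ss=-0.05))  # -> 20.0 cells
print(seed_threshold(ps=0.2, pd=0.4, alpha_ss=-0.20))  # -> 5.0 cells: stronger cooperation, smaller seed
```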
Experimental data
We used our model to analyze the results of Wang and coworkers [41]. These authors studied the growth of breast cancer cell cultures belonging to three different cell lines: MCF7, MDA-MB-231 and MDA-MB-435. For each of these tumor lines they grew tumorspheres under three different environmental conditions, as detailed below. In all cases the spheroids initially have 4-5 cells that originate from a single CSC. Since only the MDA-MB-231 cell line yielded bona fide round spheroids for all three experimental specifications, we will use this line to compare our findings with the experimental results. To facilitate the implementation of the model presented in [18], we report the data in terms of cell numbers.

Soft substrate
In the soft experiment, cells were cultured using soft (0.05%) agar as the matrix surface for cell contact. Differentiation inhibitors were added to the growth medium to increase the CSC fraction. Under these conditions there is little incentive for the stem cells to either duplicate or leave their quiescent status; only the tendency to build a suitable niche may break the quiescence, hence their small basal growth rate. As a result, a slow exponential growth of CSCs prevails in the early stages of tumorsphere growth, as depicted in Fig. 3. Such behavior is predicted by Eq. (3) and Eqs. (A3) and (A4) in Additional file 1, but the basal growth rate is so small that the process appears to be almost linear. The CSC population (red line) is always much larger than its DCC counterpart; as a matter of fact, Wang et al. reported 95% CSCs at day 8 with a low growth rate. The distribution of the fitting values generated by the RGS method is shown in Additional file 2, Fig. A. Note there, and in Table 1, the very high (close to unity) value of ps, the positive sign of the intraspecific interaction coefficients, and the negative sign of the interspecific interaction coefficients.

Fig. 3: Growth in the soft experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a soft substrate. The blue line fits the total cell population.

Table 1: The three chosen parameter sets obtained from fits to the experimental data

Constant           Units       Soft      Hard      Control
r                  [1/days]    0.0685    0.1335    0.0240
ps                 none        0.9701    0.7124    0.1646
pd                 none        0.0019    0.0000    0.3911
αSS                [1/cells]   0.0873   -0.0456    0.0028
αSD                [1/cells]  -0.4185   -0.5280    0.0266
αDS                [1/cells]  -0.2061   -0.1376   -1.0683
αDD                [1/cells]   0.3668    1.8329   -0.3087
S/(S+D) at 8 days  none        0.9479    0.913     0.141

Hard substrate
In the hard experiment, cells were cultured using hard (30%) agar as the contact matrix surface. Differentiation inhibitors were also added to the growth medium.
For this experiment we expect the model to describe a high fraction of CSCs, as in soft, but now with a higher proliferation rate. Applying the RGS method to this data set, we see that this is indeed so (see Table 1 and Additional file 2, Fig. B for the resulting parameters), obtaining the curves depicted in Fig. 4. At early times, growth is nearly linear, as observed in soft, but only for the first four days, speeding up afterwards. The CSCs outnumber the DCCs, reaching 91% of the cell population by day 8, consistent with the results reported in [41]. This fraction is a little lower than in soft but would become much larger at later times.

Fig. 4: Growth in the hard experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a hard substrate. The hard substrate yields a faster growth rate than the soft substrate and, at late times, a higher fraction of CSCs.

The symmetric CSC reproduction probability is still high, but noticeably lower than in soft, and the basal rate is twice that in soft. The interspecific interaction coefficients are negative, as in soft, but the CSC intraspecific interaction coefficient is now negative, too.

Control substrate
In the control experiment, cells were cultured using hard (30%) agar as the contact matrix surface, but no differentiation inhibitor was added to the medium. The stem-cell promoting factors EGF and bFGF were replaced by neutral serum and the cells were grown on a hard substrate [41]. In this case, although the spheroid cannot preserve its spherical shape at late times, a fitting attempt, shown in Fig. 5, is informative (the corresponding boxplots are shown in Additional file 2, Fig. C). Although the CSC number remains nearly constant, the DCCs can proliferate indefinitely, leading to fast overall growth.

Fig. 5: Growth in the control experiment. Fitting the control medium data predicts unlimited growth, faster than in either soft or hard, but now driven by the DCCs. The number of CSCs does not increase.

All the new relevant information obtained from fitting the experimental data is summarized in Table 1, where we report the values of the parameters of our model. Those values have an accuracy of 98%, and their corresponding distributions are reported in Additional file 2. Furthermore, in Table 2 we report some quantities, derived from the parameters in Table 1, that will be useful in the following sections.

Table 2: Useful quantities derived from the values in Table 1

Constant      Units      Soft     Hard     Control
1/r           [days]     14.59    7.49     41.66
rp            [1/days]   0.0663   0.0951  -0.0054
1/(rp)        [days]     15.08    10.52   -185
p = ps − pd   none       0.9682   0.7124  -0.2265
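Each entry of Table 2 follows from Table 1 by simple arithmetic (1/r, p = ps − pd, rp = r·p, and 1/(rp)). As a consistency check (not part of the original fitting pipeline), the short Python sketch below recomputes these quantities from the fitted values; small deviations from Table 2, such as 1/(rp) ≈ −184 days for control, are rounding effects.

```python
# Recompute the derived quantities of Table 2 from the fitted
# parameters of Table 1 (values taken directly from the text).
params = {
    "Soft":    {"r": 0.0685, "ps": 0.9701, "pd": 0.0019},
    "Hard":    {"r": 0.1335, "ps": 0.7124, "pd": 0.0000},
    "Control": {"r": 0.0240, "ps": 0.1646, "pd": 0.3911},
}

for name, v in params.items():
    p = v["ps"] - v["pd"]   # net symmetric self-renewal probability, ps - pd
    rp = v["r"] * p         # effective CSC growth rate [1/days]
    print(f"{name:8s} 1/r = {1 / v['r']:6.2f} d   p = {p:+.4f}   "
          f"rp = {rp:+.4f}/d   1/(rp) = {1 / rp:8.2f} d")
```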
Conclusion
The analysis of experimental data with our model confirms Wang’s conclusions and indicates that the substrate regulates the details of tumorsphere evolution and that a powerful engine of tumorsphere growth is the stem cell “memory” of its niche. By comparing growth on the hard and soft substrates, our analysis also confirms the observation that substrate stiffness promotes cancer cell proliferation (as recently reviewed by Nia et al. [50]). What is more interesting is that the evident differences between the parameters describing growth on the hard and control substrates indicate that the response of stem and non-stem cancer cells to an increase in substrate stiffness is likely to be mediated by different processes. In summary, the ability of stem cells to sense their environment plays a crucial role in tumorsphere evolution. Our model has proven to be particularly useful at determining why substratum stiffness has a profound influence on the behavior of cancer stem cells, soft substrates favoring symmetric divisions and hard substrates leading to a large proportion of asymmetric doublings. In vivo studies are needed to further our understanding of niche processes under natural environments.
[ "Background", "Methods", "Fitting with the model", "The initial stages", "Experimental data", "Soft substrate", "Hard substrate", "Control substrate", "Soft substrate", "Hard substrate", "Control substrate", "" ]
[ "For some time, it has been known that the presence of cancer stem cells (CSCs) is important for the development of many solid tumors [1–6]. According to the CSC hypothesis these cells are often crucial for the development of resistance to therapeutic interventions [7, 8]. In healthy tissues the proportion of stem cells is small; homeostatic equilibrium is maintained through the signals that the stem cells receive from their niches. The onset of cancer is likely to destroy this equilibrium and cancerous tissues may exhibit a higher proportion of stem cells than normal tissues [9]. This increased proportion of cancer stem cells may underlie the aggressive behavior of high-grade tumors [2, 10]. As recently explained by Taniguchi et al. [11], the cross-talk between tumor initiating (stem) cells and their niche microenvironment is a possible therapeutic target. Understanding the nature of the interactions between CSCs and their environment is therefore important for the development of effective intervention procedures.\nInteresting mathematical models have been developed to explain various properties of stem-cell-driven tissue growth. Stiehl and Marciniak-Czochra proposed a mathematical model of cancer stem cell dynamics to describe the time evolution of a leukemic cell line competing with healthy hematopoiesis [12]. This group later provided evidence that the influence of leukemic stem cells on the course of the disease is stronger than that of non-stem leukemic cells [13]. Yang, Plikus and Komarova used stochastic modeling to explore the relative importance of symmetric and asymmetric stem cell divisions, showing that tight homeostatic control is not necessarily associated with purely asymmetric divisions and that symmetric divisions can help to stabilize mouse paw epidermis lineage [14]. Recently, Bessonov and coworkers developed a model that allowed them to determine the influence of the population dynamics on the time-varying probabilities of different cell fates and the ascertainment of the cell-cell communication factors influencing these probabilities [15]. These authors suggest that a coordinated dynamical change of the cell behavior parameters occurs in response to a biochemical signal, which they describe as an underlying field. Here we will describe the effects of cellular interactions using nonlinear terms instead.\nLive cells are generally sensitive to substratum rigidity and texture [16]. A growing tumor must compete for space with the surrounding environment; the resulting mechanical stresses generate signals that impact on the tumor cells. Cells integrate these mechanical cues and respond in ways that are related to their phenotype. Their active response may also lead to phenotype modifications [17, 18]; in fact, mechanical cues generated by the environment can trigger cancer cell invasion [19]. Environmental stiffness may then be associated with tumor progression, a process that can also be promoted by mechanically activated ion channels [20].\nWhat is the influence of the mechanical environment on cancer stem cells? At each generation, CSCs divide symmetrically, generating either two new CSCs or two differentiated cancer cells (DCCs), or asymmetrically, generating one CSC and one differentiated cancer cell [7, 21]. Quorum sensing controls differentiation of healthy stem cells, but it is thought to be altered in cancer stem cells [22]. 
Mechanical inputs are an important component of the altered control mechanism and can be assumed to play a role in the fate of the cancer stem cells. In vitro experiments have been designed to probe the influence of mechanical stresses of various types on tumor cells. The solid-stress inhibition of multicellular spheroid growth was already demonstrated by Helmlinger and coworkers in 1997 [23]. The results of these experiments were shown to follow allometric laws [24]. Interestingly, Koike et al. showed that spheroid formation with Dunning R3327 rat prostate carcinoma AT3.1 cells is facilitated by solid stress [25].\nA study by Cheng et al. suggested how tumors grow in confined locations where levels of stress are high, showing that growth-induced solid stress can affect cell phenotype [26]. Using spheroid cell aggregates, Montel et al. showed that applied pressure may be used to modulate tumor growth [27] and observed that cells are blocked by compressive stresses at the G1 checkpoint [28]. The organization of cells in a spheroid is modified by physical confinement [29], which likewise modifies the proliferation gradient [30]. The stiffness of hydrogels has been shown to determine the shape of tumor cells, with high stiffnesses leading to spheroidal cells, a feature known to occur in in vivo tumors [31]. By studying the behavior of adult neural stem cells under various mechanical cues, Saha et al. showed that soft gels favored differentiation into neurons while harder gels promoted glial cultures. Importantly, they also showed that low substrate stiffness inhibited cell spreading, self-renewal, and differentiation [32]. Osteocyte-like cells were shown to significantly induce compaction of tumor spheroids formed using breast cancer cells [33]. Matrix stiffness was shown to affect, through mechanotransduction events, the osteogenic outcome of human mesenchymal stem cell differentiation [34]. HeLa cells were used to show that both an attractive contact force and a substrate-controlled remote force contribute to the formation of large-scale multicellular structures in cancer [35].\nFifteen years ago, Discher, Janmey, and Wang not only explained that the stiffness of the anchoring substrate can have a strong influence on the cell state, but they also indicated that stem cell differentiation may be influenced by the nature of the substrate [36]. It is relevant that naive mesenchymal stem cells were shown to commit to various phenotypes with high sensitivity to tissue elasticity: They differentiate preferably into neurons and osteocytes if they are cultured on soft and rigid matrices, respectively [37]. On the other hand, human mesenchymal stem cells adhere onto precalcified bones, which are softer than calcified bones [16]. It is also known that hydrodynamic shear stress promotes the conversion of primary patient epithelial tumor cells into specific cancer stem-like cells [38]. Smith et al. found that the mechanical context of the differentiation niche can drive endothelial cell identity from human-induced pluripotent stem cells, showing that stiffness drives mesodermal differentiation, leading to endothelial commitment [39]. Thus, microenvironments help specify stem cell lineages, although it may be difficult to decouple the influence of mechanical interactions and surface topography and stiffness from biochemical effects [16, 39]. 
Since they are grown in the absence of the complex signaling system prevalent in the environment of real tumors, tumorspheres, spheroids formed by clonal proliferation out of permanent cell lines, tumor tissue, or blood [40], are suitable candidates to probe the influence of mechanical stimuli on stem-cell-fueled cancer growth.\nWang et al. cultured breast CSCs on soft and hard agar matrix surfaces, investigating the effects that substrate stiffness has on cell state and proliferation [41]. These authors showed that breast cancer stem cells can be kept in states of differentiation, proliferation or quiescence depending on a combination of adherent growth and stem cell growth factors, but they focused on the experimental possibilities and did not draw conclusions about how these agents may modify the stem cell niche to lead to the observed behavior. Recently, we developed a two-population tumorsphere model to identify the role of the intraspecific and interspecific interactions that determine tumorsphere growth [18]. Application of our model to three breast cancer cell lines studied by Chen and coworkers [9] indicates that while intraspecific interactions are inhibitory, interspecific interactions promote growth. This feature of interspecific interactions was interpreted in terms of the stimulation by CSCs of the growth of DCCs in order to consolidate their niches and of the plasticity of the DCCs to dedifferentiate into CSCs [18]. Here we use this model to analyze the experimental results of Wang et al. [41], discussing how substrate stiffness influences growth and finding that the concept of cancer stem cell niche is central for its understanding. In the next section we review the model of Ref. [18] and in the following sections we apply it to the results of Ref. [41] and discuss their implications.", "We model mathematically the growth of a tumorsphere considering two cell populations: cancer stem cells (S) and differentiated cancer cells (D). By including in the latter class all cells with any degree of differentiation, we can isolate the role played by the stem cells. We further assume the following.\n(i) The single basal growth rate r characterizes the timescale of the system. By construction, it matches a priori the population doubling time (PDT) of the DCCs. This provides a more suitable description than the previous model with two basal growth rates [18] since, in general, it is not possible to discriminate between these rates in experiments such as that of Ref. [41].\n(ii) When a CSC undergoes mitosis, there is a probability ps that two new CSCs are generated, a probability pd that two DCCs are generated, and a probability pa that the division is asymmetric. Because of normalization, pa=1−pd−ps. These probabilities are multiplied by the basal growth rate r (see Fig. 1), so that the effective creation rates of new cells can be modeled.\n(iii) The members of each subpopulation interact with each other (intraspecific interactions) and with the members of the other subpopulation (interspecific interactions), cf. Fig. 2. These interactions are described by proportionality factors αij whose signs and magnitudes quantify the number of cells created in the system due to interactions with preexisting cells. The indices i and j may represent S or D; they indicate either intraspecific (i=j) or interspecific (i≠j) interactions.\nFig. 1 Schematic representation of the cell division outcomes and the intrinsic growth rates. Each of these is given by the product of the basal growth rate r and the probability of the respective outcome. a Parent cells replicate themselves, originating daughter cells in their same class. b CSC differentiation occurs in two ways: if asymmetric, the S population remains unchanged; if symmetric, S decreases.\nFig. 2 The signs of the coefficients αij indicate whether the interactions promote or inhibit growth. Cells already present in the tumorsphere either favor (αij<0) or hinder (αij>0) the production of new cells. Arrows indicate the influence of each population on the newborn cells.\nWe can describe the evolution of the two interacting populations by generalizing the standard equations for two competing species (see, e.g., [42], p. 67).
\n(1a) $$ \frac{dS}{dt} = r\,p_{s} S \left\{ \frac{p_{s}-p_{d}}{p_{s}} - \alpha_{SS} S - \alpha_{SD} D \right\}, $$\n(1b) $$ \frac{dD}{dt} = r \left[ D + (1+p_{d}-p_{s}) S \right] \left\{ 1 - \alpha_{DD} D - \alpha_{DS} S \right\}. $$\nThe first term inside the braces on the right-hand side of Eq. (1a) corresponds to the net intrinsic creation of new CSCs (symmetric CSC divisions minus divisions yielding two DCCs). Note that asymmetric divisions do not change the number of cancer stem cells, but symmetric differentiation removes the parent CSC from the S population, as illustrated in Fig. 1. The second and third terms correspond, respectively, to the effects on the CSC population of the interactions with other CSCs and with differentiated cancer cells.\nThe factor in the square brackets on the right-hand side of Eq. (1b) is proportional to the rate of creation, in the absence of interactions, of differentiated cells due to the division of other DCCs (first term), plus the asymmetric division and differentiation of CSCs (second term). The first term between braces corresponds to the interaction-free growth of the system. The second and third terms represent, respectively, the influences of the other DCCs and of the CSCs on the differentiated cancer cell population. The effect of cell-cell interactions on cell creation is assumed to be proportional to the abundances of the populations emitting the signals and of those receiving them; therefore, the corresponding terms are quadratic in the populations. The interaction strengths are represented by the coefficients αij. Negative interaction coefficients (αij<0) describe growth-promoting interactions, i.e., the j population promotes the growth of the i population. Positive values of αij describe the growth inhibition of population i by population j. In particular, as shown in Fig. 2, αSS tells us how CSCs promote or inhibit the creation of new CSCs, αDD tells us how DCCs promote or inhibit the creation of new DCCs, while αDS informs us about the influence of CSCs on the generation of new differentiated cancer cells and αSD about the influence of DCCs on the generation of new cancer stem cells.\nThere are no analytic solutions for these differential equations. Their numerical solutions yield the time evolution of both subpopulations, S(t) and D(t). In Additional file 1 we summarize some properties of Eqs. (1) and their solutions that we will use in our analysis.",
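Eqs. (1) are straightforward to integrate numerically. A minimal sketch follows; the parameter values and the initial condition are illustrative placeholders, not the fitted values reported later in the paper.

```python
# Minimal sketch: numerical integration of the two-population model, Eqs. (1).
# All parameter values and the initial condition are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def tumorsphere_rhs(t, y, r, ps, pd, a_ss, a_sd, a_ds, a_dd):
    """Right-hand side of Eqs. (1a)-(1b); y = (S, D)."""
    S, D = y
    dS = r * ps * S * ((ps - pd) / ps - a_ss * S - a_sd * D)       # Eq. (1a)
    dD = r * (D + (1 + pd - ps) * S) * (1 - a_dd * D - a_ds * S)   # Eq. (1b)
    return [dS, dD]

params = (0.1, 0.7, 0.1, -0.05, -0.4, -0.1, 1.0)   # r, ps, pd, aSS, aSD, aDS, aDD
sol = solve_ivp(tumorsphere_rhs, (0.0, 8.0), [4.0, 1.0], args=params,
                t_eval=np.linspace(0.0, 8.0, 81))

S, D = sol.y
print("T(8 days) =", S[-1] + D[-1], "  CSC fraction =", S[-1] / (S[-1] + D[-1]))
```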
"The data sets correspond to the total cell number T in the spheroids. Thus, we fit the data with T=S+D, where S and D are the numerical solutions of the system of Eqs. (1). Thereby, our model allows us to obtain information on the dynamics of the CSC and DCC subpopulations and, in particular, on the time evolution of the CSC fraction, from data corresponding to the whole spheroid. Due to the scarcity of data points and the ensuing difficulties of the optimization problem, fitting our model to the data leads to different sets of possible parameter values. To obtain the optimal set, we use a random grid search (RGS) algorithm, which consists of randomly sweeping a domain of initial conditions in parameter space. In our case, this domain is bounded by physically reasonable assumptions, such as the probabilities satisfying pi∈[0,1] for i≡s,d and the growth rate satisfying r>0. We also require the outcome of the fitting process to give normalized positive probabilities, positive populations in the range of validity of the data, and fractions of the order of those reported by Wang et al. We then collect in a histogram all the parameter sets that fit the data points with a relative error lower than 5%. To do this we define a relative error measure given by the nonlinear estimator\n$$ R = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{y_{i}-Y(t_{i})}{y_{i}} \right)^{2}. $$\nHere n is the number of data points, yi the data value at time ti, and Y(ti) the function value obtained by fitting the data. This estimator is the same function we minimize during the fitting process (the classical R2 parameter, also used as a minimization objective, is not a good reporter for a nonlinear problem). A first selection criterion of the RGS algorithm ensures that no accepted parameter set has an accuracy below 95%. A consistent statistical interpretation of the process requires that the order of magnitude and, especially, the sign of each parameter be the same in all realizations. Therefore, even if different combinations of the fitting parameters yield acceptable descriptions of the experimental results, the qualitative mechanisms that control spheroid growth can be satisfactorily identified. We thus find a distribution for each parameter and select the median as its representative value.",
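A compact sketch of how such an RGS loop could be implemented is given below. The data arrays, parameter bounds, seed, and initial condition are placeholders chosen to mirror the description above, not the values used in the paper; reading the 5% criterion as R < 0.05² is likewise our assumption.

```python
# Sketch of the random grid search (RGS) described above. Data, bounds, the
# initial condition and the acceptance threshold are placeholder assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_data = np.array([1.0, 2.0, 4.0, 6.0, 8.0])      # placeholder times (days)
y_data = np.array([5.0, 8.0, 15.0, 28.0, 50.0])   # placeholder total cell counts

def total_cells(theta, t_eval):
    """Integrate Eqs. (1) and return T = S + D at the requested times."""
    r, ps, pd, a_ss, a_sd, a_ds, a_dd = theta
    if ps <= 0.0:
        return None                                # avoid division by zero in Eq. (1a)
    def rhs(t, y):
        S, D = y
        return [r * ps * S * ((ps - pd) / ps - a_ss * S - a_sd * D),
                r * (D + (1 + pd - ps) * S) * (1 - a_dd * D - a_ds * S)]
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [4.0, 1.0], t_eval=t_eval)
    return sol.y.sum(axis=0) if sol.success else None

def R_estimator(theta):
    """Mean squared relative error between model and data (the estimator R)."""
    Y = total_cells(theta, t_data)
    return np.inf if Y is None else np.mean(((y_data - Y) / y_data) ** 2)

rng = np.random.default_rng(1)
accepted = []
for _ in range(200):                               # random sweep of starting points
    theta0 = np.concatenate([rng.uniform(0.01, 1.0, 3),    # r, ps, pd
                             rng.uniform(-2.0, 2.0, 4)])   # interaction coefficients
    fit = minimize(R_estimator, theta0, method="Nelder-Mead")
    r, ps, pd = fit.x[:3]
    if fit.fun < 0.05 ** 2 and r > 0 and 0 <= ps <= 1 and 0 <= pd <= 1 and ps + pd <= 1:
        accepted.append(fit.x)                     # relative error below 5%

if accepted:
    print("median parameter set:", np.median(np.array(accepted), axis=0))
```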
"First, let us answer the following question: given that we start tumorsphere growth from a small CSC seed, what is the minimum size Sm needed for this seed to guarantee CSC population growth? Setting D=0 in Eq. (1a), we see that there are two cases.\nIf differentiation is inhibited, ps>pd, as in the soft and hard experiments discussed below, the linear term dominates and the initial seed may be arbitrarily small: a single cancer stem cell may generate a tumorsphere.\nIf ps<pd, the condition on the initial CSC number is that the quadratic term be large enough, i.e. S0>Sm, with\n(2) $$ S_{m} = \frac{p_{s}-p_{d}}{\alpha_{SS}\,p_{s}}. $$\nWe thus need αSS<0: the CSCs must cooperate to yield additional cancer stem cells starting from a pure CSC seed. A larger cooperative interaction implies that we can use a smaller seed. In this case, the intraspecific interaction coefficient αSS plays a key role in determining growth from the very beginning of the process. It is worth mentioning that in the experiments discussed here the conditions ps<pd and S0>Sm are never satisfied simultaneously. Usually, ps is smaller than pd, and there is a minimum number of stem cells required to ensure stem cell growth. But if a differentiation-inhibiting agent is added to the system, increasing ps, a single cancer stem cell may suffice to generate growth.
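To make Eq. (2) concrete, here is a hypothetical numerical illustration; the values are chosen for clarity and are not fitted values from this study.

```python
# Hypothetical illustration of Eq. (2); values chosen for clarity only.
ps, pd = 0.2, 0.4        # differentiation more likely than symmetric renewal
alpha_ss = -0.05         # cooperative CSC-CSC interaction (negative, as required)
S_m = (ps - pd) / (alpha_ss * ps)
print(S_m)               # 20.0 -> a seed of at least ~20 CSCs would be required
```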
As shown in Additional file 1, we can linearize our equations to describe the initial evolution of a small system, finding that the trajectory in the S−D plane starts as\n(3) $$ D(t) = (D_{0}+S_{0})\left(\frac{S(t)}{S_{0}}\right)^{1/(p_{s}-p_{d})} - S(t), $$\nwhere S(0)=S0 and D(0)=D0. Initially, if pd>ps, the number of differentiated cells increases while the number of stem cells decreases, and the representative point approaches the D-axis. If there is growth in the stem cell subpopulation, it is due to the nonlinear terms.\nOur model therefore generates a simple analytical description of the early stages of tumorsphere evolution and specifies the conditions for a successful implantation of the initial cancer stem cell seed. Unless a potent anti-differentiation agent is added to the growth medium, we expect the differentiation probability pd to be larger than ps. If so, Eq. (2) predicts the minimum number of stem cells needed to initiate successful spheroid growth. This number depends only on ps, pd, and the intraspecific interaction between cancer stem cells, which must be cooperative. Weak cooperation or a small ps would mean that the tumorsphere must be started from a large nucleus.\nIn the next section we review the experimental results reported in [41] and determine the model parameters.", "We used our model to analyze the results of Wang and coworkers [41]. These authors studied the growth of breast cancer cell cultures belonging to three different cell lines: MCF7, MDA-MB-231 and MDA-MB-435. For each of these tumor lines they grew tumorspheres under three different environmental conditions, as detailed below. In all cases the spheroids initially have 4-5 cells that originate from a single CSC. Since only the MDA-MB-231 cell line yielded bona fide round spheroids for all three experimental specifications, we use this line to compare our findings with the experimental results. To facilitate the implementation of the model presented in [18], we report the data in terms of cell numbers.", "In the soft experiment, cells were cultured using soft (0.05%) agar as the matrix surface for cell contact. Differentiation inhibitors were added to the growth medium to increase the CSC fraction. Under these conditions there is little incentive for the stem cells either to duplicate or to leave their quiescent state; only the tendency to build a suitable niche may break the quiescence. Hence their small basal growth rate. As a result, a slow exponential growth of CSCs prevails in the early stages of tumorsphere growth, as depicted in Fig. 3. Such behavior is predicted by Eq. (3) and Eqs. (A3) and (A4) in Additional file 1, but the basal growth rate is so small that the process appears to be almost linear. The CSC population (red line) is always much larger than its DCC counterpart; in fact, Wang et al. reported 95% CSCs at day 8 with a low growth rate. The distribution of the fitting values generated by the RGS method is shown in Additional file 2, Fig. A.
Note there, and in Table 1, the very high (close to unity) value of ps, the positive sign of the intraspecific interaction coefficients, and the negative sign of the interspecific interaction coefficients.\nFig. 3 Growth in the soft experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a soft substrate. The blue line fits the total cell population.\nTable 1 The three chosen parameter sets obtained from fits to the experimental data:\nConstant | Units | Soft | Hard | Control\nr | [1/days] | 0.0685 | 0.1335 | 0.0240\nps | none | 0.9701 | 0.7124 | 0.1646\npd | none | 0.0019 | 0.0000 | 0.3911\nαSS | [1/cells] | 0.0873 | -0.0456 | 0.0028\nαSD | [1/cells] | -0.4185 | -0.5280 | 0.0266\nαDS | [1/cells] | -0.2061 | -0.1376 | -1.0683\nαDD | [1/cells] | 0.3668 | 1.8329 | -0.3087\nS/(S+D) at 8 days | none | 0.9479 | 0.913 | 0.141", "In the hard experiment, cells were cultured using hard (30%) agar as the contact matrix surface. Differentiation inhibitors were also added to the growth medium. For this experiment we expect the model to describe a high fraction of CSCs, as in soft, but now with a higher proliferation rate. Applying the RGS method to this data set, we see that this is indeed so (see Table 1 and Additional file 2, Fig. B for the resulting parameters), obtaining the curves depicted in Fig. 4. At early times, growth is nearly linear, as observed in soft, but only for the first four days, speeding up afterwards. The CSCs outnumber the DCCs, reaching 91% of the cell population by day 8, consistent with the results reported in [41]. This fraction is slightly lower than in soft but would become much larger at later times.\nFig. 4 Growth in the hard experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a hard substrate. The hard substrate yields a faster growth rate than the soft substrate and, at late times, a higher fraction of CSCs.\nThe symmetric CSC reproduction probability is still high, but noticeably lower than in soft, and the basal rate is twice that in soft. The interspecific interaction coefficients are negative, as in soft, but the CSC intraspecific interaction coefficient is now negative, too.",
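As a consistency check, Eqs. (1) can be integrated with the median parameter sets of Table 1 and the resulting day-8 CSC fractions compared with the last row of the table. The sketch below assumes an initial condition of 5 CSCs and no DCCs; the paper states only that spheroids start from 4-5 cells originating from a single CSC, so this choice is an assumption.

```python
# Integrate Eqs. (1) with the median parameter sets of Table 1 and report the
# CSC fraction at day 8. The initial condition (5 CSCs, 0 DCCs) is an assumption.
import numpy as np
from scipy.integrate import solve_ivp

TABLE1 = {  # name: (r, ps, pd, aSS, aSD, aDS, aDD), copied from Table 1
    "soft":    (0.0685, 0.9701, 0.0019,  0.0873, -0.4185, -0.2061,  0.3668),
    "hard":    (0.1335, 0.7124, 0.0000, -0.0456, -0.5280, -0.1376,  1.8329),
    "control": (0.0240, 0.1646, 0.3911,  0.0028,  0.0266, -1.0683, -0.3087),
}

def rhs(t, y, r, ps, pd, a_ss, a_sd, a_ds, a_dd):
    S, D = y
    return [r * ps * S * ((ps - pd) / ps - a_ss * S - a_sd * D),
            r * (D + (1 + pd - ps) * S) * (1 - a_dd * D - a_ds * S)]

for name, theta in TABLE1.items():
    sol = solve_ivp(rhs, (0.0, 8.0), [5.0, 0.0], args=theta)
    S, D = sol.y[:, -1]
    # should land near the table's last row (0.948, 0.913, 0.141), up to the
    # assumed initial condition
    print(f"{name:8s} CSC fraction at day 8: {S / (S + D):.3f}")
```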
"In the control experiment, cells were cultured using hard (30%) agar as the contact matrix surface, but no differentiation inhibitor was added to the medium. The stem-cell-promoting factors EGF and bFGF were replaced by neutral serum and the cells were grown on a hard substrate [41]. In this case, although the spheroid cannot preserve its spherical shape at late times, a fitting attempt, shown in Fig. 5, is informative (the corresponding boxplots are shown in Additional file 2, Fig. C). Although the CSC number remains nearly constant, the DCCs can proliferate indefinitely, leading to fast overall growth.\nFig. 5 Growth in the control experiment. Fitting the control medium data predicts unlimited growth, faster than in either soft or hard, but now driven by the DCCs. The number of CSCs does not increase.\nAll the new relevant information obtained from fitting the experimental data is summarized in Table 1, where we report the values of the parameters of our model. These values have an accuracy of 98% and their corresponding distributions are reported in Additional file 2. Furthermore, in Table 2 we report some quantities, derived from the parameters in Table 1, that will be useful in the following sections.\nTable 2 Useful quantities derived from the values of Table 1:\nConstant | Units | Soft | Hard | Control\n1/r | [days] | 14.59 | 7.49 | 41.66\nrp | [1/days] | 0.0663 | 0.0951 | -0.0054\n1/(rp) | [days] | 15.08 | 10.52 | -185\np=(ps−pd) | none | 0.9682 | 0.7124 | -0.2265", "The computed basal growth rate r is 0.069 day−1, which means that the PDT is close to 15 days. This is consistent with the results obtained by Wang et al. [41] using flow cytometry, but somewhat longer than typical cancer stem cell doubling times, which range from 3 to 11 days depending on tumor type and culture conditions [47, 48]. DCCs normally reproduce faster but, because in this model r represents the average growth rate of the whole population, we recover a PDT consistent with that of the dominating CSCs. This lends support to our modeling assumption of a single basal growth rate.\nQuiescence is the prevalent state of the stem cells. Since their function is to replenish dead or damaged cells, they enter the cycle when their niches signal the need for new cells. In soft (and in hard) the addition of differentiation inhibitors implies that the CSCs always record low DCC populations. This drives them into the cycle, where they divide but, prevented from differentiating, overwhelmingly generate new CSCs. Differentiation is very unlikely (pd=0.0019) and we may neglect it to simplify the analysis. If we do so, there is no linear contribution of the CSCs to DCC generation. With the parameter values in Table 1, the equilibrium point where the two kinds of cells coexist is located at (S∗,D∗)=(−21.7,−7.0) cells. This point lies in the third quadrant, indicating that there is no physical/biological coexistence of the two populations. There are no attractors in the first quadrant and all trajectories diverge. This confirms that CSCs lose their normal quiescent state in a continuous (and futile) attempt to produce more DCCs.\nThe (positive) intraspecific interaction coefficients αii are here directly related to the individual maximum population sizes of the respective subpopulations. If we assumed that the two subpopulations did not interact, αij=0 for i≠j, Eq.
(1a) would read\n(4) $$ \frac{dS}{dt} = rS\left[(p_{s}-p_{d}) - p_{s}\,\alpha_{SS}\,S\right], $$\nwhich is a logistic equation that leads to a maximum population size Sc=(ps−pd)/(psαSS)≃12 cells. In the same way, from Eq. (1b), we obtain Dc=1/αDD≃2 cells for the DCCs, which is six times smaller than Sc. Therefore, if there were no interactions between subpopulations, our model would predict a 14-cell spheroid, a size that would be reached by day 17. Interactions between the populations are needed to understand the faster growth observed in the experiment. The negative values of the interspecific interaction coefficients, αij<0 for i≠j, lead to a positive feedback loop: an increase in one subpopulation drives an increase in the other. The numbers in Table 1, especially the relatively large value of αSD (five times that of αSS) and the relatively low value of αDS (less than half of αDD), indicate that this interplay favors a net increase in CSC number but is not strong enough to lead to an increase in DCC number.\nThe feedback loop mechanism is activated to generate a suitable niche, which requires a low S/(S+D) fraction. The inhibition of differentiation causes the CSCs to continuously reproduce in a frustrated attempt to recreate the DCC population required by the niche. Since the population equilibrium corresponding to a stable niche is never reached, cycling CSCs seldom return to quiescence. In Fig. 6, the fraction S/(S+D) is depicted for the three experiments up to day ten. Due to the inhibitor’s efficacy, this fraction falls very slowly for the soft and hard substrates (light blue and orange lines, respectively), but decays freely in the control environment.\nFig. 6 CSC fractions. Time evolution of the cancer stem cell fraction predicted by the model for the three experiments. In both soft and hard the stem cell fraction remains very high, since the stem cell maintenance factors are a barrier to DCC generation. In control the differentiation barrier is not present and the stem cell fraction decreases towards the value corresponding to the niche.", "When the substrate hardens, the environmental conditions that mediate cell-to-cell signaling change and the CSC phenotype becomes more amenable to proliferation, as seen by comparing Figs. 3 and 4. The growth rate r, which still represents the PDT of the CSCs because of their prevalence, is twice as large as that corresponding to growth on the soft substrate. Our interpretation is that increasing the substrate hardness alters the CSC phenotype required to reach the cell fraction that regulates niche size. As in soft, the CSCs try to increase the DCC population, but now they are immersed in a different environment.
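The plateau sizes and timescales used in this discussion, and in the subsections that follow, can be checked directly from the fitted parameters. A short sketch (values copied from Table 1; the last line reproduces the control entry of the 1/(rp) row of Table 2):

```python
# Quick check of the derived quantities quoted in the discussion
# (parameter values copied from Table 1).
ps, pd, a_ss, a_dd = 0.9701, 0.0019, 0.0873, 0.3668   # soft
S_c = (ps - pd) / (ps * a_ss)      # logistic plateau of Eq. (4): ~11.4 (~12) cells
D_c_soft = 1.0 / a_dd              # soft DCC plateau from Eq. (1b): ~2.7 (~2) cells
D_c_hard = 1.0 / 1.8329            # hard DCC plateau: ~0.55 cells
r_c, ps_c, pd_c = 0.0240, 0.1646, 0.3911              # control
tau = 1.0 / abs(r_c * (ps_c - pd_c))                  # ~184 days (Table 2, 1/(rp))
print(f"S_c={S_c:.1f}, D_c(soft)={D_c_soft:.1f}, D_c(hard)={D_c_hard:.2f}, tau={tau:.0f} d")
```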
The doubling of the growth rate r, the reduction of the symmetric reproduction probability ps, and the emergence of a large fraction (>50%) of asymmetric divisions indicate that the direct effect of the differentiation-inhibiting factors is weaker than in soft. Indirect effects appear through the interspecific coefficients, especially the relatively large and negative (−0.53) αSD. As Fig. 6 shows, by day 8 the DCC fraction is not much larger than in soft, indicating that the attempt to establish the niche has also failed in hard.\nMore remarkable is that the intraspecific CSC coefficient has changed its sign, an indication that CSCs record a stressed environment that they may perceive as due to the presence of damaged tissue. This generates a phenotype different from that in soft [18], which accelerates cell division. On the other hand, the large and positive DCC intraspecific coefficient, αDD=1.83, implies a huge increase in the inhibitory signaling between DCCs with respect to soft. In this case, the discussion following Eq. (4) suggests for this system a maximum intrinsic DCC number smaller than unity, Dc=1/αDD≃0.5 cells, meaning that on this substrate the DCC subpopulation would not be able to survive without the CSCs.\nThe general picture is that of a growing tumorsphere whose response to the substrate is to increase its cell number as fast as possible, aiming to reach a DCC fraction that equilibrates the niche, a goal that cannot be attained due to the presence of differentiation-inhibiting agents. The influence of the niche, as in [18], is thus a cornerstone for the biological interpretation of the model results.", "As mentioned in the previous section, we cannot expect the model to give a completely accurate description of the control substrate experiment, but its interpretation may shed light on the system dynamics. In this condition CSCs are allowed to differentiate freely. These cells record an environment where the proportion of DCCs increases monotonically and the population fractions should tend to those corresponding to niche equilibrium. However, Figs. 5 and 6 suggest that there is no limit to the increase of the DCC fraction. We conjecture that this behavior may be explained by migration: after the spheroid reaches a given size, cells start to migrate and the average number of DCCs recorded by each CSC does not increase. One consequence is that the CSC number remains stationary, as shown in Fig. 5. In fact, their effective PDT is about six months, i.e., they are generally quiescent, as they should be.\nFurthermore, note that the whole-population PDT implies that the initial 5 cells would take 41 days (1/0.024) to double, which is almost three times slower than in soft (15 days). To explain the rapid spheroid growth we need to consider the contribution of the interactions. From Table 1 we see that interactions favor DCCs and restrict CSC proliferation. A more detailed analysis of the evolution of the two subpopulations reveals the following.\nCSCs: The positivity and very low absolute values of αSS and αSD ensure the stability of the CSC number. The dominant contribution to the change in the CSC number is given by the linear term, whose timescale |r(ps−pd)|−1≈185 days means that the CSCs are quiescent during the whole experiment.\nDCCs: Approximating Eq. (1b) with S→0 gives the equilibrium D=1/αDD which, because αDD<0, is negative; the quadratic term therefore always promotes DCC number growth. As mentioned in the case of hard, a negative sign in an intraspecific interaction parameter is related to signaling loss. Sphere disaggregation in control suggests that the hard substrate promotes migration, weakening cell-to-cell interactions. Given that the CSC pool remains constant while many cells move away from the spheroid, we can also conclude that the migrating cells are likely to be differentiated.\nOf note, our analysis of these experiments implies that the absence of the stem cell growth factors in control leads to the disappearance of the feedback loop that plays such a crucial role in both soft and hard. The existence of the feedback loops detected in soft and hard can similarly be inferred from experiments carried out with the cancer lines SUM159, MCF-7, and T47D, which were also cultured with stem cell growth factors [9, 18].\nEven if the CSC fractions are not far from unity in both soft and hard, the detailed reasons for their behavior are different. In both cases the cell subpopulations assist each other, generating a positive-feedback cycle that leads to continuous growth, an indication that cell-to-cell signaling is crucial in determining the process. The effect of the substrate on intraspecific interactions in hard is strong. There, CSCs are weakly promoting, but DCCs are so strongly inhibitory that the DCC population would disappear if it were not for the significant cooperation from the CSCs, which is expressed mainly through a considerable fraction of asymmetric divisions. The inhibition between DCCs is also likely to induce the phenotype change indicated by the large and negative value of αSD. The parameter αDS, which controls the influence of cancer stem cells on differentiated cancer cells, is always negative, and markedly so in control, suggesting that CSCs have a promoting and protective influence on DCCs, a phenomenon already observed by Kim and coworkers, who found that CSCs protect DCCs from anoikis, promoting tumor formation when the two subpopulations are mixed [49]. The smaller magnitude of αDS in the soft and hard experiments suggests that stem cell maintenance factors weaken, but do not cancel, this protective effect.", "Additional file 1 Some consequences of equation (1). The onset of growth and the fate of a tumorsphere.\nAdditional file 2 Distribution graphs from fitting procedure." ]
[ "Background", "Methods", "Fitting with the model", "Results", "The initial stages", "Experimental data", "Soft substrate", "Hard substrate", "Control substrate", "Discussion", "Soft substrate", "Hard substrate", "Control substrate", "Conclusion", "Supplementary Information", "" ]
[ "For some time, it has been known that the presence of cancer stem cells (CSCs) is important for the development of many solid tumors [1–6]. According to the CSC hypothesis these cells are often crucial for the development of resistance to therapeutic interventions [7, 8]. In healthy tissues the proportion of stem cells is small; homeostatic equilibrium is maintained through the signals that the stem cells receive from their niches. The onset of cancer is likely to destroy this equilibrium and cancerous tissues may exhibit a higher proportion of stem cells than normal tissues [9]. This increased proportion of cancer stem cells may underlie the aggressive behavior of high-grade tumors [2, 10]. As recently explained by Taniguchi et al. [11], the cross-talk between tumor initiating (stem) cells and their niche microenvironment is a possible therapeutic target. Understanding the nature of the interactions between CSCs and their environment is therefore important for the development of effective intervention procedures.\nInteresting mathematical models have been developed to explain various properties of stem-cell-driven tissue growth. Stiehl and Marciniak-Czochra proposed a mathematical model of cancer stem cell dynamics to describe the time evolution of a leukemic cell line competing with healthy hematopoiesis [12]. This group later provided evidence that the influence of leukemic stem cells on the course of the disease is stronger than that of non-stem leukemic cells [13]. Yang, Plikus and Komarova used stochastic modeling to explore the relative importance of symmetric and asymmetric stem cell divisions, showing that tight homeostatic control is not necessarily associated with purely asymmetric divisions and that symmetric divisions can help to stabilize mouse paw epidermis lineage [14]. Recently, Bessonov and coworkers developed a model that allowed them to determine the influence of the population dynamics on the time-varying probabilities of different cell fates and the ascertainment of the cell-cell communication factors influencing these probabilities [15]. These authors suggest that a coordinated dynamical change of the cell behavior parameters occurs in response to a biochemical signal, which they describe as an underlying field. Here we will describe the effects of cellular interactions using nonlinear terms instead.\nLive cells are generally sensitive to substratum rigidity and texture [16]. A growing tumor must compete for space with the surrounding environment; the resulting mechanical stresses generate signals that impact on the tumor cells. Cells integrate these mechanical cues and respond in ways that are related to their phenotype. Their active response may also lead to phenotype modifications [17, 18]; in fact, mechanical cues generated by the environment can trigger cancer cell invasion [19]. Environmental stiffness may then be associated with tumor progression, a process that can also be promoted by mechanically activated ion channels [20].\nWhat is the influence of the mechanical environment on cancer stem cells? At each generation, CSCs divide symmetrically, generating either two new CSCs or two differentiated cancer cells (DCCs), or asymmetrically, generating one CSC and one differentiated cancer cell [7, 21]. Quorum sensing controls differentiation of healthy stem cells, but it is thought to be altered in cancer stem cells [22]. 
Mechanical inputs are an important component of the altered control mechanism and can be assumed to play a role in the fate of the cancer stem cells. In vitro experiments have been designed to probe the influence of mechanical stresses of various types on tumor cells. The solid-stress inhibition of multicellular spheroid growth was already demonstrated by Helmlinger and coworkers in 1997 [23]. The results of these experiments were shown to follow allometric laws [24]. Interestingly, Koike et al. showed that spheroid formation with Dunning R3327 rat prostate carcinoma AT3.1 cells is facilitated by solid stress [25].\nA study by Cheng et al. suggested how tumors grow in confined locations where levels of stress are high, showing that growth-induced solid stress can affect cell phenotype [26]. Using spheroid cell aggregates, Montel et al. showed that applied pressure may be used to modulate tumor growth [27] and observed that cells are blocked by compressive stresses at the G1 checkpoint [28]. The organization of cells in a spheroid is modified by physical confinement [29], which likewise modifies the proliferation gradient [30]. The stiffness of hydrogels has been shown to determine the shape of tumor cells, with high stiffnesses leading to spheroidal cells, a feature known to occur in in vivo tumors [31]. By studying the behavior of adult neural stem cells under various mechanical cues, Saha et al. showed that soft gels favored differentiation into neurons while harder gels promoted glial cultures. Importantly, they also showed that low substrate stiffness inhibited cell spreading, self-renewal, and differentiation [32]. Osteocyte-like cells were shown to significantly induce compaction of tumor spheroids formed using breast cancer cells [33]. Matrix stiffness was shown to affect, through mechanotransduction events, the osteogenic outcome of human mesenchymal stem cell differentiation [34]. HeLa cells were used to show that both an attractive contact force and a substrate-controlled remote force contribute to the formation of large-scale multicellular structures in cancer [35].\nFifteen years ago, Discher, Janmey, and Wang not only explained that the stiffness of the anchoring substrate can have a strong influence on the cell state, but they also indicated that stem cell differentiation may be influenced by the nature of the substrate [36]. It is relevant that naive mesenchymal stem cells were shown to commit to various phenotypes with high sensitivity to tissue elasticity: They differentiate preferably into neurons and osteocytes if they are cultured on soft and rigid matrices, respectively [37]. On the other hand, human mesenchymal stem cells adhere onto precalcified bones, which are softer than calcified bones [16]. It is also known that hydrodynamic shear stress promotes the conversion of primary patient epithelial tumor cells into specific cancer stem-like cells [38]. Smith et al. found that the mechanical context of the differentiation niche can drive endothelial cell identity from human-induced pluripotent stem cells, showing that stiffness drives mesodermal differentiation, leading to endothelial commitment [39]. Thus, microenvironments help specify stem cell lineages, although it may be difficult to decouple the influence of mechanical interactions and surface topography and stiffness from biochemical effects [16, 39]. 
Since they are grown in the absence of the complex signaling system prevalent in the environment of real tumors, tumorspheres, spheroids formed by clonal proliferation out of permanent cell lines, tumor tissue, or blood [40], are suitable candidates to probe the influence of mechanical stimuli on stem-cell-fueled cancer growth.\nWang et al. cultured breast CSCs on soft and hard agar matrix surfaces, investigating the effects that substrate stiffness has on cell state and proliferation [41]. These authors showed that breast cancer stem cells can be kept in states of differentiation, proliferation or quiescence depending on a combination of adherent growth and stem cells growth factors, but they focused on the experimental possibilities and did not draw conclusions about how these agencies may modify the stem cell niche to lead to the observed behavior. Recently, we developed a two-population tumorsphere model to identify the role of the intraspecific and interspecific interactions that determine tumorsphere growth [18]. Application of our model to three breast cancer cell lines studied by Chen and coworkers [9] indicates that while intraspecific interactions are inhibitory, interspecific interactions promote growth. This feature of interspecific interactions was interpreted in terms of the stimulation by CSCs of the growth of DCCs in order to consolidate their niches and of the plasticity of the DCCs to dedifferentiate into CSCs [18]. Here we use this model to analyze the experimental results of Wang et al. [41], discussing how substrate stiffness influences growth and finding that the concept of cancer stem cell niche is central for its understanding. In the next section we review the model of Ref. [18] and in the following sections we apply it to the results of Ref.[41] and discuss their implications.", "We model mathematically the growth of a tumorsphere considering two cell populations: Cancer stem cells (S) and differentiated cancer cells (D). By including in the last class all cells with any degree of differentiation we can isolate the role played by the stem cells. We further assume that: \nThe single basal growth rate r characterizes the timescale of the system. By construction, it matches a priori the population doubling time (PDT) of the DCCs. This provides a more suitable description than the previous model with two basal growth rates [18] since, in general, it is not possible to discriminate between these rates in experiments such as that of Ref. [41].When a CSC undergoes mitosis there is a probability ps that two new CSCs are generated, a probability pd that two DCCs are generated, and a probability pa that there is an asymmetric division. Because of normalization, pa=1−pd−ps. These probabilities should be multiplied by the basal growth rate r, see Fig. 1, in such a way that it is possible to reasonably model the effective creation rates of new cells.\nFig. 1Schematic representation of the cell division outcomes and the intrinsic growth rates. Each of these is given by the product of the basal growth rate r and the probability of the respective outcome. a Parent cells replicate themselves originating daughter cells in their same class. b CSC differentiation occurs in two ways: if asymmetric, the S population remains unchanged; if symmetric, S decreasesThe members of each subpopulation interact with each other (intraspecific interactions) and with the members of the other subpopulation (interspecific interactions), c.f. Fig 2. 
These interactions are described by proportionality factors αij whose signs and magnitudes quantify the number of cells that are created in the system due to interactions with preexisting cells. The indices i and j may represent S or D. They indicate either intraspecific (i=j) or interspecific (i≠j) interactions.\nFig. 2The signs of the coefficients αij indicate whether the interactions promote or inhibit growth. Cells already present in the tumorsphere either favor (αij<0) or hinder (αij>0) the production of new cells. Arrows indicate the influence of each population on the newborn cells\nThe single basal growth rate r characterizes the timescale of the system. By construction, it matches a priori the population doubling time (PDT) of the DCCs. This provides a more suitable description than the previous model with two basal growth rates [18] since, in general, it is not possible to discriminate between these rates in experiments such as that of Ref. [41].\nWhen a CSC undergoes mitosis there is a probability ps that two new CSCs are generated, a probability pd that two DCCs are generated, and a probability pa that there is an asymmetric division. Because of normalization, pa=1−pd−ps. These probabilities should be multiplied by the basal growth rate r, see Fig. 1, in such a way that it is possible to reasonably model the effective creation rates of new cells.\nFig. 1Schematic representation of the cell division outcomes and the intrinsic growth rates. Each of these is given by the product of the basal growth rate r and the probability of the respective outcome. a Parent cells replicate themselves originating daughter cells in their same class. b CSC differentiation occurs in two ways: if asymmetric, the S population remains unchanged; if symmetric, S decreases\nSchematic representation of the cell division outcomes and the intrinsic growth rates. Each of these is given by the product of the basal growth rate r and the probability of the respective outcome. a Parent cells replicate themselves originating daughter cells in their same class. b CSC differentiation occurs in two ways: if asymmetric, the S population remains unchanged; if symmetric, S decreases\nThe members of each subpopulation interact with each other (intraspecific interactions) and with the members of the other subpopulation (interspecific interactions), c.f. Fig 2. These interactions are described by proportionality factors αij whose signs and magnitudes quantify the number of cells that are created in the system due to interactions with preexisting cells. The indices i and j may represent S or D. They indicate either intraspecific (i=j) or interspecific (i≠j) interactions.\nFig. 2The signs of the coefficients αij indicate whether the interactions promote or inhibit growth. Cells already present in the tumorsphere either favor (αij<0) or hinder (αij>0) the production of new cells. Arrows indicate the influence of each population on the newborn cells\nThe signs of the coefficients αij indicate whether the interactions promote or inhibit growth. Cells already present in the tumorsphere either favor (αij<0) or hinder (αij>0) the production of new cells. Arrows indicate the influence of each population on the newborn cells\nWe can describe the evolution of the two interacting populations by generalizing the standard equations for two competing species (see, p. ej. [42], p. 67). 
$$\frac{dS}{dt} = r\,[p_s S]\left\{ \frac{p_s - p_d}{p_s} - \alpha_{SS}S - \alpha_{SD}D \right\}, \tag{1a}$$

$$\frac{dD}{dt} = r\,[D + (1 + p_d - p_s)S]\left\{ 1 - \alpha_{DD}D - \alpha_{DS}S \right\}. \tag{1b}$$

The first term inside the braces on the right-hand side of Eq. (1a) corresponds to the net intrinsic creation of new CSCs (symmetric CSC divisions minus divisions yielding two DCCs). Note that asymmetric divisions do not change the number of cancer stem cells, but symmetric differentiation removes the parent CSC from the S population, as illustrated in Fig. 1. The second and third terms correspond, respectively, to the effects on the CSC population of the interactions with other CSCs and with differentiated cancer cells.

The factor in the square brackets on the right-hand side of Eq. (1b) is proportional to the rate of creation, in the absence of interactions, of differentiated cells due to the division of other DCCs (first term), plus the asymmetric division and differentiation of CSCs (second term). The first term between braces corresponds to the interaction-free growth of the system. The second and third terms represent, respectively, the influences of the other DCCs and of the CSCs on the differentiated cancer cell population. The effect of cell-cell interactions on cell creation is assumed to be proportional to the abundances of the populations emitting the signals and of those receiving them; therefore, the corresponding terms are quadratic in the populations. The interaction strengths are represented by the coefficients αij. Negative interaction coefficients (αij<0) describe growth-promoting interactions, i.e. the j population promotes the growth of the i population. Positive values of αij describe the growth inhibition of population i by population j. In particular, as shown in Fig. 2, αSS tells us how CSCs promote or inhibit the creation of new CSCs, αDD tells us how DCCs promote or inhibit the creation of new DCCs, while αDS quantifies the influence of CSCs on the generation of new differentiated cancer cells and αSD the influence of DCCs on the generation of new cancer stem cells.

These differential equations have no analytic solutions. Their numerical solutions yield the time evolution of both subpopulations, S(t) and D(t). In Additional file 1 we summarize some properties of Eqs. (1) and their solutions that we will use in our analysis.
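Since Eqs. (1) must be solved numerically, any standard ODE integrator suffices. The following minimal Python sketch (ours, not the authors' code) shows one way to obtain S(t) and D(t); it assumes SciPy is available, and the parameter values and the 5-cell seed are illustrative placeholders rather than fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tumorsphere_rhs(t, y, r, ps, pd, a_ss, a_sd, a_ds, a_dd):
    """Right-hand sides of Eqs. (1a)-(1b) for y = (S, D)."""
    S, D = y
    dS = r * ps * S * ((ps - pd) / ps - a_ss * S - a_sd * D)
    dD = r * (D + (1 + pd - ps) * S) * (1 - a_dd * D - a_ds * S)
    return [dS, dD]

# Illustrative (not fitted) parameters: r, ps, pd and the four alphas.
theta = (0.1, 0.6, 0.3, -0.05, -0.4, -0.2, 0.4)
sol = solve_ivp(tumorsphere_rhs, (0.0, 10.0), [4.0, 1.0], args=theta,
                t_eval=np.linspace(0.0, 10.0, 101))
S, D = sol.y
print("total population T = S + D at day 10:", S[-1] + D[-1])
```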
Fitting with the model
The data sets correspond to the total cell number T in the spheroids. Thus, we fit the data with T = S + D, where S and D are the numerical solutions of the system of Eqs. (1). Thereby, our model allows us to obtain information on the dynamics of the CSC and DCC subpopulations and, in particular, on the time evolution of the CSC fraction, from data corresponding to the whole spheroid. Due to the scarcity of data points and the ensuing difficulties of the optimization problem, fitting our model to the data leads to different sets of possible parameter values. To obtain the optimal set, we use a random grid search (RGS) algorithm, which randomly sweeps a domain of initial conditions in parameter space. In our case, this domain is bounded by physically reasonable assumptions, such as probabilities pi ∈ [0,1] with i ≡ s,d and a growth rate r > 0. We also require the outcome of the fitting process to give normalized positive probabilities, positive populations in the range of validity of the data, and CSC fractions of the order of those reported by Wang et al. We then collect in a histogram all the parameter sets that fit the data points with a relative error lower than 5%. To do this we define a relative error measure given by the nonlinear estimator

$$R = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - Y(t_i)}{y_i}\right)^2.$$

Here n is the number of data points, yi the data value at time ti, and Y(ti) the function value obtained by fitting the data. This estimator is the same function we minimize in the fitting process (the classical R² parameter, also used as a minimization objective, is not a good reporter for a nonlinear problem). A first selection criterion of the RGS algorithm ensures that no accepted parameter set has an accuracy below 95%. A consistent statistical interpretation of the process requires that the order of magnitude and, especially, the sign of each parameter be the same in all realizations. Therefore, even if different combinations of the fitting parameters yield acceptable descriptions of the experimental results, the qualitative mechanisms that control spheroid growth can be satisfactorily identified. We thus find a distribution for each parameter and select the median as its representative value.
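The RGS procedure lends itself to a compact implementation. The Python sketch below is a schematic illustration, not the code used in this work: the data arrays, parameter bounds, seed composition, and number of trials are all hypothetical, and the additional acceptance checks (positive populations, plausible CSC fractions) are omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
data_t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])     # hypothetical time points
data_y = np.array([5.0, 9.0, 16.0, 30.0, 55.0])  # hypothetical total cell counts

def rhs(t, y, r, ps, pd, a_ss, a_sd, a_ds, a_dd):
    S, D = y
    return [r * ps * S * ((ps - pd) / ps - a_ss * S - a_sd * D),
            r * (D + (1 + pd - ps) * S) * (1 - a_dd * D - a_ds * S)]

def estimator_R(theta):
    """Mean squared relative deviation between data and model curve."""
    sol = solve_ivp(rhs, (data_t[0], data_t[-1]), [4.0, 1.0],
                    t_eval=data_t, args=theta)
    if not sol.success or sol.y.shape[1] != data_t.size:
        return np.inf                            # discard failed integrations
    Y = sol.y.sum(axis=0)                        # total population T = S + D
    return np.mean(((data_y - Y) / data_y) ** 2)

accepted = []
for _ in range(20000):
    r = rng.uniform(0.01, 1.0)                   # growth rate r > 0
    ps, pd = rng.dirichlet([1.0, 1.0, 1.0])[:2]  # ps, pd >= 0 with ps + pd <= 1
    alphas = rng.uniform(-2.0, 2.0, size=4)      # a_ss, a_sd, a_ds, a_dd
    theta = (r, ps, pd, *alphas)
    if estimator_R(theta) < 0.05:                # keep sets within the 5% cut
        accepted.append(theta)

if accepted:                                     # medians as representative values
    representative = np.median(np.array(accepted), axis=0)
```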
The initial stages
First, let us answer the following question: given that we start tumorsphere growth from a small CSC seed, what is the minimum size Sm needed for this seed to guarantee CSC population growth? By setting D=0 in Eq. (1a), we see that there are two cases:

If differentiation is inhibited, ps>pd, as in the soft and hard experiments discussed below, the linear term dominates and the initial seed may be arbitrarily small: a single cancer stem cell may generate a tumorsphere.

If ps<pd, the condition on the initial CSC number is that the quadratic term be large enough, i.e. S0>Sm, with

$$S_m = \frac{p_s - p_d}{\alpha_{SS}\,p_s}. \tag{2}$$

We thus need αSS<0: the CSCs must cooperate to yield additional cancer stem cells starting from a pure CSC seed. A larger cooperative interaction implies that we can use a smaller seed. In this case, the intraspecific interaction coefficient αSS plays a key role in determining growth from the very beginning of the process. It is worth mentioning that in the experiments discussed here the conditions ps<pd and S0>Sm are never satisfied simultaneously. Usually, ps is smaller than pd, and there is a minimum number of stem cells required to ensure stem cell growth. But if a differentiation-inhibiting agent is added to the system, increasing ps, a single cancer stem cell may suffice to generate growth.

As shown in Additional file 1, we can linearize our equations to describe the initial evolution of a small system, finding that the trajectory in the S-D plane starts as

$$D(t) = \frac{(D_0 + S_0)}{S_0}\,S(t)^{1/(p_s - p_d)} - S(t), \tag{3}$$

where S(0)=S0 and D(0)=D0. Initially, if pd>ps, the number of differentiated cells increases while the number of stem cells decreases, and the representative point approaches the D-axis. If there is growth in the stem cell subpopulation, it is due to the nonlinear terms.

Our model therefore generates a simple analytical description of the early stages of tumorsphere evolution and specifies the conditions for a successful implantation of the initial cancer stem cell seed. Unless a potent anti-differentiation agent is added to the growth medium, we expect the differentiation probability pd to be larger than ps. If so, our Eq. (2) predicts the minimum number of stem cells needed to initiate successful spheroid growth. This number depends only on ps, pd, and the intraspecific interaction between cancer stem cells, which must be cooperative. Weak cooperation or a small ps would mean that the tumorsphere must be started from a large nucleus.
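As a quick numerical illustration of the seed-size condition, Eq. (2), the short sketch below uses made-up probabilities with pd > ps and a cooperative (negative) intraspecific coefficient; the values are ours, chosen only to make the arithmetic visible.

```python
# Evaluating Eq. (2) for illustrative probabilities with pd > ps.
ps, pd, a_ss = 0.2, 0.5, -0.05

S_m = (ps - pd) / (a_ss * ps)   # Eq. (2); both factors negative, so S_m > 0
print(f"minimum CSC seed: {S_m:.0f} cells")   # -> 30 cells

# Stronger cooperation (more negative a_ss) shrinks the required seed:
print(f"with a_ss = -0.5: {(ps - pd) / (-0.5 * ps):.0f} cells")  # -> 3 cells
```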
In the next section we review the experimental results reported in [41] and determine the model parameters.

Experimental data
We used our model to analyze the results of Wang and coworkers [41]. These authors studied the growth of breast cancer cell cultures belonging to three different cell lines: MCF7, MDA-MB-231 and MDA-MB-435. For each of these tumor lines they grew tumorspheres under three different environmental conditions, as detailed below. In all cases the spheroids initially have 4-5 cells that originate from a single CSC. Since only the MDA-MB-231 cell line yielded bona fide round spheroids for all three experimental specifications, we use this line to compare our findings with the experimental results. To facilitate the implementation of the model presented in [18], we report the data in terms of cell numbers.
Soft substrate
In the soft experiment, cells were cultured using soft (0.05%) agar as the matrix surface for cell contact. Differentiation inhibitors were added to the growth medium to increase the CSC fraction. Under these conditions there is little incentive for the stem cells to either duplicate or leave their quiescent state; only the tendency to build a suitable niche may break the quiescence. Hence their small basal growth rate. As a result, a slow exponential growth of CSCs prevails in the early stages of tumorsphere growth, as depicted in Fig. 3. Such behavior is predicted by Eq. (3) and by Eqs. (A3) and (A4) in Additional file 1, but the basal growth rate is so small that the process appears almost linear. The CSC population (red line) is always much larger than its DCC counterpart; indeed, Wang et al. reported 95% CSCs at day 8 with a low growth rate. The distribution of the fitting values generated by the RGS method is shown in Additional file 2, Fig. A. Note there, and in Table 1, the very high (close to unity) value of ps, the positive sign of the intraspecific interaction coefficients and the negative sign of the interspecific interaction coefficients.

Fig. 3. Growth in the soft experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a soft substrate. The blue line fits the total cell population.

Table 1. The three chosen parameter sets obtained from fits to the experimental data

Constant           Units      Soft      Hard      Control
r                  [1/days]   0.0685    0.1335    0.0240
ps                 none       0.9701    0.7124    0.1646
pd                 none       0.0019    0.0000    0.3911
αSS                [1/cells]  0.0873    -0.0456   0.0028
αSD                [1/cells]  -0.4185   -0.5280   0.0266
αDS                [1/cells]  -0.2061   -0.1376   -1.0683
αDD                [1/cells]  0.3668    1.8329    -0.3087
S/(S+D) at 8 days  none       0.9479    0.913     0.141
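As a consistency check, one can integrate Eqs. (1) with the fitted soft parameters of Table 1 and compare the resulting day-8 CSC fraction with the tabulated value. The sketch below assumes SciPy and a 5-cell seed split as S0 = 4, D0 = 1; the exact seed composition used in the fits is not reported here, so that split is our assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fitted soft-substrate parameters from Table 1.
r, ps, pd = 0.0685, 0.9701, 0.0019
a_ss, a_sd, a_ds, a_dd = 0.0873, -0.4185, -0.2061, 0.3668

def rhs(t, y):
    S, D = y
    return [r * ps * S * ((ps - pd) / ps - a_ss * S - a_sd * D),
            r * (D + (1 + pd - ps) * S) * (1 - a_dd * D - a_ds * S)]

# Assumed seed: 5 cells originating from a single CSC (S0 = 4, D0 = 1).
sol = solve_ivp(rhs, (0.0, 8.0), [4.0, 1.0], t_eval=[8.0])
S8, D8 = sol.y[:, -1]
print(f"CSC fraction at day 8: {S8 / (S8 + D8):.3f}")  # compare with 0.9479
```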
Hard substrate
In the hard experiment, cells were cultured using hard (30%) agar as the contact matrix surface. Differentiation inhibitors were also added to the growth medium. For this experiment we expect the model to describe a high fraction of CSCs, as in soft, but now with a higher proliferation rate. Applying the RGS method to this data set, we see that this is indeed so (see Table 1 and Additional file 2, Fig. B for the resulting parameters), obtaining the curves depicted in Fig. 4. At early times, growth is nearly linear, as observed in soft, but only for the first four days, speeding up afterwards. The CSCs outnumber the DCCs, reaching 91% of the cell population by day 8, consistent with the results reported in [41]. This fraction is a little lower than in soft, but would become much larger at later times.

Fig. 4. Growth in the hard experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a hard substrate. The hard substrate yields a faster growth rate than the soft substrate and, at late times, a higher fraction of CSCs.

The symmetric CSC reproduction probability is still high, but noticeably lower than in soft, and the basal rate is twice that in soft. The interspecific interaction coefficients are negative, as in soft, but the CSC intraspecific interaction coefficient is now negative, too.
Control substrate
In the control experiment, cells were cultured using hard (30%) agar as the contact matrix surface, but no differentiation inhibitor was added to the medium: the stem-cell promoting factors EGF and bFGF were replaced by neutral serum [41]. In this case, although the spheroid cannot preserve its spherical shape at late times, a fitting attempt, shown in Fig. 5, is informative (the corresponding boxplots are shown in Additional file 2, Fig. C). Although the CSC number remains nearly constant, the DCCs can proliferate indefinitely, leading to fast overall growth.

Fig. 5. Growth in the control experiment. Fitting the control medium data predicts unlimited growth, faster than in either soft or hard, but now driven by the DCCs. The number of CSCs does not increase.

All the new relevant information obtained from fitting the experimental data is summarized in Table 1, where we report the values of the parameters of our model. Those values have an accuracy of 98%, and their corresponding distributions are reported in Additional file 2.
Furthermore, in Table 2 we report some quantities derived from the parameters of Table 1 that will be useful in the following sections.

Table 2. Useful derived quantities from some values of Table 1

Constant     Units     Soft     Hard     Control
1/r          [days]    14.59    7.49     41.66
rp           [1/days]  0.0663   0.0951   -0.0054
1/(rp)       [days]    15.08    10.52    -185
p = ps − pd  none      0.9682   0.7124   -0.2265
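The entries of Table 2 follow directly from the (r, ps, pd) columns of Table 1; the short sketch below recomputes them (small differences in the last digit are due to rounding of the tabulated parameters).

```python
# Reproducing the Table 2 entries from the r, ps, pd values of Table 1.
table1 = {
    "soft":    (0.0685, 0.9701, 0.0019),
    "hard":    (0.1335, 0.7124, 0.0000),
    "control": (0.0240, 0.1646, 0.3911),
}
for name, (r, ps, pd) in table1.items():
    p = ps - pd                      # net symmetric self-renewal bias
    print(f"{name:8s} 1/r = {1 / r:6.2f} d   rp = {r * p:+.4f} 1/d   "
          f"1/(rp) = {1 / (r * p):+8.2f} d   p = {p:+.4f}")
```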
In normal tissues, homeostasis is guaranteed by factors secreted by differentiated cells that inhibit the division and self-renewal of stem cells [22, 43]. Cancer stem cells may partially escape these controls, but their activity is still influenced by their environment. In non-anchored cells, as in the case analyzed in the present work, the clustering of most integrins on the plasma membrane by ECM molecules, and thus the formation of focal adhesions (FAs), is lost. In normal cells such events are sufficient to trigger anoikis, but upregulation of specific integrins can confer anoikis resistance. For instance, αvβ3 integrin has the ability to maintain receptor clustering in non-adherent cells (reviewed in Hamidi and Ivaska [44]). Interestingly, the MDA-MB-231 cells used by Wang et al. [41] are an αvβ3 integrin-overexpressing breast cancer cell line, highly dependent on αvβ3-emanating signals for proliferation and survival [45, 46], and it is likely that changes in the stiffness of the substratum alter integrin clustering and, consequently, cell proliferation.

We would like to emphasize some experimental facts from Ref. [41] that are useful for the interpretation of the results:

A remarkably high percentage (>95%) of the cells cultured under soft and hard conditions with growth factors express the stem cell marker Oct4, which is frequently used as a marker for undifferentiated cells. Oct4 expression must be tightly regulated; too much or too little leads to cell differentiation.

The soft and control experiments show low activity of telomerase, a marker for proliferation. The higher telomerase activity exhibited by hard indicates a faster growth rate. This is consistent with the Ki-67-positive fractions, which are close to 90% for hard and minimal in the other cases.

The high (95%) CSC fraction and low (<5%) proliferation rate observed in soft at day 8 suggest a population largely consisting of quiescent CSCs. The proliferative fraction was higher in hard.

In control, markers indicate a strong dominance of the differentiated state. The stem cell fraction (∼5%) and proliferation rate (5% according to Ki-67 and 22% according to flow cytometry) are both low.
In the soft and hard experiments, cells must adapt to the restrictions imposed by the application of the stem cell maintenance factors EGF and bFGF. We extracted information about the cell dynamics chiefly from four features: the basal growth rate, the CSC fraction, and the intraspecific and interspecific interaction parameters. The parameter sets resulting from fitting the model to the hard, soft and control experiments, summarized in Table 1, are quite different. We next interpret the results of each experiment separately.

Soft substrate
The computed basal growth rate r is 0.069 day⁻¹, which means that the PDT is close to 15 days. This is consistent with the results obtained by Wang et al. [41] using flow cytometry, but somewhat longer than typical cancer stem cell doubling times, which range from 3 to 11 days depending on tumor type and culture conditions [47, 48]. DCCs normally reproduce faster but, because in this model r represents the average growth rate of the whole population, we recover a PDT consistent with that of the dominating CSCs. This lends support to our modeling assumption of a single basal growth rate.

Quiescence is the prevalent state of the stem cells. Since their function is to replenish dead or damaged cells, they enter the cycle when their niches signal the need for new cells. In soft (and in hard) the addition of differentiation inhibitors implies that the CSCs always record low DCC populations. This drives them into the cycle, where they divide but, prevented from differentiating, overwhelmingly generate new CSCs. Differentiation is very unlikely (pd = 0.0019) and we may neglect it to simplify the analysis. If we do so, there is no linear contribution of the CSCs to DCC generation. With the parameter values in Table 1, the equilibrium point where the two kinds of cells coexist is located at (S∗, D∗) = (−21.7, −7.0) cells. This point lies in the third quadrant, indicating that there is no physical/biological coexistence of the two populations. There are no attractors in the first quadrant and all trajectories diverge. This confirms that CSCs lose their normal quiescent state in a continuous (and futile) attempt to produce more DCCs.

The (positive) intraspecific interaction coefficients αii are here directly related to the maximum sizes that the respective subpopulations would reach individually. If we assumed that the two subpopulations did not interact, αij = 0 for i ≠ j, Eq. (1a) would read

$$\frac{dS}{dt} = rS\left[(p_s - p_d) - p_s\,\alpha_{SS}\,S\right], \tag{4}$$

which is a logistic equation leading to a maximum population size Sc = (ps − pd)/(ps αSS) ≃ 12 cells. In the same way, from Eq. (1b) we obtain Dc = 1/αDD ≃ 2 cells for the DCCs, which is six times smaller than Sc. Therefore, if there were no interactions between the subpopulations our model would predict a 14-cell spheroid, a size that would be reached by day 17. Interactions between the populations are needed to understand the faster growth observed in the experiment.
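These two plateau values are straightforward to recompute from Table 1; the sketch below does so for the soft substrate (the small deviation from the rounded ≃12 quoted above comes from the rounding of the tabulated parameters).

```python
# Interaction-free carrying capacities for the soft substrate (Table 1).
ps, pd = 0.9701, 0.0019
a_ss, a_dd = 0.0873, 0.3668

S_c = (ps - pd) / (ps * a_ss)   # logistic plateau of Eq. (4), ~12 cells
D_c = 1.0 / a_dd                # analogous DCC plateau from Eq. (1b), ~2-3 cells
print(f"S_c = {S_c:.1f} cells, D_c = {D_c:.1f} cells")
```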
The negative values of the interspecific interaction coefficients, αij < 0 for i ≠ j, lead to a positive feedback loop: an increase in one subpopulation drives an increase in the other. The numbers in Table 1, especially the relatively large value of αSD (five times that of αSS) and the relatively low value of αDS (less than half of αDD), indicate that this interplay favors a net increase in CSC number but is not strong enough to produce an increase in DCC number.

The feedback loop mechanism is activated to generate a suitable niche, which requires a low S/(S+D) fraction. The inhibition of differentiation causes the CSCs to reproduce continuously in a frustrated attempt to recreate the DCC population required by the niche. Since the population equilibrium corresponding to a stable niche is never reached, cycling CSCs seldom return to quiescence. In Fig. 6, the fraction S/(S+D) is depicted for the three experiments up to day ten. Due to the inhibitor's efficacy, this fraction falls very slowly for the soft and hard substrates (light blue and orange lines, respectively), but decays freely in the control environment.

Fig. 6. CSC fractions. Time evolution of the cancer stem cell fraction predicted by the model for the three experiments. In both soft and hard the stem cell fraction remains very high, since the stem cell maintenance factors are a barrier to DCC generation. In control the differentiation barrier is not present and the stem cell fraction decreases towards the value corresponding to the niche.
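Fraction curves of the kind summarized in Fig. 6 can be regenerated from Table 1 by integrating Eqs. (1) for each substrate. The sketch below assumes SciPy and the same hypothetical S0 = 4, D0 = 1 seed as before, so the curves are indicative rather than exact reproductions of the figure.

```python
import numpy as np
from scipy.integrate import solve_ivp

fits = {  # r, ps, pd, a_ss, a_sd, a_ds, a_dd from Table 1
    "soft":    (0.0685, 0.9701, 0.0019, 0.0873, -0.4185, -0.2061, 0.3668),
    "hard":    (0.1335, 0.7124, 0.0000, -0.0456, -0.5280, -0.1376, 1.8329),
    "control": (0.0240, 0.1646, 0.3911, 0.0028, 0.0266, -1.0683, -0.3087),
}

def rhs(t, y, r, ps, pd, a_ss, a_sd, a_ds, a_dd):
    S, D = y
    return [r * ps * S * ((ps - pd) / ps - a_ss * S - a_sd * D),
            r * (D + (1 + pd - ps) * S) * (1 - a_dd * D - a_ds * S)]

t_eval = np.linspace(0.0, 10.0, 11)              # daily values up to day 10
for name, theta in fits.items():
    sol = solve_ivp(rhs, (0.0, 10.0), [4.0, 1.0], t_eval=t_eval, args=theta)
    frac = sol.y[0] / sol.y.sum(axis=0)          # CSC fraction S / (S + D)
    print(name, np.round(frac, 3))
```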
Hard substrate

When the substrate hardens, the environmental conditions that mediate cell-to-cell signaling change and the CSC phenotype becomes more amenable to proliferation, as seen by comparing Figs. 3 and 4. The growth rate r, which still represents the PDT of the CSCs because of their prevalence, is twice as large as that corresponding to growth on the soft substrate. Our interpretation is that increasing the substrate hardness alters the CSC phenotype required to reach the cell fraction that regulates niche size. As in soft, the CSCs try to increase the DCC population, but now they are immersed in a different environment. The doubling of the growth rate r, the reduction of the symmetric duplication probability ps, and the emergence of a large fraction (>50%) of asymmetric divisions indicate that the direct effect of the differentiation-inhibiting factors is weaker than in soft. Indirect effects appear through the interspecific coefficients, especially the relatively large and negative (−0.53) αSD. As Fig. 6 shows, by day 8 the DCC fraction is not much larger than in soft, indicating that the attempt to establish the niche has also failed in hard.

More remarkable is that the intraspecific CSC coefficient has changed its sign, an indication that the CSCs record a stressed environment that they may perceive as due to the presence of damaged tissue. This generates a phenotype different from that in soft [18], which accelerates cell division. On the other hand, the large and positive DCC intraspecific coefficient, αDD=1.83, implies a large increase in the inhibitory signaling between DCCs with respect to soft. In this case, the discussion following Eq. (4) suggests for this system a maximum intrinsic DCC number smaller than unity, Dc=1/αDD≃0.5 cells, meaning that on this substrate the DCC subpopulation would not be able to survive without the CSCs.

The general picture is that of a growing tumorsphere whose response to the substrate is to increase its cell number as fast as possible, aiming to reach a DCC fraction that equilibrates the niche, a goal that cannot be attained due to the presence of differentiation-inhibiting agents. The influence of the niche, as in [18], is thus a cornerstone for the biological interpretation of the model results.
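Both the intrinsic DCC plateau invoked here and the fixed-point argument used for soft can be checked with a few lines of linear algebra. The sketch below is our illustration using the hard-substrate medians of Table 1; the coexistence coordinates quoted in the text for soft were obtained with the authors' own parameter set, so exact numbers may differ, but the relevant feature is the sign pattern.

```python
import numpy as np

def coexistence_point(ps, pd, a_SS, a_SD, a_DS, a_DD):
    """Point where both braces of Eqs. (1) vanish; pd kept for generality."""
    A = np.array([[a_SS, a_SD],
                  [a_DS, a_DD]])
    b = np.array([(ps - pd) / ps, 1.0])
    return np.linalg.solve(A, b)

# Hard-substrate medians from Table 1
hard = dict(ps=0.7124, pd=0.0, a_SS=-0.0456, a_SD=-0.5280, a_DS=-0.1376, a_DD=1.8329)
print("intrinsic DCC plateau:", 1.0 / hard["a_DD"])              # ~0.5 cells
print("coexistence point (S*, D*):", coexistence_point(**hard))  # both negative
```

Since both coordinates come out negative, no biologically meaningful fixed point exists in the first quadrant, consistent with the continuous growth described in the text.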
Control substrate

As mentioned in the previous section, we cannot expect the model to give a completely accurate description of the control substrate experiment, but its interpretation may shed light on the system dynamics. In this condition the CSCs are allowed to differentiate freely. These cells record an environment where the proportion of DCCs increases monotonically, and the population fractions should tend to those corresponding to niche equilibrium. However, Figs. 5 and 6 suggest that there is no limit to the increase of the DCC fraction. We conjecture that this behavior may be explained by migration: after the spheroid reaches a given size, cells start to migrate and the average number of DCCs recorded by each CSC does not increase. One consequence is that the CSC number remains stationary, as shown in Fig. 5. In fact, their effective PDT is about six months, i.e., they are generally quiescent, as they should be.

Furthermore, note that the basal rate implies that the initial 5 cells would double only after about 41 days (1/0.024), roughly three times slower than the soft rate (15 days). To explain the rapid spheroid growth we need to consider the contribution of the interactions. From Table 1 we see that interactions favor DCCs and restrict CSC proliferation. A more detailed analysis of the evolution of the two subpopulations reveals the following:

CSCs: The positivity and very low absolute values of αSS and αSD ensure the stability of the CSC number. The dominant contribution to the change in the CSC number is given by the linear term, which yields a characteristic time |r(ps−pd)|−1≈185 days, meaning that the CSCs are quiescent during the whole experiment.

DCCs: Approximating Eq. (1b) with S→0, we get D=1/αDD. Because αDD<0, the quadratic term always promotes DCC number growth. As mentioned in the case of hard, a negative sign in the intraspecific interaction parameter is related to signaling loss. Sphere disaggregation in control suggests that the hard substrate promotes migration, weakening cell-to-cell interactions. Given that the CSC pool remains constant while many cells move away from the spheroid, we can also conclude that the migrating cells are likely to be differentiated.

Of note, our analysis of these experiments implies that the absence of the stem cell growth factors in control leads to the disappearance of the feedback loop that plays such a crucial role in both soft and hard. The existence of the feedback loops detected in soft and hard can similarly be inferred from experiments carried out with the cancer lines SUM159, MCF-7, and T47D, which were also cultured with stem cell growth factors [9, 18].

Even if the CSC fractions are not far from unity in both soft and hard, the detailed reasons for their behavior are different. In both cases the cell subpopulations assist each other, generating a positive-feedback cycle that leads to continuous growth, an indication that cell-to-cell signaling is crucial in determining the process. The effect of the substrate on intraspecific interactions in hard is strong. There, CSCs are weakly promoting, but DCCs are so strongly inhibitory that the DCC population would disappear if it were not for the significant cooperation from the CSCs, which is expressed mainly through a considerable fraction of asymmetric divisions. The inhibition between DCCs is also likely to induce the phenotype change indicated by the large and negative value of αSD. The parameter αDS, which controls the influence of cancer stem cells on differentiated cancer cells, is always negative, and especially strong in control, suggesting that CSCs have a promoting and protective influence on DCCs, a phenomenon already observed by Kim and coworkers, who found that CSCs protect DCCs from anoikis, promoting tumor formation when the two subpopulations are mixed [49]. The smaller magnitude of αDS in the soft and hard experiments suggests that stem cell maintenance factors weaken, but do not cancel, this protective effect.
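As a quick consistency check on the control timescales quoted above (our sketch; the 185-day figure in the text follows from the unrounded fit, so the last digit may differ):

```python
# Control-substrate medians from Table 1
r, ps, pd = 0.0240, 0.1646, 0.3911

basal_pdt = 1.0 / r                  # ~41.7 days: basal population doubling time
csc_time = 1.0 / abs(r * (ps - pd))  # ~184 days: linear CSC timescale of Eq. (1a)
print(f"basal PDT ~ {basal_pdt:.1f} days, CSC linear timescale ~ {csc_time:.0f} days")
```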
Conclusions

The analysis of experimental data with our model confirms Wang's conclusions and indicates that the substrate regulates the details of tumorsphere evolution and that a powerful engine of tumorsphere growth is the stem cell "memory" of its niche. By comparing growth on the hard and soft substrates, our analysis also confirms the observation that substrate stiffness promotes cancer cell proliferation (as recently reviewed by Nia et al. [50]). More interestingly, the evident differences between the parameters describing growth on the hard and control substrates indicate that the response of stem and non-stem cancer cells to an increase in substrate stiffness is likely to be mediated by different processes.

In summary, the ability of stem cells to sense their environment plays a crucial role in tumorsphere evolution. Our model has proven particularly useful for determining why substratum stiffness has a profound influence on the behavior of cancer stem cells, with soft substrates favoring symmetric divisions and hard substrates leading to a large proportion of asymmetric doublings. In vivo studies are needed to further our understanding of niche processes under natural environments.

Additional file 1: Some consequences of equation (1). The onset of growth and the fate of a tumorsphere.
Additional file 2: Distribution graphs from fitting procedure.
Keywords: Tumor growth, Cancer stem cell, Niche, Substrate, Tumorsphere, Mathematical modelling
Background: For some time, it has been known that the presence of cancer stem cells (CSCs) is important for the development of many solid tumors [1–6]. According to the CSC hypothesis these cells are often crucial for the development of resistance to therapeutic interventions [7, 8]. In healthy tissues the proportion of stem cells is small; homeostatic equilibrium is maintained through the signals that the stem cells receive from their niches. The onset of cancer is likely to destroy this equilibrium and cancerous tissues may exhibit a higher proportion of stem cells than normal tissues [9]. This increased proportion of cancer stem cells may underlie the aggressive behavior of high-grade tumors [2, 10]. As recently explained by Taniguchi et al. [11], the cross-talk between tumor initiating (stem) cells and their niche microenvironment is a possible therapeutic target. Understanding the nature of the interactions between CSCs and their environment is therefore important for the development of effective intervention procedures. Interesting mathematical models have been developed to explain various properties of stem-cell-driven tissue growth. Stiehl and Marciniak-Czochra proposed a mathematical model of cancer stem cell dynamics to describe the time evolution of a leukemic cell line competing with healthy hematopoiesis [12]. This group later provided evidence that the influence of leukemic stem cells on the course of the disease is stronger than that of non-stem leukemic cells [13]. Yang, Plikus and Komarova used stochastic modeling to explore the relative importance of symmetric and asymmetric stem cell divisions, showing that tight homeostatic control is not necessarily associated with purely asymmetric divisions and that symmetric divisions can help to stabilize mouse paw epidermis lineage [14]. Recently, Bessonov and coworkers developed a model that allowed them to determine the influence of the population dynamics on the time-varying probabilities of different cell fates and the ascertainment of the cell-cell communication factors influencing these probabilities [15]. These authors suggest that a coordinated dynamical change of the cell behavior parameters occurs in response to a biochemical signal, which they describe as an underlying field. Here we will describe the effects of cellular interactions using nonlinear terms instead. Live cells are generally sensitive to substratum rigidity and texture [16]. A growing tumor must compete for space with the surrounding environment; the resulting mechanical stresses generate signals that impact on the tumor cells. Cells integrate these mechanical cues and respond in ways that are related to their phenotype. Their active response may also lead to phenotype modifications [17, 18]; in fact, mechanical cues generated by the environment can trigger cancer cell invasion [19]. Environmental stiffness may then be associated with tumor progression, a process that can also be promoted by mechanically activated ion channels [20]. What is the influence of the mechanical environment on cancer stem cells? At each generation, CSCs divide symmetrically, generating either two new CSCs or two differentiated cancer cells (DCCs), or asymmetrically, generating one CSC and one differentiated cancer cell [7, 21]. Quorum sensing controls differentiation of healthy stem cells, but it is thought to be altered in cancer stem cells [22]. 
Mechanical inputs are an important component of the altered control mechanism and can be assumed to play a role in the fate of the cancer stem cells. In vitro experiments have been designed to probe the influence of mechanical stresses of various types on tumor cells. The solid-stress inhibition of multicellular spheroid growth was already demonstrated by Helmlinger and coworkers in 1997 [23]. The results of these experiments were shown to follow allometric laws [24]. Interestingly, Koike et al. showed that spheroid formation with Dunning R3327 rat prostate carcinoma AT3.1 cells is facilitated by solid stress [25]. A study by Cheng et al. suggested how tumors grow in confined locations where levels of stress are high, showing that growth-induced solid stress can affect cell phenotype [26]. Using spheroid cell aggregates, Montel et al. showed that applied pressure may be used to modulate tumor growth [27] and observed that cells are blocked by compressive stresses at the G1 checkpoint [28]. The organization of cells in a spheroid is modified by physical confinement [29], which likewise modifies the proliferation gradient [30]. The stiffness of hydrogels has been shown to determine the shape of tumor cells, with high stiffnesses leading to spheroidal cells, a feature known to occur in in vivo tumors [31]. By studying the behavior of adult neural stem cells under various mechanical cues, Saha et al. showed that soft gels favored differentiation into neurons while harder gels promoted glial cultures. Importantly, they also showed that low substrate stiffness inhibited cell spreading, self-renewal, and differentiation [32]. Osteocyte-like cells were shown to significantly induce compaction of tumor spheroids formed using breast cancer cells [33]. Matrix stiffness was shown to affect, through mechanotransduction events, the osteogenic outcome of human mesenchymal stem cell differentiation [34]. HeLa cells were used to show that both an attractive contact force and a substrate-controlled remote force contribute to the formation of large-scale multicellular structures in cancer [35]. Fifteen years ago, Discher, Janmey, and Wang not only explained that the stiffness of the anchoring substrate can have a strong influence on the cell state, but they also indicated that stem cell differentiation may be influenced by the nature of the substrate [36]. It is relevant that naive mesenchymal stem cells were shown to commit to various phenotypes with high sensitivity to tissue elasticity: They differentiate preferably into neurons and osteocytes if they are cultured on soft and rigid matrices, respectively [37]. On the other hand, human mesenchymal stem cells adhere onto precalcified bones, which are softer than calcified bones [16]. It is also known that hydrodynamic shear stress promotes the conversion of primary patient epithelial tumor cells into specific cancer stem-like cells [38]. Smith et al. found that the mechanical context of the differentiation niche can drive endothelial cell identity from human-induced pluripotent stem cells, showing that stiffness drives mesodermal differentiation, leading to endothelial commitment [39]. Thus, microenvironments help specify stem cell lineages, although it may be difficult to decouple the influence of mechanical interactions and surface topography and stiffness from biochemical effects [16, 39]. 
Since they are grown in the absence of the complex signaling system prevalent in the environment of real tumors, tumorspheres (spheroids formed by clonal proliferation out of permanent cell lines, tumor tissue, or blood [40]) are suitable candidates to probe the influence of mechanical stimuli on stem-cell-fueled cancer growth. Wang et al. cultured breast CSCs on soft and hard agar matrix surfaces, investigating the effects that substrate stiffness has on cell state and proliferation [41]. These authors showed that breast cancer stem cells can be kept in states of differentiation, proliferation or quiescence depending on a combination of adherent growth and stem cell growth factors, but they focused on the experimental possibilities and did not draw conclusions about how these agencies may modify the stem cell niche to lead to the observed behavior. Recently, we developed a two-population tumorsphere model to identify the role of the intraspecific and interspecific interactions that determine tumorsphere growth [18]. Application of our model to three breast cancer cell lines studied by Chen and coworkers [9] indicates that while intraspecific interactions are inhibitory, interspecific interactions promote growth. This feature of interspecific interactions was interpreted in terms of the stimulation by CSCs of the growth of DCCs in order to consolidate their niches, and of the plasticity of the DCCs to dedifferentiate into CSCs [18]. Here we use this model to analyze the experimental results of Wang et al. [41], discussing how substrate stiffness influences growth and finding that the concept of the cancer stem cell niche is central to its understanding. In the next section we review the model of Ref. [18]; in the following sections we apply it to the results of Ref. [41] and discuss their implications.

Methods: We model mathematically the growth of a tumorsphere considering two cell populations: cancer stem cells (S) and differentiated cancer cells (D). By including in the latter class all cells with any degree of differentiation, we can isolate the role played by the stem cells. We further assume the following:

The single basal growth rate r characterizes the timescale of the system. By construction, it matches a priori the population doubling time (PDT) of the DCCs. This provides a more suitable description than the previous model with two basal growth rates [18] since, in general, it is not possible to discriminate between these rates in experiments such as that of Ref. [41].

When a CSC undergoes mitosis, there is a probability ps that two new CSCs are generated, a probability pd that two DCCs are generated, and a probability pa that there is an asymmetric division. Because of normalization, pa=1−pd−ps. These probabilities are multiplied by the basal growth rate r (see Fig. 1), so that the effective creation rates of new cells can be reasonably modeled.

The members of each subpopulation interact with each other (intraspecific interactions) and with the members of the other subpopulation (interspecific interactions), cf. Fig. 2. These interactions are described by proportionality factors αij whose signs and magnitudes quantify the number of cells that are created in the system due to interactions with preexisting cells. The indices i and j may represent S or D. They indicate either intraspecific (i=j) or interspecific (i≠j) interactions.

Fig. 1 Schematic representation of the cell division outcomes and the intrinsic growth rates. Each of these is given by the product of the basal growth rate r and the probability of the respective outcome. a Parent cells replicate themselves, originating daughter cells in their same class. b CSC differentiation occurs in two ways: if asymmetric, the S population remains unchanged; if symmetric, S decreases

Fig. 2 The signs of the coefficients αij indicate whether the interactions promote or inhibit growth. Cells already present in the tumorsphere either favor (αij<0) or hinder (αij>0) the production of new cells. Arrows indicate the influence of each population on the newborn cells

We can describe the evolution of the two interacting populations by generalizing the standard equations for two competing species (see, e.g., [42], p. 67).
$$ \frac{dS}{dt} = r\,[p_s S]\left\{ \frac{p_s - p_d}{p_s} - \alpha_{SS} S - \alpha_{SD} D \right\}, \qquad (1a) $$

$$ \frac{dD}{dt} = r\,[D + (1 + p_d - p_s) S]\left\{ 1 - \alpha_{DD} D - \alpha_{DS} S \right\}. \qquad (1b) $$

The first term inside the braces on the right-hand side of Eq. (1a) corresponds to the net intrinsic creation of new CSCs (symmetric CSC divisions minus divisions yielding two DCCs). Note that asymmetric divisions do not change the number of cancer stem cells, but symmetric differentiation removes the parent CSC from the S population, as illustrated in Fig. 1. The second and third terms correspond, respectively, to the effects on the CSC population of the interactions with other CSCs and with differentiated cancer cells. The factor in the square brackets on the right-hand side of Eq. (1b) is proportional to the rate of creation, in the absence of interactions, of differentiated cells due to the division of other DCCs (first term), plus the asymmetric division and differentiation of CSCs (second term). The first term between braces corresponds to the interaction-free growth of the system. The second and third terms represent, respectively, the influences of the other DCCs and of the CSCs on the differentiated cancer cell population.

The effect of cell-cell interactions on cell creation is assumed to be proportional to the abundances of the populations emitting the signals and of those receiving them; therefore, the corresponding terms are quadratic in the populations. The interaction strengths are represented by the coefficients αij. Negative interaction coefficients (αij<0) describe growth-promoting interactions, i.e., the j population promotes the growth of the i population. Positive values of αij describe the growth inhibition of population i by population j. In particular, as shown in Fig. 2, αSS tells us how CSCs promote/inhibit the creation of new CSCs, αDD tells us how DCCs promote/inhibit the creation of new DCCs, while αDS informs us about the influence of CSCs on the generation of new differentiated cancer cells and αSD about the influence of DCCs on the generation of new cancer stem cells.

There are no analytic solutions for these differential equations. Their numerical solutions yield the time evolution of both subpopulations, S(t) and D(t). In Additional file 1 we summarize some properties of Eqs. (1) and their solutions that we will use in our analysis.
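Since Eqs. (1) must be integrated numerically, the following minimal Python sketch shows one way to do this. It is our transcription of Eqs. (1a)-(1b), not the authors' code, and the initial condition of four CSCs and one DCC is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r, ps, pd, a_SS, a_SD, a_DS, a_DD):
    """Right-hand sides of Eqs. (1a) and (1b)."""
    S, D = y
    dS = r * ps * S * ((ps - pd) / ps - a_SS * S - a_SD * D)
    dD = r * (D + (1.0 + pd - ps) * S) * (1.0 - a_DD * D - a_DS * S)
    return [dS, dD]

# Example: soft-substrate medians from Table 1, with a 4 CSC + 1 DCC seed (assumed)
pars = (0.0685, 0.9701, 0.0019, 0.0873, -0.4185, -0.2061, 0.3668)
sol = solve_ivp(rhs, (0.0, 10.0), [4.0, 1.0], args=pars, dense_output=True)

t = np.linspace(0.0, 10.0, 101)
S, D = sol.sol(t)
print(f"CSC fraction at day 8: {S[80] / (S[80] + D[80]):.3f}")
```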
Fitting with the model: The data sets correspond to the total cell number T in the spheroids. Thus, we fit the data with T=S+D, where S and D are the numerical solutions of the system of Eqs. (1). Thereby, our model allows us to obtain information on the dynamics of the CSC and DCC subpopulations and, in particular, on the time evolution of the CSC fraction, from data corresponding to the whole spheroid. Due to the scarcity of data points and the ensuing difficulties of the optimization problem, fitting our model to the data leads to different sets of possible parameter values. To obtain the optimal set, we use a random grid search (RGS) algorithm. The RGS algorithm consists in randomly sweeping some domain for initial conditions in parameter space. In our case, such a domain is bounded by physically reasonable assumptions, such as that the probabilities satisfy pi∈[0,1] with i≡s,d and that the growth rate r>0. We also require the outcome of the fitting process to give normalized positive probabilities, positive populations in the range of validity of the data, and fractions of the order of the ones reported by Wang et al. We then collect in a histogram all the parameters that have a relative error lower than 5% when fitting the data points. To do this we define a relative error measure given by the nonlinear estimator

$$ R = \frac{1}{n}\sum_{i=1}^{n} \left( \frac{y_i - Y(t_i)}{y_i} \right)^2. $$

Here n is the number of data points, yi is the data value at time ti, and Y(ti) is the function value obtained by fitting the data. This estimator is the same as the function we minimize through the fitting process (the classical R2 parameter, also used as a minimization objective function, is not a good reporter for a nonlinear problem). A first selection criterion of the RGS algorithm ensures that no accepted parameter set has an accuracy below 95%. A consistent statistical interpretation of the process requires that the order of magnitude and, especially, the sign of each parameter be the same in all realizations. Therefore, even if different combinations of the fitting parameters yield acceptable descriptions of the experimental results, the qualitative mechanisms that control spheroid growth can be satisfactorily identified. We thus find a distribution for each parameter and select the median as its representative value.
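To illustrate the spirit of the RGS procedure, here is a compact sketch. It is our reimplementation, reusing the rhs function from the previous snippet; the sampling ranges, measurement days, and cell counts are illustrative placeholders, not the experimental data or the authors' actual search domain.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def relative_error(y_data, y_fit):
    """Nonlinear estimator R defined in the text."""
    return np.mean(((y_data - y_fit) / y_data) ** 2)

def sample_parameters():
    """One random, physically admissible parameter set (ranges are ours)."""
    while True:
        ps, pd = rng.uniform(0.0, 1.0, size=2)
        if ps + pd <= 1.0:                 # normalized probabilities, pa >= 0
            break
    r = rng.uniform(0.0, 0.5)              # basal growth rate > 0
    a = rng.uniform(-2.0, 2.0, size=4)     # a_SS, a_SD, a_DS, a_DD
    return (r, ps, pd, *a)

t_data = np.array([0.0, 2.0, 4.0, 6.0, 8.0])     # placeholder sampling days
y_data = np.array([5.0, 9.0, 16.0, 30.0, 55.0])  # placeholder total cell counts

accepted = []
for _ in range(20000):
    pars = sample_parameters()
    sol = solve_ivp(rhs, (0.0, 8.0), [4.0, 1.0], args=pars, t_eval=t_data)
    if sol.success and sol.y.shape[1] == t_data.size:
        T = sol.y.sum(axis=0)                    # total population T = S + D
        if relative_error(y_data, T) < 0.05:     # 5% threshold on the estimator
            accepted.append(pars)

# Median of each parameter over the accepted sets, as in the text
medians = np.median(np.array(accepted), axis=0) if accepted else None
```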
Results: The initial stages. First, let us answer the following question: Given that we start tumorsphere growth from a small CSC seed, what is the minimum size Sm needed for this seed to guarantee CSC population growth? By setting D=0 in Eq. (1b), we see that there are two cases:

If differentiation is inhibited, ps>pd, as in the case of the soft and hard experiments discussed below, the linear term dominates and the initial seed may be arbitrarily small: a single cancer stem cell may generate a tumorsphere.

If ps<pd, it is easy to see that the condition on the initial CSC number is that the quadratic term be large enough, i.e. S0>Sm, with

$$ S_m = \frac{p_s - p_d}{\alpha_{SS}\, p_s}. \qquad (2) $$

We thus need αSS<0: the CSCs must cooperate to yield additional cancer stem cells starting from a pure CSC seed. A larger cooperative interaction implies that we can use a smaller seed. In this case, the intraspecific interaction coefficient αSS plays a key role in determining growth from the very beginning of the process. It is worth mentioning that in the experiments discussed here the conditions ps<pd and S0>Sm are never satisfied simultaneously. Usually, ps is smaller than pd, and there is a minimum number of stem cells required to ensure stem cell growth. But if a differentiation-inhibiting agent is added to the system, increasing ps, a single cancer stem cell may suffice to generate growth.

As shown in Additional file 1, we can linearize our equations to describe the initial evolution of a small system, finding that the trajectory in the S−D plane starts as

$$ D(t) = \frac{D_0 + S_0}{S_0}\, S(t)^{1/(p_s - p_d)} - S(t), \qquad (3) $$

where S(0)=S0 and D(0)=D0. Initially, if pd>ps, the number of differentiated cells increases, while the number of stem cells decreases, and the representative point gets close to the D-axis. If there is growth in the stem cell subpopulation, it is due to the nonlinear terms.
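The seeding condition of Eq. (2) and the two regimes just discussed can be wrapped in a few lines (a sketch; the example values are hypothetical, chosen only to show a cooperative, ps<pd scenario):

```python
def seed_threshold(ps, pd, alpha_SS):
    """Minimum CSC seed size S_m implied by Eq. (2)."""
    if ps > pd:
        return 0.0             # differentiation inhibited: any seed may grow
    if alpha_SS >= 0.0:
        return float("inf")    # no CSC cooperation: no finite seed suffices
    return (ps - pd) / (alpha_SS * ps)

print(seed_threshold(ps=0.2, pd=0.4, alpha_SS=-0.05))  # 20.0 cells
```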
Our model therefore generates a simple analytical description of the early stages of tumorsphere evolution and specifies the conditions for a successful implantation of the initial cancer stem cell seed. Unless a potent anti-differentiation agent is added to the growth medium, we expect the differentiation probability pd to be larger than ps. If so, our Eq. (2) predicts the minimum number of stem cells needed to initiate successful spheroid growth. This number depends only on ps, pd, and the intraspecific interaction between cancer stem cells, which must be cooperative. Weak cooperation or a small ps would mean that the tumorsphere must be started from a large nucleus. In the next section we review the experimental results reported in [41] and determine the model parameters.
Experimental data

We used our model to analyze the results of Wang and coworkers [41]. These authors studied the growth of breast cancer cell cultures belonging to three different cell lines: MCF7, MDA-MB-231, and MDA-MB-435. For each of these tumor lines they grew tumorspheres under three different environmental conditions, as detailed below. In all cases the spheroids initially have 4-5 cells that originate from a single CSC. Since only the MDA-MB-231 cell line yielded bona fide round spheroids for all three experimental specifications, we will use this line to compare our findings with the experimental results. To facilitate the implementation of the model presented in [18], we report the data in terms of cell numbers.

Soft substrate

In the soft experiment, cells were cultured using soft (0.05%) agar as the matrix surface for cell contact. Differentiation inhibitors were added to the growth medium to increase the CSC fraction. Under these conditions there is little incentive for the stem cells either to duplicate or to leave their quiescent status; only the tendency to build a suitable niche may break the quiescence. Hence their small basal growth rate.
As a result, a slow exponential growth of CSCs prevails in the early stages of tumorsphere growth, as depicted in Fig. 3. Such behavior can be predicted, as shown in Eq. (3) and Eqs. (A3) and (A4) in Additional file 1, but the basal growth rate is so small that the process appears to be almost linear. The CSC population (red line) is always much larger than its DCC counterpart; indeed, Wang et al. reported a 95% CSC fraction at day 8, together with a low growth rate. The distribution of the fitting values generated by the RGS method is shown in Additional file 2, Fig. A. Note there, and in Table 1, the very high (close to unity) value of ps, the positive sign of the intraspecific interaction coefficients, and the negative sign of the interspecific interaction coefficients.

Fig. 3 Growth in the soft experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a soft substrate. The blue line fits the total cell population.

Table 1 The three chosen parameter sets obtained from fits to the experimental data

Constant          | Units     | Soft    | Hard    | Control
r                 | [1/days]  | 0.0685  | 0.1335  | 0.0240
ps                | none      | 0.9701  | 0.7124  | 0.1646
pd                | none      | 0.0019  | 0.0000  | 0.3911
αSS               | [1/cells] | 0.0873  | -0.0456 | 0.0028
αSD               | [1/cells] | -0.4185 | -0.5280 | 0.0266
αDS               | [1/cells] | -0.2061 | -0.1376 | -1.0683
αDD               | [1/cells] | 0.3668  | 1.8329  | -0.3087
S/(S+D) at 8 days | none      | 0.9479  | 0.913   | 0.141
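For the numerical sketches in the rest of this section it is convenient to keep the fitted values of Table 1 in one place; the dictionary below is a plain transcription of the table (the variable names are ours).

```python
# Transcription of Table 1 (r in 1/days; interaction coefficients in 1/cells).
TABLE1 = {
    "soft":    dict(r=0.0685, ps=0.9701, pd=0.0019,
                    a_ss=0.0873,  a_sd=-0.4185, a_ds=-0.2061, a_dd=0.3668),
    "hard":    dict(r=0.1335, ps=0.7124, pd=0.0000,
                    a_ss=-0.0456, a_sd=-0.5280, a_ds=-0.1376, a_dd=1.8329),
    "control": dict(r=0.0240, ps=0.1646, pd=0.3911,
                    a_ss=0.0028,  a_sd=0.0266,  a_ds=-1.0683, a_dd=-0.3087),
}
```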
Hard substrate

In the hard experiment, cells were cultured using hard (30%) agar as the contact matrix surface. Differentiation inhibitors were also added to the growth medium. For this experiment we expect the model to describe a high fraction of CSCs, as in soft, but now with a higher proliferation rate. Applying the RGS method to this data set, we see that this is indeed so (see Table 1 and Additional file 2, Fig. B for the resulting parameters), obtaining the curves depicted in Fig. 4. At early times, growth is nearly linear, as observed in soft, but only for the first four days, speeding up afterwards. The CSCs outnumber the DCCs, reaching 91% of the cell population by day 8, consistent with the results reported in [41]. This fraction is a little lower than in soft, but would become much larger at later times.

Fig. 4 Growth in the hard experiment. Data-based model results for the CSC (red) and DCC (green) spheroid subpopulations grown on a hard substrate. The hard substrate yields a faster growth rate than the soft substrate and, at late times, a higher fraction of CSCs.

The symmetric CSC reproduction probability is still high, but noticeably lower than in soft, and the basal rate is twice that in soft. The interspecific interaction coefficients are negative, as in soft, but the CSC intraspecific interaction coefficient is now negative, too.

Control substrate

In the control experiment, cells were cultured using hard (30%) agar as the contact matrix surface, but no differentiation inhibitor was added to the medium: the stem-cell promoting factors EGF and bFGF were replaced by neutral serum and the cells were grown on a hard substrate [41]. In this case, although the spheroid cannot preserve its spherical shape at late times, a fitting attempt, shown in Fig. 5, is informative (the corresponding boxplots are shown in Additional file 2, Fig. C). Although the CSC number remains nearly constant, the DCCs can proliferate indefinitely, leading to fast overall growth.

Fig. 5 Growth in the control experiment. Fitting the control medium data predicts unlimited growth, faster than in either soft or hard, but now driven by the DCCs. The number of CSCs does not increase.

All the relevant new information obtained from fitting the experimental data is summarized in Table 1, where we report the values of the parameters of our model. Those values have an accuracy of 98%, and their corresponding distributions are reported in Additional file 2. Furthermore, in Table 2 we report some quantities, derived from the parameters in Table 1, that will be useful in the following sections.

Table 2 Useful derived quantities from some values of Table 1

Constant    | Units    | Soft   | Hard   | Control
1/r         | [days]   | 14.59  | 7.49   | 41.66
rp          | [1/days] | 0.0663 | 0.0951 | -0.0054
1/(rp)      | [days]   | 15.08  | 10.52  | -185
p = ps − pd | none     | 0.9682 | 0.7124 | -0.2265
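As a consistency check, the entries of Table 2 follow directly from Table 1; the snippet below recomputes them from the TABLE1 dictionary defined above (small deviations from the printed values are due to rounding of the fitted parameters).

```python
for name, q in TABLE1.items():
    p = q["ps"] - q["pd"]        # net symmetric self-renewal bias, p = ps - pd
    rp = q["r"] * p              # effective early CSC growth rate (1/days)
    print(f"{name:8s}  1/r = {1 / q['r']:6.2f} d   p = {p:8.4f}   "
          f"rp = {rp:8.4f} /d   1/(rp) = {1 / rp:8.2f} d")
```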
Discussion

In normal tissues, homeostasis is guaranteed by factors secreted by differentiated cells that inhibit the division and self-renewal of stem cells [22, 43]. Cancer stem cells may partially escape these controls, but their activity is still influenced by their environment. In non-anchored cells, as in the case analyzed in the present work, the clustering of most integrins on the plasma membrane by ECM molecules, and thus the formation of focal adhesions (FAs), is lost. In normal cells, such events are sufficient to trigger anoikis, but upregulation of specific integrins can confer anoikis resistance. For instance, ανβ3 integrin has the ability to maintain receptor clustering in non-adherent cells (reviewed in Hamidi and Ivaska [44]). Interestingly, the MDA-MB-231 cells used by Wang et al. [41] are an ανβ3 integrin-overexpressing breast cancer cell line, highly dependent on ανβ3-emanating signals for proliferation and survival [45, 46], and it is likely that changes in the stiffness of the substratum alter integrin clustering and, consequently, cell proliferation.

We would like to emphasize some experimental facts from Ref. [41] that are useful for the interpretation of the results:

- A remarkably high percentage (>95%) of the cells cultured under soft and hard conditions with growth factors express the stem cell marker Oct4, which is frequently used as a marker for undifferentiated cells. Oct4 expression must be tightly regulated; too much or too little leads to cell differentiation.
- The soft and control experiments show low activity of telomerase, a marker for proliferation. The higher telomerase activity exhibited by hard indicates a faster growth rate. This is consistent with the fractions of Ki67-positive cells, which are close to 90% for hard and minimal in the other cases.
- The high (95%) CSC fraction and low (<5%) proliferation rate observed in soft at day 8 suggest a population largely consisting of quiescent CSCs. The proliferative fraction was higher in hard.
- In control, markers indicate a strong dominance of the differentiated state. The stem cell fraction (∼5%) and proliferation rate (5% according to Ki-67 and 22% according to flow cytometry) are both low.
In the soft and hard experiments, cells must adapt to the restrictions imposed by the application of the stem cell maintenance factors EGF and bFGF. We extracted information about the cell dynamics chiefly from four features: the basal growth rate, the CSC fraction, and the intraspecific and interspecific interaction parameters. The parameter sets resulting from fitting the model to the hard, soft, and control experiments, summarized in Table 1, are quite different. We next interpret the results of each experiment separately.

Soft substrate

The computed basal growth rate r is 0.069 day−1, which means that the PDT is close to 15 days. This is consistent with the results obtained by Wang et al. [41] using flow cytometry, but somewhat longer than typical cancer stem cell doubling times, which range from 3 to 11 days, depending on tumor type and culture conditions [47, 48]. DCCs normally reproduce faster but, because in this model r represents the average growth rate of the whole population, we recover a PDT consistent with that of the dominating CSCs. This lends support to our modeling assumption of a single basal growth rate.

Quiescence is the prevalent state of the stem cells. Since their function is to replenish dead or damaged cells, they enter the cycle when their niches signal the need for new cells. In soft (and in hard) the addition of differentiation inhibitors implies that the CSCs always record low DCC populations. This drives them into the cycle, where they divide but, prevented from differentiating, overwhelmingly generate new CSCs. Differentiation is very unlikely (pd=0.0019) and we may neglect it to simplify the analysis. If we do this, there is no linear contribution of the CSCs to DCC generation. With the parameter values in Table 1, the equilibrium point where the two kinds of cells coexist is located at (S∗,D∗)=(−21.7,−7.0) cells. This point lies in the third quadrant, indicating that there is no physical/biological coexistence of the two populations. There are no attractors in the first quadrant and all trajectories diverge. This confirms that CSCs lose their normal quiescent state in a continuous (and futile) attempt to produce more DCCs. The (positive) intraspecific interaction coefficients αii are here directly related to the individual maximum population sizes of the respective subpopulations. If we assumed that the two subpopulations did not interact (αij=0, i≠j), Eq. (1a) would read
$$ \frac{dS}{dt} = r S \left[ (p_{s} - p_{d}) - p_{s}\, \alpha_{SS}\, S \right], \tag{4} $$

which is a logistic equation that leads to a maximum population size Sc=(ps−pd)/(ps αSS)≃12 cells. In the same way, from Eq. (1b) we obtain Dc=1/αDD≃2 cells for the DCCs, which is six times smaller than Sc. Therefore, if there were no interactions between the subpopulations, our model would predict a 14-cell spheroid, a size that would be reached by day 17. Interactions between the populations are needed to understand the faster growth observed in the experiment.

The negative values of the interspecific interaction coefficients, αij<0 for i≠j, lead to a positive feedback loop: an increase in one subpopulation drives an increase in the other. The numbers in Table 1, especially the relatively large value of αSD (5 times that of αSS) and the relatively low value of αDS (less than half of αDD), indicate that this interplay favors a net increase in CSC number but is not strong enough to lead to an increase in DCC number. The feedback loop mechanism is activated to generate a suitable niche, which requires a low S/(S+D) fraction. The inhibition of differentiation causes the CSCs to reproduce continuously in a frustrated attempt to recreate the DCC population required by the niche. Since the population equilibrium corresponding to a stable niche is never reached, cycling CSCs seldom return to quiescence. In Fig. 6, the fraction S/(S+D) is depicted for the three experiments up to day ten. Due to the inhibitor's efficacy, this fraction falls very slowly for the soft and hard substrates (light blue and orange lines, respectively), but decays freely in the control environment.

Fig. 6 CSC fractions. Time evolution of the cancer stem cell fraction predicted by the model for the three experiments. In both soft and hard the stem cell fraction remains very high, since the stem cell maintenance factors are a barrier to DCC generation. In control the differentiation barrier is not present and the stem cell fraction decreases towards the value corresponding to the niche.
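A minimal numerical sketch of this non-interacting limit, reusing the TABLE1 dictionary above: it recovers Sc and Dc and integrates Eq. (4). The initial condition is our assumption (we start from a single CSC), so the exact time at which the plateau is approached should be read with that caveat.

```python
from scipy.integrate import solve_ivp

q = TABLE1["soft"]
Sc = (q["ps"] - q["pd"]) / (q["ps"] * q["a_ss"])  # ~11.4 cells, cf. Sc ~ 12
Dc = 1.0 / q["a_dd"]                              # ~2.7 cells,  cf. Dc ~ 2

def eq4(t, S):
    # Eq. (4): dS/dt = r S [(ps - pd) - ps * alpha_SS * S]
    return q["r"] * S * ((q["ps"] - q["pd"]) - q["ps"] * q["a_ss"] * S)

sol = solve_ivp(eq4, (0.0, 60.0), [1.0], dense_output=True)  # single-CSC seed (assumed)
for day in (8, 17, 40, 60):
    print(f"day {day:2d}: S = {sol.sol(day)[0]:5.2f} cells (Sc = {Sc:.1f})")
```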
Hard substrate

When the substrate hardens, the environmental conditions that mediate cell-to-cell signaling change and the CSC phenotype becomes more amenable to proliferation, as seen by comparing Figs. 3 and 4. The growth rate r, which still represents the PDT of the CSCs because of their prevalence, is twice as large as that corresponding to growth on the soft substrate. Our interpretation is that increasing the substrate hardness alters the CSC phenotype required to reach the cell fraction that regulates niche size. As in soft, the CSCs try to increase the DCC population, but now they are immersed in a different environment. The doubling of the growth rate r, the reduction of the symmetric duplication probability ps, and the emergence of a large fraction (>50%) of asymmetric divisions indicate that the direct effect of the differentiation-inhibiting factors is weaker than in soft.
Indirect effects appear through the interspecific coefficients, especially the relatively large and negative (-0.53) αSD. As Fig. 6 shows, by day 8 the DCC fraction is not much larger than in soft, indicating that the attempt to establish the niche has also failed in hard. More remarkable is that the intraspecific CSC coefficient has changed its sign, an indication that CSCs record a stressed environment, which they may perceive as due to the presence of damaged tissue. This generates a phenotype different from that in soft [18], which accelerates cell division. On the other hand, the large and positive DCC intraspecific coefficient, αDD=1.83, implies a huge increase of the inhibitory signaling between DCCs with respect to soft. In this case, the discussion following Eq. (4) suggests for this system a maximum intrinsic DCC number smaller than unity, Dc=1/αDD≃0.5 cells, meaning that on this substrate the DCC subpopulation would not be able to survive without the CSCs. The general picture is that of a growing tumorsphere whose response to the substrate is to increase its cell number as fast as possible, aiming to reach a DCC fraction that equilibrates the niche, a goal that cannot be attained due to the presence of differentiation-inhibiting agents. The influence of the niche, as in [18], is thus a cornerstone for the biological interpretation of the model results.

Control substrate

As mentioned in the previous section, we cannot expect the model to give a completely accurate description of the control substrate experiment, but its interpretation may shed light on the system dynamics. In this condition CSCs are allowed to differentiate freely. These cells record an environment where the proportion of DCCs increases monotonically, and the population fractions should tend to those corresponding to niche equilibrium. However, Figs. 5 and 6 suggest that there is no limit to the increase of the DCC fraction. We conjecture that this behavior may be explained by migration: after the spheroid reaches a given size, cells start to migrate and the average number of DCCs recorded by each CSC does not increase. One consequence is that the CSC number remains stationary, as shown in Fig. 5. In fact, their effective PDT is about six months, i.e., they are generally quiescent, as they should be. Furthermore, note that the whole-population PDT leads to a duplication of the first 5 cells after 41 days (1/0.024), which is three times slower than the soft rate (15 days). To explain the rapid spheroid growth we need to consider the contribution of the interactions. From Table 1 we see that interactions favor DCCs and restrict CSC proliferation. A more detailed analysis of the evolution of the two subpopulations reveals the following:

- CSCs: The positivity and very low absolute values of αSS and αSD ensure the stability of the CSC number. The dominant contribution to the change in the CSC number is given by the linear term, which yields |ps r|−1=185 days, meaning that the CSCs are quiescent during the whole experiment.
- DCCs: Approximating Eq. (1b) with S→0, we get D = 1/αDD. Because αDD<0, the quadratic term always promotes DCC number growth. As mentioned in the case of hard, a negative sign in the intraspecific interaction parameter is related to signaling loss. Sphere disaggregation in control suggests that the hard substrate promotes migration, weakening cell-to-cell interactions.
Given that the CSC pool remains constant while many cells move away from the spheroid, we can also conclude that the migrating cells are likely to be differentiated.

Of note, our analysis of these experiments implies that the absence of the stem cell growth factors in control leads to the disappearance of the feedback loop that plays such a crucial role in both soft and hard. The existence of the feedback loops detected in soft and hard can similarly be inferred from experiments carried out with the cancer lines SUM159, MCF-7, and T47D, which were also cultured with stem cell growth factors [9, 18]. Even if the CSC fractions are not far from unity in both soft and hard, the detailed reasons for their behavior are different. In both cases the cell subpopulations assist each other, generating a positive-feedback cycle that leads to continuous growth, an indication that cell-to-cell signaling is crucial in determining the process. The effect of the substrate on intraspecific interactions in hard is strong. There, CSCs are weakly promoting, but DCCs are so strongly inhibitory that the DCC population would disappear were it not for the significant cooperation from the CSCs, which is expressed mainly through a considerable fraction of asymmetric divisions. The inhibition between DCCs is also likely to induce the phenotype change indicated by the large and negative value of αSD. The parameter αDS, which controls the influence of cancer stem cells on differentiated cancer cells, is always negative, and very strongly so in control, suggesting that CSCs have a promoting and protective influence on DCCs, a phenomenon that was already observed by Kim and coworkers, who found that CSCs protect DCCs from anoikis, promoting tumor formation when the two subpopulations are mixed [49]. The smaller magnitude of αDS in the soft and hard experiments suggests that stem cell maintenance factors weaken, but do not cancel, this protective effect.
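The intrinsic DCC carrying capacity Dc=1/αDD quoted above for soft and hard, and its loss of meaning when αDD turns negative in control, can be checked directly from the TABLE1 dictionary; this is a sketch of the arithmetic only.

```python
for name, q in TABLE1.items():
    a_dd = q["a_dd"]
    dc = 1.0 / a_dd
    note = "intrinsic DCC cap" if a_dd > 0 else "no cap: quadratic term promotes growth"
    print(f"{name:8s} alpha_DD = {a_dd:7.4f}  ->  Dc = {dc:6.2f} cells ({note})")
# soft ~ 2.7 cells, hard ~ 0.55 cells, control negative (unbounded DCC growth).
```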
One consequence is that the CSC number remains stationary as shown in Fig. 5. In fact, their effective PDT is of about six months, i.e., they are generally quiescent, as they should. Furthermore, note that the whole PDT leads to a duplication of the first 5 cells after 41 days (1/0.024), which is three times slower than the soft rate (15 days). To explain the rapid spheroid growth we need to consider the contribution of the interactions. From Table 1 we see that interactions favor DCCs and restrict CSCs proliferation. A more detailed analysis of the evolution of the two subpopulations reveals the following: CSCs: The positivity and very low absolute values of αSS and αSD ensure the stability of the CSC number. The dominant contribution to the change in the CSC number is given by the linear term, which yields |psr|−1=185 days, meaning that the CSCs are quiescent during the whole experiment.DCCs: Approximating Eq. (1b) with S→0, we get D = 1/ αDD. Because αDD<0, the quadratic term always promotes DCC number growth. As mentioned in the case of hard, a negative sign in the intraspecific interaction parameter is related to signaling loss. Sphere disaggregation in control suggests that the hard substrate promotes migration, weakening cell-to-cell interactions. Given that the CSC pool remains constant while many cells move away from the spheroid, we can also conclude that the migrating cells are likely to be differentiated. CSCs: The positivity and very low absolute values of αSS and αSD ensure the stability of the CSC number. The dominant contribution to the change in the CSC number is given by the linear term, which yields |psr|−1=185 days, meaning that the CSCs are quiescent during the whole experiment. DCCs: Approximating Eq. (1b) with S→0, we get D = 1/ αDD. Because αDD<0, the quadratic term always promotes DCC number growth. As mentioned in the case of hard, a negative sign in the intraspecific interaction parameter is related to signaling loss. Sphere disaggregation in control suggests that the hard substrate promotes migration, weakening cell-to-cell interactions. Given that the CSC pool remains constant while many cells move away from the spheroid, we can also conclude that the migrating cells are likely to be differentiated. Of note, our analysis of these experiments implies that the absence of the stem cell growth factors in control leads to the disappearance of the feedback loop that plays such a crucial role in both soft and hard. The existence of the feedback loops detected in soft and hard can similarly be inferred from experiments carried out with the cancer lines SUM159, MCF-7, and T47D, which were also cultured with stem cell growth factors [9, 18]. Even if the CSC fractions are not far from unity in both soft and hard, the detailed reasons for their behavior are different. In both cases the cell subpopulations assist each other, generating a positive-feedback cycle that leads to continuous growth, an indication that cell-to-cell signaling is crucial to determine the process. The effect of the substrate on intraspecific interactions in hard is strong. There, CSCs are weakly promoting, but DCCs are so strongly inhibitory that the DCC population would disappear if it were not for the significant cooperation from the CSCs, which is expressed mainly through a considerable fraction of asymmetric divisions. The inhibition between DCCs is also likely to induce the phenotype change indicated by the large and negative value of αSD. 
The parameter αDS, which controls the influence of cancer stem cells on differentiated cancer cells, is always negative, and very strong so in control, suggesting that CSCs have a promoting and protective influence on DCCs, a phenomenon that was already observed by Kim and coworkers. These authors found that CSCs protect DCCs from anoikis promoting tumor formation when the two subpopulations are mixed [49]. The smaller magnitude of αDS in the soft and hard experiments suggests that stem cell maintenance factors weaken, but do not cancel, this protective effect. Soft substrate: The computed basal growth rate r is 0.069 day −1, which means that the PDT is close to 15 days. This is consistent with the results obtained by Wang et al. [41] using flow cytometry, but somewhat longer than typical cancer stem cells doubling times, which range from 3 to 11 days, depending on tumor type and culture conditions [47, 48]. DCCs normally reproduce faster but, because in this model r represents the average growth rate of the whole population, we recover a PDT consistent with that of the dominating CSCs. This lends support to our modeling assumption of a single basal growth rate. Quiescence is the prevalent state of the stem cells. Since their function is to replenish dead or damaged cells, they enter the cycle when their niches signal the need for new cells. In soft (and in hard) the addition of differentiation inhibitors implies that the CSCs always record low DCC populations. This drives them into the cycle, where they divide but, prevented from differentiating, overwhelmingly generate new CSCs. Differentiation is very unlikely (pd=0.0019) and we may neglect it to simplify the analysis. If we do this, there is no linear contribution of the CSCs to DCC generation. With the parameter values in Table 1, the equilibrium point where the two kind of cells coexist is located at (S∗,D∗)=(−21.7,−7.0) cells. This point lies in the third quadrant indicating that there is no physical/biological coexistence of the two populations. There are no attractors in the first quadrant and all trajectories diverge. This confirms that CSCs lose their normal quiescent state in a continuous (and futile) attempt to produce more DCCs. The (positive) intraspecific interaction coefficients αii are here directly related to the individual maximum population sizes of the respective subpopulations. If we assumed that the two subpopulations did not interact, ij=0, i≠j, Eq. (1a) would read: 4\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document} $$ \frac{dS}{dt}=rS\left[(p_{s} - p_{d})- p_{s} \alpha_{SS} S \right], $$ \end{document}dSdt=rS(ps−pd)−psαSSS, which is a logistic equation that leads to a maximum population size Sc=(ps−pd)/(psαSS)≃12 cells. In this way, from Eq. (1b), we obtain Dc=1/αDD≃2 cells for the DCCs, which is six times smaller than Sc. Therefore, if there were no interactions between subpopulations our model would predict a 14-cell spheroid, a size that would be reached by day 17. Interactions between the populations are needed to understand the faster growth observed in the experiment. The negative values of the interspecific interaction coefficients, αij<0, i≠j lead to a positive feedback loop: An increase in one subpopulation drives an increase in the other. 
The numbers in Table 1, especially the relatively large value of αSD (5 times that of αSS), and the relatively low value of αDS (less than half of αDD), indicate that this interplay favors a net increase in CSC number but is not strong enough to lead to an increase in DCC number. The feedback loop mechanism is activated to generate a suitable niche, which requires a low S/(S+D) fraction. The inhibition of differentiation causes the CSCs to continuously reproduce in a frustrated attempt to recreate the DCC population required by the niche. Since the population equilibrium corresponding to a stable niche is never reached, cycling CSCs seldom return to quiescence. In Fig. 6, the fraction S/(S+D) is depicted for the three experiments up to day ten. Due to the inhibitor’s efficacy, this fraction falls very slowly for the soft and hard substrates (light blue and orange lines, respectively), but decays freely in the control environment. Fig. 6CSC fractions. Time evolution of the cancer stem cell fraction predicted by the model for the three experiments. In both soft and hard the stem cell fraction remains very high, since the stem cell maintenance factors are a barrier to DCC generation. In control the differentiation barrier is not present and the stem cell fraction decreases towards the value corresponding to the niche CSC fractions. Time evolution of the cancer stem cell fraction predicted by the model for the three experiments. In both soft and hard the stem cell fraction remains very high, since the stem cell maintenance factors are a barrier to DCC generation. In control the differentiation barrier is not present and the stem cell fraction decreases towards the value corresponding to the niche Hard substrate: When the substrate hardens the environmental conditions that mediate cell-to-cell signaling change and the CSC phenotype becomes more amenable to proliferation, as seen by comparing Figs. 3 and 4. The growth rate r, which still represents the PDT of the CSCs because of their prevalence, is twice as large as that corresponding to growth on the soft substrate. Our interpretation is that increasing the substrate hardness alters the CSC phenotype required to reach the cell fraction that regulates niche size. As in soft, the CSCs try to increase the DCC population but now they are immersed in a different environment. The duplication of the growth rate r, the reduction of the symmetric duplication probability ps, and the emergence of a large fraction (>50%) of asymmetric divisions indicates that the direct effect of the differentiation-inhibiting factors is weaker than in soft. Indirect effects appear through the interspecific coefficients, especially the relatively large and negative (-0.53) αSD. As Fig. 6 shows, by day 8 the DCC fraction is not much larger than in soft, indicating that the attempt to establish the niche has also failed in hard. More remarkable is that the intraspecific CSC coefficient has changed its sign, an indication that CSCs record a stressed environment that they may perceive as due to the presence of damaged tissue. This generates a phenotype different from that in soft [18], which accelerates cell division. On the other hand, the large and positive DCC intraspecific coefficient, αDD=1.83, implies a huge increase of the inhibitory signaling between DCCs with respect to soft. In this case, the discussion following Eq. 
In this case, the discussion following Eq. (4) suggests for this system a maximum intrinsic DCC number smaller than unity, Dc = 1/αDD ≃ 0.5 cells, meaning that on this substrate the DCC subpopulation would not be able to survive without the CSCs. The general picture is that of a growing tumorsphere whose response to the substrate is to increase its cell number as fast as possible, aiming to reach a DCC fraction that equilibrates the niche, a goal that cannot be attained due to the presence of differentiation-inhibiting agents. The influence of the niche, as in [18], is thus a cornerstone for the biological interpretation of the model results. Control substrate: As mentioned in the previous section, we cannot expect the model to give a completely accurate description of the control substrate experiment, but its interpretation may shed light on the system dynamics. In this condition, CSCs are allowed to freely differentiate. These cells record an environment where the proportion of DCCs increases monotonically, and the population fractions should tend to those corresponding to niche equilibrium. However, Figs. 5 and 6 suggest that there is no limit to the increase of the DCC fraction. We conjecture that this behavior may be explained by migration: after the spheroid reaches a given size, cells start to migrate and the average number of DCCs recorded by each CSC does not increase. One consequence is that the CSC number remains stationary, as shown in Fig. 5. In fact, their effective PDT is about six months, i.e., they are generally quiescent, as they should be. Furthermore, note that the whole-population PDT leads to a duplication of the first 5 cells after 41 days (1/0.024), which is almost three times slower than the soft rate (15 days). To explain the rapid spheroid growth we need to consider the contribution of the interactions. From Table 1 we see that interactions favor DCCs and restrict CSC proliferation. A more detailed analysis of the evolution of the two subpopulations reveals the following. CSCs: The positivity and very low absolute values of αSS and αSD ensure the stability of the CSC number. The dominant contribution to the change in the CSC number is given by the linear term, which yields |ps r|−1 = 185 days, meaning that the CSCs are quiescent during the whole experiment. DCCs: Approximating Eq. (1b) with S→0, we get D = 1/αDD. Because αDD < 0, the quadratic term always promotes DCC number growth. As mentioned in the case of hard, a negative sign in the intraspecific interaction parameter is related to signaling loss. Sphere disaggregation in control suggests that the hard substrate promotes migration, weakening cell-to-cell interactions. Given that the CSC pool remains constant while many cells move away from the spheroid, we can also conclude that the migrating cells are likely to be differentiated.
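To make the phase-plane arguments above easy to reproduce, here is a minimal numerical sketch in Python. Only r, pd, Sc ≃ 12, Dc ≃ 2, and the coefficient ratios (|αSD| ≃ 5 αSS; |αDS| less than half of αDD) are taken from the text; ps and the functional form of Eq. (1b) are assumptions chosen to satisfy the two constraints quoted above (a linear pd S source term, and an S → 0 fixed point at D = 1/αDD). The sketch therefore illustrates the feedback-loop mechanism rather than reproducing Table 1.

import numpy as np
from scipy.integrate import solve_ivp

# Quoted in the text (soft substrate): r and p_d; the rest are assumptions.
r, pd = 0.069, 0.0019             # basal growth rate (day^-1), differentiation prob.
ps = 0.9                          # assumed symmetric self-renewal probability
a_SS = (ps - pd) / (ps * 12.0)    # chosen so that S_c = 12 cells (as quoted)
a_DD = 0.5                        # chosen so that D_c = 1/a_DD = 2 cells (as quoted)
a_SD = -5.0 * a_SS                # magnitude ratio quoted in the text
a_DS = -0.4 * a_DD                # "less than half of a_DD" in magnitude

def rhs(t, y):
    S, D = y
    # Eq. (1a): interacting form inferred from the no-interaction limit.
    dS = r * S * ((ps - pd) - ps * (a_SS * S + a_SD * D))
    # Eq. (1b): ASSUMED closure consistent with the two quoted constraints
    # (linear p_d*S source; S -> 0 fixed point at D = 1/a_DD).
    dD = r * (pd * S + D * (1.0 - a_DD * D - a_DS * S))
    return [dS, dD]

sol = solve_ivp(rhs, (0.0, 10.0), [5.0, 0.0], dense_output=True)
for day in range(0, 11, 2):
    S, D = sol.sol(float(day))
    print(f"day {day:2d}:  S = {S:7.2f}  D = {D:6.2f}  CSC fraction = {S/(S+D):.3f}")

Because the interspecific coefficients are negative, each subpopulation enlarges the other’s effective growth term, so the trajectories never settle on a positive fixed point, mirroring the divergence discussed for soft.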
Of note, our analysis of these experiments implies that the absence of the stem cell growth factors in control leads to the disappearance of the feedback loop that plays such a crucial role in both soft and hard. The existence of the feedback loops detected in soft and hard can similarly be inferred from experiments carried out with the cancer lines SUM159, MCF-7, and T47D, which were also cultured with stem cell growth factors [9, 18]. Even if the CSC fractions are not far from unity in both soft and hard, the detailed reasons for their behavior are different. In both cases the cell subpopulations assist each other, generating a positive-feedback cycle that leads to continuous growth, an indication that cell-to-cell signaling is crucial in determining the process. The effect of the substrate on intraspecific interactions in hard is strong. There, CSCs are weakly promoting, but DCCs are so strongly inhibitory that the DCC population would disappear if it were not for the significant cooperation from the CSCs, which is expressed mainly through a considerable fraction of asymmetric divisions. The inhibition between DCCs is also likely to induce the phenotype change indicated by the large and negative value of αSD. Conclusion: The analysis of experimental data with our model confirms Wang’s conclusions and indicates that the substrate regulates the details of tumorsphere evolution and that a powerful engine of tumorsphere growth is the stem cell “memory" of its niche. By comparing growth on the hard and soft substrates, our analysis also confirms the observations that substrate stiffness promotes cancer cell proliferation (as recently reviewed by Nia et al. [50]). What is more interesting is that the evident differences between the parameters describing growth on the hard and control substrates indicate that the response of stem and non-stem cancer cells to an increase in substrate stiffness is likely to be mediated by different processes. In summary, the ability of stem cells to sense their environment plays a crucial role in tumorsphere evolution. Our model has proven to be particularly useful at determining why substratum stiffness has a profound influence on the behavior of cancer stem cells, soft substrates favoring symmetric divisions and hard substrates leading to a large proportion of asymmetric doublings. In vivo studies are needed to further our understanding of niche processes under natural environments. Supplementary Information: Additional file 1 Some consequences of equation (1). The onset of growth and the fate of a tumorsphere. Additional file 2 Distribution graphs from fitting procedure.
Background: Cancer stem cells are important for the development of many solid tumors. These cells receive promoting and inhibitory signals that depend on the nature of their environment (their niche) and determine cell dynamics. Mechanical stresses are crucial to the initiation and interpretation of these signals. Methods: A two-population mathematical model of tumorsphere growth is used to interpret the results of a series of experiments recently carried out in Tianjin, China, and extract information about the intraspecific and interspecific interactions between cancer stem cell and differentiated cancer cell populations. Results: The model allows us to reconstruct the time evolution of the cancer stem cell fraction, which was not directly measured. We find that, in the presence of stem cell growth factors, the interspecific cooperation between cancer stem cells and differentiated cancer cells induces a positive feedback loop that determines growth, independently of substrate hardness. In a frustrated attempt to reconstitute the stem cell niche, the number of cancer stem cells increases continuously with a reproduction rate that is enhanced by a hard substrate. For growth on soft agar, intraspecific interactions are always inhibitory, but on hard agar the interactions between stem cells are collaborative while those between differentiated cells are strongly inhibitory. Evidence also suggests that a hard substrate brings about a large fraction of asymmetric stem cell divisions. In the absence of stem cell growth factors, the barrier to differentiation is broken and overall growth is faster, even if the stem cell number is conserved. Conclusions: Our interpretation of the experimental results validates the centrality of the concept of stem cell niche when tumor growth is fueled by cancer stem cells. Niche memory is found to be responsible for the characteristic population dynamics observed in tumorspheres. The model also shows why substratum stiffness has a deep influence on the behavior of cancer stem cells, stiffer substrates leading to a larger proportion of asymmetric doublings. A specific condition for the growth of the cancer stem cell number is also obtained.
Background: For some time, it has been known that the presence of cancer stem cells (CSCs) is important for the development of many solid tumors [1–6]. According to the CSC hypothesis these cells are often crucial for the development of resistance to therapeutic interventions [7, 8]. In healthy tissues the proportion of stem cells is small; homeostatic equilibrium is maintained through the signals that the stem cells receive from their niches. The onset of cancer is likely to destroy this equilibrium and cancerous tissues may exhibit a higher proportion of stem cells than normal tissues [9]. This increased proportion of cancer stem cells may underlie the aggressive behavior of high-grade tumors [2, 10]. As recently explained by Taniguchi et al. [11], the cross-talk between tumor initiating (stem) cells and their niche microenvironment is a possible therapeutic target. Understanding the nature of the interactions between CSCs and their environment is therefore important for the development of effective intervention procedures. Interesting mathematical models have been developed to explain various properties of stem-cell-driven tissue growth. Stiehl and Marciniak-Czochra proposed a mathematical model of cancer stem cell dynamics to describe the time evolution of a leukemic cell line competing with healthy hematopoiesis [12]. This group later provided evidence that the influence of leukemic stem cells on the course of the disease is stronger than that of non-stem leukemic cells [13]. Yang, Plikus and Komarova used stochastic modeling to explore the relative importance of symmetric and asymmetric stem cell divisions, showing that tight homeostatic control is not necessarily associated with purely asymmetric divisions and that symmetric divisions can help to stabilize mouse paw epidermis lineage [14]. Recently, Bessonov and coworkers developed a model that allowed them to determine the influence of the population dynamics on the time-varying probabilities of different cell fates and the ascertainment of the cell-cell communication factors influencing these probabilities [15]. These authors suggest that a coordinated dynamical change of the cell behavior parameters occurs in response to a biochemical signal, which they describe as an underlying field. Here we will describe the effects of cellular interactions using nonlinear terms instead. Live cells are generally sensitive to substratum rigidity and texture [16]. A growing tumor must compete for space with the surrounding environment; the resulting mechanical stresses generate signals that impact on the tumor cells. Cells integrate these mechanical cues and respond in ways that are related to their phenotype. Their active response may also lead to phenotype modifications [17, 18]; in fact, mechanical cues generated by the environment can trigger cancer cell invasion [19]. Environmental stiffness may then be associated with tumor progression, a process that can also be promoted by mechanically activated ion channels [20]. What is the influence of the mechanical environment on cancer stem cells? At each generation, CSCs divide symmetrically, generating either two new CSCs or two differentiated cancer cells (DCCs), or asymmetrically, generating one CSC and one differentiated cancer cell [7, 21]. Quorum sensing controls differentiation of healthy stem cells, but it is thought to be altered in cancer stem cells [22]. 
Mechanical inputs are an important component of the altered control mechanism and can be assumed to play a role in the fate of the cancer stem cells. In vitro experiments have been designed to probe the influence of mechanical stresses of various types on tumor cells. The solid-stress inhibition of multicellular spheroid growth was already demonstrated by Helmlinger and coworkers in 1997 [23]. The results of these experiments were shown to follow allometric laws [24]. Interestingly, Koike et al. showed that spheroid formation with Dunning R3327 rat prostate carcinoma AT3.1 cells is facilitated by solid stress [25]. A study by Cheng et al. suggested how tumors grow in confined locations where levels of stress are high, showing that growth-induced solid stress can affect cell phenotype [26]. Using spheroid cell aggregates, Montel et al. showed that applied pressure may be used to modulate tumor growth [27] and observed that cells are blocked by compressive stresses at the G1 checkpoint [28]. The organization of cells in a spheroid is modified by physical confinement [29], which likewise modifies the proliferation gradient [30]. The stiffness of hydrogels has been shown to determine the shape of tumor cells, with high stiffnesses leading to spheroidal cells, a feature known to occur in in vivo tumors [31]. By studying the behavior of adult neural stem cells under various mechanical cues, Saha et al. showed that soft gels favored differentiation into neurons while harder gels promoted glial cultures. Importantly, they also showed that low substrate stiffness inhibited cell spreading, self-renewal, and differentiation [32]. Osteocyte-like cells were shown to significantly induce compaction of tumor spheroids formed using breast cancer cells [33]. Matrix stiffness was shown to affect, through mechanotransduction events, the osteogenic outcome of human mesenchymal stem cell differentiation [34]. HeLa cells were used to show that both an attractive contact force and a substrate-controlled remote force contribute to the formation of large-scale multicellular structures in cancer [35]. Fifteen years ago, Discher, Janmey, and Wang not only explained that the stiffness of the anchoring substrate can have a strong influence on the cell state, but they also indicated that stem cell differentiation may be influenced by the nature of the substrate [36]. It is relevant that naive mesenchymal stem cells were shown to commit to various phenotypes with high sensitivity to tissue elasticity: They differentiate preferably into neurons and osteocytes if they are cultured on soft and rigid matrices, respectively [37]. On the other hand, human mesenchymal stem cells adhere onto precalcified bones, which are softer than calcified bones [16]. It is also known that hydrodynamic shear stress promotes the conversion of primary patient epithelial tumor cells into specific cancer stem-like cells [38]. Smith et al. found that the mechanical context of the differentiation niche can drive endothelial cell identity from human-induced pluripotent stem cells, showing that stiffness drives mesodermal differentiation, leading to endothelial commitment [39]. Thus, microenvironments help specify stem cell lineages, although it may be difficult to decouple the influence of mechanical interactions and surface topography and stiffness from biochemical effects [16, 39]. 
Since they are grown in the absence of the complex signaling system prevalent in the environment of real tumors, tumorspheres, spheroids formed by clonal proliferation out of permanent cell lines, tumor tissue, or blood [40], are suitable candidates to probe the influence of mechanical stimuli on stem-cell-fueled cancer growth. Wang et al. cultured breast CSCs on soft and hard agar matrix surfaces, investigating the effects that substrate stiffness has on cell state and proliferation [41]. These authors showed that breast cancer stem cells can be kept in states of differentiation, proliferation or quiescence depending on a combination of adherent growth and stem cell growth factors, but they focused on the experimental possibilities and did not draw conclusions about how these agencies may modify the stem cell niche to lead to the observed behavior. Recently, we developed a two-population tumorsphere model to identify the role of the intraspecific and interspecific interactions that determine tumorsphere growth [18]. Application of our model to three breast cancer cell lines studied by Chen and coworkers [9] indicates that while intraspecific interactions are inhibitory, interspecific interactions promote growth. This feature of interspecific interactions was interpreted in terms of the stimulation by CSCs of the growth of DCCs in order to consolidate their niches and of the plasticity of the DCCs to dedifferentiate into CSCs [18]. Here we use this model to analyze the experimental results of Wang et al. [41], discussing how substrate stiffness influences growth and finding that the concept of cancer stem cell niche is central for its understanding. In the next section we review the model of Ref. [18] and in the following sections we apply it to the results of Ref. [41] and discuss their implications. Conclusion: The analysis of experimental data with our model confirms Wang’s conclusions and indicates that the substrate regulates the details of tumorsphere evolution and that a powerful engine of tumorsphere growth is the stem cell “memory" of its niche. By comparing growth on the hard and soft substrates, our analysis also confirms the observations that substrate stiffness promotes cancer cell proliferation (as recently reviewed by Nia et al. [50]). What is more interesting is that the evident differences between the parameters describing growth on the hard and control substrates indicate that the response of stem and non-stem cancer cells to an increase in substrate stiffness is likely to be mediated by different processes. In summary, the ability of stem cells to sense their environment plays a crucial role in tumorsphere evolution. Our model has proven to be particularly useful at determining why substratum stiffness has a profound influence on the behavior of cancer stem cells, soft substrates favoring symmetric divisions and hard substrates leading to a large proportion of asymmetric doublings. In vivo studies are needed to further our understanding of niche processes under natural environments.
Background: Cancer stem cells are important for the development of many solid tumors. These cells receive promoting and inhibitory signals that depend on the nature of their environment (their niche) and determine cell dynamics. Mechanical stresses are crucial to the initiation and interpretation of these signals. Methods: A two-population mathematical model of tumorsphere growth is used to interpret the results of a series of experiments recently carried out in Tianjin, China, and extract information about the intraspecific and interspecific interactions between cancer stem cell and differentiated cancer cell populations. Results: The model allows us to reconstruct the time evolution of the cancer stem cell fraction, which was not directly measured. We find that, in the presence of stem cell growth factors, the interspecific cooperation between cancer stem cells and differentiated cancer cells induces a positive feedback loop that determines growth, independently of substrate hardness. In a frustrated attempt to reconstitute the stem cell niche, the number of cancer stem cells increases continuously with a reproduction rate that is enhanced by a hard substrate. For growth on soft agar, intraspecific interactions are always inhibitory, but on hard agar the interactions between stem cells are collaborative while those between differentiated cells are strongly inhibitory. Evidence also suggests that a hard substrate brings about a large fraction of asymmetric stem cell divisions. In the absence of stem cell growth factors, the barrier to differentiation is broken and overall growth is faster, even if the stem cell number is conserved. Conclusions: Our interpretation of the experimental results validates the centrality of the concept of stem cell niche when tumor growth is fueled by cancer stem cells. Niche memory is found to be responsible for the characteristic population dynamics observed in tumorspheres. The model also shows why substratum stiffness has a deep influence on the behavior of cancer stem cells, stiffer substrates leading to a larger proportion of asymmetric doublings. A specific condition for the growth of the cancer stem cell number is also obtained.
18,070
371
[ 1542, 2412, 469, 744, 136, 403, 339, 340, 857, 425, 898, 66 ]
16
[ "cells", "growth", "cell", "usepackage", "stem", "soft", "cscs", "csc", "hard", "number" ]
[ "tumor initiating stem", "cell dynamics describe", "cell dynamics", "mathematically growth tumorsphere", "model cancer stem" ]
null
[CONTENT] Tumor growth | Cancer stem cell | Niche | Substrate | Tumorsphere | Mathematical modelling [SUMMARY]
null
[CONTENT] Tumor growth | Cancer stem cell | Niche | Substrate | Tumorsphere | Mathematical modelling [SUMMARY]
[CONTENT] Tumor growth | Cancer stem cell | Niche | Substrate | Tumorsphere | Mathematical modelling [SUMMARY]
[CONTENT] Tumor growth | Cancer stem cell | Niche | Substrate | Tumorsphere | Mathematical modelling [SUMMARY]
[CONTENT] Tumor growth | Cancer stem cell | Niche | Substrate | Tumorsphere | Mathematical modelling [SUMMARY]
[CONTENT] Cell Differentiation | Cell Proliferation | Culture Media | Humans | Models, Biological | Neoplasms | Neoplastic Stem Cells | Spheroids, Cellular | Stem Cell Niche | Stress, Mechanical | Surface Properties | Tumor Cells, Cultured [SUMMARY]
null
[CONTENT] Cell Differentiation | Cell Proliferation | Culture Media | Humans | Models, Biological | Neoplasms | Neoplastic Stem Cells | Spheroids, Cellular | Stem Cell Niche | Stress, Mechanical | Surface Properties | Tumor Cells, Cultured [SUMMARY]
[CONTENT] Cell Differentiation | Cell Proliferation | Culture Media | Humans | Models, Biological | Neoplasms | Neoplastic Stem Cells | Spheroids, Cellular | Stem Cell Niche | Stress, Mechanical | Surface Properties | Tumor Cells, Cultured [SUMMARY]
[CONTENT] Cell Differentiation | Cell Proliferation | Culture Media | Humans | Models, Biological | Neoplasms | Neoplastic Stem Cells | Spheroids, Cellular | Stem Cell Niche | Stress, Mechanical | Surface Properties | Tumor Cells, Cultured [SUMMARY]
[CONTENT] Cell Differentiation | Cell Proliferation | Culture Media | Humans | Models, Biological | Neoplasms | Neoplastic Stem Cells | Spheroids, Cellular | Stem Cell Niche | Stress, Mechanical | Surface Properties | Tumor Cells, Cultured [SUMMARY]
[CONTENT] tumor initiating stem | cell dynamics describe | cell dynamics | mathematically growth tumorsphere | model cancer stem [SUMMARY]
null
[CONTENT] tumor initiating stem | cell dynamics describe | cell dynamics | mathematically growth tumorsphere | model cancer stem [SUMMARY]
[CONTENT] tumor initiating stem | cell dynamics describe | cell dynamics | mathematically growth tumorsphere | model cancer stem [SUMMARY]
[CONTENT] tumor initiating stem | cell dynamics describe | cell dynamics | mathematically growth tumorsphere | model cancer stem [SUMMARY]
[CONTENT] tumor initiating stem | cell dynamics describe | cell dynamics | mathematically growth tumorsphere | model cancer stem [SUMMARY]
[CONTENT] cells | growth | cell | usepackage | stem | soft | cscs | csc | hard | number [SUMMARY]
null
[CONTENT] cells | growth | cell | usepackage | stem | soft | cscs | csc | hard | number [SUMMARY]
[CONTENT] cells | growth | cell | usepackage | stem | soft | cscs | csc | hard | number [SUMMARY]
[CONTENT] cells | growth | cell | usepackage | stem | soft | cscs | csc | hard | number [SUMMARY]
[CONTENT] cells | growth | cell | usepackage | stem | soft | cscs | csc | hard | number [SUMMARY]
[CONTENT] cells | stem | mechanical | cell | stem cells | cancer | stiffness | tumor | tumors | stress [SUMMARY]
null
[CONTENT] usepackage | ps | growth | soft | hard | pd | seed | cells | stem | csc [SUMMARY]
[CONTENT] substrates | stiffness | stem | processes | soft substrates | substrate stiffness | growth hard | tumorsphere evolution | confirms | cancer [SUMMARY]
[CONTENT] usepackage | cells | cell | growth | stem | soft | hard | cscs | data | csc [SUMMARY]
[CONTENT] usepackage | cells | cell | growth | stem | soft | hard | cscs | data | csc [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| two | Tianjin | China ||| ||| ||| ||| ||| ||| ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| two | Tianjin | China ||| ||| ||| ||| ||| ||| ||| ||| ||| ||| ||| [SUMMARY]
First prospective European study for the feasibility and safety of magnetically controlled capsule endoscopy in gastric mucosal abnormalities.
35721886
While capsule endoscopy (CE) is the gold standard diagnostic method for detecting small bowel (SB) diseases and disorders, a novel magnetically controlled capsule endoscopy (MCCE) system provides non-invasive evaluation of the gastric mucosal surface, which can be performed without sedation or discomfort. During standard SBCE, passive movement of the CE may leave areas of the complex gastric mucosal anatomy unexplored, whereas the precision of MCCE capsule movements inside the stomach promises better visualization of the entire mucosa.
BACKGROUND
Of outpatients who were referred for SBCE, 284 (male/female: 149/135) were prospectively enrolled and evaluated by MCCE. The stomach was examined in the supine, left, and right lateral decubitus positions without sedation. Next, all patients underwent a complete SBCE study protocol. The gastric mucosa was explored with the Ankon MCCE system with active magnetic control of the capsule endoscope in the stomach, applying three standardized pre-programmed computerized algorithms in combination with manual control of the magnetic movements.
METHODS
The urea breath test revealed Helicobacter pylori positivity in 32.7% of patients. The mean gastric and SB transit times with MCCE were 0 h 47 min 40 s and 3 h 46 min 22 s, respectively. The average total time of upper gastrointestinal MCCE examination was 5 h 48 min 35 s. Active magnetic movement of the Ankon capsule through the pylorus was successful in 41.9% of patients. Overall diagnostic yield for detecting abnormalities in the stomach and SB was 81.9% (68.6% minor; 13.3% major pathologies); 25.8% of abnormalities were in the SB; 74.2% were in the stomach. The diagnostic yield for stomach/SB was 55.9%/12.7% for minor and 4.9%/8.4% for major pathologies.
RESULTS
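The transit times quoted above can be cross-checked with a few lines of arithmetic. The snippet below sums the reported mean gastric and SB transit times and shows that the remaining roughly 1 h 15 min of the 5 h 48 min total is attributable to the esophageal transit and the actively controlled gastric examination phases; this is an inference from the numbers, not a breakdown reported by the study.

from datetime import timedelta

gastric = timedelta(hours=0, minutes=47, seconds=40)      # mean gastric TT
small_bowel = timedelta(hours=3, minutes=46, seconds=22)  # mean SB TT
total = timedelta(hours=5, minutes=48, seconds=35)        # mean total MCCE time

unaccounted = total - (gastric + small_bowel)
print("gastric + SB transit:", gastric + small_bowel)     # 4:34:02
print("remaining time:      ", unaccounted)               # 1:14:33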
MCCE is a feasible, safe diagnostic method for evaluating gastric mucosal lesions and is a promising non-invasive screening tool for decreasing morbidity and mortality in upper gastrointestinal diseases.
CONCLUSION
[ "Capsule Endoscopes", "Capsule Endoscopy", "Feasibility Studies", "Female", "Gastric Mucosa", "Humans", "Intestinal Diseases", "Male", "Prospective Studies" ]
9157624
INTRODUCTION
Capsule endoscopy (CE) is a non-invasive, painless, patient-friendly, and safe diagnostic method. It is currently the gold standard examination for small bowel (SB) diseases, including inflammatory bowel disease, suspected polyposis syndromes, unexplained abdominal pain, celiac disease, and obscure gastrointestinal bleeding (OGIB). Small bowel capsule endoscopy (SBCE) provides excellent diagnostic yield, which has been proven in many clinical trials since its introduction in the late 1990s. Currently, it has a fundamental role in gastrointestinal (GI) endoscopy, especially in patients with suspected SB diseases[1]. Attempts have been made to expand the diagnostic role of CE to areas that are traditionally explored with standard endoscopic procedures (gastroscopy and colonoscopy), e.g., the esophagus, stomach, and colon. These aims have been challenged by significant drawbacks in the past due to a lack of optimal preparation, standardized procedural design, and the ability to visualize the entire mucosal surface[2]. A common perception is that the anatomical structures of the stomach and colon are too complex for adequate visualization with a passive capsule endoscope[3-5]. Although the SB is essentially a simple tube, the more complex gastric anatomy means that passive capsule movement may leave areas of the gastric mucosa unexplored when standard CE is utilized. Magnetically controlled capsule endoscopy (MCCE) systems were developed over the past ten years to precisely control capsule movements inside the GI tract and to achieve better visualization of the entire mucosa, especially in the stomach. Different approaches have been developed and studied over time, which have enabled manual steering or active movement of the capsule systems. The external magnetic field seems to be the most adequate and energetically effective method for steering and holding the capsule within the stomach[6,7]. According to recent, prospective studies that were mainly conducted in China, a methodology developed by Ankon and AnX robotics could be an optimal method for non-invasive screening and early diagnosis of gastric diseases such as gastric cancer and intramucosal high-grade dysplasia. It has the potential to become a valuable and accepted screening method in a population with high gastric cancer risk[8]. The aim of this study was to evaluate the feasibility, safety, and diagnostic yield of the Ankon MCCE system in patients with gastric and SB disorders who were referred to our endoscopy unit for SBCE examination between September 2017 and December 2020. Our secondary aim was to evaluate the overall gastric examinations, SB transit times (TTs), and the efficacy of MCCE in achieving a complete gastric and SBCE investigation by utilizing the same upper GI capsule.
MATERIALS AND METHODS
Study design Our present study prospectively enrolled outpatients who were seen at the Endo-Kapszula Endoscopy Unit, Székesfehérvár, Hungary, between September 2017 and December 2020. A combined investigation of the stomach and SB of patients referred for SBCE was performed with a robotically controlled MCCE system (NaviCam, Ankon Technologies Co, Ltd, Shanghai, China). The primary endpoint of the present study was to investigate the diagnostic yield of MCCE in the evaluation of gastric and SB abnormalities. The secondary endpoints were to evaluate the feasibility of performing a complete gastric and SB investigation with the same diagnostic procedure; to address safety parameters, including adverse and severe adverse events; and to calculate the capsule TTs, such as esophageal, gastric, and SB TTs, and the overall procedure time. Patients All patients who were referred to our endoscopy unit for SBCE and who agreed to our gastric study protocol were included in the current study. During the study period, we enrolled 284 consecutive, eligible patients, comprising 149 males (52.5%) and 135 females (47.5%), with a mean age of 44 years. Detailed patient characteristics are shown in Table 1. The indications for MCCE were iron deficiency anemia, OGIB, suspected/established SB Crohn’s disease, suspected/confirmed celiac disease, suspected SB neoplastic disease, carcinoid syndrome, and SB polyposis syndromes. Indications of small bowel capsule endoscopy (grouped by gender) BMI: Body mass index; SB: Small bowel. This study was approved by the Ethical Committee of the University of Szeged (Registry No. 5/17.04.26) and registered in the ClinicalTrials.gov trial registry (Identifier: NCT03234725). The present study was conducted according to the provisions of the World Medical Association's Declaration of Helsinki (1995 revision). Patients agreed to undergo capsule endoscopies and Helicobacter pylori (H. pylori) urea breath tests (UBTs) by written informed consent. Patient preparation for MCCE According to the SBCE guidelines, we applied the following patient preparation and study protocol: Patients followed a liquid diet and consumed 2 L of water with two sachets of polyethylene glycol (PEG) the day before the study. First, an H. pylori C13 UBT was performed on the morning of the study, while the patient was in a fasting condition. The UBT requires an average of 30 min, during which time the patient should not consume any fluids or food. During that visit, the patient's medical history was recorded, and a physical examination was performed. After the UBT, the patient ingested 2 dL of water with simethicone suspension (80 mg) to reduce mucus in the stomach. Then, to distend the stomach properly, approximately 8-10 dL of clean water was consumed by all patients within 10 min. Water ingestion could be repeated as needed to enhance gastric distension during the examination. After complete visualization of the total gastric mucosal surface, active pyloric propulsion of the capsule was attempted in all patients with the external magnetic field. If it was unsuccessful, 10 mg intravenous metoclopramide was administered 60 min after the capsule was swallowed. Contraindications of MCCE were the same as those of SBCE and magnetic resonance imaging (MRI) examination. In our study, the absolute contraindications were: previous abdominal surgery and/or a previous capsule retention; implanted MRI-incompatible electronic devices (e.g., implantable cardioverter-defibrillators and pacemakers) or magnetic, metal foreign bodies; patients who were not competent or who refused the informed consent form; age under 18 or above 70; and pregnancy. A patency capsule test was initially performed in all patients to determine those with relative contraindications, including known or suspected GI obstruction due to SB Crohn’s disease, neoplasia, or SB surgery.
Technical methods of MCCE Our system involves a special static magnet with robotic and manual guidance, a computer workstation with ESNavi software (Ankon Technologies Co, Ltd) (Figure 1), a magnetic capsule endoscope, a capsule locator, and a data recorder. The capsule endoscope measures 26.8 × 11.6 mm, weighs 4.8 g, and has a permanent spherical magnet inside (Figure 2). The operator can adjust the frequency of captured pictures from 0.5 to 6 frames per second (fps). Capsule functioning can be stopped temporarily and restarted by the operator remotely from the workstation. The picture resolution is 480 × 480 pixels, and the field of view is 140°. The illumination is automatically adjusted by an automatic picture focusing mechanism, which enables the view depth to shift from 0 mm to 60 mm. Depending on the fps, the battery life can be over 10 h, which allows combined gastric and SB capsule investigations with the same capsule. Robotic C-arm, investigation table, and computer workstation for magnetically controlled capsule endoscopy. Magnetic capsule endoscope (NaviCam). The magnetic robotic C-arm generates an adjustable magnetic field outside of the body with a maximum strength of 0.2 T, which allows precisely controlled movements in three spatial directions. During the procedure, the physician guides the magnetic capsule by two joysticks in any chosen spatial direction or along its axis, and can therefore rotate or tilt it. Moreover, the capsule can advance forward 360° by a rotational automatism in the direction of the capsule visual axis, which is helpful for transposition of the capsule from the long axis of the stomach. The capsule locator is a unique device that activates the capsule by infrared light prior to the patient swallowing it. The presence of the capsule can be detected in the case of suspected capsule retention (Figure 3). Capsule locator device. The system is capable of real-time digital transmission of images to the operating system. At the same time, it continuously obtains information from the gyroscope built into the capsule [a three-dimensional (3D) motion detector]. It also receives data on the actual spatial location of the camera and localizes the capsule inside the body at any time. The data recorder (Figure 4) receives all motion information and high-quality pictures via wireless transmission from the capsule endoscope, while the connection between the data recorder and the workstation via a USB wire makes the real-time pictures and gyroscope information visible. Data recorder. The magnetic capsule can be moved along five different axes with the controlling magnet: two rotational and three translational. To achieve this, there are two joysticks on the workstation desktop: the left one controls the capsule in the two rotational axes (horizontal and vertical directions) and the right joystick directs it in the three translational axes (forward/backward, up/down, left/right). Precise magnetic control is achieved by positioning the examining table, modifying the position of the spherical magnet axis in 3D space, and dynamically adjusting the strength and direction of the mutually perpendicular vectorial magnetic fields. The above-mentioned robotic system can also automatically run the magnetic field vector and position it via robotically controlled, computer-based software (scripts) in default mode, which can achieve a standardized mucosal scanning during the MCCE procedure without the direct intervention of a physician.
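As a rough illustration of why an external ball magnet can steer the capsule, the short Python sketch below models the C-arm magnet as a point dipole and shows how the on-axis field, the maximum aligning torque, and the gradient pulling force scale with distance. All numbers (dipole moments, working distances) are illustrative assumptions rather than Ankon specifications; only the 0.2 T maximum field is quoted above, and m_source is chosen so the model reproduces it at 10 cm.

import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

# Illustrative assumptions, not Ankon specifications.
m_source = 1.0e3     # dipole moment of the external ball magnet (A*m^2)
m_capsule = 1.3e-3   # dipole moment of the capsule's internal magnet (A*m^2)

def b_on_axis(z):
    """On-axis field magnitude of a point dipole at distance z (meters)."""
    return MU0 * m_source / (2.0 * np.pi * z**3)

def gradient_pull(z):
    """Magnitude of the on-axis gradient force on an aligned capsule (N)."""
    return 3.0 * MU0 * m_source * m_capsule / (2.0 * np.pi * z**4)

for z_cm in (10, 15, 20, 25):
    z = z_cm / 100.0
    print(f"{z_cm:2d} cm:  B = {1e3*b_on_axis(z):6.1f} mT   "
          f"max torque = {m_capsule*b_on_axis(z):.1e} N*m   "
          f"pull = {1e3*gradient_pull(z):5.2f} mN")

Under these assumptions the field falls off as 1/z³ and the pulling force as 1/z⁴, which is consistent with the protocol's emphasis on keeping the ball magnet close to the abdominal wall; and since the capsule is nearly neutrally buoyant in the water-distended stomach, millinewton-scale pulls of this order are plausibly sufficient for steering.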
MCCE study protocol and gastric stations To achieve optimal gastric mucosal visualization and standardization of the MCCE protocol in the stomach, we defined eight different stations with different patient positions to visualize the entire inner gastric surface, as during upper GI endoscopy. Changing the patient position from the left lateral decubitus to the supine and right lateral positions is necessary to combine gravity and magnetic force, which improves capsule maneuvering (Figure 5). The capsule was swallowed by the patients in the left lateral decubitus position (Figure 6). Different magnetically controlled capsule endoscopy stations and capsule camera orientations defined to achieve a complete gastric mucosal surface visualization and mapping (created by Zoltán Tóbiás, MD). The capsule is swallowed by the patient in the left lateral decubitus position. A: Patient and ball magnet position; B and C: Pictures of the Z-line by CE from our database. Stations at the left lateral decubitus position: Station I. Visualization of the gastric fundus and subcardial region with the cardia (posterior J type retroflexion). After entering the stomach, the capsule was lowered into the large curvature at the body of the stomach. The magnetic ball was held high up at the level of the patient's right shoulder. The capsule camera was maintained in an obliquely upward orientation of 45° and then horizontally rotated to observe the gastric fundus and the cardia (Figure 7A-C). Stations I-III at the left lateral decubitus position. A: Patients’ position (Station I); B: Cardia by a gastroscope (Station I); C: Cardia by CE (Station I); D: Patients’ position (Station II); E: Cardia by a gastroscope (Station II); F: Cardia by CE (Station II); G: Patients’ position (Station III); H: Corpus by a gastroscope (Station III); I: Corpus by a capsule (Station III). Station II. Moving closer to the cardia and the fundus (anterior J type retroflexion). While using the right joystick, the ball magnet was lowered and fitted closely to the patient's right arm. Due to the proximity of the magnetic field, the capsule rose to the anterior wall and small curvature of the stomach. The capsule was then rotated with the camera oriented vertically upward to observe the cardia up close, with the fundus at a distance (Figure 7D-F). Station III. Visualization of the gastric body from the fundus. With the ball magnet in the same position, using the left joystick, the capsule camera was rotated and tilted downwards, enabling visualization of the proximal body of the stomach (corpus) and the gastric folds at the large curvature with a longitudinal view (Figure 7G-I). Stations in the supine position: Station IV. Visualization of the angular incisure. Patients were asked to lie on their backs, in a supine position. The capsule moved to the distal body and into the angular region due to the change of gravity and stepwise magnetic movements. The magnetic ball was moved to the left upper abdomen (hypochondrium) and then lowered, close to the patient's abdomen. In this case, the capsule was raised to the anterior wall of the stomach; therefore, the small curvature and the angular incisure were examined as well (Figure 8A-C). Stations IV and V in the supine position. A: Patients’ position (Station IV); B: Angular incisure by a gastroscope (Station IV); C: Angular incisure by CE (Station IV); D: Patients’ position (Station V); E: Angular incisure by a gastroscope (Station V); F: Angular incisure by CE (Station V). Station V. Visualization of the larger curvature of the distal body from the angular incisure (U type of retroversion). The magnetic ball was moved to the epigastric area, close to the abdomen. Then the capsule camera was oriented upward to observe the anterior wall of the gastric body. In this position, the capsule can be turned and rotated (e.g., towards the cardia), enabling visualization of the distal body of the stomach longitudinally, as in the endoscopic view of U-type retroversion (Figure 8D-F). Stations at the right lateral decubitus patient position: Station VI. Visualization of the antral canal. In the next phase, patients were asked to turn to the right lateral decubitus position. Due to the gravity force, the capsule sank and moved spontaneously into the antral region. Then, the ball magnet was positioned over the left kidney. The capsule was brought closer to the large curvature with the camera oriented obliquely downward at 45°, which enabled observation of the antrum. Then, the antral canal could be examined with the pylorus, and the angular incisure became visible from the antrum (Figure 9A-C). Station at the right lateral decubitus patient position. A: Patients’ position (Station VI); B: Antrum canal of stomach by gastroscope (Station VI); C: Antrum canal of stomach by CE (Station VI). Stations in the supine position: Station VII. Prepyloric view and visualization of the pylorus. After that, each patient was asked to lie on his or her back in the supine position. The ball magnet was positioned close to the body on the right upper abdomen (hypochondrium). The capsule camera was oriented horizontally and laterally toward the pylorus for closer observation. The magnet position ensured that the capsule remained in the antrum. Using both the right and left joysticks, we moved the capsule closer to the pylorus (Figure 10A-C). Stations VII and VIII in the supine position. A: Patient’s position (Station VII); B: Antrum canal of stomach by gastroscope (Station VII); C: Antrum canal of stomach by CE (Station VII); D: Patient’s position (Station VIII); E: Pyloric ring (Station VIII); F: Ampulla of Vater (Station VIII); G: Pylorus from the duodenal bulb with a capsule (Station VIII). Station VIII: Magnetically controlled transpyloric passage of the capsule. The magnetic ball was placed at the patient's right side at the level of the duodenal bulb. The capsule was then rotated until the camera faced the pylorus. The capsule was dragged close to the pylorus under the guidance of the magnet robot. As the pylorus opened, peristalsis assisted the capsule into the duodenum.
After transpyloric passage, first we depicted the duodenal bulb, then from the descending and inferior horizontal part of the duodenum, we visualized the ampulla of the Vater by tilting the capsule camera upwards to facilitate the retrograde view (Figure 10D-G). To achieve optimal gastric mucosal visualization and standardization of the MCCE protocol in the stomach, we defined eight different stations with different patient positions to visualize the entire inner gastric surface as during upper GI endoscopy. Changing the patient position from the left lateral decubitus to the supine and right lateral position is necessary to combine gravity and magnetic force, which improve capsule maneuvering (Figure 5). The capsule swallowed by the patients in the left lateral decubitus position (Figure 6). Different magnetically controlled capsule endoscopy stations and capsule camera orientations defined to achieve a complete gastric mucosal surface visualization and mapping (created by Zoltán Tóbiás, MD). The capsule swallowed by the patients in the left lateral decubitus position. A: Patients and ball magnet position; B and C: Pictures of the Z-line by CE from our database. Stations at the left lateral decubitus position: Station I. Visualization of the gastric fundus and subcar-dial region with the cardia (posterior J type retroflexion). After entering the stomach, the capsule was lowered into the large curvature at the body of the stomach. The magnetic ball was held high up at the level of the patient's right shoulder. The capsule camera was maintained in an obliquely upward orientation of 45° and then horizontally rotated to observe the gastric fundus and the cardia (Figure 7A-C). Stations I-III at the left lateral decubitus position. A: Patients’ position (Station I); B: Cardia by a gastroscope (Station I); C: Cardia by CE (Station I); D: Patients’ position (Station II); E: Cardia by a gastroscope (Station II); F: Cardia by CE (Station II); G: Patients’ position (Station III); H: Corpus by a gastroscope (Station III); I: Corpus by a capsule (Station III). Station II. Moving closer to the cardia and the fundus (anterior J type retroflexion). While using the right joystick, the ball magnet was lowered and fitted closely to the patient's right arm. Due to the proximity of the magnetic field, the capsule rose to the anterior wall and small curvature of the stomach. The capsule was then rotated with the camera oriented vertically upward to observe the cardia up close, with the fundus at a distance (Figure 7D-F). Station III. Visualization of the gastric body from the fundus. With the ball magnet in the same position, using the left joystick, the capsule camera was rotated and tilted downwards, enabling visualization of the proximal body of the stomach (corpus) and the gastric folds at the large curvature with a longitudinal view (Figure 7G-I). Stations in the supine position: Station IV. Visualization of the angular incisure. Patients were asked to lie on their back, in a supine position. The capsule was moved and located to the distal body into the angular region due to a change of gravity and stepwise magnetic movements. The magnetic ball was moved to the left upper abdomen (hypochondrium) and then lowered, close to the patient abdomen. In this case, the capsule was raised to the anterior wall of the stomach; therefore, the small curvature and the angular incisure were examined as well (Figure 8A-C). Stations IV and V in the supine position. 
A: Patients’ position (Station IV); B: Angular incisure by a gastroscope (Station IV); C: Angular incisure by CE (Station IV); A: Patients’ position (Station V); B: Angular incisure by a gastroscope (Station V); C: Angular incisure by CE (Station V). Station V. Visualization of the larger curvature of the distal body from the angular incisure (U type of retroversion). The magnetic ball was moved to the epigastric area, close to the abdomen. Then the capsule camera was oriented upward to observe the anterior wall of the gastric body. In this position, the capsule can be turned and rotated (e.g., towards the cardia), enabling visualization of the distal body of the stomach longitudinally, as in the endoscopic view of U-type retroversion (Figure 8D-F). Stations at the right lateral decubitus patient position: Station VI. Visualization of the antral canal. In the next phase, patients were asked to turn to the right lateral decubitus position. Due to the gravity force, the capsule sank and moved spontaneously into the antral region. Then, the ball magnet was positioned over the left kidney. The capsule was brought closer to the large curvature with the camera oriented obliquely downward at 45°, which enabled observation of the antrum. Then, the antral canal could be examined with the pylorus, and the angular incisure became visible from the antrum (Figure 9A-C). Station at the right lateral decubitus patient position. A: Patients’ position (Station VI); B: Antrum canal of stomach by gastroscope (Station VI); C: Antrum canal of stomach by CE (Station VI). Stations in the supine position: Station VII. Prepyloric view and visualization of the pylorus. After that, each patients were asked to lie on his or her back in the supine position. The ball magnet was positioned close to the body on the right upper abdomen (hypochondrium). The capsule camera was oriented horizontally and laterally toward the pylorus for closer observation. The magnet position ensured that the capsule remained in the antrum. Using both the right and left joysticks, we moved the capsule closer to the pylorus (Figure 10A-C). Stations VII and VIII in the supine position. A: Patient’s position (Station VII); B: Antrum canal of stomach by gastroscope (Station VII); C: Antrum canal of stomach by CE (Station VII); D: Patient’s position (Station VIII); E: Pyloric ring (Station VIII); F: Ampulla of Vater (Station VIII); G: Pylorus from the duodenal bulb with a capsule (Station VIII). Station VIII: Magnetically controlled transpyloric passage of the capsule. The magnetic ball was placed at the patient's right side at the level of the duodenal bulb. The capsule was then rotated until the camera faced the pylorus. The capsule was dragged close to the pylorus under the guidance of the magnet robot. As the pylorus was opened, peristalsis assisted the capsule into the duodenum. After transpyloric passage, first we depicted the duodenal bulb, then from the descending and inferior horizontal part of the duodenum, we visualized the ampulla of the Vater by tilting the capsule camera upwards to facilitate the retrograde view (Figure 10D-G). Examining the small intestine If the capsule entered via the small intestine, patients were asked to drink 1 L of PEG solution, and from then on, they were requested to drink clear fluids (i.e., water). The movement of the capsule in the small intestine was then monitored hourly in real-time visualization mode. 
Examining the small intestine

Once the capsule entered the small intestine, patients were asked to drink 1 L of PEG solution, and from then on they were requested to drink only clear fluids (i.e., water). The movement of the capsule in the small intestine was then monitored hourly in real-time visualization mode. The examination ended when the capsule arrived at the colon or stopped functioning due to a low battery.
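The hourly monitoring rule amounts to a simple polling loop with two stop conditions. The sketch below is a hypothetical illustration of that logic; `get_capsule_status` stands in for the examiner's hourly check of the real-time view and is not a real NaviCam function.

```python
# Hypothetical sketch of the hourly small-bowel monitoring rule described above.
# `get_capsule_status` is an invented placeholder for the examiner's hourly
# review of the real-time view; it is not part of any NaviCam API.
import time
from typing import Callable

POLL_INTERVAL_S = 3600  # the capsule position is reviewed once per hour

def monitor_small_bowel(get_capsule_status: Callable[[], dict],
                        poll_interval_s: int = POLL_INTERVAL_S) -> str:
    """Poll hourly until the capsule reaches the colon or the battery dies."""
    while True:
        status = get_capsule_status()
        if status["in_colon"]:
            return "completed: capsule reached the colon"
        if not status["battery_ok"]:
            return "ended: capsule stopped functioning (low battery)"
        time.sleep(poll_interval_s)  # wait until the next hourly check

if __name__ == "__main__":
    # Simulated example: the capsule reaches the colon on the fourth hourly check.
    checks = iter([{"in_colon": False, "battery_ok": True}] * 3
                  + [{"in_colon": True, "battery_ok": True}])
    print(monitor_small_bowel(lambda: next(checks), poll_interval_s=0))
```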
RESULTS

The UBT revealed H. pylori positivity in 32.7% of cases (Table 2). No significant association was found between H. pylori status and the type (proximal or distal), distribution (diffuse or focal), or severity (minimal or active, erosive) of the gastritis described on MCCE.

The results of Helicobacter pylori C13 urea breath tests

H. pylori: Helicobacter pylori.

The mean gastric, SB, and colon TTs with MCCE were 47 min 40 s (M/F: 44 min 15 s/51 min 14 s), 3 h 46 min 22 s (3 h 52 min 44 s/3 h 38 min 21 s), and 1 h 4 min 34 s (1 h 1 min 16 s/1 h 8 min 53 s), respectively. The average total time of the MCCE examination was 5 h 48 min 35 s (5 h 46 min 37 s/5 h 50 min 18 s) (Table 3); these figures are cross-checked in the sketch after this section.

Mean gastric, small bowel, and overall transit times of magnetically controlled capsule endoscopy

The diagnostic yield for detecting any abnormality in the stomach and SB with MCCE was 81.9%: 68.6% for minor pathologies and 13.3% for major pathologies. Of the abnormalities, 25.8% were found in the SB and 74.2% in the stomach. The diagnostic yield for the stomach/SB was 4.9%/8.4% for major pathologies and 55.9%/12.7% for minor pathologies (Table 4). In the stomach, ulcers and polyps were considered major pathologies, while signs of gastritis were minor. In the SB, signs of Crohn's disease and celiac disease were the major pathologies, while non-specific inflammation, diverticula, and angiodysplasia were minor. The distribution of pathologies found on MCCE examinations is shown in Table 5.

Diagnostic yield of magnetically controlled capsule endoscopy

SB: Small bowel.

Distribution of pathologies detected by magnetically controlled capsule endoscopy

The capsule's active magnetic movement through the pylorus was successful in 41.9% of all patients (automated protocol in 56 patients and active, manually controlled magnetic passage in 63 patients) (Table 6).

Distribution of different types of transpyloric transit in complete and incomplete small bowel studies

SB: Small bowel.

In 18 patients (M/F: 6/12; 6.3%), SB visualization with MCCE was incomplete. Thirteen examinations were incomplete as a result of capsule battery depletion. In 3 of these 13 cases, the capsule was depleted within 5 h of operation, indicating a manufacturing flaw. The average total study time in the remaining 10 cases was 9 h 12 min 9 s; from the pylorus to the last image, the average transit time was 8 h 26 min 4 s. The examination was discontinued sooner than planned in 3 cases at the patient's request. In 2 cases, there was capsule retention due to Crohn's-like ulceration accompanied by a narrow bowel lumen (Table 6); both resolved with medical treatment, without surgery. No serious adverse events or permanent capsule retention occurred during the study period.
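The reported segment means can be checked for internal consistency against the reported total examination time. The short sketch below sums the values quoted above; the roughly 10-min residual presumably covers esophageal transit and intervals not attributed to a named segment (an interpretation, not a figure stated in the text).

```python
# Sum the reported mean segment transit times and compare them with the
# reported mean total MCCE examination time (values quoted from the Results).
from datetime import timedelta

gastric = timedelta(minutes=47, seconds=40)
small_bowel = timedelta(hours=3, minutes=46, seconds=22)
colon = timedelta(hours=1, minutes=4, seconds=34)
total_reported = timedelta(hours=5, minutes=48, seconds=35)

segment_sum = gastric + small_bowel + colon
print(f"sum of segments : {segment_sum}")                    # 5:38:36
print(f"reported total  : {total_reported}")                 # 5:48:35
print(f"unattributed    : {total_reported - segment_sum}")   # 0:09:59
```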
DISCUSSION

There are two main directions in developing capsule endoscopes with active movement: internally and externally guided techniques. Internal guidance means that the locomotive system is integrated into the capsule, but such systems require longer-lasting batteries for full functionality. Their main advantage is more precise navigation with the help of the internalized locomotive system. External guidance refers to outwardly conducted capsule movements, e.g., with a magnetic control unit or a device/moving arm generating the magnetic field. The exceptional advantage of this method is that the magnetic field (and its adjustment) built by the external unit controls the locomotive mechanism, making this an energy-saving solution. A disadvantage of this technique, however, is that it is limited to more passive, less flexible locomotion. The examining physician controls the capsule from outside the body with an external control unit, which decreases the accuracy of the spatial and temporal movements. Locomotion of the capsule endoscope would offer numerous long-term advantages in the diagnostics and therapy of GI diseases (focused biopsies and treatment options, e.g., laser tumor ablation, local targeted release of certain medications, and endoscopic ultrasound applications)[1].

Magnetic assisted capsule endoscopy (MACE) systems have been developed over the past 5-10 years, primarily for research purposes. Experience drawn from previously conducted in vitro studies showed that the locomotion and precision of the capsule are significantly influenced by the physical characteristics of the magnetic field created by the external magnetic unit while travelling in the human body[1].

In 2011, Olympus was the first to introduce a modified MRI machine prototype that moved a MACE system, which allowed the operator to successfully guide the capsule in a chosen spatial direction inside the stomach after the patient drank water. However, this diagnostic procedure was not adopted worldwide and did not gain medical acceptance[6].

In 2014, IntroMedic developed a navigation (NAVI) magnetic capsule system that could be moved externally with a small, hammer-shaped, static magnet device. The technology, involving a magnet that assisted the physician in manually moving the capsule endoscope, appeared viable and valuable in many experimental settings. However, it did not achieve any breakthroughs, as it only allowed sudden and harsh position changes of the capsule. In small-cohort clinical trials of the NAVI capsule system, 65%-86% of the mucosal layer of the stomach was visualized accurately using external magnetically controlled locomotion after the ingestion of water. Its diagnostic accuracy was similar to that of traditional gastroscopy[9]. Using a similar type of NAVI capsule system, CE was performed on the large intestine, where magnetic locomotion directed the capsule from the cecum to the sigmoid colon. At the same time, a colonoscopy probe was inserted to monitor capsule movements and to provide dilatation[10]. In iron deficiency anemia, after routine colonoscopy of 49 patients, a MACE approach based on the IntroMedic Navi system was evaluated for providing a diagnostic examination of the stomach and small intestine in one sitting. According to the results, MACE detected more pathologies than gastroscopy alone, and no significant difference in diagnostic sensitivity was found between the methods when examining the upper GI tract[11].

Precise locomotion of the magnetic capsule inside the GI tract by manual control alone is not possible, due to the variable density of tissues and the variable distance of the capsule (e.g., in the stomach) from the external magnet. Moreover, the exact spatial location of the capsule, its relation to the surrounding organs, and its ante-/retrograde orientation cannot be judged accurately. Therefore, a control mechanism is needed alongside the magnetic capsule, based on external robotics that execute the physician's input (i.e., joystick commands in the forward, backward, upward, downward, and sideways directions). By pre-programming instruction sequences (scripts) in the computing control unit to examine the stomach from the fundus to the antrum, we created a reproducible examination of the complete inner mucosal lining of the stomach, which lowered the variability among investigations. If the examining physician notices significant pathology, he or she can switch the examination to manual control and revisit the alteration; doing so increases the number of images taken of the lesion and optimizes the diagnostic accuracy of the study.

NaviCam was the first magnetically controlled capsule system to enable bidirectional data transmission and robotic control. The capsule functions at 1-6 fps, and its spatial orientation, with continuous monitoring of energy levels, is transmitted via the recording vest to the data control and screen of the computing unit. At the console, we can control the precise locomotion of the capsule, the image capturing speed, and the brightness, as well as turn the capsule on or off.

We combined manual and automatic, robotic capsule control to optimize the gastric procedure. As we previously published in an in vitro setting, complete (100%) visualization of the inner surface of a plastic stomach model could be achieved in all cases with the medium and large stomach autoscan program modules and with the freehand controlling method. With the small and medium stomach mode, we could observe only 97.5% of the inner surface, because of incomplete visualization of the prepyloric region. With the freehand method, we needed nearly twice as much average time (749 s) to complete the examination, compared with robotic maneuvering with the autoscan program (390 s). In everyday practice, in each patient position, the stations and capsule positioning described above were performed after running the three different autoscan programs[12,13].

CE is a potentially patient-friendly and non-contact/infection-free diagnostic modality for screening for gastric diseases, which may yield complete and excellent examinations of the stomach. During the coronavirus disease 2019 pandemic, direct contact with patients in practice is a potential risk factor for infections. Non-contact medicine, such as remote control of CE, is a possibility for further examinations[14]. The prevalence of focal gastric lesions (polyps, gastric ulcers, submucosal tumors, etc.) increases with age, and they may be detected with MCCE as effectively as with gastroscopy[8]. The sensitivity is between 90% and 92%, and the specificity between 94% and 96.7%[15]. Several studies have proven that reliable preparation (gastric cleanliness), careful examination, and MCCE together are pivotal for adequate gastric visualization and, therefore, for the detection of superficial gastric neoplasms[16]. In a large multicenter gastric cancer screening trial conducted in China, seven asymptomatic patients out of 3182 were diagnosed with advanced cancer by MCCE. Cancer prevalence was highest in the gastric body (3 cases), followed by 2 cases in the cardia and 2 in the antrum, while one case each was detected in the region of the angulus, in the fundus, and in the esophagus. All were confirmed pathologically to be adenocarcinoma. These results indicate that screening with MCCE may be useful in populations with high familial risk, or in people aged over 50 years, as a gastric cancer screening modality[8].

Magnetic steering may also impact TTs, as delayed gastric transit of the CE (especially in patients with gastroparesis) may lead to an incomplete SB exam. This method may increase the papilla detection rate as well. In addition to magnetic steering, there are several other independent factors, such as male gender and higher BMI, which increase gastric TTs and decrease SB TTs, thus impacting the time required for CE completion[10]. Although delayed gastric TT may cause incomplete SBCE, we should avoid esophago-gastro-duodenoscopy to resolve temporary gastric capsule retention and instead use MCCE[17].

The visual diagnosis of the presence of H. pylori with standard white light imaging (WLI) endoscopy has relatively low accuracy, especially in populations with a low pretest probability. Moreover, WLI endoscopy correlates poorly with the histopathological findings of H. pylori-induced gastritis. Recently, low-quality retrospective studies proposed that, with the application of a special electronic chromoendoscopy technique, linked color imaging, a diffuse reddish appearance of the mucosa in the gastric body and fundic glands correlates with the presence of H. pylori[18].

In our study, we found no correlation between H. pylori status and the activity or type of gastritis observed on MCCE; these data were not included in Table 2, as they would be more relevant to a separate publication focusing on H. pylori status and gastritis on MCCE.

MCCE is an excellent method to visualize the entire gastric mucosal surface, with up to 100% precision in healthy volunteers[5]. In a previous study of 75 consecutive patients, we demonstrated that, with our combined automatic and manual control method, complete visualization of the cardia, fundus, corpus, angulus, antrum, and duodenal bulb was achieved in 95.4%, 100%, 100%, 100%, 100%, and 100% of cases, respectively. The ampulla of Vater was detected and observed in 20 patients (26%). Moreover, visualization of the distal esophagus and the Z-line was possible in 67/75 patients (89%), assessed as complete in 23 and partially complete in 44 patients. The mean stationary time for MCCE in the distal esophagus was 1 min 32 s[19].

The use of MCCE needs to be considered separately in western and eastern countries, as there are two different scenarios depending on the prevalence of upper GI diseases and gastric cancer in each country. Nowadays, MCCE is mostly used for the detection of malignant and premalignant gastric lesions, which are more prevalent in eastern countries. However, the present technology opens the door to further developments, which would allow MCCE to be extended to other regions of the GI tract, e.g., the esophagus; this may facilitate the detection of relevant lesions, such as Barrett metaplasia, in western countries as well. With second-generation MCCE, visualization of the esophagus, Z-line, and duodenal papilla would be improved, according to a study by Jiang et al[20]. Furthermore, the gastric TT could be shortened by one-third, approaching the approximately 10-min examination time of conventional endoscopy.

It is known that the current gold standard, gastroscopy, has some clear disadvantages: it is commonly uncomfortable for patients, and it is therefore mostly performed under sedation, which carries definite procedure-related risks. MCCE, as a patient-friendly, non-invasive test, might be an alternative for patients who refuse to undergo gastroscopy and could increase patients' adherence to screening. Furthermore, MCCE of the stomach was recently approved by the Chinese Food and Drug Administration for the following diagnostic indications: (1) As an alternative diagnostic tool for patients who refuse to undergo gastroscopy; (2) Screening for gastric diseases as part of a physical examination; (3) Screening for gastric cancer; (4) Diagnosing various causes of GI inflammation; and (5) Follow-up of diseases such as gastric varices, gastric ulcer, atrophic gastritis, and polyps after surgical or endoscopic removal[21].

There is no similar study in the literature, as we performed a complete upper GI capsule examination, including the stomach and the SB, with the same capsule endoscope during MCCE. Denzer et al[22] published a blinded, prospective trial from two French centers with the IntroMedic manually controlled MCCE. A total of 189 patients were enrolled in this multicenter study. Lesions were defined as major (requiring biopsy or removal) or minor. The final gold standard was unblinded conventional gastroscopy with biopsy, under sedation with propofol. Twenty-three major lesions were found in 21 patients, and in this population the capsule accuracy was 90.5% compared with gastroscopy. Of the remaining 168 patients, 94% had minor and mostly multiple lesions; the capsule accuracy was 88.1%. All patients preferred MCCE over gastroscopy[22].

One of the risk factors for incomplete SB capsule endoscopy is a prolonged gastric transit time, which could be considered a limitation of our combined gastric and SB study protocol. In our patient population, the average gastric transit time with manual magnetic transpyloric control was 26 min. In contrast, in those cases where magnetic transpyloric control failed, after examining the stomach we left the capsule to propel through the pylorus by spontaneous peristaltic activity; in this group, the average gastric transit time was 1 h 9 min. Of the 18 incomplete SB studies, 10 were caused by low battery energy; this occurred in 3 patients with manual magnetic passage and in 7 patients with spontaneous transpyloric passage[23].

Visibility and identification of landmarks are important factors for an accurate examination of the stomach using MCCE. We always aimed to reach the gastric landmarks and typical stations described in the Methods during combined automatic and manual maneuvering. To improve the learning curve of our gastroenterologists, we started training for the examinations in a plastic stomach model. In our previously published abstract, we described the improvement of the learning curves with manual magnetic control both in experts and in trainees[1]. In that study, we found significant differences in the time to complete inner surface mapping between trainees and experts; moreover, automatic protocols were faster than, and equally accurate as, experts in achieving complete inner surface mapping.

How to minimize bubbles and mucoid secretions is a persistent problem in real-life studies. To improve visibility, we established a unique preparation process combining bicarbonate, Pronase B, and simethicone with a patient body rotation technique[2]. Moreover, across the stations described, we rotate our patients from the left lateral to the supine position, then from supine to right lateral, and finally from right lateral back to supine during the MCCE study. During this protocol, the gastric mucoid secretions also move into different parts of the stomach due to gravity, making all the landmarks and the majority of mucosal abnormalities visible. Application of prokinetics or the motilin agonist erythromycin might also be an option in future studies to improve visibility and reduce gastric lake content[24,25].

An inherent limitation of our present study is that we performed gastroscopy only in a few patients with major gastric pathologies, to accomplish a final diagnosis and biopsy; therefore, we could not assess the accuracy of MCCE against gastroscopy in all patients. However, several previous studies have demonstrated excellent diagnostic value and high accuracy. In a recent meta-analysis by Zhang et al[26], four studies with 612 patients were included, in which the results of blinded MCCE and gastroscopy were compared. MCCE demonstrated a pooled sensitivity and specificity of 91% (95%CI: 0.87-0.93) and 90% (95%CI: 0.75-0.96), respectively. The diagnostic accuracy of MCCE was 91% (95%CI: 0.88-0.94) for assessing gastric diseases.
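To relate the pooled sensitivity and specificity above to the overall accuracy expected in a screening population, accuracy can be written as a prevalence-weighted average of the two. The sketch below uses the pooled values from the cited meta-analysis; the prevalence values are arbitrary illustrations, not data from any study discussed here.

```python
# Overall accuracy as a prevalence-weighted combination of sensitivity and
# specificity: accuracy = sens * prev + spec * (1 - prev).
# Pooled sens/spec are quoted from the meta-analysis above; the prevalence
# values are arbitrary, illustrative assumptions.
def overall_accuracy(sensitivity: float, specificity: float, prevalence: float) -> float:
    return sensitivity * prevalence + specificity * (1.0 - prevalence)

if __name__ == "__main__":
    sens, spec = 0.91, 0.90
    for prev in (0.1, 0.3, 0.5):
        acc = overall_accuracy(sens, spec, prev)
        print(f"prevalence {prev:.0%}: expected accuracy {acc:.1%}")
```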
CONCLUSION

MCCE is an effective and safe diagnostic method for evaluating gastric and SB mucosal lesions while being utilized as a single upper GI capsule endoscope in patients referred for SBCE evaluation. MCCE using the novel Ankon capsule endoscopic technique is a proper diagnostic method for evaluating the gastric mucosa. It is a promising non-invasive screening tool that may be applied in the future monitoring of patients with a high familial gastric cancer risk to decrease the morbidity and mortality of benign and malignant upper GI disorders.
Acknowledgements: Special thanks to Zoltán Tóbiás, MD, for the graphical work and visual explanations of the different gastric stations, and to Kata Tőgl, our nurse manager, for assistance and for modelling the patient positioning during magnetically controlled capsule endoscopy.
[ "INTRODUCTION", "Study design", "Patients", "Patient preparation for MCCE", "Technical methods of MCCE", "MCCE study protocol and gastric stations", "Examining the small intestine", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Capsule endoscopy (CE) is a non-invasive, painless, patient-friendly, and safe diagnostic method. It is currently the gold standard examination for small bowel (SB) diseases, including inflammatory bowel disease, suspected polyposis syndromes, unexplained abdominal pain, celiac disease, and obscure gastrointestinal bleeding (OGIB). Small bowel capsule endoscopy (SBCE) provides excellent diagnostic yield, which has been proven in many clinical trials since its introduction in the late 1990s. Currently, it has a fundamental role in gastrointestinal (GI) endoscopy, especially in patients with suspected SB diseases[1]. Attempts to expand the diagnostic role of CE to areas that are traditionally explored with standard endoscopic procedures (gastroscopy and colonoscopy), e.g., the esophagus, stomach, and colon. These aims have been challenged by significant drawbacks in the past due to a lack of optimal preparation, standardized procedural design, and ability to visualize the entire mucosal surface[2]. A common perception is that the anatomical structures of the stomach and colon are too complex for adequate visualization with a passive capsule endoscope[3-5].\nAlthough the SB seems to be a simple pipe, the more complex gastric anatomy with passive movements may cause unexplored areas of the gastric mucosa when utilizing standard CE. Magnetically controlled capsule endoscopy (MCCE) systems were developed over the past ten years to precisely control capsule movements inside the GI tract and to achieve better visualization of the entire mucosa, especially in the stomach. Different approaches have been developed and studied over time, which have enabled manual steering or active movement of the capsule systems. The external magnetic field seems to be the most adequate and energetically effective method for steering and holding the capsule within the stomach[6,7].\nAccording to recent, prospective studies that were mainly conducted in China, a methodology developed by Ankon and AnX robotics could be an optimal method for non-invasive screening and early diagnosis of gastric diseases such as gastric cancer and intramucosal high-grade dysplasia. It has the potential to become a valuable and accepted screening method in a population with high gastric cancer risk[8]. \nThe aim of this study was to evaluate the feasibility, safety, and diagnostic yield of the Ankon MCCE system in patients with gastric and SB disorders who were referred to our endoscopy unit for SBCE examination between September 2017 and December 2020. Our secondary aim was to evaluate the overall gastric examinations, SB transit times (TTs) and efficacy of MCCE to achieve complete gastric and SBCE investigation by utilizing the same upper GI capsule.", "Our present study prospectively enrolled outpatients who were seen at the Endo-Kapszula Endoscopy Unit, Székesfehérvár, Hungary, between September 2017 and December 2020. A combined investigation of the stomach and SB of patients referred to for SBCE was performed with a robotically MCCE system (NaviCam, Ankon Technologies Co, Ltd, Shanghai, China).\nThe primary endpoint of the recent study was to investigate the diagnostic yield of MCCE in the evaluation of gastric and SB abnormalities. 
The secondary endpoints were to evaluate the feasibility of performing complete gastric and SB investigation by utilizing the same diagnostic procedure; to address safety parameters, including adverse and severe adverse events, and to calculate the capsule TTs, such as esophageal, gastric, and SB TTs, and the overall procedure time.", "All patients who were referred to our endoscopy unit for SBCE and who agreed to our gastric study protocol were included in the current study. During the study period, we enrolled 284 consecutive, eligible patients, of which there were 149 males (52.5%) and 135 females (47.5 %), with a mean age of 44 years. Detailed patient characteristics are shown in Table 1. The indications of MCCE were iron deficiency anemia, OGIB, suspected/established SB Crohn’s disease, suspected/confirmed celiac disease, suspected SB neoplastic disease, carcinoid syndrome, and SB polyposis syndromes.\nIndications of small bowel capsule endoscopy (grouped by gender)\nBMI: Body mass index; SB: Small bowel.\nThis study was approved by the Ethical Committee of the University of Szeged (Registry No. 5/17.04.26) and registered in the ClinicalTrials.gov trial registry (Identifier: NCT03234725). The present study was conducted according to the World Medical Association's Declaration of Helsinki provisions in 1995. Patients agreed to undergo capsule endoscopies and Helicobacter pylori (H. pylori) urea breath tests (UBTs) by written informed consent.", "According to the SBCE guidelines, we applied the following patient preparation and study protocol: Patients followed a liquid diet and consumed 2 L of water with two sacks of polyethylene glycol (PEG) the day before the study. First, H. pylori C13 UBT was performed on the morning of the study, while the patient was in a fasting condition. The UBT requires an average of 30 min, during which time the patient should not consume any fluids or food. During that visit, the patient's medical history was recorded, and a physical examination was performed. After the UBT, the patient ingested 2 dL water with simethicone suspension (80 mg) to reduce mucus in the stomach. Then, to distend the stomach properly, approximately 8-10 dL of clean water consumed by all patients within 10 min. Water ingestion may be repeated as needed to enhance gastric distension during examination. After complete visualization of the total gastric mucosal surface, active pyloric propulsion of the capsule was attempted in all patients with the external magnetic field. If it was unsuccessful, 10 mg intravenous metoclopramide was applied 60 min after the capsule was swallowed.\nContraindications of MCCE were the same as of those of SBCE and magnetic resonance imaging (MRI) examination. In our study, absolute contraindications were the patients with previous abdominal surgery and/or a previous capsule retention; with implanted MRI-incompatible electronic devices (e.g., implantable cardioverter-defibrillator and pacemakers), or magnetic, metal foreign bodies; and who were not competent or refused the informed consent form; who were under age 18 or above age 70; and those who were pregnant. 
A patency capsule test was initially performed in all patients to determine those with relative contraindications, including known or suspected GI obstruction due to SB Crohn’s disease, neoplasia, or SB surgery.", "Our system involves a special static magnet with robotic and manual guidance, computer workstation with ESNavi software (Ankon Technologies Co, Ltd) (Figure 1), magnetic capsule endoscope, capsule locator, and data recorder. The capsule endoscope sizes 26.8 × 11.6 mm, weighs 4.8 g, and has a permanent spherical magnet inside (Figure 2). The operator can adjust the frequency of captured pictures from 0.5 to 6 frames per second (fps). Capsule functioning can be stopped temporarily and restarted by the operator remotely from the workstation. The picture resolution is 480 × 480 pixels, and the field of view is 140°. The illumination can be automatically adjusted by an automatic picture focusing mechanism, which enables the view depth to shift from 0 mm to 60 mm. Depending on the fps, the battery life can be over 10 h, which allows combined gastric and SB capsule investigations with the same capsule.\nRobotic C-arm, investigation table, and computer workstation for magnetically controlled capsule endoscopy.\nMagnetic capsule endoscope (NaviCam).\nThe magnetic robotic C-arm generates an adjustable magnetic field outside of the body with a maximum strength of 0.2 T, which allows precise controlled movements in three spatial directions. During the process, the physician guides the magnetic capsule by two joysticks in any chosen, spatial direction or along its axis, and therefore, he can rotate or tilt it. Moreover, the capsule can advance forward 360° by a rotational automatism in the direction of the capsule visual axis, which is helpful for transposition of the capsule from the long axis of the stomach.\nThe capsule locator is a unique device that activates the capsule by infrared light prior to the patient swallowing it. The presence of the capsule can be detected in the case of suspected capsule retention (Figure 3).\nCapsule locator device.\nThe system is capable of real-time digital transmission of images to the operating system. At the same time, it is continuously obtaining information about the gyroscope, which is built into the capsule [three-dimensional (3D) motion detector]. It also receives data from the actual spatial location of the camera and localizes the capsule inside the body at any time. The data recorder (Figure 4) may receive all motion information and high-quality pictures via wireless transmission from the capsule endoscope. In contrast, the connection between the data recorder and the workstation via a USB wire makes visible the real-time pictures and gyroscope information.\nData recorder.\nMagnetic CE can be transferred along five different axes with the controlling magnet: two rotational and three 3D spaces. To achieve this, there are two joysticks on the workstation desktop: the left one controls the capsule in two rotational axes (horizontal and vertical directions) and the right joystick directs it in the three translational axes (forward/backwards, up/down, left/right). Precise magnetic control is achieved by positioning the examining table, modifying the position of the spherical magnet axis along the 3D space, and dynamically adjusting the strength and direction of the vectorial magnetic fields perpendicular to each other. 
The above-mentioned robotic systems can also automatically run the magnetic field vector and position it by robotically controlled computer-based software (scripts) in default mode, which can achieve a standardized mucosal scanning during the MCCE procedure without the direct intervention of a physician. ", "To achieve optimal gastric mucosal visualization and standardization of the MCCE protocol in the stomach, we defined eight different stations with different patient positions to visualize the entire inner gastric surface as during upper GI endoscopy. Changing the patient position from the left lateral decubitus to the supine and right lateral position is necessary to combine gravity and magnetic force, which improve capsule maneuvering (Figure 5). The capsule swallowed by the patients in the left lateral decubitus position (Figure 6).\nDifferent magnetically controlled capsule endoscopy stations and capsule camera orientations defined to achieve a complete gastric mucosal surface visualization and mapping (created by Zoltán Tóbiás, MD).\nThe capsule swallowed by the patients in the left lateral decubitus position. A: Patients and ball magnet position; B and C: Pictures of the Z-line by CE from our database.\n\nStations at the left lateral decubitus position: Station I. Visualization of the gastric fundus and subcar-dial region with the cardia (posterior J type retroflexion).\nAfter entering the stomach, the capsule was lowered into the large curvature at the body of the stomach. The magnetic ball was held high up at the level of the patient's right shoulder. The capsule camera was maintained in an obliquely upward orientation of 45° and then horizontally rotated to observe the gastric fundus and the cardia (Figure 7A-C).\n\nStations I-III at the left lateral decubitus position. A: Patients’ position (Station I); B: Cardia by a gastroscope (Station I); C: Cardia by CE (Station I); D: Patients’ position (Station II); E: Cardia by a gastroscope (Station II); F: Cardia by CE (Station II); G: Patients’ position (Station III); H: Corpus by a gastroscope (Station III); I: Corpus by a capsule (Station III).\nStation II. Moving closer to the cardia and the fundus (anterior J type retroflexion).\nWhile using the right joystick, the ball magnet was lowered and fitted closely to the patient's right arm. Due to the proximity of the magnetic field, the capsule rose to the anterior wall and small curvature of the stomach. The capsule was then rotated with the camera oriented vertically upward to observe the cardia up close, with the fundus at a distance (Figure 7D-F).\nStation III. Visualization of the gastric body from the fundus.\nWith the ball magnet in the same position, using the left joystick, the capsule camera was rotated and tilted downwards, enabling visualization of the proximal body of the stomach (corpus) and the gastric folds at the large curvature with a longitudinal view (Figure 7G-I).\n\nStations in the supine position: Station IV. Visualization of the angular incisure.\nPatients were asked to lie on their back, in a supine position. The capsule was moved and located to the distal body into the angular region due to a change of gravity and stepwise magnetic movements. The magnetic ball was moved to the left upper abdomen (hypochondrium) and then lowered, close to the patient abdomen. 
In this case, the capsule was raised to the anterior wall of the stomach; therefore, the small curvature and the angular incisure were examined as well (Figure 8A-C).\n\nStations IV and V in the supine position. A: Patients’ position (Station IV); B: Angular incisure by a gastroscope (Station IV); C: Angular incisure by CE (Station IV); A: Patients’ position (Station V); B: Angular incisure by a gastroscope (Station V); C: Angular incisure by CE (Station V).\nStation V. Visualization of the larger curvature of the distal body from the angular incisure (U type of retroversion).\nThe magnetic ball was moved to the epigastric area, close to the abdomen. Then the capsule camera was oriented upward to observe the anterior wall of the gastric body. In this position, the capsule can be turned and rotated (e.g., towards the cardia), enabling visualization of the distal body of the stomach longitudinally, as in the endoscopic view of U-type retroversion (Figure 8D-F).\n\nStations at the right lateral decubitus patient position: Station VI. Visualization of the antral canal.\nIn the next phase, patients were asked to turn to the right lateral decubitus position. Due to the gravity force, the capsule sank and moved spontaneously into the antral region. Then, the ball magnet was positioned over the left kidney. The capsule was brought closer to the large curvature with the camera oriented obliquely downward at 45°, which enabled observation of the antrum. Then, the antral canal could be examined with the pylorus, and the angular incisure became visible from the antrum (Figure 9A-C).\n\nStation at the right lateral decubitus patient position. A: Patients’ position (Station VI); B: Antrum canal of stomach by gastroscope (Station VI); C: Antrum canal of stomach by CE (Station VI).\n\nStations in the supine position: Station VII. Prepyloric view and visualization of the pylorus.\nAfter that, each patients were asked to lie on his or her back in the supine position. The ball magnet was positioned close to the body on the right upper abdomen (hypochondrium). The capsule camera was oriented horizontally and laterally toward the pylorus for closer observation. The magnet position ensured that the capsule remained in the antrum. Using both the right and left joysticks, we moved the capsule closer to the pylorus (Figure 10A-C).\n\n Stations VII and VIII in the supine position. A: Patient’s position (Station VII); B: Antrum canal of stomach by gastroscope (Station VII); C: Antrum canal of stomach by CE (Station VII); D: Patient’s position (Station VIII); E: Pyloric ring (Station VIII); F: Ampulla of Vater (Station VIII); G: Pylorus from the duodenal bulb with a capsule (Station VIII).\nStation VIII: Magnetically controlled transpyloric passage of the capsule.\nThe magnetic ball was placed at the patient's right side at the level of the duodenal bulb. The capsule was then rotated until the camera faced the pylorus. The capsule was dragged close to the pylorus under the guidance of the magnet robot. As the pylorus was opened, peristalsis assisted the capsule into the duodenum. After transpyloric passage, first we depicted the duodenal bulb, then from the descending and inferior horizontal part of the duodenum, we visualized the ampulla of the Vater by tilting the capsule camera upwards to facilitate the retrograde view (Figure 10D-G).", "If the capsule entered via the small intestine, patients were asked to drink 1 L of PEG solution, and from then on, they were requested to drink clear fluids (i.e., water). 
The movement of the capsule in the small intestine was then monitored hourly in real-time visualization mode. The examination ended when the capsule arrives at the colon or stops functioning due to the low battery.", "An UBT test revealed H. pylori positivity in 32.7% of cases. (Table 2). No significant association with HP status and the type (proximal or distal), distribution (diffuse or focal) and severity (minimal or active, erosive) of the gastritis described on MCCE results were depicted. \nThe results of Helicobacter pylori C13 urea breath tests\n\nH. pylori: Helicobacter pylori.\nThe mean gastric, SB, and colon TT with MCCE was: 47 h 40 min (M/F: 44 h 15 min/51 h 14 min), 3 h 46 min 22 s (3 h 52 min 44 s/3 h 38 min 21 s) and 1 h 4 min 34 s (1 h 1 min 16 s/1 h 8 min 53 s), respectively. Average total time of MCCE examination: 5 h 48 min 35 s (5 h 46 min 37 s/5 h 50 min 18 s) (Table 3).\nMean gastric, small bowel, and overall transit times of magnetically controlled capsule endoscopy\nThe diagnostic yields for detecting any abnormalities in the stomach and SB with MCCE were 81.9%: 68.6% for minor pathologies and 13.3% for major pathologies. 25.8% of the abnormalities were found in the SB, and 74.2% were in the stomach. The diagnostic yield for stomach/SB was 4.9%/8.4% for major pathologies and 55.9%/12.7% for minor pathologies (Table 4). In the stomach, ulcers and polyps were considered major, while signs of gastritis were minor pathologies. In the SB signs of Crohn’s and celiac disease were the major and non-specific inflammation, diverticula and angiodysplasia the minor pathologies. Based on findings from MCCE examinations, the distribution of pathologies is depicted in Table 5.\nDiagnostic yield of magnetically controlled capsule endoscopy\nSB: Small bowel.\nDistribution of pathologies detected by magnetically controlled capsule endoscopy\nThe capsule's active magnetic movement through the pylorus was successful using the magnet in 41.9% of all patients (automatized protocol in 56 patients and active manually controlled magnetic activity in 63 patients) (Table 6).\nDistribution of different types of transpyloric transit in complete and incomplete small bowel studies\nSB: Small bowel.\nIn 18 (M/F: 6/12) patients (6.3%), SB visualization with MCCE was incomplete. There were 13 occurrences of incomplete examinations as a result of capsule depletion. In 3 of these 13 cases, the capsule was depleted within 5 h of operation, indicating a manufacturing flaw. The average total study time in the remaining 10 cases was 9 h 12 min 9 s. From the pylorus to the last image, the average transit time was 8 h 26 min 4 s. The examination was discontinued sooner than planned in 3 cases due to the patient's request. In 2 cases, there was capsule retention due to Crohn's-like ulceration, accompanied by a narrow bowel lumen (Table 6). These problems were resolved spontaneously without surgery, but with medical treatment. There were no serious adverse events or capsule retention during the study period.", "There are two main directions in developing capsule endoscopes with active movements: internally and externally guided techniques. Internal guidance means that the locomotive system is integrated into the capsule, but these systems require longer-lasting batteries for full functionality. Its main advantage is a more precise navigation with the help of the internalized locomotive system. 
External guidance refers to outwardly conducted capsule movements, e.g., with a magnetic control unit or device/moving arm generating the magnetic field. The exceptional advantage of this method is that the magnetic field (and its adjustment) built by the external unit controls the locomotive mechanism, thus making this an energy-saving solution. However, a disadvantage of this technique is that it is limited to more passive, less flexible locomotion. The examining physician controls the capsule outside the body with an external control unit, which decreases the accuracy of the spatial and temporal movements. Locomotion of the capsule endoscope would result in numerous long-term advantages in the diagnostics and therapy of GI diseases (focused biopsies, treatment options, e.g., laser tumor ablation, local targeted release of certain medications, and endoscopic ultrasound applications)[1]. \nMagnetic assisted capsule endoscopy (MACE) systems have been developed over the past 5-10 years, primarily for research goals. Experiences drawn from previously conducted in vitro studies showed that locomotion and precision of the capsule are significantly influenced by the physical characteristics of the generated magnetic field created by the external magnetic unit while travelling in the human body[1].\nIn 2011, Olympus was the first to introduce a modified MRI machine prototype that moved a MACE system which allowed the operator to successfully guide the capsule in a chosen spatial direction inside the stomach after drinking water. However, the adoption of this diagnostic procedure did not spread worldwide and get medical acceptance[6]. \nIn 2014, IntroMedic developed a navigation (NAVI) magnetic capsule system that could be moved externally with a small, hammer-shaped, external static magnet device. The technology involving a magnet that assisted the physician in manually moving the capsule endoscope appeared viable and valuable in many experimental settings. However, it did not achieve any breakthroughs, as it only allowed sudden and harsh position changes of the capsule. In clinical trials studying the application of the NAVI capsule system using a small cohort, 65%-86% of the mucosal layer of the stomach was visualized accurately using external magnetically controlled locomotion after the ingestion of water. Its diagnostic accuracy was similar to that of traditional gastroscopy[9]. Using a similar type of NAVI capsule system, CE was performed on the large intestines, where magnetic locomotion directed the capsule from the cecum to the sigmoid colon. At the same time, a colonoscopy probe was inserted to monitor capsule movements to provide dilatation[10]. In iron deficiency anemia, after routine colonoscopy of 49 patients, the performance of a MACE system was compared with the IntroMedic Navi system in providing a diagnostic examination of the stomach and small intestine in one sitting. According to the results, MACE detected more pathologies than by gastroscopy alone, and no significant difference in diagnostic sensitivity was found between the methods when examining the upper GI tract[11].\nPrecise locomotion of the magnetic capsule inside the GI tract by manual control is not possible due to the variable density of tissues and variable distance of the capsule, e.g., in the stomach from the external magnet. Moreover, exact spatial location of the capsule, its relation to the surrounding organs, or the ante-/retrograde orientation cannot be judged accurately. 
Therefore, a control mechanism is needed alongside the magnetic capsule, which is based on external robotics that enable the physician's input to be executed (i.e., joystick: in forward, backward, upward, downward, and sideways directions). By pre-programming instruction sequences (script) in the computing control unit to examine the stomach from the fundus to the antrum, we created a reproducible examination of the complete inner mucosal lining of the stomach, which lowered the variability among investigations. If the examining physician notices significant pathology, he/she can intervene and move the examination to manual control and revisit the alteration; doing so increases the number of images taken of the lesion and optimizes the diagnostic accuracy of the study.\nNaviCam was the first magnetically controlled capsule system that enables bidirectional data transmission and robotic control. The capsule functions at 1-6 fps and its spatial orientation with continuous monitoring of energy levels are transmitted via the recording vest to control the data and screen of the computing unit. At the console, we can control the precise locomotion of the capsule, image capturing speed, and brightness, as well as turn it on or off.\nWe combined manual and automatic, robotic capsule control to optimize the gastric procedure. As we previously published in an in vitro setting, complete 100% visualization of the inner surface of a plastic stomach model with the medium and large stomach autoscan program module and with the freehand controlling method could be successfully achieved in all cases. With the small and medium stomach mode, we could observe only 97.5% of the inner surface, because of the incomplete visualization of the prepyloric region. With freehand method, we needed nearly twice as much average time (749 s) to make a complete examination, compared to the robotic maneuvering with autoscan program (390 s). In an everyday praxis, in each patient position, the following stations and capsule positioning were performed after running the 3 different autoscan programs[12,13]. \nCE is a potentially patient-friendly and noncontact/infection-free diagnostic modality of screening for gastric diseases, which may result in complete and excellent examinations of the stomach. During the coronavirus disease 2019 pandemic, direct contact with the patients in the practice is a potential risk factor for infections. Non-contact medicine is a possibility for further examinations, such as remote control of CE[14]. The prevalence of focal gastric lesions (polyps, gastric ulcers, submucosal tumors, etc.) increases with age, and they may be detected with MCCE as effectively as with gastroscopy[8]. The sensitivity is between 90% and 92%, and the specificity is between 94% and 96.7%[15]. Several studies proved that, altogether, reliable preparation (gastric cleanliness), careful examination, and MCCE are pivotal for adequate gastric visualization and, therefore, for the detection of superficial gastric neoplasms[16]. In a large multicenter gastric cancer screening trial conducted in China seven asymptomatic patients out of 3182 were diagnosed with advanced cancer by MCCE. Cancer prevalence was highest in the gastric body (3 cases), followed by 2 cases in the cardia and 2 in the antrum, while 1-1 cases were detected in the region of the angulus, in the fundus and in the esophagus. All were confirmed to be adenocarcinoma pathologically. 
These results indicate that screening with MCCE may be useful in high family risk populations or people aged over 50 years old as a gastric cancer screening modality[8].\nMagnetic steering may also impact TTs, as delayed gastric transit of the CE (especially in patients with gastroparesis) may lead to an incomplete SB exam. This method may increase the papilla detection rate as well. In addition to magnetic steering, there are several other independent factors, such as male gender and higher BMI, which increase gastric TTs and decrease SB TTs, thus impacting the time required for CE completion[10]. Despite the fact that delayed gastric TT may cause incomplete SBCE, we should avoid esophago-gastro-duodenoscopy to resolve temporary gastric capsule retention and instead use MCCE[17]. \nThe visual diagnosis of the presence of H. pylori with standard white light endoscopy (WLI) has a relatively low accuracy, especially in population with a low pretest probability. Moreover, WLI endoscopy correlates poorly with histopathological findings of H. pylori induced gastritis too. Recently, low quality retrospective studies proposed the theory, that with the application of a special electronic chromoendoscopy, linked color imaging, diffuse reddish appearance of the mucosa in the gastric body and fundic glands correlates with the presence of HP[18].\nIn our study we found no correlation between the HP status and the activity and type of gastritis observed on MCCE as follows, but not included this into the Table 2, since it would be relevant in another publication, focusing HP status and gastritis on MCCE.\nMCCE is an excellent method to visualize the entire gastric mucosal surface up to 100% precision in healthy volunteers[5]. In a previous study of 75 consecutive patients, we demonstrated that with our combined method with automatic and manual control, a complete visualization of the cardia, fundus, corpus, angulus, antrum, and duodenal bulb was 95.4%, 100%, 100%, 100%, 100% and 100%, respectively. The ampulla of Vater was detected and observed in 20 patients (26%). Moreover, the visualization of the distal esophagus and the Z-line was possible in 67/75 patients (89%), which was assessed as complete in 23, and partially complete in 44 patients. The mean stationary time for MCCE in the distal esophagus was 1 min and 32 s[19].\nThe use of MCCE needs to be considered in western and eastern countries, as there are two different scenarios, depending on the different levels of prevalence of UGI diseases and gastric cancer in each country. Nowadays, MCCE is mostly used for the detection of malignant and premalignant gastric lesions, which are more prevalent in eastern countries. However, the present technology opens the door to further, new technologies; subsequent developments would allow MCCE technology to be extended to other regions of the GI tract, e.g., the esophagus, which may facilitate the detection of more relevant lesions, such as Barrett metaplasia in western countries as well. 
With second-generation MCCE, visualization of the esophagus, Z-line, and duodenal papilla would be improved, according to a study by Jiang et al[20] Furthermore, the gastric TT could shorten by one-third, which was nearly achieved by conventional endoscopy with an examination of approximately 10 min.\nIt is known that the current gold standard gastroscopy has some clear disadvantages, as it is commonly uncomfortable for patients, and therefore it is mostly performed under sedation, which carries definite procedure related risks. MCCE, as a patient-friendly, non-invasive test, might be an alternative for patients who refuse to undergo gastroscopy and increase patients’ adherence for screening. Furthermore, MCCE of the stomach was recently approved by the Chinese Food and Drug Association for the following diagnostic indications: (1) As an alternative diagnostic tool for patients who refuse to undergo gastroscopy; (2) Screening of gastric diseases as a part of physical examination; (3) Screening for gastric cancer; (4) To diagnose various causes of GI inflammation; (5) To perform follow-up for diseases like gastric varices, gastric ulcer, atrophic gastritis, and polyps after surgical or endoscopic removal[21].\nThere is no similar study in the literature, as we performed a complete upper GI capsule examination, including the stomach and the SB with the same capsule endoscope during MCCE. Denzer et al[22] published a blinded, prospective trial from two French centers with the Intromedic manually controlled MCCE. A total of 189 patients were enrolled into this multicenter study. Lesions were defined as major (requiring biopsy or removal) or minor ones. The final gold-standard was unblinded conventional gastroscopy with biopsy, under sedation with propofol. Twenty-three major lesions were found in 21 patients and in this population, the capsule accuracy was 90.5% as compared to gastroscopy. Of the remaining 168 patients, 94% had minor and mostly multiple lesions; the capsule accuracy was 88.1%. All patients preferred MCCE over gastroscopy[22].\nOne of the risk factors of incomplete SB capsule endoscopy is a prolonged gastric transit time, which could be considered as a limitation of our combined gastric and SB study protocol. In our patient population, the average gastric transit time with magnetic transpyloric, manual control was 26 min. In contrast, in those cases where the magnetic transpyloric control failed, after examining the stomach, we left the capsule to propel through the pylorus by spontaneous peristaltic activity. In these patient groups, the average gastric transit time took 1 h and 9 min. In 10 cases out of 18 incomplete SB studies caused by battery low energy, this event occurred in 3 patients with manual magnetic passage and in 7 patients with spontaneous transpyloric passage[23].\nVisibility and identification of landmarks are important factors to consider in accurate examination of stomach using MCCE. Gastric landmarks and typical stations described in the methods were always forced to achieve during combined automatic and manual maneuvering. For improving the learning curve of our gastroenterologist, we started to train the examinations in a plastic stomach model. In our previously published abstract, we described the improvement of the learning curves with manual magnetic controls both in experts and in trainees[1]. 
In this study, we find significant differences in the examination time of the complete inner surface mapping between trainees and experts, and moreover automatic protocols were faster and equally accurate as experts to achieve a complete inner surface mapping.\nThe problem how to minimalize bubbles and mucoid secretions is an existing problem in real life studies. To improve visibility, we established a unique preparation process with a combination of bicarbonate, Pronase B, and simethicone combined with a patient body rotation technique[2]. Moreover, in our described stations, we rotate our patients from left lateral to supine, then from supine top right lateral, and finally from right lateral to supine position during MCCE study. During this protocol, the gastric mucoid secretions also moving into different parts of the stomach due to the gravity making visible all the landmarks and the majority of the mucosal abnormalities. Application of prokinetics or motilin agonist erythromycin might also be an option in future studies to improve the visibility and reduce gastric lake content[24,25].\nAn inherent limitation of our present study that we performed gastroscopy only in a few patients with major gastric pathologies to accomplish final diagnosis and biopsy; and therefore, we could not assess the accuracy of MCCE in all patients compared to gastroscopy. However, several previous studies demonstrated excellent diagnostic value and high accuracy. In a recent meta-analysis of Zhang et al[26] four studies with 612 patients were included, in which the results of blinded MCCE and gastroscopy were compared. MCCE demonstrated a pooled sensitivity and specificity of 91% (95%CI: 0.87-0.93) and 90% (95%CI: 0.75-0.96), respectively. The diagnostic accuracy of MCCE was 91% (95%CI: 0.88-0.94) for assessing gastric diseases. ", "MCCE is an effective and safe diagnostic method for evaluating gastric and SB mucosal lesions while being utilized as a single upper GI capsule endoscope in patients referred for SBCE evaluation. MCCE using the novel Ankon capsule endoscopic technique is a proper diagnostic method for evaluating the gastric mucosa. It is a promising non-invasive screening tool that may be applied in future monitoring of patients with high gastric cancer family risk to decrease morbidity and mortality of benign and malignant upper GI disorders." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design", "Patients", "Patient preparation for MCCE", "Technical methods of MCCE", "MCCE study protocol and gastric stations", "Examining the small intestine", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Capsule endoscopy (CE) is a non-invasive, painless, patient-friendly, and safe diagnostic method. It is currently the gold standard examination for small bowel (SB) diseases, including inflammatory bowel disease, suspected polyposis syndromes, unexplained abdominal pain, celiac disease, and obscure gastrointestinal bleeding (OGIB). Small bowel capsule endoscopy (SBCE) provides excellent diagnostic yield, which has been proven in many clinical trials since its introduction in the late 1990s. Currently, it has a fundamental role in gastrointestinal (GI) endoscopy, especially in patients with suspected SB diseases[1]. Attempts to expand the diagnostic role of CE to areas that are traditionally explored with standard endoscopic procedures (gastroscopy and colonoscopy), e.g., the esophagus, stomach, and colon. These aims have been challenged by significant drawbacks in the past due to a lack of optimal preparation, standardized procedural design, and ability to visualize the entire mucosal surface[2]. A common perception is that the anatomical structures of the stomach and colon are too complex for adequate visualization with a passive capsule endoscope[3-5].\nAlthough the SB seems to be a simple pipe, the more complex gastric anatomy with passive movements may cause unexplored areas of the gastric mucosa when utilizing standard CE. Magnetically controlled capsule endoscopy (MCCE) systems were developed over the past ten years to precisely control capsule movements inside the GI tract and to achieve better visualization of the entire mucosa, especially in the stomach. Different approaches have been developed and studied over time, which have enabled manual steering or active movement of the capsule systems. The external magnetic field seems to be the most adequate and energetically effective method for steering and holding the capsule within the stomach[6,7].\nAccording to recent, prospective studies that were mainly conducted in China, a methodology developed by Ankon and AnX robotics could be an optimal method for non-invasive screening and early diagnosis of gastric diseases such as gastric cancer and intramucosal high-grade dysplasia. It has the potential to become a valuable and accepted screening method in a population with high gastric cancer risk[8]. \nThe aim of this study was to evaluate the feasibility, safety, and diagnostic yield of the Ankon MCCE system in patients with gastric and SB disorders who were referred to our endoscopy unit for SBCE examination between September 2017 and December 2020. Our secondary aim was to evaluate the overall gastric examinations, SB transit times (TTs) and efficacy of MCCE to achieve complete gastric and SBCE investigation by utilizing the same upper GI capsule.", "Study design Our present study prospectively enrolled outpatients who were seen at the Endo-Kapszula Endoscopy Unit, Székesfehérvár, Hungary, between September 2017 and December 2020. A combined investigation of the stomach and SB of patients referred to for SBCE was performed with a robotically MCCE system (NaviCam, Ankon Technologies Co, Ltd, Shanghai, China).\nThe primary endpoint of the recent study was to investigate the diagnostic yield of MCCE in the evaluation of gastric and SB abnormalities. 
The secondary endpoints were to evaluate the feasibility of performing complete gastric and SB investigation by utilizing the same diagnostic procedure; to address safety parameters, including adverse and severe adverse events, and to calculate the capsule TTs, such as esophageal, gastric, and SB TTs, and the overall procedure time.\nOur present study prospectively enrolled outpatients who were seen at the Endo-Kapszula Endoscopy Unit, Székesfehérvár, Hungary, between September 2017 and December 2020. A combined investigation of the stomach and SB of patients referred to for SBCE was performed with a robotically MCCE system (NaviCam, Ankon Technologies Co, Ltd, Shanghai, China).\nThe primary endpoint of the recent study was to investigate the diagnostic yield of MCCE in the evaluation of gastric and SB abnormalities. The secondary endpoints were to evaluate the feasibility of performing complete gastric and SB investigation by utilizing the same diagnostic procedure; to address safety parameters, including adverse and severe adverse events, and to calculate the capsule TTs, such as esophageal, gastric, and SB TTs, and the overall procedure time.\nPatients All patients who were referred to our endoscopy unit for SBCE and who agreed to our gastric study protocol were included in the current study. During the study period, we enrolled 284 consecutive, eligible patients, of which there were 149 males (52.5%) and 135 females (47.5 %), with a mean age of 44 years. Detailed patient characteristics are shown in Table 1. The indications of MCCE were iron deficiency anemia, OGIB, suspected/established SB Crohn’s disease, suspected/confirmed celiac disease, suspected SB neoplastic disease, carcinoid syndrome, and SB polyposis syndromes.\nIndications of small bowel capsule endoscopy (grouped by gender)\nBMI: Body mass index; SB: Small bowel.\nThis study was approved by the Ethical Committee of the University of Szeged (Registry No. 5/17.04.26) and registered in the ClinicalTrials.gov trial registry (Identifier: NCT03234725). The present study was conducted according to the World Medical Association's Declaration of Helsinki provisions in 1995. Patients agreed to undergo capsule endoscopies and Helicobacter pylori (H. pylori) urea breath tests (UBTs) by written informed consent.\nAll patients who were referred to our endoscopy unit for SBCE and who agreed to our gastric study protocol were included in the current study. During the study period, we enrolled 284 consecutive, eligible patients, of which there were 149 males (52.5%) and 135 females (47.5 %), with a mean age of 44 years. Detailed patient characteristics are shown in Table 1. The indications of MCCE were iron deficiency anemia, OGIB, suspected/established SB Crohn’s disease, suspected/confirmed celiac disease, suspected SB neoplastic disease, carcinoid syndrome, and SB polyposis syndromes.\nIndications of small bowel capsule endoscopy (grouped by gender)\nBMI: Body mass index; SB: Small bowel.\nThis study was approved by the Ethical Committee of the University of Szeged (Registry No. 5/17.04.26) and registered in the ClinicalTrials.gov trial registry (Identifier: NCT03234725). The present study was conducted according to the World Medical Association's Declaration of Helsinki provisions in 1995. Patients agreed to undergo capsule endoscopies and Helicobacter pylori (H. 
pylori) urea breath tests (UBTs) by written informed consent.\nPatient preparation for MCCE According to the SBCE guidelines, we applied the following patient preparation and study protocol: Patients followed a liquid diet and consumed 2 L of water with two sacks of polyethylene glycol (PEG) the day before the study. First, H. pylori C13 UBT was performed on the morning of the study, while the patient was in a fasting condition. The UBT requires an average of 30 min, during which time the patient should not consume any fluids or food. During that visit, the patient's medical history was recorded, and a physical examination was performed. After the UBT, the patient ingested 2 dL water with simethicone suspension (80 mg) to reduce mucus in the stomach. Then, to distend the stomach properly, approximately 8-10 dL of clean water consumed by all patients within 10 min. Water ingestion may be repeated as needed to enhance gastric distension during examination. After complete visualization of the total gastric mucosal surface, active pyloric propulsion of the capsule was attempted in all patients with the external magnetic field. If it was unsuccessful, 10 mg intravenous metoclopramide was applied 60 min after the capsule was swallowed.\nContraindications of MCCE were the same as of those of SBCE and magnetic resonance imaging (MRI) examination. In our study, absolute contraindications were the patients with previous abdominal surgery and/or a previous capsule retention; with implanted MRI-incompatible electronic devices (e.g., implantable cardioverter-defibrillator and pacemakers), or magnetic, metal foreign bodies; and who were not competent or refused the informed consent form; who were under age 18 or above age 70; and those who were pregnant. A patency capsule test was initially performed in all patients to determine those with relative contraindications, including known or suspected GI obstruction due to SB Crohn’s disease, neoplasia, or SB surgery.\nAccording to the SBCE guidelines, we applied the following patient preparation and study protocol: Patients followed a liquid diet and consumed 2 L of water with two sacks of polyethylene glycol (PEG) the day before the study. First, H. pylori C13 UBT was performed on the morning of the study, while the patient was in a fasting condition. The UBT requires an average of 30 min, during which time the patient should not consume any fluids or food. During that visit, the patient's medical history was recorded, and a physical examination was performed. After the UBT, the patient ingested 2 dL water with simethicone suspension (80 mg) to reduce mucus in the stomach. Then, to distend the stomach properly, approximately 8-10 dL of clean water consumed by all patients within 10 min. Water ingestion may be repeated as needed to enhance gastric distension during examination. After complete visualization of the total gastric mucosal surface, active pyloric propulsion of the capsule was attempted in all patients with the external magnetic field. If it was unsuccessful, 10 mg intravenous metoclopramide was applied 60 min after the capsule was swallowed.\nContraindications of MCCE were the same as of those of SBCE and magnetic resonance imaging (MRI) examination. 
In our study, absolute contraindications were the patients with previous abdominal surgery and/or a previous capsule retention; with implanted MRI-incompatible electronic devices (e.g., implantable cardioverter-defibrillator and pacemakers), or magnetic, metal foreign bodies; and who were not competent or refused the informed consent form; who were under age 18 or above age 70; and those who were pregnant. A patency capsule test was initially performed in all patients to determine those with relative contraindications, including known or suspected GI obstruction due to SB Crohn’s disease, neoplasia, or SB surgery.\nTechnical methods of MCCE Our system involves a special static magnet with robotic and manual guidance, computer workstation with ESNavi software (Ankon Technologies Co, Ltd) (Figure 1), magnetic capsule endoscope, capsule locator, and data recorder. The capsule endoscope sizes 26.8 × 11.6 mm, weighs 4.8 g, and has a permanent spherical magnet inside (Figure 2). The operator can adjust the frequency of captured pictures from 0.5 to 6 frames per second (fps). Capsule functioning can be stopped temporarily and restarted by the operator remotely from the workstation. The picture resolution is 480 × 480 pixels, and the field of view is 140°. The illumination can be automatically adjusted by an automatic picture focusing mechanism, which enables the view depth to shift from 0 mm to 60 mm. Depending on the fps, the battery life can be over 10 h, which allows combined gastric and SB capsule investigations with the same capsule.\nRobotic C-arm, investigation table, and computer workstation for magnetically controlled capsule endoscopy.\nMagnetic capsule endoscope (NaviCam).\nThe magnetic robotic C-arm generates an adjustable magnetic field outside of the body with a maximum strength of 0.2 T, which allows precise controlled movements in three spatial directions. During the process, the physician guides the magnetic capsule by two joysticks in any chosen, spatial direction or along its axis, and therefore, he can rotate or tilt it. Moreover, the capsule can advance forward 360° by a rotational automatism in the direction of the capsule visual axis, which is helpful for transposition of the capsule from the long axis of the stomach.\nThe capsule locator is a unique device that activates the capsule by infrared light prior to the patient swallowing it. The presence of the capsule can be detected in the case of suspected capsule retention (Figure 3).\nCapsule locator device.\nThe system is capable of real-time digital transmission of images to the operating system. At the same time, it is continuously obtaining information about the gyroscope, which is built into the capsule [three-dimensional (3D) motion detector]. It also receives data from the actual spatial location of the camera and localizes the capsule inside the body at any time. The data recorder (Figure 4) may receive all motion information and high-quality pictures via wireless transmission from the capsule endoscope. In contrast, the connection between the data recorder and the workstation via a USB wire makes visible the real-time pictures and gyroscope information.\nData recorder.\nMagnetic CE can be transferred along five different axes with the controlling magnet: two rotational and three 3D spaces. 
To achieve this, there are two joysticks on the workstation desktop: the left one controls the capsule in two rotational axes (horizontal and vertical directions) and the right joystick directs it in the three translational axes (forward/backwards, up/down, left/right). Precise magnetic control is achieved by positioning the examining table, modifying the position of the spherical magnet axis along the 3D space, and dynamically adjusting the strength and direction of the vectorial magnetic fields perpendicular to each other. The above-mentioned robotic systems can also automatically run the magnetic field vector and position it by robotically controlled computer-based software (scripts) in default mode, which can achieve a standardized mucosal scanning during the MCCE procedure without the direct intervention of a physician. \nOur system involves a special static magnet with robotic and manual guidance, computer workstation with ESNavi software (Ankon Technologies Co, Ltd) (Figure 1), magnetic capsule endoscope, capsule locator, and data recorder. The capsule endoscope sizes 26.8 × 11.6 mm, weighs 4.8 g, and has a permanent spherical magnet inside (Figure 2). The operator can adjust the frequency of captured pictures from 0.5 to 6 frames per second (fps). Capsule functioning can be stopped temporarily and restarted by the operator remotely from the workstation. The picture resolution is 480 × 480 pixels, and the field of view is 140°. The illumination can be automatically adjusted by an automatic picture focusing mechanism, which enables the view depth to shift from 0 mm to 60 mm. Depending on the fps, the battery life can be over 10 h, which allows combined gastric and SB capsule investigations with the same capsule.\nRobotic C-arm, investigation table, and computer workstation for magnetically controlled capsule endoscopy.\nMagnetic capsule endoscope (NaviCam).\nThe magnetic robotic C-arm generates an adjustable magnetic field outside of the body with a maximum strength of 0.2 T, which allows precise controlled movements in three spatial directions. During the process, the physician guides the magnetic capsule by two joysticks in any chosen, spatial direction or along its axis, and therefore, he can rotate or tilt it. Moreover, the capsule can advance forward 360° by a rotational automatism in the direction of the capsule visual axis, which is helpful for transposition of the capsule from the long axis of the stomach.\nThe capsule locator is a unique device that activates the capsule by infrared light prior to the patient swallowing it. The presence of the capsule can be detected in the case of suspected capsule retention (Figure 3).\nCapsule locator device.\nThe system is capable of real-time digital transmission of images to the operating system. At the same time, it is continuously obtaining information about the gyroscope, which is built into the capsule [three-dimensional (3D) motion detector]. It also receives data from the actual spatial location of the camera and localizes the capsule inside the body at any time. The data recorder (Figure 4) may receive all motion information and high-quality pictures via wireless transmission from the capsule endoscope. In contrast, the connection between the data recorder and the workstation via a USB wire makes visible the real-time pictures and gyroscope information.\nData recorder.\nMagnetic CE can be transferred along five different axes with the controlling magnet: two rotational and three 3D spaces. 
To achieve this, there are two joysticks on the workstation desktop: the left one controls the capsule in two rotational axes (horizontal and vertical directions) and the right joystick directs it in the three translational axes (forward/backwards, up/down, left/right). Precise magnetic control is achieved by positioning the examining table, modifying the position of the spherical magnet axis along the 3D space, and dynamically adjusting the strength and direction of the vectorial magnetic fields perpendicular to each other. The above-mentioned robotic systems can also automatically run the magnetic field vector and position it by robotically controlled computer-based software (scripts) in default mode, which can achieve a standardized mucosal scanning during the MCCE procedure without the direct intervention of a physician. \nMCCE study protocol and gastric stations To achieve optimal gastric mucosal visualization and standardization of the MCCE protocol in the stomach, we defined eight different stations with different patient positions to visualize the entire inner gastric surface as during upper GI endoscopy. Changing the patient position from the left lateral decubitus to the supine and right lateral position is necessary to combine gravity and magnetic force, which improve capsule maneuvering (Figure 5). The capsule swallowed by the patients in the left lateral decubitus position (Figure 6).\nDifferent magnetically controlled capsule endoscopy stations and capsule camera orientations defined to achieve a complete gastric mucosal surface visualization and mapping (created by Zoltán Tóbiás, MD).\nThe capsule swallowed by the patients in the left lateral decubitus position. A: Patients and ball magnet position; B and C: Pictures of the Z-line by CE from our database.\n\nStations at the left lateral decubitus position: Station I. Visualization of the gastric fundus and subcar-dial region with the cardia (posterior J type retroflexion).\nAfter entering the stomach, the capsule was lowered into the large curvature at the body of the stomach. The magnetic ball was held high up at the level of the patient's right shoulder. The capsule camera was maintained in an obliquely upward orientation of 45° and then horizontally rotated to observe the gastric fundus and the cardia (Figure 7A-C).\n\nStations I-III at the left lateral decubitus position. A: Patients’ position (Station I); B: Cardia by a gastroscope (Station I); C: Cardia by CE (Station I); D: Patients’ position (Station II); E: Cardia by a gastroscope (Station II); F: Cardia by CE (Station II); G: Patients’ position (Station III); H: Corpus by a gastroscope (Station III); I: Corpus by a capsule (Station III).\nStation II. Moving closer to the cardia and the fundus (anterior J type retroflexion).\nWhile using the right joystick, the ball magnet was lowered and fitted closely to the patient's right arm. Due to the proximity of the magnetic field, the capsule rose to the anterior wall and small curvature of the stomach. The capsule was then rotated with the camera oriented vertically upward to observe the cardia up close, with the fundus at a distance (Figure 7D-F).\nStation III. 
Visualization of the gastric body from the fundus.\nWith the ball magnet in the same position, using the left joystick, the capsule camera was rotated and tilted downwards, enabling visualization of the proximal body of the stomach (corpus) and the gastric folds at the large curvature in a longitudinal view (Figure 7G-I).\n\nStations in the supine position: Station IV. Visualization of the angular incisure.\nPatients were asked to lie on their backs, in a supine position. The capsule moved to the distal body, into the angular region, due to the change of gravity and stepwise magnetic movements. The magnetic ball was moved to the left upper abdomen (hypochondrium) and then lowered, close to the patient's abdomen. In this case, the capsule was raised to the anterior wall of the stomach; therefore, the small curvature and the angular incisure could be examined as well (Figure 8A-C).\n\nStations IV and V in the supine position. A: Patients’ position (Station IV); B: Angular incisure by a gastroscope (Station IV); C: Angular incisure by CE (Station IV); D: Patients’ position (Station V); E: Angular incisure by a gastroscope (Station V); F: Angular incisure by CE (Station V).\nStation V. Visualization of the larger curvature of the distal body from the angular incisure (U-type retroversion).\nThe magnetic ball was moved to the epigastric area, close to the abdomen. Then the capsule camera was oriented upward to observe the anterior wall of the gastric body. In this position, the capsule can be turned and rotated (e.g., towards the cardia), enabling visualization of the distal body of the stomach longitudinally, as in the endoscopic view of U-type retroversion (Figure 8D-F).\n\nStations at the right lateral decubitus patient position: Station VI. Visualization of the antral canal.\nIn the next phase, patients were asked to turn to the right lateral decubitus position. Due to gravity, the capsule sank and moved spontaneously into the antral region. Then, the ball magnet was positioned over the left kidney. The capsule was brought closer to the large curvature with the camera oriented obliquely downward at 45°, which enabled observation of the antrum. The antral canal could then be examined with the pylorus, and the angular incisure became visible from the antrum (Figure 9A-C).\n\nStation at the right lateral decubitus patient position. A: Patients’ position (Station VI); B: Antrum canal of stomach by gastroscope (Station VI); C: Antrum canal of stomach by CE (Station VI).\n\nStations in the supine position: Station VII. Prepyloric view and visualization of the pylorus.\nAfter that, each patient was asked to lie on his or her back in the supine position. The ball magnet was positioned close to the body on the right upper abdomen (hypochondrium). The capsule camera was oriented horizontally and laterally toward the pylorus for closer observation. The magnet position ensured that the capsule remained in the antrum. Using both the right and left joysticks, we moved the capsule closer to the pylorus (Figure 10A-C).\n\nStations VII and VIII in the supine position. 
A: Patient’s position (Station VII); B: Antrum canal of stomach by gastroscope (Station VII); C: Antrum canal of stomach by CE (Station VII); D: Patient’s position (Station VIII); E: Pyloric ring (Station VIII); F: Ampulla of Vater (Station VIII); G: Pylorus from the duodenal bulb with a capsule (Station VIII).\nStation VIII: Magnetically controlled transpyloric passage of the capsule.\nThe magnetic ball was placed at the patient's right side at the level of the duodenal bulb. The capsule was then rotated until the camera faced the pylorus. The capsule was dragged close to the pylorus under the guidance of the magnet robot. As the pylorus opened, peristalsis assisted the capsule into the duodenum. After transpyloric passage, we first depicted the duodenal bulb; then, from the descending and inferior horizontal parts of the duodenum, we visualized the ampulla of Vater by tilting the capsule camera upwards to facilitate the retrograde view (Figure 10D-G).
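The eight stations described above can be condensed into a compact reference table. The sketch below paraphrases the protocol as data; the wording of each field is a simplified summary of the text, not a set of machine-readable device parameters.

```python
# Illustrative one-line-per-station summary of the gastric MCCE protocol.
# Field values paraphrase the text above; they are not device parameters.
STATIONS = [
    ("I",    "left lateral",  "right shoulder, held high",   "45° obliquely upward, rotated", "fundus, subcardial region, cardia"),
    ("II",   "left lateral",  "close to right arm",          "vertically upward",             "cardia up close, fundus at distance"),
    ("III",  "left lateral",  "close to right arm",          "rotated and tilted downward",   "proximal corpus, large-curvature folds"),
    ("IV",   "supine",        "left hypochondrium, lowered", "raised to anterior wall",       "small curvature, angular incisure"),
    ("V",    "supine",        "epigastrium, close",          "upward, rotatable",             "distal body, U-type retroversion view"),
    ("VI",   "right lateral", "over the left kidney",        "45° obliquely downward",        "antrum, pylorus, incisure from antrum"),
    ("VII",  "supine",        "right hypochondrium",         "horizontal, toward pylorus",    "prepyloric region, pylorus"),
    ("VIII", "supine",        "right side, duodenal bulb",   "facing the pylorus",            "transpyloric passage, bulb, papilla"),
]

for station, patient, magnet, camera, target in STATIONS:
    print(f"Station {station:<5} patient: {patient:13} magnet: {magnet:27} "
          f"camera: {camera:30} target: {target}")
```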
Examining the small intestine: If the capsule entered the small intestine, patients were asked to drink 1 L of PEG solution, and from then on, they were requested to drink clear fluids (i.e., water). The movement of the capsule in the small intestine was then monitored hourly in real-time visualization mode. The examination ended when the capsule arrived at the colon or stopped functioning due to low battery.", "Our present study prospectively enrolled outpatients who were seen at the Endo-Kapszula Endoscopy Unit, Székesfehérvár, Hungary, between September 2017 and December 2020. A combined investigation of the stomach and SB of patients referred for SBCE was performed with a robotic MCCE system (NaviCam, Ankon Technologies Co, Ltd, Shanghai, China).\nThe primary endpoint of the present study was to investigate the diagnostic yield of MCCE in the evaluation of gastric and SB abnormalities. The secondary endpoints were to evaluate the feasibility of performing a complete gastric and SB investigation by utilizing the same diagnostic procedure; to address safety parameters, including adverse and severe adverse events; and to calculate the capsule TTs, such as esophageal, gastric, and SB TTs, and the overall procedure time.", "All patients who were referred to our endoscopy unit for SBCE and who agreed to our gastric study protocol were included in the current study. During the study period, we enrolled 284 consecutive, eligible patients, of whom 149 were males (52.5%) and 135 were females (47.5%), with a mean age of 44 years. Detailed patient characteristics are shown in Table 1. The indications of MCCE were iron deficiency anemia, OGIB, suspected/established SB Crohn’s disease, suspected/confirmed celiac disease, suspected SB neoplastic disease, carcinoid syndrome, and SB polyposis syndromes.\nIndications of small bowel capsule endoscopy (grouped by gender)\nBMI: Body mass index; SB: Small bowel.\nThis study was approved by the Ethical Committee of the University of Szeged (Registry No. 5/17.04.26) and registered in the ClinicalTrials.gov trial registry (Identifier: NCT03234725). 
The present study was conducted according to the provisions of the World Medical Association's Declaration of Helsinki (1995 revision). Patients gave written informed consent to undergo capsule endoscopy and Helicobacter pylori (H. pylori) urea breath tests (UBTs).", "According to the SBCE guidelines, we applied the following patient preparation and study protocol: Patients followed a liquid diet and consumed 2 L of water with two sachets of polyethylene glycol (PEG) the day before the study. First, an H. pylori C13 UBT was performed on the morning of the study, while the patient was in a fasting state. The UBT requires an average of 30 min, during which time the patient should not consume any fluids or food. During that visit, the patient's medical history was recorded, and a physical examination was performed. After the UBT, the patient ingested 2 dL of water with simethicone suspension (80 mg) to reduce mucus in the stomach. Then, to distend the stomach properly, approximately 8-10 dL of clean water was consumed by all patients within 10 min. Water ingestion could be repeated as needed to enhance gastric distension during the examination. After complete visualization of the total gastric mucosal surface, active pyloric propulsion of the capsule was attempted in all patients with the external magnetic field. If it was unsuccessful, 10 mg of intravenous metoclopramide was administered 60 min after the capsule was swallowed.\nContraindications of MCCE were the same as those of SBCE and magnetic resonance imaging (MRI) examinations. In our study, absolute contraindications were previous abdominal surgery and/or previous capsule retention; implanted MRI-incompatible electronic devices (e.g., implantable cardioverter-defibrillators and pacemakers) or magnetic, metal foreign bodies; lack of competence or refusal to sign the informed consent form; age under 18 or above 70 years; and pregnancy. A patency capsule test was initially performed in all patients to identify those with relative contraindications, including known or suspected GI obstruction due to SB Crohn’s disease, neoplasia, or SB surgery.", 
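Read as a checklist, the preparation protocol above amounts to a fixed sequence of timed steps. The sketch below restates it as plain data for quick reference; the timings are those reported in the text, while the structure and field names are illustrative assumptions.

```python
# Illustrative restatement of the patient-preparation steps described above;
# timings are from the text, the data structure itself is an assumption.
PREPARATION = [
    ("day before",            "liquid diet; 2 L of water with two sachets of PEG"),
    ("morning, fasting",      "H. pylori C13 urea breath test (~30 min, no food or fluids)"),
    ("after UBT",             "2 dL of water with 80 mg simethicone to reduce gastric mucus"),
    ("within next 10 min",    "8-10 dL of clean water to distend the stomach (repeatable)"),
    ("after gastric mapping", "attempt magnetic transpyloric propulsion of the capsule"),
    ("60 min post-swallow",   "if propulsion failed: 10 mg intravenous metoclopramide"),
]

for when, action in PREPARATION:
    print(f"{when:>22}: {action}")
```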
"Our system involves a special static magnet with robotic and manual guidance, a computer workstation with ESNavi software (Ankon Technologies Co, Ltd) (Figure 1), a magnetic capsule endoscope, a capsule locator, and a data recorder. The capsule endoscope measures 26.8 × 11.6 mm, weighs 4.8 g, and has a permanent spherical magnet inside (Figure 2). The operator can adjust the frequency of captured pictures from 0.5 to 6 frames per second (fps). Capsule functioning can be stopped temporarily and restarted by the operator remotely from the workstation. The picture resolution is 480 × 480 pixels, and the field of view is 140°. The illumination can be automatically adjusted by an automatic picture focusing mechanism, which enables the view depth to shift from 0 mm to 60 mm. Depending on the fps, the battery life can be over 10 h, which allows combined gastric and SB capsule investigations with the same capsule.\nRobotic C-arm, investigation table, and computer workstation for magnetically controlled capsule endoscopy.\nMagnetic capsule endoscope (NaviCam).\nThe magnetic robotic C-arm generates an adjustable magnetic field outside of the body with a maximum strength of 0.2 T, which allows precisely controlled movements in three spatial directions. During the procedure, the physician guides the magnetic capsule with two joysticks in any chosen spatial direction or along its axis and can therefore rotate or tilt it. Moreover, the capsule can advance forward 360° by a rotational automatism in the direction of the capsule visual axis, which is helpful for transposition of the capsule along the long axis of the stomach.\nThe capsule locator is a unique device that activates the capsule by infrared light before the patient swallows it. It can also detect the presence of the capsule in cases of suspected capsule retention (Figure 3).\nCapsule locator device.\nThe system is capable of real-time digital transmission of images to the operating system. At the same time, it continuously obtains information from the gyroscope built into the capsule [three-dimensional (3D) motion detector]. It also receives data on the actual spatial location of the camera and can localize the capsule inside the body at any time. The data recorder (Figure 4) receives all motion information and high-quality pictures via wireless transmission from the capsule endoscope. In turn, the connection between the data recorder and the workstation via a USB cable makes the real-time pictures and gyroscope information visible.\nData recorder.\nThe magnetic capsule can be steered along five different axes with the controlling magnet: two rotational and three translational. To achieve this, there are two joysticks on the workstation desktop: the left one controls the capsule in the two rotational axes (horizontal and vertical directions), and the right joystick directs it in the three translational axes (forward/backward, up/down, left/right). Precise magnetic control is achieved by positioning the examining table, modifying the position of the spherical magnet axis in 3D space, and dynamically adjusting the strength and direction of the mutually perpendicular vectorial magnetic fields. The above-mentioned robotic system can also automatically run and position the magnetic field vector using robotically controlled computer-based software (scripts) in default mode, which achieves a standardized mucosal scan during the MCCE procedure without the direct intervention of a physician. ", "To achieve optimal gastric mucosal visualization and standardization of the MCCE protocol in the stomach, we defined eight different stations with different patient positions to visualize the entire inner gastric surface, as during upper GI endoscopy. Changing the patient position from the left lateral decubitus to the supine and right lateral positions is necessary to combine gravity and magnetic force, which improves capsule maneuvering (Figure 5). The capsule was swallowed by the patients in the left lateral decubitus position (Figure 6).\nDifferent magnetically controlled capsule endoscopy stations and capsule camera orientations defined to achieve complete gastric mucosal surface visualization and mapping (created by Zoltán Tóbiás, MD).\nThe capsule was swallowed by the patients in the left lateral decubitus position. A: Patients and ball magnet position; B and C: Pictures of the Z-line by CE from our database.\n\nStations at the left lateral decubitus position: Station I. Visualization of the gastric fundus and subcardial region with the cardia (posterior J-type retroflexion).\nAfter entering the stomach, the capsule was lowered into the large curvature at the body of the stomach. The magnetic ball was held high, at the level of the patient's right shoulder. 
The capsule camera was maintained in an obliquely upward orientation of 45° and then horizontally rotated to observe the gastric fundus and the cardia (Figure 7A-C).\n\nStations I-III at the left lateral decubitus position. A: Patients’ position (Station I); B: Cardia by a gastroscope (Station I); C: Cardia by CE (Station I); D: Patients’ position (Station II); E: Cardia by a gastroscope (Station II); F: Cardia by CE (Station II); G: Patients’ position (Station III); H: Corpus by a gastroscope (Station III); I: Corpus by a capsule (Station III).\nStation II. Moving closer to the cardia and the fundus (anterior J-type retroflexion).\nWhile using the right joystick, the ball magnet was lowered and fitted closely to the patient's right arm. Due to the proximity of the magnetic field, the capsule rose to the anterior wall and small curvature of the stomach. The capsule was then rotated with the camera oriented vertically upward to observe the cardia up close, with the fundus at a distance (Figure 7D-F).\nStation III. Visualization of the gastric body from the fundus.\nWith the ball magnet in the same position, using the left joystick, the capsule camera was rotated and tilted downwards, enabling visualization of the proximal body of the stomach (corpus) and the gastric folds at the large curvature in a longitudinal view (Figure 7G-I).\n\nStations in the supine position: Station IV. Visualization of the angular incisure.\nPatients were asked to lie on their backs, in a supine position. The capsule moved to the distal body, into the angular region, due to the change of gravity and stepwise magnetic movements. The magnetic ball was moved to the left upper abdomen (hypochondrium) and then lowered, close to the patient's abdomen. In this case, the capsule was raised to the anterior wall of the stomach; therefore, the small curvature and the angular incisure could be examined as well (Figure 8A-C).\n\nStations IV and V in the supine position. A: Patients’ position (Station IV); B: Angular incisure by a gastroscope (Station IV); C: Angular incisure by CE (Station IV); D: Patients’ position (Station V); E: Angular incisure by a gastroscope (Station V); F: Angular incisure by CE (Station V).\nStation V. Visualization of the larger curvature of the distal body from the angular incisure (U-type retroversion).\nThe magnetic ball was moved to the epigastric area, close to the abdomen. Then the capsule camera was oriented upward to observe the anterior wall of the gastric body. In this position, the capsule can be turned and rotated (e.g., towards the cardia), enabling visualization of the distal body of the stomach longitudinally, as in the endoscopic view of U-type retroversion (Figure 8D-F).\n\nStations at the right lateral decubitus patient position: Station VI. Visualization of the antral canal.\nIn the next phase, patients were asked to turn to the right lateral decubitus position. Due to gravity, the capsule sank and moved spontaneously into the antral region. Then, the ball magnet was positioned over the left kidney. The capsule was brought closer to the large curvature with the camera oriented obliquely downward at 45°, which enabled observation of the antrum. The antral canal could then be examined with the pylorus, and the angular incisure became visible from the antrum (Figure 9A-C).\n\nStation at the right lateral decubitus patient position. 
A: Patients’ position (Station VI); B: Antrum canal of stomach by gastroscope (Station VI); C: Antrum canal of stomach by CE (Station VI).\n\nStations in the supine position: Station VII. Prepyloric view and visualization of the pylorus.\nAfter that, each patient was asked to lie on his or her back in the supine position. The ball magnet was positioned close to the body on the right upper abdomen (hypochondrium). The capsule camera was oriented horizontally and laterally toward the pylorus for closer observation. The magnet position ensured that the capsule remained in the antrum. Using both the right and left joysticks, we moved the capsule closer to the pylorus (Figure 10A-C).\n\nStations VII and VIII in the supine position. A: Patient’s position (Station VII); B: Antrum canal of stomach by gastroscope (Station VII); C: Antrum canal of stomach by CE (Station VII); D: Patient’s position (Station VIII); E: Pyloric ring (Station VIII); F: Ampulla of Vater (Station VIII); G: Pylorus from the duodenal bulb with a capsule (Station VIII).\nStation VIII: Magnetically controlled transpyloric passage of the capsule.\nThe magnetic ball was placed at the patient's right side at the level of the duodenal bulb. The capsule was then rotated until the camera faced the pylorus. The capsule was dragged close to the pylorus under the guidance of the magnet robot. As the pylorus opened, peristalsis assisted the capsule into the duodenum. After transpyloric passage, we first depicted the duodenal bulb; then, from the descending and inferior horizontal parts of the duodenum, we visualized the ampulla of Vater by tilting the capsule camera upwards to facilitate the retrograde view (Figure 10D-G).", "If the capsule entered the small intestine, patients were asked to drink 1 L of PEG solution, and from then on, they were requested to drink clear fluids (i.e., water). The movement of the capsule in the small intestine was then monitored hourly in real-time visualization mode. The examination ended when the capsule arrived at the colon or stopped functioning due to low battery.", "A UBT revealed H. pylori positivity in 32.7% of cases (Table 2). No significant association was observed between H. pylori status and the type (proximal or distal), distribution (diffuse or focal), or severity (minimal or active, erosive) of the gastritis described on MCCE. \nThe results of Helicobacter pylori C13 urea breath tests\n\nH. pylori: Helicobacter pylori.\nThe mean gastric, SB, and colon TTs with MCCE were 47 min 40 s (M/F: 44 min 15 s/51 min 14 s), 3 h 46 min 22 s (3 h 52 min 44 s/3 h 38 min 21 s), and 1 h 4 min 34 s (1 h 1 min 16 s/1 h 8 min 53 s), respectively. The average total time of the MCCE examination was 5 h 48 min 35 s (5 h 46 min 37 s/5 h 50 min 18 s) (Table 3).\nMean gastric, small bowel, and overall transit times of magnetically controlled capsule endoscopy\nThe diagnostic yield for detecting any abnormality in the stomach and SB with MCCE was 81.9%: 68.6% for minor pathologies and 13.3% for major pathologies. Of the abnormalities, 25.8% were found in the SB and 74.2% in the stomach. The diagnostic yield for the stomach/SB was 4.9%/8.4% for major pathologies and 55.9%/12.7% for minor pathologies (Table 4). In the stomach, ulcers and polyps were considered major pathologies, while signs of gastritis were minor pathologies. In the SB, signs of Crohn’s disease and celiac disease were considered major pathologies, while non-specific inflammation, diverticula, and angiodysplasia were minor pathologies. 
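As a consistency check on the reported figures, the segment transit times can be summed and compared with the overall examination time, and the partial diagnostic yields can be checked against the totals. The sketch below does this with the values given above (reading the gastric transit time in minutes and seconds, which is what makes the totals consistent); the residual of roughly 10 minutes is plausibly accounted for by esophageal transit and time before gastric entry.

```python
from datetime import timedelta

# Mean segment transit times reported above (gastric value in min/s).
gastric     = timedelta(minutes=47, seconds=40)
small_bowel = timedelta(hours=3, minutes=46, seconds=22)
colon       = timedelta(hours=1, minutes=4, seconds=34)
total       = timedelta(hours=5, minutes=48, seconds=35)  # reported total

segment_sum = gastric + small_bowel + colon
print(segment_sum)          # 5:38:36
print(total - segment_sum)  # 0:09:59 unaccounted (esophagus, etc.)

# Diagnostic yield figures reported above, checked against their totals.
stomach_major, sb_major = 4.9, 8.4    # sums to the 13.3% major yield
stomach_minor, sb_minor = 55.9, 12.7  # sums to the 68.6% minor yield
overall = stomach_major + sb_major + stomach_minor + sb_minor
print(f"overall diagnostic yield: {overall:.1f}%")  # 81.9%
```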
Based on findings from MCCE examinations, the distribution of pathologies is depicted in Table 5.\nDiagnostic yield of magnetically controlled capsule endoscopy\nSB: Small bowel.\nDistribution of pathologies detected by magnetically controlled capsule endoscopy\nActive magnetic movement of the capsule through the pylorus was successful in 41.9% of all patients (automated protocol in 56 patients and active, manually controlled magnetic steering in 63 patients) (Table 6).\nDistribution of different types of transpyloric transit in complete and incomplete small bowel studies\nSB: Small bowel.\nIn 18 (M/F: 6/12) patients (6.3%), SB visualization with MCCE was incomplete. Thirteen of the incomplete examinations were the result of battery depletion. In 3 of these 13 cases, the capsule was depleted within 5 h of operation, indicating a manufacturing flaw. The average total study time in the remaining 10 cases was 9 h 12 min 9 s. From the pylorus to the last image, the average transit time was 8 h 26 min 4 s. The examination was discontinued sooner than planned in 3 cases at the patients' request. In 2 cases, there was capsule retention due to Crohn's-like ulceration, accompanied by a narrow bowel lumen (Table 6). These problems resolved with medical treatment and without surgery. There were no serious adverse events, and no capsule retention required surgical intervention during the study period.", "There are two main directions in developing capsule endoscopes with active movements: internally and externally guided techniques. Internal guidance means that the locomotive system is integrated into the capsule, but such systems require longer-lasting batteries for full functionality. Their main advantage is more precise navigation with the help of the internalized locomotive system. External guidance refers to outwardly conducted capsule movements, e.g., with a magnetic control unit or a device/moving arm generating the magnetic field. The exceptional advantage of this method is that the magnetic field (and its adjustment) generated by the external unit controls the locomotive mechanism, making it an energy-saving solution. However, a disadvantage of this technique is that it is limited to more passive, less flexible locomotion. The examining physician controls the capsule from outside the body with an external control unit, which decreases the accuracy of the spatial and temporal movements. Locomotion of the capsule endoscope would provide numerous long-term advantages in the diagnostics and therapy of GI diseases (focused biopsies; treatment options, e.g., laser tumor ablation; local targeted release of certain medications; and endoscopic ultrasound applications)[1]. \nMagnetic-assisted capsule endoscopy (MACE) systems have been developed over the past 5-10 years, primarily for research purposes. Experience drawn from previously conducted in vitro studies showed that the locomotion and precision of the capsule are significantly influenced by the physical characteristics of the magnetic field created by the external magnetic unit while travelling in the human body[1].\nIn 2011, Olympus was the first to introduce a modified MRI machine prototype that moved a MACE system, which allowed the operator to successfully guide the capsule in a chosen spatial direction inside the stomach after the patient drank water. However, this diagnostic procedure did not spread worldwide or gain medical acceptance[6]. 
\nIn 2014, IntroMedic developed a navigation (NAVI) magnetic capsule system that could be moved externally with a small, hammer-shaped, static magnet device. The technology, involving a magnet that assisted the physician in manually moving the capsule endoscope, appeared viable and valuable in many experimental settings. However, it did not achieve any breakthroughs, as it only allowed sudden and harsh position changes of the capsule. In clinical trials studying the application of the NAVI capsule system in a small cohort, 65%-86% of the mucosal layer of the stomach was visualized accurately using external magnetically controlled locomotion after the ingestion of water. Its diagnostic accuracy was similar to that of traditional gastroscopy[9]. Using a similar type of NAVI capsule system, CE was performed in the large intestine, where magnetic locomotion directed the capsule from the cecum to the sigmoid colon. At the same time, a colonoscopy probe was inserted to monitor capsule movements and provide dilatation[10]. In iron deficiency anemia, after routine colonoscopy of 49 patients, the performance of MACE with the IntroMedic NAVI system was evaluated for providing a diagnostic examination of the stomach and small intestine in one sitting. According to the results, MACE detected more pathologies than gastroscopy alone, and no significant difference in diagnostic sensitivity was found between the methods when examining the upper GI tract[11].\nPrecise locomotion of the magnetic capsule inside the GI tract by manual control is not possible due to the variable density of tissues and the variable distance of the capsule from the external magnet, e.g., in the stomach. Moreover, the exact spatial location of the capsule, its relation to the surrounding organs, and its ante-/retrograde orientation cannot be judged accurately. Therefore, a control mechanism is needed alongside the magnetic capsule, based on external robotics, that enables the physician's input to be executed (i.e., joystick: in forward, backward, upward, downward, and sideways directions). By pre-programming instruction sequences (scripts) in the computing control unit to examine the stomach from the fundus to the antrum, we created a reproducible examination of the complete inner mucosal lining of the stomach, which lowered the variability among investigations. If the examining physician notices a significant pathology, he/she can intervene, switch the examination to manual control, and revisit the alteration; doing so increases the number of images taken of the lesion and optimizes the diagnostic accuracy of the study.\nNaviCam was the first magnetically controlled capsule system to enable bidirectional data transmission and robotic control. The capsule functions at 1-6 fps, and its spatial orientation, with continuous monitoring of energy levels, is transmitted via the recording vest to the data control and screen of the computing unit. At the console, we can control the precise locomotion of the capsule, the image capture speed, and the brightness, as well as turn the capsule on or off.\nWe combined manual and automatic, robotic capsule control to optimize the gastric procedure. As we previously published in an in vitro setting, complete (100%) visualization of the inner surface of a plastic stomach model could be achieved in all cases with the medium and large stomach autoscan program modules and with the freehand controlling method. 
With the small and medium stomach modes, we could observe only 97.5% of the inner surface because of incomplete visualization of the prepyloric region. With the freehand method, we needed nearly twice as much time on average (749 s) to complete the examination, compared with robotic maneuvering using the autoscan program (390 s). In everyday practice, in each patient position, the stations and capsule positioning described in the Methods were performed after running the 3 different autoscan programs[12,13]. \nCE is a potentially patient-friendly, noncontact/infection-free diagnostic modality for screening for gastric diseases, which may yield complete and excellent examinations of the stomach. During the coronavirus disease 2019 pandemic, direct contact with patients in practice is a potential risk factor for infection. Non-contact medicine is a possibility for further examinations, such as remote control of CE[14]. The prevalence of focal gastric lesions (polyps, gastric ulcers, submucosal tumors, etc.) increases with age, and they may be detected with MCCE as effectively as with gastroscopy[8]. The sensitivity is between 90% and 92%, and the specificity is between 94% and 96.7%[15]. Several studies have proven that reliable preparation (gastric cleanliness), careful examination, and MCCE are, altogether, pivotal for adequate gastric visualization and, therefore, for the detection of superficial gastric neoplasms[16]. In a large multicenter gastric cancer screening trial conducted in China, seven asymptomatic patients out of 3182 were diagnosed with advanced cancer by MCCE. Cancer prevalence was highest in the gastric body (3 cases), followed by 2 cases in the cardia and 2 in the antrum, while one case each was detected in the angulus, the fundus, and the esophagus. All were confirmed pathologically to be adenocarcinoma. These results indicate that screening with MCCE may be useful as a gastric cancer screening modality in populations with high familial risk or in people aged over 50 years[8].\nMagnetic steering may also impact TTs, as delayed gastric transit of the CE (especially in patients with gastroparesis) may lead to an incomplete SB exam. This method may increase the papilla detection rate as well. In addition to magnetic steering, there are several other independent factors, such as male gender and higher BMI, that increase gastric TTs and decrease SB TTs, thus impacting the time required for CE completion[10]. Although delayed gastric TT may cause incomplete SBCE, we should avoid esophago-gastro-duodenoscopy to resolve temporary gastric capsule retention and instead use MCCE[17]. \nThe visual diagnosis of the presence of H. pylori with standard white light endoscopy (WLI) has relatively low accuracy, especially in populations with a low pretest probability. Moreover, WLI endoscopy also correlates poorly with the histopathological findings of H. pylori-induced gastritis. 
Recently, low-quality retrospective studies have proposed that, with the application of a special electronic chromoendoscopy technique, linked color imaging, a diffuse reddish appearance of the mucosa in the gastric body and fundic glands correlates with the presence of H. pylori[18].\nIn our study, we found no correlation between H. pylori status and the activity and type of gastritis observed on MCCE; these data were not included in Table 2, as they will be the focus of another publication on H. pylori status and gastritis on MCCE.\nMCCE is an excellent method for visualizing the entire gastric mucosal surface, with up to 100% completeness in healthy volunteers[5]. In a previous study of 75 consecutive patients, we demonstrated that, with our combined method of automatic and manual control, complete visualization of the cardia, fundus, corpus, angulus, antrum, and duodenal bulb was achieved in 95.4%, 100%, 100%, 100%, 100% and 100% of cases, respectively. The ampulla of Vater was detected and observed in 20 patients (26%). Moreover, visualization of the distal esophagus and the Z-line was possible in 67/75 patients (89%), which was assessed as complete in 23 and partially complete in 44 patients. The mean stationary time for MCCE in the distal esophagus was 1 min and 32 s[19].\nThe use of MCCE needs to be considered differently in western and eastern countries, as there are two different scenarios depending on the prevalence of UGI diseases and gastric cancer in each country. Nowadays, MCCE is mostly used for the detection of malignant and premalignant gastric lesions, which are more prevalent in eastern countries. However, the present technology opens the door to further, new technologies; subsequent developments would allow MCCE technology to be extended to other regions of the GI tract, e.g., the esophagus, which may facilitate the detection of more relevant lesions, such as Barrett metaplasia, in western countries as well. With second-generation MCCE, visualization of the esophagus, Z-line, and duodenal papilla would be improved, according to a study by Jiang et al[20]. Furthermore, the gastric TT could be shortened by one-third, approaching the approximately 10-min examination time of conventional endoscopy.\nIt is known that the current gold standard, gastroscopy, has some clear disadvantages, as it is commonly uncomfortable for patients, and it is therefore mostly performed under sedation, which carries definite procedure-related risks. MCCE, as a patient-friendly, non-invasive test, might be an alternative for patients who refuse to undergo gastroscopy and might increase patients’ adherence to screening. Furthermore, MCCE of the stomach was recently approved by the Chinese Food and Drug Administration for the following diagnostic indications: (1) As an alternative diagnostic tool for patients who refuse to undergo gastroscopy; (2) Screening of gastric diseases as a part of physical examination; (3) Screening for gastric cancer; (4) To diagnose various causes of GI inflammation; and (5) To perform follow-up for diseases like gastric varices, gastric ulcer, atrophic gastritis, and polyps after surgical or endoscopic removal[21].\nThere is no similar study in the literature, as we performed a complete upper GI capsule examination, including the stomach and the SB, with the same capsule endoscope during MCCE. Denzer et al[22] published a blinded, prospective trial from two French centers with the IntroMedic manually controlled MCCE. A total of 189 patients were enrolled in this multicenter study. 
Lesions were defined as major (requiring biopsy or removal) or minor. The final gold standard was unblinded conventional gastroscopy with biopsy, under sedation with propofol. Twenty-three major lesions were found in 21 patients, and in this population, the capsule accuracy was 90.5% compared with gastroscopy. Of the remaining 168 patients, 94% had minor and mostly multiple lesions; the capsule accuracy was 88.1%. All patients preferred MCCE over gastroscopy[22].\nOne of the risk factors for incomplete SB capsule endoscopy is a prolonged gastric transit time, which could be considered a limitation of our combined gastric and SB study protocol. In our patient population, the average gastric transit time with magnetic transpyloric, manual control was 26 min. In contrast, in those cases where the magnetic transpyloric control failed, after examining the stomach, we left the capsule to propel through the pylorus by spontaneous peristaltic activity. In these patients, the average gastric transit time was 1 h and 9 min. Of the 18 incomplete SB studies, 10 were caused by low battery energy; this occurred in 3 patients with manual magnetic passage and in 7 patients with spontaneous transpyloric passage[23].\nVisibility and identification of landmarks are important factors to consider for an accurate examination of the stomach using MCCE. We always strove to reach the gastric landmarks and typical stations described in the Methods during combined automatic and manual maneuvering. To improve the learning curve of our gastroenterologists, we began training for the examinations on a plastic stomach model. In our previously published abstract, we described the improvement of the learning curves with manual magnetic controls both in experts and in trainees[1]. In that study, we found significant differences between trainees and experts in the examination time needed for complete inner surface mapping; moreover, automatic protocols were faster than, and as accurate as, experts in achieving complete inner surface mapping.\nMinimizing bubbles and mucoid secretions is a persistent problem in real-life studies. To improve visibility, we established a unique preparation process with a combination of bicarbonate, Pronase B, and simethicone, combined with a patient body rotation technique[2]. Moreover, across our described stations, we rotate our patients from left lateral to supine, then from supine to right lateral, and finally from right lateral to supine position during the MCCE study. During this protocol, the gastric mucoid secretions also move into different parts of the stomach due to gravity, making all the landmarks and the majority of the mucosal abnormalities visible. Application of prokinetics or the motilin agonist erythromycin might also be an option in future studies to improve visibility and reduce gastric lake content[24,25].\nAn inherent limitation of our present study is that we performed gastroscopy only in a few patients with major gastric pathologies to establish the final diagnosis and obtain biopsies; therefore, we could not assess the accuracy of MCCE against gastroscopy in all patients. However, several previous studies have demonstrated excellent diagnostic value and high accuracy. In a recent meta-analysis by Zhang et al[26], four studies with 612 patients were included, in which the results of blinded MCCE and gastroscopy were compared. MCCE demonstrated a pooled sensitivity and specificity of 91% (95%CI: 0.87-0.93) and 90% (95%CI: 0.75-0.96), respectively. The diagnostic accuracy of MCCE was 91% (95%CI: 0.88-0.94) for assessing gastric diseases. ", 
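To give intuition for the pooled performance figures cited above, the sketch below applies a 91% sensitivity and 90% specificity to a hypothetical cohort. The 612-patient size is taken from the meta-analysis, but the 50% disease prevalence is an assumption chosen only for illustration.

```python
# Pooled MCCE performance cited above: sensitivity 91%, specificity 90%.
# Applied to a hypothetical cohort; the 50% prevalence is an assumption
# for illustration only, not a value from the meta-analysis.
sens, spec = 0.91, 0.90
n, prevalence = 612, 0.50

diseased = n * prevalence
healthy  = n - diseased
tp = sens * diseased   # expected true positives
tn = spec * healthy    # expected true negatives
accuracy = (tp + tn) / n

print(f"TP~{tp:.0f}, TN~{tn:.0f}, accuracy~{accuracy:.1%}")  # ~90.5%
```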
"MCCE is an effective and safe diagnostic method for evaluating gastric and SB mucosal lesions when utilized as a single upper GI capsule endoscope in patients referred for SBCE evaluation. MCCE using the novel Ankon capsule endoscopic technique is an appropriate diagnostic method for evaluating the gastric mucosa. It is a promising non-invasive screening tool that may be applied in the future monitoring of patients with a high family risk of gastric cancer to decrease the morbidity and mortality of benign and malignant upper GI disorders." ]
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[ "Bowel diseases", "Capsule endoscopy", "Diagnostic techniques", "Gastrointestinal diseases", "Gastric mucosa", "\nHelicobacter pylori\n" ]
INTRODUCTION: Capsule endoscopy (CE) is a non-invasive, painless, patient-friendly, and safe diagnostic method. It is currently the gold standard examination for small bowel (SB) diseases, including inflammatory bowel disease, suspected polyposis syndromes, unexplained abdominal pain, celiac disease, and obscure gastrointestinal bleeding (OGIB). Small bowel capsule endoscopy (SBCE) provides excellent diagnostic yield, which has been proven in many clinical trials since its introduction in the late 1990s. Currently, it has a fundamental role in gastrointestinal (GI) endoscopy, especially in patients with suspected SB diseases[1]. Attempts have been made to expand the diagnostic role of CE to areas that are traditionally explored with standard endoscopic procedures (gastroscopy and colonoscopy), e.g., the esophagus, stomach, and colon. These aims have been challenged in the past by significant drawbacks due to a lack of optimal preparation, standardized procedural design, and the ability to visualize the entire mucosal surface[2]. A common perception is that the anatomical structures of the stomach and colon are too complex for adequate visualization with a passive capsule endoscope[3-5]. Although the SB is essentially a simple tube, the more complex gastric anatomy, combined with passive capsule movement, may leave areas of the gastric mucosa unexplored when standard CE is utilized. Magnetically controlled capsule endoscopy (MCCE) systems have been developed over the past ten years to precisely control capsule movements inside the GI tract and to achieve better visualization of the entire mucosa, especially in the stomach. Different approaches have been developed and studied over time, which have enabled manual steering or active movement of the capsule systems. The external magnetic field seems to be the most adequate and energetically effective method for steering and holding the capsule within the stomach[6,7]. According to recent, prospective studies that were mainly conducted in China, a methodology developed by Ankon and AnX robotics could be an optimal method for non-invasive screening and early diagnosis of gastric diseases such as gastric cancer and intramucosal high-grade dysplasia. It has the potential to become a valuable and accepted screening method in a population with high gastric cancer risk[8]. The aim of this study was to evaluate the feasibility, safety, and diagnostic yield of the Ankon MCCE system in patients with gastric and SB disorders who were referred to our endoscopy unit for SBCE examination between September 2017 and December 2020. Our secondary aim was to evaluate the overall gastric examinations and SB transit times (TTs), and the efficacy of MCCE in achieving a complete gastric and SBCE investigation by utilizing the same upper GI capsule. MATERIALS AND METHODS: Study design: Our present study prospectively enrolled outpatients who were seen at the Endo-Kapszula Endoscopy Unit, Székesfehérvár, Hungary, between September 2017 and December 2020. A combined investigation of the stomach and SB of patients referred for SBCE was performed with a robotic MCCE system (NaviCam, Ankon Technologies Co, Ltd, Shanghai, China). The primary endpoint of the present study was to investigate the diagnostic yield of MCCE in the evaluation of gastric and SB abnormalities. 
The secondary endpoints were to evaluate the feasibility of performing a complete gastric and SB investigation by utilizing the same diagnostic procedure; to address safety parameters, including adverse and severe adverse events; and to calculate the capsule TTs, such as esophageal, gastric, and SB TTs, and the overall procedure time. Patients: All patients who were referred to our endoscopy unit for SBCE and who agreed to our gastric study protocol were included in the current study. During the study period, we enrolled 284 consecutive, eligible patients, of whom 149 were males (52.5%) and 135 were females (47.5%), with a mean age of 44 years. Detailed patient characteristics are shown in Table 1. The indications of MCCE were iron deficiency anemia, OGIB, suspected/established SB Crohn’s disease, suspected/confirmed celiac disease, suspected SB neoplastic disease, carcinoid syndrome, and SB polyposis syndromes.\nIndications of small bowel capsule endoscopy (grouped by gender)\nBMI: Body mass index; SB: Small bowel.\nThis study was approved by the Ethical Committee of the University of Szeged (Registry No. 5/17.04.26) and registered in the ClinicalTrials.gov trial registry (Identifier: NCT03234725). The present study was conducted according to the provisions of the World Medical Association's Declaration of Helsinki (1995 revision). Patients gave written informed consent to undergo capsule endoscopy and Helicobacter pylori (H. pylori) urea breath tests (UBTs). 
Patient preparation for MCCE: According to the SBCE guidelines, we applied the following patient preparation and study protocol: Patients followed a liquid diet and consumed 2 L of water with two sachets of polyethylene glycol (PEG) the day before the study. First, an H. pylori C13 UBT was performed on the morning of the study, while the patient was in a fasting state. The UBT requires an average of 30 min, during which time the patient should not consume any fluids or food. During that visit, the patient's medical history was recorded, and a physical examination was performed. After the UBT, the patient ingested 2 dL of water with simethicone suspension (80 mg) to reduce mucus in the stomach. Then, to distend the stomach properly, approximately 8-10 dL of clean water was consumed by all patients within 10 min. Water ingestion could be repeated as needed to enhance gastric distension during the examination. After complete visualization of the total gastric mucosal surface, active pyloric propulsion of the capsule was attempted in all patients with the external magnetic field. If it was unsuccessful, 10 mg of intravenous metoclopramide was administered 60 min after the capsule was swallowed. Contraindications of MCCE were the same as those of SBCE and magnetic resonance imaging (MRI) examinations. In our study, absolute contraindications were previous abdominal surgery and/or previous capsule retention; implanted MRI-incompatible electronic devices (e.g., implantable cardioverter-defibrillators and pacemakers) or magnetic, metal foreign bodies; lack of competence or refusal to sign the informed consent form; age under 18 or above 70 years; and pregnancy. A patency capsule test was initially performed in all patients to identify those with relative contraindications, including known or suspected GI obstruction due to SB Crohn’s disease, neoplasia, or SB surgery. Technical methods of MCCE: Our system involves a special static magnet with robotic and manual guidance, a computer workstation with ESNavi software (Ankon Technologies Co, Ltd) (Figure 1), a magnetic capsule endoscope, a capsule locator, and a data recorder. The capsule endoscope measures 26.8 × 11.6 mm, weighs 4.8 g, and has a permanent spherical magnet inside (Figure 2). The operator can adjust the frequency of captured pictures from 0.5 to 6 frames per second (fps). Capsule functioning can be stopped temporarily and restarted by the operator remotely from the workstation. The picture resolution is 480 × 480 pixels, and the field of view is 140°. The illumination can be automatically adjusted by an automatic picture focusing mechanism, which enables the view depth to shift from 0 mm to 60 mm. Depending on the fps, the battery life can be over 10 h, which allows combined gastric and SB capsule investigations with the same capsule.\nRobotic C-arm, investigation table, and computer workstation for magnetically controlled capsule endoscopy.\nMagnetic capsule endoscope (NaviCam).\nThe magnetic robotic C-arm generates an adjustable magnetic field outside of the body with a maximum strength of 0.2 T, which allows precisely controlled movements in three spatial directions. During the procedure, the physician guides the magnetic capsule with two joysticks in any chosen spatial direction or along its axis and can therefore rotate or tilt it. Moreover, the capsule can advance forward 360° by a rotational automatism in the direction of the capsule visual axis, which is helpful for transposition of the capsule along the long axis of the stomach.\nThe capsule locator is a unique device that activates the capsule by infrared light before the patient swallows it. It can also detect the presence of the capsule in cases of suspected capsule retention (Figure 3).\nCapsule locator device.\nThe system is capable of real-time digital transmission of images to the operating system. At the same time, it continuously obtains information from the gyroscope built into the capsule [three-dimensional (3D) motion detector]. It also receives data on the actual spatial location of the camera and can localize the capsule inside the body at any time. The data recorder (Figure 4) receives all motion information and high-quality pictures via wireless transmission from the capsule endoscope. In turn, the connection between the data recorder and the workstation via a USB cable makes the real-time pictures and gyroscope information visible.\nData recorder.\nThe magnetic capsule can be steered along five different axes with the controlling magnet: two rotational and three translational. 
To achieve this, there are two joysticks on the workstation desktop: the left one controls the capsule in the two rotational axes (horizontal and vertical directions), and the right one directs it in the three translational axes (forward/backward, up/down, left/right). Precise magnetic control is achieved by positioning the examining table, modifying the position of the spherical magnet axis in 3D space, and dynamically adjusting the strength and direction of the mutually perpendicular vectorial magnetic fields. The above-mentioned robotic system can also automatically run and position the magnetic field vector with robotically controlled computer-based software (scripts) in default mode, which can achieve standardized mucosal scanning during the MCCE procedure without the direct intervention of a physician.
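As a way of making the five-axis control scheme concrete, the sketch below models the capsule pose and the joystick-to-axis mapping described above. The class and method names are illustrative assumptions and do not come from the ESNavi software.

```python
from dataclasses import dataclass

@dataclass
class CapsulePose:
    # Three translational axes (arbitrary units, patient-relative)...
    x: float = 0.0   # left/right
    y: float = 0.0   # forward/backward
    z: float = 0.0   # up/down
    # ...and two rotational axes (degrees).
    pitch: float = 0.0  # vertical tilt of the camera axis
    yaw: float = 0.0    # horizontal rotation of the camera axis

    def apply_left_joystick(self, d_pitch: float, d_yaw: float) -> None:
        """Left joystick: the two rotational axes."""
        self.pitch = max(-90.0, min(90.0, self.pitch + d_pitch))
        self.yaw = (self.yaw + d_yaw) % 360.0

    def apply_right_joystick(self, dx: float, dy: float, dz: float) -> None:
        """Right joystick: the three translational axes."""
        self.x += dx
        self.y += dy
        self.z += dz

# Example: tilt the camera 45° upward, then pull the capsule toward the
# anterior wall (upward), roughly as in Station II described below.
pose = CapsulePose()
pose.apply_left_joystick(d_pitch=45.0, d_yaw=0.0)
pose.apply_right_joystick(dx=0.0, dy=0.0, dz=0.02)
print(pose)
```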
MCCE study protocol and gastric stations To achieve optimal gastric mucosal visualization and standardization of the MCCE protocol in the stomach, we defined eight different stations with different patient positions to visualize the entire inner gastric surface, as during upper GI endoscopy. Changing the patient position from the left lateral decubitus to the supine and right lateral positions is necessary to combine gravity and magnetic force, which improves capsule maneuvering (Figure 5). The capsule was swallowed by the patients in the left lateral decubitus position (Figure 6). Different magnetically controlled capsule endoscopy stations and capsule camera orientations defined to achieve complete gastric mucosal surface visualization and mapping (created by Zoltán Tóbiás, MD). The capsule swallowed by the patients in the left lateral decubitus position. A: Patients and ball magnet position; B and C: Pictures of the Z-line by CE from our database. Stations at the left lateral decubitus position: Station I. Visualization of the gastric fundus and subcardial region with the cardia (posterior J-type retroflexion). After entering the stomach, the capsule was lowered onto the large curvature at the body of the stomach. The magnetic ball was held high, at the level of the patient's right shoulder. The capsule camera was maintained in an obliquely upward orientation of 45° and then horizontally rotated to observe the gastric fundus and the cardia (Figure 7A-C). Stations I-III at the left lateral decubitus position. A: Patients’ position (Station I); B: Cardia by a gastroscope (Station I); C: Cardia by CE (Station I); D: Patients’ position (Station II); E: Cardia by a gastroscope (Station II); F: Cardia by CE (Station II); G: Patients’ position (Station III); H: Corpus by a gastroscope (Station III); I: Corpus by a capsule (Station III). Station II. Moving closer to the cardia and the fundus (anterior J-type retroflexion). Using the right joystick, the ball magnet was lowered and fitted closely to the patient's right arm. Due to the proximity of the magnetic field, the capsule rose to the anterior wall and small curvature of the stomach. The capsule was then rotated with the camera oriented vertically upward to observe the cardia up close, with the fundus at a distance (Figure 7D-F). Station III. Visualization of the gastric body from the fundus. With the ball magnet in the same position, using the left joystick, the capsule camera was rotated and tilted downward, enabling visualization of the proximal body of the stomach (corpus) and the gastric folds at the large curvature in a longitudinal view (Figure 7G-I). Stations in the supine position: Station IV.
Visualization of the angular incisure. Patients were asked to lie on their backs in the supine position. The capsule moved to the distal body, into the angular region, due to the change in gravity and stepwise magnetic movements. The magnetic ball was moved to the left upper abdomen (hypochondrium) and then lowered close to the patient's abdomen. In this case, the capsule was raised to the anterior wall of the stomach; therefore, the small curvature and the angular incisure could be examined as well (Figure 8A-C). Stations IV and V in the supine position. A: Patients’ position (Station IV); B: Angular incisure by a gastroscope (Station IV); C: Angular incisure by CE (Station IV); D: Patients’ position (Station V); E: Angular incisure by a gastroscope (Station V); F: Angular incisure by CE (Station V). Station V. Visualization of the larger curvature of the distal body from the angular incisure (U-type retroversion). The magnetic ball was moved to the epigastric area, close to the abdomen. The capsule camera was then oriented upward to observe the anterior wall of the gastric body. In this position, the capsule can be turned and rotated (e.g., towards the cardia), enabling visualization of the distal body of the stomach longitudinally, as in the endoscopic view of U-type retroversion (Figure 8D-F). Station at the right lateral decubitus patient position: Station VI. Visualization of the antral canal. In the next phase, patients were asked to turn to the right lateral decubitus position. Due to gravity, the capsule sank and moved spontaneously into the antral region. The ball magnet was then positioned over the left kidney. The capsule was brought closer to the large curvature with the camera oriented obliquely downward at 45°, which enabled observation of the antrum. The antral canal could then be examined with the pylorus, and the angular incisure became visible from the antrum (Figure 9A-C). Station at the right lateral decubitus patient position. A: Patients’ position (Station VI); B: Antral canal of the stomach by gastroscope (Station VI); C: Antral canal of the stomach by CE (Station VI). Stations in the supine position: Station VII. Prepyloric view and visualization of the pylorus. After that, each patient was asked to lie on his or her back in the supine position. The ball magnet was positioned close to the body on the right upper abdomen (hypochondrium). The capsule camera was oriented horizontally and laterally toward the pylorus for closer observation. The magnet position ensured that the capsule remained in the antrum. Using both the right and left joysticks, we moved the capsule closer to the pylorus (Figure 10A-C). Stations VII and VIII in the supine position. A: Patient’s position (Station VII); B: Antral canal of the stomach by gastroscope (Station VII); C: Antral canal of the stomach by CE (Station VII); D: Patient’s position (Station VIII); E: Pyloric ring (Station VIII); F: Ampulla of Vater (Station VIII); G: Pylorus from the duodenal bulb with a capsule (Station VIII). Station VIII. Magnetically controlled transpyloric passage of the capsule. The magnetic ball was placed at the patient's right side at the level of the duodenal bulb. The capsule was then rotated until the camera faced the pylorus. The capsule was dragged close to the pylorus under the guidance of the magnet robot. When the pylorus opened, peristalsis carried the capsule into the duodenum.
After transpyloric passage, we first depicted the duodenal bulb; then, from the descending and inferior horizontal parts of the duodenum, we visualized the ampulla of Vater by tilting the capsule camera upward to facilitate the retrograde view (Figure 10D-G).
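The eight-station protocol above is essentially a lookup table of patient position, magnet placement, and camera orientation per anatomical target, and the sketch below encodes it that way. The field values are summarized from the station descriptions; the table format itself is an illustrative assumption, not the system's internal representation.

```python
from collections import namedtuple

Station = namedtuple("Station", "patient_position magnet_position camera target")

# Eight-station gastric mapping protocol, summarized from the text above.
GASTRIC_STATIONS = {
    "I":    Station("left lateral decubitus", "high, at right shoulder",
                    "45° obliquely upward, rotated horizontally",
                    "fundus, subcardial region, cardia"),
    "II":   Station("left lateral decubitus", "lowered, close to right arm",
                    "vertically upward", "cardia close up, fundus at distance"),
    "III":  Station("left lateral decubitus", "as in Station II",
                    "rotated and tilted downward", "proximal corpus, gastric folds"),
    "IV":   Station("supine", "left hypochondrium, lowered",
                    "toward small curvature", "angular incisure"),
    "V":    Station("supine", "epigastrium, close to abdomen",
                    "upward", "distal corpus, large curvature (U-type retroversion)"),
    "VI":   Station("right lateral decubitus", "over left kidney",
                    "45° obliquely downward", "antrum, pylorus, angular incisure"),
    "VII":  Station("supine", "right hypochondrium",
                    "horizontal, lateral toward pylorus", "prepyloric region, pylorus"),
    "VIII": Station("supine", "right side, level of duodenal bulb",
                    "facing the pylorus", "transpyloric passage, duodenum"),
}

for name, st in GASTRIC_STATIONS.items():
    print(f"Station {name}: {st.target} ({st.patient_position})")
```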
Examining the small intestine If the capsule entered the small intestine, patients were asked to drink 1 L of PEG solution and, from then on, were requested to drink clear fluids (i.e., water). The movement of the capsule in the small intestine was then monitored hourly in real-time visualization mode. The examination ended when the capsule arrived at the colon or stopped functioning due to low battery.
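A minimal sketch of this hourly monitoring logic is given below. The two check functions are hypothetical placeholders (the real system streams images to the data recorder for a physician to review); the loop simply illustrates the two stopping conditions named in the text.

```python
import time

def capsule_in_colon() -> bool:
    # Hypothetical placeholder: the real check is a physician inspecting
    # the real-time image feed for colonic landmarks.
    return False

def battery_depleted() -> bool:
    # Hypothetical placeholder: the real check reads the capsule's
    # energy level transmitted to the data recorder.
    return False

def monitor_small_bowel(check_interval_h: float = 1.0) -> str:
    """Poll the real-time feed hourly until one stop condition holds."""
    while True:
        if capsule_in_colon():
            return "complete: capsule reached the colon"
        if battery_depleted():
            return "incomplete: battery depleted"
        time.sleep(check_interval_h * 3600)
```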
Study design: Our present study prospectively enrolled outpatients seen at the Endo-Kapszula Endoscopy Unit, Székesfehérvár, Hungary, between September 2017 and December 2020. A combined investigation of the stomach and SB of patients referred for SBCE was performed with a robotic MCCE system (NaviCam, Ankon Technologies Co, Ltd, Shanghai, China). The primary endpoint of the present study was to investigate the diagnostic yield of MCCE in the evaluation of gastric and SB abnormalities. The secondary endpoints were to evaluate the feasibility of performing a complete gastric and SB investigation with the same diagnostic procedure; to address safety parameters, including adverse and severe adverse events; and to calculate the capsule TTs (esophageal, gastric, and SB TTs) and the overall procedure time. Patients: All patients who were referred to our endoscopy unit for SBCE and who agreed to our gastric study protocol were included in the current study. During the study period, we enrolled 284 consecutive eligible patients, 149 males (52.5%) and 135 females (47.5%), with a mean age of 44 years. Detailed patient characteristics are shown in Table 1. The indications for MCCE were iron deficiency anemia, OGIB, suspected/established SB Crohn’s disease, suspected/confirmed celiac disease, suspected SB neoplastic disease, carcinoid syndrome, and SB polyposis syndromes. Indications of small bowel capsule endoscopy (grouped by gender). BMI: Body mass index; SB: Small bowel. This study was approved by the Ethical Committee of the University of Szeged (Registry No. 5/17.04.26) and registered in the ClinicalTrials.gov trial registry (Identifier: NCT03234725). The present study was conducted according to the 1995 provisions of the World Medical Association's Declaration of Helsinki. Patients agreed to undergo capsule endoscopy and Helicobacter pylori (H. pylori) urea breath tests (UBTs) by written informed consent.
RESULTS: The UBT revealed H. pylori positivity in 32.7% of cases (Table 2). No significant association was found between H. pylori status and the type (proximal or distal), distribution (diffuse or focal), or severity (minimal or active, erosive) of the gastritis described in the MCCE results. The results of Helicobacter pylori 13C urea breath tests. H. pylori: Helicobacter pylori. The mean gastric, SB, and colon TTs with MCCE were 47 min 40 s (M/F: 44 min 15 s/51 min 14 s), 3 h 46 min 22 s (3 h 52 min 44 s/3 h 38 min 21 s), and 1 h 4 min 34 s (1 h 1 min 16 s/1 h 8 min 53 s), respectively. The average total time of the MCCE examination was 5 h 48 min 35 s (5 h 46 min 37 s/5 h 50 min 18 s) (Table 3). Mean gastric, small bowel, and overall transit times of magnetically controlled capsule endoscopy. The diagnostic yield for detecting any abnormality in the stomach and SB with MCCE was 81.9%: 68.6% for minor pathologies and 13.3% for major pathologies. Of the abnormalities, 25.8% were found in the SB and 74.2% in the stomach. The diagnostic yield for the stomach/SB was 4.9%/8.4% for major pathologies and 55.9%/12.7% for minor pathologies (Table 4). In the stomach, ulcers and polyps were considered major pathologies, while signs of gastritis were minor. In the SB, signs of Crohn’s disease and celiac disease were the major pathologies, while non-specific inflammation, diverticula, and angiodysplasia were the minor pathologies. The distribution of pathologies found on MCCE examinations is depicted in Table 5. Diagnostic yield of magnetically controlled capsule endoscopy. SB: Small bowel. Distribution of pathologies detected by magnetically controlled capsule endoscopy. The capsule's active magnetic movement through the pylorus was successful in 41.9% of all patients (automated protocol in 56 patients and active manually controlled magnetic steering in 63 patients) (Table 6). Distribution of different types of transpyloric transit in complete and incomplete small bowel studies. SB: Small bowel. In 18 patients (M/F: 6/12; 6.3%), SB visualization with MCCE was incomplete. Thirteen examinations were incomplete as a result of battery depletion; in 3 of these 13 cases, the capsule was depleted within 5 h of operation, indicating a manufacturing flaw. In the remaining 10 cases, the average total study time was 9 h 12 min 9 s, and the average transit time from the pylorus to the last image was 8 h 26 min 4 s. In 3 cases, the examination was discontinued sooner than planned at the patient's request. In 2 cases, there was capsule retention due to Crohn's-like ulceration accompanied by a narrow bowel lumen (Table 6); both resolved spontaneously with medical treatment and without surgery. Apart from these, there were no serious adverse events during the study period.
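The yield percentages above decompose additively, and the short check below reproduces that arithmetic (overall yield = minor + major, with the per-organ yields summing to the same totals). The numbers are taken from the results; the script itself is only an illustrative consistency check, not part of the study analysis.

```python
# Diagnostic yield figures reported above (percent of patients).
yield_pct = {
    "stomach": {"minor": 55.9, "major": 4.9},
    "sb":      {"minor": 12.7, "major": 8.4},
}

minor_total = sum(organ["minor"] for organ in yield_pct.values())   # 68.6
major_total = sum(organ["major"] for organ in yield_pct.values())   # 13.3
overall = minor_total + major_total                                 # 81.9

print(f"minor: {minor_total:.1f}%, major: {major_total:.1f}%, overall: {overall:.1f}%")

# Transpyloric success: 56 (automated) + 63 (manual) of 284 patients.
print(f"transpyloric success: {(56 + 63) / 284:.1%}")  # ~41.9%
```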
DISCUSSION: There are two main directions in developing capsule endoscopes with active movement: internally and externally guided techniques. Internal guidance means that the locomotive system is integrated into the capsule, but such systems require longer-lasting batteries for full functionality. Their main advantage is more precise navigation with the help of the internalized locomotive system. External guidance refers to outwardly conducted capsule movements, e.g., with a magnetic control unit or a device/moving arm generating the magnetic field. The exceptional advantage of this method is that the magnetic field (and its adjustment) built by the external unit controls the locomotive mechanism, making it an energy-saving solution. However, a disadvantage of this technique is that it is limited to more passive, less flexible locomotion. The examining physician controls the capsule from outside the body with an external control unit, which decreases the accuracy of the spatial and temporal movements. Locomotion of the capsule endoscope would bring numerous long-term advantages in the diagnostics and therapy of GI diseases (focused biopsies and treatment options, e.g., laser tumor ablation, locally targeted release of certain medications, and endoscopic ultrasound applications)[1]. Magnetically assisted capsule endoscopy (MACE) systems have been developed over the past 5-10 years, primarily for research purposes. Experience drawn from previously conducted in vitro studies showed that the locomotion and precision of the capsule travelling in the human body are significantly influenced by the physical characteristics of the magnetic field generated by the external magnetic unit[1]. In 2011, Olympus was the first to introduce a modified MRI machine prototype that moved a MACE system, allowing the operator to guide the capsule in a chosen spatial direction inside the stomach after the patient drank water. However, this diagnostic procedure did not spread worldwide or gain medical acceptance[6]. In 2014, IntroMedic developed a navigation (NAVI) magnetic capsule system that could be moved externally with a small, hammer-shaped, static magnetic device. The technology, involving a magnet that assisted the physician in manually moving the capsule endoscope, appeared viable and valuable in many experimental settings. However, it did not achieve any breakthroughs, as it allowed only sudden and harsh position changes of the capsule. In clinical trials studying the application of the NAVI capsule system in a small cohort, 65%-86% of the mucosal layer of the stomach was visualized accurately using external magnetically controlled locomotion after the ingestion of water. Its diagnostic accuracy was similar to that of traditional gastroscopy[9]. Using a similar type of NAVI capsule system, CE was performed in the large intestine, where magnetic locomotion directed the capsule from the cecum to the sigmoid colon; at the same time, a colonoscope was inserted to monitor capsule movements and provide dilatation[10]. In iron deficiency anemia, after routine colonoscopy in 49 patients, the performance of a MACE examination with the IntroMedic NAVI system was evaluated in providing a diagnostic examination of the stomach and small intestine in one sitting. According to the results, MACE detected more pathologies than gastroscopy alone, and no significant difference in diagnostic sensitivity was found between the methods when examining the upper GI tract[11].
Precise locomotion of the magnetic capsule inside the GI tract by manual control is not possible due to the variable density of tissues and the variable distance of the capsule (e.g., in the stomach) from the external magnet. Moreover, the exact spatial location of the capsule, its relation to the surrounding organs, and its ante-/retrograde orientation cannot be judged accurately. Therefore, a control mechanism is needed alongside the magnetic capsule, based on external robotics, that enables the physician's input to be executed (i.e., joystick: forward, backward, upward, downward, and sideways). By pre-programming instruction sequences (scripts) in the computing control unit to examine the stomach from the fundus to the antrum, we created a reproducible examination of the complete inner mucosal lining of the stomach, which lowered the variability among investigations. If the examining physician notices significant pathology, he or she can intervene, switch the examination to manual control, and revisit the alteration; doing so increases the number of images taken of the lesion and optimizes the diagnostic accuracy of the study. NaviCam was the first magnetically controlled capsule system to enable bidirectional data transmission and robotic control. The capsule functions at 1-6 fps, and its spatial orientation, together with continuous monitoring of energy levels, is transmitted via the recording vest to the data control and screen of the computing unit. At the console, we can control the precise locomotion of the capsule, the image-capturing speed, and the brightness, as well as turn the capsule on or off. We combined manual and automatic robotic capsule control to optimize the gastric procedure. As we previously published in an in vitro setting, complete (100%) visualization of the inner surface of a plastic stomach model could be achieved in all cases with the medium and large stomach autoscan program modules and with the freehand controlling method. With the small and medium stomach mode, we could observe only 97.5% of the inner surface because of incomplete visualization of the prepyloric region. With the freehand method, we needed nearly twice as much time on average (749 s) to complete the examination, compared with robotic maneuvering using the autoscan program (390 s). In everyday practice, in each patient position, the stations and capsule positioning described above were performed after running the three different autoscan programs[12,13]. CE is a potentially patient-friendly and non-contact/infection-free diagnostic modality for screening for gastric diseases, which may result in complete and excellent examinations of the stomach. During the coronavirus disease 2019 pandemic, direct contact with patients in practice is a potential risk factor for infection; non-contact medicine, such as remote control of CE, is a possibility for further examinations[14]. The prevalence of focal gastric lesions (polyps, gastric ulcers, submucosal tumors, etc.) increases with age, and they may be detected with MCCE as effectively as with gastroscopy[8]. The sensitivity is between 90% and 92%, and the specificity is between 94% and 96.7%[15]. Several studies have proven that, altogether, reliable preparation (gastric cleanliness), careful examination, and MCCE are pivotal for adequate gastric visualization and, therefore, for the detection of superficial gastric neoplasms[16].
In a large multicenter gastric cancer screening trial conducted in China, seven asymptomatic patients out of 3182 were diagnosed with advanced cancer by MCCE. Cancer prevalence was highest in the gastric body (3 cases), followed by 2 cases in the cardia and 2 in the antrum, while one case each was detected in the region of the angulus, in the fundus, and in the esophagus. All were confirmed pathologically to be adenocarcinoma. These results indicate that screening with MCCE may be useful as a gastric cancer screening modality in populations with a high family risk or in people aged over 50 years[8]. Magnetic steering may also impact TTs, as delayed gastric transit of the CE (especially in patients with gastroparesis) may lead to an incomplete SB exam. This method may increase the papilla detection rate as well. In addition to magnetic steering, there are several other independent factors, such as male gender and higher BMI, that increase gastric TTs and decrease SB TTs, thus impacting the time required for CE completion[10]. Although delayed gastric TT may cause incomplete SBCE, we should avoid esophagogastroduodenoscopy to resolve temporary gastric capsule retention and instead use MCCE[17]. The visual diagnosis of the presence of H. pylori with standard white-light endoscopy (WLI) has relatively low accuracy, especially in populations with a low pretest probability. Moreover, WLI endoscopy also correlates poorly with the histopathological findings of H. pylori-induced gastritis. Recently, low-quality retrospective studies proposed the theory that, with the application of a special electronic chromoendoscopy technique, linked color imaging, a diffuse reddish appearance of the mucosa in the gastric body and fundic glands correlates with the presence of H. pylori[18]. In our study, we found no correlation between H. pylori status and the activity or type of gastritis observed on MCCE; we did not include these data in Table 2, since they would be more relevant to another publication focusing on H. pylori status and gastritis on MCCE. MCCE is an excellent method for visualizing the entire gastric mucosal surface, with up to 100% completeness in healthy volunteers[5]. In a previous study of 75 consecutive patients, we demonstrated that, with our combined method of automatic and manual control, complete visualization of the cardia, fundus, corpus, angulus, antrum, and duodenal bulb was achieved in 95.4%, 100%, 100%, 100%, 100%, and 100% of cases, respectively. The ampulla of Vater was detected and observed in 20 patients (26%). Moreover, visualization of the distal esophagus and the Z-line was possible in 67/75 patients (89%), which was assessed as complete in 23 and partially complete in 44 patients. The mean stationary time for MCCE in the distal esophagus was 1 min and 32 s[19]. The use of MCCE needs to be considered separately in western and eastern countries, as there are two different scenarios depending on the prevalence of upper GI diseases and gastric cancer in each country. Nowadays, MCCE is mostly used for the detection of malignant and premalignant gastric lesions, which are more prevalent in eastern countries. However, the present technology opens the door to further new technologies; subsequent developments would allow MCCE technology to be extended to other regions of the GI tract, e.g., the esophagus, which may facilitate the detection of further relevant lesions, such as Barrett metaplasia, in western countries as well.
With second-generation MCCE, visualization of the esophagus, Z-line, and duodenal papilla would be improved, according to a study by Jiang et al[20]. Furthermore, the gastric TT could be shortened by one-third, approaching the approximately 10 min of a conventional endoscopic examination. It is known that the current gold standard, gastroscopy, has some clear disadvantages: it is commonly uncomfortable for patients and is therefore mostly performed under sedation, which carries definite procedure-related risks. MCCE, as a patient-friendly, non-invasive test, might be an alternative for patients who refuse to undergo gastroscopy and might increase patients’ adherence to screening. Furthermore, MCCE of the stomach was recently approved by the Chinese Food and Drug Administration for the following diagnostic indications: (1) As an alternative diagnostic tool for patients who refuse to undergo gastroscopy; (2) Screening for gastric diseases as part of a physical examination; (3) Screening for gastric cancer; (4) Diagnosing various causes of GI inflammation; and (5) Follow-up of diseases such as gastric varices, gastric ulcer, atrophic gastritis, and polyps after surgical or endoscopic removal[21]. There is no similar study in the literature, as we performed a complete upper GI capsule examination, including the stomach and the SB, with the same capsule endoscope during MCCE. Denzer et al[22] published a blinded, prospective trial from two French centers with the IntroMedic manually controlled MCCE. A total of 189 patients were enrolled in this multicenter study. Lesions were defined as major (requiring biopsy or removal) or minor. The final gold standard was unblinded conventional gastroscopy with biopsy under propofol sedation. Twenty-three major lesions were found in 21 patients, and in this population the capsule accuracy was 90.5% compared with gastroscopy. Of the remaining 168 patients, 94% had minor and mostly multiple lesions; the capsule accuracy was 88.1%. All patients preferred MCCE over gastroscopy[22]. One of the risk factors for incomplete SB capsule endoscopy is a prolonged gastric transit time, which could be considered a limitation of our combined gastric and SB study protocol. In our patient population, the average gastric transit time with manual magnetic transpyloric control was 26 min. In contrast, in those cases where magnetic transpyloric control failed, after examining the stomach we left the capsule to propel through the pylorus by spontaneous peristaltic activity; in this patient group, the average gastric transit time was 1 h and 9 min. In 10 of the 18 incomplete SB studies, the cause was low battery energy; this occurred in 3 patients with manual magnetic passage and in 7 patients with spontaneous transpyloric passage[23]. Visibility and identification of landmarks are important factors in the accurate examination of the stomach using MCCE. The gastric landmarks and typical stations described in the Methods were always pursued during combined automatic and manual maneuvering. To improve the learning curve of our gastroenterologists, we began training for the examinations on a plastic stomach model. In our previously published abstract, we described the improvement of the learning curves with manual magnetic control both in experts and in trainees[1].
In this study, we found significant differences between trainees and experts in the examination time required for complete inner-surface mapping; moreover, the automatic protocols were faster than, and as accurate as, the experts in achieving complete inner-surface mapping. Minimizing bubbles and mucoid secretions remains a problem in real-life studies. To improve visibility, we established a unique preparation process combining bicarbonate, Pronase B, and simethicone with a patient body-rotation technique[2]. Moreover, across the described stations, we rotated our patients from the left lateral to the supine position, then from supine to right lateral, and finally from right lateral back to supine during the MCCE study. During this protocol, the gastric mucoid secretions also move into different parts of the stomach due to gravity, making all the landmarks and the majority of mucosal abnormalities visible. The application of prokinetics or the motilin agonist erythromycin might also be an option in future studies to improve visibility and reduce the gastric lake content[24,25]. An inherent limitation of our present study is that we performed gastroscopy only in the few patients with major gastric pathologies, to establish the final diagnosis and obtain biopsies; therefore, we could not assess the accuracy of MCCE against gastroscopy in all patients. However, several previous studies have demonstrated excellent diagnostic value and high accuracy. In a recent meta-analysis by Zhang et al[26], four studies with 612 patients were included, in which the results of blinded MCCE and gastroscopy were compared. MCCE demonstrated a pooled sensitivity and specificity of 91% (95%CI: 0.87-0.93) and 90% (95%CI: 0.75-0.96), respectively. The diagnostic accuracy of MCCE was 91% (95%CI: 0.88-0.94) for assessing gastric diseases. CONCLUSION: MCCE is an effective and safe diagnostic method for evaluating gastric and SB mucosal lesions when utilized as a single upper GI capsule endoscope in patients referred for SBCE evaluation. MCCE with the novel Ankon capsule endoscopic technique is a proper diagnostic method for evaluating the gastric mucosa. It is a promising non-invasive screening tool that may be applied in the future monitoring of patients with a high family risk of gastric cancer to decrease the morbidity and mortality of benign and malignant upper GI disorders.
Background: While capsule endoscopy (CE) is the gold standard diagnostic method of detecting small bowel (SB) diseases and disorders, a novel magnetically controlled capsule endoscopy (MCCE) system provides non-invasive evaluation of the gastric mucosal surface, which can be performed without sedation or discomfort. During standard SBCE, passive movement of the CE may cause areas of the complex anatomy of the gastric mucosa to remain unexplored, whereas the precision of MCCE capsule movements inside the stomach promises better visualization of the entire mucosa. Methods: Of outpatients who were referred for SBCE, 284 (male/female: 149/135) were prospectively enrolled and evaluated by MCCE. The stomach was examined in the supine, left, and right lateral decubitus positions without sedation. Next, all patients underwent a complete SBCE study protocol. The gastric mucosa was explored with the Ankon MCCE system with active magnetic control of the capsule endoscope in the stomach, applying three standardized pre-programmed computerized algorithms in combination with manual control of the magnetic movements. Results: The urea breath test revealed Helicobacter pylori positivity in 32.7% of patients. The mean gastric and SB transit times with MCCE were 0 h 47 min 40 s and 3 h 46 min 22 s, respectively. The average total time of upper gastrointestinal MCCE examination was 5 h 48 min 35 s. Active magnetic movement of the Ankon capsule through the pylorus was successful in 41.9% of patients. Overall diagnostic yield for detecting abnormalities in the stomach and SB was 81.9% (68.6% minor; 13.3% major pathologies); 25.8% of abnormalities were in the SB; 74.2% were in the stomach. The diagnostic yield for stomach/SB was 55.9%/12.7% for minor and 4.9%/8.4% for major pathologies. Conclusions: MCCE is a feasible, safe diagnostic method for evaluating gastric mucosal lesions and is a promising non-invasive screening tool to decrease morbidity and mortality in upper gastro-intestinal diseases.
INTRODUCTION: Capsule endoscopy (CE) is a non-invasive, painless, patient-friendly, and safe diagnostic method. It is currently the gold standard examination for small bowel (SB) diseases, including inflammatory bowel disease, suspected polyposis syndromes, unexplained abdominal pain, celiac disease, and obscure gastrointestinal bleeding (OGIB). Small bowel capsule endoscopy (SBCE) provides excellent diagnostic yield, which has been proven in many clinical trials since its introduction in the late 1990s. Currently, it has a fundamental role in gastrointestinal (GI) endoscopy, especially in patients with suspected SB diseases[1]. Attempts have been made to expand the diagnostic role of CE to areas that are traditionally explored with standard endoscopic procedures (gastroscopy and colonoscopy), e.g., the esophagus, stomach, and colon. These aims have been challenged by significant drawbacks in the past due to a lack of optimal preparation, standardized procedural design, and the ability to visualize the entire mucosal surface[2]. A common perception is that the anatomical structures of the stomach and colon are too complex for adequate visualization with a passive capsule endoscope[3-5]. Although the SB is essentially a simple tube, the more complex gastric anatomy may leave areas of the gastric mucosa unexplored by the passive movements of standard CE. Magnetically controlled capsule endoscopy (MCCE) systems were developed over the past ten years to precisely control capsule movements inside the GI tract and to achieve better visualization of the entire mucosa, especially in the stomach. Different approaches have been developed and studied over time, which have enabled manual steering or active movement of the capsule systems. An external magnetic field seems to be the most adequate and energetically effective method for steering and holding the capsule within the stomach[6,7]. According to recent prospective studies, mainly conducted in China, the methodology developed by Ankon and AnX Robotics could be an optimal method for the non-invasive screening and early diagnosis of gastric diseases such as gastric cancer and intramucosal high-grade dysplasia. It has the potential to become a valuable and accepted screening method in populations with a high gastric cancer risk[8]. The aim of this study was to evaluate the feasibility, safety, and diagnostic yield of the Ankon MCCE system in patients with gastric and SB disorders who were referred to our endoscopy unit for SBCE examination between September 2017 and December 2020. Our secondary aim was to evaluate the overall gastric examinations and SB transit times (TTs) and the efficacy of MCCE in achieving a complete gastric and SBCE investigation by utilizing the same upper GI capsule. CONCLUSION: Acknowledgements: Special thanks to Zoltán Tóbiás, MD, for the graphical work and visual explanations of the different gastric stations, and to Kata Tőgl, our nurse manager, for assistance and for modelling the patient positioning during magnetically controlled capsule endoscopy.
12,225
371
[ 479, 145, 212, 344, 648, 1306, 77, 584, 2785, 88 ]
11
[ "capsule", "station", "patients", "gastric", "position", "magnetic", "stomach", "patient", "mcce", "sb" ]
[ "capsule endoscopy sbce", "capsule endoscopy ce", "capsule endoscopy diagnostic", "gastrointestinal gi endoscopy", "bowel capsule endoscopy" ]
null
[CONTENT] Bowel diseases | Capsule endoscopy | Diagnostic techniques | Gastrointestinal diseases | Gastric mucosa | Helicobacter pylori [SUMMARY]
[CONTENT] Bowel diseases | Capsule endoscopy | Diagnostic techniques | Gastrointestinal diseases | Gastric mucosa | Helicobacter pylori [SUMMARY]
null
[CONTENT] Bowel diseases | Capsule endoscopy | Diagnostic techniques | Gastrointestinal diseases | Gastric mucosa | Helicobacter pylori [SUMMARY]
[CONTENT] Bowel diseases | Capsule endoscopy | Diagnostic techniques | Gastrointestinal diseases | Gastric mucosa | Helicobacter pylori [SUMMARY]
[CONTENT] Bowel diseases | Capsule endoscopy | Diagnostic techniques | Gastrointestinal diseases | Gastric mucosa | Helicobacter pylori [SUMMARY]
[CONTENT] Capsule Endoscopes | Capsule Endoscopy | Feasibility Studies | Female | Gastric Mucosa | Humans | Intestinal Diseases | Male | Prospective Studies [SUMMARY]
[CONTENT] Capsule Endoscopes | Capsule Endoscopy | Feasibility Studies | Female | Gastric Mucosa | Humans | Intestinal Diseases | Male | Prospective Studies [SUMMARY]
null
[CONTENT] Capsule Endoscopes | Capsule Endoscopy | Feasibility Studies | Female | Gastric Mucosa | Humans | Intestinal Diseases | Male | Prospective Studies [SUMMARY]
[CONTENT] Capsule Endoscopes | Capsule Endoscopy | Feasibility Studies | Female | Gastric Mucosa | Humans | Intestinal Diseases | Male | Prospective Studies [SUMMARY]
[CONTENT] Capsule Endoscopes | Capsule Endoscopy | Feasibility Studies | Female | Gastric Mucosa | Humans | Intestinal Diseases | Male | Prospective Studies [SUMMARY]
[CONTENT] capsule endoscopy sbce | capsule endoscopy ce | capsule endoscopy diagnostic | gastrointestinal gi endoscopy | bowel capsule endoscopy [SUMMARY]
[CONTENT] capsule endoscopy sbce | capsule endoscopy ce | capsule endoscopy diagnostic | gastrointestinal gi endoscopy | bowel capsule endoscopy [SUMMARY]
null
[CONTENT] capsule endoscopy sbce | capsule endoscopy ce | capsule endoscopy diagnostic | gastrointestinal gi endoscopy | bowel capsule endoscopy [SUMMARY]
[CONTENT] capsule endoscopy sbce | capsule endoscopy ce | capsule endoscopy diagnostic | gastrointestinal gi endoscopy | bowel capsule endoscopy [SUMMARY]
[CONTENT] capsule endoscopy sbce | capsule endoscopy ce | capsule endoscopy diagnostic | gastrointestinal gi endoscopy | bowel capsule endoscopy [SUMMARY]
[CONTENT] capsule | station | patients | gastric | position | magnetic | stomach | patient | mcce | sb [SUMMARY]
[CONTENT] capsule | station | patients | gastric | position | magnetic | stomach | patient | mcce | sb [SUMMARY]
null
[CONTENT] capsule | station | patients | gastric | position | magnetic | stomach | patient | mcce | sb [SUMMARY]
[CONTENT] capsule | station | patients | gastric | position | magnetic | stomach | patient | mcce | sb [SUMMARY]
[CONTENT] capsule | station | patients | gastric | position | magnetic | stomach | patient | mcce | sb [SUMMARY]
[CONTENT] gastric | method | capsule | diseases | standard | developed | endoscopy | diagnostic | sb | bowel [SUMMARY]
[CONTENT] station | capsule | position | figure | position station | magnetic | patients | patient | right | ball [SUMMARY]
null
[CONTENT] evaluating | evaluating gastric | method evaluating gastric | method evaluating | diagnostic method evaluating gastric | diagnostic method evaluating | diagnostic method | method | upper | upper gi [SUMMARY]
[CONTENT] capsule | station | gastric | patients | position | sb | magnetic | study | stomach | mcce [SUMMARY]
[CONTENT] capsule | station | gastric | patients | position | sb | magnetic | study | stomach | mcce [SUMMARY]
[CONTENT] CE | SB | MCCE ||| CE | MCCE [SUMMARY]
[CONTENT] SBCE | 284 | 149/135 | MCCE ||| ||| ||| three [SUMMARY]
null
[CONTENT] MCCE [SUMMARY]
[CONTENT] CE | SB | MCCE ||| CE | MCCE ||| SBCE | 284 | 149/135 | MCCE ||| ||| ||| three ||| ||| 32.7% ||| SB | MCCE | 0 | 47 | 40 s and | 3 | 46 | 22 ||| 5 | 48 | 35 | s. Active | Ankon | 41.9% ||| SB | 81.9% | 68.6% | 13.3% | 25.8% | SB | 74.2% ||| 55.9%/12.7% | 4.9%/8.4% ||| [SUMMARY]
[CONTENT] CE | SB | MCCE ||| CE | MCCE ||| SBCE | 284 | 149/135 | MCCE ||| ||| ||| three ||| ||| 32.7% ||| SB | MCCE | 0 | 47 | 40 s and | 3 | 46 | 22 ||| 5 | 48 | 35 | s. Active | Ankon | 41.9% ||| SB | 81.9% | 68.6% | 13.3% | 25.8% | SB | 74.2% ||| 55.9%/12.7% | 4.9%/8.4% ||| [SUMMARY]
Prevalence and Correlates of Self-Reported Cardiovascular Diseases Among a Nationally Representative Population-Based Sample of Adults in Ecuador in 2018.
33976550
This study aimed to determine the prevalence and correlates of self-reported cardiovascular diseases (SRCVDs) among adults in Ecuador.
BACKGROUND
National cross-sectional survey data of 4638 persons aged 18-69 years in Ecuador were analysed. Research data were collected with an interview-administered questionnaire and with physical and biochemical measurements.
METHODS
The prevalence of SRCVDs was 8.7%, 8.5% among men and 8.9% among women. In adjusted logistic regression analysis, being Montubio (adjusted odds ratio-AOR: 1.66, 95% confidence interval-CI: 1.10-2.50), family alcohol problems (AOR: 1.78, 95% CI: 1.19-2.65), past smoking tobacco (AOR: 1.36, 95% CI: 1.02-1.81), and poor oral health status (AOR: 1.74, 95% CI: 1.19-2.54) were associated with SRCVD. In addition, in unadjusted analysis, older age, alcohol dependence, obesity, and having hypertension were associated with SRCVD.
RESULTS
Almost one in ten persons aged 18-69 years had SRCVD in Ecuador. Several associated factors, including Montubio by ethnicity, family alcohol problems, past smoking, and poor oral health status, were identified, which can be targeted in public health interventions.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Alcohol Drinking", "Cardiovascular Diseases", "Cross-Sectional Studies", "Ecuador", "Female", "Health Surveys", "Humans", "Life Style", "Male", "Middle Aged", "Oral Health", "Prevalence", "Risk Assessment", "Risk Factors", "Self Report", "Smoking", "Time Factors", "Young Adult" ]
8106477
Introduction
Globally, an estimated 17.9 million people died from cardiovascular diseases (CVDs) in 2016, representing 31% of all global deaths. Of these deaths, 85% are due to heart attack and stroke.1 In persons 50 years and older in 2019, “ischaemic heart disease and stroke were the top-ranked causes of disability adjusted life years (DALYs).”2 More than three-quarters of deaths from CVDs occur in low- and middle-income countries.1 “Heart attacks and strokes are usually acute events and are mainly caused by a blockage that prevents blood from flowing to the heart or brain.”1 Among studies in the Americas, a study of persons 60 years and older in seven urban centres in Latin America and the Caribbean found a prevalence of self-reported cardiovascular disease (SRCVD) of 20.3%,3 and a study among persons aged 60 years and older in the highlands and coastal areas of Ecuador in 2009 found a prevalence of self-reported heart disease of 13.1% and of stroke of 6.4%.4 In urban-rural sites (35–70 years) in Argentina, Brazil, Chile and Colombia, the prevalence of SRCVD was 3.9%, 6.9%, 3.3%, and 3.8%, respectively.5 In a national study in Brazil (18 years and older), the self-reported (SR) stroke prevalence was 1.6%;6 in Colombia (18–69 years), SRCVD was 5.5%;7 in Mexico (50 years and older), the prevalence of SR stroke was 4.3% and of angina 13.6%;8 and in the USA in 2016 (20 years and older), SRCVD comprised congestive heart failure 3.4%, angina 3.0%, heart attack 4.4%, and stroke 3.9%.9 In Nepal (24–64 years), 2% had major cardiovascular events;10 in China (35–74 years), the prevalence of SRCVD was 3.3% in men and 3.6% in women;11 and in Australia (≥25 years) in 2007–2008, the prevalence of SRCVD (heart attack or stroke) was 4.5%.12 To our knowledge, no national information on SRCVD in Ecuador is available.13,14 CVDs contributed to 24% of mortality in 2016 in Ecuador.15 According to the national mortality registry of Ecuador, the myocardial infarction mortality rate increased from 51 deaths per 100,000 in 2012 to 157 in 2016,16 and mortality due to ischaemic heart disease increased in Ecuador over the period 2001–2016.17 Ecuador’s population (16.9 million) consists of a mixture of various ethnic groups, ranging from Mestizo (71.9%) to Afro-Ecuadorian (4.3%).18 Factors associated with SRCVD include sociodemographic factors and behavioural and biological CVD risk factors. Sociodemographic factors associated with SRCVD include older age,9,19,20 male sex,9,20 female sex,4,19 low socioeconomic status,21 lower education,6,9,20 and ethnicity.22 Behavioural CVD risk factors associated with SRCVD may include smoking/tobacco use,3,6,20 past smoking,4 physical inactivity,4,6,23 inadequate or low fruit and vegetable consumption,10,23 high intake of sodium and sodium chloride (regular salt),24,25 and psychological distress.12 Biological CVD risk factors associated with SRCVD may include hypertension,3,19–23,26,27 diabetes,3,6,20–23,26 obesity,3,4,6,21,23,26 abnormal cholesterol values,19 and poor oral health (edentulism).28 This study aimed to determine the prevalence and correlates of SRCVDs in a national population-based survey among adults in Ecuador in 2018.
null
null
null
null
Conclusion
Almost one in ten persons aged 18–69 years reported having been diagnosed with CVD in Ecuador. Several associated factors for CVD, such as being Montubio, family alcohol problems, past smoking tobacco, and poor oral health status were identified, which can be targeted in public health interventions.
[ "Method", "Sample and Procedures", "Measures", "Data Analysis", "Results", "Sample and Cardiovascular Diseases Prevalence Characteristics", "Associations with Self-Reported Cardiovascular Disease Prevalence", "Discussion", "Conclusion" ]
[ "Sample and Procedures This is a secondary analysis conducted using nationally representative population-based and cross-sectional data (18–69 years old) from the “2018 Ecuador STEPS survey.”29 The 2018 Ecuador STEPS survey data and more detailed sampling methods can be found elsewhere.30 Briefly, using a three-stage cluster sampling method, at the first stage primary sampling units (PSUs) were selected by stratum, at the second stage within each PSU selected in the first stage 12 occupied households were selected, and at the third stage, one participant (18–69 years) was selected from each household.30 Selected individuals were assessed with an interview-administered questionnaire, physical and biochemical measurements.30 Research data were collected using electronic tablet devices.30 The “Ethical Review Committee of the Ecuador Ministry of Health” approved the study.30\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nThis is a secondary analysis conducted using nationally representative population-based and cross-sectional data (18–69 years old) from the “2018 Ecuador STEPS survey.”29 The 2018 Ecuador STEPS survey data and more detailed sampling methods can be found elsewhere.30 Briefly, using a three-stage cluster sampling method, at the first stage primary sampling units (PSUs) were selected by stratum, at the second stage within each PSU selected in the first stage 12 occupied households were selected, and at the third stage, one participant (18–69 years) was selected from each household.30 Selected individuals were assessed with an interview-administered questionnaire, physical and biochemical measurements.30 Research data were collected using electronic tablet devices.30 The “Ethical Review Committee of the Ecuador Ministry of Health” approved the study.30\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. 
Informed consent was obtained from all participants included in the study.\nMeasures Outcome variable: History of CVDs was assessed with the question, “Have you ever had a heart attack or chest pain from heart disease (angina) or a stroke (cerebrovascular accident or incident)?” (Yes, No).30\nSociodemographic covariates included age, sex, highest level of formal education, and ethnicity.30\nBehavioural covariates included current and past smoking tobacco, daily servings of fruit and vegetable intake, and “low, moderate or high physical activity based on the Global Physical Activity Questionnaire”.30,31 Salt intake was assessed with the item, “Do you add salt to food at the table?”30 Responses were trichotomized into 1=never, 2=raley or sometimes, and 3=often or always. Alcohol dependence was sourced from the “Alcohol Use Disorder Identification Test=AUDIT” (items 4–6, ≥4 total scores).32 Alcohol family problems was sourced from the item, “During the past 12 months, have you had family problems or problems with your partner due to someone else’s drinking?” (1=yes: > monthly to 4=once or twice).30\nBiological covariates included measured Body Mass Index (BMI) classified as “<18.5kg/m2 underweight, 18.5–24.4kg/m2 normal weight, 25–29.9kg/m2 overweight and ≥30 kg/m2 obesity”.33 Hypertension or raised blood pressure (BP) was defined as “systolic BP ≥140 mm Hg and/or diastolic BP ≥90 mm Hg or where the participant is currently on antihypertensive medication”.34 Diabetes was defined as “fasting plasma glucose levels ≥7.0 mmol/L (126 mg/dl); or using insulin or oral hypoglycaemic drugs; or having a history of diagnosis of diabetes”.35 Raised total cholesterol was defined as “fasting TC ≥5.0 mmol/L or currently on medication for raised cholesterol”.35 Self-rated oral health status (AROH) was sourced from two items, 1) “How would you describe the state of your teeth, and 2) How would you describe the state of your gums?”30 Poor SROH was defined as “having poor or very poor status of teeth and/or gums, and good oral health as having average, good, very good or excellent status of teeth and/or gums”, in line with previous research.36 Cronbach alpha’s for the two item SROH scale was 0.74 in this sample.\nOutcome variable: History of CVDs was assessed with the question, “Have you ever had a heart attack or chest pain from heart disease (angina) or a stroke (cerebrovascular accident or incident)?” (Yes, No).30\nSociodemographic covariates included age, sex, highest level of formal education, and ethnicity.30\nBehavioural covariates included current and past smoking tobacco, daily servings of fruit and vegetable intake, and “low, moderate or high physical activity based on the Global Physical Activity Questionnaire”.30,31 Salt intake was assessed with the item, “Do you add salt to food at the table?”30 Responses were trichotomized into 1=never, 2=raley or sometimes, and 3=often or always. 
Alcohol dependence was sourced from the “Alcohol Use Disorder Identification Test=AUDIT” (items 4–6, ≥4 total scores).32 Alcohol family problems was sourced from the item, “During the past 12 months, have you had family problems or problems with your partner due to someone else’s drinking?” (1=yes: > monthly to 4=once or twice).30\nBiological covariates included measured Body Mass Index (BMI) classified as “<18.5kg/m2 underweight, 18.5–24.4kg/m2 normal weight, 25–29.9kg/m2 overweight and ≥30 kg/m2 obesity”.33 Hypertension or raised blood pressure (BP) was defined as “systolic BP ≥140 mm Hg and/or diastolic BP ≥90 mm Hg or where the participant is currently on antihypertensive medication”.34 Diabetes was defined as “fasting plasma glucose levels ≥7.0 mmol/L (126 mg/dl); or using insulin or oral hypoglycaemic drugs; or having a history of diagnosis of diabetes”.35 Raised total cholesterol was defined as “fasting TC ≥5.0 mmol/L or currently on medication for raised cholesterol”.35 Self-rated oral health status (AROH) was sourced from two items, 1) “How would you describe the state of your teeth, and 2) How would you describe the state of your gums?”30 Poor SROH was defined as “having poor or very poor status of teeth and/or gums, and good oral health as having average, good, very good or excellent status of teeth and/or gums”, in line with previous research.36 Cronbach alpha’s for the two item SROH scale was 0.74 in this sample.\nData Analysis Considering the clustered study design, all statistical analyses were done using “STATA software version 14.0 (Stata Corporation, College Station, TX, USA).” Unadjusted and adjusted logistic regression was used to calculate predictors of SRCVD. Variables with p-values <0.1 in univariate analysis (age group, educational level, ethnicity, family alcohol problems, smoking status, alcohol dependence, body weight status, hypertension, and self-rated oral health) were included in the final adjusted model. P-values of below 0.05 were accepted as significant and missing values were excluded from the analysis.\nConsidering the clustered study design, all statistical analyses were done using “STATA software version 14.0 (Stata Corporation, College Station, TX, USA).” Unadjusted and adjusted logistic regression was used to calculate predictors of SRCVD. Variables with p-values <0.1 in univariate analysis (age group, educational level, ethnicity, family alcohol problems, smoking status, alcohol dependence, body weight status, hypertension, and self-rated oral health) were included in the final adjusted model. 
P-values of below 0.05 were accepted as significant and missing values were excluded from the analysis.", "This is a secondary analysis conducted using nationally representative population-based and cross-sectional data (18–69 years old) from the “2018 Ecuador STEPS survey.”29 The 2018 Ecuador STEPS survey data and more detailed sampling methods can be found elsewhere.30 Briefly, using a three-stage cluster sampling method, at the first stage primary sampling units (PSUs) were selected by stratum, at the second stage within each PSU selected in the first stage 12 occupied households were selected, and at the third stage, one participant (18–69 years) was selected from each household.30 Selected individuals were assessed with an interview-administered questionnaire, physical and biochemical measurements.30 Research data were collected using electronic tablet devices.30 The “Ethical Review Committee of the Ecuador Ministry of Health” approved the study.30\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.", "Outcome variable: History of CVDs was assessed with the question, “Have you ever had a heart attack or chest pain from heart disease (angina) or a stroke (cerebrovascular accident or incident)?” (Yes, No).30\nSociodemographic covariates included age, sex, highest level of formal education, and ethnicity.30\nBehavioural covariates included current and past smoking tobacco, daily servings of fruit and vegetable intake, and “low, moderate or high physical activity based on the Global Physical Activity Questionnaire”.30,31 Salt intake was assessed with the item, “Do you add salt to food at the table?”30 Responses were trichotomized into 1=never, 2=raley or sometimes, and 3=often or always. 
Alcohol dependence was sourced from the “Alcohol Use Disorder Identification Test=AUDIT” (items 4–6, ≥4 total scores).32 Alcohol family problems was sourced from the item, “During the past 12 months, have you had family problems or problems with your partner due to someone else’s drinking?” (1=yes: > monthly to 4=once or twice).30\nBiological covariates included measured Body Mass Index (BMI) classified as “<18.5kg/m2 underweight, 18.5–24.4kg/m2 normal weight, 25–29.9kg/m2 overweight and ≥30 kg/m2 obesity”.33 Hypertension or raised blood pressure (BP) was defined as “systolic BP ≥140 mm Hg and/or diastolic BP ≥90 mm Hg or where the participant is currently on antihypertensive medication”.34 Diabetes was defined as “fasting plasma glucose levels ≥7.0 mmol/L (126 mg/dl); or using insulin or oral hypoglycaemic drugs; or having a history of diagnosis of diabetes”.35 Raised total cholesterol was defined as “fasting TC ≥5.0 mmol/L or currently on medication for raised cholesterol”.35 Self-rated oral health status (AROH) was sourced from two items, 1) “How would you describe the state of your teeth, and 2) How would you describe the state of your gums?”30 Poor SROH was defined as “having poor or very poor status of teeth and/or gums, and good oral health as having average, good, very good or excellent status of teeth and/or gums”, in line with previous research.36 Cronbach alpha’s for the two item SROH scale was 0.74 in this sample.", "Considering the clustered study design, all statistical analyses were done using “STATA software version 14.0 (Stata Corporation, College Station, TX, USA).” Unadjusted and adjusted logistic regression was used to calculate predictors of SRCVD. Variables with p-values <0.1 in univariate analysis (age group, educational level, ethnicity, family alcohol problems, smoking status, alcohol dependence, body weight status, hypertension, and self-rated oral health) were included in the final adjusted model. P-values of below 0.05 were accepted as significant and missing values were excluded from the analysis.", "Sample and Cardiovascular Diseases Prevalence Characteristics The sample included 4,638 adults (18–69 years; with 39 median age), 58.1% were female, 30.5% had higher education, and majority (78.9%) belonged to the Mestizo ethnic group. The study response rate was 69.4%.30 One in five participants (24.7%) had low physical activity, 13.7% currently smoked tobacco, 11.8% were dependent on alcohol use, 7.0% had alcohol family problems, 58.8% had 1 or less serving of fruit and vegetables a day, and 12.45 often or always added salt to their meals. One in four respondents (25.7%) had obesity, 20.5% hypertension, 7.1% diabetes, 34.8% raised total cholesterol, and 9.7% poor oral health. 
The prevalence of SRCVDs was 8.7%, 8.5% among men and 8.9% among women (see Table 1).Table 1Sample and Cardiovascular Disease Characteristics Among Adults, Ecuador, 2018Variable (#Missing Cases)SampleSelf-Reported Cardiovascular DiseaseN (%)N (%)aAll4638391 (8.7)Age in years (#0) 18–291205 (30.1)73 (6.3) 30–442034 (40.7)167 (8.7) 45–691399 (29.3)151 (11.2)Sex (#0) Female2694 (58.1)236 (8.9) Male1944 (41.9)155 (8.5)Education (#4) <Secondary2452 (48.2)226 (9.8) Secondary929 (21.2)59 (7.4) >Secondary1253 (30.5)104 (7.8)Ethnicity (#2) Mestizo3567 (78.9)295 (8.4) Amerindian378 (6.5)19 (4.7) Montubio335 (7.2)39 (13.7) African227 (4.4)20 (9.7) White/other129 (3.0)18 (12.2)Family alcohol problems (#0) No4326 (93.0)348 (8.3) Yes312 (7.0)43 (14.4)Smoking status (#0) Never3046 (61.9)227 (7.5) Past1007 (24.4)111 (11.2) Current585 (13.7)53 (9.9)Alcohol dependence (#0) No4145 (88.2)336 (8.3) Yes493 (11.8)55 (11.7)Fruit/vegetables (servings/day) (#11) 0–12754 (58.8)229 (8.9) 2–31416 (31.2)116 (8.2) 4 or more457 (10.0)45 (9.1)Adding salt to meals (#19) Never2534 (52.8)203 (8.0) Sometimes/rarely1553 (34.7)135 (9.1) Often/always532 (12.4)52 (10.8)Physical activity (#6) Low1083 (24.7)95 (8.3) Moderate1206 (25.5)102 (8.5) High2343 (49.8)193 (8.9)Body mass index (#168) Normal1486 (34.9)117 (7.6) Under55 (1.5)4 (6.9) Overweight1725 (37.9)142 (8.6) Obesity1204 (25.7)121 (11.0)Hypertension (#101) No3599 (79.5)270 (7.9) Yes938 (20.5)118 (12.3)Type 2 diabetes (#598) No3713 (92.9)333 (9.0) Yes327 (7.1)33 (10.1)Raised cholesterol (#560) No2569 (65.2)225 (9.1) Yes1509 (34.8)144 (9.4)Self-rated oral health (#175) Good3995 (90.3)310 (8.0) Poor468 (9.7)66 (14.8)Notes: #, number. aPercentage is based on the weighted percentage of complete cases.\n\nSample and Cardiovascular Disease Characteristics Among Adults, Ecuador, 2018\nNotes: #, number. aPercentage is based on the weighted percentage of complete cases.\nThe sample included 4,638 adults (18–69 years; with 39 median age), 58.1% were female, 30.5% had higher education, and majority (78.9%) belonged to the Mestizo ethnic group. The study response rate was 69.4%.30 One in five participants (24.7%) had low physical activity, 13.7% currently smoked tobacco, 11.8% were dependent on alcohol use, 7.0% had alcohol family problems, 58.8% had 1 or less serving of fruit and vegetables a day, and 12.45 often or always added salt to their meals. One in four respondents (25.7%) had obesity, 20.5% hypertension, 7.1% diabetes, 34.8% raised total cholesterol, and 9.7% poor oral health. 
The prevalence of SRCVDs was 8.7%, 8.5% among men and 8.9% among women (see Table 1).Table 1Sample and Cardiovascular Disease Characteristics Among Adults, Ecuador, 2018Variable (#Missing Cases)SampleSelf-Reported Cardiovascular DiseaseN (%)N (%)aAll4638391 (8.7)Age in years (#0) 18–291205 (30.1)73 (6.3) 30–442034 (40.7)167 (8.7) 45–691399 (29.3)151 (11.2)Sex (#0) Female2694 (58.1)236 (8.9) Male1944 (41.9)155 (8.5)Education (#4) <Secondary2452 (48.2)226 (9.8) Secondary929 (21.2)59 (7.4) >Secondary1253 (30.5)104 (7.8)Ethnicity (#2) Mestizo3567 (78.9)295 (8.4) Amerindian378 (6.5)19 (4.7) Montubio335 (7.2)39 (13.7) African227 (4.4)20 (9.7) White/other129 (3.0)18 (12.2)Family alcohol problems (#0) No4326 (93.0)348 (8.3) Yes312 (7.0)43 (14.4)Smoking status (#0) Never3046 (61.9)227 (7.5) Past1007 (24.4)111 (11.2) Current585 (13.7)53 (9.9)Alcohol dependence (#0) No4145 (88.2)336 (8.3) Yes493 (11.8)55 (11.7)Fruit/vegetables (servings/day) (#11) 0–12754 (58.8)229 (8.9) 2–31416 (31.2)116 (8.2) 4 or more457 (10.0)45 (9.1)Adding salt to meals (#19) Never2534 (52.8)203 (8.0) Sometimes/rarely1553 (34.7)135 (9.1) Often/always532 (12.4)52 (10.8)Physical activity (#6) Low1083 (24.7)95 (8.3) Moderate1206 (25.5)102 (8.5) High2343 (49.8)193 (8.9)Body mass index (#168) Normal1486 (34.9)117 (7.6) Under55 (1.5)4 (6.9) Overweight1725 (37.9)142 (8.6) Obesity1204 (25.7)121 (11.0)Hypertension (#101) No3599 (79.5)270 (7.9) Yes938 (20.5)118 (12.3)Type 2 diabetes (#598) No3713 (92.9)333 (9.0) Yes327 (7.1)33 (10.1)Raised cholesterol (#560) No2569 (65.2)225 (9.1) Yes1509 (34.8)144 (9.4)Self-rated oral health (#175) Good3995 (90.3)310 (8.0) Poor468 (9.7)66 (14.8)Notes: #, number. aPercentage is based on the weighted percentage of complete cases.\n\nSample and Cardiovascular Disease Characteristics Among Adults, Ecuador, 2018\nNotes: #, number. aPercentage is based on the weighted percentage of complete cases.\nAssociations with Self-Reported Cardiovascular Disease Prevalence In adjusted logistic regression analysis, Montubio (Adjusted Odds Ratio-AOR: 1.66, 95% Confidence Interval-CI: 1.10–2.50), family alcohol problems (AOR: 1.78, 95% CI: 1.19–2.65), past smoking tobacco (AOR: 1.36, 95% CI: 1.02–1.81), and poor oral health status (AOR: 1.74, 95% CI: 1.19–2.54) were associated with SRCVD. 
In addition, in unadjusted analysis, older age, alcohol dependence, obesity, and having hypertension were associated with SRCVD (see Table 2).Table 2Multivariable Associations with Self-Reported Cardiovascular DiseaseVariableCrude OR (95% CI)p-valueAdjusted OR (95% CI)p-valueAge in years 18–291 (Reference)1 (Reference) 30–441.41 (1.01, 1.98)0.0421.18 (0.81, 1.71)0.389 45–691.87 (1.30, 2.68)<0.0011.31 (0.85, 2.03)0.224Sex Female1 (Reference)— Male0.96 (0.74, 1.23)0.727Education <Secondary1 (Reference)1 (Reference) Secondary0.74 (0.53, 1.03)0.0770.84 (0.58, 1.22)0.369 >Secondary0.78 (0.57, 1.05)0.0970.89 (0.64, 1.23)0.487Ethnicity Mestizo1 (Reference)1 (Reference) Amerindian0.54 (0.27, 1.06)0.0760.51 (0.25, 1.02)0.058 Montubio1.73 (1.16, 2.59)0.0081.66 (1.10, 2.50)0.017 African1.17 (0.68, 2.01)0.5771.21 (0.70, 2.08)0.497 White/other1.51 (0.83, 2.74)0.1731.54 (0.84, 2.83)0.167Family alcohol problems No1 (Reference)1 (Reference) Yes1.86 (1.26, 2.75)0.0021.78 (1.19, 2.65)0.005Smoking status Never1 (Reference)1 (Reference) Past1.55 (1.18, 2.05)0.0021.36 (1.02, 1.81)0.038 Current1.36 (0.95, 1.95)0.0901.24 (0.84, 1.83)0.287Alcohol dependence No1 (Reference)1 (Reference) Yes1.47 (1.03, 2.10)0.0341.29 (0.87, 1.91)0.212Fruit/vegetables (servings/day) 0–11 (Reference)— 2–30.92 (0.69, 1.21)0.549 4 or more1.03 (0.68, 1.56)0.897Adding salt to meals Never1 (Reference)— Sometimes/rarely1.15 (0.86, 1.54)0.352 Often/always1.39 (0.94, 2.05)0.103Physical activity Low1 (Reference)— Moderate1.03 (0.73, 1.45)0.872 High1.08 (0.81, 1.46)0.587Body mass index Normal1 (Reference)1 (Reference) Under0.89 (0.29, 2.77)0.8421.10 (0.35, 3.45)0.876 Overweight1.14 (0.83, 1.57)0.4041.10 (0.78, 1.56)0.581 Obesity1.50 (1.08, 2.07)0.0161.36 (0.95, 1.94)0.088Hypertension No1 (Reference)1 (Reference) Yes1.63 (1.24, 2.14)<0.0011.31 (0.96, 1.78)0.083Type 2 diabetes No1 (Reference)— Yes1.14 (0.72, 1.79)0.574Raised cholesterol No1 (Reference)— Yes1.05 (0.80, 1.36)0.741Self-rated oral health Good1 (Reference)1 (Reference) Poor2.00 (1.41, 2.82)<0.0011.74 (1.19, 2.54)0.005Abbreviations: OR, odds ratio; CI, confidence interval.\n\nMultivariable Associations with Self-Reported Cardiovascular Disease\nAbbreviations: OR, odds ratio; CI, confidence interval.\nIn adjusted logistic regression analysis, Montubio (Adjusted Odds Ratio-AOR: 1.66, 95% Confidence Interval-CI: 1.10–2.50), family alcohol problems (AOR: 1.78, 95% CI: 1.19–2.65), past smoking tobacco (AOR: 1.36, 95% CI: 1.02–1.81), and poor oral health status (AOR: 1.74, 95% CI: 1.19–2.54) were associated with SRCVD. 
In addition, in unadjusted analysis, older age, alcohol dependence, obesity, and having hypertension were associated with SRCVD (see Table 2).Table 2Multivariable Associations with Self-Reported Cardiovascular DiseaseVariableCrude OR (95% CI)p-valueAdjusted OR (95% CI)p-valueAge in years 18–291 (Reference)1 (Reference) 30–441.41 (1.01, 1.98)0.0421.18 (0.81, 1.71)0.389 45–691.87 (1.30, 2.68)<0.0011.31 (0.85, 2.03)0.224Sex Female1 (Reference)— Male0.96 (0.74, 1.23)0.727Education <Secondary1 (Reference)1 (Reference) Secondary0.74 (0.53, 1.03)0.0770.84 (0.58, 1.22)0.369 >Secondary0.78 (0.57, 1.05)0.0970.89 (0.64, 1.23)0.487Ethnicity Mestizo1 (Reference)1 (Reference) Amerindian0.54 (0.27, 1.06)0.0760.51 (0.25, 1.02)0.058 Montubio1.73 (1.16, 2.59)0.0081.66 (1.10, 2.50)0.017 African1.17 (0.68, 2.01)0.5771.21 (0.70, 2.08)0.497 White/other1.51 (0.83, 2.74)0.1731.54 (0.84, 2.83)0.167Family alcohol problems No1 (Reference)1 (Reference) Yes1.86 (1.26, 2.75)0.0021.78 (1.19, 2.65)0.005Smoking status Never1 (Reference)1 (Reference) Past1.55 (1.18, 2.05)0.0021.36 (1.02, 1.81)0.038 Current1.36 (0.95, 1.95)0.0901.24 (0.84, 1.83)0.287Alcohol dependence No1 (Reference)1 (Reference) Yes1.47 (1.03, 2.10)0.0341.29 (0.87, 1.91)0.212Fruit/vegetables (servings/day) 0–11 (Reference)— 2–30.92 (0.69, 1.21)0.549 4 or more1.03 (0.68, 1.56)0.897Adding salt to meals Never1 (Reference)— Sometimes/rarely1.15 (0.86, 1.54)0.352 Often/always1.39 (0.94, 2.05)0.103Physical activity Low1 (Reference)— Moderate1.03 (0.73, 1.45)0.872 High1.08 (0.81, 1.46)0.587Body mass index Normal1 (Reference)1 (Reference) Under0.89 (0.29, 2.77)0.8421.10 (0.35, 3.45)0.876 Overweight1.14 (0.83, 1.57)0.4041.10 (0.78, 1.56)0.581 Obesity1.50 (1.08, 2.07)0.0161.36 (0.95, 1.94)0.088Hypertension No1 (Reference)1 (Reference) Yes1.63 (1.24, 2.14)<0.0011.31 (0.96, 1.78)0.083Type 2 diabetes No1 (Reference)— Yes1.14 (0.72, 1.79)0.574Raised cholesterol No1 (Reference)— Yes1.05 (0.80, 1.36)0.741Self-rated oral health Good1 (Reference)1 (Reference) Poor2.00 (1.41, 2.82)<0.0011.74 (1.19, 2.54)0.005Abbreviations: OR, odds ratio; CI, confidence interval.\n\nMultivariable Associations with Self-Reported Cardiovascular Disease\nAbbreviations: OR, odds ratio; CI, confidence interval.", "The sample included 4,638 adults (18–69 years; with 39 median age), 58.1% were female, 30.5% had higher education, and majority (78.9%) belonged to the Mestizo ethnic group. The study response rate was 69.4%.30 One in five participants (24.7%) had low physical activity, 13.7% currently smoked tobacco, 11.8% were dependent on alcohol use, 7.0% had alcohol family problems, 58.8% had 1 or less serving of fruit and vegetables a day, and 12.45 often or always added salt to their meals. One in four respondents (25.7%) had obesity, 20.5% hypertension, 7.1% diabetes, 34.8% raised total cholesterol, and 9.7% poor oral health. 
The prevalence of SRCVDs was 8.7%, 8.5% among men and 8.9% among women (see Table 1).Table 1Sample and Cardiovascular Disease Characteristics Among Adults, Ecuador, 2018Variable (#Missing Cases)SampleSelf-Reported Cardiovascular DiseaseN (%)N (%)aAll4638391 (8.7)Age in years (#0) 18–291205 (30.1)73 (6.3) 30–442034 (40.7)167 (8.7) 45–691399 (29.3)151 (11.2)Sex (#0) Female2694 (58.1)236 (8.9) Male1944 (41.9)155 (8.5)Education (#4) <Secondary2452 (48.2)226 (9.8) Secondary929 (21.2)59 (7.4) >Secondary1253 (30.5)104 (7.8)Ethnicity (#2) Mestizo3567 (78.9)295 (8.4) Amerindian378 (6.5)19 (4.7) Montubio335 (7.2)39 (13.7) African227 (4.4)20 (9.7) White/other129 (3.0)18 (12.2)Family alcohol problems (#0) No4326 (93.0)348 (8.3) Yes312 (7.0)43 (14.4)Smoking status (#0) Never3046 (61.9)227 (7.5) Past1007 (24.4)111 (11.2) Current585 (13.7)53 (9.9)Alcohol dependence (#0) No4145 (88.2)336 (8.3) Yes493 (11.8)55 (11.7)Fruit/vegetables (servings/day) (#11) 0–12754 (58.8)229 (8.9) 2–31416 (31.2)116 (8.2) 4 or more457 (10.0)45 (9.1)Adding salt to meals (#19) Never2534 (52.8)203 (8.0) Sometimes/rarely1553 (34.7)135 (9.1) Often/always532 (12.4)52 (10.8)Physical activity (#6) Low1083 (24.7)95 (8.3) Moderate1206 (25.5)102 (8.5) High2343 (49.8)193 (8.9)Body mass index (#168) Normal1486 (34.9)117 (7.6) Under55 (1.5)4 (6.9) Overweight1725 (37.9)142 (8.6) Obesity1204 (25.7)121 (11.0)Hypertension (#101) No3599 (79.5)270 (7.9) Yes938 (20.5)118 (12.3)Type 2 diabetes (#598) No3713 (92.9)333 (9.0) Yes327 (7.1)33 (10.1)Raised cholesterol (#560) No2569 (65.2)225 (9.1) Yes1509 (34.8)144 (9.4)Self-rated oral health (#175) Good3995 (90.3)310 (8.0) Poor468 (9.7)66 (14.8)Notes: #, number. aPercentage is based on the weighted percentage of complete cases.\n\nSample and Cardiovascular Disease Characteristics Among Adults, Ecuador, 2018\nNotes: #, number. aPercentage is based on the weighted percentage of complete cases.", "In adjusted logistic regression analysis, Montubio (Adjusted Odds Ratio-AOR: 1.66, 95% Confidence Interval-CI: 1.10–2.50), family alcohol problems (AOR: 1.78, 95% CI: 1.19–2.65), past smoking tobacco (AOR: 1.36, 95% CI: 1.02–1.81), and poor oral health status (AOR: 1.74, 95% CI: 1.19–2.54) were associated with SRCVD. 
In addition, in unadjusted analysis, older age, alcohol dependence, obesity, and having hypertension were associated with SRCVD (see Table 2).Table 2Multivariable Associations with Self-Reported Cardiovascular DiseaseVariableCrude OR (95% CI)p-valueAdjusted OR (95% CI)p-valueAge in years 18–291 (Reference)1 (Reference) 30–441.41 (1.01, 1.98)0.0421.18 (0.81, 1.71)0.389 45–691.87 (1.30, 2.68)<0.0011.31 (0.85, 2.03)0.224Sex Female1 (Reference)— Male0.96 (0.74, 1.23)0.727Education <Secondary1 (Reference)1 (Reference) Secondary0.74 (0.53, 1.03)0.0770.84 (0.58, 1.22)0.369 >Secondary0.78 (0.57, 1.05)0.0970.89 (0.64, 1.23)0.487Ethnicity Mestizo1 (Reference)1 (Reference) Amerindian0.54 (0.27, 1.06)0.0760.51 (0.25, 1.02)0.058 Montubio1.73 (1.16, 2.59)0.0081.66 (1.10, 2.50)0.017 African1.17 (0.68, 2.01)0.5771.21 (0.70, 2.08)0.497 White/other1.51 (0.83, 2.74)0.1731.54 (0.84, 2.83)0.167Family alcohol problems No1 (Reference)1 (Reference) Yes1.86 (1.26, 2.75)0.0021.78 (1.19, 2.65)0.005Smoking status Never1 (Reference)1 (Reference) Past1.55 (1.18, 2.05)0.0021.36 (1.02, 1.81)0.038 Current1.36 (0.95, 1.95)0.0901.24 (0.84, 1.83)0.287Alcohol dependence No1 (Reference)1 (Reference) Yes1.47 (1.03, 2.10)0.0341.29 (0.87, 1.91)0.212Fruit/vegetables (servings/day) 0–11 (Reference)— 2–30.92 (0.69, 1.21)0.549 4 or more1.03 (0.68, 1.56)0.897Adding salt to meals Never1 (Reference)— Sometimes/rarely1.15 (0.86, 1.54)0.352 Often/always1.39 (0.94, 2.05)0.103Physical activity Low1 (Reference)— Moderate1.03 (0.73, 1.45)0.872 High1.08 (0.81, 1.46)0.587Body mass index Normal1 (Reference)1 (Reference) Under0.89 (0.29, 2.77)0.8421.10 (0.35, 3.45)0.876 Overweight1.14 (0.83, 1.57)0.4041.10 (0.78, 1.56)0.581 Obesity1.50 (1.08, 2.07)0.0161.36 (0.95, 1.94)0.088Hypertension No1 (Reference)1 (Reference) Yes1.63 (1.24, 2.14)<0.0011.31 (0.96, 1.78)0.083Type 2 diabetes No1 (Reference)— Yes1.14 (0.72, 1.79)0.574Raised cholesterol No1 (Reference)— Yes1.05 (0.80, 1.36)0.741Self-rated oral health Good1 (Reference)1 (Reference) Poor2.00 (1.41, 2.82)<0.0011.74 (1.19, 2.54)0.005Abbreviations: OR, odds ratio; CI, confidence interval.\n\nMultivariable Associations with Self-Reported Cardiovascular Disease\nAbbreviations: OR, odds ratio; CI, confidence interval.", "In this nationally representative sample of adults (18–69 years) in Ecuador, the prevalence of SRCVD (8.7%) was higher than in Argentina (3.9%, 35–70 years), Brazil (6.9%, 35–70 years), Chile (3.3%, 35–70 years) and Colombia (3.8%, 35–70 years),5 in Brazil (SR stroke, 1.6%, 18 years older),6 Colombia (5.5%, 18–69 years),7 in Nepal (SRCVD, 2.0%, 24–64 years),10 in China (SRCVD, 3.5%, 35–74 years),11 in Australia (SRCVD, 4.5%, ≥25 years),12 and USA (SR congestive health failure 3.4%, angina 3.0%, heart attack 4.4%, or stroke 3.9%, 20 years and older).9 However, it should be noted that the prevalence rates of SRCVD are difficult to compare because of different measurements and different age groups. The high prevalence of SRCVD found in Ecuador calls for community-based massive education campaigns and health care provision of people with CVD, including the identification of a cerebral-vascular event and emergency care.3,37\nConsistent with previous research,9,19,20 older age (45–69 years) was positively associated with SRCVD in unadjusted analysis. Some previous studies found sex differences4,9,19,20 in the prevalence of SRCVD, while in our study sex differences were not reaching significance. 
Other research found an association between low socioeconomic status or lower education6,9,20,21 and SRCVD, while this survey did not show such associations. Compared to Mestizo Ecuadorean, Montubio Ecuadorean were more likely and Amerindian less likely (marginally significant) to have SRCVD. In a previous study among Amerindians in Ecuador, a low prevalence of atrial fibrillation was found, which can be explained by, both, “racially determined short stature and frequent dietary oily fish intake.”38 The most significant predictors of the increasing mortality rate from myocardial infarction in Ecuador were living in the coast belonging to a mixed race.16 The Montubio (“an aboriginal mestizo group that originates from the coastal part of Ecuador”)18 may form part of this mortality rate from myocardial infarction. In the study among older adults in Ecuador,4 people living on the coast had an increased risk of heart disease and stroke compared to those in the highlands.\nIn agreement with previous research,12,39,40 this study showed that psychosocial stress in the form of alcohol family problems was associated with SRCVD. Stress can increase the cerebrovascular disease risk by modulating sympathomimetic activity, affecting the blood pressure reactivity, cerebral endothelium, coagulation, or heart rhythm.39 In line with former research,3,4,6,20 past tobacco use was positively associated with SRCVD. In a systematic review and meta analysis the importance of smoking as an independent risk factor for stroke was confirmed.41 Contrary to expectation,4,6,10,23–25 physical inactivity, inadequate or low fruit and vegetable consumption and frequent salt consumption were not associated with SRCVD in this study. Similarly to a study in Nepal,10 the consumption of fruit and vegetables was very low in this study, but was unlike in the Nepal study not associated with SRCVD.\nConsistent with former research,3,4,6,19–23,26,27 this survey showed in univariate analysis an association between hypertension and obesity with SRCVD. Unlike some studies,3,6,19–23,26 this study did not find significant associations between diabetes, raised cholesterol and SRCVD. In line with some previous research,28 poor oral health (edentulism) was positively associated with SRCVD. Periodontal diseases have been shown to be associated with systematic diseases, such as CVD.42\nThe strength of the study includes a nationally representative sample of adults in Ecuador and the use of a standardized STEPS survey methodology and measurements. Study limitations include the cross-sectional design, the assessment of some variables, including CVD by self-report, and that certain relevant variables, such as psychological distress, were not assessed. Previous research compared self-reported versus hospital-coded diagnosis of CVD, and found SRCVD valid for epidemiological studies.43 Further, the study did not assess the duration of having a CVD and the CVD type, which prevents us from establishing a trend between time of CVD diagnosis and health risk behaviours. The study participants only included those that had survived CVD, and excluded those with CVD who had died prior to the survey, increasing the possible underestimates of our figures.44", "Almost one in ten persons aged 18–69 years reported having been diagnosed with CVD in Ecuador. Several associated factors for CVD, such as being Montubio, family alcohol problems, past smoking tobacco, and poor oral health status were identified, which can be targeted in public health interventions." ]
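The biological and behavioural cut-offs quoted in the Measures text are mechanical enough to express directly in code. Below is a minimal Python sketch of those classification rules; the function and argument names are hypothetical, and the normal-weight upper bound is taken as <25 kg/m2 (the WHO convention the quoted ranges follow):

```python
# Illustrative sketch of the risk-factor definitions from the Measures
# subsection (hypothetical helper names; thresholds from the text).

def bmi_category(bmi_kg_m2: float) -> str:
    """WHO BMI classes as quoted in the Measures text."""
    if bmi_kg_m2 < 18.5:
        return "underweight"
    if bmi_kg_m2 < 25.0:
        return "normal"
    if bmi_kg_m2 < 30.0:
        return "overweight"
    return "obesity"

def has_hypertension(systolic: float, diastolic: float, on_bp_meds: bool) -> bool:
    """Systolic >=140 mm Hg and/or diastolic >=90 mm Hg, or current medication."""
    return systolic >= 140 or diastolic >= 90 or on_bp_meds

def has_diabetes(fasting_glucose_mmol_l: float, on_glucose_meds: bool,
                 diagnosed: bool) -> bool:
    """Fasting plasma glucose >=7.0 mmol/L, glucose-lowering drugs, or diagnosis."""
    return fasting_glucose_mmol_l >= 7.0 or on_glucose_meds or diagnosed

def raised_cholesterol(fasting_tc_mmol_l: float, on_lipid_meds: bool) -> bool:
    """Fasting total cholesterol >=5.0 mmol/L or current lipid medication."""
    return fasting_tc_mmol_l >= 5.0 or on_lipid_meds

def alcohol_dependent(audit_items_4_to_6: list[int]) -> bool:
    """AUDIT items 4-6 summed; a total score >=4 flags dependence."""
    return sum(audit_items_4_to_6) >= 4

# Example: one participant record.
print(bmi_category(31.2))                # obesity
print(has_hypertension(138, 92, False))  # True (diastolic criterion)
print(alcohol_dependent([2, 1, 0]))      # False
```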
[ null, null, null, null, null, null, null, null, null ]
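The Data Analysis text describes a two-step screening strategy: fit crude (unadjusted) logistic regressions, keep covariates with p < 0.1, then fit one adjusted model. Below is a minimal Python sketch of that strategy using statsmodels on toy data; the column names are hypothetical, and the original analysis was run in Stata 14 with the complex-survey design, which this sketch omits:

```python
# Sketch: crude logistic screening (p < 0.1) followed by one adjusted model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

candidates = ["age_group", "education", "ethnicity", "family_alcohol_problems",
              "smoking_status", "alcohol_dependence", "bmi_category",
              "hypertension", "oral_health"]  # hypothetical column names

# Toy data standing in for the survey DataFrame.
rng = np.random.default_rng(0)
df = pd.DataFrame({v: rng.integers(0, 2, 500) for v in candidates})
df["srcvd"] = rng.integers(0, 2, 500)

def crude_min_p(var: str, data: pd.DataFrame) -> float:
    """Smallest category p-value from an unadjusted logistic regression."""
    res = smf.logit(f"srcvd ~ C({var})", data=data).fit(disp=False)
    return res.pvalues.drop("Intercept").min()

kept = [v for v in candidates if crude_min_p(v, df) < 0.1]
if not kept:          # fallback so the toy example always fits a model
    kept = candidates
adjusted = smf.logit("srcvd ~ " + " + ".join(f"C({v})" for v in kept),
                     data=df).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals, as reported in Table 2.
aor = pd.DataFrame({"AOR": np.exp(adjusted.params),
                    "CI_low": np.exp(adjusted.conf_int()[0]),
                    "CI_high": np.exp(adjusted.conf_int()[1]),
                    "p": adjusted.pvalues})
print(aor.round(2))
```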
[ "Introduction", "Method", "Sample and Procedures", "Measures", "Data Analysis", "Results", "Sample and Cardiovascular Diseases Prevalence Characteristics", "Associations with Self-Reported Cardiovascular Disease Prevalence", "Discussion", "Conclusion" ]
[ "Globally, an estimated 17.9 million people died from Cardiovascular disease (CVDs) in 2016, representing 31% of all global deaths. Of these deaths, 85% are due to heart attack and stroke.1 In persons 50 years and older in 2019, “ischaemic heart disease and stroke were the top-ranked causes of disability adjusted life years (DALYs).”2 More than three-quarters of deaths from CVDs occur in low- and middle-income countries.1 “Heart attacks and strokes are usually acute events and are mainly caused by a blockage that prevents blood from flowing to the heart or brain.”1\nIn studies in the Americas, a study (60 years and older) in seven urban centres in Latin America and the Caribbean, the prevalence of self-reported cardiovascular disease (SRCVD) was 20.3%,3 and in a study among persons aged 60 years and older in the highlands and coastal areas of Ecuador in 2009 found a prevalence of self-reported heart disease of 13.1% and stroke 6.4%.4 In urban-rural sites (35–70 years) in Argentina, Brazil, Chile and Colombia the prevalence of SRCVD was 3.9%, 6.9%, 3.3%, and 3.8%, respectively.5 In a national study in Brazil (18 years older) the SR stroke prevalence was 1.6%,6 in Colombia (18–69 years) SRCVD 5.5%,7 Mexico (50 years and older) the prevalence of SR stroke was 4.3% and angina 13.6%,8 and in USA in 2016 (20 years plus), SRCVD (congestive health failure 3.4%, angina 3.0%, heart attack 4.4%, or stroke 3.9%).9 In Nepal (24–64 years), 2% had major cardiovascular events,10 in China (35–74 years) 3.3% in men and 3.6% in women SRCVD,11 in Australia (≥25 years) (2007–2008) the prevalence of SRCVD (heart attack or stroke) was 4.5%.12 To our knowledge, we could not find national information on SRCVD in Ecuador.13,14 CVDs contribute to 24% of mortality in 2016 in Ecuador.15 Using the mortality national registry in Ecuador, the myocardial infarction mortality rate increased from 51 in 2012 to 157 in 2016 deaths per 100,000,16 and mortality due to ischemic heart disease increased in Ecuador in the period 2001–2016.17 Ecuador’s population (16.9 million) consists of a mixture various ethnic groups, ranging from Mestizo (71.9%) to Afro Ecuadorian (4.3%).18\nFactors associated with SRCVD include sociodemographic factors, behavioural and biological CVD risk risk factors. 
Sociodemographic factors associated with SRCVD include, older age,9,19,20 men,9,20 women,4,19 low socioeconomic status,21 lower education,6,9,20 and ethnicity.22 Behavioural CVD risk factors associated with SRCVD may include, smoking/tobacco use,3,6,20 past smoking,4 physical inactivity,4,6,23 inadequate or low fruit and vegetable consumption,10,23 high intake of sodium and sodium chloride (regular salt),24,25 and psychological distress.12 Biological CVD risk factors associated with SRCVD may include, hypertension,3,19–23,26,27 diabetes,3,6,20–23,26 obesity,3,4,6,21,23,26 abnormal cholesterol values,19 and poor oral health (edentulism).28 This study aimed to determine the prevalence and correlates of SRCVDs in a national population-based survey among adults in Ecuador in 2018.", "Sample and Procedures This is a secondary analysis conducted using nationally representative population-based and cross-sectional data (18–69 years old) from the “2018 Ecuador STEPS survey.”29 The 2018 Ecuador STEPS survey data and more detailed sampling methods can be found elsewhere.30 Briefly, using a three-stage cluster sampling method, at the first stage primary sampling units (PSUs) were selected by stratum, at the second stage within each PSU selected in the first stage 12 occupied households were selected, and at the third stage, one participant (18–69 years) was selected from each household.30 Selected individuals were assessed with an interview-administered questionnaire, physical and biochemical measurements.30 Research data were collected using electronic tablet devices.30 The “Ethical Review Committee of the Ecuador Ministry of Health” approved the study.30\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nThis is a secondary analysis conducted using nationally representative population-based and cross-sectional data (18–69 years old) from the “2018 Ecuador STEPS survey.”29 The 2018 Ecuador STEPS survey data and more detailed sampling methods can be found elsewhere.30 Briefly, using a three-stage cluster sampling method, at the first stage primary sampling units (PSUs) were selected by stratum, at the second stage within each PSU selected in the first stage 12 occupied households were selected, and at the third stage, one participant (18–69 years) was selected from each household.30 Selected individuals were assessed with an interview-administered questionnaire, physical and biochemical measurements.30 Research data were collected using electronic tablet devices.30 The “Ethical Review Committee of the Ecuador Ministry of Health” approved the study.30\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. 
Informed consent was obtained from all participants included in the study.\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nMeasures Outcome variable: History of CVDs was assessed with the question, “Have you ever had a heart attack or chest pain from heart disease (angina) or a stroke (cerebrovascular accident or incident)?” (Yes, No).30\nSociodemographic covariates included age, sex, highest level of formal education, and ethnicity.30\nBehavioural covariates included current and past smoking tobacco, daily servings of fruit and vegetable intake, and “low, moderate or high physical activity based on the Global Physical Activity Questionnaire”.30,31 Salt intake was assessed with the item, “Do you add salt to food at the table?”30 Responses were trichotomized into 1=never, 2=raley or sometimes, and 3=often or always. Alcohol dependence was sourced from the “Alcohol Use Disorder Identification Test=AUDIT” (items 4–6, ≥4 total scores).32 Alcohol family problems was sourced from the item, “During the past 12 months, have you had family problems or problems with your partner due to someone else’s drinking?” (1=yes: > monthly to 4=once or twice).30\nBiological covariates included measured Body Mass Index (BMI) classified as “<18.5kg/m2 underweight, 18.5–24.4kg/m2 normal weight, 25–29.9kg/m2 overweight and ≥30 kg/m2 obesity”.33 Hypertension or raised blood pressure (BP) was defined as “systolic BP ≥140 mm Hg and/or diastolic BP ≥90 mm Hg or where the participant is currently on antihypertensive medication”.34 Diabetes was defined as “fasting plasma glucose levels ≥7.0 mmol/L (126 mg/dl); or using insulin or oral hypoglycaemic drugs; or having a history of diagnosis of diabetes”.35 Raised total cholesterol was defined as “fasting TC ≥5.0 mmol/L or currently on medication for raised cholesterol”.35 Self-rated oral health status (AROH) was sourced from two items, 1) “How would you describe the state of your teeth, and 2) How would you describe the state of your gums?”30 Poor SROH was defined as “having poor or very poor status of teeth and/or gums, and good oral health as having average, good, very good or excellent status of teeth and/or gums”, in line with previous research.36 Cronbach alpha’s for the two item SROH scale was 0.74 in this sample.\nOutcome variable: History of CVDs was assessed with the question, “Have you ever had a heart attack or chest pain from heart disease (angina) or a stroke (cerebrovascular accident or incident)?” (Yes, No).30\nSociodemographic covariates included age, sex, highest level of formal education, and ethnicity.30\nBehavioural covariates included current and past smoking tobacco, daily servings of fruit and vegetable intake, and “low, moderate or high physical activity based on the Global Physical Activity Questionnaire”.30,31 Salt intake was assessed with the item, “Do you add salt to food at the table?”30 Responses were trichotomized into 1=never, 2=raley or sometimes, and 3=often or always. 
Alcohol dependence was sourced from the “Alcohol Use Disorder Identification Test=AUDIT” (items 4–6, ≥4 total scores).32 Alcohol family problems was sourced from the item, “During the past 12 months, have you had family problems or problems with your partner due to someone else’s drinking?” (1=yes: > monthly to 4=once or twice).30\nBiological covariates included measured Body Mass Index (BMI) classified as “<18.5kg/m2 underweight, 18.5–24.4kg/m2 normal weight, 25–29.9kg/m2 overweight and ≥30 kg/m2 obesity”.33 Hypertension or raised blood pressure (BP) was defined as “systolic BP ≥140 mm Hg and/or diastolic BP ≥90 mm Hg or where the participant is currently on antihypertensive medication”.34 Diabetes was defined as “fasting plasma glucose levels ≥7.0 mmol/L (126 mg/dl); or using insulin or oral hypoglycaemic drugs; or having a history of diagnosis of diabetes”.35 Raised total cholesterol was defined as “fasting TC ≥5.0 mmol/L or currently on medication for raised cholesterol”.35 Self-rated oral health status (AROH) was sourced from two items, 1) “How would you describe the state of your teeth, and 2) How would you describe the state of your gums?”30 Poor SROH was defined as “having poor or very poor status of teeth and/or gums, and good oral health as having average, good, very good or excellent status of teeth and/or gums”, in line with previous research.36 Cronbach alpha’s for the two item SROH scale was 0.74 in this sample.\nData Analysis Considering the clustered study design, all statistical analyses were done using “STATA software version 14.0 (Stata Corporation, College Station, TX, USA).” Unadjusted and adjusted logistic regression was used to calculate predictors of SRCVD. Variables with p-values <0.1 in univariate analysis (age group, educational level, ethnicity, family alcohol problems, smoking status, alcohol dependence, body weight status, hypertension, and self-rated oral health) were included in the final adjusted model. P-values of below 0.05 were accepted as significant and missing values were excluded from the analysis.\nConsidering the clustered study design, all statistical analyses were done using “STATA software version 14.0 (Stata Corporation, College Station, TX, USA).” Unadjusted and adjusted logistic regression was used to calculate predictors of SRCVD. Variables with p-values <0.1 in univariate analysis (age group, educational level, ethnicity, family alcohol problems, smoking status, alcohol dependence, body weight status, hypertension, and self-rated oral health) were included in the final adjusted model. 
P-values of below 0.05 were accepted as significant and missing values were excluded from the analysis.", "This is a secondary analysis conducted using nationally representative population-based and cross-sectional data (18–69 years old) from the “2018 Ecuador STEPS survey.”29 The 2018 Ecuador STEPS survey data and more detailed sampling methods can be found elsewhere.30 Briefly, using a three-stage cluster sampling method, at the first stage primary sampling units (PSUs) were selected by stratum, at the second stage within each PSU selected in the first stage 12 occupied households were selected, and at the third stage, one participant (18–69 years) was selected from each household.30 Selected individuals were assessed with an interview-administered questionnaire, physical and biochemical measurements.30 Research data were collected using electronic tablet devices.30 The “Ethical Review Committee of the Ecuador Ministry of Health” approved the study.30\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.\nAll procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000. Informed consent was obtained from all participants included in the study.", "Outcome variable: History of CVDs was assessed with the question, “Have you ever had a heart attack or chest pain from heart disease (angina) or a stroke (cerebrovascular accident or incident)?” (Yes, No).30\nSociodemographic covariates included age, sex, highest level of formal education, and ethnicity.30\nBehavioural covariates included current and past smoking tobacco, daily servings of fruit and vegetable intake, and “low, moderate or high physical activity based on the Global Physical Activity Questionnaire”.30,31 Salt intake was assessed with the item, “Do you add salt to food at the table?”30 Responses were trichotomized into 1=never, 2=raley or sometimes, and 3=often or always. 
Alcohol dependence was sourced from the “Alcohol Use Disorder Identification Test=AUDIT” (items 4–6, ≥4 total scores).32 Alcohol family problems was sourced from the item, “During the past 12 months, have you had family problems or problems with your partner due to someone else’s drinking?” (1=yes: > monthly to 4=once or twice).30\nBiological covariates included measured Body Mass Index (BMI) classified as “<18.5kg/m2 underweight, 18.5–24.4kg/m2 normal weight, 25–29.9kg/m2 overweight and ≥30 kg/m2 obesity”.33 Hypertension or raised blood pressure (BP) was defined as “systolic BP ≥140 mm Hg and/or diastolic BP ≥90 mm Hg or where the participant is currently on antihypertensive medication”.34 Diabetes was defined as “fasting plasma glucose levels ≥7.0 mmol/L (126 mg/dl); or using insulin or oral hypoglycaemic drugs; or having a history of diagnosis of diabetes”.35 Raised total cholesterol was defined as “fasting TC ≥5.0 mmol/L or currently on medication for raised cholesterol”.35 Self-rated oral health status (AROH) was sourced from two items, 1) “How would you describe the state of your teeth, and 2) How would you describe the state of your gums?”30 Poor SROH was defined as “having poor or very poor status of teeth and/or gums, and good oral health as having average, good, very good or excellent status of teeth and/or gums”, in line with previous research.36 Cronbach alpha’s for the two item SROH scale was 0.74 in this sample.", "Considering the clustered study design, all statistical analyses were done using “STATA software version 14.0 (Stata Corporation, College Station, TX, USA).” Unadjusted and adjusted logistic regression was used to calculate predictors of SRCVD. Variables with p-values <0.1 in univariate analysis (age group, educational level, ethnicity, family alcohol problems, smoking status, alcohol dependence, body weight status, hypertension, and self-rated oral health) were included in the final adjusted model. P-values of below 0.05 were accepted as significant and missing values were excluded from the analysis.", "Sample and Cardiovascular Diseases Prevalence Characteristics The sample included 4,638 adults (18–69 years; with 39 median age), 58.1% were female, 30.5% had higher education, and majority (78.9%) belonged to the Mestizo ethnic group. The study response rate was 69.4%.30 One in five participants (24.7%) had low physical activity, 13.7% currently smoked tobacco, 11.8% were dependent on alcohol use, 7.0% had alcohol family problems, 58.8% had 1 or less serving of fruit and vegetables a day, and 12.45 often or always added salt to their meals. One in four respondents (25.7%) had obesity, 20.5% hypertension, 7.1% diabetes, 34.8% raised total cholesterol, and 9.7% poor oral health. 
Results: Sample and Cardiovascular Disease Prevalence Characteristics. The sample included 4,638 adults (18–69 years; median age 39 years); 58.1% were female, 30.5% had higher education, and the majority (78.9%) belonged to the Mestizo ethnic group. The study response rate was 69.4%.30 About one in four participants (24.7%) had low physical activity, 13.7% currently smoked tobacco, 11.8% were dependent on alcohol use, 7.0% had alcohol family problems, 58.8% had one or fewer servings of fruit and vegetables a day, and 12.4% often or always added salt to their meals. One in four respondents (25.7%) had obesity, 20.5% hypertension, 7.1% diabetes, 34.8% raised total cholesterol, and 9.7% poor oral health. The prevalence of SRCVD was 8.7%: 8.5% among men and 8.9% among women (see Table 1).

Table 1. Sample and Cardiovascular Disease Characteristics Among Adults, Ecuador, 2018

Variable (#missing cases)             | Sample, N (%) | SRCVD, N (%)a
All                                   | 4638          | 391 (8.7)
Age in years (#0)
  18–29                               | 1205 (30.1)   | 73 (6.3)
  30–44                               | 2034 (40.7)   | 167 (8.7)
  45–69                               | 1399 (29.3)   | 151 (11.2)
Sex (#0)
  Female                              | 2694 (58.1)   | 236 (8.9)
  Male                                | 1944 (41.9)   | 155 (8.5)
Education (#4)
  <Secondary                          | 2452 (48.2)   | 226 (9.8)
  Secondary                           | 929 (21.2)    | 59 (7.4)
  >Secondary                          | 1253 (30.5)   | 104 (7.8)
Ethnicity (#2)
  Mestizo                             | 3567 (78.9)   | 295 (8.4)
  Amerindian                          | 378 (6.5)     | 19 (4.7)
  Montubio                            | 335 (7.2)     | 39 (13.7)
  African                             | 227 (4.4)     | 20 (9.7)
  White/other                         | 129 (3.0)     | 18 (12.2)
Family alcohol problems (#0)
  No                                  | 4326 (93.0)   | 348 (8.3)
  Yes                                 | 312 (7.0)     | 43 (14.4)
Smoking status (#0)
  Never                               | 3046 (61.9)   | 227 (7.5)
  Past                                | 1007 (24.4)   | 111 (11.2)
  Current                             | 585 (13.7)    | 53 (9.9)
Alcohol dependence (#0)
  No                                  | 4145 (88.2)   | 336 (8.3)
  Yes                                 | 493 (11.8)    | 55 (11.7)
Fruit/vegetables (servings/day) (#11)
  0–1                                 | 2754 (58.8)   | 229 (8.9)
  2–3                                 | 1416 (31.2)   | 116 (8.2)
  4 or more                           | 457 (10.0)    | 45 (9.1)
Adding salt to meals (#19)
  Never                               | 2534 (52.8)   | 203 (8.0)
  Sometimes/rarely                    | 1553 (34.7)   | 135 (9.1)
  Often/always                        | 532 (12.4)    | 52 (10.8)
Physical activity (#6)
  Low                                 | 1083 (24.7)   | 95 (8.3)
  Moderate                            | 1206 (25.5)   | 102 (8.5)
  High                                | 2343 (49.8)   | 193 (8.9)
Body mass index (#168)
  Normal                              | 1486 (34.9)   | 117 (7.6)
  Under                               | 55 (1.5)      | 4 (6.9)
  Overweight                          | 1725 (37.9)   | 142 (8.6)
  Obesity                             | 1204 (25.7)   | 121 (11.0)
Hypertension (#101)
  No                                  | 3599 (79.5)   | 270 (7.9)
  Yes                                 | 938 (20.5)    | 118 (12.3)
Type 2 diabetes (#598)
  No                                  | 3713 (92.9)   | 333 (9.0)
  Yes                                 | 327 (7.1)     | 33 (10.1)
Raised cholesterol (#560)
  No                                  | 2569 (65.2)   | 225 (9.1)
  Yes                                 | 1509 (34.8)   | 144 (9.4)
Self-rated oral health (#175)
  Good                                | 3995 (90.3)   | 310 (8.0)
  Poor                                | 468 (9.7)     | 66 (14.8)

Notes: #, number. aPercentage is based on the weighted percentage of complete cases.

Associations with Self-Reported Cardiovascular Disease Prevalence. In adjusted logistic regression analysis, Montubio ethnicity (adjusted odds ratio, AOR: 1.66; 95% confidence interval, CI: 1.10–2.50), family alcohol problems (AOR: 1.78, 95% CI: 1.19–2.65), past tobacco smoking (AOR: 1.36, 95% CI: 1.02–1.81), and poor oral health status (AOR: 1.74, 95% CI: 1.19–2.54) were associated with SRCVD.
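The adjusted odds ratios reported above are exponentiated logistic regression coefficients. As a quick numerical check, an OR and its 95% CI follow from a coefficient b and standard error SE as exp(b) and exp(b ± 1.96·SE); the values below are back-calculated from the published interval for poor oral health, for illustration only:

```python
import math

# Illustrative coefficient and SE back-calculated from AOR 1.74 (95% CI 1.19-2.54)
beta = math.log(1.74)
se = (math.log(2.54) - math.log(1.19)) / (2 * 1.96)

aor = math.exp(beta)             # adjusted odds ratio
lo = math.exp(beta - 1.96 * se)  # lower 95% confidence limit
hi = math.exp(beta + 1.96 * se)  # upper 95% confidence limit
print(f"AOR {aor:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # -> AOR 1.74 (95% CI 1.19-2.54)
```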
In addition, in unadjusted analysis, older age, alcohol dependence, obesity, and hypertension were associated with SRCVD (see Table 2).

Table 2. Multivariable Associations with Self-Reported Cardiovascular Disease

Variable                 | Crude OR (95% CI) | p-value | Adjusted OR (95% CI) | p-value
Age in years
  18–29                  | 1 (Reference)     |         | 1 (Reference)        |
  30–44                  | 1.41 (1.01, 1.98) | 0.042   | 1.18 (0.81, 1.71)    | 0.389
  45–69                  | 1.87 (1.30, 2.68) | <0.001  | 1.31 (0.85, 2.03)    | 0.224
Sex
  Female                 | 1 (Reference)     |         | —                    |
  Male                   | 0.96 (0.74, 1.23) | 0.727   | —                    |
Education
  <Secondary             | 1 (Reference)     |         | 1 (Reference)        |
  Secondary              | 0.74 (0.53, 1.03) | 0.077   | 0.84 (0.58, 1.22)    | 0.369
  >Secondary             | 0.78 (0.57, 1.05) | 0.097   | 0.89 (0.64, 1.23)    | 0.487
Ethnicity
  Mestizo                | 1 (Reference)     |         | 1 (Reference)        |
  Amerindian             | 0.54 (0.27, 1.06) | 0.076   | 0.51 (0.25, 1.02)    | 0.058
  Montubio               | 1.73 (1.16, 2.59) | 0.008   | 1.66 (1.10, 2.50)    | 0.017
  African                | 1.17 (0.68, 2.01) | 0.577   | 1.21 (0.70, 2.08)    | 0.497
  White/other            | 1.51 (0.83, 2.74) | 0.173   | 1.54 (0.84, 2.83)    | 0.167
Family alcohol problems
  No                     | 1 (Reference)     |         | 1 (Reference)        |
  Yes                    | 1.86 (1.26, 2.75) | 0.002   | 1.78 (1.19, 2.65)    | 0.005
Smoking status
  Never                  | 1 (Reference)     |         | 1 (Reference)        |
  Past                   | 1.55 (1.18, 2.05) | 0.002   | 1.36 (1.02, 1.81)    | 0.038
  Current                | 1.36 (0.95, 1.95) | 0.090   | 1.24 (0.84, 1.83)    | 0.287
Alcohol dependence
  No                     | 1 (Reference)     |         | 1 (Reference)        |
  Yes                    | 1.47 (1.03, 2.10) | 0.034   | 1.29 (0.87, 1.91)    | 0.212
Fruit/vegetables (servings/day)
  0–1                    | 1 (Reference)     |         | —                    |
  2–3                    | 0.92 (0.69, 1.21) | 0.549   | —                    |
  4 or more              | 1.03 (0.68, 1.56) | 0.897   | —                    |
Adding salt to meals
  Never                  | 1 (Reference)     |         | —                    |
  Sometimes/rarely       | 1.15 (0.86, 1.54) | 0.352   | —                    |
  Often/always           | 1.39 (0.94, 2.05) | 0.103   | —                    |
Physical activity
  Low                    | 1 (Reference)     |         | —                    |
  Moderate               | 1.03 (0.73, 1.45) | 0.872   | —                    |
  High                   | 1.08 (0.81, 1.46) | 0.587   | —                    |
Body mass index
  Normal                 | 1 (Reference)     |         | 1 (Reference)        |
  Under                  | 0.89 (0.29, 2.77) | 0.842   | 1.10 (0.35, 3.45)    | 0.876
  Overweight             | 1.14 (0.83, 1.57) | 0.404   | 1.10 (0.78, 1.56)    | 0.581
  Obesity                | 1.50 (1.08, 2.07) | 0.016   | 1.36 (0.95, 1.94)    | 0.088
Hypertension
  No                     | 1 (Reference)     |         | 1 (Reference)        |
  Yes                    | 1.63 (1.24, 2.14) | <0.001  | 1.31 (0.96, 1.78)    | 0.083
Type 2 diabetes
  No                     | 1 (Reference)     |         | —                    |
  Yes                    | 1.14 (0.72, 1.79) | 0.574   | —                    |
Raised cholesterol
  No                     | 1 (Reference)     |         | —                    |
  Yes                    | 1.05 (0.80, 1.36) | 0.741   | —                    |
Self-rated oral health
  Good                   | 1 (Reference)     |         | 1 (Reference)        |
  Poor                   | 2.00 (1.41, 2.82) | <0.001  | 1.74 (1.19, 2.54)    | 0.005

Abbreviations: OR, odds ratio; CI, confidence interval. — indicates the variable was not retained in the adjusted model.
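Table 2's crude and adjusted columns come from two different model fits, a single-predictor model and the full multivariable model, which is why, for example, the hypertension OR attenuates from 1.63 (crude) to 1.31 (adjusted) once the other covariates are held constant. A minimal sketch of that contrast (unweighted, with the same hypothetical variable names as the earlier snippet):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ecuador_steps_2018.csv")  # hypothetical file, as above

# Crude model: hypertension alone
crude = smf.logit("cvd ~ htn", data=df).fit(disp=False)

# Adjusted model: hypertension plus the other retained covariates
adjusted = smf.logit(
    "cvd ~ htn + C(agegrp) + C(educ) + C(ethnicity) + famalc"
    " + C(smoke) + alcdep + C(bmicat) + poor_sroh",
    data=df,
).fit(disp=False)

print("crude OR:", round(np.exp(crude.params["htn"]), 2))
print("adjusted OR:", round(np.exp(adjusted.params["htn"]), 2))
```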
Discussion: In this nationally representative sample of adults (18–69 years) in Ecuador, the prevalence of SRCVD (8.7%) was higher than in Argentina (3.9%, 35–70 years), Brazil (6.9%, 35–70 years), Chile (3.3%, 35–70 years) and Colombia (3.8%, 35–70 years),5 in Brazil (SR stroke, 1.6%, 18 years and older),6 Colombia (5.5%, 18–69 years),7 Nepal (SRCVD, 2.0%, 24–64 years),10 China (SRCVD, 3.5%, 35–74 years),11 Australia (SRCVD, 4.5%, ≥25 years),12 and the USA (SR congestive heart failure 3.4%, angina 3.0%, heart attack 4.4%, or stroke 3.9%; 20 years and older).9 However, prevalence rates of SRCVD are difficult to compare because of differing measurements and age groups. The high prevalence of SRCVD found in Ecuador calls for large-scale community-based education campaigns and health care provision for people with CVD, including the identification of cerebrovascular events and emergency care.3,37
Consistent with previous research,9,19,20 older age (45–69 years) was positively associated with SRCVD in unadjusted analysis. Some previous studies found sex differences4,9,19,20 in the prevalence of SRCVD, while in our study sex differences did not reach significance.
Other research found an association between low socioeconomic status or lower education6,9,20,21 and SRCVD, while this survey did not show such associations. Compared with Mestizo Ecuadoreans, Montubio Ecuadoreans were more likely, and Amerindians less likely (marginally significant), to have SRCVD. In a previous study among Amerindians in Ecuador, a low prevalence of atrial fibrillation was found, which may be explained by both “racially determined short stature and frequent dietary oily fish intake.”38 The most significant predictors of the increasing mortality rate from myocardial infarction in Ecuador were living on the coast and belonging to a mixed-race group.16 The Montubio (“an aboriginal mestizo group that originates from the coastal part of Ecuador”)18 may contribute to this myocardial infarction mortality rate. In a study among older adults in Ecuador,4 people living on the coast had an increased risk of heart disease and stroke compared with those in the highlands.
In agreement with previous research,12,39,40 this study showed that psychosocial stress in the form of alcohol family problems was associated with SRCVD. Stress can increase cerebrovascular disease risk by modulating sympathomimetic activity and affecting blood pressure reactivity, the cerebral endothelium, coagulation, or heart rhythm.39 In line with former research,3,4,6,20 past tobacco use was positively associated with SRCVD. A systematic review and meta-analysis confirmed the importance of smoking as an independent risk factor for stroke.41 Contrary to expectation,4,6,10,23–25 physical inactivity, inadequate or low fruit and vegetable consumption, and frequent salt consumption were not associated with SRCVD in this study. As in a study in Nepal,10 fruit and vegetable consumption was very low in this study but, unlike in the Nepal study, was not associated with SRCVD.
Consistent with former research,3,4,6,19–23,26,27 this survey showed in univariate analysis an association of hypertension and obesity with SRCVD. Unlike some studies,3,6,19–23,26 this study did not find significant associations between diabetes or raised cholesterol and SRCVD. In line with some previous research,28 poor oral health (edentulism) was positively associated with SRCVD. Periodontal diseases have been shown to be associated with systemic diseases, such as CVD.42
The strengths of the study include a nationally representative sample of adults in Ecuador and the use of a standardized STEPS survey methodology and measurements. Study limitations include the cross-sectional design, the assessment of some variables, including CVD, by self-report, and the fact that certain relevant variables, such as psychological distress, were not assessed. Previous research comparing self-reported with hospital-coded diagnoses of CVD found SRCVD valid for epidemiological studies.43 Further, the study did not assess the duration or type of CVD, which prevented us from establishing a trend between time of CVD diagnosis and health risk behaviours. The study only included participants who had survived CVD and excluded those with CVD who had died before the survey, so our figures may be underestimates.44
Conclusion: Almost one in ten persons aged 18–69 years reported having been diagnosed with CVD in Ecuador. Several factors associated with CVD, such as being Montubio, family alcohol problems, past tobacco smoking, and poor oral health status, were identified; these can be targeted in public health interventions.
[ "intro", null, null, null, null, null, null, null, null, null ]
[ "chronic conditions", "lifestyle factors", "cardiovascular disease", "adults", "Ecuador" ]
Introduction: Globally, an estimated 17.9 million people died from cardiovascular diseases (CVDs) in 2016, representing 31% of all global deaths; of these deaths, 85% were due to heart attack and stroke.1 In persons 50 years and older in 2019, “ischaemic heart disease and stroke were the top-ranked causes of disability adjusted life years (DALYs)”.2 More than three-quarters of deaths from CVDs occur in low- and middle-income countries.1 “Heart attacks and strokes are usually acute events and are mainly caused by a blockage that prevents blood from flowing to the heart or brain.”1 Among studies in the Americas, a study of persons 60 years and older in seven urban centres in Latin America and the Caribbean found a prevalence of self-reported cardiovascular disease (SRCVD) of 20.3%,3 and a 2009 study among persons aged 60 years and older in the highlands and coastal areas of Ecuador found a prevalence of self-reported heart disease of 13.1% and of stroke of 6.4%.4 In urban and rural sites (35–70 years) in Argentina, Brazil, Chile and Colombia, the prevalence of SRCVD was 3.9%, 6.9%, 3.3%, and 3.8%, respectively.5 In a national study in Brazil (18 years and older), the self-reported stroke prevalence was 1.6%;6 in Colombia (18–69 years), SRCVD prevalence was 5.5%;7 in Mexico (50 years and older), the prevalence of self-reported stroke was 4.3% and of angina 13.6%;8 and in the USA in 2016 (20 years and older), SRCVD prevalence was 3.4% for congestive heart failure, 3.0% for angina, 4.4% for heart attack, and 3.9% for stroke.9 In Nepal (24–64 years), 2% had major cardiovascular events;10 in China (35–74 years), SRCVD prevalence was 3.3% in men and 3.6% in women;11 and in Australia (≥25 years, 2007–2008), the prevalence of SRCVD (heart attack or stroke) was 4.5%.12 To our knowledge, no national information on SRCVD in Ecuador has been published.13,14 CVDs contributed 24% of mortality in Ecuador in 2016.15 According to the national mortality registry, the myocardial infarction mortality rate in Ecuador increased from 51 deaths per 100,000 in 2012 to 157 per 100,000 in 2016,16 and mortality due to ischaemic heart disease increased over the period 2001–2016.17 Ecuador’s population (16.9 million) consists of a mixture of various ethnic groups, ranging from Mestizo (71.9%) to Afro Ecuadorian (4.3%).18 Factors associated with SRCVD include sociodemographic factors and behavioural and biological CVD risk factors. Sociodemographic factors associated with SRCVD include older age,9,19,20 male sex,9,20 female sex,4,19 low socioeconomic status,21 lower education,6,9,20 and ethnicity.22 Behavioural CVD risk factors associated with SRCVD may include smoking/tobacco use,3,6,20 past smoking,4 physical inactivity,4,6,23 inadequate or low fruit and vegetable consumption,10,23 high intake of sodium and sodium chloride (regular salt),24,25 and psychological distress.12 Biological CVD risk factors associated with SRCVD may include hypertension,3,19–23,26,27 diabetes,3,6,20–23,26 obesity,3,4,6,21,23,26 abnormal cholesterol values,19 and poor oral health (edentulism).28 This study aimed to determine the prevalence and correlates of SRCVDs in a national population-based survey among adults in Ecuador in 2018.
Background: This study aimed to determine the prevalence and correlates of self-reported cardiovascular diseases (SRCVDs) among adults in Ecuador. Methods: National cross-sectional survey data of 4,638 persons aged 18–69 years in Ecuador were analysed. Research data were collected with an interviewer-administered questionnaire and physical and biochemical measurements. Results: The prevalence of SRCVDs was 8.7%: 8.5% among men and 8.9% among women. In adjusted logistic regression analysis, being Montubio (adjusted odds ratio, AOR: 1.66; 95% confidence interval, CI: 1.10–2.50), family alcohol problems (AOR: 1.78, 95% CI: 1.19–2.65), past tobacco smoking (AOR: 1.36, 95% CI: 1.02–1.81), and poor oral health status (AOR: 1.74, 95% CI: 1.19–2.54) were associated with SRCVD. In addition, in unadjusted analysis, older age, alcohol dependence, obesity, and hypertension were associated with SRCVD. Conclusions: Almost one in ten persons aged 18–69 years had SRCVD in Ecuador. Several associated factors, including Montubio ethnicity, family alcohol problems, past smoking, and poor oral health status, were identified, which can be targeted in public health interventions.
Introduction: Globally, an estimated 17.9 million people died from cardiovascular diseases (CVDs) in 2016, representing 31% of all global deaths. Of these deaths, 85% were due to heart attack and stroke.1 In persons 50 years and older in 2019, “ischaemic heart disease and stroke were the top-ranked causes of disability adjusted life years (DALYs).”2 More than three-quarters of deaths from CVDs occur in low- and middle-income countries.1 “Heart attacks and strokes are usually acute events and are mainly caused by a blockage that prevents blood from flowing to the heart or brain.”1 Among studies in the Americas, a study (60 years and older) in seven urban centres in Latin America and the Caribbean found a prevalence of self-reported cardiovascular disease (SRCVD) of 20.3%,3 and a study among persons aged 60 years and older in the highlands and coastal areas of Ecuador in 2009 found a prevalence of self-reported heart disease of 13.1% and of stroke of 6.4%.4 In urban-rural sites (35–70 years) in Argentina, Brazil, Chile, and Colombia, the prevalence of SRCVD was 3.9%, 6.9%, 3.3%, and 3.8%, respectively.5 In a national study in Brazil (18 years and older), the self-reported stroke prevalence was 1.6%;6 in Colombia (18–69 years), SRCVD prevalence was 5.5%;7 in Mexico (50 years and older), the prevalence of self-reported stroke was 4.3% and of angina 13.6%;8 and in the USA in 2016 (20 years and older), SRCVD prevalence was 3.4% for congestive heart failure, 3.0% for angina, 4.4% for heart attack, and 3.9% for stroke.9 In Nepal (24–64 years), 2% had major cardiovascular events;10 in China (35–74 years), SRCVD prevalence was 3.3% in men and 3.6% in women;11 and in Australia (≥25 years, 2007–2008), the prevalence of SRCVD (heart attack or stroke) was 4.5%.12 To our knowledge, no national information on SRCVD in Ecuador is available.13,14 CVDs contributed to 24% of mortality in Ecuador in 2016.15 According to the national mortality registry in Ecuador, the myocardial infarction mortality rate increased from 51 deaths per 100,000 in 2012 to 157 in 2016,16 and mortality due to ischaemic heart disease increased in Ecuador over the period 2001–2016.17 Ecuador’s population (16.9 million) consists of a mixture of various ethnic groups, ranging from Mestizo (71.9%) to Afro-Ecuadorian (4.3%).18 Factors associated with SRCVD include sociodemographic factors and behavioural and biological CVD risk factors. Sociodemographic factors associated with SRCVD include older age,9,19,20 male sex,9,20 female sex,4,19 low socioeconomic status,21 lower education,6,9,20 and ethnicity.22 Behavioural CVD risk factors associated with SRCVD may include smoking/tobacco use,3,6,20 past smoking,4 physical inactivity,4,6,23 inadequate or low fruit and vegetable consumption,10,23 high intake of sodium and sodium chloride (regular salt),24,25 and psychological distress.12 Biological CVD risk factors associated with SRCVD may include hypertension,3,19–23,26,27 diabetes,3,6,20–23,26 obesity,3,4,6,21,23,26 abnormal cholesterol values,19 and poor oral health (edentulism).28 This study aimed to determine the prevalence and correlates of SRCVDs in a national population-based survey among adults in Ecuador in 2018. Conclusion: Almost one in ten persons aged 18–69 years reported having been diagnosed with CVD in Ecuador. Several associated factors for CVD, such as being Montubio, family alcohol problems, past tobacco smoking, and poor oral health status, were identified, which can be targeted in public health interventions.
Background: This study aimed to determine the prevalence and correlates of self-reported cardiovascular diseases (SRCVDs) among adults in Ecuador. Methods: National cross-sectional survey data of 4638 persons aged 18-69 years in Ecuador were analysed. Research data were collected with an interviewer-administered questionnaire and physical and biochemical measurements. Results: The prevalence of SRCVDs was 8.7%: 8.5% among men and 8.9% among women. In adjusted logistic regression analysis, being Montubio (adjusted odds ratio [AOR]: 1.66, 95% confidence interval [CI]: 1.10-2.50), family alcohol problems (AOR: 1.78, 95% CI: 1.19-2.65), past tobacco smoking (AOR: 1.36, 95% CI: 1.02-1.81), and poor oral health status (AOR: 1.74, 95% CI: 1.19-2.54) were associated with SRCVD. In addition, in unadjusted analysis, older age, alcohol dependence, obesity, and hypertension were associated with SRCVD. Conclusions: Almost one in ten persons aged 18-69 years had SRCVD in Ecuador. Several associated factors, including Montubio ethnicity, family alcohol problems, past smoking, and poor oral health status, were identified, which can be targeted in public health interventions.
7,139
249
[ 1561, 239, 424, 112, 2251, 600, 517, 770, 53 ]
10
[ "reference", "30", "years", "alcohol", "18", "srcvd", "study", "health", "ecuador", "95" ]
[ "heart attack stroke", "stroke prevalence colombia", "cardiovascular disease srcvd", "reported cardiovascular diseasen", "cardiovascular diseases prevalence" ]
null
null
[CONTENT] chronic conditions | lifestyle factors | cardiovascular disease | adults | Ecuador [SUMMARY]
null
null
[CONTENT] chronic conditions | lifestyle factors | cardiovascular disease | adults | Ecuador [SUMMARY]
[CONTENT] chronic conditions | lifestyle factors | cardiovascular disease | adults | Ecuador [SUMMARY]
[CONTENT] chronic conditions | lifestyle factors | cardiovascular disease | adults | Ecuador [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Alcohol Drinking | Cardiovascular Diseases | Cross-Sectional Studies | Ecuador | Female | Health Surveys | Humans | Life Style | Male | Middle Aged | Oral Health | Prevalence | Risk Assessment | Risk Factors | Self Report | Smoking | Time Factors | Young Adult [SUMMARY]
null
null
[CONTENT] Adolescent | Adult | Aged | Alcohol Drinking | Cardiovascular Diseases | Cross-Sectional Studies | Ecuador | Female | Health Surveys | Humans | Life Style | Male | Middle Aged | Oral Health | Prevalence | Risk Assessment | Risk Factors | Self Report | Smoking | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Alcohol Drinking | Cardiovascular Diseases | Cross-Sectional Studies | Ecuador | Female | Health Surveys | Humans | Life Style | Male | Middle Aged | Oral Health | Prevalence | Risk Assessment | Risk Factors | Self Report | Smoking | Time Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Alcohol Drinking | Cardiovascular Diseases | Cross-Sectional Studies | Ecuador | Female | Health Surveys | Humans | Life Style | Male | Middle Aged | Oral Health | Prevalence | Risk Assessment | Risk Factors | Self Report | Smoking | Time Factors | Young Adult [SUMMARY]
[CONTENT] heart attack stroke | stroke prevalence colombia | cardiovascular disease srcvd | reported cardiovascular diseasen | cardiovascular diseases prevalence [SUMMARY]
null
null
[CONTENT] heart attack stroke | stroke prevalence colombia | cardiovascular disease srcvd | reported cardiovascular diseasen | cardiovascular diseases prevalence [SUMMARY]
[CONTENT] heart attack stroke | stroke prevalence colombia | cardiovascular disease srcvd | reported cardiovascular diseasen | cardiovascular diseases prevalence [SUMMARY]
[CONTENT] heart attack stroke | stroke prevalence colombia | cardiovascular disease srcvd | reported cardiovascular diseasen | cardiovascular diseases prevalence [SUMMARY]
[CONTENT] reference | 30 | years | alcohol | 18 | srcvd | study | health | ecuador | 95 [SUMMARY]
null
null
[CONTENT] reference | 30 | years | alcohol | 18 | srcvd | study | health | ecuador | 95 [SUMMARY]
[CONTENT] reference | 30 | years | alcohol | 18 | srcvd | study | health | ecuador | 95 [SUMMARY]
[CONTENT] reference | 30 | years | alcohol | 18 | srcvd | study | health | ecuador | 95 [SUMMARY]
[CONTENT] srcvd | years | heart | factors | 2016 | stroke | prevalence | 20 | years older | deaths [SUMMARY]
null
null
[CONTENT] cvd | factors cvd montubio family | smoking tobacco poor | health status identified targeted | family alcohol problems past | past smoking tobacco poor | targeted public health interventions | targeted public health | ecuador associated | ecuador associated factors [SUMMARY]
[CONTENT] reference | 30 | srcvd | years | alcohol | study | ecuador | cvd | 18 | reference reference [SUMMARY]
[CONTENT] reference | 30 | srcvd | years | alcohol | study | ecuador | cvd | 18 | reference reference [SUMMARY]
[CONTENT] Ecuador [SUMMARY]
null
null
[CONTENT] Almost one | ten | 18-69 years | SRCVD | Ecuador ||| Montubio [SUMMARY]
[CONTENT] Ecuador ||| 4638 | 18-69 years | Ecuador ||| ||| ||| 8.7% | 8.5% | 8.9% ||| Montubio | 1.66 | 95% | CI | 1.10-2.50 | 1.78 | 95% | CI | 1.19-2.65 | 1.36 | 95% | CI | 1.02-1.81 | 1.74 | 95% | CI | 1.19-2.54 | SRCVD ||| SRCVD ||| Almost one | ten | 18-69 years | SRCVD | Ecuador ||| Montubio [SUMMARY]
[CONTENT] Ecuador ||| 4638 | 18-69 years | Ecuador ||| ||| ||| 8.7% | 8.5% | 8.9% ||| Montubio | 1.66 | 95% | CI | 1.10-2.50 | 1.78 | 95% | CI | 1.19-2.65 | 1.36 | 95% | CI | 1.02-1.81 | 1.74 | 95% | CI | 1.19-2.54 | SRCVD ||| SRCVD ||| Almost one | ten | 18-69 years | SRCVD | Ecuador ||| Montubio [SUMMARY]
Unfractionated heparin or low-molecular-weight heparin for venous thromboembolism prophylaxis after hepatic resection: A meta-analysis.
36401460
Two systematic reviews summarized the efficacy and safety of pharmacological prophylaxis for venous thromboembolism (VTE) after hepatic resection, but both lacked a discussion of the differences in the pharmacological prophylaxis of VTE in different ethnicities. Therefore, we aimed to evaluate the efficacy and safety of low-molecular-weight heparin (LMWH) or unfractionated heparin (UFH) for VTE prophylaxis in Asian and Caucasian patients who have undergone hepatic resection.
BACKGROUND
We searched PubMed, Web of Science, Embase, China National Knowledge Infrastructure, Wanfang Data, and VIP databases for studies reporting the primary outcomes of VTE incidence, bleeding events, and all-cause mortality from January 2000 to July 2022.
METHODS
Ten studies involving 4318 participants who had undergone hepatic resection were included: 6 in Asians and 4 in Caucasians. A significant difference in VTE incidence was observed between the experimental and control groups (odds ratio [OR] = 0.39, 95% confidence interval [CI]: 0.20, 0.74, P = .004). No significant difference in bleeding events and all-cause mortality was observed (OR = 1.29, 95% CI: 0.80, 2.09, P = .30; OR = 0.71, 95% CI: 0.36, 1.42, P = .33, respectively). Subgroup analyses stratified by ethnicity showed a significant difference in the incidence of VTE in Asians (OR = 0.16, 95% CI: 0.06, 0.39, P < .0001), but not in Caucasians (OR = 0.69, 95% CI: 0.39, 1.23, P = .21). No significant differences in bleeding events were found between Asians (OR = 1.60, 95% CI: 0.48, 5.37, P = .45) and Caucasians (OR = 1.11, 95% CI: 0.58, 2.12, P = .75). The sensitivity analysis showed that Ejaz's study was the main source of heterogeneity, and when Ejaz's study was excluded, a significant difference in VTE incidence was found in Caucasians (OR = 0.58, 95% CI: 0.36, 0.93, P = .02).
RESULTS
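Every pooled estimate in the results above is built from per-study odds ratios. The following short Python sketch shows how a single study's OR and Wald 95% CI are computed from a 2x2 table; the counts are hypothetical and not taken from any included study.

```python
# Worked sketch of the building block behind the pooled ORs quoted above:
# an odds ratio and Wald 95% CI from one study's 2x2 table.
# Counts are hypothetical, not drawn from any included study.
import math

a, b = 12, 488   # experimental group: VTE events, no events
c, d = 30, 470   # control group:      VTE events, no events

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI: {lo:.2f} to {hi:.2f}")
```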
This study's findings indicate that the application of UFH or LMWH for VTE prophylaxis after hepatic resection is efficacious and safe in Asians and Caucasians. It is necessary for Asians to receive drug prophylaxis for VTE after hepatic resection. This study can provide a reference for the development of guidelines in the future, especially regarding the pharmacological prevention of VTE in different ethnicities.
CONCLUSION
[ "Humans", "Heparin, Low-Molecular-Weight", "Venous Thromboembolism", "Heparin", "Treatment Outcome", "Anticoagulants", "Hemorrhage" ]
9678573
1. Introduction
VTE, which is characterized by deep venous thrombosis (DVT) or pulmonary thromboembolism, is a significant cause of morbidity and mortality in patients who have undergone open abdominal surgery.[1] The incidence of VTE is associated with increased age, obesity, malignancy, and extensive and prolonged resection. Patients undergoing hepatic resection often have most of the aforementioned risk factors and, therefore, have a higher incidence of VTE.[2,3] Currently, there is a lack of authoritative guidelines for VTE prophylaxis following hepatic resection. Previous studies have indicated that extended anticoagulation therapy after hepatic resection is both effective and safe.[4,5] However, some studies have proposed different perspectives.[3,6,7] Furthermore, a meta-analysis including 5 studies, in which most patients were from the US and Europe, indicated that perioperative chemical thromboprophylaxis reduces the incidence of VTE after hepatic resection without a significantly increased risk of bleeding, but a recent systematic review including 16 studies showed that the efficacy of VTE prophylaxis after hepatic resection has not been proven in Asian patients.[8] UFH and LMWH are recommended for VTE prophylaxis after major surgery.[9–11] Many studies have reported the efficacy and safety of UFH or LMWH for VTE prophylaxis after hepatic resection,[3,6,7,12–14] but the results are controversial, particularly regarding the use of pharmacological prophylaxis for VTE after hepatic resection in Asian populations.[8,15] Additionally, 2 systematic reviews summarized the efficacy and safety of pharmacological prophylaxis for VTE after hepatic resection,[16,17] but both lacked a discussion of the differences in the pharmacological prophylaxis of VTE in different ethnicities. Therefore, we aimed to conduct a meta-analysis quantitatively comparing UFH or LMWH for VTE prophylaxis in Asian and Caucasian patients undergoing hepatic resection.
2. Method
2.1. Ethics statements This meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.[18] The protocol for this meta-analysis was registered with PROSPERO (registration number: CRD42022349271, http://www.crd.york.ac.uk/PROSPERO/). This study was based on the published literature. Ethical approval and patient consent were not obtained.
2.2. Search strategy Two researchers independently completed the literature search for this meta-analysis, and discrepancies were resolved through full discussion. Eligible studies were searched in PubMed, Embase, Web of Science, China National Knowledge Infrastructure, Wanfang Data, and VIP databases. The retrieval time limit was January 2000 to July 2022. The language was limited to English and Chinese. The following search terms were used: (heparin OR UFH OR low molecular weight heparin OR LMWH OR enoxaparin OR lovenox OR nadroparin) AND (venous thrombus OR venous thrombus embolism OR VTE OR deep vein thrombosis OR deep venous thrombosis OR DVT OR pulmonary embolism OR PE OR pulmonary thromboembolism OR PTE OR portal vein thrombosis OR PVT OR mesenteric venous thrombosis OR hemorrhage OR hemorrhagic complication OR thromboembolic complication OR bleeding) AND (hepatectomy OR hepatectomies OR liver resection OR hepatic resection OR hepatic craniectomy OR hemi-hepatectomy OR hepatolobectomy OR surgery for colorectal liver metastases OR hepatic metastases of colorectal).
2.3. Inclusion and exclusion criteria The inclusion criteria were as follows: (1) patients had undergone hepatic resection; (2) patients in the experimental group were treated with UFH or LMWH for VTE prophylaxis after hepatic resection, and patients in the control group were not treated with pharmacological prophylaxis (the control group could receive nothing or conventional therapy such as mechanical thromboprophylaxis); (3) the outcomes of the study included at least 1 of the following: VTE, bleeding events, and all-cause mortality; and (4) the studies were cohort studies, case-control studies, randomized controlled clinical trials (RCTs), or quasi-experimental studies. The exclusion criteria were as follows: (1) case reports, reviews, editorials, animal studies, or republished literature; (2) studies without a control group; (3) studies from which data could not be extracted or for which the full text was not available; and (4) studies missing the primary outcomes.
2.4. Data extraction Two researchers independently completed the data extraction process, and discrepancies were resolved through full discussion. The following data were extracted: article title, first author, publication year, study design, patient ethnicity, intervention, patient characteristics, and outcomes.
2.5. Quality assessment Two researchers independently performed the quality assessment of each study. The Cochrane risk-of-bias tool 2.0 was used to assess the risk of bias in RCTs and quasi-experimental studies.[19] The Newcastle-Ottawa Quality Assessment Scale (NOS) was used to assess the quality of the cohort and case-control studies in this meta-analysis.[20] The NOS has 3 domains: study selection, comparability, and outcome evaluation, with 8 items and a total score of 9 points. Except for item 5, which counts as 2 points (1 point for controlling for age as a confounding factor and 1 point for controlling for other important confounding factors), all other items count as 1 point. Scores ≤ 3 indicate low-quality studies, scores of 4 to 6 medium-quality studies, and scores ≥ 7 high-quality studies.
2.6. Statistical analysis Meta-analyses were conducted using RevMan 5.3 software according to the Cochrane Handbook for Systematic Reviews of Interventions. The pooled effect size of the meta-analyses was assessed using the OR and 95% CI. I2 statistics and the Cochran Q test were used to assess statistical heterogeneity. The Mantel–Haenszel method under a fixed-effects model was applied when no significant heterogeneity was detected (I2 < 50% or P-value for heterogeneity > 0.1), and the DerSimonian–Laird method under a random-effects model was used when significant heterogeneity was detected (I2 ≥ 50% or P-value for heterogeneity ≤ 0.1). However, if obvious clinical variation was found among the included studies, a random-effects model was used. Subgroup analyses of Asian and Caucasian patients were performed. Sensitivity analyses were performed to evaluate the reliability of the results by excluding studies individually. The publication bias of the included studies was assessed using funnel plots. Statistical significance was set at P < .05.
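To make the model-selection rule in section 2.6 concrete, here is a minimal Python sketch that computes Cochran's Q, its P-value, and I2 from hypothetical 2x2 tables and then pools the log ORs under either a fixed-effects or a DerSimonian-Laird random-effects model. This is not the review's actual RevMan workflow, and inverse-variance weighting is used as a simple stand-in for the Mantel-Haenszel method.

```python
# Minimal sketch of the heterogeneity test and model choice described
# above. Study counts are hypothetical; inverse-variance weighting is a
# simplification of the Mantel-Haenszel method that RevMan applies.
import numpy as np
from scipy.stats import chi2

# columns: VTE events and group size, experimental then control
data = np.array([
    [5, 200, 14, 180],
    [3, 150, 10, 160],
    [8, 300, 15, 250],
], dtype=float)
a = data[:, 0]; b = data[:, 1] - data[:, 0]   # experimental: events, non-events
c = data[:, 2]; d = data[:, 3] - data[:, 2]   # control:      events, non-events

y = np.log(a * d / (b * c))        # per-study log odds ratios
v = 1 / a + 1 / b + 1 / c + 1 / d  # variances of the log ORs
w = 1 / v                          # inverse-variance weights

mu_fe = np.sum(w * y) / np.sum(w)  # fixed-effect pooled log OR
k = len(y)
Q = np.sum(w * (y - mu_fe) ** 2)   # Cochran's Q statistic
p_het = chi2.sf(Q, k - 1)          # P-value of the heterogeneity test
I2 = max(0.0, (Q - (k - 1)) / Q) * 100

if I2 < 50 and p_het > 0.1:        # no significant heterogeneity
    mu, var = mu_fe, 1 / np.sum(w)
else:                              # DerSimonian-Laird random effects
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)
    mu, var = np.sum(w_re * y) / np.sum(w_re), 1 / np.sum(w_re)

lo, hi = np.exp(mu - 1.96 * np.sqrt(var)), np.exp(mu + 1.96 * np.sqrt(var))
print(f"I2 = {I2:.0f}%, P(het) = {p_het:.2f}, "
      f"pooled OR = {np.exp(mu):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```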
3. Results
3.1. Literature search and study characteristics The initial literature search yielded a total of 610 articles. After 136 duplicates were removed, 474 articles remained. After screening titles and abstracts, 450 articles were excluded, and the remaining 24 articles were eligible for full-text evaluation. Finally, a total of 10 studies were included in the analysis.[3,5–7,12,21–25] Among these studies, 4 were in Chinese[7,23–25] and 6 were in English.[3,5,6,12,21,22] The flow diagram of the literature search is shown in Figure 1. All the included studies were cohort studies. A total of 4318 patients underwent liver resection, of whom 2551 were in the experimental group and 1767 in the control group. The main characteristics of the 10 studies are summarized in Table 1. The NOS was used to evaluate the quality of the eligible cohort studies; the scoring details are shown in Table 2.
3.2. VTE events All 10 studies reported the incidence of VTE events in 4318 patients,[3,5–7,12,21–25] including 2551 in the experimental group and 1767 in the control group. There was no significant statistical heterogeneity between the studies (I2 = 44%, P = .08); however, because of the obvious clinical variation among the included studies, the random-effects model was used for this analysis. The results showed a significant difference in the overall rate of VTE between the experimental and control groups (OR = 0.39, 95% CI: 0.20, 0.74, P = .004) (Fig. 2: forest plot comparing the efficacy of the experimental group vs the control group on VTE events).
3.3. Bleeding events Seven studies reported the incidence of bleeding events in 3074 patients,[6,7,12,21,23–25] including 1709 in the experimental group and 1365 in the control group. There was no heterogeneity between the studies (I2 = 0%, P = .68); however, because of the clinical variation among the included studies, the random-effects model was used for this analysis. The results showed no significant difference in the overall rate of bleeding events between the experimental and control groups (OR = 1.29, 95% CI: 0.80, 2.09, P = .30) (Fig. 3: forest plot comparing the safety of the experimental group vs the control group on bleeding events).
3.4. All-cause mortality Five studies reported all-cause mortality,[3,6,7,12,22] 1 of which reported only the total number of deaths in both groups,[6] so 4 studies with a total of 1484 patients were included in this meta-analysis,[3,7,12,22] 874 in the experimental group and 610 in the control group. There was no heterogeneity between the studies (I2 = 0%, P = .48), but because of the clinical variation among the included studies, the random-effects model was used for this analysis. The results showed no significant difference in all-cause mortality between the experimental and control groups (OR = 0.71, 95% CI: 0.36, 1.42, P = .33) (Fig. 4: forest plot comparing the safety of the experimental group vs the control group on all-cause mortality).
3.5. Subgroup analyses and sensitivity analysis Using a random-effects model, subgroup analyses stratified by ethnicity showed a significant difference in the overall rate of VTE between the experimental and control groups in the Asian subgroup of 6 studies[5,7,12,23–25] (OR = 0.16, 95% CI: 0.06, 0.39, P < .0001), but no significant difference in the Caucasian subgroup of 4 studies[3,6,21,22] (OR = 0.69, 95% CI: 0.39, 1.23, P = .21) (Fig. 5: forest plot of subgroup analysis comparing the efficacy of the experimental group vs the control group on VTE events in Asians and Caucasians). No significant difference in the incidence of bleeding events with UFH or LMWH for VTE prophylaxis after hepatic resection was found in the Asian subgroup of 5 studies[7,12,23–25] (OR = 1.60, 95% CI: 0.48, 5.37, P = .45) or the Caucasian subgroup of 2 studies[6,21] (OR = 1.11, 95% CI: 0.58, 2.12, P = .75) (Fig. 6: forest plot of subgroup analysis comparing the safety of the experimental group vs the control group on bleeding events in Asians and Caucasians). Using a random-effects model, the sensitivity analysis showed a significant difference in VTE incidence between the experimental and control groups in Caucasians when Ejaz's study[3] was excluded (OR = 0.58, 95% CI: 0.36, 0.93, P = .02).
3.6. Publication bias The asymmetric funnel plot for the outcome of VTE suggested publication bias in this meta-analysis (Fig. 7: funnel plot for the outcome of VTE events). No significant publication bias was found for bleeding events or all-cause mortality.
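The leave-one-out sensitivity analysis reported in section 3.5 can be sketched as follows: the ORs are re-pooled k times, each time omitting one study, to see whether any single study drives the pooled result. This is an illustrative Python outline with hypothetical study inputs, not the review's actual computation.

```python
# Sketch of a leave-one-out sensitivity analysis using DerSimonian-Laird
# random-effects pooling. The per-study ORs and variances are hypothetical.
import numpy as np

def dl_pool(y, v):
    """DerSimonian-Laird pooled log OR and its variance."""
    w = 1 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), 1 / np.sum(w_re)

y = np.log(np.array([0.45, 0.62, 1.10, 0.38, 0.70]))  # per-study ORs
v = np.array([0.10, 0.08, 0.15, 0.12, 0.09])          # var of the log ORs

for i in range(len(y)):
    keep = np.arange(len(y)) != i                     # drop study i
    mu, var = dl_pool(y[keep], v[keep])
    lo, hi = np.exp(mu - 1.96 * np.sqrt(var)), np.exp(mu + 1.96 * np.sqrt(var))
    print(f"omitting study {i + 1}: OR = {np.exp(mu):.2f} "
          f"(95% CI {lo:.2f}-{hi:.2f})")
```

A pooled estimate that changes sign or significance when one study is dropped, as happened here when Ejaz's study was excluded, flags that study as a source of heterogeneity.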
null
null
[ "2.1. Ethics statements", "2.2. Search strategy", "2.3. Inclusion and exclusion criteria", "2.4. Data extraction", "2.5. Quality assessment", "2.6. Statistical analysis", "3.1. Literature search and study characteristics", "3.2. VTE events", "3.3. Bleeding events", "3.4. All-cause mortality", "3.5. Subgroup analyses and sensitivity analysis", "3.6. Publication bias", "Acknowledgments", "Author contributions" ]
[ "This meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.[18] The protocol for this meta-analysis was registered with PROSPERO (registration number: CRD42022349271, http://www.crd.york.ac.uk/PROSPERO/). This study was based on the published literature. Ethical approval and patient consent were not obtained.", "Two researchers independently completed the literature search for this meta-analysis, and discrepancies were resolved through full discussion. Eligible studies were searched in PubMed, Embase, Web of Science, China National Knowledge Infrastructure, Wanfang Data, and VIP databases. The retrieval time limit was January 2000 to July 2022. The language used was limited to English and Chinese. The following search terms were used: (heparin OR UFH OR UFH OR low molecular weight heparin OR LMWH OR LMWH OR enoxaparin OR lovenox OR nadroparin) AND (venous thrombus OR venous thrombus embolism OR VTE OR deep vein thrombosis OR deep venous thrombosis (DVT) OR pulmonary embolism OR PE OR pulmonary thromboembolism OR PTE OR portal vein thrombosis OR PVT OR mesenteric venous thrombosis OR hemorrhage OR hemorrhagic complication OR thromboembolic complication OR bleeding) AND (hepatectomy OR hepatectomies OR liver resection OR hepatic resection OR hepatic craniectomy OR hemi-hepatectomy OR hepatolobectomy OR surgery for colorectal liver metastases OR hepatic metastases of colorectal).", "The inclusion criteria were as follows:\n(1) patients who had undergone hepatic resection;\n(2) patients in an experimental group who were treated with UFH of LWMH for VTE prophylaxis after hepatic resection and, patients in a control group who were not treated with pharmacological prophylaxis (the control group could receive nothing or conventional therapy such as mechanical thromboprophylaxis);\n(3) the outcomes of the study included at least 1 of the following: VTE, bleeding events, and all-cause mortality; and\n(4) studies were cohort studies, case-control studies, randomized controlled clinical trials (RCTs), or quasi-experimental studies.\nExclusion criteria were as follows:\n(1) case reports, reviews, editorials, animal studies, or republished literature;\n(2) studies without a control group;\n(3) studies in which data research could not be extracted or the full text was not available; and\n(4) studies missing primary outcome.", "Two researchers independently completed the data extraction process, and discrepancies were resolved through full discussion. The following data were extracted: the article title, first author, publication year, study design, patient ethnicity, intervention, patient characteristics, and outcomes.", "Two researchers independently performed the quality assessment of each study. The Cochrane risk-of-bias tool 2.0 was used to assess the risk of bias in RCTs and quasi-experimental studies.[19] The Newcastle-Ottawa Quality Assessment Scale (NOS) was used to assess the quality of the cohort and case-control studies in this meta-analysis.[20] The NOS has 3 domains: selection of study, comparability, and outcome evaluation, with 8 items and a total score of 9 points. Except for item 5, which counts as 2 points (1 point for controlling age confounding factors and 1 point for controlling other important confounding factors), all other items count as 1 point. 
Scores ≤ 3 are regarded as low-quality studies, scores between 4 and 6 are considered as medium-quality studies, and scores > 7 are regarded as high-quality studies.", "Meta-analyses were conducted using RevMan5.3 software according to the Cochrane Manual for Systematic Evaluation of Interventions. The pooled effect size of the meta-analyses was assessed using the OR and 95% CI. I2 statistics and the Cochran Q test were used to assess statistical heterogeneity. The Mantel–Haenszel method for the fixed-effects model was applied when no significant heterogeneity was detected (I2 < 50% or P-value for heterogeneity > 0.1). The Der Simonian–Laird method for the random-effects model was used when significant heterogeneity was detected (I2 ≥ 50% or P-value for heterogeneity ≤ 0.1). However, if obvious variation in the included studies was found, a random effects model was used. Subgroup analyses of the Asian and Caucasian patients were performed. Sensitivity analyses were performed to evaluate the reliability of the results by excluding studies individually. The publication bias of the included studies was assessed using funnel plots. Statistical significance was set at P < .05.", "The initial literature search yielded a total of 610 articles. After removing duplicates, 136 articles remained. After screening titles and abstracts, 450 articles were excluded. Then, 24 articles met our inclusion criteria and were eligible for full-text evaluation. Finally, a total of 10 studies were included in the analysis.[3,5–7,12,21–25] Among these studies, 4 were in Chinese,[7,23–25] and 6 were in English.[3,5,6,12,21,22] The flow diagram is shown in Figure 1.\nThe flow diagram of the literature search.\nAll the included studies were cohort studies. A total of 4318 patients underwent liver resection, of which 2551 and 1767 patients were in the experimental and control groups, respectively. The main characteristics of the 10 studies are summarized in Table 1. The NOS was used to evaluate the quality of the eligible cohort studies, and the scoring details are shown in Table 2.\nCharacteristics of included studies.\nQuality assessment of included studies.", "All studies reported the incidence of VTE events in 4318 patients,[3,5–7,12,21–25] including 2551 in the experimental group and 1767 in the control group. There was no significant heterogeneity between the studies (I2 = 44%, P = .08); however, the variation in the included studies was significant, and the random-effects model was used for this analysis. The results showed a significant difference in the overall rate of VTE between the experimental and control groups (OR = 0.39, 95% CI: 0.20, 0.74, P = .004) (Fig. 2).\nForest plot comparing the efficacy of the experimental group vs. the control group on VTE events. VTE = venous thromboembolism.", "Seven studies reported the incidence of bleeding events in 3074 patients,[6,7,12,21,23–25] including 1709 in the experimental group and 1365 in the control group. There was no heterogeneity between the studies (I2 = 0%, P = .68); however, the variation in the included studies was significant, and the random-effects model was used for this analysis. The results showed that there was no significant difference in the overall rate of bleeding events between the experimental and control groups (OR = 1.29, 95% CI: 0.80, 2.09, P = .30) (Fig. 3).\nForest plot comparing the safety of the experimental group vs. 
the control group on bleeding events.", "Five studies reported all-cause mortality,[3,6,7,12,22] 1 of which only reported the total number of deaths in both groups,[6] and 4 studies included a total of 1484 patients in this meta-analysis,[3,7,12,22] including 874 in the experimental group and 610 in the control group. There was no heterogeneity between the studies (I2 = 0%, P = .48) but the variation in the included studies was significant, and the random-effects model was used for this analysis. The results showed that there was no significant difference in all-cause mortality between the experimental and control groups (OR = 0.71, 95% CI: 0.36, 1.42, P = .33) (Fig. 4).\nForest plot comparing the safety of the experimental group vs. the control group on all-cause mortality.", "Using a random-effects model, subgroup analyses stratified by ethnicity showed a significant difference in the overall rate of VTE between the experimental and control groups in the Asian subgroup that included 6 studies[5,7,12,23–25] (OR = 0.16, 95% CI: 0.06, 0.39, P < .0001), but no significant difference was observed in the Caucasian subgroup that included 4 studies[3,6,21,22] (OR = 0.69, 95% CI: 0.39, 1.23, P = .21) (Fig. 5). No significant difference in the incidence of bleeding events with UFH or LMWH for VTE prophylaxis after hepatic resection was found in the Asian subgroup that included 5 studies[7,12,23–25] (OR = 1.60, 95% CI: 0.48, 5.37, P = .45) or the Caucasian subgroup that included 2 studies[6,21] (OR = 1.11, 95% CI: 0.58, 2.12, P = .75) (Fig. 6).\nForest plot of subgroup analysis comparing the efficacy of the experimental group vs. the control group on VTE events in Asians and Caucasians. VTE = venous thromboembolism.\nForest plot of subgroup analysis comparing the safety of the experimental group vs. the control group on bleeding events in Asians and Caucasians.\nUsing a random-effects model, sensitivity analysis showed a significant difference in the VTE incidence between the experimental and control groups in Caucasians when Ejaz’s study[3] was excluded (OR = 0.58, 95% CI: 0.36, 0.93, P = .02).", "The asymmetric funnel plot for the outcome of VTE suggested publication bias in this meta-analysis (Fig. 7). No significant publication bias was found for bleeding or all-cause mortality events.\nFunnel plot for the outcome of VTE events. VTE = venous thromboembolism.", "We would like to thank Editage (www.editage.cn) for English language editing.", "Conceptualization: Wentao Zhang, Changhong Du.\nData curation: Wentao Zhang, Baoyue Hu.\nFormal analysis: Wentao Zhang, Xinchun Wei, Baoyue Hu.\nMethodology: Wentao Zhang, Changhong Du, Baoyue Hu.\nSoftware: Wentao Zhang, Shiwei Yang.\nSupervision: Wentao Zhang.\nWriting – original draft: Wentao Zhang.\nWriting – review & editing: Wentao Zhang, Baoyue Hu." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Method", "2.1. Ethics statements", "2.2. Search strategy", "2.3. Inclusion and exclusion criteria", "2.4. Data extraction", "2.5. Quality assessment", "2.6. Statistical analysis", "3. Results", "3.1. Literature search and study characteristics", "3.2. VTE events", "3.3. Bleeding events", "3.4. All-cause mortality", "3.5. Subgroup analyses and sensitivity analysis", "3.6. Publication bias", "4. Discussion", "Acknowledgments", "Author contributions" ]
[ "VTE which is characterized by deep venous thrombosis (DVT) or pulmonary thromboembolism, is a significant cause of morbidity and mortality in patients who have undergone open abdominal surgery.[1] The incidence of VTE is associated with increased age, obesity, malignancy, and extensive and prolonged resection. Patients undergoing hepatic resection often have most of the aforementioned risk factors and, therefore, have a higher incidence of VTE.[2,3] Currently, there is a lack of authoritative guidelines for VTE prophylaxis following hepatic resection. Previous studies have indicated that extended anticoagulation therapy after hepatic resection is both effective and safe.[4,5] However, some studies have proposed different perspective.[3,6,7] Furthermore, a meta-analysis including 5 studies in which most patients were from the US and Europe, indicated that the application of perioperative chemical thromboprophylaxis reduces the incidence of VTE after hepatic resection without a significantly increased risk of bleeding, but a recent systematic review including 16 studies showed that the efficacy of VTE prophylaxis after hepatic resection has not been proven in Asian patients.[8] UFH and LMWH are recommended as VTE prophylaxis after major surgery.[9–11] Many studies have reported the efficacy and safety of UFH or LMWH for VTE prophylaxis after hepatic resection.[3,6,7,12–14] However, these results are controversial, particularly regarding the use of pharmacological prophylaxis for VTE after hepatic resection in Asian populations.[8,15] Additionally, 2 systematic reviews summarized the efficacy and safety of pharmacological prophylaxis for VTE after hepatic resection,[16,17] but both lacked a discussion of the difference in the pharmacological prophylaxis of VTE in different ethnicities. Therefore, we aimed to conduct a meta-analysis to quantitatively compare patients undergoing hepatic resection prophylaxis for VTE with UFH or LMWH among Asian and Caucasian patients.", "2.1. Ethics statements This meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.[18] The protocol for this meta-analysis was registered with PROSPERO (registration number: CRD42022349271, http://www.crd.york.ac.uk/PROSPERO/). This study was based on the published literature. Ethical approval and patient consent were not obtained.\nThis meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.[18] The protocol for this meta-analysis was registered with PROSPERO (registration number: CRD42022349271, http://www.crd.york.ac.uk/PROSPERO/). This study was based on the published literature. Ethical approval and patient consent were not obtained.\n2.2. Search strategy Two researchers independently completed the literature search for this meta-analysis, and discrepancies were resolved through full discussion. Eligible studies were searched in PubMed, Embase, Web of Science, China National Knowledge Infrastructure, Wanfang Data, and VIP databases. The retrieval time limit was January 2000 to July 2022. The language used was limited to English and Chinese. 
The following search terms were used: (heparin OR UFH OR UFH OR low molecular weight heparin OR LMWH OR LMWH OR enoxaparin OR lovenox OR nadroparin) AND (venous thrombus OR venous thrombus embolism OR VTE OR deep vein thrombosis OR deep venous thrombosis (DVT) OR pulmonary embolism OR PE OR pulmonary thromboembolism OR PTE OR portal vein thrombosis OR PVT OR mesenteric venous thrombosis OR hemorrhage OR hemorrhagic complication OR thromboembolic complication OR bleeding) AND (hepatectomy OR hepatectomies OR liver resection OR hepatic resection OR hepatic craniectomy OR hemi-hepatectomy OR hepatolobectomy OR surgery for colorectal liver metastases OR hepatic metastases of colorectal).\nTwo researchers independently completed the literature search for this meta-analysis, and discrepancies were resolved through full discussion. Eligible studies were searched in PubMed, Embase, Web of Science, China National Knowledge Infrastructure, Wanfang Data, and VIP databases. The retrieval time limit was January 2000 to July 2022. The language used was limited to English and Chinese. The following search terms were used: (heparin OR UFH OR UFH OR low molecular weight heparin OR LMWH OR LMWH OR enoxaparin OR lovenox OR nadroparin) AND (venous thrombus OR venous thrombus embolism OR VTE OR deep vein thrombosis OR deep venous thrombosis (DVT) OR pulmonary embolism OR PE OR pulmonary thromboembolism OR PTE OR portal vein thrombosis OR PVT OR mesenteric venous thrombosis OR hemorrhage OR hemorrhagic complication OR thromboembolic complication OR bleeding) AND (hepatectomy OR hepatectomies OR liver resection OR hepatic resection OR hepatic craniectomy OR hemi-hepatectomy OR hepatolobectomy OR surgery for colorectal liver metastases OR hepatic metastases of colorectal).\n2.3. 
Inclusion and exclusion criteria The inclusion criteria were as follows:\n(1) patients who had undergone hepatic resection;\n(2) patients in an experimental group who were treated with UFH of LWMH for VTE prophylaxis after hepatic resection and, patients in a control group who were not treated with pharmacological prophylaxis (the control group could receive nothing or conventional therapy such as mechanical thromboprophylaxis);\n(3) the outcomes of the study included at least 1 of the following: VTE, bleeding events, and all-cause mortality; and\n(4) studies were cohort studies, case-control studies, randomized controlled clinical trials (RCTs), or quasi-experimental studies.\nExclusion criteria were as follows:\n(1) case reports, reviews, editorials, animal studies, or republished literature;\n(2) studies without a control group;\n(3) studies in which data research could not be extracted or the full text was not available; and\n(4) studies missing primary outcome.\nThe inclusion criteria were as follows:\n(1) patients who had undergone hepatic resection;\n(2) patients in an experimental group who were treated with UFH of LWMH for VTE prophylaxis after hepatic resection and, patients in a control group who were not treated with pharmacological prophylaxis (the control group could receive nothing or conventional therapy such as mechanical thromboprophylaxis);\n(3) the outcomes of the study included at least 1 of the following: VTE, bleeding events, and all-cause mortality; and\n(4) studies were cohort studies, case-control studies, randomized controlled clinical trials (RCTs), or quasi-experimental studies.\nExclusion criteria were as follows:\n(1) case reports, reviews, editorials, animal studies, or republished literature;\n(2) studies without a control group;\n(3) studies in which data research could not be extracted or the full text was not available; and\n(4) studies missing primary outcome.\n2.4. Data extraction Two researchers independently completed the data extraction process, and discrepancies were resolved through full discussion. The following data were extracted: the article title, first author, publication year, study design, patient ethnicity, intervention, patient characteristics, and outcomes.\nTwo researchers independently completed the data extraction process, and discrepancies were resolved through full discussion. The following data were extracted: the article title, first author, publication year, study design, patient ethnicity, intervention, patient characteristics, and outcomes.\n2.5. Quality assessment Two researchers independently performed the quality assessment of each study. The Cochrane risk-of-bias tool 2.0 was used to assess the risk of bias in RCTs and quasi-experimental studies.[19] The Newcastle-Ottawa Quality Assessment Scale (NOS) was used to assess the quality of the cohort and case-control studies in this meta-analysis.[20] The NOS has 3 domains: selection of study, comparability, and outcome evaluation, with 8 items and a total score of 9 points. Except for item 5, which counts as 2 points (1 point for controlling age confounding factors and 1 point for controlling other important confounding factors), all other items count as 1 point. Scores ≤ 3 are regarded as low-quality studies, scores between 4 and 6 are considered as medium-quality studies, and scores > 7 are regarded as high-quality studies.\nTwo researchers independently performed the quality assessment of each study. 
The Cochrane risk-of-bias tool 2.0 was used to assess the risk of bias in RCTs and quasi-experimental studies.[19] The Newcastle-Ottawa Quality Assessment Scale (NOS) was used to assess the quality of the cohort and case-control studies in this meta-analysis.[20] The NOS has 3 domains: selection of study, comparability, and outcome evaluation, with 8 items and a total score of 9 points. Except for item 5, which counts as 2 points (1 point for controlling age confounding factors and 1 point for controlling other important confounding factors), all other items count as 1 point. Scores ≤ 3 are regarded as low-quality studies, scores between 4 and 6 are considered as medium-quality studies, and scores > 7 are regarded as high-quality studies.\n2.6. Statistical analysis Meta-analyses were conducted using RevMan5.3 software according to the Cochrane Manual for Systematic Evaluation of Interventions. The pooled effect size of the meta-analyses was assessed using the OR and 95% CI. I2 statistics and the Cochran Q test were used to assess statistical heterogeneity. The Mantel–Haenszel method for the fixed-effects model was applied when no significant heterogeneity was detected (I2 < 50% or P-value for heterogeneity > 0.1). The Der Simonian–Laird method for the random-effects model was used when significant heterogeneity was detected (I2 ≥ 50% or P-value for heterogeneity ≤ 0.1). However, if obvious variation in the included studies was found, a random effects model was used. Subgroup analyses of the Asian and Caucasian patients were performed. Sensitivity analyses were performed to evaluate the reliability of the results by excluding studies individually. The publication bias of the included studies was assessed using funnel plots. Statistical significance was set at P < .05.\nMeta-analyses were conducted using RevMan5.3 software according to the Cochrane Manual for Systematic Evaluation of Interventions. The pooled effect size of the meta-analyses was assessed using the OR and 95% CI. I2 statistics and the Cochran Q test were used to assess statistical heterogeneity. The Mantel–Haenszel method for the fixed-effects model was applied when no significant heterogeneity was detected (I2 < 50% or P-value for heterogeneity > 0.1). The Der Simonian–Laird method for the random-effects model was used when significant heterogeneity was detected (I2 ≥ 50% or P-value for heterogeneity ≤ 0.1). However, if obvious variation in the included studies was found, a random effects model was used. Subgroup analyses of the Asian and Caucasian patients were performed. Sensitivity analyses were performed to evaluate the reliability of the results by excluding studies individually. The publication bias of the included studies was assessed using funnel plots. Statistical significance was set at P < .05.", "This meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses.[18] The protocol for this meta-analysis was registered with PROSPERO (registration number: CRD42022349271, http://www.crd.york.ac.uk/PROSPERO/). This study was based on the published literature. Ethical approval and patient consent were not obtained.", "Two researchers independently completed the literature search for this meta-analysis, and discrepancies were resolved through full discussion. Eligible studies were searched in PubMed, Embase, Web of Science, China National Knowledge Infrastructure, Wanfang Data, and VIP databases. The retrieval time limit was January 2000 to July 2022. 
The language used was limited to English and Chinese. The following search terms were used: (heparin OR UFH OR UFH OR low molecular weight heparin OR LMWH OR LMWH OR enoxaparin OR lovenox OR nadroparin) AND (venous thrombus OR venous thrombus embolism OR VTE OR deep vein thrombosis OR deep venous thrombosis (DVT) OR pulmonary embolism OR PE OR pulmonary thromboembolism OR PTE OR portal vein thrombosis OR PVT OR mesenteric venous thrombosis OR hemorrhage OR hemorrhagic complication OR thromboembolic complication OR bleeding) AND (hepatectomy OR hepatectomies OR liver resection OR hepatic resection OR hepatic craniectomy OR hemi-hepatectomy OR hepatolobectomy OR surgery for colorectal liver metastases OR hepatic metastases of colorectal).", "The inclusion criteria were as follows:\n(1) patients who had undergone hepatic resection;\n(2) patients in an experimental group who were treated with UFH of LWMH for VTE prophylaxis after hepatic resection and, patients in a control group who were not treated with pharmacological prophylaxis (the control group could receive nothing or conventional therapy such as mechanical thromboprophylaxis);\n(3) the outcomes of the study included at least 1 of the following: VTE, bleeding events, and all-cause mortality; and\n(4) studies were cohort studies, case-control studies, randomized controlled clinical trials (RCTs), or quasi-experimental studies.\nExclusion criteria were as follows:\n(1) case reports, reviews, editorials, animal studies, or republished literature;\n(2) studies without a control group;\n(3) studies in which data research could not be extracted or the full text was not available; and\n(4) studies missing primary outcome.", "Two researchers independently completed the data extraction process, and discrepancies were resolved through full discussion. The following data were extracted: the article title, first author, publication year, study design, patient ethnicity, intervention, patient characteristics, and outcomes.", "Two researchers independently performed the quality assessment of each study. The Cochrane risk-of-bias tool 2.0 was used to assess the risk of bias in RCTs and quasi-experimental studies.[19] The Newcastle-Ottawa Quality Assessment Scale (NOS) was used to assess the quality of the cohort and case-control studies in this meta-analysis.[20] The NOS has 3 domains: selection of study, comparability, and outcome evaluation, with 8 items and a total score of 9 points. Except for item 5, which counts as 2 points (1 point for controlling age confounding factors and 1 point for controlling other important confounding factors), all other items count as 1 point. Scores ≤ 3 are regarded as low-quality studies, scores between 4 and 6 are considered as medium-quality studies, and scores > 7 are regarded as high-quality studies.", "Meta-analyses were conducted using RevMan5.3 software according to the Cochrane Manual for Systematic Evaluation of Interventions. The pooled effect size of the meta-analyses was assessed using the OR and 95% CI. I2 statistics and the Cochran Q test were used to assess statistical heterogeneity. The Mantel–Haenszel method for the fixed-effects model was applied when no significant heterogeneity was detected (I2 < 50% or P-value for heterogeneity > 0.1). The Der Simonian–Laird method for the random-effects model was used when significant heterogeneity was detected (I2 ≥ 50% or P-value for heterogeneity ≤ 0.1). However, if obvious variation in the included studies was found, a random effects model was used. 
3. Results
3.1. Literature search and study characteristics
The initial literature search yielded a total of 610 articles. After 136 duplicates were removed, 474 articles remained, and 450 of these were excluded after screening of titles and abstracts. The remaining 24 articles were eligible for full-text evaluation, of which 10 studies met our inclusion criteria and were included in the analysis.[3,5–7,12,21–25] Among these studies, 4 were in Chinese[7,23–25] and 6 were in English.[3,5,6,12,21,22] The flow diagram is shown in Figure 1.
Figure 1. The flow diagram of the literature search.
All the included studies were cohort studies. A total of 4318 patients underwent liver resection, of whom 2551 were in the experimental group and 1767 in the control group. The main characteristics of the 10 studies are summarized in Table 1. The NOS was used to evaluate the quality of the eligible cohort studies; the scoring details are shown in Table 2.
Table 1. Characteristics of included studies.
Table 2. Quality assessment of included studies.
3.2. VTE events
All 10 studies reported the incidence of VTE events, covering 4318 patients,[3,5–7,12,21–25] 2551 in the experimental group and 1767 in the control group. Statistical heterogeneity between the studies was not significant (I² = 44%, P = .08); however, because of the obvious clinical variation among the included studies, the random-effects model was used for this analysis. The overall rate of VTE differed significantly between the experimental and control groups (OR = 0.39, 95% CI: 0.20, 0.74, P = .004) (Fig. 2).
Figure 2. Forest plot comparing the efficacy of the experimental group vs. the control group on VTE events. VTE = venous thromboembolism.
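As a quick consistency check that needs no study-level data, the pooled estimate and its interval reported above already determine the test statistic: on the log scale, the 95% CI half-width equals 1.96 standard errors. A minimal sketch, using only the published figures (precision is limited by the rounding of the reported CI):

```python
import math

# Reported pooled result for VTE: OR = 0.39, 95% CI 0.20-0.74.
or_hat, lo, hi = 0.39, 0.20, 0.74
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of the log OR from the CI width
z = math.log(or_hat) / se                        # z statistic on the log scale
p = math.erfc(abs(z) / math.sqrt(2))             # two-sided P from the normal distribution
print(f"SE(log OR) = {se:.3f}, z = {z:.2f}, P = {p:.4f}")  # ~.005, consistent with P = .004
```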
3.3. Bleeding events
Seven studies reported the incidence of bleeding events in 3074 patients,[6,7,12,21,23–25] 1709 in the experimental group and 1365 in the control group. There was no statistical heterogeneity between the studies (I² = 0%, P = .68); however, because of the obvious clinical variation among the included studies, the random-effects model was used for this analysis. The overall rate of bleeding events did not differ significantly between the experimental and control groups (OR = 1.29, 95% CI: 0.80, 2.09, P = .30) (Fig. 3).
Figure 3. Forest plot comparing the safety of the experimental group vs. the control group on bleeding events.
3.4. All-cause mortality
Five studies reported all-cause mortality,[3,6,7,12,22] but 1 of them reported only the total number of deaths across both groups,[6] so 4 studies with a total of 1484 patients were included in this analysis,[3,7,12,22] 874 in the experimental group and 610 in the control group. There was no statistical heterogeneity between the studies (I² = 0%, P = .48), but because of the obvious clinical variation among the included studies, the random-effects model was used for this analysis. All-cause mortality did not differ significantly between the experimental and control groups (OR = 0.71, 95% CI: 0.36, 1.42, P = .33) (Fig. 4).
Figure 4. Forest plot comparing the safety of the experimental group vs. the control group on all-cause mortality.
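Forest plots like those referenced in Figures 2 to 4 can be drawn with standard plotting tools. In the sketch below the three study rows are invented placeholders; only the "Pooled" row reuses the all-cause mortality estimate reported above (OR = 0.71, 95% CI: 0.36, 1.42).

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal forest-plot sketch; the per-study rows are placeholder values.
labels = ["Study A", "Study B", "Study C", "Pooled"]
or_ = np.array([0.45, 1.10, 0.60, 0.71])
lo  = np.array([0.15, 0.40, 0.25, 0.36])
hi  = np.array([1.35, 3.00, 1.44, 1.42])

y = np.arange(len(labels))[::-1]                 # first study at the top
err = np.vstack([or_ - lo, hi - or_])            # asymmetric CI arms
plt.errorbar(or_, y, xerr=err, fmt="s", color="k", capsize=3)
plt.axvline(1.0, linestyle="--", color="grey")   # line of no effect
plt.xscale("log")                                # ORs belong on a log axis
plt.yticks(y, labels)
plt.xlabel("Odds ratio (log scale)")
plt.tight_layout()
plt.show()
```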
3.5. Subgroup analyses and sensitivity analysis
Using a random-effects model, subgroup analyses stratified by ethnicity showed a significant difference in the overall rate of VTE between the experimental and control groups in the Asian subgroup of 6 studies[5,7,12,23–25] (OR = 0.16, 95% CI: 0.06, 0.39, P < .0001), but not in the Caucasian subgroup of 4 studies[3,6,21,22] (OR = 0.69, 95% CI: 0.39, 1.23, P = .21) (Fig. 5). No significant difference in the incidence of bleeding events with UFH or LMWH for VTE prophylaxis after hepatic resection was found in either the Asian subgroup of 5 studies[7,12,23–25] (OR = 1.60, 95% CI: 0.48, 5.37, P = .45) or the Caucasian subgroup of 2 studies[6,21] (OR = 1.11, 95% CI: 0.58, 2.12, P = .75) (Fig. 6).
Figure 5. Forest plot of subgroup analysis comparing the efficacy of the experimental group vs. the control group on VTE events in Asians and Caucasians. VTE = venous thromboembolism.
Figure 6. Forest plot of subgroup analysis comparing the safety of the experimental group vs. the control group on bleeding events in Asians and Caucasians.
Using a random-effects model, the sensitivity analysis showed a significant difference in VTE incidence between the experimental and control groups in Caucasians when Ejaz's study[3] was excluded (OR = 0.58, 95% CI: 0.36, 0.93, P = .02).
3.6. Publication bias
The asymmetric funnel plot for the outcome of VTE suggested publication bias in this meta-analysis (Fig. 7). No significant publication bias was found for bleeding events or all-cause mortality.
Figure 7. Funnel plot for the outcome of VTE events. VTE = venous thromboembolism.
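Both procedures in this subsection, leave-one-out sensitivity analysis and funnel-plot inspection, are mechanical enough to sketch. The log odds ratios and standard errors below are invented for illustration, and the pooling is simple inverse-variance rather than the exact RevMan computation.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study log ORs and standard errors (illustrative only).
y  = np.array([-1.2, -0.4, 0.1, -0.9, -0.6])
se = np.array([0.55, 0.35, 0.60, 0.45, 0.30])

def pooled_fixed(y, se):
    """Inverse-variance pooled estimate (fixed effect)."""
    w = 1 / se**2
    return np.sum(w * y) / np.sum(w)

# Leave-one-out: exclude each study in turn and re-pool.
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    print(f"without study {i + 1}: pooled OR = "
          f"{np.exp(pooled_fixed(y[keep], se[keep])):.2f}")

# Funnel plot: effect size against its standard error; the SE axis is
# inverted so larger, more precise studies sit at the top of the funnel.
plt.scatter(np.exp(y), se)
plt.axvline(np.exp(pooled_fixed(y, se)), linestyle="--", color="grey")
plt.xscale("log")
plt.gca().invert_yaxis()
plt.xlabel("Odds ratio (log scale)")
plt.ylabel("Standard error")
plt.show()
```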
4. Discussion
Our findings showed that the application of UFH or LMWH for VTE prophylaxis after hepatic resection was efficacious and safe, in line with the findings of previous meta-analyses. Interestingly, in the subgroup analysis a significant reduction in the incidence of VTE was observed only in Asians.[5,7,12,23–25] In the 4 cohort studies of Caucasians,[3,6,21,22] no significant difference in VTE incidence was found. The incidence of bleeding events with UFH or LMWH prophylaxis did not differ significantly from controls in either Asian or Caucasian patients. Limited by the number of included studies, subgroup analyses of all-cause mortality by ethnicity were not performed.
The incidence of postoperative VTE may be higher in Caucasians than in Asians, and Asians, with their lower incidence of postoperative VTE, often do not receive postoperative VTE prophylaxis.[26,27] Previous findings suggested that routine pharmacologic prevention of VTE may not be necessary in Asians because the risk-benefit ratio of prophylaxis is 3 times higher than in Caucasians.[14] Moreover, the safety and effectiveness of chemical thromboprophylaxis against VTE after liver resection remain controversial, especially in Asians, and it is important to build evidence that classifies risk individually for each ethnicity.[8] Asians have different risk factors, different treatment patterns, and a higher risk of all-cause mortality than patients from other regions.[28] Recently, the incidence of VTE across Asia has been increasing, which may be attributable to the aging population, dietary changes, and the rising incidence of obesity and diabetes.[15] These facts remind us that attention must be paid to the prevention of VTE after hepatectomy in the Asian population. Our meta-analysis indicated that UFH and LMWH are effective and safe for VTE prophylaxis after hepatectomy in Asians. These findings may serve as a reference for future guidelines on the pharmacological prevention of VTE after hepatic resection in different ethnicities.
To determine the reason for the apparent inefficacy of UFH or LMWH for VTE prophylaxis after hepatic resection in Caucasians, we performed a sensitivity analysis in the Caucasian subgroup. When Ejaz's study was excluded, a significant difference in VTE incidence emerged. After reviewing this article, we concluded that the difference may be attributable to participant selection: the proportion of patients with a history of VTE (29/454 in the experimental group vs. 1/145 in the control group) differed markedly between groups. Previous studies have indicated that VTE incidence is significantly associated with a history of VTE,[29,30] so this confounding factor likely influenced the results of that study.
To the best of our knowledge, this is the first meta-analysis to quantitatively assess the efficacy and safety of UFH and LMWH for VTE prophylaxis after hepatic resection, and the first to examine their efficacy and safety for preventing VTE after liver resection in different ethnicities.
Nevertheless, this meta-analysis has several limitations. First, no RCTs were included, which increases the risk of bias. Second, the included studies covered patients with many risk factors for VTE, such as age, operative time, history of VTE, and malignancy; because of insufficient study data, these clinical conditions and risk factors could not be accounted for, which may have influenced our results. Third, UFH and LMWH are similar but distinct anticoagulants in terms of efficacy and safety in VTE.[31] The interventions in 3 studies involved both drugs (used singly or sequentially, but not simultaneously), and a direct comparison of UFH vs. LMWH for VTE prophylaxis after hepatic resection is lacking. Fourth, the funnel plot indicated possible publication bias, which may have led to overestimation of the efficacy of the 2 anticoagulants. Hence, more large-scale, high-quality studies are needed to confirm the efficacy and safety of UFH or LMWH for VTE prophylaxis after hepatic resection.
In general, our meta-analysis indicated that UFH or LMWH for VTE prophylaxis after hepatic resection is efficacious and safe in Asians as well as in Caucasians, and pharmacological VTE prophylaxis after hepatectomy may therefore be warranted in Asian patients. Although this meta-analysis has limitations that cannot be fully resolved, the results are reliable. Larger, high-quality RCTs are needed to confirm these findings. Given the lack of guidelines for the pharmacological prevention of VTE after hepatic resection, we hope this meta-analysis can serve as a reference for guideline development, especially regarding pharmacological VTE prevention in different ethnicities.
Acknowledgments
We would like to thank Editage (www.editage.cn) for English language editing.
Author contributions
Conceptualization: Wentao Zhang, Changhong Du.
Data curation: Wentao Zhang, Baoyue Hu.
Formal analysis: Wentao Zhang, Xinchun Wei, Baoyue Hu.
Methodology: Wentao Zhang, Changhong Du, Baoyue Hu.
Software: Wentao Zhang, Shiwei Yang.
Supervision: Wentao Zhang.
Writing – original draft: Wentao Zhang.
Writing – review & editing: Wentao Zhang, Baoyue Hu.
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", null, null ]
[ "all-cause mortality", "bleeding", "heparin", "hepatic resection", "VTE" ]
Background: Two systematic reviews summarized the efficacy and safety of pharmacological prophylaxis for venous thromboembolism (VTE) after hepatic resection, but both lacked a discussion of the differences in the pharmacological prophylaxis of VTE in different ethnicities. Therefore, we aimed to evaluate the efficacy and safety of low-molecular-weight heparin (LMWH) or unfractionated heparin (UFH) for VTE prophylaxis in Asian and Caucasian patients who have undergone hepatic resection. Methods: We searched PubMed, Web of Science, Embase, China National Knowledge Infrastructure, Wanfang Data, and VIP databases for studies reporting the primary outcomes of VTE incidence, bleeding events, and all-cause mortality from January 2000 to July 2022. Results: Ten studies involving 4318 participants who had undergone hepatic resection were included: 6 in Asians and 4 in Caucasians. A significant difference in VTE incidence was observed between the experimental and control groups (odds ratio [OR] = 0.39, 95% confidence interval [CI]: 0.20, 0.74, P = .004). No significant difference in bleeding events and all-cause mortality was observed (OR = 1.29, 95% CI: 0.80, 2.09, P = .30; OR = 0.71, 95% CI: 0.36, 1.42, P = .33, respectively). Subgroup analyses stratified by ethnicity showed a significant difference in the incidence of VTE in Asians (OR = 0.16, 95% CI: 0.06, 0.39, P < .0001), but not in Caucasians (OR = 0.69, 95% CI: 0.39, 1.23, P = .21). No significant differences in bleeding events were found between Asians (OR = 1.60, 95% CI: 0.48, 5.37, P = .45) and Caucasians (OR = 1.11, 95% CI: 0.58, 2.12, P = .75). The sensitivity analysis showed that Ejaz's study was the main source of heterogeneity, and when Ejaz's study was excluded, a significant difference in VTE incidence was found in Caucasians (OR = 0.58, 95% CI: 0.36, 0.93, P = .02). Conclusions: This study's findings indicate that the application of UFH or LMWH for VTE prophylaxis after hepatic resection is efficacious and safe in Asians and Caucasians. It is necessary for Asians to receive drug prophylaxis for VTE after hepatic resection. This study can provide a reference for the development of guidelines in the future, especially regarding the pharmacological prevention of VTE in different ethnicities.
Keywords: all-cause mortality; bleeding; heparin; hepatic resection; VTE
MeSH terms: Humans; Heparin, Low-Molecular-Weight; Venous Thromboembolism; Heparin; Treatment Outcome; Anticoagulants; Hemorrhage
PMID: 30618511
BACKGROUND: Mycobacterium bovis Bacille Calmette-Guérin (BCG) osteitis, a rare complication of BCG vaccination, has not been well investigated in Korea. This study aimed to evaluate the clinical characteristics of BCG osteitis during the recent 10 years in Korea.

METHODS: Children diagnosed with BCG osteitis at the Seoul National University Children's Hospital from January 2007 to March 2018 were included. M. bovis BCG was confirmed by multiplex polymerase chain reaction (PCR) in the affected bone. BCG immunization status and clinical information were reviewed retrospectively.

RESULTS: Twenty-one patients were diagnosed with BCG osteitis, with a median interval from BCG vaccination to symptom onset of 13.8 months (range, 6.0-32.5). Sixteen children (76.2%) received the Tokyo-172 vaccine by the percutaneous multiple-puncture method, while four (19.0%) and one (4.8%) received the intradermal Tokyo-172 and Danish strains, respectively. Common presenting symptoms were swelling (76.2%), limited movement of the affected site (63.2%), and pain (61.9%), whereas fever was present in only 19.0%. The femur (33.3%) and the tarsal bones (23.8%) were the most frequently involved sites, and demarcated osteolytic lesions (63.1%) and cortical breakage (42.1%) were observed on plain radiographs. Surgical drainage was performed in 90.5%, and 33.3% of these patients required repeated surgical interventions because of persistent symptoms. Antituberculosis medications were administered for a median duration of 12 months (range, 12-31). Most patients recovered without evident sequelae.

CONCLUSION: A high index of suspicion for BCG osteitis based on clinical manifestations is important for prompt management. A comprehensive national surveillance system is needed to understand the exact incidence of serious adverse reactions following BCG vaccination and to establish a safe vaccination policy in Korea.
[ "Antitubercular Agents", "BCG Vaccine", "Child, Preschool", "Female", "Humans", "Immunization", "Infant", "Male", "Osteitis", "Republic of Korea", "Retrospective Studies", "Tuberculosis" ]
PMCID: 6318445
INTRODUCTION
Bacille Calmette-Guérin (BCG) immunization is recommended to prevent disseminated tuberculosis and extrapulmonary tuberculosis, including meningitis, in young children. BCG is a live attenuated vaccine developed from serial passage of Mycobacterium bovis. Currently, the Russian (Moscow-368), the Bulgarian (Sofia SL222), and the Tokyo-172 strains are the three most commonly used BCG strains worldwide.[1] The World Health Organization recommends intradermal administration of the vaccine, but percutaneous application using a multiple-puncture injection device is practiced in some countries, including Japan and Korea. The Tokyo-172 BCG vaccine is currently available in both percutaneous and intradermal formulations.

Although adverse events at the injection site are frequently observed, serious adverse events following BCG vaccination are rare. The incidence of BCG dissemination is estimated at 0.19-1.56 per million vaccinations.[2] BCG osteitis is another rare systemic adverse reaction following BCG vaccination. Cases of BCG osteitis were increasingly notified when the vaccine strain was changed, for example, in Scandinavia and in Eastern Europe.[2] More recently, there have been increasing reports of osteitis in Japan, Taiwan, and Thailand.[3-5]

In Korea, Japan, and Taiwan, BCG vaccination is recommended for all infants through a national immunization program (NIP). Infants in Japan receive the BCG Tokyo (strain 172; Japan BCG Laboratory) by the percutaneous method, whereas those in Taiwan receive the BCG Tokyo (strain 172; Taiwan CDC Vaccine Center) by intradermal injection. In Korea, newborns are vaccinated with either intradermal BCG Danish (strain 1331; Statens Serum Institute), intradermal BCG Tokyo (strain 172; Japan BCG Laboratory), or percutaneous BCG Tokyo (strain 172; Japan BCG Laboratory). Previously, the incidence of osteitis following BCG Tokyo-172 vaccination was reported to be extremely low; hence, the BCG Tokyo vaccine was considered the safest BCG vaccine.[6,7] However, a higher incidence of 3.68 cases per million vaccinations was reported in Taiwan.[8]

Unlike Japan and Taiwan, Korea has no reported long-term observational studies of serious adverse reactions following BCG vaccination. Here, we describe the clinical characteristics of BCG osteitis based on our experience during the recent 10 years.
METHODS
Study subjects

A retrospective chart review was performed on all children with laboratory-confirmed BCG osteitis admitted to Seoul National University Children's Hospital, a large tertiary referral center in Korea, from January 2007 to March 2018. Children who were clinically suspected of BCG infection but not confirmed by BCG-specific polymerase chain reaction (PCR) were excluded. The history of BCG vaccination was taken from the national immunization registry, namely, the brand name of the BCG vaccine, the manufacturer, and the date and site of vaccination. For those whose data were not available from the registry, the BCG vaccination history was reconstructed from personal immunization records and inspection of the scar at the injection site. Clinical data were collected on patient demographics, presenting symptoms, physical findings, radiologic features, histologic findings, and clinical outcome. Immune status was evaluated by the number and percentage of T-cell subsets, complement levels, immunoglobulin levels, and the dihydrorhodamine test.

Laboratory confirmation

The presumptive diagnosis in the majority of cases was based mainly on positive PCR for the Mycobacterium tuberculosis complex performed directly on isolates of M. bovis BCG or on fresh samples obtained from the affected tissue. Laboratory confirmation of M. bovis BCG used the protocol previously described by Kim et al.[9] Briefly, real-time PCR targeting the 53-bp mycobacterial interspersed repetitive unit (MIRU) of the senX3-regX3 intergenic region was performed to differentiate M. bovis BCG from non-BCG M. tuberculosis complex. Subsequently, a multiplex PCR assay with 7 primers (ET1, ET2, ET3, RD8l, RD8r, RD14l, and RD14r) that amplified 3 regions of difference (RD1, RD8, and RD14) was performed to discriminate among the BCG substrains.

Statistical analysis

Categorical variables were summarized by absolute frequencies and percentages. Continuous variables were expressed as means with standard deviation or, if skewed, as medians with ranges. All data management and analysis were performed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA).

Ethics statement

This study was approved by the Institutional Review Board (IRB) of Seoul National University Hospital (IRB No. H-1711-120-901). The requirement for informed consent was waived.
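To make the two-step confirmation concrete, below is a minimal sketch of the decision logic in Python. The structure follows the text (step 1: a negative 53-bp MIRU real-time PCR suggests M. bovis BCG; step 2: a multiplex PCR band pattern across RD1, RD8, and RD14 discriminates substrains), but the band-pattern-to-substrain table is a hypothetical placeholder — the actual patterns come from the assay of Kim et al.[9] and are not reproduced here.

```python
# Sketch of the two-step identification logic described above.
# The band-pattern table is a HYPOTHETICAL placeholder: the real
# presence/absence profiles belong to the assay of Kim et al. [9].

def is_bcg(miru_53bp_positive: bool) -> bool:
    """Step 1: real-time PCR for the 53-bp MIRU of senX3-regX3.
    A negative result suggests M. bovis BCG rather than a
    non-BCG member of the M. tuberculosis complex."""
    return not miru_53bp_positive

# Step 2: multiplex PCR presence/absence across (RD1, RD8, RD14).
# Placeholder patterns only -- not the validated assay table.
SUBSTRAIN_TABLE = {
    (False, True, True): "Tokyo-172",     # hypothetical pattern
    (False, False, True): "Danish 1331",  # hypothetical pattern
}

def call_substrain(rd1: bool, rd8: bool, rd14: bool) -> str:
    return SUBSTRAIN_TABLE.get((rd1, rd8, rd14), "indeterminate")

sample = {"miru": False, "rd1": False, "rd8": True, "rd14": True}
if is_bcg(sample["miru"]):
    print("M. bovis BCG suspected; substrain:",
          call_substrain(sample["rd1"], sample["rd8"], sample["rd14"]))
```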
RESULTS
Patient characteristics

During the study period, 21 BCG osteitis cases were confirmed. Three children whose histologic findings were highly suspicious for BCG osteitis were excluded because BCG-specific PCR had not been performed. Detailed clinical characteristics of the patients are shown in Table 1; 4 of these cases had been previously reported.[9,10] Fourteen patients (66.7%) were boys and seven were girls. The symptoms or signs of osteitis were first noticed at a median age of 14.3 months (range, 6.5-33.4) (Table 2). The median age at BCG inoculation was 24 days (range, 2-36). Sixteen patients (76.2%) received the BCG Tokyo-172 strain by the percutaneous multiple-puncture method, four (19.0%) received the BCG Tokyo-172 strain intradermally, and one (4.8%) received the Danish BCG intradermally. In total, 20 cases (95.2%) were inoculated with the Tokyo-172 strain. The years in which the BCG osteitis patients were vaccinated are shown in Fig. 1. Patients were vaccinated in different regions: Seoul metropolitan area (35%), central area (40%), and southern area (25%).

Table abbreviations: BCG = Bacille Calmette-Guérin, AFB = acid-fast bacilli, MTB PCR = M. tuberculosis complex polymerase chain reaction, PC = percutaneous, CG = chronic granuloma, CN = caseous necrosis, H = isoniazid, R = rifampin, S = streptomycin, Z = pyrazinamide, CLA = clarithromycin, E = ethambutol, ID = intradermal. (a) This patient received prolonged treatment with various antimycobacterial combinations because making a definitive diagnosis was difficult. (b) This patient recovered without joint complication until osteomyelitis caused by Mycobacterium intracellulare developed at the same joint.

Clinical manifestations

The median duration from BCG vaccination to symptom onset was 13.8 months (range, 6.0-32.5). Patients were diagnosed a median of 29 days (range, 3-112) after symptom onset. At the time of diagnosis, the most common symptoms were swelling (76.2%), limitation of movement (63.2%), and pain (61.9%); fever (≥ 38.0°C) was present in only 19.0%. Excepting the two cases with multiple bone involvement, the most commonly affected site was the femur (33.3%), followed by the tarsal bones (28.6%), tibia (9.5%), and rib and sternum (9.5%), as shown in Table 2. The initial diagnoses for the 19 referred cases were bacterial osteomyelitis in 10 (52.6%), bone tumor or Langerhans cell histiocytosis in 6 (31.6%), and transient synovitis in 3 (15.7%).

Comorbidities

Three patients (14.3%) were born preterm at 32-36 weeks of gestational age, with birth weights ranging from 2.4 kg to 2.8 kg. Two had a previous history of lymph node excision due to BCG lymphadenopathy. Tests for human immunodeficiency virus were performed in 18 patients and were all negative. Further immunological investigations performed in 15 cases were all normal at the time of diagnosis. Although no cases of primary immunodeficiency were identified, one patient had a past history of Pneumocystis jirovecii pneumonitis during infancy and was therefore suspected of immunodeficiency.

Laboratory and radiologic findings

On admission, 14.3% of the children had leukocytosis (> 14,000/mm3) and one (4.8%) had leukopenia (< 4,000/mm3). C-reactive protein was elevated (> 5.0 mg/L) in 35%. The erythrocyte sedimentation rate was tested in 16 cases and was elevated (20 mm/hr or greater) in 68.8%. A solitary demarcated osteolytic lesion (63.1%) and cortical breakage (42.1%) were the most common findings on the 19 initial plain radiographs; in four patients (21.0%), only soft tissue swelling or bulging was seen. Magnetic resonance imaging was performed in 20 patients, including those with no evident bony abnormality on plain radiographs, and bone abnormalities were found in all of them. The epiphysis was affected in 6 (50%) of the 12 patients with BCG osteitis in the long bones. Transphyseal spread between the epiphysis and the metaphysis was observed in 3 (30%); no diaphyseal involvement was noticed. Soft tissue abnormalities were common: the most frequent finding was inflammation of the surrounding muscles (87.5%), followed by soft tissue abscess (43.8%), and subcutaneous inflammation was noted in 37.5%. Eight of the 16 cases with a single bone lesion had joint involvement, including four cases with synovial hypertrophy and four with joint effusion.

Microbiological findings

The diagnosis of M. bovis BCG osteitis was confirmed in all patients by BCG-specific PCR on DNA extracted from biopsy samples or bacterial isolates. Real-time PCR for the 53-bp MIRU was negative in all, suggesting M. bovis BCG strains. The multiplex PCR assay with 7 primers that discriminates BCG substrains revealed the Tokyo strain in 20 patients and the Danish strain in one. Acid-fast bacilli staining was positive in 25.0%, and M. bovis grew in culture in 70.0%. PCR for the M. tuberculosis complex on fresh biopsy samples was positive in 93.8%. Histopathology showed chronic granuloma with caseous necrosis in 77.8%.

Treatment

In our case series, 20 children (95.2%) underwent surgical procedures: 19 had surgical treatment and one underwent biopsy only. Seven patients (33.3%) required more than one surgical intervention because of persistent swelling and spread of infection outside the epiphysis. All patients received antimycobacterial medication with isoniazid and rifampicin for a median duration of 12 months (range, 12-31).

Clinical outcome

Eighteen of the 21 patients were followed up for more than one year, and three children were still on treatment. The median follow-up duration was 3.15 years (range, 1.0-10.8). All were compliant with their medical treatment and were followed up regularly every 1 to 2 months. Symptoms, physical examination findings, and radiographic findings improved in most patients (95.2%) without obvious sequelae, and there was no recurrence of BCG osteitis after completion of antimycobacterial therapy. One patient, who had a talar lesion with inflammatory destruction and necrotic fragmentation at diagnosis, later developed progressive bony remodeling with collapse of the talus, resulting in leg-length shortening. Another patient developed joint contracture due to nontuberculous mycobacterial infection at the previous BCG osteitis site at the age of 5 years; this patient, who had a past history of P. jirovecii pneumonitis during infancy, suffered from disseminated nontuberculous mycobacterial infection. One patient had horseshoe kidney, tethered cord syndrome, and developmental delay at the time of BCG osteitis and was diagnosed with Kabuki syndrome with a KMT2D (also known as MLL2) gene mutation at the age of 5 years; this patient later developed nontuberculous mycobacterial infection in bilateral renal cysts. No deaths related to BCG osteitis or its treatment were identified.
null
null
[ "Study subjects", "Laboratory confirmation", "Statistical analysis", "Ethics statement", "Patient characteristics", "Clinical manifestations", "Comorbidities", "Laboratory and radiologic findings", "Microbiological findings", "Treatment", "Clinical outcome" ]
[ "A retrospective chart review was performed on all laboratory-confirmed BCG osteitis children admitted to Seoul National University Children's Hospital, a large tertiary referral center in Korea, from January 2007 to March 2018. Children who were clinically suspected of BCG infection but not confirmed by BCG-specific polymerase chain reaction (PCR) were excluded. History of BCG vaccination was taken from the data in the national immunization registry, namely, the brand name of BCG vaccine, manufacturer, and the date and the site of vaccination. For those whose data are not available from the registry, BCG vaccination history was interpreted from the personal immunization records and inspecting the scar of the injection sites. Clinical data were collected regarding patient demographics, presenting symptoms, physical findings, radiologic features, histologic findings, and clinical outcome. The immune status was evaluated by the number and percentage of T-cell subsets, complement levels, immunoglobulin levels, and performing the dihydrorhodamine test.", "The presumptive diagnosis in the majority of the cases was mainly based on positive PCR for Mycobacterium tuberculosis complex directly from the isolates of M. bovis BCG, or from the fresh samples obtained from the affected tissue. Laboratory confirmation of M. bovis BCG was performed with the use of the same protocol previously described by Kim et al.9 Briefly, the real-time PCR targeted for the 53-bp mycobacterial interspersed repetitive units (MIRUs) of the senX3–regX3 IR was performed to differentiate M. bovis BCG from non-BCG M. tuberculosis complex. Subsequently, a multiplex PCR assay with 7 primers (ET1, ET2, ET3, RD8l, RD8r, RD14l, and RD14r) that amplified 3 regions of difference – RD1, RD8 and RD14 was performed to discriminate among the BCG substrains.", "Categorical variables were summarized by absolute frequencies and percentages. Continuous variables were expressed as means with standard deviation or – if skewed – as medians with ranges. All data management and analysis were performed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA).", "This study was approved by the Institutional Review Boards (IRB) of Seoul National University Hospital (IRB No. H-1711-120-901). Informed consent of the patients was waived.", "During the study period, 21 BCG osteitis cases were confirmed. Three children who are highly suspicious of having BCG osteitis from the histologic findings were excluded because BCG-specific PCR was not performed. Detailed clinical characteristics of the patients are shown in Table 1. Of these, 4 cases had been previously reported.910 Fourteen patients (66.7%) were boys and seven were girls. The symptoms or signs of osteitis were first noticed at the median age of 14.3 months (range, 6.5–33.4) (Table 2). The median age at BCG inoculation was 24 days (range, 2–36). Sixteen patients (76.2%) received BCG Tokyo-172 strain by percutaneous multiple puncture method, four (19.0%) received BCG Tokyo-172 strain intradermally, and one (4.8%) received Danish BCG intradermally. A total of 20 cases (95.2%) were inoculated with Tokyo-172 strain. The years the BCG osteitis patients received vaccination are shown in Fig. 1. Patients were vaccinated in different regions: Seoul-metropolitan (35%), central area (40%), and southern area (25%).\nBCG = Bacille Calmette-Guérin, AFB = acid fast bacilli, MTB PCR = M. 
tuberculosis complex polymerase chain reaction, PC = percutaneous, CG = chronic granuloma, CN = caseous necrosis, H = isoniazid, R = rifampin, S = streptomycin, Z = pyrazinamide, CLA = clarithromycin, E = ethambutol, ID = intradermal.\naThis patient received prolonged treatment with various antimycobacterial combinations as making a definitive diagnosis was difficult; bThis patient recovered without joint complication until the patient developed osteomyelitis at the same joint caused by Mycobacterium intracellulare.\nBCG = Bacille Calmette-Guérin.\nBCG = Bacille Calmette-Guérin .", "The median duration from BCG vaccination to symptom onset was 13.8 months (range, 6.0–32.5). Patients were diagnosed after a median of 29 days (range, 3–112) from symptom onset. At the time of diagnosis, the most common symptoms were swelling (76.2%), limitation of movement (63.2%) and pain (61.9%). Fever (≥ 38.0°C) was accompanied only in 19.0%. Excepting the two cases with multiple bone involvement, the most commonly affected site was femur (33.3%), followed by the tarsal bones (28.6%), tibia (9.5%), and rib and sternum (9.5%), as shown in Table 2. The initial diagnosis for the 19 referred cases were bacterial osteomyelitis in 10 (52.6%), bone tumor or Langerhans cell histiocytosis in 6 (31.6%), and transient synovitis in 3 (15.7%).", "Three (14.3%) were born preterm at 32–36 weeks of gestational age with a range of birth weight from 2.4 kg to 2.8 kg. Two had a previous history of lymph node excision due to BCG lymphadenopathy. Tests for human immunodeficiency virus were performed in 18 patients and all were negative. Further immunological investigations performed in 15 cases were all normal at the time of diagnosis. Although there were no identified cases of primary immunodeficiency, one had a past history of Pneumocystis jirovecii pneumonitis during infancy and thus was suspected of immunodeficiency.", "On admission, 14.3% of the children had leukocytosis (> 14,000/mm3) and one (4.8%) had leukopenia (< 4,000/mm3). C-reactive protein was elevated (> 5.0 mg/L) in 35%. Erythrocyte sedimentation rate was tested for 16 cases and was elevated (20 mm/hr or greater) in 68.8%. Solitary demarcated osteolytic lesion (63.1%) and cortical breakage (42.1%) were the most common findings on initial plain radiographs in 19 cases. In four patients (21.0%) there were only soft tissue swelling or bulging on plain radiographs. Magnetic resonance imaging was performed in 20 patients including those with no evident bony abnormality on simple radiographs. Bone abnormalities were found on magnetic resonance imaging in all cases. The epiphysis was affected in 6 (50%) of the 12 patients with BCG osteitis in the long bones. Transphyseal spread between the epiphysis and the metaphysis was observed in 3 (30%). No diaphyseal involvement was noticed. Soft tissue abnormalities were common, where the most common finding was inflammation of the surrounding muscles (87.5%), followed by soft tissue abscess (43.8%). Subcutaneous inflammation was noted in 37.5%. Eight of the 16 cases with a single bone lesion had joint involvement, including four cases with synovial hypertrophy and four cases with joint effusion.", "The diagnosis of M. bovis BCG osteitis was confirmed in all patients by performing BCG-specific PCR on DNA extracted from biopsy samples or bacterial isolates. The real-time PCR for the 53-bp MIRU was all negative, suggesting M. bovis BCG strains. 
The multiplex PCR assay with 7 primers that can discriminate BCG substrains revealed the Tokyo strain in 20 and Danish strain in one. Acid-fast bacilli stain was positive in 25.0% and M. bovis grew by culture in 70.0%. PCR for M. tuberculosis complex on fresh biopsy sample was positive in 93.8%. Histopathology showed chronic granuloma with caseous necrosis in 77.8%.", "In our case series, 20 (95.2%) children underwent surgical procedures, where 19 patients had surgical treatment and one received biopsy only. Seven (33.3%) patients required more than one surgical interventions because of persistent swelling and spread of infection outside the epiphysis. All patients received antimycobacterial medication with isoniazid and rifampicin for a median duration of 12 months (range, 12–31).", "Eighteen of the 21 patients were followed-up for more than one year and three children were still on treatment. The median follow-up duration was 3.15 years (range, 1.0–10.8). All were compliant with their medical treatment and were regularly followed-up every 1 to 2 months. Symptoms, physical examination findings and radiographic findings improved in most patients (95.2%) without obvious sequelae. There was no recurrence of BCG osteitis after completing antimycobacterial therapy. One patient who had a talar lesion with inflammatory destruction and necrotic fragmentation at the time of diagnosis later developed progressive bony remodeling with collapse of the talus, resulting in leg length shortening. Another patient developed joint contracture due to nontuberculosis mycobacterial infection in the previous BCG osteitis site at the age of 5 years. This patient, who had a past history of P. jirovecii pneumonitis during infancy, suffered from disseminated nontuberculous mycobacterial infection. One patient had horseshoe kidney, tethered cord syndrome and developmental delay at the time of BCG osteitis, and was diagnosed with Kabuki syndrome with KMT2D (also known as MLL2) gene mutation at the age of 5 years. This patient later developed nontuberculous mycobacterial infection in bilateral renal cysts. No deaths related to BCG osteitis or its treatment were identified." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study subjects", "Laboratory confirmation", "Statistical analysis", "Ethics statement", "RESULTS", "Patient characteristics", "Clinical manifestations", "Comorbidities", "Laboratory and radiologic findings", "Microbiological findings", "Treatment", "Clinical outcome", "DISCUSSION" ]
[ "Bacille Calmette-Guérin (BCG) immunization is recommended to prevent disseminated tuberculosis and extrapulmonary tuberculosis including meningitis in young children. BCG is a live attenuated vaccine developed from serial passage of Mycobacterium bovis. Currently the Russian (Moscow–368), the Bulgarian (Sofia SL222) and the Tokyo-172 are the three most commonly used BCG strains worldwide.1 The World Health Organization recommends intradermal administration of the vaccine, but percutaneous application using a multiple puncture injection device is practiced in some countries including Japan and Korea. The Tokyo-172 BCG vaccine is currently available in both percutaneous and intradermal formulations.\nAlthough adverse events at injection site are frequently observed, serious adverse events following BCG vaccination are rare. The incidence of dissemination of BCG is estimated to be 0.19–1.56 per million vaccinations.2 BCG osteitis is another rare systemic adverse reaction following BCG vaccination. The cases of BCG osteitis were increasingly notified when there was a change in the vaccine strain, for example, in Scandinavia and in Eastern Europe.2 More recently, there are increased reports of osteitis in Japan, Taiwan, and Thailand.345\nIn Korea, Japan and Taiwan, BCG vaccination is recommended for all infants through a national immunization program (NIP). Infants in Japan receive the BCG Tokyo (strain 172; Japan BCG Laboratory) by percutaneous method, whereas those in Taiwan receive the BCG Tokyo (strain 172; Taiwan CDC Vaccine Center) by intradermal injection. In Korea, newborns are vaccinated with either intradermal BCG Danish (strain 1331; Statens Serum Institute), intradermal BCG Tokyo (strain 172; Japan BCG Laboratory), or percutaneous BCG Tokyo (strain 172; Japan BCG Laboratory). Previously, the incidence of osteitis following BCG Tokyo-172 vaccination was reported to be extremely low, hence the BCG Tokyo vaccine was considered as the safest BCG vaccine.67 However, a higher incidence of 3.68 cases per million vaccinations was reported in Taiwan.8\nUnlike Japan and Taiwan, no long-term observational studies of serious adverse reactions following BCG vaccinations have been reported in Korea. Here, we describe the clinical characteristics of BCG osteitis from our experience during the recent 10 years.", " Study subjects A retrospective chart review was performed on all laboratory-confirmed BCG osteitis children admitted to Seoul National University Children's Hospital, a large tertiary referral center in Korea, from January 2007 to March 2018. Children who were clinically suspected of BCG infection but not confirmed by BCG-specific polymerase chain reaction (PCR) were excluded. History of BCG vaccination was taken from the data in the national immunization registry, namely, the brand name of BCG vaccine, manufacturer, and the date and the site of vaccination. For those whose data are not available from the registry, BCG vaccination history was interpreted from the personal immunization records and inspecting the scar of the injection sites. Clinical data were collected regarding patient demographics, presenting symptoms, physical findings, radiologic features, histologic findings, and clinical outcome. 
The immune status was evaluated by the number and percentage of T-cell subsets, complement levels, immunoglobulin levels, and performing the dihydrorhodamine test.\nA retrospective chart review was performed on all laboratory-confirmed BCG osteitis children admitted to Seoul National University Children's Hospital, a large tertiary referral center in Korea, from January 2007 to March 2018. Children who were clinically suspected of BCG infection but not confirmed by BCG-specific polymerase chain reaction (PCR) were excluded. History of BCG vaccination was taken from the data in the national immunization registry, namely, the brand name of BCG vaccine, manufacturer, and the date and the site of vaccination. For those whose data are not available from the registry, BCG vaccination history was interpreted from the personal immunization records and inspecting the scar of the injection sites. Clinical data were collected regarding patient demographics, presenting symptoms, physical findings, radiologic features, histologic findings, and clinical outcome. The immune status was evaluated by the number and percentage of T-cell subsets, complement levels, immunoglobulin levels, and performing the dihydrorhodamine test.\n Laboratory confirmation The presumptive diagnosis in the majority of the cases was mainly based on positive PCR for Mycobacterium tuberculosis complex directly from the isolates of M. bovis BCG, or from the fresh samples obtained from the affected tissue. Laboratory confirmation of M. bovis BCG was performed with the use of the same protocol previously described by Kim et al.9 Briefly, the real-time PCR targeted for the 53-bp mycobacterial interspersed repetitive units (MIRUs) of the senX3–regX3 IR was performed to differentiate M. bovis BCG from non-BCG M. tuberculosis complex. Subsequently, a multiplex PCR assay with 7 primers (ET1, ET2, ET3, RD8l, RD8r, RD14l, and RD14r) that amplified 3 regions of difference – RD1, RD8 and RD14 was performed to discriminate among the BCG substrains.\nThe presumptive diagnosis in the majority of the cases was mainly based on positive PCR for Mycobacterium tuberculosis complex directly from the isolates of M. bovis BCG, or from the fresh samples obtained from the affected tissue. Laboratory confirmation of M. bovis BCG was performed with the use of the same protocol previously described by Kim et al.9 Briefly, the real-time PCR targeted for the 53-bp mycobacterial interspersed repetitive units (MIRUs) of the senX3–regX3 IR was performed to differentiate M. bovis BCG from non-BCG M. tuberculosis complex. Subsequently, a multiplex PCR assay with 7 primers (ET1, ET2, ET3, RD8l, RD8r, RD14l, and RD14r) that amplified 3 regions of difference – RD1, RD8 and RD14 was performed to discriminate among the BCG substrains.\n Statistical analysis Categorical variables were summarized by absolute frequencies and percentages. Continuous variables were expressed as means with standard deviation or – if skewed – as medians with ranges. All data management and analysis were performed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA).\nCategorical variables were summarized by absolute frequencies and percentages. Continuous variables were expressed as means with standard deviation or – if skewed – as medians with ranges. All data management and analysis were performed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA).\n Ethics statement This study was approved by the Institutional Review Boards (IRB) of Seoul National University Hospital (IRB No. 
H-1711-120-901). Informed consent of the patients was waived.\nThis study was approved by the Institutional Review Boards (IRB) of Seoul National University Hospital (IRB No. H-1711-120-901). Informed consent of the patients was waived.", "A retrospective chart review was performed on all laboratory-confirmed BCG osteitis children admitted to Seoul National University Children's Hospital, a large tertiary referral center in Korea, from January 2007 to March 2018. Children who were clinically suspected of BCG infection but not confirmed by BCG-specific polymerase chain reaction (PCR) were excluded. History of BCG vaccination was taken from the data in the national immunization registry, namely, the brand name of BCG vaccine, manufacturer, and the date and the site of vaccination. For those whose data are not available from the registry, BCG vaccination history was interpreted from the personal immunization records and inspecting the scar of the injection sites. Clinical data were collected regarding patient demographics, presenting symptoms, physical findings, radiologic features, histologic findings, and clinical outcome. The immune status was evaluated by the number and percentage of T-cell subsets, complement levels, immunoglobulin levels, and performing the dihydrorhodamine test.", "The presumptive diagnosis in the majority of the cases was mainly based on positive PCR for Mycobacterium tuberculosis complex directly from the isolates of M. bovis BCG, or from the fresh samples obtained from the affected tissue. Laboratory confirmation of M. bovis BCG was performed with the use of the same protocol previously described by Kim et al.9 Briefly, the real-time PCR targeted for the 53-bp mycobacterial interspersed repetitive units (MIRUs) of the senX3–regX3 IR was performed to differentiate M. bovis BCG from non-BCG M. tuberculosis complex. Subsequently, a multiplex PCR assay with 7 primers (ET1, ET2, ET3, RD8l, RD8r, RD14l, and RD14r) that amplified 3 regions of difference – RD1, RD8 and RD14 was performed to discriminate among the BCG substrains.", "Categorical variables were summarized by absolute frequencies and percentages. Continuous variables were expressed as means with standard deviation or – if skewed – as medians with ranges. All data management and analysis were performed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA).", "This study was approved by the Institutional Review Boards (IRB) of Seoul National University Hospital (IRB No. H-1711-120-901). Informed consent of the patients was waived.", " Patient characteristics During the study period, 21 BCG osteitis cases were confirmed. Three children who are highly suspicious of having BCG osteitis from the histologic findings were excluded because BCG-specific PCR was not performed. Detailed clinical characteristics of the patients are shown in Table 1. Of these, 4 cases had been previously reported.910 Fourteen patients (66.7%) were boys and seven were girls. The symptoms or signs of osteitis were first noticed at the median age of 14.3 months (range, 6.5–33.4) (Table 2). The median age at BCG inoculation was 24 days (range, 2–36). Sixteen patients (76.2%) received BCG Tokyo-172 strain by percutaneous multiple puncture method, four (19.0%) received BCG Tokyo-172 strain intradermally, and one (4.8%) received Danish BCG intradermally. A total of 20 cases (95.2%) were inoculated with Tokyo-172 strain. The years the BCG osteitis patients received vaccination are shown in Fig. 1. 
Patients were vaccinated in different regions: Seoul-metropolitan (35%), central area (40%), and southern area (25%).\nBCG = Bacille Calmette-Guérin, AFB = acid fast bacilli, MTB PCR = M. tuberculosis complex polymerase chain reaction, PC = percutaneous, CG = chronic granuloma, CN = caseous necrosis, H = isoniazid, R = rifampin, S = streptomycin, Z = pyrazinamide, CLA = clarithromycin, E = ethambutol, ID = intradermal.\naThis patient received prolonged treatment with various antimycobacterial combinations as making a definitive diagnosis was difficult; bThis patient recovered without joint complication until the patient developed osteomyelitis at the same joint caused by Mycobacterium intracellulare.\nBCG = Bacille Calmette-Guérin.\nBCG = Bacille Calmette-Guérin .\nDuring the study period, 21 BCG osteitis cases were confirmed. Three children who are highly suspicious of having BCG osteitis from the histologic findings were excluded because BCG-specific PCR was not performed. Detailed clinical characteristics of the patients are shown in Table 1. Of these, 4 cases had been previously reported.910 Fourteen patients (66.7%) were boys and seven were girls. The symptoms or signs of osteitis were first noticed at the median age of 14.3 months (range, 6.5–33.4) (Table 2). The median age at BCG inoculation was 24 days (range, 2–36). Sixteen patients (76.2%) received BCG Tokyo-172 strain by percutaneous multiple puncture method, four (19.0%) received BCG Tokyo-172 strain intradermally, and one (4.8%) received Danish BCG intradermally. A total of 20 cases (95.2%) were inoculated with Tokyo-172 strain. The years the BCG osteitis patients received vaccination are shown in Fig. 1. Patients were vaccinated in different regions: Seoul-metropolitan (35%), central area (40%), and southern area (25%).\nBCG = Bacille Calmette-Guérin, AFB = acid fast bacilli, MTB PCR = M. tuberculosis complex polymerase chain reaction, PC = percutaneous, CG = chronic granuloma, CN = caseous necrosis, H = isoniazid, R = rifampin, S = streptomycin, Z = pyrazinamide, CLA = clarithromycin, E = ethambutol, ID = intradermal.\naThis patient received prolonged treatment with various antimycobacterial combinations as making a definitive diagnosis was difficult; bThis patient recovered without joint complication until the patient developed osteomyelitis at the same joint caused by Mycobacterium intracellulare.\nBCG = Bacille Calmette-Guérin.\nBCG = Bacille Calmette-Guérin .\n Clinical manifestations The median duration from BCG vaccination to symptom onset was 13.8 months (range, 6.0–32.5). Patients were diagnosed after a median of 29 days (range, 3–112) from symptom onset. At the time of diagnosis, the most common symptoms were swelling (76.2%), limitation of movement (63.2%) and pain (61.9%). Fever (≥ 38.0°C) was accompanied only in 19.0%. Excepting the two cases with multiple bone involvement, the most commonly affected site was femur (33.3%), followed by the tarsal bones (28.6%), tibia (9.5%), and rib and sternum (9.5%), as shown in Table 2. The initial diagnosis for the 19 referred cases were bacterial osteomyelitis in 10 (52.6%), bone tumor or Langerhans cell histiocytosis in 6 (31.6%), and transient synovitis in 3 (15.7%).\nThe median duration from BCG vaccination to symptom onset was 13.8 months (range, 6.0–32.5). Patients were diagnosed after a median of 29 days (range, 3–112) from symptom onset. At the time of diagnosis, the most common symptoms were swelling (76.2%), limitation of movement (63.2%) and pain (61.9%). 
Fever (≥ 38.0°C) was accompanied only in 19.0%. Excepting the two cases with multiple bone involvement, the most commonly affected site was femur (33.3%), followed by the tarsal bones (28.6%), tibia (9.5%), and rib and sternum (9.5%), as shown in Table 2. The initial diagnosis for the 19 referred cases were bacterial osteomyelitis in 10 (52.6%), bone tumor or Langerhans cell histiocytosis in 6 (31.6%), and transient synovitis in 3 (15.7%).\n Comorbidities Three (14.3%) were born preterm at 32–36 weeks of gestational age with a range of birth weight from 2.4 kg to 2.8 kg. Two had a previous history of lymph node excision due to BCG lymphadenopathy. Tests for human immunodeficiency virus were performed in 18 patients and all were negative. Further immunological investigations performed in 15 cases were all normal at the time of diagnosis. Although there were no identified cases of primary immunodeficiency, one had a past history of Pneumocystis jirovecii pneumonitis during infancy and thus was suspected of immunodeficiency.\nThree (14.3%) were born preterm at 32–36 weeks of gestational age with a range of birth weight from 2.4 kg to 2.8 kg. Two had a previous history of lymph node excision due to BCG lymphadenopathy. Tests for human immunodeficiency virus were performed in 18 patients and all were negative. Further immunological investigations performed in 15 cases were all normal at the time of diagnosis. Although there were no identified cases of primary immunodeficiency, one had a past history of Pneumocystis jirovecii pneumonitis during infancy and thus was suspected of immunodeficiency.\n Laboratory and radiologic findings On admission, 14.3% of the children had leukocytosis (> 14,000/mm3) and one (4.8%) had leukopenia (< 4,000/mm3). C-reactive protein was elevated (> 5.0 mg/L) in 35%. Erythrocyte sedimentation rate was tested for 16 cases and was elevated (20 mm/hr or greater) in 68.8%. Solitary demarcated osteolytic lesion (63.1%) and cortical breakage (42.1%) were the most common findings on initial plain radiographs in 19 cases. In four patients (21.0%) there were only soft tissue swelling or bulging on plain radiographs. Magnetic resonance imaging was performed in 20 patients including those with no evident bony abnormality on simple radiographs. Bone abnormalities were found on magnetic resonance imaging in all cases. The epiphysis was affected in 6 (50%) of the 12 patients with BCG osteitis in the long bones. Transphyseal spread between the epiphysis and the metaphysis was observed in 3 (30%). No diaphyseal involvement was noticed. Soft tissue abnormalities were common, where the most common finding was inflammation of the surrounding muscles (87.5%), followed by soft tissue abscess (43.8%). Subcutaneous inflammation was noted in 37.5%. Eight of the 16 cases with a single bone lesion had joint involvement, including four cases with synovial hypertrophy and four cases with joint effusion.\nOn admission, 14.3% of the children had leukocytosis (> 14,000/mm3) and one (4.8%) had leukopenia (< 4,000/mm3). C-reactive protein was elevated (> 5.0 mg/L) in 35%. Erythrocyte sedimentation rate was tested for 16 cases and was elevated (20 mm/hr or greater) in 68.8%. Solitary demarcated osteolytic lesion (63.1%) and cortical breakage (42.1%) were the most common findings on initial plain radiographs in 19 cases. In four patients (21.0%) there were only soft tissue swelling or bulging on plain radiographs. 
Magnetic resonance imaging was performed in 20 patients including those with no evident bony abnormality on simple radiographs. Bone abnormalities were found on magnetic resonance imaging in all cases. The epiphysis was affected in 6 (50%) of the 12 patients with BCG osteitis in the long bones. Transphyseal spread between the epiphysis and the metaphysis was observed in 3 (30%). No diaphyseal involvement was noticed. Soft tissue abnormalities were common, where the most common finding was inflammation of the surrounding muscles (87.5%), followed by soft tissue abscess (43.8%). Subcutaneous inflammation was noted in 37.5%. Eight of the 16 cases with a single bone lesion had joint involvement, including four cases with synovial hypertrophy and four cases with joint effusion.\n Microbiological findings The diagnosis of M. bovis BCG osteitis was confirmed in all patients by performing BCG-specific PCR on DNA extracted from biopsy samples or bacterial isolates. The real-time PCR for the 53-bp MIRU was all negative, suggesting M. bovis BCG strains. The multiplex PCR assay with 7 primers that can discriminate BCG substrains revealed the Tokyo strain in 20 and Danish strain in one. Acid-fast bacilli stain was positive in 25.0% and M. bovis grew by culture in 70.0%. PCR for M. tuberculosis complex on fresh biopsy sample was positive in 93.8%. Histopathology showed chronic granuloma with caseous necrosis in 77.8%.\nThe diagnosis of M. bovis BCG osteitis was confirmed in all patients by performing BCG-specific PCR on DNA extracted from biopsy samples or bacterial isolates. The real-time PCR for the 53-bp MIRU was all negative, suggesting M. bovis BCG strains. The multiplex PCR assay with 7 primers that can discriminate BCG substrains revealed the Tokyo strain in 20 and Danish strain in one. Acid-fast bacilli stain was positive in 25.0% and M. bovis grew by culture in 70.0%. PCR for M. tuberculosis complex on fresh biopsy sample was positive in 93.8%. Histopathology showed chronic granuloma with caseous necrosis in 77.8%.\n Treatment In our case series, 20 (95.2%) children underwent surgical procedures, where 19 patients had surgical treatment and one received biopsy only. Seven (33.3%) patients required more than one surgical interventions because of persistent swelling and spread of infection outside the epiphysis. All patients received antimycobacterial medication with isoniazid and rifampicin for a median duration of 12 months (range, 12–31).\nIn our case series, 20 (95.2%) children underwent surgical procedures, where 19 patients had surgical treatment and one received biopsy only. Seven (33.3%) patients required more than one surgical interventions because of persistent swelling and spread of infection outside the epiphysis. All patients received antimycobacterial medication with isoniazid and rifampicin for a median duration of 12 months (range, 12–31).\n Clinical outcome Eighteen of the 21 patients were followed-up for more than one year and three children were still on treatment. The median follow-up duration was 3.15 years (range, 1.0–10.8). All were compliant with their medical treatment and were regularly followed-up every 1 to 2 months. Symptoms, physical examination findings and radiographic findings improved in most patients (95.2%) without obvious sequelae. There was no recurrence of BCG osteitis after completing antimycobacterial therapy. 
One patient, who had a talar lesion with inflammatory destruction and necrotic fragmentation at the time of diagnosis, later developed progressive bony remodeling with collapse of the talus, resulting in leg length shortening. Another patient developed joint contracture due to nontuberculous mycobacterial infection at the previous BCG osteitis site at the age of 5 years; this patient, who had a past history of P. jirovecii pneumonitis during infancy, suffered from disseminated nontuberculous mycobacterial infection. One patient had horseshoe kidney, tethered cord syndrome, and developmental delay at the time of BCG osteitis and was diagnosed with Kabuki syndrome with a KMT2D (also known as MLL2) gene mutation at the age of 5 years; this patient later developed nontuberculous mycobacterial infection in bilateral renal cysts. No deaths related to BCG osteitis or its treatment were identified.
DISCUSSION: This study reports 21 children with laboratory-confirmed BCG osteitis treated at a single referral center in Korea from January 2007 to March 2018. The median age at symptom onset was 14.3 months. The most frequently administered BCG vaccine was the Tokyo-172 strain inoculated percutaneously (76.2%), followed by the same strain administered intradermally (19.0%); only one child received the Danish strain intradermally. The most common symptoms were swelling, refusal to move the affected site, and pain, while fever accompanied only a few cases. All patients received antimycobacterial medications for at least 12 months, and all but one underwent surgical procedures for diagnostic or therapeutic purposes. After treatment, most patients improved without sequelae.

The clinical manifestations of BCG osteitis in this study were comparable with those published in the previous literature.411 These clinical findings, together with the solitary well-demarcated osteolytic lesion frequently observed on plain radiographs, mimic acute bacterial osteomyelitis.12 Because BCG osteitis is rare, it does not readily come to clinicians' minds, so it is important to recognize the features that distinguish it from bacterial osteomyelitis.
Children with bacterial osteomyelitis commonly present with high fever and bone pain of abrupt onset.13 In this study, BCG osteitis was accompanied by fever in only 19% of cases and was diagnosed after a median of 29 days from symptom onset; even longer diagnostic delays of several months have been observed previously, underscoring the importance of clinical suspicion. Another important differential point is the age of onset: acute bacterial osteomyelitis can develop at any age in children, whereas BCG osteitis usually develops between 6 months and 5 years of age.

BCG osteitis is known to have a good prognosis, and the overall outcome in this study was also favorable.1 During the median 3.2 years of follow-up, only one patient developed an orthopedic complication related to BCG osteitis. One retrospective study conducted in Finland observed sequelae in only 3% of BCG osteitis patients, and a systematic review reported that 2.4% of children with BCG osteitis experience sequelae.11 However, more than one surgical intervention may be needed when the diagnosis is delayed, owing to persistent swelling and local spread of the infection as shown in this study, and patients receiving surgical interventions as part of treatment appear to develop more complications than those receiving diagnostic procedures only.11 Moreover, patients with confirmed or suspected immunodeficiency may be prone to subsequent atypical infections at the previous BCG osteitis site and may also present with multiple lesions.13 A recent study evaluating 160 former Finnish patients diagnosed in infancy 19–47 years earlier reported a higher rate of orthopedic complications (13.8%), such as leg length discrepancy and chronic pain in the affected limb.14 Therefore, surveillance for serious adverse reactions following BCG vaccination and monitoring of their long-term outcomes are necessary.

In this study, it is important to note that 95.2% of the patients with BCG osteitis were vaccinated with the Tokyo-172 strain, and 80.0% of them received the vaccine percutaneously via the multiple puncture technique. Adverse reactions after BCG vaccination are known to depend on various factors, including the vaccine strain, the route of administration, and the number of viable bacilli in the batch.1 BCG osteitis was widely reported in the 1970s, primarily in Northern Europe, where the occurrence of the disease was associated with changes in the vaccine strain or the manufacturing method. The incidence of BCG osteitis in Sweden and Finland increased rapidly shortly after the manufacturer was changed from the Swedish Laboratory to the Statens Serum Institute, Copenhagen.1516 Although the same Gothenburg strain was used, the incidence in Sweden and Finland increased from 2.5 to 33.0 and from 13.0 to 72.9 per 100,000 BCG-vaccinated infants, respectively, and later decreased rapidly after the vaccine was replaced by the Copenhagen 1331 strain and the Glaxo-Evans strain, respectively.17 Interestingly, while the incidence of BCG osteitis following Glaxo BCG vaccination was 6.4 per 100,000 from 1978 to 1988 in Finland, no cases were reported in the UK despite the use of the same vaccine. The underlying reason for this variation in incidence among countries using the same BCG strain remains unclear.
The BCG Tokyo-172 strain is known to be less reactogenic than other strains, and the incidence of BCG osteitis in Japan, where the vaccine is administered by the percutaneous multiple puncture method, was reported as only 0.01 case per million in the 1980s, the lowest among the countries compared.18 During 1951–2004, the incidence remained as low as 0.1 per million.7 However, the incidence of BCG osteitis in Japan increased more than 10-fold during 1998–2007, to 2 per million: the annual number of cases was 2.2 between 1998 and 2007 and increased further to 4.14 between 2005 and 2011.319 In Japan, BCG vaccination is not recommended during the first 3 months of life because of serious adverse events in immunodeficient infants, and the recommended age for vaccination was changed from 3–6 months to 5–8 months in 2013.1920 In Taiwan, where the Tokyo-172 strain is administered intradermally, the incidence of BCG osteitis is reported to be much higher, estimated at 3.68 per million during 2002–2006.8 With improved laboratory capacity to differentiate M. bovis BCG from other M. tuberculosis complex species, as well as enhanced surveillance, the incidence rose rapidly to 12.9 cases per million during 2005–2007 and then to 30.1 cases per million during 2008–2012.421 BCG vaccine was previously recommended for infants reaching 5 months of age, but the Taiwan Centers for Disease Control recently revised the schedule to infants aged 5 through 8 months to lower the number of adverse events following BCG immunization.22

Based on this study of 21 children with BCG osteitis, the estimated incidence of BCG osteitis in Korea is also high. The Korean NIP recommends BCG vaccination for all neonates within 4 weeks after birth, and about 96%–97% of neonates are vaccinated.23 Considering the total of about 4,900,000 births from 2007 to 2017 and that at least 95% of newborns are vaccinated with BCG, the incidence of BCG osteitis during 2007–2017 in Korea is estimated to be at least 4.08 cases per million vaccinations.24 Although the Korean NIP for BCG vaccination is based on the intradermal Danish strain, the proportion of percutaneous Tokyo-172 BCG vaccination is estimated at 68% over the recent 10 years, more than twice that of intradermal Danish BCG vaccination.252627 The incidence of BCG osteitis following the percutaneous BCG Tokyo vaccine in Korea would be at least 4.44 cases per million, and that following the intradermal BCG Tokyo vaccine at least 8.27 per million. The data in this study and in previous studies conducted in Taiwan and Thailand imply that the BCG Tokyo-172 vaccine might be more virulent than expected, causing invasive disease in young immunocompetent children.528
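As a rough illustration of how such lower-bound incidence estimates are derived, the sketch below divides confirmed case counts by an estimated vaccinated cohort. The birth count, coverage, and strain share are the approximate figures quoted above; the paper's exact denominators come from national birth statistics and vaccine distribution data, so the printed numbers will differ slightly from the published estimates.

```python
# Minimal sketch: lower-bound incidence of BCG osteitis per million vaccinations.
# Inputs are the approximate figures quoted in the text; the published estimates
# (4.08, 4.44, and 8.27 per million) rely on exact national statistics, so the
# numbers printed here are illustrative and will not match exactly.

def incidence_per_million(cases: int, vaccinated: float) -> float:
    """Cases per million vaccinated individuals."""
    return cases / vaccinated * 1_000_000

births_2007_2017 = 4_900_000   # approximate total births in Korea, 2007-2017
coverage = 0.95                # at least 95% of newborns receive BCG
vaccinated = births_2007_2017 * coverage

print(f"Overall: >= {incidence_per_million(21, vaccinated):.2f} per million")

# Strain-specific lower bound, assuming ~68% of vaccinees received the
# percutaneous Tokyo-172 vaccine (the exact strain split is not given in
# the text, so this share is only an assumption for illustration).
percutaneous_share = 0.68
print(f"Percutaneous Tokyo-172: >= "
      f"{incidence_per_million(16, vaccinated * percutaneous_share):.2f} per million")
```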
Despite the high number of BCG osteitis cases observed at this single institute from 2007 to 2017, only 5 cases of BCG-related osteitis were actually reported and reimbursed through the government-operated vaccine injury compensation system, the Korea National Vaccine Injury Compensation Program (KVICP), between 1995 and 2016.293031 Even the five cases reported through the program do not match our five patients who received intradermal BCG through the NIP. One main reason for this discordance is that the percutaneous BCG Tokyo-172 vaccine, the most commonly used vaccine in Korea, was not included in the NIP during most of the study period, so adverse reactions related to this strain were not monitored through the program. Hence, serious adverse events related to BCG vaccination must have been underreported and their incidence underestimated. A comprehensive national surveillance system is needed to accurately monitor serious adverse reactions following both intradermal and percutaneous BCG vaccination and to assess their safety in Korea.

This study has several limitations. It was conducted retrospectively at a single institute. However, as 90% of the patients were referred from all across the country, the data likely reflect the overall characteristics of BCG osteitis in Korea. Additionally, many children may not have been diagnosed because of a lack of clinical suspicion or because molecular testing was not performed even when the disease was suspected. Despite this concern about underdiagnosis, the estimated incidence of at least 4.08 per million based on data from a single center itself illustrates the frequent occurrence of BCG osteitis in Korea. The relative risk of BCG osteitis associated with the Tokyo-172 strain compared with the intradermal Danish strain could not be assessed, as the exact numbers of doses of each vaccine strain used during the study years are unknown.

In conclusion, a high degree of suspicion for BCG osteitis based on clinical features is crucial for prompt diagnosis and treatment. To date, the incidence of BCG osteitis, as well as of other serious adverse reactions associated with BCG vaccination in Korea, is likely underestimated because of the limited national surveillance system. Active and thorough monitoring of vaccine safety is urgently required, and the resulting adverse reaction profiles should inform a safe and effective BCG vaccination policy in Korea.
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, null, null, null, "discussion" ]
[ "\nMycobacterium bovis\n", "BCG", "Osteitis", "Vaccination", "Complications" ]
INTRODUCTION: Bacille Calmette-Guérin (BCG) immunization is recommended to prevent disseminated and extrapulmonary tuberculosis, including meningitis, in young children. BCG is a live attenuated vaccine developed from serial passage of Mycobacterium bovis. Currently, the Russian (Moscow-368), the Bulgarian (Sofia SL222), and the Tokyo-172 are the three most commonly used BCG strains worldwide.1 The World Health Organization recommends intradermal administration of the vaccine, but percutaneous application using a multiple puncture injection device is practiced in some countries, including Japan and Korea. The Tokyo-172 BCG vaccine is currently available in both percutaneous and intradermal formulations. Although adverse events at the injection site are frequently observed, serious adverse events following BCG vaccination are rare. The incidence of disseminated BCG disease is estimated at 0.19–1.56 per million vaccinations.2 BCG osteitis is another rare systemic adverse reaction following BCG vaccination. Cases of BCG osteitis were increasingly notified when the vaccine strain was changed, for example, in Scandinavia and in Eastern Europe.2 More recently, there have been increasing reports of osteitis in Japan, Taiwan, and Thailand.345 In Korea, Japan, and Taiwan, BCG vaccination is recommended for all infants through a national immunization program (NIP). Infants in Japan receive the BCG Tokyo vaccine (strain 172; Japan BCG Laboratory) by the percutaneous method, whereas those in Taiwan receive the BCG Tokyo vaccine (strain 172; Taiwan CDC Vaccine Center) by intradermal injection. In Korea, newborns are vaccinated with intradermal BCG Danish (strain 1331; Statens Serum Institute), intradermal BCG Tokyo (strain 172; Japan BCG Laboratory), or percutaneous BCG Tokyo (strain 172; Japan BCG Laboratory). Previously, the incidence of osteitis following BCG Tokyo-172 vaccination was reported to be extremely low, and the BCG Tokyo vaccine was therefore considered the safest BCG vaccine.67 However, a higher incidence of 3.68 cases per million vaccinations was reported in Taiwan.8 Unlike in Japan and Taiwan, no long-term observational studies of serious adverse reactions following BCG vaccination have been reported in Korea. Here, we describe the clinical characteristics of BCG osteitis from our experience during the recent 10 years.

METHODS:
Study subjects
A retrospective chart review was performed on all children with laboratory-confirmed BCG osteitis admitted to Seoul National University Children's Hospital, a large tertiary referral center in Korea, from January 2007 to March 2018. Children who were clinically suspected of BCG infection but not confirmed by BCG-specific polymerase chain reaction (PCR) were excluded. The history of BCG vaccination, namely the brand name of the vaccine, the manufacturer, and the date and site of vaccination, was taken from the national immunization registry. For patients whose data were not available from the registry, the BCG vaccination history was reconstructed from personal immunization records and inspection of the injection-site scar. Clinical data were collected on patient demographics, presenting symptoms, physical findings, radiologic features, histologic findings, and clinical outcome. Immune status was evaluated by the number and percentage of T-cell subsets, complement levels, immunoglobulin levels, and the dihydrorhodamine test.
Laboratory confirmation
The presumptive diagnosis in the majority of cases was based on a positive PCR for Mycobacterium tuberculosis complex performed directly on isolates of M. bovis BCG or on fresh samples obtained from the affected tissue. Laboratory confirmation of M. bovis BCG used the protocol previously described by Kim et al.9 Briefly, real-time PCR targeting the 53-bp mycobacterial interspersed repetitive unit (MIRU) of the senX3–regX3 intergenic region was performed to differentiate M. bovis BCG from non-BCG M. tuberculosis complex. Subsequently, a multiplex PCR assay with 7 primers (ET1, ET2, ET3, RD8l, RD8r, RD14l, and RD14r) amplifying 3 regions of difference (RD1, RD8, and RD14) was performed to discriminate among the BCG substrains.
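A minimal sketch of this two-step decision logic is shown below. The mapping from multiplex-PCR amplicon patterns to substrain calls is deliberately left as a placeholder table: the actual band patterns are defined in the assay of Kim et al. cited above, and the entries here are hypothetical stand-ins, not the published values.

```python
# Minimal sketch of the two-step identification logic described above.
# Step 1: real-time PCR for the 53-bp MIRU of senX3-regX3 separates
#         M. bovis BCG (negative) from non-BCG M. tuberculosis complex (positive).
# Step 2: a multiplex PCR pattern over RD1/RD8/RD14 is matched against a
#         lookup table to call the substrain. SUBSTRAIN_PATTERNS below is a
#         hypothetical placeholder; the real patterns are those of the cited assay.

from typing import Dict, Optional

SUBSTRAIN_PATTERNS: Dict[tuple, str] = {
    # (RD1 amplicon, RD8 amplicon, RD14 amplicon) -> substrain call.
    # Placeholder entries for illustration only.
    ("type_a", "present", "present"): "BCG Tokyo-172",
    ("type_b", "present", "present"): "BCG Danish 1331",
}

def identify(miru_pcr_positive: bool, multiplex_pattern: tuple) -> Optional[str]:
    """Return a strain call, or None if the pattern is unrecognized."""
    if miru_pcr_positive:
        # 53-bp MIRU detected: non-BCG M. tuberculosis complex.
        return "M. tuberculosis complex (non-BCG)"
    return SUBSTRAIN_PATTERNS.get(multiplex_pattern)

# Example: a MIRU-negative isolate matching the placeholder Tokyo pattern.
print(identify(False, ("type_a", "present", "present")))  # BCG Tokyo-172
```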
Statistical analysis
Categorical variables were summarized as absolute frequencies and percentages. Continuous variables were expressed as means with standard deviations or, if skewed, as medians with ranges. All data management and analyses were performed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA).

Ethics statement
This study was approved by the Institutional Review Board (IRB) of Seoul National University Hospital (IRB No. H-1711-120-901). The requirement for informed consent was waived.
BCG = Bacille Calmette-Guérin, AFB = acid fast bacilli, MTB PCR = M. tuberculosis complex polymerase chain reaction, PC = percutaneous, CG = chronic granuloma, CN = caseous necrosis, H = isoniazid, R = rifampin, S = streptomycin, Z = pyrazinamide, CLA = clarithromycin, E = ethambutol, ID = intradermal. aThis patient received prolonged treatment with various antimycobacterial combinations as making a definitive diagnosis was difficult; bThis patient recovered without joint complication until the patient developed osteomyelitis at the same joint caused by Mycobacterium intracellulare. BCG = Bacille Calmette-Guérin. BCG = Bacille Calmette-Guérin . During the study period, 21 BCG osteitis cases were confirmed. Three children who are highly suspicious of having BCG osteitis from the histologic findings were excluded because BCG-specific PCR was not performed. Detailed clinical characteristics of the patients are shown in Table 1. Of these, 4 cases had been previously reported.910 Fourteen patients (66.7%) were boys and seven were girls. The symptoms or signs of osteitis were first noticed at the median age of 14.3 months (range, 6.5–33.4) (Table 2). The median age at BCG inoculation was 24 days (range, 2–36). Sixteen patients (76.2%) received BCG Tokyo-172 strain by percutaneous multiple puncture method, four (19.0%) received BCG Tokyo-172 strain intradermally, and one (4.8%) received Danish BCG intradermally. A total of 20 cases (95.2%) were inoculated with Tokyo-172 strain. The years the BCG osteitis patients received vaccination are shown in Fig. 1. Patients were vaccinated in different regions: Seoul-metropolitan (35%), central area (40%), and southern area (25%). BCG = Bacille Calmette-Guérin, AFB = acid fast bacilli, MTB PCR = M. tuberculosis complex polymerase chain reaction, PC = percutaneous, CG = chronic granuloma, CN = caseous necrosis, H = isoniazid, R = rifampin, S = streptomycin, Z = pyrazinamide, CLA = clarithromycin, E = ethambutol, ID = intradermal. aThis patient received prolonged treatment with various antimycobacterial combinations as making a definitive diagnosis was difficult; bThis patient recovered without joint complication until the patient developed osteomyelitis at the same joint caused by Mycobacterium intracellulare. BCG = Bacille Calmette-Guérin. BCG = Bacille Calmette-Guérin . Clinical manifestations The median duration from BCG vaccination to symptom onset was 13.8 months (range, 6.0–32.5). Patients were diagnosed after a median of 29 days (range, 3–112) from symptom onset. At the time of diagnosis, the most common symptoms were swelling (76.2%), limitation of movement (63.2%) and pain (61.9%). Fever (≥ 38.0°C) was accompanied only in 19.0%. Excepting the two cases with multiple bone involvement, the most commonly affected site was femur (33.3%), followed by the tarsal bones (28.6%), tibia (9.5%), and rib and sternum (9.5%), as shown in Table 2. The initial diagnosis for the 19 referred cases were bacterial osteomyelitis in 10 (52.6%), bone tumor or Langerhans cell histiocytosis in 6 (31.6%), and transient synovitis in 3 (15.7%). The median duration from BCG vaccination to symptom onset was 13.8 months (range, 6.0–32.5). Patients were diagnosed after a median of 29 days (range, 3–112) from symptom onset. At the time of diagnosis, the most common symptoms were swelling (76.2%), limitation of movement (63.2%) and pain (61.9%). Fever (≥ 38.0°C) was accompanied only in 19.0%. 
Excepting the two cases with multiple bone involvement, the most commonly affected site was femur (33.3%), followed by the tarsal bones (28.6%), tibia (9.5%), and rib and sternum (9.5%), as shown in Table 2. The initial diagnosis for the 19 referred cases were bacterial osteomyelitis in 10 (52.6%), bone tumor or Langerhans cell histiocytosis in 6 (31.6%), and transient synovitis in 3 (15.7%). Comorbidities Three (14.3%) were born preterm at 32–36 weeks of gestational age with a range of birth weight from 2.4 kg to 2.8 kg. Two had a previous history of lymph node excision due to BCG lymphadenopathy. Tests for human immunodeficiency virus were performed in 18 patients and all were negative. Further immunological investigations performed in 15 cases were all normal at the time of diagnosis. Although there were no identified cases of primary immunodeficiency, one had a past history of Pneumocystis jirovecii pneumonitis during infancy and thus was suspected of immunodeficiency. Three (14.3%) were born preterm at 32–36 weeks of gestational age with a range of birth weight from 2.4 kg to 2.8 kg. Two had a previous history of lymph node excision due to BCG lymphadenopathy. Tests for human immunodeficiency virus were performed in 18 patients and all were negative. Further immunological investigations performed in 15 cases were all normal at the time of diagnosis. Although there were no identified cases of primary immunodeficiency, one had a past history of Pneumocystis jirovecii pneumonitis during infancy and thus was suspected of immunodeficiency. Laboratory and radiologic findings On admission, 14.3% of the children had leukocytosis (> 14,000/mm3) and one (4.8%) had leukopenia (< 4,000/mm3). C-reactive protein was elevated (> 5.0 mg/L) in 35%. Erythrocyte sedimentation rate was tested for 16 cases and was elevated (20 mm/hr or greater) in 68.8%. Solitary demarcated osteolytic lesion (63.1%) and cortical breakage (42.1%) were the most common findings on initial plain radiographs in 19 cases. In four patients (21.0%) there were only soft tissue swelling or bulging on plain radiographs. Magnetic resonance imaging was performed in 20 patients including those with no evident bony abnormality on simple radiographs. Bone abnormalities were found on magnetic resonance imaging in all cases. The epiphysis was affected in 6 (50%) of the 12 patients with BCG osteitis in the long bones. Transphyseal spread between the epiphysis and the metaphysis was observed in 3 (30%). No diaphyseal involvement was noticed. Soft tissue abnormalities were common, where the most common finding was inflammation of the surrounding muscles (87.5%), followed by soft tissue abscess (43.8%). Subcutaneous inflammation was noted in 37.5%. Eight of the 16 cases with a single bone lesion had joint involvement, including four cases with synovial hypertrophy and four cases with joint effusion. On admission, 14.3% of the children had leukocytosis (> 14,000/mm3) and one (4.8%) had leukopenia (< 4,000/mm3). C-reactive protein was elevated (> 5.0 mg/L) in 35%. Erythrocyte sedimentation rate was tested for 16 cases and was elevated (20 mm/hr or greater) in 68.8%. Solitary demarcated osteolytic lesion (63.1%) and cortical breakage (42.1%) were the most common findings on initial plain radiographs in 19 cases. In four patients (21.0%) there were only soft tissue swelling or bulging on plain radiographs. Magnetic resonance imaging was performed in 20 patients including those with no evident bony abnormality on simple radiographs. 
Bone abnormalities were found on magnetic resonance imaging in all cases. The epiphysis was affected in 6 (50%) of the 12 patients with BCG osteitis in the long bones. Transphyseal spread between the epiphysis and the metaphysis was observed in 3 (30%). No diaphyseal involvement was noticed. Soft tissue abnormalities were common, where the most common finding was inflammation of the surrounding muscles (87.5%), followed by soft tissue abscess (43.8%). Subcutaneous inflammation was noted in 37.5%. Eight of the 16 cases with a single bone lesion had joint involvement, including four cases with synovial hypertrophy and four cases with joint effusion. Microbiological findings The diagnosis of M. bovis BCG osteitis was confirmed in all patients by performing BCG-specific PCR on DNA extracted from biopsy samples or bacterial isolates. The real-time PCR for the 53-bp MIRU was all negative, suggesting M. bovis BCG strains. The multiplex PCR assay with 7 primers that can discriminate BCG substrains revealed the Tokyo strain in 20 and Danish strain in one. Acid-fast bacilli stain was positive in 25.0% and M. bovis grew by culture in 70.0%. PCR for M. tuberculosis complex on fresh biopsy sample was positive in 93.8%. Histopathology showed chronic granuloma with caseous necrosis in 77.8%. The diagnosis of M. bovis BCG osteitis was confirmed in all patients by performing BCG-specific PCR on DNA extracted from biopsy samples or bacterial isolates. The real-time PCR for the 53-bp MIRU was all negative, suggesting M. bovis BCG strains. The multiplex PCR assay with 7 primers that can discriminate BCG substrains revealed the Tokyo strain in 20 and Danish strain in one. Acid-fast bacilli stain was positive in 25.0% and M. bovis grew by culture in 70.0%. PCR for M. tuberculosis complex on fresh biopsy sample was positive in 93.8%. Histopathology showed chronic granuloma with caseous necrosis in 77.8%. Treatment In our case series, 20 (95.2%) children underwent surgical procedures, where 19 patients had surgical treatment and one received biopsy only. Seven (33.3%) patients required more than one surgical interventions because of persistent swelling and spread of infection outside the epiphysis. All patients received antimycobacterial medication with isoniazid and rifampicin for a median duration of 12 months (range, 12–31). In our case series, 20 (95.2%) children underwent surgical procedures, where 19 patients had surgical treatment and one received biopsy only. Seven (33.3%) patients required more than one surgical interventions because of persistent swelling and spread of infection outside the epiphysis. All patients received antimycobacterial medication with isoniazid and rifampicin for a median duration of 12 months (range, 12–31). Clinical outcome Eighteen of the 21 patients were followed-up for more than one year and three children were still on treatment. The median follow-up duration was 3.15 years (range, 1.0–10.8). All were compliant with their medical treatment and were regularly followed-up every 1 to 2 months. Symptoms, physical examination findings and radiographic findings improved in most patients (95.2%) without obvious sequelae. There was no recurrence of BCG osteitis after completing antimycobacterial therapy. One patient who had a talar lesion with inflammatory destruction and necrotic fragmentation at the time of diagnosis later developed progressive bony remodeling with collapse of the talus, resulting in leg length shortening. 
Another patient developed joint contracture due to nontuberculosis mycobacterial infection in the previous BCG osteitis site at the age of 5 years. This patient, who had a past history of P. jirovecii pneumonitis during infancy, suffered from disseminated nontuberculous mycobacterial infection. One patient had horseshoe kidney, tethered cord syndrome and developmental delay at the time of BCG osteitis, and was diagnosed with Kabuki syndrome with KMT2D (also known as MLL2) gene mutation at the age of 5 years. This patient later developed nontuberculous mycobacterial infection in bilateral renal cysts. No deaths related to BCG osteitis or its treatment were identified. Eighteen of the 21 patients were followed-up for more than one year and three children were still on treatment. The median follow-up duration was 3.15 years (range, 1.0–10.8). All were compliant with their medical treatment and were regularly followed-up every 1 to 2 months. Symptoms, physical examination findings and radiographic findings improved in most patients (95.2%) without obvious sequelae. There was no recurrence of BCG osteitis after completing antimycobacterial therapy. One patient who had a talar lesion with inflammatory destruction and necrotic fragmentation at the time of diagnosis later developed progressive bony remodeling with collapse of the talus, resulting in leg length shortening. Another patient developed joint contracture due to nontuberculosis mycobacterial infection in the previous BCG osteitis site at the age of 5 years. This patient, who had a past history of P. jirovecii pneumonitis during infancy, suffered from disseminated nontuberculous mycobacterial infection. One patient had horseshoe kidney, tethered cord syndrome and developmental delay at the time of BCG osteitis, and was diagnosed with Kabuki syndrome with KMT2D (also known as MLL2) gene mutation at the age of 5 years. This patient later developed nontuberculous mycobacterial infection in bilateral renal cysts. No deaths related to BCG osteitis or its treatment were identified. Patient characteristics: During the study period, 21 BCG osteitis cases were confirmed. Three children who are highly suspicious of having BCG osteitis from the histologic findings were excluded because BCG-specific PCR was not performed. Detailed clinical characteristics of the patients are shown in Table 1. Of these, 4 cases had been previously reported.910 Fourteen patients (66.7%) were boys and seven were girls. The symptoms or signs of osteitis were first noticed at the median age of 14.3 months (range, 6.5–33.4) (Table 2). The median age at BCG inoculation was 24 days (range, 2–36). Sixteen patients (76.2%) received BCG Tokyo-172 strain by percutaneous multiple puncture method, four (19.0%) received BCG Tokyo-172 strain intradermally, and one (4.8%) received Danish BCG intradermally. A total of 20 cases (95.2%) were inoculated with Tokyo-172 strain. The years the BCG osteitis patients received vaccination are shown in Fig. 1. Patients were vaccinated in different regions: Seoul-metropolitan (35%), central area (40%), and southern area (25%). BCG = Bacille Calmette-Guérin, AFB = acid fast bacilli, MTB PCR = M. tuberculosis complex polymerase chain reaction, PC = percutaneous, CG = chronic granuloma, CN = caseous necrosis, H = isoniazid, R = rifampin, S = streptomycin, Z = pyrazinamide, CLA = clarithromycin, E = ethambutol, ID = intradermal. 
aThis patient received prolonged treatment with various antimycobacterial combinations as making a definitive diagnosis was difficult; bThis patient recovered without joint complication until the patient developed osteomyelitis at the same joint caused by Mycobacterium intracellulare. BCG = Bacille Calmette-Guérin. BCG = Bacille Calmette-Guérin . Clinical manifestations: The median duration from BCG vaccination to symptom onset was 13.8 months (range, 6.0–32.5). Patients were diagnosed after a median of 29 days (range, 3–112) from symptom onset. At the time of diagnosis, the most common symptoms were swelling (76.2%), limitation of movement (63.2%) and pain (61.9%). Fever (≥ 38.0°C) was accompanied only in 19.0%. Excepting the two cases with multiple bone involvement, the most commonly affected site was femur (33.3%), followed by the tarsal bones (28.6%), tibia (9.5%), and rib and sternum (9.5%), as shown in Table 2. The initial diagnosis for the 19 referred cases were bacterial osteomyelitis in 10 (52.6%), bone tumor or Langerhans cell histiocytosis in 6 (31.6%), and transient synovitis in 3 (15.7%). Comorbidities: Three (14.3%) were born preterm at 32–36 weeks of gestational age with a range of birth weight from 2.4 kg to 2.8 kg. Two had a previous history of lymph node excision due to BCG lymphadenopathy. Tests for human immunodeficiency virus were performed in 18 patients and all were negative. Further immunological investigations performed in 15 cases were all normal at the time of diagnosis. Although there were no identified cases of primary immunodeficiency, one had a past history of Pneumocystis jirovecii pneumonitis during infancy and thus was suspected of immunodeficiency. Laboratory and radiologic findings: On admission, 14.3% of the children had leukocytosis (> 14,000/mm3) and one (4.8%) had leukopenia (< 4,000/mm3). C-reactive protein was elevated (> 5.0 mg/L) in 35%. Erythrocyte sedimentation rate was tested for 16 cases and was elevated (20 mm/hr or greater) in 68.8%. Solitary demarcated osteolytic lesion (63.1%) and cortical breakage (42.1%) were the most common findings on initial plain radiographs in 19 cases. In four patients (21.0%) there were only soft tissue swelling or bulging on plain radiographs. Magnetic resonance imaging was performed in 20 patients including those with no evident bony abnormality on simple radiographs. Bone abnormalities were found on magnetic resonance imaging in all cases. The epiphysis was affected in 6 (50%) of the 12 patients with BCG osteitis in the long bones. Transphyseal spread between the epiphysis and the metaphysis was observed in 3 (30%). No diaphyseal involvement was noticed. Soft tissue abnormalities were common, where the most common finding was inflammation of the surrounding muscles (87.5%), followed by soft tissue abscess (43.8%). Subcutaneous inflammation was noted in 37.5%. Eight of the 16 cases with a single bone lesion had joint involvement, including four cases with synovial hypertrophy and four cases with joint effusion. Microbiological findings: The diagnosis of M. bovis BCG osteitis was confirmed in all patients by performing BCG-specific PCR on DNA extracted from biopsy samples or bacterial isolates. The real-time PCR for the 53-bp MIRU was all negative, suggesting M. bovis BCG strains. The multiplex PCR assay with 7 primers that can discriminate BCG substrains revealed the Tokyo strain in 20 and Danish strain in one. Acid-fast bacilli stain was positive in 25.0% and M. bovis grew by culture in 70.0%. PCR for M. 
tuberculosis complex on fresh biopsy sample was positive in 93.8%. Histopathology showed chronic granuloma with caseous necrosis in 77.8%. Treatment: In our case series, 20 (95.2%) children underwent surgical procedures, where 19 patients had surgical treatment and one received biopsy only. Seven (33.3%) patients required more than one surgical interventions because of persistent swelling and spread of infection outside the epiphysis. All patients received antimycobacterial medication with isoniazid and rifampicin for a median duration of 12 months (range, 12–31). Clinical outcome: Eighteen of the 21 patients were followed-up for more than one year and three children were still on treatment. The median follow-up duration was 3.15 years (range, 1.0–10.8). All were compliant with their medical treatment and were regularly followed-up every 1 to 2 months. Symptoms, physical examination findings and radiographic findings improved in most patients (95.2%) without obvious sequelae. There was no recurrence of BCG osteitis after completing antimycobacterial therapy. One patient who had a talar lesion with inflammatory destruction and necrotic fragmentation at the time of diagnosis later developed progressive bony remodeling with collapse of the talus, resulting in leg length shortening. Another patient developed joint contracture due to nontuberculosis mycobacterial infection in the previous BCG osteitis site at the age of 5 years. This patient, who had a past history of P. jirovecii pneumonitis during infancy, suffered from disseminated nontuberculous mycobacterial infection. One patient had horseshoe kidney, tethered cord syndrome and developmental delay at the time of BCG osteitis, and was diagnosed with Kabuki syndrome with KMT2D (also known as MLL2) gene mutation at the age of 5 years. This patient later developed nontuberculous mycobacterial infection in bilateral renal cysts. No deaths related to BCG osteitis or its treatment were identified. DISCUSSION: This study reports 21 children with laboratory-confirmed BCG osteitis treated at a single referral center in Korea from January 2007 to March 2018. The median age of symptom onset was 14.3 months. The most frequently administered BCG vaccine was the Tokyo-172 strain inoculated percutaneously (76.2%), followed by the same strain administered intradermally (19.0%). Only one child received the Danish strain by intradermal method. The most common symptoms were swelling, refusal to move the affected site, and pain, while fever was only accompanied in few. All of the patients received antimycobacterial medications for at least 12 months and all but one of the patients had surgical procedures either for diagnostic or therapeutic purposes. After treatment, most of the patients improved without sequelae. The clinical manifestations of BCG osteitis in this study were comparable with those published in previous literatures.411 These clinical findings and the solitary well-demarcated osteolytic lesion frequently observed on the plain radiographs mimic acute bacterial osteomyelitis.12 As BCG osteitis is uncommon to easily come across clinicians' minds, it is important to recognize the distinctive features between bacterial osteomyelitis and BCG osteitis. 
Children with bacterial osteomyelitis commonly present with high fever and bone pain of abrupt onset.13 In this study, we found that BCG osteitis accompanied fever in 19% only and were diagnosed after a median of 29 days from symptom onset, and even longer delay in diagnosis of several months has been previously observed, underscoring the importance of clinical suspicion. Other important differential point is the affected age. Acute bacterial osteomyelitis can develop at any ages in children while BCG osteitis usually develops between 6 months and 5 years of age. BCG osteitis is known to have a good prognosis and the overall outcome in this study was also favorable.1 During the median 3.2 years of follow-up, only one patient developed orthopedic complication related to BCG osteitis. One retrospective study conducted in Finland observed some sequelae in only 3% of the BCG osteitis patients and a systemic review reported that 2.4% of children with BCG osteitis experience sequelae.11 However, more than one surgical interventions may be needed when the diagnosis is delayed due to persistent swelling and local spread of the infection as shown in this study, and patients receiving surgical interventions as part of the treatment seem to develop more complications than those receiving diagnostic procedures only.11 Moreover, patients with confirmed or suspected immunodeficiency could be prone to subsequent atypical infections in the previous BCG osteitis site and also present with multiple lesions.13 A recent study evaluating 160 former Finnish patients diagnosed 19–47 years ago in infancy reported a higher rate of 13.8% to have orthopedic complications, such as leg length discrepancy and chronic pain in the affected limb.14 Therefore, surveillance for serious adverse reactions following BCG vaccination and monitoring their long-term outcomes would be necessary. In this study, it is important to note that 95.2% patients with BCG osteitis were vaccinated with the Tokyo-172 strain and 80.0% of them received the vaccine percutaneously via multiple puncture technique. Adverse reactions after BCG vaccination are known to depend on various factors including the vaccine strain, route of vaccine administration, and number of viable bacilli in the batch.1 BCG osteitis was widely reported in the 1970s primarily in Northern Europe where the occurrence of the disease was associated with the changes in the vaccine strain or the manufacturing method. The incidence of BCG osteitis in Sweden and Finland increased rapidly shortly after the manufacturer was changed from the Swedish Laboratory to the Statens Serum Institute, Copenhagen.1516 Although the same Gothenburg strain was used, the incidence in Sweden and Finland increased from 2.5 to 33.0 per 100,000 and from 13.0 to 72.9 per 100,000 BCG-vaccinated infants which later decreased rapidly after the vaccine was replaced by the Copenhagen 1331 strain and the Glaxo-Evans strain, respectively.17 Interestingly, while the incidence of BCG osteitis following the Glaxo BCG vaccination was 6.4 per 100,000 from 1978 to 1988 in Finland, no cases were reported in the UK despite the use of the same vaccine. The underlying reason for the variation in incidence among different countries using the same BCG strain remains unclear. 
The BCG Tokyo-172 strain is known to be less reactogenic than other strains and the incidence of BCG osteitis in Japan, where it is administered by percutaneous multiple puncture method, was reported as only 0.01 case per million in the 1980s, the lowest compared to other countries.18 During 1951–2004, the incidence still remained as low as 0.1 per million.7 However, the incidence of BCG osteitis in Japan increased by more than 10-fold during 1998–2007 to 2 per million. The annual incidence was 2.2 cases per year between 1998 and 2007, and it further increased to 4.14 cases per year between 2005 and 2011.319 In Japan, BCG vaccination is not recommended during the first 3 months of age due to serious adverse events in immunodeficient infants, and the recommended age for vaccination was changed from 3–6 months to 5–8 months in 2013.1920 In Taiwan, where The Tokyo-172 strain is administered intradermally, the incidence of BCG osteitis is reported to be much higher, estimated to be 3.68 per million during 2002–2006.8 With improved laboratory facility to differentiate M. bovis BCG from other M. tuberculosis complex species as well as enhanced surveillance, the incidence rapidly increased to 12.9 cases per million during 2005–2007 and then to 30.1 cases per million during 2008–2012.421 BCG vaccine was previously recommended for infants reaching 5 months of age, but the Taiwan Centers for Disease Control recently revised the schedule for infants aged 5 through 8 months to lower adverse event cases following BCG immunization.22 Based on this study of 21 children with BCG osteitis, the estimated incidence of BCG osteitis in Korea is also high. The Korean NIP recommends BCG vaccination for all neonates within 4 weeks after birth and about 96%–97% neonates are vaccinated.23 Considering the total number of about 4,900,000 births from 2007 to 2017 and that at least 95% of the newborns are vaccinated with BCG, the incidence of BCG osteitis during 2007–2017 in Korea would be estimated to be at least 4.08 cases per million vaccinations.24 Although the Korean NIP for BCG vaccination is based on the intradermal Danish strain, the proportion of percutaneous Tokyo-172 BCG vaccination is estimated to be 68% during recent 10 years, which is more than twice that of intradermal Danish BCG vaccination.252627 The incidence of BCG osteitis following the percutaneous BCG Tokyo vaccine in Korea would be at least 4.44 cases per million and the incidence following the intradermal BCG Tokyo vaccine would be at least 8.27 per million. The data in this study and in previous studies conducted in Taiwan and Thailand imply that this BCG Tokyo-172 vaccine might be more virulent than expected, causing invasive disease in young immunocompetent children.528 Despite the high number of BCG osteitis observed in this single institute from 2007 to 2017, only 5 cases of BCG-related osteitis were actually reported and reimbursed through the government-operated vaccine injury compensation system, Korea National Vaccine Injury Compensation Program (KVICP), between 1995 and 2016.293031 Even the five cases reported through the program do not match with our five patients who received intradermal BCG through the NIP. One main reason for the discordance would be that the percutaneous BCG Tokyo-172 vaccine, the most common vaccine used in Korea, was not included in the NIP during most of the study period, hence adverse reactions related to this vaccine strain was not monitored through the program. 
Hence, the overall burden of serious adverse events related to BCG vaccination has most likely been underreported and its incidence underestimated. A comprehensive national surveillance system is needed to accurately monitor serious adverse reactions following both intradermal and percutaneous BCG vaccination and to assess their safety in Korea. This study has several limitations. It was conducted retrospectively in a single institute. However, as 90% of the patients were referred from across the country, the data may reflect the overall characteristics of BCG osteitis in Korea. Additionally, many children may not have been diagnosed because of a lack of clinical suspicion and because molecular testing was not performed even when the disease was suspected. Despite this concern of underdiagnosis, the estimated incidence of at least 4.08 per million based on single-center data alone illustrates the frequent occurrence of BCG osteitis in Korea. The relative risk of BCG osteitis associated with the Tokyo-172 strain compared with the intradermal Danish strain could not be assessed in this study, as the exact numbers of doses of each vaccine strain used during the study years are unknown. In conclusion, a high index of suspicion for BCG osteitis based on clinical features is crucial for prompt diagnosis and treatment. The incidence of BCG osteitis, as well as of other serious adverse reactions associated with BCG vaccination in Korea, is likely underestimated because of the limited national surveillance system. Active and thorough monitoring of vaccine safety is urgently required, and the resulting adverse reaction profiles should inform the establishment of a safe and effective BCG vaccination policy in Korea.
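To make the per-million arithmetic above easy to check, here is a minimal Python sketch under stated assumptions: the inputs are the rounded figures quoted in the text (21 cases, about 4,900,000 births, at least 95% BCG coverage), not the exact registry denominators behind the published estimate of at least 4.08 per million, so the output is only approximate.

    # Illustrative sketch: per-million incidence from case counts and an
    # assumed vaccinated denominator (rounded figures from the text).
    births_2007_2017 = 4_900_000      # approximate births in Korea, 2007-2017
    coverage = 0.95                   # assumed minimum BCG coverage
    vaccinated = births_2007_2017 * coverage

    def incidence_per_million(cases: int, denominator: float) -> float:
        # cases per million vaccinations
        return cases / denominator * 1_000_000

    print(round(incidence_per_million(21, vaccinated), 2))  # ~4.51 per million
    # The paper reports "at least 4.08 per million", presumably from exact
    # denominators; the same function applies to the route-specific estimates.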
Background: Mycobacterium bovis Bacille Calmette-Guérin (BCG) osteitis, a rare complication of BCG vaccination, has not been well investigated in Korea. This study aimed to evaluate the clinical characteristics of BCG osteitis during the recent 10 years in Korea. Methods: Children diagnosed with BCG osteitis at the Seoul National University Children's Hospital from January 2007 to March 2018 were included. M. bovis BCG was confirmed by multiplex polymerase chain reaction (PCR) in the affected bone. BCG immunization status and clinical information were reviewed retrospectively. Results: Twenty-one patients were diagnosed with BCG osteitis, and the median interval from BCG vaccination to symptom onset was 13.8 months (range, 6.0-32.5). Sixteen children (76.2%) received the Tokyo-172 vaccine by the percutaneous multiple puncture method, while four (19.0%) and one (4.8%) received the intradermal Tokyo-172 and Danish strains, respectively. Common presenting symptoms were swelling (76.2%), limited movement of the affected site (63.2%), and pain (61.9%), while fever accompanied only 19.0%. The femur (33.3%) and the tarsal bones (23.8%) were the most frequently involved sites, and demarcated osteolytic lesions (63.1%) and cortical breakages (42.1%) were observed on plain radiographs. Surgical drainage was performed in 90.5%, and 33.3% of these patients required repeated surgical interventions due to persistent symptoms. Antituberculosis medications were administered for a median duration of 12 months (range, 12-31). Most patients recovered without evident sequelae. Conclusions: A high index of suspicion for BCG osteitis based on clinical manifestations is important for prompt management. A comprehensive national surveillance system is needed to understand the exact incidence of serious adverse reactions following BCG vaccination and to establish a safe vaccination policy in Korea.
null
null
7,263
340
[ 179, 147, 51, 36, 330, 174, 100, 261, 119, 73, 234 ]
15
[ "bcg", "osteitis", "patients", "bcg osteitis", "cases", "strain", "vaccination", "tokyo", "patient", "pcr" ]
[ "bcg vaccination recommended", "percutaneous bcg vaccination", "bcg vaccine previously", "vaccinations bcg osteitis", "bcg osteitis vaccinated" ]
null
null
[CONTENT] Mycobacterium bovis | BCG | Osteitis | Vaccination | Complications [SUMMARY]
[CONTENT] Mycobacterium bovis | BCG | Osteitis | Vaccination | Complications [SUMMARY]
[CONTENT] Mycobacterium bovis | BCG | Osteitis | Vaccination | Complications [SUMMARY]
null
[CONTENT] Mycobacterium bovis | BCG | Osteitis | Vaccination | Complications [SUMMARY]
null
[CONTENT] Antitubercular Agents | BCG Vaccine | Child, Preschool | Female | Humans | Immunization | Infant | Male | Osteitis | Republic of Korea | Retrospective Studies | Tuberculosis [SUMMARY]
[CONTENT] Antitubercular Agents | BCG Vaccine | Child, Preschool | Female | Humans | Immunization | Infant | Male | Osteitis | Republic of Korea | Retrospective Studies | Tuberculosis [SUMMARY]
[CONTENT] Antitubercular Agents | BCG Vaccine | Child, Preschool | Female | Humans | Immunization | Infant | Male | Osteitis | Republic of Korea | Retrospective Studies | Tuberculosis [SUMMARY]
null
[CONTENT] Antitubercular Agents | BCG Vaccine | Child, Preschool | Female | Humans | Immunization | Infant | Male | Osteitis | Republic of Korea | Retrospective Studies | Tuberculosis [SUMMARY]
null
[CONTENT] bcg vaccination recommended | percutaneous bcg vaccination | bcg vaccine previously | vaccinations bcg osteitis | bcg osteitis vaccinated [SUMMARY]
[CONTENT] bcg vaccination recommended | percutaneous bcg vaccination | bcg vaccine previously | vaccinations bcg osteitis | bcg osteitis vaccinated [SUMMARY]
[CONTENT] bcg vaccination recommended | percutaneous bcg vaccination | bcg vaccine previously | vaccinations bcg osteitis | bcg osteitis vaccinated [SUMMARY]
null
[CONTENT] bcg vaccination recommended | percutaneous bcg vaccination | bcg vaccine previously | vaccinations bcg osteitis | bcg osteitis vaccinated [SUMMARY]
null
[CONTENT] bcg | osteitis | patients | bcg osteitis | cases | strain | vaccination | tokyo | patient | pcr [SUMMARY]
[CONTENT] bcg | osteitis | patients | bcg osteitis | cases | strain | vaccination | tokyo | patient | pcr [SUMMARY]
[CONTENT] bcg | osteitis | patients | bcg osteitis | cases | strain | vaccination | tokyo | patient | pcr [SUMMARY]
null
[CONTENT] bcg | osteitis | patients | bcg osteitis | cases | strain | vaccination | tokyo | patient | pcr [SUMMARY]
null
[CONTENT] bcg | japan | taiwan | tokyo | vaccine | 172 | bcg tokyo | strain 172 | bcg tokyo strain | bcg tokyo strain 172 [SUMMARY]
[CONTENT] bcg | data | performed | pcr | national | bovis bcg | bovis | irb | registry | variables [SUMMARY]
[CONTENT] bcg | patients | cases | patient | received | osteitis | bcg osteitis | range | median | pcr [SUMMARY]
null
[CONTENT] bcg | patients | osteitis | cases | bcg osteitis | pcr | strain | performed | received | tokyo [SUMMARY]
null
[CONTENT] Bacille Calmette-Guérin | BCG | BCG | Korea ||| BCG | the recent 10 years | Korea [SUMMARY]
[CONTENT] BCG | the Seoul National University Children's Hospital | January 2007 to March 2018 ||| BCG | PCR ||| [SUMMARY]
[CONTENT] Twenty-one | BCG | BCG | 13.8 months | 6.0 ||| Sixteen | 76.2% | four | 19.0% | 4.8% | Tokyo-172 | Danish ||| 76.2% | 63.2% | 61.9% | 19.0% ||| Femur | 33.3% | 23.8% | 63.1% | 42.1% ||| 90.5% | 33.3% ||| 12 months | 12 ||| [SUMMARY]
null
[CONTENT] Bacille Calmette-Guérin | BCG | BCG | Korea ||| BCG | the recent 10 years | Korea ||| BCG | the Seoul National University Children's Hospital | January 2007 to March 2018 ||| BCG | PCR ||| ||| Twenty-one | BCG | BCG | 13.8 months | 6.0 ||| Sixteen | 76.2% | four | 19.0% | 4.8% | Tokyo-172 | Danish ||| 76.2% | 63.2% | 61.9% | 19.0% ||| Femur | 33.3% | 23.8% | 63.1% | 42.1% ||| 90.5% | 33.3% ||| 12 months | 12 ||| ||| BCG ||| BCG | Korea [SUMMARY]
null
Reflection of Dietary Iodine in the 24 h Urinary Iodine Concentration, Serum Iodine and Thyroglobulin as Biomarkers of Iodine Status: A Pilot Study.
34444680
The iodine status of the US population is considered adequate, but subpopulations remain at risk for iodine deficiency and a biomarker of individual iodine status has yet to be determined. The purpose of this study was to determine whether a 3 day titration diet, providing known quantities of iodized salt, is reflected in 24 h urinary iodine concentration (UIC), serum iodine, and thyroglobulin (Tg).
BACKGROUND
A total of 10 participants (31.3 ± 4.0 years, 76.1 ± 6.3 kg) completed three 3-day iodine titration diets (minimal iodine, US RDA (United States Recommended Dietary Allowance), and 3× RDA). The 24 h UIC, serum iodine, and Tg were measured following each diet. The 24 h UIC and an iodine-specific food frequency questionnaire (FFQ) were completed at baseline.
METHODS
UIC increased an average of 19.3 μg/L for every gram of iodized salt consumed and was different from minimal to RDA (p = 0.001) and RDA to 3× RDA diets (p = 0.04). Serum iodine was different from RDA to 3× RDA (p = 0.006) whereas Tg was not responsive to diet. Baseline UIC was associated with iodine intake from milk (r = 0.688, p = 0.028) and fish/seafood (r = 0.646, p = 0.043).
RESULTS
These results suggest that 24 h UIC and serum iodine may be reflective of individual iodine status and may serve as biomarkers of iodine status.
CONCLUSION
[ "Adult", "Animals", "Biomarkers", "Dairy Products", "Diet", "Eggs", "Female", "Humans", "Iodine", "Male", "Nutritional Status", "Pilot Projects", "Seafood", "Sodium Chloride, Dietary", "Thyroglobulin" ]
8398459
1. Introduction
Iodine is an essential, rate-limiting element for the synthesis of thyroid hormones, which is currently the only known physiological role of iodine. Ensuring adequate iodine intake is important for all adults, but particularly important for women of childbearing age. Inadequate intake and deficiency during pregnancy can influence brain, nerve and muscle development in the growing fetus and result in growth and developmental abnormalities [1]. Iodine status in the US is generally thought to be adequate [2,3]. However, select populations are susceptible to insufficient iodine status due to geographical location, food intake practices, or increased iodine needs (e.g., pregnancy and lactation). Subpopulations at increased risk for low iodine intake include pregnant and lactating women [4,5,6], vegans [7,8,9] and vegetarians [7,8,9,10,11], those who avoid seafood and/or dairy [12], follow a sodium restricted diet [13,14], or eat local foods in regions with iodine-depleted soils [1,15]. The Institute of Medicine, American Heart Association, and the 2020–2025 Dietary Guidelines for Americans [16] have advocated for decreasing sodium intake to less than 2300 mg/day [17] and more prudently to less than 1500 mg/day [18], which could be reducing Americans’ intake of iodized salt. Additionally, trends toward local and plant-based diets may negatively influence iodine status depending on food selection and dietary intake patterns (e.g., avoidance of seafood, dairy and eggs) and the content of local soils. Therefore, despite presumed adequate iodine status in the general US population, certain dietary choices may directly influence iodine status and, hence, thyroid function. Additionally, dietary patterns low in iodine content are of particular concern for women of reproductive age and pregnant and lactating women due to iodine’s importance during fetal growth and development. The 24 h urinary iodine concentration (UIC) is considered the gold standard for assessing iodine status in populations, as approximately 90% of dietary iodine is excreted in the urine of healthy and iodine-replete adults [19]. However, 24 h urine collections are cumbersome for the patient, prone to collection and methodological errors, expensive to measure by the state-of-the-art procedures employed by most clinical laboratories (i.e., mass spectrometry) [20], and may not adequately reflect iodine status in individuals [21]. For example, 24 h UIC may better represent acute (e.g., days) versus chronic iodine status [22] because of variations in 24 h iodine excretion due to within-day hydration status changes [23] and other unknown variables [22]. While other biomarkers for assessment of status have been suggested, including 24 h total urinary iodine excretion (UIE) [19,24], serum iodine [25] and thyroglobulin (Tg) concentration [26], use of these markers (as well as 24 h UIC) must be better validated in healthy subjects against known quantities of iodine intake. The primary purpose of this pilot study was to determine whether a 3-day titration diet (which provided known quantities of iodized salt) is reflected in the 24 h UIC, UIE, serum iodine, and thyroglobulin biomarkers. Secondary purposes were to evaluate the association between baseline 24 h UIC and habitual iodine intake and to observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific food frequency questionnaire (FFQ). 
We hypothesized that 24 h UIC, UIE and serum iodine would increase as iodized salt consumption increased and that Tg would increase when iodine intake exceeded the iodine Recommended Dietary Allowance (RDA) of 150 µg. We also hypothesized that baseline 24 h UIC would be associated with the intake frequency of dairy, eggs, and iodized salt and that these same food items would be associated with the total iodine intake as estimated by the FFQ.
null
null
3. Results
3.1. Participant Characteristics The physical characteristics and baseline biochemical data of the five male and five female participants are summarized in Table 1. Two participants reported following a vegetarian diet and eight were omnivores. Eleven participants were recruited; however, one female dropped out of this study following the screening visit due to illness, and her data are not included. Most of the participants were of normal BMI (18.5–24.9), with two men in the overweight category (25–29.9) and one female in the obese category (≥30). Thyroglobulin antibodies were <1.8 IU/mL (the lowest detectable range) for all participants, indicating a very low possibility that any participant had an autoimmune thyroid disorder that could interfere with the use of Tg as a marker of iodine status [32]. None of the participants smoked. All participants reported having refrained from strenuous physical activity causing excessive sweating for the duration of this study.
3.2. Baseline Daily Iodine Intake and Frequency of Intake of Iodine-Containing Foods No participants reported having consumed supplements containing iodine. Daily iodine intake averaged 265.6 ± 28.2 µg (median: 264.5 µg; range: 93.8 to 401.7 µg) and was not different by sex (p = 0.55). The mean and median iodine intakes were higher than the US RDA for male and non-pregnant female adults of 150 µg/day; however, 10% (n = 1) had an estimated intake less than the RDA and no participant had an intake greater than the Tolerable Upper Limit of 1100 µg/day. The estimated average daily iodine intake (µg/day) from contributing foods was as follows: total dairy (186.5 ± 36.9; milk, yogurt, cheese, cottage cheese), milk (126.3 ± 34.1), eggs (27.3 ± 12.1), total fish and seafood (4.1 ± 1.3), seaweed (1.6 ± 0.5), and iodized table salt (42.8 ± 15.8). Total estimated iodine intake from the FFQ was associated with reported total dairy (r = 0.830, p = 0.003) and milk intake (r = 0.688, p = 0.03), but not with egg (r = 0.101, p = 0.78), fish and seafood (r = 0.457, p = 0.18), seaweed (r = 0.503, p = 0.14), or iodized salt intake (r = −0.235, p = 0.51).
3.3. Baseline 24 h UIC and Iodine Status Baseline 24 h UIC ranged between 67 and 253 µg/L; the average value was 135 µg/L and the median value was 121 µg/L. The frequency of the categories of iodine status based on World Health Organization (WHO) criteria is shown in Figure 2. Average baseline serum Tg fell within the normal range (≤33 ng/mL), with one participant having a Tg value > 33 ng/mL, indicating compromised status.
3.4. Relationship between Baseline Iodine Intake and UIC Estimated total iodine intake from the FFQ was not correlated with UIC (r = 0.273, p = 0.446). FFQ-estimated iodine intake was also not correlated with predicted iodine intake using the Zimmerman equation (r = 0.248, p = 0.489), which incorporates UIC and body mass. UIC, however, was associated with milk (r = 0.688, p = 0.028) and fish/seafood intake (r = 0.646, p = 0.043), but not with reported total dairy intake (r = 0.515, p = 0.128), egg consumption (r = −0.241, p = 0.503), seaweed consumption (r = −0.121, p = 0.740), or iodized table salt use (r = −0.512, p = 0.130).
3.5. Effect of Titration Diet on Iodine Status Biomarkers Titration diet effects on 24 h UIC, serum iodine and Tg are shown in Table 2. Urine volumes varied widely among participants (900–4150 mL) and averaged 2155 ± 195 mL, 1972 ± 210 mL and 2200 ± 290 mL on the minimal, RDA and 3× RDA diets, respectively. Both UIC and UIE increased from minimal to RDA (p < 0.001 for both) and from RDA to 3× RDA (p = 0.04 and p = 0.002, respectively). The average UIC intercepts and slopes from minimal iodine to 3× RDA produced the following regression: UIC (µg/L) = 39.5 + 19.3 × (grams iodized salt). Thus, UIC increased an average of 19.3 µg/L for every gram of iodized salt consumed. Regression line slopes and intercepts were not different between sexes (p = 0.27 and p = 0.46, respectively). Serum iodine did not increase from minimal to RDA (p = 0.67) but increased significantly from RDA to 3× RDA (p = 0.006). The average serum iodine intercepts and slopes from minimal iodine to 3× RDA produced the following regression: serum iodine = 61.2 (± 2.0) + 0.6 (± 0.2) × (grams iodized salt). Serum Tg concentration, however, was not different from minimal iodine to RDA (p = 0.94) or from RDA to 3× RDA (p = 0.68).
3.6. Post-Study Questionnaire Responses Nine of ten participants reported consuming all the provided iodized salt on the RDA and 3× RDA diets. One participant reported consuming 75% of the provided iodized salt on day 9 (3× RDA diet). Participants reported adding the iodized salt to reduced-salt or unsalted homemade meals, margaritas, and caramel, and 7 of the 10 reported dissolving the salt in water to drink. Seven of 10 participants reported collecting 100% of urine output at each 24 h collection. One female participant reported missing 100–200 mL during the fourth 24 h urine collection (3× RDA diet), another female missed a single void during one 24 h collection period, and one male reported occasionally forgetting to collect a urine sample. Overall, participants reported that the most difficult parts of this study were avoiding eggs, dairy, and commercial bread; remembering to collect all urine; and consuming the 9 g of iodized salt on the 3× RDA diet. All ten participants rated consuming all salt on the RDA diet and collecting all urine as a 1, 2, or 3 on a 1–5 Likert scale of difficulty, with 5 being the most difficult. Five of ten participants rated consuming all salt on the 3× RDA diet as a 4 or 5. Participants reported changing their normal eating pattern to accommodate the additional salt by eating lower-sodium foods and adjusting the sodium content of homemade recipes, eating larger portions to make food more palatable, and increasing the frequency of meals. They also reported that their normal eating pattern changed by avoiding dairy, eggs, and bread.
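The regression equations and the UIE definition reported above can be applied directly; the following is a minimal Python sketch using the published coefficients. It is illustrative only: the fits come from n = 10 participants, and the example volume passed to the UIE helper is an assumed round number, not a study value.

    # Minimal sketch of the reported relationships (coefficients from the text).
    def predicted_uic(grams_iodized_salt: float) -> float:
        # 24 h urinary iodine concentration (ug/L) vs daily iodized salt (g)
        return 39.5 + 19.3 * grams_iodized_salt

    def predicted_serum_iodine(grams_iodized_salt: float) -> float:
        # serum iodine vs daily iodized salt (g); reported slope SE was 0.2
        return 61.2 + 0.6 * grams_iodized_salt

    def uie_ug(uic_ug_per_l: float, urine_volume_ml: float) -> float:
        # 24 h urinary iodine excretion = UIC x total urine volume
        return uic_ug_per_l * urine_volume_ml / 1000.0

    print(predicted_uic(3.0))     # RDA diet (3 g/day): ~97.4 ug/L
    print(predicted_uic(9.0))     # 3x RDA diet (9 g/day): ~213.2 ug/L
    print(uie_ug(135.0, 2000.0))  # ~270 ug/day for an assumed 2 L collection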
5. Conclusions
The current study provides preliminary support for the idea that 24 h UIC and serum iodine may be indicative of individual iodine status, at least over the short term (3 days) when iodine status is mostly stable. The results suggest that 24 h UIC is likely the most sensitive indicator of individual iodine status at this time and may remain the best biomarker for assessing iodine status. However, serum iodine requires only a single blood draw rather than single or multiple 24 h urine collections and would therefore provide a convenient method to assess iodine status, especially for pregnant and lactating women. Serum Tg was not sensitive to short-term changes in iodine intake, but future studies should evaluate its use as a long-term iodine status marker. This study highlights the need for additional research to identify individual iodine biomarkers. Continuing to explore these markers to promptly identify iodine deficiency in women of reproductive age and in pregnant and lactating women is especially important so that dietary interventions can be recommended and deficiencies corrected.
[ "2. Materials and Methods", "2.1. Overview of Screening and Baseline Testing", "2.3. Baseline Measurements and Testing", "2.4. Baseline Iodine Intake and Frequency of Intake of Iodine-Containing Foods", "2.5. Iodine Titration Diet and Biomarker Protocol", "2.6. The 24 h Urine Collections", "2.7. Analysis of Iodine Status Biomarkers in Urine and Blood", "2.8. Post-Study Questionnaire", "2.9. Statistical Analysis", "3.1. Participant Characteristics ", "3.2. Baseline Daily Iodine Intake and Frequency of Intake of Iodine-Containing Foods", "3.3. Baseline 24 h UIC and Iodine Status", "3.4. Relationship between Baseline Iodine Intake and UIC", "3.5. Effect of Titration Diet on Iodine Status Biomarkers", "3.6. Post-Study Questionnaire Responses" ]
[ "This pilot study was conducted between February and June of 2020 with participants completing a 9 day study period in February and March 2020. The study schematic is shown in Figure 1. All research procedures were reviewed and approved by the Institutional Review Board of the University of Wyoming. This study evaluated the responsiveness of a three biomarkers of iodine status using three known quantities of dietary iodine consumption. Participants were notified of any possible risks prior to giving written formal consent to participate in this study. Volunteers were recruited through flyers and word of mouth from the University of Wyoming campus and the local community.\n2.1. Overview of Screening and Baseline Testing In early February, interested volunteers completed a screening interview to ensure preliminary eligibility. Interested and eligible volunteers reported to the lab between 8 and 9 am for an initial/baseline visit, which involved provision of written informed consent, completion of a health history questionnaire, and an iodine-specific FFQ. Blood was drawn for analysis of a complete blood count (CBC), thyroid stimulating hormone (TSH) and thyroglobulin antibodies, and was also used for analysis of baseline serum Tg and iodine concentrations. The 24 h urine was collected for analysis of baseline urinary iodine concentration (UIC) as outlined in Figure 1. Body composition was assessed by dual energy x-ray absorptiometry (DXA (Lunar Prodigy, GE Healthcare, Farfield, CT, USA)) during a separate visit scheduled within 10 days of baseline. Following the baseline visit, participants completed three, 3 day titration diets with increasing concentrations of iodine from iodized table salt (Morton Salt, processed 10/5/19, Providence, Rhode Island, US). These diets included “minimal iodine”, iodine close to the RDA (“RDA”) and iodine close to three times the RDA (“3× RDA”) as explained below (see Section 2.5). The 24 h UIC was measured on the last full day of each diet intervention and is explained in detail below (see Section 2.6 24 h Urine Collections).\nIn early February, interested volunteers completed a screening interview to ensure preliminary eligibility. Interested and eligible volunteers reported to the lab between 8 and 9 am for an initial/baseline visit, which involved provision of written informed consent, completion of a health history questionnaire, and an iodine-specific FFQ. Blood was drawn for analysis of a complete blood count (CBC), thyroid stimulating hormone (TSH) and thyroglobulin antibodies, and was also used for analysis of baseline serum Tg and iodine concentrations. The 24 h urine was collected for analysis of baseline urinary iodine concentration (UIC) as outlined in Figure 1. Body composition was assessed by dual energy x-ray absorptiometry (DXA (Lunar Prodigy, GE Healthcare, Farfield, CT, USA)) during a separate visit scheduled within 10 days of baseline. Following the baseline visit, participants completed three, 3 day titration diets with increasing concentrations of iodine from iodized table salt (Morton Salt, processed 10/5/19, Providence, Rhode Island, US). These diets included “minimal iodine”, iodine close to the RDA (“RDA”) and iodine close to three times the RDA (“3× RDA”) as explained below (see Section 2.5). The 24 h UIC was measured on the last full day of each diet intervention and is explained in detail below (see Section 2.6 24 h Urine Collections).\n2.2. 
Participants Participants included 5 women and 5 men, aged 19–56 years with a BMI (kg/m2) ≥ 18 to ≤40. A sample size of n = 10 was selected for this pilot study due to resource constraints. Individuals were screened by phone and deemed ineligible if they answered “yes” to any or all questions regarding presence of thyroid disorders, currently pregnant or lactating, current use of commercial douches (which may contain iodine), presence of autoimmune disease history/status, and current use of tobacco (including smoking) and medications. Additional exclusion criteria include not willing to comply with the salt/iodine restricted aspects of this study, not being available to/agreeing to fully participate in all aspects of this study and oblige to all restraints during the iodine-controlled 9 days of this study (including avoidance of strenuous exercise; due to potential iodine loss via sweating [27,28], having extreme dietary habits that would result in exceptionally low or high iodine intake, currently taking or have taken iodine or kelp supplements or medications containing or interfering with iodine/thyroid status in the past month, having a history of thyroid disorders, Addison’s disease, or Cushing’s syndrome, a history of major chronic diseases, having laboratory signs or symptom’s suggestive of anemia and/or hypo/hyperthyroidism. Results of the CBC, TSH, thyroglobulin antibodies and health history were used to ensure eligibility to proceed to the next steps of the study protocol.\nParticipants included 5 women and 5 men, aged 19–56 years with a BMI (kg/m2) ≥ 18 to ≤40. A sample size of n = 10 was selected for this pilot study due to resource constraints. Individuals were screened by phone and deemed ineligible if they answered “yes” to any or all questions regarding presence of thyroid disorders, currently pregnant or lactating, current use of commercial douches (which may contain iodine), presence of autoimmune disease history/status, and current use of tobacco (including smoking) and medications. Additional exclusion criteria include not willing to comply with the salt/iodine restricted aspects of this study, not being available to/agreeing to fully participate in all aspects of this study and oblige to all restraints during the iodine-controlled 9 days of this study (including avoidance of strenuous exercise; due to potential iodine loss via sweating [27,28], having extreme dietary habits that would result in exceptionally low or high iodine intake, currently taking or have taken iodine or kelp supplements or medications containing or interfering with iodine/thyroid status in the past month, having a history of thyroid disorders, Addison’s disease, or Cushing’s syndrome, a history of major chronic diseases, having laboratory signs or symptom’s suggestive of anemia and/or hypo/hyperthyroidism. Results of the CBC, TSH, thyroglobulin antibodies and health history were used to ensure eligibility to proceed to the next steps of the study protocol.\n2.3. Baseline Measurements and Testing Basic anthropometric testing, which included height, weight and body composition measurements, was performed for descriptive proposes. Height and weight were measured in minimal clothing using a stadiometer (Invicta Plastics, Leicester, UK) to the nearest 0.1 cm and a standing digital scale (Tanita, Tokyo, Japan) to the nearest 0.1 kg. Body composition was measured by DXA. A pregnancy test using a urine test strip was completed immediately before the DXA to ensure female participants were not pregnant. 
Screening blood work (CBC, TSH, thyroglobulin antibodies), Tg and 24 h UIC were measured by a commercial laboratory as detailed in Section 2.7 below. \nBasic anthropometric testing, which included height, weight and body composition measurements, was performed for descriptive proposes. Height and weight were measured in minimal clothing using a stadiometer (Invicta Plastics, Leicester, UK) to the nearest 0.1 cm and a standing digital scale (Tanita, Tokyo, Japan) to the nearest 0.1 kg. Body composition was measured by DXA. A pregnancy test using a urine test strip was completed immediately before the DXA to ensure female participants were not pregnant. Screening blood work (CBC, TSH, thyroglobulin antibodies), Tg and 24 h UIC were measured by a commercial laboratory as detailed in Section 2.7 below. \n2.4. Baseline Iodine Intake and Frequency of Intake of Iodine-Containing Foods The iodine-specific FFQ (Appendix A) evaluated the frequency of consumption of 36 food items, over the past month, known to have significant iodine content (e.g., seafood, seaweed, dairy, egg). Our FFQ is a modified version of a previously validated FFQ by Gostas et al. [29], which found estimated total iodine intake from the FFQ to be correlated with 24 h UIC in over 100 healthy female and male participants. Our modified FFQ omitted six low-iodine-containing foods that were not of interest to our study and asked additional questions about habitual iodized salt intake. Frequency was evaluated according to the following responses: (a) never or less than one time per month; (b) one to three times per month; (c) one time per week; (d) two to four times per week; (e) five to six times per week; (f) one time per day; (g) two to three times per day; (h) four to five times per day; (i) or six or more times per day. Daily intake of iodine was estimated by multiplying the frequency midpoint by the average content of each iodine-containing food and expressed as ug/day (assuming 30 days per month), as previously outlined by Halliday et al. [30]. As iodine content is not available in food composition tables or databases, iodine content of the food items in the FFQ was derived from several sources [29] with most data from the ongoing Total Diet Study (TDS). Iodine content in the TDS is listed per 100 g of the selected food item. Iodine content was recalculated from the TDS to iodine per serving size, to match the household measured listed in the FFQ. The iodine content in other sources was also converted to adjust given units to iodine per serving size. \nThe iodine-specific FFQ (Appendix A) evaluated the frequency of consumption of 36 food items, over the past month, known to have significant iodine content (e.g., seafood, seaweed, dairy, egg). Our FFQ is a modified version of a previously validated FFQ by Gostas et al. [29], which found estimated total iodine intake from the FFQ to be correlated with 24 h UIC in over 100 healthy female and male participants. Our modified FFQ omitted six low-iodine-containing foods that were not of interest to our study and asked additional questions about habitual iodized salt intake. Frequency was evaluated according to the following responses: (a) never or less than one time per month; (b) one to three times per month; (c) one time per week; (d) two to four times per week; (e) five to six times per week; (f) one time per day; (g) two to three times per day; (h) four to five times per day; (i) or six or more times per day. 
Daily intake of iodine was estimated by multiplying the frequency midpoint by the average content of each iodine-containing food and expressed as ug/day (assuming 30 days per month), as previously outlined by Halliday et al. [30]. As iodine content is not available in food composition tables or databases, iodine content of the food items in the FFQ was derived from several sources [29] with most data from the ongoing Total Diet Study (TDS). Iodine content in the TDS is listed per 100 g of the selected food item. Iodine content was recalculated from the TDS to iodine per serving size, to match the household measured listed in the FFQ. The iodine content in other sources was also converted to adjust given units to iodine per serving size. \n2.5. Iodine Titration Diet and Biomarker Protocol Following the baseline urine collection, volunteers then completed three, 3 day iodine titration diet periods as shown in Figure 1. \nExperimental treatments were designed to provide minimal iodine, close to the recommended dietary allowance (RDA), and 3× the RDA. The “Minimal Iodine” diet was defined as avoiding all dairy products, eggs, seaweed, seafood, commercial bread, milk chocolate, and iodized salt and was completed on study days 1–9. Ad libitum quantities of non-iodized salt was provided for participants on only the “Minimal Iodine” diet. The participants received 3.0 g (~1/2 tsp) of iodized salt/day on the “RDA” diet which took place on study days 4–6. On study days 7–9, participants received 9.0 g (1 ½ tsp) of iodized salt/day on the “3× RDA” diet. Participants were instructed to completely consume the premeasured quantities of iodized salt on study days 4–9 in any manner they chose (sprinkled on food, dissolved in a drinkable solution, etc.) and return empty containers on day 10 (the end of this study). Blood was drawn for analysis of concentrations of serum iodine and Tg in the morning between 8 and 9 am on days 4, 7, and 10. Returning the empty containers and verbal communication served as confirmation that the participants completely consumed the provided iodized salt.\nFollowing the baseline urine collection, volunteers then completed three, 3 day iodine titration diet periods as shown in Figure 1. \nExperimental treatments were designed to provide minimal iodine, close to the recommended dietary allowance (RDA), and 3× the RDA. The “Minimal Iodine” diet was defined as avoiding all dairy products, eggs, seaweed, seafood, commercial bread, milk chocolate, and iodized salt and was completed on study days 1–9. Ad libitum quantities of non-iodized salt was provided for participants on only the “Minimal Iodine” diet. The participants received 3.0 g (~1/2 tsp) of iodized salt/day on the “RDA” diet which took place on study days 4–6. On study days 7–9, participants received 9.0 g (1 ½ tsp) of iodized salt/day on the “3× RDA” diet. Participants were instructed to completely consume the premeasured quantities of iodized salt on study days 4–9 in any manner they chose (sprinkled on food, dissolved in a drinkable solution, etc.) and return empty containers on day 10 (the end of this study). Blood was drawn for analysis of concentrations of serum iodine and Tg in the morning between 8 and 9 am on days 4, 7, and 10. Returning the empty containers and verbal communication served as confirmation that the participants completely consumed the provided iodized salt.\n2.6. The 24 h Urine Collections Volunteers completed a total of four 24 h urine collections. 
Participants were instructed to void and discard the first morning urine sample and then collect all subsequent samples for 24 h ending with the first sample upon waking the next day. Females were provided urine collector pans which were placed under the toilet seat during urination to allow for ease of collection and pouring of urine into the 24 h collection container. Refer to Figure 1, which indicates 24 h urine collections were completed on study days 3, 6, and 9, in addition to baseline. Total urine volume was measured at minimal, RDA, and 3× RDA time points using a 2 L graduated cylinder.\nVolunteers completed a total of four 24 h urine collections. Participants were instructed to void and discard the first morning urine sample and then collect all subsequent samples for 24 h ending with the first sample upon waking the next day. Females were provided urine collector pans which were placed under the toilet seat during urination to allow for ease of collection and pouring of urine into the 24 h collection container. Refer to Figure 1, which indicates 24 h urine collections were completed on study days 3, 6, and 9, in addition to baseline. Total urine volume was measured at minimal, RDA, and 3× RDA time points using a 2 L graduated cylinder.\n2.7. Analysis of Iodine Status Biomarkers in Urine and Blood Serum iodine and 24 h UIC were measured by a local laboratory (Region West, Scottsbluff, NE, USA) with analysis completed by a contract national laboratory (Mayo Clinic, Rochester, MN, USA). Serum Tg was analyzed at the University of Wyoming using a Human Tg ELISA (Enzyme-Linked Immunosorbent Assay) kit. Specifically, 24 h UIC and serum iodine were measured using Inductively Coupled Plasma-Mass Spectrometry and serum Tg concentration was measured by in vitro enzyme-linked immunosorbent assay. The 24 h UIE was calculated by multiplying 24 h UIC by total urine volume.\nSerum iodine and 24 h UIC were measured by a local laboratory (Region West, Scottsbluff, NE, USA) with analysis completed by a contract national laboratory (Mayo Clinic, Rochester, MN, USA). Serum Tg was analyzed at the University of Wyoming using a Human Tg ELISA (Enzyme-Linked Immunosorbent Assay) kit. Specifically, 24 h UIC and serum iodine were measured using Inductively Coupled Plasma-Mass Spectrometry and serum Tg concentration was measured by in vitro enzyme-linked immunosorbent assay. The 24 h UIE was calculated by multiplying 24 h UIC by total urine volume.\n2.8. Post-Study Questionnaire Participants were provided with a post-study questionnaire on day 10, following the last 24 h urine collection. Participants were asked to rate difficulty of consuming iodized salt on the RDA and 3× RDA diets and collecting urine at each of the 24 h urine collections on a Likert scale of 1–5, with 1 being the easiest and 5 being the most difficult. Additionally, participants were asked open-ended questions that included the most difficult part of completing this study, how the provided iodized salt was consumed, and whether normal eating patterns changed to accompany the added salt. Participants were asked whether all provided salt was consumed on the specified day and whether all urine was collected during each 24 h collection. \nParticipants were provided with a post-study questionnaire on day 10, following the last 24 h urine collection. 
Participants were asked to rate difficulty of consuming iodized salt on the RDA and 3× RDA diets and collecting urine at each of the 24 h urine collections on a Likert scale of 1–5, with 1 being the easiest and 5 being the most difficult. Additionally, participants were asked open-ended questions that included the most difficult part of completing this study, how the provided iodized salt was consumed, and whether normal eating patterns changed to accompany the added salt. Participants were asked whether all provided salt was consumed on the specified day and whether all urine was collected during each 24 h collection. \n2.9. Statistical Analysis Data were analyzed using Minitab statistics software (Minitab LLC, State College, PA, USA; version 19.1). Response Feature Analysis (RFA) [31] was used to compare differences in UIC and serum iodine values but not Tg due to lack of difference at data collection points. Linear regressions were fitted to UIC and serum iodine measures for each individual participant. A paired t-test was used to compare 24 h UIC, serum iodine, and Tg values at minimal and RDA values and the same biomarkers were compared at RDA to 3× RDA. Multi-comparisons for each sex were not performed due to small sample size. Correlation coefficients (Spearman Rank) were used to evaluate the associations between total daily iodine intake and baseline 24 h UIC and iodine intake from specific foods (dairy, eggs, seafood, seaweed, iodized salt, etc.). Data are expressed as the means ± SEM unless otherwise specified. Significance was set at p < 0.05.\nData were analyzed using Minitab statistics software (Minitab LLC, State College, PA, USA; version 19.1). Response Feature Analysis (RFA) [31] was used to compare differences in UIC and serum iodine values but not Tg due to lack of difference at data collection points. Linear regressions were fitted to UIC and serum iodine measures for each individual participant. A paired t-test was used to compare 24 h UIC, serum iodine, and Tg values at minimal and RDA values and the same biomarkers were compared at RDA to 3× RDA. Multi-comparisons for each sex were not performed due to small sample size. Correlation coefficients (Spearman Rank) were used to evaluate the associations between total daily iodine intake and baseline 24 h UIC and iodine intake from specific foods (dairy, eggs, seafood, seaweed, iodized salt, etc.). Data are expressed as the means ± SEM unless otherwise specified. Significance was set at p < 0.05.", "In early February, interested volunteers completed a screening interview to ensure preliminary eligibility. Interested and eligible volunteers reported to the lab between 8 and 9 am for an initial/baseline visit, which involved provision of written informed consent, completion of a health history questionnaire, and an iodine-specific FFQ. Blood was drawn for analysis of a complete blood count (CBC), thyroid stimulating hormone (TSH) and thyroglobulin antibodies, and was also used for analysis of baseline serum Tg and iodine concentrations. The 24 h urine was collected for analysis of baseline urinary iodine concentration (UIC) as outlined in Figure 1. Body composition was assessed by dual energy x-ray absorptiometry (DXA (Lunar Prodigy, GE Healthcare, Farfield, CT, USA)) during a separate visit scheduled within 10 days of baseline. 
Following the baseline visit, participants completed three, 3 day titration diets with increasing concentrations of iodine from iodized table salt (Morton Salt, processed 10/5/19, Providence, Rhode Island, US). These diets included “minimal iodine”, iodine close to the RDA (“RDA”) and iodine close to three times the RDA (“3× RDA”) as explained below (see Section 2.5). The 24 h UIC was measured on the last full day of each diet intervention and is explained in detail below (see Section 2.6 24 h Urine Collections).", "Basic anthropometric testing, which included height, weight and body composition measurements, was performed for descriptive purposes. Height and weight were measured in minimal clothing using a stadiometer (Invicta Plastics, Leicester, UK) to the nearest 0.1 cm and a standing digital scale (Tanita, Tokyo, Japan) to the nearest 0.1 kg. Body composition was measured by DXA. A pregnancy test using a urine test strip was completed immediately before the DXA to ensure female participants were not pregnant. Screening blood work (CBC, TSH, thyroglobulin antibodies), Tg and 24 h UIC were measured by a commercial laboratory as detailed in Section 2.7 below. ", "The iodine-specific FFQ (Appendix A) evaluated the frequency of consumption of 36 food items, over the past month, known to have significant iodine content (e.g., seafood, seaweed, dairy, egg). Our FFQ is a modified version of a previously validated FFQ by Gostas et al. [29], which found estimated total iodine intake from the FFQ to be correlated with 24 h UIC in over 100 healthy female and male participants. Our modified FFQ omitted six low-iodine-containing foods that were not of interest to our study and asked additional questions about habitual iodized salt intake. Frequency was evaluated according to the following responses: (a) never or less than one time per month; (b) one to three times per month; (c) one time per week; (d) two to four times per week; (e) five to six times per week; (f) one time per day; (g) two to three times per day; (h) four to five times per day; (i) or six or more times per day. Daily intake of iodine was estimated by multiplying the frequency midpoint by the average content of each iodine-containing food and expressed as ug/day (assuming 30 days per month), as previously outlined by Halliday et al. [30]. As iodine content is not available in food composition tables or databases, iodine content of the food items in the FFQ was derived from several sources [29] with most data from the ongoing Total Diet Study (TDS). Iodine content in the TDS is listed per 100 g of the selected food item. Iodine content was recalculated from the TDS to iodine per serving size, to match the household measures listed in the FFQ. The iodine content in other sources was also converted to adjust given units to iodine per serving size. ", "Following the baseline urine collection, volunteers then completed three, 3 day iodine titration diet periods as shown in Figure 1. \nExperimental treatments were designed to provide minimal iodine, close to the recommended dietary allowance (RDA), and 3× the RDA. The “Minimal Iodine” diet was defined as avoiding all dairy products, eggs, seaweed, seafood, commercial bread, milk chocolate, and iodized salt and was completed on study days 1–9. Ad libitum quantities of non-iodized salt were provided for participants on only the “Minimal Iodine” diet. The participants received 3.0 g (~1/2 tsp) of iodized salt/day on the “RDA” diet which took place on study days 4–6. 
On study days 7–9, participants received 9.0 g (1 ½ tsp) of iodized salt/day on the “3× RDA” diet. Participants were instructed to completely consume the premeasured quantities of iodized salt on study days 4–9 in any manner they chose (sprinkled on food, dissolved in a drinkable solution, etc.) and return empty containers on day 10 (the end of this study). Blood was drawn for analysis of concentrations of serum iodine and Tg in the morning between 8 and 9 am on days 4, 7, and 10. Returning the empty containers and verbal communication served as confirmation that the participants completely consumed the provided iodized salt.", "Volunteers completed a total of four 24 h urine collections. Participants were instructed to void and discard the first morning urine sample and then collect all subsequent samples for 24 h ending with the first sample upon waking the next day. Females were provided urine collector pans which were placed under the toilet seat during urination to allow for ease of collection and pouring of urine into the 24 h collection container. Refer to Figure 1, which indicates 24 h urine collections were completed on study days 3, 6, and 9, in addition to baseline. Total urine volume was measured at minimal, RDA, and 3× RDA time points using a 2 L graduated cylinder.", "Serum iodine and 24 h UIC were measured by a local laboratory (Region West, Scottsbluff, NE, USA) with analysis completed by a contract national laboratory (Mayo Clinic, Rochester, MN, USA). Serum Tg was analyzed at the University of Wyoming using a Human Tg ELISA (Enzyme-Linked Immunosorbent Assay) kit. Specifically, 24 h UIC and serum iodine were measured using Inductively Coupled Plasma-Mass Spectrometry and serum Tg concentration was measured by in vitro enzyme-linked immunosorbent assay. The 24 h UIE was calculated by multiplying 24 h UIC by total urine volume.", "Participants were provided with a post-study questionnaire on day 10, following the last 24 h urine collection. Participants were asked to rate difficulty of consuming iodized salt on the RDA and 3× RDA diets and collecting urine at each of the 24 h urine collections on a Likert scale of 1–5, with 1 being the easiest and 5 being the most difficult. Additionally, participants were asked open-ended questions that included the most difficult part of completing this study, how the provided iodized salt was consumed, and whether normal eating patterns changed to accompany the added salt. Participants were asked whether all provided salt was consumed on the specified day and whether all urine was collected during each 24 h collection. ", "Data were analyzed using Minitab statistics software (Minitab LLC, State College, PA, USA; version 19.1). Response Feature Analysis (RFA) [31] was used to compare differences in UIC and serum iodine values but not Tg due to lack of difference at data collection points. Linear regressions were fitted to UIC and serum iodine measures for each individual participant. A paired t-test was used to compare 24 h UIC, serum iodine, and Tg values at minimal and RDA values and the same biomarkers were compared at RDA to 3× RDA. Multi-comparisons for each sex were not performed due to small sample size. Correlation coefficients (Spearman Rank) were used to evaluate the associations between total daily iodine intake and baseline 24 h UIC and iodine intake from specific foods (dairy, eggs, seafood, seaweed, iodized salt, etc.). Data are expressed as the means ± SEM unless otherwise specified. 
Significance was set at p < 0.05.", "The physical characteristics and baseline biochemical data of the five male and five female participants are summarized in Table 1. Two participants reported following a vegetarian diet and eight were omnivores. Eleven participants were recruited. However, one female dropped out of this study following the screening visit due to illness. Data for this participant are not included. Most of the participants were of normal BMI (18.5–24.9), with two men in the overweight category (25–29.9) and one female in the obese category (≥30). Thyroglobulin antibodies were <1.8 IU/mL (lowest detectable range) for all participants, indicating a very low possibility that any participant had an autoimmune thyroid disorder which could interfere with use of Tg as a marker of iodine status [32]. None of the participants smoked. All participants reported having restrained from strenuous physical activity causing excessive sweating for the duration of this study.", "No participants reported having consumed supplements containing iodine. Daily iodine intake averaged 265.6 ± 28.2 ug (median: 264.5 ug; range: 93.8 to 401.7 ug) and was not different by sex (p = 0.55). The mean and median iodine intake was higher than the US RDA for male and non-pregnant female adults of 150 µg/day; however, 10% (n = 1) had estimated intakes less than the RDA and no participants had an intake greater than the Tolerable Upper Limit of 1100 ug/day. The estimated average daily iodine intake from contributing foods are as follows: total dairy (186.5 ± 36.9) (milk, yogurt, cheese, cottage cheese), milk (126.3 ± 34.1), eggs (27.3 ± 12.1), total fish and seafood (4.1 ± 1.3), seaweed (1.6 ± 0.5), and iodized table salt (42.8 ± 15.8). Total estimated iodine intake from the FFQ was associated with reported total dairy (r = 0.830, p = 0.003) and milk intake (r = 0.688, p = 0.03), but not with egg (r = 0.101, p = 0.78), fish and seafood (r = 0.457, p = 0.18) seaweed (r = 0.503, p = 0.14), or iodized salt intake (r = −0.235, p = 0.51). ", "Baseline 24 h UIC ranged between 67 and 253 µg/L. The average value was 135 µg/L and the median value was 121 µg/L. The frequency of the categories of iodine status based on World Health Organization (WHO) criteria is shown in Figure 2. Average baseline serum Tg fell within the normal range (≤33 ng/mL) with 1 participant having a Tg value > 33 ng/mL, indicating compromised status. ", "Estimated total iodine intake from the FFQ was not correlated with UIC (r = 0.273, p= 0.446). FFQ-estimated iodine intake was not correlated with predicted iodine intake using the equation of Zimmerman UIC (r = 0.248, p = 0.489) which incorporates UIC and body mass. UIC, however, was associated with milk (r = 0.688, p = 0.028) and fish/seafood intake (r = 0.646, p = 0.043), but not with reported total dairy intake (r = 0.515, p = 0.128), egg consumption (r = −0.241, p = 0.503), seaweed consumption (r = −0.121, p = 0.740), or iodized table salt use (r = −0.512, p = 0.130).", "Titration diet effects on 24 h UIC, serum iodine and Tg are shown in Table 2. \nUrine volumes varied widely among participants (900–4150 mL), and averaged 2155 ± 195 mL, 1972 ± 210 mL and 2200 ± 290 on the minimal, RDA and 3× RDA diets, respectively. Both UIC and UIE increased from minimal to RDA (p < 0.001 for both) and RDA to 3× RDA (p = 0.04 and p = 0.002, respectively). The average UIC intercepts and slopes from minimal iodine to 3× RDA produced a regression as follows: UIC = 39.5 + 19.3 (grams iodized salt). 
3. Results

3.1. Participant Characteristics

The physical characteristics and baseline biochemical data of the five male and five female participants are summarized in Table 1. Two participants reported following a vegetarian diet and eight were omnivores. Eleven participants were originally recruited; however, one female dropped out following the screening visit due to illness, and her data are not included. Most participants had a normal BMI (18.5–24.9), with two men in the overweight category (25–29.9) and one woman in the obese category (≥30). Thyroglobulin antibodies were <1.8 IU/mL (the lowest detectable value) for all participants, indicating a very low possibility that any participant had an autoimmune thyroid disorder that could interfere with the use of Tg as a marker of iodine status [32]. None of the participants smoked. All participants reported refraining from strenuous physical activity causing excessive sweating for the duration of this study.

3.2. Baseline Daily Iodine Intake and Frequency of Intake of Iodine-Containing Foods

No participants reported consuming supplements containing iodine. Daily iodine intake averaged 265.6 ± 28.2 µg (median: 264.5 µg; range: 93.8 to 401.7 µg) and did not differ by sex (p = 0.55). The mean and median iodine intakes were higher than the US RDA of 150 µg/day for male and non-pregnant female adults; however, 10% (n = 1) had an estimated intake below the RDA, and no participant had an intake greater than the Tolerable Upper Intake Level of 1100 µg/day. The estimated average daily iodine intakes (µg/day) from contributing foods were as follows: total dairy (186.5 ± 36.9; milk, yogurt, cheese, cottage cheese), milk (126.3 ± 34.1), eggs (27.3 ± 12.1), total fish and seafood (4.1 ± 1.3), seaweed (1.6 ± 0.5), and iodized table salt (42.8 ± 15.8). Total estimated iodine intake from the FFQ was associated with reported total dairy (r = 0.830, p = 0.003) and milk intake (r = 0.688, p = 0.03), but not with egg (r = 0.101, p = 0.78), fish and seafood (r = 0.457, p = 0.18), seaweed (r = 0.503, p = 0.14), or iodized salt intake (r = −0.235, p = 0.51).

3.3. Baseline 24 h UIC and Iodine Status

Baseline 24 h UIC ranged between 67 and 253 µg/L; the average value was 135 µg/L and the median was 121 µg/L. The frequency of the categories of iodine status based on World Health Organization (WHO) criteria is shown in Figure 2. Average baseline serum Tg fell within the normal range (≤33 ng/mL), with one participant having a Tg value > 33 ng/mL, indicating compromised status.

3.4. Relationship between Baseline Iodine Intake and UIC

Estimated total iodine intake from the FFQ was not correlated with UIC (r = 0.273, p = 0.446). FFQ-estimated iodine intake was also not correlated with predicted iodine intake from the Zimmerman equation (r = 0.248, p = 0.489), which incorporates UIC and body mass. UIC, however, was associated with milk (r = 0.688, p = 0.028) and fish/seafood intake (r = 0.646, p = 0.043), but not with reported total dairy intake (r = 0.515, p = 0.128), egg consumption (r = −0.241, p = 0.503), seaweed consumption (r = −0.121, p = 0.740), or iodized table salt use (r = −0.512, p = 0.130).

3.5. Effect of Titration Diet on Iodine Status Biomarkers

Titration diet effects on 24 h UIC, serum iodine, and Tg are shown in Table 2. Urine volumes varied widely among participants (900–4150 mL) and averaged 2155 ± 195 mL, 1972 ± 210 mL, and 2200 ± 290 mL on the minimal, RDA, and 3× RDA diets, respectively. Both UIC and UIE increased from minimal to RDA (p < 0.001 for both) and from RDA to 3× RDA (p = 0.04 and p = 0.002, respectively). The average UIC intercepts and slopes from minimal iodine to 3× RDA produced the following regression: UIC (µg/L) = 39.5 + 19.3 × (grams iodized salt). Thus, UIC increased an average of 19.3 µg/L for every gram of iodized salt consumed. Regression slopes and intercepts did not differ between sexes (p = 0.27 and p = 0.46, respectively).
Serum iodine did not increase from minimal to RDA (p = 0.67) but increased significantly from RDA to 3× RDA (p = 0.006). The average serum iodine intercepts and slopes from minimal iodine to 3× RDA produced the following regression: serum iodine (µg/L) = 61.2 (±2.0) + 0.6 (±0.2) × (grams iodized salt). Serum Tg concentration, however, did not differ from minimal iodine to RDA (p = 0.94) or from RDA to 3× RDA (p = 0.68).

3.6. Post-Study Questionnaire Responses

Nine of ten participants reported consuming all the provided iodized salt on the RDA and 3× RDA diets. One participant reported consuming 75% of the provided iodized salt on day 9 (3× RDA diet). Participants reported adding the iodized salt to reduced-salt or unsalted homemade meals, margaritas, and caramel, and 7 of the 10 reported dissolving the salt in water to drink.
Seven of ten participants reported collecting 100% of urine output at each 24 h collection. One female participant reported missing 100–200 mL during the fourth 24 h urine collection (3× RDA diet). Another female missed a single void during one 24 h collection period, and one male reported occasionally forgetting to collect a urine sample.
Overall, participants reported that the most difficult parts of this study were avoiding eggs, dairy, and commercial bread; remembering to collect all urine; and consuming the 9 g of iodized salt on the 3× RDA diet. All ten participants rated consuming all salt on the RDA diet and collecting all urine as a 1, 2, or 3 on the 1–5 Likert difficulty scale, with 5 being the most difficult. Five of ten participants rated consuming all salt on the 3× RDA diet as a 4 or 5. Participants reported changing their normal eating pattern to accommodate the additional salt by eating lower-sodium foods, adjusting the sodium content of homemade recipes, eating larger portions to make the food more palatable, and increasing the frequency of meals. They also reported that their normal eating pattern changed by avoiding dairy, eggs, and bread.
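Section 3.4 above compares FFQ-estimated intake with iodine intake predicted from 24 h UIC and body mass. A minimal sketch of that style of prediction is shown below, assuming the commonly cited form of the Zimmerman relation (intake ≈ UIC × 0.0235 × body weight); the coefficient and example values are assumptions for illustration, not study data.

```python
# Sketch of predicting daily iodine intake from 24 h UIC and body mass,
# assuming the commonly cited relation: intake (ug/day) ~= UIC (ug/L) x
# 0.0235 x body weight (kg), where 0.0235 combines an expected urine
# output of ~0.0009 L/h/kg with ~92% of dietary iodine excreted in urine.

def predicted_iodine_intake_ug(uic_ug_per_l: float, weight_kg: float) -> float:
    """Predicted daily iodine intake (ug/day) from UIC (ug/L) and weight (kg)."""
    return uic_ug_per_l * 0.0235 * weight_kg

# Example: this study's baseline median UIC (121 ug/L) for a 70 kg adult
print(predicted_iodine_intake_ug(121.0, 70.0))  # ~199 ug/day
```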
[ "1. Introduction", "2. Materials and Methods", "2.1. Overview of Screening and Baseline Testing", "2.2. Participants", "2.3. Baseline Measurements and Testing", "2.4. Baseline Iodine Intake and Frequency of Intake of Iodine-Containing Foods", "2.5. Iodine Titration Diet and Biomarker Protocol", "2.6. The 24 h Urine Collections", "2.7. Analysis of Iodine Status Biomarkers in Urine and Blood", "2.8. Post-Study Questionnaire", "2.9. Statistical Analysis", "3. Results", "3.1. Participant Characteristics ", "3.2. Baseline Daily Iodine Intake and Frequency of Intake of Iodine-Containing Foods", "3.3. Baseline 24 h UIC and Iodine Status", "3.4. Relationship between Baseline Iodine Intake and UIC", "3.5. Effect of Titration Diet on Iodine Status Biomarkers", "3.6. Post-Study Questionnaire Responses", "4. Discussion", "5. Conclusions" ]
[ "Iodine is an essential, rate-limiting element for the synthesis of thyroid hormones, which is currently the only known physiological role of iodine. Ensuring adequate iodine intake is important for all adults, but particularly important for women of childbearing age. Inadequate intake and deficiency during pregnancy can influence brain, nerve and muscle development in the growing fetus and result in growth and developmental abnormalities [1].\nIodine status in the US is generally thought to be adequate [2,3]. However, select populations are susceptible to insufficient iodine status due to geographical location, food intake practices, or increased iodine needs (e.g., pregnancy and lactation). Subpopulations at increased risk for low iodine intake include pregnant and lactating women [4,5,6], vegans [7,8,9] and vegetarians [7,8,9,10,11], those who avoid seafood and/or dairy [12], follow a sodium restricted diet [13,14], or eat local foods in regions with iodine-depleted soils [1,15]. The Institute of Medicine, American Heart Association, and the 2020–2025 Dietary Guidelines for Americans [16] have advocated for decreasing sodium intake to less than 2300 mg/day [17] and more prudently to less than 1500 mg/day [18], which could be reducing Americans’ intake of iodized salt. Additionally, trends toward local and plant-based diets may negatively influence iodine status depending on food selection and dietary intake patterns (e.g., avoidance of seafood, dairy and eggs) and the content of local soils. Therefore, despite presumed adequate iodine status in the general US population, certain dietary choices may directly influence iodine status and hence, influence thyroid function. Additionally, dietary patterns low in iodine content are of particular concern for women of reproductive age and pregnant and lactating women due iodine’s importance during fetal growth and development. \nThe 24 h urinary iodine concentration (UIC) is considered the gold standard for assessing iodine status in populations, as approximately 90% of dietary iodine is excreted in the urine of healthy and iodine replete adults [19]. However, 24 h urine collections are cumbersome for the patient, prone to collection and methodological errors, expensive to measure by state-of-the art procedures employed by most clinical laboratories (i.e., mass spectroscopy) [20], and may not adequately reflect iodine status in individuals [21]. For example, 24 h UIC may better represent acute (e.g., days) versus chronic iodine status [22] because of variations in 24 h iodine excretion due to within-day hydration status changes [23] and other unknown variables [22]. While other biomarkers for assessment of status have been suggested, including 24 h total urinary iodine excretion (UIE) [19,24], serum iodine [25] and thyroglobulin (Tg) concentration [26], use of these markers (as well as 24 h UIC) must be better validated in healthy subjects against known quantities of iodine intake. \nThe primary purpose of this pilot study was to determine whether a 3-day titration diet (which provided known quantities of iodized salt) is reflected in 24 h UIC, UIE, serum iodine, thyroglobulin biomarkers. Secondary purposes were to evaluate the association between baseline 24 h UIC and habitual iodine intake and observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific food frequency questionnaire (FFQ). 
4. Discussion

The primary purpose of this pilot study was to determine whether a titration diet (which used known quantities of iodized salt) was reflected in the 24 h UIC, serum iodine, and thyroglobulin biomarkers of iodine status. Secondary purposes were to evaluate the association between baseline 24 h UIC and habitual iodine intake and to observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific FFQ. Overall, in our sample of 10 participants, we found that 24 h UIC measures (including 24 h UIE) increased as iodized salt consumption increased. Serum iodine, on the other hand, did not increase from the minimal iodine diet to the iodine RDA diet but was elevated when three times the RDA was consumed. In contrast, serum thyroglobulin concentrations did not differ during the titration diet period. As estimated by the iodine-specific FFQ, only milk and total dairy intake were associated with estimated total iodine intake, whereas only milk and fish/seafood intake were associated with 24 h UIC.
To the best of our knowledge, this is the first study to use a controlled, titrated iodine diet with iodized salt as the iodine source to evaluate effects on iodine status biomarkers.
A previous study providing oral iodine capsules (225 μg/day of potassium iodide) to pregnant participants during the first trimester found that the urinary iodine/creatinine ratio increased from a median of 53 μg/g to 150–249 μg/g by the third trimester [33]; the women consumed their typical diet during the supplementation period. Additional published studies have found increases in UIC and presumed iodine status in children and adults from iodine-deficient areas following the initiation of a salt iodization program [34], oral iodine supplementation in the form of capsules [33,35] or poppyseed oil [36,37], and a single-dose injection of intramuscular iodized oil [38]. In our study, the use of iodized salt was a practical way to consume iodine and represented a real-life addition of iodine to the diet (given the iodization of table salt in many countries). An additional strength of this study was that we simultaneously evaluated 24 h UIC along with serum iodine, Tg, and UIE. The 24 h UIC is the “gold standard” for assessing iodine status in populations and may currently be the most representative marker of individual iodine status.
The primary purpose of this study was to explore the use of potential biomarkers of individual iodine status, including UIC and serum iodine and Tg concentrations. Although 24 h UIC is considered a population iodine status marker, our results suggest that, of the biomarkers evaluated, 24 h UIC has promise as an individual biomarker in that it is sensitive to changes in short-term dietary intake. According to our regression model (from minimal iodine intake up to 9 g of iodized salt), 24 h UIC increased an average of 19.3 μg/L for every gram of iodized salt consumed. A gram of iodized salt contains 45 μg of iodine; thus, an increase of 45 μg of dietary iodine increased the 24 h UIC by an average of 19.3 μg/L in our sample. As the 24 h collection was completed on the third day of each diet, these results demonstrate that the 24 h UIC was sensitive to recent changes in iodine intake. The same increase was observed in the 24 h UIE. The advantage of total UIE is that it adjusts for urine volume and hydration status, which can vary among individuals and throughout the day. For example, in our study, one participant had daily total urine volumes of 900 mL and another had daily volumes of more than 4 L. Measuring urine volume to calculate 24 h UIE may be useful when evaluating the iodine status of pregnant and lactating women, to ensure this population is increasing water intake to the recommended amount (3 L for pregnant women and 3.8 L for lactating women). Since iodine stored in the thyroid is unaffected by changes in short-term iodine intake in iodine-sufficient individuals (i.e., when thyroid stores are sufficient), urinary iodine excretion primarily represents recent iodine intake. If thyroid stores were not sufficient, more dietary iodine would likely be taken up by the thyroid (to help replete stores) and less would be excreted in urine [39,40]. This concept was demonstrated in our study: as iodine intake increased, average urinary iodine excretion increased, indicating that as iodine intake exceeds needs and thyroid stores, more iodine is excreted in the urine. Anderson et al. found that more than ten spot urines or more than seven 24 h UICs are required to estimate an individual's iodine status in free-living individuals within a precision range of ±20% [41]. Individuals in that study presumably had variable daily iodine intakes, which were not simultaneously estimated. The amount of iodine taken up by the thyroid, and the subsequent amount excreted in the urine, depends on the typical iodine intake of the individual [42]. If daily iodine intake is >50 μg, the uptake of iodine by the thyroid is maintained, but if daily iodine intake is <50 μg, the thyroid's iodine stores become depleted [29]. Wainwright and Cook suggested that multiple 24 h UICs be collected over a prolonged period to best represent usual iodine intake and to account for seasonal [43], circadian [44], and iodine and hydration status variations [21]. However, collecting multiple 24 h urines would be even more of a burden to the patient than a single collection and would be subject to collection error, including missed urine samples. The difficulty of collecting 24 h urines, along with additional limitations [25,29,45,46,47,48,49] that include being cumbersome for the patient and subject to within-day hydration and iodine intake changes, prompted us to evaluate other potential biomarkers of iodine status (serum iodine and Tg) in the current study.
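Since a gram of US iodized salt supplies roughly 45 μg of iodine, the fitted slope can be re-expressed per microgram of dietary iodine, and the regression can be used to predict UIC at a given salt dose. The short sketch below works through both conversions using the coefficients reported in Section 3.5; it is illustrative arithmetic, not part of the study's analysis.

```python
# Illustrative arithmetic based on the regression reported in Section 3.5:
#   UIC (ug/L) = 39.5 + 19.3 x (g iodized salt/day)
# US iodized salt supplies ~45 ug iodine per gram.
SLOPE_PER_G_SALT = 19.3      # ug/L of UIC per gram of iodized salt
IODINE_UG_PER_G_SALT = 45.0  # ug of iodine per gram of iodized salt

# Re-express the slope per microgram of dietary iodine
print(SLOPE_PER_G_SALT / IODINE_UG_PER_G_SALT)  # ~0.43 ug/L UIC per ug iodine

# Predicted average UIC on the 3x RDA diet (9 g iodized salt/day)
print(39.5 + SLOPE_PER_G_SALT * 9.0)  # ~213 ug/L
```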
Individuals in this study presumably had variable daily iodine intakes, which were not simultaneously estimated during this study. The amount of iodine taken up by the thyroid, and the subsequent amount of iodine excreted in the urine, depends on the typical iodine intake of the individual [42]. If daily iodine intake is >50 μg, the uptake of iodine by the thyroid is maintained, but if daily iodine intake is <50 μg, the thyroid's iodine stores become depleted [29]. Wainwright and Cook suggested that multiple 24 h UICs be collected over a prolonged period of time to provide the best representation of usual iodine intake and to account for seasonal [43], circadian rhythm [44], and iodine and hydration status variations [21]. However, collection of multiple 24 h UICs would be even more of a burden to the patient than a single collection and would be subject to collection error, including missed urine samples. The difficulty of collecting 24 h urines and additional limitations [25,29,45,46,47,48,49], including being cumbersome to the patient and being subject to within-day hydration and iodine intake changes, prompted us to evaluate other potential biomarkers of iodine status (serum iodine and Tg) in the current study.

Our finding that serum iodine concentrations did not differ from minimal iodine to RDA but increased from RDA to three times the RDA provides some support that serum iodine may be a potential alternative marker of iodine status. We initially hypothesized that serum iodine concentration would increase throughout the titration diet as iodine intake increased. Although research is limited on the use of serum iodine as an iodine status biomarker, previous research, mostly from epidemiological studies in Chinese populations, has concluded that serum iodine may be more indicative of long-term (or typical) iodine status than of recent iodine intake, because serum iodine was observed to differ by location of residence but not between sexes or age groups, or according to smoking or exercise status [25,45]. The difference between our findings and those previously reported may be due to differences in study period duration. Studies conducted in Chinese populations with varying iodine status found that when iodine in the external environment increased (salt fortification and increased iodine in the water supply), UIC increased while serum iodine remained comparatively stable [25,45]. Another study conducted in Chinese adults found serum iodine was positively correlated with total T3, total T4, and free T4 concentrations [46], suggesting that serum iodine most closely represents the iodine bioavailable to the thyroid gland, with 80–90% of total serum iodine incorporated into thyroid hormones [46,50]. While in overall agreement with previous studies, our study found significant increases in serum iodine only when iodine intake was excessive. This may be because our participants were presumably iodine replete at the start of this study and/or because the 3 day diet periods were not long enough for serum iodine to fall in response to minimal iodine intake. Thus, longer-term studies (over weeks or months instead of days) may be needed to evaluate long-term changes in serum iodine concentration.

In contrast to our hypothesis, serum thyroglobulin concentrations did not differ as iodized salt consumption increased from minimal iodine (~0 μg/day) to 9 g of iodized salt (405 μg of iodine) per day.
Like serum iodine, this observation may be due to thyroglobulin's lack of sensitivity to recent changes in iodine intake. Several previous cross-sectional studies [51,52,53] collectively reported inverse associations between serum Tg concentration and iodine status (determined by spot UIC and spot urinary iodine excretion) within iodine-deficient and iodine-excessive populations across the globe [51,52,53]. Because these studies determined iodine status based on a single spot UIC or UIE sample, individual iodine status could not be determined [53]. Following a salt iodization program, Vejbjerg et al. found that median serum Tg decreased from 10.9 to 8.7 μg/L (p < 0.001) in an area with previously mild iodine deficiency and from 14.6 to 8.9 μg/L (p < 0.001) in an area with previously moderate iodine deficiency. Decreases in serum Tg reflect better iodine status because, when iodine intake is insufficient, low circulating levels of T4 stimulate the synthesis and release of thyrotropin-releasing hormone, which subsequently increases the production of TSH; this then stimulates Tg synthesis, causing an increase in serum Tg [26]. Even though serum TSH increases during iodine deficiency, TSH concentrations often remain within normal reference ranges, and thus TSH is not considered a good indicator of iodine status [21]. Additionally, serum Tg concentrations showed minimal day-to-day variation compared to spot UIC samples following the introduction of a salt iodization program [52]. This lack of variation in Tg compared to UIC may suggest that Tg is more representative of long-term iodine intake in a population, especially in areas with endemic goiter, because it may reflect overall thyroid cell mass [35,52]. Overall, however, there are limited studies on serum Tg as a biomarker of individual iodine status, and further research is needed to establish reference concentrations in healthy, iodine-sufficient individuals and to understand the relationship between Tg and recent iodine intake.

The secondary purposes of the current study were to evaluate the association between baseline 24 h UIC and habitual iodine intake and to observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific FFQ. Milk and total dairy intake correlated positively with total iodine intake as estimated by the iodine-specific FFQ. We also found positive associations between UIC and milk and fish/seafood intake. As with all self-reported data, there is potential for under- or over-reporting of nutrient intake [29], as could be the case with our iodine-specific FFQ. However, previous studies [12,54,55] have found positive correlations between UIC and milk and dairy intake, including one from our lab which reported that total dairy and egg intake predicted approximately 20% of the variance in 24 h UIC [29]. A review by Reijden et al. estimated that milk and dairy contribute ~13–64% of the recommended daily iodine intake based on country-specific food intake data [55]. Dairy is a reliable, although variable, source of dietary iodine. Iodine enters cow's milk through the cow's ingestion of water, feed, and vegetation, or through exposure to iodophor disinfectants. Such disinfectants are used in the US to clean cow teats and udders, milking equipment, and other milk-holding containers and transporting trucks [56]. However, the iodine content of milk can vary greatly, from 33 to 534 μg/L according to a recent study [55].
Such variations are due to differences in agricultural practices, milk yield, the type, concentration, and application method of iodophor sanitizers, the type of cow feed, goitrogen intake by dairy cows, and seasonal variations in pasture versus prepared feed [12,55]. Alternatively, dairy and milk may simply be more easily recognized and recalled than other iodine sources such as iodized salt. Dairy items may be a commonly reported iodine source because they are consumed in easily recalled forms, including the habitual glass of milk, milk added to breakfast cereals, and convenient dairy items such as yogurt and cottage cheese.

Although the current study has several strengths, there are limitations, which include the small sample size and short study duration. The small sample size allowed for control and accountability in data collection methodology, but the overall findings may not be generalizable to populations with more extreme iodine status or to those who are pregnant or lactating. Our participants, for example, were screened for normal thyroid function, and most had iodine intakes that met the daily US recommendation. Additionally, excess sodium consumption may have contributed to excess fluid retention, which could influence serum iodine and Tg biomarker concentrations, particularly on the 3× RDA diet. A longer study duration (perhaps weeks or months) may have allowed for observation of whether the serum iodine and thyroglobulin biomarkers would be sensitive to longer-term changes in dietary iodine intake. Future studies from our laboratory will further evaluate the use of biomarkers of iodine status using longer titration diet periods and an iodine source other than iodized salt (to avoid the risk of excess sodium consumption). Limitations apart from the study design include the high variability of iodine content in food and the absence of iodine in the USDA Food Composition Database, making it difficult to reliably assess iodine intake and make comparisons to iodine status. The iodine content of soil naturally varies, and thus the iodine content of plants and animals, agricultural products, and saltwater fish and seafood is highly variable. The FFQ used averages of iodine content from the TDS to estimate iodine intake from foods, but those average values are highly variable. For example, in a study by Eckhoff and Maage, the iodine content of saltwater fish fillets ranged from 0.122 to 0.922 μg/g, while that of freshwater fish fillets ranged from 0.005 to 0.082 μg/g [57].

The current study provides preliminary support to suggest that 24 h UIC and serum iodine may be indicative of individual iodine status, at least over the short term (3 days) when iodine status is mostly stable. The results support 24 h UIC as likely the most sensitive indicator of individual iodine status at this time, and it may still be the best biomarker for assessing iodine status. However, serum iodine is easier to measure, requiring a single blood draw rather than single or multiple 24 h urine collections, and would provide a convenient method to assess iodine status, especially for pregnant and lactating women. Serum Tg was not sensitive to short-term changes in iodine intake, but future studies should evaluate its use as a long-term iodine status marker. This study highlights the need for additional research to identify individual iodine biomarkers.
Continuing to explore these markers to promptly identify iodine deficiencies in women of reproductive age and in pregnant and lactating women is especially important so that dietary interventions can be recommended and deficiencies corrected.
[ "intro", null, null, "subjects", null, null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions" ]
[ "iodine status", "biomarkers", "validation", "nutritional exposure", "dietary biomarkers", "iodine intake", "urinary iodine concentration", "serum iodine", "thyroglobulin", "food frequency questionnaire", "dairy products" ]
1. Introduction
Iodine is an essential, rate-limiting element for the synthesis of thyroid hormones, which is currently the only known physiological role of iodine. Ensuring adequate iodine intake is important for all adults, but it is particularly important for women of childbearing age. Inadequate intake and deficiency during pregnancy can influence brain, nerve, and muscle development in the growing fetus and result in growth and developmental abnormalities [1]. Iodine status in the US is generally thought to be adequate [2,3]. However, select populations are susceptible to insufficient iodine status due to geographical location, food intake practices, or increased iodine needs (e.g., pregnancy and lactation). Subpopulations at increased risk for low iodine intake include pregnant and lactating women [4,5,6], vegans [7,8,9] and vegetarians [7,8,9,10,11], those who avoid seafood and/or dairy [12], those who follow a sodium-restricted diet [13,14], and those who eat local foods in regions with iodine-depleted soils [1,15]. The Institute of Medicine, the American Heart Association, and the 2020–2025 Dietary Guidelines for Americans [16] have advocated for decreasing sodium intake to less than 2300 mg/day [17] and, more prudently, to less than 1500 mg/day [18], which could be reducing Americans' intake of iodized salt. Additionally, trends toward local and plant-based diets may negatively influence iodine status depending on food selection and dietary intake patterns (e.g., avoidance of seafood, dairy, and eggs) and the iodine content of local soils. Therefore, despite presumed adequate iodine status in the general US population, certain dietary choices may directly influence iodine status and, hence, thyroid function. Additionally, dietary patterns low in iodine content are of particular concern for women of reproductive age and pregnant and lactating women due to iodine's importance during fetal growth and development. The 24 h urinary iodine concentration (UIC) is considered the gold standard for assessing iodine status in populations, as approximately 90% of dietary iodine is excreted in the urine of healthy, iodine-replete adults [19]. However, 24 h urine collections are cumbersome for the patient, prone to collection and methodological errors, expensive to measure by the state-of-the-art procedures employed by most clinical laboratories (i.e., mass spectrometry) [20], and may not adequately reflect iodine status in individuals [21]. For example, 24 h UIC may better represent acute (e.g., days) versus chronic iodine status [22] because of variations in 24 h iodine excretion due to within-day hydration status changes [23] and other unknown variables [22]. While other biomarkers for assessment of status have been suggested, including 24 h total urinary iodine excretion (UIE) [19,24], serum iodine [25], and thyroglobulin (Tg) concentration [26], the use of these markers (as well as 24 h UIC) must be better validated in healthy subjects against known quantities of iodine intake. The primary purpose of this pilot study was to determine whether a 3-day titration diet (which provided known quantities of iodized salt) is reflected in the 24 h UIC, UIE, serum iodine, and thyroglobulin biomarkers. Secondary purposes were to evaluate the association between baseline 24 h UIC and habitual iodine intake and to observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific food frequency questionnaire (FFQ).
We hypothesized that 24 h UIC, UIE, and serum iodine would increase as iodized salt consumption increased and that Tg would increase when iodine intake exceeded the iodine Recommended Dietary Allowance (RDA) of 150 μg. We also hypothesized that baseline 24 h UIC would be associated with the intake frequency of dairy, eggs, and iodized salt and that these same food items would be associated with total iodine intake as estimated by the FFQ.

2. Materials and Methods
This pilot study was conducted between February and June of 2020, with participants completing a 9 day study period in February and March 2020. The study schematic is shown in Figure 1. All research procedures were reviewed and approved by the Institutional Review Board of the University of Wyoming. This study evaluated the responsiveness of three biomarkers of iodine status using three known quantities of dietary iodine consumption. Participants were notified of any possible risks prior to giving written formal consent to participate in this study. Volunteers were recruited through flyers and word of mouth from the University of Wyoming campus and the local community.

2.1. Overview of Screening and Baseline Testing
In early February, interested volunteers completed a screening interview to ensure preliminary eligibility. Interested and eligible volunteers reported to the lab between 8 and 9 am for an initial/baseline visit, which involved provision of written informed consent, completion of a health history questionnaire, and an iodine-specific FFQ. Blood was drawn for analysis of a complete blood count (CBC), thyroid stimulating hormone (TSH), and thyroglobulin antibodies, and was also used for analysis of baseline serum Tg and iodine concentrations. A 24 h urine sample was collected for analysis of baseline urinary iodine concentration (UIC), as outlined in Figure 1. Body composition was assessed by dual energy x-ray absorptiometry (DXA; Lunar Prodigy, GE Healthcare, Fairfield, CT, USA) during a separate visit scheduled within 10 days of baseline. Following the baseline visit, participants completed three, 3 day titration diets with increasing quantities of iodine from iodized table salt (Morton Salt, processed 10/5/19, Providence, Rhode Island, US). These diets included “minimal iodine”, iodine close to the RDA (“RDA”), and iodine close to three times the RDA (“3× RDA”), as explained below (see Section 2.5). The 24 h UIC was measured on the last full day of each diet intervention, as explained in detail below (see Section 2.6).
2.2. Participants
Participants included 5 women and 5 men, aged 19–56 years, with a BMI (kg/m2) ≥ 18 to ≤ 40. A sample size of n = 10 was selected for this pilot study due to resource constraints. Individuals were screened by phone and deemed ineligible if they answered “yes” to any question regarding the presence of thyroid disorders, current pregnancy or lactation, current use of commercial douches (which may contain iodine), history of autoimmune disease, or current use of tobacco (including smoking) and medications. Additional exclusion criteria included: not being willing to comply with the salt/iodine-restricted aspects of this study; not being available for, or agreeing to fully participate in, all aspects of this study and all restraints during the iodine-controlled 9 days of this study (including avoidance of strenuous exercise, due to potential iodine loss via sweating [27,28]); having extreme dietary habits that would result in exceptionally low or high iodine intake; currently taking or having taken iodine or kelp supplements or medications containing or interfering with iodine/thyroid status in the past month; having a history of thyroid disorders, Addison's disease, or Cushing's syndrome; having a history of major chronic diseases; or having laboratory signs or symptoms suggestive of anemia and/or hypo-/hyperthyroidism. Results of the CBC, TSH, thyroglobulin antibodies, and health history were used to confirm eligibility to proceed to the next steps of the study protocol.
2.3. Baseline Measurements and Testing
Basic anthropometric testing, which included height, weight, and body composition measurements, was performed for descriptive purposes. Height and weight were measured in minimal clothing using a stadiometer (Invicta Plastics, Leicester, UK) to the nearest 0.1 cm and a standing digital scale (Tanita, Tokyo, Japan) to the nearest 0.1 kg. Body composition was measured by DXA. A pregnancy test using a urine test strip was completed immediately before the DXA to ensure female participants were not pregnant. Screening blood work (CBC, TSH, thyroglobulin antibodies), Tg, and 24 h UIC were measured by a commercial laboratory, as detailed in Section 2.7 below.

2.4. Baseline Iodine Intake and Frequency of Intake of Iodine-Containing Foods
The iodine-specific FFQ (Appendix A) evaluated the frequency of consumption, over the past month, of 36 food items known to have significant iodine content (e.g., seafood, seaweed, dairy, eggs). Our FFQ is a modified version of a previously validated FFQ by Gostas et al. [29], which found estimated total iodine intake from the FFQ to be correlated with 24 h UIC in over 100 healthy female and male participants. Our modified FFQ omitted six low-iodine-containing foods that were not of interest to our study and asked additional questions about habitual iodized salt intake. Frequency was evaluated according to the following responses: (a) never or less than one time per month; (b) one to three times per month; (c) one time per week; (d) two to four times per week; (e) five to six times per week; (f) one time per day; (g) two to three times per day; (h) four to five times per day; or (i) six or more times per day. Daily iodine intake was estimated by multiplying the frequency midpoint by the average iodine content of each iodine-containing food and expressed as μg/day (assuming 30 days per month), as previously outlined by Halliday et al. [30]. As iodine content is not available in food composition tables or databases, the iodine content of the food items in the FFQ was derived from several sources [29], with most data from the ongoing Total Diet Study (TDS). Iodine content in the TDS is listed per 100 g of the selected food item; it was recalculated to iodine per serving size to match the household measures listed in the FFQ. The iodine content from other sources was likewise converted to iodine per serving size.
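The estimation step described above amounts to a weighted sum. The sketch below illustrates it with hypothetical lookup values (the frequency midpoints follow the FFQ response categories; the per-serving iodine contents are placeholders, not the study's TDS-derived table):

```python
# Illustrative sketch of the FFQ intake estimation described above.
# Frequency midpoints follow the FFQ response categories (30 days/month);
# the per-serving iodine values are placeholders, not the study's table.

FREQ_MIDPOINT_PER_DAY = {
    "never_or_<1_per_month": 0.0,
    "1-3_per_month": 2.0 / 30,   # midpoint of 1-3 times/month
    "1_per_week": 1.0 / 7,
    "2-4_per_week": 3.0 / 7,
    "5-6_per_week": 5.5 / 7,
    "1_per_day": 1.0,
    "2-3_per_day": 2.5,
    "4-5_per_day": 4.5,
    ">=6_per_day": 6.0,
}

# Hypothetical average iodine content per household serving (ug).
IODINE_UG_PER_SERVING = {"milk_1_cup": 85.0, "egg_1_large": 26.0}

def estimated_daily_iodine_ug(responses: dict) -> float:
    """Sum of frequency midpoint x average iodine per serving, in ug/day."""
    return sum(
        FREQ_MIDPOINT_PER_DAY[freq] * IODINE_UG_PER_SERVING[food]
        for food, freq in responses.items()
    )

print(estimated_daily_iodine_ug(
    {"milk_1_cup": "1_per_day", "egg_1_large": "2-4_per_week"}
))  # -> 85.0 + (3/7) * 26.0, i.e. about 96.1 ug/day
```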
2.5. Iodine Titration Diet and Biomarker Protocol
Following the baseline urine collection, volunteers completed three, 3 day iodine titration diet periods, as shown in Figure 1. Experimental treatments were designed to provide minimal iodine, iodine close to the recommended dietary allowance (RDA), and 3× the RDA. The “Minimal Iodine” diet was defined as avoiding all dairy products, eggs, seaweed, seafood, commercial bread, milk chocolate, and iodized salt and was followed on study days 1–9. Ad libitum quantities of non-iodized salt were provided to participants on only the “Minimal Iodine” diet. Participants received 3.0 g (~1/2 tsp) of iodized salt/day on the “RDA” diet, which took place on study days 4–6. On study days 7–9, participants received 9.0 g (1 ½ tsp) of iodized salt/day on the “3× RDA” diet. Participants were instructed to completely consume the premeasured quantities of iodized salt on study days 4–9 in any manner they chose (sprinkled on food, dissolved in a drinkable solution, etc.) and to return the empty containers on day 10 (the end of this study). Blood was drawn for analysis of serum iodine and Tg concentrations in the morning, between 8 and 9 am, on days 4, 7, and 10. The returned empty containers and verbal communication served as confirmation that the participants completely consumed the provided iodized salt.
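For orientation, the salt doses map onto the intended iodine targets roughly as follows, assuming the ~45 μg of iodine per gram of iodized salt stated later in the Discussion (a sketch, not study code):

```python
# Dose arithmetic behind the three diet levels, assuming ~45 ug iodine
# per gram of iodized salt (the per-gram value stated in the Discussion).
IODINE_UG_PER_G_SALT = 45.0
RDA_UG = 150.0  # US RDA for non-pregnant adults

for label, salt_g, target_ug in (("minimal", 0.0, 0.0),
                                 ("RDA", 3.0, RDA_UG),
                                 ("3x RDA", 9.0, 3 * RDA_UG)):
    iodine_ug = salt_g * IODINE_UG_PER_G_SALT
    print(f"{label:>7}: {salt_g:3.1f} g salt/day -> {iodine_ug:5.1f} ug iodine "
          f"(target {target_ug:.0f} ug)")
# -> 0, 135 (target 150), and 405 (target 450) ug/day, i.e. "close to" the
#    RDA and 3x RDA rather than exact matches.
```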
2.6. The 24 h Urine Collections
Volunteers completed a total of four 24 h urine collections. Participants were instructed to void and discard the first morning urine sample and then collect all subsequent samples for 24 h, ending with the first sample upon waking the next day. Females were provided urine collector pans, which were placed under the toilet seat during urination to allow for ease of collection and pouring of urine into the 24 h collection container. As indicated in Figure 1, 24 h urine collections were completed on study days 3, 6, and 9, in addition to baseline. Total urine volume was measured at the minimal, RDA, and 3× RDA time points using a 2 L graduated cylinder.

2.7. Analysis of Iodine Status Biomarkers in Urine and Blood
Serum iodine and 24 h UIC were measured by a local laboratory (Region West, Scottsbluff, NE, USA), with analysis completed by a contract national laboratory (Mayo Clinic, Rochester, MN, USA). Serum Tg was analyzed at the University of Wyoming using a Human Tg ELISA (enzyme-linked immunosorbent assay) kit. Specifically, 24 h UIC and serum iodine were measured using inductively coupled plasma mass spectrometry, and serum Tg concentration was measured by in vitro enzyme-linked immunosorbent assay. The 24 h UIE was calculated by multiplying the 24 h UIC by the total urine volume.
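The UIE calculation in Section 2.7 is a single multiplication; a minimal sketch with made-up example values:

```python
# Sketch of the 24 h UIE computation: excretion = concentration x volume.
def uie_ug_per_24h(uic_ug_per_l: float, urine_volume_ml: float) -> float:
    """Total 24 h urinary iodine excretion in ug/24 h."""
    return uic_ug_per_l * (urine_volume_ml / 1000.0)

# Example with plausible, made-up values: 135 ug/L in a 2.0 L collection.
print(uie_ug_per_24h(135.0, 2000.0))  # -> 270.0 ug/24 h
```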
2.8. Post-Study Questionnaire
Participants were provided with a post-study questionnaire on day 10, following the last 24 h urine collection. Participants were asked to rate the difficulty of consuming the iodized salt on the RDA and 3× RDA diets and of collecting urine at each of the 24 h urine collections on a Likert scale of 1–5, with 1 being the easiest and 5 being the most difficult. Additionally, participants were asked open-ended questions about the most difficult part of completing this study, how the provided iodized salt was consumed, and whether normal eating patterns changed to accommodate the added salt. Participants were also asked whether all provided salt was consumed on the specified day and whether all urine was collected during each 24 h collection.

2.9. Statistical Analysis
Data were analyzed using Minitab statistics software (Minitab LLC, State College, PA, USA; version 19.1). Response Feature Analysis (RFA) [31] was used to compare differences in UIC and serum iodine values, but not Tg, due to the lack of difference in Tg at the data collection points. Linear regressions were fitted to the UIC and serum iodine measures for each individual participant. A paired t-test was used to compare 24 h UIC, serum iodine, and Tg values from the minimal to the RDA diet, and the same biomarkers were compared from the RDA to the 3× RDA diet. Multiple comparisons for each sex were not performed due to the small sample size. Correlation coefficients (Spearman rank) were used to evaluate the associations between total daily iodine intake, baseline 24 h UIC, and iodine intake from specific foods (dairy, eggs, seafood, seaweed, iodized salt, etc.). Data are expressed as means ± SEM unless otherwise specified. Significance was set at p < 0.05.
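As a rough open-source analogue of the Minitab workflow (paired t-tests and Spearman correlations), the following sketch uses SciPy with placeholder arrays; none of the numbers are the study's data:

```python
# Sketch of the comparisons described in Section 2.9 using SciPy instead
# of Minitab. All arrays are placeholder data, not study measurements.
import numpy as np
from scipy import stats

uic_minimal = np.array([40.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 58.0, 36.0, 49.0])
uic_rda = np.array([95.0, 110.0, 88.0, 120.0, 99.0, 105.0, 92.0, 115.0, 85.0, 101.0])

# Paired t-test: minimal iodine diet vs. RDA diet within the same subjects
t_stat, p_val = stats.ttest_rel(uic_minimal, uic_rda)
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")

# Spearman rank correlation, e.g. FFQ-estimated intake vs. baseline UIC
intake_ug = np.array([150.0, 220.0, 180.0, 300.0, 90.0, 260.0, 200.0, 310.0, 170.0, 240.0])
rho, p_rho = stats.spearmanr(intake_ug, uic_minimal)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```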
3. Results
3.1. Participant Characteristics
The physical characteristics and baseline biochemical data of the five male and five female participants are summarized in Table 1. Two participants reported following a vegetarian diet, and eight were omnivores. Eleven participants were recruited; however, one female dropped out of this study following the screening visit due to illness, and her data are not included. Most of the participants had a normal BMI (18.5–24.9), with two men in the overweight category (25–29.9) and one female in the obese category (≥30). Thyroglobulin antibodies were <1.8 IU/mL (the lower limit of detection) for all participants, indicating a very low possibility that any participant had an autoimmune thyroid disorder, which could interfere with the use of Tg as a marker of iodine status [32]. None of the participants smoked. All participants reported having refrained from strenuous physical activity causing excessive sweating for the duration of this study.
3.2. Baseline Daily Iodine Intake and Frequency of Intake of Iodine-Containing Foods
No participants reported having consumed supplements containing iodine. Daily iodine intake averaged 265.6 ± 28.2 μg (median: 264.5 μg; range: 93.8 to 401.7 μg) and did not differ by sex (p = 0.55). The mean and median iodine intakes were higher than the US RDA of 150 µg/day for male and non-pregnant female adults; however, 10% (n = 1) had an estimated intake less than the RDA, and no participant had an intake greater than the Tolerable Upper Intake Level of 1100 μg/day. The estimated average daily iodine intakes (μg/day) from contributing foods were as follows: total dairy (milk, yogurt, cheese, cottage cheese), 186.5 ± 36.9; milk, 126.3 ± 34.1; eggs, 27.3 ± 12.1; total fish and seafood, 4.1 ± 1.3; seaweed, 1.6 ± 0.5; and iodized table salt, 42.8 ± 15.8. Total estimated iodine intake from the FFQ was associated with reported total dairy (r = 0.830, p = 0.003) and milk intake (r = 0.688, p = 0.03), but not with egg (r = 0.101, p = 0.78), fish and seafood (r = 0.457, p = 0.18), seaweed (r = 0.503, p = 0.14), or iodized salt intake (r = −0.235, p = 0.51).

3.3. Baseline 24 h UIC and Iodine Status
Baseline 24 h UIC ranged between 67 and 253 µg/L; the average value was 135 µg/L, and the median value was 121 µg/L. The frequency of the categories of iodine status based on World Health Organization (WHO) criteria is shown in Figure 2. Average baseline serum Tg fell within the normal range (≤33 ng/mL), with one participant having a Tg value > 33 ng/mL, indicating compromised status.
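For reference, the WHO criteria referred to above classify population iodine status from the median UIC. A sketch of the commonly cited cutoffs for school-age children and non-pregnant adults follows; the cutoffs are assumptions drawn from the standard WHO table and should be verified against the primary source before reuse:

```python
# Commonly cited WHO epidemiological criteria for iodine status, applied
# to a median UIC. Cutoffs are assumptions from the standard WHO table;
# verify against the primary source before reuse.
def who_iodine_category(median_uic_ug_l: float) -> str:
    if median_uic_ug_l < 20:
        return "severe iodine deficiency"
    if median_uic_ug_l < 50:
        return "moderate iodine deficiency"
    if median_uic_ug_l < 100:
        return "mild iodine deficiency"
    if median_uic_ug_l < 200:
        return "adequate iodine intake"
    if median_uic_ug_l < 300:
        return "above requirements"
    return "excessive iodine intake"

print(who_iodine_category(121.0))  # study's baseline median -> adequate
```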
3.4. Relationship between Baseline Iodine Intake and UIC
Estimated total iodine intake from the FFQ was not correlated with UIC (r = 0.273, p = 0.446). FFQ-estimated iodine intake was also not correlated with predicted iodine intake calculated from the Zimmerman UIC equation (r = 0.248, p = 0.489), which incorporates UIC and body mass. UIC, however, was associated with milk (r = 0.688, p = 0.028) and fish/seafood intake (r = 0.646, p = 0.043), but not with reported total dairy intake (r = 0.515, p = 0.128), egg consumption (r = −0.241, p = 0.503), seaweed consumption (r = −0.121, p = 0.740), or iodized table salt use (r = −0.512, p = 0.130).

3.5. Effect of Titration Diet on Iodine Status Biomarkers
Titration diet effects on 24 h UIC, serum iodine, and Tg are shown in Table 2. Urine volumes varied widely among participants (900–4150 mL) and averaged 2155 ± 195 mL, 1972 ± 210 mL, and 2200 ± 290 mL on the minimal, RDA, and 3× RDA diets, respectively. Both UIC and UIE increased from minimal to RDA (p < 0.001 for both) and from RDA to 3× RDA (p = 0.04 and p = 0.002, respectively). The average UIC intercepts and slopes from minimal iodine to 3× RDA produced the following regression: UIC = 39.5 + 19.3 × (grams iodized salt). Thus, UIC increased an average of 19.3 μg/L for every gram of iodized salt consumed. Regression line slopes and intercepts did not differ between sexes (p = 0.27 and p = 0.46, respectively). Serum iodine did not increase from minimal to RDA (p = 0.67) but increased significantly from RDA to 3× RDA (p = 0.006). The average serum iodine intercepts and slopes from minimal iodine to 3× RDA produced the following regression: serum iodine = 61.2 (± 2.0) + 0.6 (± 0.2) × (grams iodized salt). Serum Tg concentration, however, did not differ from minimal iodine to RDA (p = 0.94) or from RDA to 3× RDA (p = 0.68).
3.6. Post-Study Questionnaire Responses
Nine of ten participants reported consuming all of the provided iodized salt on the RDA and 3× RDA diets. One participant reported consuming 75% of the provided iodized salt on day 9 (3× RDA diet). Participants reported adding the iodized salt to reduced-salt or unsalted homemade meals, margaritas, and caramel, and seven of the ten reported dissolving the salt in water to drink. Seven of ten participants reported collecting 100% of urine output at each 24 h collection. One female participant reported missing 100–200 mL during the fourth 24 h urine collection (3× RDA diet). Another female missed a single void during a 24 h collection period, and one male reported occasionally forgetting to collect a urine sample. Overall, participants reported that the most difficult parts of this study were avoiding eggs, dairy, and commercial bread; remembering to collect all urine; and consuming the 9 g of iodized salt on the 3× RDA diet. All ten participants rated consuming all salt on the RDA diet and collecting all urine as a 1, 2, or 3 on a 1–5 Likert scale of difficulty, with 5 being the most difficult. Five of ten participants rated consuming all salt on the 3× RDA diet as a 4 or 5. Participants reported changing their normal eating pattern to accommodate the additional salt by eating lower sodium foods and adjusting the sodium content of homemade recipes, eating larger portions to make food more palatable, and increasing the frequency of meals. They also reported that their normal eating pattern changed by avoiding dairy, eggs, and bread.
Participant Characteristics : The physical characteristics and baseline biochemical data of the five male and five female participants are summarized in Table 1. Two participants reported following a vegetarian diet and eight were omnivores. Eleven participants were recruited. However, one female dropped out of this study following the screening visit due to illness. Data for this participant are not included. Most of the participants were of normal BMI (18.5–24.9), with two men in the overweight category (25–29.9) and one female in the obese category (≥30). Thyroglobulin antibodies were <1.8 IU/mL (lowest detectable range) for all participants, indicating a very low possibility that any participant had an autoimmune thyroid disorder which could interfere with use of Tg as a marker of iodine status [32]. None of the participants smoked. All participants reported having restrained from strenuous physical activity causing excessive sweating for the duration of this study. 3.2. Baseline Daily Iodine Intake and Frequency of Intake of Iodine-Containing Foods: No participants reported having consumed supplements containing iodine. Daily iodine intake averaged 265.6 ± 28.2 ug (median: 264.5 ug; range: 93.8 to 401.7 ug) and was not different by sex (p = 0.55). The mean and median iodine intake was higher than the US RDA for male and non-pregnant female adults of 150 µg/day; however, 10% (n = 1) had estimated intakes less than the RDA and no participants had an intake greater than the Tolerable Upper Limit of 1100 ug/day. The estimated average daily iodine intake from contributing foods are as follows: total dairy (186.5 ± 36.9) (milk, yogurt, cheese, cottage cheese), milk (126.3 ± 34.1), eggs (27.3 ± 12.1), total fish and seafood (4.1 ± 1.3), seaweed (1.6 ± 0.5), and iodized table salt (42.8 ± 15.8). Total estimated iodine intake from the FFQ was associated with reported total dairy (r = 0.830, p = 0.003) and milk intake (r = 0.688, p = 0.03), but not with egg (r = 0.101, p = 0.78), fish and seafood (r = 0.457, p = 0.18) seaweed (r = 0.503, p = 0.14), or iodized salt intake (r = −0.235, p = 0.51). 3.3. Baseline 24 h UIC and Iodine Status: Baseline 24 h UIC ranged between 67 and 253 µg/L. The average value was 135 µg/L and the median value was 121 µg/L. The frequency of the categories of iodine status based on World Health Organization (WHO) criteria is shown in Figure 2. Average baseline serum Tg fell within the normal range (≤33 ng/mL) with 1 participant having a Tg value > 33 ng/mL, indicating compromised status. 3.4. Relationship between Baseline Iodine Intake and UIC: Estimated total iodine intake from the FFQ was not correlated with UIC (r = 0.273, p= 0.446). FFQ-estimated iodine intake was not correlated with predicted iodine intake using the equation of Zimmerman UIC (r = 0.248, p = 0.489) which incorporates UIC and body mass. UIC, however, was associated with milk (r = 0.688, p = 0.028) and fish/seafood intake (r = 0.646, p = 0.043), but not with reported total dairy intake (r = 0.515, p = 0.128), egg consumption (r = −0.241, p = 0.503), seaweed consumption (r = −0.121, p = 0.740), or iodized table salt use (r = −0.512, p = 0.130). 3.5. Effect of Titration Diet on Iodine Status Biomarkers: Titration diet effects on 24 h UIC, serum iodine and Tg are shown in Table 2. Urine volumes varied widely among participants (900–4150 mL), and averaged 2155 ± 195 mL, 1972 ± 210 mL and 2200 ± 290 on the minimal, RDA and 3× RDA diets, respectively. 
Both UIC and UIE increased from minimal to RDA (p < 0.001 for both) and from RDA to 3× RDA (p = 0.04 and p = 0.002, respectively). The average UIC intercepts and slopes from minimal iodine to 3× RDA produced the following regression: UIC (μg/L) = 39.5 + 19.3 × grams iodized salt. Thus, UIC increased an average of 19.3 μg/L for every gram of iodized salt consumed. Regression line slopes and intercepts were not different between sexes (p = 0.27 and p = 0.46, respectively). Serum iodine did not increase from minimal to RDA (p = 0.67) but increased significantly from RDA to 3× RDA (p = 0.006). The average serum iodine intercepts and slopes from minimal iodine to 3× RDA produced the following regression: serum iodine (μg/L) = 61.2 (±2.0) + 0.6 (±0.2) × grams iodized salt. Serum Tg concentration, however, was not different from minimal iodine to RDA (p = 0.94) or from RDA to 3× RDA (p = 0.68). 3.6. Post-Study Questionnaire Responses: Nine of ten participants reported consuming all the provided iodized salt on the RDA and 3× RDA diets. One participant reported consuming 75% of the provided iodized salt on day 9 (3× RDA diet). Participants reported adding the iodized salt to reduced-salt or unsalted homemade meals, margaritas, and caramel, and 7 of the 10 reported dissolving the salt in water to drink. Seven of 10 participants reported collecting 100% of urine output at each 24 h collection. One female participant reported missing 100–200 mL during the fourth 24 h urine collection (3× RDA diet). Another female missed a single void during a 24 h collection period, and one male reported occasionally forgetting to collect a urine sample. Overall, participants reported that the most difficult parts of this study were avoiding eggs, dairy, and commercial bread; remembering to collect all urine; and consuming the 9 g of iodized salt on the 3× RDA diet. All ten participants rated consuming all salt on the RDA diet and collecting all urine as a 1, 2, or 3 on a 1–5 Likert scale of difficulty, with 5 being the most difficult. Five of ten participants rated consuming all salt on the 3× RDA diet as a 4 or 5. Participants reported changing their normal eating pattern to accommodate the additional salt by eating lower-sodium foods, adjusting the sodium content of homemade recipes, eating larger portions to make food more palatable, and increasing the frequency of meals. They also reported that their normal eating pattern changed by avoiding dairy, eggs, and bread. 4. Discussion: The primary purpose of this pilot study was to determine whether a titration diet (which used known quantities of iodized salt) was reflected in the 24 h UIC, serum iodine, and thyroglobulin biomarkers of iodine status. Secondary purposes were to evaluate the association between baseline 24 h UIC and habitual iodine intake and to observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific FFQ. Overall, in our sample of 10 participants, we found that 24 h UIC measures (including 24 h UIE) increased as iodized salt consumption increased. Serum iodine, on the other hand, did not increase from the minimal iodine diet to the iodine RDA diet but was elevated when three times the RDA was consumed. In contrast, serum thyroglobulin concentrations were not different during the titration diet period. As estimated by the iodine-specific FFQ, only milk and total dairy intake were associated with estimated total iodine intake, whereas only milk and fish/seafood intake were associated with 24 h UIC. 
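As a quick illustration of the dose-response fits reported above, the following sketch (Python; illustrative only) evaluates the two regressions at the three diet levels. The grams of iodized salt per diet are assumptions inferred from the 9 g figure reported for the 3× RDA diet, the 45 μg iodine per gram of iodized salt conversion is the figure used later in this discussion, and the μg/L units for the serum iodine coefficients are assumed to match the UIC model.

    # Regressions fitted in this study (Section 3.5):
    #   24 h UIC (ug/L)     = 39.5 + 19.3 * grams_iodized_salt
    #   serum iodine (ug/L) = 61.2 + 0.6  * grams_iodized_salt
    IODINE_UG_PER_G_SALT = 45.0  # figure stated later in this discussion

    def predicted_uic(grams_salt):
        """Predicted 24 h urinary iodine concentration (ug/L)."""
        return 39.5 + 19.3 * grams_salt

    def predicted_serum_iodine(grams_salt):
        """Predicted serum iodine (ug/L)."""
        return 61.2 + 0.6 * grams_salt

    # Assumed salt doses: ~0 g (minimal), ~3 g (RDA), 9 g (3x RDA)
    for salt_g in (0.0, 3.0, 9.0):
        iodine_ug = salt_g * IODINE_UG_PER_G_SALT
        print(f"{salt_g:.1f} g salt (~{iodine_ug:.0f} ug iodine): "
              f"UIC ~ {predicted_uic(salt_g):.1f} ug/L, "
              f"serum iodine ~ {predicted_serum_iodine(salt_g):.1f} ug/L")

The shallow serum iodine slope relative to the UIC slope makes the study's central contrast visible at a glance: urinary measures track the titration closely, while serum iodine barely moves until intake is large.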
To the best of our knowledge, this is the first study to use a controlled, titrated iodine diet with iodized salt as the iodine source to evaluate effects on iodine status biomarkers. A previous study providing oral iodine capsules (225 μg/day of potassium iodide) to pregnant participants during the first trimester found that the urinary iodine/creatinine ratio increased from a median of 53 μg/g to 150–249 μg/g by the third trimester [33]. Women consumed their typical diet during the supplementation period. Additional published studies have found increases in UIC and presumed iodine status in children and adults from iodine-deficient areas following initiation of a salt iodization program [34] or oral iodine supplementation in the form of capsules [33,35] or poppyseed oil [36,37], and following a single-dose injection of intramuscular iodized oil [38]. In our study, the use of iodized salt was a practical way to consume iodine and represented a real-life addition of iodine to the diet (given the iodization of table salt in many countries). An additional strength of this study was that we simultaneously evaluated 24 h UIC along with serum iodine, Tg, and UIE. The 24 h UIC is the “gold standard” for assessing iodine status in populations and, currently, may be the most representative measure of individual iodine status. The primary purpose of this study was to explore the use of potential biomarkers of individual iodine status, including UIC and serum iodine and Tg concentrations. Although 24 h UIC is considered a population iodine status marker, our results suggest that, of the iodine status biomarkers evaluated, 24 h UIC has promise as an individual biomarker in that it is sensitive to changes in short-term dietary intake. According to our regression model (from minimal iodine intake up to 9 g of iodized salt), 24 h UIC increased an average of 19.3 μg/L for every gram of iodized salt consumed. A gram of iodized salt contains 45 μg of iodine. Thus, an increase of 45 μg of dietary iodine increased the 24 h UIC by an average of 19.3 μg/L in our sample. As the 24 h UIC was completed on the third day of each diet, these results demonstrate that the 24 h UIC was sensitive to recent changes in iodine intake. The same pattern of increase was observed in the 24 h UIE. The advantage of total UIE is that it adjusts for urine volume and hydration status, which can vary among individuals and throughout the day. For example, in our study, one participant had daily total urine volumes of 900 mL and another had daily volumes of more than 4 L. Measuring urine volume to calculate 24 h UIE may be useful when evaluating the iodine status of pregnant and lactating women, to ensure this population is increasing water intake to the recommended amount (3 L for pregnant women and 3.8 L for lactating women). Since iodine stored in the thyroid is unaffected by changes in short-term iodine intake in iodine-sufficient individuals (i.e., thyroid stores are sufficient), urinary iodine excretion is primarily representative of recent iodine intake. If thyroid stores were not sufficient, more dietary iodine would likely be taken up by the thyroid (to help replete stores) and less would be excreted in urine [39,40]. This concept was demonstrated in our study: as iodine intake increased, average urinary iodine excretion increased, indicating that as iodine intake exceeds needs and thyroid stores, more iodine is excreted in the urine. Andersen et al. 
found that more than ten spot urines or more than seven 24 h UICs are required to estimate an individual’s iodine status in free-living individuals within a precision range of ±20% [41]. Individuals in that study presumably had variable daily iodine intakes, which were not simultaneously estimated. The amount of iodine taken up by the thyroid, and the subsequent amount of iodine excreted in the urine, depends on the typical iodine intake of the individual [42]. If daily iodine intake is >50 µg, the uptake of iodine by the thyroid is maintained, but if daily iodine intake is <50 µg, the thyroid’s iodine stores become depleted [29]. Wainwright and Cook suggested that multiple 24 h UICs be collected over a prolonged period of time to provide the best representation of usual iodine intake and to account for seasonal [43], circadian rhythm [44], and iodine and hydration status variations [21]. However, collecting multiple 24 h UICs would be even more of a burden to the patient than a single collection and would be subject to collection errors, including missed urine samples. The difficulty of collecting 24 h urines, together with additional limitations [25,29,45,46,47,48,49] (the collections are cumbersome for the patient and subject to within-day hydration and iodine intake changes), prompted us to evaluate other potential biomarkers of iodine status (serum iodine and Tg) in the current study. Our finding that serum iodine concentrations were not different from minimal iodine to RDA but were increased from RDA to three times the RDA provides some support that serum iodine may be a potential alternative marker of iodine status. We initially hypothesized that serum iodine concentration would increase throughout the titration diet as iodine intake increased. Although research on the use of serum iodine as an iodine status biomarker is limited, previous research, mostly from epidemiological studies in Chinese populations, has concluded that serum iodine may be more indicative of long-term (or typical) iodine status than of recent iodine intake, because serum iodine was observed to differ by location of residence but not between sexes or age groups, or according to smoking or exercise status [25,45]. The difference between our findings and those previously reported may be due to differences in study duration. Studies conducted in Chinese populations with varying iodine status found that when iodine in the external environment increased (salt fortification and increased iodine in the water supply), UIC increased while serum iodine remained comparatively stable [25,45]. Another study conducted in Chinese adults found that serum iodine was positively correlated with total T3 and T4 and free T4 concentrations [46], suggesting that serum iodine most closely represents the iodine bioavailable to the thyroid gland, with 80–90% of total serum iodine incorporated into thyroid hormones [46,50]. While in overall agreement with previous studies, our study found significant increases in serum iodine only when iodine intake was excessive. This may be because our participants were presumably iodine replete at the start of this study and/or because the 3 day diet periods were not long enough for serum iodine to fall during minimal iodine intake. Thus, longer-term studies (over weeks or months instead of days) may be needed to evaluate long-term changes in serum iodine concentration. 
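To make the excretion logic above concrete, here is a minimal back-of-envelope sketch (Python; illustrative only) converting a 24 h UIC and urine volume into total UIE and then into an estimated daily intake. The ~90% urinary excretion fraction is the figure cited in this paper's introduction; the exact 0.92 divisor is an assumed constant in the spirit of the Zimmermann-style estimate mentioned earlier.

    def uie_ug(uic_ug_per_l, urine_volume_l):
        """Total 24 h urinary iodine excretion (ug) = UIC x 24 h urine volume."""
        return uic_ug_per_l * urine_volume_l

    def estimated_intake_ug(uie, excreted_fraction=0.92):
        """Rough daily iodine intake (ug/day), assuming ~90% of dietary
        iodine is excreted in urine in iodine-replete adults."""
        return uie / excreted_fraction

    # Example with this study's baseline median UIC (121 ug/L) and the
    # approximate average daily urine volume (~2.1 L) from the Results.
    uie = uie_ug(121, 2.1)
    print(f"UIE ~ {uie:.0f} ug/day -> estimated intake ~ {estimated_intake_ug(uie):.0f} ug/day")

Reassuringly, this lands near the ~266 µg/day average baseline intake estimated by the FFQ, though with the precision caveats of Andersen et al. noted above.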
In contrast to our hypothesis, serum thyroglobulin concentrations were not different as iodized salt consumption increased from minimal iodine intake (~0 g iodized salt) to 9 g of iodized salt (~405 μg of iodine, at 45 μg/g) per day. As with serum iodine, this observation may be due to thyroglobulin’s lack of sensitivity to recent changes in iodine intake. Several previous cross-sectional studies collectively reported inverse associations between serum Tg concentration and iodine status (determined by spot UIC and spot urinary iodine excretion (UIE)) within iodine-deficient and iodine-excessive populations across the globe [51,52,53]. Because those studies determined iodine status based on a single spot UIC or UIE sample, individual iodine status could not be determined [53]. Following a salt iodization program, Vejbjerg et al. found that median serum Tg decreased from 10.9 to 8.7 μg/L (p < 0.001) in an area with previously mild iodine deficiency and from 14.6 to 8.9 μg/L (p < 0.001) in an area with previously moderate iodine deficiency. Decreases in serum Tg reflect better iodine status because when iodine intake is insufficient, low circulating levels of T4 stimulate the synthesis and release of thyrotropin-releasing hormone, which subsequently increases the production of TSH; this then stimulates Tg synthesis, causing an increase in serum Tg [26]. Even though serum TSH increases during iodine deficiency, TSH concentrations often remain within normal reference ranges, and thus TSH is not considered a good indicator of iodine status [21]. Additionally, serum Tg concentrations showed minimal day-to-day variation compared with spot UIC samples following the introduction of a salt iodization program [52]. This lack of variation in Tg concentrations compared with UIC may suggest that Tg is more representative of long-term iodine intake in a population, especially in areas with endemic goiter, because it may reflect overall thyroid cell mass [35,52]. Overall, however, studies on serum Tg as a biomarker of individual iodine status are limited, and further research is needed to establish reference concentrations in healthy, iodine-sufficient individuals and to understand the relationship between Tg and recent iodine intake. The secondary purposes of the current study were to evaluate the association between baseline 24 h UIC and habitual iodine intake and to observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific FFQ. Milk and total dairy intake correlated positively with total iodine intake as estimated by the iodine-specific FFQ. We also found positive associations between UIC and milk and fish/seafood intake. As with all self-reported data, there is potential for under- or overreporting of nutrient intake [29], as could be the case with our iodine-specific FFQ. However, previous studies [12,54,55] have found positive correlations between UIC and milk and dairy intake, including one from our lab which reported total dairy and egg intake to predict approximately 20% of the variance in 24 h UIC [29]. A review by Reijden et al. estimated that milk and dairy contribute ~13–64% of recommended daily iodine intake, based on country-specific food intake data [55]. Dairy is a reliable, although variable, source of dietary iodine. Iodine enters cow’s milk through the cow’s ingestion of water, feed, or vegetation, or through exposure to iodophor disinfectants. 
Such disinfectants are used in the US to clean cow teats and udders, milking equipment, and other milk-holding containers and transport trucks [56]. However, the iodine content of milk can vary greatly, from 33 to 534 μg/L according to a recent study [55]. Such variation is due to differences in agricultural practices; milk yield; the type, concentration, and application method of iodophor sanitizers; the type of cow feed; goitrogen intake by dairy cows; and seasonal variation in pasture versus prepared feed [12,55]. Alternatively, dairy and milk may simply be more easily recognized and recalled than other iodine sources such as iodized salt. Dairy may be a commonly reported iodine source because it is consumed in easily recalled forms, including the habitual glass of milk, milk added to breakfast cereal, and convenient items such as yogurt and cottage cheese. Although the current study has several strengths, there are limitations, which include a small sample size and short study duration. The small sample size allowed for control and accountability in the data collection methodology, but the overall findings may not be generalizable to populations with more extreme iodine status or to those who are pregnant or lactating. Our participants, for example, were screened for normal thyroid function and most had iodine intakes that met the daily US recommendation. Additionally, excess sodium consumption may have contributed to excess fluid retention, which could influence serum iodine and Tg biomarker concentrations, particularly on the 3× RDA diet. A longer study duration (perhaps weeks or months) may have allowed for observation of whether the serum iodine and thyroglobulin biomarkers would be sensitive to longer-term changes in dietary iodine intake. Future studies from our laboratory will further evaluate biomarkers of iodine status using longer titration diet periods and an iodine source other than iodized salt (to avoid the risk of excess sodium consumption). Limitations apart from the study design include the high variability of iodine content in food and the absence of iodine data in the USDA Food Composition Database, making it difficult to reliably assess iodine intake and make comparisons to iodine status. The iodine content of soil naturally varies, and thus the iodine content of plants and animals, agricultural products, and saltwater fish and seafood is highly variable. The FFQ used averages of iodine content from the TDS to estimate the iodine intake from foods. However, these average values mask high variability. For example, in a study by Eckhoff and Maage, the iodine content of saltwater fish fillets ranged from 0.122 to 0.922 μg/g, while that of freshwater fish fillets ranged from 0.005 to 0.082 μg/g [57]. 5. Conclusions: The current study provides preliminary support to suggest that 24 h UIC and serum iodine may be indicative of individual iodine status, at least over the short term (3 days) when iodine status is mostly stable. The results suggest that 24 h UIC is likely the most sensitive indicator of individual iodine status at this time and may still be the best biomarker for assessing iodine status. However, serum iodine is easier to measure, requiring a single blood draw rather than single or multiple 24 h urine collections, and would provide a convenient method to assess iodine status, especially for pregnant and lactating women. Serum Tg was not sensitive to short-term changes in iodine intake, but future studies should evaluate its use as a long-term iodine status marker. 
This study highlights the need for additional research to identify individual iodine biomarkers. Continuing to explore these markers to promptly identify iodine deficiencies in women of reproductive age and in pregnant and lactating women is especially important so that dietary interventions can be recommended and deficiencies corrected.
Background: The iodine status of the US population is considered adequate, but subpopulations remain at risk for iodine deficiency, and a biomarker of individual iodine status has yet to be determined. The purpose of this study was to determine whether a 3 day titration diet, providing known quantities of iodized salt, is reflected in 24 h urinary iodine concentration (UIC), serum iodine, and thyroglobulin (Tg). Methods: A total of 10 participants (31.3 ± 4.0 years, 76.1 ± 6.3 kg) completed three 3 day iodine titration diets (minimal iodine, US RDA (United States Recommended Daily Allowance), and 3× RDA). The 24 h UIC, serum iodine, and Tg were measured following each diet. The 24 h UIC and an iodine-specific food frequency questionnaire (FFQ) were completed at baseline. Results: UIC increased an average of 19.3 μg/L for every gram of iodized salt consumed and was different from minimal to RDA (p = 0.001) and RDA to 3× RDA diets (p = 0.04). Serum iodine was different from RDA to 3× RDA (p = 0.006), whereas Tg was not responsive to diet. Baseline UIC was associated with iodine intake from milk (r = 0.688, p = 0.028) and fish/seafood (r = 0.646, p = 0.043). Conclusions: These results suggest that 24 h UIC and serum iodine may be reflective of individual iodine status and may serve as biomarkers of iodine status.
1. Introduction: Iodine is an essential, rate-limiting element for the synthesis of thyroid hormones, which is currently the only known physiological role of iodine. Ensuring adequate iodine intake is important for all adults, but particularly for women of childbearing age. Inadequate intake and deficiency during pregnancy can influence brain, nerve, and muscle development in the growing fetus and result in growth and developmental abnormalities [1]. Iodine status in the US is generally thought to be adequate [2,3]. However, select populations are susceptible to insufficient iodine status due to geographical location, food intake practices, or increased iodine needs (e.g., pregnancy and lactation). Subpopulations at increased risk for low iodine intake include pregnant and lactating women [4,5,6], vegans [7,8,9] and vegetarians [7,8,9,10,11], those who avoid seafood and/or dairy [12], those who follow a sodium-restricted diet [13,14], and those who eat local foods in regions with iodine-depleted soils [1,15]. The Institute of Medicine, the American Heart Association, and the 2020–2025 Dietary Guidelines for Americans [16] have advocated for decreasing sodium intake to less than 2300 mg/day [17] and, more prudently, to less than 1500 mg/day [18], which could be reducing Americans’ intake of iodized salt. Additionally, trends toward local and plant-based diets may negatively influence iodine status depending on food selection and dietary intake patterns (e.g., avoidance of seafood, dairy, and eggs) and the content of local soils. Therefore, despite presumed adequate iodine status in the general US population, certain dietary choices may directly influence iodine status and, hence, thyroid function. Additionally, dietary patterns low in iodine content are of particular concern for women of reproductive age and pregnant and lactating women due to iodine’s importance during fetal growth and development. The 24 h urinary iodine concentration (UIC) is considered the gold standard for assessing iodine status in populations, as approximately 90% of dietary iodine is excreted in the urine of healthy and iodine-replete adults [19]. However, 24 h urine collections are cumbersome for the patient, prone to collection and methodological errors, expensive to measure by the state-of-the-art procedures employed by most clinical laboratories (i.e., mass spectrometry) [20], and may not adequately reflect iodine status in individuals [21]. For example, 24 h UIC may better represent acute (e.g., days) versus chronic iodine status [22] because of variations in 24 h iodine excretion due to within-day hydration status changes [23] and other unknown variables [22]. While other biomarkers for the assessment of status have been suggested, including 24 h total urinary iodine excretion (UIE) [19,24], serum iodine [25], and thyroglobulin (Tg) concentration [26], the use of these markers (as well as 24 h UIC) must be better validated in healthy subjects against known quantities of iodine intake. The primary purpose of this pilot study was to determine whether a 3-day titration diet (which provided known quantities of iodized salt) is reflected in the 24 h UIC, UIE, serum iodine, and thyroglobulin biomarkers. Secondary purposes were to evaluate the association between baseline 24 h UIC and habitual iodine intake and to observe the contribution of iodine-containing foods to total iodine intake assessed by an iodine-specific food frequency questionnaire (FFQ). 
We hypothesized that 24 h UIC, UIE, and serum iodine would increase as iodized salt consumption increased and that Tg would increase when iodine intake exceeded the iodine Recommended Daily Allowance (RDA) of 150 µg. We also hypothesized that baseline 24 h UIC would be associated with the intake frequency of dairy, eggs, and iodized salt and that these same food items would be associated with the total iodine intake as estimated by the FFQ. 5. Conclusions: The current study provides preliminary support to suggest that 24 h UIC and serum iodine may be indicative of individual iodine status, at least over the short term (3 days) when iodine status is mostly stable. The results suggest that 24 h UIC is likely the most sensitive indicator of individual iodine status at this time and may still be the best biomarker for assessing iodine status. However, serum iodine is easier to measure, requiring a single blood draw rather than single or multiple 24 h urine collections, and would provide a convenient method to assess iodine status, especially for pregnant and lactating women. Serum Tg was not sensitive to short-term changes in iodine intake, but future studies should evaluate its use as a long-term iodine status marker. This study highlights the need for additional research to identify individual iodine biomarkers. Continuing to explore these markers to promptly identify iodine deficiencies in women of reproductive age and in pregnant and lactating women is especially important so that dietary interventions can be recommended and deficiencies corrected.
Background: The iodine status of the US population is considered adequate, but subpopulations remain at risk for iodine deficiency, and a biomarker of individual iodine status has yet to be determined. The purpose of this study was to determine whether a 3 day titration diet, providing known quantities of iodized salt, is reflected in 24 h urinary iodine concentration (UIC), serum iodine, and thyroglobulin (Tg). Methods: A total of 10 participants (31.3 ± 4.0 years, 76.1 ± 6.3 kg) completed three 3 day iodine titration diets (minimal iodine, US RDA (United States Recommended Daily Allowance), and 3× RDA). The 24 h UIC, serum iodine, and Tg were measured following each diet. The 24 h UIC and an iodine-specific food frequency questionnaire (FFQ) were completed at baseline. Results: UIC increased an average of 19.3 μg/L for every gram of iodized salt consumed and was different from minimal to RDA (p = 0.001) and RDA to 3× RDA diets (p = 0.04). Serum iodine was different from RDA to 3× RDA (p = 0.006), whereas Tg was not responsive to diet. Baseline UIC was associated with iodine intake from milk (r = 0.688, p = 0.028) and fish/seafood (r = 0.646, p = 0.043). Conclusions: These results suggest that 24 h UIC and serum iodine may be reflective of individual iodine status and may serve as biomarkers of iodine status.
13,100
290
[ 3871, 264, 121, 362, 260, 125, 112, 133, 185, 168, 260, 84, 143, 257, 286 ]
20
[ "iodine", "intake", "rda", "24", "salt", "participants", "uic", "study", "serum", "iodized" ]
[ "iodine status pregnant", "dietary iodine increased", "iodine intake foods", "diet iodine intake", "iodine needs pregnancy" ]
null
[CONTENT] iodine status | biomarkers | validation | nutritional exposure | dietary biomarkers | iodine intake | urinary iodine concentration | serum iodine | thyroglobulin | food frequency questionnaire | dairy products [SUMMARY]
null
[CONTENT] iodine status | biomarkers | validation | nutritional exposure | dietary biomarkers | iodine intake | urinary iodine concentration | serum iodine | thyroglobulin | food frequency questionnaire | dairy products [SUMMARY]
[CONTENT] iodine status | biomarkers | validation | nutritional exposure | dietary biomarkers | iodine intake | urinary iodine concentration | serum iodine | thyroglobulin | food frequency questionnaire | dairy products [SUMMARY]
[CONTENT] iodine status | biomarkers | validation | nutritional exposure | dietary biomarkers | iodine intake | urinary iodine concentration | serum iodine | thyroglobulin | food frequency questionnaire | dairy products [SUMMARY]
[CONTENT] iodine status | biomarkers | validation | nutritional exposure | dietary biomarkers | iodine intake | urinary iodine concentration | serum iodine | thyroglobulin | food frequency questionnaire | dairy products [SUMMARY]
[CONTENT] Adult | Animals | Biomarkers | Dairy Products | Diet | Eggs | Female | Humans | Iodine | Male | Nutritional Status | Pilot Projects | Seafood | Sodium Chloride, Dietary | Thyroglobulin [SUMMARY]
null
[CONTENT] Adult | Animals | Biomarkers | Dairy Products | Diet | Eggs | Female | Humans | Iodine | Male | Nutritional Status | Pilot Projects | Seafood | Sodium Chloride, Dietary | Thyroglobulin [SUMMARY]
[CONTENT] Adult | Animals | Biomarkers | Dairy Products | Diet | Eggs | Female | Humans | Iodine | Male | Nutritional Status | Pilot Projects | Seafood | Sodium Chloride, Dietary | Thyroglobulin [SUMMARY]
[CONTENT] Adult | Animals | Biomarkers | Dairy Products | Diet | Eggs | Female | Humans | Iodine | Male | Nutritional Status | Pilot Projects | Seafood | Sodium Chloride, Dietary | Thyroglobulin [SUMMARY]
[CONTENT] Adult | Animals | Biomarkers | Dairy Products | Diet | Eggs | Female | Humans | Iodine | Male | Nutritional Status | Pilot Projects | Seafood | Sodium Chloride, Dietary | Thyroglobulin [SUMMARY]
[CONTENT] iodine status pregnant | dietary iodine increased | iodine intake foods | diet iodine intake | iodine needs pregnancy [SUMMARY]
null
[CONTENT] iodine status pregnant | dietary iodine increased | iodine intake foods | diet iodine intake | iodine needs pregnancy [SUMMARY]
[CONTENT] iodine status pregnant | dietary iodine increased | iodine intake foods | diet iodine intake | iodine needs pregnancy [SUMMARY]
[CONTENT] iodine status pregnant | dietary iodine increased | iodine intake foods | diet iodine intake | iodine needs pregnancy [SUMMARY]
[CONTENT] iodine status pregnant | dietary iodine increased | iodine intake foods | diet iodine intake | iodine needs pregnancy [SUMMARY]
[CONTENT] iodine | intake | rda | 24 | salt | participants | uic | study | serum | iodized [SUMMARY]
null
[CONTENT] iodine | intake | rda | 24 | salt | participants | uic | study | serum | iodized [SUMMARY]
[CONTENT] iodine | intake | rda | 24 | salt | participants | uic | study | serum | iodized [SUMMARY]
[CONTENT] iodine | intake | rda | 24 | salt | participants | uic | study | serum | iodized [SUMMARY]
[CONTENT] iodine | intake | rda | 24 | salt | participants | uic | study | serum | iodized [SUMMARY]
[CONTENT] iodine | intake | status | iodine status | 24 | influence | iodine intake | dietary | adequate | uic [SUMMARY]
null
[CONTENT] rda | reported | iodine | participants | intake | participants reported | salt | uic | ml | iodized [SUMMARY]
[CONTENT] iodine | iodine status | status | individual iodine | term | individual | women | identify | deficiencies | short [SUMMARY]
[CONTENT] iodine | rda | intake | participants | 24 | uic | salt | urine | serum | study [SUMMARY]
[CONTENT] iodine | rda | intake | participants | 24 | uic | salt | urine | serum | study [SUMMARY]
[CONTENT] US ||| 3 day | 24 | UIC | thyroglobulin [SUMMARY]
null
[CONTENT] UIC | 19.3 | RDA | 0.001 | RDA | 0.04 ||| RDA | 0.006 | Tg ||| Baseline UIC | 0.688 | 0.028 | 0.646 | 0.043 [SUMMARY]
[CONTENT] 24 | UIC [SUMMARY]
[CONTENT] US ||| 3 day | 24 | UIC | thyroglobulin ||| 10 | 31.3 ± | 4.0 years | 76.1 | 6.3 kg | three | 3 day | US | United States Recommended Daily Allowance ||| 24 | UIC | Tg ||| 24 | UIC ||| 19.3 | RDA | 0.001 | RDA | 0.04 ||| RDA | 0.006 | Tg ||| Baseline UIC | 0.688 | 0.028 | 0.646 | 0.043 ||| 24 | UIC [SUMMARY]
[CONTENT] US ||| 3 day | 24 | UIC | thyroglobulin ||| 10 | 31.3 ± | 4.0 years | 76.1 | 6.3 kg | three | 3 day | US | United States Recommended Daily Allowance ||| 24 | UIC | Tg ||| 24 | UIC ||| 19.3 | RDA | 0.001 | RDA | 0.04 ||| RDA | 0.006 | Tg ||| Baseline UIC | 0.688 | 0.028 | 0.646 | 0.043 ||| 24 | UIC [SUMMARY]
A Trichophyton Rubrum Infection Model Based on the Reconstructed Human Epidermis - Episkin®.
26712433
Trichophyton rubrum represents the most common infectious fungus responsible for dermatophytosis in humans, but the mechanism involved is still not completely understood. An appropriate model constructed to simulate host infection is a prerequisite for studying the pathogenesis of dermatophytosis caused by T. rubrum. In this study, we intended to develop a new T. rubrum infection model in vitro, using the three-dimensional reconstructed epidermis - EpiSkin®, and to pave the way for further investigation of the mechanisms involved in T. rubrum infection.
BACKGROUND
The reconstructed human epidermis (RHE) was infected by inoculating low-dose (400 conidia) and high-dose (4000 conidia) T. rubrum conidia to optimize the infection dose. At various time points after infection, the samples were processed for pathological examination and scanning electron microscopy (SEM) observation.
METHODS
The histological analysis of RHE revealed a fully differentiated epidermis with a functional stratum corneum, which was analogous to normal human epidermis. The results of hematoxylin and eosin staining and periodic acid-Schiff staining showed that the infection dose of 400 conidia reproduced the pathological characteristics of host dermatophytosis caused by T. rubrum. SEM observations further exhibited the process of T. rubrum infection in an intuitive way.
RESULTS
We successfully established a T. rubrum infection model on RHE in vitro. It is a promising model for further investigation of the mechanisms involved in T. rubrum infection.
CONCLUSIONS
[ "Animals", "Disease Models, Animal", "Epidermis", "Humans", "Keratinocytes", "Tissue Culture Techniques", "Trichophyton" ]
4797543
INTRODUCTION
Trichophyton rubrum represents the most common infectious fungus responsible for dermatophytosis in humans.[12] Dermatophytes are typically keratinophilic and invade keratinized structures such as the skin stratum corneum, hairs, and nails. The infection caused by T. rubrum, except for diffusing into deeper parts of the body in some immunosuppressed patients,[34] is normally limited to the superficial skin and tends to be chronic and recurrent,[5] and the pathogenic mechanism involved is still not completely understood. Various models have been explored in vitro and ex vivo to investigate the mechanism of dermatophyte infection, such as animal models,[6] stripped sheets of the stratum corneum,[78] nail plates, and monolayer cell culture models.[910] However, due to the anthropophilic nature of T. rubrum and the spontaneous healing of animals after T. rubrum infection,[11] the animal model is not the optimal choice. Stripped sheets of stratum corneum or nail plates from healthy volunteers cannot respond to dermatophyte infection because they lack living cells and immune function, which is obviously different from the actual situation in vivo. Our previous studies have already demonstrated that human keratinocytes can recognize T. rubrum and induce native immune responses against T. rubrum infection.[121314] Nevertheless, unlike the natural infection localized to the cornified layers of the skin, T. rubrum in the monolayer cell culture model is submerged under the culture medium and contacts keratinocytes directly, which renders the immunogenicity and pathogenicity of T. rubrum distinct from those in vivo.[15] Taken together, all the above models have certain limitations in imitating human T. rubrum infection. In recent years, several reconstructed human skin models have been developed. These reconstructed human skin equivalents, with a fully differentiated epidermis, closely resemble native human epidermis,[1617] therefore providing a morphologically relevant means to assess skin irritation and to research skin-related diseases in vitro. In this study, we intended to establish a T. rubrum infection model in vitro, based on the three-dimensional reconstructed human epidermis (RHE) - EpiSkin®. It will pave the way for further investigation of the mechanisms involved in T. rubrum infection in the future.
METHODS
Reconstructed human epidermis EpiSkin® Commercial epidermal tissue EpiSkin (L’Oreal Research and Innovation Center, Shanghai, China) is an in vitro RHE from normal human keratinocytes cultured on a collagen matrix at the air-liquid interface. This RHE is histologically similar to the in vivo human epidermis. Immediately upon arrival, the EpiSkin models were removed from the nutrient agar and transferred to 12-well plates filled with medium without antifungals provided by the manufacturer. After equilibration overnight in an incubator (37°C, 5.0% CO2), the medium was changed, and the tissues were used in the following experiments. Fungal strain and conidia collection T. rubrum strain T1a used in the present experiment was obtained from the China Medical Microbiological Culture Collection Center and was confirmed as T. rubrum by morphological identification and sequencing of the internal transcribed spacer regions and the D1–D2 domain of the large-subunit rRNA gene. T. rubrum was cultured for 2 weeks at 27°C on potato glucose agar (OXOID, UK) to produce conidia. A mixed suspension of conidia and hyphae fragments was obtained by covering the fungal colonies with sterile saline (0.85%) and gently rubbing the colonies with an inoculation loop. Next, the suspension was filtered through Whatman filter paper model 1 (pore size, 11 μm; Whatman, UK) to remove the hyphae,[18] and then centrifuged at 4000 r/min for 10 min to collect the conidia. The conidia were washed twice by agitation in sterile saline. The concentration of conidia was adjusted with sterile saline to final concentrations of 8 × 10⁴ CFU/ml and 8 × 10⁵ colony-forming units (CFU)/ml by hemacytometer counts. Infection EpiSkin® with Trichophyton rubrum conidia For the pilot experiment, the two concentrations of conidia suspensions (8 × 10⁴ CFU/ml and 8 × 10⁵ CFU/ml) were used to infect the RHE. A 5 μl inoculum of T. 
rubrum conidia suspensions (400 conidia and 4000 conidia) was added to the center of the surface of each RHE cultured in 12-well plates. An identical volume of sterile saline was added as a negative control. Then, the 12-well plates were transferred to an incubator (37°C, 5.0% CO2), and the maintenance medium was changed every other day. The epidermis tissues were processed after infection for 2 days, 4 days, and 10 days. Pathological Examination: Hematoxylin and eosin and periodic acid-Schiff staining After rinsing 3 times with 0.01 mol/L phosphate-buffered saline (PBS), the specimens were fixed overnight in 4% paraformaldehyde at 4°C. Afterward, the epidermis was cut from the insert with a surgical scalpel and immersed in 0.01 mol/L PBS for 5 min, and the tissues were then dehydrated in graded ethanol, cleared in xylene, and immersed and embedded in paraffin. Slices of 5 μm thickness were cut from the embedded tissues and stained with hematoxylin and eosin (H and E) and periodic acid-Schiff (PAS) for light microscopic examination. Normal prepuce tissue harvested from clinical circumcision surgery, serving as normal human skin, was processed with H and E staining at the same time for comparison. Scanning electron microscopy The specimens were washed 3 times with PBS and fixed with glutaraldehyde at 4°C. Then, the samples were dehydrated with an increasing gradient of ethanol (50–100%), critical-point dried with liquid CO2, and coated with gold. After that, the interaction between T. rubrum and the RHE was viewed under a FEI Quanta 200 scanning electron microscope (SEM, USA). 
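As a sanity check of the dosing arithmetic above (purely illustrative; Python), a 5 μl drop of each suspension delivers the stated low- and high-dose conidia counts.

    def conidia_per_inoculum(concentration_cfu_per_ml, volume_ul):
        """Conidia delivered per inoculum: concentration (CFU/ml) x volume (ml)."""
        return concentration_cfu_per_ml * (volume_ul / 1000.0)  # ul -> ml

    for conc in (8e4, 8e5):  # the two suspensions prepared above, in CFU/ml
        print(f"{conc:.0e} CFU/ml x 5 ul = {conidia_per_inoculum(conc, 5):.0f} conidia")
    # -> 400 and 4000 conidia: the low- and high-dose inocula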
RESULTS
Structure of the reconstructed human epidermis was similar to human epidermis The results of H and E staining [Figure 1a and 1c] showed that the structure of the reconstructed epidermis was composed of a stratum corneum, a keratinocyte layer, and an organic support layer, which was similar to normal human epidermis from prepuce tissue [Figure 1b and 1d]. Morphology of the reconstructed human epidermis EpiSkin®, compared with normal human skin derived from prepuce tissue, by H and E staining. (a and b) original magnification, ×100; (c and d) original magnification, ×400. Reconstructed human epidermis infected with Trichophyton rubrum conidia To better mimic host dermatophyte infection, we optimized the infection dose by inoculating a low dose (400 conidia) and a high dose (4000 conidia) on the RHE. After 2 days of infection, the results of H and E and PAS staining did not show any conidia or hyphae in the horny layer in either group (data not shown). After T. rubrum had been inoculated for 4 days, conidia and hyphae fragments were found in the stratum corneum [Figures 2a, 2b and 3a, 3b]; moreover, infection with 4000 conidia showed more conidia and hyphae fragments than infection with 400 conidia. On the 10th day of co-culture, the histopathological features exhibited a great difference between the two groups. The group infected with 400 conidia showed invasion limited to the cornified layer, without penetration through the stratum corneum to the keratinocyte layer [Figure 2c and 2d], which is in accordance with the pathological characteristics of superficial dermatophytosis caused by T. rubrum in vivo. However, the group infected with 4000 conidia displayed enormous destruction, encroaching on almost the full epidermis and presenting obvious necrosis of keratinocytes [Figure 3c and 3d]. Hence, the inoculum of 400 conidia was used to imitate host dermatophyte infection. The H and E and periodic acid-Schiff staining of reconstructed human epidermis infected with 400 Trichophyton rubrum conidia. (a and b) Stained by H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained by H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection with Trichophyton rubrum conidia, conidia and hyphae were found in the stratum corneum (black arrow). The reconstructed human epidermis infected with 4000 Trichophyton rubrum conidia. (a and b) Stained by H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained by H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection for 4 days, an abundance of conidia and hyphae was present in the stratum corneum. At the 10th day of infection, the infection extended over almost the full epidermis and the epidermis displayed enormous destruction. 
Scanning electron microscopy showed the infection process of Trichophyton rubrum conidia in reconstructed human epidermis After 2 days of infection (400 conidia), SEM revealed the germination of the conidia, with early germ tubes originating from the adhered conidia [Figure 4a], as well as the elongation of the germ tubes [Figure 4b]. With further incubation (4 days), hyphae were observed distributed over the surface in a netlike manner [Figure 4c]. The hyphae could also be seen extending horizontally and penetrating into the RHE [Figure 4d], indicating that infection of the stratum corneum had been achieved. Scanning electron microscopy observation of the process of reconstructed human epidermis infection with Trichophyton rubrum conidia. (a and b) Revealing the germination of the conidia and elongation of the germ tubes 2 days after infection. (c and d) Showing the invasion of the hyphae spreading along the reconstructed human epidermis surface and penetrating through the outer surface layer after infection for 4 days. (a and b) Scale bar = 10 μm, (c) Scale bar = 50 μm, and (d) Scale bar = 5 μm.
null
null
[ "Reconstructed human epidermis EpiSkin®", "Fungal strain and conidia collection", "Infection EpiSkin® with Trichophyton rubrum conidia", "Pathological Examination: Hematoxylin and eosin and periodic acid-Schiff staining", "Scanning electron microscopy", "Structure of the reconstructed human epidermis was similar to human epidermis", "Reconstructed human epidermis infected with Trichophyton rubrum conidia", "Scanning electron microscopy showed the infection process of Trichophyton rubrum conidia in reconstructed human epidermis" ]
[ "Commercial epidermal tissues EpiSkin (L’Oreal Research and Innovation Center, Shanghai, China) is an in vitro RHE from normal human keratinocytes cultured on a collagen matrix at the air-liquid interface. This RHE is histologically similar to the in vivo human epidermis. Immediately upon arrival, the EpiSkin models were removed from the nutrient agar and transferred to 12-well plates filled with medium without antifungals provided by the manufacturer. After equilibration overnight in an incubator (37°C, 5.0% CO2), the medium was changed, and the tissues were used in the following experiments.", "T. rubrum strain T1a used in the present experiment was obtained from the China Medical Microbiological Culture Collection Center and was confirmed as T. rubrum by morphological identification and sequencing of internal transcribed spacer regions and the D1–D2 domain of the large-subunit rRNA gene.\nT. rubrum was cultured for 2 weeks at 27°C on potato glucose agar (OXOID, UK) to produce conidia. A mixed suspension of conidia and hyphae fragments was obtained by covering the fungal colonies with sterile saline (0.85%) and gently rubbing the colonies with the inoculation loop. Next, the suspension was filtered with a Whatman filter paper model 1 (pore size, 11 μm; Whatman, UK) to remove the hyphae,[18] and then centrifuged at 4000 r/min for 10 min to collect the conidia. These conidia were washed twice by agitation in sterile saline. The concentration of conidia was adjusted with sterile saline to a final concentration of 8 × 104 CFU/ml and 8 × 105 colony-forming unit (CFU)/ml by hemacytometer counts.", "For the pilot experiment, two concentrations of conidia suspensions (8 × 104 CFU/ml and 8 × 105 CFU/ml) were used to infect RHE. A 5 μl inoculum of T. rubrum conidia suspensions (400 conidia and 4000 conidia) was added to the center of each RHE surface-cultured in 12-well plates. The identical volume of sterile saline was added as negative controls. Then, the 12-well plates were transferred to an incubator (37°C, 5.0% CO2), and the maintenance medium was changed every other day. The epidermis tissues were processed after infection for 2 days, 4 days, and 10 days.", "After rinsing 3 times with 0.01 mol/L phosphate-buffered saline (PBS), the specimens were fixed overnight in 4% paraformaldehyde at 4°C. Afterward, cut the epidermis from the insert with a surgical scalpel and immersed in 0.01 mol/L PBS for 5 min, and then the tissues were processed for dehydration using graded ethanol, vitrification by xylene, immersion, and embedding in paraffin. Slices with 5 μm thicknesses were cut from the embedded tissues and dealt with hematoxylin and eosin (H and E) staining and periodic acid-Schiff (PAS) staining for light microscopic examination. The normal prepuce tissues harvested from clinical circumcision surgery as normal human skin were processed with H and E staining at the same time for comparison.", "The specimens were washed 3 times with PBS and fixed with glutaraldehyde at 4°C. Then, samples were dehydrated with an increasing gradient of ethanol (50–100%), dried using the critical point method with CO2 in liquid state, and coated with gold. After that, the interaction between the T. 
rubrum and the RHE was viewed under a FEI Quanta 200 scanning electron microscope (SEM, USA).", "The results of H and E staining [Figure 1a and 1c] showed that the structure of the reconstructed epidermis was composed of stratum corneum, keratinocyte layer, and organic support layer, which was similar to human normal epidermis from prepuce tissue [Figure 1b and 1d].\nMorphology of the reconstructed human epidermis EpiSkin®, compared with the normal human skin derived from prepuce tissue through H and E staining. (a and b) original magnification, ×100; (c and d) original magnification, ×400.", "To better mimic host dermatophyte infection, we optimized the infection dose by inoculating low-dose (400 conidia) and high-dose (4000 conidia) on the RHE. After 2 days of infection, the results of H and E and PAS staining did not show any conidia or hyphae in the horny layer in either group (data not shown). After T. rubrum was inoculated 4 days, conidia and hyphae fragments were found in the stratum corneum [Figures 2a, 2b and 3a, 3b]; moreover, infection with 4000 conidia showed more conidia and hyphae fragments compared with infection with 400 conidia. On the 10th day of co-culture, the histopathology feature exhibited the great difference between the two groups. The group infected by 400 conidia showed that the invasion limited to the cornified layer without penetrating through the stratum corneum to keratinocytes layer [Figure 2c and 2d], which is in accordance with the pathological characteristics of superficial dermatophytosis caused by T. rubrum\nin vivo. However, the group infected with 4000 conidia displayed enormous destruction–encroached almost the full epidermis and presented obvious necrosis of keratinocytes [Figure 3c and 3d]. Hence, the inoculum of 400 conidia was used to imitate host dermatophyte infection.\nThe H and E and periodic acid-Schiff staining of reconstructed human epidermis infected with Trichophyton rubrum conidia of 400. (a and b) Stained by H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained by H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infected by Trichophyton rubrum conidia, conidia and hyphae were found in the stratum corneum (black arrow).\nThe reconstructed human epidermis infected with Trichophyton rubrum conidia of 4000. (a and b) Stained by H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained by H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection for 4 days, abundance of conidia and hyphae was presented in the stratum corneum. At the 10th day of infection, the infection extended almost the full epidermis and the epidermis displayed enormous destruction.", "After 2 days of infection (400 conidia), SEM revealed the germination of the conidia, an early germ tube originated from the adhered conidia [Figure 4a], as well as the elongation of the germ tubes [Figure 4b]. With further inoculation (4 days), the distribution of hyphae was observed on the surface in netlike manner [Figure 4c]. We can also find the invasion of the hyphae extended horizontally and entered into the RHE [Figure 4d], which indicated that the infecting of the stratum corneum was achieved.\nScanning electron microscopy observation is the process of reconstructed human epidermis infected with Trichophyton rubrum conidia. 
(a and b) Revealing the germination of the conidia and elongation of the germ tubes 2 days after infection. (c and d) Showing the invasion of the hyphae spreading along the reconstructed human epidermis surface and penetrating through the outer surface layer after infection for 4 days. (a and b) Scale bar = 10 μm, (c) Scale bar = 50 μm, and (d) Scale bar = 5 μm." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Reconstructed human epidermis EpiSkin®", "Fungal strain and conidia collection", "Infection EpiSkin® with Trichophyton rubrum conidia", "Pathological Examination: Hematoxylin and eosin and periodic acid-Schiff staining", "Scanning electron microscopy", "RESULTS", "Structure of the reconstructed human epidermis was similar to human epidermis", "Reconstructed human epidermis infected with Trichophyton rubrum conidia", "Scanning electron microscopy showed the infection process of Trichophyton rubrum conidia in reconstructed human epidermis", "DISCUSSION" ]
[ "Trichophyton rubrum represents the most common infectious fungus responsible for dermatophytosis in human.[12] A typical characteristic of dermatophytes is keratinophilic and always invades keratinized structures such as the skin stratum corneum, hairs, and nails. The infection caused by T. rubrum, except for diffusing into the deeper parts of the body in some immunosuppressed patients,[34] is normally limited to the superficial skin and tends to be chronic and recurrent,[5] in which the pathogenic mechanism involved is still not completely understood. Various models have been explored in vitro and ex vivo to investigate the mechanism of dermatophyte infection such as the animal model,[6] stripped sheets of the stratum corneum,[78] nail plates, or monolayer cell culture model.[910] However, due to the anthropophilic feature of T. rubrum and the spontaneous healing of animals after the T. rubrum infection,[11] animal model is not the optimal choice. The stripped sheets of stratum corneum or nail plates from healthy volunteers could not response to the dermatophyte infection on account of missing living cells and no immune function, which is obviously different from the actual situation in vivo. Our previous studies have already demonstrated that human keratinocytes can recognize T. rubrum and induce native immune responses against T. rubrum infection.[121314] Nevertheless, different from the infection localized to the skin cornified layers, T. rubrum submerged under the culture medium and contacted with keratinocytes directly in the monolayer cell culture model, which led to the immunogenicity and pathogenicity of T. rubrum distinct from in vivo.[15] Taken together, all the above models have certain limitations to imitate the human T. rubrum infection, respectively.\nIn recent years, several reconstructed human skin models have been developed. These reconstructed human skin equivalents demonstrated with fully differentiated epidermis could closely resemble native human epidermis,[1617] therefore providing a morphologically relevant means to assess skin irritation and to research the skin-related disease in vitro. In this study, we intended to establish a T. rubrum infection model in vitro, based on the three-dimensional reconstructed human epidermis (RHE) - EpiSkin®. It will pave the way for further investigation of the mechanisms involved in T. rubrum infection in the future.", " Reconstructed human epidermis EpiSkin® Commercial epidermal tissues EpiSkin (L’Oreal Research and Innovation Center, Shanghai, China) is an in vitro RHE from normal human keratinocytes cultured on a collagen matrix at the air-liquid interface. This RHE is histologically similar to the in vivo human epidermis. Immediately upon arrival, the EpiSkin models were removed from the nutrient agar and transferred to 12-well plates filled with medium without antifungals provided by the manufacturer. After equilibration overnight in an incubator (37°C, 5.0% CO2), the medium was changed, and the tissues were used in the following experiments.\nCommercial epidermal tissues EpiSkin (L’Oreal Research and Innovation Center, Shanghai, China) is an in vitro RHE from normal human keratinocytes cultured on a collagen matrix at the air-liquid interface. This RHE is histologically similar to the in vivo human epidermis. Immediately upon arrival, the EpiSkin models were removed from the nutrient agar and transferred to 12-well plates filled with medium without antifungals provided by the manufacturer. 
After equilibration overnight in an incubator (37°C, 5.0% CO2), the medium was changed, and the tissues were used in the following experiments.\n Fungal strain and conidia collection: T. rubrum strain T1a, used in the present experiments, was obtained from the China Medical Microbiological Culture Collection Center and was confirmed as T. rubrum by morphological identification and by sequencing of the internal transcribed spacer regions and the D1–D2 domain of the large-subunit rRNA gene.\nT. rubrum was cultured for 2 weeks at 27°C on potato glucose agar (OXOID, UK) to produce conidia. A mixed suspension of conidia and hyphae fragments was obtained by covering the fungal colonies with sterile saline (0.85%) and gently rubbing the colonies with an inoculation loop. Next, the suspension was filtered through Whatman No. 1 filter paper (pore size, 11 μm; Whatman, UK) to remove the hyphae[18] and then centrifuged at 4000 r/min for 10 min to collect the conidia. The conidia were washed twice by agitation in sterile saline, and their concentration was adjusted with sterile saline to final concentrations of 8 × 10^4 colony-forming units (CFU)/ml and 8 × 10^5 CFU/ml by hemacytometer counts.\n Infection of EpiSkin® with Trichophyton rubrum conidia: For the pilot experiment, two concentrations of conidia suspension (8 × 10^4 CFU/ml and 8 × 10^5 CFU/ml) were used to infect the RHE. A 5 μl inoculum of T. rubrum conidia suspension (400 or 4000 conidia) was added to the center of the surface of each RHE cultured in 12-well plates. An identical volume of sterile saline was added as a negative control. The 12-well plates were then transferred to an incubator (37°C, 5.0% CO2), and the maintenance medium was changed every other day. 
The epidermis tissues were processed after infection for 2, 4, and 10 days.\n Pathological Examination: Hematoxylin and eosin and periodic acid-Schiff staining: After rinsing 3 times with 0.01 mol/L phosphate-buffered saline (PBS), the specimens were fixed overnight in 4% paraformaldehyde at 4°C. Afterward, the epidermis was cut from the insert with a surgical scalpel, immersed in 0.01 mol/L PBS for 5 min, and then dehydrated through graded ethanol, cleared in xylene, and infiltrated and embedded in paraffin. Sections 5 μm thick were cut from the embedded tissues and stained with hematoxylin and eosin (H and E) and periodic acid-Schiff (PAS) for light microscopic examination. Normal prepuce tissue harvested during clinical circumcision surgery was processed with H and E staining at the same time as a normal human skin control.\n Scanning electron microscopy: The specimens were washed 3 times with PBS and fixed with glutaraldehyde at 4°C. The samples were then dehydrated through an increasing ethanol gradient (50–100%), critical point dried with liquid CO2, and coated with gold. The interaction between T. rubrum and the RHE was then viewed under a FEI Quanta 200 scanning electron microscope (SEM, USA).", "The commercial epidermal tissue EpiSkin (L’Oreal Research and Innovation Center, Shanghai, China) is an in vitro RHE of normal human keratinocytes cultured on a collagen matrix at the air-liquid interface. This RHE is histologically similar to in vivo human epidermis. Immediately upon arrival, the EpiSkin models were removed from the nutrient agar and transferred to 12-well plates filled with antifungal-free medium provided by the manufacturer. After equilibration overnight in an incubator (37°C, 5.0% CO2), the medium was changed, and the tissues were used in the following experiments.", "T. rubrum strain T1a, used in the present experiments, was obtained from the China Medical Microbiological Culture Collection Center and was confirmed as T. rubrum by morphological identification and by sequencing of the internal transcribed spacer regions and the D1–D2 domain of the large-subunit rRNA gene.\nT. rubrum was cultured for 2 weeks at 27°C on potato glucose agar (OXOID, UK) to produce conidia. 
A mixed suspension of conidia and hyphae fragments was obtained by covering the fungal colonies with sterile saline (0.85%) and gently rubbing the colonies with an inoculation loop. Next, the suspension was filtered through Whatman No. 1 filter paper (pore size, 11 μm; Whatman, UK) to remove the hyphae[18] and then centrifuged at 4000 r/min for 10 min to collect the conidia. The conidia were washed twice by agitation in sterile saline, and their concentration was adjusted with sterile saline to final concentrations of 8 × 10^4 colony-forming units (CFU)/ml and 8 × 10^5 CFU/ml by hemacytometer counts.", "For the pilot experiment, two concentrations of conidia suspension (8 × 10^4 CFU/ml and 8 × 10^5 CFU/ml) were used to infect the RHE. A 5 μl inoculum of T. rubrum conidia suspension (400 or 4000 conidia) was added to the center of the surface of each RHE cultured in 12-well plates. An identical volume of sterile saline was added as a negative control. The 12-well plates were then transferred to an incubator (37°C, 5.0% CO2), and the maintenance medium was changed every other day. The epidermis tissues were processed after infection for 2, 4, and 10 days.", "After rinsing 3 times with 0.01 mol/L phosphate-buffered saline (PBS), the specimens were fixed overnight in 4% paraformaldehyde at 4°C. Afterward, the epidermis was cut from the insert with a surgical scalpel, immersed in 0.01 mol/L PBS for 5 min, and then dehydrated through graded ethanol, cleared in xylene, and infiltrated and embedded in paraffin. Sections 5 μm thick were cut from the embedded tissues and stained with hematoxylin and eosin (H and E) and periodic acid-Schiff (PAS) for light microscopic examination. Normal prepuce tissue harvested during clinical circumcision surgery was processed with H and E staining at the same time as a normal human skin control.", "The specimens were washed 3 times with PBS and fixed with glutaraldehyde at 4°C. The samples were then dehydrated through an increasing ethanol gradient (50–100%), critical point dried with liquid CO2, and coated with gold. The interaction between T. rubrum and the RHE was then viewed under a FEI Quanta 200 scanning electron microscope (SEM, USA).", " Structure of the reconstructed human epidermis was similar to human epidermis: The results of H and E staining [Figure 1a and 1c] showed that the reconstructed epidermis was composed of a stratum corneum, a keratinocyte layer, and an organic support layer, a structure similar to that of normal human epidermis from prepuce tissue [Figure 1b and 1d].\nMorphology of the reconstructed human epidermis EpiSkin®, compared with normal human skin derived from prepuce tissue, by H and E staining. 
(a and b) Original magnification, ×100; (c and d) original magnification, ×400.\n Reconstructed human epidermis infected with Trichophyton rubrum conidia: To better mimic host dermatophyte infection, we optimized the infection dose by inoculating a low dose (400 conidia) or a high dose (4000 conidia) onto the RHE. After 2 days of infection, H and E and PAS staining showed no conidia or hyphae in the horny layer in either group (data not shown). After T. rubrum had been inoculated for 4 days, conidia and hyphae fragments were found in the stratum corneum [Figures 2a, 2b and 3a, 3b]; moreover, infection with 4000 conidia produced more conidia and hyphae fragments than infection with 400 conidia. On the 10th day of co-culture, the histopathological features differed greatly between the two groups. In the group infected with 400 conidia, invasion was limited to the cornified layer and did not penetrate through the stratum corneum into the keratinocyte layer [Figure 2c and 2d], in accordance with the pathological characteristics of superficial dermatophytosis caused by T. rubrum in vivo. 
In contrast, the group infected with 4000 conidia displayed enormous destruction that encroached on almost the full epidermis, with obvious necrosis of keratinocytes [Figure 3c and 3d]. Hence, an inoculum of 400 conidia was used to imitate host dermatophyte infection.\nH and E and periodic acid-Schiff staining of reconstructed human epidermis infected with 400 Trichophyton rubrum conidia. (a and b) Stained with H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained with H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection with Trichophyton rubrum conidia, conidia and hyphae were found in the stratum corneum (black arrow).\nReconstructed human epidermis infected with 4000 Trichophyton rubrum conidia. (a and b) Stained with H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained with H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection for 4 days, abundant conidia and hyphae were present in the stratum corneum. By the 10th day of infection, the infection had extended through almost the full epidermis, which displayed enormous destruction.\n Scanning electron microscopy showed the infection process of Trichophyton rubrum conidia in reconstructed human epidermis: After 2 days of infection (400 conidia), SEM revealed germination of the conidia, with an early germ tube originating from adhered conidia [Figure 4a], as well as elongation of the germ tubes [Figure 4b]. With further incubation (4 days), hyphae were distributed over the surface in a netlike manner [Figure 4c]. The hyphae also extended horizontally and invaded the RHE [Figure 4d], indicating that infection of the stratum corneum had been achieved.\nScanning electron microscopy observation of the process of reconstructed human epidermis infection with Trichophyton rubrum conidia. (a and b) Germination of the conidia and elongation of the germ tubes 2 days after infection. (c and d) Invasion of the hyphae spreading along the reconstructed human epidermis surface and penetrating through the outer surface layer after 4 days of infection. 
(a and b) Scale bar = 10 μm; (c) scale bar = 50 μm; (d) scale bar = 5 μm.", "The results of H and E staining [Figure 1a and 1c] showed that the reconstructed epidermis was composed of a stratum corneum, a keratinocyte layer, and an organic support layer, a structure similar to that of normal human epidermis from prepuce tissue [Figure 1b and 1d].\nMorphology of the reconstructed human epidermis EpiSkin®, compared with normal human skin derived from prepuce tissue, by H and E staining. (a and b) Original magnification, ×100; (c and d) original magnification, ×400.", "To better mimic host dermatophyte infection, we optimized the infection dose by inoculating a low dose (400 conidia) or a high dose (4000 conidia) onto the RHE. After 2 days of infection, H and E and PAS staining showed no conidia or hyphae in the horny layer in either group (data not shown). After T. rubrum had been inoculated for 4 days, conidia and hyphae fragments were found in the stratum corneum [Figures 2a, 2b and 3a, 3b]; moreover, infection with 4000 conidia produced more conidia and hyphae fragments than infection with 400 conidia. On the 10th day of co-culture, the histopathological features differed greatly between the two groups. In the group infected with 400 conidia, invasion was limited to the cornified layer and did not penetrate through the stratum corneum into the keratinocyte layer [Figure 2c and 2d], in accordance with the pathological characteristics of superficial dermatophytosis caused by T. rubrum in vivo. In contrast, the group infected with 4000 conidia displayed enormous destruction that encroached on almost the full epidermis, with obvious necrosis of keratinocytes [Figure 3c and 3d]. Hence, an inoculum of 400 conidia was used to imitate host dermatophyte infection.\nH and E and periodic acid-Schiff staining of reconstructed human epidermis infected with 400 Trichophyton rubrum conidia. (a and b) Stained with H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained with H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection with Trichophyton rubrum conidia, conidia and hyphae were found in the stratum corneum (black arrow).\nReconstructed human epidermis infected with 4000 Trichophyton rubrum conidia. (a and b) Stained with H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained with H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection for 4 days, abundant conidia and hyphae were present in the stratum corneum. By the 10th day of infection, the infection had extended through almost the full epidermis, which displayed enormous destruction.", "After 2 days of infection (400 conidia), SEM revealed germination of the conidia, with an early germ tube originating from adhered conidia [Figure 4a], as well as elongation of the germ tubes [Figure 4b]. With further incubation (4 days), hyphae were distributed over the surface in a netlike manner [Figure 4c]. The hyphae also extended horizontally and invaded the RHE [Figure 4d], indicating that infection of the stratum corneum had been achieved.\nScanning electron microscopy observation of the process of reconstructed human epidermis infection with Trichophyton rubrum conidia. 
(a and b) Germination of the conidia and elongation of the germ tubes 2 days after infection. (c and d) Invasion of the hyphae spreading along the reconstructed human epidermis surface and penetrating through the outer surface layer after 4 days of infection. (a and b) Scale bar = 10 μm; (c) scale bar = 50 μm; (d) scale bar = 5 μm.", "An appropriate model to simulate host dermatophyte infection is a prerequisite for studying the pathogenesis of dermatophytosis. Because of the defects described earlier in this study, the models made in vitro and ex vivo remain some distance from the actual situation of dermatophyte infection in vivo. Recently, reconstructed human skin has developed quickly as an alternative and has gradually been applied to the study of skin-related diseases. In 1995, Rashid et al. were the first to use a living skin equivalent to study the effects of antifungal drugs on a dermatophyte (Trichophyton mentagrophytes).[19] Liu's laboratory also constructed fungus-infected (including T. mentagrophytes and T. rubrum) tissue-engineered skin in vitro.[20] However, both of these models presented obvious necrosis of keratinocytes, and the dermatophytes not only invaded the stratum corneum but also penetrated the full-thickness epidermis to the dermal component in a short time. This conflicts with the actual situation in vivo, in which dermatophytes are restricted to the outer stratum corneum of the epidermis. Recently, Achterman et al. used EpiDerm reconstructed epidermis to mimic human dermatophytosis caused by various dermatophytes (including T. rubrum);[21] in their research, the release of cytoplasmic lactate dehydrogenase was used to indicate the progression of infection; however, this method could not verify whether the infection was limited to the stratum corneum and thus could not confirm that their results accorded with the real situation in vivo. In this study, we adopted the commercial RHE EpiSkin® as a platform and successfully established a T. rubrum infection epidermis model in vitro. The morphological structure of the RHE confirmed that it was analogous to normal human skin from prepuce tissue.\nPreliminary experiments were designed to determine the optimal inoculum size so as to better resemble the actual situation of host T. rubrum dermatophytosis. H and E and PAS staining showed that, on both the 4th and the 10th day after infection with 400 conidia, T. rubrum remained limited to the stratum corneum, in accord with the real situation of T. rubrum dermatophytosis, which typically invades only the cornified layer. Moreover, SEM observations were used to further exhibit the process of adhesion to and invasion of the stratum corneum in a more intuitive way after infection with 400 conidia. During the infection, T. rubrum conidia first adhered to the RHE surface, then germinated, and then invaded the keratinized structure, consistent with the reported general process of dermatophyte infection of the skin.[2223] Therefore, this model more closely resembled in vivo T. rubrum infection. Taken together, an inoculum of 400 conidia can be used to mimic host T. rubrum infection; for at least 10 days after infection, the pathological features conformed to those of in vivo infection, during which time this model can be used to study the mechanisms involved in T. rubrum infection.\nIn summary, we successfully established the T. rubrum infection reconstructed epidermis model in vitro. 
Nevertheless, the model has a few unavoidable limitations. First, compared with normal human epidermis, the RHE lacks skin appendages such as hair follicles and sweat glands, as well as cutaneous microflora, Langerhans cells, and Merkel cells. Second, the maintenance time of the RHE cultured in vitro is limited. Third, compared with natural human dermatophyte infection, 400 conidia still represent a larger exposure than humans encounter in daily life. Considering the limited culture time of the RHE, however, a larger inoculum also accelerates the course of infection, which facilitates infection research in the laboratory. Although the reconstructed epidermis cannot behave exactly like native human epidermis, the RHE has many advantages. First, the RHE avoids the ethical risk of acquiring tissue from dermatophytosis patients, since pathological examination is not needed in the routine diagnosis of superficial fungal infection. Moreover, the RHE is a promising alternative to animal models, in accordance with animal welfare principles. Second, compared with animal models, the RHE avoids the outcome bias caused by the species gap, given the anthropophilic nature of T. rubrum. Third, the conditions for preparing the RHE are stable and standardized, which enhances the repeatability of experiments. In short, the RHE, composed of fully differentiated keratinocytes with a functional stratum corneum, is a promising in vitro model for mimicking host dermatophyte infection and for studying the mechanisms involved in natural human T. rubrum infection." ]
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "EpiSkin", "Infection Model", "\nTrichophyton Rubrum\n" ]
INTRODUCTION: Trichophyton rubrum represents the most common infectious fungus responsible for dermatophytosis in humans.[12] A typical characteristic of dermatophytes is that they are keratinophilic, preferentially invading keratinized structures such as the skin stratum corneum, hair, and nails. Except when it disseminates into deeper parts of the body in some immunosuppressed patients,[34] infection caused by T. rubrum is normally limited to the superficial skin and tends to be chronic and recurrent,[5] and the pathogenic mechanism involved is still not completely understood. Various models have been explored in vitro and ex vivo to investigate the mechanism of dermatophyte infection, such as animal models,[6] stripped sheets of the stratum corneum,[78] nail plates, and monolayer cell culture models.[910] However, because of the anthropophilic nature of T. rubrum and the spontaneous healing of animals after T. rubrum infection,[11] animal models are not the optimal choice. Stripped sheets of stratum corneum or nail plates from healthy volunteers cannot respond to dermatophyte infection because they lack living cells and immune function, which differs markedly from the actual situation in vivo. Our previous studies have already demonstrated that human keratinocytes can recognize T. rubrum and mount innate immune responses against T. rubrum infection.[121314] Nevertheless, unlike the natural infection, which is localized to the cornified layers of the skin, in the monolayer cell culture model T. rubrum is submerged under the culture medium and contacts keratinocytes directly, giving it immunogenicity and pathogenicity distinct from those in vivo.[15] Taken together, all of the above models have limitations in imitating human T. rubrum infection. In recent years, several reconstructed human skin models have been developed. These reconstructed human skin equivalents, with their fully differentiated epidermis, closely resemble native human epidermis,[1617] thereby providing a morphologically relevant means to assess skin irritation and to study skin-related diseases in vitro. In this study, we intended to establish a T. rubrum infection model in vitro based on the three-dimensional reconstructed human epidermis (RHE) EpiSkin®. It will pave the way for further investigation of the mechanisms involved in T. rubrum infection. METHODS: Reconstructed human epidermis EpiSkin®: The commercial epidermal tissue EpiSkin (L’Oreal Research and Innovation Center, Shanghai, China) is an in vitro RHE of normal human keratinocytes cultured on a collagen matrix at the air-liquid interface. This RHE is histologically similar to in vivo human epidermis. Immediately upon arrival, the EpiSkin models were removed from the nutrient agar and transferred to 12-well plates filled with antifungal-free medium provided by the manufacturer. 
After equilibration overnight in an incubator (37°C, 5.0% CO2), the medium was changed, and the tissues were used in the following experiments. Fungal strain and conidia collection: T. rubrum strain T1a, used in the present experiments, was obtained from the China Medical Microbiological Culture Collection Center and was confirmed as T. rubrum by morphological identification and by sequencing of the internal transcribed spacer regions and the D1–D2 domain of the large-subunit rRNA gene. T. rubrum was cultured for 2 weeks at 27°C on potato glucose agar (OXOID, UK) to produce conidia. A mixed suspension of conidia and hyphae fragments was obtained by covering the fungal colonies with sterile saline (0.85%) and gently rubbing the colonies with an inoculation loop. Next, the suspension was filtered through Whatman No. 1 filter paper (pore size, 11 μm; Whatman, UK) to remove the hyphae[18] and then centrifuged at 4000 r/min for 10 min to collect the conidia. The conidia were washed twice by agitation in sterile saline, and their concentration was adjusted with sterile saline to final concentrations of 8 × 10^4 colony-forming units (CFU)/ml and 8 × 10^5 CFU/ml by hemacytometer counts. Infection of EpiSkin® with Trichophyton rubrum conidia: For the pilot experiment, two concentrations of conidia suspension (8 × 10^4 CFU/ml and 8 × 10^5 CFU/ml) were used to infect the RHE. A 5 μl inoculum of T. rubrum conidia suspension (400 or 4000 conidia) was added to the center of the surface of each RHE cultured in 12-well plates. An identical volume of sterile saline was added as a negative control. The 12-well plates were then transferred to an incubator (37°C, 5.0% CO2), and the maintenance medium was changed every other day. The epidermis tissues were processed after infection for 2, 4, and 10 days. 
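The inoculum sizes follow directly from the suspension concentrations and the 5 μl drop volume. A minimal sketch of that arithmetic, plus the standard C1V1 = C2V2 dilution step used when adjusting a hemacytometer-counted stock (illustrative code only; the concentrations and volume are those stated above, the function names are not from the paper):
```python
# Inoculum arithmetic: a 5 ul drop delivers concentration (CFU/ml) x volume (ml) conidia.

def conidia_per_inoculum(concentration_cfu_per_ml: float, volume_ul: float) -> float:
    """Number of conidia delivered by an inoculum of the given volume."""
    return concentration_cfu_per_ml * (volume_ul / 1000.0)  # 1000 ul = 1 ml

def stock_volume_for_dilution(stock_cfu_per_ml: float,
                              target_cfu_per_ml: float,
                              final_volume_ml: float) -> float:
    """C1*V1 = C2*V2: volume of stock (ml) to dilute with saline to reach the target."""
    return target_cfu_per_ml * final_volume_ml / stock_cfu_per_ml

for conc in (8e4, 8e5):
    print(f"{conc:.0e} CFU/ml x 5 ul -> {conidia_per_inoculum(conc, 5):.0f} conidia")
# 8e+04 CFU/ml x 5 ul -> 400 conidia (low-dose group)
# 8e+05 CFU/ml x 5 ul -> 4000 conidia (high-dose group)
```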
Pathological Examination: Hematoxylin and eosin and periodic acid-Schiff staining: After rinsing 3 times with 0.01 mol/L phosphate-buffered saline (PBS), the specimens were fixed overnight in 4% paraformaldehyde at 4°C. Afterward, the epidermis was cut from the insert with a surgical scalpel, immersed in 0.01 mol/L PBS for 5 min, and then dehydrated through graded ethanol, cleared in xylene, and infiltrated and embedded in paraffin. Sections 5 μm thick were cut from the embedded tissues and stained with hematoxylin and eosin (H and E) and periodic acid-Schiff (PAS) for light microscopic examination. Normal prepuce tissue harvested during clinical circumcision surgery was processed with H and E staining at the same time as a normal human skin control. Scanning electron microscopy: The specimens were washed 3 times with PBS and fixed with glutaraldehyde at 4°C. The samples were then dehydrated through an increasing ethanol gradient (50–100%), critical point dried with liquid CO2, and coated with gold. The interaction between T. rubrum and the RHE was then viewed under a FEI Quanta 200 scanning electron microscope (SEM, USA). 
RESULTS: Structure of the reconstructed human epidermis was similar to human epidermis: The results of H and E staining [Figure 1a and 1c] showed that the reconstructed epidermis was composed of a stratum corneum, a keratinocyte layer, and an organic support layer, a structure similar to that of normal human epidermis from prepuce tissue [Figure 1b and 1d]. Morphology of the reconstructed human epidermis EpiSkin®, compared with normal human skin derived from prepuce tissue, by H and E staining. 
(a and b) Original magnification, ×100; (c and d) original magnification, ×400. Reconstructed human epidermis infected with Trichophyton rubrum conidia: To better mimic host dermatophyte infection, we optimized the infection dose by inoculating a low dose (400 conidia) or a high dose (4000 conidia) onto the RHE. After 2 days of infection, H and E and PAS staining showed no conidia or hyphae in the horny layer in either group (data not shown). After T. rubrum had been inoculated for 4 days, conidia and hyphae fragments were found in the stratum corneum [Figures 2a, 2b and 3a, 3b]; moreover, infection with 4000 conidia produced more conidia and hyphae fragments than infection with 400 conidia. On the 10th day of co-culture, the histopathological features differed greatly between the two groups. In the group infected with 400 conidia, invasion was limited to the cornified layer and did not penetrate through the stratum corneum into the keratinocyte layer [Figure 2c and 2d], in accordance with the pathological characteristics of superficial dermatophytosis caused by T. rubrum in vivo. 
In contrast, the group infected with 4000 conidia displayed enormous destruction that encroached on almost the full epidermis, with obvious necrosis of keratinocytes [Figure 3c and 3d]. Hence, an inoculum of 400 conidia was used to imitate host dermatophyte infection. H and E and periodic acid-Schiff staining of reconstructed human epidermis infected with 400 Trichophyton rubrum conidia. (a and b) Stained with H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained with H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection with Trichophyton rubrum conidia, conidia and hyphae were found in the stratum corneum (black arrow). Reconstructed human epidermis infected with 4000 Trichophyton rubrum conidia. (a and b) Stained with H and E and periodic acid-Schiff after 4 days of infection (original magnification, ×400). (c and d) Stained with H and E and periodic acid-Schiff after 10 days of infection (original magnification, ×400). After infection for 4 days, abundant conidia and hyphae were present in the stratum corneum. By the 10th day of infection, the infection had extended through almost the full epidermis, which displayed enormous destruction. Scanning electron microscopy showed the infection process of Trichophyton rubrum conidia in reconstructed human epidermis: After 2 days of infection (400 conidia), SEM revealed germination of the conidia, with an early germ tube originating from adhered conidia [Figure 4a], as well as elongation of the germ tubes [Figure 4b]. With further incubation (4 days), hyphae were distributed over the surface in a netlike manner [Figure 4c]. The hyphae also extended horizontally and invaded the RHE [Figure 4d], indicating that infection of the stratum corneum had been achieved. Scanning electron microscopy observation of the process of reconstructed human epidermis infection with Trichophyton rubrum conidia. (a and b) Germination of the conidia and elongation of the germ tubes 2 days after infection. (c and d) Invasion of the hyphae spreading along the reconstructed human epidermis surface and penetrating through the outer surface layer after 4 days of infection. (a and b) Scale bar = 10 μm; (c) scale bar = 50 μm; (d) scale bar = 5 μm. 
DISCUSSION: An appropriate model to simulate host dermatophyte infection is a prerequisite for studying the pathogenesis of dermatophytosis. Because of the defects described earlier in this study, the models made in vitro and ex vivo remain some distance from the actual situation of dermatophyte infection in vivo. Recently, reconstructed human skin has developed quickly as an alternative and has gradually been applied to the study of skin-related diseases. In 1995, Rashid et al. were the first to use a living skin equivalent to study the effects of antifungal drugs on a dermatophyte (Trichophyton mentagrophytes).[19] Liu's laboratory also constructed fungus-infected (including T. mentagrophytes and T. rubrum) tissue-engineered skin in vitro.[20] However, both of these models presented obvious necrosis of keratinocytes, and the dermatophytes not only invaded the stratum corneum but also penetrated the full-thickness epidermis to the dermal component in a short time. This conflicts with the actual situation in vivo, in which dermatophytes are restricted to the outer stratum corneum of the epidermis. Recently, Achterman et al. used EpiDerm reconstructed epidermis to mimic human dermatophytosis caused by various dermatophytes (including T. rubrum);[21] in their research, the release of cytoplasmic lactate dehydrogenase was used to indicate the progression of infection; however, this method could not verify whether the infection was limited to the stratum corneum and thus could not confirm that their results accorded with the real situation in vivo. In this study, we adopted the commercial RHE EpiSkin® as a platform and successfully established a T. rubrum infection epidermis model in vitro. The morphological structure of the RHE confirmed that it was analogous to normal human skin from prepuce tissue. Preliminary experiments were designed to determine the optimal inoculum size so as to better resemble the actual situation of host T. rubrum dermatophytosis. H and E and PAS staining showed that, on both the 4th and the 10th day after infection with 400 conidia, T. rubrum remained limited to the stratum corneum, in accord with the real situation of T. rubrum dermatophytosis, which typically invades only the cornified layer. Moreover, SEM observations were used to further exhibit the process of adhesion to and invasion of the stratum corneum in a more intuitive way after infection with 400 conidia. During the infection, T. rubrum conidia first adhered to the RHE surface, then germinated, and then invaded the keratinized structure, consistent with the reported general process of dermatophyte infection of the skin.[2223] Therefore, this model more closely resembled in vivo T. rubrum infection. Taken together, an inoculum of 400 conidia can be used to mimic host T. rubrum infection; for at least 10 days after infection, the pathological features conformed to those of in vivo infection, during which time this model can be used to study the mechanisms involved in T. rubrum infection. 
rubrum infection. In summary, we successfully established a T. rubrum infection reconstructed epidermis model in vitro. Nevertheless, the model has a few unavoidable limitations. First, compared with normal human epidermis, the RHE lacks skin appendages such as hair follicles and sweat glands, lacks cutaneous microflora, and contains no Langerhans or Merkel cells. Second, the time for which the RHE can be maintained in culture in vitro is limited. Third, compared with natural human dermatophyte infection, 400 conidia still represent a greater exposure than humans encounter in daily life. Considering the time limitation of the RHE, a larger inoculum also accelerates the course of infection, which facilitates infection research in the laboratory. Although the reconstructed epidermis cannot behave exactly the same as native human epidermis, the RHE displays many advantages. First, the RHE avoids the ethical risk of acquiring tissues from dermatophytosis patients, since pathological examination is not needed in the routine diagnosis of superficial fungal infection. Moreover, the RHE is a promising animal alternative model, in accordance with animal welfare principles. Second, compared with animal models, the RHE avoids the outcome bias caused by the species gap, given the anthropophilic nature of T. rubrum. Third, the conditions for preparing the RHE are stable and standardized, which improves the repeatability of experiments. In short, the RHE, composed of fully differentiated keratinocytes with a functional stratum corneum, is a promising model for mimicking host dermatophyte infection compared with other in vitro models and for researching the mechanisms involved in natural human T. rubrum infections.
Background: Trichophyton rubrum represents the most common infectious fungus responsible for dermatophytosis in humans, but the mechanism involved is still not completely understood. An appropriate model constructed to simulate host infection is a prerequisite for studying the pathogenesis of dermatophytosis caused by T. rubrum. In this study, we intended to develop a new T. rubrum infection model in vitro, using the three-dimensional reconstructed epidermis EpiSkin®, and to pave the way for further investigation of the mechanisms involved in T. rubrum infection. Methods: The reconstructed human epidermis (RHE) was infected by inoculating low-dose (400 conidia) and high-dose (4000 conidia) T. rubrum conidia to optimize the infection dose. At various times after infection, samples were processed for pathological examination and scanning electron microscopy (SEM) observation. Results: Histological analysis of the RHE revealed a fully differentiated epidermis with a functional stratum corneum, analogous to normal human epidermis. Hematoxylin and eosin staining and periodic acid-Schiff staining showed that an infection dose of 400 conidia reproduced the pathological characteristics of host dermatophytosis caused by T. rubrum. SEM observations further exhibited the process of T. rubrum infection in an intuitive way. Conclusions: We successfully established the T. rubrum infection model on RHE in vitro. It is a promising model for further investigation of the mechanisms involved in T. rubrum infection.
null
null
5,579
271
[ 109, 196, 126, 142, 78, 98, 436, 205 ]
12
[ "conidia", "infection", "rubrum", "epidermis", "human", "days", "400", "rhe", "reconstructed", "human epidermis" ]
[ "necrosis keratinocytes dermatophytes", "human dermatophyte infection", "fungus responsible dermatophytosis", "situation rubrum dermatophytosis", "rubrum dermatophytosis invades" ]
null
null
[CONTENT] EpiSkin | Infection Model | Trichophyton Rubrum [SUMMARY]
[CONTENT] EpiSkin | Infection Model | Trichophyton Rubrum [SUMMARY]
[CONTENT] EpiSkin | Infection Model | Trichophyton Rubrum [SUMMARY]
null
[CONTENT] EpiSkin | Infection Model | Trichophyton Rubrum [SUMMARY]
null
[CONTENT] Animals | Disease Models, Animal | Epidermis | Humans | Keratinocytes | Tissue Culture Techniques | Trichophyton [SUMMARY]
[CONTENT] Animals | Disease Models, Animal | Epidermis | Humans | Keratinocytes | Tissue Culture Techniques | Trichophyton [SUMMARY]
[CONTENT] Animals | Disease Models, Animal | Epidermis | Humans | Keratinocytes | Tissue Culture Techniques | Trichophyton [SUMMARY]
null
[CONTENT] Animals | Disease Models, Animal | Epidermis | Humans | Keratinocytes | Tissue Culture Techniques | Trichophyton [SUMMARY]
null
[CONTENT] necrosis keratinocytes dermatophytes | human dermatophyte infection | fungus responsible dermatophytosis | situation rubrum dermatophytosis | rubrum dermatophytosis invades [SUMMARY]
[CONTENT] necrosis keratinocytes dermatophytes | human dermatophyte infection | fungus responsible dermatophytosis | situation rubrum dermatophytosis | rubrum dermatophytosis invades [SUMMARY]
[CONTENT] necrosis keratinocytes dermatophytes | human dermatophyte infection | fungus responsible dermatophytosis | situation rubrum dermatophytosis | rubrum dermatophytosis invades [SUMMARY]
null
[CONTENT] necrosis keratinocytes dermatophytes | human dermatophyte infection | fungus responsible dermatophytosis | situation rubrum dermatophytosis | rubrum dermatophytosis invades [SUMMARY]
null
[CONTENT] conidia | infection | rubrum | epidermis | human | days | 400 | rhe | reconstructed | human epidermis [SUMMARY]
[CONTENT] conidia | infection | rubrum | epidermis | human | days | 400 | rhe | reconstructed | human epidermis [SUMMARY]
[CONTENT] conidia | infection | rubrum | epidermis | human | days | 400 | rhe | reconstructed | human epidermis [SUMMARY]
null
[CONTENT] conidia | infection | rubrum | epidermis | human | days | 400 | rhe | reconstructed | human epidermis [SUMMARY]
null
[CONTENT] rubrum | infection | rubrum infection | skin | model | human | models | vitro | sheets | monolayer cell [SUMMARY]
[CONTENT] conidia | tissues | saline | cfu | sterile | sterile saline | ml | cfu ml | rubrum | pbs [SUMMARY]
[CONTENT] conidia | infection | days | 400 | figure | days infection | hyphae | epidermis | magnification | original [SUMMARY]
null
[CONTENT] conidia | infection | rubrum | human | days | epidermis | 400 | rhe | figure | hyphae [SUMMARY]
null
[CONTENT] Trichophyton ||| T. ||| T. | three | T. [SUMMARY]
[CONTENT] RHE | 400 conidia ||| ||| [SUMMARY]
[CONTENT] RHE ||| hematoxylin | 400 | T. ||| T. [SUMMARY]
null
[CONTENT] Trichophyton ||| T. ||| T. | three | T. ||| RHE | 400 conidia ||| ||| ||| ||| RHE ||| hematoxylin | 400 | T. ||| T. ||| T. | RHE ||| T. [SUMMARY]
null
Association between sleep duration and incidence of type 2 diabetes in China: the REACTION study.
35568995
Inadequate sleep duration is associated with a higher risk of type 2 diabetes, and the relationship is nonlinear. We aimed to assess the curvilinear relationship between nighttime sleep duration and the incidence of type 2 diabetes in China.
BACKGROUNDS
A cohort of 11,539 participants from the REACTION study without diabetes at baseline (2011) were followed until 2014 for the development of type 2 diabetes. The average number of hours of sleep per night was grouped. Incidence rates and odds ratios (ORs) were calculated for the development of diabetes in each sleep duration category.
METHODS
Compared to people who sleep for 7 to 8 h/night, people with longer sleep duration (≥9 h/night) had a greater risk of type 2 diabetes (OR: 1.27; 95% CI: 1.01-1.61), while shorter sleep duration (<6 h/night) was not significantly associated with the risk of type 2 diabetes. When the dataset was stratified based on selected covariates, the association between type 2 diabetes and long sleep duration became more evident among individuals <65 years of age, male, body mass index <24 kg/m2, or with hypertension or hyperlipidemia; no interaction effects were observed. Furthermore, compared to people persistently sleeping 7 to 9 h/night, those who persistently slept ≥9 h/night had a higher risk of type 2 diabetes. The optimal sleep duration was 6.3 to 7.5 h/night.
RESULTS
Short or long sleep duration was associated with a higher risk of type 2 diabetes. Persistently long sleep duration increased the risk.
CONCLUSIONS
[ "China", "Diabetes Mellitus, Type 2", "Humans", "Incidence", "Male", "Risk Factors", "Sleep", "Sleep Deprivation" ]
9337253
Introduction
Type 2 diabetes is a critical public health challenge worldwide. Patients with type 2 diabetes are at increased risk for premature mortality and hospitalization due to complications.[1] Given the global burden of type 2 diabetes, understanding the impacts of modifiable risk factors is of great importance.[2,3] Sleep is essential to the health of patients with type 2 diabetes.[4] Although humans spend about a third of their time sleeping, they may not appreciate its importance. Adequate high-quality sleep is vital to maintain the normal physiological state of the body.[5] Insufficient sleep is a health problem,[5] and long sleep duration is associated with increased body mass index (BMI),[6] impaired glucose tolerance,[7] and increased probability of developing type 2 diabetes.[7] Although lifestyle changes, such as increasing physical activity and weight loss, are of great importance to the management of this disease, understanding the link between type 2 diabetes and sleep duration may help to reduce its incidence. Short sleepers and long sleepers show an increased incidence of type 2 diabetes.[8] In addition, a study in Japan found a J-shaped relationship between sleep time and HbA1c level.[9] Therefore, this retrospective cohort study assessed the associations of both nighttime sleep and daytime napping with the risk of type 2 diabetes. We used data from the Risk Evaluation of cAncers in Chinese diabeTic Individuals: a lONgitudinal (REACTION) study cohort, which covered a 4-year period. To our knowledge, no such study has yet explored this relationship in Chinese people at risk for type 2 diabetes.
Methods
Ethic approval All participants provided written informed consent and all protocols were approved by the Ethical Committee of Rui-jin Hospital, Shanghai Jiao Tong University School of Medicine, which is in charge of the REACTION study (No. 2011-14). Study subjects We used data from the REACTION cohort study, which investigated the association between type 2 diabetes and pre-type 2 diabetes and the risk of cancer in the Chinese population.[10] All subjects live in Laoshan, Jingding, and Gucheng communities of Beijing (China) or in Ningde and Wuyishan, Fujian Province (China). They were invited to complete baseline questionnaires and medical examinations in 2011. During the first follow-up survey in 2015, the same community was investigated, and the study had a total size of 14,429 participants [Figure 1]. Flowchart of participants from the Risk Evaluation of cAncers in Chinese diabetic Individuals: a lONgitudinal (REACTION) study. In all, 1754 subjects were diagnosed with type 2 diabetes in 2011, 51 subjects had missing information, and 1085 subjects had a history of cancer or related diseases or were pregnant. The remaining 11,539 subjects (4043 men, 7496 women) were enrolled in the present study.
Assessment of covariates Data, such as age, sex, smoking status, and drinking status, were collected during the baseline investigation by trained doctors using a detailed questionnaire. BMI was calculated as weight in kilograms divided by height in meters squared. Smoking and alcohol consumption were classified into three levels: current, occasional, and never. Subjects who smoked at least one cigarette/day for more than half a year were defined as current smokers.[11] Subjects who drank at least once a week for more than half a year were defined as current drinkers.[12] The first level was regarded as a positive response. Nighttime and midday sleep time and sleep quality data were obtained through a self-administered questionnaire.[10] Sleep duration was calculated from bedtime to waking time and was categorized into five groups: <6 h, 6 to <7 h, 7 to <8 h, 8 to <9 h, or ≥9 h. Midday napping was divided into groups of no napping (0 min), 1 to 29 min, 30 to 59 min, 60 to 89 min, and ≥90 min. Sleep quality was divided into three groups: good, fair, and poor, with frequent use of hypnotics included in the fair group. Hypertension was defined as self-reported physician-diagnosed hypertension, current use of antihypertensive medications, or SBP ≥130 mm Hg/DBP ≥80 mm Hg by the 2017 American College of Cardiology/American Heart Association guidelines.[13] Diabetes was defined as self-reported physician-diagnosed diabetes, fasting glucose level ≥7.0 mmol/L, or current use of antidiabetic medications. Hyperlipidemia was defined as a history of physician-diagnosed hyperlipidemia, total cholesterol ≥6.22 mmol/L, triglycerides >2.26 mmol/L, high-density lipoprotein cholesterol <1.04 mmol/L, low-density lipoprotein cholesterol ≥4.14 mmol/L, or current use of lipid-lowering medications.
Statistical analysis We performed all analyses with Stata statistical software, v. 14.2 (Stata Corp., College Station, TX, USA). Data were compared using analysis of variance (ANOVA) for continuous variables and χ2 analysis for categorical variables. Continuous variables are expressed as mean ± standard deviation, and categorical variables are expressed as n (%). Binary logistic regression was constructed to assess the ORs and 95% confidence intervals (CIs) of sleep duration and type 2 diabetes using sleep duration of 7 to 8 h/night as the reference group, as previous studies have suggested that sleeping for 7 to 8 h is optimal.[14,15] Potential covariates included in the multivariate-adjusted model were age, gender, BMI, smoking and drinking status, hypertension, and hyperlipidemia. Considering that type 2 diabetes risk might follow nonlinear trends, we used a restricted cubic spline model with three knots at the 25th, 50th, and 75th percentiles of sleep duration and 7 h/night as the reference group.[16] Subjects were divided into several groups by age (<65 years, ≥65 years), sex (male, female), BMI (<24 kg/m2, ≥24 kg/m2), hypertension (yes, no), and hyperlipidemia (yes, no). In addition, potential interactions were tested using interaction terms of these covariates with sleep duration. We evaluated the association between changes in sleep duration and incidence of type 2 diabetes. This association was examined using crude and multivariate-adjusted models, with subjects persistently sleeping 7 to 9 h/night in both surveys as the reference group. We further evaluated the joint effects of sleep duration and midday napping and of sleep duration and sleep quality on the risk of developing diabetes, using moderate sleep duration (7–8 h/night) with midday napping (1–29 min) and moderate sleep duration (7–8 h/night) with good sleep quality as the reference groups.
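To make the coding and modeling steps above concrete, the following is a minimal sketch in Python. Every identifier here is a hypothetical placeholder (a DataFrame df with columns such as t2dm, sleep_h, weight_kg, height_m, age, sex, smoking, drinking, hypertension, hyperlipidemia); the actual REACTION variable names are not given in this section, and statsmodels/patsy stand in for the Stata commands the authors used.

```python
# Sketch only: hypothetical column names, not the REACTION dataset schema.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def code_covariates(df: pd.DataFrame) -> pd.DataFrame:
    # BMI = weight (kg) / height (m)^2, as defined above
    df["bmi"] = df["weight_kg"] / df["height_m"] ** 2
    # Nightly sleep categories; reorder so 7 to <8 h is the reference level
    df["sleep_cat"] = pd.cut(
        df["sleep_h"],
        bins=[0, 6, 7, 8, 9, np.inf],
        right=False,
        labels=["<6", "6-7", "7-8", "8-9", ">=9"],
    ).cat.reorder_categories(["7-8", "<6", "6-7", "8-9", ">=9"])
    return df

df = code_covariates(df)

# Multivariate-adjusted binary logistic regression; exp(beta) gives the OR
fit = smf.logit(
    "t2dm ~ C(sleep_cat) + age + C(sex) + bmi + C(smoking)"
    " + C(drinking) + C(hypertension) + C(hyperlipidemia)",
    data=df,
).fit()
odds_ratios = np.exp(fit.params)
ci_95 = np.exp(fit.conf_int())  # 95% CIs on the odds-ratio scale

# Nonlinear trend: natural (restricted) cubic spline with knots at the
# 25th/50th/75th percentiles of sleep duration
knots = df["sleep_h"].quantile([0.25, 0.50, 0.75]).tolist()
spline_fit = smf.logit(
    "t2dm ~ cr(sleep_h, knots=knots) + age + C(sex) + bmi"
    " + C(smoking) + C(drinking) + C(hypertension) + C(hyperlipidemia)",
    data=df,
).fit()
```

Exponentiating each C(sleep_cat) coefficient reproduces ORs of the form reported in the Results (e.g., the ≥9 h group relative to 7 to 8 h), and plotting the spline fit against sleep duration yields the J-shaped curve described there.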
Results
Among 11,539 subjects, 13.11% (n = 1513) reported sleeping ≥9 h/night and 5.18% (n = 598) reported midday napping ≥90 min. Compared to subjects reporting 7 to 8 h/night of sleep, those reporting sleep duration ≥9 h/night were more likely to be female, current smokers, and current drinkers. Meanwhile, compared to subjects with 1 to 29 min of midday napping, those reporting midday napping ≥90 min were more likely to be male, to have hypertension and hyperlipidemia, and to be current smokers and current drinkers (P < 0.05). In addition, compared to the reference groups, participants reporting ≥9 h/night of sleep were less likely to have hypertension and hyperlipidemia [Table 1]. General characteristics of the study participants according to sleep duration and midday napping (N = 11,539) Data are presented as mean ± SD for continuous variables. BMI: Body mass index; SD: Standard deviation. During the follow-up investigation, we documented 694 type 2 diabetes patients. Compared to people who slept for 7 to 8 h/night, the ORs (95% CIs) of type 2 diabetes were 1.21 (0.85–1.72) for <6 h/night, 0.91 (0.70–1.17) for 6 to 7 h/night, 1.03 (0.85–1.24) for 8 to 9 h/night, and 1.27 (1.01–1.61) for ≥9 h/night (P = 0.040), respectively. After adjusting for age (continuous), sex, BMI (continuous), smoking status (yes or no), drinking status (yes or no), hypertension (yes or no), and hyperlipidemia (yes or no), similar associations were observed for those having ≥9 h/night of sleep (P = 0.02) [Table 2]. We further explored the interaction of sleep duration with midday napping or sleep quality on the risk of type 2 diabetes and found no significant interaction. Compared to participants napping for 1 to 29 min, the ORs (95% CIs) of type 2 diabetes were 1.30 (0.53–3.20) for those reporting no midday napping, 1.04 (0.38–2.83) for 30 to 59 min, 1.02 (0.39–2.65) for 60 to 89 min, and 1.27 (0.48–3.31) for midday napping ≥90 min. In addition, no significant association was observed between these groups after adjusting for age, sex, BMI, smoking status, drinking status, hypertension, and hyperlipidemia. Restricted cubic spline regression analysis showed a J-shaped curve and confirmed that people who slept ≥9 h/night had a higher risk of type 2 diabetes [Figure 2]. The optimal nighttime sleep duration was 7.2 to 7.5 h, and it was 6.3 to 7.5 h after adjusting for all variables. When stratified by selected covariates, the association between type 2 diabetes and long sleep duration became more evident in individuals who were <65 years of age, male, BMI <24 kg/m2, or with hypertension or hyperlipidemia; no interaction effects were observed [Figure 3]. Association of sleep duration and midday napping with type 2 diabetes. Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio. Spline curves for associations of sleep duration with type 2 diabetes. (A) Nonadjusted. The reference group was 7 h/night sleep duration. P values were 0.184 for the overall association and 0.204 for the nonlinear association. (B) Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, never), drinking status (current, former, never), hypertension (yes, no), and hyperlipidemia (yes, no). The reference group was 7 h/night sleep duration.
Solid line represents the odds ratio, and dotted lines represent the 95% confidence interval of the odds ratio. P values were 0.062 for the overall association and 0.092 for the nonlinear association. BMI: Body mass index; T2DM: Type 2 diabetes mellitus. Sleep duration and type 2 diabetes risk, stratified by baseline characteristics. (A) All ORs were calculated using moderate sleep duration (7–8 h/night) as the reference group. Each group was adjusted for all other covariates except the one being tested. (B) All ORs were calculated using moderate sleep duration (7–8 h/night) as the reference group, with model 2 adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, never), drinking status (current, former, never), hypertension (yes, no), and hyperlipidemia (yes, no). Each group was adjusted for all other covariates except the one being tested. BMI: Body mass index; CI: Confidence interval; ORs: Odds ratios. We explored the combined effect of sleep duration and sleep quality on the risk of type 2 diabetes. Subjects with ≥9 h/night of sleep and good sleep quality (OR: 1.27; 95% CI: 1.01–1.61) had a higher risk of diabetes than those who reported moderate nighttime sleep duration (7–8 h/night) and good sleep quality. After adjustments, the OR was 1.37 (95% CI: 1.06–1.77) [Table 3]. Joint effects of sleep duration and sleep quality on the incidence of type 2 diabetes. Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio. Table 4 shows the relationship between changes in sleep duration and type 2 diabetes. Compared to participants who reported between 7 and 9 h of sleep in both surveys, those who reported sleeping ≥9 h in both surveys showed ORs of 1.51 (95% CI: 1.05–2.17) in the crude model and 1.54 (95% CI: 1.07–2.24) after adjusting for all variables, indicating a higher risk of diabetes. Association of change in sleep duration between surveys with type 2 diabetes. Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio.
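As a quick arithmetic check on the estimates above: a reported OR and its 95% CI come from a logistic coefficient as OR = exp(β) and CI = exp(β ± 1.96·SE), so β and SE can be back-solved from a published interval. A minimal sketch using the long-sleep estimate reported above (OR 1.27, 95% CI 1.01–1.61); this is a consistency check, not a new analysis:

```python
# Back-solve beta and SE from the reported OR and CI, then reconstruct
# the interval to confirm internal consistency up to rounding.
import math

or_hat, lo, hi = 1.27, 1.01, 1.61
beta = math.log(or_hat)                          # ~0.239
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # ~0.119
lower = math.exp(beta - 1.96 * se)               # ~1.01
upper = math.exp(beta + 1.96 * se)               # ~1.60
print(round(lower, 2), round(upper, 2))
```

The reconstructed bounds (about 1.01 and 1.60) match the published 1.01–1.61 up to the rounding of the reported figures.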
Conclusions
We found a J-shaped relationship between sleep duration and the incidence of type 2 diabetes, with the lowest risk for type 2 diabetes in individuals sleeping 6.3 to 7.5 h after adjusting for covariates. Sleep duration that is too long or too short increases the risk of type 2 diabetes. Further studies are needed to reveal the mechanism driving the relationship between sleep time and the incidence of diabetes.
[ "Ethic approval", "Study subjects", "Assessment of covariates", "Statistical analysis", "Funding" ]
[ "All participants provided written informed consent and all protocols were approved by the Ethical Committee of Rui-jin Hospital, Shanghai Jiao Tong University School of Medicine, which is in charge of the REACTION study (No. 2011-14).", "We used data from the REACTION cohort study, which investigated the association between type 2 diabetes and pre-type 2 diabetes and the risk of cancer in the Chinese population.[10] All subjects live in Laoshan, Jingding, and Gucheng communities of Beijing (China) or in Ningde and Wuyishan, Fujian Province (China). They were invited to complete baseline questionnaires and medical examinations in 2011. During the first follow-up survey in 2015, the same community was investigated, and the study had a total size of 14,429 participants [Figure 1].\nFlowchart of participants from the Risk Evaluation of cAncers in Chinese diabetic Individuals: a lONgitudinal (REACTION) study.\nIn all, 1754 subjects were diagnosed with type 2 diabetes in 2011, 51 subjects had missing information, and 1085 subjects had a history of cancer or related diseases or were pregnant. The remaining 11,539 subjects (4043 men, 7496 women) were enrolled in the present study.", "Data, such as age, sex, smoking status, and drinking status, were collected during the baseline investigation by trained doctors using a detailed questionnaire. BMI was calculated as weight in kilograms divided by height in meters squared. Smoking and alcohol consumption were classified into three levels: current, occasional, and never. Subjects who smoked at least one cigarette/day for more than half a year were defined as current smokers.[11] Subjects who drank at least once a week for more than half a year were defined as current drinkers.[12] The first level was regarded as a positive response. Nighttime and midday sleep time and sleep quality data were obtained through a self-administered questionnaire.[10] Sleep duration was calculated from bedtime to waking time and was categorized into five groups: <6 h, 6 to <7 h, 7 to <8 h, 8 to <9 h, or ≥9 h. Midday napping was divided into groups of no napping (0 min), 1 to 29 min, 30 to 59 min, 60 to 89 min, and ≥90 min. Sleep quality was divided into three groups: good, fair, and poor, with frequent use of hypnotics included in the fair group. Hypertension was defined as self-reported physician-diagnosed hypertension, current use of antihypertensive medications, or SBP ≥130 mm Hg/DBP ≥80 mm Hg by the 2017 American College of Cardiology/American Heart Association guidelines.[13] Diabetes was defined as self-reported physician-diagnosed diabetes, fasting glucose level ≥7.0 mmol/L, or current use of antidiabetic medications. Hyperlipidemia was defined as a history of physician-diagnosed hyperlipidemia, total cholesterol ≥6.22 mmol/L, triglycerides >2.26 mmol/L, high-density lipoprotein cholesterol <1.04 mmol/L, low-density lipoprotein cholesterol ≥4.14 mmol/L, or current use of lipid-lowering medications.", "We performed all analyses with Stata statistical software, v. 14.2 (Stata Corp., College Station, TX, USA). Data were compared using analysis of variance (ANOVA) for continuous variables and χ2 analysis for categorical variables. Continuous variables are expressed as mean ± standard deviation, and categorical variables are expressed as n (%).\nBinary logistic regression was constructed to assess the ORs and 95% confidence intervals (CIs) of sleep duration and type 2 diabetes using sleep duration of 7 to 8 h/night as the reference group, as previous studies have suggested that sleeping for 7 to 8 h is optimal.[14,15] Potential covariates included in the multivariate-adjusted model were age, gender, BMI, smoking and drinking status, hypertension, and hyperlipidemia. Considering that type 2 diabetes risk might follow nonlinear trends, we used a restricted cubic spline model with three knots at the 25th, 50th, and 75th percentiles of sleep duration and 7 h/night as the reference group.[16] Subjects were divided into several groups by age (<65 years, ≥65 years), sex (male, female), BMI (<24 kg/m2, ≥24 kg/m2), hypertension (yes, no), and hyperlipidemia (yes, no). In addition, potential interactions were tested using interaction terms of these covariates with sleep duration.\nWe evaluated the association between changes in sleep duration and incidence of type 2 diabetes. This association was examined using crude and multivariate-adjusted models, with subjects persistently sleeping 7 to 9 h/night in both surveys as the reference group.\nWe further evaluated the joint effects of sleep duration and midday napping and of sleep duration and sleep quality on the risk of developing diabetes, using moderate sleep duration (7–8 h/night) with midday napping (1–29 min) and moderate sleep duration (7–8 h/night) with good sleep quality as the reference groups.", "This work was supported by a grant from the National Key Research and Development Program of China (No. 2018YFC1314100)." ]
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Ethic approval", "Study subjects", "Assessment of covariates", "Statistical analysis", "Results", "Discussion", "Conclusions", "Funding", "Conflicts of interest" ]
[ "Type 2 diabetes is a critical public health challenge worldwide. Patients with type 2 diabetes are at increased risk for premature mortality and hospitalization due to complications.[1] Given the global burden of type 2 diabetes, understanding the impacts of modifiable risk factors is of great importance.[2,3]\nSleep is essential to the health of patients with type 2 diabetes.[4] Although humans spend about a third of their time sleeping, they may not appreciate its importance. Adequate high-quality sleep is vital to maintain the normal physiological state of the body.[5] Insufficient sleep is a health problem,[5] and long sleep duration is associated with increased body mass index (BMI),[6] impaired glucose tolerance,[7] and increased probability of developing type 2 diabetes.[7] Although lifestyle changes, such as increasing physical activity and weight loss, are of great importance to the management of this disease, understanding the link between type 2 diabetes and sleep duration may help to reduce its incidence.\nShort sleepers and long sleepers show an increased incidence of type 2 diabetes.[8] In addition, a study in Japan found a J-shaped relationship between sleep time and HbA1c level.[9] Therefore, this retrospective cohort study assessed the associations of both nighttime sleep and daytime napping with the risk of type 2 diabetes. We used data from the Risk Evaluation of cAncers in Chinese diabeTic Individuals: a lONgitudinal (REACTION) study cohort, which covered a 4-year period. To our knowledge, no such study has yet explored this relationship in Chinese people at risk for type 2 diabetes.", "Ethic approval All participants provided written informed consent and all protocols were approved by the Ethical Committee of Rui-jin Hospital, Shanghai Jiao Tong University School of Medicine, which is in charge of the REACTION study (No. 2011-14).\nStudy subjects We used data from the REACTION cohort study, which investigated the association between type 2 diabetes and pre-type 2 diabetes and the risk of cancer in the Chinese population.[10] All subjects live in Laoshan, Jingding, and Gucheng communities of Beijing (China) or in Ningde and Wuyishan, Fujian Province (China). They were invited to complete baseline questionnaires and medical examinations in 2011. During the first follow-up survey in 2015, the same community was investigated, and the study had a total size of 14,429 participants [Figure 1].\nFlowchart of participants from the Risk Evaluation of cAncers in Chinese diabetic Individuals: a lONgitudinal (REACTION) study.\nIn all, 1754 subjects were diagnosed with type 2 diabetes in 2011, 51 subjects had missing information, and 1085 subjects had a history of cancer or related diseases or were pregnant. The remaining 11,539 subjects (4043 men, 7496 women) were enrolled in the present study.\nAssessment of covariates Data, such as age, sex, smoking status, and drinking status, were collected during the baseline investigation by trained doctors using a detailed questionnaire. BMI was calculated as weight in kilograms divided by height in meters squared. Smoking and alcohol consumption were classified into three levels: current, occasional, and never. Subjects who smoked at least one cigarette/day for more than half a year were defined as current smokers.[11] Subjects who drank at least once a week for more than half a year were defined as current drinkers.[12] The first level was regarded as a positive response. Nighttime and midday sleep time and sleep quality data were obtained through a self-administered questionnaire.[10] Sleep duration was calculated from bedtime to waking time and was categorized into five groups: <6 h, 6 to <7 h, 7 to <8 h, 8 to <9 h, or ≥9 h. Midday napping was divided into groups of no napping (0 min), 1 to 29 min, 30 to 59 min, 60 to 89 min, and ≥90 min. Sleep quality was divided into three groups: good, fair, and poor, with frequent use of hypnotics included in the fair group. Hypertension was defined as self-reported physician-diagnosed hypertension, current use of antihypertensive medications, or SBP ≥130 mm Hg/DBP ≥80 mm Hg by the 2017 American College of Cardiology/American Heart Association guidelines.[13] Diabetes was defined as self-reported physician-diagnosed diabetes, fasting glucose level ≥7.0 mmol/L, or current use of antidiabetic medications. Hyperlipidemia was defined as a history of physician-diagnosed hyperlipidemia, total cholesterol ≥6.22 mmol/L, triglycerides >2.26 mmol/L, high-density lipoprotein cholesterol <1.04 mmol/L, low-density lipoprotein cholesterol ≥4.14 mmol/L, or current use of lipid-lowering medications.\nStatistical analysis We performed all analyses with Stata statistical software, v. 14.2 (Stata Corp., College Station, TX, USA). Data were compared using analysis of variance (ANOVA) for continuous variables and χ2 analysis for categorical variables. Continuous variables are expressed as mean ± standard deviation, and categorical variables are expressed as n (%).\nBinary logistic regression was constructed to assess the ORs and 95% confidence intervals (CIs) of sleep duration and type 2 diabetes using sleep duration of 7 to 8 h/night as the reference group, as previous studies have suggested that sleeping for 7 to 8 h is optimal.[14,15] Potential covariates included in the multivariate-adjusted model were age, gender, BMI, smoking and drinking status, hypertension, and hyperlipidemia. Considering that type 2 diabetes risk might follow nonlinear trends, we used a restricted cubic spline model with three knots at the 25th, 50th, and 75th percentiles of sleep duration and 7 h/night as the reference group.[16] Subjects were divided into several groups by age (<65 years, ≥65 years), sex (male, female), BMI (<24 kg/m2, ≥24 kg/m2), hypertension (yes, no), and hyperlipidemia (yes, no). In addition, potential interactions were tested using interaction terms of these covariates with sleep duration.\nWe evaluated the association between changes in sleep duration and incidence of type 2 diabetes. This association was examined using crude and multivariate-adjusted models, with subjects persistently sleeping 7 to 9 h/night in both surveys as the reference group.\nWe further evaluated the joint effects of sleep duration and midday napping and of sleep duration and sleep quality on the risk of developing diabetes, using moderate sleep duration (7–8 h/night) with midday napping (1–29 min) and moderate sleep duration (7–8 h/night) with good sleep quality as the reference groups.", "All participants provided written informed consent and all protocols were approved by the Ethical Committee of Rui-jin Hospital, Shanghai Jiao Tong University School of Medicine, which is in charge of the REACTION study (No. 2011-14).", "We used data from the REACTION cohort study, which investigated the association between type 2 diabetes and pre-type 2 diabetes and the risk of cancer in the Chinese population.[10] All subjects live in Laoshan, Jingding, and Gucheng communities of Beijing (China) or in Ningde and Wuyishan, Fujian Province (China). They were invited to complete baseline questionnaires and medical examinations in 2011. During the first follow-up survey in 2015, the same community was investigated, and the study had a total size of 14,429 participants [Figure 1].\nFlowchart of participants from the Risk Evaluation of cAncers in Chinese diabetic Individuals: a lONgitudinal (REACTION) study.\nIn all, 1754 subjects were diagnosed with type 2 diabetes in 2011, 51 subjects had missing information, and 1085 subjects had a history of cancer or related diseases or were pregnant. The remaining 11,539 subjects (4043 men, 7496 women) were enrolled in the present study.", "Data, such as age, sex, smoking status, and drinking status, were collected during the baseline investigation by trained doctors using a detailed questionnaire. BMI was calculated as weight in kilograms divided by height in meters squared. Smoking and alcohol consumption were classified into three levels: current, occasional, and never. Subjects who smoked at least one cigarette/day for more than half a year were defined as current smokers.[11] Subjects who drank at least once a week for more than half a year were defined as current drinkers.[12] The first level was regarded as a positive response. Nighttime and midday sleep time and sleep quality data were obtained through a self-administered questionnaire.[10] Sleep duration was calculated from bedtime to waking time and was categorized into five groups: <6 h, 6 to <7 h, 7 to <8 h, 8 to <9 h, or ≥9 h. Midday napping was divided into groups of no napping (0 min), 1 to 29 min, 30 to 59 min, 60 to 89 min, and ≥90 min. Sleep quality was divided into three groups: good, fair, and poor, with frequent use of hypnotics included in the fair group. 
Hypertension was defined as self-reported physician-diagnosed hypertension, current use of antihypertensive medications, or SBP ≥130 mm Hg/DBP ≥80 mm Hg by the 2017 American College of Cardiology/American Heart Association guidelines.[13] Diabetes was defined as self-reported physician-diagnosed diabetes, fasting glucose level ≥7.0 mmol/L, or current use of antidiabetic medications. Hyperlipidemia was defined as a history of physician-diagnosed hyperlipidemia, total cholesterol ≥6.22 mmol/L, triglycerides >2.26 mmol/L, high-density lipoprotein cholesterol <1.04 mmol/L, low-density lipoprotein cholesterol ≥4.14 mmol/L, or current use of lipid-lowering medications.", "We performed all analyses with Stata statistical software, v. 14.2 (Stata Corp., College Station, TX, USA). Data were compared using analysis of variance (ANOVA) for continuous variables and χ2 analysis for categorical variables. Continuous variables are expressed as mean ± standard deviation, and categorical variables are expressed as n (%).\nBinary logistic regression was constructed to assess the ORs and 95% confidence intervals (CIs) of sleep duration and type 2 diabetes using sleep duration of 7 to 8 h/night as the reference group, as previous studies have suggested that sleeping for 7 to 8 h is optimal.[14,15] Potential covariates included in the multivariate-adjusted model were age, gender, BMI, smoking and drinking status, hypertension, and hyperlipidemia. Considering that type 2 diabetes risk might follow nonlinear trends, we used a restricted cubic spline model with three knots at the 25th, 50th, and 75th percentiles of sleep duration and 7 h/night as the reference group.[16] Subjects were divided into several groups by age (<65 years, ≥65 years), sex (male, female), BMI (<24 kg/m2, ≥24 kg/m2), hypertension (yes, no), and hyperlipidemia (yes, no). In addition, potential interactions were tested using interaction terms of these covariates with sleep duration.\nWe evaluated the association between changes in sleep duration and incidence of type 2 diabetes. This association was examined using crude and multivariate-adjusted models, with subjects persistently sleeping 7 to 9 h/night in both surveys as the reference group.\nWe further evaluated the joint effects of sleep duration and midday napping and of sleep duration and sleep quality on the risk of developing diabetes, using moderate sleep duration (7–8 h/night) with midday napping (1–29 min) and moderate sleep duration (7–8 h/night) with good sleep quality as the reference groups.", "Among 11,539 subjects, 13.11% (n = 1513) reported sleeping ≥9 h/night and 5.18% (n = 598) reported midday napping ≥90 min. Compared to subjects reporting 7 to 8 h/night of sleep, those reporting sleep duration ≥9 h/night were more likely to be female, current smokers, and current drinkers. Meanwhile, compared to subjects with 1 to 29 min of midday napping, those reporting midday napping ≥90 min were more likely to be male, to have hypertension and hyperlipidemia, and to be current smokers and current drinkers (P < 0.05). In addition, compared to the reference groups, participants reporting ≥9 h/night of sleep were less likely to have hypertension and hyperlipidemia [Table 1].\nGeneral characteristics of the study participants according to sleep duration and midday napping (N = 11,539)\nData are presented as mean ± SD for continuous variables. BMI: Body mass index; SD: Standard deviation.\nDuring the follow-up investigation, we documented 694 type 2 diabetes patients. Compared to people who slept for 7 to 8 h/night, the ORs (95% CIs) of type 2 diabetes were 1.21 (0.85–1.72) for <6 h/night, 0.91 (0.70–1.17) for 6 to 7 h/night, 1.03 (0.85–1.24) for 8 to 9 h/night, and 1.27 (1.01–1.61) for ≥9 h/night (P = 0.040), respectively. After adjusting for age (continuous), sex, BMI (continuous), smoking status (yes or no), drinking status (yes or no), hypertension (yes or no), and hyperlipidemia (yes or no), similar associations were observed for those having ≥9 h/night of sleep (P = 0.02) [Table 2]. We further explored the interaction of sleep duration with midday napping or sleep quality on the risk of type 2 diabetes and found no significant interaction. Compared to participants napping for 1 to 29 min, the ORs (95% CIs) of type 2 diabetes were 1.30 (0.53–3.20) for those reporting no midday napping, 1.04 (0.38–2.83) for 30 to 59 min, 1.02 (0.39–2.65) for 60 to 89 min, and 1.27 (0.48–3.31) for midday napping ≥90 min. In addition, no significant association was observed between these groups after adjusting for age, sex, BMI, smoking status, drinking status, hypertension, and hyperlipidemia. Restricted cubic spline regression analysis showed a J-shaped curve and confirmed that people who slept ≥9 h/night had a higher risk of type 2 diabetes [Figure 2]. The optimal nighttime sleep duration was 7.2 to 7.5 h, and it was 6.3 to 7.5 h after adjusting for all variables. When stratified by selected covariates, the association between type 2 diabetes and long sleep duration became more evident in individuals who were <65 years of age, male, BMI <24 kg/m2, or with hypertension or hyperlipidemia; no interaction effects were observed [Figure 3].\nAssociation of sleep duration and midday napping with type 2 diabetes.\nAdjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio.\nSpline curves for associations of sleep duration with type 2 diabetes. (A) Nonadjusted. The reference group was 7 h/night sleep duration. P values were 0.184 for the overall association and 0.204 for the nonlinear association. (B) Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, never), drinking status (current, former, never), hypertension (yes, no), and hyperlipidemia (yes, no). The reference group was 7 h/night sleep duration. Solid line represents the odds ratio, and dotted lines represent the 95% confidence interval of the odds ratio. P values were 0.062 for the overall association and 0.092 for the nonlinear association. BMI: Body mass index; T2DM: Type 2 diabetes mellitus.\nSleep duration and type 2 diabetes risk, stratified by baseline characteristics. (A) All ORs were calculated using moderate sleep duration (7–8 h/night) as the reference group. Each group was adjusted for all other covariates except the one being tested. (B) All ORs were calculated using moderate sleep duration (7–8 h/night) as the reference group, with model 2 adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, never), drinking status (current, former, never), hypertension (yes, no), and hyperlipidemia (yes, no). Each group was adjusted for all other covariates except the one being tested. BMI: Body mass index; CI: Confidence interval; ORs: Odds ratios.\nWe explored the combined effect of sleep duration and sleep quality on the risk of type 2 diabetes. Subjects with ≥9 h/night of sleep and good sleep quality (OR: 1.27; 95% CI: 1.01–1.61) had a higher risk of diabetes than those who reported moderate nighttime sleep duration (7–8 h/night) and good sleep quality. After adjustments, the OR was 1.37 (95% CI: 1.06–1.77) [Table 3].\nJoint effects of sleep duration and sleep quality on the incidence of type 2 diabetes.\nAdjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio.\nTable 4 shows the relationship between changes in sleep duration and type 2 diabetes. Compared to participants who reported between 7 and 9 h of sleep in both surveys, those who reported sleeping ≥9 h in both surveys showed ORs of 1.51 (95% CI: 1.05–2.17) in the crude model and 1.54 (95% CI: 1.07–2.24) after adjusting for all variables, indicating a higher risk of diabetes.\nAssociation of change in sleep duration between surveys with type 2 diabetes.\nAdjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio.", "In this large retrospective cohort study, we found that subjects who slept ≥9 h per night had a higher risk of type 2 diabetes. Moreover, the optimal sleep duration at night was 6.3 to 7.5 h after adjusting for age, sex, BMI, smoking status, drinking status, hypertension, and hyperlipidemia. To avoid an influence of region on our results, we selected people in both the northern and southern regions of China. To the best of our knowledge, this is the first retrospective study to report that persistently sleeping ≥9 h/night is related to higher type 2 diabetes risk compared to persistently sleeping for a moderate duration (7–9 h/night) in the Chinese population.\nMany studies have assessed the association between sleep duration and the incidence of type 2 diabetes[7] or blood glucose level.[8] In these studies, participants were usually divided into several groups according to sleep duration. Therefore, such studies can only conclude that one group of people has the lowest incidence of type 2 diabetes, for example, people who sleep 7 to 8 h at night.[8] A meta-analysis suggested that for a short duration of sleep (5–6 h/night) the risk ratio was 1.28, while for a long duration of sleep (8–9 h/night) it was 1.48.[17] In the present study, we narrowed the optimal sleep duration to a tighter range using the restricted cubic spline model.\nThe elevated risk for type 2 diabetes with long sleep duration appeared to be more pronounced in individuals who were <65 years old, male, BMI <24 kg/m2, and/or with hypertension or hyperlipidemia, although no interaction was noted. However, we did not find significant differences between the subgroups based on the P values of the interaction terms. The reason for the lack of interactions may be that we observed individuals for a short period of time, and type 2 diabetes is a chronic disease with a relatively low incidence. 
As a result, we observed fewer cases.\nAfternoon napping is a common habit in many countries including China. The relationship between midday napping and the risk of type 2 diabetes has also been investigated in several previous cross-sectional[18] and cohort studies.[19] These studies have suggested that the incidence of type 2 diabetes or elevated blood glucose levels is higher in individuals with longer sleep duration. However, midday napping can influence the quantity and quality of nocturnal sleep.[20] Therefore, we considered the effects of midday napping on the incidence of type 2 diabetes alongside that of nocturnal sleep. To our knowledge, this is the first study to examine the impact of midday napping in combination with nocturnal sleep on the incidence of type 2 diabetes in the Chinese population. However, we found no interaction between sleep time and midday napping. Observing the relationships between the incidence of type 2 diabetes and both sleep duration and midday napping demands a large number of subjects. Previous research has suggested that type 2 diabetes patients prefer longer midday naps because they are more fatigued.[21] We excluded individuals with diabetics at the start of our study, and therefore we can conclude that a longer midday napping leads to type 2 diabetes.\nPrevious studies have been shown that the prevalence of poor sleep quality was significantly higher among people with diabetes than those without it.[22] In our study, only 105 participants with poor sleep quality developed diabetes during the follow-up period. As a result, the difference in the incidences of diabetes is not statistically significant between this group and the reference group. A larger sample size is needed to investigate the effects of sleep quality.\nNumerous possible mechanisms could explain the relationship between long sleep duration and midday napping and the incidence of type 2 diabetes. Research revealed that leptin and ghrelin are of great importance to the incidence of type 2 diabetes.[23] Short sleep can reduce leptin and elevate ghrelin in the blood, leading to increases in hunger and appetite, accompanied by a decrease in glucose tolerance.[24] In addition, short sleep may contribute to the secretion of adiponectin and insulin. Adiponectin, which is secreted by adipocytes, is associated with insulin sensitivity.[25–27] Sleep restriction can increase sympathetic nervous system activity, leading to decreased insulin sensitivity.[28] However, the mechanisms through which long sleep duration leads to increased risk for type 2 diabetes are not fully understood. Long sleep duration may reflect a more sedentary lifestyle and, similar to short sleepers, long sleepers may engage in more snacking.[29]\nThe study also had several limitations. First, the sleep duration at night and midday is defined as the time from going to bed to waking in the questionnaire survey, which is slightly different from the actual time to sleep. However, it was impossible to obtain the objective measures of sleep duration and napping in large prospective population studies, and the self-administrated survey is the most commonly used method of evaluating sleep duration and napping in large population-based studies. Second, previous studies have suggested that poor sleep quality is associated with higher blood glucose levels in patients with type 2 diabetes.[22] However, some subgroups contained few cases of type 2 diabetes, causing their differences to be non-significant. 
No significant interaction was found between sleep duration and nap duration or sleep quality. The current results may be related to overfitting, and the underlying mechanism will be examined in future research. In addition, the restricted cubic spline model showed that sleep duration was related to the incidence of diabetes, but the nonlinear relationship was not statistically significant. A larger sample size is needed in future studies of this subject. Third, we did not record information on participants' family members in the questionnaire, so these data could not be used. We will use a more detailed questionnaire in the next survey to facilitate follow-up analyses.", "We found a J-shaped relationship between sleep duration and the incidence of type 2 diabetes, with the lowest risk for type 2 diabetes in individuals sleeping 6.3 to 7.5 h after adjusting for covariates. Sleep duration that is too long or too short increases the risk of type 2 diabetes. Further studies are needed to reveal the mechanism driving the relationship between sleep duration and the incidence of diabetes.", "This work was supported by a grant from the National Key Research and Development Program of China (No. 2018YFC1314100).", "None." ]
[ "intro", "methods", null, null, null, null, "results", "discussion", "conclusion", null, "COI-statement" ]
[ "Sleep duration", "Type 2 diabetes", "Prevalence", "Risk" ]
Introduction: Type 2 diabetes is a critical public health challenge worldwide. Patients with type 2 diabetes are at increased risk for premature mortality and hospitalization due to complications.[1] Given the global burden of type 2 diabetes, understanding the impacts of modifiable risk factors is of great importance.[2,3] Sleep is essential to the health of patients with type 2 diabetes.[4] Although humans spend about a third of their lives sleeping, the importance of sleep is often underappreciated. Adequate high-quality sleep is vital to maintain the normal physiological state of the body.[5] Insufficient sleep is a health problem,[5] and long sleep duration is associated with increased body mass index (BMI),[6] impaired glucose tolerance,[7] and increased probability of developing type 2 diabetes.[7] Although lifestyle changes, such as increasing physical activity and weight loss, are of great importance to the management of this disease, understanding the link between type 2 diabetes and sleep duration may help to reduce its incidence. Short sleepers and long sleepers show an increased incidence of type 2 diabetes.[8] In addition, a study in Japan found a J-shaped relationship between sleep time and HbA1c level.[9] Therefore, this retrospective cohort study assessed the associations of both nighttime sleep duration and daytime napping with the risk of type 2 diabetes. We used data from the Risk Evaluation of cAncers in Chinese diabeTic Individuals: a lONgitudinal (REACTION) study cohort, which covered a 4-year period. To our knowledge, no such study has yet explored this relationship in Chinese people at risk for type 2 diabetes. Methods: Ethic approval All participants provided written informed consent and all protocols were approved by the Ethical Committee of Rui-jin Hospital, Shanghai Jiao Tong University School of Medicine, which is in charge of the REACTION study (No. 2011-14). Study subjects We used data from the REACTION cohort study, which investigated the association between type 2 diabetes and pre-type 2 diabetes and the risk of cancer in the Chinese population.[10] All subjects live in the Laoshan, Jingding, and Gucheng communities of Beijing (China) or in Ningde and Wuyishan, Fujian Province (China). They were invited to complete baseline questionnaires and medical examinations in 2011. During the first follow-up survey in 2015, the same communities were investigated, for a total of 14,429 participants [Figure 1]. Flowchart of participants from the Risk Evaluation of cAncers in Chinese diabetic Individuals: a lONgitudinal (REACTION) study. In all, 1754 subjects had been diagnosed with type 2 diabetes in 2011, 51 had missing information, and 1085 had a history of cancer or related diseases or were pregnant. The remaining 11,539 subjects (4043 men, 7496 women) were enrolled in the present study. Assessment of covariates Data, such as age, sex, smoking status, and drinking status, were collected during the baseline investigation by trained doctors using a detailed questionnaire. BMI was calculated as weight in kilograms divided by height in meters squared. Smoking and alcohol consumption were classified into three levels: current, occasional, and never. Subjects who smoked at least one cigarette per day for more than half a year were defined as current smokers.[11] Subjects who drank at least once per week for more than half a year were defined as current drinkers.[12] The first level was regarded as a positive response. Nighttime and midday sleep time and sleep quality data were obtained through a self-administered questionnaire.[10] Sleep duration was calculated from bedtime to waking time and was categorized into five groups: <6 h, 6 to <7 h, 7 to <8 h, 8 to <9 h, or ≥9 h. Midday napping was divided into groups of no napping (0 min), 1 to 29 min, 30 to 59 min, 60 to 89 min, and ≥90 min. Sleep quality was divided into three groups: good, fair, and poor, with frequent use of hypnotics included in the fair group. Hypertension was defined as self-reported physician-diagnosed hypertension or current use of antihypertensive medications or SBP ≥130 mm Hg/DBP ≥80 mm Hg by the 2017 American College of Cardiology/American Heart Association guidelines.[13] Diabetes was defined as self-reported physician-diagnosed diabetes or fasting glucose level ≥7.0 mmol/L or current usage of antidiabetic medications. Hyperlipidemia was defined as a history of physician-diagnosed hyperlipidemia or total cholesterol ≥6.22 mmol/L or triglycerides >2.26 mmol/L or high-density lipoprotein cholesterol <1.04 mmol/L or low-density lipoprotein cholesterol ≥4.14 mmol/L or current usage of lipid-lowering medications. Statistical analysis We performed all analyses with Stata statistical software, v. 14.2 (Stata Corp., College Station, TX, USA). Data were compared using analysis of variance (ANOVA) for continuous variables and χ2 analysis for categorical variables. Continuous variables are expressed as mean ± standard deviation, and categorical variables are expressed as n (%). Binary logistic regression was constructed to assess the ORs and 95% confidence intervals (CIs) of sleep duration and type 2 diabetes using sleep duration of 7 to 8 h/night as the reference group, as previous studies have suggested that sleeping for 7 to 8 h is optimal.[14,15] Potential covariates included in the multivariate-adjusted model were age, gender, BMI, smoking and drinking status, hypertension, and hyperlipidemia. Considering that type 2 diabetes risk might follow nonlinear trends, we used a restricted cubic spline model with three knots at the 25th, 50th, and 75th percentiles of sleep duration and 7 h/night as the reference group.[16] Subjects were divided into several groups by age (<65 years, ≥65 years), sex (male, female), BMI (<24 kg/m2, ≥24 kg/m2), hypertension (yes, no), and hyperlipidemia (yes, no). In addition, potential interactions were tested using interaction terms of these covariates with sleep duration. We evaluated the association between changes in sleep duration and incidence of type 2 diabetes. This association was examined using crude and multivariate-adjusted models, with subjects persistently sleeping 7 to 9 h/night in both surveys as the reference group. We further evaluated the joint effects of sleep duration and midday napping and that of sleep duration and sleep quality on the risk of developing diabetes, using moderate sleep duration (7–8 h/night) with midday napping (1–29 min), and moderate sleep duration (7–8 h/night) with good sleep quality as the reference groups.
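As an illustration of the modelling described above, the following is a minimal Python sketch (not the authors' actual Stata code) of a binary logistic regression on a three-knot restricted cubic spline of sleep duration, with knots at the 25th, 50th, and 75th percentiles and 7 h/night as the reference. The data are simulated purely for illustration, and a real analysis would append the listed covariates to the design matrix.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    sleep = rng.normal(7.5, 1.2, n).clip(4, 11)        # hours/night (simulated)
    logit_p = -3.0 + 0.15 * (sleep - 7.0) ** 2         # J-shaped risk, lowest near 7 h
    diabetes = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    def rcs_basis(x, knots):
        # Harrell's restricted cubic spline basis: linear term + (k - 2) nonlinear terms
        x = np.asarray(x, dtype=float)
        k = np.asarray(knots, dtype=float)
        pos3 = lambda u: np.clip(u, 0.0, None) ** 3    # truncated cubic (u)_+^3
        cols = [x]
        for j in range(len(k) - 2):
            cols.append(pos3(x - k[j])
                        - pos3(x - k[-2]) * (k[-1] - k[j]) / (k[-1] - k[-2])
                        + pos3(x - k[-1]) * (k[-2] - k[j]) / (k[-1] - k[-2]))
        return np.column_stack(cols)

    knots = np.percentile(sleep, [25, 50, 75])         # three knots, as in the study
    X = sm.add_constant(rcs_basis(sleep, knots))       # covariates would be appended here
    fit = sm.GLM(diabetes, X, family=sm.families.Binomial()).fit()

    grid = np.linspace(4.5, 10.5, 61)
    delta = rcs_basis(grid, knots) - rcs_basis(np.full_like(grid, 7.0), knots)
    odds_ratio = np.exp(delta @ fit.params[1:])        # OR versus the 7 h/night reference
    print(dict(zip(np.round(grid[::10], 1), np.round(odds_ratio[::10], 2))))

With three knots the spline contributes only two parameters (one linear, one nonlinear), which is enough for a J-shaped odds-ratio curve to emerge without overfitting.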
Results: Among 11,539 subjects, 13.11% (n = 1513) reported sleeping ≥9 h/night and 5.18% (n = 598) reported midday napping ≥90 min. Compared to subjects reporting 7 to 8 h/night of sleep, those reporting sleep duration ≥9 h/night were more likely to be female, current smokers, and current drinkers. Meanwhile, compared to subjects with 1 to 29 min of midday napping, those reporting midday napping ≥90 min were more likely to be male, to have hypertension and hyperlipidemia, and to be current smokers and current drinkers (P < 0.05). In addition, compared to the reference groups, participants reporting ≥9 h/night of sleep were less likely to have hypertension and hyperlipidemia [Table 1]. General characteristics of the study participants according to sleep duration and midday napping (N = 11,539) Data are presented as mean ± SD for continuous variables. BMI: Body mass index; SD: Standard deviation. During the follow-up investigation, we documented 694 incident cases of type 2 diabetes.
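As a side note on the crude odds ratios reported below: each can be reproduced from a 2 x 2 table of cases and non-cases. A small self-contained Python sketch using the standard log-OR (Woolf) confidence interval follows; the cell counts are hypothetical, not the study's actual counts.

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        # OR for the exposed group (a cases, b non-cases) vs the reference (c cases, d non-cases)
        or_ = (a / b) / (c / d)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Woolf method
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # hypothetical counts: >=9 h/night sleepers vs the 7-8 h/night reference group
    print([round(v, 2) for v in odds_ratio_ci(120, 1393, 300, 4400)])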
Compared to people who slept for 7 to 8 h/night, the ORs (95% CIs) of type 2 diabetes were 1.21 (0.85–1.72) for <6 h/night, 0.91 (0.70–1.17) for 6 to 7 h/night, 1.03 (0.85–1.24) for 8 to 9 h/night, and 1.27 (1.01–1.61) for ≥9 h/night (P = 0.040). After adjusting for age (continuous), sex, BMI (continuous), smoking status (yes or no), drinking status (yes or no), hypertension (yes or no), and hyperlipidemia (yes or no), similar associations were observed for those having ≥9 h/night of sleep (P = 0.02) [Table 2]. We further explored the interaction of sleep duration and midday napping or sleep quality on the risk of type 2 diabetes. We found no interaction between sleep duration and nap duration or sleep quality. Compared to participants napping for 1 to 29 min, the ORs (95% CIs) of type 2 diabetes were 1.30 (0.53–3.20) for those reporting no midday napping, 1.04 (0.38–2.83) for 30 to 59 min, 1.02 (0.39–2.65) for 60 to 89 min, and 1.27 (0.48–3.31) for midday napping ≥90 min. In addition, no significant association was observed between these groups after adjusting for age, sex, BMI, smoking status, drinking status, hypertension, and hyperlipidemia. Restricted cubic spline regression analysis showed a J-shaped curve and confirmed that people who slept ≥9 h/night had a higher risk of type 2 diabetes [Figure 2]. The optimal nighttime sleep duration was 7.2 to 7.5 h, and it was 6.3 to 7.5 h after adjusting for all variables. When stratified by selected covariates, the association between type 2 diabetes and long sleep duration became more evident in individuals who were <65 years of age, male, with BMI <24 kg/m2, or with hypertension or hyperlipidemia; however, no interaction effects were observed [Figure 3]. Association of sleep duration and midday napping with type 2 diabetes. Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio. Spline curves for associations of sleep duration with type 2 diabetes. (A) Nonadjusted. The reference group was 7 h/night sleep duration. P values were 0.184 for the overall association and 0.204 for the nonlinear association. (B) Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, never), drinking status (current, former, never), hypertension (yes, no), and hyperlipidemia (yes, no). The reference group was 7 h/night sleep duration. Solid line represents odds ratio, and dotted lines represent 95% confidence interval of odds ratio. P values were 0.062 for the overall association and 0.092 for the nonlinear association. BMI: Body mass index; T2DM: Type 2 diabetes mellitus. Sleep duration and type 2 diabetes risk, stratified by baseline characteristics. (A) All ORs were calculated using moderate sleep duration (7–8 h/night) as the reference group. Each group was adjusted for all other covariates except the one being tested. (B) All ORs were calculated using moderate sleep duration (7–8 h/night) as the reference group, with model 2 adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, never), drinking status (current, former, never), hypertension (yes, no), and hyperlipidemia (yes, no). Each group was adjusted for all other covariates except the one being tested. BMI: Body mass index; CI: Confidence interval; ORs: Odds ratios.
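The subgroup results above rest on interaction (product) terms between each covariate and sleep duration. A hedged Python sketch of that test follows (illustrative variable names and simulated data; the authors used Stata):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 4000
    df = pd.DataFrame({
        "long_sleep": rng.binomial(1, 0.13, n),        # 1 if >=9 h/night (simulated)
        "male": rng.binomial(1, 0.35, n),
    })
    logit = -2.8 + 0.3 * df["long_sleep"] + 0.1 * df["male"]
    df["diabetes"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    fit = smf.logit("diabetes ~ long_sleep * male", data=df).fit(disp=0)
    print(fit.pvalues["long_sleep:male"])              # P value for the interaction term

A non-significant product term, as reported here, means the sleep-diabetes association does not differ detectably between strata even when stratum-specific ORs look different.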
We explored the combined effect of sleep duration and sleep quality on the risk of type 2 diabetes. Subjects with ≥9 h/night of sleep and good sleep quality (OR 1.27, 95% CI 1.01–1.61) had a higher risk of diabetes than those who reported moderate nighttime sleep duration (7–8 h/night) and good sleep quality. After adjustments, the OR was 1.37 (95% CI: 1.06–1.77) [Table 3]. Joint effects of sleep duration and sleep quality on the incidence of type 2 diabetes. Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio. Table 4 shows the relationship between changes in sleep duration and type 2 diabetes. Compared to participants who reported between 7 and 9 h of sleep in both surveys, those who reported sleeping ≥9 h in both surveys showed ORs of 1.51 (95% CI: 1.05–2.17) in the crude model and 1.54 (95% CI: 1.07–2.24) after adjusting for all variables, indicating a higher risk of diabetes. Association of change in sleep duration between surveys with type 2 diabetes. Adjusted for age (continuous), sex, BMI (continuous), smoking status (current, former, or never), drinking status (current, former, or never), hypertension (yes or no), and hyperlipidemia (yes or no). BMI: Body mass index; CI: Confidence interval; OR: Odds ratio. Discussion: In this large retrospective cohort study, we found that subjects who slept ≥9 h per night had a higher risk of type 2 diabetes. Moreover, the optimal nighttime sleep duration was 6.3 to 7.5 h after adjusting for age, sex, BMI, smoking status, drinking status, hypertension, and hyperlipidemia. To avoid an influence of region on our results, we selected people in both the northern and southern regions of China. To the best of our knowledge, this is the first retrospective study to report that persistently sleeping ≥9 h/night is related to higher type 2 diabetes risk compared to persistently sleeping for a moderate duration (7–9 h/night) in the Chinese population. Many studies have assessed the association between sleep duration and the incidence of type 2 diabetes[7] or blood glucose level.[8] In these studies, researchers have usually divided participants into several groups according to sleep duration. Therefore, such studies can only conclude that one group of people has the lowest incidence of type 2 diabetes, for example, people who sleep 7 to 8 h at night.[8] A meta-analysis suggested that for a short duration of sleep (5–6 h/night) the risk ratio was 1.28, while for a long duration of sleep (8–9 h/night) it was 1.48.[17] In the present study, we narrowed the optimal sleep duration to a small range using the restricted cubic spline model. The elevated risk of type 2 diabetes associated with long sleep duration appeared more pronounced in individuals who were <65 years old, male, had a BMI <24 kg/m2, and/or had hypertension or hyperlipidemia. However, we did not find significant differences between subgroups based on the P values of the interaction terms. The reason for the lack of interactions may be that we observed individuals for a short period of time, and type 2 diabetes is a chronic disease with a relatively low incidence. As a result, we observed fewer cases. Afternoon napping is a common habit in many countries, including China.
The relationship between midday napping and the risk of type 2 diabetes has also been investigated in several previous cross-sectional[18] and cohort studies.[19] These studies have suggested that the incidence of type 2 diabetes or elevated blood glucose levels is higher in individuals with longer midday naps. However, midday napping can influence the quantity and quality of nocturnal sleep.[20] Therefore, we considered the effects of midday napping on the incidence of type 2 diabetes alongside those of nocturnal sleep. To our knowledge, this is the first study to examine the impact of midday napping in combination with nocturnal sleep on the incidence of type 2 diabetes in the Chinese population. However, we found no interaction between sleep duration and midday napping. Observing the relationships between the incidence of type 2 diabetes and both sleep duration and midday napping demands a large number of subjects. Previous research has suggested that type 2 diabetes patients prefer longer midday naps because they are more fatigued.[21] We excluded individuals with diabetes at the start of our study, and therefore longer midday napping preceded, rather than resulted from, type 2 diabetes in our cohort. Previous studies have shown that the prevalence of poor sleep quality is significantly higher among people with diabetes than among those without it.[22] In our study, only 105 participants with poor sleep quality developed diabetes during the follow-up period. As a result, the difference in the incidence of diabetes between this group and the reference group was not statistically significant. A larger sample size is needed to investigate the effects of sleep quality. Numerous possible mechanisms could explain the relationship between long sleep duration and midday napping and the incidence of type 2 diabetes. Research has revealed that leptin and ghrelin are of great importance to the incidence of type 2 diabetes.[23] Short sleep can reduce leptin and elevate ghrelin in the blood, leading to increases in hunger and appetite, accompanied by a decrease in glucose tolerance.[24] In addition, short sleep may affect the secretion of adiponectin and insulin. Adiponectin, which is secreted by adipocytes, is associated with insulin sensitivity.[25–27] Sleep restriction can increase sympathetic nervous system activity, leading to decreased insulin sensitivity.[28] However, the mechanisms through which long sleep duration leads to increased risk for type 2 diabetes are not fully understood. Long sleep duration may reflect a more sedentary lifestyle and, similar to short sleepers, long sleepers may engage in more snacking.[29] This study also had several limitations. First, sleep duration at night and midday was defined in the questionnaire survey as the time from going to bed to waking, which differs slightly from actual sleep time. However, it is impossible to obtain objective measures of sleep duration and napping in large prospective population studies, and the self-administered survey is the most commonly used method of evaluating sleep duration and napping in large population-based studies. Second, previous studies have suggested that poor sleep quality is associated with higher blood glucose levels in patients with type 2 diabetes.[22] However, some subgroups contained few cases of type 2 diabetes, causing their differences to be non-significant. No significant interaction was found between sleep duration and nap duration or sleep quality.
The current results may be related to overfitting, and the underlying mechanism will be examined in future research. In addition, the restricted cubic spline model showed that sleep duration was related to the incidence of diabetes, but the nonlinear relationship was not statistically significant. A larger sample size is needed in future studies of this subject. Third, we did not record information on participants' family members in the questionnaire, so these data could not be used. We will use a more detailed questionnaire in the next survey to facilitate follow-up analyses. Conclusions: We found a J-shaped relationship between sleep duration and the incidence of type 2 diabetes, with the lowest risk for type 2 diabetes in individuals sleeping 6.3 to 7.5 h after adjusting for covariates. Sleep duration that is too long or too short increases the risk of type 2 diabetes. Further studies are needed to reveal the mechanism driving the relationship between sleep duration and the incidence of diabetes. Funding: This work was supported by a grant from the National Key Research and Development Program of China (No. 2018YFC1314100). Conflicts of interest: None.
Background: Inadequate sleep duration is associated with a higher risk of type 2 diabetes, and the relationship is nonlinear. We aimed to assess this nonlinear relationship between night sleep duration and the incidence of type 2 diabetes in China. Methods: A cohort of 11,539 participants from the REACTION study without diabetes at baseline (2011) was followed until 2014 for the development of type 2 diabetes. Participants were grouped by average hours of sleep per night. Incidence rates and odds ratios (ORs) were calculated for the development of diabetes in each sleep duration category. Results: Compared to people who slept for 7 to 8 h/night, people with longer sleep duration (≥9 h/night) had a greater risk of type 2 diabetes (OR: 1.27; 95% CI: 1.01-1.61), while shorter sleep (<6 h/night) showed no significant difference in risk of type 2 diabetes. When the dataset was stratified based on selected covariates, the association between type 2 diabetes and long sleep duration became more evident among individuals <65 years of age, male, with body mass index <24 kg/m2, or with hypertension or hyperlipidemia; however, no interaction effects were observed. Furthermore, compared to people persistently sleeping 7 to 9 h/night, those who persistently slept ≥9 h/night had a higher risk of type 2 diabetes. The optimal sleep duration was 6.3 to 7.5 h/night. Conclusions: Short or long sleep duration was associated with a higher risk of type 2 diabetes. Persistently long sleep duration increased the risk.
Introduction: Type 2 diabetes is a critical public health challenge worldwide. Patients with type 2 diabetes are at increased risk for premature mortality and hospitalization due to complications.[1] Given the global burden of type 2 diabetes, understanding the impacts of modifiable risk factors is of great importance.[2,3] Sleep is essential to the health of patients with type 2 diabetes.[4] Although humans spend about a third of their lives sleeping, the importance of sleep is often underappreciated. Adequate high-quality sleep is vital to maintain the normal physiological state of the body.[5] Insufficient sleep is a health problem,[5] and long sleep duration is associated with increased body mass index (BMI),[6] impaired glucose tolerance,[7] and increased probability of developing type 2 diabetes.[7] Although lifestyle changes, such as increasing physical activity and weight loss, are of great importance to the management of this disease, understanding the link between type 2 diabetes and sleep duration may help to reduce its incidence. Short sleepers and long sleepers show an increased incidence of type 2 diabetes.[8] In addition, a study in Japan found a J-shaped relationship between sleep time and HbA1c level.[9] Therefore, this retrospective cohort study assessed the associations of both nighttime sleep duration and daytime napping with the risk of type 2 diabetes. We used data from the Risk Evaluation of cAncers in Chinese diabeTic Individuals: a lONgitudinal (REACTION) study cohort, which covered a 4-year period. To our knowledge, no such study has yet explored this relationship in Chinese people at risk for type 2 diabetes. Conclusions: We found a J-shaped relationship between sleep duration and the incidence of type 2 diabetes, with the lowest risk for type 2 diabetes in individuals sleeping 6.3 to 7.5 h after adjusting for covariates. Sleep duration that is too long or too short increases the risk of type 2 diabetes. Further studies are needed to reveal the mechanism driving the relationship between sleep duration and the incidence of diabetes.
Background: Inadequate sleep duration is associated with a higher risk of type 2 diabetes, and the relationship is nonlinear. We aimed to assess this nonlinear relationship between night sleep duration and the incidence of type 2 diabetes in China. Methods: A cohort of 11,539 participants from the REACTION study without diabetes at baseline (2011) was followed until 2014 for the development of type 2 diabetes. Participants were grouped by average hours of sleep per night. Incidence rates and odds ratios (ORs) were calculated for the development of diabetes in each sleep duration category. Results: Compared to people who slept for 7 to 8 h/night, people with longer sleep duration (≥9 h/night) had a greater risk of type 2 diabetes (OR: 1.27; 95% CI: 1.01-1.61), while shorter sleep (<6 h/night) showed no significant difference in risk of type 2 diabetes. When the dataset was stratified based on selected covariates, the association between type 2 diabetes and long sleep duration became more evident among individuals <65 years of age, male, with body mass index <24 kg/m2, or with hypertension or hyperlipidemia; however, no interaction effects were observed. Furthermore, compared to people persistently sleeping 7 to 9 h/night, those who persistently slept ≥9 h/night had a higher risk of type 2 diabetes. The optimal sleep duration was 6.3 to 7.5 h/night. Conclusions: Short or long sleep duration was associated with a higher risk of type 2 diabetes. Persistently long sleep duration increased the risk.
5,705
304
[ 45, 182, 347, 369, 23 ]
11
[ "sleep", "diabetes", "duration", "sleep duration", "type diabetes", "type", "night", "subjects", "current", "napping" ]
[ "sleep quality developed", "sleep quality associated", "diabetes individuals sleeping", "diabetes optimal sleep", "diabetes sleep duration" ]
[CONTENT] Sleep duration | Type 2 diabetes | Prevalence | Risk [SUMMARY]
[CONTENT] Sleep duration | Type 2 diabetes | Prevalence | Risk [SUMMARY]
[CONTENT] Sleep duration | Type 2 diabetes | Prevalence | Risk [SUMMARY]
[CONTENT] Sleep duration | Type 2 diabetes | Prevalence | Risk [SUMMARY]
[CONTENT] Sleep duration | Type 2 diabetes | Prevalence | Risk [SUMMARY]
[CONTENT] Sleep duration | Type 2 diabetes | Prevalence | Risk [SUMMARY]
[CONTENT] China | Diabetes Mellitus, Type 2 | Humans | Incidence | Male | Risk Factors | Sleep | Sleep Deprivation [SUMMARY]
[CONTENT] China | Diabetes Mellitus, Type 2 | Humans | Incidence | Male | Risk Factors | Sleep | Sleep Deprivation [SUMMARY]
[CONTENT] China | Diabetes Mellitus, Type 2 | Humans | Incidence | Male | Risk Factors | Sleep | Sleep Deprivation [SUMMARY]
[CONTENT] China | Diabetes Mellitus, Type 2 | Humans | Incidence | Male | Risk Factors | Sleep | Sleep Deprivation [SUMMARY]
[CONTENT] China | Diabetes Mellitus, Type 2 | Humans | Incidence | Male | Risk Factors | Sleep | Sleep Deprivation [SUMMARY]
[CONTENT] China | Diabetes Mellitus, Type 2 | Humans | Incidence | Male | Risk Factors | Sleep | Sleep Deprivation [SUMMARY]
[CONTENT] sleep quality developed | sleep quality associated | diabetes individuals sleeping | diabetes optimal sleep | diabetes sleep duration [SUMMARY]
[CONTENT] sleep quality developed | sleep quality associated | diabetes individuals sleeping | diabetes optimal sleep | diabetes sleep duration [SUMMARY]
[CONTENT] sleep quality developed | sleep quality associated | diabetes individuals sleeping | diabetes optimal sleep | diabetes sleep duration [SUMMARY]
[CONTENT] sleep quality developed | sleep quality associated | diabetes individuals sleeping | diabetes optimal sleep | diabetes sleep duration [SUMMARY]
[CONTENT] sleep quality developed | sleep quality associated | diabetes individuals sleeping | diabetes optimal sleep | diabetes sleep duration [SUMMARY]
[CONTENT] sleep quality developed | sleep quality associated | diabetes individuals sleeping | diabetes optimal sleep | diabetes sleep duration [SUMMARY]
[CONTENT] sleep | diabetes | duration | sleep duration | type diabetes | type | night | subjects | current | napping [SUMMARY]
[CONTENT] sleep | diabetes | duration | sleep duration | type diabetes | type | night | subjects | current | napping [SUMMARY]
[CONTENT] sleep | diabetes | duration | sleep duration | type diabetes | type | night | subjects | current | napping [SUMMARY]
[CONTENT] sleep | diabetes | duration | sleep duration | type diabetes | type | night | subjects | current | napping [SUMMARY]
[CONTENT] sleep | diabetes | duration | sleep duration | type diabetes | type | night | subjects | current | napping [SUMMARY]
[CONTENT] sleep | diabetes | duration | sleep duration | type diabetes | type | night | subjects | current | napping [SUMMARY]
[CONTENT] type | type diabetes | diabetes | increased | sleep | health | importance | risk | study | understanding [SUMMARY]
[CONTENT] sleep | subjects | sleep duration | duration | mmol | diabetes | current | min | defined | groups [SUMMARY]
[CONTENT] sleep | night | yes | status current | continuous | current | ci | status | sleep duration | duration [SUMMARY]
[CONTENT] diabetes | relationship sleep | sleep | type | type diabetes | risk type diabetes | relationship | risk type | incidence | sleep duration [SUMMARY]
[CONTENT] sleep | diabetes | type | type diabetes | duration | sleep duration | night | study | subjects | risk [SUMMARY]
[CONTENT] sleep | diabetes | type | type diabetes | duration | sleep duration | night | study | subjects | risk [SUMMARY]
[CONTENT] 2 ||| between night | 2 | China [SUMMARY]
[CONTENT] 11,539 | REACTION | 2011 | 2014 | 2 ||| hours ||| [SUMMARY]
[CONTENT] 7 to 8 | 2 | 1.27 | 95% | CI | 1.01-1.61 | 6 h/night | 2 ||| 2 | 65 years of age | 24 kg | 2 ||| 7 | 2 ||| 6.3 | 7.5 h/night [SUMMARY]
[CONTENT] 2 ||| [SUMMARY]
[CONTENT] 2 ||| between night | 2 | China ||| 11,539 | REACTION | 2011 | 2014 | 2 ||| hours ||| ||| 7 to 8 | 2 | 1.27 | 95% | CI | 1.01-1.61 | 6 h/night | 2 ||| 2 | 65 years of age | 24 kg | 2 ||| 7 | 2 ||| 6.3 | 7.5 h/night ||| 2 ||| [SUMMARY]
[CONTENT] 2 ||| between night | 2 | China ||| 11,539 | REACTION | 2011 | 2014 | 2 ||| hours ||| ||| 7 to 8 | 2 | 1.27 | 95% | CI | 1.01-1.61 | 6 h/night | 2 ||| 2 | 65 years of age | 24 kg | 2 ||| 7 | 2 ||| 6.3 | 7.5 h/night ||| 2 ||| [SUMMARY]
Neochlorogenic acid: an anti-HIV active compound identified by screening of Cortex Mori [Morus alba L. (Moraceae)]
34714196
Chinese herbs such as Cortex Mori [Morus alba L. (Moraceae)] may inhibit human immunodeficiency virus (HIV), but active compounds are unknown.
CONTEXT
HIV-1 virus (multiplicity of infection: 20) and herbs such as Cortex Mori (dissolved in dimethyl sulfoxide; working concentrations: 10, 1, and 0.1 mg/mL) were added to 786-O cells (10^5 cells/well). Zidovudine was used as a positive control. Cell survival and viral inhibition rates were measured. The herb whose activity was closest to that of zidovudine was selected for further study. Mass spectrometry identified the active compounds in the herb (mobile phase: 0.05% formic acid aqueous solution and acetonitrile; gradient elution; detection wavelength: 210 nm). The effect of the compounds on reverse transcriptase (RT) products was evaluated by real-time PCR. Gene enrichment analysis was used to explore underlying mechanisms.
MATERIALS AND METHODS
With a dose of 1 mg/mL of Cortex Mori, the cell survival rate (57.94%) and viral inhibition rate (74.95%) were closest to the effects of zidovudine (87.87% and 79.81%, respectively). Neochlorogenic acid, one of the active ingredients, was identified by mass spectrometry in Cortex Mori. PCR showed that total RT products in the neochlorogenic acid group (mean relative gene expression: 6.01) were significantly inhibited (control: 35.42, p < 0.0001). Enrichment analysis showed that neochlorogenic acid may act on haemopoietic cell kinase, epidermal growth factor receptor, sarcoma, etc., thus inhibiting HIV-1 infection.
RESULTS
For people of low socioeconomic status affected by HIV, Chinese medicine (such as Cortex Mori) has many advantages: it is inexpensive and does not easily produce resistance. Drugs based on active ingredients may be developed and could have important value.
CONCLUSIONS
[ "Anti-HIV Agents", "Cell Line, Tumor", "Cell Survival", "Chlorogenic Acid", "Dose-Response Relationship, Drug", "HEK293 Cells", "HIV Infections", "HIV-1", "Humans", "Morus", "Plant Extracts", "Quinic Acid", "Zidovudine" ]
8567877
Introduction
Acquired immune deficiency syndrome (AIDS) is an infectious disease characterised by an injured systemic immune system due to human immunodeficiency virus (HIV) infection (Lu et al. 2018; Seydi et al. 2018). Reverse transcriptase (RT), integrase (IN), and protease (PR) enzymes are essential for three key steps during HIV infection and nucleic acid replication and are also the main targets of HIV drug treatments (Andrabi et al. 2014; Laskey and Siliciano 2014). Recently, the search for new drug targets has been an important trend in HIV drug development, and RT inhibitors are a hotspot in the development of anti-HIV drugs (Wang et al. 2020). Because HIV RT is not a high-fidelity DNA polymerase and lacks a proofreading function, it produces high mutation rates in HIV during replication; the emergence of drug-resistant viruses is therefore inevitable. Chinese medicines have been used in the treatment of HIV for many years in China. Compared with synthetic compounds, natural compounds extracted from Chinese herbal medicines are characterised by good biological compatibility, relatively low toxicity, and improved immunity. Traditional Chinese herbal medicine may therefore allow for the development of new anti-HIV drugs with low toxicity and high efficacy (Chu and Liu 2011; Harvey et al. 2015). Chinese herbal medicine is a vital part of traditional Chinese medicine (TCM) and has been used as a treatment technique since its inception in ancient China. Recently, many types of Chinese herbal medicines with different degrees of antiviral activity have been reported (Wan et al. 2020; Yu et al. 2020), including Glycyrrhiza uralensis Fischer (Leguminosae) (rhizome) (Wan et al. 2020), Reynoutria japonica Houtt (Polygonaceae) (root) (Johnston 1990), Nepeta cataria Linn (Labiatae) (stem and leaf) (Johnston 1990), Lithospermum erythrorhizon Sieb. et Zucc (Boraginaceae) (root) (Chen et al. 1997), Sophora flavescens Alt (Leguminosae) (root) (Chen et al. 1997), Cinnamomum cassia Presl (Lauraceae) (bark) (Dai et al. 2012), Euchresta japonica Hook. f. ex Regel (Leguminosae) (root) (Sun et al. 2015), and Cortex Mori [Morus alba L. (Moraceae)] (bark) (Lee et al. 2007). In this study, these eight Chinese medicines were selected and their anti-HIV activities preliminarily evaluated. Medicines with definite HIV-inhibitory effects were screened from the eight, and their natural compounds were used in HIV inhibition experiments. Furthermore, their functions and mechanisms were explored to determine the active monomeric compounds that showed targeted inhibition of HIV-1 RT. The results should provide a theoretical and experimental basis for drug design, structural modification, and the development of a new generation of HIV-RT inhibitors.
null
null
Results
HIV packaging The pHIV-Lus-ZsGreen plasmid was sequenced and compared, and the sequence was consistent with the reference sequence, indicating that the vector was correct and subsequent experiments could be performed (Figure 1). Vector map and sequencing results. (A) pLenti-P2A Vector. (B) pLenti-P2B Vector. (C) pHIV-lus-ZsGreen Vector. (D) Sequence alignment results. Lentivirus-infected cells Fluorescence microscopy was performed at 48 h after co-transfection of the three plasmids into 293 T cells. The results indicated high expression levels of green fluorescent protein (GFP), suggesting that the plasmids were successfully transfected into the cells (Figure 2(A,B)). The supernatant was collected at 48 h after transfection and added to 786-O cells. The cells expressed green fluorescence 48 h later (Figure 2(C,D)), suggesting that the viruses were successfully packaged. After being concentrated, the titre of the viruses was detected according to the method provided by the virus detection kit, and the viruses were aliquoted and stored at −80 °C. Plasmid transfection and viral packaging. (A,B) Transfection of plasmids into 293 T cells, (C,D) 786-O cells infected by packaged viruses. Preliminary screening results of eight Chinese medicines The cell survival rate and virus inhibition rate were detected and compared among the eight groups treated with Chinese medicines. The cell survival rate in the Cortex Mori group (means of the high- to low-concentration groups: 63.28%, 57.94%, and 56.94%; same below) was the highest (Figure 3(A)). The HIV inhibition rates of the Cortex Mori group (77.94%, 74.95%, and 61.75%) and the N. cataria group (74.24%, 71.91%, and 71.26%) were higher than those of the other six Chinese medicine groups (Figure 3(B)). Thus, Cortex Mori was selected for further study of its action mechanism. Since the experimental results at 10 mg/mL and 1 mg/mL were similar, 1 mg/mL was selected as the initial concentration of Cortex Mori in the follow-up study. Screening results of eight Chinese medicinal compositions. (A) Effects of eight Chinese medicinal compositions on cell survival rates. (B) Effects of eight Chinese medicinal compositions on viral inhibition rates. 1: G. uralensis; 2: R. japonica; 3: N. cataria; 4: L. erythrorhizon; 5: S. flavescens; 6: C. cassia; 7: E. japonica; 8: Cortex Mori.
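For context on how survival and inhibition percentages of the kind reported above can be derived from raw plate readings, here is an illustrative Python sketch. It assumes an absorbance-based viability assay and a luciferase reporter readout for infection; the paper's exact readouts and normalisations may differ.

    def survival_rate(od_treated, od_control, od_blank):
        # viability (%) of treated cells relative to untreated control wells
        return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

    def inhibition_rate(signal_treated, signal_virus_only):
        # viral inhibition (%) = 1 - reporter signal relative to virus-only wells
        return 100.0 * (1.0 - signal_treated / signal_virus_only)

    # hypothetical readings for a single 1 mg/mL Cortex Mori well
    print(round(survival_rate(0.82, 1.35, 0.05), 2), "% viability")
    print(round(inhibition_rate(2.1e4, 8.4e4), 2), "% inhibition")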
Effects of Cortex Mori on the products of HIV RT at different stages When treated with Cortex Mori (1 mg/mL), the products in the total RT, late RT, and integrated DNA groups (mean relative gene expression values: 9.88, 16.16, and 11.83, respectively; same below) were significantly reduced compared with the HIV group (22.94, 24.45, and 45.43, respectively; all p < 0.05), although they remained higher than those of the AZT control group (1.00, 1.21, and 1.00, respectively; all p < 0.01) (Figure 4(A–C)). The results showed that Cortex Mori had a significant inhibitory effect on the HIV-1 RT enzyme, but the effect was not as favourable as that of AZT. Effects of Cortex Mori on the expression of RT enzyme products at different stages. (A) Effects of Cortex Mori on the expression of total RT enzyme products. (B) Effects of Cortex Mori on the expression of late RT enzyme products. (C) Effects of Cortex Mori on the expression of integrated DNA enzyme products. CM group: DMSO, HIV, and Cortex Mori were added to the cells; AZT group: only AZT and DMSO were added to the cells; HIV group: only HIV and DMSO were added to the cells; DMSO group: only DMSO was added to the cells; control group: the cells did not receive any medicinal treatments. Data are expressed as the mean ± SEM. **p < 0.01, ***p < 0.001, ****p < 0.0001. Chemical monomers in Cortex Mori detected by liquid mass spectrometry To identify the chemical composition of Cortex Mori granules, clarify the active natural compound monomers they contain, and characterise structures useful for the development of new antiviral drugs, 10 Chinese medicine monomers in Cortex Mori were selected according to previously published studies (Feng et al. 2013; Chen et al. 2018; Guo 2019). Ten standard compounds were identified in Cortex Mori granules by liquid-phase mass spectrometry with gradient elution and mass spectrometry analysis: mulberroside A (peak number: 15; same below), chlorogenic acid (20, 21), neochlorogenic acid (24), palmitic acid (33), astragalin (38), β-sitosterol (56), emodin (57), ursolic acid (62), morusin (63), and lupeol (66) (Figure 5, Table 2). Chromatogram of liquid phase mass spectrum peaks for separation and identification of Cortex Mori granules. Chromatographic peak parameters corresponding to the chromatograms of the 10 Chinese medicine monomers contained in Mori Cortex.
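The relative gene expression values reported for the RT products are typically obtained from real-time PCR by relative quantification. A minimal sketch of the standard 2^(-ddCt) calculation follows; it assumes Ct values for the target amplicon and a reference gene, and the study's exact probe-based quantification may differ.

    def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
        # 2^(-ddCt): fold change of the target vs a control sample,
        # normalised to a reference gene in both samples
        d_ct = ct_target - ct_ref
        d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
        return 2.0 ** -(d_ct - d_ct_ctrl)

    # hypothetical Ct values: treated sample vs the HIV-only control
    print(round(relative_expression(27.5, 18.0, 24.9, 18.1), 2))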
Effects of 10 monomers contained in Cortex Mori on the products of RT Fluorescent PCR and probe-based quantitative methods were used to further verify the inhibitory effects and action mechanisms of the 10 natural compound monomers contained in Cortex Mori on HIV-1 RT, with the aim of obtaining new natural compound monomer molecules with effective anti-HIV activity. The 10 Chinese medicine monomers were added to the cells, and DNA products at different periods were then collected. Five monomeric compounds (emodin, ursolic acid, morusin, chlorogenic acid, and astragalin) showed greater cytotoxicity at a concentration of 1 mg/mL, and the cell survival rate was low (data not published), which made it difficult to extract enough DNA to complete the follow-up test. Therefore, we selected the DNA products of the other five compounds (lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A) for quantitative analysis, and the changes in RT products at each stage were determined by PCR assays (Figure 6(A–C)). The results showed that lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A could inhibit the activity of the RT enzymes to some extent. Neochlorogenic acid showed the inhibitory effect most similar to that of AZT (the strongest inhibition among the 10 monomers) (Figure 6(A–C)). Compared with the HIV group, the products at all three stages (mean relative expression levels: 35.42, 34.78, and 45.57) were significantly decreased in the neochlorogenic acid group (6.01, 5.76, and 2.53; all p < 0.0001). Effects of five active monomer compounds in Cortex Mori on the expression of products at different stages of HIV infection. (A) Effects of five active monomer compounds in Cortex Mori on the expression of total RT enzyme products. (B) Effects of five active monomer compounds in Cortex Mori on the expression of late RT enzyme products. (C) Effects of five active monomer compounds in Cortex Mori on the expression of integrated DNA products. Data are expressed as the mean ± SEM. ***p < 0.001, ****p < 0.0001.
A total of 58 protein targets of neochlorogenic acid (normalised fit score > 0.7) were obtained. GO enrichment analysis revealed that these targets are mainly involved in cellular responses (such as the cellular response to lipid and the response to inorganic substance), synthetic or metabolic processes (such as the aromatic compound catabolic process, the carboxylic acid biosynthetic process, and other biological processes), and molecular functions (such as phosphotransferase activity, hydrolase activity, oxidoreductase activity, and protein domain specific binding) (Figure 7(A)). The pathway analysis showed that the main pathways were prostate cancer, the oestrogen signalling pathway, nitrogen metabolism, and other signalling pathways (Figure 7(B)). In the PPI analysis, after removal of outliers, a network with 40 nodes and 82 edges was obtained, with an average node degree of 4.2; the largest node degrees were for EGFR, ESR1, and AR (node degrees of 18, 14, and 11, respectively) (Figure 7(C)). Analysis of the target of neochlorogenic acid. (A) GO enrichment analysis. (B) Pathway analysis. (C) PPI analysis. Target analysis of neochlorogenic acid on HIV A total of 860 genes in 13 pathways were obtained using PathCard, among which the pathways with the highest node degrees were the HIV life cycle and HIV infection (Figure 8(A)). After intersecting these genes with the neochlorogenic acid targets, four genes were obtained: haemopoietic cell kinase (HCK), epidermal growth factor receptor (EGFR), sarcoma (SRC), and 3-phosphoinositide dependent protein kinase 1 (PDPK1) (Figure 8(B)). The enrichment analysis of these four genes showed that they were mainly concentrated in the protein autophosphorylation, peptidyl-tyrosine autophosphorylation, epidermal growth factor receptor, and Fc receptor signalling pathway functional categories (Figure 8(C)). Based on the structural formula of neochlorogenic acid (Figure 8(D)), we unexpectedly found that it can bind HIV type 2 (HIV-2) RT (PDB ID: 1MU2) (Figure 8(E)). Target analysis of neochlorogenic acid acting on HIV. (A) HIV-related pathway. (B) The intersection of HIV-related genes and neochlorogenic acid targets. (C) Intersection gene enrichment results. (D) The structural formula of neochlorogenic acid. (E) Binding model of neochlorogenic acid and 1MU2.
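The intersection step described above reduces to set arithmetic once the two gene lists are exported. A sketch, with only the genes named in the text spelled out (in practice the full 860-gene and 58-target lists would come from PathCard and PharmMapper exports):

```python
# Cross the HIV-pathway gene set with the predicted neochlorogenic acid
# targets; both sets are truncated placeholders here.
hiv_pathway_genes = {"HCK", "EGFR", "SRC", "PDPK1", "CD4", "CCR5"}   # + ~854 more
ncga_targets = {"HCK", "EGFR", "SRC", "PDPK1", "ESR1", "AR"}         # + ~52 more

shared = sorted(hiv_pathway_genes & ncga_targets)
print(shared)  # ['EGFR', 'HCK', 'PDPK1', 'SRC']
```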
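For the PPI summary reported above, the degree bookkeeping is equally simple; with the reported 40 nodes and 82 edges, the mean degree 2 × 82 / 40 = 4.1 matches the printed 4.2 up to rounding. A plain-Python sketch over a placeholder edge list:

```python
# Node degrees from an (abbreviated, hypothetical) PPI edge list.
from collections import Counter

edges = [("EGFR", "ESR1"), ("EGFR", "SRC"), ("ESR1", "AR")]  # placeholder
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(degree.most_common(3))  # hub genes; EGFR, ESR1, and AR in the paper

n_nodes, n_edges = 40, 82     # values reported in the text
print(2 * n_edges / n_nodes)  # 4.1, the average node degree
```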
Conclusions
We analysed the anti-HIV effects of eight types of TCMs and found that Cortex Mori inhibits HIV. We further screened and identified 10 compounds from Cortex Mori. Cell experiments indicated that neochlorogenic acid has a good inhibitory effect on the HIV-1 RT enzyme. Neochlorogenic acid may also inhibit HIV through four targets: HCK, EGFR, SRC, and PDPK1. Moreover, neochlorogenic acid may bind HIV-2 RT. Our research can facilitate the development and utilisation of Chinese herbal medicines, and it can serve as a reference for the development of anti-HIV-1 drugs.
[ "Cell line culture", "Lentivirus packaging", "Detection of cell survival rate and virus inhibition rate", "Real-time quantitative PCR analysis of HIV-1 DNA", "Separation and identification of natural compound constituents contained by liquid mass spectrometry", "Screening of monomer targets", "Analysis of potential targets of monomers acting on HIV", "Statistical analysis", "HIV packaging", "Lentivirus-infected cells", "Preliminary screening results of eight Chinese medicines", "Effects of Cortex Mori on the products of HIV RT at different stages", "Chemical monomers in Cortex Mori detected by liquid mass spectrometry", "Effects of 10 monomers contained in Cortex Mori on the products of RT", "Screening of neochlorogenic acid targets", "Target analysis of neochlorogenic acid on HIV" ]
[ "Human renal carcinoma 786-O cells and human embryonic kidney 293 T cells were purchased from Beina Chuanglian Biotechnology Institute (Beijing, China). Cells were stored in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, Suzhou, China) or Dulbecco’s modified Eagle’s medium (DMEM) (Gibco, Suzhou, China) and cultured in media containing 10% foetal bovine serum (FBS) at 37 °C in 5% CO2.", "pHIV-lus-zsGreen plasmids were transfected into DH5α competent cells and the cell suspension was smeared on LB agar plates (LB medium containing 10 g/L tryptone, 5 g/L yeast extract, 10 g/L NaCl, 15 g/L agar, and 10 μg/mL ampicillin), and bacterial plaques were screened. All operations were performed according to the instructions of the EndoFree Plasmid Maxi Kit (QIAGEN, Dusseldorf, Germany) to obtain many high-purity endotoxin-free plasmids, which were stored at −20 °C. The bacterial solution and glycerine were mixed at a ratio of 1:1 and then stored at −80 °C. The primer was designed in the promoter region of the plasmids, the extracted plasmids were sent to the Beijing Genomics Institute for sequencing, and DNAMAN 6.0 software was used to compare the sequencing results with the reference sequence provided by Addgene (http://www.addgene.org/).\nThe mixture solutions containing the two plasmids, pHIV-Lus-ZsGreen and 2nd Generation Packaging Mix, were added into a serum-free and double-antibody-free DMEM medium at a ratio of 1:1. Further, the same procedure was performed for the DNAfectinTM Plus Transfection Reagent, and both mixture solutions were incubated for 5 min, respectively. Then, both solutions were co-incubated for 30 min and added to 293 T cells. Subsequently, the cells were incubated at 37 °C in 5% CO2 for 6–8 h. Thereafter, the medium was replaced with DMEM complete medium containing 10% FBS. After 48 h of transfection, the expression of green fluorescence proteins in the cells was observed under a fluorescence microscope. The media were collected if the expression of green fluorescence proteins was observed.\nThe supernatant was collected and centrifuged at 10,000 rpm for 40 min at 4 °C. Then, the supernatant was discarded and the sediment was added to a 0.5 mL medium. The titre of the viruses was determined according to the procedure provided by the qPCR Lentivirus Titration (Titre) Kit; subsequently, the lentiviruses were centrifuged and concentrated to achieve a final titre with an order of magnitude of 108 TU/mL and stored at −80 °C after sub-packaging. The treated viruses were added onto the 786-O cells cultured in a 6-well plate at a volume of 5 μL/well. Cells were then incubated with polybrene (final concentration: 0.8 µg/mL) at 37 °C in 5% CO2 for 48 h and the expression level of green fluorescence protein was analysed.", "The eight Chinese medicines (G. uralensis rhizome, R. japonica root, N. cataria stem, L. erythrorhizon root, S. flavescens root, C. cassia bark, E. japonica root, and Cortex Mori) were provided by the Department of Internal Medicine of Traditional Chinese Medicine at Chongqing University Three Gorges Hospital and were identified by Fangzheng Mou, director of the Department of Internal Medicine of Traditional Chinese Medicine. A total of 200 mg of each granule preparation of the eight medicines was accurately weighed. A liquid nitrogen grinder (40 mesh) was used to grind the granule preparations into powders. The powders were dissolved and mixed with 200 μL dimethyl sulfoxide (DMSO), then treated with ultrasound at room temperature for 30–60 min. 
The extract solution was cooled to room temperature, and three working concentration gradients were prepared for each drug: 10, 1, and 0.1 mg/mL.\nHuman renal carcinoma 786-O cells were seeded onto transparent 96-well plates (10^5 cells/well) and then incubated with the Chinese medicine solutions at the above-mentioned concentrations and HIV-1 viruses (multiplicity of infection = 20) and polybrene (working concentration: 0.8 µg/mL; same below). The cells were further cultured at 37 °C in 5% CO2 for 48 h, and subsequently cultured with 10 μL Cell Counting Kit-8 (CCK8) for another 2 h. The cell survival rate was then detected.\nAdditionally, cells were seeded onto white 96-well plates and the cells were incubated with the eight Chinese medicine solutions at the above-mentioned concentrations. The cells were further incubated at 37 °C in 5% CO2 for 1 h and co-cultured with viruses. Subsequently, the cells were incubated with polybrene and the cells continued to be cultured for another 48 h. Next, luciferase substrate was added to the cells and shaken gently in the dark for 15 min.\nThe luciferase luminescence signals were detected by a microplate reader, and the inhibition rates of the medicinal compositions on lentivirus infection were calculated (a worked example of this arithmetic follows this block).\nCell survival rate=[(As−Ab)/(Ac−Ab)]×100%.\nVirus inhibition rate=[(As−Ab)/(Ac−Ab)]×100%.\n\nAs: Experimental well (the well contained medicinal compositions, cells, and culture medium)\nAc: Negative control well (the well did not contain any medicinal compositions, only cells and culture medium)\nAb: Blank control well (the well did not contain any medicinal compositions, only culture medium).\nIn the subsequent steps, Chinese medicine granules associated with a high cell survival rate or high cell inhibition rate were selected as candidate medicines for chemical composition analysis.", "To further determine the mechanism of inhibition on HIV infection in the early stage, the real-time quantitative PCR (qPCR) probe method was used to detect changes in the expression levels of HIV DNA products during HIV-1 cell infection. The primers and probes were designed according to previous studies (King and Attardi 1989; Butler et al. 2001; Brussel and Sonigo 2003; Bacman and Moraes 2007). Mitochondrial DNA was selected as the internal reference, and the primer information is as shown in Table 1 (Butler et al. 2001; Brussel and Sonigo 2003).\nRT-PCR primers and probes.\nChinese medicines and the positive control medicine, zidovudine (AZT), were both dissolved in DMSO. The initial concentration of Chinese medicines was determined based on the results from a previous step. The initial concentration of AZT was 100 μg/mL.\nHuman renal carcinoma 786-O cells were cultured in 6-well plates, and the cells + AZT, the cells + lentivirus, and untreated cells were assigned to the positive control group, negative control group, and blank control group, respectively. The cells were initially incubated with medicine supplementation for 1 h and then were incubated with lentivirus and polybrene at 37 °C in 5% CO2 for 8 h or 24 h, respectively. (Note: the reaction times of products detected by different probes differ.) The cells in the 6-well plates were subsequently washed gently 8–10 times with normal saline to remove the remaining drugs and viruses. Subsequently, the cells were digested with pancreatin and washed thrice with normal saline. 
DNA was extracted from the cells according to the method provided by the DNeasy Blood and Tissue (QIAGEN, Dusseldorf, Germany) kit, and the DNA concentration was measured. The isolated DNA was stored at −20 °C until further use. Three parallel experiments were performed for each group.\nDetection of total RT products and late RT products: The primers and probes in Table 1 were used to detect the expressions of total RT products and late RT products at 8 h after incubation with lentivirus and medicine supplementation. The reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL upstream and downstream primers (900 mmol), 0.5 µL probe, and 10 µL qPCR detection reagent; some water was added to reach a final volume of 20 µL, and three replicate wells were established. The reaction procedure was 55 °C for 3 min, 95 °C for 5 s, and 60 °C for 30 s for 40 cycles.\nDetection of integrated DNA: DNA products obtained after 24 h incubation with lentivirus and medicine supplementation were used for this step. Firstly, the primers in Table 1 were used to perform Alu-LTR PCR; the reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL upstream and downstream primers (900 mmol), and 10 µL High Fidelity PCR Enzyme Mix; water was added to make a final volume of 20 µL. The reaction procedure was 95 °C for 5 min, 95 °C for 30 s, 60 °C for 30 s, and 72 °C for 3 min for 16 cycles; and then 72 °C for 5 min. The products were stored at 16 °C until use. Further, 1 µL of Alu-LTR PCR product was used for qPCR detection using the same method as previously described.", "Based on the experimental results obtained from the survival rate and viral inhibition rate analyses of the lentivirus-transfected cells treated with eight Chinese medicines, we chose the best drug for further experiments. According to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019), monomers that may have anti-HIV effects were selected. These monomers were analysed and detected by liquid mass spectrometry. A total of 50 mg of each Chinese medicine was weighed and dissolved in 50 μL DMSO (as above), and then treated by ultrasound at room temperature for 30 min and filtered through a 0.22 μm organic membrane. A 5 μL injection sample was taken for each LC/MS analysis. The conditions for liquid mass spectrometry were as follows:\nChromatographic column: Shim-pack VP-ODS18 (250 mm × 2.0 mm, 5 μm); mobile phase: 0.05% formic acid aqueous solution (A), acetonitrile (B), gradient elution (0–5 min, 9% B; 5–22 min, 9–22% B; 22–40 min, 22–65% B; 40–60 min, 65–95% B; 60–65 min, 95% B); detection wavelength: 210 nm; flow rate: 0.3 mL/min; column temperature: 30 °C, and injection volume: 5 μL.\nMass spectrometry conditions: ionisation mode: ESI (±); atomising gas flow rate: 3.0 L/min; drying gas flow rate: 10 L/min; heating gas flow rate: 10 L/min; interface temperature: 450 °C; DL temperature: 300 °C; and heating module temperature: 400 °C.\nAfterward, real-time quantitative PCR analysis of the identified 10 monomers (working concentration: 1 mg/mL) on HIV-1 DNA was performed using the same method, as shown above.", "We used PubChem (https://pubchem.ncbi.nlm.nih.gov/) and Discovery Studio 2020 Client to obtain the monomer structure, uploaded the structural formula to PharmMapper (http://www.lilab-ecust.cn/pharmmapper/) (Wang et al. 
2017), target selection ‘Human Protein Targets Only (v2010, 2241)’, and uploaded the predicted protein target to UniProt (https://www.uniprot.org/) with Normalised Fit Score > 0.7 to obtain the gene targets. We uploaded the obtained gene targets to Metascape (https://metascape.org/gp/index.html#/main/step1) (Zhou et al. 2019) for Gene Ontology (GO) enrichment analysis and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway analysis and uploaded the enrichment analysis GO results to a bioinformatics online tool (http://www.bioinformatics.com.cn/) for visualisation. We then uploaded the obtained gene targets to String (https://string-db.org/) to construct a protein-protein interaction (PPI) network and used Cytoscape 3.7.2 to perform further processing of the results.", "We used PathCard (https://PathCards.genecards.org/) (Belinky et al. 2015) to obtain all gene names for HIV-related pathways and used a bioinformatics online tool (http://www.bioinformatics.com.cn/) to obtain the intersection of HIV-related genes and the target of monomer. The results contained possible targets of monomer and HIV. We then performed enrichment analysis and PPI analysis on the target again.", "Results are presented as the mean ± standard error of the mean (SEM) for the sample number of each group. One-way analysis of variance (ANOVA) was used to evaluate the significance between multiple groups. Tukey’s multiple comparisons test was used to compare the mean values of two specific groups in GraphPad Prism 8.0.1, and p < 0.05 was considered significant.", "The pHIV-Lus-ZsGreen plasmid was sequenced and compared, and the sequence was consistent with the reference sequence, suggesting the vector was correct and subsequent experiments could be performed (Figure 1).\nVector map and sequencing results. (A) pLenti-P2A Vector. (B) pLenti-P2B Vector. (C) pHIV-lus-ZsGreen Vector. (D) Sequence alignment results.", "Fluorescence microscopy was performed at 48 h after co-transfection with three plasmids into 293 T cells. The results indicated high expression levels of green fluorescent proteins (GFP), suggesting that the plasmid was successfully transfected into the cells (Figure 2(A,B)). The supernatant was collected at 48 h after transfection and added into 786-O cells. The cells expressed green fluorescence 48 h later (Figure 2(C,D)), suggesting that the viruses were successfully packaged. After being concentrated, the titre of the viruses was detected according to the method provided by the virus detection kit, and the viruses were packed separately and stored at −80 °C.\nPlasmid transfection and viral packaging. (A,B) Transfection of plasmids into 293 T cells, (C,D) 786-O cells infected by packaged viruses.", "The cell survival rate and virus inhibition rate were detected and compared among eight groups treated with Chinese medicines. The cell survival rate in the Cortex Mori group (the means of the high to low concentration groups were 63.28%, 57.94%, 56.94%; same below) was the highest (Figure 3(A)). The HIV inhibition rates of the Cortex Mori group (77.94%, 74.95%, and 61.75%) and N. cataria group (74.24%, 71.91%, and 71.26%) were higher than those of the other six Chinese medicine groups (Figure 3(B)). Thus, Cortex Mori was selected to further study its action mechanism. Since the experiment results at 10 mg/mL and 1 mg/mL were similar, 1 mg/mL was selected as the initial concentration of Cortex Mori in the follow-up study.\nScreening results of eight Chinese medicinal compositions. 
(A) Effects of eight Chinese medicinal compositions on cell survival rates. (B) Effects of eight Chinese medicinal compositions on viral inhibition rates. 1: G. uralensis; 2: R. japonica; 3: N. cataria; 4: L. erythrorhizon; 5: S. flavescens; 6: C. cassia; 7: E. japonica; 8: Cortex Mori.", "When the cells were treated with Cortex Mori (1 mg/mL), the products of the total RT, late RT, and integrated DNA groups (relative gene expression mean values of 9.88, 16.16, and 11.83, respectively; same order below) were significantly reduced compared with the HIV group (mean values of 22.94, 24.45, and 45.43, respectively; all p < 0.05). The values of the Cortex Mori group remained higher than those of the AZT control group (1.00, 1.21, and 1.00, respectively; all p < 0.01) (Figure 4(A–C)). The results showed that Cortex Mori had a significant inhibitory effect on the HIV-1 RT enzyme, but the effect was not as favourable as that of AZT.\nEffects of Cortex Mori on the expression of RT enzyme products at different stages. (A) Effects of Cortex Mori on the expression of total RT enzyme products. (B) Effects of Cortex Mori on the expression of late RT enzyme products. (C) Effects of Cortex Mori on the expression of integrated DNA enzyme products. CM group: DMSO, HIV, and Cortex Mori were added to the cells; AZT group: only AZT and DMSO were added to the cells; HIV group: only HIV and DMSO were added to the cells; DMSO group: only DMSO was added to the cells; control group: the cells did not receive any medicinal treatments. Data are expressed as the mean ± SEM. **p < 0.01, ***p < 0.001, ****p < 0.0001.", "To determine the chemical composition of Cortex Mori granules, to clarify their active natural compound monomers, and to identify structural characteristics useful for the development of new antiviral drugs, 10 Chinese medicine monomers reported in Cortex Mori were selected according to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019). All 10 standard compounds were identified in Cortex Mori granules by liquid chromatography with gradient elution and mass spectrometry analysis: mulberroside A (peak number: 15; same below), chlorogenic acid (20, 21), neochlorogenic acid (24), palmitic acid (33), astragalin (38), β-sitosterol (56), emodin (57), ursolic acid (62), morusin (63), and lupeol (66) (Figure 5, Table 2).\nChromatogram of liquid phase mass spectrum peaks for separation and identification of Cortex Mori granules.\nChromatographic peak parameters corresponding to the chromatograms of 10 Chinese medicine monomers contained in Mori Cortex.", "Fluorescent PCR with probe-based quantification was used to further verify the inhibitory effects and action mechanisms of the 10 natural compound monomers contained in Cortex Mori on HIV-1 RT, with the aim of obtaining new natural compound monomers with effective anti-HIV activity.\nThe 10 Chinese medicine monomers were added to the cells, and DNA products from the different periods were then collected. Five of the monomeric compounds (emodin, ursolic acid, morusin, chlorogenic acid, and astragalin) showed marked cytotoxicity at a concentration of 1 mg/mL, and the cell survival rate was low (data not shown), which made it difficult to extract enough DNA to complete the follow-up tests. 
Therefore, we selected the DNA products of the other five compounds (lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A) for quantitative analysis, and the changes in RT products at each stage were determined by PCR assays (Figure 6(A–C)). The results showed that lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A could all inhibit RT activity to some extent. Neochlorogenic acid showed the inhibitory effect most similar to that of AZT, i.e., the strongest inhibition among the 10 monomers (Figure 6(A–C)). Compared with the HIV group, the products of all three stages in the neochlorogenic acid group were significantly decreased (mean relative expression levels: 6.01, 5.76, and 2.53 vs. 35.42, 34.78, and 45.57; all p < 0.0001).\nEffects of five active monomer compounds in Cortex Mori on the expression of products at different stages of HIV infection. (A) Effects of five active monomer compounds in Cortex Mori on the expression of total RT enzyme products. (B) Effects of five active monomer compounds in Cortex Mori on the expression of late RT enzyme products. (C) Effects of five active monomer compounds in Cortex Mori on the expression of integrated DNA products. Data are expressed as the mean ± SEM. ***p < 0.001, ****p < 0.0001.", "The PubChem ID of neochlorogenic acid is 5280633, its molecular formula is C16H18O9, and its molecular weight is 354.31 g/mol. A total of 58 protein targets of neochlorogenic acid (normalised fit score > 0.7) were obtained. GO enrichment analysis revealed that these targets are mainly involved in cellular responses (such as the cellular response to lipid and the response to inorganic substance), synthetic or metabolic processes (such as the aromatic compound catabolic process, the carboxylic acid biosynthetic process, and other biological processes), and molecular functions (such as phosphotransferase activity, hydrolase activity, oxidoreductase activity, and protein domain specific binding) (Figure 7(A)). The pathway analysis showed that the main pathways were prostate cancer, the oestrogen signalling pathway, nitrogen metabolism, and other signalling pathways (Figure 7(B)). In the PPI analysis, after removal of outliers, a network with 40 nodes and 82 edges was obtained, with an average node degree of 4.2; the largest node degrees were for EGFR, ESR1, and AR (node degrees of 18, 14, and 11, respectively) (Figure 7(C)).\nAnalysis of the target of neochlorogenic acid. (A) GO enrichment analysis. (B) Pathway analysis. (C) PPI analysis.", "A total of 860 genes in 13 pathways were obtained using PathCard, among which the pathways with the highest node degrees were the HIV life cycle and HIV infection (Figure 8(A)). After intersecting these genes with the neochlorogenic acid targets, four genes were obtained: haemopoietic cell kinase (HCK), epidermal growth factor receptor (EGFR), sarcoma (SRC), and 3-phosphoinositide dependent protein kinase 1 (PDPK1) (Figure 8(B)). The enrichment analysis of these four genes showed that they were mainly concentrated in the protein autophosphorylation, peptidyl-tyrosine autophosphorylation, epidermal growth factor receptor, and Fc receptor signalling pathway functional categories (Figure 8(C)). Based on the structural formula of neochlorogenic acid (Figure 8(D)), we unexpectedly found that it can bind HIV type 2 (HIV-2) RT (PDB ID: 1MU2) (Figure 8(E)).\nTarget analysis of neochlorogenic acid acting on HIV. (A) HIV-related pathway. (B) The intersection of HIV-related genes and neochlorogenic acid targets. 
(C) Intersection gene enrichment results. (D) The structural formula of neochlorogenic acid. (E) Binding model of neochlorogenic acid and 1MU2." ]
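The plate-reader arithmetic behind the screening formulas quoted in the methods above can be made concrete as follows. Note that the source prints the same expression for both survival and inhibition; a common convention for an inhibition rate is the complement, 1 − (As−Ab)/(Ac−Ab), and that variant is used here as a flagged assumption:

```python
# As = treated well, Ac = cell-only control, Ab = medium-only blank.
def survival_rate(a_s: float, a_c: float, a_b: float) -> float:
    """Survival as the normalised signal, per the formula in the text."""
    return (a_s - a_b) / (a_c - a_b) * 100

def inhibition_rate(a_s: float, a_c: float, a_b: float) -> float:
    """Assumed convention: inhibition as the complement of the ratio."""
    return (1 - (a_s - a_b) / (a_c - a_b)) * 100

print(round(survival_rate(0.92, 1.40, 0.08), 1))    # 63.6 (CCK-8 absorbance)
print(round(inhibition_rate(0.41, 1.40, 0.08), 1))  # 75.0 (luciferase signal)
```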
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
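The relative expression values quoted throughout the results (e.g., for total RT, late RT, and integrated DNA products) are consistent with the standard 2^−ΔΔCt treatment of the qPCR runs, with mitochondrial DNA as the internal reference as stated in the methods. A sketch with invented Ct values, since the raw Ct data are not given:

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_cal: float, ct_ref_cal: float) -> float:
    """2^-ddCt: normalise to mitochondrial DNA, then to a calibrator group."""
    d_ct = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2 ** -(d_ct - d_ct_cal)

# HIV group vs an AZT calibrator for one product type (made-up Ct values):
print(round(relative_expression(24.0, 18.0, 28.5, 18.0), 2))  # 22.63
```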
[ "Introduction", "Materials and methods", "Cell line culture", "Lentivirus packaging", "Detection of cell survival rate and virus inhibition rate", "Real-time quantitative PCR analysis of HIV-1 DNA", "Separation and identification of natural compound constituents contained by liquid mass spectrometry", "Screening of monomer targets", "Analysis of potential targets of monomers acting on HIV", "Statistical analysis", "Results", "HIV packaging", "Lentivirus-infected cells", "Preliminary screening results of eight Chinese medicines", "Effects of Cortex Mori on the products of HIV RT at different stages", "Chemical monomers in Cortex Mori detected by liquid mass spectrometry", "Effects of 10 monomers contained in Cortex Mori on the products of RT", "Screening of neochlorogenic acid targets", "Target analysis of neochlorogenic acid on HIV", "Discussion", "Conclusions" ]
[ "Acquired immune deficiency syndrome (AIDS) is an infectious disease characterised by an injured systemic immune system due to human immunodeficiency virus (HIV) infection (Lu et al. 2018; Seydi et al. 2018). Reverse transcriptase (RT), integrase (IN), and protease (PR) enzymes are essential for three key steps during HIV infection and nucleic acid replication and are also the main targets of HIV drug treatments (Andrabi et al. 2014; Laskey and Siliciano 2014). Recently, the search for new drug targets has been an important trend in HIV drug developmental studies, and RT inhibitors are a hotspot in the development of anti-HIV drugs (Wang et al. 2020). Since HIV RT is not a high-fidelity DNA polymerase and lacks proofreading function, it will cause increased mutation rates in HIV during the replication process. Therefore, the emergence of drug-resistant viruses is inevitable.\nChinese medicines have been used in the treatment of HIV for many years in China. Compared with synthetic compounds, natural compounds extracted from Chinese herbal medicines are characterised by good biological compatibility, relatively low toxicity, and improved immunity. Traditional Chinese herbal medicine may allow for the development of new anti-HIV drugs with low toxicity and high efficacy (Chu and Liu 2011; Harvey et al. 2015).\nChinese herbal medicine is a vital part of traditional Chinese medicine (TCM) and has been used as a treatment technique since its inception in ancient China. Recently, many types of Chinese herbal medicines with different degrees of antiviral activity have been reported (Wan et al. 2020; Yu et al. 2020), including Glycyrrhiza uralensis Fischer (Leguminosae) (rhizome) (Wan et al. 2020), Reynoutria japonica Houtt (Polygonaceae) (root) (Johnston 1990), Nepeta cataria Linn (Labiatae) (stem and leaf) (Johnston 1990), Lithospermum erythrorhizon Sieb. et Zucc (Boraginaceae) (root) (Chen et al. 1997), Sophora flavescens Alt (Leguminosae) (root) (Chen et al. 1997), Cinnamomum cassia Presl (Lauraceae) (bark) (Dai et al. 2012), Euchresta japonica Hook. f. ex Regel (Leguminosae) (root) (Sun et al. 2015), and Cortex Mori [Morus alba L. (Moraceae)] (bark) (Lee et al. 2007). In this study, eight types of Chinese medicines were selected for study, and their anti-HIV activities were preliminarily evaluated. Chinese medicines with definite HIV inhibitory effects were screened from the eight types of Chinese medicines, and their natural compounds were used for HIV inhibition experiments. Furthermore, their functions and mechanisms were explored to determine the active monomeric compounds in the Chinese medicines that showed targeted inhibition of HIV-1 RT. The study results should provide a theoretical and experimental basis for the drug design, structural modification, and development of a new generation of HIV-RT inhibitors.", "Cell line culture Human renal carcinoma 786-O cells and human embryonic kidney 293 T cells were purchased from Beina Chuanglian Biotechnology Institute (Beijing, China). Cells were stored in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, Suzhou, China) or Dulbecco’s modified Eagle’s medium (DMEM) (Gibco, Suzhou, China) and cultured in media containing 10% foetal bovine serum (FBS) at 37 °C in 5% CO2.\nHuman renal carcinoma 786-O cells and human embryonic kidney 293 T cells were purchased from Beina Chuanglian Biotechnology Institute (Beijing, China). 
Cells were stored in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, Suzhou, China) or Dulbecco’s modified Eagle’s medium (DMEM) (Gibco, Suzhou, China) and cultured in media containing 10% foetal bovine serum (FBS) at 37 °C in 5% CO2.\nLentivirus packaging pHIV-lus-zsGreen plasmids were transfected into DH5α competent cells and the cell suspension was smeared on LB agar plates (LB medium containing 10 g/L tryptone, 5 g/L yeast extract, 10 g/L NaCl, 15 g/L agar, and 10 μg/mL ampicillin), and bacterial plaques were screened. All operations were performed according to the instructions of the EndoFree Plasmid Maxi Kit (QIAGEN, Dusseldorf, Germany) to obtain many high-purity endotoxin-free plasmids, which were stored at −20 °C. The bacterial solution and glycerine were mixed at a ratio of 1:1 and then stored at −80 °C. The primer was designed in the promoter region of the plasmids, the extracted plasmids were sent to the Beijing Genomics Institute for sequencing, and DNAMAN 6.0 software was used to compare the sequencing results with the reference sequence provided by Addgene (http://www.addgene.org/).\nThe mixture solutions containing the two plasmids, pHIV-Lus-ZsGreen and 2nd Generation Packaging Mix, were added into a serum-free and double-antibody-free DMEM medium at a ratio of 1:1. Further, the same procedure was performed for the DNAfectinTM Plus Transfection Reagent, and both mixture solutions were incubated for 5 min, respectively. Then, both solutions were co-incubated for 30 min and added to 293 T cells. Subsequently, the cells were incubated at 37 °C in 5% CO2 for 6–8 h. Thereafter, the medium was replaced with DMEM complete medium containing 10% FBS. After 48 h of transfection, the expression of green fluorescence proteins in the cells was observed under a fluorescence microscope. The media were collected if the expression of green fluorescence proteins was observed.\nThe supernatant was collected and centrifuged at 10,000 rpm for 40 min at 4 °C. Then, the supernatant was discarded and the sediment was added to a 0.5 mL medium. The titre of the viruses was determined according to the procedure provided by the qPCR Lentivirus Titration (Titre) Kit; subsequently, the lentiviruses were centrifuged and concentrated to achieve a final titre with an order of magnitude of 108 TU/mL and stored at −80 °C after sub-packaging. The treated viruses were added onto the 786-O cells cultured in a 6-well plate at a volume of 5 μL/well. Cells were then incubated with polybrene (final concentration: 0.8 µg/mL) at 37 °C in 5% CO2 for 48 h and the expression level of green fluorescence protein was analysed.\npHIV-lus-zsGreen plasmids were transfected into DH5α competent cells and the cell suspension was smeared on LB agar plates (LB medium containing 10 g/L tryptone, 5 g/L yeast extract, 10 g/L NaCl, 15 g/L agar, and 10 μg/mL ampicillin), and bacterial plaques were screened. All operations were performed according to the instructions of the EndoFree Plasmid Maxi Kit (QIAGEN, Dusseldorf, Germany) to obtain many high-purity endotoxin-free plasmids, which were stored at −20 °C. The bacterial solution and glycerine were mixed at a ratio of 1:1 and then stored at −80 °C. 
The primer was designed in the promoter region of the plasmids, the extracted plasmids were sent to the Beijing Genomics Institute for sequencing, and DNAMAN 6.0 software was used to compare the sequencing results with the reference sequence provided by Addgene (http://www.addgene.org/).\nThe mixture solutions containing the two plasmids, pHIV-Lus-ZsGreen and 2nd Generation Packaging Mix, were added into a serum-free and double-antibody-free DMEM medium at a ratio of 1:1. Further, the same procedure was performed for the DNAfectinTM Plus Transfection Reagent, and both mixture solutions were incubated for 5 min, respectively. Then, both solutions were co-incubated for 30 min and added to 293 T cells. Subsequently, the cells were incubated at 37 °C in 5% CO2 for 6–8 h. Thereafter, the medium was replaced with DMEM complete medium containing 10% FBS. After 48 h of transfection, the expression of green fluorescence proteins in the cells was observed under a fluorescence microscope. The media were collected if the expression of green fluorescence proteins was observed.\nThe supernatant was collected and centrifuged at 10,000 rpm for 40 min at 4 °C. Then, the supernatant was discarded and the sediment was added to a 0.5 mL medium. The titre of the viruses was determined according to the procedure provided by the qPCR Lentivirus Titration (Titre) Kit; subsequently, the lentiviruses were centrifuged and concentrated to achieve a final titre with an order of magnitude of 108 TU/mL and stored at −80 °C after sub-packaging. The treated viruses were added onto the 786-O cells cultured in a 6-well plate at a volume of 5 μL/well. Cells were then incubated with polybrene (final concentration: 0.8 µg/mL) at 37 °C in 5% CO2 for 48 h and the expression level of green fluorescence protein was analysed.\nDetection of cell survival rate and virus inhibition rate The eight Chinese medicines (G. uralensis rhizome, R. japonica root, N. cataria stem, L. erythrorhizon root, S. flavescens root, C. cassia bark, E. japonica root, and Cortex Mori) were provided by the Department of Internal Medicine of Traditional Chinese Medicine at Chongqing University Three Gorges Hospital and were identified by Fangzheng Mou, director of the Department of Internal Medicine of Traditional Chinese Medicine. A total of 200 mg of each granule preparation of the eight medicines was accurately weighed. A liquid nitrogen grinder (40 mesh) was used to grind the granule preparations into powders. The powders were dissolved and mixed with 200 μL dimethyl sulfoxide (DMSO), then treated with ultrasound at room temperature for 30–60 min. The extract solution was cooled to room temperature, and three working concentration gradients were prepared for each drug: 10, 1, and 0.1 mg/mL.\nHuman renal carcinoma 786-O cells were seeded onto transparent 96-well plates (105 cell/well) and then incubated with the Chinese medicine solutions at the above-mentioned concentrations and HIV-1 viruses (multiplicity of infection = 20) and polybrene (working concentration: 0.8 µg/mL; same below). The cells were further cultured at 37 °C in 5% CO2 for 48 h, and subsequently cultured with 10 μL Cell Counting Kit-8 (CCK8) for another 2 h. The cell survival rate was then detected.\nAdditionally, cells were seeded onto white 96-well plates and the cells were incubated with the eight Chinese medicine solutions at the above-mentioned concentrations. The cells were further incubated at 37 °C in 5% CO2 for 1 h and co-cultured with viruses. 
Subsequently, the cells were incubated with polybrene and the cells continued to be cultured for another 48 h. Next, luciferase substrate was added to the cells and shaken gently in the dark for 15 min.\nThe luciferase luminescence signals were detected by a microplate reader, and the inhibition rates of the medicinal compositions on lentivirus infection were calculated.\nCell survival rate=[(As−Ab)/(Ac−Ab)]×100%.\nVirus inhibition rate=[(As−Ab)/(Ac−Ab)]×100%.\n\nAs: Experimental well (the well contained medicinal compositions, cells, and culture medium)\nAc: Negative control well (the well did not contain any medicinal compositions, only cells and culture medium)\nAb: Blank control well (the well did not contain any medicinal compositions, only culture medium).\nIn the subsequent steps, Chinese medicine granules associated with a high cell survival rate or high cell inhibition rate were selected as candidate medicines for chemical composition analysis.\nThe eight Chinese medicines (G. uralensis rhizome, R. japonica root, N. cataria stem, L. erythrorhizon root, S. flavescens root, C. cassia bark, E. japonica root, and Cortex Mori) were provided by the Department of Internal Medicine of Traditional Chinese Medicine at Chongqing University Three Gorges Hospital and were identified by Fangzheng Mou, director of the Department of Internal Medicine of Traditional Chinese Medicine. A total of 200 mg of each granule preparation of the eight medicines was accurately weighed. A liquid nitrogen grinder (40 mesh) was used to grind the granule preparations into powders. The powders were dissolved and mixed with 200 μL dimethyl sulfoxide (DMSO), then treated with ultrasound at room temperature for 30–60 min. The extract solution was cooled to room temperature, and three working concentration gradients were prepared for each drug: 10, 1, and 0.1 mg/mL.\nHuman renal carcinoma 786-O cells were seeded onto transparent 96-well plates (105 cell/well) and then incubated with the Chinese medicine solutions at the above-mentioned concentrations and HIV-1 viruses (multiplicity of infection = 20) and polybrene (working concentration: 0.8 µg/mL; same below). The cells were further cultured at 37 °C in 5% CO2 for 48 h, and subsequently cultured with 10 μL Cell Counting Kit-8 (CCK8) for another 2 h. The cell survival rate was then detected.\nAdditionally, cells were seeded onto white 96-well plates and the cells were incubated with the eight Chinese medicine solutions at the above-mentioned concentrations. The cells were further incubated at 37 °C in 5% CO2 for 1 h and co-cultured with viruses. Subsequently, the cells were incubated with polybrene and the cells continued to be cultured for another 48 h. 
Next, luciferase substrate was added to the cells and shaken gently in the dark for 15 min.\nThe luciferase luminescence signals were detected by a microplate reader, and the inhibition rates of the medicinal compositions on lentivirus infection were calculated.\nCell survival rate=[(As−Ab)/(Ac−Ab)]×100%.\nVirus inhibition rate=[(As−Ab)/(Ac−Ab)]×100%.\n\nAs: Experimental well (the well contained medicinal compositions, cells, and culture medium)\nAc: Negative control well (the well did not contain any medicinal compositions, only cells and culture medium)\nAb: Blank control well (the well did not contain any medicinal compositions, only culture medium).\nIn the subsequent steps, Chinese medicine granules associated with a high cell survival rate or high cell inhibition rate were selected as candidate medicines for chemical composition analysis.\nReal-time quantitative PCR analysis of HIV-1 DNA To further determine the mechanism of inhibition on HIV infection in the early stage, the real-time quantitative PCR (qPCR) probe method was used to detect changes in the expression levels of HIV DNA products during HIV-1 cell infection. The primers and probes were designed according to previous studies (King and Attardi 1989; Butler et al. 2001; Brussel and Sonigo 2003; Bacman and Moraes 2007). Mitochondrial DNA was selected as the internal reference, and the primer information is as shown in Table 1 (Butler et al. 2001; Brussel and Sonigo 2003).\nRT-PCR primers and probes.\nChinese medicines and the positive control medicine, zidovudine (AZT), were both dissolved in DMSO. The initial concentration of Chinese medicines was determined based on the results from a previous step. The initial concentration of the AZT was 100 μg/mL.\nHuman renal carcinoma 786-O cells were cultured in 6-well plates, and the cells + AZT, the cells + lentivirus, and untreated cells were assigned to the positive control group, negative control group, and blank control group, respectively. The cells were initially incubated with medicine supplementation for 1 h and then were incubated with lentivirus and polybrene at 37 °C in 5% CO2 for 8 h or 24 h, respectively. (Note: The reaction time of products detected by different probes is different). The cells in the 6-well plates were subsequently washed gently 8–10 times with normal saline to remove the remaining drugs and viruses. Subsequently, the cells were digested with pancreatin and washed thrice with normal saline. DNA was extracted from the cells according to the method provided by the DNeasy Blood and Tissue (QIAGEN, Dusseldorf, Germany) kit, and the DNA concentration was measured. The isolated DNA was stored at −20 °C until further use. Three parallel experiments were performed for each group.\nDetection of total RT products and late RT products: The primers and probes in Table 1 were used to detect the expressions of total RT products and late RT products at 8 h after incubation with lentivirus and medicine supplementation. The reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL upstream and downstream primers (900 mmol), 0.5 µL probe, and 10 µL qPCR detection reagent; some water was added to reach a final volume of 20 µL, and three replicate wells were established. The reaction procedure was 55 °C for 3 min, 95 °C for 5 s, and 60 °C for 30 s for 40 cycles.\nDetection of integrated DNA: DNA products obtained after 24 h incubation with lentivirus and medicine supplementation were used for this step. 
Firstly, the primers in Table 1 were used to perform Alu-LTR PCR; the reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL upstream and downstream primers (900 mmol), and 10 µL High Fidelity PCR Enzyme Mix; water was added to make a final volume of 20 µL. The reaction procedure was 95 °C for 5 min, 95 °C for 30 s, 60 °C for 30 s, and 72 °C for 3 min for 16 cycles; and then 72 °C for 5 min. The products were stored at 16 °C until use. Further, 1 µL of Alu-LTR PCR product was used for qPCR detection using the same method as previously described.\nTo further determine the mechanism of inhibition on HIV infection in the early stage, the real-time quantitative PCR (qPCR) probe method was used to detect changes in the expression levels of HIV DNA products during HIV-1 cell infection. The primers and probes were designed according to previous studies (King and Attardi 1989; Butler et al. 2001; Brussel and Sonigo 2003; Bacman and Moraes 2007). Mitochondrial DNA was selected as the internal reference, and the primer information is as shown in Table 1 (Butler et al. 2001; Brussel and Sonigo 2003).\nRT-PCR primers and probes.\nChinese medicines and the positive control medicine, zidovudine (AZT), were both dissolved in DMSO. The initial concentration of Chinese medicines was determined based on the results from a previous step. The initial concentration of the AZT was 100 μg/mL.\nHuman renal carcinoma 786-O cells were cultured in 6-well plates, and the cells + AZT, the cells + lentivirus, and untreated cells were assigned to the positive control group, negative control group, and blank control group, respectively. The cells were initially incubated with medicine supplementation for 1 h and then were incubated with lentivirus and polybrene at 37 °C in 5% CO2 for 8 h or 24 h, respectively. (Note: The reaction time of products detected by different probes is different). The cells in the 6-well plates were subsequently washed gently 8–10 times with normal saline to remove the remaining drugs and viruses. Subsequently, the cells were digested with pancreatin and washed thrice with normal saline. DNA was extracted from the cells according to the method provided by the DNeasy Blood and Tissue (QIAGEN, Dusseldorf, Germany) kit, and the DNA concentration was measured. The isolated DNA was stored at −20 °C until further use. Three parallel experiments were performed for each group.\nDetection of total RT products and late RT products: The primers and probes in Table 1 were used to detect the expressions of total RT products and late RT products at 8 h after incubation with lentivirus and medicine supplementation. The reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL upstream and downstream primers (900 mmol), 0.5 µL probe, and 10 µL qPCR detection reagent; some water was added to reach a final volume of 20 µL, and three replicate wells were established. The reaction procedure was 55 °C for 3 min, 95 °C for 5 s, and 60 °C for 30 s for 40 cycles.\nDetection of integrated DNA: DNA products obtained after 24 h incubation with lentivirus and medicine supplementation were used for this step. Firstly, the primers in Table 1 were used to perform Alu-LTR PCR; the reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL upstream and downstream primers (900 mmol), and 10 µL High Fidelity PCR Enzyme Mix; water was added to make a final volume of 20 µL. The reaction procedure was 95 °C for 5 min, 95 °C for 30 s, 60 °C for 30 s, and 72 °C for 3 min for 16 cycles; and then 72 °C for 5 min. 
The products were stored at 16 °C until use. Further, 1 µL of Alu-LTR PCR product was used for qPCR detection using the same method as previously described.\nSeparation and identification of natural compound constituents contained by liquid mass spectrometry Based on the experimental results obtained from the survival rate and viral inhibition rate analyses of the lentivirus-transfected cells treated with eight Chinese medicines, we chose the best drug for further experiments. According to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019), select monomers that may have anti-HIV effects. These monomers were analysed and detected by liquid mass spectrometry. A total of 50 mg of each Chinese medicine was weighed and dissolved in 50 μL DMSO (as above), and then treated by ultrasound at room temperature for 30 min and filtered through a 0.22 μm organic membrane. A 5 μL injection sample was taken for each LC/MS analysis. The conditions for liquid mass spectrometry were as follows:\nChromatographic column: Shim-pack VP-ODS18 (250 mm × 2.0 mm, 5 μm); mobile phase: 0.05% formic acid aqueous solution (A), acetonitrile (B), gradient elution (0–5 min, 9% B; 5–22 min, 9–22% B; 22–40 min, 22–65% B; 40–60 min, 65–95% B; 60–65 min, 95% B); detection wavelength: 210 nm; flow rate: 0.3 mL/min; column temperature: 30 °C, and injection volume: 5 μL.\nMass spectrometry conditions: ionisation mode: ESI (±); atomising gas flow rate: 3.0 L/min; drying gas flow rate: 10 L/min; heating gas flow rate: 10 L/min; interface temperature; 450 °C; DL temperature: 300 °C; and heating module temperature: 400 °C.\nAfterward, real-time quantitative PCR analysis of the identified 10 monomers (working concentration: 1 mg/mL) on HIV-1 DNA was performed using the same method, as shown above.\nBased on the experimental results obtained from the survival rate and viral inhibition rate analyses of the lentivirus-transfected cells treated with eight Chinese medicines, we chose the best drug for further experiments. According to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019), select monomers that may have anti-HIV effects. These monomers were analysed and detected by liquid mass spectrometry. A total of 50 mg of each Chinese medicine was weighed and dissolved in 50 μL DMSO (as above), and then treated by ultrasound at room temperature for 30 min and filtered through a 0.22 μm organic membrane. A 5 μL injection sample was taken for each LC/MS analysis. 
The conditions for liquid mass spectrometry were as follows:\nChromatographic column: Shim-pack VP-ODS18 (250 mm × 2.0 mm, 5 μm); mobile phase: 0.05% formic acid aqueous solution (A), acetonitrile (B), gradient elution (0–5 min, 9% B; 5–22 min, 9–22% B; 22–40 min, 22–65% B; 40–60 min, 65–95% B; 60–65 min, 95% B); detection wavelength: 210 nm; flow rate: 0.3 mL/min; column temperature: 30 °C, and injection volume: 5 μL.\nMass spectrometry conditions: ionisation mode: ESI (±); atomising gas flow rate: 3.0 L/min; drying gas flow rate: 10 L/min; heating gas flow rate: 10 L/min; interface temperature; 450 °C; DL temperature: 300 °C; and heating module temperature: 400 °C.\nAfterward, real-time quantitative PCR analysis of the identified 10 monomers (working concentration: 1 mg/mL) on HIV-1 DNA was performed using the same method, as shown above.\nScreening of monomer targets We used PubChem (https://pubchem.ncbi.nlm.nih.gov/) and Discovery Studio 2020 Client to obtain the monomer structure, uploaded the structural formula to PharmMapper (http://www.lilab-ecust.cn/pharmmapper/) (Wang et al. 2017), target selection ‘Human Protein Targets Only (v2010, 2241)’, and uploaded the predicted protein target to UniProt (https://www.uniprot.org/) with Normalised Fit Score > 0.7 to obtain the gene targets. We uploaded the obtained gene targets to Metascape (https://metascape.org/gp/index.html#/main/step1) (Zhou et al. 2019) for Gene Ontology (GO) enrichment analysis and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway analysis and uploaded the enrichment analysis GO results to a bioinformatics online tool (http://www.bioinformatics.com.cn/) for visualisation. We then uploaded the obtained gene targets to String (https://string-db.org/) to construct a protein-protein interaction (PPI) network and used Cytoscape 3.7.2 to perform further processing of the results.\nWe used PubChem (https://pubchem.ncbi.nlm.nih.gov/) and Discovery Studio 2020 Client to obtain the monomer structure, uploaded the structural formula to PharmMapper (http://www.lilab-ecust.cn/pharmmapper/) (Wang et al. 2017), target selection ‘Human Protein Targets Only (v2010, 2241)’, and uploaded the predicted protein target to UniProt (https://www.uniprot.org/) with Normalised Fit Score > 0.7 to obtain the gene targets. We uploaded the obtained gene targets to Metascape (https://metascape.org/gp/index.html#/main/step1) (Zhou et al. 2019) for Gene Ontology (GO) enrichment analysis and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway analysis and uploaded the enrichment analysis GO results to a bioinformatics online tool (http://www.bioinformatics.com.cn/) for visualisation. We then uploaded the obtained gene targets to String (https://string-db.org/) to construct a protein-protein interaction (PPI) network and used Cytoscape 3.7.2 to perform further processing of the results.\nAnalysis of potential targets of monomers acting on HIV We used PathCard (https://PathCards.genecards.org/) (Belinky et al. 2015) to obtain all gene names for HIV-related pathways and used a bioinformatics online tool (http://www.bioinformatics.com.cn/) to obtain the intersection of HIV-related genes and the target of monomer. The results contained possible targets of monomer and HIV. We then performed enrichment analysis and PPI analysis on the target again.\nWe used PathCard (https://PathCards.genecards.org/) (Belinky et al. 
2015) to obtain all gene names for HIV-related pathways and used a bioinformatics online tool (http://www.bioinformatics.com.cn/) to obtain the intersection of HIV-related genes and the target of monomer. The results contained possible targets of monomer and HIV. We then performed enrichment analysis and PPI analysis on the target again.\nStatistical analysis Results are presented as the mean ± standard error of the mean (SEM) for the sample number of each group. One-way analysis of variance (ANOVA) was used to evaluate the significance between multiple groups. Tukey’s multiple comparisons test was used to compare the mean values of two specific groups in GraphPad Prism 8.0.1, and p < 0.05 was considered significant.\nResults are presented as the mean ± standard error of the mean (SEM) for the sample number of each group. One-way analysis of variance (ANOVA) was used to evaluate the significance between multiple groups. Tukey’s multiple comparisons test was used to compare the mean values of two specific groups in GraphPad Prism 8.0.1, and p < 0.05 was considered significant.", "Human renal carcinoma 786-O cells and human embryonic kidney 293 T cells were purchased from Beina Chuanglian Biotechnology Institute (Beijing, China). Cells were stored in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, Suzhou, China) or Dulbecco’s modified Eagle’s medium (DMEM) (Gibco, Suzhou, China) and cultured in media containing 10% foetal bovine serum (FBS) at 37 °C in 5% CO2.", "pHIV-lus-zsGreen plasmids were transfected into DH5α competent cells and the cell suspension was smeared on LB agar plates (LB medium containing 10 g/L tryptone, 5 g/L yeast extract, 10 g/L NaCl, 15 g/L agar, and 10 μg/mL ampicillin), and bacterial plaques were screened. All operations were performed according to the instructions of the EndoFree Plasmid Maxi Kit (QIAGEN, Dusseldorf, Germany) to obtain many high-purity endotoxin-free plasmids, which were stored at −20 °C. The bacterial solution and glycerine were mixed at a ratio of 1:1 and then stored at −80 °C. The primer was designed in the promoter region of the plasmids, the extracted plasmids were sent to the Beijing Genomics Institute for sequencing, and DNAMAN 6.0 software was used to compare the sequencing results with the reference sequence provided by Addgene (http://www.addgene.org/).\nThe mixture solutions containing the two plasmids, pHIV-Lus-ZsGreen and 2nd Generation Packaging Mix, were added into a serum-free and double-antibody-free DMEM medium at a ratio of 1:1. Further, the same procedure was performed for the DNAfectinTM Plus Transfection Reagent, and both mixture solutions were incubated for 5 min, respectively. Then, both solutions were co-incubated for 30 min and added to 293 T cells. Subsequently, the cells were incubated at 37 °C in 5% CO2 for 6–8 h. Thereafter, the medium was replaced with DMEM complete medium containing 10% FBS. After 48 h of transfection, the expression of green fluorescence proteins in the cells was observed under a fluorescence microscope. The media were collected if the expression of green fluorescence proteins was observed.\nThe supernatant was collected and centrifuged at 10,000 rpm for 40 min at 4 °C. Then, the supernatant was discarded and the sediment was added to a 0.5 mL medium. 
The titre of the viruses was determined according to the procedure provided by the qPCR Lentivirus Titration (Titre) Kit; the lentiviruses were then centrifuged and concentrated to a final titre on the order of 10^8 TU/mL and stored at −80 °C after sub-packaging. The treated viruses were added to 786-O cells cultured in a 6-well plate at 5 μL/well. Cells were then incubated with polybrene (final concentration: 0.8 µg/mL) at 37 °C in 5% CO2 for 48 h, and the expression level of green fluorescent protein was analysed.

Detection of cell survival rate and virus inhibition rate
The eight Chinese medicines (G. uralensis rhizome, R. japonica root, N. cataria stem, L. erythrorhizon root, S. flavescens root, C. cassia bark, E. japonica root, and Cortex Mori) were provided by the Department of Internal Medicine of Traditional Chinese Medicine at Chongqing University Three Gorges Hospital and were identified by Fangzheng Mou, director of that department. A total of 200 mg of each granule preparation of the eight medicines was accurately weighed. A liquid nitrogen grinder (40 mesh) was used to grind the granule preparations into powders. The powders were dissolved in 200 μL dimethyl sulfoxide (DMSO) and treated with ultrasound at room temperature for 30–60 min. The extract solution was cooled to room temperature, and three working concentrations were prepared for each drug: 10, 1, and 0.1 mg/mL.

Human renal carcinoma 786-O cells were seeded onto transparent 96-well plates (1 × 10^5 cells/well) and then incubated with the Chinese medicine solutions at the above-mentioned concentrations, HIV-1 viruses (multiplicity of infection = 20), and polybrene (working concentration: 0.8 µg/mL; same below). The cells were cultured at 37 °C in 5% CO2 for 48 h and subsequently incubated with 10 μL Cell Counting Kit-8 (CCK-8) reagent for another 2 h, after which the cell survival rate was detected.

Additionally, cells were seeded onto white 96-well plates and incubated with the eight Chinese medicine solutions at the above-mentioned concentrations. The cells were incubated at 37 °C in 5% CO2 for 1 h, co-cultured with viruses, incubated with polybrene, and cultured for another 48 h. Next, luciferase substrate was added and the plates were shaken gently in the dark for 15 min. The luciferase luminescence signals were detected by a microplate reader, and the inhibition rates of the medicinal compositions on lentivirus infection were calculated.

Cell survival rate = [(As − Ab)/(Ac − Ab)] × 100%.
Virus inhibition rate = [(As − Ab)/(Ac − Ab)] × 100%.

As: experimental well (containing medicinal composition, cells, and culture medium); Ac: negative control well (no medicinal composition; cells and culture medium only); Ab: blank control well (no medicinal composition or cells; culture medium only).

In the subsequent steps, Chinese medicine granules associated with a high cell survival rate or a high virus inhibition rate were selected as candidate medicines for chemical composition analysis.
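The plate-reader arithmetic above is straightforward to script. The following is a minimal sketch (not part of the original study) that applies the two formulas exactly as printed, with hypothetical readings; note that both rates share the same algebraic form in the source text.

```python
# Minimal sketch of the plate-reader calculations, using the formulas exactly as
# printed in the methods; the example readings are hypothetical.
def normalised_rate(a_s: float, a_c: float, a_b: float) -> float:
    """[(As - Ab) / (Ac - Ab)] x 100, the form shared by both rate formulas."""
    return (a_s - a_b) / (a_c - a_b) * 100.0

# Hypothetical CCK-8 absorbances: treated well, negative control, blank.
survival = normalised_rate(a_s=0.82, a_c=1.10, a_b=0.08)
# Hypothetical luciferase signals for the virus inhibition assay.
inhibition = normalised_rate(a_s=5.2e4, a_c=1.9e5, a_b=1.0e3)
print(f"cell survival rate: {survival:.1f}%")
print(f"virus inhibition rate: {inhibition:.1f}%")
```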
Real-time quantitative PCR analysis of HIV-1 DNA
To further determine the mechanism of inhibition of early-stage HIV infection, a real-time quantitative PCR (qPCR) probe method was used to detect changes in the expression levels of HIV DNA products during HIV-1 cell infection. The primers and probes were designed according to previous studies (King and Attardi 1989; Butler et al. 2001; Brussel and Sonigo 2003; Bacman and Moraes 2007). Mitochondrial DNA was selected as the internal reference; the primer and probe information is shown in Table 1 (Butler et al. 2001; Brussel and Sonigo 2003).

The Chinese medicines and the positive control medicine, zidovudine (AZT), were both dissolved in DMSO. The initial concentration of the Chinese medicines was determined from the results of the previous step; the initial concentration of AZT was 100 μg/mL.

Human renal carcinoma 786-O cells were cultured in 6-well plates; cells + AZT, cells + lentivirus, and untreated cells were assigned to the positive control, negative control, and blank control groups, respectively. The cells were first incubated with medicine supplementation for 1 h and then incubated with lentivirus and polybrene at 37 °C in 5% CO2 for 8 h or 24 h, respectively (the reaction times differ for the products detected by the different probes). The cells were then washed gently 8–10 times with normal saline to remove residual drugs and viruses, digested with pancreatin, and washed three times with normal saline. DNA was extracted according to the DNeasy Blood and Tissue kit protocol (QIAGEN, Dusseldorf, Germany), the DNA concentration was measured, and the isolated DNA was stored at −20 °C until further use. Three parallel experiments were performed for each group.

Detection of total RT products and late RT products: the primers and probes in Table 1 were used to detect the expression of total RT products and late RT products at 8 h after incubation with lentivirus and medicine supplementation. The reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL each of the upstream and downstream primers (900 mmol), 0.5 µL probe, and 10 µL qPCR detection reagent, with water added to a final volume of 20 µL; three replicate wells were established. The reaction procedure was 55 °C for 3 min, followed by 40 cycles of 95 °C for 5 s and 60 °C for 30 s.

Detection of integrated DNA: DNA products obtained after 24 h of incubation with lentivirus and medicine supplementation were used for this step. First, the primers in Table 1 were used to perform Alu-LTR PCR; the reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL each of the upstream and downstream primers (900 mmol), and 10 µL High Fidelity PCR Enzyme Mix, with water added to a final volume of 20 µL. The reaction procedure was 95 °C for 5 min; 16 cycles of 95 °C for 30 s, 60 °C for 30 s, and 72 °C for 3 min; and a final extension at 72 °C for 5 min. The products were stored at 16 °C until use. Then, 1 µL of the Alu-LTR PCR product was used for qPCR detection as described above.
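The results sections below report relative expression of the RT-stage products against the mitochondrial internal reference, but the text does not state the quantification model. The sketch below assumes the common 2^-ddCt method as an illustration; all Ct values in it are hypothetical.

```python
# Minimal sketch of relative quantification for the HIV DNA products, assuming
# the common 2^-ddCt method with mitochondrial DNA as the internal reference.
# The source text does not state its exact model; all Ct values are hypothetical.
def relative_expression(ct_target: float, ct_mito: float,
                        ct_target_ref: float, ct_mito_ref: float) -> float:
    d_ct_sample = ct_target - ct_mito             # normalise to mitochondrial DNA
    d_ct_reference = ct_target_ref - ct_mito_ref  # e.g. the AZT control group
    return 2.0 ** -(d_ct_sample - d_ct_reference)

# Hypothetical values: late RT product in a treated well vs the AZT control.
fold = relative_expression(ct_target=24.1, ct_mito=18.0,
                           ct_target_ref=28.3, ct_mito_ref=18.2)
print(f"relative late-RT product level: {fold:.2f}-fold vs control")
```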
Separation and identification of natural compound constituents by liquid mass spectrometry
Based on the experimental results obtained from the survival rate and viral inhibition rate analyses of the lentivirus-transfected cells treated with the eight Chinese medicines, we chose the best drug for further experiments. According to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019), we selected monomers that may have anti-HIV effects, and these monomers were analysed and detected by liquid mass spectrometry. A total of 50 mg of each Chinese medicine was weighed and dissolved in 50 μL DMSO (as above), treated by ultrasound at room temperature for 30 min, and filtered through a 0.22 μm organic membrane. A 5 μL injection sample was taken for each LC/MS analysis, under the chromatographic and mass spectrometry conditions described above.
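The gradient programme given earlier is a piecewise-linear profile. As an illustration only (not part of the original workflow), the sketch below encodes that table and linearly interpolates the acetonitrile percentage (%B) at an arbitrary retention time.

```python
# Minimal sketch encoding the reported gradient programme (%B = acetonitrile) and
# linearly interpolating %B at a given retention time; illustration only.
GRADIENT = [  # (time_min, percent_B) breakpoints from the reported programme
    (0, 9), (5, 9), (22, 22), (40, 65), (60, 95), (65, 95),
]

def percent_b(t: float) -> float:
    """Linearly interpolate %B at time t (minutes) within the gradient table."""
    if not 0 <= t <= GRADIENT[-1][0]:
        raise ValueError("time outside the 0-65 min programme")
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]

print(percent_b(30.0))  # %B midway through the 22-40 min ramp (~41.1)
```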
Results
HIV packaging
The pHIV-Lus-ZsGreen plasmid was sequenced and compared, and the sequence was consistent with the reference sequence, indicating that the vector was correct and that subsequent experiments could proceed (Figure 1).
Figure 1. Vector map and sequencing results. (A) pLenti-P2A Vector. (B) pLenti-P2B Vector. (C) pHIV-lus-ZsGreen Vector. (D) Sequence alignment results.

Lentivirus-infected cells
Fluorescence microscopy was performed 48 h after co-transfection of the three plasmids into 293 T cells. The results showed high expression of green fluorescent protein (GFP), indicating that the plasmids had been successfully transfected (Figure 2(A,B)). The supernatant was collected 48 h after transfection and added to 786-O cells, which expressed green fluorescence 48 h later (Figure 2(C,D)), indicating that the viruses had been successfully packaged. After concentration, the titre of the viruses was determined according to the method provided by the virus detection kit, and the viruses were aliquoted and stored at −80 °C.
Figure 2. Plasmid transfection and viral packaging. (A,B) Transfection of plasmids into 293 T cells. (C,D) 786-O cells infected by packaged viruses.

Preliminary screening results of eight Chinese medicines
The cell survival rate and virus inhibition rate were detected and compared among the eight groups treated with Chinese medicines. The cell survival rate in the Cortex Mori group (means for the high- to low-concentration groups: 63.28%, 57.94%, and 56.94%; same below) was the highest (Figure 3(A)). The HIV inhibition rates of the Cortex Mori group (77.94%, 74.95%, and 61.75%) and the N. cataria group (74.24%, 71.91%, and 71.26%) were higher than those of the other six Chinese medicine groups (Figure 3(B)). Thus, Cortex Mori was selected for further study of its mechanism of action. Since the results at 10 mg/mL and 1 mg/mL were similar, 1 mg/mL was selected as the initial concentration of Cortex Mori in the follow-up study.
Figure 3. Screening results of the eight Chinese medicinal compositions. (A) Effects on cell survival rates. (B) Effects on viral inhibition rates. 1: G. uralensis; 2: R. japonica; 3: N. cataria; 4: L. erythrorhizon; 5: S. flavescens; 6: C. cassia; 7: E. japonica; 8: Cortex Mori.
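The screening step selects the medicine with the best joint profile of survival and inhibition. The following is a minimal sketch of that selection logic, seeded only with the profiles reported above; the N. cataria survival values and the omitted six medicines are hypothetical placeholders.

```python
# Minimal sketch of the screening logic: rank medicines by mean virus inhibition,
# breaking ties by mean cell survival. Only the Cortex Mori values (both rows) and
# the N. cataria inhibition values come from the reported results; the N. cataria
# survival values are hypothetical, and the other six medicines are omitted.
from statistics import mean

profiles = {
    # medicine: ([survival % at 10, 1, 0.1 mg/mL], [inhibition % at 10, 1, 0.1 mg/mL])
    "Cortex Mori": ([63.28, 57.94, 56.94], [77.94, 74.95, 61.75]),
    "N. cataria":  ([48.0, 47.0, 45.0],    [74.24, 71.91, 71.26]),
    # ... remaining six medicines would go here
}

ranked = sorted(profiles.items(),
                key=lambda kv: (mean(kv[1][1]), mean(kv[1][0])),
                reverse=True)
for name, (survival, inhibition) in ranked:
    print(f"{name}: mean inhibition {mean(inhibition):.1f}%, "
          f"mean survival {mean(survival):.1f}%")
```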
Effects of Cortex Mori on the products of HIV RT at different stages
After treatment with Cortex Mori (1 mg/mL), the relative gene expression levels of total RT products, late RT products, and integrated DNA (group means: 9.88, 16.16, and 11.83, respectively; same below) were significantly reduced compared with the HIV group (22.94, 24.45, and 45.43, respectively; all p < 0.05), although they remained higher than those of the AZT control group (1.00, 1.21, and 1.00, respectively; all p < 0.01) (Figure 4(A–C)). These results show that Cortex Mori significantly inhibited the HIV-1 RT enzyme, although less potently than AZT.
Figure 4. Effects of Cortex Mori on the expression of RT products at different stages. (A) Total RT products. (B) Late RT products. (C) Integrated DNA products. CM group: DMSO, HIV, and Cortex Mori added to the cells; AZT group: AZT and DMSO only; HIV group: HIV and DMSO only; DMSO group: DMSO only; control group: no treatment. Data are expressed as the mean ± SEM. **p < 0.01, ***p < 0.001, ****p < 0.0001.
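As a worked illustration of how the reported group means translate into percentage reductions relative to the untreated HIV group (a derived quantity, not one reported in the paper):

```python
# Derived illustration: percentage reduction of each RT-stage product in the
# Cortex Mori group relative to the HIV group, from the reported means.
hiv = {"total RT": 22.94, "late RT": 24.45, "integrated DNA": 45.43}
cortex_mori = {"total RT": 9.88, "late RT": 16.16, "integrated DNA": 11.83}

for stage, hiv_mean in hiv.items():
    reduction = (1 - cortex_mori[stage] / hiv_mean) * 100
    print(f"{stage}: {reduction:.0f}% reduction vs HIV group")
# total RT: ~57%, late RT: ~34%, integrated DNA: ~74%
```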
Chemical monomers in Cortex Mori detected by liquid mass spectrometry
To identify the chemical composition of Cortex Mori granules, to clarify the active natural compound monomers they contain, and to identify structural characteristics useful for developing new antiviral drugs, 10 Chinese medicine monomers in Cortex Mori were selected according to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019). Ten standard compounds were identified in Cortex Mori granules by gradient-elution liquid chromatography and mass spectrometry analysis: mulberroside A (peak number: 15; same below), chlorogenic acid (20, 21), neochlorogenic acid (24), palmitic acid (33), astragalin (38), β-sitosterol (56), emodin (57), ursolic acid (62), morusin (63), and lupeol (66) (Figure 5, Table 2).
Figure 5. Chromatogram of the LC-MS peaks used for separation and identification of Cortex Mori granules.
Table 2. Chromatographic peak parameters corresponding to the 10 Chinese medicine monomers contained in Cortex Mori.

Effects of 10 monomers contained in Cortex Mori on the products of RT
Fluorescent PCR and probe-based quantitative methods were used to further verify the inhibitory effects and mechanisms of action of the 10 natural compound monomers contained in Cortex Mori on HIV-1 RT, with the aim of identifying new natural monomer molecules with effective anti-HIV activity.

The 10 Chinese medicine monomers were added to the cells, and DNA products from the different stages were collected. Five of the monomers (emodin, ursolic acid, morusin, chlorogenic acid, and astragalin) showed marked cytotoxicity at a concentration of 1 mg/mL, and the resulting low cell survival (data not published) made it difficult to extract enough DNA for the follow-up assays. We therefore quantified the DNA products of the other five compounds (lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A), and the changes in RT products at each stage were determined by PCR assays (Figure 6(A–C)). The results showed that lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A could each inhibit RT activity to some extent, with neochlorogenic acid showing the inhibitory effect most similar to that of AZT (the strongest among the 10 monomers) (Figure 6(A–C)). Compared with the HIV group (mean relative expression at the three stages: 35.42, 34.78, and 45.57), the neochlorogenic acid group (6.01, 5.76, and 2.53) was significantly decreased (all p < 0.0001).
Figure 6. Effects of the five active monomer compounds in Cortex Mori on the expression of products at different stages of HIV infection. (A) Total RT products. (B) Late RT products. (C) Integrated DNA products. Data are expressed as the mean ± SEM. ***p < 0.001, ****p < 0.0001.

Screening of neochlorogenic acid targets
The PubChem ID of neochlorogenic acid is 5280633; its molecular formula is C16H18O9 and its molecular weight is 354.31 g/mol. A total of 58 protein targets of neochlorogenic acid (Normalised Fit Score > 0.7) were obtained. GO enrichment analysis revealed that these targets mainly involve cellular responses (such as cellular response to lipid and response to inorganic substance), synthetic or metabolic processes (such as aromatic compound catabolic process and carboxylic acid biosynthetic process), and molecular functions (such as phosphotransferase activity, hydrolase activity, oxidoreductase activity, and protein domain-specific binding) (Figure 7(A)). Pathway analysis showed that the main pathways were prostate cancer, the oestrogen signalling pathway, nitrogen metabolism, and other signalling pathways (Figure 7(B)).
In the PPI analysis, after removal of outliers, a network with 40 nodes and 82 edges was obtained, with an average node degree of 4.2; the largest node degrees were those of EGFR, ESR1, and AR (18, 14, and 11, respectively) (Figure 7(C)).
Figure 7. Analysis of the targets of neochlorogenic acid. (A) GO enrichment analysis. (B) Pathway analysis. (C) PPI analysis.

Target analysis of neochlorogenic acid on HIV
A total of 860 genes in 13 pathways were obtained using PathCards, among which the pathways with the highest node degrees were the HIV life cycle and HIV infection (Figure 8(A)). Intersecting these genes with the neochlorogenic acid targets yielded four genes: haemopoietic cell kinase (HCK), epidermal growth factor receptor (EGFR), sarcoma (SRC), and 3-phosphoinositide-dependent protein kinase 1 (PDPK1) (Figure 8(B)). Enrichment analysis of these four genes showed that they were mainly concentrated in the protein autophosphorylation, peptidyl-tyrosine autophosphorylation, epidermal growth factor receptor, and Fc receptor signalling pathway functional categories (Figure 8(C)). Based on the structural formula of neochlorogenic acid (Figure 8(D)), we unexpectedly found that it can bind HIV type 2 (HIV-2) RT (PDB ID: 1MU2) (Figure 8(E)).
Figure 8. Target analysis of neochlorogenic acid acting on HIV. (A) HIV-related pathways. (B) Intersection of HIV-related genes and neochlorogenic acid targets. (C) Intersection gene enrichment results. (D) Structural formula of neochlorogenic acid. (E) Binding model of neochlorogenic acid and 1MU2.
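Hub selection in the PPI network reported above comes down to node-degree ranking. The sketch below is an illustration, not the STRING/Cytoscape workflow used in the study: it computes degrees and the average degree with networkx on a toy edge list (a hypothetical subset of a STRING export).

```python
# Illustration of the node-degree statistics behind the reported PPI network
# (40 nodes, 82 edges, average degree 4.2; hubs EGFR, ESR1, AR). The study used
# STRING + Cytoscape; this sketch shows the computation on a toy edge list.
import networkx as nx

g = nx.Graph()
g.add_edges_from([  # hypothetical subset of a STRING export (protein1, protein2)
    ("EGFR", "ESR1"), ("EGFR", "AR"), ("EGFR", "SRC"),
    ("ESR1", "AR"), ("SRC", "HCK"), ("SRC", "PDPK1"),
])

avg_degree = 2 * g.number_of_edges() / g.number_of_nodes()
hubs = sorted(g.degree, key=lambda nd: nd[1], reverse=True)[:3]
print(f"average node degree: {avg_degree:.1f}")
print("top hubs:", hubs)
```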
Discussion
Under the action of viral RT, proviral DNA is synthesised using viral RNA as the template. Antiretroviral drugs can be used to treat patients with HIV retroviral infection; the American National Institutes of Health (NIH) and the National Centre for AIDS/STD Control and Prevention (NCAIDS/STD) of the Chinese Centre for Disease Control and Prevention recommend antiretroviral drugs for patients with AIDS-related symptoms. Different types of antiretroviral drugs act on different stages of the HIV reverse transcription process. However, the complex combinations of current anti-HIV chemical medicines may cause serious side effects, and the virus may become resistant to them.

Chinese medicines have been widely used against viruses for decades in China. Recently, Chinese medicines played a significant role in the treatment of patients with novel coronavirus pneumonia in China; tens of thousands of patients recovered under the intervention of Chinese medicines, supporting their safety and effectiveness in antiviral treatments
(Luo et al. 2020; Ren et al. 2020; Yang et al. 2020). In general, Chinese medicines have complex components, and many natural compounds from Chinese medicines, such as coumarins, alkaloids, lignans, flavonoids, tannins, and terpenes, have been shown to inhibit HIV RT in vitro (Li et al. 2020).

Cortex Mori is a common medicinal material in TCM with many pharmacological effects, including anti-inflammatory, analgesic, antitussive, anti-asthmatic, diuretic, hypoglycaemic, hypolipidemic, hypotensive, antitumor, and antiviral effects, and it can improve peripheral neuropathy; however, its pharmacological mechanism remains unclear (Du et al. 2003; Hou et al. 2018; Yu et al. 2019; Lu et al. 2020). We found that Cortex Mori significantly inhibited the activity of the HIV-1 RT enzyme, thus blocking viral replication. In in vitro anti-HIV experiments, Shi de Luo et al. (1995) found that 50 types of Chinese medicines, including Cortex Mori, had anti-HIV effects. Cortex Mori is also one of the components of 'Compound SH' (an anti-HIV compound of TCM) (Cheng et al. 2015), and a clinical study showed that the decline in CD4 cell count was slowed and reversed in AIDS patients taking oral Compound SH (Kusum et al. 2004). These findings are consistent with our results, which indicate that mulberry bark is a potential anti-HIV drug.

Shi de Luo et al. (1995) identified six components from Cortex Mori: morusin, kuwanon H, mulberofuran D, mulberofuran K, mulberofuran G, and kuwanon, among which morusin and kuwanon H showed anti-HIV activity in vitro. Xue (2009) obtained nine compounds by mass spectrometry, including tritriacontane, hexadecanoic acid, β-sitosterol, betulinic acid, oleanolic acid, scopoletin, 4′,5,7-trihydroxy-8-(3,3-dimethylallyl)-flavone, morusignin L, and daucosterol, with some anti-HIV effects. In this study, 10 chemical components were identified by mass spectrometry: ursolic acid, emodin, palmitic acid, β-sitosterol, neochlorogenic acid, mulberroside A, chlorogenic acid, astragalin, morusin, and lupeol. We found that neochlorogenic acid had the best pharmacological activity in Cortex Mori. Combined with findings from previous studies, our results show that mulberry contains a variety of anti-HIV components, of which neochlorogenic acid may be only one; the others require further research. Our results also highlight the multi-component, multi-target characteristics of Chinese medicine.

According to our results, the four genes HCK, EGFR, SRC, and PDPK1 may be potential targets through which neochlorogenic acid inhibits HIV infection in humans. The HCK protein is a member of the Src family of non-receptor tyrosine kinases and is preferentially expressed in myeloid and B-lymphoid haematopoietic cells (Moarefi et al. 1997). Nef is a multifunctional pathogenic protein of HIV-1, and its interaction with the Src tyrosine kinase Hck, which is highly expressed in macrophages, is related to the development of AIDS (Suzu et al. 2005; Hiyoshi et al. 2008). We speculate that neochlorogenic acid may competitively bind the HCK protein, thereby inhibiting Nef function. Few studies have examined EGFR and HIV, and they have mainly focussed on mutations (Crequit et al. 2016; Walline et al. 2017; Liu et al. 2019). Kaposi sarcoma (KS) is the most common malignant tumour in HIV/AIDS.
HIV-related exosomes increase the expression of HIV transactivator (TAR) RNA and EGFR in oral mucosal epithelial cells, thereby promoting Kaposi sarcoma-associated herpesvirus (KSHV) infection (Chen et al. 2020). Lin et al. (2011) found that the phosphorylation level of PDPK1 is closely associated with the expression of the p300 protein, which regulates the function of the HIV-1-encoded RNA-binding protein Tat. Together, these reports suggest that our predictions are fairly reliable. In addition, we found that neochlorogenic acid can bind the RT of HIV-2, suggesting that it may also have a favourable inhibitory effect on HIV-2; this further supports neochlorogenic acid as a potent inhibitor of HIV-1 replication.

In follow-up research, the anti-HIV effects of Cortex Mori and neochlorogenic acid should be studied further through enzyme kinetics, primer extension, band-shift experiments, RNase H activity detection, and cell biology, and the possible molecular mechanisms of gene expression should be investigated.

In summary, Cortex Mori has anti-HIV effects with relatively low cytotoxicity, which may be achieved mainly by inhibiting the function of the RT enzyme. In addition, HCK, EGFR, SRC, and PDPK1 may be the protein targets through which neochlorogenic acid inhibits HIV infection in the human body. The precise effect of Cortex Mori on RT needs further study. The existing FDA-approved anti-HIV treatment strategies are prohibitively expensive for more than 90% of HIV-infected patients in developing countries; given the many studies on treating AIDS with Chinese medicine, it is of great significance to explore TCMs to develop affordable, effective drugs to prevent and treat AIDS.

Conclusions
We analysed the anti-HIV effects of eight types of TCMs and found that Cortex Mori inhibits HIV. We further screened and identified 10 compounds from Cortex Mori. Cell experiments indicated that neochlorogenic acid has a good inhibitory effect on the HIV-1 RT enzyme. Neochlorogenic acid may also inhibit HIV through four targets: HCK, EGFR, SRC, and PDPK1. Moreover, neochlorogenic acid may bind HIV-2 RT. Our research can facilitate the development and utilisation of Chinese herbal medicines and can serve as a reference for the development of anti-HIV-1 drugs.
[ "intro", "materials", null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion", "conclusions" ]
[ "Traditional Chinese medicine", "reverse transcriptase", "enrichment analysis" ]
Introduction: Acquired immune deficiency syndrome (AIDS) is an infectious disease characterised by systemic immune system injury caused by human immunodeficiency virus (HIV) infection (Lu et al. 2018; Seydi et al. 2018). The reverse transcriptase (RT), integrase (IN), and protease (PR) enzymes are essential for three key steps of HIV infection and nucleic acid replication and are also the main targets of HIV drug treatments (Andrabi et al. 2014; Laskey and Siliciano 2014). Recently, the search for new drug targets has become an important trend in HIV drug development, and RT inhibitors are a hotspot in the development of anti-HIV drugs (Wang et al. 2020). Because HIV RT is not a high-fidelity DNA polymerase and lacks a proofreading function, it causes elevated mutation rates in HIV during replication, so the emergence of drug-resistant viruses is inevitable. Chinese medicines have been used in the treatment of HIV for many years in China. Compared with synthetic compounds, natural compounds extracted from Chinese herbal medicines are characterised by good biological compatibility, relatively low toxicity, and improved immunity; traditional Chinese herbal medicine may therefore allow the development of new anti-HIV drugs with low toxicity and high efficacy (Chu and Liu 2011; Harvey et al. 2015). Chinese herbal medicine is a vital part of traditional Chinese medicine (TCM) and has been used as a treatment technique since its inception in ancient China. Recently, many types of Chinese herbal medicines with different degrees of antiviral activity have been reported (Wan et al. 2020; Yu et al. 2020), including Glycyrrhiza uralensis Fischer (Leguminosae) (rhizome) (Wan et al. 2020), Reynoutria japonica Houtt (Polygonaceae) (root) (Johnston 1990), Nepeta cataria Linn (Labiatae) (stem and leaf) (Johnston 1990), Lithospermum erythrorhizon Sieb. et Zucc (Boraginaceae) (root) (Chen et al. 1997), Sophora flavescens Alt (Leguminosae) (root) (Chen et al. 1997), Cinnamomum cassia Presl (Lauraceae) (bark) (Dai et al. 2012), Euchresta japonica Hook. f. ex Regel (Leguminosae) (root) (Sun et al. 2015), and Cortex Mori [Morus alba L. (Moraceae)] (bark) (Lee et al. 2007). In this study, these eight Chinese medicines were selected and their anti-HIV activities were preliminarily evaluated. Medicines with definite HIV-inhibitory effects were screened from the eight, and their natural compounds were used for HIV inhibition experiments. Furthermore, their functions and mechanisms were explored to determine the active monomeric compounds showing targeted inhibition of HIV-1 RT. The results should provide a theoretical and experimental basis for drug design, structural modification, and the development of a new generation of HIV-RT inhibitors.
Cells were stored in Roswell Park Memorial Institute (RPMI) 1640 medium (Gibco, Suzhou, China) or Dulbecco’s modified Eagle’s medium (DMEM) (Gibco, Suzhou, China) and cultured in media containing 10% foetal bovine serum (FBS) at 37 °C in 5% CO2. Lentivirus packaging pHIV-lus-zsGreen plasmids were transfected into DH5α competent cells and the cell suspension was smeared on LB agar plates (LB medium containing 10 g/L tryptone, 5 g/L yeast extract, 10 g/L NaCl, 15 g/L agar, and 10 μg/mL ampicillin), and bacterial plaques were screened. All operations were performed according to the instructions of the EndoFree Plasmid Maxi Kit (QIAGEN, Dusseldorf, Germany) to obtain many high-purity endotoxin-free plasmids, which were stored at −20 °C. The bacterial solution and glycerine were mixed at a ratio of 1:1 and then stored at −80 °C. The primer was designed in the promoter region of the plasmids, the extracted plasmids were sent to the Beijing Genomics Institute for sequencing, and DNAMAN 6.0 software was used to compare the sequencing results with the reference sequence provided by Addgene (http://www.addgene.org/). The mixture solutions containing the two plasmids, pHIV-Lus-ZsGreen and 2nd Generation Packaging Mix, were added into a serum-free and double-antibody-free DMEM medium at a ratio of 1:1. Further, the same procedure was performed for the DNAfectinTM Plus Transfection Reagent, and both mixture solutions were incubated for 5 min, respectively. Then, both solutions were co-incubated for 30 min and added to 293 T cells. Subsequently, the cells were incubated at 37 °C in 5% CO2 for 6–8 h. Thereafter, the medium was replaced with DMEM complete medium containing 10% FBS. After 48 h of transfection, the expression of green fluorescence proteins in the cells was observed under a fluorescence microscope. The media were collected if the expression of green fluorescence proteins was observed. The supernatant was collected and centrifuged at 10,000 rpm for 40 min at 4 °C. Then, the supernatant was discarded and the sediment was added to a 0.5 mL medium. The titre of the viruses was determined according to the procedure provided by the qPCR Lentivirus Titration (Titre) Kit; subsequently, the lentiviruses were centrifuged and concentrated to achieve a final titre with an order of magnitude of 108 TU/mL and stored at −80 °C after sub-packaging. The treated viruses were added onto the 786-O cells cultured in a 6-well plate at a volume of 5 μL/well. Cells were then incubated with polybrene (final concentration: 0.8 µg/mL) at 37 °C in 5% CO2 for 48 h and the expression level of green fluorescence protein was analysed. pHIV-lus-zsGreen plasmids were transfected into DH5α competent cells and the cell suspension was smeared on LB agar plates (LB medium containing 10 g/L tryptone, 5 g/L yeast extract, 10 g/L NaCl, 15 g/L agar, and 10 μg/mL ampicillin), and bacterial plaques were screened. All operations were performed according to the instructions of the EndoFree Plasmid Maxi Kit (QIAGEN, Dusseldorf, Germany) to obtain many high-purity endotoxin-free plasmids, which were stored at −20 °C. The bacterial solution and glycerine were mixed at a ratio of 1:1 and then stored at −80 °C. The primer was designed in the promoter region of the plasmids, the extracted plasmids were sent to the Beijing Genomics Institute for sequencing, and DNAMAN 6.0 software was used to compare the sequencing results with the reference sequence provided by Addgene (http://www.addgene.org/). 
The mixture solutions containing the two plasmids, pHIV-Lus-ZsGreen and 2nd Generation Packaging Mix, were added into a serum-free and double-antibody-free DMEM medium at a ratio of 1:1. Further, the same procedure was performed for the DNAfectinTM Plus Transfection Reagent, and both mixture solutions were incubated for 5 min, respectively. Then, both solutions were co-incubated for 30 min and added to 293 T cells. Subsequently, the cells were incubated at 37 °C in 5% CO2 for 6–8 h. Thereafter, the medium was replaced with DMEM complete medium containing 10% FBS. After 48 h of transfection, the expression of green fluorescence proteins in the cells was observed under a fluorescence microscope. The media were collected if the expression of green fluorescence proteins was observed. The supernatant was collected and centrifuged at 10,000 rpm for 40 min at 4 °C. Then, the supernatant was discarded and the sediment was added to a 0.5 mL medium. The titre of the viruses was determined according to the procedure provided by the qPCR Lentivirus Titration (Titre) Kit; subsequently, the lentiviruses were centrifuged and concentrated to achieve a final titre with an order of magnitude of 108 TU/mL and stored at −80 °C after sub-packaging. The treated viruses were added onto the 786-O cells cultured in a 6-well plate at a volume of 5 μL/well. Cells were then incubated with polybrene (final concentration: 0.8 µg/mL) at 37 °C in 5% CO2 for 48 h and the expression level of green fluorescence protein was analysed. Detection of cell survival rate and virus inhibition rate The eight Chinese medicines (G. uralensis rhizome, R. japonica root, N. cataria stem, L. erythrorhizon root, S. flavescens root, C. cassia bark, E. japonica root, and Cortex Mori) were provided by the Department of Internal Medicine of Traditional Chinese Medicine at Chongqing University Three Gorges Hospital and were identified by Fangzheng Mou, director of the Department of Internal Medicine of Traditional Chinese Medicine. A total of 200 mg of each granule preparation of the eight medicines was accurately weighed. A liquid nitrogen grinder (40 mesh) was used to grind the granule preparations into powders. The powders were dissolved and mixed with 200 μL dimethyl sulfoxide (DMSO), then treated with ultrasound at room temperature for 30–60 min. The extract solution was cooled to room temperature, and three working concentration gradients were prepared for each drug: 10, 1, and 0.1 mg/mL. Human renal carcinoma 786-O cells were seeded onto transparent 96-well plates (105 cell/well) and then incubated with the Chinese medicine solutions at the above-mentioned concentrations and HIV-1 viruses (multiplicity of infection = 20) and polybrene (working concentration: 0.8 µg/mL; same below). The cells were further cultured at 37 °C in 5% CO2 for 48 h, and subsequently cultured with 10 μL Cell Counting Kit-8 (CCK8) for another 2 h. The cell survival rate was then detected. Additionally, cells were seeded onto white 96-well plates and the cells were incubated with the eight Chinese medicine solutions at the above-mentioned concentrations. The cells were further incubated at 37 °C in 5% CO2 for 1 h and co-cultured with viruses. Subsequently, the cells were incubated with polybrene and the cells continued to be cultured for another 48 h. Next, luciferase substrate was added to the cells and shaken gently in the dark for 15 min. 
The luciferase luminescence signals were detected by a microplate reader, and the inhibition rates of the medicinal compositions on lentivirus infection were calculated. Cell survival rate=[(As−Ab)/(Ac−Ab)]×100%. Virus inhibition rate=[(As−Ab)/(Ac−Ab)]×100%. As: Experimental well (the well contained medicinal compositions, cells, and culture medium) Ac: Negative control well (the well did not contain any medicinal compositions, only cells and culture medium) Ab: Blank control well (the well did not contain any medicinal compositions, only culture medium). In the subsequent steps, Chinese medicine granules associated with a high cell survival rate or high cell inhibition rate were selected as candidate medicines for chemical composition analysis. The eight Chinese medicines (G. uralensis rhizome, R. japonica root, N. cataria stem, L. erythrorhizon root, S. flavescens root, C. cassia bark, E. japonica root, and Cortex Mori) were provided by the Department of Internal Medicine of Traditional Chinese Medicine at Chongqing University Three Gorges Hospital and were identified by Fangzheng Mou, director of the Department of Internal Medicine of Traditional Chinese Medicine. A total of 200 mg of each granule preparation of the eight medicines was accurately weighed. A liquid nitrogen grinder (40 mesh) was used to grind the granule preparations into powders. The powders were dissolved and mixed with 200 μL dimethyl sulfoxide (DMSO), then treated with ultrasound at room temperature for 30–60 min. The extract solution was cooled to room temperature, and three working concentration gradients were prepared for each drug: 10, 1, and 0.1 mg/mL. Human renal carcinoma 786-O cells were seeded onto transparent 96-well plates (105 cell/well) and then incubated with the Chinese medicine solutions at the above-mentioned concentrations and HIV-1 viruses (multiplicity of infection = 20) and polybrene (working concentration: 0.8 µg/mL; same below). The cells were further cultured at 37 °C in 5% CO2 for 48 h, and subsequently cultured with 10 μL Cell Counting Kit-8 (CCK8) for another 2 h. The cell survival rate was then detected. Additionally, cells were seeded onto white 96-well plates and the cells were incubated with the eight Chinese medicine solutions at the above-mentioned concentrations. The cells were further incubated at 37 °C in 5% CO2 for 1 h and co-cultured with viruses. Subsequently, the cells were incubated with polybrene and the cells continued to be cultured for another 48 h. Next, luciferase substrate was added to the cells and shaken gently in the dark for 15 min. The luciferase luminescence signals were detected by a microplate reader, and the inhibition rates of the medicinal compositions on lentivirus infection were calculated. Cell survival rate=[(As−Ab)/(Ac−Ab)]×100%. Virus inhibition rate=[(As−Ab)/(Ac−Ab)]×100%. As: Experimental well (the well contained medicinal compositions, cells, and culture medium) Ac: Negative control well (the well did not contain any medicinal compositions, only cells and culture medium) Ab: Blank control well (the well did not contain any medicinal compositions, only culture medium). In the subsequent steps, Chinese medicine granules associated with a high cell survival rate or high cell inhibition rate were selected as candidate medicines for chemical composition analysis. 
Real-time quantitative PCR analysis of HIV-1 DNA To further determine the mechanism of inhibition of early-stage HIV infection, a real-time quantitative PCR (qPCR) probe method was used to detect changes in the levels of HIV DNA products during HIV-1 cell infection. The primers and probes were designed according to previous studies (King and Attardi 1989; Butler et al. 2001; Brussel and Sonigo 2003; Bacman and Moraes 2007). Mitochondrial DNA was selected as the internal reference, and the primer information is shown in Table 1 (Butler et al. 2001; Brussel and Sonigo 2003). RT-PCR primers and probes. The Chinese medicines and the positive control medicine, zidovudine (AZT), were both dissolved in DMSO. The initial concentration of the Chinese medicines was determined based on the results of the previous step; the initial concentration of AZT was 100 μg/mL. Human renal carcinoma 786-O cells were cultured in 6-well plates, and cells + AZT, cells + lentivirus, and untreated cells were assigned to the positive control, negative control, and blank control groups, respectively. The cells were first incubated with the medicines for 1 h and then incubated with lentivirus and polybrene at 37 °C in 5% CO2 for 8 h or 24 h, respectively (the detection time differs among the products detected by the different probes). The cells in the 6-well plates were then washed gently 8–10 times with normal saline to remove residual drugs and viruses, digested with pancreatin, and washed thrice with normal saline. DNA was extracted from the cells according to the DNeasy Blood and Tissue kit protocol (QIAGEN, Dusseldorf, Germany), and the DNA concentration was measured. The isolated DNA was stored at −20 °C until further use. Three parallel experiments were performed for each group. Detection of total RT products and late RT products: The primers and probes in Table 1 were used to detect the expression of total RT products and late RT products at 8 h after incubation with lentivirus and medicine supplementation. The reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL each of the upstream and downstream primers (900 nmol/L), 0.5 µL probe, and 10 µL qPCR detection reagent; water was added to a final volume of 20 µL, and three replicate wells were established. The reaction procedure was 55 °C for 3 min, followed by 40 cycles of 95 °C for 5 s and 60 °C for 30 s. Detection of integrated DNA: DNA products obtained after 24 h of incubation with lentivirus and medicine supplementation were used for this step. First, the primers in Table 1 were used to perform Alu-LTR PCR; the reaction system contained 1 µL DNA template (100 ng/µL), 0.5 µL each of the upstream and downstream primers (900 nmol/L), and 10 µL High Fidelity PCR Enzyme Mix; water was added to a final volume of 20 µL. The reaction procedure was 95 °C for 5 min; 16 cycles of 95 °C for 30 s, 60 °C for 30 s, and 72 °C for 3 min; and then 72 °C for 5 min. The products were stored at 16 °C until use. Further, 1 µL of the Alu-LTR PCR product was used for qPCR detection using the same method as described above.
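The paper reports relative expression values normalised to the mitochondrial DNA reference but does not state the quantification model. One common reading for this design is Livak's 2^−ΔΔCt method, sketched below; the function is ours, and treating the AZT group as the calibrator is an assumption suggested by its reported values of approximately 1.00.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt relative quantification (one plausible reading).

    ct_target / ct_ref         : Ct of the HIV product (total RT, late RT, or
                                 Alu-LTR) and of the mitochondrial DNA
                                 reference in the same sample
    ct_target_cal / ct_ref_cal : the same two Cts in the calibrator sample
                                 (assumed here to be the AZT group)
    """
    delta_sample = ct_target - ct_ref
    delta_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** -(delta_sample - delta_cal)
```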
Separation and identification of natural compound constituents by liquid chromatography-mass spectrometry Based on the survival rate and viral inhibition rate results for the lentivirus-infected cells treated with the eight Chinese medicines, the best-performing drug was chosen for further experiments. Monomers with possible anti-HIV effects were selected according to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019) and were analysed by liquid chromatography-mass spectrometry (LC-MS). A total of 50 mg of the Chinese medicine was weighed and dissolved in 50 μL DMSO (as above), treated with ultrasound at room temperature for 30 min, and filtered through a 0.22 μm organic membrane.
A 5 μL sample was injected for each LC-MS analysis. The liquid chromatography conditions were as follows: chromatographic column: Shim-pack VP-ODS18 (250 mm × 2.0 mm, 5 μm); mobile phase: 0.05% formic acid aqueous solution (A) and acetonitrile (B), gradient elution (0–5 min, 9% B; 5–22 min, 9–22% B; 22–40 min, 22–65% B; 40–60 min, 65–95% B; 60–65 min, 95% B); detection wavelength: 210 nm; flow rate: 0.3 mL/min; column temperature: 30 °C; injection volume: 5 μL. The mass spectrometry conditions were: ionisation mode: ESI (±); nebulising gas flow rate: 3.0 L/min; drying gas flow rate: 10 L/min; heating gas flow rate: 10 L/min; interface temperature: 450 °C; DL temperature: 300 °C; heating module temperature: 400 °C. Afterwards, real-time quantitative PCR analysis of the effects of the 10 identified monomers (working concentration: 1 mg/mL) on HIV-1 DNA was performed using the same method as above. Screening of monomer targets We used PubChem (https://pubchem.ncbi.nlm.nih.gov/) and Discovery Studio 2020 Client to obtain the monomer structure and uploaded the structural formula to PharmMapper (http://www.lilab-ecust.cn/pharmmapper/) (Wang et al. 2017) with the target set 'Human Protein Targets Only (v2010, 2241)'; the predicted protein targets with a Normalised Fit Score > 0.7 were then mapped to gene targets through UniProt (https://www.uniprot.org/). The gene targets were uploaded to Metascape (https://metascape.org/gp/index.html#/main/step1) (Zhou et al. 2019) for Gene Ontology (GO) enrichment analysis and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway analysis, and the GO enrichment results were uploaded to a bioinformatics online tool (http://www.bioinformatics.com.cn/) for visualisation. We then uploaded the gene targets to STRING (https://string-db.org/) to construct a protein-protein interaction (PPI) network and used Cytoscape 3.7.2 for further processing of the results.
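The PharmMapper step reduces to a simple score filter before the UniProt mapping. The sketch below assumes a CSV export with hypothetical column names ('uniprot_id', 'norm_fit'); the real export format may differ, so treat this as a template rather than a parser for the actual PharmMapper output.

```python
import csv

FIT_CUTOFF = 0.7  # Normalised Fit Score threshold used in this study

def filter_pharmmapper_targets(csv_path):
    """Return the set of UniProt IDs whose Normalised Fit Score exceeds 0.7.

    Column names are assumed, not taken from the real PharmMapper export.
    The resulting IDs would then be mapped to gene symbols via UniProt and
    fed to Metascape (enrichment) and STRING (PPI network).
    """
    keep = set()
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if float(row["norm_fit"]) > FIT_CUTOFF:
                keep.add(row["uniprot_id"])
    return keep
```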
Analysis of potential targets of monomers acting on HIV We used PathCards (https://pathcards.genecards.org/) (Belinky et al. 2015) to obtain all gene names in HIV-related pathways and used a bioinformatics online tool (http://www.bioinformatics.com.cn/) to obtain the intersection of the HIV-related genes and the monomer's targets; this intersection represents the possible targets through which the monomer acts on HIV. Enrichment analysis and PPI analysis were then repeated on these intersection targets. Statistical analysis Results are presented as the mean ± standard error of the mean (SEM) for the sample number of each group. One-way analysis of variance (ANOVA) was used to evaluate the significance of differences among multiple groups, and Tukey's multiple comparisons test was used to compare the mean values of two specific groups in GraphPad Prism 8.0.1; p < 0.05 was considered significant.
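A minimal sketch of this statistical workflow in Python, using SciPy's one-way ANOVA and statsmodels' Tukey HSD (both functions exist with these signatures; the replicate values below are made-up placeholders, not the study's data):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical relative-expression replicates for three groups
cm  = [9.5, 10.1, 10.0]   # Cortex Mori group
hiv = [22.5, 23.1, 23.2]  # HIV (negative control) group
azt = [0.9, 1.0, 1.1]     # AZT (positive control) group

f_stat, p_anova = stats.f_oneway(cm, hiv, azt)  # significance across all groups
print(f_stat, p_anova)

values = np.concatenate([cm, hiv, azt])
groups = ["CM"] * 3 + ["HIV"] * 3 + ["AZT"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise group comparisons
```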
Results: HIV packaging The pHIV-Lus-ZsGreen plasmid was sequenced and compared, and the sequence was consistent with the reference sequence, indicating that the vector was correct and subsequent experiments could be performed (Figure 1). Vector map and sequencing results. (A) pLenti-P2A Vector. (B) pLenti-P2B Vector. (C) pHIV-lus-ZsGreen Vector. (D) Sequence alignment results. Lentivirus-infected cells Fluorescence microscopy was performed 48 h after co-transfection of the three plasmids into 293 T cells. The results indicated high expression levels of green fluorescent protein (GFP), suggesting that the plasmids had been successfully transfected into the cells (Figure 2(A,B)). The supernatant was collected 48 h after transfection and added to 786-O cells, which expressed green fluorescence 48 h later (Figure 2(C,D)), suggesting that the viruses had been successfully packaged. After concentration, the titre of the viruses was determined according to the method provided by the virus detection kit, and the viruses were aliquoted and stored at −80 °C. Plasmid transfection and viral packaging. (A,B) Transfection of plasmids into 293 T cells; (C,D) 786-O cells infected by packaged viruses. Preliminary screening results of eight Chinese medicines The cell survival rate and virus inhibition rate were detected and compared among the eight groups treated with Chinese medicines. The cell survival rate in the Cortex Mori group (means of the high- to low-concentration groups: 63.28%, 57.94%, and 56.94%; same below) was the highest (Figure 3(A)). The HIV inhibition rates of the Cortex Mori group (77.94%, 74.95%, and 61.75%) and the N.
cataria group (74.24%, 71.91%, and 71.26%) were higher than those of the other six Chinese medicine groups (Figure 3(B)). Thus, Cortex Mori was selected for further study of its action mechanism. Since the results at 10 mg/mL and 1 mg/mL were similar, 1 mg/mL was selected as the initial concentration of Cortex Mori in the follow-up studies. Screening results of eight Chinese medicinal compositions. (A) Effects of eight Chinese medicinal compositions on cell survival rates. (B) Effects of eight Chinese medicinal compositions on viral inhibition rates. 1: G. uralensis; 2: R. japonica; 3: N. cataria; 4: L. erythrorhizon; 5: S. flavescens; 6: C. cassia; 7: E. japonica; 8: Cortex Mori. Effects of Cortex Mori on the products of HIV RT at different stages When treated with Cortex Mori (1 mg/mL), the relative gene expression levels of the total RT, late RT, and integrated DNA products (group means: 9.88, 16.16, and 11.83, respectively; same below) were significantly reduced compared with the HIV group (22.94, 24.45, and 45.43, respectively; all p < 0.05).
However, they remained higher than those of the AZT control group (1.00, 1.21, and 1.00, respectively; all p < 0.01) (Figure 4(A–C)). These results showed that Cortex Mori had a significant inhibitory effect on the HIV-1 RT enzyme, although the effect was not as strong as that of AZT. Effects of Cortex Mori on the expression of RT enzyme products at different stages. (A) Effects of Cortex Mori on the expression of total RT enzyme products. (B) Effects of Cortex Mori on the expression of late RT enzyme products. (C) Effects of Cortex Mori on the expression of integrated DNA products. CM group: DMSO, HIV, and Cortex Mori were added to the cells; AZT group: only AZT and DMSO were added to the cells; HIV group: only HIV and DMSO were added to the cells; DMSO group: only DMSO was added to the cells; control group: the cells did not receive any treatment. Data are expressed as the mean ± SEM. **p < 0.01, ***p < 0.001, ****p < 0.0001. Chemical monomers in Cortex Mori detected by liquid mass spectrometry To identify the chemical composition of Cortex Mori granules, clarify their active natural compound monomers, and identify structural characteristics useful for the development of new antiviral drugs, 10 monomers reported in Cortex Mori were selected according to previously published studies (Feng et al. 2013; Zhiyong Chen et al. 2018; Guo 2019), and 10 standard compounds were identified in Cortex Mori granules by LC-MS gradient elution and mass spectrometry analysis: mulberroside A (peak number: 15; same below), chlorogenic acid (20, 21), neochlorogenic acid (24), palmitic acid (33), astragalin (38), β-sitosterol (56), emodin (57), ursolic acid (62), morusin (63), and lupeol (66) (Figure 5, Table 2). Chromatogram of LC-MS peaks for the separation and identification of Cortex Mori granules. Chromatographic peak parameters corresponding to the chromatograms of the 10 Chinese medicine monomers contained in Cortex Mori. Effects of 10 monomers contained in Cortex Mori on the products of RT Fluorescent PCR with probe-based quantification was used to further verify the inhibitory effects and action mechanisms of the 10 natural compound monomers contained in Cortex Mori on HIV-1 RT, with the aim of identifying new natural monomer molecules with effective anti-HIV activity.
Ten types of Chinese medicine monomers were added to the cells, and DNA products were collected at the different time points. Five of the monomeric compounds (emodin, ursolic acid, morusin, chlorogenic acid, and astragalin) showed considerable cytotoxicity at a concentration of 1 mg/mL, and the cell survival rate was low (data not shown), which made it difficult to extract enough DNA to complete the follow-up tests. Therefore, the DNA products of the other five compounds (lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A) were selected for quantitative analysis, and the changes in RT products at each stage were determined by PCR assays (Figure 6(A–C)). The results showed that lupeol, neochlorogenic acid, β-sitosterol, palmitic acid, and mulberroside A could all inhibit the activity of the RT enzyme to some extent. Neochlorogenic acid had the inhibitory effect most similar to that of AZT, the strongest inhibition among the 10 monomers (Figure 6(A–C)). Compared with the HIV group, the products of all three stages (mean relative expression levels: 35.42, 34.78, and 45.57) were significantly decreased in the neochlorogenic acid group (6.01, 5.76, and 2.53; all p < 0.0001). Effects of five active monomer compounds in Cortex Mori on the expression of products at different stages of HIV infection. (A) Effects of five active monomer compounds in Cortex Mori on the expression of total RT enzyme products. (B) Effects of five active monomer compounds in Cortex Mori on the expression of late RT enzyme products. (C) Effects of five active monomer compounds in Cortex Mori on the expression of integrated DNA products. Data are expressed as the mean ± SEM. ***p < 0.001, ****p < 0.0001.
Screening of neochlorogenic acid targets The PubChem CID of neochlorogenic acid is 5280633; its molecular formula is C16H18O9, and its molecular weight is 354.31 g/mol. A total of 58 protein targets of neochlorogenic acid (Normalised Fit Score > 0.7) were obtained. GO enrichment analysis revealed that these targets mainly involve cellular responses (such as cellular response to lipid and response to inorganic substance), synthetic or metabolic processes (such as aromatic compound catabolic process and carboxylic acid biosynthetic process), and molecular functions (such as phosphotransferase activity, hydrolase activity, oxidoreductase activity, and protein domain specific binding) (Figure 7(A)). Pathway analysis showed that the main pathways included prostate cancer, the oestrogen signalling pathway, and nitrogen metabolism (Figure 7(B)). After removal of outliers, PPI analysis yielded a network with 40 nodes and 82 edges and an average node degree of 4.2; the largest node degrees were those of EGFR, ESR1, and AR (18, 14, and 11, respectively) (Figure 7(C)). Analysis of the targets of neochlorogenic acid. (A) GO enrichment analysis. (B) Pathway analysis. (C) PPI analysis. Target analysis of neochlorogenic acid on HIV A total of 860 genes in 13 pathways were obtained using PathCards, among which the pathways with the highest node degrees were HIV life cycle and HIV infection (Figure 8(A)). Intersecting these genes with the neochlorogenic acid targets yielded four genes: haemopoietic cell kinase (HCK), epidermal growth factor receptor (EGFR), SRC proto-oncogene (SRC), and 3-phosphoinositide dependent protein kinase 1 (PDPK1) (Figure 8(B)). Enrichment analysis of these four genes showed that they were mainly concentrated in the protein autophosphorylation, peptidyl-tyrosine autophosphorylation, epidermal growth factor receptor, and Fc receptor signalling pathway functional categories (Figure 8(C)).
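The PPI summary reported above can be reproduced from an edge list: for an undirected graph, the average node degree is 2 × edges / nodes, and 2 × 82 / 40 ≈ 4.1, in line with the ~4.2 reported. The sketch below uses networkx on a made-up three-edge fragment, not the actual STRING output.

```python
import networkx as nx

g = nx.Graph()
# Illustrative fragment only; the real network has 40 nodes and 82 edges.
g.add_edges_from([("EGFR", "SRC"), ("EGFR", "ESR1"), ("ESR1", "AR")])

avg_degree = 2 * g.number_of_edges() / g.number_of_nodes()
hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)  # (node, degree)
print(avg_degree, hubs)
```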
According to the structural formula of neochlorogenic acid (Figure 8(D)), we unexpectedly found that it can bind HIV type 2 (HIV-2) RT (PDB ID: 1MU2) (Figure 8(E)). Target analysis of neochlorogenic acid acting on HIV. (A) HIV-related pathways. (B) The intersection of HIV-related genes and neochlorogenic acid targets. (C) Intersection gene enrichment results. (D) The structural formula of neochlorogenic acid. (E) Binding model of neochlorogenic acid and 1MU2.
Discussion: Under the action of viral RT, proviral DNA is synthesised using viral RNA as the template. Antiretroviral drugs can be used to treat patients with HIV infection. The US National Institutes of Health (NIH) and the National Centre for AIDS/STD Control and Prevention (NCAIDS/STD) of the Chinese Centre for Disease Control and Prevention recommend antiretroviral drugs for treating patients with AIDS-related symptoms. Different types of antiretroviral drugs act on different stages of the HIV reverse transcription process. However, the complex combinations of current anti-HIV chemical medicines may cause serious side effects, and the virus may become resistant to them. Chinese medicines have been widely used against viruses for decades in China. Recently, Chinese medicines played a significant role in the treatment of patients with novel coronavirus pneumonia in China; tens of thousands of patients recovered under the intervention of Chinese medicines, supporting their safety and effectiveness in antiviral treatment (Luo et al. 2020; Ren et al. 2020; Yang et al. 2020). In general, Chinese medicines have complex compositions. Many natural compounds from Chinese medicines, such as coumarins, alkaloids, lignins, flavonoids, tannins, and terpenes, have been shown to inhibit HIV RT in vitro (Li et al. 2020). Cortex Mori is a common medicinal material in TCM; it has many pharmacological effects, including anti-inflammatory and analgesic, antitussive and anti-asthmatic, diuretic, hypoglycaemic, hypolipidaemic, hypotensive, antitumour, and antiviral effects, and it can improve peripheral neuropathy. However, the pharmacological mechanisms of Cortex Mori remain unclear (Du et al. 2003; Hou et al. 2018; Yu et al. 2019; Lu et al. 2020). We found that Cortex Mori significantly inhibited the activity of the HIV-1 RT enzyme, thus blocking the replication of the virus. In in vitro anti-HIV experiments, Luo et al. (1995) found that 50 types of Chinese medicines, including Cortex Mori, had anti-HIV effects. Cortex Mori is also one of the components of 'Compound SH', an anti-HIV TCM compound preparation (Cheng et al. 2015). A clinical study showed that the decline in CD4 cell count was slowed and reversed in AIDS patients taking oral Compound SH (Kusum et al. 2004). These results are consistent with ours, which indicate that mulberry bark is a potential anti-HIV drug. Luo et al. (1995) identified six components from Cortex Mori, morusin, kuwanon H, mulberrofuran D, mulberrofuran K, mulberrofuran G, and kuwanon, among which morusin and kuwanon H showed anti-HIV activity in vitro.
Xue (2009) obtained nine compounds by mass spectrometry, including tritriacontane, hexadecanoic acid, β-sitosterol, betulinic acid, oleanolic acid, scopoletin, 4′,5,7-trihydroxy-8-(3,3-dimethylallyl)-flavone, morusignin L, and daucosterol, with some anti-HIV effects. In this study, 10 chemical components were identified by mass spectrometry: ursolic acid, emodin, palmitic acid, β-sitosterol, neochlorogenic acid, mulberroside A, chlorogenic acid, astragalin, morusin, and lupeol. We found that neochlorogenic acid had the best pharmacological activity among the Cortex Mori components. Combined with findings from previous studies, our results show that mulberry contains a variety of anti-HIV components, of which neochlorogenic acid may be just one; the others therefore require further research. Our results also highlight the multi-component and multi-target characteristics of Chinese medicine. According to our results, the four genes HCK, EGFR, SRC, and PDPK1 may be potential targets through which neochlorogenic acid inhibits HIV infection in humans. HCK is a member of the Src family of non-receptor tyrosine kinases and is preferentially expressed in myeloid and B-lymphoid haematopoietic cells (Moarefi et al. 1997). Nef is a multifunctional pathogenic protein of HIV-1, and its interaction with the Src tyrosine kinase Hck, which is highly expressed in macrophages, is related to the development of AIDS (Suzu et al. 2005; Hiyoshi et al. 2008). We speculate that neochlorogenic acid may competitively bind the HCK protein, thereby inhibiting Nef protein function. Few studies have examined EGFR and HIV, and they have mainly focussed on mutations (Crequit et al. 2016; Walline et al. 2017; Liu et al. 2019). Kaposi sarcoma (KS) is the most common malignant tumour in HIV/AIDS patients. HIV-related exosomes increase the expression of HIV trans-activation response (TAR) RNA and EGFR in oral mucosal epithelial cells, thereby promoting Kaposi sarcoma-associated herpesvirus (KSHV) infection (Chen et al. 2020). Lin et al. (2011) found that the phosphorylation level of PDPK1 is closely associated with the expression of the p300 protein, which regulates the function of the HIV-1-encoded RNA-binding protein Tat. Together, these reports suggest that our prediction results are fairly reliable. In addition, we found that neochlorogenic acid can bind the RT of HIV-2, suggesting that it may also have a favourable inhibitory effect on HIV-2; this further supports neochlorogenic acid as a potent inhibitor of HIV replication. In follow-up research, the anti-HIV effects of Cortex Mori and neochlorogenic acid should be further studied through enzyme kinetics, primer extension, band-shift experiments, RNase H activity assays, and cell biology, and the possible molecular mechanisms at the level of gene expression should be investigated. In summary, Cortex Mori has anti-HIV effects and low cytotoxicity, which may be achieved mainly by inhibiting the function of RT enzymes. In addition, HCK, EGFR, SRC, and PDPK1 may be the protein targets through which neochlorogenic acid inhibits HIV infection in the human body. The precise effect of Cortex Mori on RT needs further study. The existing FDA-approved anti-HIV treatment strategies are prohibitively expensive for more than 90% of HIV-infected patients in developing countries. There have been many studies on the treatment of AIDS with Chinese medicine, and it is of great significance to explore TCM to develop affordable and effective drugs to prevent and treat AIDS.
Conclusions: We analysed the anti-HIV effects of eight types of TCMs and found that Cortex Mori inhibits HIV. We further screened and identified 10 compounds from Cortex Mori. Cell experiments indicated that neochlorogenic acid has a good inhibitory effect on the HIV-1 RT enzyme. Neochlorogenic acid may also inhibit HIV through four targets: HCK, EGFR, SRC, and PDPK1. Moreover, neochlorogenic acid may bind HIV-2 RT. Our research can facilitate the development and utilisation of Chinese herbal medicines, and it can serve as a reference for the development of anti-HIV-1 drugs.
Background: Chinese herbs such as Cortex Mori [Morus alba L. (Moraceae)] may inhibit human immunodeficiency virus (HIV), but the active compounds are unknown. Methods: HIV-1 (multiplicity of infection: 20) and herbs such as Cortex Mori (dissolved in dimethyl sulfoxide; working concentrations: 10, 1, and 0.1 mg/mL) were added to 786-O cells (10^5 cells/well). Zidovudine was used as a positive control. Cell survival and viral inhibition rates were measured, and the herb whose activity was closest to that of zidovudine was selected. Mass spectrometry identified the active compounds in the herbs (mobile phase: 0.05% formic acid aqueous solution and acetonitrile, gradient elution; detection wavelength: 210 nm). The effect of the compounds on reverse transcriptase (RT) products was evaluated by real-time PCR. Gene enrichment analysis was used to explore the underlying mechanisms. Results: At a dose of 1 mg/mL of Cortex Mori, the cell survival rate (57.94%) and viral inhibition rate (74.95%) were closest to those of zidovudine (87.87% and 79.81%, respectively). Neochlorogenic acid, one of the active ingredients, was identified in Cortex Mori by mass spectrometry. PCR showed that total RT products in the neochlorogenic acid group (mean relative gene expression: 6.01) were significantly inhibited compared with the control (35.42, p < 0.0001). Enrichment analysis showed that neochlorogenic acid may act on haemopoietic cell kinase, epidermal growth factor receptor, sarcoma, and other targets, thus inhibiting HIV-1 infection. Conclusions: For people of low socioeconomic status affected by HIV, Chinese medicine (such as Cortex Mori) has many advantages: it is inexpensive and does not easily produce resistance. Drugs based on these active ingredients may be developed and could have important value.
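The abstract above reports cell survival and viral inhibition rates without giving their formulas. A minimal sketch of how such rates are commonly derived is shown below, assuming an absorbance readout for survival and a viral replication marker (e.g., RT product level) for inhibition; the function names, normalisations, and example values are illustrative assumptions, not the authors' documented protocol.

# Hypothetical illustration of how cell survival and viral inhibition
# rates of the kind reported above are commonly computed.
# The formulas are assumptions, not the authors' documented protocol.

def cell_survival_rate(od_treated: float, od_control: float) -> float:
    """Percent viability relative to untreated, uninfected cells
    (e.g., from an MTT-style absorbance readout)."""
    return 100.0 * od_treated / od_control

def viral_inhibition_rate(marker_treated: float, marker_infected: float) -> float:
    """Percent reduction in a viral replication marker (e.g., RT product
    level) relative to the untreated infected control."""
    return 100.0 * (1.0 - marker_treated / marker_infected)

# Example with the kind of values reported for 1 mg/mL Cortex Mori:
print(f"survival:   {cell_survival_rate(0.5794, 1.0):.2f}%")     # -> 57.94%
print(f"inhibition: {viral_inhibition_rate(0.2505, 1.0):.2f}%")  # -> 74.95%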
Introduction: Acquired immune deficiency syndrome (AIDS) is an infectious disease characterised by systemic immune injury due to human immunodeficiency virus (HIV) infection (Lu et al. 2018; Seydi et al. 2018). Reverse transcriptase (RT), integrase (IN), and protease (PR) enzymes are essential for three key steps during HIV infection and nucleic acid replication and are also the main targets of HIV drug treatments (Andrabi et al. 2014; Laskey and Siliciano 2014). Recently, the search for new drug targets has been an important trend in HIV drug development studies, and RT inhibitors are a hotspot in the development of anti-HIV drugs (Wang et al. 2020). Because HIV RT is not a high-fidelity DNA polymerase and lacks a proofreading function, it causes increased mutation rates in HIV during the replication process; therefore, the emergence of drug-resistant viruses is inevitable. Chinese medicines have been used in the treatment of HIV for many years in China. Compared with synthetic compounds, natural compounds extracted from Chinese herbal medicines are characterised by good biological compatibility, relatively low toxicity, and improved immunity. Traditional Chinese herbal medicine may allow for the development of new anti-HIV drugs with low toxicity and high efficacy (Chu and Liu 2011; Harvey et al. 2015). Chinese herbal medicine is a vital part of traditional Chinese medicine (TCM) and has been used as a treatment technique since its inception in ancient China. Recently, many types of Chinese herbal medicines with different degrees of antiviral activity have been reported (Wan et al. 2020; Yu et al. 2020), including Glycyrrhiza uralensis Fischer (Leguminosae) (rhizome) (Wan et al. 2020), Reynoutria japonica Houtt (Polygonaceae) (root) (Johnston 1990), Nepeta cataria Linn (Labiatae) (stem and leaf) (Johnston 1990), Lithospermum erythrorhizon Sieb. et Zucc (Boraginaceae) (root) (Chen et al. 1997), Sophora flavescens Alt (Leguminosae) (root) (Chen et al. 1997), Cinnamomum cassia Presl (Lauraceae) (bark) (Dai et al. 2012), Euchresta japonica Hook. f. ex Regel (Leguminosae) (root) (Sun et al. 2015), and Cortex Mori [Morus alba L. (Moraceae)] (bark) (Lee et al. 2007). In this study, eight types of Chinese medicines were selected, and their anti-HIV activities were preliminarily evaluated. Chinese medicines with definite HIV-inhibitory effects were screened from these eight, and their natural compounds were used for HIV inhibition experiments. Furthermore, their functions and mechanisms were explored to determine the active monomeric compounds in the Chinese medicines that showed targeted inhibition of HIV-1 RT. The study results should provide a theoretical and experimental basis for the drug design, structural modification, and development of a new generation of HIV-RT inhibitors. Conclusions: We analysed the anti-HIV effects of eight types of TCMs and found that Cortex Mori inhibits HIV. We further screened and identified 10 compounds from Cortex Mori. Cell experiments indicated that neochlorogenic acid has a good inhibitory effect on the HIV-1 RT enzyme. Neochlorogenic acid may also inhibit HIV through four targets: HCK, EGFR, SRC, and PDPK1. Moreover, neochlorogenic acid may bind HIV-2 RT. Our research can facilitate the development and utilisation of Chinese herbal medicines, and it can serve as a reference for the development of anti-HIV-1 drugs.
Background: Chinese herbs such as Cortex Mori [Morus alba L. (Moraceae)] may inhibit human immunodeficiency virus (HIV), but the active compounds are unknown. Methods: HIV-1 (multiplicity of infection: 20) and herbs such as Cortex Mori (dissolved in dimethyl sulfoxide; working concentrations: 10, 1, and 0.1 mg/mL) were added to 786-O cells (10^5 cells/well). Zidovudine was used as a positive control. Cell survival and viral inhibition rates were measured, and the herb whose activity was closest to that of zidovudine was selected. Mass spectrometry identified the active compounds in the herbs (mobile phase: 0.05% formic acid aqueous solution and acetonitrile, gradient elution; detection wavelength: 210 nm). The effect of the compounds on reverse transcriptase (RT) products was evaluated by real-time PCR. Gene enrichment analysis was used to explore the underlying mechanisms. Results: At a dose of 1 mg/mL of Cortex Mori, the cell survival rate (57.94%) and viral inhibition rate (74.95%) were closest to those of zidovudine (87.87% and 79.81%, respectively). Neochlorogenic acid, one of the active ingredients, was identified in Cortex Mori by mass spectrometry. PCR showed that total RT products in the neochlorogenic acid group (mean relative gene expression: 6.01) were significantly inhibited compared with the control (35.42, p < 0.0001). Enrichment analysis showed that neochlorogenic acid may act on haemopoietic cell kinase, epidermal growth factor receptor, sarcoma, and other targets, thus inhibiting HIV-1 infection. Conclusions: For people of low socioeconomic status affected by HIV, Chinese medicine (such as Cortex Mori) has many advantages: it is inexpensive and does not easily produce resistance. Drugs based on these active ingredients may be developed and could have important value.
15,330
357
[ 88, 502, 504, 696, 380, 164, 71, 75, 78, 167, 243, 315, 205, 410, 237, 230 ]
21
[ "hiv", "cells", "mori", "cortex", "acid", "cortex mori", "chinese", "group", "rt", "analysis" ]
[ "hiv chemical medicines", "potent inhibitor hiv", "inhibit hiv rt", "hiv rt enzyme", "natural compounds hiv" ]
null
[CONTENT] Traditional Chinese medicine | reverse transcriptase | enrichment analysis [SUMMARY]
null
[CONTENT] Traditional Chinese medicine | reverse transcriptase | enrichment analysis [SUMMARY]
[CONTENT] Traditional Chinese medicine | reverse transcriptase | enrichment analysis [SUMMARY]
[CONTENT] Traditional Chinese medicine | reverse transcriptase | enrichment analysis [SUMMARY]
[CONTENT] Traditional Chinese medicine | reverse transcriptase | enrichment analysis [SUMMARY]
[CONTENT] Anti-HIV Agents | Cell Line, Tumor | Cell Survival | Chlorogenic Acid | Dose-Response Relationship, Drug | HEK293 Cells | HIV Infections | HIV-1 | Humans | Morus | Plant Extracts | Quinic Acid | Zidovudine [SUMMARY]
null
[CONTENT] Anti-HIV Agents | Cell Line, Tumor | Cell Survival | Chlorogenic Acid | Dose-Response Relationship, Drug | HEK293 Cells | HIV Infections | HIV-1 | Humans | Morus | Plant Extracts | Quinic Acid | Zidovudine [SUMMARY]
[CONTENT] Anti-HIV Agents | Cell Line, Tumor | Cell Survival | Chlorogenic Acid | Dose-Response Relationship, Drug | HEK293 Cells | HIV Infections | HIV-1 | Humans | Morus | Plant Extracts | Quinic Acid | Zidovudine [SUMMARY]
[CONTENT] Anti-HIV Agents | Cell Line, Tumor | Cell Survival | Chlorogenic Acid | Dose-Response Relationship, Drug | HEK293 Cells | HIV Infections | HIV-1 | Humans | Morus | Plant Extracts | Quinic Acid | Zidovudine [SUMMARY]
[CONTENT] Anti-HIV Agents | Cell Line, Tumor | Cell Survival | Chlorogenic Acid | Dose-Response Relationship, Drug | HEK293 Cells | HIV Infections | HIV-1 | Humans | Morus | Plant Extracts | Quinic Acid | Zidovudine [SUMMARY]
[CONTENT] hiv chemical medicines | potent inhibitor hiv | inhibit hiv rt | hiv rt enzyme | natural compounds hiv [SUMMARY]
null
[CONTENT] hiv chemical medicines | potent inhibitor hiv | inhibit hiv rt | hiv rt enzyme | natural compounds hiv [SUMMARY]
[CONTENT] hiv chemical medicines | potent inhibitor hiv | inhibit hiv rt | hiv rt enzyme | natural compounds hiv [SUMMARY]
[CONTENT] hiv chemical medicines | potent inhibitor hiv | inhibit hiv rt | hiv rt enzyme | natural compounds hiv [SUMMARY]
[CONTENT] hiv chemical medicines | potent inhibitor hiv | inhibit hiv rt | hiv rt enzyme | natural compounds hiv [SUMMARY]
[CONTENT] hiv | cells | mori | cortex | acid | cortex mori | chinese | group | rt | analysis [SUMMARY]
null
[CONTENT] hiv | cells | mori | cortex | acid | cortex mori | chinese | group | rt | analysis [SUMMARY]
[CONTENT] hiv | cells | mori | cortex | acid | cortex mori | chinese | group | rt | analysis [SUMMARY]
[CONTENT] hiv | cells | mori | cortex | acid | cortex mori | chinese | group | rt | analysis [SUMMARY]
[CONTENT] hiv | cells | mori | cortex | acid | cortex mori | chinese | group | rt | analysis [SUMMARY]
[CONTENT] hiv | chinese | medicines | chinese herbal | herbal | drug | root | leguminosae | 2020 | chinese medicines [SUMMARY]
null
[CONTENT] cortex | mori | cortex mori | acid | group | figure | products | neochlorogenic acid | neochlorogenic | hiv [SUMMARY]
[CONTENT] hiv | neochlorogenic acid | neochlorogenic | acid | development | anti | anti hiv | hiv rt | rt | hiv rt research facilitate [SUMMARY]
[CONTENT] hiv | cells | acid | mori | cortex | cortex mori | group | neochlorogenic | neochlorogenic acid | chinese [SUMMARY]
[CONTENT] hiv | cells | acid | mori | cortex | cortex mori | group | neochlorogenic | neochlorogenic acid | chinese [SUMMARY]
[CONTENT] Chinese ||| Cortex Mori ||| L. [SUMMARY]
null
[CONTENT] 1 mg/mL | Cortex Mori | 57.94% | 74.95% | 87.87% | 79.81% ||| one | Cortex Mori ||| PCR | 6.01 | 35.42 | 0.0001 ||| HIV-1 [SUMMARY]
[CONTENT] Chinese | Cortex Mori ||| [SUMMARY]
[CONTENT] Chinese | Cortex Mori ||| L. ||| 20 | 10 | 1 | 0.1 mg | Cortex Mori | 786 | 105 ||| Zidovudine ||| ||| ||| 0.05% | 210 ||| PCR ||| ||| 1 mg/mL | Cortex Mori | 57.94% | 74.95% | 87.87% | 79.81% ||| one | Cortex Mori ||| PCR | 6.01 | 35.42 | 0.0001 ||| HIV-1 ||| Chinese | Cortex Mori ||| [SUMMARY]
[CONTENT] Chinese | Cortex Mori ||| L. ||| 20 | 10 | 1 | 0.1 mg | Cortex Mori | 786 | 105 ||| Zidovudine ||| ||| ||| 0.05% | 210 ||| PCR ||| ||| 1 mg/mL | Cortex Mori | 57.94% | 74.95% | 87.87% | 79.81% ||| one | Cortex Mori ||| PCR | 6.01 | 35.42 | 0.0001 ||| HIV-1 ||| Chinese | Cortex Mori ||| [SUMMARY]
THE HERBAL MIXTURE XIAO-CHAI-HU TANG (XCHT) INDUCES APOPTOSIS OF HUMAN HEPATOCELLULAR CARCINOMA HUH7 CELLS
28480435
Xiao-Chai-Hu Tang (XCHT) is an extract of seven herbs with anticancer properties, but its mechanism of action is unknown. In this study, we evaluated the anti-proliferative and pro-apoptotic effects of XCHT on hepatocellular carcinoma (HCC).
BACKGROUND
Using a hepatic cancer xenograft model, we investigated the in vivo efficacy of XCHT against tumor growth by evaluating tumor volume and weight, as well as measuring apoptosis and cellular proliferation within the tumor. To study the effects of XCHT in vitro, we measured the cell viability of XCHT-treated Huh7 cells, as well as colony formation and apoptosis. To identify a potential mechanism of action, the gene and protein expression levels of Bax, Bcl-2, CDK4 and cyclin-D1 were measured in XCHT-treated Huh7 cells.
MATERIALS AND METHODS
We found that XCHT reduced tumor size and weight and significantly decreased cell viability both in vivo and in vitro. XCHT suppressed the expression of the proliferation marker Ki-67 in HCC tissues and inhibited Huh7 colony formation. XCHT induced apoptosis in HCC tumor tissues and in Huh7 cells. Finally, XCHT altered the expression of Bax, Bcl-2, CDK4 and cyclin-D1, which halted cell proliferation and promoted apoptosis.
RESULTS
Our data suggest that XCHT enhances expression of pro-apoptotic pathways, resulting in potent anticancer activity.
CONCLUSION
[ "Antineoplastic Agents", "Apoptosis", "Carcinoma, Hepatocellular", "Cell Line, Tumor", "Cell Proliferation", "Cell Survival", "Cyclin D1", "Cyclin-Dependent Kinase 4", "Drugs, Chinese Herbal", "Genes, bcl-2", "Humans", "Liver Neoplasms", "bcl-2-Associated X Protein" ]
5412230
Introduction
Hepatocellular carcinoma (HCC) is one of the top five cancers diagnosed worldwide [El-Serag and Rudolph, 2007; Jemal et al, 2011]. HCC is the third leading cause of cancer-related deaths, and China will contribute close to half of the estimated 600,000 individuals who will succumb to the disease annually [Jemal et al, 2011; Sherman, 2005]. Surgical resection is the preferred treatment following HCC diagnosis, since removing the tumor completely offers the best prognosis for long-term survival. This treatment is suitable for only 10-15% of patients with early-stage disease, since tumor resection may disrupt vital functions or structures if the tumor is large or has infiltrated into major blood vessels [Levin et al, 1995]. For patients with advanced stage HCC, chemotherapy is the best therapeutic option available. Standard chemotherapeutic regimens can involve single agents or a combination of drugs such as doxorubicin, cisplatin or fluorouracil. Late stage HCC develops drug resistance to standard chemotherapeutic combinations, and less than 20% of patients with advanced liver cancer will respond to these treatment regimens [Abou-Alfa et al, 2008]. Identification of more effective anticancer therapies is needed to provide alternatives to standard chemotherapeutic regimens, as well as treatments for drug-resistant HCC. Complementary and alternative medicines (CAM) have received considerable attention in Western countries for their potential therapeutic applications [Xu et al, 2006; Cui et al, 2010]. Traditional Chinese medicine (TCM) has been used in the treatment of cancer for thousands of years in China and other Asian countries. These medicines have gained acceptance as alternative cancer treatments in the United States and Europe [Wong et al, 2001; Gai et al, 2008]. When TCM is combined with conventional chemotherapy, there is an increase in the sensitivity of tumors to chemotherapeutic drugs, a reduction in both the side effects and complications associated with chemotherapy or radiotherapy, and an improvement in patient quality of life and survival [Konkimalla and Efferth, 2008]. Xiao-Chai-Hu-Tang (XCHT) is an extract of seven herbs: Bupleurum chinense (Chai-Hu), Pinellia ternata (Ban-Xia), Scutellaria baicalensis (Huang-Qin or Chinese skullcap root), Zizyphus jujube var. inermis (Da-Zao or jujube fruit), Panax ginseng (Ren-Shen or ginseng root), Glycyrrhiza uralensis (Gan-Cao or licorice root), and Zingiber officinale (Sheng-Jiang or ginger rhizome). The formulation for XCHT was first recorded in Shang Han Za Bing Lun during the Han Dynasty, and the ancient methodology for preparing XCHT has been performed in China for thousands of years. In TCM, XCHT has traditionally been used to treat a variety of Shaoyang diseases (including disorders of the liver and gallbladder). Common side-effects experienced with XCHT treatment include alternating chills and fever, but the medication is well-tolerated by the majority of patients. The relatively low toxicity of XCHT is an important characteristic for potential use of this extract in combinatorial therapies. Ancient scholars believed the mechanism of action of XCHT involved harmonizing the Shaoyang, where pathogens were eliminated through enhanced liver function and improved digestion. 
More recently, XCHT has been reported to be effective as an anti-cancer agent, although the precise mechanism of its tumoricidal activity remains unclear [Zhu et al, 2005; Shiota et al, 2002; Watanabe et al, 2001; Yano et al, 1994; Kato et al, 1998; Liu et al, 1998; Mizushima et al, 1995]. To gain further insights into the mechanism of action of this ancient extract, we examined the anti-proliferative and pro-apoptotic activities of XCHT on HCC.
null
null
Results
XCHT inhibits hepatocellular carcinoma (HCC) growth in vitro and in vivo. We evaluated the in vitro anti-cancer activity of XCHT on the human hepatocellular carcinoma cell line Huh7 using the MTT assay. As shown in Figure 1A, treatment with 0.5-1.5 mg/ml of XCHT for 24, 48, or 72 h reduced the viability of Huh7 cells by 17.91-26.27%, 44.01-77%, and 64.8-86.71% (P < 0.05) compared to untreated controls. These results indicate that XCHT inhibits the growth of Huh7 cells in both dose- and time-dependent manners. To verify these results, we evaluated the effect of XCHT on Huh7 cell morphology using phase-contrast microscopy. As shown in Figure 1B, untreated Huh7 cells appeared as densely packed and disorganized multilayers. In contrast, many of the XCHT-treated cells were rounded, shrunken, and detached from adjacent cells, or floating in the medium. Taken together, these data demonstrate that XCHT inhibits the growth of Huh7 cells. Effect of Xiao-Chai-Hu Tang (XCHT) on the viability and morphology of Huh7 cells. (A) Cell viability was determined by the MTT assay after Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24, 48 or 72 h. The data were normalized to the viability of control cells (100%). Data are the averages with standard deviation (SD; error bars) from 3 independent experiments. The symbols (#), (*), and (&) indicate statistical significance compared to control cells (P<0.05), for each indicated timepoint. (B) The Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h, and morphological changes were observed using phase-contrast microscopy. The images were captured at a magnification of 200x. Images are representative of 3 independent experiments. To explore the anti-cancer activity of XCHT in vivo, we measured tumor volume and weight in HCC xenograft mice. As shown in Figure 2A and 2B, XCHT treatment significantly reduced both tumor volume and weight by Day 11 post-treatment. In contrast to tumors from the control group (293.37±19.36 mm3), XCHT treatment reduced tumor volume by 15.15% (201.5±27.83 mm3; P < 0.05). Tumor weight was reduced by 31.84% in XCHT-treated mice compared to control-treated mice (P < 0.05). Mice treated with XCHT demonstrated no changes in body weight during the course of the study, suggesting that XCHT treatment was relatively nontoxic to the animals (Figure 2C). Taken together, these data suggest that XCHT inhibits HCC growth both in vivo and in vitro, without apparent adverse effects. Effect of XCHT on tumor growth in hepatocellular carcinoma (HCC) xenograft mice. After tumor development, the mice were given intra-gastric administration of 14.2 g/kg of XCHT or PBS daily for 21 days. Tumor volume (A), tumor weight (B), and body weight (C) were measured. Data shown are averages with SD (error bars) from 10 mice in each group (n = 10). * P< 0.05, versus controls.
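The Figure 1 caption above states that viability was normalized to untreated control cells (100%). A minimal sketch of that normalization is shown below, assuming raw MTT absorbance readings; the variable names and example values are invented for illustration.

import numpy as np

# Hypothetical illustration of the normalization described in Figure 1A:
# MTT absorbance of treated wells expressed as percent of the mean of
# untreated control wells. Only the normalization rule comes from the text.

def viability_percent(od_treated: np.ndarray, od_control: np.ndarray) -> np.ndarray:
    """Viability of each treated condition relative to the control mean (100%)."""
    return 100.0 * od_treated / od_control.mean()

od_control = np.array([1.02, 0.98, 1.00])        # untreated wells
od_treated = np.array([0.82, 0.55, 0.30])        # e.g., 0.5, 1.0, 1.5 mg/ml XCHT
print(viability_percent(od_treated, od_control))  # -> [82., 55., 30.]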
XCHT inhibits HCC proliferation in vivo and in vitro. One of the hallmarks of oncogenesis is unchecked cell proliferation, but the anti-proliferative activity of XCHT on hepatocarcinoma tumors was unknown. To measure the effect of XCHT on cell proliferation, we examined XCHT-treated tumors for a marker expressed by proliferating cell nuclei (Ki-67) using immunohistochemical staining (IHC). As shown in Figure 3A, the percentages of Ki-67 positive cells in tumor tissues from control and XCHT-treated xenograft mice were 76.0 ± 9.6% and 51 ± 15.3%, respectively (P <0.05). To examine the anti-proliferative activity of XCHT in vitro, Huh7 cells were treated with 0.5-1.5 mg/ml of extract and colony formation was measured. As shown in Figure 3B, treatment with increasing concentrations of XCHT for 24 h reduced the cell survival rate by 55.3%, 69.8% and 92.8% compared to untreated controls (P < 0.05). These results suggest that XCHT can inhibit HCC cell proliferation both in vivo and in vitro. Effect of XCHT on cell proliferation in HCC xenograft mice and Huh7 cells. (A) Ki-67 assay in tumor tissues (400x). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). * P < 0.05, versus controls. (B) Huh7 cell colony formation assay. Data are averages with SD (error bars) from at least three independent experiments. *P < 0.01, versus control cells. XCHT induces apoptosis of hepatocellular carcinoma cells in vivo and in vitro. Apoptosis within HCC tumor tissues was visualized by IHC staining using the TUNEL assay. As shown in Fig 4A, XCHT-treated mice had a significantly higher percentage of TUNEL-positive cells (82±15.3%) compared to the untreated control mice (47±9.6%), indicating the pro-apoptotic effect of XCHT in vivo. To visualize the pro-apoptotic activity in XCHT-treated Huh7 cells, we evaluated changes in nuclear morphology using the DNA-binding dye Hoechst 33258. As shown in Fig 4B, XCHT-treated cells revealed condensed chromatin and a fragmented nuclear morphology, and these characteristics demonstrated typical apoptotic morphological features. In contrast, untreated cell nuclei showed a less intense but homogenous staining pattern, indicative of proliferating cells. Effect of XCHT on apoptosis in both HCC xenograft mice and Huh7 cells. (A) TUNEL assay in tumor tissues (400x). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls; (B) Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h and stained with Hoechst 33258. Images were visualized using a phase-contrast fluorescence microscope. The images were captured at a magnification of 400x. Images are representative of 3 independent experiments.
XCHT regulates the gene and protein expression correlated with apoptosis and proliferation in vivo and in vitro. Very little is known regarding the mechanism of action of XCHT, or the pathways that this extract targets to generate its anti-proliferative and pro-apoptotic activities. We examined the effect of XCHT treatment on the expression of genes and proteins that are important regulators of apoptosis and proliferation using RT-PCR and western blotting. As shown in Fig 5A and B, XCHT significantly reduced both gene and protein expression of the anti-apoptotic factor Bcl-2 in HCC tumor tissues. In contrast, the gene and protein expression of the pro-apoptotic factor Bax increased after XCHT treatment. XCHT treatment significantly decreased the mRNA and protein expression levels of CDK4 and cyclin-D1 compared to the control group (P < 0.05). Consistent with our in vivo data from HCC tissues, Huh7 cells treated with XCHT demonstrated a significant reduction in the mRNA and protein expression levels of Bcl-2, CDK4 and cyclin-D1, and an increase in the expression of Bax in vitro (Fig 6A and B). Together these data suggest that XCHT promotes apoptosis and inhibits proliferation of HCC by increasing the pro-apoptotic Bax/Bcl-2 ratio and modulating the expression of cell cycle-regulatory genes. Effect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in HCC xenograft mice. (A) Four tumors were randomly selected from each group, and the mRNA or protein expression levels of Bcl-2, Bax, cyclin-D1, and CDK4 were determined by RT-PCR and Western blot analysis. GAPDH or β-actin were used as the internal controls. Data shown are representative samples. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression levels of untreated control mice (100%). The symbols (*), (&), (#), and (§) indicate statistical significance versus controls (P< 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1. Effect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in Huh7 cells. The cells were treated with 0.5-1.5 mg/ml XCHT for 24 h. (A) The mRNA and protein levels of Bcl-2, Bax, CDK4 and cyclin-D1 were determined using RT-PCR or western blotting. GAPDH or β-actin were used as the internal controls. The data are representative of three independent experiments. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression of the untreated control cells (100%). The symbols (*), (&), (#), and (§) indicate statistical significance compared to control cells (P< 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1.
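The densitometric analyses in Figures 5B and 6B normalize band intensities to a loading control (GAPDH or β-actin) and express them relative to the untreated group (100%), and the authors interpret the Bax/Bcl-2 ratio as the pro-apoptotic readout. A minimal sketch of those two computations follows; all numbers are invented placeholders.

# Hypothetical sketch of the densitometric normalization described in
# Figures 5B/6B: band intensity is divided by a loading control
# (GAPDH or beta-actin), then expressed as percent of the untreated group.
# All numeric values here are invented placeholders.

def normalized_expression(band: float, loading: float,
                          control_band: float, control_loading: float) -> float:
    """Loading-corrected intensity as percent of the untreated control (100%)."""
    return 100.0 * (band / loading) / (control_band / control_loading)

bax  = normalized_expression(band=1.8, loading=1.0, control_band=1.0, control_loading=1.0)
bcl2 = normalized_expression(band=0.4, loading=1.0, control_band=1.0, control_loading=1.0)
print(f"Bax: {bax:.0f}%, Bcl-2: {bcl2:.0f}%, Bax/Bcl-2 ratio: {bax / bcl2:.1f}")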
Conclusion
In conclusion, we demonstrate for the first time that XCHT can inhibit proliferation and induce apoptosis in liver cancer by regulating the expression of the Bcl-2 protein family and decreasing CDK4 and cyclin-D1 levels. Although the active ingredient and precise mechanism of action are not known, this research provides a starting point for exploring an XCHT-related HCC cancer therapy.
[ "Materials and Reagents", "Preparation of Xiao-Chai-Hu-Tang (XCHT).", "Cell Culture", "Animals", "In Vivo Xenograft Study", "Assessment of Cell Viability by the MTT Assay.", "Cell Morphology", "Detection of Apoptosis with Hoechst Staining.", "Colony Formation", "Apoptosis Detection in Hepatocarcinoma Tissues by TUNEL Staining.", "Immunohistochemistry Analysis of Hepatocarcinoma Tissues", "RNA Extraction and RT-PCR Analysis", "Western Blotting Analysis", "Statistical Analysis", "XCHT inhibits hepatocellular carcinoma (HCC) growth in vitro and in vivo.", "XCHT inhibits HCC proliferation in vivo and in vitro", "XCHT induces apoptosis of hepatocellular carcinoma cells in vivo and in vitro.", "XCHT regulates the gene and protein expression correlated with apoptosis and proliferation in vivo and in vitro" ]
[ "Dulbecco’s Modified Eagle Medium (DMEM), fetal bovine serum (FBS), penicillin-streptomycin, trypsin-EDTA and TriZol reagents were purchased from Life Technologies (Carlsbad, CA, USA). PrimeScript™ RT reagent Kit with gDNA Eraser was purchased from Takara BIO Inc. (Tokyo, Japan). TUNEL assay kit was purchased from R&D Systems (Minneapolis, MN, USA). BCA Protein Assay Kit was purchased from Tiangen Biotech Co., Ltd. (Beijing, China). Antibodies for Bax, Bcl-2, CDK4 and CyclinD1 were obtained from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). All other chemicals, unless otherwise stated, were obtained from Sigma-Aldrich (St. Louis, MO, USA).", "The Xiao-Chai-Hu-Tang extract was prepared by boiling 7 authenticated herbs in distilled water: 24.0g Bupleurum chinense root (Chai-Hu), 9.0g Pinellia ternata tuber (Ban-Xia), 9.0g Scutellaria baicalensis root (Huang-Qin), 9g Zizyphus jujube var. inermis fruit (Da-Zao or jujube fruit), 6g Panax ginseng root (Ren-Shen or ginseng), 5.0g Glycyrrhiza uralensis root (Gan-Cao or licorice), and 9.0g Zingiber officinale rhizome (Sheng-Jiang or ginger). The aqueous extraction was filtered and spray-dried. A stock solution of XCHT was prepared immediately prior to use by dissolving the XCHT powder in DMEM at a concentration of 250 mg/ml. The working concentrations of XCHT were obtained by diluting the stock solution in the culture medium.", "A human hepatoma cell line (Huh7) was purchased from Xiangya Cell Center (Hunan, China). Huh7 cells were grown in DMEM supplemented with 10% (v/v) FBS, 100 units/ml penicillin, and 100 μg/ml streptomycin. Huh7 cells were cultured at 37°C, in a 5% CO2 humidified environment. The cells were subcultured at 80-90% confluency.", "Male BALB/C athymic nude mice (with an initial body weight of 20-22 g) were obtained from SLAC Animal Inc. (Shanghai, China). Animals were housed in standard plastic cages under automatic 12 h light/dark cycles at 23 °C, with free access to food and water. All animals were kept under specific pathogen-free conditions. The animal studies were approved by the Fujian Institute of Traditional Chinese Medicine Animal Ethics Committee (Fuzhou, Fujian, China). The experimental procedures were carried out in accordance with the Guidelines for Animal Experimentation of Fujian University of Traditional Chinese Medicine (Fuzhou, Fujian, China).", "Hepatocarcinoma xenograft mice were produced using Huh7 cells. The cells were grown in culture, detached by trypsinization, washed, and resuspended in serum-free DMEM. Resuspended cells (4 × 106) mixed with Matrigel (1:1) were subcutaneously injected into the right flank of nude mice to initiate tumor growth. When tumor sizes reached 3 millimeters in diameter, mice were randomly divided into two groups (n =10) and treated with XCHT (dissolved in saline) or saline daily by intraperitoneal injection. All treatments were given 5 days a week for 21 days. Body weight and tumor size were measured. Tumor size was determined by measuring the major (L) and minor (W) diameters with a caliper. The tumor volume was calculated according to the following formula: tumor volume = n/6xLxW2. At the end of the experiment, the mice were anaesthetized with ether and sacrificed by cervical vertebra dislocation. The tumors were then excised and weighed. and tumor segments were fixed in buffered formalin and stored at -80°C for molecular analysis.", "Cell viability was assessed by the MTT colorimetric assay. 
Huh7 cells were seeded into 96-well plates at a density of 1 × 10^4 cells/well in 0.1 ml medium. The cells were treated with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, 48 h and 72 h. At the end of the treatment, 100 μl of MTT (0.5 mg/ml in PBS) were added to each well, and the samples were incubated for an additional 4 h at 37°C. The purple-blue MTT formazan precipitate was dissolved in 100 μl DMSO. The absorbance was measured at 570 nm using an ELISA reader (BioTek, Model ELX800, USA).", "Huh7 cells were seeded into 6-well plates at a density of 2 × 10^5 cells/ml in 2 ml DMEM. The cells were treated with 0, 0.5, 1.0, and 1.5 mg/mL of XCHT for 24 h. Cell morphology was observed using a phase-contrast microscope (Olympus, Japan), and the photographs were taken at a magnification of 200x.", "Huh7 cells were seeded into 12-well plates at a density of 1 × 10^5 cells/ml in 1 ml medium. After the cells were treated with XCHT for 24 h, apoptosis was visualized using the Hoechst staining kit as described in the manufacturer’s instructions. Briefly, at the end of the treatment, cells were fixed with 4% polyoxymethylene and then incubated in Hoechst solution for 5-10 min in the dark. Images were captured using a phase-contrast fluorescence microscope (Leica, Germany) at a magnification of 400x.", "Huh7 cells were seeded into 6-well plates at a density of 2 × 10^5 cells/ml in 2 ml medium. After treatment with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, the cells were collected and diluted in fresh medium in the absence of XCHT and then reseeded into 6-well plates at a density of 1 × 10^3 cells/well. Following incubation for 8 days in a 37°C humidified incubator with 5% CO2, the colonies were fixed with 10% formaldehyde, stained with 0.01% crystal violet and counted. Cell survival was calculated by comparing the survival of compound-treated cells to the control cells (normalized to 100% survival).", "Six tumors were randomly selected from XCHT-treatment or control groups. Tumor tissues were fixed in 10% formaldehyde for 48 h, paraffin-embedded and then sectioned into 4-μm-thick slides. Samples were analyzed by TUNEL staining. Apoptotic cells were counted as DAB-positive cells (brown staining) in five arbitrarily selected microscopic fields at a magnification of 400x. TUNEL-positive cells were counted as a percentage of the total cells.", "Immunohistochemical staining for Ki-67 was performed as previously described. The sections were deparaffinized in xylene and hydrated through graded alcohols. Antigen retrieval was performed using heat treatment (placed in a microwave oven at 750 Watts for 7 minutes), in 10 mM sodium citrate buffer, pH 6.0. Sections were allowed to cool in the buffer at room temperature for 30 minutes, and the sections were then rinsed in deionised water three times for two minutes each. The endogenous peroxidase activity was blocked with 3% (v/v) hydrogen peroxide for 10 minutes. The sections were incubated with 1% bovine serum albumin in order to decrease non-specific staining and reduce endogenous peroxidase activity. The sections were then incubated with Ki-67 antibody (1:100 dilution), at 4 °C overnight using a staining chamber. After rinsing three times in PBS, sections were incubated in biotinylated goat anti-rabbit IgG (Boshide Wuhan, China), followed by treatment with an avidin-biotin-peroxidase complex (Vector). Immunostaining was visualized by incubation in 3,3-diaminobenzidine (DAB) as a chromogen. 
Sections were counterstained with haematoxylin. The Ki-67 positive immunostaining was visualized using a Nikon Eclipse 50i microscope (40x objective). The evaluation of Ki-67 expression was analyzed in 5 different fields, and the mean percentage of Ki-67 positive staining was evaluated. To rule out any non-specific staining, PBS was used in place of the primary antibody as a negative control.", "The expression of Bax, Bcl-2, CDK4 and CyclinD1 genes in HCC tissues or cells was analyzed by RT-PCR. Total RNA was isolated with TriZol Reagent according to the manufacturer’s instructions. 1 μg of total RNA was used to synthesize cDNA using the SuperScript II reverse transcriptase Kit (AMV) (TaKaRa, Tokyo, Japan). The reaction contained RNA, selected primers, and 10 μl of the RT-PCR master mix: 10 mM Tris-HCl (pH 8.3), 50 mM KCl, 5 mM MgCl2, 1 unit/μl RNase inhibitor, 0.25 unit/μl AMV reverse transcriptase, 2.5 ml random primer, and 1 mM each of dATP, dGTP, dCTP and dTTP. Reverse transcription was performed for 1 hour at 42°C. The obtained cDNA was used to determine the mRNA amount of Bax, Bcl-2, CDK4 and cyclin-D1 by PCR. GAPDH was used as an internal control. Samples were analyzed by gel electrophoresis (1.5% agarose). The DNA bands were visualized using a gel documentation system (BioRad, Model Gel Doc 2000, USA).\nThe sequences of the primers used for amplification of CDK4, CyclinD1, Bcl-2, Bax and GAPDH transcripts are as follows: CDK4 forward, 5’-CAT GTA GAC CAG GAC CTA AGC-3’ and reverse, 5’-AAC TGG CGC ATC AGA TCC TAG-3’; cyclin-D1 forward, 5’-TGG ATG CTG GAG GTC TGC GAG GAA-3’ and reverse, 5’-GGC TTC GAT CTG CTC CTG GCA GGC-3’; Bcl-2 forward, 5’-CAG CTG CAC CTG ACG CCC TT-3’ and reverse, 5’-GCC TCC GTT ATC CTG GAT CC-3’; Bax forward, 5’-TGC TTC AGG GTT TCA TCC AGG-3’ and reverse, 5’-TGG CAA AGT AGA AAA GGG CGA-3’; GAPDH forward, 5’-GTC ATC CAT GAC AAC TTT GG-3’ and reverse, 5’-GAG CTT GAC AAA GTG GTC GT-3’.", "Four tumors were randomly selected from XCHT-treatment or control groups. Tumors were homogenized in nondenaturing lysis buffer and centrifuged at 14,000 x g for 15 min. Protein concentrations were determined by BCA protein assay. Huh7 cells (2.0 × 10^5 cells/ml) in 5 ml medium were seeded into 25 cm2 flasks and treated with the indicated concentrations of XCHT for 24 h. Treated cells were lysed in mammalian cell lysis buffer containing protease and phosphatase inhibitor cocktails, and centrifuged at 14,000 x g for 15 min. Protein concentrations in cell lysate supernatants were determined by BCA protein assay. Equal amounts of protein from each tumor or cell lysate were resolved on 12% SDS-PAGE gels using 80 V for 2 h and transferred onto PVDF membranes. The membranes were blocked for 2 h with 5% nonfat milk and incubated with the desired primary antibody directed against Bax, Bcl-2, CDK4, Cyclin D1, or β-actin (all diluted 1:1000) overnight at 4°C. Appropriate HRP-conjugated secondary antibodies (anti-rabbit or anti-mouse; 1:2000) were incubated with the membrane for 1 h at room temperature, and the membranes were washed again in TBS-T followed by enhanced chemiluminescence detection.", "All data are shown as the mean of three measurements. The data were analyzed using the SPSS package for Windows (Version 11.5). Statistical analysis of the data was performed using the Student’s t-test and ANOVA. 
Differences with P<0.05 were considered statistically significant.", "We evaluated the in vitro anti-cancer activity of XCHT on the human hepatocellular carcinoma cell line Huh7 using the MTT assay. As shown in Figure 1A, treatment with 0.5-1.5 mg/ml of XCHT for 24, 48, or 72 h reduced the viability of Huh7 cells by 17.91-26.27%, 44.01-77%, and 64.8-86.71% (P < 0.05) compared to untreated controls. These results indicate that XCHT inhibits the growth of Huh7 cells in both dose- and time-dependent manners. To verify these results, we evaluated the effect of XCHT on Huh7 cell morphology using phase-contrast microscopy. As shown in Figure 1B, untreated Huh7 cells appeared as densely packed and disorganized multilayers. In contrast, many of the XCHT-treated cells were rounded, shrunken, and detached from adjacent cells, or floating in the medium. Taken together, these data demonstrate that XCHT inhibits the growth of Huh7 cells.\nEffect of Xiao-Chai-Hu Tang (XCHT) on the viability and morphology of Huh7 cells. (A) Cell viability was determined by the MTT assay after Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24, 48 or 72 h. The data were normalized to the viability of control cells (100%). Data are the averages with standard deviation (SD; error bars) from 3 independent experiments. The symbols (#), (*), and (&) indicate statistical significance compared to control cells (P<0.05), for each indicated timepoint. (B) The Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h, and morphological changes were observed using phase-contrast microscopy. The images were captured at a magnification of 200x. Images are representative of 3 independent experiments.\nTo explore the anti-cancer activity of XCHT in vivo, we measured tumor volume and weight in HCC xenograft mice. As shown in Figure 2A and 2B, XCHT treatment significantly reduced both tumor volume and weight by Day 11 post-treatment. In contrast to tumors from the control group (293.37±19.36 mm3), XCHT treatment reduced tumor volume by 15.15% (201.5±27.83 mm3; P < 0.05). Tumor weight was reduced by 31.84% in XCHT-treated mice compared to control-treated mice (P < 0.05). Mice treated with XCHT demonstrated no changes in body weight during the course of the study, suggesting that XCHT treatment was relatively nontoxic to the animals (Figure 2C). Taken together, these data suggest that XCHT inhibits HCC growth both in vivo and in vitro, without apparent adverse effects.\nEffect of XCHT on tumor growth in hepatocellular carcinoma (HCC) xenograft mice. After tumor development, the mice were given intra-gastric administration of 14.2 g/kg of XCHT or PBS daily for 21 days. Tumor volume (A), tumor weight (B), and body weight (C) were measured. Data shown are averages with SD (error bars) from 10 mice in each group (n = 10). * P< 0.05, versus controls.", "One of the hallmarks of oncogenesis is unchecked cell proliferation, but the anti-proliferative activity of XCHT on hepatocarcinoma tumors was unknown. To measure the effect of XCHT on cell proliferation, we examined XCHT-treated tumors for a marker expressed by proliferating cell nuclei (Ki-67) using immunohistochemical staining (IHC). As shown in Figure 3A, the percentages of Ki-67 positive cells in tumor tissues from control and XCHT-treated xenograft mice were 76.0 ± 9.6% and 51 ± 15.3%, respectively (P <0.05). To examine the anti-proliferative activity of XCHT in vitro, Huh7 cells were treated with 0.5-1.5 mg/ml of extract and colony formation was measured. 
As shown in Figure 3B, treatment with increasing concentrations of XCHT for 24 h reduced the cell survival rate by 55.3%, 69.8% and 92.8% compared to untreated controls (P < 0.05). These results suggest that XCHT can inhibit HCC cell proliferation both in vivo and in vitro.\nEffect of XCHT on cell proliferation in HCC xenograft mice and Huh7 cells. (A) Ki-67 assay in tumor tissues (400x). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). * P < 0.05, versus controls. (B) Huh7 cell colony formation assay. Data are averages with SD (error bars) from at least three independent experiments. *P < 0.01, versus control cells.", "Apoptosis within HCC tumor tissues was visualized by IHC staining using the TUNEL assay. As shown in Fig 4A, XCHT-treated mice had a significantly higher percentage of TUNEL-positive cells (82±15.3%) compared to the untreated control mice (47±9.6%), indicating the pro-apoptotic effect of XCHT in vivo. To visualize the pro-apoptotic activity in XCHT-treated Huh7 cells, we evaluated changes in nuclear morphology using the DNA-binding dye Hoechst 33258. As shown in Fig 4B, XCHT-treated cells revealed condensed chromatin and a fragmented nuclear morphology, and these characteristics demonstrated typical apoptotic morphological features. In contrast, untreated cell nuclei showed a less intense but homogenous staining pattern, indicative of proliferating cells.\nEffect of XCHT on apoptosis in both HCC xenograft mice and Huh7 cells. (A) TUNEL assay in tumor tissues (400x). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls; (B) Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h and stained with Hoechst 33258. Images were visualized using a phase-contrast fluorescence microscope. The images were captured at a magnification of 400x. Images are representative of 3 independent experiments.", "Very little is known regarding the mechanism of action of XCHT, or the pathways that this extract targets to generate its anti-proliferative and pro-apoptotic activities. We examined the effect of XCHT treatment on the expression of correlated genes and proteins that are important regulators of apoptosis and proliferation using RT-PCR and western blotting. As shown in Fig 5A and B, XCHT significantly reduced both gene and protein expression of the anti-apoptotic factor Bcl-2 in HCC tumor tissues. In contrast, the gene and protein expression of the pro-apoptotic factor Bax increased after XCHT treatment. XCHT treatment significantly decreased the mRNA and protein expression levels of CDK4 and cyclin-D1 compared to the control group (P < 0.05). Consistent with our in vivo data from HCC tissues, Huh7 cells treated with XCHT demonstrated a significant reduction in the mRNA and protein expression levels of Bcl-2, CDK4 and cyclin-D1, and an increase in the expression of Bax in vitro (Fig 6A and B). Together these data suggest that XCHT promotes apoptosis and inhibits proliferation of HCC by increasing the pro-apoptotic Bax/Bcl-2 ratio and modulating the expression of cell cycle-regulatory genes.\nEffect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in HCC xenograft mice. (A) Four tumors were randomly selected from each group, and the mRNA or protein expression levels of Bcl-2, Bax, cyclin-D1, and CDK4 were determined by RT-PCR and Western blot analysis. GAPDH or β-actin were used as the internal controls. Data shown are representative samples. 
(B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression levels of untreated control mice (100%). The symbols (*), (&), (#), and (§) indicate statistical significance versus controls (P< 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1.\nEffect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in Huh7 cells. The cells were treated with 0.5-1.5 mg/ml XCHT for 24 h. (A) The mRNA and protein levels of Bcl-2, Bax, CDK4 and cyclin-D1 were determined using RT-PCR or western blotting. GAPDH or β-actin were used as the internal controls. The data are representative of three independent experiments. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression of the untreated control cells (100%). The symbols (*), (&), (#), and (§) indicate statistical significance compared to control cells (P< 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1." ]
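As referenced in the xenograft methods above, tumor volume is computed from caliper measurements with the ellipsoid approximation V = π/6 × L × W^2. A minimal sketch of that calculation is shown below; the helper name and example measurements are illustrative only.

import math

# Tumor volume from caliper measurements, as described in the
# In Vivo Xenograft Study: V = (pi / 6) * L * W^2, with L the major
# and W the minor diameter. Example numbers are invented.

def tumor_volume(major_mm: float, minor_mm: float) -> float:
    """Ellipsoid approximation of tumor volume in mm^3."""
    return math.pi / 6.0 * major_mm * minor_mm ** 2

print(f"{tumor_volume(10.0, 7.5):.1f} mm^3")  # -> 294.5 mm^3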
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and Methods", "Materials and Reagents", "Preparation of Xiao-Chai-Hu-Tang (XCHT).", "Cell Culture", "Animals", "In Vivo Xenograft Study", "Assessment of Cell Viability by the MTT Assay.", "Cell Morphology", "Detection of Apoptosis with Hoechst Staining.", "Colony Formation", "Apoptosis Detection in Hepatocarcinoma Tissues by TUNEL Staining.", "Immunohistochemistry Analysis of Hepatocarcinoma Tissues", "RNA Extraction and RT-PCR Analysis", "Western Blotting Analysis", "Statistical Analysis", "Results", "XCHT inhibits hepatocellular carcinoma (HCC) growth in vitro and in vivo.", "XCHT inhibits HCC proliferation in vivo and in vitro", "XCHT induces apoptosis of hepatocellular carcinoma cells in vivo and in vitro.", "XCHT regulates the gene and protein expression correlated with apoptosis and proliferation in vivo and in vitro", "Discussion", "Conclusion" ]
[ "Hepatocellular carcinoma (HCC) is one of the top five cancers diagnosed worldwide [El-Serag and Rudolph, 2007; Jemal et al, 2011]. HCC is the third leading cause of cancer-related deaths, and China will contribute close to half of the estimated 600,000 individuals who will succomb to the disease annually [Jemal et al, 2011; Sherman, 2005]. Surgical resection is the preferred treatment following HCC diagnosis, since removing the tumor completely offers the best prognosis for long-term survival. This treatment is suitable for only 10-15% of patients with early-stage disease, since tumor resection may disrupt vital functions or structures if the tumor is large or has infiltrated into major blood vessels [Levin et al, 1995]. For patients with advanced stage HCC, chemotherapy is the best therapeutic option available. Standard chemotherapeutic regimens can involve single agents or a combination of drugs such as doxorubicin, cisplatin or fluorouracil. Late stage HCC develops drug resistance to standard chemotherapeutic combinations, and less than 20% of patients with advanced liver cancer will respond to these treatment regimens [Abou-Alfa et al, 2008]. Identification of more effective anticancer therapies is needed to provide alternatives to standard chemotherapeutic regimens, as well as treatments for drug resistant HCC.\nComplementary and alternative medicines (CAM) have received considerable attention in Western countries for their potential therapeutic applications [Xu et al, 2006; Cui et al, 2010]. Traditional Chinese medicine (TCM) has been used in the treatment of cancer for thousands of years in China and other Asian countries. These medicines have gained acceptance as alternative cancer treatments in the United States and Europe [Wong et al, 2001; Gai et al, 2008]. When TCM is combined with conventional chemotherapy, there is an increase in the sensitivity of tumors to chemotherapeutic drugs, a reduction in both the side effects and complications associated with chemotherapy or radiotherapy, and an improvement in patient quality of life and survival [Konkimalla and Efferth, 2008].\nXiao-Chai-Hu-Tang (XCHT) is an extract of seven herbs: Bupleurum chinense (Chai-Hu), Pinellia ternata (Ban-Xia), Scutellaria baicalensis (Huang-Qin or Chinese skullcap root), Zizyphus jujube var. inermis (Da-Zao or jujube fruit), Panax ginseng (Ren-Shen or ginseng root), Glycyrrhiza uralensis (Gan-Cao or licorice root), and Zingiber officinale (Sheng-Jiang or ginger rhizome). The formulation for XCHT was first recorded in Shang Han Za Bing Lun during the Han Dynasty, and the ancient methodology for preparing XCHT has been performed in China for thousands of years. In TCM, XCHT has traditionally been used to treat a variety of Shaoyang diseases (including disorders of the liver and gallbladder). Common side-effects experienced with XCHT treatment include alternating chills and fever, but the medication is well-tolerated by the majority of patients. The relatively low toxicity of XCHT is an important characteristic for potential use of this extract in combinatorial therapies. Ancient scholars believed the mechanism of action of XCHT involved harmonizing the Shaoyang, where pathogens were eliminated through enhanced liver function and improved digestion. 
More recently, XCHT has been reported to be effective as an anti-cancer agent, although the precise mechanism of its tumoricidal activity remains unclear [Zhu et al, 2005; Shiota et al, 2002; Watanabe et al, 2001; Yano et al, 1994; Kato et al, 1998; Liu et al, 1998; Mizushima et al, 1995]. To gain further insights into the mechanism of action of this ancient extract, we examined the anti-proliferative and pro-apoptotic activities of XCHT on HCC.", " Materials and Reagents Dulbecco’s Modified Eagle Medium (DMEM), fetal bovine serum (FBS), penicillin-streptomycin, trypsin-EDTA and TriZol reagents were purchased from Life Technologies (Carlsbad, CA, USA). PrimeScript™ RT reagent Kit with gDNA Eraser was purchased from Takara BIO Inc. (Tokyo, Japan). TUNEL assay kit was purchased from R&D Systems (Minneapolis, MN, USA). BCA Protein Assay Kit was purchased from Tiangen Biotech Co., Ltd. (Beijing, China). Antibodies for Bax, Bcl-2, CDK4 and CyclinD1 were obtained from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). All other chemicals, unless otherwise stated, were obtained from Sigma-Aldrich (St. Louis, MO, USA).\n Preparation of Xiao-Chai-Hu-Tang (XCHT). The Xiao-Chai-Hu-Tang extract was prepared by boiling 7 authenticated herbs in distilled water: 24.0 g Bupleurum chinense root (Chai-Hu), 9.0 g Pinellia ternata tuber (Ban-Xia), 9.0 g Scutellaria baicalensis root (Huang-Qin), 9.0 g Zizyphus jujube var. inermis fruit (Da-Zao or jujube fruit), 6.0 g Panax ginseng root (Ren-Shen or ginseng), 5.0 g Glycyrrhiza uralensis root (Gan-Cao or licorice), and 9.0 g Zingiber officinale rhizome (Sheng-Jiang or ginger). The aqueous extract was filtered and spray-dried. A stock solution of XCHT was prepared immediately prior to use by dissolving the XCHT powder in DMEM at a concentration of 250 mg/ml. The working concentrations of XCHT were obtained by diluting the stock solution in the culture medium.
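As a quick illustration of the dilution step just described, the working concentrations follow from the usual C1·V1 = C2·V2 relation. The sketch below is hypothetical: the 2 ml final volume and the helper function are ours, not the authors'; only the 250 mg/ml stock and the 0.5-1.5 mg/ml working range come from the text.

```python
# Hypothetical sketch of diluting the 250 mg/ml XCHT stock to the
# working concentrations used later in the study (0.5-1.5 mg/ml).
# The 2.0 ml final volume is an assumed example, not a value from the paper.

def stock_volume_needed(c_stock_mg_ml: float, c_working_mg_ml: float,
                        v_final_ml: float) -> float:
    """Return the stock volume (ml) required, from C1*V1 = C2*V2."""
    return c_working_mg_ml * v_final_ml / c_stock_mg_ml

if __name__ == "__main__":
    for c_working in (0.5, 1.0, 1.5):          # working concentrations, mg/ml
        v1 = stock_volume_needed(250.0, c_working, v_final_ml=2.0)
        print(f"{c_working} mg/ml: add {v1 * 1000:.1f} μl stock to "
              f"{(2.0 - v1) * 1000:.1f} μl medium")
```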
\n Cell Culture A human hepatoma cell line (Huh7) was purchased from Xiangya Cell Center (Hunan, China). Huh7 cells were grown in DMEM supplemented with 10% (v/v) FBS, 100 units/ml penicillin, and 100 μg/ml streptomycin. Huh7 cells were cultured at 37°C in a 5% CO₂ humidified environment. The cells were subcultured at 80-90% confluency.\n Animals Male BALB/c athymic nude mice (with an initial body weight of 20-22 g) were obtained from SLAC Animal Inc. (Shanghai, China). Animals were housed in standard plastic cages under automatic 12 h light/dark cycles at 23 °C, with free access to food and water. All animals were kept under specific pathogen-free conditions. The animal studies were approved by the Fujian Institute of Traditional Chinese Medicine Animal Ethics Committee (Fuzhou, Fujian, China). The experimental procedures were carried out in accordance with the Guidelines for Animal Experimentation of Fujian University of Traditional Chinese Medicine (Fuzhou, Fujian, China).\n In Vivo Xenograft Study Hepatocarcinoma xenograft mice were produced using Huh7 cells. The cells were grown in culture, detached by trypsinization, washed, and resuspended in serum-free DMEM. Resuspended cells (4 × 10⁶) mixed with Matrigel (1:1) were subcutaneously injected into the right flank of nude mice to initiate tumor growth. When tumors reached 3 mm in diameter, mice were randomly divided into two groups (n = 10) and treated with XCHT (dissolved in saline) or saline daily by intraperitoneal injection. All treatments were given 5 days a week for 21 days. Body weight and tumor size were measured. Tumor size was determined by measuring the major (L) and minor (W) diameters with a caliper. The tumor volume was calculated according to the following formula: tumor volume = π/6 × L × W². At the end of the experiment, the mice were anaesthetized with ether and sacrificed by cervical vertebra dislocation. The tumors were then excised and weighed, and tumor segments were fixed in buffered formalin or stored at -80°C for molecular analysis.
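The tumor-volume formula above is the standard modified-ellipsoid approximation. A minimal sketch of the calculation; the caliper readings below are invented for illustration, not measurements from the study.

```python
import math

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Modified-ellipsoid estimate used above: V = (π/6) × L × W²."""
    return math.pi / 6.0 * length_mm * width_mm ** 2

# Illustrative numbers only:
print(f"{tumor_volume_mm3(10.0, 7.5):.1f} mm^3")  # ≈ 294.5 mm^3
```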
\n Assessment of Cell Viability by the MTT Assay. Cell viability was assessed by the MTT colorimetric assay. Huh7 cells were seeded into 96-well plates at a density of 1 × 10⁴ cells/well in 0.1 ml medium. The cells were treated with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, 48 h and 72 h. At the end of the treatment, 100 μl of MTT (0.5 mg/ml in PBS) were added to each well, and the samples were incubated for an additional 4 h at 37°C. The purple-blue MTT formazan precipitate was dissolved in 100 μl DMSO. The absorbance was measured at 570 nm using an ELISA reader (BioTek, Model ELX800, USA).\n Cell Morphology Huh7 cells were seeded into 6-well plates at a density of 2 × 10⁵ cells/ml in 2 ml DMEM. The cells were treated with 0, 0.5, 1.0, and 1.5 mg/ml of XCHT for 24 h. Cell morphology was observed using a phase-contrast microscope (Olympus, Japan), and the photographs were taken at a magnification of 200×.\n Detection of Apoptosis with Hoechst Staining. Huh7 cells were seeded into 12-well plates at a density of 1 × 10⁵ cells/ml in 1 ml medium. After the cells were treated with XCHT for 24 h, apoptosis was visualized using the Hoechst staining kit as described in the manufacturer’s instructions. Briefly, at the end of the treatment, cells were fixed with 4% paraformaldehyde and then incubated in Hoechst solution for 5-10 min in the dark. Images were captured using a phase-contrast fluorescence microscope (Leica, Germany) at a magnification of 400×.
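For readers following the MTT protocol above, percent viability is conventionally the treated-to-control absorbance ratio at 570 nm. A hypothetical sketch; the A570 values below are invented placeholders, not study data.

```python
# Hypothetical A570 readings; real values would come from the plate reader.
control_od = [0.82, 0.79, 0.85]               # untreated replicate wells
treated_od = {0.5: [0.66, 0.70, 0.68],        # mg/ml XCHT -> replicate ODs
              1.0: [0.49, 0.52, 0.50],
              1.5: [0.33, 0.30, 0.35]}

def mean(xs):
    return sum(xs) / len(xs)

control_mean = mean(control_od)
for dose, ods in treated_od.items():
    viability = 100.0 * mean(ods) / control_mean   # control = 100%
    print(f"{dose} mg/ml XCHT: {viability:.1f}% viability "
          f"({100.0 - viability:.1f}% reduction)")
```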
\n Colony Formation Huh7 cells were seeded into 6-well plates at a density of 2 × 10⁵ cells/ml in 2 ml medium. After treatment with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, the cells were collected, diluted in fresh medium in the absence of XCHT, and reseeded into 6-well plates at a density of 1 × 10³ cells/well. Following incubation for 8 days in a 37°C humidified incubator with 5% CO₂, the colonies were fixed with 10% formaldehyde, stained with 0.01% crystal violet and counted. Cell survival was calculated by comparing the survival of compound-treated cells to the control cells (normalized to 100% survival).\n Apoptosis Detection in Hepatocarcinoma Tissues by TUNEL Staining. Six tumors were randomly selected from the XCHT-treatment and control groups. Tumor tissues were fixed in 10% formaldehyde for 48 h, paraffin-embedded and then sectioned into 4-μm-thick slides. Samples were analyzed by TUNEL staining. Apoptotic cells were counted as DAB-positive cells (brown staining) in five arbitrarily selected microscopic fields at a magnification of 400×. TUNEL-positive cells were counted as a percentage of the total cells.
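Both the colony-formation readout and the TUNEL (and later Ki-67) counts above reduce to simple normalizations against a control. A small illustrative sketch; the counts are invented, not taken from the paper.

```python
def percent_of_control(treated_count: float, control_count: float) -> float:
    """Survival normalized so that the untreated control = 100%."""
    return 100.0 * treated_count / control_count

def percent_positive(positive: int, total: int) -> float:
    """Fraction of stained (e.g., DAB/TUNEL-positive) cells in a field."""
    return 100.0 * positive / total

# Hypothetical colony counts and per-field cell counts:
print(f"{percent_of_control(45, 160):.1f}% colony survival")
print(f"{percent_positive(82, 100):.1f}% TUNEL-positive cells")
```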
\n Immunohistochemistry Analysis of Hepatocarcinoma Tissues Immunohistochemical staining for Ki-67 was performed as previously described. The sections were deparaffinized in xylene and hydrated through graded alcohols. Antigen retrieval was performed using heat treatment (in a microwave oven at 750 W for 7 minutes) in 10 mM sodium citrate buffer, pH 6.0. Sections were allowed to cool in the buffer at room temperature for 30 minutes and were then rinsed in deionised water three times for two minutes each. The endogenous peroxidase activity was blocked with 3% (v/v) hydrogen peroxide for 10 minutes. The sections were incubated with 1% bovine serum albumin in order to decrease non-specific staining and reduce endogenous peroxidase activity. The sections were then incubated with Ki-67 antibody (1:100 dilution) at 4°C overnight in a staining chamber. After rinsing three times in PBS, sections were incubated in biotinylated goat anti-rabbit IgG (Boshide, Wuhan, China), followed by treatment with an avidin-biotin-peroxidase complex (Vector). Immunostaining was visualized by incubation in 3,3'-diaminobenzidine (DAB) as a chromogen. Sections were counterstained with haematoxylin. The Ki-67 positive immunostaining was visualized using a Nikon Eclipse 50i microscope (40× objective). Ki-67 expression was evaluated in 5 different fields, and the mean percentage of Ki-67-positive staining was calculated. To rule out any non-specific staining, PBS was used in place of the primary antibody as a negative control.\n RNA Extraction and RT-PCR Analysis The expression of Bax, Bcl-2, CDK4 and CyclinD1 genes in HCC tissues or cells was analyzed by RT-PCR. Total RNA was isolated with TriZol Reagent according to the manufacturer’s instructions. 1 μg of total RNA was used to synthesize cDNA using the SuperScript II reverse transcriptase Kit (AMV) (TaKaRa, Tokyo, Japan). The reaction contained RNA, selected primers, and 10 μl of the RT-PCR master mix: 10 mM Tris-HCl (pH 8.3), 50 mM KCl, 5 mM MgCl₂, 1 unit/μl RNase inhibitor, 0.25 unit/μl AMV reverse transcriptase, 2.5 ml random primer, and 1 mM each of dATP, dGTP, dCTP and dTTP. Reverse transcription was performed for 1 hour at 42°C. The obtained cDNA was used to determine the mRNA amount of Bax, Bcl-2, CDK4 and cyclin-D1 by PCR. GAPDH was used as an internal control. Samples were analyzed by gel electrophoresis (1.5% agarose). The DNA bands were visualized using a gel documentation system (BioRad, Model Gel Doc 2000, USA).\nThe sequences of the primers used for amplification of CDK4, CyclinD1, Bcl-2, Bax and GAPDH transcripts are as follows: CDK4 forward, 5’-CAT GTA GAC CAG GAC CTA AGC-3’ and reverse, 5’-AAC TGG CGC ATC AGA TCC TAG-3’; cyclin-D1 forward, 5’-TGG ATG CTG GAG GTC TGC GAG GAA-3’ and reverse, 5’-GGC TTC GAT CTG CTC CTG GCA GGC-3’; Bcl-2 forward, 5’-CAG CTG CAC CTG ACG CCC TT-3’ and reverse, 5’-GCC TCC GTT ATC CTG GAT CC-3’; Bax forward, 5’-TGC TTC AGG GTT TCA TCC AGG-3’ and reverse, 5’-TGG CAA AGT AGA AAA GGG CGA-3’; GAPDH forward, 5’-GTC ATC CAT GAC AAC TTT GG-3’ and reverse, 5’-GAG CTT GAC AAA GTG GTC GT-3’.
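The primer pairs listed above can be tabulated and sanity-checked programmatically. The snippet below transcribes them (with the triplet spacing removed) and computes length and GC content; the GC check is our addition for illustration, not part of the authors' protocol.

```python
# Primer pairs transcribed from the section above (5'->3', spaces removed).
PRIMERS = {
    "CDK4":      ("CATGTAGACCAGGACCTAAGC",    "AACTGGCGCATCAGATCCTAG"),
    "cyclin-D1": ("TGGATGCTGGAGGTCTGCGAGGAA", "GGCTTCGATCTGCTCCTGGCAGGC"),
    "Bcl-2":     ("CAGCTGCACCTGACGCCCTT",     "GCCTCCGTTATCCTGGATCC"),
    "Bax":       ("TGCTTCAGGGTTTCATCCAGG",    "TGGCAAAGTAGAAAAGGGCGA"),
    "GAPDH":     ("GTCATCCATGACAACTTTGG",     "GAGCTTGACAAAGTGGTCGT"),
}

def gc_content(seq: str) -> float:
    """Percent G+C, a quick sanity check on primer composition."""
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

for gene, (fwd, rev) in PRIMERS.items():
    print(f"{gene}: fwd {len(fwd)} nt ({gc_content(fwd):.0f}% GC), "
          f"rev {len(rev)} nt ({gc_content(rev):.0f}% GC)")
```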
\n Western Blotting Analysis Four tumors were randomly selected from the XCHT-treatment and control groups. Tumors were homogenized in nondenaturing lysis buffer and centrifuged at 14,000 × g for 15 min. Protein concentrations were determined by BCA protein assay. Huh7 cells (2.0 × 10⁵ cells/ml) in 5 ml medium were seeded into 25 cm² flasks and treated with the indicated concentrations of XCHT for 24 h. Treated cells were lysed in mammalian cell lysis buffer containing protease and phosphatase inhibitor cocktails, and centrifuged at 14,000 × g for 15 min. Protein concentrations in cell lysate supernatants were determined by BCA protein assay. Equal amounts of protein from each tumor or cell lysate were resolved on 12% SDS-PAGE gels at 80 V for 2 h and transferred onto PVDF membranes. The membranes were blocked for 2 h with 5% nonfat milk and incubated with the desired primary antibody directed against Bax, Bcl-2, CDK4, Cyclin D1, or β-actin (all diluted 1:1000) overnight at 4°C. 
Appropriate HRP-conjugated secondary antibodies (anti-rabbit or anti-mouse; 1:2000) were incubated with the membrane for 1 h at room temperature, and the membranes were washed again in TBS-T followed by enhanced chemiluminescence detection.\n Statistical Analysis All data are shown as the mean of three measurements. The data were analyzed using the SPSS package for Windows (Version 11.5). Statistical analysis of the data was performed using Student’s t-test and ANOVA. Differences with P < 0.05 were considered statistically significant.", "Dulbecco’s Modified Eagle Medium (DMEM), fetal bovine serum (FBS), penicillin-streptomycin, trypsin-EDTA and TriZol reagents were purchased from Life Technologies (Carlsbad, CA, USA). PrimeScript™ RT reagent Kit with gDNA Eraser was purchased from Takara BIO Inc. (Tokyo, Japan). TUNEL assay kit was purchased from R&D Systems (Minneapolis, MN, USA). BCA Protein Assay Kit was purchased from Tiangen Biotech Co., Ltd. (Beijing, China). Antibodies for Bax, Bcl-2, CDK4 and CyclinD1 were obtained from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). All other chemicals, unless otherwise stated, were obtained from Sigma-Aldrich (St. Louis, MO, USA).", "The Xiao-Chai-Hu-Tang extract was prepared by boiling 7 authenticated herbs in distilled water: 24.0 g Bupleurum chinense root (Chai-Hu), 9.0 g Pinellia ternata tuber (Ban-Xia), 9.0 g Scutellaria baicalensis root (Huang-Qin), 9.0 g Zizyphus jujube var. inermis fruit (Da-Zao or jujube fruit), 6.0 g Panax ginseng root (Ren-Shen or ginseng), 5.0 g Glycyrrhiza uralensis root (Gan-Cao or licorice), and 9.0 g Zingiber officinale rhizome (Sheng-Jiang or ginger). The aqueous extract was filtered and spray-dried. A stock solution of XCHT was prepared immediately prior to use by dissolving the XCHT powder in DMEM at a concentration of 250 mg/ml. The working concentrations of XCHT were obtained by diluting the stock solution in the culture medium.", "A human hepatoma cell line (Huh7) was purchased from Xiangya Cell Center (Hunan, China). Huh7 cells were grown in DMEM supplemented with 10% (v/v) FBS, 100 units/ml penicillin, and 100 μg/ml streptomycin. Huh7 cells were cultured at 37°C in a 5% CO₂ humidified environment. The cells were subcultured at 80-90% confluency.", "Male BALB/c athymic nude mice (with an initial body weight of 20-22 g) were obtained from SLAC Animal Inc. (Shanghai, China). Animals were housed in standard plastic cages under automatic 12 h light/dark cycles at 23 °C, with free access to food and water. All animals were kept under specific pathogen-free conditions. The animal studies were approved by the Fujian Institute of Traditional Chinese Medicine Animal Ethics Committee (Fuzhou, Fujian, China). The experimental procedures were carried out in accordance with the Guidelines for Animal Experimentation of Fujian University of Traditional Chinese Medicine (Fuzhou, Fujian, China).",
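The Statistical Analysis paragraph above names Student's t-test and ANOVA at a P < 0.05 threshold. A minimal sketch of how such comparisons might be run with SciPy (assumed to be available), using invented triplicate values rather than study data:

```python
from scipy import stats

# Hypothetical triplicates for a control and two treatment groups:
control = [100.0, 98.5, 101.2]
xcht_lo = [82.3, 79.9, 84.1]
xcht_hi = [35.6, 33.0, 38.2]

t, p = stats.ttest_ind(control, xcht_lo)                 # two-sample t-test
f, p_anova = stats.f_oneway(control, xcht_lo, xcht_hi)   # one-way ANOVA

print(f"t-test: t = {t:.2f}, P = {p:.4f}",
      "(significant)" if p < 0.05 else "(not significant)")
print(f"ANOVA:  F = {f:.2f}, P = {p_anova:.4f}")
```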
"Hepatocarcinoma xenograft mice were produced using Huh7 cells. The cells were grown in culture, detached by trypsinization, washed, and resuspended in serum-free DMEM. Resuspended cells (4 × 10⁶) mixed with Matrigel (1:1) were subcutaneously injected into the right flank of nude mice to initiate tumor growth. When tumors reached 3 mm in diameter, mice were randomly divided into two groups (n = 10) and treated with XCHT (dissolved in saline) or saline daily by intraperitoneal injection. All treatments were given 5 days a week for 21 days. Body weight and tumor size were measured. Tumor size was determined by measuring the major (L) and minor (W) diameters with a caliper. The tumor volume was calculated according to the following formula: tumor volume = π/6 × L × W². At the end of the experiment, the mice were anaesthetized with ether and sacrificed by cervical vertebra dislocation. The tumors were then excised and weighed, and tumor segments were fixed in buffered formalin or stored at -80°C for molecular analysis.", "Cell viability was assessed by the MTT colorimetric assay. Huh7 cells were seeded into 96-well plates at a density of 1 × 10⁴ cells/well in 0.1 ml medium. The cells were treated with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, 48 h and 72 h. At the end of the treatment, 100 μl of MTT (0.5 mg/ml in PBS) were added to each well, and the samples were incubated for an additional 4 h at 37°C. The purple-blue MTT formazan precipitate was dissolved in 100 μl DMSO. The absorbance was measured at 570 nm using an ELISA reader (BioTek, Model ELX800, USA).", "Huh7 cells were seeded into 6-well plates at a density of 2 × 10⁵ cells/ml in 2 ml DMEM. The cells were treated with 0, 0.5, 1.0, and 1.5 mg/ml of XCHT for 24 h. Cell morphology was observed using a phase-contrast microscope (Olympus, Japan), and the photographs were taken at a magnification of 200×.", "Huh7 cells were seeded into 12-well plates at a density of 1 × 10⁵ cells/ml in 1 ml medium. After the cells were treated with XCHT for 24 h, apoptosis was visualized using the Hoechst staining kit as described in the manufacturer’s instructions. Briefly, at the end of the treatment, cells were fixed with 4% paraformaldehyde and then incubated in Hoechst solution for 5-10 min in the dark. Images were captured using a phase-contrast fluorescence microscope (Leica, Germany) at a magnification of 400×.", "Huh7 cells were seeded into 6-well plates at a density of 2 × 10⁵ cells/ml in 2 ml medium. After treatment with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, the cells were collected, diluted in fresh medium in the absence of XCHT, and reseeded into 6-well plates at a density of 1 × 10³ cells/well. Following incubation for 8 days in a 37°C humidified incubator with 5% CO₂, the colonies were fixed with 10% formaldehyde, stained with 0.01% crystal violet and counted. Cell survival was calculated by comparing the survival of compound-treated cells to the control cells (normalized to 100% survival).", "Six tumors were randomly selected from the XCHT-treatment and control groups. Tumor tissues were fixed in 10% formaldehyde for 48 h, paraffin-embedded and then sectioned into 4-μm-thick slides. Samples were analyzed by TUNEL staining. Apoptotic cells were counted as DAB-positive cells (brown staining) in five arbitrarily selected microscopic fields at a magnification of 400×. TUNEL-positive cells were counted as a percentage of the total cells.", "Immunohistochemical staining for Ki-67 was performed as previously described. The sections were deparaffinized in xylene and hydrated through graded alcohols. Antigen retrieval was performed using heat treatment (in a microwave oven at 750 W for 7 minutes) in 10 mM sodium citrate buffer, pH 6.0. 
Sections were allowed to cool in the buffer at room temperature for 30 minutes and were then rinsed in deionised water three times for two minutes each. The endogenous peroxidase activity was blocked with 3% (v/v) hydrogen peroxide for 10 minutes. The sections were incubated with 1% bovine serum albumin in order to decrease non-specific staining and reduce endogenous peroxidase activity. The sections were then incubated with Ki-67 antibody (1:100 dilution) at 4°C overnight in a staining chamber. After rinsing three times in PBS, sections were incubated in biotinylated goat anti-rabbit IgG (Boshide, Wuhan, China), followed by treatment with an avidin-biotin-peroxidase complex (Vector). Immunostaining was visualized by incubation in 3,3'-diaminobenzidine (DAB) as a chromogen. Sections were counterstained with haematoxylin. The Ki-67 positive immunostaining was visualized using a Nikon Eclipse 50i microscope (40× objective). Ki-67 expression was evaluated in 5 different fields, and the mean percentage of Ki-67-positive staining was calculated. To rule out any non-specific staining, PBS was used in place of the primary antibody as a negative control.", "The expression of Bax, Bcl-2, CDK4 and CyclinD1 genes in HCC tissues or cells was analyzed by RT-PCR. Total RNA was isolated with TriZol Reagent according to the manufacturer’s instructions. 1 μg of total RNA was used to synthesize cDNA using the SuperScript II reverse transcriptase Kit (AMV) (TaKaRa, Tokyo, Japan). The reaction contained RNA, selected primers, and 10 μl of the RT-PCR master mix: 10 mM Tris-HCl (pH 8.3), 50 mM KCl, 5 mM MgCl₂, 1 unit/μl RNase inhibitor, 0.25 unit/μl AMV reverse transcriptase, 2.5 ml random primer, and 1 mM each of dATP, dGTP, dCTP and dTTP. Reverse transcription was performed for 1 hour at 42°C. The obtained cDNA was used to determine the mRNA amount of Bax, Bcl-2, CDK4 and cyclin-D1 by PCR. GAPDH was used as an internal control. Samples were analyzed by gel electrophoresis (1.5% agarose). The DNA bands were visualized using a gel documentation system (BioRad, Model Gel Doc 2000, USA).\nThe sequences of the primers used for amplification of CDK4, CyclinD1, Bcl-2, Bax and GAPDH transcripts are as follows: CDK4 forward, 5’-CAT GTA GAC CAG GAC CTA AGC-3’ and reverse, 5’-AAC TGG CGC ATC AGA TCC TAG-3’; cyclin-D1 forward, 5’-TGG ATG CTG GAG GTC TGC GAG GAA-3’ and reverse, 5’-GGC TTC GAT CTG CTC CTG GCA GGC-3’; Bcl-2 forward, 5’-CAG CTG CAC CTG ACG CCC TT-3’ and reverse, 5’-GCC TCC GTT ATC CTG GAT CC-3’; Bax forward, 5’-TGC TTC AGG GTT TCA TCC AGG-3’ and reverse, 5’-TGG CAA AGT AGA AAA GGG CGA-3’; GAPDH forward, 5’-GTC ATC CAT GAC AAC TTT GG-3’ and reverse, 5’-GAG CTT GAC AAA GTG GTC GT-3’.",
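Quantifying the gel bands described above usually means normalizing each target band to the internal control (GAPDH for mRNA, β-actin for protein). A hypothetical sketch with invented intensities; the numbers are placeholders, not densitometry from the study.

```python
# Hypothetical band intensities from a gel documentation system; each
# target is normalized to its loading control (GAPDH or beta-actin).
bands = {
    "Bax":       {"target": 1840.0, "loading": 2050.0},
    "Bcl-2":     {"target":  760.0, "loading": 1980.0},
    "CDK4":      {"target":  910.0, "loading": 2010.0},
    "cyclin-D1": {"target":  840.0, "loading": 1995.0},
}

for gene, d in bands.items():
    ratio = d["target"] / d["loading"]   # normalized expression level
    print(f"{gene}: normalized intensity = {ratio:.2f}")
```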
"Four tumors were randomly selected from the XCHT-treatment and control groups. Tumors were homogenized in nondenaturing lysis buffer and centrifuged at 14,000 × g for 15 min. Protein concentrations were determined by BCA protein assay. Huh7 cells (2.0 × 10⁵ cells/ml) in 5 ml medium were seeded into 25 cm² flasks and treated with the indicated concentrations of XCHT for 24 h. Treated cells were lysed in mammalian cell lysis buffer containing protease and phosphatase inhibitor cocktails, and centrifuged at 14,000 × g for 15 min. Protein concentrations in cell lysate supernatants were determined by BCA protein assay. Equal amounts of protein from each tumor or cell lysate were resolved on 12% SDS-PAGE gels at 80 V for 2 h and transferred onto PVDF membranes. The membranes were blocked for 2 h with 5% nonfat milk and incubated with the desired primary antibody directed against Bax, Bcl-2, CDK4, Cyclin D1, or β-actin (all diluted 1:1000) overnight at 4°C. Appropriate HRP-conjugated secondary antibodies (anti-rabbit or anti-mouse; 1:2000) were incubated with the membrane for 1 h at room temperature, and the membranes were washed again in TBS-T followed by enhanced chemiluminescence detection.", "All data are shown as the mean of three measurements. The data were analyzed using the SPSS package for Windows (Version 11.5). Statistical analysis of the data was performed using Student’s t-test and ANOVA. Differences with P < 0.05 were considered statistically significant.", " XCHT inhibits hepatocellular carcinoma (HCC) growth in vitro and in vivo. We evaluated the in vitro anti-cancer activity of XCHT on the human hepatocellular carcinoma cell line Huh7 using the MTT assay. As shown in Figure 1A, treatment with 0.5-1.5 mg/ml of XCHT for 24, 48, or 72 h reduced the viability of Huh7 cells by 17.91-26.27%, 44.01-77%, and 64.8-86.71% (P < 0.05) compared to untreated controls. These results indicate that XCHT inhibits the growth of Huh7 cells in both dose- and time-dependent manners. To verify these results, we evaluated the effect of XCHT on Huh7 cell morphology using phase-contrast microscopy. As shown in Figure 1B, untreated Huh7 cells appeared as densely packed and disorganized multilayers. In contrast, many of the XCHT-treated cells were rounded, shrunken, and detached from adjacent cells, or floating in the medium. Taken together, these data demonstrate that XCHT inhibits the growth of Huh7 cells.\nEffect of Xiao-Chai-Hu Tang (XCHT) on the viability and morphology of Huh7 cells. (A) Cell viability was determined by the MTT assay after Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24, 48 or 72 h. The data were normalized to the viability of control cells (100%). Data are the averages with standard deviation (SD; error bars) from 3 independent experiments. The symbols (#), (*), and (&) indicate statistical significance compared to control cells (P < 0.05), for each indicated timepoint. (B) The Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h, and morphological changes were observed using phase-contrast microscopy. The images were captured at a magnification of 200×. Images are representative of 3 independent experiments.\nTo explore the anti-cancer activity of XCHT in vivo, we measured tumor volume and weight in HCC xenograft mice. As shown in Figure 2A and 2B, XCHT treatment significantly reduced both tumor volume and weight by Day 11 post-treatment. Compared to tumors from the control group (293.37±19.36 mm³), XCHT treatment reduced tumor volume (201.5±27.83 mm³; P < 0.05). Tumor weight was reduced by 31.84% in XCHT-treated mice compared to control-treated mice (P < 0.05). Mice treated with XCHT demonstrated no changes in body weight during the course of the study, suggesting that XCHT treatment was relatively nontoxic to the animals (Figure 2C). Taken together, these data suggest that XCHT inhibits HCC growth both in vivo and in vitro, without apparent adverse effects.\nEffect of XCHT on tumor growth in hepatocellular carcinoma (HCC) xenograft mice. After tumor development, the mice were given intra-gastric administration of 14.2 g/kg of XCHT or PBS daily for 21 days. Tumor volume (A), tumor weight (B), and body weight (C) were measured. Data shown are averages with SD (error bars) from 10 mice in each group (n = 10). *P < 0.05, versus controls.
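The tumor-volume comparison above can be re-derived from the reported summary statistics (mean ± SD, n = 10 per group); SciPy provides a t-test directly from such summaries. Equal variances are assumed in this sketch, which the paper does not state.

```python
# Recomputing the group comparison from the reported summary statistics.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=293.37, std1=19.36, nobs1=10,   # control, mm³
                            mean2=201.50, std2=27.83, nobs2=10)   # XCHT-treated
print(f"t = {t:.2f}, P = {p:.2e}")  # comfortably below the P < 0.05 threshold
```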
* P< 0.05, versus controls.\nWe evaluated the in vitro anti-cancer activity of XCHT on the human hepatocellular carcinoma cell line Huh7 using the MTT assay. As shown in Figure 1A, treatment with 0.5-1.5 mg/ml of XCHT for 24, 48, or 72 reduced the viability of Huh7 cells by 17.91-26.27%, 44.01-77%, and 64.8-86.71% (P < 0.05) compared to untreated controls. These results indicate that XCHT inhibits the growth of Huh7 cells in both dose- and time-dependent manners. To verify these results, we evaluated the effect of XCHT on Huh7 cell morphology using phase-contrast microscopy. As shown in Figure 1B, untreated Huh7 cells appeared as densely packed and disorganized multilayers. In contrast, many of the XCHT-treated cells were rounded, shrunken, and detached from adjacent cells, or floating in the medium. Taken together, these data demonstrate that XCHT inhibits the growth of Huh7 cells.\nEffect of Xiao-Chai-Hu Tang (XCHT) on the viability and morphology of Huh7 cells. (A) Cell viability was determined by the MTT assay after Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24, 48 or 72 h. The data were normalized to the viability of control cells (100%). Data are the averages with standard deviation (SD; error bars) from 3 independent experiments. The symbols (#), (*), and (&) indicate statistical significance compared to control cells (P<0.05), for each indicated timepoint. (B) The Huh7 cells were treated with the 0.5-1.5 mg/ml XCHT for 24 h, and morphological changes were observed using phase-contrast microscopy. The images were captured at a magnification of 200x. Images are representative of 3 independent experiments.\nTo explore the anti-cancer activity of XCHT in vivo, we measured tumor volume and weight in HCC xenograft mice. As shown in Figure 2A and 2B, XCHT treatment significantly reduced both tumor volume and weight by Day 11 post-treatment. In contrast to tumors from the control group (293.37±19.36 mm3), XCHT treatment reduced tumor volume by 15.15% (201.5±27.83 mm3; P < 0.05). Tumor weight was reduced by 31.84% in XCHT-treated mice compared to control-treated mice (P < 0.05). Mice treated with XCHT demonstrated no changes in body weight during the course of the study, suggesting that XCHT treatment was relatively nontoxic to the animals (Figure 2C). Taken together, these data suggest that XCHT inhibits HCC growth both in vivo and in vitro, without apparent adverse effects.\nEffect of XCHT on tumor growth in hepatocellular carcinoma (HCC) xenograft mice. After tumor development, the mice were given intra-gastric administration of 14.2 g/kg of XCHT or PBS daily for 21 days. Tumor volume (A), tumor weight (B), and body weight (C) were measured. Data shown are averages with SD (error bars) from 10 mice in each group (n = 10). * P< 0.05, versus controls.\n XCHT inhibits HCC proliferation in vivo and in vitro One of the hallmarks of oncogenesis is unchecked cell proliferation, but the anti-proliferative activity of XCHT on hepatocarcinoma tumors was unknown. To measure the effect of XCHT on cell proliferation, we examined XCHT-treated tumors for a marker expressed by proliferating cell nuclei (Ki-67) using immunohistochemical staining (IHC). As shown in Figure 3A, the percentages of Ki-67 positive cells in tumor tissues from control and XCHT-treated xenograft mice were 76.0 ± 9.6% and 51 ± 15.3%, respectively (P <0.05). To examine the anti-proliferative activity of XCHT in vitro, Huh7 cells were treated with 0.5-1.5 mg/ml of extract and colony formation was measured. 
As shown in Figure 3B, treatment with increasing concentrations of XCHT for 24 h reduced the cell survival rate by 55.3%, 69.8% and 92.8% compared to untreated controls (P < 0.05). These results suggest that XCHT can inhibit HCC cell proliferation both in vivo and in vitro.\nEffect of XCHT on cell proliferation in HCC xenograft mice and Huh7 cells. (A) Ki-67 assay in tumor tissues (400×). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls. (B) Huh7 cell colony formation assay. Data are averages with SD (error bars) from at least three independent experiments. *P < 0.01, versus control cells.\n XCHT induces apoptosis of hepatocellular carcinoma cells in vivo and in vitro. Apoptosis within HCC tumor tissues was visualized by IHC staining using the TUNEL assay. As shown in Fig 4A, XCHT-treated mice had a significantly higher percentage of TUNEL-positive cells (82±15.3%) compared to the untreated control mice (47±9.6%), indicating the pro-apoptotic effect of XCHT in vivo. To visualize the pro-apoptotic activity in XCHT-treated Huh7 cells, we evaluated changes in nuclear morphology using the DNA-binding dye Hoechst 33258. As shown in Fig 4B, XCHT-treated cells revealed condensed chromatin and fragmented nuclear morphology, typical apoptotic features. In contrast, untreated cell nuclei showed a less intense but homogenous staining pattern, indicative of proliferating cells.\nEffect of XCHT on apoptosis in both HCC xenograft mice and Huh7 cells. (A) TUNEL assay in tumor tissues (400×). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls; (B) Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h and stained with Hoechst 33258. Images were visualized using a phase-contrast fluorescence microscope. The images were captured at a magnification of 400×. Images are representative of 3 independent experiments.
\n XCHT regulates the gene and protein expression correlated with apoptosis and proliferation in vivo and in vitro Very little is known regarding the mechanism of action of XCHT, or the pathways that this extract targets to generate its anti-proliferative and pro-apoptotic activities. We examined the effect of XCHT treatment on the expression of genes and proteins that are important regulators of apoptosis and proliferation using RT-PCR and western blotting. As shown in Fig 5A and B, XCHT significantly reduced both gene and protein expression of the anti-apoptotic factor Bcl-2 in HCC tumor tissues. In contrast, the gene and protein expression of the pro-apoptotic factor Bax increased after XCHT treatment. XCHT treatment significantly decreased the mRNA and protein expression levels of CDK4 and cyclin-D1 compared to the control group (P < 0.05). Consistent with our in vivo data from HCC tissues, Huh7 cells treated with XCHT demonstrated a significant reduction in the mRNA and protein expression levels of Bcl-2, CDK4 and cyclin-D1, and an increase in the expression of Bax in vitro (Fig 6A and B). Together these data suggest that XCHT promotes apoptosis and inhibits proliferation of HCC by increasing the pro-apoptotic Bax/Bcl-2 ratio and modulating the expression of cell cycle-regulatory genes.\nEffect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in HCC xenograft mice. (A) Four tumors were randomly selected from each group, and the mRNA or protein expression levels of Bcl-2, Bax, cyclin-D1, and CDK4 were determined by RT-PCR and Western blot analysis. GAPDH or β-actin were used as the internal controls. Data shown are representative samples. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression levels of untreated control mice (100%). The symbols (*), (&), (#), and (§) indicate statistical significance versus controls (P < 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1.\nEffect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in Huh7 cells. The cells were treated with 0.5-1.5 mg/ml XCHT for 24 h. (A) The mRNA and protein levels of Bcl-2, Bax, CDK4 and cyclin-D1 were determined using RT-PCR or western blotting. GAPDH or β-actin were used as the internal controls. The data are representative of three independent experiments. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression of the untreated control cells (100%). The symbols (*), (&), (#), and (§) indicate statistical significance compared to control cells (P < 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1.",
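The Bax/Bcl-2 ratio invoked above is simply the quotient of the two normalized expression levels. A sketch with invented, control-normalized values chosen only to illustrate the dose trend the authors describe:

```python
# Hypothetical densitometry values, each normalized to untreated control = 100%.
expression = {          # dose (mg/ml) -> {"Bax": %, "Bcl-2": %}
    0.0: {"Bax": 100.0, "Bcl-2": 100.0},
    0.5: {"Bax": 135.0, "Bcl-2": 80.0},
    1.0: {"Bax": 170.0, "Bcl-2": 55.0},
    1.5: {"Bax": 210.0, "Bcl-2": 35.0},
}

for dose, levels in expression.items():
    ratio = levels["Bax"] / levels["Bcl-2"]   # pro-apoptotic shift when > 1
    print(f"{dose} mg/ml XCHT: Bax/Bcl-2 = {ratio:.2f}")
```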
"We evaluated the in vitro anti-cancer activity of XCHT on the human hepatocellular carcinoma cell line Huh7 using the MTT assay. As shown in Figure 1A, treatment with 0.5-1.5 mg/ml of XCHT for 24, 48, or 72 h reduced the viability of Huh7 cells by 17.91-26.27%, 44.01-77%, and 64.8-86.71% (P < 0.05) compared to untreated controls. These results indicate that XCHT inhibits the growth of Huh7 cells in both dose- and time-dependent manners. To verify these results, we evaluated the effect of XCHT on Huh7 cell morphology using phase-contrast microscopy. As shown in Figure 1B, untreated Huh7 cells appeared as densely packed and disorganized multilayers. 
In contrast, many of the XCHT-treated cells were rounded, shrunken, and detached from adjacent cells, or floating in the medium. Taken together, these data demonstrate that XCHT inhibits the growth of Huh7 cells.\nEffect of Xiao-Chai-Hu Tang (XCHT) on the viability and morphology of Huh7 cells. (A) Cell viability was determined by the MTT assay after Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24, 48 or 72 h. The data were normalized to the viability of control cells (100%). Data are the averages with standard deviation (SD; error bars) from 3 independent experiments. The symbols (#), (*), and (&) indicate statistical significance compared to control cells (P < 0.05), for each indicated timepoint. (B) The Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h, and morphological changes were observed using phase-contrast microscopy. The images were captured at a magnification of 200×. Images are representative of 3 independent experiments.\nTo explore the anti-cancer activity of XCHT in vivo, we measured tumor volume and weight in HCC xenograft mice. As shown in Figure 2A and 2B, XCHT treatment significantly reduced both tumor volume and weight by Day 11 post-treatment. Compared to tumors from the control group (293.37±19.36 mm³), XCHT treatment reduced tumor volume (201.5±27.83 mm³; P < 0.05). Tumor weight was reduced by 31.84% in XCHT-treated mice compared to control-treated mice (P < 0.05). Mice treated with XCHT demonstrated no changes in body weight during the course of the study, suggesting that XCHT treatment was relatively nontoxic to the animals (Figure 2C). Taken together, these data suggest that XCHT inhibits HCC growth both in vivo and in vitro, without apparent adverse effects.\nEffect of XCHT on tumor growth in hepatocellular carcinoma (HCC) xenograft mice. After tumor development, the mice were given intra-gastric administration of 14.2 g/kg of XCHT or PBS daily for 21 days. Tumor volume (A), tumor weight (B), and body weight (C) were measured. Data shown are averages with SD (error bars) from 10 mice in each group (n = 10). *P < 0.05, versus controls.", "One of the hallmarks of oncogenesis is unchecked cell proliferation, but the anti-proliferative activity of XCHT on hepatocarcinoma tumors was unknown. To measure the effect of XCHT on cell proliferation, we examined XCHT-treated tumors for a marker expressed by proliferating cell nuclei (Ki-67) using immunohistochemical staining (IHC). As shown in Figure 3A, the percentages of Ki-67 positive cells in tumor tissues from control and XCHT-treated xenograft mice were 76.0 ± 9.6% and 51 ± 15.3%, respectively (P < 0.05). To examine the anti-proliferative activity of XCHT in vitro, Huh7 cells were treated with 0.5-1.5 mg/ml of extract and colony formation was measured. As shown in Figure 3B, treatment with increasing concentrations of XCHT for 24 h reduced the cell survival rate by 55.3%, 69.8% and 92.8% compared to untreated controls (P < 0.05). These results suggest that XCHT can inhibit HCC cell proliferation both in vivo and in vitro.\nEffect of XCHT on cell proliferation in HCC xenograft mice and Huh7 cells. (A) Ki-67 assay in tumor tissues (400×). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls. (B) Huh7 cell colony formation assay. Data are averages with SD (error bars) from at least three independent experiments. 
*P < 0.01, versus control cells.", "Apoptosis within HCC tumor tissues was visualized by IHC staining using the TUNEL assay. As shown in Fig 4A, XCHT-treated mice had a significantly higher percentage of TUNEL-positive cells (82±15.3%) compared to the untreated control mice (47±9.6%), indicating the pro-apoptotic effect of XCHT in vivo. To visualize the pro-apoptotic activity in XCHT-treated Huh7 cells, we evaluated changes in nuclear morphology using the DNA-binding dye Hoechst 33258. As shown in Fig 4B, XCHT -treated cells revealed condensed chromatin and a fragmented nuclear morphology, and these characteristics demonstrated typical apoptotic morphological features. In contrast, untreated cell nuclei showed a less intense but homogenous staining pattern, indicative of proliferating cells.\nEffect of XCHT on apoptosis in both HCC xenograft mice and Huh7 cells. (A) TUNEL assay in tumor tissues (400 x). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls; (B) Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h and stained with Hoechst 33258. Images were visualized using a phase-contrast fluorescence microscope. The images were captured at a magnification of 400x. Images are representative of 3 independent experiments.", "Very little is known regarding the mechanism of action of XCHT, or the pathways that this extract targets to generate its anti-proliferative and pro-apoptotic activities. We examined the effect of XCHT treatment on the expression of correlated genes and proteins that are important regulators of apoptosis and proliferation using RT-PCR and western blotting. As shown in Fig 5A and B, XCHT significantly reduced both gene and protein expression of the anti-apoptotic factor Bcl-2 in HCC tumor tissues. In contrast, the gene and protein expression of the pro-apoptotic factor Bax increased after XCHT treatment. XCHT treatment significantly decreased the mRNA and protein expression levels of CDK4 and cyclin-D1 compared to the control group (P < 0.05). Correlating with our in vivo data from HCC tissues, Huh7 cells treated with XCHT demonstrated a significant reduction in the mRNA and protein expression levels of Bcl-2, CDK4 and cyclin-D1, and an increase in the expression of Bax in vitro (Fig 6A and B). Together these data suggest that XCHT promotes apoptosis and inhibits proliferation of HCC by increasing the pro-apoptotic Bax/Bcl-2 ratio and modulating the expression of cell cycle-regulatory genes.\nEffect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in HCC xenograft mice. (A) Four tumors were randomly selected from each group, and the mRNA or protein expression levels of Bcl-2, Bax, cyclin-D1, and CDK4 were determined by RT-PCR and Western blot analysis. GAPDH or β-actin were used as the internal controls. Data shown are representative samples. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression levels of untreated control mice (100%). The symbols (*), (&), (#), and (§) indicate statistical significance versus controls (P< 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1.\nEffect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in Huh7 cells. The cells were treated with 0.5-1.5 mg/ml XCHT for 24 h. (A) The mRNA and protein levels of Bcl-2, Bax, CDK4 and cyclin-D1 were\ndetermined using RT- PCR or western blotting. GAPDH or β - actin were used as the internal controls. 
The data are representative of three independent experiments. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression of the untreated control cells (100%). The symbols (*), (&), (#), and (§) indicate statistical significance compared to control cells (P < 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1.", "Xiao-Chai-Hu Tang (XCHT) is one of the most widely used Chinese herbal preparations, and it has long been used for the treatment of chronic liver diseases in China and other Asian countries. In the present study, we investigated its anti-tumor and anti-proliferative activities. Our findings suggest that XCHT can promote apoptosis and inhibit cellular proliferation in liver cancer.\nCancer cells are characterized by an uncontrolled increase in cell proliferation and/or a reduction in apoptosis [Adams and Cory, 2007]. Cell cycle deregulation is a hallmark of tumor cells, and targeting the proteins that mediate critical cell cycle processes is an emerging strategy for the treatment of cancer [Stewart et al, 2003]. The G1/S transition is one of the two main checkpoints of the cell cycle [Nurse, 1994], which is responsible for initiation and completion of DNA replication. G1/S progression is strongly regulated by cyclin-D1, which exerts its function by forming an active complex with its major catalytic CDK partners (CDK4/6) [Morgan, 1995]. An unchecked or hyperactivated cyclin-D1/CDK4 complex often leads to uncontrolled cell division and malignancy [Harakeh et al, 2008; Kessel and Luo, 2000; Chen et al, 1996; Zafonte et al, 2000]. Examination of cyclin-D1 and CDK4 expression levels during XCHT treatment demonstrated that XCHT suppresses the expression of both factors in HCC xenograft tissues and in Huh7 cells.\nApoptosis is important for embryogenesis, tissue homeostasis and defence against pathogens. Notably, apoptotic resistance is one of the main causes of tumorigenesis and tumor drug resistance [Cory and Adams, 2002; Lee and Schmitt, 2003]. The Bcl-2 family plays a critical role in apoptotic regulation. Pro-apoptotic Bax promotes intrinsic apoptosis by forming oligomers in the mitochondrial outer membrane, which facilitates the release of apoptogenic molecules. In contrast, anti-apoptotic Bcl-2 blocks mitochondrial apoptosis by inhibiting the release and oligomerization of Bax [Leibowitz and Yu, 2010]. XCHT can induce apoptosis and overcome the apoptotic resistance of hepatocarcinoma cells in vitro and in vivo, which increases the potential of this extract to be developed into an effective chemotherapeutic agent for treating HCC.", "In conclusion, we demonstrate for the first time that XCHT can inhibit proliferation and induce apoptosis in liver cancer by regulating the expression of the Bcl-2 protein family and decreasing CDK4 and cyclin-D1 levels. Although the active ingredients and precise mechanism of action are not yet known, this research provides a starting point for exploring an XCHT-based HCC therapy." ]
[ "intro", "materials|methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusion" ]
[ "Xiao-Chai-Hu Tang", "proliferation", "apoptosis", "hepatocellular carcinoma" ]
Introduction: Hepatocellular carcinoma (HCC) is one of the top five cancers diagnosed worldwide [El-Serag and Rudolph, 2007; Jemal et al, 2011]. HCC is the third leading cause of cancer-related deaths, and China will contribute close to half of the estimated 600,000 individuals who will succumb to the disease annually [Jemal et al, 2011; Sherman, 2005]. Surgical resection is the preferred treatment following HCC diagnosis, since removing the tumor completely offers the best prognosis for long-term survival. This treatment is suitable for only 10-15% of patients with early-stage disease, since tumor resection may disrupt vital functions or structures if the tumor is large or has infiltrated into major blood vessels [Levin et al, 1995]. For patients with advanced stage HCC, chemotherapy is the best therapeutic option available. Standard chemotherapeutic regimens can involve single agents or a combination of drugs such as doxorubicin, cisplatin or fluorouracil. Late-stage HCC develops drug resistance to standard chemotherapeutic combinations, and less than 20% of patients with advanced liver cancer will respond to these treatment regimens [Abou-Alfa et al, 2008]. Identification of more effective anticancer therapies is needed to provide alternatives to standard chemotherapeutic regimens, as well as treatments for drug-resistant HCC. Complementary and alternative medicines (CAM) have received considerable attention in Western countries for their potential therapeutic applications [Xu et al, 2006; Cui et al, 2010]. Traditional Chinese medicine (TCM) has been used in the treatment of cancer for thousands of years in China and other Asian countries. These medicines have gained acceptance as alternative cancer treatments in the United States and Europe [Wong et al, 2001; Gai et al, 2008]. When TCM is combined with conventional chemotherapy, there is an increase in the sensitivity of tumors to chemotherapeutic drugs, a reduction in both the side effects and complications associated with chemotherapy or radiotherapy, and an improvement in patient quality of life and survival [Konkimalla and Efferth, 2008]. Xiao-Chai-Hu-Tang (XCHT) is an extract of seven herbs: Bupleurum chinense (Chai-Hu), Pinellia ternata (Ban-Xia), Scutellaria baicalensis (Huang-Qin or Chinese skullcap root), Zizyphus jujube var. inermis (Da-Zao or jujube fruit), Panax ginseng (Ren-Shen or ginseng root), Glycyrrhiza uralensis (Gan-Cao or licorice root), and Zingiber officinale (Sheng-Jiang or ginger rhizome). The formulation for XCHT was first recorded in Shang Han Za Bing Lun during the Han Dynasty, and the ancient methodology for preparing XCHT has been performed in China for thousands of years. In TCM, XCHT has traditionally been used to treat a variety of Shaoyang diseases (including disorders of the liver and gallbladder). Common side-effects experienced with XCHT treatment include alternating chills and fever, but the medication is well-tolerated by the majority of patients. The relatively low toxicity of XCHT is an important characteristic for potential use of this extract in combinatorial therapies. Ancient scholars believed the mechanism of action of XCHT involved harmonizing the Shaoyang, where pathogens were eliminated through enhanced liver function and improved digestion. 
More recently, XCHT has been reported to be effective as an anti-cancer agent, although the precise mechanism of its tumoricidal activity remains unclear [Zhu et al, 2005; Shiota et al, 2002; Watanabe et al, 2001; Yano et al, 1994; Kato et al, 1998; Liu et al, 1998; Mizushima et al, 1995]. To gain further insights into the mechanism of action of this ancient extract, we examined the anti-proliferative and pro-apoptotic activities of XCHT on HCC. Materials and Methods: Materials and Reagents: Dulbecco’s Modified Eagle Medium (DMEM), fetal bovine serum (FBS), penicillin-streptomycin, trypsin-EDTA and TriZol reagents were purchased from Life Technologies (Carlsbad, CA, USA). PrimeScript™ RT reagent Kit with gDNA Eraser was purchased from Takara BIO Inc. (Tokyo, Japan). TUNEL assay kit was purchased from R&D Systems (Minneapolis, MN, USA). BCA Protein Assay Kit was purchased from Tiangen Biotech Co., Ltd. (Beijing, China). Antibodies for Bax, Bcl-2, CDK4 and CyclinD1 were obtained from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). All other chemicals, unless otherwise stated, were obtained from Sigma-Aldrich (St. Louis, MO, USA). Preparation of Xiao-Chai-Hu-Tang (XCHT): The Xiao-Chai-Hu-Tang extract was prepared by boiling 7 authenticated herbs in distilled water: 24.0 g Bupleurum chinense root (Chai-Hu), 9.0 g Pinellia ternata tuber (Ban-Xia), 9.0 g Scutellaria baicalensis root (Huang-Qin), 9.0 g Zizyphus jujube var. inermis fruit (Da-Zao or jujube fruit), 6.0 g Panax ginseng root (Ren-Shen or ginseng), 5.0 g Glycyrrhiza uralensis root (Gan-Cao or licorice), and 9.0 g Zingiber officinale rhizome (Sheng-Jiang or ginger). The aqueous extraction was filtered and spray-dried. A stock solution of XCHT was prepared immediately prior to use by dissolving the XCHT powder in DMEM at a concentration of 250 mg/ml. The working concentrations of XCHT were obtained by diluting the stock solution in the culture medium. Cell Culture: A human hepatoma cell line (Huh7) was purchased from Xiangya Cell Center (Hunan, China). Huh7 cells were grown in DMEM supplemented with 10% (v/v) FBS, 100 units/ml penicillin, and 100 μg/ml streptomycin. Huh7 cells were cultured at 37°C in a 5% CO2 humidified environment. The cells were subcultured at 80-90% confluency. Animals: Male BALB/C athymic nude mice (with an initial body weight of 20-22 g) were obtained from SLAC Animal Inc. (Shanghai, China). Animals were housed in standard plastic cages under automatic 12 h light/dark cycles at 23°C, with free access to food and water. All animals were kept under specific pathogen-free conditions. The animal studies were approved by the Fujian Institute of Traditional Chinese Medicine Animal Ethics Committee (Fuzhou, Fujian, China). The experimental procedures were carried out in accordance with the Guidelines for Animal Experimentation of Fujian University of Traditional Chinese Medicine (Fuzhou, Fujian, China). In Vivo Xenograft Study: Hepatocarcinoma xenograft mice were produced using Huh7 cells. The cells were grown in culture, detached by trypsinization, washed, and resuspended in serum-free DMEM. Resuspended cells (4 × 10⁶) mixed with Matrigel (1:1) were subcutaneously injected into the right flank of nude mice to initiate tumor growth. When tumor sizes reached 3 millimeters in diameter, mice were randomly divided into two groups (n = 10) and treated with XCHT (dissolved in saline) or saline daily by intraperitoneal injection. All treatments were given 5 days a week for 21 days. Body weight and tumor size were measured. Tumor size was determined by measuring the major (L) and minor (W) diameters with a caliper. The tumor volume was calculated according to the following formula: tumor volume = (π/6) × L × W². At the end of the experiment, the mice were anaesthetized with ether and sacrificed by cervical vertebra dislocation. The tumors were then excised and weighed, and tumor segments were fixed in buffered formalin and stored at -80°C for molecular analysis. Assessment of Cell Viability by the MTT Assay: Cell viability was assessed by the MTT colorimetric assay. Huh7 cells were seeded into 96-well plates at a density of 1 × 10⁴ cells/well in 0.1 ml medium. The cells were treated with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, 48 h and 72 h. At the end of the treatment, 100 μl of MTT (0.5 mg/ml in PBS) were added to each well, and the samples were incubated for an additional 4 h at 37°C. The purple-blue MTT formazan precipitate was dissolved in 100 μl DMSO. The absorbance was measured at 570 nm using an ELISA reader (BioTek, Model ELX800, USA). 
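As a worked example of the two calculations just described, the caliper-based ellipsoid volume and the control-normalized MTT readout reduce to a few lines of Python. This is a minimal illustrative sketch only; the function names and the example measurements are ours, not values taken from the study.

```python
import math

def tumor_volume_mm3(major_mm: float, minor_mm: float) -> float:
    # Ellipsoid approximation used for caliper data: V = (pi/6) * L * W^2
    return (math.pi / 6.0) * major_mm * minor_mm ** 2

def percent_viability(a570_treated: float, a570_control: float) -> float:
    # MTT absorbance at 570 nm, normalized so that untreated controls = 100%
    return 100.0 * a570_treated / a570_control

# Hypothetical example: an 11.0 mm x 7.0 mm tumor and one treated/control well pair
print(f"volume = {tumor_volume_mm3(11.0, 7.0):.1f} mm^3")    # ~282.2 mm^3
print(f"viability = {percent_viability(0.62, 0.85):.1f} %")  # ~72.9 %
```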
Cell Morphology: Huh7 cells were seeded into 6-well plates at a density of 2 × 10⁵ cells/ml in 2 ml DMEM. The cells were treated with 0, 0.5, 1.0, and 1.5 mg/ml of XCHT for 24 h. Cell morphology was observed using a phase-contrast microscope (Olympus, Japan), and the photographs were taken at a magnification of 200x. Detection of Apoptosis with Hoechst Staining: Huh7 cells were seeded into 12-well plates at a density of 1 × 10⁵ cells/ml in 1 ml medium. After the cells were treated with XCHT for 24 h, apoptosis was visualized using the Hoechst staining kit as described in the manufacturer’s instructions. Briefly, at the end of the treatment, cells were fixed with 4% polyoxymethylene and then incubated in Hoechst solution for 5-10 min in the dark. Images were captured using a phase-contrast fluorescence microscope (Leica, Germany) at a magnification of 400x. Colony Formation: Huh7 cells were seeded into 6-well plates at a density of 2 × 10⁵ cells/ml in 2 ml medium. After treatment with various concentrations (0, 0.5, 1.0, 1.5 mg/ml) of XCHT for 24 h, the cells were collected and diluted in fresh medium in the absence of XCHT and then reseeded into 6-well plates at a density of 1 × 10³ cells/well. Following incubation for 8 days in a 37°C humidified incubator with 5% CO2, the colonies were fixed with 10% formaldehyde, stained with 0.01% crystal violet and counted. Cell survival was calculated by comparing the survival of compound-treated cells to the control cells (normalized to 100% survival). Apoptosis Detection in Hepatocarcinoma Tissues by TUNEL Staining: Six tumors were randomly selected from XCHT-treatment or control groups. Tumor tissues were fixed in 10% formaldehyde for 48 h, paraffin-embedded and then sectioned into 4-μm-thick slides. Samples were analyzed by TUNEL staining. Apoptotic cells were counted as DAB-positive cells (brown staining) in five arbitrarily selected microscopic fields at a magnification of 400x. TUNEL-positive cells were counted as a percentage of the total cells. Immunohistochemistry Analysis of Hepatocarcinoma Tissues: Immunohistochemical staining for Ki-67 was performed as previously described. The sections were deparaffinized in xylene and hydrated through graded alcohols. Antigen retrieval was performed using heat treatment (placed in a microwave oven at 750 Watts for 7 minutes) in 10 mM sodium citrate buffer, pH 6.0. Sections were allowed to cool in the buffer at room temperature for 30 minutes, and the sections were then rinsed in deionised water three times for two minutes each. The endogenous peroxidase activity was blocked with 3% (v/v) hydrogen peroxide for 10 minutes. The sections were incubated with 1% bovine serum albumin in order to decrease non-specific staining and reduce endogenous peroxidase activity. The sections were then incubated with Ki-67 antibody (1:100 dilution) at 4°C overnight using a staining chamber. After rinsing three times in PBS, sections were incubated in biotinylated goat anti-rabbit IgG (Boshide Wuhan, China), followed by treatment with an avidin-biotin-peroxidase complex (Vector). Immunostaining was visualized by incubation in 3,3-diaminobenzidine (DAB) as a chromogen. Sections were counterstained with haematoxylin. The Ki-67 positive immunostaining was visualized using a Nikon Eclipse 50i microscope (40x objective). The evaluation of Ki-67 expression was analyzed in 5 different fields, and the mean percentage of Ki-67 positive staining was evaluated. To rule out any non-specific staining, PBS was used in place of the primary antibody as a negative control. RNA Extraction and RT-PCR Analysis: The expression of Bax, Bcl-2, CDK4 and CyclinD1 genes in HCC tissues or cells was analyzed by RT-PCR. Total RNA was isolated with TriZol Reagent according to the manufacturer’s instructions. 1 μg of total RNA was used to synthesize cDNA using the SuperScript II reverse transcriptase Kit (AMV) (TaKaRa, Tokyo, Japan). The reaction contained RNA, selected primers, and 10 μl of the RT-PCR master mix: 10 mM Tris-HCl (pH 8.3), 50 mM KCl, 5 mM MgCl2, 1 unit/μl RNase inhibitor, 0.25 unit/μl AMV reverse transcriptase, 2.5 ml random primer, and 1 mM each of dATP, dGTP, dCTP and dTTP. Reverse transcription was performed for 1 hour at 42°C. The obtained cDNA was used to determine the mRNA amount of Bax, Bcl-2, CDK4 and cyclin-D1 by PCR. GAPDH was used as an internal control. Samples were analyzed by gel electrophoresis (1.5% agarose). The DNA bands were visualized using a gel documentation system (BioRad, Model Gel Doc 2000, USA). The sequences of the primers used for amplification of CDK4, CyclinD1, Bcl-2, Bax and GAPDH transcripts are as follows: CDK4 forward, 5’-CAT GTA GAC CAG GAC CTA AGC-3’ and reverse, 5’-AAC TGG CGC ATC AGA TCC TAG-3’; cyclin-D1 forward, 5’-TGG ATG CTG GAG GTC TGC GAG GAA-3’ and reverse, 5’-GGC TTC GAT CTG CTC CTG GCA GGC-3’; Bcl-2 forward, 5’-CAG CTG CAC CTG ACG CCC TT-3’ and reverse, 5’-GCC TCC GTT ATC CTG GAT CC-3’; Bax forward, 5’-TGC TTC AGG GTT TCA TCC AGG-3’ and reverse, 5’-TGG CAA AGT AGA AAA GGG CGA-3’; GAPDH forward, 5’-GTC ATC CAT GAC AAC TTT GG-3’ and reverse, 5’-GAG CTT GAC AAA GTG GTC GT-3’. 
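Primer lists like the one above are easy to mistype when re-entered by hand. The following purely illustrative Python check, using four of the sequences given above with the spacing removed, catches transcription slips by verifying the alphabet, length, and GC content of each oligo:

```python
# Subset of the primers listed above (grouping spaces removed); illustrative check only
primers = {
    "CDK4_F":  "CATGTAGACCAGGACCTAAGC",
    "CDK4_R":  "AACTGGCGCATCAGATCCTAG",
    "GAPDH_F": "GTCATCCATGACAACTTTGG",
    "GAPDH_R": "GAGCTTGACAAAGTGGTCGT",
}

def gc_percent(seq: str) -> float:
    # Fraction of G/C bases, expressed as a percentage
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

for name, seq in primers.items():
    assert set(seq) <= set("ACGT"), f"unexpected character in {name}"
    print(f"{name}: {len(seq)} nt, GC {gc_percent(seq):.0f}%")
```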
Western Blotting Analysis: Four tumors were randomly selected from XCHT-treatment or control groups. Tumors were homogenized in nondenaturing lysis buffer and centrifuged at 14,000 x g for 15 min. Protein concentrations were determined by BCA protein assay. Huh7 cells (2.0 × 10⁵ cells/ml) in 5 ml medium were seeded into 25 cm² flasks and treated with the indicated concentrations of XCHT for 24 h. Treated cells were lysed in mammalian cell lysis buffer containing protease and phosphatase inhibitor cocktails, and centrifuged at 14,000 x g for 15 min. Protein concentrations in cell lysate supernatants were determined by BCA protein assay. Equal amounts of protein from each tumor or cell lysate were resolved on 12% SDS-PAGE gels using 80 V for 2 h and transferred onto PVDF membranes. The membranes were blocked for 2 h with 5% nonfat milk and incubated with the desired primary antibody directed against Bax, Bcl-2, CDK4, Cyclin D1, or β-actin (all diluted 1:1000) overnight at 4°C. Appropriate HRP-conjugated secondary antibodies (anti-rabbit or anti-mouse; 1:2000) were incubated with the membrane for 1 h at room temperature, and the membranes were washed again in TBS-T followed by enhanced chemiluminescence detection. Statistical Analysis: All data are shown as the mean of three measurements. The data were analyzed using the SPSS package for Windows (Version 11.5). Statistical analysis of the data was performed using the Student’s t-test and ANOVA. Differences with P < 0.05 were considered statistically significant. 
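To make the statistical procedure concrete, the sketch below shows how a two-group comparison of the kind reported in the Results could be run with SciPy. The numbers are invented placeholders, not data from the study, and the original analysis was performed in SPSS rather than Python.

```python
from scipy import stats

# Invented placeholder measurements for two groups (e.g., tumor volumes in mm^3)
control = [290.1, 301.4, 288.9, 275.6, 296.2]
treated = [201.5, 195.3, 228.7, 210.4, 205.8]

t_stat, p_ttest = stats.ttest_ind(control, treated)  # Student's t-test (equal variances)
f_stat, p_anova = stats.f_oneway(control, treated)   # one-way ANOVA

print(f"t-test: t = {t_stat:.2f}, P = {p_ttest:.4g}, significant = {p_ttest < 0.05}")
print(f"ANOVA:  F = {f_stat:.2f}, P = {p_anova:.4g}")
```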
Results: XCHT inhibits hepatocellular carcinoma (HCC) growth in vitro and in vivo: We evaluated the in vitro anti-cancer activity of XCHT on the human hepatocellular carcinoma cell line Huh7 using the MTT assay. As shown in Figure 1A, treatment with 0.5-1.5 mg/ml of XCHT for 24, 48, or 72 h reduced the viability of Huh7 cells by 17.91-26.27%, 44.01-77%, and 64.8-86.71% (P < 0.05) compared to untreated controls. These results indicate that XCHT inhibits the growth of Huh7 cells in both dose- and time-dependent manners. To verify these results, we evaluated the effect of XCHT on Huh7 cell morphology using phase-contrast microscopy. As shown in Figure 1B, untreated Huh7 cells appeared as densely packed and disorganized multilayers. In contrast, many of the XCHT-treated cells were rounded, shrunken, and detached from adjacent cells, or floating in the medium. Taken together, these data demonstrate that XCHT inhibits the growth of Huh7 cells. Effect of Xiao-Chai-Hu Tang (XCHT) on the viability and morphology of Huh7 cells. (A) Cell viability was determined by the MTT assay after Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24, 48 or 72 h. The data were normalized to the viability of control cells (100%). Data are the averages with standard deviation (SD; error bars) from 3 independent experiments. The symbols (#), (*), and (&) indicate statistical significance compared to control cells (P < 0.05), for each indicated timepoint. (B) The Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h, and morphological changes were observed using phase-contrast microscopy. The images were captured at a magnification of 200x. Images are representative of 3 independent experiments. To explore the anti-cancer activity of XCHT in vivo, we measured tumor volume and weight in HCC xenograft mice. As shown in Figure 2A and 2B, XCHT treatment significantly reduced both tumor volume and weight by Day 11 post-treatment. XCHT treatment reduced tumor volume (201.5±27.83 mm³) compared to tumors from the control group (293.37±19.36 mm³; P < 0.05). Tumor weight was reduced by 31.84% in XCHT-treated mice compared to control-treated mice (P < 0.05). Mice treated with XCHT demonstrated no changes in body weight during the course of the study, suggesting that XCHT treatment was relatively nontoxic to the animals (Figure 2C). Taken together, these data suggest that XCHT inhibits HCC growth both in vivo and in vitro, without apparent adverse effects. Effect of XCHT on tumor growth in hepatocellular carcinoma (HCC) xenograft mice. After tumor development, the mice were given intra-gastric administration of 14.2 g/kg of XCHT or PBS daily for 21 days. Tumor volume (A), tumor weight (B), and body weight (C) were measured. Data shown are averages with SD (error bars) from 10 mice in each group (n = 10). *P < 0.05, versus controls. XCHT inhibits HCC proliferation in vivo and in vitro: One of the hallmarks of oncogenesis is unchecked cell proliferation, but the anti-proliferative activity of XCHT on hepatocarcinoma tumors was unknown. To measure the effect of XCHT on cell proliferation, we examined XCHT-treated tumors for a marker expressed by proliferating cell nuclei (Ki-67) using immunohistochemical staining (IHC). As shown in Figure 3A, the percentages of Ki-67 positive cells in tumor tissues from control and XCHT-treated xenograft mice were 76.0 ± 9.6% and 51 ± 15.3%, respectively (P < 0.05). To examine the anti-proliferative activity of XCHT in vitro, Huh7 cells were treated with 0.5-1.5 mg/ml of extract and colony formation was measured. As shown in Figure 3B, treatment with increasing concentrations of XCHT for 24 h reduced the cell survival rate by 55.3%, 69.8% and 92.8% compared to untreated controls (P < 0.05). These results suggest that XCHT can inhibit HCC cell proliferation both in vivo and in vitro. Effect of XCHT on cell proliferation in HCC xenograft mice and Huh7 cells. (A) Ki-67 assay in tumor tissues (400x). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls. (B) Huh7 cell colony formation assay. Data are averages with SD (error bars) from at least three independent experiments. *P < 0.01, versus control cells. 
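The Ki-67 and TUNEL indices reported in this and the following section are field-averaged percentages of positive nuclei. One reasonable way to do that bookkeeping, sketched with made-up counts rather than the study's raw data, is:

```python
# Made-up counts of positive and total nuclei in five scored fields of one tumor
positive = [38, 45, 52, 41, 36]
total    = [88, 90, 95, 84, 79]

def percent_positive(pos, tot):
    # Pooled percentage of positive nuclei across all scored fields
    return 100.0 * sum(pos) / sum(tot)

print(f"labeling index = {percent_positive(positive, total):.1f}%")
```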
XCHT induces apoptosis of hepatocellular carcinoma cells in vivo and in vitro: Apoptosis within HCC tumor tissues was visualized by IHC staining using the TUNEL assay. As shown in Fig 4A, XCHT-treated mice had a significantly higher percentage of TUNEL-positive cells (82±15.3%) compared to the untreated control mice (47±9.6%), indicating the pro-apoptotic effect of XCHT in vivo. To visualize the pro-apoptotic activity in XCHT-treated Huh7 cells, we evaluated changes in nuclear morphology using the DNA-binding dye Hoechst 33258. As shown in Fig 4B, XCHT-treated cells revealed condensed chromatin and a fragmented nuclear morphology, and these characteristics demonstrated typical apoptotic morphological features. In contrast, untreated cell nuclei showed a less intense but homogeneous staining pattern, indicative of proliferating cells. Effect of XCHT on apoptosis in both HCC xenograft mice and Huh7 cells. (A) TUNEL assay in tumor tissues (400x). Data shown are averages with SD (error bars) from 6 individual mice in each group (n = 6). *P < 0.05, versus controls; (B) Huh7 cells were treated with 0.5-1.5 mg/ml XCHT for 24 h and stained with Hoechst 33258. Images were visualized using a phase-contrast fluorescence microscope. The images were captured at a magnification of 400x. Images are representative of 3 independent experiments. XCHT regulates the gene and protein expression correlated with apoptosis and proliferation in vivo and in vitro: Very little is known regarding the mechanism of action of XCHT, or the pathways that this extract targets to generate its anti-proliferative and pro-apoptotic activities. We examined the effect of XCHT treatment on the expression of correlated genes and proteins that are important regulators of apoptosis and proliferation using RT-PCR and western blotting. As shown in Fig 5A and B, XCHT significantly reduced both gene and protein expression of the anti-apoptotic factor Bcl-2 in HCC tumor tissues. In contrast, the gene and protein expression of the pro-apoptotic factor Bax increased after XCHT treatment. XCHT treatment significantly decreased the mRNA and protein expression levels of CDK4 and cyclin-D1 compared to the control group (P < 0.05). Correlating with our in vivo data from HCC tissues, Huh7 cells treated with XCHT demonstrated a significant reduction in the mRNA and protein expression levels of Bcl-2, CDK4 and cyclin-D1, and an increase in the expression of Bax in vitro (Fig 6A and B). Together these data suggest that XCHT promotes apoptosis and inhibits proliferation of HCC by increasing the pro-apoptotic Bax/Bcl-2 ratio and modulating the expression of cell cycle-regulatory genes. Effect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in HCC xenograft mice. (A) Four tumors were randomly selected from each group, and the mRNA or protein expression levels of Bcl-2, Bax, cyclin-D1, and CDK4 were determined by RT-PCR and Western blot analysis. GAPDH or β-actin were used as the internal controls. Data shown are representative samples. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression levels of untreated control mice (100%). The symbols (*), (&), (#), and (§) indicate statistical significance versus controls (P < 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1. Effect of XCHT on the expression of Bcl-2, Bax, cyclin-D1 and CDK4 in Huh7 cells. The cells were treated with 0.5-1.5 mg/ml XCHT for 24 h. (A) The mRNA and protein levels of Bcl-2, Bax, CDK4 and cyclin-D1 were determined using RT-PCR or western blotting. GAPDH or β-actin were used as the internal controls. The data are representative of three independent experiments. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression of the untreated control cells (100%). The symbols (*), (&), (#), and (§) indicate statistical significance compared to control cells (P < 0.05), for Bax, Bcl-2, CDK4 or cyclin-D1. 
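The densitometric normalization used in Figures 5 and 6 (control mean = 100%) and the Bax/Bcl-2 ratio highlighted above reduce to one-line computations. The sketch below uses invented band intensities purely for illustration; none of these values come from the study.

```python
# Invented densitometric band intensities (arbitrary units)
control_mean_bax, control_mean_bcl2 = 1.00, 1.00
treated_bax, treated_bcl2 = 1.60, 0.45

def pct_of_control(value: float, control_mean: float) -> float:
    # Express a band intensity as % of the untreated-control mean (control = 100%)
    return 100.0 * value / control_mean

ratio_control = control_mean_bax / control_mean_bcl2
ratio_treated = treated_bax / treated_bcl2
print(f"Bax:   {pct_of_control(treated_bax, control_mean_bax):.0f}% of control")
print(f"Bcl-2: {pct_of_control(treated_bcl2, control_mean_bcl2):.0f}% of control")
print(f"Bax/Bcl-2 ratio: {ratio_control:.2f} -> {ratio_treated:.2f}")
```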
Figure 5: Effect of XCHT on the expression of Bcl-2, Bax, cyclin-D1, and CDK4 in HCC xenograft mice. (A) Four tumors were randomly selected from each group, and the mRNA or protein expression levels of Bcl-2, Bax, cyclin-D1, and CDK4 were determined by RT-PCR and western blot analysis. GAPDH or β-actin was used as the internal control. Data shown are representative samples. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression levels of untreated control mice (100%). The symbols (*), (&), (#), and (§) indicate statistical significance versus controls (P < 0.05) for Bax, Bcl-2, CDK4, and cyclin-D1, respectively. Figure 6: Effect of XCHT on the expression of Bcl-2, Bax, cyclin-D1, and CDK4 in Huh7 cells. The cells were treated with 0.5-1.5 mg/ml XCHT for 24 h. (A) The mRNA and protein levels of Bcl-2, Bax, CDK4, and cyclin-D1 were determined using RT-PCR or western blotting. GAPDH or β-actin was used as the internal control. The data are representative of three independent experiments. (B) Densitometric analysis of gene and protein expression levels. The data were normalized to the mean mRNA or protein expression of the untreated control cells (100%). The symbols (*), (&), (#), and (§) indicate statistical significance compared to control cells (P < 0.05) for Bax, Bcl-2, CDK4, and cyclin-D1, respectively. Discussion: Xiao-Chai-Hu Tang (XCHT) is one of the most widely used Chinese herbal preparations, and it has long been used for the treatment of chronic liver diseases in China and other Asian countries. In the present study, we investigated its anti-tumor and anti-proliferative activities. Our findings suggest that XCHT can promote apoptosis and inhibit cellular proliferation in liver cancer. Cancer cells are characterized by an uncontrolled increase in cell proliferation and/or a reduction in apoptosis [Adams and Cory, 2007]. Cell cycle deregulation is a hallmark of tumor cells, and targeting the proteins that mediate critical cell cycle processes is an emerging strategy for the treatment of cancer [Stewart et al, 2003]. The G1/S transition, one of the two main checkpoints of the cell cycle [Nurse, 1994], is responsible for the initiation and completion of DNA replication. G1/S progression is strongly regulated by cyclin-D1, which exerts its function by forming an active complex with its major catalytic partners (CDK4/6) [Morgan, 1995]. An unchecked or hyperactivated cyclin-D1/CDK4 complex often leads to uncontrolled cell division and malignancy [Harakeh et al, 2008; Kessel and Luo, 2000; Chen et al, 1996; Zafonte et al, 2000]. Examination of cyclin-D1 and CDK4 expression levels during XCHT treatment demonstrated that XCHT suppresses the expression of both factors in HCC xenograft tissues and in Huh7 cells. Apoptosis is important for embryogenesis, tissue homeostasis, and defence against pathogens. Notably, apoptotic resistance is one of the main causes of tumorigenesis and tumor drug resistance [Cory and Adams, 2002; Lee and Schmitt, 2003]. The Bcl-2 family plays a critical role in apoptotic regulation. Pro-apoptotic Bax promotes intrinsic apoptosis by forming oligomers in the mitochondrial outer membrane, which facilitates the release of apoptogenic molecules. In contrast, anti-apoptotic Bcl-2 blocks mitochondrial apoptosis by inhibiting the release and oligomerization of Bax [Leibowitz and Yu, 2010].
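Since the pro-apoptotic shift reported here is summarized as an increased Bax/Bcl-2 ratio, a small worked example may help; the normalized expression values below are hypothetical, not taken from the study.

```python
# Illustrative Bax/Bcl-2 ratio from densitometry-normalized expression values
# (percent of untreated control; numbers are hypothetical). A ratio rising
# above the untreated baseline of 1.0 indicates a shift toward apoptosis.
bax_pct_control = 185.0   # Bax up after treatment
bcl2_pct_control = 55.0   # Bcl-2 down after treatment

ratio_vs_control = (bax_pct_control / 100) / (bcl2_pct_control / 100)
print(f"Bax/Bcl-2 ratio relative to control: {ratio_vs_control:.2f}")  # 3.36
```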
XCHT can induce apoptosis and overcome the apoptotic resistance of hepatocarcinoma cells in vitro and in vivo, highlighting the potential of this extract for development as an effective chemotherapeutic agent for HCC. Conclusion: In conclusion, we demonstrate for the first time that XCHT can inhibit proliferation and induce apoptosis in liver cancer by regulating the expression of the Bcl-2 protein family and decreasing CDK4 and cyclin-D1 levels. Although the active ingredient and precise mechanism of action are not yet known, this research provides a starting point for exploring an XCHT-based HCC therapy.
Background: Xiao-Chai-Hu Tang (XCHT) is an extract of seven herbs with anticancer properties, but its mechanism of action is unknown. In this study, we evaluated XCHT-treated hepatocellular carcinoma (HCC) for anti-proliferative and pro-apoptotic effects. Methods: Using a hepatic cancer xenograft model, we investigated the in vivo efficacy of XCHT against tumor growth by evaluating tumor volume and weight, as well as measuring apoptosis and cellular proliferation within the tumor. To study the effects of XCHT in vitro, we measured the cell viability of XCHT-treated Huh7 cells, as well as colony formation and apoptosis. To identify a potential mechanism of action, the gene and protein expression levels of Bax, Bcl-2, CDK4 and cyclin-D1 were measured in XCHT-treated Huh7 cells. Results: We found that XCHT reduced tumor size and weight, as well as significantly decreased cell viability both in vivo and in vitro. XCHT suppressed the expression of the proliferation marker Ki-67 in HCC tissues and inhibited Huh7 colony formation. XCHT induced apoptosis in HCC tumor tissues and in Huh7 cells. Finally, XCHT altered the expression of Bax, Bcl-2, CDK4 and cyclin-D1, which halted cell proliferation and promoted apoptosis. Conclusions: Our data suggest that XCHT enhances expression of pro-apoptotic pathways, resulting in potent anticancer activity.
Introduction: Hepatocellular carcinoma (HCC) is one of the top five cancers diagnosed worldwide [El-Serag and Rudolph, 2007; Jemal et al, 2011]. HCC is the third leading cause of cancer-related deaths, and China will contribute close to half of the estimated 600,000 individuals who will succumb to the disease annually [Jemal et al, 2011; Sherman, 2005]. Surgical resection is the preferred treatment following HCC diagnosis, since removing the tumor completely offers the best prognosis for long-term survival. This treatment is suitable for only 10-15% of patients with early-stage disease, since tumor resection may disrupt vital functions or structures if the tumor is large or has infiltrated into major blood vessels [Levin et al, 1995]. For patients with advanced-stage HCC, chemotherapy is the best therapeutic option available. Standard chemotherapeutic regimens can involve single agents or a combination of drugs such as doxorubicin, cisplatin, or fluorouracil. Late-stage HCC develops drug resistance to standard chemotherapeutic combinations, and less than 20% of patients with advanced liver cancer will respond to these treatment regimens [Abou-Alfa et al, 2008]. Identification of more effective anticancer therapies is needed to provide alternatives to standard chemotherapeutic regimens, as well as treatments for drug-resistant HCC. Complementary and alternative medicines (CAM) have received considerable attention in Western countries for their potential therapeutic applications [Xu et al, 2006; Cui et al, 2010]. Traditional Chinese medicine (TCM) has been used in the treatment of cancer for thousands of years in China and other Asian countries. These medicines have gained acceptance as alternative cancer treatments in the United States and Europe [Wong et al, 2001; Gai et al, 2008]. When TCM is combined with conventional chemotherapy, there is an increase in the sensitivity of tumors to chemotherapeutic drugs, a reduction in both the side effects and complications associated with chemotherapy or radiotherapy, and an improvement in patient quality of life and survival [Konkimalla and Efferth, 2008]. Xiao-Chai-Hu-Tang (XCHT) is an extract of seven herbs: Bupleurum chinense (Chai-Hu), Pinellia ternata (Ban-Xia), Scutellaria baicalensis (Huang-Qin or Chinese skullcap root), Zizyphus jujube var. inermis (Da-Zao or jujube fruit), Panax ginseng (Ren-Shen or ginseng root), Glycyrrhiza uralensis (Gan-Cao or licorice root), and Zingiber officinale (Sheng-Jiang or ginger rhizome). The formulation for XCHT was first recorded in Shang Han Za Bing Lun during the Han Dynasty, and the ancient methodology for preparing XCHT has been performed in China for thousands of years. In TCM, XCHT has traditionally been used to treat a variety of Shaoyang diseases (including disorders of the liver and gallbladder). Common side-effects experienced with XCHT treatment include alternating chills and fever, but the medication is well-tolerated by the majority of patients. The relatively low toxicity of XCHT is an important characteristic for the potential use of this extract in combinatorial therapies. Ancient scholars believed the mechanism of action of XCHT involved harmonizing the Shaoyang, whereby pathogens were eliminated through enhanced liver function and improved digestion.
More recently, XCHT has been reported to be effective as an anti-cancer agent, although the precise mechanism of its tumoricidal activity remains unclear [Zhu et al, 2005; Shiota et al, 2002; Watanabe et al, 2001; Yano et al, 1994; Kato et al, 1998; Liu et al, 1998; Mizushima et al, 1995]. To gain further insights into the mechanism of action of this ancient extract, we examined the anti-proliferative and pro-apoptotic activities of XCHT on HCC.
[ "Xiao-Chai-Hu Tang", "proliferation", "apoptosis", "hepatocellular carcinoma" ]
[ "Antineoplastic Agents", "Apoptosis", "Carcinoma, Hepatocellular", "Cell Line, Tumor", "Cell Proliferation", "Cell Survival", "Cyclin D1", "Cyclin-Dependent Kinase 4", "Drugs, Chinese Herbal", "Genes, bcl-2", "Humans", "Liver Neoplasms", "bcl-2-Associated X Protein" ]
Drug-Induced Liver Injury during Antidepressant Treatment: Results of AMSP, a Drug Surveillance Program.
26721950
Drug-induced liver injury is a common cause of liver damage and the most frequent reason for withdrawal of a drug in the United States. The symptoms of drug-induced liver damage are extremely diverse, with some patients remaining asymptomatic.
BACKGROUND
This observational study is based on data of Arzneimittelsicherheit in der Psychiatrie, a multicenter drug surveillance program in German-speaking countries (Austria, Germany, and Switzerland) recording severe drug reactions in psychiatric inpatients. Of 184,234 psychiatric inpatients treated with antidepressants between 1993 and 2011 in 80 psychiatric hospitals, 149 cases of drug-induced liver injury (0.08%) were reported.
METHODS
The study revealed that incidence rates of drug-induced liver injury were highest during treatment with mianserine (0.36%), agomelatine (0.33%), and clomipramine (0.23%). The lowest probability of drug-induced liver injury occurred during treatment with selective serotonin reuptake inhibitors (0.03%), especially escitalopram (0.01%), citalopram (0.02%), and fluoxetine (0.02%). The most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. In contrast to previous findings, the dosage at the timepoint when DILI occurred was higher than the median overall dosage for 7 of 9 substances. Regarding liver enzymes, duloxetine and clomipramine were associated with increased glutamat-pyruvat-transaminase and glutamat-oxalat-transaminase values, while mirtazapine hardly increased enzyme values. By contrast, duloxetine performed best in terms of gamma-glutamyl-transferase values, and trimipramine, clomipramine, and venlafaxine performed worst.
RESULTS
Our findings suggest that selective serotonin reuptake inhibitors are less likely than the other antidepressants examined in this study to precipitate drug-induced liver injury, especially in patients with preexisting liver dysfunction.
CONCLUSIONS
[ "Adverse Drug Reaction Reporting Systems", "Aged", "Antidepressive Agents", "Austria", "Chemical and Drug Induced Liver Injury", "Dose-Response Relationship, Drug", "Drug Therapy, Combination", "Female", "Germany", "Hospitals, Psychiatric", "Humans", "Incidence", "Inpatients", "Liver", "Male", "Mental Disorders", "Middle Aged", "Switzerland" ]
4851269
Introduction
The liver, the central organ of biotransformation, is particularly prone to oral medication-related toxicity because drugs and their metabolites reach high concentrations in portal blood rather than in the actual target area of the central nervous system. It is, however, difficult to attribute liver damage to a specific medication in clinical practice (Meyer, 2000). The susceptibility of an individual to drug-induced liver injury (DILI) depends on multiple genetic and epigenetic factors as well as age, gender, weight, and alcohol consumption, all of which influence the occurrence of hepatic adverse effects (Krähenbühl and Kaplowitz, 1996). Older patients seem more vulnerable, and women have a stronger tendency toward toxic liver reactions than men (Meyer, 2000); ethnic differences have also been reported (Evans, 1986). Genetic metabolic variability is the most significant susceptibility factor in drug-induced liver toxicity. Enzyme polymorphisms can cause a slowing or complete disruption of enzyme function, which in turn results in the inefficient processing of drugs (Shenfield and Gross, 1999). This may not always result in corresponding liver damage but does contribute to an increased toxicity of substances. The majority of drugs, and almost all psychotropic drugs, are metabolized by cytochrome P450 (CYP450) enzymes. Due to genetically determined polymorphisms of CYP450 isoenzymes, individuals can be categorized as poor, intermediate, extensive, or superextensive metabolizers (Miners and Birkett, 1998; Shenfield and Gross, 1999; Wilkinson, 2004). If a poor metabolizer receives medication containing several substrates or inhibitors of the same isoenzyme, the risk of a toxic reaction increases owing to slower drug metabolism. As most psychotropic drugs are substrates of CYP2D6 (Ingelman-Sundberg, 2005), this cytochrome is especially significant in pharmacokinetic interactions. Approximately 5% to 10% of Caucasians have reduced or nonexistent CYP2D6 activity and are therefore at risk of toxicity when receiving psychotropic treatment (Transon et al., 1996; Griese et al., 1998; Ingelman-Sundberg, 2005; Bernarda et al., 2006). A further important consideration is whether patients with preexisting liver dysfunction have a higher risk of hepatotoxic reactions. Although little information from controlled studies exists, there are indications that patients with preexisting liver disorders generally do not display an increased risk of drug-induced hepatotoxicity. It is more likely that preexisting liver damage negatively affects the ability of the liver to regenerate in the case of a hepatotoxic reaction (Chang and Schiano, 2007). The clinical symptoms of DILI are extremely diverse, with some patients remaining asymptomatic. Possible symptoms are tiredness, lack of appetite, nausea, vomiting, fever, a feeling of pressure in the upper right region of the abdomen, joint and muscle pain, pruritus, rashes, and jaundice; the latter is the only symptom directly indicative of the liver's involvement (Chang and Schiano, 2007). To diagnose asymptomatic toxic liver damage early, a minimum of laboratory testing is required. This involves the measurement of glutamat-oxalat-transaminase (GOT), glutamat-pyruvat-transaminase (GPT), and gamma-glutamyl-transferase (γ-GT) in serum; normal values indicate that there has been no disruption to liver function. GOT and GPT are also known as the enzymes aspartate aminotransferase (AST) and alanine aminotransferase (ALT), respectively.
It is important to consider the possibility of DILI when prescribing psychotropic drugs and to record a detailed history of all medication taken by the patient, with particular attention paid to the length of use, the dose, and the time between the intake of medication and the appearance of symptoms. The latency period involved here can vary between a few days and some months, and, as liver damage may result from other causes such as viral, autoimmune, or alcohol-induced hepatitis and acute Morbus Wilson, the diagnosis of drug-induced toxic liver damage is often a diagnosis of exclusion (Norris et al., 2008). Recently, Chalasani et al. (2014) developed practice guidelines for diagnosing and managing DILI. The hepatic pattern of damage can be classified as predominantly hepatocellular, predominantly cholestatic, or a hepatocellular/cholestatic mixture; this classification is important, as these patterns are of varying severity. Drugs also cause drug-specific patterns of liver damage, revealing increased values of transaminases (GOT and GPT) and/or cholestasis (γ-GT, alkaline phosphatase) (Zimmerman, 1999; Andrade et al., 2004). A slight increase in transaminase or γ-GT levels to twice the norm without a rise in bilirubin is often of no clinical significance and, in spite of continued medication, can simply disappear. This is a phenomenon often observed in antiepileptic or mood-stabilizing therapy (Yatham et al., 2002). These small functional changes must still be checked, and in the case of a further elevation in liver enzyme levels the medication must be discontinued (Voican et al., 2014). The prognosis of DILI is generally good, and less severe forms heal quickly and completely (Hayashi and Fontana, 2014). It is difficult to obtain figures regarding hepatotoxic drug reactions, as systematic epidemiological analyses are seldom done and observations are not conducted for a long enough period to have any true validity. Adverse effects are also not reliably reported or registered. Drug surveillance programs permit early detection of adverse drug reactions (ADRs), which may minimize consequences. The Arzneimittelsicherheit in der Psychiatrie (AMSP) study is one such program in the field of psychiatry, systematically evaluating severe ADRs of psychotropic medication in inpatients. The AMSP produces a database of these ADRs registered in the participating psychiatric clinics in Austria, Germany, and Switzerland (for details on AMSP methods, see Grohmann et al., 2004, 2013; Konstantinidis et al., 2012). In the present study, we have used this database to analyze the elevation of liver enzymes, with a particular focus on sociodemographic data and the significance of clinical manifestations as well as transaminase levels measured during antidepressant (AD) monotherapy and combination therapies.
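The hepatocellular/cholestatic/mixed classification mentioned above is conventionally made with the R value from the published DILI consensus criteria (e.g., the practice guidelines cited above), not spelled out in this article: R = (ALT/ULN) / (ALP/ULN), with R ≥ 5 hepatocellular, R ≤ 2 cholestatic, and values in between mixed. A minimal sketch follows; the laboratory values are hypothetical.

```python
def dili_pattern(alt: float, alt_uln: float, alp: float, alp_uln: float) -> str:
    """Classify the injury pattern via the consensus R value:
    R = (ALT/ULN) / (ALP/ULN); >= 5 hepatocellular, <= 2 cholestatic, else mixed."""
    r = (alt / alt_uln) / (alp / alp_uln)
    if r >= 5:
        return f"hepatocellular (R = {r:.1f})"
    if r <= 2:
        return f"cholestatic (R = {r:.1f})"
    return f"mixed (R = {r:.1f})"

# Hypothetical labs: ALT 400 U/L (ULN 40), ALP 150 U/L (ULN 120).
print(dili_pattern(400, 40, 150, 120))  # hepatocellular (R = 8.0)
```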
Methods
The AMSP program aims for the continuous detection of severe ADRs resulting from psychotropic treatment; these are evaluated during inpatient treatment. In our study, we analyzed data from 80 university, municipal, or state psychiatric hospitals or departments participating in the AMSP program from 1993 to 2011. Information on severe ADRs is collected from clinicians on a regular basis by psychiatrists acting as drug monitors, who use a standardized questionnaire to document cases. The drug monitors get in touch with ward psychiatrists at regular intervals, and severe adverse drug reactions are reported at weekly meetings of the medical staff (Grohmann et al., 2004). Information is collected on the details of adverse events as well as on patient demographics and nonpsychotropic drug intake. It includes alternative hypotheses on the causes of the ADR, relevant risk factors, measures undertaken, and previous exposure to the drug. Senior doctors of each hospital involved review the cases, which are later discussed at central and regional case conferences taking place 3 times per year. Participants comprise hospital drug monitors, representatives from the national drug-regulating authorities, and drug safety experts from the pharmaceutical industry. Following discussions and analyses, ADR probability ratings are assigned and sent to the relevant authorities, and pharmaceutical companies receive the case questionnaires, which are also stored in the AMSP central database. Probability ratings were performed based on the AMSP study guidelines (Grohmann et al., 2004) and the recommendations of Hurwitz and Wade (1969) and Seidl et al. (1965). The ADR probability rating system defines the following grades of probability, beginning with Grade 1, in which the ADR is possible; that is, the risk of the ADR is not known or the probability of a cause other than the drug in question is >50%. Grade 2 is defined as probable, with a known reaction, time course, and dosage for a specific drug; the likelihood of alternative causes is <50%. Grade 3 is categorized as definite, meaning reexposure to the drug again causes the ADR. Grade 4 signifies questionable or not sufficiently documented. In cases where an ADR results from a pharmacodynamic interaction of 2 or more drugs, each drug is given a rating of possible, probable, or definite according to the given facts. Furthermore, drug-use data are collected twice per year from all hospitals participating in the AMSP program; the number of all inpatients and the mean treatment duration of all patients per year are also recorded. The data presented in this study refer only to elevated liver enzymes due to "probable" (Grade 2) and "definite" (Grade 3) ADRs. Documentation of ADRs occurs when the value for one of the liver enzymes (GOT/AST, GPT/ALT, γ-GT, or alkaline phosphatase) exceeds 5 times the upper normal value ("severe" as defined by the AMSP, based on the judgment of hepatologic experts) or when there are severe clinical symptoms and/or cholestasis. The threshold of 5 times the upper limit of normal GOT and GPT values has been proposed in the literature to avoid unnecessary withdrawal of substances (Aithal et al., 2011). Maximal levels of each liver enzyme are recorded by the AMSP in all DILI cases; mean maximum values per drug were evaluated for this analysis. Only drugs prescribed more than 2000 times within the overall study population were included in the analyses.
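A minimal sketch of the AMSP severity criterion described above (any liver enzyme exceeding 5 times its upper limit of normal) follows; the enzyme names are taken from the text, but the ULN values and readings are illustrative placeholders.

```python
# AMSP-style flag: a case counts as severe DILI when any recorded liver enzyme
# exceeds 5x its upper limit of normal (ULN). ULNs below are illustrative.
ULN = {"GOT": 35.0, "GPT": 45.0, "gamma-GT": 60.0, "ALP": 130.0}  # U/L

def is_severe_dili(values_u_l: dict[str, float], factor: float = 5.0) -> bool:
    """True if any enzyme value exceeds `factor` times its ULN."""
    return any(values_u_l[enzyme] > factor * ULN[enzyme] for enzyme in values_u_l)

print(is_severe_dili({"GOT": 190.0, "GPT": 240.0}))  # True: GPT > 5 * 45
```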
Our retrospective analysis employs data extracted from the anonymized databank of the AMSP drawn from all 80 participating hospitals between 1993 and 2011. Detailed information on the hospitals participating in the program can be found online (www.amsp.de). The informed consent of participants was not required, as the data analyzed were derived from an anonymized databank. The AMSP drug surveillance program was approved by the leading boards of each participating institute prior to implementation, and the Ethics Committee of the University of Munich formally approved evaluations based on the AMSP databank. Statistical Analysis: Incidence rates of hepatotoxicity were calculated as the percentage of inpatients receiving a specific AD or AD subclass and are presented together with their 95% CIs. Given the low actual number of cases and the significant number of inpatients involved, the CI was calculated employing the exact method rather than one of the approximate methods (Vollset, 1993). The statistical program R was used to generate the figures (R Core Team, 2014). Chi-square tests were calculated using the SPSS system Version 22.0. Significance was set at P < .05.
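For illustration, the exact binomial interval referenced above (Vollset, 1993) corresponds to the Clopper-Pearson construction; the following minimal sketch computes it for the study's overall counts (149 cases among 184,234 exposed inpatients), with the function name being an assumption of this sketch.

```python
# Exact (Clopper-Pearson) 95% CI for an incidence rate, via the beta quantile.
from scipy.stats import beta

def incidence_with_exact_ci(cases: int, exposed: int, alpha: float = 0.05):
    """Return incidence (%) and exact binomial CI bounds (%)."""
    rate = cases / exposed
    lo = beta.ppf(alpha / 2, cases, exposed - cases + 1) if cases > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, cases + 1, exposed - cases) if cases < exposed else 1.0
    return rate * 100, lo * 100, hi * 100

rate, lo, hi = incidence_with_exact_ci(cases=149, exposed=184234)
print(f"DILI incidence: {rate:.3f}% (95% CI {lo:.3f}-{hi:.3f}%)")
```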
Disclosures
Since 1993 educational and research grants have been given by the following pharmaceutical companies to the 3 local nonprofit associations of the AMSP: (1) Austrian companies: AESCA Pharma GmbH, AstraZeneca Österreich GmbH, Boehringer Ingelheim Austria, Bristol–Myers Squibb GmbH, CSC Pharmaceuticals GmbH, Eli Lilly GmbH, Germania Pharma GmbH, GlaxoSmithKline Pharma GmbH, Janssen-Cilag Pharma GmbH, Lundbeck GmbH, Novartis Pharma GmbH, Pfizer Med Inform, Servier Austria GmbH, and Wyeth Lederle Pharma GmbH; (2) German companies: Abbott GmbH & Co. KG, AstraZeneca GmbH, Aventis Pharma Deutschland GmbH GE-O/R/N, Bayer Vital GmbH & Co. KG, Boehringer Mannheim GmbH, Bristol-Myers-Squibb, Ciba Geigy GmbH, Desitin Arzneimittel GmbH, Duphar Pharma GmbH & Co. KG, Eisai GmbH, esparma GmbH Arzneimittel, GlaxoSmithKline Pharma GmbH & Co. KG, Hoffmann-La Roche AG Medical Affairs, Janssen-Cilag GmbH, Janssen Research Foundation, Knoll Deutschland GmbH, Lilly Deutschland GmbH Niederlassung Bad Homburg, Lundbeck GmbH & Co. KG, Novartis Pharma GmbH, Nordmark Arzneimittel GmbH, Organon GmbH, Otsuka-Pharma Frankfurt, Pfizer GmbH, Pharmacia & Upjohn GmbH, Promonta Lundbeck Arzneimittel, Rhone Poulenc Rohrer, Sanofi-Synthelabo GmbH, Sanofi-Aventis Deutschland, Schering AG, SmithKlineBeecham Pharma GmbH, Solvay Arzneimittel GmbH, Synthelabo Arzneimittel GmbH, Dr Wilmar Schwabe GmbH & Co., Thiemann Arzneimittel GmbH, Troponwerke GmbH & Co. KG, Upjohn GmbH, Wander Pharma GmbH, and Wyeth-Pharma GmbH; and (3) Swiss companies: AHP (Schweiz) AG, AstraZeneca AG, Bristol–Myers Squibb AG, Desitin Pharma GmbH, Eli Lilly (Suisse) S.A., Essex Chemie AG, GlaxoSmithKline AG, Janssen-Cilag AG, Lundbeck (Suisse) AG, Mepha Schweiz AG/Teva, MSD Merck Sharp & Dohme AG, Organon AG, Pfizer AG, Pharmacia, Sandoz Pharmaceuticals AG, Sanofi-Aventis (Suisse) S.A., Sanofi-Synthélabo SA, Servier SA, SmithKlineBeecham AG, Solvay Pharma AG, Vifor SA, Wyeth AHP (Suisse) AG, and Wyeth Pharmaceuticals AG. Dr Papageorgiou received honoraria from RB Pharmaceuticals and Bristol–Myers Squibb. Dr Konstantinidis received honoraria from Affiris, AstraZeneca, Novartis, Pfizer, and Servier, served as a consultant for AstraZeneca, and was a speaker for AstraZeneca, Bristol–Myers Squibb, and Janssen. Dr Winkler has received speaker honoraria from Angelini, Bristol-Myers Squibb, Novartis, Pfizer, and Servier. Drs Grohmann and Toto are involved in the project management of AMSP. Dr Greil has been a member of an advisory board for Lundbeck and has received speaker's fees from AstraZeneca, Lundbeck, and Lundbeck Institute. Dr Kasper received grant/research support from Bristol–Myers Squibb, Eli Lilly, GlaxoSmithKline, Lundbeck, Organon, Sepracor, and Servier; he has served as a consultant or on advisory boards for AstraZeneca, Bristol–Myers Squibb, Eli Lilly, GlaxoSmithKline, Janssen, Lundbeck, Merck Sharp and Dohme (MSD), Novartis, Organon, Pfizer, Schwabe, Sepracor, and Servier; and has served on speakers' bureaus for Angelini, AstraZeneca, Bristol–Myers Squibb, Eli Lilly, Janssen, Lundbeck, Pfizer, Pierre Fabre, Schwabe, Sepracor, and Servier. Dr Winkler has received lecture fees from Bristol-Myers Squibb, CSC Pharmaceuticals, Novartis, Pfizer, and Servier.
[ "Introduction", "Statistical Analysis", "Results", "Social Demographic and Illness-Related Data", "Drugs Involved in DILI", "Dose-Dependent Aspects of Involved Drugs", "Combination Treatment and DILI", "Elevation of Liver Enzymes and Involved Drugs", "Elevation of Liver Enzymes in Preexisting Liver Damage and Clinical Symptoms", "Single Case of Acute Liver Failure", "Discussion", "Conclusions" ]
[ "The liver, the central organ of biotransformation, is particularly prone to oral medication-related toxicity due to high concentrations of drugs and their metabolites in portal blood rather than in the actual target area of the central nervous system. It is, however, difficult to attribute liver damage to a specific medication in clinical practice (Meyer, 2000). The susceptibility of an individual to drug-induced liver injury (DILI) depends on multiple genetic and epigenetic factors, age, gender, weight, and alcohol consumption that influence the occurrence of hepatic adverse effects (Krähenbühl and Kaplowitz, 1996). Older patients seem more vulnerable, and women have a stronger tendency to toxic liver reaction than men (Meyer, 2000); ethnic differences have also been reported (Evans, 1986).\nGenetic metabolic variability is the most significant susceptibility factor in drug-induced liver toxicity. Enzyme polymorphisms can cause a slowing or complete disruption of enzyme function, which in turn results in the inefficient processing of drugs (Shenfield and Gross, 1999). This may not always result in corresponding liver damage but does contribute to an increased toxicity of substances. The majority of drugs and almost all psychotropic drugs are metabolized by the enzyme CYP450. Due to genetically determined polymorphisms of CYP450-isoenzymes, individuals can be categorized as poor, intermediate, extensive, or superextensive metabolizers (Miners and Birkett, 1998; Shenfield and Gross, 1999; Wilkinson, 2004). If a poor metabolizer receives medication containing several substrates or inhibitors of the same isoenzyme, the risk of a toxic reaction increases owing to a slower drug metabolism. As most psychotropic drugs are a substrate of CYP2D6 (Ingelman-Sundberg, 2005), this cytochrome is especially significant in the pharmacokinetic interaction. Approximately 5% to 10% of Caucasians have reduced or nonexistent CYP2D6 activity and are therefore at risk of toxicity when receiving psychotropic treatment (Transon et al., 1996; Griese et al. 1998; Ingelman-Sundberg, 2005; Bernarda et al., 2006).\nA further important consideration is whether patients with preexisting liver dysfunction have a higher risk of hepatotoxic reactions. Although little information from controlled studies exists, there are indications that patients with preexisting liver disorders generally do not display an increased risk of drug-induced hepatotoxicity. It is more likely that preexisting liver damage negatively affects the ability of the liver to regenerate in the case of a hepatotoxic reaction (Chang and Schiano, 2007).\nThe clinical symptoms of DILI are extremely diverse, with some patients remaining asymptomatic. Possible symptoms are tiredness, lack of appetite, nausea, vomiting, fever, a feeling of pressure in the upper right region of the abdomen, joint and muscle pain, pruritus, rashes, and jaundice; the latter is the only symptom directly indicative of the liver’s involvement (Chang and Schiano, 2007).\nTo diagnose asymptomatic toxic liver damage early, a minimum of laboratory testing is required. This involves the measurement of the glutamat-oxalat transaminase (GOT), glutamat-pyruvat-transaminase (GPT), and gamma-glutamyl-transferase (γ-GT) in serum which, if found to be normal, indicates that there has been no disruption to liver function. GOT and GPT are also well known as the enzyme aspartate aminotransferase (AST) and alanine aminotransferase (ALT), respectively. 
It is important to consider the possibility of DILI when prescribing psychotropic drugs and to record a detailed history of all medication taken by the patient, with particular attention paid to the length of use, the dose, and the time between the intake of medication and appearance of symptoms. The latency period involved here can vary between a few days and some months and, as liver damage may result from other causes such as viral, autoimmune, alcohol-induced hepatitis, and acute Morbus Wilson, the diagnosis of drug-induced toxic liver-damage is often a diagnosis of exclusion (Norris et al., 2008). Recently, Chalasani et al. (2014) developed practice guidelines for diagnosing and managing DILI.\nThe hepatic pattern of damage can be classified as predominantly hepatocellular, predominantly cholestatic, or a hepatocellular/cholestatic mixture and is important, as these patterns are of varying severity. The drugs also cause drug-specific patterns of liver damage revealing increased values of transaminases (GOT and GTP) and/or cholestasis (γ-GT, alkaline phosphatase [Zimmerman, 1999; Andrade et a., 2004]). A slight increase in transaminases or γ-GT levels to twice the norm without a rise in bilirubin is often of no clinical significance and in spite of continued medication, can simply disappear. This is a phenomenon often observed in antiepileptic or mood-stabilizing therapy (Yatham et al., 2002). These small functional changes must still be checked and in the case of a further elevation in liver enzyme levels medication must be discontinued (Voican et al., 2014). The prognosis of DILI is generally good, and less severe forms heal quickly and completely (Hayashi and Fontana, 2014). It is difficult to obtain figures regarding hepatotoxic drug reactions, as systematic epidemiological analyses are seldom done and observations are not conducted for a long enough period to have any true validity. Adverse effects are also not reliably reported or registered.\nDrug surveillance programs permit an early detection of adverse drug reactions (ADRs) and this may minimize consequences. The Arzneimittelsicherheit in der Psychiatrie (AMSP) study is one such program in the field of psychiatry systematically evaluating severe ADRs of psychotropic medication in inpatients. The AMSP produces a database of these ADRs registered in the participating psychiatric clinics in Austria, Germany, and Switzerland (for details on AMSP methods, see Grohmann et al., 2004, 2013\nKonstantinidis et al., 2012). In the present study, we have used this database to analyze the elevation of liver enzymes with a particular focus on sociodemographic data and the significance of clinical manifestations as well as transaminase levels measured during antidepressant (AD) monotherapy and combination therapies.", "Incidence rates of hepatotoxicity were calculated as the percentage of inpatients receiving a specific AD or AD subclass and presented together with their 95% CIs. Regarding the low actual number of cases and the significant number of inpatients involved, the CI was calculated employing the exact method rather than one of the approximate methods (Vollset, 1993). The statistical program R was used to generate the figures (R Core Team, 2014). Q-square tests were calculated using the SPSS system Version 22.0. Significance was set at P<.05.", " Social Demographic and Illness-Related Data From 1993 to 2011 the AMSP program monitored 390252 inpatients in 80 hospitals. A total of 184234 inpatients were treated with antidepressants. 
In 147 inpatients (and 149 cases, as 2 inpatients suffered from DILI twice) a severe hepatic ADR was observed (0.08%). Within 27 of 149 cases, clinical symptoms appeared (18.1%). In 104 inpatients, only ADs were imputed with the remaining inpatients suffering toxicity from an AD in combination with other psychotropic drugs. The majority of all monitored inpatients treated with antidepressants (56.5%) were suffering from depression. A total 75.9% were aged <65 years. Inpatients under surveillance were predominantly female (63.1%). A total 75.2% of inpatients suffering from DILI were diagnosed with depression, followed by 9.4% with the diagnosis of schizophrenia (Table 1). Thus, DILI patients differed significantly in their diagnostic distribution from the total AD population. Age and sex distribution, on the other hand, did not differ in DILI patients from all monitored AD patients.\nAge, Sex, and International Classification of Diseases Version 10 (ICD-10) Diagnosis of Patients Monitored during the Period of 1993–2011 Suffering from DILI Due to ADs and the Total Population under Surveillance (149 cases of DILI)\nAbbreviations: DILI, drug-induced liver injury; AD, antidepressant; n, number.\nFrom 1993 to 2011 the AMSP program monitored 390252 inpatients in 80 hospitals. A total of 184234 inpatients were treated with antidepressants. In 147 inpatients (and 149 cases, as 2 inpatients suffered from DILI twice) a severe hepatic ADR was observed (0.08%). Within 27 of 149 cases, clinical symptoms appeared (18.1%). In 104 inpatients, only ADs were imputed with the remaining inpatients suffering toxicity from an AD in combination with other psychotropic drugs. The majority of all monitored inpatients treated with antidepressants (56.5%) were suffering from depression. A total 75.9% were aged <65 years. Inpatients under surveillance were predominantly female (63.1%). A total 75.2% of inpatients suffering from DILI were diagnosed with depression, followed by 9.4% with the diagnosis of schizophrenia (Table 1). Thus, DILI patients differed significantly in their diagnostic distribution from the total AD population. Age and sex distribution, on the other hand, did not differ in DILI patients from all monitored AD patients.\nAge, Sex, and International Classification of Diseases Version 10 (ICD-10) Diagnosis of Patients Monitored during the Period of 1993–2011 Suffering from DILI Due to ADs and the Total Population under Surveillance (149 cases of DILI)\nAbbreviations: DILI, drug-induced liver injury; AD, antidepressant; n, number.\n Drugs Involved in DILI In 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. 
DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2.\n\nIncidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively)\n*Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine.\n**No case of milnacipran and tianeptin; multiple nominations possible.\nDrug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times).\nDrug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times).\nAs for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases).\nAs for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure.\nIn 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). The substances metioned in “other TCAs” (tricyclic antidepressants), “other Ads,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine, due to the particular interest in this drugs hepatotoxicity. The results of agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. Therefore, the observation period for agomelatine was significantly shorter than for all other drugs observed since 1993.\nIn 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. 
DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2.\n\nIncidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively)\n*Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine.\n**No case of milnacipran and tianeptin; multiple nominations possible.\nDrug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times).\nDrug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times).\nAs for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases).\nAs for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure.\nIn 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). The substances metioned in “other TCAs” (tricyclic antidepressants), “other Ads,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine, due to the particular interest in this drugs hepatotoxicity. The results of agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. Therefore, the observation period for agomelatine was significantly shorter than for all other drugs observed since 1993.\n Dose-Dependent Aspects of Involved Drugs As presented in Table 2, there were differences in the median dosages between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage compared with all monitored inpatients at the time when DILI appeared. 
Also within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI.\nAs presented in Table 2, there were differences in the median dosages between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage compared with all monitored inpatients at the time when DILI appeared. Also within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI.\n Combination Treatment and DILI The most prevalent drug class combination was the one of ADs and antipsychotic drugs (APs), in 31 cases within our study. First, olanzapine was implicated in DILI (6 cases), followed by clozapine (3 cases), and other APs held responsible for DILI in only 1 to 2 cases (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). Second, anticonvulsant drugs were combined with ADs (7 cases). Valproic acid was responsible for 3 DILI cases followed by carbamazepine, galantamine, pregabaline, and lamotrigine, implicated in only one case each.\nThe most prevalent drug class combination was the one of ADs and antipsychotic drugs (APs), in 31 cases within our study. First, olanzapine was implicated in DILI (6 cases), followed by clozapine (3 cases), and other APs held responsible for DILI in only 1 to 2 cases (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). Second, anticonvulsant drugs were combined with ADs (7 cases). Valproic acid was responsible for 3 DILI cases followed by carbamazepine, galantamine, pregabaline, and lamotrigine, implicated in only one case each.\n Elevation of Liver Enzymes and Involved Drugs Maximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the time period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in terms of maximum GOT (also known as aspartate-aminotransferase or AST), GPT (also known as alanin-aminotranferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was determined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals.\nDuloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence on GOT values. 
Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, as well as venlafaxine increased γ-GT values most (Figure 3a-c). The duration of treatment when DILI occurred was different among the antidepressants; mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span with 43.8 days on average until DILI occurred. Bilirubin was elevated in 5 of 149 cases.\n(a) Gamma-Glutamyl-Transferase (Gamma-GT) mean maximum values of single substances (imputed alone for a minimum of 3 times except agomelatine due to its delayed implementation, which was imputed 2 times). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate-aminotransferase [AST]) mean maximum values of single substances (imputed alone for a minimum of 3 cases, except agomelatine due to its delayed implementation, which was imputed 2 times). (c) Glutamat-pyruvat-transaminase (GPT; also known as alanin-aminotransferase [ALT]) mean maximum values of single substances (imputed alone for a minimum of 3 cases except agomelatine due to its delayed implementation).\nMaximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the time period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in terms of maximum GOT (also known as aspartate-aminotransferase or AST), GPT (also known as alanin-aminotranferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was determined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals.\nDuloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence on GOT values. Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, as well as venlafaxine increased γ-GT values most (Figure 3a-c). The duration of treatment when DILI occurred was different among the antidepressants; mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span with 43.8 days on average until DILI occurred. Bilirubin was elevated in 5 of 149 cases.\n(a) Gamma-Glutamyl-Transferase (Gamma-GT) mean maximum values of single substances (imputed alone for a minimum of 3 times except agomelatine due to its delayed implementation, which was imputed 2 times). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate-aminotransferase [AST]) mean maximum values of single substances (imputed alone for a minimum of 3 cases, except agomelatine due to its delayed implementation, which was imputed 2 times). (c) Glutamat-pyruvat-transaminase (GPT; also known as alanin-aminotransferase [ALT]) mean maximum values of single substances (imputed alone for a minimum of 3 cases except agomelatine due to its delayed implementation).\n Elevation of Liver Enzymes in Preexisting Liver Damage and Clinical Symptoms For inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. 
Cases with preknown liver damage presented maximum γ-GT, GOT, and GPT mean values of 525, 402, and 564U/L, respectively. This indicates that preknown liver damage inpatients had more than doubled mean maximum values for γ-GT transaminases than subjects with normal liver status at the time when DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was the most common risk factor by far (59 cases), followed by substance abuse, mostly alcohol (20 cases). Furthermore, predisposition to adverse reactions occurred in 10 cases.\nThe most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment remained and dosage was reduced, while in all other cases the drug was withdrawn after DILI was assessed. Within 55 cases, DILI disappeared totally, while in 85 cases DILI improved. Within 9 cases the course was unknown.\nFor inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. Cases with preknown liver damage presented maximum γ-GT, GOT, and GPT mean values of 525, 402, and 564U/L, respectively. This indicates that preknown liver damage inpatients had more than doubled mean maximum values for γ-GT transaminases than subjects with normal liver status at the time when DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was the most common risk factor by far (59 cases), followed by substance abuse, mostly alcohol (20 cases). Furthermore, predisposition to adverse reactions occurred in 10 cases.\nThe most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment remained and dosage was reduced, while in all other cases the drug was withdrawn after DILI was assessed. Within 55 cases, DILI disappeared totally, while in 85 cases DILI improved. Within 9 cases the course was unknown.\n Single Case of Acute Liver Failure In our study sample of 149 liver enzyme elevations, only one case of acute liver failure occurred in a 20-year-old woman with a predamaged liver resulting from an overdose of paracetamol. At the time of admission to the psychiatric ward, the transaminase values were normal. She had been on a medication of 150mg doxepine (for 3 days) and 10mg olanzapine (for 6 days). The patient’s liver enzymes increased rapidly, and clinical symptoms such as vomiting, nausea, and epigastric pain set in. In the following laboratory analysis, a hepato-toxicity was identified (bilirubin 3.8mg/dL, GPT 8827U/L, GOT 7363U/L, lactate dehydrogenase 4321U/L). As soon as acute liver failure was diagnosed, the patient was transferred to the intensive care ward where she was under the care of the transplantation consulting team. All medication was discontinued and the patient received electrolyte infusions. As her liver function recovered rapidly, a liver transplantation was no longer necessary. 
The hepatotoxic effects of doxepine and olanzapine have been described in the previous literature, but to our knowledge such a severe case has not been presented so far.

Social Demographic and Illness-Related Data

From 1993 to 2011, the AMSP program monitored 390,252 inpatients in 80 hospitals. A total of 184,234 inpatients were treated with antidepressants. In 147 inpatients (149 cases, as 2 inpatients suffered from DILI twice), a severe hepatic ADR was observed (0.08%). Clinical symptoms appeared in 27 of the 149 cases (18.1%). In 104 inpatients, only ADs were imputed; the remaining inpatients suffered toxicity from an AD in combination with other psychotropic drugs. The majority of all monitored inpatients treated with antidepressants (56.5%) were suffering from depression. A total of 75.9% were aged <65 years, and the inpatients under surveillance were predominantly female (63.1%). Of the inpatients suffering from DILI, 75.2% were diagnosed with depression, followed by 9.4% with a diagnosis of schizophrenia (Table 1). Thus, DILI patients differed significantly in their diagnostic distribution from the total AD population. Age and sex distribution, on the other hand, did not differ between DILI patients and all monitored AD patients.

Age, Sex, and International Classification of Diseases Version 10 (ICD-10) Diagnosis of Patients Monitored during the Period of 1993–2011 Suffering from DILI Due to ADs and the Total Population under Surveillance (149 Cases of DILI)

Abbreviations: DILI, drug-induced liver injury; AD, antidepressant; n, number.
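The headline incidence follows directly from these counts; a quick check in Python:

```python
# 149 DILI cases among 184,234 antidepressant-treated inpatients (reported above).
dili_cases, ad_inpatients = 149, 184_234
print(f"{100 * dili_cases / ad_inpatients:.2f}%")  # -> 0.08%
```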
Drugs Involved in DILI

In 147 inpatients (149 cases), 19 single substances were solely held responsible for DILI; in all other cases, combinations of several drugs were imputed. DILI frequencies for the individual substances as well as for the classes of ADs are given in Table 2 and Figures 1 and 2.

Incidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184,234 Patients Monitored Overall, Respectively)
*Other TCAs: amitriptylinoxid, desipramine, dibenzepine, imipramine.
**No case for milnacipran and tianeptin; multiple nominations possible.

Drug-induced liver injury (DILI) per antidepressant (AD) class/subgroup as a percentage of exposed patients; only cases in which an AD subgroup was imputed alone for DILI and only substance classes imputed 3 times or more were included (except agomelatine, which was imputed 2 times owing to its delayed introduction).

Drug-induced liver injury (DILI) per antidepressant (AD)/single substance as a percentage of exposed patients; only cases in which a single AD was imputed alone and only substances imputed 3 times or more were included (except agomelatine, which was imputed 2 times owing to its delayed introduction).

Among the AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profile in terms of DILI, while the subgroup of selective serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (for all cases as well as for cases with SSRIs imputed alone).

Among single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI, with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), the serotonin norepinephrine reuptake inhibitors (SNRIs), and the noradrenergic and specific serotonergic antidepressants (NaSSA) obtained similar results in between. Mianserine was assigned to the tricyclic and tetracyclic ADs in line with the existing literature (Benkert et al., 2010), as its side-effect profile is similar to theirs. This assignment is debatable, however, as some authors place it in the NaSSA group because of its similar chemical structure.

In 104 of the 149 cases, ADs were imputed as solely responsible for DILI: 96 cases in which only one AD was imputed and 8 cases in which 2 or more ADs were imputed in combination. The drugs listed as "other tricyclic antidepressants" (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances listed as "other ADs" were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). The substances mentioned under "other TCAs" (tricyclic antidepressants), "other ADs," and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine because of the particular interest in this drug's hepatotoxicity. The results for agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009; its observation period was therefore considerably shorter than for the other drugs, which were observed from 1993 onward.

Dose-Dependent Aspects of Involved Drugs

As presented in Table 2, the median dosages of the drugs deemed responsible for DILI differed from those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage at the time DILI appeared compared with all monitored inpatients. Within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, too, higher dosages than the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all other substances of this subgroup were prescribed at higher dosages in cases of DILI.
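The dosage comparison in Table 2 is a comparison of medians: the median prescribed daily dose among DILI cases versus the median across all monitored inpatients on the same drug. A small sketch with invented dosages (illustrative values, not AMSP data):

```python
from statistics import median

# Hypothetical daily doses in mg for one drug.
all_monitored = [10, 10, 20, 20, 20, 40]  # every monitored inpatient on the drug
dili_cases = [20, 40, 40]                 # the subset in whom DILI occurred

ratio = median(dili_cases) / median(all_monitored)
print(f"median dose at DILI = {ratio:.1f}x overall median")  # -> 2.0x
```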
Combination Treatment and DILI

The most prevalent drug class combination was that of ADs and antipsychotic drugs (APs), observed in 31 cases in our study. Olanzapine was the AP most often implicated in DILI (6 cases), followed by clozapine (3 cases); the other APs were held responsible for DILI in only 1 to 2 cases each (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). The second most frequent combination was that of anticonvulsant drugs with ADs (7 cases): valproic acid was responsible for 3 DILI cases, followed by carbamazepine, galantamine, pregabaline, and lamotrigine, each implicated in one case.
Discussion

To date, studies on the occurrence of liver enzyme elevations during psychotropic treatment have generally been based on case reports. A systematic drug surveillance program, however, increases methodological accuracy significantly, and several such programs have shown links between ADRs and a range of psychotropic drugs (Grohmann et al., 2004, 2013; Gallego et al., 2012; Lettmaier et al., 2012).

In our study, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI. With respect to the TCAs, this result is in accordance with the previous findings of the AMSP and the Arzneimittel-Überwachungs-Programm in der Psychiatrie (German Drug Surveillance in Psychiatry) study group. The AMSP group published a manuscript on severe ADRs of ADs in 2004 (Degner et al., 2004), in which ADs were classified according to their receptors and diverse action profiles and TCAs were linked to increased levels of liver enzymes. Classical TCAs have a significantly higher potential for inducing hepatic ADRs than newer ADs. Predominantly, these ADRs provoke cholestatic liver damage with prolonged cholestasis; hepatocellular necrosis may also occur (Zimmerman, 1999).
In an intensive drug monitoring study by the Arzneimittel-Überwachungs-Programm working group, elevated liver values were observed in 13.8% of inpatients taking TCAs, but the majority of these inpatients presented with only a minor increase in transaminases (eg, GPT and AP in one-third of the cases observed) (Grohmann et al., 1999, 2004; Degner et al., 2004). Most TCAs do not induce or inhibit CYP-450 isoenzymes. As substrates of these enzymes, however, they may be affected by interactions, a point of particular interest given their relatively narrow therapeutic index (Chou et al., 2000; Kalra, 2007).

In our study population, up to 0.02% of inpatients receiving long-term therapy with fluoxetine showed elevated liver enzymes. While severe hepatotoxic reactions are rare, the literature has reported some ADRs linked to fluoxetine and a few linked to paroxetine and sertraline (Grohmann et al., 1999, 2004; Charlier et al., 2000; Degner et al., 2004). Many newer ADs inhibit CYP-450 enzymes; for example, both fluoxetine and paroxetine are inhibitors of CYP2D6. In combination with TCAs, severe intoxications may occur, and in combinations involving 3 or more substances, the likelihood of toxicity is even higher (Gillman et al., 2007).

As seen in short-term studies, mirtazapine elevates liver enzymes up to 3 times the norm in 2% of patients, but in most cases patients do not develop significant liver damage, and in some patients the values recover in spite of continued medication (Hui et al., 2002; Biswas et al., 2003). Two cases have been documented, however, in which mirtazapine induced severe cholestatic hepatitis (Dodd et al., 2001; Hui et al., 2002). Within our study sample, mirtazapine did not perform worse than the SNRIs; in terms of GPT and GOT values it actually showed a favorable profile.

In the DILI cases of our study, the most prevalent drug class combination was that of ADs and APs, with most cases involving a combination of an AD with olanzapine or clozapine. Most classical APs are metabolized via CYP2D6. A total of 5% to 10% of patients are slow metabolizers and show both high plasma levels and a high risk of a hepatotoxic reaction (Kevin et al., 2007). Little information is available on the newer generation of APs regarding hepatotoxic side effects, but extreme hepatotoxicity seems to occur very rarely. Liver damage induced by clozapine and risperidone has been reported, and even acute liver failure associated with clozapine has been documented (Macfarlane et al., 1997). Olanzapine seems to trigger a hypersensitivity reaction with involvement of the liver (Mansur et al., 2008). Clozapine causes a mild and mostly temporary increase in transaminases in 37% of patients (Grohmann et al., 1989; Macfarlane et al., 1997).

Our results are to some extent consistent with preexisting findings, as summarized in a recent review of antidepressant-induced liver injury published in 2014, which likewise indicated a greater risk of hepatotoxicity for TCAs and agomelatine and the least potential for DILI with SSRIs (Voican et al., 2014). That review identified aminotransferase (GPT) surveillance as the most useful tool for detecting DILI. In accordance with Voican et al. (2014), duloxetine and TCAs such as clomipramine had the least favorable influence on GPT values.

Furthermore, antidepressant-induced liver injury is considered to be dose independent. This is in agreement with our findings; in our sample, the median dosage when DILI occurred was higher than the overall median dosage for 7 of 9 substances.
Additionally, in line with existing findings, age was not significantly related to the occurrence of DILI. Nefazodone and MAO inhibitors have often been described as highly responsible for DILI in previous studies; this cannot be confirmed from the results of this surveillance program, as the single MAO inhibitors as well as nefazodone were only rarely prescribed and therefore could not be reliably compared with the other drugs.

Conclusions

Our findings suggest that SSRIs are less likely than the other antidepressants examined in this study to precipitate DILI. Inpatients with preknown liver damage are at greater risk; in our data, they had more than double the mean values for γ-GT and the transaminases compared with subjects with healthy liver status at the time DILI appeared. Special attention should therefore be given to these inpatients when prescribing antidepressants with potential adverse effects on the liver. Given the large sample size of our observational naturalistic study, the present findings may contribute significantly to the existing literature and help to prevent antidepressant-induced adverse hepatic events.

Limitations

First, the findings of the present study reflect data obtained from inpatients, who are likely to be more severely ill and to have higher antidepressant dosages or more polypharmacy compared with outpatients. Second, the detection of DILI depended on increased liver enzyme values and hence on blood tests. Regular blood tests are taken at the time of admission to the hospital; however, there is no standardized regimen for laboratory testing after admission, which might influence the detection of DILI, especially in cases of asymptomatic drug-induced liver dysfunction. Small differences in the liver enzyme surveillance habits across the 80 hospitals participating in the AMSP program may further contribute to this problem. The AMSP program focuses only on severe ADRs (Grohmann et al., 2004), with at least a 5-fold increase of liver enzymes; this leads to a lower incidence rate of DILI compared with studies that regard GPT values 3-fold and GOT values 2-fold above the normal value as indicative of DILI. Furthermore, reporting bias cannot be ruled out, given the nature of the surveillance program. To prevent discrepancies among reported cases, cases are discussed and examined systematically at regional and international meetings within the AMSP group. Regarding the results for agomelatine, it has to be mentioned that there was an awareness of possible hepatic ADRs from the beginning of its surveillance: the so-called "dear doctor letters" (product safety information) might have increased the detection of agomelatine-induced liver enzyme elevations owing to this sensitization prior to the onset of DILI.
Introduction

The liver, the central organ of biotransformation, is particularly prone to oral medication-related toxicity because drugs and their metabolites reach high concentrations in portal blood rather than in the actual target area of the central nervous system. In clinical practice, however, it is difficult to attribute liver damage to a specific medication (Meyer, 2000). The susceptibility of an individual to drug-induced liver injury (DILI) depends on multiple genetic and epigenetic factors, as well as on age, gender, weight, and alcohol consumption, all of which influence the occurrence of hepatic adverse effects (Krähenbühl and Kaplowitz, 1996). Older patients seem more vulnerable, and women have a stronger tendency to toxic liver reactions than men (Meyer, 2000); ethnic differences have also been reported (Evans, 1986).

Genetic metabolic variability is the most significant susceptibility factor in drug-induced liver toxicity. Enzyme polymorphisms can slow or completely disrupt enzyme function, which in turn results in inefficient processing of drugs (Shenfield and Gross, 1999). This may not always result in corresponding liver damage but does contribute to increased toxicity of substances. The majority of drugs, and almost all psychotropic drugs, are metabolized by the CYP450 enzyme system. Owing to genetically determined polymorphisms of the CYP450 isoenzymes, individuals can be categorized as poor, intermediate, extensive, or superextensive metabolizers (Miners and Birkett, 1998; Shenfield and Gross, 1999; Wilkinson, 2004). If a poor metabolizer receives medication containing several substrates or inhibitors of the same isoenzyme, the risk of a toxic reaction increases owing to slower drug metabolism. As most psychotropic drugs are substrates of CYP2D6 (Ingelman-Sundberg, 2005), this cytochrome is especially significant for pharmacokinetic interactions. Approximately 5% to 10% of Caucasians have reduced or nonexistent CYP2D6 activity and are therefore at risk of toxicity when receiving psychotropic treatment (Transon et al., 1996; Griese et al., 1998; Ingelman-Sundberg, 2005; Bernarda et al., 2006).

A further important consideration is whether patients with preexisting liver dysfunction have a higher risk of hepatotoxic reactions. Although little information from controlled studies exists, there are indications that patients with preexisting liver disorders generally do not display an increased risk of drug-induced hepatotoxicity. It is more likely that preexisting liver damage negatively affects the ability of the liver to regenerate in the case of a hepatotoxic reaction (Chang and Schiano, 2007).

The clinical symptoms of DILI are extremely diverse, and some patients remain asymptomatic. Possible symptoms are tiredness, lack of appetite, nausea, vomiting, fever, a feeling of pressure in the upper right region of the abdomen, joint and muscle pain, pruritus, rashes, and jaundice; the latter is the only symptom directly indicative of the liver's involvement (Chang and Schiano, 2007).

To diagnose asymptomatic toxic liver damage early, a minimum of laboratory testing is required. This involves the measurement of glutamat-oxalat-transaminase (GOT), glutamat-pyruvat-transaminase (GPT), and gamma-glutamyl-transferase (γ-GT) in serum which, if found to be normal, indicates that there has been no disruption of liver function. GOT and GPT are also well known as the enzymes aspartate aminotransferase (AST) and alanine aminotransferase (ALT), respectively.
It is important to consider the possibility of DILI when prescribing psychotropic drugs and to record a detailed history of all medication taken by the patient, with particular attention to the length of use, the dose, and the time between intake of the medication and the appearance of symptoms. The latency period can vary between a few days and several months, and, as liver damage may also result from other causes such as viral, autoimmune, or alcohol-induced hepatitis and acute Wilson's disease, the diagnosis of drug-induced toxic liver damage is often a diagnosis of exclusion (Norris et al., 2008). Recently, Chalasani et al. (2014) developed practice guidelines for diagnosing and managing DILI.

The hepatic pattern of damage can be classified as predominantly hepatocellular, predominantly cholestatic, or a hepatocellular/cholestatic mixture; this classification is important, as the patterns differ in severity. Drugs also cause drug-specific patterns of liver damage, revealed by increased values of the transaminases (GOT and GPT) and/or markers of cholestasis (γ-GT, alkaline phosphatase) (Zimmerman, 1999; Andrade et al., 2004).
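As a rough illustration of this pattern classification, consider a function that labels a case according to which enzyme group is elevated. The function name and the 2-fold cut-off are assumptions for illustration only, not the clinical algorithm used in practice:

```python
def damage_pattern(gpt_x_uln: float, ap_x_uln: float) -> str:
    """Simplified pattern label from enzyme elevations.

    gpt_x_uln: GPT/ALT as a multiple of its upper limit of normal.
    ap_x_uln:  alkaline phosphatase as a multiple of its upper limit of normal.
    """
    hepatocellular = gpt_x_uln >= 2.0  # illustrative threshold
    cholestatic = ap_x_uln >= 2.0      # illustrative threshold
    if hepatocellular and cholestatic:
        return "mixed hepatocellular/cholestatic"
    if hepatocellular:
        return "predominantly hepatocellular"
    if cholestatic:
        return "predominantly cholestatic"
    return "no clear pattern"

print(damage_pattern(gpt_x_uln=6.0, ap_x_uln=1.2))  # -> predominantly hepatocellular
```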
A slight increase in transaminase or γ-GT levels to twice the norm without a rise in bilirubin is often of no clinical significance and may simply disappear in spite of continued medication, a phenomenon often observed in antiepileptic or mood-stabilizing therapy (Yatham et al., 2002). These small functional changes must still be checked, and in the case of a further elevation in liver enzyme levels the medication must be discontinued (Voican et al., 2014). The prognosis of DILI is generally good, and less severe forms heal quickly and completely (Hayashi and Fontana, 2014). It is difficult to obtain figures regarding hepatotoxic drug reactions, as systematic epidemiological analyses are seldom performed and observation periods are usually too short to have true validity; adverse effects are also not reliably reported or registered.

Drug surveillance programs permit early detection of adverse drug reactions (ADRs), which may minimize their consequences. The Arzneimittelsicherheit in der Psychiatrie (AMSP) study is one such program in the field of psychiatry, systematically evaluating severe ADRs of psychotropic medication in inpatients. The AMSP maintains a database of these ADRs registered in the participating psychiatric clinics in Austria, Germany, and Switzerland (for details on the AMSP methods, see Grohmann et al., 2004, 2013; Konstantinidis et al., 2012). In the present study, we have used this database to analyze elevations of liver enzymes, with a particular focus on sociodemographic data and the significance of clinical manifestations as well as transaminase levels measured during antidepressant (AD) monotherapy and combination therapies.

Methods

The AMSP program aims at the continuous detection of severe ADRs resulting from psychotropic treatment, evaluated during inpatient treatment. In our study, we analyzed data from 80 university, municipal, or state psychiatric hospitals or departments participating in the AMSP program from 1993 to 2011. Information on severe ADRs is collected from clinicians on a regular basis by psychiatrists acting as drug monitors, who use a standardized questionnaire to document cases. The drug monitors are in touch with ward psychiatrists at regular intervals, and severe adverse drug reactions are reported at weekly meetings of the medical staff (Grohmann et al., 2004). Information is collected on the details of the adverse event as well as on patient demographics and nonpsychotropic drug intake; it includes alternative hypotheses on the causes of the ADR, relevant risk factors, measures undertaken, and previous exposure to the drug. Senior doctors of each participating hospital review the cases, which are later discussed at central and regional case conferences taking place 3 times per year. Participants comprise the hospital drug monitors, representatives of the national drug-regulating authorities, and drug safety experts from the pharmaceutical industry. Following discussion and analysis, ADR probability ratings are assigned and sent to the relevant authorities; the pharmaceutical companies receive the case questionnaires, which are also stored in the AMSP central database.

Probability ratings were performed based on the AMSP study guidelines (Grohmann et al., 2004) and the recommendations of Hurwitz and Wade (1969) and Seidl et al. (1965). The ADR probability rating system defines the following grades: Grade 1, possible, that is, the risk of the ADR is not known or the probability of a cause other than the drug in question is >50%; Grade 2, probable, with a known reaction, time course, and dosage for a specific drug and a likelihood of alternative causes of <50%; Grade 3, definite, meaning that reexposure to the drug causes the ADR again; and Grade 4, questionable or not sufficiently documented. In cases where an ADR results from a pharmacodynamic interaction of 2 or more drugs, each drug is rated as possible, probable, or definite according to the given facts.

Furthermore, drug-use data are collected twice per year from all hospitals participating in the AMSP program; the number of all inpatients and the mean treatment duration of all patients per year are also recorded.

The data presented in this study refer only to elevated liver enzymes due to "probable" (Grade 2) and "definite" (Grade 3) ADRs. An ADR is documented when the value of one of the liver enzymes (GOT/AST, GPT/ALT, γ-GT, or alkaline phosphatase) exceeds 5 times the upper normal value ("severe" as defined by the AMSP, based on the judgment of hepatologic experts) or when there are severe clinical symptoms and/or cholestasis. The threshold of 5 times the upper limit of the normal GOT and GPT values has been proposed in the literature to avoid unnecessary withdrawal of substances (Aithal et al., 2011). Maximal levels of each liver enzyme are recorded by the AMSP in all DILI cases; mean maximum values per drug were evaluated for this analysis. Only drugs prescribed more than 2000 times within the overall study population were included in the analyses.
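The 5-fold documentation threshold lends itself to a simple screening function. The sketch below is illustrative: the function name is hypothetical, and the upper limits of normal are placeholders, since reference values differ between laboratories (as noted elsewhere in this report):

```python
# Placeholder upper limits of normal (ULN) in U/L; real values are laboratory specific.
ULN = {"GOT": 35.0, "GPT": 35.0, "gamma-GT": 40.0, "AP": 130.0}

def exceeds_amsp_threshold(values_u_per_l: dict) -> bool:
    """True if any liver enzyme exceeds 5 times its upper normal value,
    the documentation threshold used by the AMSP program."""
    return any(
        values_u_per_l.get(enzyme, 0.0) > 5 * limit
        for enzyme, limit in ULN.items()
    )

print(exceeds_amsp_threshold({"GPT": 285.0, "GOT": 120.0}))  # True: GPT > 5 * 35 U/L
```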
Our retrospective analysis employs data extracted from the anonymized databank of the AMSP, drawn from all 80 participating hospitals between 1993 and 2011. Detailed information on the hospitals participating in the program can be found online (www.amsp.de). Informed consent of the participants was not required, as the data analyzed were derived from an anonymized databank. The AMSP drug surveillance program was approved by the leading boards of each participating institute prior to implementation, and the Ethics Committee of the University of Munich formally approved evaluations based on the AMSP databank.

Statistical Analysis

Incidence rates of hepatotoxicity were calculated as the percentage of inpatients receiving a specific AD or AD subclass and are presented together with their 95% CIs. Given the low number of cases and the large number of inpatients involved, the CIs were calculated with the exact method rather than one of the approximate methods (Vollset, 1993). The statistical program R was used to generate the figures (R Core Team, 2014). Chi-square tests were calculated using SPSS Version 22.0. Significance was set at P < .05.
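The exact method referred to here is commonly implemented as the Clopper-Pearson interval. A Python sketch of that computation, applied to the overall figures reported earlier in the Results (the original analysis was done in R; exact_ci is a hypothetical helper name):

```python
from scipy.stats import beta

def exact_ci(cases: int, n: int, conf: float = 0.95) -> tuple[float, float]:
    """Clopper-Pearson (exact) confidence interval for a binomial proportion."""
    alpha = 1.0 - conf
    lower = 0.0 if cases == 0 else beta.ppf(alpha / 2, cases, n - cases + 1)
    upper = 1.0 if cases == n else beta.ppf(1 - alpha / 2, cases + 1, n - cases)
    return lower, upper

lo, hi = exact_ci(149, 184_234)  # 149 DILI cases among all AD-treated inpatients
print(f"incidence 0.081% (95% CI {100 * lo:.3f}%-{100 * hi:.3f}%)")
```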
A total of 184234 inpatients were treated with antidepressants. In 147 inpatients (and 149 cases, as 2 inpatients suffered from DILI twice) a severe hepatic ADR was observed (0.08%). Within 27 of 149 cases, clinical symptoms appeared (18.1%). In 104 inpatients, only ADs were imputed with the remaining inpatients suffering toxicity from an AD in combination with other psychotropic drugs. The majority of all monitored inpatients treated with antidepressants (56.5%) were suffering from depression. A total 75.9% were aged <65 years. Inpatients under surveillance were predominantly female (63.1%). A total 75.2% of inpatients suffering from DILI were diagnosed with depression, followed by 9.4% with the diagnosis of schizophrenia (Table 1). Thus, DILI patients differed significantly in their diagnostic distribution from the total AD population. Age and sex distribution, on the other hand, did not differ in DILI patients from all monitored AD patients.\nAge, Sex, and International Classification of Diseases Version 10 (ICD-10) Diagnosis of Patients Monitored during the Period of 1993–2011 Suffering from DILI Due to ADs and the Total Population under Surveillance (149 cases of DILI)\nAbbreviations: DILI, drug-induced liver injury; AD, antidepressant; n, number.\n Drugs Involved in DILI In 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2.\n\nIncidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively)\n*Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine.\n**No case of milnacipran and tianeptin; multiple nominations possible.\nDrug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times).\nDrug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times).\nAs for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases).\nAs for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. 
Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure.\nIn 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). The substances metioned in “other TCAs” (tricyclic antidepressants), “other Ads,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine, due to the particular interest in this drugs hepatotoxicity. The results of agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. Therefore, the observation period for agomelatine was significantly shorter than for all other drugs observed since 1993.\nIn 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2.\n\nIncidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively)\n*Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine.\n**No case of milnacipran and tianeptin; multiple nominations possible.\nDrug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times).\nDrug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times).\nAs for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases).\nAs for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. 
Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure.\nIn 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). The substances metioned in “other TCAs” (tricyclic antidepressants), “other Ads,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine, due to the particular interest in this drugs hepatotoxicity. The results of agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. Therefore, the observation period for agomelatine was significantly shorter than for all other drugs observed since 1993.\n Dose-Dependent Aspects of Involved Drugs As presented in Table 2, there were differences in the median dosages between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage compared with all monitored inpatients at the time when DILI appeared. Also within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI.\nAs presented in Table 2, there were differences in the median dosages between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage compared with all monitored inpatients at the time when DILI appeared. Also within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI.\n Combination Treatment and DILI The most prevalent drug class combination was the one of ADs and antipsychotic drugs (APs), in 31 cases within our study. First, olanzapine was implicated in DILI (6 cases), followed by clozapine (3 cases), and other APs held responsible for DILI in only 1 to 2 cases (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). Second, anticonvulsant drugs were combined with ADs (7 cases). Valproic acid was responsible for 3 DILI cases followed by carbamazepine, galantamine, pregabaline, and lamotrigine, implicated in only one case each.\nThe most prevalent drug class combination was the one of ADs and antipsychotic drugs (APs), in 31 cases within our study. 
First, olanzapine was implicated in DILI (6 cases), followed by clozapine (3 cases), and other APs held responsible for DILI in only 1 to 2 cases (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). Second, anticonvulsant drugs were combined with ADs (7 cases). Valproic acid was responsible for 3 DILI cases followed by carbamazepine, galantamine, pregabaline, and lamotrigine, implicated in only one case each.\n Elevation of Liver Enzymes and Involved Drugs Maximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the time period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in terms of maximum GOT (also known as aspartate-aminotransferase or AST), GPT (also known as alanin-aminotranferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was determined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals.\nDuloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence on GOT values. Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, as well as venlafaxine increased γ-GT values most (Figure 3a-c). The duration of treatment when DILI occurred was different among the antidepressants; mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span with 43.8 days on average until DILI occurred. Bilirubin was elevated in 5 of 149 cases.\n(a) Gamma-Glutamyl-Transferase (Gamma-GT) mean maximum values of single substances (imputed alone for a minimum of 3 times except agomelatine due to its delayed implementation, which was imputed 2 times). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate-aminotransferase [AST]) mean maximum values of single substances (imputed alone for a minimum of 3 cases, except agomelatine due to its delayed implementation, which was imputed 2 times). (c) Glutamat-pyruvat-transaminase (GPT; also known as alanin-aminotransferase [ALT]) mean maximum values of single substances (imputed alone for a minimum of 3 cases except agomelatine due to its delayed implementation).\nMaximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the time period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in terms of maximum GOT (also known as aspartate-aminotransferase or AST), GPT (also known as alanin-aminotranferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was determined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. 
Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals.\nDuloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence on GOT values. Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, as well as venlafaxine increased γ-GT values most (Figure 3a-c). The duration of treatment when DILI occurred was different among the antidepressants; mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span with 43.8 days on average until DILI occurred. Bilirubin was elevated in 5 of 149 cases.\n(a) Gamma-Glutamyl-Transferase (Gamma-GT) mean maximum values of single substances (imputed alone for a minimum of 3 times except agomelatine due to its delayed implementation, which was imputed 2 times). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate-aminotransferase [AST]) mean maximum values of single substances (imputed alone for a minimum of 3 cases, except agomelatine due to its delayed implementation, which was imputed 2 times). (c) Glutamat-pyruvat-transaminase (GPT; also known as alanin-aminotransferase [ALT]) mean maximum values of single substances (imputed alone for a minimum of 3 cases except agomelatine due to its delayed implementation).\n Elevation of Liver Enzymes in Preexisting Liver Damage and Clinical Symptoms For inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. Cases with preknown liver damage presented maximum γ-GT, GOT, and GPT mean values of 525, 402, and 564U/L, respectively. This indicates that preknown liver damage inpatients had more than doubled mean maximum values for γ-GT transaminases than subjects with normal liver status at the time when DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was the most common risk factor by far (59 cases), followed by substance abuse, mostly alcohol (20 cases). Furthermore, predisposition to adverse reactions occurred in 10 cases.\nThe most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment remained and dosage was reduced, while in all other cases the drug was withdrawn after DILI was assessed. Within 55 cases, DILI disappeared totally, while in 85 cases DILI improved. Within 9 cases the course was unknown.\nFor inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. Cases with preknown liver damage presented maximum γ-GT, GOT, and GPT mean values of 525, 402, and 564U/L, respectively. This indicates that preknown liver damage inpatients had more than doubled mean maximum values for γ-GT transaminases than subjects with normal liver status at the time when DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was the most common risk factor by far (59 cases), followed by substance abuse, mostly alcohol (20 cases). 
Furthermore, predisposition to adverse reactions occurred in 10 cases.\nThe most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment remained and dosage was reduced, while in all other cases the drug was withdrawn after DILI was assessed. Within 55 cases, DILI disappeared totally, while in 85 cases DILI improved. Within 9 cases the course was unknown.\n Single Case of Acute Liver Failure In our study sample of 149 liver enzyme elevations, only one case of acute liver failure occurred in a 20-year-old woman with a predamaged liver resulting from an overdose of paracetamol. At the time of admission to the psychiatric ward, the transaminase values were normal. She had been on a medication of 150mg doxepine (for 3 days) and 10mg olanzapine (for 6 days). The patient’s liver enzymes increased rapidly, and clinical symptoms such as vomiting, nausea, and epigastric pain set in. In the following laboratory analysis, a hepato-toxicity was identified (bilirubin 3.8mg/dL, GPT 8827U/L, GOT 7363U/L, lactate dehydrogenase 4321U/L). As soon as acute liver failure was diagnosed, the patient was transferred to the intensive care ward where she was under the care of the transplantation consulting team. All medication was discontinued and the patient received electrolyte infusions. As her liver function recovered rapidly, a liver transplantation was no longer necessary. The hepatotoxic effects of doxepine and olanzapine have been discribed in previous literature, but to our knowledge such a severe case has not been presented so far.\nIn our study sample of 149 liver enzyme elevations, only one case of acute liver failure occurred in a 20-year-old woman with a predamaged liver resulting from an overdose of paracetamol. At the time of admission to the psychiatric ward, the transaminase values were normal. She had been on a medication of 150mg doxepine (for 3 days) and 10mg olanzapine (for 6 days). The patient’s liver enzymes increased rapidly, and clinical symptoms such as vomiting, nausea, and epigastric pain set in. In the following laboratory analysis, a hepato-toxicity was identified (bilirubin 3.8mg/dL, GPT 8827U/L, GOT 7363U/L, lactate dehydrogenase 4321U/L). As soon as acute liver failure was diagnosed, the patient was transferred to the intensive care ward where she was under the care of the transplantation consulting team. All medication was discontinued and the patient received electrolyte infusions. As her liver function recovered rapidly, a liver transplantation was no longer necessary. The hepatotoxic effects of doxepine and olanzapine have been discribed in previous literature, but to our knowledge such a severe case has not been presented so far.", "From 1993 to 2011 the AMSP program monitored 390252 inpatients in 80 hospitals. A total of 184234 inpatients were treated with antidepressants. In 147 inpatients (and 149 cases, as 2 inpatients suffered from DILI twice) a severe hepatic ADR was observed (0.08%). Within 27 of 149 cases, clinical symptoms appeared (18.1%). In 104 inpatients, only ADs were imputed with the remaining inpatients suffering toxicity from an AD in combination with other psychotropic drugs. The majority of all monitored inpatients treated with antidepressants (56.5%) were suffering from depression. A total 75.9% were aged <65 years. Inpatients under surveillance were predominantly female (63.1%). 
A total 75.2% of inpatients suffering from DILI were diagnosed with depression, followed by 9.4% with the diagnosis of schizophrenia (Table 1). Thus, DILI patients differed significantly in their diagnostic distribution from the total AD population. Age and sex distribution, on the other hand, did not differ in DILI patients from all monitored AD patients.\nAge, Sex, and International Classification of Diseases Version 10 (ICD-10) Diagnosis of Patients Monitored during the Period of 1993–2011 Suffering from DILI Due to ADs and the Total Population under Surveillance (149 cases of DILI)\nAbbreviations: DILI, drug-induced liver injury; AD, antidepressant; n, number.", "In 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2.\n\nIncidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively)\n*Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine.\n**No case of milnacipran and tianeptin; multiple nominations possible.\nDrug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times).\nDrug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times).\nAs for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases).\nAs for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure.\nIn 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). 
The substances mentioned in “other TCAs” (tricyclic antidepressants), “other ADs,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine because of the particular interest in this drug’s hepatotoxicity. The results for agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. Therefore, the observation period for agomelatine was significantly shorter than for all other drugs, which were observed since 1993.

Dose-Dependent Aspects of Involved Drugs

As presented in Table 2, there were differences in median dosage between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage, compared with all monitored inpatients, at the time when DILI appeared. Also within the SNRI, noradrenaline reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI.

Combination Treatment and DILI

The most prevalent drug class combination in cases of DILI was that of ADs and antipsychotic drugs (APs), observed in 31 cases in our study. Among the APs, olanzapine was implicated most often (6 cases), followed by clozapine (3 cases); other APs were held responsible for DILI in only 1 to 2 cases each (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). The second most frequent combination was that of anticonvulsant drugs with ADs (7 cases). Valproic acid was responsible for 3 DILI cases, followed by carbamazepine, galantamine, pregabaline, and lamotrigine, each implicated in only one case.

Elevation of Liver Enzymes and Involved Drugs

Maximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in maximum GOT (also known as aspartate aminotransferase or AST), GPT (also known as alanine aminotransferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was defined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals.

Duloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence. Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, and venlafaxine increased γ-GT values most (Figure 3a-c). The duration of treatment until DILI occurred differed among the antidepressants: mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span, with 43.8 days on average until DILI occurred.
Bilirubin was elevated in 5 of 149 cases.

Figure 3. (a) Gamma-glutamyl-transferase (gamma-GT) mean maximum values of single substances (imputed alone a minimum of 3 times, except agomelatine, imputed 2 times, owing to its delayed introduction). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate aminotransferase [AST]) mean maximum values of single substances (imputed alone in a minimum of 3 cases, except agomelatine, imputed 2 times, owing to its delayed introduction). (c) Glutamat-pyruvat-transaminase (GPT; also known as alanine aminotransferase [ALT]) mean maximum values of single substances (imputed alone in a minimum of 3 cases, except agomelatine, owing to its delayed introduction).

Elevation of Liver Enzymes in Preexisting Liver Damage and Clinical Symptoms

For inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. Cases with preknown liver damage presented maximum γ-GT, GOT, and GPT mean values of 525, 402, and 564U/L, respectively. This indicates that inpatients with preknown liver damage had more than double the mean maximum values for γ-GT and the transaminases compared with subjects with normal liver status at the time when DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was by far the most common risk factor (59 cases), followed by substance abuse, mostly alcohol (20 cases). Furthermore, a predisposition to adverse reactions was present in 10 cases.

The most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment was continued at a reduced dosage, while in all other cases the drug was withdrawn after DILI was assessed. In 55 cases, DILI disappeared totally, while in 85 cases DILI improved; in 9 cases the course was unknown.

Discussion

To date, studies on the occurrence of the elevation of liver enzymes during psychotropic treatment have generally been based on case reports.
A systematic drug surveillance program, however, increases the methodological accuracy significantly, and several such programs have shown links between ADRs and a range of psychotropic drugs (Grohmann et al., 2004, 2013; Gallego et al., 2012; Lettmaier et al., 2012).

In our study, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI. The result regarding the TCAs is in accordance with the previous results of the AMSP and Arzneimittel-Überwachungs-Programm in der Psychiatrie (German Drug Surveillance in Psychiatry) study groups. The AMSP group published a manuscript on severe ADRs of ADs in the year 2004 (Degner et al., 2004). ADs were classified according to receptors and their diverse action profiles, and TCAs were linked to increased levels of liver enzymes. Classical TCAs have a significantly higher potential for inducing hepatic ADRs than newer ADs. Predominantly, these ADRs provoke cholestatic liver damage with prolonged cholestasis, and hepatocellular necrosis may also occur (Zimmerman, 1999). In an intensive drug monitoring study by the Arzneimittel-Überwachungs-Programm working group, elevated liver values were observed in 13.8% of inpatients taking TCAs, but the majority of inpatients presented with only a minor increase in transaminases (eg, GPT and AP in one-third of cases observed) (Grohmann et al., 1999, 2004; Degner et al., 2004). Most TCAs do not induce or inhibit CYP-450 isoenzymes. As a substrate of these enzymes, however, they may be affected by interactions, a point that is of interest due to their relatively narrow therapeutic index (Chou et al., 2000; Kalra, 2007).

In our study population, up to 0.02% of inpatients receiving long-term therapy with fluoxetine showed elevated liver enzymes. While severe hepatotoxic reactions are rare, the literature has reported some ADRs linked to fluoxetine and a few to paroxetine and sertraline (Grohmann et al., 1999, 2004; Charlier et al., 2000; Degner et al., 2004). Many new ADs inhibit CYP-450 enzymes; for example, both fluoxetine and paroxetine are inhibitors of CYP2D6. In combination with TCAs, severe intoxications may occur, and in those involving 3 or more substances, the likelihood of toxicity is even higher (Gillman et al., 2007).

As seen in short-term studies, mirtazapine elevates liver enzymes up to 3 times the norm in 2% of patients, but in most cases patients do not develop significant liver damage, with some patients’ values even recovering in spite of continued medication (Hui et al., 2002; Biswas et al., 2003). Two cases have been documented, however, in which mirtazapine induced severe cholestatic hepatitis (Dodd et al., 2001; Hui et al., 2002). Within our study sample, mirtazapine did not perform worse than the SNRIs, especially in terms of GPT and GOT values, where it actually showed a favorable profile.

In our study, the most prevalent drug class combination in cases of DILI was that of ADs and APs, with most cases concerning a combination of an AD with olanzapine or clozapine. Most classical APs are metabolized via CYP2D6. A total of 5% to 10% of patients are slow metabolizers and show both high plasma levels and a high risk of a hepatotoxic reaction (Kevin et al., 2007). There is little information available on the newer generation of APs regarding hepatotoxic side effects, but extreme hepatotoxicity seems to occur very rarely. Liver damage induced by clozapine and risperidone has been documented, including acute liver failure associated with clozapine (Macfarlane et al., 1997).
Olanzapine seems to trigger a hypersensitivity reaction with involvement of the liver (Mansur et al., 2008). Clozapine causes a mild and mostly temporary increase in transaminases in 37% of patients (Grohmann et al., 1989; Macfarlane et al., 1997).

Our results are to some extent consistent with preexisting findings as summarized in a recent review of antidepressant-induced liver injury published in 2014, which also indicated a greater risk of hepatotoxicity for TCAs and agomelatine and the least potential for DILI with SSRIs (Voican et al., 2014). The latter review identified aminotransferase (GPT) surveillance as the most useful tool for detecting DILI. In accordance with Voican et al. (2014), duloxetine and TCAs such as clomipramine had the least favorable influence on GPT values.

Furthermore, antidepressant-induced liver injury is considered to be dose independent. This is in agreement with our findings; in our sample, the median dosage when DILI occurred was higher than the overall median dosage for 7 of 9 substances. Additionally, in line with existing findings, age was not significantly related to the occurrence of DILI. Nefazodone and MAO inhibitors were often described as highly responsible for DILI in previous studies; this cannot be confirmed within the results of this surveillance program, as single MAO inhibitors as well as nefazodone were only rarely prescribed and therefore could not be reliably compared with other drugs.

Conclusions

Our findings suggest that SSRIs are less likely than the other antidepressants examined in this study to precipitate DILI. Inpatients with preknown liver damage are at greater risk; in our data, they had more than double the mean values for γ-GT and transaminases compared with subjects with healthy liver status at the time when DILI appeared. Thus, special attention should be given to these inpatients when prescribing antidepressants with potential adverse effects on the liver. Given the huge sample size in our observational naturalistic study, the present findings may contribute significantly to the existing literature and help to prevent antidepressant-induced adverse hepatic events.

Limitations

First, the findings from the present study reflect data obtained from inpatients, who are likely to be more severely ill and have higher antidepressant dosages or more polypharmacy compared with outpatients. Second, the detection of DILI was dependent on increased liver enzyme values and hence on blood examination tests. Regular blood tests are taken at the time of admittance to the hospital; however, there is no standardized regimen for laboratory testing after admittance, which might influence the detection of DILI, especially in cases of asymptomatic drug-induced liver dysfunction. Small differences in the liver enzyme surveillance habits across the 80 hospitals participating in the AMSP program may further contribute to the aforementioned problem. The AMSP program focuses only on severe ADRs (Grohmann et al., 2004), with at least a 5-fold increase of liver enzymes. This leads to a lower incidence rate of DILI compared with other studies that use GPT values 3-fold and GOT values 2-fold above the normal value as indicative of DILI. Furthermore, reporting bias cannot be ruled out due to the nature of the surveillance program. To prevent discrepancies among reported cases, the latter are discussed and examined in a systematic way at regional and international meetings within the AMSP group.
In terms of the results for agomelatine, it has to be mentioned that there was an awareness of possible liver ADRs from the beginning of the surveillance. The so-called “dear doctor letters” (product safety information) might have influenced the detection of agomelatine-induced liver enzyme elevations due to this sensitization prior to the onset of DILI.
[ "Adverse drug reaction", "antidepressants", "drug surveillance", "elevation of liver enzymes" ]
Introduction

The liver, the central organ of biotransformation, is particularly prone to oral medication-related toxicity due to high concentrations of drugs and their metabolites in portal blood rather than in the actual target area of the central nervous system. It is, however, difficult to attribute liver damage to a specific medication in clinical practice (Meyer, 2000). The susceptibility of an individual to drug-induced liver injury (DILI) depends on multiple genetic and epigenetic factors as well as age, gender, weight, and alcohol consumption, all of which influence the occurrence of hepatic adverse effects (Krähenbühl and Kaplowitz, 1996). Older patients seem more vulnerable, and women have a stronger tendency to toxic liver reactions than men (Meyer, 2000); ethnic differences have also been reported (Evans, 1986).

Genetic metabolic variability is the most significant susceptibility factor in drug-induced liver toxicity. Enzyme polymorphisms can cause a slowing or complete disruption of enzyme function, which in turn results in the inefficient processing of drugs (Shenfield and Gross, 1999). This may not always result in corresponding liver damage but does contribute to an increased toxicity of substances. The majority of drugs, and almost all psychotropic drugs, are metabolized by the CYP450 enzyme system. Due to genetically determined polymorphisms of CYP450 isoenzymes, individuals can be categorized as poor, intermediate, extensive, or superextensive metabolizers (Miners and Birkett, 1998; Shenfield and Gross, 1999; Wilkinson, 2004). If a poor metabolizer receives medication containing several substrates or inhibitors of the same isoenzyme, the risk of a toxic reaction increases owing to slower drug metabolism. As most psychotropic drugs are substrates of CYP2D6 (Ingelman-Sundberg, 2005), this cytochrome is especially significant for pharmacokinetic interactions. Approximately 5% to 10% of Caucasians have reduced or nonexistent CYP2D6 activity and are therefore at risk of toxicity when receiving psychotropic treatment (Transon et al., 1996; Griese et al., 1998; Ingelman-Sundberg, 2005; Bernarda et al., 2006).

A further important consideration is whether patients with preexisting liver dysfunction have a higher risk of hepatotoxic reactions. Although little information from controlled studies exists, there are indications that patients with preexisting liver disorders generally do not display an increased risk of drug-induced hepatotoxicity. It is more likely that preexisting liver damage negatively affects the ability of the liver to regenerate in the case of a hepatotoxic reaction (Chang and Schiano, 2007).

The clinical symptoms of DILI are extremely diverse, with some patients remaining asymptomatic. Possible symptoms are tiredness, lack of appetite, nausea, vomiting, fever, a feeling of pressure in the upper right region of the abdomen, joint and muscle pain, pruritus, rashes, and jaundice; the latter is the only symptom directly indicative of the liver’s involvement (Chang and Schiano, 2007). To diagnose asymptomatic toxic liver damage early, a minimum of laboratory testing is required. This involves the measurement of glutamat-oxalat-transaminase (GOT), glutamat-pyruvat-transaminase (GPT), and gamma-glutamyl-transferase (γ-GT) in serum which, if found to be normal, indicates that there has been no disruption to liver function. GOT and GPT are also well known as the enzymes aspartate aminotransferase (AST) and alanine aminotransferase (ALT), respectively.
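In common pharmacogenetic practice, the metabolizer categories described above are assigned from the number of functional CYP2D6 alleles a person carries. The following minimal R sketch illustrates that mapping; the allele-count cut-offs are a simplified convention for illustration and are not taken from this article (real classifications use gene activity scores and are more nuanced).

```r
# Illustrative mapping from the number of functional CYP2D6 alleles to the
# metabolizer phenotypes named above (simplified convention, not from this study).
cyp2d6_phenotype <- function(n_functional_alleles) {
  if (n_functional_alleles == 0) {
    "poor metabolizer"
  } else if (n_functional_alleles == 1) {
    "intermediate metabolizer"
  } else if (n_functional_alleles == 2) {
    "extensive metabolizer"
  } else {
    "superextensive (ultrarapid) metabolizer"
  }
}

cyp2d6_phenotype(0)  # "poor metabolizer": highest toxicity risk under CYP2D6 substrates
```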
It is important to consider the possibility of DILI when prescribing psychotropic drugs and to record a detailed history of all medication taken by the patient, with particular attention paid to the length of use, the dose, and the time between the intake of medication and the appearance of symptoms. The latency period involved here can vary between a few days and some months and, as liver damage may result from other causes such as viral, autoimmune, or alcohol-induced hepatitis and acute Wilson's disease, the diagnosis of drug-induced toxic liver damage is often a diagnosis of exclusion (Norris et al., 2008). Recently, Chalasani et al. (2014) developed practice guidelines for diagnosing and managing DILI.

The hepatic pattern of damage can be classified as predominantly hepatocellular, predominantly cholestatic, or a hepatocellular/cholestatic mixture; this classification is important, as the patterns are of varying severity. Drugs also cause drug-specific patterns of liver damage, revealed by increased values of transaminases (GOT and GPT) and/or markers of cholestasis (γ-GT, alkaline phosphatase) (Zimmerman, 1999; Andrade et al., 2004). A slight increase in transaminase or γ-GT levels to twice the norm without a rise in bilirubin is often of no clinical significance and, in spite of continued medication, can simply disappear. This is a phenomenon often observed in antiepileptic or mood-stabilizing therapy (Yatham et al., 2002). These small functional changes must still be checked, and in the case of a further elevation in liver enzyme levels, medication must be discontinued (Voican et al., 2014). The prognosis of DILI is generally good, and less severe forms heal quickly and completely (Hayashi and Fontana, 2014).

It is difficult to obtain figures regarding hepatotoxic drug reactions, as systematic epidemiological analyses are seldom done and observations are not conducted for a long enough period to have any true validity. Adverse effects are also not reliably reported or registered. Drug surveillance programs permit an early detection of adverse drug reactions (ADRs), which may minimize consequences. The Arzneimittelsicherheit in der Psychiatrie (AMSP) study is one such program in the field of psychiatry, systematically evaluating severe ADRs of psychotropic medication in inpatients. The AMSP maintains a database of these ADRs registered in the participating psychiatric clinics in Austria, Germany, and Switzerland (for details on AMSP methods, see Grohmann et al., 2004, 2013; Konstantinidis et al., 2012). In the present study, we have used this database to analyze the elevation of liver enzymes, with a particular focus on sociodemographic data and the significance of clinical manifestations as well as transaminase levels measured during antidepressant (AD) monotherapy and combination therapies.

Methods

The AMSP program aims for the continuous detection of severe ADRs resulting from psychotropic treatment. These are evaluated during inpatient treatment. In our study, we analyzed data from 80 university, municipal, or state psychiatric hospitals or departments participating in the AMSP program from 1993 to 2011. Information on severe ADRs is collected from clinicians on a regular basis by psychiatrists acting as drug monitors, who use a standardized questionnaire to document cases. The drug monitors get in touch with ward psychiatrists at regular intervals, and severe adverse drug reactions are reported at weekly meetings of the medical staff (Grohmann et al., 2004).
Information is collected on the details of adverse events as well as on patient demographics and nonpsychotropic drug intake. It includes alternative hypotheses on the causes of the ADR, relevant risk factors, measures undertaken, and previous exposure to the drug. Senior doctors of each hospital involved review the cases, which are later discussed at central and regional case conferences taking place 3 times per year. Participants comprise hospital drug monitors, representatives from the national drug-regulating authorities, and drug safety experts from the pharmaceutical industry. Following discussion and analysis, ADR probability ratings are assigned and sent to the relevant authorities, and pharmaceutical companies receive the case questionnaires, which are also stored in the AMSP central database.

Probability ratings were performed based on the AMSP study guidelines (Grohmann et al., 2004) and the recommendations of Hurwitz and Wade (1969) and Seidl et al. (1965). The ADR probability rating system defines the following grades of probability, beginning with Grade 1, in which the ADR is possible; that is, the risk of the ADR is not known or the probability of a cause other than the drug in question is >50%. Grade 2 is defined as probable, with a known reaction, time course, and dosage for a specific drug; the likelihood of alternative causes is <50%. Grade 3 is categorized as definite, meaning that reexposure to the drug again causes the ADR. Grade 4 signifies questionable or not sufficiently documented. In cases where an ADR results from a pharmacodynamic interaction of 2 or more drugs, each drug is given a rating of possible, probable, or definite according to the given facts. Furthermore, drug-use data are collected twice per year from all hospitals participating in the AMSP program; the number of all inpatients and the mean treatment duration of all patients per year are also recorded.

The data presented in this study refer only to elevated liver enzymes due to “probable” (Grade 2) and “definite” (Grade 3) ADRs. Documentation of ADRs occurs when the value for one of the liver enzymes (GOT/AST, GPT/ALT, γ-GT, or alkaline phosphatase) exceeds 5 times the upper normal value (“severe” as defined by the AMSP, based on the judgment of hepatologic experts) or when there are severe clinical symptoms and/or cholestasis. The threshold of 5 times the upper limit of normal GOT and GPT values has been proposed in the literature to avoid unnecessary withdrawal of substances (Aithal et al., 2011). Maximal levels of each liver enzyme are recorded in the AMSP in all DILI cases; mean maximum values per drug were evaluated for this analysis. Only drugs prescribed more than 2000 times within the overall study population were included in the analyses.

Our retrospective analysis employs data extracted from the anonymized databank of the AMSP drawn from all 80 participating hospitals between 1993 and 2011. Detailed information on the hospitals participating in the program can be found online (www.amsp.de). The informed consent of participants was not required, as the data analyzed were derived from an anonymized databank. The AMSP drug surveillance program was approved by the leading boards of each participating institute prior to implementation, and the Ethics Committee of the University of Munich formally approved evaluations based on the AMSP databank.
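To restate the documentation rule above in code: an ADR is recorded as severe when any of the four liver enzymes exceeds 5 times its upper normal value. The following is a minimal R sketch; the upper-limit-of-normal values are illustrative placeholders, since the actual limits vary by laboratory and, as noted elsewhere in this article, differed slightly across the participating institutions.

```r
# Flag a hepatic ADR as "severe" per the 5-fold criterion described above.
# The upper-limit-of-normal (uln) values are illustrative assumptions,
# not the limits used by the participating hospitals.
is_severe_hepatic_adr <- function(values,
                                  uln = c(GOT = 35, GPT = 45, gammaGT = 55, AP = 120),
                                  fold = 5) {
  any(values / uln[names(values)] > fold)
}

# Example with the enzyme values from the acute liver failure case reported above:
is_severe_hepatic_adr(c(GOT = 7363, GPT = 8827))  # TRUE
```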
Statistical Analysis

Incidence rates of hepatotoxicity were calculated as the percentage of inpatients receiving a specific AD or AD subclass and are presented together with their 95% CIs. Given the low number of cases and the large number of inpatients involved, the CI was calculated with the exact method rather than one of the approximate methods (Vollset, 1993). The statistical program R was used to generate the figures (R Core Team, 2014). Chi-square tests were calculated using SPSS Version 22.0. Significance was set at P<.05.
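For illustration, the exact (Clopper-Pearson) interval described above can be reproduced in base R with binom.test, shown here for the overall incidence of 149 DILI cases among the 184234 AD-treated inpatients reported in this study.

```r
# Exact (Clopper-Pearson) 95% CI for the overall DILI incidence,
# following the exact method cited above (Vollset, 1993).
ci <- binom.test(x = 149, n = 184234)$conf.int
round(100 * c(incidence = 149 / 184234, lower = ci[1], upper = ci[2]), 3)
# about 0.081%, with an exact 95% CI of roughly 0.068% to 0.095%
```

The same computation, applied per substance or per AD subclass, underlies the incidence figures reported above, such as the 0.36% for mianserine.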
The majority of all monitored inpatients treated with antidepressants (56.5%) were suffering from depression. A total 75.9% were aged <65 years. Inpatients under surveillance were predominantly female (63.1%). A total 75.2% of inpatients suffering from DILI were diagnosed with depression, followed by 9.4% with the diagnosis of schizophrenia (Table 1). Thus, DILI patients differed significantly in their diagnostic distribution from the total AD population. Age and sex distribution, on the other hand, did not differ in DILI patients from all monitored AD patients. Age, Sex, and International Classification of Diseases Version 10 (ICD-10) Diagnosis of Patients Monitored during the Period of 1993–2011 Suffering from DILI Due to ADs and the Total Population under Surveillance (149 cases of DILI) Abbreviations: DILI, drug-induced liver injury; AD, antidepressant; n, number. Drugs Involved in DILI In 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2. Incidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively) *Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine. **No case of milnacipran and tianeptin; multiple nominations possible. Drug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times). Drug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times). As for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases). As for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure. In 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). 
The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). The substances metioned in “other TCAs” (tricyclic antidepressants), “other Ads,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine, due to the particular interest in this drugs hepatotoxicity. The results of agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. Therefore, the observation period for agomelatine was significantly shorter than for all other drugs observed since 1993. In 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2. Incidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively) *Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine. **No case of milnacipran and tianeptin; multiple nominations possible. Drug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times). Drug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times). As for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases). As for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure. In 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). 
The substances metioned in “other TCAs” (tricyclic antidepressants), “other Ads,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine, due to the particular interest in this drugs hepatotoxicity. The results of agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. Therefore, the observation period for agomelatine was significantly shorter than for all other drugs observed since 1993. Dose-Dependent Aspects of Involved Drugs As presented in Table 2, there were differences in the median dosages between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage compared with all monitored inpatients at the time when DILI appeared. Also within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI. As presented in Table 2, there were differences in the median dosages between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage compared with all monitored inpatients at the time when DILI appeared. Also within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI. Combination Treatment and DILI The most prevalent drug class combination was the one of ADs and antipsychotic drugs (APs), in 31 cases within our study. First, olanzapine was implicated in DILI (6 cases), followed by clozapine (3 cases), and other APs held responsible for DILI in only 1 to 2 cases (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). Second, anticonvulsant drugs were combined with ADs (7 cases). Valproic acid was responsible for 3 DILI cases followed by carbamazepine, galantamine, pregabaline, and lamotrigine, implicated in only one case each. The most prevalent drug class combination was the one of ADs and antipsychotic drugs (APs), in 31 cases within our study. First, olanzapine was implicated in DILI (6 cases), followed by clozapine (3 cases), and other APs held responsible for DILI in only 1 to 2 cases (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). Second, anticonvulsant drugs were combined with ADs (7 cases). Valproic acid was responsible for 3 DILI cases followed by carbamazepine, galantamine, pregabaline, and lamotrigine, implicated in only one case each. 
Elevation of Liver Enzymes and Involved Drugs Maximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the time period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in terms of maximum GOT (also known as aspartate-aminotransferase or AST), GPT (also known as alanin-aminotranferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was determined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals. Duloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence on GOT values. Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, as well as venlafaxine increased γ-GT values most (Figure 3a-c). The duration of treatment when DILI occurred was different among the antidepressants; mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span with 43.8 days on average until DILI occurred. Bilirubin was elevated in 5 of 149 cases. (a) Gamma-Glutamyl-Transferase (Gamma-GT) mean maximum values of single substances (imputed alone for a minimum of 3 times except agomelatine due to its delayed implementation, which was imputed 2 times). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate-aminotransferase [AST]) mean maximum values of single substances (imputed alone for a minimum of 3 cases, except agomelatine due to its delayed implementation, which was imputed 2 times). (c) Glutamat-pyruvat-transaminase (GPT; also known as alanin-aminotransferase [ALT]) mean maximum values of single substances (imputed alone for a minimum of 3 cases except agomelatine due to its delayed implementation). Maximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the time period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in terms of maximum GOT (also known as aspartate-aminotransferase or AST), GPT (also known as alanin-aminotranferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was determined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals. Duloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence on GOT values. Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, as well as venlafaxine increased γ-GT values most (Figure 3a-c). 
The duration of treatment when DILI occurred was different among the antidepressants; mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span with 43.8 days on average until DILI occurred. Bilirubin was elevated in 5 of 149 cases. (a) Gamma-Glutamyl-Transferase (Gamma-GT) mean maximum values of single substances (imputed alone for a minimum of 3 times except agomelatine due to its delayed implementation, which was imputed 2 times). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate-aminotransferase [AST]) mean maximum values of single substances (imputed alone for a minimum of 3 cases, except agomelatine due to its delayed implementation, which was imputed 2 times). (c) Glutamat-pyruvat-transaminase (GPT; also known as alanin-aminotransferase [ALT]) mean maximum values of single substances (imputed alone for a minimum of 3 cases except agomelatine due to its delayed implementation). Elevation of Liver Enzymes in Preexisting Liver Damage and Clinical Symptoms For inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. Cases with preknown liver damage presented maximum γ-GT, GOT, and GPT mean values of 525, 402, and 564U/L, respectively. This indicates that preknown liver damage inpatients had more than doubled mean maximum values for γ-GT transaminases than subjects with normal liver status at the time when DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was the most common risk factor by far (59 cases), followed by substance abuse, mostly alcohol (20 cases). Furthermore, predisposition to adverse reactions occurred in 10 cases. The most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment remained and dosage was reduced, while in all other cases the drug was withdrawn after DILI was assessed. Within 55 cases, DILI disappeared totally, while in 85 cases DILI improved. Within 9 cases the course was unknown. For inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. Cases with preknown liver damage presented maximum γ-GT, GOT, and GPT mean values of 525, 402, and 564U/L, respectively. This indicates that preknown liver damage inpatients had more than doubled mean maximum values for γ-GT transaminases than subjects with normal liver status at the time when DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was the most common risk factor by far (59 cases), followed by substance abuse, mostly alcohol (20 cases). Furthermore, predisposition to adverse reactions occurred in 10 cases. The most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment remained and dosage was reduced, while in all other cases the drug was withdrawn after DILI was assessed. Within 55 cases, DILI disappeared totally, while in 85 cases DILI improved. Within 9 cases the course was unknown. 
Single Case of Acute Liver Failure In our study sample of 149 liver enzyme elevations, only one case of acute liver failure occurred in a 20-year-old woman with a predamaged liver resulting from an overdose of paracetamol. At the time of admission to the psychiatric ward, the transaminase values were normal. She had been on a medication of 150mg doxepine (for 3 days) and 10mg olanzapine (for 6 days). The patient’s liver enzymes increased rapidly, and clinical symptoms such as vomiting, nausea, and epigastric pain set in. In the following laboratory analysis, a hepato-toxicity was identified (bilirubin 3.8mg/dL, GPT 8827U/L, GOT 7363U/L, lactate dehydrogenase 4321U/L). As soon as acute liver failure was diagnosed, the patient was transferred to the intensive care ward where she was under the care of the transplantation consulting team. All medication was discontinued and the patient received electrolyte infusions. As her liver function recovered rapidly, a liver transplantation was no longer necessary. The hepatotoxic effects of doxepine and olanzapine have been discribed in previous literature, but to our knowledge such a severe case has not been presented so far. In our study sample of 149 liver enzyme elevations, only one case of acute liver failure occurred in a 20-year-old woman with a predamaged liver resulting from an overdose of paracetamol. At the time of admission to the psychiatric ward, the transaminase values were normal. She had been on a medication of 150mg doxepine (for 3 days) and 10mg olanzapine (for 6 days). The patient’s liver enzymes increased rapidly, and clinical symptoms such as vomiting, nausea, and epigastric pain set in. In the following laboratory analysis, a hepato-toxicity was identified (bilirubin 3.8mg/dL, GPT 8827U/L, GOT 7363U/L, lactate dehydrogenase 4321U/L). As soon as acute liver failure was diagnosed, the patient was transferred to the intensive care ward where she was under the care of the transplantation consulting team. All medication was discontinued and the patient received electrolyte infusions. As her liver function recovered rapidly, a liver transplantation was no longer necessary. The hepatotoxic effects of doxepine and olanzapine have been discribed in previous literature, but to our knowledge such a severe case has not been presented so far. Social Demographic and Illness-Related Data: From 1993 to 2011 the AMSP program monitored 390252 inpatients in 80 hospitals. A total of 184234 inpatients were treated with antidepressants. In 147 inpatients (and 149 cases, as 2 inpatients suffered from DILI twice) a severe hepatic ADR was observed (0.08%). Within 27 of 149 cases, clinical symptoms appeared (18.1%). In 104 inpatients, only ADs were imputed with the remaining inpatients suffering toxicity from an AD in combination with other psychotropic drugs. The majority of all monitored inpatients treated with antidepressants (56.5%) were suffering from depression. A total 75.9% were aged <65 years. Inpatients under surveillance were predominantly female (63.1%). A total 75.2% of inpatients suffering from DILI were diagnosed with depression, followed by 9.4% with the diagnosis of schizophrenia (Table 1). Thus, DILI patients differed significantly in their diagnostic distribution from the total AD population. Age and sex distribution, on the other hand, did not differ in DILI patients from all monitored AD patients. 
Age, Sex, and International Classification of Diseases Version 10 (ICD-10) Diagnosis of Patients Monitored during the Period of 1993–2011 Suffering from DILI Due to ADs and the Total Population under Surveillance (149 cases of DILI) Abbreviations: DILI, drug-induced liver injury; AD, antidepressant; n, number. Drugs Involved in DILI: In 147 inpatients (and 149 cases), 19 single substances were solely held responsible for DILI. In all other cases, combinations of several drugs were imputed. DILI frequencies for the different single substances as well as classes of ADs are given in Table 2 and Figures 1 and 2. Incidence of DILI and Median Dosages among Drug Classes (N=149 Cases of DILI and 184.234 Patients Monitored Overall, Respectively) *Other TCAs: amitryptilinoxid, desipramine, dibenzepin, imipramine. **No case of milnacipran and tianeptin; multiple nominations possible. Drug-induced liver injury (DILI) per antidepressant (AD) classes/subgroups in percent of exposed patients, only cases where AD subgroups were imputed alone for DILI, and only substance classes imputed 3 times or more (except agomelatine due to its delayed implementation, which was imputed 2 times). Drug-induced liver injury (DILI) per antidepressant (AD)/single substance in percent of exposed patients, only cases where single ADs were imputed alone, and just substance classes imputed 3 times or more were included (except agomelatine due to its delayed implementation, which was imputed 2 times). As for AD classes, the subgroup of tricyclic and tetracyclic ADs showed the most unfavorable profiles in terms of DILI, while the subgroup of serotonin reuptake inhibitors (SSRIs) had the lowest rates of DILI (all cases as well as SSRIs alone cases). As for single drugs, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI with 0.36%, 0.33%, and 0.23%, respectively. Escitalopram, citalopram, and fluoxetine performed best. Trazodone (the only serotonin antagonist and reuptake inhibitor), serotonin norepinephrine reuptake inhibitors (SNRIs), and noradrenergic and specific serotonergic antidepressant (NaSSA) obtained similar results in between. Mianserine was added to the tricyclic and tetracyclic ADs according to existing literature (Benkert et al., 2010), as its side effects profile is similar to the latter. Nevertheless, this can be argued, as some authors add it towards the NaSSA group due to its similar chemical structure. In 104 of 149 cases, ADs were imputed to be solely responsible for DILI, 96 cases were registered where only one AD was imputed, and 8 cases where 2 ADs or more were imputed in combination. The drugs listed as “other tricyclic antidepressants” (9 cases of DILI) were amitriptylinoxid (1 case), desipramine (1 case), dibenzepine (6 cases), and imipramine (1 case). The substances mentioned as “other ADs” were nefazodone (1 case) and bupropion (1 case). The group of monoaminooxidase (MAO) inhibitors consisted of tranylcypromine (3 cases) and moclobemide (no case). The substances metioned in “other TCAs” (tricyclic antidepressants), “other Ads,” and MAO inhibitors were prescribed <2000 times; hence, these single drugs were not included in the analyses of the present study. An exception was made for agomelatine, due to the particular interest in this drugs hepatotoxicity. The results of agomelatine, however, have to be interpreted with caution, as it was not introduced until April 2009. 
Therefore, the observation period for agomelatine was significantly shorter than for all other drugs observed since 1993. Dose-Dependent Aspects of Involved Drugs: As presented in Table 2, there were differences in the median dosages between the drugs deemed responsible for DILI and those for all monitored inpatients treated with ADs. Within the SSRI subgroup, escitalopram, citalopram, and sertraline were prescribed at double the dosage compared with all monitored inpatients at the time when DILI appeared. Also within the SNRI, noradrenalin reuptake inhibitor, and NaSSA subgroups, higher dosages compared with the median dosage for all patients monitored were observed when DILI occurred. Within the tricyclic and tetracyclic class, only maprotiline was prescribed at a lower dosage at the moment of DILI, while all the other substances of this subgroup were prescribed at higher dosages in cases of DILI. Combination Treatment and DILI: The most prevalent drug class combination was the one of ADs and antipsychotic drugs (APs), in 31 cases within our study. First, olanzapine was implicated in DILI (6 cases), followed by clozapine (3 cases), and other APs held responsible for DILI in only 1 to 2 cases (haloperidol, melperone, chlorprothixene, quetiapine, perazine, levomepromazine, promethazine, and risperidone). Second, anticonvulsant drugs were combined with ADs (7 cases). Valproic acid was responsible for 3 DILI cases followed by carbamazepine, galantamine, pregabaline, and lamotrigine, implicated in only one case each. Elevation of Liver Enzymes and Involved Drugs: Maximum gamma-GT and transaminase (glutamat-oxalat-transaminase [GOT] and glutamat-pyruvat-transaminase [GPT]) values per DILI case were evaluated for the time period from 2003 to 2011 (values for agomelatine from 2009 to 2011, as agomelatine was introduced in 2009). As there are small deviations in terms of maximum GOT (also known as aspartate-aminotransferase or AST), GPT (also known as alanin-aminotranferase or ALT), and alkaline phosphatase values across the participating institutions, a 5-fold increase in enzyme values was determined as DILI. From 2003 on, measurement of liver enzymes was done at all participating hospitals at a temperature of 37°C. Prior to this, measurement was done at 15 to 20°C, resulting in lower values for varying time periods at the different hospitals. Duloxetine, clomipramine, and paroxetine were mainly responsible for high GPT values, while mirtazapine affected GPT values least. In terms of GOT values, duloxetine and clomipramine performed worst, and again mirtazapine had the least influence on GOT values. Regarding γ-GT, duloxetine performed best, while trimipramine, clomipramine, as well as venlafaxine increased γ-GT values most (Figure 3a-c). The duration of treatment when DILI occurred was different among the antidepressants; mianserine was taken for 22 days on average, while mirtazapine was taken for 40 days. Trimipramine had the longest time span with 43.8 days on average until DILI occurred. Bilirubin was elevated in 5 of 149 cases. (a) Gamma-Glutamyl-Transferase (Gamma-GT) mean maximum values of single substances (imputed alone for a minimum of 3 times except agomelatine due to its delayed implementation, which was imputed 2 times). (b) Glutamat-oxalat-transaminase (GOT; also known as aspartate-aminotransferase [AST]) mean maximum values of single substances (imputed alone for a minimum of 3 cases, except agomelatine due to its delayed implementation, which was imputed 2 times). 
(c) Glutamat-pyruvat-transaminase (GPT; also known as alanine aminotransferase [ALT]) mean maximum values of single substances (imputed alone in a minimum of 3 cases, except agomelatine, which was imputed 2 times, owing to its late introduction). Elevation of Liver Enzymes in Preexisting Liver Damage and Clinical Symptoms: For inpatients with no preexisting liver damage, the mean maximum values for γ-GT, GOT, and GPT were 240, 202, and 285U/L, respectively, when DILI was diagnosed. Cases with preexisting liver damage presented mean maximum γ-GT, GOT, and GPT values of 525, 402, and 564U/L, respectively. Thus, inpatients with preexisting liver damage had more than double the mean maximum values for γ-GT and the transaminases, compared with subjects with normal liver status, at the time DILI appeared. In our study sample, risk factors were documented in 57% (85 of 149 cases). Preexisting hepatic injury was by far the most common risk factor (59 cases), followed by substance abuse, mostly alcohol (20 cases). Furthermore, a predisposition to adverse reactions was present in 10 cases. The most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. A total of 27 inpatients showed clinical symptoms, while the majority did not show any. In 8 cases, the AD treatment was continued at a reduced dosage, while in all other cases the drug was withdrawn after DILI was assessed. In 55 cases, DILI resolved completely; in 85 cases, DILI improved; in 9 cases, the course was unknown. Single Case of Acute Liver Failure: In our study sample of 149 liver enzyme elevations, only one case of acute liver failure occurred, in a 20-year-old woman with a liver predamaged by an overdose of paracetamol. At the time of admission to the psychiatric ward, her transaminase values were normal. She had been on a medication of 150mg doxepin (for 3 days) and 10mg olanzapine (for 6 days). The patient’s liver enzymes increased rapidly, and clinical symptoms such as vomiting, nausea, and epigastric pain set in. In the subsequent laboratory analysis, hepatotoxicity was identified (bilirubin 3.8mg/dL, GPT 8827U/L, GOT 7363U/L, lactate dehydrogenase 4321U/L). As soon as acute liver failure was diagnosed, the patient was transferred to the intensive care ward, where she was under the care of the transplantation consulting team. All medication was discontinued and the patient received electrolyte infusions. As her liver function recovered rapidly, a liver transplantation was no longer necessary. The hepatotoxic effects of doxepin and olanzapine have been described in previous literature, but to our knowledge such a severe case has not been presented before. Discussion: To date, studies on the occurrence of liver enzyme elevations during psychotropic treatment have generally been based on case reports. A systematic drug surveillance program, however, increases the methodological accuracy significantly, and several such programs have shown links between ADRs and a range of psychotropic drugs (Grohmann et al., 2004, 2013; Gallego et al., 2012; Lettmaier et al., 2012). In our study, mianserine, agomelatine, and clomipramine showed the highest frequencies of DILI. This result regarding the TCAs is in accordance with previous results of the AMSP and Arzneimittel-Überwachungs-Programm in der Psychiatrie (German Drug Surveillance in Psychiatry) study groups. The AMSP group published a manuscript on severe ADRs of ADs in 2004 (Degner et al., 2004).
In that study, ADs were classified according to receptors and their diverse action profiles, and TCAs were linked to increased levels of liver enzymes. Classical TCAs have a significantly higher potential for inducing hepatic ADRs than newer ADs. Predominantly, these ADRs provoke cholestatic liver damage with prolonged cholestasis; hepatocellular necrosis may also occur (Zimmerman, 1999). In an intensive drug monitoring study by the Arzneimittel-Überwachungs-Programm working group, elevated liver values were observed in 13.8% of inpatients taking TCAs, but the majority of inpatients presented with only a minor increase in transaminases (eg, GPT and AP in one-third of the cases observed) (Grohmann et al., 1999, 2004; Degner et al., 2004). Most TCAs do not induce or inhibit CYP-450 isoenzymes. As substrates of these enzymes, however, they may be affected by interactions, a point that is of interest owing to their relatively narrow therapeutic index (Chou et al., 2000; Kalra, 2007). In our study population, up to 0.02% of inpatients receiving long-term therapy with fluoxetine showed elevated liver enzymes. While severe hepatotoxic reactions are rare, the literature reports some ADRs linked to fluoxetine and a few linked to paroxetine and sertraline (Grohmann et al., 1999, 2004; Charlier et al., 2000; Degner et al., 2004). Many newer ADs inhibit CYP-450 enzymes; for example, both fluoxetine and paroxetine are inhibitors of CYP2D6. In combination with TCAs, severe intoxications may occur, and in combinations involving 3 or more substances the likelihood of toxicity is even higher (Gillman et al., 2007). As seen in short-term studies, mirtazapine elevates liver enzymes up to 3 times the norm in 2% of patients, but in most cases patients do not develop significant liver damage, with some patients’ values even recovering in spite of continued medication (Hui et al., 2002; Biswas et al., 2003). Two cases have been documented, however, in which mirtazapine induced severe cholestatic hepatitis (Dodd et al., 2001; Hui et al., 2002). Within our study sample, mirtazapine did not perform worse than the SNRIs, especially in terms of GPT and GOT values, where it actually showed a favorable profile. In cases of DILI in our study, the most prevalent drug class combination was that of ADs and APs, with most cases concerning a combination of an AD with olanzapine or clozapine. Most classical APs are metabolized via CYP2D6. A total of 5% to 10% of patients are slow metabolizers and show both high plasma levels and a high risk of a hepatotoxic reaction (Kevin et al., 2007). There is little information available on the newer generation of APs regarding hepatotoxic side effects, but extreme hepatotoxicity seems to occur very rarely. Clozapine- and risperidone-induced liver damage has been reported, and even acute liver failure associated with clozapine has been documented (Macfarlane et al., 1997). Olanzapine seems to trigger a hypersensitivity reaction with involvement of the liver (Mansur et al., 2008). Clozapine causes a mild and mostly temporary increase in transaminases in 37% of patients (Grohmann et al., 1989; Macfarlane et al., 1997). Our results are to some extent consistent with preexisting findings, as summarized in a recent review of antidepressant-induced liver injury published in 2014, which also indicated a greater risk of hepatotoxicity for TCAs and agomelatine and the lowest potential for DILI with SSRIs (Voican et al., 2014). The latter review identified aminotransferase (GPT) surveillance as the most useful tool for detecting DILI.
In accordance with Voican et al. (2014), duloxetine and TCAs such as clomipramine had the least favorable influence on GPT values. Furthermore, antidepressant-induced liver injury is generally considered to be dose independent. Our findings contrast with this view: in our sample, the median dosage at the time DILI occurred was higher than the overall median dosage for 7 of 9 substances. Additionally, and in contrast to some existing findings, age was not significantly related to the occurrence of DILI. Nefazodone and MAO inhibitors were often described as major causes of DILI in previous studies; this could not be confirmed by the present surveillance program, as the single MAO inhibitors as well as nefazodone were only rarely prescribed and therefore could not be reliably compared with the other drugs. Conclusions: Our findings suggest that SSRIs are less likely than the other antidepressants examined in this study to precipitate DILI. Inpatients with preexisting liver damage are at greater risk: in our data, they had more than double the mean values for γ-GT and transaminases, compared with subjects with healthy liver status, at the time DILI appeared. Thus, special attention should be given to these inpatients when prescribing antidepressants with potential adverse effects on the liver. Given the large sample size of our naturalistic observational study, the present findings may contribute significantly to the existing literature and help to prevent antidepressant-induced adverse hepatic events. Limitations: The findings of the present study reflect data obtained from inpatients, who are likely to be more severely ill and to receive higher antidepressant dosages or more polypharmacy than outpatients. Second, the detection of DILI depended on increased liver enzyme values and hence on blood tests. Regular blood tests are taken at the time of admission to the hospital; however, there is no standardized regimen for laboratory testing after admission, which might influence the detection of DILI, especially in cases of asymptomatic drug-induced liver dysfunction. Small differences in the liver enzyme surveillance habits of the 80 hospitals participating in the AMSP program may further contribute to this problem. The AMSP program focuses only on severe ADRs (Grohmann et al., 2004), with at least a 5-fold increase of liver enzymes. This leads to a lower incidence rate of DILI compared with other studies, which use GPT values 3-fold and GOT values 2-fold above the normal value as indicative of DILI. Furthermore, reporting bias cannot be ruled out, owing to the nature of the surveillance program. To prevent discrepancies among reported cases, cases are discussed and examined in a systematic way at regional and international meetings within the AMSP group. In terms of the results for agomelatine, it has to be mentioned that there was an awareness of possible hepatic ADRs from the beginning of its surveillance. The so-called “dear doctor letters” (product safety information) might have influenced the detection of agomelatine-induced liver enzyme elevations, owing to this sensitization prior to the onset of DILI.
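The AMSP criterion applied throughout this study, an at least 5-fold increase of liver enzymes, lends itself to a simple screening rule. The sketch below is a minimal illustration of that rule, not AMSP software; the upper limits of normal and the helper name are assumptions introduced here for the example.

```python
# Hypothetical screening rule mirroring the AMSP threshold described above:
# flag a case when any liver enzyme reaches 5x the upper limit of normal (ULN).
# The ULN values are illustrative placeholders, not the hospitals' reference ranges.
ULN_U_PER_L = {"GPT": 50.0, "GOT": 50.0, "gamma_GT": 60.0}
FOLD_THRESHOLD = 5.0

def flag_possible_dili(enzymes_u_per_l: dict) -> bool:
    """Return True if any measured enzyme is at least 5x its assumed ULN."""
    return any(
        value >= FOLD_THRESHOLD * ULN_U_PER_L[name]
        for name, value in enzymes_u_per_l.items()
        if name in ULN_U_PER_L
    )

# Example: mean maximum values reported above for cases without preexisting damage
print(flag_possible_dili({"gamma_GT": 240.0, "GOT": 202.0, "GPT": 285.0}))  # True (via GPT)
```

Such a 5-fold rule is deliberately conservative; as discussed in the limitations, studies using 2- or 3-fold thresholds will report higher DILI incidence.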
Background: Drug-induced liver injury is a common cause of liver damage and the most frequent reason for withdrawal of a drug in the United States. The symptoms of drug-induced liver damage are extremely diverse, with some patients remaining asymptomatic. Methods: This observational study is based on data from Arzneimittelsicherheit in der Psychiatrie, a multicenter drug surveillance program in German-speaking countries (Austria, Germany, and Switzerland) recording severe drug reactions in psychiatric inpatients. Of 184234 psychiatric inpatients treated with antidepressants between 1993 and 2011 in 80 psychiatric hospitals, 149 cases of drug-induced liver injury (DILI; 0.08%) were reported. Results: The study revealed that incidence rates of DILI were highest during treatment with mianserine (0.36%), agomelatine (0.33%), and clomipramine (0.23%). The lowest probability of DILI occurred during treatment with selective serotonin reuptake inhibitors (0.03%), especially escitalopram (0.01%), citalopram (0.02%), and fluoxetine (0.02%). The most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. In contrast to previous findings, the dosage at the time point when DILI occurred was higher than the median overall dosage for 7 of 9 substances. Regarding liver enzymes, duloxetine and clomipramine were associated with increased glutamat-pyruvat-transaminase and glutamat-oxalat-transaminase values, while mirtazapine hardly increased enzyme values. By contrast, duloxetine performed best in terms of gamma-glutamyl-transferase values, and trimipramine, clomipramine, and venlafaxine performed worst. Conclusions: Our findings suggest that selective serotonin reuptake inhibitors are less likely than the other antidepressants examined in this study to precipitate drug-induced liver injury, especially in patients with preexisting liver dysfunction.
Introduction: The liver, the central organ of biotransformation, is particularly prone to oral medication-related toxicity because of the high concentrations of drugs and their metabolites in portal blood, rather than in the actual target area of the central nervous system. It is, however, difficult to attribute liver damage to a specific medication in clinical practice (Meyer, 2000). The susceptibility of an individual to drug-induced liver injury (DILI) depends on multiple genetic and epigenetic factors, as well as on age, gender, weight, and alcohol consumption, which influence the occurrence of hepatic adverse effects (Krähenbühl and Kaplowitz, 1996). Older patients seem more vulnerable, and women have a stronger tendency toward toxic liver reactions than men (Meyer, 2000); ethnic differences have also been reported (Evans, 1986). Genetic metabolic variability is the most significant susceptibility factor in drug-induced liver toxicity. Enzyme polymorphisms can cause a slowing or complete disruption of enzyme function, which in turn results in the inefficient processing of drugs (Shenfield and Gross, 1999). This may not always result in corresponding liver damage but does contribute to an increased toxicity of substances. The majority of drugs, and almost all psychotropic drugs, are metabolized by CYP450 enzymes. Due to genetically determined polymorphisms of CYP450 isoenzymes, individuals can be categorized as poor, intermediate, extensive, or superextensive metabolizers (Miners and Birkett, 1998; Shenfield and Gross, 1999; Wilkinson, 2004). If a poor metabolizer receives medication containing several substrates or inhibitors of the same isoenzyme, the risk of a toxic reaction increases owing to slower drug metabolism. As most psychotropic drugs are substrates of CYP2D6 (Ingelman-Sundberg, 2005), this cytochrome is especially significant in pharmacokinetic interactions. Approximately 5% to 10% of Caucasians have reduced or nonexistent CYP2D6 activity and are therefore at risk of toxicity when receiving psychotropic treatment (Transon et al., 1996; Griese et al., 1998; Ingelman-Sundberg, 2005; Bernarda et al., 2006). A further important consideration is whether patients with preexisting liver dysfunction have a higher risk of hepatotoxic reactions. Although little information from controlled studies exists, there are indications that patients with preexisting liver disorders generally do not display an increased risk of drug-induced hepatotoxicity. It is more likely that preexisting liver damage negatively affects the ability of the liver to regenerate in the case of a hepatotoxic reaction (Chang and Schiano, 2007). The clinical symptoms of DILI are extremely diverse, with some patients remaining asymptomatic. Possible symptoms are tiredness, lack of appetite, nausea, vomiting, fever, a feeling of pressure in the upper right region of the abdomen, joint and muscle pain, pruritus, rashes, and jaundice; the latter is the only symptom directly indicative of the liver’s involvement (Chang and Schiano, 2007). To diagnose asymptomatic toxic liver damage early, a minimum of laboratory testing is required. This involves the measurement of glutamat-oxalat-transaminase (GOT), glutamat-pyruvat-transaminase (GPT), and gamma-glutamyl-transferase (γ-GT) in serum; if these are found to be normal, there has been no disruption to liver function. GOT and GPT are also well known as the enzymes aspartate aminotransferase (AST) and alanine aminotransferase (ALT), respectively.
It is important to consider the possibility of DILI when prescribing psychotropic drugs and to record a detailed history of all medication taken by the patient, with particular attention paid to the length of use, the dose, and the time between the intake of medication and the appearance of symptoms. The latency period involved can vary between a few days and several months and, as liver damage may result from other causes such as viral, autoimmune, or alcohol-induced hepatitis and acute Wilson's disease (Morbus Wilson), the diagnosis of drug-induced toxic liver damage is often a diagnosis of exclusion (Norris et al., 2008). Recently, Chalasani et al. (2014) developed practice guidelines for diagnosing and managing DILI. The hepatic pattern of damage can be classified as predominantly hepatocellular, predominantly cholestatic, or a hepatocellular/cholestatic mixture; this classification is important, as these patterns are of varying severity. The drugs also cause drug-specific patterns of liver damage, revealing increased values of transaminases (GOT and GPT) and/or cholestasis parameters (γ-GT, alkaline phosphatase) (Zimmerman, 1999; Andrade et al., 2004). A slight increase in transaminase or γ-GT levels to twice the norm without a rise in bilirubin is often of no clinical significance and can simply disappear in spite of continued medication. This is a phenomenon often observed in antiepileptic or mood-stabilizing therapy (Yatham et al., 2002). These small functional changes must still be checked, and in the case of a further elevation in liver enzyme levels, medication must be discontinued (Voican et al., 2014). The prognosis of DILI is generally good, and less severe forms heal quickly and completely (Hayashi and Fontana, 2014). It is difficult to obtain figures regarding hepatotoxic drug reactions, as systematic epidemiological analyses are seldom done and observations are not conducted for a long enough period to have any true validity. Adverse effects are also not reliably reported or registered. Drug surveillance programs permit the early detection of adverse drug reactions (ADRs), which may minimize their consequences. The Arzneimittelsicherheit in der Psychiatrie (AMSP) study is one such program in the field of psychiatry, systematically evaluating severe ADRs of psychotropic medication in inpatients. The AMSP maintains a database of these ADRs registered in the participating psychiatric clinics in Austria, Germany, and Switzerland (for details on AMSP methods, see Grohmann et al., 2004, 2013; Konstantinidis et al., 2012). In the present study, we have used this database to analyze the elevation of liver enzymes, with a particular focus on sociodemographic data and the significance of clinical manifestations as well as transaminase levels measured during antidepressant (AD) monotherapy and combination therapies. Conclusions: Since 1993, educational and research grants have been given by the following pharmaceutical companies to the 3 local nonprofit associations of the AMSP: (1) Austrian companies: AESCA Pharma GmbH, AstraZeneca Österreich GmbH, Boehringer Ingelheim Austria, Bristol–Myers Squibb GmbH, CSC Pharmaceuticals GmbH, Eli Lilly GmbH, Germania Pharma GmbH, GlaxoSmithKline Pharma GmbH, Janssen-Cilag Pharma GmbH, Lundbeck GmbH, Novartis Pharma GmbH, Pfizer Med Inform, Servier Austria GmbH, and Wyeth Lederle Pharma GmbH; (2) German companies: Abbott GmbH & Co. KG, AstraZeneca GmbH, Aventis Pharma Deutschland GmbH GE-O/R/N, Bayer Vital GmbH & Co.
KG, Boehringer Mannheim GmbH, Bristol-Myers-Squibb, Ciba Geigy GmbH, Desitin Arzneimittel GmbH, Duphar Pharma GmbH & Co. KG, Eisai GmbH, esparma GmbH Arzneimittel, GlaxoSmithKline Pharma GmbH & Co. KG, Hoffmann-La Roche AG Medical Affairs, Janssen-Cilag GmbH, Janssen Research Foundation, Knoll Deutschland GmbH, Lilly Deutschland GmbH Niederlassung Bad Homburg, Lundbeck GmbH & Co. KG, Novartis Pharma GmbH, Nordmark Arzneimittel GmbH, Organon GmbH, Otsuka-Pharma Frankfurt, Pfizer GmbH, Pharmacia & Upjohn GmbH, Promonta Lundbeck Arzneimittel, Rhône-Poulenc Rorer, Sanofi-Synthelabo GmbH, Sanofi-Aventis Deutschland, Schering AG, SmithKlineBeecham Pharma GmbH, Solvay Arzneimittel GmbH, Synthelabo Arzneimittel GmbH, Dr Wilmar Schwabe GmbH & Co., Thiemann Arzneimittel GmbH, Troponwerke GmbH & Co. KG, Upjohn GmbH, Wander Pharma GmbH, and Wyeth-Pharma GmbH; and (3) Swiss companies: AHP (Schweiz) AG, AstraZeneca AG, Bristol–Myers Squibb AG, Desitin Pharma GmbH, Eli Lilly (Suisse) S.A., Essex Chemie AG, GlaxoSmithKline AG, Janssen-Cilag AG, Lundbeck (Suisse) AG, Mepha Schweiz AG/Teva, MSD Merck Sharp & Dohme AG, Organon AG, Pfizer AG, Pharmacia, Sandoz Pharmaceuticals AG, Sanofi-Aventis (Suisse) S.A., Sanofi-Synthélabo SA, Servier SA, SmithKlineBeecham AG, Solvay Pharma AG, Vifor SA, Wyeth AHP (Suisse) AG, and Wyeth Pharmaceuticals AG. Dr Papageorgiou received honoraria from RB Pharmaceuticals and Bristol–Myers Squibb. Dr Konstantinidis received honoraria from Affiris, AstraZeneca, Novartis, Pfizer, and Servier, served as a consultant for AstraZeneca, and was a speaker for AstraZeneca, Bristol–Myers Squibb, and Janssen. Dr Winkler has received speaker honoraria from Angelini, Bristol-Myers Squibb, Novartis, Pfizer, and Servier. Drs Grohmann and Toto are involved in the project management of AMSP. Dr Greil has been a member of an advisory board for Lundbeck and has received speaker’s fees from AstraZeneca, Lundbeck, and Lundbeck Institute. Dr Kasper received grant/research support from Bristol–Myers Squibb, Eli Lilly, GlaxoSmithKline, Lundbeck, Organon, Sepracor, and Servier; he has served as a consultant or on advisory boards for AstraZeneca, Bristol–Myers Squibb, Eli Lilly, GlaxoSmithKline, Janssen, Lundbeck, Merck Sharp and Dohme (MSD), Novartis, Organon, Pfizer, Schwabe, Sepracor, and Servier; and has served on speakers’ bureaus for Angelini, AstraZeneca, Bristol–Myers Squibb, Eli Lilly, Janssen, Lundbeck, Pfizer, Pierre Fabre, Schwabe, Sepracor, and Servier. Dr Winkler has received lecture fees from Bristol-Myers Squibb, CSC Pharmaceuticals, Novartis, Pfizer, and Servier.
Background: Drug-induced liver injury is a common cause of liver damage and the most frequent reason for withdrawal of a drug in the United States. The symptoms of drug-induced liver damage are extremely diverse, with some patients remaining asymptomatic. Methods: This observational study is based on data from Arzneimittelsicherheit in der Psychiatrie, a multicenter drug surveillance program in German-speaking countries (Austria, Germany, and Switzerland) recording severe drug reactions in psychiatric inpatients. Of 184234 psychiatric inpatients treated with antidepressants between 1993 and 2011 in 80 psychiatric hospitals, 149 cases of drug-induced liver injury (DILI; 0.08%) were reported. Results: The study revealed that incidence rates of DILI were highest during treatment with mianserine (0.36%), agomelatine (0.33%), and clomipramine (0.23%). The lowest probability of DILI occurred during treatment with selective serotonin reuptake inhibitors (0.03%), especially escitalopram (0.01%), citalopram (0.02%), and fluoxetine (0.02%). The most common clinical symptoms were nausea, fatigue, loss of appetite, and abdominal pain. In contrast to previous findings, the dosage at the time point when DILI occurred was higher than the median overall dosage for 7 of 9 substances. Regarding liver enzymes, duloxetine and clomipramine were associated with increased glutamat-pyruvat-transaminase and glutamat-oxalat-transaminase values, while mirtazapine hardly increased enzyme values. By contrast, duloxetine performed best in terms of gamma-glutamyl-transferase values, and trimipramine, clomipramine, and venlafaxine performed worst. Conclusions: Our findings suggest that selective serotonin reuptake inhibitors are less likely than the other antidepressants examined in this study to precipitate drug-induced liver injury, especially in patients with preexisting liver dysfunction.
10,114
349
[ 1152, 100, 4109, 255, 627, 128, 116, 431, 243, 221, 995, 699 ]
13
[ "dili", "cases", "liver", "values", "inpatients", "imputed", "drugs", "drug", "ads", "case" ]
[ "toxicity enzyme polymorphisms", "toxic liver damage", "drug induced liver", "hepatic adverse effects", "affects ability liver" ]
null
[CONTENT] Adverse drug reaction | antidepressants | drug surveillance | elevation of liver enzymes [SUMMARY]
[CONTENT] Adverse drug reaction | antidepressants | drug surveillance | elevation of liver enzymes [SUMMARY]
null
[CONTENT] Adverse drug reaction | antidepressants | drug surveillance | elevation of liver enzymes [SUMMARY]
[CONTENT] Adverse drug reaction | antidepressants | drug surveillance | elevation of liver enzymes [SUMMARY]
[CONTENT] Adverse drug reaction | antidepressants | drug surveillance | elevation of liver enzymes [SUMMARY]
[CONTENT] Adverse Drug Reaction Reporting Systems | Aged | Antidepressive Agents | Austria | Chemical and Drug Induced Liver Injury | Dose-Response Relationship, Drug | Drug Therapy, Combination | Female | Germany | Hospitals, Psychiatric | Humans | Incidence | Inpatients | Liver | Male | Mental Disorders | Middle Aged | Switzerland [SUMMARY]
[CONTENT] Adverse Drug Reaction Reporting Systems | Aged | Antidepressive Agents | Austria | Chemical and Drug Induced Liver Injury | Dose-Response Relationship, Drug | Drug Therapy, Combination | Female | Germany | Hospitals, Psychiatric | Humans | Incidence | Inpatients | Liver | Male | Mental Disorders | Middle Aged | Switzerland [SUMMARY]
null
[CONTENT] Adverse Drug Reaction Reporting Systems | Aged | Antidepressive Agents | Austria | Chemical and Drug Induced Liver Injury | Dose-Response Relationship, Drug | Drug Therapy, Combination | Female | Germany | Hospitals, Psychiatric | Humans | Incidence | Inpatients | Liver | Male | Mental Disorders | Middle Aged | Switzerland [SUMMARY]
[CONTENT] Adverse Drug Reaction Reporting Systems | Aged | Antidepressive Agents | Austria | Chemical and Drug Induced Liver Injury | Dose-Response Relationship, Drug | Drug Therapy, Combination | Female | Germany | Hospitals, Psychiatric | Humans | Incidence | Inpatients | Liver | Male | Mental Disorders | Middle Aged | Switzerland [SUMMARY]
[CONTENT] Adverse Drug Reaction Reporting Systems | Aged | Antidepressive Agents | Austria | Chemical and Drug Induced Liver Injury | Dose-Response Relationship, Drug | Drug Therapy, Combination | Female | Germany | Hospitals, Psychiatric | Humans | Incidence | Inpatients | Liver | Male | Mental Disorders | Middle Aged | Switzerland [SUMMARY]
[CONTENT] toxicity enzyme polymorphisms | toxic liver damage | drug induced liver | hepatic adverse effects | affects ability liver [SUMMARY]
[CONTENT] toxicity enzyme polymorphisms | toxic liver damage | drug induced liver | hepatic adverse effects | affects ability liver [SUMMARY]
null
[CONTENT] toxicity enzyme polymorphisms | toxic liver damage | drug induced liver | hepatic adverse effects | affects ability liver [SUMMARY]
[CONTENT] toxicity enzyme polymorphisms | toxic liver damage | drug induced liver | hepatic adverse effects | affects ability liver [SUMMARY]
[CONTENT] toxicity enzyme polymorphisms | toxic liver damage | drug induced liver | hepatic adverse effects | affects ability liver [SUMMARY]
[CONTENT] dili | cases | liver | values | inpatients | imputed | drugs | drug | ads | case [SUMMARY]
[CONTENT] dili | cases | liver | values | inpatients | imputed | drugs | drug | ads | case [SUMMARY]
null
[CONTENT] dili | cases | liver | values | inpatients | imputed | drugs | drug | ads | case [SUMMARY]
[CONTENT] dili | cases | liver | values | inpatients | imputed | drugs | drug | ads | case [SUMMARY]
[CONTENT] dili | cases | liver | values | inpatients | imputed | drugs | drug | ads | case [SUMMARY]
[CONTENT] liver | medication | damage | liver damage | drug | toxic | drugs | psychotropic | important | toxic liver [SUMMARY]
[CONTENT] drug | amsp | adr | calculated | probability | grade | program | data | number | participating [SUMMARY]
null
[CONTENT] liver | dili | detection | detection dili | admittance | findings | blood | amsp | surveillance | values [SUMMARY]
[CONTENT] dili | cases | liver | inpatients | values | imputed | ads | drug | drugs | ad [SUMMARY]
[CONTENT] dili | cases | liver | inpatients | values | imputed | ads | drug | drugs | ad [SUMMARY]
[CONTENT] the United States ||| [SUMMARY]
[CONTENT] Arzneimittelsicherheit | Psychiatrie | German | Austria | Germany | Switzerland ||| 184234 | between 1993 and 2011 | 80 | 149 | 0.08% [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] the United States ||| ||| Arzneimittelsicherheit | Psychiatrie | German | Austria | Germany | Switzerland ||| 184234 | between 1993 and 2011 | 80 | 149 | 0.08% ||| ||| 0.36% | 0.33% | 0.23% ||| 0.03% | 0.01% ||| 0.02% | 0.02% ||| ||| DILI | 7 | 9 ||| ||| ||| [SUMMARY]
[CONTENT] the United States ||| ||| Arzneimittelsicherheit | Psychiatrie | German | Austria | Germany | Switzerland ||| 184234 | between 1993 and 2011 | 80 | 149 | 0.08% ||| ||| 0.36% | 0.33% | 0.23% ||| 0.03% | 0.01% ||| 0.02% | 0.02% ||| ||| DILI | 7 | 9 ||| ||| ||| [SUMMARY]
A large outbreak of acute gastroenteritis caused by the human norovirus GII.17 strain at a university in Henan Province, China.
28143569
Human noroviruses are a major cause of viral gastroenteritis and are the main etiological agents of acute gastroenteritis outbreaks. An increasing number of outbreaks and sporadic cases of norovirus infection have been reported in China in recent years. In the past five years, a large acute gastroenteritis outbreak occurred at a university in Henan Province, China. We aimed to identify the source and transmission routes of the outbreak through epidemiological investigation and laboratory testing, in order to provide effective control measures.
BACKGROUND
Clinical cases were investigated and analysed using descriptive epidemiological methods according to factors such as time, department, and grade. Samples were collected from clinical cases, healthy persons, the environment, water, and food at the university, and were tested for potential bacterial and viral pathogens. The samples that tested positive for norovirus were selected for whole genome sequencing, and the sequences were then analysed.
METHODS
From 4 March to 3 April 2015, a total of 753 acute diarrhoea cases were reported at the university; the attack rate was 3.29%. The epidemic curve showed two peaks, with the main peak occurring between 10 and 20 March and accounting for 85.26% of reported cases. The rates of norovirus detection in samples from clinical cases, asymptomatic persons, and the environment were 32.72%, 17.39%, and 9.17%, respectively. The phylogenetic analysis showed that the norovirus belonged to genotype GII.17.
RESULTS
This is the largest and most severe outbreak caused by genotype GII.17 norovirus in recent years in China. The GII.17 viruses displayed high epidemic activity and have become a dominant strain in China since the winter of 2014, having replaced the previously dominant GII.4 Sydney 2012 strain.
CONCLUSIONS
[ "Acute Disease", "Adult", "China", "Disease Outbreaks", "Female", "Gastroenteritis", "Humans", "Male", "Middle Aged", "Norovirus", "Phylogeny", "Universities", "Young Adult" ]
5286658
Background
Human noroviruses are positive-sense, single-stranded ribonucleic acid (RNA) viruses belonging to the family Caliciviridae and are the most common cause of acute gastroenteritis outbreaks globally [1–3]. The disease burden of noroviruses is substantial and has a significant influence on public health [4, 5]. No vaccines or antiviral therapies are currently available for norovirus infections. Norovirus infections and outbreaks are usually more common in cooler or winter months. Noroviruses are readily transmitted through the fecal-oral route, through person-to-person contact, or through contaminated food or water, meaning that noroviruses spread quickly in enclosed places such as nursing homes, daycare centres, schools, and cruise ships; they are also a major cause of outbreaks in restaurants and catered-meal settings if contaminated food is served [6–8]. Noroviruses have an incubation period of 12–48 hours, and symptoms typically include nausea, vomiting, diarrhea, abdominal pain, and fever. Norovirus infections are generally self-limited, with mild to moderate symptoms, although severe morbidity and occasional mortality have been observed in immunocompromised patients and the elderly. Symptoms usually last for 1–3 days but can persist longer in young, old, and immunocompromised patients [9–12]. From 4 to 30 March 2015, 753 cases of acute gastroenteritis at a university in Nanyang, Henan Province were reported to the National Notifiable Diseases Surveillance System (NNDSS) in China. Preliminary investigation indicated that the incident was a large acute gastroenteritis outbreak caused by human norovirus and that transmission might have occurred from person to person and via the environment. We conducted an in-depth epidemiological investigation and laboratory testing in order to identify the source of the outbreak and provide guidance on effective control measures for future outbreaks.
null
null
Results
Descriptive epidemiology: From 4 to 30 March 2015, a total of 753 acute diarrhea cases at a university in Henan Province were reported to the NNDSS in China. The first case, whose main clinical symptoms included diarrhea, nausea, abdominal distension, abdominal pain, and fatigue, without fever or vomiting, occurred in the School of Economics and Management on the third day after the winter holiday. In the following weeks, a high number of cases with similar clinical symptoms were reported at various schools of the university. The 753 cases comprised 751 students and two teachers, and the attack rate was 3.29% (753/22 861). Among the cases, 426 were males and 325 were females, a male to female ratio of 1.31. The median age of the cases was 21 years (range: 19–50); the two teachers were 38 and 50 years old. The time distribution showed that a main peak of cases, accounting for 85.26% of the reported cases, occurred between 10 and 20 March (see Fig. 1).
Fig. 1: Epidemic curve showing reported cases of acute gastroenteritis by 24-h intervals at a university in Henan Province, China, 2015.
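Fig. 1 is a standard epidemic curve, that is, a count of new cases per 24-hour interval. As a minimal sketch of how such a curve can be produced from a line list of onset dates, the Python snippet below is illustrative only; the file name "onsets.csv" and its "onset_date" column are assumed names, not the study's actual data.

```python
# Illustrative sketch: epidemic curve (cases per 24-h interval) from a line
# list of onset dates, as in Fig. 1. File and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

line_list = pd.read_csv("onsets.csv", parse_dates=["onset_date"])
daily = line_list.set_index("onset_date").resample("D").size()  # 24-h bins

fig, ax = plt.subplots(figsize=(9, 3))
daily.plot(kind="bar", ax=ax, width=0.9)
ax.set_xlabel("Date of onset")
ax.set_ylabel("Reported cases")
ax.set_title("Acute gastroenteritis cases by 24-h interval")
plt.tight_layout()
plt.show()
```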
The 753 cases occurred across 16 departments of the university, and there was a statistically significant difference in attack rates among the departments (χ2 = 179.92, P < 0.001). Two cases were from the Education College, a relatively independent unit located on a different campus of the university. The attack rate in grades 1–2 (3.76%) was higher than in grades 3–4 (3.30%), but this difference was not statistically significant (χ2 = 3.118, P > 0.05) (see Table 1).
Table 1: Numbers of reported norovirus cases by department, gender, and grade:
Department                                     Students  Cases  Attack rate (%)  Male  Female  Grades 1-2  Grades 3-4
Electronics School                                1 246     51       4.09          31      20         41          10
International Education College                   1 418     61       4.30          32      29         31          30
Traditional Chinese Medicine College              1 609     32       1.99          18      14         10          22
Mechanics Institute                               1 615     34       2.11          21      13         26           8
Computer and Information Engineering College        896     31       3.46          17      14         26           5
Architecture School                                 849     43       5.06          23      20         29          14
Economics and Management College                  1 415     74       5.23          40      34         29          45
Software School                                   3 720    114       3.06          68      46         72          42
Biological and Chemical Engineering College       1 092     39       3.57          23      16         20          19
Mathematics and Physics College                     419     14       3.34           7       7          8           6
Civil Engineering College                         1 381     45       3.26          27      18         39           6
Foreign Languages School                            606     30       4.95          13      17         22           8
Humanities and Law College                        1 342     74       5.51          41      33         26          48
Art Institute                                     1 710     76       4.44          42      34         32          44
Music College                                       599     31       5.18          22       9         12          19
Education College                                 1 363      2       0.15           1       1          1           1
Total                                            21 280    751       3.53         426     325        404         347
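The department-level comparison above is a chi-square test of independence on a cases versus non-cases contingency table. The sketch below recomputes it from the Table 1 counts with SciPy; this is an illustration rather than the study's SAS code, and the statistic may differ marginally from the published χ2 = 179.92 depending on exactly which totals were used.

```python
# Recompute the department-wise comparison of attack rates from Table 1 as a
# chi-square test of independence (cases vs. non-cases per department).
from scipy.stats import chi2_contingency

# (students, cases) per department, as listed in Table 1
departments = {
    "Electronics School": (1246, 51),
    "International Education College": (1418, 61),
    "Traditional Chinese Medicine College": (1609, 32),
    "Mechanics Institute": (1615, 34),
    "Computer and Information Engineering College": (896, 31),
    "Architecture School": (849, 43),
    "Economics and Management College": (1415, 74),
    "Software School": (3720, 114),
    "Biological and Chemical Engineering College": (1092, 39),
    "Mathematics and Physics College": (419, 14),
    "Civil Engineering College": (1381, 45),
    "Foreign Languages School": (606, 30),
    "Humanities and Law College": (1342, 74),
    "Art Institute": (1710, 76),
    "Music College": (599, 31),
    "Education College": (1363, 2),
}

table = [[cases, students - cases] for students, cases in departments.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3g}")
```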
Field epidemiologic investigation and preventive controls: On 13 March 2015, both the local CDC and the provincial CDC were notified about an acute gastroenteritis outbreak at a university in Henan Province, and infection control experts initiated a field epidemiologic investigation. The university is located in southwest Henan in a district with both urban and rural areas, in a subtropical to warm-temperate transitional zone with a continental monsoon humid climate and four distinct seasons. It is a university with 16 departments, including science, engineering, medicine, education, management science, law, economics, and the arts, and it has two separate campuses, the Central Campus and the Education College. At the time of the outbreak, there were 1 581 staff members and 21 280 students, comprising 19 917 students at the Central Campus and 1 363 students at the Education College. Teachers generally do not have meals on campus, as they live in an apartment building within walking distance of the university. Most students have three meals a day in the university canteens. A retrospective investigation showed that the first acute gastroenteritis case occurred on 4 March 2015. On 9 March, 10 students with similar clinical symptoms saw doctors at the university-affiliated hospital. This information was brought to the attention of the hospital administrators, who then reported it to the local public health agency. On 10 March, samples of food and drinking water from the university dining hall were collected and tested for multiple enteric pathogens; none of the samples tested positive. A further epidemiological investigation identified clustered cases among roommates and classmates. On 15 March, comprehensive measures were taken to prevent and control the outbreak, including establishing a temporary diarrhea clinic in the school hospital; isolating and treating patients; disinfecting student dormitories, classrooms, and cafeterias with vitalight lamp radiation (Fushan Creator UV & IR Lighting Co., Ltd., Guangdong, China) or 5–6% sodium hypochlorite disinfectant (Dezhou City Sunkang Disinfection Products Co., Ltd., Shandong, China); offering health education; and encouraging enhanced personal hygiene and social distancing. No new cases occurred after 30 March 2015.
Clinical symptoms: Clinical symptoms were recorded for 471 of the 753 cases. The main symptoms were diarrhea (85.14%), vomiting (65.61%), nausea (69.64%), stomachache (59.45%), abdominal distension (53.29%), and fever (43.77%) (see Table 2). The disease remitted within 72 hours (median: 50 hours, range: 11–72 hours). No cases were hospitalized and there were no deaths.
Table 2: Clinical symptoms of 471 clinical cases:
Symptom/sign            No. of cases (n = 471)   Proportion (%)
Diarrhea                        401                  85.14
Vomiting                        309                  65.61
Nausea                          328                  69.64
Stomachache                     280                  59.45
Abdominal distension            251                  53.29
Fever                           206                  43.77
Pathogen detection: From 14 March to 1 April 2015, 110 clinical samples were collected from cases, including 77 stool samples, 24 vomit samples, and nine saliva samples. No bacterial pathogens causing the disease were detected in any sample by culture methods. Thirty-six (32.72%, 36/110) samples from cases tested positive for norovirus by real-time RT-PCR, comprising 27 stool (35.06%, 27/77), seven vomit (29.17%, 7/24), and two saliva (22.22%, 2/9) samples. Four (5.19%, 4/77) stool samples were positive for rotavirus by real-time RT-PCR. No other gastrointestinal viruses were detected.
Environmental health investigation: Four (17.39%) of the 23 students without symptoms tested positive for norovirus. All samples from cafeteria food handlers were negative for norovirus. Eleven (9.17%, 11/120) swab samples tested positive for norovirus, comprising eight toilet and three gargle cup surface samples. The food and drinking water samples were all negative for norovirus and rotavirus. Bacterial pathogens were not detected in the environmental health samples.
Molecular characterization of norovirus: All six strains were typed as GII.17 using the norovirus automated genotyping tool. The phylogenetic analysis performed to verify the genotypes of the six norovirus strains confirmed this result (see Fig. 2).
Fig. 2: Molecular characterization and phylogenetic analysis of noroviruses, Henan Province.
The complete genomes of the six norovirus-positive samples were determined, and the nucleotide sequences were submitted to GenBank (accession Nos. KT992785, KT992786, KT992787, KT992788, KT992789, KT992790).
The nucleotide identity among the six GII.17 strains ranged from 95.8% to 99.9%. To further determine the genetic characteristics of the norovirus strains, the complete VP1 gene sequences of the six Henan strains were compared with those of 21 other GII.17 strains selected from GenBank by phylogenetic analysis. On the basis of VP1 gene sequencing, the GII.17 strains could be divided into three clusters in the phylogenetic tree (I-III): GII.17 strains detected from 1978 to 2002 formed cluster I; GII.17 strains collected from 2005 to 2009 formed cluster II; and cluster III was composed of the six GII.17 strains from Henan together with strains isolated from Hong Kong, Guangdong, Beijing, Italy, Taiwan, Japan, and the USA after 2013 (except for KT589391.1 GII.17/HKG/2015, which belonged to cluster I) (see Fig. 3).
Fig. 3: Phylogenetic analyses of noroviruses of the GII.17 strain based on the gene sequence of VP1. The tree shows the comparison between the six norovirus strains studied and the GII.17 reference strains. Circles represent stool samples, squares represent swab samples, and triangles represent vomit samples.
The phylogenetic analysis suggested that the viruses from this study clustered with viral sequences obtained from other provinces in China circulating at a similar time, and that they co-evolved and co-circulated with those from surrounding provinces.
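The VP1 phylogeny above was built in MEGA 5 with a maximum likelihood method under the Kimura 2-parameter model (see Methods). As a lightweight stand-in using a different, simpler approach, the sketch below builds a neighbour-joining tree from an identity-based distance matrix with Biopython; "vp1_aligned.fasta" is a hypothetical, pre-aligned input file, and this is not a reproduction of the paper's analysis.

```python
# Illustrative sketch: neighbour-joining tree from a pre-aligned FASTA of VP1
# sequences using Biopython. A simpler stand-in for the MEGA 5 maximum
# likelihood analysis used in the study, not a reproduction of it.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("vp1_aligned.fasta", "fasta")  # assumed input file

# Pairwise distances as 1 minus the fraction of identical alignment columns
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

tree = DistanceTreeConstructor().nj(distance_matrix)
Phylo.draw_ascii(tree)  # quick text rendering of the tree topology
```

The 95.8%-99.9% nucleotide identities reported above correspond to pairwise distances of roughly 0.001-0.042 in such an identity-based matrix.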
Conclusion
This study identified the largest outbreak of acute gastroenteritis caused by human norovirus GII.17 in Henan Province, China. Environmental transmission contributed to the outbreak, as the students at the university under investigation study and live in close quarters in dormitories and classrooms. Environmental disinfection measures and hand washing should be promoted to prevent such infections and outbreaks, because no vaccines or antiviral therapies are currently available for norovirus infections.
[ "Multilingual abstracts", "Case definition", "Epidemiological investigation", "Specimen collection", "Screening for gastroenteritis pathogens", "Full genome sequencing of norovirus", "Phylogenetic analysis", "Statistical analysis", "Ethical clearance", "Descriptive epidemiology", "Field epidemiologic investigation and preventive controls", "Clinical symptoms", "Pathogen detection", "Environmental health investigation", "Molecular characterization of norovirus" ]
[ "Please see Additional file 1 for translations of the abstract into the five official working languages of the United Nations.", "The investigated subjects included any person at the university. A clinical case was defined by the onset of diarrhoea ( ≥3 times/day), vomiting ( ≥2 times/day), or diarrhoeawith vomiting (unlimited number of times/day) at the university during the period from 1 March to 3 April 2015. A laboratory-confirmed case was defined when the stool or vomit specimen of a clinical case tested positive for norovirus by real-time reverse transcription polymerase chain reaction (RT-PCR).", "Medical practitioners reported clinical cases to the Henan Center for Disease Control and Prevention (CDC) from 1 March to 3 April 2015. A questionnaire was used to collect information on demographics, clinical symptoms, date of disease onset, and date of recovery.", "Medical workers collected a total of 110 stool, vomitus, and/or saliva samples from clinical cases. Additionally, stool samples were collected from students and staff who did not exhibit symptoms of vomiting or diarrhea. Stool samples were collected from 53 people, comprising 23 students and 30 cafeteria food handlers. Additionally, 120 samples of food and water, and from environmental surfaces were collected, including 15 swabs from cafeteria tables, food carts, kitchen cabinets, kitchen rags, and drinking water taps; 41 swabs from doorknobs, classroom tables, toilets, and gargle cups; 50 food samples; and 14 drinking water samples. The school cafeteria provided the food and drinking water samples. All samples were transported frozen to the pathogen laboratory of the Henan CDC.", "All samples were cultured for bacterial pathogens including Shiga toxin-producing Escherichia coli, Salmonella, Shigella, Yersinia enterocolitica, Vibrio cholerae, V. parahaemolyticus, and Aeromonas hydrophila, following the technical procedures of diarrheal pathogenic spectrum surveillance formulated by the China CDC [13]. These samples were also tested for rotavirus, enteric adenovirus, norovirus, sapovirus, and astrovirus using commercially available real-time RT-PCR kits (Shanghai ZJ Bio-Tech Co., Ltd., Shanghai, China or Jiangsu Shuoshi Biological Technology Co., Ltd., Taizhou, China), as per the manufacturer’s protocols [14].", "Six samples that tested positive for norovirus (including four stool samples, one vomit sample, and one environmental sample) were randomly selected for whole genome sequencing. Total RNA was directly extracted from the samples using a QIAamp® Viral RNA Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer’s instructions. RNA was eluted in a final volume of 60 μL elution buffer and used immediately or stored at −80 °C. The whole genome sequences of norovirus were amplified by conventional RT-PCR using primers designed in this study (see Additional file 2: Table S1). The RT-PCR products were sent to Sangon Biotech Co., Ltd. (Shanghai, China) for DNA sequencing using an automated ABI 3730 DNA sequencer (Applied Biosystems, Foster City, CA, USA).", "The full norovirus genomes were compiled using the SeqMan program in the Lasergene software package (DNASTAR, Version 2.0, Madison, WI, USA). The percentage similarities of nucleotide identity or amino acid identity were calculated using the ClustalX software, [Version 2.0, European Bioinformatics Institute (EMBL-EBI), Cambridge, UK]. 
Molecular phylogenetic analysis was conducted using the maximum likelihood method based on the Kimura 2-parameter model with MEGA 5 software (available at: http://mega.software.informer.com/5.0/) [15]. The tree with the highest log-likelihood was shown. The percentage of trees in which the associated taxa clustered together was shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically as follows: when the number of common sites was < 100 or < 1/4 of the total number of sites, the maximum parsimony method was used; otherwise the Neighbor-Joining (NJ) method with maximum likelihood (ML) distance matrix was used. The tree was drawn to scale, with branch lengths measured in the number of substitutions per site. Complete norovirus genomes from GenBank were used as a reference, and phylogenetic trees were constructed to type and to understand the molecular epidemiology of the outbreak strain.", "All epidemiologic and laboratory data were entered into EpiData 3.1 software (The EpiData Association, http://www.epidata.dk/download.php, Denmark). All statistical analyses were performed using SAS® v9.13 (SAS Institute Inc., Cary, NC, USA). An association of P < 0.05 was considered statistically significant.", "This research was approved by the Institutional Review Board of the Henan CDC. All participants gave written informed consent for use of their samples for research purposes. Personal identifiable information was stored by the NNDSS, and not provided to any third party for any purpose according to the Law of the People’s Republic of China on the Prevention and Treatment of Infectious Diseases.", "From 4 to 30 March 2015, a total of 753 acute diarrhea cases were reported at a university in Henan Province to the NNDSS in China. The first case, whose main clinical symptoms included diarrhea, nausea, abdominal distension, abdominal pain, and fatigue, without fever or vomiting, occurred in the School of Economics and Management on the third day after the winter holiday. In the next few weeks, a high number of cases with similar clinical symptoms at various schools of the university were reported.\nThe 753 cases comprised 751 students and two teachers, and the attack rate was 3.29% (753/22 861). Among the cases, 426 were males and 325 were females, with a male–female sex ratio of 1.31. The median age of the cases was 21 years (range: 19–50); the two teachers were 38 years old and 50 years old. The time distribution showed that a main peak of cases occurred (85.26% of the reported cases) between 10 and 20 March (see Fig. 1).Fig. 1Epidemic curve showing reported cases of acute gastroenteritis by 24 h-intervals at a university in Henan Province, China, 2015\n\nEpidemic curve showing reported cases of acute gastroenteritis by 24 h-intervals at a university in Henan Province, China, 2015\nThe 753 cases occurred across 16 departments of the university. There was a statistically significant difference with respect to attack rates within the different departments (χ\n2 = 179.92, P < 0.001). Two cases were from the Institute of Education, which is a relatively independent unit located on a different campus of the university. The attack rate in grades 1–2 (3.76%) was higher than in grades 3–4 (3.30%), but this was not an important difference (χ\n2 = 3.118, P > 0.05) (see Table 1).Table 1Numbers of reported noroviruses cases by gender, residence, age group and yearDepartmentNo. of studentsNo. 
of casesAttack rate (%)Gender of casesGrade of casesMaleFemale1-23-4Electronics School1 246514.0931204110International Education College1 418614.3032293130Traditional Chinese Medicine College1 609321.9918141022Mechanics Institute1 615342.112113268Computer and Information Engineering College896313.461714265Architecture School849435.0623202914Economics and Management College1 415745.2340342945Software School3 7201143.0668467242Biological and Chemical Engineering College1 092393.5723162019Mathematics and Physics College419143.347786Civil Engineering College1 381453.262718396Foreign Languages School606304.951317228Humanities and Law College1 342745.5141332648Art Institute1 710764.4442343244Music College599315.182291219Education College1 36320.151111Total21 2807513.53426325404347\n\nNumbers of reported noroviruses cases by gender, residence, age group and year", "On 13 March 2015, both the local CDC and the provincial CDC were notified about an acute gastroenteritis outbreak at a university in Henan Province, and infection control experts initiated a field epidemiologic investigation.\nThe university is located in southwest Henan in a district with both urban and rural areas, a subtropical to warm-temperate transitional zone, a continental monsoon humid climate, and four distinct seasons. It is a university with 16 departments, including science, engineering, medicine, education, management science, law, economics, and the arts. The university has two separate campuses, the Central Campus and the Education College. At the time of the outbreak, there were 1 581 staff members and 21 280 students, comprising 19 917 students at the Central Campus and 1 363 students at the Education College. Teachers generally do not have meals on campus as they live in an apartment building that is within walking distance of the university. Most students have three meals a day in the university canteens.\nA retrospective investigation showed that the first acute gastroenteritis case occurred on 4 March 2015. On 9 March, 10 students with similar clinical symptoms went to see doctors at the university-affiliated hospital. This information was brought to the attention of the hospital administrators who then reported it to the local public health agency. On 10 March, samples of food and drinking water from the university dining hall were collected and tested for multiple enteric pathogens, however, none of the samples tested positive.\nA further epidemiological investigation identified clustered cases among roommates and classmates. On 15 March, comprehensive measures were taken to prevent and control this acute gastroenteritis outbreak, including establishing a temporary diarrhea clinic in the school hospital; isolating and treating patients; disinfecting student dormitories, classrooms, and cafeterias with vitalight lamp radiation (Fushan Creator UV & IR Lighting Co., Ltd., Guangdong, China) or 5–6% sodium hypochlorite disinfectant (Dezhou city Sunkang Disinfection Products Co., Ltd., Shandong, China); offering health education; and encouraging enhanced personal hygiene and social distancing. No new cases occurred after 30 March 2015.", "Clinical symptoms were recorded for 471 out of 753 cases. The main symptoms were diarrhea (85.14%), vomiting (65.61%), nausea (69.64%), stomachache (59.45%), abdominal distension (53.29%), and fever (43.77%) (see Table 2). The disease remitted within 72 hours (median: 50 hours, range: 11–72 hours). 
No cases were hospitalized and there were no deaths.\nTable 2 Clinical symptoms of 471 clinical cases\nSymptom/sign | No. of cases (n = 471) | Proportion (%)\nDiarrhea | 401 | 85.14\nVomiting | 309 | 65.61\nNausea | 328 | 69.64\nStomachache | 280 | 59.45\nAbdominal distension | 251 | 53.29\nFever | 206 | 43.77",
"From 14 March to 1 April 2015, 110 clinical samples from cases were collected, including 77 stool samples, 24 vomit samples, and nine saliva samples. No bacterial pathogens causing the disease were detected in any samples by culture methods. Thirty-six (32.72%, 36/110) samples from cases tested positive for norovirus using real-time RT-PCR, comprising 27 stool (35.06%, 27/77), seven vomit (29.17%, 7/24), and two saliva (22.22%, 2/9) samples. Four (5.19%, 4/77) stool samples were positive for rotavirus using real-time RT-PCR. No other gastrointestinal viruses were detected.",
"Four (17.39%) of the 23 students without symptoms tested positive for norovirus. All samples from cafeteria food handlers were negative for norovirus. Eleven (9.17%, 11/120) swab samples tested positive for norovirus, comprising eight toilet and three gargle cup surface samples. The food and drinking water samples were all negative for norovirus and rotavirus. Bacterial pathogens were not detected in the environmental health samples.",
"All six strains were typed as GII.17 using the norovirus automated genotyping tool. The phylogenetic analysis performed to verify the genotypes of the six norovirus strains was consistent with this result (see Fig. 2).\nFig. 2 Molecular characterization and phylogenetic analysis of noroviruses, Henan Province\nThe complete genomes of the six norovirus-positive samples were determined, and the nucleotide sequences were submitted to GenBank (accession Nos. KT992785–KT992790). The nucleotide identity among the six GII.17 strains ranged from 95.8% to 99.9%. To further determine the genetic characteristics of the norovirus strains, the complete VP1 gene sequences of the six Henan strains were compared with 21 other GII.17 strains selected from GenBank by phylogenetic analysis. On the basis of VP1 gene sequencing, the GII.17 strains could be divided into three clusters in the phylogenetic tree (I–III): GII.17 strains detected from 1978 to 2002 formed cluster I, GII.17 strains collected from 2005 to 2009 formed cluster II, and cluster III was composed of the six GII.17 strains from Henan together with strains isolated from Hong Kong, Guangdong, Beijing, Italy, Taiwan, Japan, and the USA after 2013 (except for KT589391.1 GII.17/HKG/2015, which belonged to cluster I) (see Fig. 3).\nFig. 3 Phylogenetic analyses of noroviruses of the GII.17 strain based on the VP1 gene sequence. The tree shows the comparison between the six norovirus strains studied and the GII.17 reference strains. Circles represent stool samples, squares represent swab samples, and triangles represent vomit samples\nThe phylogenetic analysis suggested that the viruses from this study clustered with viral sequences from other provinces in China circulating at a similar time, and co-evolved and co-circulated with those from surrounding provinces." ]
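As an illustration of the chi-square comparison of departmental attack rates reported above (χ² = 179.92, P < 0.001), the sketch below recomputes the statistic from the Table 1 counts in Python with scipy rather than the SAS software the authors used. This is an editorial sketch, not the authors' code; the dictionary layout and variable names are ours, and small numerical differences from the published value are possible if the published test was run on a slightly different table.

```python
# Editorial sketch (not the authors' SAS code): chi-square test of whether
# attack rates differ across the 16 university departments, using the
# (students, cases) counts transcribed from Table 1.
from scipy.stats import chi2_contingency

departments = {
    "Electronics School": (1246, 51),
    "International Education College": (1418, 61),
    "Traditional Chinese Medicine College": (1609, 32),
    "Mechanics Institute": (1615, 34),
    "Computer and Information Engineering College": (896, 31),
    "Architecture School": (849, 43),
    "Economics and Management College": (1415, 74),
    "Software School": (3720, 114),
    "Biological and Chemical Engineering College": (1092, 39),
    "Mathematics and Physics College": (419, 14),
    "Civil Engineering College": (1381, 45),
    "Foreign Languages School": (606, 30),
    "Humanities and Law College": (1342, 74),
    "Art Institute": (1710, 76),
    "Music College": (599, 31),
    "Education College": (1363, 2),
}

# One contingency-table row per department: [cases, non-cases].
table = [[cases, students - cases] for students, cases in departments.values()]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.3g}")

# Attack rate per department, as tabulated in Table 1.
for name, (students, cases) in departments.items():
    print(f"{name}: {100 * cases / students:.2f}%")
```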
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
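The epidemic curve in Fig. 1 plots reported cases by 24-h intervals. As a hedged illustration of how such a curve is built from a case line list, the sketch below bins onset dates by day and plots them with pandas and matplotlib; the onset dates shown are hypothetical placeholders, since the study's case-level line list is not published.

```python
# Illustrative sketch of an epidemic curve by 24-h intervals (cf. Fig. 1).
# The onset dates below are hypothetical, not study data.
import pandas as pd
import matplotlib.pyplot as plt

onsets = pd.to_datetime([
    "2015-03-04", "2015-03-09", "2015-03-10", "2015-03-10",
    "2015-03-12", "2015-03-14", "2015-03-16", "2015-03-20",
])

# Count cases per calendar day (24-h interval) and plot as a bar chart.
daily_counts = pd.Series(1, index=onsets).resample("D").sum()
plt.bar(daily_counts.index, daily_counts.values)
plt.xlabel("Date of onset, 2015 (24-h intervals)")
plt.ylabel("Number of reported cases")
plt.title("Epidemic curve of an acute gastroenteritis outbreak")
plt.tight_layout()
plt.savefig("epidemic_curve.png")
```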
[ "Multilingual abstracts", "Background", "Methods", "Case definition", "Epidemiological investigation", "Specimen collection", "Screening for gastroenteritis pathogens", "Full genome sequencing of norovirus", "Phylogenetic analysis", "Statistical analysis", "Ethical clearance", "Results", "Descriptive epidemiology", "Field epidemiologic investigation and preventive controls", "Clinical symptoms", "Pathogen detection", "Environmental health investigation", "Molecular characterization of norovirus", "Discussion", "Conclusion" ]
[ "Please see Additional file 1 for translations of the abstract into the five official working languages of the United Nations.", "Human noroviruses are positive-sense single stranded ribonucleic acid (RNA) viruses belonging to the family Caliciviridae, and are the most common cause of acute gastroenteritis outbreaks globally [1–3]. The disease burden of noroviruses is substantial and has a significant influence on public health [4, 5]. No vaccines or antiviral therapies are currently available for norovirus infections. Norovirus infections and outbreaks are usually more common in cooler or winter months. Noroviruses are readily transmitted through the fecal-oral route, through person-to-person contact, or through contaminated food or water, meaning that noroviruses spread quickly in enclosed places such as nursing homes, daycare centres, schools, and cruise ships, and are also a major cause of outbreaks in restaurants and catered-meal settings if contaminated food is served [6–8]. Noroviruses have an incubation period of 12–48 hours and symptoms typically include nausea, vomiting, diarrhea, abdominal pain, and fever. Norovirus infections are generally self-limited with mild to moderate symptoms, although severe morbidity and occasional mortality have been observed in immunocompromised patients and the elderly. Symptoms usually last for 1–3 days but can persist longer in young, old, and immunocompromised patients [9–12].\nFrom 4 to 30 March 2015, 753 cases of acute gastroenteritis were reported to the National Notifiable Reportable Diseases Surveillance System (NNDSS) in China from a university in Nanyang, Henan Province. Preliminary investigation indicated that the incident was a large acute gastroenteritis outbreak caused by human norovirus, and the route of transmission might be person-to-person and environmental transmission. We conducted an in-depth epidemiological investigation and laboratory testing in order to identify the source of the outbreak and provide guidance on effective control measures for future outbreaks.", " Case definition The investigated subjects included any person at the university. A clinical case was defined by the onset of diarrhoea ( ≥3 times/day), vomiting ( ≥2 times/day), or diarrhoeawith vomiting (unlimited number of times/day) at the university during the period from 1 March to 3 April 2015. A laboratory-confirmed case was defined when the stool or vomit specimen of a clinical case tested positive for norovirus by real-time reverse transcription polymerase chain reaction (RT-PCR).\nThe investigated subjects included any person at the university. A clinical case was defined by the onset of diarrhoea ( ≥3 times/day), vomiting ( ≥2 times/day), or diarrhoeawith vomiting (unlimited number of times/day) at the university during the period from 1 March to 3 April 2015. A laboratory-confirmed case was defined when the stool or vomit specimen of a clinical case tested positive for norovirus by real-time reverse transcription polymerase chain reaction (RT-PCR).\n Epidemiological investigation Medical practitioners reported clinical cases to the Henan Center for Disease Control and Prevention (CDC) from 1 March to 3 April 2015. A questionnaire was used to collect information on demographics, clinical symptoms, date of disease onset, and date of recovery.\nMedical practitioners reported clinical cases to the Henan Center for Disease Control and Prevention (CDC) from 1 March to 3 April 2015. 
A questionnaire was used to collect information on demographics, clinical symptoms, date of disease onset, and date of recovery.\n Specimen collection Medical workers collected a total of 110 stool, vomitus, and/or saliva samples from clinical cases. Additionally, stool samples were collected from students and staff who did not exhibit symptoms of vomiting or diarrhea. Stool samples were collected from 53 people, comprising 23 students and 30 cafeteria food handlers. Additionally, 120 samples of food and water, and from environmental surfaces were collected, including 15 swabs from cafeteria tables, food carts, kitchen cabinets, kitchen rags, and drinking water taps; 41 swabs from doorknobs, classroom tables, toilets, and gargle cups; 50 food samples; and 14 drinking water samples. The school cafeteria provided the food and drinking water samples. All samples were transported frozen to the pathogen laboratory of the Henan CDC.\nMedical workers collected a total of 110 stool, vomitus, and/or saliva samples from clinical cases. Additionally, stool samples were collected from students and staff who did not exhibit symptoms of vomiting or diarrhea. Stool samples were collected from 53 people, comprising 23 students and 30 cafeteria food handlers. Additionally, 120 samples of food and water, and from environmental surfaces were collected, including 15 swabs from cafeteria tables, food carts, kitchen cabinets, kitchen rags, and drinking water taps; 41 swabs from doorknobs, classroom tables, toilets, and gargle cups; 50 food samples; and 14 drinking water samples. The school cafeteria provided the food and drinking water samples. All samples were transported frozen to the pathogen laboratory of the Henan CDC.\n Screening for gastroenteritis pathogens All samples were cultured for bacterial pathogens including Shiga toxin-producing Escherichia coli, Salmonella, Shigella, Yersinia enterocolitica, Vibrio cholerae, V. parahaemolyticus, and Aeromonas hydrophila, following the technical procedures of diarrheal pathogenic spectrum surveillance formulated by the China CDC [13]. These samples were also tested for rotavirus, enteric adenovirus, norovirus, sapovirus, and astrovirus using commercially available real-time RT-PCR kits (Shanghai ZJ Bio-Tech Co., Ltd., Shanghai, China or Jiangsu Shuoshi Biological Technology Co., Ltd., Taizhou, China), as per the manufacturer’s protocols [14].\nAll samples were cultured for bacterial pathogens including Shiga toxin-producing Escherichia coli, Salmonella, Shigella, Yersinia enterocolitica, Vibrio cholerae, V. parahaemolyticus, and Aeromonas hydrophila, following the technical procedures of diarrheal pathogenic spectrum surveillance formulated by the China CDC [13]. These samples were also tested for rotavirus, enteric adenovirus, norovirus, sapovirus, and astrovirus using commercially available real-time RT-PCR kits (Shanghai ZJ Bio-Tech Co., Ltd., Shanghai, China or Jiangsu Shuoshi Biological Technology Co., Ltd., Taizhou, China), as per the manufacturer’s protocols [14].\n Full genome sequencing of norovirus Six samples that tested positive for norovirus (including four stool samples, one vomit sample, and one environmental sample) were randomly selected for whole genome sequencing. Total RNA was directly extracted from the samples using a QIAamp® Viral RNA Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer’s instructions. RNA was eluted in a final volume of 60 μL elution buffer and used immediately or stored at −80 °C. 
The whole genome sequences of norovirus were amplified by conventional RT-PCR using primers designed in this study (see Additional file 2: Table S1). The RT-PCR products were sent to Sangon Biotech Co., Ltd. (Shanghai, China) for DNA sequencing using an automated ABI 3730 DNA sequencer (Applied Biosystems, Foster City, CA, USA).\nSix samples that tested positive for norovirus (including four stool samples, one vomit sample, and one environmental sample) were randomly selected for whole genome sequencing. Total RNA was directly extracted from the samples using a QIAamp® Viral RNA Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer’s instructions. RNA was eluted in a final volume of 60 μL elution buffer and used immediately or stored at −80 °C. The whole genome sequences of norovirus were amplified by conventional RT-PCR using primers designed in this study (see Additional file 2: Table S1). The RT-PCR products were sent to Sangon Biotech Co., Ltd. (Shanghai, China) for DNA sequencing using an automated ABI 3730 DNA sequencer (Applied Biosystems, Foster City, CA, USA).\n Phylogenetic analysis The full norovirus genomes were compiled using the SeqMan program in the Lasergene software package (DNASTAR, Version 2.0, Madison, WI, USA). The percentage similarities of nucleotide identity or amino acid identity were calculated using the ClustalX software, [Version 2.0, European Bioinformatics Institute (EMBL-EBI), Cambridge, UK]. Molecular phylogenetic analysis was conducted using the maximum likelihood method based on the Kimura 2-parameter model with MEGA 5 software (available at: http://mega.software.informer.com/5.0/) [15]. The tree with the highest log-likelihood was shown. The percentage of trees in which the associated taxa clustered together was shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically as follows: when the number of common sites was < 100 or < 1/4 of the total number of sites, the maximum parsimony method was used; otherwise the Neighbor-Joining (NJ) method with maximum likelihood (ML) distance matrix was used. The tree was drawn to scale, with branch lengths measured in the number of substitutions per site. Complete norovirus genomes from GenBank were used as a reference, and phylogenetic trees were constructed to type and to understand the molecular epidemiology of the outbreak strain.\nThe full norovirus genomes were compiled using the SeqMan program in the Lasergene software package (DNASTAR, Version 2.0, Madison, WI, USA). The percentage similarities of nucleotide identity or amino acid identity were calculated using the ClustalX software, [Version 2.0, European Bioinformatics Institute (EMBL-EBI), Cambridge, UK]. Molecular phylogenetic analysis was conducted using the maximum likelihood method based on the Kimura 2-parameter model with MEGA 5 software (available at: http://mega.software.informer.com/5.0/) [15]. The tree with the highest log-likelihood was shown. The percentage of trees in which the associated taxa clustered together was shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically as follows: when the number of common sites was < 100 or < 1/4 of the total number of sites, the maximum parsimony method was used; otherwise the Neighbor-Joining (NJ) method with maximum likelihood (ML) distance matrix was used. The tree was drawn to scale, with branch lengths measured in the number of substitutions per site. 
Complete norovirus genomes from GenBank were used as a reference, and phylogenetic trees were constructed to type and to understand the molecular epidemiology of the outbreak strain.\n Statistical analysis All epidemiologic and laboratory data were entered into EpiData 3.1 software (The EpiData Association, http://www.epidata.dk/download.php, Denmark). All statistical analyses were performed using SAS® v9.13 (SAS Institute Inc., Cary, NC, USA). An association of P < 0.05 was considered statistically significant.\nAll epidemiologic and laboratory data were entered into EpiData 3.1 software (The EpiData Association, http://www.epidata.dk/download.php, Denmark). All statistical analyses were performed using SAS® v9.13 (SAS Institute Inc., Cary, NC, USA). An association of P < 0.05 was considered statistically significant.\n Ethical clearance This research was approved by the Institutional Review Board of the Henan CDC. All participants gave written informed consent for use of their samples for research purposes. Personal identifiable information was stored by the NNDSS, and not provided to any third party for any purpose according to the Law of the People’s Republic of China on the Prevention and Treatment of Infectious Diseases.\nThis research was approved by the Institutional Review Board of the Henan CDC. All participants gave written informed consent for use of their samples for research purposes. Personal identifiable information was stored by the NNDSS, and not provided to any third party for any purpose according to the Law of the People’s Republic of China on the Prevention and Treatment of Infectious Diseases.", "The investigated subjects included any person at the university. A clinical case was defined by the onset of diarrhoea ( ≥3 times/day), vomiting ( ≥2 times/day), or diarrhoeawith vomiting (unlimited number of times/day) at the university during the period from 1 March to 3 April 2015. A laboratory-confirmed case was defined when the stool or vomit specimen of a clinical case tested positive for norovirus by real-time reverse transcription polymerase chain reaction (RT-PCR).", "Medical practitioners reported clinical cases to the Henan Center for Disease Control and Prevention (CDC) from 1 March to 3 April 2015. A questionnaire was used to collect information on demographics, clinical symptoms, date of disease onset, and date of recovery.", "Medical workers collected a total of 110 stool, vomitus, and/or saliva samples from clinical cases. Additionally, stool samples were collected from students and staff who did not exhibit symptoms of vomiting or diarrhea. Stool samples were collected from 53 people, comprising 23 students and 30 cafeteria food handlers. Additionally, 120 samples of food and water, and from environmental surfaces were collected, including 15 swabs from cafeteria tables, food carts, kitchen cabinets, kitchen rags, and drinking water taps; 41 swabs from doorknobs, classroom tables, toilets, and gargle cups; 50 food samples; and 14 drinking water samples. The school cafeteria provided the food and drinking water samples. All samples were transported frozen to the pathogen laboratory of the Henan CDC.", "All samples were cultured for bacterial pathogens including Shiga toxin-producing Escherichia coli, Salmonella, Shigella, Yersinia enterocolitica, Vibrio cholerae, V. parahaemolyticus, and Aeromonas hydrophila, following the technical procedures of diarrheal pathogenic spectrum surveillance formulated by the China CDC [13]. 
These samples were also tested for rotavirus, enteric adenovirus, norovirus, sapovirus, and astrovirus using commercially available real-time RT-PCR kits (Shanghai ZJ Bio-Tech Co., Ltd., Shanghai, China or Jiangsu Shuoshi Biological Technology Co., Ltd., Taizhou, China), as per the manufacturers' protocols [14].",
"Six samples that tested positive for norovirus (four stool samples, one vomit sample, and one environmental sample) were randomly selected for whole genome sequencing. Total RNA was extracted directly from the samples using a QIAamp® Viral RNA Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. RNA was eluted in a final volume of 60 μL elution buffer and used immediately or stored at −80 °C. The whole genome sequences of norovirus were amplified by conventional RT-PCR using primers designed in this study (see Additional file 2: Table S1). The RT-PCR products were sent to Sangon Biotech Co., Ltd. (Shanghai, China) for DNA sequencing on an automated ABI 3730 DNA sequencer (Applied Biosystems, Foster City, CA, USA).",
"The full norovirus genomes were assembled using the SeqMan program in the Lasergene software package (DNASTAR, Version 2.0, Madison, WI, USA). Percentage nucleotide and amino acid identities were calculated using ClustalX software (Version 2.0, European Bioinformatics Institute (EMBL-EBI), Cambridge, UK). Molecular phylogenetic analysis was conducted using the maximum likelihood method based on the Kimura 2-parameter model in MEGA 5 software (available at: http://mega.software.informer.com/5.0/) [15]. The tree with the highest log-likelihood is shown, with the percentage of trees in which the associated taxa clustered together displayed next to the branches. Initial tree(s) for the heuristic search were obtained automatically as follows: when the number of common sites was < 100 or < 1/4 of the total number of sites, the maximum parsimony method was used; otherwise, the Neighbor-Joining (NJ) method with a maximum likelihood (ML) distance matrix was used. The tree was drawn to scale, with branch lengths measured in the number of substitutions per site. Complete norovirus genomes from GenBank were used as references, and phylogenetic trees were constructed to type the outbreak strain and to understand its molecular epidemiology.",
"All epidemiologic and laboratory data were entered into EpiData 3.1 software (The EpiData Association, http://www.epidata.dk/download.php, Denmark). All statistical analyses were performed using SAS® v9.1.3 (SAS Institute Inc., Cary, NC, USA). A P-value < 0.05 was considered statistically significant.",
"This research was approved by the Institutional Review Board of the Henan CDC. All participants gave written informed consent for use of their samples for research purposes. Personally identifiable information was stored by the NNDSS and not provided to any third party for any purpose, in accordance with the Law of the People's Republic of China on the Prevention and Treatment of Infectious Diseases.",
" Descriptive epidemiology From 4 to 30 March 2015, a total of 753 acute diarrhea cases at a university in Henan Province were reported to the NNDSS in China. The first case, whose main clinical symptoms included diarrhea, nausea, abdominal distension, abdominal pain, and fatigue, without fever or vomiting, occurred in the School of Economics and Management on the third day after the winter holiday. In the following weeks, a high number of cases with similar clinical symptoms were reported from various schools of the university.\nThe 753 cases comprised 751 students and two teachers, and the attack rate was 3.29% (753/22 861). Among the cases, 426 were males and 325 were females, a male-to-female ratio of 1.31. The median age of the cases was 21 years (range: 19–50); the two teachers were 38 and 50 years old. The time distribution showed that a main peak of cases (85.26% of the reported cases) occurred between 10 and 20 March (see Fig. 1).\nFig. 1 Epidemic curve showing reported cases of acute gastroenteritis by 24-h intervals at a university in Henan Province, China, 2015\nThe 753 cases occurred across 16 departments of the university. There was a statistically significant difference in attack rates among the departments (χ² = 179.92, P < 0.001). Two cases were from the Institute of Education, which is a relatively independent unit located on a different campus of the university. The attack rate in grades 1–2 (3.76%) was higher than in grades 3–4 (3.30%), but this difference was not statistically significant (χ² = 3.118, P > 0.05) (see Table 1).\nTable 1 Numbers of reported norovirus cases by department, with gender and grade of cases\nDepartment | No. of students | No. of cases | Attack rate (%) | Male | Female | Grades 1–2 | Grades 3–4\nElectronics School | 1 246 | 51 | 4.09 | 31 | 20 | 41 | 10\nInternational Education College | 1 418 | 61 | 4.30 | 32 | 29 | 31 | 30\nTraditional Chinese Medicine College | 1 609 | 32 | 1.99 | 18 | 14 | 10 | 22\nMechanics Institute | 1 615 | 34 | 2.11 | 21 | 13 | 26 | 8\nComputer and Information Engineering College | 896 | 31 | 3.46 | 17 | 14 | 26 | 5\nArchitecture School | 849 | 43 | 5.06 | 23 | 20 | 29 | 14\nEconomics and Management College | 1 415 | 74 | 5.23 | 40 | 34 | 29 | 45\nSoftware School | 3 720 | 114 | 3.06 | 68 | 46 | 72 | 42\nBiological and Chemical Engineering College | 1 092 | 39 | 3.57 | 23 | 16 | 20 | 19\nMathematics and Physics College | 419 | 14 | 3.34 | 7 | 7 | 8 | 6\nCivil Engineering College | 1 381 | 45 | 3.26 | 27 | 18 | 39 | 6\nForeign Languages School | 606 | 30 | 4.95 | 13 | 17 | 22 | 8\nHumanities and Law College | 1 342 | 74 | 5.51 | 41 | 33 | 26 | 48\nArt Institute | 1 710 | 76 | 4.44 | 42 | 34 | 32 | 44\nMusic College | 599 | 31 | 5.18 | 22 | 9 | 12 | 19\nEducation College | 1 363 | 2 | 0.15 | 1 | 1 | 1 | 1\nTotal | 21 280 | 751 | 3.53 | 426 | 325 | 404 | 347\n Field epidemiologic investigation and preventive controls On 13 March 2015, both the local CDC and the provincial CDC were notified about an acute gastroenteritis outbreak at a university in Henan Province, and infection control experts initiated a field epidemiologic investigation.\nThe university is located in southwest Henan in a district with both urban and rural areas, a subtropical to warm-temperate transitional zone, a continental monsoon humid climate, and four distinct seasons. It has 16 departments, covering science, engineering, medicine, education, management science, law, economics, and the arts, and two separate campuses, the Central Campus and the Education College. At the time of the outbreak, there were 1 581 staff members and 21 280 students, comprising 19 917 students at the Central Campus and 1 363 students at the Education College. Teachers generally do not have meals on campus as they live in an apartment building within walking distance of the university. Most students have three meals a day in the university canteens.\nA retrospective investigation showed that the first acute gastroenteritis case occurred on 4 March 2015. On 9 March, 10 students with similar clinical symptoms saw doctors at the university-affiliated hospital. This information was brought to the attention of the hospital administrators, who then reported it to the local public health agency. On 10 March, samples of food and drinking water from the university dining hall were collected and tested for multiple enteric pathogens; however, none of the samples tested positive.\nA further epidemiological investigation identified clustered cases among roommates and classmates. On 15 March, comprehensive measures were taken to prevent and control this acute gastroenteritis outbreak, including establishing a temporary diarrhea clinic in the school hospital; isolating and treating patients; disinfecting student dormitories, classrooms, and cafeterias with ultraviolet lamp irradiation (Fushan Creator UV & IR Lighting Co., Ltd., Guangdong, China) or 5–6% sodium hypochlorite disinfectant (Dezhou city Sunkang Disinfection Products Co., Ltd., Shandong, China); offering health education; and encouraging enhanced personal hygiene and social distancing. No new cases occurred after 30 March 2015.\n Clinical symptoms Clinical symptoms were recorded for 471 of the 753 cases. The main symptoms were diarrhea (85.14%), vomiting (65.61%), nausea (69.64%), stomachache (59.45%), abdominal distension (53.29%), and fever (43.77%) (see Table 2). The disease remitted within 72 hours (median: 50 hours, range: 11–72 hours). No cases were hospitalized and there were no deaths.\nTable 2 Clinical symptoms of 471 clinical cases\nSymptom/sign | No. of cases (n = 471) | Proportion (%)\nDiarrhea | 401 | 85.14\nVomiting | 309 | 65.61\nNausea | 328 | 69.64\nStomachache | 280 | 59.45\nAbdominal distension | 251 | 53.29\nFever | 206 | 43.77\n Pathogen detection From 14 March to 1 April 2015, 110 clinical samples from cases were collected, including 77 stool samples, 24 vomit samples, and nine saliva samples. No bacterial pathogens causing the disease were detected in any samples by culture methods. Thirty-six (32.72%, 36/110) samples from cases tested positive for norovirus using real-time RT-PCR, comprising 27 stool (35.06%, 27/77), seven vomit (29.17%, 7/24), and two saliva (22.22%, 2/9) samples. Four (5.19%, 4/77) stool samples were positive for rotavirus using real-time RT-PCR. No other gastrointestinal viruses were detected.\n Environmental health investigation Four (17.39%) of the 23 students without symptoms tested positive for norovirus. All samples from cafeteria food handlers were negative for norovirus. Eleven (9.17%, 11/120) swab samples tested positive for norovirus, comprising eight toilet and three gargle cup surface samples. The food and drinking water samples were all negative for norovirus and rotavirus. Bacterial pathogens were not detected in the environmental health samples.\n Molecular characterization of norovirus All six strains were typed as GII.17 using the norovirus automated genotyping tool. The phylogenetic analysis performed to verify the genotypes of the six norovirus strains was consistent with this result (see Fig. 2).\nFig. 2 Molecular characterization and phylogenetic analysis of noroviruses, Henan Province\nThe complete genomes of the six norovirus-positive samples were determined, and the nucleotide sequences were submitted to GenBank (accession Nos. KT992785–KT992790). The nucleotide identity among the six GII.17 strains ranged from 95.8% to 99.9%. To further determine the genetic characteristics of the norovirus strains, the complete VP1 gene sequences of the six Henan strains were compared with 21 other GII.17 strains selected from GenBank by phylogenetic analysis. On the basis of VP1 gene sequencing, the GII.17 strains could be divided into three clusters in the phylogenetic tree (I–III): GII.17 strains detected from 1978 to 2002 formed cluster I, GII.17 strains collected from 2005 to 2009 formed cluster II, and cluster III was composed of the six GII.17 strains from Henan together with strains isolated from Hong Kong, Guangdong, Beijing, Italy, Taiwan, Japan, and the USA after 2013 (except for KT589391.1 GII.17/HKG/2015, which belonged to cluster I) (see Fig. 3).\nFig. 3 Phylogenetic analyses of noroviruses of the GII.17 strain based on the VP1 gene sequence. The tree shows the comparison between the six norovirus strains studied and the GII.17 reference strains. Circles represent stool samples, squares represent swab samples, and triangles represent vomit samples\nThe phylogenetic analysis suggested that the viruses from this study clustered with viral sequences from other provinces in China circulating at a similar time, and co-evolved and co-circulated with those from surrounding provinces.",
"From 4 to 30 March 2015, a total of 753 acute diarrhea cases at a university in Henan Province were reported to the NNDSS in China. The first case, whose main clinical symptoms included diarrhea, nausea, abdominal distension, abdominal pain, and fatigue, without fever or vomiting, occurred in the School of Economics and Management on the third day after the winter holiday. In the following weeks, a high number of cases with similar clinical symptoms were reported from various schools of the university.\nThe 753 cases comprised 751 students and two teachers, and the attack rate was 3.29% (753/22 861). Among the cases, 426 were males and 325 were females, a male-to-female ratio of 1.31. The median age of the cases was 21 years (range: 19–50); the two teachers were 38 and 50 years old. The time distribution showed that a main peak of cases (85.26% of the reported cases) occurred between 10 and 20 March (see Fig. 1).\nFig. 1 Epidemic curve showing reported cases of acute gastroenteritis by 24-h intervals at a university in Henan Province, China, 2015\nThe 753 cases occurred across 16 departments of the university. There was a statistically significant difference in attack rates among the departments (χ² = 179.92, P < 0.001). Two cases were from the Institute of Education, which is a relatively independent unit located on a different campus of the university. The attack rate in grades 1–2 (3.76%) was higher than in grades 3–4 (3.30%), but this difference was not statistically significant (χ² = 3.118, P > 0.05) (see Table 1).\nTable 1 Numbers of reported norovirus cases by department, with gender and grade of cases\nDepartment | No. of students | No. of cases | Attack rate (%) | Male | Female | Grades 1–2 | Grades 3–4\nElectronics School | 1 246 | 51 | 4.09 | 31 | 20 | 41 | 10\nInternational Education College | 1 418 | 61 | 4.30 | 32 | 29 | 31 | 30\nTraditional Chinese Medicine College | 1 609 | 32 | 1.99 | 18 | 14 | 10 | 22\nMechanics Institute | 1 615 | 34 | 2.11 | 21 | 13 | 26 | 8\nComputer and Information Engineering College | 896 | 31 | 3.46 | 17 | 14 | 26 | 5\nArchitecture School | 849 | 43 | 5.06 | 23 | 20 | 29 | 14\nEconomics and Management College | 1 415 | 74 | 5.23 | 40 | 34 | 29 | 45\nSoftware School | 3 720 | 114 | 3.06 | 68 | 46 | 72 | 42\nBiological and Chemical Engineering College | 1 092 | 39 | 3.57 | 23 | 16 | 20 | 19\nMathematics and Physics College | 419 | 14 | 3.34 | 7 | 7 | 8 | 6\nCivil Engineering College | 1 381 | 45 | 3.26 | 27 | 18 | 39 | 6\nForeign Languages School | 606 | 30 | 4.95 | 13 | 17 | 22 | 8\nHumanities and Law College | 1 342 | 74 | 5.51 | 41 | 33 | 26 | 48\nArt Institute | 1 710 | 76 | 4.44 | 42 | 34 | 32 | 44\nMusic College | 599 | 31 | 5.18 | 22 | 9 | 12 | 19\nEducation College | 1 363 | 2 | 0.15 | 1 | 1 | 1 | 1\nTotal | 21 280 | 751 | 3.53 | 426 | 325 | 404 | 347",
"On 13 March 2015, both the local CDC and the provincial CDC were notified about an acute gastroenteritis outbreak at a university in Henan Province, and infection control experts initiated a field epidemiologic investigation.\nThe university is located in southwest Henan in a district with both urban and rural areas, a subtropical to warm-temperate transitional zone, a continental monsoon humid climate, and four distinct seasons. It has 16 departments, covering science, engineering, medicine, education, management science, law, economics, and the arts, and two separate campuses, the Central Campus and the Education College. At the time of the outbreak, there were 1 581 staff members and 21 280 students, comprising 19 917 students at the Central Campus and 1 363 students at the Education College. Teachers generally do not have meals on campus as they live in an apartment building within walking distance of the university. Most students have three meals a day in the university canteens.\nA retrospective investigation showed that the first acute gastroenteritis case occurred on 4 March 2015. On 9 March, 10 students with similar clinical symptoms saw doctors at the university-affiliated hospital. This information was brought to the attention of the hospital administrators, who then reported it to the local public health agency. On 10 March, samples of food and drinking water from the university dining hall were collected and tested for multiple enteric pathogens; however, none of the samples tested positive.\nA further epidemiological investigation identified clustered cases among roommates and classmates. On 15 March, comprehensive measures were taken to prevent and control this acute gastroenteritis outbreak, including establishing a temporary diarrhea clinic in the school hospital; isolating and treating patients; disinfecting student dormitories, classrooms, and cafeterias with ultraviolet lamp irradiation (Fushan Creator UV & IR Lighting Co., Ltd., Guangdong, China) or 5–6% sodium hypochlorite disinfectant (Dezhou city Sunkang Disinfection Products Co., Ltd., Shandong, China); offering health education; and encouraging enhanced personal hygiene and social distancing. No new cases occurred after 30 March 2015.",
"Clinical symptoms were recorded for 471 of the 753 cases. The main symptoms were diarrhea (85.14%), vomiting (65.61%), nausea (69.64%), stomachache (59.45%), abdominal distension (53.29%), and fever (43.77%) (see Table 2). The disease remitted within 72 hours (median: 50 hours, range: 11–72 hours).
No cases were hospitalized and there were no deaths.Table 2Clinical symptoms of 471 clinical casesSymptom/signNo. of cases (n = 471)Proportion (%)Diarrhea40185.14Vomiting30965.61Nausea32869.64Bellyache28059.45Abdominal distension25153.29Fever20643.77\n\nClinical symptoms of 471 clinical cases", "From 14 March to 1 April 2015, 110 clinical samples from cases were collected, including 77 stool samples, 24 vomit samples, and nine saliva samples. No bacterial pathogens causing the disease were detected in any samples by culture methods. Thirty-six (32.72%, 36/110) samples from cases tested positive for norovirus using real-time RT-PCR, which comprised 27 stool (35.06%, 27/77), seven vomit (29.17%, 7/24), and two saliva (22.22%, 2/9) samples. Four (5.19%, 4/77) stool samples were positive for rotavirus using real-time RT-PCR. No other gastrointestinal viruses were detected.", "Four (17.39%) of the 23 students without symptoms tested positive for norovirus. All samples from cafeteria food handlers were negative for norovirus. Eleven (9.17%, 11/120) swab samples tested positive for norovirus, which comprised eight toilet and three gargle cup surface samples. The food and drinking water samples were all negative for norovirus and rotavirus. Bacterial pathogens were not detected in the environmental health samples.", "All six strains were typed as GII.17 using the norovirus automated genotyping tool. The result of the phylogenetic analysis, which was performed to verified genotypes of six norovirus strains, coincided with the conclusion above (see Fig. 2).Fig. 2Molecular characterization and phylogenetic analysis of noroviruses, Henan Province\n\nMolecular characterization and phylogenetic analysis of noroviruses, Henan Province\nThe complete genomes of the six norovirus-positive samples were determined. Nucleotide sequences were submitted to GenBank (accession Nos. KT992785, KT992786, KT992787, KT992788, KT992789, KT992790). The nucleotide identity among the six GII.17 strains ranged from 95.8% to 99.9%. To further determine the genetic characteristics of the norovirus strains, complete VP1 gene sequences of six norovirus strains from Henan were compared with 21 other GII.17 strains selected from GenBank by phylogenetic analysis. On the basis of VP1 gene sequencing, the GII.17 strains could be divided into three clusters in the phylogenetic tree (I-III): GII.17 strains detected from 1978 to 2002 formed cluster I, GII.17 strains collected from 2005 to 2009 formed cluster II, and cluster III composed of six GII.17 strains from Henan and strains isolated from Hong Kong, Guangdong, Beijing, Italy, Taiwan, Japan, and the USA after 2013 (except for KT589391.1 GII.17/HKG/2015, which belonged to cluster I) (see Fig. 3).Fig. 3Phylogenetic analyses of noroviruses of the GII.17 strain based on the gene sequence of VP1. The tree shows the comparison between the six noroviruses strains studied and the GII.17 reference strains. The circle represents the stool samples, the square represents the swab samples, and the triangle represents the vomit samples\n\nPhylogenetic analyses of noroviruses of the GII.17 strain based on the gene sequence of VP1. The tree shows the comparison between the six noroviruses strains studied and the GII.17 reference strains. 
The circle represents the stool samples, the square represents the swab samples, and the triangle represents the vomit samples\nThe phylogenetic analysis suggested that viruses from this study clustered with viral sequences obtained from viruses from other provinces in China circulating at a similar time, and co-evolved and co-circulated with those from surrounding provinces.", "In this outbreak, a total of 753 acute diarrhea cases were reported at the university; the attack rate was 3.29%. The epidemic curve showed two peaks, with the main peak occurring between 10 March and 20 March, accounting for 85.26% of the reported cases. The statistical analysis identified a significant difference with respect to attack rates among the departments at the university and among grades, and there were obvious clusters among roommates and classmates. All samples collected from the cafeteria food handlers, and food and drinking water samples tested negative for norovirus.\nThe data suggested that the route of norovirus transmission was more likely to be person-to-person and/or environmental transmission than foodborne or waterborne transmission. Norovirus transmission occurs via the fecal-oral route, usually through ingestion of contaminated food or water, or by direct contact with an infected individual. Environmental transmission occurs when episodes of vomiting or diarrhea contaminate surfaces with infectious virus particles that may persist for weeks [16, 17]. The resilience and persistence of norovirus in the environment allows for its spread through a wide range of common and unexpected sources. It has been estimated that as few as 18 norovirus particles may be sufficient to cause infection in humans [18, 19]. Infected individuals shed a large amount of norovirus in both fecal material and vomitus, contributing to the high number of outbreaks observed annually in environments with close quarters such as cruise ships, restaurants, long-term care facilities, and schools [16, 17, 20]. Dormitories and classrooms are the main places where students study and live, which results in them being highly centralized and relatively confined environments. The swab samples from these environments tested positive for norovirus. After environmental disinfecting measures and hand washing were implemented, the number of cases sharply decreased. Thus, environmental transmission likely contributed to the outbreak [17, 21].\nThe epidemic curve showed two peaks of incidence. The first peak appeared on the seventh day after the first case occurred, continuing to 20 March. Cases with onset during these 11 days accounted for 85.29% of all reported cases. The incubation period for norovirus is normally 12–48 hours, meaning that this peak period very likely included second- or third-generation transmission. Laboratory testing identified norovirus contamination in environmental samples, not food or water. Epidemiological investigation showed that the first case occurred on the third day of school. According to the incubation period of noroviruses [22, 23], this case was likely infected prior to returning to university after the holidays. The university has students from all over China. From November 2014 through to January 2015, a total of more than 120 identified outbreaks were reported in China. We speculate that the infected student introduced norovirus into the university. 
The first case was a possible source of transmission, but asymptomatically infected students may also have been sources.\nNoroviruses can be divided into seven genogroups (GI to GVII) on the basis of sequence differences in the virus VP1 region. GI, GII, and GIV viruses can infect humans. The genogroups are further classified into genotypes, with at least nine genotypes belonging to GI, 22 belonging to GII, and two belonging to GIV. GII viruses are the most frequently detected (89%), whereas GI viruses cause approximately 11% of all outbreaks [24–27]. During the past decade, most reported norovirus outbreaks were caused by GII.4. New variants of GII.4 have emerged approximately every 2–4 years and have caused norovirus gastroenteritis pandemics globally [28, 29]. Since 1999, the major circulating genotype in mainland China has been GII.4, accounting for 64% of all genotypes detected [30]. GII.17 is another common GII genotype. The first GII.17 strain in the National Center for Biotechnology Information databank dates from 1978. Since then, GII.17 viruses have sporadically been detected in Africa, Asia, Europe, North America, and South America, and have been circulating in the human population for at least 37 years [31]. In Asia, more widespread circulation of GII.17 was first reported from environmental samples in Korea from 2004 to 2006 [32]. From 2012 to 2013, GII.17 viruses accounted for 76% of all detected norovirus strains in rivers in rural and urban areas of Kenya [33].\nA sharp increase in the number of norovirus cases caused by a novel GII.17 virus was observed in Japan during the 2014/15 winter season [34]. This novel GII.17 norovirus was first detected in acute gastroenteritis outbreaks in Guangdong Province, China, in November 2014 and thereafter spread rapidly across Asia. From November 2014 through January 2015, GII.17 norovirus outbreaks were reported in 10 cities of Guangdong Province and represented 83% (24/29) of all outbreaks in Guangdong [35]. During the same winter, there was also an increase in outbreak activity in Jiangsu Province, Zhejiang Province, and other provinces, which could be attributed to the emergence of this novel GII.17 strain [36–38].\nOur study is the first report of a genotype GII.17 norovirus causing an outbreak in Henan Province, China. Furthermore, our results indicate that GII.17 viruses have displayed high epidemic activity and have become a dominant strain in China since the winter of 2014, having replaced the previously dominant GII.4 Sydney 2012 strain.\nThis study had several limitations. First, the early stages of the outbreak did not raise alarm, as the symptoms of the norovirus cases were relatively mild and of short duration. Second, owing to a lack of experience with norovirus outbreaks in Henan Province, it took some time before the field epidemiological investigation determined that the outbreak was caused by norovirus. Third, there are no routine surveillance data on norovirus infections among infectious viral diarrhea cases in this province, so whether GII.17 will continue to circulate is unclear.", "This study identified the largest outbreak of acute gastroenteritis caused by human norovirus GII.17 in Henan Province, China. Environmental transmission contributed to the outbreak, as the students at the university under investigation study and live in close proximity in dormitories and classrooms.
Environmental disinfection measures and hand washing should be promoted to prevent such infections and outbreaks, because no vaccines or antiviral therapies are currently available for norovirus infections." ]
[ null, "introduction", "materials|methods", null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusion" ]
[ "Human norovirus", "Acute gastroenteritis outbreak", "Epidemiological investigation", "Phylogenetic analysis", "Henan Province", "China" ]
Multilingual abstracts: Please see Additional file 1 for translations of the abstract into the five official working languages of the United Nations. Background: Human noroviruses are positive-sense, single-stranded ribonucleic acid (RNA) viruses belonging to the family Caliciviridae, and are the most common cause of acute gastroenteritis outbreaks globally [1–3]. The disease burden of noroviruses is substantial and has a significant influence on public health [4, 5]. No vaccines or antiviral therapies are currently available for norovirus infections. Norovirus infections and outbreaks are usually more common in cooler or winter months. Noroviruses are readily transmitted through the fecal-oral route, through person-to-person contact, or through contaminated food or water, meaning that noroviruses spread quickly in enclosed places such as nursing homes, daycare centres, schools, and cruise ships; they are also a major cause of outbreaks in restaurants and catered-meal settings if contaminated food is served [6–8]. Noroviruses have an incubation period of 12–48 hours, and symptoms typically include nausea, vomiting, diarrhea, abdominal pain, and fever. Norovirus infections are generally self-limited with mild to moderate symptoms, although severe morbidity and occasional mortality have been observed in immunocompromised patients and the elderly. Symptoms usually last for 1–3 days but can persist longer in young, old, and immunocompromised patients [9–12]. From 4 to 30 March 2015, 753 cases of acute gastroenteritis were reported to the National Notifiable Reportable Diseases Surveillance System (NNDSS) in China from a university in Nanyang, Henan Province. Preliminary investigation indicated that the incident was a large acute gastroenteritis outbreak caused by human norovirus, and that the route of transmission might be person-to-person and environmental transmission. We conducted an in-depth epidemiological investigation and laboratory testing in order to identify the source of the outbreak and provide guidance on effective control measures for future outbreaks. Methods: Case definition: The investigated subjects included any person at the university. A clinical case was defined by the onset of diarrhoea (≥3 times/day), vomiting (≥2 times/day), or diarrhoea with vomiting (unlimited number of times/day) at the university during the period from 1 March to 3 April 2015. A laboratory-confirmed case was defined when the stool or vomit specimen of a clinical case tested positive for norovirus by real-time reverse transcription polymerase chain reaction (RT-PCR). Epidemiological investigation: Medical practitioners reported clinical cases to the Henan Center for Disease Control and Prevention (CDC) from 1 March to 3 April 2015. A questionnaire was used to collect information on demographics, clinical symptoms, date of disease onset, and date of recovery.
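The case definition above is, in effect, a small decision rule. As a minimal illustration of how it could be encoded for line-list screening (a sketch only; the thresholds mirror the definition, but the field names are hypothetical and not part of the original investigation):

```python
from datetime import date

# Study period from the case definition.
STUDY_START, STUDY_END = date(2015, 3, 1), date(2015, 4, 3)

def is_clinical_case(diarrhoea_per_day: int, vomit_per_day: int, onset: date) -> bool:
    """Clinical case: diarrhoea >= 3 times/day, vomiting >= 2 times/day,
    or diarrhoea with vomiting (any frequency), with onset in the study period."""
    in_period = STUDY_START <= onset <= STUDY_END
    symptomatic = (
        diarrhoea_per_day >= 3
        or vomit_per_day >= 2
        or (diarrhoea_per_day > 0 and vomit_per_day > 0)  # diarrhoea with vomiting
    )
    return in_period and symptomatic

def is_lab_confirmed(clinical_case: bool, rt_pcr_norovirus_positive: bool) -> bool:
    """Laboratory-confirmed case: a clinical case whose stool or vomit
    specimen tested norovirus-positive by real-time RT-PCR."""
    return clinical_case and rt_pcr_norovirus_positive

# Example: onset 12 March with 4 diarrhoea episodes/day meets the definition.
print(is_clinical_case(4, 0, date(2015, 3, 12)))  # True
```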
Specimen collection: Medical workers collected a total of 110 stool, vomitus, and/or saliva samples from clinical cases. Additionally, stool samples were collected from 53 people who did not exhibit symptoms of vomiting or diarrhea, comprising 23 students and 30 cafeteria food handlers. A further 120 food, water, and environmental-surface samples were collected, including 15 swabs from cafeteria tables, food carts, kitchen cabinets, kitchen rags, and drinking water taps; 41 swabs from doorknobs, classroom tables, toilets, and gargle cups; 50 food samples; and 14 drinking water samples. The school cafeteria provided the food and drinking water samples. All samples were transported frozen to the pathogen laboratory of the Henan CDC. Screening for gastroenteritis pathogens: All samples were cultured for bacterial pathogens, including Shiga toxin-producing Escherichia coli, Salmonella, Shigella, Yersinia enterocolitica, Vibrio cholerae, V. parahaemolyticus, and Aeromonas hydrophila, following the technical procedures for diarrheal pathogenic spectrum surveillance formulated by the China CDC [13]. The samples were also tested for rotavirus, enteric adenovirus, norovirus, sapovirus, and astrovirus using commercially available real-time RT-PCR kits (Shanghai ZJ Bio-Tech Co., Ltd., Shanghai, China, or Jiangsu Shuoshi Biological Technology Co., Ltd., Taizhou, China), as per the manufacturers' protocols [14]. Full genome sequencing of norovirus: Six norovirus-positive samples (four stool, one vomit, and one environmental sample) were randomly selected for whole genome sequencing. Total RNA was extracted directly from the samples using a QIAamp® Viral RNA Mini Kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. RNA was eluted in a final volume of 60 μL elution buffer and used immediately or stored at −80 °C.
The whole genome sequences of norovirus were amplified by conventional RT-PCR using primers designed in this study (see Additional file 2: Table S1). The RT-PCR products were sent to Sangon Biotech Co., Ltd. (Shanghai, China) for DNA sequencing on an automated ABI 3730 DNA sequencer (Applied Biosystems, Foster City, CA, USA). Phylogenetic analysis: The full norovirus genomes were assembled using the SeqMan program in the Lasergene software package (DNASTAR, Version 2.0, Madison, WI, USA). Percentage nucleotide and amino acid identities were calculated using ClustalX software [Version 2.0, European Bioinformatics Institute (EMBL-EBI), Cambridge, UK]. Molecular phylogenetic analysis was conducted using the maximum likelihood method based on the Kimura 2-parameter model in MEGA 5 software (available at: http://mega.software.informer.com/5.0/) [15]. The tree with the highest log-likelihood was shown, with the percentage of trees in which the associated taxa clustered together shown next to the branches. Initial tree(s) for the heuristic search were obtained automatically as follows: when the number of common sites was < 100 or < 1/4 of the total number of sites, the maximum parsimony method was used; otherwise, the Neighbor-Joining (NJ) method with a maximum likelihood (ML) distance matrix was used. The tree was drawn to scale, with branch lengths measured in the number of substitutions per site. Complete norovirus genomes from GenBank were used as references, and phylogenetic trees were constructed to genotype the outbreak strain and to understand its molecular epidemiology.
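For readers who want to reproduce the distance model outside MEGA, the Kimura 2-parameter distance that underlies the trees can be computed directly from a pair of aligned sequences. A minimal sketch of the standard K2P formula (not the authors' code; gap and ambiguous positions are simply skipped, and the logarithm is undefined for saturated divergences):

```python
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1: str, seq2: str) -> float:
    """Kimura 2-parameter distance between two aligned nucleotide sequences:
    d = -1/2 * ln((1 - 2P - Q) * sqrt(1 - 2Q)), where P and Q are the
    proportions of transition and transversion sites, respectively."""
    pairs = [
        (a, b) for a, b in zip(seq1.upper(), seq2.upper())
        if a in "ACGT" and b in "ACGT"  # skip gaps and ambiguous bases
    ]
    n = len(pairs)
    transitions = sum(
        1 for a, b in pairs
        if a != b and ({a, b} <= PURINES or {a, b} <= PYRIMIDINES)
    )
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    p, q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * p - q) * math.sqrt(1 - 2 * q))

# Toy example: one A<->G transition over 10 sites gives d ~ 0.1116.
print(round(k2p_distance("ACGTACGTAC", "ACGTACATAC"), 4))
```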
Statistical analysis: All epidemiologic and laboratory data were entered into EpiData 3.1 software (The EpiData Association, http://www.epidata.dk/download.php, Denmark). All statistical analyses were performed using SAS® v9.13 (SAS Institute Inc., Cary, NC, USA). P < 0.05 was considered statistically significant. Ethical clearance: This research was approved by the Institutional Review Board of the Henan CDC. All participants gave written informed consent for the use of their samples for research purposes. Personally identifiable information was stored by the NNDSS and was not provided to any third party for any purpose, in accordance with the Law of the People's Republic of China on the Prevention and Treatment of Infectious Diseases.
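As an illustration of the significance criterion, the grade comparison reported in the Results (404 vs 347 cases; attack rates 3.76% and 3.30%) can be re-checked with a 2 × 2 chi-square test in any statistics package. The enrolment denominators below are back-calculated from the reported attack rates, so they are assumptions, and the result only approximately matches the published χ² = 3.118:

```python
from scipy.stats import chi2_contingency

# Cases by grade are from Table 1; the denominators are assumed
# (back-calculated from the reported attack rates of 3.76% and 3.30%).
cases = [404, 347]
enrolled = [10745, 10515]
table = [[c, n - c] for c, n in zip(cases, enrolled)]  # cases vs non-cases

chi2, p, dof, _ = chi2_contingency(table)  # Yates' correction applied for 2x2
print(f"chi2 = {chi2:.3f}, P = {p:.3f}")   # P > 0.05: no significant difference
```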
Results: Descriptive epidemiology: From 4 to 30 March 2015, a total of 753 acute diarrhea cases were reported at a university in Henan Province to the NNDSS in China.
The first case, whose main clinical symptoms included diarrhea, nausea, abdominal distension, abdominal pain, and fatigue, without fever or vomiting, occurred in the School of Economics and Management on the third day after the winter holiday. In the next few weeks, a high number of cases with similar clinical symptoms were reported at various schools of the university. The 753 cases comprised 751 students and two teachers, and the attack rate was 3.29% (753/22 861). Among the cases, 426 were males and 325 were females, a male-female sex ratio of 1.31. The median age of the cases was 21 years (range: 19–50); the two teachers were 38 and 50 years old. The time distribution showed a main peak (85.26% of the reported cases) between 10 and 20 March (see Fig. 1: Epidemic curve of reported acute gastroenteritis cases in 24-h intervals at a university in Henan Province, China, 2015). The 753 cases occurred across 16 departments of the university, with a statistically significant difference in attack rates among departments (χ² = 179.92, P < 0.001). Two cases were from the Institute of Education, a relatively independent unit located on a different campus of the university. The attack rate in grades 1–2 (3.76%) was higher than in grades 3–4 (3.30%), but the difference was not significant (χ² = 3.118, P > 0.05) (see Table 1).
Table 1 Numbers of reported norovirus cases by department, gender, and grade
Department | Students | Cases | Attack rate (%) | Male | Female | Grades 1–2 | Grades 3–4
Electronics School | 1 246 | 51 | 4.09 | 31 | 20 | 41 | 10
International Education College | 1 418 | 61 | 4.30 | 32 | 29 | 31 | 30
Traditional Chinese Medicine College | 1 609 | 32 | 1.99 | 18 | 14 | 10 | 22
Mechanics Institute | 1 615 | 34 | 2.11 | 21 | 13 | 26 | 8
Computer and Information Engineering College | 896 | 31 | 3.46 | 17 | 14 | 26 | 5
Architecture School | 849 | 43 | 5.06 | 23 | 20 | 29 | 14
Economics and Management College | 1 415 | 74 | 5.23 | 40 | 34 | 29 | 45
Software School | 3 720 | 114 | 3.06 | 68 | 46 | 72 | 42
Biological and Chemical Engineering College | 1 092 | 39 | 3.57 | 23 | 16 | 20 | 19
Mathematics and Physics College | 419 | 14 | 3.34 | 7 | 7 | 8 | 6
Civil Engineering College | 1 381 | 45 | 3.26 | 27 | 18 | 39 | 6
Foreign Languages School | 606 | 30 | 4.95 | 13 | 17 | 22 | 8
Humanities and Law College | 1 342 | 74 | 5.51 | 41 | 33 | 26 | 48
Art Institute | 1 710 | 76 | 4.44 | 42 | 34 | 32 | 44
Music College | 599 | 31 | 5.18 | 22 | 9 | 12 | 19
Education College | 1 363 | 2 | 0.15 | 1 | 1 | 1 | 1
Total | 21 280 | 751 | 3.53 | 426 | 325 | 404 | 347
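The department comparison (published χ² = 179.92, P < 0.001) can likewise be recomputed from the case and enrolment counts in Table 1; a sketch, which should closely (though not necessarily exactly) reproduce the published statistic:

```python
from scipy.stats import chi2_contingency

# Cases and student enrolment per department, taken from Table 1 above.
cases = [51, 61, 32, 34, 31, 43, 74, 114, 39, 14, 45, 30, 74, 76, 31, 2]
students = [1246, 1418, 1609, 1615, 896, 849, 1415, 3720, 1092, 419,
            1381, 606, 1342, 1710, 599, 1363]

# Build a 16 x 2 contingency table of cases vs non-cases per department.
table = [[c, n - c] for c, n in zip(cases, students)]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3g}")  # expect P < 0.001
```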
Field epidemiologic investigation and preventive controls: On 13 March 2015, both the local CDC and the provincial CDC were notified about an acute gastroenteritis outbreak at a university in Henan Province, and infection control experts initiated a field epidemiologic investigation. The university is located in southwest Henan in a district with both urban and rural areas, a subtropical to warm-temperate transitional zone, a continental monsoon humid climate, and four distinct seasons. It is a university with 16 departments, including science, engineering, medicine, education, management science, law, economics, and the arts. The university has two separate campuses, the Central Campus and the Education College. At the time of the outbreak, there were 1 581 staff members and 21 280 students, comprising 19 917 students at the Central Campus and 1 363 students at the Education College. Teachers generally do not have meals on campus as they live in an apartment building that is within walking distance of the university. Most students have three meals a day in the university canteens. A retrospective investigation showed that the first acute gastroenteritis case occurred on 4 March 2015. On 9 March, 10 students with similar clinical symptoms went to see doctors at the university-affiliated hospital. This information was brought to the attention of the hospital administrators who then reported it to the local public health agency. On 10 March, samples of food and drinking water from the university dining hall were collected and tested for multiple enteric pathogens, however, none of the samples tested positive. A further epidemiological investigation identified clustered cases among roommates and classmates.
On 15 March, comprehensive measures were taken to prevent and control this acute gastroenteritis outbreak, including establishing a temporary diarrhea clinic in the school hospital; isolating and treating patients; disinfecting student dormitories, classrooms, and cafeterias with vitalight lamp radiation (Fushan Creator UV & IR Lighting Co., Ltd., Guangdong, China) or 5–6% sodium hypochlorite disinfectant (Dezhou city Sunkang Disinfection Products Co., Ltd., Shandong, China); offering health education; and encouraging enhanced personal hygiene and social distancing. No new cases occurred after 30 March 2015. Clinical symptoms: Clinical symptoms were recorded for 471 of the 753 cases. The main symptoms were diarrhea (85.14%), vomiting (65.61%), nausea (69.64%), stomachache (59.45%), abdominal distension (53.29%), and fever (43.77%) (see Table 2). The disease remitted within 72 hours (median: 50 hours, range: 11–72 hours). No cases were hospitalized and there were no deaths.
Table 2 Clinical symptoms of 471 clinical cases
Symptom/sign | No. of cases (n = 471) | Proportion (%)
Diarrhea | 401 | 85.14
Vomiting | 309 | 65.61
Nausea | 328 | 69.64
Stomachache | 280 | 59.45
Abdominal distension | 251 | 53.29
Fever | 206 | 43.77
Pathogen detection: From 14 March to 1 April 2015, 110 clinical samples were collected from cases, including 77 stool samples, 24 vomit samples, and nine saliva samples. No bacterial pathogens causing the disease were detected in any sample by culture. Thirty-six (32.72%, 36/110) samples from cases tested positive for norovirus using real-time RT-PCR, comprising 27 stool (35.06%, 27/77), seven vomit (29.17%, 7/24), and two saliva (22.22%, 2/9) samples. Four (5.19%, 4/77) stool samples were positive for rotavirus using real-time RT-PCR. No other gastrointestinal viruses were detected. Environmental health investigation: Four (17.39%) of the 23 students without symptoms tested positive for norovirus. All samples from cafeteria food handlers were negative for norovirus. Eleven (9.17%, 11/120) swab samples tested positive for norovirus, comprising eight toilet and three gargle cup surface samples. The food and drinking water samples were all negative for norovirus and rotavirus. Bacterial pathogens were not detected in the environmental health samples. Molecular characterization of norovirus: All six strains were typed as GII.17 using the norovirus automated genotyping tool. The phylogenetic analysis, performed to verify the genotypes of the six norovirus strains, was consistent with this result (see Fig. 2: Molecular characterization and phylogenetic analysis of noroviruses, Henan Province). The complete genomes of the six norovirus-positive samples were determined, and the nucleotide sequences were submitted to GenBank (accession Nos. KT992785, KT992786, KT992787, KT992788, KT992789, KT992790).
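The deposited genomes can be retrieved programmatically. A sketch using Biopython's Entrez utilities (requires network access; the e-mail address is a placeholder you must replace, per NCBI policy):

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a real address

# The six Henan GII.17 genomes deposited for this study.
accessions = ["KT992785", "KT992786", "KT992787",
              "KT992788", "KT992789", "KT992790"]

# Fetch all six records in one request and print basic statistics.
handle = Entrez.efetch(db="nucleotide", id=",".join(accessions),
                       rettype="gb", retmode="text")
for record in SeqIO.parse(handle, "genbank"):
    print(record.id, len(record.seq), "nt")
handle.close()
```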
The nucleotide identity among the six GII.17 strains ranged from 95.8% to 99.9%. To further determine the genetic characteristics of the norovirus strains, the complete VP1 gene sequences of the six Henan strains were compared with 21 other GII.17 strains selected from GenBank by phylogenetic analysis. On the basis of VP1 gene sequencing, the GII.17 strains could be divided into three clusters in the phylogenetic tree (I–III): strains detected from 1978 to 2002 formed cluster I, strains collected from 2005 to 2009 formed cluster II, and cluster III was composed of the six Henan strains together with strains isolated from Hong Kong, Guangdong, Beijing, Italy, Taiwan, Japan, and the USA after 2013 (except for KT589391.1 GII.17/HKG/2015, which belonged to cluster I) (see Fig. 3: Phylogenetic analysis of GII.17 noroviruses based on the VP1 gene sequence, comparing the six strains studied with the GII.17 reference strains; circles represent stool samples, squares swab samples, and triangles vomit samples). The phylogenetic analysis suggested that the viruses from this study clustered with viral sequences from other provinces in China circulating at a similar time, and co-evolved and co-circulated with those from surrounding provinces.
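The quoted identity range is a simple proportion of matching aligned positions. A minimal sketch of pairwise percent identity for pre-aligned sequences (not the authors' ClustalX computation; gap columns are excluded):

```python
def percent_identity(seq1: str, seq2: str) -> float:
    """Percent nucleotide identity over aligned, non-gap positions."""
    compared = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
                if a != "-" and b != "-"]
    matches = sum(a == b for a, b in compared)
    return 100.0 * matches / len(compared)

# Toy aligned pair: 7 matches over 8 compared positions -> 87.5%.
print(round(percent_identity("ACGT-ACGT", "ACGTTACGA"), 1))
```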
Discussion: In this outbreak, a total of 753 acute diarrhea cases were reported at the university; the attack rate was 3.29%. The epidemic curve showed two peaks, with the main peak occurring between 10 March and 20 March and accounting for 85.26% of the reported cases. The statistical analysis identified a significant difference in attack rates among the departments of the university, but not among grades, and there were obvious clusters among roommates and classmates. All samples collected from the cafeteria food handlers, and all food and drinking water samples, tested negative for norovirus. The data suggested that the route of norovirus transmission was more likely to be person-to-person and/or environmental transmission than foodborne or waterborne transmission. Norovirus transmission occurs via the fecal-oral route, usually through ingestion of contaminated food or water, or by direct contact with an infected individual. Environmental transmission occurs when episodes of vomiting or diarrhea contaminate surfaces with infectious virus particles that may persist for weeks [16, 17]. The resilience and persistence of norovirus in the environment allow for its spread through a wide range of common and unexpected sources. It has been estimated that as few as 18 norovirus particles may be sufficient to cause infection in humans [18, 19]. Infected individuals shed large amounts of norovirus in both fecal material and vomitus, contributing to the high number of outbreaks observed annually in close-quarters environments such as cruise ships, restaurants, long-term care facilities, and schools [16, 17, 20]. Dormitories and classrooms, the main places where students study and live, are highly centralized and relatively confined environments, and the swab samples from these environments tested positive for norovirus. After environmental disinfection measures and hand washing were implemented, the number of cases decreased sharply. Thus, environmental transmission likely contributed to the outbreak [17, 21]. The epidemic curve showed two peaks of incidence. The first peak appeared on the seventh day after the first case occurred and continued to 20 March; cases with onset during these 11 days accounted for 85.26% of all reported cases. The incubation period for norovirus is normally 12–48 hours, meaning that this peak period very likely included second- or third-generation transmission. Laboratory testing identified norovirus contamination in environmental samples, but not in food or water. The epidemiological investigation showed that the first case occurred on the third day of school; according to the incubation period of noroviruses [22, 23], this case was likely infected before returning to the university after the holidays. The university has students from all over China, and from November 2014 through January 2015, more than 120 identified outbreaks were reported in China. We speculate that an infected student introduced norovirus into the university. The first case was a possible source of transmission, but asymptomatically infected students may also have been sources.
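As a back-of-envelope check on the claim of second- or third-generation transmission, one can treat the 12–48-hour incubation period as a rough proxy for the generation time (an assumption, since the true generation interval also depends on the duration of shedding):

```python
# Rough generation-count arithmetic for the first epidemic peak.
# Assumption: generation time ~ incubation period (12-48 h); in reality
# the norovirus serial interval is usually somewhat longer.
first_onset_to_peak_days = (6, 16)   # first case 4 March; main peak 10-20 March
generation_time_days = (0.5, 2.0)    # 12-48 hours

min_generations = first_onset_to_peak_days[0] / generation_time_days[1]
max_generations = first_onset_to_peak_days[1] / generation_time_days[0]
print(f"plausible transmission generations by the peak: "
      f"{min_generations:.0f} to {max_generations:.0f}")
# Even the lower bound (~3) is consistent with multi-generation spread.
```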
Noroviruses can be divided into seven genogroups (GI to GVII) on the basis of sequence differences in the virus VP1 region. GI, GII, and GIV viruses can infect humans. The genogroups are further classified into genotypes, with at least nine genotypes belonging to GI, 22 belonging to GII, and two belonging to GIV. GII viruses are the most frequently detected (89%), whereas GI viruses cause approximately 11% of all outbreaks [24–27]. During the past decade, most reported norovirus outbreaks were caused by GII.4. New variants of GII.4 have emerged approximately every 2–4 years and have caused norovirus gastroenteritis pandemics globally [28, 29]. Since 1999, the major circulating genotype in mainland China has been GII.4, accounting for 64% of all genotypes detected [30]. GII.17 is another common GII genotype. The first GII.17 strain in the National Center for Biotechnology Information databank dates from 1978. Since then, GII.17 viruses have sporadically been detected in Africa, Asia, Europe, North America, and South America, and have been circulating in the human population for at least 37 years [31]. In Asia, more widespread circulation of GII.17 was first reported from environmental samples in Korea from 2004 to 2006 [32]. From 2012 to 2013, GII.17 viruses accounted for 76% of all detected norovirus strains in rivers in rural and urban areas of Kenya [33]. A sharp increase in the number of norovirus cases caused by a novel GII.17 virus was observed in Japan during the 2014/15 winter season [34]. This novel GII.17 norovirus was first detected in acute gastroenteritis outbreaks in Guangdong Province, China, in November 2014 and thereafter spread rapidly across Asia. From November 2014 through January 2015, GII.17 norovirus outbreaks were reported in 10 cities of Guangdong Province and represented 83% (24/29) of all outbreaks in Guangdong [35]. During the same winter, there was also an increase in outbreak activity in Jiangsu Province, Zhejiang Province, and other provinces, which could be attributed to the emergence of this novel GII.17 strain [36–38]. Our study is the first report of a genotype GII.17 norovirus causing an outbreak in Henan Province, China. Furthermore, our results indicate that GII.17 viruses have displayed high epidemic activity and have become a dominant strain in China since the winter of 2014, having replaced the previously dominant GII.4 Sydney 2012 strain. This study had several limitations. First, the early stages of the outbreak did not raise alarm, as the symptoms of the norovirus cases were relatively mild and of short duration. Second, owing to a lack of experience with norovirus outbreaks in Henan Province, it took some time before the field epidemiological investigation determined that the outbreak was caused by norovirus. Third, there are no routine surveillance data on norovirus infections among infectious viral diarrhea cases in this province, so whether GII.17 will continue to circulate is unclear. Conclusion: This study identified the largest outbreak of acute gastroenteritis caused by human norovirus GII.17 in Henan Province, China. Environmental transmission contributed to the outbreak, as the students at the university under investigation study and live in close proximity in dormitories and classrooms.
Environmental disinfection measures and hand washing should be promoted to prevent such infections and outbreaks, because no vaccines or antiviral therapies are currently available for norovirus infections.
Background: Human noroviruses are a major cause of viral gastroenteritis and the main etiological agents of acute gastroenteritis outbreaks. An increasing number of norovirus outbreaks and sporadic cases have been reported in China in recent years. In March 2015, a large acute gastroenteritis outbreak occurred at a university in Henan Province, China. We sought to identify the source and transmission routes of the outbreak through epidemiological investigation and laboratory testing, in order to inform effective control measures. Methods: Clinical cases were investigated and analysed by descriptive epidemiological methods according to factors such as time, department, and grade. Samples were collected from clinical cases, healthy persons, the environment, water, and food at the university, and were tested for potential bacterial and viral pathogens. Samples that tested positive for norovirus were selected for whole genome sequencing, and the sequences were then analysed. Results: From 4 March to 3 April 2015, a total of 753 acute diarrhoea cases were reported at the university; the attack rate was 3.29%. The epidemic curve showed two peaks, with the main peak occurring between 10 and 20 March and accounting for 85.26% of reported cases. The rates of norovirus detection in samples from confirmed cases, people without symptoms, and environmental samples were 32.72%, 17.39%, and 9.17%, respectively. Phylogenetic analysis showed that the norovirus belonged to genotype GII.17. Conclusions: This is the largest and most severe outbreak caused by genotype GII.17 norovirus in recent years in China. GII.17 viruses have displayed high epidemic activity and have become a dominant strain in China since the winter of 2014, having replaced the previously dominant GII.4 Sydney 2012 strain.
Background: Human noroviruses are positive-sense, single-stranded ribonucleic acid (RNA) viruses belonging to the family Caliciviridae, and are the most common cause of acute gastroenteritis outbreaks globally [1–3]. The disease burden of noroviruses is substantial and has a significant influence on public health [4, 5]. No vaccines or antiviral therapies are currently available for norovirus infections. Norovirus infections and outbreaks are usually more common in cooler or winter months. Noroviruses are readily transmitted through the fecal-oral route, through person-to-person contact, or through contaminated food or water, meaning that noroviruses spread quickly in enclosed places such as nursing homes, daycare centres, schools, and cruise ships, and are also a major cause of outbreaks in restaurants and catered-meal settings if contaminated food is served [6–8]. Noroviruses have an incubation period of 12–48 hours, and symptoms typically include nausea, vomiting, diarrhea, abdominal pain, and fever. Norovirus infections are generally self-limited with mild to moderate symptoms, although severe morbidity and occasional mortality have been observed in immunocompromised patients and the elderly. Symptoms usually last for 1–3 days but can persist longer in young, old, and immunocompromised patients [9–12]. From 4 to 30 March 2015, 753 cases of acute gastroenteritis were reported to the National Notifiable Diseases Surveillance System (NNDSS) in China from a university in Nanyang, Henan Province. Preliminary investigation indicated that the incident was a large acute gastroenteritis outbreak caused by human norovirus, and that the routes of transmission might be person-to-person contact and environmental transmission. We conducted an in-depth epidemiological investigation and laboratory testing in order to identify the source of the outbreak and provide guidance on effective control measures for future outbreaks. Conclusion: This study identified the largest outbreak of acute gastroenteritis caused by human norovirus GII.17 in Henan Province, China. Environmental transmission contributed to the outbreak, as the students at the university under investigation study and live in close proximity in dormitories and classrooms. Environmental disinfection measures and hand washing should be promoted to prevent such infections and outbreaks, because no vaccines or antiviral therapies are currently available for norovirus infections.
Background: Human noroviruses are a major cause of viral gastroenteritis and are the main etiological agents of acute gastroenteritis outbreaks. An increasing number of outbreaks and sporadic cases of norovirus have been reported in China in recent years. In the past five years, a large acute gastroenteritis outbreak occurred at a university in Henan Province, China. We aimed to identify the source and transmission routes of the outbreak through epidemiological investigation and laboratory testing, in order to provide effective control measures. Methods: The clinical cases were investigated and analysed using descriptive epidemiological methods according to factors such as time, department, and grade. Samples were collected from clinical cases, healthy persons, the environment, water, and food at the university. These samples were tested for potential bacterial and viral pathogens. The samples that tested positive for norovirus were selected for whole genome sequencing, and the sequences were then analysed. Results: From 4 March to 3 April 2015, a total of 753 acute diarrhoea cases were reported at the university; the attack rate was 3.29%. The epidemic curve showed two peaks, with the main peak occurring between 10 and 20 March and accounting for 85.26% of reported cases. The rates of norovirus detection in samples from confirmed cases, people without symptoms, and environmental samples were 32.72%, 17.39%, and 9.17%, respectively. The phylogenetic analysis showed that the norovirus belonged to genotype GII.17. Conclusions: This is the largest and most severe outbreak caused by genotype GII.17 norovirus in recent years in China. The GII.17 viruses displayed high epidemic activity and have become a dominant strain in China since the winter of 2014, having replaced the previously dominant GII.4 Sydney 2012 strain.
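The descriptive figures quoted in this abstract can be made concrete with a small calculation. The following Python sketch is illustrative only: the population at risk is back-calculated from the reported attack rate, and the per-group sample counts are hypothetical denominators chosen so that the quoted positivity percentages are reproduced exactly; the investigation's true denominators are not given here.

```python
# Minimal sketch reproducing the outbreak's descriptive statistics.
# NOTE: the population at risk is back-calculated from the reported
# attack rate (753 cases / 3.29%), and the per-group counts below are
# hypothetical, chosen only to reproduce the quoted percentages.

cases = 753
attack_rate = 0.0329
population = round(cases / attack_rate)  # assumed population at risk

print(f"implied population at risk: {population}")     # ~22 888
print(f"attack rate check: {cases / population:.2%}")  # 3.29%

# Norovirus positivity by sample group (hypothetical denominators).
positivity = {
    "confirmed cases": (53, 162),        # 53/162  -> 32.72%
    "asymptomatic persons": (8, 46),     # 8/46    -> 17.39%
    "environmental samples": (11, 120),  # 11/120  -> 9.17%
}
for group, (positive, total) in positivity.items():
    print(f"{group}: {positive}/{total} = {positive / total:.2%}")
```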
9,211
327
[ 21, 96, 48, 141, 114, 154, 235, 55, 68, 459, 398, 119, 128, 77, 405 ]
20
[ "samples", "norovirus", "cases", "17", "gii", "university", "gii 17", "strains", "clinical", "china" ]
[ "caused norovirus gastroenteritis", "norovirus outbreaks reported", "noroviruses readily transmitted", "norovirus gastroenteritis pandemics", "experience norovirus outbreaks" ]
null
[CONTENT] Human norovirus | Acute gastroenteritis outbreak | Epidemiological investigation | Phylogenetic analysis | Henan Province | China [SUMMARY]
null
[CONTENT] Human norovirus | Acute gastroenteritis outbreak | Epidemiological investigation | Phylogenetic analysis | Henan Province | China [SUMMARY]
[CONTENT] Human norovirus | Acute gastroenteritis outbreak | Epidemiological investigation | Phylogenetic analysis | Henan Province | China [SUMMARY]
[CONTENT] Human norovirus | Acute gastroenteritis outbreak | Epidemiological investigation | Phylogenetic analysis | Henan Province | China [SUMMARY]
[CONTENT] Human norovirus | Acute gastroenteritis outbreak | Epidemiological investigation | Phylogenetic analysis | Henan Province | China [SUMMARY]
[CONTENT] Acute Disease | Adult | China | Disease Outbreaks | Female | Gastroenteritis | Humans | Male | Middle Aged | Norovirus | Phylogeny | Universities | Young Adult [SUMMARY]
null
[CONTENT] Acute Disease | Adult | China | Disease Outbreaks | Female | Gastroenteritis | Humans | Male | Middle Aged | Norovirus | Phylogeny | Universities | Young Adult [SUMMARY]
[CONTENT] Acute Disease | Adult | China | Disease Outbreaks | Female | Gastroenteritis | Humans | Male | Middle Aged | Norovirus | Phylogeny | Universities | Young Adult [SUMMARY]
[CONTENT] Acute Disease | Adult | China | Disease Outbreaks | Female | Gastroenteritis | Humans | Male | Middle Aged | Norovirus | Phylogeny | Universities | Young Adult [SUMMARY]
[CONTENT] Acute Disease | Adult | China | Disease Outbreaks | Female | Gastroenteritis | Humans | Male | Middle Aged | Norovirus | Phylogeny | Universities | Young Adult [SUMMARY]
[CONTENT] caused norovirus gastroenteritis | norovirus outbreaks reported | noroviruses readily transmitted | norovirus gastroenteritis pandemics | experience norovirus outbreaks [SUMMARY]
null
[CONTENT] caused norovirus gastroenteritis | norovirus outbreaks reported | noroviruses readily transmitted | norovirus gastroenteritis pandemics | experience norovirus outbreaks [SUMMARY]
[CONTENT] caused norovirus gastroenteritis | norovirus outbreaks reported | noroviruses readily transmitted | norovirus gastroenteritis pandemics | experience norovirus outbreaks [SUMMARY]
[CONTENT] caused norovirus gastroenteritis | norovirus outbreaks reported | noroviruses readily transmitted | norovirus gastroenteritis pandemics | experience norovirus outbreaks [SUMMARY]
[CONTENT] caused norovirus gastroenteritis | norovirus outbreaks reported | noroviruses readily transmitted | norovirus gastroenteritis pandemics | experience norovirus outbreaks [SUMMARY]
[CONTENT] samples | norovirus | cases | 17 | gii | university | gii 17 | strains | clinical | china [SUMMARY]
null
[CONTENT] samples | norovirus | cases | 17 | gii | university | gii 17 | strains | clinical | china [SUMMARY]
[CONTENT] samples | norovirus | cases | 17 | gii | university | gii 17 | strains | clinical | china [SUMMARY]
[CONTENT] samples | norovirus | cases | 17 | gii | university | gii 17 | strains | clinical | china [SUMMARY]
[CONTENT] samples | norovirus | cases | 17 | gii | university | gii 17 | strains | clinical | china [SUMMARY]
[CONTENT] noroviruses | outbreaks | person | infections | norovirus infections | immunocompromised | immunocompromised patients | acute | acute gastroenteritis | norovirus [SUMMARY]
null
[CONTENT] strains | cases | samples | 17 | gii | gii 17 | university | college1 | noroviruses | 17 strains [SUMMARY]
[CONTENT] infections | study | outbreak | environmental | outbreaks vaccines antiviral | norovirus gii | students university | gastroenteritis caused human | gastroenteritis caused human norovirus | dormitories classrooms environmental [SUMMARY]
[CONTENT] samples | norovirus | cases | 17 | gii | university | clinical | gii 17 | strains | food [SUMMARY]
[CONTENT] samples | norovirus | cases | 17 | gii | university | clinical | gii 17 | strains | food [SUMMARY]
[CONTENT] ||| China | recent years ||| Henan Province | China | the past five years ||| [SUMMARY]
null
[CONTENT] 4 March to 3 April 2015 | 753 | 3.29% ||| two | between 10 and 20 March | 85.26% ||| 32.72% | 17.39% | 9.17% ||| [SUMMARY]
[CONTENT] recent years | China ||| China | the winter of 2014 | Sydney | 2012 [SUMMARY]
[CONTENT] ||| China | recent years ||| Henan Province | China | the past five years ||| ||| ||| ||| ||| ||| 4 March to 3 April 2015 | 753 | 3.29% ||| two | between 10 and 20 March | 85.26% ||| 32.72% | 17.39% | 9.17% ||| ||| recent years | China ||| China | the winter of 2014 | Sydney | 2012 [SUMMARY]
[CONTENT] ||| China | recent years ||| Henan Province | China | the past five years ||| ||| ||| ||| ||| ||| 4 March to 3 April 2015 | 753 | 3.29% ||| two | between 10 and 20 March | 85.26% ||| 32.72% | 17.39% | 9.17% ||| ||| recent years | China ||| China | the winter of 2014 | Sydney | 2012 [SUMMARY]
Depressed mood and frailty among older people in Tokyo during the COVID-19 pandemic.
34530494
The study aim was to identify depressed mood and frailty and their related factors in older people during the coronavirus disease 2019 (COVID-19) pandemic.
BACKGROUND
Since 2010, we have conducted questionnaire surveys of all older residents who are not certified in the long-term care insurance scheme and who live in one district of Tokyo municipality. These residents are divided into two groups by birth month, that is, those born between April and September and those born between October and March, and each group completes the survey every 2 years (in April and May). Study participants were older residents who were born between April and September and who completed the survey in spring 2018 and in spring 2020, the pandemic period. Depressed mood and frailty were assessed using the Kihon Checklist, which is widely used by local governments in Japan. We had no control group in this study.
METHODS
A total of 1736 residents responded to both surveys. From 2018 to 2020, the depressed mood rate increased from 29% to 38%, and frailty increased from 10% to 16%. The incidence of depressed mood and frailty was 25% and 11%, respectively. Incidence of depressed mood was related to subjective memory impairment and difficulty in device usage, and incidence of frailty was related to being older, subjective memory impairment, lack of emotional social support, poor subjective health, and social participation difficulties.
RESULTS
Older people with subjective memory impairment may be a high-risk group during the coronavirus pandemic. Telephone outreach for frail older people could be an effective solution. We recommend extending the scope of the 'reasonable accommodation' concept beyond disability and including older people to build an age-friendly and crisis-resistant community.
CONCLUSIONS
[ "Aged", "COVID-19", "Frail Elderly", "Frailty", "Geriatric Assessment", "Humans", "Independent Living", "Japan", "Pandemics", "SARS-CoV-2", "Tokyo" ]
8662134
INTRODUCTION
The coronavirus disease 2019 (COVID‐19) pandemic has spread around the world. In Japan, the number of patients rapidly increased in March, followed by an emergency state declaration by the government on April 7, 2020. Because social distancing was recommended, nonessential businesses, schools, sports and recreational facilities, and places of worship were closed. Residents were asked to stay in their homes. Although the government did not take mandatory action to ensure that people remained at home, the number of people outside substantially decreased during the emergency state; according to the Japanese Cabinet Office, the number of people circulating in the five large stations in the Tokyo metropolitan area, as measured by mobile phone geographical data (approved by owners for public use), decreased by 68.9% to 87.3% compared with average data in January and February. 1 The risk of severe illness from COVID‐19 increases with age, and older adults are at highest risk. 2 This is because: (i) frailty in older adults increases the risk of various infections and reduces all aspects of the immune response; and (ii) older people have multiple comorbidities and more hospitalisations, which increases the chance of infection during a pandemic. 3 Thus, older adults are particularly cautious about the risk of getting infected and avoid in-person contact with others. However, social isolation in older people is a serious public health concern, because of the greater risk of physical and mental health problems in older people. 4 According to Santini et al., 5 social disconnection puts older adults at greater risk of depression. A comparison of the National Health Interview Survey 2018 and 2020 shows that psychological distress and loneliness have increased during the COVID‐19 pandemic. 6 In addition, social distancing is a risk factor for progressive frailty, as it reduces physical activity. 7 In Japan, several measures have been used to assess depressed mood among older people. The most widely used measure is the Kihon Checklist (KCL), which is described in the methods section. One survey that used the KCL found that 25% of older people had depressed mood. 8 However, to the best of our knowledge, no previous studies have used the KCL to assess the onset of depressed mood. Another widely used scale is the 15‐item Geriatric Depression Scale 9 ; scores of five or greater on this scale indicate depressive symptoms. A study of older people in Japan found that the prevalence of depressive symptoms was 25%. 10 A large‐scale multicentre longitudinal study showed that the incidence of depressive symptoms over 3 years was 16.5% for men and 15.7% for women. 11 In Japan, frailty is often assessed using the KCL or the Japanese version of the Cardiovascular Health Study criteria (CHS). 12 A meta‐analysis of studies that used the CHS found a pooled prevalence of frailty of 7.4% (95% confidence interval (CI) 6.1–9.0). 13 However, to the best of our knowledge, there are no CHS data on the incidence of frailty. One study that used the KCL found a frailty prevalence of 8% and a 5‐year onset of frailty of 8%. 14 The aim of this study was to identify psychological and physical changes in older people by comparing 2020 data (collected during the COVID‐19 pandemic) with 2018 data from the same population. An additional aim was to identify factors related to psychological and physical changes.
METHODS
Introduction Since 2010, we have conducted an epidemiological survey of older people living in one district of Tokyo. 15, 16, 17 Questionnaires are usually mailed in April; this year, Japan was in an emergency state at this time. Although there was substantial societal disruption, the local government decided to mail the questionnaires as planned to obtain a rapid assessment of the situation and to prioritise focused support.
Participants In close collaboration with local government, we have conducted epidemiological surveys of all older people (i.e., individuals aged 65 years or over, which is the official definition of ‘older people’ in Japan) not certified in the long‐term care insurance (LTCI) scheme and living in one district of Tokyo. Respondents are divided into two groups by birth month: those born between April and September comprise group 1 and those born between October and March comprise group 2. Groups 1 and 2 complete the surveys in odd years and even years, respectively; the group 2 survey started in 2010 and has been conducted every 2 years. The annual alternation of groups 1 and 2 equalises the yearly workload for local government. The flow of the project is shown in Figure 1. The participants of this study were older people from group 1 who responded to both the 2018 survey and the 2020 survey. The study flow is shown in Figure 2. Flow chart of the project. Flow chart of the study. Japan's LTCI is a mandatory program that provides institutional, home, and community‐based services for older persons. To use long‐term care services, applicants must receive long‐term care need certification, which is determined by a committee of specialists. 18
Setting This study was conducted in one district which is located in the centre of the Tokyo metropolitan area. The total population is approximately 67 000, including 11 000 people aged 65 years or over. According to publicly available data, the LTCI certification rate of this district is 20.2%. 19 Every year, the local government mails a questionnaire to respondents in April. Respondents are asked to mail the questionnaire back by May 29.
Measures As this survey was a joint project with local government, it included the KCL. The KCL was developed by the Japanese Ministry of Health, Labour and Welfare to identify older people at risk of requiring care/support, and is widely used by local governments to assess health and care needs. 20 The KCL comprises 20 items about the overall health status of older people and five items that assess depressed mood. 21 Response options for each item are ‘yes’ and ‘no’. Depressed mood and frailty scores were derived from KCL responses.
Main outcome Depressed mood The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood. 8
Frailty The 20 KCL items that assess overall health status were used to measure frailty. Satake et al. 22 noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts. 23
Covariates Basic information We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).
Memory‐related variables (subjective memory impairment) We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21), 24 which is widely used with the Japanese national dementia strategy.
Physical health‐related variables Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.
Daily life competence Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC). 25, 26 The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’
Psychological variables Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al. 27
Data analysis Of participants considered not to have depressed mood in the 2018 survey, those judged to have depressed mood in the 2020 survey were regarded as the ‘new depressed mood’ group. Similarly, of participants considered not to show frailty in the 2018 survey, those judged to show frailty in the 2020 survey were regarded as the ‘new frailty’ group. Characteristics of the new depressed mood group and new frailty group were compared with controls using the Chi‐square test for nominal variables and the t‐test for continuous variables. Multivariate logistic regression analyses were subsequently performed. The dependent variables were new depressed mood and new frailty, and factors showing significant associations in the previous bivariate analysis were included. Age was converted to a two‐value item: young‐old (65–74 years) and old‐old (≥75 years). P < 0.05 was regarded as statistically significant. For the memory‐related items (i.e., forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier), only one item was included in the multivariate analysis to avoid multicollinearity. In both multivariate logistic regression analyses, the variance inflation factor was less than 2.0 for all items, indicating no multicollinearity. Analyses were performed using SPSS version 25 (IBM Corp, Armonk, NY, USA).
Ethical considerations The study protocol was approved by the ethics committee of the Tokyo Metropolitan Institute of Gerontology. Written informed consent was obtained from all participants.
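To make the outcome definitions and the incidence calculation above concrete, here is a minimal Python sketch. It encodes the two KCL cutoffs described in the Measures subsection (two or more 'yes' answers on the five mood items for depressed mood; a total of eight or more on the 20 health items, i.e., the 7/8 cutoff, for frailty) and the incidence definition from the Data analysis subsection. The function names and the 1/0 coding are assumptions for illustration, not the authors' SPSS code.

```python
# Sketch of the KCL-based outcome definitions; items are coded 1 for a
# "yes" (deficit) answer and 0 for "no".

from typing import Sequence

def has_depressed_mood(mood_items: Sequence[int]) -> bool:
    """Five KCL mood items; two or more 'yes' answers indicate depressed mood."""
    assert len(mood_items) == 5
    return sum(mood_items) >= 2

def is_frail(health_items: Sequence[int]) -> bool:
    """Twenty KCL health items; a total of 8 or more (7/8 cutoff) indicates frailty."""
    assert len(health_items) == 20
    return sum(health_items) >= 8

def incidence(new_positive_2020: int, negative_2018: int) -> float:
    """Share of respondents negative in 2018 who became positive in 2020."""
    return new_positive_2020 / negative_2018

# Counts reported in the Results section:
print(f"depressed mood incidence: {incidence(307, 1227):.0%}")  # 25%
print(f"frailty incidence: {incidence(165, 1565):.0%}")         # 11%
```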
RESULTS
The number of mailed questionnaires for residents aged 65 years or above and born between April and September was 4914 in 2018, and 2621 questionnaires were retrieved (response rate 53.3%). Similarly, the number of mailed questionnaires was 4973 in 2020, and 2649 questionnaires were retrieved (response rate 53.3%). A total of 1736 residents responded to both surveys (i.e., the rate of analysed questionnaires per mailed questionnaires was 35.3% and 34.6%, respectively). In 2018, 29% of participants had depressed mood and 10% showed frailty. In 2020, 38% of participants had depressed mood and 16% showed frailty. A simple comparison showed that the rates of depressed mood and frailty increased in 2020 (Table 1). [Table 1: Ratio of participants who had depressed mood and frailty.] Of the 1736 participants, 509 had depressed mood and 1227 did not in 2018. Of these 1227 participants, 307 had depressed mood in 2020; that is, the incidence of depressed mood was 25%. Of the 1736 participants, 171 had frailty and 1565 did not in 2018. Of these 1565 participants, 165 had frailty in 2020; that is, the incidence of frailty was 11% (Fig. 3). [Figure 3: Incidence of depressed mood and frailty.] The comparative characteristics of participants who developed depressed mood in 2020 and those who did not are shown in Table 2. Being older, not married, low education level, residing in the current location for longer than 10 years, being forgetful about the location of things, being forgetful about things that happened a few minutes earlier, difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, and lack of emotional social support were related to new depressed mood. The multivariate logistic regression analysis showed that subjective memory impairment (forgetfulness about the location of things) (odds ratio (OR) = 1.47, 95% CI: 1.11–1.94) and difficulty using devices (operating a video recorder) (OR = 1.45, 95% CI: 1.04–2.03) were significantly associated with new depressed mood (Table 3). [Table 2: Comparative characteristics of the new depressed mood group. *P < 0.05; **P < 0.01; ***P < 0.001. BMI, body mass index.] [Table 3: Factors associated with incidence of depressed mood in multivariate analysis. *P < 0.05; **P < 0.01. OR, odds ratio; CI, confidence interval. Factors included in this model: (i) basic variables such as age, education, and new residential status; (ii) forgetfulness about the location of things (to assess subjective memory impairment); (iii) items assessing instrumental activities of daily living, such as difficulty operating a video recorder, difficulty watching educational programs, and difficulty taking care of family members or acquaintances; and (iv) emotional social support (to assess psychological variables). Age was converted to a dichotomous variable: young‐old (65–74 years) and old‐old (≥75 years).] The comparative characteristics of older people who had developed frailty in 2020 and those who had not are shown in Table 4.
Being older, not married, living alone, low education level, not having a job, being forgetful about the location of things, being forgetful about things that happened a few minutes earlier, poor subjective health, lack of concern regarding their oral health, difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, difficulty in assuming roles such as the leader in a residents' association, lack of emotional social support, and lack of instrumental social support were related to new frailty. The multivariate logistic regression analysis showed that being old‐old (OR = 1.87, 95% CI: 1.18–2.97), subjective memory impairment (forgetfulness about things that happened a few minutes earlier) (OR = 2.18, 95% CI: 1.47–3.22), lack of emotional social support (OR = 2.64, 95% CI: 1.34–5.13), poor subjective health (OR = 2.27, 95% CI: 1.16–4.44), and difficulty in social participation (difficulty assuming roles such as the leader in a residents' association) (OR = 1.92, 95% CI: 1.25–2.95) were significantly associated with new frailty (Table 5). [Table 4: Comparative characteristics of the new frailty group. *P < 0.05; **P < 0.01; ***P < 0.001. BMI, body mass index.] [Table 5: Factors associated with incidence of frailty in multivariate analysis. *P < 0.05; **P < 0.01; ***P < 0.001. OR, odds ratio; CI, confidence interval. Factors included in this model: (i) basic variables such as age, living status, marital status, working status, and education; (ii) forgetfulness about things that happened a few minutes earlier (to assess subjective memory impairment); (iii) physical health‐related items such as subjective health and oral health care; (iv) items about instrumental activities of daily living, such as difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, and difficulty assuming roles (e.g., the leader) in a residents' association; and (v) psychological variables such as emotional social support and instrumental social support. Age was converted to a dichotomous variable: young‐old (65–74 years) and old‐old (≥75 years).]
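For readers who want to reproduce this kind of analysis outside SPSS, the sketch below mirrors the multivariate step described in the Data analysis subsection for the frailty outcome: a logistic regression of new frailty on the factors retained from the bivariate screening, preceded by the variance inflation factor check the authors report (VIF < 2.0). The file name and column names are hypothetical; this is a sketch of the approach, not the authors' code.

```python
# Hedged sketch of the multivariate analysis using pandas/statsmodels.
# All column names and the input file are hypothetical placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey_2018_2020.csv")  # hypothetical merged 2018/2020 panel

# Restrict to respondents without frailty at baseline; outcome = onset by 2020.
cohort = df[df["frail_2018"] == 0].copy()
cohort["new_frailty"] = cohort["frail_2020"]

# Dichotomise age as in the paper: young-old (65-74) vs old-old (>= 75).
cohort["old_old"] = (cohort["age"] >= 75).astype(int)

predictors = ["old_old", "memory_impairment", "no_emotional_support",
              "poor_subjective_health", "social_participation_difficulty"]
X = sm.add_constant(cohort[predictors].astype(float))

# Multicollinearity check; the paper reports VIF < 2.0 for all items.
vifs = {name: variance_inflation_factor(X.values, i)
        for i, name in enumerate(X.columns) if name != "const"}
print({k: round(v, 2) for k, v in vifs.items()})

# Logistic regression; exponentiated coefficients give odds ratios with CIs.
model = sm.Logit(cohort["new_frailty"], X).fit(disp=False)
or_ci = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_ci.columns = ["OR", "95% CI low", "95% CI high"]
print(or_ci.round(2))
```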
null
null
[ "INTRODUCTION", "Introduction", "Participants", "Setting", "Measures", "\nMain outcome\n", "Depressed mood", "Frailty", "\nCovariates\n", "Basic information", "Memory‐related variables (subjective memory impairment)", "Physical health‐related variables", "Daily life competence", "Psychological variables", "Data analysis", "Ethical considerations", "STRENGTHS AND LIMITATIONS" ]
[ "The coronavirus disease 19 (COVID‐19) pandemic has spread around the world. In Japan, the number of patients rapidly increased in March, followed by an emergency state declaration by the government on April 7, 2020. Because social distancing was recommended, nonessential businesses, schools, sports and recreational facilities, and places of worship were closed. Residents were asked to stay in their homes. Although the government did not take mandatory action to ensure that people remained at home, the number of people outside substantially decreased during the emergency state; according to the Japanese Cabinet Office, the number of people circulating in the five large stations in the Tokyo metropolitan area, as measured by mobile phone geographical data (approved by owners for public use), decreased from 68.9% to 87.3% compared with average data in January and February.\n1\n\n\nThe risk of severe illness from COVID‐19 increases with age, and older adults are at highest risk.\n2\n This is because: (i) frailty in older adults increases the risk of various infections and reduces all aspects of the immune response; and (ii) older people have multiple comorbidities and more hospitalisations, which increases the chance of infection during a pandemic.\n3\n Thus, older adults are particularly cautious about the risk of getting infected and avoided contacting with others in person.\nHowever, social isolation in older people is a serious public health concern, because of the greater risk of physical and mental health problems in older people.\n4\n According to Santini et al.,\n5\n social disconnection puts older adults at greater risk of depression. A comparison of the National Health Interview Survey 2018 and 2020 shows that psychological distress and loneliness have increased during the COVID‐19 pandemic.\n6\n In addition, social distancing is a risk factor for progressive frailty, as it reduces physical activity.\n7\n\n\nIn Japan, several measures have been used to assess depressed mood among older people. The most widely used measure is the Kihon Checklist (KCL), which is described in the methods section. One survey that used the KCL found that 25% of older people had depressed mood.\n8\n However, to the best of our knowledge, no previous studies have used the KCL to assess the onset of depressed mood. Another widely used scale is the 15‐item Geriatric Depression Scale\n9\n; scores of five or greater on this scale indicate depressive symptoms. A study of older people in Japan found that the prevalence of depressive symptoms was 25%.\n10\n A large‐scale multicentre longitudinal study showed that the incidence of depressive symptoms over 3 years was 16.5% for men and 15.7% for women.\n11\n\n\nIn Japan, frailty is often assessed using the KCL or the Japanese version of the Cardiovascular Health Study criteria (CHS).\n12\n A meta‐analysis of studies that used the CHS found a pooled prevalence of frailty of 7.4% (95% confidence interval (CI) 6.1–9.0).\n13\n However, to the best of our knowledge, there are no CHS data on the incidence of frailty. One study that used the KCL found a frailty prevalence of 8% and a 5‐year onset of frailty of 8%.\n14\n\n\nThe aim of this study was to identify psychological and physical changes in older people by comparing 2020 data (collected during the COVID‐19 pandemic) with 2018 data from the same population. 
An additional aim was to identify factors related to psychological and physical changes.", 
"Since 2010, we have conducted an epidemiological survey of older people living in one district of Tokyo.\n15\n, \n16\n, \n17\n Questionnaires are usually mailed in April; in 2020, Japan was under a state of emergency at that time. Despite the substantial societal disruption, the local government decided to mail the questionnaires as planned to obtain a rapid assessment of the situation and to prioritise focused support.", 
"In close collaboration with local government, we have conducted epidemiological surveys of all older people (i.e., individuals aged 65 years or over, which is the official definition of ‘older people’ in Japan) not certified in the long‐term care insurance (LTCI) scheme and living in one district of Tokyo. Respondents are divided into two groups by birth month: those born between April and September comprise group 1, and those born between October and March comprise group 2. Groups 1 and 2 complete the surveys in odd and even years, respectively; the group 2 survey started in 2010 and has been conducted every 2 years. The annual alternation of groups 1 and 2 equalises the yearly workload for local government. The flow of the project is shown in Figure 1. The participants of this study were older people from group 1 who responded to both the 2018 and 2020 surveys. The study flow is shown in Figure 2.\nFlow chart of the project.\nFlow chart of the study.\nJapan's LTCI is a mandatory program that provides institutional, home, and community‐based services for older persons. To use long‐term care services, applicants must receive long‐term care need certification, which is determined by a committee of specialists.\n18\n\n", 
"This study was conducted in one district located in the centre of the Tokyo metropolitan area. The total population is approximately 67 000, including 11 000 people aged 65 years or over. According to publicly available data, the LTCI certification rate of this district is 20.2%.\n19\n Every year, the local government mails a questionnaire to respondents in April. Respondents are asked to mail the questionnaire back by May 29.", 
"As this survey was a joint project with local government, it included the KCL. The KCL was developed by the Japanese Ministry of Health, Labour and Welfare to identify older people at risk of requiring care/support, and is widely used by local governments to assess health and care needs.\n20\n The KCL comprises 20 items about the overall health status of older people and five items that assess depressed mood.\n21\n Response options for each item are ‘yes’ and ‘no’. Depressed mood and frailty scores were derived from KCL responses.\n\nMain outcome\nDepressed mood: The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\nFrailty: The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty as defined in the CHS criteria. A cutoff of 7/8 on the 20 KCL health items was used as the threshold to identify frailty (i.e., a total score of 8 or more indicated frailty). The KCL has been shown to be adequate for cross‐cultural studies and suitable for assessing frailty among older people in multiple cohorts.\n23\n\n\nCovariates\nBasic information: We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in the current location).\nMemory‐related variables (subjective memory impairment): We assessed participants' forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nPhysical health‐related variables: Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale, and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nDaily life competence: Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nPsychological variables: Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n", 
"Depressed mood: The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\nFrailty: The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty as defined in the CHS criteria. A cutoff of 7/8 on the 20 KCL health items was used as the threshold to identify frailty (i.e., a total score of 8 or more indicated frailty). The KCL has been shown to be adequate for cross‐cultural studies and suitable for assessing frailty among older people in multiple cohorts.\n23\n\n", 
"The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n", 
"The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty as defined in the CHS criteria. A cutoff of 7/8 on the 20 KCL health items was used as the threshold to identify frailty (i.e., a total score of 8 or more indicated frailty). The KCL has been shown to be adequate for cross‐cultural studies and suitable for assessing frailty among older people in multiple cohorts.\n23\n\n", 
"Basic information: We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in the current location).\nMemory‐related variables (subjective memory impairment): We assessed participants' forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nPhysical health‐related variables: Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale, and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nDaily life competence: Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nPsychological variables: Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n", 
"We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in the current location).", 
"We assessed participants' forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.", 
"Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale, and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.", 
"Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’", 
"Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n", 
"Of participants considered not to have depressed mood in the 2018 survey, those judged to have depressed mood in the 2020 survey were regarded as the ‘new depressed mood’ group. Similarly, of participants considered not to show frailty in the 2018 survey, those judged to show frailty in the 2020 survey were regarded as the ‘new frailty’ group.\nCharacteristics of the new depressed mood group and the new frailty group were compared with controls using the Chi‐square test for nominal variables and the t‐test for continuous variables.\nMultivariate logistic regression analyses were subsequently performed. The dependent variables were new depressed mood and new frailty, and factors showing significant associations in the preceding bivariate analyses were included as independent variables. Age was converted to a two‐value item: young‐old (65–74 years) and old‐old (≥75 years). P < 0.05 was regarded as statistically significant. Of the memory‐related items (i.e., forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier), only one was included in the multivariate analysis to avoid multicollinearity. In both multivariate logistic regression analyses, the variance inflation factor was less than 2.0 for all items, indicating no multicollinearity. Analyses were performed using SPSS version 25 (IBM Corp, Armonk, NY, USA).", 
"The study protocol was approved by the ethics committee of the Tokyo Metropolitan Institute of Gerontology. Written informed consent was obtained from all participants.", 
"A strength of this study is that it was conducted during the national state of emergency (i.e., April and May 2020). This was possible because of 10 years of cooperation and trust building between the research team and local government. In addition, because we used pre‐existing questionnaires, the study did not disrupt essential government work. Accordingly, we were able to estimate the incidence of depressed mood and frailty during the COVID‐19 pandemic.\nThis study had some limitations. First, as mentioned above, we lacked a control group; this is a major limitation. Second, this was a self‐report mail survey and we did not collect objective data. Third, depressed mood was defined using the KCL. Although frailty assessed by the KCL is reported to be adequate for cross‐cultural studies,\n23\n depressed mood assessed by the KCL has not been sufficiently validated. However, because the KCL is used universally in the Japanese public sector and this was a local government survey, we used and analysed KCL data. The equivalence of KCL‐defined depressed mood with widely used measures such as the Geriatric Depression Scale\n9\n still needs to be established so that these findings can be compared internationally. Fourth, older people certified on the LTCI scheme, who may constitute the highest‐risk group, were excluded from the survey. Fifth, nutritional status and physical performance play important roles in preventing depression and frailty, but they were not analysed in this study." ]
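To make the two KCL‐based outcome definitions above concrete, here is a minimal scoring sketch (yes to two or more of the five mood items indicates depressed mood; the 7/8 cutoff on the 20 health items is read as a total score of 8 or more indicating frailty). The function names and the 1/0 coding of ‘yes’/‘no’ responses are illustrative assumptions; the survey's actual data layout is not given in the text.

```python
# Sketch of the KCL scoring rules described in the Measures section.
# Responses are assumed to be coded 1 = "yes", 0 = "no" (an assumption;
# the original data format is not specified).

def has_depressed_mood(mood_items):
    """Five KCL depressed-mood items; 'yes' to two or more => depressed mood."""
    assert len(mood_items) == 5
    return sum(mood_items) >= 2

def is_frail(health_items):
    """Twenty KCL health-status items; the 7/8 cutoff is read here as a
    total score of 8 or more indicating frailty."""
    assert len(health_items) == 20
    return sum(health_items) >= 8

print(has_depressed_mood([1, 1, 0, 0, 0]))  # True: 2 of 5 mood items endorsed
print(is_frail([1, 1, 1] + [0] * 17))       # False: total score 3 < 8
```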
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
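The data‐analysis text defines incident (‘new’) cases by linking the 2018 and 2020 waves and then compares cases with controls using Chi‐square and t‐tests. A minimal sketch of that step follows; the DataFrame and all column names (depressed_2018, depressed_2020, living_alone, forgetfulness, age) are hypothetical stand‐ins, since the survey's real variable names are not given.

```python
import pandas as pd
from scipy import stats

# Hypothetical linked records for the 2018 and 2020 waves; values are
# illustrative only, not the survey's actual data.
df = pd.DataFrame({
    "depressed_2018": [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1],
    "depressed_2020": [0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0],
    "living_alone":   [0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0],
    "forgetfulness":  [0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0],
    "age":            [68, 77, 71, 82, 74, 69, 79, 83, 72, 76, 70, 88],
})

# 'New depressed mood': no depressed mood in 2018, depressed mood in 2020.
at_risk = df[df["depressed_2018"] == 0].copy()
at_risk["new_depressed"] = (at_risk["depressed_2020"] == 1).astype(int)

# Chi-square test for a nominal covariate (e.g., living alone or not).
table = pd.crosstab(at_risk["living_alone"], at_risk["new_depressed"])
chi2, p_nominal, dof, expected = stats.chi2_contingency(table)

# t-test for a continuous covariate (e.g., age).
cases = at_risk.loc[at_risk["new_depressed"] == 1, "age"]
controls = at_risk.loc[at_risk["new_depressed"] == 0, "age"]
t_stat, p_continuous = stats.ttest_ind(cases, controls)
print(p_nominal, p_continuous)
```

The same two steps would be repeated with the frailty columns to define and screen the ‘new frailty’ group.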
[ "INTRODUCTION", "METHODS", "Introduction", "Participants", "Setting", "Measures", "\nMain outcome\n", "Depressed mood", "Frailty", "\nCovariates\n", "Basic information", "Memory‐related variables (subjective memory impairment)", "Physical health‐related variables", "Daily life competence", "Psychological variables", "Data analysis", "Ethical considerations", "RESULTS", "DISCUSSION", "STRENGTHS AND LIMITATIONS" ]
[ "The coronavirus disease 19 (COVID‐19) pandemic has spread around the world. In Japan, the number of patients rapidly increased in March, followed by an emergency state declaration by the government on April 7, 2020. Because social distancing was recommended, nonessential businesses, schools, sports and recreational facilities, and places of worship were closed. Residents were asked to stay in their homes. Although the government did not take mandatory action to ensure that people remained at home, the number of people outside substantially decreased during the emergency state; according to the Japanese Cabinet Office, the number of people circulating in the five large stations in the Tokyo metropolitan area, as measured by mobile phone geographical data (approved by owners for public use), decreased from 68.9% to 87.3% compared with average data in January and February.\n1\n\n\nThe risk of severe illness from COVID‐19 increases with age, and older adults are at highest risk.\n2\n This is because: (i) frailty in older adults increases the risk of various infections and reduces all aspects of the immune response; and (ii) older people have multiple comorbidities and more hospitalisations, which increases the chance of infection during a pandemic.\n3\n Thus, older adults are particularly cautious about the risk of getting infected and avoided contacting with others in person.\nHowever, social isolation in older people is a serious public health concern, because of the greater risk of physical and mental health problems in older people.\n4\n According to Santini et al.,\n5\n social disconnection puts older adults at greater risk of depression. A comparison of the National Health Interview Survey 2018 and 2020 shows that psychological distress and loneliness have increased during the COVID‐19 pandemic.\n6\n In addition, social distancing is a risk factor for progressive frailty, as it reduces physical activity.\n7\n\n\nIn Japan, several measures have been used to assess depressed mood among older people. The most widely used measure is the Kihon Checklist (KCL), which is described in the methods section. One survey that used the KCL found that 25% of older people had depressed mood.\n8\n However, to the best of our knowledge, no previous studies have used the KCL to assess the onset of depressed mood. Another widely used scale is the 15‐item Geriatric Depression Scale\n9\n; scores of five or greater on this scale indicate depressive symptoms. A study of older people in Japan found that the prevalence of depressive symptoms was 25%.\n10\n A large‐scale multicentre longitudinal study showed that the incidence of depressive symptoms over 3 years was 16.5% for men and 15.7% for women.\n11\n\n\nIn Japan, frailty is often assessed using the KCL or the Japanese version of the Cardiovascular Health Study criteria (CHS).\n12\n A meta‐analysis of studies that used the CHS found a pooled prevalence of frailty of 7.4% (95% confidence interval (CI) 6.1–9.0).\n13\n However, to the best of our knowledge, there are no CHS data on the incidence of frailty. One study that used the KCL found a frailty prevalence of 8% and a 5‐year onset of frailty of 8%.\n14\n\n\nThe aim of this study was to identify psychological and physical changes in older people by comparing 2020 data (collected during the COVID‐19 pandemic) with 2018 data from the same population. 
An additional aim was to identify factors related to psychological and physical changes.", "Introduction Since 2010, we have conducted an epidemiological survey of older people living in one district of Tokyo.\n15\n, \n16\n, \n17\n Questionnaires are usually mailed in April; this year, Japan was in an emergency state at this time. Although there was substantial societal disruption, the local government decided to mail the questionnaires as planned to obtain a rapid assessment of the situation and to prioritise focused support.\nSince 2010, we have conducted an epidemiological survey of older people living in one district of Tokyo.\n15\n, \n16\n, \n17\n Questionnaires are usually mailed in April; this year, Japan was in an emergency state at this time. Although there was substantial societal disruption, the local government decided to mail the questionnaires as planned to obtain a rapid assessment of the situation and to prioritise focused support.\nParticipants In close collaboration with local government, we have conducted epidemiological surveys of all older people (i.e., individuals aged 65 years or over, which is the official definition of ‘older people’ in Japan) not certified in the long‐term care insurance (LTCI) scheme and living in one district of Tokyo. Respondents are divided into two groups by birth month: those born between April and September comprise group 1 and those born between October and March comprise group 2. Groups 1 and 2 complete the surveys in odd years and even years, respectively; the group 2 survey started in 2010 and has been conducted every 2 years. The annual alternation of groups 1 and 2 equalises the yearly workload for local government. The flow of the project is shown in Figure 1. The participants of this study were older people from group 1 who responded to both the 2018 survey and the 2020 survey. The study flow is shown in Figure 2.\nFlow chart of the project.\nFlow chart of the study.\nJapan's LTCI is a mandatory program that provides institutional, home, and community‐based services for older persons. To use long‐term care services, applicants must receive long‐term care need certification, which is determined by a committee of specialists.\n18\n\n\nIn close collaboration with local government, we have conducted epidemiological surveys of all older people (i.e., individuals aged 65 years or over, which is the official definition of ‘older people’ in Japan) not certified in the long‐term care insurance (LTCI) scheme and living in one district of Tokyo. Respondents are divided into two groups by birth month: those born between April and September comprise group 1 and those born between October and March comprise group 2. Groups 1 and 2 complete the surveys in odd years and even years, respectively; the group 2 survey started in 2010 and has been conducted every 2 years. The annual alternation of groups 1 and 2 equalises the yearly workload for local government. The flow of the project is shown in Figure 1. The participants of this study were older people from group 1 who responded to both the 2018 survey and the 2020 survey. The study flow is shown in Figure 2.\nFlow chart of the project.\nFlow chart of the study.\nJapan's LTCI is a mandatory program that provides institutional, home, and community‐based services for older persons. 
To use long‐term care services, applicants must receive long‐term care need certification, which is determined by a committee of specialists.\n18\n\n\nSetting This study was conducted in one district which is located in the centre of the Tokyo metropolitan area. The total population is approximately 67 000, including 11 000 people aged 65 years or over. According to publicly available data, the LTCI certification rate of this district is 20.2%.\n19\n Every year, the local government mails a questionnaire to respondents in April. Respondents are asked to mail the questionnaire back by May 29.\nThis study was conducted in one district which is located in the centre of the Tokyo metropolitan area. The total population is approximately 67 000, including 11 000 people aged 65 years or over. According to publicly available data, the LTCI certification rate of this district is 20.2%.\n19\n Every year, the local government mails a questionnaire to respondents in April. Respondents are asked to mail the questionnaire back by May 29.\nMeasures As this survey was a joint project with local government, it included the KCL. The KCL was developed by the Japanese Ministry of Health, Labour and Welfare to identify older people at risk of requiring care/support, and is widely used by local governments to assess health and care needs.\n20\n The KCL comprises 20 items about the overall health status of older people and five items that assess depressed mood.\n21\n Response options for each item are ‘yes’ and ‘no’. Depressed mood and frailty scores were derived from KCL responses.\n\nMain outcome\n Depressed mood The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nThe five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nFrailty The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\nThe 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\nDepressed mood The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. 
Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nThe five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nFrailty The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\nThe 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\n\nCovariates\n Basic information We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nWe collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nMemory‐related variables (subjective memory impairment) We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nWe assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nPhysical health‐related variables Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nBody height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. 
Participants were also asked about their concern regarding their oral health.\nDaily life competence Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nDaily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nPsychological variables Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nEmotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nBasic information We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nWe collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nMemory‐related variables (subjective memory impairment) We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nWe assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nPhysical health‐related variables Body height and body weight were assessed to calculate body mass index. 
Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nBody height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nDaily life competence Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nDaily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nPsychological variables Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nEmotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nAs this survey was a joint project with local government, it included the KCL. The KCL was developed by the Japanese Ministry of Health, Labour and Welfare to identify older people at risk of requiring care/support, and is widely used by local governments to assess health and care needs.\n20\n The KCL comprises 20 items about the overall health status of older people and five items that assess depressed mood.\n21\n Response options for each item are ‘yes’ and ‘no’. Depressed mood and frailty scores were derived from KCL responses.\n\nMain outcome\n Depressed mood The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. 
Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nThe five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nFrailty The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\nThe 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\nDepressed mood The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nThe five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.\n8\n\n\nFrailty The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\nThe 20 KCL items that assess overall health status were used to measure frailty. Satake et al.\n22\n noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. 
The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts.\n23\n\n\n\nCovariates\n Basic information We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nWe collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nMemory‐related variables (subjective memory impairment) We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nWe assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nPhysical health‐related variables Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nBody height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nDaily life competence Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nDaily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. 
Potential item responses were ‘possible’ or ‘impossible.’\nPsychological variables Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nEmotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nBasic information We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nWe collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location).\nMemory‐related variables (subjective memory impairment) We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nWe assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21),\n24\n which is widely used with the Japanese national dementia strategy.\nPhysical health‐related variables Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nBody height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.\nDaily life competence Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. 
Potential item responses were ‘possible’ or ‘impossible.’\nDaily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC).\n25\n, \n26\n The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’\nPsychological variables Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nEmotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.\n27\n\n\nData analysis Of participants considered not to have depressed mood in the 2018 survey, those judged to have depressed mood in the 2020 survey were regarded as the ‘new depressed mood’ group. Similarly, of participants considered not to show frailty in the 2018 survey, those judged to show frailty in the 2020 survey were regarded as the ‘new frailty’ group.\nCharacteristics of the new depressed mood group and new frailty group were compared with controls using the Chi‐square test for nominal variables and the t‐test for continuous variables.\nMultivariate logistic regression analyses were subsequently performed. The dependent variables were new depressed mood and new frailty, and factors showing significant associations in the previous bivariate analysis were included. Age was converted to a two‐value item: young‐old (65–74 years) and old‐old (≥75 years). P < 0.05 was regarded as statistically significant. For the memory‐related items (i.e., forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier), only one item was included in the multivariate analysis to avoid multicollinearity. In both multivariate logistic regression analyses, the variance inflation factor was less than 2.0 for all items, indicating no multicollinearity. Analyses were performed using SPSS version 25 (IBM Corp, Armonk, NY, USA).\nOf participants considered not to have depressed mood in the 2018 survey, those judged to have depressed mood in the 2020 survey were regarded as the ‘new depressed mood’ group. Similarly, of participants considered not to show frailty in the 2018 survey, those judged to show frailty in the 2020 survey were regarded as the ‘new frailty’ group.\nCharacteristics of the new depressed mood group and new frailty group were compared with controls using the Chi‐square test for nominal variables and the t‐test for continuous variables.\nMultivariate logistic regression analyses were subsequently performed. The dependent variables were new depressed mood and new frailty, and factors showing significant associations in the previous bivariate analysis were included. 
Age was converted to a two‐value item: young‐old (65–74 years) and old‐old (≥75 years). P < 0.05 was regarded as statistically significant. For the memory‐related items (i.e., forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier), only one item was included in the multivariate analysis to avoid multicollinearity. In both multivariate logistic regression analyses, the variance inflation factor was less than 2.0 for all items, indicating no multicollinearity. Analyses were performed using SPSS version 25 (IBM Corp, Armonk, NY, USA).\nEthical considerations The study protocol was approved by the ethics committee of the Tokyo Metropolitan Institute of Gerontology. Written informed consent was obtained from all participants.\nThe study protocol was approved by the ethics committee of the Tokyo Metropolitan Institute of Gerontology. Written informed consent was obtained from all participants.", "Since 2010, we have conducted an epidemiological survey of older people living in one district of Tokyo.\n15\n, \n16\n, \n17\n Questionnaires are usually mailed in April; this year, Japan was in an emergency state at this time. Although there was substantial societal disruption, the local government decided to mail the questionnaires as planned to obtain a rapid assessment of the situation and to prioritise focused support.", "In close collaboration with local government, we have conducted epidemiological surveys of all older people (i.e., individuals aged 65 years or over, which is the official definition of ‘older people’ in Japan) not certified in the long‐term care insurance (LTCI) scheme and living in one district of Tokyo. Respondents are divided into two groups by birth month: those born between April and September comprise group 1 and those born between October and March comprise group 2. Groups 1 and 2 complete the surveys in odd years and even years, respectively; the group 2 survey started in 2010 and has been conducted every 2 years. The annual alternation of groups 1 and 2 equalises the yearly workload for local government. The flow of the project is shown in Figure 1. The participants of this study were older people from group 1 who responded to both the 2018 survey and the 2020 survey. The study flow is shown in Figure 2.\nFlow chart of the project.\nFlow chart of the study.\nJapan's LTCI is a mandatory program that provides institutional, home, and community‐based services for older persons. To use long‐term care services, applicants must receive long‐term care need certification, which is determined by a committee of specialists.\n18\n\n", "This study was conducted in one district which is located in the centre of the Tokyo metropolitan area. The total population is approximately 67 000, including 11 000 people aged 65 years or over. According to publicly available data, the LTCI certification rate of this district is 20.2%.\n19\n Every year, the local government mails a questionnaire to respondents in April. Respondents are asked to mail the questionnaire back by May 29.", "As this survey was a joint project with local government, it included the KCL. 
RESULTS:
The number of questionnaires mailed to residents aged 65 years or above and born between April and September was 4914 in 2018, of which 2621 were returned (response rate 53.3%). Similarly, 4973 questionnaires were mailed in 2020, of which 2649 were returned (response rate 53.3%). A total of 1736 residents responded to both surveys (i.e., 35.3% of the questionnaires mailed in 2018 and 34.6% of those mailed in 2020 were analysed).
In 2018, 29% of participants had depressed mood and 10% showed frailty; in 2020, 38% had depressed mood and 16% showed frailty. A simple comparison shows that the rates of depressed mood and frailty increased in 2020 (Table 1: ratio of participants who had depressed mood and frailty).
Of the 1736 participants, 509 had depressed mood in 2018 and 1227 did not. Of these 1227 participants, 307 had depressed mood in 2020; that is, the 2-year onset rate of depressed mood was 307/1227 (25%). Of the 1736 participants, 171 showed frailty in 2018 and 1565 did not. Of these 1565 participants, 165 showed frailty in 2020, an onset rate of 165/1565 (10.5%, i.e., approximately 11%) (Fig. 3: incidence of depressed mood and frailty).
The comparative characteristics of participants who developed depressed mood in 2020 and those who did not are shown in Table 2. Being older, not being married, low education level, residing in the current location for longer than 10 years, forgetfulness about the location of things, forgetfulness about things that happened a few minutes earlier, difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, and lack of emotional social support were related to new depressed mood. The multivariate logistic regression analysis showed that subjective memory impairment (forgetfulness about the location of things; odds ratio (OR) = 1.47, 95% CI: 1.11–1.94) and difficulty using devices (operating a video recorder; OR = 1.45, 95% CI: 1.04–2.03) were significantly associated with new depressed mood (Table 3).
Table 2. Comparative characteristics of the new depressed mood group (*P < 0.05; **P < 0.01; ***P < 0.001; BMI, body mass index).
Table 3. Factors associated with incidence of depressed mood in the multivariate analysis (*P < 0.05; **P < 0.01; OR, odds ratio; CI, confidence interval). Factors included in this model: (i) basic variables such as age, education, and new residential status; (ii) forgetfulness about the location of things (to assess subjective memory impairment); (iii) items assessing instrumental activities of daily living (difficulty operating a video recorder, difficulty watching educational programs, and difficulty taking care of family members or acquaintances); and (iv) emotional social support (to assess psychological variables). Age was entered as a dichotomous variable: young-old (65–74 years) and old-old (≥75 years).
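The odds ratios and confidence intervals reported in Tables 3 and 5 follow from the fitted logistic regression coefficients in the usual way: OR = exp(b), with 95% CI = exp(b ± 1.96·SE). A small illustration follows; the coefficient and standard error are made-up values chosen so the output lands near the OR reported for forgetfulness, not numbers taken from the actual model.

import math

def odds_ratio_with_ci(beta, se):
    # OR = exp(beta); 95% CI = exp(beta +/- 1.96 * se).
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

or_, lo, hi = odds_ratio_with_ci(0.385, 0.142)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.47 1.11 1.94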
The comparative characteristics of older people who had developed frailty in 2020 and those who had not are shown in Table 4. Being older, not being married, living alone, low education level, not having a job, forgetfulness about the location of things, forgetfulness about things that happened a few minutes earlier, poor subjective health, lack of concern regarding oral health, difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, difficulty assuming roles such as the leader in a residents' association, lack of emotional social support, and lack of instrumental social support were related to new frailty. The multivariate logistic regression analysis showed that being young-old (OR = 1.87, 95% CI: 1.18–2.97), subjective memory impairment (forgetfulness about things that happened a few minutes earlier; OR = 2.18, 95% CI: 1.47–3.22), lack of emotional social support (OR = 2.64, 95% CI: 1.34–5.13), poor subjective health (OR = 2.27, 95% CI: 1.16–4.44), and difficulty in social participation (difficulty assuming roles such as the leader in a residents' association; OR = 1.92, 95% CI: 1.25–2.95) were significantly associated with new frailty (Table 5).
Table 4. Comparative characteristics of the new frailty group (*P < 0.05; **P < 0.01; ***P < 0.001; BMI, body mass index).
Table 5. Factors associated with incidence of frailty in the multivariate analysis (*P < 0.05; **P < 0.01; ***P < 0.001; OR, odds ratio; CI, confidence interval). Factors included in this model: (i) basic variables such as age, living status, marital status, working status, and education; (ii) forgetfulness about things that happened a few minutes earlier (to assess subjective memory impairment); (iii) physical health-related items, namely subjective health and oral health care; (iv) items about instrumental activities of daily living, namely difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, and difficulty assuming roles (e.g., the leader) in a residents' association; and (v) psychological variables, namely emotional social support and instrumental social support. Age was entered as a dichotomous variable: young-old (65–74 years) and old-old (≥75 years).

DISCUSSION:
Our findings identified factors related to the onset of depressed mood and of frailty during the COVID-19 pandemic. Factors related to the onset of depressed mood were subjective memory impairment and difficulty in device usage; factors related to the onset of frailty were being older, subjective memory impairment, lack of emotional social support, poor subjective health, and difficulty in social participation. Of these factors, subjective memory impairment correlated with both new depressed mood and new frailty.
Older people with subjective memory impairment may therefore be a high-risk group for depressed mood and frailty during this pandemic.
We found that the onset rates of depressed mood and frailty over a 2-year period that included the COVID-19 pandemic were 25% and 11%, respectively. Because we had no control group, we could not differentiate the effect of the COVID-19 pandemic from that of time-dependent factors (e.g., the normal ageing process). However, the 2-year onset rate of depressed mood (25%) was higher than the previously reported 3-year incidence of depressive symptoms (approximately 16%).11 Participants in that study,11 like those in the present study, were community residents aged 65 years or older who were not receiving long-term care; however, our study was conducted in metropolitan Tokyo, whereas the previous study sampled participants from 40 local government administrative divisions.
We also found a slightly higher 2-year onset rate of frailty (11%) than the 5-year frailty incidence of 8% previously reported for older adults aged 65–70 years.14 This difference warrants careful interpretation, because the participants and observation periods differed between the two surveys. Moreover, our survey was conducted in the first 2 months after the government declared a state of emergency and people started practising social distancing; hence, the effect of social distancing on frailty may not have been fully captured.
The social costs of depression and frailty are heavy. Although the costs of depression and frailty specifically in older people are unclear, the total cost of depression in Japan is 2 trillion yen (equivalent to 18 billion USD).28 There appear to be no figures for the total societal cost of frailty, but individual-level increases in care costs have been reported in Japan29 and Germany.30 The present findings suggest that the COVID-19 pandemic may cause a substantial long-term social burden through its effect on depressed mood.
Social distancing is a reasonable strategy to combat COVID-19, but it is crucial for older persons to maintain social connections to reduce depressed mood and frailty. Humans are social beings, regardless of nationality and cultural background, and social distancing may cause substantial psychological distress; it is therefore important to clarify the effect of social distancing on short- and long-term mental health.31 A narrative review that included several cross-sectional descriptive studies reported a depression prevalence ranging from 15% to 47% during the COVID-19 pandemic.32 According to Puccinelli et al.,33 physical activity during the period of social distancing was lower than before the pandemic. They also reported a bidirectional relationship between depression and physical inactivity, which suggests that social distancing is associated with a large increase in physical and mental vulnerability.
To address this adverse effect of social distancing, one potential solution is the use of remote tools, which can help frail older people maintain social connections.34 As many older people are unfamiliar with modern communication tools such as social networking services or email, more traditional devices such as telephones may be preferable.
For example, we have started a telephone outreach service for frail older people in a large housing complex in another area of Tokyo, which has proved effective.35
Immediate action to help older people worldwide is essential. However, constructing a systematic telephone outreach network for older people with memory impairment is more difficult during a pandemic. Our experience with a community-based participatory research framework36, 37 indicates that building networks for isolated older people takes time and effort in the real world. We overcame this difficulty by ensuring that specialists such as doctors, psychologists, and public health nurses developed face-to-face relationships with residents and community workers over a period of years. A useful strategy would be to focus on building age-friendly communities and to maintain an effective outreach network that prepares individuals, especially frail older people, for future crises. AGE Platform Europe38 recommends extending the scope of the ‘reasonable accommodation’ concept beyond disability to include older persons. In the current ‘super-aged society’, specific interventions are needed to protect frail older people, who are easily isolated from society and prone to depressed mood and frailty, without compromising their freedom.
Finally, long-term studies suggest that disasters have a lasting effect on the mental health of victims.39, 40 Future studies are needed to identify the long-term influence of COVID-19.

Strengths and limitations
A strength of this study is that it was conducted during the national state of emergency (i.e., April and May 2020). This was possible because of 10 years of cooperation and trust building between the research team and local government. In addition, because we used pre-existing questionnaires, the study did not disrupt essential government work. Accordingly, we could estimate the onset of depressed mood and frailty during the COVID-19 pandemic.
This study had some limitations. First, as mentioned above, we lacked a control group; this is a major limitation. Second, this was a self-report mail survey, and we did not collect objective data. Third, depressed mood was defined using the KCL. Although frailty assessed by the KCL has been reported to be adequate for cross-cultural studies,23 depressed mood assessed by the KCL has not been sufficiently validated. However, because the KCL is used universally in the Japanese public sector and this was a local government survey, we used and analysed KCL data. We also need to establish the equivalence of KCL-defined depressed mood with widely used measures such as the Geriatric Depression Scale9 to share these findings internationally. Fourth, older people certified on the LTCI scheme, who may constitute the highest-risk group, were excluded from the survey. Fifth, nutritional status and physical performance play important roles in preventing depression and frailty, but they were not analysed in this study.
Keywords: COVID-19, depressed mood, epidemiology, frailty, memory impairment
INTRODUCTION:
The coronavirus disease 2019 (COVID-19) pandemic has spread around the world. In Japan, the number of patients rapidly increased in March 2020, and the government declared a state of emergency on April 7, 2020. Because social distancing was recommended, nonessential businesses, schools, sports and recreational facilities, and places of worship were closed, and residents were asked to stay in their homes. Although the government took no mandatory action to keep people at home, the number of people outside decreased substantially during the emergency state; according to the Japanese Cabinet Office, foot traffic at the five large stations in the Tokyo metropolitan area, measured by mobile phone geographical data (approved by owners for public use), decreased by 68.9% to 87.3% compared with average data for January and February.1
The risk of severe illness from COVID-19 increases with age, and older adults are at the highest risk.2 This is because: (i) frailty in older adults increases the risk of various infections and reduces all aspects of the immune response; and (ii) older people have multiple comorbidities and more hospitalisations, which increases the chance of infection during a pandemic.3 Thus, older adults have been particularly cautious about the risk of infection and have avoided in-person contact with others. However, social isolation in older people is a serious public health concern, because it increases the risk of physical and mental health problems.4 According to Santini et al.,5 social disconnection puts older adults at greater risk of depression. A comparison of the National Health Interview Survey in 2018 and 2020 shows that psychological distress and loneliness increased during the COVID-19 pandemic.6 In addition, social distancing is a risk factor for progressive frailty, as it reduces physical activity.7
In Japan, several measures have been used to assess depressed mood among older people. The most widely used is the Kihon Checklist (KCL), described in the Methods section. One survey that used the KCL found that 25% of older people had depressed mood.8 However, to the best of our knowledge, no previous studies have used the KCL to assess the onset of depressed mood. Another widely used scale is the 15-item Geriatric Depression Scale,9 on which scores of five or greater indicate depressive symptoms. A study of older people in Japan found a depressive-symptom prevalence of 25%.10 A large-scale multicentre longitudinal study showed that the 3-year incidence of depressive symptoms was 16.5% for men and 15.7% for women.11
In Japan, frailty is often assessed using the KCL or the Japanese version of the Cardiovascular Health Study criteria (CHS).12 A meta-analysis of studies that used the CHS found a pooled frailty prevalence of 7.4% (95% confidence interval (CI) 6.1–9.0).13 However, to the best of our knowledge, there are no CHS data on the incidence of frailty. One study that used the KCL found a frailty prevalence of 8% and a 5-year onset of frailty of 8%.14
The aim of this study was to identify psychological and physical changes in older people by comparing 2020 data (collected during the COVID-19 pandemic) with 2018 data from the same population. An additional aim was to identify factors related to these psychological and physical changes.
METHODS:
Since 2010, we have conducted an epidemiological survey of older people living in one district of Tokyo.15, 16, 17 Questionnaires are usually mailed in April; in 2020, Japan was in an emergency state at this time. Although there was substantial societal disruption, the local government decided to mail the questionnaires as planned, to obtain a rapid assessment of the situation and to prioritise focused support.

Participants
In close collaboration with local government, we have conducted epidemiological surveys of all older people (i.e., individuals aged 65 years or over, the official definition of ‘older people’ in Japan) who are not certified in the long-term care insurance (LTCI) scheme and live in one district of Tokyo. Respondents are divided into two groups by birth month: those born between April and September comprise group 1, and those born between October and March comprise group 2. Each group is surveyed every 2 years, with the two groups alternating by year (the group 2 survey started in 2010); this annual alternation equalises the yearly workload for local government. The flow of the project is shown in Figure 1. The participants of this study were older people from group 1 who responded to both the 2018 survey and the 2020 survey; the study flow is shown in Figure 2.
Japan's LTCI is a mandatory program that provides institutional, home, and community-based services for older persons. To use long-term care services, applicants must receive long-term care need certification, which is determined by a committee of specialists.18
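A minimal sketch of the birth-month grouping described above (Python; the function name is ours, for illustration only):

def survey_group(birth_month: int) -> int:
    # Months are 1-12; April-September -> group 1, October-March -> group 2.
    return 1 if 4 <= birth_month <= 9 else 2

print(survey_group(5), survey_group(11))  # 1 2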
Setting
This study was conducted in one district located in the centre of the Tokyo metropolitan area. The total population is approximately 67 000, including 11 000 people aged 65 years or over. According to publicly available data, the LTCI certification rate of this district is 20.2%.19 Every year, the local government mails a questionnaire to respondents in April, and respondents are asked to mail it back by May 29.

Measures
As this survey was a joint project with local government, it included the KCL. The KCL was developed by the Japanese Ministry of Health, Labour and Welfare to identify older people at risk of requiring care/support, and is widely used by local governments to assess health and care needs.20 The KCL comprises 20 items about the overall health status of older people and five items that assess depressed mood.21 Response options for each item are ‘yes’ and ‘no’. Depressed mood and frailty scores were derived from KCL responses.

Main outcome
Depressed mood
The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood.8

Frailty
The 20 KCL items that assess overall health status were used to measure frailty. Satake et al.22 noted that the total KCL score is strongly correlated with frailty as defined in the CHS criteria. A cutoff of 7/8 on the 20 KCL health items (i.e., a total score of 8 or more) was used as the threshold to identify frailty. The KCL has been shown to be adequate for cross-cultural studies and suitable for assessing frailty among older people in multiple cohorts.23
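A minimal sketch of the two KCL-derived outcome definitions, assuming the five mood items and 20 health items have each been coded as 0/1 lists (this list-based representation is ours; the official KCL item numbering is not reproduced here):

def has_depressed_mood(mood_items):
    # Five 0/1 answers; two or more 'yes' responses indicate depressed mood.
    return sum(mood_items) >= 2

def is_frail(health_items):
    # Twenty 0/1 answers; the 7/8 cutoff means a total score of 8 or more.
    return sum(health_items) >= 8

print(has_depressed_mood([1, 0, 1, 0, 0]))  # True (two 'yes' answers)
print(is_frail([1] * 7 + [0] * 13))         # False (score 7 is below cutoff)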
Covariates
Basic information
We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above, or not), and residential status (new resident or not, with the cutoff set at 10 years of residence in the current location).

Memory-related variables (subjective memory impairment)
We assessed participants' forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community-based Integrated Care System-21 items (DASC-21),24 which is widely used within the Japanese national dementia strategy.

Physical health-related variables
Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four-item Likert scale, and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.

Daily life competence
Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST-IC).25, 26 The JST-IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used one item from each domain: ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association.’ Potential item responses were ‘possible’ or ‘impossible.’

Psychological variables
Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al.27
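A minimal sketch of how two of the covariates above can be derived (formulas only; the variable and function names are illustrative):

def bmi(height_cm: float, weight_kg: float) -> float:
    # Body mass index = weight (kg) / height (m) squared.
    metres = height_cm / 100.0
    return weight_kg / (metres * metres)

def is_new_resident(years_at_current_address: float) -> bool:
    # Cutoff set at 10 years of residence in the current location.
    return years_at_current_address < 10

print(round(bmi(160, 55), 1), is_new_resident(4))  # 21.5 True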
Potential item responses were ‘possible’ or ‘impossible.’ Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC). 25 , 26 The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’ Psychological variables Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al. 27 Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al. 27 Basic information We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location). We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location). Memory‐related variables (subjective memory impairment) We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21), 24 which is widely used with the Japanese national dementia strategy. We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21), 24 which is widely used with the Japanese national dementia strategy. Physical health‐related variables Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health. Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health. 
Daily life competence Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC). 25 , 26 The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’ Daily life competence was assessed using items adapted from the Japan Science and Technology Agency Index of Competence (JST‐IC). 25 , 26 The JST‐IC consists of 16 items that assess four domains: device usage (four items), information gathering (four items), life management (four items), and social participation (four items). We used the items ‘to operate a video recorder,’ ‘to watch educational programs,’ ‘to take care of your family members or acquaintances,’ and ‘to assume roles such as the leader in a residents' association’ from each domain. Potential item responses were ‘possible’ or ‘impossible.’ Psychological variables Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al. 27 Emotional social support was assessed by a question about whether the participant had someone to consult when they were ill. Instrumental social support was assessed by a question about whether the participant had someone who would take care of them when they were ill. Both items were adapted from the report by Muraoka et al. 27 As this survey was a joint project with local government, it included the KCL. The KCL was developed by the Japanese Ministry of Health, Labour and Welfare to identify older people at risk of requiring care/support, and is widely used by local governments to assess health and care needs. 20 The KCL comprises 20 items about the overall health status of older people and five items that assess depressed mood. 21 Response options for each item are ‘yes’ and ‘no’. Depressed mood and frailty scores were derived from KCL responses. Main outcome Depressed mood The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood. 8 The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood. 8 Frailty The 20 KCL items that assess overall health status were used to measure frailty. Satake et al. 22 noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts. 
23 The 20 KCL items that assess overall health status were used to measure frailty. Satake et al. 22 noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts. 23 Depressed mood The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood. 8 The five KCL items that assess depressed mood measure lack of fulfilment, lack of joy, difficulty in doing what one could easily do before, helplessness, and tiredness without a reason. Participants who answered yes to two or more items were considered to have depressed mood. 8 Frailty The 20 KCL items that assess overall health status were used to measure frailty. Satake et al. 22 noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts. 23 The 20 KCL items that assess overall health status were used to measure frailty. Satake et al. 22 noted that the total KCL score is strongly correlated with frailty, as defined in the CHS criteria. Cutoffs of 7/8 for the 20 KCL health items were used as the threshold to identify frailty. The KCL was shown to be adequate for cross‐cultural studies and to be suitable for addressing frailty demands among elderly people in multiple cohorts. 23 Covariates Basic information We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location). We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location). Memory‐related variables (subjective memory impairment) We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21), 24 which is widely used with the Japanese national dementia strategy. We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21), 24 which is widely used with the Japanese national dementia strategy. Physical health‐related variables Body height and body weight were assessed to calculate body mass index. 
Covariates: Basic information: We collected data on age, gender, living status (living alone or not), marital status (married or not), working status (working or not), education (completed mandatory education and above), and being a new resident or not (the cutoff was set at 10 years of residence in current location). Memory‐related variables (subjective memory impairment): We assessed participants' forgetfulness about the location of things, and forgetfulness about things that happened a few minutes earlier. Questions were adapted from the Dementia Assessment Sheet for Community‐based Integrated Care System‐21 items (DASC‐21), 24 which is widely used with the Japanese national dementia strategy. Physical health‐related variables: Body height and body weight were assessed to calculate body mass index. Subjective health was assessed using a four‐item Likert scale and responses were categorised overall as indicating ‘healthy’ or ‘not healthy.’ The presence of hypertension, stroke, heart disease, diabetes mellitus, hyperlipidaemia, and cancer was recorded. Participants were also asked about their concern regarding their oral health.
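A minimal sketch of how these covariates might be coded for analysis follows; all field names are hypothetical, and the binary codings simply mirror the definitions in the text (e.g., a new resident is anyone with fewer than 10 years at the current address).

```python
# Illustrative covariate coding; column/field names are hypothetical.

def bmi(height_cm: float, weight_kg: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    height_m = height_cm / 100
    return weight_kg / (height_m ** 2)

def code_covariates(row: dict) -> dict:
    """Turn raw questionnaire fields into the binary/continuous covariates."""
    return {
        "living_alone": int(row["living_status"] == "alone"),
        "married": int(row["marital_status"] == "married"),
        "working": int(row["working_status"] == "working"),
        "new_resident": int(row["years_at_address"] < 10),
        "bmi": bmi(row["height_cm"], row["weight_kg"]),
    }
```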
Data analysis: Of participants considered not to have depressed mood in the 2018 survey, those judged to have depressed mood in the 2020 survey were regarded as the ‘new depressed mood’ group. Similarly, of participants considered not to show frailty in the 2018 survey, those judged to show frailty in the 2020 survey were regarded as the ‘new frailty’ group. Characteristics of the new depressed mood group and new frailty group were compared with controls using the Chi‐square test for nominal variables and the t‐test for continuous variables. Multivariate logistic regression analyses were subsequently performed. The dependent variables were new depressed mood and new frailty, and factors showing significant associations in the previous bivariate analysis were included. Age was converted to a two‐value item: young‐old (65–74 years) and old‐old (≥75 years). P < 0.05 was regarded as statistically significant. For the memory‐related items (i.e., forgetfulness about the location of things and forgetfulness about things that happened a few minutes earlier), only one item was included in the multivariate analysis to avoid multicollinearity. In both multivariate logistic regression analyses, the variance inflation factor was less than 2.0 for all items, indicating no multicollinearity. Analyses were performed using SPSS version 25 (IBM Corp, Armonk, NY, USA).
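The analysis flow just described (defining new-onset cases, bivariate screening, then multivariate logistic regression with a variance-inflation-factor check) can be outlined in code. The paper used SPSS; the Python version below, using pandas, scipy, and statsmodels, is only an equivalent sketch under assumed column names.

```python
# Sketch of the analysis pipeline described above (not the authors' SPSS code).
# Assumes a pandas DataFrame with 0/1 outcome columns for 2018 and 2020
# and covariate columns; all column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency, ttest_ind
from statsmodels.stats.outliers_influence import variance_inflation_factor

def new_onset(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Restrict to participants negative in 2018; flag 2020-positive cases."""
    at_risk = df[df[f"{outcome}_2018"] == 0].copy()
    at_risk["new_case"] = at_risk[f"{outcome}_2020"]
    return at_risk

def bivariate_screen(df, nominal_cols, continuous_cols, alpha=0.05):
    """Chi-square for nominal and t-test for continuous covariates."""
    keep = []
    for col in nominal_cols:
        _, p, _, _ = chi2_contingency(pd.crosstab(df[col], df["new_case"]))
        if p < alpha:
            keep.append(col)
    for col in continuous_cols:
        _, p = ttest_ind(df.loc[df["new_case"] == 1, col],
                         df.loc[df["new_case"] == 0, col])
        if p < alpha:
            keep.append(col)
    return keep

def fit_model(df, predictors):
    """Logistic regression on screened predictors, with age dichotomised."""
    df = df.assign(old_old=(df["age"] >= 75).astype(int))  # young-old vs old-old
    X = sm.add_constant(df[predictors + ["old_old"]].astype(float))
    vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
    if max(vifs) >= 2.0:  # the paper reported all VIFs below 2.0
        print("warning: possible multicollinearity among predictors")
    return sm.Logit(df["new_case"], X).fit()
```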
Ethical considerations: The study protocol was approved by the ethics committee of the Tokyo Metropolitan Institute of Gerontology. Written informed consent was obtained from all participants. Introduction: Since 2010, we have conducted an epidemiological survey of older people living in one district of Tokyo. 15 , 16 , 17 Questionnaires are usually mailed in April; in 2020, Japan was under a state of emergency at that time. Although there was substantial societal disruption, the local government decided to mail the questionnaires as planned to obtain a rapid assessment of the situation and to prioritise focused support. Participants: In close collaboration with local government, we have conducted epidemiological surveys of all older people (i.e., individuals aged 65 years or over, which is the official definition of ‘older people’ in Japan) not certified in the long‐term care insurance (LTCI) scheme and living in one district of Tokyo. Respondents are divided into two groups by birth month: those born between April and September comprise group 1 and those born between October and March comprise group 2. Groups 1 and 2 complete the surveys in odd years and even years, respectively; the group 2 survey started in 2010 and has been conducted every 2 years. The annual alternation of groups 1 and 2 equalises the yearly workload for local government. The flow of the project is shown in Figure 1. The participants of this study were older people from group 1 who responded to both the 2018 survey and the 2020 survey. The study flow is shown in Figure 2. Figure 1. Flow chart of the project. Figure 2. Flow chart of the study. Japan's LTCI is a mandatory program that provides institutional, home, and community‐based services for older persons. To use long‐term care services, applicants must receive long‐term care need certification, which is determined by a committee of specialists. 18 Setting: This study was conducted in one district which is located in the centre of the Tokyo metropolitan area. The total population is approximately 67 000, including 11 000 people aged 65 years or over. According to publicly available data, the LTCI certification rate of this district is 20.2%. 19 Every year, the local government mails a questionnaire to respondents in April. Respondents are asked to mail the questionnaire back by May 29.
RESULTS: The number of mailed questionnaires for residents aged 65 years or above and born between April and September was 4914 in 2018, and 2621 questionnaires were retrieved (response rate 53.3%). Similarly, the number of mailed questionnaires was 4973 in 2020, and 2649 questionnaires were retrieved (response rate 53.3%). A total of 1736 residents responded to both surveys (i.e., the rate of analysed questionnaires per mailed questionnaires was 35.3% and 34.6%, respectively). In 2018, 29% of participants had depressed mood and 10% showed frailty. In 2020, 38% of participants had depressed mood and 16% showed frailty. A simple comparison showed that the rates of depressed mood and frailty increased in 2020 (Table 1). Table 1. Ratio of participants who had depressed mood and frailty. Of the 1736 participants, 509 participants had depressed mood and 1227 did not in 2018. Of these 1227 participants, 307 had depressed mood in 2020. That is, the depressed mood progression rate was 25%. Of the 1736 participants, 171 participants had frailty and 1565 did not in 2018. Of these 1565 participants, 165 had frailty in 2020 (Fig. 3).
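The incidence figures follow directly from these counts; a quick arithmetic check:

```python
# Quick check of the incidence figures reported above.
new_depressed, at_risk_mood = 307, 1227    # negative for depressed mood in 2018
new_frail, at_risk_frailty = 165, 1565     # negative for frailty in 2018

print(f"depressed mood incidence: {new_depressed / at_risk_mood:.0%}")   # -> 25%
print(f"frailty incidence: {new_frail / at_risk_frailty:.0%}")           # -> 11%
```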
Figure 3. Incidence of depressed mood and frailty.
The comparative characteristics of participants who developed depressed mood in 2020 and those who did not are shown in Table 2. Being older, not married, low education level, residing in current location for longer than 10 years, being forgetful about the location of things, being forgetful about things that happened a few minutes earlier, difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, and lack of emotional social support were related to new depressed mood. The multivariate logistic regression analysis showed that subjective memory impairment (forgetfulness about the location of things) (odds ratio (OR) = 1.47, 95% CI: 1.11–1.94) and difficulty using devices (operating a video recorder) (OR = 1.45, 95% CI: 1.04–2.03) were significantly associated with new depressed mood (Table 3). Table 2. Comparative characteristics of the new depressed mood group. Table 3. Factors associated with incidence of depressed mood in multivariate analysis. Factors included in this model analysis: (i) basic variables such as age, education, new residential status; (ii) forgetfulness about the location of things (to assess subjective memory impairment); (iii) items to assess instrumental activities of daily living such as difficulty operating a video recorder, difficulty watching educational programs, and difficulty taking care of family members or acquaintances; and (iv) emotional social support (to assess psychological variables). Age was converted to a dichotomous variable: young‐old (65–74 years) and old‐old (≥75 years). The comparative characteristics of older people who had developed frailty in 2020 and those who had not are shown in Table 4. Being older, not married, living alone, low education level, not having a job, being forgetful about the location of things, being forgetful about things that happened a few minutes earlier, poor subjective health, lack of concern regarding their oral health, difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, difficulty in assuming roles such as the leader in a residents' association, lack of emotional social support, and lack of instrumental social support were related to new frailty. The multivariate logistic regression analysis showed that being young‐old (OR = 1.87, 95% CI: 1.18–2.97), subjective memory impairment (forgetfulness about things that happened a few minutes earlier) (OR = 2.18, 95% CI: 1.47–3.22), lack of emotional social support (OR = 2.64, 95% CI: 1.34–5.13), poor subjective health (OR = 2.27, 95% CI: 1.16–4.44), and difficulty in social participation (difficulty assuming roles such as the leader in a residents' association) (OR = 1.92, 95% CI: 1.25–2.95) were significantly associated with new frailty (Table 5). Table 4. Comparative characteristics of the new frailty group. Table 5. Factors associated with incidence of frailty in multivariate analysis.
Factors included in this model analysis: (i) basic variables such as age, living status, marital status, working status, and education; (ii) forgetfulness about things that happened a few minutes earlier (to assess subjective memory impairment); (iii) physical health‐related items such as subjective health and oral health care; (iv) items about instrumental activities of daily living such as difficulty operating a video recorder, difficulty watching educational programs, difficulty taking care of family members or acquaintances, and difficulty assuming roles (e.g., the leader) in a residents' association; and (v) psychological variables such as emotional social support and instrumental social support. Age was converted to a dichotomous variable: young‐old (65–74 years) and old‐old (≥75 years). DISCUSSION: Our findings identified factors related to incidence of depressed mood and that of frailty during the COVID‐19 pandemic. Factors related to incidence of depression were subjective memory impairment and difficulty in device usage, and factors related to incidence of frailty were being older, subjective memory impairment, lack of emotional social support, poor subjective health, and difficulty in social participation. Of these factors, subjective memory impairment correlated with both new depressed mood and new frailty. Older people with subjective memory impairment may be a high‐risk group for depressed mood and frailty during this pandemic. We found that the incidences of depressed mood and frailty over a 2‐year period that included the COVID‐19 pandemic were 25% and 11%, respectively. Because we had no control group, we could not differentiate the effect of the COVID‐19 pandemic from the effect of time‐dependent factors (e.g., the normal ageing process). However, the incidence of depressed mood (i.e., 25%) over 2 years was higher than the previously reported 3‐year incidence of depressive symptoms (of approximately 16%). 11 In addition, participants in the previous study 11 (like those in the present study) were community residents aged 65 years or older who were not receiving long‐term care. However, our study was conducted in metropolitan Tokyo, whereas the previous study sampled participants from 40 local government administrative divisions. We found a slightly higher incidence of frailty (i.e., 11%) over 2 years than the 5‐year frailty incidence of 8% previously reported for older adults aged 65–70 years. 14 This difference warrants careful interpretation, because the participants and the observation periods were not identical between the two surveys. Our survey was conducted in the first 2 months after the government declared a state of emergency and people started practising social distancing; hence, the effect of social distancing on frailty may not have been fully accounted for. The social costs of depression and frailty are heavy. Although the costs of depression and frailty in older people are unclear, the total cost of depression in Japan is 2 trillion yen (equivalent to 18 billion USD). 28 There seem to be no figures for the total societal cost of frailty. However, individual‐level increases in care costs have been reported in Japan 29 and Germany. 30 The present findings suggest that the COVID‐19 pandemic may cause a substantial long‐term social burden through its effect on depressed mood. Social distancing is a reasonable strategy to combat COVID‐19, but it is crucial for older persons to maintain social connections to reduce depressed mood and frailty.
Humans are social beings, despite differences in nationality and cultural background, and social distancing may cause substantial psychological distress; therefore, it is important to clarify the effect of social distancing on short‐ and long‐term mental health. 31 A narrative review that included several cross‐sectional descriptive studies reported a depression prevalence ranging from 15% to 47% during the COVID‐19 pandemic. 32 According to Puccinelli et al., 33 physical activity level during the period of social distancing was lower than that prior to the pandemic period. They also reported a bidirectional effect of depression and physical inactivity, which suggests that social distancing is associated with a large increase in physical and mental vulnerability. To address this adverse effect of social distancing, one potential solution is the use of remote tools, which can help frail older people maintain social connections. 34 As many older people are unfamiliar with modern communication tools like social network services or email, the use of more traditional devices like telephones may be preferable. For example, we have started a telephone outreach service for frail older people in a large housing complex in another area of Tokyo, which has proved effective. 35 Immediate action to help older people worldwide is essential. However, the construction of a systematic telephone outreach network for older people with memory impairment is more difficult during a pandemic. Our experience with a community‐based participatory research framework 36 , 37 indicates that building networks for isolated older people takes time and effort in the real world. We overcame this difficulty by ensuring that specialists such as doctors, psychologists, and public health nurses develop face‐to‐face relationships with residents or community workers over a period of years. A useful strategy would be to focus on building age‐friendly communities and to maintain an effective outreach network to prepare individuals, especially frail older people, for future crisis situations. AGE Platform Europe 38 recommends extending the scope of the ‘reasonable accommodation’ concept beyond disability and including older persons. In the current ‘super‐aged society’, specific interventions are needed to protect frail older people (who are easily isolated from society and experience depressed mood and frailty) without compromising their freedom. Finally, long‐term studies suggest that disasters have a lasting effect on the mental health of victims. 39 , 40 Future studies are needed to identify the long‐term influence of COVID‐19. STRENGTHS AND LIMITATIONS: A strength of this study is that it was conducted during the national state of emergency (i.e., April and May, 2020). This was possible because of the 10 years of cooperation and trust building between the research team and local government. In addition, as we used pre‐existing questionnaires, the study did not disrupt essential government work. Accordingly, we could reveal the incidence of depression and frailty during the COVID‐19 pandemic. This study had some limitations. First, as mentioned above, we lacked a control group. This is a major study limitation. Second, this was a self‐report mail survey and we did not collect objective data. Third, depressed mood was defined using the KCL. Although frailty assessed by the KCL is reported to be adequate for cross‐cultural studies, 23 depressed mood assessed by the KCL has not been sufficiently validated.
However, because the KCL is used universally in the Japanese public sector and this was a local government survey, we used and analysed KCL data. We also need to establish the equivalence of KCL‐defined depressed mood with widely used measures such as the Geriatric Depression Scale 9 so that the research outcome can be shared internationally. Fourth, older people certified on the LTCI scheme, who may constitute the highest‐risk group, were excluded from the survey. Fifth, nutritional status and physical performance play important roles in preventing depression and frailty, but they were not analysed in this study.
Background: The study aim was to identify depressed mood and frailty and their related factors in older people during the coronavirus disease 2019 (COVID-19) pandemic. Methods: Since 2010, we have conducted questionnaire surveys of all older residents, who are not certified in the long-term care insurance scheme, living in one district of the Tokyo municipality. These residents are divided into two groups by birth month, that is, those born between April and September and those born between October and March, and each group completes the survey every 2 years (in April and May). Study participants were older residents who were born between April and September and who completed the survey in spring 2018 and in spring 2020, the pandemic period. Depressed mood and frailty were assessed using the Kihon Checklist, which is widely used by local governments in Japan. We had no control group in this study. Results: A total of 1736 residents responded to both surveys. From 2018 to 2020, the depressed mood rate increased from 29% to 38%, and frailty increased from 10% to 16%. The incidence of depressed mood and frailty was 25% and 11%, respectively. Incidence of depressed mood was related to subjective memory impairment and difficulty in device usage, and incidence of frailty was related to being older, subjective memory impairment, lack of emotional social support, poor subjective health, and social participation difficulties. Conclusions: Older people with subjective memory impairment may be a high-risk group during the coronavirus pandemic. Telephone outreach for frail older people could be an effective solution. We recommend extending the scope of the 'reasonable accommodation' concept beyond disability and including older people to build an age-friendly and crisis-resistant community.
null
null
13,475
333
[ 667, 81, 243, 86, 2258, 285, 54, 87, 787, 65, 54, 70, 131, 61, 242, 26, 274 ]
20
[ "items", "frailty", "kcl", "assessed", "health", "mood", "depressed", "depressed mood", "social", "participants" ]
[ "covid 19 pandemic", "older people japan", "infection pandemic older", "japan number patients", "japan found prevalence" ]
null
null
[CONTENT] COVID‐19 | depressed mood | epidemiology | frailty | memory impairment [SUMMARY]
[CONTENT] COVID‐19 | depressed mood | epidemiology | frailty | memory impairment [SUMMARY]
[CONTENT] COVID‐19 | depressed mood | epidemiology | frailty | memory impairment [SUMMARY]
null
[CONTENT] COVID‐19 | depressed mood | epidemiology | frailty | memory impairment [SUMMARY]
null
[CONTENT] Aged | COVID-19 | Frail Elderly | Frailty | Geriatric Assessment | Humans | Independent Living | Japan | Pandemics | SARS-CoV-2 | Tokyo [SUMMARY]
[CONTENT] Aged | COVID-19 | Frail Elderly | Frailty | Geriatric Assessment | Humans | Independent Living | Japan | Pandemics | SARS-CoV-2 | Tokyo [SUMMARY]
[CONTENT] Aged | COVID-19 | Frail Elderly | Frailty | Geriatric Assessment | Humans | Independent Living | Japan | Pandemics | SARS-CoV-2 | Tokyo [SUMMARY]
null
[CONTENT] Aged | COVID-19 | Frail Elderly | Frailty | Geriatric Assessment | Humans | Independent Living | Japan | Pandemics | SARS-CoV-2 | Tokyo [SUMMARY]
null
[CONTENT] covid 19 pandemic | older people japan | infection pandemic older | japan number patients | japan found prevalence [SUMMARY]
[CONTENT] covid 19 pandemic | older people japan | infection pandemic older | japan number patients | japan found prevalence [SUMMARY]
[CONTENT] covid 19 pandemic | older people japan | infection pandemic older | japan number patients | japan found prevalence [SUMMARY]
null
[CONTENT] covid 19 pandemic | older people japan | infection pandemic older | japan number patients | japan found prevalence [SUMMARY]
null
[CONTENT] items | frailty | kcl | assessed | health | mood | depressed | depressed mood | social | participants [SUMMARY]
[CONTENT] items | frailty | kcl | assessed | health | mood | depressed | depressed mood | social | participants [SUMMARY]
[CONTENT] items | frailty | kcl | assessed | health | mood | depressed | depressed mood | social | participants [SUMMARY]
null
[CONTENT] items | frailty | kcl | assessed | health | mood | depressed | depressed mood | social | participants [SUMMARY]
null
[CONTENT] older | risk | people | older people | frailty | found | adults | older adults | 19 | pandemic [SUMMARY]
[CONTENT] items | kcl | assessed | frailty | health | status | mood | depressed mood | depressed | care [SUMMARY]
[CONTENT] difficulty | ci | 95 | 95 ci | depressed | mood | depressed mood | frailty | old | things [SUMMARY]
null
[CONTENT] items | frailty | kcl | depressed mood | depressed | mood | assessed | health | social | status [SUMMARY]
null
[CONTENT] 19 [SUMMARY]
[CONTENT] 2010 | one | Tokyo ||| two | birth month | between April and September | between October and March | 2 years | April | May ||| between April and September | spring 2018 | spring 2020 ||| the Kihon Checklist | Japan ||| [SUMMARY]
[CONTENT] 1736 ||| 2018 | 2020 | 29% to 38% | 10% to 16% ||| 25% and | 11% ||| [SUMMARY]
null
[CONTENT] 19 ||| 2010 | one | Tokyo ||| two | birth month | between April and September | between October and March | 2 years | April | May ||| between April and September | spring 2018 | spring 2020 ||| the Kihon Checklist | Japan ||| ||| 1736 ||| 2018 | 2020 | 29% to 38% | 10% to 16% ||| 25% and | 11% ||| ||| ||| ||| [SUMMARY]
null
Evaluation of the optimal neutrophil gelatinase-associated lipocalin value as a screening biomarker for urinary tract infections in children.
25187887
Neutrophil gelatinase-associated lipocalin (NGAL) is a promising biomarker in the detection of kidney injury. Early diagnosis of urinary tract infection (UTI), one of the most common infections in children, is important in order to avert long-term consequences. We assessed whether serum NGAL (sNGAL) or urine NGAL (uNGAL) would be reliable markers of UTI and evaluated the appropriate diagnostic cutoff value for the screening of UTI in children.
BACKGROUND
A total of 812 urine specimens and 323 serum samples, collected from pediatric patients, were analyzed. UTI was diagnosed on the basis of culture results and symptoms reported by the patients. NGAL values were measured by using ELISA.
METHODS
NGAL values were higher in the UTI cases than in the non-UTI cases, but the difference between the values was not statistically significant (P=0.190 for sNGAL and P=0.064 for uNGAL). The optimal diagnostic cutoff values of sNGAL and uNGAL for UTI screening were 65.25 ng/mL and 5.75 ng/mL, respectively.
RESULTS
We suggest that it is not appropriate to use NGAL as a marker for early diagnosis of UTI in children.
CONCLUSIONS
[ "Acute-Phase Proteins", "Area Under Curve", "Biomarkers", "Child", "Child, Preschool", "Early Diagnosis", "Enzyme-Linked Immunosorbent Assay", "Female", "Humans", "Infant", "Lipocalin-2", "Lipocalins", "Male", "Mass Screening", "Proto-Oncogene Proteins", "ROC Curve", "Urinary Tract Infections" ]
4151003
INTRODUCTION
Neutrophil gelatinase-associated lipocalin (NGAL) is a 25 kDa protein and a member of the lipocalin family [1]. NGAL is covalently bound to matrix metalloproteinase 9 expressed by neutrophils and exhibits antibacterial properties, and so, this lipocalin is considered a component of the innate immune system [2]. NGAL is also expressed by other cells, such as kidney epithelial and tubular cells [1, 3]. Although NGAL is expressed only at very low levels in several human tissues, its expression is markedly increased in injured epithelial cells, including in those of the kidney, colon, liver, and lung [1]. These findings provide a potential molecular mechanism for the role of NGAL in affecting the epithelial phenotype, both during kidney development and following acute kidney injury (AKI) [4]. Urinary tract infection (UTI) is one of the most common infections in children. Early diagnosis of UTI is important because delayed diagnosis may result in treatment failure and in long-term consequences, including renal scarring, hypertension, and chronic renal failure [5]. Urine culture is the standard approach for the diagnosis of UTI, but it has limitations. Difficulty in specimen collection and interpretation of inadequately collected specimens may contribute to misdiagnosis [5]. Furthermore, positive culture results require 2-3 days in order to identify the cause of the infection. Given these limitations, urinalysis is commonly used in screening for UTI. However, indicators of UTI in urinalysis, such as pyuria and positive nitrite test results, have some limitations associated with low sensitivity [5]. Nitrite tests produce false negative results in the presence of gram-positive pathogens, and the sensitivity of this test can be decreased when testing urine with a high specific gravity. Similarly, pyuria may be absent in Proteus species infections. In addition, sterile pyuria may occur in noninfectious conditions such as urolithiasis [5]. Therefore, another marker is needed for rapid and accurate diagnosis of UTI to initiate early treatment. Several studies have suggested that urine NGAL (uNGAL) may be associated with UTI [6, 7, 8]. In addition, since UTI can cause neutrophilia, we attempted to evaluate the association between serum NGAL (sNGAL) and UTI. The aim of this study was to assess whether sNGAL and/or uNGAL could be used as reliable UTI markers and to evaluate the appropriate NGAL detection cutoff values for screening of UTI in children.
METHODS
1. Study population This retrospective study was conducted among children who visited Chung-Ang University hospital in Seoul, Korea between February and November 2010. Serum and/or catheterization or midstream urine samples were obtained. Urine culture, urinalysis, serum blood urea nitrogen (BUN) and creatinine (Cr), C-reactive protein (CRP), and whole blood white blood cell (WBC) count results were retrospectively reviewed. A total of 812 urine specimens from 333 patients and 323 serum samples from 208 patients were collected from a total of 444 patients (177 males, 267 females). Both serum and urine specimens were available from 97 patients. Either serum or urine samples were available from the remaining 347 patients. The UTI group consisted of 107 children (58 males, 49 females), and 191 serum samples and 284 urine samples were included. The mean age of the UTI group was 4.29±2.93 yr. The non-UTI group consisted of 337 children (119 males, 218 females), and 232 serum samples and 528 urine samples were included. The mean age of the non-UTI group was 3.87±2.59 yr. Serum BUN and Cr levels were normal in all children in both groups. Whole blood WBC count did not differ significantly between the UTI group and the non-UTI group (12.4×109/L vs. 10.7×109/L, P=0.087). Serum CRP level was significantly higher in the UTI group than in the non-UTI group (3.20 mg/L vs. 0.79 mg/L, P=0.042). 2. Laboratory methods Urinalysis was performed using URiSCAN (YD Diagnostics, Seoul, Korea), and microscopic analysis of urine was performed by using a Sysmex UF-1000i full automatic urine analyzer (Sysmex, Hyogo, Japan). Values of sNGAL and uNGAL were assayed by using a NGAL Rapid ELISA Kit (Bioporto diagnostics, Gentofte, Denmark) following the manufacturer's instructions. Bioporto's NGAL Rapid ELISA Kit assay is a sandwich ELISA performed in microwells coated with an anti-human NGAL monoclonal antibody. Bound NGAL is detected with another monoclonal antibody labeled with biotin, and the assay is developed with horseradish peroxidase (HRP)-conjugated streptavidin and a color-forming substrate. The color intensity is read at 450 nm using PR 3100 TSC Microplate Reader (Bio-Rad Laboratories, Hercules, CA, USA).
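The paper does not describe the calibration step, but quantitative sandwich ELISAs of this kind typically convert the 450 nm absorbance to a concentration via a four-parameter logistic (4PL) standard curve. The sketch below shows that generic step; it is an assumption about the workflow rather than something taken from the kit insert, and the standard-curve values are invented for illustration.

```python
# Generic 4PL standard-curve fit for a sandwich ELISA (assumed workflow).
# Standard concentrations and absorbances below are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """a = min asymptote, b = slope, c = inflection point (EC50), d = max asymptote."""
    return d + (a - d) / (1 + (x / c) ** b)

std_conc = np.array([0.5, 1, 2.5, 5, 10, 25, 50, 100])            # ng/mL (hypothetical)
std_a450 = np.array([0.05, 0.09, 0.2, 0.38, 0.7, 1.3, 1.9, 2.3])  # A450 (hypothetical)

params, _ = curve_fit(four_pl, std_conc, std_a450,
                      p0=[0.05, 1.0, 10.0, 2.5], maxfev=10000)

def a450_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve to recover concentration from absorbance."""
    return c * ((a - d) / (od - d) - 1) ** (1 / b)

print(a450_to_conc(0.5, *params))  # sample absorbance -> estimated NGAL (ng/mL)
```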
3. Definition of UTI
UTI was diagnosed by significant bacteriuria (≥100,000 colony-forming units [CFU]/mL of a single pathogen) without symptoms associated with UTI, or by significant bacteriuria (≥10,000 CFU/mL) with symptoms associated with UTI, including any of the following: fever, vomiting, dysuria, abdominal pain, nausea, and voiding frequency [9]. Additionally, we defined our own criteria for the screening of presumptive UTI (pUTI) by urinalysis markers. First, if pyuria (WBC ≥5/HPF) was present, a diagnosis of pUTI was made. Second, if leukocyte esterase was present, patients were regarded as having pUTI.

4. Statistical analysis
Statistical analysis was performed using IBM SPSS Statistics for Windows, version 19 (SPSS Inc., Chicago, IL, USA). The independent samples t-test was used to compare mean NGAL values between the two groups. ROC analysis was performed to determine the sensitivity, specificity, and optimal diagnostic cutoff values of NGAL in screening for UTI. The most appropriate cutoff value was chosen according to the ROC analysis, and the area under the curve (AUC) was calculated. The statistical significance level was set at P<0.05.
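The diagnostic criteria in section 3 reduce to a small decision rule. A minimal sketch encoding it, with illustrative function and argument names — only the thresholds and symptom list come from the definitions above:

```python
UTI_SYMPTOMS = {"fever", "vomiting", "dysuria", "abdominal pain",
                "nausea", "voiding frequency"}

def classify_uti(cfu_per_ml, symptoms, wbc_per_hpf=None, leukocyte_esterase=False):
    """Apply the study's culture-based UTI definition and its presumptive-UTI
    (pUTI) screening criteria. Names here are illustrative, not from the paper."""
    symptomatic = bool(set(symptoms) & UTI_SYMPTOMS)
    # >=100,000 CFU/mL of a single pathogen, or >=10,000 CFU/mL with symptoms;
    # taken together, the two criteria make >=100,000 CFU/mL sufficient either way.
    uti = cfu_per_ml >= 100_000 or (cfu_per_ml >= 10_000 and symptomatic)
    # pUTI screen: pyuria (WBC >= 5/HPF) or a positive leukocyte esterase test.
    puti = (wbc_per_hpf is not None and wbc_per_hpf >= 5) or leukocyte_esterase
    return {"UTI": uti, "pUTI": puti}

print(classify_uti(50_000, ["fever"], wbc_per_hpf=10))  # {'UTI': True, 'pUTI': True}
```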
RESULTS
The values of sNGAL and uNGAL were elevated in the UTI group compared to the non-UTI group; however, there was no significant difference in the mean values between the groups (Table 1). According to the ROC analysis, the optimal diagnostic cutoff values to predict UTI were 65.25 ng/mL for sNGAL and 5.75 ng/mL for uNGAL. Using a cutoff of 65.25 ng/mL for sNGAL to diagnose UTI, the sensitivity and specificity were 70% (95% confidence interval [CI], 18%-79%) and 35% (95% CI, 20%-74%), respectively (Fig. 1). For uNGAL, using a cutoff value of 5.75 ng/mL to diagnose UTI, the sensitivity and specificity were 70% (95% CI, 21%-83%) and 42% (95% CI, 36%-89%), respectively (Fig. 1).

The results of the pUTI and non-pUTI groups based on the presence or absence of pyuria are presented below. Mean sNGAL values were significantly higher in the pyuria-positive pUTI group than in the pyuria-negative group (120.05 ng/mL vs. 84.08 ng/mL, P=0.013) (Table 1). Mean uNGAL values were also significantly higher in the pyuria-positive pUTI group than in the pyuria-negative, non-pUTI group (67.58 ng/mL vs. 47.59 ng/mL, P=0.007) (Table 1). According to the ROC analysis, the optimal diagnostic cutoff value was 82.5 ng/mL for sNGAL, with 70% sensitivity (95% CI, 23%-86%) and 60% specificity (95% CI, 27%-88%) (Fig. 2). Similarly, the optimal diagnostic cutoff value was 10.3 ng/mL for uNGAL, with 70% sensitivity (95% CI, 20%-87%) and 70% specificity (95% CI, 21%-86%) (Fig. 2).

In examining the presence or absence of leukocyte esterase in the pUTI and non-pUTI group samples, mean sNGAL and uNGAL values were higher in the esterase-positive pUTI group than in the esterase-negative, non-pUTI group (112.09 ng/mL vs. 83.52 ng/mL, and 27.38 ng/mL vs. 9.63 ng/mL), but only the difference in mean uNGAL values between the two groups was statistically significant (P=0.007) (Table 1). According to the ROC analysis, the optimal diagnostic cutoff value to predict pUTI was 75.05 ng/mL for sNGAL and 9.65 ng/mL for uNGAL. Using a cutoff of 75.05 ng/mL of sNGAL for diagnosing pUTI, the sensitivity and specificity were 70% (95% CI, 24%-89%) and 49% (95% CI, 11%-62%), respectively (Fig. 3). For uNGAL, using a cutoff value of 9.65 ng/mL for the presumptive diagnosis of UTI, the sensitivity and specificity were 70% (95% CI, 30%-91%) and 68% (95% CI, 25%-78%), respectively (Fig. 3).

Lastly, we evaluated the uNGAL:Cr ratio between the culture-proven UTI and non-UTI groups. There was no significant difference in the uNGAL:Cr ratio between the two groups (results not shown).
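The report does not state the criterion used to pick the "optimal" cutoffs; the Youden index (sensitivity + specificity − 1) is the usual choice in ROC analysis. The sketch below reproduces that computation with scikit-learn rather than the SPSS used in the study, on synthetic data — the lognormal parameters are invented to give overlapping distributions, and only the group sizes (284 UTI vs. 528 non-UTI urine samples) match the study:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic uNGAL values (ng/mL) for the two groups; distribution shapes are assumed.
ngal = np.concatenate([rng.lognormal(2.5, 1.0, 284),    # UTI group
                       rng.lognormal(2.2, 1.0, 528)])   # non-UTI group
label = np.concatenate([np.ones(284), np.zeros(528)])

fpr, tpr, thresholds = roc_curve(label, ngal)
auc = roc_auc_score(label, ngal)

youden = tpr - fpr            # equals sensitivity + specificity - 1
best = np.argmax(youden)      # cutoff maximizing the Youden index
print(f"AUC={auc:.2f}, cutoff={thresholds[best]:.2f} ng/mL, "
      f"sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```

On real data one would additionally bootstrap the cutoff and report confidence intervals, since Youden-optimal thresholds are notoriously unstable in small samples.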
DISCUSSION
NGAL is a useful marker for the early diagnosis of AKI [1, 4, 10, 11, 12, 13, 14], and several studies have evaluated uNGAL values in the early diagnosis of UTI [3, 6, 7, 8, 15]. Ichino et al. [3] reported increased uNGAL values in pyelonephritis using an experimental rat model, and Yilmaz et al. [4] reported that uNGAL and the uNGAL:Cr ratio were useful markers in the early diagnosis of UTI in children without acute renal injury and/or chronic kidney disease. In contrast, we did not find significant differences in sNGAL or uNGAL values according to the presence or absence of UTI. NGAL was originally isolated from human neutrophils; however, subsequent studies demonstrated that NGAL may also be expressed under certain conditions in cells of the kidney and liver, and in epithelial tissues [16, 17]. Increased values of sNGAL and uNGAL are observed in AKI [1]. With respect to sNGAL, AKI results in increased NGAL mRNA expression in distal organs [18], specifically the liver and lungs, and over-expressed NGAL protein released into the circulation may contribute to increases in sNGAL values [1]. In addition, NGAL is an acute phase reactant and may be released from neutrophils, macrophages, and other immune cells [1]. Furthermore, decreases in glomerular filtration rate resulting from AKI would be expected to decrease the renal clearance of NGAL, resulting in its subsequent accumulation in the serum [1]. sNGAL is freely filtered by the glomerulus and is largely reabsorbed in the proximal tubules [19], whereas uNGAL is detected only in the presence of a concomitant proximal renal tubular injury that precludes NGAL reabsorption and/or increases NGAL synthesis [1]. Recent studies revealed that uNGAL is increased in AKI, and it has been speculated that NGAL expression is induced to aid tissue regeneration after kidney damage [3, 11]. The contribution of neutrophilic NGAL to uNGAL values is generally small, but the effect may be important in pyuria [8]. Theoretically, sNGAL or uNGAL values may not increase during UTI in the absence of upper urinary tract injury; even in the presence of UTI, pyuria is not always observed. Therefore, it is possible that there is no association between UTI and increased NGAL values.

Since sNGAL or uNGAL values alone did not sufficiently predict culture-proven UTI, we compared NGAL values in the presence or absence of pyuria or urine leukocyte esterase. Decavele et al. [8] demonstrated that urine WBC counts are significantly correlated with uNGAL values in both upper and lower UTI. In agreement with this finding, in the presence of pyuria or urine leukocyte esterase, the uNGAL value was significantly higher than in their absence. These results can be explained by the presence of NGAL originating from urinary neutrophils [8]. The sNGAL value was significantly higher only in the presence of pyuria; we speculate that this reflects the difference in diagnostic cutoff sensitivity between the measurements taken in the presence of pyuria and those taken in the presence of leukocyte esterase. These results suggest that uNGAL is more useful than sNGAL in screening for UTI.

For more rapid diagnosis of UTI, many clinicians routinely use urinalysis results, such as the presence of pyuria, nitrite, and leukocyte esterase, but the low sensitivity of this analysis is well known. One study showed that the sensitivity and specificity of leukocyte esterase were 65.4% and 94%, respectively, whereas the sensitivity and specificity of the nitrite test were 38.9% and 99.5%, respectively [15]. In this study, we determined the optimal cutoff values for sNGAL (65.25 ng/mL) and uNGAL (5.75 ng/mL) in the diagnosis of UTI. At these cutoff points, the sensitivity of both sNGAL and uNGAL measurement (70%) was higher than that of leukocyte esterase or the nitrite test, although the specificity of sNGAL (35%) and uNGAL (42%) was lower than that of urinalysis. Hirsch et al. [11] reported that AKI due to contrast administration can be predicted using a cutoff of 100 ng/mL for uNGAL. In other studies, the cutoff values for predicting AKI after cardiopulmonary bypass were determined to be between 50 ng/mL and 100 ng/mL [10, 12]. Moreover, according to the manufacturer, the cutoff value was 106 ng/mL for sNGAL and 9.8 ng/mL for uNGAL. Our results suggest that the optimal cutoff value for predicting UTI is lower than the values determined for AKI and even lower than the values suggested by the manufacturer.

In screening for pUTI using pyuria or urine leukocyte esterase, we evaluated the optimal concentrations of sNGAL and uNGAL for such cases. At a sensitivity of 70%, the sNGAL cutoffs (82.5 ng/mL and 75.05 ng/mL) were consistently lower than the value reported by the manufacturer (Bioporto Diagnostics), and the uNGAL cutoffs (10.3 ng/mL and 9.65 ng/mL) were similar to or lower than the manufacturer's value. At these values, specificity was higher than that for culture-proven UTI (70% in the pyuria-positive group and 68% in the esterase-positive group). Therefore, we confirmed that the uNGAL value is correlated with the presence of leukocytes in the urine sample.

In conclusion, to the best of our knowledge, this is the first study demonstrating that there is no significant correlation between sNGAL or uNGAL values and UTI. We suggest that the use of NGAL as a new marker in the early diagnosis of UTI may not be appropriate. If NGAL is nevertheless considered as a marker for the early diagnosis of UTI, the uNGAL value will be more useful than the sNGAL value. In addition, a new cutoff value must be set that is lower than the value reported by the manufacturer in order to increase sensitivity. To better understand the relationship between NGAL values and UTI in children, and to set an appropriate diagnostic cutoff value of NGAL for the early diagnosis of UTI in children, a future prospective study using larger numbers of patients and a control group of healthy children is needed.
[ "intro", "methods", null, null, null, null, "results", "discussion" ]
[ "Neutrophil gelatinase associated lipocalin", "Screening", "Urinary tract infection" ]
ABSTRACT
Background: Neutrophil gelatinase-associated lipocalin (NGAL) is a promising biomarker in the detection of kidney injury. Early diagnosis of urinary tract infection (UTI), one of the most common infections in children, is important in order to avert long-term consequences. We assessed whether serum NGAL (sNGAL) or urine NGAL (uNGAL) would be reliable markers of UTI and evaluated the appropriate diagnostic cutoff values for the screening of UTI in children.
Methods: A total of 812 urine specimens and 323 serum samples, collected from pediatric patients, were analyzed. UTI was diagnosed on the basis of culture results and symptoms reported by the patients. NGAL values were measured by ELISA.
Results: NGAL values were more elevated in the UTI cases than in the non-UTI cases, but the differences were not statistically significant (P=0.190 for sNGAL and P=0.064 for uNGAL). The optimal diagnostic cutoff values of sNGAL and uNGAL for UTI screening were 65.25 ng/mL and 5.75 ng/mL, respectively.
Conclusions: We suggest that it is not appropriate to use NGAL as a marker for early diagnosis of UTI in children.
MeSH terms: Acute-Phase Proteins; Area Under Curve; Biomarkers; Child; Child, Preschool; Early Diagnosis; Enzyme-Linked Immunosorbent Assay; Female; Humans; Infant; Lipocalin-2; Lipocalins; Male; Mass Screening; Proto-Oncogene Proteins; ROC Curve; Urinary Tract Infections
Should men be exempted from vaccination against human papillomavirus? Health disparities regarding HPV: the example of sexual minorities in Poland.
34604578
Social campaigns concerning vaccination against human papillomavirus (HPV) in Poland are addressed mainly to women. Yet in addition to cervical cancer, the virus can cause anal, penile, and oropharyngeal cancers, so it clearly affects men as well. HPV vaccinations are voluntary and mostly not refunded in Poland.
INTRODUCTION
A survey was published in social media groups gathering males and contained questions concerning epidemiological data, knowledge about HPV, and opinions on HPV vaccination. The questionnaire was enriched with an educational note regarding HPV-dependent cancers and the vaccines against HPV available in Poland.
METHODS
Because of age limitations, 169 males (115 heterosexuals, 48 homosexuals) aged 14-39 were chosen for the study. Seventy-five percent of straight and 88% of gay men were aware of HPV, but fewer than 4% and 17%, respectively, were vaccinated against the virus. The main sources of knowledge about HPV were the Internet (61%), media (28%), and relatives (27%). HPV infection was linked with the development of anal and oropharyngeal cancers by 28% and 37% of heterosexual males, compared with 56.3% and 43.8% of homosexual males. The majority of respondents (88%) indicated that all genders should be vaccinated, although only 57% were aware of HPV vaccination availability in Poland.
RESULTS
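The vaccination-rate gap reported above can be sanity-checked with a small 2x2 test. A minimal sketch, assuming counts back-calculated from the reported percentages and subgroup sizes (4 of 115 heterosexual and 8 of 48 homosexual respondents vaccinated, which reproduce the ~4% and ~17% figures); these counts are illustrative reconstructions, not the paper's raw data:

```python
# Hypothetical 2x2 check of the reported vaccination gap; the counts
# are back-calculated from the abstract's percentages, not source data.
from scipy.stats import fisher_exact

#        [vaccinated, not vaccinated]
table = [[8, 48 - 8],     # homosexual respondents (assumed n = 48)
         [4, 115 - 4]]    # heterosexual respondents (assumed n = 115)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

With these assumed counts, the Fisher exact test gives p < 0.05, consistent with the significant difference the study reports between gay and straight respondents.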
Men are at risk of HPV-related cancers, and this danger is poorly understood amongst Polish men. Despite awareness of HPV vaccines, the vaccination rate is low. Consequently, there is a serious need to broaden educational campaigns, with special attention to LGBTQ+ communities.
CONCLUSIONS
[ "Alphapapillomavirus", "Anus Neoplasms", "Health Knowledge, Attitudes, Practice", "Healthcare Disparities", "Humans", "Male", "Medically Underserved Area", "Oropharyngeal Neoplasms", "Papillomaviridae", "Papillomavirus Infections", "Papillomavirus Vaccines", "Penile Neoplasms", "Poland", "Sexual and Gender Minorities", "Vaccination", "Vulnerable Populations" ]
8451346
Introduction
The incidence of sexually transmitted infections (STIs) in Poland is increasing, with human papillomavirus (HPV) the most common cause [1, 2]. HPV is a dsDNA virus, comprising more than 170 distinguishable types [3]. Almost every sexually active person is exposed to HPV during their lifetime, and both women and men are at risk of infection. Approximately 90% of HPV infections are asymptomatic and resolve spontaneously [4]. However, in some cases infection results in palmar/plantar warts, laryngeal papillomatosis, precancerous lesions, and an increased risk of developing cancer [2]. More than 99% of cervical cancer is associated with HPV infection (70% caused by HPV 16 and 18); cervical cancer is the sixth most common malignant neoplasm of the reproductive organs among women and the seventh highest cause of cancer-related death in Poland [3, 5]. Moreover, 70% of oropharyngeal, 63% of penile, and 91% of anal cancers are associated with HPV infection [5-7]. In some minority groups, such as gay men, the risk of HPV-dependent cancers is higher [8]. HPV can be transmitted through sexual intercourse, direct skin-to-skin contact, via contaminated surfaces, as well as during labour and the perinatal period (vertically) [2, 9]. The role of blood transfusions in the process of transmission remains uncertain [10]. Early sexual activity, multiple sexual partners, and impaired immune function (e.g. immunosuppression) are the leading risk factors for HPV infection [4]. The Centers for Disease Control and Prevention (CDC) recommends mutually monogamous sexual relationships, barrier contraceptives, and vaccines as preventive measures [4, 11, 12]. The World Health Organization (WHO) states that vaccination against HPV is most effective prior to the onset of sexual activity [13]. In Poland, HPV vaccinations are targeted at children aged 11-12, are voluntary, and are mostly not refunded [14]. Whilst the risk of developing HPV-dependent cancer applies to both men and women, social campaigns regarding vaccination against papillomavirus in Poland are aimed mainly at women. Due to a lack of sexual education in Polish schools, knowledge about STIs is usually obtained from the Internet, media, and social campaigns [15]. An absence of educational programs considering the health needs of LGBTQ+ communities in medical school curricula affects sexual minorities and results in health disparities [16, 17]. Studies regarding knowledge about HPV and vaccination rates among males in Poland are limited and, to the best of our knowledge, there are no studies referring to sexual minorities in Poland. Therefore, we set out to explore the knowledge of Polish men about HPV infection and HPV-related cancers, and to identify any inequalities between heterosexual and homosexual men.
Methods
Institutional Review Board (IRB) approval was obtained from the Wroclaw Medical University. A survey comprising open-ended, close-ended, and nominal multiple-choice questions with predefined answers was prepared in Google™ Forms. For nominal questions, an additional, optional space was provided for respondents to supply answers not included by the authors. A non-probabilistic method of self-selection sampling was used for this study. Participation invitations were posted in selected groups on social media platforms such as Facebook™, Reddit™, and Wykop™. Target groups were found with combinations of the search terms: ‘men’, ‘boy’, ‘lad’, ‘Poland’, ‘polish’, ‘lgbt’, ‘lgbtqia’, and ‘lgbt+’. The inclusion criteria for a group were: Polish language, more than 25 male members, and more than 20 posts; the exclusion criteria were the presence of homophobic content or hate speech and a most recent post older than 2 weeks. Finally, the survey was published in 32 groups. The questionnaire consisted of 20 questions divided into three parts. The first concerned the epidemiological profile of respondents: age, place of residence (metropolitan, non-metropolitan, rural), educational status (primary, secondary, professional, higher), medical education (medical/non-medical), and sexual orientation (straight, gay, other, or I prefer not to say). The second part tested their knowledge about human papillomavirus, HPV-related diseases with distant implications of infection, possible routes of transmission, and prevention methods. The last part referred to vaccines as a protective tool against infection and opinions about HPV vaccines in Poland. Respondents were also asked to declare their vaccination status. At the end of the survey, an information note was displayed on the respondent’s screen with basic information about HPV-related cancers and the vaccines available in Poland. Moreover, a link to the educational page of the National Institute of Public Health concerning vaccination against HPV was provided. The chi-square (χ2) independence test, with Yates correction when necessary, and the Fisher exact test were used to assess the relationships between the studied variables on the nominal and ordinal scales. The Shapiro-Wilk test was used to assess the normality of variable distributions. The Mann-Whitney test was used to assess differences between heterosexual and homosexual males. Statistical analysis was performed using Statistica™ v.13.5 (StatSoft™).
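The four tests named here all have standard library counterparts. A minimal sketch in Python's scipy.stats (the authors used Statistica™, so this illustrates the same tests, not their actual code; all variable names and data below are made up):

```python
# Sketch of the statistical toolkit described in the methods, using
# scipy.stats instead of Statistica. All data are illustrative.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, shapiro, mannwhitneyu

# Chi-square independence test on a 2x2 contingency table;
# correction=True applies the Yates continuity correction.
table = np.array([[30, 70],
                  [45, 55]])
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=True)

# Fisher exact test, the usual fallback when expected cell counts are small.
_, p_fisher = fisher_exact(table)

# Shapiro-Wilk test for normality of a variable's distribution (e.g. age).
ages = np.array([19, 22, 24, 25, 27, 30, 33, 21, 20, 26])
_, p_normal = shapiro(ages)

# Mann-Whitney U test comparing two independent groups, e.g. knowledge
# scores of heterosexual vs homosexual respondents.
scores_hetero = np.array([55, 60, 62, 48, 70, 58])
scores_homo = np.array([65, 72, 61, 80, 59, 68])
_, p_mw = mannwhitneyu(scores_hetero, scores_homo, alternative="two-sided")

print(p_chi2, p_fisher, p_normal, p_mw)
```

The `correction=True` flag corresponds to the Yates correction the authors apply "when necessary", and the Fisher exact test covers the small-expected-count cases where the chi-square approximation breaks down.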
null
null
Conclusions
Characteristics of the study group. HPV-related diseases in the opinion of respondents.
[ "Introduction", "Results", "Discussion", "LIMITATIONS", "Conclusions" ]
[ "The incidence of sexually transmitted infections (STIs) in Poland is increasing, with human papillomavirus (HPV) the most common cause [1, 2]. HPV is a dsDNA virus, comprising more than 170 distinguishable types [3]. Almost every sexually active person is exposed to HPV during their lifetime and both women and men are at risk of infection. Approximately 90% of HPV infections are asymptomatic and resolve spontaneously [4]. However, in some cases infection results in palmar/plantar warts, laryngeal papillomatosis, precancerous lesions, and increased risk of developing cancer [2]. More than 99% of cervical cancer is associated with HPV infection (70% caused by HPV 16 and 18), constituting the sixth most common malignant neoplasm of reproductive organs among women and the seventh highest cause of cancer related death in Poland [3, 5]. Moreover, 70% of oropharyngeal, 63% of penile, and 91% of anal cancers are associated with HPV infection [5-7]. In some minority groups, such as gay men, the risk of HPV-dependent cancers is higher [8].\nHPV can be transmitted through sexual intercourse, direct skin-to-skin contact, via contaminated surfaces, as well as during labour and the perinatal period (vertically) [2, 9]. The role of blood transfusions in the process of transmission remains uncertain [10]. Early sexual activity, multiple sexual partners or impaired immune function (e.g. immunosuppression) are the leading risk factors for HPV infection [4]. The Centres for Disease Control and Prevention (CDC) recommends mutually monogamous sexual relationships, barrier contraceptives, and vaccines as preventive masseurs [4, 11, 12]. The World Health Organization (WHO) states that vaccination against HPV is most effective prior to the onset of sexual activity [13]. In Poland HPV vaccinations are targeted at children aged 11-12, are voluntary and mostly not refunded [14]. Whilst the risk of developing HPV-dependant cancer applies to both men and women, social campaigns regarding vaccination against papillomavirus in Poland are aimed mainly at women.\nDue to a lack of sexual education in Polish schools, knowledge about STIs is usually obtained from the Internet, media, and social campaigns [15]. An absence of educational programs considering the health needs of LGBTQ+ communities in medical school curricula affects sexual minorities and results in health disparities [16, 17]. Studies regarding knowledge about HPV and vaccination rate among males in Poland are limited and to the best of our knowledge, there are no studies referring to sexual minorities in Poland. Therefore, we set out to explore the knowledge of Polish men about HPV infection and HPV-related cancers, and to identify any inequalities between heterosexual and homosexual men.", "Six hundred seventy-one men were asked to participate in the study, 247 responded. Fifty-three participants were excluded due to incorrectly completed survey. In order to keep age distribution normal, age limit was established and outliers (25 respondents) were removed. Finally, 169 males (aged 14-39) were chosen for the study. The mean age of respondents is 22.7 (+/- 5 years). Most respondents were straight (68.05%), lived in large cities (35.50%), were educated to a secondary school level (53.85), were not medically trained (74.65%), and were not vaccinated against HPV (92.09%) (Tab. I).\nThe majority (78.7%) of participants were aware (e.g. had heard of the virus) of HPV. 
Sexually active males were more aware than those who were not sexually active (75.18 vs 24.82%, p < 0.05). Unsurprisingly, all participants with a medical education were aware of HPV. Participants living in cities with more than 100,000 habitants were more aware of HPV than those living in areas with less than 100,000 habitants (85.7 vs 69%, p < 0.05). However, population density is not correlated with greater general knowledge about the virus (p > 0.05).\nWhereas, place of residence influences on whether the concept of the virus is known, the percent of correct answers does not vary significantly between respondents from cities above 100,000 habitants and respondents from towns with less than 100,000 habitants and villages. The mean result in the first group is 61.6% of correct answers and 56.2% in second group (p > 0.05). The educational level is not associated with better outcomes either. The mean result of men with higher education level constitutes 61.7% of correct answers in comparison with men without higher education whose mean result is 58% (p > 0.05).\nRespondents reported that information about HPV was obtained most often from the Internet (60.94%), broadcast media (28.4%), and literature [newspapers, magazines, popular science books, and school manuals (24.26%)]. Other sources of knowledge included family/friends (23.67%), social campaigns (22.49%), physicians (21.89%), universities (8.18%), and schools (5.33%). Statistically better results (defined as more than 50% of correct answers) was observed in respondents whose reported the Internet, media, family/friends, physicians, literature, and universities as their sources of information about HPV (p < 0.05).\nTwo-thirds of participants (62.72%) identified HPV infection as being associated with cervical cancer and 36.69% associated HPV with anal cancer. Vaginal and vulvar cancers were also considered a HPV-related diseases by 49.7% of participants, as was oropharyngeal cancer (38.46%), and plantar, and genital warts (43.2%). Most participants (81.07%) identified both sex intercourse, 42.6% identified labour and the perinatal period, 31.36% indicated skin-to-skin contacts, and 26.63% indicated contaminated surfaces. Two-thirds (62.13%) pointed out infected human blood and 14.79% considered droplets as a route of transmission. A small proportion thought that HPV can spread through contaminated food (9.74%) and insect bites (8.28%).\nParticipants considered vaccination (81.66%), avoidance of sexual encounters (65.09%), and use of condoms (62.72%) as protective measure against HPV infection. Almost half (46.75%) indicated risk of HPV infection can be minimized by mutually monogamous relationships, while 44.37% indicated sexual abstinence. A healthy lifestyle (17.75%), the use of lubricant, and spermicides (42.6%), avoidance of crowded places (11.83%), prophylactic use of antibiotics/antiretroviral therapy (6.51%), elimination of contaminated food/water (5.36%), and use of repellents/insecticides (3.55%) were all proposed as means through which risk of HPV infection could be minimised.\nThe vast majority of respondents (88.17%) know that both sexes should be vaccinated and 84.02% believe that vaccination should be compulsory in Poland. Conversely, 5.33% opposed mandatory immunization. More than half were aware of vaccine availability in Poland (56.80%), although 67.46% do not know whether vaccination is refunded. Most participants (73.37%) correctly identified that vaccination should occur before the commencement of sexual activity. 
Less than one-in-ten (7.1%) of participants were vaccinated against HPV. There are no statistically significant differences in vaccination rate between groups declaring medical education and non-medical education (9.30% versus 6.35%). Similarly, no links were demonstrated between residence and education level and uptake of vaccination among participants.\nNo significant difference existed with regards to awareness of HPV between straight and gay participants (75.65 vs 87.5%). Homosexual participants indicated HPV infections increased the risk of anal cancer more frequently than heterosexual participants (43.75 vs 27.83%, p < 0.05). (Tab. II). Gay people were more frequently vaccinated against HPV than straight ones (16.67 vs 3.48%, p < 0.05).", "There are currently three vaccines available against HPV – bivalent, quadrivalent and nonavalent. Countries with national programs of vaccination (at least 10 years long) have noted a 90% reduction in HPV 6, 11, 16, 18 infections, 90% depletion in genital warts and 85% decrease in high grade squamous intraepithelial lesion (HSIL) of the uterine cervix [18]. A combined analysis of vaccination with bivalent vaccine unveiled that the vaccine was effective at preventing oral, anal, and cervical infections in the cohort of HPV naive women in 83.5% [19]. Data of vaccination effectiveness in occurrence of anal, penile, and oropharyngeal cancers is limited due to their low prevalence [20]. However, evidence is emerging that vaccination for males is warranted. In a study evaluating efficiency of quadrivalent HPV vaccine in men and boys, there was an overall 85.6% reduction in persistent infection of HPV 6, 11, 16, and 18, constituting preventive measure against anogenital cancer, intraepithelial neoplasia, recurrent respiratory papillomatosis and cancer of oropharynx [21].\nMost programs target girls exclusively, with only a handful of countries (e.g. Australia, Austria, Canada) vaccinating both sexes [22]. A systematic review and meta-analysis of predictions from transmission-dynamic models funded by Canadian Institutes of Health Research suggest that elimination of HPV 6, 11, 16, and 18 is possible when 80% coverage in girls and boys is reached and a high vaccine efficacy is maintained over time [22]. In 2007, Australia introduced a national HPV vaccination program, which was broadened in 2013 by vaccination of both sexes. In 2017 Australia achieved vaccination levels of 80.2% in girls and 75.9% in boys at the age of 15. It is estimated that the age-standardised annual incidence of cervical cancer will decrease to less than 4 new cases per 100,000 women by 2028 [23].\nThere are no specific analyses concerning the coverage vaccination level against HPV in Poland. According to estimates of Major Statistical Department, the vaccination level in Poland fluctuates around 7.5-10% [24]. However, it is estimated that vaccination for boys is much lower. In Poland, the vaccine is not mandatory, but recommended by the Ministry of Health [14]. The total cost of vaccination (regardless of type) for one child in Poland is 246 USD [25]. In 2018 the average expenditure per capita in Polish household was 286,65 USD [26]. Consequently, the cost of vaccination in Poland for one person is almost the equivalent of one month’s maintenance. Currently in Poland, numerous reimbursement programs for vaccination are being implemented by local governments and are located in the southern/western parts of the country. 
Only 12 municipalities organize vaccinations for girls and boys [27]. This situation varies from year to year, depending on how the funds are allocated. This is likely to account for the variability in responses in regarding vaccinations reimbursement among the participants.\nCurrent WHO recommendations target primary HPV vaccination to the population of girls aged 9-14, prior to becoming sexually active, with at least 2-dose schedule. WHO suggests that achieving > 80% vaccination coverage in girls also reduces the risk of HPV infection for boys due to herd protection. Vaccination regardless of gender and age should be considered with other elements, such as competing health priorities, disease burden, programmatic implications, cost-effectiveness, and affordability [28]. The Advisory Committee on Immunization Practices (ACIP) endorses vaccination of all boys and girls under age of 11 or 12 [12]. Catch-up vaccines are recommended for males through age 21 and 26 for females. Furthermore, the CDC advocates vaccination for homosexual men and for both men and women with compromised immune systems, by the age of 26 if they were not fully vaccinated previously [12]. Thus, the need for vaccination among boys and adult men, especially those from higher risk groups cannot be omitted [29].\nLIMITATIONS As the self-selection sampling option was chosen for, there is no random sample. Consequently, the study group may not be representative of the entire population. Taking under consideration the low number of participants (169 males), the study group should be extended in further research. Any identification verification tool was used in the study, thus one person can submit multiple responses. What is more, the fact that gay men could identify themselves as straight in the survey cannot be forgotten.\nAs the self-selection sampling option was chosen for, there is no random sample. Consequently, the study group may not be representative of the entire population. Taking under consideration the low number of participants (169 males), the study group should be extended in further research. Any identification verification tool was used in the study, thus one person can submit multiple responses. What is more, the fact that gay men could identify themselves as straight in the survey cannot be forgotten.", "As the self-selection sampling option was chosen for, there is no random sample. Consequently, the study group may not be representative of the entire population. Taking under consideration the low number of participants (169 males), the study group should be extended in further research. Any identification verification tool was used in the study, thus one person can submit multiple responses. What is more, the fact that gay men could identify themselves as straight in the survey cannot be forgotten.", "The men are at risk of HPV-related cancers and the danger is poorly understood amongst Polish men. Whilst the WHO suggests that vaccination rates of 80% of girls will protect men through heard immunity, gay men remain at risk. As such, HPV vaccination programs need to be extended to include boys and made more affordable to increase uptake for both sexes.\nOur study has demonstrated that knowledge about HPV is no correlated with place of residence and level of education. Educational campaigns have demonstrated minimal effectiveness, suggesting that funds should be transferred to improving their influence or expanding training programs within schools and universities. 
Further, surcharging of HPV vaccines should be introduced. The introduction of compulsory, refunded vaccines is likely the most effective means through which to increase the percentage of vaccinated people, thus decreasing the number of HPV-related cancers in Poland.\nFew respondents indicated doctors as a potential source of information about HPV. This situation requires greater engagement of physicians in programs referring to education and prophylaxis. We urge medical schools to broaden the knowledge of Polish medical students and healthcare professionals about health needs of LGBTQ+ communities, preventing from health disparities in the future." ]
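The elimination and herd-protection figures in the Discussion above (80% coverage in the transmission-dynamic models, the WHO's > 80% girls' coverage) line up with the standard critical-coverage heuristic from basic epidemic theory. A worked sketch, not taken from the paper, using an assumed basic reproduction number:

```latex
% Textbook critical-coverage formula (not from the paper): to block
% sustained transmission of a pathogen with basic reproduction number
% R_0 using a vaccine of efficacy E, the immunised fraction must reach
\[
  p_c = \frac{1}{E}\left(1 - \frac{1}{R_0}\right).
\]
% Illustration with assumed values: for R_0 = 4 and E = 0.9,
% p_c = (1 - 0.25)/0.9 \approx 0.83, i.e. coverage on the order of the
% ~80% thresholds cited in the transmission-dynamic models.
```

The R_0 = 4 here is purely illustrative; published R_0 estimates for individual HPV types vary, which is why the cited models simulate coverage scenarios rather than apply this formula directly.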
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "LIMITATIONS", "Conclusions" ]
[ "The incidence of sexually transmitted infections (STIs) in Poland is increasing, with human papillomavirus (HPV) the most common cause [1, 2]. HPV is a dsDNA virus, comprising more than 170 distinguishable types [3]. Almost every sexually active person is exposed to HPV during their lifetime and both women and men are at risk of infection. Approximately 90% of HPV infections are asymptomatic and resolve spontaneously [4]. However, in some cases infection results in palmar/plantar warts, laryngeal papillomatosis, precancerous lesions, and increased risk of developing cancer [2]. More than 99% of cervical cancer is associated with HPV infection (70% caused by HPV 16 and 18), constituting the sixth most common malignant neoplasm of reproductive organs among women and the seventh highest cause of cancer related death in Poland [3, 5]. Moreover, 70% of oropharyngeal, 63% of penile, and 91% of anal cancers are associated with HPV infection [5-7]. In some minority groups, such as gay men, the risk of HPV-dependent cancers is higher [8].\nHPV can be transmitted through sexual intercourse, direct skin-to-skin contact, via contaminated surfaces, as well as during labour and the perinatal period (vertically) [2, 9]. The role of blood transfusions in the process of transmission remains uncertain [10]. Early sexual activity, multiple sexual partners or impaired immune function (e.g. immunosuppression) are the leading risk factors for HPV infection [4]. The Centres for Disease Control and Prevention (CDC) recommends mutually monogamous sexual relationships, barrier contraceptives, and vaccines as preventive masseurs [4, 11, 12]. The World Health Organization (WHO) states that vaccination against HPV is most effective prior to the onset of sexual activity [13]. In Poland HPV vaccinations are targeted at children aged 11-12, are voluntary and mostly not refunded [14]. Whilst the risk of developing HPV-dependant cancer applies to both men and women, social campaigns regarding vaccination against papillomavirus in Poland are aimed mainly at women.\nDue to a lack of sexual education in Polish schools, knowledge about STIs is usually obtained from the Internet, media, and social campaigns [15]. An absence of educational programs considering the health needs of LGBTQ+ communities in medical school curricula affects sexual minorities and results in health disparities [16, 17]. Studies regarding knowledge about HPV and vaccination rate among males in Poland are limited and to the best of our knowledge, there are no studies referring to sexual minorities in Poland. Therefore, we set out to explore the knowledge of Polish men about HPV infection and HPV-related cancers, and to identify any inequalities between heterosexual and homosexual men.", "Institutional Review Board (IRB) approval was obtained from the Wroclaw Medical University. A survey comprising open-ended, close-ended, and nominal, multiple-choice questions with predefined answers was prepared in Google™ Form. For nominal questions an additional, optional space was provided for respondents to implement answers not included by authors. A non-probabilistic method of self-selection sampling was used for this study. Participation invitations were posted at selected groups on social media platforms such as Facebook™, Reddit™, and Wykop™.\nTarget groups were found with the combinations of search terms: ‘men’, ‘boy’, ‘lad’, ‘Poland’, ‘polish’, ‘lgbt’, ‘lgbtqia’, and ‘lgbt+’. 
The inclusion criteria for group consisted of polish language, number of group’s male members above 25, number of posts above 20 while exclusion criteria included presence of homophobic content or hate speech and last post older than 2 weeks. Finally, survey was published on 32 groups.\nThe questionnaire consisted of 20 questions divided into three parts. The first one concerned epidemiological profile of respondents: age, place of residence (metropolitan, non-metropolitan, rural), educational status (primary, secondary, professional, higher), medical education (medical/non-medical), and sexual orientation (straight, gay, other or I prefer not to say). The second part tested their knowledge about human papillomavirus, HPV-related diseases with distant implications of infection, possible routes of transmission, and prevention methods. The last part referred to vaccines as a protective tool against infection and opinions about HPV vaccines in Poland. Respondents were also asked to declare their vaccination status. At the end of the survey, the information note was projected on respondent’s screen with basic information about HPV related cancers and available vaccines in Poland. Moreover, a link to the educational page of the National Institute of Public Health concerning vaccinations against HPV was provided.\nChi-square (χ2) independence test with Yates correction when necessary and Fisher exact test were used to assess the relationship between the studied variables in the nominal and ordinal scales. Shapiro-Wilk test was used to assess the normality of variable distribution. Mann-Whitney test was used to assess differences between heterosexual and homosexual males. Statistical analysis was performed using Statistica™ v.13.5 (StatSoft™).", "Six hundred seventy-one men were asked to participate in the study, 247 responded. Fifty-three participants were excluded due to incorrectly completed survey. In order to keep age distribution normal, age limit was established and outliers (25 respondents) were removed. Finally, 169 males (aged 14-39) were chosen for the study. The mean age of respondents is 22.7 (+/- 5 years). Most respondents were straight (68.05%), lived in large cities (35.50%), were educated to a secondary school level (53.85), were not medically trained (74.65%), and were not vaccinated against HPV (92.09%) (Tab. I).\nThe majority (78.7%) of participants were aware (e.g. had heard of the virus) of HPV. Sexually active males were more aware than those who were not sexually active (75.18 vs 24.82%, p < 0.05). Unsurprisingly, all participants with a medical education were aware of HPV. Participants living in cities with more than 100,000 habitants were more aware of HPV than those living in areas with less than 100,000 habitants (85.7 vs 69%, p < 0.05). However, population density is not correlated with greater general knowledge about the virus (p > 0.05).\nWhereas, place of residence influences on whether the concept of the virus is known, the percent of correct answers does not vary significantly between respondents from cities above 100,000 habitants and respondents from towns with less than 100,000 habitants and villages. The mean result in the first group is 61.6% of correct answers and 56.2% in second group (p > 0.05). The educational level is not associated with better outcomes either. 
The mean result of men with higher education level constitutes 61.7% of correct answers in comparison with men without higher education whose mean result is 58% (p > 0.05).\nRespondents reported that information about HPV was obtained most often from the Internet (60.94%), broadcast media (28.4%), and literature [newspapers, magazines, popular science books, and school manuals (24.26%)]. Other sources of knowledge included family/friends (23.67%), social campaigns (22.49%), physicians (21.89%), universities (8.18%), and schools (5.33%). Statistically better results (defined as more than 50% of correct answers) was observed in respondents whose reported the Internet, media, family/friends, physicians, literature, and universities as their sources of information about HPV (p < 0.05).\nTwo-thirds of participants (62.72%) identified HPV infection as being associated with cervical cancer and 36.69% associated HPV with anal cancer. Vaginal and vulvar cancers were also considered a HPV-related diseases by 49.7% of participants, as was oropharyngeal cancer (38.46%), and plantar, and genital warts (43.2%). Most participants (81.07%) identified both sex intercourse, 42.6% identified labour and the perinatal period, 31.36% indicated skin-to-skin contacts, and 26.63% indicated contaminated surfaces. Two-thirds (62.13%) pointed out infected human blood and 14.79% considered droplets as a route of transmission. A small proportion thought that HPV can spread through contaminated food (9.74%) and insect bites (8.28%).\nParticipants considered vaccination (81.66%), avoidance of sexual encounters (65.09%), and use of condoms (62.72%) as protective measure against HPV infection. Almost half (46.75%) indicated risk of HPV infection can be minimized by mutually monogamous relationships, while 44.37% indicated sexual abstinence. A healthy lifestyle (17.75%), the use of lubricant, and spermicides (42.6%), avoidance of crowded places (11.83%), prophylactic use of antibiotics/antiretroviral therapy (6.51%), elimination of contaminated food/water (5.36%), and use of repellents/insecticides (3.55%) were all proposed as means through which risk of HPV infection could be minimised.\nThe vast majority of respondents (88.17%) know that both sexes should be vaccinated and 84.02% believe that vaccination should be compulsory in Poland. Conversely, 5.33% opposed mandatory immunization. More than half were aware of vaccine availability in Poland (56.80%), although 67.46% do not know whether vaccination is refunded. Most participants (73.37%) correctly identified that vaccination should occur before the commencement of sexual activity. Less than one-in-ten (7.1%) of participants were vaccinated against HPV. There are no statistically significant differences in vaccination rate between groups declaring medical education and non-medical education (9.30% versus 6.35%). Similarly, no links were demonstrated between residence and education level and uptake of vaccination among participants.\nNo significant difference existed with regards to awareness of HPV between straight and gay participants (75.65 vs 87.5%). Homosexual participants indicated HPV infections increased the risk of anal cancer more frequently than heterosexual participants (43.75 vs 27.83%, p < 0.05). (Tab. II). Gay people were more frequently vaccinated against HPV than straight ones (16.67 vs 3.48%, p < 0.05).", "There are currently three vaccines available against HPV – bivalent, quadrivalent and nonavalent. 
Countries with national programs of vaccination (at least 10 years long) have noted a 90% reduction in HPV 6, 11, 16, 18 infections, 90% depletion in genital warts and 85% decrease in high grade squamous intraepithelial lesion (HSIL) of the uterine cervix [18]. A combined analysis of vaccination with bivalent vaccine unveiled that the vaccine was effective at preventing oral, anal, and cervical infections in the cohort of HPV naive women in 83.5% [19]. Data of vaccination effectiveness in occurrence of anal, penile, and oropharyngeal cancers is limited due to their low prevalence [20]. However, evidence is emerging that vaccination for males is warranted. In a study evaluating efficiency of quadrivalent HPV vaccine in men and boys, there was an overall 85.6% reduction in persistent infection of HPV 6, 11, 16, and 18, constituting preventive measure against anogenital cancer, intraepithelial neoplasia, recurrent respiratory papillomatosis and cancer of oropharynx [21].\nMost programs target girls exclusively, with only a handful of countries (e.g. Australia, Austria, Canada) vaccinating both sexes [22]. A systematic review and meta-analysis of predictions from transmission-dynamic models funded by Canadian Institutes of Health Research suggest that elimination of HPV 6, 11, 16, and 18 is possible when 80% coverage in girls and boys is reached and a high vaccine efficacy is maintained over time [22]. In 2007, Australia introduced a national HPV vaccination program, which was broadened in 2013 by vaccination of both sexes. In 2017 Australia achieved vaccination levels of 80.2% in girls and 75.9% in boys at the age of 15. It is estimated that the age-standardised annual incidence of cervical cancer will decrease to less than 4 new cases per 100,000 women by 2028 [23].\nThere are no specific analyses concerning the coverage vaccination level against HPV in Poland. According to estimates of Major Statistical Department, the vaccination level in Poland fluctuates around 7.5-10% [24]. However, it is estimated that vaccination for boys is much lower. In Poland, the vaccine is not mandatory, but recommended by the Ministry of Health [14]. The total cost of vaccination (regardless of type) for one child in Poland is 246 USD [25]. In 2018 the average expenditure per capita in Polish household was 286,65 USD [26]. Consequently, the cost of vaccination in Poland for one person is almost the equivalent of one month’s maintenance. Currently in Poland, numerous reimbursement programs for vaccination are being implemented by local governments and are located in the southern/western parts of the country. Only 12 municipalities organize vaccinations for girls and boys [27]. This situation varies from year to year, depending on how the funds are allocated. This is likely to account for the variability in responses in regarding vaccinations reimbursement among the participants.\nCurrent WHO recommendations target primary HPV vaccination to the population of girls aged 9-14, prior to becoming sexually active, with at least 2-dose schedule. WHO suggests that achieving > 80% vaccination coverage in girls also reduces the risk of HPV infection for boys due to herd protection. Vaccination regardless of gender and age should be considered with other elements, such as competing health priorities, disease burden, programmatic implications, cost-effectiveness, and affordability [28]. The Advisory Committee on Immunization Practices (ACIP) endorses vaccination of all boys and girls under age of 11 or 12 [12]. 
Catch-up vaccines are recommended for males through age 21 and 26 for females. Furthermore, the CDC advocates vaccination for homosexual men and for both men and women with compromised immune systems, by the age of 26 if they were not fully vaccinated previously [12]. Thus, the need for vaccination among boys and adult men, especially those from higher risk groups cannot be omitted [29].\nLIMITATIONS As the self-selection sampling option was chosen for, there is no random sample. Consequently, the study group may not be representative of the entire population. Taking under consideration the low number of participants (169 males), the study group should be extended in further research. Any identification verification tool was used in the study, thus one person can submit multiple responses. What is more, the fact that gay men could identify themselves as straight in the survey cannot be forgotten.\nAs the self-selection sampling option was chosen for, there is no random sample. Consequently, the study group may not be representative of the entire population. Taking under consideration the low number of participants (169 males), the study group should be extended in further research. Any identification verification tool was used in the study, thus one person can submit multiple responses. What is more, the fact that gay men could identify themselves as straight in the survey cannot be forgotten.", "As the self-selection sampling option was chosen for, there is no random sample. Consequently, the study group may not be representative of the entire population. Taking under consideration the low number of participants (169 males), the study group should be extended in further research. Any identification verification tool was used in the study, thus one person can submit multiple responses. What is more, the fact that gay men could identify themselves as straight in the survey cannot be forgotten.", "The men are at risk of HPV-related cancers and the danger is poorly understood amongst Polish men. Whilst the WHO suggests that vaccination rates of 80% of girls will protect men through heard immunity, gay men remain at risk. As such, HPV vaccination programs need to be extended to include boys and made more affordable to increase uptake for both sexes.\nOur study has demonstrated that knowledge about HPV is no correlated with place of residence and level of education. Educational campaigns have demonstrated minimal effectiveness, suggesting that funds should be transferred to improving their influence or expanding training programs within schools and universities. Further, surcharging of HPV vaccines should be introduced. The introduction of compulsory, refunded vaccines is likely the most effective means through which to increase the percentage of vaccinated people, thus decreasing the number of HPV-related cancers in Poland.\nFew respondents indicated doctors as a potential source of information about HPV. This situation requires greater engagement of physicians in programs referring to education and prophylaxis. We urge medical schools to broaden the knowledge of Polish medical students and healthcare professionals about health needs of LGBTQ+ communities, preventing from health disparities in the future." ]
[ null, "methods", null, null, null, null ]
[ "HPV", "LGBTQ+", "Health disparities", "HPV vaccines", "Underserved populations" ]
Introduction: The incidence of sexually transmitted infections (STIs) in Poland is increasing, with human papillomavirus (HPV) the most common cause [1, 2]. HPV is a dsDNA virus, comprising more than 170 distinguishable types [3]. Almost every sexually active person is exposed to HPV during their lifetime, and both women and men are at risk of infection. Approximately 90% of HPV infections are asymptomatic and resolve spontaneously [4]. However, in some cases infection results in palmar/plantar warts, laryngeal papillomatosis, precancerous lesions, and an increased risk of developing cancer [2]. More than 99% of cervical cancer is associated with HPV infection (70% caused by HPV 16 and 18); cervical cancer is the sixth most common malignant neoplasm of the reproductive organs among women and the seventh highest cause of cancer-related death in Poland [3, 5]. Moreover, 70% of oropharyngeal, 63% of penile, and 91% of anal cancers are associated with HPV infection [5-7]. In some minority groups, such as gay men, the risk of HPV-dependent cancers is higher [8]. HPV can be transmitted through sexual intercourse, direct skin-to-skin contact, via contaminated surfaces, as well as during labour and the perinatal period (vertically) [2, 9]. The role of blood transfusions in the process of transmission remains uncertain [10]. Early sexual activity, multiple sexual partners, and impaired immune function (e.g. immunosuppression) are the leading risk factors for HPV infection [4]. The Centers for Disease Control and Prevention (CDC) recommends mutually monogamous sexual relationships, barrier contraceptives, and vaccines as preventive measures [4, 11, 12]. The World Health Organization (WHO) states that vaccination against HPV is most effective prior to the onset of sexual activity [13]. In Poland, HPV vaccinations are targeted at children aged 11-12, are voluntary, and are mostly not refunded [14]. Whilst the risk of developing HPV-dependent cancer applies to both men and women, social campaigns regarding vaccination against papillomavirus in Poland are aimed mainly at women. Due to a lack of sexual education in Polish schools, knowledge about STIs is usually obtained from the Internet, media, and social campaigns [15]. An absence of educational programs considering the health needs of LGBTQ+ communities in medical school curricula affects sexual minorities and results in health disparities [16, 17]. Studies regarding knowledge about HPV and vaccination rates among males in Poland are limited and, to the best of our knowledge, there are no studies referring to sexual minorities in Poland. Therefore, we set out to explore the knowledge of Polish men about HPV infection and HPV-related cancers, and to identify any inequalities between heterosexual and homosexual men.
Methods: Institutional Review Board (IRB) approval was obtained from the Wroclaw Medical University. A survey comprising open-ended, close-ended, and nominal multiple-choice questions with predefined answers was prepared in Google™ Forms. For nominal questions, an additional, optional space was provided for respondents to supply answers not included by the authors. A non-probabilistic method of self-selection sampling was used for this study. Participation invitations were posted in selected groups on social media platforms such as Facebook™, Reddit™, and Wykop™. Target groups were found with combinations of the search terms: ‘men’, ‘boy’, ‘lad’, ‘Poland’, ‘polish’, ‘lgbt’, ‘lgbtqia’, and ‘lgbt+’. The inclusion criteria for a group were: Polish language, more than 25 male members, and more than 20 posts; the exclusion criteria were the presence of homophobic content or hate speech and a most recent post older than 2 weeks. Finally, the survey was published in 32 groups. The questionnaire consisted of 20 questions divided into three parts. The first concerned the epidemiological profile of respondents: age, place of residence (metropolitan, non-metropolitan, rural), educational status (primary, secondary, professional, higher), medical education (medical/non-medical), and sexual orientation (straight, gay, other, or I prefer not to say). The second part tested their knowledge about human papillomavirus, HPV-related diseases with distant implications of infection, possible routes of transmission, and prevention methods. The last part referred to vaccines as a protective tool against infection and opinions about HPV vaccines in Poland. Respondents were also asked to declare their vaccination status. At the end of the survey, an information note was displayed on the respondent’s screen with basic information about HPV-related cancers and the vaccines available in Poland. Moreover, a link to the educational page of the National Institute of Public Health concerning vaccination against HPV was provided. The chi-square (χ2) independence test, with Yates correction when necessary, and the Fisher exact test were used to assess the relationships between the studied variables on the nominal and ordinal scales. The Shapiro-Wilk test was used to assess the normality of variable distributions. The Mann-Whitney test was used to assess differences between heterosexual and homosexual males. Statistical analysis was performed using Statistica™ v.13.5 (StatSoft™).
Results: Six hundred seventy-one men were asked to participate in the study; 247 responded. Fifty-three participants were excluded due to incorrectly completed surveys. In order to keep the age distribution normal, an age limit was established and outliers (25 respondents) were removed. Finally, 169 males (aged 14-39) were chosen for the study. The mean age of respondents was 22.7 (+/- 5) years. Most respondents were straight (68.05%), lived in large cities (35.50%), were educated to a secondary school level (53.85%), were not medically trained (74.65%), and were not vaccinated against HPV (92.09%) (Tab. I). The majority (78.7%) of participants were aware of HPV (i.e. had heard of the virus). Sexually active males were more aware than those who were not sexually active (75.18 vs 24.82%, p < 0.05). Unsurprisingly, all participants with a medical education were aware of HPV. Participants living in cities with more than 100,000 inhabitants were more aware of HPV than those living in areas with fewer than 100,000 inhabitants (85.7 vs 69%, p < 0.05). However, population density was not correlated with greater general knowledge about the virus (p > 0.05). Whereas place of residence influences whether respondents have heard of the virus, the percentage of correct answers does not vary significantly between respondents from cities above 100,000 inhabitants and respondents from towns with fewer than 100,000 inhabitants and villages. The mean result in the first group was 61.6% of correct answers, versus 56.2% in the second group (p > 0.05). Educational level was not associated with better outcomes either. The mean result of men with a higher education level was 61.7% of correct answers, compared with 58% for men without higher education (p > 0.05). Respondents reported that information about HPV was obtained most often from the Internet (60.94%), broadcast media (28.4%), and literature [newspapers, magazines, popular science books, and school manuals (24.26%)]. Other sources of knowledge included family/friends (23.67%), social campaigns (22.49%), physicians (21.89%), universities (8.18%), and schools (5.33%). Statistically better results (defined as more than 50% of correct answers) were observed in respondents who reported the Internet, media, family/friends, physicians, literature, and universities as their sources of information about HPV (p < 0.05). Two-thirds of participants (62.72%) identified HPV infection as being associated with cervical cancer, and 36.69% associated HPV with anal cancer. Vaginal and vulvar cancers were also considered HPV-related diseases by 49.7% of participants, as were oropharyngeal cancer (38.46%) and plantar and genital warts (43.2%). As routes of transmission, most participants (81.07%) identified sexual intercourse, 42.6% identified labour and the perinatal period, 31.36% indicated skin-to-skin contact, and 26.63% indicated contaminated surfaces. Two-thirds (62.13%) pointed out infected human blood, and 14.79% considered droplets a route of transmission. A small proportion thought that HPV can spread through contaminated food (9.74%) and insect bites (8.28%). Participants considered vaccination (81.66%), avoidance of sexual encounters (65.09%), and use of condoms (62.72%) to be protective measures against HPV infection. Almost half (46.75%) indicated that the risk of HPV infection can be minimized by mutually monogamous relationships, while 44.37% indicated sexual abstinence. A healthy lifestyle (17.75%), the use of lubricants and spermicides (42.6%), avoidance of crowded places (11.83%), prophylactic use of antibiotics/antiretroviral therapy (6.51%), elimination of contaminated food/water (5.36%), and use of repellents/insecticides (3.55%) were all proposed as means through which the risk of HPV infection could be minimised. The vast majority of respondents (88.17%) knew that both sexes should be vaccinated, and 84.02% believed that vaccination should be compulsory in Poland. Conversely, 5.33% opposed mandatory immunization. More than half were aware of vaccine availability in Poland (56.80%), although 67.46% did not know whether vaccination is refunded. Most participants (73.37%) correctly identified that vaccination should occur before the commencement of sexual activity. Fewer than one in ten participants (7.1%) were vaccinated against HPV. There were no statistically significant differences in vaccination rate between respondents declaring medical and non-medical education (9.30 vs 6.35%). Similarly, no links were demonstrated between residence or education level and uptake of vaccination among participants. No significant difference existed with regard to awareness of HPV between straight and gay participants (75.65 vs 87.5%). Homosexual participants indicated that HPV infection increases the risk of anal cancer more frequently than heterosexual participants did (43.75 vs 27.83%, p < 0.05) (Tab. II). Gay participants were more frequently vaccinated against HPV than straight ones (16.67 vs 3.48%, p < 0.05).
Discussion: There are currently three vaccines available against HPV: bivalent, quadrivalent, and nonavalent. Countries with national vaccination programs (at least 10 years long) have noted a 90% reduction in HPV 6, 11, 16, and 18 infections, a 90% depletion in genital warts, and an 85% decrease in high-grade squamous intraepithelial lesions (HSIL) of the uterine cervix [18]. A combined analysis of vaccination with the bivalent vaccine revealed that the vaccine was 83.5% effective at preventing oral, anal, and cervical infections in a cohort of HPV-naive women [19]. Data on vaccine effectiveness against anal, penile, and oropharyngeal cancers are limited due to their low prevalence [20]. However, evidence is emerging that vaccination for males is warranted. In a study evaluating the efficacy of the quadrivalent HPV vaccine in men and boys, there was an overall 85.6% reduction in persistent infection with HPV 6, 11, 16, and 18, constituting a preventive measure against anogenital cancer, intraepithelial neoplasia, recurrent respiratory papillomatosis, and cancer of the oropharynx [21]. Most programs target girls exclusively, with only a handful of countries (e.g. Australia, Austria, Canada) vaccinating both sexes [22]. A systematic review and meta-analysis of predictions from transmission-dynamic models, funded by the Canadian Institutes of Health Research, suggests that elimination of HPV 6, 11, 16, and 18 is possible when 80% coverage in girls and boys is reached and high vaccine efficacy is maintained over time [22]. In 2007, Australia introduced a national HPV vaccination program, which was broadened in 2013 to cover both sexes. In 2017, Australia achieved vaccination levels of 80.2% in girls and 75.9% in boys at the age of 15. It is estimated that the age-standardised annual incidence of cervical cancer will decrease to fewer than 4 new cases per 100,000 women by 2028 [23]. There are no specific analyses of HPV vaccination coverage in Poland. According to estimates of the Major Statistical Department, the vaccination level in Poland fluctuates around 7.5-10% [24]. However, coverage among boys is estimated to be much lower. In Poland, the vaccine is not mandatory, but it is recommended by the Ministry of Health [14]. The total cost of vaccination (regardless of type) for one child in Poland is 246 USD [25]. In 2018, the average expenditure per capita in a Polish household was 286.65 USD [26]. Consequently, the cost of vaccinating one person in Poland is almost the equivalent of one month’s maintenance. Currently, numerous local-government reimbursement programs for vaccination are being implemented in Poland, mostly in the southern/western parts of the country. Only 12 municipalities organize vaccinations for both girls and boys [27]. This situation varies from year to year, depending on how funds are allocated, and is likely to account for the variability in participants’ responses regarding vaccination reimbursement. Current WHO recommendations target primary HPV vaccination at girls aged 9-14, prior to becoming sexually active, with at least a 2-dose schedule. The WHO suggests that achieving > 80% vaccination coverage in girls also reduces the risk of HPV infection for boys due to herd protection. Vaccination regardless of gender and age should be considered alongside other elements, such as competing health priorities, disease burden, programmatic implications, cost-effectiveness, and affordability [28]. The Advisory Committee on Immunization Practices (ACIP) endorses vaccination of all boys and girls at the age of 11 or 12 [12]. Catch-up vaccination is recommended through age 21 for males and age 26 for females. Furthermore, the CDC advocates vaccination for homosexual men, and for both men and women with compromised immune systems, up to the age of 26 if they were not fully vaccinated previously [12]. Thus, the need for vaccination among boys and adult men, especially those from higher-risk groups, cannot be omitted [29]. LIMITATIONS As self-selection sampling was used, the sample is not random. Consequently, the study group may not be representative of the entire population. Taking into consideration the low number of participants (169 males), the study group should be extended in further research. No identity verification tool was used in the study, thus one person could submit multiple responses. What is more, the possibility that gay men identified themselves as straight in the survey cannot be forgotten.
LIMITATIONS: As self-selection sampling was used, the sample is not random. Consequently, the study group may not be representative of the entire population. Taking into consideration the low number of participants (169 males), the study group should be extended in further research. No identity verification tool was used in the study, thus one person could submit multiple responses. What is more, the possibility that gay men identified themselves as straight in the survey cannot be forgotten.
Conclusions: Men are at risk of HPV-related cancers, and this danger is poorly understood amongst Polish men. Whilst the WHO suggests that vaccinating 80% of girls will protect men through herd immunity, gay men remain at risk. As such, HPV vaccination programs need to be extended to include boys and made more affordable to increase uptake for both sexes. Our study has demonstrated that knowledge about HPV is not correlated with place of residence or level of education. Educational campaigns have demonstrated minimal effectiveness, suggesting that funds should be directed to improving their influence or to expanding training programs within schools and universities. Further, co-funding of HPV vaccines should be introduced. The introduction of compulsory, refunded vaccines is likely the most effective means through which to increase the percentage of vaccinated people, thus decreasing the number of HPV-related cancers in Poland. Few respondents indicated doctors as a potential source of information about HPV. This situation requires greater engagement of physicians in programs concerning education and prophylaxis. We urge medical schools to broaden the knowledge of Polish medical students and healthcare professionals about the health needs of LGBTQ+ communities, preventing health disparities in the future.
Background: Social campaigns concerning vaccination against human papillomavirus (HPV) in Poland are mainly addressed to women. In addition to cervical cancer, anal, penile, and oropharyngeal cancers can be caused by the virus, which clearly affects men as well. HPV vaccination is voluntary and mostly not refunded in Poland. Methods: A survey was published in social media groups gathering males and contained questions concerning epidemiological data, knowledge about HPV, and opinions on HPV vaccination. The questionnaire was enriched with an educational note regarding HPV-dependent cancers and the vaccines against HPV available in Poland. Results: After applying age limits, 169 males (115 heterosexual, 48 homosexual) aged 14-39 were included in the study. Seventy-five percent of straight and 88% of gay men were aware of HPV, but fewer than 4% and 17% (respectively) were vaccinated against the virus. The main sources of knowledge about HPV were the Internet (61%), media (28%) and relatives (27%). HPV infection was linked with the development of anal and oropharyngeal cancers by 28% and 37% of heterosexual males, compared with 56.3% and 43.8% of homosexual males. The majority of respondents (88%) indicated that all genders should be vaccinated, although only 57% were aware of HPV vaccine availability in Poland. Conclusions: Men are at risk of HPV-related cancers, and this danger is poorly understood among Polish men. Despite awareness of HPV vaccines, the vaccination rate is low. Consequently, there is a serious need to broaden educational campaigns, with special attention to LGBTQ+ communities.
Introduction: The incidence of sexually transmitted infections (STIs) in Poland is increasing, with human papillomavirus (HPV) the most common cause [1, 2]. HPV is a dsDNA virus comprising more than 170 distinguishable types [3]. Almost every sexually active person is exposed to HPV during their lifetime, and both women and men are at risk of infection. Approximately 90% of HPV infections are asymptomatic and resolve spontaneously [4]. However, in some cases infection results in palmar/plantar warts, laryngeal papillomatosis, precancerous lesions, and an increased risk of developing cancer [2]. More than 99% of cervical cancer cases are associated with HPV infection (70% caused by HPV 16 and 18); cervical cancer is the sixth most common malignant neoplasm of the reproductive organs among women and the seventh highest cause of cancer-related death in Poland [3, 5]. Moreover, 70% of oropharyngeal, 63% of penile, and 91% of anal cancers are associated with HPV infection [5-7]. In some minority groups, such as gay men, the risk of HPV-dependent cancers is higher [8]. HPV can be transmitted through sexual intercourse, direct skin-to-skin contact, via contaminated surfaces, as well as during labour and the perinatal period (vertically) [2, 9]. The role of blood transfusions in transmission remains uncertain [10]. Early sexual activity, multiple sexual partners and impaired immune function (e.g. immunosuppression) are the leading risk factors for HPV infection [4]. The Centers for Disease Control and Prevention (CDC) recommends mutually monogamous sexual relationships, barrier contraceptives, and vaccines as preventive measures [4, 11, 12]. The World Health Organization (WHO) states that vaccination against HPV is most effective prior to the onset of sexual activity [13]. In Poland, HPV vaccinations are targeted at children aged 11-12, are voluntary and are mostly not refunded [14]. Whilst the risk of developing HPV-dependent cancer applies to both men and women, social campaigns regarding vaccination against papillomavirus in Poland are aimed mainly at women. Due to a lack of sexual education in Polish schools, knowledge about STIs is usually obtained from the Internet, media, and social campaigns [15]. An absence of educational programs considering the health needs of LGBTQ+ communities in medical school curricula affects sexual minorities and results in health disparities [16, 17]. Studies regarding knowledge about HPV and vaccination rates among males in Poland are limited and, to the best of our knowledge, there are no studies referring to sexual minorities in Poland. Therefore, we set out to explore the knowledge of Polish men about HPV infection and HPV-related cancers, and to identify any inequalities between heterosexual and homosexual men. Conclusions: Characteristics of the study group. HPV-related diseases in the opinion of respondents.
Background: Social campaigns concerning vaccination against human papillomavirus (HPV) in Poland are mainly addressed to women. In addition to cervical cancer, anal, penile, and oropharyngeal cancers can be caused by the virus, which clearly affects men as well. HPV vaccination is voluntary and mostly not refunded in Poland. Methods: A survey was published in social media groups gathering males and contained questions concerning epidemiological data, knowledge about HPV, and opinions on HPV vaccination. The questionnaire was enriched with an educational note regarding HPV-dependent cancers and the vaccines against HPV available in Poland. Results: After applying age limits, 169 males (115 heterosexual, 48 homosexual) aged 14-39 were included in the study. Seventy-five percent of straight and 88% of gay men were aware of HPV, but fewer than 4% and 17% (respectively) were vaccinated against the virus. The main sources of knowledge about HPV were the Internet (61%), media (28%) and relatives (27%). HPV infection was linked with the development of anal and oropharyngeal cancers by 28% and 37% of heterosexual males, compared with 56.3% and 43.8% of homosexual males. The majority of respondents (88%) indicated that all genders should be vaccinated, although only 57% were aware of HPV vaccine availability in Poland. Conclusions: Men are at risk of HPV-related cancers, and this danger is poorly understood among Polish men. Despite awareness of HPV vaccines, the vaccination rate is low. Consequently, there is a serious need to broaden educational campaigns, with special attention to LGBTQ+ communities.
3,296
312
[ 536, 1004, 957, 94, 220 ]
6
[ "hpv", "vaccination", "men", "poland", "participants", "study", "infection", "respondents", "risk", "sexual" ]
[ "hpv transmitted sexual", "virus hpv sexually", "human papillomavirus", "vaccination papillomavirus poland", "papillomavirus poland aimed" ]
null
[CONTENT] HPV | LGBTQ+ | Health disparities | HPV vaccines | Underserved populations [SUMMARY]
[CONTENT] HPV | LGBTQ+ | Health disparities | HPV vaccines | Underserved populations [SUMMARY]
null
[CONTENT] HPV | LGBTQ+ | Health disparities | HPV vaccines | Underserved populations [SUMMARY]
[CONTENT] HPV | LGBTQ+ | Health disparities | HPV vaccines | Underserved populations [SUMMARY]
[CONTENT] HPV | LGBTQ+ | Health disparities | HPV vaccines | Underserved populations [SUMMARY]
[CONTENT] Alphapapillomavirus | Anus Neoplasms | Health Knowledge, Attitudes, Practice | Healthcare Disparities | Humans | Male | Medically Underserved Area | Oropharyngeal Neoplasms | Papillomaviridae | Papillomavirus Infections | Papillomavirus Vaccines | Penile Neoplasms | Poland | Sexual and Gender Minorities | Vaccination | Vulnerable Populations [SUMMARY]
[CONTENT] Alphapapillomavirus | Anus Neoplasms | Health Knowledge, Attitudes, Practice | Healthcare Disparities | Humans | Male | Medically Underserved Area | Oropharyngeal Neoplasms | Papillomaviridae | Papillomavirus Infections | Papillomavirus Vaccines | Penile Neoplasms | Poland | Sexual and Gender Minorities | Vaccination | Vulnerable Populations [SUMMARY]
null
[CONTENT] Alphapapillomavirus | Anus Neoplasms | Health Knowledge, Attitudes, Practice | Healthcare Disparities | Humans | Male | Medically Underserved Area | Oropharyngeal Neoplasms | Papillomaviridae | Papillomavirus Infections | Papillomavirus Vaccines | Penile Neoplasms | Poland | Sexual and Gender Minorities | Vaccination | Vulnerable Populations [SUMMARY]
[CONTENT] Alphapapillomavirus | Anus Neoplasms | Health Knowledge, Attitudes, Practice | Healthcare Disparities | Humans | Male | Medically Underserved Area | Oropharyngeal Neoplasms | Papillomaviridae | Papillomavirus Infections | Papillomavirus Vaccines | Penile Neoplasms | Poland | Sexual and Gender Minorities | Vaccination | Vulnerable Populations [SUMMARY]
[CONTENT] Alphapapillomavirus | Anus Neoplasms | Health Knowledge, Attitudes, Practice | Healthcare Disparities | Humans | Male | Medically Underserved Area | Oropharyngeal Neoplasms | Papillomaviridae | Papillomavirus Infections | Papillomavirus Vaccines | Penile Neoplasms | Poland | Sexual and Gender Minorities | Vaccination | Vulnerable Populations [SUMMARY]
[CONTENT] hpv transmitted sexual | virus hpv sexually | human papillomavirus | vaccination papillomavirus poland | papillomavirus poland aimed [SUMMARY]
[CONTENT] hpv transmitted sexual | virus hpv sexually | human papillomavirus | vaccination papillomavirus poland | papillomavirus poland aimed [SUMMARY]
null
[CONTENT] hpv transmitted sexual | virus hpv sexually | human papillomavirus | vaccination papillomavirus poland | papillomavirus poland aimed [SUMMARY]
[CONTENT] hpv transmitted sexual | virus hpv sexually | human papillomavirus | vaccination papillomavirus poland | papillomavirus poland aimed [SUMMARY]
[CONTENT] hpv transmitted sexual | virus hpv sexually | human papillomavirus | vaccination papillomavirus poland | papillomavirus poland aimed [SUMMARY]
[CONTENT] hpv | vaccination | men | poland | participants | study | infection | respondents | risk | sexual [SUMMARY]
[CONTENT] hpv | vaccination | men | poland | participants | study | infection | respondents | risk | sexual [SUMMARY]
null
[CONTENT] hpv | vaccination | men | poland | participants | study | infection | respondents | risk | sexual [SUMMARY]
[CONTENT] hpv | vaccination | men | poland | participants | study | infection | respondents | risk | sexual [SUMMARY]
[CONTENT] hpv | vaccination | men | poland | participants | study | infection | respondents | risk | sexual [SUMMARY]
[CONTENT] hpv | sexual | infection | women | poland | risk | cancer | hpv infection | knowledge | men [SUMMARY]
[CONTENT] test | questions | assess | test assess | nominal | non | medical | respondents | hpv | metropolitan [SUMMARY]
null
[CONTENT] hpv | programs | increase | men | demonstrated | related cancers | schools | hpv related cancers | polish | related [SUMMARY]
[CONTENT] hpv | vaccination | participants | men | study | poland | study group | group | sexual | infection [SUMMARY]
[CONTENT] hpv | vaccination | participants | men | study | poland | study group | group | sexual | infection [SUMMARY]
[CONTENT] Poland ||| ||| Poland [SUMMARY]
[CONTENT] HPV | HPV ||| HPV | HPV | Poland [SUMMARY]
null
[CONTENT] HPV | Polish ||| ||| [SUMMARY]
[CONTENT] Poland ||| ||| Poland ||| HPV | HPV ||| HPV | HPV | Poland ||| 169 | 115 | 48 | 14-39 ||| 88% | HPV | less than 4 | 17% ||| HPV | 61% | 28% | 27% ||| 28 | 37% | 56.3 | 43.8% ||| 88% | only 57% | HPV | Poland ||| HPV | Polish ||| ||| [SUMMARY]
[CONTENT] Poland ||| ||| Poland ||| HPV | HPV ||| HPV | HPV | Poland ||| 169 | 115 | 48 | 14-39 ||| 88% | HPV | less than 4 | 17% ||| HPV | 61% | 28% | 27% ||| 28 | 37% | 56.3 | 43.8% ||| 88% | only 57% | HPV | Poland ||| HPV | Polish ||| ||| [SUMMARY]
Prevalence of overweight/obesity and its associated factors among a sample of Moroccan type 2 diabetes patients.
34394277
Obesity constitutes a major risk factor for the development of diabetes, and has been linked with poor glycaemic control among type 2 diabetic patients.
BACKGROUND
A questionnaire-based cross-sectional study was conducted in 2017 among 975 diabetes patients attending primary health centres. Demographic and clinical data were collected through face-to-face interviews. Anthropometric measurements, including body weight, height and waist circumference, were taken using standardized techniques and calibrated equipment.
METHODS
The prevalence of overweight was 40.4%, that of general obesity was 28.8% and that of abdominal obesity was 73.7%. Using multivariate analysis, we noted that general obesity was associated with female sex (AOR=3.004, 95% CI: 1.761-5.104, P<0.001), increased age (AOR=2.192, 95% CI: 1.116-4.307, P=0.023) and good glycaemic control (AOR=1.594, 95% CI: 1.056-2.407, P=0.027), whereas abdominal obesity was associated with female sex (AOR=2.654, 95% CI: 1.507-4.671, P<0.001) and insulin treatment (AOR=2.927, 95% CI: 1.031-8.757, P=0.048).
RESULTS
Overweight, general obesity and abdominal obesity were highly prevalent among participants, especially among women. Taken together, these findings urge the implementation of a roadmap to help this diabetic subpopulation adopt a healthier lifestyle.
CONCLUSION
[ "Adult", "Aged", "Body Mass Index", "Cross-Sectional Studies", "Diabetes Mellitus, Type 2", "Female", "Humans", "Male", "Middle Aged", "Morocco", "Obesity", "Obesity, Abdominal", "Overweight", "Prevalence", "Risk Factors", "Socioeconomic Factors", "Waist Circumference", "Young Adult" ]
8356625
Introduction
Diabetes is a major public health problem due to its negative effects on health and well-being and the costs engendered by its complications. This is exacerbated by the fact that the prevalence of the disease is elevated and highly dynamic, with a serious social and economic impact [1]. As a result, diabetes is currently one of the most worrying diseases in both industrialised and developing countries, as the latter are in nutrition transition. According to the International Diabetes Federation [2], the number of adult diabetic patients recorded in 2015 was 415 million, representing 8.8% of the world's population, and type 2 diabetes mellitus (T2DM) is the most common form of diabetes, representing more than 90% of all declared cases [3]. In Morocco, a country in the full phase of demographic, nutritional and epidemiological transition [4,5], diabetes is emerging as a preoccupying public health issue that represents a daily challenge for health practitioners. Based on a survey carried out by health authorities in 2000, the prevalence of diabetes in Morocco was about 6.6% in a population aged at least 20 years [6], whereas it reached 10.6% in 2018 according to the latest national survey on the common risk factors for non-communicable diseases [7]. These statistics reflect the evolution of the disease in Morocco and show that the situation requires rigorous management of diabetes along with its associated factors, such as overweight and obesity. It is well known that, for diabetic and non-diabetic persons alike, being overweight or obese is a major risk factor. That is why professional organisations and health professionals recommend weight loss as a primary strategy for glycaemic control. For example, the American Diabetes Association (ADA) recommends weight loss for all overweight or obese individuals who have or are at risk for diabetes [8]. There is a very close relation between weight and T2DM. Indeed, many previously published data have reported that most patients with T2DM are overweight or obese and that obese people present the highest risk of developing T2DM [9]. The simultaneous occurrence of the complicated conditions of diabetes and obesity within a single individual is called 'diabesity' [10]. Furthermore, overweight and obesity in diabetics are associated with poorer control of blood glucose levels and blood pressure [11], which represents a high cardiovascular risk [12]. Conversely, intentional weight loss is associated with reduced mortality in people with 'diabesity' [13]. Determinants of weight gain leading to overweight and obesity are clearly multifactorial and involve genetic, socioeconomic and environmental components [14]. Additionally, assessment of the nutritional status of patients with T2DM is essential to detect malnutrition that could increase morbidity and mortality and prolong the length of hospital stay. However, data regarding nutritional status and its associated factors in Moroccan type 2 diabetic patients remain scarce and limited. To our knowledge, there are no published studies on this subject in the Beni-Mellal Khenifra (BM-KH) region or in other regions of Morocco, except the study carried out by Ramdani et al. in 2012 on diabetes and obesity in eastern Morocco [15]. A better understanding of the factors associated with diabesity in the Moroccan context is strongly needed to help diabetics and health professionals better manage diabetes.
Thus, the aim of this work is to determine the prevalence of overweight and obesity and to identify their determinant factors in a sample of type 2 diabetic patients in the BM-KH region.
Methods
Study participants and data collection
We conducted a cross-sectional survey in 2017 among 975 T2DM patients attending primary health centres in the BM-KH region of Morocco. At the time of the survey, and according to the Regional Observatory of Health in the BM-KH region, the primary health centres provided health services for 153 000 T2DM patients registered in five provinces, who receive regular medical follow-up and get their medications dispensed at the centres free of charge. For patient selection, a multilevel random-sampling method was used to recruit participants. The sample size was calculated based on the following parameters: prevalence of overweight and obesity (50%) among T2DM patients, 4% margin of error (e=0.04) and 95% confidence level (z=3.20); thus, the minimum study sample size was 932, which was rounded up to 1000 persons for more accuracy and in order to account for possible exclusions and the need to carry out subgroup analysis. The sample surveyed in the five provinces of the BM-KH region was proportional to the total T2DM population in each province. All primary health centres providing diabetes care in each province were counted, and centres were randomly selected from these; the number of centres was chosen based on the proportions of diabetes patients recorded in each province. Thus, 15 primary health centres were the setting for the survey. Every workday, a list of expected participants was obtained from the healthcare centres. The sampling interval K depended on the number of people attending the centre each day, which varied between centres. The first participant meeting the inclusion criteria was randomly selected by the investigator, and then every Kth patient was recruited into the study; if the Kth person declined, the next person was invited. Recruitment continued until data had been collected from 1000 patients. After cleaning of the files, 25 questionnaires with missing data or unreadable handwriting were eliminated, leaving a final sample size of 975. A face-to-face interview was carried out by trained interviewers to collect data, including sociodemographic information such as age, sex, place of residence, marital status, family size, level of education, and occupational status. Participants' educational levels were classified into 4 categories: "Illiterate" (unable to read and write and without formal education); "Primary" (1 to 6 years of formal education); "Secondary" (7 to 12 years of formal education); and "University" (at least 13 years of formal education). Employment status was categorized as working or not currently employed. In addition, we collected information about diabetes, such as the duration of diabetes in years, family history of diabetes (defined as having a parent or sibling with diabetes), treatment type, and complications linked to diabetes. The inclusion criteria were as follows: diagnosed with T2DM for 1 year or more, with an available medical file; aged at least 18 years; had an HbA1c test during the last three months; physically and mentally able to provide all data required for the study; and willing to participate in the study. Patients with type 1 diabetes, hospitalized patients and pregnant women with diabetes were excluded. Written approval for this study was obtained from the Health Ministry, Morocco, on 3 March 2016 (reference no. 6397-3/3/2016). For the questionnaire, informed written consent was obtained from all respondents after explaining the purpose of the study, the importance of their contribution and their right to refuse participation. The data are anonymized and free of personally identifiable information.
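As a cross-check of the sample-size statement above, the sketch below applies the standard Cochran formula n = z²·p(1−p)/e² with the quoted inputs (p = 0.50, e = 0.04). Note that the conventional z for a 95% confidence level is 1.96 rather than the 3.20 quoted in the text, and neither value reproduces the reported minimum of 932 exactly, so the snippet is illustrative only; Python and the helper name are our own choices, not part of the study.

```python
import math

def cochran_sample_size(p: float, e: float, z: float) -> int:
    """Minimum sample size by Cochran's formula, rounded up to a whole person."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# Inputs quoted in the text: expected prevalence 50%, 4% margin of error.
print(cochran_sample_size(p=0.50, e=0.04, z=1.96))  # 601 (conventional z for 95%)
print(cochran_sample_size(p=0.50, e=0.04, z=3.20))  # 1600 (z as quoted in the text)
```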
Anthropometric measurements and biological parameters
Height and body weight were measured for all participants by trained research staff: body weight was measured to the nearest 0.1 kg using a digital scale (Seca 877, Hamburg, Germany), and height was recorded to the nearest 0.1 cm using a wall-mounted stadiometer (Seca 216, Hamburg, Germany). Measurements were taken for each participant in light clothing and without shoes, and body mass index (BMI) was calculated as weight in kilograms divided by height in metres squared and categorized as underweight (<18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2) or obese (≥30 kg/m2) [14]. Waist circumference was also measured, to the nearest 0.5 cm, and abdominal obesity (AO) was defined as a waist circumference (WC) ≥102 centimetres in men and ≥88 centimetres in women [14]. For biological indicators, the most recent HbA1c measurements (if not exceeding 3 months prior) were extracted from patients' medical records. Following the ADA, we defined glycaemic status as good glycaemic control if HbA1c <7% and poor glycaemic control if HbA1c ≥7% [16,17].
Statistical analysis
Statistical analysis was carried out using the Statistical Package for the Social Sciences (Version 19.0, SPSS, Inc.). Data are described as the mean ± standard deviation (SD) for continuous variables and as proportions for categorical variables. Numerical variables were analyzed using the Student t-test. Associations between overweight/obesity and the candidate determinant factors were first examined through bivariate logistic regression analysis; all variables significant in the bivariate analysis (p<0.05) were then entered into a multivariate logistic regression model to determine the independent factors associated with being obese or overweight. Each statistical test was considered significant when the P-value was less than 0.05.
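To make the anthropometric and biological definitions above concrete, here is a minimal sketch of the three classification rules (BMI categories, sex-specific waist-circumference cutoffs for abdominal obesity, and the ADA-based HbA1c threshold). The function names and the example patient are hypothetical; only the cutoffs come from the text.

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """BMI bands as used in the study: <18.5, 18.5-24.9, 25-29.9, >=30 kg/m2."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def abdominal_obesity(waist_cm: float, sex: str) -> bool:
    """Abdominal obesity: WC >= 102 cm in men, >= 88 cm in women."""
    return waist_cm >= (102.0 if sex == "male" else 88.0)

def glycaemic_control(hba1c_percent: float) -> str:
    """ADA-based status: 'good' if HbA1c < 7%, otherwise 'poor'."""
    return "good" if hba1c_percent < 7.0 else "poor"

# Hypothetical patient: 82 kg, 1.65 m, WC 95 cm, HbA1c 7.8%.
print(bmi_category(82.0, 1.65), abdominal_obesity(95.0, "female"), glycaemic_control(7.8))
# -> obese True poor
```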
Results
Socio-demographic, clinical and anthropometric characteristics
The socio-demographic, clinical and anthropometric characteristics of participants are presented in Table 1 (Socio-demographic, clinical and anthropometric characteristics of T2DM patients). Women were over-represented (74%) and the majority of the respondents (76%) were living in urban areas. The mean age was 56.19 ± 11.486 years: 9.5% were less than 40 years of age, 20.7% were 41–50 years, 35% were in the age group 51–60 years and 34.9% were ≥61 years. Almost two thirds (66.5%) of the patients were illiterate, 15.7% had primary education, 13.2% had completed secondary education and only 4.6% had a university education level. Over half of the study participants (67.2%) were married at the time of the study. We noted that the prevalence of overweight including obesity (BMI ≥25 kg/m2) was 69.2%, and that 28.8% of the respondents were obese (BMI ≥30 kg/m2). The remaining participants (29.4%) were of normal weight, while only 1.4% were underweight. Regarding AO, measured by WC, the results showed that the mean WC was significantly higher in women (100.59 ± 11.63 cm) than in men (95.20 ± 16.57 cm) (t=-3.287; P<0.001). Concerning the duration of diabetes, the mean duration was 8.55 ± 6.95 years. The average fasting plasma glucose and HbA1c of the subjects were higher than the ADA treatment goals [16], and glycaemic control measured by HbA1c showed that 69.4% of the patients were classified as having poor glycaemic control (HbA1c ≥7%). With regard to diabetic medications, 46.1% of respondents took oral medication either alone or in combination with insulin (21.8%), 26.4% were treated with insulin alone, and 5.4% were on diet only.
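The sex difference in mean waist circumference reported above (t=-3.287; P<0.001) comes from the Student t-test named in the statistical analysis. A minimal sketch of that comparison is given below; the file name and the column names ("sex", "wc_cm") are hypothetical placeholders, not the study's actual dataset.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("t2dm_survey.csv")  # hypothetical file name

wc_women = df.loc[df["sex"] == "female", "wc_cm"].dropna()
wc_men = df.loc[df["sex"] == "male", "wc_cm"].dropna()

# Two-sample Student t-test (equal variances assumed, scipy's default).
t_stat, p_value = stats.ttest_ind(wc_men, wc_women)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
```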
Nutritional status and associated factors
Overweight and obesity were observed in 529 (69.2%) patients. Sex, age, educational level, household members, occupation, diabetes duration, glycaemic control and type of diabetic treatment were the candidate variables for logistic regression. The following factors were statistically significant in bivariate analysis: sex, age, education level, occupation and glycaemic control (HbA1c<7%). After adjustment using multivariate logistic regression, overweight and obesity were statistically associated with female sex, age above 41 years and good glycaemic control (Table 2: Factors associated with overweight and obesity among T2DM patients; statistically significant at P-value <0.05; COR = Crude Odds Ratio, AOR = Adjusted Odds Ratio, CI = Confidence Interval). The odds of overweight and obesity among females were three times higher than among males (AOR=3.004, 95% CI: 1.761–5.104, P<0.001). Regarding age, the relative probability of being overweight or obese among participants aged 41 years and above was higher than among those aged below 41 years (AOR=2.192, 95% CI: 1.116–4.307, P=0.023). Concerning glycaemic control, the relative probability of being overweight or obese among patients with good glycaemic control was higher than among patients with poor glycaemic control (AOR=1.594, 95% CI: 1.056–2.407, P=0.027). The prevalence of AO was high (73.7%) among patients and was statistically significantly associated in bivariate analysis with sex (P<0.001), age (P=0.032), occupation (P<0.001) and type of treatment (P=0.016) (Table 3: Factors associated with abdominal obesity among T2DM patients; statistically significant at P-value <0.05; COR = Crude Odds Ratio, AOR = Adjusted Odds Ratio, CI = Confidence Interval). After adjustment using multivariate logistic regression, AO was significantly associated with female sex (AOR=2.654, 95% CI: 1.507–4.671, P<0.001) and insulin treatment (AOR=2.927, 95% CI: 1.031–8.757, P=0.048).
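The adjusted odds ratios above follow the two-step procedure described in the statistical analysis: bivariate logistic regressions to screen candidates at p<0.05, then a multivariate model whose exponentiated coefficients give the AORs and 95% CIs. A minimal sketch with statsmodels is shown below; the file and column names are hypothetical placeholders, and each predictor is assumed to be pre-coded as a 0/1 dummy.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("t2dm_survey.csv")   # hypothetical file name
outcome = "overweight_or_obese"       # 1 if BMI >= 25 kg/m2, else 0
candidates = ["female", "age_over_40", "low_education",
              "not_employed", "good_glycaemic_control"]  # hypothetical 0/1 dummies

# Step 1: bivariate screening at p < 0.05.
selected = []
for var in candidates:
    model = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
    if model.pvalues[var] < 0.05:
        selected.append(var)

# Step 2: multivariate model on the screened variables; exponentiated
# coefficients are the adjusted odds ratios with their 95% CIs.
final = sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)
aor = np.exp(final.params).rename("AOR")
ci = np.exp(final.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1).drop(index="const"))
```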
Conclusion
Overweight, general obesity and abdominal obesity were highly prevalent among participants. General obesity was associated with female sex and good glycaemic control, whereas abdominal obesity was associated with female sex and insulin treatment. Given the high prevalence of obesity among women, there may be additional public health benefits in targeting this population group, because their behaviours may influence those of other proximal population groups, such as their children and families. The health consequences of diabetes are compounded by overweight and obesity. However, the prevalence of overweight and obesity among people with diabetes has not been monitored regularly; given this, weight management should receive a higher priority in the management of diabetes.
[ "Study participants and data collection", "Anthropometric measurements and biological parameters", "Statistical analysis", "Socio-demographic, clinical and anthropometric characteristics", "Nutritional status and associated factors" ]
[ "We conducted a cross-sectional survey in 2017 among 975 T2DM patients attending primary health centres in the BM-KH region of Morocco. At the time of the survey and according to the Regional Observatory of Health in the BM-KH region, the primary health centres provide health services for 153 000 T2DM patients registered in five provinces who receive regular medical follow-up and get their medications dispensed at the centres free of charge.\nFor patient selection, a multilevel random-sampling method was used to recruit participants.\nThe sample size was calculated based on the following parameters: prevalence of overweight and obesity (50%) among T2DM patients, 4% margin of error (e=0.04) and 95% confidence level (z=3.20); thus, the minimum study sample size was 932, which was rounded up to 1000 persons for more accuracy and in order to account for possible exclusions and the need to carry out subgroup analysis.\nThe sample surveyed in the five provinces of the BMKH region was proportional to the total T2DM population in each province. All primary health centres providing diabetes care in each province were counted and centres were randomly selected from these. The number of primary health centres was chosen based on proportions of diabetes patients recorded in each province. Thus 15 primary health centres were the setting for the survey.\nEvery workday, a list of expected participants was obtained from the healthcare centres. The value of K participants depended on the number of people attending the centre each day, which varies between centres. The first K participant to be recruited into the study and who met the inclusion criteria was randomly selected by the investigator and then every Kth patient was recruited into the study. If the Kth person declined, the next person was invited. The recruitment was continued until data were collected from 1000 patients. After cleaning of the files, 25 questionnaires with missing data or unreadable handwriting were eliminated; the sample size remains 975.\nA face-to-face interview was carried out by trained interviewers to collect data, including sociodemographic information, such as age, sex, place of residence, marital status, family size, level of education, and occupational status. Participants' educational levels were classified into 4 categories as follows: “Illiterate” (unable to read and write and without formal education); “Primary” (had 1 to 6 years of formal education); “Secondary” (had 7 to 12 years of formal education) and “university” (had at least 13 years of formal education). The employment status was categorized as working or not currently employed. In addition, we collected information about diabetes, such as the duration of diabetes in years, family history of diabetes (defined as having a parent or sibling with diabetes), treatment type, and complications linked to diabetes.\nThe inclusion criteria for this study were as follows: patients diagnosed with T2DM for 1 year or more, with an available medical file; aged at least 18 years; had an HbA1c test during the last three months; physically and mentally able to provide all data required for the study; and willing to participate in the study.\nPatients with type 1 diabetes, hospitalized patients and pregnant women with diabetes were excluded from this study.\nWritten approval for this study was obtained from the Health Ministry, Morocco, on 3 March 2016 (reference no. 6397-3/3/2016). 
For the questionnaire, informed written consent was obtained from all respondents after explaining the purpose of the study, the importance of their contribution and their right to refuse participation. The data are anonymized and free of personally identifiable information.", "Height and body weight were measured for all participants by trained research staff; body weight was measured to the nearest 0.1 kg using a digital scale (Seca 877, Hamburg, Germany), and height was recorded to the nearest 0.1 cm using a wall-mounted stadiometer (Seca 216, Hamburg, Germany). Measurements were taken for each participant with light clothing and without shoes, and body mass index (BMI) was calculated as weight in kilograms divided by height in metres squared and categorized as underweight (<18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2) and obese (≥ 30 kg/m2) 14.\nWaist circumference was also measured to the nearest 0.5 cm, and abdominal obesity (AO) was defined as waist circumference (WC) ≥102 centimetres in men and ≥88 centimetres in women14.\nFor biological indicators, the most recent HbA1c measurements (if not exceeding 3 months prior) were extracted from medical patients' records. According to the ADA, we defined glycaemic status as good glycaemic control if HbA1c <7% and poor glycaemic control if HbA1c ≥ 7%16,17.", "Statistical analysis was carried out using Statistical Package for Social Sciences (Version 19.0, SPSS, Inc) Software. Data are described as the mean ± standard deviation (SD) for continuous variables and proportions for categorical variables. Numerical variables were analyzed using the student t-test. The association between overweight/obesity and the determinant factors considered were researched through bivariate logistic regression analysis and then all significant variables in the bivariate analysis (p< 0.05) were considered in the multivariate logistic regression model to determine independent factors associated with being obese or overweight. For each statistical test used, the test is considered significant when P-value (degree of significance) is less than 0.05.", "The socio-demographic, clinical and anthropometric characteristics of participants are presented in table 1.\nSocio-demographic, clinical and anthropometric characteristics of T2DM patients\nWomen were over-represented (74 %) and the majority of the respondents (76%) was living in urban areas. The mean age was 56.19 ± 11.486 years, 9.5% were less than 40 years of age, 20.7% were 41–50 years, 35 % of the respondents represented the age group 51–60 years and 34.9 % were ≥ 61 years. Almost two thirds (66.5%) of the patients were illiterate, 15.7% had primary education, 13.2% of them had completed secondary education and only 4.6% had university education level. Over half of the study participants (67.2%) were married at the time of the study.\nWe noted that the prevalence of overweight, including obesity (BMI ≥ 25 kg/m2) was at the level of 69.2 and 28.8% of the respondents were obese (BMI ≥ 30 kg/m2). The remaining proportions of participants (29.4%) were normal weight, while only 1.4% were underweight. Regarding AO, measured by WC, the results of this study showed that the mean value of WC was significantly higher in women (100.59 ± 11.63 cm) than in men (95.20 ±16.57 cm) (t= -3.287; P<0.001). Concerning the duration of diabetes, the mean duration was 8.55± 6.95 years. 
The average fasting plasma glucose and HbA1c of the subjects were higher than the ADA treatment goals16 and the glycaemic control measured by HbA1c showed that 69.4% of the patients were classified as having poor glycaemic control (HbA1c ≥ 7%). With regard to diabetic medications, 46.1% of respondents took oral medication either alone or in combination with insulin (21.8%), 26.4% were treated with insulin alone, and 5.4% were on diet only.", "Overweight and obesity were observed in 529 (69.2 %) patients. Sex, age, educational level, household members, occupation, diabetes duration, glycaemic control and type of diabetic treatment were the candidate variables for logistic regression. The following factors were statistically significant in bivariate analysis: sex, age, education level, occupation and glycaemic control (HbA1c<7%). However, by adjusting the model using multivariate logistic regression, we have found that overweight and obesity were statistically associated with female sex, age above 41 years and good glycaemic control (Table 2).\nFactors associated with overweight and obesity among T2DM patients\nStatistically significant at P value <0.05\nCOR= Crude Odds Ratio; AOR= Adjusted Odds Ratio; CI= Confidence Interval\nThe overweight and obesity among females were three times higher than among males (AOR= 3,004, 95% CI: 1.761–5.104, P<0.001). Regarding age, the relative probability of being overweight and obese among participants aged 41 years and above was higher than those with age below 41 years (AOR=2.192, 95% CI: 1.116–4.307, P<0.023). Concerning the glycaemic control, the relative probability of being overweight and obese among patients with good glycaemic control was higher than patients with poor glycaemic control ones (AOR=1.594, 95% CI: 1.056–2.407, P=0.027).\nThe prevalence of AO was higher (73.7%) among patients and was statistically significant in bivariate analysis with sex (P<0.001), age (P=0.032), occupation (P<0.001) and type of treatment P=0.016) (Table 3).\nFactors associated with abdominal obesity among T2DM patients\nStatistically significant at P value <0.05\nCOR= Crude Odds Ratio; AOR= Adjusted Odds Ratio; CI= Confidence Interval\nBy adjusting the model using multivariate logistic regression, a statistically significant difference was found in AO to female sex (AOR=2.654, 95% CI: 1.507–4.671, P<0.001) and insulin treatment (AOR=2.927, 95% CI: 1.031–8.757, P=0.048)." ]
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Study participants and data collection", "Anthropometric measurements and biological parameters", "Statistical analysis", "Results", "Socio-demographic, clinical and anthropometric characteristics", "Nutritional status and associated factors", "Discussion", "Conclusion" ]
[ "Diabetes is a major public health problem due to its negative effects on health and well-being and the costs engendered by its complications. This could be exacerbated by the fact that the prevalence of this disease is elevated and is highly dynamic with a serious and socioeconomic impact1. As a result, diabetes is currently one of the most worrying diseases in both industrialised and developing countries, as the latter are in nutrition transition.\nAccording to the International Diabetes Federation2, the number of adult diabetic patients recorded in 2015 was 415 million, representing 8.8% of the world's population, and type 2 diabetes mellitus (T2DM) is the most common form of diabetes, representing more than 90% of all declared cases3. In Morocco, a country in the full phase of demographic, nutritional and epidemiological transition4,5, diabetes is emerging as a preoccupying public health issue that represents a challenge facing health practitioners daily. Based on a survey carried out by health authorities in 2000, the prevalence of diabetes in Morocco was about 6.6% in a population aged at least 20 years 6, whereas it reached 10.6% in 2018 according to the latest national survey on the common risk factors for non-communicable diseases7. These statistics reflect the evolution of the disease in Morocco and that the situation requires rigorous management of diabetes along with its associated factors such as overweight and obesity.\nIt is well known that for diabetic and non-diabetic persons, being overweight or obese is a major risk factor. That is why professional organisations and health professionals recommend weight loss as a primary strategy for glycaemic control. For example, the American Diabetes Association (ADA) recommends weight loss for all overweight or obese individuals who have or are at risk for diabetes8.\nThere is a very close relation between weight and T2DM. Indeed, many previous published data have reported that most of patients with T2DM are overweight or obese and that obese people present the highest risk of developing T2DM9. The simultaneous occurrence of the complicated conditions of diabetes and obesity within a single individual is called ‘diabesity’10. Furthermore, overweight and obesity in diabetics are associated with poorer control of blood glucose levels and blood pressure11, which represents high cardiovascular risk12. Conversely, intentional weight loss is associated with reduced mortality in people with ‘diabesity’ 13.\nDeterminants of weight gain leading to overweight and obesity are clearly multifactorial and involve genetic, socioeconomic and environmental components14. Additionally, assessment of the nutritional status of patients with T2DM is essential to detect malnutrition that could increase morbidity and mortality and prolong the length of hospital stay. However, data regarding nutritional status and its associated factors in Moroccan type 2 diabetic patients remain scarce and limited. To our knowledge, there are no published studies on this subject in the Beni-Mellal Khenifra (BM-KH) region and in other regions of Morocco, except the study carried out by Ramdani et al. in 2012 on diabetes and obesity in eastern Morocco15. A better understanding of the factors associated with diabesity in the Moroccan context is strongly needed to help diabetics and health professionals better manage diabetes. 
Thus, the aim of this work is to determine the prevalence of overweight and obesity and to identify its determinant factors in a sample of type 2 diabetic patients in the BM-KH region.", "Study participants and data collection We conducted a cross-sectional survey in 2017 among 975 T2DM patients attending primary health centres in the BM-KH region of Morocco. At the time of the survey and according to the Regional Observatory of Health in the BM-KH region, the primary health centres provide health services for 153 000 T2DM patients registered in five provinces who receive regular medical follow-up and get their medications dispensed at the centres free of charge.\nFor patient selection, a multilevel random-sampling method was used to recruit participants.\nThe sample size was calculated based on the following parameters: prevalence of overweight and obesity (50%) among T2DM patients, 4% margin of error (e=0.04) and 95% confidence level (z=3.20); thus, the minimum study sample size was 932, which was rounded up to 1000 persons for more accuracy and in order to account for possible exclusions and the need to carry out subgroup analysis.\nThe sample surveyed in the five provinces of the BMKH region was proportional to the total T2DM population in each province. All primary health centres providing diabetes care in each province were counted and centres were randomly selected from these. The number of primary health centres was chosen based on proportions of diabetes patients recorded in each province. Thus 15 primary health centres were the setting for the survey.\nEvery workday, a list of expected participants was obtained from the healthcare centres. The value of K participants depended on the number of people attending the centre each day, which varies between centres. The first K participant to be recruited into the study and who met the inclusion criteria was randomly selected by the investigator and then every Kth patient was recruited into the study. If the Kth person declined, the next person was invited. The recruitment was continued until data were collected from 1000 patients. After cleaning of the files, 25 questionnaires with missing data or unreadable handwriting were eliminated; the sample size remains 975.\nA face-to-face interview was carried out by trained interviewers to collect data, including sociodemographic information, such as age, sex, place of residence, marital status, family size, level of education, and occupational status. Participants' educational levels were classified into 4 categories as follows: “Illiterate” (unable to read and write and without formal education); “Primary” (had 1 to 6 years of formal education); “Secondary” (had 7 to 12 years of formal education) and “university” (had at least 13 years of formal education). The employment status was categorized as working or not currently employed. 
In addition, we collected information about diabetes, such as the duration of diabetes in years, family history of diabetes (defined as having a parent or sibling with diabetes), treatment type, and complications linked to diabetes.\nThe inclusion criteria for this study were as follows: patients diagnosed with T2DM for 1 year or more, with an available medical file; aged at least 18 years; had an HbA1c test during the last three months; physically and mentally able to provide all data required for the study; and willing to participate in the study.\nPatients with type 1 diabetes, hospitalized patients and pregnant women with diabetes were excluded from this study.\nWritten approval for this study was obtained from the Health Ministry, Morocco, on 3 March 2016 (reference no. 6397-3/3/2016). For the questionnaire, informed written consent was obtained from all respondents after explaining the purpose of the study, the importance of their contribution and their right to refuse participation. The data are anonymized and free of personally identifiable information.\nWe conducted a cross-sectional survey in 2017 among 975 T2DM patients attending primary health centres in the BM-KH region of Morocco. At the time of the survey and according to the Regional Observatory of Health in the BM-KH region, the primary health centres provide health services for 153 000 T2DM patients registered in five provinces who receive regular medical follow-up and get their medications dispensed at the centres free of charge.\nFor patient selection, a multilevel random-sampling method was used to recruit participants.\nThe sample size was calculated based on the following parameters: prevalence of overweight and obesity (50%) among T2DM patients, 4% margin of error (e=0.04) and 95% confidence level (z=3.20); thus, the minimum study sample size was 932, which was rounded up to 1000 persons for more accuracy and in order to account for possible exclusions and the need to carry out subgroup analysis.\nThe sample surveyed in the five provinces of the BMKH region was proportional to the total T2DM population in each province. All primary health centres providing diabetes care in each province were counted and centres were randomly selected from these. The number of primary health centres was chosen based on proportions of diabetes patients recorded in each province. Thus 15 primary health centres were the setting for the survey.\nEvery workday, a list of expected participants was obtained from the healthcare centres. The value of K participants depended on the number of people attending the centre each day, which varies between centres. The first K participant to be recruited into the study and who met the inclusion criteria was randomly selected by the investigator and then every Kth patient was recruited into the study. If the Kth person declined, the next person was invited. The recruitment was continued until data were collected from 1000 patients. After cleaning of the files, 25 questionnaires with missing data or unreadable handwriting were eliminated; the sample size remains 975.\nA face-to-face interview was carried out by trained interviewers to collect data, including sociodemographic information, such as age, sex, place of residence, marital status, family size, level of education, and occupational status. 
Participants' educational levels were classified into 4 categories as follows: “Illiterate” (unable to read and write and without formal education); “Primary” (had 1 to 6 years of formal education); “Secondary” (had 7 to 12 years of formal education) and “university” (had at least 13 years of formal education). The employment status was categorized as working or not currently employed. In addition, we collected information about diabetes, such as the duration of diabetes in years, family history of diabetes (defined as having a parent or sibling with diabetes), treatment type, and complications linked to diabetes.\nThe inclusion criteria for this study were as follows: patients diagnosed with T2DM for 1 year or more, with an available medical file; aged at least 18 years; had an HbA1c test during the last three months; physically and mentally able to provide all data required for the study; and willing to participate in the study.\nPatients with type 1 diabetes, hospitalized patients and pregnant women with diabetes were excluded from this study.\nWritten approval for this study was obtained from the Health Ministry, Morocco, on 3 March 2016 (reference no. 6397-3/3/2016). For the questionnaire, informed written consent was obtained from all respondents after explaining the purpose of the study, the importance of their contribution and their right to refuse participation. The data are anonymized and free of personally identifiable information.\nAnthropometric measurements and biological parameters Height and body weight were measured for all participants by trained research staff; body weight was measured to the nearest 0.1 kg using a digital scale (Seca 877, Hamburg, Germany), and height was recorded to the nearest 0.1 cm using a wall-mounted stadiometer (Seca 216, Hamburg, Germany). Measurements were taken for each participant with light clothing and without shoes, and body mass index (BMI) was calculated as weight in kilograms divided by height in metres squared and categorized as underweight (<18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2) and obese (≥ 30 kg/m2) 14.\nWaist circumference was also measured to the nearest 0.5 cm, and abdominal obesity (AO) was defined as waist circumference (WC) ≥102 centimetres in men and ≥88 centimetres in women14.\nFor biological indicators, the most recent HbA1c measurements (if not exceeding 3 months prior) were extracted from medical patients' records. According to the ADA, we defined glycaemic status as good glycaemic control if HbA1c <7% and poor glycaemic control if HbA1c ≥ 7%16,17.\nHeight and body weight were measured for all participants by trained research staff; body weight was measured to the nearest 0.1 kg using a digital scale (Seca 877, Hamburg, Germany), and height was recorded to the nearest 0.1 cm using a wall-mounted stadiometer (Seca 216, Hamburg, Germany). Measurements were taken for each participant with light clothing and without shoes, and body mass index (BMI) was calculated as weight in kilograms divided by height in metres squared and categorized as underweight (<18.5 kg/m2), normal (18.5–24.9 kg/m2), overweight (25–29.9 kg/m2) and obese (≥ 30 kg/m2) 14.\nWaist circumference was also measured to the nearest 0.5 cm, and abdominal obesity (AO) was defined as waist circumference (WC) ≥102 centimetres in men and ≥88 centimetres in women14.\nFor biological indicators, the most recent HbA1c measurements (if not exceeding 3 months prior) were extracted from medical patients' records. 
Statistical analysis\nStatistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS, version 19.0; SPSS Inc.). Data are described as mean ± standard deviation (SD) for continuous variables and as proportions for categorical variables. Continuous variables were compared using Student's t-test. The associations between overweight/obesity and the candidate factors were first examined with bivariate logistic regression; all variables significant in the bivariate analysis (P < 0.05) were then entered into a multivariate logistic regression model to identify the independent factors associated with being overweight or obese. For all tests, a P-value below 0.05 was considered statistically significant.", "We conducted a cross-sectional survey in 2017 among 975 T2DM patients attending primary health centres in the BM-KH region of Morocco. At the time of the survey, according to the Regional Observatory of Health in the BM-KH region, these primary health centres provided health services for 153 000 T2DM patients registered in five provinces, who receive regular medical follow-up and have their medications dispensed at the centres free of charge.\nFor patient selection, a multilevel random-sampling method was used to recruit participants.\nThe sample size was calculated based on the following parameters: an assumed prevalence of overweight and obesity of 50% among T2DM patients, a 4% margin of error (e=0.04) and a 95% confidence level (z=3.20); this gave a minimum sample size of 932, which was rounded up to 1000 persons for greater accuracy, to allow for possible exclusions and to support subgroup analysis.\nThe sample surveyed in the five provinces of the BM-KH region was proportional to the total T2DM population of each province. All primary health centres providing diabetes care in each province were counted, and centres were randomly selected from these, with the number of centres per province proportional to the number of diabetes patients recorded there. In total, 15 primary health centres served as the survey setting.\nEvery workday, a list of expected attendees was obtained from each centre. The sampling interval K depended on the number of people attending the centre that day, which varied between centres. The first eligible participant was selected at random by the investigator, and every Kth patient thereafter was recruited; if the Kth person declined, the next person was invited.\nRecruitment continued until data had been collected from 1000 patients. After data cleaning, 25 questionnaires with missing data or unreadable handwriting were eliminated, leaving a final sample of 975.\nA face-to-face interview was carried out by trained interviewers to collect sociodemographic information such as age, sex, place of residence, marital status, family size, level of education and occupational status. Participants' educational levels were classified into four categories: “illiterate” (unable to read and write, without formal education); “primary” (1 to 6 years of formal education); “secondary” (7 to 12 years of formal education); and “university” (at least 13 years of formal education). Employment status was categorized as working or not currently employed. In addition, we collected information about diabetes, such as the duration of diabetes in years, family history of diabetes (defined as having a parent or sibling with diabetes), treatment type, and diabetes-related complications.\nThe inclusion criteria were as follows: diagnosed with T2DM for at least 1 year, with an available medical file; aged at least 18 years; an HbA1c test within the last three months; physically and mentally able to provide all data required for the study; and willing to participate.\nPatients with type 1 diabetes, hospitalized patients and pregnant women with diabetes were excluded.\nWritten approval for this study was obtained from the Health Ministry, Morocco, on 3 March 2016 (reference no. 6397-3/3/2016). Informed written consent was obtained from all respondents after the purpose of the study, the importance of their contribution and their right to refuse participation had been explained. The data are anonymized and free of personally identifiable information."
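For readers who want to reproduce this kind of calculation, below is a minimal sketch of the standard single-proportion (Cochran) formula. Note that the z-value conventionally paired with a 95% confidence level is 1.96, which does not reproduce the figures reported above; the sketch is illustrative, not the authors' exact computation.

import math

def sample_size(p, e, z):
    """Cochran's formula for a single proportion: n = z^2 * p * (1 - p) / e^2."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

# With the conventional z = 1.96 for 95% confidence and the stated p and e:
print(sample_size(p=0.50, e=0.04, z=1.96))  # -> 601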
, "Height and body weight were measured for all participants by trained research staff: body weight to the nearest 0.1 kg using a digital scale (Seca 877, Hamburg, Germany) and height to the nearest 0.1 cm using a wall-mounted stadiometer (Seca 216, Hamburg, Germany). Measurements were taken with light clothing and without shoes. Body mass index (BMI) was calculated as weight in kilograms divided by height in metres squared and categorized as underweight (<18.5 kg/m²), normal (18.5–24.9 kg/m²), overweight (25–29.9 kg/m²) or obese (≥30 kg/m²)14.\nWaist circumference (WC) was measured to the nearest 0.5 cm, and abdominal obesity (AO) was defined as WC ≥102 cm in men and ≥88 cm in women14.\nFor biological indicators, the most recent HbA1c measurement (no more than 3 months old) was extracted from each patient's medical record. Following the ADA, we defined glycaemic status as good glycaemic control if HbA1c <7% and poor glycaemic control if HbA1c ≥7%16,17.", "Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS, version 19.0; SPSS Inc.). Data are described as mean ± standard deviation (SD) for continuous variables and as proportions for categorical variables. Continuous variables were compared using Student's t-test. The associations between overweight/obesity and the candidate factors were first examined with bivariate logistic regression; all variables significant in the bivariate analysis (P < 0.05) were then entered into a multivariate logistic regression model to identify the independent factors associated with being overweight or obese. For all tests, a P-value below 0.05 was considered statistically significant."
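The two-step modelling strategy described above (bivariate screening at P < 0.05, then a multivariate model) can be sketched as follows. This is an illustrative re-implementation with statsmodels, not the authors' SPSS workflow, and the column names are hypothetical:

import pandas as pd
import statsmodels.api as sm

def two_step_logistic(df, outcome, candidates, alpha=0.05):
    """Bivariate screening followed by a multivariate logistic model."""
    retained = []
    for var in candidates:
        X = sm.add_constant(df[[var]])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        if fit.pvalues[var] < alpha:  # keep predictors significant on their own
            retained.append(var)
    X = sm.add_constant(df[retained])  # assumes at least one variable survives screening
    return sm.Logit(df[outcome], X).fit(disp=0), retained

# Hypothetical usage: 'df' holds a 0/1 outcome and dummy-coded predictors.
# model, kept = two_step_logistic(df, "overweight_or_obese",
#                                 ["female", "age_over_41", "good_control", "insulin"])
# print(model.summary())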
, "Socio-demographic, clinical and anthropometric characteristics\nThe socio-demographic, clinical and anthropometric characteristics of the participants are presented in Table 1.\nSocio-demographic, clinical and anthropometric characteristics of T2DM patients\nWomen were over-represented (74%) and the majority of respondents (76%) were living in urban areas. The mean age was 56.19 ± 11.49 years: 9.5% were under 40 years of age, 20.7% were 41–50 years, 35% were 51–60 years and 34.9% were 61 years or older. Almost two thirds (66.5%) of the patients were illiterate, 15.7% had primary education, 13.2% had completed secondary education and only 4.6% had a university education. Just over two thirds of the participants (67.2%) were married at the time of the study.\nThe prevalence of overweight including obesity (BMI ≥25 kg/m²) was 69.2%, and 28.8% of respondents were obese (BMI ≥30 kg/m²). The remaining participants were of normal weight (29.4%) or underweight (1.4%). Regarding AO, measured by WC, the mean WC was significantly higher in women (100.59 ± 11.63 cm) than in men (95.20 ± 16.57 cm) (t = -3.287; P < 0.001). The mean duration of diabetes was 8.55 ± 6.95 years. The participants' average fasting plasma glucose and HbA1c were above the ADA treatment goals16, and 69.4% of patients were classified as having poor glycaemic control (HbA1c ≥7%). With regard to diabetic medications, 46.1% of respondents took oral medication either alone or in combination with insulin (21.8%), 26.4% were treated with insulin alone, and 5.4% were on diet only.
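The sex comparison of mean waist circumference reported above is an ordinary two-sample Student's t-test. A minimal sketch with simulated data of roughly the reported shape (illustrative only, not the study data):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wc_women = rng.normal(100.6, 11.6, 722)  # ~74% of 975 participants (illustrative)
wc_men = rng.normal(95.2, 16.6, 253)     # illustrative

t, p = stats.ttest_ind(wc_men, wc_women, equal_var=True)  # Student's t-test
print(f"t = {t:.3f}, p = {p:.4g}")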
\nNutritional status and associated factors\nOverweight and obesity were observed in 529 (69.2%) patients. Sex, age, educational level, household size, occupation, diabetes duration, glycaemic control and type of diabetic treatment were the candidate variables for logistic regression. The following factors were statistically significant in the bivariate analysis: sex, age, education level, occupation and glycaemic control (HbA1c <7%). After adjustment in the multivariate logistic regression model, overweight and obesity remained statistically associated with female sex, age above 41 years and good glycaemic control (Table 2).\nFactors associated with overweight and obesity among T2DM patients\nStatistically significant at P value <0.05\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval\nThe odds of overweight and obesity among females were three times those among males (AOR = 3.004, 95% CI: 1.761–5.104, P < 0.001). Regarding age, the relative probability of being overweight or obese was higher among participants aged 41 years and above than among those below 41 years (AOR = 2.192, 95% CI: 1.116–4.307, P = 0.023). Concerning glycaemic control, the relative probability of being overweight or obese was higher among patients with good glycaemic control than among those with poor glycaemic control (AOR = 1.594, 95% CI: 1.056–2.407, P = 0.027).\nThe prevalence of AO was high (73.7%), and in the bivariate analysis AO was significantly associated with sex (P < 0.001), age (P = 0.032), occupation (P < 0.001) and type of treatment (P = 0.016) (Table 3).\nFactors associated with abdominal obesity among T2DM patients\nStatistically significant at P value <0.05\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval\nAfter adjustment in the multivariate logistic regression model, AO was significantly associated with female sex (AOR = 2.654, 95% CI: 1.507–4.671, P < 0.001) and insulin treatment (AOR = 2.927, 95% CI: 1.031–8.757, P = 0.048)."
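The adjusted odds ratios and confidence intervals in Tables 2 and 3 are obtained by exponentiating logistic regression coefficients. A minimal sketch of that conversion (the coefficient and standard error below are hypothetical, chosen to land near the reported female-sex estimate):

import math

def odds_ratio_ci(coef, se, z=1.96):
    """AOR = exp(beta); 95% CI = exp(beta -/+ z * SE)."""
    return math.exp(coef), math.exp(coef - z * se), math.exp(coef + z * se)

aor, lo, hi = odds_ratio_ci(1.100, 0.271)  # hypothetical beta and SE
print(f"AOR = {aor:.3f} (95% CI {lo:.3f}-{hi:.3f})")  # ~3.00 (1.77-5.11)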
, "The socio-demographic, clinical and anthropometric characteristics of the participants are presented in Table 1.\nSocio-demographic, clinical and anthropometric characteristics of T2DM patients\nWomen were over-represented (74%) and the majority of respondents (76%) were living in urban areas. The mean age was 56.19 ± 11.49 years: 9.5% were under 40 years of age, 20.7% were 41–50 years, 35% were 51–60 years and 34.9% were 61 years or older. Almost two thirds (66.5%) of the patients were illiterate, 15.7% had primary education, 13.2% had completed secondary education and only 4.6% had a university education. Just over two thirds of the participants (67.2%) were married at the time of the study.\nThe prevalence of overweight including obesity (BMI ≥25 kg/m²) was 69.2%, and 28.8% of respondents were obese (BMI ≥30 kg/m²). The remaining participants were of normal weight (29.4%) or underweight (1.4%). Regarding AO, measured by WC, the mean WC was significantly higher in women (100.59 ± 11.63 cm) than in men (95.20 ± 16.57 cm) (t = -3.287; P < 0.001). The mean duration of diabetes was 8.55 ± 6.95 years. The participants' average fasting plasma glucose and HbA1c were above the ADA treatment goals16, and 69.4% of patients were classified as having poor glycaemic control (HbA1c ≥7%). With regard to diabetic medications, 46.1% of respondents took oral medication either alone or in combination with insulin (21.8%), 26.4% were treated with insulin alone, and 5.4% were on diet only."
, "Overweight and obesity were observed in 529 (69.2%) patients. Sex, age, educational level, household size, occupation, diabetes duration, glycaemic control and type of diabetic treatment were the candidate variables for logistic regression. The following factors were statistically significant in the bivariate analysis: sex, age, education level, occupation and glycaemic control (HbA1c <7%). After adjustment in the multivariate logistic regression model, overweight and obesity remained statistically associated with female sex, age above 41 years and good glycaemic control (Table 2).\nFactors associated with overweight and obesity among T2DM patients\nStatistically significant at P value <0.05\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval\nThe odds of overweight and obesity among females were three times those among males (AOR = 3.004, 95% CI: 1.761–5.104, P < 0.001). Regarding age, the relative probability of being overweight or obese was higher among participants aged 41 years and above than among those below 41 years (AOR = 2.192, 95% CI: 1.116–4.307, P = 0.023). Concerning glycaemic control, the relative probability of being overweight or obese was higher among patients with good glycaemic control than among those with poor glycaemic control (AOR = 1.594, 95% CI: 1.056–2.407, P = 0.027).\nThe prevalence of AO was high (73.7%), and in the bivariate analysis AO was significantly associated with sex (P < 0.001), age (P = 0.032), occupation (P < 0.001) and type of treatment (P = 0.016) (Table 3).\nFactors associated with abdominal obesity among T2DM patients\nStatistically significant at P value <0.05\nCOR = Crude Odds Ratio; AOR = Adjusted Odds Ratio; CI = Confidence Interval\nAfter adjustment in the multivariate logistic regression model, AO was significantly associated with female sex (AOR = 2.654, 95% CI: 1.507–4.671, P < 0.001) and insulin treatment (AOR = 2.927, 95% CI: 1.031–8.757, P = 0.048).", "The main goal of weight management is to ensure optimal glycaemic control and to avoid diabetes complications. In line with this, the present study assessed the prevalence of overweight and obesity and the associated factors among T2DM patients. The results showed that 69.2% of respondents were overweight or obese (BMI ≥25 kg/m²), including 28.8% who were obese (BMI ≥30 kg/m²). These findings are in accordance with those from Yemen in 2014, where 58.8% of patients with T2DM were overweight and 28.8% were obese18. Similarly, our data are comparable to those reported by Tseng CH. (2007) in Taiwan, another country in nutrition transition, where 65% of patients were overweight or obese and less than one third had a normal BMI19. Comparable figures were also observed in Oman and Qatar, where 60.1% and 59.7% of diabetic patients, respectively, were obese20,21. The figures from all these studies are nevertheless lower than those found in U.S. adults with diagnosed diabetes, where the prevalence of overweight or obesity was 85.2% and the prevalence of obesity alone was 54.8%22. The prevalence of overweight and obesity was even higher in Jordan, where 91% of respondents in a similar study were overweight, of whom 58% were obese23. Increasing modernization and a westernized diet and lifestyle probably underlie this increased prevalence of obesity in Jordan, as in many developing countries24. Furthermore, our study has shown that the prevalence of obesity in T2DM is somewhat higher than in the general Moroccan population, in which the prevalence of overweight was 53.0% and that of obesity was around 20.0%7.\nIn this work, we also investigated the prevalence of AO, which was found to be higher than the prevalence of obesity defined by BMI (73.7% vs. 28.8%).
Unfortunately, differences in the standards used to define AO did not allow comparison with other studies. The multivariate logistic regression analysis indicated that female sex was significantly associated with both overweight/obesity and AO. These findings agree with previous studies in Belgium and the United Kingdom,25 Saudi Arabia,26 Oman27 and Yemen18, which reported significantly higher obesity rates in females than in males. Several studies conducted in the Middle East and Africa suggest that the higher prevalence of obesity in females can be attributed to lower physical activity, rapid urbanization, lower employment and cultural factors: in some communities, overweight women are regarded as attractive and enjoy greater social acceptance28.\nIn this study, in addition to female sex, the multivariate logistic regression analysis showed that overweight and obesity were significantly associated with older age and good glycaemic control. Regarding age, this result is similar to findings from the general population of Finland in 2009, where the prevalence of obesity increased with age in both men and women29.\nFor glycaemic control, our results are in line with previous studies reporting that higher BMI is associated with good glycaemic control, whereas patients with lower BMI are poorly controlled and have low C-peptide levels, reflecting inadequate β-cell reserves30. In contrast, other studies reported that overweight or obesity was associated with a significantly higher probability of having HbA1c ≥7%,31 a finding that may be explained by the fact that obese diabetic patients often report irregular meal patterns, leading to poorer glycaemic control and reduced insulin sensitivity32. Still other studies reported no link between BMI and glycaemic control, arguing that changes in HbA1c were independent of changes in weight33. This criticism arose because some individuals with higher BMI are metabolically healthy. These studies suggested that BMI should be one component of a comprehensive evaluation of overall health status when assessing its association with glycaemic control.\nRegarding AO, in addition to female sex, the multivariate logistic regression analysis showed that insulin treatment was a risk factor for AO. Indeed, AO is known to be associated with insulin resistance34, and insulin treatment produces an excess of serum insulin that can cause a constant sensation of hunger, leading to a vicious circle in which overeating generates excess body fat that accumulates in the viscera, resulting in AO35. Another possible explanation is that the majority of patients with T2DM are overweight or obese at the time of diagnosis, and weight gain is a known adverse effect of insulin treatment. Given these facts, therapeutic agents that target weight loss could represent another approach to the control of T2DM.\nThis was the first study conducted in the BM-KH region to determine the factors associated with overweight and obesity among T2DM patients, and it investigated a relatively large sample. However, it has some limitations.
First, the cross-sectional design limits conclusions regarding the causality of the identified associations, so longitudinal studies are greatly needed. Second, some factors, such as dietary habits, physical activity and psychological factors, were not assessed; further studies are needed to evaluate their contribution to obesity status among T2DM patients. Third, comparison of our results was difficult because studies on this topic, especially in T2DM patients, are scarce in Morocco.", "Overweight, general obesity and abdominal obesity were highly prevalent among participants. General obesity was associated with female sex and good glycaemic control, whereas abdominal obesity was associated with female sex and insulin treatment. Given the high prevalence of obesity among women, targeting this population group may yield additional public health benefits, because their behaviours may influence those of proximal population groups, such as their children and families. The health consequences of diabetes are compounded by overweight and obesity; however, the prevalence of overweight and obesity among people with diabetes has not been monitored regularly. Given this, weight management should receive higher priority in the management of diabetes." ]
[ "intro", "methods", null, null, null, "results", null, null, "discussion", "conclusions" ]
[ "Obesity", "overweight", "abdominal obesity", "type 2 diabetes", "Morocco" ]
Introduction: Diabetes is a major public health problem because of its negative effects on health and well-being and the costs engendered by its complications. This is exacerbated by the fact that the prevalence of the disease is high and rising, with serious social and economic impact1. As a result, diabetes is currently one of the most worrying diseases in both industrialised and developing countries, the latter being in nutrition transition. According to the International Diabetes Federation2, the number of adult diabetic patients recorded in 2015 was 415 million, representing 8.8% of the world's population, and type 2 diabetes mellitus (T2DM) is the most common form of diabetes, accounting for more than 90% of all declared cases3. In Morocco, a country in the full phase of demographic, nutritional and epidemiological transition4,5, diabetes is emerging as a preoccupying public health issue that challenges health practitioners daily. Based on a survey carried out by the health authorities in 2000, the prevalence of diabetes in Morocco was about 6.6% in the population aged at least 20 years6, whereas it reached 10.6% in 2018 according to the latest national survey on common risk factors for non-communicable diseases7. These statistics reflect the evolution of the disease in Morocco and show that the situation requires rigorous management of diabetes along with its associated factors, such as overweight and obesity. It is well known that, for diabetic and non-diabetic persons alike, being overweight or obese is a major risk factor. That is why professional organisations and health professionals recommend weight loss as a primary strategy for glycaemic control. For example, the American Diabetes Association (ADA) recommends weight loss for all overweight or obese individuals who have, or are at risk for, diabetes8. There is a very close relationship between weight and T2DM: previously published data have reported that most patients with T2DM are overweight or obese and that obese people have the highest risk of developing T2DM9. The simultaneous occurrence of diabetes and obesity in a single individual is called ‘diabesity’10. Furthermore, overweight and obesity in diabetics are associated with poorer control of blood glucose levels and blood pressure11, which entails high cardiovascular risk12. Conversely, intentional weight loss is associated with reduced mortality in people with ‘diabesity’13. The determinants of weight gain leading to overweight and obesity are clearly multifactorial, involving genetic, socioeconomic and environmental components14. Additionally, assessment of the nutritional status of patients with T2DM is essential to detect malnutrition, which can increase morbidity and mortality and prolong hospital stays. However, data regarding nutritional status and its associated factors in Moroccan type 2 diabetic patients remain scarce and limited. To our knowledge, there are no published studies on this subject in the Beni-Mellal Khenifra (BM-KH) region or in other regions of Morocco, except the study carried out by Ramdani et al. in 2012 on diabetes and obesity in eastern Morocco15. A better understanding of the factors associated with diabesity in the Moroccan context is strongly needed to help diabetics and health professionals better manage diabetes.
Thus, the aim of this work is to determine the prevalence of overweight and obesity and to identify its determinant factors in a sample of type 2 diabetic patients in the BM-KH region. Methods: Study participants and data collection: We conducted a cross-sectional survey in 2017 among 975 T2DM patients attending primary health centres in the BM-KH region of Morocco. At the time of the survey, according to the Regional Observatory of Health in the BM-KH region, these primary health centres provided health services for 153 000 T2DM patients registered in five provinces, who receive regular medical follow-up and have their medications dispensed at the centres free of charge. For patient selection, a multilevel random-sampling method was used to recruit participants. The sample size was calculated based on the following parameters: an assumed prevalence of overweight and obesity of 50% among T2DM patients, a 4% margin of error (e=0.04) and a 95% confidence level (z=3.20); this gave a minimum sample size of 932, which was rounded up to 1000 persons for greater accuracy, to allow for possible exclusions and to support subgroup analysis. The sample surveyed in the five provinces of the BM-KH region was proportional to the total T2DM population of each province. All primary health centres providing diabetes care in each province were counted, and centres were randomly selected from these, with the number of centres per province proportional to the number of diabetes patients recorded there. In total, 15 primary health centres served as the survey setting. Every workday, a list of expected attendees was obtained from each centre. The sampling interval K depended on the number of people attending the centre that day, which varied between centres. The first eligible participant was selected at random by the investigator, and every Kth patient thereafter was recruited; if the Kth person declined, the next person was invited. Recruitment continued until data had been collected from 1000 patients. After data cleaning, 25 questionnaires with missing data or unreadable handwriting were eliminated, leaving a final sample of 975. A face-to-face interview was carried out by trained interviewers to collect sociodemographic information such as age, sex, place of residence, marital status, family size, level of education and occupational status. Participants' educational levels were classified into four categories: “illiterate” (unable to read and write, without formal education); “primary” (1 to 6 years of formal education); “secondary” (7 to 12 years of formal education); and “university” (at least 13 years of formal education). Employment status was categorized as working or not currently employed. In addition, we collected information about diabetes, such as the duration of diabetes in years, family history of diabetes (defined as having a parent or sibling with diabetes), treatment type, and diabetes-related complications. The inclusion criteria were as follows: diagnosed with T2DM for at least 1 year, with an available medical file; aged at least 18 years; an HbA1c test within the last three months; physically and mentally able to provide all data required for the study; and willing to participate. Patients with type 1 diabetes, hospitalized patients and pregnant women with diabetes were excluded. Written approval for this study was obtained from the Health Ministry, Morocco, on 3 March 2016 (reference no. 6397-3/3/2016). Informed written consent was obtained from all respondents after the purpose of the study, the importance of their contribution and their right to refuse participation had been explained. The data are anonymized and free of personally identifiable information. Anthropometric measurements and biological parameters: Height and body weight were measured for all participants by trained research staff: body weight to the nearest 0.1 kg using a digital scale (Seca 877, Hamburg, Germany) and height to the nearest 0.1 cm using a wall-mounted stadiometer (Seca 216, Hamburg, Germany). Measurements were taken with light clothing and without shoes. Body mass index (BMI) was calculated as weight in kilograms divided by height in metres squared and categorized as underweight (<18.5 kg/m²), normal (18.5–24.9 kg/m²), overweight (25–29.9 kg/m²) or obese (≥30 kg/m²)14. Waist circumference (WC) was measured to the nearest 0.5 cm, and abdominal obesity (AO) was defined as WC ≥102 cm in men and ≥88 cm in women14. For biological indicators, the most recent HbA1c measurement (no more than 3 months old) was extracted from each patient's medical record. Following the ADA, we defined glycaemic status as good glycaemic control if HbA1c <7% and poor glycaemic control if HbA1c ≥7%16,17. Statistical analysis: Statistical analysis was carried out using the Statistical Package for the Social Sciences (SPSS, version 19.0; SPSS Inc.). Data are described as mean ± standard deviation (SD) for continuous variables and as proportions for categorical variables. Continuous variables were compared using Student's t-test. The associations between overweight/obesity and the candidate factors were first examined with bivariate logistic regression; all variables significant in the bivariate analysis (P < 0.05) were then entered into a multivariate logistic regression model to identify the independent factors associated with being overweight or obese. For all tests, a P-value below 0.05 was considered statistically significant.
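The every-Kth recruitment procedure described in the methods is systematic sampling with a random start. A minimal sketch (the attendance list and parameters are hypothetical):

import random

def systematic_sample(attendees, n_needed, k, seed=None):
    """Pick a random start within the first k attendees, then every k-th one."""
    rng = random.Random(seed)
    start = rng.randrange(min(k, len(attendees)))
    return attendees[start::k][:n_needed]

day_list = [f"patient_{i:02d}" for i in range(60)]  # hypothetical daily list
print(systematic_sample(day_list, n_needed=12, k=5, seed=42))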
Results: Socio-demographic, clinical and anthropometric characteristics: The socio-demographic, clinical and anthropometric characteristics of the participants are presented in Table 1. Women were over-represented (74%) and the majority of respondents (76%) were living in urban areas. The mean age was 56.19 ± 11.49 years: 9.5% were under 40 years of age, 20.7% were 41–50 years, 35% were 51–60 years and 34.9% were 61 years or older. Almost two thirds (66.5%) of the patients were illiterate, 15.7% had primary education, 13.2% had completed secondary education and only 4.6% had a university education. Just over two thirds of the participants (67.2%) were married at the time of the study. The prevalence of overweight including obesity (BMI ≥25 kg/m²) was 69.2%, and 28.8% of respondents were obese (BMI ≥30 kg/m²). The remaining participants were of normal weight (29.4%) or underweight (1.4%). Regarding AO, measured by WC, the mean WC was significantly higher in women (100.59 ± 11.63 cm) than in men (95.20 ± 16.57 cm) (t = -3.287; P < 0.001). The mean duration of diabetes was 8.55 ± 6.95 years. The participants' average fasting plasma glucose and HbA1c were above the ADA treatment goals16, and 69.4% of patients were classified as having poor glycaemic control (HbA1c ≥7%). With regard to diabetic medications, 46.1% of respondents took oral medication either alone or in combination with insulin (21.8%), 26.4% were treated with insulin alone, and 5.4% were on diet only. Nutritional status and associated factors: Overweight and obesity were observed in 529 (69.2%) patients. Sex, age, educational level, household size, occupation, diabetes duration, glycaemic control and type of diabetic treatment were the candidate variables for logistic regression. The following factors were statistically significant in the bivariate analysis: sex, age, education level, occupation and glycaemic control (HbA1c <7%). After adjustment in the multivariate logistic regression model, overweight and obesity remained statistically associated with female sex, age above 41 years and good glycaemic control (Table 2). The odds of overweight and obesity among females were three times those among males (AOR = 3.004, 95% CI: 1.761–5.104, P < 0.001). Regarding age, the relative probability of being overweight or obese was higher among participants aged 41 years and above than among those below 41 years (AOR = 2.192, 95% CI: 1.116–4.307, P = 0.023). Concerning glycaemic control, the relative probability of being overweight or obese was higher among patients with good glycaemic control than among those with poor glycaemic control (AOR = 1.594, 95% CI: 1.056–2.407, P = 0.027). The prevalence of AO was high (73.7%), and in the bivariate analysis AO was significantly associated with sex (P < 0.001), age (P = 0.032), occupation (P < 0.001) and type of treatment (P = 0.016) (Table 3). After adjustment in the multivariate logistic regression model, AO was significantly associated with female sex (AOR = 2.654, 95% CI: 1.507–4.671, P < 0.001) and insulin treatment (AOR = 2.927, 95% CI: 1.031–8.757, P = 0.048).
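Point prevalences such as the 69.2% above are often reported with a confidence interval, which the study does not give. A minimal sketch of the normal-approximation (Wald) interval for a proportion; note that 529 is 69.2% of roughly 764 rather than of 975, which suggests the denominator excluded records with incomplete BMI data (our inference, not stated in the text):

import math

def proportion_ci(successes, n, z=1.96):
    """Wald 95% CI for a proportion: p -/+ z * sqrt(p(1-p)/n)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

p, lo, hi = proportion_ci(529, 764)  # denominator inferred, see note above
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")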
Factors associated with overweight and obesity among T2DM patients Statistically significant at P value <0.05 COR= Crude Odds Ratio; AOR= Adjusted Odds Ratio; CI= Confidence Interval The overweight and obesity among females were three times higher than among males (AOR= 3,004, 95% CI: 1.761–5.104, P<0.001). Regarding age, the relative probability of being overweight and obese among participants aged 41 years and above was higher than those with age below 41 years (AOR=2.192, 95% CI: 1.116–4.307, P<0.023). Concerning the glycaemic control, the relative probability of being overweight and obese among patients with good glycaemic control was higher than patients with poor glycaemic control ones (AOR=1.594, 95% CI: 1.056–2.407, P=0.027). The prevalence of AO was higher (73.7%) among patients and was statistically significant in bivariate analysis with sex (P<0.001), age (P=0.032), occupation (P<0.001) and type of treatment P=0.016) (Table 3). Factors associated with abdominal obesity among T2DM patients Statistically significant at P value <0.05 COR= Crude Odds Ratio; AOR= Adjusted Odds Ratio; CI= Confidence Interval By adjusting the model using multivariate logistic regression, a statistically significant difference was found in AO to female sex (AOR=2.654, 95% CI: 1.507–4.671, P<0.001) and insulin treatment (AOR=2.927, 95% CI: 1.031–8.757, P=0.048). Discussion: The main goal of weight management is to ensure optimal glycaemic control, avoiding the diabetes complications. In line of this, the present study has assessed the prevalence of overweight and obesity and its associated factors among T2DM patients. The results showed that 69.2 % of respondents were overweight (BMI ≥ 25 kg/m2) of which 28.8% of them were obese (BMI ≥ 30 kg/m2). These findings are in accordance with that found in Yemen in 2014, reporting that 58.8% of patients with T2DM were overweight and 28.8% of them were obese18. Similarly, our data are equivalent to those reported by Tseng CH. (2007) in Taiwan, another country with nutrition transition, as 65 % of the patients were overweight or obese and less than one-third had a normal BMI19. Equivalent data were also observed in Oman and Qatar with 60.1% and 59.7% of the diabetic patients presenting obesity, respectively20,21. However, the data of all these studies are still less high than those found in U.S. adults with diagnosed diabetes where the prevalence of overweight or obesity was 85.2%, and the prevalence of obesity alone was 54.8%22. Nevertheless, the prevalence of overweight and obesity showed high scores in Jordan as 91% of respondents participating in a similar study were overweight of whom 58% were obese23. Increased modernization and a westernized diet and lifestyle are probably behind this increased prevalence of obesity in Jordan as well as in many developing countries24. Furthermore, our study has shown that the prevalence of obesity in T2DM is slightly higher than in the general population taken at whole in Morocco where overweight was found to occur among 53.0% of people with the prevalence of obesity was in the range of 20.0%7. In this work, we investigated also the prevalence of AO which has been found to be higher than the prevalence of obesity defined by BMI (73.7% vs. 28.8%). Unfortunately, the difference in standards adopted to characterize AO did not allow comparison with other studies. The results obtained from the multivariate logistic regression analysis indicated that female sex was significantly associated with both overweight/obesity and AO. 
These findings are in agreement with previous studies in Belgium and the United Kingdom,25 Saudi Arabia,26 Oman27 and Yemen,18 which showed significantly higher obesity rates in females than in males. Several studies conducted in the Middle East and Africa showed that the factors behind this higher prevalence of obesity in females can be attributed to lower physical activity, rapid urbanization, lower employment and cultural reasons. Indeed, in some communities an overweight woman is regarded as well looked after, which provides her with greater acceptance28. In this study, in addition to female sex, the multivariate logistic regression analysis showed that overweight and obesity were significantly associated with increased age and good glycaemic control. Regarding age, this result is similar to that found in Finland in 2009 among the general population, where the prevalence of obesity increased with age in both men and women29. For glycaemic control, our results are in line with previous studies reporting that higher BMI is associated with good glycaemic control, and that patients with lower BMI are poorly controlled and have low C-peptide levels, reflecting inadequate β-cell reserves30. In contrast, other studies reported that overweight or obesity was associated with a significantly higher probability of having HbA1c ≥7%,31 a finding that may be explained by the fact that obese diabetic patients often report irregular meal patterns, leading to poorer glycaemic control and reduced insulin sensitivity32. On the other hand, some studies reported the absence of a link between BMI and glycaemic control, arguing that the change in HbA1c was independent of the change in weight33. This criticism arose because some individuals with higher BMI were metabolically healthy. These studies suggested that BMI should be one component of a comprehensive evaluation of overall health status when determining its association with glycaemic control. Regarding AO, in addition to female sex, the multivariate logistic regression analysis showed that insulin treatment was a risk factor for AO. Indeed, it is known that AO is associated with insulin resistance34, and insulin treatment causes an excess of serum insulin that can produce a constant sensation of hunger, leading to a vicious circle in which overeating generates excess body fat that accumulates in the viscera, resulting in AO35. Another possible explanation is that the majority of patients with T2DM are overweight or obese at the time of diagnosis, and insulin treatment is known to have weight gain as an adverse effect. Given these facts, therapeutic agents that target weight loss could represent another approach to the control of T2DM. This was the first study conducted in the BM-KH region to determine factors associated with overweight and obesity among T2DM patients, and it investigated a relatively large sample. However, it has some limitations. First, the cross-sectional design limits conclusions regarding the causality of the identified associations, so longitudinal studies are greatly needed; second, some factors such as dietary habits, physical activity and psychological factors were not assessed in this study.
Therefore, further studies are needed to assess the contribution of these factors to obesity status among T2DM patients; third, comparison of our results was difficult because studies on this topic, especially in T2DM patients, are scarce in Morocco. Conclusion: Overweight, general obesity and abdominal obesity were high among participants. General obesity was associated with female sex and good glycaemic control, whereas abdominal obesity was associated with female sex and insulin treatment. Given the high prevalence of obesity among women, there may be additional public health benefits in targeting this population group, because their behaviors may influence the behaviors of other proximal population groups, such as their children and families. The health consequences of diabetes are compounded by overweight and obesity. However, the prevalence of overweight and obesity among people with diabetes has not been monitored regularly; given this, weight management should receive a higher priority in the management of diabetes.
Background: Obesity constitutes a major risk factor for the development of diabetes and has been linked with poor glycaemic control among type 2 diabetic patients. Methods: A questionnaire-based cross-sectional study was conducted in 2017 among 975 diabetes patients attending primary health centres. Demographic and clinical data were collected through face-to-face interviews. Anthropometric measurements, including body weight, height and waist circumference, were taken using standardized techniques and calibrated equipment. Results: The prevalence of overweight was 40.4%, general obesity 28.8% and abdominal obesity 73.7%. Using multivariate analysis, we noted that general obesity was associated with female sex (AOR=3.004, 95% CI: 1.761-5.104, P<0.001), increased age (AOR=2.192, 95% CI: 1.116-4.307, P=0.023) and good glycaemic control (AOR=1.594, 95% CI: 1.056-2.407, P=0.027), whereas abdominal obesity was associated with female sex (AOR=2.654, 95% CI: 1.507-4.671, P<0.001) and insulin treatment (AOR=2.927, 95% CI: 1.031-8.757, P=0.048). Conclusions: Overweight, general obesity and abdominal obesity were high among participants, especially among women. Taken together, these findings urge the implementation of a roadmap for this diabetic subpopulation to adopt a new lifestyle.
Introduction: Diabetes is a major public health problem due to its negative effects on health and well-being and the costs engendered by its complications. This is exacerbated by the fact that the prevalence of the disease is elevated and highly dynamic, with a serious health and socioeconomic impact1. As a result, diabetes is currently one of the most worrying diseases in both industrialised and developing countries, as the latter are in nutrition transition. According to the International Diabetes Federation2, the number of adult diabetic patients recorded in 2015 was 415 million, representing 8.8% of the world's adult population, and type 2 diabetes mellitus (T2DM) is the most common form of diabetes, representing more than 90% of all declared cases3. In Morocco, a country in the full phase of demographic, nutritional and epidemiological transition4,5, diabetes is emerging as a preoccupying public health issue that represents a daily challenge for health practitioners. Based on a survey carried out by health authorities in 2000, the prevalence of diabetes in Morocco was about 6.6% in a population aged at least 20 years6, whereas it reached 10.6% in 2018 according to the latest national survey on the common risk factors for non-communicable diseases7. These statistics reflect the evolution of the disease in Morocco and show that the situation requires rigorous management of diabetes along with its associated factors such as overweight and obesity. It is well known that, for diabetic and non-diabetic persons alike, being overweight or obese is a major health risk factor. That is why professional organisations and health professionals recommend weight loss as a primary strategy for glycaemic control. For example, the American Diabetes Association (ADA) recommends weight loss for all overweight or obese individuals who have or are at risk for diabetes8. There is a very close relation between weight and T2DM. Indeed, many previously published studies have reported that most patients with T2DM are overweight or obese and that obese people present the highest risk of developing T2DM9. The simultaneous occurrence of the complicated conditions of diabetes and obesity within a single individual is called 'diabesity'10. Furthermore, overweight and obesity in diabetics are associated with poorer control of blood glucose levels and blood pressure11, which represents a high cardiovascular risk12. Conversely, intentional weight loss is associated with reduced mortality in people with 'diabesity'13. Determinants of weight gain leading to overweight and obesity are clearly multifactorial and involve genetic, socioeconomic and environmental components14. Additionally, assessment of the nutritional status of patients with T2DM is essential to detect malnutrition that could increase morbidity and mortality and prolong the length of hospital stay. However, data regarding nutritional status and its associated factors in Moroccan type 2 diabetic patients remain scarce and limited. To our knowledge, there are no published studies on this subject in the Beni-Mellal Khenifra (BM-KH) region or in other regions of Morocco, except the study carried out by Ramdani et al. in 2012 on diabetes and obesity in eastern Morocco15. A better understanding of the factors associated with diabesity in the Moroccan context is strongly needed to help diabetics and health professionals better manage diabetes.
Thus, the aim of this work is to determine the prevalence of overweight and obesity and to identify its determinant factors in a sample of type 2 diabetic patients in the BM-KH region.
7,190
257
[ 695, 219, 129, 361, 369 ]
10
[ "patients", "obesity", "overweight", "diabetes", "study", "years", "glycaemic", "control", "glycaemic control", "t2dm" ]
[ "diabetes mellitus", "diabetics health", "2000 prevalence diabetes", "moroccan type diabetic", "diabetes morocco population" ]
[CONTENT] Obesity | overweight | abdominal obesity | type 2 diabetes | Morocco [SUMMARY]
[CONTENT] Adult | Aged | Body Mass Index | Cross-Sectional Studies | Diabetes Mellitus, Type 2 | Female | Humans | Male | Middle Aged | Morocco | Obesity | Obesity, Abdominal | Overweight | Prevalence | Risk Factors | Socioeconomic Factors | Waist Circumference | Young Adult [SUMMARY]
[CONTENT] diabetes mellitus | diabetics health | 2000 prevalence diabetes | moroccan type diabetic | diabetes morocco population [SUMMARY]
[CONTENT] patients | obesity | overweight | diabetes | study | years | glycaemic | control | glycaemic control | t2dm [SUMMARY]
[CONTENT] diabetes | health | risk | diabesity | diabetic | factors | morocco | overweight | loss | weight loss [SUMMARY]
[CONTENT] centres | study | health | primary health | primary health centres | health centres | diabetes | patients | data | primary [SUMMARY]
[CONTENT] ci | aor | statistically | age | years | statistically significant | 95 ci | 95 | patients | 001 [SUMMARY]
[CONTENT] obesity | general obesity | obesity associated female | obesity associated female sex | behaviors | given | general | obesity associated | management | associated female sex [SUMMARY]
[CONTENT] patients | obesity | overweight | diabetes | years | study | health | glycaemic | control | centres [SUMMARY]
E-cigarette use and intentions to smoke among 10-11-year-old never-smokers in Wales.
25535293
E-cigarettes are seen by some as offering harm reduction potential when used effectively as smoking cessation devices. However, there is emerging international evidence of growing use among young people, amid concerns that this may increase tobacco uptake. Few UK studies examine the prevalence of e-cigarette use in non-smoking children or associations with intentions to smoke.
BACKGROUND
A cross-sectional survey of year 6 (10-11-year-old) children in Wales. Approximately 1500 children completed questions on e-cigarette use, parental and peer smoking, and intentions to smoke. Logistic regression analyses among never smoking children, adjusted for school-level clustering, examined associations of smoking norms with e-cigarette use, and of e-cigarette use with intentions to smoke tobacco within the next 2 years.
METHODS
Approximately 6% of year 6 children, including 5% of never smokers, reported having used an e-cigarette. By comparison to children whose parents neither smoked nor used e-cigarettes, children were most likely to have used an e-cigarette if parents used both tobacco and e-cigarettes (OR=3.40; 95% CI 1.73 to 6.69). Having used an e-cigarette was associated with intentions to smoke (OR=3.21; 95% CI 1.66 to 6.23). While few children reported that they would smoke in 2 years' time, children who had used an e-cigarette were less likely to report that they definitely would not smoke tobacco in 2 years' time and were more likely to say that they might.
RESULTS
E-cigarettes represent a new form of childhood experimentation with nicotine. Findings are consistent with a hypothesis that children use e-cigarettes to imitate parental and peer smoking behaviours, and that e-cigarette use is associated with weaker antismoking intentions.
CONCLUSIONS
[ "Age Factors", "Child", "Child Behavior", "Cross-Sectional Studies", "Electronic Nicotine Delivery Systems", "Female", "Health Behavior", "Humans", "Intention", "Logistic Models", "Male", "Multivariate Analysis", "Odds Ratio", "Parents", "Peer Influence", "Prevalence", "Risk Factors", "Smoking", "Smoking Prevention", "Surveys and Questionnaires", "Time Factors", "Wales" ]
4789807
Background
Arguments regarding the harm reduction that could be achieved, for individual smokers and for public health, if tobacco were replaced with e-cigarettes1 have led many public health experts to urge the WHO not to back calls to regulate e-cigarettes as tobacco products or restrict their marketing.2 To date, e-cigarette marketing has heavily emphasised smoking cessation benefits.3 While such claims have perhaps been made somewhat in advance of robust evidence of effectiveness, a small number of emerging studies do indicate that e-cigarettes may support cessation for some smokers.4 5 However, other leading public health experts have argued for greater regulation, pointing to limited evidence regarding direct harms and emerging evidence that e-cigarettes are not adopted primarily for smoking cessation.6 Most adult e-cigarette users also smoke tobacco (dual use),7 with e-cigarettes being used by some as a means of using nicotine in places where smoking is prohibited.8 Internationally, the prevalence of e-cigarette use among adolescents has also increased rapidly in recent years.9–13 E-cigarettes do contain some carcinogens and other toxins,2 and harm reduction arguments hold little weight where e-cigarettes are used by young people who would not otherwise have been smoking tobacco. Public health experts on both sides of debates regarding regulation agree that efforts should be made to prevent young people from taking up e-cigarettes.2 6 To date, policy responses to concerns about e-cigarette uptake have led to actions such as plans to ban sales to minors.14 More controversially, some have expressed concern regarding the visibility of e-cigarettes in places where marketing or use of tobacco has been banned, arguing that this may reverse efforts to denormalise smoking.15 Tobacco companies have increasingly invested in e-cigarettes, with some arguing that marketing has targeted youth.3 16 17 Hence, while presenting itself as a partner in harm reduction, the industry is arguably seizing new opportunities to introduce young people to nicotine. Governments such as those in parts of the USA have responded to concerns regarding the growing visibility of e-cigarettes by banning their use in public places.18 In the UK, the Welsh Government has recently issued a white paper consulting on potential similar legislation.14 Arguments relating to potential impacts of the visibility of e-cigarettes centre in part on assumptions that children's perceptions of e-cigarette use as a normative behaviour will increase uptake. Perhaps the most commonly studied source of normative influence on adolescent smoking uptake is parental smoking, with children whose parents smoke being more likely to smoke themselves.19 While parental influence on e-cigarette use has yet to be investigated, if e-cigarette use is driven by normative factors, children whose parents use e-cigarettes may be more likely to use them. Given that most adult users of e-cigarettes also smoke tobacco, many parents who model e-cigarette use may also model smoking, and their e-cigarette use may be seen by children as a 'safe' means of mimicking parental smoking. Peer influences on smoking have also been well established,20 although little research has considered whether e-cigarette use represents a means of imitating peer smoking.
Perhaps the most significant concern among those arguing for greater regulation is that childhood e-cigarette use may act as a 'gateway' into smoking tobacco.10 Opponents of regulating e-cigarettes or limiting their visibility emphasise the lack of evidence for the gateway effect, while expressing concerns that limiting the visibility of safer alternatives may perversely protect tobacco markets.2 Indeed, the WHO has described evidence for renormalisation or gateway effects as non-existent. While backing a ban on use of e-cigarettes indoors, the WHO points to uncertainty regarding whether vapour from e-cigarettes is toxic to non-users as a justification for such a move, rather than renormalisation or gateway arguments.21 However, the WHO emphasises a need to balance efforts to promote cessation against risks of simultaneously promoting e-cigarette use among children, also arguing that cessation claims, which drive the case against regulation, should be banned from e-cigarette marketing until supported by firmer evidence.21 This lack of evidence on both sides of the debate is inevitable. E-cigarettes are a new phenomenon and insufficient time has passed for their harms or benefits to be understood. Experts on both sides have continued to emphasise a lack of evidence for their opponents' position, while themselves advancing untested hypotheses regarding the harms or benefits of e-cigarettes. Further research is needed to dispassionately support or refute hypotheses being advanced on both sides of the debate. This paper reports findings from a Wales-wide survey of 10–11-year-old children. First, it examines the prevalence of e-cigarette use, then potential normative influences on children's e-cigarette use, including parental smoking and e-cigarette use and peer smoking. Finally, it tests the hypothesis that never smoking children who report having used an e-cigarette will be more likely to report an intention to take up smoking tobacco; an association which has to date been demonstrated in one study of US middle school children, though it has yet to be investigated in younger children or in the UK.22
Methods
Study design and sample: Child exposure to Environmental Tobacco Smoke (CHETS) Wales 2 was a cross-sectional study of year 6 school children within 75 schools in Wales. Its protocol was reviewed and approved by the Cardiff University Social Science Research Ethics Committee. It replicated the earlier surveys conducted in Wales which examined children's secondhand smoke exposure before and after the introduction of smoke-free legislation (CHETS Wales),23 and was commissioned and powered primarily to investigate changes in child exposure to smoke in cars. This article reports on questions on e-cigarette use which were included only in the 2014 survey. To ensure that sampled schools were representative of the population of Wales, for CHETS Wales, state-maintained schools with year 6 students were stratified according to high/low free school meal entitlement (cut-off point identified as the average entitlement across the whole sample: 17.12%) and by funding Local Education Authority. Within each stratum, schools were selected with probability proportional to school size. The 75 schools participating in CHETS Wales were invited to take part in CHETS Wales 2; where schools declined, replacement schools were identified from the same stratum. Within each school, one year 6 (age 10–11 years) class was randomly selected by the research team to participate. The samples obtained in CHETS Wales 2 were comparable to the CHETS Wales samples in terms of age, sex, socioeconomic status and family structure, indicating that the sampling strategy had been effectively replicated. Consistent with previous analyses from CHETS Wales, no non-response weights were employed.23
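To make the sampling design concrete, the sketch below draws a stratified sample of schools with probability proportional to size (PPS). The frame columns are hypothetical, stratification by funding authority is omitted for brevity, and numpy's weighted draw without replacement only approximates the systematic PPS selection typically used in practice.

```python
# Sketch: stratified school sampling with probability proportional to size.
# Assumes a DataFrame `frame` with columns `school_id`, `fsm_stratum`
# ('high'/'low' free school meal entitlement) and `n_pupils`; the per-stratum
# allocation below is illustrative. All names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2014)
n_per_stratum = {"high": 30, "low": 45}  # illustrative split of the 75 schools

def sample_stratum(stratum: pd.DataFrame, n: int) -> pd.DataFrame:
    # Selection probability proportional to school size within the stratum.
    p = (stratum["n_pupils"] / stratum["n_pupils"].sum()).to_numpy()
    chosen = rng.choice(stratum.index.to_numpy(), size=n, replace=False, p=p)
    return stratum.loc[chosen]

sampled = pd.concat(sample_stratum(group, n_per_stratum[name])
                    for name, group in frame.groupby("fsm_stratum"))
```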
Measures. Demographics: Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays. Parental smoking and e-cigarette use: Children indicated whether any of the following people smoked: (1) father, (2) mother, (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were 'smokes every day', 'smokes sometimes', 'does not smoke', 'I don't know' and 'I don't have or see this person'. A parent figure was classified as smoking if the child responded 'smokes every day' or 'smokes sometimes', with all other responses classified as non-smoking. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure who smokes (including father or stepfather), a mother figure who smokes (including mother or stepmother), or a mother and a father figure who smoke. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who uses e-cigarettes, a father figure, a mother figure, or a mother and a father figure who use e-cigarettes. A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, who used e-cigarettes only, or parent figures who used tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables only included the smoking behaviour and e-cigarette usage of parents and step-parents. Peer smoking behaviour: Children were asked to indicate how many of their friends smoked, with response options of 'most of them', 'half of them', 'some of them', 'none of them' or 'I don't know'. Children were classified as having smoking friends if they said that at least some of their friends smoked. No data were available on how many of children's friends used e-cigarettes. Ever smoking and future smoking intentions: Children's ever smoking behaviour was measured by asking 'Have you ever smoked tobacco? (at least one cigarette, cigar or pipe)', with response options of 'yes' or 'no'. Future intentions were measured by the question 'Do you think you will smoke in 2 years' time?', with response options of 'definitely yes', 'probably yes', 'maybe or maybe not', 'probably no' and 'definitely no'. Awareness and use of e-cigarettes: Awareness of e-cigarettes was measured by asking children 'Have you heard of e-cigarettes before this survey?' Children were asked 'Have you ever used an e-cigarette?', with response options of 'no', 'yes, once' or 'yes, more than once'. Children were classified as having used an e-cigarette if they responded 'yes, once' or 'yes, more than once'. E-cigarettes were defined as electronic versions of cigarettes which do not give off smoke.
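The derived variables described above can be coded mechanically from the raw responses. The sketch below is one hedged way of doing so; the column names and the exact response strings for the e-cigarette items are assumptions based on the questionnaire wording.

```python
# Sketch: deriving the combined parental tobacco/e-cigarette variable and the
# child's ever-use flag. Assumes one row per child holding raw response
# strings; column names are hypothetical, and the positive responses for the
# e-cigarette items are ASSUMED to mirror the smoking items.
import pandas as pd

POSITIVE = {"smokes every day", "smokes sometimes",
            "uses every day", "uses sometimes"}  # assumed e-cig wording

tobacco_cols = ["father_smokes", "mother_smokes",
                "stepfather_smokes", "stepmother_smokes"]
ecig_cols = ["father_ecig", "mother_ecig",
             "stepfather_ecig", "stepmother_ecig"]

any_tobacco = df[tobacco_cols].isin(POSITIVE).any(axis=1)
any_ecig = df[ecig_cols].isin(POSITIVE).any(axis=1)

df["parent_use"] = "neither"
df.loc[any_tobacco & ~any_ecig, "parent_use"] = "tobacco only"
df.loc[~any_tobacco & any_ecig, "parent_use"] = "e-cigarettes only"
df.loc[any_tobacco & any_ecig, "parent_use"] = "tobacco and e-cigarettes"

# Ever-use: 'yes, once' or 'yes, more than once' counts as having used one.
df["ecig_ever"] = df["child_ecig_use"].isin({"yes, once", "yes, more than once"})
```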
Consent: Schools signed and returned a commitment form to participate in the study. Parental approval was obtained through letters sent via Royal Mail. In addition to consent forms, information sheets were provided which clearly stated that parents had the option of withdrawing their child from data collection at any time. An 'opt out' system was implemented in all but one school. The remaining school requested an 'opt in' consent procedure whereby parents/carers informed their child's school if they did wish their child to participate in the study. At each data collection session, students were also asked to complete an assent form after having read an information sheet and having had the study explained to them, to ensure that they fully understood what they were invited to do and to give them the opportunity to withdraw from the data collection session if they did not wish to participate. Data collection: Data were collected between February and April 2014. Children completed pen and paper surveys, which were placed in sealed envelopes before being collected by researchers. Two researchers attended each data collection to ensure sufficient support and assistance where required. All staff were provided with a data collection protocol and trained by DECIPHer. Teachers were asked to be present, but not to intervene in the data collection in any other way.
Briefing sheets were provided for any school staff present, which explained the nature of the study and provided information about the data collection and their anticipated role. Statistical analysis: Frequencies and percentages of children who reported using e-cigarettes were calculated for the subsample of children who reported having tried smoking, and for those who reported that they had never tried smoking. Among never-smokers, frequencies and percentages using e-cigarettes were presented by sex, parental smoking behaviour, combined parental cigarette and e-cigarette use, and friends' smoking behaviour. Binary logistic regression models were used to examine predictors of e-cigarette use. Independent variables were parental cigarette and e-cigarette use (combined into a categorical variable including those who used neither, e-cigarettes and tobacco cigarettes, e-cigarettes only or tobacco cigarettes only), friends' smoking behaviour, sex and family affluence.
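The paper ran these models in Stata with the svy commands adjusting for clustering; an analogous fit in Python uses cluster-robust standard errors, as in this hedged sketch with hypothetical variable names.

```python
# Sketch: binary logistic regression of e-cigarette use with standard errors
# adjusted for the clustering of children within schools (an analogue of the
# svy-based adjustment described in this section). `ecig_ever` is assumed to
# be coded 0/1; all names are hypothetical.
import numpy as np
import statsmodels.formula.api as smf

res = smf.logit(
    "ecig_ever ~ C(parent_use) + friends_smoke + C(sex) + fas_score",
    data=never_smokers,
).fit(cov_type="cluster",
      cov_kwds={"groups": never_smokers["school_id"]},
      disp=False)

print(np.exp(res.params))      # ORs
print(np.exp(res.conf_int()))  # 95% CIs
```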
Ordinal regression models examined predictors of future smoking intentions, with ORs indicating the relative odds of being assigned to a higher rather than a lower category of the intention to smoke variable (coded from 'definitely not'=0 to 'definitely yes'=4). Independent variables were the same as for e-cigarette use, though e-cigarette use was now entered as an independent variable. Owing to the small number of children saying that they might or probably would smoke in 2 years, ordinal models were also conducted with a 3 category dependent variable (combining children who said that they might or would smoke in 2 years), as well as binary models (comparing 'definitely not' to all other responses). Comparable results were obtained; hence, we report only models using the 5 category dependent variable. Proportional odds assumption tests for multivariate ordinal models were run using the omodel plug-in for Stata V.11, indicating no violations of the proportional odds assumption. To account for the sampling design and non-independence of children within schools, models were adjusted for school-level clustering using the svy commands in Stata V.11.
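A proportional-odds model of the same form can be sketched in Python with statsmodels' OrderedModel, though it does not reproduce Stata's svy clustering or the omodel test; names are hypothetical.

```python
# Sketch: proportional-odds (ordinal logistic) model of intention to smoke,
# coded 0 ('definitely not') .. 4 ('definitely yes'). The school-level
# clustering adjustment and the omodel proportional-odds test used in the
# paper are not reproduced here. All names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

X = pd.get_dummies(
    never_smokers[["ecig_ever", "parent_use", "friends_smoke", "sex"]],
    drop_first=True,
).astype(float)

res = OrderedModel(never_smokers["intend_smoke"], X, distr="logit").fit(
    method="bfgs", disp=False)

# The first X.shape[1] parameters are covariate effects; the rest are the
# category thresholds. Exponentiate for odds of a higher intention category.
print(np.exp(res.params[: X.shape[1]]))
```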
Results
Response rates and sample description: Overall, 114 schools were invited to participate before the target sample of 75 schools was reached (overall response rate=65.8%). Of the 1862 pupils within selected classes, completed questionnaires were obtained from 1601 (86%). In schools where 'opt out' consent procedures were followed (n=74 schools, 1810 pupils), 56 children were opted out by parents, 35 children refused and 141 were absent on the day of data collection. Data were obtained from 1578 pupils (87.2%). In the one school which requested opt-in consent, this was given for 23 of 52 children (44.2%), all of whom provided data. Items on e-cigarette use were completed by 1495 children, of whom 51% were female, with a mean (SD) age of 10.92 (0.40) years. Twenty-one (1.4%) children reported that they had ever smoked tobacco. There were no significant differences between children who did or did not complete questions on e-cigarette use in terms of age, socioeconomic status (p=0.84) or parental smoking (p=0.50). E-cigarette questions were completed by slightly fewer boys than girls (p<0.01), though overall an approximately even gender balance was maintained (48.6% boys; 51.4% girls). Prevalence of e-cigarette awareness and use: In total, 1014 children (66.8%) reported having heard of e-cigarettes. Among the small number of children who reported having used tobacco (n=21), almost half (47.6%; n=10) also reported having used an e-cigarette. Among never-smokers (N=1467), 77 children (5.3%) reported that they had used an e-cigarette. Table 1 shows frequencies and percentages of e-cigarette use among never smokers by demographic factors, and by parental smoking and e-cigarette use. Overall, 6.5% of male never-smokers and 4.1% of female never-smokers reported having used an e-cigarette. [Table 1: Frequencies and percentages of e-cigarette use among children reporting never having smoked a cigarette. *p value from design-adjusted χ2 analyses. †Whether a parent figure smokes tobacco (regardless of whether they also use e-cigarettes). ‡Whether a parent figure uses e-cigarettes (regardless of whether they also smoke tobacco).] Parental smoking, e-cigarette use and dual use: Overall, 231 children (17%) reported that one or more parent figures used e-cigarettes; substantially lower than the percentage (n=615; 39.1%) who reported that at least one parent figure used tobacco. Among never-smoking children who reported that one or more parent figures used e-cigarettes, a large majority (n=168; 72.7%) reported that these parent figure(s) were dual users who also smoked tobacco. A smaller number (n=20; 8.7%) reported that one parent figure used only e-cigarettes while the other smoked tobacco. The remaining 18.6% (n=43) reported having only parent figures who exclusively used e-cigarettes. Hence, the vast majority of children who reported that a parent figure used e-cigarettes reported that tobacco was also used by the same parent figure or, in a smaller number of cases, by another parent figure. Parental behaviour and child e-cigarette use: As indicated in table 1, for the four category variable representing the number of the child's parent figures who smoked tobacco, the percentage of children reporting having used an e-cigarette increased substantially with parental smoking status. Among never-smoking children who reported that they did not have a parent figure who smoked tobacco, 3.5% reported having used an e-cigarette, whereas for children who reported having a mother and a father figure who smoked tobacco, approximately 1 in 9 (11.7%) reported using an e-cigarette. For parental e-cigarette use, this gradient was steeper. Among children who reported that they did not have a parent figure who used e-cigarettes, 3.5% reported that they had used an e-cigarette, compared to 18.6% of those who reported having a mother and father who used e-cigarettes.
Child e-cigarette use was significantly higher among children who reported that parent figures used tobacco and e-cigarettes than among children whose parents did not use e-cigarettes. Peer smoking and e-cigarette use: Overall, 97 children (6.2%) reported that at least one friend smoked. Among never-smoking children who reported having friends who smoked, 17.7% reported having tried e-cigarettes, as compared to 4.5% of those who reported that they did not have friends who smoke. Logistic regression analyses of predictors of e-cigarette use: Table 2 presents ORs and 95% CIs from logistic regression models examining associations of parental smoking, friends' smoking, sex and family affluence with e-cigarette use. In univariate models, children were more likely to report e-cigarette use if parent figures used e-cigarettes (either solely or in conjunction with smoking). Where parents smoked but did not use e-cigarettes, children were not significantly more likely to have used an e-cigarette. Children who reported having friends who smoked were almost five times as likely to have used an e-cigarette, while boys and children from less affluent families were also more likely to have used an e-cigarette. In multivariate models, however, only parental e-cigarette use (either solely or in conjunction with smoking) and friends' smoking remained significant predictors of having used an e-cigarette. [Table 2: ORs and 95% CIs from logistic regression analyses of e-cigarette use and future smoking intention among 10–11-year-old never-smokers. Significant associations (p<0.05) are highlighted in bold. FAS, Family Affluence Scale.]
Future smoking intentions
Overall, among never-smokers, almost all children reported that they would definitely not or probably not smoke in 2 years (table 3). Among never-smoking children who reported having used an e-cigarette, few stated that they probably or definitely will smoke in 2 years. However, children who had used an e-cigarette were substantially less likely to report that they definitely would not smoke in 2 years, and were more likely to report that they probably will not or might smoke in 2 years’ time. Hence, having used an e-cigarette was associated with weaker antismoking intentions.
Percentage of never-smoking children reporting each level of intention to smoke by whether or not they had used an e-cigarette
In univariate models (table 2), antismoking intentions were significantly weaker among children whose parents smoked tobacco (solely or in conjunction with e-cigarettes), among children who reported having friends who smoked, and among boys. Never-smoking children who reported having used e-cigarettes reported substantially weaker antismoking intentions than those who had not. In a multivariate model including all variables except for e-cigarette use, all significant associations remained, though the associations of parental and friends’ smoking were reduced. The association of e-cigarette use with future smoking intentions remained after adjustment for parental and friends’ smoking and demographic variables.
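The intention outcome was modelled with ordinal regression (see the statistical analysis section below). A proportional-odds logit over the five intention categories can be sketched with statsmodels' OrderedModel, as below. Variable names are invented, and the sketch omits the school-level clustering and the omodel proportional-odds test used in the original Stata analysis.

```python
# Sketch: proportional-odds (ordinal logistic) model for the five-category
# smoking-intention outcome, coded 0 ('definitely not') .. 4 ('definitely yes').
# Hypothetical variable names; no survey clustering adjustment here.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("chets2.csv")  # hypothetical child-level records

endog = df["intention"].astype(pd.CategoricalDtype(categories=[0, 1, 2, 3, 4], ordered=True))
exog = df[["child_ecig", "parent_tobacco_only", "parent_ecig_any",
           "friends_smoke", "male", "fas_score"]]

res = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)

# ORs for the predictors; the remaining parameters are category thresholds.
print(np.exp(res.params.iloc[:exog.shape[1]]).round(2))
```

Collapsing the outcome to three or two categories, as in the sensitivity analyses described in the statistical analysis section, only requires recoding endog before refitting.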
[ "Background", "Study design and sample", "Measures", "Demographics", "Parental smoking and e-cigarette use", "Peer smoking behaviour", "Ever smoking and future smoking intentions", "Awareness and use of e-cigarettes", "Consent", "Data collection", "Statistical analysis", "Response rates and sample description", "Prevalence of e-cigarette awareness and use", "Parental smoking, e-cigarette use and dual use", "Parental behaviour and child e-cigarette use", "Peer smoking and e-cigarette use", "Logistic regression analyses of predictors of e-cigarette use", "Future smoking intentions" ]
[ "Arguments regarding the harm reductions that could be achieved, for individual smokers and for public health if tobacco were replaced with e-cigarettes,1 have led many public health experts to urge the WHO not to back calls to regulate e-cigarettes as tobacco products or restrict their marketing.2 To date, e-cigarette marketing has heavily emphasised smoking cessation benefits.3 While such claims have perhaps been made somewhat in advance of robust evidence of effectiveness, a small number of emerging studies do indicate that e-cigarettes may support cessation for some smokers.4\n5\nHowever, other leading public health experts have argued for greater regulation, pointing to limited evidence regarding direct harms and emerging evidence that e-cigarettes are not adopted primarily for smoking cessation.6 Most adult e-cigarette users also smoke tobacco (dual use),7 with e-cigarettes being used by some as a means of using nicotine in places where smoking is prohibited.8 Internationally, the prevalence of e-cigarette use among adolescents has also increased rapidly in recent years.9–13 E-cigarettes do contain some carcinogens and other toxins,2 and harm reduction arguments hold little weight were used by young people who would not otherwise have been smoking tobacco.\nPublic health experts on both sides of debates regarding regulation agree that efforts should be made to prevent young people from taking up e-cigarettes.2\n6 To date, policy responses to concerns about e-cigarette uptake have led to actions such as plans to ban sales to minors.14 More controversially, some have expressed concern regarding visibility of e-cigarettes in places where marketing or use of tobacco has been banned, arguing that this may reverse efforts to denormalise smoking.15 Tobacco companies have increasingly invested in e-cigarettes, with some arguing that marketing has targeted youth.3\n16\n17 Hence, while presenting itself as a partner in harm reduction, the industry is arguably seizing new opportunities to introduce young people to nicotine. Governments such as those in parts of the USA have responded to concerns regarding the growing visibility of e-cigarettes by banning their use in public places.18 In the UK, the Welsh Government has recently issued a white paper consulting on potential similar legislation.14\nArguments relating to potential impacts of the visibility of e-cigarettes centre in part on assumptions that children's perceptions of e-cigarette use as a normative behaviour will increase uptake. Perhaps the most commonly studied source of normative influence on adolescent smoking uptake is parental smoking, with children whose parents smoke more likely to smoke themselves.19 While parental influence on e-cigarette use has yet to be investigated, if e-cigarette use is driven by normative factors, children whose parents use e-cigarettes may be more likely to use them. Given that most adult users of e-cigarettes also smoke tobacco, many parents who model e-cigarette use may also model smoking, and their e-cigarette use may be seen by children as a ‘safe’ means of mimicking parental smoking. 
Peer influences on smoking have also been well established;20 although little research has considered whether e-cigarette use represents a means of imitating peer smoking.\nPerhaps the most significant concern among those arguing for greater regulation is that childhood e-cigarette use may act as a ‘gateway’ into smoking tobacco.10 Opponents of regulating e-cigarettes or limiting their visibility emphasise the lack of evidence for the gateway effect, while expressing concerns that limiting the visibility of safer alternatives may perversely protect tobacco markets.2 Indeed, the WHO has described evidence for renormalisation or gateway effects as non-existent. While backing a ban on use of e-cigarettes indoors, the WHO points to uncertainty regarding whether vapour from e-cigarettes is toxic to non-users as a justification for such a move, rather than renormalisation or gateway arguments.21 However, the WHO emphasises a need to balance efforts to promote cessation against risks of simultaneously promoting e-cigarettes use among children, also arguing that cessation claims which drive the case against regulation, should be banned from e-cigarette marketing until supported by firmer evidence.21\nThis lack of evidence on both sides of this debate is inevitable. E-cigarettes are a new phenomenon and insufficient time has passed for their harms or benefits to be understood. Experts on both sides have continued to emphasise a lack of evidence for their opponents’ position, while themselves advancing untested hypotheses regarding the harms or benefits of e-cigarettes. Further research is needed to dispassionately support or refute hypotheses being advanced on both sides of the debate.\nThis paper reports findings from a Wales-wide survey of 10–11-year-old children. First, it examines the prevalence of e-cigarette use, then potential normative influences on children's e-cigarette use, including parental smoking and e-cigarette use and peer smoking. Finally, it tests the hypothesis that never smoking children who report having used an e-cigarette will be more likely to report an intention to take up smoking tobacco; an association which has to date been demonstrated in one study of US middle school children, though has yet to be investigated in younger children or in the UK.22", "Child exposure to Environmental Tobacco Smoke (CHETS) Wales 2 was a cross-sectional study of year 6 school children within 75 schools in Wales. Its protocol was reviewed and approved by the Cardiff University Social Science Research Ethics Committee. It replicated the earlier surveys conducted in Wales which examined child's secondhand smoke exposure before and after introduction of smoke-free legislation (CHETS Wales),23 and was commissioned and powered primarily to investigate changes in child exposure to smoke in cars. This article reports on questions on e-cigarette use which were included only in the 2014 survey. To ensure that sampled schools were representative of the population of Wales, for CHETS Wales, state-maintained schools with year 6 students were stratified according to high/low (cut-off point identified as average entitlement across whole sample; 17.12%) free school meal entitlement and funding by the Local Education Authority. Within each stratum, schools were selected on a probability proportional to school size. The 75 schools participating in CHETS Wales were invited to take part in CHETS Wales 2; where schools declined, replacement schools were identified from the same stratum. 
Within each school, one year 6 (age 10–11 years) class was randomly selected by the research team to participate. The samples obtained in CHETS Wales 2 were comparable to CHETS Wales samples in terms of age, sex, socioeconomic status and family structure, indicating that the sampling strategy had been effectively replicated. Consistent with previous analyses from CHETS Wales, no non-response weights were employed.23", " Demographics Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays.\n Parental smoking and e-cigarette use Children indicated whether any of the following people smoked: (1) father (2) mother (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’, ‘I don't have or see this person’. A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as non-smoking parents. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure (including father or stepfather), mother figure (including mother or stepmother), or a mother and father figures who smokes. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who use e-cigarettes, a father figure, mother figure or a mother and father figure who use e-cigarettes. A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, who used e-cigarettes only or parent figures who used tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables only included smoking behaviour and e-cigarette usage of parents and step-parents.\n Peer smoking behaviour Children were asked to indicate how many of their friends smoked, with response options of ‘most of them’, ‘half of them’, ‘some of them’, ‘none of them’ or ‘I don't know’. Children were classified as having smoking friends if they said that at least some of their friends smoked. No data were available on how many of children's friends used e-cigarettes.\n Ever smoking and future smoking intentions Children's ever smoking behaviour was measured by asking ‘Have you ever smoked tobacco? (at least one cigarette, cigar or pipe)’, with response options of ‘yes’ or ‘no’. Future intentions were measured by the question ‘Do you think you will smoke in 2 years’ time?’, with response options of ‘definitely yes’, ‘probably yes’, ‘maybe or maybe not’, ‘probably no’ and ‘definitely no’.\n Awareness and use of e-cigarettes Awareness of e-cigarettes was measured by asking children ‘Have you heard of e-cigarettes before this survey?’ Children were asked ‘Have you ever used an e-cigarette?’, with response options of ‘no’, ‘yes, once’ or ‘yes, more than once’. Children were classified as having used an e-cigarette if they responded ‘yes, once’ or ‘yes, more than once’. E-cigarettes were defined as electronic versions of cigarettes which do not give off smoke.", "Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays.", "Children indicated whether any of the following people smoked: (1) father (2) mother (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’, ‘I don't have or see this person’. 
A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as non-smoking parents. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure (including father or stepfather), mother figure (including mother or stepmother), or a mother and father figures who smokes. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who use e-cigarettes, a father figure, mother figure or a mother and father figure who use e-cigarettes. A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, who used e-cigarettes only or parent figures who used tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables only included smoking behaviour and e-cigarette usage of parents and step-parents.", "Children were asked to indicate how many of their friends smoked, with response options of ‘most of them’, ‘half of them’, ‘some of them’, ‘none of them’ or ‘I don't know’. Children were classified as having smoking friends if they said that at least some of their friends smoked. No data were available on how many of children's friends used e-cigarettes.", "Children's ever smoking behaviour was measured by asking ‘Have you ever smoked tobacco? (at least one cigarette, cigar or pipe)’, with response options of ‘yes’ or ‘no’. Future intentions were measured by the question ‘Do you think you will smoke in 2 years’ time?’, with response options of ‘definitely yes’, ‘probably yes’, ‘maybe or maybe not’, ‘probably no’ and ‘definitely no’.", "Awareness of e-cigarettes was measured by asking children ‘Have you heard of e-cigarettes before this survey?’ Children were asked ‘Have you ever used an e-cigarette?’, with response options of ‘no’, ‘yes, once’ or ‘yes, more than once’. Children were classified as having used an e-cigarette if they responded ‘yes, once’ or ‘yes, more than once’. E-cigarettes were defined as electronic versions of cigarettes which do not give off smoke.", "Schools signed and returned a commitment form to participate in the study. Parental approval was obtained through letters sent via Royal Mail. In addition to consent forms, information sheets were provided which clearly stated that parents had the option of withdrawing their child from data collection at any time. An ‘opt out’ system was implemented in all but one school. The remaining school requested an ‘opt in’ consent procedure whereby parents/carers informed their child's school if they did wish their child to participate in the study. At each data collection session, students were also asked to complete an assent form after having read an information sheet and having had the study explained to them to ensure that they fully understood what they were invited to do, and to give them the opportunity to withdraw from the data collection session if they did not wish to participate.", "Data were collected between February and April 2014. Children completed pen and paper surveys, which were placed in sealed envelopes before being collected by researchers. Two researchers attended each data collection to ensure sufficient support and assistance where required. All staff were provided with a data collection protocol and trained by DECIPHer. 
Teachers were asked to be present, but not to intervene in the data collection in any other way. Briefing sheets were provided for any school staff present, which explained the nature of the study and provided information about the data collection and their anticipated role.", "Frequencies and percentages of children who reported using e-cigarettes were calculated for the subsample of children who reported having tried smoking, and for those who reported that they had never tried smoking. Among never-smokers, frequencies and percentages using e-cigarettes were presented by sex, parental smoking behaviour, combined parental cigarette and e-cigarette use, and friends’ smoking behaviour. Binary logistic regression models were used to examine predictors of e-cigarette use. Independent variables were parental cigarette and e-cigarette use (combined into a categorical variable including those who used neither, e-cigarettes and tobacco cigarettes, e-cigarettes only or tobacco cigarettes only), friends’ smoking behaviour, sex and family affluence. Ordinal regression models examined predictors of future smoking intentions, with ORs indicating the relative odds of being assigned to a higher rather than a lower category for the intention to smoke variable (coded from ‘definitely not’=0 to ‘definitely yes’=4). Independent variables were the same as for e-cigarette use, though e-cigarette use was now entered as an independent variable. Owing to the small number of children saying that they might or probably would smoke in 2 years, ordinal models were also conducted with a 3 category dependent variable (combining children who said that they might or would smoke in 2 years), as well as binary models (comparing ‘definitely not’ to all other responses). Comparable results were obtained, hence, we report only models using the 5 category dependent variable. Proportional odds assumption tests for multivariate ordinal models were run using the omodel plug in for Stata V.11, indicating no violations of the proportional odds assumption. To account for the sampling design and non-independence of children within schools, models were adjusted for school-level clustering using the svy commands in Stata V.11.", "Overall, 114 schools were invited to participate before the target sample of 75 schools was reached (overall response rate=65.8%). Of the 1862 pupils within selected classes, completed questionnaires were obtained from 1601 (86%). In schools where ‘opt out’ consent procedures were followed (n=74 schools, 1810 pupils), 56 children were opted out by parents, 35 children refused and 141 were absent on the day of data collection. Data were obtained from 1578 pupils (87.2%). In the one school which requested opt-in consent, this was given for 23 of 52 children (44.2%), all of whom provided data. Items on e-cigarette use were completed by 1495 children, of whom 51% were female, with a mean (and SD) age of 10.92 (0.40) years. Twenty-one (1.4%) children reported that they had ever smoked tobacco. There were no significant differences between children who did or did not complete questions on e-cigarette use, in terms of age, socioeconomic status (p=0.84) or parental smoking (p=0.50). E-cigarette questions were completed by slightly fewer boys than girls (p<0.01), though overall, an approximately even gender balance was maintained (48.6% boys; 51.4% girls).", "In total, 1014 children (66.8%) reported having heard of e-cigarettes. 
Among the small number of children who reported having used tobacco (n=21), almost half (47.6%; n=10) also reported having used an e-cigarette. Among never-smokers (N=1467), 77 children (5.3%) reported that they had used an e-cigarette. Table 1 shows frequencies and percentages of e-cigarette use among never smokers by demographic factors, and by parental smoking and e-cigarette use. Overall, 6.5% of male never-smokers and 4.1% of female never-smokers reported having used an e-cigarette.\nFrequencies and percentages of e-cigarette use among children reporting never having smoked a cigarette\n*p Value from design-adjusted χ2 analyses.\n†Variable representing whether a parent figure smokes tobacco (regardless of whether they also use e-cigarettes).\n‡Variable representing whether a parent figure uses e-cigarettes (regardless of whether they also smoke tobacco).", "Overall, 231 children (17%) reported that one or more parent figures used e-cigarettes; substantially lower than the percentage (n=615; 39.1%) who reported that at least one parent figure used tobacco. Among never-smoking children who reported that one or more parent figures used e-cigarettes, a large majority (n=168; 72.7%) reported that these parent figure(s) were dual users, who also smoked tobacco. A smaller number (n=20; 8.7%) reported that one parent figure used only e-cigarettes, while the other smoked tobacco. The remaining 18.6% (n=43) reported having only parent figures who exclusively used e-cigarettes. Hence, the vast majority of children who reported that a parent figure used e-cigarettes reported that tobacco was also used by the same parent figure, or in a smaller number of cases, by another parent figure.", "As indicated in table 1, for the four category variable representing the number of the child's parent figures who smoked tobacco, the percentage of children reporting having used an e-cigarette increased substantially with parental smoking status. Among never-smoking children who reported that they did not have a parent figure who smoked tobacco, 3.5% reported having used an e-cigarette, whereas for children who reported having a mother and a father figure who smoked tobacco, approximately 1 in 9 (11.7%) reported using an e-cigarette. For parental e-cigarette use, this gradient was steeper. Among children who reported that they did not have a parent figure who used e-cigarettes, 3.5% reported that they had used an e-cigarette, compared to 18.6% of those who reported having a mother and father who used e-cigarettes. Child e-cigarette use was significantly higher among children who reported that parent figures used tobacco and e-cigarettes than among children whose parents did not use e-cigarettes.", "Overall, 97 children (6.2%) reported that at least one friend smoked. Among never-smoking children who reported having friends who smoked, 17.7% reported having tried e-cigarettes as compared to 4.5% of those who reported that they did not have friends who smoke.", "Table 2 presents ORs and 95% CIs from logistic regression models examining associations of parental smoking, friends’ smoking, sex and family affluence with e-cigarette use. In univariate models, children were more likely to report e-cigarette use if parent figures used e-cigarettes (either solely or in conjunction with smoking). Where parents smoked but did not use e-cigarettes, children were not significantly more likely to have used an e-cigarette. 
Children who reported having friends who smoked were almost five times as likely to have used an e-cigarette, while boys and children from less affluent families were also more likely to have used an e-cigarette. In multivariate models, however, only parental e-cigarette use (either solely or in conjunction with smoking) and friends’ smoking remained significant predictors of having used an e-cigarette.\nORs and 95% CIs from logistic regression analyses of e-cigarette use and future smoking intention among 10–11-year-old never-smokers\nSignificant associations (p<0.05) are highlighted in bold.\nFAS, Family Affluence Scale.", "Overall, among never-smokers, almost all children reported that they would definitely not or probably not smoke in 2 years (table 3). Among never-smoking children who reported having used an e-cigarette, few stated that they probably or definitely will smoke in 2 years. However, children who had used an e-cigarette were substantially less likely to report that they definitely would not smoke in 2 years, and were more likely to report that they probably will not or might smoke in 2 years’ time. Hence, having used an e-cigarette was associated with weaker antismoking intentions.\nPercentage of never-smoking children reporting each level of intention to smoke by whether or not they had used an e-cigarette\nIn univariate models (table 2), antismoking intentions were significantly weaker among children whose parents smoked tobacco (solely or in conjunction with e-cigarettes), among children who reported having friends who smoked, and among boys. Never-smoking children who reported having used e-cigarettes reported substantially weaker antismoking intentions than those who had not. In a multivariate model including all variables except for e-cigarette use, all significant associations remained, though the associations of parental and friends’ smoking were reduced. The association of e-cigarette use with future smoking intentions remained after adjustment for parental and friends’ smoking and demographic variables." ]
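The stratified, probability-proportional-to-size (PPS) selection of schools described in the study design sections can be illustrated with classic systematic PPS sampling. The sketch below is illustrative only; the strata, enrolment figures and per-stratum sample sizes are invented.

```python
# Sketch: systematic PPS sampling of schools within strata (selection
# probability proportional to school size). Invented data.
import numpy as np

rng = np.random.default_rng(42)

def pps_systematic(sizes, n):
    """Return indices of n units drawn with probability proportional to size.

    Classic systematic PPS: lay the unit sizes end to end, take a random
    start in [0, step) and select the unit under every step-th point.
    """
    cum = np.cumsum(np.asarray(sizes, dtype=float))
    step = cum[-1] / n
    points = rng.uniform(0, step) + step * np.arange(n)
    return np.searchsorted(cum, points)

# Two free-school-meal strata with school enrolment counts.
strata = {"high_fsm": [210, 180, 95, 300, 150], "low_fsm": [400, 120, 260, 90]}
print({name: pps_systematic(sizes, n=2) for name, sizes in strata.items()})
```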
[ "Background", "Methods", "Study design and sample", "Measures", "Demographics", "Parental smoking and e-cigarette use", "Peer smoking behaviour", "Ever smoking and future smoking intentions", "Awareness and use of e-cigarettes", "Consent", "Data collection", "Statistical analysis", "Results", "Response rates and sample description", "Prevalence of e-cigarette awareness and use", "Parental smoking, e-cigarette use and dual use", "Parental behaviour and child e-cigarette use", "Peer smoking and e-cigarette use", "Logistic regression analyses of predictors of e-cigarette use", "Future smoking intentions", "Discussion" ]
[ "Arguments regarding the harm reductions that could be achieved, for individual smokers and for public health if tobacco were replaced with e-cigarettes,1 have led many public health experts to urge the WHO not to back calls to regulate e-cigarettes as tobacco products or restrict their marketing.2 To date, e-cigarette marketing has heavily emphasised smoking cessation benefits.3 While such claims have perhaps been made somewhat in advance of robust evidence of effectiveness, a small number of emerging studies do indicate that e-cigarettes may support cessation for some smokers.4\n5\nHowever, other leading public health experts have argued for greater regulation, pointing to limited evidence regarding direct harms and emerging evidence that e-cigarettes are not adopted primarily for smoking cessation.6 Most adult e-cigarette users also smoke tobacco (dual use),7 with e-cigarettes being used by some as a means of using nicotine in places where smoking is prohibited.8 Internationally, the prevalence of e-cigarette use among adolescents has also increased rapidly in recent years.9–13 E-cigarettes do contain some carcinogens and other toxins,2 and harm reduction arguments hold little weight were used by young people who would not otherwise have been smoking tobacco.\nPublic health experts on both sides of debates regarding regulation agree that efforts should be made to prevent young people from taking up e-cigarettes.2\n6 To date, policy responses to concerns about e-cigarette uptake have led to actions such as plans to ban sales to minors.14 More controversially, some have expressed concern regarding visibility of e-cigarettes in places where marketing or use of tobacco has been banned, arguing that this may reverse efforts to denormalise smoking.15 Tobacco companies have increasingly invested in e-cigarettes, with some arguing that marketing has targeted youth.3\n16\n17 Hence, while presenting itself as a partner in harm reduction, the industry is arguably seizing new opportunities to introduce young people to nicotine. Governments such as those in parts of the USA have responded to concerns regarding the growing visibility of e-cigarettes by banning their use in public places.18 In the UK, the Welsh Government has recently issued a white paper consulting on potential similar legislation.14\nArguments relating to potential impacts of the visibility of e-cigarettes centre in part on assumptions that children's perceptions of e-cigarette use as a normative behaviour will increase uptake. Perhaps the most commonly studied source of normative influence on adolescent smoking uptake is parental smoking, with children whose parents smoke more likely to smoke themselves.19 While parental influence on e-cigarette use has yet to be investigated, if e-cigarette use is driven by normative factors, children whose parents use e-cigarettes may be more likely to use them. Given that most adult users of e-cigarettes also smoke tobacco, many parents who model e-cigarette use may also model smoking, and their e-cigarette use may be seen by children as a ‘safe’ means of mimicking parental smoking. 
Peer influences on smoking have also been well established;20 although little research has considered whether e-cigarette use represents a means of imitating peer smoking.\nPerhaps the most significant concern among those arguing for greater regulation is that childhood e-cigarette use may act as a ‘gateway’ into smoking tobacco.10 Opponents of regulating e-cigarettes or limiting their visibility emphasise the lack of evidence for the gateway effect, while expressing concerns that limiting the visibility of safer alternatives may perversely protect tobacco markets.2 Indeed, the WHO has described evidence for renormalisation or gateway effects as non-existent. While backing a ban on use of e-cigarettes indoors, the WHO points to uncertainty regarding whether vapour from e-cigarettes is toxic to non-users as a justification for such a move, rather than renormalisation or gateway arguments.21 However, the WHO emphasises a need to balance efforts to promote cessation against risks of simultaneously promoting e-cigarettes use among children, also arguing that cessation claims which drive the case against regulation, should be banned from e-cigarette marketing until supported by firmer evidence.21\nThis lack of evidence on both sides of this debate is inevitable. E-cigarettes are a new phenomenon and insufficient time has passed for their harms or benefits to be understood. Experts on both sides have continued to emphasise a lack of evidence for their opponents’ position, while themselves advancing untested hypotheses regarding the harms or benefits of e-cigarettes. Further research is needed to dispassionately support or refute hypotheses being advanced on both sides of the debate.\nThis paper reports findings from a Wales-wide survey of 10–11-year-old children. First, it examines the prevalence of e-cigarette use, then potential normative influences on children's e-cigarette use, including parental smoking and e-cigarette use and peer smoking. Finally, it tests the hypothesis that never smoking children who report having used an e-cigarette will be more likely to report an intention to take up smoking tobacco; an association which has to date been demonstrated in one study of US middle school children, though has yet to be investigated in younger children or in the UK.22", " Study design and sample Child exposure to Environmental Tobacco Smoke (CHETS) Wales 2 was a cross-sectional study of year 6 school children within 75 schools in Wales. Its protocol was reviewed and approved by the Cardiff University Social Science Research Ethics Committee. It replicated the earlier surveys conducted in Wales which examined child's secondhand smoke exposure before and after introduction of smoke-free legislation (CHETS Wales),23 and was commissioned and powered primarily to investigate changes in child exposure to smoke in cars. This article reports on questions on e-cigarette use which were included only in the 2014 survey. To ensure that sampled schools were representative of the population of Wales, for CHETS Wales, state-maintained schools with year 6 students were stratified according to high/low (cut-off point identified as average entitlement across whole sample; 17.12%) free school meal entitlement and funding by the Local Education Authority. Within each stratum, schools were selected on a probability proportional to school size. The 75 schools participating in CHETS Wales were invited to take part in CHETS Wales 2; where schools declined, replacement schools were identified from the same stratum. 
Within each school, one year 6 (age 10–11 years) class was randomly selected by the research team to participate. The samples obtained in CHETS Wales 2 were comparable to CHETS Wales samples in terms of age, sex, socioeconomic status and family structure, indicating that the sampling strategy had been effectively replicated. Consistent with previous analyses from CHETS Wales, no non-response weights were employed.23\n Measures Demographics Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays.\n Parental smoking and e-cigarette use Children indicated whether any of the following people smoked: (1) father (2) mother (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’, ‘I don't have or see this person’. A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as non-smoking parents. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure (including father or stepfather), mother figure (including mother or stepmother), or a mother and father figures who smokes. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who use e-cigarettes, a father figure, mother figure or a mother and father figure who use e-cigarettes. A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, who used e-cigarettes only or parent figures who used tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables only included smoking behaviour and e-cigarette usage of parents and step-parents.\n Peer smoking behaviour Children were asked to indicate how many of their friends smoked, with response options of ‘most of them’, ‘half of them’, ‘some of them’, ‘none of them’ or ‘I don't know’. Children were classified as having smoking friends if they said that at least some of their friends smoked. No data were available on how many of children's friends used e-cigarettes.\n Ever smoking and future smoking intentions Children's ever smoking behaviour was measured by asking ‘Have you ever smoked tobacco? (at least one cigarette, cigar or pipe)’, with response options of ‘yes’ or ‘no’. Future intentions were measured by the question ‘Do you think you will smoke in 2 years’ time?’, with response options of ‘definitely yes’, ‘probably yes’, ‘maybe or maybe not’, ‘probably no’ and ‘definitely no’.\n Awareness and use of e-cigarettes Awareness of e-cigarettes was measured by asking children ‘Have you heard of e-cigarettes before this survey?’ Children were asked ‘Have you ever used an e-cigarette?’, with response options of ‘no’, ‘yes, once’ or ‘yes, more than once’. Children were classified as having used an e-cigarette if they responded ‘yes, once’ or ‘yes, more than once’. E-cigarettes were defined as electronic versions of cigarettes which do not give off smoke.\n Consent Schools signed and returned a commitment form to participate in the study. Parental approval was obtained through letters sent via Royal Mail. In addition to consent forms, information sheets were provided which clearly stated that parents had the option of withdrawing their child from data collection at any time. An ‘opt out’ system was implemented in all but one school. The remaining school requested an ‘opt in’ consent procedure whereby parents/carers informed their child's school if they did wish their child to participate in the study. At each data collection session, students were also asked to complete an assent form after having read an information sheet and having had the study explained to them to ensure that they fully understood what they were invited to do, and to give them the opportunity to withdraw from the data collection session if they did not wish to participate.\n Data collection Data were collected between February and April 2014. Children completed pen and paper surveys, which were placed in sealed envelopes before being collected by researchers. Two researchers attended each data collection to ensure sufficient support and assistance where required. All staff were provided with a data collection protocol and trained by DECIPHer. Teachers were asked to be present, but not to intervene in the data collection in any other way. Briefing sheets were provided for any school staff present, which explained the nature of the study and provided information about the data collection and their anticipated role.\n Statistical analysis Frequencies and percentages of children who reported using e-cigarettes were calculated for the subsample of children who reported having tried smoking, and for those who reported that they had never tried smoking. Among never-smokers, frequencies and percentages using e-cigarettes were presented by sex, parental smoking behaviour, combined parental cigarette and e-cigarette use, and friends’ smoking behaviour. Binary logistic regression models were used to examine predictors of e-cigarette use. Independent variables were parental cigarette and e-cigarette use (combined into a categorical variable including those who used neither, e-cigarettes and tobacco cigarettes, e-cigarettes only or tobacco cigarettes only), friends’ smoking behaviour, sex and family affluence. Ordinal regression models examined predictors of future smoking intentions, with ORs indicating the relative odds of being assigned to a higher rather than a lower category for the intention to smoke variable (coded from ‘definitely not’=0 to ‘definitely yes’=4). Independent variables were the same as for e-cigarette use, though e-cigarette use was now entered as an independent variable. Owing to the small number of children saying that they might or probably would smoke in 2 years, ordinal models were also conducted with a 3 category dependent variable (combining children who said that they might or would smoke in 2 years), as well as binary models (comparing ‘definitely not’ to all other responses). Comparable results were obtained, hence, we report only models using the 5 category dependent variable. Proportional odds assumption tests for multivariate ordinal models were run using the omodel plug in for Stata V.11, indicating no violations of the proportional odds assumption. To account for the sampling design and non-independence of children within schools, models were adjusted for school-level clustering using the svy commands in Stata V.11.", "Child exposure to Environmental Tobacco Smoke (CHETS) Wales 2 was a cross-sectional study of year 6 school children within 75 schools in Wales. 
Study design and sample
Child exposure to Environmental Tobacco Smoke (CHETS) Wales 2 was a cross-sectional study of year 6 school children within 75 schools in Wales. Its protocol was reviewed and approved by the Cardiff University Social Science Research Ethics Committee. It replicated the earlier surveys conducted in Wales which examined children's secondhand smoke exposure before and after the introduction of smoke-free legislation (CHETS Wales),23 and was commissioned and powered primarily to investigate changes in child exposure to smoke in cars. This article reports on questions on e-cigarette use which were included only in the 2014 survey. To ensure that sampled schools were representative of the population of Wales, for CHETS Wales, state-maintained schools with year 6 students were stratified according to high/low free school meal entitlement (with the cut-off point identified as the average entitlement across the whole sample, 17.12%) and funding by the Local Education Authority. Within each stratum, schools were selected with probability proportional to school size. The 75 schools participating in CHETS Wales were invited to take part in CHETS Wales 2; where schools declined, replacement schools were identified from the same stratum. Within each school, one year 6 (age 10–11 years) class was randomly selected by the research team to participate. The samples obtained in CHETS Wales 2 were comparable to the CHETS Wales samples in terms of age, sex, socioeconomic status and family structure, indicating that the sampling strategy had been effectively replicated. Consistent with previous analyses from CHETS Wales, no non-response weights were employed.23

Measures
Demographics
Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays.
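The paper does not report how the FAS items were scored, so the following sketch is only a plausible reconstruction under assumed HBSC-style coding, in which each item is a small ordinal score and the composite is their sum; all variable names and item codings here are assumptions.

```stata
* Hypothetical FAS composite from the four items named above.
* Assumed item codes (an assumption, following common HBSC practice):
* own_bedroom 0/1; family_car, family_holidays and computers small
* ordinal scores (e.g. 0-2), so higher totals indicate greater affluence.
gen fas = own_bedroom + family_car + family_holidays + computers
label variable fas "Family Affluence Scale (higher = more affluent)"
```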
Parental smoking and e-cigarette use
Children indicated whether any of the following people smoked: (1) father, (2) mother, (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’ and ‘I don't have or see this person’. A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as indicating a non-smoking parent. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure who smokes (including father or stepfather), a mother figure who smokes (including mother or stepmother), or both a mother and a father figure who smoke. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who uses e-cigarettes, a father figure, a mother figure, or both a mother and a father figure who use e-cigarettes. A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, a parent figure who used e-cigarettes only, or parent figures who used both tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables included only the smoking behaviour and e-cigarette use of parents and step-parents.
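The four-category combined variable can be built mechanically from two binary indicators. The sketch below assumes hypothetical 0/1 variables parent_smoke (‘any parent figure smokes tobacco’) and parent_ecig (‘any parent figure uses e-cigarettes’):

```stata
* Combine parental tobacco and e-cigarette indicators into the four
* categories described above.
gen parent_combined = .
replace parent_combined = 0 if parent_smoke == 0 & parent_ecig == 0  // neither
replace parent_combined = 1 if parent_smoke == 1 & parent_ecig == 0  // tobacco only
replace parent_combined = 2 if parent_smoke == 0 & parent_ecig == 1  // e-cigarettes only
replace parent_combined = 3 if parent_smoke == 1 & parent_ecig == 1  // both
label define pcomb 0 "Neither" 1 "Tobacco only" 2 "E-cigarettes only" ///
    3 "Tobacco and e-cigarettes"
label values parent_combined pcomb
```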
Peer smoking behaviour
Children were asked to indicate how many of their friends smoked, with response options of ‘most of them’, ‘half of them’, ‘some of them’, ‘none of them’ or ‘I don't know’. Children were classified as having smoking friends if they said that at least some of their friends smoked. No data were available on how many of children's friends used e-cigarettes.

Ever smoking and future smoking intentions
Children's ever-smoking behaviour was measured by asking ‘Have you ever smoked tobacco? (at least one cigarette, cigar or pipe)’, with response options of ‘yes’ or ‘no’. Future intentions were measured by the question ‘Do you think you will smoke in 2 years’ time?’, with response options of ‘definitely yes’, ‘probably yes’, ‘maybe or maybe not’, ‘probably no’ and ‘definitely no’.

Awareness and use of e-cigarettes
Awareness of e-cigarettes was measured by asking children ‘Have you heard of e-cigarettes before this survey?’ Children were asked ‘Have you ever used an e-cigarette?’, with response options of ‘no’, ‘yes, once’ or ‘yes, more than once’. Children were classified as having used an e-cigarette if they responded ‘yes, once’ or ‘yes, more than once’. E-cigarettes were defined as electronic versions of cigarettes which do not give off smoke.
Response rates and sample description
Overall, 114 schools were invited to participate before the target sample of 75 schools was reached (overall response rate=65.8%).
Of the 1862 pupils within the selected classes, completed questionnaires were obtained from 1601 (86%). In schools where ‘opt out’ consent procedures were followed (n=74 schools; 1810 pupils), 56 children were opted out by parents, 35 children refused and 141 were absent on the day of data collection, giving data from 1578 pupils (87.2%). In the one school which requested opt-in consent, consent was given for 23 of 52 children (44.2%), all of whom provided data. Items on e-cigarette use were completed by 1495 children, of whom 51% were female, with a mean (SD) age of 10.92 (0.40) years. Twenty-one (1.4%) children reported that they had ever smoked tobacco. There were no significant differences between children who did or did not complete the questions on e-cigarette use in terms of age, socioeconomic status (p=0.84) or parental smoking (p=0.50). E-cigarette questions were completed by slightly fewer boys than girls (p<0.01), though overall an approximately even gender balance was maintained (48.6% boys; 51.4% girls).
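As a quick arithmetic check, the reported response rates are internally consistent with the counts above:

```latex
\frac{1601}{1862} \approx 0.860,\qquad
1810 - 56 - 35 - 141 = 1578,\qquad
\frac{1578}{1810} \approx 0.872,\qquad
\frac{23}{52} \approx 0.442,\qquad
1578 + 23 = 1601.
```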
Prevalence of e-cigarette awareness and use
In total, 1014 children (66.8%) reported having heard of e-cigarettes. Among the small number of children who reported having used tobacco (n=21), almost half (47.6%; n=10) also reported having used an e-cigarette. Among never-smokers (N=1467), 77 children (5.3%) reported that they had used an e-cigarette. Table 1 shows frequencies and percentages of e-cigarette use among never-smokers by demographic factors, and by parental smoking and e-cigarette use. Overall, 6.5% of male never-smokers and 4.1% of female never-smokers reported having used an e-cigarette.

Table 1: Frequencies and percentages of e-cigarette use among children reporting never having smoked a cigarette
*p value from design-adjusted χ2 analyses.
†Variable representing whether a parent figure smokes tobacco (regardless of whether they also use e-cigarettes).
‡Variable representing whether a parent figure uses e-cigarettes (regardless of whether they also smoke tobacco).

Parental smoking, e-cigarette use and dual use
Overall, 231 children (17%) reported that one or more parent figures used e-cigarettes; substantially fewer than the 615 (39.1%) who reported that at least one parent figure used tobacco. Among never-smoking children who reported that one or more parent figures used e-cigarettes, a large majority (n=168; 72.7%) reported that these parent figure(s) were dual users who also smoked tobacco. A smaller number (n=20; 8.7%) reported that one parent figure used only e-cigarettes while the other smoked tobacco. The remaining 18.6% (n=43) reported having only parent figures who exclusively used e-cigarettes. Hence, the vast majority of children who reported that a parent figure used e-cigarettes reported that tobacco was also used by the same parent figure or, in a smaller number of cases, by another parent figure.
Parental behaviour and child e-cigarette use
As indicated in table 1, for the four-category variable representing the number of the child's parent figures who smoked tobacco, the percentage of children reporting having used an e-cigarette increased substantially with parental smoking status. Among never-smoking children who reported that they did not have a parent figure who smoked tobacco, 3.5% reported having used an e-cigarette, whereas among children who reported having both a mother and a father figure who smoked tobacco, approximately 1 in 9 (11.7%) reported having used an e-cigarette. For parental e-cigarette use, this gradient was steeper: among children who reported that they did not have a parent figure who used e-cigarettes, 3.5% reported that they had used an e-cigarette, compared with 18.6% of those who reported having both a mother and a father figure who used e-cigarettes. Child e-cigarette use was significantly higher among children who reported that parent figures used tobacco and e-cigarettes than among children whose parents did not use e-cigarettes.

Peer smoking and e-cigarette use
Overall, 97 children (6.2%) reported that at least one friend smoked. Among never-smoking children who reported having friends who smoked, 17.7% reported having tried e-cigarettes, compared with 4.5% of those who reported that they did not have friends who smoke.

Logistic regression analyses of predictors of e-cigarette use
Table 2 presents ORs and 95% CIs from logistic regression models examining associations of parental smoking, friends' smoking, sex and family affluence with e-cigarette use. In univariate models, children were more likely to report e-cigarette use if parent figures used e-cigarettes (either solely or in conjunction with smoking). Where parents smoked but did not use e-cigarettes, children were not significantly more likely to have used an e-cigarette. Children who reported having friends who smoked were almost five times as likely to have used an e-cigarette, while boys and children from less affluent families were also more likely to have used an e-cigarette. In multivariate models, however, only parental e-cigarette use (either solely or in conjunction with smoking) and friends' smoking remained significant predictors of having used an e-cigarette.

Table 2: ORs and 95% CIs from logistic regression analyses of e-cigarette use and future smoking intention among 10–11-year-old never-smokers
Significant associations (p<0.05) are highlighted in bold.
FAS, Family Affluence Scale.
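For readers interpreting table 2: the ORs are the usual exponentiated coefficients of the logistic model. Writing $p$ for the probability that a never-smoking child has used an e-cigarette and $x_1,\dots,x_k$ for the predictors,

```latex
\log\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x_1 + \dots + \beta_k x_k,
\qquad \mathrm{OR}_j = e^{\beta_j},
```

so an OR above 1 indicates higher odds of e-cigarette use associated with that predictor, holding the other variables fixed in the multivariate models.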
Future smoking intentions
Overall, among never-smokers, almost all children reported that they would definitely not or probably not smoke in 2 years (table 3). Among never-smoking children who reported having used an e-cigarette, few stated that they probably or definitely will smoke in 2 years. However, children who had used an e-cigarette were substantially less likely to report that they definitely would not smoke in 2 years, and were more likely to report that they probably will not or might smoke in 2 years’ time. Hence, having used an e-cigarette was associated with weaker antismoking intentions.

Table 3: Percentage of never-smoking children reporting each level of intention to smoke, by whether or not they had used an e-cigarette

In univariate models (table 2), antismoking intentions were significantly weaker among children whose parents smoked tobacco (solely or in conjunction with e-cigarettes), among children who reported having friends who smoked, and among boys. Never-smoking children who reported having used e-cigarettes reported substantially weaker antismoking intentions than those who had not. In a multivariate model including all variables except e-cigarette use, all significant associations remained, though the associations of parental and friends' smoking were reduced. The association of e-cigarette use with future smoking intentions remained after adjustment for parental and friends' smoking and demographic variables.
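The ordinal models behind these intention results are proportional-odds models. With intention coded $y \in \{0,\dots,4\}$ from ‘definitely not’ to ‘definitely yes’, a single OR per predictor scales the odds of lying above any cut-point $c$; that this OR is the same at every cut-point is precisely the proportional odds assumption tested with omodel in the statistical analysis section:

```latex
\log\!\left(\frac{\Pr(y > c)}{\Pr(y \le c)}\right) = \alpha_c + \beta_1 x_1 + \dots + \beta_k x_k,
\qquad c = 0, 1, 2, 3.
```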
Discussion
Among 10–11-year-old children in Wales, the proportion who had tried an e-cigarette was substantially higher than the proportion who had used tobacco. The data are therefore consistent with concerns, supported by emerging international research, that e-cigarette use represents a new form of early experimentation with nicotine.6 9 10 In addition, consistent with international findings, the vast majority of children who reported that parents used e-cigarettes reported that those parents were ‘dual users’ who used e-cigarettes as well as tobacco,7 indicating that most parents who used e-cigarettes had not completely replaced tobacco with them.

Consistent with a body of research showing associations of parental modelling of smoking with uptake of tobacco,19 e-cigarette use was also substantially more common among children whose parents used e-cigarettes. However, where parents smoked tobacco but did not use e-cigarettes, children were not significantly more likely to have used e-cigarettes. Hence, there was no evidence that children were using e-cigarettes as a means of mimicking adult smoking in the absence of parental e-cigarette use. It is possible that imitating this behaviour was seen by children as a safer form of experimentation than smoking a cigarette. It is also possible, however, that children with parent figures who used e-cigarettes were simply able to access them more easily, for example by using their parent's e-cigarette. While no measure of how many of children's friends used e-cigarettes was included, e-cigarette use was substantially greater among children who reported having at least one friend who smoked.

Before and after adjusting for demographic factors and normative variables, a strong association of e-cigarette use with intention to take up smoking in the next 2 years was observed. This is consistent with a recent US study of older children, which found that children who had used an e-cigarette were twice as likely to intend to smoke.22 It is important to note that even among children who had used an e-cigarette, few said that they would smoke within the next 2 years. However, substantially fewer children who had used an e-cigarette said that they would definitely not smoke tobacco in 2 years, while a larger proportion said that they might. Hence, children who had used an e-cigarette appeared to have weaker antismoking intentions, indicating greater openness to the possibility of taking up smoking in the near future. Data from the present study are consistent with the hypothesis that children use e-cigarettes to imitate the behaviours of parents and peers, and add some tentative support to the hypothesis that use of e-cigarettes may increase children's susceptibility to smoking.

This study is among the first to report on the prevalence and patterning of e-cigarette use in a large survey of primary school-aged children sampled to be representative of children in the UK, and, to the best of our knowledge, the first to examine associations of e-cigarette use with future smoking intentions in primary school children. Perhaps the most significant limitation of the study, however, is its reliance on self-report data. While a description of e-cigarettes was given, it may be that some children were unsure what this term meant.
There are currently no validated objective means of ascertaining whether or not a child uses e-cigarettes. E-cigarettes are also becoming increasingly differentiated in type, and hence more detailed measures which capture these differences may be useful for future research. The cross-sectional design precludes cause-and-effect conclusions: for example, while the findings are consistent with a hypothesis that e-cigarette use increases children's intention to smoke, intention to smoke may drive e-cigarette use rather than the other way around. Finally, we were only able to demonstrate associations with behavioural intention, which is an imperfect predictor of future behaviour.25

It is perhaps premature to make firm policy recommendations on the basis of an emerging and underdeveloped evidence base; at present, arguments on both sides of the current divide are presented with far greater conviction than the evidence base can support. Our findings point to a need to carefully balance harm reduction arguments, which are posed as a justification for limiting regulation of e-cigarettes (and remain contingent on further evidence that e-cigarettes are safe and can be successfully used as a smoking cessation aid), against accumulating evidence of dual use by adults and of use among children who would not otherwise be smoking tobacco.

The primary implications of this study relate to a need for further research into children's e-cigarette use. Development of methods to validate children's reports of e-cigarette use, and to differentiate between types of electronic nicotine delivery system, is a priority in order to provide greater confidence in the prevalence estimates obtained from surveys. While we are not able to demonstrate definitively that e-cigarette use leads to uptake of smoking, research adopting longitudinal designs is clearly needed to understand the direction of the associations observed, as recognised by the WHO,21 and to ascertain whether e-cigarette use is followed by subsequent uptake of tobacco. Should future research continue to suggest that childhood e-cigarette use represents an early warning sign that smoking may follow, this may add support to moves toward greater regulation of e-cigarettes in terms of their advertising and visibility.

While opponents of regulating e-cigarettes appear to argue from a default position whereby e-cigarettes are presumed to be associated with few harms unless proven otherwise, the WHO has adopted a position which argues that e-cigarettes should be treated with caution until their potential harms or benefits are known.21 Research to investigate the safety of e-cigarettes (for users and non-users) and the mechanisms through which they might be offered as effective smoking cessation devices, while limiting children's exposure to them, is necessary in order to better inform public health strategies and to reach a compromise between both sides of this debate.

What this paper adds
- E-cigarette use represents a new form of childhood experimentation with nicotine, which is more common among 10–11-year-olds than tobacco use.
- The majority of children who report that parents use e-cigarettes report that those parents are ‘dual users’ who also smoke tobacco. Parental ‘dual use’ is associated with children's reported use of e-cigarettes.
- Children who report having used an e-cigarette are less likely to report definite intentions not to smoke, and are more likely to report that they might smoke tobacco in 2 years’ time.
[ "Electronic nicotine delivery devices", "Non-cigarette tobacco products", "Prevention" ]
Background: Arguments regarding the harm reductions that could be achieved, for individual smokers and for public health, if tobacco were replaced with e-cigarettes1 have led many public health experts to urge the WHO not to back calls to regulate e-cigarettes as tobacco products or to restrict their marketing.2 To date, e-cigarette marketing has heavily emphasised smoking cessation benefits.3 While such claims have perhaps been made somewhat in advance of robust evidence of effectiveness, a small number of emerging studies do indicate that e-cigarettes may support cessation for some smokers.4 5 However, other leading public health experts have argued for greater regulation, pointing to limited evidence regarding direct harms and emerging evidence that e-cigarettes are not adopted primarily for smoking cessation.6 Most adult e-cigarette users also smoke tobacco (dual use),7 with e-cigarettes being used by some as a means of using nicotine in places where smoking is prohibited.8 Internationally, the prevalence of e-cigarette use among adolescents has also increased rapidly in recent years.9–13 E-cigarettes do contain some carcinogens and other toxins,2 and harm reduction arguments hold little weight if e-cigarettes are used by young people who would not otherwise have been smoking tobacco. Public health experts on both sides of the debates regarding regulation agree that efforts should be made to prevent young people from taking up e-cigarettes.2 6 To date, policy responses to concerns about e-cigarette uptake have led to actions such as plans to ban sales to minors.14 More controversially, some have expressed concern regarding the visibility of e-cigarettes in places where the marketing or use of tobacco has been banned, arguing that this may reverse efforts to denormalise smoking.15 Tobacco companies have increasingly invested in e-cigarettes, with some arguing that marketing has targeted youth.3 16 17 Hence, while presenting itself as a partner in harm reduction, the industry is arguably seizing new opportunities to introduce young people to nicotine. Governments such as those in parts of the USA have responded to concerns regarding the growing visibility of e-cigarettes by banning their use in public places.18 In the UK, the Welsh Government has recently issued a white paper consulting on potential similar legislation.14 Arguments relating to the potential impacts of the visibility of e-cigarettes centre in part on the assumption that children's perceptions of e-cigarette use as a normative behaviour will increase uptake. Perhaps the most commonly studied source of normative influence on adolescent smoking uptake is parental smoking, with children whose parents smoke being more likely to smoke themselves.19 While parental influence on e-cigarette use has yet to be investigated, if e-cigarette use is driven by normative factors, children whose parents use e-cigarettes may be more likely to use them. Given that most adult users of e-cigarettes also smoke tobacco, many parents who model e-cigarette use may also model smoking, and their e-cigarette use may be seen by children as a ‘safe’ means of mimicking parental smoking. Peer influences on smoking have also been well established,20 although little research has considered whether e-cigarette use represents a means of imitating peer smoking.
Perhaps the most significant concern among those arguing for greater regulation is that childhood e-cigarette use may act as a ‘gateway’ into smoking tobacco.10 Opponents of regulating e-cigarettes or limiting their visibility emphasise the lack of evidence for the gateway effect, while expressing concerns that limiting the visibility of safer alternatives may perversely protect tobacco markets.2 Indeed, the WHO has described the evidence for renormalisation or gateway effects as non-existent. While backing a ban on the use of e-cigarettes indoors, the WHO points to uncertainty regarding whether vapour from e-cigarettes is toxic to non-users as the justification for such a move, rather than renormalisation or gateway arguments.21 However, the WHO emphasises a need to balance efforts to promote cessation against the risk of simultaneously promoting e-cigarette use among children, also arguing that the cessation claims which drive the case against regulation should be banned from e-cigarette marketing until supported by firmer evidence.21 This lack of evidence on both sides of the debate is inevitable: e-cigarettes are a new phenomenon, and insufficient time has passed for their harms or benefits to be understood. Experts on both sides have continued to emphasise a lack of evidence for their opponents' position, while themselves advancing untested hypotheses regarding the harms or benefits of e-cigarettes. Further research is needed to dispassionately support or refute the hypotheses being advanced on both sides of the debate. This paper reports findings from a Wales-wide survey of 10–11-year-old children. First, it examines the prevalence of e-cigarette use; it then examines potential normative influences on children's e-cigarette use, including parental smoking and e-cigarette use and peer smoking. Finally, it tests the hypothesis that never-smoking children who report having used an e-cigarette will be more likely to report an intention to take up smoking tobacco; an association which has to date been demonstrated in one study of US middle school children, though it has yet to be investigated in younger children or in the UK.22
The samples obtained in CHETS Wales 2 were comparable to CHETS Wales samples in terms of age, sex, socioeconomic status and family structure, indicating that the sampling strategy had been effectively replicated. Consistent with previous analyses from CHETS Wales, no non-response weights were employed.23 Child exposure to Environmental Tobacco Smoke (CHETS) Wales 2 was a cross-sectional study of year 6 school children within 75 schools in Wales. Its protocol was reviewed and approved by the Cardiff University Social Science Research Ethics Committee. It replicated the earlier surveys conducted in Wales which examined child's secondhand smoke exposure before and after introduction of smoke-free legislation (CHETS Wales),23 and was commissioned and powered primarily to investigate changes in child exposure to smoke in cars. This article reports on questions on e-cigarette use which were included only in the 2014 survey. To ensure that sampled schools were representative of the population of Wales, for CHETS Wales, state-maintained schools with year 6 students were stratified according to high/low (cut-off point identified as average entitlement across whole sample; 17.12%) free school meal entitlement and funding by the Local Education Authority. Within each stratum, schools were selected on a probability proportional to school size. The 75 schools participating in CHETS Wales were invited to take part in CHETS Wales 2; where schools declined, replacement schools were identified from the same stratum. Within each school, one year 6 (age 10–11 years) class was randomly selected by the research team to participate. The samples obtained in CHETS Wales 2 were comparable to CHETS Wales samples in terms of age, sex, socioeconomic status and family structure, indicating that the sampling strategy had been effectively replicated. Consistent with previous analyses from CHETS Wales, no non-response weights were employed.23 Measures Demographics Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays. Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays. Parental smoking and e-cigarette use Children indicated whether any of the following people smoked: (1) father (2) mother (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’, ‘I don't have or see this person’. A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as non-smoking parents. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure (including father or stepfather), mother figure (including mother or stepmother), or a mother and father figures who smokes. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who use e-cigarettes, a father figure, mother figure or a mother and father figure who use e-cigarettes. 
A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, who used e-cigarettes only or parent figures who used tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables only included smoking behaviour and e-cigarette usage of parents and step-parents. Children indicated whether any of the following people smoked: (1) father (2) mother (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’, ‘I don't have or see this person’. A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as non-smoking parents. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure (including father or stepfather), mother figure (including mother or stepmother), or a mother and father figures who smokes. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who use e-cigarettes, a father figure, mother figure or a mother and father figure who use e-cigarettes. A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, who used e-cigarettes only or parent figures who used tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables only included smoking behaviour and e-cigarette usage of parents and step-parents. Peer smoking behaviour Children were asked to indicate how many of their friends smoked, with response options of ‘most of them’, ‘half of them’, ‘some of them’, ‘none of them’ or ‘I don't know’. Children were classified as having smoking friends if they said that at least some of their friends smoked. No data were available on how many of children's friends used e-cigarettes. Children were asked to indicate how many of their friends smoked, with response options of ‘most of them’, ‘half of them’, ‘some of them’, ‘none of them’ or ‘I don't know’. Children were classified as having smoking friends if they said that at least some of their friends smoked. No data were available on how many of children's friends used e-cigarettes. Ever smoking and future smoking intentions Children's ever smoking behaviour was measured by asking ‘Have you ever smoked tobacco? (at least one cigarette, cigar or pipe)’, with response options of ‘yes’ or ‘no’. Future intentions were measured by the question ‘Do you think you will smoke in 2 years’ time?’, with response options of ‘definitely yes’, ‘probably yes’, ‘maybe or maybe not’, ‘probably no’ and ‘definitely no’. Children's ever smoking behaviour was measured by asking ‘Have you ever smoked tobacco? (at least one cigarette, cigar or pipe)’, with response options of ‘yes’ or ‘no’. Future intentions were measured by the question ‘Do you think you will smoke in 2 years’ time?’, with response options of ‘definitely yes’, ‘probably yes’, ‘maybe or maybe not’, ‘probably no’ and ‘definitely no’. 
Awareness and use of e-cigarettes Awareness of e-cigarettes was measured by asking children ‘Have you heard of e-cigarettes before this survey?’ Children were asked ‘Have you ever used an e-cigarette?’, with response options of ‘no’, ‘yes, once’ or ‘yes, more than once’. Children were classified as having used an e-cigarette if they responded ‘yes, once’ or ‘yes, more than once’. E-cigarettes were defined as electronic versions of cigarettes which do not give off smoke. Awareness of e-cigarettes was measured by asking children ‘Have you heard of e-cigarettes before this survey?’ Children were asked ‘Have you ever used an e-cigarette?’, with response options of ‘no’, ‘yes, once’ or ‘yes, more than once’. Children were classified as having used an e-cigarette if they responded ‘yes, once’ or ‘yes, more than once’. E-cigarettes were defined as electronic versions of cigarettes which do not give off smoke. Demographics Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays. Children indicated their sex and year and month of birth. To measure socioeconomic status, children completed the Family Affluence Scale (FAS),24 comprising measures of bedroom occupancy, car and computer ownership, and family holidays. Parental smoking and e-cigarette use Children indicated whether any of the following people smoked: (1) father (2) mother (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’, ‘I don't have or see this person’. A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as non-smoking parents. Replicating the coding procedures used for parental smoking in CHETS Wales,23 children were categorised as having no parent figure who smokes, a father figure (including father or stepfather), mother figure (including mother or stepmother), or a mother and father figures who smokes. The same questions were asked for parental e-cigarette use, with children categorised as having no parent figure who use e-cigarettes, a father figure, mother figure or a mother and father figure who use e-cigarettes. A combined variable was created which indicated whether children had no parent figures who used either tobacco or e-cigarettes, a parent figure who used tobacco only, who used e-cigarettes only or parent figures who used tobacco and e-cigarettes. As only 2% of children reported living with a primary caregiver other than a parent, these variables only included smoking behaviour and e-cigarette usage of parents and step-parents. Children indicated whether any of the following people smoked: (1) father (2) mother (3) stepfather (or mother's partner) and (4) stepmother (or father's partner). Response options were ‘smokes every day’, ‘smokes sometimes’, ‘does not smoke’, ‘I don't know’, ‘I don't have or see this person’. A parent figure was classified as smoking if the child responded ‘smokes every day’ or ‘smokes sometimes’, with all other responses classified as non-smoking parents. 
Consent: Schools signed and returned a commitment form to participate in the study. Parental approval was obtained through letters sent via Royal Mail.
In addition to consent forms, information sheets were provided which clearly stated that parents had the option of withdrawing their child from data collection at any time. An ‘opt out’ system was implemented in all but one school. The remaining school requested an ‘opt in’ consent procedure, whereby parents/carers informed their child's school if they wished their child to participate in the study. At each data collection session, students were also asked to complete an assent form after reading an information sheet and having the study explained to them, to ensure that they fully understood what they were invited to do and to give them the opportunity to withdraw from the session if they did not wish to participate.

Data collection: Data were collected between February and April 2014. Children completed pen-and-paper surveys, which were placed in sealed envelopes before being collected by researchers. Two researchers attended each data collection to provide sufficient support and assistance where required. All staff were provided with a data collection protocol and trained by DECIPHer. Teachers were asked to be present, but not to intervene in the data collection in any other way. Briefing sheets were provided for any school staff present, explaining the nature of the study, the data collection and their anticipated role.

Statistical analysis: Frequencies and percentages of children who reported using e-cigarettes were calculated for the subsample of children who reported having tried smoking, and for those who reported never having tried smoking. Among never-smokers, frequencies and percentages using e-cigarettes were presented by sex, parental smoking behaviour, combined parental cigarette and e-cigarette use, and friends’ smoking behaviour. Binary logistic regression models were used to examine predictors of e-cigarette use.
Independent variables were parental cigarette and e-cigarette use (combined into a categorical variable distinguishing those who used neither, e-cigarettes and tobacco cigarettes, e-cigarettes only, or tobacco cigarettes only), friends’ smoking behaviour, sex and family affluence. Ordinal regression models examined predictors of future smoking intentions, with ORs indicating the relative odds of being assigned to a higher rather than a lower category of the intention-to-smoke variable (coded from ‘definitely not’=0 to ‘definitely yes’=4). Independent variables were the same as for e-cigarette use, with e-cigarette use now entered as an additional independent variable. Owing to the small number of children saying that they might or probably would smoke in 2 years, ordinal models were also run with a 3-category dependent variable (combining children who said that they might or would smoke in 2 years), as well as binary models (comparing ‘definitely not’ with all other responses). Comparable results were obtained; hence, we report only models using the 5-category dependent variable. Proportional odds assumption tests for multivariate ordinal models were run using the omodel plug-in for Stata V.11, indicating no violations of the proportional odds assumption. To account for the sampling design and non-independence of children within schools, models were adjusted for school-level clustering using the svy commands in Stata V.11 (a minimal sketch of these models is given below).
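The following Stata sketch illustrates the modelling strategy just described. It is a minimal sketch under assumed variable names (school_id, ever_smoked, ever_ecig, parent_combined, friend_smoke, boy, fas, intention), not the authors' actual code.

* Account for the sampling design: children clustered within schools
svyset school_id

* Binary logistic regression: predictors of e-cigarette use among never-smokers
svy, subpop(if ever_smoked == 0): logit ever_ecig i.parent_combined i.friend_smoke i.boy fas

* Ordinal regression: future smoking intentions, 0 = 'definitely not' ... 4 = 'definitely yes'
svy, subpop(if ever_smoked == 0): ologit intention i.parent_combined i.friend_smoke i.boy fas i.ever_ecig

* Sensitivity recodes of the dependent variable
gen intention3 = cond(intention >= 2, 2, intention)  // 3 categories: merge 'might'/'probably yes'/'definitely yes'
gen intention_yn = intention > 0                     // binary: 'definitely not' vs all other responses

* Proportional odds assumption test; the user-written omodel command predates
* factor-variable notation and svy support, so indicator variables (here assumed
* to be pre-built) are entered into an unclustered model
omodel logit intention parent_tobonly parent_ecigonly parent_both friend_smoke boy fas ever_ecig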
Study design and sample: Child exposure to Environmental Tobacco Smoke (CHETS) Wales 2 was a cross-sectional study of year 6 school children within 75 schools in Wales. Its protocol was reviewed and approved by the Cardiff University Social Science Research Ethics Committee. It replicated the earlier surveys conducted in Wales which examined children's secondhand smoke exposure before and after the introduction of smoke-free legislation (CHETS Wales),23 and was commissioned and powered primarily to investigate changes in children's exposure to smoke in cars. This article reports on questions on e-cigarette use which were included only in the 2014 survey. To ensure that sampled schools were representative of the population of Wales, for CHETS Wales, state-maintained schools with year 6 students were stratified according to high/low free school meal entitlement (cut-off identified as the average entitlement across the whole sample; 17.12%) and funding by the Local Education Authority. Within each stratum, schools were selected with probability proportional to school size. The 75 schools participating in CHETS Wales were invited to take part in CHETS Wales 2; where schools declined, replacement schools were identified from the same stratum. Within each school, one year 6 (age 10–11 years) class was randomly selected by the research team to participate. The samples obtained in CHETS Wales 2 were comparable to the CHETS Wales samples in terms of age, sex, socioeconomic status and family structure, indicating that the sampling strategy had been effectively replicated. Consistent with previous analyses from CHETS Wales, no non-response weights were employed.23
Results:

Response rates and sample description: Overall, 114 schools were invited to participate before the target sample of 75 schools was reached (overall response rate=65.8%). Of the 1862 pupils within selected classes, completed questionnaires were obtained from 1601 (86%). In schools where ‘opt out’ consent procedures were followed (n=74 schools; 1810 pupils), 56 children were opted out by parents, 35 children refused and 141 were absent on the day of data collection; data were obtained from 1578 pupils (87.2%). In the one school which requested opt-in consent, consent was given for 23 of 52 children (44.2%), all of whom provided data. Items on e-cigarette use were completed by 1495 children, of whom 51% were female, with a mean (SD) age of 10.92 (0.40) years. Twenty-one children (1.4%) reported that they had ever smoked tobacco. There were no significant differences between children who did and did not complete questions on e-cigarette use in terms of age, socioeconomic status (p=0.84) or parental smoking (p=0.50). E-cigarette questions were completed by slightly fewer boys than girls (p<0.01), though overall an approximately even gender balance was maintained (48.6% boys; 51.4% girls).

Prevalence of e-cigarette awareness and use: In total, 1014 children (66.8%) reported having heard of e-cigarettes. Among the small number of children who reported having used tobacco (n=21), almost half (47.6%; n=10) also reported having used an e-cigarette. Among never-smokers (n=1467), 77 children (5.3%) reported that they had used an e-cigarette. Table 1 shows frequencies and percentages of e-cigarette use among never-smokers by demographic factors and by parental smoking and e-cigarette use. Overall, 6.5% of male never-smokers and 4.1% of female never-smokers reported having used an e-cigarette.

Table 1: Frequencies and percentages of e-cigarette use among children reporting never having smoked a cigarette. *p value from design-adjusted χ2 analyses. †Variable representing whether a parent figure smokes tobacco (regardless of whether they also use e-cigarettes). ‡Variable representing whether a parent figure uses e-cigarettes (regardless of whether they also smoke tobacco).
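The p values in table 1 come from design-adjusted χ2 analyses; with svy-set data, svy: tabulate reports a design-based (F-corrected) Pearson χ2 by default. A minimal sketch, again under the hypothetical variable names used earlier:

* Design-adjusted test of e-cigarette use by parental smoking among never-smokers
svy, subpop(if ever_smoked == 0): tabulate parent_smoke ever_ecig, row percent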
Parental smoking, e-cigarette use and dual use: Overall, 231 children (17%) reported that one or more parent figures used e-cigarettes, substantially lower than the percentage (n=615; 39.1%) who reported that at least one parent figure used tobacco. Among never-smoking children who reported that one or more parent figures used e-cigarettes, a large majority (n=168; 72.7%) reported that these parent figure(s) were dual users who also smoked tobacco. A smaller number (n=20; 8.7%) reported that one parent figure used only e-cigarettes while the other smoked tobacco. The remaining 18.6% (n=43) reported having only parent figures who exclusively used e-cigarettes. Hence, the vast majority of children who reported that a parent figure used e-cigarettes reported that tobacco was also used by the same parent figure or, in a smaller number of cases, by another parent figure.

Parental behaviour and child e-cigarette use: As indicated in table 1, for the four-category variable representing which of the child's parent figures smoked tobacco, the percentage of children reporting having used an e-cigarette increased substantially with parental smoking. Among never-smoking children who reported that they did not have a parent figure who smoked tobacco, 3.5% reported having used an e-cigarette, whereas among children who reported having both a mother and a father figure who smoked tobacco, approximately 1 in 9 (11.7%) reported using an e-cigarette. For parental e-cigarette use, this gradient was steeper: among children who reported that they did not have a parent figure who used e-cigarettes, 3.5% reported that they had used an e-cigarette, compared with 18.6% of those who reported having both a mother and a father figure who used e-cigarettes.
Child e-cigarette use was significantly higher among children who reported that parent figures used both tobacco and e-cigarettes than among children whose parents did not use e-cigarettes.

Peer smoking and e-cigarette use: Overall, 97 children (6.2%) reported that at least one friend smoked. Among never-smoking children who reported having friends who smoked, 17.7% reported having tried e-cigarettes, compared with 4.5% of those who reported that they did not have friends who smoke.

Logistic regression analyses of predictors of e-cigarette use: Table 2 presents ORs and 95% CIs from logistic regression models examining associations of parental smoking, friends’ smoking, sex and family affluence with e-cigarette use. In univariate models, children were more likely to report e-cigarette use if parent figures used e-cigarettes (either solely or in conjunction with smoking); where parents smoked but did not use e-cigarettes, children were not significantly more likely to have used an e-cigarette.
Children who reported having friends who smoked were almost five times as likely to have used an e-cigarette, while boys and children from less affluent families were also more likely to have used one. In multivariate models, however, only parental e-cigarette use (either solely or in conjunction with smoking) and friends’ smoking remained significant predictors of having used an e-cigarette.

Table 2: ORs and 95% CIs from logistic regression analyses of e-cigarette use and future smoking intention among 10–11-year-old never-smokers. Significant associations (p<0.05) are highlighted in bold. FAS, Family Affluence Scale.

Future smoking intentions: Overall, among never-smokers, almost all children reported that they would definitely not or probably not smoke in 2 years (table 3). Among never-smoking children who reported having used an e-cigarette, few stated that they probably or definitely would smoke in 2 years. However, children who had used an e-cigarette were substantially less likely to report that they definitely would not smoke in 2 years, and more likely to report that they probably would not or might smoke in 2 years’ time. Hence, having used an e-cigarette was associated with weaker antismoking intentions.

Table 3: Percentage of never-smoking children reporting each level of intention to smoke, by whether or not they had used an e-cigarette.

In univariate models (table 2), antismoking intentions were significantly weaker among children whose parents smoked tobacco (solely or in conjunction with e-cigarettes), among children who reported having friends who smoked, and among boys. Never-smoking children who reported having used e-cigarettes reported substantially weaker antismoking intentions than those who had not. In a multivariate model including all variables except e-cigarette use, all significant associations remained, though the associations of parental and friends’ smoking were reduced. The association of e-cigarette use with future smoking intentions remained after adjustment for parental and friends’ smoking and demographic variables.
Discussion: Among 10–11-year-old children in Wales, the proportion who had tried an e-cigarette was substantially higher than the proportion who had used tobacco. The data are therefore consistent with concerns, supported by emerging international research, that e-cigarette use represents a new form of early experimentation with nicotine.6 9 10 In addition, consistent with international findings, the vast majority of children who reported that parents used e-cigarettes reported that they were ‘dual users’ who used e-cigarettes as well as tobacco,7 indicating that most parents who used e-cigarettes had not completely replaced tobacco with them. Consistent with a body of research showing associations of parental modelling of smoking with uptake of tobacco,19 e-cigarette use was substantially more common among children whose parents used e-cigarettes. However, where parents smoked tobacco but did not use e-cigarettes, children were not significantly more likely to have used e-cigarettes; hence, there was no evidence that children were using e-cigarettes as a means of mimicking adult smoking in the absence of parental e-cigarette use. It is possible that children saw imitating this behaviour as a safer form of experimentation than smoking a cigarette. However, it is also possible that children with parent figures who used e-cigarettes were simply able to access them more easily, for example by using their parent's e-cigarette. While no measure of how many of children's friends used e-cigarettes was included, e-cigarette use was substantially greater among children who reported having at least one friend who smoked. Before and after adjusting for demographic factors and normative variables, a strong association of e-cigarette use with intention to take up smoking in the next 2 years was observed. This is consistent with a recent US study of older children, which found that children who had used an e-cigarette were twice as likely to intend to smoke.22 It is important to note that even among children who had used an e-cigarette, few said that they would smoke within the next 2 years. However, substantially fewer children who had used an e-cigarette said that they would definitely not smoke tobacco in 2 years, while a larger proportion said that they might. Hence, children who had used an e-cigarette appeared to have weaker antismoking intentions, indicating greater openness to the possibility of taking up smoking in the near future. Data from the present study are consistent with the hypotheses that children use e-cigarettes to imitate the behaviours of parents and peers, and add some tentative support for the hypothesis that use of e-cigarettes may increase children's susceptibility to smoking. This study is among the first to report on the prevalence and patterning of e-cigarette use in a large survey of primary school-aged children sampled to be representative of children in the UK, and, to the best of our knowledge, the first to examine associations of e-cigarette use with future smoking intentions in primary school children. However, perhaps the most significant limitation of the study is its reliance on self-report data. While a description of e-cigarettes was given, it may be that some children were unsure what this term meant.
There are currently no validated objective means of ascertaining whether or not a child uses e-cigarettes. E-cigarettes are also becoming increasingly differentiated in type, and hence more detailed measures which capture these differences may be useful for future research. The cross-sectional design precludes cause-and-effect conclusions: for example, while the findings are consistent with a hypothesis that e-cigarette use increases children's intention to smoke, intention to smoke may drive e-cigarette use rather than the other way around. Finally, we were only able to demonstrate associations with behavioural intention, which is an imperfect predictor of future behaviour.25 It is perhaps premature to make firm policy recommendations on the basis of an emerging and underdeveloped evidence base; at present, arguments on both sides of the current divide are presented with far greater conviction than the evidence base can support. Our findings point to a need to carefully balance harm reduction arguments, which are posed as a justification for limiting regulation of e-cigarettes (and remain contingent on further evidence that e-cigarettes are safe and can be successfully used as a smoking cessation aid), against accumulating evidence of dual use by adults and of use among children who would not otherwise be smoking tobacco. The primary implications of this study relate to the need for further research into children's e-cigarette use. Development of methods to validate children's reports of e-cigarette use, and to differentiate between types of electronic nicotine delivery system, is a priority in order to provide greater confidence in the prevalence estimates obtained from surveys. While we are not able to demonstrate definitively that e-cigarette use leads to uptake of smoking, research adopting longitudinal designs is clearly needed to understand the direction of the associations observed, as recognised by the WHO,21 and to ascertain whether e-cigarette use is followed by subsequent uptake of tobacco. Should future research continue to suggest that childhood e-cigarette use represents an early warning sign that smoking may follow, this may add support for moves toward greater regulation of e-cigarettes in terms of their advertising and visibility. While opponents of regulating e-cigarettes appear to argue from a default position whereby e-cigarettes are presumed to be associated with few harms unless proven otherwise, the WHO has adopted the position that e-cigarettes should be treated with caution until their potential harms or benefits are known.21 Research to investigate the safety of e-cigarettes (for users and non-users) and the mechanisms through which they might be offered as effective smoking cessation devices, while limiting children's exposure to them, is necessary to better inform public health strategies and to reach a compromise between both sides of this debate.

What this paper adds:
This study indicates that e-cigarette use represents a new form of childhood experimentation with nicotine which is more common among 10–11-year-olds than tobacco use. The majority of children who report that parents use e-cigarettes report that they are ‘dual users’ who also smoke tobacco. Parental ‘dual use’ is associated with children's reported use of e-cigarettes. Children who report having used an e-cigarette are less likely to report definite intentions not to smoke and are more likely to report that they might smoke tobacco in 2 years’ time.
Background: E-cigarettes are seen by some as offering harm reduction potential where used effectively as smoking cessation devices. However, there is emerging international evidence of growing use among young people, amid concerns that this may increase tobacco uptake. Few UK studies have examined the prevalence of e-cigarette use in non-smoking children or its association with intentions to smoke. Methods: A cross-sectional survey of year 6 (10-11-year-old) children in Wales. Approximately 1500 children completed questions on e-cigarette use, parental and peer smoking, and intentions to smoke. Logistic regression analyses among never-smoking children, adjusted for school-level clustering, examined associations of smoking norms with e-cigarette use, and of e-cigarette use with intentions to smoke tobacco within the next 2 years. Results: Approximately 6% of year 6 children, including 5% of never-smokers, reported having used an e-cigarette. By comparison with children whose parents neither smoked nor used e-cigarettes, children were most likely to have used an e-cigarette if parents used both tobacco and e-cigarettes (OR=3.40; 95% CI 1.73 to 6.69). Having used an e-cigarette was associated with intentions to smoke (OR=3.21; 95% CI 1.66 to 6.23). While few children reported that they would smoke in 2 years' time, children who had used an e-cigarette were less likely to report that they definitely would not smoke tobacco in 2 years' time and were more likely to say that they might. Conclusions: E-cigarettes represent a new form of childhood experimentation with nicotine. Findings are consistent with a hypothesis that children use e-cigarettes to imitate parental and peer smoking behaviours, and that e-cigarette use is associated with weaker antismoking intentions.
Keywords: Electronic nicotine delivery devices | Non-cigarette tobacco products | Prevention
MeSH terms: Age Factors | Child | Child Behavior | Cross-Sectional Studies | Electronic Nicotine Delivery Systems | Female | Health Behavior | Humans | Intention | Logistic Models | Male | Multivariate Analysis | Odds Ratio | Parents | Peer Influence | Prevalence | Risk Factors | Smoking | Smoking Prevention | Surveys and Questionnaires | Time Factors | Wales
Effectiveness of the Early Staged Hybrid Approach for Treatment of Symptomatic Atrial Fibrillation: the Electrophysiology Study Could Be Deferred?
34751009
The efficacy of catheter ablation for persistent atrial fibrillation (AF) remains suboptimal. A hybrid approach of catheter ablation combined with totally thoracoscopic surgical ablation can improve outcomes. In this study, we evaluated the efficacy of an early staged hybrid procedure, performed during the hospital stay after totally thoracoscopic ablation, compared with stand-alone totally thoracoscopic ablation.
BACKGROUND
Patients who underwent totally thoracoscopic ablation from February 2012 to December 2018 were included in this study. We compared the outcomes of the totally thoracoscopic ablation only group versus the early staged hybrid procedure group. The primary outcome was recurrence of atrial tachyarrhythmia after a three-month blanking period. The secondary outcome was repeat unplanned electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence.
METHODS
A total of 306 patients (mean age, 56.8 ± 8.5 years; 278 [90.8%] males) were included in the study, with 81 patients in the early staged hybrid group and 225 patients in the stand-alone totally thoracoscopic ablation group. The mean follow-up duration was 30.0 months. Overall arrhythmia-free survival showed no significant difference between the two groups (log-rank P = 0.402). There was no significant difference in the rate of repeat procedures between the two groups (log-rank P = 0.11).
RESULTS
The early staged hybrid procedure after thoracoscopic ablation did not improve the outcome of recurrence of atrial tachyarrhythmia. The second-stage electrophysiology study could therefore be deferred and reserved for patients with recurrence of atrial tachyarrhythmia during follow-up after totally thoracoscopic ablation.
CONCLUSION
[ "Aged", "Atrial Fibrillation", "Disease-Free Survival", "Electrophysiological Phenomena", "Female", "Follow-Up Studies", "Humans", "Male", "Middle Aged", "Recurrence", "Severity of Illness Index", "Tachycardia", "Thoracoscopy", "Treatment Outcome" ]
8575760
INTRODUCTION
Atrial fibrillation (AF) is the most common arrhythmia and is associated with an increased risk of thromboembolism and heart failure.12 Since the finding that pulmonary veins (PVs) are the trigger foci for AF, catheter ablation has been recommended for drug-refractory symptomatic AF.3 Catheter ablation in patients with heart failure showed significantly better outcomes, and almost 70% of these patients had persistent AF.4 However, the efficacy of catheter ablation for persistent AF remains suboptimal due to electrical atrial remodeling, and more than 40% of patients experience AF recurrence.56 Surgical approaches such as the Cox–Maze operation are options for this issue. However, surgical ablation is invasive and shows better outcomes when performed concomitantly with other cardiac surgery.789 Thoracoscopic ablation is an alternative, less invasive approach for stand-alone surgical treatment of AF. Current guidelines recommend that thoracoscopic hybrid surgical ablation procedures be considered in patients with failed percutaneous AF ablation.10 Hybrid AF ablation can combine the benefits of both surgical ablation and catheter ablation. Surgical ablation targets both PVs, the left atrial (LA) posterior wall, and left atrial appendage (LAA) closure, with the ability to visualize the atrium. It can also access epicardial structures such as the ganglionic plexi and the ligament of Marshall.11 Catheter ablation allows more detailed electrical mapping and can target gap ablation and ablate sites outside the surgeon's view, such as the cavotricuspid isthmus (CTI) or the septal region. A proper hybrid treatment should consist of a planned combination of surgical and catheter ablation. Theoretically, combining the epicardial approach with endocardial ablation that includes validation of the ablation lines and modification of the residual AF substrate may increase long-term procedural success rates. A hybrid procedure consisting of the sequential combination of thoracoscopic surgical ablation and endocardial catheter ablation is an attractive alternative that complements the respective limitations of the epicardial and endocardial approaches. However, these two-stage approaches require longer hospitalization with increased medical costs. The timing and sequence of the hybrid procedure remain a matter of debate. Performing the whole procedure in a single session lessens the risk of repeated hospitalization and anesthesia but prolongs the procedure time. Allowing an interval of more than 2–3 months between the two stages lets the epicardial lesions heal and stabilize, which can uncover conduction gaps and support the overall efficacy of the approach. In this study, we evaluated the efficacy of an early staged hybrid procedure performed during the hospital stay after totally thoracoscopic ablation.
METHODS
Study population This study was a single-center, retrospective, observational study. Consecutive patients who underwent a totally thoracoscopic ablation procedure from February 2012 to December 2018 were included. Patients with failed PV isolation and those who received Cox–Maze surgery or LAA exclusion only were excluded. Patients who were treated with catheter ablation before totally thoracoscopic ablation were also excluded. In the early staged hybrid group, an electrophysiology study was performed during the hospital stay after surgical thoracoscopic ablation. In the stand-alone surgical ablation group, a routine electrophysiology study was not performed after totally thoracoscopic ablation, and the patients were followed up after discharge. We compared the outcomes of the early staged hybrid group versus the stand-alone surgical ablation group.
Surgical techniques Totally thoracoscopic ablation procedures were performed under general anesthesia. All procedures were performed using standard techniques as described previously.1213 The bilateral thoracoscopic approach with a video-assisted thoracoscopic surgical technique was used. This technique requires only three holes: one 10 mm port and two 5 mm ports. Starting on the right side, a 5 mm port was introduced in the fourth intercostal space at the mid-axillary line. After carbon dioxide insufflation to expand the operative field and depress the diaphragm, the remaining two ports were placed in the third intercostal space at the anterior axillary line and the sixth intercostal space at the mid-axillary line, respectively. After pericardial tenting, a lighted dissector (AtriCure Lumitip Dissector, Atricure, Inc., Cincinnati, OH, USA) was used to pass a rubber band under the PV antrum through the oblique sinus. An AtriCure Isolator Transpolar Clamp (Atricure, Inc.) was connected to the rubber band and positioned around the PV antrum. PV antrum isolation was performed by applying bipolar radiofrequency energy 6 times to the clamps around the PV antrum. Additional superior and inferior ablation lines connecting both PV isolation lines were created epicardially using a linear pen device (MLP, Atricure, Inc.). Ganglionated plexi were subsequently ablated with bipolar radiofrequency energy with the aid of high-frequency stimulation. Confirmation of the ablation lines was obtained by pacing testing using the AtriCure Cooltip pen (MAX5, Atricure, Inc.). The procedure was repeated on the left side. Before PV and ganglionated plexus ablation, the ligament of Marshall was dissected and ablated. When all ablations were complete and conduction block was confirmed, the LAA was removed with an endoscopic stapling device.
Hybrid approach A staged electrophysiology study was performed a median of 6 days after totally thoracoscopic ablation, during the same hospitalization. The electrophysiological study was performed under sedation, and detailed electroanatomical data were obtained from the CARTO mapping system (Biosense Webster, Diamond Bar, CA, USA). Single or double trans-septal access was performed with an SL1 sheath (St. Jude Medical, St. Paul, MN, USA). The circular mapping catheter and the ablation catheter were introduced into the left atrium. Bidirectional block was confirmed for both PVs with entrance block and exit block. We applied radiofrequency catheter ablation (RFCA) if a residual PV gap was present. Additional CTI ablation was performed after PVI. Other linear lesions were created at the discretion of the physician. All patients underwent catheter ablation with an open irrigated catheter (Thermocool, Biosense Webster). RF energy up to 25–35 W was used. Patients were administered anticoagulant during the procedures with intravenous heparin. The infusion was adjusted to maintain an activated coagulation time of 300–400 sec.
Outcome The primary outcome was recurrence of atrial tachyarrhythmia after a three-month blanking period. Atrial tachyarrhythmia recurrence was defined as any evidence of sustained atrial tachyarrhythmia lasting longer than 30 sec on Holter monitoring or clinical documentation on a 12-lead electrocardiogram (ECG). The secondary outcome was repeat unplanned electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence that was refractory to drug therapy or cardioversion.
Follow-up All patients were followed up at two weeks, three months, six months, and every six months thereafter. In addition, 24-hour Holter monitoring was performed at three, six, and 12 months and annually thereafter. Additional monitoring was performed when patients experienced tachyarrhythmia symptoms.
Statistical analysis Statistical analysis was performed using SPSS ver. 27.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were compared using the unpaired t-test or Wilcoxon rank-sum test, and categorical variables were compared using either the χ2 test or Fisher's exact test, as appropriate. Event rate curves were obtained by Kaplan–Meier analysis and compared using a log-rank test. A P value < 0.05 was considered significant.
Ethics statement This study was approved by the Institutional Review Board of Samsung Medical Center, South Korea (IRB No. 2020-06-159). Informed consent was waived because of the retrospective nature of the study.
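The Kaplan–Meier and log-rank comparison described in the Statistical analysis subsection was run in SPSS; as a rough, hedged equivalent only, the same comparison could be sketched in Python with the lifelines package. The file af_cohort.csv and the columns group, months_to_event, and recurred are assumptions for illustration, not the study's data.

# Hedged sketch: Kaplan-Meier curves plus a log-rank test comparing
# arrhythmia-free survival between the hybrid and TTA-only groups.
# All file and column names are hypothetical, not the study's data.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("af_cohort.csv")
hybrid = df[df["group"] == "hybrid"]
tta = df[df["group"] == "tta_only"]

kmf = KaplanMeierFitter()
for name, grp in [("Hybrid", hybrid), ("TTA only", tta)]:
    kmf.fit(grp["months_to_event"], event_observed=grp["recurred"], label=name)
    kmf.plot_survival_function()  # overlays both curves on the current axes

result = logrank_test(
    hybrid["months_to_event"], tta["months_to_event"],
    event_observed_A=hybrid["recurred"], event_observed_B=tta["recurred"],
)
print(f"log-rank P = {result.p_value:.3f}")
plt.show()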
RESULTS
Baseline characteristics The study selection process is shown in Fig. 1. Among the 408 patients initially included who received a totally thoracoscopic ablation procedure, 51 were excluded due to incomplete PVI or conversion to Cox–Maze surgery. An additional 51 patients who received RFCA before totally thoracoscopic ablation were excluded. Finally, a total of 306 patients (mean age, 56.8 ± 8.5 years; 278 [90.8%] males) were included in the study. The overall mean LA diameter was 46.2 ± 6.5 mm, and 258 (84.3%) patients showed persistent AF. Among the included patients, 81 were in the hybrid group and 225 were in the totally thoracoscopic ablation only group. The baseline characteristics are summarized in Table 1. The only significant difference between the hybrid group and the totally thoracoscopic ablation only group was sex.
TTA = totally thoracoscopic ablation, AF = atrial fibrillation, LA = left atrium, PVI = PV isolation, RFA = radiofrequency ablation.
Values are presented as mean ± standard deviation or number (%).
AF = atrial fibrillation, EF = ejection fraction, LA = left atrium, LS = long standing, TTA = totally thoracoscopic ablation.
aparoxysmal vs. persistent; bpersistent vs. LS persistent; cparoxysmal vs. LS persistent.
Outcomes The mean follow-up duration was 30.0 months (46 months in the hybrid group and 24 months in the TTA only group). The one- and two-year atrial tachyarrhythmia–free survival rates were 73.9% and 63.8% in the totally thoracoscopic ablation only group and 85.2% and 67.8% in the hybrid group, respectively. The overall arrhythmia-free survival curves showed no significant differences between the two groups (log-rank P = 0.402, Fig. 2).
TTA = totally thoracoscopic ablation.
In the hybrid group, 18 (22%) patients underwent repeat catheter ablation for recurrent atrial tachyarrhythmia refractory to antiarrhythmic agents or cardioversion, compared with 50 (22%) in the totally thoracoscopic ablation only group. There was no significant difference in the rate of repeat procedures between the two groups (log-rank P = 0.111).
Detailed characteristics of the two groups Electrophysiology study was performed at a median of 6 days after surgery in the hybrid group. Complete PV isolation (PVI) was confirmed in 72 (88.9%) patients. The patients who showed a PV gap received additional catheter ablation, and complete PVI was then identified in all patients. All but 3 patients underwent CTI ablation, and bidirectional block was confirmed. Other additional linear lesion sets, including a septal line, a perimitral line, and superior vena cava isolation, were created in 10 patients at the physician's discretion. A total of 37 patients in the hybrid group showed recurrence of atrial arrhythmia during follow-up. Among these patients, 27 showed recurrence with AF and 10 showed atypical atrial flutter (AFL).
In the totally thoracoscopic ablation only group, 86 showed recurrence of atrial tachyarrhythmia during follow-up. Among these patients, 7 showed recurrence with definite CTI-dependent typical AFL. Fifty-eight patients had recurrence of AF, and 21 showed atypical AFL.
The procedure-related complications are summarized in Table 2. Atrioesophageal fistula was observed in one case in the hybrid group. Other complications, including bradycardia, thromboembolism, pericarditis, and pleuritis, were observed in both groups.
Values are presented as number (%).
TTA = totally thoracoscopic ablation.
null
null
[ "Study population", "Surgical techniques", "Hybrid approach", "Outcome", "Follow-up", "Statistical analysis", "Ethics statement", "Baseline characteristics", "Outcomes", "Detailed characteristics of the two groups" ]
[ "This study was a single-center, retrospective, observational study. Consecutive patients who underwent totally thoracoscopic ablation procedure from February 2012 to December 2018 were included. Patients with failed PV isolation and who received Cox–Maze surgery or LAA exclusion only were excluded. Patients who were treated with catheter ablation before totally thoracoscopic ablation were excluded. The early staged hybrid approach with an electrophysiology study after surgical thoracoscopic ablation in hospital stay was performed. In stand-alone surgical ablation group, routine electrophysiology study was not performed after totally thoracoscopic ablation and the patients were followed up after discharge. We compared the outcomes of the early staged hybrid group versus the stand-alone surgical ablation group.", "Totally thoracoscopic ablation procedures were performed under general anesthesia. All procedures were performed using standard techniques as described previously.1213 The bilateral thoracoscopic approach with video-assisted thoracoscopic surgical technique was used. This technique requires only three holes for one 10 mm port and two 5 mm ports. Starting on the right side, a 5mm port was introduced in the fourth intercostal space at the mid-axillary line. After carbon dioxide insufflation to expand the operative field and depress the diaphragm, the remaining two ports were placed in the third intercostal space at the anterior axillary line and the sixth intercostal space at the mid-axillary line, respectively. After pericardial tenting, a lighted dissector (AtriCure Lumitip Dissector, Atricure, Inc., Cincinnati, OH, USA) was used to pass a rubber band under the PV antrum through the oblique sinus. An AtriCure Isolator Transpolar Clamp (Atricure, Inc.) was connected to the rubber band and positioned around the PV antrum. PV antrum isolation was performed by applying bipolar radiofrequency energy 6 times to the clamps around the PV antrum. Additional superior and inferior ablation lines connecting both PV isolation lines were created epicardially using a linear pen device (MLP, Atricure, Inc.). Ganglionated plexi subsequently were ablated with bipolar radiofrequency energy with the aid of high-frequency stimulation. Confirmation of ablation lines was obtained by pacing testing using the AtriCure Cooltip pen (MAX5, Atricure, Inc.). The procedure was repeated on the left side. Before PV and ganglionated plexus ablation, the ligament of Marshall was dissected and ablated. When all ablations were complete and conduction block was confirmed, the LAA was removed with an endoscopic stapling device.", "A staged electrophysiology study was performed after totally thoracoscopic ablation median 6 days after surgery during hospitalization. The electrophysiological study was performed under sedation, and detailed electroanatomical data were obtained from the CARTO mapping system (Biosense Webster, Diamond Bar, CA, USA). Single or double trans-septal access was performed with an SL1 sheath (St. Jude Medical, St. Paul, MN, USA). The circular mapping catheter and the ablation catheter were introduced into the left atrium. Bidirectional block was confirmed for both PVs with entrance block and exit block. We applied radiofrequency catheter ablation (RFCA) if a residual PV gap was present. Additional CTI ablation was performed after PVI. Other linear lesions were created by discretion of the physician. 
All patients underwent catheter ablation with an open irrigated catheter (Thermocool, Biosense Webster). RF energy up to 25–35 W was used. Patients were administered anticoagulant during procedures with intravenous heparin. The infusion was adjusted to maintain an activated coagulation time of 300–400 sec.", "The primary outcome was recurrence of atrial tachyarrhythmia after three months of blanking period. Atrial tachyarrhythmia recurrence was defined as any evidence of sustained atrial tachyarrhythmia for longer than 30 sec in Holter monitoring or clinical documentation of 12-lead electrocardiogram (ECG). The secondary outcome was repeat unplanned electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence that was refractory to drug therapy or cardioversion.", "All patients were followed up at two weeks, three months, six months, and every six months thereafter. In addition, 24-hour Holter monitoring was performed at three, six, and 12 months and annually thereafter. Additional monitoring was performed when patients experienced tachyarrhythmia symptoms.", "Statistical analysis was performed using SPSS ver. 27.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were compared using the unpaired t-test or Wilcoxon rank-sum test, and categorical variables were compared using either the χ2 test or Fisher's exact test, as appropriate. Events rate curves were obtained by Kaplan–Meier analysis and compared using a log-rank test. A P value < 0.05 was considered significant.", "This study was approved by the Institutional Review Board of Samsung Medical Center, South Korea (IRB No. 2020-06-159). Informed consent was waived because of the retrospective nature of the study.", "The study selection process is shown in Fig. 1. Among the initially included 408 patients who received totally thoracoscopic ablation procedure, 51 were excluded due to incomplete PVI or conversion to Cox–Maze surgery. An additional 51 patients who received RFCA before totally thoracoscopic ablation was excluded. Finally, a total of 306 patients (mean age, 56.8 ± 8.5 years; 278 [90.8%] males) was included in the study. The overall mean LA diameter was 46.2 ± 6.5 mm, and 258 (84.3%) patients showed persistent AF. Among the included patients, 81 were in the hybrid group and 225 were in the totally thoracoscopic ablation only group. The baseline characteristics are summarized in Table 1. The only significant differences between the hybrid group and the totally thoracoscopic ablation only group was sex.\nTTA = totally thoracoscopic ablation, AF = atrial fibrillation, LA = left atrium, PVI = PV isolation, RFA = radiofrequency ablation.\nValues are presented as mean ± standard deviation or number (%).\nAF = atrial fibrillation, EF = ejection fraction, LA = left atrium, LS = long standing, TTA = totally thoracoscopic ablation.\naparoxysmal vs. persistent; bpersistent vs. LS persistent; cparoxysmal vs. LS persistent.", "The mean follow-up duration was 30.0 months (46 months in hybrid group and 24 months in TTA only group). The one- and two-year atrial tachyarrhythmia–free survival rates were 73.9% and 63.8% in the totally thoracoscopic ablation only group and 85.2% and 67.8% in the hybrid group, respectively. The overall arrhythmia-free survival curves showed no significant differences between the two groups (log-rank P = 0.402, Fig. 
2)\nTTA = totally thoracoscopic ablation.\nIn the hybrid group, 18 (22%) patients underwent repeat catheter ablation for recurrent atrial tachyarrhythmia who were refractory to antiarrhythmic agents or cardioversion compared to 50 (22%) in the totally thoracoscopic ablation only group. There was no significant difference in the rate of repeat procedure between the two groups (log-rank P = 0.111).", "Electrophysiology study was performed at a median 6 days after surgery in the hybrid group. Complete PV isolation (PVI) was confirmed in 72 (88.9%) patients. The patients who showed PV gap received additional catheter ablation, and complete PVI was identified in all patients. All but 3 patients underwent CTI ablation and bidirectional block was confirmed. Other additional linear lesion sets including septal line, perimitral line, and superior vena cava isolation were created in 10 patients by physician's discretion. A total of 37 patients in the hybrid group showed recurrence of atrial arrhythmia during follow-up. Among these patients, 27 showed recurrence with AF and 10 showed atypical atrial flutter (AFL).\nIn the totally thoracoscopic ablation only group, 86 showed recurrence of atrial tachyarrhythmia during follow-up. Among these patients, 7 showed recurrence with definite CTI-dependent typical AFL. Fifty-eight patients had recurrence of AF, and 21 showed atypical AFL.\nThe procedure-related complications are summarized in Table 2. Atrioesophageal fistula was observed in one case in the hybrid group. Other complications including bradycardia, thromboembolism, pericarditis, and pleuritis were observed in both groups.\nValues are presented as number (%).\nTTA = totally thoracoscopic ablation." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study population", "Surgical techniques", "Hybrid approach", "Outcome", "Follow-up", "Statistical analysis", "Ethics statement", "RESULTS", "Baseline characteristics", "Outcomes", "Detailed characteristics of the two groups", "DISCUSSION" ]
[ "Atrial fibrillation (AF) is the most common arrhythmia and is associated with increased risk of thromboembolism and heart failure.12 Since the findings that pulmonary veins (PVs) are the trigger foci for AF, catheter ablation has been recommended for drug refractory symptomatic AF.3 Catheter ablation in patients with heart failure showed significantly better outcome, and almost 70% of these patients showed persistent AF.4 However, the efficacy of catheter ablation for persistent AF remains suboptimal due to electrical atrial remodeling, and more than 40% of patients experienced AF recurrence.56\nSurgical approaches such as Cox–Maze operation are options for this issue. However, surgical ablation is invasive and showed better outcome when performed concomitant to other cardiac surgery.789 Thoracoscopic ablation is an alternative, less invasive approach for stand-alone surgery for AF. Current guidelines recommend that thoracoscopic hybrid surgical ablation procedures should be considered in patients with failed percutaneous AF ablation.10\nHybrid AF ablation can enhance the benefits of both surgical ablation and catheter ablation. Surgical ablation targets both PVs, the left atrial (LA) posterior wall, and left atrial appendage (LAA) closure with the ability to visualize the atrium. It also can access the epicardial structures such as ganglionic plexi and ligament of Marshall.11 Catheter ablation allows more detailed electrical mapping and can target gap ablation and ablate sites out of the surgeon's view such as the cavotricuspid isthmus (CTI) or the septal region. The proper hybrid treatment should consist of a planned combination of surgical and catheter ablation. Theoretically combining the epicardial approach with endocardial ablation that includes validation of the ablation lines and modification of the residual AF substrate may increase long-term procedural success rates. A hybrid procedure consisting of the sequential combination of thoracoscopic surgical ablation and endocardial catheter ablation is an attractive alternative that complements the respective limitations of epi- and endocardial approaches. However, these two-stage approaches require longer hospitalization with increased medical costs. The timing and sequence of the hybrid procedure remains a matter of debate. Performing the whole procedure during single session lessens the risk of repeated hospitalization and anesthesia, but prolonged the duration of procedure time. Intervening the procedures more than 2–3 months interval between two stages allows the epicardial lesions to heal and stabilize, which could uncover conduction gaps and warrants the overall efficacy.\nIn this study, we evaluated the efficacy of the early staged hybrid procedure in hospital stay after totally thoracoscopic ablation.", "Study population This study was a single-center, retrospective, observational study. Consecutive patients who underwent totally thoracoscopic ablation procedure from February 2012 to December 2018 were included. Patients with failed PV isolation and who received Cox–Maze surgery or LAA exclusion only were excluded. Patients who were treated with catheter ablation before totally thoracoscopic ablation were excluded. The early staged hybrid approach with an electrophysiology study after surgical thoracoscopic ablation in hospital stay was performed. In stand-alone surgical ablation group, routine electrophysiology study was not performed after totally thoracoscopic ablation and the patients were followed up after discharge. 
We compared the outcomes of the early staged hybrid group versus the stand-alone surgical ablation group.\nThis study was a single-center, retrospective, observational study. Consecutive patients who underwent totally thoracoscopic ablation procedure from February 2012 to December 2018 were included. Patients with failed PV isolation and who received Cox–Maze surgery or LAA exclusion only were excluded. Patients who were treated with catheter ablation before totally thoracoscopic ablation were excluded. The early staged hybrid approach with an electrophysiology study after surgical thoracoscopic ablation in hospital stay was performed. In stand-alone surgical ablation group, routine electrophysiology study was not performed after totally thoracoscopic ablation and the patients were followed up after discharge. We compared the outcomes of the early staged hybrid group versus the stand-alone surgical ablation group.\nSurgical techniques Totally thoracoscopic ablation procedures were performed under general anesthesia. All procedures were performed using standard techniques as described previously.1213 The bilateral thoracoscopic approach with video-assisted thoracoscopic surgical technique was used. This technique requires only three holes for one 10 mm port and two 5 mm ports. Starting on the right side, a 5mm port was introduced in the fourth intercostal space at the mid-axillary line. After carbon dioxide insufflation to expand the operative field and depress the diaphragm, the remaining two ports were placed in the third intercostal space at the anterior axillary line and the sixth intercostal space at the mid-axillary line, respectively. After pericardial tenting, a lighted dissector (AtriCure Lumitip Dissector, Atricure, Inc., Cincinnati, OH, USA) was used to pass a rubber band under the PV antrum through the oblique sinus. An AtriCure Isolator Transpolar Clamp (Atricure, Inc.) was connected to the rubber band and positioned around the PV antrum. PV antrum isolation was performed by applying bipolar radiofrequency energy 6 times to the clamps around the PV antrum. Additional superior and inferior ablation lines connecting both PV isolation lines were created epicardially using a linear pen device (MLP, Atricure, Inc.). Ganglionated plexi subsequently were ablated with bipolar radiofrequency energy with the aid of high-frequency stimulation. Confirmation of ablation lines was obtained by pacing testing using the AtriCure Cooltip pen (MAX5, Atricure, Inc.). The procedure was repeated on the left side. Before PV and ganglionated plexus ablation, the ligament of Marshall was dissected and ablated. When all ablations were complete and conduction block was confirmed, the LAA was removed with an endoscopic stapling device.\nTotally thoracoscopic ablation procedures were performed under general anesthesia. All procedures were performed using standard techniques as described previously.1213 The bilateral thoracoscopic approach with video-assisted thoracoscopic surgical technique was used. This technique requires only three holes for one 10 mm port and two 5 mm ports. Starting on the right side, a 5mm port was introduced in the fourth intercostal space at the mid-axillary line. After carbon dioxide insufflation to expand the operative field and depress the diaphragm, the remaining two ports were placed in the third intercostal space at the anterior axillary line and the sixth intercostal space at the mid-axillary line, respectively. 
After pericardial tenting, a lighted dissector (AtriCure Lumitip Dissector, Atricure, Inc., Cincinnati, OH, USA) was used to pass a rubber band under the PV antrum through the oblique sinus. An AtriCure Isolator Transpolar Clamp (Atricure, Inc.) was connected to the rubber band and positioned around the PV antrum. PV antrum isolation was performed by applying bipolar radiofrequency energy 6 times to the clamps around the PV antrum. Additional superior and inferior ablation lines connecting both PV isolation lines were created epicardially using a linear pen device (MLP, Atricure, Inc.). Ganglionated plexi subsequently were ablated with bipolar radiofrequency energy with the aid of high-frequency stimulation. Confirmation of ablation lines was obtained by pacing testing using the AtriCure Cooltip pen (MAX5, Atricure, Inc.). The procedure was repeated on the left side. Before PV and ganglionated plexus ablation, the ligament of Marshall was dissected and ablated. When all ablations were complete and conduction block was confirmed, the LAA was removed with an endoscopic stapling device.\nHybrid approach A staged electrophysiology study was performed after totally thoracoscopic ablation median 6 days after surgery during hospitalization. The electrophysiological study was performed under sedation, and detailed electroanatomical data were obtained from the CARTO mapping system (Biosense Webster, Diamond Bar, CA, USA). Single or double trans-septal access was performed with an SL1 sheath (St. Jude Medical, St. Paul, MN, USA). The circular mapping catheter and the ablation catheter were introduced into the left atrium. Bidirectional block was confirmed for both PVs with entrance block and exit block. We applied radiofrequency catheter ablation (RFCA) if a residual PV gap was present. Additional CTI ablation was performed after PVI. Other linear lesions were created by discretion of the physician. All patients underwent catheter ablation with an open irrigated catheter (Thermocool, Biosense Webster). RF energy up to 25–35 W was used. Patients were administered anticoagulant during procedures with intravenous heparin. The infusion was adjusted to maintain an activated coagulation time of 300–400 sec.\nA staged electrophysiology study was performed after totally thoracoscopic ablation median 6 days after surgery during hospitalization. The electrophysiological study was performed under sedation, and detailed electroanatomical data were obtained from the CARTO mapping system (Biosense Webster, Diamond Bar, CA, USA). Single or double trans-septal access was performed with an SL1 sheath (St. Jude Medical, St. Paul, MN, USA). The circular mapping catheter and the ablation catheter were introduced into the left atrium. Bidirectional block was confirmed for both PVs with entrance block and exit block. We applied radiofrequency catheter ablation (RFCA) if a residual PV gap was present. Additional CTI ablation was performed after PVI. Other linear lesions were created by discretion of the physician. All patients underwent catheter ablation with an open irrigated catheter (Thermocool, Biosense Webster). RF energy up to 25–35 W was used. Patients were administered anticoagulant during procedures with intravenous heparin. The infusion was adjusted to maintain an activated coagulation time of 300–400 sec.\nOutcome The primary outcome was recurrence of atrial tachyarrhythmia after three months of blanking period. 
Atrial tachyarrhythmia recurrence was defined as any evidence of sustained atrial tachyarrhythmia for longer than 30 sec in Holter monitoring or clinical documentation of 12-lead electrocardiogram (ECG). The secondary outcome was repeat unplanned electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence that was refractory to drug therapy or cardioversion.\nThe primary outcome was recurrence of atrial tachyarrhythmia after three months of blanking period. Atrial tachyarrhythmia recurrence was defined as any evidence of sustained atrial tachyarrhythmia for longer than 30 sec in Holter monitoring or clinical documentation of 12-lead electrocardiogram (ECG). The secondary outcome was repeat unplanned electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence that was refractory to drug therapy or cardioversion.\nFollow-up All patients were followed up at two weeks, three months, six months, and every six months thereafter. In addition, 24-hour Holter monitoring was performed at three, six, and 12 months and annually thereafter. Additional monitoring was performed when patients experienced tachyarrhythmia symptoms.\nAll patients were followed up at two weeks, three months, six months, and every six months thereafter. In addition, 24-hour Holter monitoring was performed at three, six, and 12 months and annually thereafter. Additional monitoring was performed when patients experienced tachyarrhythmia symptoms.\nStatistical analysis Statistical analysis was performed using SPSS ver. 27.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were compared using the unpaired t-test or Wilcoxon rank-sum test, and categorical variables were compared using either the χ2 test or Fisher's exact test, as appropriate. Events rate curves were obtained by Kaplan–Meier analysis and compared using a log-rank test. A P value < 0.05 was considered significant.\nStatistical analysis was performed using SPSS ver. 27.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were compared using the unpaired t-test or Wilcoxon rank-sum test, and categorical variables were compared using either the χ2 test or Fisher's exact test, as appropriate. Events rate curves were obtained by Kaplan–Meier analysis and compared using a log-rank test. A P value < 0.05 was considered significant.\nEthics statement This study was approved by the Institutional Review Board of Samsung Medical Center, South Korea (IRB No. 2020-06-159). Informed consent was waived because of the retrospective nature of the study.\nThis study was approved by the Institutional Review Board of Samsung Medical Center, South Korea (IRB No. 2020-06-159). Informed consent was waived because of the retrospective nature of the study.", "This study was a single-center, retrospective, observational study. Consecutive patients who underwent totally thoracoscopic ablation procedure from February 2012 to December 2018 were included. Patients with failed PV isolation and who received Cox–Maze surgery or LAA exclusion only were excluded. Patients who were treated with catheter ablation before totally thoracoscopic ablation were excluded. The early staged hybrid approach with an electrophysiology study after surgical thoracoscopic ablation in hospital stay was performed. In stand-alone surgical ablation group, routine electrophysiology study was not performed after totally thoracoscopic ablation and the patients were followed up after discharge. 
We compared the outcomes of the early staged hybrid group versus the stand-alone surgical ablation group.", "Totally thoracoscopic ablation procedures were performed under general anesthesia. All procedures were performed using standard techniques as described previously.1213 The bilateral thoracoscopic approach with video-assisted thoracoscopic surgical technique was used. This technique requires only three holes for one 10 mm port and two 5 mm ports. Starting on the right side, a 5mm port was introduced in the fourth intercostal space at the mid-axillary line. After carbon dioxide insufflation to expand the operative field and depress the diaphragm, the remaining two ports were placed in the third intercostal space at the anterior axillary line and the sixth intercostal space at the mid-axillary line, respectively. After pericardial tenting, a lighted dissector (AtriCure Lumitip Dissector, Atricure, Inc., Cincinnati, OH, USA) was used to pass a rubber band under the PV antrum through the oblique sinus. An AtriCure Isolator Transpolar Clamp (Atricure, Inc.) was connected to the rubber band and positioned around the PV antrum. PV antrum isolation was performed by applying bipolar radiofrequency energy 6 times to the clamps around the PV antrum. Additional superior and inferior ablation lines connecting both PV isolation lines were created epicardially using a linear pen device (MLP, Atricure, Inc.). Ganglionated plexi subsequently were ablated with bipolar radiofrequency energy with the aid of high-frequency stimulation. Confirmation of ablation lines was obtained by pacing testing using the AtriCure Cooltip pen (MAX5, Atricure, Inc.). The procedure was repeated on the left side. Before PV and ganglionated plexus ablation, the ligament of Marshall was dissected and ablated. When all ablations were complete and conduction block was confirmed, the LAA was removed with an endoscopic stapling device.", "A staged electrophysiology study was performed after totally thoracoscopic ablation median 6 days after surgery during hospitalization. The electrophysiological study was performed under sedation, and detailed electroanatomical data were obtained from the CARTO mapping system (Biosense Webster, Diamond Bar, CA, USA). Single or double trans-septal access was performed with an SL1 sheath (St. Jude Medical, St. Paul, MN, USA). The circular mapping catheter and the ablation catheter were introduced into the left atrium. Bidirectional block was confirmed for both PVs with entrance block and exit block. We applied radiofrequency catheter ablation (RFCA) if a residual PV gap was present. Additional CTI ablation was performed after PVI. Other linear lesions were created by discretion of the physician. All patients underwent catheter ablation with an open irrigated catheter (Thermocool, Biosense Webster). RF energy up to 25–35 W was used. Patients were administered anticoagulant during procedures with intravenous heparin. The infusion was adjusted to maintain an activated coagulation time of 300–400 sec.", "The primary outcome was recurrence of atrial tachyarrhythmia after three months of blanking period. Atrial tachyarrhythmia recurrence was defined as any evidence of sustained atrial tachyarrhythmia for longer than 30 sec in Holter monitoring or clinical documentation of 12-lead electrocardiogram (ECG). 
The secondary outcome was repeat unplanned electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence that was refractory to drug therapy or cardioversion.", "All patients were followed up at two weeks, three months, six months, and every six months thereafter. In addition, 24-hour Holter monitoring was performed at three, six, and 12 months and annually thereafter. Additional monitoring was performed when patients experienced tachyarrhythmia symptoms.", "Statistical analysis was performed using SPSS ver. 27.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were compared using the unpaired t-test or Wilcoxon rank-sum test, and categorical variables were compared using either the χ2 test or Fisher's exact test, as appropriate. Event rate curves were obtained by Kaplan–Meier analysis and compared using a log-rank test. A P value < 0.05 was considered significant.", "This study was approved by the Institutional Review Board of Samsung Medical Center, South Korea (IRB No. 2020-06-159). Informed consent was waived because of the retrospective nature of the study.", "Baseline characteristics The study selection process is shown in Fig. 1. Among the 408 patients initially included who received a totally thoracoscopic ablation procedure, 51 were excluded due to incomplete PVI or conversion to Cox–Maze surgery. An additional 51 patients who received RFCA before totally thoracoscopic ablation were excluded. Finally, a total of 306 patients (mean age, 56.8 ± 8.5 years; 278 [90.8%] males) were included in the study. The overall mean LA diameter was 46.2 ± 6.5 mm, and 258 (84.3%) patients showed persistent AF. Among the included patients, 81 were in the hybrid group and 225 were in the totally thoracoscopic ablation only group. The baseline characteristics are summarized in Table 1. The only significant difference between the hybrid group and the totally thoracoscopic ablation only group was sex.\nTTA = totally thoracoscopic ablation, AF = atrial fibrillation, LA = left atrium, PVI = PV isolation, RFA = radiofrequency ablation.\nValues are presented as mean ± standard deviation or number (%).\nAF = atrial fibrillation, EF = ejection fraction, LA = left atrium, LS = long standing, TTA = totally thoracoscopic ablation.\naparoxysmal vs. persistent; bpersistent vs. LS persistent; cparoxysmal vs. LS persistent.\nOutcomes The mean follow-up duration was 30.0 months (46 months in the hybrid group and 24 months in the TTA only group). The one- and two-year atrial tachyarrhythmia–free survival rates were 73.9% and 63.8% in the totally thoracoscopic ablation only group and 85.2% and 67.8% in the hybrid group, respectively. The overall arrhythmia-free survival curves showed no significant differences between the two groups (log-rank P = 0.402, Fig. 2).\nTTA = totally thoracoscopic ablation.\nIn the hybrid group, 18 (22%) patients underwent repeat catheter ablation for recurrent atrial tachyarrhythmia refractory to antiarrhythmic agents or cardioversion, compared with 50 (22%) in the totally thoracoscopic ablation only group. There was no significant difference in the rate of repeat procedures between the two groups (log-rank P = 0.111).\nDetailed characteristics of the two groups Electrophysiology study was performed at a median of 6 days after surgery in the hybrid group. Complete PV isolation (PVI) was confirmed in 72 (88.9%) patients. The patients who showed a PV gap received additional catheter ablation, and complete PVI was then identified in all patients. All but 3 patients underwent CTI ablation, and bidirectional block was confirmed. Other additional linear lesion sets, including a septal line, a perimitral line, and superior vena cava isolation, were created in 10 patients at the physician's discretion. A total of 37 patients in the hybrid group showed recurrence of atrial arrhythmia during follow-up. Among these patients, 27 showed recurrence with AF and 10 showed atypical atrial flutter (AFL).\nIn the totally thoracoscopic ablation only group, 86 showed recurrence of atrial tachyarrhythmia during follow-up. Among these patients, 7 showed recurrence with definite CTI-dependent typical AFL. Fifty-eight patients had recurrence of AF, and 21 showed atypical AFL.\nThe procedure-related complications are summarized in Table 2. Atrioesophageal fistula was observed in one case in the hybrid group. Other complications, including bradycardia, thromboembolism, pericarditis, and pleuritis, were observed in both groups.\nValues are presented as number (%).\nTTA = totally thoracoscopic ablation.", "The study selection process is shown in Fig. 1. Among the 408 patients initially included who received a totally thoracoscopic ablation procedure, 51 were excluded due to incomplete PVI or conversion to Cox–Maze surgery. An additional 51 patients who received RFCA before totally thoracoscopic ablation were excluded. Finally, a total of 306 patients (mean age, 56.8 ± 8.5 years; 278 [90.8%] males) were included in the study. The overall mean LA diameter was 46.2 ± 6.5 mm, and 258 (84.3%) patients showed persistent AF. Among the included patients, 81 were in the hybrid group and 225 were in the totally thoracoscopic ablation only group. The baseline characteristics are summarized in Table 1. The only significant difference between the hybrid group and the totally thoracoscopic ablation only group was sex.\nTTA = totally thoracoscopic ablation, AF = atrial fibrillation, LA = left atrium, PVI = PV isolation, RFA = radiofrequency ablation.\nValues are presented as mean ± standard deviation or number (%).\nAF = atrial fibrillation, EF = ejection fraction, LA = left atrium, LS = long standing, TTA = totally thoracoscopic ablation.\naparoxysmal vs. persistent; bpersistent vs. LS persistent; cparoxysmal vs. LS persistent.", "The mean follow-up duration was 30.0 months (46 months in the hybrid group and 24 months in the TTA only group). The one- and two-year atrial tachyarrhythmia–free survival rates were 73.9% and 63.8% in the totally thoracoscopic ablation only group and 85.2% and 67.8% in the hybrid group, respectively. The overall arrhythmia-free survival curves showed no significant differences between the two groups (log-rank P = 0.402, Fig. 2).\nTTA = totally thoracoscopic ablation.\nIn the hybrid group, 18 (22%) patients underwent repeat catheter ablation for recurrent atrial tachyarrhythmia refractory to antiarrhythmic agents or cardioversion, compared with 50 (22%) in the totally thoracoscopic ablation only group. There was no significant difference in the rate of repeat procedures between the two groups (log-rank P = 0.111).", "Electrophysiology study was performed at a median of 6 days after surgery in the hybrid group. Complete PV isolation (PVI) was confirmed in 72 (88.9%) patients. The patients who showed a PV gap received additional catheter ablation, and complete PVI was then identified in all patients. All but 3 patients underwent CTI ablation, and bidirectional block was confirmed. Other additional linear lesion sets, including a septal line, a perimitral line, and superior vena cava isolation, were created in 10 patients at the physician's discretion. A total of 37 patients in the hybrid group showed recurrence of atrial arrhythmia during follow-up. Among these patients, 27 showed recurrence with AF and 10 showed atypical atrial flutter (AFL).\nIn the totally thoracoscopic ablation only group, 86 showed recurrence of atrial tachyarrhythmia during follow-up. Among these patients, 7 showed recurrence with definite CTI-dependent typical AFL. Fifty-eight patients had recurrence of AF, and 21 showed atypical AFL.\nThe procedure-related complications are summarized in Table 2. Atrioesophageal fistula was observed in one case in the hybrid group. Other complications, including bradycardia, thromboembolism, pericarditis, and pleuritis, were observed in both groups.\nValues are presented as number (%).\nTTA = totally thoracoscopic ablation.", "This is the largest study evaluating the efficacy of the hybrid approach for treatment of AF. The main finding was that more than 70% of patients maintained sinus rhythm at one year of follow-up, and there was no significant difference between the early staged hybrid group and the totally thoracoscopic ablation only group. Furthermore, the rate of repeat catheter ablation procedures showed no significant difference between the two groups.\nTotally thoracoscopic ablation has the advantages of hybrid cardiac surgery. It is a video-assisted, minimally invasive surgery and does not require median sternotomy or a cardiac pump.14 One weakness of surgical ablation of AF is that it is difficult to ablate inaccessible cardiac structures that do not face the epicardial side, such as the CTI or the interatrial septal region. Therefore, hybrid AF ablation has an advantage for improved ablation of these inaccessible structures and confirmation of electrical block using a detailed mapping technique. Furthermore, confident transmural lesions can be achieved through additional touch-up ablation. In our cohort, almost 90% of patients showed complete PVI immediately after totally thoracoscopic surgical ablation. In addition, only 7 of the 225 patients showed recurrence with definite CTI-dependent flutter. In this sense, a routine second stage just after surgery is not cost-effective and forces patients into a longer hospital stay.\nWhen AF progresses, atrial fibrosis and atriopathy can develop. In these cases, it is effective to identify scar and border areas through additional voltage mapping, and this often requires additional endocardial ablation.1516 Notably, detailed mapping and additional ablation are not possible through totally thoracoscopic ablation alone. However, it was not effective to perform the hybrid procedure in all patients, and we confirmed that a routine electrophysiology study during the hospital stay did not show additional benefit. In this study, the frequencies of unplanned additional electrophysiology study and catheter ablation showed no significant differences between the two groups. Also, routine hybrid procedures during the hospitalization after totally thoracoscopic ablation, performed to carry out prophylactic CTI ablation and confirm complete PVI, did not show additional benefits. Additional electrophysiology study and catheter ablation are required only in patients with recurrence of atrial tachyarrhythmia during follow-up after totally thoracoscopic ablation. This demonstrates that deferring the second-stage electrophysiology study until arrhythmia recurrence is sufficient and cost-effective.\nThis is a single-center, retrospective cohort study. 
Physician performance depends on experience and skills, and these technical approaches should be performed at experienced centers. In addition, the routine hybrid approach was the initial strategy of our hospital. Therefore, there might be differences in surgical outcomes between the two groups related to a learning curve. There also was a difference in follow-up duration. All patients were followed up with 24-hour Holter monitor or ECG instead of implantable loop recorder or longer duration Holter monitoring. However, both groups were followed up with same strategy. Therefore, this might be not affected to the difference between two groups. Finally, patients with unilateral PV isolation during surgical ablation were purposely excluded from the study. It could be creating a bias in favor of surgical ablation. The further larger-scale prospective studies would be needed.\nThe early staged hybrid procedure after totally thoracoscopic ablation could not improve the outcome of recurrence of atrial tachyarrhythmia. The second stage of electrophysiology study could be deferred to patients with recurrence of atrial tachyarrhythmia during follow-up after totally thoracoscopic ablation." ]
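The Kaplan–Meier and log-rank analysis named in the statistical methods above can be illustrated with a minimal sketch. The study itself used SPSS 27.0, so this is a re-expression only; the `lifelines` package, the column names, and the example durations below are assumptions for illustration, not study data.

```python
# Minimal sketch of the event-rate-curve comparison described above.
# Assumption: a per-patient table of months to recurrence (or censoring),
# an event flag, and a treatment group; all values here are made up.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months":   [12, 30, 24, 46, 40, 6, 18, 36, 24, 20],
    "recurred": [1, 0, 1, 0, 0, 1, 0, 0, 1, 1],
    "group":    ["hybrid"] * 5 + ["tta"] * 5,
})
hybrid, tta = df[df.group == "hybrid"], df[df.group == "tta"]

# Kaplan-Meier estimate of atrial tachyarrhythmia-free survival per group.
km = KaplanMeierFitter()
km.fit(hybrid["months"], event_observed=hybrid["recurred"], label="hybrid")
print(km.survival_function_)

# Log-rank test between the two curves; P < 0.05 taken as significant.
res = logrank_test(hybrid["months"], tta["months"],
                   event_observed_A=hybrid["recurred"],
                   event_observed_B=tta["recurred"])
print(f"log-rank P = {res.p_value:.3f}")
```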
[ "intro", "methods", null, null, null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Atrial Fibrillation", "Catheter Ablation", "Totally Thoracoscopic Ablation", "Hybrid", "Electrophysiology Study" ]
INTRODUCTION: Atrial fibrillation (AF) is the most common arrhythmia and is associated with increased risk of thromboembolism and heart failure.12 Since the discovery that pulmonary veins (PVs) are the trigger foci for AF, catheter ablation has been recommended for drug-refractory symptomatic AF.3 Catheter ablation in patients with heart failure showed significantly better outcomes, and almost 70% of these patients had persistent AF.4 However, the efficacy of catheter ablation for persistent AF remains suboptimal due to electrical atrial remodeling, and more than 40% of patients experienced AF recurrence.56 Surgical approaches such as the Cox–Maze operation are options for this issue. However, surgical ablation is invasive and has shown better outcomes when performed concomitantly with other cardiac surgery.789 Thoracoscopic ablation is an alternative, less invasive approach for stand-alone surgery for AF. Current guidelines recommend that thoracoscopic hybrid surgical ablation procedures should be considered in patients with failed percutaneous AF ablation.10 Hybrid AF ablation can combine the benefits of both surgical ablation and catheter ablation. Surgical ablation targets both PVs and the left atrial (LA) posterior wall and allows left atrial appendage (LAA) closure under direct visualization of the atrium. It also can access epicardial structures such as the ganglionated plexi and the ligament of Marshall.11 Catheter ablation allows more detailed electrical mapping and can target gap ablation and ablate sites out of the surgeon's view, such as the cavotricuspid isthmus (CTI) or the septal region. A proper hybrid treatment should consist of a planned combination of surgical and catheter ablation. Theoretically, combining the epicardial approach with endocardial ablation that includes validation of the ablation lines and modification of the residual AF substrate may increase long-term procedural success rates. A hybrid procedure consisting of the sequential combination of thoracoscopic surgical ablation and endocardial catheter ablation is an attractive alternative that complements the respective limitations of the epi- and endocardial approaches. However, these two-stage approaches require longer hospitalization with increased medical costs. The timing and sequence of the hybrid procedure remain a matter of debate. Performing the whole procedure in a single session lessens the risk of repeated hospitalization and anesthesia but prolongs the procedure time. Leaving an interval of more than 2–3 months between the two stages allows the epicardial lesions to heal and stabilize, which can uncover conduction gaps and supports the overall efficacy. In this study, we evaluated the efficacy of the early staged hybrid procedure performed during the hospital stay after totally thoracoscopic ablation. METHODS: Study population: This study was a single-center, retrospective, observational study. Consecutive patients who underwent the totally thoracoscopic ablation procedure from February 2012 to December 2018 were included. Patients with failed PV isolation and those who received Cox–Maze surgery or LAA exclusion only were excluded. Patients who were treated with catheter ablation before totally thoracoscopic ablation were excluded. The early staged hybrid approach, with an electrophysiology study after surgical thoracoscopic ablation during the hospital stay, was performed. In the stand-alone surgical ablation group, a routine electrophysiology study was not performed after totally thoracoscopic ablation, and the patients were followed up after discharge. We compared the outcomes of the early staged hybrid group versus the stand-alone surgical ablation group. Surgical techniques: Totally thoracoscopic ablation procedures were performed under general anesthesia. All procedures were performed using standard techniques as described previously.1213 The bilateral thoracoscopic approach with a video-assisted thoracoscopic surgical technique was used. This technique requires only three holes for one 10 mm port and two 5 mm ports. Starting on the right side, a 5 mm port was introduced in the fourth intercostal space at the mid-axillary line. After carbon dioxide insufflation to expand the operative field and depress the diaphragm, the remaining two ports were placed in the third intercostal space at the anterior axillary line and the sixth intercostal space at the mid-axillary line, respectively. After pericardial tenting, a lighted dissector (AtriCure Lumitip Dissector, Atricure, Inc., Cincinnati, OH, USA) was used to pass a rubber band under the PV antrum through the oblique sinus. An AtriCure Isolator Transpolar Clamp (Atricure, Inc.) was connected to the rubber band and positioned around the PV antrum. PV antrum isolation was performed by applying bipolar radiofrequency energy 6 times to the clamps around the PV antrum. Additional superior and inferior ablation lines connecting both PV isolation lines were created epicardially using a linear pen device (MLP, Atricure, Inc.). Ganglionated plexi subsequently were ablated with bipolar radiofrequency energy with the aid of high-frequency stimulation. Confirmation of ablation lines was obtained by pacing testing using the AtriCure Cooltip pen (MAX5, Atricure, Inc.). The procedure was repeated on the left side. Before PV and ganglionated plexus ablation, the ligament of Marshall was dissected and ablated. When all ablations were complete and conduction block was confirmed, the LAA was removed with an endoscopic stapling device. Hybrid approach: A staged electrophysiology study was performed a median of 6 days after totally thoracoscopic ablation, during hospitalization. The electrophysiological study was performed under sedation, and detailed electroanatomical data were obtained from the CARTO mapping system (Biosense Webster, Diamond Bar, CA, USA). Single or double trans-septal access was performed with an SL1 sheath (St. Jude Medical, St. Paul, MN, USA). The circular mapping catheter and the ablation catheter were introduced into the left atrium. Bidirectional block was confirmed for both PVs with entrance block and exit block. We applied radiofrequency catheter ablation (RFCA) if a residual PV gap was present. Additional CTI ablation was performed after PVI. Other linear lesions were created at the discretion of the physician. All patients underwent catheter ablation with an open irrigated catheter (Thermocool, Biosense Webster). RF energy up to 25–35 W was used. Patients received anticoagulation with intravenous heparin during procedures. The infusion was adjusted to maintain an activated coagulation time of 300–400 sec. Outcome: The primary outcome was recurrence of atrial tachyarrhythmia after a three-month blanking period. Atrial tachyarrhythmia recurrence was defined as any evidence of sustained atrial tachyarrhythmia lasting longer than 30 sec on Holter monitoring or clinically documented on a 12-lead electrocardiogram (ECG). The secondary outcome was repeat unplanned electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence that was refractory to drug therapy or cardioversion. Follow-up: All patients were followed up at two weeks, three months, six months, and every six months thereafter. In addition, 24-hour Holter monitoring was performed at three, six, and 12 months and annually thereafter. Additional monitoring was performed when patients experienced tachyarrhythmia symptoms. Statistical analysis: Statistical analysis was performed using SPSS ver. 27.0 software (SPSS Inc., Chicago, IL, USA). Continuous variables were compared using the unpaired t-test or Wilcoxon rank-sum test, and categorical variables were compared using either the χ2 test or Fisher's exact test, as appropriate. Event rate curves were obtained by Kaplan–Meier analysis and compared using a log-rank test. A P value < 0.05 was considered significant. Ethics statement: This study was approved by the Institutional Review Board of Samsung Medical Center, South Korea (IRB No. 2020-06-159). Informed consent was waived because of the retrospective nature of the study. RESULTS: Baseline characteristics: The study selection process is shown in Fig. 1. Among the 408 patients initially included who underwent the totally thoracoscopic ablation procedure, 51 were excluded due to incomplete PVI or conversion to Cox–Maze surgery. An additional 51 patients who received RFCA before totally thoracoscopic ablation were excluded. Finally, a total of 306 patients (mean age, 56.8 ± 8.5 years; 278 [90.8%] males) were included in the study. The overall mean LA diameter was 46.2 ± 6.5 mm, and 258 (84.3%) patients had persistent AF. Among the included patients, 81 were in the hybrid group and 225 were in the totally thoracoscopic ablation only group. The baseline characteristics are summarized in Table 1. The only significant difference between the hybrid group and the totally thoracoscopic ablation only group was sex. TTA = totally thoracoscopic ablation, AF = atrial fibrillation, LA = left atrium, PVI = PV isolation, RFA = radiofrequency ablation. Values are presented as mean ± standard deviation or number (%). AF = atrial fibrillation, EF = ejection fraction, LA = left atrium, LS = long standing, TTA = totally thoracoscopic ablation. aparoxysmal vs. persistent; bpersistent vs. LS persistent; cparoxysmal vs. LS persistent. Outcomes: The mean follow-up duration was 30.0 months (46 months in the hybrid group and 24 months in the TTA only group). The one- and two-year atrial tachyarrhythmia–free survival rates were 73.9% and 63.8% in the totally thoracoscopic ablation only group and 85.2% and 67.8% in the hybrid group, respectively. The overall arrhythmia-free survival curves showed no significant difference between the two groups (log-rank P = 0.402, Fig. 2). TTA = totally thoracoscopic ablation. In the hybrid group, 18 (22%) patients underwent repeat catheter ablation for recurrent atrial tachyarrhythmia refractory to antiarrhythmic agents or cardioversion, compared with 50 (22%) in the totally thoracoscopic ablation only group. There was no significant difference in the rate of repeat procedures between the two groups (log-rank P = 0.111). Detailed characteristics of the two groups: Electrophysiology study was performed at a median of 6 days after surgery in the hybrid group. Complete PV isolation (PVI) was confirmed in 72 (88.9%) patients. The patients who showed a PV gap received additional catheter ablation, and complete PVI was achieved in all patients. All but 3 patients underwent CTI ablation, and bidirectional block was confirmed. Other additional linear lesion sets, including a septal line, a perimitral line, and superior vena cava isolation, were created in 10 patients at the physician's discretion. A total of 37 patients in the hybrid group showed recurrence of atrial arrhythmia during follow-up. Among these patients, 27 showed recurrence with AF and 10 showed atypical atrial flutter (AFL). In the totally thoracoscopic ablation only group, 86 showed recurrence of atrial tachyarrhythmia during follow-up. Among these patients, 7 showed recurrence with definite CTI-dependent typical AFL. Fifty-eight patients had recurrence of AF, and 21 showed atypical AFL. The procedure-related complications are summarized in Table 2. Atrioesophageal fistula was observed in one case in the hybrid group. Other complications, including bradycardia, thromboembolism, pericarditis, and pleuritis, were observed in both groups. Values are presented as number (%). TTA = totally thoracoscopic ablation. DISCUSSION: This is the largest study to date evaluating the efficacy of the hybrid approach for the treatment of AF. The main finding was that more than 70% of patients maintained sinus rhythm at one year of follow-up, with no significant difference between the early staged hybrid group and the totally thoracoscopic ablation only group. Furthermore, the rate of repeat catheter ablation procedures showed no significant difference between the two groups. Totally thoracoscopic ablation has advantages as the surgical component of hybrid treatment. It is a video-assisted, minimally invasive surgery and does not require median sternotomy or cardiopulmonary bypass.14 One weakness of surgical ablation of AF is that it is difficult to ablate cardiac structures that are not accessible from the epicardial side, such as the CTI or the interatrial septal region. Hybrid AF ablation therefore has the advantage of improved ablation of these inaccessible structures and confirmation of electrical block using a detailed mapping technique. Furthermore, reliable transmural lesions can be achieved through additional touch-up ablation. In our cohort, almost 90% of patients showed complete PVI immediately after totally thoracoscopic surgical ablation, and only 7 of the 225 patients showed recurrence with definite CTI-dependent flutter. In this sense, a routine second stage performed just after surgery is not cost-effective and forces patients into a longer hospital stay. When AF progresses, atrial fibrosis and atriopathy can develop. In these cases, it is effective to identify scar and border areas through additional voltage mapping, and this often requires additional endocardial ablation.1516 Notably, detailed mapping and additional ablation are not possible through totally thoracoscopic ablation alone. However, performing the hybrid procedure in all patients was not effective, and we confirmed that a routine electrophysiology study during the hospital stay did not provide additional benefit. In this study, the frequencies of unplanned additional electrophysiology study and catheter ablation showed no significant difference between the two groups. Likewise, routine hybrid procedures during the hospitalization after totally thoracoscopic ablation, performed to apply prophylactic CTI ablation and to confirm complete PVI, did not provide additional benefit. Additional electrophysiology study and catheter ablation are required only in patients with recurrence of atrial tachyarrhythmia during follow-up after totally thoracoscopic ablation. This suggests that deferring the second-stage electrophysiology study until arrhythmia recurrence is sufficient and cost-effective. This is a single-center, retrospective cohort study. Physician performance depends on experience and skill, and these technical approaches should be performed at experienced centers. In addition, the routine hybrid approach was the initial strategy of our hospital; therefore, there might be differences in surgical outcomes between the two groups related to a learning curve. There also was a difference in follow-up duration between the groups. All patients were followed up with 24-hour Holter monitoring or ECG rather than an implantable loop recorder or longer-duration Holter monitoring; however, both groups were followed with the same strategy, so this is unlikely to have affected the comparison between the groups. Finally, patients with unilateral PV isolation during surgical ablation were purposely excluded from the study, which could create a bias in favor of surgical ablation. Larger prospective studies are needed. The early staged hybrid procedure after totally thoracoscopic ablation did not improve the outcome of recurrence of atrial tachyarrhythmia. The second-stage electrophysiology study can be deferred until recurrence of atrial tachyarrhythmia during follow-up after totally thoracoscopic ablation.
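The baseline comparisons named in the statistical analysis subsection (unpaired t-test or Wilcoxon rank-sum for continuous variables; χ2 or Fisher's exact for categorical variables) map directly onto standard SciPy routines. A hedged sketch with illustrative numbers follows; the authors used SPSS, so this is only a re-expression, and every value below is invented.

```python
# Illustrative re-expression of the baseline statistical tests; the
# numbers are made up and are not the study's data.
from scipy import stats

# Continuous variable (e.g., LA diameter in mm) in the two groups.
hybrid_la = [45.0, 47.2, 44.1, 48.3, 46.5, 47.0]
tta_la = [46.8, 45.9, 47.7, 44.2, 46.0, 45.5]
t_stat, p_t = stats.ttest_ind(hybrid_la, tta_la)      # unpaired t-test
u_stat, p_u = stats.mannwhitneyu(hybrid_la, tta_la)   # Wilcoxon rank-sum

# Categorical variable (e.g., sex): rows = group, cols = male/female.
table = [[75, 6], [203, 22]]                          # hypothetical counts
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)            # for sparse tables

print(f"t-test P={p_t:.3f}, rank-sum P={p_u:.3f}, "
      f"chi-square P={p_chi2:.3f}, Fisher P={p_fisher:.3f}")
```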
Background: The efficacy of catheter ablation for persistent atrial fibrillation (AF) remains suboptimal. A hybrid approach of catheter ablation combined with totally thoracoscopic surgical ablation may improve outcomes. In this study, we evaluated the efficacy of the early staged hybrid procedure performed during the hospital stay after totally thoracoscopic ablation compared to stand-alone totally thoracoscopic ablation. Methods: Patients who underwent totally thoracoscopic ablation from February 2012 to December 2018 were included in this study. We compared the outcomes of the totally thoracoscopic ablation only group versus the early staged hybrid procedure group. The primary outcome was recurrence of atrial tachyarrhythmia after a three-month blanking period. The secondary outcome was repeat unplanned additional electrophysiology study and catheter ablation due to atrial tachyarrhythmia recurrence. Results: A total of 306 patients (mean age, 56.8 ± 8.5 years; 278 [90.8%] males) were included in the study, with 81 patients in the early staged hybrid group and 225 patients in the stand-alone totally thoracoscopic ablation only group. The mean follow-up duration was 30.0 months. Overall arrhythmia-free survival showed no significant difference between the two groups (log-rank P = 0.402). There was no significant difference in the rate of repeat procedures between the two groups (log-rank P = 0.11). Conclusions: The early staged hybrid procedure after thoracoscopic ablation did not improve the outcome of recurrence of atrial tachyarrhythmia. The second-stage electrophysiology study can be deferred until recurrence of atrial tachyarrhythmia during follow-up after totally thoracoscopic ablation.
null
null
5,744
295
[ 128, 313, 189, 72, 53, 85, 40, 237, 163, 238 ]
14
[ "ablation", "patients", "thoracoscopic", "thoracoscopic ablation", "totally thoracoscopic", "totally", "totally thoracoscopic ablation", "group", "study", "hybrid" ]
[ "thoracoscopic ablation patients", "efficacy catheter ablation", "thoracoscopic ablation hybrid", "af catheter ablation", "catheter ablation atrial" ]
null
null
[CONTENT] Atrial Fibrillation | Catheter Ablation | Totally Thoracoscopic Ablation | Hybrid | Electrophysiology Study [SUMMARY]
[CONTENT] Atrial Fibrillation | Catheter Ablation | Totally Thoracoscopic Ablation | Hybrid | Electrophysiology Study [SUMMARY]
[CONTENT] Atrial Fibrillation | Catheter Ablation | Totally Thoracoscopic Ablation | Hybrid | Electrophysiology Study [SUMMARY]
null
[CONTENT] Atrial Fibrillation | Catheter Ablation | Totally Thoracoscopic Ablation | Hybrid | Electrophysiology Study [SUMMARY]
null
[CONTENT] Aged | Atrial Fibrillation | Disease-Free Survival | Electrophysiological Phenomena | Female | Follow-Up Studies | Humans | Male | Middle Aged | Recurrence | Severity of Illness Index | Tachycardia | Thoracoscopy | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Atrial Fibrillation | Disease-Free Survival | Electrophysiological Phenomena | Female | Follow-Up Studies | Humans | Male | Middle Aged | Recurrence | Severity of Illness Index | Tachycardia | Thoracoscopy | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Atrial Fibrillation | Disease-Free Survival | Electrophysiological Phenomena | Female | Follow-Up Studies | Humans | Male | Middle Aged | Recurrence | Severity of Illness Index | Tachycardia | Thoracoscopy | Treatment Outcome [SUMMARY]
null
[CONTENT] Aged | Atrial Fibrillation | Disease-Free Survival | Electrophysiological Phenomena | Female | Follow-Up Studies | Humans | Male | Middle Aged | Recurrence | Severity of Illness Index | Tachycardia | Thoracoscopy | Treatment Outcome [SUMMARY]
null
[CONTENT] thoracoscopic ablation patients | efficacy catheter ablation | thoracoscopic ablation hybrid | af catheter ablation | catheter ablation atrial [SUMMARY]
[CONTENT] thoracoscopic ablation patients | efficacy catheter ablation | thoracoscopic ablation hybrid | af catheter ablation | catheter ablation atrial [SUMMARY]
[CONTENT] thoracoscopic ablation patients | efficacy catheter ablation | thoracoscopic ablation hybrid | af catheter ablation | catheter ablation atrial [SUMMARY]
null
[CONTENT] thoracoscopic ablation patients | efficacy catheter ablation | thoracoscopic ablation hybrid | af catheter ablation | catheter ablation atrial [SUMMARY]
null
[CONTENT] ablation | patients | thoracoscopic | thoracoscopic ablation | totally thoracoscopic | totally | totally thoracoscopic ablation | group | study | hybrid [SUMMARY]
[CONTENT] ablation | patients | thoracoscopic | thoracoscopic ablation | totally thoracoscopic | totally | totally thoracoscopic ablation | group | study | hybrid [SUMMARY]
[CONTENT] ablation | patients | thoracoscopic | thoracoscopic ablation | totally thoracoscopic | totally | totally thoracoscopic ablation | group | study | hybrid [SUMMARY]
null
[CONTENT] ablation | patients | thoracoscopic | thoracoscopic ablation | totally thoracoscopic | totally | totally thoracoscopic ablation | group | study | hybrid [SUMMARY]
null
[CONTENT] ablation | af | surgical | surgical ablation | catheter ablation | catheter | hybrid | epicardial | endocardial | hybrid procedure [SUMMARY]
[CONTENT] ablation | atricure | performed | test | study | pv | inc | thoracoscopic | patients | antrum [SUMMARY]
[CONTENT] group | patients | ablation | showed | totally thoracoscopic ablation | thoracoscopic ablation | thoracoscopic | totally | totally thoracoscopic | hybrid group [SUMMARY]
null
[CONTENT] ablation | patients | thoracoscopic | group | thoracoscopic ablation | totally | totally thoracoscopic | totally thoracoscopic ablation | study | atrial [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] February 2012 to December 2018 ||| ||| three months ||| [SUMMARY]
[CONTENT] 306 | 56.8 | 8.5 years | 278 | 90.8% | 81 | 225 ||| 30.0 months ||| two | 0.402 ||| two | 0.11 [SUMMARY]
null
[CONTENT] ||| ||| ||| February 2012 to December 2018 ||| ||| three months ||| ||| ||| 306 | 56.8 | 8.5 years | 278 | 90.8% | 81 | 225 ||| 30.0 months ||| two | 0.402 ||| two | 0.11 ||| ||| second [SUMMARY]
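The `annotated_*_prompt` fields above all share a `[CONTENT] term | term | ... [SUMMARY]` shape, with the guidance terms drawn from the author keywords, MeSH descriptors, KeyBERT topics, or most-frequent words, and with `|||`-separated slots in the masked variants. A small sketch of assembling such a prompt follows; the template is inferred from the visible examples, not taken from the dataset builders' code.

```python
# Hedged sketch of the prompt template inferred from the fields above;
# the dataset's real construction code is not available in this excerpt.
def build_prompt(terms: list[str]) -> str:
    """Join guidance terms with ' | ' inside a [CONTENT]/[SUMMARY] frame."""
    return f"[CONTENT] {' | '.join(terms)} [SUMMARY]"

keywords = ["Atrial Fibrillation", "Catheter Ablation",
            "Totally Thoracoscopic Ablation", "Hybrid",
            "Electrophysiology Study"]
print(build_prompt(keywords))
# [CONTENT] Atrial Fibrillation | Catheter Ablation | ... [SUMMARY]
```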
null
Effectiveness of combined microfocused ultrasound with visualization and subdermal calcium hydroxyapatite injections for the management of brachial skin laxity.
34716645
There is no publication to date on the combined use of microfocused ultrasound with visualization (MFU-V) and calcium hydroxylapatite (CaHA) for brachial skin laxity.
BACKGROUND
Female subjects who had skin laxity in the brachial regions and who desired non-surgical intervention were enrolled into this prospective, single-arm pilot study. MFU-V (Ultherapy®, Merz North America, Inc., Raleigh, N.C.) was applied using the 4.0 MHz-4.5 mm and 7.0 MHz-3.0 mm depth transducers, followed by subdermal injections of diluted (1:1)/hyperdiluted (1:2) CaHA (Radiesse®, Merz North America, Inc.). Subjects were followed for six months after treatment. Objective biophysical skin assessments were conducted using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany). Subjective assessments included the arm visual analogue scale (VAS), global aesthetic improvement scale (GAIS), and subject global satisfaction scale.
SUBJECTS/METHODS
Twelve subjects participated in the study. The mean R0 reading (measure of skin firmness) progressively improved from 0.515 mm at baseline to 0.433 mm at 24 weeks (p < 0.05 for 12 and 24 weeks). The mean R2 reading (measure of skin elasticity) and mean arm VAS improved significantly from baseline at all visits (p < 0.05 for all). The majority of subjects at each visit showed improved arm appearance and were satisfied with their treatment. Both procedures were well-tolerated.
RESULTS
Combined use of MFU-V with diluted/hyperdiluted CaHA demonstrates significant improvements in both objective and subjective measures of brachial skin laxity.
CONCLUSIONS
[ "Cosmetic Techniques", "Durapatite", "Female", "Humans", "Patient Satisfaction", "Pilot Projects", "Prospective Studies", "Skin Aging", "Treatment Outcome", "Ultrasonic Therapy", "Ultrasonography" ]
9297859
INTRODUCTION
There is a paradigm shift for body contouring treatments. 1 Individuals are paying more attention to concerns in visible body areas, including the arms, where sagging and laxity in the brachial area may present as “batwing appearance,” 2 and along with this is a higher desire for non‐invasive procedures. 3 Indeed, data from the American Society for Dermatologic Surgery (ASDS) suggest that the number of invasive and non‐invasive body contouring procedures performed has increased by fourfold between 2012 and 2018. 4 Brachial skin laxity, in particular, can negatively impact an individual's quality of life. 5 The condition can become more apparent as collagen and elastin levels decline with aging and weight loss, which cause the skin to lose firmness and elasticity. 2 , 6 The demand for improvement in brachial skin laxity is evidenced by the more than 50‐fold increase in “upper arm lifts” or brachioplasty from 2000 to 2019 as reported in the Plastic Surgery Statistics Report by American Society of Plastic Surgeons (ASPS). 3 Although surgical procedures can improve the appearance of the brachial area, they can be associated with significant complications and extensive scarring often requiring revision surgery. 7 Indeed, there has been a rise in demand for non‐invasive procedures for the improvement of skin laxity. 1 Microfocused ultrasound with visualization (MFU‐V; Ultherapy®, Merz North America, Inc. Raleigh, N.C.) and calcium hydroxylapatite (CaHA; Radiesse®, Merz North America, Inc) injections are examples of non‐invasive procedures that stimulate collagen and elastin production. Independently, these have been shown to promote dermal remodeling, resulting in lifting and tightening of lax skin primarily for concerns on the face and neck, but there are limited data to suggest that these procedures may have potential applications for use in other body sites. 8 , 9 , 10 , 11 MFU‐V generates focused ultrasound waves that are converted to heat, creating discrete thermal coagulation points at precise depths beneath the skin's surface resulting in collagen denaturation and subsequent new collagen production. 12 , 13 A small number of studies has demonstrated improvement of brachial laxity with MFU‐V. 8 , 9 These are limited by the predominant use of subjective assessments to evaluate clinical improvements without including objective measurements. An alternative option has been the use of CaHA as a biostimulatory agent. 14 Treatment with subdermal injections of diluted or hyperdiluted CaHA has been shown to stimulate neocollagenesis, elastogenesis, and neoangiogenesis, with corresponding increase in dermal thickness and elasticity. 15 To date, a handful of case studies has reported the use of CaHA injections to improve skin laxity in the brachial region and other areas. 10 , 11 In an effort to pursue further improvement in skin tightening especially in body sites, recent studies have examined the effectiveness of combining both MFU‐V and CaHA treatments for addressing skin laxity in the neck and buttocks. 16 , 17 Given that each of these modalities has been shown to improve collagen stores, it is plausible that a combination of both procedures could produce synergistic effects. 17 , 18 There is, however, no publication to date on the combined use of both modalities in a single treatment session in the brachial region. 
Given the increasing demand for arm skin tightening procedures, particularly with non‐invasive treatments, this prospective pilot study aimed to assess the effectiveness of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improving brachial skin laxity.
null
null
RESULTS
Subject demographics and baseline characteristics Twelve female subjects participated in and completed the study. Although follow‐up data were available for all 12 subjects at 24 weeks, these were available for 11 and 8 subjects at 4 weeks and 12 weeks, respectively, as these visits fell within the period of COVID‐19–related movement restrictions in Singapore. Subject demographics and baseline characteristics are summarized in Table 1. The mean age was 51.3 (SD 7.3) years. Three subjects (25%) were Asian and the remaining nine (75%) were Caucasian. The mean BMI was 23.3 (SD 3.1) kg/m2, five subjects (42%) had BMI 20–23 kg/m2, three (25%) had BMI >23 to 25 kg/m2, another three (25%) had BMI >25 kg/m2, and one (8%) had BMI <20 kg/m2. At baseline, the mean arm VAS score (average of both arms) was 2.9 (SD 0.6). All arms (24, 100%) had a VAS score of 2 or more, indicating at least mild brachial laxity (VAS score 2: 3 arms, 13%; VAS score >2–3: 16 arms, 67%; VAS score >3–4: 4 arms, 17%; and VAS score >4: one arm, 4%). Subject demographics and baseline characteristics. Data presented are mean (SD) unless otherwise stated. Arm VAS was reported based on the average score from both reviewers. Abbreviations: BMI, Body mass index; SD, Standard deviation; VAS, Visual analogue scale. aAverage of both arms bPercentage calculated out of a total of 24 arms. Twelve female subjects participated in and completed the study. Although follow‐up data were available for all 12 subjects at 24 weeks, these were available for 11 and 8 subjects at 4 weeks and 12 weeks, respectively, as these visits fell within the period of COVID‐19–related movement restrictions in Singapore. Subject demographics and baseline characteristics are summarized in Table 1. The mean age was 51.3 (SD 7.3) years. Three subjects (25%) were Asian and the remaining nine (75%) were Caucasian. The mean BMI was 23.3 (SD 3.1) kg/m2, five subjects (42%) had BMI 20–23 kg/m2, three (25%) had BMI >23 to 25 kg/m2, another three (25%) had BMI >25 kg/m2, and one (8%) had BMI <20 kg/m2. At baseline, the mean arm VAS score (average of both arms) was 2.9 (SD 0.6). All arms (24, 100%) had a VAS score of 2 or more, indicating at least mild brachial laxity (VAS score 2: 3 arms, 13%; VAS score >2–3: 16 arms, 67%; VAS score >3–4: 4 arms, 17%; and VAS score >4: one arm, 4%). Subject demographics and baseline characteristics. Data presented are mean (SD) unless otherwise stated. Arm VAS was reported based on the average score from both reviewers. Abbreviations: BMI, Body mass index; SD, Standard deviation; VAS, Visual analogue scale. aAverage of both arms bPercentage calculated out of a total of 24 arms. Impact on objective outcome measures: biophysical skin parameters Cutometer readings demonstrated a progressive decrease in the mean R0 reading (the lower the reading, the better the firmness of the skin), throughout the course of the study, suggesting progressive improvement in skin firmness from baseline at all time‐points (Figure 2A). This reduced from 0.515 (SD 0.098) mm at baseline to 0.433 (SD 0.049) mm at 24 weeks. Statistically significant improvements were observed from 12 weeks onwards (p < 0.05 for 12 and 24 weeks). In addition, the mean R2 reading (the closer the reading to 1, the better the elasticity of the skin) improved significantly from baseline at all time‐points (p < 0.05 for all) (Figure 2B). The mean R2 reading increased from 0.816 (SD 0.032) at baseline to 0.847 (SD 0.037) at 12 weeks, where maximum improvement was observed. 
Objective biophysical parameters of the brachial skin over time. (A) Mean (SD) R0 readings, a measure of skin firmness, and (B) mean (SD) R2 readings, a measure of skin elasticity, before treatment and at 4, 12, and 24 weeks after treatment Cutometer readings demonstrated a progressive decrease in the mean R0 reading (the lower the reading, the better the firmness of the skin), throughout the course of the study, suggesting progressive improvement in skin firmness from baseline at all time‐points (Figure 2A). This reduced from 0.515 (SD 0.098) mm at baseline to 0.433 (SD 0.049) mm at 24 weeks. Statistically significant improvements were observed from 12 weeks onwards (p < 0.05 for 12 and 24 weeks). In addition, the mean R2 reading (the closer the reading to 1, the better the elasticity of the skin) improved significantly from baseline at all time‐points (p < 0.05 for all) (Figure 2B). The mean R2 reading increased from 0.816 (SD 0.032) at baseline to 0.847 (SD 0.037) at 12 weeks, where maximum improvement was observed. Objective biophysical parameters of the brachial skin over time. (A) Mean (SD) R0 readings, a measure of skin firmness, and (B) mean (SD) R2 readings, a measure of skin elasticity, before treatment and at 4, 12, and 24 weeks after treatment Impact on subjective outcome measures: aesthetic appearance and subject satisfaction Reviewer‐rated arm VAS score improved significantly from baseline at all time‐points (p < 0.05 for all) (Figure 3). The mean arm VAS score reduced from 2.9 (SD 0.6) at baseline to 2.1 (SD 0.7) at 12 weeks, where maximum improvement was observed. Figure 4 shows the change in overall skin quality in a subject who had a VAS score of 4 (severe laxity) at baseline. Pre‐ and post‐treatment photographs depict the improvement in overall skin quality and laxity from baseline to 24 weeks after treatment. Mean (SD) arm VAS scores before treatment and at 4, 12, and 24 weeks after treatment Photographs of upper arms of a subject before treatment (left) and at 24 weeks after combined treatment with microfocused ultrasound with visualization and diluted calcium hydroxylapatite (right) Overall change in arm appearance was assessed using the GAIS (Figure 5). Based on investigator's ratings, over 70% of all subjects showed an improved aesthetic appearance compared with baseline at all time‐points throughout the study. At 4 weeks, 73% of subjects (8/11) were assessed to have improvement of brachial laxity. By 12 and 24 weeks, 88% (7/8) and 83% (10/12) of subjects were assessed to have improved, respectively. Furthermore, over 37% of subjects were reported to have “much improved” and “very much improved” brachial laxity by 12 and 24 weeks. There were no reported cases with worsening of brachial laxity. Investigators global aesthetic improvement scale (GAIS) ratings over time When asked to rate their overall satisfaction with the treatment, 55% of the subjects (6/11) reported they were satisfied at 4 weeks (Figure 6). At the 12 weeks, the highest percentage of satisfied subjects (75%; 6/8) was noted. By 24 weeks, 58% (7/12) were still satisfied with the treatment. No subjects were dissatisfied during the study. Subject satisfaction over time Reviewer‐rated arm VAS score improved significantly from baseline at all time‐points (p < 0.05 for all) (Figure 3). The mean arm VAS score reduced from 2.9 (SD 0.6) at baseline to 2.1 (SD 0.7) at 12 weeks, where maximum improvement was observed. 
Adverse events
No serious AEs were noted during the study. All AEs were mild and transient. The most common AEs were mild bruising (92%; 11/12) and redness (25%; 3/12), which resolved spontaneously within a week. One subject reported mild paresthesia (electric shock sensations) down the left arm, which also resolved spontaneously over two weeks.
CONCLUSION
This study provides novel evidence that combination treatment with MFU‐V followed by injection of diluted or hyperdiluted CaHA is effective for the management of brachial skin laxity, as demonstrated by clinically and statistically significant improvements in both objective and subjective outcome measures.
[ "INTRODUCTION", "SUBJECTS AND METHODS", "Study design", "Study population", "Intervention", "Study assessments", "Statistical analyses", "Subject demographics and baseline characteristics", "Impact on objective outcome measures: biophysical skin parameters", "Impact on subjective outcome measures: aesthetic appearance and subject satisfaction", "Adverse events", "AUTHORS CONTRIBUTIONS", "ETHICAL APPROVAL" ]
[ "There is a paradigm shift for body contouring treatments.\n1\n Individuals are paying more attention to concerns in visible body areas, including the arms, where sagging and laxity in the brachial area may present as “batwing appearance,”\n2\n and along with this is a higher desire for non‐invasive procedures.\n3\n Indeed, data from the American Society for Dermatologic Surgery (ASDS) suggest that the number of invasive and non‐invasive body contouring procedures performed has increased by fourfold between 2012 and 2018.\n4\n Brachial skin laxity, in particular, can negatively impact an individual's quality of life.\n5\n The condition can become more apparent as collagen and elastin levels decline with aging and weight loss, which cause the skin to lose firmness and elasticity.\n2\n, \n6\n The demand for improvement in brachial skin laxity is evidenced by the more than 50‐fold increase in “upper arm lifts” or brachioplasty from 2000 to 2019 as reported in the Plastic Surgery Statistics Report by American Society of Plastic Surgeons (ASPS).\n3\n\n\nAlthough surgical procedures can improve the appearance of the brachial area, they can be associated with significant complications and extensive scarring often requiring revision surgery.\n7\n Indeed, there has been a rise in demand for non‐invasive procedures for the improvement of skin laxity.\n1\n Microfocused ultrasound with visualization (MFU‐V; Ultherapy®, Merz North America, Inc. Raleigh, N.C.) and calcium hydroxylapatite (CaHA; Radiesse®, Merz North America, Inc) injections are examples of non‐invasive procedures that stimulate collagen and elastin production. Independently, these have been shown to promote dermal remodeling, resulting in lifting and tightening of lax skin primarily for concerns on the face and neck, but there are limited data to suggest that these procedures may have potential applications for use in other body sites.\n8\n, \n9\n, \n10\n, \n11\n MFU‐V generates focused ultrasound waves that are converted to heat, creating discrete thermal coagulation points at precise depths beneath the skin's surface resulting in collagen denaturation and subsequent new collagen production.\n12\n, \n13\n A small number of studies has demonstrated improvement of brachial laxity with MFU‐V.\n8\n, \n9\n These are limited by the predominant use of subjective assessments to evaluate clinical improvements without including objective measurements. An alternative option has been the use of CaHA as a biostimulatory agent.\n14\n Treatment with subdermal injections of diluted or hyperdiluted CaHA has been shown to stimulate neocollagenesis, elastogenesis, and neoangiogenesis, with corresponding increase in dermal thickness and elasticity.\n15\n To date, a handful of case studies has reported the use of CaHA injections to improve skin laxity in the brachial region and other areas.\n10\n, \n11\n\n\nIn an effort to pursue further improvement in skin tightening especially in body sites, recent studies have examined the effectiveness of combining both MFU‐V and CaHA treatments for addressing skin laxity in the neck and buttocks.\n16\n, \n17\n Given that each of these modalities has been shown to improve collagen stores, it is plausible that a combination of both procedures could produce synergistic effects.\n17\n, \n18\n There is, however, no publication to date on the combined use of both modalities in a single treatment session in the brachial region. 
Given the increasing demand for arm skin tightening procedures, particularly with non‐invasive treatments, this prospective pilot study aimed to assess the effectiveness of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improving brachial skin laxity.", "Study design This was a prospective, open‐label, non‐randomized, single‐arm case series of healthy female subjects from two outpatient clinics in Singapore conducted between September 2019 and August 2020. Subjects underwent one treatment session of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improvement of brachial skin laxity and were followed for 24 weeks over three visits to evaluate the effectiveness of the combination treatment. The study was approved by the relevant ethics committee, and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre‐ and post‐treatment photographs was obtained from the subject whose photographs were included in the manuscript.\nThis was a prospective, open‐label, non‐randomized, single‐arm case series of healthy female subjects from two outpatient clinics in Singapore conducted between September 2019 and August 2020. Subjects underwent one treatment session of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improvement of brachial skin laxity and were followed for 24 weeks over three visits to evaluate the effectiveness of the combination treatment. The study was approved by the relevant ethics committee, and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre‐ and post‐treatment photographs was obtained from the subject whose photographs were included in the manuscript.\nStudy population Female subjects aged 35–65 years of all skin types and ethnic groups who had skin laxity in the brachial regions, desired lifting and tightening of the upper arm skin through non‐surgical interventions, were in good health, and could comply with the study requirements were included in the study. Subjects who met any of the following criteria were excluded from participation: (1) used immunosuppressive drugs or had any active systemic or local skin disease that may alter wound healing; (2) used antiplatelet agents or anticoagulants; (3) had significant scarring or keloid scarring in the proposed treatment areas, or a history of keloid formation; (4) had significant open wounds or lesions, or presence of active implants in the proposed treatment areas; (5) performed non‐invasive fat reduction procedures; ablative or non‐ablative skin procedures, or surgical procedures to the proposed treatment areas within the past six months; and (6) were pregnant or lactating during screening; and (7) had body mass index (BMI) >28 kg/m2.\nFemale subjects aged 35–65 years of all skin types and ethnic groups who had skin laxity in the brachial regions, desired lifting and tightening of the upper arm skin through non‐surgical interventions, were in good health, and could comply with the study requirements were included in the study. 
Subjects who met any of the following criteria were excluded from participation: (1) used immunosuppressive drugs or had any active systemic or local skin disease that may alter wound healing; (2) used antiplatelet agents or anticoagulants; (3) had significant scarring or keloid scarring in the proposed treatment areas, or a history of keloid formation; (4) had significant open wounds or lesions, or presence of active implants in the proposed treatment areas; (5) performed non‐invasive fat reduction procedures; ablative or non‐ablative skin procedures, or surgical procedures to the proposed treatment areas within the past six months; and (6) were pregnant or lactating during screening; and (7) had body mass index (BMI) >28 kg/m2.\nIntervention MFU‐V treatment was first administered as described in previously published studies for brachial skin laxity.\n8\n, \n9\n Diluted or hyperdiluted CaHA was then administered according to the recommendations of published consensus guidelines.\n14\n, \n19\n\n\nAnalgesia was achieved with oral administration of ibuprofen (800 mg) and application of a topical anesthetic (EMLA cream: a eutectic mixture of local anesthetic—lidocaine 2.5% and prilocaine 2.5%, or BLT cream: benzocaine 20%, lidocaine 6%, tetracaine 4%) one hour before treatment. Oral paracetamol (1 g) or tramadol (50 mg) was given to subjects who were allergic to non‐steroidal anti‐inflammatory drugs (NSAIDs).\nThe treatment area was marked with the subject standing upright and with the arms extended at 45 degrees from the body using the lower horizontal border of the underarm area and the outer border of the axilla border as reference points. A standardized ruler that matched the width and height of the MFU‐V transducer was used to create a grid as shown in Figure 1A. A thin layer of ultrasound gel was then applied to the subject's arms. Each transducer was placed on the skin and was evenly coupled to the skin surface before treatment was administered. The brachial regions were treated in a standardized pattern as previously described\n8\n, \n9\n at two depths using the 4.0 MHz‐4.5 mm depth transducer for deeper penetration to the superficial fascial layer in the first pass, followed by the 7.0 MHz‐3.0 mm depth transducer for more superficial penetration to the deep dermis in the second pass. Depending on the surface area to be treated, a total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered to each arm using each transducer (Figure 1A), for a total of 360–480 lines per arm with both transducers, with more lines for patients with larger arm volumes. At least half of the height of the surface area of the inner arm was treated. The treatment protocol was not modified by severity of brachial skin laxity.\nTreatment administration. (A) MFU‐V treatment: The brachial regions were treated in a standardized pattern at two depths using the 4.0 MHz‐4.5 mm depth transducer and 7.0 MHz‐3.0 mm depth transducer. A total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered using each transducer. (B) CaHA treatment: subjects received subdermal injections of diluted or hyperdiluted CaHA with a 25‐gauge cannula using a retrograde fanning technique. 
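The sketch is our illustration; only the 20-lines-per-square density and the per-transducer ranges come from the protocol. It confirms that 180–240 lines per transducer corresponds to a grid of 9–12 squares per arm:

```python
# Hypothetical helper (not from the study) relating the marked grid to
# MFU-V treatment-line counts: 20 lines per 2.5 x 2.5 cm square.
LINES_PER_SQUARE = 20

def total_lines(n_squares: int, n_transducers: int = 2) -> int:
    """Treatment lines for one arm across all transducer passes."""
    return n_squares * LINES_PER_SQUARE * n_transducers

for n_squares in (9, 12):
    print(f"{n_squares} squares -> {n_squares * LINES_PER_SQUARE} lines "
          f"per transducer, {total_lines(n_squares)} lines in total")
# 9 squares -> 180/360 lines; 12 squares -> 240/480 lines, matching the text.
```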
Treatment administration. (A) MFU‐V treatment: the brachial regions were treated in a standardized pattern at two depths using the 4.0 MHz‐4.5 mm depth transducer and the 7.0 MHz‐3.0 mm depth transducer; a total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) was administered with each transducer. (B) CaHA treatment: subjects received subdermal injections of diluted or hyperdiluted CaHA with a 25‐gauge cannula using a retrograde fanning technique. The crosses in Figure 1A denote the three standardized points where cutometer readings were taken.
Depending on individual subjects' availability, subjects received subdermal injections of diluted or hyperdiluted CaHA immediately or up to one week following MFU‐V treatment, in the same areas, with a 25‐gauge cannula using a retrograde fanning technique (Figure 1B), in accordance with the recommendations of published consensus guidelines.14, 19 Two different dilutions were used depending on the investigator's assessment of skin thickness. A dilution ratio of 1:2 (1.5 ml of CaHA diluted with 0.3 ml of 2% lidocaine and 2.7 ml of normal saline, for a total volume of 4.5 ml per arm) was used for subjects with thinner skin. A dilution ratio of 1:1 (1.5 ml of CaHA diluted with 0.3 ml of 2% lidocaine and 1.2 ml of normal saline, for a total volume of 3 ml per arm) was used for subjects with thicker skin. After injecting CaHA, vigorous massage of the upper arm areas was performed to ensure even distribution of CaHA.
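The dilution recipes can be verified arithmetically. In the sketch below (our illustration; the volumes are those stated above), the 2% lidocaine is counted toward the diluent, which is what makes the stated CaHA-to-diluent ratios come out to 1:2 and 1:1:

```python
# Checking the CaHA dilution recipes described above (all volumes in ml).
recipes = {
    "1:2 (thinner skin)": {"caha": 1.5, "lidocaine_2pct": 0.3, "saline": 2.7},
    "1:1 (thicker skin)": {"caha": 1.5, "lidocaine_2pct": 0.3, "saline": 1.2},
}

for label, r in recipes.items():
    diluent = r["lidocaine_2pct"] + r["saline"]
    total = r["caha"] + diluent
    print(f"{label}: CaHA:diluent = 1:{diluent / r['caha']:g}, "
          f"total {total:g} ml per arm")
# 1:2 -> 4.5 ml per arm; 1:1 -> 3 ml per arm, matching the protocol.
```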
Study assessments
Subjects were followed for six months after treatment and evaluated at 4, 12, and 24 weeks. The primary outcome was the change in biophysical skin parameters, specifically skin firmness (R0) and skin elasticity (R2), at the 4, 12, and 24 week follow‐up visits compared with baseline. These parameters were measured using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany), an established objective assessment of skin firmness and skin elasticity in dermatologic clinical studies.20 The cutometer uses a suction method that creates a negative pressure, drawing the skin into a probe and releasing it after a defined time. It measures skin firmness (R0, the resistance of the skin to negative pressure; a lower R0 value denotes firmer skin) and elasticity (R2, the ability of the skin to return to its original position; a higher R2 denotes more elastic skin) using a non‐contact optical measuring system.21 Readings were taken at three standardized points on each arm during each visit (Figure 1A), and the average of the three measurements was reported for each visit.
Secondary outcomes used subjective assessments, including the arm visual analogue scale (VAS),10 the global aesthetic improvement scale (GAIS), and a subject global satisfaction scale. Standardized photographs of the upper arms were taken before treatment and at each post‐treatment visit with the subject standing and both arms outstretched at a 45‐degree angle from the body, using standardized camera angles and room lighting. Two blinded reviewers evaluated the photographs and provided their assessments using the VAS for upper arms developed by Amselem et al. (Type I = no laxity, Type II = mild laxity, Type III = moderate laxity, Type IV = severe laxity, and Type V = very severe laxity).10 The average VAS scores from both reviewers were reported for each visit. In addition, participating investigators compared photographs from each post‐treatment visit with baseline and rated changes in overall aesthetic appearance using the GAIS (0 = worsened, 1 = no change, 2 = improved, 3 = much improved, and 4 = very much improved), as previously described.11 Subjects rated their treatment satisfaction at each post‐treatment visit using the subject global satisfaction scale (1 = very dissatisfied to 5 = very satisfied). Adverse events (AEs) were recorded immediately after treatment and at all post‐treatment visits.
Statistical analyses
Descriptive statistics were primarily used to summarize the results. Cutometer values (R0 and R2) and arm VAS scores were summarized using means and standard deviations. Investigators' GAIS ratings, subject satisfaction, and AEs were summarized as percentages. The Wilcoxon signed‐rank test was used to compare cutometer values and arm VAS scores at each follow‐up visit with baseline values. p values < 0.05 were considered statistically significant. R (version 4.0.3, 2020‐10‐10) and RStudio (version 1.3.1093) were used to perform the analyses.
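The per-visit comparison is a paired, non-parametric test. The authors report using R; the following Python sketch shows the equivalent pipeline on hypothetical readings (the study's raw per-arm data are not published): average the three cutometer sites per arm, then compare a follow-up visit against baseline with the Wilcoxon signed-rank test.

```python
# Minimal sketch of the stated analysis, on hypothetical R0 data (mm):
# average three site readings per arm per visit, then run a paired
# Wilcoxon signed-rank test of follow-up vs. baseline.
from statistics import mean
from scipy.stats import wilcoxon

r0 = {  # {visit: one [site1, site2, site3] list per arm}
    "baseline": [[0.52, 0.50, 0.53], [0.48, 0.49, 0.51], [0.55, 0.56, 0.54],
                 [0.50, 0.52, 0.51], [0.58, 0.57, 0.59]],
    "week24":   [[0.44, 0.43, 0.45], [0.42, 0.41, 0.43], [0.47, 0.46, 0.48],
                 [0.45, 0.44, 0.46], [0.50, 0.49, 0.51]],
}

per_arm = {visit: [mean(sites) for sites in arms] for visit, arms in r0.items()}

stat, p = wilcoxon(per_arm["baseline"], per_arm["week24"])
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
# A p value < 0.05 would be read as a significant change from baseline.
```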
AUTHORS CONTRIBUTIONS
Both authors contributed to study conception and design, as well as acquisition and interpretation of data, with the first author contributing to a greater extent in both aspects. They were involved in revising the manuscript for important intellectual content, with the first author contributing to a greater extent, and both have given final approval of the version to be published.
ETHICAL APPROVAL
The study was approved by the Parkway Pantai institutional ethics committee (approval reference: PIEC/2019/024), and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre‐ and post‐treatment photographs was obtained from the subject whose photographs were included in the manuscript.
[ "INTRODUCTION", "SUBJECTS AND METHODS", "Study design", "Study population", "Intervention", "Study assessments", "Statistical analyses", "RESULTS", "Subject demographics and baseline characteristics", "Impact on objective outcome measures: biophysical skin parameters", "Impact on subjective outcome measures: aesthetic appearance and subject satisfaction", "Adverse events", "DISCUSSION", "CONCLUSION", "CONFLICTS OF INTEREST", "AUTHORS CONTRIBUTIONS", "ETHICAL APPROVAL", "DATA AVAILABILITY STATEMENT" ]
[ "There is a paradigm shift for body contouring treatments.\n1\n Individuals are paying more attention to concerns in visible body areas, including the arms, where sagging and laxity in the brachial area may present as “batwing appearance,”\n2\n and along with this is a higher desire for non‐invasive procedures.\n3\n Indeed, data from the American Society for Dermatologic Surgery (ASDS) suggest that the number of invasive and non‐invasive body contouring procedures performed has increased by fourfold between 2012 and 2018.\n4\n Brachial skin laxity, in particular, can negatively impact an individual's quality of life.\n5\n The condition can become more apparent as collagen and elastin levels decline with aging and weight loss, which cause the skin to lose firmness and elasticity.\n2\n, \n6\n The demand for improvement in brachial skin laxity is evidenced by the more than 50‐fold increase in “upper arm lifts” or brachioplasty from 2000 to 2019 as reported in the Plastic Surgery Statistics Report by American Society of Plastic Surgeons (ASPS).\n3\n\n\nAlthough surgical procedures can improve the appearance of the brachial area, they can be associated with significant complications and extensive scarring often requiring revision surgery.\n7\n Indeed, there has been a rise in demand for non‐invasive procedures for the improvement of skin laxity.\n1\n Microfocused ultrasound with visualization (MFU‐V; Ultherapy®, Merz North America, Inc. Raleigh, N.C.) and calcium hydroxylapatite (CaHA; Radiesse®, Merz North America, Inc) injections are examples of non‐invasive procedures that stimulate collagen and elastin production. Independently, these have been shown to promote dermal remodeling, resulting in lifting and tightening of lax skin primarily for concerns on the face and neck, but there are limited data to suggest that these procedures may have potential applications for use in other body sites.\n8\n, \n9\n, \n10\n, \n11\n MFU‐V generates focused ultrasound waves that are converted to heat, creating discrete thermal coagulation points at precise depths beneath the skin's surface resulting in collagen denaturation and subsequent new collagen production.\n12\n, \n13\n A small number of studies has demonstrated improvement of brachial laxity with MFU‐V.\n8\n, \n9\n These are limited by the predominant use of subjective assessments to evaluate clinical improvements without including objective measurements. An alternative option has been the use of CaHA as a biostimulatory agent.\n14\n Treatment with subdermal injections of diluted or hyperdiluted CaHA has been shown to stimulate neocollagenesis, elastogenesis, and neoangiogenesis, with corresponding increase in dermal thickness and elasticity.\n15\n To date, a handful of case studies has reported the use of CaHA injections to improve skin laxity in the brachial region and other areas.\n10\n, \n11\n\n\nIn an effort to pursue further improvement in skin tightening especially in body sites, recent studies have examined the effectiveness of combining both MFU‐V and CaHA treatments for addressing skin laxity in the neck and buttocks.\n16\n, \n17\n Given that each of these modalities has been shown to improve collagen stores, it is plausible that a combination of both procedures could produce synergistic effects.\n17\n, \n18\n There is, however, no publication to date on the combined use of both modalities in a single treatment session in the brachial region. 
Given the increasing demand for arm skin tightening procedures, particularly with non‐invasive treatments, this prospective pilot study aimed to assess the effectiveness of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improving brachial skin laxity.", "Study design This was a prospective, open‐label, non‐randomized, single‐arm case series of healthy female subjects from two outpatient clinics in Singapore conducted between September 2019 and August 2020. Subjects underwent one treatment session of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improvement of brachial skin laxity and were followed for 24 weeks over three visits to evaluate the effectiveness of the combination treatment. The study was approved by the relevant ethics committee, and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre‐ and post‐treatment photographs was obtained from the subject whose photographs were included in the manuscript.\nThis was a prospective, open‐label, non‐randomized, single‐arm case series of healthy female subjects from two outpatient clinics in Singapore conducted between September 2019 and August 2020. Subjects underwent one treatment session of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improvement of brachial skin laxity and were followed for 24 weeks over three visits to evaluate the effectiveness of the combination treatment. The study was approved by the relevant ethics committee, and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre‐ and post‐treatment photographs was obtained from the subject whose photographs were included in the manuscript.\nStudy population Female subjects aged 35–65 years of all skin types and ethnic groups who had skin laxity in the brachial regions, desired lifting and tightening of the upper arm skin through non‐surgical interventions, were in good health, and could comply with the study requirements were included in the study. Subjects who met any of the following criteria were excluded from participation: (1) used immunosuppressive drugs or had any active systemic or local skin disease that may alter wound healing; (2) used antiplatelet agents or anticoagulants; (3) had significant scarring or keloid scarring in the proposed treatment areas, or a history of keloid formation; (4) had significant open wounds or lesions, or presence of active implants in the proposed treatment areas; (5) performed non‐invasive fat reduction procedures; ablative or non‐ablative skin procedures, or surgical procedures to the proposed treatment areas within the past six months; and (6) were pregnant or lactating during screening; and (7) had body mass index (BMI) >28 kg/m2.\nFemale subjects aged 35–65 years of all skin types and ethnic groups who had skin laxity in the brachial regions, desired lifting and tightening of the upper arm skin through non‐surgical interventions, were in good health, and could comply with the study requirements were included in the study. 
Subjects who met any of the following criteria were excluded from participation: (1) used immunosuppressive drugs or had any active systemic or local skin disease that may alter wound healing; (2) used antiplatelet agents or anticoagulants; (3) had significant scarring or keloid scarring in the proposed treatment areas, or a history of keloid formation; (4) had significant open wounds or lesions, or presence of active implants in the proposed treatment areas; (5) performed non‐invasive fat reduction procedures; ablative or non‐ablative skin procedures, or surgical procedures to the proposed treatment areas within the past six months; and (6) were pregnant or lactating during screening; and (7) had body mass index (BMI) >28 kg/m2.\nIntervention MFU‐V treatment was first administered as described in previously published studies for brachial skin laxity.\n8\n, \n9\n Diluted or hyperdiluted CaHA was then administered according to the recommendations of published consensus guidelines.\n14\n, \n19\n\n\nAnalgesia was achieved with oral administration of ibuprofen (800 mg) and application of a topical anesthetic (EMLA cream: a eutectic mixture of local anesthetic—lidocaine 2.5% and prilocaine 2.5%, or BLT cream: benzocaine 20%, lidocaine 6%, tetracaine 4%) one hour before treatment. Oral paracetamol (1 g) or tramadol (50 mg) was given to subjects who were allergic to non‐steroidal anti‐inflammatory drugs (NSAIDs).\nThe treatment area was marked with the subject standing upright and with the arms extended at 45 degrees from the body using the lower horizontal border of the underarm area and the outer border of the axilla border as reference points. A standardized ruler that matched the width and height of the MFU‐V transducer was used to create a grid as shown in Figure 1A. A thin layer of ultrasound gel was then applied to the subject's arms. Each transducer was placed on the skin and was evenly coupled to the skin surface before treatment was administered. The brachial regions were treated in a standardized pattern as previously described\n8\n, \n9\n at two depths using the 4.0 MHz‐4.5 mm depth transducer for deeper penetration to the superficial fascial layer in the first pass, followed by the 7.0 MHz‐3.0 mm depth transducer for more superficial penetration to the deep dermis in the second pass. Depending on the surface area to be treated, a total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered to each arm using each transducer (Figure 1A), for a total of 360–480 lines per arm with both transducers, with more lines for patients with larger arm volumes. At least half of the height of the surface area of the inner arm was treated. The treatment protocol was not modified by severity of brachial skin laxity.\nTreatment administration. (A) MFU‐V treatment: The brachial regions were treated in a standardized pattern at two depths using the 4.0 MHz‐4.5 mm depth transducer and 7.0 MHz‐3.0 mm depth transducer. A total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered using each transducer. (B) CaHA treatment: subjects received subdermal injections of diluted or hyperdiluted CaHA with a 25‐gauge cannula using a retrograde fanning technique. 
The crosses in Figure 1A denote the three standardized points where cutometer readings were taken\nDepending on individual subject's availability, subjects received subdermal injections of diluted or hyperdiluted CaHA immediately or up to one week following MFU‐V treatment in the same areas with a 25‐gauge cannula using a retrograde fanning technique (Figure 1B) in accordance with the recommendations of published consensus guidelines.\n14\n, \n19\n Two different dilutions were used depending on investigator's assessment of skin thickness. A dilution ratio of 1:2 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 2.7 ml normal saline, for a total volume of 4.5 ml per arm) was used for subjects with thinner skin. A dilution ratio of 1:1 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 1.2 ml normal saline, for a total volume of 3 ml per arm) was used for subjects with thicker skin. After injecting CaHA, vigorous massage of the upper arm areas was performed to ensure even distribution of CaHA.\nMFU‐V treatment was first administered as described in previously published studies for brachial skin laxity.\n8\n, \n9\n Diluted or hyperdiluted CaHA was then administered according to the recommendations of published consensus guidelines.\n14\n, \n19\n\n\nAnalgesia was achieved with oral administration of ibuprofen (800 mg) and application of a topical anesthetic (EMLA cream: a eutectic mixture of local anesthetic—lidocaine 2.5% and prilocaine 2.5%, or BLT cream: benzocaine 20%, lidocaine 6%, tetracaine 4%) one hour before treatment. Oral paracetamol (1 g) or tramadol (50 mg) was given to subjects who were allergic to non‐steroidal anti‐inflammatory drugs (NSAIDs).\nThe treatment area was marked with the subject standing upright and with the arms extended at 45 degrees from the body using the lower horizontal border of the underarm area and the outer border of the axilla border as reference points. A standardized ruler that matched the width and height of the MFU‐V transducer was used to create a grid as shown in Figure 1A. A thin layer of ultrasound gel was then applied to the subject's arms. Each transducer was placed on the skin and was evenly coupled to the skin surface before treatment was administered. The brachial regions were treated in a standardized pattern as previously described\n8\n, \n9\n at two depths using the 4.0 MHz‐4.5 mm depth transducer for deeper penetration to the superficial fascial layer in the first pass, followed by the 7.0 MHz‐3.0 mm depth transducer for more superficial penetration to the deep dermis in the second pass. Depending on the surface area to be treated, a total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered to each arm using each transducer (Figure 1A), for a total of 360–480 lines per arm with both transducers, with more lines for patients with larger arm volumes. At least half of the height of the surface area of the inner arm was treated. The treatment protocol was not modified by severity of brachial skin laxity.\nTreatment administration. (A) MFU‐V treatment: The brachial regions were treated in a standardized pattern at two depths using the 4.0 MHz‐4.5 mm depth transducer and 7.0 MHz‐3.0 mm depth transducer. A total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered using each transducer. (B) CaHA treatment: subjects received subdermal injections of diluted or hyperdiluted CaHA with a 25‐gauge cannula using a retrograde fanning technique. 
The crosses in Figure 1A denote the three standardized points where cutometer readings were taken\nDepending on individual subject's availability, subjects received subdermal injections of diluted or hyperdiluted CaHA immediately or up to one week following MFU‐V treatment in the same areas with a 25‐gauge cannula using a retrograde fanning technique (Figure 1B) in accordance with the recommendations of published consensus guidelines.\n14\n, \n19\n Two different dilutions were used depending on investigator's assessment of skin thickness. A dilution ratio of 1:2 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 2.7 ml normal saline, for a total volume of 4.5 ml per arm) was used for subjects with thinner skin. A dilution ratio of 1:1 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 1.2 ml normal saline, for a total volume of 3 ml per arm) was used for subjects with thicker skin. After injecting CaHA, vigorous massage of the upper arm areas was performed to ensure even distribution of CaHA.\nStudy assessments Subjects were followed for six months after treatment and evaluated at 4, 12, and 24 weeks. The primary outcome evaluated was the change in biophysical skin parameters, specifically skin firmness (R0) and skin elasticity (R2) during the 4, 12, and 24 week follow‐up visits as compared with baseline. These parameters were measured using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany), which is an established objective assessment of skin firmness and skin elasticity in dermatologic clinical studies.\n20\n The cutometer uses a suction method which creates a negative pressure that draws the skin into a probe and releases it after a defined time. It measures skin firmness (R0: the resistance of the skin to negative pressure, lower R0 value denotes firmer skin) and elasticity (R2: the ability of the skin to return to its original position, higher R2 denotes more elastic skin) using a non‐contact optical measuring system.\n21\n Readings were taken at three standardized points on each arm during each visit (Figure 1A), and the average of three measurements was reported for each visit.\nSecondary outcomes utilized subjective assessments including the arm visual analogue scale (VAS),\n10\n the global aesthetic improvement scale (GAIS), and subject global satisfaction scale. Standardized photographs of upper arms were taken before treatment and at each visit post‐treatment with the subject standing and both arms outstretched at a 45‐degree angle from the body using standardized camera angles and room lighting. Two blinded reviewers evaluated photographs of upper arms and provided their assessments using the VAS for upper arms developed by Amselem et al (Type I = no laxity, Type II = mild laxity, Type III = moderate laxity, Type IV = severe laxity, and Type V = very severe laxity).\n10\n The average VAS scores from both reviewers were reported for each visit. In addition, participating investigators compared photographs from each post‐treatment visit with baseline and provided their assessments on changes in overall aesthetic appearance after treatment using the GAIS (0 = worsened, 1 = no change, 2 = improved, 3 = much improved, and 4 = very much improved), as previously described.\n11\n Subjects provided their assessments of treatment satisfaction at each post‐treatment visit using the subject global satisfaction scale (1 = very dissatisfied to 5 = very satisfied). 
Adverse events (AEs) were recorded immediately after treatment and at all post‐treatment visits.\nSubjects were followed for six months after treatment and evaluated at 4, 12, and 24 weeks. The primary outcome evaluated was the change in biophysical skin parameters, specifically skin firmness (R0) and skin elasticity (R2) during the 4, 12, and 24 week follow‐up visits as compared with baseline. These parameters were measured using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany), which is an established objective assessment of skin firmness and skin elasticity in dermatologic clinical studies.\n20\n The cutometer uses a suction method which creates a negative pressure that draws the skin into a probe and releases it after a defined time. It measures skin firmness (R0: the resistance of the skin to negative pressure, lower R0 value denotes firmer skin) and elasticity (R2: the ability of the skin to return to its original position, higher R2 denotes more elastic skin) using a non‐contact optical measuring system.\n21\n Readings were taken at three standardized points on each arm during each visit (Figure 1A), and the average of three measurements was reported for each visit.\nSecondary outcomes utilized subjective assessments including the arm visual analogue scale (VAS),\n10\n the global aesthetic improvement scale (GAIS), and subject global satisfaction scale. Standardized photographs of upper arms were taken before treatment and at each visit post‐treatment with the subject standing and both arms outstretched at a 45‐degree angle from the body using standardized camera angles and room lighting. Two blinded reviewers evaluated photographs of upper arms and provided their assessments using the VAS for upper arms developed by Amselem et al (Type I = no laxity, Type II = mild laxity, Type III = moderate laxity, Type IV = severe laxity, and Type V = very severe laxity).\n10\n The average VAS scores from both reviewers were reported for each visit. In addition, participating investigators compared photographs from each post‐treatment visit with baseline and provided their assessments on changes in overall aesthetic appearance after treatment using the GAIS (0 = worsened, 1 = no change, 2 = improved, 3 = much improved, and 4 = very much improved), as previously described.\n11\n Subjects provided their assessments of treatment satisfaction at each post‐treatment visit using the subject global satisfaction scale (1 = very dissatisfied to 5 = very satisfied). Adverse events (AEs) were recorded immediately after treatment and at all post‐treatment visits.\nStatistical analyses Descriptive statistics were primarily used to summarize the results. Cutometer values (R0 and R2) and arm VAS scores were summarized using mean and standard deviation. Investigators’ GAIS, subject satisfaction, and AEs were summarized by percentages. The Wilcoxon signed‐rank test was used to compare cutometer values and arm VAS scores at each follow‐up visit with baseline values. p values < 0.05 were considered statistically significant. R (version 4.0.3, 2020–10–10), and R Studio (version 1.3.1093) were used to perform the analyses.\nDescriptive statistics were primarily used to summarize the results. Cutometer values (R0 and R2) and arm VAS scores were summarized using mean and standard deviation. Investigators’ GAIS, subject satisfaction, and AEs were summarized by percentages. 
Secondary outcomes used subjective assessments: the arm visual analogue scale (VAS) [10], the global aesthetic improvement scale (GAIS), and a subject global satisfaction scale. Standardized photographs of the upper arms were taken before treatment and at each post-treatment visit, with the subject standing and both arms outstretched at a 45-degree angle from the body, using standardized camera angles and room lighting. Two blinded reviewers evaluated the photographs and graded them using the VAS for upper arms developed by Amselem et al (Type I = no laxity, Type II = mild laxity, Type III = moderate laxity, Type IV = severe laxity, and Type V = very severe laxity) [10]; the average of the two reviewers' scores was reported for each visit. In addition, participating investigators compared photographs from each post-treatment visit with baseline and rated the change in overall aesthetic appearance using the GAIS (0 = worsened, 1 = no change, 2 = improved, 3 = much improved, and 4 = very much improved), as previously described [11]. Subjects rated their satisfaction at each post-treatment visit using the subject global satisfaction scale (1 = very dissatisfied to 5 = very satisfied). Adverse events (AEs) were recorded immediately after treatment and at all post-treatment visits.

Statistical analyses

Descriptive statistics were primarily used to summarize the results. Cutometer values (R0 and R2) and arm VAS scores were summarized as mean and standard deviation; investigators' GAIS ratings, subject satisfaction, and AEs were summarized as percentages. The Wilcoxon signed-rank test was used to compare cutometer values and arm VAS scores at each follow-up visit with baseline values, with p values < 0.05 considered statistically significant. R (version 4.0.3, 2020-10-10) and RStudio (version 1.3.1093) were used to perform the analyses.
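As an illustration of this analysis, the sketch below runs a paired Wilcoxon signed-rank test in R on hypothetical R0 readings; the data frame and all values in it are invented for demonstration and are not the study data.

    # Paired cutometer R0 readings per subject (hypothetical values)
    r0 <- data.frame(
      baseline = c(0.52, 0.61, 0.48, 0.55, 0.44, 0.58, 0.50, 0.47),
      week24   = c(0.45, 0.50, 0.43, 0.46, 0.40, 0.48, 0.44, 0.415)
    )

    # Summaries as reported in the study: mean and standard deviation
    sapply(r0, function(x) c(mean = mean(x), sd = sd(x)))

    # Paired comparison of week-24 values against baseline
    wilcox.test(r0$week24, r0$baseline, paired = TRUE)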
Results

Subject demographics and baseline characteristics

Twelve female subjects participated in and completed the study. Follow-up data were available for all 12 subjects at 24 weeks but for only 11 subjects at 4 weeks and 8 subjects at 12 weeks, as these visits fell within the period of COVID-19-related movement restrictions in Singapore. Subject demographics and baseline characteristics are summarized in Table 1. The mean age was 51.3 (SD 7.3) years; three subjects (25%) were Asian and the remaining nine (75%) were Caucasian. The mean BMI was 23.3 (SD 3.1) kg/m2: one subject (8%) had a BMI <20 kg/m2, five (42%) 20–23 kg/m2, three (25%) >23 to 25 kg/m2, and three (25%) >25 kg/m2. At baseline, the mean arm VAS score (average of both arms) was 2.9 (SD 0.6). All 24 arms (100%) had a VAS score of 2 or more, indicating at least mild brachial laxity (VAS score 2: 3 arms, 13%; >2–3: 16 arms, 67%; >3–4: 4 arms, 17%; >4: 1 arm, 4%).
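The baseline percentages quoted for the 24 arms are simple proportions; as a quick check, the following R lines reproduce them from the counts given above.

    vas_counts <- c(3, 16, 4, 1)                   # arms with VAS 2, >2-3, >3-4, >4
    names(vas_counts) <- c("2", ">2-3", ">3-4", ">4")
    100 * vas_counts / sum(vas_counts)             # 12.5, 66.7, 16.7, 4.2
                                                   # reported (rounded) as 13%, 67%, 17%, 4%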
Table 1. Subject demographics and baseline characteristics. Data presented are mean (SD) unless otherwise stated. Arm VAS was reported based on the average score from the two reviewers. Abbreviations: BMI, body mass index; SD, standard deviation; VAS, visual analogue scale. aAverage of both arms. bPercentage calculated out of a total of 24 arms.

Impact on objective outcome measures: biophysical skin parameters

Cutometer readings demonstrated a progressive decrease in the mean R0 reading (the lower the reading, the firmer the skin) throughout the study, indicating progressive improvement in skin firmness from baseline at all time-points (Figure 2A). The mean R0 fell from 0.515 (SD 0.098) mm at baseline to 0.433 (SD 0.049) mm at 24 weeks, with statistically significant improvements observed from 12 weeks onwards (p < 0.05 at 12 and 24 weeks). In addition, the mean R2 reading (the closer to 1, the more elastic the skin) improved significantly from baseline at all time-points (p < 0.05 for all) (Figure 2B), increasing from 0.816 (SD 0.032) at baseline to 0.847 (SD 0.037) at 12 weeks, where the maximum improvement was observed.

Figure 2. Objective biophysical parameters of the brachial skin over time. (A) Mean (SD) R0 readings, a measure of skin firmness, and (B) mean (SD) R2 readings, a measure of skin elasticity, before treatment and at 4, 12, and 24 weeks after treatment.
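To express these changes in relative terms, the sketch below computes the percentage changes implied by the reported means; the derived percentages are our own, not values reported in the study.

    pct_change <- function(from, to) 100 * (to - from) / from

    pct_change(0.515, 0.433)  # mean R0, baseline -> 24 weeks: about -16% (firmer skin)
    pct_change(0.816, 0.847)  # mean R2, baseline -> 12 weeks: about +4% (more elastic skin)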
Impact on subjective outcome measures: aesthetic appearance and subject satisfaction

Reviewer-rated arm VAS scores improved significantly from baseline at all time-points (p < 0.05 for all) (Figure 3). The mean arm VAS score fell from 2.9 (SD 0.6) at baseline to 2.1 (SD 0.7) at 12 weeks, where the maximum improvement was observed. Figure 4 shows the change in overall skin quality in a subject with a baseline VAS score of 4 (severe laxity); the pre- and post-treatment photographs depict the improvement in overall skin quality and laxity from baseline to 24 weeks after treatment.

Figure 3. Mean (SD) arm VAS scores before treatment and at 4, 12, and 24 weeks after treatment.

Figure 4. Photographs of the upper arms of a subject before treatment (left) and 24 weeks after combined treatment with microfocused ultrasound with visualization and diluted calcium hydroxylapatite (right).

Overall change in arm appearance was assessed using the GAIS (Figure 5). Based on the investigators' ratings, over 70% of subjects showed improved aesthetic appearance compared with baseline at all time-points throughout the study. At 4 weeks, 73% of subjects (8/11) were assessed as improved; by 12 and 24 weeks, 88% (7/8) and 83% (10/12) of subjects, respectively, were assessed as improved. Furthermore, over 37% of subjects were rated "much improved" or "very much improved" at 12 and 24 weeks. There were no reported cases of worsening brachial laxity.

Figure 5. Investigators' global aesthetic improvement scale (GAIS) ratings over time.

When asked to rate their overall satisfaction with the treatment, 55% of subjects (6/11) reported being satisfied at 4 weeks (Figure 6). The highest percentage of satisfied subjects (75%; 6/8) was noted at 12 weeks, and 58% (7/12) remained satisfied at 24 weeks. No subjects were dissatisfied at any point during the study.

Figure 6. Subject satisfaction over time.
Adverse events

No serious AEs were noted during the study; all AEs were mild and transient. The most common were mild bruising (92%; 11/12) and redness (25%; 3/12), which resolved spontaneously within a week. One subject reported mild paresthesia (electric shock sensations) down the left arm, which also resolved spontaneously over two weeks.
Discussion

This 24-week pilot study assessed the effectiveness of MFU-V combined with subdermal injections of diluted or hyperdiluted CaHA in healthy Asian and Caucasian female subjects with brachial skin laxity. Our findings demonstrate that a single session of the combined treatment was associated with significant improvement in skin laxity and in the aesthetic appearance of the brachial region. Objective outcome measures using a cutometer demonstrated significantly improved skin firmness and elasticity after the combined treatment, and the subjective outcome measures (arm VAS, GAIS, and subject global satisfaction scale) demonstrated improvement in brachial laxity and a high degree of subject satisfaction. These encouraging results are especially relevant given the novel findings of this study and the growing demand for such treatments. To our knowledge, this study is the first to assess single-session combined use of MFU-V and diluted or hyperdiluted CaHA for brachial skin laxity.
This is of particular importance given the increased concern over laxity in the brachial region, as evidenced by the marked increase in demand for upper arm lifts [3]. Although this was a pilot study, we recruited as diverse a group of subjects as possible: the cohort included subjects with BMIs ranging from <20 kg/m2 to >25 kg/m2, varying degrees of laxity, and both Asian and Caucasian subjects.

Given that MFU-V and CaHA stimulate neocollagenesis via different mechanisms, there is considerable rationale for combining both modalities in the same treatment area to enhance neocollagenesis. MFU-V generates focused ultrasound waves, which cause thermal coagulation at precise depths beneath the targeted skin, resulting in collagen denaturation and subsequent collagen remodeling [12, 13]. Injecting small amounts of CaHA may then precipitate further collagen building at specific points of injection: the CaHA particles form a matrix that activates fibroblasts to produce more collagen and elastin [15]. Superficial injections of diluted or hyperdiluted CaHA promote skin regeneration without volumizing the brachial tissues, with the dilution titrated according to skin thickness [14, 19]. Indeed, a limited number of studies have demonstrated increased neocollagenesis with combined use of MFU-V and CaHA, although these examined other body areas, such as the neck, décolletage, and buttocks [16, 17, 18]. In the case study by Casabona et al, histological examination of skin samples from the inner thighs revealed thicker and denser collagen fibers at 24 weeks when the treatments were combined compared with a single procedure [18]. In addition, the skin samples showed no significant differences in immunological patterns between areas treated with MFU-V combined with CaHA and those treated with CaHA alone, supporting the safety profile of the combined treatment [18]. In a subsequent study, significant improvements in skin laxity and cellulite severity in the buttocks and upper thighs were observed 90 days after treatment with MFU-V combined with diluted CaHA [17]. Similarly, the combined use of MFU-V and diluted CaHA has been shown to improve skin laxity and lines on the neck and décolletage [16]. In both studies, the combined approach was well tolerated and associated with high subject satisfaction [16, 17].

Consistent with these previous studies, our study showed clinically and statistically significant improvement in brachial skin laxity with the combined treatment, as assessed by both objective and subjective outcome measures. Cutometer assessment of skin firmness (R0) and elasticity (R2), which has been used extensively in the dermatologic literature to measure changes in skin quality due to various factors, including aging [20], demonstrated significant improvements in objective biophysical parameters of the brachial skin. In addition, both the reviewer-rated arm VAS and the investigator-rated GAIS revealed improvement, with the majority of subjects assessed as having improved arm appearance after treatment.

The cutometer measure of skin firmness (R0) improved progressively throughout the course of the study, with statistically significant improvement from baseline observed from 12 weeks onwards.
Notably, it was also at 12 weeks that the greatest improvements from baseline in mean R2 (skin elasticity) and mean arm VAS score were observed. The mean arm VAS score improved by almost one grade, from 2.9 (close to moderate laxity) to 2.1 (close to mild laxity), after only one treatment session. In addition, the highest percentage of subjects with improved arm appearance, as assessed by GAIS, was noted at 12 weeks. These observations coincide with the timeframe required for neocollagenesis observed in earlier studies, in which improvements in neck, décolletage, and cellulite appearance were seen 12 weeks after combined treatment with MFU-V and diluted CaHA [16, 17]. Histological examination of skin samples from the inner thighs showed more type I and type III collagen in samples treated with MFU-V combined with a 1:1 dilution of CaHA than in untreated samples [17]. It should be noted that the improvement in mean arm VAS score from baseline was smaller at 24 weeks than at 12 weeks, and the percentage of subjects with improved aesthetic appearance was likewise lower at 24 weeks. This trend could reflect the inherent limitations of the subjective outcome measures; photographic assessment of arm appearance is also challenging, as changes in skin quality are difficult to capture on photographs. In light of these drawbacks, objective outcome measures (eg, cutometer assessments of biophysical changes in skin quality) become all the more crucial for detecting changes that may not be assessed accurately by more subjective and less sensitive measures.

The use of patient-reported outcome measures, such as the subject global satisfaction scale, allows aesthetic physicians to capture valuable data on subjects' perception of improvement. In this study, the majority of subjects at each time-point reported being satisfied with the treatment, and no subjects were dissatisfied. Consistent with the results for R2, arm VAS score, and investigator-rated improvement in arm appearance, the highest percentage of satisfied subjects was observed 12 weeks after treatment, with a lower percentage at 24 weeks. It is plausible that subjects developed perception drift during the study [22]: as commonly perceived flaws are increasingly addressed, subjects fixate on previously ignored flaws and judge their perceived improvement against a new baseline, making them prone to underestimate the magnitude of improvement. In addition, the follow-up visits fell within the period of COVID-19-related movement restrictions in Singapore, when the public was asked not to leave home except for essential purposes. It is therefore possible that some fluctuations in weight occurred during this period, affecting subjects' satisfaction. Furthermore, subjects were likely distracted and concerned by other COVID-19-related issues at this time, potentially influencing their mindset regarding the procedure.

In this study, combined use of MFU-V and diluted CaHA in a single treatment session was well tolerated. AEs were limited to mild bruising, redness, and paresthesia, and were transient.
The safety profile of the combined treatment is consistent with that observed in studies using MFU-V or diluted CaHA individually in the brachial area or other body areas [8, 9, 10, 11].

A limitation of this study is the small sample size and the relatively short follow-up period. Both MFU-V and CaHA have individually demonstrated a duration of effect of at least one year [23]; future work should follow subjects for longer to examine whether further improvement can be observed and to assess the durability of the aesthetic effects of the combined treatment. We acknowledge that, because of time constraints, diluted CaHA injection did not immediately follow MFU-V treatment in some patients; nonetheless, the investigators ensured that diluted CaHA was injected within one week of MFU-V treatment. This is not expected to have had a significant impact on the treatment results, as neocollagenesis occurs over several months [16, 17] and a one-week interval is unlikely to make an appreciable difference. Next, there is no validated clinical grading scale for assessing the severity of laxity in the upper arm: the arm VAS developed by Amselem [10] has been published but not formally validated. In addition, racial/ethnic differences in response to these treatments need to be evaluated. There are recognized differences in skin quality characteristics between Asians and Caucasians associated with aging [24], and certain non-invasive treatments may be more effective for particular racial groups and Fitzpatrick skin types. Arguably, non-invasive procedures such as MFU-V in combination with CaHA may be more appropriate for individuals with higher Fitzpatrick scores, given the higher risk of hypertrophic scarring and keloid formation in this population [25]. Baseline severity of brachial laxity and BMI may also influence the response to this treatment combination; however, we did not perform analyses stratified by such clinical and demographic characteristics because of sample size constraints. Future studies in a larger population should assess whether these baseline characteristics influence treatment outcomes and identify the individuals best suited to these non-invasive treatments. Another limitation is the use of a standardized treatment protocol at two pre-determined depths of 4.5 mm and 3.0 mm; further improvement might have been achieved by customizing the treatment depths, as the real-time ultrasound visualization capability of MFU-V can be used to determine the precise depths of target tissue layers and to guide selection of transducers according to individual needs. Nevertheless, the results of this study are positive and encourage further investigation. Future work could examine treating the posterior aspect of the upper arms for circumferential tightening and lifting, and ways to reduce adverse events.

Conclusions

This study demonstrates the novel finding that combination treatment with MFU-V followed by injection of diluted or hyperdiluted CaHA is effective for the management of brachial skin laxity, as shown by clinically and statistically significant improvements in both objective and subjective outcome measures.

Conflicts of interest

This study was supported by a research grant from Merz Aesthetics.
Sylvia Ramirez and Ivan Puah report no additional conflicts of interest in this work.

Author contributions

Both authors contributed to study conception and design and to the acquisition and interpretation of data, with the first author contributing to a greater extent in both respects. Both were involved in revising the manuscript for important intellectual content, again with the first author contributing to a greater extent, and both have given final approval of the version to be published.

Ethics statement

The study was approved by the Parkway Pantai institutional ethics committee (approval reference: PIEC/2019/024), and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre- and post-treatment photographs was obtained from the subject whose photographs were included in the manuscript.

Data availability

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
[ "arm laxity", "calcium hydroxylapatite", "combination therapy", "microfocused ultrasound with visualization" ]
Introduction

There is a paradigm shift in body contouring treatments [1]. Individuals are paying more attention to concerns in visible body areas, including the arms, where sagging and laxity in the brachial area may present as a "batwing appearance" [2], and along with this comes a greater desire for non-invasive procedures [3]. Indeed, data from the American Society for Dermatologic Surgery (ASDS) suggest that the number of invasive and non-invasive body contouring procedures performed increased fourfold between 2012 and 2018 [4]. Brachial skin laxity in particular can negatively affect an individual's quality of life [5]. The condition becomes more apparent as collagen and elastin levels decline with aging and weight loss, causing the skin to lose firmness and elasticity [2, 6]. The demand for improvement in brachial skin laxity is evidenced by the more than 50-fold increase in upper arm lifts (brachioplasty) from 2000 to 2019 reported in the Plastic Surgery Statistics Report of the American Society of Plastic Surgeons (ASPS) [3]. Although surgical procedures can improve the appearance of the brachial area, they can be associated with significant complications and extensive scarring, often requiring revision surgery [7]. Accordingly, there has been a rise in demand for non-invasive procedures for the improvement of skin laxity [1]. Microfocused ultrasound with visualization (MFU-V; Ultherapy®, Merz North America, Inc., Raleigh, NC) and calcium hydroxylapatite (CaHA; Radiesse®, Merz North America, Inc.) injections are examples of non-invasive procedures that stimulate collagen and elastin production. Independently, each has been shown to promote dermal remodeling, resulting in lifting and tightening of lax skin, primarily on the face and neck, and limited data suggest potential applications in other body sites [8, 9, 10, 11]. MFU-V generates focused ultrasound waves that are converted to heat, creating discrete thermal coagulation points at precise depths beneath the skin's surface and resulting in collagen denaturation and subsequent new collagen production [12, 13]. A small number of studies has demonstrated improvement of brachial laxity with MFU-V [8, 9]; these are limited by the predominant use of subjective assessments to evaluate clinical improvement, without objective measurements. An alternative option has been the use of CaHA as a biostimulatory agent [14]. Treatment with subdermal injections of diluted or hyperdiluted CaHA has been shown to stimulate neocollagenesis, elastogenesis, and neoangiogenesis, with a corresponding increase in dermal thickness and elasticity [15]. To date, a handful of case studies has reported the use of CaHA injections to improve skin laxity in the brachial region and other areas [10, 11]. In an effort to pursue further improvement in skin tightening, especially in body sites, recent studies have examined the effectiveness of combining MFU-V and CaHA for addressing skin laxity in the neck and buttocks [16, 17]. Given that each of these modalities improves collagen stores, it is plausible that a combination of both procedures could produce synergistic effects [17, 18]. There is, however, no publication to date on the combined use of both modalities in a single treatment session in the brachial region.
Given the increasing demand for arm skin tightening procedures, particularly with non-invasive treatments, this prospective pilot study aimed to assess the effectiveness of MFU-V combined with subdermal injections of diluted or hyperdiluted CaHA for improving brachial skin laxity.
Subjects who met any of the following criteria were excluded from participation: (1) used immunosuppressive drugs or had any active systemic or local skin disease that may alter wound healing; (2) used antiplatelet agents or anticoagulants; (3) had significant scarring or keloid scarring in the proposed treatment areas, or a history of keloid formation; (4) had significant open wounds or lesions, or presence of active implants in the proposed treatment areas; (5) performed non‐invasive fat reduction procedures; ablative or non‐ablative skin procedures, or surgical procedures to the proposed treatment areas within the past six months; and (6) were pregnant or lactating during screening; and (7) had body mass index (BMI) >28 kg/m2. Intervention MFU‐V treatment was first administered as described in previously published studies for brachial skin laxity. 8 , 9 Diluted or hyperdiluted CaHA was then administered according to the recommendations of published consensus guidelines. 14 , 19 Analgesia was achieved with oral administration of ibuprofen (800 mg) and application of a topical anesthetic (EMLA cream: a eutectic mixture of local anesthetic—lidocaine 2.5% and prilocaine 2.5%, or BLT cream: benzocaine 20%, lidocaine 6%, tetracaine 4%) one hour before treatment. Oral paracetamol (1 g) or tramadol (50 mg) was given to subjects who were allergic to non‐steroidal anti‐inflammatory drugs (NSAIDs). The treatment area was marked with the subject standing upright and with the arms extended at 45 degrees from the body using the lower horizontal border of the underarm area and the outer border of the axilla border as reference points. A standardized ruler that matched the width and height of the MFU‐V transducer was used to create a grid as shown in Figure 1A. A thin layer of ultrasound gel was then applied to the subject's arms. Each transducer was placed on the skin and was evenly coupled to the skin surface before treatment was administered. The brachial regions were treated in a standardized pattern as previously described 8 , 9 at two depths using the 4.0 MHz‐4.5 mm depth transducer for deeper penetration to the superficial fascial layer in the first pass, followed by the 7.0 MHz‐3.0 mm depth transducer for more superficial penetration to the deep dermis in the second pass. Depending on the surface area to be treated, a total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered to each arm using each transducer (Figure 1A), for a total of 360–480 lines per arm with both transducers, with more lines for patients with larger arm volumes. At least half of the height of the surface area of the inner arm was treated. The treatment protocol was not modified by severity of brachial skin laxity. Treatment administration. (A) MFU‐V treatment: The brachial regions were treated in a standardized pattern at two depths using the 4.0 MHz‐4.5 mm depth transducer and 7.0 MHz‐3.0 mm depth transducer. A total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered using each transducer. (B) CaHA treatment: subjects received subdermal injections of diluted or hyperdiluted CaHA with a 25‐gauge cannula using a retrograde fanning technique. 
The crosses in Figure 1A denote the three standardized points where cutometer readings were taken Depending on individual subject's availability, subjects received subdermal injections of diluted or hyperdiluted CaHA immediately or up to one week following MFU‐V treatment in the same areas with a 25‐gauge cannula using a retrograde fanning technique (Figure 1B) in accordance with the recommendations of published consensus guidelines. 14 , 19 Two different dilutions were used depending on investigator's assessment of skin thickness. A dilution ratio of 1:2 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 2.7 ml normal saline, for a total volume of 4.5 ml per arm) was used for subjects with thinner skin. A dilution ratio of 1:1 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 1.2 ml normal saline, for a total volume of 3 ml per arm) was used for subjects with thicker skin. After injecting CaHA, vigorous massage of the upper arm areas was performed to ensure even distribution of CaHA. MFU‐V treatment was first administered as described in previously published studies for brachial skin laxity. 8 , 9 Diluted or hyperdiluted CaHA was then administered according to the recommendations of published consensus guidelines. 14 , 19 Analgesia was achieved with oral administration of ibuprofen (800 mg) and application of a topical anesthetic (EMLA cream: a eutectic mixture of local anesthetic—lidocaine 2.5% and prilocaine 2.5%, or BLT cream: benzocaine 20%, lidocaine 6%, tetracaine 4%) one hour before treatment. Oral paracetamol (1 g) or tramadol (50 mg) was given to subjects who were allergic to non‐steroidal anti‐inflammatory drugs (NSAIDs). The treatment area was marked with the subject standing upright and with the arms extended at 45 degrees from the body using the lower horizontal border of the underarm area and the outer border of the axilla border as reference points. A standardized ruler that matched the width and height of the MFU‐V transducer was used to create a grid as shown in Figure 1A. A thin layer of ultrasound gel was then applied to the subject's arms. Each transducer was placed on the skin and was evenly coupled to the skin surface before treatment was administered. The brachial regions were treated in a standardized pattern as previously described 8 , 9 at two depths using the 4.0 MHz‐4.5 mm depth transducer for deeper penetration to the superficial fascial layer in the first pass, followed by the 7.0 MHz‐3.0 mm depth transducer for more superficial penetration to the deep dermis in the second pass. Depending on the surface area to be treated, a total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered to each arm using each transducer (Figure 1A), for a total of 360–480 lines per arm with both transducers, with more lines for patients with larger arm volumes. At least half of the height of the surface area of the inner arm was treated. The treatment protocol was not modified by severity of brachial skin laxity. Treatment administration. (A) MFU‐V treatment: The brachial regions were treated in a standardized pattern at two depths using the 4.0 MHz‐4.5 mm depth transducer and 7.0 MHz‐3.0 mm depth transducer. A total of 180–240 treatment lines (20 lines per 2.5 × 2.5 cm square) were administered using each transducer. (B) CaHA treatment: subjects received subdermal injections of diluted or hyperdiluted CaHA with a 25‐gauge cannula using a retrograde fanning technique. 
The crosses in Figure 1A denote the three standardized points where cutometer readings were taken Depending on individual subject's availability, subjects received subdermal injections of diluted or hyperdiluted CaHA immediately or up to one week following MFU‐V treatment in the same areas with a 25‐gauge cannula using a retrograde fanning technique (Figure 1B) in accordance with the recommendations of published consensus guidelines. 14 , 19 Two different dilutions were used depending on investigator's assessment of skin thickness. A dilution ratio of 1:2 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 2.7 ml normal saline, for a total volume of 4.5 ml per arm) was used for subjects with thinner skin. A dilution ratio of 1:1 (1.5 ml of CaHA diluted with 0.3 ml 2% lidocaine and 1.2 ml normal saline, for a total volume of 3 ml per arm) was used for subjects with thicker skin. After injecting CaHA, vigorous massage of the upper arm areas was performed to ensure even distribution of CaHA. Study assessments Subjects were followed for six months after treatment and evaluated at 4, 12, and 24 weeks. The primary outcome evaluated was the change in biophysical skin parameters, specifically skin firmness (R0) and skin elasticity (R2) during the 4, 12, and 24 week follow‐up visits as compared with baseline. These parameters were measured using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany), which is an established objective assessment of skin firmness and skin elasticity in dermatologic clinical studies. 20 The cutometer uses a suction method which creates a negative pressure that draws the skin into a probe and releases it after a defined time. It measures skin firmness (R0: the resistance of the skin to negative pressure, lower R0 value denotes firmer skin) and elasticity (R2: the ability of the skin to return to its original position, higher R2 denotes more elastic skin) using a non‐contact optical measuring system. 21 Readings were taken at three standardized points on each arm during each visit (Figure 1A), and the average of three measurements was reported for each visit. Secondary outcomes utilized subjective assessments including the arm visual analogue scale (VAS), 10 the global aesthetic improvement scale (GAIS), and subject global satisfaction scale. Standardized photographs of upper arms were taken before treatment and at each visit post‐treatment with the subject standing and both arms outstretched at a 45‐degree angle from the body using standardized camera angles and room lighting. Two blinded reviewers evaluated photographs of upper arms and provided their assessments using the VAS for upper arms developed by Amselem et al (Type I = no laxity, Type II = mild laxity, Type III = moderate laxity, Type IV = severe laxity, and Type V = very severe laxity). 10 The average VAS scores from both reviewers were reported for each visit. In addition, participating investigators compared photographs from each post‐treatment visit with baseline and provided their assessments on changes in overall aesthetic appearance after treatment using the GAIS (0 = worsened, 1 = no change, 2 = improved, 3 = much improved, and 4 = very much improved), as previously described. 11 Subjects provided their assessments of treatment satisfaction at each post‐treatment visit using the subject global satisfaction scale (1 = very dissatisfied to 5 = very satisfied). Adverse events (AEs) were recorded immediately after treatment and at all post‐treatment visits. 
Subjects were followed for six months after treatment and evaluated at 4, 12, and 24 weeks. The primary outcome evaluated was the change in biophysical skin parameters, specifically skin firmness (R0) and skin elasticity (R2) during the 4, 12, and 24 week follow‐up visits as compared with baseline. These parameters were measured using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany), which is an established objective assessment of skin firmness and skin elasticity in dermatologic clinical studies. 20 The cutometer uses a suction method which creates a negative pressure that draws the skin into a probe and releases it after a defined time. It measures skin firmness (R0: the resistance of the skin to negative pressure, lower R0 value denotes firmer skin) and elasticity (R2: the ability of the skin to return to its original position, higher R2 denotes more elastic skin) using a non‐contact optical measuring system. 21 Readings were taken at three standardized points on each arm during each visit (Figure 1A), and the average of three measurements was reported for each visit. Secondary outcomes utilized subjective assessments including the arm visual analogue scale (VAS), 10 the global aesthetic improvement scale (GAIS), and subject global satisfaction scale. Standardized photographs of upper arms were taken before treatment and at each visit post‐treatment with the subject standing and both arms outstretched at a 45‐degree angle from the body using standardized camera angles and room lighting. Two blinded reviewers evaluated photographs of upper arms and provided their assessments using the VAS for upper arms developed by Amselem et al (Type I = no laxity, Type II = mild laxity, Type III = moderate laxity, Type IV = severe laxity, and Type V = very severe laxity). 10 The average VAS scores from both reviewers were reported for each visit. In addition, participating investigators compared photographs from each post‐treatment visit with baseline and provided their assessments on changes in overall aesthetic appearance after treatment using the GAIS (0 = worsened, 1 = no change, 2 = improved, 3 = much improved, and 4 = very much improved), as previously described. 11 Subjects provided their assessments of treatment satisfaction at each post‐treatment visit using the subject global satisfaction scale (1 = very dissatisfied to 5 = very satisfied). Adverse events (AEs) were recorded immediately after treatment and at all post‐treatment visits. Statistical analyses Descriptive statistics were primarily used to summarize the results. Cutometer values (R0 and R2) and arm VAS scores were summarized using mean and standard deviation. Investigators’ GAIS, subject satisfaction, and AEs were summarized by percentages. The Wilcoxon signed‐rank test was used to compare cutometer values and arm VAS scores at each follow‐up visit with baseline values. p values < 0.05 were considered statistically significant. R (version 4.0.3, 2020–10–10), and R Studio (version 1.3.1093) were used to perform the analyses. Descriptive statistics were primarily used to summarize the results. Cutometer values (R0 and R2) and arm VAS scores were summarized using mean and standard deviation. Investigators’ GAIS, subject satisfaction, and AEs were summarized by percentages. The Wilcoxon signed‐rank test was used to compare cutometer values and arm VAS scores at each follow‐up visit with baseline values. p values < 0.05 were considered statistically significant. 
R (version 4.0.3, 2020–10–10), and R Studio (version 1.3.1093) were used to perform the analyses. Study design: This was a prospective, open‐label, non‐randomized, single‐arm case series of healthy female subjects from two outpatient clinics in Singapore conducted between September 2019 and August 2020. Subjects underwent one treatment session of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improvement of brachial skin laxity and were followed for 24 weeks over three visits to evaluate the effectiveness of the combination treatment. The study was approved by the relevant ethics committee, and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre‐ and post‐treatment photographs was obtained from the subject whose photographs were included in the manuscript. Study population: Female subjects aged 35–65 years of all skin types and ethnic groups who had skin laxity in the brachial regions, desired lifting and tightening of the upper arm skin through non‐surgical interventions, were in good health, and could comply with the study requirements were included in the study. Subjects who met any of the following criteria were excluded from participation: (1) used immunosuppressive drugs or had any active systemic or local skin disease that may alter wound healing; (2) used antiplatelet agents or anticoagulants; (3) had significant scarring or keloid scarring in the proposed treatment areas, or a history of keloid formation; (4) had significant open wounds or lesions, or presence of active implants in the proposed treatment areas; (5) performed non‐invasive fat reduction procedures; ablative or non‐ablative skin procedures, or surgical procedures to the proposed treatment areas within the past six months; and (6) were pregnant or lactating during screening; and (7) had body mass index (BMI) >28 kg/m2. Intervention: MFU‐V treatment was first administered as described in previously published studies for brachial skin laxity. 8 , 9 Diluted or hyperdiluted CaHA was then administered according to the recommendations of published consensus guidelines. 14 , 19 Analgesia was achieved with oral administration of ibuprofen (800 mg) and application of a topical anesthetic (EMLA cream: a eutectic mixture of local anesthetic—lidocaine 2.5% and prilocaine 2.5%, or BLT cream: benzocaine 20%, lidocaine 6%, tetracaine 4%) one hour before treatment. Oral paracetamol (1 g) or tramadol (50 mg) was given to subjects who were allergic to non‐steroidal anti‐inflammatory drugs (NSAIDs). The treatment area was marked with the subject standing upright and with the arms extended at 45 degrees from the body using the lower horizontal border of the underarm area and the outer border of the axilla border as reference points. A standardized ruler that matched the width and height of the MFU‐V transducer was used to create a grid as shown in Figure 1A. A thin layer of ultrasound gel was then applied to the subject's arms. Each transducer was placed on the skin and was evenly coupled to the skin surface before treatment was administered. 
Study assessments: Subjects were followed for six months after treatment and evaluated at 4, 12, and 24 weeks. The primary outcome was the change in biophysical skin parameters, specifically skin firmness (R0) and skin elasticity (R2), at the 4, 12, and 24 week follow‐up visits as compared with baseline. These parameters were measured using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany), an established objective assessment of skin firmness and skin elasticity in dermatologic clinical studies. 20 The cutometer uses a suction method that creates a negative pressure, drawing the skin into a probe and releasing it after a defined time. It measures skin firmness (R0: the resistance of the skin to negative pressure; a lower R0 value denotes firmer skin) and elasticity (R2: the ability of the skin to return to its original position; a higher R2 denotes more elastic skin) using a non‐contact optical measuring system. 21 Readings were taken at three standardized points on each arm during each visit (Figure 1A), and the average of the three measurements was reported for each visit.
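For readers unfamiliar with cutometer output, the R0 and R2 values described above correspond to the device's standard deformation-curve parameters. A minimal formulation, using the conventional Uf/Ua notation (maximum skin deformation and total recovery, respectively), which the study itself does not spell out, is:

\[
R0 = U_f\ [\text{mm}], \qquad R2 = \frac{U_a}{U_f}, \quad 0 \le R2 \le 1
\]

A smaller R0 means the skin deforms less under the same suction (firmer skin), and an R2 closer to 1 means a larger fraction of the deformation is recovered after release (more elastic skin), matching the interpretations given above.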
Secondary outcomes were subjective assessments: the arm visual analogue scale (VAS), 10 the global aesthetic improvement scale (GAIS), and the subject global satisfaction scale. Standardized photographs of the upper arms were taken before treatment and at each post‐treatment visit with the subject standing and both arms outstretched at a 45‐degree angle from the body, using standardized camera angles and room lighting. Two blinded reviewers evaluated the photographs of the upper arms and provided their assessments using the VAS for upper arms developed by Amselem et al (Type I = no laxity, Type II = mild laxity, Type III = moderate laxity, Type IV = severe laxity, and Type V = very severe laxity). 10 The average VAS scores from both reviewers were reported for each visit. In addition, participating investigators compared photographs from each post‐treatment visit with baseline and rated changes in overall aesthetic appearance using the GAIS (0 = worsened, 1 = no change, 2 = improved, 3 = much improved, and 4 = very much improved), as previously described. 11 Subjects rated their treatment satisfaction at each post‐treatment visit using the subject global satisfaction scale (1 = very dissatisfied to 5 = very satisfied). Adverse events (AEs) were recorded immediately after treatment and at all post‐treatment visits. Statistical analyses: Descriptive statistics were primarily used to summarize the results. Cutometer values (R0 and R2) and arm VAS scores were summarized using mean and standard deviation. Investigators' GAIS ratings, subject satisfaction, and AEs were summarized as percentages. The Wilcoxon signed‐rank test was used to compare cutometer values and arm VAS scores at each follow‐up visit with baseline values. p values < 0.05 were considered statistically significant. R (version 4.0.3) and RStudio (version 1.3.1093) were used to perform the analyses.
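A minimal sketch of this baseline-versus-follow-up comparison in R (the language the authors report using); the data frame, its column names, and the values in it are hypothetical placeholders, not the study's actual data or analysis code:

    # Paired Wilcoxon signed-rank test of per-subject mean R0 at 24 weeks vs baseline,
    # as described above; replace the hypothetical values with real per-visit means.
    dat <- data.frame(
      r0_baseline = c(0.52, 0.61, 0.47, 0.55, 0.50),  # hypothetical readings (mm)
      r0_week24   = c(0.44, 0.48, 0.41, 0.46, 0.42)
    )
    wilcox.test(dat$r0_week24, dat$r0_baseline, paired = TRUE)  # p < 0.05 => significant change
    # Summaries reported as mean (SD), matching the descriptive statistics above:
    c(mean = mean(dat$r0_baseline), sd = sd(dat$r0_baseline))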
RESULTS: Subject demographics and baseline characteristics: Twelve female subjects participated in and completed the study. Although follow‐up data were available for all 12 subjects at 24 weeks, data were available for only 11 and 8 subjects at 4 weeks and 12 weeks, respectively, as these visits fell within the period of COVID‐19–related movement restrictions in Singapore. Subject demographics and baseline characteristics are summarized in Table 1. The mean age was 51.3 (SD 7.3) years. Three subjects (25%) were Asian and the remaining nine (75%) were Caucasian. The mean BMI was 23.3 (SD 3.1) kg/m2; five subjects (42%) had BMI 20–23 kg/m2, three (25%) had BMI >23 to 25 kg/m2, another three (25%) had BMI >25 kg/m2, and one (8%) had BMI <20 kg/m2. At baseline, the mean arm VAS score (average of both arms) was 2.9 (SD 0.6). All arms (24, 100%) had a VAS score of 2 or more, indicating at least mild brachial laxity (VAS score 2: 3 arms, 13%; VAS score >2–3: 16 arms, 67%; VAS score >3–4: 4 arms, 17%; and VAS score >4: one arm, 4%). Subject demographics and baseline characteristics. Data presented are mean (SD) unless otherwise stated. Arm VAS was reported based on the average score from both reviewers. Abbreviations: BMI, body mass index; SD, standard deviation; VAS, visual analogue scale. aAverage of both arms. bPercentage calculated out of a total of 24 arms. Impact on objective outcome measures: biophysical skin parameters: Cutometer readings demonstrated a progressive decrease in the mean R0 reading (the lower the reading, the firmer the skin) throughout the course of the study, suggesting progressive improvement in skin firmness from baseline at all time‐points (Figure 2A). The mean R0 reading decreased from 0.515 (SD 0.098) mm at baseline to 0.433 (SD 0.049) mm at 24 weeks. Statistically significant improvements were observed from 12 weeks onwards (p < 0.05 for 12 and 24 weeks). In addition, the mean R2 reading (the closer the reading to 1, the better the elasticity of the skin) improved significantly from baseline at all time‐points (p < 0.05 for all) (Figure 2B). The mean R2 reading increased from 0.816 (SD 0.032) at baseline to 0.847 (SD 0.037) at 12 weeks, where maximum improvement was observed. Objective biophysical parameters of the brachial skin over time.
(A) Mean (SD) R0 readings, a measure of skin firmness, and (B) mean (SD) R2 readings, a measure of skin elasticity, before treatment and at 4, 12, and 24 weeks after treatment. Impact on subjective outcome measures: aesthetic appearance and subject satisfaction: Reviewer‐rated arm VAS scores improved significantly from baseline at all time‐points (p < 0.05 for all) (Figure 3). The mean arm VAS score decreased from 2.9 (SD 0.6) at baseline to 2.1 (SD 0.7) at 12 weeks, where maximum improvement was observed. Figure 4 shows the change in overall skin quality in a subject who had a VAS score of 4 (severe laxity) at baseline. Pre‐ and post‐treatment photographs depict the improvement in overall skin quality and laxity from baseline to 24 weeks after treatment. Mean (SD) arm VAS scores before treatment and at 4, 12, and 24 weeks after treatment. Photographs of the upper arms of a subject before treatment (left) and at 24 weeks after combined treatment with microfocused ultrasound with visualization and diluted calcium hydroxylapatite (right). Overall change in arm appearance was assessed using the GAIS (Figure 5). Based on investigators' ratings, over 70% of all subjects showed an improved aesthetic appearance compared with baseline at all time‐points throughout the study. At 4 weeks, 73% of subjects (8/11) were assessed to have improvement of brachial laxity. By 12 and 24 weeks, 88% (7/8) and 83% (10/12) of subjects were assessed to have improved, respectively. Furthermore, over 37% of subjects were reported to have "much improved" or "very much improved" brachial laxity by 12 and 24 weeks. There were no reported cases of worsening of brachial laxity. Investigators' global aesthetic improvement scale (GAIS) ratings over time. When asked to rate their overall satisfaction with the treatment, 55% of the subjects (6/11) reported they were satisfied at 4 weeks (Figure 6). At 12 weeks, the highest percentage of satisfied subjects (75%; 6/8) was noted. By 24 weeks, 58% (7/12) were still satisfied with the treatment. No subjects were dissatisfied during the study. Subject satisfaction over time.
Adverse events: No serious AEs were noted during the study. All AEs were mild and transient in nature. The most common AEs were mild bruising (92%; 11/12) and redness (25%; 3/12), which resolved spontaneously within a week. One subject reported mild paresthesia (electric shock sensations) down the left arm, which also resolved spontaneously over two weeks.
DISCUSSION: This 24‐week pilot study assessed the effectiveness of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA in healthy Asian and Caucasian female subjects with brachial skin laxity. Our findings demonstrate that a single session of the combined treatment modalities was associated with significant improvement in skin laxity and improved aesthetic appearance of the brachial region. Objective outcome measures using a cutometer demonstrated significantly improved skin firmness and skin elasticity after the combination of treatments. The subjective outcome measures, including the arm VAS, GAIS, and subject global satisfaction scale, demonstrated improvement in brachial laxity and a high degree of subject satisfaction. These encouraging results are especially relevant given the novel findings of this study and the growing demand for such treatments. This study, to our knowledge, is the first to assess single‐session combined use of MFU‐V and diluted or hyperdiluted CaHA treatment for brachial skin laxity.
This is of particular importance given the increased concern over laxity in the brachial region, as evidenced by the marked increase in demand for upper arm lifts. 3 While this was a pilot study, we chose to recruit as diverse a group of subjects as possible. The study cohort included subjects with a range of BMI from <20 kg/m2 to >25 kg/m2, varying degrees of laxity, as well as Asian and Caucasian subjects. Given that MFU‐V and CaHA stimulate neocollagenesis via different mechanisms, there is considerable rationale for combining both modalities in the same treatment area for enhanced neocollagenesis. MFU‐V generates focused ultrasound waves, which cause thermal coagulation at precise depths beneath the targeted skin areas, resulting in collagen denaturation and subsequent collagen remodeling. 12 , 13 Injecting small amounts of CaHA may then precipitate further collagen building at specific points of injection. The CaHA particles form a matrix, which activates fibroblasts to produce more collagen and elastin. 15 Superficial injections of diluted or hyperdiluted CaHA result in skin regeneration without causing volumization of the brachial tissues. CaHA dilution is titrated according to skin thickness. 14 , 19 Indeed, a limited number of studies have demonstrated increased neocollagenesis with combined use of MFU‐V and CaHA, and these were conducted in other body areas, such as the neck, décolletage, and buttocks. 16 , 17 , 18 In the case study by Casabona et al, histological examination of skin samples taken from the inner thighs revealed thicker and denser collagen fibers at 24 weeks when treatments were combined compared with those treated with a single procedure. 18 In addition, the skin samples showed no significant differences in immunological patterns between those treated with MFU‐V combined with CaHA and those treated with CaHA alone, demonstrating the safety profile of the combined treatment. 18 In a subsequent study, significant improvements in skin laxity and cellulite severity in the buttocks and upper thighs were observed 90 days following treatment with MFU‐V combined with diluted CaHA. 17 Similarly, the combined use of MFU‐V and diluted CaHA has been shown to improve skin laxity and lines on the neck and the décolletage. 16 The combined approach was well‐tolerated and was associated with high subject satisfaction. 16 , 17 Consistent with these previous studies, our study showed clinically and statistically significant improvement in skin laxity in the brachial area with the combined treatment. In our study, combined use of MFU‐V and subdermal injections of diluted or hyperdiluted CaHA was associated with improvements in brachial skin laxity as assessed by both objective and subjective outcome measures. Cutometer assessment of skin firmness (R0) and elasticity (R2), which has been used extensively in the dermatologic literature to measure changes in skin quality due to various factors, including aging, 20 demonstrated significant improvements in objective biophysical parameters of the brachial skin. In addition, both the reviewer‐rated arm VAS and the investigator‐rated GAIS revealed improvement, with the majority of subjects assessed to have improved arm appearance after treatment. The cutometer measure of skin firmness (R0 reading) progressively improved throughout the course of the study, with statistically significant improvement from baseline observed from 12 weeks onwards.
Notably, it was also at the 12‐week time‐point that the greatest improvements from baseline in the mean R2 reading (a measure of skin elasticity) and the mean arm VAS score were observed. The mean arm VAS score improved by almost one grade, from a mean score of 2.9 (close to a rating of moderate laxity) to 2.1 (close to mild laxity), after only one treatment session. In addition, the highest percentage of subjects with improved arm appearance was noted at 12 weeks, as assessed by GAIS. These observations coincide with the timeframe required for neocollagenesis as observed in earlier studies, where improvements in neck, décolletage, and cellulite appearance were observed 12 weeks after combined treatment with MFU‐V and diluted CaHA. 16 , 17 Histological examination of skin samples taken from the inner thighs showed more type I and type III collagen in samples treated with MFU‐V combined with a 1:1 dilution of CaHA compared with untreated samples. 17 It should be noted that the improvement in mean arm VAS score from baseline was lower at 24 weeks than at 12 weeks, and the percentage of subjects with improved aesthetic appearance at 24 weeks was lower than that at 12 weeks. This trend could possibly be due to the inherent limitations of the subjective outcome measures. Also, photographic assessment of arm appearance can be challenging, as it is difficult to capture changes in skin quality on photographs. In light of these drawbacks, objective outcome measures (eg, cutometer assessments to detect biophysical changes in skin quality) become all the more crucial for measuring changes that may not be detected or assessed accurately by more subjective and less sensitive measures. The use of patient‐reported outcome measures, such as the subject global satisfaction scale, allows aesthetic physicians to capture valuable data to understand subjects' perception of improvement. In this study, the majority of subjects at each time‐point reported that they were satisfied with the treatment, and no subjects were dissatisfied. Consistent with the results for the R2 value, arm VAS score, and investigator‐rated improvement in arm appearance, the highest percentage of satisfied subjects was observed 12 weeks after treatment. A lower percentage of satisfied subjects was noted at 24 weeks than at 12 weeks. It is plausible that subjects developed perception drift 22 during the study. In such cases, as commonly perceived flaws become increasingly addressed, subjects become more fixated on previously ignored flaws and start to judge their perceived improvement against a new baseline, making them prone to underestimating the magnitude of improvement. In addition, the follow‐up visits of this study fell within the period of COVID‐19–related movement restrictions in Singapore, when the public were asked not to leave their homes except for essential purposes. As such, it is possible that some fluctuations in weight occurred during this period, subsequently affecting subjects' satisfaction. Furthermore, subjects were likely to have been distracted and concerned by other issues related to COVID‐19 at this time, potentially influencing their mindset regarding the procedure. In this study, combined use of MFU‐V and diluted CaHA in a single treatment session was well‐tolerated. AEs reported in this study were limited to mild bruising, redness, and paresthesia, and were transient in nature.
The safety profile of the combined treatment is consistent with that observed in studies that used MFU‐V or diluted CaHA individually in the brachial area or other body areas. 8 , 9 , 10 , 11 A limitation of this study was the small sample size and the relatively short follow‐up period. Both MFU‐V and CaHA have individually demonstrated a duration of effect of at least one year. 23 Future work should consider following subjects for a longer period of time to examine whether further improvement can be observed, as well as to assess the durability of the aesthetic effects of the combined treatment. We acknowledge that diluted CaHA injection did not immediately follow MFU‐V treatment in some subjects because of their time constraints. Nonetheless, the investigators ensured that diluted CaHA was injected within one week of MFU‐V treatment. This is not expected to have a significant impact on the treatment results, as the process of neocollagenesis is known to occur over several months 16 , 17 and a one‐week interval is unlikely to make an appreciable difference. Next, there is a lack of a validated clinical grading scale to assess the severity of laxity in the upper arm area. While the arm VAS developed by Amselem et al 10 has been published in the literature, it has not been formally validated. In addition, racial/ethnic differences in response to these treatments need to be evaluated. There are recognized differences in skin quality characteristics between Asians and Caucasians associated with aging, 24 and it is possible that certain non‐invasive treatments may be more effective for particular racial groups and Fitzpatrick skin types. Arguably, non‐invasive procedures such as MFU‐V in combination with CaHA may be more appropriate for individuals with higher Fitzpatrick scores, given the higher risk of hypertrophic scarring and keloid formation in this population. 25 Finally, baseline severity of brachial laxity and a patient's BMI may have a potential impact on the response to this treatment combination. However, we did not perform analyses stratified by clinical and demographic characteristics such as severity of laxity and BMI, owing to sample size constraints. Future studies involving a larger patient population should be considered to assess whether these baseline characteristics have any influence on treatment outcomes and to identify the individuals most suited to these non‐invasive treatments. Another limitation of the study is the use of a standardized treatment protocol at two pre‐determined depths of 4.5 mm and 3.0 mm. Further improvement of outcomes may have been achieved with customization of the treatment depths. The real‐time ultrasound visualization capability of MFU‐V can be used to determine the precise depths of target tissue layers and guide subsequent selection of transducers according to individual needs. Nevertheless, the results of this study are positive and encourage further investigation. Future work could examine treating the posterior aspect of the upper arms for circumferential tightening and lifting with reduced adverse events. CONCLUSION: This study demonstrates the novel finding that combination treatment with MFU‐V followed by injection of diluted or hyperdiluted CaHA is effective for the management of brachial skin laxity, as demonstrated by clinically and statistically significant improvements in both objective and subjective outcome measures. CONFLICTS OF INTEREST: This study was supported by a research grant from Merz Aesthetics.
Sylvia Ramirez and Ivan Puah report no additional conflicts of interest in this work. AUTHORS CONTRIBUTIONS: Both authors contributed to study conception and design, as well as acquisition and interpretation of data, with the first author contributing to a greater extent in both aspects. They were involved in revising the manuscript for important intellectual content, with the first author contributing to a greater extent, and both have given final approval of the version to be published. ETHICAL APPROVAL: The study was approved by the Parkway Pantai institutional ethics committee (approval reference: PIEC/2019/024), and all subjects provided written informed consent before the study commenced. The study was carried out in accordance with the Declaration of Helsinki. Written informed consent for publication of the pre‐ and post‐treatment photographs was obtained from the subject whose photographs were included in the manuscript. DATA AVAILABILITY STATEMENT: The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Background: There is no publication to date on the combined use of microfocused ultrasound with visualization (MFU-V) and calcium hydroxylapatite (CaHA) for brachial skin laxity. Methods: Female subjects who had skin laxity in the brachial regions and who desired non-surgical intervention were enrolled into this prospective, single-arm pilot study. MFU-V (Ultherapy® , Merz North America, Inc. Raleigh, N.C.) was applied using the 4.0 MHz-4.5 mm and 7.0 MHz-3.0 mm depth transducers, followed by subdermal injections of diluted (1:1)/hyperdiluted (1:2) CaHA (Radiesse® , Merz North America, Inc). Subjects were followed for six months after treatment. Objective biophysical skin assessments were conducted using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany). Subjective assessments included the arm visual analogue scale (VAS), global aesthetic improvement scale (GAIS), and subject global satisfaction scale. Results: Twelve subjects participated in the study. The mean R0 reading (measure of skin firmness) progressively improved from 0.515 mm at baseline to 0.433 mm at 24 weeks (p < 0.05 for 12 and 24 weeks). The mean R2 reading (measure of skin elasticity) and mean arm VAS improved significantly from baseline at all visits (p < 0.05 for all). The majority of subjects at each visit showed improved arm appearance and were satisfied with their treatment. Both procedures were well-tolerated. Conclusions: Combined use of MFU-V with diluted/hyperdiluted CaHA demonstrates significant improvements in both objective and subjective measures of brachial skin laxity.
INTRODUCTION: There is a paradigm shift for body contouring treatments. 1 Individuals are paying more attention to concerns in visible body areas, including the arms, where sagging and laxity in the brachial area may present as “batwing appearance,” 2 and along with this is a higher desire for non‐invasive procedures. 3 Indeed, data from the American Society for Dermatologic Surgery (ASDS) suggest that the number of invasive and non‐invasive body contouring procedures performed has increased by fourfold between 2012 and 2018. 4 Brachial skin laxity, in particular, can negatively impact an individual's quality of life. 5 The condition can become more apparent as collagen and elastin levels decline with aging and weight loss, which cause the skin to lose firmness and elasticity. 2 , 6 The demand for improvement in brachial skin laxity is evidenced by the more than 50‐fold increase in “upper arm lifts” or brachioplasty from 2000 to 2019 as reported in the Plastic Surgery Statistics Report by American Society of Plastic Surgeons (ASPS). 3 Although surgical procedures can improve the appearance of the brachial area, they can be associated with significant complications and extensive scarring often requiring revision surgery. 7 Indeed, there has been a rise in demand for non‐invasive procedures for the improvement of skin laxity. 1 Microfocused ultrasound with visualization (MFU‐V; Ultherapy®, Merz North America, Inc. Raleigh, N.C.) and calcium hydroxylapatite (CaHA; Radiesse®, Merz North America, Inc) injections are examples of non‐invasive procedures that stimulate collagen and elastin production. Independently, these have been shown to promote dermal remodeling, resulting in lifting and tightening of lax skin primarily for concerns on the face and neck, but there are limited data to suggest that these procedures may have potential applications for use in other body sites. 8 , 9 , 10 , 11 MFU‐V generates focused ultrasound waves that are converted to heat, creating discrete thermal coagulation points at precise depths beneath the skin's surface resulting in collagen denaturation and subsequent new collagen production. 12 , 13 A small number of studies has demonstrated improvement of brachial laxity with MFU‐V. 8 , 9 These are limited by the predominant use of subjective assessments to evaluate clinical improvements without including objective measurements. An alternative option has been the use of CaHA as a biostimulatory agent. 14 Treatment with subdermal injections of diluted or hyperdiluted CaHA has been shown to stimulate neocollagenesis, elastogenesis, and neoangiogenesis, with corresponding increase in dermal thickness and elasticity. 15 To date, a handful of case studies has reported the use of CaHA injections to improve skin laxity in the brachial region and other areas. 10 , 11 In an effort to pursue further improvement in skin tightening especially in body sites, recent studies have examined the effectiveness of combining both MFU‐V and CaHA treatments for addressing skin laxity in the neck and buttocks. 16 , 17 Given that each of these modalities has been shown to improve collagen stores, it is plausible that a combination of both procedures could produce synergistic effects. 17 , 18 There is, however, no publication to date on the combined use of both modalities in a single treatment session in the brachial region. 
Given the increasing demand for arm skin tightening procedures, particularly with non‐invasive treatments, this prospective pilot study aimed to assess the effectiveness of MFU‐V combined with subdermal injections of diluted or hyperdiluted CaHA for improving brachial skin laxity. CONCLUSION: This study demonstrates novel findings that combination treatment with MFU‐V followed by injection of diluted or hyperdiluted CaHA is effective for the management of brachial skin laxity as demonstrated by clinically and statistically significant improvements in both objective and subjective outcome measures.
Background: There is no publication to date on the combined use of microfocused ultrasound with visualization (MFU-V) and calcium hydroxylapatite (CaHA) for brachial skin laxity. Methods: Female subjects who had skin laxity in the brachial regions and who desired non-surgical intervention were enrolled into this prospective, single-arm pilot study. MFU-V (Ultherapy® , Merz North America, Inc. Raleigh, N.C.) was applied using the 4.0 MHz-4.5 mm and 7.0 MHz-3.0 mm depth transducers, followed by subdermal injections of diluted (1:1)/hyperdiluted (1:2) CaHA (Radiesse® , Merz North America, Inc). Subjects were followed for six months after treatment. Objective biophysical skin assessments were conducted using a cutometer (Cutometer® Dual 580 MPA; Courage & Khazaka, Cologne, Germany). Subjective assessments included the arm visual analogue scale (VAS), global aesthetic improvement scale (GAIS), and subject global satisfaction scale. Results: Twelve subjects participated in the study. The mean R0 reading (measure of skin firmness) progressively improved from 0.515 mm at baseline to 0.433 mm at 24 weeks (p < 0.05 for 12 and 24 weeks). The mean R2 reading (measure of skin elasticity) and mean arm VAS improved significantly from baseline at all visits (p < 0.05 for all). The majority of subjects at each visit showed improved arm appearance and were satisfied with their treatment. Both procedures were well-tolerated. Conclusions: Combined use of MFU-V with diluted/hyperdiluted CaHA demonstrates significant improvements in both objective and subjective measures of brachial skin laxity.
10,916
321
[ 678, 3238, 130, 197, 691, 494, 98, 334, 230, 392, 71, 66, 67 ]
18
[ "treatment", "skin", "subjects", "arm", "weeks", "laxity", "study", "12", "vas", "caha" ]
[ "improving brachial skin", "2018 brachial skin", "improve appearance brachial", "skin laxity brachial", "brachial skin laxity" ]
null
[CONTENT] arm laxity | calcium hydroxylapatite | combination therapy | microfocused ultrasound with visualization [SUMMARY]
null
[CONTENT] arm laxity | calcium hydroxylapatite | combination therapy | microfocused ultrasound with visualization [SUMMARY]
[CONTENT] arm laxity | calcium hydroxylapatite | combination therapy | microfocused ultrasound with visualization [SUMMARY]
[CONTENT] arm laxity | calcium hydroxylapatite | combination therapy | microfocused ultrasound with visualization [SUMMARY]
[CONTENT] arm laxity | calcium hydroxylapatite | combination therapy | microfocused ultrasound with visualization [SUMMARY]
[CONTENT] Cosmetic Techniques | Durapatite | Female | Humans | Patient Satisfaction | Pilot Projects | Prospective Studies | Skin Aging | Treatment Outcome | Ultrasonic Therapy | Ultrasonography [SUMMARY]
null
[CONTENT] Cosmetic Techniques | Durapatite | Female | Humans | Patient Satisfaction | Pilot Projects | Prospective Studies | Skin Aging | Treatment Outcome | Ultrasonic Therapy | Ultrasonography [SUMMARY]
[CONTENT] Cosmetic Techniques | Durapatite | Female | Humans | Patient Satisfaction | Pilot Projects | Prospective Studies | Skin Aging | Treatment Outcome | Ultrasonic Therapy | Ultrasonography [SUMMARY]
[CONTENT] Cosmetic Techniques | Durapatite | Female | Humans | Patient Satisfaction | Pilot Projects | Prospective Studies | Skin Aging | Treatment Outcome | Ultrasonic Therapy | Ultrasonography [SUMMARY]
[CONTENT] Cosmetic Techniques | Durapatite | Female | Humans | Patient Satisfaction | Pilot Projects | Prospective Studies | Skin Aging | Treatment Outcome | Ultrasonic Therapy | Ultrasonography [SUMMARY]
[CONTENT] improving brachial skin | 2018 brachial skin | improve appearance brachial | skin laxity brachial | brachial skin laxity [SUMMARY]
null
[CONTENT] improving brachial skin | 2018 brachial skin | improve appearance brachial | skin laxity brachial | brachial skin laxity [SUMMARY]
[CONTENT] improving brachial skin | 2018 brachial skin | improve appearance brachial | skin laxity brachial | brachial skin laxity [SUMMARY]
[CONTENT] improving brachial skin | 2018 brachial skin | improve appearance brachial | skin laxity brachial | brachial skin laxity [SUMMARY]
[CONTENT] improving brachial skin | 2018 brachial skin | improve appearance brachial | skin laxity brachial | brachial skin laxity [SUMMARY]
[CONTENT] treatment | skin | subjects | arm | weeks | laxity | study | 12 | vas | caha [SUMMARY]
null
[CONTENT] treatment | skin | subjects | arm | weeks | laxity | study | 12 | vas | caha [SUMMARY]
[CONTENT] treatment | skin | subjects | arm | weeks | laxity | study | 12 | vas | caha [SUMMARY]
[CONTENT] treatment | skin | subjects | arm | weeks | laxity | study | 12 | vas | caha [SUMMARY]
[CONTENT] treatment | skin | subjects | arm | weeks | laxity | study | 12 | vas | caha [SUMMARY]
[CONTENT] procedures | skin | use | collagen | invasive | caha | laxity | brachial | non invasive | skin laxity [SUMMARY]
null
[CONTENT] sd | weeks | 12 | score | vas | baseline | vas score | 24 | mean | subjects [SUMMARY]
[CONTENT] injection diluted hyperdiluted caha | skin laxity demonstrated clinically | clinically statistically significant improvements | mfu followed | mfu followed injection | hyperdiluted caha effective | hyperdiluted caha effective management | improvements objective subjective outcome | improvements objective subjective | skin laxity demonstrated [SUMMARY]
[CONTENT] skin | treatment | subjects | weeks | vas | 12 | sd | study | arm | laxity [SUMMARY]
[CONTENT] skin | treatment | subjects | weeks | vas | 12 | sd | study | arm | laxity [SUMMARY]
[CONTENT] MFU-V | CaHA [SUMMARY]
null
[CONTENT] Twelve ||| R0 | 0.515 mm | 0.433 mm | 24 weeks | 0.05 | 12 | 24 weeks ||| R2 | VAS | 0.05 ||| ||| [SUMMARY]
[CONTENT] CaHA [SUMMARY]
[CONTENT] MFU-V | CaHA ||| ||| MFU-V | Merz North America ||| Raleigh | N.C. | 4.0 MHz-4.5 mm | 7.0 MHz-3.0 | mm | 1:2 | CaHA | Merz North America, Inc ||| six months ||| 580 | Courage & Khazaka | Cologne | Germany ||| ||| Twelve ||| R0 | 0.515 mm | 0.433 mm | 24 weeks | 0.05 | 12 | 24 weeks ||| R2 | VAS | 0.05 ||| ||| ||| CaHA [SUMMARY]
[CONTENT] MFU-V | CaHA ||| ||| MFU-V | Merz North America ||| Raleigh | N.C. | 4.0 MHz-4.5 mm | 7.0 MHz-3.0 | mm | 1:2 | CaHA | Merz North America, Inc ||| six months ||| 580 | Courage & Khazaka | Cologne | Germany ||| ||| Twelve ||| R0 | 0.515 mm | 0.433 mm | 24 weeks | 0.05 | 12 | 24 weeks ||| R2 | VAS | 0.05 ||| ||| ||| CaHA [SUMMARY]
Association between Early Menopause, Gynecological Cancer, and Tobacco Smoking: A Cross-Sectional Study.
34710992
The rates of smoking among women are rising. Previous studies have shown that smoking is associated with early menopause. However, the association of gynecological cancer, including breast and cervical cancer, with early menopause and smoking, remains unclear. Therefore, this study aimed to determine the association between smoking and early menopause, breast cancer, and cervical cancer.
BACKGROUND
This cross-sectional study used data from the Korea National Health and Nutrition Examination Survey (KNHANES) (2016-2018). Early menopause was defined as menopause before 50 years of age.
METHODS
A total of 4,481 participants were included in the analysis. There was no association between early menopause and cervical cancer (adjusted odds ratio [aOR]: 1.435, 95% confidence interval [CI]: 0.730-2.821), but women who had experienced early menopause had a significantly higher risk of breast cancer than women who had experienced normal menopause (aOR: 1.683, 95% CI: 1.089-2.602, p=0.019). Early menopause was not associated with an increased risk of breast cancer in ever-smokers (aOR: 0.475, 95% CI: 0.039-5.748), but was associated with a significantly increased risk of breast cancer in never-smokers (aOR: 1.828, 95% CI: 1.171-2.852).
RESULTS
Early menopause was associated with an increased risk of breast cancer in women who had never smoked, but not in women who had ever smoked.
CONCLUSIONS
[ "Adult", "Age Factors", "Breast Neoplasms", "Confidence Intervals", "Cross-Sectional Studies", "Female", "Humans", "Menopause, Premature", "Middle Aged", "Non-Smokers", "Nutrition Surveys", "Odds Ratio", "Republic of Korea", "Risk", "Smokers", "Socioeconomic Factors", "Tobacco Smoking", "Uterine Cervical Neoplasms", "Young Adult" ]
8858255
Introduction
Menopause transition is the time when the ovarian follicle pool falls below the standard threshold. It is a biological process experienced by all women (Tawfik et al., 2015). Natural menopause is defined as a period of 12 or more months of amenorrhea that is not a result of specific medical procedures, such as chemotherapy and surgery, or other known causes (Tawfik et al., 2015). It is associated with a decline in female sex hormones and affects the child-bearing process. While menopause occurs in all women, the age at which it occurs varies. Early onset of menopause is associated with an increased risk of various health conditions, such as certain types of cancer, heart disease, stroke, and osteoporosis, thus becoming a serious public health issue affecting women's fertility rates worldwide (Shuster et al., 2010). Therefore, there is a need to evaluate the risk factors associated with early menopause in order to be able to inform women of the health risks. Several studies have been conducted regarding the risk factors associated with early menopause. A study by Mikkelsen et al., (2007) found that nulliparous women were at a higher risk of early menopause than parous women. Factors such as higher educational levels, moderate alcohol intake, and higher coffee intake were associated with a lower risk of early menopause. In contrast, they found a greater prevalence of pre-menopause among women who smoked during the perimenopausal age period, indicating that active smoking was a risk factor for early menopause. Further, they established a dose-response effect of smoking on early menopause, and that both active and passive smoking resulted in early menopause. The association between smoking and early menopause has been attributed to the decline in the quality and quantity of ovarian follicles caused by the byproducts found in cigarettes. Smoking causes fluctuation in the levels of reproductive hormones during the fertility period, which alters follicular development patterns and, consequently, leads to changes in follicle-stimulating hormone levels. Further, the impact of smoke exposure on the follicle pool has been found to affect the timing of menopause (Tawfik et al., 2015). Prior studies have established that tobacco smoking has an adverse impact on hormone production in women, given that these toxins affect multiple sites involved in hormonal synthesis. In this regard, considerable evidence affirms the effects of active smoking on the timing of menopause, as a significant proportion of active and former smokers experience early menopause (Hyland et al., 2015). Prenatal smoke exposure has been found to lead to early menopause by damaging the follicles and suppressing their formation. A study by Lutterodt et al., (2009) found that prenatal smoke exposure damaged the somatic cells during the process of ovarian development, which could lead to early menopause. The higher risk of early menopause due to smoking makes women vulnerable to developing various health conditions, such as gynecological cancers. It is fundamental to note that menopause typically occurs between the ages of 45 and 55 years (Gold, 2011). For instance, the average age at menopause among Korean women is 49.2 years. However, this stage can occur earlier in life as a result of multiple factors. While some women dread menopause, others may find it relieving owing to the associated pain and other menstrual complications.
In this regard, menopause symptoms vary from one woman to another, and they may be confused with other natural biological processes (Park et al., 2002). The current trend of increasing prevalence of smoking in women, together with the close connection between smoking and early menopause, is a critical public health concern. While previous studies have addressed the association between smoking and early menopause, the association of gynecological cancer, including breast and cervical cancer, with early menopause and smoking is not clear. Therefore, this study aimed to investigate the association of early menopause, breast cancer, and cervical cancer with smoking.
null
null
Results
Of the 4,481 study participants, 3,037 (67.78%), 1,358 (30.31%), and 86 (1.92%) were ≥ 60, 50-59, and 40-49 years old, respectively. Menopause was experienced by women across all age groups. As shown in Table 1, menopause occurred most often at ages 40-49 (n = 1,796; 40.08%) and 50-59 (n = 2,547; 56.84%). In contrast, it occurred least often at ages 20-29 (n = 1; 0.02%). Of the participants, 86.89% had a high school education and 13.11% had a college education. Most of the participants were from low- (33.91%) and mid-low-income (25.75%) families. Regarding smoking status, 4,149 (92.59%) had never smoked, 127 (2.83%) were current smokers, 157 (3.50%) were former smokers, 29 (0.65%) were occasional smokers, and 19 (0.42%) reported their smoking status as 'do not know.' Regarding alcohol consumption, 1,208 (26.96%) were current drinkers, 3,259 (72.72%) were non-drinkers, and 14 did not report their drinking status. For BMI, 1,728 (38.8%) were underweight, 1,088 (24.43%) were of normal weight, and 1,638 (36.78%) were overweight. Additionally, a majority of the participants exercised (2,463; 54.97%). The distribution of gynecological cancers (cervical and breast) among the participants, categorized according to the identified variables, is summarized in Table 1. Model 1, the crude model, revealed an increased likelihood of breast cancer and cervical cancer in women who experienced early menopause. ORs for breast and cervical cancers were 1.613 (95% CI: 1.048-2.481) and 1.615 (95% CI: 0.828-3.149), respectively, using normal menopause as the reference group. In contrast, there was a reduced possibility of cancer in women who experienced normal menopause. In Model 2, the adjusted model, the ORs were adjusted for educational level, household income, exercise, BMI, and smoking and drinking habits. The adjusted ORs for breast and cervical cancer with respect to early menopause were 1.683 and 1.435, respectively. Additionally, the adjusted ORs for breast and cervical cancer among never-smokers were 1.828 (95% CI: 1.171–2.852) and 1.336 (95% CI: 0.653–2.734), respectively (Tables 2 and 3).
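For reference, the crude odds ratios and 95% confidence intervals reported above follow the standard 2x2-table formulas; with hypothetical cell counts a (early menopause, cancer), b (early menopause, no cancer), c (normal menopause, cancer), and d (normal menopause, no cancer):

\[
\text{OR} = \frac{a/b}{c/d} = \frac{ad}{bc}, \qquad 95\%\ \text{CI} = \exp\!\left( \ln \text{OR} \pm 1.96 \sqrt{\tfrac{1}{a} + \tfrac{1}{b} + \tfrac{1}{c} + \tfrac{1}{d}} \right)
\]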
null
null
[ "Authors Contribution Statement" ]
[ "Joyce M. Kim, Yeun Soo Yang, Su Hyun Lee, and Sun Ha Jee contributed to the design and implementation of the research, to the analysis of the results and to the writing of the manuscript." ]
[ null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Authors Contribution Statement" ]
[ "Menopause transition, is the time when the ovarian follicle pool falls below the standard threshold. It is a biological process experienced by all women (Tawfik et al., 2015). Natural menopause is described defined as a period of 12 or more months of amenorrhea that is not a result of specific medical procedures such as chemotherapy and surgery or other known causes (Tawfik et al., 2015). It is associated with a decline in female sex hormones, and affects the child-bearing process. While menopause occurs in all women, the age at which it occurs varies. \nEarly onset of menopause is associated with an increased risk of various health conditions, such as certain types of cancer, heart disease, stroke, and osteoporosis, thus becoming, a serious public health issue, affecting women’s fertility rates worldwide (Shuster et al., 2010). Therefore, there is a need to evaluate the risk factors associated with early menopause in order to be able to inform women of the health risks. \nSeveral studies have been conducted regarding the risk factors associated with early menopause. A study by Mikkelsen et al., (2007) found that nulliparous women were at a higher risk of early menopause than parous women. Factors such as higher educational levels, moderate alcohol intake, and higher coffee intake were associated with a lower risk of early menopause. In contrast, they found a greater prevalence of pre-menopause among women who smoked during the perimenopausal age period, indicating that active smoking was a risk factor for early menopause. Further, they established dose-response effect of smoking on early menopause, and that active and passive smoking resulted in early menopause . \nThe association between smoking and early menopause has been described as the decline in the quality and quantity of ovarian follicles caused by the byproducts found in cigarettes. Smoking causes fluctuation in the levels of reproductive hormones during the fertility period, which alters the follicular development patterns and consequently, leads to changes in the follicle-stimulating hormone levels. Further, the impact of smoke exposure on the follicle pool has been found to affect the timing of menopause (Tawfik et al., 2015). \nPrior studies have established that tobacco smoking has an adverse impact on hormone production in women, given that these toxins affect multiple sites involved in the hormonal synthesis. In this regard, considerable studies affirm the effects of active smoking on the timing of menopause. This aspect is also evident in the study given that a significant proportion of active and former smokers experienced early menopause (Hyland et al., 2015).\nPrenatal smoke exposure has been found to lead to early menopause by damaging the follicles and suppressing their formation. A study by Lutterodt et al., (2009) found that prenatal smoke exposure damaged the somatic cells during the process of ovarian development, which could lead to early menopause. The higher risk of early menopause due to smoking makes women vulnerable to developing various health conditions such as gynecological cancers.\nIt is fundamental to note that menopause typically occurs between the ages of 45 to 55 years (Gold, 2011). For instance, Korean women have a 49.2-year average of menopause. However, this stage can occur earlier in life as a result of multiple factors. While some women dread menopause, others may find it relieving owing to the associated pain and other menstrual complications. 
In this regard, menopause symptoms vary from one woman to another, and they may be confused with other natural biological processes (Park et al., 2002). \nThe current trend of increasing prevalence of smoking in women, together with the close connection between smoking and early menopause, is a critical public health concern. While previous studies have addressed the association between smoking and early menopause, the association of gynecological cancer, including breast and cervical cancer, with early menopause and smoking is not clear. Therefore, this study aimed to investigate the association of early menopause, breast cancer, and cervical cancer with smoking. ", "This cross-sectional study used data from the 7th edition of the Korea National Health and Nutrition Examination Survey (KNHANES), conducted from 2016 to 2018 by the Korea Centers for Disease Control and Prevention, Ministry of Health and Welfare. The KNHANES is an annual cross-sectional survey, and its target population comprises nationally representative non-institutionalized South Korean citizens. The study population was screened for eligibility by KNHANES (Kweon et al., 2014). The gross population sample for this study, comprising 24,269 participants, was selected between 2016 and 2018. Since this study was women-centric, the 11,071 men were excluded from the study population, leaving 13,198 women. Women aged < 40 years, and those who were pregnant or menstruating, were excluded. Additionally, women who failed to answer the study questions regarding menopause, and those who provided responses such as 'do not know' or had missing responses, were excluded. This resulted in the exclusion of an additional 8,717 participants from the study population. Women aged 40–80 years who were comfortable with answering questions on their menopause status were included. Finally, a total of 4,481 women were included in this analysis. Figure 1 presents a flow diagram of the study inclusion and exclusion criteria. In the logistic regression models, early menopause was defined as menopause that occurred before the age of 50 years, while normal menopause was defined as menopause that occurred at 50 years of age or above. In this study, the exposure variable was early menopause, potentially resulting from smoking. The outcome of this exposure was gynecological cancer, which included cervical and breast cancers. Statistical analyses were performed using the SAS software, version 9.4 (SAS Institute).\nThe target population was first assessed for breast and cervical cancers. The variables considered were as follows: age (40–49, 50–59, and > 60 years), age at menopause (20–29, 30–39, 40–49, 50–59, and > 60 years), educational level (high school and college), household income (low, mid-low, mid-high, high), smoking status (current, sometimes, former, never, and do not know), drinking status (yes, no, do not know), body mass index (BMI) (underweight, normal, overweight), and exercise (yes, no). An analysis was conducted on the percentage of participants who experienced menopause at the various ages. Two models were used to calculate the odds ratio (OR) for breast cancer and cervical cancer based on the menopausal status. The first model was a crude model, which included the ORs and 95% CIs for gynecological cancers with respect to the occurrence of menopause. The second model was an adjusted model, in which the ORs and 95% CIs for gynecological cancers with respect to the menopause status were adjusted for the covariates in the study (an illustrative model sketch follows below). 
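A minimal sketch of the crude (Model 1) and adjusted (Model 2) logistic regressions in R. The authors report using SAS 9.4, so this is an illustrative translation rather than the study's actual code, and the data frame and variable names (dat, breast_cancer, early_menopause, and the covariates, which are listed just below) are hypothetical:

    # Model 1 (crude) and Model 2 (adjusted) logistic regressions; exponentiated
    # coefficients give odds ratios, with Wald 95% confidence intervals.
    m1 <- glm(breast_cancer ~ early_menopause, family = binomial, data = dat)
    m2 <- glm(breast_cancer ~ early_menopause + education + income + drinking +
                bmi_cat + exercise, family = binomial, data = dat)
    # OR and 95% CI per term (the intercept row is a baseline odds, not an OR):
    exp(cbind(OR = coef(m2), confint.default(m2)))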
The variables included in the second model were educational level, smoking status, drinking status, household income, BMI, and exercise. \nThe OR was used to evaluate the likelihood of breast and cervical cancer occurring in women who had early menopause. It compares the odds of an event occurring in participants exposed to a specific risk factor with the odds of the same event in participants not exposed to that risk factor. In this study, the OR captured the likelihood of developing cervical and breast cancers in women who experienced early menopause relative to women who experienced menopause at the average age. Further, the OR enabled us to determine the relationship of early menopause, breast cancer, and cervical cancer with tobacco smoking in Korean women.\nGeneral Characteristics of the Study Population\n*, Educational level - missing number N=2 (excluded); **, Household income - missing number N=19 (excluded); ***, BMI - missing number N=27 (excluded) \nFlowchart of the Inclusion and Exclusion Criteria\nOdds Ratios (95% Confidence Intervals) for the Risk of Gynecological Cancer According to Early Menopause Status\nValues are presented as odds ratio (95% confidence interval); *Model I, unadjusted; **Model II, adjusted for educational level, household income, drinking, BMI, and exercise\nOdds Ratios (95% Confidence Intervals) of Early Menopause According to the Smoking Status\nValues are presented as odds ratio (95% confidence interval); *Model I, unadjusted; **Model II, adjusted for educational level, household income, drinking habit, BMI, and exercise; a Includes current, sometimes, and former smokers.", "Of the 4,481 study participants, 3,037 (67.78%), 1,358 (30.31%), and 86 (1.92%) were ≥ 60, 50–59, and 40–49 years old, respectively. Menopause was experienced by women across all age groups. As shown in Table 1, menopause was most often experienced at ages 40–49 (n = 1,796; 40.08%) and 50–59 (n = 2,547; 56.84%). In contrast, it was least often experienced in the 20–29 age group (n = 1; 0.02%).\nOf the participants, 86.89% had a high school education and 13.11% had a college education. Most of the participants were from low- (33.91%) and mid-low-income (25.75%) households. Regarding smoking status, 4,149 (92.59%) had never smoked, 127 (2.83%) were current smokers, 157 (3.50%) were former smokers, 29 (0.65%) were occasional smokers, and 19 (0.42%) reported their smoking status as ‘do not know.’ Regarding alcohol consumption, 1,208 (26.96%) were current drinkers, 3,259 (72.72%) were non-drinkers, and 14 did not report their drinking status. Regarding BMI, 1,728 (38.8%) were underweight, 1,088 (24.43%) were of normal weight, and 1,638 (36.78%) were overweight. Additionally, a majority of the participants exercised (2,463; 54.97%). The distribution of gynecological cancers (cervical and breast) among the participants, categorized according to the identified variables, is summarized in Table 1.\nModel 1, the crude model, revealed an increased likelihood of breast cancer and cervical cancer in women who experienced early menopause. The ORs for breast and cervical cancers were 1.613 (95% CI: 1.048–2.481) and 1.615 (95% CI: 0.828–3.149), respectively, using normal menopause as the reference group. In contrast, there was a reduced possibility of cancer in women who experienced normal menopause. 
In Model 2, the adjusted model, the ORs were adjusted for educational level, household income, exercise, BMI, and smoking and drinking habits. The adjusted ORs for breast and cervical cancer with respect to early menopause were 1.683 and 1.435, respectively. Additionally, the adjusted ORs for breast and cervical cancer among never-smokers were 1.828 (95% CI: 1.171–2.852) and 1.336 (95% CI: 0.653–2.734), respectively (Tables 2 and 3).", "The variables considered in this research were used to classify the study population according to their respective characteristics. This study focused on identifying how the selected variables affected the possibility of developing gynecological cancers (breast and cervical) in women experiencing early menopause. The ORs of breast cancer and cervical cancer according to the age at menopause were analyzed. \nThe two models revealed a higher OR for breast cancer and cervical cancer according to the age at menopause. The first model indicated that early menopause increased the potential for breast cancer and cervical cancer in women. The second model suggested that, even after accounting for the study variables, namely household income, BMI, exercise, smoking and drinking habits, and educational level, early menopause increased the possibility of women developing gynecological cancer. These findings were supported by Rosenberg et al., (2013), who found a close association between early menopause and the incidence of breast cancer. That study also showed that negative symptoms such as weight gain, fatigue, sleep problems, depression, and anxiety may characterize early menopause associated with an increased risk of breast cancer. \nAccording to Shuster et al., (2010), 25% of breast cancer cases involve premenopausal women and 75% involve menopausal women. Therefore, early menopause can be considered a risk factor for breast cancer. A study by Taneri et al., (2016) also found that natural menopause was associated with a reduced risk of breast, ovarian, and endometrial cancers. This indicated a decreased occurrence of breast cancer and cervical cancer in patients who experienced menopause at the normal age. \nThe incidence rates of early menopause in women raise a serious public health concern (5%, early menopause; 1%, premature menopause) (Faubion et al., 2015). Thus, large numbers of women are at risk of breast cancer and cervical cancer due to early menopause. Studies have proposed that coping with this challenge requires addressing the behavioral aspects associated with early menopause. Smoking is a modifiable and independent risk factor for early menopause. A study by Yang et al., (2015) on Korean women revealed that the menopausal age of current smokers was considerably lower than that of never-smokers. Therefore, addressing the challenge of smoking in women could significantly reduce the incidence of gynecological cancer. Moreover, according to Whitcomb et al., (2018), modifiable risk factors can have a significant impact on the aging of the ovaries, and smoking is a lifestyle factor that can significantly affect the timing of menopause. In this study, the association was significant among never-smokers, likely because the number of participants who had ever smoked in their lifetime was low. The findings of this study open a new area for future research, which should use more extensive data sets to investigate smoking as an effect modifier, as sketched below. 
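To make the effect-modifier suggestion concrete, the sketch below fits an adjusted logistic regression with a smoking-by-early-menopause interaction term. It is a minimal illustration on synthetic data, not the authors' SAS code; every variable name is a hypothetical stand-in for the corresponding KNHANES field:

```python
# Minimal sketch (not the authors' SAS code): adjusted logistic regression with
# a smoking x early-menopause interaction to probe effect modification.
# All column names are hypothetical stand-ins for KNHANES variables, and the
# data are randomly generated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4481  # analysis sample size reported in the paper
df = pd.DataFrame({
    "breast_cancer":   rng.integers(0, 2, n),  # outcome (0/1)
    "early_menopause": rng.integers(0, 2, n),  # exposure (0/1)
    "ever_smoker":     rng.integers(0, 2, n),  # candidate effect modifier (0/1)
    "education":       rng.integers(0, 2, n),  # covariates used in Model II
    "income":          rng.integers(0, 4, n),
    "bmi_cat":         rng.integers(0, 3, n),
    "drinker":         rng.integers(0, 2, n),
    "exercise":        rng.integers(0, 2, n),
})

# Exposure, modifier, and their interaction, plus the Model II covariates.
model = smf.logit(
    "breast_cancer ~ early_menopause * ever_smoker"
    " + education + C(income) + C(bmi_cat) + drinker + exercise",
    data=df,
).fit(disp=False)

# Exponentiated coefficients are adjusted ORs; a significant interaction term
# would indicate that the early-menopause OR differs by smoking status.
print(np.exp(model.params))
print(np.exp(model.conf_int()))  # 95% CIs on the OR scale
```

A significant interaction coefficient would support the kind of stratified reporting used in Tables 2 and 3.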
\nThe major strength of this research was its large sample size, which helps reduce statistical noise due to variations in participants’ characteristics (Andrade, 2020). This strengthens the reliability of our results. Further, the large sample used in this research ensured that the study population was more representative of the entire population of Korean women. Thus, the generalizations developed from this research can be used for policy formulation and further research. Furthermore, the study findings were consistent with those of past studies; hence, the findings have external validity. \nOur study also had limitations. First, it was not possible to verify the participants’ responses. Therefore, the results may be affected by potential biases due to the possibility of incorrect answers. Further, this study did not extensively address the scope of smoking, particularly the association of passive smoking and early menopause with the occurrence of gynecological cancer. Passive smoking should have been considered a significant concern in this research, since it affects many women who do not actively engage in smoking behavior. The non-inclusion of passive smoking limits the scope of the study. More robust analyses using regression and correlation models should be applied to enhance the reliability of the results. \nTo conclude, the findings of this study suggest that smoking is associated with a reduction in the menopausal age of Korean women. Moreover, this study identified an association between the occurrence of early menopause and gynecological cancers. Those who experienced menopause before the age of 50 years had higher odds (OR=1.613, 95% CI: 1.048–2.481) of developing breast cancer than those who experienced menopause at 50 years or later, and this association was statistically significant. Additionally, early menopause and breast cancer were not associated in the ever-smokers group, while those in the never-smokers group had higher odds (OR=1.695, 95% CI: 1.092–2.630) of developing breast cancer. Thus, the association between early menopause and breast cancer is relevant, especially in non-smokers.", "Joyce M. Kim, Yeun Soo Yang, Su Hyun Lee, and Sun Ha Jee contributed to the design and implementation of the research, to the analysis of the results, and to the writing of the manuscript." ]
[ "intro", "materials|methods", "results", "discussion", null ]
[ "Early menopause", "breast cancer", "cervical cancer", "tobacco", "epidemiology" ]
Background: The rates of smoking among women are rising. Previous studies have shown that smoking is associated with early menopause. However, the association of gynecological cancer, including breast and cervical cancer, with early menopause and smoking remains unclear. Therefore, this study aimed to determine the association between smoking and early menopause, breast cancer, and cervical cancer. Methods: This cross-sectional study used data from the Korea National Health and Nutrition Examination Survey (KNHANES) (2016-2018). Early menopause was defined as menopause before 50 years of age. Results: A total of 4,481 participants were included in the analysis. There was no association between early menopause and cervical cancer (adjusted odds ratio [aOR]: 1.435, 95% confidence interval [CI]: 0.730-2.821), but women who had experienced early menopause had a significantly higher risk of breast cancer than women who had experienced normal menopause (aOR: 1.683, 95% CI: 1.089-2.602, p=0.019). Early menopause was not associated with an increased risk of breast cancer in ever-smokers (aOR: 0.475, 95% CI: 0.039-5.748), but was associated with a significantly increased risk of breast cancer in never-smokers (aOR: 1.828, 95% CI: 1.171-2.852). Conclusions: Early menopause was associated with an increased risk of breast cancer in women who had never smoked, but not in women who had ever smoked.
null
null
3,121
286
[ 40 ]
5
[ "menopause", "early", "early menopause", "cancer", "women", "study", "smoking", "breast", "cervical", "breast cancer" ]
[ "characterize early menopause", "factor early menopause", "early menopause women", "early menopause incidence", "menopause considered risk" ]
null
null
null
[CONTENT] Early menopause | breast cancer | cervical cancer | tobacco | epidemiology [SUMMARY]
null
[CONTENT] Early menopause | breast cancer | cervical cancer | tobacco | epidemiology [SUMMARY]
null
[CONTENT] Early menopause | breast cancer | cervical cancer | tobacco | epidemiology [SUMMARY]
null
[CONTENT] Adult | Age Factors | Breast Neoplasms | Confidence Intervals | Cross-Sectional Studies | Female | Humans | Menopause, Premature | Middle Aged | Non-Smokers | Nutrition Surveys | Odds Ratio | Republic of Korea | Risk | Smokers | Socioeconomic Factors | Tobacco Smoking | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
null
[CONTENT] Adult | Age Factors | Breast Neoplasms | Confidence Intervals | Cross-Sectional Studies | Female | Humans | Menopause, Premature | Middle Aged | Non-Smokers | Nutrition Surveys | Odds Ratio | Republic of Korea | Risk | Smokers | Socioeconomic Factors | Tobacco Smoking | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
null
[CONTENT] Adult | Age Factors | Breast Neoplasms | Confidence Intervals | Cross-Sectional Studies | Female | Humans | Menopause, Premature | Middle Aged | Non-Smokers | Nutrition Surveys | Odds Ratio | Republic of Korea | Risk | Smokers | Socioeconomic Factors | Tobacco Smoking | Uterine Cervical Neoplasms | Young Adult [SUMMARY]
null
[CONTENT] characterize early menopause | factor early menopause | early menopause women | early menopause incidence | menopause considered risk [SUMMARY]
null
[CONTENT] characterize early menopause | factor early menopause | early menopause women | early menopause incidence | menopause considered risk [SUMMARY]
null
[CONTENT] characterize early menopause | factor early menopause | early menopause women | early menopause incidence | menopause considered risk [SUMMARY]
null
[CONTENT] menopause | early | early menopause | cancer | women | study | smoking | breast | cervical | breast cancer [SUMMARY]
null
[CONTENT] menopause | early | early menopause | cancer | women | study | smoking | breast | cervical | breast cancer [SUMMARY]
null
[CONTENT] menopause | early | early menopause | cancer | women | study | smoking | breast | cervical | breast cancer [SUMMARY]
null
[CONTENT] menopause | early | early menopause | smoking | women | found | associated | risk | active | health [SUMMARY]
null
[CONTENT] participants | respectively | menopause | adjusted | 95 ci | ci | ors breast cervical | experienced | breast | cancer [SUMMARY]
null
[CONTENT] menopause | early | early menopause | cancer | women | smoking | study | breast | research | cervical [SUMMARY]
null
[CONTENT] ||| ||| ||| [SUMMARY]
null
[CONTENT] 4,481 ||| 1.435 | 95% ||| CI | 0.730-2.821 | 1.683 | 95% | CI | 1.089-2.602 ||| 0.475 | 95% | CI | 0.039-5.748 | 1.828 | 95% | CI | 1.171 [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| the Korean National Health and Nutritional Survey Examination | KHANES | 2016-2018 ||| before 50 years of age ||| 4,481 ||| 1.435 | 95% ||| CI | 0.730-2.821 | 1.683 | 95% | CI | 1.089-2.602 ||| 0.475 | 95% | CI | 0.039-5.748 | 1.828 | 95% | CI | 1.171 ||| [SUMMARY]
null
Pre-hospital Care to Trauma Patients in Addis Ababa, Ethiopia: Hospital-based Cross-sectional Study.
35221619
Trauma is a major cause of morbidity and mortality worldwide. Prompt use of pre-hospital care is associated with reduced early and late morbidity and mortality from trauma. This study aimed to assess the time to reach the facility and the pattern of pre-hospital care provided for trauma patients.
BACKGROUND
A cross-sectional study design with a structured interview questionnaire was used for patients presenting to the Addis Ababa Burn Emergency and Trauma Hospital Emergency Department from April 1 to May 30, 2020.
METHODS
Out of 238 interviewed patients, the most common means of transportation from the scene to the initial health facility were taxi, 77 (32.4%), and ambulance, 54 (22.7%). The time of arrival from the scene to the initial health care facility was within one hour for 133 (56.1%) and 1-3 hours for 84 (35.5%). Some form of care was provided at the scene in 110 (46.2%) of cases. The care provided included bleeding arrest, 74 (31.1%); removal from the wreck, 51 (21.4%); splinting/immobilizing the injured area, 38 (16%); positioning for patient comfort, 19 (8%); and others. Relatives were the most common care providers, 49 (45%), followed by bystanders, 37 (33.9%), trained ambulance staff, 19 (17.4%), and police, 2 (1.8%). The main reasons for not providing care were lack of knowledge, 79 (61.2%), and lack of equipment, 25 (19.4%).
RESULT
The study showed that relatives and bystanders were the first responders during trauma care. However, ambulance utilization for pre-hospital care was low. Trauma patients were delayed in arriving at the hospital: only about half of the patients presented to a health facility within the golden hour.
CONCLUSION
[ "Ambulances", "Cross-Sectional Studies", "Ethiopia", "Hospitals", "Humans", "Police" ]
8843143
Introduction
Injury is a major cause of early death and disability worldwide. Every year, approximately 5 million people worldwide die from injuries. Deaths from severe injury occur in one of three phases: immediate (death from overwhelming injury), intermediate or subacute (death within several hours of trauma from treatable conditions), and delayed (death days to weeks after trauma as a result of infection, multi-organ failure, or other late complications) (1–3). The most effective way to prevent mortality and morbidity from trauma is to prevent trauma from occurring, but providing effective pre-hospital care also minimizes morbidity and mortality. Most deaths in the first hours after trauma are due to airway obstruction, hypoxia, and hemorrhage, all of which can be reduced using effective first aid measures (1,4). Prompt pre-hospital care can also prevent delayed deaths of trauma patients through proper wound and burn care, immobilization of fractures, and support of oxygenation and blood pressure in traumatic brain injury. Pre-hospital care reduces mortality and morbidity from serious illness and injuries. It is estimated that about 45% of mortality and 35% of morbidity can be reduced by providing robust out-of-hospital emergency care (2). In low- and middle-income countries without a formal emergency care system, around 80% of deaths from severe trauma occur in the pre-hospital setting (5, 6). In Ethiopia, over the past 10 years, the Federal Ministry of Health has made efforts to improve Emergency Medical Service (EMS) systems. Efforts include the distribution of ambulances to all regions, providing at least one ambulance per district (woreda), training of paramedics, and procurement of on-board medical equipment. Nevertheless, pre-hospital care in Ethiopia remains underdeveloped, and there is a paucity of studies on the existing level of care and its determinants (4,7). This study aimed to assess the time to reach the facility and the pattern of pre-hospital care provided for trauma patients visiting a tertiary care trauma center in Addis Ababa.
Methods
Study area and period: This study was conducted from April 1 to May 31, 2020 at Addis Ababa Burn Emergency and Trauma (AaBET) Hospital. AaBET Hospital is part of St. Paul's Hospital Millennium Medical College. It is a dedicated emergency center with level 3 trauma care. It provides emergency and critical care services as well as orthopedic, neurosurgery, general surgery, and plastic surgery services. The emergency department has 60 beds, and the overall hospital capacity is 300 beds. Annual emergency room patient visits range from 15,000 to 20,000. Design and sampling: A hospital-based cross-sectional study design was used. The source population was all trauma patients seen in the Addis Ababa Burn Emergency and Trauma (AaBET) Emergency Department (ED). All adult (age >15 years) trauma patients who came to the AaBET Hospital Emergency Department during the 2-month study period were included. Patients sent to the ED from the regular Out Patient Department (OPD) for admission, deaths on arrival, and patients who came for follow-up were excluded. Sample size determination: The sample size was calculated using the single population proportion formula with a prevalence of 0.17, based on a previous study in which 17% of patients received pre-hospital care (4); allowing for 10% error, a sample of 238 patients was selected using simple random sampling from a total of 1,064 trauma patients seen during the 2-month study period. Data collection and analysis: Data were collected by two data collectors from the patient and/or caregivers using a pilot-tested, structured interview questionnaire prepared in English and translated into the patient's mother tongue. The questionnaire was prepared by reviewing the literature and making modifications for the population studied (4, 7). The data collectors were trained in data collection, and the quality of the data was checked by the principal investigator. Data collection was done after patient clinical stabilization. For patients in critical condition and/or unable to communicate, data were collected from caregivers. Data were cleaned before analysis using SPSS version 21. Descriptive statistics were computed for demographics, mode of transport, mechanism of injury, body site of injury, and time of arrival to the hospital. Associations were assessed using the chi-square test, and binary logistic regression was used to determine factors associated with pre-hospital care provision. All tests with P-value < 0.05 were considered statistically significant. Tables and graphs were used for data presentation. Ethics approval: The research proposal was approved by St Paul's Hospital Millennium Medical College's research ethics committee. This study was conducted per the Declaration of Helsinki: each study participant was well informed about the aim of the study, its benefits, and risks; informed written consent was secured from study participants; study participants' confidentiality was maintained; no personal identifiers were used in the data collection questionnaire, and codes were used in their place. Emergency care interventions were not delayed for the interview.
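As a rough check on the reported figure of 238, the single population proportion formula can be worked through as below. The margin of error d = 0.05 and the reading of the "10% error" as a 10% non-response allowance are our assumptions, since the paper does not state them explicitly:

```latex
% Single population proportion formula with p = 0.17 (worked example;
% d = 0.05 and the 10% non-response allowance are assumptions):
\[
n = \frac{Z_{\alpha/2}^{2}\,p\,(1-p)}{d^{2}}
  = \frac{(1.96)^{2}(0.17)(0.83)}{(0.05)^{2}} \approx 217,
\qquad 217 \times 1.10 \approx 238
\]
```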
Results
Socio-demographic characteristics: Of the 238 included patients, 186 (78.2%) were male, with a male to female ratio of 3.6:1. The mean age was 32.25 years (SD 13.45 years). Most patients were from Oromia, 117 (49.2%), and Addis Ababa, 104 (43.7%). Regarding occupation, the majority were day laborers, 81 (34%), followed by farmers, 40 (16.8%) (Table 1). Socio-demographic characteristics and time to a first health facility of trauma patients presented to AaBET Hospital, Addis Ababa, Ethiopia, April 1 – May 30, 2020 Clinical profile: The most common mechanism of trauma was road traffic accident, 102 (42.9%), followed by falls, 62 (26.1%). Trauma to the extremities, 135 (56.7%), was the most common site of injury, followed by the head, 86 (36.1%). Upon arrival, patients were triaged as yellow/green, 185 (77.8%); orange, 39 (16.4%); and red, 16 (5.9%). The majority of patients had sustained the injury at work, 87 (36.6%); at a public gathering, 46 (19.3%); as a pedestrian, 44 (18.4%); as a passenger, 35 (14.7%); or while driving, 19 (8%) (Table 2). Summary of triage, mechanism of trauma, and type of trauma patients presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30 2020 Pre-hospital care: One hundred ten patients (46.2%) received some form of pre-hospital care, mainly first aid at the scene. Care provided at the scene included positioning the patient, 19 (8%); bleeding arrest, 74 (31.1%); splinting/immobilizing the injured area, 38 (16%); removal from the wreck, 51 (21.4%); and others, such as calling the police or others for help, 7 (2.9%). Relatives were the most common care providers, 49 (45%), followed by bystanders, 37 (33.9%), trained ambulance staff, 19 (17.4%), and police, 2 (1.8%). The main reasons for not providing care were lack of knowledge, 79 (61.2%), followed by lack of equipment, 25 (19.4%); fear of the procedure, 11 (8.5%); fear of medico-legal issues, 9 (7%); and others, such as fear of transmissible diseases, 5 (3.9%). One hundred thirty-three (56.1%) patients presented to the first health facility within one hour; 57 (33.5%) of them were transported to the first health facility by taxi. Time of arrival and mode of transport are shown in Table 3. Time of arrival to the first facility against the mode of transport presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30 2020 Inter-health facility referral: A total of 155 (65.1%) were referred from different facilities; only 83 (53.5%) were referred with communication. The most common source of referral was a public hospital, 79 (46.7%), followed by a public health center, 74 (43.7%), and a private facility, 16 (9.6%). About 142 (82.6%, n=172) were transported by ambulance, 20 (11.5%, n=172) by taxi, 9 (5.2%, n=172) by private car, and one patient came walking. Determinants of pre-hospital care: The chi-square test was used to assess the association of socio-demographic characteristics, mode of transport, mechanism of trauma, and type of trauma with the delivery of pre-hospital care. Only the mechanism of injury had a statistically significant association with the provision of pre-hospital care (P-value=0.04) (Table 4). Association between demographics and pre-hospital care for trauma patients presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30 2020 P Value<0.05 is considered as statistically significant.
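As an illustration of the chi-square association test reported above, the sketch below runs the test on a small contingency table of mechanism of injury against pre-hospital care provision. The counts are invented for demonstration and are not the study's Table 4 data:

```python
# Illustrative chi-square test: mechanism of injury vs. pre-hospital care.
# The counts below are hypothetical and do not reproduce the study's Table 4.
from scipy.stats import chi2_contingency

# Rows: mechanism of injury; columns: [received pre-hospital care, did not]
table = [
    [55, 47],  # road traffic accident (hypothetical)
    [25, 37],  # fall (hypothetical)
    [30, 44],  # other mechanisms (hypothetical)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# p < 0.05 would indicate an association between injury mechanism and
# whether pre-hospital care was provided, mirroring the reported P = 0.04.
```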
null
null
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion" ]
[ "Injury is a major cause of early death and disability worldwide. Every year approximately 5 million people die in the world. Deaths from severe injury occur in one of three phases: Immediate (death occurring from overwhelming injury), intermediate or subacute (death occurring within several hours of trauma from treatable conditions), and delayed (death occurring days to weeks after trauma as a result of infection, multi-organ failure or other late complications) (1–3).\nThe most effective way to prevent mortality and morbidity from trauma is the prevention of occurrence of trauma but providing effective pre-hospital care minimizes morbidity and mortality. Most deaths in the first hours of trauma are airway obstruction, hypoxia, and hemorrhage all of which can be reduced using effective first aid measures(1,4).\nPrompt pre-hospital care can also prevent delayed death of trauma patients with proper wound and burn care, immobilization of fractures, and support of oxygen and blood pressure in traumatic brain injury. Pre-hospital care reduces mortality and morbidity from serious illness and injuries. It is estimated that about 45% of mortality and 35% morbidity can be reduced by providing robust out of hospital emergency care(2).\nIn low and middle-income countries without formal emergency care system around 80% of deaths in severe trauma occurred in pre-hospital setting(5, 6). In Ethiopia in the past 10 years, the Federal Ministry of Health has introduced an effort to improve the Emergency Medical Service (EMS) systems. Efforts include distribution of ambulances to all regions, providing at least one ambulance per district (woreda), training of paramedics, and procurement of on-board medical equipment. In Ethiopia, there is less grown pre-hospital care and a paucity of study on the existing level of care and determinants of care(4,7).\nThis study aimed to assess the time to reach the facility and the pattern of pre-hospital care provided for trauma patients visiting a tertiary care trauma center in Addis Ababa.", "Study area period: This study was conducted from April 1 to May 31, 2020 at Addis Ababa Burn Emergency and Trauma (AaBET) Hospital. AaBET Hospital is a part of St. Paul's Hospital Millennium Medical College. It is an emergency dedicated center with level 3 trauma care. It provides emergency and critical care services, Orthopedic, Neuro-surgery, General surgery, and Plastics surgery services. The emergency department has 60 beds and the overall hospital bed is 300. Annual emergency room patient visits ranges from 15,000 to 20,000.\nDesign and sampling: A hospital-based cross-sectional study design was used. The source population was all trauma patients seen in Addis Ababa Burn Emergency and Trauma (AaBET) Emergency Department (ED). All adult (age >15 years) trauma patients who came to AaBET Hospital Emergency Department during the 2 months study period were included. 
Patients sent to the ED from the regular Out Patient Department (OPD) for admission, deaths on arrival, and patients who came for follow-up were excluded.\nSample size determination: The sample size was calculated using the single population proportion formula with a prevalence of 0.17, based on a previous study in which 17% of patients received pre-hospital care (4); allowing for 10% error, a sample of 238 patients was selected using simple random sampling from a total of 1,064 trauma patients seen during the 2-month study period.\nData collection and analysis: Data were collected by two data collectors from the patient and/or caregivers using a pilot-tested, structured interview questionnaire prepared in English and translated into the patient's mother tongue. The questionnaire was prepared by reviewing the literature and making modifications for the population studied (4, 7). The data collectors were trained in data collection, and the quality of the data was checked by the principal investigator. Data collection was done after patient clinical stabilization. For patients in critical condition and/or unable to communicate, data were collected from caregivers.\nData were cleaned before analysis using SPSS version 21. Descriptive statistics were computed for demographics, mode of transport, mechanism of injury, body site of injury, and time of arrival to the hospital.\nAssociations were assessed using the chi-square test, and binary logistic regression was used to determine factors associated with pre-hospital care provision. All tests with P-value < 0.05 were considered statistically significant. Tables and graphs were used for data presentation.\nEthics approval: The research proposal was approved by St Paul's Hospital Millennium Medical College's research ethics committee. This study was conducted per the Declaration of Helsinki: each study participant was well informed about the aim of the study, its benefits, and risks; informed written consent was secured from study participants; study participants' confidentiality was maintained; no personal identifiers were used in the data collection questionnaire, and codes were used in their place. Emergency care interventions were not delayed for the interview.", "Socio-demographic characteristics: Of the 238 included patients, 186 (78.2%) were male, with a male to female ratio of 3.6:1. The mean age was 32.25 years (SD 13.45 years). Most patients were from Oromia, 117 (49.2%), and Addis Ababa, 104 (43.7%). Regarding occupation, the majority were day laborers, 81 (34%), followed by farmers, 40 (16.8%) (Table 1).\nSocio-demographic characteristics and time to a first health facility of trauma patients presented to AaBET Hospital, Addis Ababa, Ethiopia, April 1 – May 30, 2020\nClinical profile: The most common mechanism of trauma was road traffic accident, 102 (42.9%), followed by falls, 62 (26.1%). Trauma to the extremities, 135 (56.7%), was the most common site of injury, followed by the head, 86 (36.1%). Upon arrival, patients were triaged as yellow/green, 185 (77.8%); orange, 39 (16.4%); and red, 16 (5.9%). The majority of patients had sustained the injury at work, 87 (36.6%); at a public gathering, 46 (19.3%); as a pedestrian, 44 (18.4%); as a passenger, 35 (14.7%); or while driving, 19 (8%) (Table 2).\nSummary of triage, mechanism of trauma, and type of trauma patients presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30 2020\nPre-hospital care: One hundred ten patients (46.2%) received some form of pre-hospital care, mainly first aid at the scene. 
Care provided at the scene included positioning the patient, 19 (8%); bleeding arrest, 74 (31.1%); splinting/immobilizing the injured area, 38 (16%); removal from the wreck, 51 (21.4%); and others, such as calling the police or others for help, 7 (2.9%).\nRelatives were the most common care providers, 49 (45%), followed by bystanders, 37 (33.9%), trained ambulance staff, 19 (17.4%), and police, 2 (1.8%). The main reasons for not providing care were lack of knowledge, 79 (61.2%), followed by lack of equipment, 25 (19.4%); fear of the procedure, 11 (8.5%); fear of medico-legal issues, 9 (7%); and others, such as fear of transmissible diseases, 5 (3.9%).\nOne hundred thirty-three (56.1%) patients presented to the first health facility within one hour; 57 (33.5%) of them were transported to the first health facility by taxi. Time of arrival and mode of transport are shown in Table 3.\nTime of arrival to the first facility against the mode of transport presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30 2020\nInter-health facility referral: A total of 155 (65.1%) were referred from different facilities; only 83 (53.5%) were referred with communication. The most common source of referral was a public hospital, 79 (46.7%), followed by a public health center, 74 (43.7%), and a private facility, 16 (9.6%). About 142 (82.6%, n=172) were transported by ambulance, 20 (11.5%, n=172) by taxi, 9 (5.2%, n=172) by private car, and one patient came walking.\nDeterminants of pre-hospital care: The chi-square test was used to assess the association of socio-demographic characteristics, mode of transport, mechanism of trauma, and type of trauma with the delivery of pre-hospital care. Only the mechanism of injury had a statistically significant association with the provision of pre-hospital care (P-value=0.04) (Table 4).\nAssociation between demographics and pre-hospital care for trauma patients presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30 2020\nP Value<0.05 is considered as statistically significant.", "This study showed that the overall utilization of pre-hospital care for trauma patients was 110 (46.2%), an improvement over a study done in Addis Ababa at Tikur Anbesa Specialized Hospital (TASH) seven years earlier (16.7%), and also higher than in studies from India (26.5%) and southwest Nigeria (8.6%) (4, 8, 9). It is comparable to a cross-sectional study done in Hanoi, Vietnam (48%) (10). The authors suggest that this improvement could be due to community awareness and different stakeholders' engagement in reducing trauma; however, this requires further study.\nTime to the health facility is a crucial factor for trauma patients. Based on the golden hour concept, the first hour is considered a determinant of better patient survival (11). In this study, 56.1% of patients reached the initial health facility within one hour, which was comparable to studies done in Kenya, with 66.2% arriving within one hour, and southwest Nigeria, with 57% arriving within the first hour after trauma (12,13). This was better than a study from Tikur Anbessa, which showed 18.5% arriving within one hour and 57% arriving in 1–2 hours (4). A study from Seattle showed 75% arriving at the initial facility within thirty minutes of trauma (14). This significant variation may be related to differences in the availability of nearby facilities, transport infrastructure, and community awareness of where and when to go.\nMost scene care in our study was provided by untrained relatives; this is consistent with reports from developing nations (4). 
However, in developed countries such as the USA and UK, studies show that scene care is delivered by well-trained personnel (15,16). This variation may arise from the presence of a single emergency call number and community awareness of ambulance utilization. Most care provided in our study was at a basic level: bleeding arrest (33.9%), removing the patient from the trauma scene (21%), immobilization (16%), and others. This is also consistent with reports from developing nations (4,12).\nThe explanation for this may be that most of the providers were untrained bystanders and relatives, unlike in developed countries, where care is provided by personnel with advanced training and equipped ambulances. It would be better to train the community in first aid via community health workers and awareness creation through mass media.\nUpon analysis of the means of transportation to the initial health facility, our study showed that taxi was the most common means (32.4%), followed by ambulance (22.7%), walking with support (15.5%), and others. This was in keeping with studies done at Tikur Anbessa and in Vietnam and India, where the majority of patients were transported by taxi and private vehicles (4,8,13). A study in Kenya showed only 1.4% ambulance use, and one in Tanzania showed no patients transported by ambulance (12, 13). However, studies in Britain and the USA showed more than 96% ambulance usage (15, 16). This low utilization of ambulances for transport may be due to inadequate distribution, poor infrastructure, lack of community awareness of how to access them, and the large shift in health system resources, including ambulances, to fight the pandemic. Ambulances were used mostly for inter-facility transfer. Taxi use was the most common mode of transport in this study and in other African studies, which indicates a need to rethink ambulance-based pre-hospital care, the standard model in Western countries (4, 12, 13). African-based solutions are needed for better pre-hospital trauma care. Training taxi drivers in trauma first aid and equipping taxis with the necessary supplies could improve pre-hospital trauma care.\nAlthough this study provides good insight into the pre-hospital care of trauma patients, it was conducted during a pandemic that strained the health system, shifting the majority of resources toward the pandemic, which may have affected the pattern of some variables. This study was hospital-based, single-centered, and cross-sectional, with a short study period, making it liable to selection bias and limiting generalizability.\nIn conclusion, this study showed that relatives and bystanders were the first responders during trauma care. However, ambulance utilization for pre-hospital care was low. Trauma patients were delayed in arriving at the hospital; only about half of the patients presented to a health facility within the golden hour." ]
[ "intro", "methods", "results", "discussion" ]
[ "Pre-hospital care", "Trauma", "Emergency department", "Ambulance" ]
Introduction: Injury is a major cause of early death and disability worldwide. Every year approximately 5 million people die in the world. Deaths from severe injury occur in one of three phases: Immediate (death occurring from overwhelming injury), intermediate or subacute (death occurring within several hours of trauma from treatable conditions), and delayed (death occurring days to weeks after trauma as a result of infection, multi-organ failure or other late complications) (1–3). The most effective way to prevent mortality and morbidity from trauma is the prevention of occurrence of trauma but providing effective pre-hospital care minimizes morbidity and mortality. Most deaths in the first hours of trauma are airway obstruction, hypoxia, and hemorrhage all of which can be reduced using effective first aid measures(1,4). Prompt pre-hospital care can also prevent delayed death of trauma patients with proper wound and burn care, immobilization of fractures, and support of oxygen and blood pressure in traumatic brain injury. Pre-hospital care reduces mortality and morbidity from serious illness and injuries. It is estimated that about 45% of mortality and 35% morbidity can be reduced by providing robust out of hospital emergency care(2). In low and middle-income countries without formal emergency care system around 80% of deaths in severe trauma occurred in pre-hospital setting(5, 6). In Ethiopia in the past 10 years, the Federal Ministry of Health has introduced an effort to improve the Emergency Medical Service (EMS) systems. Efforts include distribution of ambulances to all regions, providing at least one ambulance per district (woreda), training of paramedics, and procurement of on-board medical equipment. In Ethiopia, there is less grown pre-hospital care and a paucity of study on the existing level of care and determinants of care(4,7). This study aimed to assess the time to reach the facility and the pattern of pre-hospital care provided for trauma patients visiting a tertiary care trauma center in Addis Ababa. Methods: Study area period: This study was conducted from April 1 to May 31, 2020 at Addis Ababa Burn Emergency and Trauma (AaBET) Hospital. AaBET Hospital is a part of St. Paul's Hospital Millennium Medical College. It is an emergency dedicated center with level 3 trauma care. It provides emergency and critical care services, Orthopedic, Neuro-surgery, General surgery, and Plastics surgery services. The emergency department has 60 beds and the overall hospital bed is 300. Annual emergency room patient visits ranges from 15,000 to 20,000. Design and sampling: A hospital-based cross-sectional study design was used. The source population was all trauma patients seen in Addis Ababa Burn Emergency and Trauma (AaBET) Emergency Department (ED). All adult (age >15 years) trauma patients who came to AaBET Hospital Emergency Department during the 2 months study period were included. Patients sent to ED from regular Out Patient Department (OPD) for admission, death on arrival, and patients who came for follow-up were excluded Sample size determination: Sample size calculated by using the single population proportion formula, prevalence of (0.17) was used where 17 % of patients received pre-hospital care (4), with 10% error sample size of 238 patients was included using simple random sampling from a total of 1064 trauma patients seen during the 2 month study period. 
Data collection and analysis: Data were collected by two trained data collectors from patients and/or caregivers using a pilot-tested, structured interview questionnaire prepared in English and translated into the patient's mother tongue. The questionnaire was prepared by reviewing the literature and adapting items for the population studied (4, 7). The data collectors were trained in data collection, and the quality of the data was checked by the principal investigator. Data collection was done after the patient was clinically stabilized; for patients in critical condition and/or unable to communicate, data were collected from caregivers. Data were cleaned before analysis using SPSS version 21. Descriptive statistics were produced for demographics, mode of transport, mechanism of injury, body site of injury, and time of arrival at the hospital. Associations were assessed using the chi-square test, and binary logistic regression was used to determine factors associated with pre-hospital care provision. All tests with a P-value < 0.05 were considered statistically significant. Tables and graphs were used for data presentation. Ethics Approval: The research proposal was approved by St Paul's Hospital Millennium Medical College's research ethics committee. This study was conducted in accordance with the Declaration of Helsinki: each study participant was informed about the aim of the study and its benefits and risks; informed written consent was secured from study participants; study participants' confidentiality was maintained; no personal identifiers were used in the data collection questionnaire, and codes were used in their place. Emergency care interventions were not delayed for the interview. Results: Socio-demographic characteristics: Of the 238 included patients, 186(78.2%) were male, with a male to female ratio of 3.6:1. The mean age was 32.25 years (SD 13.45 years). Most patients were from Oromia 117(49.2%) and Addis Ababa 104(43.7%). Regarding occupation, the majority were day laborers 81(34%), followed by farmers 40(16.8%) (Table 1). Socio-demographic characteristics and time to a first health facility of trauma patients presented to AaBET Hospital, Addis Ababa, Ethiopia, April 1 – May 30, 2020. Clinical profile: The most common mechanism of trauma was road traffic accidents 102(42.9%), followed by falls 62(26.1%). Trauma to the extremities 135(56.7%) was the most common site of injury, followed by the head 86(36.1%). Upon arrival, patients were triaged as yellow-green 185(77.8%), orange 39(16.4%), and red 16(5.9%). Patients had sustained the injury at work 87(36.6%), at a public gathering 46(19.3%), as a pedestrian 44(18.4%), as a passenger 35(14.7%), or while driving 19(8%) (Table 2). Summary of triage, mechanism of trauma, and type of trauma of patients presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30, 2020. Pre-hospital care: One hundred ten patients (46.2%) received some form of pre-hospital care, mainly first aid at the scene. Care provided at the scene included positioning the patient 19(8%), bleeding arrest 74(31.1%), splinting/immobilizing the injured area 38(16%), removing the patient from the wreck 51(21.4%), and others such as calling the police or others for help 7(2.9%). Relatives were the most common first aid providers 49(45%), followed by bystanders 37(33.9%), trained ambulance staff 19(17.4%), and police 2(1.8%). 
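A minimal sketch of the analysis plan described in the data collection and analysis section above (chi-square tests for association, then binary logistic regression; the study used SPSS version 21). The sketch below uses Python with pandas, SciPy and statsmodels for illustration, and the file name and column names are hypothetical, since the paper does not publish its codebook:

```python
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical file and column names -- for illustration only.
# 'prehospital_care' is assumed to be coded 0/1 (received care or not).
df = pd.read_csv("trauma_prehospital.csv")  # one row per patient

# Chi-square test: mechanism of injury vs. receipt of pre-hospital care.
table = pd.crosstab(df["mechanism"], df["prehospital_care"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Binary logistic regression for factors associated with care provision.
model = smf.logit(
    "prehospital_care ~ C(mechanism) + C(sex) + age + C(transport_mode)",
    data=df,
).fit()
print(model.summary())
```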
The main reasons for not providing care were lack of knowledge 79(61.2%), followed by lack of equipment 25(19.4%), fear of the procedure 11(8.5%), fear of medico-legal issues 9(7%), and others such as fear of transmissible diseases 5(3.9%). One hundred thirty-three (56.1%) patients presented to the first health facility within one hour; 57(33.5%) of them were transported to the first health facility by taxi. Time of arrival and mode of transport are shown in Table 3. Time of arrival at the first facility against the mode of transport of patients presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30, 2020. Inter-health-facility referral: A total of 155(65.1%) patients were referred from other facilities; only 83(53.5%) were referred with prior communication. The most common source of referral was a public hospital 79(46.7%), followed by a public health center 74(43.7%) and a private facility 16(9.6%). Of the referred patients, 142(82.6%, n=172) were transported by ambulance, 20(11.5%) by taxi, 9(5.2%) by private car, and one patient on foot. Determinants of pre-hospital care: The chi-square test was used to assess the association of socio-demographics, mode of transport, mechanism of trauma, and type of trauma with the delivery of pre-hospital care. Only the mechanism of injury had a statistically significant association with the provision of pre-hospital care (P-value=0.04) (Table 4). Association between demographics and pre-hospital care for trauma patients presented to AaBET Hospital ED, Addis Ababa, Ethiopia, April 1–May 30, 2020. A P-value<0.05 was considered statistically significant. Discussion: This study showed that the overall utilization of pre-hospital care among trauma patients was 110(46.2%), an improvement over a study done in Addis Ababa's Tikur Anbesa Specialized Hospital (TASH) seven years earlier (16.7%) and higher than studies from India (26.5%) and southwest Nigeria (8.6%)(4, 8, 9). It is comparable to a cross-sectional study done in Hanoi, Vietnam (48%)(10). The authors suggest that this improvement could be due to community awareness and the engagement of different stakeholders in reducing trauma, although this requires further study. Time to a health facility is a crucial factor for trauma patients: based on the Golden hour concept, arrival within one hour is considered a determinant of better patient survival(11). In this study, 56.1% of patients reached the initial health facility within one hour, comparable to studies done in Kenya (66.2% arriving within one hour) and southwest Nigeria (57% arriving within the first hour of trauma)(12,13). This was better than the Tikur Anbessa study, where 18.5% arrived within one hour and 57% arrived in 1–2 hours(4). A study from Seattle showed 75% arriving at the initial facility within thirty minutes of trauma(14). This significant variation may be related to differences in the availability of nearby facilities, transport infrastructure, and community awareness of where and when to go. The majority of scene care in our study was provided by untrained relatives, which is consistent with reports from developing nations(4). However, in developed countries such as the USA and the UK, studies show that scene care is delivered by well-trained personnel(15,16). This variation may arise from the presence of a single emergency call number and greater community awareness of how to use ambulances. Most care provided in our study was basic: bleeding arrest (33.9%), removing the patient from the trauma scene (21%), immobilization (16%), and others. This is also consistent with reports from developing nations(4,12). The explanation may be that most of the providers were untrained bystanders and relatives, unlike in developed countries where care is given by personnel with advanced training and equipped ambulances. Training the community in first aid through community health workers and awareness campaigns in the mass media would help. Upon analysis of the means of transportation to the initial health facility, our study showed that taxi was the most common means (32.4%), followed by ambulance (22.7%), walking with support (15.5%), and others. This was in keeping with studies done in Tikur Anbessa, Vietnam, and India, where the majority of patients were transported by taxi and private vehicles(4,8,13). A study in Kenya showed only 1.4% ambulance use, and a study in Tanzania showed no patient transported by ambulance (12, 13). However, studies in Britain and the USA showed more than 96% ambulance usage(15, 16). The low utilization of ambulances for transport may be due to their inadequate distribution, poor infrastructure, lack of community awareness of how to obtain them, and the large shift of health system resources, including ambulances, to fighting the pandemic. Ambulances were used mostly for inter-facility transfer. Taxi use was the most common mode of transport in this study and in other African studies, which suggests a need to rethink ambulance-based pre-hospital care, the standard of care in Western countries (4, 12, 13). African-based solutions are needed for better pre-hospital trauma care; training taxi drivers in trauma first aid and equipping taxis with the necessary equipment would improve pre-hospital trauma care. Although this study provides good insight into the pre-hospital care of trauma patients, it was conducted during a pandemic that strained the health system, shifting the majority of resources toward the pandemic and consequently affecting the pattern of some variables. The study is hospital-based, single-centered, cross-sectional, and of short duration, making it liable to selection bias and limiting its generalizability. In conclusion, this study showed that relatives and bystanders were the first responders during trauma care. However, ambulance utilization for pre-hospital care was low. Trauma patients were delayed in arriving at hospital: only half of the patients presented to the health facility within the Golden hour.
Background: Trauma is a major cause of morbidity and mortality worldwide. Prompt use of pre-hospital care is associated with reduced early and late morbidity and mortality from trauma. This study aimed to assess the time to reach the facility and the pattern of pre-hospital care provided for trauma patients. Methods: A cross-sectional study design with a structured interview questionnaire was used for patients presenting to Addis Ababa Burn Emergency and Trauma Hospital Emergency Department from April 1 to May 30, 2020. Results: Of the 238 interviewed patients, the most common means of transportation from the scene to the initial health facility were taxi 77(32.4%) and ambulance 54(22.7%). The time of arrival from the scene to the initial health care facility was within one hour for 133(56.1%) and 1-3 hours for 84(35.5%). Some form of care was provided at the scene in 110(46.2%) of cases. The care provided was bleeding arrest 74(31.1%), removing the patient from the wreck 51(21.4%), splinting/immobilizing the injured area 38(16%), positioning for patient comfort 19(8%), and others. Relatives were the most common care providers 49(45%), followed by bystanders 37(33.9%), trained ambulance staff 19(17.4%), and police 2(1.8%). The main reasons for not providing care were lack of knowledge 79(61.2%) and lack of equipment 25(19.4%). Conclusions: The study showed relatives and bystanders were the first responders during trauma care. However, ambulance utilization for pre-hospital care was low. Trauma patients were delayed in arriving at hospital: only half of the patients presented to the health facility within the Golden hour.
null
null
2,462
327
[]
4
[ "care", "hospital", "trauma", "study", "patients", "pre", "pre hospital", "hospital care", "pre hospital care", "facility" ]
[ "deaths severe injury", "improve pre hospital", "mortality morbidity trauma", "trauma prevention", "injury pre hospital" ]
null
null
[CONTENT] Pre-hospital care | Trauma | Emergency department | Ambulance [SUMMARY]
[CONTENT] Pre-hospital care | Trauma | Emergency department | Ambulance [SUMMARY]
[CONTENT] Pre-hospital care | Trauma | Emergency department | Ambulance [SUMMARY]
null
[CONTENT] Pre-hospital care | Trauma | Emergency department | Ambulance [SUMMARY]
null
[CONTENT] Ambulances | Cross-Sectional Studies | Ethiopia | Hospitals | Humans | Police [SUMMARY]
[CONTENT] Ambulances | Cross-Sectional Studies | Ethiopia | Hospitals | Humans | Police [SUMMARY]
[CONTENT] Ambulances | Cross-Sectional Studies | Ethiopia | Hospitals | Humans | Police [SUMMARY]
null
[CONTENT] Ambulances | Cross-Sectional Studies | Ethiopia | Hospitals | Humans | Police [SUMMARY]
null
[CONTENT] deaths severe injury | improve pre hospital | mortality morbidity trauma | trauma prevention | injury pre hospital [SUMMARY]
[CONTENT] deaths severe injury | improve pre hospital | mortality morbidity trauma | trauma prevention | injury pre hospital [SUMMARY]
[CONTENT] deaths severe injury | improve pre hospital | mortality morbidity trauma | trauma prevention | injury pre hospital [SUMMARY]
null
[CONTENT] deaths severe injury | improve pre hospital | mortality morbidity trauma | trauma prevention | injury pre hospital [SUMMARY]
null
[CONTENT] care | hospital | trauma | study | patients | pre | pre hospital | hospital care | pre hospital care | facility [SUMMARY]
[CONTENT] care | hospital | trauma | study | patients | pre | pre hospital | hospital care | pre hospital care | facility [SUMMARY]
[CONTENT] care | hospital | trauma | study | patients | pre | pre hospital | hospital care | pre hospital care | facility [SUMMARY]
null
[CONTENT] care | hospital | trauma | study | patients | pre | pre hospital | hospital care | pre hospital care | facility [SUMMARY]
null
[CONTENT] care | trauma | morbidity | mortality | death | hospital | pre hospital | pre | death occurring | effective [SUMMARY]
[CONTENT] data | emergency | study | hospital | patients | department | collection | data collection | care | patient [SUMMARY]
[CONTENT] hospital | 19 | followed | patients | trauma | care | ababa ethiopia april | presented aabet | addis ababa ethiopia april | addis ababa ethiopia [SUMMARY]
null
[CONTENT] care | hospital | trauma | study | patients | data | pre | pre hospital | emergency | hospital care [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] Addis Ababa Burn | Trauma Hospital Emergency Department | April 1 to May 30, 2020 [SUMMARY]
[CONTENT] 238 | 77(32.4% | 54(22.7% ||| one hour | 133(56.1% | 1-3 hours | 84(35.5% ||| 110(46.2% ||| 74(31.1 % ||| 51(21.4% ||| 38(16% | 19(8% ||| 49(45% | 37(33.9% | 19(17.4% | 2 | 1.8% ||| 79(61.2% | 25 | 19.4% [SUMMARY]
null
[CONTENT] ||| ||| ||| Addis Ababa Burn | Trauma Hospital Emergency Department | April 1 to May 30, 2020 ||| 238 | 77(32.4% | 54(22.7% ||| one hour | 133(56.1% | 1-3 hours | 84(35.5% ||| 110(46.2% ||| 74(31.1 % ||| 51(21.4% ||| 38(16% | 19(8% ||| 49(45% | 37(33.9% | 19(17.4% | 2 | 1.8% ||| 79(61.2% | 25 | 19.4% ||| first ||| ||| ||| Only half | Golden hour [SUMMARY]
null
Is DAIR Still an Effective Way to Eradicate Acute Prosthetic Joint Infections? Our Experience in the Jordanian Royal Medical Services.
35169373
Prosthetic joint infections are estimated to occur in 1-2% of primary total joint arthroplasties. Debridement, antibiotics, irrigation and retention of prosthesis (DAIR) is the traditional treatment for acute prosthetic joint infections.
BACKGROUND
Our prospective, double-blind, randomized investigation included 70 subjects of both sexes, aged 63-72 years, who were managed with debridement, antibiotics, irrigation and retention of prosthesis for total hip or total knee arthroplasty acute prosthetic joint infections at Prince Hashim military hospital and Queen Alia military hospital, Jordan, during the period October 2017-October 2020. The observation period was 3 years. Therapy success was defined as absence of infection following 3 years, retention of the prosthesis, and no further antibiotic therapy. Prosthetic joint infection was defined based on one or more of: a) growth of the same microorganism in a minimum of 2 cultures; b) one positive culture plus purulent synovial fluid at debridement; c) negative cultures plus a minimum of 2 criteria including purulent synovial fluid at debridement. A successful outcome was defined as no clinical or laboratory evidence of infection (serum C-reactive protein less than 10 mg/L) at 3 years. Subjects on chronic suppressive antibiotics or with prosthesis removal were considered therapy failures. Parameters statistically and remarkably discrepant between the success and failure groups were investigated with logistic regression. P less than 0.05 was considered statistically significant.
METHODS
A total of 46 subjects (65.7%) had no infection during the observation period. Factors correlated with therapy failure were: a history of rheumatoid arthritis, delayed infection (more than 1.5 years following arthroplasty), ESR at presentation of more than 50 mm/h, and infection induced by coagulase-negative Staphylococcus. Symptom duration of less than 5 days was associated with a better outcome. The use of Gentamicin sponges was statistically remarkably higher in the success group and the use of beads was higher in the failure group in the univariate analysis, but not in the logistic regression. Fewer surgical interventions were needed in the group managed with sponges than in the group managed with beads. Prosthetic joint infection induced by coagulase-negative Staphylococcus was associated with a lower success rate, and streptococcal infections were associated with a higher success rate.
RESULTS
Rheumatoid arthritis, duration of symptoms of more than 5 days, ESR of more than 50 mm/h, delayed infection (more than 1.5 years following the index arthroplasty) and coagulase-negative Staphylococcus infections reduce the rate of a successful debridement, antibiotics, irrigation and retention of prosthesis therapy.
CONCLUSION
[ "Aged", "Anti-Bacterial Agents", "Debridement", "Female", "Humans", "Male", "Middle Aged", "Prospective Studies", "Prostheses and Implants", "Prosthesis-Related Infections", "Retrospective Studies", "Treatment Outcome" ]
8802682
BACKGROUND
Prosthetic joint infections are estimated to occur in 1-2% of initial total hip and total knee arthroplasties (1). Infected prosthetic joints are unresponsive to systemic antibiotic therapy alone because of the poor vascular supply and biofilm production. Acute prosthetic joint infections are divided into three groups according to the timing of the clinical picture relative to surgery: a) early postoperative: clinical picture less than 4 weeks postoperatively; b) delayed chronic: gradual onset of the clinical picture; and c) acute hematogenous: acute onset in a previously well-functioning prosthesis (2); alternatively, early (3 months), late/low-grade (3-24 months), and delayed infection (more than 24 months) (3). Risk factors associated with acute prosthetic joint infections include rheumatoid arthritis, diabetes mellitus, malignancy, obesity, and immunosuppressant use (4). Revision surgery increases the risk of acute prosthetic joint infection. Factors associated with a worse outcome of therapy for acute prosthetic joint infections are: infections induced by Staphylococcus spp. (5), and more so by Staphylococcus aureus (6); polymicrobial acute infections (4); intra-articular purulence (5); retention of exchangeable elements (4); and a longer period between the index arthroplasty and the confirmation of infection (4). Most acute prosthetic joint infections are caused by coagulase-negative Staphylococcus (30-41%) and S. aureus (12-47%). Streptococcus spp. and Enterococcus spp. are found in approximately 10% of cases, and gram-negative bacteria such as Escherichia coli account for less than 5% (7); 5-39% are polymicrobial infections (6). Debridement, antibiotics, irrigation and retention of the prosthesis (DAIR) is used for acute infections in patients without risk factors such as remarkable co-morbidities. DAIR has a reported success rate between 14% and 100% (8). Success of more than 70% can be expected when cases with a short duration of symptoms (less than 3-4 weeks), a stable implant, and healthy soft tissues around the prosthesis are selected (9). In chronic infections, implant retention is rarely successful. Local antibiotics with aminoglycosides in beads or sponges achieve high local concentrations without toxic serum levels. Beads have a more prolonged release than sponges but do not achieve higher concentrations, and they act as foreign bodies to which bacteria can adhere.
METHODS
Our prospective, double-blind, randomized investigation included 70 subjects of both sexes, aged 63-72 years, managed with debridement, antibiotics, irrigation and retention of prosthesis for total hip or total knee arthroplasty acute prosthetic joint infections at Prince Hashim military hospital and Queen Alia military hospital, Jordan, during the period October 2017 - October 2020, after obtaining written informed consent from all subjects and approval from the local ethical and research board review committee of the Jordanian Royal Medical Services. The observation period was 3 years. Absence of infection after 1.5 years, retention of the prosthesis, and no further antibiotic therapy was regarded as therapy success. Prosthetic joint infection was defined based on one or more of (8): a) growth of the same microorganism in a minimum of 2 cultures (joint aspiration before surgery and/or intracapsular samples during surgery); b) one positive culture plus intracapsular purulence at debridement, acute inflammation on histological study of tissue taken intraoperatively, and/or an actively draining sinus tract; c) negative cultures plus a minimum of 2 of: intra-articular purulence at debridement, acute inflammation on histological study of tissue taken intraoperatively, and an actively draining sinus tract. The decision whether to do DAIR was made according to the clinical picture, the kind of infection, and whether the prostheses were loose or not. DAIR was repeated after 2 weeks if the clinical and laboratory picture worsened. Antibiotic therapy based on the bacterial antibiogram was given for a minimum of 6 weeks. The joint was opened via the old scar or incision, tissues were sampled for three cultures from the synovium, capsule and interfaces, and the joint was debrided with synovial resection. Following debridement, the joint and overlying soft tissues were irrigated with saline lavage and primarily closed. Gentamicin beads were removed at debridement. After surgery, antibiotics were started, either with broad coverage such as Vancomycin or with an agent chosen according to the antibiogram. Enoxaparin 4000 IU SC was given daily in the hospital. Before debridement, an antibiotic holiday and joint aspiration to identify the offending organism preoperatively are recommended. Removal of skin margins, excision of any sinuses, radical synovectomy, and exchange of the modular parts of the implant (polyethylene inlay, femoral head, polyethylene tibial insert) follow. The joint should have a full lavage. A suction drain can be left in place for a couple of days. If drainage or infection continues, a second debridement is advised. Continuous closed irrigation was not more efficient than the standard technique with primary closure and an in situ drain. A successful treatment outcome was defined as no clinical or laboratory evidence of inflammation (serum C-reactive protein less than 10 mg/L) at 3 years. Subjects on chronic suppressive antibiotics or with prosthesis removal were considered therapy failures. Statistical analysis: Independent t-tests were used for parametric comparisons of continuous variables and the non-parametric Mann-Whitney U-test for non-parametric comparisons. For categorical variables, chi-square and Fisher’s exact tests were done. Parameters statistically and remarkably discrepant between the success and failure groups were investigated with logistic regression. P less than 0.05 was considered statistically significant.
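A minimal sketch of the statistical analysis just described (independent t-tests or Mann-Whitney U-tests for continuous variables, chi-square or Fisher's exact tests for categorical variables, then logistic regression on the parameters that differ between groups). Python is used for illustration; the data file and column names are hypothetical assumptions, not published by the paper:

```python
import pandas as pd
from scipy.stats import ttest_ind, mannwhitneyu, fisher_exact
import statsmodels.formula.api as smf

# Hypothetical layout: one row per subject; 'failure' = 1 for therapy failure.
df = pd.read_csv("dair_cohort.csv")
success = df[df["failure"] == 0]
failure = df[df["failure"] == 1]

# Continuous variable, e.g. ESR: parametric and non-parametric comparison.
t_stat, p_t = ttest_ind(success["esr"], failure["esr"])
u_stat, p_u = mannwhitneyu(success["esr"], failure["esr"])

# Categorical variable, e.g. rheumatoid arthritis, as a 2x2 table.
odds, p_f = fisher_exact(pd.crosstab(df["ra"], df["failure"]))

# Parameters that differ remarkably between groups enter a logistic model.
fit = smf.logit(
    "failure ~ ra + delayed_infection + esr_over_50 + cons_staph", data=df
).fit()
print(fit.summary())
```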
null
null
CONCLUSION
Multiple factors were associated with DAIR failure: rheumatoid arthritis, a clinical picture lasting more than 5 days, delayed infection (more than 1.5 years following arthroplasty), ESR of more than 50 mm/h, and coagulase-negative Staphylococcus. If one or more of these factors are present in a patient, the chance of successful DAIR therapy is reduced. Gentamicin sponges had outcomes similar to those of beads.
[ "BACKGROUND", "OBJECTIVE", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Prosthetic joint infections is estimated to occur in 1-2% of initial total hip arthroplasties and total knee arthroplasties (1). Infected prosthetic joints are unresponsive to systemic antibiotics therapy alone because of the poor vascular supply and biofilm production. Acute prosthetic joint infections are divided in three groups according to the period of clinical picture and postoperative period: a) Early after surgery: clinical picture less than 4 weeks postoperatively; b) Delayed chronic: gradual onset of clinical picture and c) Acute hematogenous: acute onset in a previously well-functioning prosthesis (2) or early (3 months), late/low-grade (3-24 months) and delayed infection (more than 24 months) (3).\nDifferent risk factors associated with acute prosthetic joint infections are rheumatoid arthritis, diabetes mellitus, malignancy, obesity and immunosuppressant use (4). Revision operation increases the risk of acute prosthetic joint infections. Factors associated with a worse result of acute prosthetic joint infections therapy are: infections induced by Staphylococcus spp. (5) and more by Staphylococcus aureus (6), polymicrobial acute infections (4), intra-articular purulence (5), retention of exchangeable elements (4) and a longer period between the index arthroplasty and the confirmation of infection (4).\nMost acute prosthetic joint infections are caused by coagulase-negative Staphylococcus (30-41%) and S. aureus (12-47%). Streptococcus spp. and Enterococcus spp. are found in approximately 10% of cases and gram-negative bacteria such as Escherichia coli are less than 5% (7). 5–39% were polymicrobial infections (6). Debridement, antibiotics, irrigation and retention of the prosthesis (DAIR) is used for acute infections with no risk factors such as remarkable co-morbidities. DAIR has a success rate between 14% and 100%(8). Success of more than 70% can be expected in cases with a short period of symptoms (less than 3-4 weeks), a stable implant and a healthy soft tissues around the prosthesis are selected (9). In chronic infections, implant retention is rarely successful.\nLocal antibiotics with aminoglycosides in beads or sponges achieve increased local concentrations with no toxic serum levels. Beads have a lengthened release in comparison to sponges but with no increased concentrations, acting as foreign bodies, to which bacteria can adhere.", "The aim of this study was to determine the risk factors in patients managed with debridement, antibiotics, irrigation and retention of prosthesis for acute prosthetic joint infections and the results of DAIR for total hip and knee.", "Amount of 70 patients with prosthetic joint infections (48 hips and 22 knees) were managed with DAIR, of whom 46 subjects were infection-free after treatment without the need for further resection arthroplasty or administration of suppressive antibiotics (65.7% success rate). Patients’ data analyzed for the Success and failure groups risk factors are shown in Table I.\nFive factors were correlated with failure: rheumatoid arthritis, delayed infection, ESR more than 50 mm/h, clinical picture period more than 5 days before the beginning of therapy and prosthetic joint-correlated infection induced by coagulase-negative Staphylococcus (Table 2). Revision arthroplasty was not correlated with either therapy failure or success. The average number of techniques was comparable in successful and failed subjects (Table 3). There was no discrepancy in success frequency between one DAIR and multi-methods. 
The administration of Gentamicin sponges was statistically remarkably more in the success group, and the administration of beads was remarkably more in the failure group. The average number of techniques was less when sponges were used (2.2) than when beads were used (3.0) (P < 0.005).\nAspiration before surgery was done in 50 subjects, and a microorganism was found in 46 subjects. Samples were taken during surgery from all 70 subjects, with one positive in 47 subjects and in the other 3 subjects, all aspirations had positive cultures. Most infections were induced by S. aureus (Table 4). MRSA was causative in only 3 subjects of prosthetic joint-correlated infection, with successful therapy. Prosthetic joint-correlated infection induced by coagulase-negative Staphylococcus was correlated with a reduced success incidence and streptococcal infections were correlated with an increased success incidence. All streptococcal infections were managed during 5 days of clinical picture. Five subjects had a polymicrobial prosthetic joint-correlated infection with successful therapy.", "Prosthetic joint infection is a serious complication in total joint replacement and is the second most frequent cause of revision of joint replacement. Management choices are: non-surgical methods with long period of antibiotics, debridement followed by a period of antibiotics and implant retention (DAIR), 1 or 2-stage revision, joint fusion and amputation. Implant retention with infection treatment is optimum and DAIR had different success percentages based on the patient, period of infection, the offending micro-organisms, technique, number of debridements performed, arthroscopic or open, and the period of antibiotic administration. Risk factors of DAIR failure are: immunocompromised host, a long period between the start of infection and surgical debridement, the presence of a sinus, Staphylococcal infection (especially Methicillin Resistant Staphylococcal Aureus), multi-debridements, short antibiotic period and retention of exchangeable parts.\nOf 70 subjects in our study, 46 were managed successfully with DAIR. Revision arthroplasty was a risk factor for prosthetic joint infection, but was not linked to therapy failure. Only rheumatoid arthritis had more therapy failure rate (8). Only three subjects with Rheumatoid Arthritis were using immunosuppressive agents (one in the success group and two in the failure group).\nIn most subjects (46/70), intervention was started during 5 days of presentation with more favorable outcome. The therapy success with early infections and infections of short period of symptoms is frequently caused by lack of biofilm formation (1), and it is indicated that DAIR must only be used for subjects with a short period of symptoms (not more than 15 days) or period following index surgery of less than 30 days (9). Clinical picture of less than seven days was correlated with therapy success. Six subjects with a period of clinical picture of more than 4 weeks were managed with DAIR. Early prosthetic joint infection was recorded during 3 months following index operation (3). Period of clinical picture of more than 4 weeks was not a factor associated with therapy outcome. An ESR of more than 50 mm/h was associated with therapy failure. A reduced ESR might indicate a shorter period of infection, and might be an anticipative of an increased success. 
These markers (CRP and ESR) can confirm prosthetic joint infection (10), although increased CRP was anticipative of failure (4).\nDuring revision, Gentamicin beads were used to fill the dead space following arthroplasty removal (8). The use of Gentamicin sponges in management of prosthetic joint infection was used (11). An increased success incidence for sponges and a reduced success incidence for beads was recorded. The collagen-Gentamicin sponges used are biodegradable, but not beads. Sponges attain more local antibiotic concentrations than beads. Beads are foreign bodies and sustain infection. \nThis study was small and beads were used in severe infection. Coagulase-negative Staphylococcus infection was linked with therapy failure. Staphylococcal or S. aureus infection had more risk of failure (5) due to the ability of the species to produce an early biofilm. The virulence of coagulase-negative Staphylococcus is low, which may delay the confirmation of infection. Management of streptococcal infections had an increased success percentage during 5 days following clinical picture. The increased success percentage is due to the short period of clinical picture. There was an association between streptococcal infections and good outcome (12). Three of 70 subjects experienced a polymicrobial infection, but were with no infection later on. Polymicrobial infections was in the range of 5 and 10%(13, 14) and more percentages of multi-organism prosthetic joint infection of 19- 39% (6). More failure for polymicrobial infection was recorded with a hazard ratio of 1.8 (4).\nNot exchanging the polyethylene insert in TKR or the PE inly and femoral head in THR was an independent risk factor for failure. Modular exchangeable parts should always be exchanged.\nThe methods of irrigation and debridement and culture were not identical in all subjects. There was possible bias in the use of Gentamicin beads and sponges. DAIR must be performed in the acute phase after surgery during one month or in acute late haematogenous infections of TKA during ten days of onset. The debridement must be open and not arthroscopic and the modular parts should be exchanged. Antibiotics after surgery must be administered for at least 3 months and followed up by clinical inflammatory markers. Immunocompromised patients, MRSA infection, the failure of one DAIR and poor status of surrounding soft tissues usually end up in one or two-stage revision arthroplasty. During lavage, there was a tendency toward using chlorhexidine gluconate 0.05%. Chlorhexidine gluconate was most efficient in decreasing bacterial growth and hence was superior to jet saline lavage.", "Multiple factors were related with DAIR failure: rheumatoid arthritis, period of clinical picture more than 5 days, delayed infection (more than 1.5 years following arthroplasty), ESR of more than 50 mm/h and coagulase-negative Staphylococcus. If one or more of these factors are found in a patient, the chances of successful DAIR therapy reduces. Gentamicin sponges had outcome similar to those of beads." ]
[ null, null, null, null, null ]
[ "BACKGROUND", "OBJECTIVE", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Prosthetic joint infections is estimated to occur in 1-2% of initial total hip arthroplasties and total knee arthroplasties (1). Infected prosthetic joints are unresponsive to systemic antibiotics therapy alone because of the poor vascular supply and biofilm production. Acute prosthetic joint infections are divided in three groups according to the period of clinical picture and postoperative period: a) Early after surgery: clinical picture less than 4 weeks postoperatively; b) Delayed chronic: gradual onset of clinical picture and c) Acute hematogenous: acute onset in a previously well-functioning prosthesis (2) or early (3 months), late/low-grade (3-24 months) and delayed infection (more than 24 months) (3).\nDifferent risk factors associated with acute prosthetic joint infections are rheumatoid arthritis, diabetes mellitus, malignancy, obesity and immunosuppressant use (4). Revision operation increases the risk of acute prosthetic joint infections. Factors associated with a worse result of acute prosthetic joint infections therapy are: infections induced by Staphylococcus spp. (5) and more by Staphylococcus aureus (6), polymicrobial acute infections (4), intra-articular purulence (5), retention of exchangeable elements (4) and a longer period between the index arthroplasty and the confirmation of infection (4).\nMost acute prosthetic joint infections are caused by coagulase-negative Staphylococcus (30-41%) and S. aureus (12-47%). Streptococcus spp. and Enterococcus spp. are found in approximately 10% of cases and gram-negative bacteria such as Escherichia coli are less than 5% (7). 5–39% were polymicrobial infections (6). Debridement, antibiotics, irrigation and retention of the prosthesis (DAIR) is used for acute infections with no risk factors such as remarkable co-morbidities. DAIR has a success rate between 14% and 100%(8). Success of more than 70% can be expected in cases with a short period of symptoms (less than 3-4 weeks), a stable implant and a healthy soft tissues around the prosthesis are selected (9). In chronic infections, implant retention is rarely successful.\nLocal antibiotics with aminoglycosides in beads or sponges achieve increased local concentrations with no toxic serum levels. Beads have a lengthened release in comparison to sponges but with no increased concentrations, acting as foreign bodies, to which bacteria can adhere.", "The aim of this study was to determine the risk factors in patients managed with debridement, antibiotics, irrigation and retention of prosthesis for acute prosthetic joint infections and the results of DAIR for total hip and knee.", "Our prospective, double blind and randomized investigation included 70 subjects, of both sexes, aged 63-72 yrs. and managed with debridement, antibiotics, irrigation and retention of prosthesis for total hip arthroplasty or total knee arthroplasty acute prosthetic joint infections at Prince Hashim military hospital and Queen Alia military hospital, Jordan, during the period October 2017 - October 2020, after obtaining written informed consent from all subjects and approval from our local ethical and research board review committee of the Jordanian Royal medical services. The observation period was 3 years. 
Absence of infection following 1.5 years, retention of the prosthesis and no further antibiotics therapy was regarded as therapy success.\nProsthetic joint infection was defined based on one or more of (8): a) growth of the same microorganism in minimum 2 cultures (before surgery joint aspiration and/or during surgery, intracapsular); b) one positive culture and intracapsular purulence in debridement, acute inflammation upon histological study of tissue taken intraoperatively and/or an actively draining sinus tract; c) negative culture and minimum 2 of intra-articular purulence upon debridement, acute inflammation upon histological study of tissue taken intraoperatively and an actively draining sinus tract.\nThe decision to do DAIR or not was made according to the clinical picture, kind of infection and wither the prostheses were looser not. DAIR was repeated following 2 weeks if the clinical and laboratory picture worsened. Antibiotic therapy based on bacterial antibiogram was given for a minimum 6 weeks. The joint was opened via the old scar or incision, tissues were samples for three cultures from synovium, capsule and interfaces, and the joint was debrided with synovial resection. Following debridement, the joint and overlying soft tissues were irrigated with saline using lavage, and primarily closed. Removal of Gentamicin beads was with debridement. After surgery, antibiotics were started, either with a broad coverage such as Vancomycin, or an agent according to antibiogram. Enoxaparin 4000 IU SC was given daily in the hospital. Before debridement, antibiotic holiday and aspirating the joint to define the offending organism before operation is recommended. Removal of skin margins, excision of any sinuses, radical synovectomy and exchange of modular parts of the implant (polyethylene inlay, femoral head, polyethylene tibial insert) follows. The joint should have a full lavage. A suction drain can be left inside for couple of days. If drainage or infection continue, then a second debridement is advised. Continuous closed irrigation was not more efficient than standard technique with initial closure and in situ drain.\nA proper treatment outcome was defined as no clinical and laboratory picture of inflammation (serum C-reactive protein less than 10 mg/L) at 3 years. Subjects with chronic, suppressive antibiotics or with prosthesis removal were therapy failure.\n\nStatistical analysis\n\nIndependent t-tests were done for continuous parameters and the non-parametric Mann-Whitney U-test was used for continuous parameters. For categorical variables, chi-square and Fisher’s exact tests were done. Parameters statistically and remarkably discrepant between success and failure groups were investigated with logistic regression. P less than 0.05 were considered statistically significant.", "Amount of 70 patients with prosthetic joint infections (48 hips and 22 knees) were managed with DAIR, of whom 46 subjects were infection-free after treatment without the need for further resection arthroplasty or administration of suppressive antibiotics (65.7% success rate). Patients’ data analyzed for the Success and failure groups risk factors are shown in Table I.\nFive factors were correlated with failure: rheumatoid arthritis, delayed infection, ESR more than 50 mm/h, clinical picture period more than 5 days before the beginning of therapy and prosthetic joint-correlated infection induced by coagulase-negative Staphylococcus (Table 2). 
Revision arthroplasty was not correlated with either therapy failure or success. The average number of techniques was comparable in successful and failed subjects (Table 3). There was no discrepancy in success frequency between one DAIR and multi-methods. The administration of Gentamicin sponges was statistically remarkably more in the success group, and the administration of beads was remarkably more in the failure group. The average number of techniques was less when sponges were used (2.2) than when beads were used (3.0) (P < 0.005).\nAspiration before surgery was done in 50 subjects, and a microorganism was found in 46 subjects. Samples were taken during surgery from all 70 subjects, with one positive in 47 subjects and in the other 3 subjects, all aspirations had positive cultures. Most infections were induced by S. aureus (Table 4). MRSA was causative in only 3 subjects of prosthetic joint-correlated infection, with successful therapy. Prosthetic joint-correlated infection induced by coagulase-negative Staphylococcus was correlated with a reduced success incidence and streptococcal infections were correlated with an increased success incidence. All streptococcal infections were managed during 5 days of clinical picture. Five subjects had a polymicrobial prosthetic joint-correlated infection with successful therapy.", "Prosthetic joint infection is a serious complication in total joint replacement and is the second most frequent cause of revision of joint replacement. Management choices are: non-surgical methods with long period of antibiotics, debridement followed by a period of antibiotics and implant retention (DAIR), 1 or 2-stage revision, joint fusion and amputation. Implant retention with infection treatment is optimum and DAIR had different success percentages based on the patient, period of infection, the offending micro-organisms, technique, number of debridements performed, arthroscopic or open, and the period of antibiotic administration. Risk factors of DAIR failure are: immunocompromised host, a long period between the start of infection and surgical debridement, the presence of a sinus, Staphylococcal infection (especially Methicillin Resistant Staphylococcal Aureus), multi-debridements, short antibiotic period and retention of exchangeable parts.\nOf 70 subjects in our study, 46 were managed successfully with DAIR. Revision arthroplasty was a risk factor for prosthetic joint infection, but was not linked to therapy failure. Only rheumatoid arthritis had more therapy failure rate (8). Only three subjects with Rheumatoid Arthritis were using immunosuppressive agents (one in the success group and two in the failure group).\nIn most subjects (46/70), intervention was started during 5 days of presentation with more favorable outcome. The therapy success with early infections and infections of short period of symptoms is frequently caused by lack of biofilm formation (1), and it is indicated that DAIR must only be used for subjects with a short period of symptoms (not more than 15 days) or period following index surgery of less than 30 days (9). Clinical picture of less than seven days was correlated with therapy success. Six subjects with a period of clinical picture of more than 4 weeks were managed with DAIR. Early prosthetic joint infection was recorded during 3 months following index operation (3). Period of clinical picture of more than 4 weeks was not a factor associated with therapy outcome. An ESR of more than 50 mm/h was associated with therapy failure. 
A reduced ESR might indicate a shorter period of infection, and might be an anticipative of an increased success. These markers (CRP and ESR) can confirm prosthetic joint infection (10), although increased CRP was anticipative of failure (4).\nDuring revision, Gentamicin beads were used to fill the dead space following arthroplasty removal (8). The use of Gentamicin sponges in management of prosthetic joint infection was used (11). An increased success incidence for sponges and a reduced success incidence for beads was recorded. The collagen-Gentamicin sponges used are biodegradable, but not beads. Sponges attain more local antibiotic concentrations than beads. Beads are foreign bodies and sustain infection. \nThis study was small and beads were used in severe infection. Coagulase-negative Staphylococcus infection was linked with therapy failure. Staphylococcal or S. aureus infection had more risk of failure (5) due to the ability of the species to produce an early biofilm. The virulence of coagulase-negative Staphylococcus is low, which may delay the confirmation of infection. Management of streptococcal infections had an increased success percentage during 5 days following clinical picture. The increased success percentage is due to the short period of clinical picture. There was an association between streptococcal infections and good outcome (12). Three of 70 subjects experienced a polymicrobial infection, but were with no infection later on. Polymicrobial infections was in the range of 5 and 10%(13, 14) and more percentages of multi-organism prosthetic joint infection of 19- 39% (6). More failure for polymicrobial infection was recorded with a hazard ratio of 1.8 (4).\nNot exchanging the polyethylene insert in TKR or the PE inly and femoral head in THR was an independent risk factor for failure. Modular exchangeable parts should always be exchanged.\nThe methods of irrigation and debridement and culture were not identical in all subjects. There was possible bias in the use of Gentamicin beads and sponges. DAIR must be performed in the acute phase after surgery during one month or in acute late haematogenous infections of TKA during ten days of onset. The debridement must be open and not arthroscopic and the modular parts should be exchanged. Antibiotics after surgery must be administered for at least 3 months and followed up by clinical inflammatory markers. Immunocompromised patients, MRSA infection, the failure of one DAIR and poor status of surrounding soft tissues usually end up in one or two-stage revision arthroplasty. During lavage, there was a tendency toward using chlorhexidine gluconate 0.05%. Chlorhexidine gluconate was most efficient in decreasing bacterial growth and hence was superior to jet saline lavage.", "Multiple factors were related with DAIR failure: rheumatoid arthritis, period of clinical picture more than 5 days, delayed infection (more than 1.5 years following arthroplasty), ESR of more than 50 mm/h and coagulase-negative Staphylococcus. If one or more of these factors are found in a patient, the chances of successful DAIR therapy reduces. Gentamicin sponges had outcome similar to those of beads." ]
[ null, null, "methods", null, null, null ]
[ "Debridement", "antibiotics", "irrigation", "retention", "Prosthetic joint infections" ]
BACKGROUND: Prosthetic joint infections is estimated to occur in 1-2% of initial total hip arthroplasties and total knee arthroplasties (1). Infected prosthetic joints are unresponsive to systemic antibiotics therapy alone because of the poor vascular supply and biofilm production. Acute prosthetic joint infections are divided in three groups according to the period of clinical picture and postoperative period: a) Early after surgery: clinical picture less than 4 weeks postoperatively; b) Delayed chronic: gradual onset of clinical picture and c) Acute hematogenous: acute onset in a previously well-functioning prosthesis (2) or early (3 months), late/low-grade (3-24 months) and delayed infection (more than 24 months) (3). Different risk factors associated with acute prosthetic joint infections are rheumatoid arthritis, diabetes mellitus, malignancy, obesity and immunosuppressant use (4). Revision operation increases the risk of acute prosthetic joint infections. Factors associated with a worse result of acute prosthetic joint infections therapy are: infections induced by Staphylococcus spp. (5) and more by Staphylococcus aureus (6), polymicrobial acute infections (4), intra-articular purulence (5), retention of exchangeable elements (4) and a longer period between the index arthroplasty and the confirmation of infection (4). Most acute prosthetic joint infections are caused by coagulase-negative Staphylococcus (30-41%) and S. aureus (12-47%). Streptococcus spp. and Enterococcus spp. are found in approximately 10% of cases and gram-negative bacteria such as Escherichia coli are less than 5% (7). 5–39% were polymicrobial infections (6). Debridement, antibiotics, irrigation and retention of the prosthesis (DAIR) is used for acute infections with no risk factors such as remarkable co-morbidities. DAIR has a success rate between 14% and 100%(8). Success of more than 70% can be expected in cases with a short period of symptoms (less than 3-4 weeks), a stable implant and a healthy soft tissues around the prosthesis are selected (9). In chronic infections, implant retention is rarely successful. Local antibiotics with aminoglycosides in beads or sponges achieve increased local concentrations with no toxic serum levels. Beads have a lengthened release in comparison to sponges but with no increased concentrations, acting as foreign bodies, to which bacteria can adhere. OBJECTIVE: The aim of this study was to determine the risk factors in patients managed with debridement, antibiotics, irrigation and retention of prosthesis for acute prosthetic joint infections and the results of DAIR for total hip and knee. METHODS: Our prospective, double blind and randomized investigation included 70 subjects, of both sexes, aged 63-72 yrs. and managed with debridement, antibiotics, irrigation and retention of prosthesis for total hip arthroplasty or total knee arthroplasty acute prosthetic joint infections at Prince Hashim military hospital and Queen Alia military hospital, Jordan, during the period October 2017 - October 2020, after obtaining written informed consent from all subjects and approval from our local ethical and research board review committee of the Jordanian Royal medical services. The observation period was 3 years. Absence of infection following 1.5 years, retention of the prosthesis and no further antibiotics therapy was regarded as therapy success. 
Prosthetic joint infection was defined based on one or more of (8): a) growth of the same microorganism in minimum 2 cultures (before surgery joint aspiration and/or during surgery, intracapsular); b) one positive culture and intracapsular purulence in debridement, acute inflammation upon histological study of tissue taken intraoperatively and/or an actively draining sinus tract; c) negative culture and minimum 2 of intra-articular purulence upon debridement, acute inflammation upon histological study of tissue taken intraoperatively and an actively draining sinus tract. The decision to do DAIR or not was made according to the clinical picture, kind of infection and wither the prostheses were looser not. DAIR was repeated following 2 weeks if the clinical and laboratory picture worsened. Antibiotic therapy based on bacterial antibiogram was given for a minimum 6 weeks. The joint was opened via the old scar or incision, tissues were samples for three cultures from synovium, capsule and interfaces, and the joint was debrided with synovial resection. Following debridement, the joint and overlying soft tissues were irrigated with saline using lavage, and primarily closed. Removal of Gentamicin beads was with debridement. After surgery, antibiotics were started, either with a broad coverage such as Vancomycin, or an agent according to antibiogram. Enoxaparin 4000 IU SC was given daily in the hospital. Before debridement, antibiotic holiday and aspirating the joint to define the offending organism before operation is recommended. Removal of skin margins, excision of any sinuses, radical synovectomy and exchange of modular parts of the implant (polyethylene inlay, femoral head, polyethylene tibial insert) follows. The joint should have a full lavage. A suction drain can be left inside for couple of days. If drainage or infection continue, then a second debridement is advised. Continuous closed irrigation was not more efficient than standard technique with initial closure and in situ drain. A proper treatment outcome was defined as no clinical and laboratory picture of inflammation (serum C-reactive protein less than 10 mg/L) at 3 years. Subjects with chronic, suppressive antibiotics or with prosthesis removal were therapy failure. Statistical analysis Independent t-tests were done for continuous parameters and the non-parametric Mann-Whitney U-test was used for continuous parameters. For categorical variables, chi-square and Fisher’s exact tests were done. Parameters statistically and remarkably discrepant between success and failure groups were investigated with logistic regression. P less than 0.05 were considered statistically significant. RESULTS: Amount of 70 patients with prosthetic joint infections (48 hips and 22 knees) were managed with DAIR, of whom 46 subjects were infection-free after treatment without the need for further resection arthroplasty or administration of suppressive antibiotics (65.7% success rate). Patients’ data analyzed for the Success and failure groups risk factors are shown in Table I. Five factors were correlated with failure: rheumatoid arthritis, delayed infection, ESR more than 50 mm/h, clinical picture period more than 5 days before the beginning of therapy and prosthetic joint-correlated infection induced by coagulase-negative Staphylococcus (Table 2). Revision arthroplasty was not correlated with either therapy failure or success. The average number of techniques was comparable in successful and failed subjects (Table 3). 
There was no discrepancy in success frequency between one DAIR and multi-methods. The administration of Gentamicin sponges was statistically remarkably more in the success group, and the administration of beads was remarkably more in the failure group. The average number of techniques was less when sponges were used (2.2) than when beads were used (3.0) (P < 0.005). Aspiration before surgery was done in 50 subjects, and a microorganism was found in 46 subjects. Samples were taken during surgery from all 70 subjects, with one positive in 47 subjects and in the other 3 subjects, all aspirations had positive cultures. Most infections were induced by S. aureus (Table 4). MRSA was causative in only 3 subjects of prosthetic joint-correlated infection, with successful therapy. Prosthetic joint-correlated infection induced by coagulase-negative Staphylococcus was correlated with a reduced success incidence and streptococcal infections were correlated with an increased success incidence. All streptococcal infections were managed during 5 days of clinical picture. Five subjects had a polymicrobial prosthetic joint-correlated infection with successful therapy. DISCUSSION: Prosthetic joint infection is a serious complication in total joint replacement and is the second most frequent cause of revision of joint replacement. Management choices are: non-surgical methods with long period of antibiotics, debridement followed by a period of antibiotics and implant retention (DAIR), 1 or 2-stage revision, joint fusion and amputation. Implant retention with infection treatment is optimum and DAIR had different success percentages based on the patient, period of infection, the offending micro-organisms, technique, number of debridements performed, arthroscopic or open, and the period of antibiotic administration. Risk factors of DAIR failure are: immunocompromised host, a long period between the start of infection and surgical debridement, the presence of a sinus, Staphylococcal infection (especially Methicillin Resistant Staphylococcal Aureus), multi-debridements, short antibiotic period and retention of exchangeable parts. Of 70 subjects in our study, 46 were managed successfully with DAIR. Revision arthroplasty was a risk factor for prosthetic joint infection, but was not linked to therapy failure. Only rheumatoid arthritis had more therapy failure rate (8). Only three subjects with Rheumatoid Arthritis were using immunosuppressive agents (one in the success group and two in the failure group). In most subjects (46/70), intervention was started during 5 days of presentation with more favorable outcome. The therapy success with early infections and infections of short period of symptoms is frequently caused by lack of biofilm formation (1), and it is indicated that DAIR must only be used for subjects with a short period of symptoms (not more than 15 days) or period following index surgery of less than 30 days (9). Clinical picture of less than seven days was correlated with therapy success. Six subjects with a period of clinical picture of more than 4 weeks were managed with DAIR. Early prosthetic joint infection was recorded during 3 months following index operation (3). Period of clinical picture of more than 4 weeks was not a factor associated with therapy outcome. An ESR of more than 50 mm/h was associated with therapy failure. A reduced ESR might indicate a shorter period of infection, and might be an anticipative of an increased success. 
These markers (CRP and ESR) can support the diagnosis of prosthetic joint infection (10), although an increased CRP was predictive of failure (4). During revision, Gentamicin beads were used to fill the dead space after arthroplasty removal (8). Gentamicin sponges have also been used in the management of prosthetic joint infection (11). We recorded an increased success rate for sponges and a reduced success rate for beads. The collagen-Gentamicin sponges used are biodegradable, whereas the beads are not. Sponges attain higher local antibiotic concentrations than beads. Beads are foreign bodies and can sustain infection. This study was small, and beads were used in severe infections. Coagulase-negative Staphylococcus infection was linked with therapy failure. Staphylococcal or S. aureus infection carried a higher risk of failure (5) due to the ability of the species to produce an early biofilm. The virulence of coagulase-negative Staphylococcus is low, which may delay the confirmation of infection. Management of streptococcal infections had an increased success rate when performed within 5 days of symptom onset. This increased success rate is attributable to the short duration of symptoms. An association between streptococcal infections and good outcome has been reported (12). Three of 70 subjects experienced a polymicrobial infection but remained infection-free afterwards. Reported rates of polymicrobial infection range between 5 and 10% (13, 14), with higher rates of multi-organism prosthetic joint infection of 19-39% (6). A higher failure rate for polymicrobial infection has been recorded, with a hazard ratio of 1.8 (4). Not exchanging the polyethylene insert in TKR, or the polyethylene inlay and femoral head in THR, was an independent risk factor for failure. Modular exchangeable parts should always be exchanged. The methods of irrigation, debridement, and culture were not identical in all subjects. There was possible bias in the use of Gentamicin beads and sponges. DAIR should be performed in the acute phase within one month of surgery, or in acute late haematogenous infections of TKA within ten days of onset. The debridement should be open rather than arthroscopic, and the modular parts should be exchanged. Postoperative antibiotics should be administered for at least 3 months and followed up with clinical inflammatory markers. Immunocompromised patients, MRSA infection, failure of one DAIR, and poor status of the surrounding soft tissues usually end in one- or two-stage revision arthroplasty. During lavage, there was a tendency toward using chlorhexidine gluconate 0.05%. Chlorhexidine gluconate was most efficient in decreasing bacterial growth and hence was superior to jet saline lavage. CONCLUSION: Multiple factors were associated with DAIR failure: rheumatoid arthritis, symptom duration of more than 5 days, delayed infection (more than 1.5 years after arthroplasty), ESR of more than 50 mm/h, and coagulase-negative Staphylococcus. If one or more of these factors are present in a patient, the chances of successful DAIR therapy are reduced. Gentamicin sponges had outcomes similar to those of beads.
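As a supplementary check of the arithmetic above (46 of 70 subjects treated successfully), the following sketch recomputes the success rate and attaches a 95% Wilson confidence interval; the interval is our illustration, not a figure reported in the study.

```python
# Recompute the 65.7% success rate (46/70) and attach an illustrative
# 95% Wilson confidence interval (not reported in the study itself).
from statsmodels.stats.proportion import proportion_confint

successes, n = 46, 70
rate = successes / n
low, high = proportion_confint(successes, n, alpha=0.05, method="wilson")
print(f"success rate = {rate:.1%} (95% CI {low:.1%}-{high:.1%})")
# -> success rate = 65.7% (95% CI 54.0%-75.8%)
```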
Background: Prosthetic joint infections are estimated to occur in 1-2% of primary total joint arthroplasties. Debridement, antibiotics, irrigation and retention of prosthesis (DAIR) is the traditional treatment for acute prosthetic joint infections. Methods: Our prospective, double-blind, randomized investigation included 70 subjects of both sexes, aged 63-72 years, who were managed with debridement, antibiotics, irrigation and retention of prosthesis for acute prosthetic joint infections of total hip or total knee arthroplasties at Prince Hashim military hospital and Queen Alia military hospital, Jordan, during the period October 2017-October 2020. The observation period was 3 years. Therapy success was defined as absence of infection after 3 years, retention of the prosthesis, and no further antibiotic therapy. Prosthetic joint infection was defined based on one or more of: a) growth of the same microorganism in a minimum of 2 cultures; b) one positive culture and purulent synovial fluid at debridement; c) negative cultures and a minimum of 2 criteria including purulent synovial fluid at debridement. A successful outcome was defined as no clinical or laboratory evidence of infection (serum C-reactive protein less than 10 mg/L) at 3 years. Subjects on chronic suppressive antibiotics or with prosthesis removal were considered therapy failures. Parameters that differed significantly between the success and failure groups were investigated with logistic regression. P values less than 0.05 were considered statistically significant. Results: A total of 46 subjects (65.7%) had no infection during the observation period. Factors correlated with therapy failure were: a history of rheumatoid arthritis, delayed infection (more than 1.5 years after arthroplasty), an ESR at presentation of more than 50 mm/h, and infection caused by coagulase-negative Staphylococcus. A symptom duration of less than 5 days was associated with a better outcome. The use of Gentamicin sponges was significantly more frequent in the success group and the use of beads more frequent in the failure group in the univariate analysis, but not in the logistic regression. Fewer surgical interventions were needed in the group managed with sponges than in the group managed with beads. Prosthetic joint infection caused by coagulase-negative Staphylococcus was associated with a lower success rate, and streptococcal infections were associated with a higher success rate. Conclusions: Rheumatoid arthritis, a symptom duration of more than 5 days, an ESR of more than 50 mm/h, delayed infection (more than 1.5 years after the index arthroplasty), and coagulase-negative Staphylococcus infections reduce the rate of successful debridement, antibiotics, irrigation and retention of prosthesis therapy.
BACKGROUND: Prosthetic joint infections are estimated to occur in 1-2% of initial total hip arthroplasties and total knee arthroplasties (1). Infected prosthetic joints are unresponsive to systemic antibiotic therapy alone because of the poor vascular supply and biofilm production. Acute prosthetic joint infections are divided into three groups according to the duration of symptoms and the postoperative period: a) early postoperative: symptoms less than 4 weeks postoperatively; b) delayed chronic: gradual onset of symptoms; and c) acute hematogenous: acute onset in a previously well-functioning prosthesis (2); or, alternatively, early (3 months), late/low-grade (3-24 months), and delayed infection (more than 24 months) (3). Risk factors associated with acute prosthetic joint infections include rheumatoid arthritis, diabetes mellitus, malignancy, obesity, and immunosuppressant use (4). Revision surgery increases the risk of acute prosthetic joint infections. Factors associated with a worse outcome of acute prosthetic joint infection therapy are: infections caused by Staphylococcus spp. (5), and more so by Staphylococcus aureus (6); polymicrobial acute infections (4); intra-articular purulence (5); retention of exchangeable elements (4); and a longer interval between the index arthroplasty and the confirmation of infection (4). Most acute prosthetic joint infections are caused by coagulase-negative Staphylococcus (30-41%) and S. aureus (12-47%). Streptococcus spp. and Enterococcus spp. are found in approximately 10% of cases, and gram-negative bacteria such as Escherichia coli in less than 5% (7). Polymicrobial infections account for 5-39% of cases (6). Debridement, antibiotics, irrigation and retention of the prosthesis (DAIR) is used for acute infections without risk factors such as remarkable co-morbidities. DAIR has a reported success rate between 14% and 100% (8). Success of more than 70% can be expected when cases with a short duration of symptoms (less than 3-4 weeks), a stable implant, and healthy soft tissue around the prosthesis are selected (9). In chronic infections, implant retention is rarely successful. Local antibiotics with aminoglycosides in beads or sponges achieve increased local concentrations without toxic serum levels. Beads have a more prolonged release than sponges but without increased concentrations, and act as foreign bodies to which bacteria can adhere. CONCLUSION: Multiple factors were associated with DAIR failure: rheumatoid arthritis, symptom duration of more than 5 days, delayed infection (more than 1.5 years after arthroplasty), ESR of more than 50 mm/h, and coagulase-negative Staphylococcus. If one or more of these factors are present in a patient, the chances of successful DAIR therapy are reduced. Gentamicin sponges had outcomes similar to those of beads.
2,422
482
[ 457, 40, 349, 888, 76 ]
6
[ "infection", "joint", "infections", "prosthetic", "period", "success", "prosthetic joint", "subjects", "failure", "therapy" ]
[ "joint infections caused", "joint infections estimated", "arthritis delayed infection", "arthroplasties infected prosthetic", "infection acute prosthetic" ]
[CONTENT] Debridement | antibiotics | irrigation | retention | Prosthetic joint infections [SUMMARY]
[CONTENT] Aged | Anti-Bacterial Agents | Debridement | Female | Humans | Male | Middle Aged | Prospective Studies | Prostheses and Implants | Prosthesis-Related Infections | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] joint infections caused | joint infections estimated | arthritis delayed infection | arthroplasties infected prosthetic | infection acute prosthetic [SUMMARY]
[CONTENT] infection | joint | infections | prosthetic | period | success | prosthetic joint | subjects | failure | therapy [SUMMARY]
[CONTENT] infections | acute | prosthetic | joint infections | prosthetic joint infections | acute prosthetic joint infections | acute prosthetic joint | acute prosthetic | prosthetic joint | joint [SUMMARY]
[CONTENT] debridement | joint | minimum | continuous | parameters | hospital | inflammation | years | removal | prosthesis [SUMMARY]
[CONTENT] factors | gentamicin sponges outcome similar | clinical picture days | similar beads | chances successful dair therapy | chances successful dair | chances successful | chances | dair therapy | dair therapy reduces [SUMMARY]
[CONTENT] joint | infection | infections | subjects | prosthetic | prosthetic joint | success | acute | failure | period [SUMMARY]
[CONTENT] 1-2% ||| DAIR [SUMMARY]
[CONTENT] 70 | 63-72 years | arthroplasty acute prosthetic joint infections | Queen Alia | Jordan | the period | 2017-October 2020 ||| 3 years ||| 3 years ||| one | 2 | one | 2 ||| less than 10 | 3 years ||| ||| ||| less than 0.05 [SUMMARY]
[CONTENT] Rheumatoid | more than 5 days | ESR | more than 50 mm | more than 1.5 years | Staphylococcus [SUMMARY]
[CONTENT] 1-2% ||| DAIR ||| 70 | 63-72 years | arthroplasty acute prosthetic joint infections | Queen Alia | Jordan | the period | 2017-October 2020 ||| 3 years ||| 3 years ||| one | 2 | one | 2 ||| less than 10 | 3 years ||| ||| ||| less than 0.05 ||| 46 | 65.7% ||| Rheumatoid Arthritis | more than 1.5 years | ESR | more than 50 mm | Staphylococcus ||| less than 5 days ||| ||| ||| Staphylococcus ||| Rheumatoid | more than 5 days | ESR | more than 50 mm | more than 1.5 years | Staphylococcus [SUMMARY]
Management of Complex Jugular Paragangliomas: Surgical Resection and Outcomes.
36349670
This study aimed to review tumor control and cranial nerve function outcomes in patients with complex jugular paragangliomas and to refine surgical strategies for these tumors.
BACKGROUND
We describe our experience with 12 patients with complex jugular paragangliomas diagnosed at our institution from January 2013 to June 2020. The main outcomes included tumor control, complications, and postoperative function of the facial nerve and lower cranial nerves.
METHODS
Gross-total resection was achieved for 9 (75%) patients, and subtotal resection was achieved for 3 (25%) patients. The surgical tumor control rate was 100% after a mean follow-up of 45.5 months (range, 13-111 months). Postoperatively, 10 patients (83.3%) obtained unchanged or improved facial nerve function. However, new lower cranial nerve deficits occurred in 2 patients (16.7%) due to surgical removal of the concurrent vagal paraganglioma and scar tissue enclosing the IX and XII nerves.
RESULTS
Our refined surgical techniques, including tension-free anterior facial nerve rerouting and the sigmoid sinus tunnel-packing and push-packing techniques, can be a choice for the treatment of complex jugular paragangliomas to achieve tumor control and cranial nerve preservation. A 2-stage surgery should be applied to minimize the risk of bilateral cranial neuropathies and the influence on cerebral circulation in patients with bilateral paragangliomas. Preoperative endovascular intervention, such as coil embolization or internal carotid artery stenting, can be employed for the management of paragangliomas with internal carotid artery-associated lesions.
CONCLUSION
[ "Humans", "Carotid Stenosis", "Treatment Outcome", "Retrospective Studies", "Stents", "Glomus Jugulare Tumor", "Paraganglioma" ]
9682854
Introduction
Jugular paragangliomas (JPs) are the most common primary benign tumors of the jugular foramen region; they are aggressive lesions that can infiltrate surrounding bony structures, blood vessels, the posterior fossa, cranial nerves, and even the intracranial cavity.1 The management of JPs is challenging because the tumors are always hypervascular and intimately related to the internal carotid artery (ICA), lower cranial nerves (LCNs), and inferior petrous sinus. With a better understanding of the natural history of JPs, “wait and scan,” surgery, and radiotherapy have been applied as primary treatment modalities for JPs.2-6 However, surgery still plays a crucial role in offering a chance of disease-free survival for patients with JPs. Complex jugular paragangliomas (CJPs) have been defined as fulfilling one or more of the following criteria7,8: (1) very large size; (2) great intradural extension; (3) extension to the cavernous sinus, foramen magnum, and clivus; (4) significant involvement (encasement and stenosis) of the ICA; (5) a single ICA on the lesion side; (6) involvement of the vertebral artery; (7) a dominant or unilateral sigmoid sinus on the lesion side; (8) bilateral or multiple paragangliomas; and (9) recurrence after previous surgery. Patients with CJPs pose an extreme challenge to skull base surgeons, and few studies have focused on the surgical management of CJPs. In this study, we aimed to review the surgical strategies, tumor control, complications, postoperative facial nerve (FN) and LCNs damage, and functional recovery in our unique series of CJP patients.
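To illustrate the "one or more of nine criteria" definition above, here is a small hedged sketch; the field names are hypothetical, chosen only to mirror the criteria listed in the text, and this is not code from the study.

```python
# Illustrative predicate for the CJP definition above: a jugular
# paraganglioma is "complex" if it fulfils one or more of the nine
# criteria (7,8). Field names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class JugularParaganglioma:
    very_large_size: bool = False
    great_intradural_extension: bool = False
    cavernous_sinus_foramen_magnum_clivus_extension: bool = False
    significant_ica_involvement: bool = False
    single_ica_on_lesion_side: bool = False
    vertebral_artery_involvement: bool = False
    dominant_or_unilateral_sigmoid_sinus: bool = False
    bilateral_or_multiple_paragangliomas: bool = False
    recurrence_after_previous_surgery: bool = False

    def is_complex(self) -> bool:
        """True if one or more of the nine criteria is fulfilled."""
        return any(vars(self).values())

# Example: a recurrent tumor encasing the ICA qualifies as a CJP.
print(JugularParaganglioma(significant_ica_involvement=True,
                           recurrence_after_previous_surgery=True).is_complex())
```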
Methods
Design and Participants
A retrospective clinical database was queried to identify 12 patients who met the criteria for CJPs from January 2013 to June 2020. All patients underwent surgery with histopathological confirmation of the diagnosis. This study was approved by the ethics committee of Eye, Ear, Nose, and Throat Hospital, Fudan University (No.2021048). All patients were informed of the risks and benefits of all available treatment modalities, including surgery, radiotherapy, and “wait and scan.” Informed consent forms were signed by all patients.
Preoperative and Intraoperative Evaluation
All patients underwent a preoperative otoscopic examination, assessment of hearing, FN, and LCNs function, trans-abdominal sonography, catecholamine secretion screening, enhanced temporal bone MRI, CT scanning, magnetic resonance angiography, and digital subtraction angiography (DSA). Temporal bone paragangliomas were graded according to Fisch’s classification.9 Facial nerve function was graded according to the House Brackmann (HB) grading system. Superselective endovascular embolization was routinely performed for all patients 2 days before surgery. Facial nerve monitoring was routinely conducted in patients with normal preoperative FN function. The tumor extension or growth patterns were defined based on preoperative radiological and intraoperative findings.
Details of Surgical Procedure
The infratemporal fossa approach (ITFA) was applied in the present study. If the perineurium of the FN was intact, we performed the tension-free anterior FN rerouting technique. The key point is to suture the parotid gland to the inferior temporal muscle to reduce the distance between the geniculate ganglion and the stylomastoid foramen of the FN, while the digastric muscle, FN, and parotid gland undergo anterior transposition.10 If the FN was infiltrated by the tumor and had to be sacrificed, grafting with the great auricular nerve was performed to reconstruct the FN. To control bleeding, we decreased tumor vascularity with bipolar coagulation cauterizing. After the jugular foramen was exposed, we separated the tumor from the ICA and then ligated the jugular vein. The sigmoid sinus was occluded with Surgicel (Ethicon, Somerville, NJ) and bone wax. We applied the sigmoid sinus tunnel-packing and push-packing techniques to control bleeding from the inferior petrous sinus.11 Intrabulbar dissection and preservation of the medial wall of the jugular bulb were used to preserve LCNs function, as long as the tumor had not penetrated the medial wall of the jugular bulb or infiltrated the LCNs.
Follow-Up
All patients underwent regular enhanced temporal bone MRI and clinical examination postoperatively, usually performed 3 months postoperatively and annually thereafter. The follow-up period was defined as the period extending from surgery to the most recent clinical visit or patient contact.
Results
Surgical Outcomes
All patient characteristics and tumor status are summarized in Table 1, and a descriptive analysis of patient demographics and clinical presentation of CJPs is summarized in Table 2. In the 12 patients, hearing deficit and pulsatile tinnitus were the most common symptoms. Facial nerve involvement was seen in 8 patients (4 patients with HB grade I and 8 patients with HB grade III-VI). Two patients with a previous surgery history presented with lower cranial nerve impairment preoperatively. Gross total tumor resection was achieved for 9 patients (75%), and subtotal resection was achieved for 3 patients (25%). Multiple paragangliomas on the ipsilateral side were removed in a single stage in 3 patients. A 2-stage resection was conducted in 5 patients with bilateral lesions. The average intraoperative blood loss was 1001 mL. Case 8, who lost 2500 mL of blood during the operation, suffered from mild hemiparesis (muscle strength grade 4) as a result of a postoperative lacunar cerebral infarction but recovered (muscle strength grade 5) 2 months after surgery. There was no mortality, and all patients were discharged from the hospital 7-10 days postoperatively. Pulsatile tinnitus and otalgia resolved in all patients. Tumor control of 100% was achieved at a mean follow-up of 45.5 months (range, 13-111 months). The details of surgical outcomes are summarized in Table 3.
Facial Nerve and Lower Cranial Nerves Function
Postoperatively, 10 patients (83.3%) obtained unchanged or improved FN function. The preoperative FN function was H-B grade I in 4 patients and H-B grade VI in 5 patients; except for 1 patient who deteriorated from H-B grade I to grade III and 1 patient who improved from H-B grade VI to grade III, the postoperative FN function remained stable in these patients at the last follow-up. The remaining 3 patients showed H-B grade III-V function before surgery, and improvement from grade V to grade IV was shown in 1 patient. Intrabulbar dissection and preservation of the medial wall of the jugular bulb were applied in 10 patients, excepting the 2 patients with a previous surgery history. Two patients (cases 2 and 10) experienced newly developed LCNs deficits postoperatively. Case 2 suffered vocal cord paralysis after surgical removal of the vagal paraganglioma (VP) and JP. In case 10, the IX and XII nerves had to be removed due to encapsulating scar tissue from previous surgery.
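Since the outcomes above are compared on the House-Brackmann scale, where lower grades mean better function, a small reference mapping and outcome classifier can help when auditing such tables; this is a generic illustration, not code from the study.

```python
# Standard House-Brackmann facial nerve grades (lower = better function);
# a generic reference helper, not study-specific code.
HOUSE_BRACKMANN = {
    1: "Normal function",
    2: "Mild dysfunction",
    3: "Moderate dysfunction",
    4: "Moderately severe dysfunction",
    5: "Severe dysfunction",
    6: "Total paralysis",
}

def fn_outcome(pre: int, post: int) -> str:
    """Classify a pre/post pair of H-B grades."""
    if post < pre:
        return "improved"
    return "unchanged" if post == pre else "deteriorated"

# Example from the series above: improvement from H-B VI to H-B III.
print(fn_outcome(6, 3))  # -> improved
```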
Illustrative Cases
Ipsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor
A 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. Removal of the VP inevitably results in vocal cord paralysis postoperatively. All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.
Jugular Paraganglioma with Abundant Feeding Arteries
A 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient had undergone surgery to remove an ipsilateral CBT 18 years earlier, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, because the ICA was encased by the tumor and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss; fortunately, the hemiparesis resolved 2 months after surgery. The LCNs were preserved, and no cerebrospinal fluid leakage developed. This case showed the complex vascular conditions that can develop after a previous intervention with the adverse outcome of a ligated external CA.
Jugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm
A 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years earlier and FN paralysis (H-B III) 2 years earlier. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulation for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.
[ "Main Points", "Design and Participants", "Preoperative and Intraoperative Evaluation", "Details of Surgical Procedure", "Follow-Up", "Surgical Outcomes", "Facial Nerve and Lower Cranial Nerves Function", "Illustrative Cases", "Ipsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor", "Jugular Paraganglioma with Abundant Feeding Arteries", "Jugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm", "Clinical Characteristics", "Surgical Approaches", "Multiple Paragangliomas", "Vascular Considerations" ]
[ "Attempts have been made to achieve tumor control and prevent cranial nerve damage or dysfunction or other complications, postoperatively.\nThe infratemporal fossa approach with our unique refined surgical techniques including tension-free anterior facial nerve rerouting, sigmoid sinus tunnel-packing, and push-packing techniques can be a considerable treatment option for complex jugular paragangliomas.\nFor patients with bilateral paragangliomas, a 2-stage surgery should be applied to minimize the risk of bilateral cranial neuropathies and the impact on cerebral circulation.\nEndovascular intervention such as coil embolization or internal carotid artery (ICA) stenting can be employed for the management of paragangliomas with ICA-associated lesions.\nIn case ICA has to be a permanent occlusion, the surgeon should prudentially balance blood pressure, blood loss, and tumor removal to prevent cerebrovascular accident intraoperatively.", "A retrospective clinical database was queried to identify 12 patients who met the criteria for CJPs from January 2013 to June 2020. All patients underwent surgery with histopathological confirmation of the diagnosis. This study was approved by the ethics committee of Eye, Ear, Nose, and Throat Hospital, Fudan University (No.2021048). All patients were informed of the risks and benefits of all available treatment modalities, including surgery, radiotherapy, and “wait and scan.” Informed consent forms were signed by all patients.", "All patients underwent a preoperative otoscopic examination, assessment of hearing, FN, and LCNs function, trans-abdominal sonography, catecholamine secretion screening, enhanced temporal bone MRI, CT scanning, magnetic resonance angiography, and digital subtraction angiography (DSA). Temporal bone paragangliomas were graded according to Fisch’s classification.9 Facial nerve function was graded according to the House Brackmann (HB) grading system. Superselective endovascular embolization was routinely performed for all patients 2 days before surgery. Facial nerve monitoring was routinely conducted in patients with normal preoperative FN function. The tumor extension or growth patterns were defined based on preoperative radiological and intraoperative findings.", "Infratemporal fossa approach (ITFA) was applied in the present study. If the perineurium of the FN was intact, we performed the tension-free anterior FN rerouting technique. The key point is to suture the parotid gland with the inferior temporal muscle to reduce the distance between the genicular ganglion to the stylomastoid foramen of the FN, while the digastric muscle, FN, and parotid gland underwent anterior transposition.10 If the FN was infiltrated by the tumor and had to be sacrificed, grafting with the great auricular nerve was performed to reconstruct FN.\nTo control bleeding, we decreased tumor vascularity with bipolar coagulation cauterizing. After the jugular foramen was exposed, we separated the tumor from the ICA and then ligated the jugular vein. The sigmoid sinus was occluded with Surgicel (Ethicon, Somerville, NJ) and bone wax. 
We applied sigmoid sinus tunnel-packing and push-packing techniques to control bleeding from the inferior petrous sinus.11 Intrabulbar dissection and preservation of the medial wall of the jugular bulb were used to preserve LCNs function, as long as the tumor had not penetrated the medial wall of the jugular bulb or infiltrated the LCNs.", "All patients underwent regular enhanced temporal bone MRI and clinical examination postoperatively, which was usually performed 3 months postoperatively and annually thereafter. The follow-up period was defined as the period extending from surgery to the most recent clinical visit or patient contact.", "All patient characteristics and tumor status is summarized in Table 1. Meanwhile, the descriptive analysis of patient demographics and clinical presentation of CJPs is summarized in Table 2. In 12 patients, hearing deficit and pulsatile tinnitus were the most common symptoms. Facial nerve involvement was seen in 8 patients (4 patients with HB grade I and 8 patients with HB grade III-VI). Two patients with previous surgery history presented with lower cranial nerve impairment preoperatively.\nGross total tumor resection was achieved for 9 patients (75%), and subtotal resection was achieved for 3 patients (25%). Multiple paragangliomas on the ipsilateral side were removed in a single stage in 3 patients. A 2-stage resection was conducted in 5 patients with bilateral lesions. The average intraoperative blood loss was 1001 mL. Case 8, who lost 2500 mL blood during operation, suffered from mild hemiparesis (muscle strength grade 4) as a result of a postoperative lacunar cerebral infarction but resolved (muscle strength grade 5) 2 months after surgery. There was no mortality, and all patients were discharged from the hospital 7-10 days postoperatively. Pulsatile tinnitus and otalgia resolved in all patients. Tumor control of 100% was achieved at a mean follow-up of 45.5 months (range, 13-111 months). The details of surgical outcomes are summarized in Table 3.", "Postoperatively, 10 patients (83.3%) obtained unchanged or improved FN function. The preoperative FN function was H-B grade I in 4 patients and H-B grade VI in 5 patients, except for 1 patient deteriorated to H-B grade III from grade I and 1 patient improved to H-B grade III from grade VI, but the postoperative FN function remained stable for other patients at the last follow-up. The remaining 3 patients showed H-B grade III-V function before surgery, and improvement was shown in 1 patient from grade V to grade IV.\nIntrabulbar dissection and preservation of the medial wall of the jugular bulb were applied to 10 patients, except for 2 patients with previous surgery history. Two patients (cases 2 and 10) experienced newly developed LCNs deficit postoperatively. Case 2 suffered vocal cord paralysis after surgical removal of the vagal paraganglioma (VP) and JP. Case 10 had to remove the IX and XII nerves due to encapsulated scar tissue from previous surgery.", "Ipsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor A 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. 
The FN function on the right side progressed from H-B I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new cranial nerve X (vagal) deficit developed, but FN function was maintained at H-B III after grafting with the great auricular nerve. Indeed, removal of a VP inevitably results in postoperative vocal cord paralysis. All efforts were made to preserve LCN function on at least one side and to protect the cerebral circulation. This case demonstrates that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.

Jugular Paraganglioma with Abundant Feeding Arteries

A 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical examination showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient had undergone surgery to remove an ipsilateral CBT 18 years earlier, during which his right external CA was ligated. Preoperative DSA showed that the right ICA, the vertebral artery, the thyrocervical trunk, and the contralateral external CA fed the JP. Because the ICA was the main feeding artery, its branches could not be embolized; furthermore, the ICA was encased by the tumor. As the balloon occlusion test was negative, the right ICA was permanently occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. Nevertheless, the operation presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the cerebral blood supply. Because of massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered mild hemiparesis from a postoperative lacunar cerebral infarction, which we attributed to the right ICA occlusion and extensive blood loss; fortunately, the hemiparesis resolved 2 months after surgery. The LCNs were preserved, and no cerebrospinal fluid leakage developed. This case shows that complex vascular conditions can develop after a previous intervention, in this instance the adverse outcome of a ligated external CA.

Jugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm

A 37-year-old woman (case 9) presented with a history of surgical resection of a right JP with deafness 11 years earlier and FN paralysis (H-B III) for 2 years. Temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was then removed in a planned operation after approximately half a year of oral anticoagulation. The FN was resected and grafted with the great auricular nerve; FN function remained H-B III 1 year after surgery, and LCN function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.
Discussion

Currently, the treatment modalities for patients with CJPs remain controversial. A number of studies of surgery and/or radiotherapy have reported clinical experience in the management of CJPs.7,12-14 Although complete surgical excision of JPs is technically possible, it has often been associated with a high risk of LCN injury. In this retrospective study, we assess the clinical presentation and discuss individualized surgical strategies to achieve tumor control and cranial nerve preservation in 12 patients with CJPs.

Clinical Characteristics

The typical clinical manifestations of JPs are pulsatile tinnitus, hearing loss, and a reddish mass behind the eardrum or in the external auditory canal.5,15 Our patients did not deviate from these typical presentations, but they showed a higher proportion of hearing loss (91.6%) and facial paralysis (66.7%). Giant tumors are rare and are usually considered inoperable or to carry a high risk of morbidity and mortality.7 However, there is no accepted standard for defining a “giant tumor.” In our series, a giant tumor was defined as one with a maximum diameter greater than 4 cm, which implies that at least the horizontal segment of the ICA was involved. Because VPs commonly arise from the superior vagal ganglion, differentiating VPs from JPs is difficult. Imaging features may help to distinguish between these 2 tumor types: on temporal bone MRI, the internal jugular vein was always pushed outward or inward by VPs (Figure 4 b1 and b2), whereas JPs always spread inferiorly within the internal jugular vein lumen (Figure 4 a1 and a2).

Surgical Approaches

Various surgical approaches are applied in clinical practice: the retrosigmoid approach, the far lateral approach or its variations, and the ITFA.16-18 The first 2 approaches are limited in controlling anterior lesions around the ICA; their advantage is the ability to expose tumors with intradural extension. We prefer the ITFA, which provides optimal control of the upper parapharyngeal space, the ICA, and the LCNs after FN anterior rerouting. Some surgeons criticize this approach for postoperative FN dysfunction; however, with the tension-free FN anterior rerouting technique, we were able to achieve good FN function.10

Multiple Paragangliomas

In our series, 66.7% (8/12) of patients presented with multiple paragangliomas and 41.7% with bilateral paragangliomas. The management of multiple paragangliomas is controversial, particularly when the bilateral LCNs or ICAs are involved. Van der Mey et al15 argued that a wait-and-scan policy should be considered for such patients. Al-Mefty et al7 indicated that ipsilateral tumors could be excised simultaneously and that, in bilateral disease, the opposite side should be treated surgically only if the first resection did not cause essential cranial nerve palsy. Moore et al19 recommended a staged approach to minimize the risk of bilateral cranial neuropathies and/or compromise of the cerebral circulation when surgery is considered. Other factors, such as prior treatment modalities, the patient's neurologic function and life expectancy, and swallowing and pulmonary function, should also be taken into account.19 We removed ipsilateral JPs and VPs or CBTs during the same operation. For bilateral tumors, our strategy was to resect the contralateral CBT first and the ipsilateral JP or VP in a second stage. If there was no LCN injury and the contralateral internal jugular vein remained intact, we proceeded with resection of the JP in the second stage; otherwise, conservative treatment was chosen.

Vascular Considerations

Our previous study showed that hemorrhage from the sigmoid sinus and inferior petrosal sinus can be effectively controlled by tunnel-packing and push-packing techniques.11 The ICA in its upper cervical and intratemporal portions is often involved in patients with paragangliomas.
Angiography is a useful tool to identify the feeding arteries, evaluate the collateral cerebral circulation, detect occult vascular lesions, and determine the management strategy for the ipsilateral ICA during surgery. Al-Mefty et al7 stated that a plane of dissection can be identified between the tumor and the ICA with the aid of a microscope. We noticed that scar tissue in the jugular foramen from previous surgery makes it extremely difficult to locate the ICA in the upper neck or to find a normal anatomic landmark; we therefore preferred to use the cochlea and the tympanic ostium of the Eustachian tube as landmarks to identify the vertical ICA. In most cases, the tumor could be separated from the ICA. Because these are benign tumors, we would not sacrifice the ICA even if a tiny piece of tumor had to be left behind. In our study, subtotal tumor removal was performed in 3 patients, either because extensive blood loss put them at risk of cerebral infarction after balloon occlusion of the ICA or because the tumor adhered to the vascular adventitia of the ICA. In these cases, we had to balance gross tumor removal against postoperative morbidity. These 3 patients showed no tumor growth over an average of 40.7 months of follow-up. Various procedures have been described for preoperative management of the ICA, including saphenous vein bypass grafting, permanent occlusion, and endovascular stenting.8,20,21 In our study, we used coil embolization or endovascular stenting to manage ICA-associated lesions, which prevented disastrous intraoperative incidents.

Conclusion

Our study demonstrates the feasibility of surgical resection and the possibility of preserving cranial nerve function in patients with CJPs. The weaknesses of the present study are obvious: its retrospective design, small sample size, and lack of statistical power. Given the low incidence of CJPs and the indolent course of these tumors, future multicenter studies should enroll more patients to validate our treatment and extend the duration of follow-up.
[ "Main Points", "Introduction", "Methods", "Design and Participants", "Preoperative and Intraoperative Evaluation", "Details of Surgical Procedure", "Follow-Up", "Results", "Surgical Outcomes", "Facial Nerve and Lower Cranial Nerves Function", "Illustrative Cases", "Ipsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor", "Jugular Paraganglioma with Abundant Feeding Arteries", "Jugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm", "Discussion", "Clinical Characteristics", "Surgical Approaches", "Multiple Paragangliomas", "Vascular Considerations", "Conclusion" ]
[ "Attempts have been made to achieve tumor control and prevent cranial nerve damage or dysfunction or other complications, postoperatively.\nThe infratemporal fossa approach with our unique refined surgical techniques including tension-free anterior facial nerve rerouting, sigmoid sinus tunnel-packing, and push-packing techniques can be a considerable treatment option for complex jugular paragangliomas.\nFor patients with bilateral paragangliomas, a 2-stage surgery should be applied to minimize the risk of bilateral cranial neuropathies and the impact on cerebral circulation.\nEndovascular intervention such as coil embolization or internal carotid artery (ICA) stenting can be employed for the management of paragangliomas with ICA-associated lesions.\nIn case ICA has to be a permanent occlusion, the surgeon should prudentially balance blood pressure, blood loss, and tumor removal to prevent cerebrovascular accident intraoperatively.", "Jugular paragangliomas (JPs) are the most common primary benign tumors of the jugular foramen region, which are aggressive lesions that can infiltrate surrounding bony structures, blood vessels, posterior fossa, cranial nerves, and even the intracranial cavity.1 The management of JPs is challenging because the tumors are always hypervascular and intimated with the internal carotid artery (ICA), lower cranial nerves (LCNs), and inferior petrous sinus. With a better understanding of the natural history of JPs, “wait and scan,” surgery, and radiotherapy have been applied as primary treatment modalities for JPs.2-6 However, surgery also plays a crucial role to offer a chance of disease-free survival for patients with JPs. Complex jugular paragangliomas (CJPs) have been defined as fulfilling one or more of the following criteria7,8: (1) very large size; (2) great intradural extension; (3) extension to the cavernous sinus foramen magnum and clivus; (4) significant involvement (encasement and stenosis) of ICA; (5) single ICA on the lesion side; (6) involvement of the vertebral artery; (7) dominant or unilateral sigmoid sinus on the lesion side; (8) bilateral or multiple paragangliomas; and (9) recurrence after previous surgery. Patients with CJPs pose an extreme challenge to skull base surgeons, and there are few studies focused on the surgical management of CJPs. In this study, we aimed to review surgical strategies, tumor control, complications, postoperative facial nerve (FN) and LCNs damage, and functional recovery in our unique series of CJPs patients.", "Design and Participants A retrospective clinical database was queried to identify 12 patients who met the criteria for CJPs from January 2013 to June 2020. All patients underwent surgery with histopathological confirmation of the diagnosis. This study was approved by the ethics committee of Eye, Ear, Nose, and Throat Hospital, Fudan University (No.2021048). All patients were informed of the risks and benefits of all available treatment modalities, including surgery, radiotherapy, and “wait and scan.” Informed consent forms were signed by all patients.\nA retrospective clinical database was queried to identify 12 patients who met the criteria for CJPs from January 2013 to June 2020. All patients underwent surgery with histopathological confirmation of the diagnosis. This study was approved by the ethics committee of Eye, Ear, Nose, and Throat Hospital, Fudan University (No.2021048). 
All patients were informed of the risks and benefits of all available treatment modalities, including surgery, radiotherapy, and “wait and scan.” Informed consent forms were signed by all patients.\nPreoperative and Intraoperative Evaluation All patients underwent a preoperative otoscopic examination, assessment of hearing, FN, and LCNs function, trans-abdominal sonography, catecholamine secretion screening, enhanced temporal bone MRI, CT scanning, magnetic resonance angiography, and digital subtraction angiography (DSA). Temporal bone paragangliomas were graded according to Fisch’s classification.9 Facial nerve function was graded according to the House Brackmann (HB) grading system. Superselective endovascular embolization was routinely performed for all patients 2 days before surgery. Facial nerve monitoring was routinely conducted in patients with normal preoperative FN function. The tumor extension or growth patterns were defined based on preoperative radiological and intraoperative findings.\nAll patients underwent a preoperative otoscopic examination, assessment of hearing, FN, and LCNs function, trans-abdominal sonography, catecholamine secretion screening, enhanced temporal bone MRI, CT scanning, magnetic resonance angiography, and digital subtraction angiography (DSA). Temporal bone paragangliomas were graded according to Fisch’s classification.9 Facial nerve function was graded according to the House Brackmann (HB) grading system. Superselective endovascular embolization was routinely performed for all patients 2 days before surgery. Facial nerve monitoring was routinely conducted in patients with normal preoperative FN function. The tumor extension or growth patterns were defined based on preoperative radiological and intraoperative findings.\nDetails of Surgical Procedure Infratemporal fossa approach (ITFA) was applied in the present study. If the perineurium of the FN was intact, we performed the tension-free anterior FN rerouting technique. The key point is to suture the parotid gland with the inferior temporal muscle to reduce the distance between the genicular ganglion to the stylomastoid foramen of the FN, while the digastric muscle, FN, and parotid gland underwent anterior transposition.10 If the FN was infiltrated by the tumor and had to be sacrificed, grafting with the great auricular nerve was performed to reconstruct FN.\nTo control bleeding, we decreased tumor vascularity with bipolar coagulation cauterizing. After the jugular foramen was exposed, we separated the tumor from the ICA and then ligated the jugular vein. The sigmoid sinus was occluded with Surgicel (Ethicon, Somerville, NJ) and bone wax. We applied sigmoid sinus tunnel-packing and push-packing techniques to control bleeding from the inferior petrous sinus.11 Intrabulbar dissection and preservation of the medial wall of the jugular bulb were used to preserve LCNs function, as long as the tumor had not penetrated the medial wall of the jugular bulb or infiltrated the LCNs.\nInfratemporal fossa approach (ITFA) was applied in the present study. If the perineurium of the FN was intact, we performed the tension-free anterior FN rerouting technique. 
The key point is to suture the parotid gland with the inferior temporal muscle to reduce the distance between the genicular ganglion to the stylomastoid foramen of the FN, while the digastric muscle, FN, and parotid gland underwent anterior transposition.10 If the FN was infiltrated by the tumor and had to be sacrificed, grafting with the great auricular nerve was performed to reconstruct FN.\nTo control bleeding, we decreased tumor vascularity with bipolar coagulation cauterizing. After the jugular foramen was exposed, we separated the tumor from the ICA and then ligated the jugular vein. The sigmoid sinus was occluded with Surgicel (Ethicon, Somerville, NJ) and bone wax. We applied sigmoid sinus tunnel-packing and push-packing techniques to control bleeding from the inferior petrous sinus.11 Intrabulbar dissection and preservation of the medial wall of the jugular bulb were used to preserve LCNs function, as long as the tumor had not penetrated the medial wall of the jugular bulb or infiltrated the LCNs.\nFollow-Up All patients underwent regular enhanced temporal bone MRI and clinical examination postoperatively, which was usually performed 3 months postoperatively and annually thereafter. The follow-up period was defined as the period extending from surgery to the most recent clinical visit or patient contact.\nAll patients underwent regular enhanced temporal bone MRI and clinical examination postoperatively, which was usually performed 3 months postoperatively and annually thereafter. The follow-up period was defined as the period extending from surgery to the most recent clinical visit or patient contact.", "A retrospective clinical database was queried to identify 12 patients who met the criteria for CJPs from January 2013 to June 2020. All patients underwent surgery with histopathological confirmation of the diagnosis. This study was approved by the ethics committee of Eye, Ear, Nose, and Throat Hospital, Fudan University (No.2021048). All patients were informed of the risks and benefits of all available treatment modalities, including surgery, radiotherapy, and “wait and scan.” Informed consent forms were signed by all patients.", "All patients underwent a preoperative otoscopic examination, assessment of hearing, FN, and LCNs function, trans-abdominal sonography, catecholamine secretion screening, enhanced temporal bone MRI, CT scanning, magnetic resonance angiography, and digital subtraction angiography (DSA). Temporal bone paragangliomas were graded according to Fisch’s classification.9 Facial nerve function was graded according to the House Brackmann (HB) grading system. Superselective endovascular embolization was routinely performed for all patients 2 days before surgery. Facial nerve monitoring was routinely conducted in patients with normal preoperative FN function. The tumor extension or growth patterns were defined based on preoperative radiological and intraoperative findings.", "Infratemporal fossa approach (ITFA) was applied in the present study. If the perineurium of the FN was intact, we performed the tension-free anterior FN rerouting technique. 
The key point is to suture the parotid gland with the inferior temporal muscle to reduce the distance between the genicular ganglion to the stylomastoid foramen of the FN, while the digastric muscle, FN, and parotid gland underwent anterior transposition.10 If the FN was infiltrated by the tumor and had to be sacrificed, grafting with the great auricular nerve was performed to reconstruct FN.\nTo control bleeding, we decreased tumor vascularity with bipolar coagulation cauterizing. After the jugular foramen was exposed, we separated the tumor from the ICA and then ligated the jugular vein. The sigmoid sinus was occluded with Surgicel (Ethicon, Somerville, NJ) and bone wax. We applied sigmoid sinus tunnel-packing and push-packing techniques to control bleeding from the inferior petrous sinus.11 Intrabulbar dissection and preservation of the medial wall of the jugular bulb were used to preserve LCNs function, as long as the tumor had not penetrated the medial wall of the jugular bulb or infiltrated the LCNs.", "All patients underwent regular enhanced temporal bone MRI and clinical examination postoperatively, which was usually performed 3 months postoperatively and annually thereafter. The follow-up period was defined as the period extending from surgery to the most recent clinical visit or patient contact.", "Surgical Outcomes All patient characteristics and tumor status is summarized in Table 1. Meanwhile, the descriptive analysis of patient demographics and clinical presentation of CJPs is summarized in Table 2. In 12 patients, hearing deficit and pulsatile tinnitus were the most common symptoms. Facial nerve involvement was seen in 8 patients (4 patients with HB grade I and 8 patients with HB grade III-VI). Two patients with previous surgery history presented with lower cranial nerve impairment preoperatively.\nGross total tumor resection was achieved for 9 patients (75%), and subtotal resection was achieved for 3 patients (25%). Multiple paragangliomas on the ipsilateral side were removed in a single stage in 3 patients. A 2-stage resection was conducted in 5 patients with bilateral lesions. The average intraoperative blood loss was 1001 mL. Case 8, who lost 2500 mL blood during operation, suffered from mild hemiparesis (muscle strength grade 4) as a result of a postoperative lacunar cerebral infarction but resolved (muscle strength grade 5) 2 months after surgery. There was no mortality, and all patients were discharged from the hospital 7-10 days postoperatively. Pulsatile tinnitus and otalgia resolved in all patients. Tumor control of 100% was achieved at a mean follow-up of 45.5 months (range, 13-111 months). The details of surgical outcomes are summarized in Table 3.\nAll patient characteristics and tumor status is summarized in Table 1. Meanwhile, the descriptive analysis of patient demographics and clinical presentation of CJPs is summarized in Table 2. In 12 patients, hearing deficit and pulsatile tinnitus were the most common symptoms. Facial nerve involvement was seen in 8 patients (4 patients with HB grade I and 8 patients with HB grade III-VI). Two patients with previous surgery history presented with lower cranial nerve impairment preoperatively.\nGross total tumor resection was achieved for 9 patients (75%), and subtotal resection was achieved for 3 patients (25%). Multiple paragangliomas on the ipsilateral side were removed in a single stage in 3 patients. A 2-stage resection was conducted in 5 patients with bilateral lesions. 
The average intraoperative blood loss was 1001 mL. Case 8, who lost 2500 mL blood during operation, suffered from mild hemiparesis (muscle strength grade 4) as a result of a postoperative lacunar cerebral infarction but resolved (muscle strength grade 5) 2 months after surgery. There was no mortality, and all patients were discharged from the hospital 7-10 days postoperatively. Pulsatile tinnitus and otalgia resolved in all patients. Tumor control of 100% was achieved at a mean follow-up of 45.5 months (range, 13-111 months). The details of surgical outcomes are summarized in Table 3.\nFacial Nerve and Lower Cranial Nerves Function Postoperatively, 10 patients (83.3%) obtained unchanged or improved FN function. The preoperative FN function was H-B grade I in 4 patients and H-B grade VI in 5 patients, except for 1 patient deteriorated to H-B grade III from grade I and 1 patient improved to H-B grade III from grade VI, but the postoperative FN function remained stable for other patients at the last follow-up. The remaining 3 patients showed H-B grade III-V function before surgery, and improvement was shown in 1 patient from grade V to grade IV.\nIntrabulbar dissection and preservation of the medial wall of the jugular bulb were applied to 10 patients, except for 2 patients with previous surgery history. Two patients (cases 2 and 10) experienced newly developed LCNs deficit postoperatively. Case 2 suffered vocal cord paralysis after surgical removal of the vagal paraganglioma (VP) and JP. Case 10 had to remove the IX and XII nerves due to encapsulated scar tissue from previous surgery.\nPostoperatively, 10 patients (83.3%) obtained unchanged or improved FN function. The preoperative FN function was H-B grade I in 4 patients and H-B grade VI in 5 patients, except for 1 patient deteriorated to H-B grade III from grade I and 1 patient improved to H-B grade III from grade VI, but the postoperative FN function remained stable for other patients at the last follow-up. The remaining 3 patients showed H-B grade III-V function before surgery, and improvement was shown in 1 patient from grade V to grade IV.\nIntrabulbar dissection and preservation of the medial wall of the jugular bulb were applied to 10 patients, except for 2 patients with previous surgery history. Two patients (cases 2 and 10) experienced newly developed LCNs deficit postoperatively. Case 2 suffered vocal cord paralysis after surgical removal of the vagal paraganglioma (VP) and JP. Case 10 had to remove the IX and XII nerves due to encapsulated scar tissue from previous surgery.\nIllustrative Cases Ipsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor A 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. In addition, removal of the VP definitely results in vocal cord paralysis postoperatively. 
All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.\nA 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. In addition, removal of the VP definitely results in vocal cord paralysis postoperatively. All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.\nJugular Paraganglioma with Abundant Feeding Arteries A 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient underwent surgery to remove an ipsilateral CBT 18 years ago, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss. Fortunately, this patient resolved 2 months after surgery. The LCNs were preserved and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions developed after a previous intervention with an adverse outcome of a ligated external CA.\nA 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient underwent surgery to remove an ipsilateral CBT 18 years ago, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. 
Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss. Fortunately, this patient resolved 2 months after surgery. The LCNs were preserved and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions developed after a previous intervention with an adverse outcome of a ligated external CA.\nJugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm A 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years ago and FN paralysis (H-B III) 2 years ago. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulant for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.\nA 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years ago and FN paralysis (H-B III) 2 years ago. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulant for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.\nIpsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor A 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. 
The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. In addition, removal of the VP definitely results in vocal cord paralysis postoperatively. All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.\nA 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. In addition, removal of the VP definitely results in vocal cord paralysis postoperatively. All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.\nJugular Paraganglioma with Abundant Feeding Arteries A 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient underwent surgery to remove an ipsilateral CBT 18 years ago, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss. Fortunately, this patient resolved 2 months after surgery. The LCNs were preserved and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions developed after a previous intervention with an adverse outcome of a ligated external CA.\nA 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). 
The patient underwent surgery to remove an ipsilateral CBT 18 years ago, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss. Fortunately, this patient resolved 2 months after surgery. The LCNs were preserved and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions developed after a previous intervention with an adverse outcome of a ligated external CA.\nJugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm A 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years ago and FN paralysis (H-B III) 2 years ago. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulant for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.\nA 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years ago and FN paralysis (H-B III) 2 years ago. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulant for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.", "All patient characteristics and tumor status is summarized in Table 1. Meanwhile, the descriptive analysis of patient demographics and clinical presentation of CJPs is summarized in Table 2. 
In 12 patients, hearing deficit and pulsatile tinnitus were the most common symptoms. Facial nerve involvement was seen in 8 patients (4 patients with HB grade I and 8 patients with HB grade III-VI). Two patients with previous surgery history presented with lower cranial nerve impairment preoperatively.\nGross total tumor resection was achieved for 9 patients (75%), and subtotal resection was achieved for 3 patients (25%). Multiple paragangliomas on the ipsilateral side were removed in a single stage in 3 patients. A 2-stage resection was conducted in 5 patients with bilateral lesions. The average intraoperative blood loss was 1001 mL. Case 8, who lost 2500 mL blood during operation, suffered from mild hemiparesis (muscle strength grade 4) as a result of a postoperative lacunar cerebral infarction but resolved (muscle strength grade 5) 2 months after surgery. There was no mortality, and all patients were discharged from the hospital 7-10 days postoperatively. Pulsatile tinnitus and otalgia resolved in all patients. Tumor control of 100% was achieved at a mean follow-up of 45.5 months (range, 13-111 months). The details of surgical outcomes are summarized in Table 3.", "Postoperatively, 10 patients (83.3%) obtained unchanged or improved FN function. The preoperative FN function was H-B grade I in 4 patients and H-B grade VI in 5 patients, except for 1 patient deteriorated to H-B grade III from grade I and 1 patient improved to H-B grade III from grade VI, but the postoperative FN function remained stable for other patients at the last follow-up. The remaining 3 patients showed H-B grade III-V function before surgery, and improvement was shown in 1 patient from grade V to grade IV.\nIntrabulbar dissection and preservation of the medial wall of the jugular bulb were applied to 10 patients, except for 2 patients with previous surgery history. Two patients (cases 2 and 10) experienced newly developed LCNs deficit postoperatively. Case 2 suffered vocal cord paralysis after surgical removal of the vagal paraganglioma (VP) and JP. Case 10 had to remove the IX and XII nerves due to encapsulated scar tissue from previous surgery.", "Ipsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor A 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. In addition, removal of the VP definitely results in vocal cord paralysis postoperatively. All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.\nA 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. 
Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. In addition, removal of the VP definitely results in vocal cord paralysis postoperatively. All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.\nJugular Paraganglioma with Abundant Feeding Arteries A 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient underwent surgery to remove an ipsilateral CBT 18 years ago, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss. Fortunately, this patient resolved 2 months after surgery. The LCNs were preserved and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions developed after a previous intervention with an adverse outcome of a ligated external CA.\nA 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient underwent surgery to remove an ipsilateral CBT 18 years ago, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. 
However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss. Fortunately, this patient resolved 2 months after surgery. The LCNs were preserved and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions developed after a previous intervention with an adverse outcome of a ligated external CA.\nJugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm A 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years ago and FN paralysis (H-B III) 2 years ago. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulant for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.\nA 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years ago and FN paralysis (H-B III) 2 years ago. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulant for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.", "A 32-year-old woman (case 2) presented with symptoms of otalgia and severe mixed hearing loss. Physical exam showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed the presence of multiple paragangliomas: a JP and VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. The FN function on the right side progressed from HB I to III within 3 months before her second-stage surgery. After removing the VP and JP, a new X deficit developed, but the FN function was maintained at H-B III following grafting with the great auricular nerve. In addition, removal of the VP definitely results in vocal cord paralysis postoperatively. 
All efforts were made to preserve the LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.", "A 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical exam showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient underwent surgery to remove an ipsilateral CBT 18 years ago, and his right external CA was ligated intraoperatively. Preoperative DSA showed that the right ICA, vertebral artery, thyroid cervical trunk, and contralateral external CA fed the JP. Given that the ICA was the main feeding artery, the branches could not be embolized; furthermore, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. However, the surgical process presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the brain blood supply. Due to massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered from mild hemiparesis as a result of a postoperative lacunar cerebral infarction, which we considered associated with right ICA occlusion and extensive blood loss. Fortunately, this patient resolved 2 months after surgery. The LCNs were preserved and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions developed after a previous intervention with an adverse outcome of a ligated external CA.", "A 37-year-old female (case 9) presented with a history of surgical resection of a right JP and deafness 11 years ago and FN paralysis (H-B III) 2 years ago. The temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was removed by planned surgery after oral anticoagulant for approximately half a year. The FN was resected and grafted with the great auricular nerve to reconstruct the FN. The FN function remained HB III 1 year after surgery. The LCNs function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.", "Currently, the treatment modalities for patients with CJPs are still controversial. A number of studies including surgery and/or radiotherapy have demonstrated clinical experience in the management of CJPs.7,12-14 Although complete surgical excision of JPs is technically possible, it has often been associated with a high risk of LCN injuries. 
In this retrospective study, we assess the clinical symptoms and discuss individualized surgery strategies to achieve tumor control and cranial nerves preservation in 12 patients with CJPs.\nClinical Characteristics The typical clinical manifestations of JPs are pulsatile tinnitus, hearing loss, and a reddish mass behind the eardrum or in the external auditory canal.5,15 Our patients did not present with any significant variation with respect to typical clinical presentations, but a higher proportion of hearing loss (91.6%) and facial paralysis (66.7%) were shown. Giant tumors are rare and usually considered inoperable or have a high risk of morbidity and mortality.7 However, there is no clear standard for defining a “giant tumor.” In our series, a giant tumor was defined as having a maximum diameter greater than 4 cm, which means that at least the horizontal segment of the ICA was involved. Because the common location of VPs arises from the superior vagal ganglion, the differential diagnosis of VPs from JPs is difficult. Imaging features may help to distinguish between these 2 types. We noticed that the internal jugular vein was always pushed outward or inward by VPs (Figure 4 b1 and b2), but JPs always spread inferiorly within the internal jugular vein lumen on temporal bone MRI images (Figure 4 a1 and a2).\nThe typical clinical manifestations of JPs are pulsatile tinnitus, hearing loss, and a reddish mass behind the eardrum or in the external auditory canal.5,15 Our patients did not present with any significant variation with respect to typical clinical presentations, but a higher proportion of hearing loss (91.6%) and facial paralysis (66.7%) were shown. Giant tumors are rare and usually considered inoperable or have a high risk of morbidity and mortality.7 However, there is no clear standard for defining a “giant tumor.” In our series, a giant tumor was defined as having a maximum diameter greater than 4 cm, which means that at least the horizontal segment of the ICA was involved. Because the common location of VPs arises from the superior vagal ganglion, the differential diagnosis of VPs from JPs is difficult. Imaging features may help to distinguish between these 2 types. We noticed that the internal jugular vein was always pushed outward or inward by VPs (Figure 4 b1 and b2), but JPs always spread inferiorly within the internal jugular vein lumen on temporal bone MRI images (Figure 4 a1 and a2).\nSurgical Approaches Various surgical approaches are applied in clinical practice: the retro-sigmoid approach, the far lateral approach or its variations, and ITFA.16-18 The first 2 approaches have limitations in controlling anterior lesions around the ICA, and their advantages are able to expose tumors with intradural extensions. We prefer the ITFA which provides optimal control of the upper parapharyngeal space, the ICA, and the LCNs following FN anterior rerouting. However, some surgeons criticize the FN dysfunction postoperatively. With the application of the tension-free FN anterior rerouting technique, we were able to achieve good FN function.10\nVarious surgical approaches are applied in clinical practice: the retro-sigmoid approach, the far lateral approach or its variations, and ITFA.16-18 The first 2 approaches have limitations in controlling anterior lesions around the ICA, and their advantages are able to expose tumors with intradural extensions. 
\nMultiple Paragangliomas: In our series, approximately 66.7% (8/12) of patients presented with multiple paragangliomas and 41.7% (5/12) with bilateral paragangliomas. The management of multiple paragangliomas is controversial, particularly when the bilateral LCNs or ICA are involved. Van der Mey et al15 supported a wait-and-scan policy for such patients. Al-Mefty et al7 indicated that ipsilateral tumors could be excised simultaneously and that, in bilateral tumors, the opposite side should be treated surgically only if the first resection did not cause essential cranial nerve palsy. Moore et al19 recommended a staged fashion to minimize the risk of bilateral cranial neuropathies and/or the impact on cerebral circulation when surgery is considered. Other factors, such as prior treatment modalities, the patient’s neurologic function and life expectancy, and swallowing and pulmonary function, should also be taken into account.19 We removed an ipsilateral JP and VP or CBT during the same operation. For bilateral tumors, our strategy was to operate first on the contralateral CBT and to perform second-stage surgery for the ipsilateral JP or VP. If there was no LCN injury and the internal jugular vein was preserved intact contralaterally, we were more confident in resecting the JP in the second stage; otherwise, conservative treatment was chosen.\nVascular Considerations: Our previous study showed that hemorrhage from the sigmoid sinus and inferior petrosal sinus can be effectively controlled by tunnel-packing and push-packing techniques.11 The ICA in its upper cervical and intratemporal portions is often involved in patients with paragangliomas.
Angiography is a useful tool to identify the feeding arteries, evaluate the collateral cerebral circulation, detect occult vascular lesions, and determine the management strategy for the ipsilateral ICA during surgery. Al-Mefty et al7 stated that a plane of dissection can be identified between the tumor and the ICA with the aid of a microscope. We noticed that scar tissue in the jugular foramen from previous surgery makes it extremely difficult to expose the ICA in the upper neck or to find a normal anatomic landmark; we therefore preferred to use the cochlea and the tympanic ostium of the Eustachian tube as landmarks to identify the vertical ICA. In most cases, the tumor could be separated from the ICA. Because these are benign tumors, we would not sacrifice the ICA even if a tiny piece of tumor had to be left behind. In 3 patients, subtotal tumor removal was performed either because of extensive blood loss, which put the patients at risk of cerebral infarction after balloon occlusion of the ICA, or because the tumor adhered to the vascular adventitia of the ICA. In these cases, we had to strike a balance between gross tumor removal and postoperative morbidity. The 3 patients who underwent subtotal resection of CJPs showed no tumor growth over an average of 40.7 months of follow-up. Various procedures have been described for the preoperative management of the ICA, including saphenous vein bypass grafting, permanent occlusion, and endovascular stenting.8,20,21 In our study, we used coil embolization or endovascular stenting to manage ICA-associated lesions, which prevented disastrous intraoperative incidents.\nOur study demonstrates the feasibility of surgical resection and the possibility of preserving cranial nerve function in patients with CJPs. The weaknesses of the present study are obvious: its retrospective design, small sample size, and lack of statistical power. Given the low incidence of CJPs and the indolent course of these tumors, future multicenter studies should enroll more patients to validate our treatment and extend the duration of follow-up.
", "There remains no consensus regarding the best treatment modality for CJPs, particularly for younger patients and for patients who initially present with functional LCNs. With refined surgical techniques, including tension-free anterior FN rerouting and sigmoid sinus tunnel-packing and push-packing, our data suggest that surgery can achieve a low incidence of residual tumor, tumor control, and cranial nerve preservation.
For bilateral paragangliomas, 2-stage surgery should be applied to minimize the risk of bilateral cranial neuropathies and the impact on cerebral circulation. Appropriate preoperative endovascular intervention, such as coil embolization or internal carotid artery stenting, should be employed in the management of paragangliomas with ICA-associated lesions." ]
[ null, "intro", "methods", null, null, null, null, "results", null, null, null, null, null, null, "discussion", null, null, null, null, "H1" ]
[ "Jugular paraganglioma", "carotid body tumor", "vagal paraganglioma", "carotid artery", "infratemporal fossa approach" ]
Main Points: Attempts have been made to achieve tumor control and to prevent postoperative cranial nerve damage, dysfunction, and other complications. The infratemporal fossa approach with our refined surgical techniques, including tension-free anterior facial nerve rerouting and sigmoid sinus tunnel-packing and push-packing, can be a viable treatment option for complex jugular paragangliomas. For patients with bilateral paragangliomas, a 2-stage surgery should be applied to minimize the risk of bilateral cranial neuropathies and the impact on cerebral circulation. Endovascular intervention such as coil embolization or internal carotid artery (ICA) stenting can be employed for the management of paragangliomas with ICA-associated lesions. If the ICA has to be permanently occluded, the surgeon should prudently balance blood pressure, blood loss, and tumor removal to prevent an intraoperative cerebrovascular accident. Introduction: Jugular paragangliomas (JPs) are the most common primary benign tumors of the jugular foramen region; they are aggressive lesions that can infiltrate surrounding bony structures, blood vessels, the posterior fossa, cranial nerves, and even the intracranial cavity.1 The management of JPs is challenging because the tumors are always hypervascular and intimately associated with the internal carotid artery (ICA), lower cranial nerves (LCNs), and inferior petrosal sinus. With a better understanding of the natural history of JPs, “wait and scan,” surgery, and radiotherapy have been applied as primary treatment modalities.2-6 Nonetheless, surgery plays a crucial role in offering a chance of disease-free survival for patients with JPs. Complex jugular paragangliomas (CJPs) have been defined as fulfilling one or more of the following criteria7,8: (1) very large size; (2) great intradural extension; (3) extension to the cavernous sinus, foramen magnum, and clivus; (4) significant involvement (encasement and stenosis) of the ICA; (5) a single ICA on the lesion side; (6) involvement of the vertebral artery; (7) a dominant or unilateral sigmoid sinus on the lesion side; (8) bilateral or multiple paragangliomas; and (9) recurrence after previous surgery. Patients with CJPs pose an extreme challenge to skull base surgeons, and few studies have focused on the surgical management of CJPs. In this study, we aimed to review surgical strategies, tumor control, complications, postoperative facial nerve (FN) and LCN damage, and functional recovery in our series of CJP patients. Methods: Design and Participants: A retrospective clinical database was queried to identify 12 patients who met the criteria for CJPs from January 2013 to June 2020. All patients underwent surgery with histopathological confirmation of the diagnosis. This study was approved by the ethics committee of the Eye, Ear, Nose, and Throat Hospital, Fudan University (No. 2021048). All patients were informed of the risks and benefits of all available treatment modalities, including surgery, radiotherapy, and “wait and scan,” and all signed informed consent forms.
Preoperative and Intraoperative Evaluation: All patients underwent a preoperative otoscopic examination; assessment of hearing, FN, and LCN function; trans-abdominal sonography; catecholamine secretion screening; enhanced temporal bone MRI; CT scanning; magnetic resonance angiography; and digital subtraction angiography (DSA). Temporal bone paragangliomas were graded according to Fisch’s classification,9 and facial nerve function was graded according to the House-Brackmann (H-B) grading system. Superselective endovascular embolization was routinely performed for all patients 2 days before surgery. Facial nerve monitoring was routinely conducted in patients with normal preoperative FN function. Tumor extension and growth patterns were defined on the basis of preoperative radiological and intraoperative findings. Details of Surgical Procedure: The infratemporal fossa approach (ITFA) was applied in the present study. If the perineurium of the FN was intact, we performed the tension-free anterior FN rerouting technique.
The key point is to suture the parotid gland to the inferior temporal muscle to reduce the distance between the geniculate ganglion and the stylomastoid foramen while the digastric muscle, FN, and parotid gland undergo anterior transposition.10 If the FN was infiltrated by the tumor and had to be sacrificed, grafting with the great auricular nerve was performed to reconstruct the FN. To control bleeding, we decreased tumor vascularity with bipolar cautery. After the jugular foramen was exposed, we separated the tumor from the ICA and then ligated the jugular vein. The sigmoid sinus was occluded with Surgicel (Ethicon, Somerville, NJ) and bone wax, and we applied the sigmoid sinus tunnel-packing and push-packing techniques to control bleeding from the inferior petrosal sinus.11 Intrabulbar dissection and preservation of the medial wall of the jugular bulb were used to preserve LCN function, as long as the tumor had not penetrated the medial wall of the jugular bulb or infiltrated the LCNs. Follow-Up: All patients underwent regular enhanced temporal bone MRI and clinical examination postoperatively, usually performed 3 months after surgery and annually thereafter. The follow-up period was defined as the period extending from surgery to the most recent clinical visit or patient contact.
Results: Surgical Outcomes: All patient characteristics and tumor status are summarized in Table 1, and a descriptive analysis of patient demographics and the clinical presentation of CJPs is summarized in Table 2. In the 12 patients, hearing deficit and pulsatile tinnitus were the most common symptoms. Facial nerve involvement was seen in 8 patients; preoperatively, 4 patients had H-B grade I function and 8 had H-B grade III-VI. Two patients with a history of previous surgery presented with lower cranial nerve impairment preoperatively. Gross total tumor resection was achieved in 9 patients (75%) and subtotal resection in 3 patients (25%). Multiple paragangliomas on the ipsilateral side were removed in a single stage in 3 patients, and a 2-stage resection was conducted in 5 patients with bilateral lesions. The average intraoperative blood loss was 1001 mL. Case 8, who lost 2500 mL of blood during the operation, suffered mild hemiparesis (muscle strength grade 4) as a result of a postoperative lacunar cerebral infarction, which resolved (muscle strength grade 5) 2 months after surgery. There was no mortality, and all patients were discharged from the hospital 7-10 days postoperatively. Pulsatile tinnitus and otalgia resolved in all patients. Tumor control of 100% was achieved at a mean follow-up of 45.5 months (range, 13-111 months). The details of the surgical outcomes are summarized in Table 3.
Facial Nerve and Lower Cranial Nerves Function: Postoperatively, 10 patients (83.3%) had unchanged or improved FN function. Preoperative FN function was H-B grade I in 4 patients and H-B grade VI in 5 patients; except for 1 patient who deteriorated from grade I to grade III and 1 patient who improved from grade VI to grade III, postoperative FN function in these patients remained stable at the last follow-up. The remaining 3 patients showed H-B grade III-V function before surgery, and 1 of them improved from grade V to grade IV. Intrabulbar dissection and preservation of the medial wall of the jugular bulb were applied in 10 patients, the exceptions being 2 patients with a history of previous surgery. Two patients (cases 2 and 10) experienced a newly developed LCN deficit postoperatively: case 2 suffered vocal cord paralysis after surgical removal of the vagal paraganglioma (VP) and JP, and in case 10 the IX and XII nerves had to be removed because of encasing scar tissue from previous surgery. Illustrative Cases: Ipsilateral Vagal Paraganglioma + Jugular Paraganglioma and Contralateral Carotid Body Tumor: A 32-year-old woman (case 2) presented with otalgia and severe mixed hearing loss. Physical examination showed a reddish tumor behind the eardrum. Enhanced temporal bone MRI revealed multiple paragangliomas: a JP and a VP on the right side and a carotid body tumor (CBT) on the left side (Figure 1). We initially resected the left CBT in an attempt to preserve the LCNs, jugular vein, and CA in the first stage. FN function on the right side deteriorated from H-B grade I to III within the 3 months before her second-stage surgery. After removal of the VP and JP, a new vagal (X) nerve deficit developed, but FN function was maintained at H-B III after grafting with the great auricular nerve; removal of a VP inevitably results in postoperative vocal cord paralysis. All efforts were made to preserve LCN function on at least one side and the cerebral circulation. This case demonstrated that a 2-stage surgical strategy is applicable for patients with bilateral multiple paragangliomas.
Jugular Paraganglioma with Abundant Feeding Arteries: A 52-year-old man (case 8) complained of right pulsatile tinnitus, deafness, vertigo, and headache. Physical examination showed a red tumor behind the eardrum. The FN and LCN functions were normal preoperatively. A giant JP was detected on enhanced temporal bone MRI (Figure 2). The patient had undergone surgery to remove an ipsilateral CBT 18 years earlier, during which his right external CA was ligated. Preoperative DSA showed that the right ICA, vertebral artery, thyrocervical trunk, and contralateral external CA fed the JP. Because the ICA was the main feeding artery and its branches could not be embolized, the ICA was encased by the tumor, and the balloon occlusion test was negative, the right ICA was completely occluded preoperatively. The blood supply of the patient's right cerebral hemisphere was well compensated despite the ICA occlusion. Nevertheless, the operation presented a great challenge to the surgeon, who had to balance intraoperative blood pressure, bleeding, tumor removal, and maintenance of the cerebral blood supply. Because of massive intraoperative blood loss (approximately 2500 mL), a subtotal resection was performed. The patient suffered mild hemiparesis from a postoperative lacunar cerebral infarction, which we attributed to the right ICA occlusion and extensive blood loss; fortunately, the hemiparesis resolved 2 months after surgery. The LCNs were preserved, and no cerebrospinal fluid leakage developed. This case showed that complex vascular conditions can develop after a previous intervention, here the adverse outcome of a ligated external CA.
Jugular Paraganglioma Concurrent with Internal Carotid Artery Aneurysm: A 37-year-old woman (case 9) presented with a history of surgical resection of a right JP with deafness 11 years earlier and FN paralysis (H-B III) for 2 years. Temporal bone MRI and CT demonstrated a recurrent JP. Preoperative magnetic resonance angiography revealed mild dilatation at the junction of the horizontal and vertical segments of the intratemporal ICA, and an aneurysm was confirmed by DSA (Figure 3). The aneurysm was embolized with coils, and an endovascular stent was placed in the ICA preoperatively. The JP was then removed by planned surgery after approximately half a year of oral anticoagulation. The FN was resected and grafted with the great auricular nerve for reconstruction; FN function remained H-B III 1 year after surgery, and LCN function was intact. This case demonstrates that a meticulous preoperative review of temporal bone images can reveal hidden lesions and prevent disastrous complications.
Discussion: Currently, the treatment modalities for patients with CJPs remain controversial. A number of studies of surgery and/or radiotherapy have reported clinical experience in the management of CJPs.7,12-14 Although complete surgical excision of JPs is technically possible, it has often been associated with a high risk of LCN injuries. In this retrospective study, we assess the clinical symptoms and discuss individualized surgical strategies to achieve tumor control and cranial nerve preservation in 12 patients with CJPs.
Clinical Characteristics: The typical clinical manifestations of JPs are pulsatile tinnitus, hearing loss, and a reddish mass behind the eardrum or in the external auditory canal.5,15 Our patients did not present with any significant variation from these typical presentations, but higher proportions of hearing loss (91.6%) and facial paralysis (66.7%) were observed. Giant tumors are rare and are usually considered inoperable or to carry a high risk of morbidity and mortality.7 However, there is no clear standard for defining a "giant tumor." In our series, a giant tumor was defined as having a maximum diameter greater than 4 cm, which means that at least the horizontal segment of the ICA was involved. Because VPs commonly arise from the superior vagal ganglion, the differential diagnosis of VPs from JPs is difficult. Imaging features may help to distinguish between these 2 types. We noticed that the internal jugular vein was always pushed outward or inward by VPs (Figure 4 b1 and b2), whereas JPs always spread inferiorly within the internal jugular vein lumen on temporal bone MRI (Figure 4 a1 and a2). Surgical Approaches: Various surgical approaches are applied in clinical practice: the retrosigmoid approach, the far lateral approach or its variations, and the ITFA.16-18 The first 2 approaches have limitations in controlling anterior lesions around the ICA; their advantage is the ability to expose tumors with intradural extension. We prefer the ITFA, which provides optimal control of the upper parapharyngeal space, the ICA, and the LCNs following FN anterior rerouting. However, some surgeons criticize the postoperative FN dysfunction. With the application of the tension-free FN anterior rerouting technique, we were able to achieve good FN function.10
Multiple Paragangliomas: In our series, approximately 66.7% (8/12) of patients presented with multiple paragangliomas and 41.7% with bilateral paragangliomas. The management of multiple paragangliomas is controversial, particularly when bilateral LCNs or the ICA are involved. Van der Mey et al15 supported a wait-and-scan policy for such patients. Al-Mefty et al7 indicated that ipsilateral tumors could be excised simultaneously and that, in bilateral tumors, the opposite side should be treated surgically only if the first resection did not cause essential cranial nerve palsy. Moore et al19 recommended a staged approach to minimize the risk of bilateral cranial neuropathies and/or the impact on cerebral circulation when surgery is considered. Other factors such as prior treatment modalities, the patient's neurologic function, and life expectancy, as well as swallowing and pulmonary function, should also be taken into account.19 We removed ipsilateral JP and VP or CBT during the same operation. For bilateral tumors, our strategy was to operate first on the contralateral CBT and to resect the ipsilateral JP or VP in a second stage. If there was no LCN injury and the internal jugular vein was preserved intact contralaterally, we were more confident in resecting the JP in the second stage. Otherwise, conservative treatments were chosen. Vascular Considerations: Our previous study showed that hemorrhage from the sigmoid sinus and inferior petrosal sinus can be effectively controlled by tunnel-packing and push-packing techniques.11 The ICA in its upper neck and intratemporal portions is often involved in patients with paragangliomas. Angiography is a useful tool to identify the feeding arteries, evaluate the collateral cerebral circulation, detect occult vascular lesions, and determine the management strategy for the ipsilateral ICA during surgery. Al-Mefty et al7 stated that a plane of dissection can be identified between the tumor and the ICA with the aid of a microscope.
We noticed that the presence of scar tissue in the jugular foramen due to previous surgery makes it extremely difficult to navigate the ICA in the upper neck or to find a normal anatomic landmark. We preferred to use the cochlea and the tympanic ostium of the Eustachian tube as landmarks to identify the vertical ICA. In most cases, the tumor could be separated from the ICA. Given a benign tumor, even if a tiny piece of tumor is left, we would not sacrifice the ICA. In 3 patients in our study, subtotal tumor removal was chosen because of extensive blood loss and the risk of cerebral infarction following balloon occlusion of the ICA or tumor adherent to the vascular adventitia of the ICA. In these cases, we had to balance gross tumor removal against postoperative morbidity. The 3 patients who underwent subtotal resection of CJPs showed no tumor growth over an average of 40.7 months of follow-up. Various procedures have been described for the preoperative management of the ICA, including saphenous vein bypass grafting, permanent occlusion, and endovascular stenting.8,20,21 In our study, we used coil embolization or endovascular stenting in the management of ICA-associated lesions, which prevented disastrous intraoperative incidents. Our study demonstrates the feasibility of surgical resection and the possibility of preserving cranial nerve function in patients with CJPs. The weaknesses of the present study are obvious: retrospective design, small sample size, and lack of statistical power. Given the low incidence of CJPs and the indolent course of these tumors, future studies will enroll more patients through multicenter cooperation to validate our treatment and increase the duration of follow-up.
Conclusion: There remains no consensus regarding the best treatment modalities for CJPs, particularly for younger patients and for patients who initially present with functional LCNs. With refined surgical techniques, including tension-free anterior FN rerouting and sigmoid sinus tunnel-packing and push-packing, our data suggest that surgery can achieve a low incidence of residual tumor, tumor control, and cranial nerve preservation.
For bilateral paragangliomas, a 2-stage surgery should be applied to minimize the risk of bilateral cranial neuropathies and the impact on cerebral circulation. Proper preoperative endovascular intervention, such as coil embolization or internal carotid artery stenting, should be employed in the management of paragangliomas with ICA-associated lesions.
Background: This study aimed to review tumor control and cranial nerve function outcomes in patients with complex jugular paragangliomas and to refine the surgical strategies for complex jugular paragangliomas. Methods: We describe our experience with 12 patients with complex jugular paragangliomas diagnosed in our institution from January 2013 to June 2020. The main outcomes included tumor control, complications, and postoperative function of the facial nerve and lower cranial nerves. Results: Gross-total resection was achieved in 9 (75%) patients, and subtotal resection in 3 (25%) patients. The surgical tumor control rate was 100% after a mean follow-up of 45.5 months (range, 13-111 months). Postoperatively, 10 patients (83.3%) had unchanged or improved facial nerve function. However, new lower cranial nerve deficits occurred in 2 patients (16.7%) due to surgical removal of the concurrent vagal paraganglioma and of scar tissue enclosing the IX and XII nerves. Conclusions: Our refined surgical techniques, including tension-free anterior facial nerve rerouting and sigmoid sinus tunnel-packing and push-packing techniques, could be a choice for the treatment of complex jugular paragangliomas to achieve tumor control and cranial nerve preservation. A 2-stage surgery should be applied to minimize the risk of bilateral cranial neuropathies and the influence on cerebral circulation in patients with bilateral paragangliomas. Preoperative endovascular intervention, such as coil embolization or internal carotid artery stenting, can be employed for the management of paragangliomas with internal carotid artery-associated lesions.
null
null
11,214
291
[ 153, 95, 117, 214, 47, 259, 194, 1350, 200, 287, 172, 209, 109, 233, 409 ]
20
[ "patients", "ica", "tumor", "fn", "surgery", "function", "right", "jp", "case", "jugular" ]
[ "external jugular paraganglioma", "paraganglioma jugular paraganglioma", "complex jugular paragangliomas", "jugular paraganglioma contralateral", "jugular paragangliomas patients" ]
null
null
[CONTENT] Jugular paraganglioma | carotid body tumor | vagal paraganglioma | carotid artery | infratemporal fossa approach [SUMMARY]
[CONTENT] Jugular paraganglioma | carotid body tumor | vagal paraganglioma | carotid artery | infratemporal fossa approach [SUMMARY]
[CONTENT] Jugular paraganglioma | carotid body tumor | vagal paraganglioma | carotid artery | infratemporal fossa approach [SUMMARY]
null
[CONTENT] Jugular paraganglioma | carotid body tumor | vagal paraganglioma | carotid artery | infratemporal fossa approach [SUMMARY]
null
[CONTENT] Humans | Carotid Stenosis | Treatment Outcome | Retrospective Studies | Stents | Glomus Jugulare Tumor | Paraganglioma [SUMMARY]
[CONTENT] Humans | Carotid Stenosis | Treatment Outcome | Retrospective Studies | Stents | Glomus Jugulare Tumor | Paraganglioma [SUMMARY]
[CONTENT] Humans | Carotid Stenosis | Treatment Outcome | Retrospective Studies | Stents | Glomus Jugulare Tumor | Paraganglioma [SUMMARY]
null
[CONTENT] Humans | Carotid Stenosis | Treatment Outcome | Retrospective Studies | Stents | Glomus Jugulare Tumor | Paraganglioma [SUMMARY]
null
[CONTENT] external jugular paraganglioma | paraganglioma jugular paraganglioma | complex jugular paragangliomas | jugular paraganglioma contralateral | jugular paragangliomas patients [SUMMARY]
[CONTENT] external jugular paraganglioma | paraganglioma jugular paraganglioma | complex jugular paragangliomas | jugular paraganglioma contralateral | jugular paragangliomas patients [SUMMARY]
[CONTENT] external jugular paraganglioma | paraganglioma jugular paraganglioma | complex jugular paragangliomas | jugular paraganglioma contralateral | jugular paragangliomas patients [SUMMARY]
null
[CONTENT] external jugular paraganglioma | paraganglioma jugular paraganglioma | complex jugular paragangliomas | jugular paraganglioma contralateral | jugular paragangliomas patients [SUMMARY]
null
[CONTENT] patients | ica | tumor | fn | surgery | function | right | jp | case | jugular [SUMMARY]
[CONTENT] patients | ica | tumor | fn | surgery | function | right | jp | case | jugular [SUMMARY]
[CONTENT] patients | ica | tumor | fn | surgery | function | right | jp | case | jugular [SUMMARY]
null
[CONTENT] patients | ica | tumor | fn | surgery | function | right | jp | case | jugular [SUMMARY]
null
[CONTENT] jps | cjps | primary | lesion | sinus | jugular paragangliomas | involvement | extension | cranial nerves | nerves [SUMMARY]
[CONTENT] fn | patients | performed | underwent | patients underwent | tumor | bone | jugular | function | temporal [SUMMARY]
[CONTENT] right | grade | case | jp | patients | iii | fn | patient | year | blood [SUMMARY]
null
[CONTENT] patients | fn | ica | tumor | surgery | function | right | grade | jp | case [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] 12 | January 2013 to June 2020 ||| [SUMMARY]
[CONTENT] 9 | 75% | 3 | 25% ||| 100% | 45.5 months | 13-111 months ||| 10 | 83.3% ||| 2 | 16.7% | IX | XII [SUMMARY]
null
[CONTENT] ||| 12 | January 2013 to June 2020 ||| ||| 9 | 75% | 3 | 25% ||| 100% | 45.5 months | 13-111 months ||| 10 | 83.3% ||| 2 | 16.7% | IX | XII ||| ||| 2 ||| [SUMMARY]
null
[Musculoskeletal laboratory diagnostics in competitive sport].
33616701
Laboratory diagnostics represent a valuable tool for the optimization and assessment of the performance and regeneration ability in professional athletes. Blood parameters play an important role in the prevention, diagnosis and rehabilitation of injuries and physical overload.
BACKGROUND
Literature search and narrative review.
METHODS
The laboratory assessment of bone metabolism includes vitamin D, calcium and bone turnover and aims to provide a preventive benefit with respect to skeletal complications (e.g., to minimize the risk of bone stress injuries). In addition, muscular serum markers, such as lactate dehydrogenase (LDH), creatine kinase (CK), myoglobin and aspartate aminotransferase (AST) can be used to monitor metabolic adaptation to physical exercise and to obtain information about the muscular workload and potential damage. The energy availability can be estimated and optimized by appropriate balancing and laboratory determination of macro- and micronutrients.
RESULTS
Laboratory diagnostics have clinical relevance across different sport disciplines. They are intended to support athletes and medical staff on the way to the highest possible performance and to help ensure optimal prevention of bone and muscle injuries. Parameters with deficient results (e.g., vitamin D) should be adequately corrected. A periodization of the laboratory tests, with at least two tests per year, and the establishment of individual variability and reference ranges can improve the assessment.
CONCLUSIONS
[ "Athletes", "Humans", "Laboratories", "Sports", "Vitamin D", "Vitamin D Deficiency" ]
8416848
null
null
null
null
null
null
Conclusions for practice
A regularly performed laboratory diagnostic assessment (in competitive sport at least twice per year) and the interventions derived from abnormal values can help to improve athletes' performance. Balanced calcium homeostasis should be pursued through an optimal vitamin D supply and a balanced diet in order to reduce the risk of injuries such as stress fractures. The laboratory assessment includes various muscle enzymes as well as macro- and micronutrients, taking the individual energy requirement into account. General reference ranges serve as orientation; in competitive athletes, an individualized approach with the establishment of individual and, where appropriate, sport-specific reference ranges appears sensible. Detailed laboratory diagnostics should nowadays be an integral part of the medical care of professional athletes.
[ "Knochenstoffwechsel", "Muskelstoffwechsel", "Nährstoffe, Energieverfügbarkeit und Vitamine", "Periodisierung des laborchemischen Assessments" ]
[ "Für Leistungssportler mit hohen körperlichen Belastungen ist ein intakter Knochenstoffwechsel von essenzieller Bedeutung. Der Knochen bildet als Endoskelett neben der Muskulatur, den Sehnen und Bändern das Hauptorgan des Bewegungsapparates. Er garantiert die Beweglichkeit und Stabilität und unterliegt ständiger Adaptation. Mittels labordiagnostischer Parameter, welche sich als Marker des Knochenumbaus („remodelings“) und der Mineralstoffhomöostase eignen, lässt sich neben apparativen diagnostischen Methoden der metabolische Knochenstatus des Athleten abbilden [3].\nDie Knochenmineralisation wird maßgeblich über die Calciumhomöostase reguliert. Die empfohlene Tagesaufnahme von Calcium beträgt 1000–1500 mg/Tag [4, 5]. Neben Calcium ist Phosphat als Mineralstoff ebenfalls maßgeblich an der Mineralisation des Knochens beteiligt. Bei nicht ausreichender Versorgung des Körpers mit Calcium und/oder Phosphat resultiert eine reduzierte Knochenmineralisation bis hin zur Osteomalazie, begleitet von einer muskulären Insuffizienz [6, 7]. Chronische Hypophosphatämien können vielfältige Ursachen haben und entstehen beispielsweise durch Mangelernährung (v. a. Anorexia nervosa), in Zusammenhang mit Vitamin-D-Mangel, bei genetischen Erkrankungen (v. a. Phosphatdiabetes), paraneoplastisch (v. a. onkogene Osteomalazie) oder medikamentenassoziiert (u. a. Antazida, Diuretika) [6].\nVon übergeordneter Bedeutung für den Knochenmetabolismus und die Knochenfestigkeit sind das Steroidhormon Calcitriol (1,25(OH)2-Vitamin‑D3), das Parathormon (PTH) sowie der „fibroblast growth factor“ 23 (FGF23), welche die Hauptregulatoren der Calcium- und Phosphathomöostase darstellen [8–10]. Calcitriol steigert als aktiver Metabolit des Vitamin D3 (Cholecalciferol) die intestinale und renale Resorption von Calcium, erhöht somit die Calciumkonzentration im Körper und führt zu einer Mineralisation unmineralisierter Knochensubstanz (Osteoid) [11]. Bei einem Vitamin-D-Mangel und konsekutiver enteraler Calciumaufnahmestörung kann durch eine kompensatorisch gesteigerte PTH-Sekretion (sekundärer Hyperparathyreoidismus) eine Stimulation der knochenresorbierenden Osteoklasten und somit eine vermehrte Calciummobilisation aus dem Knochen induziert werden. Dies findet auf Kosten der Knochenqualität statt und kann durch eine Reduktion des Knochenmineralsalzgehaltes und der Knochenstruktur zu einer Verschlechterung der Knochenstabilität führen [12]. Ein weiterer häufiger Grund für eine gestörte enterale Calciumaufnahme ist die (chronische) Einnahme von Protonenpumpeninhibitoren mit konsekutiver Hypochlorhydrie [13]. Ferner wird in diesem Zustand die renale Phosphatelimination gesteigert, da ein erhöhter Phosphatspiegel wiederum einem Calciumanstieg entgegenwirken würde. Eine erhöhte PTH-Konzentration bei gleichzeitig erhöhtem Calciumspiegel spricht hingegen für eine autonome Überfunktion der Nebenschilddrüse im Sinne eines primären oder tertiären (Folge einer chronischen Überstimulation nach sekundärem) Hyperparathyreoidismus [9]. FGF23 ist ein endokrines Hormon, welches eine Phosphaturie in der Niere verursacht, während die Produktion von Calcitriol gehemmt wird [10]. 
The laboratory determination of FGF23 is currently not yet part of routine diagnostics but is indicated in recurrent hypophosphatemia and in suspected genetic or acquired disorders of phosphate metabolism.\nThe clinical relevance of vitamin D in the sports medical care of competitive as well as recreational athletes is high, since vitamin D can have a substantial effect on the athlete's health through a direct or indirect influence on performance and regeneration capacity as well as on the risk of injuries such as osseous stress reactions [14-16]. Nevertheless, vitamin D deficiency is a frequently occurring phenomenon in professional sport [1, 17-20]. For instance, an insufficient vitamin D status was found in 13 of 20 male professional footballers of the English Premier League team Liverpool FC in December 2010 [17]. A vitamin D deficiency was also detected in 31 of 70 male handball players from the first German handball league [18]. This deficiency is observed not only in the winter months or in indoor athletes but throughout the year and in other sports as well. For example, only 25 of 80 football players of the National Football League (NFL) showed adequate vitamin D levels during routine health examinations in the off-season and in the preparation phase, with the risk of vitamin D deficiency being increased in dark-skinned athletes [19]. It should be borne in mind that such a high prevalence of deficiency states was present even though foods in the United States are partly fortified with vitamin D [21]. An optimal reference value for vitamin D in competitive athletes is a matter of debate [7, 14, 15, 22, 23]. To assess whether an athlete is sufficiently supplied with vitamin D, calcidiol (25(OH) vitamin D3) is usually measured in serum. At the histological level, it was first shown in the general population that a vitamin D level (25(OH)D3) of ≥ 30 ng/ml largely excludes hypomineralization in the sense of osteomalacia in women and men [7]. While a preventive benefit with respect to (stress) fractures can be assumed at vitamin D concentrations of ≥ 40 ng/ml, a level of ≥ 50 ng/ml appears to represent an optimal precondition for maximal performance of athletes of both sexes [22]. Williams and colleagues were thus able to show in various American male and female professional teams (cross-country running, basketball, soccer, track and field) that 8 weeks of vitamin D supplementation with 50,000 IU per week in vitamin D-insufficient athletes (< 30 ng/ml) reduced the incidence of stress fractures from 7.5% to 1.6% [14]. In 5201 examined female Navy recruits, a 21% lower incidence of stress fractures was likewise found under calcium (2000 mg/day) and vitamin D supplementation (800 IU/day) [16]. Conversely, in a collective of 53 patients with stress fractures, a vitamin D level of < 40 ng/ml was observed in 44 of those affected (83%) [15].
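Purely as an illustration of the thresholds cited above, the following minimal Python sketch maps a measured 25(OH)D3 value to the ranges discussed (≥ 30, ≥ 40, ≥ 50 ng/ml); the function name and the wording of the labels are our own and are not taken from the cited studies.

# Illustrative sketch only: classify a serum 25(OH)D3 value against the
# thresholds discussed in the text. Names and labels are hypothetical.
def classify_vitamin_d(serum_25ohd_ng_ml: float) -> str:
    """Map a 25(OH)D3 serum concentration (ng/ml) to the ranges cited above."""
    if serum_25ohd_ng_ml < 30:
        return "insufficient (<30 ng/ml): reduced calcium resorption, higher stress-fracture risk"
    if serum_25ohd_ng_ml < 40:
        return "sufficient (30-40 ng/ml): osteomalacia largely excluded"
    if serum_25ohd_ng_ml < 50:
        return ">=40 ng/ml: preventive benefit regarding (stress) fractures"
    return ">=50 ng/ml: optimal precondition for maximal performance"

if __name__ == "__main__":
    for value in (22.0, 34.5, 43.0, 55.0):
        print(f"{value:5.1f} ng/ml -> {classify_vitamin_d(value)}")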
Vitamin D also appears to play a decisive role in convalescence after a fracture has occurred. In a collective of 617 patients of both sexes, it was shown that the incidence of delayed fracture healing ("delayed union") in patients with vitamin D deficiency (9.7%) differed significantly from that in patients with a sufficient vitamin D level (0.3%) [23]. In patients with an initial deficiency and subsequent supplementation of 1200 IU vitamin D daily over 4 months, delayed union was likewise significantly reduced [23]. This impressively illustrates that, even with vitamin D insufficiency at the time of a fracture, correcting this deficit can be of great clinical benefit. The importance of a balanced vitamin D level and balanced calcium homeostasis for the clinical and radiological course of stress fractures in competitive athletes is illustrated by the case example presented, in which healing of the stress fracture was achieved in a marathon runner after consistent adjustment of the training load and adequate vitamin D supplementation (Fig. 2).\nTo be able to assess the balance of bone metabolism, laboratory markers of bone formation and bone resorption are used. Their measured values reflect the extent of bone formation and bone resorption, respectively. Formation markers include, among others, bone-specific alkaline phosphatase (BAP) and osteocalcin (Oc). As an enzymatic isoenzyme, BAP belongs to the ubiquitously occurring total AP. Determination of BAP is preferred over total AP because of its higher specificity. It is involved in all phases of bone mineralization and can serve as an indicator of osteoblast activity and bone formation [24]. Osteocalcin is a hydroxyapatite-binding protein that is regarded as a specific marker for osteoblasts and plays a decisive role in the regulation of bone mineralization [25].\nThere is, however, also evidence that osteocalcin has a hormonal role, for example in the regulation of energy metabolism and of fertility [26, 27]. Owing to the lack of methodological standardization and the short half-life, the values sometimes vary considerably between different laboratories, which can limit their informative value [24]. A laboratory dissociation between an elevated or high-normal BAP and an osteocalcin that is distinctly lower in relation to it indicates the presence of an osteomalacic component, that is, a mineralization disorder of the bone. The occurrence of higher osteocalcin and BAP values in triathletes compared with cyclists furthermore suggests that the individual physical loads in different sports can differ with respect to their effect on bone metabolism [28]. Here it is assumed that the peak forces that occur are decisive for an osteogenic stimulus, which manifests itself in elevated bone formation markers [28, 29].\nA well-established bone resorption marker that reflects osteoclast activity is the collagen crosslink deoxypyridinoline (DPD). DPD acts as a molecular bridge of the extracellular matrix between type 1 collagen molecules and is found almost exclusively in bone and dentin.
After the resorptive action of the osteoclasts, the crosslinks are increasingly released into blood and urine, where they can be determined quantitatively. The gold standard is determination of the DPD concentration in urine. Elevated DPD concentrations are thus directly associated with a catabolic bone metabolism and an increased risk of bone fractures and, via a reduction of bone mineral density, can be associated with the development of osteoporosis [24, 30]. In general, it should be noted that adolescents show higher formation and resorption markers owing to the increased bone turnover during growth [31].", "Recording muscle metabolism is an important component of the functional assessment and performance diagnostics of athletes. Besides direct trauma, intensive physical loads with an imbalance between load and training adaptation can also manifest as micro- or macrodamage in the muscles. Serum markers such as lactate dehydrogenase (LDH), creatine kinase (CK), myoglobin, and aspartate aminotransferase (AST = GOT) allow monitoring of the metabolic adaptation to physical training and provide information about the muscular workload or possible damage [32]. The measurable enzyme activities show a direct correlation with the load intensity and can rise up to 4-fold above the baseline value [32]. A drug-induced elevation (e.g., CK elevation with statins or steroids) must be taken into account in the interpretation [33].\nCK catalyzes the phosphorylation of adenosine diphosphate (ADP) to adenosine triphosphate (ATP) and therefore has a central role in energy metabolism. Its isoenzymes are found in different organs: CK-MM in skeletal muscle, CK-MB in cardiac muscle, and CK-BB in the brain. Under physiological conditions, only CK-MM is detectable in blood serum. The presence of other isoenzymes should be regarded as suspicious. Although detection of CK-MB has been observed in ultramarathon runners and of CK-BB in boxers [33, 34], such findings should always be critically questioned. Likewise, if total CK activity is elevated at rest, even in the absence of predisposing factors, a workup including cardiac laboratory diagnostics and echocardiography should be performed. It should be noted that athletes physiologically have higher CK activities than physically inactive people [33, 35]. After analyzing 467 male professional football players of the German first and second Bundesliga, Meyer et al. recommended football-specific reference ranges for CK. The reason for elevated CK values appears to be the football-specific movements with their stop-and-go character, which lead to a high eccentric load and cause a stronger release of CK from the cytosol of the muscle cells [35]. Since an increase in CK is generally seen in most athletes after physical exertion, it is rarely used to detect skeletal muscle damage in athletes. After physical exertion, some athletes show smaller or even barely detectable increases in CK activity owing to training adaptation; these are referred to as non-responders [33, 36].\nA performance test with maximal exertion of the athlete can be useful for evaluating the variability of the individual CK values. Measurements before the load and 30 min, 6 h, 24 h, 48 h, and 72 h after the physical exertion appear sensible in order to depict the dynamic course. A peak of CK activity is to be expected after 24 h, while normalization should follow after 48-72 h [33].
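As a minimal sketch of the CK sampling schedule just described (pre-exercise and 30 min, 6 h, 24 h, 48 h, and 72 h post-exercise), the following Python fragment checks a measured time course for the expected 24-h peak and for normalization by 72 h; the helper name and the 1.5-fold normalization tolerance are assumptions of ours, not values from the cited literature.

# Illustrative sketch of the CK monitoring schedule described above.
# The function name and the tolerance factor are hypothetical choices.
SAMPLING_HOURS = (0.0, 0.5, 6.0, 24.0, 48.0, 72.0)  # 0.0 = pre-exercise baseline

def ck_course_summary(ck_values_u_l: dict[float, float],
                      normalization_factor: float = 1.5) -> dict[str, object]:
    """Summarize a CK time course (hour -> U/l) against the expected pattern."""
    baseline = ck_values_u_l[0.0]
    peak_hour = max(ck_values_u_l, key=ck_values_u_l.get)
    return {
        "baseline_u_l": baseline,
        "peak_hour": peak_hour,
        "peak_as_expected_24h": peak_hour == 24.0,
        # "normalized" here means back within an arbitrary 1.5x of baseline by 72 h
        "normalized_by_72h": ck_values_u_l[72.0] <= normalization_factor * baseline,
    }

if __name__ == "__main__":
    course = {0.0: 180, 0.5: 260, 6.0: 520, 24.0: 940, 48.0: 430, 72.0: 210}
    print(ck_course_summary(course))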
Differences in the physical load characteristics between strength and endurance athletes must be considered in the interpretation. In strength athletes, high CK activities can be detected especially after eccentric strength training [37, 38].\nLDH, an enzyme from the group of oxidoreductases, interconverts pyruvate and lactate while converting NAD+ and its reduced form NADH into one another. After physical exertion or muscle injuries, LDH activities rise more slowly than CK and can remain elevated for 14 days after prolonged endurance exercise [39]. The increase occurs mainly on the third to fifth day after the stimulus [32]. While untrained people do not differ from trained athletes with respect to the LDH concentration at rest, significantly higher LDH concentrations were measured in the untrained after as little as a 300-m sprint compared with athletes [40]. This suggests faster damage to the muscle tissue due to a lack of training adaptation and underlines the necessity of a good training state in order to counteract muscle injuries preventively.\nLactate, which has been used for many years in recreational and elite sport as a diagnostic tool for performance diagnostics and training management, arises during high physical loads from pyruvate, which is converted into lactate during anaerobic glycolysis with the aid of lactate dehydrogenase. Briefly summarized, with an improved training state the lactate concentration only rises at a higher load. The blood lactate concentration thus reflects the short-term (within minutes) metabolic strain, with the concept of lactate thresholds covering the transition from aerobic to anaerobic energy supply [41]. The lactate values measured in standardized exercise protocols, in the form of lactate performance curves, have been an indispensable aid in performance-oriented training management for many years, although this diagnostic tool also reveals some weaknesses, such as its dependence on other factors like nutrition or previous load [42].\nThe cytoplasmic hemoprotein myoglobin, which consists of a polypeptide chain and a porphyrin ring with a central iron molecule, is the oxygen-binding protein of the muscle. It is expressed exclusively in cardiac muscle cells and in oxidative skeletal muscle fibers and is capable of binding oxygen (O2) reversibly. In the presence of hypoxia, it is able to make oxygen available for oxidation. After strenuous physical activity, the breakdown of muscle proteins releases myoglobin, which is measurable after as little as 30 min [43]. An elevated myoglobin concentration can persist for 5 days, presumably owing to moderate inflammatory processes [32]. A correlation of the CK and myoglobin activities with the stress-induced reaction of the neutrophil granulocytes is known, whereby a sufficient protein intake can attenuate the rise [32].
In the blood it is used mainly, alongside other parameters, to exclude a cardiac event. The transaminase AST can be regarded as a further serum marker of muscle damage, for which Meyer and colleagues likewise recommend a football- and team-sport-specific reference range [35]. In contrast to the liver-specific transaminase ALT (alanine aminotransferase = GPT), AST is a transaminase that occurs ubiquitously and to a large extent in muscle cells.\nBeyond performance diagnostics, the laboratory assessment and the recommendations for action derived from it offer further possibilities for optimizing the athlete's muscular performance, for example through monitoring of the vitamin D level. A balanced vitamin D status is an important precondition for muscular performance. In 61 male British athletes from various sports (rugby, football, and professional horse-racing jockeys), optimization of the vitamin D status produced a significant improvement in the 10-m sprint time and in vertical jump performance after only 8 weeks of supplementation with 5000 IU vitamin D per day [44]. This was confirmed in 24 professional ballet dancers with initial vitamin D levels of < 30 ng/ml: after 4 months of substitution with 2000 IU vitamin D per day, an improvement in jumping power of 7.1% and a significant increase in the isometric strength of the quadriceps femoris muscle of 18.7% were observed [20]. A vitamin D serum value of > 40 ng/ml thus appears to improve muscle strength and function significantly, especially in athletes in speed-strength sports [45]. Moreover, in type II muscle fibers ("fast twitch fibres"), which are essential for peak athletic performance and for the avoidance of falls, muscle fiber atrophy with fat infiltration and fibrosis was observed in vitamin D deficiency states and was partially reversible after supplementation [22, 46].\nAn association between post-traumatic and age-related muscle loss and vitamin D is also suspected [47]. Further investigations have shown that vitamin D plays an important role not only in muscular cell differentiation but also in cell proliferation and protein biosynthesis in the mitochondrial metabolism of the cells. This is explained, among other things, by an increase in oxidative stress and a reduction of the oxygen consumption rate in skeletal muscle in vitamin D deficiency, although the molecular mechanisms are complex and in part unexplored [48]. Finally, a systematic review demonstrated positive effects of balanced vitamin D levels on muscle strength, although there is high variability in the effect sizes [49]. Regarding an association between low vitamin D levels and acute muscle injuries, by contrast, there appears to be considerably less evidence. A selection of suitable parameters for musculoskeletal laboratory diagnostics is shown in Table 1.\nParameter | Meaning | Deviation\nCalcium (a) | Mineral of skeletal mineralization | ↓ higher incidence of stress fractures [16]; chronic deficiency can result in osteomalacia\nPhosphate (a) | Mineral of skeletal mineralization | ↓ chronic deficiency can result in hypophosphatemic osteomalacia [6]\nVitamin D (25(OH)D3) (a) | Key function in calcium homeostasis and skeletal mineralization | ↑ ≥ 40 ng/ml preventive benefit regarding (stress) fractures [22]; ≥ 50 ng/ml optimal precondition for maximal performance [22]; ↓ < 30 ng/ml reduced calcium resorption, higher incidence of stress fractures [14, 15] and delayed union [23]\nOsteocalcin (Oc) (a) | Calcium-binding peptide hormone of the osteoblasts; bone formation marker | ↓ catabolic/low-turnover bone metabolism [24, 28]\nBone-specific AP (BAP) (a) | Enzyme of bone formation; bone formation marker | ↑ with dissociation from osteocalcin (ratio BAP > Oc): indication of an osteomalacic component; ↓ catabolic/low-turnover bone metabolism [24], hypophosphatasia\nDeoxypyridinoline (DPD) (a) | Product of bone resorption; bone resorption marker | ↑ increased bone resorption with elevated fracture risk [24]\nParathyroid hormone (PTH) (a) | Regulatory function in calcium homeostasis | ↑ primary/tertiary hyperparathyroidism: calcium (↑), phosphate (↓); secondary hyperparathyroidism: calcium (↓/n), phosphate (↑/n) [9, 12]\nCreatine kinase (CK) (b) | Energy provision through rephosphorylation of ADP to ATP | ↑ muscle cell damage; drug-induced (e.g., statins, steroids) [33]; potential overtraining [76]\nLactate dehydrogenase (LDH) (b) | Energy provision in anaerobic energy metabolism | ↑ muscle cell damage [39]\nAspartate aminotransferase (AST) (b) | Transaminase with ubiquitous occurrence | ↑ muscle and liver cell damage through physical exertion [77]\nLactate (b) | End product of anaerobic glycolysis | ↑ acute physical exertion; potential overtraining [76]\nMagnesium (Mg) | Mineral with influence on skeletal mineralization | ↑ activation of the osteoclasts [67]; ↓ inhibition of the osteoblasts and activation of the osteoclasts [67]; reduction of muscular performance and integrity [68]\nIron, ferritin (Fe) | Mineral with influence on hematopoiesis and performance | ↓ increased risk of fractures and prolonged regeneration after injuries [50, 61]\nZinc (Zn) | Mineral with influence on skeletal mineralization | ↓ negative effect on skeletal mineralization with possibly reduced bone mineral content [70]\nThe parameters shown represent merely a selection. The chapter on nutrients also points to the relevance of further macro- and micronutrients (proteins, vitamins such as vitamin B6, B9, and B12). (a) calcium and bone metabolism, (b) muscle metabolism",
"Appropriate physical activity and loading is generally regarded as osteoprotective. Better bone quality was found in groups of athletes from sports with high peak forces and multidirectional movements, such as football, volleyball, or rugby, whereas athletes from endurance sports with low peak forces and low energy availability, such as long-distance running, swimming, or cycling, showed reduced bone mass [50]. Since 90% of the total bone mass is generated by the age of 20 and bone accrual is largely complete by the age of 30 ("peak bone mass") [50], the aim should be to provide sufficient conditions for bone mass accrual during these years despite an active athletic career. Mechanical loading of the bones with the occurrence of peak forces is ascribed a decisive role as a positive stimulus for bone metabolism [51]. A further important influencing factor, however, is nutrition and, in particular, energy availability.\nIhle and Loucks (2004) investigated the dose-response relationship between three levels of reduced energy availability and the bone metabolism of young healthy women compared with energy-balanced controls with an energy availability of 45 kcal/kg lean body mass (LBM)/day. At energy availabilities of 30 and 20 kcal/kg LBM/day, significantly lower bone formation rates were detected while bone resorption remained unchanged. With a further reduction of energy availability to 10 kcal/kg LBM/day, bone resorption additionally increased, with the danger of a catabolic bone metabolism [52]. It should be noted that, besides lower energy intake, high energy expenditure in athletes also results in a lower energy balance, which is particularly relevant for endurance athletes. Elite endurance athletes will hardly be able to reach the level of 45 kcal/kg LBM/day in view of their high energy expenditure [53]. Moreover, many endurance athletes regard an energy deficit as essential in order to generate the endurance athlete phenotype with the largest possible proportion of fat-free mass. Nevertheless, sufficient energy availability and a sufficient supply of nutrients should be pursued in order to ensure short- and long-term bone health.\nThe combination of low energy availability (with or without an eating disorder), an altered menstrual cycle with lower estrogens and other hormonal disturbances, and reduced bone mineral density (BMD) describes a condition that is observed above all in intensively exercising women and was formerly termed the "female athlete triad" [54]. Since it is now known that the relative energy deficiency is the underlying problem and that men can also be affected, the terminology was changed to "relative energy deficiency in sport" (RED-S) [55]. The term represents an extension of the female athlete triad but does not yet appear to be sufficiently present in physicians' awareness [56]. Beyond the long-term reduction in bone mineral density, a higher incidence of stress fractures has also been described in this context [57].
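The energy availability arithmetic discussed above can be made concrete with a small Python sketch: energy availability = (energy intake - exercise energy expenditure) / lean body mass, compared against the 45/30/20/10 kcal/kg LBM/day levels studied by Ihle and Loucks; the function names and the example figures are hypothetical.

# Illustrative sketch of the energy availability (EA) calculation.
def energy_availability(intake_kcal: float,
                        exercise_kcal: float,
                        lean_body_mass_kg: float) -> float:
    """Energy availability in kcal per kg lean body mass per day."""
    return (intake_kcal - exercise_kcal) / lean_body_mass_kg

def interpret_ea(ea: float) -> str:
    """Relate an EA value to the dose-response levels cited in the text."""
    if ea >= 45:
        return "balanced (reference level of the control group)"
    if ea >= 30:
        return "reduced: bone formation may already decline"
    if ea > 10:
        return "low: clearly reduced bone formation"
    return "very low (<=10): additionally increased bone resorption"

if __name__ == "__main__":
    # Hypothetical endurance athlete: 3100 kcal intake, 1400 kcal training
    # expenditure, 55 kg lean body mass.
    ea = energy_availability(3100, 1400, 55)
    print(f"EA = {ea:.1f} kcal/kg LBM/day -> {interpret_ea(ea)}")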
As a guide to corresponding deficiency states, laboratory determination of nutrients can serve alongside a balancing of energy availability, in order to exclude a limitation of the development and function of the musculoskeletal system by deficiency states. Recognizing and eliminating deficiency states is a basic prerequisite for optimizing the performance and regeneration capacity of athletes. Calcium, vitamin D, and phosphate have already been mentioned as key osteological parameters. Further parameters are proteins as macronutrients and iron, magnesium, and zinc as well as vitamins B6, B9, and B12 as micronutrients. The daily protein intake recommended by the International Society of Sports Nutrition (ISSN) for healthy athletes is 1.4-2.0 g protein/kg body weight, in order to guarantee muscle build-up and maintenance and training adaptation as well as possible through a positive protein balance, while preventing muscle loss [58]. As components of collagen and of growth factors, proteins consequently also have a positive effect on bone [50]. Biomarkers can be used to support the assessment of the individually required amount of protein per day (total protein, albumin, nitrogen balance, urea nitrogen, amino acid analysis). Malnutrition with resulting protein deficiency appears to be accompanied by a reduction of albumin synthesis and decreased total protein. In addition, the nitrogen balance, for example via measurement of urea nitrogen as a degradation product of proteins (in blood or urine), can provide insight into protein metabolism and, for example, indicate malnutrition [59].\nIron deficiency is a performance-limiting factor that is reflected in the laboratory as decreased ferritin or, in the case of iron deficiency anemia, decreased hemoglobin [60] and manifests clinically, among other things, as fatigue with reduced maximal oxygen uptake (VO2max) and dyspnea [61]. At the musculoskeletal level, iron deficiency can manifest as an increased risk of reduced bone density and of stress fractures, or as a prolonged regeneration period after injuries [50, 61-63]. In 1085 elite athletes (570 female, 515 male) from more than 26 sports, a deficient iron status was found in 15% of the male and 52% of the female athletes [64]. Menstruation-related iron loss should be considered as a possible factor in the markedly higher prevalence of iron deficiency in female athletes [65].\nAs a cofactor of numerous anabolic and catabolic reactions, magnesium (Mg) likewise influences the bone health and muscle performance of the athlete [66], and low magnesium concentrations can possibly induce a catabolic metabolic state of the bone [67]. A positive association between magnesium status, muscle strength, and muscle performance through a muscle-protective effect and maintenance of muscle integrity is likewise known [68].
In cyclists, significantly lower myoglobin concentrations were demonstrated after magnesium supplementation (400 mg/day) during a 21-day stage race [69]. Zinc (Zn) is another mineral whose increased requirement in athletes should be covered; it is needed for a variety of functions in wound healing, glucose utilization and protein synthesis and acts both as an antioxidant and as an anti-inflammatory [59]. In addition, as a cofactor of several enzymes such as alkaline phosphatase (AP) and collagenase, zinc plays a role in bone mineralization and in the synthesis of the collagenous structures of bone [70]. Zinc deficiency is not rare; it can result from suboptimal zinc intake under physical exertion, stress and one-sided dietary habits, and can lead to reduced bone density [70, 71].

Vitamins are a group of essential organic compounds of different chemical classes that the body cannot synthesize itself and that must be taken in through food. Despite the importance of an adequate intake of vitamins C, E and K, the reliability and benefit of their laboratory determination have not been fully established, and they are not covered in this article. Vitamins B6, B9 and B12 are particularly suitable for laboratory assessment. The vitamin B complex plays an important role in the regulation of energy metabolism [59]. Important representatives are B6 (pyridoxine), B9 (folic acid) and B12 (cobalamin). Pyridoxine occupies a central position as a coenzyme of many reactions of amino acid metabolism and contributes to fatty acid synthesis [72]. Folic acid is substantially involved in growth and cell-division processes in the human body [73]. Analysis of cobalamin, which occurs predominantly in animal products, is recommended especially for vegetarians and vegans, with substitution where necessary. Cobalamin deficiency appears to be associated with reduced bone density and impaired osteoblast function [74, 75].

Periodization of laboratory assessment: An individualized approach to laboratory diagnostics appears sensible, since considerable inter-athlete differences in laboratory reference intervals occur. Significant load-dependent and sport-specific differences in numerous laboratory parameters can be demonstrated between athletes [35, 59]. It is useful, though costly in time and technique, to determine personalized reference and variability ranges for each athlete by periodizing the measurements, performed across different training and loading states (Fig. 3). It also becomes clear that no blanket statement about the time intervals between laboratory analyses is sensible; rather, a needs-based and season-dependent approach should be chosen. A minimum of two laboratory assessments per year is nevertheless recommended to ensure the basic medical care of a competitive athlete.

During rehabilitation after an injury, previously collected baseline values of the individual reference and variability ranges can also help to better interpret follow-up values.
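Periodized measurements can be condensed into a personalized reference band. The following sketch uses a simple mean ± 2 standard deviations over repeated measurements of one analyte; this choice of band, the function names and the example values are assumptions for illustration, not a validated method:

```python
from statistics import mean, stdev

def personal_reference_band(values: list[float]) -> tuple[float, float]:
    # Personalized band from periodized measurements of one analyte, assumed to
    # have been taken across different training and loading states (cf. Fig. 3).
    m, s = mean(values), stdev(values)
    return m - 2 * s, m + 2 * s

# Hypothetical ferritin values (ng/ml) from six periodized check-ups:
low, high = personal_reference_band([48, 55, 61, 52, 58, 50])
print(f"Individual band: {low:.0f}-{high:.0f} ng/ml")
```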
[ "Knochenstoffwechsel", "Muskelstoffwechsel", "Nährstoffe, Energieverfügbarkeit und Vitamine", "Periodisierung des laborchemischen Assessments", "Fazit für die Praxis" ]
[ "Für Leistungssportler mit hohen körperlichen Belastungen ist ein intakter Knochenstoffwechsel von essenzieller Bedeutung. Der Knochen bildet als Endoskelett neben der Muskulatur, den Sehnen und Bändern das Hauptorgan des Bewegungsapparates. Er garantiert die Beweglichkeit und Stabilität und unterliegt ständiger Adaptation. Mittels labordiagnostischer Parameter, welche sich als Marker des Knochenumbaus („remodelings“) und der Mineralstoffhomöostase eignen, lässt sich neben apparativen diagnostischen Methoden der metabolische Knochenstatus des Athleten abbilden [3].\nDie Knochenmineralisation wird maßgeblich über die Calciumhomöostase reguliert. Die empfohlene Tagesaufnahme von Calcium beträgt 1000–1500 mg/Tag [4, 5]. Neben Calcium ist Phosphat als Mineralstoff ebenfalls maßgeblich an der Mineralisation des Knochens beteiligt. Bei nicht ausreichender Versorgung des Körpers mit Calcium und/oder Phosphat resultiert eine reduzierte Knochenmineralisation bis hin zur Osteomalazie, begleitet von einer muskulären Insuffizienz [6, 7]. Chronische Hypophosphatämien können vielfältige Ursachen haben und entstehen beispielsweise durch Mangelernährung (v. a. Anorexia nervosa), in Zusammenhang mit Vitamin-D-Mangel, bei genetischen Erkrankungen (v. a. Phosphatdiabetes), paraneoplastisch (v. a. onkogene Osteomalazie) oder medikamentenassoziiert (u. a. Antazida, Diuretika) [6].\nVon übergeordneter Bedeutung für den Knochenmetabolismus und die Knochenfestigkeit sind das Steroidhormon Calcitriol (1,25(OH)2-Vitamin‑D3), das Parathormon (PTH) sowie der „fibroblast growth factor“ 23 (FGF23), welche die Hauptregulatoren der Calcium- und Phosphathomöostase darstellen [8–10]. Calcitriol steigert als aktiver Metabolit des Vitamin D3 (Cholecalciferol) die intestinale und renale Resorption von Calcium, erhöht somit die Calciumkonzentration im Körper und führt zu einer Mineralisation unmineralisierter Knochensubstanz (Osteoid) [11]. Bei einem Vitamin-D-Mangel und konsekutiver enteraler Calciumaufnahmestörung kann durch eine kompensatorisch gesteigerte PTH-Sekretion (sekundärer Hyperparathyreoidismus) eine Stimulation der knochenresorbierenden Osteoklasten und somit eine vermehrte Calciummobilisation aus dem Knochen induziert werden. Dies findet auf Kosten der Knochenqualität statt und kann durch eine Reduktion des Knochenmineralsalzgehaltes und der Knochenstruktur zu einer Verschlechterung der Knochenstabilität führen [12]. Ein weiterer häufiger Grund für eine gestörte enterale Calciumaufnahme ist die (chronische) Einnahme von Protonenpumpeninhibitoren mit konsekutiver Hypochlorhydrie [13]. Ferner wird in diesem Zustand die renale Phosphatelimination gesteigert, da ein erhöhter Phosphatspiegel wiederum einem Calciumanstieg entgegenwirken würde. Eine erhöhte PTH-Konzentration bei gleichzeitig erhöhtem Calciumspiegel spricht hingegen für eine autonome Überfunktion der Nebenschilddrüse im Sinne eines primären oder tertiären (Folge einer chronischen Überstimulation nach sekundärem) Hyperparathyreoidismus [9]. FGF23 ist ein endokrines Hormon, welches eine Phosphaturie in der Niere verursacht, während die Produktion von Calcitriol gehemmt wird [10]. 
Laboratory determination of FGF23 is not yet part of routine diagnostics but is indicated in recurrent hypophosphatemia and suspected genetic or acquired disorders of phosphate metabolism.

The clinical relevance of vitamin D in the sports-medical care of competitive and recreational athletes is high, since vitamin D, through direct or indirect influence on performance, regeneration and injury risk (e.g. osseous stress reactions), can have a substantial effect on athlete health [14–16]. Nevertheless, vitamin D deficiency is a frequent phenomenon in professional sport [1, 17–20]. In December 2010, an insufficient vitamin D status was found in 13 of 20 male professional footballers of the English Premier League team Liverpool FC [17]. A vitamin D deficiency was also found in 31 of 70 male handball players from the first German handball league [18]. This deficiency is observed not only in the winter months or in indoor athletes but throughout the year and in other sports as well. Only 25 of 80 National Football League (NFL) players showed adequate vitamin D levels during routine health examinations in the off-season and pre-season, with the risk of vitamin D deficiency increased in dark-skinned athletes [19]. It should be borne in mind that this high prevalence of deficiency occurred despite foods in the United States being partially fortified with vitamin D [21]. An optimal reference value for vitamin D in competitive athletes is a matter of debate [7, 14, 15, 22, 23]. To assess whether an athlete is adequately supplied with vitamin D, calcidiol (25(OH) vitamin D3) is usually measured in serum. At the histological level it was first shown in the general population that a vitamin D level (25(OH)D3) of ≥ 30 ng/ml largely excludes hypomineralization in the sense of osteomalacia in women and men [7]. While a preventive benefit regarding (stress) fractures can be assumed at vitamin D concentrations of ≥ 40 ng/ml, a level of ≥ 50 ng/ml appears to represent, for both sexes, an optimal precondition for maximal athletic performance [22]. Williams and colleagues showed in various American men's and women's professional teams (cross-country, basketball, soccer, track and field) that 8 weeks of vitamin D supplementation at 50,000 IU per week in vitamin D-insufficient athletes (<30 ng/ml) reduced the incidence of stress fractures from 7.5% to 1.6% [14]. Likewise, among 5201 female Navy recruits, calcium (2000 mg/day) and vitamin D supplementation (800 IU/day) was associated with a 21% lower incidence of stress fractures [16]. Conversely, in a collective of 53 patients with stress fractures, 44 (83%) showed a vitamin D level of < 40 ng/ml [15].

Vitamin D also appears to play a decisive role in convalescence after a fracture has occurred.
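The serum 25(OH)D3 cut-offs discussed above can be summarized as orientation bands. A minimal sketch using only the thresholds cited from the literature [7, 22]; since the optimal reference value is debated, this is orientation, not a decision rule, and the function name and example value are hypothetical:

```python
def classify_vitamin_d(serum_25ohd3_ng_ml: float) -> str:
    # Orientation bands from the cited literature [7, 22].
    if serum_25ohd3_ng_ml < 30:
        return "insufficient: reduced calcium resorption, higher stress-fracture risk"
    if serum_25ohd3_ng_ml < 40:
        return "osteomalacia largely excluded (>= 30 ng/ml)"
    if serum_25ohd3_ng_ml < 50:
        return "preventive benefit regarding (stress) fractures (>= 40 ng/ml)"
    return "optimal precondition for maximal performance (>= 50 ng/ml)"

# Hypothetical example value:
print(classify_vitamin_d(34.0))
```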
In a collective of 617 patients of both sexes, the incidence of delayed fracture healing ("delayed union") in patients with vitamin D deficiency (9.7%) differed significantly from that in patients with a sufficient vitamin D level (0.3%) [23]. In patients with an initial deficiency and subsequent supplementation of 1200 IU vitamin D daily over 4 months, delayed union was likewise significantly reduced [23]. This strikingly illustrates that even when vitamin D insufficiency is present at the time of a fracture, correcting the deficit can be of great clinical benefit. The importance of a balanced vitamin D level and balanced calcium homeostasis for the clinical and radiological course of stress fractures in competitive athletes is illustrated by the case example, in which a marathon runner achieved healing of a stress fracture after consistent load adjustment and adequate vitamin D supplementation (Fig. 2).

To assess the balance of bone metabolism, laboratory bone formation and bone resorption markers are used. Their values reflect the extent of bone build-up and breakdown, respectively. Formation markers include bone-specific alkaline phosphatase (BAP) and osteocalcin (Oc). BAP is an enzymatic isoenzyme of the ubiquitously occurring total AP. Determination of BAP is preferred to total AP because of its higher specificity. It is involved in all phases of bone mineralization and can serve as an indicator of osteoblast activity and bone formation [24]. Osteocalcin is a hydroxyapatite-binding protein that is regarded as a specific marker of osteoblasts and plays a substantial role in the regulation of bone mineralization [25].

There is, however, also evidence that osteocalcin has a hormonal role, for example in the regulation of energy metabolism and fertility [26, 27]. Owing to the lack of methodological standardization and its short half-life, values can vary considerably between laboratories, which may limit their interpretability [24]. A laboratory dissociation between elevated or high-normal BAP and a comparatively much lower osteocalcin indicates an osteomalacic component, i.e. a mineralization disorder of the bone. The occurrence of higher osteocalcin and BAP values in triathletes compared with cyclists further suggests that the physical loads of different sports can differ in their effect on bone metabolism [28]. It is assumed here that the peak forces occurring are decisive for an osteogenic stimulus, which is reflected in elevated bone formation markers [28, 29].

A proven bone resorption marker reflecting osteoclast activity is the collagen cross-link deoxypyridinoline (DPD). DPD acts as a molecular bridge of the extracellular matrix between type 1 collagen molecules and is found almost exclusively in bone and dentin.
After resorptive action of the osteoclasts, the cross-links are increasingly released into blood and urine, where they can be quantified. The gold standard is determination of the DPD concentration in urine. Elevated DPD concentrations are thus directly associated with a catabolic bone metabolism and an increased fracture risk and, via reduction of bone mineral density, may be associated with the development of osteoporosis [24, 30]. In general, it should be noted that adolescents show higher formation and resorption markers owing to the increased bone turnover during growth [31].

Muscle metabolism: Recording muscle metabolism is an important component of the functional assessment and performance diagnostics of athletes. Apart from direct trauma, intensive physical loads with an imbalance between load and training adaptation can also manifest as micro- or macro-damage in the muscles. Serum markers such as lactate dehydrogenase (LDH), creatine kinase (CK), myoglobin and aspartate aminotransferase (ASAT = GOT) allow monitoring of the metabolic adaptation to physical training and provide information on muscular workload or possible damage [32]. The measurable enzyme activities correlate directly with load intensity and can rise up to fourfold above baseline [32]. Drug-induced elevation (e.g. CK elevation with statins or steroids) must be taken into account in the interpretation [33].

CK catalyzes the phosphorylation of adenosine diphosphate (ADP) to adenosine triphosphate (ATP) and therefore plays a central role in energy metabolism. Its isoenzymes are found in different organs: CK-MM in skeletal muscle, CK-MB in cardiac muscle and CK-BB in the brain. Under physiological conditions only CK-MM is detectable in serum; the presence of other isoenzymes should be regarded as suspicious. Although detection of CK-MB in ultramarathon runners and CK-BB in boxers has been observed [33, 34], such findings should always be questioned critically. Likewise, if total CK activity is elevated at rest, even in the absence of predisposing factors, a workup including cardiac laboratory diagnostics and echocardiography should be performed. Note that athletes physiologically have higher CK activities than physically inactive people [33, 35]. After analyzing 467 male professional footballers of the German first and second Bundesliga, Meyer et al. recommended football-specific reference ranges for CK. Elevated CK values appear to result from football-specific movements with their stop-and-go character, which lead to high eccentric loads and greater release of CK from the cytosol of muscle cells [35]. Since most athletes show a CK rise after physical exertion, CK is rarely used to detect skeletal muscle damage in athletes. After physical exertion, some athletes show smaller or even barely detectable rises in CK activity owing to training adaptation; these are referred to as non-responders [33, 36].

A performance test with maximal exertion of the athlete can be useful for evaluating the variability of individual CK values.
Measurements before exercise and 30 min, 6 h, 24 h, 48 h and 72 h after exertion appear sensible to depict the dynamic course. A peak in CK activity is to be expected after 24 h, with normalization following after 48–72 h [33]. Differences in the physical load characteristics of strength and endurance athletes must be considered in the interpretation. Strength athletes, especially after eccentric strength training, show high CK activities [37, 38].

LDH, an enzyme of the oxidoreductase group, interconverts pyruvate and lactate while converting NAD+ and its reduced form NADH. After physical exertion or muscular injuries, LDH activities rise more slowly than CK and can remain elevated for 14 days after prolonged endurance activity [39]. The rise occurs mainly on the third to fifth day after the stimulus [32]. While untrained people do not differ from trained athletes in resting LDH concentration, significantly higher LDH concentrations were measured in untrained individuals than in athletes as early as after a 300-m sprint [40]. This suggests faster damage to muscle tissue in the absence of training adaptation and underlines the need for a good training state to prevent muscle injuries.

Lactate, used for many years in recreational and elite sport as a diagnostic tool for performance diagnostics and training control, arises under high physical loads from pyruvate, which is converted to lactate during anaerobic glycolysis by lactate dehydrogenase. In brief, with improving training status the blood lactate concentration rises only at higher loads. Blood lactate thus reflects the short-term (within minutes) metabolic strain, and the concept of lactate thresholds captures the transition from aerobic to anaerobic energy supply [41]. Lactate values measured in standardized exercise protocols, plotted as lactate-performance curves, have for many years been an indispensable aid in performance-oriented training control, although this diagnostic tool also has weaknesses, such as dependence on other factors like nutrition or prior load [42].

The cytoplasmic hemoprotein myoglobin, consisting of a polypeptide chain and a porphyrin ring with a central iron atom, is the oxygen-binding protein of muscle. It is expressed exclusively in cardiac muscle cells and oxidative skeletal muscle fibers and can bind oxygen (O2) reversibly. In hypoxia it can make oxygen available for oxidation. After strenuous physical activity, degradation of muscle proteins releases myoglobin, which is measurable after as little as 30 min [43]. An elevated myoglobin concentration can persist for 5 days, presumably owing to moderate inflammatory processes [32]. A correlation of CK and myoglobin activities with the stress-induced neutrophil granulocyte response is known, and a sufficient protein intake can attenuate the rise [32].
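The sampling schedule and expected CK time course described above can be checked programmatically. A minimal sketch, assuming hypothetical CK values in U/l at exactly the suggested time points; the tolerance for "normalized" is an illustrative assumption, and individual and sport-specific reference ranges take precedence:

```python
# Sampling points suggested in the text: pre-exercise, +30 min, +6 h, +24 h, +48 h, +72 h.
SAMPLING_HOURS = [0, 0.5, 6, 24, 48, 72]

def ck_course_plausible(ck_values, baseline_tolerance=1.25):
    # Expected pattern per [33]: peak around 24 h, return toward baseline by 48-72 h.
    baseline = ck_values[0]
    peak_idx = max(range(len(ck_values)), key=lambda i: ck_values[i])
    peaks_at_24h = SAMPLING_HOURS[peak_idx] == 24
    normalized = ck_values[-1] <= baseline * baseline_tolerance
    return peaks_at_24h, normalized

# Hypothetical example series (U/l):
peak_ok, norm_ok = ck_course_plausible([180, 260, 420, 950, 520, 210])
print(f"peak at 24 h: {peak_ok}, normalized by 72 h: {norm_ok}")
```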
In blood it is used mainly, alongside other parameters, to exclude a cardiac event. The transaminase ASAT can be regarded as a further serum marker for muscle damage, for which Meyer and colleagues likewise recommend a football- and team-sport-specific reference range [35]. In contrast to the liver-specific transaminase ALAT (alanine aminotransferase = GPT), ASAT is a transaminase occurring ubiquitously and to a large extent in muscle cells.

Beyond performance diagnostics, the laboratory assessment and the recommendations derived from it offer further possibilities for optimizing the athlete's muscular performance, for example by monitoring the vitamin D level. A balanced vitamin D status is an important precondition for muscular performance. In 61 male British athletes from various sports (rugby, football and professional jockeys), optimizing vitamin D status with 8 weeks of supplementation at 5000 IU vitamin D per day produced a significant improvement in 10-m sprint time and vertical jump [44]. This was confirmed in 24 professional ballet dancers with initial vitamin D levels of < 30 ng/ml: after 4 months of substitution with 2000 IU vitamin D per day, jump power improved by 7.1% and the isometric strength of the quadriceps femoris muscle increased significantly by 18.7% [20]. A vitamin D serum value of > 40 ng/ml thus appears to significantly improve muscle strength and function, especially in athletes in power-oriented sports [45]. Moreover, in type II muscle fibers ("fast twitch fibres"), which are essential for peak athletic performance and for fall prevention, vitamin D deficiency has been associated with muscle fiber atrophy with fat infiltration and fibrosis, partially reversible after supplementation [22, 46].

An association between post-traumatic and age-related muscle loss and vitamin D is also suspected [47]. Further studies have shown that vitamin D plays an important role not only in muscular cell differentiation but also in cell proliferation and protein biosynthesis in the mitochondrial metabolism of cells. This is explained, among other things, by increased oxidative stress and a reduced oxygen consumption rate in skeletal muscle under vitamin D deficiency, although the molecular mechanisms are complex and in part unexplored [48]. Finally, a systematic review demonstrated positive effects of balanced vitamin D levels on muscle strength, although effect sizes vary considerably [49]. By contrast, there appears to be considerably less evidence for an association between low vitamin D levels and acute muscle injuries. A selection of suitable laboratory parameters for musculoskeletal laboratory diagnostics is shown in Table 1.
Table 1: Selected laboratory parameters of musculoskeletal diagnostics (Parameter – Significance – Deviation):

- Calcium (a): mineral of skeletal mineralization. ↓ higher incidence of stress fractures [16]; chronic deficiency can result in osteomalacia.
- Phosphate (a): mineral of skeletal mineralization. ↓ chronic deficiency can result in hypophosphatemic osteomalacia [6].
- Vitamin D (25(OH)D3) (a): key function in calcium homeostasis and skeletal mineralization. ↑ ≥ 40 ng/ml preventive benefit regarding (stress) fractures [22]; ≥ 50 ng/ml optimal precondition for maximal performance [22]. ↓ < 30 ng/ml reduced calcium resorption; higher incidence of stress fractures [14, 15] and delayed union [23].
- Osteocalcin (Oc) (a): calcium-binding peptide hormone of the osteoblasts; bone formation marker. ↓ catabolic/"low-turnover" bone metabolism [24, 28].
- Bone-specific AP (BAP) (a): enzyme of bone formation; bone formation marker. ↑ with dissociation from osteocalcin (ratio BAP > Oc): indication of an osteomalacic component. ↓ catabolic/"low-turnover" bone metabolism [24], hypophosphatasia.
- Deoxypyridinoline (DPD) (a): product of bone resorption; bone resorption marker. ↑ increased bone resorption with increased fracture risk [24].
- Parathyroid hormone (PTH) (a): regulatory function in calcium homeostasis. ↑ primary/tertiary hyperparathyroidism: calcium (↑), phosphate (↓); secondary hyperparathyroidism: calcium (↓/n), phosphate (↑/n) [9, 12].
- Creatine kinase (CK) (b): energy supply through rephosphorylation of ADP to ATP. ↑ muscle cell damage; drug-induced (e.g. statins, steroids) [33]; potential overtraining [76].
- Lactate dehydrogenase (LDH) (b): energy supply in anaerobic energy metabolism. ↑ muscle cell damage [39].
- Aspartate aminotransferase (ASAT) (b): ubiquitously occurring transaminase. ↑ muscle and liver cell damage from physical exertion [77].
- Lactate (b): end product of anaerobic glycolysis. ↑ acute physical exertion; potential overtraining [76].
- Magnesium (Mg): mineral influencing skeletal mineralization. ↑ activation of osteoclasts [67]. ↓ inhibition of osteoblasts and activation of osteoclasts [67]; reduction of muscular performance and integrity [68].
- Iron, ferritin (Fe): mineral influencing hematopoiesis and performance. ↓ increased fracture risk and prolonged regeneration after injuries [50, 61].
- Zinc (Zn): mineral influencing skeletal mineralization. ↓ negative effect on skeletal mineralization with possibly reduced bone mineral content [70].

The parameters shown are only a selection.
Practical conclusions:

- A regularly performed laboratory assessment (in competitive sport at least twice per year) and the interventions derived from abnormal values can help improve athletes' performance.
- Balanced calcium homeostasis should be pursued through an optimal vitamin D supply and a balanced diet in order to reduce the risk of injuries such as stress fractures.
- The laboratory assessment includes various muscle enzymes as well as macro- and micronutrients, taking the individual energy requirement into account.
- General reference ranges serve as orientation; in competitive athletes an individualized approach with establishment of individual and, where applicable, sport-specific reference ranges appears sensible.
- Detailed laboratory diagnostics should nowadays be an integral part of the medical care of professional athletes.
[ "Biomarker", "Professionelle Athleten", "Regeneration", "Stressfraktur", "Vitamin D", "Biomarker", "Professional athletes", "Regeneration", "Stress fractures", "Vitamin D" ]
Knochenstoffwechsel: Für Leistungssportler mit hohen körperlichen Belastungen ist ein intakter Knochenstoffwechsel von essenzieller Bedeutung. Der Knochen bildet als Endoskelett neben der Muskulatur, den Sehnen und Bändern das Hauptorgan des Bewegungsapparates. Er garantiert die Beweglichkeit und Stabilität und unterliegt ständiger Adaptation. Mittels labordiagnostischer Parameter, welche sich als Marker des Knochenumbaus („remodelings“) und der Mineralstoffhomöostase eignen, lässt sich neben apparativen diagnostischen Methoden der metabolische Knochenstatus des Athleten abbilden [3]. Die Knochenmineralisation wird maßgeblich über die Calciumhomöostase reguliert. Die empfohlene Tagesaufnahme von Calcium beträgt 1000–1500 mg/Tag [4, 5]. Neben Calcium ist Phosphat als Mineralstoff ebenfalls maßgeblich an der Mineralisation des Knochens beteiligt. Bei nicht ausreichender Versorgung des Körpers mit Calcium und/oder Phosphat resultiert eine reduzierte Knochenmineralisation bis hin zur Osteomalazie, begleitet von einer muskulären Insuffizienz [6, 7]. Chronische Hypophosphatämien können vielfältige Ursachen haben und entstehen beispielsweise durch Mangelernährung (v. a. Anorexia nervosa), in Zusammenhang mit Vitamin-D-Mangel, bei genetischen Erkrankungen (v. a. Phosphatdiabetes), paraneoplastisch (v. a. onkogene Osteomalazie) oder medikamentenassoziiert (u. a. Antazida, Diuretika) [6]. Von übergeordneter Bedeutung für den Knochenmetabolismus und die Knochenfestigkeit sind das Steroidhormon Calcitriol (1,25(OH)2-Vitamin‑D3), das Parathormon (PTH) sowie der „fibroblast growth factor“ 23 (FGF23), welche die Hauptregulatoren der Calcium- und Phosphathomöostase darstellen [8–10]. Calcitriol steigert als aktiver Metabolit des Vitamin D3 (Cholecalciferol) die intestinale und renale Resorption von Calcium, erhöht somit die Calciumkonzentration im Körper und führt zu einer Mineralisation unmineralisierter Knochensubstanz (Osteoid) [11]. Bei einem Vitamin-D-Mangel und konsekutiver enteraler Calciumaufnahmestörung kann durch eine kompensatorisch gesteigerte PTH-Sekretion (sekundärer Hyperparathyreoidismus) eine Stimulation der knochenresorbierenden Osteoklasten und somit eine vermehrte Calciummobilisation aus dem Knochen induziert werden. Dies findet auf Kosten der Knochenqualität statt und kann durch eine Reduktion des Knochenmineralsalzgehaltes und der Knochenstruktur zu einer Verschlechterung der Knochenstabilität führen [12]. Ein weiterer häufiger Grund für eine gestörte enterale Calciumaufnahme ist die (chronische) Einnahme von Protonenpumpeninhibitoren mit konsekutiver Hypochlorhydrie [13]. Ferner wird in diesem Zustand die renale Phosphatelimination gesteigert, da ein erhöhter Phosphatspiegel wiederum einem Calciumanstieg entgegenwirken würde. Eine erhöhte PTH-Konzentration bei gleichzeitig erhöhtem Calciumspiegel spricht hingegen für eine autonome Überfunktion der Nebenschilddrüse im Sinne eines primären oder tertiären (Folge einer chronischen Überstimulation nach sekundärem) Hyperparathyreoidismus [9]. FGF23 ist ein endokrines Hormon, welches eine Phosphaturie in der Niere verursacht, während die Produktion von Calcitriol gehemmt wird [10]. Die laborchemische Bestimmung von FGF23 stellt aktuell noch keine Routinediagnostik dar, ist jedoch bei rezidivierenden Hypophosphatämien und Verdacht auf eine genetische oder erworbene Phosphatstoffwechselstörung indiziert. 
Die klinische Relevanz von Vitamin D ist in der sportmedizinischen Betreuung von Leistungs- und auch Gelegenheitssportlern hoch, da Vitamin D aufgrund einer direkten oder indirekten Beeinflussung der Leistungs- und Regenerationsfähigkeit sowie des Verletzungsrisikos, wie für ossäre Stressreaktionen, einen maßgeblichen Effekt auf die Gesundheit des Athleten haben kann [14–16]. Dennoch stellt ein Vitamin-D-Mangel im professionellen Sport ein häufig auftretendes Phänomen dar [1, 17–20]. So konnte bei 13 von 20 männlichen Profifußballern des englischen Premier-League-Teams Liverpool FC im Dezember 2010 ein insuffizienter Vitamin-D-Status erhoben werden [17]. Auch bei 31 von 70 männlichen Handballspielern aus der ersten deutschen Handballliga konnte ein Vitamin-D-Mangel festgestellt werden [18]. Dieser Mangel ist nicht nur ausschließlich in den Wintermonaten oder bei Hallensportlern, sondern über das ganze Jahr hinweg und auch bei anderen Sportarten zu beobachten. So zeigten lediglich 25 von 80 Footballspielern der National Football League (NFL) im Rahmen der routinemäßigen Gesundheitsuntersuchungen während der Saisonpause und in der Vorbereitungsphase adäquate Vitamin-D-Spiegel, wobei das Risiko für einen Vitamin-D-Mangel bei dunkelhäutigen Athleten erhöht war [19]. Hierbei ist zu bedenken, dass eine derart hohe Prävalenz an Mangelzuständen trotz in den Vereinigten Staaten teilweise mit Vitamin D angereicherter Lebensmittel vorlag [21]. Ein optimaler Referenzwert für Vitamin D bei Leistungssportlern ist Gegenstand von Diskussionen [7, 14, 15, 22, 23]. Zur Beurteilung, ob ein Athlet oder eine Athletin ausreichend mit Vitamin D versorgt ist, wird in der Regel das Calcidiol (25(OH)-Vitamin-D3) im Serum gemessen. Zunächst konnte auf histologischer Ebene in der Allgemeinbevölkerung gezeigt werden, dass ein Vitamin-D-Spiegel (25(OH)D3) von ≥ 30 ng/ml bei Frauen und Männern eine Hypomineralisation im Sinne einer Osteomalazie weitestgehend ausschließt [7]. Während bei Vitamin-D-Konzentrationen von ≥ 40 ng/ml von einem präventiven Nutzen bezüglich (Stress‑)Frakturen auszugehen ist, scheint ein Spiegel von ≥ 50 ng/ml für beide Geschlechter eine optimale Voraussetzung für eine maximale Leistungsfähigkeit der Athleten darzustellen [22]. So konnten Williams und Kollegen in verschiedenen amerikanischen Männer- und Frauenprofiteams (Crosslauf, Basketball, Fußball, Leichtathletik) zeigen, dass durch eine 8‑wöchige Vitamin-D-Supplementation mit 50.000 IE pro Woche bei Vitamin-D-insuffizienten Athleten (<30 ng/ml) eine Reduktion der Inzidenz von Stressfrakturen von 7,5 % auf 1,6 % erzielt werden konnte [14]. Auch bei 5201 untersuchten weiblichen Navy-Rekruten konnte unter Calcium- (2000 mg/Tag) und Vitamin-D-Supplementation (800 IE/Tag) eine um 21 % geringere Inzidenz für Stressfrakturen festgestellt werden [16]. Umgekehrt war in einem Kollektiv von 53 Patienten mit Stressfrakturen bei 44 Betroffenen (83 %) ein Vitamin-D-Spiegel von < 40 ng/ml zu beobachten [15]. Auch was die Rekonvaleszenz nach eingetretener Fraktur anbetrifft, scheint Vitamin D eine entscheidende Rolle zu spielen. In einem Kollektiv von 617 Patienten beider Geschlechter konnte gezeigt werden, dass sich die Inzidenz einer verzögerten Frakturheilung („delayed union“) bei Patienten mit Vitamin-D-Mangel (9,7 %) signifikant von denen mit einem suffizienten Vitamin-D-Spiegel (0,3 %) unterschied [23]. 
Bei Patienten mit initialem Mangel und anschließender Supplementation von 1200 IE Vitamin D täglich über 4 Monate konnte eine „delayed union“ ebenfalls signifikant reduziert werden [23]. Dieses verdeutlicht eindrücklich, dass selbst bei einer Vitamin-D-Insuffizienz zum Zeitpunkt einer Fraktur ein Ausgleich dieses Defizits von großem klinischem Nutzen sein kann. Die Bedeutung eines ausgeglichenen Vitamin-D-Spiegels und einer balancierten Calciumhomöostase bezüglich der klinischen und radiologischen Entwicklung von Stressfrakturen bei Leistungssportlern kann im angeführten Fallbeispiel verdeutlicht werden, in dem bei einer Marathonläuferin nach konsequenter Anpassung der Belastung und einer adäquaten Vitamin-D-Supplementation eine Ausheilung der Stressfraktur erreicht werden konnte (Abb. 2). Um die Balance des Knochenstoffwechsels beurteilen zu können, werden laborchemische Knochenformations- und Knochenresorptionsmarker herangezogen. Ihre Messwerte spiegeln das Ausmaß des Knochenaufbaus bzw. -abbaus wider. Formationsmarker stellen unter anderem die knochenspezifische alkalische Phosphatase („bone-specific alkaline phosphatase“ [BAP]) und das Osteocalcin (Oc) dar. Die BAP gehört als enzymatisches Isoenzym der ubiquitär vorkommenden Gesamt-AP an. Die Bestimmung der BAP wird der Gesamt-AP aufgrund der höheren Spezifität vorgezogen. Sie ist an allen Phasen der Knochenmineralisation beteiligt und kann als Indikator für die Osteoblastenaktivität und die Knochenformation fungieren [24]. Osteocalcin ist ein Hydroxylapatit-bindendes Protein, welches als spezifischer Marker für Osteoblasten gilt und eine maßgebliche Rolle bei der Regulierung der Knochenmineralisation spielt [25]. Es gibt jedoch auch Evidenz, dass Osteocalcin eine hormonelle Rolle, beispielsweise in der Regulation des Energiestoffwechsels sowie der Fertilität, einnimmt [26, 27]. Durch das Fehlen einer methodischen Standardisierung und der geringen Halbwertszeit variieren die Werte zum Teil stark zwischen verschiedenen Laboren, was die Aussagekraft einschränken kann [24]. Eine laborchemische Dissoziation zwischen erhöhter beziehungsweise hoch normaler BAP bei Vorliegen eines in Relation deutlich niedrigeren Osteocalcins spricht für das Vorliegen einer osteomalazischen Komponente, das heißt einer Mineralisationsstörung des Knochens. Durch das Auftreten höherer Osteocalcin- und BAP-Werte bei Triathleten im Vergleich zu Radfahrern kann außerdem darauf geschlossen werden, dass sich die individuellen körperlichen Belastungen bei verschiedenen Sportarten bezüglich des Effekts auf den Knochenmetabolismus unterscheiden können [28]. Hier wird davon ausgegangen, dass die auftretenden Maximalkräfte maßgeblich für einen osteogenen Stimulus sind, was sich in erhöhten Knochenformationsmarkern äußert [28, 29]. Ein bewährter Knochenresorptionsmarker, der die Osteoklastenaktivität widerspiegelt, stellt die Kollagenquervernetzung („Crosslink“) Desoxypyridinolin (DPD) dar. DPD fungiert als molekulare Brücke der Extrazellulärmatrix zwischen Kollagenmolekülen des Typs 1 und ist nahezu ausschließlich im Knochen und Dentin anzutreffen. Nach resorptiver Wirkung der Osteoklasten resultiert eine vermehrte Abgabe der Crosslinks an Blut und Urin, wo sie quantitativ bestimmt werden können. Goldstandard ist die Bestimmung der DPD-Konzentration im Urin. 
Erhöhte DPD-Konzentrationen stehen somit mit einer katabolen Knochenstoffwechsellage und einem erhöhten Risiko für Knochenfrakturen im direkten Zusammenhang und können über die Reduktion der Knochenmineraldichte mit der Entstehung einer Osteoporose assoziiert sein [24, 30]. Generell gilt es zu beachten, dass Jugendliche höhere Formations- und Resorptionsmarker aufgrund des gesteigerten Knochenumbaus während des Wachstums aufweisen [31]. Muskelstoffwechsel: Die Erfassung des Muskelstoffwechsels ist im funktionellen Assessment respektive der Leistungsdiagnostik von Athleten ein wichtiger Bestandteil. Neben direkten Traumata können sich auch intensive körperliche Belastungen bei Ungleichgewicht zwischen Belastung und Trainingsadaptation in Mikro- oder Makroschädigungen in den Muskeln äußern. Mittels Serummarker wie der Laktatdehydrogenase (LDH), der Kreatinkinase (CK), dem Myoglobin und der Aspartat-Aminotransferase (ASAT = GOT) lässt sich ein Monitoring der metabolischen Adaptation an das physische Training durchführen und es lassen sich Aussagen über die muskuläre Arbeitslast oder mögliche Schädigungen gewinnen [32]. Die messbaren Enzymaktivitäten zeigen eine direkte Korrelation zu der Belastungsintensität und können bis zu dem 4fachen des Ausgangswertes ansteigen [32]. Eine medikamenteninduzierte Erhöhung (z. B. CK-Erhöhung bei Statinen oder Steroiden) muss bei der Interpretation mitberücksichtigt werden [33]. Die CK katalysiert die Phosphorylierung von Adenosindiphosphat (ADP) zu Adenosintriphosphat (ATP) und besitzt daher eine zentrale Rolle im Energiestoffwechsel. Isoenzyme sind in verschiedenen Organen vorzufinden: CK-MM im Skelettmuskel, CK-MB im Herzmuskel und CK-BB im Gehirn. Unter physiologischen Umständen ist ausschließlich CK-MM im Blutserum nachzuweisen. Das Vorkommen anderer Isoenzyme sollte als verdächtig betrachtet werden. Zwar wurde unter anderem ein Nachweis von CK-MB bei Ultramarathonläufern und CK-BB bei Boxern beobachtet [33, 34], allerdings sollten diese Nachweise immer kritisch hinterfragt werden. Ebenfalls sollte bei Vorliegen einer erhöhten Gesamt-CK-Aktivität in Ruhephasen, auch bei Fehlen von prädisponierenden Faktoren, eine Diagnostik samt kardialer Labordiagnostik und Echokardiografie erfolgen. Zu beachten ist, dass Sportler physiologisch höhere CK-Aktivitäten besitzen als sportlich inaktive Menschen [33, 35]. So empfahlen Meyer et al. nach Analyse von 467 männlichen Profifußballspielern der 1. und 2. Bundesliga fußballspezifische Referenzbereiche für die CK. Grund für erhöhte CK-Werte scheinen die fußballspezifischen Bewegungen mit ihrem Stop-and-go-Charakter zu sein, welche zu einer hohen exzentrischen Belastung führen und eine stärkere Freisetzung der CK aus dem Zytosol der Muskelzellen bedingen [35]. Da generell bei den meisten Athleten nach körperlichen Belastungen ein Anstieg der CK zu verzeichnen ist, wird sie zum Nachweis von Skelettmuskelschäden bei Sportlern selten verwendet. Nach körperlicher Belastung sind bei einigen Athleten aufgrund der Trainingsadaptation geringere respektive sogar kaum vorhandene Anstiege der CK-Aktivität zu verzeichnen, man spricht hier von Non-Respondern [33, 36]. Ein Performance-Test mit maximaler Ausbelastung des Athleten kann für eine Evaluation der Variabilität der individuellen CK-Werte von Nutzen sein. Messungen vor der Belastung und 30 min, 6 h, 24 h, 48 h, und 72 h nach der körperlichen Anstrengung scheinen sinnvoll, um den dynamischen Verlauf darzustellen. 
Ein Peak der CK-Aktivität nach 24 h ist zu erwarten, während eine Normalisierung nach 48–72 h folgen sollte [33]. Unterschiede der physischen Belastungscharakteristika zwischen Kraft- und Ausdauerathleten sind bei der Interpretation zu beachten. So lassen sich bei Kraftathleten, vor allem nach Ausübung von exzentrischem Krafttraining, hohe CK-Aktivitäten nachweisen [37, 38]. Die LDH, ein Enzym aus der Gruppe der Oxidoreduktasen, wandelt, unter Umwandlung von NAD+ und seiner reduzierten Form NADH, Pyruvat und Laktat ineinander um. Nach körperlicher Belastung beziehungsweise muskulären Verletzungen resultieren im Vergleich zur CK langsamer steigende LDH-Aktivitäten, welche nach ausdauernder körperlicher Aktivität für 14 Tage erhöht sein können [39]. Der Anstieg geschieht vor allem am dritten bis fünften Tag nach dem Reiz [32]. Während sich nicht trainierte Menschen bezüglich der LDH-Konzentration in Ruhe nicht von trainierten Athleten unterscheiden, konnte bei diesen bereits nach einem 300-m-Sprint im Vergleich zu Athleten signifikant höhere LDH-Konzentrationen bestimmt werden [40]. Dieses lässt eine schnellere Schädigung des Muskelgewebes durch eine fehlende Trainingsadaptation vermuten und unterstreicht die Notwendigkeit eines guten Trainingszustandes, um Muskelverletzungen präventiv entgegenzuwirken. Das seit vielen Jahren im Breiten- sowie im Spitzensport als diagnostisches Tool zur Leistungsdiagnostik und Trainingssteuerung verwendete Laktat entsteht bei hohen körperlichen Belastungen aus Pyruvat, welches während der anaeroben Glykolyse mithilfe der Laktatdehydrogenase in Laktat umgewandelt wird. Kurz zusammengefasst steigt mit verbessertem Trainingszustand die Laktatkonzentration erst bei einer höheren Belastung an. Somit spiegelt die Laktatkonzentration im Blut die kurzfristige (innerhalb von Minuten) metabolische Beanspruchung wider, wobei das Konzept der Laktatschwellen den Übergang von einer aeroben zur anaeroben Energiebereitstellung umfasst [41]. Die in standardisierten Belastungsprotokollen gemessenen Laktatwerte stellen in Form von Laktatleistungskurven eine seit vielen Jahren genutzte, unverzichtbare Hilfe in der leistungsorientierten Trainingssteuerung dar, obwohl dieses diagnostische Mittel auch einige Schwächen offenbart, wie die Abhängigkeit von anderen Faktoren wie der Ernährung oder der Vorbelastung [42]. Das zytoplasmatische Hämoprotein Myoglobin, welches aus einer Polypeptidkette und einem Porphyrinring mit zentralem Eisenmolekül besteht, stellt das sauerstoffbindende Protein des Muskels dar. Es wird ausschließlich in den Herzmuskelzellen und in oxidativen Skelettmuskelfasern exprimiert und ist fähig, Sauerstoff (O2) reversibel zu binden. Es ist in der Lage, bei Vorliegen einer Hypoxie Sauerstoff der Oxidation zur Verfügung zu stellen. Nach anstrengender körperlicher Betätigung kommt es durch den Abbau von Muskelproteinen zu einer Freisetzung von Myoglobin, welches bereits nach 30 min messbar ist [43]. Eine erhöhte Myoglobinkonzentration kann für 5 Tage verbleiben, vermutlich aufgrund moderater Inflammationsprozesse [32]. So ist eine Korrelation der Aktivitäten von CK und Myoglobin mit der durch Stress induzierten Reaktion der neutrophilen Granulozyten bekannt, wobei eine ausreichende Proteinzufuhr eine Abschwächung des Anstiegs bewirken kann [32]. Im Blut wird es hauptsächlich – neben anderen Parametern – zum Ausschluss eines kardialen Geschehens genutzt. 
Als weiterer Serummarker für Muskelschädigungen kann die Transaminase ASAT angesehen werden, für welche Meyer und Kollegen ebenfalls einen fußball- und mannschaftssportspezifischen Referenzbereich empfehlen [35]. Die ASAT stellt hierbei im Gegensatz zur leberspezifischen Transaminase ALAT (Alanin-Aminotransferase = GPT) eine ubiquitär und in großem Maße in den Muskelzellen vorkommende Transaminase dar. Neben der Leistungsdiagnostik bietet das laborchemische Assessment mit den daraus abgeleiteten Handlungsempfehlungen weitere Möglichkeiten, die muskuläre Leistungsfähigkeit des Athleten zu optimieren, wie zum Beispiel durch ein Monitoring des Vitamin-D-Spiegels. Ein ausgeglichener Vitamin-D-Haushalt stellt eine wichtige Voraussetzung für die muskuläre Leistungsfähigkeit dar. Bei 61 männlichen britischen Athleten verschiedener Sportarten (Rugby, Fußball und professionelle Pferderennreiter) konnte durch die Optimierung des Vitamin-D-Status bereits nach einer 8‑wöchigen Supplementation von 5000 IE Vitamin D pro Tag eine signifikante Verbesserung der 10-m-Sprintzeit und der vertikalen Sprungfähigkeit festgestellt werden [44]. Bei 24 professionellen Balletttänzern und -tänzerinnen, bei denen initial Vitamin-D-Spiegel von < 30 ng/ml vorlagen, konnte dieses bestätigt werden: Nach einer 4‑monatigen Substitution von 2000 IE Vitamin D pro Tag wurde neben einer Verbesserung der Sprungkraft um 7,1 % außerdem eine signifikante Steigerung der isometrischen Kraft des M. quadriceps femoris um 18,7 % beobachtet [20]. So scheint ein Vitamin-D-Serumwert von > 40 ng/ml die Muskelkraft und -funktion, vor allem bei Athleten mit schnellkraftbetonten Sportarten, signifikant zu verbessern [45]. Außerdem konnte bei Typ-II-Muskelfasern („fast twitch fibres“), welche essenziell für sportliche Höchstleistung und für die Vermeidung von Stürzen sind, bei Vitamin-D-Mangelzuständen Muskelfaseratrophien mit Fettinfiltrationen und Fibrosen beobachtet werden, die nach Supplementierung partiell reversibel waren [22, 46]. Auch ein Zusammenhang zwischen posttraumatischem und altersbedingtem Muskelabbau und Vitamin D wird vermutet [47]. Weitere Untersuchungen konnten zeigen, dass Vitamin D nicht nur bei der muskulären Zelldifferenzierung, sondern auch bei der Zellproliferation bzw. der Proteinbiosynthese im mitochondrialen Metabolismus der Zellen eine wichtige Rolle spielt. Dies ist unter anderen durch eine Erhöhung des oxidativen Stresses und einer Reduktion der Sauerstoffverbrauchsrate in der Skelettmuskulatur bei Vitamin-D-Mangel zu begründen, wobei die molekularen Mechanismen komplex und zum Teil unerforscht sind [48]. Schlussendlich konnte ein systematisches Review positive Auswirkungen von ausgeglichenen Vitamin-D-Spiegeln auf die Muskelkraft demonstrieren, obwohl eine hohe Variabilität bezüglich der Effektstärken besteht [49]. Bezüglich des Zusammenhangs von niedrigen Vitamin-D-Spiegeln und akuten Muskelverletzungen scheint es hingegen deutlich weniger Evidenz zu geben. Eine Auswahl der geeigneten laborchemischen Parameter der muskuloskelettalen Labordiagnostik ist in der Tab. 
Table 1 Selected laboratory parameters of musculoskeletal diagnostics

Parameter | Role | Deviation
Calcium (a) | Mineral of skeletal mineralization | ↓ Higher incidence of stress fractures [16]; chronic deficiency can result in osteomalacia
Phosphate (a) | Mineral of skeletal mineralization | ↓ Chronic deficiency can result in hypophosphatemic osteomalacia [6]
Vitamin D (25(OH)D3) (a) | Key role in calcium homeostasis and skeletal mineralization | ↑ ≥ 40 ng/ml preventive benefit regarding (stress) fractures [22]; ≥ 50 ng/ml optimal precondition for maximal performance [22]; ↓ < 30 ng/ml reduced calcium resorption, higher incidence of stress fractures [14, 15] and "delayed union" [23]
Osteocalcin (Oc) (a) | Calcium-binding peptide hormone of the osteoblasts; bone formation marker | ↓ Catabolic/"low-turnover" bone metabolism [24, 28]
Bone-specific alkaline phosphatase (BAP) (a) | Enzyme of bone formation; bone formation marker | ↑ When dissociated from osteocalcin (ratio BAP > Oc): indication of an osteomalacic component; ↓ catabolic/"low-turnover" bone metabolism [24], hypophosphatasia
Deoxypyridinoline (DPD) (a) | Product of bone resorption; bone resorption marker | ↑ Increased bone resorption with elevated fracture risk [24]
Parathyroid hormone (PTH) (a) | Regulatory role in calcium homeostasis | ↑ Primary/tertiary hyperparathyroidism: calcium (↑), phosphate (↓); secondary hyperparathyroidism: calcium (↓/n), phosphate (↑/n) [9, 12]
Creatine kinase (CK) (b) | Energy provision through rephosphorylation of ADP to ATP | ↑ Muscle cell damage; drug-induced (e.g., statins, steroids) [33]; potential overtraining [76]
Lactate dehydrogenase (LDH) (b) | Energy provision in anaerobic energy metabolism | ↑ Muscle cell damage [39]
Aspartate aminotransferase (ASAT) (b) | Ubiquitously occurring transaminase | ↑ Muscle and liver cell damage through physical exertion [77]
Lactate (b) | End product of anaerobic glycolysis | ↑ Acute physical exertion; potential overtraining [76]
Magnesium (Mg) | Mineral influencing skeletal mineralization | ↑ Activation of osteoclasts [67]; ↓ inhibition of osteoblasts and activation of osteoclasts [67], reduced muscular performance and integrity [68]
Iron, ferritin (Fe) | Mineral influencing hematopoiesis and performance | ↓ Increased risk of fractures and prolonged regeneration after injuries [50, 61]
Zinc (Zn) | Mineral influencing skeletal mineralization | ↓ Negative effect on skeletal mineralization with possibly reduced bone mineral content [70]

The parameters shown are only a selection; the section on nutrients also points to the relevance of further macro- and micronutrients (proteins, and vitamins such as B6, B9, and B12). (a) Calcium and bone metabolism, (b) muscle metabolism.
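One way to operationalize such a panel is to encode reference limits as a lookup and flag deviations automatically. The sketch below is hypothetical: the numeric limits are placeholders, since real cut-offs are assay-, laboratory-, and sport-specific, as the text stresses.

```python
# Minimal sketch: encoding a few rows of the parameter table as a lookup
# so deviations can be annotated automatically. Reference limits here are
# hypothetical placeholders; real cut-offs are assay- and lab-specific.

PANEL = {
    "vitamin_d_ng_ml": (40.0, 80.0),   # target >= 40 ng/ml per the text
    "ck_u_l":          (0.0, 500.0),   # sport-specific upper limits apply [35]
    "ferritin_ng_ml":  (30.0, 300.0),
}

def flag(results):
    for name, value in results.items():
        lo, hi = PANEL[name]
        status = "low" if value < lo else "high" if value > hi else "ok"
        print(f"{name}: {value} -> {status}")

flag({"vitamin_d_ng_ml": 24.0, "ck_u_l": 820.0, "ferritin_ng_ml": 95.0})
```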
Nutrients, energy availability, and vitamins: Appropriate physical activity and loading is generally considered osteoprotective. Better bone quality has been found in groups of athletes from sports with high peak forces and multidirectional movements, such as football, volleyball, or rugby, whereas athletes from endurance sports with low peak forces and low energy availability, such as long-distance running, swimming, or cycling, showed reduced bone mass [50]. Since 90% of total bone mass is built up by the age of 20 and accrual is largely complete by the age of 30 ("peak bone mass") [50], sufficient conditions for bone mass accrual should be provided during these years despite an active athletic career. Mechanical loading of the bones with the occurrence of peak forces is considered a decisive positive stimulus for bone metabolism [51]. A further important factor, however, is nutrition, or more precisely energy availability. Ihle and Loucks (2004) examined the dose-response relationship between three levels of reduced energy availability and bone metabolism in young healthy women, compared with energy-balanced controls at an energy availability of 45 kcal/kg fat-free body mass ("lean body mass" [LBM]) per day. At energy availabilities of 30 and 20 kcal/kg LBM/day, bone formation rates were significantly lower while bone resorption remained unchanged; when energy availability was further reduced to 10 kcal/kg LBM/day, bone resorption additionally increased, with the risk of a catabolic bone metabolism [52]. It should be noted that, in addition to lower energy intake, high energy expenditure also lowers an athlete's energy balance, which is particularly relevant for endurance athletes: elite endurance athletes will hardly be able to reach the level of 45 kcal/kg LBM/day given their high energy expenditure [53]. Moreover, many endurance athletes regard an energy deficit as essential for achieving the endurance-athlete phenotype with the largest possible fraction of fat-free mass. Nevertheless, adequate energy availability and a sufficient supply of nutrients should be pursued to safeguard short- and long-term bone health. The combination of low energy availability (with or without an eating disorder), an altered menstrual cycle with lower estrogen levels and other hormonal disturbances, and reduced bone mineral density ("bone mineral density" [BMD]) describes a condition observed mainly in intensively exercising women and formerly termed the "female athlete triad" [54]. Since it is now known that relative energy deficiency is the underlying problem and that men can also be affected, the terminology was changed to "relative energy deficiency in sport" (RED-S) [55]. The term extends the female athlete triad but does not yet appear to be sufficiently present in physicians' awareness [56].
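The energy-availability tiers studied by Ihle and Loucks follow from simple arithmetic, sketched below; the function name and the example intake, expenditure, and LBM values are made up for illustration.

```python
# Minimal sketch of the energy-availability arithmetic behind the Ihle &
# Loucks tiers: EA = (intake - exercise expenditure) / lean body mass.
# The example numbers are hypothetical.

def energy_availability(intake_kcal, exercise_kcal, lbm_kg):
    return (intake_kcal - exercise_kcal) / lbm_kg

ea = energy_availability(intake_kcal=3200, exercise_kcal=1400, lbm_kg=60)
print(f"{ea:.1f} kcal/kg LBM/day")  # 30.0: reduced bone formation tier [52]
```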
In this context, a higher incidence of stress fractures has been described in addition to the long-term reduction of bone mineral density [57]. Alongside balancing energy availability, laboratory determination of nutrients can serve as orientation for corresponding deficiency states, in order to rule out a limitation of the development and function of the musculoskeletal system caused by such deficiencies. Recognizing and correcting deficiencies is a basic prerequisite for optimizing athletes' performance and regeneration capacity. Calcium, vitamin D, and phosphate have already been discussed as key osteologic parameters. Further parameters are proteins among the macronutrients and, among the micronutrients, iron, magnesium, and zinc as well as vitamins B6, B9, and B12. The daily protein intake recommended by the International Society of Sports Nutrition (ISSN) for healthy athletes is 1.4–2.0 g protein/kg body weight, in order to best support muscle build-up and maintenance and training adaptation through a positive protein balance while preventing muscle loss [58]. Proteins, as components of collagen and of growth factors, consequently also have a positive effect on bone [50]. Biomarkers can be used to support the assessment of the individually required daily protein amount (total protein, albumin, nitrogen balance, urea nitrogen, amino acid analysis). Malnutrition with resulting protein deficiency appears to be accompanied by reduced albumin synthesis and lowered total protein. In addition, the nitrogen balance, for example via measurement of urea nitrogen as a degradation product of proteins (in blood or urine), can provide information on protein metabolism and, for instance, indicate malnutrition [59]. Iron deficiency is a performance-limiting factor that is reflected in the laboratory as decreased ferritin, or decreased hemoglobin in the case of iron deficiency anemia [60], and manifests clinically, among other things, as fatigue with reduced maximal oxygen uptake (VO2max) and dyspnea [61]. At the musculoskeletal level, iron deficiency can manifest as an increased risk of reduced bone density and stress fractures, or as a prolonged regeneration period after injuries [50, 61–63]. Among 1085 elite athletes (570 female, 515 male) from more than 26 sports, a deficient iron status was found in 15% of male and 52% of female athletes [64]. Menstrual iron loss should be considered as a possible factor behind the markedly higher prevalence of iron deficiency in female athletes [65]. Magnesium (Mg), as a cofactor of numerous anabolic and catabolic reactions, likewise influences an athlete's bone health and muscle performance [66], and low magnesium concentrations may induce a catabolic metabolic state of the bone [67]. A positive association between magnesium status, muscle strength, and muscle performance, through a muscle-protective effect and maintenance of muscle integrity, is also known [68].
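The ISSN recommendation translates directly into a per-athlete gram range; the following minimal sketch computes it (function name and example body mass are hypothetical).

```python
# Minimal sketch: daily protein target range from the ISSN recommendation
# of 1.4-2.0 g protein per kg body weight [58]. The example weight is made up.

def protein_target_g(body_mass_kg, low=1.4, high=2.0):
    return body_mass_kg * low, body_mass_kg * high

lo, hi = protein_target_g(75)
print(f"target: {lo:.0f}-{hi:.0f} g/day")  # 105-150 g/day for a 75 kg athlete
```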
Accordingly, significantly lower myoglobin concentrations were demonstrated in cyclists after magnesium supplementation (400 mg/day) during a 21-day stage race [69]. Zinc (Zn) is another mineral whose increased requirement in athletes should be covered; it is needed for numerous functions in wound healing, glucose utilization, and protein synthesis, and acts both as an antioxidant and as an anti-inflammatory agent [59]. In addition, as a cofactor of several enzymes such as alkaline phosphatase and collagenase, zinc plays a role in bone mineralization and in the synthesis of the collagenous structures of bone [70]. Zinc deficiency is not a rare event; it can be caused by suboptimal zinc intake under physical strain, stress, and one-sided dietary habits, and can lead to reduced bone density [70, 71]. Vitamins are a group of essential organic compounds of different chemical classes that the body cannot synthesize itself and that must be obtained from food. Despite the importance of an adequate intake of vitamins C, E, and K, the reliability and benefit of their laboratory determination have not been fully demonstrated, and they are not covered in this article. For the laboratory assessment, vitamins B6, B9, and B12 are the most suitable. The vitamin B complex plays an important role in the regulation of energy metabolism [59]. Important members are B6 (pyridoxine), B9 (folic acid), and B12 (cobalamin). Pyridoxine has a central position as a coenzyme of many reactions of amino acid metabolism and contributes to fatty acid synthesis [72]. Folic acid is substantially involved in growth and cell division processes in the human body [73]. An analysis of cobalamin, which occurs predominantly in animal products, is recommended above all for vegetarians and vegans, and it should be substituted where necessary. Cobalamin deficiency appears to be accompanied by reduced bone density and impaired osteoblast function [74, 75]. Periodization of the laboratory assessment: An individualized approach to laboratory diagnostics appears sensible, as substantial differences in laboratory reference intervals occur between individual athletes; substantial load-dependent and sport-specific differences in numerous laboratory parameters can be demonstrated between athletes [35, 59]. It is sensible, although time-consuming and technically demanding, to determine personalized reference and variability ranges for each athlete by periodizing the measurements, carried out across different training and load states (Fig. 3). It also becomes clear that no blanket statement about the time intervals between laboratory analyses is appropriate; instead, a needs- and season-dependent approach should be chosen. A minimum of two laboratory work-ups per year is nevertheless recommended to ensure the basic medical care of an elite athlete. During rehabilitation after an injury, previously established baseline values of the individual reference and variability ranges can also help to interpret follow-up values.
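One plausible way to derive such personalized reference and variability ranges, not prescribed by the text, is a mean ± 1.96 SD band over periodized measurements, as sketched below with made-up CK values.

```python
# Minimal sketch: deriving a personal reference band for one analyte from
# periodized measurements (e.g., several tests across a season), as a mean
# +/- 1.96 SD interval. This is one plausible construction, not the method
# prescribed in the text; the CK values are hypothetical.

from statistics import mean, stdev

def personal_band(values, z=1.96):
    m, s = mean(values), stdev(values)
    return m - z * s, m + z * s

ck_season = [180, 220, 260, 210, 240, 200]  # hypothetical CK values (U/l)
print(personal_band(ck_season))
```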
Practical conclusions: A regularly performed laboratory assessment (in elite sport at least twice per year), with interventions derived from abnormal values, can help to improve athletes' performance. A balanced calcium homeostasis should be pursued through an optimal vitamin D supply and a balanced diet in order to reduce the risk of injuries such as stress fractures. The laboratory assessment comprises various muscle enzymes as well as macro- and micronutrients, taking the individual energy requirement into account. General reference ranges serve as orientation; in elite athletes, an individualized approach with the establishment of individual and, where appropriate, sport-specific reference ranges appears sensible. Detailed laboratory diagnostics should today be an integral part of the medical care of professional athletes.
Background: Laboratory diagnostics represent a valuable tool for the optimization and assessment of the performance and regeneration ability of professional athletes. Blood parameters play an important role in the prevention, diagnosis and rehabilitation of injuries and physical overload. Methods: Literature search and narrative review. Results: The laboratory assessment of bone metabolism includes vitamin D, calcium and bone turnover and aims to provide a preventive benefit with respect to skeletal complications (e.g., to minimize the risk of bone stress injuries). In addition, muscular serum markers, such as lactate dehydrogenase (LDH), creatine kinase (CK), myoglobin and aspartate aminotransferase (AST), can be used to monitor metabolic adaptation to physical exercise and to obtain information about the muscular workload and potential damage. The energy availability can be estimated and optimized by appropriate balancing and laboratory determination of macro- and micronutrients. Conclusions: Laboratory diagnostics have clinical relevance across different sport disciplines. They are intended to support athletes and medical staff on their way to the highest possible performance and help to ensure the optimal prevention of bone and muscle injuries. Parameters with deficiency results (e.g., vitamin D) should be adequately compensated. A periodization of the laboratory tests, with at least two tests per year, and the establishment of individual variability and reference ranges can improve the assessment.
null
null
5,567
260
[ 1725, 2071, 1346, 156 ]
5
[ "der", "und", "die", "eine", "von", "bei", "vitamin", "ein", "zu", "einer" ]
[ "knochenmineraldichte bone mineral", "mineralisationsstörung des knochens", "aregulationsfunktion der calciumhomöostase", "vermehrte calciummobilisation", "tagesaufnahme von calcium" ]
null
null
null
null
null
null
null
[CONTENT] Biomarker | Professionelle Athleten | Regeneration | Stressfraktur | Vitamin D | Biomarker | Professional athletes | Regeneration | Stress fractures | Vitamin D [SUMMARY]
[CONTENT] Biomarker | Professionelle Athleten | Regeneration | Stressfraktur | Vitamin D | Biomarker | Professional athletes | Regeneration | Stress fractures | Vitamin D [SUMMARY]
null
null
null
null
[CONTENT] Athletes | Humans | Laboratories | Sports | Vitamin D | Vitamin D Deficiency [SUMMARY]
[CONTENT] Athletes | Humans | Laboratories | Sports | Vitamin D | Vitamin D Deficiency [SUMMARY]
null
null
null
null
[CONTENT] knochenmineraldichte bone mineral | mineralisationsstörung des knochens | aregulationsfunktion der calciumhomöostase | vermehrte calciummobilisation | tagesaufnahme von calcium [SUMMARY]
[CONTENT] knochenmineraldichte bone mineral | mineralisationsstörung des knochens | aregulationsfunktion der calciumhomöostase | vermehrte calciummobilisation | tagesaufnahme von calcium [SUMMARY]
null
null
null
null
[CONTENT] der | und | die | eine | von | bei | vitamin | ein | zu | einer [SUMMARY]
[CONTENT] der | und | die | eine | von | bei | vitamin | ein | zu | einer [SUMMARY]
null
null
null
null
[CONTENT] eine | referenzbereiche | von | und | laborchemische | assessment | sowie | sollte | bei | das [SUMMARY]
[CONTENT] der | und | eine | die | von | bei | ein | ist | für | vitamin [SUMMARY]
null
null
null
null
[CONTENT] ||| ||| ||| at least two [SUMMARY]
[CONTENT] ||| ||| ||| ||| ||| creatine kinase | CK ||| ||| ||| ||| ||| at least two [SUMMARY]
null
Updated Trends in Cancer in Japan: Incidence in 1985-2015 and Mortality in 1958-2018-A Sign of Decrease in Cancer Incidence.
33551387
Unlike many North American and European countries, Japan has observed a continuous increase in cancer incidence over the last few decades. We examined the most recent trends in population-based cancer incidence and mortality in Japan.
BACKGROUND
National cancer mortality data between 1958 and 2018 were obtained from published vital statistics. Cancer incidence data between 1985 and 2015 were obtained from high-quality population-based cancer registries maintained by three prefectures (Yamagata, Fukui, and Nagasaki). Trends in age-standardized rates (ASR) were examined using Joinpoint regression analysis.
METHODS
For males, all-cancer incidence increased between 1985 and 1996 (annual percent change [APC] +1.1%; 95% confidence interval [CI], 0.7-1.5%), increased again in 2000-2010 (+1.3%; 95% CI, 0.9-1.8%), and then decreased until 2015 (-1.4%; 95% CI, -2.5 to -0.3%). For females, all-cancer incidence increased until 2010 (+0.8%; 95% CI, 0.6-0.9% in 1985-2004 and +2.4%; 95% CI, 1.3-3.4% in 2004-2010), and stabilized thereafter until 2015. The post-2000 increase was mainly attributable to prostate in males and breast in females, which slowed or levelled during the first decade of the 2000s. After a sustained increase, all-cancer mortality for males decreased in 1996-2013 (-1.6%; 95% CI, -1.6 to -1.5%) and accelerated thereafter until 2018 (-2.5%; 95% CI, -2.9 to -2.0%). All-cancer mortality for females decreased intermittently throughout the observation period, with the most recent APC of -1.0% (95% CI, -1.1 to -0.9%) in 2003-2018. The recent decreases in mortality in both sexes, and in incidence in males, were mainly attributable to stomach, liver, and male lung cancers.
RESULTS
The ASR of all-cancer incidence began decreasing significantly in males and levelled off in females in 2010.
CONCLUSION
[ "Female", "Humans", "Incidence", "Japan", "Male", "Mortality", "Neoplasms", "Registries" ]
8187612
INTRODUCTION
Globally, the incidence of major cancers is entering a decreasing phase. For example, a significant decrease in age-standardized rate (ASR) has been observed for colorectal, male lung, female breast, and cervical cancer incidence in North American1,2 and European countries,3,4 as well as in Asian populations.5 These decreasing trends have been interpreted as having resulted from effective cancer control policies, including tobacco control and screening interventions.6–10 By contrast, Japan is reported to be experiencing a significant increase in ASR of cancer incidence for all major cancer sites except stomach and liver.11–13 These reports are relatively outdated, however, with the most recent year of diagnosis being 2012, and subsequent trends in incidence in Japan have yet to be identified. Further, trends in cancers other than major cancer sites have not been sufficiently documented.11–16 Japan enacted the Cancer Registration Promotion Act in 2013, under which the registration of diagnosed cancer cases was started in 2016 under the new National Cancer Registry (NCR) system. Incidence data collected under the NCR to date have been published for diagnosed cases in 2016 and 2017 only, and at least a decade will be required before the secular trends in these national data can be evaluated. Accordingly, the present study aimed to examine secular trends in cancer incidence and mortality in Japan, including non-major cancer sites, with the use of the most recent data available from selected high-quality population-based cancer registries and national vital statistics.
METHODS
Cancer incidence data were obtained in the framework of the Monitoring of Cancer Incidence in Japan (MCIJ) project.17,18 Annual cancer incidence data between 1985 and 2015 (before introduction of the NCR) were obtained from population-based cancer registries in four prefectures (Miyagi, Yamagata, Fukui, and Nagasaki), which were selected because of the availability of long-term high-quality data.19 Although we had confirmed the validity of using data from these four prefectural cancer registries in terms of representativeness for Japan,19 the updating of data from Miyagi Prefecture was unstable as a result of the ongoing data transfer process related to the start of NCR. Therefore, as adopted in our previous study,11,12 the present study used cancer incidence data from three prefectures (Yamagata, Fukui, and Nagasaki). The validity of using these three prefectures was previously confirmed.12 For cancer mortality, we obtained the population and number of annual cancer deaths between 1958 and 2018 from published vital statistics.20 Prefectural population data were obtained from the Center for Cancer Control and Information Services, National Cancer Center, Japan for the years 1985–2006,21 and from the Bureau of Statistics, Ministry of Internal Affairs and Communications for the years 2007–2015.22 We analyzed site-specific cancers and all cancers combined with reference to the International Classification of Diseases (ICD) version 10 codes (C00–C96, additionally D00–D09 for incidence, and C00–C97 for mortality). Twenty-five cancer sites were selected according to the list of cancers adopted by the National Cancer Center, Japan,23 which were the same as those analyzed in our previous analysis.11,12 We defined “major cancer sites” as the five leading cancers, and “sub-major cancer sites” as the sixth to tenth leading cancers in the latest cancer statistics (either in males or females and in incidence or mortality).23 Specifically, major cancer sites were stomach, colon/rectum, liver, pancreas, lung, female breast, uterus, and prostate; sub-major cancer sites were esophagus, gallbladder and bile ducts, ovary, urinary bladder, kidney and other urinary organs (except bladder), thyroid, and malignant lymphoma. For these major and sub-major cancers, we discussed potential factors underlying the observed trends in incidence and mortality. We added the analysis of all-cancer incidence excluding stomach, stomach and liver, prostate, and female breast to examine the effect of these influential cancer sites. ASR were standardized to the 1985 model Japanese population for cancer incidence and mortality. A Joinpoint regression model24 was applied using the Joinpoint Regression Program version 4.7.0.0, developed by the United States National Cancer Institute. In the Joinpoint Regression analysis, the number of incidence or death was assumed to follow a Poisson distribution; the maximum number of joinpoints was set at five; the minimum number of observations from a joinpoint to either end of the data was set at two; and the minimum number of observations between two joinpoints was set at three.
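For readers unfamiliar with Joinpoint output, the quantity it reports per segment can be illustrated compactly: within one segment, log(ASR) is modeled as linear in calendar year, and the annual percent change (APC) is (exp(slope) - 1) * 100. The sketch below fits a single segment with numpy; it does not search for joinpoints (the study uses the NCI Joinpoint program for that), and the yearly rates are illustrative values consistent with the reported male 2010–2015 decline.

```python
# Minimal sketch of the quantity behind joinpoint output: within one
# segment, log(ASR) is modeled as linear in calendar year, and the annual
# percent change (APC) is (exp(slope) - 1) * 100. Joinpoint detection
# itself is omitted; the rates below are illustrative.

import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
asr = np.array([464.5, 458.0, 452.1, 446.0, 438.9, 431.8])

slope, _ = np.polyfit(years, np.log(asr), 1)   # log-linear fit
apc = (np.exp(slope) - 1) * 100
print(f"APC: {apc:.2f}% per year")             # about -1.4% for these values
```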
To identify cancer sites contributing to the recent decrease in all-cancer mortality rates, we calculated the degree of contribution of each cancer site using the same method as that adopted in our previous study.12 Briefly, we calculated the average APC (AAPC) during the last 10 years for the trend of all-cancer and site-specific ASR of mortality for each sex, calculated the amount of change in ASR by multiplying the 10th power of (1+AAPC) by the ASR in the first year of the 10-year period, and then calculated the proportion of each cancer site in terms of the amount of change. For cancer incidence, since a joinpoint was observed during the last 10 years for both sexes (more specifically, the trend changed from significant increase to significant decrease or levelling off), the contribution of each cancer site was calculated using the same method as that used for cancer mortality, during each of the last significant increasing segment and the subsequent decreasing segment (if significant). This study was approved by the institutional review board of the National Cancer Center, Japan (2019-202).
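The contribution calculation described above can likewise be sketched in a few lines: project each site's ASR forward with its AAPC, take the implied 10-year change, and express it as a share of the combined change. The AAPC and baseline ASR inputs below are illustrative, not the study's estimates.

```python
# Minimal sketch of the site-contribution calculation described above:
# project each site's ASR 10 years ahead with its AAPC, take the implied
# change, and express it as a share of the combined change. The AAPC and
# ASR inputs are illustrative, not the study's actual estimates.

def asr_change(asr_start, aapc_pct, years=10):
    return asr_start * ((1 + aapc_pct / 100) ** years - 1)

sites = {"stomach": (60.0, -4.0), "liver": (30.0, -6.0), "lung": (80.0, -2.0)}
changes = {s: asr_change(a, p) for s, (a, p) in sites.items()}
total = sum(changes.values())
for site, ch in changes.items():
    print(f"{site}: {100 * ch / total:.1f}% of the combined decrease")
```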
RESULTS
Figure 1A shows annual trends in ASRs of incidence for all-cancer and major cancer sites in the three selected prefectures in Japan. The ASR of all-cancer incidence for males was 376.2 in 1985, which then peaked at 464.5 in 2010 and decreased to 431.8 in 2015. For females, the ASR was 228.3 in 1985, which then peaked at 307.5 in 2013 and decreased to 302.2 in 2015. Trends for sub-major cancers are shown in Figure 1B. Table 1 shows the results of the Joinpoint regression analysis on the trends in all-cancer incidence in the three selected prefectures. For males and females combined, the ASR of all-cancer incidence intermittently increased from 1985 through 2010 with a significant APC of +1.0% between 1985–1996 and +1.7% between 2000–2010, and then levelled off after 2010. Similar patterns were seen in the separate analyses for males and females; for males the decrease after 2010 was statistically significant. Overall results were the same when stomach and liver cancers were excluded, when prostate or female breast cancer was excluded (Table 1) or when the data were restricted to ages under 75 years old (eTable 1). Corresponding results that included carcinoma in situ are shown in eTable 2. ICD-10, International Classification of Diseases, 10th revision. aYamagata, Fukui, and Nagasaki Prefectures. *Annual % change was statistically significantly different from zero (P < 0.05). Table 2A and Table 2B show the corresponding site-specific results of cancer incidence for males and females, respectively. For major cancer sites in males, pancreatic and prostate cancers showed significant increases through the observation period. The increase in pancreatic cancer was monotonic. On the other hand, prostate cancer significantly increased in the most recent segment, but the APC was much smaller than in the previous segment (1.3% in 2004–2015 vs 22.4% in 2000–2004). Major cancers that showed a significant decrease during the most recent segment (the period including the most recent year) were stomach, liver, and lung. Among sub-major cancer sites, esophagus, kidney and other urinary organs except bladder, and malignant lymphoma showed a significant increase in the most recent segment, while gallbladder and bile ducts and urinary bladder showed a significant decrease. For females, significantly increasing major cancers in the most recent segment were colon, rectum, (also colon and rectum combined), pancreas, lung, cervix uteri, and corpus uteri (also uterus as a whole). Notably, a long-term increase in breast cancer (APC 4.0% in 1985–2010) stopped in 2010. Thyroid cancer significantly increased in 2002–2008 (APC 6.5%), but levelled off thereafter. Significant decreases were seen in the most recent segment for stomach and liver. Among sub-major cancer sites, esophagus, ovary, kidney, and malignant lymphoma showed significant increases in the most recent segment, while gallbladder and urinary bladder showed a significant decrease. Among the other cancer sites, oral cavity and pharynx, skin and multiple myeloma showed a significant increase in both sexes. ICD-10, International Classification of Diseases, 10th revision. aYamagata, Fukui, and Nagasaki Prefectures. *Annual % change was statistically significantly different from zero (P < 0.05). ICD-10, International Classification of Diseases, 10th revision. aYamagata, Fukui, and Nagasaki Prefectures. *Annual % change was statistically significantly different from zero (P < 0.05).
Figure 2 shows the contribution of cancer sites to the significant increase or decrease in all-cancer incidence. For males, prostate cancer accounted for 64.5% of the most recent significant increase in all-cancer incidence between 2000 and 2010. The contributions of other cancer sites were less than 10% (lung: 9.3%, malignant lymphoma: 5.8%, kidney and other urinary organs except bladder: 5.4%, and oral cavity and pharynx: 3.8%). For females, the largest contribution of cancer site to the significant all-cancer increase in incidence between 2004 and 2010 was breast (51.1%), followed by thyroid (8.8%), lung (8.6%), and colon/rectum (7.2%). For males, all-cancer incidence significantly decreased after 2010, and this decrease was mainly accounted for by stomach (41.1%), lung (26.8%), and liver (24.1%) cancers. For females, there was no significant increase in all-cancer incidence after 2010. Figure 3A shows the annual trends in ASRs of mortality for all-cancer and major cancer sites in Japan (national data). The ASR of all-cancer mortality for males was 182.6 in 1958, which then peaked at 226.1 in 1995 and decreased to 152.1 in 2018. For females, the ASR was 130.7 in 1958, which then peaked at 132.0 in 1960 and decreased to 84.5 in 2015. Trends of sub-major cancers are shown in Figure 3B. Table 3 shows the results of the Joinpoint regression analysis of all-cancer mortality from the national data of Japan. For males, the ASR of all-cancer mortality intermittently increased from 1958 through 1996 and decreased thereafter. The decrease accelerated from 2013 from the APC of −1.6% (1996–2013) to −2.5% (2013–2018). For females, the ASR of all-cancer mortality showed a long-term decreasing trend from 1968, with the APCs of −0.8% (1968–1993), −1.4% (1997–2003), and −1.0% (2003–2018). A similar decreasing trend was seen for males and females combined, with the APCs of −0.2% (1966–1993), −1.4% (1997–2015), and −2.2% (2015–2018). Overall results were the same when the data were restricted to ages under 75 years old (eTable 3). ICD-10, International Classification of Diseases, 10th revision. *Annual % change was statistically significantly different from zero (P < 0.05). Table 4A and Table 4B show the corresponding site-specific results of cancer mortality for males and females, respectively. For major cancer sites in males, pancreatic cancer alone showed a significant increase during the most recent segment. Conversely, all the remaining major cancer sites showed a significant decrease during the most recent segment: stomach, colon, rectum (also colon and rectum combined), liver, lung, and prostate. Among sub-major cancer sites also, all cancer sites showed a significant decrease during the most recent segment, except malignant lymphoma, which significantly decreased in 2001–2005. For females, significantly increasing major cancers in the most recent segment were pancreas, breast, cervix uteri, and corpus uteri, (also uterus as a whole). Similarly to males, all the remaining major cancer sites showed a significant decrease during the most recent segment, except colon, which significantly decreased in 1993–2008 and levelled off thereafter. Among sub-major cancer sites, all cancer sites except kidney and malignant lymphoma showed a significant decrease during the most recent segment. ICD-10, International Classification of Diseases, 10th revision. *Annual % change was statistically significantly different from zero (P < 0.05). ICD-10, International Classification of Diseases, 10th revision. 
*Annual % change was statistically significantly different from zero (P < 0.05). Figure 4 shows the contribution of specific cancer sites to the significant decrease in all-cancer mortality in the most recent 10 years (2009–2018). For males, stomach cancer accounted for 29.8% of the decrease of all-cancer mortality, followed by liver (25.2%) and lung (22.3%). These three sites accounted for 77.3% of the all-cancer decrease. Esophagus, and gallbladder and bile ducts accounted for less than 10% (7.1% and 4.2%, respectively). For females also, stomach, liver, and lung accounted for nearly 75% of the all-cancer decrease (34.4%, 28.7%, and 11.8%, respectively). Unlike the result in males, however, the contribution of gallbladder and bile ducts was slightly larger than that of lung (12.6%), while ovary contributed 3.7%. The contributions of cancer sites to the significant changes in all-cancer incidence and mortality under 75 years old are shown in eFigure 1 and eFigure 2, respectively. The largest contributions of prostate in males and breast in females were the same as in the results for all ages. There was no significant decrease in all-cancer incidence when age was restricted to under 75 years old. The large contributions of stomach, liver, lung, and gallbladder and bile ducts to the recent reduction in all-cancer mortality were also the same as the result of all ages. Figure 5A shows the trends in incidence and mortality of all cancers combined. There was a marked divergence between the trends in incidence and mortality in both males and females. For males, the divergence became wider after the late 1990s due to the decrease in mortality. For females, the gap between incidence and mortality widened constantly. Figure 5B and Figure 5C show the trends in incidence and mortality of major and sub-major cancer sites, respectively. Similar to all cancers combined, a divergence between incidence and mortality was a common feature observed in almost all cancer sites. The results of cancers other than major and sub-major sites are shown in eFigure 3. Table 5 summarizes the description of observed trends in incidence and mortality and potential interpretations for each cancer site. Decreases in exposure to major risk factors such as infectious agents and tobacco smoking were considered to be associated with the decreases in incidence and/or mortality of related cancers (stomach, liver, lung, and urinary bladder). Effects of the introduction and dissemination of cancer screening were considered to have been reflected in the trends in incidence and/or mortality of several cancers (stomach, colon/rectum, female breast, and prostate), among which the increase in prostate cancer incidence was most remarkable. Improvements in diagnostic measures and treatments were common factors associated with the divergence of incidence and mortality. Potential overdiagnosis was considered to be included in the increase in incidence of prostate and thyroid cancers. ICD-10, International Classification of Diseases, 10th revision.
null
null
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION" ]
[ "Globally, the incidence of major cancers is entering a decreasing phase. For example, a significant decrease in age-standardize rate (ASR) has been observed for colorectal, male lung, female breast, and cervical cancer incidence in North American1,2 and European countries,3,4 as well as in Asian populations.5 These decreasing trends have been interpreted as having resulted from effective cancer control policies, including tobacco control and screening interventions.6–10 By contrast, Japan is reported to be experiencing a significant increase in ASR of cancer incidence for all major cancer sites except stomach and liver.11–13 These reports are relatively outdated, however, with the most recent year of diagnosis being 2012, and subsequent trends in incidence in Japan have yet to be identified. Further, trends in cancers other than major cancer sites have not been sufficiently documented.11–16 Japan enacted the Cancer Registration Promotion Act in 2013, under which the registration of diagnosed cancer cases was started in 2016 under the new National Cancer Registry (NCR) system. Incidence data collected under the NCR to date has been published for diagnosed cases in 2016 and 2017 only, and at least a decade will be required before the secular trends in these national data can be evaluated. Accordingly, the present study aimed to examine secular trends in cancer incidence and mortality in Japan, including non-major cancer sites, with the use of the most recent data available from selected high-quality population-based cancer registries and national vital statistics.", "Cancer incidence data were obtained in the framework of the Monitoring of Cancer Incidence in Japan (MCIJ) project.17,18 Annual cancer incidence data between 1985 and 2015 (before introduction of the NCR) were obtained from population-based cancer registries in four prefectures (Miyagi, Yamagata, Fukui, and Nagasaki), which were selected because of the availability of long-term high-quality data.19 Although we had confirmed the validity of using data from these four prefectural cancer registries in terms of representativeness for Japan,19 the updating of data from Miyagi Prefecture was unstable as a result of the ongoing data transfer process related to the start of NCR. Therefore, as adopted in our previous study,11,12 the present study used cancer incidence data from three prefectures (Yamagata, Fukui, and Nagasaki). The validity of using these three prefectures was previously confirmed.12\nFor cancer mortality, we obtained the population and number of annual cancer deaths between 1958 and 2018 from published vital statistics.20 Prefectural population data were obtained from the Center for Cancer Control and Information Services, National Cancer Center, Japan for the years 1985–2006,21 and from the Bureau of Statistics, Ministry of Internal Affairs and Communications for the years 2007–2015.22\nWe analyzed site-specific cancers and all cancers combined with reference to the International Classification of Diseases (ICD) version 10 codes (C00–C96, additionally D00–D09 for incidence, and C00–C97 for mortality). 
Twenty-five cancer sites were selected according to the list of cancers adopted by the National Cancer Center, Japan,23 which were the same as those analyzed in our previous analysis.11,12 We defined “major cancer sites” as the five leading cancers, and “sub-major cancer sites” as the sixth to tenth leading cancers in the latest cancer statistics (either in males or females and in incidence or mortality).23 Specifically, major cancer sites were stomach, colon/rectum, liver, pancreas, lung, female breast, uterus, and prostate; sub-major cancer sites were esophagus, gallbladder and bile ducts, ovary, urinary bladder, kidney and other urinary organs (except bladder), thyroid, and malignant lymphoma. For these major and sub-major cancers, we discussed potential factors underlying the observed trends in incidence and mortality. We added the analysis of all-cancer incidence excluding stomach, stomach and liver, prostate, and female breast to examine the effect of these influential cancer sites. ASR were standardized to the 1985 model Japanese population for cancer incidence and mortality.\nA Joinpoint regression model24 was applied using the Joinpoint Regression Program version 4.7.0.0, developed by the United States National Cancer Institute. In the Joinpoint Regression analysis, the number of incidence or death was assumed to follow a Poisson distribution; the maximum number of joinpoints was set at five; the minimum number of observations from a joinpoint to either end of the data was set at two; and the minimum number of observations between two joinpoints was set at three.\nTo identify cancer sites contributing to the recent decrease in all-cancer mortality rates, we calculated the degree of contribution of each cancer site using the same method as that adopted in our previous study.12 Briefly, we calculated the average APC (AAPC) during the last 10 years for the trend of all-cancer and site-specific ASR of mortality for each sex, calculated the amount of change in ASR by multiplying the 10th power of (1+AAPC) by the ASR in the first year of the 10-year period, and then calculated the proportion of each cancer site in terms of the amount of change. For cancer incidence, since a joinpoint was observed during the last 10 years for both sexes (more specifically, the trend changed from significant increase to significant decrease or levelling off), the contribution of each cancer site was calculated using the same method as that used for cancer mortality, during each of the last significant increasing segment and the subsequent decreasing segment (if significant).\nThis study was approved by the institutional review board of the National Cancer Center, Japan (2019-202).", "Figure 1A shows annual trends in ASRs of incidence for all-cancer and major cancer sites in the three selected prefectures in Japan. The ASR of all-cancer incidence for males was 376.2 in 1985, which then peaked at 464.5 in 2010 and decreased to 431.8 in 2015. For females, the ASR was 228.3 in 1985, which then peaked at 307.5 in 2013 and decreased to 302.2 in 2015. Trends for sub-major cancers are shown in Figure 1B.\nTable 1 shows the results of the Joinpoint regression analysis on the trends in all-cancer incidence in the three selected prefectures. For males and females combined, the ASR of all-cancer incidence intermittently increased from 1985 through 2010 with a significant APC of +1.0% between 1985–1996 and +1.7% between 2000–2010, and then levelled off after 2010.
Similar patterns were seen in the separate analyses for males and females; for males the decrease after 2010 was statistically significant. Overall results were the same when stomach and liver cancers were excluded, when prostate or female breast cancer was excluded (Table 1) or when the data were restricted to ages under 75 years old (eTable 1). Corresponding results that included carcinoma in situ are shown in eTable 2.\nICD-10, International Classification of Diseases, 10th revision.\naYamagata, Fukui, and Nagasaki Prefectures.\n*Annual % change was statistically significantly different from zero (P < 0.05).\nTable 2A and Table 2B show the corresponding site-specific results of cancer incidence for males and females, respectively. For major cancer sites in males, pancreatic and prostate cancers showed significant increases through the observation period. The increase in pancreatic cancer was monotonic. On the other hand, prostate cancer significantly increased in the most recent segment, but the APC was much smaller than in the previous segment (1.3% in 2004–2015 vs 22.4% in 2000–2004). Major cancers that showed a significant decrease during the most recent segment (the period including the most recent year) were stomach, liver, and lung. Among sub-major cancer sites, esophagus, kidney and other urinary organs except bladder, and malignant lymphoma showed a significant increase in the most recent segment, while gallbladder and bile ducts and urinary bladder showed a significant decrease. For females, significantly increasing major cancers in the most recent segment were colon, rectum, (also colon and rectum combined), pancreas, lung, cervix uteri, and corpus uteri (also uterus as a whole). Notably, a long-term increase in breast cancer (APC 4.0% in 1985–2010) stopped in 2010. Thyroid cancer significantly increased in 2002–2008 (APC 6.5%), but levelled off thereafter. Significant decreases were seen in the most recent segment for stomach and liver. Among sub-major cancer sites, esophagus, ovary, kidney, and malignant lymphoma showed significant increases in the most recent segment, while gallbladder and urinary bladder showed a significant decrease. Among the other cancer sites, oral cavity and pharynx, skin and multiple myeloma showed a significant increase in both sexes.\nICD-10, International Classification of Diseases, 10th revision.\naYamagata, Fukui, and Nagasaki Prefectures.\n*Annual % change was statistically significantly different from zero (P < 0.05).\nICD-10, International Classification of Diseases, 10th revision.\naYamagata, Fukui, and Nagasaki Prefectures.\n*Annual % change was statistically significantly different from zero (P < 0.05).\nFigure 2 shows the contribution of cancer sites to the significant increase or decrease in all-cancer incidence. For males, prostate cancer accounted for 64.5% of the most recent significant increase in all-cancer incidence between 2000 and 2010. The contributions of other cancer sites were less than 10% (lung: 9.3%, malignant lymphoma: 5.8%, kidney and other urinary organs except bladder: 5.4%, and oral cavity and pharynx: 3.8%). For females, the largest contribution of cancer site to the significant all-cancer increase in incidence between 2004 and 2010 was breast (51.1%), followed by thyroid (8.8%), lung (8.6%), and colon/rectum (7.2%). For males, all-cancer incidence significantly decreased after 2010, and this decrease was mainly accounted for by stomach (41.1%), lung (26.8%), and liver (24.1%) cancers. 
For females, there was no significant increase in all-cancer incidence after 2010.\nFigure 3A shows the annual trends in ASRs of mortality for all-cancer and major cancer sites in Japan (national data). The ASR of all-cancer mortality for males was 182.6 in 1958, which then peaked at 226.1 in 1995 and decreased to 152.1 in 2018. For females, the ASR was 130.7 in 1958, which then peaked at 132.0 in 1960 and decreased to 84.5 in 2015. Trends of sub-major cancers are shown in Figure 3B.\nTable 3 shows the results of the Joinpoint regression analysis of all-cancer mortality from the national data of Japan. For males, the ASR of all-cancer mortality intermittently increased from 1958 through 1996 and decreased thereafter. The decrease accelerated from 2013 from the APC of −1.6% (1996–2013) to −2.5% (2013–2018). For females, the ASR of all-cancer mortality showed a long-term decreasing trend from 1968, with the APCs of −0.8% (1968–1993), −1.4% (1997–2003), and −1.0% (2003–2018). A similar decreasing trend was seen for males and females combined, with the APCs of −0.2% (1966–1993), −1.4% (1997–2015), and −2.2% (2015–2018). Overall results were the same when the data were restricted to ages under 75 years old (eTable 3).\nICD-10, International Classification of Diseases, 10th revision.\n*Annual % change was statistically significantly different from zero (P < 0.05).\nTable 4A and Table 4B show the corresponding site-specific results of cancer mortality for males and females, respectively. For major cancer sites in males, pancreatic cancer alone showed a significant increase during the most recent segment. Conversely, all the remaining major cancer sites showed a significant decrease during the most recent segment: stomach, colon, rectum (also colon and rectum combined), liver, lung, and prostate. Among sub-major cancer sites also, all cancer sites showed a significant decrease during the most recent segment, except malignant lymphoma, which significantly decreased in 2001–2005. For females, significantly increasing major cancers in the most recent segment were pancreas, breast, cervix uteri, and corpus uteri, (also uterus as a whole). Similarly to males, all the remaining major cancer sites showed a significant decrease during the most recent segment, except colon, which significantly decreased in 1993–2008 and levelled off thereafter. Among sub-major cancer sites, all cancer sites except kidney and malignant lymphoma showed a significant decrease during the most recent segment.\nICD-10, International Classification of Diseases, 10th revision.\n*Annual % change was statistically significantly different from zero (P < 0.05).\nICD-10, International Classification of Diseases, 10th revision.\n*Annual % change was statistically significantly different from zero (P < 0.05).\nFigure 4 shows the contribution of specific cancer sites to the significant decrease in all-cancer mortality in the most recent 10 years (2009–2018). For males, stomach cancer accounted for 29.8% of the decrease of all-cancer mortality, followed by liver (25.2%) and lung (22.3%). These three sites accounted for 77.3% of the all-cancer decrease. Esophagus, and gallbladder and bile ducts accounted for less than 10% (7.1% and 4.2%, respectively). For females also, stomach, liver, and lung accounted for nearly 75% of the all-cancer decrease (34.4%, 28.7%, and 11.8%, respectively). 
Unlike the result in males, however, the contribution of gallbladder and bile ducts was slightly larger than that of lung (12.6%), while ovary contributed 3.7%.\nThe contributions of cancer sites to the significant changes in all-cancer incidence and mortality under 75 years old are shown in eFigure 1 and eFigure 2, respectively. The largest contributions of prostate in males and breast in females were the same as in the results for all ages. There was no significant decrease in all-cancer incidence when age was restricted to under 75 years old. The large contributions of stomach, liver, lung, and gallbladder and bile ducts to the recent reduction in all-cancer mortality were also the same as the result of all ages.\nFigure 5A shows the trends in incidence and mortality of all cancers combined. There was a marked divergence between the trends in incidence and mortality in both males and females. For males, the divergence became wider after the late 1990s due to the decrease in mortality. For females, the gap between incidence and mortality widened constantly. Figure 5B and Figure 5C show the trends in incidence and mortality of major and sub-major cancer sites, respectively. Similar to all cancers combined, a divergence between incidence and mortality was a common feature observed in almost all cancer sites. The results of cancers other than major and sub-major sites are shown in eFigure 3.\nTable 5 summarizes the description of observed trends in incidence and mortality and potential interpretations for each cancer site. Decreases in exposure to major risk factors such as infectious agents and tobacco smoking were considered to be associated with the decreases in incidence and/or mortality of related cancers (stomach, liver, lung, and urinary bladder). Effects of the introduction and dissemination of cancer screening were considered to have been reflected in the trends in incidence and/or mortality of several cancers (stomach, colon/rectum, female breast, and prostate), among which the increase in prostate cancer incidence was most remarkable. Improvements in diagnostic measures and treatments were common factors associated with the divergence of incidence and mortality. Potential overdiagnosis was considered to be included in the increase in incidence of prostate and thyroid cancers.\nICD-10, International Classification of Diseases, 10th revision.", "This study analyzed the trends in cancer incidence and mortality in Japan with updated representative datasets. A notable finding was that the ASR of all-cancer incidence started to significantly decrease in males and level off in females in 2010, after a long-term intermittent increase.11,12,25 The leading cancer sites that contributed to the past long-term increase were prostate in males and breast in females, but these slowed down or levelled off during the first decade of the 2000s. For males, the main contributing cancer sites to this significant decrease were stomach, liver and lung cancers.\nThe ASR of prostate cancer incidence dramatically increased between 2000 and 2004 (APC 22.4%) and slowed down thereafter (APC 1.3% in 2004–2015). 
As summarized in Table 5, this rapid increase in the early 2000s was prominent for localized cancer and suggested the contribution of the spread of prostate-specific antigen (PSA) screening.26 Indeed, PSA screening increased almost threefold in 2003 both in terms of the number of participants and the number of municipalities that adopted it as an organizational screening program, and subsequently shifted to a slow increase.27,28\nThe levelling off of ASR of female breast cancer incidence after 2010 observed in the present study is an unprecedented phenomenon.11–13 The mortality of female breast cancer12 also showed a slowing down of its increasing trend. Long-term increase in incidence as well as mortality can be interpreted as the effect of changes in reproductive factors in Japanese females (Table 5).11,29,30 This effect might be converging in breast cancer, but cancers of the corpus uteri and ovary, which share common reproductive risk factors, continued to increase in incidence. The participation rate of female breast cancer screening (mammography) has been increasing in Japan, and early-stage cancer and carcinoma in situ of the breast was reported to have increased in a study using a prefectural cancer registry.31 Together with the slowing down of the increasing trend in mortality observed in the present study, these changes in trends could have partially reflected the dissemination of breast cancer screening.11\nThe significant decrease in ASR of all-cancer incidence in males after 2010 was accounted for by stomach, liver, and lung cancers. These cancer sites also contributed to the decrease in all-cancer mortality in both males and females (77.3% and 74.9%, respectively; Figure 4). Stomach cancer consistently decreased during the whole observation period with regard to both incidence and mortality, which can be interpreted to be the result of dramatic reduction of Helicobacter pylori (H. pylori) infection, combined with improvements in sanitation, diet (reduced salt intake), and food preservation techniques (Table 5).11,32,33 A study using data from a hospital-based registry in 2007–2015 showed a decrease in the proportion of cancer in the pylorus, the main subsite of H. pylori-induced cancer.34 A related factor is the eradication of H. pylori, which was covered by the universal health insurance in Japan in 2013 as part of the treatment for chronic gastritis as well as for gastric and duodenal ulcer. Indeed, the reported number of eradications doubled after this extension of coverage.35 Although we observed an acceleration in the decrease in stomach cancer mortality in males (APC −3.2% in 1996–2012, −4.6% in 2012–2018; Table 4A), this was not consistent for incidence nor in females. Thus, the effect of H. pylori eradication is not clear at the population level.\nEvidence of a relation between the absence of H. pylori infection and risk of adenocarcinoma of the esophagus is accumulating.36 The long-term decrease in the prevalence of H. pylori in Japan may be related to the increase in esophageal cancer incidence observed in the present study, through the pathway from gastroesophageal reflux disease to the occurrence of esophageal cancer (Table 5).\nLiver cancer is another site that showed a dramatic decrease in incidence and mortality. 
As discussed in previous literature, the long-term decrease in liver cancer in Japan is mainly due to the decrease in the prevalence of hepatitis C virus (HCV).11,13,37 The observed acceleration both in incidence and mortality in 2008 or 2010 (Table 2 and Table 4) can be interpreted as a reflection of therapeutic improvements made in the early 2000s such as pegylated interferon in 2004 and a protease inhibitor (telaprevir) in 2011.11,38–40 Improvements in differential diagnosis could have also affected the divergence between incidence and mortality since the 1980s (Table 5).\nCancer of the gallbladder and bile ducts had a similar pattern of trends to that of liver cancer. Chronic infections, as well as gallstones and obesity, have been proposed as risk factors for gallbladder cancer (Table 5).41–43 Control over communicable diseases could have resulted in the reduction in the incidence rate of gallbladder cancer.44 Regarding cholangiocarcinoma (CCA), overlapping of risk factors and misclassification between intra- and extra-hepatic CCA45 might explain the similarity to the trends in liver cancer and cancer of the combined category of gallbladder and bile ducts.\nThe decrease in lung cancer incidence in males was a phenomenon that had never been observed in previous literature.11–13 Trends in lung cancer incidence in Osaka by histological type revealed that the ASR of adenocarcinoma continuously increased, whereas those of squamous and small-cell carcinomas decreased from the 1990s, which was interpreted to be the result of the spread of diagnostic use of computed tomography and the decreasing trend in smoking prevalence, respectively.46,47 Another possibility is the shift from nonfilter to filtered cigarettes in the consumption of tobacco products in Japan (Table 5), which may be more influential because the increase in adenocarcinoma was observed even before the introduction of major diagnostic advances.48 The decrease in overall lung cancer incidence observed in the present study could have reflected the predominant effect of declining smoking prevalence, although analysis stratified by histological type is needed to clarify this possibility. Cancers of the kidney and urinary tracts are also strongly related to tobacco smoking,49,50 but no similarity was found between the trends in these cancers and smoking prevalence except for bladder cancer incidence in males (Table 5).\nAn important feature of our results is that a decrease in incidence was not observed for colorectal cancer, which can be prevented by organizational screening. The ASR of colorectal cancer has been significantly decreasing in many countries.1–3,5 Using simulation modeling techniques, one study revealed that the reduction in colorectal cancer in the United States was a combined effect of cancer control measures for prevention and screening.6 Cervical cancer also showed a sharp contrast; the ASR of this cancer has been consistently decreasing overseas, including the Republic of Korea,1,2,5 whereas the present study showed a significant increase in incidence and mortality, just as was observed in our previous analysis.11–13 The increase in mortality of cervical cancer (cancer of the corpus uteri as well) in Japan should be interpreted with caution because it could have included the shift from cancer of the ‘uterus, not otherwise specified (NOS)’.
However, the proportion of NOS had been stable since the late 1990s, and the increase was also observed in incidence.11,13 Cervical cancer can be prevented by a combination of organizational screening and human papillomavirus (HPV) vaccination.7,9,51,52 In Japan, the national HPV vaccination program has been substantially halted by the fear of potential adverse effects.53,54 A simulation modeling study demonstrated that rapid restoration of vaccination coverage and catch-up for missed cohorts could avoid approximately 50,000 cervical cancer cases in 50 years.55 Realizing a reduction in colorectal and cervical cancer incidence by promoting primary and secondary preventive measures is a major challenge for Japan.\nPancreatic cancer was another example that showed a long-term increase both in incidence and mortality. An increase in risk factors, such as type 2 diabetes, may be related to the increase in incidence56 and mortality as well. Improvements in diagnostic measures and biopsy for histologic confirmation have also been proposed as underlying factors of the increasing trend in earlier years.57\nMonitoring cancer incidence trends is useful in examining the possibility of overdiagnosis at a population level. In the United States, prostate, female breast, skin, kidney, thyroid, and lung cancers have been listed as examples of potential overdiagnosis, characterized as a sharp increase in incidence in the absence of a clear change in mortality.58 In the present study, this typical pattern seemed to be observed for prostate and thyroid cancers (Figure 5 and Table 5). A common background factor of these cancers is the availability of non-invasive tests (ie, PSA for prostate cancer and ultrasonography for thyroid cancer).11,26,58–61 In addition to this descriptive approach, empirical or modelling approaches that compare screened and unscreened (or tested and untested) populations are needed to quantitatively examine overdiagnosis.62 Despite the possibility of overdiagnosis, both prostate and thyroid cancers showed significant decreases in mortality. Refinements in diagnosis, treatment, and disease management could have contributed to those trends in mortality.60,61,63,64 Malignant lymphoma showed a pattern of sharp increase in incidence and no clear change in mortality, but changes in lifestyle and improvements in diagnosis and prognosis, as well as in coding of registry data, have been proposed as underlying factors.65,66\nSome of the reduction in cancer mortality observed in the present and previous studies11–13 likely reflects improvements in the prognosis of cancer patients. This effect can be seen in the divergence between trends in incidence and mortality (Figure 5), which is consistent with the evidence of improved diagnosis, treatment, and disease management cited in Table 5. Indeed, several studies on hematological cancers showed that the introduction of a new drug or treatment was associated with a reduction in mortality at the national level.67–69 Studies using population-based cancer registries have also reported increases in survival rates that can be interpreted as a reflection of improvements in treatment.46,70–73 Our study group is planning to update these reports using the most recent MCIJ dataset (patients diagnosed in 1993–2015).\nA strength of the present study is the representativeness of the data. Mortality data were from the national vital statistics and based on a complete mandatory reporting system.
Incidence data were from three prefectures, but the representativeness of the data in terms of secular trends has been validated.12,19\nOne of the limitations of the present study is that the trends in cancer incidence might have been affected by an improvement in the completeness and data quality of prefectural cancer registries. Indeed, even in the three present prefectures that have long-term high-quality data, there was a slight increase in completeness and quality indices during the first decade of the 2000s (eFigure 4). The observed increases in incidence in this period might therefore reflect an improvement in data completeness. A second limitation is that our analysis was only descriptive. Furthermore, the cancers we analyzed were not grouped into clinically relevant subtypes. As stated above, further research is required to clarify factors underlying the observed cancer trends, such as analyses according to clinical stage or histological type and modelling approaches.\nIn conclusion, this analysis of cancer trends in Japan revealed that the ASR of all-cancer incidence started to decrease significantly in males and level off in females in 2010, after a long-term intermittent increase. The halt in this long-term increase in all-cancer incidence was mainly due to the slowing down of prostate and breast cancers in males and females, respectively. The ASR of all-cancer mortality continued to decrease, and the main contributing cancer sites were still stomach, liver, and male lung." ]
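For reference, the ASR, APC, and AAPC measures quoted throughout this record follow standard conventions (direct age standardization and the NCI Joinpoint formulation). The sketch below states those definitions; it is our summary of the standard methods, not text from the article itself.

```latex
% Direct age standardization to the 1985 model Japanese population:
% r_a = age-specific rate, w_a = standard-population weight with \sum_a w_a = 1
\mathrm{ASR} = \sum_a w_a \, r_a
% Within one Joinpoint segment, \log(\mathrm{ASR}_t) = \alpha + \beta t, so the
% annual percent change is
\mathrm{APC} = 100\,\bigl(e^{\beta} - 1\bigr)
% Over an interval spanning segments i with slopes \beta_i and lengths w_i:
\mathrm{AAPC} = 100\left(\exp\!\left(\frac{\sum_i w_i \beta_i}{\sum_i w_i}\right) - 1\right)
```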
[ "intro", "methods", "results", "discussion" ]
[ "incidence", "mortality", "neoplasms", "population surveillance", "vital statistics" ]
INTRODUCTION: Globally, the incidence of major cancers is entering a decreasing phase. For example, a significant decrease in age-standardized rate (ASR) has been observed for colorectal, male lung, female breast, and cervical cancer incidence in North American1,2 and European countries,3,4 as well as in Asian populations.5 These decreasing trends have been interpreted as having resulted from effective cancer control policies, including tobacco control and screening interventions.6–10 By contrast, Japan is reported to be experiencing a significant increase in ASR of cancer incidence for all major cancer sites except stomach and liver.11–13 These reports are relatively outdated, however, with the most recent year of diagnosis being 2012, and subsequent trends in incidence in Japan have yet to be identified. Further, trends in cancers other than major cancer sites have not been sufficiently documented.11–16 Japan enacted the Cancer Registration Promotion Act in 2013, under which the registration of diagnosed cancer cases was started in 2016 under the new National Cancer Registry (NCR) system. Incidence data collected under the NCR to date have been published for diagnosed cases in 2016 and 2017 only, and at least a decade will be required before the secular trends in these national data can be evaluated. Accordingly, the present study aimed to examine secular trends in cancer incidence and mortality in Japan, including non-major cancer sites, with the use of the most recent data available from selected high-quality population-based cancer registries and national vital statistics. METHODS: Cancer incidence data were obtained in the framework of the Monitoring of Cancer Incidence in Japan (MCIJ) project.17,18 Annual cancer incidence data between 1985 and 2015 (before the introduction of the NCR) were obtained from population-based cancer registries in four prefectures (Miyagi, Yamagata, Fukui, and Nagasaki), which were selected because of the availability of long-term high-quality data.19 Although we had confirmed the validity of using data from these four prefectural cancer registries in terms of representativeness for Japan,19 the updating of data from Miyagi Prefecture was unstable as a result of the ongoing data transfer process related to the start of the NCR. Therefore, as adopted in our previous study,11,12 the present study used cancer incidence data from three prefectures (Yamagata, Fukui, and Nagasaki). The validity of using these three prefectures was previously confirmed.12 For cancer mortality, we obtained the population and number of annual cancer deaths between 1958 and 2018 from published vital statistics.20 Prefectural population data were obtained from the Center for Cancer Control and Information Services, National Cancer Center, Japan for the years 1985–2006,21 and from the Bureau of Statistics, Ministry of Internal Affairs and Communications for the years 2007–2015.22 We analyzed site-specific cancers and all cancers combined with reference to the International Classification of Diseases (ICD) version 10 codes (C00–C96, additionally D00–D09 for incidence, and C00–C97 for mortality).
Twenty-five cancer sites were selected according to the list of cancers adopted by the National Cancer Center, Japan,23 which were the same as those analyzed in our previous analysis.11,12 We defined “major cancer sites” as the five leading cancers, and “sub-major cancer sites” as the sixth to tenth leading cancers in the latest cancer statistics (either in males or females and in incidence or mortality).23 Specifically, major cancer sites were stomach, colon/rectum, liver, pancreas, lung, female breast, uterus, and prostate; sub-major cancer sites were esophagus, gallbladder and bile ducts, ovary, urinary bladder, kidney and other urinary organs (except bladder), thyroid, and malignant lymphoma. For these major and sub-major cancers, we discussed potential factors underlying the observed trends in incidence and mortality. We added the analysis of all-cancer incidence excluding stomach, stomach and liver, prostate, and female breast to examine the effect of these influential cancer sites. ASR were standardized to the 1985 model Japanese population for cancer incidence and mortality. A Joinpoint regression model24 was applied using the Joinpoint Regression Program version 4.7.0.0, developed by the United States National Cancer Institute. In the Joinpoint Regression analysis, the number of incident cases or deaths was assumed to follow a Poisson distribution; the maximum number of joinpoints was set at five; the minimum number of observations from a joinpoint to either end of the data was set at two; and the minimum number of observations between two joinpoints was set at three. To identify cancer sites contributing to the recent decrease in all-cancer mortality rates, we calculated the degree of contribution of each cancer site using the same method as that adopted in our previous study.12 Briefly, we calculated the average APC (AAPC) during the last 10 years for the trends in all-cancer and site-specific ASR of mortality for each sex, calculated the amount of change in ASR as the difference between the ASR in the first year of the 10-year period and its projection obtained by multiplying it by the 10th power of (1 + AAPC), and then calculated the proportion of each cancer site in terms of the amount of change. For cancer incidence, since a joinpoint was observed during the last 10 years for both sexes (more specifically, the trend changed from significant increase to significant decrease or levelling off), the contribution of each cancer site was calculated using the same method as that used for cancer mortality, separately for the last significant increasing segment and the subsequent decreasing segment (if significant). This study was approved by the institutional review board of the National Cancer Center, Japan (2019-202). RESULTS: Figure 1A shows annual trends in ASRs of incidence for all-cancer and major cancer sites in the three selected prefectures in Japan. The ASR of all-cancer incidence for males was 376.2 in 1985, which then peaked at 464.5 in 2010 and decreased to 431.8 in 2015. For females, the ASR was 228.3 in 1985, which then peaked at 307.5 in 2013 and decreased to 302.2 in 2015. Trends for sub-major cancers are shown in Figure 1B. Table 1 shows the results of the Joinpoint regression analysis on the trends in all-cancer incidence in the three selected prefectures. For males and females combined, the ASR of all-cancer incidence intermittently increased from 1985 through 2010 with significant APCs of +1.0% in 1985–1996 and +1.7% in 2000–2010, and then levelled off after 2010.
Similar patterns were seen in the separate analyses for males and females; for males the decrease after 2010 was statistically significant. Overall results were the same when stomach and liver cancers were excluded, when prostate or female breast cancer was excluded (Table 1), or when the data were restricted to ages under 75 years old (eTable 1). Corresponding results that included carcinoma in situ are shown in eTable 2. ICD-10, International Classification of Diseases, 10th revision. aYamagata, Fukui, and Nagasaki Prefectures. *Annual % change was statistically significantly different from zero (P < 0.05). Table 2A and Table 2B show the corresponding site-specific results of cancer incidence for males and females, respectively. For major cancer sites in males, pancreatic and prostate cancers showed significant increases through the observation period. The increase in pancreatic cancer was monotonic. On the other hand, prostate cancer significantly increased in the most recent segment, but the APC was much smaller than in the previous segment (1.3% in 2004–2015 vs 22.4% in 2000–2004). Major cancers that showed a significant decrease during the most recent segment (the period including the most recent year) were stomach, liver, and lung. Among sub-major cancer sites, esophagus, kidney and other urinary organs except bladder, and malignant lymphoma showed a significant increase in the most recent segment, while gallbladder and bile ducts and urinary bladder showed a significant decrease. For females, significantly increasing major cancers in the most recent segment were colon, rectum (also colon and rectum combined), pancreas, lung, cervix uteri, and corpus uteri (also uterus as a whole). Notably, a long-term increase in breast cancer (APC 4.0% in 1985–2010) stopped in 2010. Thyroid cancer significantly increased in 2002–2008 (APC 6.5%), but levelled off thereafter. Significant decreases were seen in the most recent segment for stomach and liver. Among sub-major cancer sites, esophagus, ovary, kidney, and malignant lymphoma showed significant increases in the most recent segment, while gallbladder and urinary bladder showed a significant decrease. Among the other cancer sites, oral cavity and pharynx, skin, and multiple myeloma showed a significant increase in both sexes. ICD-10, International Classification of Diseases, 10th revision. aYamagata, Fukui, and Nagasaki Prefectures. *Annual % change was statistically significantly different from zero (P < 0.05). ICD-10, International Classification of Diseases, 10th revision. aYamagata, Fukui, and Nagasaki Prefectures. *Annual % change was statistically significantly different from zero (P < 0.05). Figure 2 shows the contribution of cancer sites to the significant increase or decrease in all-cancer incidence. For males, prostate cancer accounted for 64.5% of the most recent significant increase in all-cancer incidence between 2000 and 2010. The contributions of other cancer sites were less than 10% (lung: 9.3%, malignant lymphoma: 5.8%, kidney and other urinary organs except bladder: 5.4%, and oral cavity and pharynx: 3.8%). For females, the largest contribution to the significant increase in all-cancer incidence between 2004 and 2010 was breast (51.1%), followed by thyroid (8.8%), lung (8.6%), and colon/rectum (7.2%). For males, all-cancer incidence significantly decreased after 2010, and this decrease was mainly accounted for by stomach (41.1%), lung (26.8%), and liver (24.1%) cancers.
For females, there was no significant decrease in all-cancer incidence after 2010. Figure 3A shows the annual trends in ASRs of mortality for all-cancer and major cancer sites in Japan (national data). The ASR of all-cancer mortality for males was 182.6 in 1958, which then peaked at 226.1 in 1995 and decreased to 152.1 in 2018. For females, the ASR was 130.7 in 1958, which then peaked at 132.0 in 1960 and decreased to 84.5 in 2018. Trends of sub-major cancers are shown in Figure 3B. Table 3 shows the results of the Joinpoint regression analysis of all-cancer mortality from the national data of Japan. For males, the ASR of all-cancer mortality intermittently increased from 1958 through 1996 and decreased thereafter. The decrease accelerated in 2013, from an APC of −1.6% (1996–2013) to −2.5% (2013–2018). For females, the ASR of all-cancer mortality showed a long-term decreasing trend from 1968, with the APCs of −0.8% (1968–1993), −1.4% (1997–2003), and −1.0% (2003–2018). A similar decreasing trend was seen for males and females combined, with the APCs of −0.2% (1966–1993), −1.4% (1997–2015), and −2.2% (2015–2018). Overall results were the same when the data were restricted to ages under 75 years old (eTable 3). ICD-10, International Classification of Diseases, 10th revision. *Annual % change was statistically significantly different from zero (P < 0.05). Table 4A and Table 4B show the corresponding site-specific results of cancer mortality for males and females, respectively. For major cancer sites in males, pancreatic cancer alone showed a significant increase during the most recent segment. Conversely, all the remaining major cancer sites showed a significant decrease during the most recent segment: stomach, colon, rectum (also colon and rectum combined), liver, lung, and prostate. Among sub-major cancer sites also, all cancer sites showed a significant decrease during the most recent segment, except malignant lymphoma, which significantly decreased in 2001–2005. For females, significantly increasing major cancers in the most recent segment were pancreas, breast, cervix uteri, and corpus uteri (also uterus as a whole). Similarly to males, all the remaining major cancer sites showed a significant decrease during the most recent segment, except colon, which significantly decreased in 1993–2008 and levelled off thereafter. Among sub-major cancer sites, all cancer sites except kidney and malignant lymphoma showed a significant decrease during the most recent segment. ICD-10, International Classification of Diseases, 10th revision. *Annual % change was statistically significantly different from zero (P < 0.05). ICD-10, International Classification of Diseases, 10th revision. *Annual % change was statistically significantly different from zero (P < 0.05). Figure 4 shows the contribution of specific cancer sites to the significant decrease in all-cancer mortality in the most recent 10 years (2009–2018). For males, stomach cancer accounted for 29.8% of the decrease in all-cancer mortality, followed by liver (25.2%) and lung (22.3%). These three sites accounted for 77.3% of the all-cancer decrease. Esophagus, and gallbladder and bile ducts accounted for less than 10% (7.1% and 4.2%, respectively). For females also, stomach, liver, and lung accounted for nearly 75% of the all-cancer decrease (34.4%, 28.7%, and 11.8%, respectively). Unlike the result in males, however, the contribution of gallbladder and bile ducts was slightly larger than that of lung (12.6%), while ovary contributed 3.7%.
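The site-specific contributions quoted above follow the AAPC-based decomposition described in the Methods. Below is a minimal Python sketch of that arithmetic; the function and variable names are ours, and the input values are hypothetical placeholders rather than the study's estimates.

```python
# Sketch of the AAPC-based contribution calculation (hypothetical inputs).
def asr_change(asr_first_year: float, aapc_percent: float, years: int = 10) -> float:
    """Change in ASR over `years`, projecting the first-year ASR forward
    at the average annual percent change (AAPC)."""
    return asr_first_year * ((1 + aapc_percent / 100) ** years - 1)

def site_contributions(all_cancer, sites):
    """Proportion of the all-cancer ASR change attributable to each site."""
    total_change = asr_change(*all_cancer)
    return {site: asr_change(asr0, aapc) / total_change
            for site, (asr0, aapc) in sites.items()}

# Hypothetical (first-year ASR per 100 000, AAPC in %) -- not the study's data.
all_cancer = (200.0, -2.0)
sites = {"stomach": (40.0, -4.0), "liver": (25.0, -5.0), "lung": (45.0, -2.5)}
for site, share in site_contributions(all_cancer, sites).items():
    print(f"{site}: {100 * share:.1f}% of the all-cancer decrease")
```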
The contributions of cancer sites to the significant changes in all-cancer incidence and mortality under 75 years old are shown in eFigure 1 and eFigure 2, respectively. The largest contributions of prostate in males and breast in females were the same as in the results for all ages. There was no significant decrease in all-cancer incidence when age was restricted to under 75 years old. The large contributions of stomach, liver, lung, and gallbladder and bile ducts to the recent reduction in all-cancer mortality were also the same as the results for all ages. Figure 5A shows the trends in incidence and mortality of all cancers combined. There was a marked divergence between the trends in incidence and mortality in both males and females. For males, the divergence became wider after the late 1990s due to the decrease in mortality. For females, the gap between incidence and mortality widened constantly. Figure 5B and Figure 5C show the trends in incidence and mortality of major and sub-major cancer sites, respectively. Similar to all cancers combined, a divergence between incidence and mortality was a common feature observed in almost all cancer sites. The results of cancers other than major and sub-major sites are shown in eFigure 3. Table 5 summarizes the observed trends in incidence and mortality and potential interpretations for each cancer site. Decreases in exposure to major risk factors such as infectious agents and tobacco smoking were considered to be associated with the decreases in incidence and/or mortality of related cancers (stomach, liver, lung, and urinary bladder). Effects of the introduction and dissemination of cancer screening were considered to have been reflected in the trends in incidence and/or mortality of several cancers (stomach, colon/rectum, female breast, and prostate), among which the increase in prostate cancer incidence was most remarkable. Improvements in diagnostic measures and treatments were common factors associated with the divergence of incidence and mortality. Potential overdiagnosis was considered to have contributed to the increase in incidence of prostate and thyroid cancers. ICD-10, International Classification of Diseases, 10th revision. DISCUSSION: This study analyzed the trends in cancer incidence and mortality in Japan with updated representative datasets. A notable finding was that the ASR of all-cancer incidence started to significantly decrease in males and level off in females in 2010, after a long-term intermittent increase.11,12,25 The leading cancer sites that contributed to the past long-term increase were prostate in males and breast in females, but these slowed down or levelled off during the first decade of the 2000s. For males, the main contributing cancer sites to this significant decrease were stomach, liver, and lung cancers. The ASR of prostate cancer incidence dramatically increased between 2000 and 2004 (APC 22.4%) and slowed down thereafter (APC 1.3% in 2004–2015).
As summarized in Table 5, this rapid increase in the early 2000s was prominent for localized cancer and suggested the contribution of the spread of prostate-specific antigen (PSA) screening.26 Indeed, PSA screening increased almost threefold in 2003 both in terms of the number of participants and the number of municipalities that adopted it as an organizational screening program, and subsequently shifted to a slow increase.27,28 The levelling off of ASR of female breast cancer incidence after 2010 observed in the present study is an unprecedented phenomenon.11–13 The mortality of female breast cancer12 also showed a slowing down of its increasing trend. The long-term increase in incidence as well as mortality can be interpreted as the effect of changes in reproductive factors in Japanese females (Table 5).11,29,30 This effect might be converging in breast cancer, but cancers of the corpus uteri and ovary, which share common reproductive risk factors, continued to increase in incidence. The participation rate of female breast cancer screening (mammography) has been increasing in Japan, and early-stage cancer and carcinoma in situ of the breast were reported to have increased in a study using a prefectural cancer registry.31 Together with the slowing down of the increasing trend in mortality observed in the present study, these changes in trends could have partially reflected the dissemination of breast cancer screening.11 The significant decrease in ASR of all-cancer incidence in males after 2010 was accounted for by stomach, liver, and lung cancers. These cancer sites also contributed to the decrease in all-cancer mortality in both males and females (77.3% and 74.9%, respectively; Figure 4). Stomach cancer consistently decreased during the whole observation period with regard to both incidence and mortality, which can be interpreted to be the result of a dramatic reduction in Helicobacter pylori (H. pylori) infection, combined with improvements in sanitation, diet (reduced salt intake), and food preservation techniques (Table 5).11,32,33 A study using data from a hospital-based registry in 2007–2015 showed a decrease in the proportion of cancer in the pylorus, the main subsite of H. pylori-induced cancer.34 A related factor is the eradication of H. pylori, which was covered by the universal health insurance in Japan in 2013 as part of the treatment for chronic gastritis as well as for gastric and duodenal ulcer. Indeed, the reported number of eradications doubled after this extension of coverage.35 Although we observed an acceleration in the decrease in stomach cancer mortality in males (APC −3.2% in 1996–2012, −4.6% in 2012–2018; Table 4A), this was not consistent for incidence or in females. Thus, the effect of H. pylori eradication is not clear at the population level. Evidence of a relation between the absence of H. pylori infection and risk of adenocarcinoma of the esophagus is accumulating.36 The long-term decrease in the prevalence of H. pylori in Japan may be related to the increase in esophageal cancer incidence observed in the present study, through the pathway from gastroesophageal reflux disease to the occurrence of esophageal cancer (Table 5). Liver cancer is another site that showed a dramatic decrease in incidence and mortality.
As discussed in previous literature, the long-term decrease in liver cancer in Japan is mainly due to the decrease in the prevalence of hepatitis C virus (HCV).11,13,37 The observed acceleration both in incidence and mortality in 2008 or 2010 (Table 2 and Table 4) can be interpreted as a reflection of therapeutic improvements made in the early 2000s such as pegylated interferon in 2004 and a protease inhibitor (telaprevir) in 2011.11,38–40 Improvements in differential diagnosis could have also affected the divergence between incidence and mortality since the 1980s (Table 5). Cancer of the gallbladder and bile ducts had a similar pattern of trends to that of liver cancer. Chronic infections, as well as gallstones and obesity, have been proposed as risk factors for gallbladder cancer (Table 5).41–43 Control over communicable diseases could have resulted in the reduction in the incidence rate of gallbladder cancer.44 Regarding cholangiocarcinoma (CCA), overlapping of risk factors and misclassification between intra- and extra-hepatic CCA45 might explain the similarity to the trends in liver cancer and cancer of the combined category of gallbladder and bile ducts. The decrease in lung cancer incidence in males was a phenomenon that had never been observed in previous literature.11–13 Trends in lung cancer incidence in Osaka by histological type revealed that the ASR of adenocarcinoma continuously increased, whereas those of squamous and small-cell carcinomas decreased from the 1990s, which was interpreted to be the result of the spread of diagnostic use of computed tomography and the decreasing trend in smoking prevalence, respectively.46,47 Another possibility is the shift from nonfilter to filtered cigarettes in the consumption of tobacco products in Japan (Table 5), which may be more influential because the increase in adenocarcinoma was observed even before the introduction of major diagnostic advances.48 The decrease in overall lung cancer incidence observed in the present study could have reflected the predominant effect of declining smoking prevalence, although analysis stratified by histological type is needed to clarify this possibility. Cancers of the kidney and urinary tracts are also strongly related to tobacco smoking,49,50 but no similarity was found between the trends in these cancers and smoking prevalence except for bladder cancer incidence in males (Table 5). An important feature of our results is that a decrease in incidence was not observed for colorectal cancer, which can be prevented by organizational screening. The ASR of colorectal cancer has been significantly decreasing in many countries.1–3,5 Using simulation modeling techniques, one study revealed that the reduction in colorectal cancer in the United States was a combined effect of cancer control measures for prevention and screening.6 Cervical cancer also showed a sharp contrast; the ASR of this cancer has been consistently decreasing overseas, including the Republic of Korea,1,2,5 whereas the present study showed a significant increase in incidence and mortality, just as was observed in our previous analysis.11–13 The increase in mortality of cervical cancer (cancer of the corpus uteri as well) in Japan should be interpreted with caution because it could have included the shift from cancer of the ‘uterus, not otherwise specified (NOS)’.
However, the proportion of NOS had been stable since the late 1990s, and the increase was also observed in incidence.11,13 Cervical cancer can be prevented by a combination of organizational screening and human papillomavirus (HPV) vaccination.7,9,51,52 In Japan, the national HPV vaccination program has been substantially halted by the fear of potential adverse effects.53,54 A simulation modeling study demonstrated that rapid restoration of vaccination coverage and catch-up for missed cohorts could avoid approximately 50,000 cervical cancer cases in 50 years.55 Realizing a reduction in colorectal and cervical cancer incidence by promoting primary and secondary preventive measures is a major challenge for Japan. Pancreatic cancer was another example that showed a long-term increase both in incidence and mortality. An increase in risk factors, such as type 2 diabetes, may be related to the increase in incidence56 and mortality as well. Improvements in diagnostic measures and biopsy for histologic confirmation have also been proposed as underlying factors of the increasing trend in earlier years.57 Monitoring cancer incidence trends is useful in examining the possibility of overdiagnosis at a population level. In the United States, prostate, female breast, skin, kidney, thyroid, and lung cancers have been listed as examples of potential overdiagnosis, characterized as a sharp increase in incidence in the absence of a clear change in mortality.58 In the present study, this typical pattern seemed to be observed for prostate and thyroid cancers (Figure 5 and Table 5). A common background factor of these cancers is the availability of non-invasive tests (ie, PSA for prostate cancer and ultrasonography for thyroid cancer).11,26,58–61 In addition to this descriptive approach, empirical or modelling approaches that compare screened and unscreened (or tested and untested) populations are needed to quantitatively examine overdiagnosis.62 Despite the possibility of overdiagnosis, both prostate and thyroid cancers showed significant decreases in mortality. Refinements in diagnosis, treatment, and disease management could have contributed to those trends in mortality.60,61,63,64 Malignant lymphoma showed a pattern of sharp increase in incidence and no clear change in mortality, but changes in lifestyle and improvements in diagnosis and prognosis, as well as in coding of registry data, have been proposed as underlying factors.65,66 Some of the reduction in cancer mortality observed in the present and previous studies11–13 likely reflects improvements in the prognosis of cancer patients. This effect can be seen in the divergence between trends in incidence and mortality (Figure 5), which is consistent with the evidence of improved diagnosis, treatment, and disease management cited in Table 5. Indeed, several studies on hematological cancers showed that the introduction of a new drug or treatment was associated with a reduction in mortality at the national level.67–69 Studies using population-based cancer registries have also reported increases in survival rates that can be interpreted as a reflection of improvements in treatment.46,70–73 Our study group is planning to update these reports using the most recent MCIJ dataset (patients diagnosed in 1993–2015). A strength of the present study is the representativeness of the data. Mortality data were from the national vital statistics and based on a complete mandatory reporting system.
Incidence data were from three prefectures, but the representativeness of the data in terms of secular trends has been validated.12,19 One of the limitations of the present study is that the trends in cancer incidence might have been affected by an improvement in the completeness and data quality of prefectural cancer registries. Indeed, even in the three present prefectures that have long-term high-quality data, there was a slight increase in completeness and quality indices during the first decade of the 2000s (eFigure 4). The observed increases in incidence in this period might therefore reflect an improvement in data completeness. A second limitation is that our analysis was only descriptive. Furthermore, the cancers we analyzed were not grouped into clinically relevant subtypes. As stated above, further research is required to clarify factors underlying the observed cancer trends, such as analyses according to clinical stage or histological type and modelling approaches. In conclusion, this analysis of cancer trends in Japan revealed that the ASR of all-cancer incidence started to decrease significantly in males and level off in females in 2010, after a long-term intermittent increase. The halt in this long-term increase in all-cancer incidence was mainly due to the slowing down of prostate and breast cancers in males and females, respectively. The ASR of all-cancer mortality continued to decrease, and the main contributing cancer sites were still stomach, liver, and male lung.
Background: Unlike many North American and European countries, Japan has observed a continuous increase in cancer incidence over the last few decades. We examined the most recent trends in population-based cancer incidence and mortality in Japan. Methods: National cancer mortality data between 1958 and 2018 were obtained from published vital statistics. Cancer incidence data between 1985 and 2015 were obtained from high-quality population-based cancer registries maintained by three prefectures (Yamagata, Fukui, and Nagasaki). Trends in age-standardized rates (ASR) were examined using Joinpoint regression analysis. Results: For males, all-cancer incidence increased between 1985 and 1996 (annual percent change [APC] +1.1%; 95% confidence interval [CI], 0.7-1.5%), increased again in 2000-2010 (+1.3%; 95% CI, 0.9-1.8%), and then decreased until 2015 (-1.4%; 95% CI, -2.5 to -0.3%). For females, all-cancer incidence increased until 2010 (+0.8%; 95% CI, 0.6-0.9% in 1985-2004 and +2.4%; 95% CI, 1.3-3.4% in 2004-2010), and stabilized thereafter until 2015. The post-2000 increase was mainly attributable to prostate in males and breast in females, which slowed or levelled during the first decade of the 2000s. After a sustained increase, all-cancer mortality for males decreased in 1996-2013 (-1.6%; 95% CI, -1.6 to -1.5%) and accelerated thereafter until 2018 (-2.5%; 95% CI, -2.9 to -2.0%). All-cancer mortality for females decreased intermittently throughout the observation period, with the most recent APC of -1.0% (95% CI, -1.1 to -0.9%) in 2003-2018. The recent decreases in mortality in both sexes, and in incidence in males, were mainly attributable to stomach, liver, and male lung cancers. Conclusions: The ASR of all-cancer incidence began decreasing significantly in males and levelled off in females in 2010.
null
null
5,115
401
[]
4
[ "cancer", "incidence", "mortality", "cancer incidence", "decrease", "cancers", "sites", "increase", "cancer sites", "major" ]
[ "cancer sites japan", "cancer incidence osaka", "cancer registries prefectures", "japan asr cancer", "japanese population cancer" ]
null
null
[CONTENT] incidence | mortality | neoplasms | population surveillance | vital statistics [SUMMARY]
[CONTENT] incidence | mortality | neoplasms | population surveillance | vital statistics [SUMMARY]
[CONTENT] incidence | mortality | neoplasms | population surveillance | vital statistics [SUMMARY]
null
[CONTENT] incidence | mortality | neoplasms | population surveillance | vital statistics [SUMMARY]
null
[CONTENT] Female | Humans | Incidence | Japan | Male | Mortality | Neoplasms | Registries [SUMMARY]
[CONTENT] Female | Humans | Incidence | Japan | Male | Mortality | Neoplasms | Registries [SUMMARY]
[CONTENT] Female | Humans | Incidence | Japan | Male | Mortality | Neoplasms | Registries [SUMMARY]
null
[CONTENT] Female | Humans | Incidence | Japan | Male | Mortality | Neoplasms | Registries [SUMMARY]
null
[CONTENT] cancer sites japan | cancer incidence osaka | cancer registries prefectures | japan asr cancer | japanese population cancer [SUMMARY]
[CONTENT] cancer sites japan | cancer incidence osaka | cancer registries prefectures | japan asr cancer | japanese population cancer [SUMMARY]
[CONTENT] cancer sites japan | cancer incidence osaka | cancer registries prefectures | japan asr cancer | japanese population cancer [SUMMARY]
null
[CONTENT] cancer sites japan | cancer incidence osaka | cancer registries prefectures | japan asr cancer | japanese population cancer [SUMMARY]
null
[CONTENT] cancer | incidence | mortality | cancer incidence | decrease | cancers | sites | increase | cancer sites | major [SUMMARY]
[CONTENT] cancer | incidence | mortality | cancer incidence | decrease | cancers | sites | increase | cancer sites | major [SUMMARY]
[CONTENT] cancer | incidence | mortality | cancer incidence | decrease | cancers | sites | increase | cancer sites | major [SUMMARY]
null
[CONTENT] cancer | incidence | mortality | cancer incidence | decrease | cancers | sites | increase | cancer sites | major [SUMMARY]
null
[CONTENT] cancer | incidence | trends | major | japan | registration | incidence major | 2016 | major cancer | major cancer sites [SUMMARY]
[CONTENT] cancer | incidence | calculated | data | mortality | obtained | center | joinpoint | number | cancer incidence [SUMMARY]
[CONTENT] cancer | incidence | males | recent segment | significant | sites | major | segment | showed | significantly [SUMMARY]
null
[CONTENT] cancer | incidence | mortality | cancer incidence | major | sites | cancer sites | cancers | trends | data [SUMMARY]
null
[CONTENT] North American | European | Japan | the last few decades ||| Japan [SUMMARY]
[CONTENT] between 1958 and 2018 ||| between 1985 and 2015 | three | Yamagata, Fukui | Nagasaki ||| Joinpoint [SUMMARY]
[CONTENT] between 1985 and 1996 | APC | +1.1% | 95% ||| CI | 0.7-1.5% | 2000-2010 | +1.3% | 95% | CI | 0.9-1.8% | 2015 | 95% | CI ||| 2010 | +0.8% | 95% | CI | 0.6-0.9% | 1985-2004 | +2.4% | 95% | CI | 1.3-3.4% | 2004-2010 | 2015 ||| post-2000 | the first decade of the 2000s ||| 1996-2013 | 95% | CI | 2018 | 95% | CI ||| APC | 95% | CI | 2003-2018 ||| [SUMMARY]
null
[CONTENT] North American | European | Japan | the last few decades ||| Japan ||| between 1958 and 2018 ||| between 1985 and 2015 | three | Yamagata, Fukui | Nagasaki ||| Joinpoint ||| between 1985 and 1996 | APC | +1.1% | 95% ||| CI | 0.7-1.5% | 2000-2010 | +1.3% | 95% | CI | 0.9-1.8% | 2015 | 95% | CI ||| 2010 | +0.8% | 95% | CI | 0.6-0.9% | 1985-2004 | +2.4% | 95% | CI | 1.3-3.4% | 2004-2010 | 2015 ||| post-2000 | the first decade of the 2000s ||| 1996-2013 | 95% | CI | 2018 | 95% | CI ||| APC | 95% | CI | 2003-2018 ||| ||| 2010 [SUMMARY]
null
Epidemiological survey of antinuclear antibodies in healthy population and analysis of clinical characteristics of positive population.
31313384
In China, the incidence of autoimmune diseases is gradually increasing. To decrease the misdiagnosis rate of autoimmune diseases, we conducted an epidemiological investigation of the presence of antinuclear antibodies (ANA) in a healthy population and analyzed the clinical characteristics of healthy individuals with both a high ANA titer and positive anti-SSA or AMA-M2 antibodies.
BACKGROUND
Serum ANA titers were detected by indirect immunofluorescence (IIF), and 15 types of ANA-specific antibodies were detected by line immunoassay.
METHODS
Among 25 110 individuals undergoing routine examination, the positive rate of ANA titer >1:100 was 14.01%, and the rate in females (19.05%) was higher than that in males (9.04%; P < 0.01). The positive rate of ANA titer >1:320 was 5.93%, and the rate in females (8.68%) was again higher than that in males (3.21%; P < 0.01). Specific antibodies were detected in 1489 ANA-positive individuals with titer >1:320, and the three most frequently detected antibodies were anti-Ro-52 (212), AMA-M2 (189), and anti-SSA (144). The abnormal rates of blood routine tests, liver function tests, and other clinical indicators in the AMA-M2-positive population were significantly different from those in the control group. The abnormal rates of blood routine tests, liver function tests, and immune indices in the anti-SSA-positive population were higher than those in the control group.
RESULTS
There was a high prevalence of ANA positivity in the healthy population. To avoid misdiagnosis, individuals with symptoms of abdominal discomfort, pruritus, or fatigue together with abnormal blood routine and liver function test results should be examined for ANA, AMA-M2, and anti-SSA as early as possible.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Antibodies, Antinuclear", "Autoantigens", "Autoimmune Diseases", "Child", "Child, Preschool", "Epidemiological Monitoring", "Female", "Follow-Up Studies", "Humans", "Infant", "Male", "Middle Aged", "Prognosis", "Young Adult" ]
6805280
INTRODUCTION
Antinuclear antibodies (ANA) are a spectrum of immunoglobulins that react with various nuclear and cytoplasmic components of karyocytes and are produced mainly by plasma cells. The ANA family includes more than 15 types of specific antibodies, such as anti‐SSA and AMA‐M2 antibodies, which have important value for clinical diagnosis. Previous studies have shown that AMA‐M2 and anti‐SSA antibodies could be detected in patients with primary biliary cholangitis (PBC) and systemic lupus erythematosus (SLE) several years before disease onset.1, 2, 3 ANA can act directly on the mitosis of karyocytes and arrest the mitotic cycle at different stages, leading to disordered DNA synthesis and protein production, which may further block the metabolism of various tissues and even cause morphological and structural changes and loss of function. Therefore, tissues and organs vital to body metabolism, such as the liver, the hematological system, and epithelial tissues, have become the targets of ANA attack and are also the initial sites of clinical symptoms in ANA‐positive patients. Unfortunately, these symptoms are rarely connected to abnormal ANA by doctors in basic hospitals, which usually leads to a high rate of misdiagnosis in the population. Therefore, we conducted an epidemiological investigation of ANA in a healthy population and further explored the clinical characteristics of individuals with serum‐positive AMA‐M2 and anti‐SSA antibodies, so that we could provide necessary data for clinical practice.
null
null
RESULT
Positive rate of ANA in healthy population A total of 25 110 residents were tested for ANA, including 12 640 males and 12 470 females. The positive rate of ANA titer >1:100 was 14.01% (3519/25 110), with rates of 9.04% in males (1143/12 640) and 19.05% in females (2376/12 470). The positive rate of ANA titer >1:320 was 5.93% (1489/25 110), with rates of 3.21% in males (406/12 640) and 8.68% in females (1083/12 470) (Table 1). The difference in positive rates between genders was statistically significant. Distribution of antinuclear antibody‐positive population by sex Abbreviation: ANA, antinuclear antibody. Distribution of ANA‐specific antibodies in 1489 patients with ANA titer >1:320 A total of 1489 positive samples with ANA titer >1:320 were further tested by line immunoassay (LIA) for 15 specific autoantibodies. The positive rate of the 15 specific antibodies was 44.29% (659/1489). The top three antibodies were anti‐Ro‐52 (212), AMA‐M2 (189), and anti‐SSA (144) (Figure 1). Distribution of antinuclear antibody (ANA)‐specific antibodies in the positive population. 15 types of ANA‐specific antibodies were detected by line immunoassay Statistical analysis of PBC‐related laboratory indicators in AMA‐M2–positive patients By detecting the 15 specific antibodies, we found a high positive rate of AMA‐M2, with a total of 189 AMA‐M2–positive cases, including 39 males and 150 females. Samples from 40 males and 160 females with negative ANA were randomly selected as controls. Our data demonstrated that the abnormal rates of blood routine tests, liver function tests, and other clinical indicators in the 189 AMA‐M2–positive cases were significantly different from those in the control group (Table 2). We examined the biochemical indexes of liver function and found that alkaline phosphatase (ALP) was elevated in 70 patients, and the diagnostic rate of PBC was 37.4% (Table 3). Distribution of laboratory test results of primary biliary cholangitis‐related indicators in AMA‐M2–positive population and control group Abnormal blood routine: single decrease and mixed decrease in erythrocyte, leukocyte, and platelet. Abnormal liver function: single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL). Abdominal discomfort: including ventosity, celiakia, ructus, and intermittent sicchasia. 
Gallbladder lesion: including gallstones, post‐cholecystectomy, and thickened gallbladder wall. Distribution of liver function biochemical indicators in AMA‐M2–positive population Abbreviations: ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CBIL, conjugated bilirubin; GGT, gamma‐glutamyltranspeptidase. Analysis of laboratory test indicators for anti‐SSA antibody‐positive population The positive rate of anti‐SSA antibody was high in the detection of the 15 specific antibodies. A total of 144 cases were positive for anti‐SSA antibody, including 36 males and 108 females. 30 males and 120 females with negative ANA were randomly selected as the control group. 
Compared with the control group, there were significant decreases in blood formed elements and an increased erythrocyte sedimentation rate (ESR) in the anti‐SSA antibody‐positive group. The abnormal rates of liver function, IgG, C3, C4, and rheumatoid factor (RF) in the anti‐SSA antibody‐positive group were significantly different from those in the control group (Table 4). Distribution of laboratory test indicators in anti‐SSA antibody‐positive population and control group Abbreviations: ESR, erythrocyte sedimentation rate; RF, rheumatoid factor. Abnormal blood routine: single decrease and mixed decrease in erythrocyte, leukocyte, and platelet. Abnormal liver function: single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).
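As a quick arithmetic check, the positive rates reported above can be recomputed directly from the counts given in the text (a short Python sketch; all counts are taken from the article):

```python
# Recompute the reported ANA positive rates from the counts in the text.
rates = {
    "ANA > 1:100, overall": (3519, 25110),
    "ANA > 1:100, males":   (1143, 12640),
    "ANA > 1:100, females": (2376, 12470),
    "ANA > 1:320, overall": (1489, 25110),
    "ANA > 1:320, males":   (406, 12640),
    "ANA > 1:320, females": (1083, 12470),
}
for label, (positives, n) in rates.items():
    print(f"{label}: {100 * positives / n:.2f}%")  # matches 14.01%, 9.04%, ...
```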
CONCLUSION
Based on an epidemiological analysis of a healthy population, this study revealed the distribution of antinuclear antibodies in males and females and the clinical characteristics of AMA‐M2 antibody‐ and anti‐SSA antibody‐positive individuals. These findings improve clinicians' understanding of antinuclear antibody detection, promote the wider use of such testing, and facilitate early diagnosis and control of disease.
[ "INTRODUCTION", "Subjects", "Detection of antibody", "Statistical analysis", "Positive rate of ANA in healthy population", "Distribution of ANA‐specific antibodies in 1489 patients with ANA titer >1:320", "Statistical analysis of PBC‐related laboratory indicators in AMA‐M2–positive patients", "Analysis of laboratory test indicators for anti‐SSA antibody‐positive population" ]
[ "Antinuclear antibodies (ANA) are a spectrum of immunoglobulins that react with various nuclear and cytoplasmic components of karyocytes and are majorly produced by plasma cells. ANA family includes more than 15 types of specific antibodies, such as anti‐SSA and AMA‐M2 antibodies, which have important values for clinical diagnosis. Previous studies have shown that AMA‐M2 and anti‐SSA antibodies could be detected in the patients with primary biliary cholangitis (PBC) and systemic lupus erythematosus (SLE) several years before the disease onset.1, 2, 3 ANA can directly act on the mitosis of karyocyte and stagnate the mitotic cycle at different stages, leading to the disorder of DNA synthesis and protein production, and which may further block the metabolism of various tissues, and even cause morphological and structural changes and the loss of function. Therefore, the vital tissues and vigorous organs for body metabolism have become the targets of ANA attack, such as liver, hematological system, epithelial tissue, and so on, which are also the initial sites of clinical symptoms in ANA‐positive patients. Unfortunately, these symptoms are rarely connected to abnormal ANA by doctors in basic hospitals, which usually lead to a high rate of misdiagnosis in population. Therefore, we conducted an epidemiological investigation of ANA in healthy population and further explored the clinical characteristics in individuals with serum‐positive AMA‐M2 and anti‐SSA antibodies, so that we could provide necessary data for clinical practice.", "We collected samples from 25 110 residents who received health checkup in Baoding NO.1 central hospital, Baoding, Hebei, China, from January 2015 to June 2018. These participants included workers, peasants, cadres, students, kindergartens, and so on. The age range of the subjects was from 4 months to 93 years old.\nThe subjects were healthy individuals and patients diagnosed by AID, and those who may cause ANA positive in chronic active hepatitis, tuberculosis, and chronic infection have been excluded. In order to ensure the specificity of the results and exclude the effects of some interference factors, such as the elderly, some drugs, malignant diseases, and so on, we only chose the individuals with ANA titer >1:320. On the premise of the patient's voluntary, 15 specific antibody were also analyzed, including anti‐SSA and anti‐M2. The control group was negative for antibody detection, and the sex ratio was basically the same to that of the positive population. The study was approved by the Ethics Committee of Baoding NO.1 central hospitals, and informed consent was acquired from each individual.", "Blood samples (3 mL) were collected from 25 110 residents who received health checkup. Serum was separated by centrifugation at 900 g for 5 minutes. ANA were tested by indirect immunofluorescence on HEp‐2 cells according to the manufacturer's instructions (Euroimmun AG). As a result, 1489 positive samples were further used by line immunoassay (LIA; Euroimmun AG) for 15 specific autoantibodies (anti‐Ro52, anti‐nRNP, anti‐Sm, anti‐SSA, anti‐SSB, anti‐Scl‐70, anti‐Jo‐1, anti‐CNEPB, anti‐dsDNA, anti‐PCNA, anti‐His, anti‐Nuc, anti‐RIB, anti‐M2, and anti‐PMScl‐70). The serum for LIA test was diluted 1:100. EUROBlotMaster (Euroimmun AG) and EUROLineScan (Euroimmun AG) were used to complete the operation and for test result interpretation, respectively.", "Statistical analysis was performed using SPSS Version 19.0 software (IBM Corp.). 
P < 0.05 was considered as statistically significant. Comparison of counting data between groups was performed using chi‐square test.", "A total of 25 110 residents were detected for ANA, including 12 640 males and 12 470 females. The positive rate of ANA titer >1:100 was 14.01% (3519/25 110), of which 9.04% of males (1143/12 640) and 19.05% of females (2376/12 470) was positive. The positive rate of ANA titer >1:320 was 5.93% (1489/25 110), of which 3.21% of males (406/12 640) and 8.68% of female (1083/12 470) was positive (Table 1). The difference in positive rates between genders was statistically significant.\nDistribution of antinuclear antibody‐positive population by sex\nAbbreviation: ANA, antinuclear antibody.", "A total of 1489 positive samples with ANA >1:320 population were further tested by line immunoassay (LIA) for 15 specific autoantibodies. The positive rate of 15 specific antibodies was 44.29% (659/ 1489). The top three antibodies were anti‐Ro‐52 (212), AMA‐M2 (189), and anti‐SSA (144) (Figure 1).\nDistribution of antinuclear antibody (ANA)‐specific Antibody in positive population. 15 types of ANA‐specific antibodies were detected by line immunoassays", "By detecting 15 specific antibodies, we found the high positive rate of AMA‐M2 with a total of 189 of AMA‐M2–positive cases, including 39 males and 150 females. Samples from 40 males and 160 females with negative ANA were randomly selected as controls. Our data demonstrated that the abnormal rate of blood routine, liver function, and other clinical indicators in 189 of AMA‐M2–positive cases was significantly different from that in the control group (Table 2). We detected the biochemical indexes of liver function and found that alkaline phosphatase (ALP) was elevated in 70 patients, and the diagnostic rate of PBC was 37.4% (Table 3).\nDistribution of laboratory test results of primary biliary cholangitis‐related indicators in AMA‐M2–positive population and control group\nAbnormal blood routine: Single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: Single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).\nAbdominal discomfort: including ventosity, celiakia, ructus, and intermittent sicchasia.\nGallbladder lesion: including gallstones, post‐cholecystectomy, and thickened gallbladder wall.\nDistribution of liver function biochemical indicators in AMA‐M2–positive population\nAbbreviations: ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CBIL, conjugated bilirubin; GGT, gamma‐glutamyltranspeptidase.", "The positive rate of anti‐SSA antibody was high in the detection of 15 specific antibodies. A total of 144 of cases were positive in anti‐SSA antibody, including 36 males and 144 females. 30 males and 120 females with negative ANA were randomly selected as control group. Compared with the control group, there were significant decreased blood formed element and increased erythrocyte sedimentation rate (ESR) in anti‐SSA antibody‐positive group. 
The abnormal rates of liver function, IgG, C3, C4, and rheumatoid factor (RF) in anti‐SSA antibody‐positive group were significantly different from those in the control group (Table 4).\nDistribution of laboratory test indicators in anti‐SSA antibody‐positive population and control group\nAbbreviations: ESR, erythrocyte sedimentation rate; RF, rheumatoid factor.\nAbnormal blood routine: single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL)." ]
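The statistical comparison described above (a chi‐square test on count data, with significance at P < 0.05) can be illustrated with a short, self‐contained sketch. This is not the authors' SPSS workflow; it is a minimal Python reconstruction, assuming SciPy is available, using the sex‐stratified counts for ANA titer >1:320 quoted in the results:

    # Chi-square test of ANA positivity (titer >1:320) by sex.
    # Counts are taken directly from the reported results:
    # males 406/12 640 positive, females 1083/12 470 positive.
    from scipy.stats import chi2_contingency

    males_pos, males_total = 406, 12640
    females_pos, females_total = 1083, 12470

    table = [
        [males_pos, males_total - males_pos],        # males: positive, negative
        [females_pos, females_total - females_pos],  # females: positive, negative
    ]

    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.3g}")
    # P < 0.05 here corresponds to the reported significant sex difference.

The same pattern applies to the other between‐group comparisons in Tables 2 and 4, each of which is a contingency‐table test of abnormal‐rate counts.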
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Subjects", "Detection of antibody", "Statistical analysis", "RESULT", "Positive rate of ANA in healthy population", "Distribution of ANA‐specific antibodies in 1489 patients with ANA titer >1:320", "Statistical analysis of PBC‐related laboratory indicators in AMA‐M2–positive patients", "Analysis of laboratory test indicators for anti‐SSA antibody‐positive population", "DISCUSSION", "CONCLUSION" ]
[ "Antinuclear antibodies (ANA) are a spectrum of immunoglobulins that react with various nuclear and cytoplasmic components of karyocytes and are majorly produced by plasma cells. ANA family includes more than 15 types of specific antibodies, such as anti‐SSA and AMA‐M2 antibodies, which have important values for clinical diagnosis. Previous studies have shown that AMA‐M2 and anti‐SSA antibodies could be detected in the patients with primary biliary cholangitis (PBC) and systemic lupus erythematosus (SLE) several years before the disease onset.1, 2, 3 ANA can directly act on the mitosis of karyocyte and stagnate the mitotic cycle at different stages, leading to the disorder of DNA synthesis and protein production, and which may further block the metabolism of various tissues, and even cause morphological and structural changes and the loss of function. Therefore, the vital tissues and vigorous organs for body metabolism have become the targets of ANA attack, such as liver, hematological system, epithelial tissue, and so on, which are also the initial sites of clinical symptoms in ANA‐positive patients. Unfortunately, these symptoms are rarely connected to abnormal ANA by doctors in basic hospitals, which usually lead to a high rate of misdiagnosis in population. Therefore, we conducted an epidemiological investigation of ANA in healthy population and further explored the clinical characteristics in individuals with serum‐positive AMA‐M2 and anti‐SSA antibodies, so that we could provide necessary data for clinical practice.", " Subjects We collected samples from 25 110 residents who received health checkup in Baoding NO.1 central hospital, Baoding, Hebei, China, from January 2015 to June 2018. These participants included workers, peasants, cadres, students, kindergartens, and so on. The age range of the subjects was from 4 months to 93 years old.\nThe subjects were healthy individuals and patients diagnosed by AID, and those who may cause ANA positive in chronic active hepatitis, tuberculosis, and chronic infection have been excluded. In order to ensure the specificity of the results and exclude the effects of some interference factors, such as the elderly, some drugs, malignant diseases, and so on, we only chose the individuals with ANA titer >1:320. On the premise of the patient's voluntary, 15 specific antibody were also analyzed, including anti‐SSA and anti‐M2. The control group was negative for antibody detection, and the sex ratio was basically the same to that of the positive population. The study was approved by the Ethics Committee of Baoding NO.1 central hospitals, and informed consent was acquired from each individual.\nWe collected samples from 25 110 residents who received health checkup in Baoding NO.1 central hospital, Baoding, Hebei, China, from January 2015 to June 2018. These participants included workers, peasants, cadres, students, kindergartens, and so on. The age range of the subjects was from 4 months to 93 years old.\nThe subjects were healthy individuals and patients diagnosed by AID, and those who may cause ANA positive in chronic active hepatitis, tuberculosis, and chronic infection have been excluded. In order to ensure the specificity of the results and exclude the effects of some interference factors, such as the elderly, some drugs, malignant diseases, and so on, we only chose the individuals with ANA titer >1:320. On the premise of the patient's voluntary, 15 specific antibody were also analyzed, including anti‐SSA and anti‐M2. 
The control group was negative for antibody detection, and the sex ratio was basically the same to that of the positive population. The study was approved by the Ethics Committee of Baoding NO.1 central hospitals, and informed consent was acquired from each individual.\n Detection of antibody Blood samples (3 mL) were collected from 25 110 residents who received health checkup. Serum was separated by centrifugation at 900 g for 5 minutes. ANA were tested by indirect immunofluorescence on HEp‐2 cells according to the manufacturer's instructions (Euroimmun AG). As a result, 1489 positive samples were further used by line immunoassay (LIA; Euroimmun AG) for 15 specific autoantibodies (anti‐Ro52, anti‐nRNP, anti‐Sm, anti‐SSA, anti‐SSB, anti‐Scl‐70, anti‐Jo‐1, anti‐CNEPB, anti‐dsDNA, anti‐PCNA, anti‐His, anti‐Nuc, anti‐RIB, anti‐M2, and anti‐PMScl‐70). The serum for LIA test was diluted 1:100. EUROBlotMaster (Euroimmun AG) and EUROLineScan (Euroimmun AG) were used to complete the operation and for test result interpretation, respectively.\nBlood samples (3 mL) were collected from 25 110 residents who received health checkup. Serum was separated by centrifugation at 900 g for 5 minutes. ANA were tested by indirect immunofluorescence on HEp‐2 cells according to the manufacturer's instructions (Euroimmun AG). As a result, 1489 positive samples were further used by line immunoassay (LIA; Euroimmun AG) for 15 specific autoantibodies (anti‐Ro52, anti‐nRNP, anti‐Sm, anti‐SSA, anti‐SSB, anti‐Scl‐70, anti‐Jo‐1, anti‐CNEPB, anti‐dsDNA, anti‐PCNA, anti‐His, anti‐Nuc, anti‐RIB, anti‐M2, and anti‐PMScl‐70). The serum for LIA test was diluted 1:100. EUROBlotMaster (Euroimmun AG) and EUROLineScan (Euroimmun AG) were used to complete the operation and for test result interpretation, respectively.\n Statistical analysis Statistical analysis was performed using SPSS Version 19.0 software (IBM Corp.). P < 0.05 was considered as statistically significant. Comparison of counting data between groups was performed using chi‐square test.\nStatistical analysis was performed using SPSS Version 19.0 software (IBM Corp.). P < 0.05 was considered as statistically significant. Comparison of counting data between groups was performed using chi‐square test.", "We collected samples from 25 110 residents who received health checkup in Baoding NO.1 central hospital, Baoding, Hebei, China, from January 2015 to June 2018. These participants included workers, peasants, cadres, students, kindergartens, and so on. The age range of the subjects was from 4 months to 93 years old.\nThe subjects were healthy individuals and patients diagnosed by AID, and those who may cause ANA positive in chronic active hepatitis, tuberculosis, and chronic infection have been excluded. In order to ensure the specificity of the results and exclude the effects of some interference factors, such as the elderly, some drugs, malignant diseases, and so on, we only chose the individuals with ANA titer >1:320. On the premise of the patient's voluntary, 15 specific antibody were also analyzed, including anti‐SSA and anti‐M2. The control group was negative for antibody detection, and the sex ratio was basically the same to that of the positive population. The study was approved by the Ethics Committee of Baoding NO.1 central hospitals, and informed consent was acquired from each individual.", "Blood samples (3 mL) were collected from 25 110 residents who received health checkup. Serum was separated by centrifugation at 900 g for 5 minutes. 
ANA were tested by indirect immunofluorescence on HEp‐2 cells according to the manufacturer's instructions (Euroimmun AG). As a result, 1489 positive samples were further used by line immunoassay (LIA; Euroimmun AG) for 15 specific autoantibodies (anti‐Ro52, anti‐nRNP, anti‐Sm, anti‐SSA, anti‐SSB, anti‐Scl‐70, anti‐Jo‐1, anti‐CNEPB, anti‐dsDNA, anti‐PCNA, anti‐His, anti‐Nuc, anti‐RIB, anti‐M2, and anti‐PMScl‐70). The serum for LIA test was diluted 1:100. EUROBlotMaster (Euroimmun AG) and EUROLineScan (Euroimmun AG) were used to complete the operation and for test result interpretation, respectively.", "Statistical analysis was performed using SPSS Version 19.0 software (IBM Corp.). P < 0.05 was considered as statistically significant. Comparison of counting data between groups was performed using chi‐square test.", " Positive rate of ANA in healthy population A total of 25 110 residents were detected for ANA, including 12 640 males and 12 470 females. The positive rate of ANA titer >1:100 was 14.01% (3519/25 110), of which 9.04% of males (1143/12 640) and 19.05% of females (2376/12 470) was positive. The positive rate of ANA titer >1:320 was 5.93% (1489/25 110), of which 3.21% of males (406/12 640) and 8.68% of female (1083/12 470) was positive (Table 1). The difference in positive rates between genders was statistically significant.\nDistribution of antinuclear antibody‐positive population by sex\nAbbreviation: ANA, antinuclear antibody.\nA total of 25 110 residents were detected for ANA, including 12 640 males and 12 470 females. The positive rate of ANA titer >1:100 was 14.01% (3519/25 110), of which 9.04% of males (1143/12 640) and 19.05% of females (2376/12 470) was positive. The positive rate of ANA titer >1:320 was 5.93% (1489/25 110), of which 3.21% of males (406/12 640) and 8.68% of female (1083/12 470) was positive (Table 1). The difference in positive rates between genders was statistically significant.\nDistribution of antinuclear antibody‐positive population by sex\nAbbreviation: ANA, antinuclear antibody.\n Distribution of ANA‐specific antibodies in 1489 patients with ANA titer >1:320 A total of 1489 positive samples with ANA >1:320 population were further tested by line immunoassay (LIA) for 15 specific autoantibodies. The positive rate of 15 specific antibodies was 44.29% (659/ 1489). The top three antibodies were anti‐Ro‐52 (212), AMA‐M2 (189), and anti‐SSA (144) (Figure 1).\nDistribution of antinuclear antibody (ANA)‐specific Antibody in positive population. 15 types of ANA‐specific antibodies were detected by line immunoassays\nA total of 1489 positive samples with ANA >1:320 population were further tested by line immunoassay (LIA) for 15 specific autoantibodies. The positive rate of 15 specific antibodies was 44.29% (659/ 1489). The top three antibodies were anti‐Ro‐52 (212), AMA‐M2 (189), and anti‐SSA (144) (Figure 1).\nDistribution of antinuclear antibody (ANA)‐specific Antibody in positive population. 15 types of ANA‐specific antibodies were detected by line immunoassays\n Statistical analysis of PBC‐related laboratory indicators in AMA‐M2–positive patients By detecting 15 specific antibodies, we found the high positive rate of AMA‐M2 with a total of 189 of AMA‐M2–positive cases, including 39 males and 150 females. Samples from 40 males and 160 females with negative ANA were randomly selected as controls. 
Our data demonstrated that the abnormal rate of blood routine, liver function, and other clinical indicators in 189 of AMA‐M2–positive cases was significantly different from that in the control group (Table 2). We detected the biochemical indexes of liver function and found that alkaline phosphatase (ALP) was elevated in 70 patients, and the diagnostic rate of PBC was 37.4% (Table 3).\nDistribution of laboratory test results of primary biliary cholangitis‐related indicators in AMA‐M2–positive population and control group\nAbnormal blood routine: Single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: Single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).\nAbdominal discomfort: including ventosity, celiakia, ructus, and intermittent sicchasia.\nGallbladder lesion: including gallstones, post‐cholecystectomy, and thickened gallbladder wall.\nDistribution of liver function biochemical indicators in AMA‐M2–positive population\nAbbreviations: ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CBIL, conjugated bilirubin; GGT, gamma‐glutamyltranspeptidase.\nBy detecting 15 specific antibodies, we found the high positive rate of AMA‐M2 with a total of 189 of AMA‐M2–positive cases, including 39 males and 150 females. Samples from 40 males and 160 females with negative ANA were randomly selected as controls. Our data demonstrated that the abnormal rate of blood routine, liver function, and other clinical indicators in 189 of AMA‐M2–positive cases was significantly different from that in the control group (Table 2). We detected the biochemical indexes of liver function and found that alkaline phosphatase (ALP) was elevated in 70 patients, and the diagnostic rate of PBC was 37.4% (Table 3).\nDistribution of laboratory test results of primary biliary cholangitis‐related indicators in AMA‐M2–positive population and control group\nAbnormal blood routine: Single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: Single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).\nAbdominal discomfort: including ventosity, celiakia, ructus, and intermittent sicchasia.\nGallbladder lesion: including gallstones, post‐cholecystectomy, and thickened gallbladder wall.\nDistribution of liver function biochemical indicators in AMA‐M2–positive population\nAbbreviations: ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CBIL, conjugated bilirubin; GGT, gamma‐glutamyltranspeptidase.\n Analysis of laboratory test indicators for anti‐SSA antibody‐positive population The positive rate of anti‐SSA antibody was high in the detection of 15 specific antibodies. A total of 144 of cases were positive in anti‐SSA antibody, including 36 males and 144 females. 30 males and 120 females with negative ANA were randomly selected as control group. Compared with the control group, there were significant decreased blood formed element and increased erythrocyte sedimentation rate (ESR) in anti‐SSA antibody‐positive group. 
The abnormal rates of liver function, IgG, C3, C4, and rheumatoid factor (RF) in anti‐SSA antibody‐positive group were significantly different from those in the control group (Table 4).\nDistribution of laboratory test indicators in anti‐SSA antibody‐positive population and control group\nAbbreviations: ESR, erythrocyte sedimentation rate; RF, rheumatoid factor.\nAbnormal blood routine: single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).\nThe positive rate of anti‐SSA antibody was high in the detection of 15 specific antibodies. A total of 144 of cases were positive in anti‐SSA antibody, including 36 males and 144 females. 30 males and 120 females with negative ANA were randomly selected as control group. Compared with the control group, there were significant decreased blood formed element and increased erythrocyte sedimentation rate (ESR) in anti‐SSA antibody‐positive group. The abnormal rates of liver function, IgG, C3, C4, and rheumatoid factor (RF) in anti‐SSA antibody‐positive group were significantly different from those in the control group (Table 4).\nDistribution of laboratory test indicators in anti‐SSA antibody‐positive population and control group\nAbbreviations: ESR, erythrocyte sedimentation rate; RF, rheumatoid factor.\nAbnormal blood routine: single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).", "A total of 25 110 residents were detected for ANA, including 12 640 males and 12 470 females. The positive rate of ANA titer >1:100 was 14.01% (3519/25 110), of which 9.04% of males (1143/12 640) and 19.05% of females (2376/12 470) was positive. The positive rate of ANA titer >1:320 was 5.93% (1489/25 110), of which 3.21% of males (406/12 640) and 8.68% of female (1083/12 470) was positive (Table 1). The difference in positive rates between genders was statistically significant.\nDistribution of antinuclear antibody‐positive population by sex\nAbbreviation: ANA, antinuclear antibody.", "A total of 1489 positive samples with ANA >1:320 population were further tested by line immunoassay (LIA) for 15 specific autoantibodies. The positive rate of 15 specific antibodies was 44.29% (659/ 1489). The top three antibodies were anti‐Ro‐52 (212), AMA‐M2 (189), and anti‐SSA (144) (Figure 1).\nDistribution of antinuclear antibody (ANA)‐specific Antibody in positive population. 15 types of ANA‐specific antibodies were detected by line immunoassays", "By detecting 15 specific antibodies, we found the high positive rate of AMA‐M2 with a total of 189 of AMA‐M2–positive cases, including 39 males and 150 females. Samples from 40 males and 160 females with negative ANA were randomly selected as controls. Our data demonstrated that the abnormal rate of blood routine, liver function, and other clinical indicators in 189 of AMA‐M2–positive cases was significantly different from that in the control group (Table 2). 
We detected the biochemical indexes of liver function and found that alkaline phosphatase (ALP) was elevated in 70 patients, and the diagnostic rate of PBC was 37.4% (Table 3).\nDistribution of laboratory test results of primary biliary cholangitis‐related indicators in AMA‐M2–positive population and control group\nAbnormal blood routine: Single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: Single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).\nAbdominal discomfort: including ventosity, celiakia, ructus, and intermittent sicchasia.\nGallbladder lesion: including gallstones, post‐cholecystectomy, and thickened gallbladder wall.\nDistribution of liver function biochemical indicators in AMA‐M2–positive population\nAbbreviations: ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CBIL, conjugated bilirubin; GGT, gamma‐glutamyltranspeptidase.", "The positive rate of anti‐SSA antibody was high in the detection of 15 specific antibodies. A total of 144 of cases were positive in anti‐SSA antibody, including 36 males and 144 females. 30 males and 120 females with negative ANA were randomly selected as control group. Compared with the control group, there were significant decreased blood formed element and increased erythrocyte sedimentation rate (ESR) in anti‐SSA antibody‐positive group. The abnormal rates of liver function, IgG, C3, C4, and rheumatoid factor (RF) in anti‐SSA antibody‐positive group were significantly different from those in the control group (Table 4).\nDistribution of laboratory test indicators in anti‐SSA antibody‐positive population and control group\nAbbreviations: ESR, erythrocyte sedimentation rate; RF, rheumatoid factor.\nAbnormal blood routine: single decrease and mixed decrease in erythrocyte, leukocyte, and platelet.\nAbnormal liver function: single increase and mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), conjugated bilirubin (CBIL).", "In this study, we retrospectively analyzed the positive rate of ANA in healthy population aged 4 months to 93 years old. The results showed that the positive rate of ANA titer >1:100 was 14.01%, which was basically consistent with Satoh's ANA test in healthy people over 12 years old in the United States.4 The positive rate of females with ANA titer >1:100 was 2.08 times that of males, and positive rate of females with ANA titer >1:320 was 2.67 times that of males. It is suggested that female patients were at high risk of autoimmune diseases.\nWe detected specific antibodies in population with high titer of ANA positive and found that the top three positive antibodies were anti‐Ro‐52, AMA‐M2, and anti‐SSA antibodies, respectively. In this study, we focused on AMA‐M2 and anti‐SSA antibodies which had high specificity in autoimmune disease (AID).\nAnti‐mitochondrial antibody (AMA) is an autoantibody against mitochondrial inner membrane lipoprotein in cytoplasm. It is divided into nine subtypes, in which M2 subtype is a serological marker of PBC and plays an important role in the diagnosis of PBC. The specificity of AMA‐M2 to PBC is up to 95%.5, 6 PBC is an immune‐mediated, progressive, and non‐suppurative inflammatory disease of bile duct with uncertain etiology. 
PBC is usually complicated with intrahepatic cholestasis and the damage of intrahepatic bile ductules, eventually leads to liver fibrosis and cirrhosis.7, 8 In this study, we analyzed the clinical symptoms associated with PBC in AMA‐M2–positive population. The PBC is usually divided into four stages: the first stage is preclinical, generally only AMA positive in serum or bile duct epithelium. Only a small part of the AMA‐M2–positive population did not find abnormal indicators and did not feel uncomfortable, which may be in the preclinical period, but it also should be paid great attention. It has been reported that 29 of AMA‐positive population have no clinical symptoms. However, after 18 years of follow‐up, the results show that 83% of them have abnormal liver function, 76% of them have fatigue and pruritus.3 The second stage is liver dysfunction, which is mainly characterized by the increase in ALP, gamma‐glutamyltranspeptidase (GGT), and other enzymes. In this investigation, we found that ALP, GGT, aspartate aminotransferase (AST), alanine aminotransferase (ALT), TB, and DB were increased in 80 cases of AMA‐M2–positive group, which was significantly higher than that of negative group. 70 cases of AMA‐M2–positive group with elevated ALP could be diagnosed as PBC, which met the diagnostic criteria of PBC in Europe and America in 2009.9, 10 The diagnostic rate of PBC in AMA‐M2–positive group was 37.4%. It is of great significance for patients to be diagnosed in this period to control the disease progression and prevent misdiagnosis. The third stage is the clinical symptom stage, in which the patients have typical symptoms such as fatigue and pruritus. In AMA‐M2–positive group, 46 cases showed fatigue symptoms, but 30 of them had blood formed element decrease such as RBC, WBC, and PLT. There was no definite conclusion as to whether fatigue was caused solely by anemia or whether both of them were inherent symptoms of PBC. Pruritus occurred in 13 positive individuals, but all of them were indirect or seasonal. The fourth stage is decompensated stage, which is characterized by jaundice, ascites, and encephalopathy. Once the patients go through this phase and develop jaundice, the disease rapidly deteriorates. The average time from this period to death or liver transplantation is 4 years. We only found jaundice in 4 patients in this survey. It is related to the fact that the subjects of our study were healthy population rather than patients. Therefore, mass screening for the AMA‐M2 antibody in those who had abnormal symptoms of blood routine and liver function and abdominal discomfort, allergic, and fatigue is important for improving diagnosis rate, facilitating early diagnosis, promoting early treatments of PBC, and reducing the occurrence of malignant complications.\nAnti‐SSA antibody is an immunoglobulin produced by the immune system in response to a group of small ribonucleic acid proteins in autologous cells. Many studies have confirmed that this antibody is an invasive autoantibody.11, 12 A study of patients with SLE found that anti‐SSA antibodies can adhere to the surface of granulocytes in patients with SLE and activate the complement system to cause granulocyte destruction.13 Other studies have shown that when the separated lymphocytes from patients with neurological lupus were cultured with serum containing autologous anti‐SSA antibody, the apoptotic rate increased significantly. 
As we found in this study, the blood components of anti‐SSA antibody‐positive population decreased significantly compared with the control group, which is likely to be caused by the antibody itself. Previous studies have shown that the prominent manifestation of humoral immune abnormalities in patients with pSS is hyperglobulinemia, especially the increase in IgG. Hyperglobulinemia promotes the increase in erythrocyte sedimentation rate (ESR).14 In this study, the immunological indexes such as IgG, C3, C4, and RF in anti‐SSA–positive population were also shown to be obviously abnormal. In addition, we found that the rate of abnormal liver function in anti‐SSA antibody‐positive group was significantly higher than that in control group. The target antigen of anti‐SSA antibody is a small ribonucleic acid protein, which is closely related to cell mitosis and protein synthesis. Studies have shown that the antibody may be involved in the transcriptional regulation process. When the anti‐SSA antibody binds to the antigen site, the function of the antigen would be blocked, and the downstream protein synthesis is also interrupted. Liver, as a vigorous organs of the body, has become the main target of anti‐SSA antibody. It is suggested that the population with abnormal blood routine, liver function index, and immune index in clinical laboratory should be detected for anti‐SSA as soon as possible in order to avoid misdiagnosis of the disease.\nThe limitation of this study is that there is no in‐depth study of anti‐Ro‐52 antibodies. Anti‐Ro‐52 antibody can exist in a variety of diseases, although it does not have high specificity, but as an antinuclear antibody with high positive rate, it is usually combined with anti‐SSA antibody, which is of auxiliary diagnosis and differential diagnosis value.", "Based on the epidemiological statistical analysis of healthy population, this study revealed the distribution of antinuclear antibody in male and female in healthy population and the clinical characteristics of AMA‐M2 antibody and anti‐SSA antibody. It improves the clinicians' understanding of antinuclear antibody detection, is conducive to promoting the popularization of antinuclear antibody detection, and is convenient for early diagnosis and control of the disease." ]
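Because every rate in the results is a simple proportion of the reported raw counts, the headline percentages can be re‐derived in a few lines of plain Python (no external libraries; the counts below are the ones quoted in the text):

    # Recompute the reported ANA positive rates from the raw counts.
    counts = {
        "overall, titer >1:100": (3519, 25110),
        "males,   titer >1:100": (1143, 12640),
        "females, titer >1:100": (2376, 12470),
        "overall, titer >1:320": (1489, 25110),
        "males,   titer >1:320": (406, 12640),
        "females, titer >1:320": (1083, 12470),
    }
    for label, (positive, total) in counts.items():
        print(f"{label}: {100 * positive / total:.2f}%")
    # Output matches the text: 14.01, 9.04, 19.05, 5.93, 3.21, 8.68 (%)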
[ null, "materials-and-methods", null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "AMA‐M2", "antinuclear antibody", "anti‐SSA antibody", "clinical characteristics", "misdiagnosis" ]
INTRODUCTION: Antinuclear antibodies (ANA) are a spectrum of immunoglobulins that react with various nuclear and cytoplasmic components of karyocytes and are mainly produced by plasma cells. The ANA family includes more than 15 types of specific antibodies, such as anti‐SSA and AMA‐M2 antibodies, which are of important value for clinical diagnosis. Previous studies have shown that AMA‐M2 and anti‐SSA antibodies can be detected in patients with primary biliary cholangitis (PBC) and systemic lupus erythematosus (SLE) several years before disease onset.1, 2, 3 ANA can act directly on the mitosis of karyocytes and arrest the mitotic cycle at different stages, disrupting DNA synthesis and protein production; this may further block the metabolism of various tissues and even cause morphological and structural changes and loss of function. Therefore, metabolically vital and active tissues and organs, such as the liver, the hematological system, and epithelial tissue, become the targets of ANA attack and are also the initial sites of clinical symptoms in ANA‐positive patients. Unfortunately, these symptoms are rarely connected with abnormal ANA by doctors in primary hospitals, which usually leads to a high rate of misdiagnosis. Therefore, we conducted an epidemiological investigation of ANA in a healthy population and further explored the clinical characteristics of individuals with serum‐positive AMA‐M2 and anti‐SSA antibodies, so as to provide necessary data for clinical practice. MATERIALS AND METHODS: Subjects: We collected samples from 25 110 residents who received health checkups at Baoding No.1 Central Hospital, Baoding, Hebei, China, from January 2015 to June 2018. The participants included workers, peasants, cadres, students, and kindergarten children, and their ages ranged from 4 months to 93 years. The subjects were healthy individuals; patients diagnosed with autoimmune disease (AID) and those with conditions that can cause ANA positivity, such as chronic active hepatitis, tuberculosis, and chronic infection, were excluded. To ensure the specificity of the results and to exclude interference factors such as advanced age, certain drugs, and malignant diseases, we selected only individuals with an ANA titer >1:320. With the participants' voluntary consent, 15 specific antibodies were also analyzed, including anti‐SSA and anti‐M2. The control group was negative on antibody detection, and its sex ratio was basically the same as that of the positive population. The study was approved by the Ethics Committee of Baoding No.1 Central Hospital, and informed consent was obtained from each individual. Detection of antibody: Blood samples (3 mL) were collected from the 25 110 residents who received health checkups. Serum was separated by centrifugation at 900 g for 5 minutes. ANA were tested by indirect immunofluorescence on HEp‐2 cells according to the manufacturer's instructions (Euroimmun AG). The 1489 positive samples were then tested by line immunoassay (LIA; Euroimmun AG) for 15 specific autoantibodies (anti‐Ro52, anti‐nRNP, anti‐Sm, anti‐SSA, anti‐SSB, anti‐Scl‐70, anti‐Jo‐1, anti‐CNEPB, anti‐dsDNA, anti‐PCNA, anti‐His, anti‐Nuc, anti‐RIB, anti‐M2, and anti‐PMScl‐70). The serum for the LIA test was diluted 1:100. EUROBlotMaster (Euroimmun AG) and EUROLineScan (Euroimmun AG) were used to perform the assay and to interpret the test results, respectively. Statistical analysis: Statistical analysis was performed using SPSS Version 19.0 software (IBM Corp.). P < 0.05 was considered statistically significant. Count data were compared between groups using the chi‐square test. RESULTS: Positive rate of ANA in healthy population: A total of 25 110 residents were tested for ANA, including 12 640 males and 12 470 females. The positive rate of ANA titer >1:100 was 14.01% (3519/25 110); 9.04% of males (1143/12 640) and 19.05% of females (2376/12 470) were positive. The positive rate of ANA titer >1:320 was 5.93% (1489/25 110); 3.21% of males (406/12 640) and 8.68% of females (1083/12 470) were positive (Table 1). The difference in positive rates between the sexes was statistically significant. Distribution of the antinuclear antibody‐positive population by sex Abbreviation: ANA, antinuclear antibody. Distribution of ANA‐specific antibodies in 1489 patients with ANA titer >1:320: A total of 1489 positive samples with ANA titer >1:320 were further tested by line immunoassay (LIA) for 15 specific autoantibodies. The positive rate of the 15 specific antibodies was 44.29% (659/1489). The top three antibodies were anti‐Ro‐52 (212), AMA‐M2 (189), and anti‐SSA (144) (Figure 1). Distribution of antinuclear antibody (ANA)‐specific antibodies in the positive population. 15 types of ANA‐specific antibodies were detected by line immunoassays Statistical analysis of PBC‐related laboratory indicators in AMA‐M2–positive patients: Among the 15 specific antibodies, AMA‐M2 showed a high positive rate, with a total of 189 AMA‐M2–positive cases, including 39 males and 150 females. Samples from 40 males and 160 females with negative ANA were randomly selected as controls. Our data demonstrated that the abnormal rates of blood routine, liver function, and other clinical indicators in the 189 AMA‐M2–positive cases were significantly different from those in the control group (Table 2). We examined the biochemical indexes of liver function and found that alkaline phosphatase (ALP) was elevated in 70 patients, and the diagnostic rate of PBC was 37.4% (Table 3). Distribution of laboratory test results of primary biliary cholangitis‐related indicators in the AMA‐M2–positive population and the control group Abnormal blood routine: single or mixed decrease in erythrocytes, leukocytes, and platelets. Abnormal liver function: single or mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), and conjugated bilirubin (CBIL). Abdominal discomfort: including bloating, abdominal pain, belching, and intermittent nausea. Gallbladder lesion: including gallstones, post‐cholecystectomy status, and thickened gallbladder wall. Distribution of liver function biochemical indicators in the AMA‐M2–positive population Abbreviations: ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CBIL, conjugated bilirubin; GGT, gamma‐glutamyltranspeptidase. Analysis of laboratory test indicators for anti‐SSA antibody‐positive population: The positive rate of anti‐SSA antibody was also high among the 15 specific antibodies. A total of 144 cases were positive for anti‐SSA antibody, including 36 males and 144 females; 30 males and 120 females with negative ANA were randomly selected as the control group. Compared with the control group, blood formed elements were significantly decreased and the erythrocyte sedimentation rate (ESR) was significantly increased in the anti‐SSA antibody‐positive group. The abnormal rates of liver function, IgG, C3, C4, and rheumatoid factor (RF) in the anti‐SSA antibody‐positive group were significantly different from those in the control group (Table 4). Distribution of laboratory test indicators in the anti‐SSA antibody‐positive population and the control group Abbreviations: ESR, erythrocyte sedimentation rate; RF, rheumatoid factor. Abnormal blood routine: single or mixed decrease in erythrocytes, leukocytes, and platelets. Abnormal liver function: single or mixed increase in alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma‐glutamyltranspeptidase (GGT), alkaline phosphatase (ALP), total bilirubin (TBIL), and conjugated bilirubin (CBIL). DISCUSSION: In this study, we retrospectively analyzed the positive rate of ANA in a healthy population aged 4 months to 93 years. The results showed that the positive rate of ANA titer >1:100 was 14.01%, basically consistent with Satoh's ANA survey of healthy people over 12 years old in the United States.4 The positive rate of females with ANA titer >1:100 was 2.08 times that of males, and the positive rate of females with ANA titer >1:320 was 2.67 times that of males, suggesting that females are at high risk of autoimmune diseases. We detected specific antibodies in the population with high‐titer ANA positivity and found that the top three positive antibodies were anti‐Ro‐52, AMA‐M2, and anti‐SSA. In this study, we focused on the AMA‐M2 and anti‐SSA antibodies, which have high specificity for autoimmune diseases (AID). Anti‐mitochondrial antibody (AMA) is an autoantibody against mitochondrial inner membrane lipoprotein in the cytoplasm. It is divided into nine subtypes, of which the M2 subtype is a serological marker of PBC and plays an important role in its diagnosis; the specificity of AMA‐M2 for PBC is up to 95%.5, 6 PBC is an immune‐mediated, progressive, non‐suppurative inflammatory disease of the bile ducts with uncertain etiology. PBC is usually complicated by intrahepatic cholestasis and damage to the intrahepatic bile ductules, eventually leading to liver fibrosis and cirrhosis.7, 8 In this study, we analyzed the clinical symptoms associated with PBC in the AMA‐M2–positive population. PBC is usually divided into four stages. The first stage is preclinical, generally with AMA positivity only in serum or bile duct epithelium. Only a small part of the AMA‐M2–positive population had no abnormal indicators and no discomfort; these individuals may be in the preclinical period, but they should still receive close attention. It has been reported that 29 AMA‐positive individuals had no clinical symptoms; however, after 18 years of follow‐up, 83% of them had abnormal liver function, and 76% had fatigue and pruritus.3 The second stage is liver dysfunction, mainly characterized by increases in ALP, gamma‐glutamyltranspeptidase (GGT), and other enzymes. In this investigation, we found that ALP, GGT, aspartate aminotransferase (AST), alanine aminotransferase (ALT), total bilirubin, and direct bilirubin were increased in 80 cases in the AMA‐M2–positive group, significantly more than in the negative group. Seventy cases in the AMA‐M2–positive group with elevated ALP could be diagnosed as PBC, meeting the 2009 European and American diagnostic criteria for PBC.9, 10 The diagnostic rate of PBC in the AMA‐M2–positive group was 37.4%. Diagnosis in this period is of great significance for controlling disease progression and preventing misdiagnosis. The third stage is the clinical symptom stage, in which patients have typical symptoms such as fatigue and pruritus. In the AMA‐M2–positive group, 46 cases showed fatigue, and 30 of them had decreases in blood formed elements such as RBC, WBC, and PLT. There is no definite conclusion as to whether the fatigue was caused solely by anemia or whether both were inherent symptoms of PBC. Pruritus occurred in 13 positive individuals, but in all of them it was intermittent or seasonal. The fourth stage is the decompensated stage, characterized by jaundice, ascites, and encephalopathy. Once patients enter this phase and develop jaundice, the disease deteriorates rapidly; the average time from this period to death or liver transplantation is 4 years. We found jaundice in only 4 patients in this survey, which is related to the fact that our subjects were a healthy population rather than patients. Therefore, mass screening for the AMA‐M2 antibody in those with abnormal blood routine or liver function results, abdominal discomfort, allergy, or fatigue is important for improving the diagnosis rate, facilitating early diagnosis, promoting early treatment of PBC, and reducing the occurrence of malignant complications. Anti‐SSA antibody is an immunoglobulin produced by the immune system in response to a group of small ribonucleoproteins in autologous cells. Many studies have confirmed that this antibody is an invasive autoantibody.11, 12 A study of patients with SLE found that anti‐SSA antibodies can adhere to the surface of granulocytes and activate the complement system, causing granulocyte destruction.13 Other studies have shown that when lymphocytes separated from patients with neurological lupus were cultured with serum containing autologous anti‐SSA antibody, their apoptotic rate increased significantly. As we found in this study, the blood formed elements of the anti‐SSA antibody‐positive population were significantly decreased compared with the control group, which is likely to be caused by the antibody itself. Previous studies have shown that the prominent manifestation of humoral immune abnormality in patients with pSS is hyperglobulinemia, especially increased IgG; hyperglobulinemia in turn raises the erythrocyte sedimentation rate (ESR).14 In this study, immunological indexes such as IgG, C3, C4, and RF in the anti‐SSA–positive population were also obviously abnormal. In addition, we found that the rate of abnormal liver function in the anti‐SSA antibody‐positive group was significantly higher than that in the control group. The target antigen of anti‐SSA antibody is a small ribonucleoprotein closely related to cell mitosis and protein synthesis. Studies have shown that the antibody may be involved in transcriptional regulation: when anti‐SSA antibody binds to the antigen site, the function of the antigen is blocked and downstream protein synthesis is interrupted. The liver, as one of the body's most metabolically active organs, becomes a main target of anti‐SSA antibody. We suggest that individuals with abnormal blood routine, liver function, and immune indexes in the clinical laboratory be tested for anti‐SSA as soon as possible to avoid misdiagnosis. A limitation of this study is the lack of an in‐depth analysis of anti‐Ro‐52 antibodies. Anti‐Ro‐52 antibody occurs in a variety of diseases and lacks high specificity, but as an antinuclear antibody with a high positive rate it usually coexists with anti‐SSA antibody and has value for auxiliary and differential diagnosis. CONCLUSION: Based on the epidemiological statistical analysis of a healthy population, this study revealed the distribution of antinuclear antibodies in males and females and the clinical characteristics of AMA‐M2 antibody‐ and anti‐SSA antibody‐positive individuals. These findings improve clinicians' understanding of antinuclear antibody detection, promote its wider use, and facilitate early diagnosis and control of disease.
Background: In China, the incidence of autoimmune diseases is gradually increasing. To decrease the misdiagnosis rate of autoimmune diseases, we conducted an epidemiological investigation of the presence of antinuclear antibody (ANA) in healthy populations and analyzed the clinical characteristics of healthy individuals with both a high titer of ANA and positive anti-SSA or AMA-M2. Methods: Serum ANA titers were detected by indirect immunofluorescence (IIF), and 15 other types of ANA-specific antibodies were detected by line immunoassays. Results: Among 25 110 individuals undergoing routine examination, the positive rate of ANA titer >1:100 was 14.01%, and the positive rate in females (19.05%) was higher than that in males (9.04%; P < 0.01). The positive rate of ANA titer >1:320 was 5.93%, and the positive rate in females (8.68%) was higher than that in males (3.21%; P < 0.01). Specific antibodies were detected in 1489 ANA-positive people with titers >1:320, and the three most frequently detected antibodies were anti-Ro-52 (212), AMA-M2 (189), and anti-SSA (144). The abnormal rates of the blood routine test, liver function test, and other clinical indicators in the AMA-M2-positive population differed significantly from those in the control group. The abnormal rates of the blood routine test, liver function test, and immune indexes in the anti-SSA-positive population were higher than those in the control group. Conclusions: There was a high prevalence of ANA positivity in the healthy population. To avoid misdiagnosis, those who have symptoms of abdominal discomfort, pruritus, or fatigue with abnormal results on blood routine and liver function tests should be examined for ANA, AMA-M2, and anti-SSA as early as possible.
INTRODUCTION: Antinuclear antibodies (ANA) are a spectrum of immunoglobulins that react with various nuclear and cytoplasmic components of karyocytes and are mainly produced by plasma cells. The ANA family includes more than 15 types of specific antibodies, such as the anti‐SSA and AMA‐M2 antibodies, which have important value for clinical diagnosis. Previous studies have shown that AMA‐M2 and anti‐SSA antibodies could be detected in patients with primary biliary cholangitis (PBC) and systemic lupus erythematosus (SLE) several years before disease onset.1, 2, 3 ANA can act directly on the mitosis of karyocytes and arrest the mitotic cycle at different stages, leading to disordered DNA synthesis and protein production, which may further block the metabolism of various tissues and even cause morphological and structural changes and loss of function. Therefore, the tissues and organs vital to body metabolism, such as the liver, the hematological system, and epithelial tissue, become the targets of ANA attack and are also the initial sites of clinical symptoms in ANA‐positive patients. Unfortunately, these symptoms are rarely connected to abnormal ANA by doctors in basic hospitals, which usually leads to a high rate of misdiagnosis in the population. Therefore, we conducted an epidemiological investigation of ANA in a healthy population and further explored the clinical characteristics of individuals with serum‐positive AMA‐M2 and anti‐SSA antibodies in order to provide necessary data for clinical practice. CONCLUSION: Based on an epidemiological statistical analysis of a healthy population, this study revealed the distribution of antinuclear antibody in males and females and the clinical characteristics of the AMA‐M2 and anti‐SSA antibodies. It improves clinicians' understanding of antinuclear antibody detection, promotes the popularization of antinuclear antibody testing, and facilitates early diagnosis and control of disease.
Background: In China, the incidence of autoimmune diseases is gradually increasing. To decrease the misdiagnosis rate of autoimmune diseases, we conducted an epidemiological investigation of the presence of antinuclear antibody (ANA) in healthy populations and analyzed the clinical characteristics of healthy individuals with both a high titer of ANA and positive anti-SSA or AMA-M2. Methods: Serum ANA titers were detected by indirect immunofluorescence (IIF), and 15 other types of ANA-specific antibodies were detected by line immunoassays. Results: Among 25 110 individuals undergoing routine examination, the positive rate of ANA titer >1:100 was 14.01%, and the positive rate in females (19.05%) was higher than that in males (9.04%; P < 0.01). The positive rate of ANA titer >1:320 was 5.93%, and the positive rate in females (8.68%) was higher than that in males (3.21%; P < 0.01). Specific antibodies were detected in 1489 ANA-positive people with titers >1:320, and the three most frequently detected antibodies were anti-Ro-52 (212), AMA-M2 (189), and anti-SSA (144). The abnormal rates of the blood routine test, liver function test, and other clinical indicators in the AMA-M2-positive population differed significantly from those in the control group. The abnormal rates of the blood routine test, liver function test, and immune indexes in the anti-SSA-positive population were higher than those in the control group. Conclusions: There was a high prevalence of ANA positivity in the healthy population. To avoid misdiagnosis, those who have symptoms of abdominal discomfort, pruritus, or fatigue with abnormal results on blood routine and liver function tests should be examined for ANA, AMA-M2, and anti-SSA as early as possible.
4,913
358
[ 258, 210, 142, 37, 136, 87, 274, 204 ]
12
[ "anti", "positive", "antibody", "ana", "ssa", "anti ssa", "m2", "rate", "population", "ama" ]
[ "specificity antinuclear antibody", "introduction antinuclear antibodies", "antibodies ana spectrum", "antinuclear antibody high", "ana specific antibodies" ]
null
[CONTENT] AMA‐M2 | antinuclear antibody | anti‐SSA antibody | clinical characteristics | misdiagnosis [SUMMARY]
null
[CONTENT] AMA‐M2 | antinuclear antibody | anti‐SSA antibody | clinical characteristics | misdiagnosis [SUMMARY]
[CONTENT] AMA‐M2 | antinuclear antibody | anti‐SSA antibody | clinical characteristics | misdiagnosis [SUMMARY]
[CONTENT] AMA‐M2 | antinuclear antibody | anti‐SSA antibody | clinical characteristics | misdiagnosis [SUMMARY]
[CONTENT] AMA‐M2 | antinuclear antibody | anti‐SSA antibody | clinical characteristics | misdiagnosis [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Antinuclear | Autoantigens | Autoimmune Diseases | Child | Child, Preschool | Epidemiological Monitoring | Female | Follow-Up Studies | Humans | Infant | Male | Middle Aged | Prognosis | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Antinuclear | Autoantigens | Autoimmune Diseases | Child | Child, Preschool | Epidemiological Monitoring | Female | Follow-Up Studies | Humans | Infant | Male | Middle Aged | Prognosis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Antinuclear | Autoantigens | Autoimmune Diseases | Child | Child, Preschool | Epidemiological Monitoring | Female | Follow-Up Studies | Humans | Infant | Male | Middle Aged | Prognosis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Antinuclear | Autoantigens | Autoimmune Diseases | Child | Child, Preschool | Epidemiological Monitoring | Female | Follow-Up Studies | Humans | Infant | Male | Middle Aged | Prognosis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Antibodies, Antinuclear | Autoantigens | Autoimmune Diseases | Child | Child, Preschool | Epidemiological Monitoring | Female | Follow-Up Studies | Humans | Infant | Male | Middle Aged | Prognosis | Young Adult [SUMMARY]
[CONTENT] specificity antinuclear antibody | introduction antinuclear antibodies | antibodies ana spectrum | antinuclear antibody high | ana specific antibodies [SUMMARY]
null
[CONTENT] specificity antinuclear antibody | introduction antinuclear antibodies | antibodies ana spectrum | antinuclear antibody high | ana specific antibodies [SUMMARY]
[CONTENT] specificity antinuclear antibody | introduction antinuclear antibodies | antibodies ana spectrum | antinuclear antibody high | ana specific antibodies [SUMMARY]
[CONTENT] specificity antinuclear antibody | introduction antinuclear antibodies | antibodies ana spectrum | antinuclear antibody high | ana specific antibodies [SUMMARY]
[CONTENT] specificity antinuclear antibody | introduction antinuclear antibodies | antibodies ana spectrum | antinuclear antibody high | ana specific antibodies [SUMMARY]
[CONTENT] anti | positive | antibody | ana | ssa | anti ssa | m2 | rate | population | ama [SUMMARY]
null
[CONTENT] anti | positive | antibody | ana | ssa | anti ssa | m2 | rate | population | ama [SUMMARY]
[CONTENT] anti | positive | antibody | ana | ssa | anti ssa | m2 | rate | population | ama [SUMMARY]
[CONTENT] anti | positive | antibody | ana | ssa | anti ssa | m2 | rate | population | ama [SUMMARY]
[CONTENT] anti | positive | antibody | ana | ssa | anti ssa | m2 | rate | population | ama [SUMMARY]
[CONTENT] ana | antibodies | clinical | tissues | metabolism | ama m2 anti | ama m2 anti ssa | anti ssa antibodies | symptoms | m2 anti ssa [SUMMARY]
null
[CONTENT] positive | rate | 12 | antibody | group | males | ana | aminotransferase | liver function | bilirubin [SUMMARY]
[CONTENT] antibody | antinuclear antibody detection | antinuclear antibody | antinuclear | antibody detection | healthy population | detection | healthy | improves clinicians understanding antinuclear | ssa antibody improves clinicians [SUMMARY]
[CONTENT] anti | positive | antibody | ana | ama | ssa | anti ssa | rate | ama m2 | group [SUMMARY]
[CONTENT] anti | positive | antibody | ana | ama | ssa | anti ssa | rate | ama m2 | group [SUMMARY]
[CONTENT] China ||| ANA | AMA-M2 [SUMMARY]
null
[CONTENT] 25 | 110 | ANA | 1:100 | 14.01% | 19.05% | 9.04% | 0.01 ||| ANA | 5.93% | 8.68% | 3.21% | 0.01 ||| 1489 | three | 212 | AMA-M2 | 189 | 144 ||| AMA ||| [SUMMARY]
[CONTENT] ANA ||| ANA | AMA-M2 [SUMMARY]
[CONTENT] China ||| ANA | AMA-M2 | 15 ||| ||| 25 | 110 | ANA | 1:100 | 14.01% | 19.05% | 9.04% | 0.01 ||| ANA | 5.93% | 8.68% | 3.21% | 0.01 ||| 1489 | three | 212 | AMA-M2 | 189 | 144 ||| AMA ||| ||| ANA ||| ANA | AMA-M2 [SUMMARY]
[CONTENT] China ||| ANA | AMA-M2 | 15 ||| ||| 25 | 110 | ANA | 1:100 | 14.01% | 19.05% | 9.04% | 0.01 ||| ANA | 5.93% | 8.68% | 3.21% | 0.01 ||| 1489 | three | 212 | AMA-M2 | 189 | 144 ||| AMA ||| ||| ANA ||| ANA | AMA-M2 [SUMMARY]
Associations between low back pain, urinary incontinence, and abdominal muscle recruitment as assessed via ultrasonography in the elderly.
25714438
Low back pain (LBP) and urinary incontinence (UI) are highly prevalent among elderly individuals. In young adults, changes in trunk muscle recruitment, as assessed via ultrasound imaging, may be associated with lumbar spine stability.
BACKGROUND
Fifty-four elderly individuals (mean age: 72±5.2 years) who complained of LBP and/or UI were included in the study and were assessed using the McGill Pain Questionnaire, the Incontinence Questionnaire-Short Form, and ultrasound imaging. The statistical analysis comprised multiple linear regression models, and a p-value <0.05 was considered significant.
METHOD
The regression models for the TrA, IO, and EO muscle thickness levels explained 2.0% (R2=0.02; F=0.47; p=0.628), 10.6% (R2=0.106; F=3.03; p=0.057), and 10.1% (R2=0.101; F=2.70; p=0.077) of the variability, respectively. None of the regression models developed for the abdominal muscles exhibited statistical significance. A significant and negative association (p=0.018; β=-0.0343) was observed only between UI and IO recruitment.
RESULTS
These results suggest that age-related factors may have interfered with the findings of the study, thus emphasizing the need to perform ultrasound imaging-based studies to measure abdominal muscle recruitment in the elderly.
CONCLUSION
[ "Abdominal Muscles", "Aged", "Aged, 80 and over", "Cross-Sectional Studies", "Female", "Humans", "Low Back Pain", "Male", "Ultrasonography", "Urinary Incontinence" ]
4351610
Introduction
Population aging is occurring worldwide. In Brazil, given the demographic and epidemiological evolution of chronic diseases, population aging requires constant care and monitoring and thus increases the demand for health services1. Low back pain (LBP)2 and urinary incontinence (UI)3 are conditions that strongly affect functioning in the elderly and hinder the performance of everyday activities, thus causing physical and emotional distress, incurring high socioeconomic costs, restricting social participation, and decreasing the quality of life4. Moreover, LBP and UI are erroneously considered natural aspects of the aging process3. Approximately 50-80% of the general population appears to have experienced at least 1 episode of LBP during their lifetime1. The prevalence of LBP remains stable and ranges from 12-33%5 worldwide, with rates of approximately 63% in the Brazilian population and 57.7% among elderly individuals1. The annual incidence of UI in women ranges from 2-11%, and this disorder is twice as common in women as it is in men6. In healthy individuals, the abdominal and pelvic floor muscles work synergistically7 - 10. However, in the absence of micturition control, the pelvic muscle activation pattern apparently changes and overloads the spine stabilizers3 , 10. The pelvic floor muscles play an important role in the provision of postural lumbo-pelvic stability, which is conferred by connections of the muscles around the trunk7 , 8 , 10. Because of its anatomic characteristics, the transversus abdominis (TrA) muscle preferentially stabilizes the spine11 , 12. Moreover, the TrA is the first muscle to be activated in response to lower and upper limb movements, thus conferring the required rigidity to the lumbar spine and avoiding undesired segmental movements13. Delayed TrA activation is observed in younger adults with chronic LBP and suggests a failure in lumbo-pelvic stabilization12 - 14. In the elderly, stabilization failure due to geometric muscle and postural alterations can occur because the musculoskeletal and nervous systems are influenced by a variety of pathophysiological changes that lead to uncoordinated performance15 - 17, including decreased maximal voluntary contractions15; reductions in the peak muscle power15, transverse area, and rate of neuromuscular activation; increased intramuscular fat deposits17; and reductions in the muscle fiber length (atrophy) and number (hypoplasia), which particularly affects hybrid fibers15. Changes might also occur in sensory receptors, peripheral nerves, joints, and the central nervous system (e.g., decreases in white and gray matter volume and dopaminergic denervation)18. This complex array of modifications responsible for age-related losses of muscle mass is collectively called sarcopenia and occurs due to hormonal, nutritional, immunological, and metabolic alterations15. Sarcopenia can be triggered by changes in either the intracellular signaling cascade or the basic cellular processes that inhibit satellite cell activation, particularly during inflammation19. Ultrasound imaging (i.e., rehabilitative ultrasound imaging [RUSI]) has been accepted as a valid tool for assessing muscle recruitment because similar results can be obtained via electromyography (EMG) evaluation12 , 20 , 21. The main advantage of ultrasound imaging is its low invasiveness12 , 20 , 21. Physical therapists use ultrasound measurements to assess muscle function and soft tissue morphology during movement or while performing specific tasks20 , 22. 
Ultrasound imaging is also used to assist therapeutic approaches intended to improve neuromuscular function14 , 20 , 23 , 24. Several studies have described the neuromuscular trunk muscle patterns via ultrasound imaging in young adults both with and without a history of low back pain12 - 14 , 21. However, the relevance of these findings in elderly populations is unknown, as no studies involving ultrasound imaging in elderly individuals with LBP were found in the reviewed literature2. Accordingly, the present study analyzed the association between LBP and UI as well as the patterns of TrA, internal oblique (IO), and external oblique (EO) muscle recruitment as determined via ultrasound imaging in a cohort of community-dwelling elderly individuals.
Method
Individuals

Data were collected from male and female community-dwelling elderly individuals who were aged 65 years or older and had no cognitive alterations as assessed using the Mini Mental State Examination (MMSE)25. These individuals had complained of LBP, and some had reported UI. The exclusion criteria were acute low back pain; evidence of radiculopathies (e.g., reflex alteration, dermatomes, and/or myotomes or positive Lasegue test); a positive clinical history of neurological diseases, thoraco-abdominal surgeries (e.g., cesarean delivery and hysterectomy) or spinal surgeries, and vertebral fractures; signs suggestive of severe spinal cord injury due to severe trauma; a history of malignant tumor (prostate cancer) or unexplained weight loss; severe spinal deformities (e.g., scoliosis, hyperkyphosis); spinal physical therapy in the last 6 months; and/or lumbar and/or pelvic floor stabilization exercises. The sample size was calculated while considering the abdominal muscle recruitment pattern as well as the dependent and independent study variables, the presence of LBP, and reported UI. It was calculated that 54 subjects would be required to obtain a correlation of 0.40 between the dependent and independent variables and a correlation of 0.20 among the independent variables with a coefficient of determination (R2) of 0.40 and a statistical power of 80%. For convenience, we sequentially recruited 60 community-dwelling elderly individuals. After selection and initiating data collection, 6 participants were excluded from the study. Therefore, the statistical analyses included 54 participants.
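For readers who want to reproduce the flavor of this a priori calculation, the sketch below estimates the sample size needed for the overall F test of a 2-predictor linear regression to reach 80% power, using Cohen's f² derived from R² = 0.40. This is a simplified illustration under assumptions of our own (overall F test, f² framing); the authors' procedure, which additionally specified correlations of 0.40 and 0.20 among the variables, may differ and arrived at n=54.

```python
# Minimal power-based sample-size sketch for the overall F test of a
# linear regression with R^2 = 0.40 and 2 predictors (assumed framing).
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors, r2, alpha=0.05):
    """Power of the overall F test for a regression achieving the given R^2."""
    f2 = r2 / (1.0 - r2)          # Cohen's f^2 effect size
    df1 = n_predictors
    df2 = n - n_predictors - 1
    nc = f2 * n                   # noncentrality parameter
    f_crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return ncf.sf(f_crit, df1, df2, nc)

# Smallest n reaching 80% power under these simplified assumptions
n = 4
while regression_power(n, n_predictors=2, r2=0.40) < 0.80:
    n += 1
print(n)
```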
Study design

This cross-sectional observational study was approved by the Research Ethics Committee of the Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG, Brazil (ETIC 324/07) and was conducted according to Resolution 196/96 of the National Health Council, which addresses the Code of Ethics of Human Research. After having read and obtained clarification regarding the study terms, each individual signed an informed consent form prior to participation.

Materials and procedures

The cohort was assessed using a questionnaire that included questions about sociodemographic and clinical-functional information. LBP was characterized using the McGill Pain Questionnaire (Br-MPQ)4, 26, an appropriate tool for assessing chronic pain in elderly individuals. The intra- and inter-examiner reliability rates for the Brazilian version of the Br-MPQ were found to be 0.86 and 0.89, respectively, with rates of 0.71 and 0.68 for orthopedic and neurological diseases, respectively4. To assess the presence of UI and determine the frequencies and amounts of urinary loss reported by the participants, 2 questions from the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-SF) quality of life survey that were UI-specific and had been validated for Portuguese-speaking subjects were implemented27. A 2-dimensional ultrasound imaging device (Sonoline SL1; Siemens Healthcare, Erlangen, Germany) was used to evaluate abdominal muscle recruitment. The images were captured by a 10-cm, 7.5-MHz transducer coupled to the ultrasound imaging device. A more detailed description of the protocol used for our assessments and measurements was provided in the original study21. To ensure intra- and inter-examiner reliability, a pilot study with 12 volunteers was conducted. The test-retest reliability results obtained using the intraclass correlation coefficient (ICC) were 0.76 (95% CI: 0.16-0.93) for the TrA, 0.49 (95% CI: -0.76-0.85) for the IO, and 0.58 (95% CI: -0.46-0.88) for the EO. All ultrasound images were captured and analyzed by the same previously trained researcher. Each participant was asked to lie on a stretcher, and the researchers positioned the lower limbs using a device with a rectangular metal frame according to a previous model12, 21. The limbs were positioned to allow the hips and knees to remain flexed at 50º and 90º, respectively (Figure 1). The participant was then asked to cross their arms over their chest, and the ultrasound transducer was positioned at the height of the umbilical scar, which was approximately 10 cm from the midline, lateral to the abdominal wall, and between the iliac crest and rib cage. After proper positioning, the participant was asked to remain at rest while images of the abdominal muscles at rest (baseline) were captured. Next, the participant was instructed to generate a contraction force before bending and then extending the knees; this corresponded to 7.5% of the body mass.
This force produced an isometric contraction of the abdominal muscles, which was measured using a force-gauge (Cabela's(r) Digital Scale; Cabela's Incorporated, Sidney, NE, USA). The images were stored using video software (Pinnacle Studio, version 9.4(r); Corel Corporation, Ottawa, ON, Canada).

Figure 1. Diagram of the device used to position the participant during ultrasound image collection. Source: Ferreira et al.12 (used with permission).
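As a concrete example of the loading step just described, the target reading on the force gauge is simply 7.5% of the participant's body mass; the 70 kg body mass below is a hypothetical example value.

```python
# Target force-gauge reading for the isometric contraction task:
# 7.5% of the participant's body mass (example mass is hypothetical).
body_mass_kg = 70.0
target_load_kg = 0.075 * body_mass_kg
print(f"Target load: {target_load_kg:.2f} kg")  # 5.25 kg for a 70 kg participant
```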
Statistical analysis

A descriptive analysis of the quantitative variables was conducted by calculating the average, a central trend measure, and the standard deviation that assessed the sample data variability. A descriptive analysis of the qualitative variables was conducted by calculating the frequencies of each category. To evaluate the associations between the continuous variables, 3 multiple linear regression analysis models were created. Each model used the proportion of the abdominal muscle (TrA, IO, and EO) recruitment as the dependent variable and the LBP and UI as independent variables. All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) for Windows (version 17.0; SPSS Inc., Chicago, IL, USA). The level of significance was set at p<0.05.
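A minimal sketch of this model specification in Python with statsmodels is shown below, assuming a data frame in which LBP and UI are coded 0/1 and muscle recruitment is a proportion; the column names and example values are illustrative, not the study data.

```python
# One regression per muscle: recruitment ~ UI + LBP (0/1 predictors).
# The data frame below is a small hypothetical stand-in for the cohort data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "io":  [0.03, 0.05, 0.02, 0.04, 0.06, 0.01],  # IO recruitment (proportion)
    "ui":  [1, 0, 1, 0, 0, 1],                    # urinary incontinence (yes=1)
    "lbp": [1, 0, 1, 1, 0, 0],                    # low back pain (yes=1)
})

model = smf.ols("io ~ ui + lbp", data=df).fit()
print(model.summary())  # reports R^2, the F statistic, and per-predictor p-values
```

The TrA and EO models follow the same pattern with their respective recruitment columns as the dependent variable.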
Results
The sociodemographic and clinical characteristics of the study cohort are presented in Table 1. The cohort mostly comprised women (76%), and after calculating the body mass index (BMI) values of the participants, 46.3% of the subjects were determined to be eutrophic according to the interval (22-27 kg/m2) for elderly individuals proposed by Lipschitz28; an additional 14.8% were malnourished, and 38.9% were overweight. Among the elderly patients suffering from LBP (n=34), 52.9% reported moderate pain (2.24±0.78) and 44.1% reported short/transitional/temporary pain according to the Br-MPQ survey (Table 1). Among the elderly patients reporting UI (n=22), 54.5% were losing urine once weekly or less frequently, and 81.8% ranked this loss as of low intensity.

Table 1. Sociodemographic and clinical characteristics of the studied cohort (n=54).

Characteristic                   | n (%)      | Mean±SD   | Range
Age (years)                      | 54 (100%)  | 72±5.2    | 65-84
Gender (female)                  | 41 (76%)   |           |
School education (years)         |            |           |
  Illiterate                     | 5 (9.3%)   |           |
  1-8                            | 39 (72.2%) |           |
  ≥9                             | 11 (20.3%) |           |
Number of comorbidities          | -          | 4.78±2.35 | 1-13
Number of drugs                  |            |           |
  None                           | 7 (13%)    |           |
  1-5                            | 38 (70.3%) |           |
  >5                             | 9 (16.7%)  |           |
UI (yes)                         | 22 (40.7%) |           |
LBP (yes)                        | 34 (62.9%) |           |
Pain intensity (Br-MPQ)a         |            | 1.41±1.25 | 0-4
  Mild (1)                       | 5 (14.7%)  |           |
  Moderate (2)                   | 18 (52.9%) |           |
  Severe (3)                     | 9 (26.5%)  |           |
  Unbearable (4)                 | 2 (5.9%)   |           |
Pain temporal pattern (Br-MPQ)a  |            |           |
  Continuous/stable/constant     | 10 (29.4%) |           |
  Rhythmic/periodic/intermittent | 9 (26.4%)  |           |
  Brief/momentary/transitory     | 15 (44.1%) |           |

SD: standard deviation; %: relative percentage of elderly individuals (n=54); Br-MPQ: McGill pain questionnaire-Brazil; UI: urinary incontinence; LBP: low back pain; a Values for the n=34 elderly patients who reported the presence of LBP.

None of the clinical variables strongly correlated with the TrA (UI, p=0.541; LBP, p=0.412) or EO muscle thicknesses (UI, p=0.091; LBP, p=0.078). Furthermore, LBP was not associated with the IO muscle thickness (p=0.931). Table 2 shows the results of the multiple linear regression models used to assess correlations between the TrA, IO, and EO muscle recruitment patterns and the variables of LBP and UI. These results illustrated that the regression models for the TrA, IO, and EO muscle recruitment levels explained 2.0% (R2=0.02; F=0.47; p=0.628), 10.6% (R2=0.106; F=3.03; p=0.057), and 10.1% (R2=0.101; F=2.70; p=0.077) of the variability, respectively. Only in the model for the IO muscle did a predictor (UI) reach statistical significance (stepwise regression; Table 2).

Table 2. Associations between the TrA, IO, and EO recruitment levels and LBP and UI.

Variable       | TrA: β | TrA: p | IO: b   | IO: p  | EO: β   | EO: p
Constant (b0)  | 0.039  | 0.015a | 0.0303  | 0.015‡ | 0.0001  | 0.992
UI (b1)        | -0.011 | 0.541  | -0.0343 | 0.018‡ | 0.0218  | 0.091
LBP (b2)       | 0.015  | 0.412  | 0.0012  | 0.931  | -0.0232 | 0.078

UI: urinary incontinence; LBP: low back pain; multiple linear regression analysis; a p<0.05; ‡ p<0.05, stepwise multiple linear regression.
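Reading the IO column of Table 2 as a fitted equation (with UI and LBP coded 0/1), the reported model is:

```latex
\widehat{\mathrm{IO}} = 0.0303 \;-\; 0.0343\,\mathrm{UI} \;+\; 0.0012\,\mathrm{LBP}
```

Only the UI coefficient differs significantly from zero (p=0.018), so the model predicts IO recruitment lower by 0.0343 in participants reporting UI, holding LBP constant.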
null
null
[ "Individuals", "Study design", "Materials and procedures", "Statistical analysis" ]
[ "Data were collected from male and female community-dwelling elderly individuals who\nwere aged 65 years or older and had no cognitive alterations as assessed using the\nMini Mental State Examination (MMSE)25. These\nindividuals had complained of LBP, and some had reported UI. The exclusion criteria\nwere acute low back pain; evidence of radiculopathies (e.g., reflex alteration,\ndermatomes, and/or myotomes or positive Lasegue test); a positive clinical history of\nneurological diseases, thoraco-abdominal surgeries (e.g., cesarean delivery and\nhysterectomy) or spinal surgeries, and vertebral fractures; signs suggestive of\nsevere spinal cord injury due to severe trauma; a history of malignant tumor\n(prostate cancer) or unexplained weight loss; severe spinal deformities (e.g.,\nscoliosis, hyperkyphosis); spinal physical therapy in the last 6 months; and/or\nlumbar and/or pelvic floor stabilization exercises.\nThe sample size was calculated while considering the abdominal muscle recruitment\npattern as well as the dependent and independent study variables, the presence of\nLBP, and reported UI. It was calculated that 54 subjects would be required to obtain\na correlation of 0.40 between the dependent and independent variables and a\ncorrelation of 0.20 among the independent variables with a coefficient of\ndetermination (R2) of 0.40 and a statistical power of 80%. For\nconvenience, we sequentially recruited 60 community-dwelling elderly individuals.\nAfter selection and initiating data collection, 6 participants were excluded from the\nstudy. Therefore, the statistical analyses included 54 participants.", "This cross-sectional observational study was approved by the Research Ethics\nCommittee of the Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG,\nBrazil (ETIC 324/07) and was conducted according to Resolution 196/96 of the National\nHealth Council, which addresses the Code of Ethics of Human Research. After having\nread and obtained clarification regarding the study terms, each individual signed an\ninformed consent form prior to participation.", "The cohort was assessed using a questionnaire that included questions about\nsociodemographic and clinical-functional information. LBP was characterized using the\nMcGill Pain Questionnaire (Br-MPQ)4\n,\n26, an appropriate tool for assessing chronic\npain in elderly individuals. The intra- and inter-examiner reliability rates for the\nBrazilian version of the Br-MPQ were found to be 0.86 and 0.89, respectively, with\nrates of 0.71 and 0.68 for orthopedic and neurological diseases, respectively4.\nTo assess the presence of UI and determine the frequencies and amounts of urinary\nloss reported by the participants, 2 questions from the International Consultation on\nIncontinence Questionnaire-Short Form (ICIQ-SF) quality of life survey that were\nUI-specific and had been validated for Portuguese-speaking subjects were\nimplemented27.\nA 2-dimensional ultrasound imaging device (Sonoline SL1; Siemens Healthcare,\nErlangen, Germany) was used to evaluate abdominal muscle recruitment. The images were\ncaptured by a 10-cm, 7.5-MHz transducer coupled to the ultrasound imaging device. A\nmore detailed description of the protocol used for our assessments and measurements\nwas provided in the original study21. To\nensure intra- and inter-examiner reliability, a pilot study with 12 volunteers was\nconducted. 
The test-retest reliability results obtained using the intraclass\ncorrelation coefficient (ICC) were 0.76 (95% CI: 0.16-0.93) for the TrA, 0.49 (95%\nCI: -0.76-0.85) for the IO, and 0.58 (95% CI: -0.46-0.88) for the EO. All ultrasound\nimages were captured and analyzed by the same previously trained researcher. Each\nparticipant was asked to lie on a stretcher, and the researchers positioned the lower\nlimbs using a device with a rectangular metal frame according to a previous\nmodel12\n,\n21. The limbs were positioned to allow the\nhips and knees to remain flexed at 50º and 90º, respectively (Figure 1). The participant was then asked to cross their arms\nover their chest, and the ultrasound transducer was positioned at the height of the\numbilical scar, which was approximately 10 cm from the midline, lateral to the\nabdominal wall, and between the iliac crest and rib cage. After proper positioning,\nthe participant was asked to remain at rest while images of the abdominal muscles at\nrest (baseline) were captured. Next, the participant was instructed to generate a\ncontraction force before bending and then extending the knees; this corresponded to\n7.5% of the body mass. This force produced an isometric contraction of the abdominal\nmuscles, which was measured using a force-gauge (Cabela's(r) Digital\nScale; Cabela's Incorporated, Sidney, NE, USA). The images were stored using video\nsoftware (Pinnacle Studio, version 9.4(r); Corel Corporation, Ottawa, ON,\nCanada).\n\nFigure 1Diagram of the device used to position the participant during ultrasound\nimage collection. Source: Ferreira et al.12 (used with permission).\n", "A descriptive analysis of the quantitative variables was conducted by calculating the\naverage, a central trend measure, and the standard deviation that assessed the sample\ndata variability. A descriptive analysis of the qualitative variables was conducted\nby calculating the frequencies of each category.\nTo evaluate the associations between the continuous variables, 3 multiple linear\nregression analysis models were created. Each model used the proportion of the\nabdominal muscle (TrA, IO, and EO) recruitment as the dependent variable and the LBP\nand UI as independent variables.\nAll statistical analyses were performed using the Statistical Package for the Social\nSciences (SPSS) for Windows (version 17.0; SPSS Inc., Chicago, IL, USA). The level of\nsignificance was set at p<0.05." ]
[ null, null, null, null ]
[ "Introduction", "Method", "Individuals", "Study design", "Materials and procedures", "Statistical analysis", "Results", "Discussion" ]
[ "Population aging is occurring worldwide. In Brazil, given the demographic and\nepidemiological evolution of chronic diseases, population aging requires constant care\nand monitoring and thus increases the demand for health services1.\nLow back pain (LBP)2 and urinary incontinence\n(UI)3 are conditions that strongly affect\nfunctioning in the elderly and hinder the performance of everyday activities, thus\ncausing physical and emotional distress, incurring high socioeconomic costs, restricting\nsocial participation, and decreasing the quality of life4. Moreover, LBP and UI are erroneously considered natural aspects of the\naging process3. Approximately 50-80% of the\ngeneral population appears to have experienced at least 1 episode of LBP during their\nlifetime1. The prevalence of LBP remains\nstable and ranges from 12-33%5 worldwide, with\nrates of approximately 63% in the Brazilian population and 57.7% among elderly\nindividuals1. The annual incidence of UI in\nwomen ranges from 2-11%, and this disorder is twice as common in women as it is in\nmen6.\nIn healthy individuals, the abdominal and pelvic floor muscles work synergistically7\n-\n10. However, in the absence of micturition\ncontrol, the pelvic muscle activation pattern apparently changes and overloads the spine\nstabilizers3\n,\n10. The pelvic floor muscles play an important\nrole in the provision of postural lumbo-pelvic stability, which is conferred by\nconnections of the muscles around the trunk7\n,\n8\n,\n10.\nBecause of its anatomic characteristics, the transversus abdominis (TrA) muscle\npreferentially stabilizes the spine11\n,\n12. Moreover, the TrA is the first muscle to be\nactivated in response to lower and upper limb movements, thus conferring the required\nrigidity to the lumbar spine and avoiding undesired segmental movements13. Delayed TrA activation is observed in younger\nadults with chronic LBP and suggests a failure in lumbo-pelvic stabilization12\n-\n14.\nIn the elderly, stabilization failure due to geometric muscle and postural alterations\ncan occur because the musculoskeletal and nervous systems are influenced by a variety of\npathophysiological changes that lead to uncoordinated performance15\n-\n17, including decreased maximal voluntary\ncontractions15; reductions in the peak muscle\npower15, transverse area, and rate of\nneuromuscular activation; increased intramuscular fat deposits17; and reductions in the muscle fiber length (atrophy) and number\n(hypoplasia), which particularly affects hybrid fibers15. Changes might also occur in sensory receptors, peripheral nerves, joints,\nand the central nervous system (e.g., decreases in white and gray matter volume and\ndopaminergic denervation)18. This complex array\nof modifications responsible for age-related losses of muscle mass is collectively\ncalled sarcopenia and occurs due to hormonal, nutritional, immunological, and metabolic\nalterations15. Sarcopenia can be triggered by\nchanges in either the intracellular signaling cascade or the basic cellular processes\nthat inhibit satellite cell activation, particularly during inflammation19.\nUltrasound imaging (i.e., rehabilitative ultrasound imaging [RUSI]) has been accepted as\na valid tool for assessing muscle recruitment because similar results can be obtained\nvia electromyography (EMG) evaluation12\n,\n20\n,\n21. The main advantage of ultrasound imaging is\nits low invasiveness12\n,\n20\n,\n21. 
Physical therapists use ultrasound\nmeasurements to assess muscle function and soft tissue morphology during movement or\nwhile performing specific tasks20\n,\n22. Ultrasound imaging is also used to assist\ntherapeutic approaches intended to improve neuromuscular function14\n,\n20\n,\n23\n,\n24.\nSeveral studies have described the neuromuscular trunk muscle patterns via ultrasound\nimaging in young adults both with and without a history of low back pain12\n-\n14\n,\n21. However, the relevance of these findings in\nelderly populations is unknown, as no studies involving ultrasound imaging in elderly\nindividuals with LBP were found in the reviewed literature2.\nAccordingly, the present study analyzed the association between LBP and UI as well as\nthe patterns of TrA, internal oblique (IO), and external oblique (EO) muscle recruitment\nas determined via ultrasound imaging in a cohort of community-dwelling elderly\nindividuals.", " Individuals Data were collected from male and female community-dwelling elderly individuals who\nwere aged 65 years or older and had no cognitive alterations as assessed using the\nMini Mental State Examination (MMSE)25. These\nindividuals had complained of LBP, and some had reported UI. The exclusion criteria\nwere acute low back pain; evidence of radiculopathies (e.g., reflex alteration,\ndermatomes, and/or myotomes or positive Lasegue test); a positive clinical history of\nneurological diseases, thoraco-abdominal surgeries (e.g., cesarean delivery and\nhysterectomy) or spinal surgeries, and vertebral fractures; signs suggestive of\nsevere spinal cord injury due to severe trauma; a history of malignant tumor\n(prostate cancer) or unexplained weight loss; severe spinal deformities (e.g.,\nscoliosis, hyperkyphosis); spinal physical therapy in the last 6 months; and/or\nlumbar and/or pelvic floor stabilization exercises.\nThe sample size was calculated while considering the abdominal muscle recruitment\npattern as well as the dependent and independent study variables, the presence of\nLBP, and reported UI. It was calculated that 54 subjects would be required to obtain\na correlation of 0.40 between the dependent and independent variables and a\ncorrelation of 0.20 among the independent variables with a coefficient of\ndetermination (R2) of 0.40 and a statistical power of 80%. For\nconvenience, we sequentially recruited 60 community-dwelling elderly individuals.\nAfter selection and initiating data collection, 6 participants were excluded from the\nstudy. Therefore, the statistical analyses included 54 participants.\nData were collected from male and female community-dwelling elderly individuals who\nwere aged 65 years or older and had no cognitive alterations as assessed using the\nMini Mental State Examination (MMSE)25. These\nindividuals had complained of LBP, and some had reported UI. 
The exclusion criteria\nwere acute low back pain; evidence of radiculopathies (e.g., reflex alteration,\ndermatomes, and/or myotomes or positive Lasegue test); a positive clinical history of\nneurological diseases, thoraco-abdominal surgeries (e.g., cesarean delivery and\nhysterectomy) or spinal surgeries, and vertebral fractures; signs suggestive of\nsevere spinal cord injury due to severe trauma; a history of malignant tumor\n(prostate cancer) or unexplained weight loss; severe spinal deformities (e.g.,\nscoliosis, hyperkyphosis); spinal physical therapy in the last 6 months; and/or\nlumbar and/or pelvic floor stabilization exercises.\nThe sample size was calculated while considering the abdominal muscle recruitment\npattern as well as the dependent and independent study variables, the presence of\nLBP, and reported UI. It was calculated that 54 subjects would be required to obtain\na correlation of 0.40 between the dependent and independent variables and a\ncorrelation of 0.20 among the independent variables with a coefficient of\ndetermination (R2) of 0.40 and a statistical power of 80%. For\nconvenience, we sequentially recruited 60 community-dwelling elderly individuals.\nAfter selection and initiating data collection, 6 participants were excluded from the\nstudy. Therefore, the statistical analyses included 54 participants.\n Study design This cross-sectional observational study was approved by the Research Ethics\nCommittee of the Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG,\nBrazil (ETIC 324/07) and was conducted according to Resolution 196/96 of the National\nHealth Council, which addresses the Code of Ethics of Human Research. After having\nread and obtained clarification regarding the study terms, each individual signed an\ninformed consent form prior to participation.\nThis cross-sectional observational study was approved by the Research Ethics\nCommittee of the Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG,\nBrazil (ETIC 324/07) and was conducted according to Resolution 196/96 of the National\nHealth Council, which addresses the Code of Ethics of Human Research. After having\nread and obtained clarification regarding the study terms, each individual signed an\ninformed consent form prior to participation.\n Materials and procedures The cohort was assessed using a questionnaire that included questions about\nsociodemographic and clinical-functional information. LBP was characterized using the\nMcGill Pain Questionnaire (Br-MPQ)4\n,\n26, an appropriate tool for assessing chronic\npain in elderly individuals. The intra- and inter-examiner reliability rates for the\nBrazilian version of the Br-MPQ were found to be 0.86 and 0.89, respectively, with\nrates of 0.71 and 0.68 for orthopedic and neurological diseases, respectively4.\nTo assess the presence of UI and determine the frequencies and amounts of urinary\nloss reported by the participants, 2 questions from the International Consultation on\nIncontinence Questionnaire-Short Form (ICIQ-SF) quality of life survey that were\nUI-specific and had been validated for Portuguese-speaking subjects were\nimplemented27.\nA 2-dimensional ultrasound imaging device (Sonoline SL1; Siemens Healthcare,\nErlangen, Germany) was used to evaluate abdominal muscle recruitment. The images were\ncaptured by a 10-cm, 7.5-MHz transducer coupled to the ultrasound imaging device. A\nmore detailed description of the protocol used for our assessments and measurements\nwas provided in the original study21. 
To\nensure intra- and inter-examiner reliability, a pilot study with 12 volunteers was\nconducted. The test-retest reliability results obtained using the intraclass\ncorrelation coefficient (ICC) were 0.76 (95% CI: 0.16-0.93) for the TrA, 0.49 (95%\nCI: -0.76-0.85) for the IO, and 0.58 (95% CI: -0.46-0.88) for the EO. All ultrasound\nimages were captured and analyzed by the same previously trained researcher. Each\nparticipant was asked to lie on a stretcher, and the researchers positioned the lower\nlimbs using a device with a rectangular metal frame according to a previous\nmodel12\n,\n21. The limbs were positioned to allow the\nhips and knees to remain flexed at 50º and 90º, respectively (Figure 1). The participant was then asked to cross their arms\nover their chest, and the ultrasound transducer was positioned at the height of the\numbilical scar, which was approximately 10 cm from the midline, lateral to the\nabdominal wall, and between the iliac crest and rib cage. After proper positioning,\nthe participant was asked to remain at rest while images of the abdominal muscles at\nrest (baseline) were captured. Next, the participant was instructed to generate a\ncontraction force before bending and then extending the knees; this corresponded to\n7.5% of the body mass. This force produced an isometric contraction of the abdominal\nmuscles, which was measured using a force-gauge (Cabela's(r) Digital\nScale; Cabela's Incorporated, Sidney, NE, USA). The images were stored using video\nsoftware (Pinnacle Studio, version 9.4(r); Corel Corporation, Ottawa, ON,\nCanada).\n\nFigure 1Diagram of the device used to position the participant during ultrasound\nimage collection. Source: Ferreira et al.12 (used with permission).\n\nThe cohort was assessed using a questionnaire that included questions about\nsociodemographic and clinical-functional information. LBP was characterized using the\nMcGill Pain Questionnaire (Br-MPQ)4\n,\n26, an appropriate tool for assessing chronic\npain in elderly individuals. The intra- and inter-examiner reliability rates for the\nBrazilian version of the Br-MPQ were found to be 0.86 and 0.89, respectively, with\nrates of 0.71 and 0.68 for orthopedic and neurological diseases, respectively4.\nTo assess the presence of UI and determine the frequencies and amounts of urinary\nloss reported by the participants, 2 questions from the International Consultation on\nIncontinence Questionnaire-Short Form (ICIQ-SF) quality of life survey that were\nUI-specific and had been validated for Portuguese-speaking subjects were\nimplemented27.\nA 2-dimensional ultrasound imaging device (Sonoline SL1; Siemens Healthcare,\nErlangen, Germany) was used to evaluate abdominal muscle recruitment. The images were\ncaptured by a 10-cm, 7.5-MHz transducer coupled to the ultrasound imaging device. A\nmore detailed description of the protocol used for our assessments and measurements\nwas provided in the original study21. To\nensure intra- and inter-examiner reliability, a pilot study with 12 volunteers was\nconducted. The test-retest reliability results obtained using the intraclass\ncorrelation coefficient (ICC) were 0.76 (95% CI: 0.16-0.93) for the TrA, 0.49 (95%\nCI: -0.76-0.85) for the IO, and 0.58 (95% CI: -0.46-0.88) for the EO. All ultrasound\nimages were captured and analyzed by the same previously trained researcher. 
Each\nparticipant was asked to lie on a stretcher, and the researchers positioned the lower\nlimbs using a device with a rectangular metal frame according to a previous\nmodel12\n,\n21. The limbs were positioned to allow the\nhips and knees to remain flexed at 50º and 90º, respectively (Figure 1). The participant was then asked to cross their arms\nover their chest, and the ultrasound transducer was positioned at the height of the\numbilical scar, which was approximately 10 cm from the midline, lateral to the\nabdominal wall, and between the iliac crest and rib cage. After proper positioning,\nthe participant was asked to remain at rest while images of the abdominal muscles at\nrest (baseline) were captured. Next, the participant was instructed to generate a\ncontraction force before bending and then extending the knees; this corresponded to\n7.5% of the body mass. This force produced an isometric contraction of the abdominal\nmuscles, which was measured using a force-gauge (Cabela's(r) Digital\nScale; Cabela's Incorporated, Sidney, NE, USA). The images were stored using video\nsoftware (Pinnacle Studio, version 9.4(r); Corel Corporation, Ottawa, ON,\nCanada).\n\nFigure 1Diagram of the device used to position the participant during ultrasound\nimage collection. Source: Ferreira et al.12 (used with permission).\n\n Statistical analysis A descriptive analysis of the quantitative variables was conducted by calculating the\naverage, a central trend measure, and the standard deviation that assessed the sample\ndata variability. A descriptive analysis of the qualitative variables was conducted\nby calculating the frequencies of each category.\nTo evaluate the associations between the continuous variables, 3 multiple linear\nregression analysis models were created. Each model used the proportion of the\nabdominal muscle (TrA, IO, and EO) recruitment as the dependent variable and the LBP\nand UI as independent variables.\nAll statistical analyses were performed using the Statistical Package for the Social\nSciences (SPSS) for Windows (version 17.0; SPSS Inc., Chicago, IL, USA). The level of\nsignificance was set at p<0.05.\nA descriptive analysis of the quantitative variables was conducted by calculating the\naverage, a central trend measure, and the standard deviation that assessed the sample\ndata variability. A descriptive analysis of the qualitative variables was conducted\nby calculating the frequencies of each category.\nTo evaluate the associations between the continuous variables, 3 multiple linear\nregression analysis models were created. Each model used the proportion of the\nabdominal muscle (TrA, IO, and EO) recruitment as the dependent variable and the LBP\nand UI as independent variables.\nAll statistical analyses were performed using the Statistical Package for the Social\nSciences (SPSS) for Windows (version 17.0; SPSS Inc., Chicago, IL, USA). The level of\nsignificance was set at p<0.05.", "Data were collected from male and female community-dwelling elderly individuals who\nwere aged 65 years or older and had no cognitive alterations as assessed using the\nMini Mental State Examination (MMSE)25. These\nindividuals had complained of LBP, and some had reported UI. 
The exclusion criteria\nwere acute low back pain; evidence of radiculopathies (e.g., reflex alteration,\ndermatomes, and/or myotomes or positive Lasegue test); a positive clinical history of\nneurological diseases, thoraco-abdominal surgeries (e.g., cesarean delivery and\nhysterectomy) or spinal surgeries, and vertebral fractures; signs suggestive of\nsevere spinal cord injury due to severe trauma; a history of malignant tumor\n(prostate cancer) or unexplained weight loss; severe spinal deformities (e.g.,\nscoliosis, hyperkyphosis); spinal physical therapy in the last 6 months; and/or\nlumbar and/or pelvic floor stabilization exercises.\nThe sample size was calculated while considering the abdominal muscle recruitment\npattern as well as the dependent and independent study variables, the presence of\nLBP, and reported UI. It was calculated that 54 subjects would be required to obtain\na correlation of 0.40 between the dependent and independent variables and a\ncorrelation of 0.20 among the independent variables with a coefficient of\ndetermination (R2) of 0.40 and a statistical power of 80%. For\nconvenience, we sequentially recruited 60 community-dwelling elderly individuals.\nAfter selection and initiating data collection, 6 participants were excluded from the\nstudy. Therefore, the statistical analyses included 54 participants.", "This cross-sectional observational study was approved by the Research Ethics\nCommittee of the Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG,\nBrazil (ETIC 324/07) and was conducted according to Resolution 196/96 of the National\nHealth Council, which addresses the Code of Ethics of Human Research. After having\nread and obtained clarification regarding the study terms, each individual signed an\ninformed consent form prior to participation.", "The cohort was assessed using a questionnaire that included questions about\nsociodemographic and clinical-functional information. LBP was characterized using the\nMcGill Pain Questionnaire (Br-MPQ)4\n,\n26, an appropriate tool for assessing chronic\npain in elderly individuals. The intra- and inter-examiner reliability rates for the\nBrazilian version of the Br-MPQ were found to be 0.86 and 0.89, respectively, with\nrates of 0.71 and 0.68 for orthopedic and neurological diseases, respectively4.\nTo assess the presence of UI and determine the frequencies and amounts of urinary\nloss reported by the participants, 2 questions from the International Consultation on\nIncontinence Questionnaire-Short Form (ICIQ-SF) quality of life survey that were\nUI-specific and had been validated for Portuguese-speaking subjects were\nimplemented27.\nA 2-dimensional ultrasound imaging device (Sonoline SL1; Siemens Healthcare,\nErlangen, Germany) was used to evaluate abdominal muscle recruitment. The images were\ncaptured by a 10-cm, 7.5-MHz transducer coupled to the ultrasound imaging device. A\nmore detailed description of the protocol used for our assessments and measurements\nwas provided in the original study21. To\nensure intra- and inter-examiner reliability, a pilot study with 12 volunteers was\nconducted. The test-retest reliability results obtained using the intraclass\ncorrelation coefficient (ICC) were 0.76 (95% CI: 0.16-0.93) for the TrA, 0.49 (95%\nCI: -0.76-0.85) for the IO, and 0.58 (95% CI: -0.46-0.88) for the EO. All ultrasound\nimages were captured and analyzed by the same previously trained researcher. 
Each participant was asked to lie on a stretcher, and the researchers positioned the lower limbs using a device with a rectangular metal frame according to a previous model12 , 21. The limbs were positioned to allow the hips and knees to remain flexed at 50° and 90°, respectively (Figure 1). The participant was then asked to cross their arms over their chest, and the ultrasound transducer was positioned at the height of the umbilical scar, which was approximately 10 cm from the midline, lateral to the abdominal wall, and between the iliac crest and rib cage. After proper positioning, the participant was asked to remain at rest while images of the abdominal muscles at rest (baseline) were captured. Next, the participant was instructed to generate a contraction force before bending and then extending the knees; this force corresponded to 7.5% of the body mass. This force produced an isometric contraction of the abdominal muscles, which was measured using a force gauge (Cabela's® Digital Scale; Cabela's Incorporated, Sidney, NE, USA). The images were stored using video software (Pinnacle Studio®, version 9.4; Corel Corporation, Ottawa, ON, Canada).

Figure 1. Diagram of the device used to position the participant during ultrasound image collection. Source: Ferreira et al.12 (used with permission).", "A descriptive analysis of the quantitative variables was conducted by calculating the average (a measure of central tendency) and the standard deviation (a measure of sample variability). A descriptive analysis of the qualitative variables was conducted by calculating the frequency of each category. To evaluate the associations between the continuous variables, 3 multiple linear regression models were created. Each model used the proportion of abdominal muscle (TrA, IO, or EO) recruitment as the dependent variable and LBP and UI as the independent variables. All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) for Windows (version 17.0; SPSS Inc., Chicago, IL, USA). The level of significance was set at p<0.05.", "The sociodemographic and clinical characteristics of the study cohort are presented in Table 1. The cohort mostly comprised women (76%), and after calculating the body mass index (BMI) values of the participants, 46.3% of the subjects were determined to be eutrophic according to the interval (22-27 kg/m2) for elderly individuals as proposed by Lipschitz28; an additional 14.8% were malnourished, and 38.9% were overweight. Among the elderly patients suffering from LBP (n=34), 52.9% reported moderate pain (2.24±0.78) and 44.1% reported short/transitional/temporary pain according to the Br-MPQ survey (Table 1). Among the elderly patients reporting UI (n=22), 54.5% were losing urine once weekly or less frequently, and 81.8% ranked this loss as of low intensity.

Table 1. Sociodemographic and clinical characteristics of the studied cohort (n=54).
Characteristic                       n (%)        Mean±SD     Range
Age (years)                          54 (100%)    72±5.2      65-84
Gender (female)                      41 (76%)
School education (years)
  Illiterate                         5 (9.3%)
  1-8                                39 (72.2%)
  ≥9                                 11 (20.3%)
Number of comorbidities              -            4.78±2.35   1-13
Number of drugs
  None                               7 (13%)
  1-5                                38 (70.3%)
  >5                                 9 (16.7%)
UI (yes)                             22 (40.7%)
LBP (yes)                            34 (62.9%)
Pain intensity (Br-MPQ)a                          1.41±1.25   0-4
  Mild (1)                           5 (14.7%)
  Moderate (2)                       18 (52.9%)
  Severe (3)                         9 (26.5%)
  Unbearable (4)                     2 (5.9%)
Pain temporal pattern (Br-MPQ)a
  Continuous/stable/constant         10 (29.4%)
  Rhythmic/periodic/intermittent     9 (26.4%)
  Brief/momentary/transitory         15 (44.1%)
SD: standard deviation; %: relative percentage of elderly individuals (n=54); Br-MPQ: McGill Pain Questionnaire-Brazil; UI: urinary incontinence; LBP: low back pain; a values for the n=34 elderly patients who reported the presence of LBP.

None of the clinical variables strongly correlated with the TrA (UI, p=0.541; LBP, p=0.412) or EO muscle thicknesses (UI, p=0.091; LBP, p=0.078). Furthermore, LBP was not associated with the IO muscle thickness (p=0.931). Table 2 shows the results of the multiple linear regression models used to assess correlations between the TrA, IO, and EO muscle recruitment patterns and the variables of LBP and UI. These results illustrated that the regression models for the TrA, IO, and EO muscle recruitment levels explained 2.0% (R2=0.02; F=0.47; p=0.628), 10.6% (R2=0.106; F=3.03; p=0.057), and 10.1% (R2=0.101; F=2.70; p=0.077) of the variability, respectively. Only the model for the IO muscle was statistically significant.

Table 2. Associations between the TrA, IO, and EO recruitment levels and LBP and UI.

Variable         TrA: β (p)        IO: β (p)          EO: β (p)
Constant (b0)    0.039 (0.015a)    0.0303 (0.015‡)    0.0001 (0.992)
UI (b1)          -0.011 (0.541)    -0.0343 (0.018‡)   0.0218 (0.091)
LBP (b2)         0.015 (0.412)     0.0012 (0.931)     -0.0232 (0.078)
UI: urinary incontinence; LBP: low back pain; multiple linear regression analysis; a p<0.05; ‡ p<0.05, stepwise multiple linear regression.", "This study analyzed the associations between LBP and UI and abdominal muscle (TrA, IO, and EO) recruitment patterns as measured by ultrasound imaging in a cohort of community-dwelling elderly individuals. According to our findings, no abdominal muscle recruitment could be used to explain the LBP and UI variables. However, UI was significantly negatively associated (p=0.018; β=-0.0343) with IO recruitment; in other words, elderly individuals who exhibited higher IO activation reported lower urine losses.
This finding partially confirmed those of previous studies in which the IO and TrA were reportedly predominantly recruited during pelvic floor muscle contraction, thus controlling continence through bladder stabilization and increased intra-urethral pressure7 , 8. The co-activation of the abdominal muscles and pelvic floor is consistent with the model in which the muscles surrounding the abdominal cavity were predicted to work together in a coordinated manner to ensure column stability and maintain continence7 , 8. In a recent review study conducted by Ferreira and Santos29, the synergistic activation of the deep abdominal muscles (i.e., the lower fibers of the TrA and IO) was described during pelvic floor muscle contraction such that TrA contraction led to pelvic floor muscle co-contraction. These authors also suggested that abdominal muscle contraction should not be discouraged during pelvic floor muscle exercises because this would limit the pubococcygeus muscle response without producing a significant increase in intra-abdominal pressure. One possible explanation of this finding relies on the characteristics of the studied sample, which comprised elderly individuals with an average age of 72±5.2 years and complaints of LBP and/or UI. Among the individuals complaining of UI (n=22), 72.7% (n=16) also complained of LBP, a factor that may have influenced the observed results. The previous studies7 , 8 were conducted in a cohort of young adults with no history of LBP and/or UI. Therefore, we suggest that the association observed between UI and IO recruitment in this study be interpreted with caution and that further studies intended to classify these individuals be performed. The lack of association between EO recruitment and the variables of LBP and UI concurred with previously reported results12 and indicates that ultrasound imaging is likely not a valid instrument with which to measure EO recruitment, given the poor correlation between the EMG results and ultrasound images observed for this muscle (R=0.28). TrA muscle recruitment variability was not associated with either LBP or UI in the present study sample. The trunk muscle recruitment pattern observed via ultrasound imaging in young adults with and without LBP has been highlighted in the literature12 - 14 , 20 , 21. During muscle contraction, although EMG detects the production of action potentials, changes in the muscle shape and geometry are also noted. These changes enable the ultrasound imaging-based measurement and recording of changes in the muscle thickness during contraction11 , 12 , 21. The elderly exhibit changes in movement and motor control15 - 17. The former are due to sarcopenia, osteopenia, reduced sensory and motor proprioception, postural and biomechanical compensatory changes, and reduced nerve conduction velocity15. Losses of muscle mass, losses and atrophy of muscle fibers (particularly more marked losses of type II fibers [i.e., fast glycolytic contraction fibers])16, and losses in the size and number of motor units30 are responsible for the decreases in muscular strength, power, and endurance15 , 30 observed during aging. The protocol used in this study21 was developed and tested in young adults.
It is possible that the low force generated by the isometric muscle contraction used in this study (7.5% of the body mass) did not allow the detection of changes in the TrA thickness, given that typical age-related alterations can affect the structure and composition of muscles. Singh et al.16 compared changes in the lumbar extensor muscles, fiber orientations, and muscle strength in young and elderly subjects using ultrasound imaging and found that age-related changes interfered with both muscle geometry and posture. Moreover, changes in the spinal curvature and consequently the body position and joint movements might affect the muscle contraction and lever strength13 , 14. The loss of lumbar lordosis might affect the muscle length, fiber orientation, and fascicle geometry, thus affecting muscle strength16 , 30. In summary, the lack of existing studies in the literature that incorporate ultrasound imaging to determine the abdominal muscle recruitment pattern in elderly individuals with complaints of LBP and UI made it difficult to compare the results observed in this study with those reported in the literature. Regarding the association between LBP and UI, this study pioneered the evaluation of the TrA, IO, and EO muscle recruitment pattern via ultrasound imaging in elderly individuals. The multiple linear regression models used to verify this association revealed that only UI exhibited a significant association with IO recruitment. These results differed from those observed in young adults. Inherent age-related factors such as sarcopenia, changes in motor control, and the ultrasonography technique all possibly interfered with the findings of this study. As age-related changes affect the entire body, further investigations involving ultrasound imaging will be required to identify the effects of aging on the recruitment patterns of these muscles." ]
[ "intro", "methods", null, null, null, null, "results", "discussion" ]
[ "elderly", "low back pain", "urinary incontinence", "physical therapy", "ultrasound imaging" ]
Introduction: Population aging is occurring worldwide. In Brazil, given the demographic and epidemiological evolution of chronic diseases, population aging requires constant care and monitoring and thus increases the demand for health services1. Low back pain (LBP)2 and urinary incontinence (UI)3 are conditions that strongly affect functioning in the elderly and hinder the performance of everyday activities, thus causing physical and emotional distress, incurring high socioeconomic costs, restricting social participation, and decreasing the quality of life4. Moreover, LBP and UI are erroneously considered natural aspects of the aging process3. Approximately 50-80% of the general population appears to have experienced at least 1 episode of LBP during their lifetime1. The prevalence of LBP remains stable and ranges from 12-33%5 worldwide, with rates of approximately 63% in the Brazilian population and 57.7% among elderly individuals1. The annual incidence of UI in women ranges from 2-11%, and this disorder is twice as common in women as it is in men6. In healthy individuals, the abdominal and pelvic floor muscles work synergistically7 - 10. However, in the absence of micturition control, the pelvic muscle activation pattern apparently changes and overloads the spine stabilizers3 , 10. The pelvic floor muscles play an important role in the provision of postural lumbo-pelvic stability, which is conferred by connections of the muscles around the trunk7 , 8 , 10. Because of its anatomic characteristics, the transversus abdominis (TrA) muscle preferentially stabilizes the spine11 , 12. Moreover, the TrA is the first muscle to be activated in response to lower and upper limb movements, thus conferring the required rigidity to the lumbar spine and avoiding undesired segmental movements13. Delayed TrA activation is observed in younger adults with chronic LBP and suggests a failure in lumbo-pelvic stabilization12 - 14. In the elderly, stabilization failure due to geometric muscle and postural alterations can occur because the musculoskeletal and nervous systems are influenced by a variety of pathophysiological changes that lead to uncoordinated performance15 - 17, including decreased maximal voluntary contractions15; reductions in the peak muscle power15, transverse area, and rate of neuromuscular activation; increased intramuscular fat deposits17; and reductions in the muscle fiber length (atrophy) and number (hypoplasia), which particularly affects hybrid fibers15. Changes might also occur in sensory receptors, peripheral nerves, joints, and the central nervous system (e.g., decreases in white and gray matter volume and dopaminergic denervation)18. This complex array of modifications responsible for age-related losses of muscle mass is collectively called sarcopenia and occurs due to hormonal, nutritional, immunological, and metabolic alterations15. Sarcopenia can be triggered by changes in either the intracellular signaling cascade or the basic cellular processes that inhibit satellite cell activation, particularly during inflammation19. Ultrasound imaging (i.e., rehabilitative ultrasound imaging [RUSI]) has been accepted as a valid tool for assessing muscle recruitment because similar results can be obtained via electromyography (EMG) evaluation12 , 20 , 21. The main advantage of ultrasound imaging is its low invasiveness12 , 20 , 21. Physical therapists use ultrasound measurements to assess muscle function and soft tissue morphology during movement or while performing specific tasks20 , 22. 
Ultrasound imaging is also used to assist therapeutic approaches intended to improve neuromuscular function14 , 20 , 23 , 24. Several studies have described the neuromuscular trunk muscle patterns via ultrasound imaging in young adults both with and without a history of low back pain12 - 14 , 21. However, the relevance of these findings in elderly populations is unknown, as no studies involving ultrasound imaging in elderly individuals with LBP were found in the reviewed literature2. Accordingly, the present study analyzed the association between LBP and UI as well as the patterns of TrA, internal oblique (IO), and external oblique (EO) muscle recruitment as determined via ultrasound imaging in a cohort of community-dwelling elderly individuals. Method: Individuals: Data were collected from male and female community-dwelling elderly individuals who were aged 65 years or older and had no cognitive alterations as assessed using the Mini Mental State Examination (MMSE)25. These individuals had complained of LBP, and some had reported UI. The exclusion criteria were acute low back pain; evidence of radiculopathies (e.g., reflex alteration, dermatomes, and/or myotomes or positive Lasegue test); a positive clinical history of neurological diseases, thoraco-abdominal surgeries (e.g., cesarean delivery and hysterectomy) or spinal surgeries, and vertebral fractures; signs suggestive of severe spinal cord injury due to severe trauma; a history of malignant tumor (prostate cancer) or unexplained weight loss; severe spinal deformities (e.g., scoliosis, hyperkyphosis); spinal physical therapy in the last 6 months; and/or lumbar and/or pelvic floor stabilization exercises. The sample size was calculated while considering the abdominal muscle recruitment pattern as well as the dependent and independent study variables, the presence of LBP, and reported UI. It was calculated that 54 subjects would be required to obtain a correlation of 0.40 between the dependent and independent variables and a correlation of 0.20 among the independent variables with a coefficient of determination (R2) of 0.40 and a statistical power of 80%. For convenience, we sequentially recruited 60 community-dwelling elderly individuals. After selection and initiating data collection, 6 participants were excluded from the study. Therefore, the statistical analyses included 54 participants. Study design: This cross-sectional observational study was approved by the Research Ethics Committee of the Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG, Brazil (ETIC 324/07) and was conducted according to Resolution 196/96 of the National Health Council, which addresses the Code of Ethics of Human Research. After having read and obtained clarification regarding the study terms, each individual signed an informed consent form prior to participation. Materials and procedures: The cohort was assessed using a questionnaire that included questions about sociodemographic and clinical-functional information. LBP was characterized using the McGill Pain Questionnaire (Br-MPQ)4 , 26, an appropriate tool for assessing chronic pain in elderly individuals. The intra- and inter-examiner reliability rates for the Brazilian version of the Br-MPQ were found to be 0.86 and 0.89, respectively, with rates of 0.71 and 0.68 for orthopedic and neurological diseases, respectively4. To assess the presence of UI and determine the frequencies and amounts of urinary loss reported by the participants, 2 questions from the International Consultation on Incontinence Questionnaire-Short Form (ICIQ-SF) quality of life survey that were UI-specific and had been validated for Portuguese-speaking subjects were implemented27. A 2-dimensional ultrasound imaging device (Sonoline SL1; Siemens Healthcare, Erlangen, Germany) was used to evaluate abdominal muscle recruitment. The images were captured by a 10-cm, 7.5-MHz transducer coupled to the ultrasound imaging device. A more detailed description of the protocol used for our assessments and measurements was provided in the original study21. To ensure intra- and inter-examiner reliability, a pilot study with 12 volunteers was conducted. The test-retest reliability results obtained using the intraclass correlation coefficient (ICC) were 0.76 (95% CI: 0.16-0.93) for the TrA, 0.49 (95% CI: -0.76-0.85) for the IO, and 0.58 (95% CI: -0.46-0.88) for the EO. All ultrasound images were captured and analyzed by the same previously trained researcher. Each participant was asked to lie on a stretcher, and the researchers positioned the lower limbs using a device with a rectangular metal frame according to a previous model12 , 21. The limbs were positioned to allow the hips and knees to remain flexed at 50° and 90°, respectively (Figure 1).
The participant was then asked to cross their arms over their chest, and the ultrasound transducer was positioned at the height of the umbilical scar, which was approximately 10 cm from the midline, lateral to the abdominal wall, and between the iliac crest and rib cage. After proper positioning, the participant was asked to remain at rest while images of the abdominal muscles at rest (baseline) were captured. Next, the participant was instructed to generate a contraction force before bending and then extending the knees; this force corresponded to 7.5% of the body mass. This force produced an isometric contraction of the abdominal muscles, which was measured using a force gauge (Cabela's® Digital Scale; Cabela's Incorporated, Sidney, NE, USA). The images were stored using video software (Pinnacle Studio®, version 9.4; Corel Corporation, Ottawa, ON, Canada). Figure 1. Diagram of the device used to position the participant during ultrasound image collection. Source: Ferreira et al.12 (used with permission). Statistical analysis: A descriptive analysis of the quantitative variables was conducted by calculating the average (a measure of central tendency) and the standard deviation (a measure of sample variability). A descriptive analysis of the qualitative variables was conducted by calculating the frequency of each category. To evaluate the associations between the continuous variables, 3 multiple linear regression models were created. Each model used the proportion of abdominal muscle (TrA, IO, or EO) recruitment as the dependent variable and LBP and UI as the independent variables. All statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) for Windows (version 17.0; SPSS Inc., Chicago, IL, USA). The level of significance was set at p<0.05.
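To make the regression step concrete, the following sketch fits one of the three models just described (the TrA model) by ordinary least squares with Python and statsmodels. All data here are simulated stand-ins (the study's individual-level data are not public), so the code illustrates only the model structure, a recruitment proportion regressed on binary UI and LBP indicators, and where the reported R², F, and p values come from, not the published estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 54 participants with binary LBP/UI status
# and a TrA recruitment measure (proportional thickness change).
df = pd.DataFrame({
    "ui": rng.integers(0, 2, 54),
    "lbp": rng.integers(0, 2, 54),
    "tra_recruitment": rng.normal(0.04, 0.03, 54),
})

# Model: recruitment = b0 + b1*UI + b2*LBP + error
X = sm.add_constant(df[["ui", "lbp"]])
model = sm.OLS(df["tra_recruitment"], X).fit()

print(model.rsquared, model.fvalue, model.f_pvalue)  # R^2, F, omnibus p
print(model.params)                                  # b0, b1, b2
print(model.pvalues)                                 # per-coefficient p
```

Refitting with the IO and EO measures as the response reproduces the three-model design; model.rsquared, model.fvalue, and model.f_pvalue are the quantities tabulated in the Results.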
Results: The sociodemographic and clinical characteristics of the study cohort are presented in Table 1. The cohort mostly comprised women (76%), and after calculating the body mass index (BMI) values of the participants, 46.3% of the subjects were determined to be eutrophic according to the interval (22-27 kg/m2) for elderly individuals as proposed by Lipschitz28; an additional 14.8% were malnourished, and 38.9% were overweight. Among the elderly patients suffering from LBP (n=34), 52.9% reported moderate pain (2.24±0.78) and 44.1% reported short/transitional/temporary pain according to the Br-MPQ survey (Table 1). Among the elderly patients reporting UI (n=22), 54.5% were losing urine once weekly or less frequently, and 81.8% ranked this loss as of low intensity.

Table 1. Sociodemographic and clinical characteristics of the studied cohort (n=54).

Characteristic                       n (%)        Mean±SD     Range
Age (years)                          54 (100%)    72±5.2      65-84
Gender (female)                      41 (76%)
School education (years)
  Illiterate                         5 (9.3%)
  1-8                                39 (72.2%)
  ≥9                                 11 (20.3%)
Number of comorbidities              -            4.78±2.35   1-13
Number of drugs
  None                               7 (13%)
  1-5                                38 (70.3%)
  >5                                 9 (16.7%)
UI (yes)                             22 (40.7%)
LBP (yes)                            34 (62.9%)
Pain intensity (Br-MPQ)a                          1.41±1.25   0-4
  Mild (1)                           5 (14.7%)
  Moderate (2)                       18 (52.9%)
  Severe (3)                         9 (26.5%)
  Unbearable (4)                     2 (5.9%)
Pain temporal pattern (Br-MPQ)a
  Continuous/stable/constant         10 (29.4%)
  Rhythmic/periodic/intermittent     9 (26.4%)
  Brief/momentary/transitory         15 (44.1%)
SD: standard deviation; %: relative percentage of elderly individuals (n=54); Br-MPQ: McGill Pain Questionnaire-Brazil; UI: urinary incontinence; LBP: low back pain; a values for the n=34 elderly patients who reported the presence of LBP.

None of the clinical variables strongly correlated with the TrA (UI, p=0.541; LBP, p=0.412) or EO muscle thicknesses (UI, p=0.091; LBP, p=0.078). Furthermore, LBP was not associated with the IO muscle thickness (p=0.931). Table 2 shows the results of the multiple linear regression models used to assess correlations between the TrA, IO, and EO muscle recruitment patterns and the variables of LBP and UI. These results illustrated that the regression models for the TrA, IO, and EO muscle recruitment levels explained 2.0% (R2=0.02; F=0.47; p=0.628), 10.6% (R2=0.106; F=3.03; p=0.057), and 10.1% (R2=0.101; F=2.70; p=0.077) of the variability, respectively. Only the model for the IO muscle was statistically significant.
Table 2. Associations between the TrA, IO, and EO recruitment levels and LBP and UI.

Variable         TrA: β (p)        IO: β (p)          EO: β (p)
Constant (b0)    0.039 (0.015a)    0.0303 (0.015‡)    0.0001 (0.992)
UI (b1)          -0.011 (0.541)    -0.0343 (0.018‡)   0.0218 (0.091)
LBP (b2)         0.015 (0.412)     0.0012 (0.931)     -0.0232 (0.078)
UI: urinary incontinence; LBP: low back pain; multiple linear regression analysis; a p<0.05; ‡ p<0.05, stepwise multiple linear regression.

Discussion: This study analyzed the associations between LBP and UI and abdominal muscle (TrA, IO, and EO) recruitment patterns as measured by ultrasound imaging in a cohort of community-dwelling elderly individuals. According to our findings, no abdominal muscle recruitment could be used to explain the LBP and UI variables. However, UI was significantly negatively associated (p=0.018; β=-0.0343) with IO recruitment; in other words, elderly individuals who exhibited higher IO activation reported lower urine losses. This finding partially confirmed those of previous studies in which the IO and TrA were reportedly predominantly recruited during pelvic floor muscle contraction, thus controlling continence through bladder stabilization and increased intra-urethral pressure7 , 8. The co-activation of the abdominal muscles and pelvic floor is consistent with the model in which the muscles surrounding the abdominal cavity were predicted to work together in a coordinated manner to ensure column stability and maintain continence7 , 8. In a recent review study conducted by Ferreira and Santos29, the synergistic activation of the deep abdominal muscles (i.e., the lower fibers of the TrA and IO) was described during pelvic floor muscle contraction such that TrA contraction led to pelvic floor muscle co-contraction. These authors also suggested that abdominal muscle contraction should not be discouraged during pelvic floor muscle exercises because this would limit the pubococcygeus muscle response without producing a significant increase in intra-abdominal pressure. One possible explanation of this finding relies on the characteristics of the studied sample, which comprised elderly individuals with an average age of 72±5.2 years and complaints of LBP and/or UI. Among the individuals complaining of UI (n=22), 72.7% (n=16) also complained of LBP, a factor that may have influenced the observed results. The previous studies7 , 8 were conducted in a cohort of young adults with no history of LBP and/or UI. Therefore, we suggest that the association observed between UI and IO recruitment in this study be interpreted with caution and that further studies intended to classify these individuals be performed. The lack of association between EO recruitment and the variables of LBP and UI concurred with previously reported results12 and indicates that ultrasound imaging is likely not a valid instrument with which to measure EO recruitment, given the poor correlation between the EMG results and ultrasound images observed for this muscle (R=0.28). TrA muscle recruitment variability was not associated with either LBP or UI in the present study sample. The trunk muscle recruitment pattern observed via ultrasound imaging in young adults with and without LBP has been highlighted in the literature12 - 14 , 20 , 21. During muscle contraction, although EMG detects the production of action potentials, changes in the muscle shape and geometry are also noted.
These changes enable the ultrasound imaging-based measurement and recording of changes in the muscle thickness during contraction11 , 12 , 21. The elderly exhibit changes in movement and motor control15 - 17. The former are due to sarcopenia, osteopenia, reduced sensory and motor proprioception, postural and biomechanical compensatory changes, and reduced nerve conduction velocity15. Losses of muscle mass, losses and atrophy of muscle fibers (particularly more marked losses of type II fibers [i.e., fast glycolytic contraction fibers])16, and losses in the size and number of motor units30 are responsible for the decreases in muscular strength, power, and endurance15 , 30 observed during aging. The protocol used in this study21 was developed and tested in young adults. It is possible that the low force generated by the isometric muscle contraction used in this study (7.5% of the body mass) did not allow the detection of changes in the TrA thickness, given that typical age-related alterations can affect the structure and composition of muscles. Singh et al.16 compared changes in the lumbar extensor muscles, fiber orientations, and muscle strength in young and elderly subjects using ultrasound imaging and found that age-related changes interfered with both muscle geometry and posture. Moreover, changes in the spinal curvature and consequently the body position and joint movements might affect the muscle contraction and lever strength13 , 14. The loss of lumbar lordosis might affect the muscle length, fiber orientation, and fascicle geometry, thus affecting muscle strength16 , 30. In summary, the lack of existing studies in the literature that incorporate ultrasound imaging to determine the abdominal muscle recruitment pattern in elderly individuals with complaints of LBP and UI made it difficult to compare the results observed in this study with those reported in the literature. Regarding the association between LBP and UI, this study pioneered the evaluation of the TrA, IO, and EO muscle recruitment pattern via ultrasound imaging in elderly individuals. The multiple linear regression models used to verify this association revealed that only UI exhibited a significant association with IO recruitment. These results differed from those observed in young adults. Inherent age-related factors such as sarcopenia, changes in motor control, and the ultrasonography technique all possibly interfered with the findings of this study. As age-related changes affect the entire body, further investigations involving ultrasound imaging will be required to identify the effects of aging on the recruitment patterns of these muscles.
Background: Low back pain (LBP) and urinary incontinence (UI) are highly prevalent among elderly individuals. In young adults, changes in trunk muscle recruitment, as assessed via ultrasound imaging, may be associated with lumbar spine stability. Methods: Fifty-four elderly individuals (mean age: 72±5.2 years) who complained of LBP and/or UI as assessed by the McGill Pain Questionnaire, Incontinence Questionnaire-Short Form, and ultrasound imaging were included in the study. The statistical analysis comprised a multiple linear regression model, and a p-value <0.05 was considered significant. Results: The regression models for the TrA, IO, and EO muscle thickness levels explained 2.0% (R2=0.02; F=0.47; p=0.628), 10.6% (R2=0.106; F=3.03; p=0.057), and 10.1% (R2=0.101; F=2.70; p=0.077) of the variability, respectively. None of the regression models developed for the abdominal muscles exhibited statistical significance. A significant and negative association (p=0.018; β=-0.0343) was observed only between UI and IO recruitment. Conclusions: These results suggest that age-related factors may have interfered with the findings of the study, thus emphasizing the need to perform ultrasound imaging-based studies to measure abdominal muscle recruitment in the elderly.
null
null
5,959
242
[ 294, 84, 593, 147 ]
8
[ "muscle", "lbp", "ui", "ultrasound", "abdominal", "elderly", "study", "individuals", "variables", "recruitment" ]
[ "pain elderly individuals", "urinary incontinence ui", "incontinence questionnaire", "chronic pain elderly", "incontinence lbp low" ]
null
null
[CONTENT] elderly | low back pain | urinary incontinence | physical therapy | ultrasound imaging [SUMMARY]
[CONTENT] elderly | low back pain | urinary incontinence | physical therapy | ultrasound imaging [SUMMARY]
[CONTENT] elderly | low back pain | urinary incontinence | physical therapy | ultrasound imaging [SUMMARY]
null
[CONTENT] elderly | low back pain | urinary incontinence | physical therapy | ultrasound imaging [SUMMARY]
null
[CONTENT] Abdominal Muscles | Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Humans | Low Back Pain | Male | Ultrasonography | Urinary Incontinence [SUMMARY]
[CONTENT] Abdominal Muscles | Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Humans | Low Back Pain | Male | Ultrasonography | Urinary Incontinence [SUMMARY]
[CONTENT] Abdominal Muscles | Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Humans | Low Back Pain | Male | Ultrasonography | Urinary Incontinence [SUMMARY]
null
[CONTENT] Abdominal Muscles | Aged | Aged, 80 and over | Cross-Sectional Studies | Female | Humans | Low Back Pain | Male | Ultrasonography | Urinary Incontinence [SUMMARY]
null
[CONTENT] pain elderly individuals | urinary incontinence ui | incontinence questionnaire | chronic pain elderly | incontinence lbp low [SUMMARY]
[CONTENT] pain elderly individuals | urinary incontinence ui | incontinence questionnaire | chronic pain elderly | incontinence lbp low [SUMMARY]
[CONTENT] pain elderly individuals | urinary incontinence ui | incontinence questionnaire | chronic pain elderly | incontinence lbp low [SUMMARY]
null
[CONTENT] pain elderly individuals | urinary incontinence ui | incontinence questionnaire | chronic pain elderly | incontinence lbp low [SUMMARY]
null
[CONTENT] muscle | lbp | ui | ultrasound | abdominal | elderly | study | individuals | variables | recruitment [SUMMARY]
[CONTENT] muscle | lbp | ui | ultrasound | abdominal | elderly | study | individuals | variables | recruitment [SUMMARY]
[CONTENT] muscle | lbp | ui | ultrasound | abdominal | elderly | study | individuals | variables | recruitment [SUMMARY]
null
[CONTENT] muscle | lbp | ui | ultrasound | abdominal | elderly | study | individuals | variables | recruitment [SUMMARY]
null
[CONTENT] ultrasound | muscle | imaging | ultrasound imaging | population | activation | changes | pelvic | lbp | neuromuscular [SUMMARY]
[CONTENT] participant | variables | abdominal | device | statistical | ultrasound | spinal | images | independent | analysis [SUMMARY]
[CONTENT] lbp | pain | table | ui | patients | urinary incontinence lbp | incontinence lbp low pain | incontinence lbp low | lbp low pain | elderly patients [SUMMARY]
null
[CONTENT] muscle | lbp | ultrasound | ui | variables | abdominal | elderly | study | individuals | imaging [SUMMARY]
null
[CONTENT] LBP ||| [SUMMARY]
[CONTENT] Fifty-four | 72±5.2 years | LBP and/or UI | the McGill Pain Questionnaire | Incontinence Questionnaire-Short Form ||| 0.05 [SUMMARY]
[CONTENT] TrA | IO | EO | 2.0% | 10.6% | 10.1% | R2=0.101 ||| ||| UI | IO [SUMMARY]
null
[CONTENT] LBP ||| ||| Fifty-four | 72±5.2 years | LBP and/or UI | the McGill Pain Questionnaire | Incontinence Questionnaire-Short Form ||| 0.05 ||| ||| TrA | IO | EO | 2.0% | 10.6% | 10.1% | R2=0.101 ||| ||| UI | IO ||| [SUMMARY]
null
Worksite-based cardiovascular risk screening and management: a feasibility study.
28652760
Established cardiovascular risk factors are highly prevalent and contribute substantially to cardiovascular morbidity and mortality because they remain uncontrolled in many Canadians. Worksite-based cardiovascular risk factor screening and management represent a largely untapped strategy for optimizing risk factor control.
BACKGROUND
In a 2-phase collaborative demonstration project between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC), ANC employees were offered cardiovascular risk factor screening and management. Screening was performed at the worksite by AHS nurses, who collected baseline history, performed automated blood pressure measurement and point-of-care testing for lipids and A1c, and calculated 10-year Framingham risk. Employees with a Framingham risk score of ≥10% and uncontrolled blood pressure, dyslipidemia, or smoking were offered 6 months of pharmacist case management to optimize their risk factor control.
METHODS
In total, 87 of 190 (46%) employees volunteered to undergo cardiovascular risk factor screening. Mean age was 44.5±11.9 years, 73 (83.9%) were male, 14 (16.1%) had hypertension, 4 (4.6%) had diabetes, 12 (13.8%) were current smokers, and 9 (10%) had dyslipidemia. Of 36 employees with an estimated Framingham risk score of ≥10%, 21 (58%) agreed to receive case management and 15 (42%) attended baseline and 6-month follow-up case management visits. Statistically significant reductions in left arm systolic blood pressure (-8.0±12.4 mmHg; p=0.03) and triglyceride levels (-0.8±1.4 mmol/L; p=0.04) occurred following case management.
RESULTS
These findings demonstrate the feasibility and usefulness of collaborative, worksite-based cardiovascular risk factor screening and management. Expansion of this type of partnership in a cost-effective manner is warranted.
CONCLUSION
[ "Adult", "Alberta", "Antihypertensive Agents", "Cardiovascular Diseases", "Community Pharmacy Services", "Delivery of Health Care, Integrated", "Dyslipidemias", "Feasibility Studies", "Female", "Humans", "Hypertension", "Hypolipidemic Agents", "Male", "Mass Screening", "Middle Aged", "Models, Organizational", "Occupational Health Services", "Organizational Objectives", "Predictive Value of Tests", "Risk Assessment", "Risk Factors", "Risk Reduction Behavior", "Smoking", "Smoking Cessation", "Smoking Prevention", "Time Factors", "Treatment Outcome", "Workplace" ]
5476431
Background
Cardiovascular disease, which includes heart disease and stroke, is the second leading cause of mortality in Canada, accounting for 125,000 deaths or 35% of all fatalities in 2013.1 The 4 most important modifiable risk factors for cardiovascular disease are hypertension, dyslipidemia, diabetes, and smoking.2,3 These risk factors are highly prevalent in Canadians – 23% have hypertension, 45% have dyslipidemia, 9% have diabetes, and 18% smoke.4–6 Despite the widespread availability of proven efficacious treatments, cardiovascular risk factor control rates are unacceptably low. For example, 37% of Canadians with hypertension and 81% of Canadians with dyslipidemia are not controlled to target levels.4,7 These care-gaps reflect a lost opportunity to improve health, prevent premature death and disability, and reduce unnecessary health care expenditures. As traditional methods of controlling cardiovascular risk factors (largely consisting of patients seeing their health care provider to get screened and treated) have not fully optimized risk factor control, additional approaches are needed. Given that Canadian workers spend an average of 30 hours per week at work, worksite cardiovascular risk screening and management programs represent a promising option to improve risk factor detection and control in working-aged Canadians.8 In a recently performed cross-sectional survey of Canadian workers, 40% were unaware of having at least one uncontrolled cardiovascular risk factor.9 Further, individuals with uncontrolled risk were less likely than those with no risk factors to engage in healthy behaviors.9 This underscores the importance of identifying unrecognized risk in order that it can be appropriately managed. The purpose of this report is to detail the results of a worksite-based collaboration between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC). The primary objective of this demonstration project was to examine the feasibility and usefulness of using embedded health care providers to perform cardiovascular risk factor screening and management onsite in the workplace.
Methods
Prior to study initiation, approval from the University of Alberta research ethics board was obtained. Informed consent was obtained from all participants. Setting: This worksite-based program was part of a larger, multipronged AHS initiative, termed the Vascular Risk Reduction Initiative, aimed at optimizing the screening and control of cardiovascular risk factors in Albertans by using novel means of health care service delivery. ANC is a newsprint producer based in Whitecourt, AB (estimated 2016 population of approximately 10,000). At the time this project was conducted, the company had 190 employees. AHS is Canada's largest fully integrated health services provider, responsible for delivering care to the >4 million residents of the province of Alberta. Screening: Cardiovascular risk factor screening was offered to all ANC employees. Two AHS nurses performed screening procedures onsite at the company pulp mill over a 5-week period. Screening took approximately 30 minutes per worker and consisted of collecting data on baseline demographics, history of cardiovascular risk factors or disease, family history, weight, height, body mass index (BMI), blood pressure, and point-of-care A1c and lipids. Weight was measured to the nearest tenth of a kilogram with the subject wearing light clothing and no shoes. Blood pressure was measured using the previously validated Watch BP Office (Microlife, Taipei, Taiwan) automated device in both arms simultaneously using recommended techniques.10,11 The first reading was discarded, and the mean of the latter 2 readings was calculated. DCA Vantage Analyzer (Siemens, Munich, Germany) and Cholestech LDX Analyzer (Alere, Waltham, MA, USA) devices were used to obtain A1c and lipid measurements. Ten-year cardiovascular disease risk and vascular age were estimated using the Canadian Cardiovascular Society Framingham Risk Score Calculator. Hypertension was defined according to past history, use of medications, and/or a blood pressure ≥140/90 mmHg (≥130/80 in participants with diabetes). Dyslipidemia was defined according to past history, use of medications, and/or a high low-density lipoprotein (LDL) cholesterol level (defined as ≥5.0 mmol/L in subjects with a Framingham risk estimate of <10%, ≥3.5 mmol/L for a risk estimate of 10-19%, and ≥2.0 mmol/L for a risk estimate of ≥20%). Diabetes was defined according to self-report, medication use, and/or an A1c ≥6.5%. Smoking was defined according to self-report.
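The screening definitions above lend themselves to a small rule-based classifier. The sketch below is a hypothetical illustration, not part of the AHS screening workflow: the record layout and field names are assumptions, but the thresholds encode exactly the stated rules (BP ≥140/90 mmHg, or ≥130/80 with diabetes; A1c ≥6.5%; and an LDL cut-off that tightens as the Framingham risk estimate rises).

```python
def ldl_threshold(framingham_risk):
    """LDL level (mmol/L) at or above which dyslipidemia is flagged,
    stratified by the 10-year Framingham risk estimate (as a fraction)."""
    if framingham_risk >= 0.20:
        return 2.0
    if framingham_risk >= 0.10:
        return 3.5
    return 5.0

def classify(worker):
    """Flag risk factors from one screening record (a dict of measurements;
    the history/medication keys are optional booleans)."""
    diabetes = bool(worker.get("diabetes_history") or worker.get("diabetes_meds")
                    or worker["a1c"] >= 6.5)
    sbp_limit, dbp_limit = (130, 80) if diabetes else (140, 90)
    hypertension = bool(worker.get("htn_history") or worker.get("htn_meds")
                        or worker["sbp"] >= sbp_limit or worker["dbp"] >= dbp_limit)
    dyslipidemia = bool(worker.get("lipid_history") or worker.get("lipid_meds")
                        or worker["ldl"] >= ldl_threshold(worker["framingham_risk"]))
    return {"hypertension": hypertension, "diabetes": diabetes,
            "dyslipidemia": dyslipidemia, "smoker": bool(worker.get("smoker"))}

# Example: a 12% ten-year risk lowers the LDL cut-off to 3.5 mmol/L.
print(classify({"a1c": 5.6, "sbp": 144, "dbp": 88, "ldl": 3.8,
                "framingham_risk": 0.12, "smoker": False}))
```

Note how the dyslipidemia flag depends on the already computed Framingham risk, mirroring the risk-stratified LDL definition in the text.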
Case management: Workers with uncontrolled blood pressure, LDL cholesterol, and/or smoking status, and a calculated Framingham risk score ≥10% were offered case management to optimize their cardiovascular risk profiles over a 6-month time frame. Two local pharmacists with full prescribing privileges, as per pharmacists' scope of practice in Alberta, performed case management. The case managers primarily focused on addressing uncontrolled blood pressure, dyslipidemia, and smoking.
Analyses Descriptive analyses were first performed, including calculation of means, standard deviations, and proportions. Changes in cardiovascular risk factors from baseline to follow-up were analyzed using paired Student’s t-tests for continuous variables and McNemar tests for dichotomous ones. p-Values <0.05 were considered statistically significant. Analyses were conducted using Excel (Microsoft, Redmond, WA, USA).
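The study ran these tests in Excel; as a hedged alternative, the same analysis plan can be expressed with SciPy and statsmodels. The arrays and the 2x2 table below are placeholder values for illustration, not study data.

```python
# Sketch of the stated analysis plan: paired t-tests for continuous risk factors,
# McNemar tests for dichotomous ones. All numbers below are placeholders.
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Paired Student's t-test for a continuous variable (e.g., systolic blood pressure).
baseline_sbp = np.array([148.0, 152.0, 139.0, 160.0, 145.0])
followup_sbp = np.array([140.0, 141.0, 137.0, 149.0, 143.0])
t_stat, p_continuous = ttest_rel(baseline_sbp, followup_sbp)

# McNemar test for a paired dichotomous variable (e.g., smoking yes/no):
# rows = baseline status, columns = follow-up status.
table = np.array([[2, 1],
                  [0, 12]])
p_dichotomous = mcnemar(table, exact=True).pvalue  # exact version suits small samples

print(f"paired t-test p={p_continuous:.3f}; McNemar p={p_dichotomous:.3f}")
```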
Results
In total, 87 workers received cardiovascular risk factor screening. Baseline characteristics of the study population, stratified by receipt of case management, are summarized in Table 1. Mean age was 44.5±11.9 years, 73 (83.9%) were male, 14 (16.1%) had hypertension, 4 (4.6%) had diabetes, 12 (13.8%) were current smokers, and 9 (10%) had dyslipidemia. Case-managed workers were older, more likely male, and had a greater cardiovascular risk factor burden. Thirty-six (41%) workers had an estimated Framingham risk score of ≥10% and were smokers or had uncontrolled blood pressure or lipids. Of these, 21 (58%) agreed to receive case management. These individuals had a mean 10-year Framingham risk score of 22.8%. Fifteen (42% of the 36 eligible) attended baseline and 6-month follow-up case management visits and, in these individuals, statistically significant reductions in left arm systolic blood pressure (−8.0±12.4 mmHg; p=0.03) and triglyceride (−0.8±1.4 mmol/L; p=0.04) levels occurred after case management (Table 2). Of 5 smokers who agreed to case management, 4 did not complete case management and the remaining one quit smoking following case management. No substantial changes in BMI, A1c, or Framingham risk score were observed.
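As a quick arithmetic check on the denominators behind the percentages above (a reader could otherwise misread 42% as a fraction of the 21 who agreed), the reported proportions are all taken against the stated denominators:

```python
# Consistency check of the proportions reported above, using the stated denominators.
screened, eligible, agreed, completed = 87, 36, 21, 15
print(f"{eligible / screened:.0%} of screened workers were eligible")   # 41%
print(f"{agreed / eligible:.0%} of eligible workers agreed")            # 58%
print(f"{completed / eligible:.0%} of eligible workers completed")      # 42%
```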
Conclusion
The results of this study verify that worksite-based cardiovascular risk factor screening and management is a viable strategy for identifying and reducing latent cardiovascular risk. These findings demonstrate how health service providers and employers can collaborate to improve the health of workers. Future work should focus on expanding this pilot project in a cost-effective manner.
[ "Background", "Setting", "Screening", "Case management", "Analyses", "Limitations", "Conclusion" ]
[ "Cardiovascular disease, which includes heart disease and stroke, is the second leading cause of mortality in Canada, accounting for 125,000 deaths or 35% of all fatalities in 2013.1 The 4 most important modifiable risk factors for cardiovascular disease are hypertension, dyslipidemia, diabetes, and smoking.2,3 These risk factors are highly prevalent in Canadians – 23% have hypertension, 45% have dyslipidemia, 9% have diabetes, and 18% smoke.4–6\nDespite the widespread availability of proven efficacious treatments, cardiovascular risk factor control rates are unacceptably low. For example, 37% of Canadians with hypertension and 81% of Canadians with dyslipidemia are not controlled to target levels.4,7 These care-gaps reflect a lost opportunity to improve health, prevent premature death and disability, and reduce unnecessary health care expenditures.\nAs traditional methods of controlling cardiovascular risk factors (largely consisting of patients seeing their health care provider to get screened and treated) have not fully optimized risk factor control, additional approaches are needed. Given that Canadian workers spend an average of 30 hours per week at work, worksite cardiovascular risk screening and management programs represent a promising option to improve risk factor detection and control in working-aged Canadians.8 In a recently performed cross-sectional survey of Canadian workers, 40% were unaware of having at least one uncontrolled cardiovascular risk factor.9 Further, individuals with uncontrolled risk were less likely than those with no risk factors to engage in healthy behaviors.9 This underscores the importance of identifying unrecognized risk in order that it can be appropriately managed.\nThe purpose of this report is to detail the results of a worksite-based collaboration between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC). The primary objective of this demonstration project was to examine the feasibility and usefulness of using embedded health care providers to perform cardiovascular risk factor screening and management onsite in the workplace.", "This worksite-based program was part of a larger, multipronged AHS initiative, termed the Vascular Risk Reduction Initiative, aimed at optimizing the screening and control of cardiovascular risk factors in Albertans by using novel means of health care service delivery. ANC is a newsprint producer based in Whitecourt, AB (estimated 2016 population of approximately 10,000). At the time this project was conducted, the company had 190 employees. AHS is Canada’s largest fully integrated health services provider, responsible for delivering care to the >4 million residents of the province of Alberta.", "Cardiovascular risk factor screening was offered to all ANC employees. Two AHS nurses performed screening procedures onsite at the company pulp mill over a 5-week period. Screening took approximately 30 minutes per worker and consisted of collecting data on baseline demographics, history of cardiovascular risk factors or disease, family history, weight, height, body mass index (BMI), blood pressure, and point-of-care A1c and lipids. Weight was measured to the nearest tenth of a kilogram with the subject wearing light clothing and no shoes. Blood pressure was measured using the previously validated Watch BP Office (Microlife, Taipei, Taiwan) automated device in both arms simultaneously using recommended techniques.10,11 The first reading was discarded, and the mean of the latter 2 readings was calculated. 
DCA Vantage Analyzer (Siemens, Munich, Germany) and Cholestech LDX Analyzer (Alere, Waltham, MA, USA) devices were used to obtain A1c and lipid measurements. Ten-year cardiovascular disease risk and vascular age were estimated using the Canadian Cardiovascular Society Framingham Risk Score Calculator.\nHypertension was defined according to past history, use of medications, and/or a blood pressure ≥140/90 mmHg (≥130/80 in participants with diabetes). Dyslipidemia was defined according to past history, use of medications, and/or a high low-density lipoprotein (LDL) cholesterol level (defined as ≥5.0 mmol/L in subjects with a Framingham risk estimate of <10%, ≥3.5 mmol/L for a risk estimate of 10–19%, and ≥2.0 mmol/L for a risk estimate of ≥20%). Diabetes was defined according to self-report, medication use, and/or an A1c ≥6.5%. Smoking was defined according to self-report.", "Workers with uncontrolled blood pressure, LDL cholesterol, and/or smoking status, and a calculated Framingham risk score ≥10% were offered case management to optimize their cardiovascular risk profiles over a 6-month time frame. Two local pharmacists with full prescribing privileges, as per pharmacists’ scope of practice in Alberta, performed case management. The case managers primarily focused on addressing uncontrolled blood pressure, dyslipidemia, and smoking. Management was based on contemporary guidelines and consisted of health behavior counseling and appropriate pharmacotherapy.10 Health behavior counseling followed the recommendations of the Canadian Hypertension Guidelines and included recommendations to reduce sodium intake towards 2 g/day, achievement/maintenance of a normal body weight (18.5–24.9 kg/m2), increase aerobic physical activity to 30–60 minutes per day on 4–7 days per week, and consumption of the Dietary Approaches to Stop Hypertension diet.10 Risk factor targets were as follows:\nBlood pressure <140/90 mmHg (130/80 mmHg for those with diabetes)\nAn LDL cholesterol level of <2.0 mmol/L or a 50% decline\nNonsmoking status\nSecondary outcomes included changes in A1c levels, body weight, BMI, and the 10-year total Framingham cardiovascular risk estimate. Changes in prescription medications were not tracked.", "Descriptive analyses were first performed, including calculation of means, standard deviations, and proportions. Changes in cardiovascular risk factors from baseline to follow-up were analyzed using paired Student’s t-tests for continuous variables and McNemar tests for dichotomous ones. p-Values <0.05 were considered statistically significant. Analyses were conducted using Excel (Microsoft, Redmond, WA, USA).", "The major limitations of this study are the relatively small sample size, lack of controls, and short duration of follow-up. The lack of a control group precludes adjustment for temporal trends. In addition, an economic assessment was not performed to evaluate the cost-effectiveness of this strategy. These limitations indicate the need to confirm the robustness of these results in larger sample sizes and over more sustained follow-up periods.", "The results of this study verify that worksite-based cardiovascular risk factor screening and management is a viable strategy for identifying and reducing latent cardiovascular risk. These findings demonstrate how health service providers and employers can collaborate to improve the health of workers. 
Future work should focus on expanding this pilot project in a cost-effective manner." ]
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Setting", "Screening", "Case management", "Analyses", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "Cardiovascular disease, which includes heart disease and stroke, is the second leading cause of mortality in Canada, accounting for 125,000 deaths or 35% of all fatalities in 2013.1 The 4 most important modifiable risk factors for cardiovascular disease are hypertension, dyslipidemia, diabetes, and smoking.2,3 These risk factors are highly prevalent in Canadians – 23% have hypertension, 45% have dyslipidemia, 9% have diabetes, and 18% smoke.4–6\nDespite the widespread availability of proven efficacious treatments, cardiovascular risk factor control rates are unacceptably low. For example, 37% of Canadians with hypertension and 81% of Canadians with dyslipidemia are not controlled to target levels.4,7 These care-gaps reflect a lost opportunity to improve health, prevent premature death and disability, and reduce unnecessary health care expenditures.\nAs traditional methods of controlling cardiovascular risk factors (largely consisting of patients seeing their health care provider to get screened and treated) have not fully optimized risk factor control, additional approaches are needed. Given that Canadian workers spend an average of 30 hours per week at work, worksite cardiovascular risk screening and management programs represent a promising option to improve risk factor detection and control in working-aged Canadians.8 In a recently performed cross-sectional survey of Canadian workers, 40% were unaware of having at least one uncontrolled cardiovascular risk factor.9 Further, individuals with uncontrolled risk were less likely than those with no risk factors to engage in healthy behaviors.9 This underscores the importance of identifying unrecognized risk in order that it can be appropriately managed.\nThe purpose of this report is to detail the results of a worksite-based collaboration between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC). The primary objective of this demonstration project was to examine the feasibility and usefulness of using embedded health care providers to perform cardiovascular risk factor screening and management onsite in the workplace.", "Prior to study initiation, approval from University of Alberta research ethics board was obtained. Informed consent was obtained from all participants.\n Setting This worksite-based program was part of a larger, multipronged AHS initiative, termed the Vascular Risk Reduction Initiative, aimed at optimizing the screening and control of cardiovascular risk factors in Albertans by using novel means of health care service delivery. ANC is a newsprint producer based in Whitecourt, AB (estimated 2016 population of approximately 10,000). At the time this project was conducted, the company had 190 employees. AHS is Canada’s largest fully integrated health services provider, responsible for delivering care to the >4 million residents of the province of Alberta.\nThis worksite-based program was part of a larger, multipronged AHS initiative, termed the Vascular Risk Reduction Initiative, aimed at optimizing the screening and control of cardiovascular risk factors in Albertans by using novel means of health care service delivery. ANC is a newsprint producer based in Whitecourt, AB (estimated 2016 population of approximately 10,000). At the time this project was conducted, the company had 190 employees. 
AHS is Canada’s largest fully integrated health services provider, responsible for delivering care to the >4 million residents of the province of Alberta.\n Screening Cardiovascular risk factor screening was offered to all ANC employees. Two AHS nurses performed screening procedures onsite at the company pulp mill over a 5-week period. Screening took approximately 30 minutes per worker and consisted of collecting data on baseline demographics, history of cardiovascular risk factors or disease, family history, weight, height, body mass index (BMI), blood pressure, and point-of-care A1c and lipids. Weight was measured to the nearest tenth of a kilogram with the subject wearing light clothing and no shoes. Blood pressure was measured using the previously validated Watch BP Office (Microlife, Taipei, Taiwan) automated device in both arms simultaneously using recommended techniques.10,11 The first reading was discarded, and the mean of the latter 2 readings was calculated. DCA Vantage Analyzer (Siemens, Munich, Germany) and Cholestech LDX Analyzer (Alere, Waltham, MA, USA) devices were used to obtain A1c and lipid measurements. Ten-year cardiovascular disease risk and vascular age were estimated using the Canadian Cardiovascular Society Framingham Risk Score Calculator.\nHypertension was defined according to past history, use of medications, and/or a blood pressure ≥140/90 mmHg (≥130/80 in participants with diabetes). Dyslipidemia was defined according to past history, use of medications, and/or a high low-density lipoprotein (LDL) cholesterol level (defined as ≥5.0 mmol/L in subjects with a Framingham risk estimate of <10%, ≥3.5 mmol/L for a risk estimate of 10–19%, and ≥2.0 mmol/L for a risk estimate of ≥20%). Diabetes was defined according to self-report, medication use, and/or an A1c ≥6.5%. Smoking was defined according to self-report.\nCardiovascular risk factor screening was offered to all ANC employees. Two AHS nurses performed screening procedures onsite at the company pulp mill over a 5-week period. Screening took approximately 30 minutes per worker and consisted of collecting data on baseline demographics, history of cardiovascular risk factors or disease, family history, weight, height, body mass index (BMI), blood pressure, and point-of-care A1c and lipids. Weight was measured to the nearest tenth of a kilogram with the subject wearing light clothing and no shoes. Blood pressure was measured using the previously validated Watch BP Office (Microlife, Taipei, Taiwan) automated device in both arms simultaneously using recommended techniques.10,11 The first reading was discarded, and the mean of the latter 2 readings was calculated. DCA Vantage Analyzer (Siemens, Munich, Germany) and Cholestech LDX Analyzer (Alere, Waltham, MA, USA) devices were used to obtain A1c and lipid measurements. Ten-year cardiovascular disease risk and vascular age were estimated using the Canadian Cardiovascular Society Framingham Risk Score Calculator.\nHypertension was defined according to past history, use of medications, and/or a blood pressure ≥140/90 mmHg (≥130/80 in participants with diabetes). Dyslipidemia was defined according to past history, use of medications, and/or a high low-density lipoprotein (LDL) cholesterol level (defined as ≥5.0 mmol/L in subjects with a Framingham risk estimate of <10%, ≥3.5 mmol/L for a risk estimate of 10–19%, and ≥2.0 mmol/L for a risk estimate of ≥20%). Diabetes was defined according to self-report, medication use, and/or an A1c ≥6.5%. 
Smoking was defined according to self-report.\n Case management Workers with uncontrolled blood pressure, LDL cholesterol, and/or smoking status, and a calculated Framingham risk score ≥10% were offered case management to optimize their cardiovascular risk profiles over a 6-month time frame. Two local pharmacists with full prescribing privileges, as per pharmacists’ scope of practice in Alberta, performed case management. The case managers primarily focused on addressing uncontrolled blood pressure, dyslipidemia, and smoking. Management was based on contemporary guidelines and consisted of health behavior counseling and appropriate pharmacotherapy.10 Health behavior counseling followed the recommendations of the Canadian Hypertension Guidelines and included recommendations to reduce sodium intake towards 2 g/day, achievement/maintenance of a normal body weight (18.5–24.9 kg/m2), increase aerobic physical activity to 30–60 minutes per day on 4–7 days per week, and consumption of the Dietary Approaches to Stop Hypertension diet.10 Risk factor targets were as follows:\nBlood pressure <140/90 mmHg (130/80 mmHg for those with diabetes)An LDL cholesterol level of <2.0 mmol/L or a 50% declineNonsmoking status\nBlood pressure <140/90 mmHg (130/80 mmHg for those with diabetes)\nAn LDL cholesterol level of <2.0 mmol/L or a 50% decline\nNonsmoking status\nSecondary outcomes included changes in A1c levels, body weight, BMI, and the 10-year total Framingham cardiovascular risk estimate. Changes in prescription medications were not tracked.\nWorkers with uncontrolled blood pressure, LDL cholesterol, and/or smoking status, and a calculated Framingham risk score ≥10% were offered case management to optimize their cardiovascular risk profiles over a 6-month time frame. Two local pharmacists with full prescribing privileges, as per pharmacists’ scope of practice in Alberta, performed case management. The case managers primarily focused on addressing uncontrolled blood pressure, dyslipidemia, and smoking. Management was based on contemporary guidelines and consisted of health behavior counseling and appropriate pharmacotherapy.10 Health behavior counseling followed the recommendations of the Canadian Hypertension Guidelines and included recommendations to reduce sodium intake towards 2 g/day, achievement/maintenance of a normal body weight (18.5–24.9 kg/m2), increase aerobic physical activity to 30–60 minutes per day on 4–7 days per week, and consumption of the Dietary Approaches to Stop Hypertension diet.10 Risk factor targets were as follows:\nBlood pressure <140/90 mmHg (130/80 mmHg for those with diabetes)An LDL cholesterol level of <2.0 mmol/L or a 50% declineNonsmoking status\nBlood pressure <140/90 mmHg (130/80 mmHg for those with diabetes)\nAn LDL cholesterol level of <2.0 mmol/L or a 50% decline\nNonsmoking status\nSecondary outcomes included changes in A1c levels, body weight, BMI, and the 10-year total Framingham cardiovascular risk estimate. Changes in prescription medications were not tracked.\n Analyses Descriptive analyses were first performed, including calculation of means, standard deviations, and proportions. Changes in cardiovascular risk factors from baseline to follow-up were analyzed using paired Student’s t-tests for continuous variables and McNemar tests for dichotomous ones. p-Values <0.05 were considered statistically significant. 
Analyses were conducted using Excel (Microsoft, Redmond, WA, USA).\nDescriptive analyses were first performed, including calculation of means, standard deviations, and proportions. Changes in cardiovascular risk factors from baseline to follow-up were analyzed using paired Student’s t-tests for continuous variables and McNemar tests for dichotomous ones. p-Values <0.05 were considered statistically significant. Analyses were conducted using Excel (Microsoft, Redmond, WA, USA).", "This worksite-based program was part of a larger, multipronged AHS initiative, termed the Vascular Risk Reduction Initiative, aimed at optimizing the screening and control of cardiovascular risk factors in Albertans by using novel means of health care service delivery. ANC is a newsprint producer based in Whitecourt, AB (estimated 2016 population of approximately 10,000). At the time this project was conducted, the company had 190 employees. AHS is Canada’s largest fully integrated health services provider, responsible for delivering care to the >4 million residents of the province of Alberta.", "Cardiovascular risk factor screening was offered to all ANC employees. Two AHS nurses performed screening procedures onsite at the company pulp mill over a 5-week period. Screening took approximately 30 minutes per worker and consisted of collecting data on baseline demographics, history of cardiovascular risk factors or disease, family history, weight, height, body mass index (BMI), blood pressure, and point-of-care A1c and lipids. Weight was measured to the nearest tenth of a kilogram with the subject wearing light clothing and no shoes. Blood pressure was measured using the previously validated Watch BP Office (Microlife, Taipei, Taiwan) automated device in both arms simultaneously using recommended techniques.10,11 The first reading was discarded, and the mean of the latter 2 readings was calculated. DCA Vantage Analyzer (Siemens, Munich, Germany) and Cholestech LDX Analyzer (Alere, Waltham, MA, USA) devices were used to obtain A1c and lipid measurements. Ten-year cardiovascular disease risk and vascular age were estimated using the Canadian Cardiovascular Society Framingham Risk Score Calculator.\nHypertension was defined according to past history, use of medications, and/or a blood pressure ≥140/90 mmHg (≥130/80 in participants with diabetes). Dyslipidemia was defined according to past history, use of medications, and/or a high low-density lipoprotein (LDL) cholesterol level (defined as ≥5.0 mmol/L in subjects with a Framingham risk estimate of <10%, ≥3.5 mmol/L for a risk estimate of 10–19%, and ≥2.0 mmol/L for a risk estimate of ≥20%). Diabetes was defined according to self-report, medication use, and/or an A1c ≥6.5%. Smoking was defined according to self-report.", "Workers with uncontrolled blood pressure, LDL cholesterol, and/or smoking status, and a calculated Framingham risk score ≥10% were offered case management to optimize their cardiovascular risk profiles over a 6-month time frame. Two local pharmacists with full prescribing privileges, as per pharmacists’ scope of practice in Alberta, performed case management. The case managers primarily focused on addressing uncontrolled blood pressure, dyslipidemia, and smoking. 
Management was based on contemporary guidelines and consisted of health behavior counseling and appropriate pharmacotherapy.10 Health behavior counseling followed the recommendations of the Canadian Hypertension Guidelines and included recommendations to reduce sodium intake towards 2 g/day, achievement/maintenance of a normal body weight (18.5–24.9 kg/m2), increase aerobic physical activity to 30–60 minutes per day on 4–7 days per week, and consumption of the Dietary Approaches to Stop Hypertension diet.10 Risk factor targets were as follows:\nBlood pressure <140/90 mmHg (130/80 mmHg for those with diabetes)\nAn LDL cholesterol level of <2.0 mmol/L or a 50% decline\nNonsmoking status\nSecondary outcomes included changes in A1c levels, body weight, BMI, and the 10-year total Framingham cardiovascular risk estimate. Changes in prescription medications were not tracked.", "Descriptive analyses were first performed, including calculation of means, standard deviations, and proportions. Changes in cardiovascular risk factors from baseline to follow-up were analyzed using paired Student’s t-tests for continuous variables and McNemar tests for dichotomous ones. p-Values <0.05 were considered statistically significant. Analyses were conducted using Excel (Microsoft, Redmond, WA, USA).", "In total, 87 workers received cardiovascular risk factor screening. Baseline characteristics of the study population, stratified by receipt of case management, are summarized in Table 1. Mean age was 44.5±11.9 years, 73 (83.9%) were male, 14 (16.1%) had hypertension, 4 (4.6%) had diabetes, 12 (13.8%) were current smokers, and 9 (10%) had dyslipidemia. Case-managed workers were older, more likely male, and had a greater cardiovascular risk factor burden.\nThirty-six (41%) workers had an estimated Framingham risk score of ≥10% and were smokers or had uncontrolled blood pressure or lipids. Of these, 21 (58%) agreed to receive case management. These individuals had a mean 10-year Framingham risk score of 22.8%. Fifteen (42%) attended baseline and 6-month follow-up case management visits and, in these individuals, statistically significant reductions in left arm systolic blood pressure (−8.0±12.4 mmHg; p=0.03) and triglyceride (−0.8±1.4 mmol/L; p=0.04) levels occurred after case management (Table 2). Of 5 smokers who agreed to case management, 4 did not complete case management and the remaining one quit smoking following case management. No substantial changes in BMI, A1c, or Framingham risk score were observed.", "In summary, the results of this pilot project confirm that worksite-based cardiovascular risk factor screening and management is both feasible and useful. In the screening phase of the program, >40% of the workers receiving screening were identified as having a moderate-to-high cardiovascular risk level. In the workers who received case management, improvements in cardiovascular risk factors, particularly blood pressure and triglycerides, were demonstrated.\nThe most novel aspect of this collaborative project is that it involved embedding AHS and community clinicians within the worksite to deliver care. This approach moves beyond the traditional “worksite wellness” program model that focuses on educational interventions alone. Traditional approaches have unsurprisingly had limited effectiveness because they do not involve use of the most effective and proven aspects of cardiovascular risk factor control, prescription of antihypertensive drugs and statins.12–14 Embedding clinicians in the worksite greatly increases convenience for workers and also obviates the need for them to take time off to attend clinic appointments. Indeed, this type of care model has the potential to reach Canadians who do not typically seek medical care and who are less likely to be aware of having uncontrolled cardiovascular risk, such as younger individuals and males.15\nDespite encouragement, only 15 subjects, or 42% of the high-risk patients, followed through with case management. This reflects known practical realities in engaging high-risk patients in their care and is highly likely to be found with replication of this project.", "The major limitations of this study are the relatively small sample size, lack of controls, and short duration of follow-up. The lack of a control group precludes adjustment for temporal trends. In addition, an economic assessment was not performed to evaluate the cost-effectiveness of this strategy. These limitations indicate the need to confirm the robustness of these results in larger sample sizes and over more sustained follow-up periods.", "The results of this study verify that worksite-based cardiovascular risk factor screening and management is a viable strategy for identifying and reducing latent cardiovascular risk. These findings demonstrate how health service providers and employers can collaborate to improve the health of workers. Future work should focus on expanding this pilot project in a cost-effective manner." ]
[ null, "methods", null, null, null, null, "results", "discussion", null, null ]
[ "blood pressure", "dyslipidemia", "smoking", "pharmacist", "worksite" ]
Background: Cardiovascular disease, which includes heart disease and stroke, is the second leading cause of mortality in Canada, accounting for 125,000 deaths or 35% of all fatalities in 2013.1 The 4 most important modifiable risk factors for cardiovascular disease are hypertension, dyslipidemia, diabetes, and smoking.2,3 These risk factors are highly prevalent in Canadians – 23% have hypertension, 45% have dyslipidemia, 9% have diabetes, and 18% smoke.4–6 Despite the widespread availability of proven efficacious treatments, cardiovascular risk factor control rates are unacceptably low. For example, 37% of Canadians with hypertension and 81% of Canadians with dyslipidemia are not controlled to target levels.4,7 These care-gaps reflect a lost opportunity to improve health, prevent premature death and disability, and reduce unnecessary health care expenditures. As traditional methods of controlling cardiovascular risk factors (largely consisting of patients seeing their health care provider to get screened and treated) have not fully optimized risk factor control, additional approaches are needed. Given that Canadian workers spend an average of 30 hours per week at work, worksite cardiovascular risk screening and management programs represent a promising option to improve risk factor detection and control in working-aged Canadians.8 In a recently performed cross-sectional survey of Canadian workers, 40% were unaware of having at least one uncontrolled cardiovascular risk factor.9 Further, individuals with uncontrolled risk were less likely than those with no risk factors to engage in healthy behaviors.9 This underscores the importance of identifying unrecognized risk in order that it can be appropriately managed. The purpose of this report is to detail the results of a worksite-based collaboration between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC). The primary objective of this demonstration project was to examine the feasibility and usefulness of using embedded health care providers to perform cardiovascular risk factor screening and management onsite in the workplace. Methods: Prior to study initiation, approval from the University of Alberta research ethics board was obtained. Informed consent was obtained from all participants. Setting This worksite-based program was part of a larger, multipronged AHS initiative, termed the Vascular Risk Reduction Initiative, aimed at optimizing the screening and control of cardiovascular risk factors in Albertans by using novel means of health care service delivery. ANC is a newsprint producer based in Whitecourt, AB (estimated 2016 population of approximately 10,000). At the time this project was conducted, the company had 190 employees. AHS is Canada’s largest fully integrated health services provider, responsible for delivering care to the >4 million residents of the province of Alberta. 
Screening Cardiovascular risk factor screening was offered to all ANC employees. Two AHS nurses performed screening procedures onsite at the company pulp mill over a 5-week period. Screening took approximately 30 minutes per worker and consisted of collecting data on baseline demographics, history of cardiovascular risk factors or disease, family history, weight, height, body mass index (BMI), blood pressure, and point-of-care A1c and lipids. Weight was measured to the nearest tenth of a kilogram with the subject wearing light clothing and no shoes. Blood pressure was measured using the previously validated Watch BP Office (Microlife, Taipei, Taiwan) automated device in both arms simultaneously using recommended techniques.10,11 The first reading was discarded, and the mean of the latter 2 readings was calculated. DCA Vantage Analyzer (Siemens, Munich, Germany) and Cholestech LDX Analyzer (Alere, Waltham, MA, USA) devices were used to obtain A1c and lipid measurements. Ten-year cardiovascular disease risk and vascular age were estimated using the Canadian Cardiovascular Society Framingham Risk Score Calculator. Hypertension was defined according to past history, use of medications, and/or a blood pressure ≥140/90 mmHg (≥130/80 in participants with diabetes). Dyslipidemia was defined according to past history, use of medications, and/or a high low-density lipoprotein (LDL) cholesterol level (defined as ≥5.0 mmol/L in subjects with a Framingham risk estimate of <10%, ≥3.5 mmol/L for a risk estimate of 10–19%, and ≥2.0 mmol/L for a risk estimate of ≥20%). Diabetes was defined according to self-report, medication use, and/or an A1c ≥6.5%. Smoking was defined according to self-report. 
Case management Workers with uncontrolled blood pressure, LDL cholesterol, and/or smoking status, and a calculated Framingham risk score ≥10% were offered case management to optimize their cardiovascular risk profiles over a 6-month time frame. Two local pharmacists with full prescribing privileges, as per pharmacists’ scope of practice in Alberta, performed case management. The case managers primarily focused on addressing uncontrolled blood pressure, dyslipidemia, and smoking. Management was based on contemporary guidelines and consisted of health behavior counseling and appropriate pharmacotherapy.10 Health behavior counseling followed the recommendations of the Canadian Hypertension Guidelines and included recommendations to reduce sodium intake towards 2 g/day, achievement/maintenance of a normal body weight (18.5–24.9 kg/m2), increase aerobic physical activity to 30–60 minutes per day on 4–7 days per week, and consumption of the Dietary Approaches to Stop Hypertension diet.10 Risk factor targets were as follows: Blood pressure <140/90 mmHg (130/80 mmHg for those with diabetes) An LDL cholesterol level of <2.0 mmol/L or a 50% decline Nonsmoking status Secondary outcomes included changes in A1c levels, body weight, BMI, and the 10-year total Framingham cardiovascular risk estimate. Changes in prescription medications were not tracked. Analyses Descriptive analyses were first performed, including calculation of means, standard deviations, and proportions. Changes in cardiovascular risk factors from baseline to follow-up were analyzed using paired Student’s t-tests for continuous variables and McNemar tests for dichotomous ones. p-Values <0.05 were considered statistically significant. Analyses were conducted using Excel (Microsoft, Redmond, WA, USA). Setting: This worksite-based program was part of a larger, multipronged AHS initiative, termed the Vascular Risk Reduction Initiative, aimed at optimizing the screening and control of cardiovascular risk factors in Albertans by using novel means of health care service delivery. ANC is a newsprint producer based in Whitecourt, AB (estimated 2016 population of approximately 10,000). At the time this project was conducted, the company had 190 employees. AHS is Canada’s largest fully integrated health services provider, responsible for delivering care to the >4 million residents of the province of Alberta. Screening: Cardiovascular risk factor screening was offered to all ANC employees. Two AHS nurses performed screening procedures onsite at the company pulp mill over a 5-week period. Screening took approximately 30 minutes per worker and consisted of collecting data on baseline demographics, history of cardiovascular risk factors or disease, family history, weight, height, body mass index (BMI), blood pressure, and point-of-care A1c and lipids. Weight was measured to the nearest tenth of a kilogram with the subject wearing light clothing and no shoes. Blood pressure was measured using the previously validated Watch BP Office (Microlife, Taipei, Taiwan) automated device in both arms simultaneously using recommended techniques.10,11 The first reading was discarded, and the mean of the latter 2 readings was calculated. DCA Vantage Analyzer (Siemens, Munich, Germany) and Cholestech LDX Analyzer (Alere, Waltham, MA, USA) devices were used to obtain A1c and lipid measurements. Ten-year cardiovascular disease risk and vascular age were estimated using the Canadian Cardiovascular Society Framingham Risk Score Calculator. Hypertension was defined according to past history, use of medications, and/or a blood pressure ≥140/90 mmHg (≥130/80 in participants with diabetes). Dyslipidemia was defined according to past history, use of medications, and/or a high low-density lipoprotein (LDL) cholesterol level (defined as ≥5.0 mmol/L in subjects with a Framingham risk estimate of <10%, ≥3.5 mmol/L for a risk estimate of 10–19%, and ≥2.0 mmol/L for a risk estimate of ≥20%). Diabetes was defined according to self-report, medication use, and/or an A1c ≥6.5%. Smoking was defined according to self-report. Case management: Workers with uncontrolled blood pressure, LDL cholesterol, and/or smoking status, and a calculated Framingham risk score ≥10% were offered case management to optimize their cardiovascular risk profiles over a 6-month time frame. Two local pharmacists with full prescribing privileges, as per pharmacists’ scope of practice in Alberta, performed case management. The case managers primarily focused on addressing uncontrolled blood pressure, dyslipidemia, and smoking. 
Management was based on contemporary guidelines and consisted of health behavior counseling and appropriate pharmacotherapy.10 Health behavior counseling followed the recommendations of the Canadian Hypertension Guidelines and included recommendations to reduce sodium intake towards 2 g/day, achievement/maintenance of a normal body weight (18.5–24.9 kg/m2), increase aerobic physical activity to 30–60 minutes per day on 4–7 days per week, and consumption of the Dietary Approaches to Stop Hypertension diet.10 Risk factor targets were as follows: Blood pressure <140/90 mmHg (130/80 mmHg for those with diabetes) An LDL cholesterol level of <2.0 mmol/L or a 50% decline Nonsmoking status Secondary outcomes included changes in A1c levels, body weight, BMI, and the 10-year total Framingham cardiovascular risk estimate. Changes in prescription medications were not tracked. Analyses: Descriptive analyses were first performed, including calculation of means, standard deviations, and proportions. Changes in cardiovascular risk factors from baseline to follow-up were analyzed using paired Student’s t-tests for continuous variables and McNemar tests for dichotomous ones. p-Values <0.05 were considered statistically significant. Analyses were conducted using Excel (Microsoft, Redmond, WA, USA). Results: In total, 87 workers received cardiovascular risk factor screening. Baseline characteristics of the study population, stratified by receipt of case management, are summarized in Table 1. Mean age was 44.5±11.9 years, 73 (83.9%) were male, 14 (16.1%) had hypertension, 4 (4.6%) had diabetes, 12 (13.8%) were current smokers, and 9 (10%) had dyslipidemia. Case-managed workers were older, more likely male, and had a greater cardiovascular risk factor burden. Thirty-six (41%) workers had an estimated Framingham risk score of ≥10% and were smokers or had uncontrolled blood pressure or lipids. Of these, 21 (58%) agreed to receive case management. These individuals had a mean 10-year Framingham risk score of 22.8%. Fifteen (42%) attended baseline and 6-month follow-up case management visits and, in these individuals, statistically significant reductions in left arm systolic blood pressure (−8.0±12.4 mmHg; p=0.03) and triglyceride (−0.8±1.4 mmol/L; p=0.04) levels occurred after case management (Table 2). Of 5 smokers who agreed to case management, 4 did not complete case management and the remaining one quit smoking following case management. No substantial changes in BMI, A1c, or Framingham risk score were observed. Discussion: In summary, the results of this pilot project confirm that worksite-based cardiovascular risk factor screening and management is both feasible and useful. In the screening phase of the program, >40% of the workers receiving screening were identified as having a moderate-to-high cardiovascular risk level. In the workers who received case management, improvements in cardiovascular risk factors, particularly blood pressure and triglycerides, were demonstrated. The most novel aspect of this collaborative project is that it involved embedding AHS and community clinicians within the worksite to deliver care. This approach moves beyond the traditional “worksite wellness” program model that focuses on educational interventions alone. 
Traditional approaches have unsurprisingly had limited effectiveness because they do not involve use of the most effective and proven aspects of cardiovascular risk factor control, prescription of antihypertensive drugs and statins.12–14 Embedding clinicians in the worksite greatly increases convenience for workers and also obviates the need for them to take time off to attend clinic appointments. Indeed, this type of care model has the potential to reach Canadians who do not typically seek medical care and who are less likely to be aware of having uncontrolled cardiovascular risk, such as younger individuals and males.15 Despite encouragement, only 15 subjects, or 42% of the high-risk patients, followed through with case management. This reflects known practical realities in engaging high-risk patients in their care and is highly likely to be found with replication of this project. Limitations: The major limitations of this study are the relatively small sample size, lack of controls, and short duration of follow-up. The lack of a control group precludes adjustment for temporal trends. In addition, an economic assessment was not performed to evaluate the cost-effectiveness of this strategy. These limitations indicate the need to confirm the robustness of these results in larger sample sizes and over more sustained follow-up periods. Conclusion: The results of this study verify that worksite-based cardiovascular risk factor screening and management is a viable strategy for identifying and reducing latent cardiovascular risk. These findings demonstrate how health service providers and employers can collaborate to improve the health of workers. Future work should focus on expanding this pilot project in a cost-effective manner.
Background: Established cardiovascular risk factors are highly prevalent and contribute substantially to cardiovascular morbidity and mortality because they remain uncontrolled in many Canadians. Worksite-based cardiovascular risk factor screening and management represent a largely untapped strategy for optimizing risk factor control. Methods: In a 2-phase collaborative demonstration project between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC), ANC employees were offered cardiovascular risk factor screening and management. Screening was performed at the worksite by AHS nurses, who collected baseline history, performed automated blood pressure measurement and point-of-care testing for lipids and A1c, and calculated 10-year Framingham risk. Employees with a Framingham risk score of ≥10% and uncontrolled blood pressure, dyslipidemia, or smoking were offered 6 months of pharmacist case management to optimize their risk factor control. Results: In total, 87 of 190 (46%) employees volunteered to undergo cardiovascular risk factor screening. Mean age was 44.5±11.9 years, 73 (83.9%) were male, 14 (16.1%) had hypertension, 4 (4.6%) had diabetes, 12 (13.8%) were current smokers, and 9 (10%) had dyslipidemia. Of 36 employees with an estimated Framingham risk score of ≥10%, 21 (58%) agreed to receive case management and 15 (42%) attended baseline and 6-month follow-up case management visits. Statistically significant reductions in left arm systolic blood pressure (-8.0±12.4 mmHg; p=0.03) and triglyceride levels (-0.8±1.4 mmol/L; p=0.04) occurred following case management. Conclusions: These findings demonstrate the feasibility and usefulness of collaborative, worksite-based cardiovascular risk factor screening and management. Expansion of this type of partnership in a cost-effective manner is warranted.
Background: Cardiovascular disease, which includes heart disease and stroke, is the second leading cause of mortality in Canada, accounting for 125,000 deaths or 35% of all fatalities in 2013.1 The 4 most important modifiable risk factors for cardiovascular disease are hypertension, dyslipidemia, diabetes, and smoking.2,3 These risk factors are highly prevalent in Canadians – 23% have hypertension, 45% have dyslipidemia, 9% have diabetes, and 18% smoke.4–6 Despite the widespread availability of proven efficacious treatments, cardiovascular risk factor control rates are unacceptably low. For example, 37% of Canadians with hypertension and 81% of Canadians with dyslipidemia are not controlled to target levels.4,7 These care-gaps reflect a lost opportunity to improve health, prevent premature death and disability, and reduce unnecessary health care expenditures. As traditional methods of controlling cardiovascular risk factors (largely consisting of patients seeing their health care provider to get screened and treated) have not fully optimized risk factor control, additional approaches are needed. Given that Canadian workers spend an average of 30 hours per week at work, worksite cardiovascular risk screening and management programs represent a promising option to improve risk factor detection and control in working-aged Canadians.8 In a recently performed cross-sectional survey of Canadian workers, 40% were unaware of having at least one uncontrolled cardiovascular risk factor.9 Further, individuals with uncontrolled risk were less likely than those with no risk factors to engage in healthy behaviors.9 This underscores the importance of identifying unrecognized risk in order that it can be appropriately managed. The purpose of this report is to detail the results of a worksite-based collaboration between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC). The primary objective of this demonstration project was to examine the feasibility and usefulness of using embedded health care providers to perform cardiovascular risk factor screening and management onsite in the workplace. Conclusion: The results of this study verify that worksite-based cardiovascular risk factor screening and management is a viable strategy for identifying and reducing latent cardiovascular risk. These findings demonstrate how health service providers and employers can collaborate to improve the health of workers. Future work should focus on expanding this pilot project in a cost-effective manner.
Background: Established cardiovascular risk factors are highly prevalent and contribute substantially to cardiovascular morbidity and mortality because they remain uncontrolled in many Canadians. Worksite-based cardiovascular risk factor screening and management represent a largely untapped strategy for optimizing risk factor control. Methods: In a 2-phase collaborative demonstration project between Alberta Health Services (AHS) and the Alberta Newsprint Company (ANC), ANC employees were offered cardiovascular risk factor screening and management. Screening was performed at the worksite by AHS nurses, who collected baseline history, performed automated blood pressure measurement and point-of-care testing for lipids and A1c, and calculated 10-year Framingham risk. Employees with a Framingham risk score of ≥10% and uncontrolled blood pressure, dyslipidemia, or smoking were offered 6 months of pharmacist case management to optimize their risk factor control. Results: In total, 87 of 190 (46%) employees volunteered to undergo cardiovascular risk factor screening. Mean age was 44.5±11.9 years, 73 (83.9%) were male, 14 (16.1%) had hypertension, 4 (4.6%) had diabetes, 12 (13.8%) were current smokers, and 9 (10%) had dyslipidemia. Of 36 employees with an estimated Framingham risk score of ≥10%, 21 (58%) agreed to receive case management and 15 (42%) attended baseline and 6-month follow-up case management visits. Statistically significant reductions in left arm systolic blood pressure (-8.0±12.4 mmHg; p=0.03) and triglyceride levels (-0.8±1.4 mmol/L; p=0.04) occurred following case management. Conclusions: These findings demonstrate the feasibility and usefulness of collaborative, worksite-based cardiovascular risk factor screening and management. Expansion of this type of partnership in a cost-effective manner is warranted.
3,368
341
[ 349, 105, 322, 257, 72, 81, 63 ]
10
[ "risk", "cardiovascular", "cardiovascular risk", "10", "management", "pressure", "blood pressure", "blood", "case", "screening" ]
[ "screening cardiovascular risk", "cardiovascular risk factor", "optimize cardiovascular risk", "cardiovascular risk screening", "controlling cardiovascular risk" ]
[CONTENT] blood pressure | dyslipidemia | smoking | pharmacist | worksite [SUMMARY]
[CONTENT] blood pressure | dyslipidemia | smoking | pharmacist | worksite [SUMMARY]
[CONTENT] blood pressure | dyslipidemia | smoking | pharmacist | worksite [SUMMARY]
[CONTENT] blood pressure | dyslipidemia | smoking | pharmacist | worksite [SUMMARY]
[CONTENT] blood pressure | dyslipidemia | smoking | pharmacist | worksite [SUMMARY]
[CONTENT] blood pressure | dyslipidemia | smoking | pharmacist | worksite [SUMMARY]
[CONTENT] Adult | Alberta | Antihypertensive Agents | Cardiovascular Diseases | Community Pharmacy Services | Delivery of Health Care, Integrated | Dyslipidemias | Feasibility Studies | Female | Humans | Hypertension | Hypolipidemic Agents | Male | Mass Screening | Middle Aged | Models, Organizational | Occupational Health Services | Organizational Objectives | Predictive Value of Tests | Risk Assessment | Risk Factors | Risk Reduction Behavior | Smoking | Smoking Cessation | Smoking Prevention | Time Factors | Treatment Outcome | Workplace [SUMMARY]
[CONTENT] Adult | Alberta | Antihypertensive Agents | Cardiovascular Diseases | Community Pharmacy Services | Delivery of Health Care, Integrated | Dyslipidemias | Feasibility Studies | Female | Humans | Hypertension | Hypolipidemic Agents | Male | Mass Screening | Middle Aged | Models, Organizational | Occupational Health Services | Organizational Objectives | Predictive Value of Tests | Risk Assessment | Risk Factors | Risk Reduction Behavior | Smoking | Smoking Cessation | Smoking Prevention | Time Factors | Treatment Outcome | Workplace [SUMMARY]
[CONTENT] Adult | Alberta | Antihypertensive Agents | Cardiovascular Diseases | Community Pharmacy Services | Delivery of Health Care, Integrated | Dyslipidemias | Feasibility Studies | Female | Humans | Hypertension | Hypolipidemic Agents | Male | Mass Screening | Middle Aged | Models, Organizational | Occupational Health Services | Organizational Objectives | Predictive Value of Tests | Risk Assessment | Risk Factors | Risk Reduction Behavior | Smoking | Smoking Cessation | Smoking Prevention | Time Factors | Treatment Outcome | Workplace [SUMMARY]
[CONTENT] Adult | Alberta | Antihypertensive Agents | Cardiovascular Diseases | Community Pharmacy Services | Delivery of Health Care, Integrated | Dyslipidemias | Feasibility Studies | Female | Humans | Hypertension | Hypolipidemic Agents | Male | Mass Screening | Middle Aged | Models, Organizational | Occupational Health Services | Organizational Objectives | Predictive Value of Tests | Risk Assessment | Risk Factors | Risk Reduction Behavior | Smoking | Smoking Cessation | Smoking Prevention | Time Factors | Treatment Outcome | Workplace [SUMMARY]
[CONTENT] Adult | Alberta | Antihypertensive Agents | Cardiovascular Diseases | Community Pharmacy Services | Delivery of Health Care, Integrated | Dyslipidemias | Feasibility Studies | Female | Humans | Hypertension | Hypolipidemic Agents | Male | Mass Screening | Middle Aged | Models, Organizational | Occupational Health Services | Organizational Objectives | Predictive Value of Tests | Risk Assessment | Risk Factors | Risk Reduction Behavior | Smoking | Smoking Cessation | Smoking Prevention | Time Factors | Treatment Outcome | Workplace [SUMMARY]
[CONTENT] Adult | Alberta | Antihypertensive Agents | Cardiovascular Diseases | Community Pharmacy Services | Delivery of Health Care, Integrated | Dyslipidemias | Feasibility Studies | Female | Humans | Hypertension | Hypolipidemic Agents | Male | Mass Screening | Middle Aged | Models, Organizational | Occupational Health Services | Organizational Objectives | Predictive Value of Tests | Risk Assessment | Risk Factors | Risk Reduction Behavior | Smoking | Smoking Cessation | Smoking Prevention | Time Factors | Treatment Outcome | Workplace [SUMMARY]
[CONTENT] screening cardiovascular risk | cardiovascular risk factor | optimize cardiovascular risk | cardiovascular risk screening | controlling cardiovascular risk [SUMMARY]
[CONTENT] screening cardiovascular risk | cardiovascular risk factor | optimize cardiovascular risk | cardiovascular risk screening | controlling cardiovascular risk [SUMMARY]
[CONTENT] screening cardiovascular risk | cardiovascular risk factor | optimize cardiovascular risk | cardiovascular risk screening | controlling cardiovascular risk [SUMMARY]
[CONTENT] screening cardiovascular risk | cardiovascular risk factor | optimize cardiovascular risk | cardiovascular risk screening | controlling cardiovascular risk [SUMMARY]
[CONTENT] screening cardiovascular risk | cardiovascular risk factor | optimize cardiovascular risk | cardiovascular risk screening | controlling cardiovascular risk [SUMMARY]
[CONTENT] screening cardiovascular risk | cardiovascular risk factor | optimize cardiovascular risk | cardiovascular risk screening | controlling cardiovascular risk [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | 10 | management | pressure | blood pressure | blood | case | screening [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | 10 | management | pressure | blood pressure | blood | case | screening [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | 10 | management | pressure | blood pressure | blood | case | screening [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | 10 | management | pressure | blood pressure | blood | case | screening [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | 10 | management | pressure | blood pressure | blood | case | screening [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | 10 | management | pressure | blood pressure | blood | case | screening [SUMMARY]
[CONTENT] risk | canadians | health | cardiovascular | factor | risk factor | care | disease | health care | factors [SUMMARY]
[CONTENT] risk | 10 | defined | blood pressure | pressure | blood | mmol | history | defined according | according [SUMMARY]
[CONTENT] case | case management | management | smokers | risk | agreed | male | table | risk score | score [SUMMARY]
[CONTENT] health | collaborate improve health | work focus expanding | health workers future | health workers | health service providers | health service provid | health service | risk findings | risk findings demonstrate [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | management | case | 10 | health | pressure | blood pressure | blood [SUMMARY]
[CONTENT] risk | cardiovascular | cardiovascular risk | management | case | 10 | health | pressure | blood pressure | blood [SUMMARY]
[CONTENT] Canadians ||| [SUMMARY]
[CONTENT] 2 | Alberta Health Services | AHS | the Alberta Newsprint Company | ANC | ANC ||| AHS | A1c | 10-year | Framingham ||| Framingham | 6 months [SUMMARY]
[CONTENT] 87 | 190 | 46% ||| 44.5±11.9 years | 73 | 83.9% | 14 | 16.1% | 4 | 4.6% | 12 | 13.8% | 9 | 10% ||| 36 | Framingham | 21 | 58% | 15 | 42% | 6-month ||| p=0.04 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Canadians ||| ||| 2 | Alberta Health Services | AHS | the Alberta Newsprint Company | ANC | ANC ||| AHS | A1c | 10-year | Framingham ||| Framingham | 6 months ||| ||| 87 | 190 | 46% ||| 44.5±11.9 years | 73 | 83.9% | 14 | 16.1% | 4 | 4.6% | 12 | 13.8% | 9 | 10% ||| 36 | Framingham | 21 | 58% | 15 | 42% | 6-month ||| p=0.04 ||| ||| [SUMMARY]
[CONTENT] Canadians ||| ||| 2 | Alberta Health Services | AHS | the Alberta Newsprint Company | ANC | ANC ||| AHS | A1c | 10-year | Framingham ||| Framingham | 6 months ||| ||| 87 | 190 | 46% ||| 44.5±11.9 years | 73 | 83.9% | 14 | 16.1% | 4 | 4.6% | 12 | 13.8% | 9 | 10% ||| 36 | Framingham | 21 | 58% | 15 | 42% | 6-month ||| p=0.04 ||| ||| [SUMMARY]
Prevalence and Risk Factors of High Blood Pressure among Adults in Banyuwangi Coastal Communities, Indonesia.
33883839
Hypertension is a disease that is still a problem worldwide. Hypertension is a risk factor for heart disease and stroke mortality. Economic development and an emphasis on coastal tourism may have an impact on public health conditions, such as hypertension. This study aimed to determine risk factors related to hypertension among adults in coastal communities in Indonesia.
BACKGROUND
This was a cross-sectional study of 123 respondents aged 18-59 years selected by cluster sampling. This study was conducted among coastal communities in Banyuwangi District, East Java, Indonesia. Data were analyzed using multivariable logistic regression.
METHODS
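Cluster sampling, as named in this methods abstract, can be made concrete with a short sketch. The two-stage draw below (villages as clusters, then respondents within the selected villages) mirrors the design spelled out later in the Methods section; the village labels, roster sizes, and within-village sample size are placeholders, not the study's actual sampling frame.

```python
# Two-stage cluster sampling sketch: draw clusters (villages) first, then
# respondents within each selected cluster. All names and counts here are
# illustrative placeholders.
import random

random.seed(2016)  # fixed seed so the illustration is reproducible

coastal_villages = [f"village_{i:02d}" for i in range(1, 23)]  # 22 coastal villages
selected_villages = random.sample(coastal_villages, k=5)       # stage 1: pick 5 clusters

# Stage 2: sample respondents within each selected village (placeholder rosters).
rosters = {v: [f"{v}_resident_{j:03d}" for j in range(1, 61)] for v in selected_villages}
respondents = {v: random.sample(rosters[v], k=30) for v in selected_villages}

print(selected_villages)
print(sum(len(r) for r in respondents.values()), "respondents sampled")
```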
Our study showed that the prevalence of systolic and diastolic hypertension among residents of coastal communities was as high as 33.33% and 31.71%, respectively. Increasing age was associated with systolic and diastolic hypertension (ORsystolic=1.11; 95% CI=1.03-1.19, p=0.01 and ORdiastolic=1.07; 95% CI=1.01-1.15, p=0.03) after controlling for other variables. Respondents with the poorest and richer socio-economic status had higher odds of having systolic and diastolic hypertension compared to respondents with the richest socio-economic status (ORsystolic-poorest=12.78; 95% CI=1.61-101.54, p=0.02; ORsystolic-richer=10.74; 95% CI=1.55-74.37, p=0.02 and ORdiastolic-poorest=10.36; 95% CI=1.40-76.74, p=0.02; ORdiastolic-richer=6.45; 95% CI=1.01-41.43, p=0.05) after controlling for other variables.
RESULTS
Older age and lower socioeconomic status are significantly associated with increased risk of systolic and diastolic hypertension in these coastal communities. More studies need to be done in these and other coastal villages to help design appropriate health promotion and counseling strategies for coastal communities.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Blood Pressure", "Cross-Sectional Studies", "Humans", "Hypertension", "Indonesia", "Middle Aged", "Prevalence", "Risk Factors", "Young Adult" ]
8047239
Introduction
Globally, cardiovascular disease accounts for approximately 17 million deaths per year, of which 9.4 million deaths were due to complications of hypertension (1). Hypertension is a risk factor for heart disease and stroke that is responsible for at least 45% of heart disease mortality and 51% of stroke mortality (1). The Indonesian Basic Health Research in 2018 reported that the prevalence of hypertension in Indonesia was 34.1% of the total adult population, and in East Java Province, it was 13.5% (2). However, Banyuwangi, one of the districts in East Java Province, had a higher prevalence of 33.3% in 2016 (3). Previous studies showed that the prevalence of hypertension tends to be higher in women, in urban areas, in those with low education, and in those who are unemployed (4–6). Other studies found a tendency toward a high incidence of hypertension in coastal communities. The high prevalence of hypertension among coastal communities has previously been attributed to high dietary salt consumed from salted dry fish, a staple diet high in sodium and cholesterol (6–8). The coastal communities of Banyuwangi Regency are growing rapidly, driven by economic development centered on coastal tourism. This rapid pace of development may have a significant impact on health conditions such as obesity, diabetes, and hypertension in the community. As has happened in other regions of Indonesia and Asia, the prevalence of obesity, hypertension, and diabetes mellitus increases as the economy grows. The changing diet, driven by the need to procure food for tourists, may also increase the prevalence of metabolic diseases such as hypertension in Banyuwangi. Hence, this study aimed to determine the prevalence of hypertension and the factors related to hypertension among the adult population living in coastal communities of Banyuwangi District, East Java, Indonesia.
Methods
Study design: This was a cross-sectional study conducted from September to November 2016 in 5 coastal communities in Banyuwangi District (Kampung Mandar, Ketapang, Grajagan, Bangsring, Buluagung), East Java, Indonesia. Ethical permission was obtained from the Ethical Committee of the Faculty of Public Health, Universitas Airlangga in Surabaya. Population and sampling: The study population comprised communities living in 5 coastal villages, aged between 18–59 years. We used cluster sampling methods with villages as clusters. There were 22 coastal villages among the 52 villages in Banyuwangi District, and we randomly selected 5 of these 22 coastal villages as study locations. After the clusters were selected, all mothers from the Family Welfare Development Group and their husbands in each village were randomly selected (9). There were 156 respondents invited into the study, and 151 agreed to participate (response rate=96.8%). This sample size was sufficient to detect a 44% difference in the proportion of determinants of increased systolic and diastolic blood pressure with a confidence level of 95%, power of 90%, a design effect of 2, and an allowance of 25% for refusal. Data collection: The data collected were primary data. Structured questionnaires were completed in the village office after consent was obtained from the respondents. Data collection and measurement of waist circumference were conducted by trained data collectors consisting of public health students. The data collectors were trained by the researchers in interview technique, questionnaire administration, and measurement technique. They then underwent field testing in which the acceptability of the questionnaires and the data collectors' skills were assessed and errors were corrected. Waist circumference was measured at the respondents' navel using a Medline non-stretchable measuring tape. Waist circumference was used to determine abdominal obesity. Abdominal obesity was classified as yes (waist circumference ≥90 cm in men and ≥80 cm in women) or no (waist circumference <90 cm in men and <80 cm in women) (10). Blood pressure measurements were taken by an experienced nurse. Systolic and diastolic blood pressure were recorded as the average of two measurements using a blood pressure monitor (Omron Hem-7130, Omron Healthcare Co., Japan) while respondents were sitting at constant ambient temperature. Weight was measured using the same digital scale (Seca 869, Seca Asia Pacific), and height was measured with a stature meter (Seca 213, Seca Asia Pacific). Systolic blood pressure was divided into ≥140 mmHg (high) and <140 mmHg (low) (11, 12). Diastolic blood pressure was divided into ≥90 mmHg (high) and <90 mmHg (low) (11, 12). All tools were calibrated prior to testing. The questionnaires were tested for validity and reliability in Kepatihan village, Banyuwangi Sub-district, Banyuwangi, on 1 October 2016, and the test showed good validity and reliability with a Cronbach's alpha of 0.72. Data analysis: We analyzed data from respondents aged 18–59 years. Covariates tested were age (13,14), sex (14,15), education level (16), occupation (16), abdominal obesity (16), Body Mass Index (BMI) (14,15,17,18), socio-economic status (15,19,20), mental-emotional status (21), family health history (14), smoking status (16), family member smoking status, ethnic group (17,22) and location (23). Age was defined as the respondent's age at their last birthday at the time of the study. Sex was divided into male and female. 
The level of education consisted of lower education (no schooling, incomplete primary school, and primary school), middle-level education (high school), and higher education (graduated from high school and college). Occupation was categorized as currently working or not; socioeconomic status was categorized into equally distributed quintiles (poorest, poorer, middle, richer, and richest) from a wealth index derived from Principal Component Analysis of household ownership of a radio, goats, chickens, and rice fields (24). Smoking habits for both respondents and members of their household were divided into smoking and not smoking. Respondents were considered to have a family health history if a member of their family had a history of one of the following diseases: diabetes mellitus, seizures, obesity, heart disease, recurrent headache, stroke, or high blood pressure. Mental-emotional condition was measured using the Self-Reporting Questionnaire (SRQ) consisting of 20 questions (25). Respondents scoring ≥6 were considered to have a high score and thus a mental-emotional problem. Respondents were considered obese if their waist circumference was ≥90 cm for males or ≥80 cm for females (26,27). Ethnicity consisted of Javanese, Madura, and others (Osing, Bali, Sasak, and Bugis). Statistical analysis: Data were analyzed using multivariable logistic regression in STATA 14. Covariate variables associated with systolic and diastolic blood pressure at a p-value of <0.25 were included in the initial multivariable analysis, to allow for the possibility that covariates insignificant in the univariable analysis might become significant when adjusted for other variables. A backward method was used to select the variables retained in the final model. Confounding was assessed by re-entering covariate variables into the model one by one, starting from the variable with the greatest p-value. If the difference in the Odds Ratio (OR) of the factors before and after the covariate was included was greater than 10%, the variable was declared a confounder and retained in the model. Ethical clearance: The study was approved by the Ethics Committee, Universitas Airlangga, Indonesia, with decision letter number 512-KEPK.
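The model-building procedure just described (a univariable screen at p < 0.25, backward elimination, then a 10% change-in-estimate check for confounding) was run in STATA 14, which the text names but does not show. As a hedged illustration only, the same logic could be re-expressed in Python with statsmodels as below; the data frame, column names, and the pre-sorting of candidates by descending p-value are assumptions, not the study's actual code.

```python
# Hedged re-expression of the study's confounding assessment (the actual
# analysis used STATA 14). The data frame and column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fitted_or(df: pd.DataFrame, outcome: str, exposure: str, covars: list) -> float:
    """Fit a logistic model and return the odds ratio for the exposure."""
    X = sm.add_constant(df[[exposure] + covars])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    return float(np.exp(fit.params[exposure]))

def change_in_estimate(df, outcome, exposure, kept, dropped, threshold=0.10):
    """Re-enter covariates dropped by backward elimination one by one
    (assumed pre-sorted by descending p-value, as the text describes) and
    retain any that shift the exposure odds ratio by more than 10%."""
    base = fitted_or(df, outcome, exposure, kept)
    for cov in dropped:
        new = fitted_or(df, outcome, exposure, kept + [cov])
        if abs(new - base) / base > threshold:
            kept, base = kept + [cov], new  # declared a confounder; kept in model
    return kept
```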
Results
Of the 151 participants who provided data, we excluded data from 16 respondents who were older than 59 years. Of the 135 remaining observations, 14 were excluded from analysis due to incomplete information (2 observations had missing outcome data and 12 had missing covariate information). This resulted in 123 (81.46%) observations ready for analysis. The mean age of respondents was 41.82 ±9.08 years, and the mean BMI was 27.28 ±6.67 kg/m2. There were 33.33% of respondents who had systolic hypertension and 31.71% who had diastolic hypertension. The majority of the respondents were female (69.92%), had higher education (54.47%), were employed (65.85%), and were married (97.56%). There were 96 respondents (78.05%) with abdominal obesity, and 72.36% had lower mental-emotional status. In addition, 47.97% of the respondents had at least one family member who smoked, while 83.74% of respondents did not smoke. The majority of the respondents belonged to Javanese ethnicity (63.41%) (Table 1). [Table 1: characteristics of the study respondents.] Based on multivariable analysis, the factors associated with both systolic and diastolic hypertension were age and socio-economic status after adjustment for abdominal obesity, family history, and location (Tables 2 and 3). The odds of systolic hypertension increased by a factor of 1.11 for every one-year increase in age after controlling for other variables (Table 2: OR=1.11; 95% CI=1.03–1.19, p=0.01), while the odds of diastolic hypertension increased by a factor of 1.07 for every one-year increase in age after controlling for other variables (Table 3: OR=1.07; 95% CI=1.01–1.15, p=0.03). Respondents in the poorest socio-economic quintile had 12.78 times greater odds of systolic hypertension compared to the richest quintile after controlling for other variables (Table 2: ORsystolic=12.78; 95% CI=1.61–101.54, p=0.02) and 10.36 times greater odds of diastolic hypertension (Table 3: ORdiastolic=10.36; 95% CI=1.40–76.74, p=0.02). In addition, respondents in the richer socio-economic quintile had 10.74 times greater odds of systolic hypertension compared to the richest quintile after controlling for other variables (Table 2: ORsystolic=10.74; 95% CI=1.55–74.37, p=0.02), and 6.45 times greater odds of diastolic hypertension (Table 3: ORdiastolic=6.45; 95% CI=1.01–41.43, p=0.05). [Tables 2 and 3: factors related to systolic and diastolic blood pressure, with odds ratios and adjusted odds ratios.]
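One reading aid for the per-year odds ratios above: on the log-odds scale they compound multiplicatively, so the implied odds ratio for a ten-year age difference is the per-year value raised to the tenth power. The decade figures in the snippet below are a back-of-envelope illustration, not numbers reported by the study.

```python
# Per-year odds ratios compound multiplicatively over larger age differences.
or_systolic_per_year = 1.11
or_diastolic_per_year = 1.07
print(round(or_systolic_per_year ** 10, 2))   # ~2.84 per decade of age
print(round(or_diastolic_per_year ** 10, 2))  # ~1.97 per decade of age
```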
null
null
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion" ]
[ "Globally, cardiovascular disease accounts for approximately 17 million deaths per year, of which 9.4 million deaths were due to complications of hypertension (1). Hypertension is a risk factor for heart disease and stroke that is responsible for at least 45% of heart disease mortality, and 51% of stroke mortality (1). The Indonesian Basic Health Research in 2018 reported that the prevalence of hypertension in Indonesia was 34.1% of the total adult population, and in East Java Province, it was 13.5% (2). However, Banyuwangi, one of the districts in East Java Province, had a higher prevalence of 33.3% in 2016 (3).\nPrevious studies showed that the prevalence of hypertension tends to be higher in women, in urban area, in those with low education and in those who are unemployed (4–6). Other studies found that there was a tendency of high incidence of hypertension in coastal communities. The high prevalence of hypertension among coastal communities has previously been suggested to be due to high dietary salt consumed from salted dry fish, a staple diet high in sodium and cholesterol (6–8).\nThe coastal communities of Banyuwangi Regency are growing rapidly driven by economic development centered on coastal tourism. This rapid pace of development may have a significant impact on the health conditions such as obesity, diabetes and hypertension in the community. As happened in other regions in Indonesia and Asia, the prevalence of obesity, hypertension and diabetes mellitus increases with the increase in the economy. The changing diet due to the need to procure food for tourists may also cause an increase in the prevalence of metabolic diseases such as hypertension in Banyuwangi. Hence, this study aimed to determine the prevalence of hypertension and the factors related to hypertension among adult population living in coastal communities of Banyuwangi District, East Java, Indonesia.", "Study design: This is a cross-sectional study conducted from September to November 2016 in 5 coastal communities in Banyuwangi District (Kampung Mandar, Ketapang, Grajagan, Bangsring, Buluagung), East Java, Indonesia. Ethical permission was obtained from the Ethical Committee of the Faculty of Public Health, Universitas Airlangga in Surabaya.\nPopulation and sampling: The study population was communities living in 5 coastal villages, aged between 18–59 years. We used cluster sampling methods with villages as clusters. There were 22 coastal villages of the 52 villages in Banyuwangi District. Then, we selected 5 of the 22 coastal villages randomly for study location. After the cluster was selected, all mothers from the Family Welfare Development Group and their husbands in the village were randomly selected (9). There were 156 respondents invited into the study, and 151 respondents agreed to participate (response rate=96.8%). This sample size was sufficient to detect a 44% difference in the proportion of determinants of increased systolic and diastolic blood pressure with the confidence interval of 95%, power of 90%, design effect of 2 and the possibility of rejection of 25%.\nData collection: The data collected were primary data. Structured questionnaires were filled in the village office after obtaining consent from the respondents. Data collection and measurement of waist circumference were conducted by trained data collectors consisting of public health students. The data collectors were trained by the researchers on interview technique, questionnaires administration and measurement technique. 
They then underwent a field testing in which the acceptability of the questionnaires and data collectors' skills were assessed and errors were corrected. Waist circumference was measured at respondents' navel using Medline non-stretchable measuring tape. Waist circumference was used to determine abdominal obesity. Abdominal obesity was divided into yes (waist circumference ≥90 cm in men and ≥80 cm in women) and no (waist circumference <90 cm in men and <80 cm in women) (10). Blood pressure measurements were taken by an experienced nurse. Systolic and diastolic blood pressure were recorded as the average of two measurements using blood pressure monitor (Omron Hem-7130, Omron Healthcare Co., Japan) while respondents were sitting in constant ambient temperature. Weight was measured using the same digital scale (Seca 869, Seca Asia Pacific), and height was measured by stature meter (Seca 213, Seca Asia Pacific). Systolic blood pressure was divided into ≥140 mmHg (high) and < 140 mmHg (low) (11, 12). Diastolic blood pressure was divided into ≥90 mmHg (high) and < 90 mmHg (low) (11,12). All tools were calibrated prior to testing. The questionnaires were tested for validity and reliability in Kepatihan village, Banyuwangi Sub-district, Banyuwangi, on 1 October 2016, and the test resulted in good validity and reliability with Cronbach alpha of 0.72.\nData analysis: We analyzed data from respondents aged 18–59 years old. Covariates tested were age (13,14), sex (14,15), education level (16), occupation (16), abdominal obesity (16), Body Mass Index (BMI) (14,15,17,18), socio-economic status (15,19,20), mental-emotional status (21), family health history (14), smoking status (16), family member smoking status, ethnic group (17,22) and location (23). Age was defined as the last anniversary of the respondent at the time of the study. Sex was divided into male and female. The level of education consisted of lower education (no schooling, no primary school and primary school), middle level education (high school) and higher education (graduated from high school and college). Occupation level was categorized as currently working or not; socioeconomic status was categorized into equally distributed quintiles (poorest, poorer, middle, richer and richest) from wealth index derived from Principal Component Analysis of household ownership of radio, goat, chicken and rice field (24). Smoking habits for both respondents and members of their household were divided into smoking and not smoking. Respondents were considered to have family health history if a member of their family had a history of one of the following diseases: diabetes mellitus, seizures, obesity, heart disease, recurrent headache, stroke and high blood pressure. Mental-emotional condition was measured using Self-Reporting Questionnaire (SRQ) consisting of 20 questions (25). Respondents scoring ≥ 6 were considered to have high score and thus having mental emotional problem. Respondents were considered obese if their waist circumferences were ≥90 cm for males and ≥80 cm for females (26,27). Ethnicity consisted of Javanese, Madura and others (Osing, Bali, Sasak and Bugis).\nStatistical analysis: Data were analyzed using multivariable logistic regression in STATA 14. 
Covariate variables that had a relationship with systolic and diastolic blood pressure with p-value of <0.25 were included in the initial multivariable analysis to allow for a possibility that insignificant covariates in the univariable analysis might become significant when adjusted by other variables. Backward method was used to select variables to be retained in the final model. Confounding assessment was done by reentering covariate variables into the model one by one, starting from variables that have the greatest p-value. If the difference in Odds Ratio (OR) of the factors between before and after the covariate was included was greater than 10%, the variable was declared confounding and must remain in the model.\nEthical clearance: The study was approved by Ethics Committee, Universitas Airlangga, Indonesia with a decision letter numbered 512-KEPK.", "Of the 151 participants who provided data, we excluded data from 16 respondents who were older than 59 years. Of the 135 remaining observations, 14 were excluded from analysis due to incomplete information (2 observations had missing outcome data and 12 observations had missing covariate information). This resulted in 123(81.46%) observations ready for analysis.\nThe mean age of respondents was 41.82 ±9.08 years, and the mean BMI was 27.28 ±6.67 kg/m2. There were 33.33% of respondents who had systolic hypertension and 31.71% who had diastolic hypertension. The majority of the respondents were females (69.92%), had higher education (54.47%), were employed (65.85%), and married (97.56%). There were 96 respondents (78.05%) with abdominal obesity, and 72.36% had lower mental-emotional status. In addition, 47.97% of the respondents had at least one family member who smoked while 83.74% of respondents did not smoke. The majority of the respondents belonged to Javanese ethnicity (63.41%) (Table 1).\nCharacteristics of respondents of the study\nBased on multivariable analysis, factors associated with both systolic and diastolic hypertension were age and socio-economic status after adjustment for abdominal obesity, family history and location variables (Tables 2 and 3). The odds of getting systolic blood hypertension increased 1.11 for every one-year increase in age after controlling for other variables (Table 2: OR= 1.11; 95% CI = 1.03–1.19, p=0.01), while the odds of getting diastolic hypertension increased 1.07 for every one-year increase in age after controlling for other variables (Table 3: OR= 1.07; 95% CI = 1.01–1.15, p=0.03). Respondents belonging to the poorest socio-economic status had 12.78 times greater odds of getting systolic hypertension compared to the richest socio-economic status after controlling for other variables (Table 2:ORsystolic= 12.78; 95% CI = 1.61–101.54, p=0.02). Respondents whose socio-economic status was the poorest had nearly 10.36 times greater odds for obtaining diastolic hypertension compared to respondents from the richest quintile after controlling for other variables (Table 3: ORdiastolic= 10.36; 95% CI = 1.40–76.74, p=0.02). In addition, respondents belonging to the richer socio-economic status had 10.74 times greater odds of getting systolic hypertension compared to the richest socio-economic status after controlling for other variables (Table 2: ORsystolic= 10.74; 95% CI = 1.55–74.37, p=0.02). 
Respondents in the richer socio-economic quintile had 6.45 times greater odds of having diastolic hypertension compared to respondents from the richest quintile after controlling for other variables (Table 3: ORdiastolic= 6.45; 95% CI = 1.01–41.43, p=0.05).\n[Tables 2 and 3: factors related to systolic and diastolic blood pressure, with odds ratios and adjusted odds ratios.]", "Our study found that the prevalence of systolic and diastolic hypertension among coastal communities in Banyuwangi District was high at 33.33% and 31.71%, respectively. The final model of multiple logistic regression showed that older age and lower socioeconomic status were the determinants of higher systolic and diastolic blood pressure levels after controlling for other variables. Older people had greater odds of having systolic and diastolic hypertension compared to younger ones, and those of lower socioeconomic status had greater odds compared to those of the highest socioeconomic status.\nOur results on coastal communities reflect similar findings from another study conducted in a rural community in Indonesia, which showed that older people had a greater risk of hypertension compared to younger ones (13). That study found that people aged 40 years or older had a greater risk of developing hypertension compared to those aged 17–39 years, and the risk was most prominent in the 55–59 years age group (13). Other studies also reported an increase in the prevalence of hypertension as age increased (4, 28–30). Due to structural changes that come with aging, the arterial wall loses its flexibility and becomes stiffer; consequently, systolic and diastolic blood pressure increase due to reduced pulsatility of the arterial wall (31).\nResearch conducted in low- and middle-income countries found that higher incomes, household assets, or social class were positively associated with hypertension in South Asia, but in East Asia and Africa, no associations were detected (20). Our results are also in line with other studies showing that lower socioeconomic status was associated with high blood pressure (15,32). Modifiable socio-economic factors, such as education and employment, were also associated with hypertension. This is in line with the fact that in the final stages of the epidemiological transition, the burden of chronic diseases, including hypertension, shifts from the higher socioeconomic groups to the lower socioeconomic groups (15,33). This may be because awareness of prevention and disease control was better in groups with higher socioeconomic status. In addition, people of higher socioeconomic status had better access to healthcare.\nThe strength of this study was that we assessed various determinants, including demographic factors, socioeconomic factors, individual lifestyles, smoking within the family, ethnicity, and family health history. Studies regarding systolic and diastolic hypertension, specifically those focusing on coastal areas, are limited. Hence, this study will add to the currently limited pool of data on the topic. Together with the results for increased blood glucose level (34), this study can provide information on metabolic syndrome in coastal communities. The coastal communities in our study locations comprised mostly people of Javanese and Maduranese ethnic groups. Our results may be generalizable to other coastal communities in Indonesia with a similar ethnic profile. 
However, further studies are needed to cover other coastal communities with other ethnic profiles.\nA limitation of this study is its cross-sectional design, which does not permit causal inference about the relationship between the risk factors and the outcome (temporal ambiguity). Another possible limitation is the small sample size, although the proportion of observations ready for analysis (81.46%) is considered good. In addition, waist circumference was measured at the respondents' navel, which might have underestimated the true waist circumference in this population (35).\nThe prevalence of hypertension among coastal communities in Banyuwangi was high. Factors related to systolic and diastolic hypertension in these coastal communities were older age and lower socioeconomic status. Our findings imply the need to promote healthy lifestyles that would reduce the risk of hypertension, such as a healthy diet and increased physical activity, among coastal communities. There is currently a national program in Indonesia for the management of chronic disease, called Prolanis, which helps monitor and ensure continuous treatment for chronic disease at community health centres. The government of Indonesia can optimize the program by integrating it with the hamlet-level health posts widely available in Indonesia for maternal and child health services. Hamlet-level services will improve access to health care and health information for the elderly, especially those from low-income households. In addition, policies to ensure adequate access to healthy food, clean water, and energy need to be improved by the government, especially for older populations and those in lower socioeconomic conditions." ]
[ "intro", "methods", "results", "discussion" ]
[ "Coastal community", "Diastolic blood pressure", "Systolic blood pressure", "Cardiovascular disease" ]
Introduction: Globally, cardiovascular disease accounts for approximately 17 million deaths per year, of which 9.4 million deaths were due to complications of hypertension (1). Hypertension is a risk factor for heart disease and stroke that is responsible for at least 45% of heart disease mortality, and 51% of stroke mortality (1). The Indonesian Basic Health Research in 2018 reported that the prevalence of hypertension in Indonesia was 34.1% of the total adult population, and in East Java Province, it was 13.5% (2). However, Banyuwangi, one of the districts in East Java Province, had a higher prevalence of 33.3% in 2016 (3). Previous studies showed that the prevalence of hypertension tends to be higher in women, in urban area, in those with low education and in those who are unemployed (4–6). Other studies found that there was a tendency of high incidence of hypertension in coastal communities. The high prevalence of hypertension among coastal communities has previously been suggested to be due to high dietary salt consumed from salted dry fish, a staple diet high in sodium and cholesterol (6–8). The coastal communities of Banyuwangi Regency are growing rapidly driven by economic development centered on coastal tourism. This rapid pace of development may have a significant impact on the health conditions such as obesity, diabetes and hypertension in the community. As happened in other regions in Indonesia and Asia, the prevalence of obesity, hypertension and diabetes mellitus increases with the increase in the economy. The changing diet due to the need to procure food for tourists may also cause an increase in the prevalence of metabolic diseases such as hypertension in Banyuwangi. Hence, this study aimed to determine the prevalence of hypertension and the factors related to hypertension among adult population living in coastal communities of Banyuwangi District, East Java, Indonesia. Methods: Study design: This is a cross-sectional study conducted from September to November 2016 in 5 coastal communities in Banyuwangi District (Kampung Mandar, Ketapang, Grajagan, Bangsring, Buluagung), East Java, Indonesia. Ethical permission was obtained from the Ethical Committee of the Faculty of Public Health, Universitas Airlangga in Surabaya. Population and sampling: The study population was communities living in 5 coastal villages, aged between 18–59 years. We used cluster sampling methods with villages as clusters. There were 22 coastal villages of the 52 villages in Banyuwangi District. Then, we selected 5 of the 22 coastal villages randomly for study location. After the cluster was selected, all mothers from the Family Welfare Development Group and their husbands in the village were randomly selected (9). There were 156 respondents invited into the study, and 151 respondents agreed to participate (response rate=96.8%). This sample size was sufficient to detect a 44% difference in the proportion of determinants of increased systolic and diastolic blood pressure with the confidence interval of 95%, power of 90%, design effect of 2 and the possibility of rejection of 25%. Data collection: The data collected were primary data. Structured questionnaires were filled in the village office after obtaining consent from the respondents. Data collection and measurement of waist circumference were conducted by trained data collectors consisting of public health students. The data collectors were trained by the researchers on interview technique, questionnaires administration and measurement technique. 
They then underwent a field testing in which the acceptability of the questionnaires and data collectors' skills were assessed and errors were corrected. Waist circumference was measured at respondents' navel using Medline non-stretchable measuring tape. Waist circumference was used to determine abdominal obesity. Abdominal obesity was divided into yes (waist circumference ≥90 cm in men and ≥80 cm in women) and no (waist circumference <90 cm in men and <80 cm in women) (10). Blood pressure measurements were taken by an experienced nurse. Systolic and diastolic blood pressure were recorded as the average of two measurements using blood pressure monitor (Omron Hem-7130, Omron Healthcare Co., Japan) while respondents were sitting in constant ambient temperature. Weight was measured using the same digital scale (Seca 869, Seca Asia Pacific), and height was measured by stature meter (Seca 213, Seca Asia Pacific). Systolic blood pressure was divided into ≥140 mmHg (high) and < 140 mmHg (low) (11, 12). Diastolic blood pressure was divided into ≥90 mmHg (high) and < 90 mmHg (low) (11,12). All tools were calibrated prior to testing. The questionnaires were tested for validity and reliability in Kepatihan village, Banyuwangi Sub-district, Banyuwangi, on 1 October 2016, and the test resulted in good validity and reliability with Cronbach alpha of 0.72. Data analysis: We analyzed data from respondents aged 18–59 years old. Covariates tested were age (13,14), sex (14,15), education level (16), occupation (16), abdominal obesity (16), Body Mass Index (BMI) (14,15,17,18), socio-economic status (15,19,20), mental-emotional status (21), family health history (14), smoking status (16), family member smoking status, ethnic group (17,22) and location (23). Age was defined as the last anniversary of the respondent at the time of the study. Sex was divided into male and female. The level of education consisted of lower education (no schooling, no primary school and primary school), middle level education (high school) and higher education (graduated from high school and college). Occupation level was categorized as currently working or not; socioeconomic status was categorized into equally distributed quintiles (poorest, poorer, middle, richer and richest) from wealth index derived from Principal Component Analysis of household ownership of radio, goat, chicken and rice field (24). Smoking habits for both respondents and members of their household were divided into smoking and not smoking. Respondents were considered to have family health history if a member of their family had a history of one of the following diseases: diabetes mellitus, seizures, obesity, heart disease, recurrent headache, stroke and high blood pressure. Mental-emotional condition was measured using Self-Reporting Questionnaire (SRQ) consisting of 20 questions (25). Respondents scoring ≥ 6 were considered to have high score and thus having mental emotional problem. Respondents were considered obese if their waist circumferences were ≥90 cm for males and ≥80 cm for females (26,27). Ethnicity consisted of Javanese, Madura and others (Osing, Bali, Sasak and Bugis). Statistical analysis: Data were analyzed using multivariable logistic regression in STATA 14. 
Covariate variables that had a relationship with systolic and diastolic blood pressure with p-value of <0.25 were included in the initial multivariable analysis to allow for a possibility that insignificant covariates in the univariable analysis might become significant when adjusted by other variables. Backward method was used to select variables to be retained in the final model. Confounding assessment was done by reentering covariate variables into the model one by one, starting from variables that have the greatest p-value. If the difference in Odds Ratio (OR) of the factors between before and after the covariate was included was greater than 10%, the variable was declared confounding and must remain in the model. Ethical clearance: The study was approved by Ethics Committee, Universitas Airlangga, Indonesia with a decision letter numbered 512-KEPK. Results: Of the 151 participants who provided data, we excluded data from 16 respondents who were older than 59 years. Of the 135 remaining observations, 14 were excluded from analysis due to incomplete information (2 observations had missing outcome data and 12 observations had missing covariate information). This resulted in 123(81.46%) observations ready for analysis. The mean age of respondents was 41.82 ±9.08 years, and the mean BMI was 27.28 ±6.67 kg/m2. There were 33.33% of respondents who had systolic hypertension and 31.71% who had diastolic hypertension. The majority of the respondents were females (69.92%), had higher education (54.47%), were employed (65.85%), and married (97.56%). There were 96 respondents (78.05%) with abdominal obesity, and 72.36% had lower mental-emotional status. In addition, 47.97% of the respondents had at least one family member who smoked while 83.74% of respondents did not smoke. The majority of the respondents belonged to Javanese ethnicity (63.41%) (Table 1). Characteristics of respondents of the study Based on multivariable analysis, factors associated with both systolic and diastolic hypertension were age and socio-economic status after adjustment for abdominal obesity, family history and location variables (Tables 2 and 3). The odds of getting systolic blood hypertension increased 1.11 for every one-year increase in age after controlling for other variables (Table 2: OR= 1.11; 95% CI = 1.03–1.19, p=0.01), while the odds of getting diastolic hypertension increased 1.07 for every one-year increase in age after controlling for other variables (Table 3: OR= 1.07; 95% CI = 1.01–1.15, p=0.03). Respondents belonging to the poorest socio-economic status had 12.78 times greater odds of getting systolic hypertension compared to the richest socio-economic status after controlling for other variables (Table 2:ORsystolic= 12.78; 95% CI = 1.61–101.54, p=0.02). Respondents whose socio-economic status was the poorest had nearly 10.36 times greater odds for obtaining diastolic hypertension compared to respondents from the richest quintile after controlling for other variables (Table 3: ORdiastolic= 10.36; 95% CI = 1.40–76.74, p=0.02). In addition, respondents belonging to the richer socio-economic status had 10.74 times greater odds of getting systolic hypertension compared to the richest socio-economic status after controlling for other variables (Table 2: ORsystolic= 10.74; 95% CI = 1.55–74.37, p=0.02). 
Respondents whose socioeconomic status was the richer had nearly 6.45 times greater odds for obtaining diastolic hypertension compared to respondents from the richest quintile after controlling for other variables (Table 3:ORdiastolic= 6.45; 95% CI = 1.01–41.43, p=0.05). Factors related to systolic blood pressure Odds Ratio Adjusted Odds Ratio Factors related to diastolic blood pressure Odds Ratio Adjusted Odds Ratio Discussion: Our study found that the prevalence of systolic and diastolic hypertension among coastal communities in Banyuwangi District was high at 33.33% and 31.71%, respectively. The final model of multiple logistic regression showed that older age and lower socioeconomic status were the determinants of higher systolic and diastolic blood pressure levels after controlling for other variables. Older people had greater odds of having higher systolic and diastolic hypertension compared to younger ones. Those of the lower socioeconomic status had greater odds of having higher systolic and diastolic hypertension compared to those of the highest socioeconomic status. Our results on coastal communities reflect similar findings from another study conducted in rural community in Indonesia that showed people of older age had greater risk of hypertension compared to younger ones (13). This study found that people aged 40 years or older had greater risk of developing hypertension compared to those aged 17–39 years, and the risk was most prominent among the age group of 55–59 years (13). Other studies also reported an increase in prevalence of hypertension as age increased (4, 28–30). Due to structural changes that comes with aging, arterial wall loses its flexibility and becomes stiffer. Consequently, systolic and diastolic blood pressure increases due to reduce pulsatility of the arterial wall (31). A research conducted in low- and middle-income countries found that higher incomes, household assets or social class were positively associated with hypertension in South Asia, but in East Asia and Africa, no associations were detected (20). Our results are also in line with other studies showing that lower socioeconomic status was associated with high blood pressure (15,32). Modifiable socio-economic factors, such as education and employment, were also associated with hypertension. This is in line with the fact that the final stages of the epidemiological transition, the burden of chronic diseases including hypertension shifts from the higher socioeconomic groups to the lower socioeconomic groups (15,33). This may be because awareness of prevention and disease control was better in groups with higher socioeconomic status. In addition, people from the higher socioeconomic status had better access to healthcare. The strength of this study was that we assessed various determinants, including demographic factors, socioeconomic factors, individual lifestyles, smoking factors in the family, ethnicity and family health history. Studies regarding systolic and diastolic hypertension specifically those focusing on coastal areas are limited. Hence, this study will add to the current limited pool of data on the topic. Together with the results for increased blood glucose level (34), this study can provide information on metabolic syndrome in coastal communities. The coastal communities in our study locations comprised mostly people of Javanese and Maduranese ethnic groups. Our results may be generalized for other coastal communities in Indonesia with similar ethnic profile. 
However, further studies are needed to cover other coastal communities with other ethnic profiles. The limitation of this study is its cross-sectional design which does not permit assumption for causality of risk factors with outcome (temporal ambiguity). Another limitation perhaps is the small sample size, although only 81.46% observation ready for analysis is considered good. In addition, waist circumference was measured at respondents' navel which might have underestimated the true waist circumference in this population (35). The prevalence of hypertension among coastal communities in Banyuwangi was high. Factors related to systolic and diastolic hypertension in these coastal communities were older age and lower socioeconomic status. Our finding implies the need for promotion of healthy lifestyle that would reduce the risk of hypertension, such as healthy diet and improved physical activities among coastal communities. There is currently a national program in Indonesia for the management of chronic disease called Prolanis, which help monitor and ensure continuous treatment for chronic disease at the community health centres. The government of Indonesia can optimize the program by integrating it with hamlet level health posts available widely in Indonesia for maternal and child health services. Hamlet level service will improve health care and health information access of the elderly, especially those from low economic households. In addition, policies to ensure adequate access to healthy food, clean water and energy need to be improved by the government especially for older populations and lower socioeconomic conditions.
Background: Hypertension is a disease that is still a problem worldwide. Hypertension is a risk factor for heart disease and stroke mortality. Economic development and an emphasis on coastal tourism may have an impact on public health conditions, such as hypertension. This study aimed to determine risk factors related to hypertension among adults in coastal communities in Indonesia. Methods: This was a cross-sectional study of 123 respondents aged 18-59 years selected by cluster sampling. This study was conducted among coastal communities in Banyuwangi District, East Java, Indonesia. Data were analyzed using multivariable logistic regression. Results: Our study showed that the prevalence of systolic and diastolic hypertension among residents of coastal communities was as high as 33.33% and 31.71%, respectively. Increasing age was associated with systolic and diastolic hypertension (ORsystolic=1.11; 95% CI=1.03-1.19, p=0.01 and ORdiastolic=1.07; 95% CI=1.01-1.15, p=0.03) after controlling for other variables. Respondents with the poorest and richer socio-economic status had higher odds of having systolic and diastolic hypertension compared to respondents with the richest socio-economic status (ORsystolic-poorest=12.78; 95% CI=1.61-101.54, p=0.02; ORsystolic-richer=10.74; 95% CI=1.55-74.37, p=0.02 and ORdiastolic-poorest=10.36; 95% CI=1.40-76.74, p=0.02; ORdiastolic-richer=6.45; 95% CI=1.01-41.43, p=0.05) after controlling for other variables. Conclusions: Older age and lower socioeconomic status are significantly associated with increased risk of systolic and diastolic hypertension in these coastal communities. More studies need to be done in these and other coastal villages to help design appropriate health promotion and counseling strategies for coastal communities.
null
null
2,756
327
[]
4
[ "hypertension", "respondents", "status", "coastal", "diastolic", "systolic", "study", "blood", "communities", "data" ]
[ "diseases hypertension banyuwangi", "incidence hypertension coastal", "hypertension banyuwangi study", "hypertension coastal communities", "prevalence hypertension indonesia" ]
null
null
[CONTENT] Coastal community | Diastolic blood pressure | Systolic blood pressure | Cardiovascular disease [SUMMARY]
[CONTENT] Coastal community | Diastolic blood pressure | Systolic blood pressure | Cardiovascular disease [SUMMARY]
[CONTENT] Coastal community | Diastolic blood pressure | Systolic blood pressure | Cardiovascular disease [SUMMARY]
null
[CONTENT] Coastal community | Diastolic blood pressure | Systolic blood pressure | Cardiovascular disease [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Blood Pressure | Cross-Sectional Studies | Humans | Hypertension | Indonesia | Middle Aged | Prevalence | Risk Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Blood Pressure | Cross-Sectional Studies | Humans | Hypertension | Indonesia | Middle Aged | Prevalence | Risk Factors | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Blood Pressure | Cross-Sectional Studies | Humans | Hypertension | Indonesia | Middle Aged | Prevalence | Risk Factors | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Blood Pressure | Cross-Sectional Studies | Humans | Hypertension | Indonesia | Middle Aged | Prevalence | Risk Factors | Young Adult [SUMMARY]
null
[CONTENT] diseases hypertension banyuwangi | incidence hypertension coastal | hypertension banyuwangi study | hypertension coastal communities | prevalence hypertension indonesia [SUMMARY]
[CONTENT] diseases hypertension banyuwangi | incidence hypertension coastal | hypertension banyuwangi study | hypertension coastal communities | prevalence hypertension indonesia [SUMMARY]
[CONTENT] diseases hypertension banyuwangi | incidence hypertension coastal | hypertension banyuwangi study | hypertension coastal communities | prevalence hypertension indonesia [SUMMARY]
null
[CONTENT] diseases hypertension banyuwangi | incidence hypertension coastal | hypertension banyuwangi study | hypertension coastal communities | prevalence hypertension indonesia [SUMMARY]
null
[CONTENT] hypertension | respondents | status | coastal | diastolic | systolic | study | blood | communities | data [SUMMARY]
[CONTENT] hypertension | respondents | status | coastal | diastolic | systolic | study | blood | communities | data [SUMMARY]
[CONTENT] hypertension | respondents | status | coastal | diastolic | systolic | study | blood | communities | data [SUMMARY]
null
[CONTENT] hypertension | respondents | status | coastal | diastolic | systolic | study | blood | communities | data [SUMMARY]
null
[CONTENT] hypertension | prevalence | coastal | prevalence hypertension | high | banyuwangi | coastal communities | communities | java | east java [SUMMARY]
[CONTENT] respondents | data | cm | 90 | blood | pressure | blood pressure | divided | villages | waist [SUMMARY]
[CONTENT] respondents | table | odds | variables table | 95 ci | controlling variables table | ci | hypertension | status | 74 [SUMMARY]
null
[CONTENT] hypertension | respondents | coastal | status | prevalence | communities | coastal communities | diastolic | systolic | high [SUMMARY]
null
[CONTENT] ||| ||| ||| Indonesia [SUMMARY]
[CONTENT] 123 | between the age of 18-59 years old ||| Banyuwangi District | East Java | Indonesia ||| [SUMMARY]
[CONTENT] 33.33% | 31.71% ||| ORsystolic=1.11 | 95% | ORdiastolic=1.07 | 95% | CI=1.01-1.15 ||| 12.78 | 95% | CI=1.61-101.54 | ORsystolic-richer=10.74 | 95% | CI | 1.55 | ORdiastolic-poorest=10.36 | 95% | 1.40-76.74 | 95% | CI=1.01-41.43 [SUMMARY]
null
[CONTENT] ||| ||| ||| Indonesia ||| 123 | between the age of 18-59 years old ||| Banyuwangi District | East Java | Indonesia ||| ||| ||| 33.33% | 31.71% ||| ORsystolic=1.11 | 95% | ORdiastolic=1.07 | 95% | CI=1.01-1.15 ||| 12.78 | 95% | CI=1.61-101.54 | ORsystolic-richer=10.74 | 95% | CI | 1.55 | ORdiastolic-poorest=10.36 | 95% | 1.40-76.74 | 95% | CI=1.01-41.43 ||| ||| [SUMMARY]
null
Can SARS-CoV-2 vaccine increase the risk of reactivation of Varicella zoster? A systematic review.
34719084
Although the COVID-19 vaccination is deemed safe, the exact incidence and nature of adverse effects, particularly dermatological ones, are still unknown.
INTRODUCTION
We performed a systematic review of articles from PubMed and Embase using MeSH terms and keywords such as "Shingles," "Herpes zoster," "Varicella zoster," "COVID-19," "Vaccine," and "SARS-CoV-2." No filters, including country of publication, language, or type of article, were applied. The references of individual case reports were screened for any pertinent cases.
METHODS
A total of 54 cases, consisting of 27 male and 27 female patients, have been reported. There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). The mean (SD) interval between COVID-19 vaccination and the development of herpes zoster was 7.64 (6.92) days. The majority of the cases were from high-income and/or middle-income countries. 86.27% of the cases of HZ were reported after mRNA vaccination. Among those who received an mRNA vaccine, 36/45 (80%) developed herpes zoster following the priming dose of COVID-19 vaccine.
RESULTS
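The 36/45 (80%) figure in this results abstract is a raw proportion. As an illustration only, one might attach an exact (Clopper-Pearson) 95% confidence interval to it as sketched below; the review itself does not report such an interval.

```python
# Exact (Clopper-Pearson) 95% CI for the reported 36/45 proportion. This
# interval is an illustration, not a figure reported by the review.
from statsmodels.stats.proportion import proportion_confint

low, high = proportion_confint(count=36, nobs=45, alpha=0.05, method="beta")
print(f"36/45 = 80.0% (exact 95% CI {low:.1%} to {high:.1%})")
```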
We could not establish a definite link, but there may be a possible association between COVID-19 vaccination and shingles. Large-scale studies may help to clarify the cause-effect relationship.
CONCLUSION
[ "COVID-19", "COVID-19 Vaccines", "Chickenpox", "Female", "Herpes Zoster", "Herpes Zoster Vaccine", "Humans", "Male", "Middle Aged", "SARS-CoV-2" ]
8597588
INTRODUCTION
The World Health Organization (WHO) declared the coronavirus disease 2019 (COVID‐19) outbreak a pandemic on March 11, 2020, and the development of a safe and effective COVID‐19 vaccine quickly became a worldwide priority. As of July 2021, 27.6% of the world population had received at least one dose of a COVID‐19 vaccine. 1 In an attempt to prevent severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) transmission, non‐replicating viral vector vaccines, DNA‐based or RNA‐based vaccines, inactivated vaccines, and protein subunit recombinant vaccines have recently been developed. Adverse events after vaccination (AEAV) might be a coincidence or a direct result of the vaccine. To distinguish the two, a temporal relationship between vaccination and the AEAV is required, as well as a biological mechanism to explain the link. The leading COVID‐19 vaccine platforms are based on gene therapy. Even the long‐term consequences of older‐generation vaccines remain a matter of research for dermatologists; as a result, the long‐term consequences of gene‐therapy‐based vaccines remain essentially unknown. COVID‐19 vaccines have been reported to be associated with reactogenicity, with symptoms such as fever, fatigue, and headache frequently reported. This is due to the vaccines' intrinsic character, even though they have been shown to be safe in clinical trials. 2 , 3 Although the COVID‐19 vaccination is deemed safe, the exact incidence and nature of adverse effects, particularly dermatological ones, are still unknown. Clinical studies have reported that the most common cutaneous adverse effects are injection site reactions and pruritus, along with allergic reactions such as urticaria and widespread erythematous rash. 4 In the literature, various cutaneous conditions have been observed, including anecdotal cases of erythema multiforme with morbilliform rash, delayed‐type hypersensitivity reaction, bullous drug eruption, pernio/chilblains (eg, "COVID toes"), erythromelalgia, pityriasis‐rosea‐like exanthems, and herpes zoster (HZ). 5 , 6 , 7 , 8 Studies have reported development of herpes zoster due to SARS‐CoV‐2 infection either during disease progression or following recovery from the disease. In the context of the COVID‐19 pandemic, studies observed that COVID‐19‐associated lymphopenia, particularly of CD3+ and CD8+ lymphocytes, together with functional impairment of CD4+ T cells, might make a patient more vulnerable to development of herpes zoster. 9 While several case reports and case series on COVID‐19 vaccine‐induced HZ have been published, to our knowledge there is no comprehensive review on this subject. Given how crucial widespread vaccination is in curbing the COVID‐19 pandemic, the limited data on HZ after COVID‐19 vaccination, consisting of a handful of case reports and case series, prompted us to systematically review published cases of HZ and to describe the demographic, clinical, and morphological characteristics, outcomes, and timing of development of herpes zoster in relation to the various COVID‐19 vaccines.
METHOD
Search strategy: A systematic review was conducted because the scientific literature on HZ and COVID‐19 is sparse but quickly increasing. We searched PubMed and Embase following the “Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA)” guidelines, using MeSH terms and keywords such as “Shingles,” “Herpes zoster,” “Varicella zoster,” “COVID‐19,” “Vaccine,” and “SARS‐CoV‐2.” Boolean operators (“OR”; “AND”) were used to combine the search terms; a minimal sketch of such a query string is given after this section. No filters for country of publication, language, or article type were applied. The references of individual case reports were screened for any pertinent cases. The findings were saved in an EndNote library. The search strategy is shown in the PRISMA diagram (Figure 1, PRISMA flowchart of study selection). Articles published until August 14, 2021 were included. Study selection: We selected all available individual case reports and case series. During phase 1, two authors (H.D. and K.S.) independently assessed the abstracts, titles, and categories of the studies that met the requirements. Disagreements were resolved by consensus with two further authors (J.P. and M.G.). Based on the inclusion criteria, the second phase of the search involved assessing full‐text articles to identify items for data extraction. Data from each article were curated and summarized as age, gender, country of origin, symptoms and clinical characteristics of the lesion, vaccine type and dose, days until symptom onset post‐vaccination, location of the lesion, past medical history/comorbidities, medical intervention during hospitalization, and confirmatory test. Descriptive analysis and data collection were performed using Microsoft Excel software.
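As a reading aid only: the exact query strings are not reported, so the grouping below is an assumption based on the keywords listed in the Methods. This minimal Python sketch shows how the reported terms could be combined with the stated Boolean operators into a single PubMed‐style query; the helper name or_group and the term lists are illustrative.

# Illustrative sketch only: the authors did not publish their exact query,
# so the grouping of synonyms below is an assumption based on the keywords
# listed in the Methods ("Shingles", "Herpes zoster", "Varicella zoster",
# "COVID-19", "Vaccine", "SARS-CoV-2").

zoster_terms = ['"Shingles"', '"Herpes zoster"', '"Varicella zoster"']
covid_terms = ['"COVID-19"', '"SARS-CoV-2"']
vaccine_terms = ['"Vaccine"']

def or_group(terms):
    # Synonyms are OR-ed together and wrapped in parentheses.
    return "(" + " OR ".join(terms) + ")"

# Concept groups are then AND-ed with each other, as stated in the Methods.
query = " AND ".join([or_group(zoster_terms),
                      or_group(covid_terms),
                      or_group(vaccine_terms)])
print(query)
# ("Shingles" OR "Herpes zoster" OR "Varicella zoster") AND
# ("COVID-19" OR "SARS-CoV-2") AND ("Vaccine")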
RESULTS
The search yielded a total of 121 and 67 hits from Embase and PubMed, respectively. We identified 14 articles that met the study requirements.

Description of herpes zoster cases in patients with COVID‐19 vaccination

Demographic and comorbid conditions: The search yielded a total of 54 patients, 27 male and 27 female. Cases were reported worldwide, including the USA, 10 , 11 Lebanon, 12 Turkey, 13 , 14 , 15 Greece, 16 Italy, 17 India, 18 Israel, 19 Finland, 20 Taiwan, 21 Spain, 22 and Portugal. 23 There were cases with known risk factors for herpes zoster, including age over 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of the cases.

[Table 1. Conducted studies on COVID‐19 vaccine and varicella zoster. Only column fragments are recoverable from the flattened table: per‐case vaccine platform, dose number, and manufacturer (mRNA, 1st or 2nd dose, Pfizer or Moderna; inactivated vaccine, 2nd dose; non‐replicating viral vector, 1st dose, AstraZeneca) and comorbidities (systemic lupus erythematosus, hypertension, plaque‐type psoriasis, hemophilia A, psoriatic arthritis, antiphospholipid antibody syndrome).]

Period between diagnosis of herpes zoster (HZ) and COVID‐19 vaccination, vaccine component, and number of doses: The mean (SD) interval between herpes zoster and COVID‐19 vaccination was 7.64 (6.92) days (Figure 2); a minimal sketch of this descriptive computation follows this section. In most cases (52; 96.07%), HZ developed within 1–3 weeks of COVID‐19 vaccination, and in 29 cases (50.98%) it developed within the first week, irrespective of dose number. Only 1 case was reported more than one month after COVID‐19 vaccination. The majority of patients (45/54; 86.27%) received an mRNA vaccine, followed by inactivated COVID‐19 vaccine (5/54; 5.88%) and non‐replicating viral vector vaccine (4/51; 7.84%). Among mRNA vaccine recipients, 36/45 (80%) developed herpes zoster after the priming dose. All patients who received a non‐replicating viral vector vaccine developed herpes zoster after the 1st dose. (Figure 2. Period of herpes zoster development following COVID‐19 vaccination, n = 51.)

Clinical presentation and past history of varicella zoster (VZV) infection or vaccination: The majority of herpes zoster cases were diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54), 10 and skin biopsy (2/54). 17 , 18 Various dermatomal lesions were observed, involving the trunk, 10 hips/buttocks or inguinal region, 19 the eye (herpes zoster ophthalmicus, HZO), 10 , 19 upper limb, 10 thigh, 12 and abdomen and flank. 10 Three cases developed lymphadenopathy, including cervical and inguinal lymphadenopathy. 19 , 22 Postherpetic neuralgia was reported in 3 cases. 16 Prior VZV infection was reported in 13 cases, 10 , 19 , 20 and prior zoster vaccination in 5 cases. 10 , 11 , 19
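The onset‐interval summary above (mean 7.64 days, SD 6.92, n = 51) is a plain descriptive computation. The following is a minimal Python sketch of how such figures can be reproduced from a per‐case list of onset days; the list days_to_onset and its values are illustrative placeholders, not the actual extracted data.

# Minimal sketch of the descriptive onset-interval analysis described above.
# days_to_onset would hold, for each case, the days between vaccination and
# herpes zoster onset; these values are placeholders, not the extracted
# data (n = 51 in the review, reported mean 7.64, SD 6.92).
from statistics import mean, stdev

days_to_onset = [2, 5, 7, 10, 14, 21, 3, 1, 28]  # placeholder values

print(f"mean = {mean(days_to_onset):.2f} days")
print(f"SD   = {stdev(days_to_onset):.2f} days")  # sample standard deviation

# Share of cases arising within the first week / first three weeks.
n = len(days_to_onset)
print(f"within 1 week:  {sum(d <= 7 for d in days_to_onset) / n:.1%}")
print(f"within 3 weeks: {sum(d <= 21 for d in days_to_onset) / n:.1%}")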
CONCLUSION
Our study does not establish causality or a definite link but draws attention to a possible association between COVID‐19 vaccines and shingles. Large‐scale immunological, epidemiological, and clinical studies may help to clarify the cause‐effect relationship. Based on the criteria of a temporal connection with vaccination and a plausible biological link, HZ appears to be a “possible” but uncommon AEAV. Furthermore, these findings may be therapeutically relevant when deciding whether to use antivirals as temporary prophylaxis prior to immunization in individuals at greater risk of VZV reactivation following SARS‐CoV‐2 vaccination.
[ "INTRODUCTION", "Search strategy", "Study selection", "Description of Herpes zoster cases in patients with COVID‐19 vaccination", "Demographic and comorbid conditions", "Period between diagnosis of herpes zoster (HZ) and COVID‐19 vaccination, vaccine component, and its number of dose", "Clinical presentation and past history of varicella zoster (VZV) infection or vaccine", "Is there a medical or biological basis for an increased risk of COVID‐19 vaccine‐induced HZ?", "Is there temporal association of development of HZ and COVID‐19 vaccine?", "Interpretation of the most up‐to‐date clinical evidence", "What kind of clinical/epidemiological evidence is required to determine whether HZ cases have increased due to COVID‐19 vaccine?", "AUTHOR CONTRIBUTIONS", "DISCLAIMER", "Ethical Approval" ]
[ "The World Health Organization (WHO) announced the coronavirus disease 2019 (COVID‐19) outbreak a pandemic on March 11, 2020 and the development of a safe and effective COVID‐19 vaccine quickly became a worldwide priority. As of July 2021, 27.6% of the world population has received at least one dose of COVID‐19 vaccine.\n1\n In an attempt to prevent severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) viral transmission, non‐replicating viral vector vaccines, DNA‐based or RNA‐based vaccines, and inactivated vaccines, protein subunit recombinant vaccines have been developed recently. Adverse events after vaccination (AEAV) might be a coincidence or a direct result of the vaccine. In order to answer this question, a temporal relationship between vaccination and AEAV is required, as well as a biological mechanism is necessary to explain the link.\nThe platform for COVID‐19 vaccine leaders is based on gene therapy. Still, it is the matter of research for the dermatologists, to investigate the long‐term consequences of older generation vaccines. As a result, the long‐term consequences of vaccinations based on gene therapy remain essentially unknown. COVID‐19 vaccines have been reported to be associated with reactogenicity with symptoms such as fever, fatigue, and headache being frequently reported. This is due to the vaccines' intrinsic character, even though they have been shown to be safe in clinical trials.\n2\n, \n3\n Although the COVID‐19 vaccination is deemed safe, adverse effects, particularly dermatological ones, are still unknown. Clinical studies have reported that the most common cutaneous adverse effects are injection site reaction and pruritus; allergic reactions such as urticaria and widespread erythematous rash.\n4\n In the literature, various cutaneous conditions have been observed including anecdotal cases of erythema multiforme with morbilliform rash, delayed‐type hypersensitivity reaction, bullous drug eruption, pernio/chilblains (eg, “COVID toes”), erythromelalgia, and pityriasis‐rosea‐like exanthems, and herpes zoster (HZ).\n5\n, \n6\n, \n7\n, \n8\n Studies have reported development of herpes zoster due to SARS‐CoV‐2 infection either at the time of disease progression or following recovery from the disease. In the context of COVID‐19 pandemic, studies observed that COVID‐19 is associated lymphopenia, particularly CD3+, CD8+ lymphocytes, and functional impairment of CD4+ T cells, might make a patient more vulnerable to development of herpes zoster.\n9\n\n\nWhile there have been several case reports and case series published in literature on COVID‐19 vaccine‐induced HZ, to our knowledge, there is no comprehensive review on this subject. Given how crucial widespread vaccination is in curbing the COVID‐19 pandemic, limited data on HZ after COVID‐19 vaccine with handful of case reports and case series promoted us to systemically review published cases of HZ to describe the demographic, clinical, morphological characteristics, outcomes, and timing of development of herpes zoster to the various COVID‐19 vaccines.", "A systemic review was conducted due to the sparse but quickly increasing scientific literature on HZ and COVID‐19. We performed a search in PubMed and Embase using “Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA)” guidelines by using MeSH and keywords like “Shingles,” “Herpes zoster,” “Varicella zoster,” “COVID‐19,” “Vaccine,” “SARS‐CoV‐2.” Boolean operators (“OR”; “AND”) were used to combine the search results. 
No filters including country of publication, language, type of articles were applied. Individual case report references were filtered for any pertinent cases. The findings were saved in an EndNote library. The search strategy is shown in PRISMA diagram (Figure 1). Articles published until August 14, 2021 were included.\nPRISMA flowchart of study selection", "We selected all the available individual case reports and case series. During phase 1, two authors (H.D. and K.S.) independently assessed the abstracts, titles, and categories of research that met the requirements. The disagreements were resolved by consensus with another two authors (J.P and M.G). Based on the inclusion criteria, the second phase of the search involved assessing full‐text articles in order to identify items for data extraction. Data from the article were curated and summarized in the form of age, gender, country of origin, symptoms and clinical characteristic of the lesion, vaccine type, and dose, days until symptom onset of post‐vaccination, location of the lesion, past medical history/comorbidities, medical intervention during hospitalization and confirmatory test. Descriptive analysis and data collection were performed using Microsoft Excel software.", "Demographic and comorbid conditions The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nThe search yielded a total of 54 patients consisting of 27 male and 27 females. 
Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)", "The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). 
Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)", "The mean (SD) period between herpes zoster and COVID‐19 vaccination was 7.64 (6.92) days (Figure 2). In most of the cases, 52 (96.07%) HZ was developed within 1–3 weeks following COVID‐19 vaccine. Among, 29 (50.98%) of the cases it developed within 1st week of vaccination irrespective of the number of vaccine dose. Only 1 case reported after the one month of COVID‐19 vaccine. Majority of the patients 45/54 (86.27%) received mRNA vaccine followed by 5/54 (5.88%) inactivated COVID‐19 vaccine, 4/51 (7.84%) non‐replicating viral vector. Thirty‐six patients 36/45 (80%) developed herpes zoster following the priming dose of COVID‐19 vaccine among those who received mRNA vaccine. All the patients who received non‐replicating viral vector vaccine developed herpes zoster after the 1st dose. Eleven patients (11/54) were diagnosed through VZV PCR followed by clinical diagnosis (40/54).\nPeriod of herpes zoster development following COVID‐19 vaccination (n = 51)", "Majority of the herpes zoster cases diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54),\n10\n and skin biopsy (2/54).\n17\n, \n18\n Various dermatomal lesion has been observed including trunk,\n10\n hips/buttocks or inguinal region,\n19\n herpes zoster ophthalmicus (HZO),\n10\n, \n19\n upper limb,\n10\n thigh,\n12\n and abdomen and flank.\n10\n There were 3 cases who developed lymphadenopathy including cervical and inguinal lymphadenopathy.\n19\n, \n22\n Postherpetic neuralgia was reported in 3 cases.\n16\n Prior VZV infection was reported in 13 cases,\n10\n, \n19\n, \n20\n and 5 cases\n10\n, \n11\n, \n19\n reported prior zoster vaccination.", "Several theories can be postulated to explain the relationship between development of herpes zoster and COVID‐19 vaccines. Age was found to be the major risk factor for the development of HZ partly due to age‐related decline in cell‐mediated immune responses to VZV, whereas disease‐related immunocompromise is another risk factor including such as HIV infection, iatrogenic immunocompromission, physical trauma, or comorbid conditions such as malignancy or chronic kidney or liver disease. 
Studies have reported that cross‐reactivity between the spike protein and self‐antigens may lead to the development of immune‐mediated disorders in COVID‐19 patients in the long run, and the authors hypothesized that a similar response can occur following COVID‐19 vaccination. 9 , 10 Toll‐like receptor (TLR) stimulation of innate immunity might be the connection between COVID‐19 vaccination and HZ development. 15 The stimulation of these receptors has been related to the reactivation of VZV, which otherwise remains latent in affected people. 14 COVID‐19 immunization may lead to the production of type I IFNs and other inflammatory cytokines, activating T‐ and B‐cell immunity and affecting antigen expression, resulting in herpes zoster reactivation. 16 , 17 , 18 The peak of antigen expression is determined by the administration method and vaccine composition, which is another way to modulate the immune response. 11 , 12 , 13 Furthermore, herpes zoster is more common in HIV patients with lower CD4 cell counts, underlining the significance of T‐cell immunity in sustaining VZV latency. 26 , 27 According to Sahin U et al., 28 vaccination with BNT162b2 produces coordinated humoral and cellular adaptive immunity in healthy individuals: a robust cellular response, with spike‐specific CD8+ T cells and T helper type 1 (Th1) CD4+ T cells, develops seven days after the booster dose, with a high proportion of these cells generating interferon (IFN), a cytokine involved in numerous antiviral responses. S1‐binding IgG correlated positively with the S‐specific CD4+ T‐cell responses, as did the intensity of the S‐specific CD8+ T‐cell responses. Furthermore, among participants over 55 years old, SARS‐CoV‐2 mRNA‐1273 vaccination generated a robust CD4 cytokine response involving type 1 helper T cells. 29 We hypothesize that, after a large shift of naive CD8+ cells toward generating virus‐specific CD8+ cells, VZV‐specific CD8+ cells are momentarily incapable of regulating VZV.", "WHO and the CDC have established standard methods for conducting causation evaluations of individual instances of AEAV. When an incident occurs within the time frame defined for increased risk, it is said to be “consistent with” a causal association. As per the updated WHO guidelines on causality definitions, a “probable” association requires a temporal relationship and the existence of a biologic mechanism for a causal association between the vaccination and the occurrence. 25 , 30 , 31 Of the 54 reported herpes zoster cases, 52 (96.29%) developed within the defined higher‐risk timeframe (1–21 days after the initial dose). 23 As a result, the AEAV may be categorized as “compatible with” a causal relationship, and a “likely” causal relationship based on the World Health Organization Working Group criteria can be proposed, indicating the existence of a temporal link and a credible physiological mechanism. 25 , 30 , 31 , 32", "Although the case fatality rate (CFR) for HZ is exceedingly low in COVID‐19 patients, 9 herpes zoster is often associated with disability, especially among aged individuals, and available management only decreases viral shedding (reducing the risk of transmission), prevents postherpetic neuralgia, and reduces the severity and duration of pain. In immunocompetent people over the age of 50, the recombinant zoster vaccination is indicated. 33 Clinicians may not be aware of the link between HZ and COVID‐19 immunization; awareness of this clinical condition encourages additional reporting and communication of HZ after vaccination. The question is now raised whether to use antivirals in highly suspected, older, immunocompromised people to prevent the development of HZ. Therefore, clinical review committees need to decide whether to recommend antiviral treatment before initiating SARS‐CoV‐2 immunization.", "Despite the rarity of published articles, HZ is a common occurrence. In VAERS, there were 232 HZ‐related adverse events recorded for the COVID‐19 vaccinations. 34 As of March 21, 2021, there were 331 cases of HZ after the Pfizer/BioNTech vaccination and 297 after the AstraZeneca vaccine reported in the Medicines and Healthcare products Regulatory Agency of the United Kingdom (MHRA) Yellow Card adverse reaction reporting program. 35 , 36 , 37 However, because HZ is underreported as an adverse event following vaccination for other infectious agents, these findings may underestimate the prevalence of herpes zoster. Ongoing studies are therefore required to understand the immunological mechanisms that control long‐term protection against SARS‐CoV‐2 and VZV. Studies could assess either the risk or the trend of HZ development after COVID‐19 vaccination. Studies designed to assess the risk of HZ in patients with and without COVID‐19 immunization could determine whether there has been an increased risk of HZ in COVID‐19‐vaccinated individuals; both retrospective cohort and case‐control designs can be considered, with cases and controls matched where necessary (a worked illustration of such a comparison follows this block). Studies can also determine the prevalence of herpes zoster in the general population or in specialized populations (eg, administrative or hospital databases). Interestingly, a study by Pedro et al. observed that the incidence of HZ after 1 month of follow‐up of all vaccinated patients was 5–6 times higher than the usual annual HZ incidence in their geographical area. Future studies need to focus on cytokine function, T cells, and absolute lymphocyte counts in patients who present with reactivation of VZV following COVID‐19 immunization, and on the effects on cellular immunity with regard to etiology, prognosis, and manifestation. 22 Our study was limited by publication bias, small sample size, missing data, and lack of generalizability in the demographics of the series analyzed. In the majority of cases, the diagnosis of HZ rested on clinical findings.", "Hardik D Desai: Writing and revising the manuscript. Kamal Sharma, Anchal Shah, Jaimini Patoliya, Anant Patil, Zahra Hooshanginezhad, and Stephan Grabbe: Review and revising the manuscript. Mohamad Goldust: Conception, writing, review, and revising the manuscript.", "We confirm that the manuscript has been read and approved by all the authors, that the requirements for authorship as stated earlier in this document have been met, and that each author believes that the manuscript represents honest work.", "This study was conducted using published online material; hence, ethical approval was waived." ]
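As a worked illustration of the cohort comparison proposed above: the relative risk of HZ in vaccinated versus unvaccinated groups reduces to a simple 2x2 computation. All counts in this Python sketch are invented placeholders, not data from this review, and the confidence interval uses the standard log relative risk (Katz) approximation.

# Worked illustration of the proposed cohort comparison: relative risk (RR)
# of herpes zoster in vaccinated vs unvaccinated cohorts from a 2x2 table.
# All counts are invented placeholders, not data from this review.
import math

hz_vacc, n_vacc = 40, 100_000       # HZ cases / cohort size, vaccinated
hz_unvacc, n_unvacc = 25, 100_000   # HZ cases / cohort size, unvaccinated

rr = (hz_vacc / n_vacc) / (hz_unvacc / n_unvacc)

# 95% CI via the standard error of log(RR) (Katz approximation).
se = math.sqrt(1/hz_vacc - 1/n_vacc + 1/hz_unvacc - 1/n_unvacc)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")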
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHOD", "Search strategy", "Study selection", "RESULTS", "Description of Herpes zoster cases in patients with COVID‐19 vaccination", "Demographic and comorbid conditions", "Period between diagnosis of herpes zoster (HZ) and COVID‐19 vaccination, vaccine component, and its number of dose", "Clinical presentation and past history of varicella zoster (VZV) infection or vaccine", "DISCUSSION", "Is there a medical or biological basis for an increased risk of COVID‐19 vaccine‐induced HZ?", "Is there temporal association of development of HZ and COVID‐19 vaccine?", "Interpretation of the most up‐to‐date clinical evidence", "What kind of clinical/epidemiological evidence is required to determine whether HZ cases have increased due to COVID‐19 vaccine?", "CONCLUSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTIONS", "DISCLAIMER", "Ethical Approval" ]
[ "The World Health Organization (WHO) announced the coronavirus disease 2019 (COVID‐19) outbreak a pandemic on March 11, 2020 and the development of a safe and effective COVID‐19 vaccine quickly became a worldwide priority. As of July 2021, 27.6% of the world population has received at least one dose of COVID‐19 vaccine.\n1\n In an attempt to prevent severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) viral transmission, non‐replicating viral vector vaccines, DNA‐based or RNA‐based vaccines, and inactivated vaccines, protein subunit recombinant vaccines have been developed recently. Adverse events after vaccination (AEAV) might be a coincidence or a direct result of the vaccine. In order to answer this question, a temporal relationship between vaccination and AEAV is required, as well as a biological mechanism is necessary to explain the link.\nThe platform for COVID‐19 vaccine leaders is based on gene therapy. Still, it is the matter of research for the dermatologists, to investigate the long‐term consequences of older generation vaccines. As a result, the long‐term consequences of vaccinations based on gene therapy remain essentially unknown. COVID‐19 vaccines have been reported to be associated with reactogenicity with symptoms such as fever, fatigue, and headache being frequently reported. This is due to the vaccines' intrinsic character, even though they have been shown to be safe in clinical trials.\n2\n, \n3\n Although the COVID‐19 vaccination is deemed safe, adverse effects, particularly dermatological ones, are still unknown. Clinical studies have reported that the most common cutaneous adverse effects are injection site reaction and pruritus; allergic reactions such as urticaria and widespread erythematous rash.\n4\n In the literature, various cutaneous conditions have been observed including anecdotal cases of erythema multiforme with morbilliform rash, delayed‐type hypersensitivity reaction, bullous drug eruption, pernio/chilblains (eg, “COVID toes”), erythromelalgia, and pityriasis‐rosea‐like exanthems, and herpes zoster (HZ).\n5\n, \n6\n, \n7\n, \n8\n Studies have reported development of herpes zoster due to SARS‐CoV‐2 infection either at the time of disease progression or following recovery from the disease. In the context of COVID‐19 pandemic, studies observed that COVID‐19 is associated lymphopenia, particularly CD3+, CD8+ lymphocytes, and functional impairment of CD4+ T cells, might make a patient more vulnerable to development of herpes zoster.\n9\n\n\nWhile there have been several case reports and case series published in literature on COVID‐19 vaccine‐induced HZ, to our knowledge, there is no comprehensive review on this subject. Given how crucial widespread vaccination is in curbing the COVID‐19 pandemic, limited data on HZ after COVID‐19 vaccine with handful of case reports and case series promoted us to systemically review published cases of HZ to describe the demographic, clinical, morphological characteristics, outcomes, and timing of development of herpes zoster to the various COVID‐19 vaccines.", "Search strategy A systemic review was conducted due to the sparse but quickly increasing scientific literature on HZ and COVID‐19. We performed a search in PubMed and Embase using “Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA)” guidelines by using MeSH and keywords like “Shingles,” “Herpes zoster,” “Varicella zoster,” “COVID‐19,” “Vaccine,” “SARS‐CoV‐2.” Boolean operators (“OR”; “AND”) were used to combine the search results. 
No filters including country of publication, language, type of articles were applied. Individual case report references were filtered for any pertinent cases. The findings were saved in an EndNote library. The search strategy is shown in PRISMA diagram (Figure 1). Articles published until August 14, 2021 were included.\nPRISMA flowchart of study selection\nA systemic review was conducted due to the sparse but quickly increasing scientific literature on HZ and COVID‐19. We performed a search in PubMed and Embase using “Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA)” guidelines by using MeSH and keywords like “Shingles,” “Herpes zoster,” “Varicella zoster,” “COVID‐19,” “Vaccine,” “SARS‐CoV‐2.” Boolean operators (“OR”; “AND”) were used to combine the search results. No filters including country of publication, language, type of articles were applied. Individual case report references were filtered for any pertinent cases. The findings were saved in an EndNote library. The search strategy is shown in PRISMA diagram (Figure 1). Articles published until August 14, 2021 were included.\nPRISMA flowchart of study selection\nStudy selection We selected all the available individual case reports and case series. During phase 1, two authors (H.D. and K.S.) independently assessed the abstracts, titles, and categories of research that met the requirements. The disagreements were resolved by consensus with another two authors (J.P and M.G). Based on the inclusion criteria, the second phase of the search involved assessing full‐text articles in order to identify items for data extraction. Data from the article were curated and summarized in the form of age, gender, country of origin, symptoms and clinical characteristic of the lesion, vaccine type, and dose, days until symptom onset of post‐vaccination, location of the lesion, past medical history/comorbidities, medical intervention during hospitalization and confirmatory test. Descriptive analysis and data collection were performed using Microsoft Excel software.\nWe selected all the available individual case reports and case series. During phase 1, two authors (H.D. and K.S.) independently assessed the abstracts, titles, and categories of research that met the requirements. The disagreements were resolved by consensus with another two authors (J.P and M.G). Based on the inclusion criteria, the second phase of the search involved assessing full‐text articles in order to identify items for data extraction. Data from the article were curated and summarized in the form of age, gender, country of origin, symptoms and clinical characteristic of the lesion, vaccine type, and dose, days until symptom onset of post‐vaccination, location of the lesion, past medical history/comorbidities, medical intervention during hospitalization and confirmatory test. Descriptive analysis and data collection were performed using Microsoft Excel software.", "A systemic review was conducted due to the sparse but quickly increasing scientific literature on HZ and COVID‐19. We performed a search in PubMed and Embase using “Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA)” guidelines by using MeSH and keywords like “Shingles,” “Herpes zoster,” “Varicella zoster,” “COVID‐19,” “Vaccine,” “SARS‐CoV‐2.” Boolean operators (“OR”; “AND”) were used to combine the search results. No filters including country of publication, language, type of articles were applied. Individual case report references were filtered for any pertinent cases. 
The findings were saved in an EndNote library. The search strategy is shown in PRISMA diagram (Figure 1). Articles published until August 14, 2021 were included.\nPRISMA flowchart of study selection", "We selected all the available individual case reports and case series. During phase 1, two authors (H.D. and K.S.) independently assessed the abstracts, titles, and categories of research that met the requirements. The disagreements were resolved by consensus with another two authors (J.P and M.G). Based on the inclusion criteria, the second phase of the search involved assessing full‐text articles in order to identify items for data extraction. Data from the article were curated and summarized in the form of age, gender, country of origin, symptoms and clinical characteristic of the lesion, vaccine type, and dose, days until symptom onset of post‐vaccination, location of the lesion, past medical history/comorbidities, medical intervention during hospitalization and confirmatory test. Descriptive analysis and data collection were performed using Microsoft Excel software.", "The search yielded a total of 121 and 67 hits from Embase and PubMed, respectively. We identified 14 articles that met as per the study requirement.\nDescription of Herpes zoster cases in patients with COVID‐19 vaccination Demographic and comorbid conditions The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nThe search yielded a total of 54 patients consisting of 27 male and 27 females. 
Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nDemographic and comorbid conditions The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). 
Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nThe search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nPeriod between diagnosis of herpes zoster (HZ) and COVID‐19 vaccination, vaccine component, and its number of dose The mean (SD) period between herpes zoster and COVID‐19 vaccination was 7.64 (6.92) days (Figure 2). 
In most of the cases, 52 (96.07%) HZ was developed within 1–3 weeks following COVID‐19 vaccine. Among, 29 (50.98%) of the cases it developed within 1st week of vaccination irrespective of the number of vaccine dose. Only 1 case reported after the one month of COVID‐19 vaccine. Majority of the patients 45/54 (86.27%) received mRNA vaccine followed by 5/54 (5.88%) inactivated COVID‐19 vaccine, 4/51 (7.84%) non‐replicating viral vector. Thirty‐six patients 36/45 (80%) developed herpes zoster following the priming dose of COVID‐19 vaccine among those who received mRNA vaccine. All the patients who received non‐replicating viral vector vaccine developed herpes zoster after the 1st dose. Eleven patients (11/54) were diagnosed through VZV PCR followed by clinical diagnosis (40/54).\nPeriod of herpes zoster development following COVID‐19 vaccination (n = 51)\nThe mean (SD) period between herpes zoster and COVID‐19 vaccination was 7.64 (6.92) days (Figure 2). In most of the cases, 52 (96.07%) HZ was developed within 1–3 weeks following COVID‐19 vaccine. Among, 29 (50.98%) of the cases it developed within 1st week of vaccination irrespective of the number of vaccine dose. Only 1 case reported after the one month of COVID‐19 vaccine. Majority of the patients 45/54 (86.27%) received mRNA vaccine followed by 5/54 (5.88%) inactivated COVID‐19 vaccine, 4/51 (7.84%) non‐replicating viral vector. Thirty‐six patients 36/45 (80%) developed herpes zoster following the priming dose of COVID‐19 vaccine among those who received mRNA vaccine. All the patients who received non‐replicating viral vector vaccine developed herpes zoster after the 1st dose. Eleven patients (11/54) were diagnosed through VZV PCR followed by clinical diagnosis (40/54).\nPeriod of herpes zoster development following COVID‐19 vaccination (n = 51)\nClinical presentation and past history of varicella zoster (VZV) infection or vaccine Majority of the herpes zoster cases diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54),\n10\n and skin biopsy (2/54).\n17\n, \n18\n Various dermatomal lesion has been observed including trunk,\n10\n hips/buttocks or inguinal region,\n19\n herpes zoster ophthalmicus (HZO),\n10\n, \n19\n upper limb,\n10\n thigh,\n12\n and abdomen and flank.\n10\n There were 3 cases who developed lymphadenopathy including cervical and inguinal lymphadenopathy.\n19\n, \n22\n Postherpetic neuralgia was reported in 3 cases.\n16\n Prior VZV infection was reported in 13 cases,\n10\n, \n19\n, \n20\n and 5 cases\n10\n, \n11\n, \n19\n reported prior zoster vaccination.\nMajority of the herpes zoster cases diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54),\n10\n and skin biopsy (2/54).\n17\n, \n18\n Various dermatomal lesion has been observed including trunk,\n10\n hips/buttocks or inguinal region,\n19\n herpes zoster ophthalmicus (HZO),\n10\n, \n19\n upper limb,\n10\n thigh,\n12\n and abdomen and flank.\n10\n There were 3 cases who developed lymphadenopathy including cervical and inguinal lymphadenopathy.\n19\n, \n22\n Postherpetic neuralgia was reported in 3 cases.\n16\n Prior VZV infection was reported in 13 cases,\n10\n, \n19\n, \n20\n and 5 cases\n10\n, \n11\n, \n19\n reported prior zoster vaccination.", "Demographic and comorbid conditions The search yielded a total of 54 patients consisting of 27 male and 27 females. 
Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases.\nConducted studies on COVID‐19 vaccine and Varicella zoster\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Moderna)\nmRNA/1st\n(Moderna)\nmRNA/2nd\n(Moderna)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nInactivated Vaccine,\n2nd Dose\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nmRNA/2nd\n(Pfizer)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Moderna)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nNone replicating viral vector 1st\n(Astrazeneca)\nmRNA/1st\n(Pfizer)\nSystemic Lupus Erythematous\nHypertension\nPlaque type psoriasis\nHemophilia A\nPsoriatic Arthritis\nAntiphospholipid antibody syndrome\nmRNA/1st\n(Pfizer)\nmRNA/1st\n(Pfizer)\nmRNA/2nd\n(Moderna)\nThe search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA,\n10\n, \n11\n Lebanon,\n12\n Turkey,\n13\n, \n14\n, \n15\n Greece,\n16\n Italy,\n17\n India,\n18\n Israel,\n19\n Finland,\n20\n Taiwan,\n21\n Spain,\n22\n and Portugal.\n23\n There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). 
Clinical presentation and past history of varicella zoster (VZV) infection or vaccination
The majority of herpes zoster cases were diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54),[10] and skin biopsy (2/54).[17, 18] Dermatomal lesions were observed at various sites, including the trunk,[10] hips/buttocks or inguinal region,[19] the eye (herpes zoster ophthalmicus, HZO),[10, 19] upper limb,[10] thigh,[12] and abdomen and flank.[10] Three cases developed lymphadenopathy, including cervical and inguinal lymphadenopathy.[19, 22] Postherpetic neuralgia was reported in 3 cases.[16] Prior VZV infection was reported in 13 cases,[10, 19, 20] and 5 cases[10, 11, 19] reported prior zoster vaccination.
DISCUSSION: Vaccinations have been found to have a high level of reactogenicity, with headache, fever, and tiredness being more prevalent than with other vaccines. The distinctive inflammatory nature of these vaccines may explain the higher‐than‐usual reported adverse effects. These vaccines have been shown to strengthen the cellular immune system and generate a Th1‐type response with high IFN‐γ, TNF‐α, and IL‐2 levels. As a result, they may hypothetically play a role in flare‐ups of dermatological disorders, including psoriasis, lichen planus, vitiligo, and other diseases with a known Th1 role in their pathogenesis.[24]

Herpes zoster following vaccination is rarely reported in the literature. Studies have reported herpes zoster following vaccination with inactivated vaccines for influenza, hepatitis A, rabies, Japanese encephalitis, and yellow fever. In this review, we studied the baseline clinical and demographic characteristics and the timing of herpes zoster development following COVID‐19 vaccination. HZ has been reported after almost all types of COVID‐19 vaccine, with the majority of cases following mRNA vaccines. However, underreporting, fluctuating data quality, the absence of specified diagnostic criteria, missing denominator information, and reporter bias all restrict the reporting of adverse events after vaccination (AEAV) in major databases such as the Vaccine Adverse Event Reporting System (VAERS).[25] The majority of HZ cases were reported from Turkey and the USA, and there were no reports of herpes zoster from patients in low‐ and middle‐income countries, drawing attention to global inequities in COVID‐19 vaccine access or to reporting bias.

Antiviral medicines and analgesics are currently the mainstay of herpes zoster treatment, and the initial discomfort and skin rash are well controlled. All patients were treated with antivirals (valacyclovir, acyclovir, or famciclovir), and no causality was observed.

Is there a medical or biological basis for an increased risk of COVID‐19 vaccine‐induced HZ?
Several theories can be postulated to explain the relationship between the development of herpes zoster and COVID‐19 vaccines. Age is the major risk factor for HZ, partly because of the age‐related decline in cell‐mediated immune responses to VZV; disease‐related immunocompromise is another risk factor, including HIV infection, iatrogenic immunosuppression, physical trauma, and comorbid conditions such as malignancy or chronic kidney or liver disease. Studies have reported that cross‐reactivity between the spike protein and self‐antigens may lead to immune‐mediated disorders in COVID‐19 patients in the long run, and the authors hypothesized that a similar response can occur following COVID‐19 vaccination.[9, 10] Toll‐like receptor (TLR) stimulation of innate immunity might be the connection between the COVID‐19 vaccine and HZ development:[15] stimulation of these receptors has been related to the reactivation of VZV, a virus that otherwise remains dormant in affected people.[14] COVID‐19 immunization may lead to the production of type I IFNs and other inflammatory cytokines, activating T‐ and B‐cell immunity and negatively affecting antigen expression, resulting in herpes zoster reactivation.[16, 17, 18] The peak of antigen expression is determined by the administration method and vaccine composition, which is another way of modulating the immune response.[11, 12, 13]

Furthermore, herpes zoster is more common in HIV patients with lower CD4 cell counts, underlining the significance of T‐cell immunity in sustaining VZV latency.[26, 27] According to Sahin et al.,[28] vaccination with BNT162b2 produces coordinated humoral and cellular adaptive immunity in healthy individuals. A robust cellular response, with spike‐specific CD8+ T cells and T helper type 1 (Th1) CD4+ T cells, develops seven days after the booster dose, with a high proportion of these cells producing interferon (IFN), a cytokine involved in numerous antiviral responses. S1‐binding IgG correlated positively with S‐specific CD4+ T‐cell responses and with the intensity of S‐specific CD8+ T‐cell responses. Furthermore, among participants over 55 years old, the SARS‐CoV‐2 mRNA‐1273 vaccine generated a robust CD4 cytokine response involving type 1 helper T cells.[29] Given the large shift of naive CD8+ cells toward newly generated antigen‐specific CD8+ cells, we hypothesize that VZV‐specific CD8+ cells are momentarily incapable of regulating VZV.

Is there a temporal association between the development of HZ and the COVID‐19 vaccine?
WHO and CDC have established standard methods for conducting causation evaluations of individual instances of AEAV. When an incident occurs during the time frame defined for increased risk, it is said to be "consistent with" a causal association.
As per the WHO's updated guidelines on causality definitions, a "probable" association implies a temporal relationship and the existence of a biologic mechanism for a causal association between the vaccination and the event.[25, 30, 31] Of the 54 reported herpes zoster cases, 52 (96.29%) developed within the defined higher‐risk timeframe (1–21 days after the initial dose).[23] As a result, the AEAV may be categorized as "compatible with" a causal association, and a "likely" causal relationship based on the World Health Organization Working Group criteria can be proposed, given the existence of a temporal link and a credible physiological mechanism.[25, 30, 31, 32]
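A minimal sketch of the window check described above, assuming a 1–21 day higher‐risk window after the initial dose; the onset_days values are again hypothetical placeholders, not the study data:

    # Illustrative sketch: classifying cases against a WHO-style 1-21 day risk window.
    RISK_WINDOW = range(1, 22)  # days 1-21 inclusive

    onset_days = [2, 5, 7, 10, 3, 14, 1, 21, 35, 8]  # hypothetical onset days

    in_window = [d for d in onset_days if d in RISK_WINDOW]
    share = len(in_window) / len(onset_days)
    print(f"{len(in_window)}/{len(onset_days)} cases ({share:.2%}) fall within days 1-21")
    # Cases inside the window would be labelled "consistent with" a causal association.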
Interpretation of the most up‐to‐date clinical evidence
Although the case fatality rate (CFR) for HZ is exceedingly low in COVID‐19 patients,[9] herpes zoster is often associated with disability, especially among aged individuals, and available management only decreases viral shedding (reducing the risk of transmission), prevents post‐herpetic neuralgia, and reduces the severity and duration of pain. In immunocompetent people over the age of 50, the recombinant zoster vaccine is indicated.[33] Clinicians may not be aware of the link between HZ and COVID‐19 immunization, and awareness of this clinical condition encourages additional reporting and communication of HZ after vaccination. This raises the question of whether antivirals should be used in elderly immunocompromised people at high risk in order to prevent the development of HZ. Clinical review committees therefore need to decide whether to recommend antiviral treatment before initiating SARS‐CoV‐2 immunization.

What kind of clinical/epidemiological evidence is required to determine whether HZ cases have increased due to the COVID‐19 vaccine?
Despite the rarity of published articles, HZ is a common occurrence. In VAERS, 232 HZ‐related adverse events were recorded for the COVID‐19 vaccines.[34] As of March 21, 2021, 331 cases of HZ after the Pfizer/BioNTech vaccine and 297 after the AstraZeneca vaccine had been reported to the United Kingdom Medicines and Healthcare products Regulatory Agency (MHRA) Yellow Card adverse reaction reporting program.[35, 36, 37] However, because HZ is underreported as an adverse event following vaccination against other infectious agents, these figures may underestimate the prevalence of herpes zoster. Ongoing studies are therefore required to understand the immunological mechanisms that govern long‐term protection against SARS‐CoV‐2 and VZV. Studies could assess either the risk or the trend of HZ development after COVID‐19 vaccination. Studies designed to assess the risk of HZ in patients with and without COVID‐19 immunization could establish whether vaccinated individuals face an increased risk; both retrospective cohort and case‐control designs can be considered, and cases and controls may need to be matched. Studies can also determine the prevalence of herpes zoster in the general population or in specialized populations (eg, administrative or hospital databases). Interestingly, Pedro et al. observed that the incidence of HZ during 1 month of follow‐up of all vaccinated patients was 5–6 times higher than the usual annual HZ incidence in their geographical area.[22] Future studies need to focus on cytokine function, T cells, and absolute lymphocyte counts in patients presenting with VZV reactivation following COVID‐19 immunization, and on the vaccine's effects on cellular immunity with regard to etiology, prognosis, and manifestation.
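As an illustration of the kind of observed‐versus‐expected comparison behind Pedro et al.'s 5–6‐fold figure, the sketch below uses entirely hypothetical numbers; the cohort size, case count, and baseline rate are placeholders, not data from that study:

    # Illustrative sketch: observed vs expected 1-month HZ incidence after vaccination.
    # All values are hypothetical placeholders.
    vaccinated = 10_000            # hypothetical vaccinated cohort size
    observed_cases = 25            # hypothetical HZ cases within 1 month of vaccination
    annual_incidence = 5 / 1_000   # hypothetical baseline: 5 cases per 1,000 person-years

    expected_cases = vaccinated * annual_incidence / 12  # expected cases in one month
    ratio = observed_cases / expected_cases
    print(f"observed/expected ratio over 1 month: {ratio:.1f}x")  # ~6x with these inputs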
Our study was limited by publication bias, small sample sizes, missing data, and limited generalizability of the demographics of the series analyzed. In the majority of cases, the diagnosis of HZ rested on clinical findings.

CONCLUSION: Our study does not establish causality or a definite link but draws attention to a possible association between the COVID‐19 vaccine and shingles. Large‐scale immunological, epidemiological, and clinical studies may help clarify the cause‐effect relationship. Based on the criteria of a temporal connection with vaccination and a plausible biological link, HZ appears to be a "possible" but uncommon AEAV. Furthermore, these findings may be therapeutically relevant in deciding whether to use an antiviral as temporary prophylaxis prior to immunization for individuals at greater risk of VZV reactivation following SARS‐CoV‐2 vaccination.

CONFLICT OF INTEREST: None.

AUTHOR CONTRIBUTIONS: Hardik D Desai: writing and revising the manuscript. Kamal Sharma, Anchal Shah, Jaimini Patoliya, Anant Patil, Zahra Hooshanginezhad, and Stephan Grabbe: review and revision of the manuscript. Mohamad Goldust: conception, writing, review, and revision of the manuscript. We confirm that the manuscript has been read and approved by all the authors, that the requirements for authorship as stated earlier in this document have been met, and that each author believes that the manuscript represents honest work.

ETHICAL APPROVAL: This study was conducted using published online material; hence, ethical approval is waived.
[ null, "methods", null, null, "results", null, null, null, null, "discussion", null, null, null, null, "conclusions", "COI-statement", null, null, null ]
[ "COVID‐19", "herpes zoster", "vaccine", "Varicella zoster" ]
INTRODUCTION: The World Health Organization (WHO) declared the coronavirus disease 2019 (COVID‐19) outbreak a pandemic on March 11, 2020, and the development of a safe and effective COVID‐19 vaccine quickly became a worldwide priority. As of July 2021, 27.6% of the world population had received at least one dose of a COVID‐19 vaccine.[1] In an attempt to prevent severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) transmission, non‐replicating viral vector vaccines, DNA‐ and RNA‐based vaccines, inactivated vaccines, and protein subunit recombinant vaccines have recently been developed. Adverse events after vaccination (AEAV) may be coincidental or a direct result of the vaccine; to distinguish the two, a temporal relationship between vaccination and the AEAV is required, as well as a biological mechanism that explains the link. The leading COVID‐19 vaccine platforms are based on gene technology, and investigating their long‐term consequences, as with older‐generation vaccines, remains a matter of research for dermatologists; as a result, the long‐term consequences of gene‐based vaccines remain essentially unknown. COVID‐19 vaccines have been reported to be associated with reactogenicity, with symptoms such as fever, fatigue, and headache frequently reported. This is due to the vaccines' intrinsic character, even though they have been shown to be safe in clinical trials.[2, 3] Although COVID‐19 vaccination is deemed safe, adverse effects, particularly dermatological ones, are still incompletely characterized. Clinical studies have reported that the most common cutaneous adverse effects are injection‐site reactions and pruritus, along with allergic reactions such as urticaria and widespread erythematous rash.[4] In the literature, various cutaneous conditions have been observed, including anecdotal cases of erythema multiforme with morbilliform rash, delayed‐type hypersensitivity reactions, bullous drug eruptions, pernio/chilblains (eg, "COVID toes"), erythromelalgia, pityriasis‐rosea‐like exanthems, and herpes zoster (HZ).[5, 6, 7, 8] Studies have reported the development of herpes zoster due to SARS‐CoV‐2 infection, either during disease progression or following recovery. In the context of the COVID‐19 pandemic, studies observed that COVID‐19 is associated with lymphopenia, particularly of CD3+ and CD8+ lymphocytes, and with functional impairment of CD4+ T cells, which might make a patient more vulnerable to herpes zoster.[9] While several case reports and case series on COVID‐19 vaccine‐induced HZ have been published, to our knowledge there is no comprehensive review on this subject. Given how crucial widespread vaccination is to curbing the COVID‐19 pandemic, the limited data on HZ after COVID‐19 vaccination prompted us to systematically review published cases of HZ and describe their demographic, clinical, and morphological characteristics, outcomes, and the timing of herpes zoster development after the various COVID‐19 vaccines.

METHOD: Search strategy
A systematic review was conducted because of the sparse but quickly growing scientific literature on HZ and COVID‐19. We searched PubMed and Embase following the Preferred Reporting Items for Systematic Reviews and Meta‐Analyses (PRISMA) guidelines, using MeSH terms and keywords such as "Shingles," "Herpes zoster," "Varicella zoster," "COVID‐19," "Vaccine," and "SARS‐CoV‐2." Boolean operators ("OR"; "AND") were used to combine the search results.
No filters (country of publication, language, or article type) were applied. The references of individual case reports were screened for additional pertinent cases. The findings were saved in an EndNote library. The search strategy is shown in the PRISMA diagram (Figure 1). Articles published until August 14, 2021 were included.

Figure 1: PRISMA flowchart of study selection

Study selection
We selected all available individual case reports and case series. During phase 1, two authors (H.D. and K.S.) independently assessed the abstracts, titles, and categories of studies that met the requirements; disagreements were resolved by consensus with two other authors (J.P. and M.G.). In the second phase, full‐text articles were assessed against the inclusion criteria to identify items for data extraction. Data were curated and summarized in the form of age, gender, country of origin, symptoms and clinical characteristics of the lesion, vaccine type and dose, days until post‐vaccination symptom onset, lesion location, past medical history/comorbidities, medical intervention during hospitalization, and confirmatory test. Descriptive analysis and data collection were performed using Microsoft Excel.
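For illustration, here is one plausible way the Boolean query could be assembled from the keywords listed above. This is an assumption; the authors' exact search string is not given in the text.

    # Illustrative sketch: assembling a Boolean search string from the listed keywords.
    # The grouping below is an assumption, not the authors' documented query.
    zoster_terms = ['"Shingles"', '"Herpes zoster"', '"Varicella zoster"']
    covid_terms = ['"COVID-19"', '"SARS-CoV-2"']

    zoster_block = "(" + " OR ".join(zoster_terms) + ")"
    covid_block = "(" + " OR ".join(covid_terms) + ")"
    query = zoster_block + " AND " + covid_block + ' AND "Vaccine"'
    print(query)
    # ("Shingles" OR "Herpes zoster" OR "Varicella zoster") AND ("COVID-19" OR "SARS-CoV-2") AND "Vaccine"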
RESULTS: The search yielded a total of 121 and 67 hits from Embase and PubMed, respectively. We identified 14 articles that met the study requirements.
23  There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases. Conducted studies on COVID‐19 vaccine and Varicella zoster mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/2nd (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Moderna) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) Inactivated Vaccine, 2nd Dose mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Moderna) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Pfizer) Systemic Lupus Erythematous Hypertension Plaque type psoriasis Hemophilia A Psoriatic Arthritis Antiphospholipid antibody syndrome mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Moderna) Demographic and comorbid conditions The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA, 10 , 11 Lebanon, 12  Turkey, 13 , 14 , 15 Greece, 16 Italy, 17 India, 18 Israel, 19 Finland, 20  Taiwan, 21 Spain, 22 and Portugal. 23  There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases. Conducted studies on COVID‐19 vaccine and Varicella zoster mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/2nd (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Moderna) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) Inactivated Vaccine, 2nd Dose mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Moderna) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Pfizer) Systemic Lupus Erythematous Hypertension Plaque type psoriasis Hemophilia A Psoriatic Arthritis Antiphospholipid antibody syndrome mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Moderna) The search yielded a total of 54 patients consisting of 27 male and 27 females. 
Cases were reported worldwide including USA, 10 , 11 Lebanon, 12  Turkey, 13 , 14 , 15 Greece, 16 Italy, 17 India, 18 Israel, 19 Finland, 20  Taiwan, 21 Spain, 22 and Portugal. 23  There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases. Conducted studies on COVID‐19 vaccine and Varicella zoster mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/2nd (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Moderna) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) Inactivated Vaccine, 2nd Dose mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Moderna) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Pfizer) Systemic Lupus Erythematous Hypertension Plaque type psoriasis Hemophilia A Psoriatic Arthritis Antiphospholipid antibody syndrome mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Moderna) Period between diagnosis of herpes zoster (HZ) and COVID‐19 vaccination, vaccine component, and its number of dose The mean (SD) period between herpes zoster and COVID‐19 vaccination was 7.64 (6.92) days (Figure 2). In most of the cases, 52 (96.07%) HZ was developed within 1–3 weeks following COVID‐19 vaccine. Among, 29 (50.98%) of the cases it developed within 1st week of vaccination irrespective of the number of vaccine dose. Only 1 case reported after the one month of COVID‐19 vaccine. Majority of the patients 45/54 (86.27%) received mRNA vaccine followed by 5/54 (5.88%) inactivated COVID‐19 vaccine, 4/51 (7.84%) non‐replicating viral vector. Thirty‐six patients 36/45 (80%) developed herpes zoster following the priming dose of COVID‐19 vaccine among those who received mRNA vaccine. All the patients who received non‐replicating viral vector vaccine developed herpes zoster after the 1st dose. Eleven patients (11/54) were diagnosed through VZV PCR followed by clinical diagnosis (40/54). Period of herpes zoster development following COVID‐19 vaccination (n = 51) The mean (SD) period between herpes zoster and COVID‐19 vaccination was 7.64 (6.92) days (Figure 2). In most of the cases, 52 (96.07%) HZ was developed within 1–3 weeks following COVID‐19 vaccine. Among, 29 (50.98%) of the cases it developed within 1st week of vaccination irrespective of the number of vaccine dose. Only 1 case reported after the one month of COVID‐19 vaccine. Majority of the patients 45/54 (86.27%) received mRNA vaccine followed by 5/54 (5.88%) inactivated COVID‐19 vaccine, 4/51 (7.84%) non‐replicating viral vector. Thirty‐six patients 36/45 (80%) developed herpes zoster following the priming dose of COVID‐19 vaccine among those who received mRNA vaccine. 
All the patients who received non‐replicating viral vector vaccine developed herpes zoster after the 1st dose. Eleven patients (11/54) were diagnosed through VZV PCR followed by clinical diagnosis (40/54). Period of herpes zoster development following COVID‐19 vaccination (n = 51) Clinical presentation and past history of varicella zoster (VZV) infection or vaccine Majority of the herpes zoster cases diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54), 10 and skin biopsy (2/54). 17 , 18  Various dermatomal lesion has been observed including trunk, 10  hips/buttocks or inguinal region, 19  herpes zoster ophthalmicus (HZO), 10 , 19 upper limb, 10 thigh, 12 and abdomen and flank. 10  There were 3 cases who developed lymphadenopathy including cervical and inguinal lymphadenopathy. 19 , 22 Postherpetic neuralgia was reported in 3 cases. 16 Prior VZV infection was reported in 13 cases, 10 , 19 , 20 and 5 cases 10 , 11 , 19 reported prior zoster vaccination. Majority of the herpes zoster cases diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54), 10 and skin biopsy (2/54). 17 , 18  Various dermatomal lesion has been observed including trunk, 10  hips/buttocks or inguinal region, 19  herpes zoster ophthalmicus (HZO), 10 , 19 upper limb, 10 thigh, 12 and abdomen and flank. 10  There were 3 cases who developed lymphadenopathy including cervical and inguinal lymphadenopathy. 19 , 22 Postherpetic neuralgia was reported in 3 cases. 16 Prior VZV infection was reported in 13 cases, 10 , 19 , 20 and 5 cases 10 , 11 , 19 reported prior zoster vaccination. Description of Herpes zoster cases in patients with COVID‐19 vaccination: Demographic and comorbid conditions The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA, 10 , 11 Lebanon, 12  Turkey, 13 , 14 , 15 Greece, 16 Italy, 17 India, 18 Israel, 19 Finland, 20  Taiwan, 21 Spain, 22 and Portugal. 23  There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases. 
Conducted studies on COVID‐19 vaccine and Varicella zoster mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/2nd (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Moderna) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) Inactivated Vaccine, 2nd Dose mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Moderna) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Pfizer) Systemic Lupus Erythematous Hypertension Plaque type psoriasis Hemophilia A Psoriatic Arthritis Antiphospholipid antibody syndrome mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Moderna) The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA, 10 , 11 Lebanon, 12  Turkey, 13 , 14 , 15 Greece, 16 Italy, 17 India, 18 Israel, 19 Finland, 20  Taiwan, 21 Spain, 22 and Portugal. 23  There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). Table 1 shows the details of cases. Conducted studies on COVID‐19 vaccine and Varicella zoster mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/2nd (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Moderna) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) Inactivated Vaccine, 2nd Dose mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Moderna) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Pfizer) Systemic Lupus Erythematous Hypertension Plaque type psoriasis Hemophilia A Psoriatic Arthritis Antiphospholipid antibody syndrome mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Moderna) Demographic and comorbid conditions: The search yielded a total of 54 patients consisting of 27 male and 27 females. Cases were reported worldwide including USA, 10 , 11 Lebanon, 12  Turkey, 13 , 14 , 15 Greece, 16 Italy, 17 India, 18 Israel, 19 Finland, 20  Taiwan, 21 Spain, 22 and Portugal. 23  There were cases with known risk factors for herpes zoster, which included age more than 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorder (n = 13), malignancy (n = 4), and psychiatric disorder (n = 2). 
Table 1 shows the details of cases. Conducted studies on COVID‐19 vaccine and Varicella zoster mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/2nd (Moderna) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/1st (Pfizer) mRNA/1st (Moderna) mRNA/1st (Moderna) mRNA/2nd (Moderna) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) Inactivated Vaccine, 2nd Dose mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Pfizer) mRNA/2nd (Pfizer) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Moderna) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) None replicating viral vector 1st (Astrazeneca) mRNA/1st (Pfizer) Systemic Lupus Erythematous Hypertension Plaque type psoriasis Hemophilia A Psoriatic Arthritis Antiphospholipid antibody syndrome mRNA/1st (Pfizer) mRNA/1st (Pfizer) mRNA/2nd (Moderna) Period between diagnosis of herpes zoster (HZ) and COVID‐19 vaccination, vaccine component, and its number of dose: The mean (SD) period between herpes zoster and COVID‐19 vaccination was 7.64 (6.92) days (Figure 2). In most of the cases, 52 (96.07%) HZ was developed within 1–3 weeks following COVID‐19 vaccine. Among, 29 (50.98%) of the cases it developed within 1st week of vaccination irrespective of the number of vaccine dose. Only 1 case reported after the one month of COVID‐19 vaccine. Majority of the patients 45/54 (86.27%) received mRNA vaccine followed by 5/54 (5.88%) inactivated COVID‐19 vaccine, 4/51 (7.84%) non‐replicating viral vector. Thirty‐six patients 36/45 (80%) developed herpes zoster following the priming dose of COVID‐19 vaccine among those who received mRNA vaccine. All the patients who received non‐replicating viral vector vaccine developed herpes zoster after the 1st dose. Eleven patients (11/54) were diagnosed through VZV PCR followed by clinical diagnosis (40/54). Period of herpes zoster development following COVID‐19 vaccination (n = 51) Clinical presentation and past history of varicella zoster (VZV) infection or vaccine: Majority of the herpes zoster cases diagnosed clinically (40/54), followed by VZV PCR (11/54), VZV IgG (1/54), 10 and skin biopsy (2/54). 17 , 18  Various dermatomal lesion has been observed including trunk, 10  hips/buttocks or inguinal region, 19  herpes zoster ophthalmicus (HZO), 10 , 19 upper limb, 10 thigh, 12 and abdomen and flank. 10  There were 3 cases who developed lymphadenopathy including cervical and inguinal lymphadenopathy. 19 , 22 Postherpetic neuralgia was reported in 3 cases. 16 Prior VZV infection was reported in 13 cases, 10 , 19 , 20 and 5 cases 10 , 11 , 19 reported prior zoster vaccination. DISCUSSION: Vaccinations have been found to have a high level of reactogenicity, with headache, fever, and tiredness being more prevalent than with other vaccines. The distinctive inflammatory nature of these vaccinations may explain the higher‐than‐usual reported adverse effects. These vaccinations have been proven to strengthen the cellular immune system and generate a Th1‐type response with high IFNg, TNFa, and IL2 levels. 
As a result, they may hypothetically play a role in the flare‐up of dermatological disorders including psoriasis, lichen planus, vitiligo, and other diseases with a known Th1 function in pathogenesis. 24 Herpes zoster following vaccination is rarely reported in the literature. Studies have reported herpes zoster following vaccination of inactivated vaccines for influenza, hepatitis A, and rabies and Japanese encephalitis, and yellow fever. In this review, we studied baseline clinical and demographic characteristics, and timing of development of herpes zoster following COVID‐19 vaccine. HZ has been reported due to almost all types of COVID‐19 vaccine. Among, majority of the cases were reported due to mRNA vaccine. However, underreporting, fluctuating data quality, the absence of specified diagnostic criteria, missing denominator information, and reporter bias all restrict the reporting of AEAV occurrences in major databases like the Vaccine Adverse Event Reporting System (VAERS). 25  Majority of the HZ cases reported from Turkey, USA, and there were no reports of herpes zoster from patients in low‐ and middle‐income countries, raising attention to global inequities in COVID‐19 vaccine access or lack of reporting bias. Antiviral medicines and analgesics are currently mainstay of the treatment of herpes zoster, and the initial discomfort and skin rash are well controlled. All the patients were treated with antivirals either valacyclovir or acyclovir or famciclovir and no any causality was observed. Is there a medical or biological basis for an increased risk of COVID‐19 vaccine‐induced HZ? Several theories can be postulated to explain the relationship between development of herpes zoster and COVID‐19 vaccines. Age was found to be the major risk factor for the development of HZ partly due to age‐related decline in cell‐mediated immune responses to VZV, whereas disease‐related immunocompromise is another risk factor including such as HIV infection, iatrogenic immunocompromission, physical trauma, or comorbid conditions such as malignancy or chronic kidney or liver disease. Studies have reported cross‐reactivity between spike protein and self‐antigen may lead to development of immune‐mediated disorders in COVID‐19 patients in the long run. The authors hypothesized that similar response can happen following COVID‐19 vaccine. 9 , 10  Toll‐like receptors (TLR) stimulation of innate immunity might be the connection between COVID‐19 vaccine and HZ development. 15  The stimulation of these receptors has been related to the reactivation of VZV, allowing the latent virus to remain dormant in the afflicted people. 14  The COVID‐19 immunization may lead to the production of type I IFNs and other inflammatory cytokines, activating T‐ and B‐cell immunity and negatively affecting antigen expression, resulting in herpes zoster reactivation. 16 , 17 , 18  The peak of antigen expression is determined by the administration method and vaccine composition, which is another approach to modulate the immune response. 11 , 12 , 13 Furthermore, herpes zoster is more common in HIV patients with lower CD4 cell counts, underlining the significance of T‐cell immunity in sustaining VZV latency. 26 , 27 According to Sahin U et al., 28 vaccination with BNT162b2 produces coordinated humoral and cellular adaptive immunity in healthy individuals. 
A robust cellular response with spike-specific CD8+ T cells and T helper type 1 (Th1) CD4+ T cells develops seven days after the booster dose, with a high proportion of these cells producing interferon (IFN), a cytokine involved in numerous antiviral responses. S1-binding IgG correlated positively with both the S-specific CD4+ and CD8+ T-cell responses. Furthermore, among participants over 55 years old, the SARS-CoV-2 mRNA-1273 vaccine generated a robust CD4 cytokine response involving type 1 helper T cells. 29 After a large shift of naive CD8+ cells to create CD8+ cells specific to control HIV or VZV, we hypothesize that VZV-specific CD8+ cells are momentarily incapable of regulating VZV.

Is there a temporal association between development of HZ and COVID-19 vaccination?: WHO and CDC have established standard methods for conducting causality evaluations of individual instances of AEAV.
When an event occurs within the time frame defined for increased risk, it is said to be "consistent with" a causal association. As per the WHO updated guidelines on causality definitions, a "probable" association suggests a temporal relationship and the existence of a biologic mechanism for a causal association between the vaccination and the event. 25, 30, 31 Of the 54 reported herpes zoster cases, 52 (96.29%) developed within the defined higher-risk timeframe (1–21 days after the initial dose). 23 As a result, the AEAV may be categorized as "consistent with" a causal association, and a "probable" causal relationship based on the World Health Organization Working Group criteria can be proposed, indicating the existence of a temporal link and a credible physiological mechanism. 25, 30, 31, 32

Interpretation of the most up-to-date clinical evidence: Although the case fatality rate (CFR) for HZ is exceedingly low in COVID-19 patients, 9 herpes zoster is often associated with disability, especially among older individuals, and available management only decreases viral shedding (reducing the risk of transmission), prevents post-herpetic neuralgia, and reduces the severity and duration of pain. In immunocompetent people over the age of 50, the recombinant zoster vaccine is indicated. 33 Clinicians may not be aware of the link between HZ and COVID-19 immunization; awareness of this clinical condition encourages additional reporting and communication of HZ after vaccination. This raises the question of whether prophylactic antivirals should be used in elderly or immunocompromised individuals at high risk of developing HZ. Therefore, clinical review committees need to decide whether to recommend antiviral treatment before initiating SARS-CoV-2 immunization.
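To make the temporal criterion concrete, the sketch below classifies a case by its onset interval against the 1–21-day increased-risk window used in this review. The window boundaries come from the text; the function name, data structure, and example cases are illustrative assumptions.

```python
from dataclasses import dataclass

RISK_WINDOW_DAYS = range(1, 22)  # 1-21 days after the dose, per the review


@dataclass
class Case:
    case_id: str
    onset_day: int  # days from vaccination to HZ diagnosis


def temporal_classification(case: Case) -> str:
    """Label a case 'consistent with' the increased-risk window or not.

    This checks only the temporal criterion; a full WHO causality
    assessment also requires a plausible biological mechanism and
    exclusion of alternative causes.
    """
    if case.onset_day in RISK_WINDOW_DAYS:
        return "consistent with temporal association"
    return "outside defined risk window"


# Illustrative cases, not study data.
for c in [Case("A", 5), Case("B", 20), Case("C", 35)]:
    print(c.case_id, "->", temporal_classification(c))
```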
What kind of clinical/epidemiological evidence is required to determine whether HZ cases have increased due to the COVID-19 vaccine?: Although published reports are rare, HZ itself is a common condition. In VAERS, 232 HZ-related adverse events were recorded for the COVID-19 vaccines. 34 As of March 21, 2021, 331 cases of HZ after the Pfizer/BioNTech vaccine and 297 after the AstraZeneca vaccine had been reported in the Medicines and Healthcare products Regulatory Agency of the United Kingdom (MHRA) Yellow Card adverse reaction reporting program. 35, 36, 37 However, because HZ is underreported as an adverse event following vaccination against other infectious agents, these figures may underestimate the prevalence of herpes zoster. Ongoing studies are therefore required to understand the immunological mechanisms that control long-term protection against SARS-CoV-2 and VZV. Studies could assess either the risk or the trend of HZ development after COVID-19 vaccination. Studies designed to compare patients with and without COVID-19 immunization could determine whether vaccinated individuals have an increased risk of HZ; both retrospective cohort and case-control designs can be considered, and cases and controls may need to be matched. Studies could also determine the prevalence of herpes zoster in the general population or in specialized populations (eg, administrative or hospital databases). Interestingly, a study by Pedro et al. observed that the incidence of HZ during 1 month of follow-up of all vaccinated patients was 5–6 times higher than the usual annual HZ incidence in their geographical area. Future studies should focus on cytokine function, T cells, and absolute lymphocyte counts in patients presenting with VZV reactivation following COVID-19 immunization, and on its effects on cellular immunity with regard to etiology, prognosis, and manifestation. 22 Our study was limited by publication bias, small sample size, missing data, and lack of generalizability of the demographics of the series analyzed. In the majority of cases, the diagnosis of HZ rested on clinical findings.
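The 5–6-fold comparison reported by Pedro et al. is an incidence rate ratio. A minimal sketch of such a calculation, with a normal-approximation confidence interval on the log scale, is shown below; all numbers are illustrative placeholders, not data from that study.

```python
import math


def incidence_rate_ratio(cases_exposed, person_time_exposed,
                         cases_ref, person_time_ref, z=1.96):
    """Incidence rate ratio with a Wald confidence interval on the log scale."""
    rate_exposed = cases_exposed / person_time_exposed
    rate_ref = cases_ref / person_time_ref
    irr = rate_exposed / rate_ref
    se_log = math.sqrt(1 / cases_exposed + 1 / cases_ref)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, (lo, hi)


# Illustrative: 12 HZ cases in 1,000 person-months after vaccination
# versus an assumed background of 2 cases per 1,000 person-months.
irr, ci = incidence_rate_ratio(12, 1000, 2, 1000)
print(f"IRR = {irr:.1f}, 95% CI {ci[0]:.1f}-{ci[1]:.1f}")
```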
CONCLUSION: Our study does not establish causality or a definite link but draws attention to a possible association between the COVID-19 vaccine and shingles. Large-scale immunological, epidemiological, and clinical studies may help clarify the cause-effect relationship. Based on the criteria of a temporal connection with vaccination and a plausible biological link, HZ appears to be a "possible" but uncommon AEAV. Furthermore, these findings may be therapeutically relevant in deciding whether to use an antiviral as temporary prophylaxis prior to immunization for individuals at greater risk of VZV reactivation following SARS-CoV-2 vaccination.

CONFLICT OF INTEREST: None.

AUTHOR CONTRIBUTIONS: Hardik D Desai: writing and revising the manuscript. Kamal Sharma, Anchal Shah, Jaimini Patoliya, Anant Patil, Zahra Hooshanginezhad, and Stephan Grabbe: review and revision of the manuscript. Mohamad Goldust: conception, writing, review, and revision of the manuscript.

DISCLAIMER: We confirm that the manuscript has been read and approved by all authors, that the requirements for authorship as stated earlier in this document have been met, and that each author believes that the manuscript represents honest work.

Ethical Approval: This study was conducted using published online material; hence, ethical approval was waived.
Background: Although the COVID-19 vaccination is deemed safe, the exact incidence and nature of adverse effects, particularly dermatological ones, are still unknown. Methods: We performed a systematic review of articles from PubMed and Embase using MeSH terms and keywords such as "Shingles," "Herpes zoster," "Varicella zoster," "COVID-19," "Vaccine," and "SARS-CoV-2." No filters for country of publication, language, or article type were applied. References of individual case reports were screened for any pertinent cases. Results: A total of 54 cases, comprising 27 male and 27 female patients, have been reported. There were cases with known risk factors for herpes zoster, including age over 50 years (n = 36), immunological disorders (n = 10), chronic disease (n = 25), metabolic disorders (n = 13), malignancy (n = 4), and psychiatric disorders (n = 2). The mean (SD) interval between development of herpes zoster and COVID-19 vaccination was 7.64 (6.92) days. The majority of cases were from high-income and/or middle-income countries, and 86.27% of HZ cases were reported after mRNA vaccines. Among mRNA vaccine recipients, 36/45 (80%) developed herpes zoster following the priming dose of the COVID-19 vaccine. Conclusions: We could not establish a definite link, but there may be a possible association between the COVID-19 vaccine and shingles. Large-scale studies may help clarify the cause-effect relationship.
INTRODUCTION: The World Health Organization (WHO) declared the coronavirus disease 2019 (COVID-19) outbreak a pandemic on March 11, 2020, and the development of a safe and effective COVID-19 vaccine quickly became a worldwide priority. As of July 2021, 27.6% of the world population had received at least one dose of a COVID-19 vaccine. 1 In an attempt to prevent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) transmission, non-replicating viral vector vaccines, DNA-based and RNA-based vaccines, inactivated vaccines, and protein subunit recombinant vaccines have been developed. Adverse events after vaccination (AEAV) might be a coincidence or a direct result of the vaccine; to distinguish the two, a temporal relationship between vaccination and the AEAV is required, as well as a biological mechanism to explain the link. The leading COVID-19 vaccine platforms are based on gene-based technology. Investigating the long-term consequences even of older-generation vaccines remains a matter of research for dermatologists; as a result, the long-term consequences of gene-based vaccines remain essentially unknown. COVID-19 vaccines have been reported to be associated with reactogenicity, with symptoms such as fever, fatigue, and headache frequently reported. This is due to the vaccines' intrinsic character, even though they have been shown to be safe in clinical trials. 2, 3 Although the COVID-19 vaccination is deemed safe, adverse effects, particularly dermatological ones, are still incompletely characterized. Clinical studies have reported that the most common cutaneous adverse effects are injection-site reactions and pruritus, as well as allergic reactions such as urticaria and widespread erythematous rash. 4 In the literature, various cutaneous conditions have been observed, including anecdotal cases of erythema multiforme with morbilliform rash, delayed-type hypersensitivity reaction, bullous drug eruption, pernio/chilblains (eg, "COVID toes"), erythromelalgia, pityriasis-rosea-like exanthems, and herpes zoster (HZ). 5, 6, 7, 8 Studies have reported development of herpes zoster due to SARS-CoV-2 infection, either during disease progression or following recovery. In the context of the COVID-19 pandemic, studies observed that COVID-19 is associated with lymphopenia, particularly of CD3+ and CD8+ lymphocytes, and with functional impairment of CD4+ T cells, which might make a patient more vulnerable to herpes zoster. 9 While several case reports and case series on COVID-19 vaccine-induced HZ have been published, to our knowledge there is no comprehensive review on this subject. Given how crucial widespread vaccination is to curbing the COVID-19 pandemic, the limited data on HZ after COVID-19 vaccination, consisting of a handful of case reports and case series, prompted us to systematically review published cases of HZ and describe the demographic, clinical, and morphological characteristics, outcomes, and timing of herpes zoster development after the various COVID-19 vaccines.
Keywords: COVID‐19 | herpes zoster | vaccine | Varicella zoster
MeSH terms: COVID-19 | COVID-19 Vaccines | Chickenpox | Female | Herpes Zoster | Herpes Zoster Vaccine | Humans | Male | Middle Aged | SARS-CoV-2
Dynamic contrast enhanced MRI of pulmonary adenocarcinomas for early risk stratification: higher contrast uptake associated with response and better prognosis.
36471318
To explore the prognostic value of serial dynamic contrast-enhanced (DCE) MRI in patients with advanced pulmonary adenocarcinoma undergoing first-line therapy with either tyrosine-kinase inhibitors (TKI) or platinum-based chemotherapy (PBC).
BACKGROUND
Patients underwent baseline (day 0, n = 98) and post-therapeutic DCE MRI (PBC: day +1, n = 52; TKI: day +7, n = 46) at 1.5T. Perfusion curves were acquired at 10, 40, and 70 s after contrast application and analysed semiquantitatively. Treatment response was evaluated at 6 weeks by CT (RECIST 1.1); progression-free survival (PFS) and overall survival were analysed with respect to clinical and perfusion parameters. Relative uptake was defined as the signal difference between contrast and non-contrast images, divided by the non-contrast signal. Predictors of survival were selected using Cox regression analysis. Median follow-up was 825 days.
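As a worked example of the relative-uptake definition (with illustrative signal intensities, not patient data): for a non-contrast tumor signal SI_0 = 100 and a signal SI_70 = 149 at 70 s after contrast application, relative uptake = (149 − 100) / 100 = 0.49, i.e., 49%.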
METHODS
In pre-therapeutic and early post-therapeutic MRI, treatment responders (n = 27) showed significantly higher relative contrast uptake within the tumor at 70 s after application than non-responders (n = 71, p ≤ 0.02), with response defined as PR by RECIST 1.1 at 6 weeks. There was no significant change of perfusion at early MRI after treatment. In multivariate regression analysis of selected parameters, the strongest associations with PFS were relative uptake at 40 s in the early post-treatment MRI and pre-treatment clinical data (presence of liver metastases, ECOG performance status).
RESULTS
Higher contrast uptake within the tumor at pre-treatment and early post-treatment MRI was associated with treatment response and better prognosis. DCE MRI of pulmonary adenocarcinoma may provide important prognostic information.
CONCLUSION
[ "Humans", "Contrast Media", "Magnetic Resonance Imaging", "Prognosis", "Liver Neoplasms", "Adenocarcinoma", "Treatment Outcome" ]
9724354
Background
Risk stratification and early therapy response assessment are of key importance for patients with cancer, in order to guide subsequent management and avoid unnecessary toxicity and costs. Median survival of patients with advanced non-small-cell lung cancer (NSCLC) ranges from 1.5 to several years depending on mutation status [1]. The balance between treatment risk and therapeutic benefit is difficult to define in routine clinical practice. There are multiple factors to consider: comorbidities, patient preference, biology, and extent of metastatic spread. Of special interest in this regard are the so-called imaging biomarkers, which could predict tumor aggressiveness more precisely than routine staging procedures alone, while also avoiding the procedural risk associated with repeat biopsies and histopathologic evaluation [2, 3]. Importantly, treatment response in targeted therapies may not be reflected appropriately by RECIST because of a different mechanism of action compared with direct cytotoxic agents [4, 5]. Therefore, morphological and functional imaging criteria have been explored for improved and earlier prediction of treatment response, such as volume reduction and changes of tumor parameters including echogenicity, apparent diffusion coefficient, tissue perfusion, PET tracer accumulation, and markers of ischemia [4, 6–13]. However, only few of these have been implemented in clinical decision-making algorithms thus far. For example, FDG uptake quantification is used for response evaluation in lymphoma [14], quantitative ultrasound parameters were found suitable for response assessment in breast cancer [15], and rectal cancer treatment response is evaluated by diffusion-weighted imaging [16]. However, heterogeneity of tumor biology, small study cohorts, and a lack of standardization hamper validation of these criteria. Alongside PET/CT and perfusion CT, multiparametric MRI has shown promising initial results in the characterization of pulmonary tumors [17] and the assessment of treatment response [8, 18–21]. Contrast uptake is a widely accepted biomarker for tissue vitality and is influenced by both tissue damage and vascular changes induced by the treatment [22, 23]. It is thought to correlate with tissue metabolism [4, 20]. Reduction in tumor perfusion has been shown in breast cancer under bevacizumab [24]. Similar effects have been described for different tumor entities under tyrosine-kinase inhibitors (TKI), such as glioblastoma and colorectal cancer. Notably, these effects have been shown as early as two days after treatment initiation [24]. The present study investigates the prognostic information of serial dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) in two histologically relatively homogeneous groups of patients with advanced pulmonary adenocarcinoma. Baseline and very early post-treatment contrast uptake curves under either platinum-based chemotherapy (PBC) or TKI were analyzed in conjunction with the subsequent clinical course.
Results
98 patients with sufficient imaging and clinical data were finally included in the study: 46 patients in the TKI group (15 male) and 52 patients in the PBC group (27 male). At 6 weeks, 27 (4 PBC, 23 TKI) showed partial treatment response. Responders and non-responders had generally similar baseline characteristics, with one notable exception: more never smokers responded (Table 1). All six patients without metastases (stage III disease) showed no response after 6 weeks of treatment.

Table 1: Patient characteristics

| | Responders* (n = 27) | Non-responders+ (n = 71) | Total population (n = 98) |
|---|---|---|---|
| TKI/PBC | 23/4¤ | 23/48¤ | 46/52 |
| Post-treatment MRI [days] | 8.6 ± 4.4¤ | 3.9 ± 4.3¤ | 5.2 ± 4.8 |
| Mean age | 64 ± 9 | 64 ± 9 | 64 ± 9 |
| Male | 10 (37%) | 32 (45%) | 42 (43%) |
| ECOG > 0 | 15 (56%) | 28 (39%) | 43 (44%) |
| Pathologic BMI♦ | 4 (15%) | 19 (27%) | 23 (24%) |
| Never smoker | 10 (37%)× | 10 (14%)× | 20 (20%) |
| Pack years | 15 ± 18¤ | 33 ± 23¤ | 28 ± 23 |
| Vital capacity [l] | 2.8 ± 1.1 | 3.0 ± 1.0 | 3.0 ± 1.0 |
| Baseline CEA [ng/ml] | 294 ± 1259 | 97 ± 241 | 149 ± 675 |
| Baseline Cyfra 21.1 [ng/ml] | 9.1 ± 11.3 | 9.1 ± 10.5 | 9.1 ± 10.7 |
| Baseline NSE [ng/ml] | 35 ± 24 | 27 ± 27 | 29 ± 27 |
| Tumor stage III | 0 (0%) | 6 (8%) | 6 (6%) |
| Tumor stage IV | 27 (100%) | 65 (92%) | 92 (94%) |
| T1 | 2 (7%) | 7 (10%) | 9 (9%) |
| T2 | 5 (19%) | 18 (25%) | 23 (24%) |
| T3 | 5 (19%) | 18 (25%) | 23 (24%) |
| T4 | 14 (52%) | 28 (39%) | 42 (43%) |
| N1 and N2 | 16 (59%) | 42 (59%) | 58 (59%) |
| N3 | 11 (41%) | 29 (41%) | 40 (41%) |
| M0 | 0 (0%)× | 6 (8%)× | 6 (6%) |
| 1 metastatic site | 4 (15%) | 23 (32%) | 27 (28%) |
| 2 metastatic sites | 11 (41%) | 21 (30%) | 32 (33%) |
| ≥ 3 metastatic sites | 12 (44%) | 21 (30%) | 33 (34%) |
| Liver metastases | 5 (19%) | 16 (23%) | 21 (21%) |
| Brain metastases | 9 (33%) | 24 (34%) | 33 (34%) |
| Bone metastases | 17 (63%) | 31 (44%) | 48 (49%) |
| Lung metastases | 13 (48%) | 22 (31%) | 35 (36%) |

Categorical variables as absolute values (relative value), tested by chi-square test; continuous variables as means ± SD, tested by Welch's t-test. *Defined as RECIST 1.1 PR in first follow-up CT. +Defined as RECIST 1.1 SD or PD in first follow-up CT. ♦BMI < 20 or > 30. ×P = 0.005. ¤P < 0.001.

In pre-treatment MRI, lung tumors of responders showed significantly higher contrast uptake 70 s after contrast administration compared to non-responders (Table 2). Consequently, the slope of the contrast curve was also higher. In the early post-treatment MRI, differences in contrast uptake were more pronounced: additional parameters, such as relative contrast uptake 40 s after administration, slope at 40 s, maximum contrast uptake, and AUC, were significantly higher in responders. Except for ΔAUC, pre-treatment to post-treatment differences of these parameters were not significant, indicating no measurable treatment effect on the present contrast curves. Notably, in responders, there was a significant reduction of ROI area between pre- and post-treatment MRI after 5.2 ± 4.8 (range 1 to 18) days. Patients who received TKI presented tumors with higher perfusion values compared to patients who received PBC.

Table 2: Comparison of responders (RECIST 1.1 PR at 6-week CT) and non-responders (SD or PD at 6-week CT)

| | Responders* (n = 27) | Non-responders+ (n = 71) | P-value× |
|---|---|---|---|
| Sum of diameters CT [cm] | 7.7 ± 4.6 | 8.4 ± 3.9 | 0.44 |
| Mean PFS ± SD [days] | 401 ± 211 | 317 ± 230 | 0.10 |
| Mean OS ± SD [days] | 706 ± 320 | 508 ± 293 | 0.004 |
| Pre-therapeutic MRI: 40 s rel. uptake [%] | 33.7 ± 15.6 | 28.4 ± 17.9 | 0.18 |
| Pre: slope 0–40 s [*10] | 9.4 ± 4.7 | 7.9 ± 4.9 | 0.19 |
| Pre: 70 s rel. uptake [%] | 49.0 ± 17.9 | 35.9 ± 25.6 | 0.02 |
| Pre: slope 0–70 s [*10] | 7.7 ± 2.4 | 5.6 ± 3.6 | 0.006 |
| Pre: max. uptake [SI] | 171 ± 24 | 162 ± 32 | 0.22 |
| Pre: AUC [SI/250 s] | 4029 ± 551 | 3849 ± 731 | 0.24 |
| Post-therapeutic MRI: 40 s rel. uptake [%] | 33.4 ± 12.0 | 25.3 ± 17.1 | 0.03 |
| Post: slope 0–40 s [*10] | 9.7 ± 3.7 | 7.1 ± 4.6 | 0.01 |
| Post: 70 s rel. uptake [%] | 47.3 ± 22.4 | 32.4 ± 24.1 | 0.007 |
| Post: slope 0–70 s [*10] | 7.7 ± 3.2 | 5.0 ± 3.3 | < 0.001 |
| Post: max. uptake [SI] | 182 ± 22 | 162 ± 25 | < 0.001 |
| Post: AUC [SI/250 s] | 4264 ± 509 | 3814 ± 610 | 0.001 |
| Area difference MR1–MR2 [cm2] | 3.1 ± 4.1 | 0.6 ± 2.5 | < 0.001 |
| Δ 40 s rel. uptake [%] | 0.3 ± 13.3 | 2.7 ± 15.8 | 0.49 |
| Δ slope 0–40 s [*10] | –0.4 ± 3.4 | 0.7 ± 4.2 | 0.24 |
| Δ 70 s rel. uptake [%] | 2.7 ± 22.1 | 2.7 ± 21.1 | 0.99 |
| Δ slope 0–70 s [*10] | 0.0 ± 2.8 | 0.5 ± 3.0 | 0.52 |
| Δ max. uptake [SI] | –9 ± 22 | 0 ± 20 | 0.05 |
| Δ AUC [SI/250 s] | –184 ± 370 | 29 ± 474 | 0.04 |

P-values < 0.05 (bold in the source) were considered significant. ×Means ± SD tested by Student's t-test. *Defined as RECIST 1.1 PR in first follow-up CT. +Defined as RECIST 1.1 SD or PD in first follow-up CT.

Figures 3, 4 and 5 illustrate three representative cases. The tumor of a TKI non-responder showed 75% uptake at 70 s after contrast administration, which dropped stepwise under treatment (Fig. 3). In contrast, a TKI responder showed a relatively low initial uptake of 40% at 70 s, discretely increasing to 60% (Fig. 4), while a responder to PBC treatment with central tumor necrosis showed a perfusion reduction (Fig. 5). Figure 6 demonstrates higher mortality (A, C) and shorter progression-free survival (B, D) for patients with contrast uptake below the median. [Fig. 6: Kaplan–Meier plots: OS (A, C) and PFS (B, D) by pre-treatment contrast uptake (A, B) and early post-treatment contrast uptake (C, D).]

Univariate analyses of clinical factors, pre-therapeutic imaging and post-therapeutic imaging: The relationships of clinical, pre-therapeutic imaging, and post-therapeutic imaging parameters with PFS and OS were analyzed using univariate Cox regression (Additional file 1: Table A.1). There was a significant association with several clinical parameters as well as pre-treatment and post-treatment imaging parameters.

Model selection and multivariate analyses: Using forward and backward selection procedures, four clinical parameters with optimal combined PFS or OS prediction were selected (Additional file 1: Table A.2 for OS and Table A.3 for PFS, first row). In the second step, the best model fit for OS was achieved using slope 0–70 s; for PFS, pre-therapeutic MRI did not lead to a better model fit (Additional file 1: Tables A.2 and A.3, second row). In the third step, the post-therapeutic relative uptake value at 40 s led to a better model fit for PFS (Additional file 1: Table A.3).
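The group comparisons in Tables 1 and 2 rest on Welch's or Student's t-test for continuous variables and the chi-square test for categorical ones. A minimal scipy sketch of both is shown below; the continuous samples are synthetic draws parameterized by the Table 2 summary statistics, and the 2×2 table uses the never-smoker counts from Table 1, so the outputs are illustrative rather than a reproduction of the study's analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative continuous variable (70 s relative uptake, %), drawn from
# normals with the means/SDs reported in Table 2.
responders = rng.normal(49.0, 17.9, size=27)
non_responders = rng.normal(35.9, 25.6, size=71)

# Welch's t-test: unequal variances, as used for Table 1.
t, p = stats.ttest_ind(responders, non_responders, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.3f}")

# Categorical variable (never smoker): rows = responder/non-responder,
# columns = never smoker yes/no, counts taken from Table 1.
table = np.array([[10, 17],
                  [10, 61]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```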
In contrast, for OS, the post-therapeutic MRI did not result in a significant improvement of the model (Additional file 1: Table A.2).
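The stepwise, AIC-driven Cox selection described above was done with the FAMoS algorithm in R; a simplified greedy forward-selection loop can be sketched in Python with lifelines, as below. The data frame, column names, and synthetic values are illustrative assumptions, not the study data or its exact selection procedure.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 98

# Illustrative data frame; variable names mirror the study's candidates.
df = pd.DataFrame({
    "pfs_days": rng.exponential(350, n).round() + 1,
    "event": rng.integers(0, 2, n),
    "ecog_gt_0": rng.integers(0, 2, n),
    "liver_mets": rng.integers(0, 2, n),
    "rel_uptake_40s_post": rng.normal(28, 16, n),
})

candidates = ["ecog_gt_0", "liver_mets", "rel_uptake_40s_post"]
selected = []  # chosen covariates
best_aic = np.inf

# Greedy forward selection on the partial-likelihood AIC.
improved = True
while improved and candidates:
    improved = False
    for var in list(candidates):
        cph = CoxPHFitter()
        cph.fit(df[["pfs_days", "event"] + selected + [var]],
                duration_col="pfs_days", event_col="event")
        if cph.AIC_partial_ < best_aic:
            best_aic, best_var, improved = cph.AIC_partial_, var, True
    if improved:
        selected.append(best_var)
        candidates.remove(best_var)

print("selected:", selected, "AIC:", round(best_aic, 1))
```

A backward pass (trying to drop each selected variable and keeping the drop if the AIC decreases) would complete the combined forward/backward scheme the study describes.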
Conclusion
Higher tumor perfusion of pulmonary adenocarcinomas, both before and shortly after treatment start, predicts response and is independently associated with better prognosis.
[ "Background", "Patients", "Clinical documentation", "MR examination", "CT examination", "Image analysis", "Statistics", "Univariate analyses of clinical factors, pre-therapeutic imaging and post-therapeutic imaging", "Model selection and multivariate analyses", "" ]
[ "Risk stratification and early therapy response assessment are of key importance for patients with cancer, in order to guide subsequent management and avoid unnecessary toxicity and costs. Median survival of patients with advanced non-small-cell lung cancer (NSCLC) ranges from 1.5 to several years depending on mutation status [1]. The balance between treatment risk and therapeutic benefit is difficult to define in routine clinical practice. There are multiple factors to consider: comorbidities, patient preference, biology, and extent of metastatic spread. Of special interest in this regard are the so-called imaging biomarkers, which could predict tumor aggressiveness more precisely than routine staging procedures alone, while also avoiding the procedural risk associated with repeat biopsies and histopathologic evaluation. [2, 3]\nImportantly, treatment response in targeted therapies may not be reflected appropriately by RECIST because of a different mechanism of action compared to direct cytotoxic agents [4, 5]. Therefore, morphological and functional imaging criteria have been explored for improved and earlier prediction of treatment response, such as volume reduction, change of tumor parameters including echogenicity, apparent diffusion coefficient, tissue perfusion, PET tracer accumulation, markers of ischemia [4, 6–13]. However, only few of these have been implemented in clinical decision-making algorithms thus far. For example, FDG uptake quantification is used for response evaluation in lymphoma [14], quantitative ultrasound parameters were found suitable for response assessment in breast cancer [15], and rectal cancer treatment response is evaluated by diffusion weighted imaging [16]. However, heterogeneity of tumor biology, small study cohorts and lack of standardization hampers validation of these criteria. Alongside PET/CT and perfusion CT, multiparametric MRI has shown promising initial results in characterization of pulmonary tumors [17] and assessment of treatment response [8, 18–21].\nContrast uptake is a widely accepted biomarker for tissue vitality and influenced by both tissue damage and vascular changes induced by the treatment [22, 23]. It is thought to correlate with tissue metabolism [4, 20]. Reduction in tumor perfusion has been shown in breast cancer under bevacizumab [24]. Similar effects have been described for different tumor entities under tyrosine-kinase inhibitors (TKI), like glioblastomas and colorectal cancer. Notably, these effects have been shown as early as two days after treatment initiation [24].\nThe present study investigates the prognostic information of serial dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) in two histologically relatively homogeneous groups of patients with advanced pulmonary adenocarcinoma. Baseline and very early post-treatment contrast uptake curves under either platinum-based chemotherapy (PBC) or TKI were analyzed in conjunction with the subsequent clinical course.", "Between November 2016 and July 2019, 150 patients with advanced pulmonary adenocarcinoma and a measureable lesion of at least 2 cm in size under first line therapy were included in this prospective study (Fig. 1). Treatment was performed according to guidelines after consultation of the interdisciplinary tumor board. Patients undergoing radiation therapy of the primary tumor or local lymph nodes within the first 3 months were excluded. 
All included patients underwent pre-treatment and post-treatment MRI scans of high quality with few motion/pulsation artifacts, subjectively sufficient contrast enhancement and complete coverage of the primary tumor.Fig. 1Flowchart of study patients\nFlowchart of study patients", "Baseline patient and tumor characteristics were collected systematically from the medical records: body-mass-index (BMI), pulmonary function parameters, Eastern Cooperative Oncology Group (ECOG), smoking state including pack years and tumor biology (histology, mutation status, programmed death-ligand 1 (PD-L1) tumor proportion score; blood levels of the tumor markers carcinoembryonic antigen (CEA), Cytokeratin-fragment (Cyfra) 21.1, neuron-specific enolase (NSE), tumor stage (TNM 8th edition)).\nAll patients underwent routine CT and clinical work-up at maximum 4 weeks before and every 6 weeks after treatment initiation. RECIST 1.1 based response assessment was used as the gold standard [25]. Progression-free survival (PFS) was calculated as days between first MRI and follow-up CT with first progression or clinical progression in medical records. The imaging independent overall survival (OS) was calculated as days between first MRI and date of death.", "According to our study design (Fig. 1), all MRI examinations of the lung were performed on the same 1.5T scanner (Magnetom Aera, Siemens, Erlangen, Germany). First MRI was performed at the day of treatment initiation (TKI orally daily or PBC intravenously every 3 weeks). Second MRI was performed one day after treatment start (PBC) or 1 week after treatment start (TKI).\nAxial 3D volumetric interpolated breath-hold gradient echo T1 weighed fat saturated (frequency selective) dynamic contrast-enhanced sequences (T1 vibe) were acquired with the following parameters: 24 slices of matrix 320 × 180 pixels, slice thickness 4 mm, pixel bandwidth 540 Hz, repetition time 3.6 s, echo time 1.65s, flip angle 5°. This resulted in an acquisition time of 10 s for 24 slices and 30 s for 80 slices. After non-contrast series, contrast media was injected via a cubital vein with a flow of 1.5 ml/s followed by a 30 ml chaser bolus (1 mmol/kg body weight gadobutrol; Bayer, Leverkusen, Germany). Dynamic imaging sequences were triggered by bolus tracking sequence in the pulmonary trunk in coronal plane. In one single 30 s long breath hold, three repeated small image stacks covering the primary tumor with 24 images were obtained 10 s, 20 and 40 s after contrast administration (Fig. 2). At 70 s, 130 and 250 s delay whole thorax imaging (80 images each) was performed, each during separate 30 s breath holds. Note that time between contrast administration is simplified as a uniform 10 s interval. Time steps are 0 s (non-contrast), 10 s, 20 s, 40 s, 70 s, 130 and 250 s. Breath holding was instructed automatically between the sequences [26]. Overall MR acquisition time was around 15 min.Fig. 2MRI protocol: note, that at time point 10 s, 20 s, and 40 s only a small image stack was obtained covering the tumor, whereas all other time points are covering the whole thorax\nMRI protocol: note, that at time point 10 s, 20 s, and 40 s only a small image stack was obtained covering the tumor, whereas all other time points are covering the whole thorax", "CT scans (max. 3 mm slices, no motion artifacts, at maximum 1 month before treatment start) were obtained as part of routine clinical care. 
Most scans were obtained with a Somatom Definition AS64 scanner (Siemens, Erlangen, Germany) with application of iodinated contrast media.", "To compensate for respiration-related misplacement at each time step of DCE-MRI, a free-hand region of interest (ROI) was placed around the whole tumor at the level of the widest tumor diameter, sparing airways and vessels. Care was taken in each examination, pre- and post-treatment, that the ROI was placed in an equivalent anatomical position. The ROI area was recorded for each MRI. As reference, ROIs were placed in the pectoral muscle; normalized enhancement curves are shown exemplarily in Figs. 3, 4 and 5. MR analysis for pre-treatment and post-treatment measurement and documentation took around 30 min. ROI placement was performed in our routine image viewer (Synapse PACS, Fujifilm, Minato, Japan); results were documented in Microsoft Excel 2019 (Redmond, Washington, USA). Internal reproducibility was confirmed by a single observer: in 16 patients, repeated measurements were carried out at a time interval of 6 months. The intraclass correlation coefficient was between 0.96 and 0.99 for signal ratios at 0 s, 40 s and 70 s, for relative uptake at 40 s and at 70 s, and for the slope values (explained in the next section).
Fig. 3 DCE MRI of a 69-year-old female non-responder, smoker (15 pack years), adenocarcinoma of the right upper lobe (blue arrow), T3N1M1(Oss), received TKI, progression-free survival 117 days (progression by new lymph node metastases), overall survival 143 days. Pre-therapeutic strong uptake followed by post-therapeutic reduced uptake, accompanied by early progression and short survival. Note: MR2 at day 11 due to scheduling delay; MR3 given additionally. Left: representative time points of DCE MRI pre-therapeutic (day 0) and post-therapeutic. Right: semiquantitative contrast enhancement curves (top: absolute SI of tumor; middle: relative contrast enhancement; bottom: contrast-normalized tumor-to-muscle SI).
Fig. 4 DCE MRI of an 81-year-old female responder, never smoker, adenocarcinoma of the left upper lobe (blue arrow), T4N1M1(Hep, Oss), received TKI, progression-free survival 234 days (progression with new liver metastases), overall survival 1182 days. This relatively long PFS/OS goes along with a minimal increase of contrast enhancement, in line with the calculated negative association of relative uptake after 40 s in post-therapeutic MRI with PFS.
Fig. 5 DCE MRI of a 72-year-old male responder, smoker (50 pack years), adenocarcinoma of the left upper lobe (blue arrow), T3N2M1(Adr, Oss), received PBC, lost to follow-up after 79 days without progression. At day 7, central necrosis in the tumor is seen. Initially the tumor shows moderate contrast uptake; this is reduced early after therapy, and necrosis is visible at 70 s post contrast injection and later. Note the huge costal metastasis, which changed minimally during the course of therapy (red arrow).
The following semiquantitative parameters were calculated from the perfusion curves: relative contrast uptake at 40 and 70 s, maximal uptake, and wash-in contrast kinetics (0 to 40 s, 0 to 70 s). Relative tumor uptake (Rel. UT) was calculated as Rel. UT = (SIt − SI0)/SI0, where SIt is the tumor signal intensity at time t and SI0 the tumor signal intensity before contrast administration. As a surrogate for total contrast enhancement, the area under the curve (AUC) was calculated as the sum of the mean signal for each time interval multiplied by that time interval, over the range 0–250 s. Image processing and documentation of clinical data and imaging were done by expert thoracic radiologists (at least 8 years of experience) and thoracic oncologists (at least 15 years of experience).", "Baseline variables are descriptively compared for both groups (responders, non-responders). Depending on the variable, mean ± standard deviation or absolute and relative frequencies are given. Associated p-values are calculated by Student’s t-test, Welch’s t-test, or the chi-square test, respectively. We report the median follow-up time calculated by the inverse Kaplan–Meier method, with corresponding 95% confidence intervals and the “stability interval” suggested by Schemper and Betensky, respectively [27, 28].
To assess the potential additional benefit of imaging parameters, a combined forward and backward selection procedure (the FAMoS algorithm) based on the AIC (Akaike information criterion) was used for model selection [29]. To construct a robust multivariate model for our study group of 98 patients, we performed the model selection in three steps. First, we performed a variable selection on a data set containing complete observations of all relevant clinical variables (therapy group, age, gender, abnormal body mass index, clinical status, smoking status, Cyfra 21.1, EGFR status, tumor stage and presence of liver metastases); the variables selected in this step were included in the starting model. In the second step, pre-therapeutic MRI variables could be included (forward selection) and clinical parameters could be excluded (backward selection), based on a data set containing all information on the relevant variables. In the third step, the variables selected before were again included in the starting model; post-therapeutic MRI variables were included if relevant, and previously selected variables could be excluded, based on the AIC criterion and a data set containing all information on the relevant variables. The model was applied to OS and PFS respectively, and the group variable (TKI, PBC) was always included in the model. The resulting Cox regression models are presented by means of hazard ratios (HR) with associated 95% confidence intervals and descriptive p-values of the selected variables, as well as the AIC and the number of observations and events in the model.
A p-value of < 0.05 was considered statistically significant. Missing values were not imputed, resulting in complete-case analysis with respect to the specific analysis. Analysis was done using R version 4.0.2 [30] and SPSS version 27 (IBM, Armonk, USA). To facilitate interpretation of the calculated hazard ratios, slope values were multiplied by 10 so that they are reported on a clinically relevant scale.", "The relationship of clinical, pre-therapeutic imaging and post-therapeutic imaging parameters with PFS and OS was analyzed using univariate Cox regression (Additional file 1: Table A.1). There were significant associations with several clinical parameters as well as with pre-treatment and post-treatment imaging parameters.", "Using forward and backward selection procedures, four clinical parameters with optimally combined PFS or OS prediction were selected (Additional file 1: Table A.2 for OS and Table A.3 for PFS, first row). In the second step, the best model fit for OS was achieved using the slope 0–70 s; for PFS, pre-therapeutic MRI did not lead to a better model fit (Additional file 1: Tables A.2 and A.3, second row). In the third step, the post-therapeutic relative uptake at 40 s led to a better model fit for PFS (Additional file 1: Table A.3), whereas for OS the post-therapeutic MRI did not significantly improve the model (Additional file 1: Table A.2).", "Additional file 1. Supplementary tables." ]
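The semiquantitative parameters defined above reduce to a few lines of array arithmetic. A minimal NumPy sketch under stated assumptions: the nominal 10 s sampling grid of the protocol, and the wash-in slope taken as relative uptake in percent divided by time and scaled by 10, which is one plausible reading of the values reported in the results tables; the function and variable names are illustrative, not taken from the study's software.

```python
import numpy as np

# Nominal acquisition grid of the protocol in seconds; 0 s is the non-contrast baseline.
TIMES = np.array([0, 10, 20, 40, 70, 130, 250], dtype=float)

def dce_parameters(si, times=TIMES):
    """Semiquantitative DCE parameters from a mean-ROI signal-intensity curve."""
    si = np.asarray(si, dtype=float)
    rel = 100.0 * (si - si[0]) / si[0]          # relative uptake in %, (SIt - SI0)/SI0
    i40 = int(np.where(times == 40)[0][0])
    i70 = int(np.where(times == 70)[0][0])
    return {
        "rel_uptake_40s": rel[i40],
        "rel_uptake_70s": rel[i70],
        "max_uptake": si.max(),
        # wash-in slopes scaled by 10, as in the reported tables (assumed definition)
        "slope_0_40": rel[i40] / 40.0 * 10.0,
        "slope_0_70": rel[i70] / 70.0 * 10.0,
        # AUC 0-250 s: mean signal of each interval multiplied by the interval length
        "auc": float(np.sum((si[1:] + si[:-1]) / 2.0 * np.diff(times))),
    }

print(dce_parameters([100, 110, 120, 134, 149, 152, 150]))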
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Materials and methods", "Patients", "Clinical documentation", "MR examination", "CT examination", "Image analysis", "Statistics", "Results", "Univariate analyses of clinical factors, pre-therapeutic imaging and post-therapeutic imaging", "Model selection and multivariate analyses", "Discussion", "Conclusion", "Supplementary Information", "" ]
[ "Risk stratification and early therapy response assessment are of key importance for patients with cancer, in order to guide subsequent management and avoid unnecessary toxicity and costs. Median survival of patients with advanced non-small-cell lung cancer (NSCLC) ranges from 1.5 to several years depending on mutation status [1]. The balance between treatment risk and therapeutic benefit is difficult to define in routine clinical practice. There are multiple factors to consider: comorbidities, patient preference, biology, and extent of metastatic spread. Of special interest in this regard are the so-called imaging biomarkers, which could predict tumor aggressiveness more precisely than routine staging procedures alone, while also avoiding the procedural risk associated with repeat biopsies and histopathologic evaluation. [2, 3]\nImportantly, treatment response in targeted therapies may not be reflected appropriately by RECIST because of a different mechanism of action compared to direct cytotoxic agents [4, 5]. Therefore, morphological and functional imaging criteria have been explored for improved and earlier prediction of treatment response, such as volume reduction, change of tumor parameters including echogenicity, apparent diffusion coefficient, tissue perfusion, PET tracer accumulation, markers of ischemia [4, 6–13]. However, only few of these have been implemented in clinical decision-making algorithms thus far. For example, FDG uptake quantification is used for response evaluation in lymphoma [14], quantitative ultrasound parameters were found suitable for response assessment in breast cancer [15], and rectal cancer treatment response is evaluated by diffusion weighted imaging [16]. However, heterogeneity of tumor biology, small study cohorts and lack of standardization hampers validation of these criteria. Alongside PET/CT and perfusion CT, multiparametric MRI has shown promising initial results in characterization of pulmonary tumors [17] and assessment of treatment response [8, 18–21].\nContrast uptake is a widely accepted biomarker for tissue vitality and influenced by both tissue damage and vascular changes induced by the treatment [22, 23]. It is thought to correlate with tissue metabolism [4, 20]. Reduction in tumor perfusion has been shown in breast cancer under bevacizumab [24]. Similar effects have been described for different tumor entities under tyrosine-kinase inhibitors (TKI), like glioblastomas and colorectal cancer. Notably, these effects have been shown as early as two days after treatment initiation [24].\nThe present study investigates the prognostic information of serial dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) in two histologically relatively homogeneous groups of patients with advanced pulmonary adenocarcinoma. Baseline and very early post-treatment contrast uptake curves under either platinum-based chemotherapy (PBC) or TKI were analyzed in conjunction with the subsequent clinical course.", "This study was approved by the ethics committee of the medical faculty of Heidelberg (S-445/2015), and all participants provided written informed consent.\nPatients Between November 2016 and July 2019, 150 patients with advanced pulmonary adenocarcinoma and a measureable lesion of at least 2 cm in size under first line therapy were included in this prospective study (Fig. 1). Treatment was performed according to guidelines after consultation of the interdisciplinary tumor board. 
Patients undergoing radiation therapy of the primary tumor or local lymph nodes within the first 3 months were excluded. All included patients underwent pre-treatment and post-treatment MRI scans of high quality, with few motion/pulsation artifacts, subjectively sufficient contrast enhancement and complete coverage of the primary tumor.
Fig. 1 Flowchart of study patients
Clinical documentation
Baseline patient and tumor characteristics were collected systematically from the medical records: body mass index (BMI), pulmonary function parameters, Eastern Cooperative Oncology Group (ECOG) status, smoking state including pack years, tumor biology (histology, mutation status, programmed death-ligand 1 (PD-L1) tumor proportion score), blood levels of the tumor markers carcinoembryonic antigen (CEA), cytokeratin fragment (Cyfra) 21.1 and neuron-specific enolase (NSE), and tumor stage (TNM 8th edition).
All patients underwent routine CT and clinical work-up at maximum 4 weeks before and every 6 weeks after treatment initiation. RECIST 1.1-based response assessment was used as the gold standard [25]. Progression-free survival (PFS) was calculated as the days between first MRI and the follow-up CT with first progression, or clinical progression in the medical records. The imaging-independent overall survival (OS) was calculated as the days between first MRI and date of death.
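PFS and OS as defined here are plain date arithmetic plus a censoring flag. A small pandas sketch with hypothetical records; the column names and the censoring-at-last-contact rule are illustrative assumptions, not taken from the study database.

```python
import pandas as pd

# Hypothetical per-patient records; column names are illustrative only.
df = pd.DataFrame({
    "first_mri":    pd.to_datetime(["2017-01-10", "2017-03-02"]),
    "progression":  pd.to_datetime(["2017-05-06", pd.NaT]),   # first RECIST/clinical progression
    "last_contact": pd.to_datetime(["2017-09-01", "2018-04-20"]),
    "death":        pd.to_datetime([pd.NaT, "2018-04-20"]),
})

# PFS: days from first MRI to progression, censored at last contact otherwise.
df["pfs_days"] = (df["progression"].fillna(df["last_contact"]) - df["first_mri"]).dt.days
df["pfs_event"] = df["progression"].notna().astype(int)

# OS: days from first MRI to death, censored at last contact otherwise.
df["os_days"] = (df["death"].fillna(df["last_contact"]) - df["first_mri"]).dt.days
df["os_event"] = df["death"].notna().astype(int)
```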
MR examination
According to our study design (Fig. 1), all MRI examinations of the lung were performed on the same 1.5 T scanner (Magnetom Aera, Siemens, Erlangen, Germany). The first MRI was performed on the day of treatment initiation (TKI orally daily or PBC intravenously every 3 weeks); the second MRI was performed one day after treatment start (PBC) or 1 week after treatment start (TKI).
Axial 3D volumetric interpolated breath-hold gradient-echo T1-weighted fat-saturated (frequency-selective) dynamic contrast-enhanced sequences (T1 VIBE) were acquired with the following parameters: 24 slices, matrix 320 × 180 pixels, slice thickness 4 mm, pixel bandwidth 540 Hz, repetition time 3.6 ms, echo time 1.65 ms, flip angle 5°. This resulted in an acquisition time of 10 s for 24 slices and 30 s for 80 slices. After the non-contrast series, contrast medium was injected via a cubital vein at a flow of 1.5 ml/s, followed by a 30 ml chaser bolus (1 mmol/kg body weight gadobutrol; Bayer, Leverkusen, Germany). Dynamic imaging sequences were triggered by a bolus-tracking sequence in the pulmonary trunk in the coronal plane. In one single 30 s breath hold, three repeated small image stacks of 24 images covering the primary tumor were obtained 10 s, 20 s and 40 s after contrast administration (Fig. 2). At 70 s, 130 s and 250 s delay, whole-thorax imaging (80 images each) was performed, each during a separate 30 s breath hold. Note that the time from contrast administration to the first series is simplified as a uniform 10 s interval, so the time steps are 0 s (non-contrast), 10 s, 20 s, 40 s, 70 s, 130 s and 250 s. Breath holding was instructed automatically between the sequences [26]. Overall MR acquisition time was around 15 min.
Fig. 2 MRI protocol: note that at time points 10 s, 20 s and 40 s only a small image stack covering the tumor was obtained, whereas all other time points cover the whole thorax
CT examination
CT scans (max. 3 mm slices, no motion artifacts, at maximum 1 month before treatment start) were obtained as part of routine clinical care. Most scans were obtained with a Somatom Definition AS64 scanner (Siemens, Erlangen, Germany) with application of iodinated contrast media.
Image analysis
To compensate for respiration-related misplacement at each time step of DCE-MRI, a free-hand region of interest (ROI) was placed around the whole tumor at the level of the widest tumor diameter, sparing airways and vessels. Care was taken in each examination, pre- and post-treatment, that the ROI was placed in an equivalent anatomical position. The ROI area was recorded for each MRI. As reference, ROIs were placed in the pectoral muscle; normalized enhancement curves are shown exemplarily in Figs. 3, 4 and 5. MR analysis for pre-treatment and post-treatment measurement and documentation took around 30 min. ROI placement was performed in our routine image viewer (Synapse PACS, Fujifilm, Minato, Japan); results were documented in Microsoft Excel 2019 (Redmond, Washington, USA). Internal reproducibility was confirmed by a single observer: in 16 patients, repeated measurements were carried out at a time interval of 6 months. The intraclass correlation coefficient was between 0.96 and 0.99 for signal ratios at 0 s, 40 s and 70 s, for relative uptake at 40 s and at 70 s, and for the slope values (explained in the next section).
Fig. 3 DCE MRI of a 69-year-old female non-responder, smoker (15 pack years), adenocarcinoma of the right upper lobe (blue arrow), T3N1M1(Oss), received TKI, progression-free survival 117 days (progression by new lymph node metastases), overall survival 143 days. Pre-therapeutic strong uptake followed by post-therapeutic reduced uptake, accompanied by early progression and short survival. Note: MR2 at day 11 due to scheduling delay; MR3 given additionally. Left: representative time points of DCE MRI pre-therapeutic (day 0) and post-therapeutic. Right: semiquantitative contrast enhancement curves (top: absolute SI of tumor; middle: relative contrast enhancement; bottom: contrast-normalized tumor-to-muscle SI).
Fig. 4 DCE MRI of an 81-year-old female responder, never smoker, adenocarcinoma of the left upper lobe (blue arrow), T4N1M1(Hep, Oss), received TKI, progression-free survival 234 days (progression with new liver metastases), overall survival 1182 days. This relatively long PFS/OS goes along with a minimal increase of contrast enhancement, in line with the calculated negative association of relative uptake after 40 s in post-therapeutic MRI with PFS.
Fig. 5 DCE MRI of a 72-year-old male responder, smoker (50 pack years), adenocarcinoma of the left upper lobe (blue arrow), T3N2M1(Adr, Oss), received PBC, lost to follow-up after 79 days without progression. At day 7, central necrosis in the tumor is seen. Initially the tumor shows moderate contrast uptake; this is reduced early after therapy, and necrosis is visible at 70 s post contrast injection and later. Note the huge costal metastasis, which changed minimally during the course of therapy (red arrow).
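The reproducibility figures quoted above (intraclass correlation 0.96–0.99 for repeated readings) can be computed, for example, with pingouin's intraclass_corr; the long-format layout and the values below are invented for illustration.

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per (patient, session) reading of, e.g.,
# relative uptake at 40 s; sessions are the two readings ~6 months apart.
long = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3],
    "session": ["t0", "t6m"] * 3,
    "rel_uptake_40s": [0.31, 0.33, 0.22, 0.21, 0.45, 0.44],
})

icc = pg.intraclass_corr(data=long, targets="patient",
                         raters="session", ratings="rel_uptake_40s")
print(icc[["Type", "ICC", "CI95%"]])
```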
The following semiquantitative parameters were calculated from the perfusion curves: relative contrast uptake at 40 and 70 s, maximal uptake, and wash-in contrast kinetics (0 to 40 s, 0 to 70 s). Relative tumor uptake (Rel. UT) was calculated as Rel. UT = (SIt − SI0)/SI0, where SIt is the tumor signal intensity at time t and SI0 the tumor signal intensity before contrast administration. As a surrogate for total contrast enhancement, the area under the curve (AUC) was calculated as the sum of the mean signal for each time interval multiplied by that time interval, over the range 0–250 s. Image processing and documentation of clinical data and imaging were done by expert thoracic radiologists (at least 8 years of experience) and thoracic oncologists (at least 15 years of experience).
Statistics
Baseline variables are descriptively compared for both groups (responders, non-responders). Depending on the variable, mean ± standard deviation or absolute and relative frequencies are given. Associated p-values are calculated by Student’s t-test, Welch’s t-test, or the chi-square test, respectively. We report the median follow-up time calculated by the inverse Kaplan–Meier method, with corresponding 95% confidence intervals and the “stability interval” suggested by Schemper and Betensky, respectively [27, 28].
To assess the potential additional benefit of imaging parameters, a combined forward and backward selection procedure (the FAMoS algorithm) based on the AIC (Akaike information criterion) was used for model selection [29]. To construct a robust multivariate model for our study group of 98 patients, we performed the model selection in three steps. First, we performed a variable selection on a data set containing complete observations of all relevant clinical variables (therapy group, age, gender, abnormal body mass index, clinical status, smoking status, Cyfra 21.1, EGFR status, tumor stage and presence of liver metastases); the variables selected in this step were included in the starting model. In the second step, pre-therapeutic MRI variables could be included (forward selection) and clinical parameters could be excluded (backward selection), based on a data set containing all information on the relevant variables. In the third step, the variables selected before were again included in the starting model; post-therapeutic MRI variables were included if relevant, and previously selected variables could be excluded, based on the AIC criterion and a data set containing all information on the relevant variables. The model was applied to OS and PFS respectively, and the group variable (TKI, PBC) was always included in the model. The resulting Cox regression models are presented by means of hazard ratios (HR) with associated 95% confidence intervals and descriptive p-values of the selected variables, as well as the AIC and the number of observations and events in the model.
A p-value of < 0.05 was considered statistically significant. Missing values were not imputed, resulting in complete-case analysis with respect to the specific analysis. Analysis was done using R version 4.0.2 [30] and SPSS version 27 (IBM, Armonk, USA).
To facilitate interpretation of the calculated hazard ratios, slope values were multiplied by 10 so that they are reported on a clinically relevant scale.", 
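The FAMoS algorithm cited above is an R package; for orientation, here is a deliberately simplified greedy forward-selection sketch of the same AIC idea using the lifelines library, with the therapy-group variable forced into every model as in the paper. The backward step is omitted and the column names are hypothetical, so this is a sketch of the principle, not the study's implementation.

```python
import pandas as pd
from lifelines import CoxPHFitter

def cox_aic(df, covariates, duration="os_days", event="os_event"):
    """Fit a Cox model on complete cases and return its partial-likelihood AIC."""
    cols = [duration, event] + list(covariates)
    cph = CoxPHFitter()
    cph.fit(df[cols].dropna(), duration_col=duration, event_col=event)
    return -2.0 * cph.log_likelihood_ + 2.0 * len(covariates), cph

def forward_select(df, candidates, forced=("therapy_group",), **kw):
    """Greedy AIC-based forward selection; the therapy-group variable is always
    kept, mirroring the paper's constraint. No backward step (simplification)."""
    selected = list(forced)
    best_aic, best_model = cox_aic(df, selected, **kw)
    improved = True
    while improved:
        improved, best_var = False, None
        for var in (c for c in candidates if c not in selected):
            aic, model = cox_aic(df, selected + [var], **kw)
            if aic < best_aic:
                best_aic, best_model, best_var, improved = aic, model, var, True
        if improved:
            selected.append(best_var)
    return selected, best_model
```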
"Ninety-eight patients with sufficient imaging and clinical data were finally included in the study: 46 in the TKI group (15 male) and 52 in the PBC group (27 male). At 6 weeks, 27 patients (4 PBC, 23 TKI) showed partial treatment response. Responders and non-responders had generally similar baseline characteristics, with one notable exception: more never smokers responded (Table 1). All six patients without metastases (stage III disease) showed no response after 6 weeks of treatment.

Table 1 Patient characteristics

| | Responders* (n = 27) | Non-responders+ (n = 71) | Total population (n = 98) |
|---|---|---|---|
| TKI/PBC | 23/4¤ | 23/48¤ | 46/52 |
| Post-treatment MRI [days] | 8.6 ± 4.4¤ | 3.9 ± 4.3¤ | 5.2 ± 4.8 |
| Mean age | 64 ± 9 | 64 ± 9 | 64 ± 9 |
| Male | 10 (37%) | 32 (45%) | 42 (43%) |
| ECOG > 0 | 15 (56%) | 28 (39%) | 43 (44%) |
| Pathologic BMI♦ | 4 (15%) | 19 (27%) | 23 (24%) |
| Never smoker | 10 (37%)× | 10 (14%)× | 20 (20%) |
| Pack years | 15 ± 18¤ | 33 ± 23¤ | 28 ± 23 |
| Vital capacity [l] | 2.8 ± 1.1 | 3.0 ± 1.0 | 3.0 ± 1.0 |
| Baseline CEA [ng/ml] | 294 ± 1259 | 97 ± 241 | 149 ± 675 |
| Baseline Cyfra 21.1 [ng/ml] | 9.1 ± 11.3 | 9.1 ± 10.5 | 9.1 ± 10.7 |
| Baseline NSE [ng/ml] | 35 ± 24 | 27 ± 27 | 29 ± 27 |
| Tumor: stage III | 0 (0%) | 6 (8%) | 6 (6%) |
| Tumor: stage IV | 27 (100%) | 65 (92%) | 92 (94%) |
| T-stage: T1 | 2 (7%) | 7 (10%) | 9 (9%) |
| T-stage: T2 | 5 (19%) | 18 (25%) | 23 (24%) |
| T-stage: T3 | 5 (19%) | 18 (25%) | 23 (24%) |
| T-stage: T4 | 14 (52%) | 28 (39%) | 42 (43%) |
| N-stage: N1 and N2 | 16 (59%) | 42 (59%) | 58 (59%) |
| N-stage: N3 | 11 (41%) | 29 (41%) | 40 (41%) |
| M-stage: M0 | 0 (0%)× | 6 (8%)× | 6 (6%) |
| M-stage: 1 site | 4 (15%) | 23 (32%) | 27 (28%) |
| M-stage: 2 sites | 11 (41%) | 21 (30%) | 32 (33%) |
| M-stage: ≥ 3 sites | 12 (44%) | 21 (30%) | 33 (34%) |
| Metastases: liver | 5 (19%) | 16 (23%) | 21 (21%) |
| Metastases: brain | 9 (33%) | 24 (34%) | 33 (34%) |
| Metastases: bone | 17 (63%) | 31 (44%) | 48 (49%) |
| Metastases: lung | 13 (48%) | 22 (31%) | 35 (36%) |

Categorical variables as absolute values (relative value), tested by chi-square test; continuous variables as means ± SD, tested by Welch’s t-test. *Defined as RECIST 1.1 PR in first follow-up CT. +Defined as RECIST 1.1 SD or PD in first follow-up CT. ♦BMI < 20 or > 30. ×P = 0.005. ¤P < 0.001.
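The group comparisons in Table 1 (and in Table 2 below) are standard two-sample tests. A scipy sketch using the never-smoker counts from Table 1; the continuous values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical pack-year values for the two groups (illustrative only).
pack_years_resp = np.array([0, 10, 20, 35, 5])
pack_years_nonresp = np.array([40, 30, 25, 50, 20, 35])

# Welch's t-test (unequal variances) for continuous baseline variables.
t, p_cont = stats.ttest_ind(pack_years_resp, pack_years_nonresp, equal_var=False)

# Chi-square test for categorical variables, e.g. never-smoker status,
# using the counts reported in Table 1 (10/27 vs. 10/71 never smokers).
#                 never-smoker  smoker
contingency = [[10, 17],        # responders
               [10, 61]]        # non-responders
chi2, p_cat, dof, expected = stats.chi2_contingency(contingency)
```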
In pre-treatment MRI, lung tumors of responders presented a significantly higher contrast uptake 70 s after contrast administration than those of non-responders (Table 2); consequently, the slope of the contrast curve was also higher. In the early post-treatment MRI, differences in contrast uptake were more pronounced: additional parameters, such as relative contrast uptake 40 s after administration, slope at 40 s, maximum contrast uptake, and AUC, were significantly higher in responders. Except for ΔAUC, pre-treatment to post-treatment differences of these parameters were not significant, indicating no measurable treatment effect on the contrast curves themselves. Notably, in responders there was a significant reduction of ROI area between pre- and post-treatment MRI after 5.2 ± 4.8 (range 1 to 18) days. Patients who received TKI presented tumors with higher perfusion values than patients who received PBC.

Table 2 Comparison of responders (RECIST 1.1 PR at 6-week CT) and non-responders (SD or PD at 6-week CT)

| | Responders* (n = 27) | Non-responders+ (n = 71) | P-value× |
|---|---|---|---|
| General features | | | |
| Sum of diameter CT [cm] | 7.7 ± 4.6 | 8.4 ± 3.9 | 0.44 |
| Mean PFS ± SD [days] | 401 ± 211 | 317 ± 230 | 0.10 |
| Mean OS ± SD [days] | 706 ± 320 | 508 ± 293 | 0.004 |
| Pre-therapeutic MRI | | | |
| 40 s rel. uptake [%] | 33.7 ± 15.6 | 28.4 ± 17.9 | 0.18 |
| Slope 0–40 s [*10] | 9.4 ± 4.7 | 7.9 ± 4.9 | 0.19 |
| 70 s rel. uptake [%] | 49.0 ± 17.9 | 35.9 ± 25.6 | 0.02 |
| Slope 0–70 s [*10] | 7.7 ± 2.4 | 5.6 ± 3.6 | 0.006 |
| Max. uptake [SI] | 171 ± 24 | 162 ± 32 | 0.22 |
| AUC [SI/250 s] | 4029 ± 551 | 3849 ± 731 | 0.24 |
| Post-therapeutic MRI | | | |
| 40 s rel. uptake [%] | 33.4 ± 12.0 | 25.3 ± 17.1 | 0.03 |
| Slope 0–40 s [*10] | 9.7 ± 3.7 | 7.1 ± 4.6 | 0.01 |
| 70 s rel. uptake [%] | 47.3 ± 22.4 | 32.4 ± 24.1 | 0.007 |
| Slope 0–70 s [*10] | 7.7 ± 3.2 | 5.0 ± 3.3 | < 0.001 |
| Max. uptake [SI] | 182 ± 22 | 162 ± 25 | < 0.001 |
| AUC [SI/250 s] | 4264 ± 509 | 3814 ± 610 | 0.001 |
| Difference pre- vs. post-therapy MRI | | | |
| Area difference MR1–MR2 [cm²] | 3.1 ± 4.1 | 0.6 ± 2.5 | < 0.001 |
| Δ 40 s rel. uptake [%] | 0.3 ± 13.3 | 2.7 ± 15.8 | 0.49 |
| Δ Slope 0–40 s [*10] | −0.4 ± 3.4 | 0.7 ± 4.2 | 0.24 |
| Δ 70 s rel. uptake [%] | 2.7 ± 22.1 | 2.7 ± 21.1 | 0.99 |
| Δ Slope 0–70 s [*10] | 0.0 ± 2.8 | 0.5 ± 3.0 | 0.52 |
| Δ Max. uptake [SI] | −9 ± 22 | 0 ± 20 | 0.05 |
| Δ AUC [SI/250 s] | −184 ± 370 | 29 ± 474 | 0.04 |

P-values < 0.05 (bold in the original table) were considered significant. ×Means ± SD tested by Student’s t-test. *Defined as RECIST 1.1 PR in first follow-up CT. +Defined as RECIST 1.1 SD or PD in first follow-up CT.

Figures 3, 4 and 5 illustrate three representative cases. The tumor of a TKI non-responder showed 75% uptake at 70 s after contrast administration, which dropped stepwise under treatment (Fig. 3). In contrast, a TKI responder showed an initially relatively low uptake of 40% at 70 s, discretely increasing to 60% (Fig. 4), while a responder to PBC treatment with central tumor necrosis presented a perfusion reduction (Fig. 5). Figure 6 demonstrates higher mortality (A, C) and shorter progression-free survival (B, D) for patients with contrast uptake below the median.
Fig. 6 Kaplan–Meier plots: OS (A, C) and PFS (B, D) dependent on pre-treatment contrast uptake (A, B) and early post-treatment contrast uptake (C, D)
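Kaplan–Meier comparisons like those in Fig. 6 — stratifying patients by the median of a perfusion marker — can be sketched with lifelines as follows; the data-frame column names are hypothetical.

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_median(df, marker, duration="os_days", event="os_event"):
    """Kaplan-Meier curves for patients above vs. below the median of a marker,
    with a log-rank comparison (analogous to the median splits in Fig. 6)."""
    high = df[marker] >= df[marker].median()
    ax = plt.subplot(111)
    for mask, label in [(high, f"{marker} >= median"), (~high, f"{marker} < median")]:
        kmf = KaplanMeierFitter()
        kmf.fit(df.loc[mask, duration], df.loc[mask, event], label=label)
        kmf.plot_survival_function(ax=ax)
    res = logrank_test(df.loc[high, duration], df.loc[~high, duration],
                       df.loc[high, event], df.loc[~high, event])
    ax.set_xlabel("days")
    ax.set_title(f"log-rank p = {res.p_value:.3f}")
    return ax
```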
Univariate analyses of clinical factors, pre-therapeutic imaging and post-therapeutic imaging
The relationship of clinical, pre-therapeutic imaging and post-therapeutic imaging parameters with PFS and OS was analyzed using univariate Cox regression (Additional file 1: Table A.1). There were significant associations with several clinical parameters as well as with pre-treatment and post-treatment imaging parameters.
Model selection and multivariate analyses
Using forward and backward selection procedures, four clinical parameters with optimally combined PFS or OS prediction were selected (Additional file 1: Table A.2 for OS and Table A.3 for PFS, first row). In the second step, the best model fit for OS was achieved using the slope 0–70 s, whereas for PFS pre-therapeutic MRI did not lead to a better model fit (Additional file 1: Tables A.2 and A.3, second row). In the third step, the post-therapeutic relative uptake at 40 s led to a better model fit for PFS (Additional file 1: Table A.3); in contrast, for OS the post-therapeutic MRI did not significantly improve the model (Additional file 1: Table A.2).", 
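The univariate screens reported in Additional file 1: Table A.1 correspond to one Cox model per parameter. A compact lifelines sketch; the column names are hypothetical, and complete-case handling via dropna mirrors the paper's non-imputation policy.

```python
import pandas as pd
from lifelines import CoxPHFitter

def univariate_cox(df, parameters, duration="pfs_days", event="pfs_event"):
    """One Cox model per parameter: hazard ratio, 95% CI and p-value."""
    rows = []
    for var in parameters:
        cph = CoxPHFitter()
        cph.fit(df[[duration, event, var]].dropna(),
                duration_col=duration, event_col=event)
        s = cph.summary.loc[var]
        rows.append({"variable": var,
                     "HR": s["exp(coef)"],
                     "CI_low": s["exp(coef) lower 95%"],
                     "CI_high": s["exp(coef) upper 95%"],
                     "p": s["p"]})
    return pd.DataFrame(rows)
```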
A long follow-up interval allowed regression analysis not only to mainly imaging dependent parameters as RECIST and PFS but also to overall survival. Inclusion criteria were broad, and as such quite representative for a clinical real-life setting.\nMain finding of our study is a significantly higher tumor perfusion of responders compared to non-responders in pre-therapeutic and early post-therapeutic MRI, which were clearly associated to PFS and OS and therefore predicts outcome before treatment start. This confirms former studies, which have also described the relationship between stronger baseline perfusion with better treatment response [8, 19, 31]. For example, Fraioli et al. demonstrated a higher baseline blood flow in 11 responders compared to 34 non-responders in 45 patients with advanced adenocarcinoma using CT perfusion [32]. Tissue perfusion may increase therapy susceptibility as capillarization is mandatory for exposure to therapeutic agents. Possibly, stronger perfused adenocarcinomas might also represent a less aggressive tumor biology as these malignancies may contain fewer microscopic necrotic areas. In our cohort, patients with positive EGFR status and TKI treatment showed higher perfusion values and a higher response rate. Although this is a confounding factor, our multivariate analyses demonstrate treatment independent association of baseline perfusion and prognosis.\nWe could not show clear treatment related changes of MRI parameters in this early phase of treatment, whereas the area reduction of the tumor was significantly higher in responders compared to non-responders. Therefore, in the setting of PBC or TKI without additional antiangiogenics, treatment-related changes were clinically informative only regarding size, but not functional parameters of the tumor. These results are similar to those of other studies, which have observed inferior predictive capacity for perfusion compared to metric changes of the tumor in several tumors, including lung and breast cancer [4, 33]. In contrast, in studies combining PBC with antiangiogenic treatment, blood flow as assessed by CT was reduced after one or more cycles of therapy in responders [32, 34, 35].\nSeveral quantitative DCE MRI studies of small and heterogeneous cohorts have documented reduced perfusion in treatment responders [6, 8, 19]. This finding is explained by tumor tissue damage due to reduced angiogenesis. Contrary to this, treatment-associated inflammation could increase tissue perfusion in the early phase of therapy. Differences in timing might explain conflicting results of studies. As prognostic marker, Tao et al. evaluated deconvolution perfusion MRI before treatment in 36 NSCLC patients, of which 6 were adenocarcinomas [19]. Response was evaluated after completion of radiation therapy after 1 month. Responders showed higher baseline ktrans and lower baseline kep and Ve. Chang et al. also identified prognostic impact of baseline perfusion markers in 11 NSCLC patients of whom 10 suffered from adenocarcinoma. In contrast to the data of Tao et al., high kep correlated with response. Similar to Tao et al., low Ve was predictive for response. As predictive parameter, ktrans reduction correlated with tumor diameter reduction after three cycles of chemotherapy [8]. Similarly, Xu et al. showed as early as 1 week after classic chemotherapy initiation a significantly reduced ktrans and Ve in 13 treatment responders compared to 9 non-responders [6]. 
This study included 11 patients with adenocarcinomas.\nNo predictive impact of change of ktrans was shown by de Langen et al. in 28 patients with non-squamous NSCLC 3 weeks after starting antiangiogenic therapy. In histogram analysis, increase of standard deviation of ktrans over 15% was associated with treatment failure [4]. Based on these studies, strong baseline tumor perfusion is a positive prognostic marker for NSCLC. Perfusion decrease under treatment seems to correlate with response, but study results differ in this point, potentially due to differences in tumor biology, treatment and timing of imaging. On the whole, OS as an end-point metric criteria other than RECIST have only be defined in a few NSCLC studies [4, 31]. Therefore, in most studies superiority of perfusion parameters to RECIST is not assessable and the benefit of this independent predictive marker additional to early RECIST assessment remains unclear.\nTo assess the interaction of different prognostic factors, multiparametric Cox regression was applied. In order to reduce the problem of multiple statistical testing, we performed a three-step variable pre-selection for multivariate analyses. Our multivariate variable selection model indicates a better OS prediction with parameters of pre-therapeutic and post-therapeutic MRI and a better PFS prediction with parameters of post-therapeutic MRI, additional to selected clinical parameters. Therefore, perfusion MRI of pulmonary adenocarcinomas may supplement peri-therapeutic risk stratification.\nSome important limitations of our study must be acknowledged:\nOne third of the patients have been excluded, most of them due to incomplete data, inferior imaging quality (i.e. low contrast enhancement) or scheduling delay of examinations. Other patients were excluded due to limitations in making tumor measurements, namely tumor atelectasis, diffuse tumor manifestation or too small tumor size. We believe that this exclusion process lead to more robust data analysis, but some exclusion criteria are subjective and confounding effects cannot be excluded. Reduced sample size was not suitable for evaluation of treatment subgroups.Our perfusion approach was a simplified method using breath hold technique without calculation of tissue permeability parameters addressing the limitations of patients with severe pulmonary diseases. The present method has low temporal resolution but high spatial coverage and high contrast resolution than other methods. Time interval of contrast administration to first image series was not documented and this interval was assumed to be 10 s. Therefore, this very early interval is confounded by individual circulation differences of the patients. Review of perfusion curves confirmed sufficient plot of contrast kinetics. For semiquantitative parameters, similar significance levels for perfusion changes in NSCLC were achieved compared to quantitative calculation [21]. Semiquantitative perfusion curve description is easy to perform and robust, whilst quantitative calculation may underlie high variation [21]. Criteria might easily be translated to different imaging techniques like CT and to different study centers. Future free-breathing sequences may provide higher temporal resolution. This may optimize data quality especially in the pre-contrast phase and the inflow phase and might help to calculate reliable tissue specific parameters.Free-hand ROI placement was carried out in one single layer and no histogram analyses were performed. 
Therefore, tumor changes could be underestimated. Free-hand ROI placement was necessary to compensate for different respiratory positions of the tumor; the automatic and semi-automatic registration algorithms we tested were not sufficient to compensate for these movements. Intraobserver reproducibility was excellent, whereas interobserver reproducibility was not tested in this study.

Only primary tumors were measured. This may not represent the prognostically most relevant tumor location, although this aspect is less relevant in the first-line therapy setting: primary tumors did not undergo local therapies, and systemic therapy effects should be evaluable at this site.

Progression-free survival and overall survival are confounded by treatment changes in the later course. However, treatment was not changed until the first follow-up CT after 6 weeks, and only a minority of patients underwent a treatment change before fulfilling RECIST progression criteria, owing to individual treatment regimes.
", "Better tumor perfusion of pulmonary adenocarcinomas predicts response both before and shortly after treatment start and is independently associated with a better prognosis.", "Additional file 1. Supplementary tables." ]
[ null, "materials|methods", null, null, null, null, null, null, "results", null, null, "discussion", "conclusion", "supplementary-material", null ]
[ "Non-small-cell lung carcinoma", "Early response", "Treatment outcome", "Response evaluation criteria in solid tumors", "Magnetic resonance imaging", "Perfusion", "Protein-tyrosine kinases", "Platinum", "Survival analysis", "Progression-free survival" ]
Background: Risk stratification and early therapy response assessment are of key importance for patients with cancer, in order to guide subsequent management and avoid unnecessary toxicity and costs. Median survival of patients with advanced non-small-cell lung cancer (NSCLC) ranges from 1.5 to several years depending on mutation status [1]. The balance between treatment risk and therapeutic benefit is difficult to define in routine clinical practice, as there are multiple factors to consider: comorbidities, patient preference, tumor biology, and extent of metastatic spread. Of special interest in this regard are so-called imaging biomarkers, which could predict tumor aggressiveness more precisely than routine staging procedures alone, while also avoiding the procedural risk associated with repeat biopsies and histopathologic evaluation [2, 3]. Importantly, treatment response to targeted therapies may not be reflected appropriately by RECIST because of a mechanism of action different from that of direct cytotoxic agents [4, 5]. Therefore, morphological and functional imaging criteria have been explored for improved and earlier prediction of treatment response, such as volume reduction and changes of tumor parameters including echogenicity, apparent diffusion coefficient, tissue perfusion, PET tracer accumulation, and markers of ischemia [4, 6–13]. However, only few of these have been implemented in clinical decision-making algorithms thus far. For example, FDG uptake quantification is used for response evaluation in lymphoma [14], quantitative ultrasound parameters were found suitable for response assessment in breast cancer [15], and rectal cancer treatment response is evaluated by diffusion-weighted imaging [16]. However, heterogeneity of tumor biology, small study cohorts and lack of standardization hamper validation of these criteria. Alongside PET/CT and perfusion CT, multiparametric MRI has shown promising initial results in the characterization of pulmonary tumors [17] and the assessment of treatment response [8, 18–21]. Contrast uptake is a widely accepted biomarker of tissue vitality and is influenced by both tissue damage and vascular changes induced by the treatment [22, 23]. It is thought to correlate with tissue metabolism [4, 20]. A reduction in tumor perfusion has been shown in breast cancer under bevacizumab [24]. Similar effects have been described for different tumor entities under tyrosine-kinase inhibitors (TKI), such as glioblastomas and colorectal cancer, as early as two days after treatment initiation [24]. The present study investigates the prognostic information of serial dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) in two histologically relatively homogeneous groups of patients with advanced pulmonary adenocarcinoma. Baseline and very early post-treatment contrast uptake curves under either platinum-based chemotherapy (PBC) or TKI were analyzed in conjunction with the subsequent clinical course.

Materials and methods: This study was approved by the ethics committee of the medical faculty of Heidelberg (S-445/2015), and all participants provided written informed consent.

Patients: Between November 2016 and July 2019, 150 patients with advanced pulmonary adenocarcinoma and a measurable lesion of at least 2 cm in size under first-line therapy were included in this prospective study (Fig. 1). Treatment was performed according to guidelines after consultation of the interdisciplinary tumor board.
Patients undergoing radiation therapy of the primary tumor or local lymph nodes within the first 3 months were excluded. All included patients underwent pre-treatment and post-treatment MRI scans of high quality, with few motion/pulsation artifacts, subjectively sufficient contrast enhancement and complete coverage of the primary tumor.

Fig. 1 Flowchart of study patients.

Clinical documentation: Baseline patient and tumor characteristics were collected systematically from the medical records: body mass index (BMI), pulmonary function parameters, Eastern Cooperative Oncology Group (ECOG) performance status, smoking status including pack years, tumor biology (histology, mutation status, programmed death-ligand 1 (PD-L1) tumor proportion score), blood levels of the tumor markers carcinoembryonic antigen (CEA), cytokeratin fragment (Cyfra) 21.1 and neuron-specific enolase (NSE), and tumor stage (TNM 8th edition). All patients underwent routine CT and clinical work-up at maximum 4 weeks before and every 6 weeks after treatment initiation. RECIST 1.1-based response assessment was used as the gold standard [25]. Progression-free survival (PFS) was calculated as the number of days between the first MRI and the follow-up CT showing first progression, or clinical progression in the medical records. The imaging-independent overall survival (OS) was calculated as the number of days between the first MRI and the date of death.
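Both survival endpoints reduce to date arithmetic with censoring. The sketch below is an illustration only; the column names are hypothetical and not the study's actual database schema:

```python
import pandas as pd

# Hypothetical records; columns are illustrative, not the study's actual schema.
df = pd.DataFrame({
    "first_mri":    pd.to_datetime(["2017-01-10", "2017-02-03"]),
    "progression":  pd.to_datetime(["2017-05-02", None]),  # first RECIST/clinical progression
    "death":        pd.to_datetime([None, "2018-01-20"]),
    "last_contact": pd.to_datetime(["2018-03-01", "2018-01-20"]),
})

# PFS: days from first MRI to progression, censored at last contact otherwise.
pfs_end = df["progression"].fillna(df["last_contact"])
df["pfs_days"]  = (pfs_end - df["first_mri"]).dt.days
df["pfs_event"] = df["progression"].notna().astype(int)

# OS: days from first MRI to death, censored at last contact otherwise.
os_end = df["death"].fillna(df["last_contact"])
df["os_days"]  = (os_end - df["first_mri"]).dt.days
df["os_event"] = df["death"].notna().astype(int)

print(df[["pfs_days", "pfs_event", "os_days", "os_event"]])
```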
MR examination: According to our study design (Fig. 1), all MRI examinations of the lung were performed on the same 1.5 T scanner (Magnetom Aera, Siemens, Erlangen, Germany). The first MRI was performed on the day of treatment initiation (TKI orally daily, or PBC intravenously every 3 weeks). The second MRI was performed one day after treatment start (PBC) or 1 week after treatment start (TKI).

Axial 3D volumetric interpolated breath-hold gradient-echo T1-weighted fat-saturated (frequency-selective) dynamic contrast-enhanced sequences (T1 VIBE) were acquired with the following parameters: 24 slices of matrix 320 × 180 pixels, slice thickness 4 mm, pixel bandwidth 540 Hz, repetition time 3.6 ms, echo time 1.65 ms, flip angle 5°. This resulted in an acquisition time of 10 s for 24 slices and 30 s for 80 slices. After the non-contrast series, contrast media (1 mmol/kg body weight gadobutrol; Bayer, Leverkusen, Germany) was injected via a cubital vein at a flow rate of 1.5 ml/s, followed by a 30 ml chaser bolus. The dynamic imaging sequences were triggered by a bolus-tracking sequence in the pulmonary trunk in the coronal plane. In one single 30 s breath hold, three repeated small image stacks of 24 images covering the primary tumor were obtained 10 s, 20 s and 40 s after contrast administration (Fig. 2). At 70 s, 130 s and 250 s delay, whole-thorax imaging (80 images each) was performed, each during a separate 30 s breath hold. Note that the interval between contrast administration and the first post-contrast series is simplified as a uniform 10 s. The time steps are therefore 0 s (non-contrast), 10 s, 20 s, 40 s, 70 s, 130 s and 250 s. Breath holding was instructed automatically between the sequences [26]. Overall MR acquisition time was around 15 min.
Fig. 2 MRI protocol: note that at time points 10 s, 20 s and 40 s only a small image stack covering the tumor was obtained, whereas all other time points cover the whole thorax.

CT examination: CT scans (maximum 3 mm slices, no motion artifacts, obtained at maximum 1 month before treatment start) were acquired as part of routine clinical care. Most scans were obtained with a Somatom Definition AS64 scanner (Siemens, Erlangen, Germany) with application of iodinated contrast media.

Image analysis: To compensate for respiration-related misplacement, a free-hand region of interest (ROI) was placed for each time step of DCE MRI around the whole tumor at the level of the widest tumor diameter, sparing airways and vessels. Care was taken in each examination, pre- and post-treatment, that the ROI was placed in an equivalent anatomical position. The ROI area was recorded for each MRI. As reference, ROIs were placed in the pectoral muscle; normalized enhancement curves are shown exemplarily in Figs. 3, 4 and 5. MR analysis for pre-treatment and post-treatment measurement and documentation took around 30 min. ROI placement was performed in our routine image viewer (Synapse PACS, Fujifilm, Minato, Japan), and results were documented in Microsoft Excel 2019 (Redmond, Washington, USA). Internal reproducibility was confirmed by a single observer: in 16 patients, repeated measurements were carried out after a time interval of 6 months. The intraclass correlation coefficient was between 0.96 and 0.99 for the signal ratios at 0 s, 40 s and 70 s, the relative uptake at 40 s and 70 s, and the slope values (explained in the next section).

Fig. 3 DCE MRI of a 69-year-old female non-responder, smoker (15 pack years), adenocarcinoma of the right upper lobe (blue arrow), T3N1M1(Oss), received TKI; progression-free survival 117 days (progression by new lymph node metastases), overall survival 143 days. Strong pre-therapeutic uptake followed by reduced post-therapeutic uptake, accompanied by early progression and short survival. Note: MR2 at day 11 due to scheduling delay; MR3 shown additionally. Left: representative time points of DCE MRI pre-therapeutic (day 0) and post-therapeutic. Right: semiquantitative contrast enhancement curves (top: absolute SI of the tumor; middle: relative contrast enhancement; bottom: contrast-normalized tumor-to-muscle SI).

Fig. 4 DCE MRI of an 81-year-old female responder, never smoker, adenocarcinoma of the left upper lobe (blue arrow), T4N1M1(Hep, Oss), received TKI; progression-free survival 234 days (progression with new liver metastases), overall survival 1182 days. The relatively long PFS/OS goes along with a minimal increase of contrast enhancement, in line with the calculated negative association between relative uptake after 40 s in post-therapeutic MRI and PFS.

Fig. 5 A 72-year-old male responder, smoker (50 pack years), adenocarcinoma of the left upper lobe (blue arrow), T3N2M1(Adr, Oss), received PBC; lost to follow-up after 79 days without progression. At day 7, central necrosis of the tumor is seen. Initially, the tumor shows moderate contrast uptake.
This uptake is reduced early after therapy, and necrosis is visible at 70 s post contrast injection and later. Note the huge costal metastasis, which changed minimally during the course of therapy (red arrow).

The following semiquantitative parameters were calculated from the perfusion curves: relative contrast uptake at 40 s and 70 s, maximal uptake, and wash-in contrast kinetics (slope 0–40 s, slope 0–70 s). Relative tumor uptake (Rel. UT) was calculated according to the formula Rel. UT = (SIt − SI0)/SI0, where SIt is the tumor signal intensity at time t and SI0 is the tumor signal intensity before contrast administration. As a surrogate for total contrast enhancement, the area under the curve (AUC) was calculated as the sum of the mean signal for each time interval multiplied by that time interval, over the range of 0–250 s. Image processing and documentation of clinical data and imaging were done by expert thoracic radiologists (at least 8 years of experience) and thoracic oncologists (at least 15 years of experience).
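For illustration, these descriptors can be computed directly from the sampled curve. In the sketch below, the time grid follows the protocol above, the signal values are invented, and the slope scaling (relative uptake in percent divided by time, multiplied by 10) is our reading of the magnitudes reported in Table 2 rather than a definition taken from the paper:

```python
import numpy as np

# Acquisition time points from the protocol [s]; tumor ROI signals are invented.
t  = np.array([0, 10, 20, 40, 70, 130, 250], dtype=float)
si = np.array([100, 115, 128, 140, 149, 146, 142], dtype=float)

si0 = si[0]
rel = (si - si0) / si0 * 100          # Rel. UT(t) = (SI_t - SI_0) / SI_0, in %

rel_40 = rel[t == 40][0]              # relative uptake at 40 s [%]
rel_70 = rel[t == 70][0]              # relative uptake at 70 s [%]
slope_40 = rel_40 / 40 * 10           # wash-in slope 0-40 s, scaled by 10
slope_70 = rel_70 / 70 * 10           # wash-in slope 0-70 s, scaled by 10
max_uptake = si.max()                 # maximal uptake [SI]

# AUC over 0-250 s as stated in the text: mean signal of each interval
# multiplied by the interval length, summed over all intervals.
auc = float(np.sum((si[:-1] + si[1:]) / 2 * np.diff(t)))

print(rel_40, rel_70, slope_40, slope_70, max_uptake, auc)
```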
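The intra-observer check reported above (repeated readings of 16 patients 6 months apart, ICC 0.96–0.99) corresponds to an intraclass correlation on long-format data. A sketch using the pingouin package, with hypothetical column names and invented values, might look like:

```python
import pandas as pd
import pingouin as pg

# One row per (patient, reading session); values are invented for illustration.
df = pd.DataFrame({
    "patient":  [1, 1, 2, 2, 3, 3, 4, 4],
    "session":  ["first", "second"] * 4,
    "uptake70": [49.0, 48.2, 30.5, 31.9, 41.3, 40.6, 22.8, 23.5],
})

icc = pg.intraclass_corr(data=df, targets="patient",
                         raters="session", ratings="uptake70")
print(icc[["Type", "ICC", "CI95%"]])
```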
Statistics: Baseline variables are compared descriptively for both groups (responders, non-responders). Depending on the variable, mean ± standard deviation or absolute and relative frequencies are given. Associated p-values are calculated by Student's t-test, Welch's t-test, or chi-square test, respectively. We report the median follow-up time calculated by the inverse Kaplan–Meier method, with corresponding 95% confidence intervals and the "stability interval" suggested by Schemper and Betensky, respectively [27, 28]. To assess the potential additional benefit of the imaging parameters, a combined forward and backward selection procedure (the FAMoS algorithm) based on the AIC (Akaike information criterion) was used for model selection [29]. To construct a robust multivariate model for our study group of 98 patients, we performed the model selection in three steps. First, we performed a variable selection on a data set containing complete observations of all relevant clinical variables (therapy group, age, gender, abnormal body mass index, clinical status, smoking status, Cyfra 21.1, EGFR status, tumor stage and presence of liver metastases); the variables selected in this step were included in the starting model. In the second step, pre-therapeutic MRI variables could be included (forward selection) and clinical parameters could be excluded (backward selection), based on a data set containing all information on the relevant variables. In the third step, the previously selected variables were again included in the starting model; post-therapeutic MRI variables were included if relevant, and previously selected variables could be excluded, based on the AIC criterion and a data set containing all information on the relevant variables. The model was applied to OS and PFS, respectively, and the group variable (TKI, PBC) was always included in the model. The resulting Cox regression models are presented by means of the hazard ratios (HR) with associated 95% confidence intervals and descriptive p-values of the selected variables, as well as the AIC and the numbers of observations and events in the model. A p-value of < 0.05 was considered statistically significant. Missing values were not imputed, resulting in a complete-case analysis with respect to each specific analysis. Analyses were done using R version 4.0.2 [30] and SPSS version 27 (IBM, Armonk, USA). To facilitate a better understanding of the calculated hazard ratios, slope values were multiplied by 10 so that they are reported on a clinically relevant scale.
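For the inverse (reverse) Kaplan–Meier estimate of the median follow-up, the event indicator is flipped: deaths are treated as censored observations and censorings as events. A minimal sketch with lifelines on invented data, assuming the os_days/death columns from the earlier snippet:

```python
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.DataFrame({"os_days": [143, 1182, 79, 400, 610, 220],
                   "death":   [1,   1,    0,  0,   1,   0]})

kmf = KaplanMeierFitter()
kmf.fit(df["os_days"], event_observed=1 - df["death"])  # reverse KM: flip events
print("median follow-up [days]:", kmf.median_survival_time_)
```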
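FAMoS itself is an R package; the following is only a rough Python illustration of one AIC-driven forward pass on top of a fixed starting model, using lifelines and its bundled Rossi data set as a stand-in, since the study data are not public. The covariate names are placeholders for the clinical and MRI parameters:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # stand-in data: 'week' = duration, 'arrest' = event

def cox_aic(covariates):
    """Fit a Cox model on the given covariates and return (AIC, model)."""
    m = CoxPHFitter().fit(df[covariates + ["week", "arrest"]], "week", "arrest")
    return 2 * len(covariates) - 2 * m.log_likelihood_, m

# Step 1: fixed "clinical" starting model (stand-in covariates).
selected = ["age", "prio"]
best_aic, best_model = cox_aic(selected)

# Step 2: forward selection over "imaging" candidates (stand-ins);
# a candidate is kept only if it lowers the AIC.
for cand in ["fin", "mar", "paro"]:
    aic, model = cox_aic(selected + [cand])
    if aic < best_aic:
        selected, best_aic, best_model = selected + [cand], aic, model

print(selected, round(best_aic, 1))
print(best_model.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```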
Results: 98 patients with sufficient imaging and clinical data were finally included in the study: 46 patients in the TKI group (15 male) and 52 patients in the PBC group (27 male). At 6 weeks, 27 patients (4 PBC, 23 TKI) showed a partial treatment response. Responders and non-responders had generally similar baseline characteristics, with one notable exception: more never smokers responded (Table 1). All six patients without metastases (stage III disease) showed no response after 6 weeks of treatment.

Table 1 Patient characteristics

                              Responders* (n = 27)   Non-responders+ (n = 71)   Total (n = 98)
TKI/PBC                       23/4¤                  23/48¤                     46/52
Post-treatment MRI [days]     8.6 ± 4.4¤             3.9 ± 4.3¤                 5.2 ± 4.8
Mean age                      64 ± 9                 64 ± 9                     64 ± 9
Male                          10 (37%)               32 (45%)                   42 (43%)
ECOG > 0                      15 (56%)               28 (39%)                   43 (44%)
Pathologic BMI♦               4 (15%)                19 (27%)                   23 (24%)
Never smoker                  10 (37%)×              10 (14%)×                  20 (20%)
Pack years                    15 ± 18¤               33 ± 23¤                   28 ± 23
Vital capacity [l]            2.8 ± 1.1              3.0 ± 1.0                  3.0 ± 1.0
Baseline CEA [ng/ml]          294 ± 1259             97 ± 241                   149 ± 675
Baseline Cyfra 21.1 [ng/ml]   9.1 ± 11.3             9.1 ± 10.5                 9.1 ± 10.7
Baseline NSE [ng/ml]          35 ± 24                27 ± 27                    29 ± 27
Tumor stage III               0 (0%)                 6 (8%)                     6 (6%)
Tumor stage IV                27 (100%)              65 (92%)                   92 (94%)
T1                            2 (7%)                 7 (10%)                    9 (9%)
T2                            5 (19%)                18 (25%)                   23 (24%)
T3                            5 (19%)                18 (25%)                   23 (24%)
T4                            14 (52%)               28 (39%)                   42 (43%)
N1 and N2                     16 (59%)               42 (59%)                   58 (59%)
N3                            11 (41%)               29 (41%)                   40 (41%)
M0                            0 (0%)×                6 (8%)×                    6 (6%)
M1, 1 site                    4 (15%)                23 (32%)                   27 (28%)
M1, 2 sites                   11 (41%)               21 (30%)                   32 (33%)
M1, ≥ 3 sites                 12 (44%)               21 (30%)                   33 (34%)
Liver metastases              5 (19%)                16 (23%)                   21 (21%)
Brain metastases              9 (33%)                24 (34%)                   33 (34%)
Bone metastases               17 (63%)               31 (44%)                   48 (49%)
Lung metastases               13 (48%)               22 (31%)                   35 (36%)

Categorical variables are given as absolute values (relative values) and tested by chi-square test; continuous variables as means ± SD, tested by Welch's t-test. *Defined as RECIST 1.1 PR in first follow-up CT. +Defined as RECIST 1.1 SD or PD in first follow-up CT. ♦BMI < 20 or > 30. ×P = 0.005. ¤P < 0.001.
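The baseline comparisons in Table 1 use Welch's t-test for continuous variables and the chi-square test for categorical ones; in outline they can be reproduced as below. The never-smoker counts follow Table 1, the continuous values are placeholders, and the exact p-values may differ slightly from the published ones depending on, e.g., the continuity correction:

```python
import numpy as np
from scipy import stats

# Welch's t-test for a continuous baseline variable (placeholder values).
responders     = np.array([8.2, 9.5, 7.7, 10.1, 6.9])
non_responders = np.array([3.1, 4.4, 2.9, 5.0, 3.8, 4.1])
t, p_cont = stats.ttest_ind(responders, non_responders, equal_var=False)

# Chi-square test for a categorical variable: never smokers vs. smokers,
# counts taken from Table 1 (10/27 responders, 10/71 non-responders).
table = np.array([[10, 17],
                  [10, 61]])
chi2, p_cat, dof, expected = stats.chi2_contingency(table)

print(p_cont, p_cat)
```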
In pre-treatment MRI, lung tumors of responders presented significantly higher contrast uptake 70 s after contrast administration compared to non-responders (Table 2); consequently, the slope of the contrast curve was also higher. In the early post-treatment MRI, differences in contrast uptake were more pronounced: additional parameters, such as relative contrast uptake 40 s after administration, slope at 40 s, maximum contrast uptake, and AUC, were significantly higher in responders. Except for ΔAUC, pre-treatment to post-treatment differences of these parameters were not significant, indicating no measurable treatment effect on the present contrast curves. Notably, in responders there was a significant reduction of the ROI area between pre- and post-treatment MRI after 5.2 ± 4.8 (range 1 to 18) days. Patients who received TKI presented tumors with higher perfusion values compared to patients who received PBC.

Table 2 Comparison of responders (RECIST 1.1 PR at 6-week CT) and non-responders (SD or PD at 6-week CT)

                                Responders* (n = 27)   Non-responders+ (n = 71)   P-value×
General features
Sum of diameter CT [cm]         7.7 ± 4.6              8.4 ± 3.9                  0.44
Mean PFS ± SD [days]            401 ± 211              317 ± 230                  0.10
Mean OS ± SD [days]             706 ± 320              508 ± 293                  0.004
Pre-therapeutic MRI
40 s rel. uptake [%]            33.7 ± 15.6            28.4 ± 17.9                0.18
Slope 0–40 s [*10]              9.4 ± 4.7              7.9 ± 4.9                  0.19
70 s rel. uptake [%]            49.0 ± 17.9            35.9 ± 25.6                0.02
Slope 0–70 s [*10]              7.7 ± 2.4              5.6 ± 3.6                  0.006
Max. uptake [SI]                171 ± 24               162 ± 32                   0.22
AUC [SI/250 s]                  4029 ± 551             3849 ± 731                 0.24
Post-therapeutic MRI
40 s rel. uptake [%]            33.4 ± 12.0            25.3 ± 17.1                0.03
Slope 0–40 s [*10]              9.7 ± 3.7              7.1 ± 4.6                  0.01
70 s rel. uptake [%]            47.3 ± 22.4            32.4 ± 24.1                0.007
Slope 0–70 s [*10]              7.7 ± 3.2              5.0 ± 3.3                  < 0.001
Max. uptake [SI]                182 ± 22               162 ± 25                   < 0.001
AUC [SI/250 s]                  4264 ± 509             3814 ± 610                 0.001
Difference pre- vs. post-therapeutic MRI
Area difference MR1–MR2 [cm2]   3.1 ± 4.1              0.6 ± 2.5                  < 0.001
Δ 40 s rel. uptake [%]          0.3 ± 13.3             2.7 ± 15.8                 0.49
Δ Slope 0–40 s [*10]            – 0.4 ± 3.4            0.7 ± 4.2                  0.24
Δ 70 s rel. uptake [%]          2.7 ± 22.1             2.7 ± 21.1                 0.99
Δ Slope 0–70 s [*10]            0.0 ± 2.8              0.5 ± 3.0                  0.52
Δ Max. uptake [SI]              – 9 ± 22               0 ± 20                     0.05
Δ AUC [SI/250 s]                – 184 ± 370            29 ± 474                   0.04

P-values < 0.05 were considered significant. ×Means ± SD tested by Student's t-test. *Defined as RECIST 1.1 PR in first follow-up CT. +Defined as RECIST 1.1 SD or PD in first follow-up CT.

Figures 3, 4 and 5 illustrate three representative cases. The tumor of a TKI non-responder showed a 75% uptake at 70 s after contrast administration, which dropped stepwise under treatment (Fig. 3). In contrast, a TKI responder showed an initially relatively low uptake of 40% at 70 s, discretely increasing to 60% (Fig. 4), while a responder to PBC treatment with central tumor necrosis presented a perfusion reduction (Fig. 5). Figure 6 demonstrates higher mortality (A, C) and shorter progression-free survival (B, D) of patients with contrast uptake below the median.

Fig. 6 Kaplan–Meier plots: OS (A, C) and PFS (B, D) dependent on pre-treatment contrast uptake (A, B) and early post-treatment contrast uptake (C, D).
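The group comparison behind Fig. 6 amounts to a median split on a perfusion parameter followed by Kaplan–Meier estimation and, typically, a log-rank test. A sketch with lifelines, on invented data and with illustrative column names:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({"uptake_70s": [49, 30, 55, 22, 41, 18],  # pre-treatment rel. uptake [%]
                   "os_days":    [700, 300, 900, 210, 640, 150],
                   "death":      [0, 1, 0, 1, 1, 1]})

high = df["uptake_70s"] >= df["uptake_70s"].median()

ax = None
for label, grp in [("above median", df[high]), ("below median", df[~high])]:
    kmf = KaplanMeierFitter()
    kmf.fit(grp["os_days"], event_observed=grp["death"], label=label)
    ax = kmf.plot_survival_function(ax=ax)

res = logrank_test(df.loc[high, "os_days"], df.loc[~high, "os_days"],
                   event_observed_A=df.loc[high, "death"],
                   event_observed_B=df.loc[~high, "death"])
print("log-rank p =", res.p_value)
```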
Univariate analyses of clinical factors, pre-therapeutic imaging and post-therapeutic imaging: The relationships of clinical, pre-therapeutic imaging and post-therapeutic imaging parameters with PFS and OS were analyzed using univariate Cox regression (Additional file 1: Table A.1). There was a significant association with several clinical parameters as well as with pre-treatment and post-treatment imaging parameters.

Model selection and multivariate analyses: Using the forward and backward selection procedures, four clinical parameters with optimally combined PFS or OS prediction were selected (Additional file 1: Table A.2 for OS and Table A.3 for PFS, first row). In the second step, the best model fit for OS was achieved using slope 0–70 s, whereas for PFS the pre-therapeutic MRI did not lead to a better model fit (Additional file 1: Tables A.2 and A.3, second row). In the third step, the post-therapeutic relative uptake value at 40 s led to a better model fit for PFS (Additional file 1: Table A.3). In contrast, for OS the results of the post-therapeutic MRI did not result in a significant improvement of the model (Additional file 1: Table A.2).

Discussion: Our study uses semiquantitative contrast wash-in kinetic parameters for the description of pre-therapeutic and very early post-therapeutic DCE MRI in 98 adenocarcinomas of the lung. To the best of our knowledge, this is the first study of purely advanced adenocarcinomas of the lung that evaluates early MRI perfusion changes under PBC or TKI therapy.
A long follow-up interval allowed regression analysis not only against mainly imaging-dependent endpoints such as RECIST and PFS but also against overall survival. Inclusion criteria were broad and as such quite representative of a real-life clinical setting.

The main finding of our study is significantly higher tumor perfusion in responders than in non-responders on pre-therapeutic and early post-therapeutic MRI; perfusion was clearly associated with PFS and OS and therefore predicts outcome before treatment start. This confirms former studies, which also described a relationship between stronger baseline perfusion and better treatment response [8, 19, 31]. For example, Fraioli et al. demonstrated higher baseline blood flow in 11 responders compared with 34 non-responders among 45 patients with advanced adenocarcinoma using CT perfusion [32]. Tissue perfusion may increase therapy susceptibility, as capillarization is mandatory for exposure to therapeutic agents. Strongly perfused adenocarcinomas might also represent a less aggressive tumor biology, as these malignancies may contain fewer microscopic necrotic areas. In our cohort, patients with positive EGFR status and TKI treatment showed higher perfusion values and a higher response rate. Although this is a confounding factor, our multivariate analyses demonstrate a treatment-independent association between baseline perfusion and prognosis.

We could not show clear treatment-related changes of MRI parameters in this early phase of treatment, whereas the area reduction of the tumor was significantly greater in responders than in non-responders. Therefore, in the setting of PBC or TKI without additional antiangiogenics, treatment-related changes were clinically informative only with regard to tumor size, not to functional tumor parameters. These results are similar to those of other studies, which observed an inferior predictive capacity of perfusion compared with metric tumor changes in several entities, including lung and breast cancer [4, 33]. In contrast, in studies combining PBC with antiangiogenic treatment, blood flow as assessed by CT was reduced in responders after one or more cycles of therapy [32, 34, 35].

Several quantitative DCE MRI studies of small and heterogeneous cohorts have documented reduced perfusion in treatment responders [6, 8, 19]. This finding is explained by tumor tissue damage due to reduced angiogenesis. Conversely, treatment-associated inflammation could increase tissue perfusion in the early phase of therapy; differences in timing might therefore explain the conflicting results across studies. As a prognostic marker, Tao et al. evaluated deconvolution perfusion MRI before treatment in 36 NSCLC patients, of whom 6 had adenocarcinomas [19]. Response was evaluated 1 month after completion of radiation therapy. Responders showed higher baseline ktrans and lower baseline kep and Ve. Chang et al. also identified a prognostic impact of baseline perfusion markers in 11 NSCLC patients, of whom 10 suffered from adenocarcinoma. In contrast to the data of Tao et al., high kep correlated with response; similar to Tao et al., low Ve was predictive of response. As a predictive parameter, ktrans reduction correlated with tumor diameter reduction after three cycles of chemotherapy [8]. Similarly, Xu et al. showed significantly reduced ktrans and Ve in 13 treatment responders compared with 9 non-responders as early as 1 week after initiation of classic chemotherapy [6]. This study included 11 patients with adenocarcinomas.
No predictive impact of ktrans change was shown by de Langen et al. in 28 patients with non-squamous NSCLC 3 weeks after starting antiangiogenic therapy; in histogram analysis, an increase of the standard deviation of ktrans by more than 15% was associated with treatment failure [4]. Based on these studies, strong baseline tumor perfusion is a positive prognostic marker for NSCLC. A perfusion decrease under treatment seems to correlate with response, but study results differ on this point, potentially due to differences in tumor biology, treatment and timing of imaging. Overall, OS as an endpoint and metric criteria other than RECIST have only been defined in a few NSCLC studies [4, 31]. Therefore, in most studies the superiority of perfusion parameters over RECIST is not assessable, and the benefit of this independent predictive marker in addition to early RECIST assessment remains unclear.

To assess the interaction of different prognostic factors, multiparametric Cox regression was applied. To reduce the problem of multiple statistical testing, we performed a three-step variable pre-selection for the multivariate analyses. Our multivariate variable selection model indicates better OS prediction with parameters of pre-therapeutic and post-therapeutic MRI, and better PFS prediction with parameters of post-therapeutic MRI, in addition to selected clinical parameters. Therefore, perfusion MRI of pulmonary adenocarcinomas may supplement peri-therapeutic risk stratification.

Some important limitations of our study must be acknowledged:

One third of the patients were excluded, most of them due to incomplete data, inferior imaging quality (i.e. low contrast enhancement) or scheduling delays of examinations. Other patients were excluded due to limitations in making tumor measurements, namely tumor atelectasis, diffuse tumor manifestation or too small tumor size. We believe that this exclusion process led to a more robust data analysis, but some exclusion criteria are subjective and confounding effects cannot be excluded. The reduced sample size was not suitable for an evaluation of treatment subgroups.

Our perfusion approach was a simplified method using a breath-hold technique without calculation of tissue permeability parameters, addressing the limitations of patients with severe pulmonary diseases. The present method has lower temporal resolution but higher spatial coverage and higher contrast resolution than other methods. The time interval from contrast administration to the first image series was not documented and was assumed to be 10 s; this very early interval is therefore confounded by individual circulatory differences between patients. Review of the perfusion curves confirmed sufficient plotting of the contrast kinetics. For semiquantitative parameters, similar significance levels for perfusion changes in NSCLC have been achieved compared with quantitative calculation [21]. Semiquantitative perfusion curve description is easy to perform and robust, whilst quantitative calculation may be subject to high variation [21]. The criteria might easily be translated to different imaging techniques such as CT and to different study centers. Future free-breathing sequences may provide higher temporal resolution; this may optimize data quality, especially in the pre-contrast and inflow phases, and might help to calculate reliable tissue-specific parameters.

Free-hand ROI placement was carried out in one single layer and no histogram analyses were performed. Therefore, tumor changes could be underestimated.
Some important limitations of our study must be acknowledged. One third of the patients were excluded, most of them due to incomplete data, inferior imaging quality (i.e. low contrast enhancement) or scheduling delays of examinations. Other patients were excluded due to limitations in making tumor measurements, namely tumor atelectasis, diffuse tumor manifestation or too small a tumor size. We believe that this exclusion process led to a more robust data analysis, but some exclusion criteria are subjective and confounding effects cannot be excluded. The reduced sample size was not suitable for the evaluation of treatment subgroups. Our perfusion approach was a simplified method using a breath-hold technique without calculation of tissue permeability parameters, addressing the limitations of patients with severe pulmonary diseases. The present method has lower temporal resolution but higher spatial coverage and contrast resolution than other methods. The time interval from contrast administration to the first image series was not documented and was assumed to be 10 s. This very early interval is therefore confounded by individual circulatory differences between patients. Review of the perfusion curves confirmed sufficient depiction of contrast kinetics. For semiquantitative parameters, similar significance levels for perfusion changes in NSCLC were achieved compared to quantitative calculation [21]. Semiquantitative perfusion curve description is easy to perform and robust, whilst quantitative calculation may be subject to high variation [21]. The criteria might easily be translated to different imaging techniques such as CT and to different study centers. Future free-breathing sequences may provide higher temporal resolution. This may optimize data quality, especially in the pre-contrast and inflow phases, and might help to calculate reliable tissue-specific parameters. Free-hand ROI placement was carried out in one single layer and no histogram analyses were performed; therefore, tumor changes could be underestimated. Free-hand ROI placement was necessary to compensate for different respiratory positions of the tumor; tested automatic and semi-automatic registration algorithms were not sufficient to compensate for these movements. Intraobserver reproducibility was excellent, whereas interobserver reproducibility was not tested in this study. Only primary tumors were measured. These may not represent the prognostically most relevant tumor location, although this aspect is less relevant in the first-line therapy setting. Primary tumors did not undergo local therapies, and systemic therapy effects should be evaluable at this site. Progression-free survival and overall survival are confounded by treatment changes in the later course. However, treatment was not changed until the first follow-up CT after 6 weeks. Only a minority of patients underwent a treatment change before fulfilling RECIST criteria of progression, owing to individual treatment regimens.
Conclusion: Better tumor perfusion of pulmonary adenocarcinomas predicts response before and also shortly after treatment start and is independently associated with better prognosis. Supplementary Information: Additional file 1. Supplementary tables.
Background: To explore the prognostic value of serial dynamic contrast-enhanced (DCE) MRI in patients with advanced pulmonary adenocarcinoma undergoing first-line therapy with either tyrosine-kinase inhibitors (TKI) or platinum-based chemotherapy (PBC). Methods: Patients underwent baseline (day 0, n = 98) and post-therapeutic DCE MRI (PBC: day + 1, n = 52; TKI: day + 7, n = 46) at 1.5T. Perfusion curves were acquired at 10, 40, and 70 s after contrast application and analysed semiquantitatively. Treatment response was evaluated at 6 weeks by CT (RECIST 1.1); progression-free survival (PFS) and overall survival (OS) were analysed with respect to clinical and perfusion parameters. Relative uptake was defined as the signal difference between contrast and non-contrast images, divided by the non-contrast signal. Predictors of survival were selected using Cox regression analysis. Median follow-up was 825 days. Results: In pre-therapeutic and early post-therapeutic MRI, treatment responders (n = 27) showed significantly higher relative contrast uptake within the tumor at 70 s after application as compared to non-responders (n = 71, p ≤ 0.02), with response defined as PR by RECIST 1.1 at 6 weeks. There was no significant change of perfusion at early MRI after treatment. In multivariate regression analysis of selected parameters, the strongest associations with PFS were relative uptake at 40 s in the early post-treatment MRI and pre-treatment clinical data (presence of liver metastases, ECOG performance status). Conclusions: Higher contrast uptake within the tumor at pre-treatment and early post-treatment MRI was associated with treatment response and better prognosis. DCE MRI of pulmonary adenocarcinoma may provide important prognostic information.
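In notation of my own choosing (the paper defines the quantity in words only), the semiquantitative relative uptake at acquisition time t is:

```latex
\mathrm{RU}(t) = \frac{S(t) - S_0}{S_0}, \qquad t \in \{10\,\text{s},\ 40\,\text{s},\ 70\,\text{s}\},
```

where S(t) is the tumor signal at time t after contrast application and S_0 the non-contrast signal.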
Background: Risk stratification and early therapy response assessment are of key importance for patients with cancer, in order to guide subsequent management and avoid unnecessary toxicity and costs. Median survival of patients with advanced non-small-cell lung cancer (NSCLC) ranges from 1.5 to several years depending on mutation status [1]. The balance between treatment risk and therapeutic benefit is difficult to define in routine clinical practice. There are multiple factors to consider: comorbidities, patient preference, biology, and extent of metastatic spread. Of special interest in this regard are the so-called imaging biomarkers, which could predict tumor aggressiveness more precisely than routine staging procedures alone, while also avoiding the procedural risk associated with repeat biopsies and histopathologic evaluation [2, 3]. Importantly, treatment response in targeted therapies may not be reflected appropriately by RECIST because of a different mechanism of action compared to direct cytotoxic agents [4, 5]. Therefore, morphological and functional imaging criteria have been explored for improved and earlier prediction of treatment response, such as volume reduction and changes of tumor parameters including echogenicity, apparent diffusion coefficient, tissue perfusion, PET tracer accumulation, and markers of ischemia [4, 6–13]. However, only few of these have been implemented in clinical decision-making algorithms thus far. For example, FDG uptake quantification is used for response evaluation in lymphoma [14], quantitative ultrasound parameters were found suitable for response assessment in breast cancer [15], and rectal cancer treatment response is evaluated by diffusion-weighted imaging [16]. However, heterogeneity of tumor biology, small study cohorts and a lack of standardization hamper validation of these criteria. Alongside PET/CT and perfusion CT, multiparametric MRI has shown promising initial results in the characterization of pulmonary tumors [17] and the assessment of treatment response [8, 18–21]. Contrast uptake is a widely accepted biomarker for tissue vitality and is influenced by both tissue damage and vascular changes induced by the treatment [22, 23]. It is thought to correlate with tissue metabolism [4, 20]. Reduction in tumor perfusion has been shown in breast cancer under bevacizumab [24]. Similar effects have been described for different tumor entities under tyrosine-kinase inhibitors (TKI), such as glioblastomas and colorectal cancer. Notably, these effects have been shown as early as two days after treatment initiation [24]. The present study investigates the prognostic information of serial dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) in two histologically relatively homogeneous groups of patients with advanced pulmonary adenocarcinoma. Baseline and very early post-treatment contrast uptake curves under either platinum-based chemotherapy (PBC) or TKI were analyzed in conjunction with the subsequent clinical course. Conclusion: Better tumor perfusion of pulmonary adenocarcinomas predicts response before and also shortly after treatment start and is independently associated with better prognosis.
Background: To explore the prognostic value of serial dynamic contrast-enhanced (DCE) MRI in patients with advanced pulmonary adenocarcinoma undergoing first-line therapy with either tyrosine-kinase inhibitors (TKI) or platinum-based chemotherapy (PBC). Methods: Patients underwent baseline (day 0, n = 98) and post-therapeutic DCE MRI (PBC: day + 1, n = 52; TKI: day + 7, n = 46) at 1.5T. Perfusion curves were acquired at 10, 40, and 70 s after contrast application and analysed semiquantitatively. Treatment response was evaluated at 6 weeks by CT (RECIST 1.1); progression-free survival (PFS) and overall survival (OS) were analysed with respect to clinical and perfusion parameters. Relative uptake was defined as the signal difference between contrast and non-contrast images, divided by the non-contrast signal. Predictors of survival were selected using Cox regression analysis. Median follow-up was 825 days. Results: In pre-therapeutic and early post-therapeutic MRI, treatment responders (n = 27) showed significantly higher relative contrast uptake within the tumor at 70 s after application as compared to non-responders (n = 71, p ≤ 0.02), with response defined as PR by RECIST 1.1 at 6 weeks. There was no significant change of perfusion at early MRI after treatment. In multivariate regression analysis of selected parameters, the strongest associations with PFS were relative uptake at 40 s in the early post-treatment MRI and pre-treatment clinical data (presence of liver metastases, ECOG performance status). Conclusions: Higher contrast uptake within the tumor at pre-treatment and early post-treatment MRI was associated with treatment response and better prognosis. DCE MRI of pulmonary adenocarcinoma may provide important prognostic information.
11,728
363
[ 514, 122, 187, 455, 55, 1035, 483, 56, 167, 16 ]
15
[ "tumor", "contrast", "treatment", "mri", "therapeutic", "uptake", "post", "time", "patients", "progression" ]
[ "treatment mri lung", "pulmonary adenocarcinomas predicts", "cell lung cancer", "therapeutic imaging parameters", "lung cancer nsclc" ]
null
[CONTENT] Non-small-cell lung carcinoma | Early response | Treatment outcome | Response evaluation criteria in solid tumors | Magnetic resonance imaging | Perfusion | Protein-tyrosine kinases | Platinum | Survival analysis | Progression-free survival [SUMMARY]
null
[CONTENT] Non-small-cell lung carcinoma | Early response | Treatment outcome | Response evaluation criteria in solid tumors | Magnetic resonance imaging | Perfusion | Protein-tyrosine kinases | Platinum | Survival analysis | Progression-free survival [SUMMARY]
[CONTENT] Non-small-cell lung carcinoma | Early response | Treatment outcome | Response evaluation criteria in solid tumors | Magnetic resonance imaging | Perfusion | Protein-tyrosine kinases | Platinum | Survival analysis | Progression-free survival [SUMMARY]
[CONTENT] Non-small-cell lung carcinoma | Early response | Treatment outcome | Response evaluation criteria in solid tumors | Magnetic resonance imaging | Perfusion | Protein-tyrosine kinases | Platinum | Survival analysis | Progression-free survival [SUMMARY]
[CONTENT] Non-small-cell lung carcinoma | Early response | Treatment outcome | Response evaluation criteria in solid tumors | Magnetic resonance imaging | Perfusion | Protein-tyrosine kinases | Platinum | Survival analysis | Progression-free survival [SUMMARY]
[CONTENT] Humans | Contrast Media | Magnetic Resonance Imaging | Prognosis | Liver Neoplasms | Adenocarcinoma | Treatment Outcome [SUMMARY]
null
[CONTENT] Humans | Contrast Media | Magnetic Resonance Imaging | Prognosis | Liver Neoplasms | Adenocarcinoma | Treatment Outcome [SUMMARY]
[CONTENT] Humans | Contrast Media | Magnetic Resonance Imaging | Prognosis | Liver Neoplasms | Adenocarcinoma | Treatment Outcome [SUMMARY]
[CONTENT] Humans | Contrast Media | Magnetic Resonance Imaging | Prognosis | Liver Neoplasms | Adenocarcinoma | Treatment Outcome [SUMMARY]
[CONTENT] Humans | Contrast Media | Magnetic Resonance Imaging | Prognosis | Liver Neoplasms | Adenocarcinoma | Treatment Outcome [SUMMARY]
[CONTENT] treatment mri lung | pulmonary adenocarcinomas predicts | cell lung cancer | therapeutic imaging parameters | lung cancer nsclc [SUMMARY]
null
[CONTENT] treatment mri lung | pulmonary adenocarcinomas predicts | cell lung cancer | therapeutic imaging parameters | lung cancer nsclc [SUMMARY]
[CONTENT] treatment mri lung | pulmonary adenocarcinomas predicts | cell lung cancer | therapeutic imaging parameters | lung cancer nsclc [SUMMARY]
[CONTENT] treatment mri lung | pulmonary adenocarcinomas predicts | cell lung cancer | therapeutic imaging parameters | lung cancer nsclc [SUMMARY]
[CONTENT] treatment mri lung | pulmonary adenocarcinomas predicts | cell lung cancer | therapeutic imaging parameters | lung cancer nsclc [SUMMARY]
[CONTENT] tumor | contrast | treatment | mri | therapeutic | uptake | post | time | patients | progression [SUMMARY]
null
[CONTENT] tumor | contrast | treatment | mri | therapeutic | uptake | post | time | patients | progression [SUMMARY]
[CONTENT] tumor | contrast | treatment | mri | therapeutic | uptake | post | time | patients | progression [SUMMARY]
[CONTENT] tumor | contrast | treatment | mri | therapeutic | uptake | post | time | patients | progression [SUMMARY]
[CONTENT] tumor | contrast | treatment | mri | therapeutic | uptake | post | time | patients | progression [SUMMARY]
[CONTENT] cancer | response | tissue | treatment | treatment response | risk | tumor | shown | assessment | diffusion [SUMMARY]
null
[CONTENT] uptake | table | sd | additional file table | file table | additional file | file | 10 | defined recist | treatment [SUMMARY]
[CONTENT] better | tumor perfusion pulmonary adenocarcinomas | independently associated better prognosis | better tumor | better tumor perfusion | better tumor perfusion pulmonary | start independently associated better | start independently | adenocarcinomas predicts response shortly | adenocarcinomas predicts response [SUMMARY]
[CONTENT] additional file | file | tumor | treatment | additional | contrast | supplementary tables | file supplementary tables | additional file supplementary tables | tables [SUMMARY]
[CONTENT] additional file | file | tumor | treatment | additional | contrast | supplementary tables | file supplementary tables | additional file supplementary tables | tables [SUMMARY]
[CONTENT] DCE | first | TKI | PBC [SUMMARY]
null
[CONTENT] 27 | 70 | 71 | p ≤ 0.02 | RECIST | 1.1 | 6 weeks ||| ||| 40 s [SUMMARY]
[CONTENT] ||| DCE [SUMMARY]
[CONTENT] DCE | first | TKI | PBC ||| day 0 | 98 | DCE | PBC | 52 | TKI | 46 | 1.5T. Perfusion | 10, 40 | 70 ||| 6 weeks | CT | RECIST | 1.1 ||| ||| Cox ||| 825 days ||| ||| 27 | 70 | 71 | p ≤ 0.02 | RECIST | 1.1 | 6 weeks ||| ||| 40 s ||| ||| DCE [SUMMARY]
[CONTENT] DCE | first | TKI | PBC ||| day 0 | 98 | DCE | PBC | 52 | TKI | 46 | 1.5T. Perfusion | 10, 40 | 70 ||| 6 weeks | CT | RECIST | 1.1 ||| ||| Cox ||| 825 days ||| ||| 27 | 70 | 71 | p ≤ 0.02 | RECIST | 1.1 | 6 weeks ||| ||| 40 s ||| ||| DCE [SUMMARY]
Decreased eosinophil counts and elevated lactate dehydrogenase predict severe COVID-19 in patients with underlying chronic airway diseases.
34810271
Several predictors of COVID-19 severity have been reported. However, chronic airway inflammation characterised by accumulated lymphocytes or eosinophils may affect the pathogenesis of COVID-19.
BACKGROUND
In this retrospective cohort study, we reviewed the medical records of all patients with laboratory-confirmed COVID-19 with chronic bronchitis, chronic obstructive pulmonary disease (COPD) and asthma admitted to the Sino-French New City Branch of Tongji Hospital, a large regional hospital in Wuhan, China, from 26 January to 3 April 2020. The Tongji Hospital Ethics Committee approved this study.
METHODS
There were 59 patients with chronic bronchitis, COPD and asthma. When compared with non-severe patients, severe patients were more likely to have decreased lymphocyte counts (0.6×10⁹/L vs 1.1×10⁹/L, p<0.001), eosinopaenia (<0.02×10⁹/L; 73% vs 24%, p<0.001), increased lactate dehydrogenase (LDH) (471.0 U/L vs 230.0 U/L, p<0.001) and elevated interleukin 6 level (47.4 pg/mL vs 5.7 pg/mL, p=0.002) on admission. Eosinopaenia and elevated LDH were significantly associated with disease severity in both univariate and multivariate regression models including the above variables. Moreover, eosinophil count and LDH level tended to return to the normal range over time in both groups after treatment, and severe patients recovered more slowly than non-severe patients, especially with respect to eosinophil count.
RESULTS
Eosinopaenia and elevated LDH are potential predictors of disease severity in patients with COVID-19 with underlying chronic airway diseases. In addition, they could indicate disease progression and treatment effectiveness.
CONCLUSIONS
[ "Humans", "Asthma", "Bronchitis, Chronic", "COVID-19", "Eosinophils", "Inflammation", "Lactate Dehydrogenases", "Pulmonary Disease, Chronic Obstructive", "Retrospective Studies" ]
8610616
Background
SARS-CoV-2 was first identified after sequencing relevant clinical samples from a cluster of unexplained viral pneumonia cases in December 2019 in Wuhan, Hubei Province, China. COVID-19, caused by SARS-CoV-2, was subsequently declared a pandemic by the WHO due to its aggressive large-scale spread in many countries, leading to thousands of confirmed cases worldwide every day. As of 15 November 2020, 53.7 million confirmed cases of COVID-19 and 1.3 million deaths had been reported worldwide, creating an urgent need for early identification of severe cases.1 Clinical evidence on SARS-CoV-2 has suggested several transmission routes between humans, with respiratory aerosol droplets undoubtedly being the main source of infection. SARS-CoV-2 is able to attack the respiratory system by binding to the cell entry receptor ACE2 on airway epithelial cells, resulting in pneumonia and respiratory failure in critically ill patients. Chronic bronchitis, chronic obstructive pulmonary disease (COPD) and asthma are common respiratory diseases with chronic airway inflammation.2–4 Eosinophils, neutrophils and macrophages in the innate immune response increase significantly in the airway and lungs during the initial phase of inflammation. Lymphocytopaenia has been reported in severe patients infected with SARS-CoV-2.5 Circulating eosinophil counts have also been reported to be decreased in patients with COVID-19 and associated with severity of the disease.6 Therefore, patients with underlying COPD, asthma and chronic bronchitis may have different inflammatory states after SARS-CoV-2 infection compared with patients without chronic airway inflammation. In this retrospective cohort study, we reviewed the medical records of 59 patients with laboratory-confirmed COVID-19 with underlying chronic airway inflammation and compared the demographic, clinical and radiological characteristics as well as laboratory results between severe and non-severe patients in this cohort. Potential predictors of disease severity were identified among the abnormal laboratory findings using univariate and multivariate regression models.
Methods
Study population and data collection The subjects of this study were adults with COVID-19 and underlying chronic respiratory diseases (admission date from 26 January to 3 April 2020) at the Sino-French New City Branch of Tongji Hospital. Severe and non-severe patients were included in the case and control groups, respectively. COVID-19 was diagnosed according to WHO interim guideline.7 Patients with chronic respiratory diseases were diagnosed according to a previous diagnosis. All patients were confirmed by positive findings in reverse-transcriptase PCR assay of SARS-CoV-2 RNA in throat swab specimens. The study was conducted on 15 June. Demographic information, clinical characteristics (including medical history, symptoms, comorbidities, smoking history and allergic history) and radiological results of each patient were obtained from the electronic medical record system of the Sino-French New City Branch of Tongji Hospital and analysed by three independent researchers. Severity of COVID-19 was staged according to the guidelines for diagnosis and treatment of COVID-19 published by the Chinese National Health Committee (version 5–7). Criteria for severity of COVID-19 Severe COVID-19 was diagnosed when patients met one of the following criteria: (1) respiratory distress with respiratory frequency ≥30 per minute; (2) pulse oximeter oxygen saturation ≤93% at rest; and (3) oxygenation index (artery partial pressure of oxygen/inspired oxygen fraction) ≤300 mm Hg. Laboratory testing Medical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function were collected for each patient from the electronic medical records. Statistical analysis All data were analysed with SPSS Statistics Software (V.26). The statistics for categorical variables were summarised as frequencies and percentages and were compared using χ2 test or Fisher’s exact test between different groups where appropriate. Continuous variables were described using median (IQR) and compared using Mann-Whitney U test. To explore the risk factors associated with disease severity, univariable and multivariable logistic regression models were used to estimate the OR and 95% CI. A two-sided α of less than 0.05 was considered statistically significant.
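The three severity criteria form a simple disjunction and translate directly into code. A minimal sketch with hypothetical parameter names; the thresholds are those stated above:

```python
def is_severe_covid19(resp_rate_per_min: float, spo2_rest_pct: float,
                      pao2_fio2_mmhg: float) -> bool:
    """Severe COVID-19 per the study criteria: respiratory rate >= 30/min,
    resting pulse-oximeter saturation <= 93%, or oxygenation index
    (PaO2/FiO2) <= 300 mm Hg."""
    return (resp_rate_per_min >= 30
            or spo2_rest_pct <= 93
            or pao2_fio2_mmhg <= 300)

# Example: RR 22/min, SpO2 91% at rest, PaO2/FiO2 310 mm Hg -> severe (SpO2 criterion).
print(is_severe_covid19(22, 91, 310))  # True
```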
Results
Demographics and clinical characteristics of patients with non-severe and severe COVID-19 with chronic airway diseases A total of 1888 patients were admitted. Fifty-nine patients with underlying chronic airway inflammation, including COPD (0.95%), asthma (0.53%) and chronic bronchitis (1.64%), were confirmed to have SARS-CoV-2 infection. Of the patients, 33 were classified as non-severe and 26 were classified as severe. Although COPD was more common in patients with severe COVID-19 when compared with patients with non-severe COVID-19 (42% vs 21%), the difference was not statistically significant. The median age of all patients was 71 years (IQR, 57–80) and more than half (54%) were over 70 years old. Majority (71%) of the patients were male (table 1). There was no significant difference in age and sex between non-severe and severe patients. Thirty-one (53%) patients had one or more comorbidities besides the three chronic airway diseases, with cardiovascular disease (46%) and endocrine system disease (15%) being the most common comorbidity. There were no significant differences in the presence of these comorbidities between patients with non-severe and severe COVID-19. Half of the patients had smoking histories or were current smokers. Demographics and clinical characteristics of patients with COVID-19 with chronic airway inflammation on admission Data are median (IQR) or n (%). P values comparing severe with non-severe patients were calculated by χ2 test, Fisher’s exact test or Mann-Whitney U test, as appropriate. COPD, chronic obstructive pulmonary disease. The most common symptoms were fever (83%), cough (73%), fatigue (47%) and dyspnoea (42%). Dyspnoea was more common in severe patients compared with non-severe patients (65% vs 24%, p=0.001) (table 1).
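The categorical comparisons above (e.g., dyspnoea in 65% of 26 severe vs 24% of 33 non-severe patients) follow the stated χ2/Fisher approach. A sketch with scipy, where the 2×2 counts are reconstructed from the reported percentages and may differ slightly from the actual table:

```python
from scipy.stats import fisher_exact

# Reconstructed 2x2 table for dyspnoea (rows: severe / non-severe;
# columns: symptom present / absent): 65% of 26 ~= 17, 24% of 33 ~= 8.
table = [[17, 9],
         [8, 25]]

odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.4f}")  # in the vicinity of the reported p = 0.001
```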
Laboratory findings of patients with non-severe and severe COVID-19 with chronic airway diseases When compared with non-severe patients, severe patients were more likely to have elevated neutrophil counts (8.2×10⁹/L vs 4.1×10⁹/L, p=0.001), decreased lymphocyte counts (0.6×10⁹/L vs 1.1×10⁹/L, p<0.001), eosinopaenia (<0.02×10⁹/L; 73% vs 24%, p<0.001), elevated D-dimer (>1 µg/mL; 88% vs 42%, p=0.001), increased LDH (471.0 U/L vs 230.0 U/L, p<0.001), elevated blood urea nitrogen (>9.5 mmol/L; 42% vs 3%, p<0.001), increased hypersensitive troponin I (>34 pg/mL; 48% vs 7%, p=0.001), and increased inflammation markers including CRP (126.2 mg/L vs 19.9 mg/L, p<0.001), procalcitonin (≥0.05 ng/mL; 96% vs 43%, p<0.001) and ferritin (1264.2 mg/L vs 293.6 mg/L, p=0.004) (table 2). Of note, significant differences in the expression of inflammation-related cytokines including IL-6, IL-8 and TNF-α were observed between the two groups, which were dramatically increased in severe patients. Laboratory findings of patients with COVID-19 with chronic airway inflammation on admission Data are median (IQR), n (%) or n/N (%), where N is the total number of patients with available data. P values comparing severe with non-severe patients were calculated by χ2 test, Fisher’s exact test or Mann-Whitney U test, as appropriate. IL, interleukin; TNF-α, tumour necrosis factor-α. Predictors of severity of COVID-19 in patients with chronic airway diseases To identify the predictors of severity of COVID-19 in patients with chronic airway diseases, we analysed the association between abnormal laboratory findings and disease severity with univariate and multivariate logistic regression models. Disease severity was significantly associated with all of the above-mentioned abnormal laboratory findings in univariate logistic regression analyses. In a multivariate regression model that incorporated lymphopaenia, eosinopaenia, elevated LDH and increased IL-6, eosinophil counts <0.02×10⁹/L (OR per one-unit decrease, 10.115 (95% CI 2.158 to 47.414), p=0.003) and LDH level >225 U/L (OR per one-unit increase, 22.300 (95% CI 2.179 to 228.247), p=0.009) were independent risk factors for disease severity (table 3). Our data suggest that decreased eosinophil counts and increased LDH levels may help clinicians identify severe COVID-19 in patients with chronic airway diseases. Predictors of severity of COVID-19 in patients with chronic airway diseases *Per one-unit increase. IL, interleukin; ref, reference; TNF-α, tumour necrosis factor-α.
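The univariate/multivariate screening described above can be reproduced with standard tooling; exponentiated logistic coefficients give the ORs with 95% CIs. A minimal statsmodels sketch on an invented table whose binary predictors stand in for eosinopaenia (<0.02×10⁹/L) and elevated LDH (>225 U/L):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical admission data: 1 = eosinophils below threshold, 1 = LDH above
# threshold, outcome 1 = severe COVID-19. Values are illustrative only.
df = pd.DataFrame({
    "eosinopaenia": [1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "elevated_ldh": [1, 1, 1, 0, 1, 0, 0, 0, 1, 1],
    "severe":       [1, 0, 0, 0, 1, 0, 1, 0, 1, 1],
})

X = sm.add_constant(df[["eosinopaenia", "elevated_ldh"]])
fit = sm.Logit(df["severe"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)   # OR per predictor
ci = np.exp(fit.conf_int())        # 95% CI on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```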
Eosinophil counts and LDH levels tend to return to normal range over time in non-severe patients We further analysed the eosinophil counts and LDH levels in patients with non-severe and severe COVID-19 with chronic bronchitis, COPD and asthma, respectively. We found that there was a significant difference in eosinophil counts and LDH levels between severe and non-severe patients with chronic bronchitis and COPD, but not in patients with asthma (figure 1). To observe the dynamic changes of eosinophil counts and LDH levels over time, we collected the eosinophil counts and LDH levels on the 5th, 10th, 15th, 20th, 25th and 30th days after admission. We found that eosinophil counts increased over time both in severe and non-severe patients. Meanwhile, LDH decreased over time (figure 2). Severe patients showed a slower recovery rate than non-severe patients. Of note, both eosinophil counts and LDH levels recovered more slowly in severe patients with COPD than those in severe patients with chronic bronchitis. Our data suggest that, as the disease recovers, eosinophil counts and LDH levels tend to return to normal range both in severe and non-severe patients, indicating a good therapeutic effect in patients with chronic airway diseases in COVID-19 treatment. Clinical characteristics of eosinophil and LDH in patients with COVID-19 with chronic airway inflammation. (A) Eosinophil counts in different subgroups. Eosinophil counts were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. (B) LDH levels in different subgroups. LDH levels were significantly increased in patients with severe COVID-19 with chronic bronchitis and COPD. Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase. Dynamic changes of eosinophil counts and LDH levels in patients with COVID-19 with chronic airway diseases. (A–D) Eosinophil counts increased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). (E–H) LDH levels decreased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001, compared with the eosinophil counts or LDH levels between severe and non-severe patients. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase. We further performed multivariate analysis for mortality in patients with COVID-19 with chronic airway inflammation using the above four variables and found that eosinophil count <0.02×10⁹/L (OR per one-unit decrease, 18.000 (95% CI 1.929 to 167.986), p=0.011) was the only independent risk factor for mortality (online supplemental table 1). Moreover, Kaplan-Meier survival curves indicated that patients with COVID-19 with eosinopaenia or elevated LDH had worse survival probability (p<0.05) (online supplemental figure 1). This suggests that eosinopaenia and elevated LDH are also potential predictors of mortality in patients with COVID-19 with underlying chronic airway diseases.
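The survival comparison reported above follows the usual Kaplan-Meier pattern. A minimal lifelines sketch, again on invented follow-up data stratified by admission eosinopaenia:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up: days observed, 1 = death, grouped by admission eosinopaenia.
df = pd.DataFrame({
    "days":         [12, 30, 18, 45, 60, 25, 55, 40, 15, 50],
    "death":        [1,  0,  1,  0,  0,  1,  0,  0,  1,  0],
    "eosinopaenia": [1,  0,  1,  0,  0,  1,  0,  1,  1,  0],
})

kmf = KaplanMeierFitter()
ax = None
for label, grp in df.groupby("eosinopaenia"):
    kmf.fit(grp["days"], grp["death"], label=f"eosinopaenia={label}")
    ax = kmf.plot_survival_function(ax=ax)

# Log-rank test between the two groups.
low, high = df[df.eosinopaenia == 1], df[df.eosinopaenia == 0]
res = logrank_test(low["days"], high["days"], low["death"], high["death"])
print(f"log-rank p = {res.p_value:.3f}")
```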
Conclusion
Our study reveals that eosinopaenia and elevated LDH on admission are potential predictors of disease severity in adults with COVID-19 with underlying chronic airway diseases. Moreover, eosinophil counts could indicate disease progression of COVID-19, thus revealing treatment efficacy. These predictors may help clinicians identify severe COVID-19 in patients with chronic bronchitis, COPD and asthma. Patients with chronic airway diseases are less likely to suffer from COVID-19. Eosinopaenia and elevated lactate dehydrogenase (LDH) can predict disease severity in patients with COVID-19 with underlying chronic airway diseases. Dynamic changes of eosinophil counts and LDH might indicate disease prognosis and treatment effectiveness. Are other molecules related to chronic airway inflammation also involved in the development of COVID-19? Can the drugs targeting eosinophils be applied in COVID-19 treatment? How do patients with COVID-19 with chronic airway diseases manage themselves?
[ "Background", "Study population and data collection", "Criteria for severity of COVID-19", "Laboratory testing", "Statistical analysis", "Demographics and clinical characteristics of patients with non-severe and severe COVID-19 with chronic airway diseases", "Laboratory findings of patients with non-severe and severe COVID-19 with chronic airway diseases", "Predictors of severity of COVID-19 in patients with chronic airway diseases", "Eosinophil counts and LDH levels tend to return to normal range over time in non-severe patients" ]
[ "SARS-CoV-2 was first identified after sequencing relevant clinical samples in a bunch of unknown viral pneumonia cases in December 2019 in Wuhan, Hubei Province, China. COVID-19, caused by SARS-CoV-2, was subsequently declared a pandemic by the WHO due to its aggressive spread on a large scale in many countries, leading to thousands of confirmed cases worldwide every day. As of 15 November 2020, 53.7 million confirmed cases of COVID-19 and 1.3 million deaths have been reported worldwide, demanding an urgent need for early identification of severe cases.1 Clinical evidence of SARS-CoV-2 has suggested several transmission routes between humans, with respiratory aerosol droplets undoubtedly being the main source of infection. SARS-CoV-2 is able to attack the respiratory system by binding to the cell entry receptors ACE2 on airway epithelial cells and results in pneumonia and respiratory failure in critically ill patients.\nChronic bronchitis, chronic obstructive pulmonary disease (COPD) and asthma are common respiratory diseases with chronic airway inflammation.2–4 Eosinophils, neutrophils and macrophages in innate immune response significantly increase in the airway and lungs during the initial phase of inflammation. Lymphocytopaenia has been reported in severe patients infected with SARS-CoV-2.5 Circulating eosinophil counts have also been reported to be decreased in patients with COVID-19 and associated with severity of the disease.6 Therefore, patients with underlying COPD, asthma and chronic bronchitis may have different inflammatory states after SARS-CoV-2 infection compared with patients without chronic airway inflammation.\nIn this retrospective cohort study, we reviewed the medical records of 59 patients with laboratory-confirmed COVID-19 with underlying chronic airway inflammation and compared the demographic, clinical and radiological characteristics as well as laboratory results between severe and non-severe patients in this cohort. Potential predictors of disease severity were identified in the abnormal laboratory findings using univariate and multivariate regression models.", "The subjects of this study were adults with COVID-19 and underlying chronic respiratory diseases (admission date from 26 January to 3 April 2020) at the Sino-French New City Branch of Tongji Hospital. Severe and non-severe patients were included in the case and control groups, respectively. COVID-19 was diagnosed according to WHO interim guideline.7 Patients with chronic respiratory diseases were diagnosed according to a previous diagnosis. All patients were confirmed by positive findings in reverse-transcriptase PCR assay of SARS-CoV-2 RNA in throat swab specimens. The study was conducted on 15 June.Demographic information, clinical characteristics (including medical history, symptoms, comorbidities, smoking history and allergic history) and radiological results of each patient were obtained from the electronic medical record system of the Sino-French New City Branch of Tongji Hospital and analysed by three independent researchers. 
Severity of COVID-19 was staged according to the guidelines for diagnosis and treatment of COVID-19 published by the Chinese National Health Committee (version 5–7).", "Severe COVID-19 was diagnosed when patients met one of the following criteria: (1) respiratory distress with respiratory frequency ≥30 per minute; (2) pulse oximeter oxygen saturation ≤93% at rest; and (3) oxygenation index (artery partial pressure of oxygen/inspired oxygen fraction) ≤300 mm Hg.", "Medical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function were collected for each patient from the electronic medical records.", "All data were analysed with SPSS Statistics Software (V.26). The statistics for categorical variables were summarised as frequencies and percentages and were compared using χ2 test or Fisher’s exact test between different groups where appropriate. Continuous variables were described using median (IQR) and compared using Mann-Whitney U test. To explore the risk factors associated with disease severity, univariable and multivariable logistic regression models were used to estimate the OR and 95% CI. A two-sided α of less than 0.05 was considered statistically significant.", "A total of 1888 patients were admitted. Fifty-nine patients with underlying chronic airway inflammation, including COPD (0.95%), asthma (0.53%) and chronic bronchitis (1.64%), were confirmed to have SARS-CoV-2 infection. Of the patients, 33 were classified as non-severe and 26 were classified as severe. Although COPD was more common in patients with severe COVID-19 when compared with patients with non-severe COVID-19 (42% vs 21%), the difference was not statistically significant.\nThe median age of all patients was 71 years (IQR, 57–80) and more than half (54%) were over 70 years old. Majority (71%) of the patients were male (table 1). There was no significant difference in age and sex between non-severe and severe patients. Thirty-one (53%) patients had one or more comorbidities besides the three chronic airway diseases, with cardiovascular disease (46%) and endocrine system disease (15%) being the most common comorbidity. There were no significant differences in the presence of these comorbidities between patients with non-severe and severe COVID-19. Half of the patients had smoking histories or were current smokers.\nDemographics and clinical characteristics of patients with COVID-19 with chronic airway inflammation on admission\nData are median (IQR) or n (%).\nP values comparing severe with non-severe patients were calculated by χ2 test, Fisher’s exact test or Mann-Whitney U test, as appropriate.\nCOPD, chronic obstructive pulmonary disease.\nThe most common symptoms were fever (83%), cough (73%), fatigue (47%) and dyspnoea (42%). 
Dyspnoea was more common in severe patients compared with non-severe patients (65% vs 24%, p=0.001) (table 1).", "When compared with non-severe patients, severe patients were more likely to have elevated neutrophil counts (8.2×10⁹/L vs 4.1×10⁹/L, p=0.001), decreased lymphocyte counts (0.6×10⁹/L vs 1.1×10⁹/L, p<0.001), eosinopaenia (<0.02×10⁹/L; 73% vs 24%, p<0.001), elevated D-dimer (>1 µg/mL; 88% vs 42%, p=0.001), increased LDH (471.0 U/L vs 230.0 U/L, p<0.001), elevated blood urea nitrogen (>9.5 mmol/L; 42% vs 3%, p<0.001), increased hypersensitive troponin I (>34 pg/mL; 48% vs 7%, p=0.001), and increased inflammation markers including CRP (126.2 mg/L vs 19.9 mg/L, p<0.001), procalcitonin (≥0.05 ng/mL; 96% vs 43%, p<0.001) and ferritin (1264.2 mg/L vs 293.6 mg/L, p=0.004) (table 2). Of note, significant differences in the expression of inflammation-related cytokines including IL-6, IL-8 and TNF-α were observed between the two groups, which were dramatically increased in severe patients.\nLaboratory findings of patients with COVID-19 with chronic airway inflammation on admission\nData are median (IQR), n (%) or n/N (%), where N is the total number of patients with available data.\nP values comparing severe with non-severe patients were calculated by χ2 test, Fisher’s exact test or Mann-Whitney U test, as appropriate.\nIL, interleukin; TNF-α, tumour necrosis factor-α.", "To identify the predictors of severity of COVID-19 in patients with chronic airway diseases, we analysed the association between abnormal laboratory findings and disease severity with univariate and multivariate logistic regression models. Disease severity was significantly associated with all of the above-mentioned abnormal laboratory findings in univariate logistic regression analyses. In a multivariate regression model that incorporated lymphopaenia, eosinopaenia, elevated LDH and increased IL-6, eosinophil counts <0.02×10⁹/L (OR per one-unit decrease, 10.115 (95% CI 2.158 to 47.414), p=0.003) and LDH level >225 U/L (OR per one-unit increase, 22.300 (95% CI 2.179 to 228.247), p=0.009) were independent risk factors for disease severity (table 3). Our data suggest that decreased eosinophil counts and increased LDH levels may help clinicians identify severe COVID-19 in patients with chronic airway diseases.\nPredictors of severity of COVID-19 in patients with chronic airway diseases\n*Per one-unit increase.\nIL, interleukin; ref, reference; TNF-α, tumour necrosis factor-α.", "We further analysed the eosinophil counts and LDH levels in patients with non-severe and severe COVID-19 with chronic bronchitis, COPD and asthma, respectively. We found that there was a significant difference in eosinophil counts and LDH levels between severe and non-severe patients with chronic bronchitis and COPD, but not in patients with asthma (figure 1). To observe the dynamic changes of eosinophil counts and LDH levels over time, we collected the eosinophil counts and LDH levels on the 5th, 10th, 15th, 20th, 25th and 30th days after admission. We found that eosinophil counts increased over time both in severe and non-severe patients. Meanwhile, LDH decreased over time (figure 2). Severe patients showed a slower recovery rate than non-severe patients. Of note, both eosinophil counts and LDH levels recovered more slowly in severe patients with COPD than those in severe patients with chronic bronchitis. 
Our data suggest that, as the disease recovers, eosinophil counts and LDH levels tend to return to normal range both in severe and non-severe patients, indicating a good therapeutic effect in patients with chronic airway diseases in COVID-19 treatment.\nClinical characteristics of eosinophil and LDH in patients with COVID-19 with chronic airway inflammation. (A) Eosinophil counts in different subgroups. Eosinophil counts were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. (B) LDH levels in different subgroups. LDH levels were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase.\nDynamic changes of eosinophil counts and LDH levels in patients with COVID-19 with chronic airway diseases. (A–D) Eosinophil counts increased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). (E–H) LDH levels decreased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001, compared with the eosinophil counts or LDH levels between severe and non-severe patients. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase.\nWe further performed multivariate analysis for mortality in patients with COVID-19 with chronic airway inflammation using the above four variables and found that eosinophil count <0.02×10⁹/L (OR per one-unit decrease, 18.000 (95% CI 1.929 to 167.986), p=0.011) was the only independent risk factor for mortality (online supplemental table 1). Moreover, Kaplan-Meier survival curves indicated that patients with COVID-19 with eosinopaenia or elevated LDH had worse survival probability (p<0.05) (online supplemental figure 1). This suggests that eosinopaenia and elevated LDH are also potential predictors of mortality in patients with COVID-19 with underlying chronic airway diseases.\n\n\n" ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population and data collection", "Criteria for severity of COVID-19", "Laboratory testing", "Statistical analysis", "Results", "Demographics and clinical characteristics of patients with non-severe and severe COVID-19 with chronic airway diseases", "Laboratory findings of patients with non-severe and severe COVID-19 with chronic airway diseases", "Predictors of severity of COVID-19 in patients with chronic airway diseases", "Eosinophil counts and LDH levels tend to return to normal range over time in non-severe patients", "Discussion", "Conclusion" ]
[ "SARS-CoV-2 was first identified after sequencing relevant clinical samples in a bunch of unknown viral pneumonia cases in December 2019 in Wuhan, Hubei Province, China. COVID-19, caused by SARS-CoV-2, was subsequently declared a pandemic by the WHO due to its aggressive spread on a large scale in many countries, leading to thousands of confirmed cases worldwide every day. As of 15 November 2020, 53.7 million confirmed cases of COVID-19 and 1.3 million deaths have been reported worldwide, demanding an urgent need for early identification of severe cases.1 Clinical evidence of SARS-CoV-2 has suggested several transmission routes between humans, with respiratory aerosol droplets undoubtedly being the main source of infection. SARS-CoV-2 is able to attack the respiratory system by binding to the cell entry receptors ACE2 on airway epithelial cells and results in pneumonia and respiratory failure in critically ill patients.\nChronic bronchitis, chronic obstructive pulmonary disease (COPD) and asthma are common respiratory diseases with chronic airway inflammation.2–4 Eosinophils, neutrophils and macrophages in innate immune response significantly increase in the airway and lungs during the initial phase of inflammation. Lymphocytopaenia has been reported in severe patients infected with SARS-CoV-2.5 Circulating eosinophil counts have also been reported to be decreased in patients with COVID-19 and associated with severity of the disease.6 Therefore, patients with underlying COPD, asthma and chronic bronchitis may have different inflammatory states after SARS-CoV-2 infection compared with patients without chronic airway inflammation.\nIn this retrospective cohort study, we reviewed the medical records of 59 patients with laboratory-confirmed COVID-19 with underlying chronic airway inflammation and compared the demographic, clinical and radiological characteristics as well as laboratory results between severe and non-severe patients in this cohort. Potential predictors of disease severity were identified in the abnormal laboratory findings using univariate and multivariate regression models.", "Study population and data collection The subjects of this study were adults with COVID-19 and underlying chronic respiratory diseases (admission date from 26 January to 3 April 2020) at the Sino-French New City Branch of Tongji Hospital. Severe and non-severe patients were included in the case and control groups, respectively. COVID-19 was diagnosed according to WHO interim guideline.7 Patients with chronic respiratory diseases were diagnosed according to a previous diagnosis. All patients were confirmed by positive findings in reverse-transcriptase PCR assay of SARS-CoV-2 RNA in throat swab specimens. The study was conducted on 15 June.Demographic information, clinical characteristics (including medical history, symptoms, comorbidities, smoking history and allergic history) and radiological results of each patient were obtained from the electronic medical record system of the Sino-French New City Branch of Tongji Hospital and analysed by three independent researchers. Severity of COVID-19 was staged according to the guidelines for diagnosis and treatment of COVID-19 published by the Chinese National Health Committee (version 5–7).\nThe subjects of this study were adults with COVID-19 and underlying chronic respiratory diseases (admission date from 26 January to 3 April 2020) at the Sino-French New City Branch of Tongji Hospital. 
Severe and non-severe patients were included in the case and control groups, respectively. COVID-19 was diagnosed according to WHO interim guideline.7 Patients with chronic respiratory diseases were diagnosed according to a previous diagnosis. All patients were confirmed by positive findings in reverse-transcriptase PCR assay of SARS-CoV-2 RNA in throat swab specimens. The study was conducted on 15 June.Demographic information, clinical characteristics (including medical history, symptoms, comorbidities, smoking history and allergic history) and radiological results of each patient were obtained from the electronic medical record system of the Sino-French New City Branch of Tongji Hospital and analysed by three independent researchers. Severity of COVID-19 was staged according to the guidelines for diagnosis and treatment of COVID-19 published by the Chinese National Health Committee (version 5–7).\nCriteria for severity of COVID-19 Severe COVID-19 was diagnosed when patients met one of the following criteria: (1) respiratory distress with respiratory frequency ≥30 per minute; (2) pulse oximeter oxygen saturation ≤93% at rest; and (3) oxygenation index (artery partial pressure of oxygen/inspired oxygen fraction) ≤300 mm Hg.\nSevere COVID-19 was diagnosed when patients met one of the following criteria: (1) respiratory distress with respiratory frequency ≥30 per minute; (2) pulse oximeter oxygen saturation ≤93% at rest; and (3) oxygenation index (artery partial pressure of oxygen/inspired oxygen fraction) ≤300 mm Hg.\nLaboratory testing Medical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function were collected for each patient from the electronic medical records.\nMedical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function were collected for each patient from the electronic medical records.\nStatistical analysis All data were analysed with SPSS Statistics Software (V.26). The statistics for categorical variables were summarised as frequencies and percentages and were compared using χ2 test or Fisher’s exact test between different groups where appropriate. Continuous variables were described using median (IQR) and compared using Mann-Whitney U test. To explore the risk factors associated with disease severity, univariable and multivariable logistic regression models were used to estimate the OR and 95% CI. A two-sided α of less than 0.05 was considered statistically significant.\nAll data were analysed with SPSS Statistics Software (V.26). 
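As a concrete reading of the case definition above, a minimal sketch in Python; the function and argument names are illustrative assumptions, not identifiers from the study.

def is_severe_covid19(resp_rate_per_min, spo2_rest_pct, pao2_fio2_mmhg):
    # Severe if ANY one criterion is met:
    # (1) respiratory frequency >= 30/min,
    # (2) SpO2 at rest <= 93%,
    # (3) PaO2/FiO2 <= 300 mm Hg.
    return (resp_rate_per_min >= 30
            or spo2_rest_pct <= 93
            or pao2_fio2_mmhg <= 300)

print(is_severe_covid19(24, 91, 320))  # True: criterion (2) alone suffices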
Laboratory testing Medical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function, were collected for each patient from the electronic medical records.\nStatistical analysis All data were analysed with SPSS Statistics Software (V.26). Statistics for categorical variables were summarised as frequencies and percentages and compared using the χ2 test or Fisher's exact test between groups, as appropriate. Continuous variables were described using median (IQR) and compared using the Mann-Whitney U test. To explore the risk factors associated with disease severity, univariable and multivariable logistic regression models were used to estimate ORs and 95% CIs. A two-sided α of less than 0.05 was considered statistically significant.", "The subjects of this study were adults with COVID-19 and underlying chronic respiratory diseases (admitted between 26 January and 3 April 2020) at the Sino-French New City Branch of Tongji Hospital. Severe and non-severe patients were included in the case and control groups, respectively. COVID-19 was diagnosed according to the WHO interim guideline.7 Chronic respiratory diseases were identified according to a previous diagnosis. All patients were confirmed by positive findings in reverse-transcriptase PCR assay of SARS-CoV-2 RNA in throat swab specimens. Data collection was completed on 15 June. Demographic information, clinical characteristics (including medical history, symptoms, comorbidities, smoking history and allergic history) and radiological results of each patient were obtained from the electronic medical record system of the Sino-French New City Branch of Tongji Hospital and analysed by three independent researchers. Severity of COVID-19 was staged according to the guidelines for diagnosis and treatment of COVID-19 published by the Chinese National Health Committee (versions 5–7).", "Severe COVID-19 was diagnosed when patients met one of the following criteria: (1) respiratory distress with respiratory frequency ≥30 per minute; (2) pulse oximeter oxygen saturation ≤93% at rest; and (3) oxygenation index (artery partial pressure of oxygen/inspired oxygen fraction) ≤300 mm Hg.", "Medical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function, were collected for each patient from the electronic medical records.", "All data were analysed with SPSS Statistics Software (V.26). Statistics for categorical variables were summarised as frequencies and percentages and compared using the χ2 test or Fisher's exact test between groups, as appropriate. Continuous variables were described using median (IQR) and compared using the Mann-Whitney U test. To explore the risk factors associated with disease severity, univariable and multivariable logistic regression models were used to estimate ORs and 95% CIs. A two-sided α of less than 0.05 was considered statistically significant.",
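The statistical workflow just described maps directly onto standard library calls; a minimal sketch using scipy, with toy numbers standing in for the two groups (illustrative only, not the study's data):

import numpy as np
from scipy import stats

severe_ldh = np.array([471.0, 520.0, 390.0, 610.0, 455.0])
nonsevere_ldh = np.array([230.0, 198.0, 260.0, 215.0, 241.0])

# Continuous variables: two-sided Mann-Whitney U test.
u_stat, p_mw = stats.mannwhitneyu(severe_ldh, nonsevere_ldh, alternative="two-sided")

# Categorical variables: chi-square on a 2x2 table, falling back to
# Fisher's exact test when any expected count is small.
table = np.array([[19, 7], [8, 25]])  # e.g. eosinopaenia yes/no by severity
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():
    _, p_cat = stats.fisher_exact(table)

print(p_mw, p_cat)  # compare against the two-sided alpha of 0.05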
"Demographics and clinical characteristics of patients with non-severe and severe COVID-19 with chronic airway diseases A total of 1888 patients were admitted. Fifty-nine patients with underlying chronic airway inflammation, including COPD (0.95%), asthma (0.53%) and chronic bronchitis (1.64%), were confirmed to have SARS-CoV-2 infection. Of these, 33 were classified as non-severe and 26 as severe. Although COPD was more common in patients with severe COVID-19 than in patients with non-severe COVID-19 (42% vs 21%), the difference was not statistically significant.\nThe median age of all patients was 71 years (IQR, 57–80) and more than half (54%) were over 70 years old. The majority (71%) of the patients were male (table 1). There was no significant difference in age or sex between non-severe and severe patients. Thirty-one (53%) patients had one or more comorbidities besides the three chronic airway diseases, with cardiovascular disease (46%) and endocrine system disease (15%) being the most common. There were no significant differences in the presence of these comorbidities between patients with non-severe and severe COVID-19. Half of the patients had smoking histories or were current smokers.\nDemographics and clinical characteristics of patients with COVID-19 with chronic airway inflammation on admission\nData are median (IQR) or n (%).\nP values comparing severe with non-severe patients were calculated by χ2 test, Fisher's exact test or Mann-Whitney U test, as appropriate.\nCOPD, chronic obstructive pulmonary disease.\nThe most common symptoms were fever (83%), cough (73%), fatigue (47%) and dyspnoea (42%). Dyspnoea was more common in severe patients than in non-severe patients (65% vs 24%, p=0.001) (table 1).\nLaboratory findings of patients with non-severe and severe COVID-19 with chronic airway diseases When compared with non-severe patients, severe patients were more likely to have elevated neutrophil counts (8.2×10⁹/L vs 4.1×10⁹/L, p=0.001), decreased lymphocyte counts (0.6×10⁹/L vs 1.1×10⁹/L, p<0.001), eosinopaenia (<0.02×10⁹/L; 73% vs 24%, p<0.001), elevated D-dimer (>1 µg/mL; 88% vs 42%, p=0.001), increased LDH (471.0 U/L vs 230.0 U/L, p<0.001), elevated blood urea nitrogen (>9.5 mmol/L; 42% vs 3%, p<0.001), increased high-sensitivity troponin I (>34 pg/mL; 48% vs 7%, p=0.001), and increased inflammation markers including CRP (126.2 mg/L vs 19.9 mg/L, p<0.001), procalcitonin (≥0.05 ng/mL; 96% vs 43%, p<0.001) and ferritin (1264.2 mg/L vs 293.6 mg/L, p=0.004) (table 2). Of note, significant differences in the expression of inflammation-related cytokines, including IL-6, IL-8 and TNF-α, were observed between the two groups; these were dramatically increased in severe patients.\nLaboratory findings of patients with COVID-19 with chronic airway inflammation on admission\nData are median (IQR), n (%) or n/N (%), where N is the total number of patients with available data.\nP values comparing severe with non-severe patients were calculated by χ2 test, Fisher's exact test or Mann-Whitney U test, as appropriate.\nIL, interleukin; TNF-α, tumour necrosis factor-α.\nPredictors of severity of COVID-19 in patients with chronic airway diseases To identify the predictors of severity of COVID-19 in patients with chronic airway diseases, we analysed the association between abnormal laboratory findings and disease severity with univariate and multivariate logistic regression models. Disease severity was significantly associated with all of the above-mentioned abnormal laboratory findings in univariate logistic regression analyses. In a multivariate regression model that incorporated lymphopaenia, eosinopaenia, elevated LDH and increased IL-6, eosinophil counts <0.02×10⁹/L (OR per one-unit decrease, 10.115 (95% CI 2.158 to 47.414), p=0.003) and LDH level >225 U/L (OR per one-unit increase, 22.300 (95% CI 2.179 to 228.247), p=0.009) were independent risk factors for disease severity (table 3). Our data suggest that decreased eosinophil counts and increased LDH levels may help clinicians identify severe COVID-19 in patients with chronic airway diseases.\nPredictors of severity of COVID-19 in patients with chronic airway diseases\n*Per one-unit increase.\nIL, interleukin; ref, reference; TNF-α, tumour necrosis factor-α.
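A minimal sketch of how such univariable/multivariable odds ratios and CIs can be computed with statsmodels; the per-patient table, column names and values here are invented for illustration, not the study's variables or data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)
n = 59  # cohort size
df = pd.DataFrame({
    "severe":       np.random.binomial(1, 0.44, n),
    "lymphopaenia": np.random.binomial(1, 0.50, n),
    "eosinopaenia": np.random.binomial(1, 0.45, n),  # eosinophils < 0.02x10^9/L
    "ldh_high":     np.random.binomial(1, 0.50, n),  # LDH > 225 U/L
    "il6_high":     np.random.binomial(1, 0.50, n),
})

X = sm.add_constant(df[["lymphopaenia", "eosinopaenia", "ldh_high", "il6_high"]])
fit = sm.Logit(df["severe"], X).fit(disp=0)

# ORs and 95% CIs are the exponentiated coefficients and their bounds.
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "2.5%": np.exp(ci[0]),
                    "97.5%": np.exp(ci[1])}).drop("const"))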
Eosinophil counts and LDH levels tend to return to the normal range over time in non-severe patients We further analysed the eosinophil counts and LDH levels in patients with non-severe and severe COVID-19 with chronic bronchitis, COPD and asthma, respectively. There was a significant difference in eosinophil counts and LDH levels between severe and non-severe patients with chronic bronchitis and COPD, but not in patients with asthma (figure 1). To observe the dynamic changes of eosinophil counts and LDH levels over time, we collected both measures on the 5th, 10th, 15th, 20th, 25th and 30th days after admission. Eosinophil counts increased over time in both severe and non-severe patients, while LDH levels decreased over time (figure 2). Severe patients showed a slower recovery rate than non-severe patients. Of note, both eosinophil counts and LDH levels recovered more slowly in severe patients with COPD than in severe patients with chronic bronchitis. Our data suggest that, as patients recover, eosinophil counts and LDH levels tend to return to the normal range in both severe and non-severe patients, reflecting the response to COVID-19 treatment in patients with chronic airway diseases.\nClinical characteristics of eosinophil counts and LDH levels in patients with COVID-19 with chronic airway inflammation. (A) Eosinophil counts in different subgroups. Eosinophil counts were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. (B) LDH levels in different subgroups. LDH levels were significantly increased in patients with severe COVID-19 with chronic bronchitis and COPD. Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase.\nDynamic changes of eosinophil counts and LDH levels in patients with COVID-19 with chronic airway diseases. (A–D) Eosinophil counts increased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). (E–H) LDH levels decreased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001, comparing eosinophil counts or LDH levels between severe and non-severe patients. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase.\nWe further performed multivariate analysis for mortality in patients with COVID-19 with chronic airway inflammation using the above four variables and found that eosinophil count <0.02×10⁹/L (OR per one-unit decrease, 18.000 (95% CI 1.929 to 167.986), p=0.011) was the only independent risk factor for mortality (online supplemental table 1). Moreover, Kaplan-Meier survival curves indicated that patients with COVID-19 with eosinopaenia or elevated LDH had worse survival probability (p<0.05) (online supplemental figure 1). This suggests that eosinopaenia and elevated LDH are also potential predictors of mortality in patients with COVID-19 with underlying chronic airway diseases.",
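A minimal sketch of the kind of Kaplan-Meier comparison described above, using the lifelines library; the follow-up times and event indicators are invented for illustration, not the study's data.

from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Days from admission and a death indicator, split by eosinopaenia status.
days_eo, died_eo = [10, 15, 22, 30, 18, 30], [1, 1, 0, 1, 1, 0]
days_no, died_no = [25, 30, 30, 28, 30, 30], [0, 0, 0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(days_eo, event_observed=died_eo, label="eosinopaenia")
ax = kmf.plot_survival_function()
kmf.fit(days_no, event_observed=died_no, label="no eosinopaenia")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two survival curves.
print(logrank_test(days_eo, days_no,
                   event_observed_A=died_eo,
                   event_observed_B=died_no).p_value)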
"A total of 1888 patients were admitted. Fifty-nine patients with underlying chronic airway inflammation, including COPD (0.95%), asthma (0.53%) and chronic bronchitis (1.64%), were confirmed to have SARS-CoV-2 infection. Of these, 33 were classified as non-severe and 26 as severe. Although COPD was more common in patients with severe COVID-19 than in patients with non-severe COVID-19 (42% vs 21%), the difference was not statistically significant.\nThe median age of all patients was 71 years (IQR, 57–80) and more than half (54%) were over 70 years old. The majority (71%) of the patients were male (table 1). There was no significant difference in age or sex between non-severe and severe patients. Thirty-one (53%) patients had one or more comorbidities besides the three chronic airway diseases, with cardiovascular disease (46%) and endocrine system disease (15%) being the most common. There were no significant differences in the presence of these comorbidities between patients with non-severe and severe COVID-19. Half of the patients had smoking histories or were current smokers.\nDemographics and clinical characteristics of patients with COVID-19 with chronic airway inflammation on admission\nData are median (IQR) or n (%).\nP values comparing severe with non-severe patients were calculated by χ2 test, Fisher's exact test or Mann-Whitney U test, as appropriate.\nCOPD, chronic obstructive pulmonary disease.\nThe most common symptoms were fever (83%), cough (73%), fatigue (47%) and dyspnoea (42%). Dyspnoea was more common in severe patients than in non-severe patients (65% vs 24%, p=0.001) (table 1).", "When compared with non-severe patients, severe patients were more likely to have elevated neutrophil counts (8.2×10⁹/L vs 4.1×10⁹/L, p=0.001), decreased lymphocyte counts (0.6×10⁹/L vs 1.1×10⁹/L, p<0.001), eosinopaenia (<0.02×10⁹/L; 73% vs 24%, p<0.001), elevated D-dimer (>1 µg/mL; 88% vs 42%, p=0.001), increased LDH (471.0 U/L vs 230.0 U/L, p<0.001), elevated blood urea nitrogen (>9.5 mmol/L; 42% vs 3%, p<0.001), increased high-sensitivity troponin I (>34 pg/mL; 48% vs 7%, p=0.001), and increased inflammation markers including CRP (126.2 mg/L vs 19.9 mg/L, p<0.001), procalcitonin (≥0.05 ng/mL; 96% vs 43%, p<0.001) and ferritin (1264.2 mg/L vs 293.6 mg/L, p=0.004) (table 2). Of note, significant differences in the expression of inflammation-related cytokines, including IL-6, IL-8 and TNF-α, were observed between the two groups; these were dramatically increased in severe patients.\nLaboratory findings of patients with COVID-19 with chronic airway inflammation on admission\nData are median (IQR), n (%) or n/N (%), where N is the total number of patients with available data.\nP values comparing severe with non-severe patients were calculated by χ2 test, Fisher's exact test or Mann-Whitney U test, as appropriate.\nIL, interleukin; TNF-α, tumour necrosis factor-α.", "To identify the predictors of severity of COVID-19 in patients with chronic airway diseases, we analysed the association between abnormal laboratory findings and disease severity with univariate and multivariate logistic regression models. Disease severity was significantly associated with all of the above-mentioned abnormal laboratory findings in univariate logistic regression analyses. In a multivariate regression model that incorporated lymphopaenia, eosinopaenia, elevated LDH and increased IL-6, eosinophil counts <0.02×10⁹/L (OR per one-unit decrease, 10.115 (95% CI 2.158 to 47.414), p=0.003) and LDH level >225 U/L (OR per one-unit increase, 22.300 (95% CI 2.179 to 228.247), p=0.009) were independent risk factors for disease severity (table 3). Our data suggest that decreased eosinophil counts and increased LDH levels may help clinicians identify severe COVID-19 in patients with chronic airway diseases.\nPredictors of severity of COVID-19 in patients with chronic airway diseases\n*Per one-unit increase.\nIL, interleukin; ref, reference; TNF-α, tumour necrosis factor-α.", "We further analysed the eosinophil counts and LDH levels in patients with non-severe and severe COVID-19 with chronic bronchitis, COPD and asthma, respectively. There was a significant difference in eosinophil counts and LDH levels between severe and non-severe patients with chronic bronchitis and COPD, but not in patients with asthma (figure 1). To observe the dynamic changes of eosinophil counts and LDH levels over time, we collected both measures on the 5th, 10th, 15th, 20th, 25th and 30th days after admission. Eosinophil counts increased over time in both severe and non-severe patients, while LDH levels decreased over time (figure 2). Severe patients showed a slower recovery rate than non-severe patients. Of note, both eosinophil counts and LDH levels recovered more slowly in severe patients with COPD than in severe patients with chronic bronchitis. Our data suggest that, as patients recover, eosinophil counts and LDH levels tend to return to the normal range in both severe and non-severe patients, reflecting the response to COVID-19 treatment in patients with chronic airway diseases.\nClinical characteristics of eosinophil counts and LDH levels in patients with COVID-19 with chronic airway inflammation. (A) Eosinophil counts in different subgroups. Eosinophil counts were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. (B) LDH levels in different subgroups. LDH levels were significantly increased in patients with severe COVID-19 with chronic bronchitis and COPD. Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase.\nDynamic changes of eosinophil counts and LDH levels in patients with COVID-19 with chronic airway diseases. (A–D) Eosinophil counts increased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). (E–H) LDH levels decreased over time in non-severe and severe patients with COVID-19 with chronic bronchitis (n=31), COPD (n=18) and asthma (n=10). Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001, comparing eosinophil counts or LDH levels between severe and non-severe patients. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase.\nWe further performed multivariate analysis for mortality in patients with COVID-19 with chronic airway inflammation using the above four variables and found that eosinophil count <0.02×10⁹/L (OR per one-unit decrease, 18.000 (95% CI 1.929 to 167.986), p=0.011) was the only independent risk factor for mortality (online supplemental table 1). Moreover, Kaplan-Meier survival curves indicated that patients with COVID-19 with eosinopaenia or elevated LDH had worse survival probability (p<0.05) (online supplemental figure 1). This suggests that eosinopaenia and elevated LDH are also potential predictors of mortality in patients with COVID-19 with underlying chronic airway diseases.", "In this retrospective cohort study, we found that eosinophil counts less than 0.02×10⁹/L and LDH levels greater than 225 U/L on admission were associated with severity of COVID-19 in patients with underlying chronic bronchitis, COPD and asthma. Moreover, eosinophil counts and LDH levels tend to return to the normal range in severe and non-severe patients after treatment, suggesting their roles as indicators of disease progression and treatment efficacy.\nCirculating and tissue-resident eosinophils are associated with a variety of diseases, such as COPD, asthma and chronic bronchitis, in which they participate in the pathological process and play a potent proinflammatory role. Although eosinophils are elevated in chronic airway inflammation, COPD, asthma and chronic bronchitis have not yet been reported as major risk factors for severe SARS-CoV-2 infection. Zhang et al8 reported that, in a cohort of 140 hospitalised patients with COVID-19, none had asthma or other comorbid atopic diseases and only two patients (1.4%) had COPD, while more than half (53%) had eosinopaenia on the day of hospital admission.
Similarly, Du et al9 analysed the clinical features of 85 fatal cases of COVID-19 and found that 81% of the patients had very low eosinophil counts on admission. In our cohort of 1888 patients, 31 patients had chronic bronchitis (1.64%), 18 had COPD (0.95%) and only 10 had asthma (0.53%). Meanwhile, eosinopaenia was more common in critically ill patients, suggesting that resolution of eosinopaenia could be a marker of improving clinical status.10 In our study, lower eosinophil counts were associated with worse survival probability, and eosinophil counts were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. No significant difference was observed in patients with asthma, partly due to the limited sample size; the drastic increase in airway eosinophil counts seen in most patients with chronic asthma after bronchoprovocation might be another, more important cause. We further explored the dynamic changes of eosinophil counts in patients with chronic airway diseases over the course of COVID-19 and found that eosinophil counts gradually increased over time and returned to the normal range in both severe and non-severe patients, which could be a possible indicator of treatment effectiveness. It remains unclear how eosinopaenia arises in COVID-19, but the most likely explanation is depletion during the antiviral reaction, since the Th1 (type 1 T helper) antiviral response is inhibited in patients with chronic airway inflammation.\nLDH has long been reported to be associated with COPD, asthma and chronic bronchitis and has been identified as a potential marker of chronic airway inflammation.11 12 Meanwhile, a large number of studies have reported elevated LDH levels in COVID-19, which could be a risk factor for mortality. Zheng et al13 conducted a systematic literature review and meta-analysis including four studies and found that LDH was statistically significantly higher in severe patients than in non-severe patients. Elevated LDH in severe cases indicates diffuse lung injury and tissue damage; we therefore hypothesised that LDH might be another predictor of exacerbation of chronic airway inflammation in COVID-19. Kaplan-Meier survival analysis suggested the hazard associated with elevated LDH levels. Similar to eosinophil counts, LDH was elevated in patients with severe COVID-19 with chronic bronchitis and COPD and gradually decreased over time in patients with severe and non-severe COVID-19.\nMultiple studies have highlighted the important roles of eosinopaenia and elevated LDH in facilitating the diagnosis and prognosis of severe COVID-19. Ma et al14 incorporated eosinopaenia into the COVID-19-REAL (radiological image, eosinophils, age, and leukocytes) score, which performed well in identifying populations at higher risk of COVID-19. Cazzaniga et al15 reported that absolute eosinopaenia was associated with 4-week mortality and clinical outcomes in patients with COVID-19 pneumonia in binary logistic regression analyses. Guan et al16 and Feng et al17 both proposed LDH as a significant predictor of COVID-19 mortality and adverse outcomes with the simple-tree XGBoost model, which may help identify high-risk COVID-19 cases.
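For context, a minimal sketch of the kind of shallow gradient-boosted classifier the cited "simple-tree XGBoost" work describes, trained on synthetic features; the feature set and parameters are assumptions for illustration, not the published model.

import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Synthetic feature matrix: columns could stand for LDH, lymphocyte (%), CRP.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # synthetic outcome

# Shallow trees keep the model close to an interpretable decision rule.
model = XGBClassifier(max_depth=2, n_estimators=10, learning_rate=0.3)
model.fit(X, y)
print(model.predict_proba(X[:5])[:, 1])  # predicted risk for five patients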
Eosinopaenia and elevated LDH have been identified as risk factors for severe COVID-19 in general; it is noteworthy that in our study they were also associated with severity of COVID-19 in patients with chronic airway diseases.\nThere is growing concern regarding the association between COVID-19 and pulmonary function. Previous reports have concentrated on respiratory follow-up after hospitalisation for COVID-19. Trinkmann et al18 found that symptomatic patients had a significantly lower forced expiratory volume in 1 s, vital capacity and transfer factor of the lung for carbon monoxide (TLCO) compared with asymptomatic patients. Riou et al19 found that patients with critical disease had lower total lung capacity and TLCO. However, how pre-existing pulmonary function impairment affects COVID-19 outcomes has not yet been fully elucidated. Related literature is currently scarce, possibly due to limited pulmonary function data and the temporary closure of pulmonary function testing laboratories during the COVID-19 pandemic. Morgenthau et al20 found that mortality in patients with COVID-19 with sarcoidosis was associated with moderate or severe impairment in pulmonary function. He et al21 reported that a longer history of COPD increased the risk of death and other negative outcomes in patients with COVID-19. In our cohort, the impaired lung function of patients with COVID-19 with underlying chronic airway diseases might have had a significant impact on outcomes; however, this analysis could not be conducted because the data were unavailable, which is a limitation of this study.\nPrevious treatment regimens might contribute to the outcome of patients with COVID-19 with underlying chronic airway diseases. Inhaled corticosteroids (ICS) (with or without a long-acting β-agonist) are applied directly to the respiratory epithelium in the management of stable COPD and asthma to reduce airway inflammation. ICS could decrease the expression of both ACE2 and transmembrane protease serine 2 (TMPRSS2) on airway epithelial cells, subsequently protecting them from invasion by SARS-CoV-2.22 In addition, coronavirus proliferation and cytokine production could also be suppressed by ICS use.23 However, whether regular ICS use before the pandemic affected COVID-19 outcomes remains controversial. Bloom et al reported that patients with asthma older than 50 years could benefit from the use of ICS within 2 weeks of admission, while patients with other chronic pulmonary diseases could not.24 Schultze et al25 found no protective effect of regular ICS use against severe COVID-19 outcomes, in patients with asthma or in patients with COPD. In our cohort, only one patient with COPD and two patients with asthma reported long-term use of ICS, owing to the difficulty of collecting medical histories at the start of the pandemic. Further detailed information on comorbidities and prior medication, and many potential sources of bias, should be taken into account to clarify the benefits or harms of ICS in COVID-19.\nDifferent phenotypes of COPD and asthma, reflecting their complex pathophysiology, might also play a part in COVID-19; however, these hypotheses need further clarification. Kimura et al26 found that type 2 inflammatory cytokines (IL-4, IL-5, IL-13) were negatively associated with ACE2 expression and positively associated with TMPRSS2 expression in an ex vivo study. Ferastraoaru et al27 indicated that a Th2 asthma phenotype was a predictor of reduced COVID-19 morbidity and mortality, while Kermani et al28 reported greater morbidity and mortality in neutrophilic severe asthma. A previous report highlighted that eosinophilic inflammation is also a common and stable phenotype in COPD and that blood eosinophil counts could predict response to ICS treatment.29 Watson et al did not find any gene expression differences in ACE2 in blood eosinophilic COPD, further indicating that these patients might not have a different vulnerability to SARS-CoV-2 infection.30 Therefore, how different inflammation types of COPD and asthma might affect the progression of severe COVID-19 needs further investigation.\nOur study also had some other limitations. First, due to the retrospective study design, the accuracy of all laboratory results depended on the medical records. Observation bias might also exist given the limited sample size. Second, there could be selection bias in the multivariate regression model used to analyse the risk factors.", "Our study reveals that eosinopaenia and elevated LDH on admission are potential predictors of disease severity in adults with COVID-19 with underlying chronic airway diseases. Moreover, eosinophil counts could indicate disease progression of COVID-19 and thus reflect treatment efficacy. These predictors may help clinicians identify severe COVID-19 in patients with chronic bronchitis, COPD and asthma.\nPatients with chronic airway diseases are less likely to suffer from COVID-19.\nEosinopaenia and elevated lactate dehydrogenase (LDH) can predict disease severity in patients with COVID-19 with underlying chronic airway diseases.\nDynamic changes of eosinophil counts and LDH might indicate disease prognosis and treatment effectiveness.\nAre other molecules related to chronic airway inflammation also involved in the development of COVID-19?\nCan drugs targeting eosinophils be applied in COVID-19 treatment?\nHow do patients with COVID-19 with chronic airway diseases manage themselves?" ]
[ null, "methods", null, null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "COVID-19" ]
Background: SARS-CoV-2 was first identified after sequencing of clinical samples from a cluster of unexplained viral pneumonia cases in December 2019 in Wuhan, Hubei Province, China. COVID-19, caused by SARS-CoV-2, was subsequently declared a pandemic by the WHO due to its aggressive spread on a large scale in many countries, leading to thousands of confirmed cases worldwide every day. As of 15 November 2020, 53.7 million confirmed cases of COVID-19 and 1.3 million deaths have been reported worldwide, creating an urgent need for early identification of severe cases.1 Clinical evidence of SARS-CoV-2 has suggested several transmission routes between humans, with respiratory aerosol droplets undoubtedly being the main source of infection. SARS-CoV-2 attacks the respiratory system by binding to the cell entry receptor ACE2 on airway epithelial cells, resulting in pneumonia and respiratory failure in critically ill patients. Chronic bronchitis, chronic obstructive pulmonary disease (COPD) and asthma are common respiratory diseases with chronic airway inflammation.2–4 Eosinophils, neutrophils and macrophages of the innate immune response increase significantly in the airway and lungs during the initial phase of inflammation. Lymphocytopaenia has been reported in severe patients infected with SARS-CoV-2.5 Circulating eosinophil counts have also been reported to be decreased in patients with COVID-19 and associated with severity of the disease.6 Therefore, patients with underlying COPD, asthma and chronic bronchitis may have different inflammatory states after SARS-CoV-2 infection compared with patients without chronic airway inflammation. In this retrospective cohort study, we reviewed the medical records of 59 patients with laboratory-confirmed COVID-19 and underlying chronic airway inflammation and compared the demographic, clinical and radiological characteristics as well as laboratory results between severe and non-severe patients in this cohort. Potential predictors of disease severity were identified among the abnormal laboratory findings using univariate and multivariate regression models. Methods: Study population and data collection The subjects of this study were adults with COVID-19 and underlying chronic respiratory diseases (admitted between 26 January and 3 April 2020) at the Sino-French New City Branch of Tongji Hospital. Severe and non-severe patients were included in the case and control groups, respectively. COVID-19 was diagnosed according to the WHO interim guideline.7 Chronic respiratory diseases were identified according to a previous diagnosis. All patients were confirmed by positive findings in reverse-transcriptase PCR assay of SARS-CoV-2 RNA in throat swab specimens. Data collection was completed on 15 June. Demographic information, clinical characteristics (including medical history, symptoms, comorbidities, smoking history and allergic history) and radiological results of each patient were obtained from the electronic medical record system of the Sino-French New City Branch of Tongji Hospital and analysed by three independent researchers. Severity of COVID-19 was staged according to the guidelines for diagnosis and treatment of COVID-19 published by the Chinese National Health Committee (versions 5–7). Criteria for severity of COVID-19 Severe COVID-19 was diagnosed when patients met one of the following criteria: (1) respiratory distress with respiratory frequency ≥30 per minute; (2) pulse oximeter oxygen saturation ≤93% at rest; and (3) oxygenation index (artery partial pressure of oxygen/inspired oxygen fraction) ≤300 mm Hg. Laboratory testing Medical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function, were collected for each patient from the electronic medical records. Statistical analysis All data were analysed with SPSS Statistics Software (V.26). Statistics for categorical variables were summarised as frequencies and percentages and compared using the χ2 test or Fisher's exact test between groups, as appropriate. Continuous variables were described using median (IQR) and compared using the Mann-Whitney U test. To explore the risk factors associated with disease severity, univariable and multivariable logistic regression models were used to estimate ORs and 95% CIs. A two-sided α of less than 0.05 was considered statistically significant.
Study population and data collection: The subjects of this study were adults with COVID-19 and underlying chronic respiratory diseases (admitted between 26 January and 3 April 2020) at the Sino-French New City Branch of Tongji Hospital. Severe and non-severe patients were included in the case and control groups, respectively. COVID-19 was diagnosed according to the WHO interim guideline.7 Chronic respiratory diseases were identified according to a previous diagnosis. All patients were confirmed by positive findings in reverse-transcriptase PCR assay of SARS-CoV-2 RNA in throat swab specimens. Data collection was completed on 15 June. Demographic information, clinical characteristics (including medical history, symptoms, comorbidities, smoking history and allergic history) and radiological results of each patient were obtained from the electronic medical record system of the Sino-French New City Branch of Tongji Hospital and analysed by three independent researchers. Severity of COVID-19 was staged according to the guidelines for diagnosis and treatment of COVID-19 published by the Chinese National Health Committee (versions 5–7). Criteria for severity of COVID-19: Severe COVID-19 was diagnosed when patients met one of the following criteria: (1) respiratory distress with respiratory frequency ≥30 per minute; (2) pulse oximeter oxygen saturation ≤93% at rest; and (3) oxygenation index (artery partial pressure of oxygen/inspired oxygen fraction) ≤300 mm Hg. Laboratory testing: Medical laboratory results, including number of leucocytes, lymphocytes, monocytes, eosinophils, basophils, platelets, alanine aminotransferase, aspartate aminotransferase, serum creatine kinase, serum lactate dehydrogenase (LDH), blood urea nitrogen, serum creatinine, cardiac troponin I, concentrations of D-dimer, C reactive protein (CRP), procalcitonin, erythrocyte sedimentation rate, serum ferritin, cytokines (interleukin (IL) 2R, IL-6, IL-8, IL-10, tumour necrosis factor (TNF)-α) and immune function, were collected for each patient from the electronic medical records. Statistical analysis: All data were analysed with SPSS Statistics Software (V.26). Statistics for categorical variables were summarised as frequencies and percentages and compared using the χ2 test or Fisher's exact test between groups, as appropriate. Continuous variables were described using median (IQR) and compared using the Mann-Whitney U test. To explore the risk factors associated with disease severity, univariable and multivariable logistic regression models were used to estimate ORs and 95% CIs. A two-sided α of less than 0.05 was considered statistically significant. Results: Demographics and clinical characteristics of patients with non-severe and severe COVID-19 with chronic airway diseases A total of 1888 patients were admitted. Fifty-nine patients with underlying chronic airway inflammation, including COPD (0.95%), asthma (0.53%) and chronic bronchitis (1.64%), were confirmed to have SARS-CoV-2 infection. Of these, 33 were classified as non-severe and 26 as severe. Although COPD was more common in patients with severe COVID-19 than in patients with non-severe COVID-19 (42% vs 21%), the difference was not statistically significant. The median age of all patients was 71 years (IQR, 57–80) and more than half (54%) were over 70 years old. The majority (71%) of the patients were male (table 1). There was no significant difference in age or sex between non-severe and severe patients. Thirty-one (53%) patients had one or more comorbidities besides the three chronic airway diseases, with cardiovascular disease (46%) and endocrine system disease (15%) being the most common. There were no significant differences in the presence of these comorbidities between patients with non-severe and severe COVID-19. Half of the patients had smoking histories or were current smokers. Demographics and clinical characteristics of patients with COVID-19 with chronic airway inflammation on admission Data are median (IQR) or n (%). P values comparing severe with non-severe patients were calculated by χ2 test, Fisher's exact test or Mann-Whitney U test, as appropriate. COPD, chronic obstructive pulmonary disease. The most common symptoms were fever (83%), cough (73%), fatigue (47%) and dyspnoea (42%). Dyspnoea was more common in severe patients than in non-severe patients (65% vs 24%, p=0.001) (table 1).
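As a consistency check, the reported dyspnoea contrast can be reproduced from the counts implied by the percentages; the 17/26 and 8/33 splits are an assumption inferred from 65% of 26 and 24% of 33.

from scipy.stats import chi2_contingency

table = [[17, 9],   # severe: dyspnoea yes / no
         [8, 25]]   # non-severe: dyspnoea yes / no
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(p)  # ~0.0015 without continuity correction, in line with the reported p=0.001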
Laboratory findings of patients with non-severe and severe COVID-19 with chronic airway diseases When compared with non-severe patients, severe patients were more likely to have elevated neutrophil counts (8.2×10⁹/L vs 4.1×10⁹/L, p=0.001), decreased lymphocyte counts (0.6×10⁹/L vs 1.1×10⁹/L, p<0.001), eosinopaenia (<0.02×10⁹/L; 73% vs 24%, p<0.001), elevated D-dimer (>1 µg/mL; 88% vs 42%, p=0.001), increased LDH (471.0 U/L vs 230.0 U/L, p<0.001), elevated blood urea nitrogen (>9.5 mmol/L; 42% vs 3%, p<0.001), increased high-sensitivity troponin I (>34 pg/mL; 48% vs 7%, p=0.001), and increased inflammation markers including CRP (126.2 mg/L vs 19.9 mg/L, p<0.001), procalcitonin (≥0.05 ng/mL; 96% vs 43%, p<0.001) and ferritin (1264.2 mg/L vs 293.6 mg/L, p=0.004) (table 2). Of note, significant differences in the expression of inflammation-related cytokines, including IL-6, IL-8 and TNF-α, were observed between the two groups; these were dramatically increased in severe patients. Laboratory findings of patients with COVID-19 with chronic airway inflammation on admission Data are median (IQR), n (%) or n/N (%), where N is the total number of patients with available data. P values comparing severe with non-severe patients were calculated by χ2 test, Fisher's exact test or Mann-Whitney U test, as appropriate. IL, interleukin; TNF-α, tumour necrosis factor-α. Predictors of severity of COVID-19 in patients with chronic airway diseases To identify the predictors of severity of COVID-19 in patients with chronic airway diseases, we analysed the association between abnormal laboratory findings and disease severity with univariate and multivariate logistic regression models. Disease severity was significantly associated with all of the above-mentioned abnormal laboratory findings in univariate logistic regression analyses. In a multivariate regression model that incorporated lymphopaenia, eosinopaenia, elevated LDH and increased IL-6, eosinophil counts <0.02×10⁹/L (OR per one-unit decrease, 10.115 (95% CI 2.158 to 47.414), p=0.003) and LDH level >225 U/L (OR per one-unit increase, 22.300 (95% CI 2.179 to 228.247), p=0.009) were independent risk factors for disease severity (table 3). Our data suggest that decreased eosinophil counts and increased LDH levels may help clinicians identify severe COVID-19 in patients with chronic airway diseases. Predictors of severity of COVID-19 in patients with chronic airway diseases *Per one-unit increase. IL, interleukin; ref, reference; TNF-α, tumour necrosis factor-α.
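Read as a bedside screen, the two admission thresholds just reported reduce to a one-line rule; a minimal sketch with illustrative names, not identifiers from the study.

def flag_high_risk(eosinophils_1e9_per_l, ldh_u_per_l):
    # Either independent risk factor for severity triggers the flag.
    eosinopaenia = eosinophils_1e9_per_l < 0.02  # x10^9/L
    ldh_elevated = ldh_u_per_l > 225             # U/L
    return eosinopaenia or ldh_elevated

print(flag_high_risk(0.01, 210))  # True: eosinopaenia alone is enough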
Our data suggest that decreased eosinophil counts and increased LDH levels may help clinicians identify severe COVID-19 in patients with chronic airway diseases. Predictors of severity of COVID-19 in patients with chronic airway diseases *Per one-unit increase. IL, interleukin; ref, reference; TNF-α, tumour necrosis factor-α. To identify the predictors of severity of COVID-19 in patients with chronic airway diseases, we analysed the association between abnormal laboratory findings and disease severity with univariate and multivariate logistic regression models. Disease severity was significantly associated with all of the above-mentioned abnormal laboratory findings in univariate logistic regression analyses. In a multivariate regression model that incorporated lymphopaenia, eosinopaenia, elevated LDH and increased IL-6, eosinophil counts <0.02×10⁹/L (OR per one-unit decrease, 10.115 (95% CI 2.158 to 47.414), p=0.003) and LDH level >225 U/L (OR per one-unit increase, 22.300 (95% CI 2.179 to 228.247), p=0.009) were independent risk factors for disease severity (table 3). Our data suggest that decreased eosinophil counts and increased LDH levels may help clinicians identify severe COVID-19 in patients with chronic airway diseases. Predictors of severity of COVID-19 in patients with chronic airway diseases *Per one-unit increase. IL, interleukin; ref, reference; TNF-α, tumour necrosis factor-α. Eosinophil counts and LDH levels tend to return to normal range over time in non-severe patients We further analysed the eosinophil counts and LDH levels in patients with non-severe and severe COVID-19 with chronic bronchitis, COPD and asthma, respectively. We found that there was a significant difference in eosinophil counts and LDH levels between severe and non-severe patients with chronic bronchitis and COPD, but not in patients with asthma (figure 1). To observe the dynamic changes of eosinophil counts and LDH levels over time, we collected the eosinophil counts and LDH levels on the 5th, 10th, 15th, 20th, 25th and 30th days after admission. We found that eosinophil counts increased over time both in severe and non-severe patients. Meanwhile, LDH decreased over time (figure 2). Severe patients showed a slower recovery rate than non-severe patients. Of note, both eosinophil counts and LDH levels recovered more slowly in severe patients with COPD than those in severe patients with chronic bronchitis. Our data suggest that, as the disease recovers, eosinophil counts and LDH levels tend to return to normal range both in severe and non-severe patients, indicating a good therapeutic effect in patients with chronic airway diseases in COVID-19 treatment. Clinical characteristics of eosinophil and LDH in patients with COVID-19 with chronic airway inflammation. (A) Eosinophil counts in different subgroups. Eosinophil counts were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. (B) LDH levels in different subgroups. LDH levels were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. Values for non-severe and severe patients are presented with open and closed circles, respectively. Mann-Whitney U test was used. *P<0.05, **P<0.01, ***P<0.001, ****P<0.0001. COPD, chronic obstructive pulmonary disease; LDH, lactate dehydrogenase. Dynamic changes of eosinophil counts and LDH levels in patients with COVID-19 with chronic airway diseases. 
We further performed multivariate analysis for mortality in patients with COVID-19 with chronic airway inflammation using the above four variables and found that eosinophil count <0.02×10⁹/L (OR per one-unit decrease, 18.000 (95% CI 1.929 to 167.986), p=0.011) was the only independent risk factor for mortality (online supplemental table 1). Moreover, Kaplan-Meier survival curves indicated that patients with COVID-19 with eosinopaenia or elevated LDH had worse survival probability (p<0.05) (online supplemental figure 1). This suggests that eosinopaenia and elevated LDH are also potential predictors of mortality in patients with COVID-19 with underlying chronic airway diseases.
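For the survival comparison referenced above, a hedged sketch using the lifelines library is given below. lifelines is one common choice rather than the authors' stated tool, the data are simulated, and the log-rank test is assumed as the conventional test behind the reported p value.

```python
# Hedged sketch of the survival comparison in online supplemental figure 1.
# lifelines is one common choice, not necessarily the authors' tool; the data
# are simulated, and the log-rank test is assumed as the test behind p<0.05.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 59
time = rng.integers(1, 31, size=n).astype(float)  # days of follow-up (hypothetical)
died = rng.binomial(1, 0.2, size=n)               # 1 = death observed
eosinopaenia = rng.binomial(1, 0.45, size=n)      # eosinophils < 0.02x10^9/L

kmf = KaplanMeierFitter()
for grp, label in [(0, "no eosinopaenia"), (1, "eosinopaenia")]:
    mask = eosinopaenia == grp
    kmf.fit(time[mask], event_observed=died[mask], label=label)
    kmf.plot_survival_function()

res = logrank_test(
    time[eosinopaenia == 0], time[eosinopaenia == 1],
    event_observed_A=died[eosinopaenia == 0],
    event_observed_B=died[eosinopaenia == 1],
)
print(f"log-rank p = {res.p_value:.3f}")
```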
Discussion
In this retrospective cohort study, we found that eosinophil counts less than 0.02×10⁹/L and LDH levels greater than 225 U/L on admission were associated with severity of COVID-19 in patients with underlying chronic bronchitis, COPD and asthma. Moreover, eosinophil counts and LDH levels tended to return to normal range in both severe and non-severe patients after treatment, supporting their roles as indicators of disease progression and treatment efficacy.
Circulating and tissue-resident eosinophils are associated with a variety of diseases, such as COPD, asthma and chronic bronchitis, in which they participate in the pathological process and play a potent proinflammatory role. Despite the elevated eosinophils that characterise chronic airway inflammation, COPD, asthma and chronic bronchitis have not yet been reported as major risk factors for severity of SARS-CoV-2 infection. Zhang et al8 reported that, in a cohort of 140 hospitalised patients with COVID-19, none had asthma or other comorbid atopic diseases and only two patients (1.4%) had COPD, while more than half (53%) had eosinopaenia on the day of hospital admission.
Similarly, Du et al9 analysed the clinical features of 85 fatal cases of COVID-19 and found that 81% of the patients had very low eosinophil counts on admission. In our cohort of 1888 patients, 31 had chronic bronchitis (1.64%), 18 had COPD (0.95%) and only 10 had asthma (0.53%). Meanwhile, eosinopaenia was more common in critically severe patients, suggesting that the resolution of eosinopaenia could signal an improving clinical status.10 In our study, lower eosinophil counts were associated with worse survival probability, and eosinophil counts were significantly decreased in patients with severe COVID-19 with chronic bronchitis and COPD. No significant difference was observed in patients with asthma, partly owing to the limited sample size. Moreover, the drastic increase in airway eosinophil counts seen in most patients with chronic asthma after bronchoprovocation might be another, more important, cause. We further explored the dynamic changes of eosinophil counts in patients with chronic airway diseases over the course of COVID-19 and found that eosinophil counts gradually increased over time and returned to normal range in both severe and non-severe patients, which could be a possible indicator of treatment effectiveness. How eosinopaenia arises in COVID-19 remains unclear, but the most plausible explanation is depletion of eosinophils during the antiviral reaction, since the Th1 (type 1 T helper) antiviral response is inhibited in patients with chronic airway inflammation.
LDH has long been reported to be associated with COPD, asthma and chronic bronchitis and has been identified as a potential marker of chronic airway inflammation.11 12 Meanwhile, a large number of studies have reported elevated LDH levels in COVID-19, which could be a risk factor for mortality. Zheng et al13 conducted a systematic literature review and meta-analysis including four studies and found that LDH was statistically significantly higher in severe patients than in non-severe patients. Elevated LDH in severe cases indicates diffuse lung injury and tissue damage; we therefore hypothesised that LDH might be another predictor of exacerbation of chronic airway inflammation in COVID-19. Kaplan-Meier survival analysis suggested the hazard associated with elevated LDH levels. Similar to eosinophil counts, LDH was elevated in patients with severe COVID-19 with chronic bronchitis and COPD and gradually decreased over time in patients with both severe and non-severe COVID-19.
Multiple studies have highlighted the important roles of eosinopaenia and elevated LDH in facilitating the diagnosis and prognosis of severe COVID-19. Ma et al14 incorporated eosinopaenia into the COVID-19-REAL (radiological image, eosinophils, age, and leukocytes) score, which performed well in identifying populations at higher risk of COVID-19. Cazzaniga et al15 reported that absolute eosinopaenia in binary logistic regression analyses was associated with 4-week mortality and clinical outcomes in patients with COVID-19 pneumonia. Guan et al16 and Feng et al17 both proposed LDH as a significant predictor of COVID-19 mortality and adverse outcomes using the simple-tree XGBoost model, which may help identify high-risk COVID-19 cases. Eosinopaenia and elevated LDH have thus been identified as risk factors for severe COVID-19 in general; notably, in our study they were also associated with severity of COVID-19 in patients with chronic airway diseases.
There is growing concern regarding the association between COVID-19 and pulmonary function. Previous reports have concentrated on respiratory follow-up after hospitalisation for COVID-19. Trinkmann et al18 found that symptomatic patients had significantly lower forced expiratory volume in 1 s, vital capacity and transfer factor of the lung for carbon monoxide (TLCO) than asymptomatic patients. Riou et al19 found that patients with critical disease had lower total lung capacity and TLCO. However, how pre-existing pulmonary function impairment affects COVID-19 outcomes has not yet been fully elucidated. Related literature is currently scarce, possibly because of limited pulmonary function data and the temporary closure of pulmonary function testing laboratories during the COVID-19 pandemic. Morgenthau et al20 found that mortality in patients with COVID-19 with sarcoidosis was associated with moderate or severe impairment in pulmonary function, and He et al21 reported that a longer history of COPD increased the risk of death and negative outcomes in patients with COVID-19. In our cohort, the impaired lung function of patients with COVID-19 with underlying chronic airway diseases might have had a significant impact on outcome. However, this analysis could not be conducted because the data were unavailable, which is a limitation of this study.
Previous treatment regimens might also contribute to the outcome of patients with COVID-19 with underlying chronic airway diseases. Inhaled corticosteroids (ICS) (with or without a long-acting β-agonist) are applied directly to the respiratory epithelium in the management of stable COPD and asthma to reduce airway inflammation. ICS could decrease the expression of both ACE2 and transmembrane protease serine 2 (TMPRSS2) on airway epithelial cells, thereby protecting them from invasion by SARS-CoV-2.22 In addition, coronavirus proliferation and cytokine production may also be suppressed by ICS use.23 However, whether regular ICS use before the pandemic affected COVID-19 outcomes remains controversial. Bloom et al24 reported that patients with asthma older than 50 years could benefit from the use of ICS within 2 weeks of admission, while patients with other chronic pulmonary diseases could not. Schultze et al25 found no evidence that regular ICS use protected against severe COVID-19 outcomes, in patients with either asthma or COPD. In our cohort, only one patient with COPD and two patients with asthma reported long-term ICS use, owing to the difficulty of collecting medical histories at the start of the pandemic. Further detailed information on comorbidities, prior medication and potential sources of bias should be taken into account to establish the benefits or harms of ICS in COVID-19.
Different phenotypes of COPD and asthma, reflecting their complex pathophysiology, might also be partly involved in COVID-19; however, these hypotheses need further clarification. Kimura et al26 found, in an ex vivo study, that type 2 inflammatory cytokines (IL-4, IL-5, IL-13) were negatively associated with ACE2 expression and positively associated with TMPRSS2 expression. Ferastraoaru et al27 indicated that a Th2 asthma phenotype predicted reduced COVID-19 morbidity and mortality, whereas Kermani et al28 reported greater morbidity and mortality in neutrophilic severe asthma.
A previous report highlighted that eosinophilic inflammation is also a common and stable phenotype in COPD and that blood eosinophil counts can predict response to ICS treatment.29 Watson et al30 did not find any differences in ACE2 gene expression in blood eosinophilic COPD, further indicating that these patients might not have a different vulnerability to SARS-CoV-2 infection. How different inflammation types of COPD and asthma might affect progression to severe COVID-19 therefore needs further investigation. Our study also had some other limitations. First, owing to the retrospective study design, the accuracy of all laboratory results depended on medical records. Observation bias might also exist because of the limited sample size. Second, there could be selection bias in the multivariate regression model used to analyse the risk factors.
Conclusion
Our study reveals that eosinopaenia and elevated LDH on admission are potential predictors of disease severity in adults with COVID-19 with underlying chronic airway diseases. Moreover, eosinophil counts could indicate disease progression of COVID-19, thus revealing treatment efficacy. These predictors may help clinicians identify severe COVID-19 in patients with chronic bronchitis, COPD and asthma.
Key messages: Patients with chronic airway diseases are less likely to suffer from COVID-19. Eosinopaenia and elevated lactate dehydrogenase (LDH) can predict disease severity in patients with COVID-19 with underlying chronic airway diseases. Dynamic changes of eosinophil counts and LDH might indicate disease prognosis and treatment effectiveness.
Open questions: Are other molecules related to chronic airway inflammation also involved in the development of COVID-19? Can drugs targeting eosinophils be applied in COVID-19 treatment? How should patients with COVID-19 with chronic airway diseases manage themselves?
Background: Several predictors of COVID-19 severity have been reported. However, chronic airway inflammation characterised by accumulated lymphocytes or eosinophils may affect the pathogenesis of COVID-19. Methods: In this retrospective cohort study, we reviewed the medical records of all patients with laboratory-confirmed COVID-19 with chronic bronchitis, chronic obstructive pulmonary disease (COPD) and asthma admitted to the Sino-French New City Branch of Tongji Hospital, a large regional hospital in Wuhan, China, from 26 January to 3 April. The Tongji Hospital Ethics Committee approved this study. Results: There were 59 patients with chronic bronchitis, COPD and asthma. When compared with non-severe patients, severe patients were more likely to have decreased lymphocyte counts (0.6×10⁹/L vs 1.1×10⁹/L, p<0.001), eosinopaenia (<0.02×10⁹/L; 73% vs 24%, p<0.001), increased lactate dehydrogenase (LDH) (471.0 U/L vs 230.0 U/L, p<0.001) and elevated interleukin 6 level (47.4 pg/mL vs 5.7 pg/mL, p=0.002) on admission. Eosinopaenia and elevated LDH were significantly associated with disease severity in both univariate and multivariate regression models including the above variables. Moreover, eosinophil counts and LDH levels tended to return to normal range over time in both groups after treatment, and severe patients recovered more slowly than non-severe patients, especially in eosinophil count. Conclusions: Eosinopaenia and elevated LDH are potential predictors of disease severity in patients with COVID-19 with underlying chronic airway diseases. In addition, they could indicate disease progression and treatment effectiveness.
Background
SARS-CoV-2 was first identified after sequencing of clinical samples from a cluster of cases of pneumonia of unknown cause in December 2019 in Wuhan, Hubei Province, China. COVID-19, the disease caused by SARS-CoV-2, was subsequently declared a pandemic by the WHO owing to its aggressive large-scale spread across many countries, with thousands of confirmed cases worldwide every day. As of 15 November 2020, 53.7 million confirmed cases of COVID-19 and 1.3 million deaths had been reported worldwide, creating an urgent need for early identification of severe cases.1 Clinical evidence on SARS-CoV-2 has suggested several routes of human-to-human transmission, with respiratory aerosol droplets undoubtedly the main source of infection. SARS-CoV-2 attacks the respiratory system by binding to the cell entry receptor ACE2 on airway epithelial cells, resulting in pneumonia and respiratory failure in critically ill patients. Chronic bronchitis, chronic obstructive pulmonary disease (COPD) and asthma are common respiratory diseases characterised by chronic airway inflammation.2–4 Eosinophils, neutrophils and macrophages of the innate immune response increase significantly in the airway and lungs during the initial phase of inflammation. Lymphocytopaenia has been reported in severe patients infected with SARS-CoV-2.5 Circulating eosinophil counts have also been reported to be decreased in patients with COVID-19 and to be associated with disease severity.6 Patients with underlying COPD, asthma and chronic bronchitis may therefore have different inflammatory states after SARS-CoV-2 infection compared with patients without chronic airway inflammation. In this retrospective cohort study, we reviewed the medical records of 59 patients with laboratory-confirmed COVID-19 with underlying chronic airway inflammation and compared the demographic, clinical and radiological characteristics as well as laboratory results between severe and non-severe patients in this cohort. Potential predictors of disease severity were identified among the abnormal laboratory findings using univariate and multivariate regression models.
Cross-Sectional Association Between Employment Status and Self-Rated Health Among Middle-Aged Japanese Women: The Influence of Socioeconomic Conditions and Work-Life Conflict.
31353324
Few studies examining the impact of employment status on health among women have considered domestic duties and responsibilities as well as household socioeconomic conditions. Moreover, to our knowledge, no studies have explored the influence of work-family conflict on the association between employment status and health. This research aimed to investigate the cross-sectional associations of employment status (regular employee, non-regular employee, or self-employed) with self-rated health among Japanese middle-aged working women.
BACKGROUND
Self-report data were obtained from 21,450 working women aged 40-59 years enrolled in the Japan Public Health Center-based Prospective Study for the Next Generation (JPHC-NEXT Study) in 2011-2016. Multivariate odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for poor self-rated health ('poor' or 'not very good') by employment status. Sub-group analyses by household income and marital status, as well as mediation analysis for work-family conflict, were also conducted.
METHODS
Adjusted ORs for the poor self-rated health of non-regular employees and self-employed workers were 0.90 (95% CI, 0.83-0.98) and 0.84 (95% CI, 0.75-0.94), respectively, compared with regular employees. The identified association of non-regular employment was explained by work-family conflict. Subgroup analysis indicated no statistically significant modifying effects by household income and marital status.
RESULTS
Among middle-aged working Japanese women, employment status was associated with self-rated health; non-regular employees and self-employed workers were less likely to report poor self-rated health, compared with regular employees. The lowered OR of poor self-rated health among non-regular employees may be explained by their reduced work-family conflict.
CONCLUSION
[ "Adult", "Cross-Sectional Studies", "Employment", "Family Characteristics", "Female", "Health Status", "Humans", "Japan", "Middle Aged", "Prospective Studies", "Self Report", "Socioeconomic Factors", "Women, Working", "Work-Life Balance" ]
7429146
INTRODUCTION
Employment status (ie, full-time, part-time, dispatch, contract, non-regular, or self-employed) may influence health.1–4 Precarious employment has been associated empirically with deteriorated health outcomes, using various measures of mortality,2,3,5,6 mental health,7–9 and health behaviors.10,11 However, inconsistent results have also been reported, particularly for self-rated health; non-regular workers have evidenced poorer self-rated health compared with regular workers,12,13 while some studies showed no significant differences14 or even a reverse association.15–17 One of the possible reasons for the inconsistent results could be underlying differences in participants’ reasons for being temporary workers.6 For example, some voluntarily choose temporary employment for domestic reasons, such as maintaining a work-family balance, while others are unable to find regular employment and are forced to accept temporary work. Further, the voluntariness of a job choice for non-regular employees may differ based on household socioeconomic conditions and may be influenced by varying societal norms regarding gender roles.18 In Japan, the number and proportion of female non-regular workers (ie, temporary, contract, and part-time) is much larger than that of non-regular male workers. In 2018, 54% of female employees and 14% of male employees aged 24–65 years were non-regular workers.19 The gender role norms (ie, men work outside the home, women stay at home and take care of children and the elderly) that exist in Japanese society could be one reason for this significant gender difference. Women’s participation in Japan’s workforce by age group shows a bimodal pattern, whereas in many Western societies it displays a convex shape similar to that of Japan’s male workforce.20 This bimodal pattern suggests that women tend to take a career break during their 20s and 30s, for such reasons as child rearing, and return to the workforce in their 40s.21,22 Part-time work is often the only choice for women who reenter the workforce23; it is an important path for women reentering the labor market.24 Consequently, a much higher proportion of women than men in Japan are non-regularly employed (mainly part-time).19 The proportion of female non-regular workers has increased along with the promotion of women’s participation in the workforce over the last few decades in Japan.25 However, these promotional efforts have not improved the imbalanced participation in household duties.
That is, in most societies including Japan, women’s household responsibilities have persisted even as more women have entered and played a greater role in the workforce.26–28 Work-family conflict, generally defined as “a form of inter-role conflict in which the role pressures from the work and family domains are mutually incompatible in some respect,”29 has been associated with women’s self-rated health.30–33 An international comparison study showed that Japanese female workers had the highest work-family conflict and the poorest self-rated mental and physical health among Japanese, Finnish, and English government workers.34 Other research, moreover, has shown that the working arrangement is associated with the prevalence of work-family conflict, with higher work-family conflict among full-time versus part-time workers.33,35 Women in Japan are often obligated to work part-time to mitigate the burden of managing both work and household demands and responsibilities, because non-regular employment often offers more schedule flexibility than regular employment.36,37 Thus, we hypothesized that the association between employment status and self-rated health may be explained by the level of work-family conflict. Indeed, one of the main reasons middle-aged women in Japan choose non-regular employment and self-employment is time flexibility,19 which can reduce the work-family conflict raised by the multiple social roles of working women.38 To our knowledge, no studies have explored the effect of work-family conflict on the association between employment status and health. Available financial and material resources can also affect the conditions relating to employment status. For example, some women work part-time to make a living, while others only need to boost the household budget; such differences produce heterogeneity among non-regular employees.18 Thus, household socioeconomic conditions may modify the association between employment status and health.26–28 However, few studies have included domestic duties and responsibilities and household socioeconomic conditions in examinations of the impact of employment status on health among women. The current study thus aimed to investigate the associations of employment status (ie, regular, non-regular, or self-employed) with self-rated health among Japanese middle-aged working women. Our specific research questions were: (1) Is employment status associated with the probability of having poor self-rated health among Japanese working women? (2) Is the association between employment status and self-rated health explained by the level of work-family conflict? (3) Are these associations modified by socioeconomic conditions?
RESULTS
Table 1 shows the numbers (%) and means (SD) of the study population’s characteristics. The average age was 50.3 (SD, 6.08) years; 17.7% of participants had poor self-rated health. Participants with regular employment, non-regular employment, and self-employment represented 40.6%, 43.2%, and 16.2% of the study population, respectively. The proportion of those who had attained a high school education or less was 55.2%, while those with undergraduate degrees or higher represented 8.1%. Thirty percent of participants held professional/managerial jobs and 79.8% were married. The distribution of poor self-rated health differed by employment status: 18.9% for regular employees, 17.4% for non-regular employees, and 15.3% for the self-employed. Compared with the other employment types, regular employees tended to be younger, more educated, and non-married, to work as managers/professionals, and to have higher household income, a medical history of disease, and higher work-family conflict. SD, standard deviation. aDiseases: heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression. Table 2 presents crude and adjusted ORs (95% CIs) of employment status for poor self-rated health. The multivariate ORs for poor self-rated health of non-regular employees and the self-employed, compared with regular employees, were 0.90 (95% CI, 0.83–0.98) and 0.84 (95% CI, 0.75–0.94), respectively (model 1). The lowered odds for poor self-rated health among non-regular employees were attenuated after adjusting for work-family conflict, and we observed no significant difference in odds between regular and non-regular employees (OR 1.00; 95% CI, 0.92–1.09) (model 2). By contrast, although attenuated, a statistically significant association with poor self-rated health remained after adjusting for work-family conflict among the self-employed (OR 0.87; 95% CI, 0.78–0.98). CI, confidence interval; OR, odds ratio. aModel 1: adjusted by education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, and hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area. bModel 2: Model 1 + work-family conflict. The results of mediation analysis using work-family conflict scores showed that the association between non-regular employment and self-rated health was largely mediated through work-family conflict (proportion mediated, 100%), with no direct effect of non-regular employment on self-rated health identified (Table 3). Part (31%) of the total effect of self-employment on self-rated health was mediated by work-family conflict, but it was mainly a direct effect. CI, confidence interval; OR, odds ratio. aAdjusted by education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, and hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area.
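The "proportion mediated" figures above follow the log-OR decomposition described in the statistical analysis section. A worked example with placeholder ORs (not the study's exact estimates) makes the arithmetic explicit:

```python
# Worked arithmetic for the "proportion mediated" values above, following the
# log-OR decomposition in the statistical analysis section. The ORs here are
# placeholders, not the study's exact estimates.
import math

or_nie = 1.15  # natural indirect effect through work-family conflict (hypothetical)
or_nde = 1.00  # natural direct effect (hypothetical; ~1 for non-regular employment)
or_te = or_nie * or_nde  # total effect on the OR scale

proportion_mediated = math.log(or_nie) / math.log(or_te)
print(f"proportion mediated = {proportion_mediated:.0%}")  # 100% when NDE = 1
```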
Table 4 presents the results of the sub-group analysis by household income and marital status. The adjusted ORs of non-regular employees and self-employed workers, compared with regular employees, were 0.83 (95% CI, 0.74–0.94) and 0.78 (95% CI, 0.66–0.92), respectively, in the higher household income group, and 0.88 (95% CI, 0.80–0.96) and 0.81 (95% CI, 0.72–0.92), respectively, in the married group, while no statistically significant associations were identified in their counterpart groups. The association between employment status and self-rated health was more evident in the high household income group and in the married group, although the interaction terms were not statistically significant. CI, confidence interval; OR, odds ratio. aAdjusted by education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, and hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area.
Conclusions
Employment status is associated with the probability of poor self-rated health among middle-aged working Japanese women; non-regular employees and self-employed workers were less likely to report poor self-rated health compared with regular employees. Additionally, most of the association between non-regular employment and poor self-rated health could be explained by the level of work-family conflict. We did not identify significant influences of household income or marital status on the association between employment status and self-rated health.
[ "Study cohort", "Study population", "Measurements", "Predictor variable", "Outcome variable", "Covariates", "Statistical analysis", "Conclusions" ]
[ "Data in this study were derived from the Japan Public Health Center-based Prospective Study for the Next Generation (JPHC-NEXT Study). The JPHC-NEXT study was initiated in 2011 and the baseline survey was completed by December 31, 2016. In 2011–2016, we established a population-based cohort of 261,939 residents aged 40–74 years who registered an address in cities, towns, and villages (16 total) in seven prefectures throughout Japan. A self-administered questionnaire was distributed mostly by hand and partly by post to all participants; questions were asked about their lifestyles, personal medical histories, and socio-demographics. Incomplete answers were followed up by telephone interview. Among 261,939 residents, 115,385 agreed to participate in the JPHC-NEXT Study (response rate, 44.1%). All participants provided written informed consent. We excluded those with did not respond to the questionnaire. Our final cohort population was 114,157 (52,572 men and 61,585 women).39\nThe JPHC-NEXT Study was approved by the institutional review boards of the National Cancer Center and other participating institutions.", "We excluded women aged 60 years and older, women who were unemployed, and women for whom no information on occupation was available, to restrict our study population to working women 40–59 years old (n = 24,375). From these, we excluded 1,778 women with a history of cancer or cardiovascular disease and/or physical limitations; we excluded a further 1,147 women with undisclosed employment status, self-rated health, and/or work-family conflict scores. The final study population comprised 21,450 women.", " Predictor variable Our main predictor variable was employment status. Employment status was identified using self-report and categorized into three groups: 1) regular employee, 2) non-regular employee (ie, part-time, temporary, or contract worker), and 3) self-employee.\nOur main predictor variable was employment status. Employment status was identified using self-report and categorized into three groups: 1) regular employee, 2) non-regular employee (ie, part-time, temporary, or contract worker), and 3) self-employee.\n Outcome variable We measured self-rated health as our outcome variable, using the single questionnaire item: “How would you describe your overall state of health?”. Participants chose one of five responses: poor, not very good, good, very good, and excellent. Following previous studies,40,41 we categorized the responses into two groups: 1) poor self-rated health (‘poor’ or ‘not very good’), and 2) not poor self-rated health (‘good,’ ‘very good,’ or ‘excellent health’).\nWe measured self-rated health as our outcome variable, using the single questionnaire item: “How would you describe your overall state of health?”. Participants chose one of five responses: poor, not very good, good, very good, and excellent. 
Covariates: Age; highest educational attainment (junior high school, high school, junior college, college and higher, and other); occupation category (professional/managerial, clerical, or manual job); equivalent household income (quintile); marital status (married and non-married); history of hypertension, diabetes mellitus, hypercholesterolemia, and any other diseases (gout, asthma, chronic obstructive pulmonary disease, chronic bronchitis, chronic kidney failure, cataract, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatic cirrhosis, hepatitis, gall stone, sleep apnea, or depression); and residential area were hypothesized as confounding factors. For income, we obtained data on annual household income in six categories. We calculated equivalent household income by dividing household income (ie, the inserted median value of each category) by the square root of the total number of household members.42 Equivalent household income was classified into quintiles; we also categorized it into two groups by its median (High and Low) for the sub-group analysis.
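A minimal sketch of this equivalent-income derivation is shown below, assuming hypothetical column names and using pandas rather than the SAS software actually used for the analysis:

```python
# Sketch of the equivalent-income derivation described above; column names are
# hypothetical, and the actual analysis was run in SAS rather than Python.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "household_income": [3.0, 5.5, 8.0, 3.0, 12.0],  # inserted category medians (million JPY)
    "household_members": [1, 2, 4, 3, 2],
})

# Equivalent income = household income / sqrt(number of household members)
df["equiv_income"] = df["household_income"] / np.sqrt(df["household_members"])

# Quintiles for adjustment; a median split (High/Low) for the sub-group analysis
df["equiv_income_q5"] = pd.qcut(df["equiv_income"], 5, labels=False)
df["equiv_income_high"] = df["equiv_income"] >= df["equiv_income"].median()
print(df)
```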
Data on work-family conflict were obtained through a self-reported questionnaire. Previous studies suggest that work-family conflict consists of two factors (work-to-family and family-to-work conflict), which can be combined into a joint scale.43,44 The work-family conflict questionnaire consisted of eight items (four each for work-to-family conflict and family-to-work conflict; eTable 1), adapted from the United States National Study of Midlife Development43 and others.33,34 Each question had three response categories (0 = never, 1 = to some extent, and 2 = often). The responses to the eight items were summed to yield a work-family conflict score ranging from 0 to 16. The internal consistency of work-family conflict in this study population was acceptable (Cronbach’s alpha = 0.81); further evidence on the reliability and validity of these scales has been provided by several other studies.33,34 The scores were grouped into tertiles for the analysis, in accordance with previous studies.34
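The scoring and internal-consistency check described above can be sketched as follows; the item responses are simulated, so the resulting alpha will not match the 0.81 observed on the real data:

```python
# Minimal sketch of the work-family conflict scoring and internal-consistency
# check; item responses are simulated, so alpha will not match the 0.81
# observed on the real data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
items = pd.DataFrame(
    rng.integers(0, 3, size=(100, 8)),                 # 8 items scored 0/1/2
    columns=[f"wfc_item{i}" for i in range(1, 9)],
)

wfc_score = items.sum(axis=1)                          # joint scale, range 0-16
wfc_tertile = pd.qcut(wfc_score, 3, labels=False, duplicates="drop")

def cronbach_alpha(x: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(f"alpha = {cronbach_alpha(items):.2f}")
```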
Statistical analysis
We calculated the number (%) or mean (standard deviation [SD]) of the study population’s characteristics. Using chi-square tests and analysis of variance, differences among employment statuses were calculated in terms of proportions and mean values for poor self-rated health, socioeconomic variables, marital status, medical history of hypertension, diabetes mellitus, or hypercholesterolemia, history of diseases, and work-family conflict scores. Odds ratios (ORs) with 95% confidence intervals (CIs) of employment status for poor self-rated health were calculated using logistic regression analysis after adjusting for hypothesized confounding variables, namely age, education level, household equivalent income, occupation, marital status, medical history of hypertension, diabetes mellitus, hypercholesterolemia, or any other diseases, and residential area (model 1). To explore the mediating effect of work-family conflict on the association between employment status and self-rated health, we added work-family conflict scores to the model (model 2). Further, we conducted mediation analysis and estimated the direct and indirect effects of employment status on self-rated health through work-family conflict. The natural direct effect (NDE), natural indirect effect (NIE), and total effect (TE) were estimated as ORs for poor self-rated health with respect to the mediator, conditioned on the measured covariates.45 We also estimated the percentage of the total association between employment status and self-rated health that was mediated through work-family conflict, using log ORs.46 Sub-group analyses by household equivalent income and marital status were also performed for self-rated health. We tested statistical interactions using cross-product terms for employment status and household equivalent income or marital status. All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).
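As an illustration of these modelling steps, the sketch below re-expresses them in Python/statsmodels (the study itself used SAS 9.4); all names and data are hypothetical stand-ins, and only a subset of the covariates is included for brevity:

```python
# Hedged Python/statsmodels re-expression of the modelling steps above (the
# study itself used SAS 9.4); all names and data are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "poor_health": rng.binomial(1, 0.18, n),
    "employment": rng.choice(["regular", "non_regular", "self_employed"], n),
    "age_group": rng.choice(["40s", "50s"], n),
    "income_high": rng.binomial(1, 0.5, n),
    "married": rng.binomial(1, 0.8, n),
})

# Model 1 (simplified): employment status with regular employees as reference,
# adjusted for a subset of the hypothesised confounders
m1 = smf.logit(
    "poor_health ~ C(employment, Treatment('regular')) + C(age_group)"
    " + income_high + married",
    data=df,
).fit(disp=0)
print(np.exp(m1.params))  # ORs vs the regular-employee reference

# Interaction test for effect modification by household income (cross-product term)
m_int = smf.logit(
    "poor_health ~ C(employment, Treatment('regular')) * income_high"
    " + C(age_group) + married",
    data=df,
).fit(disp=0)
print(m_int.pvalues.filter(like=":"))  # p-values for the interaction terms
```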
[ "INTRODUCTION", "MATERIAL AND METHODS", "Study cohort", "Study population", "Measurements", "Predictor variable", "Outcome variable", "Covariates", "Statistical analysis", "RESULTS", "DISCUSSION", "Conclusions" ]
[ "Employment status (ie, full-time, part-time, dispatch, contract, non-regular, or self-employed) may influence health.1–4 Precarious employment has been associated empirically with deteriorated health outcomes, using various measures of mortality,2,3,5,6 mental health,7–9 and health behaviors.10,11 However, inconsistent results have also been reported, particularly for self-rated health; non-regular workers have evidenced poorer self-rated health compared with regular workers,12,13 while some studies showed no significant differences14 or even a reverse association.15–17\nOne of the possible reasons for the inconsistent results could be underlying differences in participants’ reasons for being temporary workers.6 For example, some voluntarily choose temporary employment for domestic reasons, such as maintaining a work-family balance, while others are unable to find regular employment and are forced to accept temporary work. Further, the voluntariness of a job choice for non-regular employees may differ based on household socioeconomic conditions and may be influenced by varying societal norms regarding gender roles.18\nIn Japan, the number and proportion of female non-regular workers (ie, temporal, contract, and part-time) is much larger than that of non-regular male workers. In 2018, 54% of female employees and 14% of male employees aged 24–65 years old were non-regular workers.19 The gender role norms (ie, men work outside the home, women stay at home and take care of children and the elderly) that exist in Japanese society could be one of the reasons for this significant gender difference. Women’s participation in Japan’s workforce by age group shows a bimodal pattern. By contrast, similar to Japan’s male workforce, many Western societies display a convex shape.20 This bimodal pattern suggests that women tend to take a career break during their 20s and 30s, for such reasons as child rearing, and return to the workforce in their 40s.21,22 Part-time work is often the only choice for women who reenter the workforce23; it is an important path for women reentering the labor market.24 Consequently, a much higher proportion of women than men in Japan are non-regularly employed (mainly part-time).19\nThe proportion of female non-regular workers has increased along with the promotion of women’s participation in the workforce in the last few decades in Japan.25 However, these promotional efforts have not improved the imbalanced participation in household duties. 
That is, in most societies including Japan, women’s household responsibilities have persisted despite more women entering and playing a greater role in the workforce.26–28 Work-family conflict, generally defined as “a form of inter-role conflict in which the role pressures from the work and family domains are mutually incompatible in some respect,”29 has been associated with women’s self-rated health.30–33 An international comparison study showed that Japanese female workers had the highest work-family conflict and the poorest self-rated mental and physical health among Japanese, Finnish, and English government workers.34 Other research, moreover, has shown that the working arrangement is associated with the prevalence of work-family conflict, with higher work-family conflict existing among full-time versus part-time workers.33,35 Women in Japan are often obligated to work part-time to mitigate their burden of managing both work and household demands and responsibilities, because non-regular employment often offers more schedule flexibility compared with regular employment.36,37 Thus, we hypothesized that the association between employment status and self-rated health may be explained by the level of work-family conflict. Indeed, one of the main reasons middle-aged women in Japan choose non-regular employment and self-employment is time flexibility,19 which can reduce the work-family conflict arising from working women’s multiple social roles.38 To our knowledge, no studies have explored the effect of work-family conflict on the association between employment status and health.\nAvailable financial and material resources can affect the conditions relating to employment status. For example, some women work part-time to make a living, while others only need to boost the household budget; such differences produce heterogeneity amongst non-regular employees.18 Thus, household socioeconomic conditions may affect the association between employment status and health.26–28 However, few studies have included domestic duties and responsibilities and household socioeconomic conditions in examinations of the impact of employment status on health among women.\nThe current study thus aimed to investigate the associations of employment status (ie, regular, non-regular, or self-employed) with self-rated health among Japanese middle-aged working women. Our specific research questions were:\nIs employment status associated with the probability of having poor self-rated health among Japanese working women?\nIs the association between employment status and self-rated health explained by the level of work-family conflict?\nAre the associations between employment status and self-rated health noted above modified by socioeconomic conditions?", " Study cohort Data in this study were derived from the Japan Public Health Center-based Prospective Study for the Next Generation (JPHC-NEXT Study). The JPHC-NEXT Study was initiated in 2011, and the baseline survey was completed by December 31, 2016. In 2011–2016, we established a population-based cohort of 261,939 residents aged 40–74 years who registered an address in cities, towns, and villages (16 in total) in seven prefectures throughout Japan. A self-administered questionnaire was distributed mostly by hand and partly by post to all participants; questions were asked about their lifestyles, personal medical histories, and socio-demographics. Incomplete answers were followed up by telephone interview.
Among the 261,939 residents, 115,385 agreed to participate in the JPHC-NEXT Study (response rate, 44.1%). All participants provided written informed consent. We excluded those who did not respond to the questionnaire. Our final cohort population was 114,157 (52,572 men and 61,585 women).39\nThe JPHC-NEXT Study was approved by the institutional review boards of the National Cancer Center and other participating institutions.\n Study population We excluded women aged 60 years and older, women who were unemployed, and women for whom no information on occupation was available, to restrict our study population to working women 40–59 years old (n = 24,375). From these, we excluded 1,778 women with a history of cancer or cardiovascular disease and/or physical limitations; we excluded a further 1,147 women with undisclosed employment status, self-rated health, and/or work-family conflict scores. The final study population comprised 21,450 women.
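(As an arithmetic check of this exclusion flow, using only the figures above: 24,375 − 1,778 = 22,597, and 22,597 − 1,147 = 21,450, matching the final analytic sample.)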
\n Measurements Predictor variable Our main predictor variable was employment status. Employment status was identified using self-report and categorized into three groups: 1) regular employee, 2) non-regular employee (ie, part-time, temporary, or contract worker), and 3) self-employed worker.\n Outcome variable We measured self-rated health as our outcome variable, using the single questionnaire item: “How would you describe your overall state of health?”. Participants chose one of five responses: poor, not very good, good, very good, and excellent. Following previous studies,40,41 we categorized the responses into two groups: 1) poor self-rated health (‘poor’ or ‘not very good’), and 2) not poor self-rated health (‘good,’ ‘very good,’ or ‘excellent’).\n Covariates Age; highest education attainment level (junior high school, high school, junior college, college and higher, and other); occupation category (professional/managerial, clerical, or manual job); equivalent household income (quintile); marital status (married and non-married); history of hypertension, diabetes mellitus, hypercholesterolemia, and any other diseases (gout, asthma, chronic obstructive pulmonary disease, chronic bronchitis, chronic kidney failure, cataract, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatic cirrhosis, hepatitis, gall stone, sleep apnea, or depression); and residential area were hypothesized as confounding factors. With regard to income, we obtained the data for annual household income in six categories. We calculated household equivalent income by dividing household income (ie, the inserted median value of each category) by the square root of the total number of household members.42 Equivalent household income was classified into quintiles; we also categorized equivalent household income into two groups by its median (High and Low) for sub-group analysis.\nThe data for work-family conflict were obtained through a self-reported questionnaire. Previous studies suggest that work-family conflict consists of two factors (work-to-family and family-to-work conflict), which can be combined into a joint scale.43,44 The questionnaire for work-family conflict consisted of eight items (four items each for work-to-family conflict and family-to-work conflict; eTable 1), which were adapted from the United States National Study of Midlife Development43 and others.33,34 Each question had three response categories (0 = never, 1 = to some extent, and 2 = often). The responses to the eight items were summed to yield work-family conflict scores ranging from 0 to 16. The internal consistency of the work-family conflict scale in this study population was acceptable (Cronbach’s alpha = 0.81). Further evidence on the reliability and validity of these scales has been provided by several other studies.33,34 The scores for work-family conflict were grouped into tertiles for the analysis, in accordance with previous studies.34
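For the tertile grouping of the 0–16 scores, a minimal pandas sketch (hypothetical column names; rank-based splitting is one way to handle the heavy ties in integer scores):

import pandas as pd

df = pd.DataFrame({"wfc_score": [0, 3, 5, 7, 8, 9, 11, 12, 14, 16]})  # toy scores
# rank(method="first") breaks ties so qcut can always form three equal-sized groups
df["wfc_tertile"] = pd.qcut(df["wfc_score"].rank(method="first"), 3,
                            labels=["low", "middle", "high"])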
\n Statistical analysis We calculated the number (%) or mean (standard deviation [SD]) of the study population’s characteristics. Using chi-square tests and analysis of variance, we compared employment statuses in terms of proportions and mean values for poor self-rated health, socioeconomic variables, marital status, medical history of hypertension, diabetes mellitus, or hypercholesterolemia, history of other diseases, and work-family conflict scores.\nOdds ratios (ORs) with 95% confidence intervals (CIs) of employment status for poor self-rated health were calculated using logistic regression after adjusting for hypothesized confounding variables: age, education level, household equivalent income, occupation, marital status, medical history of hypertension, diabetes mellitus, hypercholesterolemia, or any other diseases, and residential area (model 1). To explore the mediating effect of work-family conflict on the association between employment status and self-rated health, we added work-family conflict scores to the model (model 2). Further, we conducted mediation analysis and estimated the direct and indirect effects of employment status on self-rated health through work-family conflict. The natural direct effect (NDE), natural indirect effect (NIE), and total effect (TE) were estimated as ORs for poor self-rated health with respect to the mediator, conditioned on the measured covariates.45 We also estimated the percentage of the total association between employment status and self-rated health that was mediated through work-family conflict using log ORs.46\nSub-group analyses by household equivalent income and marital status were also performed for self-rated health. We tested statistical interactions using cross-product terms for employment status and household equivalent income or marital status. All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).
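As an illustrative sketch of models 1 and 2 (not the study's code, which was written in SAS 9.4), the following Python/statsmodels example fits the two logistic regressions on synthetic data, with only a subset of the paper's covariates and with hypothetical variable names:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; column names are hypothetical, not the study's.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "poor_srh": rng.integers(0, 2, n),
    "employment": rng.choice(["regular", "non_regular", "self_employed"], n),
    "age": rng.integers(40, 60, n),
    "married": rng.integers(0, 2, n),
    "income_quintile": rng.integers(1, 6, n),
    "wfc_score": rng.integers(0, 17, n),
})

# Model 1: employment status adjusted for (a subset of) the confounders;
# the paper also adjusted for education, occupation, medical history, and area.
m1 = smf.logit("poor_srh ~ C(employment, Treatment('regular')) + age"
               " + married + C(income_quintile)", data=df).fit(disp=False)

# Model 2: model 1 plus the work-family conflict score (hypothesized mediator).
m2 = smf.logit("poor_srh ~ C(employment, Treatment('regular')) + age"
               " + married + C(income_quintile) + wfc_score", data=df).fit(disp=False)

print(np.exp(m1.params))  # ORs for model 1
print(np.exp(m2.params))  # ORs for model 2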
", "Table 1 shows the numbers (%) and means (SD) of the study population’s characteristics. The average age was 50.3 (SD, 6.08) years; 17.7% of participants had poor self-rated health. Participants with regular employment, non-regular employment, and self-employment represented 40.6%, 43.2%, and 16.2% of the study population, respectively. The proportion of participants who had attained a high school education or less was 55.2%, while that of participants with undergraduate degrees or higher was 8.1%. Thirty percent of participants held professional/managerial jobs, and 79.8% were married. The distribution of poor self-rated health differed by employment status: 18.9% for regular employees, 17.4% for non-regular employees, and 15.3% for the self-employed. Compared with the other employment types, regular employees tended to be younger, more educated, non-married, and in managerial/professional jobs, and to have higher household income, a medical history of disease, and higher work-family conflict.\nSD, standard deviation.\naDiseases: heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression.\nTable 2 presents crude and adjusted ORs (95% CIs) of employment status for poor self-rated health. The multivariate ORs for poor self-rated health of non-regular employees and the self-employed, compared with regular employees, were 0.90 (95% CI, 0.83–0.98) and 0.84 (95% CI, 0.75–0.94), respectively (model 1). The lower OR for non-regular employees was attenuated after adjusting for work-family conflict, and we observed no significant difference in odds between regular and non-regular employees (OR 1.00; 95% CI, 0.92–1.09) (model 2). By contrast, although attenuated, a statistically significant association with poor self-rated health remained after adjusting for work-family conflict among the self-employed (OR 0.87; 95% CI, 0.78–0.98).\nCI, confidence interval; OR, odds ratio.\naModel 1: adjusted for education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area.\nbModel 2: model 1 + work-family conflict.\nThe mediation analysis by work-family conflict scores showed that the association between non-regular employment and self-rated health was largely mediated through work-family conflict (the proportion mediated was 100%), and no direct effect of non-regular employment on self-rated health was identified (Table 3). Part (31%) of the total effect of self-employment on self-rated health was mediated by work-family conflict, but it was mainly a direct effect.\nCI, confidence interval; OR, odds ratio.\naAdjusted for education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area.\nTable 4 presents the results of the sub-group analysis by household income and marital status. The adjusted ORs of non-regular employees and the self-employed, compared with regular employees, were 0.83 (95% CI, 0.74–0.94) and 0.78 (95% CI, 0.66–0.92), respectively, in the higher household income group, and 0.88 (95% CI, 0.80–0.96) and 0.81 (95% CI, 0.72–0.92), respectively, in the married group, while no statistically significant associations were identified in their counterpart groups. The association between employment status and self-rated health was thus more evident in the high household income group and in the married group, although the interactions were not statistically significant.\nCI, confidence interval; OR, odds ratio.\naAdjusted for education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area.
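To connect the decomposition from the methods with the figures above (an editorial back-of-the-envelope check using the quoted adjusted ORs as stand-ins for the Table 3 estimates, which are not reproduced in this excerpt): for non-regular employment, a total-effect OR of about 0.90 with a direct-effect OR of about 1.00 puts essentially the whole log-OR effect on the indirect path, log(0.90) / (log(1.00) + log(0.90)) = 1, ie, a proportion mediated of 100%, consistent with Table 3. For self-employment, a direct-effect OR of 0.87 against a total-effect OR of 0.84 leaves most of the log-OR effect on the direct path (log 0.87 / log 0.84 ≈ 0.8), so only a minority is mediated, of the same order as the reported 31%.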
", "In this study of middle-aged Japanese working women, non-regular employees and the self-employed were less likely to have poor self-rated health compared with regular employees. Furthermore, the apparently lower probability of having poor self-rated health among non-regular employees was explained by their diminished level of work-family conflict. The association between employment status and self-rated health differed by household income level and marital status, although these interactions were not statistically significant.\nOur results are in line with other research findings15–17; for example, cross-sectional studies among public sector employees in Finland showed lower ORs for poor self-rated health among fixed-term female employees, compared with female permanent employees (OR 0.70; 95% CI, 0.60–0.82).17 Although only a limited number of similar studies have been conducted in Japan, our results were consistent with those of the Comprehensive National Survey among 18–64-year-old Japanese female employees, which reported that, compared with permanent employment, unstable employment with short working hours was associated with a smaller proportion of women who rated their health as poor.47\nOne of the reasons for poorer self-rated health among regular employees compared with non-regular employees could be the difficulties that women in Japan face in reconciling family life and career. Female workers in Japan, who make up a significant share of the total workforce, are also more likely to be influenced by strong societal gender norms to fulfill household duties.24 Thus, working as regular employees often leads to severe physical and psychosocial hardship for women.38 Women in Japan who were regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than their non-regularly employed counterparts.48 Moreover, Japanese female workers were reported to have the highest work-family conflict and the poorest self-rated mental health among Japanese, Finnish, and English government workers.34 Thus, female workers in Japan may voluntarily choose to work part-time to achieve a balance between their work and family lives.\nOur results (ie, significantly lower work-family conflict among non-regular employees versus regular employees, with the effect of non-regular employment on poor self-rated health mostly explained by work-family conflict) suggest that non-regular employees may have achieved better self-rated health than regular employees by buffering their work-family conflict. The possible benefits of non-regular employment, such as schedule flexibility and/or fewer duties and responsibilities at work, can reduce work-family conflict, which may contribute to a reduced probability of having poor self-rated health. Our findings also suggest that Japanese female workers may find relief from difficult life situations caused by juggling work and family lives by taking non-regular jobs or being self-employed, which may also be beneficial for their self-rated health.\nOur results were, however, inconsistent with previous work that showed a higher mortality risk for non-regular female employees than for regular employees in Japan.5 While we do not have a clear explanation for this inconsistency, we speculate that one possible reason could be differences between the two outcomes in the mechanisms of the health impact of employment status. That is, self-rated health reflects current life conditions on a short-term basis, while mortality may reflect chronic and continued effects relating to employment, such as job insecurity, financial insecurity, and social welfare.
These chronic health effects may be particularly strong in Japan, where there is little employment flexibility and the gap between regular and non-regular employees is relatively large.24 In other words, being a non-regular employee may alleviate psychosocial and physical burdens temporarily; however, it may lead to long-term health deterioration through chronic psychosocial and socioeconomic impacts. In addition, we found that work-family conflict explained most of the association between employment status and the risk of having poor self-rated health in this study. Controlling for the influence of work-family conflict, we identified no significant differences in the probability of having poor self-rated health by employment status. In other words, most of the differences observed in self-rated health by employment status may be attributed to differences in psychological well-being. We do not deny the effect of psychological health on mortality; however, we speculate that the lower risk of poor self-rated health among non-regular employees, mostly explained by psychological health, may not directly translate into lower mortality.\nSelf-employed workers also showed better self-rated health compared with regular employees. Job autonomy, job control, and/or control of working hours are hypothesized as possible benefits of being self-employed.49 In contrast to non-regular employment, work-family conflict did not appear to explain the identified association between self-employment and poor self-rated health. Work-family conflict is constructed to capture the tension between two mutually incompatible domains, work and family.29 The work and family lives of self-employed workers may not present as much conflict as those of other types of employees, because the two domains are physically closer and there is greater job autonomy. Further research is needed to understand the mechanisms of the association between self-employment and self-rated health.\nAlthough we did not identify a statistically significant interaction of socioeconomic conditions and employment status on self-rated health, the benefits of being a non-regular employee or being self-employed were greater among women with higher socioeconomic status (ie, high household income and married). Married women and women with a higher socioeconomic background are, arguably, more likely to be working to add extra income to that of the other breadwinner in the household. Thus, they can enjoy the flexibility of non-regular work and, importantly, take advantage of it by reducing work-family conflict. By contrast, unmarried women and women in lower socioeconomic conditions are more likely to be making their own living, which could make them vulnerable to the disadvantages of non-regular employment, such as low pay and poor social security.24\nThis study is one of the few to observe the association between employment status and self-rated health in Japan and to explore how that association relates to work-family conflict. However, there are several limitations. First, given the cross-sectional nature of the study, we cannot claim causative links; in particular, we cannot exclude the possibility of reverse causation. To reduce this possibility, we excluded women with medical histories of major diseases and statistically controlled for medical history of diseases.
We also conducted a sensitivity analysis further excluding women with medical histories of diseases, which did not change our conclusions (OR 0.88; 95% CI, 0.80–0.97 for non-regular employees and OR 0.80; 95% CI, 0.70–0.92 for the self-employed). Further research is required to establish causative links. Second, our results may have been affected by selection bias caused by non-participation or by exclusion because of missing values in our main variables. For example, if non-participation occurred disproportionately across self-rated health conditions or employment statuses, our conclusions could have been distorted. We do not have specific information about the direction of this bias. However, the excluded women with missing values on self-rated health, employment status, and work-family conflict were likely to have lower socioeconomic status, which implies that our results may be underestimated. Third, our study population may not be nationally representative; in particular, we did not include metropolitan areas. In addition, our study population was limited to women in their 40s and 50s, who may differ widely from younger generations in terms of their life situations, lifestyles, domestic duties and responsibilities, and work-life balance. Thus, generalizing our results to other populations requires caution. Finally, measurement errors in our variables, including the outcome and predictor, and unmeasured confounding variables may have resulted in residual confounding.\n Conclusions Employment status is associated with the probability of poor self-rated health among middle-aged working Japanese women; non-regular employees and self-employed workers were less likely to report poor self-rated health compared with regular employees. Additionally, most of the association between non-regular employment and poor self-rated health could be explained by the level of work-family conflict. We did not identify significant influences of household income and marital status on the association between employment status and self-rated health." ]
[ "intro", "materials|methods", null, null, null, null, null, null, null, "results", "discussion", null ]
[ "employment status", "self-rated health", "work-family conflict", "Japan", "women" ]
INTRODUCTION: Employment status (ie, full-time, part-time, dispatch, contract, non-regular, or self-employed) may influence health.1–4 Precarious employment has been associated empirically with deteriorated health outcomes, using various measures of mortality,2,3,5,6 mental health,7–9 and health behaviors.10,11 However, inconsistent results have also been reported, particularly for self-rated health; non-regular workers have evidenced poorer self-rated health compared with regular workers,12,13 while some studies showed no significant differences14 or even a reverse association.15–17 One possible reason for the inconsistent results could be underlying differences in participants’ reasons for being temporary workers.6 For example, some voluntarily choose temporary employment for domestic reasons, such as maintaining a work-family balance, while others are unable to find regular employment and are forced to accept temporary work. Further, the voluntariness of a job choice for non-regular employees may differ based on household socioeconomic conditions and may be influenced by varying societal norms regarding gender roles.18 In Japan, the number and proportion of female non-regular workers (ie, temporary, contract, and part-time) are much larger than those of non-regular male workers. In 2018, 54% of female employees and 14% of male employees aged 24–65 years were non-regular workers.19 The gender role norms (ie, men work outside the home, women stay at home and take care of children and the elderly) that exist in Japanese society could be one reason for this significant gender difference. Women’s participation in Japan’s workforce by age group shows a bimodal pattern. By contrast, women’s workforce participation in many Western societies displays a convex shape, similar to that of Japan’s male workforce.20 This bimodal pattern suggests that women tend to take a career break during their 20s and 30s, for such reasons as child rearing, and return to the workforce in their 40s.21,22 Part-time work is often the only choice for women who reenter the workforce23; it is an important path for women reentering the labor market.24 Consequently, a much higher proportion of women than men in Japan are non-regularly employed (mainly part-time).19 The proportion of female non-regular workers has increased along with the promotion of women’s participation in the workforce in the last few decades in Japan.25 However, these promotional efforts have not improved the imbalanced participation in household duties.
That is, in most societies including Japan, women’s household responsibilities have persisted despite more women entering and playing a greater role in the workforce.26–28 Work-family conflict, generally defined as “a form of inter-role conflict in which the role pressures from the work and family domains are mutually incompatible in some respect,”29 has been associated with women’s self-rated health.30–33 An international comparison study showed that Japanese female workers had the highest work-family conflict and the poorest self-rated mental and physical health among Japanese, Finnish, and English government workers.34 Other research, moreover, has shown that the working arrangement is associated with the prevalence of work-family conflict, with higher work-family conflict existing among full-time versus part-time workers.33,35 Women in Japan are often obligated to work part-time to mitigate their burden of managing both work and household demands and responsibilities, because non-regular employment often offers more schedule flexibility compared with regular employment.36,37 Thus, we hypothesized that the association between employment status and self-rated health may be explained by the level of work-family conflict. Indeed, one of the main reasons middle-aged women in Japan choose non-regular employment and self-employment is time flexibility,19 which can reduce the work-family conflict arising from working women’s multiple social roles.38 To our knowledge, no studies have explored the effect of work-family conflict on the association between employment status and health. Available financial and material resources can affect the conditions relating to employment status. For example, some women work part-time to make a living, while others only need to boost the household budget; such differences produce heterogeneity amongst non-regular employees.18 Thus, household socioeconomic conditions may affect the association between employment status and health.26–28 However, few studies have included domestic duties and responsibilities and household socioeconomic conditions in examinations of the impact of employment status on health among women. The current study thus aimed to investigate the associations of employment status (ie, regular, non-regular, or self-employed) with self-rated health among Japanese middle-aged working women. Our specific research questions were: Is employment status associated with the probability of having poor self-rated health among Japanese working women? Is the association between employment status and self-rated health explained by the level of work-family conflict? Are the associations between employment status and self-rated health noted above modified by socioeconomic conditions? MATERIAL AND METHODS: Study cohort Data in this study were derived from the Japan Public Health Center-based Prospective Study for the Next Generation (JPHC-NEXT Study). The JPHC-NEXT Study was initiated in 2011, and the baseline survey was completed by December 31, 2016. In 2011–2016, we established a population-based cohort of 261,939 residents aged 40–74 years who registered an address in cities, towns, and villages (16 in total) in seven prefectures throughout Japan. A self-administered questionnaire was distributed mostly by hand and partly by post to all participants; questions were asked about their lifestyles, personal medical histories, and socio-demographics. Incomplete answers were followed up by telephone interview.
Among the 261,939 residents, 115,385 agreed to participate in the JPHC-NEXT Study (response rate, 44.1%). All participants provided written informed consent. We excluded those who did not respond to the questionnaire. Our final cohort population was 114,157 (52,572 men and 61,585 women).39 The JPHC-NEXT Study was approved by the institutional review boards of the National Cancer Center and other participating institutions.

Study population: We excluded women aged 60 years and older, women who were unemployed, and women for whom no information on occupation was available, to restrict our study population to working women 40–59 years old (n = 24,375). From these, we excluded 1,778 women with a history of cancer or cardiovascular disease and/or physical limitations, and a further 1,147 women with undisclosed employment status, self-rated health, and/or work-family conflict scores. The final study population comprised 21,450 women.

Measurements
Predictor variable: Our main predictor variable was employment status, identified by self-report and categorized into three groups: 1) regular employee, 2) non-regular employee (ie, part-time, temporary, or contract worker), and 3) self-employed.

Outcome variable: We measured self-rated health as our outcome variable, using the single questionnaire item: “How would you describe your overall state of health?”. Participants chose one of five responses: poor, not very good, good, very good, and excellent. Following previous studies,40,41 we categorized the responses into two groups: 1) poor self-rated health (‘poor’ or ‘not very good’), and 2) not poor self-rated health (‘good,’ ‘very good,’ or ‘excellent’).

Covariates: Age; highest educational attainment (junior high school, high school, junior college, college and higher, and other); occupation category (professional/managerial, clerical, or manual job); equivalent household income (quintile); marital status (married and non-married); history of hypertension, diabetes mellitus, hypercholesterolemia, and any other diseases (gout, asthma, chronic obstructive pulmonary disease, chronic bronchitis, chronic kidney failure, cataract, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatic cirrhosis, hepatitis, gall stone, sleep apnea, or depression); and residential area were hypothesized as confounding factors. With regard to income, we obtained data on annual household income in six categories. We calculated equivalent household income by dividing household income (using the inserted median value of each category) by the square root of the number of household members.42 Equivalent household income was classified into quintiles; we also dichotomized it at the median (High and Low) for sub-group analysis.
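To make the equivalization concrete, the sketch below (Python, not the authors' SAS code) computes equivalent household income from a six-category income report. The category midpoints are hypothetical placeholders, since the survey's actual category bounds are not reported in this section.

```python
import math

# Hypothetical midpoints (million JPY) for the six income categories;
# the survey's true category bounds are not given here.
INCOME_MIDPOINTS = {1: 1.5, 2: 3.5, 3: 5.5, 4: 7.5, 5: 9.5, 6: 12.0}

def equivalent_income(income_category: int, household_size: int) -> float:
    """Equivalent household income: the inserted category midpoint
    divided by the square root of the number of household members."""
    return INCOME_MIDPOINTS[income_category] / math.sqrt(household_size)

# Example: a category-3 household (midpoint 5.5) with four members
print(equivalent_income(3, 4))  # 5.5 / sqrt(4) = 2.75
```

Quintile and median splits would then be taken over the resulting per-woman values.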
The data for work-family conflict were obtained through a self-reported questionnaire. Previous studies suggest that work-family conflict consists of two factors (work-to-family and family-to-work conflict), which can be combined into a joint scale.43,44 The work-family conflict questionnaire consisted of eight items (four each for work-to-family and family-to-work conflict; eTable 1), adapted from the United States National Study of Midlife Development43 and others.33,34 Each question had three response categories (0 = never, 1 = to some extent, and 2 = often). Responses to the eight items were summed to yield a work-family conflict score ranging from 0 to 16. The internal consistency of the scale in this study population was acceptable (Cronbach’s alpha = 0.81), and further evidence on the reliability and validity of these scales has been provided by several other studies.33,34 The scores were grouped into tertiles for the analysis, in accordance with previous studies.34
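As an illustration of the scoring, the following Python sketch sums the eight 0–2 items into a 0–16 score and computes Cronbach's alpha from an item-response matrix; the responses below are invented, not the study's data.

```python
import numpy as np

def wfc_score(items):
    """Work-family conflict score: sum of eight items coded
    0 = never, 1 = to some extent, 2 = often (range 0-16)."""
    if len(items) != 8:
        raise ValueError("expected eight item responses")
    return int(sum(items))

def cronbach_alpha(X):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Invented responses for three respondents
X = np.array([[0, 1, 0, 1, 0, 0, 1, 0],
              [2, 2, 1, 2, 1, 2, 2, 1],
              [1, 0, 1, 1, 0, 1, 0, 1]])
print([wfc_score(row) for row in X])      # [3, 13, 5]
print(round(cronbach_alpha(X), 2))
```

Tertile cut points for the analysis would be taken over the distribution of these total scores.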
Statistical analysis: We calculated the number (%) or mean (standard deviation [SD]) of the study population’s characteristics. Using chi-square tests and analysis of variance, we compared employment statuses on the proportions and mean values of poor self-rated health, socioeconomic variables, marital status, medical history of hypertension, diabetes mellitus, or hypercholesterolemia, history of diseases, and work-family conflict scores. Odds ratios (ORs) with 95% confidence intervals (CIs) of employment status for poor self-rated health were calculated using logistic regression after adjusting for hypothesized confounding variables: age, education level, equivalent household income, occupation, marital status, medical history of hypertension, diabetes mellitus, hypercholesterolemia, or any other diseases, and residential area (model 1). To explore the mediating effect of work-family conflict on the association between employment status and self-rated health, we added work-family conflict scores to the model (model 2).
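The authors fit these models in SAS; as a rough Python analogue, the sketch below fits model 1 and model 2 with statsmodels and exponentiates coefficients into ORs with 95% CIs. The data file and all column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file; all column names are illustrative.
df = pd.read_csv("jphc_next_analytic.csv")

# Model 1: employment status plus the hypothesized confounders.
model1 = (
    "poor_srh ~ C(employment, Treatment('regular')) + C(age_group)"
    " + C(education) + C(income_quintile) + C(occupation) + married"
    " + hypertension + diabetes + hypercholesterolemia + other_disease"
    " + C(area)"
)
m1 = smf.logit(model1, data=df).fit()

# Model 2: additionally adjusts for work-family conflict tertile.
m2 = smf.logit(model1 + " + C(wfc_tertile)", data=df).fit()

def odds_ratios(res):
    """Exponentiate coefficients and CI bounds into ORs (95% CIs)."""
    out = pd.concat([res.params, res.conf_int()], axis=1)
    out.columns = ["OR", "2.5%", "97.5%"]
    return np.exp(out)

print(odds_ratios(m1).filter(like="employment", axis=0))
print(odds_ratios(m2).filter(like="employment", axis=0))
```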
Further, we conducted mediation analysis and estimated the direct and indirect effects of employment status on self-rated health through work-family conflict. The natural direct effect (NDE), natural indirect effect (NIE), and total effect (TE) were estimated as ORs for poor self-rated health with respect to the mediator, conditional on the measured covariates.45 We also estimated the percentage of the total association between employment status and self-rated health that was mediated through work-family conflict, using log ORs.46 Sub-group analyses by equivalent household income and marital status were also performed for self-rated health, and we tested statistical interactions using cross-product terms for employment status and equivalent household income or marital status. All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).
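One common convention for the proportion mediated on the log-OR scale, consistent with the description above, uses the fact that ORs multiply across the decomposition (OR_TE = OR_NDE × OR_NIE), so the proportion mediated is log(OR_NIE) / log(OR_TE). A small Python sketch with invented ORs:

```python
import math

def proportion_mediated(or_nde: float, or_nie: float) -> float:
    """Proportion mediated on the log-OR scale:
    OR_TE = OR_NDE * OR_NIE, so PM = log(OR_NIE) / log(OR_TE)."""
    or_te = or_nde * or_nie
    return math.log(or_nie) / math.log(or_te)

# Invented ORs (not the paper's Table 3 estimates):
print(round(proportion_mediated(or_nde=1.00, or_nie=0.90), 2))  # 1.0, fully mediated
print(round(proportion_mediated(or_nde=0.90, or_nie=0.95), 2))  # ~0.33, mostly direct
```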
RESULTS: Table 1 shows the numbers (%) and means (SD) of the study population’s characteristics. The average age was 50.3 (SD, 6.08) years, and 17.7% of participants had poor self-rated health. Regular employees, non-regular employees, and the self-employed represented 40.6%, 43.2%, and 16.2% of the study population, respectively. The proportion of participants who had attained a high school education or less was 55.2%, while 8.1% held undergraduate degrees or higher. Thirty percent of participants held professional/managerial jobs and 79.8% were married. The distribution of poor self-rated health differed by employment status: 18.9% for regular employees, 17.4% for non-regular employees, and 15.3% for the self-employed. Compared with the other employment types, regular employees tended to be younger, managers/professionals, non-married, and more educated, and to have higher household income, a medical history of diseases, and higher work-family conflict. SD, standard deviation. aDiseases: heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression. Table 2 presents crude and adjusted ORs (95% CIs) of employment status for poor self-rated health. The multivariate ORs for poor self-rated health of non-regular employees and the self-employed, compared with regular employees, were 0.90 (95% CI, 0.83–0.98) and 0.84 (95% CI, 0.75–0.94), respectively (model 1). The lower OR for non-regular employees was attenuated after adjusting for work-family conflict, and we observed no significant difference in odds between regular and non-regular employees (OR 1.00; 95% CI, 0.92–1.09) (model 2). Among the self-employed, by contrast, the lower OR was attenuated but remained statistically significant after adjusting for work-family conflict (OR 0.87; 95% CI, 0.78–0.98). CI, confidence interval; OR, odds ratio. aModel 1: adjusted for education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, and hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area.
bModel 2: model 1 + work-family conflict. The mediation analysis by work-family conflict scores showed that the association between non-regular employment and self-rated health was largely mediated through work-family conflict (proportion mediated, 100%), with no direct effect of non-regular employment on self-rated health (Table 3). Part (31%) of the total effect of self-employment on self-rated health was mediated by work-family conflict, but the effect was mainly direct. CI, confidence interval; OR, odds ratio. aAdjusted for education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, and hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area. Table 4 presents the results of sub-group analysis by household income and marital status. The adjusted ORs of non-regular employees and the self-employed, compared with regular employees, were 0.83 (95% CI, 0.74–0.94) and 0.78 (95% CI, 0.66–0.92), respectively, in the higher household income group, and 0.88 (95% CI, 0.80–0.96) and 0.81 (95% CI, 0.72–0.92), respectively, in the married group, while no statistically significant associations were identified in their counterpart groups. The association between employment status and self-rated health was thus more evident in the high household income and married groups, although the interactions were not statistically significant. CI, confidence interval; OR, odds ratio. aAdjusted for education level, equivalent household income, occupation category, age group, marital status, hypertension, diabetes, and hypercholesterolemia, history of diseases (heart disease, gout, asthma, COPD, chronic bronchitis, chronic kidney failure, cataracts, glaucoma, gastric polyp, colon polyp, gastric ulcer, duodenal ulcer, hepatitis/hepatic cirrhosis, gallstone, sleep apnea, or depression), and residential area. DISCUSSION: In this study of middle-aged Japanese working women, non-regular employees and the self-employed were less likely to report poor self-rated health compared with regular employees. Furthermore, the apparently lower probability of having poor self-rated health among non-regular employees was explained by their diminished level of work-family conflict. The association between employment status and self-rated health differed by household income level and marital status, although these interactions were not statistically significant. Our results are in line with other research findings15–17; for example, cross-sectional studies among public sector employees in Finland showed lower ORs for fixed-term female employees’ poor self-rated health, compared with female permanent employees (OR 0.70; 95% CI, 0.60–0.82).17 Although only a limited number of similar studies have been conducted in Japan, our results were consistent with those of the Comprehensive National Survey among 18–64-year-old Japanese female employees.
That survey reported that, compared with permanent employment, unstable employment with short working hours was associated with a smaller proportion of women who rated their health as poor.47 One reason for the poorer self-rated health of regular employees relative to non-regular employees could be the difficulties that women in Japan face in reconciling family life and career. Female workers in Japan, who make up a substantial share of the total workforce, are also more likely to be influenced by strong societal gender norms to fulfill household duties.24 Thus, working as a regular employee often entails severe physical and psychosocial hardship for women.38 Women in Japan who were regular employees were more likely to report job pressures and inflexible work schedules and to experience more work- and family-related strain than their non-regularly employed counterparts.48 Moreover, Japanese female workers were reported to have the highest work-life conflict and poorest self-rated mental health among Japanese, Finnish, and English government workers.34 Thus, female workers in Japan may voluntarily choose to work part-time to achieve a balance between their work and family lives. Our results (ie, significantly lower work-family conflict among non-regular than regular employees, and the finding that the effect of non-regular employment on poor self-rated health was mostly explained by work-family conflict) suggest that non-regular employees may have achieved better self-rated health than regular employees by buffering their work-family conflict. The possible benefits of non-regular employment, such as schedule flexibility and/or fewer duties and responsibilities at work, can reduce work-family conflict, which may contribute to a reduced probability of having poor self-rated health. Our findings also suggest that Japanese female workers may find relief from the difficulties of juggling work and family by taking non-regular jobs or becoming self-employed, which may also benefit their self-rated health. Our results were, however, inconsistent with previous work that showed a higher mortality risk for non-regular female employees than for regular employees in Japan.5 While we do not have a clear explanation for this inconsistency, one possible reason could be a difference between the two outcomes in the mechanisms through which employment status affects health. That is, self-rated health reflects current life conditions on a short-term basis, while mortality may reflect chronic, continued effects of employment such as job insecurity, financial insecurity, and social welfare. These chronic health effects may be particularly strong in Japan, where there is little employment flexibility and the gap between regular and non-regular employees is relatively large.24 In other words, being a non-regular employee may alleviate psychosocial and physical burdens temporarily; however, it may lead to long-term health deterioration through chronic psychosocial and socioeconomic impacts. In addition, we identified that work-family conflict explained most of the association between employment status and the risk of having poor self-rated health in this study. Controlling for the influence of work-family conflict, we identified no significant differences in the probability of having poor self-rated health by employment status.
In other words, most of the differences observed in self-rated health by employment status may be attributable to differences in psychological well-being. We do not deny the effect of psychological health on mortality; however, we speculate that the lower risk of poor self-rated health among non-regular employees, being mostly explained by psychological health, may not directly translate into mortality outcomes. Self-employed workers also showed better self-rated health compared with regular employees. Job autonomy, job control, and/or control over working hours are hypothesized benefits of being self-employed.49 In contrast to non-regular employment, work-family conflict did not appear to explain the association between self-employment and poor self-rated health. Work-family conflict is constructed to capture the tension between two mutually incompatible domains, work and family.29 The work and family lives of self-employed workers may present less of this conflict than those of other types of employees, because the two domains are physically closer and job autonomy is greater. Further research is needed to understand the mechanisms of the association between self-employment and self-rated health. Although we did not identify a statistically significant interaction of socioeconomic conditions and employment status on self-rated health, the benefits of non-regular employment or self-employment were greater among women with higher socioeconomic status (ie, high household income and married). Married women and women with a higher socioeconomic background are, arguably, more likely to be working to add extra income to that of another breadwinner in the household. Thus, they can enjoy the flexibility of non-regular work and, importantly, take advantage of it by reducing work-family conflict. By contrast, unmarried women and women in lower socioeconomic conditions are more likely to be making their own living, which could make them vulnerable to the disadvantages of non-regular employment, such as low pay and poor social security.24 This study is one of the few to examine the association between employment status and self-rated health in Japan and to explore how that association relates to work-family conflict. However, there are several limitations. First, given the cross-sectional nature of the study, we cannot claim causal links; in particular, we cannot exclude the possibility of reverse causation. To reduce this possibility, we excluded women with medical histories of major diseases and statistically controlled for medical history of diseases. We also conducted a sensitivity analysis that further excluded women with medical histories of diseases, which did not change our conclusions (OR 0.88; 95% CI, 0.80–0.97 for non-regular employees and OR 0.80; 95% CI, 0.70–0.92 for the self-employed). Further research is required to establish causal links. Second, our results may have been affected by selection bias caused by non-participation or by exclusion due to missing values in our main variables. For example, if non-participation occurred disproportionally with respect to self-rated health or employment status, our conclusions could have been distorted. We have no specific information about the direction of this bias; however, the excluded women with missing values on self-rated health, employment status, and work-family conflict were likely to have lower socioeconomic status, which implies that our results may be underestimates.
Third, our study population may not be nationally representative; in particular, it did not include metropolitan areas. In addition, the study population was limited to women in their 40s and 50s, who may differ widely from younger generations in terms of life situations, lifestyles, domestic duties and responsibilities, and work-life balance. Thus, generalizing our results to other populations requires caution. Finally, measurement error in our outcome and predictor variables, together with unmeasured confounding variables, may have resulted in residual confounding. Conclusions: Employment status is associated with the probability of poor self-rated health among middle-aged working Japanese women; non-regular employees and self-employed workers were less likely to report poor self-rated health compared with regular employees. Additionally, most of the association of non-regular employment with poor self-rated health could be explained by the level of work-family conflict. We did not identify significant influences of household income or marital status on the association between employment status and self-rated health.
Background: Few studies examining the impact of employment status on health for women have considered domestic duties and responsibilities as well as household socioeconomic conditions. Moreover, to our knowledge, no studies have explored the influence of work-family conflict on the association between employment status and health. This research aimed to investigate the cross-sectional associations of employment status (regular employee, non-regular employee, or self-employed) with self-rated health among Japanese middle-aged working women. Methods: Self-report data were obtained from 21,450 working women aged 40-59 years enrolled in the Japan Public Health Center-based Prospective Study for the Next Generation (JPHC-NEXT Study) in 2011-2016. Multivariate odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for poor self-rated health ('poor' or 'not very good') by employment status. Sub-group analyses by household income and marital status, as well as mediation analysis for work-family conflict, were also conducted. Results: Adjusted ORs for the poor self-rated health of non-regular employees and self-employed workers were 0.90 (95% CI, 0.83-0.98) and 0.84 (95% CI, 0.75-0.94), respectively, compared with regular employees. The identified association of non-regular employment was explained by work-family conflict. Subgroup analysis indicated no statistically significant modifying effects of household income or marital status. Conclusions: Among middle-aged working Japanese women, employment status was associated with self-rated health; non-regular employees and self-employed workers were less likely to report poor self-rated health compared with regular employees. The lowered OR of poor self-rated health among non-regular employees may be explained by their reduced work-family conflict.
9,704
357
[ 203, 92, 1152, 54, 107, 408, 349, 98 ]
12
[ "self", "work", "health", "family", "work family", "conflict", "rated", "self rated", "rated health", "family conflict" ]
[ "japan regular employees", "choose temporary employment", "health japanese working", "health differed employment", "health precarious employment" ]
[CONTENT] employment status | self-rated health | work-family conflict | Japan | women [SUMMARY]
[CONTENT] Adult | Cross-Sectional Studies | Employment | Family Characteristics | Female | Health Status | Humans | Japan | Middle Aged | Prospective Studies | Self Report | Socioeconomic Factors | Women, Working | Work-Life Balance [SUMMARY]
[CONTENT] japan regular employees | choose temporary employment | health japanese working | health differed employment | health precarious employment [SUMMARY]
null
[CONTENT] japan regular employees | choose temporary employment | health japanese working | health differed employment | health precarious employment [SUMMARY]
[CONTENT] japan regular employees | choose temporary employment | health japanese working | health differed employment | health precarious employment [SUMMARY]
[CONTENT] japan regular employees | choose temporary employment | health japanese working | health differed employment | health precarious employment [SUMMARY]
[CONTENT] japan regular employees | choose temporary employment | health japanese working | health differed employment | health precarious employment [SUMMARY]
[CONTENT] self | work | health | family | work family | conflict | rated | self rated | rated health | family conflict [SUMMARY]
null
[CONTENT] self | work | health | family | work family | conflict | rated | self rated | rated health | family conflict [SUMMARY]
[CONTENT] self | work | health | family | work family | conflict | rated | self rated | rated health | family conflict [SUMMARY]
[CONTENT] self | work | health | family | work family | conflict | rated | self rated | rated health | family conflict [SUMMARY]
[CONTENT] self | work | health | family | work family | conflict | rated | self rated | rated health | family conflict [SUMMARY]
[CONTENT] women | regular | workers | employment | work | health | time | non | non regular | japan [SUMMARY]
null
[CONTENT] ci | 95 ci | employees | regular | 95 | regular employees | self | polyp | gastric | ulcer [SUMMARY]
[CONTENT] self rated | self rated health | rated health | rated | self | health | regular | poor self rated health | poor self | poor [SUMMARY]
[CONTENT] health | self | work | family | rated | self rated | rated health | self rated health | employment | work family [SUMMARY]
[CONTENT] health | self | work | family | rated | self rated | rated health | self rated health | employment | work family [SUMMARY]
[CONTENT] ||| ||| Japanese [SUMMARY]
null
[CONTENT] 0.90 | 95% | CI | 0.83 | 0.84 | 95% | CI | 0.75 ||| ||| [SUMMARY]
[CONTENT] Japanese ||| [SUMMARY]
[CONTENT] ||| ||| Japanese ||| 21,450 | 40-59 years | the Japan Public Health Center | Prospective Study for the Next Generation | 2011-2016 ||| 95% ||| ||| 0.90 | 95% | CI | 0.83 | 0.84 | 95% | CI | 0.75 ||| ||| ||| Japanese ||| [SUMMARY]
[CONTENT] ||| ||| Japanese ||| 21,450 | 40-59 years | the Japan Public Health Center | Prospective Study for the Next Generation | 2011-2016 ||| 95% ||| ||| 0.90 | 95% | CI | 0.83 | 0.84 | 95% | CI | 0.75 ||| ||| ||| Japanese ||| [SUMMARY]
GPR39 (zinc receptor) knockout mice exhibit depression-like behavior and CREB/BDNF down-regulation in the hippocampus.
25609596
Zinc may act as a neurotransmitter in the central nervous system by activation of the GPR39 metabotropic receptors.
BACKGROUND
In the present study, we investigated whether GPR39 knockout would cause depressive-like and/or anxiety-like behavior, as measured by the forced swim test, tail suspension test, and light/dark test. We also investigated whether lack of GPR39 would change levels of cAMP response element-binding protein (CREB), brain-derived neurotrophic factor (BDNF) and tropomyosin related kinase B (TrkB) protein in the hippocampus and frontal cortex of GPR39 knockout mice subjected to the forced swim test, as measured by Western-blot analysis.
METHODS
In this study, GPR39 knockout mice showed an increased immobility time in both the forced swim test and tail suspension test, indicating depressive-like behavior, and displayed an anxiety-like phenotype. GPR39 knockout mice had lower CREB and BDNF levels in the hippocampus, but not in the frontal cortex, which indicates region specificity for the impaired CREB/BDNF pathway (which is important in the antidepressant response) in the absence of GPR39. There were no changes in TrkB protein in either structure. In the present study, we also investigated activity of the hypothalamus-pituitary-adrenal axis under both zinc- and GPR39-deficient conditions. Zinc-deficient mice had higher serum corticosterone levels and lower glucocorticoid receptor levels in the hippocampus and frontal cortex.
RESULTS
There were no such changes in the GPR39 knockout mice in comparison with the wild-type control mice, which does not support a role for GPR39 in hypothalamus-pituitary-adrenal axis regulation. The results of this study indicate the involvement of the GPR39 Zn(2+)-sensing receptor in the pathophysiology of depression with a component of anxiety.
CONCLUSIONS
[ "Animals", "Brain-Derived Neurotrophic Factor", "CREB-Binding Protein", "Corticosterone", "Dark Adaptation", "Depression", "Disease Models, Animal", "Down-Regulation", "Hindlimb Suspension", "Hippocampus", "Immobility Response, Tonic", "Male", "Mice", "Mice, Inbred C57BL", "Mice, Knockout", "Motor Activity", "Receptor, trkB", "Receptors, G-Protein-Coupled", "Swimming", "Time Factors", "Zinc" ]
4360246
Introduction
Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the direct pathomechanism of depression being unknown, and this leads to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b), and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013; Swardfager et al., 2013a). Zinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in the brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cyclic adenosine monophosphate (cAMP) response element following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of the GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of the GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013).
In the present study, we investigated behavior in mice lacking GPR39, as well as hypothalamus-pituitary-adrenal (HPA) axis activity and levels of proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.
Methods
Animals All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków. CD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum. GPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used. As with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts. Forced Swim Test The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test. The immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described FST as being a means of evaluating potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured. Tail Suspension Test WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Mlyniec and Nowak (2012). Animals were fastened with medical adhesive tape by the tail 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually. Locomotor Activity Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes. Light/Dark Test WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to freely explore. The following parameters were quantified in the test: 1) time spent in lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossing (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters). Zinc Concentration The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve the final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used as well as a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8 mA). The detection limits for Zn were about 0.4mg/L.
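For illustration, a minimal sketch of internal-standard quantification as commonly used in TXRF: the analyte concentration is estimated from its fluorescence intensity relative to the gallium internal standard of known concentration. The peak intensities and relative sensitivity factor below are invented; only the 5 mg/L gallium concentration comes from the text.

```python
def txrf_concentration(i_analyte, i_standard, c_standard, s_rel=1.0):
    """Internal-standard quantification: analyte concentration from the
    intensity ratio to an internal standard of known concentration,
    scaled by an instrument-specific relative sensitivity factor."""
    return (i_analyte / i_standard) * c_standard / s_rel

# Hypothetical Zn and Ga peak intensities; Ga was spiked to 5 mg/L
zn_mg_per_l = txrf_concentration(i_analyte=3400, i_standard=9500,
                                 c_standard=5.0, s_rel=0.95)
print(round(zn_mg_per_l, 2))  # ~1.88 mg/L, the same order as the sera measured here
```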
Corticosterone Assay The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum by ethanol. This extract (ethanol-serum) was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and with a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal. Incubation time for the samples was established for 10 minutes at 4°C with 0.2mL of 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.
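A minimal sketch of the log-logit standard-curve calculation mentioned above, assuming a typical radioimmunoassay workflow: bound fractions (B/B0) for the calibrators are logit-transformed, fitted linearly against log dose, and the fit is inverted for unknowns. All calibrator and sample values below are hypothetical.

```python
import numpy as np

# Hypothetical calibrators: dose (ng/mL) and bound fraction B/B0
doses = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
b_over_b0 = np.array([0.78, 0.62, 0.45, 0.30, 0.18])

# Log-logit linearisation: logit(B/B0) is approximately linear in log(dose)
slope, intercept = np.polyfit(np.log(doses),
                              np.log(b_over_b0 / (1.0 - b_over_b0)), 1)

def dose_from_bound(b):
    """Invert the fitted standard curve for an unknown sample."""
    return float(np.exp((np.log(b / (1.0 - b)) - intercept) / slope))

print(dose_from_bound(0.40))  # estimated dose for an unknown with B/B0 = 0.40
```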
Western-Blot Analysis Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of such proteins as CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice were previously subjected to the FST. After rapid decapitation of the mice (24 hours after FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis took place. The frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated in a secondary antibody with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.
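The normalization step described above amounts to a simple ratio of band density to loading-control density; a sketch with invented densitometry readings:

```python
import numpy as np

def normalize_bands(target, loading):
    """Divide each protein band density by its loading-control density."""
    return np.asarray(target, dtype=float) / np.asarray(loading, dtype=float)

# Hypothetical densitometry readings for one protein (e.g., BDNF)
wt = normalize_bands([1820, 1760, 1900], [2050, 1980, 2110])
ko = normalize_bands([1240, 1310, 1180], [2010, 2070, 1950])

# Express the knockout group relative to the wild-type mean
print(ko.mean() / wt.mean())  # < 1 would indicate reduced protein in KO
```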
Statistical Analysis The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant.
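The analysis described above is simple enough to reproduce outside GraphPad; a minimal sketch in Python (hypothetical immobility times, using SciPy's classical equal-variance Student t test):

```python
import numpy as np
from scipy import stats

# Hypothetical immobility times (seconds) in the forced swim test
wt = np.array([112.0, 98.0, 105.0, 121.0, 94.0, 109.0, 103.0])
ko = np.array([128.0, 141.0, 119.0, 150.0, 133.0, 138.0])

for name, grp in (("WT", wt), ("KO", ko)):
    print(name, grp.mean(), stats.sem(grp))  # mean and SEM per group

t, p = stats.ttest_ind(wt, ko)  # two-tailed Student t test
print(t, p)  # P < .05 is the significance threshold used in the paper
```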
null
null
Discussion
None.
[ "Introduction", "Animals", "Forced Swim Test", "Tail Suspension Test", "Locomotor Activity", "Light/Dark Test", "Zinc Concentration", "Corticosterone Assay", "Western-Blot Analysis", "Statistical Analysis", "Results", "Behavioral Studies of Gpr39 KO Mice", "Serum Zinc Concentration in GPR39 KO Mice", "CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice", "Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice", "GR Protein Levels in Zinc-Deficient and GPR39 KO Mice", "Discussion" ]
[ "Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the direct pathomechanism of depression being unknown, and this leads to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). One drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency has been shown to produce depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b), and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013;\nSwardfager et al., 2013a)..\nZinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). Highest levels of GPR39 are found in the brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). The GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cyclic adenosine monophosphate (cAMP) following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of the GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of the GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013). 
In the present study, we investigated behavior in mice lacking GPR39, as well as hypothalamus-pituitary-adrenal (HPA) axis activity and levels of proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.", "All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków.\nCD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. The access to food as well as water was ad libitum.\nGPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction. Specific primers for the WT and one specific primer for the insert sequences were used.\nAs with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts.", "The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test.\nThe immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described FST as being a means of evaluating potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.", "WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Mlyniec and Nowak (2012). Animals were fastened with medical adhesive tape by the tail 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually.", "Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.", "WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to freely explore. 
The following parameters were quantified in the test: 1) time spent in lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossing (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).", "The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve the final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used as well as a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8 mA). The detection limits for Zn were about 0.4mg/L.", "The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum by ethanol. This extract (ethanol-serum) was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and with a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal. Incubation time for the samples was established for 10 minutes at 4°C with 0.2mL of 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.", "Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of such proteins as CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice were previously subjected to the FST. After rapid decapitation of the mice (24 hours after FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis took place.\nThe frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated in a secondary antibody with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). 
The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.", "The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant.", " Behavioral Studies of Gpr39 KO Mice Before experiments, mice were weighed. There were no differences between WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of the GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found a more significant increase in immobility time in GPR39 KO vs WT using a modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. * p < 0.05, ** p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.\n Serum Zinc Concentration in GPR39 KO Mice There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790].\n CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice The effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nIn a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).\n Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.\n GR Protein Levels in Zinc-Deficient and GPR39 KO Mice Administration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. * p < 0.05 vs. proper control.", "Before experiments, mice were weighed. There were no differences between WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of the GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found a more significant increase in immobility time in GPR39 KO vs WT using a modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. * p < 0.05, ** p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. 
p* < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR 39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.", "There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790].", "The effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nIn a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).", "The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet causes a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.", "Administration of a zinc-deficient diet for 6 weeks causes a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. * p < 0.05 vs. proper control.", "In the present study, we found that elimination of GPR39 leads to the development of depressive-like behavior, as measured by the FST and TST. Additionally, we found decreased entries into the lit compartment and line crossing and increased immobility time in the light/dark test, indicating anxiety-like phenotype in GPR 39 KO mice. 
Although not statistically significant, we also observed some tendencies towards less time spent in the lit compartment (decreased by 17%) and increased freezing behavior (increased by 28%) in GPR39 KO compared to WT mice.\nThe GPR39 was found to be activated by zinc ions (Holst et al., 2007; Yasuda et al., 2007), and the link between zinc and depression is well known (Maes et al., 1997, 1999; Szewczyk et al., 2011; Swardfager et al., 2013a, 2013b). Ishitobi et al. (2012) did not find behavioral abnormalities in the FST after administering antisense DNA for GPR39-1b. In our present study, mice with general GPR39 KO were used. GPR39-1a full-length isoform was found to be a receptor for zinc ions, whereas GPR39-1b, corresponding to the 5-transmembrane truncated form (Egerod et al., 2007), did not respond to zinc stimulation, which means that the GPR39-1b splice variant is not a receptor of Zn2+ (Yasuda and Ishida, 2014).\nActivation of the GPR39 triggers diverse neuronal pathways (Holst et al., 2004, 2007; Popovics and Stewart, 2011) that may be involved in neuroprotection (Depoortere, 2012). Zinc stimulates GPR39 activity, which activates the Gαs, Gαq, and Gα12/13 pathways (Holst et al., 2007). The Gαq pathway triggers diverse downstream kinases and mediates CREB activation and cyclic adenosine monophosphate response element-dependent transcription. Our previous study showed decreased CREB, BDNF, and TrkB proteins in the hippocampus of mice under zinc-deficient conditions (Młyniec et al., 2014b). Moreover, disruption of the CaM/CaMKII/CREB signaling pathway was found after administration of a zinc-deficient diet for 5 weeks (Gao et al., 2011). Since GPR39 was discovered to be a receptor for zinc, we previously investigated whether GPR39 may be involved in the pathophysiology of depression and suicide behavior. In our postmortem study, we found GPR39 down-regulation in the hippocampus and the frontal cortex of suicide victims (Młyniec et al., 2014b). In the present study, we investigated whether GPR39 KO would decrease levels of such proteins as CREB, BDNF, and TrkB, which were also found to be impaired in depression in suicide victims (Dwivedi et al., 2003; Pandey et al., 2007). Indeed, we found that lack of the GPR39 gene causes CREB and BDNF reduction in the hippocampus, but not in the frontal cortex, suggesting that the hippocampus might be a specific region for CREB and BDNF down-regulation in the absence of a zinc receptor. The CA3 region of the hippocampus seems to be strongly involved in zinc neurotransmission. Besser et al. (2009) found that the GPR39 is activated by synaptically released zinc ions in the CA3 area of the hippocampus. This activation triggers Ca2+ and Ca2+/calmodulin kinase II, suggesting that it has a role in neuron survival/neuroplasticity in this brain area (Besser et al., 2009), which is of importance in antidepressant action. In this study, we did not find any changes in TrkB levels in either the hippocampus or frontal cortex; in the case of the hippocampus, this may be a compensatory mechanism, and it needs further investigation.\nThere is strong evidence that zinc deficiency leads to hyperactivation of the HPA axis (Watanabe et al., 2010; Takeda and Tamano, 2012; Młyniec et al., 2012, 2013a), which is activated as a reaction to stress. The activity of the HPA axis is regulated by negative feedback through GR receptors that are present in the brain, mainly in the hippocampus (Herman et al., 2005). This mechanism was shown to be impaired in mood disorders. 
In the present study, we compared corticosterone and GR receptor levels in zinc-deficient and GPR39 KO mice. We found an elevated corticosterone concentration in the serum and decreased GR levels in the hippocampus and frontal cortex of mice receiving a zinc-deficient diet. However, there were no changes in corticosterone or GR levels in GPR39 KO mice in comparison with WT mice. This suggests that the depressive-like behavior observed in mice lacking the GPR39 gene is not due to higher corticosterone concentrations and that there is no link between GPR39 and the HPA axis. In the present study, we did not find any changes in the serum zinc level in GPR39 KO mice in comparison with WT mice, which indicates a possible correlation between serum zinc and serum corticosterone.\nDepressive-like changes with a component of anxiety observed in GPR39 KO mice may result from glutamatergic abnormalities such as those found in cases of zinc deficiency, but this requires further investigation. Zinc as an NMDA antagonist modulates the glutamatergic system, which is overexcited during depression. Zinc co-released with glutamate from “gluzinergic” neurons modulates excitability of the brain by attenuating glutamate release (Frederickson et al., 2005). The GPR39 zinc receptor seems to be involved in the mechanism regulating the amount of glutamate in the brain (Besser et al., 2009). Activation of the GPR39 up-regulates KCC2 and thereby enhances Cl− efflux in the postsynaptic neurons, which may potentiate γ-aminobutyric acidA-mediated inhibition (Chorin et al., 2011).\nOur present study shows that deletion of GPR39 leads to depressive-like behaviors in animals, which may be relevant to depressive disorders in humans. Decreased levels of CREB and BDNF proteins in the hippocampus of GPR39 KO mice support the involvement of GPR39 in the synthesis of CREB and BDNF, proteins that are important in neuronal plasticity and the antidepressant response." ]
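As a quick consistency check on results like those quoted above, the two-tailed p-value can be recomputed from any reported t statistic and its degrees of freedom; a small sketch using SciPy:

```python
from scipy import stats

def two_tailed_p(t, df):
    """Two-tailed p-value for a reported t statistic with df degrees of freedom."""
    return 2 * stats.t.sf(abs(t), df)

# Reported for the standard forced swim test comparison: t(15) = 2.563
print(two_tailed_p(2.563, 15))  # ~0.0217, matching the reported P=.0217
```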
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Animals", "Forced Swim Test", "Tail Suspension Test", "Locomotor Activity", "Light/Dark Test", "Zinc Concentration", "Corticosterone Assay", "Western-Blot Analysis", "Statistical Analysis", "Results", "Behavioral Studies of Gpr39 KO Mice", "Serum Zinc Concentration in GPR39 KO Mice", "CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice", "Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice", "GR Protein Levels in Zinc-Deficient and GPR39 KO Mice", "Discussion" ]
[ "Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the direct pathomechanism of depression being unknown, and this leads to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). One drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency has been shown to produce depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b), and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013;\nSwardfager et al., 2013a)..\nZinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). Highest levels of GPR39 are found in the brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). The GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by the cyclic adenosine monophosphate (cAMP) following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of the GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of the GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013). 
In the present study, we investigated behavior in mice lacking the GPR39 receptor, as well as their hypothalamic-pituitary-adrenal (HPA) axis activity and the levels of proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.", " Animals All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków.\nCD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C, and humidity of 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. Access to food and water was ad libitum.\nGPR39 (−/−) male mice, as described by Holst et al. (2009), were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction, using specific primers for the WT sequence and one specific primer for the insert sequence.\nAs with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts.\n Forced Swim Test The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after the adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test.\nThe immobility time in the FST reflects the level of despair of the mice, with prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described the FST as a means of evaluating potential antidepressant properties of drugs, we also prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.
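In both the FST and the TST, the dependent measure is a summed immobility time within a fixed scoring window. Purely as an illustration (this helper is not part of the original protocol; the bout times and window are invented), the following Python sketch shows that reduction from scored immobility bouts to a single total:

```python
# Hypothetical helper, not from the paper: reduce manually scored
# immobility bouts to the total immobility time reported for the FST/TST.
# Each bout is (start_s, end_s) during which the mouse was immobile.

def total_immobility(bouts, window_start_s, window_end_s):
    """Sum the immobile seconds that fall inside the scoring window."""
    total = 0.0
    for start, end in bouts:
        # Clip each bout to the scoring window (e.g., minutes 2-6 of the FST).
        clipped = min(end, window_end_s) - max(start, window_start_s)
        if clipped > 0:
            total += clipped
    return total

# Example: score only minutes 2-6 (the prolonged FST window).
bouts = [(95.0, 140.0), (180.5, 260.0), (300.0, 355.5)]
print(total_immobility(bouts, window_start_s=120.0, window_end_s=360.0))  # 155.0 s
```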
 Tail Suspension Test WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Mlyniec and Nowak (2012). Animals were fastened by the tail with medical adhesive tape 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually.\n Locomotor Activity Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.\n Light/Dark Test WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to explore freely. The following parameters were quantified: 1) time spent in the lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossings (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).\n Zinc Concentration The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve a final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used, with a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8mA). The detection limit for Zn was about 0.4mg/L.
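The paper does not give the quantification formula, but TXRF with an internal standard is conventionally evaluated with the relation C_x = (N_x / N_IS) · C_IS / S_rel, where N are net line intensities and S_rel is the sensitivity of the analyte line relative to the internal-standard line. A minimal sketch under that assumption (counts and the sensitivity factor below are invented; the paper only states that Ga was added at 5mg/L):

```python
# Illustrative only: the standard internal-standard relation used in TXRF
# quantification. The counts and relative sensitivity factor are invented.

def txrf_concentration(n_analyte, n_standard, c_standard, s_rel):
    """C_x = (N_x / N_IS) * C_IS / S_rel."""
    return (n_analyte / n_standard) * c_standard / s_rel

c_ga = 5.0            # mg/L, gallium internal standard (as in the methods)
zn_counts = 18500.0   # hypothetical net Zn K-alpha counts
ga_counts = 52000.0   # hypothetical net Ga K-alpha counts
s_zn_vs_ga = 0.95     # hypothetical relative sensitivity factor

print(f"Zn ≈ {txrf_concentration(zn_counts, ga_counts, c_ga, s_zn_vs_ga):.2f} mg/L")
```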
 Corticosterone Assay The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum with ethanol. The extract (ethanol-serum) was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal; the samples were incubated for 10 minutes at 4°C with 0.2mL of a 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.
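The log-logit transformation mentioned above is the usual data-reduction step for competitive RIA standard curves: logit(B/B0) is fitted as a linear function of log dose and then inverted for the unknowns. A sketch of that calculation (the standards and binding values below are hypothetical, not taken from the assay):

```python
# A minimal sketch of log-logit data reduction for an RIA standard curve.
# The paper states only that a log-logit transformation was used; the
# standard concentrations and B/B0 values here are hypothetical.
import numpy as np

std_conc = np.array([25., 50., 100., 250., 500., 1000.])    # ng/mL standards
b_over_b0 = np.array([0.85, 0.74, 0.60, 0.40, 0.27, 0.15])  # bound / max bound

logit = np.log(b_over_b0 / (1.0 - b_over_b0))
slope, intercept = np.polyfit(np.log10(std_conc), logit, 1)

def conc_from_binding(b):
    """Invert the fitted line: logit(B/B0) = slope*log10(C) + intercept."""
    y = np.log(b / (1.0 - b))
    return 10 ** ((y - intercept) / slope)

print(f"sample at B/B0=0.5 -> {conc_from_binding(0.5):.0f} ng/mL")
```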
 Western-Blot Analysis Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and zinc-deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of the CREB, BDNF, and TrkB proteins were also determined, as described by Młyniec et al. (2014b). All mice had previously been subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis.\nThe frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G secondary antibody (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.
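The normalization step just described (each target band divided by its lane's loading-control band) can be written out explicitly. The sketch below uses invented densities and an assumed lane ordering (first three lanes WT) purely for illustration:

```python
# Hypothetical densitometry step mirroring the description above: each
# target band is divided by its lane's loading-control band, and group
# values are then expressed relative to the WT mean. All numbers invented.
import numpy as np

bdnf_density    = np.array([1.10, 0.95, 1.20, 0.80, 0.90, 1.05])  # target bands
loading_density = np.array([1.00, 0.98, 1.10, 0.95, 1.02, 1.00])  # control bands

normalized = bdnf_density / loading_density      # per-lane normalization
relative = normalized / normalized[:3].mean()    # assume first 3 lanes = WT

print(np.round(relative, 2))
```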
 Statistical Analysis The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered statistically significant.",
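For reference, the reported analysis is a two-group Student t test on mean±SEM data. GraphPad Prism was used in the paper, so the SciPy call below is only an equivalent stand-in, with invented immobility times:

```python
# A sketch of the reported analysis: classical two-sample Student t test
# (equal variances) with mean±SEM reporting. Data values are invented.
import numpy as np
from scipy import stats

wt = np.array([112., 98., 121., 105., 118., 100., 109.])  # immobility (s)
ko = np.array([141., 129., 155., 133., 148., 137., 144.])

t, p = stats.ttest_ind(wt, ko)  # default equal_var=True is Student's t
print(f"WT {wt.mean():.1f}±{stats.sem(wt):.1f}, KO {ko.mean():.1f}±{stats.sem(ko):.1f}")
print(f"t({len(wt) + len(ko) - 2})={abs(t):.3f}, P={p:.4f}")
```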
 "All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków.\nCD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C, and humidity of 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. Access to food and water was ad libitum.\nGPR39 (−/−) male mice, as described by Holst et al. (2009), were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction, using specific primers for the WT sequence and one specific primer for the insert sequence.\nAs with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts.", "The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after the adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test.\nThe immobility time in the FST reflects the level of despair of the mice, with prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described the FST as a means of evaluating potential antidepressant properties of drugs, we also prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.", "WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Mlyniec and Nowak (2012). Animals were fastened by the tail with medical adhesive tape 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually.", "Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes.", "WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to explore freely. The following parameters were quantified: 1) time spent in the lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossings (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).", "The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve a final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used, with a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8mA). The detection limit for Zn was about 0.4mg/L.", "The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum with ethanol. The extract (ethanol-serum) was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal; the samples were incubated for 10 minutes at 4°C with 0.2mL of a 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation.", "Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and zinc-deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of the CREB, BDNF, and TrkB proteins were also determined, as described by Młyniec et al. (2014b). All mice had previously been subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis.\nThe frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche).
Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G secondary antibody (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.", "The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered statistically significant.", " Behavioral Studies of GPR39 KO Mice Before the experiments, mice were weighed. There were no differences between the WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found an even more pronounced increase in immobility time in GPR39 KO vs WT mice using the modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05, **p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=0.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, decreased line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO Mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.
 Serum Zinc Concentration in GPR39 KO Mice There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT (1.848±0.1130mg/L) mice in terms of serum zinc concentration [t(11)=0.7328, P=.4790].\n CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice The effect of deletion of GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nAs with the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).
 Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet caused a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT mice [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.\n GR Protein Levels in Zinc-Deficient and GPR39 KO Mice Administration of a zinc-deficient diet for 6 weeks caused a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.",
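The results above are reported as [t(df)=..., P=...] statistics. As an orientation aid only (the means, SEMs, and group sizes below are invented, not recovered from the figures), this sketch shows how such a t statistic is obtained from pooled-variance summary statistics, with df = n1 + n2 − 2:

```python
# Not from the paper: recompute a two-sample Student t statistic from
# group summaries (mean, SEM, n) using the pooled-variance formula.
import math

def t_from_summary(m1, sem1, n1, m2, sem2, n2):
    s1sq = (sem1 * math.sqrt(n1)) ** 2      # SD^2 recovered from SEM
    s2sq = (sem2 * math.sqrt(n2)) ** 2
    pooled = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

# Hypothetical means±SEM (seconds of immobility); df = 8 + 9 - 2 = 15.
print(round(t_from_summary(110.0, 6.0, 8, 132.0, 6.2, 9), 3))
```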
 "Before the experiments, mice were weighed. There were no differences between the WT and GPR39 KO groups [t(10)=0.2715, P=.7916].\nThe effect of deletion of GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found an even more pronounced increase in immobility time in GPR39 KO vs WT mice using the modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2).\nThe effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05, **p < 0.001 vs wild-type control.\nThe effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05 vs wild-type control.\nThere were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=0.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1).\nThe Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group.\nIn the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, decreased line crossing, and enhanced immobility time compared with WT control mice (Table 2).\nBehavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO Mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control.", "There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT (1.848±0.1130mg/L) mice in terms of serum zinc concentration [t(11)=0.7328, P=.4790].", "The effect of deletion of GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figures 3A and D, respectively).\nThe effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control.\nAs with the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively).", "The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet caused a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT mice [t(9)=1.298, P=.2266] (Figure 4B).\nThe effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.", "Administration of a zinc-deficient diet for 6 weeks caused a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively).\nThe effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.", "In the present study, we found that elimination of GPR39 leads to the development of depressive-like behavior, as measured by the FST and TST.
Additionally, we found decreased entries into the lit compartment, decreased line crossing, and increased immobility time in the light/dark test, indicating an anxiety-like phenotype in GPR39 KO mice. Although not statistically significant, we also observed some tendencies towards less time spent in the lit compartment (decreased by 17%) and increased freezing behavior (by 28%) in GPR39 KO compared with WT mice.\nGPR39 was found to be activated by zinc ions (Holst et al., 2007; Yasuda et al., 2007), and the link between zinc and depression is well known (Maes et al., 1997, 1999; Szewczyk et al., 2011; Swardfager et al., 2013a, 2013b). Ishitobi et al. (2012) did not find behavioral abnormalities in the FST after administering antisense DNA for GPR39-1b. In our present study, mice with a general GPR39 KO were used. The full-length GPR39-1a isoform was found to be a receptor for zinc ions, whereas GPR39-1b, corresponding to the 5-transmembrane truncated form (Egerod et al., 2007), did not respond to zinc stimulation, which means that the GPR39-1b splice variant is not a receptor for Zn2+ (Yasuda and Ishida, 2014).\nActivation of GPR39 triggers diverse neuronal pathways (Holst et al., 2004, 2007; Popovics and Stewart, 2011) that may be involved in neuroprotection (Depoortere, 2012). Zinc stimulates GPR39 activity, which activates the Gαs, Gαq, and Gα12/13 pathways (Holst et al., 2007). The Gαq pathway triggers diverse downstream kinases and mediates CREB activation and cyclic adenosine monophosphate response element-dependent transcription. Our previous study showed decreased CREB, BDNF, and TrkB proteins in the hippocampus of mice under zinc-deficient conditions (Młyniec et al., 2014b). Moreover, disruption of the CaM/CaMKII/CREB signaling pathway was found after administration of a zinc-deficient diet for 5 weeks (Gao et al., 2011). Since GPR39 was discovered to be a receptor for zinc, we previously investigated whether GPR39 may be involved in the pathophysiology of depression and suicidal behavior. In our postmortem study, we found GPR39 down-regulation in the hippocampus and the frontal cortex of suicide victims (Młyniec et al., 2014b). In the present study, we investigated whether GPR39 KO would decrease the levels of proteins such as CREB, BDNF, and TrkB, which have also been found to be impaired in depression and in suicide victims (Dwivedi et al., 2003; Pandey et al., 2007). Indeed, we found that lack of the GPR39 gene causes CREB and BDNF reduction in the hippocampus, but not in the frontal cortex, suggesting that the hippocampus might be a specific region for CREB and BDNF down-regulation in the absence of the zinc receptor. The CA3 region of the hippocampus seems to be strongly involved in zinc neurotransmission. Besser et al. (2009) found that GPR39 is activated by synaptically released zinc ions in the CA3 area of the hippocampus. This activation triggers Ca2+ and Ca2+/calmodulin kinase II signaling, suggesting a role in neuron survival and neuroplasticity in this brain area (Besser et al., 2009), which is of importance in antidepressant action. In this study, we did not find any changes in TrkB levels in either the hippocampus or the frontal cortex; in the case of the hippocampus, this may reflect a compensatory mechanism, and it needs further investigation.\nThere is strong evidence that zinc deficiency leads to hyperactivation of the HPA axis (Watanabe et al., 2010; Takeda and Tamano, 2012; Młyniec et al., 2012, 2013a), which is activated as a reaction to stress.
The activity of the HPA axis is regulated by negative feedback through GR receptors, which are present in the brain, mainly in the hippocampus (Herman et al., 2005). This mechanism has been shown to be impaired in mood disorders. In the present study, we compared corticosterone and GR levels in zinc-deficient and GPR39 KO mice. We found an elevated corticosterone concentration in the serum and decreased GR levels in the hippocampus and frontal cortex of mice receiving a zinc-deficient diet. However, there were no changes in corticosterone or GR levels in GPR39 KO mice in comparison with WT mice. This suggests that the depressive-like behavior observed in mice lacking the GPR39 gene is not due to higher corticosterone concentrations and that there is no direct link between GPR39 and the HPA axis. We also did not find any changes in the serum zinc level in GPR39 KO mice in comparison with WT mice, which, together with the unchanged corticosterone, points to a possible correlation between serum zinc and serum corticosterone.\nDepressive-like changes with a component of anxiety observed in GPR39 KO mice may result from glutamatergic abnormalities such as those found under zinc deficiency, but this requires further investigation. Zinc, as an NMDA receptor antagonist, modulates the glutamatergic system, which is overexcited during depression. Zinc co-released with glutamate from “gluzinergic” neurons modulates the excitability of the brain by attenuating glutamate release (Frederickson et al., 2005). The GPR39 zinc receptor seems to be involved in the mechanism regulating the amount of glutamate in the brain (Besser et al., 2009). Activation of GPR39 up-regulates KCC2 and thereby enhances Cl− efflux in postsynaptic neurons, which may potentiate γ-aminobutyric acidA-mediated inhibition (Chorin et al., 2011).\nOur present study shows that deletion of GPR39 leads to depressive-like behaviors in animals, which may be relevant to depressive disorders in humans. Decreased levels of the CREB and BDNF proteins in the hippocampus of GPR39 KO mice support the involvement of GPR39 in the synthesis of CREB and BDNF, proteins that are important in neuronal plasticity and the antidepressant response." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "GPR39", "zinc receptor", "depression", "HPA axis", "CREB" ]
Introduction: Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). Because the direct pathomechanism of depression remains unknown, appropriate, rapidly acting antidepressants are still lacking, which contributes to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, attenuate glutamatergic transmission through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b) and that zinc supplementation may produce antidepressant effects both alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013; Swardfager et al., 2013a). Zinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). GPR39 signals with high constitutive activity via Gq, stimulating transcription mediated by the cyclic adenosine monophosphate (cAMP) response element following inositol 1,4,5-trisphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013).
In the present study, we investigated behavior in mice lacking the GPR39 receptor, as well as their hypothalamic-pituitary-adrenal (HPA) axis activity and the levels of proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response. Methods: Animals All of the procedures were conducted according to the National Institute of Health Animal Care and Use Committee guidelines, which were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków. CD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C, and humidity of 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. Access to food and water was ad libitum. GPR39 (−/−) male mice, as described by Holst et al. (2009), were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction, using specific primers for the WT sequence and one specific primer for the insert sequence. As with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts. Forced Swim Test The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after the adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test. The immobility time in the FST reflects the level of despair of the mice, with prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described the FST as a means of evaluating potential antidepressant properties of drugs, we also prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured.
 Tail Suspension Test WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Mlyniec and Nowak (2012). Animals were fastened by the tail with medical adhesive tape 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually. Locomotor Activity Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes. Light/Dark Test WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments, which were connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to explore freely. The following parameters were quantified: 1) time spent in the lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossings (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).
 Zinc Concentration The serum zinc concentration in both GPR39 KO and WT mice was measured by total reflection X-ray fluorescence as described by Młyniec et al. (2014b). This method is based on the same physical principles as energy dispersive X-ray fluorescence. Gallium was added to the serum sample as an internal standard (20mL) to achieve a final concentration of 5mg/L. For all measurements, the total reflection X-ray fluorescence spectrometer Nanohunter (Rigaku) was used, with a single measurement time of 2000 seconds and a Mo X-ray tube (50kV, 0.8mA). The detection limit for Zn was about 0.4mg/L. Corticosterone Assay The serum corticosterone concentration was determined by a radioimmunological method as described by Młyniec et al. (2013a). Corticosterone was extracted from the serum with ethanol. The extract (ethanol-serum) was dried under a nitrogen stream and then dissolved in 0.1mL of 0.05mM phosphate buffer. Extracts were incubated with a 0.1-mL solution of 1,2,6,7-[3H]-corticosterone and a 0.1-mL solution of a corticosterone antibody (Chemicon) for 16 hours at 4°C. Bound and free corticosterone were separated using dextran-coated charcoal; the samples were incubated for 10 minutes at 4°C with 0.2mL of a 0.05% dextran and 0.5% charcoal suspension. After centrifugation, the supernatant was placed in a scintillator. The radioactivity was measured in a counter (Beckmann LS 335). The corticosterone content was calculated using a log-logit transformation. Western-Blot Analysis Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and zinc-deficient mice after administration of the diet for 6 weeks.
In the GPR39 KO and WT mice, in addition to GR, the levels of the CREB, BDNF, and TrkB proteins were also determined, as described by Młyniec et al. (2014b). All mice had previously been subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis. The frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). Then the membranes were incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G secondary antibody (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.
Western-Blot Analysis Glucocorticoid receptor (GR) levels were determined in the frontal cortex and hippocampus of zinc-adequate and -deficient mice after administration of the diet for 6 weeks. In the GPR39 KO and WT mice, in addition to GR, the levels of proteins such as CREB, BDNF, and TrkB were also determined, as described by Młyniec et al. (2014b). All mice had previously been subjected to the FST. After rapid decapitation of the mice (24 hours after the FST procedure), tissues were immediately isolated on dry ice and then frozen at −80°C until analysis took place. The frontal cortex and hippocampus were homogenized in 2% sodium dodecyl sulphate. After centrifugation, the total amount of protein was determined in the supernatant (BCA Protein Assay Kit, Pierce Biotechnology). The samples were separated using sodium dodecyl sulphate-polyacrylamide gel electrophoresis (Bio-Rad) under a constant voltage and then transferred (in a semi-dry transfer process) to nitrocellulose membranes. To avoid nonspecific binding, membranes were blocked for 60 minutes at room temperature with blocking solution (Roche). The membranes were then incubated overnight at 4°C with primary antibodies: anti-GR (1/1000, Santa Cruz Biotechnology), anti-CREB (1/1000), anti-BDNF (1/1000), and anti-TrkB (1/400) (Abcam, Cambridge, UK). After washing (3×10 minutes in Tris-buffered saline with Tween 20), the membranes were incubated with a horseradish peroxidase-conjugated anti-mouse or anti-rabbit immunoglobulin G secondary antibody (Western Blotting Kit, Roche) for 60 minutes at room temperature. Blots were developed using an enhanced chemiluminescence reaction (BM Chemiluminescence Western Blotting Kit, Roche). The GR, CREB, BDNF, and TrkB signals were visualized and quantified with the Gel Doc XR+ system and Image Lab 4.1 software (both Bio-Rad). To confirm equal loading of the samples on the gel, the membranes were incubated with a loading control antibody and then processed as described above. The density of each GR, CREB, BDNF, or TrkB protein band was normalized to the density of the loading control band.
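The normalization step is simple arithmetic; the sketch below (hypothetical band densities, not the study's data) shows the usual two-step calculation: divide each target band by its lane's loading control, then express each animal relative to the wild-type group mean.

    # Hypothetical densitometry normalization sketch -- illustrative values only,
    # not data from the study.
    raw = {
        "WT": [(1250.0, 980.0), (1310.0, 1010.0), (1180.0, 940.0)],  # (BDNF band, loading control)
        "KO": [(820.0, 970.0), (760.0, 1005.0), (890.0, 990.0)],
    }

    # Step 1: normalize each target band to its own lane's loading control.
    norm = {g: [band / ctrl for band, ctrl in lanes] for g, lanes in raw.items()}

    # Step 2: express every lane relative to the wild-type mean (WT mean = 1.0).
    wt_mean = sum(norm["WT"]) / len(norm["WT"])
    relative = {g: [v / wt_mean for v in vals] for g, vals in norm.items()}

    for group, vals in relative.items():
        print(group, [round(v, 2) for v in vals])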
Statistical Analysis The data are presented as the mean±SEM and were evaluated with the Student t test using GraphPad Prism software (San Diego, CA). P<.05 was considered to be statistically significant.
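The analysis is a two-sample Student t test on group means; below is a minimal sketch of the same computation outside GraphPad (illustrative numbers, not the study's measurements):

    # Minimal Student t test sketch (equal-variance, two-tailed), mirroring the
    # GraphPad analysis described above -- values are illustrative only.
    import numpy as np
    from scipy import stats

    wt = np.array([98.0, 112.0, 105.0, 120.0, 101.0, 93.0, 108.0])  # e.g., immobility time (s)
    ko = np.array([131.0, 148.0, 122.0, 155.0, 140.0, 127.0])

    t, p = stats.ttest_ind(wt, ko)  # Student's t test, equal variances assumed

    def sem(x):
        # standard error of the mean, matching the "mean ± SEM" reporting
        return x.std(ddof=1) / np.sqrt(x.size)

    print(f"WT: {wt.mean():.1f} ± {sem(wt):.1f}, KO: {ko.mean():.1f} ± {sem(ko):.1f}")
    print(f"t({wt.size + ko.size - 2}) = {t:.3f}, p = {p:.4f}  (significant if p < .05)")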
Animals: All of the procedures were conducted according to the National Institutes of Health Animal Care and Use Committee guidelines and were approved by the Ethical Committee of the Jagiellonian University Collegium Medicum, Kraków. CD-1 male mice (~22g) were housed with a natural day-night cycle, a temperature of 22±2°C, and humidity at 55±5%. The mice received a zinc-adequate (33.5mg Zn/kg) or zinc-deficient (0.2mg Zn/kg) diet purchased from MP Biomedicals (France) and administered for 6 weeks. Access to food as well as water was ad libitum. GPR39 (−/−) male mice as described by Holst et al. (2009) were generated through homologous recombination by Deltagen, Inc. by targeting the first exon of GPR39 and replacing the nucleotides from position 278 to 647 of the open reading frame with a neomycin-containing cassette. The chimeric males were crossed with C57BL/6 females and then backcrossed into C57BL/6 mice. The mice were obtained through heterozygous breeding, resulting in wild-type (WT), homozygous, or heterozygous littermates. Genotypes were verified by polymerase chain reaction, using specific primers for the WT sequence and one specific primer for the insert sequence. As with the CD-1 mice, the GPR39 knockouts (KOs) were housed under standard laboratory conditions. GPR39 KO (−/−) and GPR39 WT (+/+) mice received only a standard diet with appropriate zinc amounts. Forced Swim Test: The forced swim test (FST) was carried out on GPR39 KO and WT mice. In the classical test described by Porsolt et al. (1977), mice were dropped individually into a glass cylinder containing water. The total duration of immobility after the adaptation time (the first 2 minutes) was measured during the following 4 minutes of the test. The immobility time in the FST reflects the level of despair of the mice, prolonged immobility suggesting depressive-like behavior. Because Porsolt et al. (1977) described the FST as a means of evaluating the potential antidepressant properties of drugs, we prolonged the test, as described by Młyniec et al. (2014b), from 4 to 6 minutes, during which time the duration of immobility was measured. Tail Suspension Test: WT and GPR39 KO mice were subjected to the tail suspension test (TST) previously described by Mlyniec and Nowak (2012). Animals were fastened by the tail with medical adhesive tape 30cm below a flat surface and suspended for 6 minutes. During this period, the total immobility time was measured. Immobility time (when mice hung passively without limb movement) was scored manually. Locomotor Activity: Locomotor activity was measured by photoresistor actometers. The number of times the light beams were crossed by GPR39 KO or WT mice was counted by placing them individually in an actometer, with the test duration being between 2 and 8 minutes. Light/Dark Test: WT and GPR39 KO mice were subjected to the light/dark test as previously described by Whittle et al. (2009). The fully automated light/dark box apparatus (Stoelting) consisted of white and black compartments connected by a small opening located in the center of the common partition. Mice were individually placed in the apparatus for 10 minutes and allowed to explore freely. The following parameters were quantified: 1) time spent in the lit compartment (seconds), 2) entries into the lit compartment (number), 3) line crossings (number), 4) immobility time (seconds), 5) freezing (seconds), and 6) overall distance travelled (meters).
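Immobility in the TST was scored manually; as a hedged illustration of the bookkeeping only (hypothetical bout times, not the scoring procedure or data from the study), the total immobility time is just the summed duration of passive-hanging bouts within the 6-minute session:

    # Hypothetical manual-scoring sketch: sum immobility bouts within a 360-s TST session.
    session_length = 360.0  # 6-minute test, in seconds

    # (start, end) times in seconds at which a mouse hung passively -- made-up values.
    bouts = [(42.0, 71.5), (103.0, 160.0), (198.5, 260.0), (301.0, 355.0)]

    immobility = sum(min(end, session_length) - start
                     for start, end in bouts if start < session_length)
    print(f"total immobility: {immobility:.1f} s of {session_length:.0f} s")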
Results: Behavioral Studies of GPR39 KO Mice Before the experiments, mice were weighed. There were no differences between the WT and GPR39 KO groups [t(10)=0.2715, P=.7916]. The effect of deletion of the GPR39 on immobility time in the FST is shown in Figure 1A. GPR39 KO mice showed an increased immobility time in the FST designed by Porsolt et al. (1977) in comparison with WT mice [t(15)=2.563, P=.0217]. We found a more significant increase in immobility time in GPR39 KO vs WT mice using the modified FST (Młyniec et al., 2014b) [t(15)=4.571, P=.0004] (Figure 1B). We also found an increased immobility time in the TST in GPR39 KO mice [t(10)=2.415, P=.0363] (Figure 2). The effect of GPR39 knockout (KO) on immobility time in the standard (A) and prolonged (B) forced swim test in GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05, **p < 0.001 vs wild-type control. The effect of GPR39 knockout (KO) on immobility time in the tail suspension test in GPR39 KO mice. Values are the means ± SEM of 6 animals per group. *p < 0.05 vs wild-type control. There were no differences in locomotor activity between GPR39 KO and WT mice after 2 [t(15)=0.004016, P=.9968], 4 [t(15)=0.1016, P=.9205], 6 [t(15)=0.04298, P=.9663], and 8 [t(15)=0.05586, P=.9562] minutes (Table 1). The Effect of GPR39 KO on Spontaneous Locomotor Activity in GPR39 KO Mice. Values are the means ± SEM of 6 to 7 animals per group. In the light/dark box test, GPR39 KO mice displayed decreased entries into the lit compartment, decreased line crossing, and enhanced immobility time compared with WT control mice (Table 2). Behavioral Parameters Quantified in the Light/Dark Test in WT and GPR39 KO Mice. Values are the means ± SEM of 6 animals per group. *p < 0.05, **p < 0.01 vs proper control. Serum Zinc Concentration in GPR39 KO Mice There was no difference between GPR39 KO (1.707±0.1606mg/L) and WT mice (1.848±0.1130mg/L) in terms of serum zinc concentration [t(11)=0.7328, P=.4790]. CREB, BDNF, and TrkB Protein Levels in GPR39 KO Mice The effect of deletion of the GPR39 on CREB, BDNF, and TrkB levels in mice is shown in Figure 3. GPR39 KO mice show reduced CREB levels in the hippocampus [t(12)=2.427, P=.0319] but not in the frontal cortex [t(12)=0.8192, P=.4286] in comparison with WT mice (Figure 3A and D, respectively). The effect of GPR39 knockout (KO) on CREB, BDNF, and TrkB levels in the hippocampus (A, B, and C, respectively) and in the frontal cortex (D, E, and F, respectively) of GPR39 KO mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs wild-type control. In a similar way to the CREB levels, GPR39 KO mice also have reduced BDNF levels in the hippocampus [t(10)=2.510, P=.0309] (Figure 3B), but not in the frontal cortex, in comparison with WT control mice [t(12)=0.6015, P=.5587] (Figure 3E). There was no difference in TrkB levels between GPR39 KO and WT mice in the hippocampus [t(12)=0.6861, P=.5057] or frontal cortex [t(12)=0.9219, P=.3747] (Figure 3C and F, respectively). Serum Corticosterone Concentration in Zinc-Deficient and GPR39 KO Mice The effects of zinc deficiency on the serum corticosterone level are shown in Figure 4A. A 6-week zinc-deficient diet caused a significant increase in serum corticosterone concentration in comparison with control mice [t(8)=2.547, P=.0343]. There were no significant differences between GPR39 KO and WT mice [t(9)=1.298, P=.2266] (Figure 4B). The effect of a zinc-deficient diet (A) or GPR39 knockout (B) on serum corticosterone level in mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control. GR Protein Levels in Zinc-Deficient and GPR39 KO Mice Administration of a zinc-deficient diet for 6 weeks caused a reduction in glucocorticoid receptor levels in the hippocampus [t(11)=2.649, P=.0226] and frontal cortex [t(12)=2.475, P=.0292] (Figure 5A and B, respectively). There were no changes in the GR levels in the hippocampus [t(12)=0.3628, P=.7231] or the frontal cortex [t(12)=0.4638, P=.6511] of GPR39 KO mice in comparison with WT control mice (Figure 5C and D, respectively). The effect of a zinc-deficient diet (A,B) or GPR39 knockout (C,D) on glucocorticoid receptor levels in the hippocampus (A,C) and frontal cortex (B,D) of mice. Values are the means ± SEM of 6 to 7 animals per group. *p < 0.05 vs proper control.
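The reported test statistics can be sanity-checked against their two-tailed p values; a small scipy sketch, using the t values and degrees of freedom quoted in the behavioral results above (the agreement is to rounding):

    # Recompute two-tailed p values from the reported t statistics and degrees of freedom.
    from scipy import stats

    reported = [
        ("standard FST", 2.563, 15),   # reported P = .0217
        ("prolonged FST", 4.571, 15),  # reported P = .0004
        ("TST", 2.415, 10),            # reported P = .0363
    ]

    for label, t, df in reported:
        p = 2 * stats.t.sf(t, df)  # two-tailed p for a Student t statistic
        print(f"{label}: t({df}) = {t} -> p = {p:.4f}")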
Discussion: In the present study, we found that elimination of GPR39 leads to the development of depressive-like behavior, as measured by the FST and TST. Additionally, we found decreased entries into the lit compartment, decreased line crossing, and increased immobility time in the light/dark test, indicating an anxiety-like phenotype in GPR39 KO mice. Although not statistically significant, we also observed some tendencies towards less time spent in the lit compartment (decreased by 17%) and increased freezing behavior (28%) in GPR39 KO compared with WT mice. The GPR39 was found to be activated by zinc ions (Holst et al., 2007; Yasuda et al., 2007), and the link between zinc and depression is well known (Maes et al., 1997, 1999; Szewczyk et al., 2011; Swardfager et al., 2013a, 2013b). Ishitobi et al. (2012) did not find behavioral abnormalities in the FST after administering antisense DNA for GPR39-1b. In our present study, mice with a general GPR39 KO were used. The GPR39-1a full-length isoform was found to be a receptor for zinc ions, whereas GPR39-1b, corresponding to the 5-transmembrane truncated form (Egerod et al., 2007), did not respond to zinc stimulation, which means that the GPR39-1b splice variant is not a receptor for Zn2+ (Yasuda and Ishida, 2014). Activation of the GPR39 triggers diverse neuronal pathways (Holst et al., 2004, 2007; Popovics and Stewart, 2011) that may be involved in neuroprotection (Depoortere, 2012). Zinc stimulates GPR39 activity, which activates the Gαs, Gαq, and Gα12/13 pathways (Holst et al., 2007). The Gαq pathway triggers diverse downstream kinases and mediates CREB activation and cyclic adenosine monophosphate response element-dependent transcription. Our previous study showed decreased CREB, BDNF, and TrkB proteins in the hippocampus of mice under zinc-deficient conditions (Młyniec et al., 2014b). Moreover, disruption of the CaM/CaMKII/CREB signaling pathway was found after administration of a zinc-deficient diet for 5 weeks (Gao et al., 2011). Since GPR39 was discovered to be a receptor for zinc, we previously investigated whether GPR39 may be involved in the pathophysiology of depression and suicidal behavior. In our postmortem study, we found GPR39 down-regulation in the hippocampus and the frontal cortex of suicide victims (Młyniec et al., 2014b). In the present study, we investigated whether GPR39 KO would decrease levels of proteins such as CREB, BDNF, and TrkB, which have also been found to be impaired in depression and in suicide victims (Dwivedi et al., 2003; Pandey et al., 2007). Indeed, we found that lack of the GPR39 gene causes CREB and BDNF reduction in the hippocampus, but not in the frontal cortex, suggesting that the hippocampus might be a specific region for CREB and BDNF down-regulation in the absence of the zinc receptor. The CA3 region of the hippocampus seems to be strongly involved in zinc neurotransmission. Besser et al. (2009) found that the GPR39 is activated by synaptically released zinc ions in the CA3 area of the hippocampus. This activation triggers Ca2+ and Ca2+/calmodulin kinase II signaling, suggesting a role in neuron survival/neuroplasticity in this brain area (Besser et al., 2009), which is of importance in antidepressant action. In this study, we did not find any changes in TrkB levels in either the hippocampus or the frontal cortex; in the case of the hippocampus, this may be a compensatory mechanism, and it needs further investigation. There is strong evidence that zinc deficiency leads to hyperactivation of the HPA axis (Watanabe et al., 2010; Takeda and Tamano, 2012; Młyniec et al., 2012, 2013a), which is activated as a reaction to stress. The activity of the HPA axis is regulated by negative feedback through GR receptors that are present in the brain, mainly in the hippocampus (Herman et al., 2005). This mechanism has been shown to be impaired in mood disorders. In the present study, we compared corticosterone and GR levels in zinc-deficient and GPR39 KO mice. We found an elevated corticosterone concentration in the serum and decreased GR levels in the hippocampus and frontal cortex of mice receiving a zinc-deficient diet. However, there were no changes in corticosterone or GR levels in GPR39 KO mice in comparison with WT mice. This suggests that the depressive-like behavior observed in mice lacking the GPR39 gene is not due to higher corticosterone concentrations and that there is no link between GPR39 and the HPA axis. In the present study, we also did not find any changes in the serum zinc level of GPR39 KO mice in comparison with WT mice, which indicates a possible correlation between serum zinc and serum corticosterone. Depressive-like changes with a component of anxiety observed in GPR39 KO mice may result from glutamatergic abnormalities of the kind found in zinc deficiency, but this requires further investigation. Zinc, as an NMDA receptor antagonist, modulates the glutamatergic system, which is overexcited during depression. Zinc co-released with glutamate from "gluzinergic" neurons modulates the excitability of the brain by attenuating glutamate release (Frederickson et al., 2005). The GPR39 zinc receptor seems to be involved in the mechanism regulating the amount of glutamate in the brain (Besser et al., 2009). Activation of the GPR39 up-regulates KCC2 and thereby enhances Cl− efflux in postsynaptic neurons, which may potentiate γ-aminobutyric acid A (GABAA)-mediated inhibition (Chorin et al., 2011). Our present study shows that deletion of GPR39 leads to depressive-like behaviors in animals, which may be relevant to depressive disorders in humans. Decreased levels of CREB and BDNF proteins in the hippocampus of GPR39 KO mice support the involvement of GPR39 in the synthesis of CREB and BDNF, proteins that are important in neuronal plasticity and the antidepressant response.
Background: Zinc may act as a neurotransmitter in the central nervous system by activation of the GPR39 metabotropic receptor. Methods: In the present study, we investigated whether GPR39 knockout would cause depressive-like and/or anxiety-like behavior, as measured by the forced swim test, tail suspension test, and light/dark test. We also investigated whether lack of GPR39 would change levels of cAMP response element-binding protein (CREB), brain-derived neurotrophic factor (BDNF), and tropomyosin-related kinase B (TrkB) protein in the hippocampus and frontal cortex of GPR39 knockout mice subjected to the forced swim test, as measured by Western-blot analysis. Results: In this study, GPR39 knockout mice showed an increased immobility time in both the forced swim test and tail suspension test, indicating depressive-like behavior, and displayed an anxiety-like phenotype. GPR39 knockout mice had lower CREB and BDNF levels in the hippocampus, but not in the frontal cortex, which indicates region specificity for the impaired CREB/BDNF pathway (which is important in the antidepressant response) in the absence of GPR39. There were no changes in TrkB protein in either structure. In the present study, we also investigated activity of the hypothalamus-pituitary-adrenal axis under both zinc- and GPR39-deficient conditions. Zinc-deficient mice had higher serum corticosterone levels and lower glucocorticoid receptor levels in the hippocampus and frontal cortex. Conclusions: There were no corresponding changes in the GPR39 knockout mice in comparison with the wild-type control mice, which does not support a role of GPR39 in hypothalamus-pituitary-adrenal axis regulation. The results of this study indicate the involvement of the GPR39 Zn(2+)-sensing receptor in the pathophysiology of depression with a component of anxiety.
Introduction: Depression is a leading psychiatric illness, with high morbidity and mortality (Ustun, 2004). The lack of appropriate, rapidly acting antidepressants is probably due to the fact that the direct pathomechanism of depression remains unknown, and this contributes to the high suicide statistics. Approximately 50% of those diagnosed with major depressive disorder do not respond to antidepressants when using them for the first time (Fava et al., 2008). Long-term antidepressant treatment generates many side effects, and more than 30% of depressed patients do not experience any mood improvement at all (Fava and Davidson, 1996). Until now, only one drug, ketamine, has shown rapid and sustained action even in treatment-resistant patients (Mathew et al., 2012; Lara et al., 2013; Haile et al., 2014). This indicates promise for modulators of the glutamatergic system, which may lead to the establishment of homeostasis between glutamate and GABA in the central nervous system (CNS) (Skolnick, 2002; Skolnick et al., 2009; Malkesman et al., 2012; Pilc et al., 2013; Pochwat et al., 2014). In addition, some trace elements, such as magnesium and zinc, are involved in glutamatergic attenuation through their binding sites at the N-methyl-d-aspartate (NMDA) receptor (Swardfager et al., 2013b). Preclinical findings indicate that zinc deficiency produces depressive-like behavior (Singewald et al., 2004; Tassabehji et al., 2008; Tamano et al., 2009; Whittle et al., 2009; Młyniec and Nowak, 2012; Młyniec et al., 2013a, 2013b, 2014a). Clinical studies indicate that zinc is lower in the blood of depressed people (Swardfager et al., 2013b), and that zinc supplementation may produce antidepressant effects alone and in combination with conventional antidepressant therapies (Ranjbar et al., 2013; Siwek et al., 2013; Swardfager et al., 2013a). Zinc is an important trace element in the central nervous system and seems to be involved in neurotransmission. As a natural ligand, it was found to activate the metabotropic GPR39 receptor (Holst et al., 2007). The highest levels of GPR39 are found in the brain regions involved in emotion, such as the amygdala and hippocampus (McKee et al., 1997; Jackson et al., 2006). The GPR39 signals with high constitutive activity via Gq, which stimulates transcription mediated by cyclic adenosine monophosphate (cAMP) following inositol 1,4,5-triphosphate turnover, as well as via G12/13, leading to activation of transcription mediated by the serum response element (Holst et al., 2004). Zinc was found to be a ligand capable of stimulating the activity of the GPR39, which activates the Gq, G12/13, and Gs pathways (Holst et al., 2007). Since zinc shows antidepressant properties and its deficiency leads to the development of depression-like and anxiety-like behaviors (Whittle et al., 2009; Swardfager et al., 2013a), we investigated whether the GPR39 receptor may be involved in the pathophysiology of depression. Recently, we found GPR39 down-regulation in the frontal cortex and hippocampus of zinc-deficient rodents and suicide victims (Młyniec et al., 2014b). On the other hand, we observed up-regulation of the GPR39 after chronic antidepressant treatment (Młyniec and Nowak, 2013).
In the present study, we investigated behavior in mice lacking GPR39, as well as the hypothalamic-pituitary-adrenal (HPA) axis and proteins such as CREB, BDNF, and TrkB, all of which are important in the pathophysiology of depression and the antidepressant response.
9,135
330
[ 756, 276, 145, 73, 43, 139, 123, 160, 416, 34, 1902, 397, 32, 229, 109, 153, 1154 ]
18
[ "mice", "gpr39", "ko", "gpr39 ko", "zinc", "wt", "ko mice", "time", "gpr39 ko mice", "test" ]
[ "pathophysiology depression suicide", "produce antidepressant effects", "respond antidepressants", "conventional antidepressant therapies", "rapidly acting antidepressants" ]
null
[CONTENT] GPR39 | zinc receptor | depression | HPA axis | CREB [SUMMARY]
[CONTENT] GPR39 | zinc receptor | depression | HPA axis | CREB [SUMMARY]
null
[CONTENT] GPR39 | zinc receptor | depression | HPA axis | CREB [SUMMARY]
[CONTENT] GPR39 | zinc receptor | depression | HPA axis | CREB [SUMMARY]
[CONTENT] GPR39 | zinc receptor | depression | HPA axis | CREB [SUMMARY]
[CONTENT] Animals | Brain-Derived Neurotrophic Factor | CREB-Binding Protein | Corticosterone | Dark Adaptation | Depression | Disease Models, Animal | Down-Regulation | Hindlimb Suspension | Hippocampus | Immobility Response, Tonic | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Motor Activity | Receptor, trkB | Receptors, G-Protein-Coupled | Swimming | Time Factors | Zinc [SUMMARY]
[CONTENT] Animals | Brain-Derived Neurotrophic Factor | CREB-Binding Protein | Corticosterone | Dark Adaptation | Depression | Disease Models, Animal | Down-Regulation | Hindlimb Suspension | Hippocampus | Immobility Response, Tonic | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Motor Activity | Receptor, trkB | Receptors, G-Protein-Coupled | Swimming | Time Factors | Zinc [SUMMARY]
null
[CONTENT] Animals | Brain-Derived Neurotrophic Factor | CREB-Binding Protein | Corticosterone | Dark Adaptation | Depression | Disease Models, Animal | Down-Regulation | Hindlimb Suspension | Hippocampus | Immobility Response, Tonic | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Motor Activity | Receptor, trkB | Receptors, G-Protein-Coupled | Swimming | Time Factors | Zinc [SUMMARY]
[CONTENT] Animals | Brain-Derived Neurotrophic Factor | CREB-Binding Protein | Corticosterone | Dark Adaptation | Depression | Disease Models, Animal | Down-Regulation | Hindlimb Suspension | Hippocampus | Immobility Response, Tonic | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Motor Activity | Receptor, trkB | Receptors, G-Protein-Coupled | Swimming | Time Factors | Zinc [SUMMARY]
[CONTENT] Animals | Brain-Derived Neurotrophic Factor | CREB-Binding Protein | Corticosterone | Dark Adaptation | Depression | Disease Models, Animal | Down-Regulation | Hindlimb Suspension | Hippocampus | Immobility Response, Tonic | Male | Mice | Mice, Inbred C57BL | Mice, Knockout | Motor Activity | Receptor, trkB | Receptors, G-Protein-Coupled | Swimming | Time Factors | Zinc [SUMMARY]
[CONTENT] pathophysiology depression suicide | produce antidepressant effects | respond antidepressants | conventional antidepressant therapies | rapidly acting antidepressants [SUMMARY]
[CONTENT] pathophysiology depression suicide | produce antidepressant effects | respond antidepressants | conventional antidepressant therapies | rapidly acting antidepressants [SUMMARY]
null
[CONTENT] pathophysiology depression suicide | produce antidepressant effects | respond antidepressants | conventional antidepressant therapies | rapidly acting antidepressants [SUMMARY]
[CONTENT] pathophysiology depression suicide | produce antidepressant effects | respond antidepressants | conventional antidepressant therapies | rapidly acting antidepressants [SUMMARY]
[CONTENT] pathophysiology depression suicide | produce antidepressant effects | respond antidepressants | conventional antidepressant therapies | rapidly acting antidepressants [SUMMARY]
[CONTENT] mice | gpr39 | ko | gpr39 ko | zinc | wt | ko mice | time | gpr39 ko mice | test [SUMMARY]
[CONTENT] mice | gpr39 | ko | gpr39 ko | zinc | wt | ko mice | time | gpr39 ko mice | test [SUMMARY]
null
[CONTENT] mice | gpr39 | ko | gpr39 ko | zinc | wt | ko mice | time | gpr39 ko mice | test [SUMMARY]
[CONTENT] mice | gpr39 | ko | gpr39 ko | zinc | wt | ko mice | time | gpr39 ko mice | test [SUMMARY]
[CONTENT] mice | gpr39 | ko | gpr39 ko | zinc | wt | ko mice | time | gpr39 ko mice | test [SUMMARY]
[CONTENT] 2013 | depression | antidepressant | treatment | zinc | involved | swardfager | patients | high | 2012 [SUMMARY]
[CONTENT] mice | described | test | anti | minutes | membranes | corticosterone | time | immobility | gpr39 [SUMMARY]
null
[CONTENT] zinc | gpr39 | found | study | hippocampus | present | 2007 | present study | creb | mice [SUMMARY]
[CONTENT] mice | gpr39 | ko | zinc | gpr39 ko | wt | test | immobility | time | corticosterone [SUMMARY]
[CONTENT] mice | gpr39 | ko | zinc | gpr39 ko | wt | test | immobility | time | corticosterone [SUMMARY]
[CONTENT] Zinc [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
[CONTENT] Zinc ||| ||| ||| ||| ||| CREB | CREB ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] Zinc ||| ||| ||| ||| ||| CREB | CREB ||| ||| ||| ||| ||| [SUMMARY]
Incense and joss stick making in small household factories, Thailand.
25027042
Incense and joss sticks are used throughout the world. Most products are made in small household factories, and these factories present many environmental and occupational hazards.
BACKGROUND
Nine small household factories in rural areas of Roi-Et, Thailand, were studied. Dust concentration and small aerosol particles were counted through real time exposure monitoring. The inductively coupled plasma optical emission spectrometry (ICP-OES) was used for quantitative measurement of heavy metal residue in incense products.
METHODS
Several heavy metals, including barium, manganese, and lead, were found in dissolved dyes and joss sticks. The rolling and shaking processes produced the highest concentrations of dust and aerosols. Only 3.9% of female workers used personal protective equipment.
RESULTS
Dust and chemicals were the major threats in small household incense and joss stick factories in Thailand. Raising awareness of personal protective equipment use and emphasizing the elimination of environmental workplace hazards should be considered to help the workers of this industry.
CONCLUSION
[ "Child", "Cross-Sectional Studies", "Dust", "Family Characteristics", "Female", "Humans", "Industry", "Inhalation Exposure", "Male", "Metals, Heavy", "Occupational Diseases", "Occupational Exposure", "Protective Devices", "Thailand", "Wood" ]
7767602
Introduction
Incense and joss sticks have been used for various rituals throughout the world. The Egyptians and Babylonians started using incense for praying and religious ceremonies around 586–538 BC. Both the ancient Greeks and Romans used incense to drive away demons and to attract the gods. Later, the Chinese and Japanese used incense sticks of various types on different occasions.1 Nowadays, incense is made from a variety of perfumes and chemicals, and is extensively used for room deodorizers and repellents. Most incense products come from China, Vietnam, India, Cambodia, Bangladesh, and Thailand. The northern and northeastern parts of Thailand are among the largest centers of incense and joss stick production. Incense sticks are normally made in small household factories; the process needs no advanced technology. In Thailand, many rural areas are suitable for incense making, especially for incense drying.2 The main ingredients used in making incense are wood powders (including coarse sawdust and sandalwood), glutinous incense powder, fragrance powders, dye colors, and perfume oils. The small incense factories in the villages usually operate in or near the house. Wood dust, the major hazard produced by the process, diffuses around the house and contaminates the environment. The dust may affect the health of workers and of family members residing in the house. Previous studies showed that chronic exposure to wood dust can affect the respiratory tract and cause asthma as well as skin and eye irritation.-7 Occupational exposure to wood is a well-established cause of various respiratory diseases and nasal cancer.8-12 Moreover, numerous chemicals used in the production of joss sticks—stains and perfumes—may affect the workers' health. For example, dermal exposure to heavy metal residues such as lead, cadmium, chromium, and manganese, which are abundant in these chemical compounds, has deleterious health effects.13 Long-term exposure to heavy metals has many multisystem effects, such as headache, fatigue, arthralgia, abdominal discomfort, anemia, and peripheral neuropathy.20,21 Furthermore, the perfumes used in the process are generally essential oils consisting of aromatic compounds that can cause asthmatic reactions, headache, dizziness, and nausea.22-24 Most of the previous studies were devoted to the health effects of incense smoke. It was found that incense smoke inhalation is associated with lung cancer and asthma.14,26 It is also associated with an increased risk of leukemia in children whose parents had burned incense at home before pregnancy or during the nursing period.15 There are only a few studies on the occupational health effects of incense and joss stick making.16,17 Liou et al.18 found that the dust concentration in incense stick industries was as high as 42.7 mg/m3. The present study was conducted to assess the environmental hazards in small household incense and joss stick factories in rural areas of Roi-Et, Thailand.
null
null
Results
From the walk-through survey in the nine small household incense factories, we found that two types of incense sticks were made in the village—"Toop Saad" (ธูปซัด) and "Toop Fun" (ธูปฟั่น); the most popular product was Toop Saad. The raw materials for incense and joss stick making consist of bamboo sticks, glutinous incense powder that Thai people call "Go ow b owa," incense wood powders, fragrant incense powder (sandalwood), color powder, and perfume oil. This study focused on the process of manufacturing the Toop Saad incense stick. The process of joss stick making includes: 1) bamboo core preparation; 2) mixing various incense powders in the correct proportions; 3) rolling and shaking wood powders onto the sticks—in this step, a bundle of bamboo sticks is immersed in water, the sticks are rolled in incense wood powders, and the layered sticks are then shaken so that loose powders are removed; this step is repeated so that incense wood powders are firmly rolled onto the sticks layer by layer up to the desired size, after which the sticks are left under the sun to dry; 4) dipping the sticks into an incense stain—red and yellow are popular colors—after which the dyed sticks are spread out once again and left under the sun to dry; and 5) spraying aromatic perfume on the sticks and packing them (Fig 1). The process of incense making—"Toop Saad" method. Environmental Workplace Assessment Out of the nine factories studied, six were located in a place where people live; three factories were only 10–50 m away from residences. Dust was the major hazard from the incense making process and spread around the house—even into the bedroom and kitchen. Moreover, raw materials including chemical substances, incense colors, incense wood powder, and aromatic oils were stored in the house, near the bedroom or kitchen. None of the nine factories used a waste management system. The quality of most of the materials used in the process, especially the incense colors and aromatic perfumes, was low. Most of them were also unsafe, as they could spread into the environment and contaminate food and water. There were 51 workers in the nine studied factories; 41 (80%) were male and 10 (20%) were female. Most of the male workers were involved in mixing the ingredients, rolling and shaking the sticks, and dyeing and drying the incense, whereas most of the female workers worked in wrapping and packing joss sticks. Personal protective equipment (PPE) was not used by all workers (Table 1). Some of the workers (especially those in the packing section) took care of their children while working; therefore, children may inhale the hazardous chemicals and particulate matter.
Dust and Particles Concentration Measurement Real-time exposure monitoring results are shown in Table 2. The dust concentration was high at all production stages, especially in rolling and shaking wood powders onto the sticks, packing, and mixing, where the mean±SD dust concentrations were 0.54±0.27, 0.48±0.16, and 0.31±0.18 mg/m3, respectively. The mean concentration of small aerosol particles was highest in the aromatics spraying and packing units: 9018 particles/mL in the aromatics spraying unit and 8603 particles/mL in the packing unit. The mean±SD dust concentration measured at four spots of the comparison house was 0.02±0.25 mg/m3; the mean±SD small particle count was 1465±21 particles/mL. The dust concentrations and small particle counts measured in the small household factories were significantly (p<0.05) higher than in the comparison house.
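As a hedged illustration of how such real-time readings can be reduced to the stage-level summaries above (hypothetical logger values, not the study's raw data), each stage's samples are collapsed to mean±SD and printed next to the comparison-house reference:

    # Hypothetical reduction of real-time dust-monitor readings (mg/m3) to stage summaries.
    import statistics

    readings = {  # made-up logger samples per production stage
        "rolling/shaking": [0.31, 0.62, 0.85, 0.44, 0.48],
        "packing": [0.35, 0.55, 0.62, 0.41],
        "mixing": [0.18, 0.45, 0.29],
        "comparison house": [0.02, 0.02, 0.03, 0.01],
    }

    for stage, vals in readings.items():
        mean = statistics.mean(vals)
        sd = statistics.stdev(vals)  # sample standard deviation
        print(f"{stage}: {mean:.2f} ± {sd:.2f} mg/m3 (n={len(vals)})")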
Assessing the Concentrations of Chemicals in Incense Products and Dissolved Dyes Several heavy metals were detected in the 10 dissolved dye and 10 incense samples (Table 3). Environmental Management and Waste Disposal Assessment of the workplace environment and waste management in the nine factories by walk-through survey showed a large amount of waste and residual water remaining from the incense making process. Most of the waste and residual water originated from the incense stick dipping and dyeing stages and from dust or incense powders left on the floor. The waste water was left around the house or used for watering plants. Dyes were also stored in uncovered containers and could spread around the house or on the ground. All of this put children at high risk of exposure to various chemicals, including heavy metals. Normally, the residual dust spread on the floor and walls was not swept away or cleaned. The products were usually stored in the residence before distribution. Identification of Hazardous Agents and Health Risk Factors Wood dust and chemical substances were the major hazards that directly affected the workers, especially those who worked in the mixing process, the rolling and shaking unit, and the dyeing section. Workers in the dyeing unit and aromatics spraying were at particular risk of exposure to various chemicals, such as heavy metals and volatile organic compounds. Furthermore, awkward working postures and repetitive movements may lead to musculoskeletal disorders such as back and muscle pain. Moreover, working without PPE, eating while working or eating in a contaminated workplace area, and smoking were other important factors that could cause health problems. Working with little information on good practice and good hygiene may cause occupational injuries, skin diseases, respiratory symptoms, and other illnesses (Table 4).
null
null
[ "TAKE-HOME MESSAGE", "\nEnvironmental Workplace Assessment", "\nDust and Particles Concentration Measurement", "\nAssessing the Concentrations of Chemicals in Incense Products and Dissolved Dyes", "\nEnvironmental Management and Waste Disposal", "\nIdentification of Hazardous Agents and Health Risk Factors" ]
[ "Incense and joss sticks are widely used in the world for praying\nand rituals. Most products are produced in Asia—China,\nVietnam, India, Cambodia, Bangladesh, and Thailand.\n\n Raw materials used in incense and joss stick production\nprocess such as incense powder, saw dust and chemicals\nare harmful.\n\n Real time monitoring reveled high dust concentrations in\nall the production process. Many heavy metals were found\nin dissolved dye and incense products samples. Those included\nbarium, manganese, lead, barium, cadmium, and\nnickel.\n\n The health hazards identified in the process included wood\ndust, mold, chemicals (heavy metals, aromatic compounds),\nunsafe machine and equipment, adopting awkward working\npostures, and low frequency of use of personal protective\nequipment.", "\nOut of the nine factories studied, six small incense and joss stick factories were located in a place where people live; three factories were only 10–50 m far away from residence place. Dust was the major hazard from the incense making process and spread around the house—even into bedroom and kitchen. Moreover, raw materials including chemical substances, incense colors, incense wood powder and aromatic oils were stored in the house, near bedroom or kitchen. None of the nine factories used waste management system. The quality of most of the materials used in the process, especially incense colors and aromatic perfumes, were low. Most of them were also unsafe so that they could spread into the environment and contaminate food and water.\n\nThere were 51 workers in the nine studied factories; 41 (80%) were male and 10 (20%) were female. Most of the male workers were involved in mixing the ingredients, rolling and shaking the sticks, and dyeing and drying the incense, whereas most of female workers worked in wrapping and packing joss sticks. Personal protective equipment (PPE) was not used by all workers (Table 1). Some of the workers (especially those in packing section) took care of their children while working. Therefore, children may inhale the hazardous chemicals and particulate matters.", "\nReal time exposure monitoring results are shown in Table 2. The dust concentration was high in all production stages, especially in rolling and shaking wood powders onto the sticks, packing and mixing process, where the mean±SD dust concentrations were 0.54±0.27, 0.48±0.16, and 0.31±0.18 mg/m3, respectively.\n\nThe mean concentration of small aerosol particles was the highest in aromatics spraying and packing units. There were 9018 particles/mLin aromatics spraying unit and 8603 particles/mL in the packing unit. The mean±SD dust concentration measured in four spots of the comparison house was 0.02±0.25 mg/m3;the mean±SD small particle count was 1465±21 particles/mL. The measured dust concentration and small particles count measured in small household factories were significantly (p<0.05) higher than the comparison house.", "\nSeveral heavy metals were detected in 10 dissolved dyes and 10 incense samples (Table 3).", "\nAssessment of environmental workplace and waste management in the nine factories by walk-through survey showed a large amount of waste and residual water, which remain from the incense making process. Most of the wastes and residual water originated from incense stick dipping stages, dyeing and dust or incense powders left on the floor. The waste water would be left around the house or used for watering of plants. 
Dyes were also stored in a container without covering and could be spread around the house or on the ground. All these put children at high risk of being exposed to various chemicals including heavy metals. Normally, the residual dust spreading on the floor and wall were not swept away or cleansed. The products were usually stored in the residence before distribution.", "\nWood dust and chemical substances were the major hazard that directly affected the workers, especially those who worked in the mixing process, rolling and shaking unit, and dyeing section. However, incense workers who worked in dyeing unit and aromatics spraying were at risk of being exposed to various chemicals, such as heavy metals and volatile organic compounds. Furthermore, working in an awkward working posture, and repetitive movements may lead to musculoskeletal disorders such as back and muscle pain. Moreover, working without PPE, eating while working or eating at contaminated workplace area, and smoking were other important factors that would cause health problems. Working with low information of good practise and good hygiene may cause occupational injuries, skin diseases, respiratory symptoms and others illnesses (Table 4)." ]
[ null, null, null, null, null, null ]
[ "TAKE-HOME MESSAGE", "Introduction", "Materials and Methods", "Results", "\nEnvironmental Workplace Assessment", "\nDust and Particles Concentration Measurement", "\nAssessing the Concentrations of Chemicals in Incense Products and Dissolved Dyes", "\nEnvironmental Management and Waste Disposal", "\nIdentification of Hazardous Agents and Health Risk Factors", "Discussion", "Acknowledgements", "Conflict of Interest:", "Financial Support:" ]
[ "Incense and joss sticks are widely used in the world for praying\nand rituals. Most products are produced in Asia—China,\nVietnam, India, Cambodia, Bangladesh, and Thailand.\n\n Raw materials used in incense and joss stick production\nprocess such as incense powder, saw dust and chemicals\nare harmful.\n\n Real time monitoring reveled high dust concentrations in\nall the production process. Many heavy metals were found\nin dissolved dye and incense products samples. Those included\nbarium, manganese, lead, barium, cadmium, and\nnickel.\n\n The health hazards identified in the process included wood\ndust, mold, chemicals (heavy metals, aromatic compounds),\nunsafe machine and equipment, adopting awkward working\npostures, and low frequency of use of personal protective\nequipment.", "\nIncense and joss stick have been used for various rituals throughout the world. The Egyptians and Babylonians started using incense for praying and religious ceremonies around 586–538 BC. Both the ancient Greeks and Romans used incense to drive away demons and to attract the gods. Then, Chinese and Japanese used incense sticks of various types at different occasions.1 Nowadays, incense are made from a variety of perfumes and chemicals, and extensively used for room deodorizers and repellents. Most of the incense products come from China, Vietnam, India, Cambodia, Bangladesh, and Thailand. The northern and northeastern parts of Thailand are among the largest centers for incense and joss stick production.\n\nIncense sticks are normally made in small household factories; the process needs no advanced technology. In Thailand, many rural areas are suitable for incense making especially for incense drying.2 The main ingredients used in making incense are wood powders including coarse sawdust, sandal wood, glutinous incense powder, fragrance powders, dye colors, and perfume oils. The small incense factories in the villages are usually operated in or near the house. Wood dust, the major hazard produced from the process, diffuses around the house and contaminates the environment. The dust may affect the health of workers and their family members residing in the house. Previous studies showed that chronic exposure to wood dust can affect the respiratory tract and cause asthma, and skin and eye irritation.-7 Occupational exposure to wood is a well-established cause of various respiratory diseases and nasal cancer.8-12 Moreover, numerous chemicals used in production of joss sticks—stains and perfumes—may affect the workers’ health. For example dermal exposure to heavy metal residues such as lead, cadmium, chromium, manganese, etc , which is abundant in the chemical compounds has deleterious health effects.13 Long-term exposure to heavy metals has many multisystem effects such as headache, fatigue, arthralgia, abdominal discomfort, anemia, peripheral neuropathy, etc .20,21 Furthermore, perfumes used in the process are generally essential oils consisting of aromatics compounds that can cause asthmatic reactions, headache, dizziness, nausea, etc .22-24\n\nMost of the previous studies devoted to the health effects of incense smoke. 
It was found that incense smoke inhalation is associated with lung cancer and asthma.14,26 It is also associated with an increased risk of leukemia in children whose parents had burned incense at home before pregnancy or during nursing period.15 There are only a few studies on the occupational health effects of incense and joss stick making.16,17 Liou, et al ,18 found that the dust concentration in incense stick industries was as high as 42.7 mg/m3.\n\nThe present study was conducted to assess the environmental hazards in small household incense and joss stick factories in rural areas of Roi-Et, Thailand.", "\nThis cross-sectional study was conducted from October 2011 to March 2012 in Dong Deang village at Roi-Et province, Thailand. The village is the largest area for making incense and joss stick in northern Thailand. There were 21 factories in the region where incense and joss sticks were made manually, however, we only included small household factories with five or more workers and where all production steps, including mixing incense powders, rolling and shaking wood powders onto the sticks, staining sticks, spraying of aromatics, and packing were being done. From a primary survey, we found only nine small household factories that fulfilled the inclusion criteria.\n\nThe survey record form was adopted from the Canada Occupational Walkthrough survey form,25 which consisted of a section on the main information such as the nature of process operation, material and quantities used, equipment and machinery, identified hazards, personal protective equipment (PPE) used, waste and environmental management, etc . Real time exposure monitoring by Dusttrak Aerosol Monitor 8520 (DustTrak™) was used to measure the concentration of aerosols and particles corresponding to PM10, according to the Division of Medical Engineering (DME), Ministry of Public Health (MoPH). Small particles and particles of chemical reaction were measured by Ultrafine Particle Counter 8525 (P-Trak™) for environmental assessment of the workplace. The measurements were made in four small household factories and a house from the same village for comparison where no incense making was taken place. The dust and small particles were measured at each production stage in all studied factories. The measurements were repeated for three times every 10 min for each stage. The microwave digestion method and analysis by Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) was used for determination of heavy metals in dyes used in the process. The measurements were made for 10 dye samples and 10 incense products.13", "\nFrom the walk-through survey in the nine small household incense factories, we found that two types of incense sticks were made in the village—“Toop Saad ” (ธูปซัด) and “Toop Fun ” (ธูปฟั่น); however, the most popular product was Toop Saad . The raw material for incense and joss stick making consist of bamboo stick, glutinous incense power that Thai people call “Go ow b owa ,” incenses wood powders, fragrances incense powder (sandal wood), color powder and perfume oil. This study has focused on the process of manufacturing Toop Saad incense stick.\n\nThe process of joss stick making includes 1) bamboo cores preparation; 2) mixing various incense powders in a correct proportion; 3) rolling and shaking wood powders onto the sticks. In this step, a bundle of bamboo stick is immersed into water. The sticks are then rolled into incense wood powders. 
The layered sticks are then shaked so that the loose powders are removed. This step is repeated so that incense wood powders are firmly rolled on the sticks layer by layer to the desirable size. The sticks are left under the sun to dry. 4) The sticks are then dipped into an incense stain. Red and yellow paints are popular colors. The dyed sticks are then spread out once again and left under the sun to dry. 5) The last step is spraying aromatic perfume on the sticks and packing them (Fig 1).\nThe process of incense making—“Toop Saad” method\n \nEnvironmental Workplace Assessment \nOut of the nine factories studied, six small incense and joss stick factories were located in a place where people live; three factories were only 10–50 m far away from residence place. Dust was the major hazard from the incense making process and spread around the house—even into bedroom and kitchen. Moreover, raw materials including chemical substances, incense colors, incense wood powder and aromatic oils were stored in the house, near bedroom or kitchen. None of the nine factories used waste management system. The quality of most of the materials used in the process, especially incense colors and aromatic perfumes, were low. Most of them were also unsafe so that they could spread into the environment and contaminate food and water.\n\nThere were 51 workers in the nine studied factories; 41 (80%) were male and 10 (20%) were female. Most of the male workers were involved in mixing the ingredients, rolling and shaking the sticks, and dyeing and drying the incense, whereas most of female workers worked in wrapping and packing joss sticks. Personal protective equipment (PPE) was not used by all workers (Table 1). Some of the workers (especially those in packing section) took care of their children while working. Therefore, children may inhale the hazardous chemicals and particulate matters.\n\nOut of the nine factories studied, six small incense and joss stick factories were located in a place where people live; three factories were only 10–50 m far away from residence place. Dust was the major hazard from the incense making process and spread around the house—even into bedroom and kitchen. Moreover, raw materials including chemical substances, incense colors, incense wood powder and aromatic oils were stored in the house, near bedroom or kitchen. None of the nine factories used waste management system. The quality of most of the materials used in the process, especially incense colors and aromatic perfumes, were low. Most of them were also unsafe so that they could spread into the environment and contaminate food and water.\n\nThere were 51 workers in the nine studied factories; 41 (80%) were male and 10 (20%) were female. Most of the male workers were involved in mixing the ingredients, rolling and shaking the sticks, and dyeing and drying the incense, whereas most of female workers worked in wrapping and packing joss sticks. Personal protective equipment (PPE) was not used by all workers (Table 1). Some of the workers (especially those in packing section) took care of their children while working. Therefore, children may inhale the hazardous chemicals and particulate matters.\n \nDust and Particles Concentration Measurement \nReal time exposure monitoring results are shown in Table 2. 
The dust concentration was high in all production stages, especially in rolling and shaking wood powders onto the sticks, packing and mixing process, where the mean±SD dust concentrations were 0.54±0.27, 0.48±0.16, and 0.31±0.18 mg/m3, respectively.\n\nThe mean concentration of small aerosol particles was the highest in aromatics spraying and packing units. There were 9018 particles/mLin aromatics spraying unit and 8603 particles/mL in the packing unit. The mean±SD dust concentration measured in four spots of the comparison house was 0.02±0.25 mg/m3;the mean±SD small particle count was 1465±21 particles/mL. The measured dust concentration and small particles count measured in small household factories were significantly (p<0.05) higher than the comparison house.\n\nReal time exposure monitoring results are shown in Table 2. The dust concentration was high in all production stages, especially in rolling and shaking wood powders onto the sticks, packing and mixing process, where the mean±SD dust concentrations were 0.54±0.27, 0.48±0.16, and 0.31±0.18 mg/m3, respectively.\n\nThe mean concentration of small aerosol particles was the highest in aromatics spraying and packing units. There were 9018 particles/mLin aromatics spraying unit and 8603 particles/mL in the packing unit. The mean±SD dust concentration measured in four spots of the comparison house was 0.02±0.25 mg/m3;the mean±SD small particle count was 1465±21 particles/mL. The measured dust concentration and small particles count measured in small household factories were significantly (p<0.05) higher than the comparison house.\n \nAssessing the Concentrations of Chemicals in Incense Products and Dissolved Dyes \nSeveral heavy metals were detected in 10 dissolved dyes and 10 incense samples (Table 3).\n\nSeveral heavy metals were detected in 10 dissolved dyes and 10 incense samples (Table 3).\n \nEnvironmental Management and Waste Disposal \nAssessment of environmental workplace and waste management in the nine factories by walk-through survey showed a large amount of waste and residual water, which remain from the incense making process. Most of the wastes and residual water originated from incense stick dipping stages, dyeing and dust or incense powders left on the floor. The waste water would be left around the house or used for watering of plants. Dyes were also stored in a container without covering and could be spread around the house or on the ground. All these put children at high risk of being exposed to various chemicals including heavy metals. Normally, the residual dust spreading on the floor and wall were not swept away or cleansed. The products were usually stored in the residence before distribution.\n\nAssessment of environmental workplace and waste management in the nine factories by walk-through survey showed a large amount of waste and residual water, which remain from the incense making process. Most of the wastes and residual water originated from incense stick dipping stages, dyeing and dust or incense powders left on the floor. The waste water would be left around the house or used for watering of plants. Dyes were also stored in a container without covering and could be spread around the house or on the ground. All these put children at high risk of being exposed to various chemicals including heavy metals. Normally, the residual dust spreading on the floor and wall were not swept away or cleansed. 
The products were usually stored in the residence before distribution.\n \nIdentification of Hazardous Agents and Health Risk Factors \nWood dust and chemical substances were the major hazard that directly affected the workers, especially those who worked in the mixing process, rolling and shaking unit, and dyeing section. However, incense workers who worked in dyeing unit and aromatics spraying were at risk of being exposed to various chemicals, such as heavy metals and volatile organic compounds. Furthermore, working in an awkward working posture, and repetitive movements may lead to musculoskeletal disorders such as back and muscle pain. Moreover, working without PPE, eating while working or eating at contaminated workplace area, and smoking were other important factors that would cause health problems. Working with low information of good practise and good hygiene may cause occupational injuries, skin diseases, respiratory symptoms and others illnesses (Table 4).\n\nWood dust and chemical substances were the major hazard that directly affected the workers, especially those who worked in the mixing process, rolling and shaking unit, and dyeing section. However, incense workers who worked in dyeing unit and aromatics spraying were at risk of being exposed to various chemicals, such as heavy metals and volatile organic compounds. Furthermore, working in an awkward working posture, and repetitive movements may lead to musculoskeletal disorders such as back and muscle pain. Moreover, working without PPE, eating while working or eating at contaminated workplace area, and smoking were other important factors that would cause health problems. Working with low information of good practise and good hygiene may cause occupational injuries, skin diseases, respiratory symptoms and others illnesses (Table 4).", "\nOut of the nine factories studied, six small incense and joss stick factories were located in a place where people live; three factories were only 10–50 m far away from residence place. Dust was the major hazard from the incense making process and spread around the house—even into bedroom and kitchen. Moreover, raw materials including chemical substances, incense colors, incense wood powder and aromatic oils were stored in the house, near bedroom or kitchen. None of the nine factories used waste management system. The quality of most of the materials used in the process, especially incense colors and aromatic perfumes, were low. Most of them were also unsafe so that they could spread into the environment and contaminate food and water.\n\nThere were 51 workers in the nine studied factories; 41 (80%) were male and 10 (20%) were female. Most of the male workers were involved in mixing the ingredients, rolling and shaking the sticks, and dyeing and drying the incense, whereas most of female workers worked in wrapping and packing joss sticks. Personal protective equipment (PPE) was not used by all workers (Table 1). Some of the workers (especially those in packing section) took care of their children while working. Therefore, children may inhale the hazardous chemicals and particulate matters.", "\nReal time exposure monitoring results are shown in Table 2. 
The dust concentration was high in all production stages, especially in rolling and shaking wood powders onto the sticks, packing and mixing process, where the mean±SD dust concentrations were 0.54±0.27, 0.48±0.16, and 0.31±0.18 mg/m3, respectively.\n\nThe mean concentration of small aerosol particles was the highest in aromatics spraying and packing units. There were 9018 particles/mLin aromatics spraying unit and 8603 particles/mL in the packing unit. The mean±SD dust concentration measured in four spots of the comparison house was 0.02±0.25 mg/m3;the mean±SD small particle count was 1465±21 particles/mL. The measured dust concentration and small particles count measured in small household factories were significantly (p<0.05) higher than the comparison house.", "\nSeveral heavy metals were detected in 10 dissolved dyes and 10 incense samples (Table 3).", "\nAssessment of environmental workplace and waste management in the nine factories by walk-through survey showed a large amount of waste and residual water, which remain from the incense making process. Most of the wastes and residual water originated from incense stick dipping stages, dyeing and dust or incense powders left on the floor. The waste water would be left around the house or used for watering of plants. Dyes were also stored in a container without covering and could be spread around the house or on the ground. All these put children at high risk of being exposed to various chemicals including heavy metals. Normally, the residual dust spreading on the floor and wall were not swept away or cleansed. The products were usually stored in the residence before distribution.", "\nWood dust and chemical substances were the major hazard that directly affected the workers, especially those who worked in the mixing process, rolling and shaking unit, and dyeing section. However, incense workers who worked in dyeing unit and aromatics spraying were at risk of being exposed to various chemicals, such as heavy metals and volatile organic compounds. Furthermore, working in an awkward working posture, and repetitive movements may lead to musculoskeletal disorders such as back and muscle pain. Moreover, working without PPE, eating while working or eating at contaminated workplace area, and smoking were other important factors that would cause health problems. Working with low information of good practise and good hygiene may cause occupational injuries, skin diseases, respiratory symptoms and others illnesses (Table 4).", "\nOccupational exposure to hazardous agents is important in causing illness among workers. In this study, we evaluated the environmental hazards in nine small household incense and joss sticks factories in Dong Deang village, Roi-Et province, Thailand. The results showed that in the studied household factories the process was substandard; the working environment was unsafe; there were numerous environmental hazards; workers as well as non-workers (eg , children) who lived in the same place, were exposed to various types of dusts, different chemicals, heavy metals, residual water and wastes. Real time exposure monitoring revealed that the dust (PM10) concentrations were high in all stages of the production. This was consistent with previous studies conducted in Taiwan that showed the concentration of total dust in a large incense stick factory was very high, particularly in the mixing and rolling stage of sticks (9.9–42.7 mg/m3). 
The total dust concentration in mixing unit in Japanese incense and coil incense factory was 9.9–31.1 mg/m3.16,18 The mean concentration of dust (PM10) in recorded in our study (Table 2) was higher than the mean dust concentration reported in real time monitoring in a tobacco factory, Bangkok, Thailand (0.37 mg/m3).19\n\nHealth hazards associated with wood dust exposure have been described in previous studies. Occupational exposure to wood dust has been shown to cause respiratory diseases, asthma, nasal cancer and skin irritation.3-7 Heavy metals, which were detected in the process of incense production, have various deleterious health effects. In particular, long-term exposure to lead has serious health effects on developing a children.20,21 Many perfumes used in incense production is a mixture of fragrances such as essential oils and aromatic compounds. It is shown that some fragrances can cause asthmatic reactions in some individuals, especially in those with severe or atopic asthma, and can cause headache, dizziness, nausea, etc .22-24\n\nInappropriate working behavior and adopting awkward working posture are other factors influencing the health of incense makers.10,18 In spite of the presence of a high concentration of dust and other pollutants in the environment, only few workers used PPE (Table 1).\n\nWe found that incense production process in small household factories in Thailand is running under unstandardized and unsafe conditions. Waste from the process was not suitably controlled and managed. We should increase the awareness of workers working in such industries of environmental hazards and encourage them to use PPE. Further studies are needed to shed more light over health of workers and children who live in household factories.", "\nThe authors would like to thank the Thai Fogarty ITREOH center, Chulalongkorn University (D43 TWOO7849 NIH FIC) and the Fundamental Research Funds for 90th Anniversary of Chulalongkorn University for financial support of this research. The authors would also to include the Bureau of Epidemiology, DDC, MoPH, Thailand for research joint and collaboration.", "\nNone declared.", "\nThai Fogarty ITREOH center, Chulalongkorn University (D43 TWOO7849 NIH FIC) and The Fundamental Research Funds for 90th Anniversary of Chulalongkorn University." ]
[ null, "introduction", "materials and methods", "results", null, null, null, null, null, "discussion", "acknowledgements", "COI-statement", "financial support:" ]
[ "Environment", "Occupational exposure", "Dust", "Aerosols", "Workplace", "Risk assessment", "Thailand", "Joss stick" ]
TAKE-HOME MESSAGE: Incense and joss sticks are widely used in the world for praying and rituals. Most products are produced in Asia—China, Vietnam, India, Cambodia, Bangladesh, and Thailand. Raw materials used in the incense and joss stick production process, such as incense powder, sawdust, and chemicals, are harmful. Real time monitoring revealed high dust concentrations in all the production processes. Many heavy metals were found in dissolved dye and incense product samples. Those included barium, manganese, lead, cadmium, and nickel. The health hazards identified in the process included wood dust, mold, chemicals (heavy metals, aromatic compounds), unsafe machines and equipment, awkward working postures, and low frequency of use of personal protective equipment. Introduction: Incense and joss sticks have been used for various rituals throughout the world. The Egyptians and Babylonians started using incense for praying and religious ceremonies around 586–538 BC. Both the ancient Greeks and Romans used incense to drive away demons and to attract the gods. Later, the Chinese and Japanese used incense sticks of various types on different occasions.1 Nowadays, incense is made from a variety of perfumes and chemicals, and is extensively used for room deodorizers and repellents. Most of the incense products come from China, Vietnam, India, Cambodia, Bangladesh, and Thailand. The northern and northeastern parts of Thailand are among the largest centers for incense and joss stick production. Incense sticks are normally made in small household factories; the process needs no advanced technology. In Thailand, many rural areas are suitable for incense making, especially for incense drying.2 The main ingredients used in making incense are wood powders including coarse sawdust, sandal wood, glutinous incense powder, fragrance powders, dye colors, and perfume oils. The small incense factories in the villages are usually operated in or near the house. Wood dust, the major hazard produced from the process, diffuses around the house and contaminates the environment. The dust may affect the health of workers and their family members residing in the house. Previous studies showed that chronic exposure to wood dust can affect the respiratory tract and cause asthma, and skin and eye irritation.3-7 Occupational exposure to wood dust is a well-established cause of various respiratory diseases and nasal cancer.8-12 Moreover, numerous chemicals used in the production of joss sticks—stains and perfumes—may affect the workers’ health. For example, dermal exposure to heavy metal residues such as lead, cadmium, chromium, and manganese, which are abundant in the chemical compounds, has deleterious health effects.13 Long-term exposure to heavy metals has many multisystem effects such as headache, fatigue, arthralgia, abdominal discomfort, anemia, and peripheral neuropathy.20,21 Furthermore, perfumes used in the process are generally essential oils consisting of aromatic compounds that can cause asthmatic reactions, headache, dizziness, and nausea.22-24 Most of the previous studies were devoted to the health effects of incense smoke. It was found that incense smoke inhalation is associated with lung cancer and asthma.14,26 It is also associated with an increased risk of leukemia in children whose parents had burned incense at home before pregnancy or during the nursing period.15 There are only a few studies on the occupational health effects of incense and joss stick making.16,17 Liou, et al,18 found that the dust concentration in incense stick industries was as high as 42.7 mg/m3. The present study was conducted to assess the environmental hazards in small household incense and joss stick factories in rural areas of Roi-Et, Thailand. Materials and Methods: This cross-sectional study was conducted from October 2011 to March 2012 in Dong Deang village, Roi-Et province, Thailand. The village is the largest area for making incense and joss sticks in northern Thailand. There were 21 factories in the region where incense and joss sticks were made manually; however, we only included small household factories with five or more workers and where all production steps, including mixing incense powders, rolling and shaking wood powders onto the sticks, staining sticks, spraying of aromatics, and packing, were being done. From a primary survey, we found only nine small household factories that fulfilled the inclusion criteria. The survey record form was adopted from the Canada Occupational Walkthrough survey form,25 which consisted of a section on the main information such as the nature of the process operation, materials and quantities used, equipment and machinery, identified hazards, personal protective equipment (PPE) used, and waste and environmental management. Real time exposure monitoring with a DustTrak Aerosol Monitor 8520 (DustTrak™) was used to measure the concentration of aerosols and particles corresponding to PM10, according to the Division of Medical Engineering (DME), Ministry of Public Health (MoPH). Small particles and particles of chemical reactions were measured with an Ultrafine Particle Counter 8525 (P-Trak™) for environmental assessment of the workplace. The measurements were made in four small household factories and, for comparison, in a house from the same village where no incense making took place. The dust and small particles were measured at each production stage in all studied factories. The measurements were repeated three times, every 10 min, for each stage. Microwave digestion followed by analysis with Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) was used for determination of heavy metals in the dyes used in the process. The measurements were made for 10 dye samples and 10 incense products.13 Results: From the walk-through survey in the nine small household incense factories, we found that two types of incense sticks were made in the village—“Toop Saad” (ธูปซัด) and “Toop Fun” (ธูปฟั่น); however, the most popular product was Toop Saad. The raw materials for incense and joss stick making consist of bamboo sticks, glutinous incense powder that Thai people call “Go ow b owa,” incense wood powders, fragrant incense powder (sandal wood), color powder, and perfume oil. This study focused on the process of manufacturing the Toop Saad incense stick. The process of joss stick making includes 1) bamboo core preparation; 2) mixing various incense powders in the correct proportions; and 3) rolling and shaking wood powders onto the sticks. In this step, a bundle of bamboo sticks is immersed in water. The sticks are then rolled in incense wood powders. The layered sticks are then shaken so that the loose powders are removed. This step is repeated so that incense wood powders are firmly rolled onto the sticks layer by layer to the desired size. The sticks are left under the sun to dry. 4) The sticks are then dipped into an incense stain. Red and yellow paints are popular colors. The dyed sticks are then spread out once again and left under the sun to dry. 5) The last step is spraying aromatic perfume on the sticks and packing them (Fig 1). The process of incense making—“Toop Saad” method Environmental Workplace Assessment Of the nine factories studied, six were located in places where people live; three were only 10–50 m away from residences. Dust was the major hazard from the incense making process and spread around the house—even into the bedroom and kitchen. Moreover, raw materials including chemical substances, incense colors, incense wood powder, and aromatic oils were stored in the house, near the bedroom or kitchen. None of the nine factories used a waste management system. The quality of most of the materials used in the process, especially incense colors and aromatic perfumes, was low. Most of them were also unsafe, so they could spread into the environment and contaminate food and water. There were 51 workers in the nine studied factories; 41 (80%) were male and 10 (20%) were female. Most of the male workers were involved in mixing the ingredients, rolling and shaking the sticks, and dyeing and drying the incense, whereas most of the female workers wrapped and packed joss sticks. Not all workers used personal protective equipment (PPE) (Table 1). Some of the workers (especially those in the packing section) took care of their children while working. Therefore, children may inhale the hazardous chemicals and particulate matter. Dust and Particles Concentration Measurement Real time exposure monitoring results are shown in Table 2. The dust concentration was high in all production stages, especially in rolling and shaking wood powders onto the sticks, packing, and mixing, where the mean±SD dust concentrations were 0.54±0.27, 0.48±0.16, and 0.31±0.18 mg/m3, respectively. The mean concentration of small aerosol particles was highest in the aromatics spraying and packing units: 9018 particles/mL in the aromatics spraying unit and 8603 particles/mL in the packing unit. The mean±SD dust concentration measured at four spots of the comparison house was 0.02±0.25 mg/m3; the mean±SD small particle count was 1465±21 particles/mL. The dust concentrations and small particle counts measured in the small household factories were significantly (p<0.05) higher than those in the comparison house. Assessing the Concentrations of Chemicals in Incense Products and Dissolved Dyes Several heavy metals were detected in 10 dissolved dyes and 10 incense samples (Table 3). Environmental Management and Waste Disposal Assessment of the environmental workplace and waste management in the nine factories by walk-through survey showed a large amount of waste and residual water remaining from the incense making process. Most of the waste and residual water originated from the incense stick dipping and dyeing stages, and from dust or incense powders left on the floor. The waste water would be left around the house or used for watering plants. Dyes were also stored in uncovered containers and could spread around the house or on the ground. All of these put children at high risk of being exposed to various chemicals, including heavy metals. Normally, the residual dust on the floors and walls was not swept away or cleaned. The products were usually stored in the residence before distribution. Identification of Hazardous Agents and Health Risk Factors Wood dust and chemical substances were the major hazards that directly affected the workers, especially those in the mixing process, the rolling and shaking unit, and the dyeing section. Workers in the dyeing unit and aromatics spraying were also at risk of being exposed to various chemicals, such as heavy metals and volatile organic compounds. Furthermore, awkward working postures and repetitive movements may lead to musculoskeletal disorders such as back and muscle pain. Moreover, working without PPE, eating while working or at a contaminated workplace area, and smoking were other important factors that could cause health problems. Working with little knowledge of good practice and hygiene may cause occupational injuries, skin diseases, respiratory symptoms, and other illnesses (Table 4). Environmental Workplace Assessment: Of the nine factories studied, six were located in places where people live; three were only 10–50 m away from residences. Dust was the major hazard from the incense making process and spread around the house—even into the bedroom and kitchen. Moreover, raw materials including chemical substances, incense colors, incense wood powder, and aromatic oils were stored in the house, near the bedroom or kitchen. None of the nine factories used a waste management system. The quality of most of the materials used in the process, especially incense colors and aromatic perfumes, was low. Most of them were also unsafe, so they could spread into the environment and contaminate food and water. There were 51 workers in the nine studied factories; 41 (80%) were male and 10 (20%) were female. Most of the male workers were involved in mixing the ingredients, rolling and shaking the sticks, and dyeing and drying the incense, whereas most of the female workers wrapped and packed joss sticks. Not all workers used personal protective equipment (PPE) (Table 1). Some of the workers (especially those in the packing section) took care of their children while working. Therefore, children may inhale the hazardous chemicals and particulate matter. Dust and Particles Concentration Measurement: Real time exposure monitoring results are shown in Table 2. The dust concentration was high in all production stages, especially in rolling and shaking wood powders onto the sticks, packing, and mixing, where the mean±SD dust concentrations were 0.54±0.27, 0.48±0.16, and 0.31±0.18 mg/m3, respectively. The mean concentration of small aerosol particles was highest in the aromatics spraying and packing units: 9018 particles/mL in the aromatics spraying unit and 8603 particles/mL in the packing unit. The mean±SD dust concentration measured at four spots of the comparison house was 0.02±0.25 mg/m3; the mean±SD small particle count was 1465±21 particles/mL. The dust concentrations and small particle counts measured in the small household factories were significantly (p<0.05) higher than those in the comparison house. Assessing the Concentrations of Chemicals in Incense Products and Dissolved Dyes: Several heavy metals were detected in 10 dissolved dyes and 10 incense samples (Table 3). Environmental Management and Waste Disposal: Assessment of the environmental workplace and waste management in the nine factories by walk-through survey showed a large amount of waste and residual water remaining from the incense making process. Most of the waste and residual water originated from the incense stick dipping and dyeing stages, and from dust or incense powders left on the floor. The waste water would be left around the house or used for watering plants. Dyes were also stored in uncovered containers and could spread around the house or on the ground. All of these put children at high risk of being exposed to various chemicals, including heavy metals. Normally, the residual dust on the floors and walls was not swept away or cleaned. The products were usually stored in the residence before distribution. Identification of Hazardous Agents and Health Risk Factors: Wood dust and chemical substances were the major hazards that directly affected the workers, especially those in the mixing process, the rolling and shaking unit, and the dyeing section. Workers in the dyeing unit and aromatics spraying were also at risk of being exposed to various chemicals, such as heavy metals and volatile organic compounds. Furthermore, awkward working postures and repetitive movements may lead to musculoskeletal disorders such as back and muscle pain. Moreover, working without PPE, eating while working or at a contaminated workplace area, and smoking were other important factors that could cause health problems. Working with little knowledge of good practice and hygiene may cause occupational injuries, skin diseases, respiratory symptoms, and other illnesses (Table 4). Discussion: Occupational exposure to hazardous agents is an important cause of illness among workers. In this study, we evaluated the environmental hazards in nine small household incense and joss stick factories in Dong Deang village, Roi-Et province, Thailand. The results showed that in the studied household factories the process was substandard; the working environment was unsafe; there were numerous environmental hazards; and workers as well as non-workers (eg, children) who lived in the same place were exposed to various types of dusts, different chemicals, heavy metals, residual water, and wastes. Real time exposure monitoring revealed that the dust (PM10) concentrations were high in all stages of the production. This was consistent with previous studies conducted in Taiwan, which showed that the concentration of total dust in a large incense stick factory was very high, particularly in the mixing and rolling stages (9.9–42.7 mg/m3). The total dust concentration in the mixing unit of a Japanese incense and coil incense factory was 9.9–31.1 mg/m3.16,18 The mean concentration of dust (PM10) recorded in our study (Table 2) was higher than the mean dust concentration reported from real time monitoring in a tobacco factory in Bangkok, Thailand (0.37 mg/m3).19 Health hazards associated with wood dust exposure have been described in previous studies. Occupational exposure to wood dust has been shown to cause respiratory diseases, asthma, nasal cancer, and skin irritation.3-7 Heavy metals, which were detected in the process of incense production, have various deleterious health effects. In particular, long-term exposure to lead has serious health effects on developing children.20,21 Many perfumes used in incense production are mixtures of fragrances such as essential oils and aromatic compounds. It has been shown that some fragrances can cause asthmatic reactions in some individuals, especially in those with severe or atopic asthma, and can cause headache, dizziness, and nausea.22-24 Inappropriate working behavior and adopting awkward working postures are other factors influencing the health of incense makers.10,18 In spite of the presence of high concentrations of dust and other pollutants in the environment, only a few workers used PPE (Table 1). We found that the incense production process in small household factories in Thailand runs under substandard and unsafe conditions. Waste from the process was not suitably controlled and managed. We should increase the awareness of workers in such industries of environmental hazards and encourage them to use PPE. Further studies are needed to shed more light on the health of the workers and children who live in household factories. Acknowledgements: The authors would like to thank the Thai Fogarty ITREOH center, Chulalongkorn University (D43 TWOO7849 NIH FIC) and the Fundamental Research Funds for the 90th Anniversary of Chulalongkorn University for financial support of this research. The authors would also like to thank the Bureau of Epidemiology, DDC, MoPH, Thailand, for their joint research and collaboration. Conflict of Interest:: None declared. Financial Support:: Thai Fogarty ITREOH center, Chulalongkorn University (D43 TWOO7849 NIH FIC) and The Fundamental Research Funds for 90th Anniversary of Chulalongkorn University.
Background: Incense and joss sticks are widely used around the world. Most products are made in small household factories, in which there are many environmental and occupational hazards. Methods: Nine small household factories in rural areas of Roi-Et, Thailand, were studied. Dust concentrations and small aerosol particle counts were measured through real time exposure monitoring. Inductively coupled plasma optical emission spectrometry (ICP-OES) was used for quantitative measurement of heavy metal residues in incense products. Results: Several heavy metals were found in dissolved dye and joss sticks, including barium, manganese, and lead. The rolling and shaking processes produced the highest concentrations of dust and aerosols. Only 3.9% of female workers used personal protective equipment. Conclusions: Dust and chemicals were the major threats in small household incense and joss stick factories in Thailand. Increasing awareness of using personal protective equipment and an emphasis on eliminating environmental workplace hazards should be considered to help the workers of this industry.
null
null
4,100
190
[ 155, 251, 143, 19, 143, 145 ]
13
[ "incense", "dust", "factories", "workers", "process", "sticks", "small", "working", "wood", "house" ]
[ "japanese incense sticks", "stick production incense", "originated incense stick", "household incense joss", "materials incense joss" ]
null
null
null
[CONTENT] Environment | Occupational exposure | Dust | Aerosols | Workplace | Risk assessment | Thailand | Joss stick [SUMMARY]
null
[CONTENT] Environment | Occupational exposure | Dust | Aerosols | Workplace | Risk assessment | Thailand | Joss stick [SUMMARY]
null
[CONTENT] Environment | Occupational exposure | Dust | Aerosols | Workplace | Risk assessment | Thailand | Joss stick [SUMMARY]
null
[CONTENT] Child | Cross-Sectional Studies | Dust | Family Characteristics | Female | Humans | Industry | Inhalation Exposure | Male | Metals, Heavy | Occupational Diseases | Occupational Exposure | Protective Devices | Thailand | Wood [SUMMARY]
null
[CONTENT] Child | Cross-Sectional Studies | Dust | Family Characteristics | Female | Humans | Industry | Inhalation Exposure | Male | Metals, Heavy | Occupational Diseases | Occupational Exposure | Protective Devices | Thailand | Wood [SUMMARY]
null
[CONTENT] Child | Cross-Sectional Studies | Dust | Family Characteristics | Female | Humans | Industry | Inhalation Exposure | Male | Metals, Heavy | Occupational Diseases | Occupational Exposure | Protective Devices | Thailand | Wood [SUMMARY]
null
[CONTENT] japanese incense sticks | stick production incense | originated incense stick | household incense joss | materials incense joss [SUMMARY]
null
[CONTENT] japanese incense sticks | stick production incense | originated incense stick | household incense joss | materials incense joss [SUMMARY]
null
[CONTENT] japanese incense sticks | stick production incense | originated incense stick | household incense joss | materials incense joss [SUMMARY]
null
[CONTENT] incense | dust | factories | workers | process | sticks | small | working | wood | house [SUMMARY]
null
[CONTENT] incense | dust | factories | workers | process | sticks | small | working | wood | house [SUMMARY]
null
[CONTENT] incense | dust | factories | workers | process | sticks | small | working | wood | house [SUMMARY]
null
[CONTENT] incense | effects | affect | joss | health | stick | studies | health effects | exposure | joss stick [SUMMARY]
null
[CONTENT] incense | particles | workers | sticks | factories | dust | packing | working | house | small [SUMMARY]
null
[CONTENT] incense | declared | dust | factories | workers | 10 | working | particles | small | process [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] ||| ||| ||| Only 3.9% [SUMMARY]
null
[CONTENT] ||| ||| ||| Nine | Roi-Et | Thailand ||| ||| ||| ||| ||| ||| ||| Only 3.9% ||| Thailand ||| [SUMMARY]
null
Virtual primary care in high-income countries during the COVID-19 pandemic: Policy responses and lessons for the future.
34431426
Telemedicine, once defined merely as the treatment of certain conditions remotely, has now often been supplanted in use by broader terms such as 'virtual care', in recognition of its increasing capability to deliver a diverse range of healthcare services from afar. With the unexpected onset of COVID-19, virtual care (e.g. telephone, video, online) has become essential to facilitating the continuation of primary care globally. Over several short weeks, existing healthcare policies adapted quickly and empowered clinicians to use digital means to fulfil a wide range of clinical responsibilities which, until then, had required face-to-face consultations.
BACKGROUND
A rapid review of publicly available national policies guiding the use of virtual care in General Practice was conducted. Documents were included if issued in the first six months of the pandemic (March to August of 2020) and focussed primarily on high-income countries. Documents must have been issued by a national health authority, accreditation body, or professional organisation, and directly refer to the delivery of primary care.
METHODS
We extracted six areas of relevance: primary care transformation during COVID-19, the continued delivery of preventative care, the delivery of acute care, remote triaging, funding & reimbursement, and security standards.
RESULTS
Virtual care use in primary care saw a transformative change during the pandemic. However, despite the advances in the various governmental guidance offered, much work remains in addressing the shortcomings exposed during COVID-19 and strengthening viable policies to better incorporate novel technologies into the modern primary care clinical environment.
CONCLUSION
[ "COVID-19", "Developed Countries", "Digital Technology", "Health Policy", "Humans", "Primary Health Care", "Telemedicine" ]
8404680
Introduction
Digital technology has transformed many aspects of modern life; healthcare is no exception. Over the last decade, primary care systems have slowly started to adopt virtual modes of delivery, in which digital tools (e.g. telephone, online video) serve as a first point of contact for patients, directing them to the appropriate digital or face-to-face services based on their needs [1,2]. This approach can provide access to a range of primary care services, such as booking and cancelling appointments, having remote consultations, receiving referrals, and obtaining prescriptions [1,3]. As part of a streamlined, integrated experience, ‘virtual approaches’ have the potential of improving efficiency, patient safety, and access to care [4]. Alongside this gradual transformation came a shift in the accompanying terminology used to describe the underlying technology. The term ‘telemedicine’, which was once defined strictly as the treatment of certain conditions remotely, has now given way somewhat to broader terms such as ‘virtual care’ – a testament to the rapidly expanding functionality and application of these digital tools to facilitate the delivery of more holistic care remotely [5]. For this reason, the latter term was chosen for this rapid review. In response, over the last several years, health policies have been preparing to incorporate virtual care as an essential part of health care delivery. For example, NHS England declared in 2019 that all patients should have the right to video consultations by 2021 and that all primary care practices should ensure at least 25% of their appointments are available for online booking [6]. However, despite similar statements observed worldwide, adoption remained slow, in part due to a general hesitance with novel technologies, privacy concerns, limited stakeholder enthusiasm, and inadequate investment [7,8]. By reviewing publicly available national guidance documents available to GPs, this background paper aims to examine the differing policy-based approaches taken to meet this challenge, their potential technical shortcomings, and, ultimately, their effects on revolutionising the delivery of primary care.
Methods
In this background paper, we adopted the principles of a rapid review to identify key areas of relevance on this topic. Rapid reviews are a form of knowledge synthesis in which components of the systematic review process are simplified to produce information in a timely manner [9]. In light of the rapidly evolving pandemic, policymakers require evidence synthesis to produce robust guidance for primary care providers. The World Health Organisation (WHO) recommends rapid reviews to provide such evidence [9]. To identify relevant documents, we searched the websites of relevant national departments and health authorities (ministries of health, primary care organisations, and regulatory bodies). Regarding the inclusion criteria adopted, documents were included if they were issued in the first six months of the pandemic (March to August of 2020) and focussed on the use of remote care tools in high-income countries. Documents must have been issued by a national health authority, accreditation body, or professional organisation, and directly refer to the delivery of primary care. The documents were subsequently evaluated by two independent researchers, and the findings included in this rapid review were derived from consensus reached through regular discussions.
Results
Our rapid literature review has identified 16 documents from nine countries (Australia, Bosnia, Canada, Germany, Italy, Netherlands, Spain, United Kingdom, and the United States). From those, we extracted six areas of relevance: primary care transformation during COVID-19, the continued delivery of preventative care, the delivery of acute care, remote triaging, funding & reimbursement, and roadmap for the future of remote care models. COVID-19 and the primary care transformation With the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were hastened to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to using mainly virtual consultations was deemed appropriate for most patients who have visited their GPs at least once in the past 12 months or those who specifically requested care to be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care, and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. Virtual models of care have also been increasingly relied upon to coordinate and support public health responses to the pandemic itself, all the while minimising risks of exposure for patients, healthcare providers, and the public [3,13–15]. As part of this transformation, health policies worldwide have undergone drastic changes to adapt and facilitate virtual models’ upscaling. While these changes have lowered many of the existing barriers to digital healthcare, it is important to reflect upon their impact not just as an immediate solution to a public health emergency but also as the first of many potential regulatory adjustments necessary to enabe the continuation and proliferation of high-quality, safe virtual models usage in primary care in the future. With the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were hastened to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to using mainly virtual consultations was deemed appropriate for most patients who have visited their GPs at least once in the past 12 months or those who specifically requested care to be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care, and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. 
Delivery of routine and preventative care

Proactive health promotion and preventive care are an integral part of primary care. During a pandemic, these services still need to be delivered. The US Centers for Disease Control and Prevention (CDC) now supports the use of remote care in a range of routine and preventive care services, including management of chronic health conditions, monitoring of clinical signs (e.g. blood pressure, blood glucose, and other remote assessments), patient coaching and support (e.g. weight management and nutrition counselling), and medication management [16]. In Australia, Canada, and the United Kingdom, new guidance has advised GPs to shift patients, whenever possible, over to online or telephone-based consultations [12,17,18]. In Germany, the recently passed Digital Healthcare Act, in conjunction with the ongoing COVID-19 pandemic, is now being used as a catalyst to democratise the routine use of video-based consultations, which previously required an in-person visit with the clinician first [19]. Former insurance reimbursement restrictions capping the proportion of cases seen by clinicians via telemedicine, as well as the types of consultation services that could be provided, have also been lifted [19]. In the context of providing routine and preventative care, ‘digital-first’ approaches can be valuable tools to maintain continuity, improve timeliness, and mitigate the negative consequences of delays in the provision of care [16]. Additionally, these approaches may be beneficial in preserving the patient-physician relationship at a time when face-to-face visits are not safe, and in facilitating the engagement of those who are shielding, have limited mobility, or face other physical barriers to accessing care [16]. At the same time, we must be aware of some less understood risks. When monitoring clinical signs and symptoms remotely, not all patients have access to medical-grade, home-based self-monitoring devices (e.g. a blood pressure monitor, blood glucose monitor, or weighing scale), and they may not be familiar with proper operating procedures [20]. We cannot risk leaving many of the most vulnerable patients excluded from preventive care or further alienated by the digital divide. Future policies promoting the use of virtual models for routine and preventive care must consider these disparities in engagement with digital care, which are often driven by ethnicity, age, and socioeconomic status. Left unaddressed, these disparities can lead to a further widening of health inequities [21].
Health policies must therefore incorporate more inclusive implementation strategies that comprise the perspectives of healthcare providers and patients alike, and strengthen telehealth training to accommodate language and cultural barriers, varying levels of digital literacy, and disability [21].

Delivery of acute care

Virtual approaches can also be used to provide low-risk acute care for non-COVID-19 conditions in the community, to identify those who may need additional medical consultation or assessment, and to refer them as appropriate [16].
However, such a substantial change to the traditional means of conducting a clinical consultation inevitably brings challenges, including the impracticality of conducting certain acute consultations that require specialised equipment or physical examination, and of providing a simple, reliable means of audio-visual communication to a diverse range of patients. In these circumstances, virtual care can worsen diagnostic uncertainty, a particularly relevant concern in primary care given the breadth and complexity of possible diagnoses and the implications for patient safety [22]. To mitigate the associated risks, the US CDC developed the ‘Framework for Healthcare Systems Providing Non-COVID-19 Clinical Care’ to assist healthcare providers in determining when in-person acute care is appropriate [23]. Similarly, UK, Canadian, and Australian national accreditation bodies have issued comprehensive guidelines defining which tasks are appropriate to perform via virtual models, what equipment is required, and, most importantly, how GPs should go about conducting consultations remotely [12,20,24]. When incorporated into telemedicine policy guidance and materials, such clinical decision-support tools are key to continuing to provide necessary services while minimising the risk of harm to patients and providers.

COVID-19 remote triaging

During the COVID-19 outbreak, virtual models have also been used in a triaging capacity in primary care to identify new cases in the community and determine which patients may require further testing, potentially helping to curtail spread in the wider population [25,26]. The national guidance documentation of Australia, Canada, and the United Kingdom detailed how to remotely assess patients presenting with symptoms suggestive of COVID-19, what public health resources are available, and the appropriate next course of action [12,20,24].
In Germany, this was further accompanied by the rollout of patient self-assessment tools to optimise the use of existing telemedicine capacity and enable early detection of potential COVID-19 cases [27]. In the United States, automated bots have been incorporated alongside existing telemedicine services to allow the triaging of a rapidly expanding number of patients [26]. Patients with symptoms suggestive of COVID-19 infection who were deemed higher risk were automatically referred for further triaging in hospitals. Those with lower-risk presentations were scheduled for remote consultations via telemedicine, thus streamlining the workload for GPs, minimising the need for patients to present to hospital, and reducing overall risks of exposure [26]. As digital triaging permeates other clinical settings and countries in the future, further evaluation of its impact on quality of care, and particularly on patient safety and the efficiency of care delivery, is needed. However, it is important to note that there is no standardised, validated tool to perform remote assessment of patients with COVID-19, nor to stratify their risk of clinical deterioration. The NEWS2 score, an early warning score recommended by the UK’s National Institute for Health and Care Excellence (NICE) in its guidelines for managing COVID-19 patients in critical care, has not been validated in primary care, nor for triage purposes [20]. Further research should pave the way for the development of more bespoke prognostic scores for the remote assessment of COVID-19 patients in primary care, capitalising on the growing body of data and the novel data analytics methods available [28].
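To illustrate why an aggregate early warning score is attractive for structured triage, and why its input requirements matter remotely, the sketch below implements the published NEWS2 scoring bands and maps the total to the standard risk categories. It is an illustrative sketch only: as noted above, NEWS2 has not been validated for remote triage in primary care, several inputs (e.g. oxygen saturation) may be unobtainable without home devices, and the function names and structure here are ours, not drawn from any cited guideline.

```python
def news2_component_scores(resp_rate, spo2, on_oxygen, systolic_bp,
                           pulse, alert, temperature):
    """Published NEWS2 bands (SpO2 scale 1); alert=True means A on ACVPU."""
    s = {}
    # Respiration rate (breaths/min)
    if resp_rate <= 8:    s["resp"] = 3
    elif resp_rate <= 11: s["resp"] = 1
    elif resp_rate <= 20: s["resp"] = 0
    elif resp_rate <= 24: s["resp"] = 2
    else:                 s["resp"] = 3
    # Oxygen saturation (%), scale 1
    if spo2 <= 91:   s["spo2"] = 3
    elif spo2 <= 93: s["spo2"] = 2
    elif spo2 <= 95: s["spo2"] = 1
    else:            s["spo2"] = 0
    # Supplemental oxygen scores 2; room air scores 0
    s["oxygen"] = 2 if on_oxygen else 0
    # Systolic blood pressure (mmHg)
    if systolic_bp <= 90:    s["sbp"] = 3
    elif systolic_bp <= 100: s["sbp"] = 2
    elif systolic_bp <= 110: s["sbp"] = 1
    elif systolic_bp <= 219: s["sbp"] = 0
    else:                    s["sbp"] = 3
    # Pulse (beats/min)
    if pulse <= 40:    s["pulse"] = 3
    elif pulse <= 50:  s["pulse"] = 1
    elif pulse <= 90:  s["pulse"] = 0
    elif pulse <= 110: s["pulse"] = 1
    elif pulse <= 130: s["pulse"] = 2
    else:              s["pulse"] = 3
    # Consciousness (ACVPU): anything other than Alert scores 3
    s["consciousness"] = 0 if alert else 3
    # Temperature (degrees C)
    if temperature <= 35.0:   s["temp"] = 3
    elif temperature <= 36.0: s["temp"] = 1
    elif temperature <= 38.0: s["temp"] = 0
    elif temperature <= 39.0: s["temp"] = 1
    else:                     s["temp"] = 2
    return s

def news2_risk_band(scores: dict) -> str:
    """Map an aggregate NEWS2 score to the standard clinical risk bands."""
    total = sum(scores.values())
    if total >= 7:
        return "high"
    if 5 <= total <= 6:
        return "medium"
    if max(scores.values()) == 3:
        return "low-medium"  # any single parameter scoring 3
    return "low"

scores = news2_component_scores(resp_rate=22, spo2=94, on_oxygen=False,
                                systolic_bp=118, pulse=96, alert=True,
                                temperature=38.4)
print(sum(scores.values()), news2_risk_band(scores))  # 5 medium
```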
Technology suppliers and security standards

During the first wave of the COVID-19 outbreak, many national policies loosened security standards and relaxed telemedicine restrictions, with several high-income countries permitting the use of common, off-the-shelf communications software to ensure a quick transition for both GPs and their patients. In the US, the Department of Health and Human Services (HHS) waived penalties for the use of video consultation software not previously approved as meeting Health Insurance Portability and Accountability Act (HIPAA) requirements, allowing widely accessible consumer-grade services such as FaceTime or Skype to be used for telemedicine purposes, even if the care service is not related to COVID-19 [29]. In Europe, the General Data Protection Regulation already includes a clause exempting work in the overwhelming public interest. In the UK, NHSX encourages videoconferencing tools such as Skype, WhatsApp, and FaceTime, as well as commercial products designed specifically for healthcare purposes [30]. Although these policies have acted as strong drivers of expanding virtual care availability during the pandemic, the relaxation of privacy standards raises concerns, including the possibility that patient data shared over a non-compliant platform may be inappropriately accessed, shared, or monetised. To address these risks and maintain the momentum needed to build safer solutions, national guidance from several countries, such as the Netherlands, the US, and the UK, includes a well-defined list of pre-approved vendors [31–33]. In the absence of a list of pre-approved software vendors, Canadian and Australian national accreditation body guidelines provided additional support for GPs, specifying how to set up a workplace in a secure manner and how to remotely take patients’ informed consent to safeguard their privacy [12,34].
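One practical way a practice or national body can operationalise a pre-approved vendor list is as a simple allow-list check applied before a remote consultation is booked, paired with an explicit record of the patient’s informed consent. The sketch below is a hypothetical illustration of that pattern; the platform names, approval tiers, and consent-record fields are ours and do not reproduce any country’s actual approved list.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical allow-list: platform name -> approval tier.
# "approved" platforms meet the relevant security standard; "interim"
# platforms are tolerated only under emergency pandemic guidance.
APPROVED_PLATFORMS = {
    "accredited-video-service": "approved",
    "skype": "interim",
    "whatsapp": "interim",
}

@dataclass
class ConsentRecord:
    patient_id: str
    platform: str
    tier: str
    consented_at: str

def book_remote_consultation(patient_id: str, platform: str,
                             patient_consents: bool) -> ConsentRecord:
    """Refuse unlisted platforms and require consent before booking."""
    tier = APPROVED_PLATFORMS.get(platform.lower())
    if tier is None:
        raise ValueError(f"{platform!r} is not on the pre-approved vendor list")
    if not patient_consents:
        raise ValueError("informed consent is required before booking")
    # Record which platform was used and on what basis, for audit purposes.
    return ConsentRecord(patient_id, platform.lower(), tier,
                         datetime.now(timezone.utc).isoformat())

record = book_remote_consultation("patient-042", "Skype", patient_consents=True)
print(record.tier)  # 'interim' - permitted under emergency guidance only
```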
Funding and reimbursement

The first wave of COVID-19 demonstrated the need to streamline and reinforce existing funding avenues to meet the need for virtual care delivery during a pandemic. In this context, governments and health policymakers worldwide unveiled guidance on changes to existing billing procedures and the availability of additional funding, with important implications for how primary care is delivered and funded. For example, in Australia, novel financial initiatives such as the doubling of ‘bulk billing’ incentives were put in place [12]. Guidance on which patients, procedures, items, and services were eligible for the new billing arrangements was extensively detailed in the new telemedicine guidelines [12,35]. In Canada, several provincial health authorities introduced new telemedicine-specific billing codes in the hope of simplifying the overall transition [34,36]. Additionally, the COVID-19 outbreak raised awareness of the importance of strategically investing in ‘digital-first’ programmes. These programmes represent not only an emergency response to the crisis but also a potential long-term strategic vision for addressing a multitude of existing needs in primary care delivery, particularly reaching patients with poor access to care. In the US, the newly passed ‘Coronavirus Aid, Relief, and Economic Security (CARES) Act’ awards a total of $8.7 million a year for telehealth technologies used in rural and medically underserved areas [37]. Recent changes to Medicare, as well as initiatives by some private insurers, have permitted consultations completed via telemedicine to be paid at the same rate as those conducted in person [38]. However, it remains to be seen whether these changes will persist after the eventual passing of COVID-19 and translate into a more fundamental transformation of how virtual care is funded [19].
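As a concrete illustration of the billing changes described above, the sketch below models a fee schedule in which a telemedicine-specific billing code is reimbursed at parity with its in-person equivalent, as the Medicare changes permitted. All codes and amounts are invented placeholders; they are not real Medicare, Australian, or provincial billing items.

```python
# Hypothetical fee schedule: code -> (description, reimbursement in dollars).
# "TEL-" codes are telemedicine-specific equivalents of in-person items.
FEE_SCHEDULE = {
    "GP-STD":   ("Standard in-person GP consultation", 40.00),
    "TEL-STD":  ("Standard video/telephone GP consultation", 40.00),  # parity
    "GP-LONG":  ("Long in-person GP consultation", 78.00),
    "TEL-LONG": ("Long video/telephone GP consultation", 78.00),      # parity
}

def reimbursement(code: str) -> float:
    if code not in FEE_SCHEDULE:
        raise KeyError(f"unknown billing code: {code}")
    return FEE_SCHEDULE[code][1]

def is_at_parity(in_person_code: str, telemedicine_code: str) -> bool:
    """True if the remote consultation pays the same as its in-person twin."""
    return reimbursement(in_person_code) == reimbursement(telemedicine_code)

print(is_at_parity("GP-STD", "TEL-STD"))  # True under the parity policy above
```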
Discussion

Implications for the long-term future

The COVID-19 pandemic has presented a unique opportunity to challenge the long-established relationship between virtual approaches and their role in community-based care. The emergent circumstances have undoubtedly allowed many of the barriers to change to be overcome. However, this abrupt scaling up of telemedicine has also exposed many weaknesses in existing virtual solutions. It has highlighted the need for health policies that better guide the allocation of limited resources, greater investment to modernise infrastructure, expanded technical training, prioritisation of systems interoperability [39], and more support for patients with lower health and digital literacy levels [40]. While the pandemic represents an opportunity to rethink how virtual care can be better integrated into primary care, it is imperative to address these challenges through clear, deliberate policies to ensure that this devastating pandemic leaves a positive legacy. Policymakers and researchers need to establish strategic partnerships and undertake a rigorous evaluation of what worked, for which patients, and in what clinical context, to draw lessons for the long term and define evidence-based policies. Finally, primary healthcare policies need to be judiciously designed rather than wishfully conceived, an often-underestimated aspect of current processes [41]. Policies must be able to bridge the gap between theory and practice, both by proposing realistic models and by providing the necessary support to translate primary care objectives into reality [41,42]. The inherent complexity of modern primary health systems and their governance makes it unlikely that policies can be designed perfectly a priori, without the need for continuous, iterative improvement. Therefore, a greater emphasis on sound policy design, holistically incorporating feedback from healthcare staff and patients on design and maintenance, is key to ensuring the planned actions allow for innovative yet pragmatic means of achieving policy goals.

Limitations

It must be acknowledged that this background paper consisted of a rapid review of the literature available from the first months of the pandemic, intended to provide a first overview of the subject. Given the rapidly changing nature of the pandemic, much of the guidance documentation examined comprised temporary measures, hastily introduced and likely subject to further refinement and subsequent change. It is likely that new guidance has since been published, and a follow-up re-examination would be required to evaluate more comprehensively the breadth of new information available on this evolving subject. It is also important to note that rapid reviews are a preferred option when health decision-makers need timely access to information for background purposes, as per the aim of this background paper. However, they may produce less reliable evidence and may lead to suboptimal decision-making. As future steps, we suggest that further research evaluate the evidence available on this subject systematically, adhering to the principles and high level of rigour of a systematic review, including systematic searches and screening, data abstraction, and risk-of-bias appraisal conducted independently by two individuals [43].
Conclusion
The COVID-19 outbreak has been a litmus test for the robustness of virtual care models’ implementation in primary care. It has revealed many of their shortcomings while simultaneously allowing novel ideas tackling some deep-seated problems to be explored. It is now time to incorporate the practical lessons learned and to reshape current policies, rules, and regulations to support the safer, more efficient, and more equitable use of virtual care solutions.
[ "COVID-19 and the primary care transformation", "Delivery of routine and preventative care", "Delivery of acute care", "COVID-19 remote triaging", "Technology suppliers and security standards", "Funding and reimbursement", "Implications for the long-term future", "Limitations" ]
[ "With the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were hastened to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to using mainly virtual consultations was deemed appropriate for most patients who have visited their GPs at least once in the past 12 months or those who specifically requested care to be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care, and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. Virtual models of care have also been increasingly relied upon to coordinate and support public health responses to the pandemic itself, all the while minimising risks of exposure for patients, healthcare providers, and the public [3,13–15].\nAs part of this transformation, health policies worldwide have undergone drastic changes to adapt and facilitate virtual models’ upscaling. While these changes have lowered many of the existing barriers to digital healthcare, it is important to reflect upon their impact not just as an immediate solution to a public health emergency but also as the first of many potential regulatory adjustments necessary to enabe the continuation and proliferation of high-quality, safe virtual models usage in primary care in the future.", "Proactive health promotion and preventive care are an integral part of primary care. During a pandemic, these services still need to be delivered. The US Centres for Disease Control and Prevention (CDC) now supports the use of remote care in a range of routine and preventive care services, including management of chronic health conditions, monitoring of clinical signs (i.e. blood pressure, blood glucose, other remote assessments), patient coaching and support (i.e. weight management and nutrition counselling), and medication management [16]. In Australia, Canada, and the United Kingdom, new guidance has been provided advising GPs to shift patients whenever possible, over to online or telephone-based consultations [12,17,18]. In Germany, the recently passed Digital Healthcare Act in conjunction with the ongoing COVID-19 pandemic, is now used as a catalyst to democratise the routine use of video-based consultations previously requiring an in-person visit first with the clinician [19]. Former insurance reimbursement restrictions capping the proportion of cases seen by clinicians via telemedicine, as well as what types of consultations services could be provided, have also been lifted [19].\nIn the context of providing routine and preventative care, ‘digital-first’ approaches can be valuable tools to maintain continuity, improve timeliness, and mitigate the negative consequences of delays on provision of care [16]. 
Additionally, these approaches may also be beneficial in preserving the patient-physician relationship at a time when face-to-face visits are not safe, and facilitate the engagement of those who are shielding, have limited mobility or other physical limitations to access care [16].\nAt the same time, we must be aware of some less understood risks. When monitoring clinical signs and symptoms remotely, not all patients have access to medical-grade, home-based self-monitoring devices (i.e. BP monitor, blood glucose monitor, weighing scale), and may not necessarily be familiar with proper operating procedures [20]. We cannot risk leaving many of the most vulnerable patients excluded from preventive care or further alienated by the digital divide. Future policies promoting the use of virtual models for routine and preventive care must consider these disparities in the engagement with digital care, which are often driven by ethnicity, age, and socioeconomic status. Left unaddressed, these disparities can lead to the further widening of health inequities [21]. Health policies must therefore incorporate more inclusive implementation strategies by comprising the perspectives of both healthcare providers and patients alike, and strengthen telehealth training to accommodate for language and cultural barriers, varying levels of digital literacy, and disability [21].", "Additionally, virtual approaches can also be used to provide low-risk acute care for non-COVID-19 conditions in the community, identify those persons who may need additional medical consultation or assessment, and refer them as appropriate [16]. However, such substantial change to the traditional means of conducting a clinical consultation inevitably brings with it challenges, including the impracticality of conducting certain acute consultations requiring specialised equipment or physical examination and utilising a simple, reliable means of audio-visual communication with a diverse range of patients. In these circumstances, virtual care can have a detrimental effect on diagnostic uncertainty, a particularly relevant feature in primary care due to the breadth and complexity of diagnoses possible, and its implications on patient safety [22].\nTo mitigate the associated risks, the US CDC developed the ‘Framework for Healthcare Systems Providing Non-COVID19 Clinical Care’, to assist healthcare providers in determining when in-person acute care is appropriate [23]. Similarly, UK, Canadian, and Australian national accreditation bodies also included comprehensive guidelines defining what tasks are appropriate to be performed via virtual models, what equipment is required, and most importantly, how GPs should go about conducting consultations remotely [12,20,24]. When incorporated in telemedicine policies guidance and materials, the availability of such clinical decision-support tools is key to continue providing necessary services while minimising the risk of harm to patients and providers.", "During the COVID-19 outbreak, virtual models have also been used in a triaging capacity in primary care to identify new cases in the community, which patients may require further testing and potentially helping to curtail spread in the wider population [25,26]. The national guidance documentation of Australia, Canada, and the United Kingdom, detailed how to remotely assess patients who present with symptoms suggestive of COVID-19, what public health resources are available, and the appropriate next course of action to take [12,20,24]. 
In Germany, this was further accompanied by the rollout of patient self-assessment tools to optimise the use of existing telemedicine capacity and enable the ability to detect potential COVID-19 cases early on [27].\nIn the United States, automated bots have been incorporated alongside existing telemedicine services to allow the triaging of a rapidly expanding number of patients [26]. Patients with symptoms suggestive of COVID-19 infection and deemed higher risk, were automatically referred for further triaging in hospitals. Those with lower risk presentations were scheduled for consultations via telemedicine remotely, thus streamlining the workload for GPs, minimising the need for patients to present to hospital, and reducing overall risks of exposure [26]. As digital triaging permeates to other clinical settings and countries in the future, further evaluation of its impact on quality of care, and particularly on patient safety and efficiency of care delivery, is needed.\nHowever, it is important to note that there is no standardised, validated tool to perform remote assessment of patients with COVID-19, nor to stratify their risk of clinical deterioration. The NEWS2 score, an early warning score recommended by the UK’s National Institute for Health and Care Excellence (NICE) in its guidelines for managing COVID-19 patients in critical care, has not been validated in primary care, nor for triage purposes [20]. Further research should pave the way to development of more bespoke prognostics scores to be used in the remote assessment of COVID-19 patients in primary care, capitalising on the growing body of data and novel data analytics methods available [28].", "During the first wave of the COVID-19 outbreak, many national policies have loosened security standards and relaxed telemedicine restrictions, with several high-income countries permitting the use of common, off-the-shelf communications software to ensure a quick transition for both GPs and their patients. In the US, the Department of Health and Human Services (HHS) has waived penalties regarding the use of video consultation software not previously approved to meet Health Insurance Portability and Accountability Act (HIPAA) requirements, allowing widely accessible consumer-grade services such as FaceTime or Skype to be used for telemedicine purposes, even if the care service is not related to COVID-19 [29]. In Europe, the General Data Protection Regulations already include a clause exempting work in the overwhelming public interest. In the UK, NHSx encourages videoconferencing tools such as Skype, WhatsApp, Facetime, or other commercial products designed specifically for healthcare purposes [30].\nAlthough these policies have acted as strong drivers to expanding virtual care availability during the pandemic, the relaxation of privacy standards raises concerns, including the possibility that patients’ data shared over a non-compliant platform may be inappropriately accessed, shared, or monetised. To address these risks and maintain the momentum needed to building safer solutions, national guidance from several countries such as the Netherlands, US, and the UK, do include a well-defined list of pre-approved vendors [31–33]. 
In the absence of a list of pre-approved software vendors, Canadian and Australian national accreditation body guidelines provided additional support for GPs specifying how to set up their workplace in a secure manner and remotely take patient informed consent to safeguard patient privacy [12,34].", "The first wave of COVID-19 demonstrated the need to streamline and reinforce existing funding avenues to answer the need for virtual care delivery during a pandemic. In this context, governments and health policymakers worldwide have unveiled guidance regarding changes in existing billing procedures and the availability of additional funding, with important implications on how primary care is delivered and funded. For example, in Australia, novel financial initiatives such as doubling of ‘bulk billing’ have been put in place [12]. Guidance into what types of patients, and what procedures, items, and services were eligible for the new billing arrangements, was extensively detailed in the new telemedicine guidelines [12,35]. In Canada, several provincial health authorities have introduced new telemedicine-specific billing codes in the hopes of simplifying the overall transition process [34,36].\nAdditionally, the COVID-19 outbreak raised awareness on the importance of strategically investing in ‘digital-first’ programmes. These programmes represent not only an emergency response to the crisis but also a potential long-term strategic vision to addressing a multitude of existing needs in primary care delivery, particularly reaching out to patients with poor access to care. In the US, the newly passed ‘Coronavirus Aid, Relief, and Economic Security (CARES) Act’ awards a total of $8.7 million a year for telehealth technologies used in rural areas and medically underserved areas [37]. Recent changes to Medicare as well as initiatives by some private insurers have permitted consultations completed via telemedicine to be paid at the same rate as those conducted in-person [38]. However, it remains to be seen as to whether these changes will persist after the eventual passing of COVID-19 and translate into a more fundamental transformation of how virtual care is funded [19].", "The COVID-19 pandemic has presented a unique opportunity to challenge the long-established relationship between virtual approaches and their role in community-based care. The emergent circumstances have undoubtedly allowed many of the barriers to change to be overcome. However, this abrupt scaling up of telemedicine has also exposed many weaknesses in existing virtual solutions. It has highlighted the need for health policies to guide the allocation of limited resources better, greater investments to modernise infrastructure, expansion of technical training, prioritisation for systems interoperability [39], and more support for patients with lower health and digital literacy levels [40].\nWhile the pandemic represents an opportunity to rethink how virtual care can be better integrated into primary care, it is imperative to address these challenges through clear, deliberate policies to ensure that this devastating pandemic leaves a positive legacy. 
Policymakers and researchers need to establish strategic partnerships and undertake a rigorous evaluation of what worked, for which patients, and in what clinical context, to draw lessons for the long-term and define evidence-based policies.\nFinally, primary healthcare policies need to be judiciously designed rather than wishfully conceived – an often-underestimated aspect in current processes [41]. Policies must be able to bridge the gap between theory and practice, both by proposing realistic models and providing the necessary support to translate primary care objectives into reality [41,42]. The inherent complexity of modern primary health systems and their governance makes it unlikely that policies can be designed a priori perfectly, without necessitating continuous, iterative improvement. Therefore, a greater emphasis on sound policy design, holistically incorporating healthcare staff and patients’ feedback on design and maintenance, is key to ensuring the planned actions allow for innovative yet pragmatic means of achieving policy goals.", "It must be acknowledged that this background paper consisted in a rapid review of the literature available from the first months of the pandemic to provide a first overview of the subject. Given the rapidly changing nature of the pandemic, many of the guidance documentation examined were temporary measures hastily introduced and likely subject to further refinement and subsequent changes. It is likely that new guidance has been published since and would require a follow-up re-examination to evaluate more comprehensively the breath of new information available on this evolving subject.\nIt is also important to note that rapid reviews are a preferred option when health decision-makers need timely access to information for background purposes, as per the aim of this background paper. However, they may produce less reliable evidence and may lead to suboptimal decision-making. As future steps for researchers, we suggest further research should evaluate the evidence available on this subject systematically, adhering to the principles and high-level of rigour of a systematic review, including systematic searches and screening, data abstraction, and risk of bias appraisal conducted by two individuals, independently [43]." ]
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Results", "COVID-19 and the primary care transformation", "Delivery of routine and preventative care", "Delivery of acute care", "COVID-19 remote triaging", "Technology suppliers and security standards", "Funding and reimbursement", "Discussion", "Implications for the long-term future", "Limitations", "Conclusion" ]
[ "Digital technology has transformed many aspects of modern life; healthcare is no exception. Over the last decade, primary care systems have slowly started to adopt virtual modes of delivery, in which digital tools (i.e. telephone, online video) serve as a first point of contact for patients, directing them to the appropriate digital or face-to-face services based on their needs [1,2]. This approach can provide access to a range of primary care services, such as booking and cancelling appointments, having remote consultations, receiving referrals, and obtaining prescriptions [1,3]. As part of a streamlined, integrated experience, ‘virtual approaches’ have the potential of improving efficiency, patient safety, and access to care [4]. Alongside this gradual transformation, saw a shift in the accompanying terminology with which to describe the underlying technology. The term ‘telemedicine’ which once was defined strictly as the treatment of certain conditions remotely, has now given way somewhat to broader terms such as ‘virtual care’ – a testament to the rapidly expanding functionality and application of these digital tools to facilitate the delivery of more holistic care remotely [5]. For this reason, in this rapid review the latter term was chosen.\nAs a response over the last several years, health policies have been preparing to incorporate virtual care as an essential part of health care delivery. For example, NHS England declared in 2019 that all patients should have the right to video consultations by 2021 and that all primary care practices should ensure at least 25% of their appointments are available for online booking [6]. However, despite similar statements observed worldwide, adoption remained slow, in part due to a general hesitance with novel technologies, privacy concerns, limited stakeholder enthusiasm, and inadequate investment [7,8].\nBy examining publicly available national guidance documents available to GPs, this background paper aims to examine the differing policy-based approaches taken to meet this challenge, potential technical shortcomings, and ultimately, its effects on revolutionising the delivery of primary care.", "In this background paper, we adopted the principles of a rapid review to identify key areas of relevance on this topic. Rapid reviews are a form of knowledge synthesis in which components of the systematic review process are simplified to produce information on time [9]. In light of the rapidly evolving pandemic, policymakers require evidence synthesis to produce robust guidance for primary care providers. The World Health Organisation (WHO) recommends rapid reviews to provide such evidence [9].\nTo identify relevant documents, we have searched the websites of relevant national departments and health authorities (ministries of health, primary care organisations and regulatory bodies). In what concerns the inclusion criteria adopted, documents were included if issued in the first six months of the pandemic (March to August of 2020) and focussed on using remote care tools in high-income countries. Documents must have been issued by a national health authority, accreditation bodies or professional organisation, and directly refer to the delivery of primary care. 
The documents were subsequently evaluated by two independent researchers, with the findings included this rapid review derived from them reaching a consensus upon regular discussions.", "Our rapid literature review has identified 16 documents from nine countries (Australia, Bosnia, Canada, Germany, Italy, Netherlands, Spain, United Kingdom, and the United States). From those, we extracted six areas of relevance: primary care transformation during COVID-19, the continued delivery of preventative care, the delivery of acute care, remote triaging, funding & reimbursement, and roadmap for the future of remote care models.\nCOVID-19 and the primary care transformation With the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were hastened to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to using mainly virtual consultations was deemed appropriate for most patients who have visited their GPs at least once in the past 12 months or those who specifically requested care to be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care, and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. Virtual models of care have also been increasingly relied upon to coordinate and support public health responses to the pandemic itself, all the while minimising risks of exposure for patients, healthcare providers, and the public [3,13–15].\nAs part of this transformation, health policies worldwide have undergone drastic changes to adapt and facilitate virtual models’ upscaling. While these changes have lowered many of the existing barriers to digital healthcare, it is important to reflect upon their impact not just as an immediate solution to a public health emergency but also as the first of many potential regulatory adjustments necessary to enabe the continuation and proliferation of high-quality, safe virtual models usage in primary care in the future.\nWith the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were hastened to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to using mainly virtual consultations was deemed appropriate for most patients who have visited their GPs at least once in the past 12 months or those who specifically requested care to be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care, and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. 
Virtual models of care have also been increasingly relied upon to coordinate and support public health responses to the pandemic itself, all the while minimising risks of exposure for patients, healthcare providers, and the public [3,13–15].\nAs part of this transformation, health policies worldwide have undergone drastic changes to adapt and facilitate virtual models’ upscaling. While these changes have lowered many of the existing barriers to digital healthcare, it is important to reflect upon their impact not just as an immediate solution to a public health emergency but also as the first of many potential regulatory adjustments necessary to enabe the continuation and proliferation of high-quality, safe virtual models usage in primary care in the future.\nDelivery of routine and preventative care Proactive health promotion and preventive care are an integral part of primary care. During a pandemic, these services still need to be delivered. The US Centres for Disease Control and Prevention (CDC) now supports the use of remote care in a range of routine and preventive care services, including management of chronic health conditions, monitoring of clinical signs (i.e. blood pressure, blood glucose, other remote assessments), patient coaching and support (i.e. weight management and nutrition counselling), and medication management [16]. In Australia, Canada, and the United Kingdom, new guidance has been provided advising GPs to shift patients whenever possible, over to online or telephone-based consultations [12,17,18]. In Germany, the recently passed Digital Healthcare Act in conjunction with the ongoing COVID-19 pandemic, is now used as a catalyst to democratise the routine use of video-based consultations previously requiring an in-person visit first with the clinician [19]. Former insurance reimbursement restrictions capping the proportion of cases seen by clinicians via telemedicine, as well as what types of consultations services could be provided, have also been lifted [19].\nIn the context of providing routine and preventative care, ‘digital-first’ approaches can be valuable tools to maintain continuity, improve timeliness, and mitigate the negative consequences of delays on provision of care [16]. Additionally, these approaches may also be beneficial in preserving the patient-physician relationship at a time when face-to-face visits are not safe, and facilitate the engagement of those who are shielding, have limited mobility or other physical limitations to access care [16].\nAt the same time, we must be aware of some less understood risks. When monitoring clinical signs and symptoms remotely, not all patients have access to medical-grade, home-based self-monitoring devices (i.e. BP monitor, blood glucose monitor, weighing scale), and may not necessarily be familiar with proper operating procedures [20]. We cannot risk leaving many of the most vulnerable patients excluded from preventive care or further alienated by the digital divide. Future policies promoting the use of virtual models for routine and preventive care must consider these disparities in the engagement with digital care, which are often driven by ethnicity, age, and socioeconomic status. Left unaddressed, these disparities can lead to the further widening of health inequities [21]. 
Health policies must therefore incorporate more inclusive implementation strategies by comprising the perspectives of both healthcare providers and patients alike, and strengthen telehealth training to accommodate for language and cultural barriers, varying levels of digital literacy, and disability [21].\nProactive health promotion and preventive care are an integral part of primary care. During a pandemic, these services still need to be delivered. The US Centres for Disease Control and Prevention (CDC) now supports the use of remote care in a range of routine and preventive care services, including management of chronic health conditions, monitoring of clinical signs (i.e. blood pressure, blood glucose, other remote assessments), patient coaching and support (i.e. weight management and nutrition counselling), and medication management [16]. In Australia, Canada, and the United Kingdom, new guidance has been provided advising GPs to shift patients whenever possible, over to online or telephone-based consultations [12,17,18]. In Germany, the recently passed Digital Healthcare Act in conjunction with the ongoing COVID-19 pandemic, is now used as a catalyst to democratise the routine use of video-based consultations previously requiring an in-person visit first with the clinician [19]. Former insurance reimbursement restrictions capping the proportion of cases seen by clinicians via telemedicine, as well as what types of consultations services could be provided, have also been lifted [19].\nIn the context of providing routine and preventative care, ‘digital-first’ approaches can be valuable tools to maintain continuity, improve timeliness, and mitigate the negative consequences of delays on provision of care [16]. Additionally, these approaches may also be beneficial in preserving the patient-physician relationship at a time when face-to-face visits are not safe, and facilitate the engagement of those who are shielding, have limited mobility or other physical limitations to access care [16].\nAt the same time, we must be aware of some less understood risks. When monitoring clinical signs and symptoms remotely, not all patients have access to medical-grade, home-based self-monitoring devices (i.e. BP monitor, blood glucose monitor, weighing scale), and may not necessarily be familiar with proper operating procedures [20]. We cannot risk leaving many of the most vulnerable patients excluded from preventive care or further alienated by the digital divide. Future policies promoting the use of virtual models for routine and preventive care must consider these disparities in the engagement with digital care, which are often driven by ethnicity, age, and socioeconomic status. Left unaddressed, these disparities can lead to the further widening of health inequities [21]. Health policies must therefore incorporate more inclusive implementation strategies by comprising the perspectives of both healthcare providers and patients alike, and strengthen telehealth training to accommodate for language and cultural barriers, varying levels of digital literacy, and disability [21].\nDelivery of acute care Additionally, virtual approaches can also be used to provide low-risk acute care for non-COVID-19 conditions in the community, identify those persons who may need additional medical consultation or assessment, and refer them as appropriate [16]. 
However, such substantial change to the traditional means of conducting a clinical consultation inevitably brings with it challenges, including the impracticality of conducting certain acute consultations requiring specialised equipment or physical examination and utilising a simple, reliable means of audio-visual communication with a diverse range of patients. In these circumstances, virtual care can have a detrimental effect on diagnostic uncertainty, a particularly relevant feature in primary care due to the breadth and complexity of diagnoses possible, and its implications on patient safety [22].\nTo mitigate the associated risks, the US CDC developed the ‘Framework for Healthcare Systems Providing Non-COVID19 Clinical Care’, to assist healthcare providers in determining when in-person acute care is appropriate [23]. Similarly, UK, Canadian, and Australian national accreditation bodies also included comprehensive guidelines defining what tasks are appropriate to be performed via virtual models, what equipment is required, and most importantly, how GPs should go about conducting consultations remotely [12,20,24]. When incorporated in telemedicine policies guidance and materials, the availability of such clinical decision-support tools is key to continue providing necessary services while minimising the risk of harm to patients and providers.\nAdditionally, virtual approaches can also be used to provide low-risk acute care for non-COVID-19 conditions in the community, identify those persons who may need additional medical consultation or assessment, and refer them as appropriate [16]. However, such substantial change to the traditional means of conducting a clinical consultation inevitably brings with it challenges, including the impracticality of conducting certain acute consultations requiring specialised equipment or physical examination and utilising a simple, reliable means of audio-visual communication with a diverse range of patients. In these circumstances, virtual care can have a detrimental effect on diagnostic uncertainty, a particularly relevant feature in primary care due to the breadth and complexity of diagnoses possible, and its implications on patient safety [22].\nTo mitigate the associated risks, the US CDC developed the ‘Framework for Healthcare Systems Providing Non-COVID19 Clinical Care’, to assist healthcare providers in determining when in-person acute care is appropriate [23]. Similarly, UK, Canadian, and Australian national accreditation bodies also included comprehensive guidelines defining what tasks are appropriate to be performed via virtual models, what equipment is required, and most importantly, how GPs should go about conducting consultations remotely [12,20,24]. When incorporated in telemedicine policies guidance and materials, the availability of such clinical decision-support tools is key to continue providing necessary services while minimising the risk of harm to patients and providers.\nCOVID-19 remote triaging During the COVID-19 outbreak, virtual models have also been used in a triaging capacity in primary care to identify new cases in the community, which patients may require further testing and potentially helping to curtail spread in the wider population [25,26]. The national guidance documentation of Australia, Canada, and the United Kingdom, detailed how to remotely assess patients who present with symptoms suggestive of COVID-19, what public health resources are available, and the appropriate next course of action to take [12,20,24]. 
In Germany, this was further accompanied by the rollout of patient self-assessment tools to optimise the use of existing telemedicine capacity and enable the ability to detect potential COVID-19 cases early on [27].\nIn the United States, automated bots have been incorporated alongside existing telemedicine services to allow the triaging of a rapidly expanding number of patients [26]. Patients with symptoms suggestive of COVID-19 infection and deemed higher risk, were automatically referred for further triaging in hospitals. Those with lower risk presentations were scheduled for consultations via telemedicine remotely, thus streamlining the workload for GPs, minimising the need for patients to present to hospital, and reducing overall risks of exposure [26]. As digital triaging permeates to other clinical settings and countries in the future, further evaluation of its impact on quality of care, and particularly on patient safety and efficiency of care delivery, is needed.\nHowever, it is important to note that there is no standardised, validated tool to perform remote assessment of patients with COVID-19, nor to stratify their risk of clinical deterioration. The NEWS2 score, an early warning score recommended by the UK’s National Institute for Health and Care Excellence (NICE) in its guidelines for managing COVID-19 patients in critical care, has not been validated in primary care, nor for triage purposes [20]. Further research should pave the way to development of more bespoke prognostics scores to be used in the remote assessment of COVID-19 patients in primary care, capitalising on the growing body of data and novel data analytics methods available [28].\nDuring the COVID-19 outbreak, virtual models have also been used in a triaging capacity in primary care to identify new cases in the community, which patients may require further testing and potentially helping to curtail spread in the wider population [25,26]. The national guidance documentation of Australia, Canada, and the United Kingdom, detailed how to remotely assess patients who present with symptoms suggestive of COVID-19, what public health resources are available, and the appropriate next course of action to take [12,20,24]. In Germany, this was further accompanied by the rollout of patient self-assessment tools to optimise the use of existing telemedicine capacity and enable the ability to detect potential COVID-19 cases early on [27].\nIn the United States, automated bots have been incorporated alongside existing telemedicine services to allow the triaging of a rapidly expanding number of patients [26]. Patients with symptoms suggestive of COVID-19 infection and deemed higher risk, were automatically referred for further triaging in hospitals. Those with lower risk presentations were scheduled for consultations via telemedicine remotely, thus streamlining the workload for GPs, minimising the need for patients to present to hospital, and reducing overall risks of exposure [26]. As digital triaging permeates to other clinical settings and countries in the future, further evaluation of its impact on quality of care, and particularly on patient safety and efficiency of care delivery, is needed.\nHowever, it is important to note that there is no standardised, validated tool to perform remote assessment of patients with COVID-19, nor to stratify their risk of clinical deterioration. 
Technology suppliers and security standards During the first wave of the COVID-19 outbreak, many national policies loosened security standards and relaxed telemedicine restrictions, with several high-income countries permitting the use of common, off-the-shelf communications software to ensure a quick transition for both GPs and their patients. In the US, the Department of Health and Human Services (HHS) has waived penalties regarding the use of video consultation software not previously approved as meeting Health Insurance Portability and Accountability Act (HIPAA) requirements, allowing widely accessible consumer-grade services such as FaceTime or Skype to be used for telemedicine purposes, even if the care service is not related to COVID-19 [29]. In Europe, the General Data Protection Regulation already includes a clause exempting work carried out in the overwhelming public interest. In the UK, NHSx encourages videoconferencing tools such as Skype, WhatsApp, FaceTime, or other commercial products designed specifically for healthcare purposes [30].\nAlthough these policies have acted as strong drivers of expanded virtual care availability during the pandemic, the relaxation of privacy standards raises concerns, including the possibility that patients’ data shared over a non-compliant platform may be inappropriately accessed, shared, or monetised. To address these risks and maintain the momentum needed to build safer solutions, national guidance from several countries, such as the Netherlands, the US, and the UK, includes a well-defined list of pre-approved vendors [31–33]. In the absence of such a list, Canadian and Australian national accreditation body guidelines provided additional support for GPs, specifying how to set up their workplace securely and how to remotely take patients’ informed consent to safeguard privacy [12,34].
Funding and reimbursement The first wave of COVID-19 demonstrated the need to streamline and reinforce existing funding avenues to meet the demand for virtual care delivery during a pandemic. In this context, governments and health policymakers worldwide have unveiled guidance on changes to existing billing procedures and the availability of additional funding, with important implications for how primary care is delivered and funded. For example, in Australia, novel financial initiatives, such as the doubling of ‘bulk billing’ incentives, have been put in place [12]. Guidance on which patients, procedures, items, and services were eligible for the new billing arrangements was extensively detailed in the new telemedicine guidelines [12,35]. In Canada, several provincial health authorities have introduced new telemedicine-specific billing codes in the hope of simplifying the overall transition [34,36].\nAdditionally, the COVID-19 outbreak raised awareness of the importance of strategically investing in ‘digital-first’ programmes. These programmes represent not only an emergency response to the crisis but also a potential long-term strategic vision for addressing a multitude of existing needs in primary care delivery, particularly reaching patients with poor access to care. In the US, the newly passed ‘Coronavirus Aid, Relief, and Economic Security (CARES) Act’ awards a total of $8.7 million a year for telehealth technologies used in rural and medically underserved areas [37]. Recent changes to Medicare, as well as initiatives by some private insurers, have permitted consultations completed via telemedicine to be paid at the same rate as those conducted in person [38]. However, it remains to be seen whether these changes will persist after the eventual passing of COVID-19 and translate into a more fundamental transformation of how virtual care is funded [19].
", "With the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were hastened to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to mainly virtual consultations was deemed appropriate for most patients who had visited their GPs at least once in the past 12 months or who specifically requested that care be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. Virtual models of care have also been increasingly relied upon to coordinate and support public health responses to the pandemic itself, all the while minimising risks of exposure for patients, healthcare providers, and the public [3,13–15].\nAs part of this transformation, health policies worldwide have undergone drastic changes to adapt to and facilitate the upscaling of virtual models. While these changes have lowered many of the existing barriers to digital healthcare, it is important to reflect upon their impact not just as an immediate solution to a public health emergency but also as the first of many potential regulatory adjustments necessary to enable the continued, safe use of high-quality virtual models in primary care in the future.", "Proactive health promotion and preventive care are an integral part of primary care. During a pandemic, these services still need to be delivered.
The US Centres for Disease Control and Prevention (CDC) now supports the use of remote care in a range of routine and preventive care services, including management of chronic health conditions, monitoring of clinical signs (e.g. blood pressure, blood glucose, other remote assessments), patient coaching and support (e.g. weight management and nutrition counselling), and medication management [16]. In Australia, Canada, and the United Kingdom, new guidance has been provided advising GPs to shift patients, whenever possible, over to online or telephone-based consultations [12,17,18]. In Germany, the recently passed Digital Healthcare Act, in conjunction with the ongoing COVID-19 pandemic, is now being used as a catalyst to democratise the routine use of video-based consultations, which previously required an initial in-person visit with the clinician [19]. Former insurance reimbursement restrictions capping the proportion of cases seen by clinicians via telemedicine, as well as the types of consultation services that could be provided, have also been lifted [19].\nIn the context of providing routine and preventative care, ‘digital-first’ approaches can be valuable tools to maintain continuity, improve timeliness, and mitigate the negative consequences of delays in the provision of care [16]. Additionally, these approaches may also be beneficial in preserving the patient-physician relationship at a time when face-to-face visits are not safe, and in facilitating the engagement of those who are shielding, have limited mobility, or face other physical limitations in accessing care [16].\nAt the same time, we must be aware of some less understood risks. When monitoring clinical signs and symptoms remotely, not all patients have access to medical-grade, home-based self-monitoring devices (e.g. a BP monitor, blood glucose monitor, or weighing scale), and they may not necessarily be familiar with proper operating procedures [20]. We cannot risk leaving many of the most vulnerable patients excluded from preventive care or further alienated by the digital divide. Future policies promoting the use of virtual models for routine and preventive care must consider these disparities in engagement with digital care, which are often driven by ethnicity, age, and socioeconomic status. Left unaddressed, these disparities can lead to the further widening of health inequities [21]. Health policies must therefore adopt more inclusive implementation strategies that incorporate the perspectives of healthcare providers and patients alike, and strengthen telehealth training to accommodate language and cultural barriers, varying levels of digital literacy, and disability [21].", "Additionally, virtual approaches can also be used to provide low-risk acute care for non-COVID-19 conditions in the community, identify those persons who may need additional medical consultation or assessment, and refer them as appropriate [16]. However, such a substantial change to the traditional means of conducting a clinical consultation inevitably brings challenges, including the impracticality of conducting certain acute consultations that require specialised equipment or physical examination, and the difficulty of establishing a simple, reliable means of audio-visual communication with a diverse range of patients.
In these circumstances, virtual care can increase diagnostic uncertainty, a particularly relevant concern in primary care given the breadth and complexity of possible diagnoses and the implications for patient safety [22].\nTo mitigate the associated risks, the US CDC developed the ‘Framework for Healthcare Systems Providing Non-COVID19 Clinical Care’ to assist healthcare providers in determining when in-person acute care is appropriate [23]. Similarly, UK, Canadian, and Australian national accreditation bodies also issued comprehensive guidelines defining which tasks are appropriate to perform via virtual models, what equipment is required, and, most importantly, how GPs should conduct consultations remotely [12,20,24]. When incorporated into telemedicine policy guidance and materials, such clinical decision-support tools are key to continuing to provide necessary services while minimising the risk of harm to patients and providers.", "During the COVID-19 outbreak, virtual models have also been used in a triaging capacity in primary care to identify new cases in the community, determine which patients may require further testing, and potentially help curtail spread in the wider population [25,26]. The national guidance documentation of Australia, Canada, and the United Kingdom detailed how to remotely assess patients who present with symptoms suggestive of COVID-19, what public health resources are available, and the appropriate next course of action to take [12,20,24]. In Germany, this was further accompanied by the rollout of patient self-assessment tools to optimise the use of existing telemedicine capacity and enable early detection of potential COVID-19 cases [27].\nIn the United States, automated bots have been incorporated alongside existing telemedicine services to allow the triaging of a rapidly expanding number of patients [26]. Patients with symptoms suggestive of COVID-19 infection who were deemed higher risk were automatically referred for further triaging in hospitals. Those with lower-risk presentations were scheduled for remote consultations via telemedicine, thus streamlining the workload for GPs, minimising the need for patients to present to hospital, and reducing overall risks of exposure [26]. As digital triaging permeates other clinical settings and countries in the future, further evaluation of its impact on quality of care, and particularly on patient safety and efficiency of care delivery, is needed.\nHowever, it is important to note that there is no standardised, validated tool to perform remote assessment of patients with COVID-19, nor to stratify their risk of clinical deterioration. The NEWS2 score, an early warning score recommended by the UK’s National Institute for Health and Care Excellence (NICE) in its guidelines for managing COVID-19 patients in critical care, has been validated neither in primary care nor for triage purposes [20]. Further research should pave the way for the development of more bespoke prognostic scores for the remote assessment of COVID-19 patients in primary care, capitalising on the growing body of data and novel data analytics methods available [28].", "During the first wave of the COVID-19 outbreak, many national policies loosened security standards and relaxed telemedicine restrictions, with several high-income countries permitting the use of common, off-the-shelf communications software to ensure a quick transition for both GPs and their patients.
In the US, the Department of Health and Human Services (HHS) has waived penalties regarding the use of video consultation software not previously approved as meeting Health Insurance Portability and Accountability Act (HIPAA) requirements, allowing widely accessible consumer-grade services such as FaceTime or Skype to be used for telemedicine purposes, even if the care service is not related to COVID-19 [29]. In Europe, the General Data Protection Regulation already includes a clause exempting work carried out in the overwhelming public interest. In the UK, NHSx encourages videoconferencing tools such as Skype, WhatsApp, FaceTime, or other commercial products designed specifically for healthcare purposes [30].\nAlthough these policies have acted as strong drivers of expanded virtual care availability during the pandemic, the relaxation of privacy standards raises concerns, including the possibility that patients’ data shared over a non-compliant platform may be inappropriately accessed, shared, or monetised. To address these risks and maintain the momentum needed to build safer solutions, national guidance from several countries, such as the Netherlands, the US, and the UK, includes a well-defined list of pre-approved vendors [31–33]. In the absence of such a list, Canadian and Australian national accreditation body guidelines provided additional support for GPs, specifying how to set up their workplace securely and how to remotely take patients’ informed consent to safeguard privacy [12,34].", "The first wave of COVID-19 demonstrated the need to streamline and reinforce existing funding avenues to meet the demand for virtual care delivery during a pandemic. In this context, governments and health policymakers worldwide have unveiled guidance on changes to existing billing procedures and the availability of additional funding, with important implications for how primary care is delivered and funded. For example, in Australia, novel financial initiatives, such as the doubling of ‘bulk billing’ incentives, have been put in place [12]. Guidance on which patients, procedures, items, and services were eligible for the new billing arrangements was extensively detailed in the new telemedicine guidelines [12,35]. In Canada, several provincial health authorities have introduced new telemedicine-specific billing codes in the hope of simplifying the overall transition [34,36].\nAdditionally, the COVID-19 outbreak raised awareness of the importance of strategically investing in ‘digital-first’ programmes. These programmes represent not only an emergency response to the crisis but also a potential long-term strategic vision for addressing a multitude of existing needs in primary care delivery, particularly reaching patients with poor access to care. In the US, the newly passed ‘Coronavirus Aid, Relief, and Economic Security (CARES) Act’ awards a total of $8.7 million a year for telehealth technologies used in rural and medically underserved areas [37]. Recent changes to Medicare, as well as initiatives by some private insurers, have permitted consultations completed via telemedicine to be paid at the same rate as those conducted in person [38].
However, it remains to be seen whether these changes will persist after the eventual passing of COVID-19 and translate into a more fundamental transformation of how virtual care is funded [19].", "Implications for the long-term future The COVID-19 pandemic has presented a unique opportunity to challenge the long-established relationship between virtual approaches and their role in community-based care. The emergent circumstances have undoubtedly allowed many of the barriers to change to be overcome. However, this abrupt scaling up of telemedicine has also exposed many weaknesses in existing virtual solutions. It has highlighted the need for health policies that better guide the allocation of limited resources, greater investment to modernise infrastructure, expanded technical training, prioritisation of systems interoperability [39], and more support for patients with lower health and digital literacy levels [40].\nWhile the pandemic represents an opportunity to rethink how virtual care can be better integrated into primary care, it is imperative to address these challenges through clear, deliberate policies to ensure that this devastating pandemic leaves a positive legacy. Policymakers and researchers need to establish strategic partnerships and undertake a rigorous evaluation of what worked, for which patients, and in what clinical context, to draw lessons for the long term and define evidence-based policies.\nFinally, primary healthcare policies need to be judiciously designed rather than wishfully conceived – an often-underestimated aspect of current processes [41]. Policies must be able to bridge the gap between theory and practice, both by proposing realistic models and by providing the necessary support to translate primary care objectives into reality [41,42]. The inherent complexity of modern primary health systems and their governance makes it unlikely that policies can be designed perfectly a priori, without continuous, iterative improvement. Therefore, a greater emphasis on sound policy design, holistically incorporating healthcare staff and patients’ feedback on design and maintenance, is key to ensuring the planned actions allow for innovative yet pragmatic means of achieving policy goals.
\nLimitations It must be acknowledged that this background paper consisted of a rapid review of the literature available from the first months of the pandemic, intended to provide a first overview of the subject. Given the rapidly changing nature of the pandemic, many of the guidance documents examined were temporary measures, hastily introduced and likely subject to further refinement and subsequent change. It is likely that new guidance has been published since, and a follow-up re-examination would be required to evaluate more comprehensively the breadth of new information available on this evolving subject.\nIt is also important to note that rapid reviews are a preferred option when health decision-makers need timely access to information for background purposes, as per the aim of this background paper. However, they may produce less reliable evidence and may lead to suboptimal decision-making.
As future steps, we suggest that further research evaluate the evidence available on this subject systematically, adhering to the principles and high level of rigour of a systematic review, including systematic searches and screening, data abstraction, and risk of bias appraisal conducted independently by two individuals [43].", "The COVID-19 pandemic has presented a unique opportunity to challenge the long-established relationship between virtual approaches and their role in community-based care. The emergent circumstances have undoubtedly allowed many of the barriers to change to be overcome. However, this abrupt scaling up of telemedicine has also exposed many weaknesses in existing virtual solutions. It has highlighted the need for health policies that better guide the allocation of limited resources, greater investment to modernise infrastructure, expanded technical training, prioritisation of systems interoperability [39], and more support for patients with lower health and digital literacy levels [40].\nWhile the pandemic represents an opportunity to rethink how virtual care can be better integrated into primary care, it is imperative to address these challenges through clear, deliberate policies to ensure that this devastating pandemic leaves a positive legacy. Policymakers and researchers need to establish strategic partnerships and undertake a rigorous evaluation of what worked, for which patients, and in what clinical context, to draw lessons for the long term and define evidence-based policies.\nFinally, primary healthcare policies need to be judiciously designed rather than wishfully conceived – an often-underestimated aspect of current processes [41]. Policies must be able to bridge the gap between theory and practice, both by proposing realistic models and by providing the necessary support to translate primary care objectives into reality [41,42]. The inherent complexity of modern primary health systems and their governance makes it unlikely that policies can be designed perfectly a priori, without continuous, iterative improvement. Therefore, a greater emphasis on sound policy design, holistically incorporating healthcare staff and patients’ feedback on design and maintenance, is key to ensuring the planned actions allow for innovative yet pragmatic means of achieving policy goals.", "It must be acknowledged that this background paper consisted of a rapid review of the literature available from the first months of the pandemic, intended to provide a first overview of the subject. Given the rapidly changing nature of the pandemic, many of the guidance documents examined were temporary measures, hastily introduced and likely subject to further refinement and subsequent change. It is likely that new guidance has been published since, and a follow-up re-examination would be required to evaluate more comprehensively the breadth of new information available on this evolving subject.\nIt is also important to note that rapid reviews are a preferred option when health decision-makers need timely access to information for background purposes, as per the aim of this background paper. However, they may produce less reliable evidence and may lead to suboptimal decision-making.
As future steps, we suggest that further research evaluate the evidence available on this subject systematically, adhering to the principles and high level of rigour of a systematic review, including systematic searches and screening, data abstraction, and risk of bias appraisal conducted independently by two individuals [43].", "The COVID-19 outbreak has been a litmus test for the robustness of virtual care models’ implementation in primary care. It has revealed many of their shortcomings, yet it has simultaneously allowed novel ideas tackling some of the deep-seated problems to be explored. It is now time to incorporate the practical lessons learned and reshape current policies, rules, and regulations to support safer, more efficient, and more equitable use of virtual care solutions." ]
[ "intro", "methods", "results", null, null, null, null, null, null, "discussion", null, null, "conclusions" ]
[ "Primary care", "health policy", "COVID-19", "telemedicine", "health informatics", "clinical informatics" ]
Introduction: Digital technology has transformed many aspects of modern life; healthcare is no exception. Over the last decade, primary care systems have slowly started to adopt virtual modes of delivery, in which digital tools (e.g. telephone, online video) serve as a first point of contact for patients, directing them to the appropriate digital or face-to-face services based on their needs [1,2]. This approach can provide access to a range of primary care services, such as booking and cancelling appointments, having remote consultations, receiving referrals, and obtaining prescriptions [1,3]. As part of a streamlined, integrated experience, ‘virtual approaches’ have the potential of improving efficiency, patient safety, and access to care [4]. Alongside this gradual transformation came a shift in the accompanying terminology used to describe the underlying technology. The term ‘telemedicine’, once defined strictly as the remote treatment of certain conditions, has now given way somewhat to broader terms such as ‘virtual care’ – a testament to the rapidly expanding functionality and application of these digital tools in facilitating the delivery of more holistic care remotely [5]. For this reason, the latter term was chosen for this rapid review. In response, over the last several years, health policies have been preparing to incorporate virtual care as an essential part of healthcare delivery. For example, NHS England declared in 2019 that all patients should have the right to video consultations by 2021 and that all primary care practices should ensure at least 25% of their appointments are available for online booking [6]. However, despite similar statements observed worldwide, adoption remained slow, in part due to a general hesitance towards novel technologies, privacy concerns, limited stakeholder enthusiasm, and inadequate investment [7,8]. By examining publicly available national guidance documents for GPs, this background paper aims to analyse the differing policy-based approaches taken to meet this challenge, their potential technical shortcomings, and, ultimately, their effects on revolutionising the delivery of primary care. Methods: In this background paper, we adopted the principles of a rapid review to identify key areas of relevance on this topic. Rapid reviews are a form of knowledge synthesis in which components of the systematic review process are simplified to produce information in a timely manner [9]. In light of the rapidly evolving pandemic, policymakers require evidence synthesis to produce robust guidance for primary care providers. The World Health Organisation (WHO) recommends rapid reviews to provide such evidence [9]. To identify relevant documents, we searched the websites of relevant national departments and health authorities (ministries of health, primary care organisations, and regulatory bodies). Regarding the inclusion criteria, documents were included if they were issued in the first six months of the pandemic (March to August 2020) and focussed on the use of remote care tools in high-income countries. Documents had to have been issued by a national health authority, accreditation body, or professional organisation, and to refer directly to the delivery of primary care. The documents were subsequently evaluated by two independent researchers, with the findings included in this rapid review derived from the consensus they reached through regular discussions.
Results: Our rapid literature review identified 16 documents from nine countries (Australia, Bosnia, Canada, Germany, Italy, Netherlands, Spain, United Kingdom, and the United States). From those, we extracted six areas of relevance: primary care transformation during COVID-19, the continued delivery of preventative care, the delivery of acute care, remote triaging, funding & reimbursement, and a roadmap for the future of remote care models. COVID-19 and the primary care transformation With the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were hastened to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to mainly virtual consultations was deemed appropriate for most patients who had visited their GPs at least once in the past 12 months or who specifically requested that care be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. Virtual models of care have also been increasingly relied upon to coordinate and support public health responses to the pandemic itself, all the while minimising risks of exposure for patients, healthcare providers, and the public [3,13–15]. As part of this transformation, health policies worldwide have undergone drastic changes to adapt to and facilitate the upscaling of virtual models. While these changes have lowered many of the existing barriers to digital healthcare, it is important to reflect upon their impact not just as an immediate solution to a public health emergency but also as the first of many potential regulatory adjustments necessary to enable the continued, safe use of high-quality virtual models in primary care in the future.
Delivery of routine and preventative care Proactive health promotion and preventive care are an integral part of primary care. During a pandemic, these services still need to be delivered. The US Centres for Disease Control and Prevention (CDC) now supports the use of remote care in a range of routine and preventive care services, including management of chronic health conditions, monitoring of clinical signs (e.g. blood pressure, blood glucose, other remote assessments), patient coaching and support (e.g. weight management and nutrition counselling), and medication management [16]. In Australia, Canada, and the United Kingdom, new guidance has been provided advising GPs to shift patients, whenever possible, over to online or telephone-based consultations [12,17,18]. In Germany, the recently passed Digital Healthcare Act, in conjunction with the ongoing COVID-19 pandemic, is now being used as a catalyst to democratise the routine use of video-based consultations, which previously required an initial in-person visit with the clinician [19]. Former insurance reimbursement restrictions capping the proportion of cases seen by clinicians via telemedicine, as well as the types of consultation services that could be provided, have also been lifted [19]. In the context of providing routine and preventative care, ‘digital-first’ approaches can be valuable tools to maintain continuity, improve timeliness, and mitigate the negative consequences of delays in the provision of care [16]. Additionally, these approaches may also be beneficial in preserving the patient-physician relationship at a time when face-to-face visits are not safe, and in facilitating the engagement of those who are shielding, have limited mobility, or face other physical limitations in accessing care [16]. At the same time, we must be aware of some less understood risks. When monitoring clinical signs and symptoms remotely, not all patients have access to medical-grade, home-based self-monitoring devices (e.g. a BP monitor, blood glucose monitor, or weighing scale), and they may not necessarily be familiar with proper operating procedures [20]. We cannot risk leaving many of the most vulnerable patients excluded from preventive care or further alienated by the digital divide. Future policies promoting the use of virtual models for routine and preventive care must consider these disparities in engagement with digital care, which are often driven by ethnicity, age, and socioeconomic status. Left unaddressed, these disparities can lead to the further widening of health inequities [21].
Health policies must therefore adopt more inclusive implementation strategies that incorporate the perspectives of healthcare providers and patients alike, and strengthen telehealth training to accommodate language and cultural barriers, varying levels of digital literacy, and disability [21]. Delivery of acute care Additionally, virtual approaches can also be used to provide low-risk acute care for non-COVID-19 conditions in the community, identify those persons who may need additional medical consultation or assessment, and refer them as appropriate [16].
However, such a substantial change to the traditional means of conducting a clinical consultation inevitably brings challenges, including the impracticality of conducting certain acute consultations that require specialised equipment or physical examination, and the difficulty of establishing a simple, reliable means of audio-visual communication with a diverse range of patients. In these circumstances, virtual care can increase diagnostic uncertainty, a particularly relevant concern in primary care given the breadth and complexity of possible diagnoses and the implications for patient safety [22]. To mitigate the associated risks, the US CDC developed the ‘Framework for Healthcare Systems Providing Non-COVID19 Clinical Care’ to assist healthcare providers in determining when in-person acute care is appropriate [23]. Similarly, UK, Canadian, and Australian national accreditation bodies also issued comprehensive guidelines defining which tasks are appropriate to perform via virtual models, what equipment is required, and, most importantly, how GPs should conduct consultations remotely [12,20,24]. When incorporated into telemedicine policy guidance and materials, such clinical decision-support tools are key to continuing to provide necessary services while minimising the risk of harm to patients and providers. COVID-19 remote triaging During the COVID-19 outbreak, virtual models have also been used in a triaging capacity in primary care to identify new cases in the community, determine which patients may require further testing, and potentially help curtail spread in the wider population [25,26]. The national guidance documentation of Australia, Canada, and the United Kingdom detailed how to remotely assess patients who present with symptoms suggestive of COVID-19, what public health resources are available, and the appropriate next course of action to take [12,20,24].
In Germany, this was further accompanied by the rollout of patient self-assessment tools to optimise the use of existing telemedicine capacity and enable early detection of potential COVID-19 cases [27]. In the United States, automated bots have been incorporated alongside existing telemedicine services to allow the triaging of a rapidly expanding number of patients [26]. Patients with symptoms suggestive of COVID-19 infection who were deemed higher risk were automatically referred for further triaging in hospitals. Those with lower-risk presentations were scheduled for remote consultations via telemedicine, thus streamlining the workload for GPs, minimising the need for patients to present to hospital, and reducing overall risks of exposure [26]. As digital triaging permeates other clinical settings and countries in the future, further evaluation of its impact on quality of care, and particularly on patient safety and efficiency of care delivery, is needed. However, it is important to note that there is no standardised, validated tool to perform remote assessment of patients with COVID-19, nor to stratify their risk of clinical deterioration. The NEWS2 score, an early warning score recommended by the UK’s National Institute for Health and Care Excellence (NICE) in its guidelines for managing COVID-19 patients in critical care, has been validated neither in primary care nor for triage purposes [20]. Further research should pave the way for the development of more bespoke prognostic scores for the remote assessment of COVID-19 patients in primary care, capitalising on the growing body of data and novel data analytics methods available [28].
Technology suppliers and security standards During the first wave of the COVID-19 outbreak, many national policies loosened security standards and relaxed telemedicine restrictions, with several high-income countries permitting the use of common, off-the-shelf communications software to ensure a quick transition for both GPs and their patients. In the US, the Department of Health and Human Services (HHS) has waived penalties regarding the use of video consultation software not previously approved as meeting Health Insurance Portability and Accountability Act (HIPAA) requirements, allowing widely accessible consumer-grade services such as FaceTime or Skype to be used for telemedicine purposes, even if the care service is not related to COVID-19 [29]. In Europe, the General Data Protection Regulation already includes a clause exempting work carried out in the overwhelming public interest. In the UK, NHSx encourages videoconferencing tools such as Skype, WhatsApp, FaceTime, or other commercial products designed specifically for healthcare purposes [30]. Although these policies have acted as strong drivers of expanded virtual care availability during the pandemic, the relaxation of privacy standards raises concerns, including the possibility that patients’ data shared over a non-compliant platform may be inappropriately accessed, shared, or monetised. To address these risks and maintain the momentum needed to build safer solutions, national guidance from several countries, such as the Netherlands, the US, and the UK, includes a well-defined list of pre-approved vendors [31–33]. In the absence of such a list, Canadian and Australian national accreditation body guidelines provided additional support for GPs, specifying how to set up their workplace securely and how to remotely take patients’ informed consent to safeguard privacy [12,34].
Funding and reimbursement The first wave of COVID-19 demonstrated the need to streamline and reinforce existing funding avenues to meet the demand for virtual care delivery during a pandemic. In this context, governments and health policymakers worldwide have unveiled guidance on changes to existing billing procedures and the availability of additional funding, with important implications for how primary care is delivered and funded. For example, in Australia, novel financial initiatives, such as the doubling of ‘bulk billing’ incentives, have been put in place [12]. Guidance on which patients, procedures, items, and services were eligible for the new billing arrangements was extensively detailed in the new telemedicine guidelines [12,35]. In Canada, several provincial health authorities have introduced new telemedicine-specific billing codes in the hope of simplifying the overall transition [34,36]. Additionally, the COVID-19 outbreak raised awareness of the importance of strategically investing in ‘digital-first’ programmes. These programmes represent not only an emergency response to the crisis but also a potential long-term strategic vision for addressing a multitude of existing needs in primary care delivery, particularly reaching patients with poor access to care. In the US, the newly passed ‘Coronavirus Aid, Relief, and Economic Security (CARES) Act’ awards a total of $8.7 million a year for telehealth technologies used in rural and medically underserved areas [37]. Recent changes to Medicare, as well as initiatives by some private insurers, have permitted consultations completed via telemedicine to be paid at the same rate as those conducted in person [38]. However, it remains to be seen whether these changes will persist after the eventual passing of COVID-19 and translate into a more fundamental transformation of how virtual care is funded [19].
COVID-19 and the primary care transformation: With the COVID-19 pandemic, the digital landscape in primary care was about to experience a dramatic change. On 11 March 2020, the World Health Organisation officially declared COVID-19 a global pandemic [10]. In the United Kingdom, GPs were urged to shift patients over to online and electronic prescription services for the dispensing of medications [11]. In Australia, a rapid transition to using mainly virtual consultations was deemed appropriate for most patients who had visited their GPs at least once in the past 12 months or who specifically requested that care be continued remotely [12]. This new reality, with drastic limitations curtailing physical contact, has resulted in virtual solutions taking centre stage in primary care delivery [13]. In many ways, the COVID-19 outbreak has presented a unique opportunity that has both tested the capacity of pre-existing virtual models of care and simultaneously demonstrated their growing importance as a complementary and alternative means of delivering community-based healthcare. Virtual models of care have also been increasingly relied upon to coordinate and support public health responses to the pandemic itself, all the while minimising risks of exposure for patients, healthcare providers, and the public [3,13–15]. As part of this transformation, health policies worldwide have undergone drastic changes to adapt to and facilitate the upscaling of virtual models. While these changes have lowered many of the existing barriers to digital healthcare, it is important to reflect upon their impact not just as an immediate solution to a public health emergency but also as the first of many potential regulatory adjustments necessary to enable the continued, widespread use of high-quality, safe virtual models in primary care in the future. Delivery of routine and preventative care: Proactive health promotion and preventive care are an integral part of primary care. During a pandemic, these services still need to be delivered. The US Centers for Disease Control and Prevention (CDC) now supports the use of remote care in a range of routine and preventive care services, including management of chronic health conditions, monitoring of clinical signs (e.g. blood pressure, blood glucose, other remote assessments), patient coaching and support (e.g.
weight management and nutrition counselling), and medication management [16]. In Australia, Canada, and the United Kingdom, new guidance advises GPs to shift patients, whenever possible, over to online or telephone-based consultations [12,17,18]. In Germany, the recently passed Digital Healthcare Act, in conjunction with the ongoing COVID-19 pandemic, is now being used as a catalyst to democratise the routine use of video-based consultations, which previously required an in-person visit with the clinician first [19]. Former insurance reimbursement restrictions capping the proportion of cases clinicians could see via telemedicine, as well as the types of consultation services that could be provided, have also been lifted [19]. In the context of providing routine and preventative care, ‘digital-first’ approaches can be valuable tools to maintain continuity, improve timeliness, and mitigate the negative consequences of delays in the provision of care [16]. These approaches may also help preserve the patient-physician relationship at a time when face-to-face visits are not safe, and facilitate the engagement of those who are shielding, have limited mobility, or face other physical barriers to accessing care [16]. At the same time, we must be aware of some less understood risks. When monitoring clinical signs and symptoms remotely, not all patients have access to medical-grade, home-based self-monitoring devices (e.g. BP monitor, blood glucose monitor, weighing scale), and they may not be familiar with proper operating procedures [20]. We cannot risk leaving many of the most vulnerable patients excluded from preventive care or further alienated by the digital divide. Future policies promoting the use of virtual models for routine and preventive care must consider these disparities in engagement with digital care, which are often driven by ethnicity, age, and socioeconomic status. Left unaddressed, these disparities can lead to the further widening of health inequities [21]. Health policies must therefore incorporate more inclusive implementation strategies that comprise the perspectives of both healthcare providers and patients alike, and strengthen telehealth training to accommodate language and cultural barriers, varying levels of digital literacy, and disability [21]. Delivery of acute care: Virtual approaches can also be used to provide low-risk acute care for non-COVID-19 conditions in the community, identify those persons who may need additional medical consultation or assessment, and refer them as appropriate [16]. However, such a substantial change to the traditional means of conducting a clinical consultation inevitably brings with it challenges, including the impracticality of conducting certain acute consultations that require specialised equipment or physical examination, and the difficulty of establishing a simple, reliable means of audio-visual communication with a diverse range of patients. In these circumstances, virtual care can increase diagnostic uncertainty, a particularly relevant issue in primary care given the breadth and complexity of possible diagnoses, with implications for patient safety [22]. To mitigate the associated risks, the US CDC developed the ‘Framework for Healthcare Systems Providing Non-COVID-19 Clinical Care’ to assist healthcare providers in determining when in-person acute care is appropriate [23].
Similarly, UK, Canadian, and Australian national accreditation bodies also included comprehensive guidelines defining which tasks are appropriate to perform via virtual models, what equipment is required, and, most importantly, how GPs should go about conducting consultations remotely [12,20,24]. When incorporated into telemedicine policy guidance and materials, the availability of such clinical decision-support tools is key to continuing to provide necessary services while minimising the risk of harm to patients and providers. COVID-19 remote triaging: During the COVID-19 outbreak, virtual models have also been used in a triaging capacity in primary care to identify new cases in the community, determine which patients may require further testing, and potentially help curtail spread in the wider population [25,26]. The national guidance documentation of Australia, Canada, and the United Kingdom detailed how to remotely assess patients who present with symptoms suggestive of COVID-19, what public health resources are available, and the appropriate next course of action to take [12,20,24]. In Germany, this was further accompanied by the rollout of patient self-assessment tools to optimise the use of existing telemedicine capacity and enable potential COVID-19 cases to be detected early on [27]. In the United States, automated bots have been incorporated alongside existing telemedicine services to allow the triaging of a rapidly expanding number of patients [26]. Patients with symptoms suggestive of COVID-19 infection who were deemed higher risk were automatically referred for further triaging in hospitals. Those with lower-risk presentations were scheduled for remote consultations via telemedicine, thus streamlining the workload for GPs, minimising the need for patients to present to hospital, and reducing overall risks of exposure [26]. As digital triaging permeates other clinical settings and countries in the future, further evaluation of its impact on quality of care, and particularly on patient safety and efficiency of care delivery, is needed. However, it is important to note that there is no standardised, validated tool to perform remote assessment of patients with COVID-19, nor to stratify their risk of clinical deterioration. The NEWS2 score, an early warning score recommended by the UK’s National Institute for Health and Care Excellence (NICE) in its guidelines for managing COVID-19 patients in critical care, has not been validated in primary care, nor for triage purposes [20]. Further research should pave the way to the development of more bespoke prognostic scores for the remote assessment of COVID-19 patients in primary care, capitalising on the growing body of data and the novel data analytics methods available [28].
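To make the mechanics of such a score concrete, the short Python sketch below shows how an aggregate early warning score can be computed from routine vital signs. The banding reproduces the published NEWS2 chart as commonly summarised rather than being drawn from this review's sources, so the thresholds should be verified against the official Royal College of Physicians documentation; this is an illustration, not a clinical tool.

def band(value, bands):
    # Return the points for the first (low, high, points) band containing value.
    for low, high, points in bands:
        if low <= value <= high:
            return points
    raise ValueError("value outside scored range")

def news2(resp_rate, spo2, on_oxygen, systolic_bp, pulse, alert, temp_c):
    # Aggregate seven routine observations into a single NEWS2-style total.
    score = band(resp_rate, [(0, 8, 3), (9, 11, 1), (12, 20, 0), (21, 24, 2), (25, 99, 3)])
    score += band(spo2, [(0, 91, 3), (92, 93, 2), (94, 95, 1), (96, 100, 0)])  # SpO2 scale 1
    score += 2 if on_oxygen else 0
    score += band(systolic_bp, [(0, 90, 3), (91, 100, 2), (101, 110, 1), (111, 219, 0), (220, 400, 3)])
    score += band(pulse, [(0, 40, 3), (41, 50, 1), (51, 90, 0), (91, 110, 1), (111, 130, 2), (131, 300, 3)])
    score += 0 if alert else 3  # new confusion, or response only to voice/pain/unresponsive
    score += band(temp_c, [(0.0, 35.0, 3), (35.1, 36.0, 1), (36.1, 38.0, 0), (38.1, 39.0, 1), (39.1, 45.0, 2)])
    return score

# Example: a mildly tachypnoeic, mildly hypoxic patient breathing room air -> total score 4.
print(news2(resp_rate=22, spo2=94, on_oxygen=False, systolic_bp=115, pulse=95, alert=True, temp_c=37.2))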
Technology suppliers and security standards: During the first wave of the COVID-19 outbreak, many national policies loosened security standards and relaxed telemedicine restrictions, with several high-income countries permitting the use of common, off-the-shelf communications software to ensure a quick transition for both GPs and their patients. In the US, the Department of Health and Human Services (HHS) waived penalties regarding the use of video consultation software not previously approved to meet Health Insurance Portability and Accountability Act (HIPAA) requirements, allowing widely accessible consumer-grade services such as FaceTime or Skype to be used for telemedicine purposes, even if the care service is not related to COVID-19 [29]. In Europe, the General Data Protection Regulation already includes a clause exempting work in the overwhelming public interest. In the UK, NHSX encourages videoconferencing tools such as Skype, WhatsApp, FaceTime, or other commercial products designed specifically for healthcare purposes [30]. Although these policies have acted as strong drivers of expanding virtual care availability during the pandemic, the relaxation of privacy standards raises concerns, including the possibility that patients’ data shared over a non-compliant platform may be inappropriately accessed, shared, or monetised. To address these risks and maintain the momentum needed to build safer solutions, national guidance from several countries, such as the Netherlands, the US, and the UK, includes a well-defined list of pre-approved vendors [31–33]. In the absence of such a list, Canadian and Australian national accreditation body guidelines provided additional support for GPs, specifying how to set up their workplace securely and how to remotely obtain patients’ informed consent to safeguard patient privacy [12,34]. Funding and reimbursement: The first wave of COVID-19 demonstrated the need to streamline and reinforce existing funding avenues to meet the demand for virtual care delivery during a pandemic. In this context, governments and health policymakers worldwide have unveiled guidance regarding changes to existing billing procedures and the availability of additional funding, with important implications for how primary care is delivered and funded. For example, in Australia, novel financial initiatives such as the doubling of ‘bulk billing’ have been put in place [12]. Guidance on which patients, procedures, items, and services were eligible for the new billing arrangements was extensively detailed in the new telemedicine guidelines [12,35]. In Canada, several provincial health authorities have introduced new telemedicine-specific billing codes in the hope of simplifying the overall transition process [34,36]. Additionally, the COVID-19 outbreak raised awareness of the importance of strategically investing in ‘digital-first’ programmes. These programmes represent not only an emergency response to the crisis but also a potential long-term strategic vision for addressing a multitude of existing needs in primary care delivery, particularly reaching out to patients with poor access to care. In the US, the newly passed Coronavirus Aid, Relief, and Economic Security (CARES) Act awards a total of $8.7 million a year for telehealth technologies used in rural and medically underserved areas [37]. Recent changes to Medicare, as well as initiatives by some private insurers, have permitted consultations completed via telemedicine to be paid at the same rate as those conducted in person [38]. However, it remains to be seen whether these changes will persist after the eventual passing of COVID-19 and translate into a more fundamental transformation of how virtual care is funded [19]. Discussion: Implications for the long-term future The COVID-19 pandemic has presented a unique opportunity to challenge the long-established relationship between virtual approaches and their role in community-based care. The emergent circumstances have undoubtedly allowed many of the barriers to change to be overcome. However, this abrupt scaling up of telemedicine has also exposed many weaknesses in existing virtual solutions.
It has highlighted the need for health policies to better guide the allocation of limited resources, greater investment to modernise infrastructure, expanded technical training, prioritisation of systems interoperability [39], and more support for patients with lower health and digital literacy levels [40]. While the pandemic represents an opportunity to rethink how virtual care can be better integrated into primary care, it is imperative to address these challenges through clear, deliberate policies to ensure that this devastating pandemic leaves a positive legacy. Policymakers and researchers need to establish strategic partnerships and undertake a rigorous evaluation of what worked, for which patients, and in what clinical context, to draw lessons for the long term and define evidence-based policies. Finally, primary healthcare policies need to be judiciously designed rather than wishfully conceived – an often-underestimated aspect of current processes [41]. Policies must be able to bridge the gap between theory and practice, both by proposing realistic models and by providing the necessary support to translate primary care objectives into reality [41,42]. The inherent complexity of modern primary health systems and their governance makes it unlikely that policies can be designed perfectly a priori, without necessitating continuous, iterative improvement. Therefore, a greater emphasis on sound policy design, holistically incorporating healthcare staff and patients’ feedback on design and maintenance, is key to ensuring the planned actions allow for innovative yet pragmatic means of achieving policy goals. Limitations: It must be acknowledged that this background paper consisted of a rapid review of the literature available from the first months of the pandemic, intended to provide a first overview of the subject. Given the rapidly changing nature of the pandemic, much of the guidance documentation examined comprised temporary measures hastily introduced and likely subject to further refinement and subsequent change. It is likely that new guidance has been published since, and a follow-up re-examination would be required to evaluate more comprehensively the breadth of new information available on this evolving subject. It is also important to note that rapid reviews are a preferred option when health decision-makers need timely access to information for background purposes, as per the aim of this background paper. However, they may produce less reliable evidence and may lead to suboptimal decision-making. As future steps, we suggest that further research should evaluate the evidence available on this subject systematically, adhering to the principles and high level of rigour of a systematic review, including systematic searches and screening, data abstraction, and risk-of-bias appraisal conducted independently by two individuals [43]. Conclusion: The COVID-19 outbreak has been a litmus test for the robustness of virtual care models’ implementation in primary care. It has both revealed their many shortcomings and simultaneously allowed novel ideas tackling some of the deep-seated problems to be explored. It is now time to incorporate the practical lessons learned and reshape current policies, rules, and regulations to support safer, more efficient, and more equitable use of virtual care solutions.
Background: Telemedicine, once defined merely as the treatment of certain conditions remotely, has now often been supplanted in use by broader terms such as 'virtual care', in recognition of its increasing capability to deliver a diverse range of healthcare services from afar. With the unexpected onset of COVID-19, virtual care (e.g. telephone, video, online) has become essential to facilitating the continuation of primary care globally. Over several short weeks, existing healthcare policies have adapted quickly and empowered clinicians to use digital means to fulfil a wide range of clinical responsibilities, which until then have required face-to-face consultations. Methods: A rapid review of publicly available national policies guiding the use of virtual care in General Practice was conducted. Documents were included if issued in the first six months of the pandemic (March to August of 2020) and focussed primarily on high-income countries. Documents must have been issued by a national health authority, accreditation body, or professional organisation, and directly refer to the delivery of primary care. Results: We extracted six areas of relevance: primary care transformation during COVID-19, the continued delivery of preventative care, the delivery of acute care, remote triaging, funding & reimbursement, and security standards. Conclusions: Virtual care use in primary care saw a transformative change during the pandemic. However, despite the advances in the various governmental guidance offered, much work remains in addressing the shortcomings exposed during COVID-19 and strengthening viable policies to better incorporate novel technologies into the modern primary care clinical environment.
Introduction: Digital technology has transformed many aspects of modern life; healthcare is no exception. Over the last decade, primary care systems have slowly started to adopt virtual modes of delivery, in which digital tools (e.g. telephone, online video) serve as a first point of contact for patients, directing them to the appropriate digital or face-to-face services based on their needs [1,2]. This approach can provide access to a range of primary care services, such as booking and cancelling appointments, having remote consultations, receiving referrals, and obtaining prescriptions [1,3]. As part of a streamlined, integrated experience, ‘virtual approaches’ have the potential to improve efficiency, patient safety, and access to care [4]. Alongside this gradual transformation came a shift in the terminology used to describe the underlying technology. The term ‘telemedicine’, which was once defined strictly as the treatment of certain conditions remotely, has now given way somewhat to broader terms such as ‘virtual care’ – a testament to the rapidly expanding functionality and application of these digital tools in facilitating the delivery of more holistic care remotely [5]. For this reason, the latter term was chosen for this rapid review. In response, over the last several years health policies have been preparing to incorporate virtual care as an essential part of healthcare delivery. For example, NHS England declared in 2019 that all patients should have the right to video consultations by 2021 and that all primary care practices should ensure at least 25% of their appointments are available for online booking [6]. However, despite similar statements observed worldwide, adoption remained slow, in part due to a general hesitance towards novel technologies, privacy concerns, limited stakeholder enthusiasm, and inadequate investment [7,8]. By examining publicly available national guidance documents available to GPs, this background paper aims to examine the differing policy-based approaches taken to meet this challenge, potential technical shortcomings, and, ultimately, their effects on revolutionising the delivery of primary care.
Keywords: Primary care | health policy | COVID-19 | telemedicine | health informatics | clinical informatics
MeSH terms: COVID-19 | Developed Countries | Digital Technology | Health Policy | Humans | Primary Health Care | Telemedicine
The influence of breast density and key demographics of radiographers on mammography reporting performance - a pilot study.
34028205
A high demand has been placed on radiologists to perform screen reads due to the higher number of women undergoing mammography. This study aims to examine radiographer performance in reporting low compared with high mammographic density (MD) images, and to assess the influence of key demographics of Jordanian radiographers on their performance.
INTRODUCTION
Thirty mammograms with varied MD were reported by 12 radiographers using the Breast Imaging-Reporting and Data System (BI-RADS). Radiographer performance was measured using sensitivity, specificity, positive (PPV) and negative predictive values (NPV), and area under the receiver operating characteristic curve (ROC AUC). Performance measures were compared between cases with low- and high-MD and between subgroups of radiographers according to key demographics.
METHODS
All performance measures were significantly higher in low- compared to high-MD cases (P < 0.05). The mean sensitivity, specificity, PPV, NPV and ROC AUC were 0.58, 0.68, 0.67, 0.63 and 0.69, respectively. PPV differed significantly between readers with different years of experience in mammography, hours worked per week, and cases performed per week (P = 0.023, 0.01 and 0.017, respectively). ROC AUC differed significantly between radiographers with different numbers of hours worked and cases performed per week (P = 0.001 and 0.004, respectively).
RESULTS
The results of this pilot study are encouraging; however, a more extensive study is required to determine whether Jordanian radiographers are capable of successfully taking part in breast screen reading. The lack of skills and knowledge required for correct and consistent reporting of high-MD images highlights the need for any formal training in mammographic interpretation to focus on the dense breast.
CONCLUSIONS
[ "Breast Density", "Breast Neoplasms", "Demography", "Female", "Humans", "Mammography", "Pilot Projects" ]
8892415
Introduction
Breast cancer is the most common type of cancer in women worldwide.1 Early detection is key to decreased morbidity and mortality, with dedicated screening programmes available in many countries worldwide including Australia,2 the United States,3 and the United Kingdom.4 Routine mammography is the gold standard imaging method used to detect breast cancer and has been shown to contribute to at least a 30% reduction in the number of deaths from breast cancer in patients aged over 50 years.5 However, 2D mammography has limitations, including false negatives, which have been reported to account for 10% to 30% of missed breast cancers. Using 2D mammography, 80% of women recalled for additional views typically have normal outcomes.6 The radiologists’ ability to correctly interpret mammograms is strongly influenced by key personal characteristics including age, academic qualification, number of years since qualification,7 fellowship training,8 and workload.9 In the screening setting, the large number of women screened and the required speed of reading may also lead to less effective reporting due to fatigue and eye strain.10 Patient-related factors may also affect the radiologists’ ability to interpret mammograms. Among the most important factors is breast density, mainly because women with higher breast density are more susceptible to developing breast cancer than women with less dense breasts.11 Higher breast density also results in less visibility (masking) of breast lesions in 2D mammography due to the low contrast between cancer and dense breast tissue.12 It has been reported that double reading of screening mammograms increases cancer detection and decreases mortality from breast cancer.13 Double reading typically means that the same mammogram is interpreted by two radiologists;14 however, the high workload of radiologists has seen the evolution of the concept of a ‘skill mix’, in which radiographers contribute to image reporting as double readers.15 This concept has been used in the UK to reduce radiologists’ workload by training radiographers to read mammograms in many screening units within the National Health Service Breast Screening Programme (NHSBSP).16 Several studies have assessed the diagnostic performance of radiographers in reading mammograms.1,15,17 In general, the use of radiographers as second readers has been shown to support the increase in the number of detected cancers afforded by double reading.13,18,19 However, no current studies were found that assessed radiographers’ ability to interpret mammograms of differing breast density, nor the key radiographer demographics that may influence their ability to report on mammograms accurately. The aim of the current study is to measure Jordanian radiographers’ performance in interpreting mammograms and to compare performance measures in cases of differing breast density. This study also aims to examine key demographic factors that may influence their performance.
Results
Table 1 reports the socio-demographic and professional characteristics of the study participants. All 12 participating radiographers were female; 7 (58.3%) were aged between 20 and 30, and the same proportion (58.3%) worked in public and teaching hospitals. More than half (58.3%) of the participants had 1 to 5 years of experience in breast imaging. In relation to workload, half of the radiographers worked in mammography imaging ≤20 hours per week and 41.7% performed ≥20 mammography cases per week. No participating radiographer had previous training in reading mammography images. [Table 1. Socio-demographic and professional characteristics of study participants (n = 12).] Table 2 reports the performance measures of each radiographer. The ranges of sensitivity, specificity, ROC AUC, PPV and NPV were 0.33–0.80, 0.33–0.93, 0.57–0.80, 0.50–0.88 and 0.50–0.71, respectively. [Table 2. Performance measures of study participants for all cases. ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value.] Table 3 shows the difference in radiographers’ performance in low compared to high breast density cases. All performance measures were significantly higher in low compared to high breast density mammograms (P values ranging from 0.000 to 0.024). [Table 3. Difference in performance measures between cases of different density; median (IQR) reported for low density (BI-RADS a and b) and high density (BI-RADS c and d). IQR, interquartile range; ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value.] As indicated in Table 4, radiographers who had more years of experience in mammography, who worked longer hours, and who performed more cases per week had significantly higher PPV than radiographers with fewer years of experience, fewer work hours, and fewer cases (P = 0.023, 0.01 and 0.017, respectively). The results also demonstrated that radiographers who worked >20 hours in mammography weekly and who performed ≥20 mammograms per week had significantly higher ROC AUC (P = 0.001 and 0.004, respectively). [Table 4. Difference in performance measures according to demographic and professional characteristics; P values and median (IQR) reported. IQR, interquartile range; ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value; CR, computed radiography; DR, digital radiography. Numbers in bold represent a significant difference; one group had only 2 participants.]
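As a toy illustration of how such measures arise from a reader's recall decisions on a 30-case set of 15 cancers and 15 normals, consider the following hypothetical counts (invented for illustration, not the study's data):

# Hypothetical reader: recalls 9 of 15 cancers and clears 10 of 15 normals.
tp, fn, tn, fp = 9, 6, 10, 5
print("sensitivity", tp / (tp + fn))  # 9/15 = 0.60
print("specificity", tn / (tn + fp))  # 10/15 ~ 0.67
print("PPV", tp / (tp + fp))          # 9/14 ~ 0.64
print("NPV", tn / (tn + fn))          # 10/16 = 0.625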
[ "Introduction", "Cases", "Participants and study design", "Data analysis" ]
[ "Breast cancer is the most common type of cancer in women worldwide.\n1\n Early detection is key to decreased morbidity and mortality with dedicated screening programmes available in many countries worldwide including, Australia,\n2\n the United States,\n3\n and the United Kingdom.\n4\n Routine mammography is the gold standard imaging method used to detect breast cancer and has been shown to contribute to at least a 30% reduction in the number of deaths from breast cancer in patients aged over 50 years.\n5\n However, 2D mammography has limitations including false negatives which have been reported to account for 10% to 30% of missed breast cancers. Using 2D mammography 80% of woman recalled for additional views typically have normal outcomes.\n6\n\n\nThe radiologists’ ability to correctly interpret mammograms is strongly influenced by key personal characteristics including age, academic qualification, number of years since qualification,\n7\n fellowship training,\n8\n and workload.\n9\n In the screening setting, the large number of women screened and the required speed of reading may also lead to less effective reporting due to fatigue and eye strain.\n10\n\n\nPatient related factors may also affect the radiologists’ ability to interpret mammograms. Among the most important factor is breast density, mainly because women with higher breast density are more susceptible to developing breast cancer than women with less dense breasts.\n11\n Higher breast density also results in less visibility (masking) of breast lesions in 2D mammography due to the low contrast between cancer and dense breast tissues.\n12\n\n\nIt has been reported that double reading screening mammograms increases cancer detection and decreases mortality from breast cancer.\n13\n Double reading typically means that the same mammogram is interpreted by two radiologists,\n14\n however, the high workload of radiologists has seen the evolution of the concept of a ‘skill mix’ in which radiographers contribute to image reporting as double readers.\n15\n This concept has been used in the UK to reduce the radiologists’ workload by training radiographers to read mammograms in many screening units within the National Health Service Breast Screening Programme (NHSBSP).\n16\n\n\nSeveral studies have assessed the diagnostic performance of radiographers in reading mammograms.\n1\n, \n15\n, \n17\n In general, the use of radiographers as second readers has been shown to support the increase in the number of detected cancers afforded by double reading.\n13\n, \n18\n, \n19\n However, no current studies have been found that assessed radiographers’ ability to interpret mammograms of differing breast density nor key radiographer demographics that may influence their ability to report on mammograms accurately. The aim of the current study is to measure Jordanian radiographers’ performance in interpreting mammograms and to compare performance measures in cases of differing breast density. This study also aims to examine key demographic factors that may influence their performance.", "The study consisted of 30 screening cases acquired using computed radiography (CR), the most common mammography units available in Jordanian Hospitals. Each case comprised four routine digital mammograms (cranio‐caudal (CC) and medio‐lateral oblique views (MLO)) for both breasts. The images were selected by an experienced radiologist who had more than 20 years of experience in reading mammograms. 
Participants and study design: This study was conducted in North Jordan. All radiographers working as mammographers at the four main public and private hospitals were invited to participate. Twelve female radiographers aged between 20 and 50 agreed to participate; none had formal training in reading mammography images. The radiographers were asked to read images displayed on an 8-megapixel (MP) workstation calibrated according to the Digital Imaging and Communications in Medicine (DICOM) standard. Radiographers were trained to use the available image-processing tools, including magnification, windowing and panning, and were given unlimited time to read and score all images. Each radiographer was asked to determine whether each image was normal or needed to be recalled and to assign a BI-RADS assessment category of 1–5,21 where a score of 1 represents ‘no significant abnormality’, 2 ‘benign finding’, 3 ‘indeterminate/equivocal finding’, 4 ‘suspicious findings of malignancy’ and 5 ‘malignant findings’. Data analysis: Statistical analysis was performed with Statistical Package for the Social Sciences (SPSS) 26.0 software. Frequency and percentage analyses were carried out to describe the characteristics of the study sample. The performance of each radiographer was assessed using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and the area under the receiver operating characteristic curve (ROC AUC). Non-parametric hypothesis tests were used throughout the analysis after Kolmogorov–Smirnov and Shapiro–Wilk tests were performed to check for normality. The Mann–Whitney U test was applied for comparisons between groups, and median and interquartile range values were reported. A P value of ≤0.05 was considered statistically significant. The sample size used in this study was able to detect a difference of 0.06 in each performance measure at 80% power. Gender and training background were excluded from the analysis because all readers were female and none had formal training in image interpretation.
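A minimal sketch of how per-reader measures of this kind can be computed and compared is given below. It assumes, for each reader, a vector of case truths (1 = biopsy-proven cancer) and the reader's BI-RADS 1–5 scores, with BI-RADS ≥3 treated here as a recall decision; that threshold, the variable names, and the subgroup values are all illustrative assumptions rather than the study's own pipeline or data.

import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

def reader_measures(truth, birads, recall_threshold=3):
    # Dichotomise the reader's BI-RADS scores into recall / no-recall decisions,
    # then derive the standard diagnostic performance measures.
    truth = np.asarray(truth)
    recalled = np.asarray(birads) >= recall_threshold
    tp = np.sum(recalled & (truth == 1))
    tn = np.sum(~recalled & (truth == 0))
    fp = np.sum(recalled & (truth == 0))
    fn = np.sum(~recalled & (truth == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "ROC AUC": roc_auc_score(truth, birads),  # BI-RADS score used as the rating scale
    }

# Illustrative subgroup comparison of one measure across readers (values invented):
auc_low_workload = [0.57, 0.60, 0.62, 0.65, 0.66, 0.68]
auc_high_workload = [0.66, 0.70, 0.72, 0.74, 0.78, 0.80]
print(mannwhitneyu(auc_low_workload, auc_high_workload, alternative="two-sided"))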
[ "Introduction", "Materials and Methods", "Cases", "Participants and study design", "Data analysis", "Results", "Discussion", "Conflict of Interests" ]
[ "Breast cancer is the most common type of cancer in women worldwide.\n1\n Early detection is key to decreased morbidity and mortality with dedicated screening programmes available in many countries worldwide including, Australia,\n2\n the United States,\n3\n and the United Kingdom.\n4\n Routine mammography is the gold standard imaging method used to detect breast cancer and has been shown to contribute to at least a 30% reduction in the number of deaths from breast cancer in patients aged over 50 years.\n5\n However, 2D mammography has limitations including false negatives which have been reported to account for 10% to 30% of missed breast cancers. Using 2D mammography 80% of woman recalled for additional views typically have normal outcomes.\n6\n\n\nThe radiologists’ ability to correctly interpret mammograms is strongly influenced by key personal characteristics including age, academic qualification, number of years since qualification,\n7\n fellowship training,\n8\n and workload.\n9\n In the screening setting, the large number of women screened and the required speed of reading may also lead to less effective reporting due to fatigue and eye strain.\n10\n\n\nPatient related factors may also affect the radiologists’ ability to interpret mammograms. Among the most important factor is breast density, mainly because women with higher breast density are more susceptible to developing breast cancer than women with less dense breasts.\n11\n Higher breast density also results in less visibility (masking) of breast lesions in 2D mammography due to the low contrast between cancer and dense breast tissues.\n12\n\n\nIt has been reported that double reading screening mammograms increases cancer detection and decreases mortality from breast cancer.\n13\n Double reading typically means that the same mammogram is interpreted by two radiologists,\n14\n however, the high workload of radiologists has seen the evolution of the concept of a ‘skill mix’ in which radiographers contribute to image reporting as double readers.\n15\n This concept has been used in the UK to reduce the radiologists’ workload by training radiographers to read mammograms in many screening units within the National Health Service Breast Screening Programme (NHSBSP).\n16\n\n\nSeveral studies have assessed the diagnostic performance of radiographers in reading mammograms.\n1\n, \n15\n, \n17\n In general, the use of radiographers as second readers has been shown to support the increase in the number of detected cancers afforded by double reading.\n13\n, \n18\n, \n19\n However, no current studies have been found that assessed radiographers’ ability to interpret mammograms of differing breast density nor key radiographer demographics that may influence their ability to report on mammograms accurately. The aim of the current study is to measure Jordanian radiographers’ performance in interpreting mammograms and to compare performance measures in cases of differing breast density. This study also aims to examine key demographic factors that may influence their performance.", "Ethical approval was obtained through the Human Research Committee at Jordan University of Science and Technology (approval number: 470‐2020). Written informed consent was obtained from each radiographer before their participation.\nCases The study consisted of 30 screening cases acquired using computed radiography (CR), the most common mammography units available in Jordanian Hospitals. 
Each case comprised four routine digital mammograms (cranio‐caudal (CC) and medio‐lateral oblique views (MLO)) for both breasts. The images were selected by an experienced radiologist who had more than 20 years of experience in reading mammograms. In order to achieve the study aims, the radiologist was asked to select cases with different diagnostic outcomes. Of the selected images, 15 were normal as confirmed by a 2‐year follow up examination and 15 had a biopsy proven malignant lesions.\nCases were additionally purposively selected according to mammographic breast density and assigned a density category using the American College of Radiology (ACR) Breast Imaging‐Reporting and Data System (BI‐RADS) 5th edition.\n20\n This classification system consists of four categories, ‘a. the breasts are almost entirely fatty, b. there are scattered areas of fibroglandular density c. the breasts are heterogeneously dense, which may obscure small masses and d. the breasts are extremely dense, which lowers the sensitivity of mammography’. BI‐RADS density scoring was confirmed by two other radiologists and in case of disagreement; the majority rating (two of three readers) was used. Cases that scored BI‐RADS a and b were considered as low mammographic breast density (n = 14), while cases of BI‐RADS c and d were considered high‐mammographic breast density (n = 16) \n20\n. Low mammographic density cases included seven normal and seven abnormal mammograms, while high‐mammographic density cases included eight normal and eight abnormal mammograms.\nThe study consisted of 30 screening cases acquired using computed radiography (CR), the most common mammography units available in Jordanian Hospitals. Each case comprised four routine digital mammograms (cranio‐caudal (CC) and medio‐lateral oblique views (MLO)) for both breasts. The images were selected by an experienced radiologist who had more than 20 years of experience in reading mammograms. In order to achieve the study aims, the radiologist was asked to select cases with different diagnostic outcomes. Of the selected images, 15 were normal as confirmed by a 2‐year follow up examination and 15 had a biopsy proven malignant lesions.\nCases were additionally purposively selected according to mammographic breast density and assigned a density category using the American College of Radiology (ACR) Breast Imaging‐Reporting and Data System (BI‐RADS) 5th edition.\n20\n This classification system consists of four categories, ‘a. the breasts are almost entirely fatty, b. there are scattered areas of fibroglandular density c. the breasts are heterogeneously dense, which may obscure small masses and d. the breasts are extremely dense, which lowers the sensitivity of mammography’. BI‐RADS density scoring was confirmed by two other radiologists and in case of disagreement; the majority rating (two of three readers) was used. Cases that scored BI‐RADS a and b were considered as low mammographic breast density (n = 14), while cases of BI‐RADS c and d were considered high‐mammographic breast density (n = 16) \n20\n. Low mammographic density cases included seven normal and seven abnormal mammograms, while high‐mammographic density cases included eight normal and eight abnormal mammograms.\nParticipants and study design This study was conducted in North Jordan. All radiographers working as mammographers at the four main public and private hospitals were invited to participate. 
Twelve female radiographers aged between the 20 and 50 agreed to participate; none had formal training in reading mammography images. The radiographers were asked to read images displayed on an 8‐megapixel (MP) workstation calibrated according to the Digital Imaging and Communications in Medicine (DICOM) standard. Radiographers were trained to use the available image processing tools including magnification, windowing and panning, and were given unlimited time to read and score all images. Each radiographer was asked to determine if each image was normal or needed to be recalled and to assign a BI‐RADS assessment category 1–5,\n21\n where a score of 1 represents ‘no significant abnormality’, 2 is ‘benign finding’, 3 is ‘indeterminate/equivocal finding’, 4 is ‘suspicious findings of malignancy’ and 5 is ‘malignant findings’.\nThis study was conducted in North Jordan. All radiographers working as mammographers at the four main public and private hospitals were invited to participate. Twelve female radiographers aged between the 20 and 50 agreed to participate; none had formal training in reading mammography images. The radiographers were asked to read images displayed on an 8‐megapixel (MP) workstation calibrated according to the Digital Imaging and Communications in Medicine (DICOM) standard. Radiographers were trained to use the available image processing tools including magnification, windowing and panning, and were given unlimited time to read and score all images. Each radiographer was asked to determine if each image was normal or needed to be recalled and to assign a BI‐RADS assessment category 1–5,\n21\n where a score of 1 represents ‘no significant abnormality’, 2 is ‘benign finding’, 3 is ‘indeterminate/equivocal finding’, 4 is ‘suspicious findings of malignancy’ and 5 is ‘malignant findings’.\nData analysis Statistical analysis was performed with Statistical Package for Social Sciences (SPSS) 26.0 software. Frequency and percentage analysis were carried out to investigate the descriptive characteristics of study sample. The performance of each radiographer was assessed using; sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and area under the receiver operating characteristic curve (ROC, AUC).\nNon‐parametric hypothesis tests were performed throughout the whole data analysis after performing Kolmogorov–Smirnov and Shapiro–Wilk tests to check for normality. Mann Whitney U test was applied for the comparison between groups, median and interquartile range values were reported. A P value of ≤0.05 was considered statistically significant. The sample size used in this study was able to detect a difference of 0.06 of each performance measure at 80% power. Gender and training background were excluded from the analysis because all readers were female, and none had formal training in image interpretation.\nStatistical analysis was performed with Statistical Package for Social Sciences (SPSS) 26.0 software. Frequency and percentage analysis were carried out to investigate the descriptive characteristics of study sample. The performance of each radiographer was assessed using; sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and area under the receiver operating characteristic curve (ROC, AUC).\nNon‐parametric hypothesis tests were performed throughout the whole data analysis after performing Kolmogorov–Smirnov and Shapiro–Wilk tests to check for normality. 
", "The study consisted of 30 screening cases acquired using computed radiography (CR), the most common type of mammography unit available in Jordanian hospitals. Each case comprised four routine digital mammograms (cranio‐caudal (CC) and medio‐lateral oblique (MLO) views) of both breasts. The images were selected by an experienced radiologist with more than 20 years of experience in reading mammograms. In order to achieve the study aims, the radiologist was asked to select cases with different diagnostic outcomes. Of the selected images, 15 were normal, as confirmed by a 2‐year follow‐up examination, and 15 had biopsy‐proven malignant lesions.\nCases were additionally purposively selected according to mammographic breast density and assigned a density category using the American College of Radiology (ACR) Breast Imaging‐Reporting and Data System (BI‐RADS), 5th edition. 20 This classification system consists of four categories, ‘a. the breasts are almost entirely fatty, b. there are scattered areas of fibroglandular density, c. the breasts are heterogeneously dense, which may obscure small masses and d. the breasts are extremely dense, which lowers the sensitivity of mammography’. BI‐RADS density scoring was confirmed by two other radiologists, and in case of disagreement the majority rating (two of three readers) was used. Cases that scored BI‐RADS a or b were considered low mammographic breast density (n = 14), while cases that scored BI‐RADS c or d were considered high mammographic breast density (n = 16). 20 Low mammographic density cases included seven normal and seven abnormal mammograms, while high‐mammographic density cases included eight normal and eight abnormal mammograms.", "This study was conducted in North Jordan. All radiographers working as mammographers at the four main public and private hospitals were invited to participate. Twelve female radiographers aged between 20 and 50 years agreed to participate; none had formal training in reading mammography images. The radiographers were asked to read images displayed on an 8‐megapixel (MP) workstation calibrated according to the Digital Imaging and Communications in Medicine (DICOM) standard. Radiographers were trained to use the available image processing tools, including magnification, windowing and panning, and were given unlimited time to read and score all images. Each radiographer was asked to determine whether each image was normal or needed to be recalled and to assign a BI‐RADS assessment category of 1–5, 21 where a score of 1 represents ‘no significant abnormality’, 2 is ‘benign finding’, 3 is ‘indeterminate/equivocal finding’, 4 is ‘suspicious findings of malignancy’ and 5 is ‘malignant findings’.", "Statistical analysis was performed with Statistical Package for the Social Sciences (SPSS) 26.0 software. Frequency and percentage analyses were carried out to describe the characteristics of the study sample.
The performance of each radiographer was assessed using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and area under the receiver operating characteristic curve (ROC AUC).\nNon‐parametric hypothesis tests were performed throughout the data analysis after Kolmogorov–Smirnov and Shapiro–Wilk tests were used to check for normality. The Mann–Whitney U test was applied for comparisons between groups, and median and interquartile range values were reported. A P value of ≤0.05 was considered statistically significant. The sample size used in this study was able to detect a difference of 0.06 in each performance measure at 80% power. Gender and training background were excluded from the analysis because all readers were female and none had formal training in image interpretation.", "Table 1 reports the socio‐demographic and professional characteristics of the study participants. All 12 participating radiographers were female; 7 (58.3%) were aged 20 to 30 years, and the same proportion (58.3%) worked in public and teaching hospitals. More than half (58.3%) of the participants had 1 to 5 years of experience in breast imaging. In relation to workload, half of the radiographers worked in mammography imaging ≤20 hours per week, and 41.7% performed ≥20 mammography cases per week. None of the participating radiographers had previous training in reading mammography images.\nSocio‐demographic and professional characteristics of study participants (n = 12).\nTable 2 reports the performance measures of each radiographer. The ranges of sensitivity, specificity, ROC AUC, PPV and NPV were 0.33–0.80, 0.33–0.93, 0.57–0.80, 0.50–0.88 and 0.50–0.71, respectively.\nPerformance measures of study participants for all cases.\nROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value.\nTable 3 shows the difference in radiographers' performance in low compared to high breast density cases. All performance measures were significantly higher in low‐ compared to high‐breast density mammograms (P values ranging from <0.001 to 0.024).\nDifference in performance measures between cases with different density.\nLow density*\nMedian (IQR)\nHigh density**\nMedian (IQR)\nIQR, interquartile range; ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value.\n*BI‐RADS a and b.\n**BI‐RADS c and d.\nAs indicated in Table 4, radiographers who had more years of experience in mammography, who worked longer hours and who performed more cases per week had significantly higher mean PPV than radiographers with fewer years of experience, fewer work hours and fewer cases per week (P = 0.023, 0.01 and 0.017, respectively). The results also demonstrated that radiographers who worked >20 hours in mammography weekly and who performed ≥20 mammograms per week had significantly higher ROC AUC (P = 0.001 and 0.004, respectively).\nDifference in performance measures according to demographic and professional characteristics.
P values and medians (IQR) are reported.\nIQR, interquartile range; ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value; CR, computed radiography; DR, digital radiography.\nNumbers in bold represent a significant difference.\nThis group has only 2 participants.", "With the introduction of breast screening programmes and the associated increase in the number of women undergoing mammography, a high demand has been placed on radiologists to perform screen reads. In particular, the need in many screening services to double read cases, and to have a third reader if there is discordance, has created workload issues. 22 An important measure that needs to be considered to address this workload issue is role extension for radiographers, which has been used in the UK 16 and suggested in other countries such as Australia. 23 Previous work has demonstrated that radiographers' sensitivity and specificity in reading mammograms were comparable to those of radiologists 17 , 24 , 25 and that the addition of radiographers as second readers can also contribute positively by detecting more cancers in the screening setting. 13 , 18 , 19 It has been reported that the contribution of a radiographer as a double reader resulted in the detection of 9% more cancers compared with a single reading by a radiologist. 18\nIn Jordan, heightened public awareness associated with the development of the Jordanian breast screening programme has increased compliance with screening guidelines. This has resulted in a higher demand for breast screening services, including mammography readers. Recently, it has been reported that the shortage of specialised radiologists is one of the main workforce gaps in mammography screening in Jordan. 26 This suggests that the same ‘skill mix’ concept can potentially be used as a solution to the increase in radiologists' workload associated with the higher number of screened women. However, other contributing factors, such as the education and training of Jordanian radiographers, must be considered before the establishment of a double reading strategy. This requires the assessment of radiographers' current performance in reading mammograms as a first step towards future recommendations.\nThe overall results of the current study showed relatively low to medium mean sensitivity, specificity, ROC AUC, PPV and NPV of 0.58, 0.68, 0.69, 0.67 and 0.63, respectively. However, the results were heterogeneous among radiographers, with wide variation particularly in sensitivity and specificity, which ranged from 0.33–0.80 and 0.33–0.93, respectively. These results are comparable with previous studies, which also showed a wide range of sensitivity (61–89%) and specificity (45–97%) among radiographers. 24 While the performance measures of participating radiographers in this study are lower than those reported in other studies, 1 , 17 , 24 , 27 it must be acknowledged that some of these studies 1 , 23 included radiographers who had more (up to 44 years of) experience in mammography, compared with 10 or fewer years of experience in this study. It must also be noted that all mammograms used in the current study were acquired using a CR unit due to the higher availability of CR systems in Jordanian hospitals.
It has previously been reported that CR systems have a lower cancer detection rate than DR systems. 28 The low level of performance in the current research might also be attributed to differences in radiographers' training. None of the participating radiographers had previous training in mammographic image interpretation. Previous work reported that dedicated and self‐study training programmes may improve the performance of radiographers not only in detecting cancer but also in identifying benign lesions and reducing the number of false positives. 6 , 23 An increase in Jordanian radiographer performance may be evidenced in future studies with formalised screen reading training and assessment.\nWhile radiographers in some countries, such as the United Kingdom (UK), receive postgraduate formal training in image interpretation, 29 there is no similar approach in Jordan. Radiographers typically gain image interpretation skills through individual effort and through communication with radiologists and other radiographers during practice.\nAfter dividing the cases into high‐ and low‐breast density categories, our results showed that even without formal training radiographers may have reporting skills comparable to those of radiologists in low mammographic density breasts, with a mean sensitivity, specificity, ROC AUC, PPV and NPV of 0.70, 0.80, 0.79, 0.81 and 0.73, respectively. This has an important implication for the planning of radiographers' contribution to image interpretation: they may potentially be recruited to read cases with low mammographic density, which may relieve radiologists' workload and free up radiologists for more difficult tasks (which could include reporting mammograms with high mammographic density). This has been introduced elsewhere as a more cost‐effective scenario than having either the radiographer or the radiologist read all mammograms. 30 Providing formal training programmes in image interpretation focusing on high‐mammographic density cases could alternatively be considered for radiographers who wish to become dedicated screen readers.\nThe results regarding the association between radiographer demographics and performance showed higher PPV for radiographers who had 6–10 years of experience compared with less experienced radiographers, and higher PPV and ROC AUC for radiographers who worked more than 20 hours and performed 20 or more cases per week compared with those who had a lower workload in terms of hours or cases per week. In line with the results of the current work, it has previously been reported that the most experienced radiology readers have the highest PPV, 31 which can be explained by cumulative exposure to the normal radiographic features of mammograms making readers better able to distinguish abnormal findings. Similarly, previous work also found that the performance of radiologists can be affected by their years of experience and number of reading hours per week. 32\nThis study has some limitations. First, the sample size and the number of readers were relatively small, and unlike other published studies, participating radiographers were not trained and assessed in screen reading, as this was not within the aims of the study. Also, location sensitivity was not calculated in this study, as the radiographers were not asked to localise the detected lesions due to time considerations.
All images included in the current study were acquired using CR; however, not all participating radiographers were familiar with CR‐acquired images, which may have contributed to the variation in radiographer performance.\nIn conclusion, the findings of this pilot study suggest that radiographers working in breast imaging have an inherent skill set that could be capitalised on to support the radiology workforce in Jordan. A more extensive study is required to determine whether Jordanian radiographers are capable of successfully taking part in breast screen reading. The lower performance measures in radiographer‐interpreted high‐mammographic breast density cases emphasise the importance of any training programme providing education focused on image interpretation skills in the dense breast.", "The authors declare that there is no conflict of interest." ]
[ null, "materials-and-methods", null, null, null, "results", "discussion", "COI-statement" ]
[ "Breast", "mammography", "reporting", "screening" ]
Introduction: Breast cancer is the most common type of cancer in women worldwide. 1 Early detection is key to decreased morbidity and mortality, with dedicated screening programmes available in many countries worldwide, including Australia, 2 the United States 3 and the United Kingdom. 4 Routine mammography is the gold standard imaging method used to detect breast cancer and has been shown to contribute to at least a 30% reduction in the number of deaths from breast cancer in patients aged over 50 years. 5 However, 2D mammography has limitations, including false negatives, which have been reported to account for 10% to 30% of missed breast cancers. Using 2D mammography, 80% of women recalled for additional views typically have normal outcomes. 6 The radiologists' ability to correctly interpret mammograms is strongly influenced by key personal characteristics, including age, academic qualification, number of years since qualification, 7 fellowship training 8 and workload. 9 In the screening setting, the large number of women screened and the required speed of reading may also lead to less effective reporting due to fatigue and eye strain. 10 Patient‐related factors may also affect the radiologists' ability to interpret mammograms. Among the most important factors is breast density, mainly because women with higher breast density are more susceptible to developing breast cancer than women with less dense breasts. 11 Higher breast density also results in less visibility (masking) of breast lesions in 2D mammography due to the low contrast between cancer and dense breast tissue. 12 It has been reported that double reading of screening mammograms increases cancer detection and decreases mortality from breast cancer. 13 Double reading typically means that the same mammogram is interpreted by two radiologists; 14 however, the high workload of radiologists has seen the evolution of the concept of a ‘skill mix’, in which radiographers contribute to image reporting as double readers. 15 This concept has been used in the UK to reduce radiologists' workload by training radiographers to read mammograms in many screening units within the National Health Service Breast Screening Programme (NHSBSP). 16 Several studies have assessed the diagnostic performance of radiographers in reading mammograms. 1 , 15 , 17 In general, the use of radiographers as second readers has been shown to support the increase in the number of detected cancers afforded by double reading. 13 , 18 , 19 However, no studies have been found that assessed radiographers' ability to interpret mammograms of differing breast density or the key radiographer demographics that may influence their ability to report on mammograms accurately. The aim of the current study is to measure Jordanian radiographers' performance in interpreting mammograms and to compare performance measures in cases of differing breast density. This study also aims to examine key demographic factors that may influence their performance. Materials and Methods: Ethical approval was obtained through the Human Research Committee at Jordan University of Science and Technology (approval number: 470‐2020). Written informed consent was obtained from each radiographer before their participation. Cases The study consisted of 30 screening cases acquired using computed radiography (CR), the most common type of mammography unit available in Jordanian hospitals. Each case comprised four routine digital mammograms (cranio‐caudal (CC) and medio‐lateral oblique (MLO) views) of both breasts.
The images were selected by an experienced radiologist with more than 20 years of experience in reading mammograms. In order to achieve the study aims, the radiologist was asked to select cases with different diagnostic outcomes. Of the selected images, 15 were normal, as confirmed by a 2‐year follow‐up examination, and 15 had biopsy‐proven malignant lesions. Cases were additionally purposively selected according to mammographic breast density and assigned a density category using the American College of Radiology (ACR) Breast Imaging‐Reporting and Data System (BI‐RADS), 5th edition. 20 This classification system consists of four categories, ‘a. the breasts are almost entirely fatty, b. there are scattered areas of fibroglandular density, c. the breasts are heterogeneously dense, which may obscure small masses and d. the breasts are extremely dense, which lowers the sensitivity of mammography’. BI‐RADS density scoring was confirmed by two other radiologists, and in case of disagreement the majority rating (two of three readers) was used. Cases that scored BI‐RADS a or b were considered low mammographic breast density (n = 14), while cases that scored BI‐RADS c or d were considered high mammographic breast density (n = 16). 20 Low mammographic density cases included seven normal and seven abnormal mammograms, while high‐mammographic density cases included eight normal and eight abnormal mammograms. Participants and study design This study was conducted in North Jordan. All radiographers working as mammographers at the four main public and private hospitals were invited to participate. Twelve female radiographers aged between 20 and 50 years agreed to participate; none had formal training in reading mammography images.
The radiographers were asked to read images displayed on an 8‐megapixel (MP) workstation calibrated according to the Digital Imaging and Communications in Medicine (DICOM) standard. Radiographers were trained to use the available image processing tools, including magnification, windowing and panning, and were given unlimited time to read and score all images. Each radiographer was asked to determine whether each image was normal or needed to be recalled and to assign a BI‐RADS assessment category of 1–5, 21 where a score of 1 represents ‘no significant abnormality’, 2 is ‘benign finding’, 3 is ‘indeterminate/equivocal finding’, 4 is ‘suspicious findings of malignancy’ and 5 is ‘malignant findings’. Data analysis Statistical analysis was performed with Statistical Package for the Social Sciences (SPSS) 26.0 software. Frequency and percentage analyses were carried out to describe the characteristics of the study sample. The performance of each radiographer was assessed using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and area under the receiver operating characteristic curve (ROC AUC). Non‐parametric hypothesis tests were performed throughout the data analysis after Kolmogorov–Smirnov and Shapiro–Wilk tests were used to check for normality. The Mann–Whitney U test was applied for comparisons between groups, and median and interquartile range values were reported. A P value of ≤0.05 was considered statistically significant. The sample size used in this study was able to detect a difference of 0.06 in each performance measure at 80% power. Gender and training background were excluded from the analysis because all readers were female and none had formal training in image interpretation.
Results: Table 1 reports the socio‐demographic and professional characteristics of the study participants. All 12 participating radiographers were female; 7 (58.3%) were aged 20 to 30 years, and the same proportion (58.3%) worked in public and teaching hospitals. More than half (58.3%) of the participants had 1 to 5 years of experience in breast imaging. In relation to workload, half of the radiographers worked in mammography imaging ≤20 hours per week, and 41.7% performed ≥20 mammography cases per week. None of the participating radiographers had previous training in reading mammography images. Socio‐demographic and professional characteristics of study participants (n = 12). Table 2 reports the performance measures of each radiographer. The ranges of sensitivity, specificity, ROC AUC, PPV and NPV were 0.33–0.80, 0.33–0.93, 0.57–0.80, 0.50–0.88 and 0.50–0.71, respectively. Performance measures of study participants for all cases. ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value. Table 3 shows the difference in radiographers' performance in low compared to high breast density cases. All performance measures were significantly higher in low‐ compared to high‐breast density mammograms (P values ranging from <0.001 to 0.024). Difference in performance measures between cases with different density. Low density* Median (IQR) High density** Median (IQR) IQR, interquartile range; ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value. *BI‐RADS a and b. **BI‐RADS c and d. As indicated in Table 4, radiographers who had more years of experience in mammography, who worked longer hours and who performed more cases per week had significantly higher mean PPV than radiographers with fewer years of experience, fewer work hours and fewer cases per week (P = 0.023, 0.01 and 0.017, respectively). The results also demonstrated that radiographers who worked >20 hours in mammography weekly and who performed ≥20 mammograms per week had significantly higher ROC AUC (P = 0.001 and 0.004, respectively). Difference in performance measures according to demographic and professional characteristics. P values and medians (IQR) are reported. IQR, interquartile range; ROC AUC, area under receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value; CR, computed radiography; DR, digital radiography. Numbers in bold represent a significant difference. This group has only 2 participants. Discussion: With the introduction of breast screening programmes and the associated increase in the number of women undergoing mammography, a high demand has been placed on radiologists to perform screen reads. In particular, the need in many screening services to double read cases, and to have a third reader if there is discordance, has created workload issues.
22 An important measure that needs to be considered to address this workload issue is role extension for radiographers, which has been used in the UK 16 and suggested in other countries such as Australia. 23 Previous work has demonstrated that radiographers' sensitivity and specificity in reading mammograms were comparable to those of radiologists 17 , 24 , 25 and that the addition of radiographers as second readers can also contribute positively by detecting more cancers in the screening setting. 13 , 18 , 19 It has been reported that the contribution of a radiographer as a double reader resulted in the detection of 9% more cancers compared with a single reading by a radiologist. 18 In Jordan, heightened public awareness associated with the development of the Jordanian breast screening programme has increased compliance with screening guidelines. This has resulted in a higher demand for breast screening services, including mammography readers. Recently, it has been reported that the shortage of specialised radiologists is one of the main workforce gaps in mammography screening in Jordan. 26 This suggests that the same ‘skill mix’ concept can potentially be used as a solution to the increase in radiologists' workload associated with the higher number of screened women. However, other contributing factors, such as the education and training of Jordanian radiographers, must be considered before the establishment of a double reading strategy. This requires the assessment of radiographers' current performance in reading mammograms as a first step towards future recommendations. The overall results of the current study showed relatively low to medium mean sensitivity, specificity, ROC AUC, PPV and NPV of 0.58, 0.68, 0.69, 0.67 and 0.63, respectively. However, the results were heterogeneous among radiographers, with wide variation particularly in sensitivity and specificity, which ranged from 0.33–0.80 and 0.33–0.93, respectively. These results are comparable with previous studies, which also showed a wide range of sensitivity (61–89%) and specificity (45–97%) among radiographers. 24 While the performance measures of participating radiographers in this study are lower than those reported in other studies, 1 , 17 , 24 , 27 it must be acknowledged that some of these studies 1 , 23 included radiographers who had more (up to 44 years of) experience in mammography, compared with 10 or fewer years of experience in this study. It must also be noted that all mammograms used in the current study were acquired using a CR unit due to the higher availability of CR systems in Jordanian hospitals. It has previously been reported that CR systems have a lower cancer detection rate than DR systems. 28 The low level of performance in the current research might also be attributed to differences in radiographers' training. None of the participating radiographers had previous training in mammographic image interpretation. Previous work reported that dedicated and self‐study training programmes may improve the performance of radiographers not only in detecting cancer but also in identifying benign lesions and reducing the number of false positives. 6 , 23 An increase in Jordanian radiographer performance may be evidenced in future studies with formalised screen reading training and assessment. While radiographers in some countries, such as the United Kingdom (UK), receive postgraduate formal training in image interpretation, 29 there is no similar approach in Jordan.
Radiographers typically gain image interpretation skills through individual effort and through communication with radiologists and other radiographers during practice. After dividing the cases into high‐ and low‐breast density categories, our results showed that even without formal training radiographers may have reporting skills comparable to those of radiologists in low mammographic density breasts, with a mean sensitivity, specificity, ROC AUC, PPV and NPV of 0.70, 0.80, 0.79, 0.81 and 0.73, respectively. This has an important implication for the planning of radiographers' contribution to image interpretation: they may potentially be recruited to read cases with low mammographic density, which may relieve radiologists' workload and free up radiologists for more difficult tasks (which could include reporting mammograms with high mammographic density). This has been introduced elsewhere as a more cost‐effective scenario than having either the radiographer or the radiologist read all mammograms. 30 Providing formal training programmes in image interpretation focusing on high‐mammographic density cases could alternatively be considered for radiographers who wish to become dedicated screen readers. The results regarding the association between radiographer demographics and performance showed higher PPV for radiographers who had 6–10 years of experience compared with less experienced radiographers, and higher PPV and ROC AUC for radiographers who worked more than 20 hours and performed 20 or more cases per week compared with those who had a lower workload in terms of hours or cases per week. In line with the results of the current work, it has previously been reported that the most experienced radiology readers have the highest PPV, 31 which can be explained by cumulative exposure to the normal radiographic features of mammograms making readers better able to distinguish abnormal findings. Similarly, previous work also found that the performance of radiologists can be affected by their years of experience and number of reading hours per week. 32 This study has some limitations. First, the sample size and the number of readers were relatively small, and unlike other published studies, participating radiographers were not trained and assessed in screen reading, as this was not within the aims of the study. Also, location sensitivity was not calculated in this study, as the radiographers were not asked to localise the detected lesions due to time considerations. All images included in the current study were acquired using CR; however, not all participating radiographers were familiar with CR‐acquired images, which may have contributed to the variation in radiographer performance. In conclusion, the findings of this pilot study suggest that radiographers working in breast imaging have an inherent skill set that could be capitalised on to support the radiology workforce in Jordan. A more extensive study is required to determine whether Jordanian radiographers are capable of successfully taking part in breast screen reading. The lower performance measures in radiographer‐interpreted high‐mammographic breast density cases emphasise the importance of any training programme providing education focused on image interpretation skills in the dense breast. Conflict of Interests: The authors declare that there is no conflict of interest.
Background: A high demand has been placed on radiologists to perform screen reads due to the higher number of women undergoing mammography. This study aims to examine radiographer performance in reporting low compared with high-mammographic density (MD) images and to assess the influence of key demographics of Jordanian radiographers on their performance. Methods: Thirty mammograms with varied MD were reported by 12 radiographers using the Breast Imaging-Reporting and Data System (BI-RADS). Radiographer performance was measured using sensitivity, specificity, positive (PPV) and negative predictive values (NPV), and area under the receiver operating characteristic curve (ROC AUC). Performance measures were compared between cases with low and high MD and between subgroups of radiographers according to key demographics. Results: All performance measures were significantly higher in low- compared to high-MD cases (P < 0.05). The mean sensitivity, specificity, PPV, NPV and ROC AUC were 0.58, 0.68, 0.67, 0.63 and 0.69, respectively. PPV differed significantly between readers with different years of experience in mammography, hours worked and cases performed per week (P = 0.023, 0.01 and 0.017, respectively). ROC AUC differed significantly between radiographers with different numbers of hours worked and cases performed per week (P = 0.001 and 0.004, respectively). Conclusions: The results of this pilot study are encouraging; however, a more extensive study is required to determine whether Jordanian radiographers are capable of successfully taking part in breast screen reading. The lack of skills and knowledge required for correct and consistent reporting of high-MD images highlights the need for any formal training in mammographic interpretation to focus on the dense breast.
null
null
4,341
326
[ 554, 303, 179, 176 ]
8
[ "radiographers", "density", "cases", "breast", "study", "mammograms", "performance", "mammography", "mammographic", "training" ]
[ "screening mammograms increases", "cancers 2d mammography", "mammography screening", "radiographers read mammograms", "radiographers reading mammograms" ]
null
null
null
[CONTENT] Breast | mammography | reporting | screening [SUMMARY]
null
[CONTENT] Breast | mammography | reporting | screening [SUMMARY]
null
[CONTENT] Breast | mammography | reporting | screening [SUMMARY]
null
[CONTENT] Breast Density | Breast Neoplasms | Demography | Female | Humans | Mammography | Pilot Projects [SUMMARY]
null
[CONTENT] Breast Density | Breast Neoplasms | Demography | Female | Humans | Mammography | Pilot Projects [SUMMARY]
null
[CONTENT] Breast Density | Breast Neoplasms | Demography | Female | Humans | Mammography | Pilot Projects [SUMMARY]
null
[CONTENT] screening mammograms increases | cancers 2d mammography | mammography screening | radiographers read mammograms | radiographers reading mammograms [SUMMARY]
null
[CONTENT] screening mammograms increases | cancers 2d mammography | mammography screening | radiographers read mammograms | radiographers reading mammograms [SUMMARY]
null
[CONTENT] screening mammograms increases | cancers 2d mammography | mammography screening | radiographers read mammograms | radiographers reading mammograms [SUMMARY]
null
[CONTENT] radiographers | density | cases | breast | study | mammograms | performance | mammography | mammographic | training [SUMMARY]
null
[CONTENT] radiographers | density | cases | breast | study | mammograms | performance | mammography | mammographic | training [SUMMARY]
null
[CONTENT] radiographers | density | cases | breast | study | mammograms | performance | mammography | mammographic | training [SUMMARY]
null
[CONTENT] breast | cancer | breast cancer | mammograms | ability | key | radiographers | women | double | screening [SUMMARY]
null
[CONTENT] value | iqr | predictive value | predictive | participants | table | radiographers | performance measures | measures | performance [SUMMARY]
null
[CONTENT] radiographers | density | breast | cases | value | mammograms | mammographic | study | performance | analysis [SUMMARY]
null
[CONTENT] ||| MD | Jordanian [SUMMARY]
null
[CONTENT] 0.0 ||| PPV | NPV | ROC | 0.58 | 0.68 | 0.67 | 0.63 | 0.69 ||| PPV | years | hours | 0.023 | 0.01 | 0.017 ||| hours | 0.001 | 0.004 [SUMMARY]
null
[CONTENT] ||| MD | Jordanian ||| Thirty | MD | 12 | the Breast Imaging-Reporting | BI-RADS ||| PPV | NPV ||| ||| ||| 0.0 ||| PPV | NPV | ROC | 0.58 | 0.68 | 0.67 | 0.63 | 0.69 ||| PPV | years | hours | 0.023 | 0.01 | 0.017 ||| hours | 0.001 | 0.004 ||| Jordanian ||| [SUMMARY]
null
Lung function in Lolland-Falster Health Study (LOFUS).
36056580
COPD prevalence in Denmark is estimated at 18% based on data from urban populations. However, studies suggest that using the clinical cut-off for airway obstruction in population studies may overestimate prevalence. The present study aims to compare estimated prevalence of airway obstruction using different cut-offs and to present lung function data from the Lolland-Falster Health Study, set in a rural-provincial area.
BACKGROUND
Descriptive analysis of participant characteristics and self-reported respiratory disease and of spirometry results in the total population and in subgroups defined by these characteristics. Airway obstruction was assessed using previously published Danish reference values and defined according to either FEV1/FVC below the lower limit of normal (LLN) 5% (as in clinical diagnosis) or 2.5% (suggested for population studies), or as FEV1/FVC < 70%.
METHODS
Using either the FEV1/FVC < 70% or the LLN 5% cut-off, 19.0% of LOFUS participants aged 35 years or older had spirometry suggesting airway obstruction. By the LLN 2.5% criterion, the proportion was considerably lower, at 12.2%. The prevalence of airway obstruction was higher among current smokers, in participants with short education or reporting low leisure-time physical activity, and in those with known respiratory disease. Approximately 40% of participants reporting known respiratory disease had normal spirometry, and 8.7% without known respiratory disease had airway obstruction.
RESULTS
Prevalence of airway obstruction in this rural population was comparable to previous estimates from urban Danish population studies. The choice of cut-off impacts the estimated prevalence, and using the FEV1 /FVC cut-off may overestimate prevalence. However, many participants with known respiratory disease had normal spirometry in this health study.
CONCLUSION
[ "Airway Obstruction", "Forced Expiratory Volume", "Humans", "Lung", "Pulmonary Disease, Chronic Obstructive", "Spirometry", "Vital Capacity" ]
9527155
BACKGROUND
Worldwide, lung disease is a major cause of morbidity and mortality. 1 Lung function declines with age, 2 and smoking and environmental pollution can cause lung disease, accelerated loss of lung function and premature death. 3 , 4 , 5 , 6 In Denmark, the prevalence of chronic obstructive pulmonary disease (COPD) has been estimated at 17.5% of adults aged 35 years or older, 7 with an estimated 2.0% having severe COPD. 7 This estimate derived from a large urban population study using a spirometry cut‐off intended for clinical diagnosis, that is, for a clinical setting with suspicion of airway disease. This may lead to overestimation of COPD prevalence. 8 , 9 In addition, the choice of spirometry criterion to most correctly detect airway obstruction has been debated in recent decades. Detection of a reduced ratio between forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) indicates airway obstruction, and while a fixed ratio criterion of FEV1/FVC < 0.70 is often used, studies show that an FEV1/FVC ratio below a lower limit of normal (LLN) may be more suitable. 8 , 10 Lolland‐Falster is a mixed rural‐provincial area of 103 000 inhabitants, situated on two main and several small islands in the southern part of Denmark, a small Scandinavian, high‐income country covering 43 000 km2 with a population of 5.8 million. Although the Danish population is genetically relatively homogeneous, population health varies across regions of the country, with life expectancy in Lolland‐Falster three years below the national average and five years below that of the municipalities with the highest life expectancy. 11 , 12 The region scores worse than the national average on several health indicators, including diabetes prevalence, obesity, smoking and COPD. 13 The Lolland‐Falster Health Study (LOFUS) is a population‐based, prospective cohort study designed to investigate determinants of population health in this area. 14 In this paper, we report LOFUS data on lung function measurement with spirometry as well as anthropometric data and questionnaire‐based information on smoking and other risk factors. The aim is to describe the spirometry measurements and results in adults participating in LOFUS and compare them to similar findings from urban Danish populations. In addition, we explore how different spirometric criteria affect the estimated prevalence of airway obstruction.
null
null
RESULTS
Participant characteristics Spirometry measurements were attempted for all 16 142 participants; 19 participants were excluded due to missing height, and of the remaining 16 123 participants, 13 315 (82.5%) completed three acceptable measurements (Figure 1). Participants with unsuccessful spirometry were more likely to be men, older, overweight or sedentary, to have a large waist circumference, low school/vocational education, hypertension or ischaemic heart disease, and less likely to have asthma or other respiratory disease. The population with successful spirometry was on average 59.0 years old, with 54.2% being women. The majority were overweight (63.4%), had education below high school graduation (62.5%), reported moderate physical activity (61.3%) or were former or current smokers (51.8%), and 26.8% reported having hypertension (Table S1).\nRespiratory disease The participants with other respiratory disease (n = 1028) were on average 61.2 years old, with 56.4% being women (Table S2); 75.1% reported also having asthma. Compared with the total group with successful spirometry (Table S1), the participants with self‐reported respiratory disease were more likely to be older, have a higher waist circumference, lower school education and more sedentary activity, to be current daily smokers with more pack‐years, and to have asthma, allergy, hypertension, diabetes, cancer and ischaemic heart disease.\nSpirometry results Table 1 shows the proportions of participants who met the three different criteria for airway obstruction, by age category, anthropometric data, educational status and smoking status, in participants aged 35 years or older. Overall, 12.2% of participants had FEV1/FVC below the LLN 2.5% cut‐off, 19.0% below the LLN 5% cut‐off and 19.0% below 70%, but with variation by age group. Up until age 49, the proportions meeting the FEV1/FVC < 70% and the LLN 2.5% criteria were similar, but with increasing age, the proportion with FEV1/FVC < 70% increased more than the proportion meeting the LLN criteria. As a result, the difference between the proportions with FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% increased in the oldest age groups.
Proportion of participants with FEV1/FVC < LLN 2.5%, LLN 5% and <70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study. Notes: LLN is defined by normal values in a Danish population. 17 Values are numbers (frequencies) for categorical values. P‐values were calculated with Pearson's X 2 test for categorical values. Other respiratory disease includes COPD, chronic bronchitis and emphysema.\nAs expected, proportions were higher in smokers, especially current smokers and those with the most pack‐years, and among participants reporting other respiratory disease. In addition, spirometry results below expected values were more common among those with fewer years of schooling and those reporting sedentary or moderate physical activity. The proportions of participants with FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% were highest among those with BMI < 18.5 (42.6% and 27.8%, respectively), low school education (30.8% and 16.8%), sedentary activity (25.0% and 17.2%), current daily smoking (33.2% and 25.6%), asthma (49.2% and 41.3%) and other respiratory disease (57.6% and 46.7%). The highest proportion of participants with FEV1/FVC < 70% was observed among current daily smokers (33.2%) and among those with known asthma (49.2%) or other respiratory disease (57.6%). The corresponding values for FEV1/FVC below LLN 2.5% were 25.6%, 41.3% and 46.7%, respectively.\nTable 2 shows the results of univariable and multivariable analyses of the proportion with airway obstruction using the different criteria. The strongest association was seen in daily smokers, followed by sometimes smokers and former smokers. An increasing number of pack‐years was also strongly associated with airway obstruction by either criterion. Airway obstruction was associated with increasing age for the fixed ratio criterion but not for LLN. BMI < 18.5 was associated with airway obstruction, as was a sedentary lifestyle. Asthma and other respiratory disease were associated with airway obstruction, while other chronic diseases were not. No clear association was found for educational level or waist circumference.\nAssociation of characteristics and FEV1/FVC < LLN 2.5%, FEV1/FVC < LLN 5% and proportion with FEV1/FVC < 70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study, according to age. Note: LLN is defined by normal values in a Danish population. 17 Cumulative smoking (pack‐years) was excluded from the multivariable models due to missing values for all former smokers. Includes chronic obstructive pulmonary disease (COPD), chronic bronchitis, hyperinflated lungs and emphysema.\nTable S3 shows additional spirometry results by sex for all participants aged 20 years and above. For men (n = 5984), FEV1 decreased from an average of 4.39 L for individuals aged 20 years to 2.29 L for individuals aged 80+ years (Table S3A). For women (n = 7015), it decreased from 3.24 L for those aged 20 years to 1.61 L for those aged 80 years (Table S3B). Across all ages, men had higher FEV1 than women. For men, the proportion with FEV1/FVC < 70% was 11.7 and 16.5 percentage points higher than the proportion with FEV1/FVC < LLN 2.5% within the age groups 70–79 and 80+ years, respectively. For women, the difference was 17.3 and 29.9 percentage points, respectively, in the two oldest age groups. The proportion of participants aged 20 and above with airway obstruction by the FEV1/FVC < LLN 5%, FEV1/FVC < LLN 2.5% and FEV1/FVC < 70% criteria in different subgroups of participants is shown in Tables S4A (men) and S4B (women).
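The divergence between the fixed-ratio and LLN criteria with age, described above, follows directly from how the cut-offs are constructed. The sketch below illustrates this; it is not the study's code, and the predicted ratio and residual standard deviation are placeholder values standing in for the Danish reference equations (reference 17), which are not reproduced here. The LLN is approximated as the predicted value plus a normal quantile times the residual SD.

```python
from scipy.stats import norm

Z_5, Z_2_5 = norm.ppf(0.05), norm.ppf(0.025)  # about -1.645 and -1.960

def obstruction_flags(ratio, pred_ratio, sd_ratio):
    """Flag airway obstruction under the three cut-offs for one person.

    pred_ratio and sd_ratio would come from reference equations based
    on age, sex and height; the values used below are placeholders.
    """
    return {
        "fixed_70": ratio < 0.70,
        "lln_5": ratio < pred_ratio + Z_5 * sd_ratio,
        "lln_2_5": ratio < pred_ratio + Z_2_5 * sd_ratio,
    }

# An older participant whose *predicted* ratio has already declined:
print(obstruction_flags(ratio=0.68, pred_ratio=0.74, sd_ratio=0.052))
# -> fixed_70 is True while both LLN flags are False, the pattern that
#    widens the gap between the criteria in the oldest age groups.
```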
Table 1 shows proportions of participants who met the three different criteria for airway obstruction, by age categories, anthropometric data, educational status and smoking status in participants aged 35 years or older. Overall 12.2% of participants had FEV1/FVC below the LLN 2.5% cut‐off, 19.0% below the LLN 5%, and 19.0% below 70%, but with variation by age group. Up until age 49, the proportions meeting the FEV1/FVC < 70% and the LLN 2.5% criteria were similar, but with increasing age, the proportion with FEV1/FVC < 70% increased more than the proportion meeting the LLN criteria. As a result, the difference between the proportions of FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% increased in the oldest age groups. Proportion of participants with FEV1/FVC < LLN 2.5%, LLN 5% and <70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study Notes: LLN is defined by normal values in a Danish population. 17 Values are number (frequencies) for categorical values. P‐values were calculated with Pearson's X 2 test for categorical values. Other respiratory disease includes COPD, chronic bronchitis and emphysema. As expected, proportions were higher in smokers, especially current smokers and those with most pack‐years, and among participants reporting other respiratory disease. In addition, spirometry results below expected were more common among those with fewer years of schooling and those reporting sedentary or moderate physical activity. The proportion of participants with FEV1/FVC < 70% and FEV1/FVC LLN 2.5% was highest among those with BMI < 18.5 (42.6% and 27.8%, respectively), low school education (30.8% and 16.8%), sedentary activity (25.0% and 17.2%), current daily smoking (33.2% and 25.6%), asthma (49.2% and 41.3%) and other respiratory disease (57.6% and 46.7%). The highest proportion of participants with FEV1/FVC < 70% was observed among current daily smokers (33.2%) and among those with known asthma (49.2%) or other respiratory disease (57.6%). The corresponding values for FEV1/FVC below LLN 2.5 were 25.6%, 41.3% and 46.7%, respectively. Table 2 shows results of univariable and multivariable analyses for proportion with airway obstruction using the different criteria. The strongest association was seen in daily smokers, followed by sometimes smokers and former smokers. Increasing number of pack‐years was also strongly associated with airway obstruction by either criterion. Airway obstruction was associated with increasing age for the fixed ratio criterion but not for LLN. BMI < 18.5 was associated with airway obstruction, as was sedentary lifestyle. Asthma and other respiratory disease were associated with airway obstruction, while other chronic diseases were not. No clear association was found for educational level or waist circumference. Association of characteristics and FEV1/FVC < LLN 2.5%, FEV1/FVC < LLN 5% and proportion with FEV1/FVC < 70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study according to age Note: LLN is defined by normal values in a Danish population. 17 Cumulative smoking (pack‐years) was excluded from the multivariable models, due to missing values for all former smokers. Includes chronic obstructive pulmonary disease (COPD), chronic bronchitis, hyperinflated lungs and emphysema. Table S3 shows additional spirometry results by sex for all participants aged 20 years and above. 
Table S3 shows additional spirometry results by sex for all participants aged 20 years and above. For men (n = 5984), mean FEV1 decreased from 4.39 L at age 20 years to 2.29 L at age 80+ years (Table S3A). For women (n = 7015), it decreased from 3.24 L at age 20 years to 1.61 L at age 80+ years (Table S3B). Across all ages, men had higher FEV1 than women. For men, the proportion with FEV1/FVC < 70% was 11.7 and 16.5 percentage points higher than the proportion with FEV1/FVC < LLN 2.5% in the age groups 70–79 and 80+ years, respectively. For women, the corresponding differences in the two oldest age groups were 17.3 and 29.9 percentage points. The proportions of participants aged 20 and above with airway obstruction under the FEV1/FVC < LLN 5%, FEV1/FVC < LLN 2.5% and FEV1/FVC < 70% criteria in different subgroups are shown in Tables S4A (men) and S4B (women).
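The widening gap between the fixed-ratio and LLN criteria in the oldest age groups follows from how the cut-offs are defined: the LLN tracks the age-specific predicted mean, while 0.70 does not. Below is a minimal sketch of the classification used in this study, assuming the predicted FEV1/FVC and residual SD for a participant's age, sex and height are available from a reference equation such as Løkke et al.; the function and variable names are illustrative, and the LLN 5% multiplier of 1.645 (the Gaussian 5th percentile) is our assumption, whereas LLN 2.5% = mean − 1.96 × residual SD is stated in the paper.

```python
# Classify airway obstruction under the three cut-offs compared here.
# pred_mean and resid_sd are the predicted FEV1/FVC and residual SD for
# the participant's age, sex and height, taken from a reference equation.
def classify_obstruction(fev1_fvc: float, pred_mean: float, resid_sd: float) -> dict:
    lln_5 = pred_mean - 1.645 * resid_sd    # lower 5th percentile (assumed Gaussian)
    lln_2_5 = pred_mean - 1.960 * resid_sd  # lower 2.5th percentile, as in the paper
    return {
        "fixed_70": fev1_fvc < 0.70,  # fixed-ratio criterion
        "lln_5": fev1_fvc < lln_5,
        "lln_2_5": fev1_fvc < lln_2_5,
    }

# An older participant can fall below the fixed 0.70 line while staying
# above their age-specific LLN, which is why the two prevalence estimates
# diverge after middle age.
print(classify_obstruction(fev1_fvc=0.68, pred_mean=0.72, resid_sd=0.05))
# {'fixed_70': True, 'lln_5': False, 'lln_2_5': False}
```

Averaging each flag over all participants would reproduce the three prevalence estimates compared in Table 1.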
CONCLUSION
This was a descriptive cross‐sectional study presenting data on the proportions with signs of airway obstruction in broad subgroups of participants in a population study from a rural part of Denmark, using different cut‐offs to define airway obstruction. Our study shows that, using the same criteria as previous Danish studies, the population of Lolland‐Falster has a comparable proportion with airway obstruction, and hence possible COPD: 19% among participants aged 35 years or older. Our study also highlights how the choice of cut‐off influences the estimate: with the LLN 2.5% cut‐off, which may be preferable for population studies, the prevalence of airway obstruction was considerably lower, at 12.2%. In addition, the choice of criterion (LLN or fixed FEV1/FVC ratio) influences the estimated prevalence of airway obstruction, especially with increasing age.
[ "METHODS AND DATA", "LOFUS", "Spirometry", "Other variables", "Data analysis", "Ethics", "Participant characteristics", "Respiratory disease", "Spirometry results", "ETHICS STATEMENT", "AUTHOR CONTRIBUTIONS" ]
[ "LOFUS LOFUS is a household‐based study where households of randomly selected persons aged 18 and above were invited to participate.\n14\n The data collection encompassed self‐administered, age‐specific questionnaires on social, mental and physical health and lifestyle factors; anthropometric and physiological measurements undertaken in the study clinic; and collection of biological samples. The data collection started in February 2016 and ended February 2020.\nLOFUS is a household‐based study where households of randomly selected persons aged 18 and above were invited to participate.\n14\n The data collection encompassed self‐administered, age‐specific questionnaires on social, mental and physical health and lifestyle factors; anthropometric and physiological measurements undertaken in the study clinic; and collection of biological samples. The data collection started in February 2016 and ended February 2020.\nSpirometry Lung function was measured by trained healthcare professionals using the MicroLoop Handheld Spirometer™ and SpiroUSB™ with Spirometry PC Software (CareFusion Corp., USA). Sex, height and ethnic origin (Caucasian or Asian) were entered into the software, and the spirometry was performed in a standing position (if possible) with the use of a nose clip. There were no restrictions on behaviour or medication prior to the measurement, and bronchodilator was not administered prior to spirometry. The spirometer was calibrated once a week. Three sets of values were obtained for FEV1,FVC, and as a criterion for correct performance, the two highest measurements might differ only by ≤0.150 L, and all measurements should be defined as ‘Good blow’ or ‘Short blow’ by the spirometer. The highest value of both FEV1 and FVC for each participant was used in the analysis, and the ratio of FEV1 to FVC (FEV1/FVC) was calculated for each participant.\nLung function was measured by trained healthcare professionals using the MicroLoop Handheld Spirometer™ and SpiroUSB™ with Spirometry PC Software (CareFusion Corp., USA). Sex, height and ethnic origin (Caucasian or Asian) were entered into the software, and the spirometry was performed in a standing position (if possible) with the use of a nose clip. There were no restrictions on behaviour or medication prior to the measurement, and bronchodilator was not administered prior to spirometry. The spirometer was calibrated once a week. Three sets of values were obtained for FEV1,FVC, and as a criterion for correct performance, the two highest measurements might differ only by ≤0.150 L, and all measurements should be defined as ‘Good blow’ or ‘Short blow’ by the spirometer. The highest value of both FEV1 and FVC for each participant was used in the analysis, and the ratio of FEV1 to FVC (FEV1/FVC) was calculated for each participant.\nOther variables We examined a number of potential determinants of lung function selected from the literature. From the measurements in the clinic, we used information on height (cm), waist circumference (cm) and weight (kg). Body mass index (BMI) was calculated as weight/height2 and categorized into underweight (<18.5), normal weight (18.5–24.9), overweight (25.0–29.9) and obese (≥30.0).\n15\n Waist circumference was grouped into normal or large (with limits of >94 cm for men and >80 cm for women).\n16\n\n\nFrom the questionnaire, we used self‐reported smoking data categorized as current (daily or sometimes), former and never. 
Number of pack‐years was calculated for current smokers, with 1 pack‐year corresponding to 20 cigarettes or equivalent per day for 1 year. School education was divided into ≤7 years; 8–9 years; 10–11 years; graduated high school; under education; and other. Vocational education was divided into primary school only; semiskilled worker, for example, truck driver; vocational training, for example, hairdresser; short higher education, for example, laboratory worker; middle higher education, for example, school teacher; long higher education, for example, master's degree or equivalent; and other education. Physical activity was based on the following question: ‘How would you characterize your leisure time physical activity within the last year?’ and classified as sedentary, moderate, heavy activity or heavy activity at competition level.\n17\n\n\nSelf‐reported prevalent morbidity was based on the following question from the LOFUS study questionnaire: ‘Do you suffer from any of the following diseases?’ Participants were asked to mark either yes or no for each of the following categories: ‘asthma’, ‘chronic bronchitis, hyperinflated lungs, chronic obstructive pulmonary disease (COPD), or emphysema’, ‘heart attack’, ‘atherosclerosis in the heart’, ‘angina’, ‘hypertension’, ‘diabetes’ and ‘cancer’. Information was merged with self‐reported daily medication use and categorized as asthma, other respiratory disease (including COPD, chronic bronchitis and emphysema), allergy, hypertension, diabetes, cancer or ischaemic heart disease.\nWe examined a number of potential determinants of lung function selected from the literature. From the measurements in the clinic, we used information on height (cm), waist circumference (cm) and weight (kg). Body mass index (BMI) was calculated as weight/height2 and categorized into underweight (<18.5), normal weight (18.5–24.9), overweight (25.0–29.9) and obese (≥30.0).\n15\n Waist circumference was grouped into normal or large (with limits of >94 cm for men and >80 cm for women).\n16\n\n\nFrom the questionnaire, we used self‐reported smoking data categorized as current (daily or sometimes), former and never. Number of pack‐years was calculated for current smokers, with 1 pack‐year corresponding to 20 cigarettes or equivalent per day for 1 year. School education was divided into ≤7 years; 8–9 years; 10–11 years; graduated high school; under education; and other. Vocational education was divided into primary school only; semiskilled worker, for example, truck driver; vocational training, for example, hairdresser; short higher education, for example, laboratory worker; middle higher education, for example, school teacher; long higher education, for example, master's degree or equivalent; and other education. Physical activity was based on the following question: ‘How would you characterize your leisure time physical activity within the last year?’ and classified as sedentary, moderate, heavy activity or heavy activity at competition level.\n17\n\n\nSelf‐reported prevalent morbidity was based on the following question from the LOFUS study questionnaire: ‘Do you suffer from any of the following diseases?’ Participants were asked to mark either yes or no for each of the following categories: ‘asthma’, ‘chronic bronchitis, hyperinflated lungs, chronic obstructive pulmonary disease (COPD), or emphysema’, ‘heart attack’, ‘atherosclerosis in the heart’, ‘angina’, ‘hypertension’, ‘diabetes’ and ‘cancer’. 
Information was merged with self‐reported daily medication use and categorized as asthma, other respiratory disease (including COPD, chronic bronchitis and emphysema), allergy, hypertension, diabetes, cancer or ischaemic heart disease.\nData analysis We included all 16 142 adults (i.e. aged 18 years and above) from the LOFUS study in the descriptive analyses of participant characteristics. Mean and standard deviation (SD) were calculated for characteristics of all participants. Before analysis, we checked for differences between those with successful and not successful spirometry using Kruskal–Wallis test and Pearson's X\n2 test.\nWe then compared spirometry results with the reference values stated for the Danish normal population by Løkke et al.\n18\n For this part of the analysis, we excluded participants with age <20 years, height <150 cm (male and female) or <155 cm (male) to match the population in the reference material (Figure 1). We considered a reduction in the ratio FEV1/FVC to be indicative of airway obstruction and compared three different cut‐offs: FEV1/FVC < 70%, FEV1/FVC (LLN 5%) and LLN 2.5%. LLN 5% was stated for the Danish normal population by Løkke et al.\n18\n The LLN 2.5% was not stated in the study but calculated by subtracting 1.96 × residual SD from the predicted mean (assuming a Gaussian distribution of the residuals). For each participant, we determined if FEV1, FVC and FEV1/FVC were above or below the LLN 5% and the LLN 2.5% corresponding to that participant's age, sex, and height. We then calculated the proportions with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70%. The proportion with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70% in subgroups of participants defined by sex, age and other variables were tabulated.\nFlowchart of the study population among 16 142 individuals aged ≥18 years in the LOFUS study\nAnalyses were carried out for all participants ≥20 years. As COPD is usually diagnosed in middle‐aged or older adults, we also carried out analysis without the younger age groups. We chose 35 years as cut‐off, because this was used in a previous Danish prevalence study\n7\n (Figure 1). For the next part of the analysis, we excluded participants <35 years, leaving 11 709 participants for analysis. The proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70% in subgroups of participants were tabulated. Logistic regression was used to evaluate variables associated with airway obstruction defined as the proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70%. Analyses were performed using STATA/SE 15.1.\nWe included all 16 142 adults (i.e. aged 18 years and above) from the LOFUS study in the descriptive analyses of participant characteristics. Mean and standard deviation (SD) were calculated for characteristics of all participants. Before analysis, we checked for differences between those with successful and not successful spirometry using Kruskal–Wallis test and Pearson's X\n2 test.\nWe then compared spirometry results with the reference values stated for the Danish normal population by Løkke et al.\n18\n For this part of the analysis, we excluded participants with age <20 years, height <150 cm (male and female) or <155 cm (male) to match the population in the reference material (Figure 1). 
We considered a reduction in the ratio FEV1/FVC to be indicative of airway obstruction and compared three different cut‐offs: FEV1/FVC < 70%, FEV1/FVC (LLN 5%) and LLN 2.5%. LLN 5% was stated for the Danish normal population by Løkke et al.\n18\n The LLN 2.5% was not stated in the study but calculated by subtracting 1.96 × residual SD from the predicted mean (assuming a Gaussian distribution of the residuals). For each participant, we determined if FEV1, FVC and FEV1/FVC were above or below the LLN 5% and the LLN 2.5% corresponding to that participant's age, sex, and height. We then calculated the proportions with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70%. The proportion with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70% in subgroups of participants defined by sex, age and other variables were tabulated.\nFlowchart of the study population among 16 142 individuals aged ≥18 years in the LOFUS study\nAnalyses were carried out for all participants ≥20 years. As COPD is usually diagnosed in middle‐aged or older adults, we also carried out analysis without the younger age groups. We chose 35 years as cut‐off, because this was used in a previous Danish prevalence study\n7\n (Figure 1). For the next part of the analysis, we excluded participants <35 years, leaving 11 709 participants for analysis. The proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70% in subgroups of participants were tabulated. Logistic regression was used to evaluate variables associated with airway obstruction defined as the proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70%. Analyses were performed using STATA/SE 15.1.\nEthics Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).\nInformed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).", "LOFUS is a household‐based study where households of randomly selected persons aged 18 and above were invited to participate.\n14\n The data collection encompassed self‐administered, age‐specific questionnaires on social, mental and physical health and lifestyle factors; anthropometric and physiological measurements undertaken in the study clinic; and collection of biological samples. The data collection started in February 2016 and ended February 2020.", "Lung function was measured by trained healthcare professionals using the MicroLoop Handheld Spirometer™ and SpiroUSB™ with Spirometry PC Software (CareFusion Corp., USA). Sex, height and ethnic origin (Caucasian or Asian) were entered into the software, and the spirometry was performed in a standing position (if possible) with the use of a nose clip. There were no restrictions on behaviour or medication prior to the measurement, and bronchodilator was not administered prior to spirometry. The spirometer was calibrated once a week. Three sets of values were obtained for FEV1,FVC, and as a criterion for correct performance, the two highest measurements might differ only by ≤0.150 L, and all measurements should be defined as ‘Good blow’ or ‘Short blow’ by the spirometer. 
The highest value of both FEV1 and FVC for each participant was used in the analysis, and the ratio of FEV1 to FVC (FEV1/FVC) was calculated for each participant.", "We examined a number of potential determinants of lung function selected from the literature. From the measurements in the clinic, we used information on height (cm), waist circumference (cm) and weight (kg). Body mass index (BMI) was calculated as weight/height2 and categorized into underweight (<18.5), normal weight (18.5–24.9), overweight (25.0–29.9) and obese (≥30.0).\n15\n Waist circumference was grouped into normal or large (with limits of >94 cm for men and >80 cm for women).\n16\n\n\nFrom the questionnaire, we used self‐reported smoking data categorized as current (daily or sometimes), former and never. Number of pack‐years was calculated for current smokers, with 1 pack‐year corresponding to 20 cigarettes or equivalent per day for 1 year. School education was divided into ≤7 years; 8–9 years; 10–11 years; graduated high school; under education; and other. Vocational education was divided into primary school only; semiskilled worker, for example, truck driver; vocational training, for example, hairdresser; short higher education, for example, laboratory worker; middle higher education, for example, school teacher; long higher education, for example, master's degree or equivalent; and other education. Physical activity was based on the following question: ‘How would you characterize your leisure time physical activity within the last year?’ and classified as sedentary, moderate, heavy activity or heavy activity at competition level.\n17\n\n\nSelf‐reported prevalent morbidity was based on the following question from the LOFUS study questionnaire: ‘Do you suffer from any of the following diseases?’ Participants were asked to mark either yes or no for each of the following categories: ‘asthma’, ‘chronic bronchitis, hyperinflated lungs, chronic obstructive pulmonary disease (COPD), or emphysema’, ‘heart attack’, ‘atherosclerosis in the heart’, ‘angina’, ‘hypertension’, ‘diabetes’ and ‘cancer’. Information was merged with self‐reported daily medication use and categorized as asthma, other respiratory disease (including COPD, chronic bronchitis and emphysema), allergy, hypertension, diabetes, cancer or ischaemic heart disease.", "We included all 16 142 adults (i.e. aged 18 years and above) from the LOFUS study in the descriptive analyses of participant characteristics. Mean and standard deviation (SD) were calculated for characteristics of all participants. Before analysis, we checked for differences between those with successful and not successful spirometry using Kruskal–Wallis test and Pearson's X\n2 test.\nWe then compared spirometry results with the reference values stated for the Danish normal population by Løkke et al.\n18\n For this part of the analysis, we excluded participants with age <20 years, height <150 cm (male and female) or <155 cm (male) to match the population in the reference material (Figure 1). We considered a reduction in the ratio FEV1/FVC to be indicative of airway obstruction and compared three different cut‐offs: FEV1/FVC < 70%, FEV1/FVC (LLN 5%) and LLN 2.5%. LLN 5% was stated for the Danish normal population by Løkke et al.\n18\n The LLN 2.5% was not stated in the study but calculated by subtracting 1.96 × residual SD from the predicted mean (assuming a Gaussian distribution of the residuals). 
For each participant, we determined if FEV1, FVC and FEV1/FVC were above or below the LLN 5% and the LLN 2.5% corresponding to that participant's age, sex, and height. We then calculated the proportions with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70%. The proportion with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70% in subgroups of participants defined by sex, age and other variables were tabulated.\nFlowchart of the study population among 16 142 individuals aged ≥18 years in the LOFUS study\nAnalyses were carried out for all participants ≥20 years. As COPD is usually diagnosed in middle‐aged or older adults, we also carried out analysis without the younger age groups. We chose 35 years as cut‐off, because this was used in a previous Danish prevalence study\n7\n (Figure 1). For the next part of the analysis, we excluded participants <35 years, leaving 11 709 participants for analysis. The proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70% in subgroups of participants were tabulated. Logistic regression was used to evaluate variables associated with airway obstruction defined as the proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70%. Analyses were performed using STATA/SE 15.1.", "Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).", "Spirometry measurements were tried for all 16 142 participants, 19 participants were excluded due to missing height, and of the 16 123 participants, 13 315 (82.5%) completed three acceptable measurements (Figure 1). Participants with unsuccessful spirometry were more likely to be men, old, overweight, sedentary, having large waist circumference, low school/vocational education, hypertension or ischaemic heart disease and less likely having asthma and other respiratory disease.\nThe population with successful spirometry was on average 59.0 years with 54.2% being women. The majority were overweight (63.4%), had education below high school graduation (62.5%), reported moderate physical activity (61.3%) or were former or current smokers (51.8%), and 26.8% reported to have hypertension (Table S1).", "The participants with other respiratory disease (n = 1028) was on average 61.2 years with 56.4% being women (Table S2). 75.1% reported also to have asthma. Compared to the total group with successful spirometry (Table S1), the participants with self‐reported respiratory disease was more likely to be older, have higher waist circumference, lower school education, more sedentary activity, be current daily smokers with more pack‐years and have asthma, allergy, hypertension, diabetes, cancer and ischaemic heart disease.", "Table 1 shows proportions of participants who met the three different criteria for airway obstruction, by age categories, anthropometric data, educational status and smoking status in participants aged 35 years or older. Overall 12.2% of participants had FEV1/FVC below the LLN 2.5% cut‐off, 19.0% below the LLN 5%, and 19.0% below 70%, but with variation by age group. Up until age 49, the proportions meeting the FEV1/FVC < 70% and the LLN 2.5% criteria were similar, but with increasing age, the proportion with FEV1/FVC < 70% increased more than the proportion meeting the LLN criteria. 
As a result, the difference between the proportions of FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% increased in the oldest age groups.\nProportion of participants with FEV1/FVC < LLN 2.5%, LLN 5% and <70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study\n\nNotes: LLN is defined by normal values in a Danish population.\n17\n Values are number (frequencies) for categorical values. P‐values were calculated with Pearson's X\n2 test for categorical values.\nOther respiratory disease includes COPD, chronic bronchitis and emphysema.\nAs expected, proportions were higher in smokers, especially current smokers and those with most pack‐years, and among participants reporting other respiratory disease. In addition, spirometry results below expected were more common among those with fewer years of schooling and those reporting sedentary or moderate physical activity. The proportion of participants with FEV1/FVC < 70% and FEV1/FVC LLN 2.5% was highest among those with BMI < 18.5 (42.6% and 27.8%, respectively), low school education (30.8% and 16.8%), sedentary activity (25.0% and 17.2%), current daily smoking (33.2% and 25.6%), asthma (49.2% and 41.3%) and other respiratory disease (57.6% and 46.7%).\nThe highest proportion of participants with FEV1/FVC < 70% was observed among current daily smokers (33.2%) and among those with known asthma (49.2%) or other respiratory disease (57.6%). The corresponding values for FEV1/FVC below LLN 2.5 were 25.6%, 41.3% and 46.7%, respectively.\nTable 2 shows results of univariable and multivariable analyses for proportion with airway obstruction using the different criteria. The strongest association was seen in daily smokers, followed by sometimes smokers and former smokers. Increasing number of pack‐years was also strongly associated with airway obstruction by either criterion. Airway obstruction was associated with increasing age for the fixed ratio criterion but not for LLN. BMI < 18.5 was associated with airway obstruction, as was sedentary lifestyle. Asthma and other respiratory disease were associated with airway obstruction, while other chronic diseases were not. No clear association was found for educational level or waist circumference.\nAssociation of characteristics and FEV1/FVC < LLN 2.5%, FEV1/FVC < LLN 5% and proportion with FEV1/FVC < 70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study according to age\n\nNote: LLN is defined by normal values in a Danish population.\n17\n\n\nCumulative smoking (pack‐years) was excluded from the multivariable models, due to missing values for all former smokers.\nIncludes chronic obstructive pulmonary disease (COPD), chronic bronchitis, hyperinflated lungs and emphysema.\nTable S3 shows additional spirometry results by sex for all participants aged 20 years and above. For men (n = 5984), FEV1 decreased from an average of 4.39 L for individuals aged 20 years to 2.29 L for individuals aged 80 + years (Table S3A). For women (n = 7015), it decreased from 3.24 L for people aged 20 years to 1.61 L for people aged 80 years (Table S3B). Across all ages, men had higher FEV1 than women. For men, the FEV1/FVC < 70% was 11.7 percentage points and 16.5 percentage points higher than FEV1/FVC < LLN 2.5% within age groups 70–79 and 80 + years, respectively. For women, the difference was 17.3 and 29.9 percentage points, respectively, in the two oldest age groups. 
The proportion of participants aged 20 and above with airway obstruction by the FEV1/FVC < LLN 5%, FEV1/FVC < LLN 2.5% and FEV1/FVC < 70% criteria in different subgroups of participants is shown in Tables S4A (men) and S4B (women).", "Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).", "Katja Kemp Jacobsen (KKJ), Randi Jepsen (RJ), Uffe Bodtger (UB), Knud Rasmussen (KR) and Gry St‐Martin (GSM) were involved in study design. RJ was involved in data collection. KKJ, RJ and GSM were involved with data analysis. RJ was involved with data curation. KKJ, RJ and GSM were involved with writing the original draft. KKJ, RJ, UB, KR and GSM were involved with writing—review and editing. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "BACKGROUND", "METHODS AND DATA", "LOFUS", "Spirometry", "Other variables", "Data analysis", "Ethics", "RESULTS", "Participant characteristics", "Respiratory disease", "Spirometry results", "DISCUSSION", "CONCLUSION", "CONFLICT OF INTEREST", "ETHICS STATEMENT", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "Worldwide, lung disease is a major cause of morbidity and mortality.\n1\n Lung function declines with age,\n2\n and smoking and environmental pollution can cause lung disease, accelerated loss of lung function and premature death.\n3\n, \n4\n, \n5\n, \n6\n\n\nIn Denmark, prevalence of chronic obstructive pulmonary disease (COPD) has been estimated at 17.5% of adults aged 35 years or older,\n7\n with estimated 2.0% having severe COPD.\n7\n This estimate derived from a large urban population study using a spirometry cut‐off intended for clinical diagnosis, that is, in a clinical setting with suspicion of airway disease. This may lead to overestimation of COPD prevalence.\n8\n, \n9\n In addition, the choice of spirometry criterion to most correctly detect airway obstruction has been debated in recent decades. Detection of reduced ratio between forced expiratory volume 1 s (FEV1) and forced vital capacity (FVC) indicates airway obstruction, and while a fixed ratio criterion of FEV1/FVC < 0.70 is often used, studies show that FEV1/FVC ratio below a lower limit of normal (LLN) may be more suitable.\n8\n, \n10\n\n\nLolland‐Falster is a mixed rural‐provincial area of 103 000 inhabitants, situated on two main and several small islands in the southern part of Denmark, a small Scandinavian, high‐income country covering 43 000 km2 and a population of 5.8 million. Although the Danish population is genetically relatively homogeneous, population health varies across regions of the country, with life expectancy in Lolland‐Falster hree 3 years below the national average and 5 years below the municipalities with highest life expectancy.\n11\n, \n12\n The region scores worse than the national average on several health indicators, including diabetes prevalence, obesity, smoking and COPD.\n13\n\n\nThe Lolland‐Falster Health Study (LOFUS) is a population‐based, prospective cohort study designed to investigate determinants of population health in this area.\n14\n In this paper, we report LOFUS data on lung function measurement with spirometry as well as anthropometric data and questionnaire‐based information on smoking and other risk factors. The aim is to describe the spirometry measurements and results in adults participating in LOFUS and compare it to similar findings from urban Danish population. In addition, we explore how different spirometric criteria affect the estimated prevalence of airway obstruction.", "LOFUS LOFUS is a household‐based study where households of randomly selected persons aged 18 and above were invited to participate.\n14\n The data collection encompassed self‐administered, age‐specific questionnaires on social, mental and physical health and lifestyle factors; anthropometric and physiological measurements undertaken in the study clinic; and collection of biological samples. The data collection started in February 2016 and ended February 2020.\nLOFUS is a household‐based study where households of randomly selected persons aged 18 and above were invited to participate.\n14\n The data collection encompassed self‐administered, age‐specific questionnaires on social, mental and physical health and lifestyle factors; anthropometric and physiological measurements undertaken in the study clinic; and collection of biological samples. The data collection started in February 2016 and ended February 2020.\nSpirometry Lung function was measured by trained healthcare professionals using the MicroLoop Handheld Spirometer™ and SpiroUSB™ with Spirometry PC Software (CareFusion Corp., USA). 
Sex, height and ethnic origin (Caucasian or Asian) were entered into the software, and the spirometry was performed in a standing position (if possible) with the use of a nose clip. There were no restrictions on behaviour or medication prior to the measurement, and bronchodilator was not administered prior to spirometry. The spirometer was calibrated once a week. Three sets of values were obtained for FEV1,FVC, and as a criterion for correct performance, the two highest measurements might differ only by ≤0.150 L, and all measurements should be defined as ‘Good blow’ or ‘Short blow’ by the spirometer. The highest value of both FEV1 and FVC for each participant was used in the analysis, and the ratio of FEV1 to FVC (FEV1/FVC) was calculated for each participant.\nLung function was measured by trained healthcare professionals using the MicroLoop Handheld Spirometer™ and SpiroUSB™ with Spirometry PC Software (CareFusion Corp., USA). Sex, height and ethnic origin (Caucasian or Asian) were entered into the software, and the spirometry was performed in a standing position (if possible) with the use of a nose clip. There were no restrictions on behaviour or medication prior to the measurement, and bronchodilator was not administered prior to spirometry. The spirometer was calibrated once a week. Three sets of values were obtained for FEV1,FVC, and as a criterion for correct performance, the two highest measurements might differ only by ≤0.150 L, and all measurements should be defined as ‘Good blow’ or ‘Short blow’ by the spirometer. The highest value of both FEV1 and FVC for each participant was used in the analysis, and the ratio of FEV1 to FVC (FEV1/FVC) was calculated for each participant.\nOther variables We examined a number of potential determinants of lung function selected from the literature. From the measurements in the clinic, we used information on height (cm), waist circumference (cm) and weight (kg). Body mass index (BMI) was calculated as weight/height2 and categorized into underweight (<18.5), normal weight (18.5–24.9), overweight (25.0–29.9) and obese (≥30.0).\n15\n Waist circumference was grouped into normal or large (with limits of >94 cm for men and >80 cm for women).\n16\n\n\nFrom the questionnaire, we used self‐reported smoking data categorized as current (daily or sometimes), former and never. Number of pack‐years was calculated for current smokers, with 1 pack‐year corresponding to 20 cigarettes or equivalent per day for 1 year. School education was divided into ≤7 years; 8–9 years; 10–11 years; graduated high school; under education; and other. Vocational education was divided into primary school only; semiskilled worker, for example, truck driver; vocational training, for example, hairdresser; short higher education, for example, laboratory worker; middle higher education, for example, school teacher; long higher education, for example, master's degree or equivalent; and other education. 
Physical activity was based on the following question: ‘How would you characterize your leisure time physical activity within the last year?’ and classified as sedentary, moderate, heavy activity or heavy activity at competition level.\n17\n\n\nSelf‐reported prevalent morbidity was based on the following question from the LOFUS study questionnaire: ‘Do you suffer from any of the following diseases?’ Participants were asked to mark either yes or no for each of the following categories: ‘asthma’, ‘chronic bronchitis, hyperinflated lungs, chronic obstructive pulmonary disease (COPD), or emphysema’, ‘heart attack’, ‘atherosclerosis in the heart’, ‘angina’, ‘hypertension’, ‘diabetes’ and ‘cancer’. Information was merged with self‐reported daily medication use and categorized as asthma, other respiratory disease (including COPD, chronic bronchitis and emphysema), allergy, hypertension, diabetes, cancer or ischaemic heart disease.\nWe examined a number of potential determinants of lung function selected from the literature. From the measurements in the clinic, we used information on height (cm), waist circumference (cm) and weight (kg). Body mass index (BMI) was calculated as weight/height2 and categorized into underweight (<18.5), normal weight (18.5–24.9), overweight (25.0–29.9) and obese (≥30.0).\n15\n Waist circumference was grouped into normal or large (with limits of >94 cm for men and >80 cm for women).\n16\n\n\nFrom the questionnaire, we used self‐reported smoking data categorized as current (daily or sometimes), former and never. Number of pack‐years was calculated for current smokers, with 1 pack‐year corresponding to 20 cigarettes or equivalent per day for 1 year. School education was divided into ≤7 years; 8–9 years; 10–11 years; graduated high school; under education; and other. Vocational education was divided into primary school only; semiskilled worker, for example, truck driver; vocational training, for example, hairdresser; short higher education, for example, laboratory worker; middle higher education, for example, school teacher; long higher education, for example, master's degree or equivalent; and other education. Physical activity was based on the following question: ‘How would you characterize your leisure time physical activity within the last year?’ and classified as sedentary, moderate, heavy activity or heavy activity at competition level.\n17\n\n\nSelf‐reported prevalent morbidity was based on the following question from the LOFUS study questionnaire: ‘Do you suffer from any of the following diseases?’ Participants were asked to mark either yes or no for each of the following categories: ‘asthma’, ‘chronic bronchitis, hyperinflated lungs, chronic obstructive pulmonary disease (COPD), or emphysema’, ‘heart attack’, ‘atherosclerosis in the heart’, ‘angina’, ‘hypertension’, ‘diabetes’ and ‘cancer’. Information was merged with self‐reported daily medication use and categorized as asthma, other respiratory disease (including COPD, chronic bronchitis and emphysema), allergy, hypertension, diabetes, cancer or ischaemic heart disease.\nData analysis We included all 16 142 adults (i.e. aged 18 years and above) from the LOFUS study in the descriptive analyses of participant characteristics. Mean and standard deviation (SD) were calculated for characteristics of all participants. 
Before analysis, we checked for differences between those with successful and not successful spirometry using Kruskal–Wallis test and Pearson's X\n2 test.\nWe then compared spirometry results with the reference values stated for the Danish normal population by Løkke et al.\n18\n For this part of the analysis, we excluded participants with age <20 years, height <150 cm (male and female) or <155 cm (male) to match the population in the reference material (Figure 1). We considered a reduction in the ratio FEV1/FVC to be indicative of airway obstruction and compared three different cut‐offs: FEV1/FVC < 70%, FEV1/FVC (LLN 5%) and LLN 2.5%. LLN 5% was stated for the Danish normal population by Løkke et al.\n18\n The LLN 2.5% was not stated in the study but calculated by subtracting 1.96 × residual SD from the predicted mean (assuming a Gaussian distribution of the residuals). For each participant, we determined if FEV1, FVC and FEV1/FVC were above or below the LLN 5% and the LLN 2.5% corresponding to that participant's age, sex, and height. We then calculated the proportions with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70%. The proportion with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70% in subgroups of participants defined by sex, age and other variables were tabulated.\nFlowchart of the study population among 16 142 individuals aged ≥18 years in the LOFUS study\nAnalyses were carried out for all participants ≥20 years. As COPD is usually diagnosed in middle‐aged or older adults, we also carried out analysis without the younger age groups. We chose 35 years as cut‐off, because this was used in a previous Danish prevalence study\n7\n (Figure 1). For the next part of the analysis, we excluded participants <35 years, leaving 11 709 participants for analysis. The proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70% in subgroups of participants were tabulated. Logistic regression was used to evaluate variables associated with airway obstruction defined as the proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70%. Analyses were performed using STATA/SE 15.1.\nWe included all 16 142 adults (i.e. aged 18 years and above) from the LOFUS study in the descriptive analyses of participant characteristics. Mean and standard deviation (SD) were calculated for characteristics of all participants. Before analysis, we checked for differences between those with successful and not successful spirometry using Kruskal–Wallis test and Pearson's X\n2 test.\nWe then compared spirometry results with the reference values stated for the Danish normal population by Løkke et al.\n18\n For this part of the analysis, we excluded participants with age <20 years, height <150 cm (male and female) or <155 cm (male) to match the population in the reference material (Figure 1). We considered a reduction in the ratio FEV1/FVC to be indicative of airway obstruction and compared three different cut‐offs: FEV1/FVC < 70%, FEV1/FVC (LLN 5%) and LLN 2.5%. LLN 5% was stated for the Danish normal population by Løkke et al.\n18\n The LLN 2.5% was not stated in the study but calculated by subtracting 1.96 × residual SD from the predicted mean (assuming a Gaussian distribution of the residuals). For each participant, we determined if FEV1, FVC and FEV1/FVC were above or below the LLN 5% and the LLN 2.5% corresponding to that participant's age, sex, and height. 
We then calculated the proportions with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70%. The proportion with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70% in subgroups of participants defined by sex, age and other variables were tabulated.\nFlowchart of the study population among 16 142 individuals aged ≥18 years in the LOFUS study\nAnalyses were carried out for all participants ≥20 years. As COPD is usually diagnosed in middle‐aged or older adults, we also carried out analysis without the younger age groups. We chose 35 years as cut‐off, because this was used in a previous Danish prevalence study\n7\n (Figure 1). For the next part of the analysis, we excluded participants <35 years, leaving 11 709 participants for analysis. The proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70% in subgroups of participants were tabulated. Logistic regression was used to evaluate variables associated with airway obstruction defined as the proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70%. Analyses were performed using STATA/SE 15.1.\nEthics Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).\nInformed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).", "LOFUS is a household‐based study where households of randomly selected persons aged 18 and above were invited to participate.\n14\n The data collection encompassed self‐administered, age‐specific questionnaires on social, mental and physical health and lifestyle factors; anthropometric and physiological measurements undertaken in the study clinic; and collection of biological samples. The data collection started in February 2016 and ended February 2020.", "Lung function was measured by trained healthcare professionals using the MicroLoop Handheld Spirometer™ and SpiroUSB™ with Spirometry PC Software (CareFusion Corp., USA). Sex, height and ethnic origin (Caucasian or Asian) were entered into the software, and the spirometry was performed in a standing position (if possible) with the use of a nose clip. There were no restrictions on behaviour or medication prior to the measurement, and bronchodilator was not administered prior to spirometry. The spirometer was calibrated once a week. Three sets of values were obtained for FEV1,FVC, and as a criterion for correct performance, the two highest measurements might differ only by ≤0.150 L, and all measurements should be defined as ‘Good blow’ or ‘Short blow’ by the spirometer. The highest value of both FEV1 and FVC for each participant was used in the analysis, and the ratio of FEV1 to FVC (FEV1/FVC) was calculated for each participant.", "We examined a number of potential determinants of lung function selected from the literature. From the measurements in the clinic, we used information on height (cm), waist circumference (cm) and weight (kg). 
Body mass index (BMI) was calculated as weight/height2 and categorized into underweight (<18.5), normal weight (18.5–24.9), overweight (25.0–29.9) and obese (≥30.0).\n15\n Waist circumference was grouped into normal or large (with limits of >94 cm for men and >80 cm for women).\n16\n\n\nFrom the questionnaire, we used self‐reported smoking data categorized as current (daily or sometimes), former and never. Number of pack‐years was calculated for current smokers, with 1 pack‐year corresponding to 20 cigarettes or equivalent per day for 1 year. School education was divided into ≤7 years; 8–9 years; 10–11 years; graduated high school; under education; and other. Vocational education was divided into primary school only; semiskilled worker, for example, truck driver; vocational training, for example, hairdresser; short higher education, for example, laboratory worker; middle higher education, for example, school teacher; long higher education, for example, master's degree or equivalent; and other education. Physical activity was based on the following question: ‘How would you characterize your leisure time physical activity within the last year?’ and classified as sedentary, moderate, heavy activity or heavy activity at competition level.\n17\n\n\nSelf‐reported prevalent morbidity was based on the following question from the LOFUS study questionnaire: ‘Do you suffer from any of the following diseases?’ Participants were asked to mark either yes or no for each of the following categories: ‘asthma’, ‘chronic bronchitis, hyperinflated lungs, chronic obstructive pulmonary disease (COPD), or emphysema’, ‘heart attack’, ‘atherosclerosis in the heart’, ‘angina’, ‘hypertension’, ‘diabetes’ and ‘cancer’. Information was merged with self‐reported daily medication use and categorized as asthma, other respiratory disease (including COPD, chronic bronchitis and emphysema), allergy, hypertension, diabetes, cancer or ischaemic heart disease.", "We included all 16 142 adults (i.e. aged 18 years and above) from the LOFUS study in the descriptive analyses of participant characteristics. Mean and standard deviation (SD) were calculated for characteristics of all participants. Before analysis, we checked for differences between those with successful and not successful spirometry using Kruskal–Wallis test and Pearson's X\n2 test.\nWe then compared spirometry results with the reference values stated for the Danish normal population by Løkke et al.\n18\n For this part of the analysis, we excluded participants with age <20 years, height <150 cm (male and female) or <155 cm (male) to match the population in the reference material (Figure 1). We considered a reduction in the ratio FEV1/FVC to be indicative of airway obstruction and compared three different cut‐offs: FEV1/FVC < 70%, FEV1/FVC (LLN 5%) and LLN 2.5%. LLN 5% was stated for the Danish normal population by Løkke et al.\n18\n The LLN 2.5% was not stated in the study but calculated by subtracting 1.96 × residual SD from the predicted mean (assuming a Gaussian distribution of the residuals). For each participant, we determined if FEV1, FVC and FEV1/FVC were above or below the LLN 5% and the LLN 2.5% corresponding to that participant's age, sex, and height. We then calculated the proportions with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70%. 
The proportion with FEV1, FVC and FEV1/FVC below LLN and the proportion with FEV1/FVC < 70% in subgroups of participants defined by sex, age and other variables were tabulated.\nFlowchart of the study population among 16 142 individuals aged ≥18 years in the LOFUS study\nAnalyses were carried out for all participants ≥20 years. As COPD is usually diagnosed in middle‐aged or older adults, we also carried out analysis without the younger age groups. We chose 35 years as cut‐off, because this was used in a previous Danish prevalence study\n7\n (Figure 1). For the next part of the analysis, we excluded participants <35 years, leaving 11 709 participants for analysis. The proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70% in subgroups of participants were tabulated. Logistic regression was used to evaluate variables associated with airway obstruction defined as the proportion with FEV1/FVC below LLN 2.5 and 5.0% and the proportion with FEV1/FVC < 70%. Analyses were performed using STATA/SE 15.1.", "Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).", "Participant characteristics Spirometry measurements were tried for all 16 142 participants, 19 participants were excluded due to missing height, and of the 16 123 participants, 13 315 (82.5%) completed three acceptable measurements (Figure 1). Participants with unsuccessful spirometry were more likely to be men, old, overweight, sedentary, having large waist circumference, low school/vocational education, hypertension or ischaemic heart disease and less likely having asthma and other respiratory disease.\nThe population with successful spirometry was on average 59.0 years with 54.2% being women. The majority were overweight (63.4%), had education below high school graduation (62.5%), reported moderate physical activity (61.3%) or were former or current smokers (51.8%), and 26.8% reported to have hypertension (Table S1).\nSpirometry measurements were tried for all 16 142 participants, 19 participants were excluded due to missing height, and of the 16 123 participants, 13 315 (82.5%) completed three acceptable measurements (Figure 1). Participants with unsuccessful spirometry were more likely to be men, old, overweight, sedentary, having large waist circumference, low school/vocational education, hypertension or ischaemic heart disease and less likely having asthma and other respiratory disease.\nThe population with successful spirometry was on average 59.0 years with 54.2% being women. The majority were overweight (63.4%), had education below high school graduation (62.5%), reported moderate physical activity (61.3%) or were former or current smokers (51.8%), and 26.8% reported to have hypertension (Table S1).\nRespiratory disease The participants with other respiratory disease (n = 1028) was on average 61.2 years with 56.4% being women (Table S2). 75.1% reported also to have asthma. 
Compared to the total group with successful spirometry (Table S1), the participants with self‐reported respiratory disease was more likely to be older, have higher waist circumference, lower school education, more sedentary activity, be current daily smokers with more pack‐years and have asthma, allergy, hypertension, diabetes, cancer and ischaemic heart disease.\nThe participants with other respiratory disease (n = 1028) was on average 61.2 years with 56.4% being women (Table S2). 75.1% reported also to have asthma. Compared to the total group with successful spirometry (Table S1), the participants with self‐reported respiratory disease was more likely to be older, have higher waist circumference, lower school education, more sedentary activity, be current daily smokers with more pack‐years and have asthma, allergy, hypertension, diabetes, cancer and ischaemic heart disease.\nSpirometry results Table 1 shows proportions of participants who met the three different criteria for airway obstruction, by age categories, anthropometric data, educational status and smoking status in participants aged 35 years or older. Overall 12.2% of participants had FEV1/FVC below the LLN 2.5% cut‐off, 19.0% below the LLN 5%, and 19.0% below 70%, but with variation by age group. Up until age 49, the proportions meeting the FEV1/FVC < 70% and the LLN 2.5% criteria were similar, but with increasing age, the proportion with FEV1/FVC < 70% increased more than the proportion meeting the LLN criteria. As a result, the difference between the proportions of FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% increased in the oldest age groups.\nProportion of participants with FEV1/FVC < LLN 2.5%, LLN 5% and <70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study\n\nNotes: LLN is defined by normal values in a Danish population.\n17\n Values are number (frequencies) for categorical values. P‐values were calculated with Pearson's X\n2 test for categorical values.\nOther respiratory disease includes COPD, chronic bronchitis and emphysema.\nAs expected, proportions were higher in smokers, especially current smokers and those with most pack‐years, and among participants reporting other respiratory disease. In addition, spirometry results below expected were more common among those with fewer years of schooling and those reporting sedentary or moderate physical activity. The proportion of participants with FEV1/FVC < 70% and FEV1/FVC LLN 2.5% was highest among those with BMI < 18.5 (42.6% and 27.8%, respectively), low school education (30.8% and 16.8%), sedentary activity (25.0% and 17.2%), current daily smoking (33.2% and 25.6%), asthma (49.2% and 41.3%) and other respiratory disease (57.6% and 46.7%).\nThe highest proportion of participants with FEV1/FVC < 70% was observed among current daily smokers (33.2%) and among those with known asthma (49.2%) or other respiratory disease (57.6%). The corresponding values for FEV1/FVC below LLN 2.5 were 25.6%, 41.3% and 46.7%, respectively.\nTable 2 shows results of univariable and multivariable analyses for proportion with airway obstruction using the different criteria. The strongest association was seen in daily smokers, followed by sometimes smokers and former smokers. Increasing number of pack‐years was also strongly associated with airway obstruction by either criterion. Airway obstruction was associated with increasing age for the fixed ratio criterion but not for LLN. BMI < 18.5 was associated with airway obstruction, as was sedentary lifestyle. 
Asthma and other respiratory disease were associated with airway obstruction, while other chronic diseases were not. No clear association was found for educational level or waist circumference.\nAssociation of characteristics and FEV1/FVC < LLN 2.5%, FEV1/FVC < LLN 5% and proportion with FEV1/FVC < 70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study according to age\n\nNote: LLN is defined by normal values in a Danish population.\n17\n\n\nCumulative smoking (pack‐years) was excluded from the multivariable models, due to missing values for all former smokers.\nIncludes chronic obstructive pulmonary disease (COPD), chronic bronchitis, hyperinflated lungs and emphysema.\nTable S3 shows additional spirometry results by sex for all participants aged 20 years and above. For men (n = 5984), FEV1 decreased from an average of 4.39 L for individuals aged 20 years to 2.29 L for individuals aged 80 + years (Table S3A). For women (n = 7015), it decreased from 3.24 L for people aged 20 years to 1.61 L for people aged 80 years (Table S3B). Across all ages, men had higher FEV1 than women. For men, the FEV1/FVC < 70% was 11.7 percentage points and 16.5 percentage points higher than FEV1/FVC < LLN 2.5% within age groups 70–79 and 80 + years, respectively. For women, the difference was 17.3 and 29.9 percentage points, respectively, in the two oldest age groups. The proportion of participants aged 20 and above with airway obstruction by the FEV1/FVC < LLN 5%, FEV1/FVC < LLN 2.5% and FEV1/FVC < 70% criteria in different subgroups of participants is shown in Tables S4A (men) and S4B (women).\nTable 1 shows proportions of participants who met the three different criteria for airway obstruction, by age categories, anthropometric data, educational status and smoking status in participants aged 35 years or older. Overall 12.2% of participants had FEV1/FVC below the LLN 2.5% cut‐off, 19.0% below the LLN 5%, and 19.0% below 70%, but with variation by age group. Up until age 49, the proportions meeting the FEV1/FVC < 70% and the LLN 2.5% criteria were similar, but with increasing age, the proportion with FEV1/FVC < 70% increased more than the proportion meeting the LLN criteria. As a result, the difference between the proportions of FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% increased in the oldest age groups.\nProportion of participants with FEV1/FVC < LLN 2.5%, LLN 5% and <70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study\n\nNotes: LLN is defined by normal values in a Danish population.\n17\n Values are number (frequencies) for categorical values. P‐values were calculated with Pearson's X\n2 test for categorical values.\nOther respiratory disease includes COPD, chronic bronchitis and emphysema.\nAs expected, proportions were higher in smokers, especially current smokers and those with most pack‐years, and among participants reporting other respiratory disease. In addition, spirometry results below expected were more common among those with fewer years of schooling and those reporting sedentary or moderate physical activity. 
, "Spirometry measurements were attempted for all 16 142 participants; 19 were excluded due to missing height, and of the remaining 16 123 participants, 13 315 (82.5%) completed three acceptable measurements (Figure 1). Participants with unsuccessful spirometry were more likely to be male, older, overweight and sedentary, to have a large waist circumference, low school/vocational education, hypertension or ischaemic heart disease, and less likely to have asthma or other respiratory disease.\nThe population with successful spirometry was on average 59.0 years old, with 54.2% being women.
The majority were overweight (63.4%), had education below high school graduation (62.5%), reported moderate physical activity (61.3%) or were former or current smokers (51.8%), and 26.8% reported having hypertension (Table S1)."
, "Among LOFUS participants aged 35 years or older, 19% had airway obstruction judged by the fixed ratio criterion of FEV1/FVC < 70% or by the LLN 5% cut‐off. Using LLN 2.5%, the proportion with airway obstruction was 12.2% overall. While some subgroups had similar proportions with FEV1/FVC below 70% and below LLN 2.5%, in other subgroups the proportions were quite different, which may be due to the different age composition of the groups.\nA previous Danish study from Copenhagen estimated COPD prevalence in subjects aged over 35 years to be 17.4% based on FEV1/FVC < 70%.7 Another Danish study found a prevalence of airway obstruction of 18.0% using FEV1/FVC < 70% and 5.6% using LLN 2.5%.8\nIn a German cohort, the FEV1/FVC < 70% criterion was also shown to identify more individuals than LLN 5%, especially among older participants.19 That study also showed that the proportion of participants reporting a physician diagnosis of COPD or taking lung medication was higher among those with airway obstruction by the LLN 5% criterion than among those meeting the FEV1/FVC < 70% criterion, with the difference increasing with age. The use of the LLN criterion is arguably better, because FEV1/FVC declines with age irrespective of the presence of lung disease.9 In addition, using the LLN, where each participant's spirometry result is compared to the expected value for that participant's age, height and sex, allows comparison of proportions with airway obstruction across subgroups with different age composition.\nOur study showed that fewer years in school were related to a higher prevalence of airway obstruction (Table 1), while in multivariable analysis (Table 2) there was no significant association.
It has previously been found that persons with shorter educational attainment have a higher risk of developing COPD; they also tend to have more severe disease and a higher risk of exacerbations and death, even when controlling for disease severity.20 In light of our results, this may be related to an uneven distribution of risk factors rather than to educational level per se.\nMore intense physical activity was associated with a lower proportion of individuals with airway obstruction. Previous studies have suggested that regular physical activity reduces the risk of COPD exacerbations and that, among smokers, physical activity slows lung function decline.21 In contrast, a Canadian cohort study found large waist circumference to be a strong predictor of impaired lung function, with physical activity acting as a confounder.22 Unlike waist circumference, the proportion with airway obstruction varied with BMI. COPD may be accompanied by weight loss and loss of muscle mass due to systemic involvement in some patients, resulting in low BMI. On the other hand, studies have also found a higher prevalence of obesity among COPD patients,7 and in obese individuals, the FEV1/FVC ratio may remain normal and airway obstruction may be underdiagnosed.23\nDaily smoking was reported by 16.5% of participants. This is lower than the figures from the latest national health survey, in which 21% and 22.8%, respectively, of the population of the two municipalities in Lolland‐Falster reported daily smoking.13 Participants in LOFUS, as in other population health studies, may be healthier and have higher socio‐economic status than the background population, leading to underestimation of the disease burden.24\nStrengths of the study include a large sample size, a high proportion of participants completing spirometry and comprehensive data collection. All data on comorbidities, medication and physical activity were self‐reported and could potentially suffer from reporting bias. For example, most participants reporting other respiratory disease also reported asthma. Although asthma and COPD frequently coexist,25 we cannot be sure that participants distinguished reliably between these conditions. Spirometry was performed without bronchodilator, which in a US study was associated with a 50% higher prevalence of airway obstruction than post‐bronchodilator spirometry.26 Information on medication use on the day of spirometry was not collected, nor was information on symptoms; the estimates of airway obstruction must therefore be interpreted carefully. Nevertheless, the study showed proportions of participants with airway obstruction in agreement with previous Danish studies.7, 8 A possible limitation was the use of 35 years as the lower age cut‐off, while many studies use 40 years, which may affect comparability. However, one of our aims was to relate the LOFUS data to a previous Danish study using 35 years as the cut‐off.7 Table 1 shows that the age group 35–39 had the lowest prevalence of airway obstruction. Recalculating the prevalence of airway obstruction in the 40+ population gave proportions of 19.8%, 19.3% and 12.4%, using FEV1/FVC < 70%, LLN 5.0% and LLN 2.5%, respectively.\nKnowing the prevalence of COPD, including undiagnosed COPD, in a population is important for estimating the potential for prevention, early diagnosis and treatment, and for planning services.
Data from Copenhagen showed that only a minority of people meeting the criteria for COPD received treatment.7 Among LOFUS participants who reported no known respiratory disease, 8.7% were found to have airway obstruction by the LLN 2.5% criterion and 15.2% by the LLN 5% criterion (data not shown). For current daily smokers, these figures were 19.5% and 29.5%, respectively. Although no data on symptoms were available for assessment of clinical diagnoses, these numbers may give an indication of the extent of undiagnosed obstructive pulmonary disease.\nThe prevalence estimate depends on the spirometry criterion used.27 Even when using the LLN 5%, only 58.5% of men and 52.1% of women who reported respiratory disease had signs of airway obstruction. Consequently, a sizeable proportion of participants with self‐reported disease had spirometry results in the normal range. Whether this is due to error in the self‐reported diagnosis, to the effect of treatment of existing disease, or to a combination of both cannot be determined from this study. Nevertheless, it suggests that the prevalence of lung disease may be underestimated when derived from spirometry results in population studies such as LOFUS. Conversely, a Dutch study showed that the population prevalence of COPD may be underestimated if only self‐reported or physician‐diagnosed COPD is included, as a substantial number of cases go undiagnosed.28\nIn a clinical setting, the spirometry result is interpreted in light of patient history and response to bronchodilator treatment. Such information was not available in the LOFUS database, and therefore, using the same cut‐off as in clinical diagnosis but without the clinical information may lead to overestimation of COPD prevalence. In older age groups, the fixed ratio criterion (FEV1/FVC < 70%) may overestimate COPD prevalence even more than in younger age groups. The LLN criterion seems better suited for population studies,25 especially in older participants, and this has been found across geographical locations.28, 29, 30, 31 It has been suggested that the LLN 2.5% cut‐off is more relevant in population studies than 5%.8 In future population studies, inclusion of clinical information and response to bronchodilator treatment would enable researchers to evaluate further which criterion gives the better estimate.", "This was a descriptive cross‐sectional study presenting data on the proportions with signs of airway obstruction in broad subgroups of participants in a population study from a rural part of Denmark, using different cut‐offs for the definition of airway obstruction. Our study shows that, using the same criteria to define airway obstruction as previous Danish studies, the population of Lolland‐Falster has a comparable proportion with airway obstruction, and hence possible COPD: 19% in participants aged 35 years or older. Our study also highlights how choosing a different cut‐off influences the estimate: using the LLN 2.5% cut‐off, which may be preferable for population studies, the prevalence of airway obstruction was considerably lower, at 12.2%. In addition, the choice of criterion, LLN or fixed FEV1/FVC ratio, influences the estimated prevalence of airway obstruction, especially with increasing age.", "All authors have no conflict of interest to declare.", "Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015).
LOFUS is registered in Clinical Trials (NCT02482896).", "Katja Kemp Jacobsen (KKJ), Randi Jepsen (RJ), Uffe Bodtger (UB), Knud Rasmussen (KR) and Gry St‐Martin (GSM) were involved in study design. RJ was involved in data collection. KKJ, RJ and GSM were involved in data analysis. RJ was involved in data curation. KKJ, RJ and GSM were involved in writing the original draft. KKJ, RJ, UB, KR and GSM were involved in reviewing and editing the manuscript. All authors read and approved the final manuscript.", "Table S1. Characteristics of participants with and without spirometry results among 16 123 individuals aged ≥18 years in the LOFUS study.\nTable S2. Characteristics of participants with successful spirometry and self‐reported other respiratory disease (including chronic obstructive pulmonary disease (COPD), chronic bronchitis, hyperinflated lungs and emphysema) among 1028 individuals aged ≥18 years in the LOFUS study.\nTable S3a. Values for FEV1 (L) and FVC (L) (means (SD)), proportion with FEV1 and FVC < 2.5 and 5% LLN, and proportion with FEV1/FVC < 70% among 5984 men aged ≥20 years with successful spirometry in the LOFUS study, according to age. LLN is defined by normal values in a Danish population (18).\nTable S3b. Values for FEV1 (L) and FVC (L) (means (SD)), proportion with FEV1 and FVC < 2.5 and 5% LLN, and proportion with FEV1/FVC < 70% among 7015 women aged ≥20 years with successful spirometry in the LOFUS study, according to age. LLN is defined by normal values in a Danish population (18).\nTable S4a. Proportion with FEV1, FVC and FEV1/FVC < 2.5 and 5% LLN, and proportion with FEV1/FVC < 70% among 5984 men aged ≥20 years with successful spirometry in the LOFUS study, according to sex and other characteristics. LLN is defined by normal values in a Danish population (18).\nTable S4b. Proportion with FEV1, FVC and FEV1/FVC < 2.5 and 5% LLN, and proportion with FEV1/FVC < 70% among 7015 women aged ≥20 years with successful spirometry in the LOFUS study, according to sex and characteristics. LLN is defined by normal values in a Danish population (18).\nClick here for additional data file." ]
[ "background", null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusions", "COI-statement", null, null, "supplementary-material" ]
[ "COPD", "COPD prevalence", "criteria", "FEV1/FVC ratio", "lower limit of normal", "lung function", "spirometry" ]
BACKGROUND: Worldwide, lung disease is a major cause of morbidity and mortality. 1 Lung function declines with age, 2 and smoking and environmental pollution can cause lung disease, accelerated loss of lung function and premature death. 3 , 4 , 5 , 6 In Denmark, the prevalence of chronic obstructive pulmonary disease (COPD) has been estimated at 17.5% of adults aged 35 years or older, 7 with an estimated 2.0% having severe COPD. 7 This estimate derived from a large urban population study using a spirometry cut‐off intended for clinical diagnosis, that is, for a clinical setting with suspicion of airway disease. This may lead to overestimation of COPD prevalence. 8 , 9 In addition, the choice of the spirometry criterion that most correctly detects airway obstruction has been debated in recent decades. Detection of a reduced ratio between forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) indicates airway obstruction, and while a fixed ratio criterion of FEV1/FVC < 0.70 is often used, studies show that an FEV1/FVC ratio below a lower limit of normal (LLN) may be more suitable. 8 , 10 Lolland‐Falster is a mixed rural‐provincial area of 103 000 inhabitants, situated on two main and several small islands in the southern part of Denmark, a small Scandinavian, high‐income country covering 43 000 km² with a population of 5.8 million. Although the Danish population is genetically relatively homogeneous, population health varies across regions of the country, with life expectancy in Lolland‐Falster 3 years below the national average and 5 years below the municipalities with the highest life expectancy. 11 , 12 The region scores worse than the national average on several health indicators, including diabetes prevalence, obesity, smoking and COPD. 13 The Lolland‐Falster Health Study (LOFUS) is a population‐based, prospective cohort study designed to investigate determinants of population health in this area. 14 In this paper, we report LOFUS data on lung function measured with spirometry, together with anthropometric data and questionnaire‐based information on smoking and other risk factors. The aim is to describe the spirometry measurements and results in adults participating in LOFUS and to compare them to similar findings from an urban Danish population. In addition, we explore how different spirometric criteria affect the estimated prevalence of airway obstruction. METHODS AND DATA: LOFUS LOFUS is a household‐based study in which households of randomly selected persons aged 18 and above were invited to participate. 14 The data collection encompassed self‐administered, age‐specific questionnaires on social, mental and physical health and lifestyle factors; anthropometric and physiological measurements undertaken in the study clinic; and collection of biological samples. The data collection started in February 2016 and ended in February 2020. Spirometry Lung function was measured by trained healthcare professionals using the MicroLoop Handheld Spirometer™ and SpiroUSB™ with Spirometry PC Software (CareFusion Corp., USA).
Sex, height and ethnic origin (Caucasian or Asian) were entered into the software, and the spirometry was performed in a standing position (if possible) with the use of a nose clip. There were no restrictions on behaviour or medication prior to the measurement, and no bronchodilator was administered prior to spirometry. The spirometer was calibrated once a week. Three sets of values were obtained for FEV1 and FVC; as a criterion for correct performance, the two highest measurements could differ by no more than 0.150 L, and all measurements had to be graded as ‘Good blow’ or ‘Short blow’ by the spirometer. The highest value of both FEV1 and FVC for each participant was used in the analysis, and the ratio of FEV1 to FVC (FEV1/FVC) was calculated for each participant.
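A minimal sketch of this acceptability rule, assuming (as the text implies but does not spell out) that the 0.150 L repeatability check applies to both FEV1 and FVC:

    def best_spirometry(blows):
        # blows: three (FEV1, FVC) tuples in litres from one participant
        fev1s = sorted((b[0] for b in blows), reverse=True)
        fvcs = sorted((b[1] for b in blows), reverse=True)
        # Repeatability: the two highest values may differ by at most 0.150 L
        if fev1s[0] - fev1s[1] > 0.150 or fvcs[0] - fvcs[1] > 0.150:
            return None  # spirometry not acceptable
        fev1, fvc = fev1s[0], fvcs[0]  # highest FEV1 and FVC carried forward
        return fev1, fvc, fev1 / fvc

    print(best_spirometry([(3.10, 4.00), (3.05, 3.95), (2.80, 3.70)]))
    # (3.1, 4.0, 0.775)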
Other variables We examined a number of potential determinants of lung function selected from the literature. From the measurements in the clinic, we used information on height (cm), waist circumference (cm) and weight (kg). Body mass index (BMI) was calculated as weight/height² (kg/m²) and categorized into underweight (<18.5), normal weight (18.5–24.9), overweight (25.0–29.9) and obese (≥30.0). 15 Waist circumference was grouped into normal or large (with limits of >94 cm for men and >80 cm for women). 16 From the questionnaire, we used self‐reported smoking data categorized as current (daily or sometimes), former and never. The number of pack‐years was calculated for current smokers, with 1 pack‐year corresponding to 20 cigarettes or the equivalent per day for 1 year. School education was divided into ≤7 years; 8–9 years; 10–11 years; graduated high school; under education; and other. Vocational education was divided into primary school only; semiskilled worker, for example, truck driver; vocational training, for example, hairdresser; short higher education, for example, laboratory worker; middle higher education, for example, school teacher; long higher education, for example, master's degree or equivalent; and other education. Physical activity was based on the following question: ‘How would you characterize your leisure time physical activity within the last year?’ and classified as sedentary, moderate, heavy activity or heavy activity at competition level. 17 Self‐reported prevalent morbidity was based on the following question from the LOFUS study questionnaire: ‘Do you suffer from any of the following diseases?’ Participants were asked to mark either yes or no for each of the following categories: ‘asthma’; ‘chronic bronchitis, hyperinflated lungs, chronic obstructive pulmonary disease (COPD), or emphysema’; ‘heart attack’; ‘atherosclerosis in the heart’; ‘angina’; ‘hypertension’; ‘diabetes’; and ‘cancer’. This information was merged with self‐reported daily medication use and categorized as asthma, other respiratory disease (including COPD, chronic bronchitis and emphysema), allergy, hypertension, diabetes, cancer or ischaemic heart disease.
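The two derived exposure variables are simple arithmetic; the sketch below implements both using the cut‐offs given above (function and variable names are our own, for illustration only):

    def bmi_category(weight_kg, height_cm):
        # BMI = weight (kg) / height (m) squared
        bmi = weight_kg / (height_cm / 100) ** 2
        if bmi < 18.5:
            return bmi, 'underweight'
        elif bmi < 25.0:
            return bmi, 'normal weight'
        elif bmi < 30.0:
            return bmi, 'overweight'
        return bmi, 'obese'

    def pack_years(cigarettes_per_day, years_smoked):
        # 1 pack-year = 20 cigarettes (or equivalent) per day for 1 year
        return (cigarettes_per_day / 20) * years_smoked

    print(bmi_category(85, 178))  # (26.83..., 'overweight')
    print(pack_years(10, 30))     # 15.0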
Data analysis We included all 16 142 adults (i.e. aged 18 years and above) from the LOFUS study in the descriptive analyses of participant characteristics. Means and standard deviations (SD) were calculated for the characteristics of all participants. Before analysis, we checked for differences between those with successful and unsuccessful spirometry using the Kruskal–Wallis test and Pearson's χ2 test. We then compared spirometry results with the reference values stated for the Danish normal population by Løkke et al. 18 For this part of the analysis, we excluded participants aged <20 years or with height <150 cm (female) or <155 cm (male), to match the population in the reference material (Figure 1). We considered a reduction in the ratio FEV1/FVC to be indicative of airway obstruction and compared three different cut‐offs: FEV1/FVC < 70%, FEV1/FVC < LLN 5% and FEV1/FVC < LLN 2.5%. The LLN 5% was stated for the Danish normal population by Løkke et al. 18 The LLN 2.5% was not stated in that study but was calculated by subtracting 1.96 × the residual SD from the predicted mean (assuming a Gaussian distribution of the residuals). For each participant, we determined whether FEV1, FVC and FEV1/FVC were above or below the LLN 5% and the LLN 2.5% corresponding to that participant's age, sex and height. We then calculated the proportions with FEV1, FVC and FEV1/FVC below the LLN and the proportion with FEV1/FVC < 70%. The proportions with FEV1, FVC and FEV1/FVC below the LLN and the proportion with FEV1/FVC < 70% in subgroups of participants defined by sex, age and other variables were tabulated. Figure 1 shows a flowchart of the study population among the 16 142 individuals aged ≥18 years in the LOFUS study. Analyses were carried out for all participants ≥20 years. As COPD is usually diagnosed in middle‐aged or older adults, we also carried out the analysis without the younger age groups. We chose 35 years as the cut‐off, because this was used in a previous Danish prevalence study 7 (Figure 1). For this part of the analysis, we excluded participants <35 years, leaving 11 709 participants. The proportion with FEV1/FVC below LLN 2.5% and 5.0% and the proportion with FEV1/FVC < 70% in subgroups of participants were tabulated. Logistic regression was used to evaluate variables associated with airway obstruction, defined as FEV1/FVC below LLN 2.5% or 5.0% or as FEV1/FVC < 70%. Analyses were performed using STATA/SE 15.1.
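The LLN construction can be written compactly. Under the stated Gaussian assumption, the 2.5th percentile lies 1.96 residual SDs below the predicted mean; the corresponding multiplier for the 5th percentile, implicit in the published LLN 5% values, is 1.645 (a standard‐normal fact, not stated in the paper). A sketch with hypothetical reference values:

    Z_2_5 = 1.96   # 2.5th percentile of the standard normal distribution
    Z_5 = 1.645    # 5th percentile (implied by the published LLN 5% values)

    def lln(predicted_mean, residual_sd, z):
        # LLN = predicted mean - z * residual SD (Gaussian residuals assumed)
        return predicted_mean - z * residual_sd

    # Hypothetical reference values for one participant's FEV1/FVC:
    pred, rsd = 0.78, 0.055
    print(lln(pred, rsd, Z_2_5))  # 0.6722   -> LLN 2.5%
    print(lln(pred, rsd, Z_5))    # 0.689525 -> LLN 5%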
Ethics Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by the Region Zealand's Ethical Committee on Health Research (SJ‐421) and the Danish Data Protection Agency (REG‐024‐2015). LOFUS is registered in Clinical Trials (NCT02482896).
RESULTS: Participant characteristics Spirometry measurements were attempted for all 16 142 participants; 19 were excluded due to missing height, and of the remaining 16 123 participants, 13 315 (82.5%) completed three acceptable measurements (Figure 1). Participants with unsuccessful spirometry were more likely to be male, older, overweight and sedentary, to have a large waist circumference, low school/vocational education, hypertension or ischaemic heart disease, and less likely to have asthma or other respiratory disease. The population with successful spirometry was on average 59.0 years old, with 54.2% being women. The majority were overweight (63.4%), had education below high school graduation (62.5%), reported moderate physical activity (61.3%) or were former or current smokers (51.8%), and 26.8% reported having hypertension (Table S1). Respiratory disease The participants with other respiratory disease (n = 1028) were on average 61.2 years old, with 56.4% being women (Table S2); 75.1% also reported having asthma. Compared to the total group with successful spirometry (Table S1), the participants with self‐reported respiratory disease were more likely to be older, to have a higher waist circumference, lower school education and more sedentary activity, to be current daily smokers with more pack‐years, and to have asthma, allergy, hypertension, diabetes, cancer and ischaemic heart disease.
Spirometry results Table 1 shows the proportions of participants who met the three different criteria for airway obstruction, by age categories, anthropometric data, educational status and smoking status, in participants aged 35 years or older. Overall, 12.2% of participants had FEV1/FVC below the LLN 2.5% cut‐off, 19.0% below the LLN 5%, and 19.0% below 70%, but with variation by age group. Up until age 49, the proportions meeting the FEV1/FVC < 70% and the LLN 2.5% criteria were similar, but with increasing age, the proportion with FEV1/FVC < 70% increased more than the proportion meeting the LLN criteria. As a result, the difference between the proportions with FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% increased in the oldest age groups. Table 1: Proportion of participants with FEV1/FVC < LLN 2.5%, LLN 5% and <70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study. Notes: LLN is defined by normal values in a Danish population. 17 Values are numbers (frequencies) for categorical variables. P‐values were calculated with Pearson's χ2 test for categorical variables. Other respiratory disease includes COPD, chronic bronchitis and emphysema. As expected, proportions were higher in smokers, especially current smokers and those with the most pack‐years, and among participants reporting other respiratory disease. In addition, spirometry results below expected values were more common among those with fewer years of schooling and those reporting sedentary or moderate physical activity. The proportion of participants with FEV1/FVC < 70% and FEV1/FVC < LLN 2.5% was highest among those with BMI < 18.5 (42.6% and 27.8%, respectively), low school education (30.8% and 16.8%), sedentary activity (25.0% and 17.2%), current daily smoking (33.2% and 25.6%), asthma (49.2% and 41.3%) and other respiratory disease (57.6% and 46.7%). Table 2 shows the results of univariable and multivariable analyses of the proportion with airway obstruction using the different criteria. The strongest association was seen in daily smokers, followed by sometimes smokers and former smokers. An increasing number of pack‐years was also strongly associated with airway obstruction by either criterion. Airway obstruction was associated with increasing age for the fixed ratio criterion but not for the LLN. BMI < 18.5 was associated with airway obstruction, as was a sedentary lifestyle. Asthma and other respiratory disease were associated with airway obstruction, while other chronic diseases were not. No clear association was found for educational level or waist circumference. Table 2: Association of characteristics with FEV1/FVC < LLN 2.5%, FEV1/FVC < LLN 5% and FEV1/FVC < 70% among all aged ≥35 years (n = 11 709) with successful spirometry in the LOFUS study, according to age. Note: LLN is defined by normal values in a Danish population. 17 Cumulative smoking (pack‐years) was excluded from the multivariable models due to missing values for all former smokers. Other respiratory disease includes chronic obstructive pulmonary disease (COPD), chronic bronchitis, hyperinflated lungs and emphysema. Table S3 shows additional spirometry results by sex for all participants aged 20 years and above.
For men (n = 5984), FEV1 decreased from an average of 4.39 L for individuals aged 20 years to 2.29 L for individuals aged 80+ years (Table S3A). For women (n = 7015), it decreased from 3.24 L for those aged 20 years to 1.61 L for those aged 80 years (Table S3B). Across all ages, men had higher FEV1 than women. For men, the proportion with FEV1/FVC < 70% was 11.7 and 16.5 percentage points higher than the proportion with FEV1/FVC < LLN 2.5% within the age groups 70–79 and 80+ years, respectively. For women, the differences were 17.3 and 29.9 percentage points, respectively, in the two oldest age groups. The proportion of participants aged 20 and above with airway obstruction by the FEV1/FVC < LLN 5%, FEV1/FVC < LLN 2.5% and FEV1/FVC < 70% criteria in different subgroups of participants is shown in Tables S4A (men) and S4B (women).
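The analyses behind Table 2 were run in Stata; purely for illustration, an equivalent univariable and multivariable setup in Python could look as follows. The data file and column names are hypothetical, and the binary outcome would be re‐fitted for each of the three obstruction criteria:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per participant; 'obstructed' is 1 if FEV1/FVC < LLN 2.5%
    # (hypothetical file and column names)
    df = pd.read_csv('lofus_spirometry.csv')

    # Univariable model for a single exposure:
    uni = smf.logit("obstructed ~ C(smoking, Treatment('never'))", data=df).fit()

    # Multivariable model; pack-years is omitted, as in the paper, because
    # it is missing for all former smokers:
    multi = smf.logit(
        "obstructed ~ C(smoking, Treatment('never')) + C(age_group)"
        " + C(bmi_group) + C(activity) + asthma",
        data=df,
    ).fit()
    print(multi.summary())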
For women, the difference was 17.3 and 29.9 percentage points, respectively, in the two oldest age groups. The proportion of participants aged 20 and above with airway obstruction by the FEV1/FVC < LLN 5%, FEV1/FVC < LLN 2.5% and FEV1/FVC < 70% criteria in different subgroups of participants is shown in Tables S4A (men) and S4B (women). DISCUSSION: Among LOFUS participants aged 35 years or older, 19% had airway obstruction judged by the fixed ratio criterion of FEV1/FVC < 70% or the LLN 5% cut-off. Using LLN 2.5%, the proportion with airway obstruction was 12.2% overall. While some subgroups had similar proportions with FEV1/FVC below 70% and below LLN 2.5%, in other subgroups the proportions were quite different, which may be due to different age compositions of the groups. A previous Danish study from Copenhagen estimated COPD prevalence in subjects aged over 35 years to be 17.4% based on FEV1/FVC < 70%. 7 Another Danish study found a prevalence of airway obstruction of 18.0% using FEV1/FVC < 70% and 5.6% using LLN 2.5%. 8 In a German cohort, the FEV1/FVC < 70% criterion was also shown to identify more individuals than LLN 5%, especially in older participants. 19 That study also showed that the proportion of participants reporting a physician diagnosis of COPD or taking lung medication was higher among those with airway obstruction by the LLN 5% criterion than among those meeting the FEV1/FVC < 70% criterion, with the difference increasing with age. Use of the LLN criterion is arguably better, because the FEV1/FVC ratio declines with age, irrespective of the presence of lung disease. 9 In addition, using LLN, where each participant's spirometry result is compared to the expected value for that participant's age, height and sex, allows comparison of proportions with airway obstruction across subgroups with different age compositions. Our study showed that fewer years in school were related to a higher prevalence of airway obstruction (Table 1), while in multivariable analysis (Table 2) there was no significant association. It has been found previously that persons with lower educational attainment have a higher risk of developing COPD, and they also tend to have more severe disease and a higher risk of exacerbations and death, even when controlling for disease severity. 20 In light of our results, this may be related to an uneven distribution of risk factors and not educational level per se. More intense physical activity was associated with a lower proportion of individuals with airway obstruction. Previous studies suggested that regular physical activity reduces the risk of COPD exacerbations, and among smokers, physical activity slowed lung function decline. 21 In contrast, a Canadian cohort study found large waist circumference to be a strong predictor of impaired lung function, with physical activity acting as a confounder. 22 Unlike waist circumference, the proportion with airway obstruction varied with BMI. COPD may be accompanied by weight loss and loss of muscle mass due to systemic involvement in some patients, resulting in low BMI. On the other hand, studies have also found a higher prevalence of obesity among COPD patients, 7 and in obese individuals, the FEV1/FVC ratio may remain normal and airway obstruction may be underdiagnosed. 23 16.5% of participants reported daily smoking. This is lower than the figures from the latest national health survey, where 21% and 22.8%, respectively, of the population of the two municipalities in Lolland-Falster reported daily smoking. 13 
Participants in LOFUS, as in other population health studies, may be healthier and have higher socio-economic status than the background population, leading to underestimation of disease burden. 24 Strengths of the study include a large sample size, a high proportion of participants completing spirometry and comprehensive data collection. All data on comorbidities, medication and physical activity were self-reported and could potentially suffer from reporting bias. For example, most participants reporting other respiratory disease also reported asthma. Although asthma and COPD frequently coexist, 25 we cannot be sure whether participants distinguish reliably between these conditions. Spirometry was performed without bronchodilator, which in a US study was associated with 50% higher prevalence of airway obstruction than post-bronchodilator spirometry. 26 Information on medication use on the day of spirometry was not collected, nor was information on symptoms, and therefore the estimates of airway obstruction must be interpreted with caution. Nevertheless, the study showed proportions of participants with airway obstruction in agreement with previous Danish studies. 7 , 8 A possible limitation was the use of 35 years as the lower age cut-off, while many studies use 40 years, which may affect comparability. However, one of our aims was to relate the LOFUS data to a previous Danish study using 35 years as the cut-off. 7 Table 1 shows that the age group 35–39 had the lowest prevalence of airway obstruction. Thus, calculating the airway obstruction prevalence in the 40+ population showed proportions of 19.8%, 19.3% and 12.4%, using FEV1/FVC < 70%, LLN 5% and LLN 2.5%, respectively. Knowing the prevalence of COPD, including undiagnosed COPD, in a population is important for estimating the potential for prevention, early diagnosis and treatment, and for planning services. Data from Copenhagen showed that only a minority of people meeting the criteria for COPD received treatment. 7 Among LOFUS participants who reported no known respiratory disease, 8.7% were found to have airway obstruction by the LLN 2.5% criterion and 15.2% by the LLN 5% criterion (data not shown). For current daily smokers, these figures were 19.5% and 29.5%, respectively. Although no data on symptoms were available for assessment of clinical diagnoses, these numbers may give an indication of the extent of undiagnosed obstructive pulmonary disease. The prevalence estimate depends on the spirometry criterion used. 27 Even when using LLN 5%, only 58.5% of men and 52.1% of women who reported respiratory disease had signs of airway obstruction. Consequently, a sizeable proportion of participants with self-reported disease had spirometry results in the normal range. Whether this is due to error in self-reported diagnosis, to the effect of treatment of existing disease, or a combination of both cannot be determined from this study. Nevertheless, it suggests that the prevalence of lung disease may be underestimated when estimated from spirometry results in population studies such as LOFUS. Conversely, a Dutch study showed that the population prevalence of COPD may be underestimated if including only self-reported or physician-diagnosed COPD, as a substantial number of cases go undiagnosed. 28 In a clinical setting, the spirometry result is interpreted in light of patient history and response to bronchodilator treatment. 
Such information was not available in the LOFUS database, and therefore using the same cut-off as in clinical diagnosis, but without the clinical information, may lead to overestimation of COPD prevalence. In older age groups, the fixed ratio criterion (FEV1/FVC < 70%) may overestimate COPD prevalence even more than in younger age groups. The LLN criterion seems better suited for population studies, 25 especially in older participants, and this has been found across geographical locations. 28 , 29 , 30 , 31 It has been suggested that the LLN 2.5% cut-off is more relevant in population studies than 5%. 8 In future population studies, inclusion of clinical information and response to bronchodilator treatment would enable researchers to evaluate further which criterion gives the better estimate. CONCLUSION: This was a descriptive cross-sectional study presenting data on proportions with signs of airway obstruction in broad subgroups of participants in a population study from a rural part of Denmark, using different cut-offs for the definition of airway obstruction. Our study shows that, using the same criteria to define airway obstruction as previous Danish studies, the population of Lolland-Falster has a comparable proportion with airway obstruction, and hence possible COPD: 19% in participants aged 35 years or older. Our study also highlights how choosing a different cut-off influences the estimate: using the LLN 2.5% cut-off, which may be preferable for population studies, the prevalence of airway obstruction was considerably lower, at 12.2%. In addition, the choice of criterion (LLN or fixed FEV1/FVC ratio) influences the estimated prevalence of airway obstruction, especially with increasing age. CONFLICT OF INTEREST: All authors have no conflict of interest to declare. ETHICS STATEMENT: Informed written consent was obtained from all LOFUS participants. The LOFUS study was approved by Region Zealand's Ethical Committee on Health Research (SJ-421) and the Danish Data Protection Agency (REG-024-2015). LOFUS is registered at ClinicalTrials.gov (NCT02482896). AUTHOR CONTRIBUTIONS: Katja Kemp Jacobsen (KKJ), Randi Jepsen (RJ), Uffe Bodtger (UB), Knud Rasmussen (KR) and Gry St-Martin (GSM) were involved in study design. RJ was involved in data collection. KKJ, RJ and GSM were involved in data analysis. RJ was involved in data curation. KKJ, RJ and GSM were involved in writing the original draft. KKJ, RJ, UB, KR and GSM were involved in review and editing. All authors read and approved the final manuscript. Supporting information: Table S1. Characteristics of participants with and without spirometry results among 16 123 individuals aged ≥18 years in the LOFUS study. Table S2. Characteristics of participants with successful spirometry and self-reported other respiratory disease (including chronic obstructive pulmonary disease (COPD), chronic bronchitis, hyperinflated lungs and emphysema) among 1028 individuals aged ≥18 years in the LOFUS study. Table S3a. Values for FEV1 (L) and FVC (L) (means (SD)), proportion with FEV1 and FVC below the 2.5% and 5% LLN, and proportion with FEV1/FVC < 70% among 5984 men aged ≥20 years with successful spirometry in the LOFUS study, according to age. LLN is defined by normal values in a Danish population (18). Table S3b. Values for FEV1 (L) and FVC (L) (means (SD)), proportion with FEV1 and FVC below the 2.5% and 5% LLN, and proportion with FEV1/FVC < 70% among 7015 women aged ≥20 years with successful spirometry in the LOFUS study, according to age. 
LLN is defined by normal values in a Danish population (18). Table S4a. Proportion with FEV1, FVC and FEV1/FVC below the 2.5% and 5% LLN and proportion with FEV1/FVC < 70% among 5984 men aged ≥20 years with successful spirometry in the LOFUS study, according to other characteristics. LLN is defined by normal values in a Danish population (18). Table S4b. Proportion with FEV1, FVC and FEV1/FVC below the 2.5% and 5% LLN and proportion with FEV1/FVC < 70% among 7015 women aged ≥20 years with successful spirometry in the LOFUS study, according to other characteristics. LLN is defined by normal values in a Danish population (18).
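To make the three spirometry criteria concrete, here is a minimal Python sketch classifying a single measurement against all of them. It assumes the 2.5% and 5% lower limits of normal for the FEV1/FVC ratio have already been looked up in reference equations for the participant's age, sex and height; the Danish reference values used in the study are not reproduced here, so the LLN numbers in the example are hypothetical.

def classify_airway_obstruction(fev1_l, fvc_l, lln_2_5, lln_5):
    """Apply the three airway obstruction criteria to one measurement.

    fev1_l, fvc_l: measured FEV1 and FVC in litres.
    lln_2_5, lln_5: lower limits of normal for the FEV1/FVC ratio at the
    2.5th and 5th percentiles, from reference equations (hypothetical here).
    """
    ratio = fev1_l / fvc_l
    return {
        "FEV1/FVC < 70%": ratio < 0.70,          # fixed ratio criterion
        "FEV1/FVC < LLN 5%": ratio < lln_5,      # clinical-style criterion
        "FEV1/FVC < LLN 2.5%": ratio < lln_2_5,  # stricter criterion
    }

# An older participant can be positive by the fixed ratio yet negative by both
# LLN criteria, which is the divergence reported in the oldest age groups:
print(classify_airway_obstruction(fev1_l=2.0, fvc_l=3.0, lln_2_5=0.62, lln_5=0.64))
# ratio = 0.667 -> fixed ratio True, both LLN criteria False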
Background: COPD prevalence in Denmark is estimated at 18% based on data from urban populations. However, studies suggest that using the clinical cut-off for airway obstruction in population studies may overestimate prevalence. The present study aims to compare estimated prevalences of airway obstruction using different cut-offs and to present lung function data from the Lolland-Falster Health Study, set in a rural-provincial area. Methods: Descriptive analysis of participant characteristics, self-reported respiratory disease, and spirometry results in the total population and in subgroups defined by these characteristics. Airway obstruction was assessed using previously published Danish reference values and defined as FEV1/FVC below the lower limit of normal (LLN) at 5% (as in clinical diagnosis) or 2.5% (suggested for population studies), or as FEV1/FVC < 70%. Results: Using either the FEV1/FVC < 70% or the LLN 5% cut-off, 19.0% of LOFUS participants aged 35 years or older had spirometry results suggesting airway obstruction. By the LLN 2.5% criterion, the proportion was considerably lower, 12.2%. The prevalence of airway obstruction was higher among current smokers, in participants with short education or reporting low leisure-time physical activity, and in those with known respiratory disease. Approximately 40% of participants reporting known respiratory disease had normal spirometry, and 8.7% without known respiratory disease had airway obstruction. Conclusions: The prevalence of airway obstruction in this rural population was comparable to previous estimates from urban Danish population studies. The choice of cut-off impacts the estimated prevalence, and using the fixed FEV1/FVC < 70% cut-off may overestimate prevalence. However, many participants with known respiratory disease had normal spirometry in this health study.
BACKGROUND: Worldwide, lung disease is a major cause of morbidity and mortality. 1 Lung function declines with age, 2 and smoking and environmental pollution can cause lung disease, accelerated loss of lung function and premature death. 3 , 4 , 5 , 6 In Denmark, the prevalence of chronic obstructive pulmonary disease (COPD) has been estimated at 17.5% of adults aged 35 years or older, 7 with an estimated 2.0% having severe COPD. 7 This estimate derived from a large urban population study using a spirometry cut-off intended for clinical diagnosis, that is, for a clinical setting with suspicion of airway disease. This may lead to overestimation of COPD prevalence. 8 , 9 In addition, the choice of spirometry criterion to most correctly detect airway obstruction has been debated in recent decades. Detection of a reduced ratio between forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) indicates airway obstruction, and while a fixed ratio criterion of FEV1/FVC < 0.70 is often used, studies show that an FEV1/FVC ratio below a lower limit of normal (LLN) may be more suitable. 8 , 10 Lolland-Falster is a mixed rural-provincial area of 103 000 inhabitants, situated on two main and several small islands in the southern part of Denmark, a small Scandinavian, high-income country covering 43 000 km2 with a population of 5.8 million. Although the Danish population is genetically relatively homogeneous, population health varies across regions of the country, with life expectancy in Lolland-Falster 3 years below the national average and 5 years below the municipalities with the highest life expectancy. 11 , 12 The region scores worse than the national average on several health indicators, including diabetes prevalence, obesity, smoking and COPD. 13 The Lolland-Falster Health Study (LOFUS) is a population-based, prospective cohort study designed to investigate determinants of population health in this area. 14 In this paper, we report LOFUS data on lung function measurement with spirometry as well as anthropometric data and questionnaire-based information on smoking and other risk factors. The aim is to describe the spirometry measurements and results in adults participating in LOFUS and compare them to similar findings from an urban Danish population. In addition, we explore how different spirometric criteria affect the estimated prevalence of airway obstruction. CONCLUSION: This was a descriptive cross-sectional study presenting data on proportions with signs of airway obstruction in broad subgroups of participants in a population study from a rural part of Denmark, using different cut-offs for the definition of airway obstruction. Our study shows that, using the same criteria to define airway obstruction as previous Danish studies, the population of Lolland-Falster has a comparable proportion with airway obstruction, and hence possible COPD: 19% in participants aged 35 years or older. Our study also highlights how choosing a different cut-off influences the estimate: using the LLN 2.5% cut-off, which may be preferable for population studies, the prevalence of airway obstruction was considerably lower, at 12.2%. In addition, the choice of criterion (LLN or fixed FEV1/FVC ratio) influences the estimated prevalence of airway obstruction, especially with increasing age.
9,984
335
[ 2518, 70, 180, 425, 528, 48, 158, 101, 942, 48, 101 ]
17
[ "fev1", "fvc", "fev1 fvc", "lln", "years", "participants", "spirometry", "70", "disease", "proportion" ]
[ "spirometry lung", "disease copd estimated", "copd prevalence older", "spirometry lung function", "overestimate copd prevalence" ]
null
[CONTENT] COPD | COPD prevalence | criteria | FEV1/FVC ratio | lower limit of normal | lung function | spirometry [SUMMARY]
null
[CONTENT] COPD | COPD prevalence | criteria | FEV1/FVC ratio | lower limit of normal | lung function | spirometry [SUMMARY]
[CONTENT] COPD | COPD prevalence | criteria | FEV1/FVC ratio | lower limit of normal | lung function | spirometry [SUMMARY]
[CONTENT] COPD | COPD prevalence | criteria | FEV1/FVC ratio | lower limit of normal | lung function | spirometry [SUMMARY]
[CONTENT] COPD | COPD prevalence | criteria | FEV1/FVC ratio | lower limit of normal | lung function | spirometry [SUMMARY]
[CONTENT] Airway Obstruction | Forced Expiratory Volume | Humans | Lung | Pulmonary Disease, Chronic Obstructive | Spirometry | Vital Capacity [SUMMARY]
null
[CONTENT] Airway Obstruction | Forced Expiratory Volume | Humans | Lung | Pulmonary Disease, Chronic Obstructive | Spirometry | Vital Capacity [SUMMARY]
[CONTENT] Airway Obstruction | Forced Expiratory Volume | Humans | Lung | Pulmonary Disease, Chronic Obstructive | Spirometry | Vital Capacity [SUMMARY]
[CONTENT] Airway Obstruction | Forced Expiratory Volume | Humans | Lung | Pulmonary Disease, Chronic Obstructive | Spirometry | Vital Capacity [SUMMARY]
[CONTENT] Airway Obstruction | Forced Expiratory Volume | Humans | Lung | Pulmonary Disease, Chronic Obstructive | Spirometry | Vital Capacity [SUMMARY]
[CONTENT] spirometry lung | disease copd estimated | copd prevalence older | spirometry lung function | overestimate copd prevalence [SUMMARY]
null
[CONTENT] spirometry lung | disease copd estimated | copd prevalence older | spirometry lung function | overestimate copd prevalence [SUMMARY]
[CONTENT] spirometry lung | disease copd estimated | copd prevalence older | spirometry lung function | overestimate copd prevalence [SUMMARY]
[CONTENT] spirometry lung | disease copd estimated | copd prevalence older | spirometry lung function | overestimate copd prevalence [SUMMARY]
[CONTENT] spirometry lung | disease copd estimated | copd prevalence older | spirometry lung function | overestimate copd prevalence [SUMMARY]
[CONTENT] fev1 | fvc | fev1 fvc | lln | years | participants | spirometry | 70 | disease | proportion [SUMMARY]
null
[CONTENT] fev1 | fvc | fev1 fvc | lln | years | participants | spirometry | 70 | disease | proportion [SUMMARY]
[CONTENT] fev1 | fvc | fev1 fvc | lln | years | participants | spirometry | 70 | disease | proportion [SUMMARY]
[CONTENT] fev1 | fvc | fev1 fvc | lln | years | participants | spirometry | 70 | disease | proportion [SUMMARY]
[CONTENT] fev1 | fvc | fev1 fvc | lln | years | participants | spirometry | 70 | disease | proportion [SUMMARY]
[CONTENT] population | lung | prevalence | health | falster | lolland | lolland falster | estimated | airway | national average [SUMMARY]
null
[CONTENT] fev1 | lln | fev1 fvc | fvc | years | participants | fvc lln | fev1 fvc lln | 70 | disease [SUMMARY]
[CONTENT] airway obstruction | obstruction | airway | influences | cut | prevalence airway obstruction | prevalence airway | studies | different cut | study [SUMMARY]
[CONTENT] fev1 | fvc | fev1 fvc | lln | participants | years | lofus | disease | spirometry | airway [SUMMARY]
[CONTENT] fev1 | fvc | fev1 fvc | lln | participants | years | lofus | disease | spirometry | airway [SUMMARY]
[CONTENT] Denmark | 18% ||| ||| the Lolland-Falster Health Study [SUMMARY]
null
[CONTENT] 70% | LLN 5% | 19.0% | 35 years ||| LLN | 2.5% | 12.2% ||| ||| Approximately 40% | 8.7% [SUMMARY]
[CONTENT] Danish ||| ||| [SUMMARY]
[CONTENT] Denmark | 18% ||| ||| the Lolland-Falster Health Study ||| ||| Danish | FEV1 | LLN | 5% | 2.5% | FEV1 | 70% ||| 70% | LLN 5% | 19.0% | 35 years ||| LLN | 2.5% | 12.2% ||| ||| Approximately 40% | 8.7% ||| Danish ||| ||| [SUMMARY]
[CONTENT] Denmark | 18% ||| ||| the Lolland-Falster Health Study ||| ||| Danish | FEV1 | LLN | 5% | 2.5% | FEV1 | 70% ||| 70% | LLN 5% | 19.0% | 35 years ||| LLN | 2.5% | 12.2% ||| ||| Approximately 40% | 8.7% ||| Danish ||| ||| [SUMMARY]
The Effects of Urinary Albumin and Hypertension on All-Cause and Cardiovascular Disease Mortality in Korea.
28472229
Urinary albumin levels and hypertension (HTN) are independently associated with an increased risk of all-cause mortality. The effect of albuminuria on mortality in the absence or presence of HTN is uncertain. This study aimed to evaluate the effect of albuminuria and HTN on all-cause and cardiovascular disease (CVD) mortality.
BACKGROUND
Mortality outcomes were assessed for 32,653 Koreans enrolled in a health screening that included measurement of the urinary albumin/creatinine ratio (UACR) at baseline, with a median follow-up of 5.13 years. Receiver operating characteristic curve analyses were performed for UACR, and the resulting cut-point was 5.42 mg/g. Participants were categorized as UACR < 5.42 or UACR ≥ 5.42 mg/g. HTN status was categorized as No HTN or HTN (absence or presence of HTN).
METHODS
The median (interquartile range) baseline UACRs were higher in those who died than in survivors. Subjects with a UACR ≥ 5.42 mg/g, without or with HTN, showed a similarly increased risk for all-cause and CVD mortality, even after adjusting for known CVD risk factors, compared to those with no HTN/UACR < 5.42 (reference) (all-cause mortality: hazard ratio [HR] 1.48, 95% confidence interval [CI] 1.02-2.15 and HR 1.47, 95% CI 0.94-2.32, respectively; CVD mortality: HR 5.75, 95% CI 1.54-21.47 and HR 5.87, 95% CI 1.36-25.29).
RESULTS
The presence of urinary albumin and HTN is a significant determinant of CVD and death. Urinary albumin may contribute more to CVD and all-cause mortality than HTN does.
CONCLUSIONS
[ "Adult", "Aged", "Albuminuria", "Body Mass Index", "Cardiovascular Diseases", "Female", "Follow-Up Studies", "Humans", "Hypertension", "Kaplan-Meier Estimate", "Lipids", "Male", "Middle Aged", "Predictive Value of Tests", "Prognosis", "ROC Curve", "Republic of Korea", "Risk Factors", "Survival Analysis" ]
5861583
null
null
METHODS
Study population: The study population consisted of individuals with urinary albumin/creatinine ratio (UACR, mg/g) measurements who participated in a comprehensive health screening program at Kangbuk Samsung Hospital, Seoul, Korea, from 2002 to 2012 (n = 44,964). For this analysis, 12,311 subjects were excluded for one or more of the following reasons: 4,794 subjects had missing data on smoking status, alcohol consumption, and exercise; 868 subjects had a pre-existing history of malignancy; and 1 subject had an unknown vital status. Further analyses were undertaken after excluding subjects with diabetes (n = 3,435) and subjects taking antihypertensive medication (n = 5,874). The total number of eligible individuals for the study was 32,653. This study was approved by the Institutional Review Board of Kangbuk Samsung Hospital. The requirement for informed consent was waived, and deidentified information was retrieved retrospectively. Measurements: Data on medical history, medication use, and health-related behaviors were collected through a self-administered questionnaire. Details regarding alcohol use included the frequency of intake per week. Current smokers were identified, and the weekly frequency of moderate- or vigorous-intensity physical activity was assessed. Body weight was measured with the subject in light clothing and no shoes to the nearest 0.1 kg using a digital scale. Height was measured to the nearest 0.1 cm. Body mass index (BMI) was calculated as weight in kilograms divided by height in meters squared. Trained nurses measured sitting BP with a standard mercury sphygmomanometer. Blood specimens were sampled from the antecubital vein after 12 hours of fasting. Serum levels of glucose, total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and high-density lipoprotein (HDL) cholesterol were measured using Bayer Reagent Packs (Bayer Diagnostics, Leverkusen, Germany) on an automated chemistry analyzer (Advia 1650 Autoanalyzer; Bayer Diagnostics). Insulin levels were measured with an immunoradiometric assay (Biosource, Nivelle, Belgium) with intra- and interassay coefficients of variation of 2.1–4.5% and 4.7–12.2%, respectively. Insulin resistance was estimated using the homeostasis model assessment of insulin resistance (HOMA-IR), calculated as insulin × glucose/22.5. The serum creatinine (SCr) level was measured by means of the alkaline picrate (Jaffe) method. 
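As a small worked illustration of the two derived variables just described, the sketch below computes BMI and HOMA-IR in Python. The text quotes the HOMA-IR formula as insulin × glucose/22.5 without units; the version here assumes fasting insulin in µU/ml and converts glucose from mg/dl to mmol/l (1 mmol/l = 18 mg/dl), the convention under which the /22.5 denominator applies, so that conversion is an assumption rather than something stated in the paper.

def bmi(weight_kg, height_cm):
    """Body mass index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def homa_ir(insulin_uU_ml, glucose_mg_dl):
    """HOMA-IR = insulin x glucose / 22.5, with glucose in mmol/l (assumed)."""
    glucose_mmol_l = glucose_mg_dl / 18.0  # unit conversion; see note above
    return insulin_uU_ml * glucose_mmol_l / 22.5

print(round(bmi(70, 170), 1))        # 24.2 -> below the >= 25 kg/m2 obesity cut-off
print(round(homa_ir(8.0, 95.0), 2))  # 1.88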
We used an estimation of the glomerular filtration rate (GFR) to assess the degree of kidney impairment, calculated using the CKD-EPI equation: eGFR = 141 × min(SCr/κ, 1)^α × max(SCr/κ, 1)^(−1.209) × 0.993^age × 1.018 (if female) × 1.159 (if Black), where SCr is serum creatinine, κ is 0.7 for females and 0.9 for males, α is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/κ or 1, and max indicates the maximum of SCr/κ or 1. 13 A single morning voided urine sample was used to measure the UACR. The urinary albumin concentration was determined by immunoradiometry (radio-immunological competition assay, Immunotech Co., Prague, Czech Republic), and the urinary creatinine concentration was measured by a modified Jaffe method. The UACR measured in a spot urine sample has been reported to be highly correlated with the 24-hour urine albumin excretion level.14 Abdominal ultrasonography (Logic Q700 MR; GE, Milwaukee, WI, USA) using a 3.5 MHz probe was performed in all subjects by experienced clinical radiologists, and fatty liver was diagnosed or excluded based on standard criteria, including hepatorenal echo contrast, liver brightness, and vascular blurring.15 HTN was defined as a systolic BP ≥ 140 mm Hg, diastolic BP ≥ 90 mm Hg, or a self-reported history of HTN. Diabetes mellitus was defined as a fasting serum glucose level ≥ 126 mg/dl, a self-reported history of diabetes, or the current use of diabetic medication.16 Obesity was defined according to criteria described for Asian populations, with a BMI threshold of ≥ 25 kg/m2.17 Receiver operating characteristic curve analyses were performed to find an appropriate UACR cutoff for predicting all-cause and CVD mortality, and the cut-point was 5.42 mg/g. Four groups were then defined as follows: (i) subjects below a UACR of 5.42 mg/g without HTN (no HTN/UACR < 5.42); (ii) subjects below 5.42 with HTN (HTN/UACR < 5.42); (iii) subjects at or above 5.42 without HTN (no HTN/UACR ≥ 5.42); and (iv) subjects at or above 5.42 with HTN (HTN/UACR ≥ 5.42).
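A minimal Python sketch of the quoted eGFR formula and the four-group assignment follows; function and variable names are illustrative, and the race coefficient appears only because it is part of the CKD-EPI equation as quoted above.

def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """CKD-EPI eGFR (ml/min/1.73 m2) as quoted in the text."""
    k = 0.7 if female else 0.9
    a = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / k, 1.0) ** a
            * max(scr_mg_dl / k, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def uacr_htn_group(uacr_mg_g, htn):
    """Assign one of the four study groups around the 5.42 mg/g cut-point."""
    albumin = "UACR >= 5.42" if uacr_mg_g >= 5.42 else "UACR < 5.42"
    return ("HTN" if htn else "no HTN") + "/" + albumin

print(round(ckd_epi_egfr(0.9, age=45, female=True), 1))  # about 77
print(uacr_htn_group(7.1, htn=False))                    # no HTN/UACR >= 5.42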
Ascertainment of mortality: Mortality follow-up between January 1, 2002 and December 31, 2012 was based on the nationwide death certificate data of the Korea National Statistical Office. Deaths among subjects were confirmed by matching the information to death records. Causes of death were coded centrally by trained coders using the ICD-10 classification (International Classification of Diseases, 10th revision), and ICD-10 codes I00–I99 were considered to represent cardiovascular death. Statistical analyses: The χ2-test and Student's t test were used to compare the characteristics of the deceased and alive study participants at baseline. We used receiver operating characteristic curve analysis to identify the optimal UACR cutoff value for predicting all-cause and CVD mortality.
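The paper reports the 5.42 mg/g cut-point from ROC analysis but does not spell out the selection rule. One common choice is the threshold maximising the Youden index (sensitivity + specificity − 1), sketched here with scikit-learn on made-up toy values; nothing below is study data.

import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(died, score):
    """Threshold maximising sensitivity + specificity - 1 on the ROC curve."""
    fpr, tpr, thresholds = roc_curve(died, score)
    return thresholds[np.argmax(tpr - fpr)]

died = [0, 0, 0, 1, 0, 1, 1]                 # toy outcome labels
uacr = [2.1, 3.3, 4.8, 5.6, 4.9, 7.4, 12.0]  # toy UACR values, mg/g
print(youden_cutoff(died, uacr))             # -> 5.6 for this toy input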
Cox proportional hazards models were used to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause and cardiovascular mortality, comparing the HTN/UACR < 5.42, no HTN/UACR ≥ 5.42, and HTN/UACR ≥ 5.42 groups with the reference no HTN/UACR < 5.42 group. The models were initially adjusted for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level (Model 1). In Model 2, the models were further adjusted for BMI, HTN, diabetes, and history of CVD. Model 3 included adjustments for the same factors as Model 2 plus an additional adjustment for eGFR. In addition, the participants were stratified into quartiles according to UACR. UACR quartiles were categorized as follows: Q1: < 3.3 mg/g, Q2: 3.3–4.6 mg/g, Q3: 4.7–7.2 mg/g, and Q4: ≥ 7.3 mg/g. The risks of all-cause mortality and CVD mortality were analyzed by degree of albuminuria in subgroups based on conventional cardiovascular risk factors. Adjustment or stratification was made for multiple confounders/effect modifiers, including age, sex, BMI, eGFR, alcohol intake and exercise, educational attainment (college graduation or higher), smoking status, and prior evidence of CVD. The proportional hazards assumption was checked by examining graphs of estimated log(−log) survival. Statistical analysis was performed using Stata, version 11.2. All reported P-values are two-tailed, and P < 0.05 was considered statistically significant.
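The original models were fitted in Stata; the snippet below reproduces the general shape of the analysis (dummy variables for the three exposure groups against the no HTN/UACR < 5.42 reference, plus a covariate) on simulated data with the Python lifelines package. Group labels, effect sizes and the covariate set are invented for illustration and are not taken from the study.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

rng = np.random.default_rng(0)
n = 2000
groups = ["noHTN/lowUACR", "HTN/lowUACR", "noHTN/highUACR", "HTN/highUACR"]
group = rng.choice(groups, n)
age = rng.normal(45, 10, n)

# Simulated hazard: elevated UACR carries the excess risk in this toy setup.
log_hr = 0.03 * (age - 45) + 0.4 * np.char.endswith(group, "highUACR")
time = rng.exponential(40.0, n) / np.exp(log_hr)  # latent survival time, years
died = (time <= 8.0).astype(int)                  # administrative censoring at 8 y
time = np.minimum(time, 8.0)

df = pd.get_dummies(
    pd.DataFrame({"time": time, "died": died, "age": age, "group": group}),
    columns=["group"], dtype=float,
).drop(columns=["group_noHTN/lowUACR"])           # reference category

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])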
null
null
DISCUSSION
The authors declared no conflicts of interest.
[ "Study population", "Measurements", "Ascertainment of mortality", "Statistical analyses", "RESULTS", "DISCUSSION" ]
[ "The study population consisted of individuals with urinary albumin/creatinine ratio (UACR, mg/g) measurements who participated in a comprehensive health screening program at Kangbuk Samsung Hospital, Seoul, Korea, from 2002 to 2012 (n = 44,964). For this analysis, 12,311 subjects were excluded for one of more of the following reasons: 4,794 subjects had missing data on smoking status, alcohol consumption, and exercise; 868 subjects had a pre-existing history of malignancy; and 1 subject had an unknown vital status. Further analyses were undertaken after excluding subjects with diabetes (n = 3,435) and subjects with antihypertensive medication (n = 5,874). The total number of eligible individuals for the study was 32,653.\nThis study was approved by the Institutional Review Board of Kangbuk Samsung Hospital. The requirement for informed consent was waived, and deidentified information was retrieved retrospectively.", "Data on medical history, medication use, and health-related behaviors were collected through a self administered questionnaire. Details regarding alcohol use included the frequency of intake per week. Current smokers were identified, and the weekly frequency of moderate- or vigorous-intensity physical activity was assessed. Body weight was measured with the subject in light clothing and no shoes to the nearest 0.1 kg using a digital scale. Height was measured to the nearest 0.1 cm.\nBody mass index (BMI) was calculated as weight in kilograms divided by height in meters squared. Trained nurses measured sitting BP with a standard mercury sphygmomanometer.\nBlood specimens were sampled from the antecubital vein after 12 hours of fasting. Serum levels of glucose, total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and high density lipoprotein (HDL) cholesterol were measured using Bayer Reagent Packs (Bayer Diagnostics, Leverkusen, Germany) on an automated chemistry analyzer (Advia 1650 Autoanalyzer; Bayer Diagnostics). Insulin levels were measured with an immunoradiometric assay (Biosource, Nivelle, Belgium) with an intra- and interassay coefficient of variation of 2.1–4.5% and 4.7–12.2%, respectively. Insulin resistance was estimated using the homeostasis model assessment of insulin resistance (HOMA-IR), calculated as insulin × glucose/22.5. The serum creatinine (SCr) level was measured by means of the alkaline picrate (Jaffe) method. We used an estimation of the glomerular filtration rate (GFR) to assess the degree of kidney impairment, which was calculated using the CKD-EPI equation: eGFR = 141 × min (SCr/K, 1)a × max (SCr/K, 1)−1.209 × 0.993age × 1.018 (if female) × 1.159 (if Black), where SCr is serum creatinine, K is 0.7 for females and 0.9 for males, a is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/K or 1, and max indicates the maximum of SCr/K or 1.13\nA single morning voided urine sample was used to measure the UACR. The urinary albumin concentration was determined by immunoradiometry (Radio-immunological competition assay, Immunotech Co., Prague, Czech Republic), and the urinary creatinine concentration was measured by a modified Jaffe method. 
The UACR measured in a spot urine sample has been reported to be highly correlated with the 24-hour urine albumin excretion level.14\nAbdominal ultrasonography (Logic Q700 MR; GE, Milwaukee, WI, USA) using a 3.5 MHz probe was performed in all subjects by experienced clinical radiologists, and fatty liver was diagnosed or excluded based on standard criteria, including hepatorenal echo contrast, liver brightness, and vascular blurring.15 HTN was defined as a systolic BP ≥ 140 mm Hg, diastolic BP ≥ 90 mm Hg, a self-reported history of HTN. Diabetes mellitus was defined as a fasting serum glucose level ≥ 126 mg/dl, a self-reported history of diabetes, or the current use of diabetic medication.16 Obesity was defined according to those described for Asian populations, and the BMI threshold for obesity was ≥ 25 kg/m2.17 Receiver operating characteristic curve analyses were performed to find out appropriate UACR cutoff for predicting all-cause and CVD mortality.and the cut-point was 5.42 mg/g.\nFour groups were then defined as follows: (i) subjects who were below the UACR 5.42 mg/g but without HTN (no HTN/UACR < 5.42); (ii) subjects who were below 5.42 for UACR with HTN (HTN/UACR < 5.42); (iii) subjects at or above 5.42 for UACR without HTN (no HTN/UACR ≥ 5.42); and (iv) subjects at or above 5.42 for UACR with HTN (HTN/UACR ≥ 5.42)", "Mortality follow-up between January 1, 2002 and December 31, 2012 was based on the nationwide death certificate data of the Korea National Statistical Office. Deaths among subjects were confirmed by matching the information to death records. Causes of death were coded centrally by trained coders using the ICD-10 classification (International Classification of Diseases, 10th revision) and ICD 00-99 codes were considered to represent cardiovascular death.", "The χ2-test and Student’s t test were used to compare the characteristics of the deceased and alive study participants at baseline. We used receiver operating characteristic curve analysis to identify the optimal UACR cutoff value for predicting all-cause and CVD mortality.\nCox proportional hazards models were used to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause and cardiovascular mortality, comparing the HTN/UACR < 5.42, no HTN/UACR ≥ 5.42, and the HTN/UACR ≥ 5.42 groups with the reference no HTN/UACR < 5.42 group. The models were initially adjusted for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level (Model 1). In Model 2: the models were further adjusted for BMI, HTN, diabetes, and history of CVD. Model 3 included adjustments for the same factors as Model 2 plus an additional adjustment for eGFR. In addition, the participants were stratified into quartiles according to UACR.\nUACR quartiles were categorized as following; Q1: < 3.3 mg/g, Q2: 3.3–4.6 mg/g, Q3: 4.7–7.2 mg/g, and Q4: ≥ 7.3 mg/g. The risk of all-cause mortality and CVD mortality were analyzed with the degree of albuminuria according to sub-groups, which were based on conventional cardiovascular risk factors.\nAdjustment or stratification was made for multiple confounders/effect modifiers including age, sex, BMI, eGFR, alcohol intake and exercise, educational attainment (college graduation or higher), smoking status and prior evidence of CVD.\nThe proportional hazards assumption was checked by examining graphs of estimated log (–log) survival. Statistical analysis was performed using Stata, version 11.2. 
All reported P-values are two-tailed, and P < 0.05 was considered statistically significant.", "There were 249 deaths during the follow-up period (Table 1). Table 1 describes the baseline clinical characteristics analyzed according to vital status at follow-up. Subjects who died were older, and a higher proportion had a history of HTN at baseline. Other conventional cardiovascular risk factors, including glucose, triglycerides, and blood pressure, were also higher, and the HDL-C level and the proportion performing regular exercise were lower in subjects who died by the end of the follow-up period. At baseline, mean and median UACR were higher in the group who died during follow-up compared to survivors.\nBaseline characteristics of the cohort according to death at follow-up\nData are mean (SD), median (interquartile range), or percentage. Abbreviations: ACR, albumin/creatinine ratio; BMI, body mass index; BP, blood pressure; CVD, cardiovascular disease; HDL-C, high density lipoprotein cholesterol; HOMA IR, homeostasis model assessment of insulin resistance; LDL-C, low density lipoprotein cholesterol.\n\na≥college graduate.\n\nb≥3 times per week.\n\nTable 2 shows the baseline characteristics across the four previously defined groups based on below or above the 5.42 mg/g UACR cut-point and the absence or presence of HTN. Differences in characteristics across the four groups were statistically significant. The median UACR levels were higher in the group that had HTN at baseline compared with the group without HTN.\nBaseline characteristics of study subjects by urine ACR quartile\nData are mean (SD), median (interquartile range), or percentage. Abbreviations: ACR, albumin/creatinine ratio; BP, blood pressure; BMI, body mass index; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; HTN, hypertension; HDL-C, high density lipoprotein cholesterol; HOMA IR, homeostasis model assessment of insulin resistance; LDL-C, low density lipoprotein cholesterol.\n\na≥college graduate.\n\nb≥3 times per week.\n\nTable 3 shows the age- and sex-adjusted and fully adjusted (including eGFR) HRs for all-cause and CVD mortality according to the four baseline groups. For all-cause mortality, there was a higher HR in the no HTN/UACR ≥ 5.42 and the HTN/UACR ≥ 5.42 groups (HR 1.48, CI 1.02–2.15; HR 1.47, CI 0.94–2.32, respectively) compared with the no HTN/UACR < 5.42 group (reference group).\nRisk of all-cause and CVD mortality according to baseline urine ACR and HTN\nCox proportional hazards models were used to estimate HRs (hazard ratios) and 95% confidence intervals (95% CIs). Model 1: adjustment for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, education level. Model 2: Model 1 plus adjustment for body mass index, HTN, and history of CVD. Model 3: Model 2 + estimated glomerular filtration rate. Abbreviations: ACR, albumin/creatinine ratio; CVD, cardiovascular disease; HTN, hypertension.\nWhen HRs for CVD mortality were analyzed, the no HTN/ACR ≥ 5.42 and HTN/ACR ≥ 5.42 groups showed similarly significantly increased fully adjusted HRs compared with the reference group (CVD mortality: HR 5.75, 95% CI 1.54–21.47 and HR 5.87, 95% CI 1.36–25.29, respectively). However, subjects in the HTN/UACR < 5.42 group did not show statistically significantly increased HRs compared with the reference group (HR 4.13, CI 0.81–20.93). 
The number of CVD events was small.\n\nFigure 1 illustrates the Kaplan–Meier curves for all-cause death (A) and CVD death (B) according to UACR < 5.42 or UACR ≥ 5.42 and HTN status.\nKaplan–Meier curves for (A) all-cause death and (B) CVD mortality. ACR, albumin/creatinine ratio; CVD, cardiovascular disease; HTN, hypertension. ACR < 5.42 mg/g and ≥5.42 mg/g.\nAlthough overall mortality rates were low, subjects with a UACR ≥ 5.42 mg/g, with or without HTN, showed a similar increase in all-cause and CVD mortality over approximately 8 years.\nThe multivariate-adjusted HRs and CIs for all-cause and cardiovascular mortality according to the UACR quartiles are listed in Supplementary Tables S1 and S2.\nWhen the HR of the first quartile group for all-cause mortality was set as the reference, HRs for all-cause mortality tended to increase in proportion to the UACR quartile from the second to the fourth quartile group (Supplementary Table S1). However, these associations were not statistically significant.\nSimilar associations were seen for CVD mortality; statistical significance was present in men and in subjects aged ≥50 years, with eGFR ≥ 90 ml/min, or with BMI < 25 kg/m2 (Supplementary Table S2).", "We describe the effects of HTN and urinary albumin levels on all-cause and cardiovascular mortality in a large number of Koreans participating in a health screening program with a median follow-up of 5.13 years.\nEven small changes in the UACR (UACR level > 5.42 mg/g, below the microalbuminuria level) showed an incremental association with mortality in subjects with or without HTN. This trend was independent of eGFR levels. Importantly, for the first time, we showed that a UACR above 5.42 mg/g, within the whole range of albuminuria, increased the risk of death and CVD to a similar extent in subjects with or without HTN.\nIn addition, subjects with a UACR > 5.42 mg/g and HTN had a 3.27-fold increased risk of CVD mortality compared to those with a UACR < 5.42 mg/g and no HTN.\nIn general, urinary albumin excretion is classified as: normoalbuminuria (< 30 mg per day or UACR < 30 mg/g), microalbuminuria (30–300 mg per day or UACR 30–300 mg/g, equivalent to 3.4–34 mg/mmol), and macroalbuminuria (> 300 mg per day or UACR > 300 mg/g).\nFurthermore, these results suggest that urinary albumin, in particular at levels below the microalbuminuria level, might contribute more to CVD and all-cause mortality than HTN in a healthy, young, occupational cohort. 
Thus, this study supports the concept that regular examination for albuminuria, at any level, could help prevent organ damage in hypertensive patients.\nThe 2013 European Society of Hypertension (ESH) and the European Society of Cardiology (ESC) hypertension guidelines recommended that all hypertensive patients have a test for microalbuminuria with a spot urine sample.18 In addition, the guidelines stressed that a test for microalbuminuria is a cost-effective test for organ damage in hypertensive patients.19 Furthermore, some prospective studies indicate an association, without threshold effects, between urinary albumin excretion and cardiovascular mortality in the general population11,14 and between microalbuminuria and cardiovascular mortality in non-diabetic hypertensive patients.20\nA cohort study performed in North America, South America, and Europe that followed individuals ≥ 55 years old for a median of 4.5 years showed that any degree of albuminuria was a risk factor for the occurrence of CV events.5 Another multicenter cohort study involving patients with HTN and left ventricular hypertrophy also showed an association between UACR and increased cardiovascular morbidity and mortality, with no UACR threshold necessary to demonstrate the increased risk.21 Another study suggested that urinary albumin excretion testing improves the accuracy of cardiovascular risk assessment in patients with HTN.22 It has also been suggested that albuminuria may be as reliable as cardiac and carotid ultrasound evaluations in predicting cardiovascular risk in hypertensive patients.22 Cuspidi et al. reported that combined ultrasound and microalbuminuria screening can improve the accuracy of target organ damage detection by 10-fold compared with routine investigations.23\nThus, our study suggests that a UACR > 5.42 mg/g (below the microalbuminuria level) is associated with an increased risk of death and CVD to a similar extent in those with or without HTN. This means that the UACR level, even at levels far below the microalbuminuria level, might be as useful a tool for risk stratification and evaluation of target organ damage as the presence or absence of HTN.\nIt is well known that overt proteinuria, macroalbuminuria, and microalbuminuria are associated with CVD mortality and other metabolic parameters.4,11,24 The Heart Outcomes Prevention Evaluation (HOPE) study found a continuous association between albuminuria and cardiovascular events starting well below the microalbuminuria cutoff level in high-risk patients with CVD.5 Dell’Omo et al. suggested shifting the threshold level for diagnosing microalbuminuria in hypertensive patients downward.25 That study showed that high-normal albuminuria is associated with cardiovascular risk factors and predicts morbid events in hypertensive subjects. 
Some studies have raised questions regarding the original definition of microalbuminuria when evaluating the risk of CVD or death.5,13,26,27 Thus, in recent years the focus of research has shifted towards the prognostic value of low-grade albuminuria at levels below the microalbuminuria range.28\nThe Nord-Trøndelag Health Study reported that the lowest UACR level associated with a 2.2-fold increased risk for mortality was the 60th percentile (≥ 6.7 mg/g) during a 4.4-year follow-up of 2,089 subjects without diabetes and with treated HTN.29 In individuals with or without DM in the HOPE study, the relative risk of cardiovascular death in the fourth quartile of the UACR (UACR > 1.62 mg/mmol) was 1.97 compared with the lowest quartile.5 In a general-population cohort study of subjects with UACR below 30 mg/g, the hazard ratio for HTN in the fourth quartile (UACR ≥ 7.4 mg/g) was 1.97, and the risk of CVD death increased 3.37-fold compared with the lowest quartile (UACR < 3.4 mg/g).15\nOur findings provide further evidence to support the assumption that UACR, even below the microalbuminuria level (UACR cutoff > 5.42 mg/g), with or without HTN, could predict the occurrence of death as well as CVD. This result suggests that the linkage between albuminuria and mortality might not be through elevated blood pressure, which is known to be closely associated with urinary albumin excretion in general. Thus, our study shows that the risk of death and CVD was significantly increased if the UACR was ≥ 5.42 mg/g, independent of HTN status. In addition, our study found that albuminuria is superior to HTN status in predicting all-cause and CVD mortality after adjusting for associated cardiovascular risk factors, given the relatively short follow-up period. Although there were few cardiovascular deaths, albuminuria and HTN could have an additive effect on all-cause and CVD mortality outcomes.\nThe pathophysiological processes that link albuminuria and CVD are unclear.\nIncreased albumin excretion has been related to endothelial alterations in the glomerular capillaries in patients with essential HTN.30 However, the cause of albuminuria in persons without HTN is still uncertain. In patients without heart failure, elevated UACR predicts future hospitalization for heart failure.31\nIncreased UACR is also associated with left ventricular hypertrophy, a potent risk factor for progression to heart failure.31 Another study showed that adverse hemodynamics may play a role in albuminuria in the absence of HTN or DM.32\nIn addition, low-grade inflammation can be both a cause and a consequence of endothelial dysfunction. Some previous studies have linked markers of low-grade inflammation, such as C-reactive protein, IL-6, and TNF-α, to the occurrence and progression of microalbuminuria and an increased risk for atherosclerotic disease.33\nAlternatively, endothelial dysfunction leading simultaneously to albuminuria and subclinical coronary artery disease (CAD) could explain the association.34\nThe Multi-Ethnic Study of Atherosclerosis, including 6,774 asymptomatic individuals, demonstrated an increased risk of incident coronary artery calcification (CAC) as well as greater CAC progression among those with microalbuminuria.35\nKramer et al. 
Kramer et al. reported that, among 6,814 participants without clinical CVD, mean CAC scores were higher in participants with high-normal urinary albumin excretion, microalbuminuria, and macroalbuminuria than in those with normal urinary albumin excretion.34 That study concluded that higher urinary albumin, including levels below microalbuminuria, may reflect the presence of subclinical CVD among adults without established CVD.34

Our results indicate that, in the general population, the increased risk of CVD and mortality may vary with the degree of albuminuria, in the presence or absence of HTN.

When interpreting our results, several limitations should be considered. First, urinary albumin excretion was measured on a single voided urine collection, which may not have accurately reflected the true level of albuminuria. However, prior research suggests that a single-void urine UACR correlates highly with 24-hour urinary albumin excretion, with high specificity and sensitivity; therefore, it can be used to estimate quantitative microalbuminuria.14 Second, we did not address the severity of HTN. Previous studies have reported that BP is positively correlated with albuminuria;12 thus, mean and median UACR levels differed somewhat. Third, the pharmacological therapies used by the subjects may not be fully reflected in the medication history. However, subjects with a prior history of antihypertensive medication use were excluded from our study population, removing the influence of antihypertensive treatment. Fourth, the study was performed in a relatively homogenous population of working individuals who participated in a health screening program, and it is not fully representative of the entire Korean population. However, the number of subjects included in our study is larger than in any previous study investigating this question.

In conclusion, this study shows that urinary albumin is an important marker of both cardiovascular and all-cause mortality in subjects with or without HTN. Subjects with a UACR ≥ 5.42 mg/g have a similarly increased risk of death and CVD whether or not HTN is present. This increased risk is independent of age, sex, smoking status, alcohol intake, regular exercise, BMI, HTN, history of CVD, and eGFR. In the general population, the risk of death and CVD increases as the UACR increases, with or without HTN. Thus, urinary albumin is more attributable to CVD and all-cause mortality than HTN. Urinary albumin measurement might be a valuable tool to identify subjects, with or without HTN, at higher risk of CVD and all-cause mortality.
[ "METHODS", "Study population", "Measurements", "Ascertainment of mortality", "Statistical analyses", "RESULTS", "DISCUSSION", "Supplementary Material" ]
[ " Study population The study population consisted of individuals with urinary albumin/creatinine ratio (UACR, mg/g) measurements who participated in a comprehensive health screening program at Kangbuk Samsung Hospital, Seoul, Korea, from 2002 to 2012 (n = 44,964). For this analysis, 12,311 subjects were excluded for one of more of the following reasons: 4,794 subjects had missing data on smoking status, alcohol consumption, and exercise; 868 subjects had a pre-existing history of malignancy; and 1 subject had an unknown vital status. Further analyses were undertaken after excluding subjects with diabetes (n = 3,435) and subjects with antihypertensive medication (n = 5,874). The total number of eligible individuals for the study was 32,653.\nThis study was approved by the Institutional Review Board of Kangbuk Samsung Hospital. The requirement for informed consent was waived, and deidentified information was retrieved retrospectively.\nThe study population consisted of individuals with urinary albumin/creatinine ratio (UACR, mg/g) measurements who participated in a comprehensive health screening program at Kangbuk Samsung Hospital, Seoul, Korea, from 2002 to 2012 (n = 44,964). For this analysis, 12,311 subjects were excluded for one of more of the following reasons: 4,794 subjects had missing data on smoking status, alcohol consumption, and exercise; 868 subjects had a pre-existing history of malignancy; and 1 subject had an unknown vital status. Further analyses were undertaken after excluding subjects with diabetes (n = 3,435) and subjects with antihypertensive medication (n = 5,874). The total number of eligible individuals for the study was 32,653.\nThis study was approved by the Institutional Review Board of Kangbuk Samsung Hospital. The requirement for informed consent was waived, and deidentified information was retrieved retrospectively.\n Measurements Data on medical history, medication use, and health-related behaviors were collected through a self administered questionnaire. Details regarding alcohol use included the frequency of intake per week. Current smokers were identified, and the weekly frequency of moderate- or vigorous-intensity physical activity was assessed. Body weight was measured with the subject in light clothing and no shoes to the nearest 0.1 kg using a digital scale. Height was measured to the nearest 0.1 cm.\nBody mass index (BMI) was calculated as weight in kilograms divided by height in meters squared. Trained nurses measured sitting BP with a standard mercury sphygmomanometer.\nBlood specimens were sampled from the antecubital vein after 12 hours of fasting. Serum levels of glucose, total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and high density lipoprotein (HDL) cholesterol were measured using Bayer Reagent Packs (Bayer Diagnostics, Leverkusen, Germany) on an automated chemistry analyzer (Advia 1650 Autoanalyzer; Bayer Diagnostics). Insulin levels were measured with an immunoradiometric assay (Biosource, Nivelle, Belgium) with an intra- and interassay coefficient of variation of 2.1–4.5% and 4.7–12.2%, respectively. Insulin resistance was estimated using the homeostasis model assessment of insulin resistance (HOMA-IR), calculated as insulin × glucose/22.5. The serum creatinine (SCr) level was measured by means of the alkaline picrate (Jaffe) method. 
The serum creatinine (SCr) level was measured by the alkaline picrate (Jaffe) method. We used an estimate of the glomerular filtration rate (GFR) to assess the degree of kidney impairment, calculated with the CKD-EPI equation: eGFR = 141 × min(SCr/K, 1)^a × max(SCr/K, 1)^(−1.209) × 0.993^age × 1.018 (if female) × 1.159 (if Black), where SCr is serum creatinine, K is 0.7 for females and 0.9 for males, a is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/K or 1, and max indicates the maximum of SCr/K or 1.13
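The CKD-EPI formula above translates directly into code. This hedged Python sketch (the function name is ours, and SCr is assumed to be in mg/dl) reproduces the equation as quoted:

    def ckd_epi_egfr(scr_mg_dl, age_years, female, black=False):
        """eGFR (ml/min/1.73 m2) from the CKD-EPI equation quoted in the text."""
        k = 0.7 if female else 0.9        # K in the equation
        a = -0.329 if female else -0.411  # a in the equation
        ratio = scr_mg_dl / k
        egfr = 141.0 * min(ratio, 1.0) ** a * max(ratio, 1.0) ** -1.209 * 0.993 ** age_years
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159
        return egfr

    print(round(ckd_epi_egfr(0.9, 40, female=False)))  # ~106 for a 40-year-old man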
A single morning voided urine sample was used to measure the UACR. The urinary albumin concentration was determined by immunoradiometry (radio-immunological competition assay; Immunotech Co., Prague, Czech Republic), and the urinary creatinine concentration was measured by a modified Jaffe method. The UACR measured in a spot urine sample has been reported to be highly correlated with the 24-hour urine albumin excretion level.14

Abdominal ultrasonography (Logic Q700 MR; GE, Milwaukee, WI, USA) with a 3.5 MHz probe was performed in all subjects by experienced clinical radiologists, and fatty liver was diagnosed or excluded based on standard criteria, including hepatorenal echo contrast, liver brightness, and vascular blurring.15 HTN was defined as a systolic BP ≥ 140 mm Hg, a diastolic BP ≥ 90 mm Hg, or a self-reported history of HTN. Diabetes mellitus was defined as a fasting serum glucose level ≥ 126 mg/dl, a self-reported history of diabetes, or the current use of diabetic medication.16 Obesity was defined according to criteria described for Asian populations, with a BMI threshold of ≥ 25 kg/m2.17 Receiver operating characteristic (ROC) curve analyses were performed to find an appropriate UACR cutoff for predicting all-cause and CVD mortality; the cut-point was 5.42 mg/g.

Four groups were then defined as follows: (i) subjects below a UACR of 5.42 mg/g without HTN (no HTN/UACR < 5.42); (ii) subjects below 5.42 with HTN (HTN/UACR < 5.42); (iii) subjects at or above 5.42 without HTN (no HTN/UACR ≥ 5.42); and (iv) subjects at or above 5.42 with HTN (HTN/UACR ≥ 5.42).
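A short sketch of this four-way assignment (illustrative function and label strings only, not the study's code):

    def exposure_group(uacr_mg_g, htn):
        """Assign one of the four UACR/HTN groups (cutoff 5.42 mg/g)."""
        uacr_part = "UACR >= 5.42" if uacr_mg_g >= 5.42 else "UACR < 5.42"
        htn_part = "HTN" if htn else "no HTN"
        return htn_part + "/" + uacr_part

    print(exposure_group(3.1, htn=False))  # no HTN/UACR < 5.42 (reference group)
    print(exposure_group(8.0, htn=True))   # HTN/UACR >= 5.42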
Ascertainment of mortality

Mortality follow-up between January 1, 2002 and December 31, 2012 was based on the nationwide death certificate data of the Korea National Statistical Office. Deaths among subjects were confirmed by matching the information to death records. Causes of death were coded centrally by trained coders using the ICD-10 classification (International Classification of Diseases, 10th revision), and ICD-10 codes I00–I99 were considered to represent cardiovascular death.

Statistical analyses

The χ2-test and Student's t test were used to compare the characteristics of the deceased and surviving study participants at baseline. We used ROC curve analysis to identify the optimal UACR cutoff value for predicting all-cause and CVD mortality.
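The text does not say how the 5.42 mg/g cut-point was read off the ROC curve; one common convention is to maximize Youden's J (sensitivity + specificity − 1). A sketch under that assumption, using scikit-learn and synthetic data:

    import numpy as np
    from sklearn.metrics import roc_curve

    def youden_cutoff(died, uacr):
        """Return the UACR threshold maximizing Youden's J = TPR - FPR."""
        fpr, tpr, thresholds = roc_curve(died, uacr)
        return float(thresholds[np.argmax(tpr - fpr)])

    # Toy data only -- not the study's cohort.
    rng = np.random.default_rng(0)
    uacr = np.concatenate([rng.lognormal(1.5, 0.6, 1000),  # survivors
                           rng.lognormal(2.0, 0.6, 50)])   # deaths
    died = np.concatenate([np.zeros(1000), np.ones(50)])
    print(youden_cutoff(died, uacr))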
Cox proportional hazards models were used to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause and cardiovascular mortality, comparing the HTN/UACR < 5.42, no HTN/UACR ≥ 5.42, and HTN/UACR ≥ 5.42 groups with the reference no HTN/UACR < 5.42 group. The models were initially adjusted for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level (Model 1). In Model 2, the models were further adjusted for BMI, HTN, diabetes, and history of CVD. Model 3 included adjustments for the same factors as Model 2 plus an additional adjustment for eGFR. In addition, the participants were stratified into quartiles according to UACR. UACR quartiles were categorized as follows: Q1, < 3.3 mg/g; Q2, 3.3–4.6 mg/g; Q3, 4.7–7.2 mg/g; and Q4, ≥ 7.3 mg/g. The risks of all-cause and CVD mortality were analyzed by degree of albuminuria within subgroups based on conventional cardiovascular risk factors. Adjustment or stratification was made for multiple confounders/effect modifiers, including age, sex, BMI, eGFR, alcohol intake and exercise, educational attainment (college graduation or higher), smoking status, and prior evidence of CVD.

The proportional hazards assumption was checked by examining graphs of estimated log(–log) survival. Statistical analysis was performed using Stata, version 11.2. All reported P-values are two-tailed, and P < 0.05 was considered statistically significant.
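The study fit its models in Stata 11.2; for readers working in Python, a hedged sketch of the same two steps (Cox regression, then a log(–log) survival check) using the lifelines package follows. The data frame, column names, and covariate subset are illustrative assumptions.

    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter

    # Hypothetical analysis frame: one row per subject.
    df = pd.DataFrame({
        "followup_years": [5.1, 4.8, 6.0, 2.3, 5.5, 3.9, 6.5, 5.9],
        "died":           [0,   1,   1,   0,   1,   0,   0,   1],
        "age":            [42,  55,  61,  38,  59,  47,  66,  64],
        "uacr_ge_5_42":   [0,   0,   1,   0,   1,   1,   1,   0],
    })

    # Cox model: all columns except duration/event enter as covariates.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_years", event_col="died")
    cph.print_summary()

    # Proportional hazards check via log(-log) survival curves, as in the text.
    kmf = KaplanMeierFitter()
    for flag, sub in df.groupby("uacr_ge_5_42"):
        kmf.fit(sub["followup_years"], sub["died"], label="UACR>=5.42: %d" % flag)
        kmf.plot_loglogs()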
RESULTS

There were 249 deaths during the follow-up period (Table 1). Table 1 describes the baseline clinical characteristics analyzed according to vital status at follow-up. Subjects who died were older, and a higher proportion had a history of HTN at baseline. Other conventional cardiovascular risk factors, including glucose, triglycerides, and blood pressure, were also higher, and HDL-C and the proportion performing regular exercise were lower, in subjects who died by the end of the follow-up period. At baseline, mean and median UACR were higher in the group who died during follow-up than in survivors.

Table 1. Baseline characteristics of the cohort according to death at follow-up. Data are mean (SD), median (interquartile range), or percentage. Abbreviations: ACR, albumin/creatinine ratio; BMI, body mass index; BP, blood pressure; CVD, cardiovascular disease; HDL-C, high density lipoprotein cholesterol; HOMA-IR, homeostasis model assessment of insulin resistance; LDL-C, low density lipoprotein cholesterol. a: ≥ college graduate; b: ≥ 3 times per week.

Table 2 shows the baseline characteristics across the four previously defined groups, based on UACR below or above the 75th percentile and the absence or presence of HTN. Differences in characteristics across the four groups were statistically significant. Median UACR levels were higher in the group with HTN at baseline than in the group without HTN.

Table 2. Baseline characteristics of study subjects by urine ACR quartile. Data are mean (SD), median (interquartile range), or percentage. Abbreviations: ACR, albumin/creatinine ratio; BP, blood pressure; BMI, body mass index; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; HTN, hypertension; HDL-C, high density lipoprotein cholesterol; HOMA-IR, homeostasis model assessment of insulin resistance; LDL-C, low density lipoprotein cholesterol. a: ≥ college graduate; b: ≥ 3 times per week.

Table 3 shows the age-adjusted, sex-adjusted, and fully adjusted (including eGFR) HRs for all-cause and CVD mortality according to the four baseline groups. For all-cause mortality, the HR was higher in the no HTN/UACR ≥ 5.42 and HTN/UACR ≥ 5.42 groups (HR 1.48, CI 1.02–2.15 and HR 1.47, CI 0.94–2.32, respectively) than in the no HTN/UACR < 5.42 group (reference group).

Table 3. Risk of all-cause and CVD mortality according to baseline urine ACR and HTN. Cox proportional hazards models were used to estimate HRs (hazard ratios) and 95% confidence intervals (95% CIs). Model 1: adjustment for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level. Model 2: Model 1 plus adjustment for body mass index, HTN, and history of CVD. Model 3: Model 2 plus estimated glomerular filtration rate. Abbreviations: ACR, albumin/creatinine ratio; CVD, cardiovascular disease; HTN, hypertension.

When HRs for CVD mortality were analyzed, the no HTN/ACR ≥ 5.42 and HTN/ACR ≥ 5.42 groups showed similar, significantly increased fully adjusted HRs compared with the reference group (CVD mortality: HR 5.75, 95% CI 1.54–21.47; HR 5.87, 95% CI 1.36–25.29). However, subjects in the HTN/UACR < 5.42 group did not show statistically significantly increased HRs compared with the reference group (HR 4.13, CI 0.81–20.93). The number of CVD events was small.
Figure 1 illustrates the Kaplan–Meier curves for all-cause death (A) and CVD death (B) according to UACR < 5.42 or ≥ 5.42 mg/g and HTN status.

Figure 1. Kaplan–Meier curves for (A) all-cause death and (B) CVD mortality, by ACR (< 5.42 mg/g vs ≥ 5.42 mg/g) and HTN status. ACR, albumin/creatinine ratio; CVD, cardiovascular disease; HTN, hypertension.

Although overall mortality rates were low, subjects with a UACR ≥ 5.42 mg/g, with or without HTN, showed a similarly increased all-cause and CVD mortality over approximately 8 years.

The multivariate-adjusted HRs and CIs for all-cause and cardiovascular mortality according to UACR quartiles are listed in Supplementary Tables S1 and S2. With the first quartile group set as the reference for all-cause mortality, HRs tended to increase in proportion to the UACR quartile from the second to the fourth quartile group (Supplementary Table S1). However, these associations were not statistically significant. For associations with CVD mortality, statistical significance was present in men, subjects aged ≥ 50 years, those with eGFR ≥ 90 ml/min, and those with a BMI < 25 kg/m2 (Supplementary Table S2).
Supplementary Material

Click here for additional data file.
[ "methods", null, null, null, null, null, null, "supplementary-material" ]
[ "all-cause mortality", "blood pressure", "cardiovascular disease mortality", "hypertension", "urinary albumin." ]
METHODS: Study population The study population consisted of individuals with urinary albumin/creatinine ratio (UACR, mg/g) measurements who participated in a comprehensive health screening program at Kangbuk Samsung Hospital, Seoul, Korea, from 2002 to 2012 (n = 44,964). For this analysis, 12,311 subjects were excluded for one of more of the following reasons: 4,794 subjects had missing data on smoking status, alcohol consumption, and exercise; 868 subjects had a pre-existing history of malignancy; and 1 subject had an unknown vital status. Further analyses were undertaken after excluding subjects with diabetes (n = 3,435) and subjects with antihypertensive medication (n = 5,874). The total number of eligible individuals for the study was 32,653. This study was approved by the Institutional Review Board of Kangbuk Samsung Hospital. The requirement for informed consent was waived, and deidentified information was retrieved retrospectively. The study population consisted of individuals with urinary albumin/creatinine ratio (UACR, mg/g) measurements who participated in a comprehensive health screening program at Kangbuk Samsung Hospital, Seoul, Korea, from 2002 to 2012 (n = 44,964). For this analysis, 12,311 subjects were excluded for one of more of the following reasons: 4,794 subjects had missing data on smoking status, alcohol consumption, and exercise; 868 subjects had a pre-existing history of malignancy; and 1 subject had an unknown vital status. Further analyses were undertaken after excluding subjects with diabetes (n = 3,435) and subjects with antihypertensive medication (n = 5,874). The total number of eligible individuals for the study was 32,653. This study was approved by the Institutional Review Board of Kangbuk Samsung Hospital. The requirement for informed consent was waived, and deidentified information was retrieved retrospectively. Measurements Data on medical history, medication use, and health-related behaviors were collected through a self administered questionnaire. Details regarding alcohol use included the frequency of intake per week. Current smokers were identified, and the weekly frequency of moderate- or vigorous-intensity physical activity was assessed. Body weight was measured with the subject in light clothing and no shoes to the nearest 0.1 kg using a digital scale. Height was measured to the nearest 0.1 cm. Body mass index (BMI) was calculated as weight in kilograms divided by height in meters squared. Trained nurses measured sitting BP with a standard mercury sphygmomanometer. Blood specimens were sampled from the antecubital vein after 12 hours of fasting. Serum levels of glucose, total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and high density lipoprotein (HDL) cholesterol were measured using Bayer Reagent Packs (Bayer Diagnostics, Leverkusen, Germany) on an automated chemistry analyzer (Advia 1650 Autoanalyzer; Bayer Diagnostics). Insulin levels were measured with an immunoradiometric assay (Biosource, Nivelle, Belgium) with an intra- and interassay coefficient of variation of 2.1–4.5% and 4.7–12.2%, respectively. Insulin resistance was estimated using the homeostasis model assessment of insulin resistance (HOMA-IR), calculated as insulin × glucose/22.5. The serum creatinine (SCr) level was measured by means of the alkaline picrate (Jaffe) method. 
We used an estimation of the glomerular filtration rate (GFR) to assess the degree of kidney impairment, which was calculated using the CKD-EPI equation: eGFR = 141 × min (SCr/K, 1)a × max (SCr/K, 1)−1.209 × 0.993age × 1.018 (if female) × 1.159 (if Black), where SCr is serum creatinine, K is 0.7 for females and 0.9 for males, a is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/K or 1, and max indicates the maximum of SCr/K or 1.13 A single morning voided urine sample was used to measure the UACR. The urinary albumin concentration was determined by immunoradiometry (Radio-immunological competition assay, Immunotech Co., Prague, Czech Republic), and the urinary creatinine concentration was measured by a modified Jaffe method. The UACR measured in a spot urine sample has been reported to be highly correlated with the 24-hour urine albumin excretion level.14 Abdominal ultrasonography (Logic Q700 MR; GE, Milwaukee, WI, USA) using a 3.5 MHz probe was performed in all subjects by experienced clinical radiologists, and fatty liver was diagnosed or excluded based on standard criteria, including hepatorenal echo contrast, liver brightness, and vascular blurring.15 HTN was defined as a systolic BP ≥ 140 mm Hg, diastolic BP ≥ 90 mm Hg, a self-reported history of HTN. Diabetes mellitus was defined as a fasting serum glucose level ≥ 126 mg/dl, a self-reported history of diabetes, or the current use of diabetic medication.16 Obesity was defined according to those described for Asian populations, and the BMI threshold for obesity was ≥ 25 kg/m2.17 Receiver operating characteristic curve analyses were performed to find out appropriate UACR cutoff for predicting all-cause and CVD mortality.and the cut-point was 5.42 mg/g. Four groups were then defined as follows: (i) subjects who were below the UACR 5.42 mg/g but without HTN (no HTN/UACR < 5.42); (ii) subjects who were below 5.42 for UACR with HTN (HTN/UACR < 5.42); (iii) subjects at or above 5.42 for UACR without HTN (no HTN/UACR ≥ 5.42); and (iv) subjects at or above 5.42 for UACR with HTN (HTN/UACR ≥ 5.42) Data on medical history, medication use, and health-related behaviors were collected through a self administered questionnaire. Details regarding alcohol use included the frequency of intake per week. Current smokers were identified, and the weekly frequency of moderate- or vigorous-intensity physical activity was assessed. Body weight was measured with the subject in light clothing and no shoes to the nearest 0.1 kg using a digital scale. Height was measured to the nearest 0.1 cm. Body mass index (BMI) was calculated as weight in kilograms divided by height in meters squared. Trained nurses measured sitting BP with a standard mercury sphygmomanometer. Blood specimens were sampled from the antecubital vein after 12 hours of fasting. Serum levels of glucose, total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and high density lipoprotein (HDL) cholesterol were measured using Bayer Reagent Packs (Bayer Diagnostics, Leverkusen, Germany) on an automated chemistry analyzer (Advia 1650 Autoanalyzer; Bayer Diagnostics). Insulin levels were measured with an immunoradiometric assay (Biosource, Nivelle, Belgium) with an intra- and interassay coefficient of variation of 2.1–4.5% and 4.7–12.2%, respectively. Insulin resistance was estimated using the homeostasis model assessment of insulin resistance (HOMA-IR), calculated as insulin × glucose/22.5. 
The serum creatinine (SCr) level was measured by means of the alkaline picrate (Jaffe) method. We used an estimation of the glomerular filtration rate (GFR) to assess the degree of kidney impairment, which was calculated using the CKD-EPI equation: eGFR = 141 × min (SCr/K, 1)a × max (SCr/K, 1)−1.209 × 0.993age × 1.018 (if female) × 1.159 (if Black), where SCr is serum creatinine, K is 0.7 for females and 0.9 for males, a is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/K or 1, and max indicates the maximum of SCr/K or 1.13 A single morning voided urine sample was used to measure the UACR. The urinary albumin concentration was determined by immunoradiometry (Radio-immunological competition assay, Immunotech Co., Prague, Czech Republic), and the urinary creatinine concentration was measured by a modified Jaffe method. The UACR measured in a spot urine sample has been reported to be highly correlated with the 24-hour urine albumin excretion level.14 Abdominal ultrasonography (Logic Q700 MR; GE, Milwaukee, WI, USA) using a 3.5 MHz probe was performed in all subjects by experienced clinical radiologists, and fatty liver was diagnosed or excluded based on standard criteria, including hepatorenal echo contrast, liver brightness, and vascular blurring.15 HTN was defined as a systolic BP ≥ 140 mm Hg, diastolic BP ≥ 90 mm Hg, a self-reported history of HTN. Diabetes mellitus was defined as a fasting serum glucose level ≥ 126 mg/dl, a self-reported history of diabetes, or the current use of diabetic medication.16 Obesity was defined according to those described for Asian populations, and the BMI threshold for obesity was ≥ 25 kg/m2.17 Receiver operating characteristic curve analyses were performed to find out appropriate UACR cutoff for predicting all-cause and CVD mortality.and the cut-point was 5.42 mg/g. Four groups were then defined as follows: (i) subjects who were below the UACR 5.42 mg/g but without HTN (no HTN/UACR < 5.42); (ii) subjects who were below 5.42 for UACR with HTN (HTN/UACR < 5.42); (iii) subjects at or above 5.42 for UACR without HTN (no HTN/UACR ≥ 5.42); and (iv) subjects at or above 5.42 for UACR with HTN (HTN/UACR ≥ 5.42) Ascertainment of mortality Mortality follow-up between January 1, 2002 and December 31, 2012 was based on the nationwide death certificate data of the Korea National Statistical Office. Deaths among subjects were confirmed by matching the information to death records. Causes of death were coded centrally by trained coders using the ICD-10 classification (International Classification of Diseases, 10th revision) and ICD 00-99 codes were considered to represent cardiovascular death. Mortality follow-up between January 1, 2002 and December 31, 2012 was based on the nationwide death certificate data of the Korea National Statistical Office. Deaths among subjects were confirmed by matching the information to death records. Causes of death were coded centrally by trained coders using the ICD-10 classification (International Classification of Diseases, 10th revision) and ICD 00-99 codes were considered to represent cardiovascular death. Statistical analyses The χ2-test and Student’s t test were used to compare the characteristics of the deceased and alive study participants at baseline. We used receiver operating characteristic curve analysis to identify the optimal UACR cutoff value for predicting all-cause and CVD mortality. 
Cox proportional hazards models were used to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause and cardiovascular mortality, comparing the HTN/UACR < 5.42, no HTN/UACR ≥ 5.42, and the HTN/UACR ≥ 5.42 groups with the reference no HTN/UACR < 5.42 group. The models were initially adjusted for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level (Model 1). In Model 2: the models were further adjusted for BMI, HTN, diabetes, and history of CVD. Model 3 included adjustments for the same factors as Model 2 plus an additional adjustment for eGFR. In addition, the participants were stratified into quartiles according to UACR. UACR quartiles were categorized as following; Q1: < 3.3 mg/g, Q2: 3.3–4.6 mg/g, Q3: 4.7–7.2 mg/g, and Q4: ≥ 7.3 mg/g. The risk of all-cause mortality and CVD mortality were analyzed with the degree of albuminuria according to sub-groups, which were based on conventional cardiovascular risk factors. Adjustment or stratification was made for multiple confounders/effect modifiers including age, sex, BMI, eGFR, alcohol intake and exercise, educational attainment (college graduation or higher), smoking status and prior evidence of CVD. The proportional hazards assumption was checked by examining graphs of estimated log (–log) survival. Statistical analysis was performed using Stata, version 11.2. All reported P-values are two tailed, and P < 0.05 was considered statistically significant. The χ2-test and Student’s t test were used to compare the characteristics of the deceased and alive study participants at baseline. We used receiver operating characteristic curve analysis to identify the optimal UACR cutoff value for predicting all-cause and CVD mortality. Cox proportional hazards models were used to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause and cardiovascular mortality, comparing the HTN/UACR < 5.42, no HTN/UACR ≥ 5.42, and the HTN/UACR ≥ 5.42 groups with the reference no HTN/UACR < 5.42 group. The models were initially adjusted for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level (Model 1). In Model 2: the models were further adjusted for BMI, HTN, diabetes, and history of CVD. Model 3 included adjustments for the same factors as Model 2 plus an additional adjustment for eGFR. In addition, the participants were stratified into quartiles according to UACR. UACR quartiles were categorized as following; Q1: < 3.3 mg/g, Q2: 3.3–4.6 mg/g, Q3: 4.7–7.2 mg/g, and Q4: ≥ 7.3 mg/g. The risk of all-cause mortality and CVD mortality were analyzed with the degree of albuminuria according to sub-groups, which were based on conventional cardiovascular risk factors. Adjustment or stratification was made for multiple confounders/effect modifiers including age, sex, BMI, eGFR, alcohol intake and exercise, educational attainment (college graduation or higher), smoking status and prior evidence of CVD. The proportional hazards assumption was checked by examining graphs of estimated log (–log) survival. Statistical analysis was performed using Stata, version 11.2. All reported P-values are two tailed, and P < 0.05 was considered statistically significant. 
Study population: The study population consisted of individuals with urinary albumin/creatinine ratio (UACR, mg/g) measurements who participated in a comprehensive health screening program at Kangbuk Samsung Hospital, Seoul, Korea, from 2002 to 2012 (n = 44,964). For this analysis, 12,311 subjects were excluded for one of more of the following reasons: 4,794 subjects had missing data on smoking status, alcohol consumption, and exercise; 868 subjects had a pre-existing history of malignancy; and 1 subject had an unknown vital status. Further analyses were undertaken after excluding subjects with diabetes (n = 3,435) and subjects with antihypertensive medication (n = 5,874). The total number of eligible individuals for the study was 32,653. This study was approved by the Institutional Review Board of Kangbuk Samsung Hospital. The requirement for informed consent was waived, and deidentified information was retrieved retrospectively. Measurements: Data on medical history, medication use, and health-related behaviors were collected through a self administered questionnaire. Details regarding alcohol use included the frequency of intake per week. Current smokers were identified, and the weekly frequency of moderate- or vigorous-intensity physical activity was assessed. Body weight was measured with the subject in light clothing and no shoes to the nearest 0.1 kg using a digital scale. Height was measured to the nearest 0.1 cm. Body mass index (BMI) was calculated as weight in kilograms divided by height in meters squared. Trained nurses measured sitting BP with a standard mercury sphygmomanometer. Blood specimens were sampled from the antecubital vein after 12 hours of fasting. Serum levels of glucose, total cholesterol, triglycerides, low-density lipoprotein (LDL) cholesterol, and high density lipoprotein (HDL) cholesterol were measured using Bayer Reagent Packs (Bayer Diagnostics, Leverkusen, Germany) on an automated chemistry analyzer (Advia 1650 Autoanalyzer; Bayer Diagnostics). Insulin levels were measured with an immunoradiometric assay (Biosource, Nivelle, Belgium) with an intra- and interassay coefficient of variation of 2.1–4.5% and 4.7–12.2%, respectively. Insulin resistance was estimated using the homeostasis model assessment of insulin resistance (HOMA-IR), calculated as insulin × glucose/22.5. The serum creatinine (SCr) level was measured by means of the alkaline picrate (Jaffe) method. We used an estimation of the glomerular filtration rate (GFR) to assess the degree of kidney impairment, which was calculated using the CKD-EPI equation: eGFR = 141 × min (SCr/K, 1)a × max (SCr/K, 1)−1.209 × 0.993age × 1.018 (if female) × 1.159 (if Black), where SCr is serum creatinine, K is 0.7 for females and 0.9 for males, a is −0.329 for females and −0.411 for males, min indicates the minimum of SCr/K or 1, and max indicates the maximum of SCr/K or 1.13 A single morning voided urine sample was used to measure the UACR. The urinary albumin concentration was determined by immunoradiometry (Radio-immunological competition assay, Immunotech Co., Prague, Czech Republic), and the urinary creatinine concentration was measured by a modified Jaffe method. 
The UACR measured in a spot urine sample has been reported to be highly correlated with the 24-hour urine albumin excretion level.14 Abdominal ultrasonography (Logic Q700 MR; GE, Milwaukee, WI, USA) using a 3.5 MHz probe was performed in all subjects by experienced clinical radiologists, and fatty liver was diagnosed or excluded based on standard criteria, including hepatorenal echo contrast, liver brightness, and vascular blurring.15 HTN was defined as a systolic BP ≥ 140 mm Hg, diastolic BP ≥ 90 mm Hg, a self-reported history of HTN. Diabetes mellitus was defined as a fasting serum glucose level ≥ 126 mg/dl, a self-reported history of diabetes, or the current use of diabetic medication.16 Obesity was defined according to those described for Asian populations, and the BMI threshold for obesity was ≥ 25 kg/m2.17 Receiver operating characteristic curve analyses were performed to find out appropriate UACR cutoff for predicting all-cause and CVD mortality.and the cut-point was 5.42 mg/g. Four groups were then defined as follows: (i) subjects who were below the UACR 5.42 mg/g but without HTN (no HTN/UACR < 5.42); (ii) subjects who were below 5.42 for UACR with HTN (HTN/UACR < 5.42); (iii) subjects at or above 5.42 for UACR without HTN (no HTN/UACR ≥ 5.42); and (iv) subjects at or above 5.42 for UACR with HTN (HTN/UACR ≥ 5.42) Ascertainment of mortality: Mortality follow-up between January 1, 2002 and December 31, 2012 was based on the nationwide death certificate data of the Korea National Statistical Office. Deaths among subjects were confirmed by matching the information to death records. Causes of death were coded centrally by trained coders using the ICD-10 classification (International Classification of Diseases, 10th revision) and ICD 00-99 codes were considered to represent cardiovascular death. Statistical analyses: The χ2-test and Student’s t test were used to compare the characteristics of the deceased and alive study participants at baseline. We used receiver operating characteristic curve analysis to identify the optimal UACR cutoff value for predicting all-cause and CVD mortality. Cox proportional hazards models were used to estimate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for all-cause and cardiovascular mortality, comparing the HTN/UACR < 5.42, no HTN/UACR ≥ 5.42, and the HTN/UACR ≥ 5.42 groups with the reference no HTN/UACR < 5.42 group. The models were initially adjusted for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level (Model 1). In Model 2: the models were further adjusted for BMI, HTN, diabetes, and history of CVD. Model 3 included adjustments for the same factors as Model 2 plus an additional adjustment for eGFR. In addition, the participants were stratified into quartiles according to UACR. UACR quartiles were categorized as following; Q1: < 3.3 mg/g, Q2: 3.3–4.6 mg/g, Q3: 4.7–7.2 mg/g, and Q4: ≥ 7.3 mg/g. The risk of all-cause mortality and CVD mortality were analyzed with the degree of albuminuria according to sub-groups, which were based on conventional cardiovascular risk factors. Adjustment or stratification was made for multiple confounders/effect modifiers including age, sex, BMI, eGFR, alcohol intake and exercise, educational attainment (college graduation or higher), smoking status and prior evidence of CVD. The proportional hazards assumption was checked by examining graphs of estimated log (–log) survival. 
Statistical analysis was performed using Stata, version 11.2. All reported P values are two-tailed, and P < 0.05 was considered statistically significant. RESULTS: There were 249 deaths during the follow-up period (Table 1). Table 1 describes the baseline clinical characteristics analyzed according to vital status at follow-up. Subjects who died were older, and a higher proportion had a history of HTN at baseline. Other conventional cardiovascular risk factors, including glucose, triglycerides, and blood pressure, were also higher, and HDL-C and the proportion of those performing regular exercise were lower in subjects who died by the end of the follow-up period. At baseline, the mean and median UACR were higher in the group who died during follow-up than in survivors. Baseline characteristics of the cohort according to death at follow-up: Data are mean (SD), median (interquartile range), or percentage. Abbreviations: ACR, albumin/creatinine ratio; BMI, body mass index; BP, blood pressure; CVD, cardiovascular disease; HDL-C, high-density lipoprotein cholesterol; HOMA-IR, homeostasis model assessment of insulin resistance; LDL-C, low-density lipoprotein cholesterol. a: ≥ college graduate. b: ≥ 3 times per week. Table 2 shows the baseline characteristics across the four previously defined groups, based on UACR below or above the 5.42 mg/g cutoff and the absence or presence of HTN. Differences in characteristics across the four groups were statistically significant. Median UACR levels were higher in the group that had HTN at baseline than in the group without HTN. Baseline characteristics of study subjects by urine ACR quartile: Data are mean (SD), median (interquartile range), or percentage. Abbreviations: ACR, albumin/creatinine ratio; BP, blood pressure; BMI, body mass index; CVD, cardiovascular disease; eGFR, estimated glomerular filtration rate; HTN, hypertension; HDL-C, high-density lipoprotein cholesterol; HOMA-IR, homeostasis model assessment of insulin resistance; LDL-C, low-density lipoprotein cholesterol. a: ≥ college graduate. b: ≥ 3 times per week. Table 3 shows the age-adjusted, sex-adjusted, and fully adjusted (including eGFR) HRs for all-cause and CVD mortality according to the four baseline groups. For all-cause mortality, the HR was higher in the no HTN/UACR ≥ 5.42 and HTN/UACR ≥ 5.42 groups (HR 1.48, CI 1.02–2.15; HR 1.47, CI 0.94–2.32, respectively) than in the no HTN/UACR < 5.42 group (reference group). Risk of all-cause and CVD mortality according to baseline urine ACR and HTN: Cox proportional hazards models were used to estimate hazard ratios (HRs) and 95% confidence intervals (95% CIs). Model 1: adjustment for age, sex, treatment center, year of screening exam, smoking status, alcohol intake, regular exercise, and education level. Model 2: Model 1 plus adjustment for body mass index, HTN, and history of CVD. Model 3: Model 2 plus estimated glomerular filtration rate. Abbreviations: ACR, albumin/creatinine ratio; CVD, cardiovascular disease; HTN, hypertension. When HRs for CVD mortality were analyzed, the no HTN/UACR ≥ 5.42 and HTN/UACR ≥ 5.42 groups showed similar, significantly increased fully adjusted HRs compared with the reference group (CVD mortality: HR 5.75, 95% CI 1.54–21.47; HR 5.87, 95% CI 1.36–25.29). However, subjects in the HTN/UACR < 5.42 group did not show a statistically significantly increased HR compared with the reference group (HR 4.13, CI 0.81–20.93). The number of CVD events was small.
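The HRs above come from the Cox models specified under Statistical analyses, fit in Stata. For readers reproducing this outside Stata, a minimal sketch with Python's lifelines package follows; the toy DataFrame, column names, and reduced covariate set are illustrative assumptions, not the study data or its full adjustment models.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy analysis table (hypothetical; the real cohort had 32,653 subjects).
df = pd.DataFrame({
    "time_years": [5.1, 4.8, 6.0, 2.3, 5.5, 3.9, 4.2, 6.1],
    "died":       [0,   1,   1,   0,   1,   0,   1,   0],
    "age":        [41,  58,  44,  45,  62,  59,  66,  39],
    "male":       [1,   0,   1,   1,   0,   1,   1,   0],
    "high_uacr":  [0,   1,   1,   0,   1,   0,   0,   1],  # UACR >= 5.42 mg/g
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="died")
cph.print_summary()        # exp(coef) column = adjusted HRs with 95% CIs
cph.check_assumptions(df)  # proportional hazards check, cf. the log(-log) plots
```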
Figure 1 illustrates the Kaplan–Meier curves for (A) all-cause death and (B) CVD death according to UACR (< 5.42 or ≥ 5.42 mg/g) and HTN status. Kaplan–Meier curves for (A) all-cause death and (B) CVD mortality. ACR, albumin/creatinine ratio; CVD, cardiovascular disease; HTN, hypertension. ACR < 5.42 mg/g and ≥ 5.42 mg/g. Although overall mortality rates were low, subjects with a UACR ≥ 5.42 mg/g, with or without HTN, showed a similar increase in all-cause and CVD mortality over approximately 8 years. The multivariate-adjusted HRs and CIs for all-cause and cardiovascular mortality according to the UACR quartiles are listed in Supplementary Tables S1 and S2. With the first quartile group set as the reference, HRs for all-cause mortality tended to increase in proportion to the UACR quartile from the second to the fourth quartile group (Supplementary Table S1). However, these associations were not statistically significant. For CVD mortality, statistically significant associations were present in men, subjects aged ≥ 50 years, subjects with eGFR ≥ 90 ml/min, and the group with BMI < 25 kg/m2 (Supplementary Table S2). DISCUSSION: We describe the effects of HTN and urinary albumin levels on all-cause and cardiovascular mortality in a large number of Koreans participating in a health screening program with a median follow-up of 5.13 years. Even small elevations in the UACR (> 5.42 mg/g, below the microalbuminuria level) showed an incremental association with mortality in subjects with or without HTN. This trend was independent of eGFR levels. Importantly, we showed for the first time that a UACR above 5.42 mg/g, across the whole range of albuminuria, increased the risk of death and CVD to a similar extent in subjects with or without HTN. In addition, subjects with a UACR > 5.42 mg/g and HTN had a 3.27-fold increased risk of CVD mortality compared to those with a UACR < 5.42 mg/g and no HTN. In general, urinary albumin excretion is classified as normoalbuminuria (< 30 mg per day or UACR < 30 mg/g), microalbuminuria (30–300 mg per day or UACR 30–300 mg/g, equivalent to 3.4–34 mg/mmol), or macroalbuminuria (> 300 mg per day or UACR > 300 mg/g). Furthermore, these results suggest that urinary albumin, in particular at levels below the microalbuminuria threshold, might contribute more to CVD and all-cause mortality than HTN in a healthy, young, occupational cohort. Thus, this study supports the concept that regular examination of albuminuria, across its whole range, could help prevent organ damage in hypertensive patients.
The 2013 European Society of Hypertension (ESH) and European Society of Cardiology (ESC) hypertension guidelines recommended that all hypertensive patients be tested for microalbuminuria with a spot urine sample.18 In addition, the guidelines stressed that testing for microalbuminuria is a cost-effective way to detect organ damage in hypertensive patients.19 Furthermore, prospective studies indicate an association, without threshold effects, between urinary albumin excretion and cardiovascular mortality in the general population11,14 and between microalbuminuria and cardiovascular mortality in nondiabetic hypertensive patients.20 A cohort study performed in North America, South America, and Europe that followed individuals ≥ 55 years old for a median of 4.5 years showed that any degree of albuminuria was a risk factor for the occurrence of CV events.5 Another multicenter cohort study involving patients with HTN and left ventricular hypertrophy also showed an association between UACR and increased cardiovascular morbidity and mortality, with no UACR threshold necessary to demonstrate the increased risk.21 Another study suggested that urinary albumin excretion testing improves the accuracy of cardiovascular risk assessment in patients with HTN.22 It has also been suggested that albuminuria may be as reliable as cardiac and carotid ultrasound evaluations in predicting cardiovascular risk in hypertensive patients.22 Cuspidi et al. reported that combined ultrasound and microalbuminuria screening can improve the accuracy of target organ damage detection by 10-fold compared with routine investigations.23 Thus, our study suggests that a UACR > 5.42 mg/g (below the microalbuminuria level) is associated with an increased risk of death and CVD to a similar extent in those with or without HTN. This means that the UACR, even at levels far below the microalbuminuria threshold, might be as useful a tool for risk stratification and for evaluating target organ damage as the presence or absence of HTN. It is well known that overt proteinuria, macroalbuminuria, and microalbuminuria are associated with CVD mortality and other metabolic parameters.4,11,24 The Heart Outcomes Prevention Evaluation (HOPE) study found a continuous association between albuminuria and cardiovascular events starting well below the microalbuminuria cutoff in high-risk patients with CVD.5 Dell'Omo et al. suggested the appropriateness of shifting the threshold for diagnosing microalbuminuria in hypertensive patients downward.25 That study showed that high-normal albuminuria is associated with cardiovascular risk factors and predicts morbid events in hypertensive subjects.
Some studies have raised questions regarding the original definition of microalbuminuria when evaluating the risk of CVD or death.5,13,26,27 Thus, in recent years the focus of research has shifted toward the prognostic value of low-grade albuminuria at levels below the microalbuminuria range.28 The Nord-Trøndelag Health Study, a 4.4-year follow-up of 2,089 subjects without diabetes and with treated HTN, reported that the lowest UACR level associated with a 2.2-fold increased risk for mortality was the 60th percentile (≥ 6.7 mg/g).29 Among individuals with or without DM in the HOPE study, the relative risk of cardiovascular death in the fourth UACR quartile (UACR > 1.62 mg/mmol) was 1.97 compared with the lowest quartile.5 In a general-population cohort study restricted to UACR below 30 mg/g, the hazard ratio for HTN in the fourth quartile (UACR ≥ 7.4 mg/g) was 1.97, and the risk of CVD death was increased 3.37-fold compared with the lowest quartile (UACR < 3.4 mg/g).15 Our findings provide further evidence to support the assumption that UACR, even below the microalbuminuria level (cutoff > 5.42 mg/g), with or without HTN, could predict the occurrence of death as well as CVD. This result suggests that the linkage between albuminuria and mortality might not operate through elevated blood pressure, which is generally known to be closely associated with urinary albumin excretion. Thus, our study shows that the risk of death and CVD was significantly increased if the UACR was ≥ 5.42 mg/g, independent of HTN status. In addition, our study found that albuminuria was superior to HTN status in predicting all-cause and CVD mortality after adjusting for associated cardiovascular risk factors, despite the relatively short follow-up period. Although there were few cardiovascular deaths, albuminuria and HTN could have an additive effect on all-cause and CVD mortality outcomes. The pathophysiological processes that link albuminuria and CVD are unclear. Increased albumin excretion has been related to endothelial alterations in the glomerular capillaries in patients with essential HTN.30 However, the cause of albuminuria in persons without HTN is still uncertain. In patients without heart failure, elevated UACR predicts future hospitalization for heart failure.31 Increased UACR is also associated with left ventricular hypertrophy, a potent risk factor for progression to heart failure.31 Another group showed that adverse hemodynamics may play a role in albuminuria in the absence of HTN or DM.32 In addition, low-grade inflammation can be both a cause and a consequence of endothelial dysfunction. Previous studies have linked markers of low-grade inflammation, such as C-reactive protein, IL-6, and TNF-α, to the occurrence and progression of microalbuminuria and an increased risk of atherosclerotic disease.33 Alternatively, endothelial dysfunction leading simultaneously to albuminuria and subclinical coronary artery disease (CAD) could explain the association.34 The Multi-Ethnic Study of Atherosclerosis, which included 6,774 asymptomatic individuals, demonstrated an increased risk of incident coronary artery calcification (CAC) as well as greater CAC progression among those with microalbuminuria.35 Kramer et al.
reported that, among 6,814 participants without clinical CVD, mean CAC scores were higher among those with high-normal urinary albumin excretion, microalbuminuria, or macroalbuminuria than among those with normal urinary albumin.34 That study concluded that higher urinary albumin, including levels below microalbuminuria, may reflect the presence of subclinical CVD among adults without established CVD.34 Our findings suggest that the pathologic link underlying the increased risk of CVD and mortality in the general population may vary according to the degree of albuminuria and the presence or absence of HTN. When interpreting our results, several limitations should be considered. First, urinary albumin excretion was measured on a single voided urine collection, which may not have accurately reflected the true level of albuminuria. However, prior research suggested that a single-void UACR correlates highly with the 24-hour urinary albumin excretion, with high specificity and sensitivity; therefore, it can be used to estimate quantitative microalbuminuria.14 Second, we did not address the severity of HTN. Previous studies reported that BP is positively correlated with albuminuria,12 and the mean and median UACR accordingly differed somewhat. Third, it is possible that the pharmacological therapies used by the subjects were not fully reflected in the medication history. However, subjects with a history of antihypertensive medication use were excluded from our study population, minimizing the influence of antihypertensive medication. Fourth, the study was performed in a relatively homogeneous population of working individuals who participated in a health screening program, and it is not fully representative of the entire Korean population. However, the number of subjects included in our study is larger than in any previous study investigating this question. In conclusion, this study shows that urinary albumin is an important marker of both cardiovascular and all-cause mortality in subjects with or without HTN. Subjects with a UACR ≥ 5.42 mg/g had a similar risk of death and CVD whether or not they had HTN. This increased risk was independent of age, sex, smoking status, alcohol intake, regular exercise, BMI, HTN, history of CVD, and eGFR. In the general population, the risk of death and CVD increases as the UACR increases, with or without HTN. Thus, urinary albumin may contribute more to CVD and all-cause mortality than HTN. Urinary albumin measurement might be a valuable tool for identifying subjects, with or without HTN, at higher risk of CVD and all-cause mortality.
Background: Urinary albumin levels and hypertension (HTN) are independently associated with an increased risk of all-cause mortality. The effect of albuminuria on mortality in the absence or presence of HTN is uncertain. This study aimed to evaluate the effect of albuminuria and HTN on all-cause and cardiovascular disease (CVD) mortality. Methods: Mortality outcomes were examined for 32,653 Koreans enrolled in a health screening program that included measurement of the urinary albumin/creatinine ratio (UACR) at baseline, with a median follow-up of 5.13 years. Receiver operating characteristic curve analyses of the UACR yielded a cut-point of 5.42 mg/g, and participants were categorized as UACR < 5.42 or UACR ≥ 5.42. HTN status was categorized as no HTN or HTN. Results: The median (interquartile range) baseline UACR was higher in those who died than in survivors. Subjects with a UACR ≥ 5.42 mg/g, with or without HTN, showed a similarly increased risk for all-cause and CVD mortality, even after adjusting for known CVD risk factors, compared to those with no HTN/UACR < 5.42 (reference) (all-cause mortality: hazard ratio [HR] 1.48, 95% confidence interval [CI] 1.02-2.15; HR 1.47, 95% CI 0.94-2.32, respectively) (CVD mortality: HR 5.75, 95% CI 1.54-21.47; HR 5.87, 95% CI 1.36-25.29). Conclusions: The presence of urinary albumin and HTN is a significant determinant of CVD and death. Urinary albumin might contribute more to CVD and all-cause mortality than HTN.
null
null
6,680
331
[ 166, 709, 77, 360, 936, 1745 ]
8
[ "uacr", "htn", "42", "cvd", "subjects", "mortality", "mg", "uacr 42", "risk", "study" ]
[ "reported history diabetes", "participating health screening", "cohort study performed", "republic urinary creatinine", "urinary creatinine concentration" ]
null
null
null
null
[CONTENT] all-cause mortality | blood pressure | cardiovascular disease mortality | hypertension | urinary albumin. [SUMMARY]
null
[CONTENT] all-cause mortality | blood pressure | cardiovascular disease mortality | hypertension | urinary albumin. [SUMMARY]
[CONTENT] all-cause mortality | blood pressure | cardiovascular disease mortality | hypertension | urinary albumin. [SUMMARY]
null
null
[CONTENT] Adult | Aged | Albuminuria | Body Mass Index | Cardiovascular Diseases | Female | Follow-Up Studies | Humans | Hypertension | Kaplan-Meier Estimate | Lipids | Male | Middle Aged | Predictive Value of Tests | Prognosis | ROC Curve | Republic of Korea | Risk Factors | Survival Analysis [SUMMARY]
null
[CONTENT] Adult | Aged | Albuminuria | Body Mass Index | Cardiovascular Diseases | Female | Follow-Up Studies | Humans | Hypertension | Kaplan-Meier Estimate | Lipids | Male | Middle Aged | Predictive Value of Tests | Prognosis | ROC Curve | Republic of Korea | Risk Factors | Survival Analysis [SUMMARY]
[CONTENT] Adult | Aged | Albuminuria | Body Mass Index | Cardiovascular Diseases | Female | Follow-Up Studies | Humans | Hypertension | Kaplan-Meier Estimate | Lipids | Male | Middle Aged | Predictive Value of Tests | Prognosis | ROC Curve | Republic of Korea | Risk Factors | Survival Analysis [SUMMARY]
null
null
[CONTENT] reported history diabetes | participating health screening | cohort study performed | republic urinary creatinine | urinary creatinine concentration [SUMMARY]
null
[CONTENT] reported history diabetes | participating health screening | cohort study performed | republic urinary creatinine | urinary creatinine concentration [SUMMARY]
[CONTENT] reported history diabetes | participating health screening | cohort study performed | republic urinary creatinine | urinary creatinine concentration [SUMMARY]
null
null
[CONTENT] uacr | htn | 42 | cvd | subjects | mortality | mg | uacr 42 | risk | study [SUMMARY]
null
[CONTENT] uacr | htn | 42 | cvd | subjects | mortality | mg | uacr 42 | risk | study [SUMMARY]
[CONTENT] uacr | htn | 42 | cvd | subjects | mortality | mg | uacr 42 | risk | study [SUMMARY]
null
null
[CONTENT] uacr | htn | 42 | measured | subjects | htn uacr | htn uacr 42 | uacr 42 | scr | mg [SUMMARY]
null
[CONTENT] microalbuminuria | risk | uacr | htn | albuminuria | cvd | increased | study | patients | hypertensive [SUMMARY]
[CONTENT] uacr | htn | 42 | subjects | cvd | mortality | death | uacr 42 | mg | file [SUMMARY]
null
null
[CONTENT] 32,653 | Koreans | UACR | 5.13 years ||| UACR | 5.42 ||| UACR | 5.42 | UACR | 5.42 | UACR | ≥ | 5.42 ||| [SUMMARY]
null
[CONTENT] CVD ||| CVD [SUMMARY]
[CONTENT] ||| ||| albuminuria and HTN ||| 32,653 | Koreans | UACR | 5.13 years ||| UACR | 5.42 ||| UACR | 5.42 | UACR | 5.42 | UACR | ≥ | 5.42 ||| ||| ||| 5.42 | 1.48 | 95% ||| CI | 1.02 | 1.47 | 95% | CI | 5.75 | 95% | CI | 1.54-21.47 | 5.87 | 95% | CI | 1.36 ||| CVD ||| CVD [SUMMARY]
null
Impact of seasonality on recruitment, retention, adherence, and outcomes in a web-based smoking cessation intervention: randomized controlled trial.
24201304
Seasonal variations in smoking and quitting behaviors have been documented, with many smokers seeking cessation assistance around the start of the New Year. What remains unknown is whether smokers who are recruited to cessation treatment trials during the New Year are as motivated to quit, or as likely to enroll in a research trial, adhere to a research protocol, and benefit from a cessation intervention compared to those who are recruited during other times of the year.
BACKGROUND
Participants were current smokers who had registered on a free Web-based cessation program (BecomeAnEX.org) and were invited to participate in a clinical trial. The New Year period was defined according to a clear peak and drop in the proportion of visitors who registered on the site, spanning a 15-day period from December 26, 2012 to January 9, 2013. Two other 15-day recruitment periods during summer (July 18, 2012 to August 1, 2012) and fall (November 7, 2012 to November 21, 2012) were selected for comparison. Data were examined from 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline and 3 months postrandomization, and (3) online tracking software that recorded website utilization during the first 3 months of the trial.
METHODS
Visitors to BecomeAnEX during the New Year period were more likely to register on the site than smokers who visited during summer or fall (conversion rates: 7.4%, 4.6%, 4.9%, respectively; P<.001), but there were no differences in rates of study acceptance, consent, randomization, 3-month follow-up survey completion, or cessation between the 3 periods. New Year participants were older, more educated, more likely to be employed full time, and more likely to have a relationship partner compared with participants recruited at other times during the year, but did not differ on measures of motivation and desire to quit.
RESULTS
Smokers visiting a Web-based cessation program during the New Year period were more likely to register for treatment and differ on several demographic variables, but showed similar patterns of treatment engagement, retention, follow-up, and short-term cessation outcomes compared with participants who visited the site during other periods of the year. These results allay scientific concerns about recruiting participants during this time frame and are reassuring for researchers conducting Web-based cessation trials.
CONCLUSIONS
[ "Adult", "Female", "Humans", "Internet", "Male", "Middle Aged", "Patient Compliance", "Seasons", "Smoking Cessation" ]
3841362
Introduction
Seasonal variations across a number of smoking and quitting behaviors have been documented. Most smokers express a desire to quit [1] and many make a quit attempt around the start of the New Year [2-6]. Reports have shown that sales of cigarettes are at their lowest during January and February [7,8] and sales of nicotine replacement therapies are at their highest January through March [9]. Seasonal variations in motivational stage of change among callers to a state quitline have also been documented [10] with callers in December and January being more likely to have recently quit than callers during other months. Internet search queries also provide evidence of the seasonal variations in smoking cessation, with clear peaks observed in the use of “quit smoking” as a search term at the beginning of each calendar year. Figure 1 shows the relative use of the search term “quit smoking” in Google search engine queries over 6 years in the United States as reported by Google Trends [11], the public database of Google queries. A greater number of smokers quitting around the New Year may mean a greater pool of potential research participants for smoking cessation trials. However, the effects of seasonality on research recruitment and retention have not been documented. Specifically, it is unknown whether smokers who are invited to cessation treatment trials during this period of time are as likely to enroll, to adhere to a research protocol, and to benefit from a cessation intervention. Smokers who elect to quit around the New Year may differ from those who quit during other times of the year on factors such as motivation, desire, confidence, or other factors that relate to trial participation, engagement, and cessation outcomes. These are important questions to address from both a pragmatic and a scientific standpoint. From a pragmatic standpoint, conducting trial recruitment during the New Year holiday may have staffing and cost considerations for all aspects of a trial. Research staff may be needed to field study inquiries, conduct eligibility screening, administer assessments, and manage study communications; intervention staff may be required to orient new participants to the trial and begin intervention delivery. Given the potential for higher recruitment volume during this period, staffing increases may be required. From a scientific standpoint, if participants enrolled during this time are less likely to adhere to research protocol (ie, lower rates of intervention adherence, lower retention at follow-up) because of a more transient commitment to quitting, this could have important implications for treatment trials. Lower retention rates (ie, higher loss to follow-up) would result in a higher proportion of participants counted as smokers in intention-to-treat analyses, which may artificially deflate overall abstinence rates and perceived effectiveness of an intervention. Lower rates of intervention adherence among participants recruited during the New Year could influence metrics of intervention feasibility and receptivity as well as cessation outcomes. We sought to examine these questions about the impact of seasonality on smoking cessation treatment trials in the context of an ongoing randomized trial of a Web-based cessation intervention. 
We extracted a subset of participants recruited during a 15-day window that spanned the 2013 New Year and compared them to participants recruited during 2 other 15-day periods in 2012 on recruitment and retention rates, baseline characteristics, website utilization, and cessation outcomes. Our a priori hypotheses were that New Year participants would differ on baseline measures of motivation and desire to quit consistent with a more transient commitment to quitting, and would have lower recruitment and retention rates, lower website utilization rates, and poorer cessation outcomes compared to participants enrolled during other periods. Use of search term “quit smoking” in Google search engine queries relative to the total number of Google searches between June 2007 and June 2013 in the United States as reported in Google Trends.
Methods
Study Overview: The full study protocol has been published elsewhere [12]. Briefly, this is an ongoing Web-based randomized trial to compare the efficacy of an interactive, evidence-based, smoking cessation website alone and in conjunction with (1) a theory-driven, empirically informed social network intervention designed to integrate participants into an online community, and (2) access to a free supply of nicotine replacement therapy (NRT) products. The study uses a 2×2 factorial design to compare the following treatment conditions: (1) website, (2) website+social network intervention, (3) website+NRT, and (4) website+social network intervention+NRT. A total of 4000 participants will be randomized by the end of the study. Follow-up assessments are administered at 3- and 9-months postrandomization; 30-day point prevalence abstinence is the primary outcome of the parent trial. Study eligibility criteria are current smoking, age 18 years or older, and US residence. Exclusion criteria are contraindications to NRT (pregnant or breastfeeding, recent cardiac problems, current NRT use). Randomization is stratified by gender and baseline motivation.
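Randomization here is stratified by gender and baseline motivation across the four factorial arms. A minimal sketch of stratified block randomization in Python follows; the block size, function name, and stratum encoding are assumptions for illustration, since the exact mechanism is not described in this excerpt.

```python
import random
from collections import defaultdict

# 2x2 factorial arms: social network intervention (yes/no) x NRT (yes/no)
ARMS = ["web", "web+social", "web+nrt", "web+social+nrt"]

_blocks = defaultdict(list)  # pending assignments per stratum

def assign(gender: str, motivated: bool) -> str:
    """Balanced (blocked) assignment within each gender x motivation stratum.

    A block of 4 (one of each arm, shuffled) per stratum is an assumed
    block size for illustration only.
    """
    stratum = (gender, motivated)
    if not _blocks[stratum]:
        block = list(ARMS)
        random.shuffle(block)
        _blocks[stratum] = block
    return _blocks[stratum].pop()

print(assign("F", True))  # e.g. "web+nrt"
```

Keeping a shuffled block per stratum guarantees near-equal arm sizes within every gender-by-motivation cell, which is the point of stratifying.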
Recruitment: The study is conducted within BecomeAnEX.org, a free, publicly available, evidence-based intervention developed in accordance with the 2008 US Department of Health and Human Service's Clinical Practice Guidelines [13]. The site was developed by Legacy, a nonprofit organization that develops smoking prevention and cessation programs, in collaboration with the Mayo Clinic Nicotine Dependence Center [14]. A national multichannel media campaign that included television, radio, and outdoor and online advertising was launched in 2008 to promote the website [15]. The present implementation of this campaign relies on various forms of online advertising, including social media, search engine marketing, and large targeted ad networks for display advertising. Search engine advertising targets keywords related to BecomeAnEX (eg, quit smoking, stop smoking) and display advertising targets males and females aged between 25 and 54 years. Participants are recruited to the trial immediately following registration on BecomeAnEX. The entire recruitment and enrollment process is automated using a Web-based clinical trials management system. Individuals that indicate current smoking (every day/some days) during registration are invited to the study. Interested individuals complete online eligibility screening; eligible individuals provide online informed consent and contact information, including an email address that is used to send a link to the online baseline assessment. Participants are randomized to treatment upon completion of the baseline survey. No incentive is provided for enrollment in the study. Recruitment volume is capped at a maximum of 10 new participants per day to ensure a manageable workload for intervention and research staff throughout the study period. Once 10 individuals are randomized, no new registered users are invited for the remainder of the 24-hour period. Recruitment began in March 2012. As of October 30, 2013, 3602 participants have been randomized. We defined the New Year period based on a clear peak and drop in the number of individuals that registered on BecomeAnEX between December 1, 2012 and January 31, 2013. The average conversion rate of unique visitors to registrants each day from December 1 through December 25 was approximately 4.7%. This proportion increased almost 2-fold on December 26, 2012 to 8.2% and stayed elevated through January 9, 2013, at an average daily conversion rate of 7.4%. Thus, we selected this 15-day period as our New Year period. For comparison, we selected 2 other 15-day periods during the year based on several criteria: (1) similar marketing and promotion approach, (2) variations in season (ie, summer, fall), (3) same span of days of the week (Wednesday to Wednesday), and (4) roughly similar number of participants randomized during the designated time period. Based on these factors, 2 separate 15-day periods were selected for comparison: 1 during the summer (July 18, 2012 to August 1, 2012) and 1 during the fall (November 7, 2012 to November 21, 2012). We deliberately selected the fall period to include another popular quitting holiday, the American Cancer Society's Great American Smokeout, which falls on the third Thursday of November (November 15, 2012). Inclusion of this time frame enabled us to compare participants enrolling during the New Year to participants potentially enrolling in response to another seasonal trigger for cessation.
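The New Year window was defined empirically from the daily visitor-to-registrant conversion rate (roughly 4.7% baseline jumping to 8.2% on December 26). The sketch below shows that kind of screening with pandas; the DataFrame columns and values are hypothetical stand-ins for the site's analytics export, and the 1.5x flagging rule is an illustrative choice, not the authors' criterion.

```python
import pandas as pd

# Hypothetical analytics export: one row per day.
daily = pd.DataFrame({
    "date": pd.date_range("2012-12-20", periods=10, freq="D"),
    "unique_visitors": [4000, 4100, 3900, 4050, 4200, 4000, 4300, 4500, 4400, 4600],
    "registrants":     [ 190,  195,  180,  186,  200,  190,  352,  365,  330,  340],
})

daily["conversion"] = daily["registrants"] / daily["unique_visitors"]
baseline = daily["conversion"].iloc[:6].mean()           # pre-holiday reference
elevated = daily[daily["conversion"] > 1.5 * baseline]   # flag the sustained jump
print(elevated[["date", "conversion"]])
```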
Interventions: Participants in all 4 treatment groups had full access to the BecomeAnEX website which provides assistance setting a quit date, assessment of motivation and nicotine dependence, problem-solving/skills training to enhance self-efficacy for quitting, assistance in selecting and using US Food and Drug Administration (FDA)-approved pharmacotherapies, and social support through a large online community [14,15]. Participants randomized to receive the social network intervention received proactive communications from established members of the BecomeAnEX community (integrators). Within 24 hours after a new participant joined the study, the integrators posted a public message on the new member's profile page to welcome them to the site, encourage them to fill out their profile, or comment on some aspect of an existing profile. Participants randomized to receive NRT products from the study were mailed a free 4-week supply of the NRT product of their choice (patch, gum, or lozenge) within 3 days of randomization. The NRT is provided as an over-the-counter product (ie, with no additional support or guidance provided) to parallel the experience participants would have if they purchased NRT on their own.
Data Collection: Data are obtained through 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline, 3-, and 9-months postrandomization, and (3) online tracking software that records utilization of BecomeAnEX. Our analyses of smoking outcomes focus on the 3-month follow-up because this is typically when treatment utilization and intervention effects are the strongest. Telephone follow-up by professional telephone interviewers blinded to treatment condition for online nonresponders is used to maximize follow-up rates. Participants are reimbursed via Amazon or PayPal for survey completion (US $20 for Web survey, US $15 for phone survey). Individual level tracking metrics of BecomeAnEX utilization are recorded using Adobe/Omniture SiteCatalyst [16] software. Measures Overview: The following measures from the parent trial were examined for these analyses. Sociodemographic Variables: Participants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.
Smoking Variables: At baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18]. Psychosocial Variables: The appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. The appraisal subscale measures the perceived availability of someone to talk to about one's problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21] that assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant's efforts to quit smoking. Treatment Adherence: Website utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded into a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first page view and the last page view in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive and their next return visit created a new session. At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).
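The sessionization rule above (a new session after more than 30 minutes of inactivity, with duration measured from first to last page view) is easy to reproduce. The sketch below applies the rule as described to one participant's page-view timestamps; it mirrors the stated definition, not SiteCatalyst's internal implementation.

```python
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)

def sessionize(page_views: list[datetime]) -> list[list[datetime]]:
    """Group one user's page-view timestamps into sessions (>30 min gap = new session)."""
    sessions: list[list[datetime]] = []
    for ts in sorted(page_views):
        if sessions and ts - sessions[-1][-1] <= TIMEOUT:
            sessions[-1].append(ts)   # continue the current session
        else:
            sessions.append([ts])     # gap exceeded: start a new session
    return sessions

views = [datetime(2013, 1, 2, 9, 0), datetime(2013, 1, 2, 9, 10),
         datetime(2013, 1, 2, 11, 0)]
for s in sessionize(views):
    # duration = last page view minus first page view, per the text's definition
    print(len(s), "page views, duration:", s[-1] - s[0])
```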
Three-Month Outcome Measures: Smoking outcomes examined in these analyses included self-reported point prevalence abstinence (30 day and 7 day) measured at 3 months. We also examined the number of quit attempts reported at 3 months.
Statistical Analyses: The effects of recruitment phase (New Year, summer, fall) on recruitment metrics, baseline characteristics, treatment utilization, and outcome measures were evaluated via chi-square tests for proportions or 1-way ANOVA, depending on whether the metrics were proportions or continuous variables. Significant omnibus tests were followed by unadjusted pairwise comparisons. Analyses were conducted on the full sample of participants recruited in each phase (ie, collapsed across treatment groups).
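The omnibus tests named above (chi-square for proportions, 1-way ANOVA for continuous measures) translate directly to SciPy; the counts and values below are hypothetical placeholders for the three recruitment phases, not the trial's data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x3 table: rows = outcome (eg, consented yes/no),
# columns = recruitment phase (New Year, summer, fall).
counts = np.array([[120, 110, 115],
                   [ 40,  45,  42]])
chi2, p, dof, _ = stats.chi2_contingency(counts)
print(f"chi-square={chi2:.2f}, df={dof}, p={p:.3f}")

# Hypothetical continuous measure (eg, age) per phase.
new_year, summer, fall = [36, 41, 39], [33, 35, 30], [34, 38, 32]
f, p = stats.f_oneway(new_year, summer, fall)
print(f"ANOVA F={f:.2f}, p={p:.3f}")
```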
null
null
Conclusions
Internet interventions for health behavior change are characterized by their ability to recruit broadly and provide treatment at scale. Secular or temporal variations, such as the New Year holiday, and the associated media attention to smoking cessation and resolution making can result in large-scale swings in the number of individuals arriving at Web-based cessation interventions. For interventions that can effectively capture and enroll those individuals, seasonal variations could dramatically increase recruitment efficiency for clinical trials.
[ "Introduction", "Study Overview", "Recruitment", "Interventions", "Data Collection", "Measures", "Overview", "Sociodemographic Variables", "Smoking Variables", "Psychosocial Variables", "Treatment Adherence", "Three-Month Outcome Measures", "Statistical Analyses", "Recruitment and Retention by Recruitment Phase", "Baseline Characteristics by Recruitment Phase", "Treatment Utilization Metrics by Recruitment Phase", "Smoking Outcomes by Time of Enrollment", "Principal Findings", "Limitations", "Conclusions" ]
[ "Seasonal variations across a number of smoking and quitting behaviors have been documented. Most smokers express a desire to quit [1] and many make a quit attempt around the start of the New Year [2-6]. Reports have shown that sales of cigarettes are at their lowest during January and February [7,8] and sales of nicotine replacement therapies are at their highest January through March [9]. Seasonal variations in motivational stage of change among callers to a state quitline have also been documented [10] with callers in December and January being more likely to have recently quit than callers during other months. Internet search queries also provide evidence of the seasonal variations in smoking cessation, with clear peaks observed in the use of “quit smoking” as a search term at the beginning of each calendar year. Figure 1 shows the relative use of the search term “quit smoking” in Google search engine queries over 6 years in the United States as reported by Google Trends [11], the public database of Google queries.\nA greater number of smokers quitting around the New Year may mean a greater pool of potential research participants for smoking cessation trials. However, the effects of seasonality on research recruitment and retention have not been documented. Specifically, it is unknown whether smokers who are invited to cessation treatment trials during this period of time are as likely to enroll, to adhere to a research protocol, and to benefit from a cessation intervention. Smokers who elect to quit around the New Year may differ from those who quit during other times of the year on factors such as motivation, desire, confidence, or other factors that relate to trial participation, engagement, and cessation outcomes.\nThese are important questions to address from both a pragmatic and a scientific standpoint. From a pragmatic standpoint, conducting trial recruitment during the New Year holiday may have staffing and cost considerations for all aspects of a trial. Research staff may be needed to field study inquiries, conduct eligibility screening, administer assessments, and manage study communications; intervention staff may be required to orient new participants to the trial and begin intervention delivery. Given the potential for higher recruitment volume during this period, staffing increases may be required. From a scientific standpoint, if participants enrolled during this time are less likely to adhere to research protocol (ie, lower rates of intervention adherence, lower retention at follow-up) because of a more transient commitment to quitting, this could have important implications for treatment trials. Lower retention rates (ie, higher loss to follow-up) would result in a higher proportion of participants counted as smokers in intention-to-treat analyses, which may artificially deflate overall abstinence rates and perceived effectiveness of an intervention. Lower rates of intervention adherence among participants recruited during the New Year could influence metrics of intervention feasibility and receptivity as well as cessation outcomes.\nWe sought to examine these questions about the impact of seasonality on smoking cessation treatment trials in the context of an ongoing randomized trial of a Web-based cessation intervention. 
We extracted a subset of participants recruited during a 15-day window that spanned the 2013 New Year and compared them to participants recruited during 2 other 15-day periods in 2012 on recruitment and retention rates, baseline characteristics, website utilization, and cessation outcomes. Our a priori hypotheses were that New Year participants would differ on baseline measures of motivation and desire to quit consistent with a more transient commitment to quitting, and would have lower recruitment and retention rates, lower website utilization rates, and poorer cessation outcomes compared to participants enrolled during other periods.\nUse of search term “quit smoking” in Google search engine queries relative to the total number of Google searches between June 2007 and June 2013 in the United States as reported in Google Trends.", "The full study protocol has been published elsewhere [12]. Briefly, this is an ongoing Web-based randomized trial to compare the efficacy of an interactive, evidence-based, smoking cessation website alone and in conjunction with (1) a theory-driven, empirically informed social network intervention designed to integrate participants into an online community, and (2) access to a free supply of nicotine replacement therapy (NRT) products. The study uses a 2×2 factorial design to compare the following treatment conditions: (1) website, (2) website+social network intervention, (3) website+NRT, and (4) website+social network intervention+NRT. A total of 4000 participants will be randomized by the end of the study. Follow-up assessments are administered at 3- and 9-months postrandomization; 30-day point prevalence abstinence is the primary outcome of the parent trial. Study eligibility criteria are current smoking, age 18 years or older, and US residence. Exclusion criteria are contraindications to NRT (pregnant or breastfeeding, recent cardiac problems, current NRT use). Randomization is stratified by gender and baseline motivation.", "The study is conducted within BecomeAnEX.org, a free, publicly available, evidence-based intervention developed in accordance with the 2008 US Department of Health and Human Service’s Clinical Practice Guidelines [13]. The site was developed by Legacy, a nonprofit organization that develops smoking prevention and cessation programs, in collaboration with the Mayo Clinic Nicotine Dependence Center [14]. A national multichannel media campaign that included television, radio, and outdoor and online advertising was launched in 2008 to promote the website [15]. The present implementation of this campaign relies on various forms of online advertising, including social media, search engine marketing, and large targeted ad networks for display advertising. Search engine advertising targets keywords related to BecomeAnEX (eg, quit smoking, stop smoking) and display advertising targets males and females aged between 25 and 54 years.\nParticipants are recruited to the trial immediately following registration on BecomeAnEX. The entire recruitment and enrollment process is automated using a Web-based clinical trials management system. Individuals that indicate current smoking (every day/some days) during registration are invited to the study. Interested individuals complete online eligibility screening; eligible individuals provide online informed consent and contact information, including an email address that is used to send a link to the online baseline assessment. Participants are randomized to treatment upon completion of the baseline survey. 
No incentive is provided for enrollment in the study. Recruitment volume is capped at a maximum of 10 new participants per day to ensure a manageable workload for intervention and research staff throughout the study period. Once 10 individuals are randomized, no new registered users are invited for the remainder of the 24-hour period. Recruitment began in March 2012. As of October 30, 2013, 3602 participants have been randomized.\nWe defined the New Year period based on a clear peak and drop in the number of individuals that registered on BecomeAnEX between December 1, 2012 and January 31, 2013. The average conversion rate of unique visitors to registrants each day from December 1 through December 25 was approximately 4.7%. This proportion increased almost 2-fold on December 26, 2012 to 8.2% and stayed elevated through January 9, 2013, at an average daily conversion rate of 7.4%. Thus, we selected this 15-day period as our New Year period. For comparison, we selected 2 other 15-day periods during the year based on several criteria: (1) similar marketing and promotion approach, (2) variations in season (ie, summer, fall), (3) same span of days of the week (Wednesday to Wednesday), and (4) roughly similar number of participants randomized during the designated time period. Based on these factors, 2 separate 15-day periods were selected for comparison: 1 during the summer (July 18, 2012 to August 1, 2012) and 1 during the fall (November 7, 2012 to November 21, 2012). We deliberately selected the fall period to include another popular quitting holiday, the American Cancer Society’s Great American Smokeout, which falls on the third Thursday of November (November 15, 2012). Inclusion of this time frame enabled us to compare participants enrolling during the New Year to participants potentially enrolling in response to another seasonal trigger for cessation.", "Participants in all 4 treatment groups had full access to the BecomeAnEX website which provides assistance setting a quit date, assessment of motivation and nicotine dependence, problem-solving/skills training to enhance self-efficacy for quitting, assistance in selecting and using US Food and Drug Administration (FDA)-approved pharmacotherapies, and social support through a large online community [14,15]. Participants randomized to receive the social network intervention received proactive communications from established members of the BecomeAnEX community (integrators). Within 24 hours after a new participant joined the study, the integrators posted a public message on the new member’s profile page to welcome them to the site, encourage them to fill out their profile, or comment on some aspect of an existing profile. Participants randomized to receive NRT products from the study were mailed a free 4-week supply of the NRT product of their choice (patch, gum, or lozenge) within 3 days of randomization. The NRT is provided as an over-the-counter product (ie, with no additional support or guidance provided) to parallel the experience participants would have if they purchased NRT on their own.", "Data are obtained through 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline, 3-, and 9-months postrandomization, and (3) online tracking software that records utilization of BecomeAnEX. Our analyses of smoking outcomes focus on the 3-month follow-up because this is typically when treatment utilization and intervention effects are the strongest. 
Telephone follow-up by professional telephone interviewers blinded to treatment condition for online nonresponders is used to maximize follow-up rates. Participants are reimbursed via Amazon or PayPal for survey completion (US $20 for Web survey, US $15 for phone survey). Individual level tracking metrics of BecomeAnEX utilization are recorded using Adobe/Omniture SiteCatalyst [16] software.", " Overview The following measures from the parent trial were examined for these analyses.\n Sociodemographic Variables Participants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.\n Smoking Variables At baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].\n Psychosocial Variables The appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. The appraisal subscale measures the perceived availability of someone to talk to about one’s problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21] that assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant’s efforts to quit smoking.\n Treatment Adherence Website utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded into a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first page view and the last page view in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive and their next return visit created a new session. At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).
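For illustration, the 30-minute inactivity rule described above can be expressed in a few lines of Python. This is a minimal sketch, not the SiteCatalyst implementation; the function names and input format are hypothetical, but the logic follows the stated session definition applied to a list of page-view timestamps.

from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # inactivity rule stated above

def group_into_sessions(page_view_times):
    # Group one participant's page-view timestamps into sessions: a gap of
    # more than 30 minutes between consecutive views closes the current
    # session, and the next view opens a new one.
    sessions, current = [], []
    for ts in sorted(page_view_times):
        if current and ts - current[-1] > SESSION_TIMEOUT:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

def session_duration_minutes(session):
    # Duration = time elapsed between the first and last page view of the
    # session; a single-page session therefore has duration 0.
    return (session[-1] - session[0]).total_seconds() / 60

views = [datetime(2012, 7, 18, 9, 0), datetime(2012, 7, 18, 9, 10),
         datetime(2012, 7, 18, 11, 0)]
print(len(group_into_sessions(views)))  # 2: the 110-minute gap splits the visits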
\n Three-Month Outcome Measures Smoking outcomes examined in these analyses included self-reported point prevalence abstinence (30 day and 7 day) measured at 3 months. We also examined the number of quit attempts reported at 3 months."
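As a worked illustration of this outcome definition, point prevalence abstinence can be coded from a self-report item such as the number of days since the last cigarette. The item wording and the boundary handling below are assumptions for illustration only; the trial's exact instrument is not reproduced in the text.

def point_prevalence_abstinent(days_since_last_cigarette, window_days):
    # True if the respondent reports no smoking within the window
    # (30 or 7 days); None propagates missing/nonresponder data.
    # Whether exactly window_days ago counts as abstinent depends on
    # the exact item wording, which is an assumption here.
    if days_since_last_cigarette is None:
        return None
    return days_since_last_cigarette >= window_days

print(point_prevalence_abstinent(45, 30))  # True: no smoking in the past 30 days
print(point_prevalence_abstinent(5, 7))    # False: smoked within the past 7 days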
, "The effects of recruitment phase (New Year, summer, fall) on recruitment metrics, baseline characteristics, treatment utilization, and outcome measures were evaluated via chi-square tests for proportions or 1-way ANOVA, depending on whether the metrics were proportions or continuous variables. Significant omnibus tests were followed by unadjusted pairwise comparisons. Analyses were conducted on the full sample of participants recruited in each phase (ie, collapsed across treatment groups).", "\nTable 1 shows the Consolidated Standards of Reporting Trials (CONSORT) metrics of each recruitment phase. We examined advertising expenditures during the 2 weeks before each recruitment phase in addition to the recruitment phase itself. For the New Year, summer, and fall phases, expenditures totaled US $57,508, US $48,632, and US $38,374, respectively. The conversion rate of unique visitors to new registered users during the New Year period (7.4%) was significantly higher than both summer (4.6%) and fall (4.9%) periods; summer and fall conversion rates did not differ. Among new registered users during the New Year period, 868 were invited to participate in the study. Of these, 44.1% (383/868) accepted the invitation and completed eligibility screening; 67.1% (257/383) were eligible, 93.0% (239/257) consented, and 52.9% of those eligible (136/257) completed the baseline assessment and were randomized to treatment. Among the New Year participants, 58.8% (80/136) completed the 3-month follow-up survey. With the exception of conversion rate, there were no significant differences between recruitment phases noted for any of the CONSORT metrics.", "New Year participants differed from participants recruited during other time periods on age, education, employment, and marital status (see Table 2). New Year participants were older (mean 43.2, SD 12.3 vs mean 39.1, SD 13.3; P=.01) and more likely to be employed full time (58.9%, 79/136 vs 40.6%, 43/106; P=.01) compared to summer participants. New Year participants were more likely to have attended college than both summer and fall participants (New Year: 80.9%, 110/136; summer: 66.0%, 70/106; fall: 68.7%, 68/99; P=.02). Both New Year and summer participants were more likely to have a spouse/partner compared to fall participants (New Year: 63.2%, 86/136; summer: 65.1%, 69/106; fall: 47.5%, 47/99; P=.02). Summer and fall participants differed on Internet access (P=.004), but were not different from New Year enrollees.", "Among the various utilization metrics we examined (Table 3), only the number of total Web pages viewed differed significantly between the 3 groups of participants: page views were higher among New Year participants (median 57, IQR 20-57) than both summer (median 29, IQR 13-59; P=.002) and fall enrollees (median 36, IQR 19-69; P=.004). Summer and fall participants did not differ on page views.
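A minimal sketch of the analysis plan described under Statistical Analyses, using SciPy: a chi-square test on a contingency table for a proportion-type metric and a 1-way ANOVA for a continuous one. The visitor denominators and the fall-phase age parameters below are placeholders chosen only to mirror the reported rates and means; they are not the trial's raw data.

import numpy as np
from scipy.stats import chi2_contingency, f_oneway

# Proportion-type metric: conversion of unique visitors to registrants by
# phase (New Year, summer, fall). Counts are hypothetical, scaled to
# reproduce the reported 7.4%/4.6%/4.9% conversion rates.
registered = np.array([740, 460, 490])
not_registered = np.array([9260, 9540, 9510])
table = np.vstack([registered, not_registered])
chi2, p, dof, expected = chi2_contingency(table)

# Continuous metric: baseline age by phase via 1-way ANOVA. Simulated
# samples use the reported New Year and summer means/SDs and the actual
# group sizes; the fall parameters are placeholders.
rng = np.random.default_rng(0)
age_new_year = rng.normal(43.2, 12.3, 136)
age_summer = rng.normal(39.1, 13.3, 106)
age_fall = rng.normal(41.0, 13.0, 99)
f_stat, p_anova = f_oneway(age_new_year, age_summer, age_fall)

# A significant omnibus test is followed by unadjusted pairwise
# comparisons, eg, New Year vs summer on the same contingency counts:
if p < .05:
    chi2_pair, p_pair, _, _ = chi2_contingency(table[:, [0, 1]])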
", "There were no significant differences in any of the smoking outcomes we examined by recruitment phase. Using intention-to-treat analyses, 30-day point prevalence abstinence was 11.8% (16/136), 15.1% (16/106), and 17.2% (17/99), and 7-day point prevalence abstinence was 16.2% (22/136), 18.9% (20/106), and 22.2% (22/99) for New Year, summer, and fall participants, respectively. Using responder-only analyses, 30-day point prevalence abstinence was 20% (16/79), 23% (16/68), and 32% (17/53), and 7-day point prevalence abstinence was 28% (22/79), 29% (20/68), and 41% (22/53) for New Year, summer, and fall participants, respectively. There was no difference in the number of quit attempts reported by the 3 groups at the 3-month follow-up (Table 4).\nTable 1. Recruitment and retention metrics by recruitment phase.\nTable 2. Baseline characteristics by recruitment phase.\n\na: Motivation to quit excluded 1 participant in the New Year group who reported no plans to quit smoking.\n\nb: FTND: Fagerström Test for Nicotine Dependence.\n\nc: ISEL: Interpersonal Support Evaluation List.\n\nd: Internet access: n=3 cases that reported using a dial-up connection were dropped (summer: n=2; New Year: n=1; fall: n=0).\nTable 3. Treatment utilization metrics by recruitment phase.\n\na: 12 missing values (New Year: n=7; summer: n=3; fall: n=2).\nTable 4. Smoking outcomes by recruitment phase.", "The results of this study indicate that smokers visiting a Web-based cessation program during the New Year period are more likely to register for treatment and differ on several demographic variables, but show similar patterns of treatment engagement, retention, and short-term cessation outcomes compared with participants who visit the site during other periods of the year. Our hypotheses that New Year participants would differ on measures of motivation and desire to quit were not supported, and there were no differences on any of the smoking variables we examined. In addition, our hypotheses about lower retention rates, website utilization rates, and cessation outcomes were also not supported. Follow-up rates were comparable across all 3 periods, and smokers recruited during the New Year period quit at the same rate as smokers recruited at other times during the year. These results mitigate scientific concerns about recruiting participants during this time frame and are reassuring for researchers conducting Web-based cessation trials.\nOur findings that New Year participants were older, more educated, more likely to be employed full time, and more likely to have a relationship partner may suggest that smokers with greater resources are more affected by the seasonal trends of quitting around the New Year. Alternatively, these differences may be a function of differential message exposure: older employed individuals may have been more likely to be impacted by the BecomeAnEX online advertising campaign, or reminded of cessation through workplace wellness programs or other promotional activities. Although not significant, smokers recruited during the New Year period also had a higher level of nicotine dependence and a higher number of previous quit attempts at baseline, suggesting that seasonal trends may serve as a cue to action for more dependent and motivated smokers. 
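The gap between the intention-to-treat and responder-only abstinence rates reported above is straightforward arithmetic: the same numerators divided by different denominators (all randomized participants vs follow-up completers). The short snippet below reproduces the published 30-day figures from the Results text.

# Same numerators, different denominators: intention-to-treat counts
# nonresponders as smokers and divides by all randomized participants;
# responder-only divides by follow-up completers. Figures from the text.
quit_30d = {"New Year": 16, "summer": 16, "fall": 17}
randomized = {"New Year": 136, "summer": 106, "fall": 99}
responders = {"New Year": 79, "summer": 68, "fall": 53}

for phase, q in quit_30d.items():
    itt = 100 * q / randomized[phase]
    resp = 100 * q / responders[phase]
    print(f"{phase}: ITT {itt:.1f}% vs responder-only {resp:.1f}%")
# New Year: ITT 11.8% vs responder-only 20.3%
# summer: ITT 15.1% vs responder-only 23.5%
# fall: ITT 17.2% vs responder-only 32.1%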
We do not have an explanation for the finding that New Year participants viewed more website pages than participants in other recruitment phases did, especially because no other metric of engagement or utilization was significantly different.\nTo our knowledge, this is the first study to examine demographic, utilization, and outcome patterns of smokers recruited to a Web-based intervention during the New Year period, although there is interest in seasonal patterns of behavior. Delnevo and colleagues [10] noted seasonal patterns in motivation for quitting among callers to a quitline and discussed the implications for planning, promoting, and evaluating telephone quitlines. Using Internet search query data, Ayers et al [22] documented seasonality in searches for mental health information, with increases in information seeking that corresponded to patterns of seasonal affective disorder. The lack of previous publications in this area may be because dramatic increases in recruitment are to some degree unique to the online environment and Web-based studies that are capable of enrolling a large number of trial participants in a relatively short period of time.\nOur findings add to the small but growing literature on recruitment methods for Web-based tobacco interventions [23-31]. Most studies to date have focused on comparisons between online recruitment methods (eg, online banner ads, search engine advertising) and more traditional recruitment methods (eg, newspaper ads, targeted mailings) [24,25,32], evaluation of different Internet-based methods [28,29], or the use of offline methods (eg, physician referral) to drive tobacco users to Web-based interventions [30,31]. The primary endpoints of interest in most studies are baseline participant characteristics and recruitment yield and/or efficiency. Heffner et al [25] evaluated the impact of Web-based and traditional recruitment methods on 3-month data retention and 30-day point prevalence smoking abstinence at the 3-month outcome assessment in a cessation trial and found no differences by recruitment method. Our findings regarding the impact of seasonality are consistent with previous studies that have demonstrated some differences in the types of participants recruited to Web-based tobacco interventions, but no differences in their participation in or outcomes from such trials.", "Several limitations to this study should be noted. We examined variations in participant characteristics for a single cessation website as part of an ongoing randomized trial. This site exists in a larger ecosystem of promotion, advertising, and branding as part of the national BecomeAnEX campaign, which has been in existence since 2008. Our results are likely related to the specific strategy employed by BecomeAnEX, which is largely online advertising in its present implementation. Other advertising and promotional strategies of different Web-based interventions could yield different results. In addition, our automated titration of recruitment volume should be noted when considering the pragmatic implications of our results. Although the number of visitors to BecomeAnEX was higher during the New Year period and a higher proportion of them registered to become members, the number of participants recruited to our clinical trial during different periods throughout the year has remained relatively constant because of the daily cap we have on enrollment. This cap is designed to maintain a consistent volume of participants for our research and intervention staff to manage. 
If this cap were not in place, we might have seen a higher number of participants invited to the study and differences in the proportion of participants accepting or declining the study invitation. This may have important pragmatic implications for other Web-based trials that rely on human involvement, but we believe it is unlikely to have affected the other metrics we examined (ie, follow-up rates, cessation outcomes). The daily cap on recruitment may also have affected statistical power. The response rate to the 3-month follow-up is lower than desired despite numerous online and offline strategies to reach participants, but is comparable to or higher than rates reported in other Internet studies [33].", "Internet interventions for health behavior change are characterized by their ability to recruit broadly and provide treatment at scale. Secular or temporal variations, such as the New Year holiday, and the associated media attention to smoking cessation and resolution making can result in large-scale swings in the number of individuals arriving at Web-based cessation interventions. For interventions that can effectively capture and enroll those individuals, seasonal variations could dramatically increase recruitment efficiency for clinical trials." ]
[ "Introduction", "Methods", "Study Overview", "Recruitment", "Interventions", "Data Collection", "Measures", "Overview", "Sociodemographic Variables", "Smoking Variables", "Psychosocial Variables", "Treatment Adherence", "Three-Month Outcome Measures", "Statistical Analyses", "Results", "Recruitment and Retention by Recruitment Phase", "Baseline Characteristics by Recruitment Phase", "Treatment Utilization Metrics by Recruitment Phase", "Smoking Outcomes by Time of Enrollment", "Discussion", "Principal Findings", "Limitations", "Conclusions" ]
[ "Seasonal variations across a number of smoking and quitting behaviors have been documented. Most smokers express a desire to quit [1] and many make a quit attempt around the start of the New Year [2-6]. Reports have shown that sales of cigarettes are at their lowest during January and February [7,8] and sales of nicotine replacement therapies are at their highest January through March [9]. Seasonal variations in motivational stage of change among callers to a state quitline have also been documented [10] with callers in December and January being more likely to have recently quit than callers during other months. Internet search queries also provide evidence of the seasonal variations in smoking cessation, with clear peaks observed in the use of “quit smoking” as a search term at the beginning of each calendar year. Figure 1 shows the relative use of the search term “quit smoking” in Google search engine queries over 6 years in the United States as reported by Google Trends [11], the public database of Google queries.\nA greater number of smokers quitting around the New Year may mean a greater pool of potential research participants for smoking cessation trials. However, the effects of seasonality on research recruitment and retention have not been documented. Specifically, it is unknown whether smokers who are invited to cessation treatment trials during this period of time are as likely to enroll, to adhere to a research protocol, and to benefit from a cessation intervention. Smokers who elect to quit around the New Year may differ from those who quit during other times of the year on factors such as motivation, desire, confidence, or other factors that relate to trial participation, engagement, and cessation outcomes.\nThese are important questions to address from both a pragmatic and a scientific standpoint. From a pragmatic standpoint, conducting trial recruitment during the New Year holiday may have staffing and cost considerations for all aspects of a trial. Research staff may be needed to field study inquiries, conduct eligibility screening, administer assessments, and manage study communications; intervention staff may be required to orient new participants to the trial and begin intervention delivery. Given the potential for higher recruitment volume during this period, staffing increases may be required. From a scientific standpoint, if participants enrolled during this time are less likely to adhere to research protocol (ie, lower rates of intervention adherence, lower retention at follow-up) because of a more transient commitment to quitting, this could have important implications for treatment trials. Lower retention rates (ie, higher loss to follow-up) would result in a higher proportion of participants counted as smokers in intention-to-treat analyses, which may artificially deflate overall abstinence rates and perceived effectiveness of an intervention. Lower rates of intervention adherence among participants recruited during the New Year could influence metrics of intervention feasibility and receptivity as well as cessation outcomes.\nWe sought to examine these questions about the impact of seasonality on smoking cessation treatment trials in the context of an ongoing randomized trial of a Web-based cessation intervention. 
We extracted a subset of participants recruited during a 15-day window that spanned the 2013 New Year and compared them to participants recruited during 2 other 15-day periods in 2012 on recruitment and retention rates, baseline characteristics, website utilization, and cessation outcomes. Our a priori hypotheses were that New Year participants would differ on baseline measures of motivation and desire to quit consistent with a more transient commitment to quitting, and would have lower recruitment and retention rates, lower website utilization rates, and poorer cessation outcomes compared to participants enrolled during other periods.\nUse of search term “quit smoking” in Google search engine queries relative to the total number of Google searches between June 2007 and June 2013 in the United States as reported in Google Trends.", " Study Overview The full study protocol has been published elsewhere [12]. Briefly, this is an ongoing Web-based randomized trial to compare the efficacy of an interactive, evidence-based, smoking cessation website alone and in conjunction with (1) a theory-driven, empirically informed social network intervention designed to integrate participants into an online community, and (2) access to a free supply of nicotine replacement therapy (NRT) products. The study uses a 2×2 factorial design to compare the following treatment conditions: (1) website, (2) website+social network intervention, (3) website+NRT, and (4) website+social network intervention+NRT. A total of 4000 participants will be randomized by the end of the study. Follow-up assessments are administered at 3- and 9-months postrandomization; 30-day point prevalence abstinence is the primary outcome of the parent trial. Study eligibility criteria are current smoking, age 18 years or older, and US residence. Exclusion criteria are contraindications to NRT (pregnant or breastfeeding, recent cardiac problems, current NRT use). Randomization is stratified by gender and baseline motivation.\nThe full study protocol has been published elsewhere [12]. Briefly, this is an ongoing Web-based randomized trial to compare the efficacy of an interactive, evidence-based, smoking cessation website alone and in conjunction with (1) a theory-driven, empirically informed social network intervention designed to integrate participants into an online community, and (2) access to a free supply of nicotine replacement therapy (NRT) products. The study uses a 2×2 factorial design to compare the following treatment conditions: (1) website, (2) website+social network intervention, (3) website+NRT, and (4) website+social network intervention+NRT. A total of 4000 participants will be randomized by the end of the study. Follow-up assessments are administered at 3- and 9-months postrandomization; 30-day point prevalence abstinence is the primary outcome of the parent trial. Study eligibility criteria are current smoking, age 18 years or older, and US residence. Exclusion criteria are contraindications to NRT (pregnant or breastfeeding, recent cardiac problems, current NRT use). Randomization is stratified by gender and baseline motivation.\n Recruitment The study is conducted within BecomeAnEX.org, a free, publicly available, evidence-based intervention developed in accordance with the 2008 US Department of Health and Human Service’s Clinical Practice Guidelines [13]. 
The site was developed by Legacy, a nonprofit organization that develops smoking prevention and cessation programs, in collaboration with the Mayo Clinic Nicotine Dependence Center [14]. A national multichannel media campaign that included television, radio, and outdoor and online advertising was launched in 2008 to promote the website [15]. The present implementation of this campaign relies on various forms of online advertising, including social media, search engine marketing, and large targeted ad networks for display advertising. Search engine advertising targets keywords related to BecomeAnEX (eg, quit smoking, stop smoking) and display advertising targets males and females aged between 25 and 54 years.\nParticipants are recruited to the trial immediately following registration on BecomeAnEX. The entire recruitment and enrollment process is automated using a Web-based clinical trials management system. Individuals that indicate current smoking (every day/some days) during registration are invited to the study. Interested individuals complete online eligibility screening; eligible individuals provide online informed consent and contact information, including an email address that is used to send a link to the online baseline assessment. Participants are randomized to treatment upon completion of the baseline survey. No incentive is provided for enrollment in the study. Recruitment volume is capped at a maximum of 10 new participants per day to ensure a manageable workload for intervention and research staff throughout the study period. Once 10 individuals are randomized, no new registered users are invited for the remainder of the 24-hour period. Recruitment began in March 2012. As of October 30, 2013, 3602 participants have been randomized.\nWe defined the New Year period based on a clear peak and drop in the number of individuals that registered on BecomeAnEX between December 1, 2012 and January 31, 2013. The average conversion rate of unique visitors to registrants each day from December 1 through December 25 was approximately 4.7%. This proportion increased almost 2-fold on December 26, 2012 to 8.2% and stayed elevated through January 9, 2013, at an average daily conversion rate of 7.4%. Thus, we selected this 15-day period as our New Year period. For comparison, we selected 2 other 15-day periods during the year based on several criteria: (1) similar marketing and promotion approach, (2) variations in season (ie, summer, fall), (3) same span of days of the week (Wednesday to Wednesday), and (4) roughly similar number of participants randomized during the designated time period. Based on these factors, 2 separate 15-day periods were selected for comparison: 1 during the summer (July 18, 2012 to August 1, 2012) and 1 during the fall (November 7, 2012 to November 21, 2012). We deliberately selected the fall period to include another popular quitting holiday, the American Cancer Society’s Great American Smokeout, which falls on the third Thursday of November (November 15, 2012). Inclusion of this time frame enabled us to compare participants enrolling during the New Year to participants potentially enrolling in response to another seasonal trigger for cessation.\nThe study is conducted within BecomeAnEX.org, a free, publicly available, evidence-based intervention developed in accordance with the 2008 US Department of Health and Human Service’s Clinical Practice Guidelines [13]. 
The site was developed by Legacy, a nonprofit organization that develops smoking prevention and cessation programs, in collaboration with the Mayo Clinic Nicotine Dependence Center [14]. A national multichannel media campaign that included television, radio, and outdoor and online advertising was launched in 2008 to promote the website [15]. The present implementation of this campaign relies on various forms of online advertising, including social media, search engine marketing, and large targeted ad networks for display advertising. Search engine advertising targets keywords related to BecomeAnEX (eg, quit smoking, stop smoking) and display advertising targets males and females aged between 25 and 54 years.\nParticipants are recruited to the trial immediately following registration on BecomeAnEX. The entire recruitment and enrollment process is automated using a Web-based clinical trials management system. Individuals that indicate current smoking (every day/some days) during registration are invited to the study. Interested individuals complete online eligibility screening; eligible individuals provide online informed consent and contact information, including an email address that is used to send a link to the online baseline assessment. Participants are randomized to treatment upon completion of the baseline survey. No incentive is provided for enrollment in the study. Recruitment volume is capped at a maximum of 10 new participants per day to ensure a manageable workload for intervention and research staff throughout the study period. Once 10 individuals are randomized, no new registered users are invited for the remainder of the 24-hour period. Recruitment began in March 2012. As of October 30, 2013, 3602 participants have been randomized.\nWe defined the New Year period based on a clear peak and drop in the number of individuals that registered on BecomeAnEX between December 1, 2012 and January 31, 2013. The average conversion rate of unique visitors to registrants each day from December 1 through December 25 was approximately 4.7%. This proportion increased almost 2-fold on December 26, 2012 to 8.2% and stayed elevated through January 9, 2013, at an average daily conversion rate of 7.4%. Thus, we selected this 15-day period as our New Year period. For comparison, we selected 2 other 15-day periods during the year based on several criteria: (1) similar marketing and promotion approach, (2) variations in season (ie, summer, fall), (3) same span of days of the week (Wednesday to Wednesday), and (4) roughly similar number of participants randomized during the designated time period. Based on these factors, 2 separate 15-day periods were selected for comparison: 1 during the summer (July 18, 2012 to August 1, 2012) and 1 during the fall (November 7, 2012 to November 21, 2012). We deliberately selected the fall period to include another popular quitting holiday, the American Cancer Society’s Great American Smokeout, which falls on the third Thursday of November (November 15, 2012). 
Inclusion of this time frame enabled us to compare participants enrolling during the New Year to participants potentially enrolling in response to another seasonal trigger for cessation.\n Interventions Participants in all 4 treatment groups had full access to the BecomeAnEX website which provides assistance setting a quit date, assessment of motivation and nicotine dependence, problem-solving/skills training to enhance self-efficacy for quitting, assistance in selecting and using US Food and Drug Administration (FDA)-approved pharmacotherapies, and social support through a large online community [14,15]. Participants randomized to receive the social network intervention received proactive communications from established members of the BecomeAnEX community (integrators). Within 24 hours after a new participant joined the study, the integrators posted a public message on the new member’s profile page to welcome them to the site, encourage them to fill out their profile, or comment on some aspect of an existing profile. Participants randomized to receive NRT products from the study were mailed a free 4-week supply of the NRT product of their choice (patch, gum, or lozenge) within 3 days of randomization. The NRT is provided as an over-the-counter product (ie, with no additional support or guidance provided) to parallel the experience participants would have if they purchased NRT on their own.\nParticipants in all 4 treatment groups had full access to the BecomeAnEX website which provides assistance setting a quit date, assessment of motivation and nicotine dependence, problem-solving/skills training to enhance self-efficacy for quitting, assistance in selecting and using US Food and Drug Administration (FDA)-approved pharmacotherapies, and social support through a large online community [14,15]. Participants randomized to receive the social network intervention received proactive communications from established members of the BecomeAnEX community (integrators). Within 24 hours after a new participant joined the study, the integrators posted a public message on the new member’s profile page to welcome them to the site, encourage them to fill out their profile, or comment on some aspect of an existing profile. Participants randomized to receive NRT products from the study were mailed a free 4-week supply of the NRT product of their choice (patch, gum, or lozenge) within 3 days of randomization. The NRT is provided as an over-the-counter product (ie, with no additional support or guidance provided) to parallel the experience participants would have if they purchased NRT on their own.\n Data Collection Data are obtained through 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline, 3-, and 9-months postrandomization, and (3) online tracking software that records utilization of BecomeAnEX. Our analyses of smoking outcomes focus on the 3-month follow-up because this is typically when treatment utilization and intervention effects are the strongest. Telephone follow-up by professional telephone interviewers blinded to treatment condition for online nonresponders is used to maximize follow-up rates. Participants are reimbursed via Amazon or PayPal for survey completion (US $20 for Web survey, US $15 for phone survey). 
Individual level tracking metrics of BecomeAnEX utilization are recorded using Adobe/Omniture SiteCatalyst [16] software.\nData are obtained through 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline, 3-, and 9-months postrandomization, and (3) online tracking software that records utilization of BecomeAnEX. Our analyses of smoking outcomes focus on the 3-month follow-up because this is typically when treatment utilization and intervention effects are the strongest. Telephone follow-up by professional telephone interviewers blinded to treatment condition for online nonresponders is used to maximize follow-up rates. Participants are reimbursed via Amazon or PayPal for survey completion (US $20 for Web survey, US $15 for phone survey). Individual level tracking metrics of BecomeAnEX utilization are recorded using Adobe/Omniture SiteCatalyst [16] software.\n Measures Overview The following measures from the parent trial were examined for these analyses.\nThe following measures from the parent trial were examined for these analyses.\n Sociodemographic Variables Participants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.\nParticipants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.\n Smoking Variables At baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].\nAt baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].\n Psychosocial Variables The appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. The appraisal subscale measures the perceived availability of someone to talk to about one’s problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21] that assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant’s efforts to quit smoking.\nThe appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. The appraisal subscale measures the perceived availability of someone to talk to about one’s problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. 
Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21] that assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant’s efforts to quit smoking.\n Treatment Adherence Website utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded into a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first page view and the last page view in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive and their next return visit created a new session. At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).\nWebsite utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded into a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first page view and the last page view in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive and their next return visit created a new session. At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).\n Three-Month Outcome Measures Smoking outcomes examined in these analyses included self-reported point prevalence abstinence (30 day and 7 day) measured at 3 months. We also examined the number of quit attempts reported at 3 months.\nSmoking outcomes examined in these analyses included self-reported point prevalence abstinence (30 day and 7 day) measured at 3 months. 
We also examined the number of quit attempts reported at 3 months.\n Overview The following measures from the parent trial were examined for these analyses.\nThe following measures from the parent trial were examined for these analyses.\n Sociodemographic Variables Participants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.\nParticipants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.\n Smoking Variables At baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].\nAt baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].\n Psychosocial Variables The appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. The appraisal subscale measures the perceived availability of someone to talk to about one’s problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21] that assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant’s efforts to quit smoking.\nThe appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. The appraisal subscale measures the perceived availability of someone to talk to about one’s problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21] that assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant’s efforts to quit smoking.\n Treatment Adherence Website utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded into a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first page view and the last page view in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive and their next return visit created a new session. 
At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).\nWebsite utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded into a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first page view and the last page view in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive and their next return visit created a new session. At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).\n Three-Month Outcome Measures Smoking outcomes examined in these analyses included self-reported point prevalence abstinence (30 day and 7 day) measured at 3 months. We also examined the number of quit attempts reported at 3 months.\nSmoking outcomes examined in these analyses included self-reported point prevalence abstinence (30 day and 7 day) measured at 3 months. We also examined the number of quit attempts reported at 3 months.\n Statistical Analyses The effects of recruitment phase (New Year, summer, fall) on recruitment metrics, baseline characteristics, treatment utilization, and outcome measures were evaluated via chi-square tests for proportions or 1-way ANOVA, depending on whether the metrics were proportions or continuous variables. Significant omnibus tests were followed by unadjusted pairwise comparisons. Analyses were conducted on the full sample of participants recruited in each phase (ie, collapsed across treatment groups).\nThe effects of recruitment phase (New Year, summer, fall) on recruitment metrics, baseline characteristics, treatment utilization, and outcome measures were evaluated via chi-square tests for proportions or 1-way ANOVA, depending on whether the metrics were proportions or continuous variables. Significant omnibus tests were followed by unadjusted pairwise comparisons. Analyses were conducted on the full sample of participants recruited in each phase (ie, collapsed across treatment groups).", "The full study protocol has been published elsewhere [12]. Briefly, this is an ongoing Web-based randomized trial to compare the efficacy of an interactive, evidence-based, smoking cessation website alone and in conjunction with (1) a theory-driven, empirically informed social network intervention designed to integrate participants into an online community, and (2) access to a free supply of nicotine replacement therapy (NRT) products. The study uses a 2×2 factorial design to compare the following treatment conditions: (1) website, (2) website+social network intervention, (3) website+NRT, and (4) website+social network intervention+NRT. A total of 4000 participants will be randomized by the end of the study. Follow-up assessments are administered at 3- and 9-months postrandomization; 30-day point prevalence abstinence is the primary outcome of the parent trial. Study eligibility criteria are current smoking, age 18 years or older, and US residence. Exclusion criteria are contraindications to NRT (pregnant or breastfeeding, recent cardiac problems, current NRT use). 
Randomization is stratified by gender and baseline motivation.", "The study is conducted within BecomeAnEX.org, a free, publicly available, evidence-based intervention developed in accordance with the 2008 US Department of Health and Human Service’s Clinical Practice Guidelines [13]. The site was developed by Legacy, a nonprofit organization that develops smoking prevention and cessation programs, in collaboration with the Mayo Clinic Nicotine Dependence Center [14]. A national multichannel media campaign that included television, radio, and outdoor and online advertising was launched in 2008 to promote the website [15]. The present implementation of this campaign relies on various forms of online advertising, including social media, search engine marketing, and large targeted ad networks for display advertising. Search engine advertising targets keywords related to BecomeAnEX (eg, quit smoking, stop smoking) and display advertising targets males and females aged between 25 and 54 years.\nParticipants are recruited to the trial immediately following registration on BecomeAnEX. The entire recruitment and enrollment process is automated using a Web-based clinical trials management system. Individuals that indicate current smoking (every day/some days) during registration are invited to the study. Interested individuals complete online eligibility screening; eligible individuals provide online informed consent and contact information, including an email address that is used to send a link to the online baseline assessment. Participants are randomized to treatment upon completion of the baseline survey. No incentive is provided for enrollment in the study. Recruitment volume is capped at a maximum of 10 new participants per day to ensure a manageable workload for intervention and research staff throughout the study period. Once 10 individuals are randomized, no new registered users are invited for the remainder of the 24-hour period. Recruitment began in March 2012. As of October 30, 2013, 3602 participants have been randomized.\nWe defined the New Year period based on a clear peak and drop in the number of individuals that registered on BecomeAnEX between December 1, 2012 and January 31, 2013. The average conversion rate of unique visitors to registrants each day from December 1 through December 25 was approximately 4.7%. This proportion increased almost 2-fold on December 26, 2012 to 8.2% and stayed elevated through January 9, 2013, at an average daily conversion rate of 7.4%. Thus, we selected this 15-day period as our New Year period. For comparison, we selected 2 other 15-day periods during the year based on several criteria: (1) similar marketing and promotion approach, (2) variations in season (ie, summer, fall), (3) same span of days of the week (Wednesday to Wednesday), and (4) roughly similar number of participants randomized during the designated time period. Based on these factors, 2 separate 15-day periods were selected for comparison: 1 during the summer (July 18, 2012 to August 1, 2012) and 1 during the fall (November 7, 2012 to November 21, 2012). We deliberately selected the fall period to include another popular quitting holiday, the American Cancer Society’s Great American Smokeout, which falls on the third Thursday of November (November 15, 2012). 
Inclusion of this time frame enabled us to compare participants enrolling during the New Year to participants potentially enrolling in response to another seasonal trigger for cessation.", "Participants in all 4 treatment groups had full access to the BecomeAnEX website which provides assistance setting a quit date, assessment of motivation and nicotine dependence, problem-solving/skills training to enhance self-efficacy for quitting, assistance in selecting and using US Food and Drug Administration (FDA)-approved pharmacotherapies, and social support through a large online community [14,15]. Participants randomized to receive the social network intervention received proactive communications from established members of the BecomeAnEX community (integrators). Within 24 hours after a new participant joined the study, the integrators posted a public message on the new member’s profile page to welcome them to the site, encourage them to fill out their profile, or comment on some aspect of an existing profile. Participants randomized to receive NRT products from the study were mailed a free 4-week supply of the NRT product of their choice (patch, gum, or lozenge) within 3 days of randomization. The NRT is provided as an over-the-counter product (ie, with no additional support or guidance provided) to parallel the experience participants would have if they purchased NRT on their own.", "Data are obtained through 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline, 3-, and 9-months postrandomization, and (3) online tracking software that records utilization of BecomeAnEX. Our analyses of smoking outcomes focus on the 3-month follow-up because this is typically when treatment utilization and intervention effects are the strongest. Telephone follow-up by professional telephone interviewers blinded to treatment condition for online nonresponders is used to maximize follow-up rates. Participants are reimbursed via Amazon or PayPal for survey completion (US $20 for Web survey, US $15 for phone survey). Individual level tracking metrics of BecomeAnEX utilization are recorded using Adobe/Omniture SiteCatalyst [16] software.", " Overview The following measures from the parent trial were examined for these analyses.\nThe following measures from the parent trial were examined for these analyses.\n Sociodemographic Variables Participants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.\nParticipants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.\n Smoking Variables At baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].\nAt baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].\n Psychosocial Variables The appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. 
The appraisal subscale measures the perceived availability of someone to talk to about one's problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21], which assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant's efforts to quit smoking.

Treatment Adherence
Website utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded in a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first and last page views in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive and their next return visit created a new session. At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).
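To make the session definition concrete, here is a minimal sketch of the 30-minute inactivity rule described above. The pandas implementation and column names (`user_id`, `timestamp`) are illustrative assumptions; the study's actual SiteCatalyst pipeline is not published.

```python
# Sketch: group page views into sessions using a 30-minute inactivity timeout.
import pandas as pd

def sessionize(views: pd.DataFrame, timeout_min: int = 30) -> pd.DataFrame:
    views = views.sort_values(["user_id", "timestamp"]).copy()
    # Gap since the same user's previous page view (NaT for first view).
    gap = views.groupby("user_id")["timestamp"].diff()
    # A new session starts at a user's first view or after a long gap.
    new_session = gap.isna() | (gap > pd.Timedelta(minutes=timeout_min))
    views["session_id"] = new_session.cumsum()
    # Session duration: time from first to last page view in the session.
    views["session_duration"] = views.groupby("session_id")["timestamp"].transform(
        lambda s: s.max() - s.min())
    return views
```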
Three-Month Outcome Measures
Smoking outcomes examined in these analyses included self-reported point prevalence abstinence (30 day and 7 day) measured at 3 months. We also examined the number of quit attempts reported at 3 months.

Statistical Analyses
The effects of recruitment phase (New Year, summer, fall) on recruitment metrics, baseline characteristics, treatment utilization, and outcome measures were evaluated via chi-square tests for proportions and 1-way ANOVA for continuous variables. Significant omnibus tests were followed by unadjusted pairwise comparisons. Analyses were conducted on the full sample of participants recruited in each phase (ie, collapsed across treatment groups).
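For illustration, the two omnibus tests described above can be run as follows. This is a hedged sketch, not the study's analysis code: the abstinence counts are taken from the Results text, while the age values are simulated placeholders (the fall mean is not reported).

```python
# Sketch: chi-square test for proportions and 1-way ANOVA across 3 phases.
import numpy as np
from scipy import stats

# Chi-square on a 2x3 contingency table: abstinent vs not, by phase
# (30-day point prevalence counts reported in the Results).
abstinent = np.array([16, 16, 17])          # New Year, summer, fall
total = np.array([136, 106, 99])
table = np.vstack([abstinent, total - abstinent])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# One-way ANOVA on a continuous baseline measure, eg, age by phase.
rng = np.random.default_rng(0)              # simulated illustrative data
age_ny, age_su, age_fa = (rng.normal(m, 13, n) for m, n in
                          [(43.2, 136), (39.1, 106), (41.0, 99)])
f_stat, p_anova = stats.f_oneway(age_ny, age_su, age_fa)
```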
Results

Recruitment and Retention by Recruitment Phase
Table 1 shows the Consolidated Standards of Reporting Trials (CONSORT) metrics for each recruitment phase. We examined advertising expenditures during the 2 weeks before each recruitment phase in addition to the recruitment phase itself. For the New Year, summer, and fall phases, expenditures totaled US $57,508, US $48,632, and US $38,374, respectively. The conversion rate of unique visitors to new registered users during the New Year period (7.4%) was significantly higher than in both the summer (4.6%) and fall (4.9%) periods; summer and fall conversion rates did not differ. Among new registered users during the New Year period, 868 were invited to participate in the study. Of these, 44.1% (383/868) accepted the invitation and completed eligibility screening; 67.1% (257/383) were eligible, 93.0% (239/257) consented, and 52.9% of those eligible (136/257) completed the baseline assessment and were randomized to treatment. Among the New Year participants, 58.8% (80/136) completed the 3-month follow-up survey. With the exception of conversion rate, there were no significant differences between recruitment phases on any of the CONSORT metrics.
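The stepwise enrollment percentages above are straightforward to reproduce. A minimal sketch using the New Year counts from the text; note that the eligibility, consent, and randomization rates are all computed against the preceding screening step as the paper reports them:

```python
# Sketch: reproduce the New Year enrollment funnel percentages from the text.
invited, screened, eligible, consented, randomized, followed_up = (
    868, 383, 257, 239, 136, 80)

print(f"screened:    {screened / invited:.1%}")       # 44.1% of invited
print(f"eligible:    {eligible / screened:.1%}")      # 67.1% of screened
print(f"consented:   {consented / eligible:.1%}")     # 93.0% of eligible
print(f"randomized:  {randomized / eligible:.1%}")    # 52.9% of eligible
print(f"3-mo survey: {followed_up / randomized:.1%}") # 58.8% of randomized
```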
Baseline Characteristics by Recruitment Phase
New Year participants differed from participants recruited during other time periods on age, education, employment, and marital status (see Table 2). New Year participants were older (mean 43.2, SD 12.3 vs mean 39.1, SD 13.3; P=.01) and more likely to be employed full time (58.9%, 79/136 vs 40.6%, 43/106; P=.01) compared with summer participants. New Year participants were more likely to have attended college than both summer and fall participants (New Year: 80.9%, 110/136; summer: 66.0%, 70/106; fall: 68.7%, 68/99; P=.02). Both New Year and summer participants were more likely to have a spouse/partner compared with fall participants (New Year: 63.2%, 86/136; summer: 65.1%, 69/106; fall: 47.5%, 47/99; P=.02). Summer and fall participants differed on Internet access (P=.004), but neither differed from New Year enrollees.

Treatment Utilization Metrics by Recruitment Phase
Among the various utilization metrics we examined (Table 3), only the total number of Web pages viewed differed significantly between the 3 groups of participants: page views were higher among New Year participants (median 57, IQR 20-57) than among both summer (median 29, IQR 13-59; P=.002) and fall enrollees (median 36, IQR 19-69; P=.004). Summer and fall participants did not differ on page views.

Smoking Outcomes by Time of Enrollment
There were no significant differences in any of the smoking outcomes we examined by recruitment phase. Using intention-to-treat analyses, 30-day point prevalence abstinence was 11.8% (16/136), 15.1% (16/106), and 17.2% (17/99), and 7-day point prevalence abstinence was 16.2% (22/136), 18.9% (20/106), and 22.2% (22/99) for New Year, summer, and fall participants, respectively. Using responder-only analyses, 30-day point prevalence abstinence was 20% (16/79), 23% (16/68), and 32% (17/53), and 7-day point prevalence abstinence was 28% (22/79), 29% (20/68), and 41% (22/53), respectively. There was no difference in the number of quit attempts reported by the 3 groups at the 3-month follow-up (Table 4).
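The difference between the two analytic approaches above is only the denominator: intention-to-treat (ITT) counts all randomized participants and treats nonresponders as smokers, while responder-only restricts to follow-up completers. A minimal sketch with the 30-day counts from the text:

```python
# Sketch: ITT vs responder-only 30-day abstinence rates from reported counts.
quit = {"New Year": 16, "summer": 16, "fall": 17}
randomized = {"New Year": 136, "summer": 106, "fall": 99}
responders = {"New Year": 79, "summer": 68, "fall": 53}

for phase in quit:
    itt = quit[phase] / randomized[phase]     # nonresponders counted as smokers
    resp = quit[phase] / responders[phase]    # completers only
    print(f"{phase}: ITT {itt:.1%}, responder-only {resp:.0%}")
```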
Table 1. Recruitment and retention metrics by recruitment phase.

Table 2. Baseline characteristics by recruitment phase. aMotivation to quit excludes 1 participant in the New Year group who reported no plans to quit smoking. bFTND: Fagerström Test for Nicotine Dependence. cISEL: Interpersonal Support Evaluation List. dInternet access: n=3 cases that reported using a dial-up connection were dropped (summer: n=2; New Year: n=1; fall: n=0).

Table 3. Treatment utilization metrics by recruitment phase. a12 missing values (New Year: n=7; summer: n=3; fall: n=2).

Table 4. Smoking outcomes by recruitment phase.
Discussion

Principal Findings
The results of this study indicate that smokers visiting a Web-based cessation program during the New Year period are more likely to register for treatment and differ on several demographic variables, but show similar patterns of treatment engagement, retention, and short-term cessation outcomes compared with participants who visit the site during other periods of the year. Our hypotheses that New Year participants would differ on measures of motivation and desire to quit were not supported, and there were no differences on any of the smoking variables we examined. Our hypotheses about lower retention rates, website utilization rates, and cessation outcomes were also not supported. Follow-up rates were comparable across all 3 periods, and smokers recruited during the New Year period quit at the same rate as smokers recruited at other times of the year. These results mitigate scientific concerns about recruiting participants during this time frame and are reassuring for researchers conducting Web-based cessation trials.

Our finding that New Year participants were older, more educated, more likely to be employed full time, and more likely to have a relationship partner may suggest that smokers with greater resources are more affected by the seasonal trend of quitting around the New Year. Alternatively, these differences may be a function of differential message exposure: older employed individuals may have been more likely to see the BecomeAnEX online advertising campaign, or to be reminded of cessation through workplace wellness programs or other promotional activities. Although the differences were not significant, smokers recruited during the New Year period also had a higher level of nicotine dependence and a higher number of previous quit attempts at baseline, suggesting that seasonal trends may serve as a cue to action for more dependent and motivated smokers. We do not have an explanation for the finding that New Year participants viewed more website pages than participants in other recruitment phases, especially because no other metric of engagement or utilization differed significantly.

To our knowledge, this is the first study to examine demographic, utilization, and outcome patterns of smokers recruited to a Web-based intervention during the New Year period, although there is growing interest in seasonal patterns of behavior.
Delnevo and colleagues [10] noted seasonal patterns in motivation to quit among callers to a quitline and discussed the implications for planning, promoting, and evaluating telephone quitlines. Using Internet search query data, Ayers et al [22] documented seasonality in searches for mental health information, with increases in information seeking that corresponded to patterns of seasonal affective disorder. The lack of previous publications in this area may be because dramatic surges in recruitment are to some degree unique to the online environment and to Web-based studies capable of enrolling a large number of trial participants in a relatively short period of time.

Our findings add to the small but growing literature on recruitment methods for Web-based tobacco interventions [23-31]. Most studies to date have focused on comparisons between online recruitment methods (eg, online banner ads, search engine advertising) and more traditional recruitment methods (eg, newspaper ads, targeted mailings) [24,25,32], evaluation of different Internet-based methods [28,29], or the use of offline methods (eg, physician referral) to drive tobacco users to Web-based interventions [30,31]. The primary endpoints of interest in most studies are baseline participant characteristics and recruitment yield and/or efficiency. Heffner et al [25] evaluated the impact of Web-based and traditional recruitment methods on 3-month data retention and 30-day point prevalence smoking abstinence in a cessation trial and found no differences by recruitment method. Our findings regarding the impact of seasonality are consistent with previous studies that have demonstrated some differences in the types of participants recruited to Web-based tobacco interventions, but no differences in their participation in or outcomes from such trials.
Limitations
Several limitations to this study should be noted. We examined variations in participant characteristics for a single cessation website as part of an ongoing randomized trial. This site exists in a larger ecosystem of promotion, advertising, and branding as part of the national BecomeAnEX campaign, which has been in existence since 2008. Our results are likely tied to the specific strategy employed by BecomeAnEX, which in its present implementation is largely online advertising. Other advertising and promotional strategies for different Web-based interventions could yield different results. In addition, our automated titration of recruitment volume should be noted when considering the pragmatic implications of our results.
Although the number of visitors to BecomeAnEX was higher during the New Year period and a higher proportion of them registered to become members, the number of participants recruited to our clinical trial during different periods of the year remained relatively constant because of the daily cap on enrollment. This cap is designed to maintain a consistent volume of participants for our research and intervention staff to manage. If the cap were not in place, we might have seen a higher number of participants invited to the study and differences in the proportion of participants accepting or declining the study invitation. This may have important pragmatic implications for other Web-based trials that involve human staffing, but we believe it is unlikely to have affected the other metrics we examined (ie, follow-up rates, cessation outcomes). The daily cap on recruitment may also have reduced statistical power. The response rate to the 3-month follow-up was lower than desired despite numerous online and offline strategies to reach participants, but it is comparable to or higher than that of other Internet studies [33].

Conclusions
Internet interventions for health behavior change are characterized by their ability to recruit broadly and provide treatment at scale. Secular or temporal variations, such as the New Year holiday and the associated media attention to smoking cessation and resolution making, can result in large-scale swings in the number of individuals arriving at Web-based cessation interventions.
For interventions that can effectively capture and enroll those individuals, seasonal variations could dramatically increase recruitment efficiency for clinical trials.
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, "Results", null, null, null, null, "discussion", null, null, null ]
[ "seasonal variation", "smoking cessation", "Internet", "research subject recruitment" ]
Introduction
Seasonal variations across a number of smoking and quitting behaviors have been documented. Most smokers express a desire to quit [1], and many make a quit attempt around the start of the New Year [2-6]. Reports have shown that sales of cigarettes are at their lowest during January and February [7,8] and that sales of nicotine replacement therapies are at their highest from January through March [9]. Seasonal variation in motivational stage of change among callers to a state quitline has also been documented [10], with callers in December and January being more likely to have recently quit than callers during other months. Internet search queries provide further evidence of seasonal variation in smoking cessation, with clear peaks in the use of "quit smoking" as a search term at the beginning of each calendar year. Figure 1 shows the relative use of the search term "quit smoking" in Google search engine queries over 6 years in the United States, as reported by Google Trends [11], the public database of Google queries.

A greater number of smokers quitting around the New Year may mean a greater pool of potential research participants for smoking cessation trials. However, the effects of seasonality on research recruitment and retention have not been documented. Specifically, it is unknown whether smokers who are invited to cessation treatment trials during this period are as likely to enroll, to adhere to a research protocol, and to benefit from a cessation intervention. Smokers who elect to quit around the New Year may differ from those who quit at other times of the year on motivation, desire, confidence, or other factors that relate to trial participation, engagement, and cessation outcomes.

These are important questions to address from both a pragmatic and a scientific standpoint. From a pragmatic standpoint, conducting trial recruitment during the New Year holiday may have staffing and cost implications for all aspects of a trial. Research staff may be needed to field study inquiries, conduct eligibility screening, administer assessments, and manage study communications; intervention staff may be required to orient new participants to the trial and begin intervention delivery. Given the potential for higher recruitment volume during this period, staffing increases may be required. From a scientific standpoint, if participants enrolled during this time are less likely to adhere to the research protocol (ie, lower rates of intervention adherence, lower retention at follow-up) because of a more transient commitment to quitting, this could have important implications for treatment trials. Lower retention rates (ie, higher loss to follow-up) would result in a higher proportion of participants counted as smokers in intention-to-treat analyses, which may artificially deflate overall abstinence rates and the perceived effectiveness of an intervention. Lower rates of intervention adherence among participants recruited during the New Year could influence metrics of intervention feasibility and receptivity as well as cessation outcomes.

We sought to examine these questions about the impact of seasonality on smoking cessation treatment trials in the context of an ongoing randomized trial of a Web-based cessation intervention.
We extracted a subset of participants recruited during a 15-day window spanning the 2013 New Year and compared them with participants recruited during 2 other 15-day periods in 2012 on recruitment and retention rates, baseline characteristics, website utilization, and cessation outcomes. Our a priori hypotheses were that New Year participants would differ on baseline measures of motivation and desire to quit, consistent with a more transient commitment to quitting, and would have lower recruitment and retention rates, lower website utilization, and poorer cessation outcomes than participants enrolled during other periods.

Figure 1. Use of the search term "quit smoking" in Google search engine queries, relative to the total number of Google searches, between June 2007 and June 2013 in the United States, as reported in Google Trends.

Methods

Study Overview
The full study protocol has been published elsewhere [12]. Briefly, this is an ongoing Web-based randomized trial comparing the efficacy of an interactive, evidence-based smoking cessation website alone and in conjunction with (1) a theory-driven, empirically informed social network intervention designed to integrate participants into an online community, and (2) access to a free supply of nicotine replacement therapy (NRT) products. The study uses a 2×2 factorial design to compare the following treatment conditions: (1) website, (2) website + social network intervention, (3) website + NRT, and (4) website + social network intervention + NRT. A total of 4000 participants will be randomized by the end of the study. Follow-up assessments are administered at 3 and 9 months postrandomization; 30-day point prevalence abstinence is the primary outcome of the parent trial. Study eligibility criteria are current smoking, age 18 years or older, and US residence. Exclusion criteria are contraindications to NRT (pregnant or breastfeeding, recent cardiac problems, current NRT use). Randomization is stratified by gender and baseline motivation.
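As a concrete illustration of the design, here is a minimal sketch of stratified randomization into the four factorial cells. Stratification by gender and baseline motivation follows the text, but the permuted-block scheme, block size, and stratum labels are illustrative assumptions beyond what the protocol summary specifies.

```python
# Sketch: stratified block randomization into the 2x2 factorial cells.
import random
from collections import defaultdict

ARMS = ["website", "website+SNI", "website+NRT", "website+SNI+NRT"]

class StratifiedRandomizer:
    def __init__(self, block_size_factor: int = 2, seed: int = 42):
        self.blocks = defaultdict(list)   # one permuted block per stratum
        self.factor = block_size_factor
        self.rng = random.Random(seed)

    def assign(self, gender: str, motivation: str) -> str:
        block = self.blocks[(gender, motivation)]
        if not block:                     # refill and shuffle an empty block
            block.extend(ARMS * self.factor)
            self.rng.shuffle(block)
        return block.pop()

randomizer = StratifiedRandomizer()
arm = randomizer.assign("female", "ready to quit")  # labels hypothetical
```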
Study Overview: The full study protocol has been published elsewhere [12]. Briefly, this is an ongoing Web-based randomized trial comparing the efficacy of an interactive, evidence-based smoking cessation website alone and in conjunction with (1) a theory-driven, empirically informed social network intervention designed to integrate participants into an online community, and (2) access to a free supply of nicotine replacement therapy (NRT) products. The study uses a 2×2 factorial design to compare the following treatment conditions: (1) website, (2) website+social network intervention, (3) website+NRT, and (4) website+social network intervention+NRT. A total of 4000 participants will be randomized by the end of the study. Follow-up assessments are administered at 3- and 9-months postrandomization; 30-day point prevalence abstinence is the primary outcome of the parent trial. Study eligibility criteria are current smoking, age 18 years or older, and US residence. Exclusion criteria are contraindications to NRT (pregnant or breastfeeding, recent cardiac problems, current NRT use). Randomization is stratified by gender and baseline motivation.
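Stratified randomization can be implemented in several ways; the protocol paper [12] has the authoritative details. As a minimal sketch only, the following assumes permuted blocks within each gender × motivation stratum; the block size, stratum labels, and class names are illustrative assumptions, not details taken from the trial.

```python
import random

# The four treatment conditions of the 2x2 factorial design.
CONDITIONS = [
    "website",
    "website+social_network",
    "website+NRT",
    "website+social_network+NRT",
]

class StratifiedBlockRandomizer:
    """Permuted-block randomization within gender x motivation strata.

    A block size of 4 (one slot per condition) is an assumption for
    illustration; the protocol does not specify the allocation algorithm here.
    """

    def __init__(self, block_size=4, seed=None):
        assert block_size % len(CONDITIONS) == 0
        self.block_size = block_size
        self.rng = random.Random(seed)
        self.blocks = {}  # stratum -> remaining assignments in current block

    def assign(self, gender, motivation):
        stratum = (gender, motivation)
        block = self.blocks.get(stratum)
        if not block:  # start a new shuffled block for this stratum
            block = CONDITIONS * (self.block_size // len(CONDITIONS))
            self.rng.shuffle(block)
            self.blocks[stratum] = block
        return self.blocks[stratum].pop()

# Example: assign two hypothetical participants.
randomizer = StratifiedBlockRandomizer(seed=42)
print(randomizer.assign("female", "high motivation"))
print(randomizer.assign("male", "low motivation"))
```

Within each stratum, every block of 4 assignments is balanced across the 4 conditions, which keeps gender and motivation distributions comparable across arms as enrollment accrues.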
Recruitment: The study is conducted within BecomeAnEX.org, a free, publicly available, evidence-based intervention developed in accordance with the 2008 US Department of Health and Human Services' Clinical Practice Guidelines [13]. The site was developed by Legacy, a nonprofit organization that develops smoking prevention and cessation programs, in collaboration with the Mayo Clinic Nicotine Dependence Center [14]. A national multichannel media campaign that included television, radio, and outdoor and online advertising was launched in 2008 to promote the website [15]. The present implementation of this campaign relies on various forms of online advertising, including social media, search engine marketing, and large targeted ad networks for display advertising. Search engine advertising targets keywords related to BecomeAnEX (eg, quit smoking, stop smoking), and display advertising targets males and females aged 25 to 54 years.

Participants are recruited to the trial immediately following registration on BecomeAnEX. The entire recruitment and enrollment process is automated using a Web-based clinical trials management system. Individuals who indicate current smoking (every day/some days) during registration are invited to the study. Interested individuals complete online eligibility screening; eligible individuals provide online informed consent and contact information, including an email address that is used to send a link to the online baseline assessment. Participants are randomized to treatment upon completion of the baseline survey. No incentive is provided for enrollment in the study. Recruitment volume is capped at a maximum of 10 new participants per day to ensure a manageable workload for intervention and research staff throughout the study period. Once 10 individuals are randomized, no new registered users are invited for the remainder of the 24-hour period. Recruitment began in March 2012. As of October 30, 2013, 3602 participants have been randomized.

We defined the New Year period based on a clear peak and drop in the number of individuals who registered on BecomeAnEX between December 1, 2012 and January 31, 2013. The average daily conversion rate of unique visitors to registrants from December 1 through December 25 was approximately 4.7%. This proportion nearly doubled to 8.2% on December 26, 2012 and stayed elevated through January 9, 2013, at an average daily conversion rate of 7.4%. Thus, we selected this 15-day period as our New Year period. For comparison, we selected 2 other 15-day periods during the year based on several criteria: (1) a similar marketing and promotion approach, (2) variation in season (ie, summer, fall), (3) the same span of days of the week (Wednesday to Wednesday), and (4) a roughly similar number of participants randomized during the designated time period. Based on these factors, 2 separate 15-day periods were selected for comparison: 1 during the summer (July 18, 2012 to August 1, 2012) and 1 during the fall (November 7, 2012 to November 21, 2012). We deliberately selected the fall period to include another popular quitting holiday, the American Cancer Society's Great American Smokeout, which falls on the third Thursday of November (November 15, 2012). Inclusion of this time frame enabled us to compare participants enrolling during the New Year to participants potentially enrolling in response to another seasonal trigger for cessation.
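The New Year window itself was identified descriptively, from the peak and drop in daily conversion rates. A hedged sketch of how such a window could be flagged programmatically is shown below; the rule used here (days whose conversion rate exceeds 1.5 times the mean of the December 1-25 baseline) is our own illustrative choice, not the authors' procedure, and the toy numbers only mirror the rates reported above.

```python
from datetime import date, timedelta

def elevated_window(daily_rates, baseline_rates, factor=1.5):
    """Return the longest run of consecutive days whose conversion rate
    exceeds `factor` times the mean baseline rate.

    daily_rates: dict mapping date -> conversion rate (registrants/visitors).
    baseline_rates: iterable of rates from a reference period.
    The 1.5x factor is an illustrative assumption.
    """
    threshold = factor * (sum(baseline_rates) / len(baseline_rates))
    best, run = [], []
    for day in sorted(daily_rates):
        if daily_rates[day] > threshold:
            run.append(day)
        else:
            if len(run) > len(best):
                best = run
            run = []
    return max(best, run, key=len)

# Toy data roughly mirroring the reported pattern: ~4.7% baseline
# conversion, jumping to ~7-8% from December 26, 2012 onward.
baseline = [0.047] * 25
rates = {date(2012, 12, 26) + timedelta(d): 0.074 for d in range(15)}
rates[date(2013, 1, 10)] = 0.046  # rate drops back after January 9
window = elevated_window(rates, baseline)
print(window[0], "to", window[-1])  # 2012-12-26 to 2013-01-09
```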
Interventions: Participants in all 4 treatment groups had full access to the BecomeAnEX website, which provides assistance in setting a quit date, assessment of motivation and nicotine dependence, problem-solving/skills training to enhance self-efficacy for quitting, assistance in selecting and using US Food and Drug Administration (FDA)-approved pharmacotherapies, and social support through a large online community [14,15]. Participants randomized to receive the social network intervention received proactive communications from established members of the BecomeAnEX community (integrators). Within 24 hours after a new participant joined the study, the integrators posted a public message on the new member's profile page to welcome them to the site, encourage them to fill out their profile, or comment on some aspect of an existing profile. Participants randomized to receive NRT products from the study were mailed a free 4-week supply of the NRT product of their choice (patch, gum, or lozenge) within 3 days of randomization. The NRT was provided as an over-the-counter product (ie, with no additional support or guidance) to parallel the experience participants would have if they purchased NRT on their own.

Data Collection: Data are obtained through 3 sources: (1) a Web-based clinical trials management system that automates the recruitment and enrollment process, (2) self-report assessments at baseline and at 3 and 9 months postrandomization, and (3) online tracking software that records utilization of BecomeAnEX. Our analyses of smoking outcomes focus on the 3-month follow-up because this is typically when treatment utilization and intervention effects are strongest. Telephone follow-up of online nonresponders by professional interviewers blinded to treatment condition is used to maximize follow-up rates. Participants are reimbursed via Amazon or PayPal for survey completion (US $20 for the Web survey, US $15 for the phone survey). Individual-level tracking metrics of BecomeAnEX utilization are recorded using Adobe/Omniture SiteCatalyst [16] software.
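Analyses combine these 3 sources at the participant level. A minimal sketch of joining hypothetical extracts on a shared participant ID with pandas follows; all column names and values are illustrative assumptions, not the study's actual schema.

```python
import pandas as pd

# Hypothetical extracts from the three data sources; column names are
# illustrative, not the study's actual schema.
enrollment = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "randomized_on": pd.to_datetime(["2012-12-27", "2012-12-28", "2013-01-02"]),
    "condition": ["website", "website+NRT", "website+social_network"],
})
survey_3mo = pd.DataFrame({
    "participant_id": [1, 3],
    "abstinent_30day": [True, False],
})
tracking = pd.DataFrame({
    "participant_id": [1, 1, 2, 3],
    "session_minutes": [12.5, 3.0, 7.2, 20.1],
})

# Aggregate tracking to one row per participant, then left-join so that
# survey nonresponders are retained with missing outcome values.
usage = (tracking.groupby("participant_id")["session_minutes"]
         .agg(total_minutes="sum", n_sessions="count")
         .reset_index())
merged = (enrollment
          .merge(usage, on="participant_id", how="left")
          .merge(survey_3mo, on="participant_id", how="left"))
print(merged)
```

Left joins keep every randomized participant in the analysis file, which is what an intention-to-treat analysis requires.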
Overview: The following measures from the parent trial were examined for these analyses.

Sociodemographic Variables: Participants reported age, gender, race, ethnicity, marital status, employment, education, and type of Internet connection.

Smoking Variables: At baseline, participants completed the Fagerström Test for Nicotine Dependence (FTND) [17] and also reported their confidence and desire to quit smoking (1=not at all, 5=very much), the current number of cigarettes smoked per day, the number of quit attempts in the previous year, and motivation to quit [18].

Psychosocial Variables: The appraisal and belonging subscales of the 12-item Interpersonal Support Evaluation List (ISEL) [19] were used to measure perceived availability of social resources at baseline. The appraisal subscale measures the perceived availability of someone to talk to about one's problems; the belonging subscale assesses the perceived availability of people with whom to engage in activities. Perceptions of cessation-related social support are measured at baseline and follow-up with a 6-item version of the Partner Interaction Questionnaire [20,21] that assesses receipt of positive behaviors (supportive of cessation) and negative behaviors (harmful to cessation) from an individual who has followed the participant's efforts to quit smoking.

Treatment Adherence: Website utilization during the first 3 months of the study was extracted from the BecomeAnEX database and included the following metrics: number of log-ins, minutes spent using the site during each visit/session, number of pages viewed during each visit/session, and the number of blog posts read and made. Website utilization was recorded using Adobe/Omniture SiteCatalyst. Each page view by a participant was recorded in a relational database, and page views were grouped into sessions. The duration of a session was defined as the time elapsed between the first and last page views in a given session. If a user did not view a new page for more than 30 minutes, the system marked them as inactive, and their next return visit created a new session. At each follow-up, participants reported use of NRT and prescription cessation medications (eg, Chantix, bupropion).
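To make the session rule concrete, the sketch below re-implements the 30-minute inactivity timeout on a list of timestamped page views. It mirrors the definition above (session duration runs from the first to the last page view), but it is our own minimal reconstruction, not SiteCatalyst's internal logic.

```python
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)

def sessionize(page_view_times):
    """Group a user's page-view timestamps into sessions.

    A new session starts whenever more than 30 minutes elapse between
    consecutive page views, mirroring the inactivity rule described above.
    Returns (first_view, last_view, n_pages) per session; session duration
    is last_view - first_view, as in the definition above.
    """
    sessions = []
    current = []
    for t in sorted(page_view_times):
        if current and t - current[-1] > TIMEOUT:
            sessions.append((current[0], current[-1], len(current)))
            current = []
        current.append(t)
    if current:
        sessions.append((current[0], current[-1], len(current)))
    return sessions

# Example: three quick views, then a 45-minute gap, then one more view.
views = [datetime(2013, 1, 2, 9, 0), datetime(2013, 1, 2, 9, 5),
         datetime(2013, 1, 2, 9, 20), datetime(2013, 1, 2, 10, 5)]
for first, last, n in sessionize(views):
    print(f"{n} page(s), duration {last - first}")
```

Note that a single-page session has a duration of zero under this definition, which is one reason minutes-per-session and pages-per-session are usually reported together.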
Three-Month Outcome Measures: Smoking outcomes examined in these analyses included self-reported point prevalence abstinence (30-day and 7-day) measured at 3 months. We also examined the number of quit attempts reported at 3 months.

Statistical Analyses: The effects of recruitment phase (New Year, summer, fall) on recruitment metrics, baseline characteristics, treatment utilization, and outcome measures were evaluated via chi-square tests for proportions and 1-way ANOVA for continuous variables. Significant omnibus tests were followed by unadjusted pairwise comparisons. Analyses were conducted on the full sample of participants recruited in each phase (ie, collapsed across treatment groups).
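As an illustration of this analysis plan, the following sketch runs a chi-square omnibus test on hypothetical abstinence counts and a 1-way ANOVA on simulated continuous data, followed by unadjusted pairwise comparisons. The data are made up, and the use of SciPy is our tooling assumption, not a detail from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical 30-day abstinence counts by phase (abstinent, not abstinent);
# the chi-square omnibus test compares the three proportions.
table = np.array([[16, 120],   # New Year
                  [16, 90],    # summer
                  [17, 82]])   # fall
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi-square omnibus: chi2={chi2:.2f}, p={p_chi:.3f}")

# Simulated continuous measure (eg, age) by phase; 1-way ANOVA omnibus test.
rng = np.random.default_rng(0)
newyear = rng.normal(43, 12, 136)
summer = rng.normal(39, 13, 106)
fall = rng.normal(40, 13, 99)
f, p_anova = stats.f_oneway(newyear, summer, fall)
print(f"ANOVA omnibus: F={f:.2f}, p={p_anova:.3f}")

# If the omnibus test is significant, follow with unadjusted pairwise
# comparisons (here, t tests between each pair of phases).
if p_anova < .05:
    for name, a, b in [("New Year vs summer", newyear, summer),
                       ("New Year vs fall", newyear, fall),
                       ("summer vs fall", summer, fall)]:
        t, p_pair = stats.ttest_ind(a, b)
        print(name, f"p={p_pair:.3f}")
```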
Results: Recruitment and Retention by Recruitment Phase: Table 1 shows the Consolidated Standards of Reporting Trials (CONSORT) metrics for each recruitment phase. We examined advertising expenditures during the 2 weeks before each recruitment phase in addition to the recruitment phase itself. For the New Year, summer, and fall phases, expenditures totaled US $57,508, US $48,632, and US $38,374, respectively. The conversion rate of unique visitors to new registered users during the New Year period (7.4%) was significantly higher than in both the summer (4.6%) and fall (4.9%) periods; summer and fall conversion rates did not differ. Among new registered users during the New Year period, 868 were invited to participate in the study. Of these, 44.1% (383/868) accepted the invitation and completed eligibility screening; 67.1% (257/383) were eligible, 93.0% (239/257) consented, and 52.9% of those eligible (136/257) completed the baseline assessment and were randomized to treatment. Among the New Year participants, 58.8% (80/136) completed the 3-month follow-up survey. With the exception of conversion rate, there were no significant differences between recruitment phases for any of the CONSORT metrics.
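The funnel percentages above chain each step's count against the denominator used in the text (note that randomization is reported against those eligible rather than those consented). A short sketch reproducing the New Year figures:

```python
# Counts for the New Year phase, taken from the text above.
funnel = [
    ("new registered users invited", 868),
    ("accepted and screened", 383),       # denominator: invited
    ("eligible", 257),                    # denominator: screened
    ("consented", 239),                   # denominator: eligible
    ("randomized", 136),                  # denominator: eligible (as reported)
    ("completed 3-month follow-up", 80),  # denominator: randomized
]

# Denominator for each step, matching how the percentages are reported.
denominators = [None, 868, 383, 257, 257, 136]

for (label, n), d in zip(funnel, denominators):
    pct = f" ({100 * n / d:.1f}%)" if d else ""
    print(f"{label}: {n}{pct}")
# Output matches the text: 44.1%, 67.1%, 93.0%, 52.9%, 58.8%.
```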
Baseline Characteristics by Recruitment Phase: New Year participants differed from participants recruited during the other time periods on age, education, employment, and marital status (see Table 2). New Year participants were older (mean 43.2, SD 12.3 vs mean 39.1, SD 13.3; P=.01) and more likely to be employed full time (58.9%, 79/136 vs 40.6%, 43/106; P=.01) compared with summer participants. New Year participants were more likely to have attended college than both summer and fall participants (New Year: 80.9%, 110/136; summer: 66.0%, 70/106; fall: 68.7%, 68/99; P=.02). Both New Year and summer participants were more likely to have a spouse/partner compared with fall participants (New Year: 63.2%, 86/136; summer: 65.1%, 69/106; fall: 47.5%, 47/99; P=.02). Summer and fall participants differed on Internet access (P=.004), but neither differed from New Year enrollees.

Treatment Utilization Metrics by Recruitment Phase: Among the various utilization metrics we examined (Table 3), only the total number of Web pages viewed differed significantly between the 3 groups of participants: page views were higher among New Year participants (median 57, IQR 20-57) than among both summer (median 29, IQR 13-59; P=.002) and fall enrollees (median 36, IQR 19-69; P=.004). Summer and fall participants did not differ on page views.

Smoking Outcomes by Time of Enrollment: There were no significant differences in any of the smoking outcomes we examined by recruitment phase. Using intention-to-treat analyses, 30-day point prevalence abstinence was 11.8% (16/136), 15.1% (16/106), and 17.2% (17/99), and 7-day point prevalence abstinence was 16.2% (22/136), 18.9% (20/106), and 22.2% (22/99) for New Year, summer, and fall participants, respectively. Using responder-only analyses, 30-day point prevalence abstinence was 20% (16/79), 23% (16/68), and 32% (17/53), and 7-day point prevalence abstinence was 28% (22/79), 29% (20/68), and 41% (22/53) for New Year, summer, and fall participants, respectively. There was no difference in the number of quit attempts reported by the 3 groups at the 3-month follow-up (Table 4).

Table 1. Recruitment and retention metrics by recruitment phase.
Table 2. Baseline characteristics by recruitment phase. (a) Motivation to quit excludes 1 participant in the New Year group who reported no plans to quit smoking. (b) FTND: Fagerström Test for Nicotine Dependence. (c) ISEL: Interpersonal Support Evaluation List. (d) Internet access: n=3 cases that reported using a dial-up connection were dropped (summer: n=2; New Year: n=1; fall: n=0).
Table 3. Treatment utilization metrics by recruitment phase. (a) 12 missing values (New Year: n=7; summer: n=3; fall: n=2).
Table 4. Smoking outcomes by recruitment phase.
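The gap between the two sets of abstinence rates reported above comes purely from the denominator: intention-to-treat counts every randomized participant (nonresponders treated as smokers), whereas responder-only analyses divide by follow-up completers. A minimal sketch using the New Year 30-day numbers:

```python
def abstinence_rates(n_abstinent, n_randomized, n_responders):
    """Return (intention-to-treat, responder-only) abstinence rates.

    ITT treats survey nonresponders as smokers, so the denominator is
    everyone randomized; responder-only divides by follow-up completers.
    """
    return n_abstinent / n_randomized, n_abstinent / n_responders

# New Year phase, 30-day point prevalence abstinence (from the text above):
# 16 abstinent, 136 randomized, 79 responders used in the responder analysis.
itt, responder = abstinence_rates(16, 136, 79)
print(f"ITT: {itt:.1%}, responder-only: {responder:.1%}")  # 11.8%, 20.3%
```

This is why lower follow-up rates mechanically deflate ITT abstinence estimates, the concern raised in the Introduction.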
Discussion: Principal Findings: The results of this study indicate that smokers visiting a Web-based cessation program during the New Year period are more likely to register for treatment and differ on several demographic variables, but show similar patterns of treatment engagement, retention, and short-term cessation outcomes compared with participants who visit the site during other periods of the year. Our hypotheses that New Year participants would differ on measures of motivation and desire to quit were not supported, and there were no differences on any of the smoking variables we examined. In addition, our hypotheses about lower retention rates, website utilization rates, and cessation outcomes were not supported. Follow-up rates were comparable across all 3 periods, and smokers recruited during the New Year period quit at the same rate as smokers recruited at other times during the year. These results allay scientific concerns about recruiting participants during this time frame and are reassuring for researchers conducting Web-based cessation trials.

Our findings that New Year participants were older, more educated, more likely to be employed full time, and more likely to have a relationship partner may suggest that smokers with greater resources are more affected by the seasonal trend of quitting around the New Year. Alternatively, these differences may be a function of differential message exposure: older employed individuals may have been more likely to be reached by the BecomeAnEX online advertising campaign, or reminded of cessation through workplace wellness programs or other promotional activities. Although the differences were not significant, smokers recruited during the New Year period also had a higher level of nicotine dependence and a higher number of previous quit attempts at baseline, suggesting that seasonal trends may serve as a cue to action for more dependent and motivated smokers. We do not have an explanation for the finding that New Year participants viewed more website pages than participants in other recruitment phases, especially because no other metric of engagement or utilization differed significantly.

To our knowledge, this is the first study to examine the demographic, utilization, and outcome patterns of smokers recruited to a Web-based intervention during the New Year period, although there is broader interest in seasonal patterns of behavior. Delnevo and colleagues [10] noted seasonal patterns in motivation for quitting among callers to a quitline and discussed the implications for planning, promoting, and evaluating telephone quitlines. Using Internet search query data, Ayers et al [22] documented seasonality in searches for mental health information, with increases in information seeking that corresponded to patterns of seasonal affective disorder. The lack of previous publications in this area may be because dramatic spikes in recruitment are to some degree unique to the online environment and to Web-based studies capable of enrolling a large number of trial participants in a relatively short period of time. Our findings add to the small but growing literature on recruitment methods for Web-based tobacco interventions [23-31].
Most studies to date have focused on comparisons between online recruitment methods (eg, online banner ads, search engine advertising) and more traditional recruitment methods (eg, newspaper ads, targeted mailings) [24,25,32], evaluation of different Internet-based methods [28,29], or the use of offline methods (eg, physician referral) to drive tobacco users to Web-based interventions [30,31]. The primary endpoints of interest in most studies are baseline participant characteristics and recruitment yield and/or efficiency. Heffner et al [25] evaluated the impact of Web-based and traditional recruitment methods on data retention and 30-day point prevalence smoking abstinence at the 3-month outcome assessment in a cessation trial and found no differences by recruitment method. Our findings regarding the impact of seasonality are consistent with previous studies that have demonstrated some differences in the types of participants recruited to Web-based tobacco interventions, but no differences in their participation in or outcomes from such trials.

Limitations: Several limitations to this study should be noted. We examined variations in participant characteristics for a single cessation website as part of an ongoing randomized trial. This site exists in a larger ecosystem of promotion, advertising, and branding as part of the national BecomeAnEX campaign, which has been in existence since 2008. Our results are likely tied to the specific strategy employed by BecomeAnEX, which in its present implementation is largely online advertising. Other advertising and promotional strategies of different Web-based interventions could yield different results. In addition, our automated titration of recruitment volume should be noted when considering the pragmatic implications of our results. Although the number of visitors to BecomeAnEX was higher during the New Year period and a higher proportion of them registered to become members, the number of participants recruited to our clinical trial during different periods of the year has remained relatively constant because of the daily cap on enrollment. This cap is designed to maintain a consistent volume of participants for our research and intervention staff to manage. If this cap were not in place, we might have seen a higher number of participants invited to the study and differences in the proportion of participants accepting or declining the study invitation. This may have important pragmatic considerations for other Web-based trials that involve human staffing, but we believe it is unlikely to have affected the other metrics we examined (ie, follow-up rates, cessation outcomes). The daily cap on recruitment may also have limited statistical power. The response rate to the 3-month follow-up was lower than desired despite numerous online and offline strategies to reach participants, but it is comparable to or higher than that of other Internet studies [33].

Conclusions: Internet interventions for health behavior change are characterized by their ability to recruit broadly and provide treatment at scale. Secular or temporal variations, such as the New Year holiday and the associated media attention to smoking cessation and resolution making, can produce large swings in the number of individuals arriving at Web-based cessation interventions. For interventions that can effectively capture and enroll those individuals, seasonal variations could dramatically increase recruitment efficiency for clinical trials.
Background: Seasonal variations in smoking and quitting behaviors have been documented, with many smokers seeking cessation assistance around the start of the New Year. What remains unknown is whether smokers who are recruited to cessation treatment trials during the New Year are as motivated to quit, or as likely to enroll in a research trial, adhere to a research protocol, and benefit from a cessation intervention compared to those who are recruited during other times of the year. Methods: Participants were current smokers who had registered on a free Web-based cessation program (BecomeAnEX.org) and were invited to participate in a clinical trial. The New Year period was defined according to a clear peak and drop in the proportion of visitors who registered on the site, spanning a 15-day period from December 26, 2012 to January 9, 2013. Two other 15-day recruitment periods during summer (July 18, 2012 to August 1, 2012) and fall (November 7, 2012 to November 21, 2012) were selected for comparison. Data were examined from 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline and 3 months postrandomization, and (3) online tracking software that recorded website utilization during the first 3 months of the trial. Results: Visitors to BecomeAnEX during the New Year period were more likely to register on the site than smokers who visited during summer or fall (conversion rates: 7.4%, 4.6%, 4.9%, respectively; P<.001), but there were no differences in rates of study acceptance, consent, randomization, 3-month follow-up survey completion, or cessation between the 3 periods. New Year participants were older, more educated, more likely to be employed full time, and more likely to have a relationship partner compared with participants recruited at other times during the year, but did not differ on measures of motivation and desire to quit. Conclusions: Smokers visiting a Web-based cessation program during the New Year period were more likely to register for treatment and differ on several demographic variables, but showed similar patterns of treatment engagement, retention, follow-up, and short-term cessation outcomes compared with participants who visited the site during other periods of the year. These results allay scientific concerns about recruiting participants during this time frame and are reassuring for researchers conducting Web-based cessation trials.
Introduction: Seasonal variations across a number of smoking and quitting behaviors have been documented. Most smokers express a desire to quit [1], and many make a quit attempt around the start of the New Year [2-6]. Reports have shown that sales of cigarettes are at their lowest during January and February [7,8] and that sales of nicotine replacement therapies are at their highest from January through March [9]. Seasonal variations in motivational stage of change among callers to a state quitline have also been documented [10], with callers in December and January more likely to have recently quit than callers during other months. Internet search queries also provide evidence of seasonal variation in smoking cessation, with clear peaks in the use of “quit smoking” as a search term at the beginning of each calendar year. Figure 1 shows the relative use of the search term “quit smoking” in Google search engine queries over 6 years in the United States, as reported by Google Trends [11], the public database of Google queries.

A greater number of smokers quitting around the New Year may mean a greater pool of potential research participants for smoking cessation trials. However, the effects of seasonality on research recruitment and retention have not been documented. Specifically, it is unknown whether smokers who are invited to cessation treatment trials during this period are as likely to enroll, to adhere to a research protocol, and to benefit from a cessation intervention. Smokers who elect to quit around the New Year may differ from those who quit at other times of the year on motivation, desire, confidence, or other factors that relate to trial participation, engagement, and cessation outcomes.

These questions are important from both a pragmatic and a scientific standpoint. Pragmatically, conducting trial recruitment during the New Year holiday may have staffing and cost implications for all aspects of a trial. Research staff may be needed to field study inquiries, conduct eligibility screening, administer assessments, and manage study communications; intervention staff may be required to orient new participants to the trial and begin intervention delivery. Given the potential for higher recruitment volume during this period, staffing increases may be required. Scientifically, if participants enrolled during this time are less likely to adhere to the research protocol (ie, lower rates of intervention adherence, lower retention at follow-up) because of a more transient commitment to quitting, this could have important implications for treatment trials. Lower retention rates (ie, higher loss to follow-up) would result in a higher proportion of participants counted as smokers in intention-to-treat analyses, which may artificially deflate overall abstinence rates and the perceived effectiveness of an intervention. Lower rates of intervention adherence among participants recruited during the New Year could influence metrics of intervention feasibility and receptivity as well as cessation outcomes. We sought to examine these questions about the impact of seasonality on smoking cessation treatment trials in the context of an ongoing randomized trial of a Web-based cessation intervention.
We extracted a subset of participants recruited during a 15-day window that spanned the 2013 New Year and compared them to participants recruited during 2 other 15-day periods in 2012 on recruitment and retention rates, baseline characteristics, website utilization, and cessation outcomes. Our a priori hypotheses were that New Year participants would differ on baseline measures of motivation and desire to quit, consistent with a more transient commitment to quitting, and would have lower recruitment and retention rates, lower website utilization rates, and poorer cessation outcomes compared to participants enrolled during other periods. Figure 1: Use of the search term “quit smoking” in Google search engine queries relative to the total number of Google searches between June 2007 and June 2013 in the United States, as reported in Google Trends. Conclusions: Internet interventions for health behavior change are characterized by their ability to recruit broadly and provide treatment at scale. Secular or temporal variations, such as the New Year holiday, and the associated media attention to smoking cessation and resolution making can result in large-scale swings in the number of individuals arriving at Web-based cessation interventions. For interventions that can effectively capture and enroll those individuals, seasonal variations could dramatically increase recruitment efficiency for clinical trials.
Background: Seasonal variations in smoking and quitting behaviors have been documented, with many smokers seeking cessation assistance around the start of the New Year. What remains unknown is whether smokers who are recruited to cessation treatment trials during the New Year are as motivated to quit, or as likely to enroll in a research trial, adhere to a research protocol, and benefit from a cessation intervention compared to those who are recruited during other times of the year. Methods: Participants were current smokers who had registered on a free Web-based cessation program (BecomeAnEX.org) and were invited to participate in a clinical trial. The New Year period was defined according to a clear peak and drop in the proportion of visitors who registered on the site, spanning a 15-day period from December 26, 2012 to January 9, 2013. Two other 15-day recruitment periods during summer (July 18, 2012 to August 1, 2012) and fall (November 7, 2012 to November 21, 2012) were selected for comparison. Data were examined from 3 sources: (1) a Web-based clinical trials management system that automated the recruitment and enrollment process, (2) self-report assessments at baseline and 3 months postrandomization, and (3) online tracking software that recorded website utilization during the first 3 months of the trial. Results: Visitors to BecomeAnEX during the New Year period were more likely to register on the site than smokers who visited during summer or fall (conversion rates: 7.4%, 4.6%, 4.9%, respectively; P<.001), but there were no differences in rates of study acceptance, consent, randomization, 3-month follow-up survey completion, or cessation between the 3 periods. New Year participants were older, more educated, more likely to be employed full time, and more likely to have a relationship partner compared with participants recruited at other times during the year, but did not differ on measures of motivation and desire to quit. Conclusions: Smokers visiting a Web-based cessation program during the New Year period were more likely to register for treatment and differ on several demographic variables, but showed similar patterns of treatment engagement, retention, follow-up, and short-term cessation outcomes compared with participants who visited the site during other periods of the year. These results allay scientific concerns about recruiting participants during this time frame and are reassuring for researchers conducting Web-based cessation trials.
13,749
472
[ 723, 212, 627, 215, 154, 908, 13, 23, 67, 127, 170, 38, 84, 226, 181, 86, 303, 731, 321, 85 ]
23
[ "participants", "new", "year", "new year", "recruitment", "cessation", "number", "smoking", "quit", "based" ]
[ "quit smoking search", "smokers quitting new", "number smokers quitting", "months smoking outcomes", "impact seasonality smoking" ]
null
[CONTENT] seasonal variation | smoking cessation | Internet | research subject recruitment [SUMMARY]
[CONTENT] seasonal variation | smoking cessation | Internet | research subject recruitment [SUMMARY]
null
[CONTENT] seasonal variation | smoking cessation | Internet | research subject recruitment [SUMMARY]
[CONTENT] seasonal variation | smoking cessation | Internet | research subject recruitment [SUMMARY]
[CONTENT] seasonal variation | smoking cessation | Internet | research subject recruitment [SUMMARY]
[CONTENT] Adult | Female | Humans | Internet | Male | Middle Aged | Patient Compliance | Seasons | Smoking Cessation [SUMMARY]
[CONTENT] Adult | Female | Humans | Internet | Male | Middle Aged | Patient Compliance | Seasons | Smoking Cessation [SUMMARY]
null
[CONTENT] Adult | Female | Humans | Internet | Male | Middle Aged | Patient Compliance | Seasons | Smoking Cessation [SUMMARY]
[CONTENT] Adult | Female | Humans | Internet | Male | Middle Aged | Patient Compliance | Seasons | Smoking Cessation [SUMMARY]
[CONTENT] Adult | Female | Humans | Internet | Male | Middle Aged | Patient Compliance | Seasons | Smoking Cessation [SUMMARY]
[CONTENT] quit smoking search | smokers quitting new | number smokers quitting | months smoking outcomes | impact seasonality smoking [SUMMARY]
[CONTENT] quit smoking search | smokers quitting new | number smokers quitting | months smoking outcomes | impact seasonality smoking [SUMMARY]
null
[CONTENT] quit smoking search | smokers quitting new | number smokers quitting | months smoking outcomes | impact seasonality smoking [SUMMARY]
[CONTENT] quit smoking search | smokers quitting new | number smokers quitting | months smoking outcomes | impact seasonality smoking [SUMMARY]
[CONTENT] quit smoking search | smokers quitting new | number smokers quitting | months smoking outcomes | impact seasonality smoking [SUMMARY]
[CONTENT] participants | new | year | new year | recruitment | cessation | number | smoking | quit | based [SUMMARY]
[CONTENT] participants | new | year | new year | recruitment | cessation | number | smoking | quit | based [SUMMARY]
null
[CONTENT] participants | new | year | new year | recruitment | cessation | number | smoking | quit | based [SUMMARY]
[CONTENT] participants | new | year | new year | recruitment | cessation | number | smoking | quit | based [SUMMARY]
[CONTENT] participants | new | year | new year | recruitment | cessation | number | smoking | quit | based [SUMMARY]
[CONTENT] google | cessation | lower | intervention | quit | search | smokers | queries | rates | year [SUMMARY]
[CONTENT] session | nrt | participants | page | 2012 | view | social | day | number | smoking [SUMMARY]
null
[CONTENT] interventions | scale | individuals | variations | cessation | swings number individuals | internet interventions health behavior | ability | efficiency clinical trials | efficiency clinical [SUMMARY]
[CONTENT] participants | new | year | new year | recruitment | cessation | summer | quit | fall | number [SUMMARY]
[CONTENT] participants | new | year | new year | recruitment | cessation | summer | quit | fall | number [SUMMARY]
[CONTENT] Seasonal | the New Year ||| the New Year | the year [SUMMARY]
[CONTENT] BecomeAnEX.org ||| The New Year | 15-day | December 26, 2012 to January 9, 2013 ||| Two | 15-day | summer | July 18, 2012 to | August 1, 2012 | November 7, 2012 to November 21, 2012 ||| 3 | 1 | 2 | 3 months | 3 | the first 3 months [SUMMARY]
null
[CONTENT] the New Year | the year ||| [SUMMARY]
[CONTENT] the New Year ||| the New Year | the year ||| BecomeAnEX.org ||| The New Year | 15-day | December 26, 2012 to January 9, 2013 ||| Two | 15-day | summer | July 18, 2012 to | August 1, 2012 | November 7, 2012 to November 21, 2012 ||| 3 | 1 | 2 | 3 months | 3 | the first 3 months ||| the New Year | summer | 7.4% | 4.6% | 4.9% | 3-month | 3 ||| New Year | the year ||| the New Year | the year ||| [SUMMARY]
[CONTENT] the New Year ||| the New Year | the year ||| BecomeAnEX.org ||| The New Year | 15-day | December 26, 2012 to January 9, 2013 ||| Two | 15-day | summer | July 18, 2012 to | August 1, 2012 | November 7, 2012 to November 21, 2012 ||| 3 | 1 | 2 | 3 months | 3 | the first 3 months ||| the New Year | summer | 7.4% | 4.6% | 4.9% | 3-month | 3 ||| New Year | the year ||| the New Year | the year ||| [SUMMARY]
EARLY AND LONG-TERM OUTCOME OF SURGICAL INTERVENTION IN CHILDREN WITH INFLAMMATORY BOWEL DISEASE.
33237162
Although disease control in children with inflammatory bowel disease (IBD) is often possible through medical management, surgical intervention is indicated in some cases.
BACKGROUND
This retrospective cohort study was conducted on 21 children with IBD and a surgical indication admitted to a referral children's hospital in Tehran in 2019. Baseline information was collected by reviewing recorded files, and children were followed up to assess surgical outcomes.
METHODS
The rate of early complications after surgery was 47.6%; they included intestinal perforation in 4.8%, peritonitis in 4.8%, wound infection in 23.8%, pelvic abscesses in 14.3%, deep vein thrombosis in 4.8%, intestinal obstruction in 9.5%, pancreatitis in 9.5%, and anal fissure in 4.8%. The mean duration of follow-up was 6.79±4.24 years. The rate of delayed complications during follow-up was 28.6%. Accordingly, long-term complication-free survival rates at 5 and 10 years after surgery were 92.3% and 56.4%, respectively. Among the early features, lack of prior drug treatment and bleeding as an indication for surgery were two predictors of long-term surgical complications.
RESULTS
Standard surgery in the treatment of IBD in children with a surgical indication is associated with a favorable outcome, although short- and long-term surgical complications are also predictable.
CONCLUSION
[ "Child", "Female", "Humans", "Inflammatory Bowel Diseases", "Iran", "Male", "Postoperative Complications", "Retrospective Studies", "Treatment Outcome" ]
7682153
INTRODUCTION
Inflammatory bowel disease (IBD) consists of two distinct but related conditions of chronic and recurrent inflammatory disorders of the gastrointestinal tract: Crohn’s disease and ulcerative colitis. The first is characterized by patchy transmural inflammation with small bowel as well as colonic involvement and varied clinical manifestations 3 . In contrast, ulcerative colitis classically manifests as rectosigmoid mucosal inflammation 14 , 15 . About a quarter of IBD cases occur in childhood and adolescence 9 , 13 . In this respect, the incidence of Crohn’s disease in children is estimated at about 500,000 cases per year 2 . The average age of onset in children is 12 years, and only 5% of cases are under five 1 . Treatment mainly aims to control the disease and relieve its symptoms through medication and medical methods, but in some cases, such as exacerbation of pain or investigation of the causes of rectal bleeding and anal pain, referral to a surgical center is necessary 5 , 11 . However, consideration should be given to the adverse effects and consequences of surgery, especially in children, as such complications in childhood can have a significant impact on quality of life as well as psychological wellbeing 6 , 16 . The objective of this study was to evaluate surgical outcomes in children with IBD in order to assess the need for surgical management and the management of surgical complications by evaluating outcomes at 1, 5, and 10 years after surgery.
METHODS
This study was a retrospective cohort. All children with IBD and a surgical indication admitted to a referral children's hospital in Tehran in 2019 were included. Patients with incomplete data in the records, especially those lacking one-year follow-up information on surgical outcomes, were excluded. After designing the checklist, a complete review of children with IBD with a surgical indication was performed, and information on demographic characteristics (gender and age), surgical indications, duration of medical and drug treatment, type of surgical procedure, complications following surgery and methods of improving and controlling these complications, as well as long-term consequences after surgery, was extracted and finalized. Researchers at all stages of the research adhered to the Declaration of Helsinki, and participants’ information was used without disclosing their identities. Individuals’ data were coded so that their names would not be used. All cases were monitored and approved by Shahid Beheshti University of Medical Sciences, Tehran, Iran. Statistical analysis The results were presented as mean ± standard deviation (SD) for quantitative variables and were summarized by absolute frequencies and percentages for categorical variables. Normality of data was assessed using the Kolmogorov-Smirnov test. Categorical variables were compared using the chi-square test, or Fisher’s exact test when more than 20% of cells had an expected count of less than five. Quantitative variables were compared with the t test, Mann-Whitney U, ANOVA, or Kruskal-Wallis H tests. Survival was evaluated using Kaplan-Meier analysis. Multivariable regression modeling was used to determine the determinants of patients’ outcome. For the statistical analysis, SPSS version 16.0 for Windows (SPSS Inc., Chicago, IL) was used; p-values of 0.05 or less were considered statistically significant.
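The Kaplan-Meier analysis named in the statistical methods can be sketched with the Python lifelines library. The durations and event flags below are hypothetical stand-ins (the paper does not publish patient-level data), so this shows the technique only:

    from lifelines import KaplanMeierFitter

    # Hypothetical follow-up times (years) until delayed complication or censoring
    durations = [0.5, 2.0, 3.5, 5.0, 6.0, 7.5, 8.0, 9.0, 10.0, 12.0, 15.0]
    events = [0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0]  # 1 = delayed complication observed

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=events, label="complication-free survival")
    # Estimated survival probability at 5 and 10 years
    print(kmf.predict([5, 10]))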
RESULTS
Of the 22 patients enrolled, 21 underwent surgery and one improved with drug management, so no surgery was performed. Therefore, the statistical analysis was focused on the 21 patients undergoing surgery. The mean age of the patients was 11.12±5.65 years, ranging from 3-20 years. In terms of gender distribution, nine (42.9%) were male and 12 (57.1%) female. Mean time under treatment was 4.45±1.80 years. The most common indication for surgery was anemia in 100%, followed by intestinal bleeding in 33.3%, failure to thrive in 14.3%, recurrent defecations in 9.5%, and severe abdominal colic in 14.3% (Table 1).

TABLE 1. Baseline characteristics of patients
  Mean age, year: 11.12±5.65
  Male gender: 9 (42.9)
  Average time under medication, year: 4.45±1.80
  Surgery indications: Anemia 21 (100); Bleeding 7 (33.3); Failure to thrive 3 (14.3); Repeated defecation 2 (9.5); Severe colic pains 3 (14.3); History of liver transplantation 2 (9.5)
  Type of surgery: Coloproctectomy, ileoanostomy, GP, ileostomy 19 (90.5); Colectomy, Hartmann, ileostomy 2 (9.5)

In total, 90.5% underwent standard total coloproctectomy with endorectal graft ileoanal anastomosis and an ileal loop ileostomy, which was closed two years after surgery. In the remaining two patients (9.5%), the procedure was limited to total colectomy, Hartmann's procedure, and terminal ileostomy. In terms of early surgical complications, intestinal perforation occurred in 4.8%, peritonitis in 4.8%, wound infection in 23.8%, pelvic abscesses in 14.3%, deep vein thrombosis in 4.8%, intestinal obstruction in 9.5%, pancreatitis in 9.5%, and anal fissure in 4.8%. Accordingly, the overall rate of early complications after surgery was 47.6%. Laparotomy was required in 23.8% to manage early surgical complications. The mean duration of follow-up was 6.79±4.24 years, ranging from six months to 15 years. During this period, improvement in gastrointestinal function was reported in 100% of patients. However, in terms of long-term surgical complications, 23.8% had delayed fistulas, and fecal incontinence was reported in 4.8%. Overall, the rate of delayed complications during follow-up was 28.6%. Accordingly, long-term complication-free survival rates at 5 and 10 years after surgery were 92.3% and 56.4%, respectively (Figure 1). Comparison of pre- and postoperative characteristics between the groups with and without post-surgical complications (Table 2) showed a significantly shorter average time under medication and a higher rate of intestinal bleeding as the indication for surgery in the group with postoperative complications. Thus, among the early features, lack of prior drug treatment and bleeding as an indication for surgery were two predictors of long-term surgical complications.

FIGURE 1. Long-term complication-free survival chart in children with IBD

TABLE 2. Baseline characteristics of the groups with and without complications (with event vs without event; p)
  Mean age, year: 9.16±4.66 vs 11.90±5.95; p=0.329
  Male gender: 3 (50.0) vs 6 (40.0); p=0.676
  Average time under medication, year: 3.25±1.75 vs 4.93±1.63; p=0.049
  Surgery indications: Anemia 6 (100) vs 15 (100), p=1.000; Bleeding 4 (66.7) vs 3 (20.0), p=0.040; Failure to thrive 2 (33.3) vs 1 (6.7), p=0.115; Repeated defecation 0 (0.0) vs 2 (13.3), p=0.999; Severe colic pains 0 (0.0) vs 3 (20.0), p=0.526; History of liver transplantation 0 (0.0) vs 2 (13.3), p=0.999
  Type of surgery (p=0.999): Coloproctectomy, ileoanostomy, ileostomy 6 (100) vs 13 (86.7); Colectomy, Hartmann, ileostomy 0 (0.0) vs 2 (13.3)
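The bleeding comparison in Table 2 can be checked from the reported counts. A chi-square test without continuity correction on the reconstructed 2x2 table reproduces the reported p=0.040; note that two of the four expected cell counts fall below five, so by the paper's own stated rule Fisher's exact test (p≈0.12 for the same table) would arguably have applied:

    from scipy.stats import chi2_contingency

    # Rows: bleeding as indication (yes/no); columns: with/without long-term complication
    table = [[4, 3],
             [2, 12]]
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(round(chi2, 2), round(p, 3))  # 4.2, 0.04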
CONCLUSION
Standard surgery in the treatment of IBD in children with a surgical indication is associated with a favorable outcome, although short- and long-term complications are also predictable. Among the early features, lack of prior drug treatment and bleeding were two predictors of long-term surgical complications. Overall, long-term complication-free survival at 5 and 10 years after surgery was estimated at 92.3% and 56.4%, respectively.
[ "Statistical analysis" ]
[ "The results were presented as mean ± standard deviation (SD) for quantitative\nvariables and were summarized by absolute frequencies and percentages for\ncategorical variables. Normality of data was analyzed using the\nKolmogorov-Smirnoff test. Categorical variables were compared using chi-square\ntest or Fisher’s exact test when more than 20% of cells with expected count of\nless than five were observed. Quantitative variables were also compared with t\ntest, Mann U, ANOVA or Kruskal-Wallis H tests. The survival was evaluated using\nthe Kaplan-Mayer survival analysis. The multivariable regression modeling was\nused to determine the determinants of patients’ outcome. For the statistical\nanalysis, the statistical software SPSS version 16.0 for windows (SPSS Inc.,\nChicago, IL) was used; p-values of 0.05 or less were considered statistically\nsignificant." ]
[ null ]
[ "INTRODUCTION", "METHODS", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Inflammatory bowel disease (IBD) consists of two distinct but related conditions of\nchronic and recurrent inflammatory disorders in the gastrointestinal tract,\nincluding Crohn’s disease and ulcerative colitis. The first is characterized by\npatchy transmural inflammation with small bowel involvement as well as colon with\ndifferent and varied clinical manifestations\n3\n. In contrast, ulcerative colitis is classically manifested by rectosigmoid\nmucosal inflammation\n14\n\n,\n\n15\n. About a quarter of IBD cases occur in childhood and adolescence\n9\n\n,\n\n13\n. In this respect, the incidence of Crohn’s disease in children is estimated\nat about 500.000 cases per year\n2\n. The average age of the disease in children is 12 years and only 5% of cases\nare under five\n1\n.\nTreatment is mainly attempted to control the disease and relieve its symptoms through\nthe use of medication and medical methods, but in some cases, such as exacerbation\nof pain, examination of the causes of rectal bleeding and anal pain, referral to the\nsurgical center is necessary\n5\n\n,\n\n11\n. However, consideration should be given to the adverse effects and\nconsequences of surgery, especially in the age group of children; as such\ncomplications in childhood can have a significant impact on their quality of life as\nwell as their psychological aspects\n6\n\n,\n\n16\n.\nThe objective of this study was to evaluate surgical outcomes in children with IBD in\norder to consider the need for surgical management and management of surgical\ncomplication by evaluating outcomes in 1, 5, and 10 years after surgery. ", "This study was a retrospective cohort. All children with IBD with surgical indication\nadmitted to referral children hospital in Tehran in 2019 were included. Patients\nwith incomplete data in the records, especially the lack of one-year follow-up\ninformation on surgical outcomes, were excluded. After designing the checklist, a\ncomplete overview of children with IBD with surgical indication was performed and\ninformation on demographic characteristics (gender and age), surgical indications,\nduration of medical and drug treatment, type of surgical procedure, complications\nfollowing surgery and methods of improving and controlling these complications as\nwell as the consequences of long-term after surgery in these patients were extracted\nand finalized. Researchers at all stages of their research were committed to the\nHelsinki Treaty, and the information of the participants was used without disclosing\ntheir identities. Individuals’ data were coded so that their names would not be\nused. All cases were monitored and approved by Shahid Beheshti University of Medical\nSciences, Tehran, Iran.\n Statistical analysis The results were presented as mean ± standard deviation (SD) for quantitative\nvariables and were summarized by absolute frequencies and percentages for\ncategorical variables. Normality of data was analyzed using the\nKolmogorov-Smirnoff test. Categorical variables were compared using chi-square\ntest or Fisher’s exact test when more than 20% of cells with expected count of\nless than five were observed. Quantitative variables were also compared with t\ntest, Mann U, ANOVA or Kruskal-Wallis H tests. The survival was evaluated using\nthe Kaplan-Mayer survival analysis. The multivariable regression modeling was\nused to determine the determinants of patients’ outcome. 
For the statistical\nanalysis, the statistical software SPSS version 16.0 for windows (SPSS Inc.,\nChicago, IL) was used; p-values of 0.05 or less were considered statistically\nsignificant.\nThe results were presented as mean ± standard deviation (SD) for quantitative\nvariables and were summarized by absolute frequencies and percentages for\ncategorical variables. Normality of data was analyzed using the\nKolmogorov-Smirnoff test. Categorical variables were compared using chi-square\ntest or Fisher’s exact test when more than 20% of cells with expected count of\nless than five were observed. Quantitative variables were also compared with t\ntest, Mann U, ANOVA or Kruskal-Wallis H tests. The survival was evaluated using\nthe Kaplan-Mayer survival analysis. The multivariable regression modeling was\nused to determine the determinants of patients’ outcome. For the statistical\nanalysis, the statistical software SPSS version 16.0 for windows (SPSS Inc.,\nChicago, IL) was used; p-values of 0.05 or less were considered statistically\nsignificant.", "The results were presented as mean ± standard deviation (SD) for quantitative\nvariables and were summarized by absolute frequencies and percentages for\ncategorical variables. Normality of data was analyzed using the\nKolmogorov-Smirnoff test. Categorical variables were compared using chi-square\ntest or Fisher’s exact test when more than 20% of cells with expected count of\nless than five were observed. Quantitative variables were also compared with t\ntest, Mann U, ANOVA or Kruskal-Wallis H tests. The survival was evaluated using\nthe Kaplan-Mayer survival analysis. The multivariable regression modeling was\nused to determine the determinants of patients’ outcome. For the statistical\nanalysis, the statistical software SPSS version 16.0 for windows (SPSS Inc.,\nChicago, IL) was used; p-values of 0.05 or less were considered statistically\nsignificant.", "Of the 22 patients enrolled, 21 underwent surgery and one improved with drug\nmanagement and no surgical indication was done. Therefore, the statistical analysis\nof data was focused on 21 patients undergoing surgery. The mean age of the patients\nwas 11.12±5.65 years ranged from 3-20 years. In terms of gender distribution, nine\n(42.9%) were male and 12 (57.1%) female. Mean time to treatment was 4.45±1.80 years.\nThe most common indication for surgery was anemia in 100%, followed by intestinal\nbleeding in 33.3%, failure to thrive in 14.3%, recurrent defecations in 9.5% and\nsevere abdominal colic in 14.3% (Table 1). \n\nTABLE 1Baseline characteristics of patientsMean age, year11.12 ± 5.65Male gender9 (42.9)Average time under medication, year4.45±1.80Surgery indications\nAnemia 21 (100)Bleeding 7 (33.3)Failure to thrive 3 (14.3)Repeated defecation 2 (9.5)Severe colic pains 3 (14.3)History of liver transplantation 2 (9.5)Type of surgery \nColoproctectomy, ileoanostomy, GP, ileostomy19 (90.5)Colectomy, Hartmann, Ileostomy2 (9.5)\n\nIn total 90.5% underwent standard total coloprotectomy with endorectal graft ileoanal\nanastomosis with ileal lobe ileostomy that was closed two years after surgery. In\nthe remaining two patients (9.5%), the procedure was limited to total colectomy,\nHartmann’s surgery and terminal ileostomy. In terms of early surgical complications,\nintestinal perforation occurred in 4.8%, peritonitis in 4.8%, wound infection in\n23.8%, pelvic abscesses in 14.3%, deep vein thrombosis in 4.8%, intestinal\nobstruction in 9.5%, pancreatitis in 9.5% and anal fissure in 4.8%. 
Accordingly, the\nrate of early complications after surgery was 47.6% in total. Laparotomy was\nrequired in 23.8% to improve early surgical complications.\nThe mean duration of follow-up for patients was 6.79±4.24 years, ranging from six\nmonths to 15 years. During this period, improvement in gastrointestinal function was\nreported in 100% of patients. However, in terms of long-term surgical complications,\n23.8% had delayed fistulas and fecal incontinence was also reported in 4.8%. Also,\nthe rate of delayed complications during follow up was 28.6%. Accordingly, long-term\nfree-complication survival rate during 5 and 10 years after surgery was 92.3% and\n56.4%, respectively (Figure 1).\nComparison between pre- and postoperative characteristics in the two groups with and\nwithout post-surgical complications (Table 2)\nshowed significantly shorter average time under medication and higher rate of\nintestinal bleeding as the indication for surgery in the group with postoperative\ncomplications. \nThus, among the early features, lack of prior drug treatment and bleeding as an\nindication for surgery were two predictors of long-term surgical complications. \n\nFIGURE 1Long-term complication-free survival chart in children with\nIBD\n\n\nTABLE 2Comparing baseline characteristics between the groups with and\nwithout complicationsItem With eventWithout eventpMean age, year9.16±4.6611.90±5.950.329Male gender3 (50.0)6 (40.0)0.676Average time under medication, year3.25±1.754.93±1.630.049Surgery indications\n\n\nAnemia 6 (100)15 (100)1.000Bleeding 4 (66.7)3 (20.0)0.040Failure to thrive 2 (33.3)1 (6.7)0.115Repeated defecation 0 (0.0)2 (13.3)0.999Severe colic pains 0 (0.0)3 (20.0)0.526History of liver transplantation 0 (0.0)2 (13.3)0.999Type of surgery \n\n0.999Coloproctectomy, ileoanostomy, ileostomy6 (100)13 (86.7)\nColectomy, Hartmann, Ileostomy0 (0.0)2 (13.3)\n\n", "Although in many patients, especially children with IBD, disease control is possible\nthrough medical procedures and is associated with acceptable therapeutic outcomes,\nbut in some cases, particularly in the context of anemia, gastrointestinal bleeding\nor severe manifestations of inactivity, surgical interventions are indicated. In\nthis regard, various techniques such as total coloprotectomy with endorectal graft\nileoanal anastomosis with ileostomy loop are commonly used. So far, no comprehensive\nstudy has been published on the consequences of surgery in children with IBD in\nIran, and this study was performed for this purpose. An overview of the results of\nour study showed that IBD was reported in children with surgical indication equally\nin both boys and girls, with a mean age of 11.12±5.65 years. In terms of indications\nfor surgery, anemia was reported as the most important indication in all patients\nevaluated, whereas other indications included hemorrhage, failure to thrive,\nrecurrent defecations, and severe colic, respectively. The dominant surgical\ntechnique included total coloproctectomy with endorectal graft ileoanal anastomosis\nwith ileal loop anesthesia or total colectomy, Hartmann’s operation, and terminal\nileostomy. In terms of early and in-hospital complications, the most common\ncomplications were wound infection in 23.8% of patients and pelvic abscesses in\n14.3% of patients. Interestingly, in 23.8% of patients, laparotomy was indicated to\ncontrol early surgical complications. 
In long-term follow-up, although remission in\nalmost all patients undergoing surgical treatment was achieved, long-term surgical\ncomplications (including fistula or fecal incontinence) were reported in 28.6% of\npatients with two main predictors of not using primary drug therapy and intestinal\nhemorrhage as surgical indication. Finally, it was stated that the long-term\ncomplication-free survival within 5 and 10 years after surgery was estimated to be\n92.3% and 56.4%, respectively, and the majority of delayed complications occurred\nafter five years of surgery. In fact, it seems that IBD surgical treatment in\nchildren is fundamentally associated with a high success rate. However, short- and\nlong-term complications are predictable in about a quarter of patients, which can be\nimproved and controlled with appropriate measures. \nDifferent reports have been released regarding the prevalence of complications of\nIBD-related surgery in children, and even in some reports, the number is similar to\nthat in colon cancer. In some studies, the complication rate after surgery was 18%\nand the need for reoperation was 7.3%\n8\n. Overall, the rate of recurrence rates among children under surgery was 50%,\n73%, and 77% at 1, 5, and 10 years, respectively\n4\n. Therefore, it seems that our center has been largely successful in the\nsurgical treatment and control of its complications and has been associated with\ncomplete improvement in patients, although short and long-term effects have also\nbeen of course controllable. \nRegarding the epidemiological aspects of IBD among children, our children appear to\nbe within the prescribed gender, age, and clinical manifestations of other societies\nand reports. In international reports, 25% of IBD cases occur before the age of 20\nyears, while among children with IBD, 4% occur under the age of five and 18% under\n10, and therefore most of the adolescent involvement will be in early youth\n16\n. In the present study, 13.6% of patients were under five years and 45.5%\nwere under 10. In terms of clinical manifestations, the most prominent\nmanifestations include anemia, developmental disorder, perianal diseases and some\nextraintestinal complications such as dermatological disorders, arthritis,\nosteopenia, autoimmune hepatitis, ophthalmic complications, nephrolithiasis,\npancreatitis and thromboembolism. Our study also described anemia and developmental\ndisorder at the top of the disease symptoms. The results of our study appear to be\nconsistent with previous studies in terms of therapeutic outcomes of IBD. In a study\nby El-Baba et al\n7\n in the United States, surgical indications included failure in medical\ntreatment, complete or partial bowel obstruction, growth retardation, intestinal\nperforation, and abscess or fistula. In their research, in 47% of patients, complete\nremission was seen at one-year follow-up, but 22% had recurrence, with the majority\nof patients undergoing controlled medical treatment and two requiring reoperation.\nIn 2012, in a study by Laituri et al\n12\n from five patients who underwent surgery three had obstruction or stenosis\nand two had perforation. Knod et al.\n11\n related a significant decrease in the frequency of defecation after surgery. \nFew studies have been performed to identify the predictors of surgical complications\nand, in our study, the history of medical treatment and bleeding as a surgical\nindication were identified as two predictors of surgical poor outcome. 
In a study by\nDukleska et al.\n6\n overweight and obesity along with the use of steroids in preoperative\ntreatment increased the likelihood of surgical complications. Overall, considering\nthe importance of predicting surgical complications, especially in the long-term,\nresearches on the identification of postoperative complications are essential. ", "Standard surgery in the treatment of IBD in children with surgical indication is\nassociated with favorable outcome, although short- and long-term complications are\nalso predictable. Among the early features, lack of prior drug treatment and\nbleeding were two predictors of long-term surgical complications. Overall, long-term\ncomplication-free survival within 5 and 10 years after surgery was estimated to be\n92.3% and 56.4%, respectively. " ]
[ "intro", "methods", null, "results", "discussion", "conclusions" ]
[ "Children", "Inflammatory bowel disease", "Surgery", "Crianças", "Doença inflamatória intestinal", "Cirurgia" ]
INTRODUCTION: Inflammatory bowel disease (IBD) consists of two distinct but related conditions of chronic and recurrent inflammatory disorders of the gastrointestinal tract: Crohn’s disease and ulcerative colitis. The first is characterized by patchy transmural inflammation with small bowel as well as colonic involvement and varied clinical manifestations 3 . In contrast, ulcerative colitis classically manifests as rectosigmoid mucosal inflammation 14 , 15 . About a quarter of IBD cases occur in childhood and adolescence 9 , 13 . In this respect, the incidence of Crohn’s disease in children is estimated at about 500,000 cases per year 2 . The average age of onset in children is 12 years, and only 5% of cases are under five 1 . Treatment mainly aims to control the disease and relieve its symptoms through medication and medical methods, but in some cases, such as exacerbation of pain or investigation of the causes of rectal bleeding and anal pain, referral to a surgical center is necessary 5 , 11 . However, consideration should be given to the adverse effects and consequences of surgery, especially in children, as such complications in childhood can have a significant impact on quality of life as well as psychological wellbeing 6 , 16 . The objective of this study was to evaluate surgical outcomes in children with IBD in order to assess the need for surgical management and the management of surgical complications by evaluating outcomes at 1, 5, and 10 years after surgery. METHODS: This study was a retrospective cohort. All children with IBD and a surgical indication admitted to a referral children's hospital in Tehran in 2019 were included. Patients with incomplete data in the records, especially those lacking one-year follow-up information on surgical outcomes, were excluded. After designing the checklist, a complete review of children with IBD with a surgical indication was performed, and information on demographic characteristics (gender and age), surgical indications, duration of medical and drug treatment, type of surgical procedure, complications following surgery and methods of improving and controlling these complications, as well as long-term consequences after surgery, was extracted and finalized. Researchers at all stages of the research adhered to the Declaration of Helsinki, and participants’ information was used without disclosing their identities. Individuals’ data were coded so that their names would not be used. All cases were monitored and approved by Shahid Beheshti University of Medical Sciences, Tehran, Iran. Statistical analysis: The results were presented as mean ± standard deviation (SD) for quantitative variables and were summarized by absolute frequencies and percentages for categorical variables. Normality of data was assessed using the Kolmogorov-Smirnov test. Categorical variables were compared using the chi-square test, or Fisher’s exact test when more than 20% of cells had an expected count of less than five. Quantitative variables were compared with the t test, Mann-Whitney U, ANOVA, or Kruskal-Wallis H tests. Survival was evaluated using Kaplan-Meier analysis. Multivariable regression modeling was used to determine the determinants of patients’ outcome. For the statistical analysis, SPSS version 16.0 for Windows (SPSS Inc., Chicago, IL) was used; p-values of 0.05 or less were considered statistically significant. RESULTS: Of the 22 patients enrolled, 21 underwent surgery and one improved with drug management, so no surgery was performed. Therefore, the statistical analysis was focused on the 21 patients undergoing surgery. The mean age of the patients was 11.12±5.65 years, ranging from 3-20 years. In terms of gender distribution, nine (42.9%) were male and 12 (57.1%) female. Mean time under treatment was 4.45±1.80 years. The most common indication for surgery was anemia in 100%, followed by intestinal bleeding in 33.3%, failure to thrive in 14.3%, recurrent defecations in 9.5%, and severe abdominal colic in 14.3% (Table 1).

TABLE 1. Baseline characteristics of patients
  Mean age, year: 11.12±5.65
  Male gender: 9 (42.9)
  Average time under medication, year: 4.45±1.80
  Surgery indications: Anemia 21 (100); Bleeding 7 (33.3); Failure to thrive 3 (14.3); Repeated defecation 2 (9.5); Severe colic pains 3 (14.3); History of liver transplantation 2 (9.5)
  Type of surgery: Coloproctectomy, ileoanostomy, GP, ileostomy 19 (90.5); Colectomy, Hartmann, ileostomy 2 (9.5)

In total, 90.5% underwent standard total coloproctectomy with endorectal graft ileoanal anastomosis and an ileal loop ileostomy, which was closed two years after surgery. In the remaining two patients (9.5%), the procedure was limited to total colectomy, Hartmann's procedure, and terminal ileostomy. In terms of early surgical complications, intestinal perforation occurred in 4.8%, peritonitis in 4.8%, wound infection in 23.8%, pelvic abscesses in 14.3%, deep vein thrombosis in 4.8%, intestinal obstruction in 9.5%, pancreatitis in 9.5%, and anal fissure in 4.8%. Accordingly, the overall rate of early complications after surgery was 47.6%. Laparotomy was required in 23.8% to manage early surgical complications. The mean duration of follow-up was 6.79±4.24 years, ranging from six months to 15 years. During this period, improvement in gastrointestinal function was reported in 100% of patients. However, in terms of long-term surgical complications, 23.8% had delayed fistulas, and fecal incontinence was reported in 4.8%. Overall, the rate of delayed complications during follow-up was 28.6%. Accordingly, long-term complication-free survival rates at 5 and 10 years after surgery were 92.3% and 56.4%, respectively (Figure 1). Comparison of pre- and postoperative characteristics between the groups with and without post-surgical complications (Table 2) showed a significantly shorter average time under medication and a higher rate of intestinal bleeding as the indication for surgery in the group with postoperative complications. Thus, among the early features, lack of prior drug treatment and bleeding as an indication for surgery were two predictors of long-term surgical complications.

FIGURE 1. Long-term complication-free survival chart in children with IBD

TABLE 2. Baseline characteristics of the groups with and without complications (with event vs without event; p)
  Mean age, year: 9.16±4.66 vs 11.90±5.95; p=0.329
  Male gender: 3 (50.0) vs 6 (40.0); p=0.676
  Average time under medication, year: 3.25±1.75 vs 4.93±1.63; p=0.049
  Surgery indications: Anemia 6 (100) vs 15 (100), p=1.000; Bleeding 4 (66.7) vs 3 (20.0), p=0.040; Failure to thrive 2 (33.3) vs 1 (6.7), p=0.115; Repeated defecation 0 (0.0) vs 2 (13.3), p=0.999; Severe colic pains 0 (0.0) vs 3 (20.0), p=0.526; History of liver transplantation 0 (0.0) vs 2 (13.3), p=0.999
  Type of surgery (p=0.999): Coloproctectomy, ileoanostomy, ileostomy 6 (100) vs 13 (86.7); Colectomy, Hartmann, ileostomy 0 (0.0) vs 2 (13.3)

DISCUSSION: In many patients, especially children with IBD, disease control is possible through medical procedures and is associated with acceptable therapeutic outcomes, but in some cases, particularly in the context of anemia, gastrointestinal bleeding, or severe manifestations of inactivity, surgical intervention is indicated. In this regard, various techniques such as total coloproctectomy with endorectal graft ileoanal anastomosis with a loop ileostomy are commonly used. So far, no comprehensive study has been published on the consequences of surgery in children with IBD in Iran, and this study was performed for this purpose. An overview of the results of our study showed that IBD with a surgical indication was reported equally in boys and girls, with a mean age of 11.12±5.65 years. In terms of indications for surgery, anemia was reported as the most important indication in all patients evaluated, whereas other indications included hemorrhage, failure to thrive, recurrent defecations, and severe colic, respectively. The dominant surgical technique was total coloproctectomy with endorectal graft ileoanal anastomosis and ileal loop ileostomy, or total colectomy, Hartmann’s operation, and terminal ileostomy. In terms of early and in-hospital complications, the most common were wound infection in 23.8% of patients and pelvic abscesses in 14.3%. Interestingly, in 23.8% of patients, laparotomy was indicated to control early surgical complications. In long-term follow-up, although remission was achieved in almost all patients undergoing surgical treatment, long-term surgical complications (including fistula or fecal incontinence) were reported in 28.6% of patients, with two main predictors: absence of prior drug therapy and intestinal hemorrhage as the surgical indication. Finally, long-term complication-free survival at 5 and 10 years after surgery was estimated at 92.3% and 56.4%, respectively, and the majority of delayed complications occurred more than five years after surgery. In fact, it seems that IBD surgical treatment in children is fundamentally associated with a high success rate. However, short- and long-term complications are predictable in about a quarter of patients, and these can be improved and controlled with appropriate measures. Different reports have been released regarding the prevalence of complications of IBD-related surgery in children, and in some reports the figure is similar to that in colon cancer. In some studies, the complication rate after surgery was 18% and the need for reoperation was 7.3% 8 . Overall, the recurrence rate among children undergoing surgery was 50%, 73%, and 77% at 1, 5, and 10 years, respectively 4 . Therefore, it seems that our center has been largely successful in the surgical treatment of IBD and the control of its complications, achieving complete improvement in patients, although short- and long-term effects were of course also controllable. Regarding the epidemiological aspects of IBD among children, our patients appear to fall within the gender, age, and clinical manifestations described in other societies and reports. In international reports, 25% of IBD cases occur before the age of 20 years, while among children with IBD, 4% occur under the age of five and 18% under 10; therefore, most adolescent involvement will be in early youth 16 . In the present study, 13.6% of patients were under five years and 45.5% were under 10. In terms of clinical manifestations, the most prominent include anemia, developmental disorder, perianal diseases, and some extraintestinal complications such as dermatological disorders, arthritis, osteopenia, autoimmune hepatitis, ophthalmic complications, nephrolithiasis, pancreatitis, and thromboembolism. Our study also found anemia and developmental disorder at the top of the disease symptoms. The results of our study appear consistent with previous studies on the therapeutic outcomes of IBD. In a study by El-Baba et al 7 in the United States, surgical indications included failure of medical treatment, complete or partial bowel obstruction, growth retardation, intestinal perforation, and abscess or fistula. In their research, 47% of patients showed complete remission at one-year follow-up, but 22% had recurrence, with the majority managed by controlled medical treatment and two requiring reoperation. In 2012, in a study by Laituri et al 12 , of five patients who underwent surgery, three had obstruction or stenosis and two had perforation. Knod et al. 11 reported a significant decrease in the frequency of defecation after surgery. Few studies have been performed to identify the predictors of surgical complications; in our study, the absence of prior medical treatment and bleeding as a surgical indication were identified as two predictors of poor surgical outcome. In a study by Dukleska et al. 6 , overweight and obesity, along with the use of steroids in preoperative treatment, increased the likelihood of surgical complications. Overall, considering the importance of predicting surgical complications, especially in the long term, research on the identification of postoperative complications is essential. CONCLUSION: Standard surgery in the treatment of IBD in children with a surgical indication is associated with a favorable outcome, although short- and long-term complications are also predictable. Among the early features, lack of prior drug treatment and bleeding were two predictors of long-term surgical complications. Overall, long-term complication-free survival at 5 and 10 years after surgery was estimated at 92.3% and 56.4%, respectively.
Background: Although disease control in children with inflammatory bowel disease (IBD) is often possible through medical management, surgical intervention is indicated in some cases. Methods: This retrospective cohort study was conducted on 21 children with IBD and a surgical indication admitted to a referral children's hospital in Tehran in 2019. Baseline information was collected by reviewing recorded files, and children were followed up to assess surgical outcomes. Results: The rate of early complications after surgery was 47.6%; they included intestinal perforation in 4.8%, peritonitis in 4.8%, wound infection in 23.8%, pelvic abscesses in 14.3%, deep vein thrombosis in 4.8%, intestinal obstruction in 9.5%, pancreatitis in 9.5%, and anal fissure in 4.8%. The mean duration of follow-up was 6.79±4.24 years. The rate of delayed complications during follow-up was 28.6%. Accordingly, long-term complication-free survival rates at 5 and 10 years after surgery were 92.3% and 56.4%, respectively. Among the early features, lack of prior drug treatment and bleeding as an indication for surgery were two predictors of long-term surgical complications. Conclusions: Standard surgery in the treatment of IBD in children with a surgical indication is associated with a favorable outcome, although short- and long-term surgical complications are also predictable.
INTRODUCTION: Inflammatory bowel disease (IBD) consists of two distinct but related conditions of chronic and recurrent inflammatory disorders of the gastrointestinal tract: Crohn’s disease and ulcerative colitis. The first is characterized by patchy transmural inflammation with small bowel as well as colonic involvement and varied clinical manifestations 3 . In contrast, ulcerative colitis classically manifests as rectosigmoid mucosal inflammation 14 , 15 . About a quarter of IBD cases occur in childhood and adolescence 9 , 13 . In this respect, the incidence of Crohn’s disease in children is estimated at about 500,000 cases per year 2 . The average age of onset in children is 12 years, and only 5% of cases are under five 1 . Treatment mainly aims to control the disease and relieve its symptoms through medication and medical methods, but in some cases, such as exacerbation of pain or investigation of the causes of rectal bleeding and anal pain, referral to a surgical center is necessary 5 , 11 . However, consideration should be given to the adverse effects and consequences of surgery, especially in children, as such complications in childhood can have a significant impact on quality of life as well as psychological wellbeing 6 , 16 . The objective of this study was to evaluate surgical outcomes in children with IBD in order to assess the need for surgical management and the management of surgical complications by evaluating outcomes at 1, 5, and 10 years after surgery. CONCLUSION: Standard surgery in the treatment of IBD in children with a surgical indication is associated with a favorable outcome, although short- and long-term complications are also predictable. Among the early features, lack of prior drug treatment and bleeding were two predictors of long-term surgical complications. Overall, long-term complication-free survival at 5 and 10 years after surgery was estimated at 92.3% and 56.4%, respectively.
Background: Although disease control in children with inflammatory bowel disease (IBD) is often possible through medical management, surgical intervention is indicated in some cases. Methods: This retrospective cohort study was conducted on 21 children with IBD and a surgical indication admitted to a referral children's hospital in Tehran in 2019. Baseline information was collected by reviewing recorded files, and children were followed up to assess surgical outcomes. Results: The rate of early complications after surgery was 47.6%; they included intestinal perforation in 4.8%, peritonitis in 4.8%, wound infection in 23.8%, pelvic abscesses in 14.3%, deep vein thrombosis in 4.8%, intestinal obstruction in 9.5%, pancreatitis in 9.5%, and anal fissure in 4.8%. The mean duration of follow-up was 6.79±4.24 years. The rate of delayed complications during follow-up was 28.6%. Accordingly, long-term complication-free survival rates at 5 and 10 years after surgery were 92.3% and 56.4%, respectively. Among the early features, lack of prior drug treatment and bleeding as an indication for surgery were two predictors of long-term surgical complications. Conclusions: Standard surgery in the treatment of IBD in children with a surgical indication is associated with a favorable outcome, although short- and long-term surgical complications are also predictable.
2,771
269
[ 164 ]
6
[ "surgical", "complications", "surgery", "patients", "children", "years", "ibd", "term", "treatment", "long" ]
[ "incidence crohn disease", "respect incidence crohn", "inflammation small bowel", "crohn disease children", "introduction inflammatory bowel" ]
[CONTENT] Children | Inflammatory bowel disease | Surgery | Crianças | Doença inflamatória intestinal | Cirurgia [SUMMARY]
[CONTENT] Children | Inflammatory bowel disease | Surgery | Crianças | Doença inflamatória intestinal | Cirurgia [SUMMARY]
[CONTENT] Children | Inflammatory bowel disease | Surgery | Crianças | Doença inflamatória intestinal | Cirurgia [SUMMARY]
[CONTENT] Children | Inflammatory bowel disease | Surgery | Crianças | Doença inflamatória intestinal | Cirurgia [SUMMARY]
[CONTENT] Children | Inflammatory bowel disease | Surgery | Crianças | Doença inflamatória intestinal | Cirurgia [SUMMARY]
[CONTENT] Children | Inflammatory bowel disease | Surgery | Crianças | Doença inflamatória intestinal | Cirurgia [SUMMARY]
[CONTENT] Child | Female | Humans | Inflammatory Bowel Diseases | Iran | Male | Postoperative Complications | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Child | Female | Humans | Inflammatory Bowel Diseases | Iran | Male | Postoperative Complications | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Child | Female | Humans | Inflammatory Bowel Diseases | Iran | Male | Postoperative Complications | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Child | Female | Humans | Inflammatory Bowel Diseases | Iran | Male | Postoperative Complications | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Child | Female | Humans | Inflammatory Bowel Diseases | Iran | Male | Postoperative Complications | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] Child | Female | Humans | Inflammatory Bowel Diseases | Iran | Male | Postoperative Complications | Retrospective Studies | Treatment Outcome [SUMMARY]
[CONTENT] incidence crohn disease | respect incidence crohn | inflammation small bowel | crohn disease children | introduction inflammatory bowel [SUMMARY]
[CONTENT] incidence crohn disease | respect incidence crohn | inflammation small bowel | crohn disease children | introduction inflammatory bowel [SUMMARY]
[CONTENT] incidence crohn disease | respect incidence crohn | inflammation small bowel | crohn disease children | introduction inflammatory bowel [SUMMARY]
[CONTENT] incidence crohn disease | respect incidence crohn | inflammation small bowel | crohn disease children | introduction inflammatory bowel [SUMMARY]
[CONTENT] incidence crohn disease | respect incidence crohn | inflammation small bowel | crohn disease children | introduction inflammatory bowel [SUMMARY]
[CONTENT] incidence crohn disease | respect incidence crohn | inflammation small bowel | crohn disease children | introduction inflammatory bowel [SUMMARY]
[CONTENT] surgical | complications | surgery | patients | children | years | ibd | term | treatment | long [SUMMARY]
[CONTENT] surgical | complications | surgery | patients | children | years | ibd | term | treatment | long [SUMMARY]
[CONTENT] surgical | complications | surgery | patients | children | years | ibd | term | treatment | long [SUMMARY]
[CONTENT] surgical | complications | surgery | patients | children | years | ibd | term | treatment | long [SUMMARY]
[CONTENT] surgical | complications | surgery | patients | children | years | ibd | term | treatment | long [SUMMARY]
[CONTENT] surgical | complications | surgery | patients | children | years | ibd | term | treatment | long [SUMMARY]
[CONTENT] disease | cases | children | surgical | inflammation | childhood | ulcerative colitis | ulcerative | colitis | inflammatory [SUMMARY]
[CONTENT] variables | test | statistical | analysis | spss | compared | quantitative variables | quantitative | categorical variables | categorical [SUMMARY]
[CONTENT] 100 | surgery | years | complications | table | time | patients | 14 | surgical complications | intestinal [SUMMARY]
[CONTENT] long term | term | long | treatment | complications | surgery | surgical | bleeding predictors long term | associated favorable | bleeding predictors [SUMMARY]
[CONTENT] surgical | surgery | complications | test | variables | patients | term | long term | long | children [SUMMARY]
[CONTENT] surgical | surgery | complications | test | variables | patients | term | long term | long | children [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 21 | Tehran | 2019 ||| [SUMMARY]
[CONTENT] 47.6% | 4.8% | 4.8% | 23.8% | 14.3% | 4.8% | 9.5% | 9.5% | 4.8% ||| 6.79±4.24 years ||| 28.6% ||| 5-10 years | 92.3% | 56.4% ||| two [SUMMARY]
[CONTENT] Standard [SUMMARY]
[CONTENT] ||| ||| 21 | Tehran | 2019 ||| ||| ||| 47.6% | 4.8% | 4.8% | 23.8% | 14.3% | 4.8% | 9.5% | 9.5% | 4.8% ||| 6.79±4.24 years ||| 28.6% ||| 5-10 years | 92.3% | 56.4% ||| two ||| [SUMMARY]
[CONTENT] ||| ||| 21 | Tehran | 2019 ||| ||| ||| 47.6% | 4.8% | 4.8% | 23.8% | 14.3% | 4.8% | 9.5% | 9.5% | 4.8% ||| 6.79±4.24 years ||| 28.6% ||| 5-10 years | 92.3% | 56.4% ||| two ||| [SUMMARY]
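For readers working with these records programmatically, here is a minimal parsing sketch for the `[CONTENT] ... [SUMMARY]` rows above. The layout is inferred from the rows themselves rather than a documented schema, and the treatment of `|||` as a sentence-group separator is an assumption.

```python
# Sketch: parse a "[CONTENT] a | b ||| c [SUMMARY]" row into term groups.
# Layout inferred from the dump; '|||' handling is an assumption.

def parse_content_row(row: str):
    body = row.strip().removeprefix("[CONTENT]").removesuffix("[SUMMARY]").strip()
    groups = [g.strip() for g in body.split("|||")]        # assumed group separator
    return [[t.strip() for t in g.split("|") if t.strip()] for g in groups]

print(parse_content_row(
    "[CONTENT] Children | Inflammatory bowel disease | Surgery [SUMMARY]"))
# -> [['Children', 'Inflammatory bowel disease', 'Surgery']]
```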
The standard of healthcare accreditation standards: a review of empirical research underpinning their development and impact.
22995152
Healthcare accreditation standards are advocated as an important means of improving clinical practice and organisational performance. Standard development agencies have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. The study's purpose was to examine the empirical research that grounds the development methods and application of healthcare accreditation standards.
BACKGROUND
A multi-method strategy was employed over the period March 2010 to August 2011. Five academic health research databases (Medline, Psych INFO, Embase, Social work abstracts, and CINAHL) were interrogated, the websites of 36 agencies associated with the study topic were investigated, and a snowball search was undertaken. Search criteria included accreditation research studies, in English, addressing standards and their impact. Searching in stage 1 initially selected 9386 abstracts. In stage 2, this selection was refined against the inclusion criteria; empirical studies (n = 2111) were identified and refined to a selection of 140 papers with the exclusion of clinical or biomedical and commentary pieces. These were independently reviewed by two researchers and reduced to 13 articles that met the study criteria.
METHODS
The 13 articles were analysed according to four categories: overall findings; standards development; implementation issues; and impact of standards. Studies have only occurred in the acute care setting, predominately in 2003 (n = 5) and 2009 (n = 4), and in the United States (n = 8). A multidisciplinary focus (n = 9) and mixed method approach (n = 11) are common characteristics. Three interventional studies were identified, with the remaining 10 studies having research designs to investigate clinical or organisational impacts. No study directly examined standards development or other issues associated with their progression. Only one study noted implementation issues, identifying several enablers and barriers. Standards were reported to improve organisational efficiency and staff circumstances. However, the impact on clinical quality was mixed, with both improvements and a lack of measurable effects recorded.
RESULTS
Standards are ubiquitous within healthcare and are generally considered to be an important means by which to improve clinical practice and organisational performance. However, there is a lack of robust empirical evidence examining the development, writing, implementation and impacts of healthcare accreditation standards.
CONCLUSION
[ "Accreditation", "Delivery of Health Care", "Empirical Research", "Humans", "Quality Indicators, Health Care" ]
3520756
Background
In health accreditation a standard is “a desired and achievable level of performance against which actual performance is measured” [1]. Standards enable “health service organisations, large and small, to embed practical and effective quality improvement and patient safety initiatives into their daily operations” [2]. External organisational and clinical accreditation standards are considered necessary to promote high quality, reliable and safe products and services [2,3]. There are over 70 national healthcare accreditation agencies worldwide that develop or apply standards, or both, specifically for health services and organisations [4]. The International Society for Quality in Health Care (ISQua) seeks to guide and standardise the development of these agencies and the standards they implement [5]. ISQua advocates that accreditation standards themselves need to meet exacting standards, and has standards for how to develop, write and apply them. ISQua conducts the International Accreditation Program (IAP) for the certification or accreditation of standards against their standards [5]. The International Standards Organisation (ISO), a network of the national standards institutes of 162 countries, is the largest developer and publisher of international standards [6]. Standards from ISO are also applied in international health jurisdictions. In short, healthcare standards, and standards for standards, are ubiquitous. They are advocated to be an important means of improving clinical practice and organisational performance. ISQua, and many national bodies, espouse, and have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members [6-11]. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. What is the basis to ground the standard development methodologies in use? What research demonstrates how standards should be crafted and structured to ensure they are understandable, unambiguous, achievable and reliable in making assessments? What studies have identified the necessary steps to enable standards to be incorporated into everyday practice? Is there evidence to show whether standards improve practice? The purpose of this study was to examine these questions by identifying and analysing the research literature focusing on the development methods and application of healthcare accreditation standards. The analysis is a systematic narrative synthesis of the literature [12]. The intention is to generate new insights and bring transparency to the topic under investigation [13,14]. This type of review is appropriate for this topic for four reasons. First, the review aims to examine a complex initiative applied in diverse contexts [15]. That is, accreditation programs are complex organisational interventions, trying to shape both organisational and clinical conduct, within a multifaceted context in turn shaped by, for example, the healthcare and policy environment. Second, accreditation programs, involving healthcare standards, have been researched in different ways by divergent groups. The analysis method adopted here is intended specifically for interventions researched in a myriad of ways [12]. Third, the approach enables consideration of apparently disparate data generated by research into accreditation standards, as a complex organisational intervention [15]. 
Fourth, the questions being investigated are preliminary questions that need to be asked of this intervention and the approach is designed exactly for this [14,15]. The review differs from previous reviews [16,17] in being specifically focused only on healthcare accreditation standards and not the broader “standards” field. This review is the first to undertake a systematic and detailed narrative synthesis of accreditation standards.
Methods
Selection criteria and search strategy

The selection criteria were: peer-reviewed, publicly available English language empirical research papers on the topic of healthcare accreditation standards. Discussion and commentary, and non-English language papers were excluded. Despite these focused criteria, we recognise that they may capture heterogeneous literature including, possibly, an overlap with work covering other forms of regulation. To counter this potential problem we used a staged search strategy to identify and remove any papers not focused on the study topic. This approach is valid for two reasons. First, there are overlaps between how regulatory strategies are at times discussed in the literature [18-20]. The reviewing of abstracts or the full papers provided a mechanism by which to screen out literature not on the study topic. Second, previous reviews and a preliminary investigation signalled that the empirical research literature available on standards was limited.

A multi-method strategy based on similar review designs was employed [16,21,22]. There were three stages (see Figure 1: literature search, review and selection flow chart). The search was first conducted in March 2010 and updated in August 2011. Citations and abstracts that met the search criteria were downloaded into EndNote X.0.2, a reference management program. Abstracts and, where uncertainty arose, complete papers were reviewed against the selection criteria for inclusion in the review.

The first stage had three steps. First, we selected databases in the health sector. Literature was drawn from five electronic bibliographic databases: Medline, Psych INFO, EMBASE and Social Work databases from 1980, and CINAHL (nursing and allied health literature) from 1982. Second, we identified abstracts focusing on the topic of ‘accreditation’. Third, we selected abstracts using the terms ‘standard’, ‘guideline’, ‘policy’ and ‘legislation’; where appropriate, terms were truncated with the symbol ‘$’ and searched using the ‘Exp’ function to capture the widest publication of papers (for example, guideline$ or polic$). The initial search yielded 9386 abstracts (including duplicates). We reviewed the selection to exclude those not written in English and also to remove duplicates.

In the second stage we refined the collected abstracts. Two researchers independently reviewed the abstracts, selecting papers using two criteria. We selected for empirical research studies, using derivations of phrases such as ‘research’, ‘study’, ‘empirical’ or ‘report’, and ‘method’. Using this strategy the selection was reduced to 2111 articles. This group was further analysed to identify those papers that covered the ‘impacts’ of accreditation standards. At this point we removed papers covering clinical or biomedical issues and also discussion pieces, commentaries or editorials. To supplement the formal search process, two less structured search methods were implemented. We undertook a ‘snowballing’ search, which is a variation on snowball sampling [23]. That is, we examined the assembled manuscripts’ reference lists for additional relevant papers potentially missed in the formal search. In parallel, we investigated the websites of agencies associated with the study topic, that is, reports or papers investigating the evidence base for accreditation or quality standards in the health sector. We searched the ISQua research site; the websites of 31 healthcare accreditation agencies worldwide; the ISO website; and the standards organisations’ websites of a number of countries (Additional file 1: Appendix 1). The application of the stage 2 refinement processes to the collected abstracts yielded 140 articles.

In the third stage, to determine the final selection of papers meeting the study criteria, two experienced researchers independently reviewed the identified 140 papers and discussed their relevance. The focus was the selection of papers that addressed the development methods and application of healthcare accreditation standards. This stage derived 13 articles.

Analysis

The selected papers were analysed by three independent researchers in two ways. First, the characteristics of the studies were noted. For each paper a summary of authors, country, sector, aim, methods, major findings and conclusions, and study quality was compiled. The level of evidence was assessed using the Australian National Health and Medical Research Council (NHMRC) guidelines [24], and study quality was assessed with a tool developed from publicly available checklists [21,25]. Together they enabled examination of study quality, incorporating intervention or aetiology (that is, impact), level of evidence, design and appraisal of quality (Table 1: quality rating assessment criteria, adapted from Cunningham et al. 2011). Second, a narrative analysis of the literature was conducted in line with the study aims.
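The stage 1 term matching and stage 2 refinement amount to a keyword-driven screening pipeline. Below is a minimal sketch of how such a pipeline could look; the field names and keyword stems are illustrative assumptions, not the review's actual tooling (the authors used database queries, EndNote and manual review).

```python
# Sketch of the staged abstract screening described above.
# Record fields and stems are illustrative assumptions only.
import re

STAGE1_STEMS = ("standard", "guideline", "polic", "legislation")  # truncated terms
STAGE2_STEMS = ("research", "study", "empirical", "report", "method")

def hits(text, stems):
    text = text.lower()
    return any(stem in text for stem in stems)

def screen(records):
    # Stage 1: English abstracts on 'accreditation' matching a topic stem.
    stage1 = [r for r in records
              if r["language"] == "en"
              and "accreditation" in r["abstract"].lower()
              and hits(r["abstract"], STAGE1_STEMS)]
    # Remove duplicates on a normalised title key.
    seen, deduped = set(), []
    for r in stage1:
        key = re.sub(r"\W+", " ", r["title"].lower()).strip()
        if key not in seen:
            seen.add(key)
            deduped.append(r)
    # Stage 2: keep likely empirical studies; clinical/biomedical papers
    # and commentaries were then removed by hand in the review itself.
    return [r for r in deduped if hits(r["abstract"], STAGE2_STEMS)]
```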
Results
The 13 papers were synthesised (Table 2: assessment of empirical healthcare standards research). The results are presented under three headings: standards development; implementation issues; and the impact of standards. The papers were examined according to date of publication, country, sector, methodology and focus.

Study details, characteristics and quality

The dates of the studies ranged from 1995 to 2009 inclusive. The majority of studies were published in two years, 2003 [20,26-29] and 2009 [17,18,30,31], with five and four studies, respectively. One study was published in each of the following years: 1995 [32], 2004 [33], 2007 [34] and 2008 [35]. Studies were conducted in six countries. The United States of America (USA) was the setting for the majority of studies (n = 8) [17,18,26,29,31-34]. The remaining five countries each had one study: United Kingdom [35]; Philippines [30]; Australia [20]; South Africa [27]; and Taiwan [28]. The studies were all conducted in the acute sector (n = 13). The majority of studies had a multidisciplinary focus (n = 9) [17,18,20,26-28,32-34], and the practices of nurses [30,35] and managers [29,31] were the individual focus of two studies each. Research projects used mixed methods [20,32-35], employed quantitative methodologies to examine archival databases [17,18,26,28,31] or undertook a questionnaire survey [27,29,30]. Within the mixed methods studies the qualitative tools were questionnaires, surveys, interviews, reviews and evaluations. The quantitative methods covered examination of databases, prospective and retrospective studies, and stratified randomised studies. The study content was categorised according to the focus of the papers, that is, program, clinical or workplace issues. Program issues was the topic that most studies examined, via four different program sub-topics: reviews of programs (n = 5) [18,20,28,29,31]; policy compliance (n = 4) [17,32-34]; program impacts (n = 3) [26,27,30]; and organisational environment (n = 1) [35]. Just five studies had content relating to clinical care [17,18,20,26,34] and one to staff workplace issues [35].

A summary of the intervention or impact (aetiology) assessment, level of evidence classification and quality ratings for the selected literature is presented in Table 3 (bold = interventions as per the NHMRC criteria; non-bold = aetiology as per the NHMRC criteria). Using the NHMRC guidelines, three investigations [27,32,35] were classified as interventions and ten studies [20,26,28-33,36] under the aetiology criteria. In the intervention group, Aiken et al. (2008) was assessed as meeting the fourth level of evidence and all the quality criteria. Salmon et al. (2003) and Stradling et al. (2007) were rated at the second and fourth level of evidence, respectively; each was missing some study details and so was rated at the second level for quality. The studies within the aetiology group were divided between the two top quality levels. Six [26,29,30,32,33,36] were rated as meeting all criteria, and four [28,29,31,37], while missing some information, though not enough to compromise them, were rated on the second tier of quality.

Standards development

No study directly examined standards development or other issues associated with their progression. That is, no empirical study was identified which examined: what is best practice for developing standards; standard development processes; the wording or structure of standards; or what types of standards would have the greatest likelihood of improving practice.

Implementation issues

Only one study examined implementation issues with healthcare accreditation standards [33]. Five factors were noted as assisting implementation: external pressure from legislation and accreditation; the use of technology and self-evaluation as tools to leverage change; organisational culture characteristics; research; and peer education. Conversely, three factors were reported to hinder implementation: lack of external incentives or pressure; organisational policies and culture; and cost and resource constraints [33].

Impact of standards

Twelve of the 13 papers addressed the impact of standards [26-32,35-37]. The impact of the standards on the organisation, clinical quality and staff could be identified.

Impacts of standards on the organisation: The single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group from 38 to 76%, compared with 37 to 38% in the control group [27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay [26], improved management of disclosure of preventable harm [29], and utilisation of patient safety practices [36].

Impacts of standards on clinical quality: Accreditation program standards encompassing trauma care [26], prenatal care [30], postpartum care [37], stroke care [32], breastfeeding [28], pain management [29], and the institution-wide organisation of care [27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to reductions in in-hospital mortality and length of stay [26] and in rates of infections and decubitus ulcers [36], and to improvements in breastfeeding rates [28] and in the proportion of patients receiving relevant tests, medications and admission for stroke [32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care [30], document control [31], and the organisation of care [27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards [36].

Impact on staff: Standards were shown to produce improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attractiveness of the workplace in recruiting and retaining staff [35]. Additionally, the introduction of standards through an accreditation program resulted in improved perceptions of teamwork and participation in decision making [27], and in compliance with tobacco control [32].
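The descriptive counts in this subsection (studies per year, per country, per focus) are simple tabulations over the study table; a minimal sketch with hypothetical stand-in records for Table 2's rows:

```python
# Sketch: reproduce descriptive tallies from a study table.
# These records are hypothetical stand-ins, not Table 2's actual rows.
from collections import Counter

studies = [
    {"year": 2003, "country": "USA", "focus": "multidisciplinary"},
    {"year": 2009, "country": "Philippines", "focus": "nurses"},
    {"year": 1995, "country": "USA", "focus": "multidisciplinary"},
]

for field in ("year", "country", "focus"):
    print(field, Counter(s[field] for s in studies).most_common())
```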
Conclusion
The challenge is to translate practical experiences and discussions into rigorous empirical evidence. We lack knowledge of how to strengthen the development of standards and the application of them based on sound critically peer-reviewed evidence. The process to develop standards essentially needs to be transformed from learnt experience to a verifiable, evidence-based methodology. Evidence-based mechanisms by which standards are developed, promulgated, reinforced, audited and evaluated are needed. Linking the writing of standards, including the wording, structure, design, focus and content, to improved outcomes requires further rigorous investigation. Factors that promote or inhibit implementation of standards, and the impacts that result, need detailed examination and analysis. This review has revealed some significant gaps in our knowledge in these areas, and, in doing so, extended previous reviews in the healthcare accreditation field. As to the limitations of our study, while we have endeavoured to be systematic, we may have overlooked some important literature. A further limitation is that papers or reports needed to be publicly available and in English to be included in the results.
[ "Background", "Selection criteria and search strategy", "Analysis", "Study details, characteristics and quality", "Standards development", "Implementation issues", "Impact of standards", "Impacts of standards on the organisation", "Impacts of standards on clinical quality", "Impact on staff", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "In health accreditation a standard is “a desired and achievable level of performance against which actual performance is measured”\n[1]. Standards enable “health service organisations, large and small, to embed practical and effective quality improvement and patient safety initiatives into their daily operations”\n[2]. External organisational and clinical accreditation standards are considered necessary to promote high quality, reliable and safe products and services\n[2,3]. There are over 70 national healthcare accreditation agencies worldwide that develop or apply standards, or both, specifically for health services and organisations\n[4].\nThe International Society for Quality in Health Care (ISQua) seeks to guide and standardise the development of these agencies and the standards they implement\n[5]. ISQua advocates that accreditation standards themselves need to meet exacting standards, and has standards for how to develop, write and apply them. ISQua conducts the International Accreditation Program (IAP) for the certification or accreditation of standards against their standards\n[5]. The International Standards Organisation (ISO), a network of the national standards institutes of 162 countries, is the largest developer and publisher of international standards\n[6]. Standards from ISO are also applied in international health jurisdictions.\nIn short, healthcare standards, and standards for standards, are ubiquitous. They are advocated to be an important means of improving clinical practice and organisational performance. ISQua, and many national bodies, espouse, and have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members\n[6-11]. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. What is the basis to ground the standard development methodologies in use? What research demonstrates how standards should be crafted and structured to ensure they are understandable, unambiguous, achievable and reliable in making assessments? What studies have identified the necessary steps to enable standards to be incorporated into everyday practice? Is there evidence to show whether standards improve practice? The purpose of this study was to examine these questions by identifying and analysing the research literature focusing on the development methods and application of healthcare accreditation standards.\nThe analysis is a systematic narrative synthesis of the literature\n[12]. The intention is to generate new insights and bring transparency to the topic under investigation\n[13,14]. This type of review is appropriate for this topic for four reasons. First, the review aims to examine a complex initiative applied in diverse contexts\n[15]. That is, accreditation programs are complex organisational interventions, trying to shape both organisational and clinical conduct, within a multifaceted context in turn shaped by, for example, the healthcare and policy environment. Second, accreditation programs, involving healthcare standards, have been researched in different ways by divergent groups. The analysis method adopted here is intended specifically for interventions researched in a myriad of ways\n[12]. Third, the approach enables consideration of apparently disparate data generated by research into accreditation standards, as a complex organisational intervention\n[15]. 
Fourth, the questions being investigated are preliminary questions that need to be asked of this intervention and the approach is designed exactly for this\n[14,15]. The review differs from previous reviews\n[16,17] in being specifically focused only on healthcare accreditation standards and not the broader “standards” field. This review is the first to undertake a systematic and detailed narrative synthesis of accreditation standards.", "The selection criteria were: peer-reviewed, publicly available English language empirical research papers on the topic of healthcare accreditation standards. Discussion and commentary, and non-English language papers were excluded. Despite these focused criteria, we recognise that they may capture heterogeneous literature including, possibly, an overlap with work covering other forms of regulation. To counter this potential problem we used a staged search strategy to identify and remove any papers not focused on the study topic. This approach is valid for two reasons. First, there are overlaps between how regulatory strategies are at times discussed in the literature\n[18-20]. The reviewing of abstracts or the full papers provided a mechanism by which to screen out literature not on the study topic. Second, previous reviews and a preliminary investigation signalled that empirical research literature available on standards was limited.\nA multi-method strategy based on similar review designs was employed\n[16,21,22]. There were three stages (see Figure\n1). The search was first conducted in March 2010 and updated in August 2011. Citations and abstracts that met the search criteria were downloaded into Endnote X.0.2, a reference management program. Abstracts and, where uncertainty arose, complete papers, were reviewed against the selection criteria for inclusion in the review. \nLiterature search, review and selection flow chart.\nThe first stage had three steps. First, we selected databases in the health sector. Literature was drawn from five electronic bibliographic databases: Medline, Psych INFO, EMBASE and Social Work databases from 1980, and CINAHL (nursing and allied health literature) from 1982. Second, we identified abstracts focusing on the topic of ‘accreditation’. Third, we selected abstracts using the terms ‘standard’, ‘guideline’, ‘policy’ and ‘legislation’; where appropriate, terms were truncated with the symbol ‘$’ and searched using the ‘Exp’ function to capture widest publication of papers (for example, guideline$ or polic$). The initial search yielded 9386 abstracts (including duplicates). We reviewed the selection to exclude those not written in English and also to remove duplicates.\nIn the second stage we refined the collected abstracts. Two researchers independently reviewed the abstracts, selecting papers using two criteria. We selected for empirical research studies, using derivations of phrases such as ‘research’, ‘study’, ‘empirical’ or ‘report’, and ‘method’. Using this strategy the selection was reduced to 2111 articles. This group was further analysed to identify those papers that covered ‘impacts’ of accreditation standards. At this point we removed papers covering clinical or biomedical issues and also discussion pieces, commentaries or editorials. To supplement the formal search process, two less structured search methods were implemented. We undertook a 'snowballing' search, which is a variation on snowballing sampling\n[23]. 
That is, we examined the assembled manuscripts reference lists for additional relevant papers potentially missed in the formal search. In parallel, an investigation of websites of agencies associated with the study topic, that is, reports or papers investigating the evidence base for accreditation or quality standards in the health sector, was conducted. We searched: the ISQua research site; the websites of 31 healthcare accreditation agencies worldwide; ISO website; and standards organisations’ websites of a number of countries (Additional file\n1: Appendix 1). The application of the stage 2 refinement processes to the collected abstracts yielded 140 articles.\nIn the third stage, to determine the final selection of papers meeting the study criteria, two experienced researchers independently reviewed the identified 140 papers and discussed their relevance. The focus was the selection of papers that addressed development methods and application of healthcare accreditation standards. This stage derived 13 articles.", "The selected papers were analysed by three independent researchers in two ways. First, the characteristics of the studies were noted. For each paper a summary of authors, country, sector, aim, methods, major findings and conclusions, and study quality was compiled. The level of evidence was assessed using Australian National Health and Medical Research Council guidelines\n[24] and study quality by an assessment tool developed from publically available checklists\n[21,25]. Together they enabled examination of study quality, incorporating intervention or aetiology (that is, impact), level of evidence, design and appraisal of quality (Table\n1). Second, a narrative analysis of the literature was conducted in line with the study aims. \nQuality rating assessment criteria*\n*adapted from Cunningham et al. 2011.", "The dates of the studies ranged from 1995 to 2009 inclusive. The majority of studies were published in two years, 2003\n[20,26-29] and 2009\n[17,18,30,31], with five and four studies, respectively. One study was published in each of the following years: 1995\n[32], 2004\n[33], 2007\n[34] and 2008\n[35]. Studies were conducted in six countries. The United States of America (USA) was the setting for the majority of studies (n = 8)\n[17,18,26,29,31-34]. The remaining five countries all had one study: United Kingdom\n[35]; Philippines\n[30]; Australia\n[20]; South Africa\n[27]; and Taiwan\n[28]. The studies were all conducted in the acute sector (n = 13). The majority of studies had a multidisciplinary focus (n = 9)\n[17,18,20,26-28,32-34] and the practices of nurses\n[30,35] and managers\n[29,31] were the individual focus of two studies each. Research projects used mixed methods\n[20,32-35], employed quantitative methodologies to examine archival databases\n[17,18,26,28,31] or undertook a questionnaire survey\n[27,29,30]. Within the mixed methods studies the qualitative tools were questionnaires, surveys, interviews, reviews and evaluations. The quantitative methods covered examination of databases, prospective and retrospective studies and stratified randomised studies. The study content was categorised according to the focus of the papers, that is, program, clinical or workplace issues. Program issues was the topic that most studies examined via four different program sub-topics: reviews of programs (n = 5)\n[18,20,28,29,31]; policy compliance (n = 4)\n[17,32-34]; program impacts (n = 3)\n[26,27,30]; and organisational environment (n = 1)\n[35]. 
Just five studies had content relating to clinical care\n[17,18,20,26,34] and one on staff workplace issues\n[35].\nA summary of the intervention or impact (aetiology) assessment, level of evidence classification and quality ratings for the selected literature is represented in Table\n3. Using the NHMRC guidelines, three investigations\n[27,32,35] were classified as interventions and ten studies\n[20,26,28-33,36] under the aetiology criteria. In the intervention group, Aiken et al. (2008), was assessed as meeting the fourth level of evidence and all the quality criteria. While Salmon et al. (2003) and Stradling et al. (2007) were rated at the second and fourth level of evidence rating, respectively, each were missing some study details and so were rated at the second level for quality ratings. The studies within the aetiology group were divided between the two top quality levels. Six\n[26,29,30,32,33,36] were rated as meeting all criteria, and four\n[28,29,31,37], while missing some but not significant information to compromise them, were rated on the second tier of quality. \nSummary of the intervention or aetiology assessment, level of evidence classification and quality ratings\nBold = Interventions as per the NHMRC criteria; Non-bold = Aetiology as per the NHMRC criteria.", "No study directly examined standards development or other issues associated with their progression. That is, no empirical study was identified which examined: what is best practice for developing standards; standard development processes; the wording or structure of standards; or what types of standards would have the greatest likelihood of improving practice.", "Only one study examined implementation issues with healthcare accreditation standards\n[33]. Five factors were noted as assisting implementation: external pressure from legislation and accreditation; the use of technology and self-evaluation as tools to leverage change; organisational culture characteristics; research; and peer education. Conversely, three factors were reported to hinder implementation: lack of external incentives or pressure; organisational policies and culture; and cost and resource constraints\n[33].", "Twelve of the 13 papers addressed the impact of standards\n[26-32,35-37]. The impact of the standards on the organisation, clinical quality and staff could be identified.\n Impacts of standards on the organisation The single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].\n Impacts of standards on clinical quality Accreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].\n Impact on staff Standards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. Additionally, the introduction of standards, through an accreditation program, resulted in the improved perceptions of teamwork and participation in decision making\n[27], and compliance with tobacco control\n[32].", "The single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].", "Accreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].", "Standards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. Additionally, the introduction of standards, through an accreditation program, resulted in the improved perceptions of teamwork and participation in decision making\n[27], and compliance with tobacco control\n[32].", "The authors declare that they have no competing interests.", "DG and MP performed the literature search, and along with RH selected relevant papers for the review and analysed the included papers. DG, MP and RH drafted the initial manuscript, and all authors contributed to the revision of the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/12/329/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Selection criteria and search strategy", "Analysis", "Results", "Study details, characteristics and quality", "Standards development", "Implementation issues", "Impact of standards", "Impacts of standards on the organisation", "Impacts of standards on clinical quality", "Impact on staff", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
[ "In health accreditation a standard is “a desired and achievable level of performance against which actual performance is measured”\n[1]. Standards enable “health service organisations, large and small, to embed practical and effective quality improvement and patient safety initiatives into their daily operations”\n[2]. External organisational and clinical accreditation standards are considered necessary to promote high quality, reliable and safe products and services\n[2,3]. There are over 70 national healthcare accreditation agencies worldwide that develop or apply standards, or both, specifically for health services and organisations\n[4].\nThe International Society for Quality in Health Care (ISQua) seeks to guide and standardise the development of these agencies and the standards they implement\n[5]. ISQua advocates that accreditation standards themselves need to meet exacting standards, and has standards for how to develop, write and apply them. ISQua conducts the International Accreditation Program (IAP) for the certification or accreditation of standards against their standards\n[5]. The International Standards Organisation (ISO), a network of the national standards institutes of 162 countries, is the largest developer and publisher of international standards\n[6]. Standards from ISO are also applied in international health jurisdictions.\nIn short, healthcare standards, and standards for standards, are ubiquitous. They are advocated to be an important means of improving clinical practice and organisational performance. ISQua, and many national bodies, espouse, and have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members\n[6-11]. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. What is the basis to ground the standard development methodologies in use? What research demonstrates how standards should be crafted and structured to ensure they are understandable, unambiguous, achievable and reliable in making assessments? What studies have identified the necessary steps to enable standards to be incorporated into everyday practice? Is there evidence to show whether standards improve practice? The purpose of this study was to examine these questions by identifying and analysing the research literature focusing on the development methods and application of healthcare accreditation standards.\nThe analysis is a systematic narrative synthesis of the literature\n[12]. The intention is to generate new insights and bring transparency to the topic under investigation\n[13,14]. This type of review is appropriate for this topic for four reasons. First, the review aims to examine a complex initiative applied in diverse contexts\n[15]. That is, accreditation programs are complex organisational interventions, trying to shape both organisational and clinical conduct, within a multifaceted context in turn shaped by, for example, the healthcare and policy environment. Second, accreditation programs, involving healthcare standards, have been researched in different ways by divergent groups. The analysis method adopted here is intended specifically for interventions researched in a myriad of ways\n[12]. Third, the approach enables consideration of apparently disparate data generated by research into accreditation standards, as a complex organisational intervention\n[15]. 
Fourth, the questions being investigated are preliminary questions that need to be asked of this intervention and the approach is designed exactly for this\n[14,15]. The review differs from previous reviews\n[16,17] in being specifically focused only on healthcare accreditation standards and not the broader “standards” field. This review is the first to undertake a systematic and detailed narrative synthesis of accreditation standards.", " Selection criteria and search strategy The selection criteria were: peer-reviewed, publicly available English language empirical research papers on the topic of healthcare accreditation standards. Discussion and commentary, and non-English language papers were excluded. Despite these focused criteria, we recognise that they may capture heterogeneous literature including, possibly, an overlap with work covering other forms of regulation. To counter this potential problem we used a staged search strategy to identify and remove any papers not focused on the study topic. This approach is valid for two reasons. First, there are overlaps between how regulatory strategies are at times discussed in the literature\n[18-20]. The reviewing of abstracts or the full papers provided a mechanism by which to screen out literature not on the study topic. Second, previous reviews and a preliminary investigation signalled that empirical research literature available on standards was limited.\nA multi-method strategy based on similar review designs was employed\n[16,21,22]. There were three stages (see Figure\n1). The search was first conducted in March 2010 and updated in August 2011. Citations and abstracts that met the search criteria were downloaded into Endnote X.0.2, a reference management program. Abstracts and, where uncertainty arose, complete papers, were reviewed against the selection criteria for inclusion in the review. \nLiterature search, review and selection flow chart.\nThe first stage had three steps. First, we selected databases in the health sector. Literature was drawn from five electronic bibliographic databases: Medline, Psych INFO, EMBASE and Social Work databases from 1980, and CINAHL (nursing and allied health literature) from 1982. Second, we identified abstracts focusing on the topic of ‘accreditation’. Third, we selected abstracts using the terms ‘standard’, ‘guideline’, ‘policy’ and ‘legislation’; where appropriate, terms were truncated with the symbol ‘$’ and searched using the ‘Exp’ function to capture widest publication of papers (for example, guideline$ or polic$). The initial search yielded 9386 abstracts (including duplicates). We reviewed the selection to exclude those not written in English and also to remove duplicates.\nIn the second stage we refined the collected abstracts. Two researchers independently reviewed the abstracts, selecting papers using two criteria. We selected for empirical research studies, using derivations of phrases such as ‘research’, ‘study’, ‘empirical’ or ‘report’, and ‘method’. Using this strategy the selection was reduced to 2111 articles. This group was further analysed to identify those papers that covered ‘impacts’ of accreditation standards. At this point we removed papers covering clinical or biomedical issues and also discussion pieces, commentaries or editorials. To supplement the formal search process, two less structured search methods were implemented. We undertook a 'snowballing' search, which is a variation on snowballing sampling\n[23]. 
That is, we examined the assembled manuscripts reference lists for additional relevant papers potentially missed in the formal search. In parallel, an investigation of websites of agencies associated with the study topic, that is, reports or papers investigating the evidence base for accreditation or quality standards in the health sector, was conducted. We searched: the ISQua research site; the websites of 31 healthcare accreditation agencies worldwide; ISO website; and standards organisations’ websites of a number of countries (Additional file\n1: Appendix 1). The application of the stage 2 refinement processes to the collected abstracts yielded 140 articles.\nIn the third stage, to determine the final selection of papers meeting the study criteria, two experienced researchers independently reviewed the identified 140 papers and discussed their relevance. The focus was the selection of papers that addressed development methods and application of healthcare accreditation standards. This stage derived 13 articles.\nThe selection criteria were: peer-reviewed, publicly available English language empirical research papers on the topic of healthcare accreditation standards. Discussion and commentary, and non-English language papers were excluded. Despite these focused criteria, we recognise that they may capture heterogeneous literature including, possibly, an overlap with work covering other forms of regulation. To counter this potential problem we used a staged search strategy to identify and remove any papers not focused on the study topic. This approach is valid for two reasons. First, there are overlaps between how regulatory strategies are at times discussed in the literature\n[18-20]. The reviewing of abstracts or the full papers provided a mechanism by which to screen out literature not on the study topic. Second, previous reviews and a preliminary investigation signalled that empirical research literature available on standards was limited.\nA multi-method strategy based on similar review designs was employed\n[16,21,22]. There were three stages (see Figure\n1). The search was first conducted in March 2010 and updated in August 2011. Citations and abstracts that met the search criteria were downloaded into Endnote X.0.2, a reference management program. Abstracts and, where uncertainty arose, complete papers, were reviewed against the selection criteria for inclusion in the review. \nLiterature search, review and selection flow chart.\nThe first stage had three steps. First, we selected databases in the health sector. Literature was drawn from five electronic bibliographic databases: Medline, Psych INFO, EMBASE and Social Work databases from 1980, and CINAHL (nursing and allied health literature) from 1982. Second, we identified abstracts focusing on the topic of ‘accreditation’. Third, we selected abstracts using the terms ‘standard’, ‘guideline’, ‘policy’ and ‘legislation’; where appropriate, terms were truncated with the symbol ‘$’ and searched using the ‘Exp’ function to capture widest publication of papers (for example, guideline$ or polic$). The initial search yielded 9386 abstracts (including duplicates). We reviewed the selection to exclude those not written in English and also to remove duplicates.\nIn the second stage we refined the collected abstracts. Two researchers independently reviewed the abstracts, selecting papers using two criteria. 
We selected for empirical research studies, using derivations of phrases such as ‘research’, ‘study’, ‘empirical’ or ‘report’, and ‘method’. Using this strategy the selection was reduced to 2111 articles. This group was further analysed to identify those papers that covered ‘impacts’ of accreditation standards. At this point we removed papers covering clinical or biomedical issues and also discussion pieces, commentaries or editorials. To supplement the formal search process, two less structured search methods were implemented. We undertook a 'snowballing' search, which is a variation on snowballing sampling\n[23]. That is, we examined the assembled manuscripts reference lists for additional relevant papers potentially missed in the formal search. In parallel, an investigation of websites of agencies associated with the study topic, that is, reports or papers investigating the evidence base for accreditation or quality standards in the health sector, was conducted. We searched: the ISQua research site; the websites of 31 healthcare accreditation agencies worldwide; ISO website; and standards organisations’ websites of a number of countries (Additional file\n1: Appendix 1). The application of the stage 2 refinement processes to the collected abstracts yielded 140 articles.\nIn the third stage, to determine the final selection of papers meeting the study criteria, two experienced researchers independently reviewed the identified 140 papers and discussed their relevance. The focus was the selection of papers that addressed development methods and application of healthcare accreditation standards. This stage derived 13 articles.\n Analysis The selected papers were analysed by three independent researchers in two ways. First, the characteristics of the studies were noted. For each paper a summary of authors, country, sector, aim, methods, major findings and conclusions, and study quality was compiled. The level of evidence was assessed using Australian National Health and Medical Research Council guidelines\n[24] and study quality by an assessment tool developed from publically available checklists\n[21,25]. Together they enabled examination of study quality, incorporating intervention or aetiology (that is, impact), level of evidence, design and appraisal of quality (Table\n1). Second, a narrative analysis of the literature was conducted in line with the study aims. \nQuality rating assessment criteria*\n*adapted from Cunningham et al. 2011.\nThe selected papers were analysed by three independent researchers in two ways. First, the characteristics of the studies were noted. For each paper a summary of authors, country, sector, aim, methods, major findings and conclusions, and study quality was compiled. The level of evidence was assessed using Australian National Health and Medical Research Council guidelines\n[24] and study quality by an assessment tool developed from publically available checklists\n[21,25]. Together they enabled examination of study quality, incorporating intervention or aetiology (that is, impact), level of evidence, design and appraisal of quality (Table\n1). Second, a narrative analysis of the literature was conducted in line with the study aims. \nQuality rating assessment criteria*\n*adapted from Cunningham et al. 2011.", "The selection criteria were: peer-reviewed, publicly available English language empirical research papers on the topic of healthcare accreditation standards. Discussion and commentary, and non-English language papers were excluded. 
Despite these focused criteria, we recognise that they may capture heterogeneous literature including, possibly, an overlap with work covering other forms of regulation. To counter this potential problem we used a staged search strategy to identify and remove any papers not focused on the study topic. This approach is valid for two reasons. First, there are overlaps between how regulatory strategies are at times discussed in the literature\n[18-20]. The reviewing of abstracts or the full papers provided a mechanism by which to screen out literature not on the study topic. Second, previous reviews and a preliminary investigation signalled that empirical research literature available on standards was limited.\nA multi-method strategy based on similar review designs was employed\n[16,21,22]. There were three stages (see Figure\n1). The search was first conducted in March 2010 and updated in August 2011. Citations and abstracts that met the search criteria were downloaded into Endnote X.0.2, a reference management program. Abstracts and, where uncertainty arose, complete papers, were reviewed against the selection criteria for inclusion in the review. \nLiterature search, review and selection flow chart.\nThe first stage had three steps. First, we selected databases in the health sector. Literature was drawn from five electronic bibliographic databases: Medline, Psych INFO, EMBASE and Social Work databases from 1980, and CINAHL (nursing and allied health literature) from 1982. Second, we identified abstracts focusing on the topic of ‘accreditation’. Third, we selected abstracts using the terms ‘standard’, ‘guideline’, ‘policy’ and ‘legislation’; where appropriate, terms were truncated with the symbol ‘$’ and searched using the ‘Exp’ function to capture widest publication of papers (for example, guideline$ or polic$). The initial search yielded 9386 abstracts (including duplicates). We reviewed the selection to exclude those not written in English and also to remove duplicates.\nIn the second stage we refined the collected abstracts. Two researchers independently reviewed the abstracts, selecting papers using two criteria. We selected for empirical research studies, using derivations of phrases such as ‘research’, ‘study’, ‘empirical’ or ‘report’, and ‘method’. Using this strategy the selection was reduced to 2111 articles. This group was further analysed to identify those papers that covered ‘impacts’ of accreditation standards. At this point we removed papers covering clinical or biomedical issues and also discussion pieces, commentaries or editorials. To supplement the formal search process, two less structured search methods were implemented. We undertook a 'snowballing' search, which is a variation on snowballing sampling\n[23]. That is, we examined the assembled manuscripts reference lists for additional relevant papers potentially missed in the formal search. In parallel, an investigation of websites of agencies associated with the study topic, that is, reports or papers investigating the evidence base for accreditation or quality standards in the health sector, was conducted. We searched: the ISQua research site; the websites of 31 healthcare accreditation agencies worldwide; ISO website; and standards organisations’ websites of a number of countries (Additional file\n1: Appendix 1). 
The application of the stage 2 refinement processes to the collected abstracts yielded 140 articles.\nIn the third stage, to determine the final selection of papers meeting the study criteria, two experienced researchers independently reviewed the identified 140 papers and discussed their relevance. The focus was the selection of papers that addressed development methods and application of healthcare accreditation standards. This stage derived 13 articles.", "The selected papers were analysed by three independent researchers in two ways. First, the characteristics of the studies were noted. For each paper a summary of authors, country, sector, aim, methods, major findings and conclusions, and study quality was compiled. The level of evidence was assessed using Australian National Health and Medical Research Council guidelines\n[24] and study quality by an assessment tool developed from publically available checklists\n[21,25]. Together they enabled examination of study quality, incorporating intervention or aetiology (that is, impact), level of evidence, design and appraisal of quality (Table\n1). Second, a narrative analysis of the literature was conducted in line with the study aims. \nQuality rating assessment criteria*\n*adapted from Cunningham et al. 2011.", "The 13 papers were synthesised (Table\n2). The results are presented under three headings: standards development; implementation issues; and the impact of standards. The papers were examined according to date of publication, country, sector, methodology and focus.\nAssessment of empirical healthcare standards research\n Study details, characteristics and quality The dates of the studies ranged from 1995 to 2009 inclusive. The majority of studies were published in two years, 2003\n[20,26-29] and 2009\n[17,18,30,31], with five and four studies, respectively. One study was published in each of the following years: 1995\n[32], 2004\n[33], 2007\n[34] and 2008\n[35]. Studies were conducted in six countries. The United States of America (USA) was the setting for the majority of studies (n = 8)\n[17,18,26,29,31-34]. The remaining five countries all had one study: United Kingdom\n[35]; Philippines\n[30]; Australia\n[20]; South Africa\n[27]; and Taiwan\n[28]. The studies were all conducted in the acute sector (n = 13). The majority of studies had a multidisciplinary focus (n = 9)\n[17,18,20,26-28,32-34] and the practices of nurses\n[30,35] and managers\n[29,31] were the individual focus of two studies each. Research projects used mixed methods\n[20,32-35], employed quantitative methodologies to examine archival databases\n[17,18,26,28,31] or undertook a questionnaire survey\n[27,29,30]. Within the mixed methods studies the qualitative tools were questionnaires, surveys, interviews, reviews and evaluations. The quantitative methods covered examination of databases, prospective and retrospective studies and stratified randomised studies. The study content was categorised according to the focus of the papers, that is, program, clinical or workplace issues. Program issues was the topic that most studies examined via four different program sub-topics: reviews of programs (n = 5)\n[18,20,28,29,31]; policy compliance (n = 4)\n[17,32-34]; program impacts (n = 3)\n[26,27,30]; and organisational environment (n = 1)\n[35]. 
Just five studies had content relating to clinical care\n[17,18,20,26,34] and one on staff workplace issues\n[35].\nA summary of the intervention or impact (aetiology) assessment, level of evidence classification and quality ratings for the selected literature is represented in Table\n3. Using the NHMRC guidelines, three investigations\n[27,32,35] were classified as interventions and ten studies\n[20,26,28-33,36] under the aetiology criteria. In the intervention group, Aiken et al. (2008), was assessed as meeting the fourth level of evidence and all the quality criteria. While Salmon et al. (2003) and Stradling et al. (2007) were rated at the second and fourth level of evidence rating, respectively, each were missing some study details and so were rated at the second level for quality ratings. The studies within the aetiology group were divided between the two top quality levels. Six\n[26,29,30,32,33,36] were rated as meeting all criteria, and four\n[28,29,31,37], while missing some but not significant information to compromise them, were rated on the second tier of quality. \nSummary of the intervention or aetiology assessment, level of evidence classification and quality ratings\nBold = Interventions as per the NHMRC criteria; Non-bold = Aetiology as per the NHMRC criteria.\nThe dates of the studies ranged from 1995 to 2009 inclusive. The majority of studies were published in two years, 2003\n[20,26-29] and 2009\n[17,18,30,31], with five and four studies, respectively. One study was published in each of the following years: 1995\n[32], 2004\n[33], 2007\n[34] and 2008\n[35]. Studies were conducted in six countries. The United States of America (USA) was the setting for the majority of studies (n = 8)\n[17,18,26,29,31-34]. The remaining five countries all had one study: United Kingdom\n[35]; Philippines\n[30]; Australia\n[20]; South Africa\n[27]; and Taiwan\n[28]. The studies were all conducted in the acute sector (n = 13). The majority of studies had a multidisciplinary focus (n = 9)\n[17,18,20,26-28,32-34] and the practices of nurses\n[30,35] and managers\n[29,31] were the individual focus of two studies each. Research projects used mixed methods\n[20,32-35], employed quantitative methodologies to examine archival databases\n[17,18,26,28,31] or undertook a questionnaire survey\n[27,29,30]. Within the mixed methods studies the qualitative tools were questionnaires, surveys, interviews, reviews and evaluations. The quantitative methods covered examination of databases, prospective and retrospective studies and stratified randomised studies. The study content was categorised according to the focus of the papers, that is, program, clinical or workplace issues. Program issues was the topic that most studies examined via four different program sub-topics: reviews of programs (n = 5)\n[18,20,28,29,31]; policy compliance (n = 4)\n[17,32-34]; program impacts (n = 3)\n[26,27,30]; and organisational environment (n = 1)\n[35]. Just five studies had content relating to clinical care\n[17,18,20,26,34] and one on staff workplace issues\n[35].\nA summary of the intervention or impact (aetiology) assessment, level of evidence classification and quality ratings for the selected literature is represented in Table\n3. Using the NHMRC guidelines, three investigations\n[27,32,35] were classified as interventions and ten studies\n[20,26,28-33,36] under the aetiology criteria. In the intervention group, Aiken et al. (2008), was assessed as meeting the fourth level of evidence and all the quality criteria. 
While Salmon et al. (2003) and Stradling et al. (2007) were rated at the second and fourth level of evidence rating, respectively, each were missing some study details and so were rated at the second level for quality ratings. The studies within the aetiology group were divided between the two top quality levels. Six\n[26,29,30,32,33,36] were rated as meeting all criteria, and four\n[28,29,31,37], while missing some but not significant information to compromise them, were rated on the second tier of quality. \nSummary of the intervention or aetiology assessment, level of evidence classification and quality ratings\nBold = Interventions as per the NHMRC criteria; Non-bold = Aetiology as per the NHMRC criteria.\n Standards development No study directly examined standards development or other issues associated with their progression. That is, no empirical study was identified which examined: what is best practice for developing standards; standard development processes; the wording or structure of standards; or what types of standards would have the greatest likelihood of improving practice.\nNo study directly examined standards development or other issues associated with their progression. That is, no empirical study was identified which examined: what is best practice for developing standards; standard development processes; the wording or structure of standards; or what types of standards would have the greatest likelihood of improving practice.\n Implementation issues Only one study examined implementation issues with healthcare accreditation standards\n[33]. Five factors were noted as assisting implementation: external pressure from legislation and accreditation; the use of technology and self-evaluation as tools to leverage change; organisational culture characteristics; research; and peer education. Conversely, three factors were reported to hinder implementation: lack of external incentives or pressure; organisational policies and culture; and cost and resource constraints\n[33].\nOnly one study examined implementation issues with healthcare accreditation standards\n[33]. Five factors were noted as assisting implementation: external pressure from legislation and accreditation; the use of technology and self-evaluation as tools to leverage change; organisational culture characteristics; research; and peer education. Conversely, three factors were reported to hinder implementation: lack of external incentives or pressure; organisational policies and culture; and cost and resource constraints\n[33].\n Impact of standards Twelve of the 13 papers addressed the impact of standards\n[26-32,35-37]. The impact of the standards on the organisation, clinical quality and staff could be identified.\n Impacts of standards on the organisation The single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. 
Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].\nThe single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].\n Impacts of standards on clinical quality Accreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].\nAccreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].\n Impact on staff Standards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. 
Additionally, the introduction of standards, through an accreditation program, resulted in the improved perceptions of teamwork and participation in decision making\n[27], and compliance with tobacco control\n[32].\nStandards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. Additionally, the introduction of standards, through an accreditation program, resulted in the improved perceptions of teamwork and participation in decision making\n[27], and compliance with tobacco control\n[32].\nTwelve of the 13 papers addressed the impact of standards\n[26-32,35-37]. The impact of the standards on the organisation, clinical quality and staff could be identified.\n Impacts of standards on the organisation The single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].\nThe single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].\n Impacts of standards on clinical quality Accreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].\nAccreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. 
Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].\n Impact on staff Standards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. Additionally, the introduction of standards, through an accreditation program, resulted in the improved perceptions of teamwork and participation in decision making\n[27], and compliance with tobacco control\n[32].\nStandards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. Additionally, the introduction of standards, through an accreditation program, resulted in the improved perceptions of teamwork and participation in decision making\n[27], and compliance with tobacco control\n[32].", "The dates of the studies ranged from 1995 to 2009 inclusive. The majority of studies were published in two years, 2003\n[20,26-29] and 2009\n[17,18,30,31], with five and four studies, respectively. One study was published in each of the following years: 1995\n[32], 2004\n[33], 2007\n[34] and 2008\n[35]. Studies were conducted in six countries. The United States of America (USA) was the setting for the majority of studies (n = 8)\n[17,18,26,29,31-34]. The remaining five countries all had one study: United Kingdom\n[35]; Philippines\n[30]; Australia\n[20]; South Africa\n[27]; and Taiwan\n[28]. The studies were all conducted in the acute sector (n = 13). The majority of studies had a multidisciplinary focus (n = 9)\n[17,18,20,26-28,32-34] and the practices of nurses\n[30,35] and managers\n[29,31] were the individual focus of two studies each. Research projects used mixed methods\n[20,32-35], employed quantitative methodologies to examine archival databases\n[17,18,26,28,31] or undertook a questionnaire survey\n[27,29,30]. Within the mixed methods studies the qualitative tools were questionnaires, surveys, interviews, reviews and evaluations. The quantitative methods covered examination of databases, prospective and retrospective studies and stratified randomised studies. The study content was categorised according to the focus of the papers, that is, program, clinical or workplace issues. Program issues was the topic that most studies examined via four different program sub-topics: reviews of programs (n = 5)\n[18,20,28,29,31]; policy compliance (n = 4)\n[17,32-34]; program impacts (n = 3)\n[26,27,30]; and organisational environment (n = 1)\n[35]. 
Just five studies had content relating to clinical care\n[17,18,20,26,34] and one on staff workplace issues\n[35].\nA summary of the intervention or impact (aetiology) assessment, level of evidence classification and quality ratings for the selected literature is represented in Table\n3. Using the NHMRC guidelines, three investigations\n[27,32,35] were classified as interventions and ten studies\n[20,26,28-33,36] under the aetiology criteria. In the intervention group, Aiken et al. (2008), was assessed as meeting the fourth level of evidence and all the quality criteria. While Salmon et al. (2003) and Stradling et al. (2007) were rated at the second and fourth level of evidence rating, respectively, each were missing some study details and so were rated at the second level for quality ratings. The studies within the aetiology group were divided between the two top quality levels. Six\n[26,29,30,32,33,36] were rated as meeting all criteria, and four\n[28,29,31,37], while missing some but not significant information to compromise them, were rated on the second tier of quality. \nSummary of the intervention or aetiology assessment, level of evidence classification and quality ratings\nBold = Interventions as per the NHMRC criteria; Non-bold = Aetiology as per the NHMRC criteria.", "No study directly examined standards development or other issues associated with their progression. That is, no empirical study was identified which examined: what is best practice for developing standards; standard development processes; the wording or structure of standards; or what types of standards would have the greatest likelihood of improving practice.", "Only one study examined implementation issues with healthcare accreditation standards\n[33]. Five factors were noted as assisting implementation: external pressure from legislation and accreditation; the use of technology and self-evaluation as tools to leverage change; organisational culture characteristics; research; and peer education. Conversely, three factors were reported to hinder implementation: lack of external incentives or pressure; organisational policies and culture; and cost and resource constraints\n[33].", "Twelve of the 13 papers addressed the impact of standards\n[26-32,35-37]. The impact of the standards on the organisation, clinical quality and staff could be identified.\n Impacts of standards on the organisation The single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].\nThe single randomised controlled trial identified demonstrated that compliance with accreditation standards increased in the intervention group, from 38 to 76%, compared to in the control group, from 37 to 38%\n[27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. 
Specifically, standards within an accreditation program resulted in decreased length of hospital stay\n[26], improved management of disclosure of preventable harm\n[29], and utilisation of patient safety practices\n[36].\n Impacts of standards on clinical quality Accreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].\nAccreditation program standards encompassing trauma care\n[26], prenatal care\n[30], post partum care\n[37], stroke care\n[32], breastfeeding\n[28], pain management\n[29], and the institution wide organisation of care\n[27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to: reductions in in-hospital mortality and length of stay\n[26], and rates of infections and decubitus ulcers\n[36]; and improvements in breastfeeding rates\n[28] and the proportion of patients receiving relevant tests, medications and admission for stroke\n[32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care\n[30], document control\n[31], and the organisation of care\n[27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards\n[36].\n Impact on staff Standards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. Additionally, the introduction of standards, through an accreditation program, resulted in the improved perceptions of teamwork and participation in decision making\n[27], and compliance with tobacco control\n[32].\nStandards were shown to produce an improved staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attraction of the workplace in recruiting and retaining staff\n[35]. 
Discussion

This study employed systematic searches of academic databases and accreditation agency websites to uncover the empirical research that grounds the development methods and application of healthcare accreditation standards. The review built on previous work in the healthcare accreditation field [16,17], commencing where previous reviews finished. We started from the proposition that standards are ubiquitous within healthcare and are generally considered an important means of improving clinical practice and organisational performance. However, the evidence as to whether accreditation standards change the behaviour of healthcare organisations, clinical quality or staff is at best equivocal, and depends on the circumstances.

Only three intervention studies were identified in the review. Two interventions resulted in improvements attributed to the implementation of accreditation standards [32,35]: the organisational working environment and staff perceptions [35], and care processes and the appropriateness of care [32].
The remaining study, conducted in a developing country [27], involved health services seeking improvement from a very low base, and hence the applicability of its results is limited to that context. The non-intervention studies showed that, whilst there is adherence to standards in some cases, in a range of instances there is little evidence as to their effects. In short, the development, writing, implementation and impacts of healthcare standards are significant issues that lack convincing evidence of effectiveness.

It is not clear, for example, what might constitute evidence-based practice in the development of standards. However, the literature synthesis suggests that recurring strategies include mobilising external leverage, organising teams and creating receptive cultures within healthcare organisations to optimise the opportunity to create standards. Yet an overarching finding was that applying standards has mixed results. There is limited published peer-reviewed evidence on the correspondence between the application of standards and improvements in organisational performance, clinical quality or staff behaviours.

There is an opportunity for the standards development field to learn from the experience of those developing technical standards, practice guidelines and evidence-based clinical policies. Consideration can be given to translating development processes and implementation strategies from other areas of healthcare [38-40]. The Joint Commission in the USA, for example, has used development and implementation processes through its National Patient Safety Goals initiative from which lessons can be learnt [41].

Agencies setting standards, including accreditation bodies or programs that develop or apply them, or both, also have significant experience and expertise in conducting these activities; some have been doing so for decades. More recently, ISQua has been utilising and sharing this experience through two strategies: the ISQua IAP and the accreditation workshop conducted at ISQua's annual international quality conference. The ISQua IAP was implemented to "build credibility and comparability for national organisations by harmonising standards and procedures on common international principles" [42]:349. Established in 1999, the IAP utilises the expertise of senior people within accreditation agencies to review, offer ideas for improvement, and accredit programs in other countries. ISQua reports that the IAP has accredited 19 organisations, 35 sets of standards (from 21 organisations), and eight surveyor training programs [36]. Each year the accreditation workshop at the ISQua international quality conference draws together practitioners and researchers from around the world to consider current developments and challenges associated with healthcare accreditation programs [43]. Discussions have centred on all aspects of accreditation programs, for example: implementation of accreditation programs [44,45]; maintaining the standards of accreditation programs [46]; survey methodologies [47,48]; linking standards to clinical indicators [49]; processes used to develop standards [50]; and the public disclosure of accreditation results [51,52].

Conclusion

The challenge is to translate practical experiences and discussions into rigorous empirical evidence. We lack knowledge, based on sound, critically peer-reviewed evidence, of how to strengthen the development and application of standards.
The process of developing standards essentially needs to be transformed from learnt experience into a verifiable, evidence-based methodology. Evidence-based mechanisms by which standards are developed, promulgated, reinforced, audited and evaluated are needed. Linking the writing of standards, including their wording, structure, design, focus and content, to improved outcomes requires further rigorous investigation. Factors that promote or inhibit the implementation of standards, and the impacts that result, need detailed examination and analysis. This review has revealed some significant gaps in our knowledge in these areas and, in doing so, has extended previous reviews in the healthcare accreditation field.

As to the limitations of our study, while we have endeavoured to be systematic, we may have overlooked some important literature. A further limitation is that papers or reports needed to be publicly available and in English to be included in the results.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

DG and MP performed the literature search, and along with RH selected relevant papers for the review and analysed the included papers. DG, MP and RH drafted the initial manuscript, and all authors contributed to the revision of the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/12/329/prepub

Supplementary Material

Additional file 1: Appendix 1. Accreditation and standards agencies' websites searched.
[ null, "methods", null, null, "results", null, null, null, null, null, null, null, "discussion", "conclusions", null, null, null, "supplementary-material" ]
[ "Healthcare", "Accreditation", "Standards", "Evidence for use", "Narrative literature review" ]
Background: In health accreditation a standard is “a desired and achievable level of performance against which actual performance is measured” [1]. Standards enable “health service organisations, large and small, to embed practical and effective quality improvement and patient safety initiatives into their daily operations” [2]. External organisational and clinical accreditation standards are considered necessary to promote high quality, reliable and safe products and services [2,3]. There are over 70 national healthcare accreditation agencies worldwide that develop or apply standards, or both, specifically for health services and organisations [4]. The International Society for Quality in Health Care (ISQua) seeks to guide and standardise the development of these agencies and the standards they implement [5]. ISQua advocates that accreditation standards themselves need to meet exacting standards, and has standards for how to develop, write and apply them. ISQua conducts the International Accreditation Program (IAP) for the certification or accreditation of standards against their standards [5]. The International Standards Organisation (ISO), a network of the national standards institutes of 162 countries, is the largest developer and publisher of international standards [6]. Standards from ISO are also applied in international health jurisdictions. In short, healthcare standards, and standards for standards, are ubiquitous. They are advocated to be an important means of improving clinical practice and organisational performance. ISQua, and many national bodies, espouse, and have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members [6-11]. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. What is the basis to ground the standard development methodologies in use? What research demonstrates how standards should be crafted and structured to ensure they are understandable, unambiguous, achievable and reliable in making assessments? What studies have identified the necessary steps to enable standards to be incorporated into everyday practice? Is there evidence to show whether standards improve practice? The purpose of this study was to examine these questions by identifying and analysing the research literature focusing on the development methods and application of healthcare accreditation standards. The analysis is a systematic narrative synthesis of the literature [12]. The intention is to generate new insights and bring transparency to the topic under investigation [13,14]. This type of review is appropriate for this topic for four reasons. First, the review aims to examine a complex initiative applied in diverse contexts [15]. That is, accreditation programs are complex organisational interventions, trying to shape both organisational and clinical conduct, within a multifaceted context in turn shaped by, for example, the healthcare and policy environment. Second, accreditation programs, involving healthcare standards, have been researched in different ways by divergent groups. The analysis method adopted here is intended specifically for interventions researched in a myriad of ways [12]. Third, the approach enables consideration of apparently disparate data generated by research into accreditation standards, as a complex organisational intervention [15]. 
Fourth, the questions being investigated are preliminary questions that need to be asked of this intervention, and the approach is designed for exactly this purpose [14,15]. The review differs from previous reviews [16,17] in being focused specifically on healthcare accreditation standards rather than the broader “standards” field. This review is the first to undertake a systematic and detailed narrative synthesis of accreditation standards.

Methods: Selection criteria and search strategy: The selection criteria were: peer-reviewed, publicly available, English-language empirical research papers on the topic of healthcare accreditation standards. Discussion and commentary pieces, and non-English-language papers, were excluded. Despite these focused criteria, we recognise that they may capture heterogeneous literature, including, possibly, an overlap with work covering other forms of regulation. To counter this potential problem we used a staged search strategy to identify and remove any papers not focused on the study topic. This approach is valid for two reasons. First, there are overlaps in how regulatory strategies are at times discussed in the literature [18-20]; reviewing abstracts, or the full papers, provided a mechanism by which to screen out literature not on the study topic. Second, previous reviews and a preliminary investigation signalled that the empirical research literature available on standards was limited. A multi-method strategy based on similar review designs was employed [16,21,22]. There were three stages (see Figure 1). The search was first conducted in March 2010 and updated in August 2011. Citations and abstracts that met the search criteria were downloaded into EndNote X.0.2, a reference management program. Abstracts and, where uncertainty arose, complete papers were reviewed against the selection criteria for inclusion in the review.

Figure 1: Literature search, review and selection flow chart.

The first stage had three steps. First, we selected databases in the health sector; literature was drawn from five electronic bibliographic databases: Medline, PsycINFO, EMBASE and Social Work Abstracts from 1980, and CINAHL (nursing and allied health literature) from 1982. Second, we identified abstracts focusing on the topic of ‘accreditation’. Third, we selected abstracts using the terms ‘standard’, ‘guideline’, ‘policy’ and ‘legislation’; where appropriate, terms were truncated with the symbol ‘$’ and searched using the ‘Exp’ (explode) function to capture the widest possible range of papers (for example, guideline$ or polic$). The initial search yielded 9386 abstracts (including duplicates). We reviewed the selection to exclude papers not written in English and to remove duplicates. In the second stage we refined the collected abstracts. Two researchers independently reviewed the abstracts, selecting papers using two criteria. We selected for empirical research studies, using derivations of phrases such as ‘research’, ‘study’, ‘empirical’ or ‘report’, and ‘method’. Using this strategy the selection was reduced to 2111 articles. This group was further analysed to identify those papers that covered the ‘impacts’ of accreditation standards. At this point we removed papers covering clinical or biomedical issues, as well as discussion pieces, commentaries and editorials. To supplement the formal search process, two less structured search methods were implemented. We undertook a ‘snowballing’ search, a variation on snowball sampling [23]: we examined the assembled manuscripts’ reference lists for additional relevant papers potentially missed in the formal search. In parallel, we investigated the websites of agencies associated with the study topic for reports or papers investigating the evidence base for accreditation or quality standards in the health sector. We searched the ISQua research site, the websites of 31 healthcare accreditation agencies worldwide, the ISO website, and the websites of standards organisations in a number of countries (Additional file 1: Appendix 1). The application of the stage 2 refinement processes to the collected abstracts yielded 140 articles. In the third stage, to determine the final selection of papers meeting the study criteria, two experienced researchers independently reviewed the identified 140 papers and discussed their relevance. The focus was the selection of papers that addressed the development methods and application of healthcare accreditation standards. This stage derived 13 articles.

Analysis: The selected papers were analysed by three independent researchers in two ways. First, the characteristics of the studies were noted: for each paper a summary of authors, country, sector, aim, methods, major findings and conclusions, and study quality was compiled. The level of evidence was assessed using Australian National Health and Medical Research Council (NHMRC) guidelines [24] and study quality by an assessment tool developed from publicly available checklists [21,25]. Together they enabled examination of study quality, incorporating intervention or aetiology (that is, impact), level of evidence, design and appraisal of quality (Table 1). Second, a narrative analysis of the literature was conducted in line with the study aims.

Table 1: Quality rating assessment criteria (adapted from Cunningham et al. 2011).

Results: The 13 papers were synthesised (Table 2). The results are presented under three headings: standards development; implementation issues; and the impact of standards. The papers were examined according to date of publication, country, sector, methodology and focus.

Table 2: Assessment of empirical healthcare standards research.

Study details, characteristics and quality: The dates of the studies ranged from 1995 to 2009 inclusive. The majority of studies were published in two years, 2003 [20,26-29] and 2009 [17,18,30,31], with five and four studies, respectively. One study was published in each of the following years: 1995 [32], 2004 [33], 2007 [34] and 2008 [35]. Studies were conducted in six countries. The United States of America (USA) was the setting for the majority of studies (n = 8) [17,18,26,29,31-34]. The remaining five countries each had one study: United Kingdom [35]; Philippines [30]; Australia [20]; South Africa [27]; and Taiwan [28]. All 13 studies were conducted in the acute sector. The majority of studies had a multidisciplinary focus (n = 9) [17,18,20,26-28,32-34], and the practices of nurses [30,35] and managers [29,31] were the individual focus of two studies each. Research projects used mixed methods [20,32-35], employed quantitative methodologies to examine archival databases [17,18,26,28,31], or undertook a questionnaire survey [27,29,30]. Within the mixed-methods studies the qualitative tools were questionnaires, surveys, interviews, reviews and evaluations; the quantitative methods covered examination of databases, prospective and retrospective studies, and stratified randomised studies. The study content was categorised according to the focus of the papers, that is, program, clinical or workplace issues. Program issues were the most commonly examined topic, via four sub-topics: reviews of programs (n = 5) [18,20,28,29,31]; policy compliance (n = 4) [17,32-34]; program impacts (n = 3) [26,27,30]; and organisational environment (n = 1) [35]. Just five studies had content relating to clinical care [17,18,20,26,34] and one to staff workplace issues [35]. A summary of the intervention or impact (aetiology) assessment, level of evidence classification and quality ratings for the selected literature is presented in Table 3.
Using the NHMRC guidelines, three investigations [27,32,35] were classified as interventions and ten studies [20,26,28-33,36] under the aetiology criteria. In the intervention group, Aiken et al. (2008) was assessed as meeting the fourth level of evidence and all the quality criteria. Salmon et al. (2003) and Stradling et al. (2007) were rated at the second and fourth levels of evidence, respectively; each was missing some study details and so was rated at the second level for quality. The studies within the aetiology group were divided between the two top quality levels: six [26,29,30,32,33,36] were rated as meeting all criteria, and four [28,29,31,37], missing some information but not enough to compromise them, were rated on the second tier of quality.

Table 3: Summary of the intervention or aetiology assessment, level of evidence classification and quality ratings (bold = interventions as per the NHMRC criteria; non-bold = aetiology as per the NHMRC criteria).

Standards development: No study directly examined standards development or other issues associated with their progression. That is, no empirical study was identified which examined: what is best practice for developing standards; standard development processes; the wording or structure of standards; or what types of standards would have the greatest likelihood of improving practice.

Implementation issues: Only one study examined implementation issues with healthcare accreditation standards [33]. Five factors were noted as assisting implementation: external pressure from legislation and accreditation; the use of technology and self-evaluation as tools to leverage change; organisational culture characteristics; research; and peer education. Conversely, three factors were reported to hinder implementation: lack of external incentives or pressure; organisational policies and culture; and cost and resource constraints [33].

Impact of standards: Twelve of the 13 papers addressed the impact of standards [26-32,35-37]. The impact of the standards on the organisation, clinical quality and staff could be identified.

Impacts of standards on the organisation: The single randomised controlled trial identified showed that compliance with accreditation standards increased in the intervention group from 38% to 76%, compared with 37% to 38% in the control group [27]. Furthermore, standards or guidelines about the organisation of clinical practice led to improved efficiency and quality practices. Specifically, standards within an accreditation program resulted in decreased length of hospital stay [26], improved management of disclosure of preventable harm [29], and utilisation of patient safety practices [36].

Impacts of standards on clinical quality: Accreditation program standards encompassing trauma care [26], prenatal care [30], postpartum care [37], stroke care [32], breastfeeding [28], pain management [29], and the institution-wide organisation of care [27,30,33] were reported to improve the provision of care. Additionally, there were links to improvements in various aspects of clinical quality. For example, standards contributed to reductions in in-hospital mortality and length of stay [26] and in rates of infections and decubitus ulcers [36], and to improvements in breastfeeding rates [28] and in the proportion of patients receiving relevant tests, medications and admission for stroke [32]. Conversely, and at times simultaneously, standards introduced to improve care appeared not to do so. For example, exposure to standards for prenatal and delivery care [30], document control [31], and the organisation of care [27] did not show any measurable effects. Nor did rates of certain adverse events, such as failure to rescue or postoperative respiratory failure, alter with the implementation of accreditation standards [36].

Impact on staff: Standards were shown to improve staff quality of life, working conditions and appraisals of the quality of care. This outcome was noted from the use of ‘Magnet’ principles, which sought to improve the attractiveness of the workplace in recruiting and retaining staff [35]. Additionally, the introduction of standards through an accreditation program resulted in improved perceptions of teamwork and participation in decision making [27], and compliance with tobacco control [32].

Discussion: This study employed systematic searches of academic databases and accreditation agency websites to uncover the empirical research that grounds the development methods and application of healthcare accreditation standards. The review has built on previous work in the healthcare accreditation field [16,17], commencing where previous reviews finished. We started with the proposition that standards are ubiquitous within healthcare and are generally considered an important means of improving clinical practice and organisational performance. However, the evidence about whether accreditation standards change the behaviour of healthcare organisations, clinical quality and staff is at best equivocal, and is determined by the circumstances. Only three intervention studies were identified in the review. Two interventions resulted in improvements attributed to the implementation of accreditation standards [32,35]: the organisational working environment and staff perceptions [35], and care processes and appropriateness of care [32]. The remaining study, conducted in a developing country [27], involved health services seeking improvement from a very low base, and hence the applicability of the results is limited to that context. The non-intervention studies showed that, whilst there is adherence to standards in some cases, in a range of instances there is little evidence as to their effects. In short, the development, writing, implementation and impacts of healthcare standards are significant issues that lack convincing evidence. It is not clear, for example, what might constitute evidence-based practice in the development of standards. However, the literature synthesis suggests that recurring strategies include mobilising external leverage, organising teams and creating receptive cultures within healthcare organisations to optimise the opportunity to create standards. Yet an overarching finding was that applying standards has mixed results. There is limited published peer-reviewed evidence regarding the correspondence between the application of standards and improvements in organisational performance, clinical quality or staff behaviours.
There is the opportunity for the standards development field to learn from the experience of those developing technical standards, practice guidelines and evidence-based clinical policies. Consideration can be given to the applicability and translation of development processes and implementation strategies from other areas of healthcare [38-40]. The Joint Commission in the USA, for example, through the establishment of the National Patient Safety Goals initiative, has used development and implementation processes from which lessons can be learnt [41]. Agencies setting standards, including accreditation bodies or programs that develop or apply them, or both, also have significant experience and expertise in conducting these activities; some have been doing so for decades. More recently, ISQua has been utilising and sharing this experience through two strategies: the ISQua IAP and the accreditation workshop conducted at ISQua’s annual international quality conference. The ISQua IAP has been implemented to “build credibility and comparability for national organisations by harmonising standards and procedures on common international principles” [42]:349. Established in 1999, the IAP utilises the expertise of senior people within accreditation agencies to review, offer ideas for improvement, and accredit programs in other countries. ISQua reports that the IAP has accredited 19 organisations, 35 sets of standards (from 21 organisations), and eight surveyor training programs [36]. Each year the accreditation workshop at the ISQua international quality conference draws together practitioners and researchers from around the world to consider current developments and challenges associated with healthcare accreditation programs [43]. Discussions have centred upon all aspects of accreditation programs, for example: implementation of accreditation programs [44,45]; maintaining standards of accreditation programs [46]; survey methodologies [47,48]; linking standards to clinical indicators [49]; processes used to develop standards [50]; and the public disclosure of accreditation results [51,52].

Conclusion: The challenge is to translate practical experiences and discussions into rigorous empirical evidence. We lack knowledge of how to strengthen the development of standards and their application based on sound, critically peer-reviewed evidence. The process of developing standards needs to be transformed from learnt experience into a verifiable, evidence-based methodology. Evidence-based mechanisms by which standards are developed, promulgated, reinforced, audited and evaluated are needed. Linking the writing of standards, including their wording, structure, design, focus and content, to improved outcomes requires further rigorous investigation. Factors that promote or inhibit the implementation of standards, and the impacts that result, need detailed examination and analysis. This review has revealed significant gaps in our knowledge in these areas and, in doing so, has extended previous reviews in the healthcare accreditation field. As to the limitations of our study: while we have endeavoured to be systematic, we may have overlooked some important literature; a further limitation is that papers or reports needed to be publicly available and in English to be included in the results.

Competing interests: The authors declare that they have no competing interests.
Authors’ contributions: DG and MP performed the literature search and, along with RH, selected relevant papers for the review and analysed the included papers. DG, MP and RH drafted the initial manuscript, and all authors contributed to the revision of the manuscript. All authors read and approved the final manuscript.

Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/12/329/prepub

Supplementary Material: Appendix 1. Accreditation and Standards Agencies websites searched.
Background: Healthcare accreditation standards are advocated as an important means of improving clinical practice and organisational performance. Standard development agencies have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. The study's purpose was to examine the empirical research that grounds the development methods and application of healthcare accreditation standards. Methods: A multi-method strategy was employed over the period March 2010 to August 2011. Five academic health research databases (Medline, PsycINFO, EMBASE, Social Work Abstracts, and CINAHL) were interrogated, the websites of 36 agencies associated with the study topic were investigated, and a snowball search was undertaken. Search criteria included accreditation research studies, in English, addressing standards and their impact. Searching in stage 1 initially selected 9386 abstracts. In stage 2, this selection was refined against the inclusion criteria; empirical studies (n = 2111) were identified and refined to a selection of 140 papers with the exclusion of clinical or biomedical and commentary pieces. These were independently reviewed by two researchers and reduced to 13 articles that met the study criteria. Results: The 13 articles were analysed according to four categories: overall findings; standards development; implementation issues; and impact of standards. Studies have only occurred in the acute care setting, predominantly in 2003 (n = 5) and 2009 (n = 4), and in the United States (n = 8). A multidisciplinary focus (n = 9) and mixed-method approach (n = 11) are common characteristics. Three interventional studies were identified, with the remaining 10 studies having research designs to investigate clinical or organisational impacts. No study directly examined standards development or other issues associated with their progression. Only one study noted implementation issues, identifying several enablers and barriers. Standards were reported to improve organisational efficiency and staff circumstances. However, the impact on clinical quality was mixed, with both improvements and a lack of measurable effects recorded. Conclusions: Standards are ubiquitous within healthcare and are generally considered to be an important means by which to improve clinical practice and organisational performance. However, there is a lack of robust empirical evidence examining the development, writing, implementation and impacts of healthcare accreditation standards.
Background: In health accreditation a standard is “a desired and achievable level of performance against which actual performance is measured” [1]. Standards enable “health service organisations, large and small, to embed practical and effective quality improvement and patient safety initiatives into their daily operations” [2]. External organisational and clinical accreditation standards are considered necessary to promote high quality, reliable and safe products and services [2,3]. There are over 70 national healthcare accreditation agencies worldwide that develop or apply standards, or both, specifically for health services and organisations [4]. The International Society for Quality in Health Care (ISQua) seeks to guide and standardise the development of these agencies and the standards they implement [5]. ISQua advocates that accreditation standards themselves need to meet exacting standards, and has standards for how to develop, write and apply them. ISQua conducts the International Accreditation Program (IAP) for the certification or accreditation of standards against their standards [5]. The International Standards Organisation (ISO), a network of the national standards institutes of 162 countries, is the largest developer and publisher of international standards [6]. Standards from ISO are also applied in international health jurisdictions. In short, healthcare standards, and standards for standards, are ubiquitous. They are advocated to be an important means of improving clinical practice and organisational performance. ISQua, and many national bodies, espouse, and have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members [6-11]. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. What is the basis to ground the standard development methodologies in use? What research demonstrates how standards should be crafted and structured to ensure they are understandable, unambiguous, achievable and reliable in making assessments? What studies have identified the necessary steps to enable standards to be incorporated into everyday practice? Is there evidence to show whether standards improve practice? The purpose of this study was to examine these questions by identifying and analysing the research literature focusing on the development methods and application of healthcare accreditation standards. The analysis is a systematic narrative synthesis of the literature [12]. The intention is to generate new insights and bring transparency to the topic under investigation [13,14]. This type of review is appropriate for this topic for four reasons. First, the review aims to examine a complex initiative applied in diverse contexts [15]. That is, accreditation programs are complex organisational interventions, trying to shape both organisational and clinical conduct, within a multifaceted context in turn shaped by, for example, the healthcare and policy environment. Second, accreditation programs, involving healthcare standards, have been researched in different ways by divergent groups. The analysis method adopted here is intended specifically for interventions researched in a myriad of ways [12]. Third, the approach enables consideration of apparently disparate data generated by research into accreditation standards, as a complex organisational intervention [15]. 
Fourth, the questions being investigated are preliminary questions that need to be asked of this intervention and the approach is designed exactly for this [14,15]. The review differs from previous reviews [16,17] in being specifically focused only on healthcare accreditation standards and not the broader “standards” field. This review is the first to undertake a systematic and detailed narrative synthesis of accreditation standards. Conclusion: The challenge is to translate practical experiences and discussions into rigorous empirical evidence. We lack knowledge of how to strengthen the development of standards and the application of them based on sound critically peer-reviewed evidence. The process to develop standards essentially needs to be transformed from learnt experience to a verifiable, evidence-based methodology. Evidence-based mechanisms by which standards are developed, promulgated, reinforced, audited and evaluated are needed. Linking the writing of standards, including the wording, structure, design, focus and content, to improved outcomes requires further rigorous investigation. Factors that promote or inhibit implementation of standards, and the impacts that result, need detailed examination and analysis. This review has revealed some significant gaps in our knowledge in these areas, and, in doing so, extended previous reviews in the healthcare accreditation field. As to the limitations of our study, while we have endeavoured to be systematic, we may have overlooked some important literature. A further limitation is that papers or reports needed to be publicly available and in English to be included in the results.
Background: Healthcare accreditation standards are advocated as an important means of improving clinical practice and organisational performance. Standard development agencies have documented methodologies to promote open, transparent, inclusive development processes where standards are developed by members. They assert that their methodologies are effective and efficient at producing standards appropriate for the health industry. However, the evidence to support these claims requires scrutiny. The study's purpose was to examine the empirical research that grounds the development methods and application of healthcare accreditation standards. Methods: A multi-method strategy was employed over the period March 2010 to August 2011. Five academic health research databases (Medline, PsycINFO, Embase, Social Work Abstracts, and CINAHL) were interrogated, the websites of 36 agencies associated with the study topic were investigated, and a snowball search was undertaken. Search criteria included accreditation research studies, in English, addressing standards and their impact. Searching in stage 1 initially selected 9386 abstracts. In stage 2, this selection was refined against the inclusion criteria; empirical studies (n = 2111) were identified and refined to a selection of 140 papers with the exclusion of clinical or biomedical and commentary pieces. These were independently reviewed by two researchers and reduced to 13 articles that met the study criteria. Results: The 13 articles were analysed according to four categories: overall findings; standards development; implementation issues; and impact of standards. Studies have only occurred in the acute care setting, predominantly in 2003 (n = 5) and 2009 (n = 4), and in the United States (n = 8). A multidisciplinary focus (n = 9) and mixed method approach (n = 11) are common characteristics. Three interventional studies were identified, with the remaining 10 studies having research designs to investigate clinical or organisational impacts. No study directly examined standards development or other issues associated with their progression. Only one study noted implementation issues, identifying several enablers and barriers. Standards were reported to improve organisational efficiency and staff circumstances. However, the impact on clinical quality was mixed, with both improvements and a lack of measurable effects recorded. Conclusions: Standards are ubiquitous within healthcare and are generally considered to be an important means by which to improve clinical practice and organisational performance. However, there is a lack of robust empirical evidence examining the development, writing, implementation and impacts of healthcare accreditation standards.
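The staged screening reported in the Methods (9386 abstracts, 2111 empirical studies, 140 papers, 13 included articles) is a standard multi-stage funnel. The minimal Python sketch below illustrates how such a funnel can be tracked programmatically; the predicate names and record flags are hypothetical placeholders, not the authors' actual criteria or tooling.

```python
# Illustrative multi-stage screening funnel for a literature review.
# Each stage applies an inclusion predicate and records the count,
# mirroring the abstract's 9386 -> 2111 -> 140 -> 13 progression.

def screen(records, stages):
    """Apply inclusion predicates in order; return survivors and per-stage counts."""
    counts = [("retrieved", len(records))]
    for name, keep in stages:
        records = [r for r in records if keep(r)]
        counts.append((name, len(records)))
    return records, counts

# Hypothetical record flags standing in for the reported criteria.
stages = [
    ("empirical study", lambda r: r["empirical"]),
    ("non-clinical, non-commentary", lambda r: r["on_topic"]),
    ("met criteria (dual review)", lambda r: r["included"]),
]

example = [{"empirical": True, "on_topic": True, "included": False}]
survivors, counts = screen(example, stages)
print(counts)
# [('retrieved', 1), ('empirical study', 1),
#  ('non-clinical, non-commentary', 1), ('met criteria (dual review)', 0)]
```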
10,020
457
[ 673, 721, 153, 645, 58, 86, 908, 102, 229, 93, 10, 54, 16 ]
18
[ "standards", "accreditation", "quality", "care", "papers", "studies", "study", "26", "32", "27" ]
[ "implementation accreditation standards", "accreditation standards agencies", "accreditation quality standards", "healthcare accreditation standards", "accreditation standards complex" ]
[CONTENT] Healthcare | Accreditation | Standards | Evidence for use | Narrative literature review [SUMMARY]
[CONTENT] Healthcare | Accreditation | Standards | Evidence for use | Narrative literature review [SUMMARY]
[CONTENT] Healthcare | Accreditation | Standards | Evidence for use | Narrative literature review [SUMMARY]
[CONTENT] Healthcare | Accreditation | Standards | Evidence for use | Narrative literature review [SUMMARY]
[CONTENT] Healthcare | Accreditation | Standards | Evidence for use | Narrative literature review [SUMMARY]
[CONTENT] Healthcare | Accreditation | Standards | Evidence for use | Narrative literature review [SUMMARY]
[CONTENT] Accreditation | Delivery of Health Care | Empirical Research | Humans | Quality Indicators, Health Care [SUMMARY]
[CONTENT] Accreditation | Delivery of Health Care | Empirical Research | Humans | Quality Indicators, Health Care [SUMMARY]
[CONTENT] Accreditation | Delivery of Health Care | Empirical Research | Humans | Quality Indicators, Health Care [SUMMARY]
[CONTENT] Accreditation | Delivery of Health Care | Empirical Research | Humans | Quality Indicators, Health Care [SUMMARY]
[CONTENT] Accreditation | Delivery of Health Care | Empirical Research | Humans | Quality Indicators, Health Care [SUMMARY]
[CONTENT] Accreditation | Delivery of Health Care | Empirical Research | Humans | Quality Indicators, Health Care [SUMMARY]
[CONTENT] implementation accreditation standards | accreditation standards agencies | accreditation quality standards | healthcare accreditation standards | accreditation standards complex [SUMMARY]
[CONTENT] implementation accreditation standards | accreditation standards agencies | accreditation quality standards | healthcare accreditation standards | accreditation standards complex [SUMMARY]
[CONTENT] implementation accreditation standards | accreditation standards agencies | accreditation quality standards | healthcare accreditation standards | accreditation standards complex [SUMMARY]
[CONTENT] implementation accreditation standards | accreditation standards agencies | accreditation quality standards | healthcare accreditation standards | accreditation standards complex [SUMMARY]
[CONTENT] implementation accreditation standards | accreditation standards agencies | accreditation quality standards | healthcare accreditation standards | accreditation standards complex [SUMMARY]
[CONTENT] implementation accreditation standards | accreditation standards agencies | accreditation quality standards | healthcare accreditation standards | accreditation standards complex [SUMMARY]
[CONTENT] standards | accreditation | quality | care | papers | studies | study | 26 | 32 | 27 [SUMMARY]
[CONTENT] standards | accreditation | quality | care | papers | studies | study | 26 | 32 | 27 [SUMMARY]
[CONTENT] standards | accreditation | quality | care | papers | studies | study | 26 | 32 | 27 [SUMMARY]
[CONTENT] standards | accreditation | quality | care | papers | studies | study | 26 | 32 | 27 [SUMMARY]
[CONTENT] standards | accreditation | quality | care | papers | studies | study | 26 | 32 | 27 [SUMMARY]
[CONTENT] standards | accreditation | quality | care | papers | studies | study | 26 | 32 | 27 [SUMMARY]
[CONTENT] standards | accreditation | standards standards | international | health | healthcare | accreditation standards | organisational | 15 | questions [SUMMARY]
[CONTENT] papers | abstracts | search | selection | criteria | stage | study | literature | reviewed | topic [SUMMARY]
[CONTENT] standards | care | 26 | 30 | studies | 32 | quality | 29 | 28 | 27 [SUMMARY]
[CONTENT] based | evidence | standards | knowledge | rigorous | needed | evidence based | examination analysis review revealed | promote inhibit | factors promote [SUMMARY]
[CONTENT] standards | accreditation | care | quality | papers | study | 26 | studies | accreditation standards | improved [SUMMARY]
[CONTENT] standards | accreditation | care | quality | papers | study | 26 | studies | accreditation standards | improved [SUMMARY]
[CONTENT] Healthcare ||| Standard ||| ||| ||| [SUMMARY]
[CONTENT] the period March 2010 to August 2011 ||| Five | Medline | Embase | 36 ||| English ||| 1 | 9386 ||| 2 | 2111 | 140 ||| two | 13 [SUMMARY]
[CONTENT] 13 | four ||| 2003 | 5 | 2009 | 4 | the United States | 8) ||| 9 | 11 ||| Three | 10 ||| ||| Only one ||| ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Healthcare ||| Standard ||| ||| ||| ||| the period March 2010 to August 2011 ||| Five | Medline | Embase | 36 ||| English ||| 1 | 9386 ||| 2 | 2111 | 140 ||| two | 13 ||| ||| 13 | four ||| 2003 | 5 | 2009 | 4 | the United States | 8) ||| 9 | 11 ||| Three | 10 ||| ||| Only one ||| ||| ||| ||| [SUMMARY]
[CONTENT] Healthcare ||| Standard ||| ||| ||| ||| the period March 2010 to August 2011 ||| Five | Medline | Embase | 36 ||| English ||| 1 | 9386 ||| 2 | 2111 | 140 ||| two | 13 ||| ||| 13 | four ||| 2003 | 5 | 2009 | 4 | the United States | 8) ||| 9 | 11 ||| Three | 10 ||| ||| Only one ||| ||| ||| ||| [SUMMARY]
null
33557392
Myrtus communis (M. communis) is a wild aromatic plant used in traditional herbal medicine; insecticidal, antioxidant, anti-inflammatory, and antimicrobial activities have been demonstrated for its essential oil (MCEO).
BACKGROUND
Gas chromatography/mass spectrometry (GC/MS) analysis was performed to determine the chemical composition of MCEO. Mice were then orally administered MCEO at doses of 100, 200, or 300 mg/kg/day, or atovaquone at 100 mg/kg/day, for 21 days. On the 15th day, the mice were infected by intraperitoneal inoculation of 20-25 tissue cysts of the Tehran strain of T. gondii. The mean number of brain tissue cysts and the mRNA levels of IL-12 and IFN-γ were measured in mice of each tested group.
METHODS
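The design described above maps onto six groups detailed later in this record; the following Python snippet restates it as a configuration table for clarity. It is a restatement of the reported design, not code from the study, and it flags a source ambiguity about the infection day.

```python
# Minimal sketch of the dosing design as reported in this record.
# Group labels (C1-C3, Ex1-Ex3) come from the design section below.
# Note: the abstract says infection occurred on day 15 of the 21-day
# course, while the design text says "after 3 weeks of treatment";
# this discrepancy is left as-is from the source.

TREATMENT_DAYS = 21
CYSTS_PER_INOCULUM = (20, 25)  # Tehran-strain tissue cysts, i.p.

GROUPS = {
    "C1":  {"agent": None,         "dose_mg_kg_day": None, "infected": False},
    "C2":  {"agent": "olive oil",  "dose_mg_kg_day": None, "infected": True},
    "C3":  {"agent": "atovaquone", "dose_mg_kg_day": 100,  "infected": True},
    "Ex1": {"agent": "MCEO",       "dose_mg_kg_day": 100,  "infected": True},
    "Ex2": {"agent": "MCEO",       "dose_mg_kg_day": 200,  "infected": True},
    "Ex3": {"agent": "MCEO",       "dose_mg_kg_day": 300,  "infected": True},
}
```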
By GC/MS, the major constituents were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%). The results demonstrated that the mean number of T. gondii tissue cysts in the experimental groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001) was significantly reduced in a dose-dependent manner compared with the control group (C2). The mean diameter of tissue cysts was significantly reduced in mice of the experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001). Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase (p < 0.001) was observed in groups Ex2 and Ex3 compared with the control groups.
RESULTS
The findings of the present study demonstrated potent prophylactic effects of MCEO, especially at doses of 200 and 300 mg/kg, in mice infected with T. gondii. Although the anti-Toxoplasma effects of MCEO and its other properties, such as enhanced innate immunity and low toxicity, are encouraging, further evidence from investigations in this field is needed.
CONCLUSION
[ "Animals", "Antiparasitic Agents", "Immunity, Innate", "Mice", "Myrtus", "Oils, Volatile", "Toxoplasma", "Toxoplasmosis" ]
7915315
1. Introduction
Toxoplasmosis is one of the most prevalent zoonotic parasitic diseases, caused by the intracellular parasite Toxoplasma gondii (T. gondii). This parasite affects more than 30% of the world’s population and a wide range of warm-blooded animals [1]. Humans, as intermediate hosts, are infected through three main routes: (i) ingestion of undercooked or raw meat infected with tissue cysts of T. gondii; (ii) consumption of water and food contaminated with sporulated oocysts excreted in the feces of cats, the definitive host; and (iii) congenital infection, when the mother becomes infected during pregnancy by one of the two previous routes [2,3]. Regarding clinical manifestations, toxoplasmosis does not cause specific symptoms in healthy, immunocompetent people; however, severe and even deadly forms can occur in immunocompromised individuals (such as patients with human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) and organ transplant recipients) and in congenitally infected fetuses [4]. At present, chemotherapy with the combination of pyrimethamine and sulfadiazine, followed by azithromycin, clindamycin, atovaquone, etc., is considered the preferred treatment for toxoplasmosis; however, studies in recent years have demonstrated that these drugs are associated with side effects such as osteoporosis and teratogenic effects, mostly in immunocompromised patients [5,6]. Atovaquone, a hydroxynaphthoquinone derivative, has potent activity against tissue cysts by blocking the respiratory chain of Toxoplasma; it is therefore broadly used in vivo against T. gondii during both infection stages [7]. Since there is currently no effective vaccine to prevent toxoplasmosis in humans or animals, prophylaxis can be considered the best way to prevent toxoplasmosis, especially in immunocompromised individuals with a CD4 count below 100 cells/μL and in pregnant women not previously determined to be seronegative for Toxoplasma immunoglobulin G (IgG) [8,9]. Since ancient times, medicinal herbs and their derivatives have been broadly used for health promotion and for the therapy of chronic, as opposed to life-threatening, diseases [10,11]. Herbal medicines have also been used successfully in the treatment of a wide range of bacterial, viral, fungal, and parasitic infections [12,13,14,15,16]. Myrtus communis L. (M. communis), also called myrtle (Myrtaceae family), is a medicinal herb that has been broadly applied in folk medicine around the world [17]. Myrtle has long been used in traditional medicine as a reliever of stomach aches, for wound healing, as an antihemorrhoidal agent, etc. [18]. More recently, modern medicine has demonstrated that various parts of this plant, such as the leaves, fruits, roots, berries, and branches, have different pharmacological properties, including anti-inflammatory, analgesic, antioxidant, anticancer, anti-diabetic, anti-mutagenic, and neuroprotective activities [19]. Numerous studies have also reported antimicrobial effects of M. communis against a wide range of pathogenic strains of bacteria (Staphylococcus aureus, Listeria monocytogenes, Pseudomonas aeruginosa, Escherichia coli, Klebsiella pneumoniae, etc.), viruses (Herpes simplex), fungi (Candida spp., etc.), and parasites [19,20,21,22].
Essential oils and their constituents have long been considered promising therapeutic agents due to their established safety and broad biological and pharmacological activities [23]. Reviews have shown that the essential oil of M. communis contains large amounts of terpenes, terpenoids, and phenylpropanoids [17]. Given the various pharmacological effects of M. communis, the present study aimed to evaluate the prophylactic effects of M. communis essential oil against chronic toxoplasmosis induced by the Tehran strain of T. gondii in mice.
null
null
2. Results
2.1. GC/MS Analysis The yield of essential oil was 0.41% (v/w). The density of the essential oil at 25 °C was 0.831 g/mL and its refractive index at 25 °C was 1.391. Based on the GC/MS results, twenty-five compounds were identified, accounting for 93.01% of the total essential oil (Table S1, Supplementary Materials). The major constituents were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%). 2.2. Parasitological Study 2.2.1. The Mean Number of T. gondii Tissue Cysts Figure 1 shows the frequency of brain tissue cysts in tested mice of each group. Oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts in mice of the tested groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2). 2.2.2. The Mean Diameter of T. gondii Tissue Cysts Regarding the mean diameter of T. gondii tissue cysts, the mean diameter in experimental group C2 was 57.4 ± 3.35 µm, compared with 43.5 ± 2.96 µm in experimental group Ex1; the mean diameter was significantly reduced in mice of experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2). 2.3. Cytokine Expression by Real-Time PCR Figure 3 shows the mRNA levels of IFN-γ and IL-12 in mice of all tested groups. Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase (p < 0.001) was observed in groups Ex2 and Ex3 compared with the control groups.
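As a quick consistency check on the figures above, the reported v/w yield and density imply the approximate volume and mass of oil recovered from the 500 g of dried leaves described in the methods section of this record (a back-of-envelope calculation, assuming the 0.41% v/w yield is expressed relative to the distilled leaf mass):

$$
V_{\text{oil}} \approx 500\,\text{g} \times \frac{0.41\,\text{mL}}{100\,\text{g}} \approx 2.05\,\text{mL},
\qquad
m_{\text{oil}} \approx 2.05\,\text{mL} \times 0.831\,\text{g/mL} \approx 1.70\,\text{g}.
$$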
5. Conclusions
The findings of the present study demonstrated notable anti-Toxoplasma effects of MCEO in mice infected with T. gondii. Oral administration of MCEO at doses of 200 and 300 mg/kg for 21 days was able to prevent severe symptoms of toxoplasmosis in the mouse model. However, these anti-Toxoplasma effects and other properties of MCEO, such as enhanced innate immunity and low toxicity, require further confirmation from investigations in this field.
[ "2.1. GC/MS Analysis", "2.2. Parasitological Study", "2.2.1. The Mean Number of T. gondii Tissue Cysts", "2.2.2. The Mean Diameter of T. gondii Tissue Cysts", "2.3. Cytokine Expression by Real-Time PCR", "4. Materials and methods", "4.1. Collecting the Plant Materials", "4.2. Isolation of the Essential Oil", "4.3. Gas Chromatography/Mass Spectrometry (GC/MS) Analysis of Essential Oil", "4.4. Experimental Design and Infection", "4.4.1. Animals", "4.4.2. Parasite", "4.4.3. Animal Model of Chronic Toxoplasmosis", "4.4.4. Design", "4.5. Serological Tests", "4.6. Sample Collection", "4.7. Anti-Parasitic Activity", "4.8. Induction of Innate Immune System", "4.9. Statistical Analysis" ]
[ "The yield of essential oil was 0.41 % (v/w). Density of essential oil at 25 °C was 0.831 g/mL and refractive index was 1.391 at 25 °C. Based on the obtained results in GC/MS, twenty-five compounds were identified, indicating 93.01% of the total essential oil (Table S1, Supplementary Materials). The major constituents were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%), respectively.", " 2.2.1. The Mean Number of T. gondii Tissue Cysts Figure 1 shows the frequency of the brain tissue cysts in tested mice of each group. Based on the obtained findings, oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts mice of the tested groups of Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2).\nFigure 1 shows the frequency of the brain tissue cysts in tested mice of each group. Based on the obtained findings, oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts mice of the tested groups of Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2).\n 2.2.2. The Mean Diameter of T. gondii Tissue Cysts By the mean diameter of T. gondii tissue cysts, the results exhibited that the mean diameter of tissue cysts in experimental group C2 was 57.4 ± 3.35 µm, although this value was 43.5 ± 2.96 µm in mice of experimental group Ex1; however, the mean diameter of tissue cyst was significantly reduced in mice of experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2).\nBy the mean diameter of T. gondii tissue cysts, the results exhibited that the mean diameter of tissue cysts in experimental group C2 was 57.4 ± 3.35 µm, although this value was 43.5 ± 2.96 µm in mice of experimental group Ex1; however, the mean diameter of tissue cyst was significantly reduced in mice of experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2).", "Figure 1 shows the frequency of the brain tissue cysts in tested mice of each group. Based on the obtained findings, oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts mice of the tested groups of Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2).", "By the mean diameter of T. gondii tissue cysts, the results exhibited that the mean diameter of tissue cysts in experimental group C2 was 57.4 ± 3.35 µm, although this value was 43.5 ± 2.96 µm in mice of experimental group Ex1; however, the mean diameter of tissue cyst was significantly reduced in mice of experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2).", "Figure 3 shows the mRNA levels of IFN-γ and IL-12, in mice of all tested groups. The results demonstrated that although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of experimental groups, a significant increase (p < 0.001) was observed in tested groups of Ex2 and Ex3, when compared with control groups.", " 4.1. Collecting the Plant Materials In this investigation, the leaves of plant were prepared from mountain areas of Kerman province in September 2016. After identifying the plant by a botanist, a voucher sample of the plant materials was placed at the Herbarium of Department of Pharmacognosy of School of Pharmacy, (Kerman, Iran) (KF1356).\nIn this investigation, the leaves of plant were prepared from mountain areas of Kerman province in September 2016. 
After identifying the plant by a botanist, a voucher sample of the plant materials was placed at the Herbarium of Department of Pharmacognosy of School of Pharmacy, (Kerman, Iran) (KF1356).\n 4.2. Isolation of the Essential Oil About 500 g of air-dried leaves were used to hydro-distillation for 3 h by means of an all-glass Clevenger-type device. The obtained essential was dried over anhydrous sodium sulfate and kept in darkness at 4 °C in air-tight glass vials closed under nitrogen gas until testing [36].\nAbout 500 g of air-dried leaves were used to hydro-distillation for 3 h by means of an all-glass Clevenger-type device. The obtained essential was dried over anhydrous sodium sulfate and kept in darkness at 4 °C in air-tight glass vials closed under nitrogen gas until testing [36].\n 4.3. Gas Chromatography/Mass Spectrometry (GC/MS) Analysis of Essential Oil A Hewlett-Packard 6890 (Hewlett-Packard, Palo Alto, CA, USA) apparatus was applied to perform the GC analysis equipped with a HP-5MS column (30 m × 0.25 mm, film thickness 0.25 mm). Other device specifications and processes such as column temperature, injector and interface temperatures, flow rate of helium, etc., were previously described in the study conducted by Mahmoudvand et al. [28]. To determine the chemical composition of the essential oil we evaluated the relative retention time and mass spectra of each detected compound compared with the standards Wiley 2001 library data or literature ones [31].\nA Hewlett-Packard 6890 (Hewlett-Packard, Palo Alto, CA, USA) apparatus was applied to perform the GC analysis equipped with a HP-5MS column (30 m × 0.25 mm, film thickness 0.25 mm). Other device specifications and processes such as column temperature, injector and interface temperatures, flow rate of helium, etc., were previously described in the study conducted by Mahmoudvand et al. [28]. To determine the chemical composition of the essential oil we evaluated the relative retention time and mass spectra of each detected compound compared with the standards Wiley 2001 library data or literature ones [31].\n 4.4. Experimental Design and Infection 4.4.1. Animals A total of 48 male BALB/c mice (6–8 weeks old) weighing from 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/ dark cycle at 21 ± 2 °C and maintained with free access to water and feeding ad libitum. Mice were handled based on the standard protocols for the use of laboratory animals [37]. \nA total of 48 male BALB/c mice (6–8 weeks old) weighing from 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/ dark cycle at 21 ± 2 °C and maintained with free access to water and feeding ad libitum. Mice were handled based on the standard protocols for the use of laboratory animals [37]. \n 4.4.2. Parasite In this survey, the Tehran strain of T. gondii (type II) was kindly prepared from the strain kindly obtained from Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts every 90 days into new BALB/c mice.\nIn this survey, the Tehran strain of T. gondii (type II) was kindly prepared from the strain kindly obtained from Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts every 90 days into new BALB/c mice.\n 4.4.3. 
Animal Model of Chronic Toxoplasmosis The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of brain homogenized suspension (obtained from infected mice) having as a minimum 25–30 tissue cysts with antibiotics of penicillin and streptomycin were injected intraperitoneal to mice of each studied group.\nThe chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of brain homogenized suspension (obtained from infected mice) having as a minimum 25–30 tissue cysts with antibiotics of penicillin and streptomycin were injected intraperitoneal to mice of each studied group.\n 4.4.4. Design Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental group (Ex)) with six sub-groups including C1 (non-treated non infected), C2 (treated with olive oil as solvent), C3 (Infected mice treated with Atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups, except the C1 group, were infected with the Tehran strain of T. gondii. It should also be mentioned that the selection of doses of MCEO was based on the previous study conducted by the present authors which revealed that MCEO in these doses has no toxicity in mice [28].\nFigure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental group (Ex)) with six sub-groups including C1 (non-treated non infected), C2 (treated with olive oil as solvent), C3 (Infected mice treated with Atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups, except the C1 group, were infected with the Tehran strain of T. gondii. It should also be mentioned that the selection of doses of MCEO was based on the previous study conducted by the present authors which revealed that MCEO in these doses has no toxicity in mice [28].\n 4.4.1. Animals A total of 48 male BALB/c mice (6–8 weeks old) weighing from 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/ dark cycle at 21 ± 2 °C and maintained with free access to water and feeding ad libitum. Mice were handled based on the standard protocols for the use of laboratory animals [37]. \nA total of 48 male BALB/c mice (6–8 weeks old) weighing from 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/ dark cycle at 21 ± 2 °C and maintained with free access to water and feeding ad libitum. Mice were handled based on the standard protocols for the use of laboratory animals [37]. \n 4.4.2. Parasite In this survey, the Tehran strain of T. gondii (type II) was kindly prepared from the strain kindly obtained from Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts every 90 days into new BALB/c mice.\nIn this survey, the Tehran strain of T. gondii (type II) was kindly prepared from the strain kindly obtained from Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts every 90 days into new BALB/c mice.\n 4.4.3. 
Animal Model of Chronic Toxoplasmosis The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of brain homogenized suspension (obtained from infected mice) having as a minimum 25–30 tissue cysts with antibiotics of penicillin and streptomycin were injected intraperitoneal to mice of each studied group.\nThe chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of brain homogenized suspension (obtained from infected mice) having as a minimum 25–30 tissue cysts with antibiotics of penicillin and streptomycin were injected intraperitoneal to mice of each studied group.\n 4.4.4. Design Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental group (Ex)) with six sub-groups including C1 (non-treated non infected), C2 (treated with olive oil as solvent), C3 (Infected mice treated with Atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups, except the C1 group, were infected with the Tehran strain of T. gondii. It should also be mentioned that the selection of doses of MCEO was based on the previous study conducted by the present authors which revealed that MCEO in these doses has no toxicity in mice [28].\nFigure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental group (Ex)) with six sub-groups including C1 (non-treated non infected), C2 (treated with olive oil as solvent), C3 (Infected mice treated with Atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups, except the C1 group, were infected with the Tehran strain of T. gondii. It should also be mentioned that the selection of doses of MCEO was based on the previous study conducted by the present authors which revealed that MCEO in these doses has no toxicity in mice [28].\n 4.5. Serological Tests To confirm the development of toxoplasmosis in mice, serum samples of mice from each tested group was collected for evaluation of anti-T. gondii IgG antibody by a modified agglutination test (MAT) kit (Toxo screen DA, Biomérieux, Lyon, France), the formalized killed whole tachyzoites of T. gondii was prepared and procedures were carried out according to the method described by Shaapan et al. [38]. The agglutination titer of ≤1/20 was positive and end-titrated by 2-fold dilutions.\nTo confirm the development of toxoplasmosis in mice, serum samples of mice from each tested group was collected for evaluation of anti-T. gondii IgG antibody by a modified agglutination test (MAT) kit (Toxo screen DA, Biomérieux, Lyon, France), the formalized killed whole tachyzoites of T. gondii was prepared and procedures were carried out according to the method described by Shaapan et al. [38]. The agglutination titer of ≤1/20 was positive and end-titrated by 2-fold dilutions.\n 4.6. Sample Collection To collect the brain samples, mice in each group were deeply anesthetized by means of intraperitoneal administration of ketamine (150 mg/kg) and xylazine (10 mg/kg). In the next step followed by decapitation, total brain tissues from each mouse were aseptically collected. 
To evaluate the parasitological changes, we applied the right hemisphere; while another hemisphere was maintained in −80 °C to determine the molecular examinations.\nTo collect the brain samples, mice in each group were deeply anesthetized by means of intraperitoneal administration of ketamine (150 mg/kg) and xylazine (10 mg/kg). In the next step followed by decapitation, total brain tissues from each mouse were aseptically collected. To evaluate the parasitological changes, we applied the right hemisphere; while another hemisphere was maintained in −80 °C to determine the molecular examinations.\n 4.7. Anti-Parasitic Activity To assess the effects of MCEO on T. gondii infection, the right hemisphere of brain from each mouse was used to prepare the unstained-smears; in the next step, the diameter and the numbers of tissue cysts were calculated at two magnifications of 100× and 400× by means of light microscopy [39].\nTo assess the effects of MCEO on T. gondii infection, the right hemisphere of brain from each mouse was used to prepare the unstained-smears; in the next step, the diameter and the numbers of tissue cysts were calculated at two magnifications of 100× and 400× by means of light microscopy [39].\n 4.8. Induction of Innate Immune System The mRNA levels of IFN-γ, and IL12 which are considered as key factors related to toxoplasmosis control mechanisms were measured in all tested mice using quantitative real time PCR. The total brain RNA was extracted by means of the RNA-easy kits (Qiagen, Hilden, Germany); whereas all isolated RNAs were reverse-transcribed according to the manufacture’s protocols. Consequently, the collected complementary DNA (cDNA) was applied to conventional PCR amplification or real-time PCR. To perform the Real-time PCR we used the iQ5 real-time PCR detection system (Bio-Rad, Hercules, CA). All amplification products were determined by SYBR green [40]. The reaction conditions of real-time PCR were included initial denaturation at 95 °C for 10 min, 40 amplification cycles [denaturation at 95 °C for 10 s, annealing at 56 °C for 30 s, and elongation at 72 °C for 30 s], followed by one cycle at 72 °C for 5 min. The iQTM5 optical system software (Bio-Rad) was used for data analysis. Here, β-actin which is well-known as a housekeeping gene was considered as a normalization control. Oligonucleotide primers used for real-time RT-PCR analysis are shown in Table 1.\nThe mRNA levels of IFN-γ, and IL12 which are considered as key factors related to toxoplasmosis control mechanisms were measured in all tested mice using quantitative real time PCR. The total brain RNA was extracted by means of the RNA-easy kits (Qiagen, Hilden, Germany); whereas all isolated RNAs were reverse-transcribed according to the manufacture’s protocols. Consequently, the collected complementary DNA (cDNA) was applied to conventional PCR amplification or real-time PCR. To perform the Real-time PCR we used the iQ5 real-time PCR detection system (Bio-Rad, Hercules, CA). All amplification products were determined by SYBR green [40]. The reaction conditions of real-time PCR were included initial denaturation at 95 °C for 10 min, 40 amplification cycles [denaturation at 95 °C for 10 s, annealing at 56 °C for 30 s, and elongation at 72 °C for 30 s], followed by one cycle at 72 °C for 5 min. The iQTM5 optical system software (Bio-Rad) was used for data analysis. Here, β-actin which is well-known as a housekeeping gene was considered as a normalization control. 
Oligonucleotide primers used for real-time RT-PCR analysis are shown in Table 1.\n 4.9. Statistical Analysis Data analysis was carried out using SPSS statistical package version 17.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA with Turkey’s potshot test was used to assess differences between experimental groups. In addition, p < 0.05 was considered statistically significant.\nData analysis was carried out using SPSS statistical package version 17.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA with Turkey’s potshot test was used to assess differences between experimental groups. In addition, p < 0.05 was considered statistically significant.", "In this investigation, the leaves of plant were prepared from mountain areas of Kerman province in September 2016. After identifying the plant by a botanist, a voucher sample of the plant materials was placed at the Herbarium of Department of Pharmacognosy of School of Pharmacy, (Kerman, Iran) (KF1356).", "About 500 g of air-dried leaves were used to hydro-distillation for 3 h by means of an all-glass Clevenger-type device. The obtained essential was dried over anhydrous sodium sulfate and kept in darkness at 4 °C in air-tight glass vials closed under nitrogen gas until testing [36].", "A Hewlett-Packard 6890 (Hewlett-Packard, Palo Alto, CA, USA) apparatus was applied to perform the GC analysis equipped with a HP-5MS column (30 m × 0.25 mm, film thickness 0.25 mm). Other device specifications and processes such as column temperature, injector and interface temperatures, flow rate of helium, etc., were previously described in the study conducted by Mahmoudvand et al. [28]. To determine the chemical composition of the essential oil we evaluated the relative retention time and mass spectra of each detected compound compared with the standards Wiley 2001 library data or literature ones [31].", " 4.4.1. Animals A total of 48 male BALB/c mice (6–8 weeks old) weighing from 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/ dark cycle at 21 ± 2 °C and maintained with free access to water and feeding ad libitum. Mice were handled based on the standard protocols for the use of laboratory animals [37]. \nA total of 48 male BALB/c mice (6–8 weeks old) weighing from 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/ dark cycle at 21 ± 2 °C and maintained with free access to water and feeding ad libitum. Mice were handled based on the standard protocols for the use of laboratory animals [37]. \n 4.4.2. Parasite In this survey, the Tehran strain of T. gondii (type II) was kindly prepared from the strain kindly obtained from Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts every 90 days into new BALB/c mice.\nIn this survey, the Tehran strain of T. gondii (type II) was kindly prepared from the strain kindly obtained from Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts every 90 days into new BALB/c mice.\n 4.4.3. Animal Model of Chronic Toxoplasmosis The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. 
To do this, 0.5 mL of brain homogenized suspension (obtained from infected mice) having as a minimum 25–30 tissue cysts with antibiotics of penicillin and streptomycin were injected intraperitoneal to mice of each studied group.\nThe chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of brain homogenized suspension (obtained from infected mice) having as a minimum 25–30 tissue cysts with antibiotics of penicillin and streptomycin were injected intraperitoneal to mice of each studied group.\n 4.4.4. Design Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental group (Ex)) with six sub-groups including C1 (non-treated non infected), C2 (treated with olive oil as solvent), C3 (Infected mice treated with Atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups, except the C1 group, were infected with the Tehran strain of T. gondii. It should also be mentioned that the selection of doses of MCEO was based on the previous study conducted by the present authors which revealed that MCEO in these doses has no toxicity in mice [28].\nFigure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental group (Ex)) with six sub-groups including C1 (non-treated non infected), C2 (treated with olive oil as solvent), C3 (Infected mice treated with Atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups, except the C1 group, were infected with the Tehran strain of T. gondii. It should also be mentioned that the selection of doses of MCEO was based on the previous study conducted by the present authors which revealed that MCEO in these doses has no toxicity in mice [28].", "A total of 48 male BALB/c mice (6–8 weeks old) weighing from 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/ dark cycle at 21 ± 2 °C and maintained with free access to water and feeding ad libitum. Mice were handled based on the standard protocols for the use of laboratory animals [37]. ", "In this survey, the Tehran strain of T. gondii (type II) was kindly prepared from the strain kindly obtained from Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts every 90 days into new BALB/c mice.", "The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of brain homogenized suspension (obtained from infected mice) having as a minimum 25–30 tissue cysts with antibiotics of penicillin and streptomycin were injected intraperitoneal to mice of each studied group.", "Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental group (Ex)) with six sub-groups including C1 (non-treated non infected), C2 (treated with olive oil as solvent), C3 (Infected mice treated with Atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups, except the C1 group, were infected with the Tehran strain of T. gondii. 
It should also be mentioned that the selection of doses of MCEO was based on the previous study conducted by the present authors which revealed that MCEO in these doses has no toxicity in mice [28].", "To confirm the development of toxoplasmosis in mice, serum samples of mice from each tested group was collected for evaluation of anti-T. gondii IgG antibody by a modified agglutination test (MAT) kit (Toxo screen DA, Biomérieux, Lyon, France), the formalized killed whole tachyzoites of T. gondii was prepared and procedures were carried out according to the method described by Shaapan et al. [38]. The agglutination titer of ≤1/20 was positive and end-titrated by 2-fold dilutions.", "To collect the brain samples, mice in each group were deeply anesthetized by means of intraperitoneal administration of ketamine (150 mg/kg) and xylazine (10 mg/kg). In the next step followed by decapitation, total brain tissues from each mouse were aseptically collected. To evaluate the parasitological changes, we applied the right hemisphere; while another hemisphere was maintained in −80 °C to determine the molecular examinations.", "To assess the effects of MCEO on T. gondii infection, the right hemisphere of brain from each mouse was used to prepare the unstained-smears; in the next step, the diameter and the numbers of tissue cysts were calculated at two magnifications of 100× and 400× by means of light microscopy [39].", "The mRNA levels of IFN-γ, and IL12 which are considered as key factors related to toxoplasmosis control mechanisms were measured in all tested mice using quantitative real time PCR. The total brain RNA was extracted by means of the RNA-easy kits (Qiagen, Hilden, Germany); whereas all isolated RNAs were reverse-transcribed according to the manufacture’s protocols. Consequently, the collected complementary DNA (cDNA) was applied to conventional PCR amplification or real-time PCR. To perform the Real-time PCR we used the iQ5 real-time PCR detection system (Bio-Rad, Hercules, CA). All amplification products were determined by SYBR green [40]. The reaction conditions of real-time PCR were included initial denaturation at 95 °C for 10 min, 40 amplification cycles [denaturation at 95 °C for 10 s, annealing at 56 °C for 30 s, and elongation at 72 °C for 30 s], followed by one cycle at 72 °C for 5 min. The iQTM5 optical system software (Bio-Rad) was used for data analysis. Here, β-actin which is well-known as a housekeeping gene was considered as a normalization control. Oligonucleotide primers used for real-time RT-PCR analysis are shown in Table 1.", "Data analysis was carried out using SPSS statistical package version 17.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA with Turkey’s potshot test was used to assess differences between experimental groups. In addition, p < 0.05 was considered statistically significant." ]
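The record specifies SYBR-green real-time PCR normalized to β-actin (section 4.8) and one-way ANOVA with Tukey's post hoc test (section 4.9), but does not name the relative-quantification formula. The sketch below is illustrative only: it assumes the common 2^(-ΔΔCt) method for expression, and all Ct and group values are placeholders, not the study's measurements.

```python
# Hedged sketch of the analysis described in sections 4.8-4.9.
# The 2^-(ddCt) step is an assumption (the record only states
# beta-actin normalization); ANOVA + Tukey follows section 4.9.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by 2^-(ddCt) from mean Ct values."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Placeholder Ct values (hypothetical, for illustration only).
print(fold_change(ct_target=24.1, ct_ref=17.0,
                  ct_target_ctrl=26.3, ct_ref_ctrl=17.1))  # ~4.3-fold

# Placeholder per-mouse relative-expression values by group.
groups = {
    "C2":  [1.0, 1.1, 0.9],
    "Ex2": [2.8, 3.1, 2.9],
    "Ex3": [3.6, 3.4, 3.8],
}
f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f_stat:.2f}, p={p:.4g}")

values = np.concatenate([v for v in groups.values()])
labels = np.concatenate([[k] * len(v) for k, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey HSD table
```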
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Results", "2.1. GC/MS Analysis", "2.2. Parasitological Study", "2.2.1. The Mean Number of T. gondii Tissue Cysts", "2.2.2. The Mean Diameter of T. gondii Tissue Cysts", "2.3. Cytokine Expression by Real-Time PCR", "3. Discussion", "4. Materials and methods", "4.1. Collecting the Plant Materials", "4.2. Isolation of the Essential Oil", "4.3. Gas Chromatography/Mass Spectrometry (GC/MS) Analysis of Essential Oil", "4.4. Experimental Design and Infection", "4.4.1. Animals", "4.4.2. Parasite", "4.4.3. Animal Model of Chronic Toxoplasmosis", "4.4.4. Design", "4.5. Serological Tests", "4.6. Sample Collection", "4.7. Anti-Parasitic Activity", "4.8. Induction of Innate Immune System", "4.9. Statistical Analysis", "5. Conclusions" ]
[ "Toxoplasmosis is one of the most prevalent zoonotic parasitic diseases caused by the intracellular parasite Toxoplasma gondii (T. gondii). This parasite affects more than 30% of the world’s population and a wide range of warm-blooded animals [1]. Human as the intermediate host is infected through main routes, including: (i) the ingestion of undercooked or uncooked meat infected with tissue cysts of T. gondii, (ii) consumption of water and food contaminated with sporulated oocysts excreted in feces of the cat as definitive host, and (iii) congenital infection, when the mother becomes infected during pregnancy by one of two previous methods [2,3]. Considering clinical manifestations of toxoplasmosis, the disease does not cause any specific symptoms in healthy and immunocompetent people; but, a severe and even a deadly form can be observed in immunocompromised individuals (such as patients with human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS), and patients with organ transplantation, etc.) and congenitally infected fetuses [4].\nAt present, chemotherapy with the combination of pyrimethamine and sulfadiazine followed by azithromycin, clindamycin, atovaquone, etc., is considered as the preferred treatment for toxoplasmosis; however, studies in recent years have demonstrated that these drugs are associated with some side effects such as osteoporosis, and teratogenic effects mostly in immune-compromised patients [5,6]. Atovaquone, a hydroxynaphthoquinone derivative, has potent activity against tissue cysts through the blocking respiratory chain of the Toxoplasma; therefore, it is broadly used for in vivo activity against T. gondii during both infection stages [7].\nSince there is currently no effective vaccine to prevent toxoplasmosis in human and animals, prophylaxis can therefore be considered the best way to prevent the toxoplasmosis, especially in immunocompromised individuals with a CD4 count below 100 cells/μL as well as in pregnant women who were not previously determined to be seronegative for Toxoplasma Immunoglobulin G (IgG) [8,9].\nFrom ancient times, medicinal herbs and their derivatives have been broadly used for health promotion and therapy for chronic, as opposed to life-threatening, diseases [10,11]. Herbal medicines have also been successfully used in the treatment of a wide range of bacterial, viral, fungal, as well as parasitic infections [12,13,14,15,16]. Myrtus communis L. (M. communis), which also called myrtle (Myrtaceae family) is a medicinal herb that has been broadly applied for folk medicine around the world [17]. Since the old civilizations, myrtle has long been applied in traditional medicine as a reliever of stomach aches, wound healing, antihemorrhoid, etc. [18]. Recently, modern medicine demonstrated that various parts of this plant such as leaves, fruits, roots, berries, and its branches have different pharmacological possessions including anti-inflammatory, analgesic, antioxidant, anticancer, anti-diabetic, anti-mutagenic, neuro-protective, etc. [19]. Numerous studies have also reported antimicrobial effects of M. communis against a wide range of pathogenic strains of bacteria (Staphylococcus aureus, Listeria monocytogenes, Pseudomonas aeruginosa, Escherichia coli, Klebsiella pneumonia, etc.), viruses (Herpes simplex), fungi (Candida spp., etc), and parasites [19,20,21,22]. 
Essential oils and their constituents have long been considered promising therapeutic agents because of their established safety and broad biological and pharmacological activities [23]. Reviews have shown that the essential oil of M. communis contains large amounts of terpenes, terpenoids, and phenylpropanoids [17].\nGiven the various pharmacological effects of M. communis, the present study aimed to evaluate the prophylactic effects of M. communis essential oil against chronic toxoplasmosis induced by the Tehran strain of T. gondii in mice.", " 2.1. GC/MS Analysis The yield of essential oil was 0.41% (v/w). The density of the essential oil at 25 °C was 0.831 g/mL and its refractive index at 25 °C was 1.391. GC/MS analysis identified twenty-five compounds, representing 93.01% of the total essential oil (Table S1, Supplementary Materials). The major constituents were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%).\n 2.2. Parasitological Study 2.2.1. The Mean Number of T. gondii Tissue Cysts Figure 1 shows the frequency of the brain tissue cysts in the tested mice of each group. Oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts in mice of the tested groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2).\n 2.2.2. The Mean Diameter of T. gondii Tissue Cysts Regarding the mean diameter of T. gondii tissue cysts, the mean diameter in control group C2 was 57.4 ± 3.35 µm, compared with 43.5 ± 2.96 µm in experimental group Ex1; the reduction was statistically significant in experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2).\n 2.3. Cytokine Expression by Real-Time PCR Figure 3 shows the mRNA levels of IFN-γ and IL-12 in mice of all tested groups. Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase (p < 0.001) was observed in groups Ex2 and Ex3 when compared with the control groups.", "The yield of essential oil was 0.41% (v/w). The density of the essential oil at 25 °C was 0.831 g/mL and its refractive index at 25 °C was 1.391. GC/MS analysis identified twenty-five compounds, representing 93.01% of the total essential oil (Table S1, Supplementary Materials). The major constituents were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%).", " 2.2.1. The Mean Number of T. gondii Tissue Cysts Figure 1 shows the frequency of the brain tissue cysts in the tested mice of each group. Oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts in mice of the tested groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2).\n 2.2.2. The Mean Diameter of T. gondii Tissue Cysts Regarding the mean diameter of T. gondii tissue cysts, the mean diameter in control group C2 was 57.4 ± 3.35 µm, compared with 43.5 ± 2.96 µm in experimental group Ex1; the reduction was statistically significant in experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2).", "Figure 1 shows the frequency of the brain tissue cysts in the tested mice of each group. Oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts in mice of the tested groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2).", "Regarding the mean diameter of T. gondii tissue cysts, the mean diameter in control group C2 was 57.4 ± 3.35 µm, compared with 43.5 ± 2.96 µm in experimental group Ex1; the reduction was statistically significant in experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2).", "Figure 3 shows the mRNA levels of IFN-γ and IL-12 in mice of all tested groups. Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase (p < 0.001) was observed in groups Ex2 and Ex3 when compared with the control groups.",
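For readers reproducing the GC/MS workup above, relative percentages such as the 24.7% reported for α-pinene are conventionally obtained by area normalization of the chromatogram peaks. The sketch below is a minimal illustration of that calculation; the peak areas are invented placeholders, not values from this study.

```python
# Minimal sketch: percent composition by GC peak-area normalization.
# Peak areas below are hypothetical placeholders, NOT data from this study.
peak_areas = {
    "alpha-pinene": 1234.0,
    "1,8-cineole": 980.0,
    "linalool": 630.0,
    "other (22 compounds pooled)": 2156.0,
}

total_area = sum(peak_areas.values())

for compound, area in sorted(peak_areas.items(), key=lambda kv: -kv[1]):
    percent = 100.0 * area / total_area  # relative amount of each constituent
    print(f"{compound}: {percent:.1f}%")
```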
"At present, the combination of sulfonamide and pyrimethamine is considered the gold-standard therapy for toxoplasmosis [24]. According to the reports, these drugs are associated with adverse effects such as teratogenicity, hematological disorders, myelosuppression, and gastrointestinal disturbances [5,6]; hence, the discovery of novel effective agents, especially from natural sources with low toxicity, is an absolute need.\nThe World Health Organization (WHO) reported that more than two-thirds of the world's population rely on folk medicine for their primary therapeutic needs [25]. Reviews have demonstrated that herbs used for therapeutic purposes in traditional medicine contain a variety of compounds with different biological and therapeutic activities, especially in the treatment of microbial infections [26]. Here, we aimed to assess the prophylactic effects of M. communis essential oil against chronic toxoplasmosis induced by the Tehran strain of T. gondii in mice.\nThe obtained parasitological results demonstrated the marked anti-Toxoplasma effects of MCEO: the mean number of T. gondii tissue cysts in experimental groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001) was reduced in a dose-dependent manner compared with the control group (C2). The results also showed that the mean diameter of tissue cysts was significantly reduced in mice of experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001).\nConsidering the antiparasitic activity of M. communis, Mahmoudvand et al. (2015) demonstrated that the essential oil and methanolic extract of M. communis significantly suppressed the growth of promastigote and amastigote forms of Leishmania tropica, with IC50 values ranging from 8.4 to 40.8 μg/mL [20]. In another study, Azadbakht et al. (2004) showed that the essential oil and methanolic extract of M. communis at concentrations of 0.0001 to 0.1 considerably reduced the growth rate of Trichomonas vaginalis trophozoites in in vitro experiments [27]. The results of another study showed that MCEO at doses of 12.5 to 100 μg/mL significantly reduced the viability of protoscoleces of Echinococcus granulosus in vitro [28]. The anti-plasmodial effects of the methanolic extract of M. communis were demonstrated against chloroquine-resistant (K1) and chloroquine-sensitive (3D7) strains of Plasmodium falciparum, with IC50 values of 35.44 and 0.87 µg/mL, respectively [29]; the authors also concluded that M. communis methanolic extract considerably reduced parasitemia in mice infected with Plasmodium berghei after 4 days of treatment.\nAlthough the chemical composition of MCEO has been studied in different parts of the world [21], the chemical composition of essential oils is somewhat variable depending on factors such as the collection site, the plant part used, the time of harvest, and the extraction method [30]. Based on previous studies, terpenoid compounds such as 1,8-cineole, α-pinene, limonene, linalool, and α-terpinolene are considered the main components of M. communis essential oil [31]. In agreement with previous studies, our results show that the major constituents of MCEO were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%) [32]. Reviews have demonstrated the antiviral, antibacterial, antifungal, and antiparasitic activities of terpenes, terpenoids, and their derivatives against a wide range of pathogenic strains [19]. This indicates that these phytoconstituents could be responsible for the antimicrobial activity, although their precise mechanism of action is not clearly understood. Previous studies have shown that these compounds exert antimicrobial effects through disruption of the cell membrane, inhibition of oxygen consumption, inhibition of virulence factors, etc. [26].\nSince one of the most important mechanisms for the control of toxoplasmosis is the immune system, mainly cellular immunity, we evaluated the mRNA levels of innate immunity mediators such as IFN-γ and IL-12 by quantitative real-time PCR [33]. Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase was observed in mice treated with MCEO at the Ex2 and Ex3 doses when compared with the control groups. With respect to the immunomodulatory effects, it has been proven that the cytokine IL-12 controls nitric oxide synthesis via IFN-γ. In T. gondii infection, the production of nitric oxide is controlled by partial inhibition of the synthesis of nitric oxide synthetase. Theoretically, modulation of these cytokines is extremely important, because nitric oxide is one of the initial effectors of the immune response against toxoplasmosis [34]. Our findings suggest that the decrease in parasite load in the infected mice treated with MCEO can be attributed to the strengthening of the immune system, principally the innate immune system, which results in the control of T. gondii infection. Considering the toxicity of MCEO, Mahmoudvand et al. (2016) showed that there was no significant toxicity in clinical chemistry and hematological parameters after 14 days of oral administration of MCEO at doses of 0.05, 0.1, 0.2, and 0.4 mL/kg in tested mice, indicating that MCEO at the doses tested in the present study has no toxicity in BALB/c mice [28].\nAlthough the present investigation demonstrated the marked anti-Toxoplasma effects of MCEO, several important points must be considered in the use of plant products, including the use of a standard method for the preparation of the essential oil, the proper selection of concentrations or dilutions, the identification of the most active fraction or extract, the selection of the type of study to better investigate the mechanism of action, and the study of the pharmacokinetic profile of the plant products [35].", " 4.1. Collecting the Plant Materials In this investigation, the leaves of the plant were collected from mountainous areas of Kerman province in September 2016. After the plant was identified by a botanist, a voucher specimen was deposited at the Herbarium of the Department of Pharmacognosy, School of Pharmacy (Kerman, Iran) (KF1356).\n 4.2. Isolation of the Essential Oil About 500 g of air-dried leaves were subjected to hydro-distillation for 3 h in an all-glass Clevenger-type apparatus. The obtained essential oil was dried over anhydrous sodium sulfate and stored in the dark at 4 °C in air-tight glass vials sealed under nitrogen until testing [36].\n 4.3. Gas Chromatography/Mass Spectrometry (GC/MS) Analysis of Essential Oil A Hewlett-Packard 6890 gas chromatograph (Hewlett-Packard, Palo Alto, CA, USA) equipped with an HP-5MS column (30 m × 0.25 mm, film thickness 0.25 µm) was used for the GC analysis. Other device specifications and procedures, such as the column temperature, injector and interface temperatures, and helium flow rate, were described previously by Mahmoudvand et al. [28]. To determine the chemical composition of the essential oil, the relative retention time and mass spectrum of each detected compound were compared with the Wiley 2001 library data or literature values [31].\n 4.4. Experimental Design and Infection 4.4.1. Animals A total of 48 male BALB/c mice (6–8 weeks old) weighing 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/dark cycle at 21 ± 2 °C with free access to water and food ad libitum. Mice were handled according to standard protocols for the use of laboratory animals [37].\n 4.4.2. Parasite In this survey, the Tehran strain of T. gondii (type II) was kindly provided by Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts into new BALB/c mice every 90 days.\n 4.4.3. Animal Model of Chronic Toxoplasmosis The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of homogenized brain suspension (obtained from infected mice) containing at least 25–30 tissue cysts, together with penicillin and streptomycin, was injected intraperitoneally into the mice of each studied group.\n 4.4.4. Design Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental (Ex)) with six sub-groups: C1 (non-treated, non-infected), C2 (treated with olive oil as solvent), C3 (infected mice treated with atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups except C1 were infected with the Tehran strain of T. gondii. The MCEO doses were selected based on a previous study by the present authors, which showed that MCEO at these doses has no toxicity in mice [28].\n 4.5. Serological Tests To confirm the development of toxoplasmosis, serum samples from the mice of each tested group were collected for evaluation of anti-T. gondii IgG antibody by a modified agglutination test (MAT) kit (Toxo-Screen DA, bioMérieux, Lyon, France); formalin-killed whole tachyzoites of T. gondii were prepared and the procedures were carried out according to the method described by Shaapan et al. [38]. An agglutination titer of ≥1/20 was considered positive and was end-titrated by 2-fold dilutions.\n 4.6. Sample Collection To collect the brain samples, the mice in each group were deeply anesthetized by intraperitoneal administration of ketamine (150 mg/kg) and xylazine (10 mg/kg). Following decapitation, the whole brain of each mouse was aseptically collected. The right hemisphere was used to evaluate the parasitological changes, while the other hemisphere was stored at −80 °C for the molecular examinations.\n 4.7. Anti-Parasitic Activity To assess the effects of MCEO on T. gondii infection, the right brain hemisphere of each mouse was used to prepare unstained smears; the diameter and number of tissue cysts were then determined at magnifications of 100× and 400× by light microscopy [39].\n 4.8. Induction of Innate Immune System The mRNA levels of IFN-γ and IL-12, which are considered key factors in the control of toxoplasmosis, were measured in all tested mice by quantitative real-time PCR. Total brain RNA was extracted with RNeasy kits (Qiagen, Hilden, Germany), and the isolated RNA was reverse-transcribed according to the manufacturer's protocols. The resulting complementary DNA (cDNA) was used for conventional PCR amplification or real-time PCR. Real-time PCR was performed on an iQ5 real-time PCR detection system (Bio-Rad, Hercules, CA), and all amplification products were detected with SYBR Green [40]. The real-time PCR conditions consisted of an initial denaturation at 95 °C for 10 min; 40 amplification cycles (denaturation at 95 °C for 10 s, annealing at 56 °C for 30 s, and elongation at 72 °C for 30 s); and a final cycle at 72 °C for 5 min. The iQ™5 optical system software (Bio-Rad) was used for data analysis. β-actin, a well-known housekeeping gene, served as the normalization control. The oligonucleotide primers used for real-time RT-PCR analysis are shown in Table 1.\n 4.9. Statistical Analysis Data analysis was carried out using the SPSS statistical package, version 17.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA with Tukey's post hoc test was used to assess differences between experimental groups. A p < 0.05 was considered statistically significant.", "In this investigation, the leaves of the plant were collected from mountainous areas of Kerman province in September 2016. After the plant was identified by a botanist, a voucher specimen was deposited at the Herbarium of the Department of Pharmacognosy, School of Pharmacy (Kerman, Iran) (KF1356).", "About 500 g of air-dried leaves were subjected to hydro-distillation for 3 h in an all-glass Clevenger-type apparatus. The obtained essential oil was dried over anhydrous sodium sulfate and stored in the dark at 4 °C in air-tight glass vials sealed under nitrogen until testing [36].", "A Hewlett-Packard 6890 gas chromatograph (Hewlett-Packard, Palo Alto, CA, USA) equipped with an HP-5MS column (30 m × 0.25 mm, film thickness 0.25 µm) was used for the GC analysis. Other device specifications and procedures, such as the column temperature, injector and interface temperatures, and helium flow rate, were described previously by Mahmoudvand et al. [28]. To determine the chemical composition of the essential oil, the relative retention time and mass spectrum of each detected compound were compared with the Wiley 2001 library data or literature values [31].",
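As a small numerical check on Section 4.2, the 0.41% (v/w) yield reported in the Results follows directly from the volume of oil recovered per mass of dried leaves. The sketch below shows the arithmetic; the 2.05 mL oil volume is back-calculated for illustration (0.41% of 500 g) and is not a figure reported by the authors.

```python
# Minimal sketch of a %(v/w) essential-oil yield calculation.
# oil_volume_ml is a hypothetical illustration, not reported data.
def yield_percent_vw(oil_volume_ml: float, plant_mass_g: float) -> float:
    """Essential-oil yield as volume of oil per mass of dried plant material."""
    return 100.0 * oil_volume_ml / plant_mass_g

oil_volume_ml = 2.05   # assumed: ~0.41% of 500 g air-dried leaves
plant_mass_g = 500.0   # stated in Section 4.2

print(f"Yield: {yield_percent_vw(oil_volume_ml, plant_mass_g):.2f}% (v/w)")
```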
" 4.4.1. Animals A total of 48 male BALB/c mice (6–8 weeks old) weighing 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/dark cycle at 21 ± 2 °C with free access to water and food ad libitum. Mice were handled according to standard protocols for the use of laboratory animals [37].\n 4.4.2. Parasite In this survey, the Tehran strain of T. gondii (type II) was kindly provided by Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts into new BALB/c mice every 90 days.\n 4.4.3. Animal Model of Chronic Toxoplasmosis The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of homogenized brain suspension (obtained from infected mice) containing at least 25–30 tissue cysts, together with penicillin and streptomycin, was injected intraperitoneally into the mice of each studied group.\n 4.4.4. Design Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental (Ex)) with six sub-groups: C1 (non-treated, non-infected), C2 (treated with olive oil as solvent), C3 (infected mice treated with atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups except C1 were infected with the Tehran strain of T. gondii. The MCEO doses were selected based on a previous study by the present authors, which showed that MCEO at these doses has no toxicity in mice [28].", "A total of 48 male BALB/c mice (6–8 weeks old) weighing 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/dark cycle at 21 ± 2 °C with free access to water and food ad libitum. Mice were handled according to standard protocols for the use of laboratory animals [37].", "In this survey, the Tehran strain of T. gondii (type II) was kindly provided by Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts into new BALB/c mice every 90 days.", "The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of homogenized brain suspension (obtained from infected mice) containing at least 25–30 tissue cysts, together with penicillin and streptomycin, was injected intraperitoneally into the mice of each studied group.", "Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental (Ex)) with six sub-groups: C1 (non-treated, non-infected), C2 (treated with olive oil as solvent), C3 (infected mice treated with atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups except C1 were infected with the Tehran strain of T. gondii. The MCEO doses were selected based on a previous study by the present authors, which showed that MCEO at these doses has no toxicity in mice [28].", "To confirm the development of toxoplasmosis, serum samples from the mice of each tested group were collected for evaluation of anti-T. gondii IgG antibody by a modified agglutination test (MAT) kit (Toxo-Screen DA, bioMérieux, Lyon, France); formalin-killed whole tachyzoites of T. gondii were prepared and the procedures were carried out according to the method described by Shaapan et al. [38]. An agglutination titer of ≥1/20 was considered positive and was end-titrated by 2-fold dilutions.", "To collect the brain samples, the mice in each group were deeply anesthetized by intraperitoneal administration of ketamine (150 mg/kg) and xylazine (10 mg/kg). Following decapitation, the whole brain of each mouse was aseptically collected. The right hemisphere was used to evaluate the parasitological changes, while the other hemisphere was stored at −80 °C for the molecular examinations.", "To assess the effects of MCEO on T. gondii infection, the right brain hemisphere of each mouse was used to prepare unstained smears; the diameter and number of tissue cysts were then determined at magnifications of 100× and 400× by light microscopy [39].", "The mRNA levels of IFN-γ and IL-12, which are considered key factors in the control of toxoplasmosis, were measured in all tested mice by quantitative real-time PCR. Total brain RNA was extracted with RNeasy kits (Qiagen, Hilden, Germany), and the isolated RNA was reverse-transcribed according to the manufacturer's protocols. The resulting complementary DNA (cDNA) was used for conventional PCR amplification or real-time PCR. Real-time PCR was performed on an iQ5 real-time PCR detection system (Bio-Rad, Hercules, CA), and all amplification products were detected with SYBR Green [40]. The real-time PCR conditions consisted of an initial denaturation at 95 °C for 10 min; 40 amplification cycles (denaturation at 95 °C for 10 s, annealing at 56 °C for 30 s, and elongation at 72 °C for 30 s); and a final cycle at 72 °C for 5 min. The iQ™5 optical system software (Bio-Rad) was used for data analysis. β-actin, a well-known housekeeping gene, served as the normalization control. The oligonucleotide primers used for real-time RT-PCR analysis are shown in Table 1.",
"Data analysis was carried out using the SPSS statistical package, version 17.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA with Tukey's post hoc test was used to assess differences between experimental groups. A p < 0.05 was considered statistically significant.", "The findings of the present study demonstrated the marked anti-Toxoplasma effects of MCEO in mice infected with T. gondii: oral administration of MCEO at doses of 200 and 300 mg/kg for 21 days prevented severe symptoms of toxoplasmosis in the mouse model. Nevertheless, these anti-Toxoplasma effects, along with other findings such as enhanced innate immunity and low toxicity, require further confirmation by additional investigations in this field." ]
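Section 4.8 reports IFN-γ and IL-12 mRNA levels normalized to β-actin. A common way to reduce SYBR Green qPCR data of this kind is the 2^(−ΔΔCt) method; the paper does not state which quantification model was used, so the sketch below only illustrates that standard approach, with invented Ct values rather than the study's raw data.

```python
# Minimal sketch of 2^(-ΔΔCt) relative expression (Livak method).
# All Ct values are invented placeholders, NOT data from this study.

def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Fold change of a target gene vs. a control group, normalized to a reference gene."""
    delta_ct_sample = ct_target - ct_ref             # normalize sample to β-actin
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl  # normalize control to β-actin
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values: IFN-γ in an MCEO-treated mouse vs. an infected control.
fold = relative_expression(ct_target=24.1, ct_ref=17.0,            # treated mouse
                           ct_target_ctrl=26.5, ct_ref_ctrl=17.2)  # control mouse
print(f"IFN-γ fold change vs. control: {fold:.2f}")
```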
[ "intro", "results", null, null, null, null, null, "discussion", null, null, null, null, null, null, null, null, null, null, null, null, null, null, "conclusions" ]
[ "chronic toxoplasmosis", "herbal medicines", "essential oils", "Myrtus communis", "Toxoplasma gondii" ]
1. Introduction: Toxoplasmosis is one of the most prevalent zoonotic parasitic diseases, caused by the intracellular parasite Toxoplasma gondii (T. gondii). This parasite affects more than 30% of the world’s population and a wide range of warm-blooded animals [1]. Humans, as intermediate hosts, are infected through three main routes: (i) ingestion of undercooked or raw meat containing tissue cysts of T. gondii, (ii) consumption of water and food contaminated with sporulated oocysts excreted in the feces of cats, the definitive host, and (iii) congenital infection, when the mother becomes infected during pregnancy by one of the two previous routes [2,3]. Considering the clinical manifestations of toxoplasmosis, the disease does not cause any specific symptoms in healthy, immunocompetent people; however, a severe and even deadly form can be observed in immunocompromised individuals (such as patients with human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) and organ-transplant recipients) and in congenitally infected fetuses [4]. At present, chemotherapy with the combination of pyrimethamine and sulfadiazine, followed by azithromycin, clindamycin, atovaquone, etc., is considered the preferred treatment for toxoplasmosis; however, studies in recent years have shown that these drugs are associated with side effects such as osteoporosis and teratogenic effects, mostly in immunocompromised patients [5,6]. Atovaquone, a hydroxynaphthoquinone derivative, has potent activity against tissue cysts by blocking the respiratory chain of Toxoplasma; it is therefore broadly used in vivo against T. gondii during both stages of infection [7]. Since there is currently no effective vaccine to prevent toxoplasmosis in humans or animals, prophylaxis can be considered the best way to prevent toxoplasmosis, especially in immunocompromised individuals with a CD4 count below 100 cells/μL and in pregnant women not previously determined to be seronegative for Toxoplasma immunoglobulin G (IgG) [8,9]. Since ancient times, medicinal herbs and their derivatives have been broadly used for health promotion and for the therapy of chronic, as opposed to life-threatening, diseases [10,11]. Herbal medicines have also been used successfully in the treatment of a wide range of bacterial, viral, fungal, and parasitic infections [12,13,14,15,16]. Myrtus communis L. (M. communis), also called myrtle (Myrtaceae family), is a medicinal herb that has been broadly used in folk medicine around the world [17]. Since ancient civilizations, myrtle has been applied in traditional medicine as a reliever of stomach aches, for wound healing, as an antihemorrhoid remedy, etc. [18]. Recently, modern medicine has demonstrated that various parts of this plant, such as the leaves, fruits, roots, berries, and branches, have different pharmacological properties, including anti-inflammatory, analgesic, antioxidant, anticancer, anti-diabetic, anti-mutagenic, and neuro-protective activities [19]. Numerous studies have also reported antimicrobial effects of M. communis against a wide range of pathogenic bacteria (Staphylococcus aureus, Listeria monocytogenes, Pseudomonas aeruginosa, Escherichia coli, Klebsiella pneumoniae, etc.), viruses (Herpes simplex), fungi (Candida spp., etc.), and parasites [19,20,21,22].
Essential oils and their constituents have long been considered promising therapeutic agents because of their established safety and broad biological and pharmacological activities [23]. Reviews have shown that the essential oil of M. communis contains large amounts of terpenes, terpenoids, and phenylpropanoids [17]. Given the various pharmacological effects of M. communis, the present study aimed to evaluate the prophylactic effects of M. communis essential oil against chronic toxoplasmosis induced by the Tehran strain of T. gondii in mice. 2. Results: 2.1. GC/MS Analysis The yield of essential oil was 0.41% (v/w). The density of the essential oil at 25 °C was 0.831 g/mL and its refractive index at 25 °C was 1.391. GC/MS analysis identified twenty-five compounds, representing 93.01% of the total essential oil (Table S1, Supplementary Materials). The major constituents were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%). 2.2. Parasitological Study 2.2.1. The Mean Number of T. gondii Tissue Cysts Figure 1 shows the frequency of the brain tissue cysts in the tested mice of each group. Oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts in mice of the tested groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2). 2.2.2. The Mean Diameter of T. gondii Tissue Cysts Regarding the mean diameter of T. gondii tissue cysts, the mean diameter in control group C2 was 57.4 ± 3.35 µm, compared with 43.5 ± 2.96 µm in experimental group Ex1; the reduction was statistically significant in experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2). 2.3. Cytokine Expression by Real-Time PCR Figure 3 shows the mRNA levels of IFN-γ and IL-12 in mice of all tested groups. Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase (p < 0.001) was observed in groups Ex2 and Ex3 when compared with the control groups. 2.1. GC/MS Analysis: The yield of essential oil was 0.41% (v/w). The density of the essential oil at 25 °C was 0.831 g/mL and its refractive index at 25 °C was 1.391. GC/MS analysis identified twenty-five compounds, representing 93.01% of the total essential oil (Table S1, Supplementary Materials). The major constituents were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%). 2.2. Parasitological Study: 2.2.1. The Mean Number of T. gondii Tissue Cysts Figure 1 shows the frequency of the brain tissue cysts in the tested mice of each group. Oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts in mice of the tested groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2). 2.2.2. The Mean Diameter of T. gondii Tissue Cysts Regarding the mean diameter of T. gondii tissue cysts, the mean diameter in control group C2 was 57.4 ± 3.35 µm, compared with 43.5 ± 2.96 µm in experimental group Ex1; the reduction was statistically significant in experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2). 2.2.1. The Mean Number of T. gondii Tissue Cysts: Figure 1 shows the frequency of the brain tissue cysts in the tested mice of each group. Oral administration of MCEO for 3 weeks significantly decreased the mean number of T. gondii tissue cysts in mice of the tested groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001), in comparison with the control group (C2). 2.2.2. The Mean Diameter of T. gondii Tissue Cysts: Regarding the mean diameter of T. gondii tissue cysts, the mean diameter in control group C2 was 57.4 ± 3.35 µm, compared with 43.5 ± 2.96 µm in experimental group Ex1; the reduction was statistically significant in experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001) (Figure 2). 2.3. Cytokine Expression by Real-Time PCR: Figure 3 shows the mRNA levels of IFN-γ and IL-12 in mice of all tested groups. Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase (p < 0.001) was observed in groups Ex2 and Ex3 when compared with the control groups.
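The p-values quoted above come from group-wise comparisons of the kind described in Section 4.9 (one-way ANOVA with Tukey's post hoc test in SPSS). For readers replicating that analysis outside SPSS, the sketch below runs the equivalent tests in Python on invented cyst counts; scipy and statsmodels are assumed to be installed, and none of the numbers are from this study.

```python
# Minimal sketch: one-way ANOVA with Tukey's post hoc test (cf. Section 4.9),
# run on invented cyst counts; requires scipy and statsmodels.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

c2 = [1450, 1520, 1380, 1490]   # hypothetical control (C2) cyst counts
ex1 = [1100, 1180, 1050, 1120]  # hypothetical MCEO 100 mg/kg (Ex1)
ex3 = [420, 390, 450, 410]      # hypothetical MCEO 300 mg/kg (Ex3)

f_stat, p_value = f_oneway(c2, ex1, ex3)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

counts = np.concatenate([c2, ex1, ex3])
labels = ["C2"] * len(c2) + ["Ex1"] * len(ex1) + ["Ex3"] * len(ex3)
print(pairwise_tukeyhsd(counts, labels, alpha=0.05))  # pairwise group differences
```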
3. Discussion: At present, the combination of sulfonamide and pyrimethamine is considered the gold-standard therapy for toxoplasmosis [24]. According to the reports, these drugs are associated with adverse effects such as teratogenicity, hematological disorders, myelosuppression, and gastrointestinal disturbances [5,6]; hence, the discovery of novel effective agents, especially from natural sources with low toxicity, is an absolute need. The World Health Organization (WHO) reported that more than two-thirds of the world’s population rely on folk medicine for their primary therapeutic needs [25]. Reviews have demonstrated that herbs used for therapeutic purposes in traditional medicine contain a variety of compounds with different biological and therapeutic activities, especially in the treatment of microbial infections [26]. Here, we aimed to assess the prophylactic effects of M. communis essential oil against chronic toxoplasmosis induced by the Tehran strain of T. gondii in mice. The obtained parasitological results demonstrated the marked anti-Toxoplasma effects of MCEO: the mean number of T. gondii tissue cysts in experimental groups Ex1 (p < 0.05), Ex2 (p < 0.001), and Ex3 (p < 0.001) was reduced in a dose-dependent manner compared with the control group (C2). The results also showed that the mean diameter of tissue cysts was significantly reduced in mice of experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001). Considering the antiparasitic activity of M. communis, Mahmoudvand et al. (2015) demonstrated that the essential oil and methanolic extract of M. communis significantly suppressed the growth of promastigote and amastigote forms of Leishmania tropica, with IC50 values ranging from 8.4 to 40.8 μg/mL [20]. In another study, Azadbakht et al. (2004) showed that the essential oil and methanolic extract of M. communis at concentrations of 0.0001 to 0.1 considerably reduced the growth rate of Trichomonas vaginalis trophozoites in in vitro experiments [27]. The results of another study showed that MCEO at doses of 12.5 to 100 μg/mL significantly reduced the viability of protoscoleces of Echinococcus granulosus in vitro [28]. The anti-plasmodial effects of the methanolic extract of M. communis were demonstrated against chloroquine-resistant (K1) and chloroquine-sensitive (3D7) strains of Plasmodium falciparum, with IC50 values of 35.44 and 0.87 µg/mL, respectively [29]; the authors also concluded that M. communis methanolic extract considerably reduced parasitemia in mice infected with Plasmodium berghei after 4 days of treatment. Although the chemical composition of MCEO has been studied in different parts of the world [21], the chemical composition of essential oils is somewhat variable depending on factors such as the collection site, the plant part used, the time of harvest, and the extraction method [30]. Based on previous studies, terpenoid compounds such as 1,8-cineole, α-pinene, limonene, linalool, and α-terpinolene are considered the main components of M. communis essential oil [31]. In agreement with previous studies, our results show that the major constituents of MCEO were α-pinene (24.7%), 1,8-cineole (19.6%), and linalool (12.6%) [32]. Reviews have demonstrated the antiviral, antibacterial, antifungal, and antiparasitic activities of terpenes, terpenoids, and their derivatives against a wide range of pathogenic strains [19]. This indicates that these phytoconstituents could be responsible for the antimicrobial activity, although their precise mechanism of action is not clearly understood. Previous studies have shown that these compounds exert antimicrobial effects through disruption of the cell membrane, inhibition of oxygen consumption, inhibition of virulence factors, etc. [26]. Since one of the most important mechanisms for the control of toxoplasmosis is the immune system, mainly cellular immunity, we evaluated the mRNA levels of innate immunity mediators such as IFN-γ and IL-12 by quantitative real-time PCR [33]. Although the mRNA levels of IFN-γ and IL-12 were elevated in all mice of the experimental groups, a significant increase was observed in mice treated with MCEO at the Ex2 and Ex3 doses when compared with the control groups. With respect to the immunomodulatory effects, it has been proven that the cytokine IL-12 controls nitric oxide synthesis via IFN-γ. In T. gondii infection, the production of nitric oxide is controlled by partial inhibition of the synthesis of nitric oxide synthetase. Theoretically, modulation of these cytokines is extremely important, because nitric oxide is one of the initial effectors of the immune response against toxoplasmosis [34]. Our findings suggest that the decrease in parasite load in the infected mice treated with MCEO can be attributed to the strengthening of the immune system, principally the innate immune system, which results in the control of T. gondii infection. Considering the toxicity of MCEO, Mahmoudvand et al. (2016) showed that there was no significant toxicity in clinical chemistry and hematological parameters after 14 days of oral administration of MCEO at doses of 0.05, 0.1, 0.2, and 0.4 mL/kg in tested mice, indicating that MCEO at the doses tested in the present study has no toxicity in BALB/c mice [28]. Although the present investigation demonstrated the marked anti-Toxoplasma effects of MCEO, several important points must be considered in the use of plant products, including the use of a standard method for the preparation of the essential oil, the proper selection of concentrations or dilutions, the identification of the most active fraction or extract, the selection of the type of study to better investigate the mechanism of action, and the study of the pharmacokinetic profile of the plant products [35]. 4. Materials and methods: 4.1. Collecting the Plant Materials In this investigation, the leaves of the plant were collected from mountainous areas of Kerman province in September 2016. After the plant was identified by a botanist, a voucher specimen was deposited at the Herbarium of the Department of Pharmacognosy, School of Pharmacy (Kerman, Iran) (KF1356). 4.2. Isolation of the Essential Oil About 500 g of air-dried leaves were subjected to hydro-distillation for 3 h in an all-glass Clevenger-type apparatus. The obtained essential oil was dried over anhydrous sodium sulfate and stored in the dark at 4 °C in air-tight glass vials sealed under nitrogen until testing [36]. 4.3. Gas Chromatography/Mass Spectrometry (GC/MS) Analysis of Essential Oil A Hewlett-Packard 6890 gas chromatograph (Hewlett-Packard, Palo Alto, CA, USA) equipped with an HP-5MS column (30 m × 0.25 mm, film thickness 0.25 µm) was used for the GC analysis. Other device specifications and procedures, such as the column temperature, injector and interface temperatures, and helium flow rate, were described previously by Mahmoudvand et al. [28]. To determine the chemical composition of the essential oil, the relative retention time and mass spectrum of each detected compound were compared with the Wiley 2001 library data or literature values [31].
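The identification step in Section 4.3 compares each peak's retention behavior and mass spectrum against library entries. As a rough illustration of the retention-matching part only (spectral matching against the Wiley library is omitted), the sketch below flags library compounds whose retention index falls within a tolerance of an observed peak; all numbers and the tolerance are assumptions for illustration, not the study's calibration data.

```python
# Minimal sketch: match observed GC peaks to library compounds by retention index.
# Retention indices and the ±5 tolerance are illustrative assumptions only;
# real identification also requires comparing mass spectra (e.g., Wiley library).
library_ri = {"alpha-pinene": 932, "1,8-cineole": 1026, "linalool": 1095}
observed_peaks = [930, 1028, 1093, 1210]  # hypothetical retention indices
TOLERANCE = 5

for peak_ri in observed_peaks:
    matches = [name for name, ri in library_ri.items() if abs(ri - peak_ri) <= TOLERANCE]
    print(f"RI {peak_ri}: {matches or 'no candidate within tolerance'}")
```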
4.4. Experimental Design and Infection 4.4.1. Animals A total of 48 male BALB/c mice (6–8 weeks old) weighing 20 to 25 g were used in this study. Mice were housed in a colony room with a 12:12 h light/dark cycle at 21 ± 2 °C with free access to water and food ad libitum. Mice were handled according to standard protocols for the use of laboratory animals [37]. 4.4.2. Parasite In this survey, the Tehran strain of T. gondii (type II) was kindly provided by Prof. Hossein Keshavarz and Prof. Saeedeh Shoajee (Tehran University of Medical Sciences, Tehran, Iran). The parasites were passaged via intraperitoneal injection of 15–20 tissue cysts into new BALB/c mice every 90 days. 4.4.3. Animal Model of Chronic Toxoplasmosis The chronic model of toxoplasmosis in mice was induced based on the method described previously [37]. To do this, 0.5 mL of homogenized brain suspension (obtained from infected mice) containing at least 25–30 tissue cysts, together with penicillin and streptomycin, was injected intraperitoneally into the mice of each studied group. 4.4.4. Design Figure 4 shows the experimental design of the present study. Male BALB/c mice were divided into two main groups (control (C) and experimental (Ex)) with six sub-groups: C1 (non-treated, non-infected), C2 (treated with olive oil as solvent), C3 (infected mice treated with atovaquone 100 mg/kg/day), Ex1 (MCEO 100 mg/kg/day), Ex2 (MCEO 200 mg/kg/day), and Ex3 (MCEO 300 mg/kg/day). After 3 weeks of treatment, the mice in all groups except C1 were infected with the Tehran strain of T. gondii. The MCEO doses were selected based on a previous study by the present authors, which showed that MCEO at these doses has no toxicity in mice [28].
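To keep the six-arm design of Section 4.4.4 straight when analyzing or simulating such an experiment, it can help to encode the grouping as data. The sketch below restates the published group assignments in a small structure; the per-group n of 8 is an assumption derived from 48 mice across six groups, not an allocation stated by the authors.

```python
# Minimal sketch: the six-arm design of Section 4.4.4 encoded as data.
# n_per_group = 8 is an assumption (48 mice / 6 groups); the paper does not
# explicitly state the per-group allocation.
groups = {
    "C1":  {"treatment": None,         "dose_mg_kg_day": 0,   "infected": False},
    "C2":  {"treatment": "olive oil",  "dose_mg_kg_day": 0,   "infected": True},
    "C3":  {"treatment": "atovaquone", "dose_mg_kg_day": 100, "infected": True},
    "Ex1": {"treatment": "MCEO",       "dose_mg_kg_day": 100, "infected": True},
    "Ex2": {"treatment": "MCEO",       "dose_mg_kg_day": 200, "infected": True},
    "Ex3": {"treatment": "MCEO",       "dose_mg_kg_day": 300, "infected": True},
}
n_per_group = 48 // len(groups)  # assumed equal allocation

for name, g in groups.items():
    print(f"{name}: n={n_per_group}, treatment={g['treatment']}, "
          f"dose={g['dose_mg_kg_day']} mg/kg/day, infected={g['infected']}")
```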
4.5. Serological Tests: To confirm the development of toxoplasmosis, serum samples from mice of each tested group were collected for evaluation of anti-T. gondii IgG antibody using a modified agglutination test (MAT) kit (Toxo-Screen DA, bioMérieux, Lyon, France). Formalin-killed whole tachyzoites of T. gondii were prepared, and procedures were carried out according to the method described by Shaapan et al. [38]. An agglutination titre of ≥1:20 was considered positive and was end-titrated by 2-fold dilutions.
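The end-point titration by 2-fold dilutions described above can be illustrated with a short sketch; the read-outs below are hypothetical, and we assume the 1:20 screening dilution as the starting point of the series.

    # Hypothetical MAT read-outs over a 2-fold serial dilution series,
    # starting at the 1:20 screening dilution (True = agglutination seen).
    dilutions = [20 * 2 ** i for i in range(6)]          # 20, 40, ..., 640
    readouts = [True, True, True, False, False, False]   # example data only

    positive = [d for d, agg in zip(dilutions, readouts) if agg]
    titre = max(positive) if positive else None          # last agglutinating dilution
    print(f"end-point titre: 1:{titre}" if titre else "negative at 1:20")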
4.6. Sample Collection: To collect the brain samples, mice in each group were deeply anesthetized by intraperitoneal administration of ketamine (150 mg/kg) and xylazine (10 mg/kg). Following decapitation, the whole brain of each mouse was aseptically collected. The right hemisphere was used to evaluate the parasitological changes, while the other hemisphere was kept at −80 °C for the molecular examinations. 4.7. Anti-Parasitic Activity: To assess the effects of MCEO on T. gondii infection, the right hemisphere of the brain from each mouse was used to prepare unstained smears; the diameter and number of tissue cysts were then determined at magnifications of 100× and 400× by light microscopy [39]. 4.8. Induction of Innate Immune System: The mRNA levels of IFN-γ and IL-12, which are considered key factors in the control mechanisms of toxoplasmosis, were measured in all tested mice using quantitative real-time PCR. Total brain RNA was extracted with RNeasy kits (Qiagen, Hilden, Germany), and all isolated RNAs were reverse-transcribed according to the manufacturer's protocols. The resulting complementary DNA (cDNA) was used for conventional PCR amplification or real-time PCR. Real-time PCR was performed on an iQ5 real-time PCR detection system (Bio-Rad, Hercules, CA, USA), and all amplification products were detected with SYBR Green [40]. The real-time PCR conditions included an initial denaturation at 95 °C for 10 min and 40 amplification cycles (denaturation at 95 °C for 10 s, annealing at 56 °C for 30 s and elongation at 72 °C for 30 s), followed by one cycle at 72 °C for 5 min. The iQ5 optical system software (Bio-Rad) was used for data analysis. β-actin, a well-known housekeeping gene, served as the normalization control. The oligonucleotide primers used for real-time RT-PCR analysis are listed in Table 1.
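The paper normalizes target-gene expression to β-actin but does not name the quantification model; a common choice for SYBR Green data is the 2^(−ΔΔCt) method, sketched below under that assumption with made-up Ct values.

    # Relative expression by the 2^-(delta-delta-Ct) method. This model is
    # our assumption; the paper only states beta-actin normalization.
    def rel_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
        d_ct_sample = ct_target - ct_actin             # normalize to beta-actin
        d_ct_control = ct_target_ctrl - ct_actin_ctrl
        return 2 ** -(d_ct_sample - d_ct_control)      # fold change vs control

    # Hypothetical Ct values for IFN-gamma in a treated vs control mouse:
    print(rel_expression(24.1, 17.9, 26.3, 18.0))      # ~4.3-fold increase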
4.9. Statistical Analysis: Data analysis was carried out using the SPSS statistical package, version 17.0 (SPSS Inc., Chicago, IL, USA). One-way ANOVA with Tukey's post hoc test was used to assess differences between the experimental groups; p < 0.05 was considered statistically significant.
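The one-way ANOVA with Tukey's post hoc test can be reproduced outside SPSS; the sketch below uses scipy and statsmodels with made-up cyst counts, not the study's data.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical brain tissue-cyst counts per group (illustrative only).
    counts = {
        "C2": [1850, 1920, 1790, 1880],
        "Ex1": [1500, 1430, 1550, 1480],
        "Ex2": [900, 950, 870, 910],
        "Ex3": [600, 640, 580, 610],
    }

    f_stat, p = f_oneway(*counts.values())  # one-way ANOVA across groups
    print(f"ANOVA: F = {f_stat:.2f}, p = {p:.3g}")

    values = np.concatenate(list(counts.values()))
    labels = np.repeat(list(counts.keys()), [len(v) for v in counts.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey post hoc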
5. Conclusions: The findings of the present study demonstrated the notable anti-Toxoplasma effects of MCEO in mice infected with T. gondii. Oral administration of MCEO at doses of 200 and 300 mg/kg for 21 days was able to prevent severe symptoms of toxoplasmosis in the mouse model. However, these anti-Toxoplasma effects, along with other favourable properties such as improved innate immunity and low toxicity, require further support from additional investigations in this field.
Background: Myrtus communis (M. communis) is a wild aromatic plant used in traditional herbal medicine whose essential oil (MCEO) has demonstrated insecticidal, antioxidant, anti-inflammatory and antimicrobial activity. Methods: Gas chromatography/mass spectrometry (GC/MS) analysis was performed to determine the chemical composition of MCEO. Mice were then orally administered MCEO at doses of 100, 200 and 300 mg/kg/day, or atovaquone at 100 mg/kg, for 21 days. On the 15th day, the mice were infected by intraperitoneal inoculation of 20-25 tissue cysts of the Tehran strain of T. gondii. The mean number of brain tissue cysts and the mRNA levels of IL-12 and IFN-γ were measured for each tested group. Results: By GC/MS, the major constituents were α-pinene (24.7%), 1,8-cineole (19.6%) and linalool (12.6%). The mean number of T. gondii tissue cysts in the experimental groups Ex1 (p < 0.05), Ex2 (p < 0.001) and Ex3 (p < 0.001) was significantly reduced in a dose-dependent manner compared with the control group (C2). The mean diameter of tissue cysts was significantly reduced in mice of the experimental groups Ex2 (p < 0.01) and Ex3 (p < 0.001). Although the mRNA levels of IFN-γ and IL-12 were elevated in all experimental groups, a significant increase (p < 0.001) was observed in groups Ex2 and Ex3 compared with the control groups. Conclusions: The findings of the present study demonstrated the potent prophylactic effects of MCEO, especially at doses of 200 and 300 mg/kg, in mice infected with T. gondii. Although the notable anti-Toxoplasma effects of MCEO and its other properties, such as improved innate immunity and low toxicity, are encouraging, further confirmatory investigations are needed in this field.
1. Introduction: Toxoplasmosis is one of the most prevalent zoonotic parasitic diseases, caused by the intracellular parasite Toxoplasma gondii (T. gondii). This parasite affects more than 30% of the world's population and a wide range of warm-blooded animals [1]. Humans, as intermediate hosts, become infected through three main routes: (i) ingestion of undercooked or raw meat infected with tissue cysts of T. gondii; (ii) consumption of water and food contaminated with sporulated oocysts excreted in the feces of cats, the definitive host; and (iii) congenital infection, when the mother becomes infected during pregnancy by one of the two previous routes [2,3]. Regarding the clinical manifestations of toxoplasmosis, the disease does not cause any specific symptoms in healthy, immunocompetent people; however, a severe and even deadly form can be observed in immunocompromised individuals (such as patients with human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS) and organ transplant recipients) and in congenitally infected fetuses [4]. At present, chemotherapy with the combination of pyrimethamine and sulfadiazine, followed by azithromycin, clindamycin, atovaquone, etc., is considered the preferred treatment for toxoplasmosis; however, studies in recent years have demonstrated that these drugs are associated with side effects such as osteoporosis and teratogenic effects, mostly in immunocompromised patients [5,6]. Atovaquone, a hydroxynaphthoquinone derivative, has potent activity against tissue cysts by blocking the respiratory chain of Toxoplasma; it is therefore broadly used for in vivo activity against T. gondii during both infection stages [7]. Since there is currently no effective vaccine to prevent toxoplasmosis in humans and animals, prophylaxis can be considered the best way to prevent toxoplasmosis, especially in immunocompromised individuals with a CD4 count below 100 cells/μL as well as in pregnant women not previously determined to be seronegative for Toxoplasma immunoglobulin G (IgG) [8,9]. Since ancient times, medicinal herbs and their derivatives have been broadly used for health promotion and for the therapy of chronic, as opposed to life-threatening, diseases [10,11]. Herbal medicines have also been successfully used in the treatment of a wide range of bacterial, viral, fungal and parasitic infections [12,13,14,15,16]. Myrtus communis L. (M. communis), also called myrtle (family Myrtaceae), is a medicinal herb that has been broadly applied in folk medicine around the world [17]. Since the old civilizations, myrtle has been used in traditional medicine for the relief of stomach aches, for wound healing, against haemorrhoids and for other complaints [18]. Recently, modern medicine has demonstrated that various parts of this plant, such as the leaves, fruits, roots, berries and branches, have different pharmacological properties, including anti-inflammatory, analgesic, antioxidant, anticancer, anti-diabetic, anti-mutagenic and neuro-protective effects [19]. Numerous studies have also reported antimicrobial effects of M. communis against a wide range of pathogenic strains of bacteria (Staphylococcus aureus, Listeria monocytogenes, Pseudomonas aeruginosa, Escherichia coli, Klebsiella pneumoniae, etc.), viruses (Herpes simplex), fungi (Candida spp., etc.) and parasites [19,20,21,22].
Essential oils and their constituents have long been considered promising therapeutic agents due to their established safety and broad biological and pharmacological activities [23]. Reviews have shown that the essential oil of M. communis contains large amounts of terpenes, terpenoids and phenylpropanoids [17]. Given the various pharmacological effects of M. communis, the present study aimed to evaluate the prophylactic effects of M. communis essential oil against chronic toxoplasmosis induced by the Tehran strain of T. gondii in mice.
8,916
395
[ 99, 340, 78, 79, 67, 3171, 58, 62, 119, 770, 76, 63, 58, 174, 95, 80, 59, 249, 51 ]
23
[ "mice", "tissue", "mceo", "group", "cysts", "tissue cysts", "gondii", "groups", "experimental", "mean" ]
[ "toxoplasmosis human animals", "manifestations toxoplasmosis disease", "toxoplasmosis prevalent zoonotic", "intracellular parasite toxoplasma", "parasite toxoplasma gondii" ]
null
[CONTENT] chronic toxoplasmosis | herbal medicines | essential oils | Myrtus communis | Toxoplasma gondii [SUMMARY]
null
[CONTENT] Animals | Antiparasitic Agents | Immunity, Innate | Mice | Myrtus | Oils, Volatile | Toxoplasma | Toxoplasmosis [SUMMARY]
null
[CONTENT] toxoplasmosis human animals | manifestations toxoplasmosis disease | toxoplasmosis prevalent zoonotic | intracellular parasite toxoplasma | parasite toxoplasma gondii [SUMMARY]
null
[CONTENT] mice | tissue | mceo | group | cysts | tissue cysts | gondii | groups | experimental | mean [SUMMARY]
null
[CONTENT] communis | etc | effects | toxoplasmosis | human | broadly | pharmacological | patients | wide range | effects communis [SUMMARY]
null
[CONTENT] mean | tissue | mean diameter | cysts | tissue cysts | diameter | 001 | gondii tissue | gondii tissue cysts | groups [SUMMARY]
[CONTENT] effects | anti toxoplasma effects mceo | exceptional anti toxoplasma effects | exceptional anti toxoplasma | toxoplasma effects | toxoplasma effects mceo | exceptional anti | exceptional | anti toxoplasma | anti toxoplasma effects [SUMMARY]
[CONTENT] mice | tissue | groups | mceo | tissue cysts | cysts | group | mean | experimental | gondii [SUMMARY]
[CONTENT] Myrtus [SUMMARY]
null
[CONTENT] GC/MS | 24.7% | 19.6% | 12.6% ||| T. | Ex3 | C2 ||| ||| IFN [SUMMARY]
[CONTENT] 200 | 300 mg/kg | T. ||| anti-Toxoplasma [SUMMARY]
[CONTENT] ||| GC/MS ||| 100 | 200 | 300 mg/kg/day | 100 mg/kg | 21 days ||| the 15th day | 20-25 | Tehran | T. ||| IFN ||| GC/MS | 24.7% | 19.6% | 12.6% ||| T. | Ex3 | C2 ||| ||| IFN ||| 200 | 300 mg/kg | T. ||| anti-Toxoplasma [SUMMARY]
Re-evaluating the prevalence and factors characteristic of catecholamine secreting head and neck paragangliomas.
34277980
We sought to characterize the prevalence and factors characteristic of head and neck paragangliomas (HNPGLs) that secrete catecholamines to inform best practices for diagnosis and management.
INTRODUCTION
This was a retrospective cohort study from 2000 to 2020 at a single-institution tertiary centre. One-hundred fifty-two patients (182 tumours) with HNPGLs with at least one measurement of urine or plasma catecholamines and/or catecholamine metabolite levels prior to treatment were included. We differentiated and characterized those patients with increased level(s) of any nature and those with 'clinically significant' versus 'clinically insignificant' catecholamine production.
METHODS
Thirty-one (20.4%) patients had increased catecholamine and/or catecholamine metabolite levels. In most patients, these levels were ≤5-fold above the upper limit of the reference range. Four of these 31 patients with increased levels were ultimately found to have an additional catecholamine secreting mediastinal paraganglioma or pheochromocytoma. Fourteen of 31 patients with HNPGL were deemed clinically significant secretors of catecholamines based on hyper-adrenergic symptoms and/or profound levels of normetanephrines. This cohort was enriched for patients with paragangliomas of the carotid body or cervical sympathetic chain and those with SDHB genetic mutations. Ultimately, the prevalence of clinically significant catecholamine secreting HNPGLs was determined to be 9.2% and 7.7% on a per-patient and per-tumour basis, respectively.
RESULTS
The rate of catecholamine excess in the current cohort of patients with HNPGLs was higher than previously reported. Neuroendocrine tumours of any anatomic subsite may secrete catecholamines, although not all increased laboratory level(s) are indicative of clinically significant catecholamine secretion causing symptoms or warranting adrenergic blockade.
CONCLUSIONS
[ "Adrenal Gland Neoplasms", "Catecholamines", "Head and Neck Neoplasms", "Humans", "Paraganglioma", "Prevalence", "Retrospective Studies" ]
8279627
INTRODUCTION
Head and neck paragangliomas (HNPGLs) are rare tumours derived from paraganglial cells within autonomic ganglia of the carotid body (CBP), vagus nerve (VP), jugular bulb (JP), Jacobson's nerve of the middle ear (TP) or cervical sympathetic chain (SCP). 1 Due to their cell of origin, HNPGLs have the potential to actively synthesize and secrete catecholamines with potentially deleterious systemic effects. Patients with catecholamine secreting HNPGLs may present with symptoms of catecholamine excess, including sustained or intermittent hypertension and tachycardia, cardiac palpitations, diaphoresis and/or pallor. Regardless of symptomatology, failure to identify and treat catecholamine excess may cause significant morbidity and even mortality in these patients, particularly those undergoing surgery. 2 As a result, contemporary clinical practice guidelines recommend biochemical testing of urine or plasma catecholamines and metabolites, usually metanephrine and normetanephrine levels, for all patients with newly diagnosed HNPGLs. 3 The prevalence of catecholamine secreting HNPGLs is typically low (approximately 3%–4% of tumours). 4 , 5 However, this estimate is based primarily on data from limited series published 20 years ago. More recent evidence suggests that the rate of catecholamine secreting HNPGLs may exceed 10%. 6 Recently, biochemical and genetic characteristics of HNPGLs, including standard treatment paradigms, have been vastly transformed. 7 , 8 There now exists a clear gap in the literature regarding the true rate and factors characteristic of catecholamine secreting HNPGLs, particularly those whose functional status may pose significant challenges for peri-operative management. Here, we report a large series of patients with HNPGLs with the primary aim of characterizing the prevalence and features of HNPGLs that secrete catecholamines at a level significant enough to cause symptoms or warrant consideration of adrenergic blockade, herein termed 'clinically significant'. In an era of increasingly personalized treatment for these tumours, our data may inform contemporary, best practices for biochemical screening and multi-disciplinary management of HNPGLs.
METHODS
This was a retrospective analysis of a prospectively maintained clinical database of patients with HNPGLs presenting to our institution for evaluation and management between 2000 and 2020. 9 Inclusion criteria for this study were as follows: (1) radiographically confirmed isolated or multi-focal HNPGL; (2) previously untreated HNPGL tumour(s); and (3) at least one laboratory measurement of urine or plasma catecholamine or catecholamine metabolite levels prior to treatment onset. Urine measurements included analysis of 24-h excretion of fractionated normetanephrines and metanephrines, vanillylmandelic acid (VMA), norepinephrine, epinephrine and dopamine via standard clinical assays. 10 Similarly, plasma measurements included analysis of fractionated normetanephrines and metanephrines, VMA, norepinephrine, epinephrine and dopamine, as described. 10 As expected, laboratory reference ranges were not uniform due to the twenty-year study period, differences in clinical assays, and a few patients with laboratories from other institutions that were not repeated upon presentation. As such, we recorded absolute laboratory levels and calculated a normalized 'per cent of reference range' level for each measurement as follows: [(absolute level − lower bound of reference range)/upper bound of reference range] × 100. Laboratory assessments were considered increased when the per cent of reference range level exceeded 100%. 11 As previous authors have posited, it is a clear oversimplification to characterize HNPGLs as simply functional or not. Rather, HNPGLs exhibit a continuum of hormonal activity influencing clinical presentation and need for hemodynamic management. 12 , 13 Thus, the primary goal of this study was to investigate the rate and characteristics of 'clinically significant' catecholamine secreting HNPGLs. Clinically significant catecholamine secreting HNPGLs are defined as tumours in patients where (1) any laboratory level(s) are accompanied by clear hyper-adrenergic symptoms at first presentation, as defined below; or (2) increased laboratory level(s) of normetanephrine ≥2-fold were present with or without hyper-adrenergic symptoms. We secondarily sought to determine the rate and factors characteristic of 'clinically insignificant' catecholamine secreting HNPGL, defined as any patient with increased laboratory level(s) not meeting the aforementioned criteria. Finally, we estimated the sensitivity and specificity of hyper-adrenergic symptoms (defined below) at first presentation for predicting increased laboratory level(s) and clinically significant catecholamine secretion in our patient population. Operational definitions for other recorded clinical variables are as follows: hyper-adrenergic symptoms at first presentation were defined as explicit documentation of sustained or intermittent palpitations, tachycardia, diaphoresis and/or tremors, or new-onset hypertension in conjunction with at least one of these other symptoms. Hypertension at first presentation was defined as systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg. Tachycardia at first presentation was defined as resting heart rate >100 beats per minute. Statistical comparisons between groups were made with the chi-square test and Student's t test for categorical and continuous variables, respectively. All statistical tests were two-tailed and performed with SPSS Version 27, with p ≤ .05 as the threshold for statistical significance.
This study was deemed exempt from informed consent by the University of Michigan Institutional Review Board (IRB).
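Because the normalization above is stated as an explicit formula, it translates directly into code; the laboratory value and reference range in this sketch are hypothetical.

    def pct_of_reference(level, low, high):
        # Per the paper's definition:
        # ((absolute level - lower bound) / upper bound) * 100
        return (level - low) / high * 100

    # Hypothetical plasma normetanephrine result of 1.31 nmol/L against a
    # 0-0.89 nmol/L reference range (values are illustrative only).
    pct = pct_of_reference(1.31, 0.0, 0.89)
    print(f"{pct:.0f}% of reference range; increased: {pct > 100}")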
RESULTS
Our study cohort consisted of 280 patients with HNPGLs comprising 318 discrete tumours. There were significant differences evident among patients in whom laboratories were drawn/documented (n = 152, 182 tumours) versus not (n = 128, 136 tumours). Specifically, patients in the 'labs drawn' group were younger overall and more likely to endorse hyper-adrenergic symptoms at presentation. HNPGL tumour subsite(s) also differed, with more JP and multi-focal HNPGLs and fewer TP in the 'labs drawn' group overall (Table 1: demographic, clinical and tumour characteristics of HNPGL patient cohorts, laboratories not drawn (n = 128) versus laboratories drawn (n = 152); bolded values indicate significant p values, α = 0.05). Within the 'labs drawn' group, the specific laboratory assessments of catecholamine and/or catecholamine metabolite levels varied considerably (Figure 1A). Twenty-four-hour urinary dopamine excretion was the least commonly employed test (n = 12, 7.9%), while plasma normetanephrines and metanephrines (n = 90, 59.2%) were most frequently assessed in our cohort. Over the course of the study period, we saw a modest increase in the percentage of patients with HNPGL who had lab(s) drawn at first presentation (Figure 1B). Further, we saw significant but opposite trends in the use of urine and plasma normetanephrine and metanephrine assessments over time (p < .01 for both trends, Figure 1C). Figure 1. Percentage of patients (n = 152) who had specific laboratory assessments of catecholamine and catecholamine metabolite levels, where 'catecholamines' includes norepinephrine and epinephrine and 'metanephrines' includes metanephrines and normetanephrines (A); trend in frequency of patients who had lab(s) drawn at first presentation (B); significant but opposite trends in frequency of assessment of urine and plasma normetanephrines and metanephrines over time (C). The median and interquartile range of all laboratory measurements are provided in Table S1. In total, 31 (20.4%) of 152 patients had one or more laboratory assessments showing increased catecholamine or catecholamine metabolite levels. In most patients with HNPGL, these levels were ≤5-fold above the upper limit of the reference range, though a few individuals had profound laboratory levels of ≥10-fold (Figure 2, Table S2). Figure 2. Per cent increase in specific laboratory levels in 31 of 152 (20.4%) patients; each data point represents a discrete laboratory measurement, 45 in total; note that the y axis is logarithmic. The median (range) age of these 31 patients was 52.4 (18.3–78.2) years, and 22 (71%) were female. Roughly half of the 31 patients endorsed hyper-adrenergic symptoms at first presentation, and all presented with benign HNPGLs of the following subsites: CBP (n = 10), JP (n = 9), TP (n = 2), VP (n = 2), SCP (n = 5) and multi-focal HNPGL (n = 3). Detailed cohort characteristics of all 31 patients with HNPGL and increased laboratory levels are provided in Table S3. A flow diagram delineating the anatomic source and clinical significance (ie any laboratory level(s) accompanied by clear hyper-adrenergic symptoms and/or laboratory level(s) of normetanephrine ≥2-fold) of increased laboratory level(s) of catecholamines and/or catecholamine metabolites in these 31 patients is depicted in Figure 3. Notably, an additional catecholamine secreting mediastinal paraganglioma (MP) or adrenal pheochromocytoma was discovered in a sizable 19.4% (n = 6) of HNPGL patients with increased laboratory level(s).
Figure 3. Flow diagram of the source and clinical significance of catecholamine and/or catecholamine metabolite elevations in 31 HNPGL patients. Of the 13 HNPGL patients determined to have clinically insignificant increased laboratory level(s), none were treated with α- or β-blockade prior to surgery or radiation or during observation. In those with clinically insignificant increases in laboratory level(s) who were treated surgically, review of anaesthesia records showed no instances of hemodynamic instability requiring pressors, aggressive fluid resuscitation or anti-hypertensives. In total, 14 patients in our cohort were determined to have evidence of clinically significant catecholamine secretion from their tumours (Table 2: profile of HNPGL patients (n = 14) with clinically significant catecholamine secretion). This small cohort was particularly enriched for patients with tumours of the carotid body (CBP) and cervical sympathetic chain (SCP), as well as those with pathogenic SDHB mutations. Ten of these 14 patients were started on α- and/or β-blockade after laboratory assessments were completed. We could not determine whether blockade was initiated in the remaining four patients due to insufficient clinical documentation or limited follow-up duration. In summary, of 152 HNPGL patients with 182 total tumours, clinically significant catecholamine secretion was shown in 9.2% and 7.7% on a per-patient and per-tumour basis, respectively. Lastly, we sought to evaluate the sensitivity and specificity of hyper-adrenergic symptoms at first presentation for both increased laboratory level(s) and clinically significant catecholamine secretion. As expected, the sensitivity for both outcomes was quite low (44.8% and 44.4%, respectively). Conversely, specificity was much higher, at 83.3% and 79.8%, respectively (Figure 4: contingency tables used to calculate the sensitivity and specificity of hyper-adrenergic symptoms at first presentation for increased laboratory level(s) and for clinically significant catecholamine secretion from HNPGL, MP or pheochromocytoma).
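The sensitivity and specificity quoted above follow directly from the contingency tables in Figure 4; the counts in this sketch are illustrative, chosen only so that they reproduce the reported 44.8% and 83.3% for increased laboratory level(s), not the actual Figure 4 cell counts.

    def sens_spec(tp, fn, fp, tn):
        # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
        return tp / (tp + fn), tn / (tn + fp)

    # Illustrative counts for hyper-adrenergic symptoms as a screen for
    # increased laboratory level(s); hypothetical cell values.
    sensitivity, specificity = sens_spec(tp=13, fn=16, fp=20, tn=100)
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")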
CONCLUSIONS
The rate of catecholamine excess in patients with HNPGLs may be higher than previously thought. Tumours of any anatomic subsite may secrete catecholamines, although not all increased laboratory level(s) are indicative of clinically significant catecholamine secretion causing symptoms or warranting adrenergic blockade. Our series provides a comprehensive, contemporary update on biochemical profiles of HNPGL in an era of evolving diagnostic and management standards for these tumours.
[ "INTRODUCTION", "AUTHOR CONTRIBUTION" ]
[ "Head and neck paragangliomas (HNPGLs) are rare tumours derived from paraganglial cells within autonomic ganglia of the carotid body (CBP), vagus nerve (VP), jugular bulb (JP), Jacobsen's nerve of the middle ear (TP) or cervical sympathetic chain (SCP).\n1\n Due to their cell of origin, HNPGLs have the potential to actively synthesize and secrete catecholamines with potentially deleterious systemic effects. Patients with catecholamine secreting HNPGLs may present with symptoms of catecholamine excess, including sustained or intermittent hypertension and tachycardia, cardiac palpitations, diaphoresis and/or pallor. Regardless of symptomatology, failure to identify and treat catecholamine excess may cause significant morbidity and even mortality in these patients, particularly those undergoing surgery.\n2\n As a result, contemporary clinical practice guidelines recommend biochemical testing of urine or plasma catecholamines and metabolites, usually metanephrine and normetanephrine levels, for all patients with newly diagnosed HNPGLs.\n3\n\n\nThe prevalence of catecholamine secreting HNPGLs is typically low (approximately 3%–4% of tumours).\n4\n, \n5\n However, this estimate is based primarily on data from limited series published 20 years ago. More recent evidence suggests that the rate of catecholamine secreting HNPGLs may exceed 10%.\n6\n Recently, biochemical and genetic characteristics of HNPGLs, including standard treatment paradigms have been vastly transformed.\n7\n, \n8\n There now exists a clear gap in the literature regarding the true rate and factors characteristic of catecholamine secreting HNPGLs, particularly those whose functional status may pose significant challenges for peri‐operative management.\nHere, we report a large series of patients with HNPGLs with the primary aim of characterizing the prevalence and features of HNPGLs that secrete catecholamines at a level significant enough to cause symptoms or warrant consideration of adrenergic blockade, herein termed ‘clinically significant’. In an era of increasingly personalized treatment for these tumours, our data may inform contemporary, best practices for biochemical screening and multi‐disciplinary management of HNPGLs.", "We certify that all authors have met the following criteria for authorship: (1) Have made substantial contributions to conception, design, acquisition and analysis of data. (2) Were involved in drafting and revising the manuscript. (3) Have given final approval of the version to be published and take full responsibility for its content. (4) Agree to be accountable for all aspects of the work." ]
[ null, null ]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSIONS", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTION", "Supporting information" ]
[ "Head and neck paragangliomas (HNPGLs) are rare tumours derived from paraganglial cells within autonomic ganglia of the carotid body (CBP), vagus nerve (VP), jugular bulb (JP), Jacobsen's nerve of the middle ear (TP) or cervical sympathetic chain (SCP).\n1\n Due to their cell of origin, HNPGLs have the potential to actively synthesize and secrete catecholamines with potentially deleterious systemic effects. Patients with catecholamine secreting HNPGLs may present with symptoms of catecholamine excess, including sustained or intermittent hypertension and tachycardia, cardiac palpitations, diaphoresis and/or pallor. Regardless of symptomatology, failure to identify and treat catecholamine excess may cause significant morbidity and even mortality in these patients, particularly those undergoing surgery.\n2\n As a result, contemporary clinical practice guidelines recommend biochemical testing of urine or plasma catecholamines and metabolites, usually metanephrine and normetanephrine levels, for all patients with newly diagnosed HNPGLs.\n3\n\n\nThe prevalence of catecholamine secreting HNPGLs is typically low (approximately 3%–4% of tumours).\n4\n, \n5\n However, this estimate is based primarily on data from limited series published 20 years ago. More recent evidence suggests that the rate of catecholamine secreting HNPGLs may exceed 10%.\n6\n Recently, biochemical and genetic characteristics of HNPGLs, including standard treatment paradigms have been vastly transformed.\n7\n, \n8\n There now exists a clear gap in the literature regarding the true rate and factors characteristic of catecholamine secreting HNPGLs, particularly those whose functional status may pose significant challenges for peri‐operative management.\nHere, we report a large series of patients with HNPGLs with the primary aim of characterizing the prevalence and features of HNPGLs that secrete catecholamines at a level significant enough to cause symptoms or warrant consideration of adrenergic blockade, herein termed ‘clinically significant’. In an era of increasingly personalized treatment for these tumours, our data may inform contemporary, best practices for biochemical screening and multi‐disciplinary management of HNPGLs.", "This was a retrospective analysis of a prospectively maintained clinical database of patients with HNPGLs presenting to our institution for evaluation and management between 2000 and 2020.\n9\n Inclusion criteria for this study were as follows: (1) radiographically confirmed isolated or multi‐focal HNPGL; (2) previously untreated HNPGL tumour(s); and (3) at least one laboratory measurement of urine or plasma catecholamine or catecholamine metabolite levels prior to treatment onset.\nUrine measurements included analysis of 24‐h excretion of fractionated normetanephrines and metanephrines, vaniyllmandelic acid (VMA), norepinephrine, epinephrine and dopamine via standard clinical assays.\n10\n Similarly, plasma measurements included analysis of fractionated normetanephrines and metanephrines, VMA, norepinephrine, epinephrine and dopamine, as described.\n10\n As expected, laboratory reference ranges were not uniform due to the twenty‐year study period, differences in clinical assays, and few patients with laboratories from other institutions that were not repeated upon presentation. 
As such, we recorded absolute laboratory levels and calculated a normalized ‘per cent of reference range’ level for each measurement as follows: [(absolute level—lower bound of reference range)/upper bound of reference range] x 100. Laboratory assessments were considered increased when the per cent of reference range level exceeded 100%.\n11\n\n\nAs previous authors have posited, it is a clear oversimplification to characterize HNPGLs as simply functional or not. Rather, HNPGLs exhibit a continuum of hormonal activity influencing clinical presentation and need for hemodynamic management.\n12\n, \n13\n Thus, the primary goal of this study was to investigate the rate and characteristics of ‘clinically significant’ catecholamine secreting HNPGLs. Clinically significant catecholamine secreting HNPGLs are defined as tumours in patients where (1) any laboratory level(s) are accompanied by clear hyper‐adrenergic symptoms at first presentation, as defined below; or (2) increased laboratory level(s) of normetanephrine ≥2‐fold were present with or without hyper‐adrenergic symptoms. We secondarily sought to determine rate and factors characteristic of ‘clinically insignificant’ catecholamine secreting HNPGL, defined as any patient with increased laboratory level(s) not meeting aforementioned criteria. Finally, we estimated the sensitivity and specificity of hyper‐adrenergic symptoms (defined below) at first presentation for predicting increased laboratory level(s) and clinically significant catecholamine secretion in our patient population.\nOperational definitions for other recorded clinical variables are as follows: hyper‐adrenergic symptoms at first presentation were defined as explicit documentation of sustained or intermittent palpitations, tachycardia, diaphoresis, and/or tremors or new‐onset hypertension in conjunction with at least one of these other symptoms. Hypertension at first presentation was defined as systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg. Tachycardia at first presentation was defined as resting heart rate >100 beats per minute.\nStatistical comparisons between groups were made with chi‐square test and Student's t test for categorical and continuous variables, respectively. All statistical tests were two‐tailed and performed with SPSS Version 27 with a p ≤ .05 as the threshold for statistical significance. This study was deemed exempt from informed consent by the University of Michigan Institutional Review Board (IRB).", "Our study cohort consisted of 280 patients with HNPGLs comprising 318 discrete tumours. There were significant differences evident among patients in whom laboratories were drawn/documented (n = 152, 182 tumours) versus not (n = 128, 136 tumours). Specifically, patients in the ‘labs drawn’ group were younger overall and more likely to endorse hyper‐adrenergic symptoms at presentation. HNPGL tumour subsite(s) also differed, with more JP and multi‐focal HNPGLs and fewer TP in the ‘labs drawn’ group overall (Table 1).\nDemographic, clinical and tumour characteristics of HNPGL patient cohorts\nLaboratories Not Drawn\n(n = 128)\nLaboratories Drawn\n(n = 152)\nBolded values indicate significant p values (α = 0.05).\nWithin the ‘labs drawn’ group, the specific laboratory assessments of catecholamine and/or catecholamine metabolite levels varied considerably (Figure 1A). 
Twenty‐four‐hour urinary dopamine excretion was the least commonly employed test (n = 12, 7.9%) while plasma normetanephrines and metanephrines (n = 90, 59.2%) were most frequently assessed in our cohort. Over the course of the study period, we saw a modest increase in percentage of patients with HNPGL who had lab(s) drawn at first presentation (Figure 1B). Further, we saw a significant but opposite trend in the use of urine and plasma normetanephrine and metanephrine assessments over time (p < .01 for both trends, Figure 1C).\nPercentage of patients (n = 152) who had specific laboratory assessments of catecholamine and catecholamine metabolite levels. ‘Catecholamines’ includes norepinephrine and epinephrine. ‘Metanephrines’ includes metanephrines and normetanephrines. (A). Trend in frequency of patients who had lab(s) drawn at first presentation (B). Significant but opposite trends in frequency of assessment of urine and plasma normetanephrines and metanephrines over time (C)\nThe median and interquartile range of all laboratory measurements are provided in Table S1. In total, 31 (20.4%) of 152 patients had one or more laboratory assessments showing increased catecholamine or catecholamine metabolite levels. In most patients with HNPGL, these levels were ≤5‐fold above the upper limit of the reference range, though a few individuals had profound laboratory levels of ≥10‐fold (Figure 2, Table S2).\nPer cent increase in specific laboratory levels in 31 of 152 (20.4%) patients. Each data point represents a discrete laboratory measurement, 45 in total. Note y axis is logarithmic\nThe median (range) age of these 31 patients was 52.4 (18.3–78.2) years, and 22 (71%) were female. Roughly half of the 31 patients endorsed hyper‐adrenergic symptoms at first presentation, and all presented with benign HNPGLs of the following subsites: CBP (n = 10), JP (n = 9), TP (n = 2), VP (n = 2), SCP (n = 5) and multi‐focal HNPGL (n = 3). Detailed cohort characteristics of all 31 patients with HNPGL and increased laboratory levels are provided in Table S3. A flow diagram delineating the anatomic source and clinical significance (ie any laboratory level(s) accompanied by clear hyper‐adrenergic symptoms and/or laboratory level(s) of normetanephrine ≥2‐fold) of increased laboratory level(s) of catecholamines and/or catecholamine metabolites in these 31 patients is depicted in Figure 3. Notably, an additional catecholamine secreting mediastinal paraganglioma (MP) or adrenal pheochromocytoma was discovered in a sizable 19.4% (n = 6) of HNPGL patients with increased laboratory level(s).\nFlow diagram of source and clinical significance of catecholamine and/or catecholamine metabolite elevations in 31 HNPGL patients\nOf the 13 HNPGL patients determined to have clinically insignificant increased laboratory level(s), none were treated with α‐ or β‐blockade prior to surgery or radiation or during observation. In those with clinically insignificant increases in laboratory level(s) who were treated surgically, review of anaesthesia records showed no instances of hemodynamic instability requiring pressors, aggressive fluid resuscitation or anti‐hypertensives.\nIn total, 14 patients in our cohort were determined to have evidence of clinically significant catecholamine secretion from their tumours (Table 2). This small cohort was particularly enriched for patients with tumours of the carotid body (CBP) and cervical sympathetic chain (SCP) as well as those with pathogenic SDHB mutations. 
Ten of these 14 patients were started on α‐ and/or β‐blockade after laboratory assessments were completed. We could not determine whether blockade was initiated in the remaining four patients due to insufficient clinical documentation or limited follow‐up duration. In summary, of 152 HNPGL patients with 182 total tumours, clinically significant catecholamine secretion was shown in 9.2% and 7.7% on a per‐patient and per‐tumour basis, respectively.\nProfile of HNPGL patients (n = 14) with clinically significant catecholamine secretion\nLastly, we sought to evaluate the sensitivity and specificity of hyper‐adrenergic symptoms at first presentation for both increased laboratory level(s) and clinically significant catecholamine secretion. As expected, the sensitivity for both outcomes was quite low (44.8% and 44.4%, respectively). Conversely, specificity was much higher at 83.3% and 79.8%, respectively (Figure 4).\nContingency tables to calculate sensitivity and specificity of hyper‐adrenergic symptoms at first presentation for increased laboratory level(s) (A) and clinically significant catecholamine secretion from HNPGL, MP or pheochromocytoma (pheo) (B)", "In recent years, diagnostic and treatment paradigms for HNPGLs have undergone vast change with the discovery of heritable succinate dehydrogenase (SDHx) mutations and a trend towards non‐surgical management.\n14\n Traditionally, HNPGLs have been considered to rarely secrete catecholamines. This is in contrast to thoracoabdominal paragangliomas and adrenal pheochromocytomas derived primarily from sympathetic ganglia with comparatively higher rates of catecholamine hypersecretion.\n15\n However, this assumption is based on limited series.\n2\n, \n4\n A contemporary re‐evaluation of the prevalence and features characteristic of catecholamine secreting HNPGLs in an era of evolving management is thus warranted.\nWhile precise rates are poorly documented in the literature, it is evident that biochemical screening of newly diagnosed HNPGLs is not a uniform practice (Figure 1B).\n9\n In our cohort, only 54.3% of patients had documented catecholamine screening, which reflects a twenty‐year study period and evolving testing recommendations and provider awareness. Patients who had catecholamine screening were younger and more likely to present with jugular or multi‐focal HNPGLs. Younger patients may have been more likely to consider surgical therapy versus watchful waiting thus prompting preoperative screening. Explicit documentation of assessment of hyper‐adrenergic symptoms occurred in only half of our patients. This could be due to incomplete documentation and limitations of retrospective data collection or a true inconsistency of assessing such symptoms on initial patient history. However, although not systematically analyzed, our experience shows that there is a significant fraction of asymptomatic patients despite significant increase in catecholamines. Either way, biochemical screening for newly diagnosed HNPGLs was also more common in those patients who endorsed such symptoms potentially attributable to hormonally active tumours.\nAs a rapid and easy screening tool, we hypothesized that more uniform assessment of hyper‐adrenergic symptoms at first presentation may be warranted to increase detection rates of catecholamine secreting HNPGLs. We found that sensitivity of such symptoms for any increased laboratory level(s) and clinically significant catecholamine secretion was quite low, at 44.8% and 44.4%, respectively. 
Specificity was moderately better, at 83.3% and 79.8%, respectively. These data support a few important conclusions. First, tumour secretion of catecholamines at levels that impact peri‐operative or long‐term management of HNPGLs may not always manifest with clear symptomatology. Second, objective measurement of catecholamine metabolite levels is imperative to determine whether newly diagnosed HNPGLs are indeed functional.\nThere are a number of plasma and urine laboratory tests available to the provider managing patients with HNPGLs. Evidence suggests these do not all hold equivalent sensitivity and specificity for diagnosing catecholamine secreting HNPGLs.\n11\n, \n16\n Based on an enhanced understanding of tumour catecholamine metabolism, the gold‐standard tests for diagnosis are plasma free or urinary fractionated metanephrines.\n17\n, \n18\n Assessment of plasma or urine catecholamines including norepinephrine, epinephrine, dopamine and VMA are associated with unacceptably high false‐positive and false‐negative rates.\n3\n The precise tests ordered for our patients with HNPGL varied considerably through the study period, though we did see a statistically significant upward trend in the use of plasma free metanephrines over time (Figure 1C). At our institution, plasma free metanephrines have become the preferred screening test for HNPGLs due to superior test parameters, reliability of results and ease of specimen acquisition.\nDue to vast heterogeneity in laboratory tests employed and our aim to delineate clinically significant versus clinically insignificant catecholamine secreting HNPGLs, we chose to first identify all patients in our cohort with increased laboratory level(s) of any kind. In 16 of these 31 patients, increased laboratory level(s) led providers to order cross‐sectional imaging of the chest and abdomen in search of another potentially functional tumour. Six of 31 patients (19.4%) were found to have a concomitant thoracoabdominal paraganglioma or adrenal pheochromocytoma (Figure 3). Due to the heritable nature and potential for multi‐focality, providers managing patients with incidental HNPGLs must be aware of the value of screening laboratories and strongly consider whole‐body, cross‐sectional imaging in those patients with evidence of catecholamine excess or familial paraganglioma predisposition syndromes.\n19\n\n\nBased on our vast institutional experience, it is evident that increased laboratory level(s) indicative of catecholamine excess may not always lead to significant changes in clinical management. Thus, we sought to define and differentiate clinically significant versus clinically insignificant catecholamine secreting HNPGLs to better inform treatment/management decision‐making. Our operational definition for the former was based on reliability of plasma or urine metanephrine testing and/or the presence of unequivocal hyper‐adrenergic symptoms. When defined as such, 9.2% of HNPGL patients in our cohort had evidence of clinically significant catecholamine secretion. This stands in contrast to historically accepted rates, which estimated that only approximately 4% of HNPGLs were secretory.\n4\n However, our observed rate is in line with a recent report by van Duinen et al,\n6\n supporting more uniform attention to biochemical screening in patients with HNPGLs. 
Although the small number of patients with clinically insignificant elevations of catecholamines did not have any identifiable peri‐operative complications, it remains a matter of debate whether every patient with any catecholamine elevation should receive peri‐operative blockade.\nOur data suggest that HNPGLs derived from both parasympathetic (eg CBP, JP, VP) and sympathetic (eg SCP) ganglia are capable of secreting catecholamines at a clinically meaningful level (Table 2). We found a 90% rate of heritable SDHx mutations in patients with clinically significant catecholamine secreting HNPGLs who underwent genetic testing. This supports strong consideration of genetic counselling in all such patients in parallel with their recommended treatment plan.\n20\n, \n21\n This will likely become more important as unique phenotype‐genotype relationships are further elucidated in an era of increasingly personalized diagnosis and therapy for these tumours.\n22\n Due to the rarity of malignant HNPGL, it is interesting to speculate whether aggressive tumour behaviour portends increased biochemical activity and catecholamine secretion. While we only had seven patients with malignant tumours and laboratory measurements at diagnosis, none of them had laboratory elevations indicative of catecholamine hypersecretion.\nA recent paradigm shift towards watchful waiting and/or radiation to avoid surgical morbidity for benign tumours was evident even in our patients with clinically significant catecholamine secreting HNPGLs. As such, biochemical screening for catecholamine excess has become essential for blood pressure and heart rate control with adrenergic blockade during the period of watchful waiting or radiation. For those patients treated surgically, our data reiterate the strong indication for biochemical screening in pre‐surgical workup to avoid rare but catastrophic peri‐operative complications.\n23\n\n\nA measurable percentage (8.6%) of our cohort had clinically insignificant catecholamine secretion as we defined it. These patients had increased laboratory levels that were irreproducible, non‐specific or minor and not associated with hyper‐adrenergic symptoms. When patients first present to an otolaryngologist–head and neck surgeon, consultation with an endocrinologist with expertise in such tumours is ideal to help interpret laboratory levels and advise on appropriate next steps in diagnosis and management. While initiation of adrenergic blockade may not be immediately required for these individuals, the questionable catecholamine excess may prompt additional testing, imaging or genetic screening.", "The rate of catecholamine excess in patients with HNPGLs may be higher than previously thought. Tumours of any anatomic subsite may secrete catecholamines, although not all increased laboratory level(s) are indicative of clinically significant catecholamine secretion causing symptoms or warranting adrenergic blockade. Our series provides a comprehensive, contemporary update on biochemical profiles of HNPGL in an era of evolving diagnostic and management standards for these tumours.", "The authors have no conflicts of interest to disclose relevant to this manuscript.", "We certify that all authors have met the following criteria for authorship: (1) Have made substantial contributions to conception, design, acquisition and analysis of data. (2) Were involved in drafting and revising the manuscript. (3) Have given final approval of the version to be published and take full responsibility for its content. 
(4) Agree to be accountable for all aspects of the work.", "Table S1\nClick here for additional data file.\nTable S2\nClick here for additional data file.\nTable S3\nClick here for additional data file." ]
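As a quick arithmetic check on the per‐patient and per‐tumour rates reported in the results above, the calculation is a one‐liner; the counts below come straight from the text (14 clinically significant secretors among 152 screened patients with 182 tumours):

```python
n_significant = 14               # patients with clinically significant secretion
n_patients, n_tumours = 152, 182

print(f"per-patient rate: {n_significant / n_patients:.1%}")  # -> 9.2%
print(f"per-tumour rate: {n_significant / n_tumours:.1%}")    # -> 7.7%
```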
[ null, "methods", "results", "discussion", "conclusions", "COI-statement", null, "supplementary-material" ]
[ "adrenergic", "catecholamines", "functional", "head and neck", "metanephrines", "paragangliomas", "vasoactive" ]
INTRODUCTION: Head and neck paragangliomas (HNPGLs) are rare tumours derived from paraganglial cells within autonomic ganglia of the carotid body (CBP), vagus nerve (VP), jugular bulb (JP), Jacobson's nerve of the middle ear (TP) or cervical sympathetic chain (SCP). 1 Due to their cell of origin, HNPGLs have the potential to actively synthesize and secrete catecholamines with potentially deleterious systemic effects. Patients with catecholamine secreting HNPGLs may present with symptoms of catecholamine excess, including sustained or intermittent hypertension and tachycardia, cardiac palpitations, diaphoresis and/or pallor. Regardless of symptomatology, failure to identify and treat catecholamine excess may cause significant morbidity and even mortality in these patients, particularly those undergoing surgery. 2 As a result, contemporary clinical practice guidelines recommend biochemical testing of urine or plasma catecholamines and metabolites, usually metanephrine and normetanephrine levels, for all patients with newly diagnosed HNPGLs. 3 The prevalence of catecholamine secreting HNPGLs is typically low (approximately 3%–4% of tumours). 4 , 5 However, this estimate is based primarily on data from limited series published 20 years ago. More recent evidence suggests that the rate of catecholamine secreting HNPGLs may exceed 10%. 6 Recently, biochemical and genetic characteristics of HNPGLs, including standard treatment paradigms, have been vastly transformed. 7 , 8 There now exists a clear gap in the literature regarding the true rate and factors characteristic of catecholamine secreting HNPGLs, particularly those whose functional status may pose significant challenges for peri‐operative management. Here, we report a large series of patients with HNPGLs with the primary aim of characterizing the prevalence and features of HNPGLs that secrete catecholamines at a level significant enough to cause symptoms or warrant consideration of adrenergic blockade, herein termed ‘clinically significant’. In an era of increasingly personalized treatment for these tumours, our data may inform contemporary best practices for biochemical screening and multi‐disciplinary management of HNPGLs. METHODS: This was a retrospective analysis of a prospectively maintained clinical database of patients with HNPGLs presenting to our institution for evaluation and management between 2000 and 2020. 9 Inclusion criteria for this study were as follows: (1) radiographically confirmed isolated or multi‐focal HNPGL; (2) previously untreated HNPGL tumour(s); and (3) at least one laboratory measurement of urine or plasma catecholamine or catecholamine metabolite levels prior to treatment onset. Urine measurements included analysis of 24‐h excretion of fractionated normetanephrines and metanephrines, vanillylmandelic acid (VMA), norepinephrine, epinephrine and dopamine via standard clinical assays. 10 Similarly, plasma measurements included analysis of fractionated normetanephrines and metanephrines, VMA, norepinephrine, epinephrine and dopamine, as described. 10 As expected, laboratory reference ranges were not uniform due to the twenty‐year study period, differences in clinical assays, and a few patients with laboratories from other institutions that were not repeated upon presentation. As such, we recorded absolute laboratory levels and calculated a normalized ‘per cent of reference range’ level for each measurement as follows: [(absolute level - lower bound of reference range)/upper bound of reference range] x 100. 
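For readers implementing this normalization, a minimal sketch follows (the function name, variable names and example values are hypothetical illustrations, not taken from the study):

```python
def percent_of_reference_range(value, ref_low, ref_high):
    """Normalized 'per cent of reference range' level:
    [(absolute level - lower bound) / upper bound] x 100."""
    return (value - ref_low) / ref_high * 100.0

# Hypothetical example: a plasma normetanephrine of 1.2 nmol/L against
# a reference range of 0.0-0.9 nmol/L.
pct = percent_of_reference_range(1.2, 0.0, 0.9)
print(f"{pct:.1f}% of reference range")  # 133.3%, i.e. above the 100% cut-off
```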
Laboratory assessments were considered increased when the per cent of reference range level exceeded 100%. 11 As previous authors have posited, it is a clear oversimplification to characterize HNPGLs as simply functional or not. Rather, HNPGLs exhibit a continuum of hormonal activity influencing clinical presentation and need for hemodynamic management. 12 , 13 Thus, the primary goal of this study was to investigate the rate and characteristics of ‘clinically significant’ catecholamine secreting HNPGLs. Clinically significant catecholamine secreting HNPGLs are defined as tumours in patients where (1) any laboratory level(s) are accompanied by clear hyper‐adrenergic symptoms at first presentation, as defined below; or (2) increased laboratory level(s) of normetanephrine ≥2‐fold were present with or without hyper‐adrenergic symptoms. We secondarily sought to determine the rate and factors characteristic of ‘clinically insignificant’ catecholamine secreting HNPGL, defined as any patient with increased laboratory level(s) not meeting the aforementioned criteria. Finally, we estimated the sensitivity and specificity of hyper‐adrenergic symptoms (defined below) at first presentation for predicting increased laboratory level(s) and clinically significant catecholamine secretion in our patient population. Operational definitions for other recorded clinical variables are as follows: hyper‐adrenergic symptoms at first presentation were defined as explicit documentation of sustained or intermittent palpitations, tachycardia, diaphoresis, and/or tremors or new‐onset hypertension in conjunction with at least one of these other symptoms. Hypertension at first presentation was defined as systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg. Tachycardia at first presentation was defined as resting heart rate >100 beats per minute. Statistical comparisons between groups were made with the chi‐square test and Student's t test for categorical and continuous variables, respectively. All statistical tests were two‐tailed and performed with SPSS Version 27, with p ≤ .05 as the threshold for statistical significance. This study was deemed exempt from informed consent by the University of Michigan Institutional Review Board (IRB). RESULTS: Our study cohort consisted of 280 patients with HNPGLs comprising 318 discrete tumours. There were significant differences evident among patients in whom laboratories were drawn/documented (n = 152, 182 tumours) versus not (n = 128, 136 tumours). Specifically, patients in the ‘labs drawn’ group were younger overall and more likely to endorse hyper‐adrenergic symptoms at presentation. HNPGL tumour subsite(s) also differed, with more JP and multi‐focal HNPGLs and fewer TP in the ‘labs drawn’ group overall (Table 1). Table 1 (caption): Demographic, clinical and tumour characteristics of HNPGL patient cohorts, comparing the ‘Laboratories Not Drawn’ (n = 128) and ‘Laboratories Drawn’ (n = 152) groups. Bolded values indicate significant p values (α = 0.05). Within the ‘labs drawn’ group, the specific laboratory assessments of catecholamine and/or catecholamine metabolite levels varied considerably (Figure 1A). Twenty‐four‐hour urinary dopamine excretion was the least commonly employed test (n = 12, 7.9%), while plasma normetanephrines and metanephrines (n = 90, 59.2%) were most frequently assessed in our cohort. Over the course of the study period, we saw a modest increase in the percentage of patients with HNPGL who had lab(s) drawn at first presentation (Figure 1B). 
Further, we saw a significant but opposite trend in the use of urine and plasma normetanephrine and metanephrine assessments over time (p < .01 for both trends, Figure 1C). Percentage of patients (n = 152) who had specific laboratory assessments of catecholamine and catecholamine metabolite levels. ‘Catecholamines’ includes norepinephrine and epinephrine. ‘Metanephrines’ includes metanephrines and normetanephrines. (A). Trend in frequency of patients who had lab(s) drawn at first presentation (B). Significant but opposite trends in frequency of assessment of urine and plasma normetanephrines and metanephrines over time (C) The median and interquartile range of all laboratory measurements are provided in Table S1. In total, 31 (20.4%) of 152 patients had one or more laboratory assessments showing increased catecholamine or catecholamine metabolite levels. In most patients with HNPGL, these levels were ≤5‐fold above the upper limit of the reference range, though a few individuals had profound elevations of ≥10‐fold (Figure 2, Table S2). Per cent increase in specific laboratory levels in 31 of 152 (20.4%) patients. Each data point represents a discrete laboratory measurement, 45 in total. Note: the y‐axis is logarithmic The median (range) age of these 31 patients was 52.4 (18.3–78.2) years, and 22 (71%) were female. Roughly half of the 31 patients endorsed hyper‐adrenergic symptoms at first presentation, and all presented with benign HNPGLs of the following subsites: CBP (n = 10), JP (n = 9), TP (n = 2), VP (n = 2), SCP (n = 5) and multi‐focal HNPGL (n = 3). Detailed cohort characteristics of all 31 patients with HNPGL and increased laboratory levels are provided in Table S3. A flow diagram delineating the anatomic source and clinical significance (ie any laboratory level(s) accompanied by clear hyper‐adrenergic symptoms and/or laboratory level(s) of normetanephrine ≥2‐fold) of increased laboratory level(s) of catecholamines and/or catecholamine metabolites in these 31 patients is depicted in Figure 3. Notably, an additional catecholamine secreting mediastinal paraganglioma (MP) or adrenal pheochromocytoma was discovered in a sizable 19.4% (n = 6) of HNPGL patients with increased laboratory level(s). Flow diagram of source and clinical significance of catecholamine and/or catecholamine metabolite elevations in 31 HNPGL patients Of the 13 HNPGL patients determined to have clinically insignificant increased laboratory level(s), none were treated with α‐ or β‐blockade prior to surgery or radiation or during observation. In those with clinically insignificant increases in laboratory level(s) who were treated surgically, review of anaesthesia records showed no instances of hemodynamic instability requiring pressors, aggressive fluid resuscitation or anti‐hypertensives. In total, 14 patients in our cohort were determined to have evidence of clinically significant catecholamine secretion from their tumours (Table 2). This small cohort was particularly enriched for patients with tumours of the carotid body (CBP) and cervical sympathetic chain (SCP), as well as those with pathogenic SDHB mutations. Ten of these 14 patients were started on α‐ and/or β‐blockade after laboratory assessments were completed. We could not determine whether blockade was initiated in the remaining four patients due to insufficient clinical documentation or limited follow‐up duration. 
In summary, of 152 HNPGL patients with 182 total tumours, clinically significant catecholamine secretion was shown in 9.2% and 7.7% on a per‐patient and per‐tumour basis, respectively. Profile of HNPGL patients (n = 14) with clinically significant catecholamine secretion Lastly, we sought to evaluate the sensitivity and specificity of hyper‐adrenergic symptoms at first presentation for both increased laboratory level(s) and clinically significant catecholamine secretion. As expected, the sensitivity for both outcomes was quite low (44.8% and 44.4%, respectively). Conversely, specificity was much higher at 83.3% and 79.8%, respectively (Figure 4). Contingency tables to calculate sensitivity and specificity of hyper‐adrenergic symptoms at first presentation for increased laboratory level(s) (A) and clinically significant catecholamine secretion from HNPGL, MP or pheochromocytoma (pheo) (B) DISCUSSION: In recent years, diagnostic and treatment paradigms for HNPGLs have undergone vast change with the discovery of heritable succinate dehydrogenase (SDHx) mutations and a trend towards non‐surgical management. 14 Traditionally, HNPGLs have been considered to rarely secrete catecholamines. This is in contrast to thoracoabdominal paragangliomas and adrenal pheochromocytomas derived primarily from sympathetic ganglia with comparatively higher rates of catecholamine hypersecretion. 15 However, this assumption is based on limited series. 2 , 4 A contemporary re‐evaluation of the prevalence and features characteristic of catecholamine secreting HNPGLs in an era of evolving management is thus warranted. While precise rates are poorly documented in the literature, it is evident that biochemical screening of newly diagnosed HNPGLs is not a uniform practice (Figure 1B). 9 In our cohort, only 54.3% of patients had documented catecholamine screening, which reflects a twenty‐year study period and evolving testing recommendations and provider awareness. Patients who had catecholamine screening were younger and more likely to present with jugular or multi‐focal HNPGLs. Younger patients may have been more likely to consider surgical therapy versus watchful waiting thus prompting preoperative screening. Explicit documentation of assessment of hyper‐adrenergic symptoms occurred in only half of our patients. This could be due to incomplete documentation and limitations of retrospective data collection or a true inconsistency of assessing such symptoms on initial patient history. However, although not systematically analyzed, our experience shows that there is a significant fraction of asymptomatic patients despite significant increase in catecholamines. Either way, biochemical screening for newly diagnosed HNPGLs was also more common in those patients who endorsed such symptoms potentially attributable to hormonally active tumours. As a rapid and easy screening tool, we hypothesized that more uniform assessment of hyper‐adrenergic symptoms at first presentation may be warranted to increase detection rates of catecholamine secreting HNPGLs. We found that sensitivity of such symptoms for any increased laboratory level(s) and clinically significant catecholamine secretion was quite low, at 44.8% and 44.4%, respectively. Specificity was moderately better, at 83.3% and 79.8%, respectively. These data support a few important conclusions. First, tumour secretion of catecholamines at levels that impact peri‐operative or long‐term management of HNPGLs may not always manifest with clear symptomatology. 
Second, objective measurement of catecholamine metabolite levels is imperative to determine whether newly diagnosed HNPGLs are indeed functional. There are a number of plasma and urine laboratory tests available to the provider managing patients with HNPGLs. Evidence suggests these do not all hold equivalent sensitivity and specificity for diagnosing catecholamine secreting HNPGLs. 11 , 16 Based on an enhanced understanding of tumour catecholamine metabolism, the gold‐standard tests for diagnosis are plasma free or urinary fractionated metanephrines. 17 , 18 Assessment of plasma or urine catecholamines including norepinephrine, epinephrine, dopamine and VMA are associated with unacceptably high false‐positive and false‐negative rates. 3 The precise tests ordered for our patients with HNPGL varied considerably through the study period, though we did see a statistically significant upward trend in the use of plasma free metanephrines over time (Figure 1C). At our institution, plasma free metanephrines have become the preferred screening test for HNPGLs due to superior test parameters, reliability of results and ease of specimen acquisition. Due to vast heterogeneity in laboratory tests employed and our aim to delineate clinically significant versus clinically insignificant catecholamine secreting HNPGLs, we chose to first identify all patients in our cohort with increased laboratory level(s) of any kind. In 16 of these 31 patients, increased laboratory level(s) led providers to order cross‐sectional imaging of the chest and abdomen in search of another potentially functional tumour. Six of 31 patients (19.4%) were found to have a concomitant thoracoabdominal paraganglioma or adrenal pheochromocytoma (Figure 3). Due to the heritable nature and potential for multi‐focality, providers managing patients with incidental HNPGLs must be aware of the value of screening laboratories and strongly consider whole‐body, cross‐sectional imaging in those patients with evidence of catecholamine excess or familial paraganglioma predisposition syndromes. 19 Based on our vast institutional experience, it is evident that increased laboratory level(s) indicative of catecholamine excess may not always lead to significant changes in clinical management. Thus, we sought to define and differentiate clinically significant versus clinically insignificant catecholamine secreting HNPGLs to better inform treatment/management decision‐making. Our operational definition for the former was based on reliability of plasma or urine metanephrine testing and/or the presence of unequivocal hyper‐adrenergic symptoms. When defined as such, 9.2% of HNPGL patients in our cohort had evidence of clinically significant catecholamine secretion. This stands in contrast to historically accepted rates, which estimated that only approximately 4% of HNPGLs were secretory. 4 However, our observed rate is in line with a recent report by van Duinen et al, 6 supporting more uniform attention to biochemical screening in patients with HNPGLs. Although the small number of patients with clinically insignificant elevations of catecholamines did not have any identifiable peri‐operative complications, it remains a matter of debate whether every patient with any catecholamine elevation should receive peri‐operative blockade. Our data suggest that HNPGLs derived from both parasympathetic (eg CBP, JP, VP) and sympathetic (eg SCP) ganglia are capable of secreting catecholamines at a clinically meaningful level (Table 2). 
We found a 90% rate of heritable SDHx mutations in patients with clinically significant catecholamine secreting HNPGLs who underwent genetic testing. This supports strong consideration of genetic counselling in all such patients in parallel with their recommended treatment plan. 20 , 21 This will likely become more important as unique phenotype‐genotype relationships are further elucidated in an era of increasingly personalized diagnosis and therapy for these tumours. 22 Due to the rarity of malignant HNPGL, it is interesting to speculate whether aggressive tumour behaviour portends increased biochemical activity and catecholamine secretion. While we only had seven patients with malignant tumours and laboratory measurements at diagnosis, none of them had laboratory elevations indicative of catecholamine hypersecretion. A recent paradigm shift towards watchful waiting and/or radiation to avoid surgical morbidity for benign tumours was evident even in our patients with clinically significant catecholamine secreting HNPGLs. As such, biochemical screening for catecholamine excess has become essential for blood pressure and heart rate control with adrenergic blockade during the period of watchful waiting or radiation. For those patients treated surgically, our data reiterate the strong indication for biochemical screening in pre‐surgical workup to avoid rare but catastrophic peri‐operative complications. 23 A measurable percentage (8.6%) of our cohort had clinically insignificant catecholamine secretion as we defined it. These patients had increased laboratory levels that were irreproducible, non‐specific or minor and not associated with hyper‐adrenergic symptoms. When patients first present to an otolaryngologist–head and neck surgeon, consultation with an endocrinologist with expertise in such tumours is ideal to help interpret laboratory levels and advise on appropriate next steps in diagnosis and management. While initiation of adrenergic blockade may not be immediately required for these individuals, the questionable catecholamine excess may prompt additional testing, imaging or genetic screening. CONCLUSIONS: The rate of catecholamine excess in patients with HNPGLs may be higher than previously thought. Tumours of any anatomic subsite may secrete catecholamines, although not all increased laboratory level(s) are indicative of clinically significant catecholamine secretion causing symptoms or warranting adrenergic blockade. Our series provides a comprehensive, contemporary update on biochemical profiles of HNPGL in an era of evolving diagnostic and management standards for these tumours. CONFLICT OF INTEREST: The authors have no conflicts of interest to disclose relevant to this manuscript. AUTHOR CONTRIBUTION: We certify that all authors have met the following criteria for authorship: (1) Have made substantial contributions to conception, design, acquisition and analysis of data. (2) Were involved in drafting and revising the manuscript. (3) Have given final approval of the version to be published and take full responsibility for its content. (4) Agree to be accountable for all aspects of the work. Supporting information: Table S1 Click here for additional data file. Table S2 Click here for additional data file. Table S3 Click here for additional data file.
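As a side note on the contingency‐table analysis described above (Figure 4), the sensitivity and specificity arithmetic is straightforward to reproduce; the sketch below uses hypothetical 2 x 2 cell counts for illustration, not the study's actual tables:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 contingency table.
    tp/fn: patients with the outcome who did/did not endorse symptoms;
    tn/fp: patients without the outcome who did not/did endorse symptoms."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for hyper-adrenergic symptoms at first presentation
# versus increased laboratory level(s).
sens, spec = sens_spec(tp=13, fn=16, tn=100, fp=20)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```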
Background: We sought to characterize the prevalence and factors characteristic of head and neck paragangliomas (HNPGLs) that secrete catecholamines to inform best practices for diagnosis and management. Methods: This was a retrospective cohort study from 2000 to 2020 at a single-institution tertiary centre. One-hundred fifty-two patients (182 tumours) with HNPGLs with at least one measurement of urine or plasma catecholamines and/or catecholamine metabolite levels prior to treatment were included. We differentiated and characterized those patients with increased level(s) of any nature and those with 'clinically significant' versus 'clinically insignificant' catecholamine production. Results: Thirty-one (20.4%) patients had increased catecholamine and/or catecholamine metabolite levels. In most patients, these levels were ≤5-fold above the upper limit of the reference range. Six of these 31 patients with increased levels were ultimately found to have an additional catecholamine secreting mediastinal paraganglioma or pheochromocytoma. Fourteen of 31 patients with HNPGL were deemed clinically significant secretors of catecholamines based on hyper-adrenergic symptoms and/or profound levels of normetanephrines. This cohort was enriched for patients with paragangliomas of the carotid body or cervical sympathetic chain and those with SDHB genetic mutations. Ultimately, the prevalence of clinically significant catecholamine secreting HNPGLs was determined to be 9.2% and 7.7% on a per-patient and per-tumour basis, respectively. Conclusions: The rate of catecholamine excess in the current cohort of patients with HNPGLs was higher than previously reported. Neuroendocrine tumours of any anatomic subsite may secrete catecholamines, although not all increased laboratory level(s) are indicative of clinically significant catecholamine secretion causing symptoms or warranting adrenergic blockade.
INTRODUCTION: Head and neck paragangliomas (HNPGLs) are rare tumours derived from paraganglial cells within autonomic ganglia of the carotid body (CBP), vagus nerve (VP), jugular bulb (JP), Jacobson's nerve of the middle ear (TP) or cervical sympathetic chain (SCP). 1 Due to their cell of origin, HNPGLs have the potential to actively synthesize and secrete catecholamines with potentially deleterious systemic effects. Patients with catecholamine secreting HNPGLs may present with symptoms of catecholamine excess, including sustained or intermittent hypertension and tachycardia, cardiac palpitations, diaphoresis and/or pallor. Regardless of symptomatology, failure to identify and treat catecholamine excess may cause significant morbidity and even mortality in these patients, particularly those undergoing surgery. 2 As a result, contemporary clinical practice guidelines recommend biochemical testing of urine or plasma catecholamines and metabolites, usually metanephrine and normetanephrine levels, for all patients with newly diagnosed HNPGLs. 3 The prevalence of catecholamine secreting HNPGLs is typically low (approximately 3%–4% of tumours). 4 , 5 However, this estimate is based primarily on data from limited series published 20 years ago. More recent evidence suggests that the rate of catecholamine secreting HNPGLs may exceed 10%. 6 Recently, biochemical and genetic characteristics of HNPGLs, including standard treatment paradigms, have been vastly transformed. 7 , 8 There now exists a clear gap in the literature regarding the true rate and factors characteristic of catecholamine secreting HNPGLs, particularly those whose functional status may pose significant challenges for peri‐operative management. Here, we report a large series of patients with HNPGLs with the primary aim of characterizing the prevalence and features of HNPGLs that secrete catecholamines at a level significant enough to cause symptoms or warrant consideration of adrenergic blockade, herein termed ‘clinically significant’. In an era of increasingly personalized treatment for these tumours, our data may inform contemporary best practices for biochemical screening and multi‐disciplinary management of HNPGLs. CONCLUSIONS: The rate of catecholamine excess in patients with HNPGLs may be higher than previously thought. Tumours of any anatomic subsite may secrete catecholamines, although not all increased laboratory level(s) are indicative of clinically significant catecholamine secretion causing symptoms or warranting adrenergic blockade. Our series provides a comprehensive, contemporary update on biochemical profiles of HNPGL in an era of evolving diagnostic and management standards for these tumours.
Background: We sought to characterize the prevalence and factors characteristic of head and neck paragangliomas (HNPGLs) that secrete catecholamines to inform best practices for diagnosis and management. Methods: This was a retrospective cohort study from 2000 to 2020 at a single-institution tertiary centre. One-hundred fifty-two patients (182 tumours) with HNPGLs with at least one measurement of urine or plasma catecholamines and/or catecholamine metabolite levels prior to treatment were included. We differentiated and characterized those patients with increased level(s) of any nature and those with 'clinically significant' versus 'clinically insignificant' catecholamine production. Results: Thirty-one (20.4%) patients had increased catecholamine and/or catecholamine metabolite levels. In most patients, these levels were ≤5-fold above the upper limit of the reference range. Six of these 31 patients with increased levels were ultimately found to have an additional catecholamine secreting mediastinal paraganglioma or pheochromocytoma. Fourteen of 31 patients with HNPGL were deemed clinically significant secretors of catecholamines based on hyper-adrenergic symptoms and/or profound levels of normetanephrines. This cohort was enriched for patients with paragangliomas of the carotid body or cervical sympathetic chain and those with SDHB genetic mutations. Ultimately, the prevalence of clinically significant catecholamine secreting HNPGLs was determined to be 9.2% and 7.7% on a per-patient and per-tumour basis, respectively. Conclusions: The rate of catecholamine excess in the current cohort of patients with HNPGLs was higher than previously reported. Neuroendocrine tumours of any anatomic subsite may secrete catecholamines, although not all increased laboratory level(s) are indicative of clinically significant catecholamine secretion causing symptoms or warranting adrenergic blockade.
3,547
311
[ 372, 78 ]
8
[ "patients", "catecholamine", "hnpgls", "laboratory", "significant", "clinically", "level", "symptoms", "hnpgl", "increased" ]
[ "tumour secretion catecholamines", "paraganglioma mp adrenal", "diagnosing catecholamine", "catecholamine secreting hnpgls", "hnpgls prevalence catecholamine" ]
[CONTENT] adrenergic | catecholamines | functional | head and neck | metanephrines | paragangliomas | vasoactive [SUMMARY]
[CONTENT] adrenergic | catecholamines | functional | head and neck | metanephrines | paragangliomas | vasoactive [SUMMARY]
[CONTENT] adrenergic | catecholamines | functional | head and neck | metanephrines | paragangliomas | vasoactive [SUMMARY]
[CONTENT] adrenergic | catecholamines | functional | head and neck | metanephrines | paragangliomas | vasoactive [SUMMARY]
[CONTENT] adrenergic | catecholamines | functional | head and neck | metanephrines | paragangliomas | vasoactive [SUMMARY]
[CONTENT] adrenergic | catecholamines | functional | head and neck | metanephrines | paragangliomas | vasoactive [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Catecholamines | Head and Neck Neoplasms | Humans | Paraganglioma | Prevalence | Retrospective Studies [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Catecholamines | Head and Neck Neoplasms | Humans | Paraganglioma | Prevalence | Retrospective Studies [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Catecholamines | Head and Neck Neoplasms | Humans | Paraganglioma | Prevalence | Retrospective Studies [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Catecholamines | Head and Neck Neoplasms | Humans | Paraganglioma | Prevalence | Retrospective Studies [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Catecholamines | Head and Neck Neoplasms | Humans | Paraganglioma | Prevalence | Retrospective Studies [SUMMARY]
[CONTENT] Adrenal Gland Neoplasms | Catecholamines | Head and Neck Neoplasms | Humans | Paraganglioma | Prevalence | Retrospective Studies [SUMMARY]
[CONTENT] tumour secretion catecholamines | paraganglioma mp adrenal | diagnosing catecholamine | catecholamine secreting hnpgls | hnpgls prevalence catecholamine [SUMMARY]
[CONTENT] tumour secretion catecholamines | paraganglioma mp adrenal | diagnosing catecholamine | catecholamine secreting hnpgls | hnpgls prevalence catecholamine [SUMMARY]
[CONTENT] tumour secretion catecholamines | paraganglioma mp adrenal | diagnosing catecholamine | catecholamine secreting hnpgls | hnpgls prevalence catecholamine [SUMMARY]
[CONTENT] tumour secretion catecholamines | paraganglioma mp adrenal | diagnosing catecholamine | catecholamine secreting hnpgls | hnpgls prevalence catecholamine [SUMMARY]
[CONTENT] tumour secretion catecholamines | paraganglioma mp adrenal | diagnosing catecholamine | catecholamine secreting hnpgls | hnpgls prevalence catecholamine [SUMMARY]
[CONTENT] tumour secretion catecholamines | paraganglioma mp adrenal | diagnosing catecholamine | catecholamine secreting hnpgls | hnpgls prevalence catecholamine [SUMMARY]
[CONTENT] patients | catecholamine | hnpgls | laboratory | significant | clinically | level | symptoms | hnpgl | increased [SUMMARY]
[CONTENT] patients | catecholamine | hnpgls | laboratory | significant | clinically | level | symptoms | hnpgl | increased [SUMMARY]
[CONTENT] patients | catecholamine | hnpgls | laboratory | significant | clinically | level | symptoms | hnpgl | increased [SUMMARY]
[CONTENT] patients | catecholamine | hnpgls | laboratory | significant | clinically | level | symptoms | hnpgl | increased [SUMMARY]
[CONTENT] patients | catecholamine | hnpgls | laboratory | significant | clinically | level | symptoms | hnpgl | increased [SUMMARY]
[CONTENT] patients | catecholamine | hnpgls | laboratory | significant | clinically | level | symptoms | hnpgl | increased [SUMMARY]
[CONTENT] hnpgls | catecholamine | secreting hnpgls | catecholamine secreting hnpgls | catecholamine secreting | secreting | patients | significant | biochemical | nerve [SUMMARY]
[CONTENT] defined | laboratory | presentation | reference | presentation defined | level | catecholamine | reference range | range | clinical [SUMMARY]
[CONTENT] patients | laboratory | catecholamine | drawn | hnpgl | 152 | 31 | laboratory level | significant | figure [SUMMARY]
[CONTENT] catecholamine | tumours | rate catecholamine excess | adrenergic blockade series | diagnostic management | management standards | management standards tumours | tumours anatomic subsite secrete | tumours anatomic subsite | tumours anatomic [SUMMARY]
[CONTENT] catecholamine | patients | hnpgls | laboratory | significant | level | clinically | data | manuscript | tumours [SUMMARY]
[CONTENT] catecholamine | patients | hnpgls | laboratory | significant | level | clinically | data | manuscript | tumours [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] 2000 | 2020 ||| One-hundred fifty-two | 182 tumours | at least one ||| [SUMMARY]
[CONTENT] Thirty-one | 20.4% ||| ≤5-fold ||| Four | 31 ||| Fourteen | 31 | HNPGL ||| SDHB ||| Hangs | 9.2% | 7.7% [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| 2000 | 2020 ||| One-hundred fifty-two | 182 tumours | at least one ||| ||| Thirty-one | 20.4% ||| ≤5-fold ||| Four | 31 ||| Fourteen | 31 | HNPGL ||| SDHB ||| Hangs | 9.2% | 7.7% ||| ||| [SUMMARY]
[CONTENT] ||| 2000 | 2020 ||| One-hundred fifty-two | 182 tumours | at least one ||| ||| Thirty-one | 20.4% ||| ≤5-fold ||| Four | 31 ||| Fourteen | 31 | HNPGL ||| SDHB ||| Hangs | 9.2% | 7.7% ||| ||| [SUMMARY]
Biofilm formation of Clostridium difficile and susceptibility to Manuka honey.
25181951
Biofilm bacteria are relatively more resistant to antibiotics. The escalating trend of antibiotic resistance highlights the need for evaluating alternative potential therapeutic agents with antibacterial properties. The use of honey for treating microbial infections dates back to ancient times, though the antimicrobial properties of Manuka honey were discovered only recently. The aim of this study was to demonstrate biofilm formation by specific Clostridium difficile strains and evaluate the susceptibility of the biofilm to Manuka honey.
BACKGROUND
Three C. difficile strains were used in the study, including the ATCC 9689 strain, a ribotype 027 strain and a ribotype 106 strain. Each test strain was grown in sterile microtitre plates and incubated at 37°C for 24 and 48 hours in an anaerobic cabinet to allow formation of adherent growth (biofilm) on the walls of the wells. The effect of Manuka honey on the biofilms formed was investigated at concentrations of 1-50% (w/v).
METHODS
The three C. difficile strains tested formed biofilms after 24 hours, with the ribotype 027 strain producing the most extensive growth. There was no significant difference (p > 0.05) in the amount of biofilm formed after 24 and 48 hours of incubation for any of the three C. difficile strains. A dose-response relationship between the concentration of Manuka honey and biofilm formation was observed for all the test strains, and the optimum Manuka honey activity occurred at 40-50% (w/v).
RESULTS
Manuka honey has antibacterial properties capable of inhibiting biofilms formed in vitro by C. difficile.
CONCLUSION
[ "Anti-Bacterial Agents", "Biofilms", "Clostridioides difficile", "Dose-Response Relationship, Drug", "Honey", "Microbial Sensitivity Tests" ]
4174649
Background
Biofilms are complex structures of polysaccharide matrix excreted by bacteria in the form of a slimy, glue-like substance that adheres to material surfaces, especially when exposed to some amount of water [1]. Within the last decade, many studies have been done on biofilms which have established their association with infections and contaminations [1]. The conventional methods of killing bacteria by using antibiotics and disinfection are often unsuccessful with biofilm-forming bacteria [1]. Thus, biofilm-forming bacteria may pose a relatively greater threat to public health. For instance, it is reported that patients suffering from biofilm-associated infections tend to stay longer in hospital than expected [2, 3]. Worldwide, it is estimated that biofilms are associated with 65 percent of nosocomial infections and contribute to a high death rate and 2-14% of all surgical wound complications [2]. Biofilm formation by bacteria also has serious economic implications. Microorganisms associated with biofilm formation are reported to cost some nations billions of dollars yearly in medical infections, equipment damage and product contamination [4]. Some bacterial organisms known to form biofilms include Pseudomonas aeruginosa, Staphylococcus aureus, Listeria monocytogenes and Clostridium difficile [5, 6]. Biofilm formation by C. difficile was first reported in 2012 and has since then been demonstrated with a few strains of the organism [7–10]. Evidence from some studies indicates that the amount of biofilm formed by C. difficile varies from strain to strain [4, 5]. Mutagenesis studies have identified several surface proteins such as SlecC, Cwp84 and LuxS that are required for biofilm formation of C. difficile [8, 10]. There is also evidence that some level of sporulation occurs in C. difficile biofilm [11, 12]. The use of honey for treating microbial infections dates back to ancient times, though the antimicrobial properties of Manuka honey were discovered only recently. The antibacterial effect of Manuka honey against bacterial biofilms has been demonstrated for several organisms such as Streptococcus pyogenes [13], Pseudomonas aeruginosa [14], Enterococcus faecalis [15] and Streptococcus mutans [16]. Bactericidal effects have been found in both planktonic cultures and biofilm, although higher concentrations were required to inhibit biofilms [17]. In a biofilm study, Maddocks et al. [17] reported that sublethal concentrations of Manuka honey disrupted the binding of S. pyogenes to human fibronectin but did not prevent binding to fibrinogen. Manuka honey thus appears to be a promising antibacterial agent in this era of diminishing antimicrobial agents. However, further studies on its antibacterial properties are required, especially with highly resistant pathogens such as C. difficile. C. difficile, the causative agent of severe inflammation of the bowel (pseudomembranous colitis), has become the most significant cause of nosocomial antibiotic-associated diarrhoea reported worldwide [18, 19]. The Centers for Disease Control and Prevention (CDC) estimate that nearly 250,000 serious C. difficile infections (CDI) occur in the US annually, at a cost of at least one billion dollars, resulting in 14,000 deaths (CDC, 2013) [20]. This high public health burden associated with C. difficile is partly due to the trend of increasing resistance of the organism to several essential antibiotics, a problem which highlights the need for alternative treatment methods of C. difficile infections. 
In a previous study, we demonstrated the susceptibility of C. difficile to Manuka honey (Leptospermum scoparium) [21]. The findings of that study showed that Manuka honey exhibits bactericidal activity against C. difficile, with minimum inhibitory and bactericidal concentrations of 6.25% (v/v) [21]. However, it is unknown whether biofilm formed by C. difficile is also susceptible to Manuka honey. Biofilm formation by C. difficile has itself only recently been reported, and it is important to confirm this with various C. difficile strains. The aim of this study was to demonstrate biofilm formation by specific strains of C. difficile and the antibacterial effect of Manuka honey on the C. difficile biofilm.
Methods
Clostridium difficile strains Three C. difficile strains were used for the experiments in this study, and they included the ATCC 9689 strain, a ribotype 027 strain and a ribotype 106 strain. These strains were selected for the study due to their epidemiological or clinical significance. The C. difficile strains were obtained from University of Wales Hospital and were maintained in Robertson's cooked meat medium (Oxoid, Cambridge, UK) at the Department of Microbiology, University of Wales Institute Cardiff where the study was carried out. Prior to using the C. difficile strains in the experimental work, they were purified on blood agar plates. Manuka honey Woundcare™ 18+ Active Manuka honey (potency equivalent of greater than 18% (w/v) phenol) with non-peroxide antibacterial activity from Comvita UK was used in this study. Microtitre plate assay for the assessment of biofilm formation in C. difficile strains The experiments performed to determine the capability of the C. difficile strains to form biofilms were based on previously described methods [4]. The three C. difficile test strains were cultured overnight in Reinforced Clostridial Medium (RCM) broth at 37°C for 24 hours. For each strain, a 1:100 dilution of inoculum was made in a sterile broth bottle by pipetting 1 ml of each strain into 99 ml of RCM broth and vortexing to achieve a good mixture. An aliquot of 200 μl of each diluted inoculum was dispensed into a 96-well Nunc flat-bottom microtitre plate. The plates and contents were incubated at 37°C for 24 and 48 hours in an anaerobic cabinet to allow for formation of biofilms on the walls of the wells. For each experiment, wells of RCM broth without C. difficile strains were used as negative controls. At the end of the 24 and 48 hours of incubation, the plates were removed from the anaerobic cabinet and the cultures were carefully removed using a Pasteur pipette. Subsequently, 200 μl of 2.5% glutaraldehyde solution was pipetted into each of the drained wells and allowed to stand for 5 minutes to allow fixation. The glutaraldehyde solution was then removed and the empty wells were washed by dispensing 200 μl of phosphate buffered saline (PBS) (Oxoid, Cambridge, UK) in them. The PBS was discarded and the wells were stained with 200 μl of 0.25% (w/v) aqueous crystal violet for 5 minutes. After this time, the wells were washed with PBS eight times and allowed to air dry. The quantity of biofilm formed was analysed by adding 200 μl of solvent (1:1 ethanol and acetone solution) to each well to dissolve dye from adherent cells (biofilm). Absorbance was measured within 5 minutes of adding the solvent at 570 nm using a Dynex plate reader. 
The microtitre plate biofilm assay was performed three times on separate occasions for all test C. difficile strains. Determination of the effect of various honey concentrations on C. difficile biofilms The three C. difficile test strains were cultured overnight in RCM broth at 37°C for 24 hours. For each strain, an aliquot of 200 μl of diluted inoculum (prepared as described above) was dispensed into each well of a 96-well microtitre plate, starting from the first column (Column 1), which contained inoculum but no Manuka honey (positive control). The last column (Column 12) contained RCM only (no inoculum) and was used as a negative control. Outer rows and columns were unused, except for Column 12, to avoid edge effects. The plates were incubated at 37°C for 24 hours in an anaerobic cabinet to allow for the formation of biofilms on the walls of the wells. After incubation, a sterile Pasteur pipette was used to carefully remove the liquid phase containing any planktonic growth from each well into a discard jar, leaving adherent biofilm attached to the well walls. However, the positive control wells were re-filled with 200 μl of RCM to keep the biofilm alive. With the exception of the negative control wells, each well was re-filled with 200 μl of Manuka honey solution and incubated at 37°C for 24 hours in an anaerobic cabinet. Different concentrations of Manuka honey (0, 1, 2, 4, 8, 10, 20, 30, 40, 50% (w/v)) were included in the experimental setup and were prepared by dilution with RCM. 
After incubation, the liquid phase in each well (containing any overnight planktonic growth together with the honey solutions) was discarded into the discard jar using a sterile Pasteur pipette. The adherent growth was fixed with 200 μl of 2.5% glutaraldehyde for 5 minutes. The fixative was removed into a toxic waste bottle and the wells washed twice with PBS. All the wells were stained with 200 μl of 0.25% crystal violet for 5 minutes. The stain was then removed and the wells washed 8 times with PBS. It was ensured that any remaining liquid was drained from the plates by inverting them vigorously onto paper hand towels. At this point, biofilms were visible as purple rings formed on the side of each well. The quantity of biofilm formed was analysed by adding 200 μl of solvent (1:1 ethanol and acetone solution) to each well to dissolve dye-associated cells. The reading was taken (within 5 minutes of adding the solvent) at 570 nm in a Dynex plate reader. The microtitre plate biofilm assay was performed three times on separate occasions for all C. difficile test strains. 
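To make the dilution arithmetic concrete, here is a minimal sketch of how the % (w/v) honey series could be computed (the helper function and the 10 ml working volume are hypothetical; the paper states only that dilutions were prepared in RCM):

```python
def grams_honey_for_wv(target_pct_wv, final_volume_ml):
    """% (w/v) is grams of solute per 100 ml of final solution."""
    return target_pct_wv / 100.0 * final_volume_ml

for pct in (1, 2, 4, 8, 10, 20, 30, 40, 50):
    g = grams_honey_for_wv(pct, final_volume_ml=10.0)
    print(f"{pct:>2}% (w/v): dissolve {g:.1f} g honey, make up to 10 ml with RCM")
```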
Statistical analysis The experimental data were entered into Microsoft Excel® 2010 and analysed. Mean biofilm biomass was calculated for each C. difficile strain at 24 and 48 hours. Student's t-test was used to test for any significant difference in biofilm formation between the two incubation times for each of the strains. A p-value less than or equal to 0.05 was considered significant.
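For illustration, the 24 h versus 48 h comparison on the crystal violet OD570 readings could be reproduced as follows (a minimal sketch with invented placeholder readings; the study itself used Microsoft Excel rather than Python):

```python
from statistics import mean
from scipy import stats  # SciPy's two-sample Student's t-test

# Hypothetical OD570 readings from triplicate biofilm assays of one strain.
od570_24h = [0.42, 0.39, 0.45]
od570_48h = [0.44, 0.41, 0.47]

print(f"mean biofilm biomass: 24 h = {mean(od570_24h):.2f}, 48 h = {mean(od570_48h):.2f}")

# Two-tailed test with alpha = 0.05, as described in the statistical analysis.
t_stat, p_value = stats.ttest_ind(od570_24h, od570_48h)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value <= 0.05}")
```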
Results and discussion
In this study we investigated biofilm formation by C. difficile and its susceptibility to Manuka honey. Figure 1 shows that all three C. difficile strains formed biofilms and that there was no significant difference (p = 0.3901) between biofilm formation at 24 hours and at 48 hours.

Figure 1. Formation of adherent growth (biofilm) by C. difficile strains after 24 and 48 hours of incubation.

Bacterial biofilm formation begins with bacterial attachment to a material surface. This initial attachment is influenced mainly by a combination of environmental factors (such as nutrient levels, temperature, pH and duration of attachment) and genetic factors [1]. Once attached to a suitable surface, the cells multiply and grow to form a thin layer (monolayer) where conditions on the surface are favourable. At this stage, the cells undergo a developmental change which leads to the production of a complex structure referred to as the exopolysaccharide glycocalyx polymer matrix, one of the hallmarks of a mature biofilm [22–24].

The ability of C. difficile to form biofilms observed in this study concurs with the findings of several other investigators [7–10]. As shown in Figure 1, among the three C. difficile strains, the ribotype 027 strain showed the highest potential for biofilm formation. Biofilm formation has been linked with the virulence of several bacterial pathogens, including C. difficile [8, 13]. C. difficile ribotype 027 is known to be a hypervirulent strain and the commonest cause of C. difficile-associated outbreaks [25]. Data from our study appear to indicate that the high virulence and epidemiological significance of C. difficile ribotype 027 strains may be related to their relatively greater ability to form biofilms compared with other C. difficile strains, such as ribotype 106 strains. In their study of C. difficile biofilm, Dapa et al. [8] also observed that the hypervirulent strain (ribotype 027) produced more biofilm than a less virulent C. difficile strain. Currently, very little is known about the actual role of biofilm in the pathogenicity and pathogenesis of C. difficile, and further studies are required to elucidate this.

The extent to which Manuka honey inhibited the established biofilm was determined by comparing the amount of biofilm formed in wells with and without Manuka honey. This experiment showed that, in general, there was a dose-response relationship between the amount of biofilm depleted and the concentration of Manuka honey (Figure 2). Concentrations of Manuka honey below 20% (w/v) appeared to have little or no effect on the established biofilm of each of the three C. difficile strains. However, concentrations between 20 and 50% (w/v) resulted in decreasing amounts of biofilm for all test strains after 24 hours. Although the MIC and MBC of Manuka honey against suspensions of the C. difficile strains used in this study were 6.25% (v/v) [21], much higher concentrations of 30-50% (w/v) were required to deplete the biofilms formed by the C. difficile strains.

Figure 2. The effect of varying concentrations of Manuka honey on biofilm of C. difficile strains.

This may be due to the ability of the sessile bacteria (the C. difficile biofilm) to secrete proteins and polymeric sugars which serve as protection and enhance quorum development for survival [1]. Similar results were reported by Okhiria et al. [26]. Fux et al. [27] showed that biofilms are strongly resistant to biocides, drying and most environmental stresses. Ashby et al. [28] and Costerton et al. [29] reported that the ability of biofilm to resist antibiotics could be due to its slow growth rate, since the effect of bactericides on biofilm usually declines with lower growth rates.

From this study, it may be inferred that the most suitable Manuka honey concentrations for significant inhibition of C. difficile biofilm were 40 and 50% (w/v). Similarly, Okhiria et al. [26] reported that Pseudomonas biofilm exposed to 40% (w/v) Manuka honey showed significant inhibition, whereas 20% (w/v) Manuka honey did not. From Figure 2, it can be observed that Manuka honey did not achieve 100% depletion of the biofilm formed by the C. difficile strains. Alandejani et al. [30] likewise reported that the effectiveness of Manuka honey against biofilms formed by S. aureus and P. aeruginosa was less than 100%. Nevertheless, based on this study and others, Manuka honey has an appreciable antibacterial activity against bacterial biofilms [31]. While in this study we demonstrated the ability of Manuka honey to inhibit biofilm formation by C. difficile, Maddocks et al. [17] demonstrated the ability of Manuka honey to disrupt pre-formed biofilms of Streptococcus pyogenes, and Ansari et al. [31] recently reported that Jujube honey can disrupt pre-formed biofilms of Candida albicans. The antimicrobial effect of Manuka honey is attributed to a property referred to as the Unique Manuka Factor, which is absent in other types of honey [32]. Various studies have identified methylglyoxal as the active ingredient in Manuka honey [33, 34], a compound known to act synergistically with some antibiotics such as piperacillin [35].

We previously demonstrated the susceptibility of C. difficile to Manuka honey, and in this study we have shown that biofilm formed by the organism is similarly susceptible. Overall, these findings have important implications for the treatment of C. difficile infections, given the organism's escalating resistance to several essential antibiotics. In the light of the present findings, further studies should determine the rates and concentrations at which Manuka honey inhibits C. difficile biofilms in vivo. Additionally, it would be useful to investigate the effect of Manuka honey on C. difficile spores.
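As a minimal sketch of how the dose-response relationship in Figure 2 can be expressed numerically, the example below blank-corrects each reading and reports it as a percentage of the honey-free positive control. The OD570 values and the blank are illustrative assumptions, not the study's data.

```python
# Illustrative normalisation of crystal violet readings against the
# honey-free positive control (column 1) and the RCM-only blank (column 12).
# All numbers are invented placeholders for one hypothetical strain.

concentrations = [0, 1, 2, 4, 8, 10, 20, 30, 40, 50]  # % (w/v) Manuka honey
od570 = [0.60, 0.59, 0.58, 0.58, 0.57, 0.55,
         0.45, 0.33, 0.21, 0.18]                      # mean OD570 per column
blank = 0.05  # negative control (RCM only)

control = od570[0] - blank  # 0% honey = positive control
for c, od in zip(concentrations, od570):
    remaining = (od - blank) / control * 100.0
    print(f"{c:>2}% (w/v): {remaining:5.1f}% of control biofilm remaining")
```

On data shaped like this, concentrations below 20% (w/v) leave most of the biofilm intact while 40-50% (w/v) removes the bulk of it, matching the pattern reported above.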
Conclusions
In this study, we have demonstrated biofilm formation of specific C. difficile strains including ATCC 9689, ribotype 027 and ribotype 106. We have also shown that Manuka honey exhibits antibacterial activity against C. difficile biofilm with the optimum activity occurring at 40-50% (w/v). The bactericidal action of Manuka honey may be exploited practically by incorporating a solution of 40-50% (w/v) Manuka honey in topical and hand-washing formulations in care homes and hospitals or places where C. difficile populations are likely to be high.
Keywords: Clostridium difficile; Biofilm; Manuka honey; Antibacterial; Susceptibility
Background: Biofilms are complex structures of polysaccharide matrix excreted by bacteria in a form of slimy, glue-like substance that adhere to material surfaces especially, when exposed to some amount of water [1]. Within the last decade, many studies have been done on biofilms which have established their association with infections and contaminations [1]. The conventional methods of killing bacteria by using antibiotics and disinfection are often unsuccessful with biofilm forming bacteria [1]. Thus biofilm forming bacteria may pose a relatively greater threat to public health. For instance, it is reported that patients suffering from biofilm associated infections tend to stay longer in hospital than expected [2, 3]. Worldwide, it is estimated that biofilms are associated with 65 percent of nosocomial infections and contribute to high death rate and 2-14% of all surgical wounds complications [2]. Biofilm formation by bacteria also has serious economic implications. It is reported that a microorganism associated with biofilm formation is reported to cost some nations billions of dollars yearly in medical infections, equipment damage and product contamination [4]. Some bacterial organisms known to form biofilms include Pseudomonas aeruginosa, Staphylococcus aureus, Listeria monocytogenes and Clostridium difficile [5, 6]. Biofilm formation by C. difficile was first reported in 2012 and has since then been demonstrated with a few strains of the organism [7–10]. Evidence from some studies indicate that amount of biofilm formed by C. difficile varies from strain to strain [4, 5]. Mutagenesis studies have identified several surface proteins such as SlecC, Cwp84 and LuxS that are required for biofilm formation of C. difficile [8, 10]. There is also evidence that some level of sporulation occurs in C. difficile biofilm [11, 12]. The use of honey for treating microbial infections dates back to ancient times, though antimicrobial properties of Manuka honey was discovered recently. The antibacterial effect of Manuka honey against bacterial biofilms has been demonstrated for several organisms such as Streptococcus pyogenes [13], Pseudomonas aeruginosa [14], Enterococcus faecalis [15] and Streptococcus mutans [16]. Bactericidal effects have been found in both planktonic cultures and biofilm, although higher concentrations were required to inhibit biofilms [17]. In a biofilm study, Maddocks et al. [17] reported that sublethal concentrations of Manuka honey disrupted the binding of S. pyogenes to the human fibronectin but did not prevent binding to fibrinogen. Manuka honey thus appears to be a promising antibacterial agent in this era of diminishing antimicrobial agents. However, further studies on its antibacterial properties are required especially, with highly resistant pathogens such as C. difficile. C. difficile, the causative agent of severe inflammation of the bowel (pseudomembranous colitis), has become the most significant nosocomial antibiotic-associated diarrhoea reported worldwide [18, 19]. The Centers for Disease Control and Prevention (CDC) estimate that nearly 250,000 serious C. difficile infections (CDI) occur in the US annually, at a cost of at least one billion dollars, resulting in 14,000 deaths (CDC, 2013) [20]. This high public health burden associated with C. difficile is partly due to the trend of increasing resistance of the organism to several essential antibiotics, a problem which highlights the need for alternative treatment methods of C. difficile infections. 
In a previous study, we demonstrated the susceptibility of C. difficile to Manuka honey (Leptospermum scoparium) [21]. The findings of the study showed that Manuka honey exhibits a bactericidal activity against C. difficile with minimum inhibitory and bactericidal concentrations of 6.25% (v/v) [21]. However, it is unknown if biofilm formed by C. difficle is also susceptible to Manuka honey. Biofilm formation by C. difficile in itself has been recently reported, and it is important to confirm this with various C. difficile strains. The aim of this study was to demonstrate biofilm formation of specific strains of C. difficile and the antibacterial effect of Manuka honey on the C. difficile biofilm. Methods: Clostridium difficilestrains Three C. difficile strains were used for the experiments in this study, and they included the ATCC 9689 strain, a ribotype 027 strain and a ribotype 106 strain. These strains were selected for the study due to their epidemiological or clinical significance. The C. difficile strains were obtained from University of Wales Hospital and were maintained in Robertson’s Cook meat medium (Oxoid, Cambridge, UK) at the Department of Microbiology, University of Wales Institute Cardiff where the study was carried out. Prior to using the C. difficile strains in the experimental work, they were purified on blood agar plates. Three C. difficile strains were used for the experiments in this study, and they included the ATCC 9689 strain, a ribotype 027 strain and a ribotype 106 strain. These strains were selected for the study due to their epidemiological or clinical significance. The C. difficile strains were obtained from University of Wales Hospital and were maintained in Robertson’s Cook meat medium (Oxoid, Cambridge, UK) at the Department of Microbiology, University of Wales Institute Cardiff where the study was carried out. Prior to using the C. difficile strains in the experimental work, they were purified on blood agar plates. Manuka honey Woundcare™ 18+ Active Manuka honey (potency equivalent of greater than 18% (w/v) phenol) with non-peroxide antibacterial activity from Comvita UK was used in this study. Woundcare™ 18+ Active Manuka honey (potency equivalent of greater than 18% (w/v) phenol) with non-peroxide antibacterial activity from Comvita UK was used in this study. Microtitre plate assay for the assessment of biofilm formation in C. difficile strains The experiments performed to determine the capability of the C. difficile strains to form biofilms were based on the previously described methods [4]. The three C. difficile test strains were cultured overnight in Reinforced Clostridial Medium (RCM) broth at 37°C for 24 hours. For each strain, a dilution of 1:100 inoculum was made in a sterile broth bottle by pipetting 1 ml of each strain into 99 ml of RCM broth and vortexing to achieve a good mixture. An aliquot of 200 μl of each diluted inoculum was dispensed into a 96-well Nunc flat bottom microtitre plate. The plates and contents were incubated at 37°C for 24 and 48 hours in an anaerobic cabinet to allow for formation of biofilms on the walls of the wells. For each experiment, wells of RCM broth without C. difficile strains were used as negative control. At the end of the 24 and 48 hours of incubation, the plates were removed from the anaerobic cabinet and the cultures were carefully removed by using a Pasteur pipette. 
Subsequently, 200 μl of 2.5% glutaraldehyde solution was pipetted into each of the drained wells, and allowed to stand for 5 minutes to allow fixation. The glutaraldehyde solution was then removed and the empty wells were washed by dispensing 200 μl of phosphate buffered saline (PBS) (Oxoid, Cambridge, UK) in them. The PBS was discarded and the wells were stained with 200 μl of 0.25% (w/v) aqueous crystal violet for 5 minutes. After this time, the wells were washed with PBS eight times and allowed to air dry. The quantity of biofilm formed was analyzed by adding 200 μl solvent (1:1 ethanol and acetone solution) to each well to dissolve dye from adherent cells (biofilm). Absorbance was measured within 5 minutes of adding the solvent at 570 nm using a Dynex plate reader. The microtitre plate biofilm assay was performed three times on separate occasions for all test C. difficile strains. The experiments performed to determine the capability of the C. difficile strains to form biofilms were based on the previously described methods [4]. The three C. difficile test strains were cultured overnight in Reinforced Clostridial Medium (RCM) broth at 37°C for 24 hours. For each strain, a dilution of 1:100 inoculum was made in a sterile broth bottle by pipetting 1 ml of each strain into 99 ml of RCM broth and vortexing to achieve a good mixture. An aliquot of 200 μl of each diluted inoculum was dispensed into a 96-well Nunc flat bottom microtitre plate. The plates and contents were incubated at 37°C for 24 and 48 hours in an anaerobic cabinet to allow for formation of biofilms on the walls of the wells. For each experiment, wells of RCM broth without C. difficile strains were used as negative control. At the end of the 24 and 48 hours of incubation, the plates were removed from the anaerobic cabinet and the cultures were carefully removed by using a Pasteur pipette. Subsequently, 200 μl of 2.5% glutaraldehyde solution was pipetted into each of the drained wells, and allowed to stand for 5 minutes to allow fixation. The glutaraldehyde solution was then removed and the empty wells were washed by dispensing 200 μl of phosphate buffered saline (PBS) (Oxoid, Cambridge, UK) in them. The PBS was discarded and the wells were stained with 200 μl of 0.25% (w/v) aqueous crystal violet for 5 minutes. After this time, the wells were washed with PBS eight times and allowed to air dry. The quantity of biofilm formed was analyzed by adding 200 μl solvent (1:1 ethanol and acetone solution) to each well to dissolve dye from adherent cells (biofilm). Absorbance was measured within 5 minutes of adding the solvent at 570 nm using a Dynex plate reader. The microtitre plate biofilm assay was performed three times on separate occasions for all test C. difficile strains. Determination of the effect of various honey concentrations on C. difficile biofilms The three C. difficile test strains were cultured overnight in an RCM broth at 37°C for 24 hours. For each strain, an aliquot of 200 μl of each diluted inoculum (as described in Section 3.3) was dispensed into each well of a 96-microtitre plate, starting from the first column (Column 1) which contained inoculum but no Manuka honey (positive control). The last column (Column 12) contained RCM only (no inoculum) and was used as a negative control. Outer rows and columns were unused, except for column 12 to avoid edge effects. The plates were incubated at 37°C for 24 hours in an anaerobic cabinet to allow for the formation of biofilms on the walls of the wells. 
After incubation, a sterile Pasteur pipette was used to carefully remove the liquid phase containing any planktonic growth from each well into a discard jar, leaving adherent biofilm attached to the well walls. However the positive control was re-filled with 200 μl RCM to keep the biofilm alive. With the exception the negative control wells, each well was re-filled with 200 μl of Manuka honey solutions and incubated at 37°C for 24 hours in an anaerobic cabinet. Different concentrations of Manuka honey (0, 1, 2, 4, 8, 10, 20, 30, 40, 50% (w/v)) were included in the experimental setup and were prepared by dilutions with RCM. After incubation, the liquid phase in each well (containing any overnight planktonic growth together with the honey solutions) was discarded into the discard jar using a sterile Pasteur pipette. The adherent growth was fixed with 200 μl of 2.5% glutaraldehyde for 5 minutes. The fixative was removed into a toxic waste bottle and the wells washed twice with PBS. All the wells were stained with 200 μl of 0.25% crystal violet for 5 minutes. The stain was then removed and the wells washed 8 times with PBS. It was ensured that any remaining liquid was drained from the plates by inverting them vigorously onto paper hand towels. At this point, biofilms were visible as purple rings formed on the side of each well. The quantity of biofilms formed were analysed by adding 200 μl solvent (1:1 ethanol and acetone solution) to each well to dissolve dye associated cells. The reading was taken (within 5 minutes of adding the solvent) at 570 nm in a Dynex plate reader. The microtitre plate biofilm assay was performed three times on separate occasions for all C. difficile test strains. The three C. difficile test strains were cultured overnight in an RCM broth at 37°C for 24 hours. For each strain, an aliquot of 200 μl of each diluted inoculum (as described in Section 3.3) was dispensed into each well of a 96-microtitre plate, starting from the first column (Column 1) which contained inoculum but no Manuka honey (positive control). The last column (Column 12) contained RCM only (no inoculum) and was used as a negative control. Outer rows and columns were unused, except for column 12 to avoid edge effects. The plates were incubated at 37°C for 24 hours in an anaerobic cabinet to allow for the formation of biofilms on the walls of the wells. After incubation, a sterile Pasteur pipette was used to carefully remove the liquid phase containing any planktonic growth from each well into a discard jar, leaving adherent biofilm attached to the well walls. However the positive control was re-filled with 200 μl RCM to keep the biofilm alive. With the exception the negative control wells, each well was re-filled with 200 μl of Manuka honey solutions and incubated at 37°C for 24 hours in an anaerobic cabinet. Different concentrations of Manuka honey (0, 1, 2, 4, 8, 10, 20, 30, 40, 50% (w/v)) were included in the experimental setup and were prepared by dilutions with RCM. After incubation, the liquid phase in each well (containing any overnight planktonic growth together with the honey solutions) was discarded into the discard jar using a sterile Pasteur pipette. The adherent growth was fixed with 200 μl of 2.5% glutaraldehyde for 5 minutes. The fixative was removed into a toxic waste bottle and the wells washed twice with PBS. All the wells were stained with 200 μl of 0.25% crystal violet for 5 minutes. The stain was then removed and the wells washed 8 times with PBS. 
Statistical analysis The experimental data were entered into Microsoft Excel® 2010 and analyzed. Mean biofilm biomass was calculated for each C. difficile strain at 24 and 48 hours. Student's t-test was used to test for any significant difference in biofilm formation between the two incubation times for the different strains. A p-value less than or equal to 0.05 was considered significant. Clostridium difficile strains: Three C. difficile strains were used for the experiments in this study: the ATCC 9689 strain, a ribotype 027 strain, and a ribotype 106 strain. These strains were selected for the study due to their epidemiological or clinical significance. The C. difficile strains were obtained from the University of Wales Hospital and were maintained in Robertson's cooked meat medium (Oxoid, Cambridge, UK) at the Department of Microbiology, University of Wales Institute Cardiff, where the study was carried out. Prior to using the C. difficile strains in the experimental work, they were purified on blood agar plates. Manuka honey: Woundcare™ 18+ Active Manuka honey (potency equivalent of greater than 18% (w/v) phenol) with non-peroxide antibacterial activity from Comvita UK was used in this study. Microtitre plate assay for the assessment of biofilm formation in C. difficile strains: The experiments performed to determine the capability of the C. difficile strains to form biofilms were based on previously described methods [4]. The three C. difficile test strains were cultured overnight in Reinforced Clostridial Medium (RCM) broth at 37°C for 24 hours. For each strain, a 1:100 dilution of inoculum was made in a sterile broth bottle by pipetting 1 ml of each strain into 99 ml of RCM broth and vortexing to mix thoroughly. An aliquot of 200 μl of each diluted inoculum was dispensed into a 96-well Nunc flat-bottom microtitre plate. The plates and contents were incubated at 37°C for 24 and 48 hours in an anaerobic cabinet to allow for the formation of biofilms on the walls of the wells. For each experiment, wells of RCM broth without C. difficile strains were used as negative controls. At the end of the 24 and 48 hours of incubation, the plates were removed from the anaerobic cabinet and the cultures were carefully removed using a Pasteur pipette. Subsequently, 200 μl of 2.5% glutaraldehyde solution was pipetted into each of the drained wells and allowed to stand for 5 minutes for fixation.
The glutaraldehyde solution was then removed and the empty wells were washed by dispensing 200 μl of phosphate buffered saline (PBS) (Oxoid, Cambridge, UK) into them. The PBS was discarded and the wells were stained with 200 μl of 0.25% (w/v) aqueous crystal violet for 5 minutes. After this time, the wells were washed with PBS eight times and allowed to air dry. The quantity of biofilm formed was analyzed by adding 200 μl of solvent (1:1 ethanol and acetone solution) to each well to dissolve the dye from adherent cells (biofilm). Absorbance was measured within 5 minutes of adding the solvent at 570 nm using a Dynex plate reader. The microtitre plate biofilm assay was performed three times on separate occasions for all test C. difficile strains. Determination of the effect of various honey concentrations on C. difficile biofilms: The three C. difficile test strains were cultured overnight in an RCM broth at 37°C for 24 hours. For each strain, an aliquot of 200 μl of each diluted inoculum (as described in Section 3.3) was dispensed into each well of a 96-well microtitre plate, starting from the first column (Column 1), which contained inoculum but no Manuka honey (positive control). The last column (Column 12) contained RCM only (no inoculum) and was used as a negative control. Outer rows and columns were left unused, except for Column 12, to avoid edge effects. The plates were incubated at 37°C for 24 hours in an anaerobic cabinet to allow for the formation of biofilms on the walls of the wells. After incubation, a sterile Pasteur pipette was used to carefully remove the liquid phase containing any planktonic growth from each well into a discard jar, leaving the adherent biofilm attached to the well walls. However, the positive control wells were re-filled with 200 μl of RCM to keep the biofilm alive. With the exception of the negative control wells, each well was re-filled with 200 μl of a Manuka honey solution and incubated at 37°C for 24 hours in an anaerobic cabinet. Different concentrations of Manuka honey (0, 1, 2, 4, 8, 10, 20, 30, 40, and 50% (w/v)) were included in the experimental setup and were prepared by dilution with RCM. After incubation, the liquid phase in each well (containing any overnight planktonic growth together with the honey solution) was discarded into the discard jar using a sterile Pasteur pipette. The adherent growth was fixed with 200 μl of 2.5% glutaraldehyde for 5 minutes. The fixative was removed into a toxic waste bottle and the wells were washed twice with PBS. All the wells were stained with 200 μl of 0.25% crystal violet for 5 minutes. The stain was then removed and the wells were washed eight times with PBS. Any remaining liquid was drained from the plates by inverting them vigorously onto paper hand towels. At this point, biofilms were visible as purple rings formed on the side of each well. The quantity of biofilm formed was analysed by adding 200 μl of solvent (1:1 ethanol and acetone solution) to each well to dissolve the dye associated with adherent cells. The reading was taken (within 5 minutes of adding the solvent) at 570 nm in a Dynex plate reader. The microtitre plate biofilm assay was performed three times on separate occasions for all C. difficile test strains. Statistical analysis: The experimental data were entered into Microsoft Excel® 2010 and analyzed. Mean biofilm biomass was calculated for each C. difficile strain at 24 and 48 hours. Student's t-test was used to test for any significant difference in biofilm formation between the two incubation times for the different strains. A p-value less than or equal to 0.05 was considered significant.
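The paper states the analysis was done in Excel; as a minimal equivalent sketch in Python (the SciPy call is a substitute for the spreadsheet t-test, the biomass values below are hypothetical, and since the paper does not say whether the 24 h and 48 h observations were treated as paired, ttest_ind would be the unpaired alternative):

from scipy import stats

# Hypothetical mean biofilm biomass (blank-corrected OD570) for the three
# strains, measured after 24 h and 48 h of incubation.
biomass_24h = [0.84, 0.79, 0.91]
biomass_48h = [0.88, 0.83, 0.95]

# Student's t-test comparing biofilm formation at the two time points.
t_stat, p_value = stats.ttest_rel(biomass_24h, biomass_48h)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant" if p_value <= 0.05 else "not significant")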
Results and discussion: In this study we investigated biofilm formation by C. difficile and its susceptibility to Manuka honey. Figure 1 shows that all three C. difficile strains formed biofilms and that there was no significant difference (p = 0.3901) between biofilm formation at 24 hours and at 48 hours. Figure 1: Formation of adherent growth (biofilm) by C. difficile strains after 24 and 48 hours of incubation. Bacterial biofilm formation occurs through an interesting mechanism that first involves bacterial attachment to a material surface. This initial attachment of the bacterial organism is influenced mainly by a combination of environmental factors (such as nutrient levels, temperature, pH, and duration of attachment) and genetic factors [1]. Once the cells are attached to suitable surfaces, they begin to multiply and grow to form a thin layer (monolayer) where conditions on the material surface are favourable. At this stage, the cells undergo a developmental change that leads to the production of a complex structure referred to as the exopolysaccharide glycocalyx polymer matrix, one of the hallmarks of a mature biofilm [22–24]. The ability of C. difficile to form biofilms as observed in this study concurs with studies carried out by several other investigators [7–10]. As shown in Figure 1, among the three C. difficile strains, the ribotype 027 strain showed the highest potential for biofilm formation. Biofilm formation has been linked with the virulence of several bacterial pathogens, including C. difficile [8, 13]. C. difficile ribotype 027 is known to be a hypervirulent strain and the commonest cause of C. difficile-associated outbreaks [25]. Data from our study appear to indicate that the high virulence and epidemiological significance of C. difficile ribotype 027 strains may be related to their relatively greater ability to form biofilms compared with other C. difficile strains such as ribotype 106 strains. In their study of C. difficile biofilm, Dapa et al. [8] also observed that the hypervirulent strain (ribotype 027) produced more biofilm than a less virulent C. difficile strain. Currently, very little is known about the actual role of biofilm in the pathogenicity and pathogenesis of C. difficile, and further studies are required to elucidate this. The extent to which Manuka honey inhibited the established biofilm was determined by comparing the amount of biofilm formed in the wells with and without Manuka honey. This experiment showed that, in general, there was a dose–response relationship between the amount of biofilm depleted and the concentration of Manuka honey (Figure 2). Concentrations of Manuka honey below 20% (w/v) appeared to have little or no effect on the established biofilm of each of the three C. difficile strains. However, concentrations between 20 and 50% (w/v) Manuka honey resulted in a decreasing amount of biofilm formed by all test strains after 24 hours. Although the MIC and MBC of Manuka honey against suspensions of the C. difficile strains used in this study were 6.25% (v/v) [21], much higher concentrations of 30-50% (w/v) of Manuka honey were required to deplete biofilms formed by the C. difficile strains. Figure 2: The effect of varying concentrations of Manuka honey on biofilm of C. difficile strains.
This may be due to the ability of the sessile bacteria (the biofilm formed by C. difficile) to secrete proteins and polymeric sugars that serve as protection and enhance quorum development for survival [1]. Similar results were reported by Okhiria et al. [26]. Fux et al. [27] showed that biofilms are strongly resistant to biocides, drying, and most environmental stresses. Ashby et al. [28] and Costerton et al. [29] reported that the ability of biofilm to resist antibiotics could be due to its slow growth rate, since the effect of bactericides on biofilm usually declines at lower growth rates. From this study, it may be inferred that the most suitable Manuka honey concentrations to inhibit C. difficile biofilm significantly were 40 and 50% (w/v). Similarly, Okhiria et al. [26] reported that Pseudomonas biofilm exposed to a 40% (w/v) Manuka honey concentration showed significant inhibition, but 20% (w/v) Manuka honey did not show any significant inhibition. From Figure 2, it can be observed that Manuka honey did not achieve 100% depletion of the biofilm formed by the C. difficile strains. Alandejani et al. [30] also reported that the effectiveness of Manuka honey against biofilms formed by S. aureus and P. aeruginosa was less than 100%. Nevertheless, based on this study and others, it is important to note that Manuka honey has appreciable antibacterial activity against biofilm formed by bacterial organisms [31]. While in this study we demonstrated the ability of Manuka honey to inhibit biofilm formation by C. difficile, Maddocks et al. [17] demonstrated the ability of Manuka honey to disrupt pre-formed biofilms of Streptococcus pyogenes. Recently, Ansari et al. [31] also reported that Jujube honey can disrupt pre-formed biofilms of Candida albicans. The antimicrobial effect of Manuka honey is due to a property referred to as the Unique Manuka Factor, which is absent in other types of honey [32]. Various studies have revealed that the active ingredient in Manuka honey is methylglyoxal [33, 34], and this compound is known to have a synergistic effect with some antibiotics such as piperacillin [35]. We previously demonstrated the susceptibility of C. difficile to Manuka honey, and in this study we have also shown that biofilm formed by the organism is similarly susceptible to Manuka honey. Overall, these findings have important applications in the treatment of C. difficile infections, given the escalating resistance of the organism to several essential antibiotics. In the light of the findings of the current study, it is also important for further studies to determine the rate and concentrations at which Manuka honey inhibits biofilms of C. difficile in vivo. Additionally, it would be useful to investigate the effect of Manuka honey on spores of C. difficile. Conclusions: In this study, we have demonstrated biofilm formation by specific C. difficile strains, including ATCC 9689, ribotype 027, and ribotype 106. We have also shown that Manuka honey exhibits antibacterial activity against C. difficile biofilm, with the optimum activity occurring at 40-50% (w/v). The bactericidal action of Manuka honey may be exploited practically by incorporating a solution of 40-50% (w/v) Manuka honey into topical and hand-washing formulations in care homes, hospitals, and other places where C. difficile populations are likely to be high.
Background: Biofilm bacteria are relatively more resistant to antibiotics. The escalating trend of antibiotic resistance highlights the need to evaluate alternative potential therapeutic agents with antibacterial properties. The use of honey for treating microbial infections dates back to ancient times, though the antimicrobial properties of Manuka honey were discovered only recently. The aim of this study was to demonstrate biofilm formation by specific Clostridium difficile strains and to evaluate the susceptibility of the biofilm to Manuka honey. Methods: Three C. difficile strains were used in the study: the ATCC 9689 strain, a ribotype 027 strain, and a ribotype 106 strain. Each test strain was grown in sterile microtitre plates and incubated at 37°C for 24 and 48 hours in an anaerobic cabinet to allow the formation of adherent growth (biofilm) on the walls of the wells. The effect of Manuka honey on the biofilms formed was investigated at varying concentrations of 1-50% (w/v) of Manuka honey. Results: The three C. difficile strains tested formed biofilms after 24 hours, with the ribotype 027 strain producing the most extensive growth. There was no significant difference (p > 0.05) between the amount of biofilm formed after 24 and 48 hours of incubation for each of the three C. difficile strains. A dose-response relationship between the concentration of Manuka honey and biofilm formation was observed for all the test strains, and the optimum Manuka honey activity occurred at 40-50% (v/v). Conclusions: Manuka honey has antibacterial properties capable of inhibiting in vitro biofilm formed by C. difficile.
Background: Biofilms are complex structures of polysaccharide matrix excreted by bacteria in the form of a slimy, glue-like substance that adheres to material surfaces, especially when exposed to some amount of water [1]. Within the last decade, many studies have been done on biofilms, and these have established their association with infections and contaminations [1]. The conventional methods of killing bacteria by using antibiotics and disinfection are often unsuccessful with biofilm-forming bacteria [1]. Thus biofilm-forming bacteria may pose a relatively greater threat to public health. For instance, it is reported that patients suffering from biofilm-associated infections tend to stay longer in hospital than expected [2, 3]. Worldwide, it is estimated that biofilms are associated with 65 percent of nosocomial infections and contribute to a high death rate and to 2-14% of all surgical wound complications [2]. Biofilm formation by bacteria also has serious economic implications: microorganisms associated with biofilm formation are reported to cost some nations billions of dollars yearly in medical infections, equipment damage, and product contamination [4]. Some bacterial organisms known to form biofilms include Pseudomonas aeruginosa, Staphylococcus aureus, Listeria monocytogenes, and Clostridium difficile [5, 6]. Biofilm formation by C. difficile was first reported in 2012 and has since been demonstrated with a few strains of the organism [7–10]. Evidence from some studies indicates that the amount of biofilm formed by C. difficile varies from strain to strain [4, 5]. Mutagenesis studies have identified several surface proteins, such as SlecC, Cwp84, and LuxS, that are required for biofilm formation by C. difficile [8, 10]. There is also evidence that some level of sporulation occurs in C. difficile biofilm [11, 12]. The use of honey for treating microbial infections dates back to ancient times, though the antimicrobial properties of Manuka honey were discovered only recently. The antibacterial effect of Manuka honey against bacterial biofilms has been demonstrated for several organisms, such as Streptococcus pyogenes [13], Pseudomonas aeruginosa [14], Enterococcus faecalis [15], and Streptococcus mutans [16]. Bactericidal effects have been found in both planktonic cultures and biofilm, although higher concentrations were required to inhibit biofilms [17]. In a biofilm study, Maddocks et al. [17] reported that sublethal concentrations of Manuka honey disrupted the binding of S. pyogenes to human fibronectin but did not prevent binding to fibrinogen. Manuka honey thus appears to be a promising antibacterial agent in this era of diminishing antimicrobial agents. However, further studies on its antibacterial properties are required, especially with highly resistant pathogens such as C. difficile. C. difficile, the causative agent of severe inflammation of the bowel (pseudomembranous colitis), has become the most significant cause of nosocomial antibiotic-associated diarrhoea reported worldwide [18, 19]. The Centers for Disease Control and Prevention (CDC) estimate that nearly 250,000 serious C. difficile infections (CDI) occur in the US annually, at a cost of at least one billion dollars, resulting in 14,000 deaths (CDC, 2013) [20]. This high public health burden associated with C. difficile is partly due to the trend of increasing resistance of the organism to several essential antibiotics, a problem which highlights the need for alternative treatment methods for C. difficile infections.
In a previous study, we demonstrated the susceptibility of C. difficile to Manuka honey (Leptospermum scoparium) [21]. The findings of that study showed that Manuka honey exhibits bactericidal activity against C. difficile, with minimum inhibitory and bactericidal concentrations of 6.25% (v/v) [21]. However, it is unknown whether biofilm formed by C. difficile is also susceptible to Manuka honey. Biofilm formation by C. difficile has itself only recently been reported, and it is important to confirm this with various C. difficile strains. The aim of this study was to demonstrate biofilm formation by specific strains of C. difficile and the antibacterial effect of Manuka honey on the C. difficile biofilm. Conclusions: In this study, we have demonstrated biofilm formation by specific C. difficile strains, including ATCC 9689, ribotype 027, and ribotype 106. We have also shown that Manuka honey exhibits antibacterial activity against C. difficile biofilm, with the optimum activity occurring at 40-50% (w/v). The bactericidal action of Manuka honey may be exploited practically by incorporating a solution of 40-50% (w/v) Manuka honey into topical and hand-washing formulations in care homes, hospitals, and other places where C. difficile populations are likely to be high.
Background: Biofilm bacteria are relatively more resistant to antibiotics. The escalating trend of antibiotic resistance highlights the need to evaluate alternative potential therapeutic agents with antibacterial properties. The use of honey for treating microbial infections dates back to ancient times, though the antimicrobial properties of Manuka honey were discovered only recently. The aim of this study was to demonstrate biofilm formation by specific Clostridium difficile strains and to evaluate the susceptibility of the biofilm to Manuka honey. Methods: Three C. difficile strains were used in the study: the ATCC 9689 strain, a ribotype 027 strain, and a ribotype 106 strain. Each test strain was grown in sterile microtitre plates and incubated at 37°C for 24 and 48 hours in an anaerobic cabinet to allow the formation of adherent growth (biofilm) on the walls of the wells. The effect of Manuka honey on the biofilms formed was investigated at varying concentrations of 1-50% (w/v) of Manuka honey. Results: The three C. difficile strains tested formed biofilms after 24 hours, with the ribotype 027 strain producing the most extensive growth. There was no significant difference (p > 0.05) between the amount of biofilm formed after 24 and 48 hours of incubation for each of the three C. difficile strains. A dose-response relationship between the concentration of Manuka honey and biofilm formation was observed for all the test strains, and the optimum Manuka honey activity occurred at 40-50% (v/v). Conclusions: Manuka honey has antibacterial properties capable of inhibiting in vitro biofilm formed by C. difficile.
5,492
295
[ 773, 111, 37, 384, 495, 70, 1215 ]
9
[ "difficile", "biofilm", "honey", "strains", "manuka", "manuka honey", "biofilms", "wells", "μl", "200 μl" ]
[ "form biofilms compared", "biofilms compared", "biofilm forming bacteria", "complications biofilm formation", "biofilm associated infections" ]
null
[CONTENT] Clostridium difficile | Biofilm | Manuka honey | Antibacterial | Susceptibility [SUMMARY]
[CONTENT] Clostridium difficile | Biofilm | Manuka honey | Antibacterial | Susceptibility [SUMMARY]
null
[CONTENT] Clostridium difficile | Biofilm | Manuka honey | Antibacterial | Susceptibility [SUMMARY]
[CONTENT] Clostridium difficile | Biofilm | Manuka honey | Antibacterial | Susceptibility [SUMMARY]
[CONTENT] Clostridium difficile | Biofilm | Manuka honey | Antibacterial | Susceptibility [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Biofilms | Clostridioides difficile | Dose-Response Relationship, Drug | Honey | Microbial Sensitivity Tests [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Biofilms | Clostridioides difficile | Dose-Response Relationship, Drug | Honey | Microbial Sensitivity Tests [SUMMARY]
null
[CONTENT] Anti-Bacterial Agents | Biofilms | Clostridioides difficile | Dose-Response Relationship, Drug | Honey | Microbial Sensitivity Tests [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Biofilms | Clostridioides difficile | Dose-Response Relationship, Drug | Honey | Microbial Sensitivity Tests [SUMMARY]
[CONTENT] Anti-Bacterial Agents | Biofilms | Clostridioides difficile | Dose-Response Relationship, Drug | Honey | Microbial Sensitivity Tests [SUMMARY]
[CONTENT] form biofilms compared | biofilms compared | biofilm forming bacteria | complications biofilm formation | biofilm associated infections [SUMMARY]
[CONTENT] form biofilms compared | biofilms compared | biofilm forming bacteria | complications biofilm formation | biofilm associated infections [SUMMARY]
null
[CONTENT] form biofilms compared | biofilms compared | biofilm forming bacteria | complications biofilm formation | biofilm associated infections [SUMMARY]
[CONTENT] form biofilms compared | biofilms compared | biofilm forming bacteria | complications biofilm formation | biofilm associated infections [SUMMARY]
[CONTENT] form biofilms compared | biofilms compared | biofilm forming bacteria | complications biofilm formation | biofilm associated infections [SUMMARY]
[CONTENT] difficile | biofilm | honey | strains | manuka | manuka honey | biofilms | wells | μl | 200 μl [SUMMARY]
[CONTENT] difficile | biofilm | honey | strains | manuka | manuka honey | biofilms | wells | μl | 200 μl [SUMMARY]
null
[CONTENT] difficile | biofilm | honey | strains | manuka | manuka honey | biofilms | wells | μl | 200 μl [SUMMARY]
[CONTENT] difficile | biofilm | honey | strains | manuka | manuka honey | biofilms | wells | μl | 200 μl [SUMMARY]
[CONTENT] difficile | biofilm | honey | strains | manuka | manuka honey | biofilms | wells | μl | 200 μl [SUMMARY]
[CONTENT] biofilm | difficile | reported | infections | honey | bacteria | manuka honey | manuka | biofilm formation | studies [SUMMARY]
[CONTENT] μl | 200 | 200 μl | wells | rcm | strains | plate | difficile | hours | minutes [SUMMARY]
null
[CONTENT] manuka | manuka honey | honey | 50 | 40 | 40 50 | ribotype | difficile | activity | bactericidal action [SUMMARY]
[CONTENT] difficile | biofilm | honey | manuka | manuka honey | strains | 200 | 200 μl | μl | wells [SUMMARY]
[CONTENT] difficile | biofilm | honey | manuka | manuka honey | strains | 200 | 200 μl | μl | wells [SUMMARY]
[CONTENT] ||| ||| Manuka ||| Clostridium | Manuka honey [SUMMARY]
[CONTENT] Three | 027 | 106 ||| 37 | 24 and 48 hours ||| Manuka | 1-50% | Manuka honey [SUMMARY]
null
[CONTENT] Manuka honey [SUMMARY]
[CONTENT] ||| ||| Manuka ||| Clostridium | Manuka honey ||| Three | 027 | 106 ||| 37 | 24 and 48 hours ||| Manuka | 1-50% | Manuka honey ||| ||| three | C. | 24 hours | 027 ||| 0.05 | 24 and 48 hours | three | C. ||| Manuka honey | Manuka | 40-50% ||| ||| Manuka honey [SUMMARY]
[CONTENT] ||| ||| Manuka ||| Clostridium | Manuka honey ||| Three | 027 | 106 ||| 37 | 24 and 48 hours ||| Manuka | 1-50% | Manuka honey ||| ||| three | C. | 24 hours | 027 ||| 0.05 | 24 and 48 hours | three | C. ||| Manuka honey | Manuka | 40-50% ||| ||| Manuka honey [SUMMARY]
Tendon vibration attenuates superficial venous vessel response of the resting limb during static arm exercise.
23134654
The superficial vein of the resting limb constricts sympathetically during exercise. Central command is one of the neural mechanisms that controls the cardiovascular response to exercise. However, it is not clear whether central command contributes to the venous vessel response during exercise. Tendon vibration during static elbow flexion stimulates primary muscle spindle afferents, such that less central command is required to achieve a given force without altering muscle force. The purpose of this study was therefore to investigate whether a reduction in central command during static exercise with tendon vibration influences the superficial venous vessel response in the resting limb.
BACKGROUND
Eleven subjects performed static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Heart rate, mean arterial pressure, and ratings of perceived exertion (RPE), both overall and for the exercising muscle, were measured. The cross-sectional area (CSAvein) and blood velocity of the basilic vein in the resting upper arm were assessed by ultrasound, and blood flow (BFvein) was calculated from both variables.
METHODS
Muscle tension during exercise was similar between EX and EX + VIB. However, RPEs at EX + VIB were lower than those at EX (P <0.05). Increases in heart rate and mean arterial pressure during exercise at EX + VIB were also lower than those at EX (P <0.05). CSAvein in the resting limb at EX decreased during exercise from baseline (P <0.05), but CSAvein at EX + VIB did not change during exercise. CSAvein during exercise at EX was smaller than that at EX + VIB (P <0.05). However, BFvein did not change during the protocol under either condition. The decreases in circulatory responses and RPEs during EX + VIB, despite identical muscle tension, showed that activation of central command was lower during EX + VIB than during EX. Abolition of the decrease in CSAvein during exercise at EX + VIB may thus have been caused by the lower level of central command during EX + VIB relative to EX.
RESULTS
Diminished central command induced by tendon vibration may attenuate the superficial venous vessel response of the resting limb during sustained static arm exercise.
CONCLUSION
[ "Arterial Pressure", "Elbow", "Exercise", "Female", "Heart Rate", "Humans", "Male", "Muscle Contraction", "Vibration", "Young Adult" ]
3520744
Background
Venomotor response is considered to play an important role in the transfer of blood from the veins to the heart. Many studies have suggested that sympathetic activation has an impact on the venomotor responses in the resting limb during static exercise [1-3] and dynamic exercise [1,4-6], since superficial venous vessels in particular are richly innervated by sympathetic nerves [7,8], and both surgical sympathectomy [1] and administration of an α-blocking agent [2] clearly abolish the venoconstriction observed during exercise. Sustained static exercise produces significant activation of the sympathetic nervous system, and the sympathetic activation during exercise is governed by both central (that is, central command) and peripheral (that is, muscle metaboreflex and mechanoreflex) mechanisms [9-16]. Metabolically sensitive afferents within exercising skeletal muscle detect the buildup of metabolites and act through cardiovascular centers to produce a muscle metaboreflex [11,13]. A component of this muscle reflex may arise from muscle mechanoreflex afferents [10]. Duprez and colleagues reported that post-exercise muscle ischemia produces a significant decrease in venous volume in the contralateral limb, and consequently suggested the importance of the muscle metaboreflex for venomotor tone in the non-exercising limb during exercise [17]. In addition to the muscle metaboreflex, an essential central command role has emerged from a preliminary study [2]. The concept of central command has been conventionally defined as feed-forward control. When a motor command is sent to a muscle, a parallel or collateral command is sent to cardiovascular centers in the brainstem, and this acts to activate sympathetic nerve activity. Indeed, Lorentsen found an anticipatory increase in the venous pressure in the contralateral limb before the onset of exercise [2], indicating that feed-forward control by the central command plays an important role in venous tone. However, that study did not examine the influence of central command during actual exercise [2]. Recent studies have also reported that central command functions as feedback control, in which somatosensory signals arising from the working muscles continuously provide a feedback signal and probably modulate cardiovascular responses via alterations of the perception of effort or effort sense [18-20]. If central command has the function of feedback control, its influence will appear not only before exercise but also during sustained and later periods of exercise. However, this question has not yet been addressed. Verifying the influence of central command on venous tone during actual and sustained static exercise is therefore necessary. Furthermore, using the non-invasive ultrasound Doppler method [21-24], assessment of a single vein's response during exercise can extend prior knowledge about the influence of central command on the venous system during exercise. Based on these considerations, we investigated whether central command affects venomotor tone in the contralateral limb during sustained static exercise. In the present study, using vibration of the biceps brachii tendon as reported previously [14,25], we evaluated whether less activation of central command during sustained static elbow flexion is accompanied by lower responses of the cross-sectional area of the superficial vein in the resting upper arm (CSAvein).
Tendon vibration during active muscle contraction excites the primary afferents of muscle spindles of the contracting muscle, thereby inducing reflex tension via the monosynaptic tendon reflex, which in turn aids voluntary tension development and consequently reduces the amount of central command required to generate a given force [14,25,26]. In addition, tendon vibration was also useful for investigating the influence of central command without inducing discomfort or nociceptor afferent input [14].
Methods
Subjects Eleven healthy subjects (three males and eight females) volunteered to participate in the study. Their mean ± standard deviation age, height, and weight were 21.3 ± 0.9 years, 165.0 ± 6.6 cm, and 55.2 ± 5.7 kg, respectively. All subjects were nonsmokers. The participants were asked not to drink beverages containing caffeine or alcohol for 24 hours and not to eat for at least 2 hours before the start of the experiment. The purpose, procedures, and risks of the study were explained to the subjects, and their informed consent was obtained. The study was approved by the Human Ethics Committee of the Japan Women’s College of Physical Education and was conducted in accordance with the Declaration of Helsinki. Muscle tendon vibration and maximal voluntary contraction Before the main protocol, the subjects were examined to establish the force produced by tendon vibration at rest. A custom vibrator (DPS-380; Dia Medical, Tokyo, Japan) was used to induce left biceps brachii muscle contraction by reflex stimulation of the biceps brachii distal tendon on the cubital fossa [26]. The oscillating frequency of the vibrator was 100 Hz and its amplitude was 0.8 mm. On the same day, the subjects performed two maximal voluntary static elbow flexions of the left arm using a computer-based multifunctional dynamometer (VINE, Tokyo, Japan) to determine their maximal voluntary contraction (MVC) strength, defined as the highest value obtained in the two trials. Experimental protocol In a room maintained at 25.1 ± 0.2°C, each subject stayed in a semi-reclined position in a chair in which body position could be maintained, while the left elbow was kept at a 90° angle on a padded armrest with the wrist attached to an arm lever by a Velcro strap. The subjects rested for at least 20 minutes before data collection began. After baseline data were collected for 5 minutes, subjects performed: static elbow flexion at 35% MVC without vibration of the biceps tendon for 2 minutes (EX); and static elbow flexion at 35% MVC with vibration of the biceps tendon for 2 minutes (EX + VIB).
Each exercise period was followed by a recovery period of 1 minute. Static elbow flexion was produced using the same dynamometer that was used to measure the MVC (VINE), with visual feedback of the achieved force provided via an oscilloscope display. For EX + VIB, tendon vibration was initiated 1 minute before starting exercise and continued during the exercise. Immediately after exercise, subjects read instructions for the 6 to 20 rating of perceived exertion (Overall RPE) category scale developed by Borg [27] and instructions for rating muscle fatigue sensation (Arm RPE) on a scale of 1 to 10 [28]. In all trials, subjects regulated their respiratory frequency at 10 or 15 breaths/minute using a metronome, because exercise movement and respiratory cycle influence sympathetic nervous system activity. EX and EX + VIB were performed randomly, and the rest period between the two conditions was at least 20 minutes. Measurements Beat-to-beat changes in arterial pressure were assessed by finger photoplethysmography (Finometer; Finapres Medical Systems BV, Arnhem, the Netherlands). The monitoring cuff was placed around the middle finger. The heart rate (HR) and mean arterial pressure (MAP) were determined from the blood pressure waveform using the Modelflow software program, taking into account sex, age, height, and weight (BeatScope 1.1; Finapres Medical Systems BV). Muscle oxygenation (oxyhemoglobin (oxy-Hb) and deoxyhemoglobin (deoxy-Hb) concentration) in the left exercising upper arm and right resting forearm was monitored using a near-infrared spectroscopy system (NIRO-200; Hamamatsu Photonics, Hamamatsu, Japan) at dual wavelengths (760 nm and 850 nm). The near-infrared spectroscopy probe consisted of an optically dense holder containing an emission and detection probe and was secured to the skin with tape to minimize extraneous light.
To measure blood velocity (Vvein) and cross-sectional area (CSA), non-invasive ultrasound imaging of the basilic vein (superficial vein) of the resting upper arm was performed 5 to 6 cm proximal to the cubitus using an 8.7-MHz linear array transducer (Vivid e; GE Healthcare Japan, Tokyo, Japan). A large quantity of ultrasound transmission gel was used to prevent direct contact with the skin and to avoid compression of the vein. Vvein and CSA were simultaneously measured on a transverse scan of the vein with the transducer tilted at 60°. Positioning of the transducer was determined at the beginning of each experiment, and it remained unchanged to limit potential errors in Doppler angle. Vvein was the mean velocity of a spectral Doppler recording taken every 12 seconds. CSA was calculated by manually tracing the edge of the offline transverse venous image at three arbitrary points every 12 seconds, and the three CSA values were then averaged. Because CSA was obtained from an image measured at 60°, the accurate cross-sectional area (CSAvein) was determined as follows: (1) CSAvein (cm²) = CSA × sin 60°. BFvein in the basilic vein was calculated according to the following formula: (2) BFvein (ml/min) = Vvein × CSAvein.
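Expressed as code, the two formulas reduce to the sketch below. One detail worth noting: with Vvein recorded in cm/second, a factor of 60 is needed to obtain ml/minute; that factor is implicit in formula (2) as printed, and including it reproduces the reported baseline flow (about 49 ml/minute from Vvein = 4.1 cm/second and CSAvein = 0.20 cm², against the reported 51.6 ± 10.3 ml/minute).

import math

def csa_vein(csa_traced_cm2: float) -> float:
    """Formula (1): correct the CSA traced on the 60-degree insonation image."""
    return csa_traced_cm2 * math.sin(math.radians(60))

def bf_vein(v_vein_cm_per_s: float, csa_vein_cm2: float) -> float:
    """Formula (2): venous blood flow in ml/min; 60 converts cm/s to cm/min."""
    return v_vein_cm_per_s * 60.0 * csa_vein_cm2

# Consistency check against the reported baseline values.
print(bf_vein(4.1, 0.20))  # -> 49.2 ml/min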
Data analysis and statistical analysis The HR, MAP, muscle oxygenation, CSAvein, Vvein, and BFvein were averaged for 61 to 240 seconds before commencing exercise to establish a baseline value. The relative change in these variables from baseline during exercise and the recovery period was calculated. Data are expressed as mean ± standard error values. To compare the time-course changes, two-way analysis of variance with repeated measures was applied to the circulatory responses, CSAvein, Vvein, and BFvein under each condition (EX and EX + VIB), using time and condition as fixed factors. If a main effect of condition and/or interaction was detected, post hoc analysis with a paired t test was performed; and if a main effect of time was detected, post hoc analysis with a Bonferroni test was performed. To compare the baseline data of the circulatory response, CSAvein, Vvein, and BFvein between EX and EX + VIB, a paired t test was performed. In addition, differences in Overall RPE and Arm RPE between conditions were evaluated by paired t test. P <0.05 was considered significant.
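A minimal sketch of this analysis pipeline (the authors' software is not stated; AnovaRM from statsmodels is a substitute, and the long-format data frame below is hypothetical, with EX + VIB abbreviated as "VIB"):

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical relative-change values: one row per subject x condition
# (EX or VIB) x time point, as produced by the protocol above.
df = pd.DataFrame({
    "subject":    [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "condition":  ["EX", "EX", "VIB", "VIB"] * 3,
    "time_s":     [60, 120, 60, 120] * 3,
    "csa_change": [-10, -23, -2, -4, -8, -20, -1, -3, -12, -26, -3, -5],
})

# Two-way repeated-measures ANOVA with condition and time as within factors.
print(AnovaRM(df, depvar="csa_change", subject="subject",
              within=["condition", "time_s"]).fit())

# Post hoc paired t test between conditions at one time point, as applied
# when a main effect of condition and/or an interaction was detected.
ex = df.query("condition == 'EX' and time_s == 120")["csa_change"]
vib = df.query("condition == 'VIB' and time_s == 120")["csa_change"]
print(stats.ttest_rel(ex, vib))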
Results
Vibration of the biceps tendon for 2 minutes elicited a reflex force equivalent to 5.3 ± 2.3% of MVC. However, the HR (from 63 ± 3 beats/minute to 64 ± 2 beats/minute), MAP (from 78 ± 3 mmHg to 78 ± 3 mmHg), CSAvein (from 0.20 ± 0.03 cm2 to 0.20 ± 0.03 cm2), Vvein (from 4.1 ± 0.5 cm/second to 3.9 ± 0.5 cm/second), and BFvein of the basilic vein (from 51.6 ± 10.3 ml/minute to 52.7 ± 11.9 ml/minute) did not change. There were no significant differences in the baseline data of circulatory responses, CSAvein, Vvein, and BFvein between EX and EX + VIB (Table 1). Table 1: Baseline data under each condition. Values presented as mean ± standard error. Static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Muscle tension during EX was similar to that during EX + VIB (Figure 1). However, both Overall RPE and Arm RPE after EX + VIB were significantly lower than after EX (Overall RPE: 11.5 ± 0.2 vs. 12.6 ± 0.3, P <0.05; Arm RPE: 3.2 ± 0.3 vs. 4.9 ± 0.4, P <0.05). Figure 1: Time courses of muscle tension under each condition. Static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. Figure 2 shows the time courses of HR, MAP, and CSAvein, Vvein, and BFvein of the resting upper arm during EX and EX + VIB. The increase in HR during exercise from 96 to 120 seconds at EX + VIB was less than that at EX (values at 120 seconds of exercise: 39.1 ± 4.0% vs. 50.0 ± 5.9%, P <0.05) (Figure 2A). Likewise, the increase in MAP during exercise from 96 to 120 seconds at EX + VIB was lower than that at EX (values at 120 seconds of exercise: 26.0 ± 3.7% vs. 29.6 ± 3.2%, P <0.05) (Figure 2B). CSAvein during exercise at EX decreased from baseline (values at 120 seconds of exercise: –22.9 ± 6.7%, P <0.05), but CSAvein during EX + VIB did not change from baseline throughout the protocol. In addition, CSAvein at 120 seconds of exercise and during recovery at EX was lower than at EX + VIB (P <0.05) (Figure 2C). Vvein and BFvein did not change from baseline, and this response was similar under both conditions (Figure 2D,E). Figure 2: Relative changes in circulatory responses and blood flow responses in the resting upper arm. Relative changes in (A) heart rate (HR), (B) mean arterial pressure (MAP), (C) cross-sectional area (CSAvein), (D) blood velocity (Vvein), and (E) venous blood flow (BFvein) of the basilic vein in the resting upper arm during static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. *P <0.05, difference between EX and EX + VIB; †P <0.05, difference from baseline level during EX; ‡P <0.05, difference from baseline level during EX + VIB. Figure 3 shows the time courses of muscle oxygenation of the exercising upper arm and resting forearm during EX and EX + VIB. In the exercising upper arm, Δoxy-Hb decreased from baseline during exercise and increased from baseline during recovery in both EX and EX + VIB (P <0.05). Δoxy-Hb of the exercising upper arm during recovery at EX was lower than at EX + VIB (P <0.05; Figure 3A). In the exercising upper arm, Δdeoxy-Hb increased from baseline during exercise (P <0.05) and returned to baseline during recovery in both EX and EX + VIB (Figure 3B). In the resting forearm, Δoxy-Hb and Δdeoxy-Hb did not change from baseline during either EX or EX + VIB (Figure 3C,D).
However, Δdeoxy-Hb of the resting forearm during recovery at EX was lower than at EX + VIB (P <0.05). Figure 3: Relative changes in muscle oxygenation of the exercising upper arm and resting forearm. Relative changes in the oxyhemoglobin (Δoxy-Hb) and deoxyhemoglobin (Δdeoxy-Hb) concentrations of the (A, B) exercising upper arm and (C, D) resting forearm during static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. *P <0.05, difference between EX and EX + VIB; †P <0.05, difference from baseline level during EX; ‡P <0.05, difference from baseline level during EX + VIB.
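All of the exercise and recovery values above are relative (percent) changes from the pre-exercise baseline, computed as described in the data-analysis subsection. A one-line sketch (the example numbers are illustrative only, not taken from the study's raw data):

# Percent change from the pre-exercise baseline, as plotted in Figures 2-3.
def relative_change(value: float, baseline: float) -> float:
    return 100.0 * (value - baseline) / baseline

# Example: CSAvein falling from 0.20 cm2 at baseline to 0.155 cm2 at
# 120 s of EX gives -22.5%, in the range of the reported -22.9 ± 6.7%.
print(relative_change(0.155, 0.20))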
Conclusions
Static elbow flexion with vibration of the biceps brachii tendon, which caused a decrease in central command during exercise, inhibited the increase in circulatory response and the decrease in CSAvein in the resting upper arm when compared with static exercise alone, although BFvein was similar during exercise both with and without tendon vibration. These findings suggest that central command may contribute to the superficial venous vessel response of the resting limb during sustained static elbow flexion.
[ "Background", "Subjects", "Muscle tendon vibration and maximal voluntary contraction", "Experimental protocol", "Measurements", "Data analysis and statistical analysis", "Limitations", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Venomotor response is considered to play an important role in the transfer of blood from veins to the heart. Many studies have suggested that sympathetic activation has an impact on the venomotor responses in the resting limb during static exercise [1-3] and dynamic exercise [1,4-6], since especially superficial venous vessels have rich innervation of sympathetic nerves [7,8] and an operation of sympathectomy [1] and a dosage of α-blocking agent clearly abolishes the venoconstriction [2] observed during exercise.\nSustained static exercise produces significant activation of the sympathetic nervous system, and the sympathetic activation during exercise is governed by both central (that is, central command) and peripheral (that is, muscle metaboreflex and mechanoreflex) mechanisms [9-16]. Metabolically sensitive afferents within exercising skeletal muscle detect the buildup of metabolites and act through cardiovascular centers to produce a muscle metaboreflex [11,13]. A component of this muscle reflex may arise from muscle mechanoreflex afferents [10]. Duprez and colleagues reported that post-exercise muscle ischemia produces a significant decrease in venous volume in the contralateral limb, and consequently suggested the importance of muscle metaboreflex on venomotor tone in the non-exercising limb during exercise [17].\nIn addition to muscle metaboreflex, an essential central command role has emerged from a preliminary study [2]. The concept of central command has been conventionally defined as feed-forward control. When a motor command is sent to a muscle, a parallel or collateral command is sent to cardiovascular centers in the brainstem, and this acts to activate sympathetic nerve activity. Indeed, Lorentsen found an anticipatory increase in the venous pressure in the contralateral limb before the onset of exercise [2], indicating that feed-forward control of the central command plays an important role in venous tone. However, the study did not examine the influence of central command during actual exercise [2]. Recent studies have also reported that central command also functions as feedback control, in which somatosensory signals arising from the working muscles continuously provide a feedback signal and probably modulate cardiovascular responses via alterations of perception of effort or effort sense [18-20]. If central command has the function of feedback control, the influence of central command will appear not only before exercise but also during sustained and later periods of exercise. However, this question has not yet been challenged. Verifying the influence of central command on the venous tone during actual and sustained static exercise is therefore necessary. On the other hand, using the non-invasive ultrasound Doppler method [21-24], assessment of a single vein response during exercise can extend prior knowledge about the influence of central command on the venous system during exercise.\nBased on these considerations, we investigated whether central command affects venomotor tone in the contralateral limb during sustained static exercise. In the present study, using vibration of the biceps brachii tendon reported previously [14,25], we evaluated whether less activation of central command during sustained static elbow flexion accompanies lower responses of the cross-sectional area of the superficial vein in the resting upper arm (CSAvein). 
Tendon vibration during active muscle contraction excites the primary afferents of muscle spindles of the contracting muscle, thereby inducing reflex tension via the monosynaptic tendon reflex, which in turn aids voluntary tension development and consequently reduces the amount of central command required to generate a given force [14,25,26]. In addition, tendon vibration was also useful for investigating the influence of central command without inducing discomfort or nociceptor afferent input [14].", "Eleven healthy subjects (three males and eight females) volunteered to participate in the study. Their mean ± standard deviation age, height, and weight were 21.3 ± 0.9 years, 165.0 ± 6.6 cm, and 55.2 ± 5.7 kg, respectively. All subjects were nonsmokers. The participants were asked not to drink beverages containing caffeine or alcohol for 24 hours and not to eat for at least 2 hours before the start of the experiment. The purpose, procedures, and risks of the study were explained to the subjects, and their informed consent was obtained. The study was approved by the Human Ethics Committee of the Japan Women’s College of Physical Education and was conducted in accordance with the Declaration of Helsinki.", "Before the main protocol, the subjects were examined to establish the force produced by tendon vibration at rest. A custom vibrator (DPS-380; Dia Medical, Tokyo, Japan) was used to induce left biceps brachii muscle contraction by reflex stimulation of the biceps brachii distal tendon on the cubital fossa [26]. The oscillating frequency of the vibrator was 100 Hz and its amplitude was 0.8 mm. On the same day, the subjects performed two maximal voluntary static elbow flexions of the left arm using a computer-based multifunctional dynamometer (VINE, Tokyo, Japan) to determine their maximal voluntary contraction (MVC) strength, defined as the highest value obtained in the two trials.", "In a room maintained at 25.1 ± 0.2°C, each subject stayed in a semi-reclined position in a chair in which body position could be maintained, while the left elbow was kept at a 90° angle on a padded armrest with the wrist attached to an arm lever by a Velcro strap. The subjects rested for at least 20 minutes before data collection began. After baseline data were collected for 5 minutes, subjects performed: static elbow flexion at 35% MVC without vibration of the biceps tendon for 2 minutes (EX); and static elbow flexion at 35% MVC with vibration of the biceps tendon for 2 minutes (EX + VIB). Each exercise period was followed by a recovery period of 1 minute. Static elbow flexion was produced using the same dynamometer that was used to measure the MVC (VINE), with visual feedback of the achieved force provided via an oscilloscope display. For EX + VIB, tendon vibration was initiated 1 minute before starting exercise and continued during the exercise. Immediately after exercise, subjects read instructions for the 6 to 20 rating of perceived exertion (Overall RPE) category scale developed by Borg [27] and instructions for rating muscle fatigue sensation (Arm RPE) on a scale of 1 to 10 [28]. In all trials, subjects regulated their respiratory frequency at 10 or 15 breaths/minute using a metronome, because exercise movement and respiratory cycle influence sympathetic nervous system activity. 
EX and EX + VIB were performed randomly, and the rest period between the two conditions was at least 20 minutes.", "Beat-to-beat changes in arterial pressure were assessed by finger photoplethysmography (Finometer; Finapres Medical Systems BV, Arnhem, the Netherlands). The monitoring cuff was placed around the middle finger. The heart rate (HR) and mean arterial pressure (MAP) were determined from the blood pressure waveform using the Modelflow software program, taking into account sex, age, height, and weight (BeatScope 1.1; Finapres Medical Systems BV).\nMuscle oxygenation (oxyhemoglobin (oxy-Hb) and deoxyhemoglobin (deoxy-Hb) concentration) in the left exercising upper arm and right resting forearm was monitored using a near-infrared spectroscopy system (NIRO-200; Hamamatsu Photonics, Hamamatsu, Japan) at dual wavelengths (760 nm and 850 nm). The near-infrared spectroscopy probe consisted of an optically dense holder containing an emission and detection probe and was secured to the skin with tape to minimize extraneous light.\nTo measure blood velocity (Vvein) and cross-sectional area (CSA), non-invasive ultrasound imaging of the basilic vein (superficial vein) of the resting upper arm was performed 5 to 6 cm proximal to the cubitus using an 8.7-MHz linear array transducer (Vivid e; GE Healthcare Japan, Tokyo, Japan). A large quantity of ultrasound transmission gel was used to prevent direct contact with the skin and to avoid compression of the vein. Vvein and CSA were simultaneously measured on a transverse scan of the vein with the transducer tilted at 60°. Positioning of the transducer was determined at the beginning of each experiment, and it remained unchanged to limit potential errors in Doppler angle. Vvein was the result of the mean velocity of spectral Doppler recording every 12 seconds. CSA was calculated by manually tracing the edge of the offline transverse venous image at an arbitrary three points every 12 seconds, and then the three CSA values were averaged. Because CSA was obtained from the image measured at 60°, an accurate CSA (CSAvein) was determined as follows: (1) CSAvein (cm²) = CSA × sin 60°.\nBFvein in the basilic vein was calculated according to the following formula: (2) BFvein (ml/min) = Vvein × CSAvein.", "The HR, MAP, muscle oxygenation, CSAvein, Vvein, and BFvein were averaged for 61 to 240 seconds before commencing exercise to establish a baseline value. The relative change in these variables from baseline during exercise and the recovery period was calculated. Data are expressed as mean ± standard error values.\nTo compare the time-course changes, two-way analysis of variance with repeated measures was applied to the circulatory responses, CSAvein, Vvein, and BFvein under each condition (EX and EX + VIB), using time and condition as fixed factors. If a main effect of condition and/or interaction was detected, post hoc analysis with a paired t test was performed; and if a main effect of time was detected, post hoc analysis with a Bonferroni test was performed. To compare the baseline data of the circulatory response, CSAvein, Vvein, and BFvein between EX and EX + VIB, a paired t test was performed. In addition, differences in Overall RPE and Arm RPE between conditions were evaluated by paired t test. P <0.05 was considered significant.", "Several limitations should be considered when interpreting our results.
"Several limitations should be considered when interpreting our results. First, due to the large compliance of veins, venous volume (that is, CSAvein) depends on the venous pressure level – but we did not measure venous pressure. As mentioned above, however, BFvein and Δdeoxy-Hb (an index of venous blood volume) of the resting forearm did not change from baseline during either EX or EX + VIB (Figures 2E and 3D). We therefore believe that venous pressure-dependent control had little influence during exercise in this study. Second, we did not account for the menstrual cycle in female subjects. However, because EX and EX + VIB were carried out on the same day, this effect may be negligible in our study.", "BFvein: Blood flow of the basilic vein; CSA: Cross-sectional area of the basilic vein before correction; CSAvein: Cross-sectional area of the basilic vein after correction; Δ: Change; EX: Elbow flexion without vibration; EX + VIB: Elbow flexion with vibration; oxy-Hb: Oxyhemoglobin; deoxy-Hb: Deoxyhemoglobin; HR: Heart rate; MAP: Mean arterial pressure; MVC: Maximal voluntary contraction; RPE: Rating of perceived exertion; Vvein: Blood velocity of the basilic vein.", "The authors declare that they have no competing interests.", "AO designed and coordinated the study, carried out the experiment, and drafted the manuscript. KS participated in the design of the study and helped draft the manuscript. AH helped carry out the experiment. TS participated in the design of the study and helped draft the manuscript. All authors read and approved the final manuscript." ]
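For readers who wish to reproduce the statistical workflow described in the data analysis section, a minimal Python sketch follows. The long-format layout, file name, and column names are assumptions for illustration; the ANOVA call mirrors the described two-way repeated-measures design, and the loop performs the condition-wise post hoc paired t tests:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Assumed long-format table: one row per subject x condition x time point,
# with "value" holding the relative change from baseline of, e.g., CSAvein.
df = pd.read_csv("csa_vein_long.csv")  # columns: subject, condition, time, value

# Two-way repeated-measures ANOVA with condition and time as within-subject factors
aov = AnovaRM(df, depvar="value", subject="subject",
              within=["condition", "time"]).fit()
print(aov)

# Post hoc paired t tests between conditions at each time point
# (a Bonferroni correction over comparisons would divide alpha accordingly)
for t, grp in df.groupby("time"):
    grp = grp.sort_values("subject")  # align pairs by subject
    ex = grp.loc[grp["condition"] == "EX", "value"].to_numpy()
    ex_vib = grp.loc[grp["condition"] == "EX+VIB", "value"].to_numpy()
    t_stat, p = stats.ttest_rel(ex, ex_vib)
    print(f"time {t}: t = {t_stat:.2f}, p = {p:.3f}")
```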
[ "Background", "Methods", "Subjects", "Muscle tendon vibration and maximal voluntary contraction", "Experimental protocol", "Measurements", "Data analysis and statistical analysis", "Results", "Discussion", "Limitations", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Venomotor response is considered to play an important role in the transfer of blood from veins to the heart. Many studies have suggested that sympathetic activation has an impact on the venomotor responses in the resting limb during static exercise [1-3] and dynamic exercise [1,4-6], since especially superficial venous vessels have rich innervation of sympathetic nerves [7,8] and an operation of sympathectomy [1] and a dosage of α-blocking agent clearly abolishes the venoconstriction [2] observed during exercise.\nSustained static exercise produces significant activation of the sympathetic nervous system, and the sympathetic activation during exercise is governed by both central (that is, central command) and peripheral (that is, muscle metaboreflex and mechanoreflex) mechanisms [9-16]. Metabolically sensitive afferents within exercising skeletal muscle detect the buildup of metabolites and act through cardiovascular centers to produce a muscle metaboreflex [11,13]. A component of this muscle reflex may arise from muscle mechanoreflex afferents [10]. Duprez and colleagues reported that post-exercise muscle ischemia produces a significant decrease in venous volume in the contralateral limb, and consequently suggested the importance of muscle metaboreflex on venomotor tone in the non-exercising limb during exercise [17].\nIn addition to muscle metaboreflex, an essential central command role has emerged from a preliminary study [2]. The concept of central command has been conventionally defined as feed-forward control. When a motor command is sent to a muscle, a parallel or collateral command is sent to cardiovascular centers in the brainstem, and this acts to activate sympathetic nerve activity. Indeed, Lorentsen found an anticipatory increase in the venous pressure in the contralateral limb before the onset of exercise [2], indicating that feed-forward control of the central command plays an important role in venous tone. However, the study did not examine the influence of central command during actual exercise [2]. Recent studies have also reported that central command also functions as feedback control, in which somatosensory signals arising from the working muscles continuously provide a feedback signal and probably modulate cardiovascular responses via alterations of perception of effort or effort sense [18-20]. If central command has the function of feedback control, the influence of central command will appear not only before exercise but also during sustained and later periods of exercise. However, this question has not yet been challenged. Verifying the influence of central command on the venous tone during actual and sustained static exercise is therefore necessary. On the other hand, using the non-invasive ultrasound Doppler method [21-24], assessment of a single vein response during exercise can extend prior knowledge about the influence of central command on the venous system during exercise.\nBased on these considerations, we investigated whether central command affects venomotor tone in the contralateral limb during sustained static exercise. In the present study, using vibration of the biceps brachii tendon reported previously [14,25], we evaluated whether less activation of central command during sustained static elbow flexion accompanies lower responses of the cross-sectional area of the superficial vein in the resting upper arm (CSAvein). 
", "Vibration of the biceps tendon for 2 minutes elicited a reflex force equivalent to 5.3 ± 2.3% of MVC. However, the HR (from 63 ± 3 beats/minute to 64 ± 2 beats/minute), MAP (from 78 ± 3 mmHg to 78 ± 3 mmHg), CSAvein (from 0.20 ± 0.03 cm2 to 0.20 ± 0.03 cm2), Vvein (from 4.1 ± 0.5 cm/second to 3.9 ± 0.5 cm/second), and BFvein of the basilic vein (from 51.6 ± 10.3 ml/minute to 52.7 ± 11.9 ml/minute) did not change.
There were no significant differences in the baseline data of circulatory responses, CSAvein, Vvein, and BFvein between EX and EX + VIB (Table 1).
Baseline data under each condition
Values presented as mean ± standard error. Static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon.
Muscle tension during EX was similar to that during EX + VIB (Figure 1). However, both Overall RPE and Arm RPE after EX + VIB were significantly lower than after EX (Overall RPE: 11.5 ± 0.2 vs.
12.6 ± 0.3, P <0.05; Arm RPE: 3.2 ± 0.3 vs. 4.9 ± 0.4, P <0.05).
Time courses of muscle tension under each condition. Static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error.
Figure 2 shows the time courses of HR, MAP, and CSAvein, Vvein, and BFvein of the resting upper arm during EX and EX + VIB. The increase in HR during exercise from 96 to 120 seconds at EX + VIB was less than that at EX (values at 120 seconds of exercise: 39.1 ± 4.0% vs. 50.0 ± 5.9%, P <0.05) (Figure 2A). Likewise, the increase in MAP during exercise from 96 to 120 seconds at EX + VIB was lower than that at EX (values at 120 seconds of exercise: 26.0 ± 3.7% vs. 29.6 ± 3.2%, P <0.05) (Figure 2B). CSAvein during exercise at EX decreased from baseline (value at 120 seconds of exercise: –22.9 ± 6.7%, P <0.05), but CSAvein during EX + VIB did not change from baseline throughout the protocol. In addition, CSAvein at 120 seconds of exercise and during recovery at EX was lower than at EX + VIB (P <0.05) (Figure 2C). Vvein and BFvein did not change from baseline, and this response was similar under both conditions (Figure 2D,E).
Relative changes in circulatory responses and blood flow responses in the resting upper arm. Relative changes in (A) heart rate (HR), (B) mean arterial pressure (MAP), (C) cross-sectional area (CSAvein), (D) blood velocity (Vvein), and (E) venous blood flow (BFvein) of the basilic vein in the resting upper arm during static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. *P <0.05, difference between EX and EX + VIB; †P <0.05, difference from baseline level during EX; ‡P <0.05, difference from baseline level during EX + VIB.
Figure 3 shows the time courses of muscle oxygenation of the exercising upper arm and resting forearm during EX and EX + VIB. In the exercising upper arm, Δoxy-Hb decreased from baseline during exercise and increased from baseline during recovery in both EX and EX + VIB (P <0.05). Δoxy-Hb of the exercising upper arm during recovery at EX was lower than at EX + VIB (P <0.05; Figure 3A). In the exercising upper arm, Δdeoxy-Hb increased from baseline during exercise (P <0.05) and returned to baseline during recovery in both EX and EX + VIB (Figure 3B). In the resting forearm, Δoxy-Hb and Δdeoxy-Hb did not change from baseline during either EX or EX + VIB (Figure 3C,D). However, Δdeoxy-Hb of the resting forearm during recovery at EX was lower than at EX + VIB (P <0.05).
Relative changes in muscle oxygenation of the exercising upper arm and resting forearm. Relative changes in the oxyhemoglobin (Δoxy-Hb) and deoxyhemoglobin (Δdeoxy-Hb) concentrations of the (A, B) exercising upper arm and (C, D) resting forearm during static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. *P <0.05, difference between EX and EX + VIB; †P <0.05, difference from baseline level during EX; ‡P <0.05, difference from baseline level during EX + VIB.", "The primary findings in this study were that CSAvein decreased from baseline during static elbow flexion alone, whereas CSAvein during static elbow flexion with tendon vibration did not change, and that BFvein did not change significantly during static exercise either with or without tendon vibration. 
These results suggest that a reduction in central command during static exercise with tendon vibration may attenuate the superficial venous vessel response of the resting limb during sustained static arm exercise.
Superficial venous vessel responses may be controlled both by the sympathetic nervous system [1,4,6,29] and by changes in venous pressure related to alterations in blood flow and blood volume [30,31]. In our study, BFvein did not change throughout the protocol during EX and EX + VIB (Figure 2E). In addition, Δdeoxy-Hb in the resting forearm did not change from baseline with static elbow flexion during both EX and EX + VIB (Figure 3D). Because the changes in oxy-Hb and deoxy-Hb are used to evaluate blood volume in the arterial and venous vascular beds, respectively [32,33], the venous blood volume in the resting forearm was presumably unchanged during static elbow flexion. In our study, therefore, the decrease in CSAvein with exercise during EX may have been caused by sympathetic nervous system control. On the other hand, the difference in CSAvein between EX and EX + VIB during the recovery period might have been influenced by the change in venous blood volume rather than by the sympathetic nervous system, because Δdeoxy-Hb of the resting forearm also differed between the two conditions (Figure 3D).
The concept of central command has been classically defined as feed-forward control. This feed-forward characterization is largely based on the immediate cardiovascular response to the onset (or even anticipation) of exercise. In addition to feed-forward control, there is evidence that the effects of central command on cardiovascular responses are closely related to the intensity or perceived effort of the exercise [34,35]. Central command has therefore also been proposed to function as feedback control, in which somatosensory signals arising from the working muscles provide a feedback signal capable of influencing central command via alterations of the perception of effort, or effort sense [19,20]. The experimental model in our study might reflect central command defined as feedback control rather than feed-forward control, because the changes in HR and MAP, which are indexes of the cardiovascular response, were significantly lower during 96 to 120 seconds of exercise in EX + VIB than in EX (Figure 2A,B). These results are in agreement with those of previous studies [14,25,26]. In addition, the magnitude of the central command response has been assessed using an individual’s perception of effort sense during exercise, independent of force production [15,34]. Although the relationship between central command and RPE has not been clearly defined, the RPE scale [27] has been widely used to assess the level of central command. In the present study, RPE immediately after exercise was lower in EX + VIB than in EX, indicating that central command, defined as feedback control, might have been lower in EX + VIB than in EX. Thus, consistent with a central command response operating as feedback control, CSAvein was also smaller in EX than in EX + VIB during the latter half of exercise. In addition, activation of central command at the onset of static elbow flexion in the present study, which reflects feed-forward control, may have been too small to cause venoconstriction. 
If activation of central command at the onset of static elbow flexion had been sufficient to cause venoconstriction, a decrease in CSAvein should have been observed at the onset of exercise in both EX and EX + VIB.
Vibration is a powerful stimulus for primary muscle spindle afferents when applied to the biceps tendon during static exercise. When the biceps brachii was contracting, activation of its muscle spindle primary afferents provided reflex activation, which in turn aided voluntary tension development compared with voluntary contraction of the biceps brachii alone. The afferent input accompanying the reduced voluntary tension during exercise with tendon vibration might thus cause interactions between the perception of effort and central command, such that the activation of central command is altered [20].
The increase in sympathetic nervous system activity during exercise is caused not only by central command but also by reflex neural mechanisms activated by exercise (muscle mechanoreflex and muscle metaboreflex) [9-11,13,16]. Muscle tension exerted during static elbow flexion did not differ between EX and EX + VIB (Figure 1), showing that the degree of activation of the muscle mechanoreflex was probably similar under both conditions. In addition, the Δdeoxy-Hb concentration of the exercising upper arm was similar between EX and EX + VIB (Figure 3B). Because deoxy-Hb of exercising muscle is an index of oxygen consumption [36,37], the level of exercise-induced metabolite accumulation during EX was expected to equal that during EX + VIB, suggesting that the degree of activation of the muscle metaboreflex did not differ between EX and EX + VIB. In the present study, therefore, it is likely that the difference in CSAvein during static exercise between EX and EX + VIB was not due to differences in activation of these reflex neural mechanisms.
Although the specific regions of the brain involved in exercise-related responses remain speculative, the following theory can be considered. Animal studies suggest that subthalamic regions are capable of generating both motor and cardiovascular responses [38]. In human studies, possible sites and neurocircuitry involving the insular cortex, sensorimotor cortex, anterior cingulate gyrus, medial prefrontal region and thalamic regions [18,39-43], and the periaqueductal gray [44,45] have been suggested. In addition, a recent hypothesis concerning the neural circuit responsible for generating central command holds that cerebral cortical output is not an essential component; rather, generation of central command seems to require a process that triggers activity in neural circuits in the caudal brain, with the region from the caudal diencephalon to the rostral mesencephalon playing an important role [46]. This is because, in decerebrate animal studies, renal sympathetic nerve activity and HR abruptly increased in association with the start of locomotion [47], whereas spontaneous motor activity and the associated cardiovascular response were lost after decerebration at the midcollicular level [48].
Stewart and colleagues reported that venoconstriction during static exercise, which occurs not only in the splanchnic area but also in the resting extremities, may contribute to an increase in venous return to the heart to increase cardiac output [49]. 
Taking into account previous studies, including our own, venoconstriction via central command might play a significant role in hemodynamics during exercise. However, because the relationship between venous return and venoconstriction is not obvious, further investigation is required.", "Static elbow flexion with vibration of the biceps brachii tendon, which caused a decrease in central command during exercise, inhibited the increase in circulatory response and the decrease in CSAvein in the resting upper arm when compared with static exercise alone, although BFvein was similar during exercise both with and without tendon vibration. 
These findings suggest that central command may contribute to the superficial venous vessel response of the resting limb during sustained static elbow flexion." ]
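As a small illustration of the normalization used throughout the results (averaging a pre-exercise window to obtain a baseline, then expressing each sample as a relative change), here is a Python sketch; the sampling grid and the synthetic trace are assumptions for illustration only:

```python
import numpy as np

def relative_change_from_baseline(signal, times_s, window=(61.0, 240.0)):
    """Average the signal over the pre-exercise baseline window (61-240 s of
    the rest period), then express every sample as percent change from it."""
    signal = np.asarray(signal, dtype=float)
    times_s = np.asarray(times_s, dtype=float)
    mask = (times_s >= window[0]) & (times_s <= window[1])
    baseline = signal[mask].mean()
    return (signal - baseline) / baseline * 100.0

# Synthetic example: a variable sampled every 12 s (matching the ultrasound
# averaging interval); real traces would come from the recorded time series.
t = np.arange(0.0, 432.0, 12.0)
x = 63.0 + 0.02 * t  # illustrative ramp, not study data
print(relative_change_from_baseline(x, t)[-3:])
```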
[ null, "methods", null, null, null, null, null, "results", "discussion", null, "conclusions", null, null, null ]
[ "Central command", "Ultrasound technique", "Venoconstriction", "Venous return" ]
Background: Venomotor response is considered to play an important role in the transfer of blood from veins to the heart. Many studies have suggested that sympathetic activation has an impact on the venomotor responses in the resting limb during static exercise [1-3] and dynamic exercise [1,4-6], since especially superficial venous vessels have rich innervation of sympathetic nerves [7,8] and an operation of sympathectomy [1] and a dosage of α-blocking agent clearly abolishes the venoconstriction [2] observed during exercise. Sustained static exercise produces significant activation of the sympathetic nervous system, and the sympathetic activation during exercise is governed by both central (that is, central command) and peripheral (that is, muscle metaboreflex and mechanoreflex) mechanisms [9-16]. Metabolically sensitive afferents within exercising skeletal muscle detect the buildup of metabolites and act through cardiovascular centers to produce a muscle metaboreflex [11,13]. A component of this muscle reflex may arise from muscle mechanoreflex afferents [10]. Duprez and colleagues reported that post-exercise muscle ischemia produces a significant decrease in venous volume in the contralateral limb, and consequently suggested the importance of muscle metaboreflex on venomotor tone in the non-exercising limb during exercise [17]. In addition to muscle metaboreflex, an essential central command role has emerged from a preliminary study [2]. The concept of central command has been conventionally defined as feed-forward control. When a motor command is sent to a muscle, a parallel or collateral command is sent to cardiovascular centers in the brainstem, and this acts to activate sympathetic nerve activity. Indeed, Lorentsen found an anticipatory increase in the venous pressure in the contralateral limb before the onset of exercise [2], indicating that feed-forward control of the central command plays an important role in venous tone. However, the study did not examine the influence of central command during actual exercise [2]. Recent studies have also reported that central command also functions as feedback control, in which somatosensory signals arising from the working muscles continuously provide a feedback signal and probably modulate cardiovascular responses via alterations of perception of effort or effort sense [18-20]. If central command has the function of feedback control, the influence of central command will appear not only before exercise but also during sustained and later periods of exercise. However, this question has not yet been challenged. Verifying the influence of central command on the venous tone during actual and sustained static exercise is therefore necessary. On the other hand, using the non-invasive ultrasound Doppler method [21-24], assessment of a single vein response during exercise can extend prior knowledge about the influence of central command on the venous system during exercise. Based on these considerations, we investigated whether central command affects venomotor tone in the contralateral limb during sustained static exercise. In the present study, using vibration of the biceps brachii tendon reported previously [14,25], we evaluated whether less activation of central command during sustained static elbow flexion accompanies lower responses of the cross-sectional area of the superficial vein in the resting upper arm (CSAvein). 
Tendon vibration during active muscle contraction excites the primary afferents of muscle spindles of the contracting muscle, thereby inducing reflex tension via the monosynaptic tendon reflex, which in turn aids voluntary tension development and consequently reduces the amount of central command required to generate a given force [14,25,26]. In addition, tendon vibration was also useful for investigating the influence of central command without inducing discomfort or nociceptor afferent input [14]. Methods: Subjects Eleven healthy subjects (three males and eight females) volunteered to participate in the study. Their mean ± standard deviation age, height, and weight were 21.3 ± 0.9 years, 165.0 ± 6.6 cm, and 55.2 ± 5.7 kg, respectively. All subjects were nonsmokers. The participants were asked not to drink beverages containing caffeine or alcohol for 24 hours and not to eat for at least 2 hours before the start of the experiment. The purpose, procedures, and risks of the study were explained to the subjects, and their informed consent was obtained. The study was approved by the Human Ethics Committee of the Japan Women’s College of Physical Education and was conducted in accordance with the Declaration of Helsinki. Eleven healthy subjects (three males and eight females) volunteered to participate in the study. Their mean ± standard deviation age, height, and weight were 21.3 ± 0.9 years, 165.0 ± 6.6 cm, and 55.2 ± 5.7 kg, respectively. All subjects were nonsmokers. The participants were asked not to drink beverages containing caffeine or alcohol for 24 hours and not to eat for at least 2 hours before the start of the experiment. The purpose, procedures, and risks of the study were explained to the subjects, and their informed consent was obtained. The study was approved by the Human Ethics Committee of the Japan Women’s College of Physical Education and was conducted in accordance with the Declaration of Helsinki. Muscle tendon vibration and maximal voluntary contraction Before the main protocol, the subjects were examined to establish the force produced by tendon vibration at rest. A custom vibrator (DPS-380; Dia Medical, Tokyo, Japan) was used to induce left biceps brachii muscle contraction by reflex stimulation of the biceps brachii distal tendon on the cubital fossa [26]. The oscillating frequency of the vibrator was 100 Hz and its amplitude was 0.8 mm. On the same day, the subjects performed two maximal voluntary static elbow flexions of the left arm using a computer-based multifunctional dynamometer (VINE, Tokyo, Japan) to determine their maximal voluntary contraction (MVC) strength, defined as the highest value obtained in the two trials. Before the main protocol, the subjects were examined to establish the force produced by tendon vibration at rest. A custom vibrator (DPS-380; Dia Medical, Tokyo, Japan) was used to induce left biceps brachii muscle contraction by reflex stimulation of the biceps brachii distal tendon on the cubital fossa [26]. The oscillating frequency of the vibrator was 100 Hz and its amplitude was 0.8 mm. On the same day, the subjects performed two maximal voluntary static elbow flexions of the left arm using a computer-based multifunctional dynamometer (VINE, Tokyo, Japan) to determine their maximal voluntary contraction (MVC) strength, defined as the highest value obtained in the two trials. 
Experimental protocol In a room maintained at 25.1 ± 0.2°C, each subject stayed in a semi-reclined position in a chair in which body position could be maintained, while the left elbow was kept at a 90° angle on a padded armrest with the wrist attached to an arm lever by a Velcro strap. The subjects rested for at least 20 minutes before data collection began. After baseline data were collected for 5 minutes, subjects performed: static elbow flexion at 35% MVC without vibration of the biceps tendon for 2 minutes (EX); and static elbow flexion at 35% MVC with vibration of the biceps tendon for 2 minutes (EX + VIB). Each exercise period was followed by a recovery period of 1 minute. Static elbow flexion was produced using the same dynamometer that was used to measure the MVC (VINE), with visual feedback of the achieved force provided via an oscilloscope display. For EX + VIB, tendon vibration was initiated 1 minute before starting exercise and continued during the exercise. Immediately after exercise, subjects read instructions for the 6 to 20 rating of perceived exertion (Overall RPE) category scale developed by Borg [27] and instructions for rating muscle fatigue sensation (Arm RPE) on a scale of 1 to 10 [28]. In all trials, subjects regulated their respiratory frequency at 10 or 15 breaths/minute using a metronome, because exercise movement and respiratory cycle influence sympathetic nervous system activity. EX and EX + VIB were performed randomly, and the rest period between the two conditions was at least 20 minutes. In a room maintained at 25.1 ± 0.2°C, each subject stayed in a semi-reclined position in a chair in which body position could be maintained, while the left elbow was kept at a 90° angle on a padded armrest with the wrist attached to an arm lever by a Velcro strap. The subjects rested for at least 20 minutes before data collection began. After baseline data were collected for 5 minutes, subjects performed: static elbow flexion at 35% MVC without vibration of the biceps tendon for 2 minutes (EX); and static elbow flexion at 35% MVC with vibration of the biceps tendon for 2 minutes (EX + VIB). Each exercise period was followed by a recovery period of 1 minute. Static elbow flexion was produced using the same dynamometer that was used to measure the MVC (VINE), with visual feedback of the achieved force provided via an oscilloscope display. For EX + VIB, tendon vibration was initiated 1 minute before starting exercise and continued during the exercise. Immediately after exercise, subjects read instructions for the 6 to 20 rating of perceived exertion (Overall RPE) category scale developed by Borg [27] and instructions for rating muscle fatigue sensation (Arm RPE) on a scale of 1 to 10 [28]. In all trials, subjects regulated their respiratory frequency at 10 or 15 breaths/minute using a metronome, because exercise movement and respiratory cycle influence sympathetic nervous system activity. EX and EX + VIB were performed randomly, and the rest period between the two conditions was at least 20 minutes. Measurements Beat-to-beat changes in arterial pressure were assessed by finger photoplethysmography (Finometer; Finapres Medical Systems BV, Arnhem, the Netherlands). The monitoring cuff was placed around the middle finger. The heart rate (HR) and mean arterial pressure (MAP) were determined from the blood pressure waveform using the Modelflow software program, taking into account sex, age, height, and weight (BeatScope 1.1; Finapres Medical Systems BV). 
Muscle oxygenation (oxyhemoglobin (oxy-Hb) and deoxyhemoglobin (deoxy-Hb) concentration) in the left exercising upper arm and right resting forearm was monitored using a near-infrared spectroscopy system (NIRO-200; Hamamatsu Photonics, Hamamatsu, Japan) at dual wavelengths (760 nm and 850 nm). The near-infrared spectroscopy probe consisted of an optically dense holder containing an emission and detection probe and was secured to the skin with tape to minimize extraneous light. To measure blood velocity (Vvein) and cross-sectional area (CSA), non-invasive ultrasound imaging of the basilic vein (superficial vein) of the resting upper arm was performed 5 to 6 cm proximal to the cubitus using an 8.7-MHz linear array transducer (Vivid e; GE Healthcare Japan, Tokyo, Japan). A large quantity of ultrasound transmission gel was used to prevent direct contact with the skin and to avoid compression of the vein. Vvein and CSA were simultaneously measured on a transverse scan of the vein with the transducer tilted at 60°. Positioning of the transducer was determined at the beginning of each experiment, and it remained unchanged to limit potential errors in Doppler angle. Vvein was the result of the mean velocity of spectral Doppler recording every 12 seconds. CSA was calculated by manually tracing the edge of the offline transverse venous image at an arbitrary three points every 12 seconds, and then the three CSA values were averaged. Because CSA was obtained from the image measured at 60°, an accurate CSA (CSAvein) was determined as follows: (1) CSA vein cm 2 = CSA × sin 60 ° BFvein in the basilic vein was calculated according to the following formula: (2) BF vein ml / min = V vein × CSA vein Beat-to-beat changes in arterial pressure were assessed by finger photoplethysmography (Finometer; Finapres Medical Systems BV, Arnhem, the Netherlands). The monitoring cuff was placed around the middle finger. The heart rate (HR) and mean arterial pressure (MAP) were determined from the blood pressure waveform using the Modelflow software program, taking into account sex, age, height, and weight (BeatScope 1.1; Finapres Medical Systems BV). Muscle oxygenation (oxyhemoglobin (oxy-Hb) and deoxyhemoglobin (deoxy-Hb) concentration) in the left exercising upper arm and right resting forearm was monitored using a near-infrared spectroscopy system (NIRO-200; Hamamatsu Photonics, Hamamatsu, Japan) at dual wavelengths (760 nm and 850 nm). The near-infrared spectroscopy probe consisted of an optically dense holder containing an emission and detection probe and was secured to the skin with tape to minimize extraneous light. To measure blood velocity (Vvein) and cross-sectional area (CSA), non-invasive ultrasound imaging of the basilic vein (superficial vein) of the resting upper arm was performed 5 to 6 cm proximal to the cubitus using an 8.7-MHz linear array transducer (Vivid e; GE Healthcare Japan, Tokyo, Japan). A large quantity of ultrasound transmission gel was used to prevent direct contact with the skin and to avoid compression of the vein. Vvein and CSA were simultaneously measured on a transverse scan of the vein with the transducer tilted at 60°. Positioning of the transducer was determined at the beginning of each experiment, and it remained unchanged to limit potential errors in Doppler angle. Vvein was the result of the mean velocity of spectral Doppler recording every 12 seconds. 
CSA was calculated by manually tracing the edge of the offline transverse venous image at an arbitrary three points every 12 seconds, and then the three CSA values were averaged. Because CSA was obtained from the image measured at 60°, an accurate CSA (CSAvein) was determined as follows: (1) CSA vein cm 2 = CSA × sin 60 ° BFvein in the basilic vein was calculated according to the following formula: (2) BF vein ml / min = V vein × CSA vein Data analysis and statistical analysis The HR, MAP, muscle oxygenation, CSAvein, Vvein, and BFvein were averaged for 61 to 240 seconds before commencing exercise to establish a baseline value. The relative change in these variables from baseline during exercise and the recovery period was calculated. Data are expressed as mean ± standard error values. To compare the time-course changes, two-way analysis of variance with repeated measures was applied to the circulatory responses, CSAvein, Vvein, and BFvein under each condition (EX and EX + VIB), using time and condition as fixed factors. If a main effect of condition and/or interaction was detected, post hoc analysis with a paired t test was performed; and if a main effect of time was detected, post hoc analysis with a Bonferroni test was performed. To compare the baseline data of the circulatory response, CSAvein, Vvein, and BFvein between EX and EX + VIB, a paired t test was performed. In addition, differences in Overall RPE and Arm RPE between conditions were evaluated by paired t test. P <0.05 was considered significant. The HR, MAP, muscle oxygenation, CSAvein, Vvein, and BFvein were averaged for 61 to 240 seconds before commencing exercise to establish a baseline value. The relative change in these variables from baseline during exercise and the recovery period was calculated. Data are expressed as mean ± standard error values. To compare the time-course changes, two-way analysis of variance with repeated measures was applied to the circulatory responses, CSAvein, Vvein, and BFvein under each condition (EX and EX + VIB), using time and condition as fixed factors. If a main effect of condition and/or interaction was detected, post hoc analysis with a paired t test was performed; and if a main effect of time was detected, post hoc analysis with a Bonferroni test was performed. To compare the baseline data of the circulatory response, CSAvein, Vvein, and BFvein between EX and EX + VIB, a paired t test was performed. In addition, differences in Overall RPE and Arm RPE between conditions were evaluated by paired t test. P <0.05 was considered significant. Subjects: Eleven healthy subjects (three males and eight females) volunteered to participate in the study. Their mean ± standard deviation age, height, and weight were 21.3 ± 0.9 years, 165.0 ± 6.6 cm, and 55.2 ± 5.7 kg, respectively. All subjects were nonsmokers. The participants were asked not to drink beverages containing caffeine or alcohol for 24 hours and not to eat for at least 2 hours before the start of the experiment. The purpose, procedures, and risks of the study were explained to the subjects, and their informed consent was obtained. The study was approved by the Human Ethics Committee of the Japan Women’s College of Physical Education and was conducted in accordance with the Declaration of Helsinki. Muscle tendon vibration and maximal voluntary contraction: Before the main protocol, the subjects were examined to establish the force produced by tendon vibration at rest. 
A custom vibrator (DPS-380; Dia Medical, Tokyo, Japan) was used to induce left biceps brachii muscle contraction by reflex stimulation of the distal biceps brachii tendon at the cubital fossa [26]. The oscillating frequency of the vibrator was 100 Hz and its amplitude was 0.8 mm. On the same day, the subjects performed two maximal voluntary static elbow flexions of the left arm using a computer-based multifunctional dynamometer (VINE, Tokyo, Japan) to determine their maximal voluntary contraction (MVC) strength, defined as the highest value obtained in the two trials. Experimental protocol: In a room maintained at 25.1 ± 0.2°C, each subject stayed in a semi-reclined position in a chair that maintained body position, with the left elbow kept at a 90° angle on a padded armrest and the wrist attached to an arm lever by a Velcro strap. The subjects rested for at least 20 minutes before data collection began. After baseline data were collected for 5 minutes, subjects performed two trials: static elbow flexion at 35% MVC for 2 minutes without vibration of the biceps tendon (EX), and static elbow flexion at 35% MVC for 2 minutes with vibration of the biceps tendon (EX + VIB). Each exercise period was followed by a recovery period of 1 minute. Static elbow flexion was produced using the same dynamometer that was used to measure the MVC (VINE), with visual feedback of the achieved force provided via an oscilloscope display. For EX + VIB, tendon vibration was initiated 1 minute before starting exercise and continued throughout the exercise. Immediately after exercise, following written instructions, subjects rated their perceived exertion on the 6 to 20 category scale developed by Borg (Overall RPE) [27] and their muscle fatigue sensation (Arm RPE) on a scale of 1 to 10 [28]. In all trials, subjects regulated their respiratory frequency at 10 or 15 breaths/minute using a metronome, because exercise movement and the respiratory cycle influence sympathetic nervous system activity. EX and EX + VIB were performed in random order, with a rest period of at least 20 minutes between the two conditions. Measurements: Beat-to-beat changes in arterial pressure were assessed by finger photoplethysmography (Finometer; Finapres Medical Systems BV, Arnhem, the Netherlands), with the monitoring cuff placed around the middle finger. The heart rate (HR) and mean arterial pressure (MAP) were determined from the blood pressure waveform using the Modelflow software program, taking into account sex, age, height, and weight (BeatScope 1.1; Finapres Medical Systems BV). Muscle oxygenation (oxyhemoglobin (oxy-Hb) and deoxyhemoglobin (deoxy-Hb) concentrations) in the left (exercising) upper arm and right (resting) forearm was monitored using a near-infrared spectroscopy system (NIRO-200; Hamamatsu Photonics, Hamamatsu, Japan) at dual wavelengths (760 nm and 850 nm). The near-infrared spectroscopy probe consisted of an optically dense holder containing an emission and a detection probe and was secured to the skin with tape to minimize extraneous light. To measure blood velocity (Vvein) and cross-sectional area (CSA), non-invasive ultrasound imaging of the basilic vein (a superficial vein) of the resting upper arm was performed 5 to 6 cm proximal to the cubitus using an 8.7-MHz linear array transducer (Vivid e; GE Healthcare Japan, Tokyo, Japan). A large quantity of ultrasound transmission gel was used to prevent direct contact of the transducer with the skin and to avoid compression of the vein.
Vvein and CSA were simultaneously measured on a transverse scan of the vein with the transducer tilted at 60°. The position of the transducer was determined at the beginning of each experiment and remained unchanged thereafter to limit potential errors in the Doppler angle. Vvein was obtained as the mean velocity of the spectral Doppler recording over each 12-second interval. CSA was calculated offline by manually tracing the edge of the transverse venous image at three arbitrarily chosen points every 12 seconds, and the three CSA values were then averaged. Because CSA was obtained from an image measured at 60°, the corrected cross-sectional area (CSAvein) was determined as:

(1) CSAvein (cm²) = CSA × sin 60°

BFvein in the basilic vein was calculated according to the following formula:

(2) BFvein (ml/min) = Vvein × CSAvein

Data analysis and statistical analysis: The HR, MAP, muscle oxygenation, CSAvein, Vvein, and BFvein were averaged over 61 to 240 seconds before commencing exercise to establish a baseline value. The relative change in these variables from baseline during exercise and the recovery period was calculated. Data are expressed as mean ± standard error values. To compare the time-course changes, two-way analysis of variance with repeated measures was applied to the circulatory responses, CSAvein, Vvein, and BFvein under each condition (EX and EX + VIB), using time and condition as fixed factors. If a main effect of condition and/or an interaction was detected, post hoc analysis with a paired t test was performed; if a main effect of time was detected, post hoc analysis with a Bonferroni test was performed. To compare the baseline data of the circulatory response, CSAvein, Vvein, and BFvein between EX and EX + VIB, a paired t test was performed. In addition, differences in Overall RPE and Arm RPE between conditions were evaluated by paired t test. P <0.05 was considered significant.
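To make equations (1) and (2) and the post hoc comparison concrete, here is a minimal Python sketch with hypothetical numbers. The per-subject values, the assumed number of post hoc comparisons, and the explicit ×60 conversion from ml/s to ml/min are our own illustrative assumptions, not data or code from the study.

```python
import math
from scipy import stats  # paired t test, as used for the post hoc comparisons

SIN_60 = math.sin(math.radians(60))

def csa_vein(csa_measured_cm2: float) -> float:
    """Eq. (1): correct a CSA traced on the 60-degree-tilted transverse scan."""
    return csa_measured_cm2 * SIN_60

def bf_vein_ml_per_min(v_vein_cm_per_s: float, csa_vein_cm2: float) -> float:
    """Eq. (2): venous blood flow. cm/s x cm^2 gives ml/s; the x60 factor
    (assumed here) converts to ml/min, the unit quoted in the text."""
    return v_vein_cm_per_s * csa_vein_cm2 * 60.0

# Hypothetical per-subject (Vvein [cm/s], traced CSA [cm^2]) pairs, one list
# per condition; these are NOT the study's data.
ex_raw  = [(4.1, 0.20), (3.8, 0.22), (4.4, 0.19), (4.0, 0.21)]
vib_raw = [(4.0, 0.21), (3.9, 0.22), (4.3, 0.20), (4.1, 0.21)]

bf_ex  = [bf_vein_ml_per_min(v, csa_vein(a)) for v, a in ex_raw]
bf_vib = [bf_vein_ml_per_min(v, csa_vein(a)) for v, a in vib_raw]

# Paired t test between conditions; a Bonferroni-style correction across an
# assumed number of post hoc time-point comparisons is shown for illustration.
t, p = stats.ttest_rel(bf_ex, bf_vib)
n_post_hoc = 8  # assumed number of time points compared against baseline
print(f"paired t = {t:.2f}, raw p = {p:.3f}, "
      f"Bonferroni p = {min(1.0, p * n_post_hoc):.3f}")
```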
Results: Vibration of the biceps tendon for 2 minutes at rest elicited a reflex force equivalent to 5.3 ± 2.3% of MVC. However, the HR (from 63 ± 3 to 64 ± 2 beats/minute), MAP (from 78 ± 3 to 78 ± 3 mmHg), CSAvein (from 0.20 ± 0.03 to 0.20 ± 0.03 cm²), Vvein (from 4.1 ± 0.5 to 3.9 ± 0.5 cm/second), and BFvein of the basilic vein (from 51.6 ± 10.3 to 52.7 ± 11.9 ml/minute) did not change. There were no significant differences in the baseline data of circulatory responses, CSAvein, Vvein, and BFvein between EX and EX + VIB (Table 1). Table 1. Baseline data under each condition. Values presented as mean ± standard error. Static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Muscle tension during EX was similar to that during EX + VIB (Figure 1). However, both Overall RPE and Arm RPE after EX + VIB were significantly lower than after EX (Overall RPE: 11.5 ± 0.2 vs. 12.6 ± 0.3, P <0.05; Arm RPE: 3.2 ± 0.3 vs. 4.9 ± 0.4, P <0.05). Figure 1. Time courses of muscle tension under each condition. Static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. Figure 2 shows the time courses of HR, MAP, and the CSAvein, Vvein, and BFvein of the resting upper arm during EX and EX + VIB. The increase in HR during exercise from 96 to 120 seconds was less at EX + VIB than at EX (values at 120 seconds of exercise: 39.1 ± 4.0% vs. 50.0 ± 5.9%, P <0.05) (Figure 2A). Likewise, the increase in MAP during exercise from 96 to 120 seconds was lower at EX + VIB than at EX (values at 120 seconds of exercise: 26.0 ± 3.7% vs. 29.6 ± 3.2%, P <0.05) (Figure 2B). CSAvein during exercise at EX decreased from baseline (value at 120 seconds of exercise: –22.9 ± 6.7%, P <0.05), but CSAvein during EX + VIB did not change from baseline throughout the protocol. In addition, CSAvein at 120 seconds of exercise and during recovery at EX was lower than at EX + VIB (P <0.05) (Figure 2C). Vvein and BFvein did not change from baseline, and this response was similar under both conditions (Figure 2D,E). Figure 2. Relative changes in circulatory responses and blood flow responses in the resting upper arm: (A) heart rate (HR), (B) mean arterial pressure (MAP), (C) cross-sectional area (CSAvein), (D) blood velocity (Vvein), and (E) venous blood flow (BFvein) of the basilic vein in the resting upper arm during static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. *P <0.05, difference between EX and EX + VIB; †P <0.05, difference from baseline level during EX; ‡P <0.05, difference from baseline level during EX + VIB. Figure 3 shows the time courses of muscle oxygenation of the exercising upper arm and resting forearm during EX and EX + VIB. In the exercising upper arm, Δoxy-Hb decreased from baseline during exercise and increased from baseline during recovery in both EX and EX + VIB (P <0.05); Δoxy-Hb during recovery at EX was lower than at EX + VIB (P <0.05; Figure 3A). In the exercising upper arm, Δdeoxy-Hb increased from baseline during exercise (P <0.05) and returned to baseline during recovery in both EX and EX + VIB (Figure 3B). In the resting forearm, Δoxy-Hb and Δdeoxy-Hb did not change from baseline during either EX or EX + VIB (Figure 3C,D). However, Δdeoxy-Hb of the resting forearm during recovery at EX was lower than at EX + VIB (P <0.05). Figure 3. Relative changes in muscle oxygenation: oxyhemoglobin (Δoxy-Hb) and deoxyhemoglobin (Δdeoxy-Hb) concentrations of the (A, B) exercising upper arm and (C, D) resting forearm during static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. Data expressed as mean ± standard error. *P <0.05, difference between EX and EX + VIB; †P <0.05, difference from baseline level during EX; ‡P <0.05, difference from baseline level during EX + VIB. Discussion: The primary findings in this study were that CSAvein decreased from baseline during static elbow flexion alone, whereas CSAvein during static elbow flexion with tendon vibration did not change, and that BFvein did not change significantly during static exercise either with or without tendon vibration. These results suggest that a reduction in central command during static exercise with tendon vibration may attenuate the superficial venous vessel response of the resting limb during sustained static arm exercise. The superficial venous vessel response may be controlled both by the sympathetic nervous system [1,4,6,29] and by changes in venous pressure related to alterations in blood flow and blood volume [30,31]. In our study, BFvein did not change throughout the protocol during EX and EX + VIB (Figure 2E).
In addition, Δdeoxy-Hb in the resting forearm did not change from baseline during static elbow flexion in either EX or EX + VIB (Figure 3D). Because changes in oxy-Hb and deoxy-Hb are used to evaluate blood volume in the arterial and venous vascular beds, respectively [32,33], it can be inferred that venous blood volume in the resting forearm was unchanged during static elbow flexion. In our study, therefore, the decrease in CSAvein with exercise during EX may have been caused by sympathetic nervous system control. On the other hand, the difference in CSAvein between EX and EX + VIB during the recovery period might have been influenced by the change in venous blood volume rather than by the sympathetic nervous system, because Δdeoxy-Hb of the resting forearm also differed between the two conditions (Figure 3D). The concept of central command has classically been defined as feed-forward control. This feed-forward characterization is largely based on the immediate cardiovascular response to the onset (or even the anticipation) of exercise. In addition to feed-forward control, there is evidence that the effects of central command on cardiovascular responses are closely related to the intensity or perceived effort of the exercise [34,35]. Central command has also been proposed to be capable of functioning as feedback control, in which somatosensory signals arising from the working muscles provide a feedback signal capable of influencing central command via alterations in the perception of effort, or effort sense [19,20]. The experimental model in our study might reflect central command defined as feedback control rather than feed-forward control, because the changes in HR and MAP, which are indexes of the cardiovascular response, were significantly lower during 96 to 120 seconds of exercise in EX + VIB than in EX (Figure 2A,B). These results are in agreement with those of previous studies [14,25,26]. In addition, the magnitude of the central command response has been assessed using an individual's perception of effort sense during exercise, independent of force production [15,34]. Although the relationship between central command and RPE has not been clearly defined, the RPE scale [27] has been widely used to assess the level of central command. In the present study, RPE immediately after exercise was lower in EX + VIB than in EX, indicating that central command, defined as feedback control, might have been lower in EX + VIB than in EX. Consistent with this, CSAvein during the latter half of exercise was also smaller in EX than in EX + VIB. Moreover, the activation of central command at the onset of static elbow flexion in the present study, which reflects feed-forward control, may have been too small to cause venoconstriction; had it been sufficient, a decrease in CSAvein should have been observed at the onset of exercise in both EX and EX + VIB. Vibration applied to the biceps tendon during static exercise is a powerful stimulus for primary muscle spindle afferents. When the biceps brachii is contracting, activation of its muscle spindle primary afferents provides reflex activation that aids voluntary tension development, compared with voluntary contraction of the biceps brachii alone.
The afferent input associated with the reduced voluntary tension during exercise with tendon vibration might thus cause interactions between the perception of effort and central command, such that the activation of central command is altered [20]. The increase in sympathetic nervous system activity during exercise is caused not only by central command but also by reflex neural mechanisms activated by exercise (the muscle mechanoreflex and muscle metaboreflex) [9-11,13,16]. Muscle-exerted tension during static elbow flexion did not differ between EX and EX + VIB (Figure 1), indicating that the degree of activation of the muscle mechanoreflex may have been similar under both conditions. In addition, the Δdeoxy-Hb concentration of the exercising upper arm was similar between EX and EX + VIB (Figure 3B). Because deoxy-Hb of exercising muscle is an index of oxygen consumption [36,37], the level of exercise-induced metabolite accumulation during EX was expected to be equal to that during EX + VIB, suggesting that the degree of activation of the muscle metaboreflex might not have differed between EX and EX + VIB. In the present study, therefore, the difference in CSAvein during static exercise between EX and EX + VIB is unlikely to be due to differences in the activation of reflex neural mechanisms under the two conditions. Although the specific regions of the brain involved in exercise-related responses remain speculative, the following can be considered. Animal studies suggest that subthalamic regions are capable of generating both motor and cardiovascular responses [38]. In human studies, possible sites and neurocircuitry involving the insular cortex, sensorimotor cortex, anterior cingulate gyrus, medial prefrontal and thalamic regions [18,39-43], and the periaqueductal gray [44,45] have been suggested. A recent hypothesis concerning the neural circuit responsible for generating central command holds that cerebral cortical output is not an essential component for the generation of central command, but that a process triggering activity in neural circuits of the caudal brain is required, and that the region from the caudal diencephalon to the rostral mesencephalon plays an important role in the generation of central command [46]. This is because, in decerebrate animal studies, renal sympathetic nerve activity and HR abruptly increased in association with the start of locomotion [47], whereas spontaneous motor activity and the associated cardiovascular response were lost after decerebration at the midcollicular level [48]. Stewart and colleagues reported that venoconstriction during static exercise, which occurs not only in the splanchnic area but also in the resting extremities, may contribute to an increase in venous return to the heart and thus to cardiac output [49]. Taking previous studies, including our own, into account, venoconstriction via central command might play a significant role in hemodynamics during exercise. However, because the relationship between venous return and venoconstriction is not obvious, further investigation is required.
Limitations: Several limitations should be considered when interpreting our results. First, owing to the large compliance of veins, venous volume (that is, CSAvein) depends on the venous pressure level, but we did not measure venous pressure. As mentioned above, however, BFvein and Δdeoxy-Hb (an index of venous blood volume) of the resting forearm did not change from baseline during either EX or EX + VIB (Figures 2E and 3D). We therefore believe that venous pressure-dependent control had little effect during exercise in this study. Second, we did not account for the menstrual cycle in the female subjects. However, because EX and EX + VIB were carried out on the same day, this effect is likely negligible in our study. Conclusions: Static elbow flexion with vibration of the biceps brachii tendon, which caused a decrease in central command during exercise, inhibited the increase in circulatory response and the decrease in CSAvein in the resting upper arm compared with static exercise alone, although BFvein was similar during exercise both with and without tendon vibration. These findings suggest that central command may contribute to the superficial venous vessel response of the resting limb during sustained static elbow flexion. Abbreviations: BFvein: blood flow of the basilic vein; CSA: cross-sectional area of the basilic vein before correction; CSAvein: corrected cross-sectional area of the basilic vein; Δ: change; EX: elbow flexion without vibration; EX + VIB: elbow flexion with vibration; oxy-Hb: oxyhemoglobin; deoxy-Hb: deoxyhemoglobin; HR: heart rate; MAP: mean arterial pressure; MVC: maximal voluntary contraction; RPE: rating of perceived exertion; Vvein: blood velocity of the basilic vein. Competing interests: The authors declare that they have no competing interests. Authors' contributions: AO designed and coordinated the study, carried out the experiment, and drafted the manuscript. KS participated in the design of the study and helped draft the manuscript. AH helped carry out the experiment. TS participated in the design of the study and helped draft the manuscript. All authors read and approved the final manuscript.
Background: The superficial vein of the resting limb constricts sympathetically during exercise. Central command is one of the neural mechanisms that control the cardiovascular response to exercise. However, it is not clear whether central command contributes to the venous vessel response during exercise. Tendon vibration during static elbow flexion stimulates primary muscle spindle afferents, such that a lower central command is required to achieve a given force without altering muscle force. The purpose of this study was therefore to investigate whether a reduction in central command during static exercise with tendon vibration influences the superficial venous vessel response in the resting limb. Methods: Eleven subjects performed static elbow flexion at 35% of maximal voluntary contraction with (EX + VIB) and without (EX) vibration of the biceps brachii tendon. The heart rate, mean arterial pressure, and rating of perceived exertion (RPE), overall and in the exercising muscle, were measured. The cross-sectional area (CSAvein) and blood velocity of the basilic vein in the resting upper arm were assessed by ultrasound, and blood flow (BFvein) was calculated from both variables. Results: Muscle tension during exercise was similar between EX and EX + VIB. However, RPEs at EX + VIB were lower than those at EX (P <0.05). Increases in heart rate and mean arterial pressure during exercise at EX + VIB were also lower than those at EX (P <0.05). CSAvein in the resting limb at EX decreased during exercise from baseline (P <0.05), but CSAvein at EX + VIB did not change during exercise. CSAvein during exercise at EX was smaller than that at EX + VIB (P <0.05). However, BFvein did not change during the protocol under either condition. The decreases in circulatory response and RPEs during EX + VIB, despite identical muscle tension, showed that activation of central command was less during EX + VIB than during EX. The abolition of the decrease in CSAvein during exercise at EX + VIB may thus have been caused by the lower level of central command at EX + VIB. Conclusions: Diminished central command induced by tendon vibration may attenuate the superficial venous vessel response of the resting limb during sustained static arm exercise.
Background: The venomotor response is considered to play an important role in the transfer of blood from the veins to the heart. Many studies have suggested that sympathetic activation has an impact on venomotor responses in the resting limb during static exercise [1-3] and dynamic exercise [1,4-6], since superficial venous vessels in particular are richly innervated by sympathetic nerves [7,8], and both sympathectomy [1] and administration of an α-blocking agent [2] clearly abolish the venoconstriction observed during exercise. Sustained static exercise produces significant activation of the sympathetic nervous system, and this sympathetic activation is governed by both central (that is, central command) and peripheral (that is, muscle metaboreflex and mechanoreflex) mechanisms [9-16]. Metabolically sensitive afferents within exercising skeletal muscle detect the buildup of metabolites and act through cardiovascular centers to produce the muscle metaboreflex [11,13]; a component of this muscle reflex may arise from muscle mechanoreflex afferents [10]. Duprez and colleagues reported that post-exercise muscle ischemia produces a significant decrease in venous volume in the contralateral limb, and consequently suggested the importance of the muscle metaboreflex for venomotor tone in the non-exercising limb during exercise [17]. In addition to the muscle metaboreflex, an essential role for central command has emerged from a preliminary study [2]. The concept of central command has conventionally been defined as feed-forward control: when a motor command is sent to a muscle, a parallel or collateral command is sent to cardiovascular centers in the brainstem, which acts to activate sympathetic nerve activity. Indeed, Lorentsen found an anticipatory increase in venous pressure in the contralateral limb before the onset of exercise [2], indicating that feed-forward control by central command plays an important role in venous tone. However, that study did not examine the influence of central command during actual exercise [2]. Recent studies have also reported that central command functions as feedback control, in which somatosensory signals arising from the working muscles continuously provide a feedback signal and probably modulate cardiovascular responses via alterations in the perception of effort, or effort sense [18-20]. If central command functions as feedback control, its influence should appear not only before exercise but also during sustained and later periods of exercise. However, this question has not yet been addressed, and verifying the influence of central command on venous tone during actual, sustained static exercise is therefore necessary. On the other hand, using the non-invasive ultrasound Doppler method [21-24], assessment of the response of a single vein during exercise can extend prior knowledge about the influence of central command on the venous system during exercise. Based on these considerations, we investigated whether central command affects venomotor tone in the contralateral limb during sustained static exercise. In the present study, using vibration of the biceps brachii tendon as reported previously [14,25], we evaluated whether lower activation of central command during sustained static elbow flexion is accompanied by smaller responses of the cross-sectional area of the superficial vein in the resting upper arm (CSAvein).
Tendon vibration during active muscle contraction excites the primary afferents of the muscle spindles of the contracting muscle, thereby inducing reflex tension via the monosynaptic tendon reflex, which in turn aids voluntary tension development and consequently reduces the amount of central command required to generate a given force [14,25,26]. In addition, tendon vibration is useful for investigating the influence of central command without inducing discomfort or nociceptor afferent input [14].
Keywords: Central command | Ultrasound technique | Venoconstriction | Venous return
MeSH terms: Arterial Pressure | Elbow | Exercise | Female | Heart Rate | Humans | Male | Muscle Contraction | Vibration | Young Adult
Novel mutations of TCOF1 gene in European patients with Treacher Collins syndrome.
PMID: 21951868
Background: Treacher Collins syndrome (TCS) is one of the most severe autosomal dominant congenital disorders of craniofacial development and shows variable phenotypic expression. TCS is extremely rare, occurring with an incidence of 1 in 50,000 live births. Its distinguishing characteristics are down-slanting palpebral fissures, coloboma of the eyelid, micrognathia, microtia and other deformities of the ears, hypoplastic zygomatic arches, and macrostomia. Conductive hearing loss and cleft palate are often present. TCS results from mutations in the TCOF1 gene located on chromosome 5, which encodes a serine/alanine-rich nucleolar phosphoprotein called Treacle. However, alterations in the TCOF1 gene have been implicated in only 81-93% of TCS cases.
Methods: In this study, the entire coding region of the TCOF1 gene, including the newly described exons 6A and 16A, was sequenced in 46 unrelated subjects with a clinical suspicion of TCS.
Results: Fifteen mutations were identified, twelve novel and three already described, in 14 sporadic patients and in 3 familial cases. Moreover, seven novel polymorphisms were also described. Most of the characterised mutations were microdeletions spanning one or more nucleotides, in addition to an insertion of one nucleotide in exon 18 and a stop mutation. The deletions and the insertion described cause a premature termination of translation, resulting in a truncated protein.
Conclusion: This study confirms that almost all TCOF1 pathogenic mutations fall in the coding region and lead to an aberrant protein.
MeSH terms: Base Sequence | Codon, Nonsense | DNA Mutational Analysis | DNA Primers | Europe | Exons | Female | Frameshift Mutation | Humans | Male | Mandibulofacial Dysostosis | Mutagenesis, Insertional | Mutation | Nuclear Proteins | Phosphoproteins | Polymorphism, Single Nucleotide | Sequence Deletion
PMCID: 3199234
Background
Treacher Collins syndrome (TCS; OMIM #154500) is an autosomal dominant disorder that affects craniofacial development during early embryogenesis [1]. TCS is characterized by bilaterally symmetric features, including downward slanting palpebral fissures and colobomata of the lower eyelids, hypoplasia of the midfacial bones, cleft palate, and abnormal development of the external/middle ear that often leads to conductive hearing loss [2-4]. TCS occurs with an incidence of 1/50,000, and more than 60% of TCS cases have no previous family history, arising as the result of de novo mutations [5]. The syndrome is caused by mutations in the TCOF1 gene (OMIM #606847), which encodes the nucleolar phosphoprotein Treacle that may serve as a link between rDNA gene transcription and pre-rRNA processing [6]. Recently, Dauwerse et al. detected mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) in Treacher Collins patients [7]. Thus far, most of the 200 disease-causing mutations described are deletions, insertions, and nonsense mutations distributed along 28 exons [8]. Two additional exons have been reported: exon 6A, included in the most common isoform, and exon 16A, included in a minor isoform [9]. The mutations observed in TCS are predominantly sporadic, and the vast majority result in the introduction of a premature termination codon that can lead to truncation of the protein or to nonsense-mediated mRNA decay [10,11]. This suggests that the developmental anomalies result from haploinsufficiency of TCOF1. Penetrance of the genetic mutations underlying TCS is thought to be very high; however, extreme inter- and intra-familial phenotypic variation is reported [12]. In the present study, we screened 46 patients with a clinical diagnosis of TCS by sequencing the entire TCOF1 coding sequence together with the splice junctions. As a result, 12 novel and 3 already reported mutations were characterised, together with 7 novel and 13 known polymorphisms.
Methods
Patients: Forty-six patients with a clinical diagnosis of TCS were recruited through several European Medical Institutes since 2002. In particular, the patients were evaluated at the University of Torino, Napoli, Rome Tor Vergata and "La Sapienza", University of Aquila, Ospedali Galliera of Genova, IRCCS Casa Sollievo della Sofferenza at San Giovanni Rotondo, Pediatric Department of SS. Pietro e Paolo Hospital at Borgosesia, Pediatric Department of the Bolzano Hospital, "Gaetano Rummo" Hospital at Benevento, Clinica Mangiagalli at Milano, S. Pietro Hospital at Rome, IRCCS "Ass. Oasi Maria SS" at Troina (Italy), GENDIA lab (Antwerp, Belgium), Egas Moniz Hospital (Lisbon, Portugal), and Centre Hospitalier Universitaire de Rennes (France). Three patients had one parent with similar TCS features, while the other 43 cases had no family history. The major clinical features of TCS were recognized in all patients. After informed consent was obtained from patients or their families, blood samples were collected. Mutational analysis: Genomic DNA was obtained from peripheral blood samples using the EZ1 DNA Blood 200 μl purification kit (Qiagen, GmbH, Germany). Coding regions and intron/exon boundaries of the TCOF1 gene were amplified in 28 reactions using specific primers [13]. For exons 6A, 10, 16A, 24, and 25, specific primers were self-designed (Table 1). Table 1. Primers self-designed for TCOF1 gene amplification and sequencing. PCR amplification was performed in a 25 μl reaction volume containing 2.5 U AmpliTaq Gold™ DNA polymerase (Applied Biosystems, Foster City, CA), 1X reaction buffer (10 mM Tris-HCl pH 8.3, 50 mM KCl, 2.5 mM MgCl2), 200 μM of each deoxyribonucleoside triphosphate (dNTP), and 0.2 mM of each primer, using a PTC 100 thermocycler (MJ Research, Inc., Waltham, MA, USA). A 10-minute denaturation step at 94°C was followed by 30 cycles of 94°C for 30 seconds, annealing for 30 seconds at a primer-specific temperature of 52.5-62°C, and extension for 30 seconds at 72°C; the reaction was completed by a final extension for 7 minutes at 72°C. PCR products were purified by digestion with Antarctic Phosphatase and Exonuclease I (New England BioLabs Inc.) and were sequenced in both directions using the Applied Biosystems BigDye Terminator v3.1 Cycle Sequencing kit. The novel mutations were not found in 100 normal chromosomes by sequencing.
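As a side note, a thermocycling program like the one above is straightforward to encode as a small machine-readable protocol. The sketch below is purely illustrative: the dictionary layout and helper function are our own, and only the temperatures, times, and cycle count come from the text.

```python
# Illustrative encoding of the PCR thermocycling protocol described above.
# The step values come from the text; the data structure itself is hypothetical.
pcr_protocol = {
    "initial_denaturation": {"temp_c": 94.0, "seconds": 600},
    "cycles": 30,
    "per_cycle": [
        {"step": "denaturation", "temp_c": 94.0, "seconds": 30},
        # Annealing temperature is primer specific (52.5-62 degrees C).
        {"step": "annealing", "temp_c": (52.5, 62.0), "seconds": 30},
        {"step": "extension", "temp_c": 72.0, "seconds": 30},
    ],
    "final_extension": {"temp_c": 72.0, "seconds": 420},
}

def total_runtime_minutes(protocol: dict) -> float:
    """Rough lower bound on run time, ignoring ramp rates between steps."""
    fixed = (protocol["initial_denaturation"]["seconds"]
             + protocol["final_extension"]["seconds"])
    cycled = protocol["cycles"] * sum(s["seconds"] for s in protocol["per_cycle"])
    return (fixed + cycled) / 60.0

print(f"Approximate run time: {total_runtime_minutes(pcr_protocol):.0f} min")
# -> roughly 62 min, before ramping overhead
```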
Nucleotide and aminoacid numbering: All mutations were named considering the genomic reference [NT_029289] and the cDNA corresponding to the major treacle isoform [NM_001135243.1] [14]. Mutation nomenclature is based on the HGVS nomenclature guidelines [http://www.hgvs.org/mutnomen] [15]. In silico tools: The splice predictor software program NNSplice version 0.9 [http://www.fruitfly.org/seq_tools/splice.html] was used as an initial approach to a novel variant (c.IVS16A-30G→A) suspected of causing aberrant RNA processing of the TCOF1 gene. The ESEfinder 2.0 program [http://rulai.cshl.edu/tools/ESE2/] was used to predict hypothetical splicing enhancers in the mutated c.IVS16A-30G→A sequence.
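As a toy illustration of the HGVS conventions used for the deletions reported here, the helper below formats a simple cDNA deletion such as c.4366_4370delGAAAA. It covers only the plain multi-base deletion case and is not a substitute for full HGVS validation tools; the function name and interface are our own.

```python
def hgvs_cdna_deletion(start: int, end: int, deleted_seq: str) -> str:
    """Format a simple cDNA deletion in HGVS style, e.g. c.4366_4370delGAAAA.

    Only handles plain deletions on the coding sequence; insertions,
    duplications, and intronic offsets need additional rules.
    """
    if len(deleted_seq) != end - start + 1:
        raise ValueError("deleted sequence length must match the coordinate span")
    if start == end:
        return f"c.{start}del{deleted_seq}"  # single-base deletion, e.g. c.519delT
    return f"c.{start}_{end}del{deleted_seq}"

print(hgvs_cdna_deletion(4366, 4370, "GAAAA"))  # c.4366_4370delGAAAA
print(hgvs_cdna_deletion(519, 519, "T"))        # c.519delT
```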
Results
The TCOF1 gene analysis was carried out on the 46 TCS European patients by direct sequencing of 28 PCR genomic fragments encompassing the complete coding sequence and splice sites. We detected 15 different TCOF1 mutations in 17 patients (Table 2; Figure 1), 12 new and 3 already described. Table 2. Pathogenic mutations in TCS patients. The cDNA major isoform (including exon 6A, but without exon 16A) was deposited in GenBank with accession number AY460334 [14]. For the cDNA sequence, nucleotide +1 is the A of the ATG translation initiation codon. Figure 1. (A-O) Chromatograms of the characterized pathogenic mutations in TCS patients. The header of each picture indicates the nucleotide mutation; the arrows show the site of mutation in the chromatograms. A and E report reverse sequences; B-D and F-O report forward sequences.
Considering the already known mutations, the 5-bp deletion c.4366_4370delGAAAA in exon 24 (Figure 1A) was the most frequent mutation observed in this study (3/17, 17%) (patients TCS 15, 16, and 17). In the same exon (patient TCS 14), we identified the already known frameshift mutation c.4359_4363delAAAAA (Figure 1B). Both mutations fall within the TCOF1 mutation hot spot rich in 18 lysine residues; other microdeletions have already been described in this region [8,11,16-18]. The other known mutation is a 2 bp deletion, c.1639_1640delAG (patient TCS 4), localized in exon 10 (Figure 1C) [8,13].
A total of 12 novel disease-causing mutations were found in 2 familial cases and in 10 sporadic TCS patients. These consist of 10 microdeletions (1-15 nucleotides), one single-base duplication, and one nonsense mutation. All the deletions and the insertion cause a frameshift and produce a truncated protein. The first pathogenic microdeletion was c.519delT in exon 5 (patient TCS 2) (Figure 1D), leading to the formation of a stop codon 46 aminoacids later; this alteration was not found in the patient's parents. Patient TCS 5 was found to bear a de novo truncating mutation, c.1581delG, in exon 10 (Figure 1E), which introduces a premature stop codon 69 aminoacids later. Two alterations were identified in exons 15 and 16, which encode repetitive motifs. The former, c.2626_2627delGA in exon 15 (Figure 1F), was identified in patient TCS 8 and causes the formation of a stop codon two aminoacids later; the latter, c.2831delA (Figure 1G), identified in exon 16 of patient TCS 9, causes a frameshift. Another proband (TCS 12) was heterozygous for the de novo c.3700_3704delACTCT mutation in exon 22 (Figure 1H). Sequence analysis of a DNA sample from patient TCS 10 identified the c.3118_3119dupG mutation in exon 18 (Figure 1I); analysis of the relatives indicated that this duplication was also present in the affected father and brother.
Two more TCOF1 variations were found in patient TCS 7. The first was a single base substitution, c.2924C > T in exon 17, determining the aminoacidic change p.P975L. The other was a 2 bp deletion, c.2285_2286delCT in exon 13 (Figure 1J), which results in a frameshift and a premature stop codon. Neither abnormality was found in 140 and 100 healthy chromosomes, respectively, indicating that they were not common polymorphisms. Analysis of the DNA of TCS 7's unaffected parents showed that the mother was a carrier of c.2924C > T, while neither parent carried the 2 bp deletion. This indicates that the 2 bp deletion, which disrupts protein translation, is probably the causative mutation.
In patient TCS 11, a truncating deletion of 8 bases, c.3456_3463delTTCTTCAG (Figure 1K), was found in exon 20, causing a frameshift that results in a stop codon four codons later. A similar mutation was found in patient TCS 6 in exon 12: a single-base microdeletion, c.1973delC (Figure 1L), with a stop codon 52 codons later. In patient TCS 13, a nonsense mutation, c.4231C > T (Figure 1M), was found; this pathogenic variation occurred in exon 23B, causing a stop codon (p.Q1411X). In patient TCS 1, a small deletion was found: a truncating mutation of the last 2 nucleotides of exon 3, c.303_304delCA (Figure 1N), causing a stop codon 73 codons later; the deletion also involves the first 13 nucleotides of intron 3 (IVS3+13delGTAAGAGCCTTGC). Finally, we detected a one-bp deletion, c.599delG, in patient TCS 3 (Figure 1O). This was a familial case, as we confirmed the mutation in the proband's son and daughter; it is located in exon 6 and causes a stop codon 19 codons later.
In 29 patients with an apparent TCS phenotype, no pathogenic mutations were identified after screening of the whole coding region of the gene, although these patients present clinical features of the syndrome. All these cases are isolated. A large number of TCOF1 polymorphisms were detected: thirteen of these were already published and 7 are novel (Table 3). Twelve were silent or missense variations and 8 occur in intronic regions. All novel polymorphisms were present in controls at different percentages. Table 3. Polymorphisms in TCS patients. The URL for the International HapMap project is http://hapmap.ncbi.nlm.nih.gov/
Patient TCS 21 had a single base substitution, c.2859-30G > A, in the intron upstream of exon 16. This substitution was not identified in 150 healthy chromosomes; therefore, this variation could be a novel splicing mutation in the TCOF1 gene. Computer simulation studies were performed to evaluate the role of this variation in hypothetical splicing enhancers and splice-site prediction: the mutated sequence determines a loss of SF2/ASF and SRp55 and a gain of SRp40 binding sites according to ESEfinder, but the splice-site strength score was 0.89, similar to the wild-type sequence, according to NNSplice.
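For readers unfamiliar with splice-site "strength scores", the toy position-weight-matrix scorer below illustrates the general idea behind such scores. The matrix probabilities and example windows are invented for illustration, and real tools such as NNSplice use trained neural-network models rather than this simple log-odds scheme.

```python
import math

# Toy donor-site position weight matrix over 4 positions (real models use
# longer windows); the probabilities here are invented for illustration only.
PWM = [
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},  # consensus G
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},  # consensus T
    {"A": 0.6, "C": 0.1, "G": 0.2, "T": 0.1},
    {"A": 0.5, "C": 0.1, "G": 0.3, "T": 0.1},
]
BACKGROUND = 0.25  # uniform base composition assumed

def pwm_score(seq: str) -> float:
    """Log-odds score of a candidate site; higher means more splice-like."""
    return sum(math.log2(PWM[i][base] / BACKGROUND) for i, base in enumerate(seq))

# Comparing a wild-type window against a mutated one (sequences hypothetical):
for label, window in [("wild type", "GTAA"), ("mutant", "GTAG")]:
    print(label, round(pwm_score(window), 2))
```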
Discussion
In this study, we report the screening of the entire TCOF1 coding region and the identification of a spectrum of 3 known mutations, 12 novel pathogenic mutations, and 7 novel polymorphisms by direct sequencing. In all familial cases, we identified the TCOF1 mutation in one parent with similar TCS features. Of the 43 analyzed sporadic cases, 14 had arisen as the result of a de novo mutation in the TCOF1 gene. The sensitivity of sequencing analysis of the TCOF1 gene in our patients was 37% (17/46). The remaining 29 TCS patients, negative on TCOF1 screening, have to be clinically re-evaluated. In this regard, we recommend a clinical re-evaluation using craniofacial radiographs, which are extremely useful in detecting zygomatic hypoplasia as a clinical feature of TCS patients [19]. On the other hand, differential diagnosis is necessary: Nager and Miller syndromes exhibit phenotypic overlap with TCS. Moreover, mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) cause TCS, too [7]. We are considering performing the screening of the POLR1C and POLR1D genes in our TCOF1-negative patients.
TCOF1 gene mutations include missense and nonsense mutations, small deletions, and duplications. In particular, the most common classes of TCOF1 alleles are small deletions (60%) and duplications (25%) resulting in frameshifts [10]. In agreement with the literature, 76.5% (13/17) of the mutations we characterised are microdeletions and 6% (1/17) are duplications. Although five exons (10, 15, 16, 23, and 24) have been suggested to define a hot spot region of TCOF1 mutations [14], a distribution of pathogenic variations along the whole gene has been reported by different authors. We confirmed exon 24 as a hot spot of the TCOF1 gene, as we found the largest number of mutations in this exon (Table 2; Figure 1). The c.4366_4370delGAAAA is the most frequent mutation, found in three of the 17 affected patients. This is probably due to the high repetition of adenines (60% of the exon), making the exon 24 region prone to polymerase slippage during DNA replication [14]. Moreover, the high complexity of this exon makes it difficult to assign the correct nomenclature to the identified sequence variations; it is therefore mandatory to sequence exon 24 in both directions. In this study, exons 10, 15, and 16 were also revealed as pathogenic gene regions. Finally, three mutations were found in rarely affected sequences.
Two possible TCOF1 variations were found in the same patient (TCS 7), a situation also reported by Fujioka et al. [20]. In such cases, familial analysis is required to determine which variation is benign and which is pathogenic in TCS. All patients, and in particular patients with typical TCS features but negative TCOF1 screening, were analyzed for the two alternatively spliced in-frame exons (6A and 16A), and no mutations were found.
Conclusion
In this work, the observation of affected features, combined with molecular analysis, was sufficient to perform a correct TCS diagnosis in 35% of cases; this reflects the phenotypic variability of TCS.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2350/12/125/prepub
[ "Background", "Patients", "Mutational analysis", "Nucleotide and Aminoacid numbering", "In silico tools", "Results", "Discussion", "Conclusion" ]
[ "Treacher Collins syndrome (TCS; OMIM #154500) is an autosomal dominant disorder that affects the craniofacial development during early embryogenesis [1]. TCS is characterized by bilaterally symmetric features, including downward slanting palpebral fissures and colobomata of the lower eyelids, hypoplasia of the midfacial bones, cleft palate, and abnormal development of the external/middle ear that often leads to conductive hearing loss [2-4]. TCS occurs with an incidence of 1/50.000 and more than 60% of TCS cases has no previous family history and arises as the result of de novo mutations [5]. The syndrome is caused by mutations in the TCOF1 gene (OMIM #606847), which encodes the nucleolar phosphoprotein Treacle that may serve as a link between rDNA gene transcription and pre-rRNA processing [6]. Recently, Dauwerse et al. detected mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) in Treacher Collins patients [7].\nThus far, most of the 200 disease-causing mutations described are deletions, insertions and nonsense, distributed along 28 exons [8]. Two additional exons have been reported: exon 6A, included in the most common isoform, and exon 16A, included in a minor isoform [9]. The mutations observed in TCS are predominantly sporadic, and the vast majority results in the introduction of a premature termination codon that can lead to the truncation of protein or to nonsense-mediated mRNA decay [10,11]. This suggests in the developmental anomalies result from haploinsufficiency of TCOF1. Penetrance of the genetic mutations underlying TCS is thought to be very high; however, extreme inter- and intra- familial phenotypic variation is reported [12].\nIn the present study, we screened 46 patients with a clinical diagnosis of TCS, by sequencing the entire TCOF1 coding sequence together with the splice junctions. As result, 12 novel and 3 already reported mutations were characterised together with 7 novel and 13 known polymorphisms.", "46 patients with a clinical diagnosis of TCS were recruited through several European Medical Institutes since 2002. In particular, the patients were evaluated at the University of Torino, Napoli, Rome Tor Vergata and \"La Sapienza\", University of Aquila, Ospedali Galliera of Genova, IRCCS Casa Sollievo della Sofferenza at San Giovanni Rotondo, Pediatric Department of SS. Pietro e Paolo Hospital at Borgosesia, Pediatric Department of the Bolzano Hospital, \"Gaetano Rummo\" Hospital at Benevento, Clinica Mangiagalli at Milano, S. Pietro Hospital at Rome, IRCCS \"Ass. Oasi Maria SS\" at Troina (Italy), GENDIA lab (Antwerp, Belgium), Egas Moniz Hospital (Lisbon, Portugal), Centre Hospitalier Universitaire de Rennes (France). Three patients have one parent with similar TCS features, while the other 43 cases haven't a family history. The major clinical features of TCS were recognized in all patients. After informed consent was obtained from patients or their families, blood samples were collected.", "Genomic DNA was obtained from peripheral blood samples using EZ1 DNA Blood 200 μl purification kit (Qiagen, GmbH, Germany). Coding regions and intron/exon boundaries of the TCOF1 gene were amplified in 28 reactions using specific primers [13]. 
For exons 6A, 10, 16A, 24, and 25 specific primers were self designed (Table 1).\nPrimers self-designed for TCOF1 gene amplification and sequencing\nPCR amplification was performed in a 25 μl reaction volume containing 2.5 U AmpliTaq Gold™ DNA polymerase (Applied Biosystems, Foster City, CA), 1X reaction buffer (10 mM Tris HCl pH 8.3, 50 mM KCl, 2.5 mM MgCl2), 200 μM of each deoxyribonucleoside triphosphate (dNTPs) and 0.2 mM each of primers using a PTC 100 thermocycler (MJ Research, Inc. Waltham, MA, USA).\nA 10-minute denaturation step at 94°C was followed by 30 cycles at 94°C for 30 seconds, annealing temperature was performed, for each primer, 30 seconds at 52.5-62°C, and extending for 30 sec at 72°C; the reaction was completed by a final extension for 7 minutes at 72°C.\nPCR products were purified by digestion with Antartic Phosphatase and Exonuclease I (New England BioLabs Inc.) and were sequenced in both directions using the Applied Biosystem Big Dye Terminator v3.1 Cycle sequencing kit.\nNew mutations were not found in 100 normal chromosomes by sequencing.", "All mutations were named considering the genomic reference [NT_029289] and the cDNA that corresponds to the major treacle isoform [NM_001135243.1] [14]. Mutation nomenclature is based on HGVS nomenclature guidelines [http://www.hgvs.org/mutnomen] [15].", "The splice predictor software program, NNSplice version 0.9 [http://www.fruitfly.org/seq_tools/splice.html], was used for an initial approach to novel variant (c.IVS16A-30G→A) suspected of causing aberrant RNA processing of the TCOF1 gene. The ESE finder 2.0 program [http://rulai.cshl.edu/tools/ESE2/] was used to predict hypothetical splicing enhancer in mutated c.IVS16A-30G→A sequence.", "The TCOF1 gene analysis was carried out on the 46 TCS European patients by direct sequencing of 28 PCR genomic fragments encompassing the complete coding sequence and splice sites. We detected 15 different TCOF1 mutations in 17 patients (Table 2 Figure 1), 12 new and 3 already described.\nPathogenic mutations in TCS patients\nThe cDNA major isoform (including exon 6A, but without exon 16A) was deposited in GenBank with accession number AY460334[14].\nFor the cDNA sequence, nucleotide +1 is the A of the ATG-translation initiation codon.\nA-O Chromatograms of characterized pathogenic mutations in TCS patients. The header of each picture indicates the nucleotide mutation. The arrows show the site of mutation in chromatograms. A and E chromatograms report reverse sequences; B-D and F-O chromatograms report forward sequences.\nConsidering the already known mutations, the 5-bp deletion, c.4366_4370delGAAAA, in exon 24 (Figure 1A), was the most frequent mutation observed in this study (3/17, 17%) (Patient TCS 15,16 and 17). In the same exon (patient TCS 14), we identified the already known frameshift mutation c.4359_4363delAAAAA (Figure 1B). Both the mutations identified fall within the TCOF1 mutation hot spot rich in 18 Lysine residues. Other microdeletions have already been described in this region [8,11,16-18].\nThe other known mutation is a 2 bp deletion, c.1639_1640delAG (patient TCS 4), localized in exon 10 (Figure 1C) [8,13].\nA total of 12 novel disease-causative mutations were found in 2 familial cases and in 10 sporadic TCS patients. These consist of 10 microdeletions (sized 1/15 nucleotides), one single-base duplication, and one nonsense mutation. 
All deletions and the insertion cause a frameshift and produce a truncated protein.\nThe first reported pathogenic microdeletion was a c.519delT in exon 5 (patient TCS 2) (Figure 1D), leading to the formation of a stop codon 46 amino acids later. This alteration was not found in the patient's parents. Patient TCS 5 was found to bear a de novo truncating mutation, c.1581delG, in exon 10 (Figure 1E), which introduces a premature stop codon 69 amino acids later.\nTwo alterations were identified in exons 15 and 16, which encode repetitive motifs. The former, c.2626_2627delGA in exon 15 (Figure 1F), was identified in patient TCS 8 and causes the formation of a stop codon two amino acids later. The latter, c.2831delA (Figure 1G), identified in exon 16 of patient TCS 9, causes a frameshift.\nAnother proband (TCS 12) was heterozygous for the de novo c.3700_3704delACTCT mutation in exon 22 (Figure 1H).\nSequence analysis of a DNA sample obtained from patient TCS 10 identified the c.3118_3119dupG mutation in exon 18 (Figure 1I). Analysis of the relatives indicated that this duplication was also present in the affected father and brother.\nTwo more TCOF1 variations were found in patient TCS 7. The first was a single base substitution, c.2924C > T, in exon 17, determining the amino acid change p.P975L. The other was a 2 bp deletion, c.2285_2286delCT, in exon 13 (Figure 1J), which results in a frameshift and a premature stop codon. Neither abnormality was found in 140 and 100 healthy chromosomes, respectively, indicating that they were not common polymorphisms.\nAnalysis of the DNA of TCS 7's unaffected parents showed that the mother was a carrier of c.2924C > T. Neither parent carried the 2 bp deletion. Thus, the 2 bp deletion, which disrupts protein translation, is probably the causative mutation.\nIn patient TCS 11, a truncating deletion of 8 bases, c.3456_3463delTTCTTCAG (Figure 1K), was found in exon 20, causing a frameshift that results in a stop codon four codons later. The same kind of mutation was found in patient TCS 6 in exon 12: a microdeletion of a single base, c.1973delC (Figure 1L), with a stop codon 52 codons later.\nIn patient TCS 13, a nonsense mutation, c.4231C > T (Figure 1M), was found. This pathogenic variation occurred in exon 23B, causing a stop codon (p.Q1411X). In patient TCS 1, a small deletion was found: a truncating mutation of the last 2 nucleotides of exon 3, c.303_304delCA (Figure 1N), causing a stop codon 73 codons later. In fact, the deletion also involves the first 13 nucleotides of intron 3 (IVS3+13delGTAAGAGCCTTGC).\nFinally, we detected a one bp deletion, c.599delG, in patient TCS 3 (Figure 1O). This was a familial case, as we confirmed the mutation in the proband's son and daughter. It was located in exon 6 and caused a stop codon 19 codons later.\nIn 29 patients with an apparently typical TCS phenotype, no pathogenic mutations were identified after screening of the whole coding region of the gene, although these patients present clinical features of the syndrome. All of these cases are isolated.\nA large number of TCOF1 polymorphisms were detected. Thirteen of these had already been published and 7 are novel (Table 3). Twelve were silent or missense variations and 8 occur in intronic regions. All novel polymorphisms were present in controls at different frequencies.\nPolymorphisms in TCS patients\nThe URL for the International HapMap project is http://hapmap.ncbi.nlm.nih.gov/\nPatient TCS 21 had a single base substitution, c.2859-30G > A, in exon 16. This substitution was not identified in 150 healthy chromosomes; therefore, this variation could be a novel splicing mutation in the TCOF1 gene. Computer simulation studies were performed to evaluate the role of this variation in hypothetical splicing enhancer or splice site prediction. We deduced that the mutated sequence determines a loss of SF2/ASF and SRp55 binding sites and a gain of an SRp40 binding site, while the splice site strength score (0.89) remained similar to that of the wild-type sequence, according to the ESEfinder and NNSplice programs, respectively.", "In this study, we report the screening of the entire TCOF1 coding region and the identification of a spectrum of 3 known mutations, 12 novel pathogenic mutations and 7 novel polymorphisms by direct sequencing. In all familial cases, we identified the TCOF1 mutation in one parent with similar TCS features. Of the 43 analyzed sporadic cases, 14 had arisen as the result of a de novo mutation in the TCOF1 gene. The sensitivity of sequencing analysis of the TCOF1 gene in our patients was 37% (17/46). The remaining 29 TCS patients, negative on TCOF1 screening, have to be clinically re-evaluated. In this regard, we recommend a clinical re-evaluation using craniofacial radiographs, which are extremely useful in detecting zygomatic hypoplasia as a clinical feature of TCS patients [19]. On the other hand, differential diagnosis is necessary: Nager and Miller syndromes exhibit phenotypic overlap with TCS. Moreover, mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) also cause TCS [7]. We are considering screening the POLR1C and POLR1D genes in our TCOF1-negative patients.\nTCOF1 gene mutations include missense and nonsense mutations, small deletions and duplications. In particular, the most common classes of TCOF1 alleles are small deletions (60%) and duplications (25%) resulting in frameshifts [10]. Consistent with the literature, 76.5% (13/17) of our characterised mutations are microdeletions and 6% (1/17) are duplications. Although five exons (10, 15, 16, 23 and 24) have been suggested to define a hot spot region for TCOF1 mutations [14], a distribution of pathogenic variations along the whole gene has been reported by different authors. We confirmed exon 24 as a hot spot of the TCOF1 gene, as we described the largest number of mutations in this exon (Table 2, Figure 1). The c.4366_4370delGAAAA is the most frequent mutation, as we found it in three of 16 affected patients. This is probably due to the high repetition of adenines (60% of the exon), which makes the exon 24 region prone to polymerase slippage during DNA replication [14]. Moreover, the high complexity of this exon makes it difficult to assign the correct nomenclature to the identified sequence variations. It is therefore mandatory to sequence exon 24 in both directions. In this study, exons 10, 15 and 16 were also revealed to be pathogenic gene regions. Finally, three mutations were found in rarely affected sequences.\nTwo more TCOF1 variations were found in patient TCAR. The presence of two possible TCOF1 mutations in the same patient has been reported in a paper by Fujioka H et al. [20]. In such cases, familial analysis is required to determine which variation is benign and which is pathogenic in TCS.\nAll patients, and in particular patients with typical TCS features but negative TCOF1 screening, were analyzed for the two alternatively spliced in-frame exons (6A and 16A), and no mutations were found.", "In this work, the observation of the affected features, combined with molecular analysis, was sufficient to establish a correct TCS diagnosis in 35% of cases; this reflects the phenotypic variability of TCS." ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Mutational analysis", "Nucleotide and Aminoacid numbering", "In silico tools", "Results", "Discussion", "Conclusion" ]
[ "Treacher Collins syndrome (TCS; OMIM #154500) is an autosomal dominant disorder that affects the craniofacial development during early embryogenesis [1]. TCS is characterized by bilaterally symmetric features, including downward slanting palpebral fissures and colobomata of the lower eyelids, hypoplasia of the midfacial bones, cleft palate, and abnormal development of the external/middle ear that often leads to conductive hearing loss [2-4]. TCS occurs with an incidence of 1/50.000 and more than 60% of TCS cases has no previous family history and arises as the result of de novo mutations [5]. The syndrome is caused by mutations in the TCOF1 gene (OMIM #606847), which encodes the nucleolar phosphoprotein Treacle that may serve as a link between rDNA gene transcription and pre-rRNA processing [6]. Recently, Dauwerse et al. detected mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) in Treacher Collins patients [7].\nThus far, most of the 200 disease-causing mutations described are deletions, insertions and nonsense, distributed along 28 exons [8]. Two additional exons have been reported: exon 6A, included in the most common isoform, and exon 16A, included in a minor isoform [9]. The mutations observed in TCS are predominantly sporadic, and the vast majority results in the introduction of a premature termination codon that can lead to the truncation of protein or to nonsense-mediated mRNA decay [10,11]. This suggests in the developmental anomalies result from haploinsufficiency of TCOF1. Penetrance of the genetic mutations underlying TCS is thought to be very high; however, extreme inter- and intra- familial phenotypic variation is reported [12].\nIn the present study, we screened 46 patients with a clinical diagnosis of TCS, by sequencing the entire TCOF1 coding sequence together with the splice junctions. As result, 12 novel and 3 already reported mutations were characterised together with 7 novel and 13 known polymorphisms.", " Patients 46 patients with a clinical diagnosis of TCS were recruited through several European Medical Institutes since 2002. In particular, the patients were evaluated at the University of Torino, Napoli, Rome Tor Vergata and \"La Sapienza\", University of Aquila, Ospedali Galliera of Genova, IRCCS Casa Sollievo della Sofferenza at San Giovanni Rotondo, Pediatric Department of SS. Pietro e Paolo Hospital at Borgosesia, Pediatric Department of the Bolzano Hospital, \"Gaetano Rummo\" Hospital at Benevento, Clinica Mangiagalli at Milano, S. Pietro Hospital at Rome, IRCCS \"Ass. Oasi Maria SS\" at Troina (Italy), GENDIA lab (Antwerp, Belgium), Egas Moniz Hospital (Lisbon, Portugal), Centre Hospitalier Universitaire de Rennes (France). Three patients have one parent with similar TCS features, while the other 43 cases haven't a family history. The major clinical features of TCS were recognized in all patients. After informed consent was obtained from patients or their families, blood samples were collected.\n46 patients with a clinical diagnosis of TCS were recruited through several European Medical Institutes since 2002. In particular, the patients were evaluated at the University of Torino, Napoli, Rome Tor Vergata and \"La Sapienza\", University of Aquila, Ospedali Galliera of Genova, IRCCS Casa Sollievo della Sofferenza at San Giovanni Rotondo, Pediatric Department of SS. Pietro e Paolo Hospital at Borgosesia, Pediatric Department of the Bolzano Hospital, \"Gaetano Rummo\" Hospital at Benevento, Clinica Mangiagalli at Milano, S. 
Pietro Hospital at Rome, IRCCS \"Ass. Oasi Maria SS\" at Troina (Italy), GENDIA lab (Antwerp, Belgium), Egas Moniz Hospital (Lisbon, Portugal), Centre Hospitalier Universitaire de Rennes (France). Three patients have one parent with similar TCS features, while the other 43 cases haven't a family history. The major clinical features of TCS were recognized in all patients. After informed consent was obtained from patients or their families, blood samples were collected.\n Mutational analysis Genomic DNA was obtained from peripheral blood samples using EZ1 DNA Blood 200 μl purification kit (Qiagen, GmbH, Germany). Coding regions and intron/exon boundaries of the TCOF1 gene were amplified in 28 reactions using specific primers [13]. For exons 6A, 10, 16A, 24, and 25 specific primers were self designed (Table 1).\nPrimers self-designed for TCOF1 gene amplification and sequencing\nPCR amplification was performed in a 25 μl reaction volume containing 2.5 U AmpliTaq Gold™ DNA polymerase (Applied Biosystems, Foster City, CA), 1X reaction buffer (10 mM Tris HCl pH 8.3, 50 mM KCl, 2.5 mM MgCl2), 200 μM of each deoxyribonucleoside triphosphate (dNTPs) and 0.2 mM each of primers using a PTC 100 thermocycler (MJ Research, Inc. Waltham, MA, USA).\nA 10-minute denaturation step at 94°C was followed by 30 cycles at 94°C for 30 seconds, annealing temperature was performed, for each primer, 30 seconds at 52.5-62°C, and extending for 30 sec at 72°C; the reaction was completed by a final extension for 7 minutes at 72°C.\nPCR products were purified by digestion with Antartic Phosphatase and Exonuclease I (New England BioLabs Inc.) and were sequenced in both directions using the Applied Biosystem Big Dye Terminator v3.1 Cycle sequencing kit.\nNew mutations were not found in 100 normal chromosomes by sequencing.\nGenomic DNA was obtained from peripheral blood samples using EZ1 DNA Blood 200 μl purification kit (Qiagen, GmbH, Germany). Coding regions and intron/exon boundaries of the TCOF1 gene were amplified in 28 reactions using specific primers [13]. For exons 6A, 10, 16A, 24, and 25 specific primers were self designed (Table 1).\nPrimers self-designed for TCOF1 gene amplification and sequencing\nPCR amplification was performed in a 25 μl reaction volume containing 2.5 U AmpliTaq Gold™ DNA polymerase (Applied Biosystems, Foster City, CA), 1X reaction buffer (10 mM Tris HCl pH 8.3, 50 mM KCl, 2.5 mM MgCl2), 200 μM of each deoxyribonucleoside triphosphate (dNTPs) and 0.2 mM each of primers using a PTC 100 thermocycler (MJ Research, Inc. Waltham, MA, USA).\nA 10-minute denaturation step at 94°C was followed by 30 cycles at 94°C for 30 seconds, annealing temperature was performed, for each primer, 30 seconds at 52.5-62°C, and extending for 30 sec at 72°C; the reaction was completed by a final extension for 7 minutes at 72°C.\nPCR products were purified by digestion with Antartic Phosphatase and Exonuclease I (New England BioLabs Inc.) and were sequenced in both directions using the Applied Biosystem Big Dye Terminator v3.1 Cycle sequencing kit.\nNew mutations were not found in 100 normal chromosomes by sequencing.\n Nucleotide and Aminoacid numbering All mutations were named considering the genomic reference [NT_029289] and the cDNA that corresponds to the major treacle isoform [NM_001135243.1] [14]. 
Mutation nomenclature is based on HGVS nomenclature guidelines [http://www.hgvs.org/mutnomen] [15].\nAll mutations were named considering the genomic reference [NT_029289] and the cDNA that corresponds to the major treacle isoform [NM_001135243.1] [14]. Mutation nomenclature is based on HGVS nomenclature guidelines [http://www.hgvs.org/mutnomen] [15].\n In silico tools The splice predictor software program, NNSplice version 0.9 [http://www.fruitfly.org/seq_tools/splice.html], was used for an initial approach to novel variant (c.IVS16A-30G→A) suspected of causing aberrant RNA processing of the TCOF1 gene. The ESE finder 2.0 program [http://rulai.cshl.edu/tools/ESE2/] was used to predict hypothetical splicing enhancer in mutated c.IVS16A-30G→A sequence.\nThe splice predictor software program, NNSplice version 0.9 [http://www.fruitfly.org/seq_tools/splice.html], was used for an initial approach to novel variant (c.IVS16A-30G→A) suspected of causing aberrant RNA processing of the TCOF1 gene. The ESE finder 2.0 program [http://rulai.cshl.edu/tools/ESE2/] was used to predict hypothetical splicing enhancer in mutated c.IVS16A-30G→A sequence.", "46 patients with a clinical diagnosis of TCS were recruited through several European Medical Institutes since 2002. In particular, the patients were evaluated at the University of Torino, Napoli, Rome Tor Vergata and \"La Sapienza\", University of Aquila, Ospedali Galliera of Genova, IRCCS Casa Sollievo della Sofferenza at San Giovanni Rotondo, Pediatric Department of SS. Pietro e Paolo Hospital at Borgosesia, Pediatric Department of the Bolzano Hospital, \"Gaetano Rummo\" Hospital at Benevento, Clinica Mangiagalli at Milano, S. Pietro Hospital at Rome, IRCCS \"Ass. Oasi Maria SS\" at Troina (Italy), GENDIA lab (Antwerp, Belgium), Egas Moniz Hospital (Lisbon, Portugal), Centre Hospitalier Universitaire de Rennes (France). Three patients have one parent with similar TCS features, while the other 43 cases haven't a family history. The major clinical features of TCS were recognized in all patients. After informed consent was obtained from patients or their families, blood samples were collected.", "Genomic DNA was obtained from peripheral blood samples using EZ1 DNA Blood 200 μl purification kit (Qiagen, GmbH, Germany). Coding regions and intron/exon boundaries of the TCOF1 gene were amplified in 28 reactions using specific primers [13]. For exons 6A, 10, 16A, 24, and 25 specific primers were self designed (Table 1).\nPrimers self-designed for TCOF1 gene amplification and sequencing\nPCR amplification was performed in a 25 μl reaction volume containing 2.5 U AmpliTaq Gold™ DNA polymerase (Applied Biosystems, Foster City, CA), 1X reaction buffer (10 mM Tris HCl pH 8.3, 50 mM KCl, 2.5 mM MgCl2), 200 μM of each deoxyribonucleoside triphosphate (dNTPs) and 0.2 mM each of primers using a PTC 100 thermocycler (MJ Research, Inc. Waltham, MA, USA).\nA 10-minute denaturation step at 94°C was followed by 30 cycles at 94°C for 30 seconds, annealing temperature was performed, for each primer, 30 seconds at 52.5-62°C, and extending for 30 sec at 72°C; the reaction was completed by a final extension for 7 minutes at 72°C.\nPCR products were purified by digestion with Antartic Phosphatase and Exonuclease I (New England BioLabs Inc.) 
and were sequenced in both directions using the Applied Biosystem Big Dye Terminator v3.1 Cycle sequencing kit.\nNew mutations were not found in 100 normal chromosomes by sequencing.", "All mutations were named considering the genomic reference [NT_029289] and the cDNA that corresponds to the major treacle isoform [NM_001135243.1] [14]. Mutation nomenclature is based on HGVS nomenclature guidelines [http://www.hgvs.org/mutnomen] [15].", "The splice predictor software program, NNSplice version 0.9 [http://www.fruitfly.org/seq_tools/splice.html], was used for an initial approach to novel variant (c.IVS16A-30G→A) suspected of causing aberrant RNA processing of the TCOF1 gene. The ESE finder 2.0 program [http://rulai.cshl.edu/tools/ESE2/] was used to predict hypothetical splicing enhancer in mutated c.IVS16A-30G→A sequence.", "The TCOF1 gene analysis was carried out on the 46 TCS European patients by direct sequencing of 28 PCR genomic fragments encompassing the complete coding sequence and splice sites. We detected 15 different TCOF1 mutations in 17 patients (Table 2 Figure 1), 12 new and 3 already described.\nPathogenic mutations in TCS patients\nThe cDNA major isoform (including exon 6A, but without exon 16A) was deposited in GenBank with accession number AY460334[14].\nFor the cDNA sequence, nucleotide +1 is the A of the ATG-translation initiation codon.\nA-O Chromatograms of characterized pathogenic mutations in TCS patients. The header of each picture indicates the nucleotide mutation. The arrows show the site of mutation in chromatograms. A and E chromatograms report reverse sequences; B-D and F-O chromatograms report forward sequences.\nConsidering the already known mutations, the 5-bp deletion, c.4366_4370delGAAAA, in exon 24 (Figure 1A), was the most frequent mutation observed in this study (3/17, 17%) (Patient TCS 15,16 and 17). In the same exon (patient TCS 14), we identified the already known frameshift mutation c.4359_4363delAAAAA (Figure 1B). Both the mutations identified fall within the TCOF1 mutation hot spot rich in 18 Lysine residues. Other microdeletions have already been described in this region [8,11,16-18].\nThe other known mutation is a 2 bp deletion, c.1639_1640delAG (patient TCS 4), localized in exon 10 (Figure 1C) [8,13].\nA total of 12 novel disease-causative mutations were found in 2 familial cases and in 10 sporadic TCS patients. These consist of 10 microdeletions (sized 1/15 nucleotides), one single-base duplication, and one nonsense mutation. All deletions and insertion cause a frameshift and produce a truncated protein.\nThe first reported pathogenic microdeletion was a c.519delT in exon 5 (patient TCS 2) (Figure 1D) leading to formation of a stop codon 46 aminoacids later. This alteration was not found in the patient's parents. Patient TCS 5 was found to bear a de novo truncating mutation, c.1581delG, in exon 10 (Figure 1E) that introduces a premature stop codon 69 aminoacids later.\nTwo alterations were identified in exons 15 and 16, that encode repetitive motifs. The former, c.2626_2627delGA in exon 15 (Figure 1F), was identified in patient TCS 8 and causes the formation of a stop codon two aminoacids later. The latter, c.2831delA (Figure 1G), identified in the exon 16 of patient TCS 9, causes a frameshift.\nAnother proband (TCS 12) was heterozygous for the de novo c.3700_3704delACTCT mutation in exon 22 (Figure 1H).\nThe sequence analysis of a DNA sample obtained from patient TCS 10 identified the c.3118_3119dupG mutation in exon 18 (Figure 1I). 
Analysis of the relatives indicated that this duplication was also present in the affected father and brother.\nTwo more TCOF1 variations were found in patient TCS 7. The first one was a single base sostitution, c.2924C > T, in exon 17, determining the aminoacidic change p.P975L. The other one was a 2 bp deletion, c.2285_2286delCT in exon 13 (Figure 1J), wich results in a frameshift and premature stop codon. Neither abnormality was found in 140 and 100 healty chromosomes, respectively, indicating that they were not common polymorphisms.\nAnalysis of TCS 7's normal parents DNA showed the mother was a carrier of c.2924C > T. Neither of parents showed the 2 bp deletion. Thus, it was indicated that the 2 bp deletion, disrupting protein translation, is probably the causative mutation.\nIn the TCS 11 patient, a truncating mutation of 8 bases c.3456_3463delTTCTTCAG (Figure 1K) was found in exon 20, causing a frameshift resulting in a stop codon four codons later. The same kind of mutation was found in TCS 6 patient in exon 12. A microdeletion of one single base c.1973delC (Figure 1L) with a stop codon 52 codons later.\nIn patient TCS 13 a nonsense mutation c.4231C > T (Figure 1M) was found. This pathogenetic variation occurred in exon 23B causing a stop codon (p.Q1411X). In patient TCS 1 a small deletion was found. It was a truncating mutation of the last 2 nucleotides of exon 3, c.303_304delCA (Figure 1N), causing a stop codon 73 codons later. Actually, the deletion involves the first 13 nucleotides of intron 3 (IVS3+13delGTAAGAGCCTTGC), too.\nFinally, we detected a one bp deletion c.599delG in patient TCS 3 (Figure 1O). It was a familial case, as we confirmed the mutation in proband's son and daughter. It was located in exon 6 and it caused a stop codon 19 codons later.\nIn 29 patients, with apparently TCS phenotype, no pathogenic mutations have been identified after screening of the whole coding region of the gene. However these patients present clinical features of the syndrome. All these cases are isolated.\nA large number of TCOF1 polymorphisms were detected. Thirteen of these were already published and 7 are novel (Table 3). Twelve were silent or missense variation and 8 occur in intronic regions. All novel polymorphisms were present in controls in different percentages.\nPolymorphisms in TCS patients\nThe URL for International HapMap project is http://hapmap.ncbi.nlm.nih.gov/\nPatient TCS 21 had a single base substitution c.2859-30G > A in exon 16. This substitution wasn't identified in 150 healthy chromosomes; therefore, this variation could be a novel splicing mutation in the TCOF1 gene. Computer simulation studies were performed to evaluate the role of this variation in hypothetical splicing enhancer or splice site prediction, and we deduced that the mutated sequence determines a loss of SF2/ASF and Srp55 and a gain of SRp40 binding sites, but the same splice site strength score was 0.89, similar to wild type sequence, according to ESE finder and NNSplice programs, respectively.", "In this study, we report the screening of entire TCOF1 coding region and the identification of a spectrum of 3 known mutations, 12 novel pathogenetic mutations and 7 novel polymorphisms by direct sequencing. In all familial cases, we identified the TCOF1 mutation in one parent with similar TCS features. Of the 43 analyzed sporadic cases, 14 had arisen as the result of a de novo mutations in the TCOF1 gene. The sensitivity of sequencing analysis of TCOF1 gene on our patients was 37% (17/46). 
The remaining 29 TCS patients, negative to the TCOF1 screening, have to be clinically revaluated. In this regard, we recommend a clinical revaluation using craniofacial radiographs, extremely useful in detecting zygomatic hypoplasia as a clinical feature of TCS patients [19]. On the other hand, the differential diagnosis is necessary. In fact, Nager and Miller syndrome exhibits phenotypic overlap with TCS. Moreover, mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) cause TCS, too [7]. We are considering to perform the screening of POLR1C and POLR1D genes in our negatives TCOF1 patients.\nThe TCOF1 gene mutations include missense, nonsense, small deletions and duplications. In particular, the most common classes of TCOF1 alleles are small deletions (60%) and duplications (25%) resulting in frameshifts [10]. According to literature data, 76.5% (13/17) of our characterised mutations are microdeletions and 6% (1/17) are duplications. Though it has been suggested that five exons (10, 15, 16, 23 and 24), are defined as a hot spot region of the TCOF1 gene mutations [14], a distribution of pathogenic variations along all the gene was reported by different authors. We confirmed exon 24 as hot spot of the TCOF1 gene as we described the major number of mutations in the exon (Table 2 Figure 1). The c.4366_4370delGAAAA is the most frequent mutation as we found it in three of 16 affected patients. This is probably due to the high repetition of adenines (60% of exon) making exon 24 region prones to polymerase slippage in DNA replication [14]. Moreover, the high complexity of this exon make it difficult to give the right nomenclature to the identified sequence variations. It is therefore mandatory to sequence exon 24 in both directions. Also in this study exons 10, 15, and 16 were revealed as pathogenic gene regions. Finally, three mutations were found in rarely affected sequences.\nTwo more TCOF1 variations were found in patient TCAR. The presence of two possible TCOF1 mutations in the same patient has been reported in a paper by Fujioka H et al. [20]. In this case, familiar analysis is requested to predict which is the benign and which is the pathological variation in TCS.\nAll patients, and in particular patients with typical TCS features but negative TCOF1 screening, were analyzed for the two alternatively spliced in-frame exons (6A and 16A) and no mutations were found.", "In this work, the observation of affected features, combined with a molecular analysis, is sufficient to perform a correct TCS diagnosis in 35% of cases. This is the phenotypic variability of TCS." ]
[ null, "methods", null, null, null, null, null, null, null ]
[ "Treacher Collins syndrome", "TCOF1 mutations", "microdeletions", "microinsertions" ]
Background: Treacher Collins syndrome (TCS; OMIM #154500) is an autosomal dominant disorder that affects craniofacial development during early embryogenesis [1]. TCS is characterized by bilaterally symmetric features, including downward slanting palpebral fissures and colobomata of the lower eyelids, hypoplasia of the midfacial bones, cleft palate, and abnormal development of the external/middle ear that often leads to conductive hearing loss [2-4]. TCS occurs with an incidence of 1/50,000, and more than 60% of TCS cases have no previous family history and arise as the result of de novo mutations [5]. The syndrome is caused by mutations in the TCOF1 gene (OMIM #606847), which encodes the nucleolar phosphoprotein Treacle that may serve as a link between rDNA gene transcription and pre-rRNA processing [6]. Recently, Dauwerse et al. detected mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) in Treacher Collins patients [7]. Thus far, most of the 200 disease-causing mutations described are deletions, insertions and nonsense mutations, distributed along 28 exons [8]. Two additional exons have been reported: exon 6A, included in the most common isoform, and exon 16A, included in a minor isoform [9]. The mutations observed in TCS are predominantly sporadic, and the vast majority result in the introduction of a premature termination codon that can lead to truncation of the protein or to nonsense-mediated mRNA decay [10,11]. This suggests that the developmental anomalies result from haploinsufficiency of TCOF1. Penetrance of the genetic mutations underlying TCS is thought to be very high; however, extreme inter- and intra-familial phenotypic variation is reported [12]. In the present study, we screened 46 patients with a clinical diagnosis of TCS by sequencing the entire TCOF1 coding sequence together with the splice junctions. As a result, 12 novel and 3 previously reported mutations were characterised, together with 7 novel and 13 known polymorphisms. Methods: Patients: 46 patients with a clinical diagnosis of TCS were recruited through several European medical institutes since 2002. In particular, the patients were evaluated at the University of Torino, Napoli, Rome Tor Vergata and \"La Sapienza\", University of Aquila, Ospedali Galliera of Genova, IRCCS Casa Sollievo della Sofferenza at San Giovanni Rotondo, Pediatric Department of SS. Pietro e Paolo Hospital at Borgosesia, Pediatric Department of the Bolzano Hospital, \"Gaetano Rummo\" Hospital at Benevento, Clinica Mangiagalli at Milano, S. Pietro Hospital at Rome, IRCCS \"Ass. Oasi Maria SS\" at Troina (Italy), GENDIA lab (Antwerp, Belgium), Egas Moniz Hospital (Lisbon, Portugal), and Centre Hospitalier Universitaire de Rennes (France). Three patients have one parent with similar TCS features, while the other 43 cases have no family history. The major clinical features of TCS were recognized in all patients. After informed consent was obtained from the patients or their families, blood samples were collected. Mutational analysis: Genomic DNA was obtained from peripheral blood samples using the EZ1 DNA Blood 200 μl purification kit (Qiagen, GmbH, Germany). Coding regions and intron/exon boundaries of the TCOF1 gene were amplified in 28 reactions using specific primers [13]. For exons 6A, 10, 16A, 24, and 25, specific primers were self-designed (Table 1). Primers self-designed for TCOF1 gene amplification and sequencing PCR amplification was performed in a 25 μl reaction volume containing 2.5 U AmpliTaq Gold™ DNA polymerase (Applied Biosystems, Foster City, CA), 1X reaction buffer (10 mM Tris HCl pH 8.3, 50 mM KCl, 2.5 mM MgCl2), 200 μM of each deoxyribonucleoside triphosphate (dNTP) and 0.2 mM of each primer, using a PTC 100 thermocycler (MJ Research, Inc., Waltham, MA, USA). A 10-minute denaturation step at 94°C was followed by 30 cycles of 94°C for 30 seconds, annealing for 30 seconds at 52.5-62°C depending on the primer, and extension for 30 sec at 72°C; the reaction was completed by a final extension of 7 minutes at 72°C. PCR products were purified by digestion with Antarctic Phosphatase and Exonuclease I (New England BioLabs Inc.) and were sequenced in both directions using the Applied Biosystems BigDye Terminator v3.1 Cycle Sequencing kit. The new mutations were not found in 100 normal chromosomes by sequencing. Nucleotide and Aminoacid numbering: All mutations were named considering the genomic reference [NT_029289] and the cDNA that corresponds to the major Treacle isoform [NM_001135243.1] [14]. Mutation nomenclature is based on the HGVS nomenclature guidelines [http://www.hgvs.org/mutnomen] [15]. In silico tools: The splice predictor software program NNSplice, version 0.9 [http://www.fruitfly.org/seq_tools/splice.html], was used as an initial approach to a novel variant (c.IVS16A-30G→A) suspected of causing aberrant RNA processing of the TCOF1 gene. The ESEfinder 2.0 program [http://rulai.cshl.edu/tools/ESE2/] was used to predict hypothetical splicing enhancers in the mutated c.IVS16A-30G→A sequence. Results: The TCOF1 gene analysis was carried out on the 46 TCS European patients by direct sequencing of 28 PCR genomic fragments encompassing the complete coding sequence and splice sites. We detected 15 different TCOF1 mutations in 17 patients (Table 2, Figure 1), 12 new and 3 already described. Pathogenic mutations in TCS patients The cDNA major isoform (including exon 6A, but without exon 16A) was deposited in GenBank with accession number AY460334 [14]. For the cDNA sequence, nucleotide +1 is the A of the ATG translation initiation codon. A-O Chromatograms of the characterized pathogenic mutations in TCS patients. The header of each picture indicates the nucleotide mutation. The arrows show the site of mutation in the chromatograms. A and E chromatograms report reverse sequences; B-D and F-O chromatograms report forward sequences. Considering the already known mutations, the 5-bp deletion c.4366_4370delGAAAA in exon 24 (Figure 1A) was the most frequent mutation observed in this study (3/17, 17%) (patients TCS 15, 16 and 17). In the same exon (patient TCS 14), we identified the already known frameshift mutation c.4359_4363delAAAAA (Figure 1B). Both mutations fall within the TCOF1 mutation hot spot rich in 18 lysine residues. Other microdeletions have already been described in this region [8,11,16-18]. The other known mutation is a 2 bp deletion, c.1639_1640delAG (patient TCS 4), localized in exon 10 (Figure 1C) [8,13]. A total of 12 novel disease-causing mutations were found in 2 familial cases and in 10 sporadic TCS patients. These consist of 10 microdeletions (1 to 15 nucleotides in size), one single-base duplication, and one nonsense mutation. All deletions and the insertion cause a frameshift and produce a truncated protein. The first reported pathogenic microdeletion was a c.519delT in exon 5 (patient TCS 2) (Figure 1D), leading to the formation of a stop codon 46 amino acids later. This alteration was not found in the patient's parents. Patient TCS 5 was found to bear a de novo truncating mutation, c.1581delG, in exon 10 (Figure 1E), which introduces a premature stop codon 69 amino acids later. Two alterations were identified in exons 15 and 16, which encode repetitive motifs. The former, c.2626_2627delGA in exon 15 (Figure 1F), was identified in patient TCS 8 and causes the formation of a stop codon two amino acids later. The latter, c.2831delA (Figure 1G), identified in exon 16 of patient TCS 9, causes a frameshift. Another proband (TCS 12) was heterozygous for the de novo c.3700_3704delACTCT mutation in exon 22 (Figure 1H). Sequence analysis of a DNA sample obtained from patient TCS 10 identified the c.3118_3119dupG mutation in exon 18 (Figure 1I). Analysis of the relatives indicated that this duplication was also present in the affected father and brother. Two more TCOF1 variations were found in patient TCS 7. The first was a single base substitution, c.2924C > T, in exon 17, determining the amino acid change p.P975L. The other was a 2 bp deletion, c.2285_2286delCT, in exon 13 (Figure 1J), which results in a frameshift and a premature stop codon. Neither abnormality was found in 140 and 100 healthy chromosomes, respectively, indicating that they were not common polymorphisms. Analysis of the DNA of TCS 7's unaffected parents showed that the mother was a carrier of c.2924C > T. Neither parent carried the 2 bp deletion. Thus, the 2 bp deletion, which disrupts protein translation, is probably the causative mutation. In patient TCS 11, a truncating deletion of 8 bases, c.3456_3463delTTCTTCAG (Figure 1K), was found in exon 20, causing a frameshift that results in a stop codon four codons later. The same kind of mutation was found in patient TCS 6 in exon 12: a microdeletion of a single base, c.1973delC (Figure 1L), with a stop codon 52 codons later. In patient TCS 13, a nonsense mutation, c.4231C > T (Figure 1M), was found. This pathogenic variation occurred in exon 23B, causing a stop codon (p.Q1411X). In patient TCS 1, a small deletion was found: a truncating mutation of the last 2 nucleotides of exon 3, c.303_304delCA (Figure 1N), causing a stop codon 73 codons later. In fact, the deletion also involves the first 13 nucleotides of intron 3 (IVS3+13delGTAAGAGCCTTGC). Finally, we detected a one bp deletion, c.599delG, in patient TCS 3 (Figure 1O). This was a familial case, as we confirmed the mutation in the proband's son and daughter. It was located in exon 6 and caused a stop codon 19 codons later. In 29 patients with an apparently typical TCS phenotype, no pathogenic mutations were identified after screening of the whole coding region of the gene, although these patients present clinical features of the syndrome. All of these cases are isolated. A large number of TCOF1 polymorphisms were detected. Thirteen of these had already been published and 7 are novel (Table 3). Twelve were silent or missense variations and 8 occur in intronic regions. All novel polymorphisms were present in controls at different frequencies. Polymorphisms in TCS patients The URL for the International HapMap project is http://hapmap.ncbi.nlm.nih.gov/ Patient TCS 21 had a single base substitution, c.2859-30G > A, in exon 16. This substitution was not identified in 150 healthy chromosomes; therefore, this variation could be a novel splicing mutation in the TCOF1 gene. Computer simulation studies were performed to evaluate the role of this variation in hypothetical splicing enhancer or splice site prediction. We deduced that the mutated sequence determines a loss of SF2/ASF and SRp55 binding sites and a gain of an SRp40 binding site, while the splice site strength score (0.89) remained similar to that of the wild-type sequence, according to the ESEfinder and NNSplice programs, respectively. Discussion: In this study, we report the screening of the entire TCOF1 coding region and the identification of a spectrum of 3 known mutations, 12 novel pathogenic mutations and 7 novel polymorphisms by direct sequencing. In all familial cases, we identified the TCOF1 mutation in one parent with similar TCS features. Of the 43 analyzed sporadic cases, 14 had arisen as the result of a de novo mutation in the TCOF1 gene. The sensitivity of sequencing analysis of the TCOF1 gene in our patients was 37% (17/46). The remaining 29 TCS patients, negative on TCOF1 screening, have to be clinically re-evaluated. In this regard, we recommend a clinical re-evaluation using craniofacial radiographs, which are extremely useful in detecting zygomatic hypoplasia as a clinical feature of TCS patients [19]. On the other hand, differential diagnosis is necessary: Nager and Miller syndromes exhibit phenotypic overlap with TCS. Moreover, mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) also cause TCS [7]. We are considering screening the POLR1C and POLR1D genes in our TCOF1-negative patients. TCOF1 gene mutations include missense and nonsense mutations, small deletions and duplications. In particular, the most common classes of TCOF1 alleles are small deletions (60%) and duplications (25%) resulting in frameshifts [10]. Consistent with the literature, 76.5% (13/17) of our characterised mutations are microdeletions and 6% (1/17) are duplications. Although five exons (10, 15, 16, 23 and 24) have been suggested to define a hot spot region for TCOF1 mutations [14], a distribution of pathogenic variations along the whole gene has been reported by different authors. We confirmed exon 24 as a hot spot of the TCOF1 gene, as we described the largest number of mutations in this exon (Table 2, Figure 1). The c.4366_4370delGAAAA is the most frequent mutation, as we found it in three of 16 affected patients. This is probably due to the high repetition of adenines (60% of the exon), which makes the exon 24 region prone to polymerase slippage during DNA replication [14]. Moreover, the high complexity of this exon makes it difficult to assign the correct nomenclature to the identified sequence variations. It is therefore mandatory to sequence exon 24 in both directions. In this study, exons 10, 15 and 16 were also revealed to be pathogenic gene regions. Finally, three mutations were found in rarely affected sequences. Two more TCOF1 variations were found in patient TCAR. The presence of two possible TCOF1 mutations in the same patient has been reported in a paper by Fujioka H et al. [20]. In such cases, familial analysis is required to determine which variation is benign and which is pathogenic in TCS. All patients, and in particular patients with typical TCS features but negative TCOF1 screening, were analyzed for the two alternatively spliced in-frame exons (6A and 16A), and no mutations were found. Conclusion: In this work, the observation of the affected features, combined with molecular analysis, was sufficient to establish a correct TCS diagnosis in 35% of cases; this reflects the phenotypic variability of TCS.
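The mutation names used throughout this record follow HGVS coding-DNA notation (e.g. c.4366_4370delGAAAA, c.3118_3119dupG, c.4231C > T). As an aside not drawn from the original study, the sketch below shows one minimal way such simple descriptors can be decomposed into mutation type and coordinates; the function name and regular expressions are illustrative assumptions, and production work should rely on a dedicated, validated HGVS parser.

```python
import re

# Minimal, illustrative parser for simple HGVS coding-DNA descriptors
# (substitutions, deletions, duplications) of the kind cited above.
# Assumption: plain "c." descriptors only; intronic offsets such as
# c.2859-30G>A and protein-level names (p.Q1411X) are out of scope here.
PATTERNS = [
    ("substitution", re.compile(r"^c\.(?P<start>\d+)(?P<ref>[ACGT])>(?P<alt>[ACGT])$")),
    ("deletion",     re.compile(r"^c\.(?P<start>\d+)(?:_(?P<end>\d+))?del(?P<seq>[ACGT]*)$")),
    ("duplication",  re.compile(r"^c\.(?P<start>\d+)(?:_(?P<end>\d+))?dup(?P<seq>[ACGT]*)$")),
]

def parse_hgvs(descriptor: str) -> dict:
    """Return mutation type, 1-based coordinates and length, or raise ValueError."""
    descriptor = descriptor.replace(" ", "")  # tolerate the "c.4231C > T" spacing
    for kind, pattern in PATTERNS:
        match = pattern.match(descriptor)
        if match:
            start = int(match.group("start"))
            end = int(match.group("end")) if match.groupdict().get("end") else start
            return {"type": kind, "start": start, "end": end, "length": end - start + 1}
    raise ValueError(f"unsupported descriptor: {descriptor}")

# Descriptors taken from the record:
print(parse_hgvs("c.4366_4370delGAAAA"))  # {'type': 'deletion', 'start': 4366, 'end': 4370, 'length': 5}
print(parse_hgvs("c.4231C > T"))          # {'type': 'substitution', 'start': 4231, 'end': 4231, 'length': 1}
```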
Background: Treacher Collins syndrome (TCS) is one of the most severe autosomal dominant congenital disorders of craniofacial development and shows variable phenotypic expression. TCS is extremely rare, occurring with an incidence of 1 in 50,000 live births. The distinguishing characteristics of TCS are downslanting palpebral fissures, coloboma of the eyelid, micrognathia, microtia and other deformities of the ears, hypoplastic zygomatic arches, and macrostomia. Conductive hearing loss and cleft palate are often present. TCS results from mutations in the TCOF1 gene located on chromosome 5, which encodes a serine/alanine-rich nucleolar phospho-protein called Treacle. However, alterations in the TCOF1 gene have been implicated in only 81-93% of TCS cases. Methods: In this study, the entire coding region of the TCOF1 gene, including the newly described exons 6A and 16A, was sequenced in 46 unrelated subjects with a clinical suspicion of TCS. Results: Fifteen mutations were reported in 14 sporadic patients and 3 familial cases, including twelve novel mutations and three already described. Moreover, seven novel polymorphisms were also described. Most of the characterised mutations were microdeletions spanning one or more nucleotides, in addition to an insertion of one nucleotide in exon 18 and a stop mutation. The deletions and the insertion described cause a premature termination of translation, resulting in a truncated protein. Conclusions: This study confirms that almost all TCOF1 pathogenic mutations fall in the coding region and lead to an aberrant protein.
Background: Treacher Collins syndrome (TCS; OMIM #154500) is an autosomal dominant disorder that affects craniofacial development during early embryogenesis [1]. TCS is characterized by bilaterally symmetric features, including downward slanting palpebral fissures and colobomata of the lower eyelids, hypoplasia of the midfacial bones, cleft palate, and abnormal development of the external/middle ear that often leads to conductive hearing loss [2-4]. TCS occurs with an incidence of 1/50,000, and more than 60% of TCS cases have no previous family history and arise as the result of de novo mutations [5]. The syndrome is caused by mutations in the TCOF1 gene (OMIM #606847), which encodes the nucleolar phosphoprotein Treacle that may serve as a link between rDNA gene transcription and pre-rRNA processing [6]. Recently, Dauwerse et al. detected mutations in genes encoding subunits of RNA polymerases I and III (POLR1C and POLR1D) in Treacher Collins patients [7]. Thus far, most of the 200 disease-causing mutations described are deletions, insertions and nonsense mutations, distributed along 28 exons [8]. Two additional exons have been reported: exon 6A, included in the most common isoform, and exon 16A, included in a minor isoform [9]. The mutations observed in TCS are predominantly sporadic, and the vast majority result in the introduction of a premature termination codon that can lead to truncation of the protein or to nonsense-mediated mRNA decay [10,11]. This suggests that the developmental anomalies result from haploinsufficiency of TCOF1. Penetrance of the genetic mutations underlying TCS is thought to be very high; however, extreme inter- and intra-familial phenotypic variation is reported [12]. In the present study, we screened 46 patients with a clinical diagnosis of TCS by sequencing the entire TCOF1 coding sequence together with the splice junctions. As a result, 12 novel and 3 previously reported mutations were characterised, together with 7 novel and 13 known polymorphisms. Conclusion: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2350/12/125/prepub
Background: Treacher Collins syndrome (TCS) is one of the most severe autosomal dominant congenital disorders of craniofacial development and shows variable phenotypic expression. TCS is extremely rare, occurring with an incidence of 1 in 50,000 live births. The distinguishing characteristics of TCS are downslanting palpebral fissures, coloboma of the eyelid, micrognathia, microtia and other deformities of the ears, hypoplastic zygomatic arches, and macrostomia. Conductive hearing loss and cleft palate are often present. TCS results from mutations in the TCOF1 gene located on chromosome 5, which encodes a serine/alanine-rich nucleolar phospho-protein called Treacle. However, alterations in the TCOF1 gene have been implicated in only 81-93% of TCS cases. Methods: In this study, the entire coding region of the TCOF1 gene, including the newly described exons 6A and 16A, was sequenced in 46 unrelated subjects with a clinical suspicion of TCS. Results: Fifteen mutations were reported in 14 sporadic patients and 3 familial cases, including twelve novel mutations and three already described. Moreover, seven novel polymorphisms were also described. Most of the characterised mutations were microdeletions spanning one or more nucleotides, in addition to an insertion of one nucleotide in exon 18 and a stop mutation. The deletions and the insertion described cause a premature termination of translation, resulting in a truncated protein. Conclusions: This study confirms that almost all TCOF1 pathogenic mutations fall in the coding region and lead to an aberrant protein.
3,879
282
[ 374, 188, 279, 43, 58, 1127, 573, 38 ]
9
[ "tcs", "patients", "tcof1", "mutations", "exon", "mutation", "gene", "patient", "10", "figure" ]
[ "possible tcof1 mutations", "collins syndrome tcs", "tcs mutations genes", "treacher collins syndrome", "novo mutations tcof1" ]
null
[CONTENT] Treacher Collins syndrome | TCOF1 mutations | microdeletions | microinsertions [SUMMARY]
[CONTENT] Treacher Collins syndrome | TCOF1 mutations | microdeletions | microinsertions [SUMMARY]
null
[CONTENT] Treacher Collins syndrome | TCOF1 mutations | microdeletions | microinsertions [SUMMARY]
[CONTENT] Treacher Collins syndrome | TCOF1 mutations | microdeletions | microinsertions [SUMMARY]
[CONTENT] Treacher Collins syndrome | TCOF1 mutations | microdeletions | microinsertions [SUMMARY]
[CONTENT] Base Sequence | Codon, Nonsense | DNA Mutational Analysis | DNA Primers | Europe | Exons | Female | Frameshift Mutation | Humans | Male | Mandibulofacial Dysostosis | Mutagenesis, Insertional | Mutation | Nuclear Proteins | Phosphoproteins | Polymorphism, Single Nucleotide | Sequence Deletion [SUMMARY]
[CONTENT] Base Sequence | Codon, Nonsense | DNA Mutational Analysis | DNA Primers | Europe | Exons | Female | Frameshift Mutation | Humans | Male | Mandibulofacial Dysostosis | Mutagenesis, Insertional | Mutation | Nuclear Proteins | Phosphoproteins | Polymorphism, Single Nucleotide | Sequence Deletion [SUMMARY]
null
[CONTENT] Base Sequence | Codon, Nonsense | DNA Mutational Analysis | DNA Primers | Europe | Exons | Female | Frameshift Mutation | Humans | Male | Mandibulofacial Dysostosis | Mutagenesis, Insertional | Mutation | Nuclear Proteins | Phosphoproteins | Polymorphism, Single Nucleotide | Sequence Deletion [SUMMARY]
[CONTENT] Base Sequence | Codon, Nonsense | DNA Mutational Analysis | DNA Primers | Europe | Exons | Female | Frameshift Mutation | Humans | Male | Mandibulofacial Dysostosis | Mutagenesis, Insertional | Mutation | Nuclear Proteins | Phosphoproteins | Polymorphism, Single Nucleotide | Sequence Deletion [SUMMARY]
[CONTENT] Base Sequence | Codon, Nonsense | DNA Mutational Analysis | DNA Primers | Europe | Exons | Female | Frameshift Mutation | Humans | Male | Mandibulofacial Dysostosis | Mutagenesis, Insertional | Mutation | Nuclear Proteins | Phosphoproteins | Polymorphism, Single Nucleotide | Sequence Deletion [SUMMARY]
[CONTENT] possible tcof1 mutations | collins syndrome tcs | tcs mutations genes | treacher collins syndrome | novo mutations tcof1 [SUMMARY]
[CONTENT] possible tcof1 mutations | collins syndrome tcs | tcs mutations genes | treacher collins syndrome | novo mutations tcof1 [SUMMARY]
null
[CONTENT] possible tcof1 mutations | collins syndrome tcs | tcs mutations genes | treacher collins syndrome | novo mutations tcof1 [SUMMARY]
[CONTENT] possible tcof1 mutations | collins syndrome tcs | tcs mutations genes | treacher collins syndrome | novo mutations tcof1 [SUMMARY]
[CONTENT] possible tcof1 mutations | collins syndrome tcs | tcs mutations genes | treacher collins syndrome | novo mutations tcof1 [SUMMARY]
[CONTENT] tcs | patients | tcof1 | mutations | exon | mutation | gene | patient | 10 | figure [SUMMARY]
[CONTENT] tcs | patients | tcof1 | mutations | exon | mutation | gene | patient | 10 | figure [SUMMARY]
null
[CONTENT] tcs | patients | tcof1 | mutations | exon | mutation | gene | patient | 10 | figure [SUMMARY]
[CONTENT] tcs | patients | tcof1 | mutations | exon | mutation | gene | patient | 10 | figure [SUMMARY]
[CONTENT] tcs | patients | tcof1 | mutations | exon | mutation | gene | patient | 10 | figure [SUMMARY]
[CONTENT] tcs | mutations | result | reported | included | omim | treacher | treacher collins | development | collins [SUMMARY]
[CONTENT] hospital | mm | primers | 30 | patients | reaction | blood | http | dna | 10 [SUMMARY]
null
[CONTENT] tcs | perform correct tcs diagnosis | features combined molecular | correct tcs diagnosis 35 | analysis sufficient perform correct | analysis sufficient perform | analysis sufficient | correct tcs diagnosis | correct tcs | correct [SUMMARY]
[CONTENT] tcs | patients | mutations | tcof1 | hospital | exon | mutation | gene | http | patient [SUMMARY]
[CONTENT] tcs | patients | mutations | tcof1 | hospital | exon | mutation | gene | http | patient [SUMMARY]
[CONTENT] Treacher Collins | TCS ||| TCS | 1 | 50.000 ||| TCS | micrognathia ||| ||| TCS | TCOF1 | 5 | Treacle ||| TCOF1 | only 81-93% | TCS [SUMMARY]
[CONTENT] TCOF1 | 16A | 46 | TCS [SUMMARY]
null
[CONTENT] TCOF1 [SUMMARY]
[CONTENT] Treacher Collins | TCS ||| TCS | 1 | 50.000 ||| TCS | micrognathia ||| ||| TCS | TCOF1 | 5 | Treacle ||| TCOF1 | only 81-93% | TCS ||| TCOF1 | 16A | 46 | TCS ||| ||| Fifteen | twelve | three | 14 | 3 ||| seven ||| one | one | 18 ||| ||| TCOF1 [SUMMARY]
[CONTENT] Treacher Collins | TCS ||| TCS | 1 | 50.000 ||| TCS | micrognathia ||| ||| TCS | TCOF1 | 5 | Treacle ||| TCOF1 | only 81-93% | TCS ||| TCOF1 | 16A | 46 | TCS ||| ||| Fifteen | twelve | three | 14 | 3 ||| seven ||| one | one | 18 ||| ||| TCOF1 [SUMMARY]
The effect of smoking on the duration of life with and without disability, Belgium 1997-2011.
25026981
Smoking is the single most important health threat, yet there is no consistency as to whether non-smokers experience a compression of years lived with disability compared to (ex-)smokers. The objectives of the manuscript are (1) to assess the effect of smoking on the average years lived without disability (Disability Free Life Expectancy, DFLE) and with disability (Disability Life Expectancy, DLE) and (2) to estimate the extent to which these effects are due to better survival or reduced disability in never smokers.
BACKGROUND
Data on disability and mortality were provided by the Belgian Health Interview Surveys 1997 and 2001 and a 10-year mortality follow-up of the survey participants. Disability was defined as difficulties in activities of daily living (ADL), in mobility, in continence or in sensory (vision, hearing) functions. Poisson and multinomial logistic regression models were fitted to estimate the probabilities of death and the prevalence of disability by age, gender and smoking status, adjusted for socioeconomic position. The Sullivan method was used to estimate DFLE and DLE at age 30. The contribution of mortality and of disability to smoking-related differences in DFLE and DLE was assessed using decomposition methods.
METHODS
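The Sullivan method named in the methods abstract combines period life-table person-years with age-specific disability prevalence: the years lived in each age interval are split into years with and without disability using the observed prevalence. The sketch below is a generic illustration of that calculation, not the authors' code; the age groups, mortality rates and prevalences are made-up placeholder values, not survey estimates.

```python
# Illustrative Sullivan-method computation of LE, DFLE and DLE at age 30.
# Placeholder inputs per 5-year age group from 30-34 upwards (last group open-ended):
# age-specific mortality rates and proportions with disability.
N = 5  # width of each closed age interval, in years
mortality  = [0.001, 0.002, 0.004, 0.007, 0.012, 0.020, 0.035, 0.060, 0.100, 0.170, 0.280]
prevalence = [0.03, 0.04, 0.06, 0.08, 0.11, 0.15, 0.21, 0.29, 0.40, 0.55, 0.70]

def sullivan(mortality, prevalence, radix=100_000):
    lx, person_years = radix, []
    for m in mortality[:-1]:
        qx = N * m / (1 + N / 2 * m)                # death probability in the interval
        deaths = lx * qx
        person_years.append(N * (lx - deaths / 2))  # Lx, deaths assumed mid-interval
        lx -= deaths
    person_years.append(lx / mortality[-1])         # open-ended final age group
    le = sum(person_years) / radix
    dle = sum(L * p for L, p in zip(person_years, prevalence)) / radix
    return le, le - dle, dle                        # LE, DFLE, DLE

le, dfle, dle = sullivan(mortality, prevalence)
print(f"LE={le:.1f}y  DFLE={dfle:.1f}y  DLE={dle:.1f}y")
```

Running this separately for never, ex- and current smokers and differencing the resulting DFLE/DLE values is, in essence, what the decomposition step then attributes to mortality versus disability differences.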
Compared to never smokers, ex-smokers have a shorter life expectancy (LE) and DFLE, but the number of years lived with disability is somewhat larger. For both sexes, the higher disability prevalence is the main contributing factor to the difference in DFLE and DLE. Smokers have a shorter LE, DFLE and DLE compared to never smokers. Both higher mortality and higher disability prevalence contribute to the difference in DFLE, but mortality is more important among males. Although both male and female smokers experience higher disability prevalence, their higher mortality outweighs their disability disadvantage, resulting in a shorter DLE.
RESULTS
Smoking kills and shortens both life without and life with disability. Smoking-related disability cannot, however, be ignored, given its contribution to the excess years with disability, especially in younger age groups.
CONCLUSION
[ "Activities of Daily Living", "Adult", "Aged", "Aged, 80 and over", "Belgium", "Disabled Persons", "Female", "Health Surveys", "Humans", "Life Expectancy", "Logistic Models", "Male", "Middle Aged", "Prevalence", "Smoking" ]
4223416
Background
Smoking is without doubt the single most important global cause of premature mortality. The current death toll from direct and second-hand tobacco smoking in adults aged 30 years and over is estimated to be well over 5.5 million globally each year [1]. While at present the highest proportions of deaths attributable to tobacco are in America and Europe, the largest proportion of tobacco-related deaths in the coming decades is expected to occur in medium- and low-income countries [2]. Smokers may lose up to one decade of life expectancy [3,4]. However, prolonged cessation, when started early enough, reduces the risk of mortality associated with smoking by 90% or more [3-5], and hence greater mortality benefits are observed among early quitters [6]. Implementation of evidence-based tobacco control measures, such as smoke-free air laws or taxation, contributes to the avoidance of substantial numbers of premature deaths [7]. Smoking has also been associated with the incidence of chronic diseases, especially several cancers, cardiovascular diseases, and lung disease [8-10], and with the incidence of disability and poor health-related quality of life [11,12]. Although non-smoking is related to a longer life and a longer healthier life, there is no agreement in the literature on whether smoking cessation also leads to fewer years with morbidity. Some publications suggest that smoking reduces both the duration of life free of diseases and disability and the duration of life with them, so that in the end never smokers live the same or even more years in ill-health [8,13-17]. Other authors report that smokers have to endure, in their shorter life, more years and a greater proportion of their life with disability [18-20]. The first group of manuscripts suggests the need to consider a trade-off between a longer life and a longer life in ill-health [21], while the latter studies support the compression of morbidity theory, which can be reached through primordial and primary prevention [22]. For public health policy, it is important to better understand this discrepancy in the current literature and to better assess health gains or losses in relation to smoking-reduction interventions, specifically: “Is the gap in duration of life in total and with or without disability, between never smokers and ex- or current smokers, due to differences in mortality and/or due to differences in disability?”. The objectives of the current manuscript are therefore (1) to determine the effect of smoking on the duration of life with and without disability and (2) to estimate the contribution of the higher mortality and higher disability associated with smoking to the difference in the years lived with and without disability between smoking groups.
Methods
Data To calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status two sources of data are required. First, information is needed about the mortality by smoking status. This information was extracted from the mortality follow-up of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001) participants. The surveys were carried out by Statistics Belgium and exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until date of death, date of emigration or until respectively 31/12/2007 and 31/12/2010. Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed by an invitation letter with leaflet and by the interviewer that the participation to the survey is voluntary and that after given an oral consent they can stop the interview anytime or can skip a question if they felt they should not answer a particular question. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participant through a self-administered questionnaire. To calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status two sources of data are required. First, information is needed about the mortality by smoking status. This information was extracted from the mortality follow-up of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001) participants. The surveys were carried out by Statistics Belgium and exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until date of death, date of emigration or until respectively 31/12/2007 and 31/12/2010. Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. 
Measures

Disability
The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability along four dimensions: difficulty with any one of six Activities of Daily Living (ADL) - transferring in and out of bed, transferring in and out of a chair, dressing, washing hands and face, feeding, and going to the toilet; difficulty with mobility; continence problems; or limitations in sensory (vision, hearing) function. Based on the severity recorded on these dimensions, a variable with three categories was constructed: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale classifies the respondent as not disabled; (2) when the score was below 100, the disability questions were put to the respondent, who was then classified as described in Table 1. In this manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.

Table 1: Definition of disability by severity
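To make the two-step screening concrete, the R sketch below mimics the SF-36 filter and the severity classification. The filter matches the text; the severity cut-offs, however, are placeholders only, since the actual rules are those of Table 1.

    ## Illustrative sketch; severity rules below are hypothetical stand-ins
    ## for Table 1, not the authors' coding.
    classify_disability <- function(age, sf36_func, n_adl_difficulties,
                                    mobility_difficulty, incontinence,
                                    sensory_limitation) {
      ## Below age 60 the SF-36 functional scale acts as a filter:
      ## a score of 100 means "no disability" and the items are skipped.
      if (age < 60 && sf36_func == 100) return("none")
      ## Hypothetical severity rules standing in for Table 1:
      if (n_adl_difficulties >= 2 || incontinence) return("severe")
      if (n_adl_difficulties == 1 || mobility_difficulty ||
          sensory_limitation) return("mild")
      "none"
    }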
Smoking
A four-category variable was used: never smokers, ex-smokers, light smokers (fewer than 20 cigarettes per day) and heavy smokers (20 cigarettes or more per day).

Socio-economic position
Educational attainment was coded according to the International Standard Classification of Education (ISCED 2011) and was based on the highest level of education reached by the household's reference person or his/her partner: lower education (ISCED 0–1), lower secondary education (ISCED 2), higher secondary education (ISCED 3) and higher education (ISCED 4–8) [26].
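Both codings are simple recodes; a minimal R sketch, with assumed input variable names, could look as follows.

    ## Minimal sketch of the two recodes (input names are assumptions):
    smoking_cat <- function(ever_smoked, smokes_now, cigs_per_day) {
      if (!ever_smoked) return("never smoker")
      if (!smokes_now)  return("ex-smoker")
      if (cigs_per_day < 20) "light smoker" else "heavy smoker"
    }

    educ_cat <- function(isced) {           # collapse ISCED 2011 levels 0-8
      cut(isced, breaks = c(-1, 1, 2, 3, 8),
          labels = c("lower", "lower secondary",
                     "higher secondary", "higher"))
    }
    educ_cat(c(0, 2, 3, 6))   # lower, lower secondary, higher secondary, higher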
Statistical methods

Mortality and disability
For each subject, the person-years at risk of dying were estimated up to the date of death or the end of the follow-up period. To account for ageing during follow-up, we used Lexis expansions of the original data with 1-year age bands [27]: each subject's observed follow-up time was split into periods corresponding to different current-age (or attained-age) groups, so that each subject contributes several observations, one per 1-year age band. As disability, mortality and smoking are all associated with age and education [16,28,29], we first estimated mortality rates and disability prevalences by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status, adjusted for socio-economic position. The Lexis expansion and the regression analyses were performed in Stata 10.0, accounting for the complex sampling design of the HIS.
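The paper performed the expansion and regressions in Stata; the R sketch below reproduces the idea with a manual split into 1-year current-age bands followed by a Poisson model with a log person-years offset. Survey weights and design effects are omitted, and all variable names ('d', 'age_in', 'age_out', etc.) are assumptions.

    ## Each follow-up interval [age_in, age_out) is split into 1-year
    ## current-age bands, each with its person-years at risk and a death
    ## indicator for the band in which follow-up ends.
    lexis_expand <- function(id, age_in, age_out, died) {
      bands <- floor(age_in):(ceiling(age_out) - 1)
      data.frame(id     = id,
                 band   = bands,
                 pyears = pmin(bands + 1, age_out) - pmax(bands, age_in),
                 death  = as.integer(died & bands == max(bands)))
    }

    ## 'd' is an assumed data frame with one row per HIS participant
    long <- do.call(rbind, Map(lexis_expand, d$id, d$age_in, d$age_out, d$died))
    long <- merge(long, d[, c("id", "smoking", "education")], by = "id")

    ## Age- and education-adjusted mortality by smoking status
    ## (no survey weights or design adjustment in this sketch)
    fit <- glm(death ~ factor(band) + education + smoking,
               offset = log(pyears), family = poisson, data = long)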
Life table analysis
The age-specific mortality rates were used to estimate LE by gender and smoking category. DFLE and DLE at age 30 (last open age group: 85 years and over) and partial DFLE and DLE in the age window 30–80 years (DFLE30–80 and DLE30–80) were calculated by gender and smoking category using the Sullivan method, which integrates the age-specific disability prevalence into the life table [30,31].
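A minimal R implementation of the Sullivan calculation, assuming single-year mortality rates (mx) and disability prevalences (pix) for one gender/smoking group from age 30 up to the open age group 85+, might look as follows. This is a sketch, not the EHLEIS program used by the authors.

    sullivan <- function(mx, pix, ages = 30:85) {
      n  <- length(ages)
      qx <- mx / (1 + 0.5 * mx)               # death rate -> death probability
      lx <- cumprod(c(1, 1 - qx[-n]))         # survivors at exact ages 30, 31, ...
      Lx <- c((lx[-n] + lx[-1]) / 2,          # person-years in closed intervals
              lx[n] / mx[n])                  # person-years in open group 85+
      list(LE   = sum(Lx),                    # life expectancy at age 30
           DFLE = sum(Lx * (1 - pix)),        # expected years free of disability
           DLE  = sum(Lx * pix))              # expected years with disability
    }

The partial expectancies DFLE30–80 and DLE30–80 follow by restricting the sums to the closed age intervals below 80.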
The second component, the disability effect, represents the differences in the person-years lived with or without disability due to differences in the prevalence of disability by smoking status. Whereas differences in LE only reflect differences in mortality rates, differences in DFLE and DLE are a result of differences in age-specific mortality rates (mortality effect) and differences in the age-specific prevalence of disability (disability effect). Calculations were done using a R 2.14.2 program developed in the framework of the EHLEIS project [34] and a copy of the R program is available from W. Nusselder ([email protected]). For the decomposition, including the variance estimation, the analysis by smoking intensity was only possible for the partial DFLE30–80 and DLE30–80 as there were few very old heavy smoking females.
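The method of refs [32,33] partitions the differences symmetrically; the sketch below shows a simpler, order-dependent variant that conveys the idea, reusing the sullivan() function sketched above. The disability effect is what changes when smoker disability prevalence is combined with never-smoker mortality, and the mortality effect is the remainder; this is an illustration, not the authors' exact algorithm.

    decompose_dfle <- function(mx_never, pix_never, mx_smoke, pix_smoke) {
      nn <- sullivan(mx_never, pix_never)   # never smokers as observed
      ss <- sullivan(mx_smoke, pix_smoke)   # smokers as observed
      ns <- sullivan(mx_never, pix_smoke)   # counterfactual combination
      c(total      = ss$DFLE - nn$DFLE,
        disability = ns$DFLE - nn$DFLE,
        mortality  = ss$DFLE - ns$DFLE)
    }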
Results
Both the prevalence of disability and the mortality rate are higher in ex-smokers and in light and heavy smokers than in never smokers (Tables 2 and 3). As expected, mortality rates increase with the intensity of smoking, but the relationship between the prevalence of disability and the intensity of smoking is not as strong, especially for severe disability. In males, the age- and education-adjusted prevalence ratio (a-PR) for disability is 1.17 in ex-smokers, 1.27 in light smokers and 1.34 in heavy smokers, whilst in females the a-PR is 1.15 in ex-smokers, 1.22 in light smokers and 1.24 in heavy smokers. The prevalence of severe disability is lower, although not statistically significantly so, in heavy smokers (a-PR = 0.83 in males; 0.66 in females). The age- and education-adjusted mortality rate ratios for ex-, light and heavy smokers are respectively 1.50, 1.95 and 2.77 for males and 1.09, 1.41 and 2.67 for females.

Table 2: Weighted age- and education-adjusted (severe) disability prevalence (in %) and prevalence ratio by smoking status for those aged 30+, Health Interview Survey 1997 and 2001, Belgium (*: 95% confidence interval).

Table 3: Weighted age- and education-adjusted mortality rate per 100 000 person-years and mortality rate ratio by smoking status for those aged 30+, Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval).

At age 30, and compared to never smokers, ex-smokers have a shorter LE and a somewhat shorter DFLE, but their DLE is about one third of a year longer (Table 4). Smokers have a shorter LE, DFLE and DLE than never smokers. DFLE as a proportion of LE is 74.8% in male never smokers compared to 72.7% in ex-smokers and smokers, and 65.8% in female never smokers compared to 63.6% in ex-smokers and 64.0% in smokers. Both ex-smokers and smokers are estimated to live fewer years with severe disability (DLE_S). Table 5 presents the differences in DFLE, DLE, DLE_S and LE at age 30 between ex-smokers, smokers and never smokers. A negative value indicates fewer years lived compared to never smokers. Each estimated difference is divided into a part due to differential age-specific mortality (mortality effect) and a part resulting from a differential age-specific prevalence of disability (disability effect). Thus, compared to male never smokers, LE for male smokers is 7.87 years shorter, this difference being attributable entirely to the mortality disadvantage of male smokers. DFLE is 6.80 years shorter for male smokers, a difference resulting from differences in both the age-specific mortality rate and the age-specific disability prevalence: the mortality effect accounts for 3.67 years or 54% of the difference, while the remaining 3.13 years are due to the higher disability prevalence among smokers. Because of their disability disadvantage, smokers would be expected to live 3.13 more years with disability, but their higher mortality cancels out this disability effect, resulting in a DLE 1.07 years shorter than that of never smokers (-1.07 years = -4.21 years (mortality effect) + 3.13 years (disability effect)). In both males and females, the impact of the higher mortality among smokers on the DLE outweighs the disability effect, so that smokers live fewer years with disability. This is not the case for the DLE of ex-smokers, where the disability effect is larger than the mortality effect, resulting in a DLE about one third of a year longer.
Due to a larger mortality effect, both male and female ex-smokers and smokers have a shorter DLE_S, although the difference is significant only for male smokers.

Table 4: Disability Free Life Expectancy (DFLE30), (Severe) Disability Life Expectancy (DLE(_S)30), Life Expectancy (LE30) and the percentage of remaining life without disability (% DFLE/LE30) at age 30 by smoking status, Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval).

Table 5: Decomposition of the differences between ex- and current smokers and never smokers in Disability Free Life Expectancy (DFLE30), (Severe) Disability Life Expectancy (DLE(_S)30) and Life Expectancy (LE30) at age 30, by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval).

Figure 1 presents the decomposition by age of (1) the differences in DFLE and DLE between never smokers and (ex-)smokers and (2) the mortality and disability components of these differences. The disability effect is the most important contributor to the shorter DFLE among ex-smokers up to the age of 84 years (Figure 1a-b), and up to ages 64 and 74 years for male and female smokers respectively (Figure 1c-d). For the difference in DLE between never smokers and (ex-)smokers, the disability effect is outweighed by the mortality effect only at older ages: 70+ and 75+ years for male and female ex-smokers respectively, and 65+ years for both male and female smokers. For ex-smokers, the largest share (67%) of the disability effect on the DLE difference is concentrated before age 70, while for male and female smokers the share of the disability effect before age 70 is 78% and 73% respectively (Figure 1e-h). At young ages, the importance of the disability disadvantage for the longer DLE in ex-smokers, smokers, light smokers and heavy smokers is further shown by the decomposition of the difference in the partial DLE in the age window 30 to 80 years (DLE30–80) (Table 6, Figure 2). Within this age window, every smoking category experiences more years with disability than never smokers, as the disability effect cancels out the mortality effect. For example, the difference in DLE30–80 between male ex-smokers and never smokers is 1.22 years (1.22 years (95% CI: -0.04; 2.62) = -0.45 years (mortality effect) + 1.67 years (disability effect)). The difference for smokers is 1.27 years (95% CI: -0.13; 2.57). We observe a larger difference among male light smokers (1.45 years (95% CI: -0.02; 2.90)) than among heavy smokers (0.82 years (95% CI: -1.15; 3.01)), suggesting a larger contribution of the mortality effect for heavy smokers even before age 80. The differences in DLE30–80 between female ex-smokers, smokers, light and heavy smokers and never smokers are respectively 1.62 years (95% CI: 0.26; 2.88), 1.83 years (95% CI: 0.13; 3.35), 1.80 years (95% CI: -0.09; 3.78) and 1.80 years (95% CI: -0.90; 4.90). Restricting the analysis to severe disability, the mortality effect by far outweighs any disability effect and is the most important contributor to the shorter DLE_S30–80 in every age group. None of the differences in DLE_S30–80 between the smoking categories and never smokers is statistically significant (Figure 2).
Figure 1: Decomposition by age of the differences between ex- and current smokers and never smokers in Disability Free Life Expectancy (DFLE30) and Disability Life Expectancy (DLE30) at age 30, by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. Legend: Panels a-d: DFLE (a: male ex-smokers; b: female ex-smokers; c: male smokers; d: female smokers). Panels e-h: DLE (e: male ex-smokers; f: female ex-smokers; g: male smokers; h: female smokers). Black bar: difference in DFLE or DLE from never smokers. Green bar: mortality effect. Red bar: disability effect. E.g. the black bar in panel a is DFLE among male ex-smokers minus DFLE among male never smokers; the black bar in panel h is DLE among female smokers minus DLE among female never smokers.

Table 6: Disability Free Life Expectancy (DFLE30–80), (Severe) Disability Life Expectancy (DLE(_S)30–80), Life Expectancy (LE30–80) and the percentage of remaining life without disability (% DFLE/LE30–80) between ages 30 and 80 by smoking status, Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval).

Figure 2: Decomposition of the differences between ex- and current smokers and never smokers in Disability Free Life Expectancy (DFLE30–80) and (Severe) Disability Life Expectancy (DLE(_S)30–80) between ages 30 and 80, by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. Legend: black dot: difference in DFLE, DLE or DLE_S from never smokers, with 95% CI. Green triangle: mortality effect, with 95% CI. Red x: disability effect, with 95% CI.
Conclusion
We were able to evaluate the contributions of the excess mortality and of the disabling impact of tobacco exposure to population health. Smoking kills: it shortens life both without and with disability, mainly through its associated excess mortality. However, the excess disability associated with smoking cannot be ignored, given its contribution to substantially more years lived with disability before age 80. The important population health message remains: smoking is a major health hazard. Policy on smoking should strive for a smoke-free society through primordial prevention (reducing smoking initiation) and primary prevention (smoking cessation), to increase LE and DFLE. Further, given the absence of compression of disability among never smokers compared with smokers, this study highlights the need for policy makers to monitor not only DFLE (e.g. the European Union 2020 health goal of increasing the healthy and active life years of the European population by 2 years [46]) but also DLE, as reducing health risks and increasing DFLE may not automatically lead to a simultaneous reduction, or even a status quo, of the DLE.
[ "Background", "Data", "Measures", "\n\nDisability\n\n", "Smoking", "Socio-economic position", "Statistical methods", "\n\nMortality and disability\n\n", "Life table analysis", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Smoking is without doubt the single most important global cause of premature mortality. The current death toll from direct and second hand tobacco smoking in adults 30 years and over is estimated to be globally well over 5.5 million each year [1]. While at present the highest proportion of deaths attributable to tobacco are in America and Europe, the largest proportions of tobacco-related deaths in the coming decades is expected to occur in medium and low income countries [2]. Smokers may lose up to one decade of life expectancy [3,4]. However, prolonged cessation, when started early enough, reduces the risk of mortality associated with smoking by 90% or more [3-5] and hence greater mortality benefits are observed among early quitters [6]. Implementation of evidence-based tobacco control measures, such as smoke-free air laws or taxation, contribute to the avoidance of substantial numbers of premature deaths [7]. Smoking has also been associated with the incidence of chronic diseases, especially several cancers, cardiovascular diseases, and lung disease [8-10], and with the incidence of disability and poor health-related quality of life [11,12].\nAlthough non-smoking is related to a longer life and a longer healthier life, there is no agreement in the literature on whether smoking cessation also leads to fewer years with morbidity. Some publications suggest that smoking reduces both the duration of life free of and with diseases and disability so that in the end, never smokers live the same or even more years in ill-health [8,13-17]. Other authors report that smokers have to endure in their shorter life more years and a greater proportion of their life with disability [18-20]. The first group of manuscripts suggests the need to consider a trade-off between a longer life and a longer life in ill-health [21], while the latter studies support the compression of morbidity theory that can be reached through primordial and primary prevention [22]. For public health policy, it is important to better understand this discrepancy in current literature and to better assess health gains or losses in relation to smoke reducing interventions, specifically: “Is the gap in duration of life in total and with or without disability, between never smokers and ex- or current smokers, due to differences in mortality and/or due to differences in disability?”.\nThe objectives of the current manuscript are therefore (1) to determine the effect of smoking on the duration of life with and without disability and (2) to estimate the contribution of the higher mortality and higher disability associated with smoking to the difference in the years lived with and without disability between smoking groups.", "To calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status two sources of data are required. First, information is needed about the mortality by smoking status. This information was extracted from the mortality follow-up of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001) participants. The surveys were carried out by Statistics Belgium and exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until date of death, date of emigration or until respectively 31/12/2007 and 31/12/2010. 
Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed by an invitation letter with leaflet and by the interviewer that the participation to the survey is voluntary and that after given an oral consent they can stop the interview anytime or can skip a question if they felt they should not answer a particular question. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participant through a self-administered questionnaire.", " \n\nDisability\n\n The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.\nDefinition of disability by severity\nThe Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. 
In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.\nDefinition of disability by severity", "The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.\nDefinition of disability by severity", "A four-category variable was used: never smokers, ex-smokers, light smokers (less than 20 cigarette per day) and heavy smokers (20 cigarettes or more per day).", "Educational attainment was coded according to the International Standard Classification of Education (ISCED 2011) and was based on the highest level of education reached by the households’ reference person or his/her partner: lower education (ISCED 0–1), lower secondary education (ISCED 2), higher secondary education (ISCED 3) and higher education (ISCED 4–8) [26].", " \n\nMortality and disability\n\n For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS.\nFor each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. 
As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS.", "For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS.", "The age specific mortality rates were used to estimate LE by gender and smoking category. DFLE and DLE at age 30 (last open age group: 85 years and plus) and partial DFLE and DLE in the age window 30–80 years (DFLE30–80 and DLE30–80) were calculated by gender and smoking category using the Sullivan method which integrates the age-specific disability prevalence into the life table [30,31]. To estimate the contribution of mortality and disability to the differences in DFLE and DLE between smoking groups, a decomposition method was used [32,33]. Differences in total life expectancy (LE), DFLE and DLE between never smokers and other smoking categories (ex-smokers, smokers, light and heavy smokers) were divided in two parts. The first component, the mortality effect, represents the differences in the expected years lived with and without disability due to a differential mortality experience between never smokers and the other smoking categories. The second component, the disability effect, represents the differences in the person-years lived with or without disability due to differences in the prevalence of disability by smoking status. Whereas differences in LE only reflect differences in mortality rates, differences in DFLE and DLE are a result of differences in age-specific mortality rates (mortality effect) and differences in the age-specific prevalence of disability (disability effect). Calculations were done using a R 2.14.2 program developed in the framework of the EHLEIS project [34] and a copy of the R program is available from W. Nusselder ([email protected]). 
Abbreviations
ADL: Activities of daily living; a-PR: Adjusted prevalence ratio; CI: Confidence interval; DFLE: Disability free life expectancy; DLE: Disability life expectancy; DLE_S: Severe disability life expectancy; HIS: Health Interview Survey; ISCED: International Standard Classification of Education; LE: Life expectancy.

Competing interests
None of the authors has any financial or non-financial competing interests to declare.

Authors' contributions
HVO worked out the concept and design of the study, performed part of the statistical analysis, participated in the interpretation, and drafted the manuscript. NB performed part of the statistical analysis and participated in the interpretation and the drafting of the manuscript. WN, CJ, EC and JMR participated in the concept development and design of the study, the interpretation, and the revision of the manuscript. RC participated in the statistical analysis, the interpretation and the revision of the manuscript. SD participated in the interpretation and the revision of the manuscript. All authors read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/14/723/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data", "Measures", "\n\nDisability\n\n", "Smoking", "Socio-economic position", "Statistical methods", "\n\nMortality and disability\n\n", "Life table analysis", "Results", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Smoking is without doubt the single most important global cause of premature mortality. The current death toll from direct and second hand tobacco smoking in adults 30 years and over is estimated to be globally well over 5.5 million each year [1]. While at present the highest proportion of deaths attributable to tobacco are in America and Europe, the largest proportions of tobacco-related deaths in the coming decades is expected to occur in medium and low income countries [2]. Smokers may lose up to one decade of life expectancy [3,4]. However, prolonged cessation, when started early enough, reduces the risk of mortality associated with smoking by 90% or more [3-5] and hence greater mortality benefits are observed among early quitters [6]. Implementation of evidence-based tobacco control measures, such as smoke-free air laws or taxation, contribute to the avoidance of substantial numbers of premature deaths [7]. Smoking has also been associated with the incidence of chronic diseases, especially several cancers, cardiovascular diseases, and lung disease [8-10], and with the incidence of disability and poor health-related quality of life [11,12].\nAlthough non-smoking is related to a longer life and a longer healthier life, there is no agreement in the literature on whether smoking cessation also leads to fewer years with morbidity. Some publications suggest that smoking reduces both the duration of life free of and with diseases and disability so that in the end, never smokers live the same or even more years in ill-health [8,13-17]. Other authors report that smokers have to endure in their shorter life more years and a greater proportion of their life with disability [18-20]. The first group of manuscripts suggests the need to consider a trade-off between a longer life and a longer life in ill-health [21], while the latter studies support the compression of morbidity theory that can be reached through primordial and primary prevention [22]. For public health policy, it is important to better understand this discrepancy in current literature and to better assess health gains or losses in relation to smoke reducing interventions, specifically: “Is the gap in duration of life in total and with or without disability, between never smokers and ex- or current smokers, due to differences in mortality and/or due to differences in disability?”.\nThe objectives of the current manuscript are therefore (1) to determine the effect of smoking on the duration of life with and without disability and (2) to estimate the contribution of the higher mortality and higher disability associated with smoking to the difference in the years lived with and without disability between smoking groups.", " Data To calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status two sources of data are required. First, information is needed about the mortality by smoking status. This information was extracted from the mortality follow-up of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001) participants. The surveys were carried out by Statistics Belgium and exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until date of death, date of emigration or until respectively 31/12/2007 and 31/12/2010. 
Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed by an invitation letter with leaflet and by the interviewer that the participation to the survey is voluntary and that after given an oral consent they can stop the interview anytime or can skip a question if they felt they should not answer a particular question. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participant through a self-administered questionnaire.\nTo calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status two sources of data are required. First, information is needed about the mortality by smoking status. This information was extracted from the mortality follow-up of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001) participants. The surveys were carried out by Statistics Belgium and exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until date of death, date of emigration or until respectively 31/12/2007 and 31/12/2010. Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed by an invitation letter with leaflet and by the interviewer that the participation to the survey is voluntary and that after given an oral consent they can stop the interview anytime or can skip a question if they felt they should not answer a particular question. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. 
Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participant through a self-administered questionnaire.\n Measures \n\nDisability\n\n The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.\nDefinition of disability by severity\nThe Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.\nDefinition of disability by severity\n \n\nDisability\n\n The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). 
For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.\nDefinition of disability by severity\nThe Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only.\nDefinition of disability by severity\n Smoking A four-category variable was used: never smokers, ex-smokers, light smokers (less than 20 cigarette per day) and heavy smokers (20 cigarettes or more per day).\nA four-category variable was used: never smokers, ex-smokers, light smokers (less than 20 cigarette per day) and heavy smokers (20 cigarettes or more per day).\n Socio-economic position Educational attainment was coded according to the International Standard Classification of Education (ISCED 2011) and was based on the highest level of education reached by the households’ reference person or his/her partner: lower education (ISCED 0–1), lower secondary education (ISCED 2), higher secondary education (ISCED 3) and higher education (ISCED 4–8) [26].\nEducational attainment was coded according to the International Standard Classification of Education (ISCED 2011) and was based on the highest level of education reached by the households’ reference person or his/her partner: lower education (ISCED 0–1), lower secondary education (ISCED 2), higher secondary education (ISCED 3) and higher education (ISCED 4–8) [26].\n Statistical methods \n\nMortality and disability\n\n For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. 
As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS.\nFor each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS.\n \n\nMortality and disability\n\n For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS.\nFor each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. 
Life table analysis

The age-specific mortality rates were used to estimate LE by gender and smoking category. DFLE and DLE at age 30 (last open age group: 85 years and over) and partial DFLE and DLE in the age window 30–80 years (DFLE30–80 and DLE30–80) were calculated by gender and smoking category using the Sullivan method, which integrates the age-specific disability prevalence into the life table [30,31].
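The Sullivan computation itself is compact. The following sketch uses made-up rates for five abridged age groups rather than the single-year life table of the paper, but it shows how the age-specific disability prevalence is folded into the person-years column, and how a partial expectancy over an age window is obtained.

# Illustrative Sullivan life table with made-up inputs for five
# abridged age groups (the paper used single-year groups, open at 85+).
age <- c(30, 40, 50, 60, 70)                    # group lower bounds
n   <- c(10, 10, 10, 10, Inf)                   # widths; last is open
mx  <- c(0.002, 0.004, 0.009, 0.020, 0.080)     # mortality rates
pix <- c(0.05, 0.08, 0.14, 0.25, 0.45)          # disability prevalence

qx <- ifelse(is.infinite(n), 1, n * mx / (1 + n / 2 * mx))  # death prob.
lx <- cumprod(c(1, 1 - qx))[seq_along(age)]                 # survivors
dx <- lx * qx                                               # deaths
Lx <- ifelse(is.infinite(n),
             lx / mx,                        # open-ended final group
             n * (lx - dx) + n / 2 * dx)     # person-years lived

LE   <- sum(Lx) / lx[1]               # total life expectancy at 30
DFLE <- sum(Lx * (1 - pix)) / lx[1]   # Sullivan: weight Lx by 1 - prevalence
DLE  <- sum(Lx * pix) / lx[1]         # expected years with disability
# Partial expectancy (here 30-70): restrict the sums to the closed groups.
DFLE_partial <- sum((Lx * (1 - pix))[is.finite(n)]) / lx[1]
c(LE = LE, DFLE = DFLE, DLE = DLE, DFLE_partial = DFLE_partial)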
To estimate the contribution of mortality and disability to the differences in DFLE and DLE between smoking groups, a decomposition method was used [32,33]. Differences in total life expectancy (LE), DFLE and DLE between never smokers and the other smoking categories (ex-smokers, smokers, light and heavy smokers) were divided into two parts. The first component, the mortality effect, represents the differences in the expected years lived with and without disability due to the differential mortality experience of never smokers and the other smoking categories. The second component, the disability effect, represents the differences in the person-years lived with or without disability due to differences in the prevalence of disability by smoking status. Whereas differences in LE reflect only differences in mortality rates, differences in DFLE and DLE result from both differences in age-specific mortality rates (the mortality effect) and differences in the age-specific prevalence of disability (the disability effect). Calculations were done using an R 2.14.2 program developed in the framework of the EHLEIS project [34]; a copy of the R program is available from W. Nusselder (w.nusselder@erasmusmc.nl). For the decomposition, including the variance estimation, the analysis by smoking intensity was only possible for the partial DFLE30–80 and DLE30–80, as there were few very old heavy-smoking females.
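The decomposition of refs [32,33] is more involved than can be reproduced here, but its additive logic, namely that the total gap in DFLE or DLE splits exactly into a mortality and a disability component, can be illustrated with a simple replacement scheme: recompute the Sullivan expectancies while swapping one set of inputs at a time between two groups. This is a simplified stand-in with made-up numbers, not the EHLEIS program mentioned above.

# Simplified replacement decomposition (illustration only; the paper
# used the method of refs [32,33] as implemented in the EHLEIS R code).
sullivan <- function(mx, pix, n) {
  qx <- ifelse(is.infinite(n), 1, n * mx / (1 + n / 2 * mx))
  lx <- cumprod(c(1, 1 - qx))[seq_along(mx)]
  dx <- lx * qx
  Lx <- ifelse(is.infinite(n), lx / mx, n * (lx - dx) + n / 2 * dx)
  c(DFLE = sum(Lx * (1 - pix)) / lx[1],
    DLE  = sum(Lx * pix) / lx[1])
}

n    <- c(10, 10, 10, 10, Inf)
mx1  <- c(0.002, 0.004, 0.009, 0.020, 0.080)   # never smokers (made up)
pix1 <- c(0.05, 0.08, 0.14, 0.25, 0.45)
mx2  <- c(0.003, 0.007, 0.016, 0.036, 0.120)   # smokers (made up)
pix2 <- c(0.07, 0.11, 0.19, 0.32, 0.52)

total <- sullivan(mx2, pix2, n) - sullivan(mx1, pix1, n)
# Disability effect: vary the prevalence inputs while holding mortality
# fixed at the average of both groups; the mortality effect is the rest,
# so the two components add up exactly to the total gap.
dis_eff  <- sullivan((mx1 + mx2) / 2, pix2, n) -
            sullivan((mx1 + mx2) / 2, pix1, n)
mort_eff <- total - dis_eff
rbind(total, mort_eff, dis_eff)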
Data

To calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status, two sources of data are required. First, information is needed about mortality by smoking status. This information was extracted from the mortality follow-up of the participants of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001). The surveys were carried out by Statistics Belgium and were exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until the date of death, the date of emigration, or 31/12/2007 and 31/12/2010 respectively. Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed, by an invitation letter with a leaflet and by the interviewer, that participation in the survey is voluntary and that, after giving oral consent, they could stop the interview at any time or skip any question they felt they should not answer. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participants through a self-administered questionnaire.
Results

Both the prevalence of disability and the mortality rate are higher in ex-smokers and in light and heavy smokers than in never smokers (Tables 2 and 3). As expected, mortality rates increase with the intensity of smoking, but the relationship between the prevalence of disability and the intensity of smoking is not as strong, especially for severe disability. In males, the age- and education-adjusted prevalence ratio (a-PR) for disability is 1.17 in ex-smokers, 1.27 in light smokers and 1.34 in heavy smokers, whilst in females the a-PR is 1.15 in ex-smokers, 1.22 in light smokers and 1.24 in heavy smokers. The prevalence of severe disability is lower, although not statistically significantly so, in heavy smokers (a-PR = 0.83 in males and 0.66 in females). The age- and education-adjusted mortality rate ratios for ex-, light and heavy smokers are respectively 1.50, 1.95 and 2.77 for males and 1.09, 1.41 and 2.67 for females.

Table 2. Weighted age- and education-adjusted (severe) disability prevalence (in %) and prevalence ratio by smoking status for those aged 30+, Health Interview Survey 1997 and 2001, Belgium. *: 95% Confidence Interval.

Table 3. Weighted age- and education-adjusted mortality rate per 100 000 person-years and mortality rate ratio by smoking status for those aged 30+, Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. *: 95% Confidence Interval.

At age 30, and compared to never smokers, ex-smokers have a shorter LE and a somewhat shorter DFLE, but their DLE is about one third of a year longer (Table 4). Smokers have a shorter LE, DFLE and DLE than never smokers. DFLE as a proportion of LE is 74.8% in male never smokers compared to 72.7% in ex-smokers and smokers, and 65.8% in female never smokers compared to 63.6% in ex-smokers and 64.0% in smokers. Both ex-smokers and smokers are estimated to live fewer years with severe disability (DLE_S). Table 5 presents the differences in DFLE, DLE, DLE_S and LE at age 30 between ex-smokers, smokers and never smokers; a negative value indicates fewer years lived compared to never smokers.
Each estimated difference is divided into a part due to differential age-specific mortality (the mortality effect) and a part due to a differential age-specific prevalence of disability (the disability effect). Thus, compared to male never smokers, LE for male smokers is 7.87 years shorter, a difference attributable only to the mortality disadvantage of male smokers. Male smokers have a DFLE that is 6.80 years shorter, a difference resulting from differences in both the age-specific mortality rate and the age-specific disability prevalence: the mortality effect accounts for 3.67 years, or 54% of the difference, while the remaining 3.13 years are due to the higher disability prevalence among smokers. Owing to their disability disadvantage, smokers would be expected to live 3.13 more years with disability, but their higher mortality cancels this disability effect out, resulting in a DLE 1.07 years shorter than that of never smokers (-1.07 years = -4.21 years (mortality effect) + 3.13 years (disability effect)). In both males and females, the impact of the higher mortality among smokers on DLE outweighs the disability effect, so that smokers live fewer years with disability. This is not the case for the DLE of ex-smokers, where the disability effect is larger than the mortality effect, resulting in a DLE about one third of a year longer. Due to a larger mortality effect, both male and female ex-smokers and smokers have a shorter DLE_S, although the difference is only significant for male smokers.

Table 4. Disability Free Life Expectancy (DFLE30), (Severe) Disability Life Expectancy (DLE(_S)30), Life Expectancy (LE30) and the % of remaining life without disability (% DFLE/LE30) at age 30 by smoking status, Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. *: 95% confidence interval.

Table 5. Decomposition of the differences between ex- and current smokers and never smokers in Disability Free Life Expectancy (DFLE30), (Severe) Disability Life Expectancy (DLE(_S)30) and Life Expectancy (LE30) at age 30 by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. *: 95% confidence interval.

Figure 1 presents the decomposition by age of (1) the differences in DFLE and DLE between never smokers and (ex-)smokers and (2) the mortality and disability components of these differences. The disability effect is the most important contributor to the shorter DFLE among ex-smokers up to the age of 84 years (Figure 1a-b), and up to ages 64 and 74 years for male and female smokers respectively (Figure 1c-d). For the difference in DLE between never smokers and (ex-)smokers, the disability effect is outweighed by the mortality effect only at the older ages: 70+ and 75+ years for male and female ex-smokers respectively, and 65+ years for both male and female smokers. For ex-smokers, the largest proportion (67%) of the disability effect on the DLE difference is concentrated before age 70 years, while for male and female smokers the proportion of the disability effect before age 70 years is 78% and 73% respectively (Figure 1e-h).
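As a quick plausibility check, the additivity of the two components can be verified directly against the figures quoted above for the DLE of male smokers:

# Reported decomposition of the DLE gap for male smokers vs never smokers:
mortality_effect  <- -4.21   # years
disability_effect <-  3.13   # years
mortality_effect + disability_effect
# -1.08, matching the reported -1.07 years up to rounding of the inputs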
At young ages, the importance of the disability disadvantage to the longer DLE of ex-smokers, smokers, light smokers and heavy smokers is further shown by the decomposition of the difference in the partial DLE in the age window 30 to 80 years (DLE30–80) (Table 6, Figure 2). Within this age window, every smoking category experiences more years with disability than never smokers, as the disability effect cancels out the mortality effect. For example, the difference in DLE30–80 between male ex-smokers and never smokers is 1.22 years (95% CI: -0.04; 2.62), i.e. -0.45 years (mortality effect) + 1.67 years (disability effect). The difference for smokers is 1.27 years (95% CI: -0.13; 2.57). We observe a larger difference among light male smokers (1.45 years; 95% CI: -0.02; 2.90) than among heavy smokers (0.82 years; 95% CI: -1.15; 3.01), suggesting a larger contribution of the mortality effect for heavy smokers even before age 80. The differences in DLE30–80 among female ex-smokers, smokers, light and heavy smokers compared to never smokers are respectively 1.62 years (95% CI: 0.26; 2.88), 1.83 years (95% CI: 0.13; 3.35), 1.80 years (95% CI: -0.09; 3.78) and 1.80 years (95% CI: -0.90; 4.90). Restricting the analysis to severe disability, the mortality effect by far outweighs any disability effect and is the most important contributor to the shorter DLE_S30–80 in every age group. None of the differences in DLE_S30–80 between the smoking categories and never smokers is statistically significant (Figure 2).

Figure 1. Decomposition by age of the differences between ex- and current smokers and never smokers in Disability Free Life Expectancy (DFLE30) and Disability Life Expectancy (DLE30) at age 30, by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. Panels a-d: DFLE (a: male ex-smokers; b: female ex-smokers; c: male smokers; d: female smokers). Panels e-h: DLE (e: male ex-smokers; f: female ex-smokers; g: male smokers; h: female smokers). Black bar: difference in DFLE or DLE versus never smokers. Green bar: mortality effect. Red bar: disability effect. E.g. the black bar in panel a is the DFLE of male ex-smokers minus the DFLE of male never smokers; the black bar in panel h is the DLE of female smokers minus the DLE of female never smokers.

Table 6. Disability Free Life Expectancy (DFLE30–80), (Severe) Disability Life Expectancy (DLE(_S)30–80), Life Expectancy (LE30–80) and the % of remaining life without disability (% DFLE/LE30–80) between ages 30 and 80 by smoking status, Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. *: 95% confidence interval.

Figure 2. Decomposition of the differences between ex- and current smokers and never smokers in Disability Free Life Expectancy (DFLE30–80) and (Severe) Disability Life Expectancy (DLE(_S)30–80) between ages 30 and 80 by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 with follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. Black dot: difference in DFLE, DLE or DLE_S versus never smokers, with 95% CI. Green triangle: mortality effect with 95% CI.
Red x: disability effect with 95% CI.

Discussion

The study confirms that smoking kills, but also shows that smoking increases the years lived with disability before age 80, while at older ages the excess mortality of smokers hides their disability disadvantage. In other words, through the excess premature mortality of smokers, their DLE is shorter than that of never smokers. When the intensity of smoking is high, the excess mortality hides the disability disadvantage in DLE even before age 80. Our study also shows that ex-smokers have a shorter DFLE and a longer DLE. The disability disadvantage of ex-smokers is the main contributor to their shorter DFLE and longer DLE compared to never smokers, even though mortality rates for ex-smokers may approach those of never smokers. At older ages, as for smokers, the excess mortality offsets the disability disadvantage, but this occurs at an older age than for smokers.

On the one hand, the observations support the expansion hypothesis: in the end, smokers may live fewer years with disability because of their strong excess mortality. On the other hand, ex-smokers and smokers have to endure more years with disability before age 80. These two seemingly opposing observations result from the fact that the difference in disability prevalence is expressed mainly at the younger ages, while the smoking-related mortality disadvantage is strongest at older ages and reduces the years to be lived both with and without disability. Moreover, the interaction between excess mortality and excess disability is further a function of gender, smoking intensity and the severity of the disability. The disability effect is expressed somewhat more strongly in women, for whom the lower premature excess mortality reduces DLE less than for men. The female–male difference in the mortality and disability impact of smoking may be a contributing factor in the gender health-survival paradox [35]. Our study also suggests a significantly shorter LE free of severe disability (DFLE_S) for male heavy smokers compared to never smokers.

Overall, our study supports the statement that smoking is associated with mortality more than with disability, and that through excess mortality the years of life with disability are compressed compared to never smokers [15-17]. However, the findings also partly corroborate previous reports [18-20] suggesting that smoking has an important and distinct impact on disability, resulting in more years with disability at younger ages for ex-smokers and smokers. Other studies on the effects of smoking have also reported an increased incidence of disability, a lower (physical) health-related quality of life and an elevated use of health care services [11,12,36-39].

Our analysis has several strengths. We were able to use a single data set containing smoking, disability and mortality data. For the mortality follow-up of the survey, fewer than 3% of the participants could not be linked to the National Register. Our decomposition analysis allowed the differences in DFLE and DLE to be divided into the part due to the excess mortality of (ex-)smokers and the part due to their excess disability, as well as showing how these varied by age group. This helped explain the controversy that the longer LE of non-smokers compared to (ex-)smokers translates into more years with disability. To obtain further insight, we evaluated in which age groups the mortality effect or the disability effect was more substantial.
To our knowledge, this paper is the first to show that the excess disability associated with smoking contributes to more years with disability at younger ages.

Limitations of the study relate to the cross-sectional design providing the smoking and disability data. For example, current smokers at any age after 30 years may be considered lifelong smokers, as the likelihood of smoking initiation after age 30 is small. If we ignore unsuccessful quit attempts, the category "current smoker" is probably a less heterogeneous population than the category of ex-smokers, for whom no information is used on the age at which they stopped smoking, the time since quitting, or their reasons for stopping: health benefits are larger in early quitters, while former smokers who quit recently tend to have more health problems [6,36]. Further, we cannot attribute the lower prevalence of disability (which led, among never smokers, to more healthy years and to a reduction in the time spent with disability before age 80) to either a lower disability incidence or a higher recovery rate, since this is beyond the decomposition method using Sullivan-based estimates [33]. The main assumption of a stationary population, required to minimise the bias of the Sullivan method relative to the multistate life table method using transition probabilities, may hold, as changes in smoking behaviour do not lead to sudden changes in either mortality or disability incidence [40]. It is difficult to identify to what extent the method used to estimate the years lived with and without disability contributes to the lack of agreement on the compression of disability as a function of smoking elimination. Some authors include both transitions to disability and recovery [14,16,20] in the multistate method; others do not [17]. Studies further differ in the definition of disability, the definition of the smoking categories, and the age at which DFLE and LE are estimated. The paper by Nusselder et al. [20] is the only one using the multistate method, including both disability incidence and recovery transitions, that provides evidence for a compression of years with disability related to smoking elimination both at age 30 and at age 70. The same conclusion was reached by Bronnum-Hansen et al. using the Sullivan method [19]. Other studies using a multistate approach report that smoking reduces both the duration of life with and without disability [14,16,17].

Secondly, low survey participation may bias the results [41]. We have shown in prior studies that participation is differentially linked to health status and socioeconomic position [42,43]. Charafeddine et al. [44] compared Belgian census-based DFLE by social position with survey-based estimates and found that, although there was no statistical difference, the differences in LE and DFLE should be acknowledged. Low-educated survey participants tended to be less healthy (i.e. to have a lower LE and lower DFLE) than their counterparts in the general population, while the inverse was observed in the highest educational groups. The same author also reported evidence supporting the hypothesis that educational attainment does not substantially influence the association between smoking and mortality [28]. We therefore hypothesize that any selection bias in the differences in DFLE or DLE by smoking status is most likely related to the survey-based disability prevalence and not to the mortality.
If present, such bias would be expected to overestimate the gap and the disability effect in the smoking-related differences in DFLE and DLE.

Other limitations relate to the validity of the survey data. The validity of self-reported smoking can be questioned, although a number of studies have found it to be high [45]. In any case, we expect that any misclassification of smoking status would result in an underestimation of the reported differences. A final important limitation relates to the delay in coding causes of death in Belgium: we were not able to estimate the contribution of specific diseases to the differences in DFLE and DLE by smoking status. This limits the interpretation of the role of specific diseases in the balance between the smoking-related excess mortality and the smoking-related excess disability.

Conclusions

We were able to evaluate the contribution of the excess mortality versus the disabling impact of tobacco exposure on population health. Smoking kills and shortens life both without and with disability, mainly through its associated excess mortality. However, the excess disability associated with smoking cannot be ignored, given its contribution to substantially more years with disability before age 80.

The important population health message remains: smoking is a major health hazard. Policy on smoking should strive for a smoke-free society, through primordial prevention or reduction of smoking initiation and through primary prevention or smoking cessation, to increase LE and DFLE. Further, given the lack of compression of disability for never smokers compared to smokers, this study highlights the need for policy makers to monitor not only DFLE (e.g. the European Union 2020 health goal to increase the healthy and active ageing of the European population by 2 years [46]) but also DLE, as a reduction in health risks and an increase in DFLE may not automatically result in a simultaneous reduction of, or status quo in, DLE.

Abbreviations

ADL: Activities of daily living; a-PR: Adjusted prevalence ratio; CI: Confidence interval; DFLE: Disability free life expectancy; DLE: Disability life expectancy; DLE_S: Severe disability life expectancy; HIS: Health interview survey; ISCED: International Standard Classification of Education; LE: Life expectancy.

Competing interests

None of the authors has any financial or non-financial competing interests to declare.

Authors' contributions

HVO worked out the concept and design of the study, performed part of the statistical analysis, participated in the interpretation and drafted the manuscript. NB performed part of the statistical analysis and participated in the interpretation and in the drafting of the manuscript. WN, CJ, EC and JMR participated in the concept development and the design of the study, the interpretation and the revision of the manuscript. RC participated in the statistical analysis, the interpretation and the revision of the manuscript. SD participated in the interpretation and the revision of the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/14/723/prepub
[ "Disability free life expectancy", "Disability life expectancy", "Life expectancy", "Health expectancy", "Disability", "Mortality", "Smoking", "Decomposition", "Belgium" ]
Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed by an invitation letter with leaflet and by the interviewer that the participation to the survey is voluntary and that after given an oral consent they can stop the interview anytime or can skip a question if they felt they should not answer a particular question. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participant through a self-administered questionnaire. To calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status two sources of data are required. First, information is needed about the mortality by smoking status. This information was extracted from the mortality follow-up of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001) participants. The surveys were carried out by Statistics Belgium and exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until date of death, date of emigration or until respectively 31/12/2007 and 31/12/2010. Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed by an invitation letter with leaflet and by the interviewer that the participation to the survey is voluntary and that after given an oral consent they can stop the interview anytime or can skip a question if they felt they should not answer a particular question. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participant through a self-administered questionnaire. 
Measures Disability The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only. Definition of disability by severity The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only. Definition of disability by severity Disability The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only. 
Definition of disability by severity The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only. Definition of disability by severity Smoking A four-category variable was used: never smokers, ex-smokers, light smokers (less than 20 cigarette per day) and heavy smokers (20 cigarettes or more per day). A four-category variable was used: never smokers, ex-smokers, light smokers (less than 20 cigarette per day) and heavy smokers (20 cigarettes or more per day). Socio-economic position Educational attainment was coded according to the International Standard Classification of Education (ISCED 2011) and was based on the highest level of education reached by the households’ reference person or his/her partner: lower education (ISCED 0–1), lower secondary education (ISCED 2), higher secondary education (ISCED 3) and higher education (ISCED 4–8) [26]. Educational attainment was coded according to the International Standard Classification of Education (ISCED 2011) and was based on the highest level of education reached by the households’ reference person or his/her partner: lower education (ISCED 0–1), lower secondary education (ISCED 2), higher secondary education (ISCED 3) and higher education (ISCED 4–8) [26]. Statistical methods Mortality and disability For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS. 
For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS. Mortality and disability For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS. For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for the age changes during follow-up time, we used Lexis expansions of the original data with 1 year age-bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups. Therefore, each subject’s person-years of observation were split into several observations by expanding data by 1-year age bands. As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence rates by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. Lexis expansion and regression analysis were performed using Stata 10.0. The analysis accounted for the complex sampling design of the HIS. Life table analysis The age specific mortality rates were used to estimate LE by gender and smoking category. 
DFLE and DLE at age 30 (last open age group: 85 years and plus) and partial DFLE and DLE in the age window 30–80 years (DFLE30–80 and DLE30–80) were calculated by gender and smoking category using the Sullivan method which integrates the age-specific disability prevalence into the life table [30,31]. To estimate the contribution of mortality and disability to the differences in DFLE and DLE between smoking groups, a decomposition method was used [32,33]. Differences in total life expectancy (LE), DFLE and DLE between never smokers and other smoking categories (ex-smokers, smokers, light and heavy smokers) were divided in two parts. The first component, the mortality effect, represents the differences in the expected years lived with and without disability due to a differential mortality experience between never smokers and the other smoking categories. The second component, the disability effect, represents the differences in the person-years lived with or without disability due to differences in the prevalence of disability by smoking status. Whereas differences in LE only reflect differences in mortality rates, differences in DFLE and DLE are a result of differences in age-specific mortality rates (mortality effect) and differences in the age-specific prevalence of disability (disability effect). Calculations were done using a R 2.14.2 program developed in the framework of the EHLEIS project [34] and a copy of the R program is available from W. Nusselder ([email protected]). For the decomposition, including the variance estimation, the analysis by smoking intensity was only possible for the partial DFLE30–80 and DLE30–80 as there were few very old heavy smoking females. The age specific mortality rates were used to estimate LE by gender and smoking category. DFLE and DLE at age 30 (last open age group: 85 years and plus) and partial DFLE and DLE in the age window 30–80 years (DFLE30–80 and DLE30–80) were calculated by gender and smoking category using the Sullivan method which integrates the age-specific disability prevalence into the life table [30,31]. To estimate the contribution of mortality and disability to the differences in DFLE and DLE between smoking groups, a decomposition method was used [32,33]. Differences in total life expectancy (LE), DFLE and DLE between never smokers and other smoking categories (ex-smokers, smokers, light and heavy smokers) were divided in two parts. The first component, the mortality effect, represents the differences in the expected years lived with and without disability due to a differential mortality experience between never smokers and the other smoking categories. The second component, the disability effect, represents the differences in the person-years lived with or without disability due to differences in the prevalence of disability by smoking status. Whereas differences in LE only reflect differences in mortality rates, differences in DFLE and DLE are a result of differences in age-specific mortality rates (mortality effect) and differences in the age-specific prevalence of disability (disability effect). Calculations were done using a R 2.14.2 program developed in the framework of the EHLEIS project [34] and a copy of the R program is available from W. Nusselder ([email protected]). For the decomposition, including the variance estimation, the analysis by smoking intensity was only possible for the partial DFLE30–80 and DLE30–80 as there were few very old heavy smoking females. 
Data: To calculate Disability Free Life Expectancy (DFLE) and Disability Life Expectancy (DLE) by smoking status two sources of data are required. First, information is needed about the mortality by smoking status. This information was extracted from the mortality follow-up of the Belgian Health Interview Surveys 1997 and 2001 (HIS 1997; HIS 2001) participants. The surveys were carried out by Statistics Belgium and exempted by law from requiring ethics approval. The process of obtaining mortality follow-up information is regulated by the Belgian Commission for the Protection of Privacy. After the approval of the Commission, Statistics Belgium provided follow-up data for the HIS 1997 and HIS 2001 participants until date of death, date of emigration or until respectively 31/12/2007 and 31/12/2010. Follow-up was obtained by individual record linkage between the HIS and the National Register, a public register with details of all registered residents in Belgium, using the National Identification Number. Statistics Belgium provided the list, including the date of death, of the HIS 1997 and HIS 2001 participants who had died by the end of the follow-up period. Second, information is needed about the prevalence of disability by smoking status. This information was extracted from both surveys. The participants in these national cross-sectional surveys were selected from the National Register through a multistage stratified sample of the Belgian population aged 15 years and older. Potential participants were informed by an invitation letter with leaflet and by the interviewer that the participation to the survey is voluntary and that after given an oral consent they can stop the interview anytime or can skip a question if they felt they should not answer a particular question. The participation rate in both surveys was around 60%. The detailed methodology of the surveys is described elsewhere [23]. Data on disability and socioeconomic position were collected via face-to-face interviews, while data on smoking were provided by the participant through a self-administered questionnaire. Measures: Disability The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. Activity restriction is used to define disability based on four dimensions: difficulties in doing any one of six Activities of Daily Living (ADL) - transfer in and out of bed, transfer in and out of chair, dressing, washing of hands and face, feeding, going to the toilet; or difficulties in mobility; continence problems; or limitations in sensory (vision, hearing) functions. Based on the severity of these different dimensions, a variable was constructed with 3 categories: severe disability, mild disability and no disability (Table 1). For people younger than 60 years, the functional domain scale of the SF-36 instrument [25] was used as a filter: (1) a score of 100 on the scale categorises the respondent as being not disabled; (2) when the score was less than 100, the disability questions were asked to the respondent, who was then classified as described in Table 1. In the manuscript we consider disability of all severity levels (mild and severe) as well as severe disability only. Definition of disability by severity The Belgian Health Interview Surveys used the instruments proposed by the WHO-Europe working group to identify people with disability [24]. 
Smoking: A four-category variable was used: never smokers, ex-smokers, light smokers (fewer than 20 cigarettes per day) and heavy smokers (20 cigarettes or more per day). Socio-economic position: Educational attainment was coded according to the International Standard Classification of Education (ISCED 2011) and was based on the highest level of education reached by the household's reference person or his/her partner: lower education (ISCED 0–1), lower secondary education (ISCED 2), higher secondary education (ISCED 3) and higher education (ISCED 4–8) [26]. Statistical methods: Mortality and disability: For each subject, the person-years at risk for mortality were estimated up to the date of death or the end of the follow-up period. To account for changes in age during follow-up, we used Lexis expansions of the original data with 1-year age bands [27]. In this procedure, the observed individual follow-up times were split into periods that correspond to different current-age (or attained-age) groups, so each subject's person-years of observation were expanded into several observations, one per 1-year age band.
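To make the splitting step concrete, here is a minimal Python sketch of a Lexis expansion into 1-year age bands (the study used Stata; the function and variable names below are ours, not from the study's code):

    import math

    def lexis_expand(entry_age, exit_age, died):
        """Split one subject's follow-up into (age_band, person_years, event) rows.

        entry_age -- age at the survey interview (start of follow-up)
        exit_age  -- age at death, emigration or end of follow-up
        died      -- True if the subject died at exit_age
        """
        rows = []
        age = entry_age
        while age < exit_age:
            band = math.floor(age)              # current 1-year attained-age band
            upper = min(band + 1.0, exit_age)   # leave the band or leave follow-up
            person_years = upper - age
            event = 1 if (died and upper == exit_age) else 0  # death in last band
            rows.append((band, person_years, event))
            age = upper
        return rows

    # Example: entered at age 64.3, died at 66.1 -> bands 64, 65 and 66,
    # with the death counted in the 66 band.
    print(lexis_expand(64.3, 66.1, died=True))

Each expanded row then contributes its person-years (as a log offset) and its death indicator to the Poisson models described next.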
As disability, mortality and smoking are associated with age and education [16,28,29], we first estimated mortality rates and disability prevalence by smoking status adjusted for age and education. Poisson and multinomial logistic regression models were fitted to estimate the mortality rate and the prevalence of disability by age, gender and smoking status adjusted for socio-economic position. The Lexis expansion and the regression analyses were performed using Stata 10.0, and the analysis accounted for the complex sampling design of the HIS. Life table analysis: The age-specific mortality rates were used to estimate LE by gender and smoking category. DFLE and DLE at age 30 (last open age group: 85 years and over) and partial DFLE and DLE in the age window 30–80 years (DFLE30–80 and DLE30–80) were calculated by gender and smoking category using the Sullivan method, which integrates the age-specific disability prevalence into the life table [30,31]. To estimate the contribution of mortality and disability to the differences in DFLE and DLE between smoking groups, a decomposition method was used [32,33]. Differences in total life expectancy (LE), DFLE and DLE between never smokers and the other smoking categories (ex-smokers, smokers, light and heavy smokers) were divided into two parts. The first component, the mortality effect, represents the differences in the expected years lived with and without disability due to the differential mortality experience of never smokers and the other smoking categories. The second component, the disability effect, represents the differences in the person-years lived with or without disability due to differences in the prevalence of disability by smoking status. Whereas differences in LE only reflect differences in mortality rates, differences in DFLE and DLE result from both differences in age-specific mortality rates (mortality effect) and differences in the age-specific prevalence of disability (disability effect). Calculations were done using an R 2.14.2 program developed in the framework of the EHLEIS project [34]; a copy of the R program is available from W. Nusselder ([email protected]). For the decomposition, including the variance estimation, the analysis by smoking intensity was only possible for the partial DFLE30–80 and DLE30–80, as there were very few very old heavy-smoking females.
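For readers unfamiliar with the Sullivan method: DFLE at age x is the life-table person-years lived above x, weighted in each age band by the proportion free of disability, divided by the survivors at x, i.e. DFLE_x = (1/l_x) * sum over i >= x of (1 - pi_i) * L_i, and DLE_x = LE_x - DFLE_x. The decomposition then splits a between-group difference in DFLE or DLE additively into the mortality and disability effects described above. The following is a minimal Python illustration, not the EHLEIS R program cited above; the rate-to-probability conversion and the handling of the last age group are simplifications.

    def sullivan(mortality_rates, disability_prev, start_index=0):
        """Sullivan-method LE, DFLE and DLE from single-year mortality rates m_x
        and disability prevalences pi_x (lists aligned by age)."""
        n = len(mortality_rates)
        l = [100000.0]                                   # survivors l_x, arbitrary radix
        for m in mortality_rates:
            q = m / (1.0 + 0.5 * m)                      # rate -> probability of dying
            l.append(l[-1] * (1.0 - q))
        L = [0.5 * (l[i] + l[i + 1]) for i in range(n)]  # person-years lived per band
        lx = l[start_index]
        LE = sum(L[start_index:]) / lx
        DFLE = sum((1.0 - p) * Li
                   for p, Li in zip(disability_prev[start_index:], L[start_index:])) / lx
        return LE, DFLE, LE - DFLE                       # DLE = LE - DFLE

    # Toy check: flat 2% mortality and 20% disability prevalence over 60 ages.
    print(sullivan([0.02] * 60, [0.20] * 60))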
Results: Both the prevalence of disability and the mortality rate are higher in ex-smokers and in light and heavy smokers compared to never smokers (Tables 2 and 3). As expected, mortality rates increase with the intensity of smoking, but the relationship between the prevalence of disability and the intensity of smoking is not as strong, especially for severe disability. In males, the age- and education-adjusted prevalence ratio (a-PR) for disability is 1.17 in ex-smokers, 1.27 in light and 1.34 in heavy smokers, whilst in females the a-PR is 1.15 in ex-, 1.22 in light and 1.24 in heavy smokers. The prevalence of severe disability is lower, although not reaching statistical significance, in heavy smokers (a-PR = 0.83 in males; 0.66 in females). The age- and education-adjusted mortality rate ratios for ex-, light and heavy smokers are respectively 1.50, 1.95 and 2.77 for males and 1.09, 1.41 and 2.67 for females. Table 2: Weighted age- and education-adjusted (severe) disability prevalence (in %) and prevalence ratio by smoking status for those aged 30+, Health Interview Survey 1997 and 2001, Belgium (*: 95% confidence interval). Table 3: Weighted age- and education-adjusted mortality rate per 100 000 person-years and mortality rate ratio by smoking status for those aged 30+, Health Interview Survey 1997 and 2001 and follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval). At age 30 and compared to never smokers, ex-smokers have a shorter LE and a somewhat shorter DFLE, but their DLE is about one third of a year longer (Table 4). Smokers have a shorter LE, DFLE and DLE compared to never smokers. DFLE as a proportion of LE is 74.8% in male never smokers compared to 72.7% in ex-smokers and smokers, and 65.8% in female never smokers compared to 63.6% in ex-smokers and 64.0% in smokers. Both ex-smokers and smokers are estimated to live fewer years with severe disability (DLE_S). Table 5 presents the differences in DFLE, DLE, DLE_S and LE at age 30 between ex-smokers, smokers and never smokers. A negative value indicates fewer years lived compared to never smokers. Each estimated difference is divided into a part due to differential age-specific mortality (mortality effect) and a part that results from a differential age-specific prevalence of disability (disability effect).
Thus, compared to male never smokers, LE for male smokers is 7.87 years shorter, this difference in LE being attributable only to the mortality disadvantage of male smokers relative to never smokers. Male smokers have a DFLE that is 6.80 years shorter, this difference being a result of differences in both the age-specific mortality rate and the age-specific disability prevalence. The mortality effect accounts for 3.67 years or 54% of the difference, while the remaining 3.13 years are due to the higher disability prevalence among smokers. Due to their disability disadvantage, smokers are expected to live 3.13 more years with disability, but because of their higher mortality this disability effect is cancelled out, resulting in a 1.07-year shorter DLE compared to never smokers (-1.07 years = -4.21 years (mortality effect) + 3.13 years (disability effect)). In both males and females, the impact of the higher mortality among smokers on the DLE outweighs the disability effect, so that they live fewer years with disability. This is not the case for the DLE of ex-smokers, where the disability effect is larger than the mortality effect, resulting in a DLE about one third of a year longer. Due to a larger mortality effect, both male and female ex-smokers and smokers have a shorter DLE_S, although the difference is only significant for male smokers. Table 4: Disability Free Life Expectancy (DFLE30), (Severe) Disability Life Expectancy (DLE(_S)30), Life Expectancy (LE30) and the % of remaining life without disability (% DFLE/LE30) at age 30 by smoking status, Health Interview Survey 1997 and 2001 and follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval). Table 5: Decomposition of the difference of ex- and current smokers from never smokers in Disability Free Life Expectancy (DFLE30), (Severe) Disability Life Expectancy (DLE(_S)30) and Life Expectancy (LE30) at age 30 by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 and follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval). Figure 1 presents the decomposition by age of (1) the difference in DFLE and DLE between never smokers and (ex-)smokers and (2) the mortality and disability components of these differences. The disability effect is the most important contributor to the shorter DFLE among ex-smokers up to the age of 84 years (Figure 1a-b), and up to ages 64 and 74 years for male and female smokers respectively (Figure 1c-d). For the difference in DLE between never smokers and (ex-)smokers, the disability effect is outweighed by the mortality effect only at the older ages: 70+ and 75+ years for male and female ex-smokers respectively, and 65+ years for male and female smokers. For ex-smokers, the largest proportion (67%) of the disability effect in the DLE difference is concentrated before age 70, while for male and female smokers the proportion of the disability effect before age 70 is 78% and 73% respectively (Figure 1e-h). At young ages, the importance of the disability disadvantage to the longer DLE in ex-smokers, smokers, light smokers and heavy smokers is further shown by the decomposition of the difference in the partial DLE in the age window 30 to 80 years (DLE30–80) (Table 6, Figure 2). Within this age window, every smoking category experiences more years with disability compared to never smokers, as the disability effect cancels out the mortality effect.
For example, the difference in DLE30–80 between male ex-smokers and never smokers is 1.22 years (1.22 years (95% CI: -0.04; 2.62) = -0.45 years (mortality effect) + 1.67 years (disability effect)). The difference for smokers is 1.27 years (95% CI: -0.13; 2.57). We observe a larger difference among light male smokers (1.45 years (95% CI: -0.02; 2.90)) than among heavy smokers (0.82 years (95% CI: -1.15; 3.01)), suggesting a larger contribution of the mortality effect for heavy smokers even before age 80. The differences in DLE30–80 between female ex-smokers, smokers, light and heavy smokers and never smokers are respectively 1.62 years (95% CI: 0.26; 2.88), 1.83 years (95% CI: 0.13; 3.35), 1.80 years (95% CI: -0.09; 3.78) and 1.80 years (95% CI: -0.90; 4.90). Restricting the analysis to severe disability, the mortality effect by far outweighs any disability effect and is the most important contributor to the shorter DLE_S30–80 in every age group. None of the differences in DLE_S30–80 between the different smoking categories and never smokers is statistically significant (Figure 2). Figure 1: Decomposition by age of the difference of ex- and current smokers from never smokers in Disability Free Life Expectancy (DFLE30) and Disability Life Expectancy (DLE30) at age 30 by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 and follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. Legend: Panels a-d: DFLE (a: male ex-smoker; b: female ex-smoker; c: male smoker; d: female smoker). Panels e-h: DLE (e: male ex-smoker; f: female ex-smoker; g: male smoker; h: female smoker). Black bar: difference in DFLE or DLE from never smokers. Green bar: mortality effect. Red bar: disability effect. E.g. the black bar in panel a is DFLE among male ex-smokers minus DFLE among male never smokers; the black bar in panel h is DLE among female ex-smokers minus DLE among female never smokers. Table 6: Disability Free Life Expectancy (DFLE30–80), (Severe) Disability Life Expectancy (DLE(_S)30–80), Life Expectancy (LE30–80) and the % of remaining life without disability (% DFLE/LE30–80) between ages 30 and 80 by smoking status, Health Interview Survey 1997 and 2001 and follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium (*: 95% confidence interval). Figure 2: Decomposition of the difference of ex- and current smokers from never smokers in Disability Free Life Expectancy (DFLE30–80) and (Severe) Disability Life Expectancy (DLE(_S)30–80) between ages 30 and 80 by type of effect (mortality or disability), Health Interview Survey 1997 and 2001 and follow-up until 31/12/2007 and 31/12/2010 respectively, Belgium. Legend: black dot: difference in DFLE, DLE or DLE_S from never smokers, with 95% CI. Green triangle: mortality effect with 95% CI. Red x: disability effect with 95% CI. Discussion: The study confirms that smoking kills, but it also shows that smoking increases the years lived with disability before age 80, while at older ages the excess mortality of smokers hides their disability disadvantage. In other words, through the excess premature mortality of smokers, their DLE is shorter compared to never smokers. When the intensity of smoking is high, the excess mortality hides the disability disadvantage in DLE even before age 80. Our study also shows that ex-smokers have a shorter DFLE and a longer DLE.
The disability disadvantage of ex-smokers is the main contributor to their shorter DFLE and longer DLE compared to never smokers, even though mortality rates for ex-smokers may approach those of never smokers. At older ages, as for smokers, the excess mortality offsets the disability disadvantage, but this occurs at an older age than for smokers. So on the one hand, the observations support the expansion hypothesis: in the end, smokers may live fewer years with disability due to their strong excess mortality. On the other hand, ex-smokers and smokers have to endure more years with disability before age 80. These two seemingly opposing observations result from the fact that the difference in disability prevalence is expressed mainly at the younger ages, while the smoking-related mortality disadvantage is strongest at older ages and reduces the years to be lived both with and without disability. Moreover, the interaction between excess mortality and excess disability is further a function of gender, smoking intensity and the severity level of the disability. The expression of the disability effect is somewhat higher in women, for whom the lower premature excess mortality reduces DLE less than for men. The female–male difference in the mortality and disability impact of smoking may be a contributing factor to the gender health-survival paradox [35]. Our study also suggests a significantly shorter LE free of severe disability (DFLE_S) for male heavy smokers compared to never smokers. Overall, our study supports the statement that smoking is associated with mortality more than with disability, and that through excess mortality the years of life with disability are compressed compared to never smokers [15-17]. However, the findings also partly corroborate previous reports [18-20] suggesting that smoking has an important and distinct impact on disability, which results in more years with disability at younger ages for ex-smokers and smokers. Other studies on the effect of smoking have also reported an increased incidence of disability, a lower (physical) health-related quality of life and an elevated use of health care services [11,12,36-39]. Our analysis has several strengths. We were able to use one data set containing smoking, disability and mortality data. For the mortality follow-up of the survey, less than 3% of the participants could not be linked to the National Register. Our decomposition analysis allowed us to divide the differences in DFLE and DLE into the part due to the excess mortality of (ex-)smokers and the part due to their excess disability, and to see how these varied by age group. This therefore helped address the controversy over whether the longer LE of never smokers compared to (ex-)smokers translates into more years of disability. To obtain further insight, we evaluated in which age groups the mortality effect or the disability effect was more substantial. To our knowledge, this paper is the first to show that the excess disability associated with smoking contributes to more years with disability at younger ages. Limitations of the study are related to the cross-sectional design providing the smoking and disability data. For example, current smokers at any age after 30 years may be considered lifelong smokers, as the likelihood of smoking initiation after age 30 is small.
If we ignore unsuccessful attempts to stop smoking, the category "current smoker" is probably a less heterogeneous population than the category of ex-smokers, for whom no information is used on the age at which, the time since, or the reasons why they stopped smoking: health benefits are larger in early quitters, while former smokers who recently quit tend to have more health problems [6,36]. Further, we cannot attribute the lower prevalence of disability (which, among never smokers, led to more healthy years and to a reduction in the time spent with disability before age 80) to either a lower disability incidence or a higher recovery rate, since this is beyond the decomposition method using Sullivan-method-based estimates [33]. The main assumption of a stationary population, needed to minimise the bias of the Sullivan method compared to the multistate life table method using transition probabilities, may hold, as changes in smoking behaviour do not lead to sudden changes in either mortality or disability incidence [40]. It is difficult to identify to what extent the method used to estimate the years lived with and without disability contributes to the lack of agreement on the compression of disability as a function of smoking elimination. Some authors include both transitions to disability and recovery [14,16,20] in the multistate method; others do not [17]. Studies further differ in the definition of disability, the definition of the smoking categories, and the age at which DFLE and LE are estimated. The paper of Nusselder et al. [20] is the only one using the multistate method, including both disability incidence and recovery transitions, that provides evidence for a compression of years with disability related to smoking elimination both at age 30 and at age 70. The same conclusion was reached by Bronnum-Hansen et al. using the Sullivan method [19]. Other studies using a multistate approach report that smoking reduces both the duration of life with and without disability [14,16,17]. Secondly, low survey participation may bias the results [41]. We have shown in prior studies that participation is differentially linked to health status and socioeconomic position [42,43]. Charafedinne R. et al. [44] compared Belgian census-based DFLE by social position with survey-based estimates and found that, although there was no statistical difference, the difference in LE and DFLE should be acknowledged. Low-educated survey participants tended to be less healthy (i.e., having a lower LE and lower DFLE) than their counterparts in the general population, while the inverse was observed in the highest educational groups. The same author also reported evidence supporting the hypothesis that educational attainment does not substantially influence the association between smoking and mortality [28]. Therefore, we hypothesize that any selection bias in the differences in DFLE or DLE by smoking is most likely related to the survey-based disability prevalence and not to the mortality. If present, it is expected to overestimate the gap and the disability effect in the smoking-related differences in DFLE and DLE. Other limitations are related to the validity of survey data. The validity of self-reported smoking can be questioned, although a number of studies have found the validity of this self-reporting to be high [45]. However, we expect that any misclassification of smoking status would result in an underestimation of the reported differences.
A final important limitation is related to the delay in coding causes of death in Belgium. We were not able to estimate the contribution of specific diseases to the differences in DFLE and DLE by smoking status. This limits the interpretation of the role of specific diseases in the balance between the smoking-related excess mortality and the smoking-related excess disability. Conclusion: We were able to evaluate the contribution of the excess mortality versus the disabling impact of tobacco exposure on population health. Smoking kills and shortens both life without and life with disability, mainly through its related excess mortality. However, the excess disability associated with smoking cannot be ignored, given its contribution to substantially more years with disability before age 80. The important population health message remains: smoking is a major health hazard. Policy on smoking should strive for a smoke-free society, through primordial prevention or reduction of smoking initiation and through primary prevention or smoking cessation, to increase LE and DFLE. Further, given the lack of compression of disability for never smokers compared to smokers, this study highlights the need for policy makers to monitor not only DFLE (e.g., the European Union 2020 health goal to increase the healthy and active ageing of the European population by 2 years [46]) but also DLE, as reductions in health risks and increases in DFLE may not automatically result in a simultaneous reduction or status quo of the DLE. Abbreviations: ADL: Activities of daily living; a-PR: Adjusted prevalence ratio; CI: Confidence interval; DFLE: Disability free life expectancy; DLE: Disability life expectancy; DLE_S: Severe disability life expectancy; HIS: Health interview survey; ISCED: International Standard Classification of Education; LE: Life expectancy. Competing interests: The authors declare no financial or non-financial competing interests. Authors' contributions: HVO worked out the concept and design of the study, performed part of the statistical analysis, participated in the interpretation and drafted the manuscript. NB performed part of the statistical analysis and participated in the interpretation and in the drafting of the manuscript. WN, CJ, EC and JMR participated in the concept development and the design of the study, the interpretation and the revision of the manuscript. RC participated in the statistical analysis, the interpretation and the revision of the manuscript. SD participated in the interpretation and the revision of the manuscript. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/14/723/prepub
Background: Smoking is the single most important health threat, yet there is no consistency as to whether non-smokers experience a compression of years lived with disability compared to (ex-)smokers. The objectives of the manuscript are (1) to assess the effect of smoking on the average years lived without disability (Disability Free Life Expectancy (DFLE)) and with disability (Disability Life Expectancy (DLE)) and (2) to estimate the extent to which these effects are due to better survival or reduced disability in never smokers. Methods: Data on disability and mortality were provided by the Belgian Health Interview Surveys 1997 and 2001 and a 10-year mortality follow-up of the survey participants. Disability was defined as difficulties in activities of daily living (ADL), in mobility, in continence or in sensory (vision, hearing) functions. Poisson and multinomial logistic regression models were fitted to estimate the probabilities of death and the prevalence of disability by age, gender and smoking status adjusted for socioeconomic position. The Sullivan method was used to estimate DFLE and DLE at age 30. The contribution of mortality and of disability to smoking-related differences in DFLE and DLE was assessed using decomposition methods. Results: Compared to never smokers, ex-smokers have a shorter life expectancy (LE) and DFLE, but the number of years lived with disability is somewhat larger. For both sexes, the higher disability prevalence is the main contributing factor to the difference in DFLE and DLE. Smokers have a shorter LE, DFLE and DLE compared to never smokers. Both higher mortality and higher disability prevalence contribute to the difference in DFLE, but mortality is more important among males. Although both male and female smokers experience higher disability prevalence, their higher mortality outweighs their disability disadvantage, resulting in a shorter DLE. Conclusions: Smoking kills and shortens both life without and life with disability. Smoking-related disability can, however, not be ignored, given its contribution to the excess years with disability, especially in younger age groups.
Background: Smoking is without doubt the single most important global cause of premature mortality. The current death toll from direct and second-hand tobacco smoking in adults aged 30 years and over is estimated at well over 5.5 million globally each year [1]. While at present the highest proportion of deaths attributable to tobacco is in America and Europe, the largest proportion of tobacco-related deaths in the coming decades is expected to occur in medium- and low-income countries [2]. Smokers may lose up to one decade of life expectancy [3,4]. However, prolonged cessation, when started early enough, reduces the risk of mortality associated with smoking by 90% or more [3-5], and hence greater mortality benefits are observed among early quitters [6]. Implementation of evidence-based tobacco control measures, such as smoke-free air laws or taxation, contributes to the avoidance of substantial numbers of premature deaths [7]. Smoking has also been associated with the incidence of chronic diseases, especially several cancers, cardiovascular diseases and lung disease [8-10], and with the incidence of disability and poor health-related quality of life [11,12]. Although non-smoking is related to a longer life and a longer healthier life, there is no agreement in the literature on whether smoking cessation also leads to fewer years with morbidity. Some publications suggest that smoking reduces both the duration of life free of and with diseases and disability, so that in the end never smokers live the same number of, or even more, years in ill health [8,13-17]. Other authors report that smokers have to endure, within their shorter life, more years and a greater proportion of their life with disability [18-20]. The first group of manuscripts suggests the need to consider a trade-off between a longer life and a longer life in ill health [21], while the latter studies support the compression-of-morbidity theory that can be reached through primordial and primary prevention [22]. For public health policy, it is important to better understand this discrepancy in the current literature and to better assess health gains or losses in relation to smoking-reduction interventions, specifically: "Is the gap in duration of life in total and with or without disability, between never smokers and ex- or current smokers, due to differences in mortality and/or due to differences in disability?". The objectives of the current manuscript are therefore (1) to determine the effect of smoking on the duration of life with and without disability and (2) to estimate the contribution of the higher mortality and higher disability associated with smoking to the differences in the years lived with and without disability between smoking groups.
[ "Disability free life expectancy", "Disability life expectancy", "Life expectancy", "Health expectancy", "Disability", "Mortality", "Smoking", "Decomposition", "Belgium" ]
[ "Activities of Daily Living", "Adult", "Aged", "Aged, 80 and over", "Belgium", "Disabled Persons", "Female", "Health Surveys", "Humans", "Life Expectancy", "Logistic Models", "Male", "Middle Aged", "Prevalence", "Smoking" ]
Impact of Different Resistance Training Protocols on Balance, Quality of Life and Physical Activity Level of Older Women.
36142038
Physical activity (PA) and physical fitness are key factors for the quality of life (QoL) of older women. The aging process promotes decreases in capacities such as strength, which affect activities of daily living. This loss of strength leads to reduced balance, an increased risk of falls and a more sedentary lifestyle. Resistance Training (RT) is an effective method to improve balance and strength, but different RT protocols can promote different responses. Power training has a higher impact on the performance of activities of daily living. Therefore, our study aimed to analyze whether different RT protocols promote distinct responses in the balance, QoL and PA levels of older women, and which are most effective for older women.
BACKGROUND
Ninety-four older women were divided into four RT groups (strength endurance training, SET; traditional strength training, TRT; absolute strength training, AST; power training, PWT) and one control group (CG). Each RT group performed a specific protocol for 16 weeks. At baseline and after 8 and 16 weeks, we assessed balance with the Berg Balance Scale, PA levels with a modified Baecke questionnaire, and QoL with the World Health Organization Quality of Life-BREF (WHOQOL-BREF) and the World Health Organization Quality of Life-OLD module (WHOQOL-OLD).
METHODS
Balance improved after 16 weeks (baseline vs. 16 weeks; p < 0.05) with no differences between the RT groups. PWT (2.82%) and TRT (3.48%) improved balance in the first 8 weeks (baseline vs. 8 weeks; p < 0.05). PA levels increased in PWT, TRT and AST after 16 weeks (baseline vs. 16 weeks; p < 0.05).
RESULTS
All RT protocols improved PA levels and QoL after 16 weeks of training. To improve balance, QoL and PA, older women can perform PWT, AST or SET and need not be restricted to TRT.
CONCLUSION
[ "Aged", "Exercise", "Female", "Humans", "Muscle Strength", "Physical Fitness", "Quality of Life", "Resistance Training", "Surveys and Questionnaires" ]
9517151
1. Introduction
Worldwide, the population over the age of 60 and its average life expectancy are increasing year after year, mainly due to improvements in health systems and new technologies [1,2]. The aging process is associated with changes in older women that affect their daily routine, quality of life (QoL) and health. Sarcopenia is a syndrome that occurs after the age of 60 and is characterized by a progressive loss of muscle mass and strength, which is associated with declines in functional capacity, loss of balance and an increased risk of falling [3,4,5]. Several worldwide organizations recommend resistance training (RT), endurance training, interval training, aerobic training and multicomponent training executed at moderate-to-vigorous intensities as methods to improve functional capacity and QoL [6,7,8]. RT is an effective method to increase fat-free mass, bone strength, muscle strength and functional capacity, and to reduce risk factors for several diseases, such as cardiovascular and metabolic diseases [3,4,6,7,9]. Filho et al. [10] reported that older women, after twenty weeks of strength training, improved functional capacity, strength and the ability to perform activities of daily living (ADL). In addition to these effects, RT improves psychosocial health, which, combined, has an important influence on QoL. Additionally, physical fitness and physical activity (PA) combined can have considerable consequences for the performance of ADL and QoL [6,11]. For older women, QoL is becoming one of the most important factors as they age, since the extended years must be accompanied by a healthy life [4,12]. Puciato et al. [13] showed that higher PA levels resulted in higher scores in the physical, psychosociological and environmental QoL domains. Several studies [6,11,14,15,16] have focused on the impact of endurance-based exercise and strength training on balance and QoL. Fragala et al. [7] suggested that RT is crucial for improving and maintaining muscle strength, psychological well-being, QoL and healthy life expectancy. The size of those benefits depends on the type of RT protocol used. Traditional resistance training (TRT) is performed 2–3 times a week and is based on relative strength, with 8–12 repetitions at 60–80% of 1 RM. Strength endurance training (SET) comprises a higher number of repetitions at a light-to-moderate intensity until fatigue. Absolute strength training (AST) consists of no more than six repetitions at loads close to the maximum the neuromuscular system can produce [7,10]. Power training (PWT) is characterized by performing the concentric phase of the movement with as much force and as fast as possible, with loads of no more than 60% of 1 RM. According to different studies, PWT is important for functional improvements in ADL and balance and can produce more power at lower external resistances, which results in a higher movement velocity when performing any task [6,7,10,17]. All these different RT types are relevant for the balance and QoL of older women. According to the literature, it is not clear whether any RT protocol promotes greater improvements in balance than the others. Diverse RT prescriptions can promote different dose–response relationships, since intensity and volume are key factors for positive adaptations [7]. To our knowledge, no study in the literature has compared all four protocols (TRT, SET, PWT and AST) in the same investigation at the same time.
Thus, we propose to analyze the effects of 16 weeks of four different RT protocols on the QoL, PA levels and balance of older women and to determine whether each protocol promotes distinct responses.
2.2. Sample and Ethical Procedures
The inclusion criteria were (a) being aged between 60–75 years old and (b) the ability to perform exercise without contraindication and attendance of at least 85% of the sessions. Exclusion criteria were (a) any osteoarticular problem that affects the performance of any exercise; (b) medical contraindications (e.g., heart problems, surgeries); and (c) involvement at the same time in any other PA program. All procedures of the study were carried out in accordance with the Declaration of Helsinki and approved for human experiments by the local institutional ethics committee (protocol number 2.887.652). Before the experimental protocols began, all volunteers signed an informed consent form. Of one hundred fifty-six volunteers, only ninety-five older women fulfilled the inclusion criteria and were randomly assigned to 5 groups: SET, AST, PWT, TRT and a control group (CG) (Table 1). All participants were instructed to keep their normal lifestyle over the study and were advised not to consume coffee, tea, alcohol or tobacco and not to do any vigorous exercise during the 24 h before each assessment. The volunteers of the CG did not perform any systematic PA during the 16 weeks and maintained their daily routine. During the study, five older women dropped out (one each from CG and TRT; three from PWT) due to illness (3) or not attending at least 85% of the sessions (2). 2.2.1. Resistance Training Protocol: The RT protocol consisted of training sessions twice a week, with 72 h to 96 h of recovery between sessions, for 16 weeks. The first two weeks of familiarization were followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension and seated leg curl) in the same order, with the same volume of repetitions and recovery intervals. To assess muscle strength, the 10 RM test was used [10]. Load progression was 5 to 10% every two weeks, and the rating of perceived exertion (RPE) [18] was used for load control. In the familiarization and adaptation periods we used RPE values from 5 to 7, and for the specific RT period we used RPE values from 6 to 8. In the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between sets and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with the training load adjusted using RPE: SET, 1 set of 20–25 repetitions; AST, 4–5 sets of 4–5 repetitions; TRT, 2–3 sets of 8–12 repetitions; PWT, 2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions of every exercise, with three minutes of rest between sets and exercises. The concentric and eccentric phases each lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed repetitions to failure [19].
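The specific-phase prescriptions above can be summarised as data, with the stated 5–10% load increase applied every two weeks. A small sketch follows; the names and structure are ours, not from the study's materials:

    # Specific-phase prescriptions per group, as described in the text.
    PROTOCOLS = {
        # group: (sets, reps per set, tempo / velocity note, target RPE range)
        "SET": ("1",   "20-25", "2 s concentric / 2 s eccentric", (6, 8)),
        "AST": ("4-5", "4-5",   "2 s concentric / 2 s eccentric", (6, 8)),
        "TRT": ("2-3", "8-12",  "2 s concentric / 2 s eccentric", (6, 8)),
        "PWT": ("2-3", "8-12",  "concentric at maximum velocity, 50% of 10 RM", (6, 8)),
    }

    def progressed_load(initial_load_kg, week, increase=0.05):
        """Load after applying a 5% (up to 10%) increase every two weeks."""
        steps = week // 2
        return initial_load_kg * (1 + increase) ** steps

    # Example: a 30 kg leg-press load after 8 weeks with 5% steps.
    print(round(progressed_load(30.0, 8), 1))  # -> 36.5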
2.2.2. Anthropometric Measures and Balance: Body mass (kg) and body mass index (BMI, kg·m−2) were measured with an Omron scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all during the first visit. The balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0: unable to perform; 4: task completed with full independence) on the Berg Balance Scale: sitting to standing; standing for 2 min without support; standing with eyes closed; standing with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; picking up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and standing on one leg. This scale allowed us to measure the risk of falls and to quantify the balance of the older women [20,21].
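As a small worked illustration of these two measures: BMI is body mass divided by height squared, and the Berg Balance Scale total is simply the sum of the 14 item scores (0–4 each), giving a 0–56 range. The example values below are invented:

    def bmi(mass_kg, height_m):
        # Body mass index in kg per square metre.
        return mass_kg / height_m ** 2

    def berg_total(item_scores):
        # Sum of the 14 Berg items, each scored 0-4 -> total 0 (worst) to 56 (best).
        assert len(item_scores) == 14 and all(0 <= s <= 4 for s in item_scores)
        return sum(item_scores)

    print(round(bmi(68.0, 1.58), 1))       # -> 27.2
    print(berg_total([4] * 12 + [3, 3]))   # -> 54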
2.2.3. Level of Physical Activity and Quality of Life: To measure PA levels we used the modified Baecke questionnaire (MBQ) [22], and for QoL we used the WHOQOL-BREF and WHOQOL-OLD [23]. The MBQ allowed us to measure the habitual PA of the older women through the scores of three specific domains: work activity, sports activity and leisure activity. The WHOQOL-BREF is an instrument frequently used to assess the QoL of healthy and ill populations. It comprises twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions on QoL, for a total of twenty-six questions. The WHOQOL-OLD questionnaire is an add-on module to the WHOQOL-BREF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.
3. Results
The participants' attendance rate was 87%. All RT protocols resulted in improvements in balance (Figure 1 and Table 2), total PA and QoL (Table 3). For balance, we verified that after 16 weeks all RT groups improved, but only SET (p = 0.046) and TRT (p = 0.036) improved after the first 8 weeks. All RT groups finished the study with better results than the CG (p < 0.05). PA levels increased after 16 weeks in all RT groups except SET, despite the baseline values already being high (p < 0.05) (Table 3). With the WHOQOL-BREF questionnaire, we found no changes in the psychological domain. In the social domain, PWT (−6.5%) and TRT (−6.5%) decreased (p < 0.05). The physical domain of QoL improved only in SET (3.9%) after 16 weeks (p < 0.05). With the questionnaire specific to older adults, the WHOQOL-OLD, 16 weeks resulted in improvements in QoL for SET (7.9%), PWT (17.5%), TRT (17.2%) and AST (14.5%).
5. Conclusions
After 16 weeks of SET, PWT, TRT or AST, the older women improved their balance, QoL and PA levels. The different RT protocols promoted similar responses, reinforcing the importance of all types of RT as a crucial method for health promotion. Although PWT and TRT seem to produce greater effects on QoL, we recommend that exercise professionals use any of the RT protocols, not only TRT, to increase or maintain QoL and balance in older women. Moreover, aligning RT protocol selection with the personal interests of older women can improve the final results.
[ "2. Materials and Methods", "2.1. Experimental Design", "2.2.1. Resistance Training Protocol", "2.2.2. Anthropometric Measures and Balance", "2.2.3. Level of Physical Activity and Quality of Life", "2.3. Statistical Analysis" ]
[ "2.1. Experimental Design This was a sixteen-week study that consisted of assessing the effects of TRT, SET, PWT and AST on balance, PA levels and QoL in older adults. To analyze these effects, all participants visited the laboratory at three specific times. The first visit was after the first two weeks of familiarization with RT, and then after 6 weeks of adaptation to RT, and the third assessment was performed 24 h after the last training session after eight weeks of specific RT. During all three visits, the participants performed the Berg Balance Scale and completed a questionnaire which comprised the modified Baecke questionnaire, World Health Organization Quality of Life—BREF (WHOQOL-BREF) and World Health Organization Quality of Life—OLD module (WHOQOL-OLD). All assessments were conducted in the same location and by experienced instructors in older adult training.\nThis was a sixteen-week study that consisted of assessing the effects of TRT, SET, PWT and AST on balance, PA levels and QoL in older adults. To analyze these effects, all participants visited the laboratory at three specific times. The first visit was after the first two weeks of familiarization with RT, and then after 6 weeks of adaptation to RT, and the third assessment was performed 24 h after the last training session after eight weeks of specific RT. During all three visits, the participants performed the Berg Balance Scale and completed a questionnaire which comprised the modified Baecke questionnaire, World Health Organization Quality of Life—BREF (WHOQOL-BREF) and World Health Organization Quality of Life—OLD module (WHOQOL-OLD). All assessments were conducted in the same location and by experienced instructors in older adult training.\n2.2. Sample and Ethical Procedures The inclusion criteria were (a) being aged between 60–75 years old; (b) the ability to perform exercise without contraindication and attendance of at least 85% of the sessions. Exclusion criteria were (a) any osteoarticular problem that affects the performance of any exercise; (b) medical contraindications (e.g., heart problems, surgeries) and (c) involvement at the same time in any PA program. All procedures of the study were accomplished in accordance with the Declaration of Helsinki and approved for human experiments by local institutional ethical committee (protocol number 2.887.652). Before studying experimental protocols, all volunteers signed an informed consent form. Of one hundred fifty-six volunteers only ninety-five older women fulfilled the inclusion criteria and were randomly assigned to 5 groups: SET, AST, PWT, TRT and control group (CG) (Table 1). All participants were instructed to keep, over the study, their normal lifestyle and were advised not to consume coffee, tea, alcohol or tobacco and do any vigorous exercise during the last 24 h before all assessments. The volunteers of the CG did not perform any systematic PA during the 16 weeks and maintained their daily routine. During the study, five older women dropped out (one from CG and TRT; three from PWT) due to illness (3) or not attending at least 85% of the sessions (2).\n2.2.1. Resistance Training Protocol The RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. 
All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\nThe RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\n2.2.2. 
Anthropometric Measures and Balance Body mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\nBody mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\n2.2.3. Level of Physical Activity and Quality of Life To measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\nTo measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. 
WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\nThe inclusion criteria were (a) being aged between 60–75 years old; (b) the ability to perform exercise without contraindication and attendance of at least 85% of the sessions. Exclusion criteria were (a) any osteoarticular problem that affects the performance of any exercise; (b) medical contraindications (e.g., heart problems, surgeries) and (c) involvement at the same time in any PA program. All procedures of the study were accomplished in accordance with the Declaration of Helsinki and approved for human experiments by local institutional ethical committee (protocol number 2.887.652). Before studying experimental protocols, all volunteers signed an informed consent form. Of one hundred fifty-six volunteers only ninety-five older women fulfilled the inclusion criteria and were randomly assigned to 5 groups: SET, AST, PWT, TRT and control group (CG) (Table 1). All participants were instructed to keep, over the study, their normal lifestyle and were advised not to consume coffee, tea, alcohol or tobacco and do any vigorous exercise during the last 24 h before all assessments. The volunteers of the CG did not perform any systematic PA during the 16 weeks and maintained their daily routine. During the study, five older women dropped out (one from CG and TRT; three from PWT) due to illness (3) or not attending at least 85% of the sessions (2).\n2.2.1. Resistance Training Protocol The RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\nThe RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. 
The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\n2.2.2. Anthropometric Measures and Balance Body mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\nBody mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\n2.2.3. 
Level of Physical Activity and Quality of Life To measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\nTo measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\n2.3. Statistical Analysis Descriptive statistics were presented by mean ± standard deviation. Levene, Mauchly and Shapiro–Wilk tests were used to verify the assumptions of the sphericity, variance homogeneity and normality of the data, respectively. One-factor ANOVA was used to analyze sample characteristics and two-way ANOVA was performed to compare groups during the training period. Bonferroni adjustment was used for multiple comparisons. For relative differences we used delta percentage (∆% = [(post-test score − pretest score)/pretest score] × 100). Statistical significance was maintained at 5%. For data analysis, we used SPSS v.23. Previously, a sample size calculation was performed with G*Power 3.1 (ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with a power of 0.8, and a total of 100 older women was necessary, 20 in each group.\nDescriptive statistics were presented by mean ± standard deviation. Levene, Mauchly and Shapiro–Wilk tests were used to verify the assumptions of the sphericity, variance homogeneity and normality of the data, respectively. One-factor ANOVA was used to analyze sample characteristics and two-way ANOVA was performed to compare groups during the training period. Bonferroni adjustment was used for multiple comparisons. For relative differences we used delta percentage (∆% = [(post-test score − pretest score)/pretest score] × 100). Statistical significance was maintained at 5%. For data analysis, we used SPSS v.23. Previously, a sample size calculation was performed with G*Power 3.1 (ver. 
3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with a power of 0.8, and a total of 100 older women was necessary, 20 in each group.", "This was a sixteen-week study that consisted of assessing the effects of TRT, SET, PWT and AST on balance, PA levels and QoL in older adults. To analyze these effects, all participants visited the laboratory at three specific times. The first visit was after the first two weeks of familiarization with RT, and then after 6 weeks of adaptation to RT, and the third assessment was performed 24 h after the last training session after eight weeks of specific RT. During all three visits, the participants performed the Berg Balance Scale and completed a questionnaire which comprised the modified Baecke questionnaire, World Health Organization Quality of Life—BREF (WHOQOL-BREF) and World Health Organization Quality of Life—OLD module (WHOQOL-OLD). All assessments were conducted in the same location and by experienced instructors in older adult training.", "The RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].", "Body mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. 
This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].", "To measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.", "Descriptive statistics were presented by mean ± standard deviation. Levene, Mauchly and Shapiro–Wilk tests were used to verify the assumptions of the sphericity, variance homogeneity and normality of the data, respectively. One-factor ANOVA was used to analyze sample characteristics and two-way ANOVA was performed to compare groups during the training period. Bonferroni adjustment was used for multiple comparisons. For relative differences we used delta percentage (∆% = [(post-test score − pretest score)/pretest score] × 100). Statistical significance was maintained at 5%. For data analysis, we used SPSS v.23. Previously, a sample size calculation was performed with G*Power 3.1 (ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with a power of 0.8, and a total of 100 older women was necessary, 20 in each group." ]
[ null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Experimental Design", "2.2. Sample and Ethical Procedures", "2.2.1. Resistance Training Protocol", "2.2.2. Anthropometric Measures and Balance", "2.2.3. Level of Physical Activity and Quality of Life", "2.3. Statistical Analysis", "3. Results", "4. Discussion", "5. Conclusions" ]
[ "Worldwide, the population over the age of 60 years old and their average life expectancy is increasing year after year, mainly due to improvements in the health system and new technologies [1,2]. The aging process is associated with changes in older women affecting their daily routine, quality of life (QoL) and health. Sarcopenia is a syndrome that occurs after 60 years and is characterized by a progressive loss of muscle mass and strength that are associated with declines in functional capacity and loss of balance, and increases in the risk of falling [3,4,5].\nSeveral worldwide organizations recommended resistance strength (RT), endurance training, interval training, aerobic training and multicomponent training executed at moderate-to-vigorous intensities, as methods to improve functional capacity and QoL [6,7,8]. RT is an effective method to increase fat-free mass, bone strength, muscle strength and functional capacity, and reduce risk factors for several diseases such as cardiovascular and metabolic diseases [3,4,6,7,9]. Filho et al. [10] reported that older women, after twenty weeks of strength training, improved functional capacity, strength and the ability to perform activities of daily life (ADL). In addition to these effects, RT increased psychosocial health, which combined, has an important influence on QoL. Additionally, physical fitness and physical activity (PA) combined can have considerable consequences on the performance of ADL and QoL [6,11]. For older women, QoL is becoming one of the most important factors as they become older since the extended years must be followed up with a healthy life [4,12]. Puciato et al. [13] showed that higher PA levels resulted in higher physical, psychosociological and environmental QoL domains.\nSeveral studies [6,11,14,15,16] have focused on the impact of endurance-based exercise and strength training on balance and QoL. Fragala et al. [7] suggested that RT is crucial for improving and maintaining muscle strength, psychological well-being, QoL and healthy life expectancy. The size of those benefits is dependent on the type of RT protocol used. Traditional resistance training (TRT) is performed 2–3 times a week based on relative strength with 8–12 repetitions at 60–80% of 1 RM. Strength endurance training (SET) comprises a higher number of repetitions at a light-to-moderate intensity until fatigue. Absolute strength training (AST) consists of no more than six repetitions at a higher amount of strength than the neuromuscular system performs [7,10]. Power training (PWT) is characterized by performing the concentric phase of the movement at the maximum amount of force as fast as possible with loads of no more than 60% of 1 RM. According to different studies, power training (PWT) is important for functional improvements in ADL and balance and can promote more power at lower external resistances, which will result in a higher velocity of power to perform any task [6,7,10,17].\nAll these different RT types are important for the balance and QoL of older women. According to the literature, it is not clear if any RT protocol can promote higher improvements in balance when compared to the other protocols. Diverse prescriptions of RT can promote different dose–response relationships since intensity and volume are key factors for positive adaptations [7]. To our knowledge, there is no study in the literature that compared all these four protocols (TRT, SET, PWT and AST) in the same investigation at the same time. 
Thus, we propose to analyze the effects of 16 weeks of four different RT protocols on QoL, PA levels and balance of older women and see if there are any individual responses from each one.", "2.1. Experimental Design This was a sixteen-week study that consisted of assessing the effects of TRT, SET, PWT and AST on balance, PA levels and QoL in older adults. To analyze these effects, all participants visited the laboratory at three specific times. The first visit was after the first two weeks of familiarization with RT, and then after 6 weeks of adaptation to RT, and the third assessment was performed 24 h after the last training session after eight weeks of specific RT. During all three visits, the participants performed the Berg Balance Scale and completed a questionnaire which comprised the modified Baecke questionnaire, World Health Organization Quality of Life—BREF (WHOQOL-BREF) and World Health Organization Quality of Life—OLD module (WHOQOL-OLD). All assessments were conducted in the same location and by experienced instructors in older adult training.\nThis was a sixteen-week study that consisted of assessing the effects of TRT, SET, PWT and AST on balance, PA levels and QoL in older adults. To analyze these effects, all participants visited the laboratory at three specific times. The first visit was after the first two weeks of familiarization with RT, and then after 6 weeks of adaptation to RT, and the third assessment was performed 24 h after the last training session after eight weeks of specific RT. During all three visits, the participants performed the Berg Balance Scale and completed a questionnaire which comprised the modified Baecke questionnaire, World Health Organization Quality of Life—BREF (WHOQOL-BREF) and World Health Organization Quality of Life—OLD module (WHOQOL-OLD). All assessments were conducted in the same location and by experienced instructors in older adult training.\n2.2. Sample and Ethical Procedures The inclusion criteria were (a) being aged between 60–75 years old; (b) the ability to perform exercise without contraindication and attendance of at least 85% of the sessions. Exclusion criteria were (a) any osteoarticular problem that affects the performance of any exercise; (b) medical contraindications (e.g., heart problems, surgeries) and (c) involvement at the same time in any PA program. All procedures of the study were accomplished in accordance with the Declaration of Helsinki and approved for human experiments by local institutional ethical committee (protocol number 2.887.652). Before studying experimental protocols, all volunteers signed an informed consent form. Of one hundred fifty-six volunteers only ninety-five older women fulfilled the inclusion criteria and were randomly assigned to 5 groups: SET, AST, PWT, TRT and control group (CG) (Table 1). All participants were instructed to keep, over the study, their normal lifestyle and were advised not to consume coffee, tea, alcohol or tobacco and do any vigorous exercise during the last 24 h before all assessments. The volunteers of the CG did not perform any systematic PA during the 16 weeks and maintained their daily routine. During the study, five older women dropped out (one from CG and TRT; three from PWT) due to illness (3) or not attending at least 85% of the sessions (2).\n2.2.1. Resistance Training Protocol The RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. 
The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\nThe RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\n2.2.2. 
Anthropometric Measures and Balance Body mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\nBody mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\n2.2.3. Level of Physical Activity and Quality of Life To measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\nTo measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. 
WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\nThe inclusion criteria were (a) being aged between 60–75 years old; (b) the ability to perform exercise without contraindication and attendance of at least 85% of the sessions. Exclusion criteria were (a) any osteoarticular problem that affects the performance of any exercise; (b) medical contraindications (e.g., heart problems, surgeries) and (c) involvement at the same time in any PA program. All procedures of the study were accomplished in accordance with the Declaration of Helsinki and approved for human experiments by local institutional ethical committee (protocol number 2.887.652). Before studying experimental protocols, all volunteers signed an informed consent form. Of one hundred fifty-six volunteers only ninety-five older women fulfilled the inclusion criteria and were randomly assigned to 5 groups: SET, AST, PWT, TRT and control group (CG) (Table 1). All participants were instructed to keep, over the study, their normal lifestyle and were advised not to consume coffee, tea, alcohol or tobacco and do any vigorous exercise during the last 24 h before all assessments. The volunteers of the CG did not perform any systematic PA during the 16 weeks and maintained their daily routine. During the study, five older women dropped out (one from CG and TRT; three from PWT) due to illness (3) or not attending at least 85% of the sessions (2).\n2.2.1. Resistance Training Protocol The RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\nThe RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. 
The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\n2.2.2. Anthropometric Measures and Balance Body mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\nBody mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\n2.2.3. 
Level of Physical Activity and Quality of Life To measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\nTo measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\n2.3. Statistical Analysis Descriptive statistics were presented by mean ± standard deviation. Levene, Mauchly and Shapiro–Wilk tests were used to verify the assumptions of the sphericity, variance homogeneity and normality of the data, respectively. One-factor ANOVA was used to analyze sample characteristics and two-way ANOVA was performed to compare groups during the training period. Bonferroni adjustment was used for multiple comparisons. For relative differences we used delta percentage (∆% = [(post-test score − pretest score)/pretest score] × 100). Statistical significance was maintained at 5%. For data analysis, we used SPSS v.23. Previously, a sample size calculation was performed with G*Power 3.1 (ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with a power of 0.8, and a total of 100 older women was necessary, 20 in each group.\nDescriptive statistics were presented by mean ± standard deviation. Levene, Mauchly and Shapiro–Wilk tests were used to verify the assumptions of the sphericity, variance homogeneity and normality of the data, respectively. One-factor ANOVA was used to analyze sample characteristics and two-way ANOVA was performed to compare groups during the training period. Bonferroni adjustment was used for multiple comparisons. For relative differences we used delta percentage (∆% = [(post-test score − pretest score)/pretest score] × 100). Statistical significance was maintained at 5%. For data analysis, we used SPSS v.23. Previously, a sample size calculation was performed with G*Power 3.1 (ver. 
3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with a power of 0.8, and a total of 100 older women was necessary, 20 in each group.", "This was a sixteen-week study that consisted of assessing the effects of TRT, SET, PWT and AST on balance, PA levels and QoL in older adults. To analyze these effects, all participants visited the laboratory at three specific times. The first visit was after the first two weeks of familiarization with RT, and then after 6 weeks of adaptation to RT, and the third assessment was performed 24 h after the last training session after eight weeks of specific RT. During all three visits, the participants performed the Berg Balance Scale and completed a questionnaire which comprised the modified Baecke questionnaire, World Health Organization Quality of Life—BREF (WHOQOL-BREF) and World Health Organization Quality of Life—OLD module (WHOQOL-OLD). All assessments were conducted in the same location and by experienced instructors in older adult training.", "The inclusion criteria were (a) being aged between 60–75 years old; (b) the ability to perform exercise without contraindication and attendance of at least 85% of the sessions. Exclusion criteria were (a) any osteoarticular problem that affects the performance of any exercise; (b) medical contraindications (e.g., heart problems, surgeries) and (c) involvement at the same time in any PA program. All procedures of the study were accomplished in accordance with the Declaration of Helsinki and approved for human experiments by local institutional ethical committee (protocol number 2.887.652). Before studying experimental protocols, all volunteers signed an informed consent form. Of one hundred fifty-six volunteers only ninety-five older women fulfilled the inclusion criteria and were randomly assigned to 5 groups: SET, AST, PWT, TRT and control group (CG) (Table 1). All participants were instructed to keep, over the study, their normal lifestyle and were advised not to consume coffee, tea, alcohol or tobacco and do any vigorous exercise during the last 24 h before all assessments. The volunteers of the CG did not perform any systematic PA during the 16 weeks and maintained their daily routine. During the study, five older women dropped out (one from CG and TRT; three from PWT) due to illness (3) or not attending at least 85% of the sessions (2).\n2.2.1. Resistance Training Protocol The RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. 
During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\nThe RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. In familiarization and adaptation period, we used values of RPE from 5 to 7, and for specific RT period, we used RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between series and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise with three minutes of rest between sets and exercises. The concentric and eccentric phase lasted two seconds, controlled by a metronome, for TRT, AST and SET. For PWT, the concentric phase was performed at maximum velocity (50% of 10 RM). No participant performed any repetitions until failure [19].\n2.2.2. Anthropometric Measures and Balance Body mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. 
This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\nBody mass (kg) and body mass index (BMI, kg·m−2) were measured through an Omron Scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; to 4—task complete with full independence) on the Berg Balance Scale: Sitting to standing; stand up for 2 min without support; stand up with eyes closed; stand up with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; pick up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and stand up on one leg. This scale allowed us to measure the risk of falls and quantitatively the balance of older women [20,21].\n2.2.3. Level of Physical Activity and Quality of Life To measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.\nTo measure PA levels we used modified Baecke questionnaire (MBQ) [22] and for QoL we used WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument that is frequently used to assess QoL of healthy and ill populations. It comprised twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL in a total of twenty-six questions. WHOQOL-OLD questionnaire is an add-on module from WHOQOL-BREEF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.", "The RT protocol consisted of training sessions twice a week with 72 h to 96 h of recovery between sessions for 16 weeks. The first two weeks for familiarization are followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, volume of repetitions and recovery interval. To assess muscle strength, 10 RM test was used [10]. Load progression was 5 to 10% every two weeks and rating of perceived exertion (RPE) [18] was used for load control. 
", "The RT protocol consisted of training sessions twice a week, with 72 h to 96 h of recovery between sessions, for 16 weeks. The first two weeks of familiarization were followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, with the same repetition volume and recovery intervals. To assess muscle strength, the 10 RM test was used [10]. Load progression was 5 to 10% every two weeks, and the rating of perceived exertion (RPE) [18] was used for load control. In the familiarization and adaptation periods, we used RPE values from 5 to 7, and in the specific RT period, RPE values from 6 to 8.\nIn the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between sets and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with the training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise, with three minutes of rest between sets and exercises. For TRT, AST and SET, the concentric and eccentric phases each lasted two seconds, controlled by a metronome. For PWT, the concentric phase was performed at maximum velocity with 50% of the 10 RM load. No participant performed repetitions to failure [19].", "Body mass (kg) and body mass index (BMI, kg·m−2) were measured with an Omron scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit.\nThe balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; 4—task completed with full independence) on the Berg Balance Scale: sitting to standing; standing for 2 min without support; standing with eyes closed; standing with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; picking up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and standing on one leg. This scale allowed us to quantify balance and the risk of falls in older women [20,21].", "To measure PA levels we used the modified Baecke questionnaire (MBQ) [22], and for QoL we used the WHOQOL-BREF and WHOQOL-OLD [23].\nThe MBQ allowed us to measure the habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument frequently used to assess the QoL of healthy and ill populations. It comprises twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL, in a total of twenty-six questions. The WHOQOL-OLD questionnaire is an add-on module to the WHOQOL-BREF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.", "Descriptive statistics were presented as mean ± standard deviation. Levene, Mauchly and Shapiro–Wilk tests were used to verify the assumptions of variance homogeneity, sphericity and normality of the data, respectively. One-factor ANOVA was used to analyze sample characteristics, and two-way ANOVA was performed to compare groups during the training period. Bonferroni adjustment was used for multiple comparisons. For relative differences we used the delta percentage (∆% = [(post-test score − pretest score)/pretest score] × 100). Statistical significance was set at 5%. For data analysis, we used SPSS v.23. Previously, a sample size calculation was performed with G*Power 3.1 (ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with a power of 0.8, and a total of 100 older women was necessary, 20 in each group.
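A minimal sketch of the delta-percentage calculation defined above (hypothetical scores, not study data):

```python
def delta_percent(pre, post):
    """Relative difference: delta% = (post - pre) / pre * 100."""
    if pre == 0:
        raise ValueError("pretest score must be non-zero")
    return (post - pre) / pre * 100

# e.g., a balance score rising from 50.2 to 52.0 (hypothetical values):
print(f"{delta_percent(50.2, 52.0):.1f}%")  # -> 3.6%
```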
", "The participants’ attendance rate was 87%. All RT protocols resulted in improvements in balance (Figure 1 and Table 2), total PA and QoL (Table 3).\nFor balance, we verified that after 16 weeks all RT groups improved, but only SET (p = 0.046) and TRT (p = 0.036) improved after the first 8 weeks. All RT groups finished the study with better results than the CG (p < 0.05).\nPA levels increased after 16 weeks in all RT groups except SET, despite baseline values already being high (p < 0.05) (Table 3). With the WHOQOL-BREF questionnaire, we observed no changes in the psychological domain. In the social domain, PWT (−6.5%) and TRT (−6.5%) decreased (p < 0.05). The physical domain of QoL improved only in SET (3.9%) after 16 weeks (p < 0.05). With the specific questionnaire for older adults, the WHOQOL-OLD, 16 weeks resulted in improvements in QoL for SET (7.9%), PWT (17.5%), TRT (17.2%) and AST (14.5%).
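For readers reproducing the group × time comparison outside SPSS, a hedged sketch using pingouin's mixed-design ANOVA (an assumed dependency; the authors used SPSS). The long-format data frame below is entirely hypothetical.

```python
import pandas as pd
import pingouin as pg

# Three hypothetical participants per group, measured at two time points.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["TRT"] * 6 + ["CG"] * 6,
    "time":    ["pre", "post"] * 6,
    "bbs":     [50, 53, 49, 52, 51, 54, 50, 50, 52, 51, 49, 50],
})

# Group x time mixed ANOVA, then Bonferroni-adjusted pairwise comparisons,
# mirroring the two-way ANOVA plus Bonferroni adjustment described above.
aov = pg.mixed_anova(data=df, dv="bbs", within="time", between="group", subject="subject")
post = pg.pairwise_tests(data=df, dv="bbs", within="time", between="group",
                         subject="subject", padjust="bonf")
print(aov)
print(post)
```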
", "Balance in older women is crucial to reduce the risk of falling and is a key component of ADL. The purpose of this study was to analyze the possible individual dose–response of four different types of RT on the balance, PA and QoL of older women. Our results showed that after 16 weeks, (a) QoL and PA levels increased in all RT groups, with higher improvements in TRT and PWT; (b) balance improved, which reduced the risk of falling; and (c) every RT protocol produced similar results in balance despite their prescription specifications, which allows us to conclude that balance responds similarly to all of them.\nAll RT protocols promoted similar benefits in balance and reduced the risk of falling. The improvement in balance may result from the increased lower body strength and muscle mass after 16 weeks. A stronger lower body leads to a more stable base of support and a reduced risk of falling [24]. Additionally, all RT can promote higher bone density, better metabolic capacity of skeletal muscle and higher gait speed, which contribute to better balance [7]. According to different studies [11,16,25,26], a major problem is the risk of falling. Older women who fall can develop a fear of falling that will limit their PA and mobility. After falling, the risk of institutionalization increases, and QoL is reduced due to the loss of autonomy and mobility, leading to lower PA levels. Also associated with balance is cognition, which regulates and controls mobility and can increase the level of PA and functional capacity. Both capacities are crucial for performing ADL and for improving QoL in older women [4,11,22,27]. Our results showed that the increase in balance in all RT groups promoted higher levels of QoL and PA. Additionally, Sampaio et al. [4] concluded that higher levels of balance are associated with general cognition.\nIncreasing PA can reduce the risk of mortality and some risk factors for cardiovascular diseases and, in addition, can improve aspects related to QoL. In our study, we observed that total PA increased in PWT, TRT and AST, which can explain the observed increase in QoL, because the highest levels of PA and physical fitness have a huge impact on QoL [5]. Another result we observed in all RT groups was that leisure and locomotion PA increased. According to Scarabottolo et al. [26], this result is positively associated with mental health and total PA levels [28,29], and can also explain the interest of the participants. Deciding which activity they are likely to perform will increase central and autonomous motivation [30].\nThe improvement in QoL observed in all RT groups with the WHOQOL-BREF [5,13] can be explained in part by the higher values of all PA dimensions, because sufficient energy, freedom from pain and the ability to perform ADL are fundamental factors for QoL [31]. The environmental domain had the lowest QoL value, and the physical and psychological domains the highest. This result is in accordance with Wong et al. [32] and can be explained by the moderate/vigorous intensity used in all RT protocols, which correlates with psychological health [33] and higher QoL levels [15,16].\nIn the WHOQOL-OLD questionnaire for QoL, we observed higher values in all RT groups after 16 weeks. According to Haider et al. [25], balance can justify our results because it is the strongest factor associated with autonomy, physical health and social participation. Additionally, balance is associated with mental health and well-being [11], factors that are very important for social participation [25,34]. Kwon et al. [35], after 12 weeks of exercise, reported that QoL improved but physical performance did not. Social participation has a crucial role in producing positive effects on the well-being and QoL of older adults [35,36,37], beyond the physical benefits that resulted from our program. Therefore, exercise promotes positive perceptions of QoL and not just physical and mental benefits [5,38,39], regardless of the type of RT.\nAll four RT protocols resulted in benefits for older women, without differences between the groups after 16 weeks. One possible reason is the high baseline values, which gave our older women greater autonomy and independence and favored similar outcomes across all protocols. Balance values in all groups before the RT protocol were near the maximum of each BBS task, which may have promoted a similar adaptation. Additionally, the good mobility and perception of health and emotional status observed in the QoL and PA questionnaires were determinants of our results [40]. Toselli et al. [12], with institutionalized older adults, reported that low levels of PA, autonomy and mobility promoted low levels of QoL, and that active-aging interventions should be engaged. QoL should be prioritized in later life in preference to disease-based outcomes, because older women give more importance to the way they see themselves [33]. RT is crucial to improve QoL in the later years of older women [11,27].\nOur study has some limitations: (a) not controlling the dietary routine may have interfered with the performance of the older women during RT sessions; (b) the age range of the participants limits the generalization of our results to older adults over 75 years; and (c) our participants already had high baseline values. Accordingly, for future studies, we recommend more research on the impact of different RT in older women with low levels of balance and PA, and in different age groups.
", "After 16 weeks of SET, PWT, TRT and AST, the older women improved their balance, QoL and PA levels. The different RT protocols promoted similar responses, reinforcing the importance of all types of RT as a crucial method for health promotion. Although PWT and TRT seem to produce greater effects on QoL, we recommend that exercise professionals use all RT protocols, not only TRT, to increase or maintain QoL and balance in older women. Moreover, matching the selection of the RT protocol to the personal interests of older women can improve the final results." ]
[ "intro", null, null, "methods", null, null, null, null, "results", "discussion", "conclusions" ]
[ "older women", "resistance training", "quality of life", "physical activity level", "balance" ]
1. Introduction: Worldwide, the population over 60 years of age, and its average life expectancy, are increasing year after year, mainly due to improvements in health systems and new technologies [1,2]. The aging process is associated with changes in older women that affect their daily routine, quality of life (QoL) and health. Sarcopenia is a syndrome that occurs after 60 years of age and is characterized by a progressive loss of muscle mass and strength, which is associated with declines in functional capacity, loss of balance and an increased risk of falling [3,4,5]. Several worldwide organizations have recommended resistance training (RT), endurance training, interval training, aerobic training and multicomponent training executed at moderate-to-vigorous intensities as methods to improve functional capacity and QoL [6,7,8]. RT is an effective method to increase fat-free mass, bone strength, muscle strength and functional capacity, and to reduce risk factors for several diseases such as cardiovascular and metabolic diseases [3,4,6,7,9]. Filho et al. [10] reported that older women, after twenty weeks of strength training, improved functional capacity, strength and the ability to perform activities of daily life (ADL). In addition to these effects, RT increased psychosocial health, which, combined, have an important influence on QoL. Additionally, physical fitness and physical activity (PA) combined can have considerable consequences on the performance of ADL and QoL [6,11]. For older women, QoL is becoming one of the most important factors as they age, since the extended years must be accompanied by a healthy life [4,12]. Puciato et al. [13] showed that higher PA levels resulted in higher physical, psychosocial and environmental QoL domains. Several studies [6,11,14,15,16] have focused on the impact of endurance-based exercise and strength training on balance and QoL. Fragala et al. [7] suggested that RT is crucial for improving and maintaining muscle strength, psychological well-being, QoL and healthy life expectancy. The size of those benefits depends on the type of RT protocol used. Traditional resistance training (TRT) is performed 2–3 times a week based on relative strength, with 8–12 repetitions at 60–80% of 1 RM. Strength endurance training (SET) comprises a higher number of repetitions at a light-to-moderate intensity until fatigue. Absolute strength training (AST) consists of no more than six repetitions at near-maximal loads [7,10]. Power training (PWT) is characterized by performing the concentric phase of the movement with the maximum amount of force as fast as possible, with loads of no more than 60% of 1 RM. According to different studies, PWT is important for functional improvements in ADL and balance and can promote more power at lower external resistances, resulting in a higher movement velocity to perform any task [6,7,10,17]. All these different RT types are important for the balance and QoL of older women. According to the literature, it is not clear whether any RT protocol can promote greater improvements in balance when compared to the others. Diverse prescriptions of RT can promote different dose–response relationships, since intensity and volume are key factors for positive adaptations [7]. To our knowledge, there is no study in the literature that has compared all four of these protocols (TRT, SET, PWT and AST) in the same investigation at the same time. 
Thus, we propose to analyze the effects of 16 weeks of four different RT protocols on the QoL, PA levels and balance of older women and to see if there are any individual responses to each one. 2. Materials and Methods: 2.1. Experimental Design This was a sixteen-week study that consisted of assessing the effects of TRT, SET, PWT and AST on balance, PA levels and QoL in older adults. To analyze these effects, all participants visited the laboratory at three specific times. The first visit took place after the first two weeks of familiarization with RT, the second after 6 weeks of adaptation to RT, and the third 24 h after the last training session, following eight weeks of specific RT. During all three visits, the participants performed the Berg Balance Scale and completed a questionnaire which comprised the modified Baecke questionnaire, World Health Organization Quality of Life—BREF (WHOQOL-BREF) and World Health Organization Quality of Life—OLD module (WHOQOL-OLD). All assessments were conducted in the same location and by instructors experienced in older adult training. 2.2. Sample and Ethical Procedures The inclusion criteria were (a) being aged between 60 and 75 years; and (b) the ability to perform exercise without contraindication and attendance of at least 85% of the sessions. Exclusion criteria were (a) any osteoarticular problem that affects the performance of any exercise; (b) medical contraindications (e.g., heart problems, surgeries); and (c) involvement in any other PA program at the same time. All procedures of the study were carried out in accordance with the Declaration of Helsinki and approved for human experiments by the local institutional ethics committee (protocol number 2.887.652). Before the experimental protocols began, all volunteers signed an informed consent form. Of one hundred fifty-six volunteers, only ninety-five older women fulfilled the inclusion criteria and were randomly assigned to 5 groups: SET, AST, PWT, TRT and control group (CG) (Table 1). All participants were instructed to keep their normal lifestyle over the study and were advised not to consume coffee, tea, alcohol or tobacco or to do any vigorous exercise during the 24 h before all assessments. The volunteers of the CG did not perform any systematic PA during the 16 weeks and maintained their daily routine. During the study, five older women dropped out (one each from the CG and TRT; three from PWT) due to illness (3) or not attending at least 85% of the sessions (2).
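The paper does not describe its randomization procedure; the sketch below is purely hypothetical, showing one simple way ninety-five participants could be dealt into five near-equal groups.

```python
import random

def allocate(participant_ids, groups, seed=42):
    """Shuffle IDs and deal them round-robin so group sizes stay near-equal."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {g: ids[i::len(groups)] for i, g in enumerate(groups)}

assignments = allocate(range(1, 96), ["SET", "AST", "PWT", "TRT", "CG"])
print({g: len(members) for g, members in assignments.items()})  # 19 per group
```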
2.2.1. Resistance Training Protocol The RT protocol consisted of training sessions twice a week, with 72 h to 96 h of recovery between sessions, for 16 weeks. The first two weeks of familiarization were followed by six weeks of adaptation and eight weeks of specific RT. All groups performed the same exercises (curl-ups, seated row, horizontal leg press, machine bench press, leg extension, and seated leg curl) in the same order, with the same repetition volume and recovery intervals. To assess muscle strength, the 10 RM test was used [10]. Load progression was 5 to 10% every two weeks, and the rating of perceived exertion (RPE) [18] was used for load control. In the familiarization and adaptation periods, we used RPE values from 5 to 7, and in the specific RT period, RPE values from 6 to 8. In the familiarization period, all volunteers performed 2 sets of 15 repetitions at 50–60% of 10 RM with one minute of rest between sets and exercises. The six weeks of adaptation were performed with 3 sets of 12–15 repetitions at 60% of 10 RM with one minute of rest between sets and two minutes between exercises. During the last eight weeks of RT, all groups performed their specific training with the training load adjusted using RPE: SET—1 set of 20–25 repetitions; AST—4–5 sets of 4–5 repetitions; TRT—2–3 sets of 8–12 repetitions; PWT—2–3 sets of 8–12 repetitions. To equalize the total number of repetitions, all groups performed 20–25 repetitions in every exercise, with three minutes of rest between sets and exercises. For TRT, AST and SET, the concentric and eccentric phases each lasted two seconds, controlled by a metronome. For PWT, the concentric phase was performed at maximum velocity with 50% of the 10 RM load. No participant performed repetitions to failure [19].
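A sketch encoding the specific-phase prescriptions above, plus a possible biweekly 5–10% load-progression rule gated by the RPE 6–8 target; the exact adjustment logic is our assumption, since the paper states only the progression range and that RPE was used for control.

```python
# Specific-phase prescriptions as described in the text.
PRESCRIPTIONS = {
    "SET": {"sets": "1",   "reps": "20-25"},
    "AST": {"sets": "4-5", "reps": "4-5"},
    "TRT": {"sets": "2-3", "reps": "8-12"},
    "PWT": {"sets": "2-3", "reps": "8-12"},  # concentric at max velocity, 50% of 10 RM
}

def progress_load(load_kg, session_rpe, rpe_low=6, rpe_high=8):
    """Biweekly adjustment within the stated 5-10% range: raise more when RPE
    is below target, hold near the lower bound when within target, and trim
    when above (the trim rule is an assumption, not from the paper)."""
    if session_rpe < rpe_low:
        return round(load_kg * 1.10, 1)   # upper end of the 5-10% range
    if session_rpe > rpe_high:
        return round(load_kg * 0.95, 1)
    return round(load_kg * 1.05, 1)       # lower end of the 5-10% range

print(progress_load(40.0, session_rpe=5.5))  # -> 44.0
```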
2.2.2. Anthropometric Measures and Balance Body mass (kg) and body mass index (BMI, kg·m−2) were measured with an Omron scale (OMRON, Kyoto, Japan), and height was assessed with a stadiometer (Sanny, Fortaleza, Brazil), all in the first visit. The balance of the older women was assessed from the performance of fourteen items scored from 0 to 4 (0—unable to perform; 4—task completed with full independence) on the Berg Balance Scale: sitting to standing; standing for 2 min without support; standing with eyes closed; standing with feet together; standing to sitting; sitting with back unsupported; transferring with and without support; reaching forward with outstretched arms; picking up objects from the floor; turning to look behind oneself; turning 360°; placing one foot on a step; placing one foot in front of the other; and standing on one leg. This scale allowed us to quantify balance and the risk of falls in older women [20,21]. 2.2.3. Level of Physical Activity and Quality of Life To measure PA levels we used the modified Baecke questionnaire (MBQ) [22], and for QoL we used the WHOQOL-BREF and WHOQOL-OLD [23]. The MBQ allowed us to measure the habitual PA of the older women through the score of three specific domains: work activity; sports activity; and leisure activity. The WHOQOL-BREF is an instrument frequently used to assess the QoL of healthy and ill populations. It comprises twenty-four questions covering four domains (physical, psychological, social relationships and environment) and two global questions of QoL, in a total of twenty-six questions. The WHOQOL-OLD questionnaire is an add-on module to the WHOQOL-BREF specific to older adults, with 24 items categorized into six facets: sensory abilities; autonomy; past, present and future activities; social participation; death and dying; and intimacy.
2.3. Statistical Analysis Descriptive statistics were presented as mean ± standard deviation. Levene, Mauchly and Shapiro–Wilk tests were used to verify the assumptions of variance homogeneity, sphericity and normality of the data, respectively. One-factor ANOVA was used to analyze sample characteristics, and two-way ANOVA was performed to compare groups during the training period. Bonferroni adjustment was used for multiple comparisons. For relative differences we used the delta percentage (∆% = [(post-test score − pretest score)/pretest score] × 100). Statistical significance was set at 5%. For data analysis, we used SPSS v.23. Previously, a sample size calculation was performed with G*Power 3.1 (ver. 3.1.9.7; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany) with a power of 0.8, and a total of 100 older women was necessary, 20 in each group.
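A hedged reconstruction of the a priori sample-size calculation using statsmodels instead of G*Power; the paper reports power = 0.8 and a required total of 100 (20 per group), so the effect size (Cohen's f) and alpha below are assumptions chosen for illustration, not the authors' stated inputs.

```python
from statsmodels.stats.power import FTestAnovaPower

# Solve for the total N needed in a five-group one-way ANOVA design.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.35,  # Cohen's f (assumed; not reported in the paper)
    alpha=0.05,        # assumed significance level
    power=0.80,        # as reported by the authors
    k_groups=5,        # SET, AST, PWT, TRT, CG
)
print(round(n_total))  # total N across the five groups, roughly 100
```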
3. Results: The participants’ attendance rate was 87%. All RT protocols resulted in improvements in balance (Figure 1 and Table 2), total PA and QoL (Table 3). 
For balance, we verified that after 16 weeks all RT groups improved, but only SET (p = 0.046) and TRT (p = 0.036) improved after the first 8 weeks. All RT groups finished the study with better results than the CG (p < 0.05). PA levels increased after 16 weeks in all RT groups except SET, despite baseline values already being high (p < 0.05) (Table 3). With the WHOQOL-BREF questionnaire, we observed no changes in the psychological domain. In the social domain, PWT (−6.5%) and TRT (−6.5%) decreased (p < 0.05). The physical domain of QoL improved only in SET (3.9%) after 16 weeks (p < 0.05). With the specific questionnaire for older adults, the WHOQOL-OLD, 16 weeks resulted in improvements in QoL for SET (7.9%), PWT (17.5%), TRT (17.2%) and AST (14.5%). 4. Discussion: Balance in older women is crucial to reduce the risk of falling and is a key component of ADL. The purpose of this study was to analyze the possible individual dose–response of four different types of RT on the balance, PA and QoL of older women. Our results showed that after 16 weeks, (a) QoL and PA levels increased in all RT groups, with higher improvements in TRT and PWT; (b) balance improved, which reduced the risk of falling; and (c) every RT protocol produced similar results in balance despite their prescription specifications, which allows us to conclude that balance responds similarly to all of them. All RT protocols promoted similar benefits in balance and reduced the risk of falling. The improvement in balance may result from the increased lower body strength and muscle mass after 16 weeks. A stronger lower body leads to a more stable base of support and a reduced risk of falling [24]. Additionally, all RT can promote higher bone density, better metabolic capacity of skeletal muscle and higher gait speed, which contribute to better balance [7]. According to different studies [11,16,25,26], a major problem is the risk of falling. Older women who fall can develop a fear of falling that will limit their PA and mobility. After falling, the risk of institutionalization increases, and QoL is reduced due to the loss of autonomy and mobility, leading to lower PA levels. Also associated with balance is cognition, which regulates and controls mobility and can increase the level of PA and functional capacity. Both capacities are crucial for performing ADL and for improving QoL in older women [4,11,22,27]. Our results showed that the increase in balance in all RT groups promoted higher levels of QoL and PA. Additionally, Sampaio et al. [4] concluded that higher levels of balance are associated with general cognition. Increasing PA can reduce the risk of mortality and some risk factors for cardiovascular diseases and, in addition, can improve aspects related to QoL. In our study, we observed that total PA increased in PWT, TRT and AST, which can explain the observed increase in QoL, because the highest levels of PA and physical fitness have a huge impact on QoL [5]. Another result we observed in all RT groups was that leisure and locomotion PA increased. According to Scarabottolo et al. [26], this result is positively associated with mental health and total PA levels [28,29], and can also explain the interest of the participants. Deciding which activity they are likely to perform will increase central and autonomous motivation [30]. 
The improvement in QoL observed in all RT groups with the WHOQOL-BREF [5,13] can be explained in part by the higher values of all PA dimensions, because sufficient energy, freedom from pain and the ability to perform ADL are fundamental factors for QoL [31]. The environmental domain had the lowest QoL value, and the physical and psychological domains the highest. This result is in accordance with Wong et al. [32] and can be explained by the moderate/vigorous intensity used in all RT protocols, which correlates with psychological health [33] and higher QoL levels [15,16]. In the WHOQOL-OLD questionnaire for QoL, we observed higher values in all RT groups after 16 weeks. According to Haider et al. [25], balance can justify our results because it is the strongest factor associated with autonomy, physical health and social participation. Additionally, balance is associated with mental health and well-being [11], factors that are very important for social participation [25,34]. Kwon et al. [35], after 12 weeks of exercise, reported that QoL improved but physical performance did not. Social participation has a crucial role in producing positive effects on the well-being and QoL of older adults [35,36,37], beyond the physical benefits that resulted from our program. Therefore, exercise promotes positive perceptions of QoL and not just physical and mental benefits [5,38,39], regardless of the type of RT. All four RT protocols resulted in benefits for older women, without differences between the groups after 16 weeks. One possible reason is the high baseline values, which gave our older women greater autonomy and independence and favored similar outcomes across all protocols. Balance values in all groups before the RT protocol were near the maximum of each BBS task, which may have promoted a similar adaptation. Additionally, the good mobility and perception of health and emotional status observed in the QoL and PA questionnaires were determinants of our results [40]. Toselli et al. [12], with institutionalized older adults, reported that low levels of PA, autonomy and mobility promoted low levels of QoL, and that active-aging interventions should be engaged. QoL should be prioritized in later life in preference to disease-based outcomes, because older women give more importance to the way they see themselves [33]. RT is crucial to improve QoL in the later years of older women [11,27]. Our study has some limitations: (a) not controlling the dietary routine may have interfered with the performance of the older women during RT sessions; (b) the age range of the participants limits the generalization of our results to older adults over 75 years; and (c) our participants already had high baseline values. Accordingly, for future studies, we recommend more research on the impact of different RT in older women with low levels of balance and PA, and in different age groups. 5. Conclusions: After 16 weeks of SET, PWT, TRT and AST, the older women improved their balance, QoL and PA levels. The different RT protocols promoted similar responses, reinforcing the importance of all types of RT as a crucial method for health promotion. Although PWT and TRT seem to produce greater effects on QoL, we recommend that exercise professionals use all RT protocols, not only TRT, to increase or maintain QoL and balance in older women. Moreover, matching the selection of the RT protocol to the personal interests of older women can improve the final results.
Background: Physical activity (PA) and physical fitness are key factors in the quality of life (QoL) of older women. The aging process promotes a decrease in capacities such as strength, which affects the activities of daily life. This loss of strength leads to a reduction in balance, an increased risk of falls and a sedentary lifestyle. Resistance training (RT) is an effective method to improve balance and strength, but different RT protocols can promote different responses. Power training has a higher impact on the performance of activities of daily life. Therefore, our study aimed to analyze whether different RT protocols promote individual responses in the balance, QoL and PA levels of older women, and which is most effective for them. Methods: Ninety-four older women were divided into four RT groups (relative strength endurance training, SET; traditional strength training, TRT; absolute strength training, AST; power training, PWT) and one control group (CG). Each RT group performed a specific protocol for 16 weeks. At baseline and after 8 and 16 weeks, we assessed balance with the Berg Balance Scale, PA levels with a modified Baecke questionnaire, and QoL with the World Health Organization Quality of Life-BREF (WHOQOL-BREF) and World Health Organization Quality of Life-OLD module (WHOQOL-OLD). Results: Balance improved after 16 weeks (baseline vs. 16 weeks; p < 0.05) without differences between the RT groups. PWT (2.82%) and TRT (3.48%) improved balance in the first 8 weeks (baseline vs. 8 weeks; p < 0.05). PA levels increased in PWT, TRT and AST after 16 weeks (baseline vs. 16 weeks; p < 0.05). Conclusions: All RT protocols improved PA levels and QoL after 16 weeks of training. For the improvement of balance, QoL and PA, older women can be subjected to PWT, AST and SET, and not be restricted to TRT.
1. Introduction: Worldwide, the population over 60 years of age and its average life expectancy are increasing year after year, mainly due to improvements in health systems and new technologies [1,2]. The aging process is associated with changes in older women that affect their daily routine, quality of life (QoL) and health. Sarcopenia is a syndrome that typically appears after 60 years of age and is characterized by a progressive loss of muscle mass and strength, which is associated with declines in functional capacity, loss of balance and an increased risk of falling [3,4,5]. Several international organizations recommend resistance training (RT), endurance training, interval training, aerobic training and multicomponent training, executed at moderate-to-vigorous intensities, as methods to improve functional capacity and QoL [6,7,8]. RT is an effective method to increase fat-free mass, bone strength, muscle strength and functional capacity, and to reduce risk factors for several diseases such as cardiovascular and metabolic diseases [3,4,6,7,9]. Filho et al. [10] reported that older women, after twenty weeks of strength training, improved functional capacity, strength and the ability to perform activities of daily life (ADL). In addition to these effects, RT increased psychosocial health, which, combined, has an important influence on QoL. Additionally, physical fitness and physical activity (PA) combined can considerably affect the performance of ADL and QoL [6,11]. For older women, QoL becomes one of the most important factors as they grow older, since the extended years should be accompanied by a healthy life [4,12]. Puciato et al. [13] showed that higher PA levels resulted in higher scores in the physical, psychological, social and environmental QoL domains. Several studies [6,11,14,15,16] have focused on the impact of endurance-based exercise and strength training on balance and QoL. Fragala et al. [7] suggested that RT is crucial for improving and maintaining muscle strength, psychological well-being, QoL and healthy life expectancy. The size of those benefits depends on the type of RT protocol used. Traditional resistance training (TRT) is performed 2–3 times a week based on relative strength, with 8–12 repetitions at 60–80% of 1 RM. Strength endurance training (SET) comprises a higher number of repetitions at a light-to-moderate intensity, performed until fatigue. Absolute strength training (AST) consists of no more than six repetitions at loads close to the maximum the neuromuscular system can produce [7,10]. Power training (PWT) is characterized by performing the concentric phase of the movement with maximal force as fast as possible, with loads of no more than 60% of 1 RM. According to different studies, PWT is important for functional improvements in ADL and balance because producing more power at lower external resistances results in a higher movement velocity when performing any task [6,7,10,17]. All these RT types are important for the balance and QoL of older women; the four prescriptions are summarized in the sketch below. According to the literature, it is not clear whether any one RT protocol promotes greater improvements in balance than the others. Diverse RT prescriptions can promote different dose–response relationships, since intensity and volume are key factors for positive adaptations [7]. To our knowledge, no study in the literature has compared all four protocols (TRT, SET, PWT and AST) in the same investigation at the same time. 
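To make the four prescriptions easier to compare side by side, the sketch below encodes them as plain data. This is purely illustrative: the values are taken from the definitions above, the field names are our own, and parameters the text leaves qualitative are recorded as strings.

RT_PROTOCOLS = {
    "TRT": {  # traditional resistance training
        "sessions_per_week": "2-3",
        "repetitions": "8-12",
        "load": "60-80% of 1 RM",
    },
    "SET": {  # strength endurance training
        "repetitions": "higher number, until fatigue",
        "load": "light to moderate",
    },
    "AST": {  # absolute strength training
        "repetitions": "<= 6",
        "load": "near-maximal",
    },
    "PWT": {  # power training
        "repetitions": None,  # not specified in the text
        "load": "<= 60% of 1 RM",
        "execution": "concentric phase as fast as possible",
    },
}

for name, prescription in RT_PROTOCOLS.items():
    print(name, prescription)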
Thus, we propose to analyze the effects of 16 weeks of four different RT protocols on the QoL, PA levels and balance of older women, and to determine whether each promotes individual responses. 5. Conclusions: After 16 weeks of SET, PWT, TRT and AST, the older women improved their balance, QoL and PA levels. The different RT protocols promoted similar responses, reinforcing the importance of all types of RT as a crucial method for health promotion. Although PWT and TRT seem to produce greater effects on QoL, we recommend that exercise professionals use any of the RT protocols, not only TRT, to increase or maintain QoL and balance in older women. Moreover, taking the personal interests of older women into account when selecting an RT protocol can improve the final results.
Background: Physical activity (PA) and physical fitness are key factors for the quality of life (QoL) of older women. The aging process promotes a decrease in capacities such as strength, which affects activities of daily life. This loss of strength leads to reduced balance and an increased risk of falls, as well as a sedentary lifestyle. Resistance training (RT) is an effective method to improve balance and strength, but different RT protocols can promote different responses. Power training has a higher impact on the performance of activities of daily life. Therefore, our study aimed to analyze whether different RT protocols promote individual responses in the balance, QoL and PA levels of older women, and which is most effective. Methods: Ninety-four older women were divided into four RT groups (relative strength endurance training, SET; traditional strength training, TRT; absolute strength training, AST; power training, PWT) and one control group (CG). Each RT group performed a specific protocol for 16 weeks. At baseline and after 8 and 16 weeks, we assessed balance with the Berg balance scale, PA levels with a modified Baecke questionnaire, and QoL with the World Health Organization Quality of Life-BREF (WHOQOL-BREF) and World Health Organization Quality of Life-OLD module (WHOQOL-OLD). Results: Balance improved after 16 weeks (baseline vs. 16 weeks; p < 0.05) without differences between the RT groups. PWT (2.82%) and TRT (3.48%) improved balance in the first 8 weeks (baseline vs. 8 weeks; p < 0.05). PA levels increased in PWT, TRT and AST after 16 weeks (baseline vs. 16 weeks; p < 0.05). Conclusions: All RT protocols improved PA levels and QoL after 16 weeks of training. For the improvement of balance, QoL and PA, older women can be subjected to PWT, AST and SET, and need not be restricted to TRT.
8,964
388
[ 4065, 160, 342, 197, 165, 161 ]
11
[ "weeks", "repetitions", "rt", "older", "performed", "qol", "balance", "sets", "older women", "women" ]
[ "active aging engaged", "exercise strength", "health sarcopenia syndrome", "strength rt endurance", "qol health sarcopenia" ]
[CONTENT] older women | resistance training | quality of life | physical activity level | balance [SUMMARY]
[CONTENT] older women | resistance training | quality of life | physical activity level | balance [SUMMARY]
[CONTENT] older women | resistance training | quality of life | physical activity level | balance [SUMMARY]
[CONTENT] older women | resistance training | quality of life | physical activity level | balance [SUMMARY]
[CONTENT] older women | resistance training | quality of life | physical activity level | balance [SUMMARY]
[CONTENT] older women | resistance training | quality of life | physical activity level | balance [SUMMARY]
[CONTENT] Aged | Exercise | Female | Humans | Muscle Strength | Physical Fitness | Quality of Life | Resistance Training | Surveys and Questionnaires [SUMMARY]
[CONTENT] Aged | Exercise | Female | Humans | Muscle Strength | Physical Fitness | Quality of Life | Resistance Training | Surveys and Questionnaires [SUMMARY]
[CONTENT] Aged | Exercise | Female | Humans | Muscle Strength | Physical Fitness | Quality of Life | Resistance Training | Surveys and Questionnaires [SUMMARY]
[CONTENT] Aged | Exercise | Female | Humans | Muscle Strength | Physical Fitness | Quality of Life | Resistance Training | Surveys and Questionnaires [SUMMARY]
[CONTENT] Aged | Exercise | Female | Humans | Muscle Strength | Physical Fitness | Quality of Life | Resistance Training | Surveys and Questionnaires [SUMMARY]
[CONTENT] Aged | Exercise | Female | Humans | Muscle Strength | Physical Fitness | Quality of Life | Resistance Training | Surveys and Questionnaires [SUMMARY]
[CONTENT] active aging engaged | exercise strength | health sarcopenia syndrome | strength rt endurance | qol health sarcopenia [SUMMARY]
[CONTENT] active aging engaged | exercise strength | health sarcopenia syndrome | strength rt endurance | qol health sarcopenia [SUMMARY]
[CONTENT] active aging engaged | exercise strength | health sarcopenia syndrome | strength rt endurance | qol health sarcopenia [SUMMARY]
[CONTENT] active aging engaged | exercise strength | health sarcopenia syndrome | strength rt endurance | qol health sarcopenia [SUMMARY]
[CONTENT] active aging engaged | exercise strength | health sarcopenia syndrome | strength rt endurance | qol health sarcopenia [SUMMARY]
[CONTENT] active aging engaged | exercise strength | health sarcopenia syndrome | strength rt endurance | qol health sarcopenia [SUMMARY]
[CONTENT] weeks | repetitions | rt | older | performed | qol | balance | sets | older women | women [SUMMARY]
[CONTENT] weeks | repetitions | rt | older | performed | qol | balance | sets | older women | women [SUMMARY]
[CONTENT] weeks | repetitions | rt | older | performed | qol | balance | sets | older women | women [SUMMARY]
[CONTENT] weeks | repetitions | rt | older | performed | qol | balance | sets | older women | women [SUMMARY]
[CONTENT] weeks | repetitions | rt | older | performed | qol | balance | sets | older women | women [SUMMARY]
[CONTENT] weeks | repetitions | rt | older | performed | qol | balance | sets | older women | women [SUMMARY]
[CONTENT] strength | training | higher | qol | functional | rt | capacity | functional capacity | important | life [SUMMARY]
[CONTENT] repetitions | sets | 10 | performed | weeks | rpe | 10 rm | exercises | stand | specific [SUMMARY]
[CONTENT] 05 | domain | weeks | table | weeks rt groups | weeks rt | improved | 16 | set | 16 weeks [SUMMARY]
[CONTENT] rt | increase | qol | trt | women | older women | rt protocols | pwt trt | older | protocols [SUMMARY]
[CONTENT] rt | repetitions | qol | weeks | balance | older | performed | whoqol | sets | women [SUMMARY]
[CONTENT] rt | repetitions | qol | weeks | balance | older | performed | whoqol | sets | women [SUMMARY]
[CONTENT] QoL ||| ||| ||| ||| ||| QoL [SUMMARY]
[CONTENT] Ninety-four | four | SET | TRT | AST | PWT | one ||| 16 weeks ||| 8 and 16 weeks | Berg | Baecke | QoL | World Health Organization Quality of Life-BREF | WHOQOL-BREF | World Health Organization Quality of Life-OLD | WHOQOL-OLD [SUMMARY]
[CONTENT] 16 weeks | 16 weeks | p < | 0.05 ||| 2.82% | TRT | 3.48% | the first 8 weeks | 8 weeks | p < | 0.05 ||| PWT | TRT | AST | 16 weeks | 16 weeks | p < | 0.05 [SUMMARY]
[CONTENT] QoL | 16 weeks ||| QoL | PA | AST | SET | TRT [SUMMARY]
[CONTENT] QoL ||| ||| ||| ||| ||| QoL ||| Ninety-four | four | SET | TRT | AST | PWT | one ||| 16 weeks ||| 8 and 16 weeks | Berg | Baecke | QoL | World Health Organization Quality of Life-BREF | WHOQOL-BREF | World Health Organization Quality of Life-OLD | WHOQOL-OLD ||| 16 weeks | 16 weeks | p < | 0.05 ||| 2.82% | TRT | 3.48% | the first 8 weeks | 8 weeks | p < | 0.05 ||| PWT | TRT | AST | 16 weeks | 16 weeks | p < | 0.05 ||| QoL | 16 weeks ||| QoL | PA | AST | SET | TRT [SUMMARY]
[CONTENT] QoL ||| ||| ||| ||| ||| QoL ||| Ninety-four | four | SET | TRT | AST | PWT | one ||| 16 weeks ||| 8 and 16 weeks | Berg | Baecke | QoL | World Health Organization Quality of Life-BREF | WHOQOL-BREF | World Health Organization Quality of Life-OLD | WHOQOL-OLD ||| 16 weeks | 16 weeks | p < | 0.05 ||| 2.82% | TRT | 3.48% | the first 8 weeks | 8 weeks | p < | 0.05 ||| PWT | TRT | AST | 16 weeks | 16 weeks | p < | 0.05 ||| QoL | 16 weeks ||| QoL | PA | AST | SET | TRT [SUMMARY]
Comparison of quantitation of cytomegalovirus DNA by real-time PCR in whole blood with the cytomegalovirus antigenemia assay.
25553288
Quantitation of cytomegalovirus (CMV) DNA using real-time PCR has been utilized for monitoring CMV infection. However, the CMV antigenemia assay is still the 'gold standard' assay. There are only a few studies in Korea that compared real-time PCR quantitation of CMV DNA in whole blood with the antigenemia assay, and most of these studies have been limited to transplant recipients.
BACKGROUND
A total of 479 whole blood samples from 79 patients in different disease groups were tested by real-time CMV DNA PCR using the Q-CMV real-time complete kit (Nanogen Advanced Diagnostic S.r.L., Italy) and by the CMV antigenemia assay (CINA Kit, ArgeneBiosoft, France), and the results were compared. Patients tested repeatedly were identified, and their charts were reviewed for ganciclovir therapy.
METHOD
The concordance rate of the two assays was 86.4% (Cohen's kappa coefficient=0.659). The quantitative correlation between the two assays was moderate (r=0.5504, P<0.0001). Among 20 patients tested repeatedly with the two assays, 13 were transplant recipients treated with ganciclovir. Before treatment, CMV was detected earlier by real-time CMV DNA PCR than by the antigenemia assay, with a median difference of 8 days. After treatment, the antigenemia assay achieved negative results earlier than real-time CMV DNA PCR, with a median difference of 10.5 days.
RESULTS
The Q-CMV real-time complete kit is a useful tool for the early detection of CMV infection in whole blood samples from transplant recipients.
CONCLUSIONS
[ "Antiviral Agents", "Cytomegalovirus", "Cytomegalovirus Infections", "DNA, Viral", "Ganciclovir", "Humans", "Immunoassay", "Organ Transplantation", "Phosphoproteins", "Real-Time Polymerase Chain Reaction", "Viral Matrix Proteins", "Virology" ]
4272973
INTRODUCTION
Cytomegalovirus (CMV) infection is a major cause of morbidity in recipients of solid organ and bone marrow transplants despite significant advances in preemptive therapy and early diagnosis, and it limits the effectiveness of organ transplantation as a procedure for the treatment of end-stage diseases [1]. Many techniques are currently available for identifying and monitoring CMV infection, including shell vial culture, the antigenemia assay, the hybrid capture assay, and qualitative and quantitative PCR assays [2]. The CMV pp65 antigenemia test is an immunofluorescence-based assay that uses an indirect immunofluorescence technique to identify the pp65 protein of CMV in peripheral blood leukocytes. The CMV pp65 assay is widely used as the gold standard for monitoring CMV infection and the response of CMV-positive patients to antiviral treatment. Even though the results of the pp65 assay can be obtained within a few hours, it is labor-intensive and suffers from significant inter-laboratory variation with respect to sensitivity (from 50% to 83%) and specificity (less than 80%) [3, 4, 5]. Quantitation of CMV DNA by real-time PCR is a useful diagnostic technique owing to its high detection sensitivity and simplicity of use [6]. However, a consensus cut-off level for the diagnosis of CMV infection has not yet been established [7]. The Q-CMV real-time complete kit (Cepheid, Nanogen Advanced Diagnostic S.r.L., Torino, Italy) is a real-time PCR kit for the quantitation of CMV DNA in whole blood. So far, only one study comparing this kit with the CMV antigenemia assay, in transplant recipients, has been reported [8]. The aim of this study was to compare the Q-CMV kit with the CMV antigenemia assay in different disease groups of patients and to investigate the clinical advantage of using real-time PCR for quantitation of CMV DNA in whole blood.
METHODS
1. Patients and samples A retrospective study was conducted on a total of 79 patients who visited Korea University Anam Hospital from June 2011 to March 2013. The patients comprised 41 stem cell transplant (SCT) recipients, 14 solid organ transplant recipients, 11 patients with hematologic or solid organ malignancies, 11 patients with inflammatory-related illnesses, one patient with diabetes mellitus (DM), and one patient with paroxysmal nocturnal hemoglobinuria (PNH) (Table 1). For the patients who were tested repeatedly, medical records were reviewed to determine whether ganciclovir had been administered. All patients signed an informed consent under the protocol for human use, and the study was approved by the Human Use Ethical Committee of Korea University Anam Hospital. EDTA blood samples were collected simultaneously for the antigenemia assay and the real-time CMV DNA PCR. 2. The CMV pp65 antigenemia assay The CMV pp65 antigenemia assay was carried out within 4 hours of specimen collection using the CINA Kit system (ArgeneBiosoft, Varilhes, France). Briefly, cytospin slides with 200,000 cells per glass slide were prepared, fixed, and permeabilized. The presence of CMV pp65 antigen was detected using a monoclonal antibody against the pp65 antigen and visualized with a fluorescent secondary antibody. The results were expressed as the number of positive cells per slide, with each slide containing 200,000 leukocytes. The test was considered positive when ≥1 fluorescent cell was observed per 200,000 leukocytes.
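As a simple illustration of the read-out logic described above (our own sketch, not the kit's software), the snippet below maps a pp65-positive cell count per 200,000-leukocyte slide to the qualitative call used here and to the antigenemia value groups defined later in the Results; the function names are hypothetical.

def antigenemia_result(positive_cells: int) -> str:
    """Qualitative call: positive when >= 1 pp65-positive cell/200,000 WBCs."""
    return "positive" if positive_cells >= 1 else "negative"

def antigenemia_group(positive_cells: int) -> str:
    """Value groups I-IV as defined in the Results section."""
    if positive_cells == 0:
        return "I (negative)"
    if positive_cells <= 10:
        return "II (low, 1-10)"
    if positive_cells <= 100:
        return "III (intermediate, 11-100)"
    return "IV (high, >100)"

print(antigenemia_result(3), antigenemia_group(3))  # positive II (low, 1-10)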
3. Quantitation of CMV DNA by real-time PCR in whole blood The real-time CMV DNA PCR was carried out alongside the CMV antigenemia assay using the Q-CMV real-time complete kit according to the manufacturer's instructions. The test is based on the simultaneous amplification of the exon 4 region of the CMV MIEA (Major Immediate Early Antigen, HCMV UL123) gene and of the human β-globin gene, which served as the internal control. Briefly, CMV DNA was isolated from 200 µL of EDTA-treated whole blood using the QIAamp DNA blood mini kit (Qiagen, Hilden, Germany). Five microliters of the extracted DNA and 20 µL of the reaction mix were added to each microplate well. Sterile water containing the reaction mix was used as a negative control. The PCR conditions were as follows: decontamination at 50℃ for 2 min and initial denaturation at 95℃ for 10 min, followed by 45 cycles of 95℃ for 15 sec and 60℃ for 1 min. PCR reactions were performed on a C1000 thermal cycler with a CFX96 real-time system (Bio-Rad, Hercules, CA, USA). The laboratory-determined limit of detection (LOD) of the assay was 65 copies/mL, and the assay detected CMV DNA over a linear range from 790 copies/mL to 5×10⁶ copies/mL. The LOD and the linear detection range were determined according to the manufacturer's package insert.
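The analytical limits above lend themselves to a small reporting rule. The sketch below is our own illustration, assuming the stated LOD (65 copies/mL) and linear range (790 to 5×10⁶ copies/mL); the report wording is hypothetical, not the kit's.

LOD = 65                 # copies/mL, laboratory-determined limit of detection
LINEAR_LOW = 790         # copies/mL, lower end of the linear range
LINEAR_HIGH = 5_000_000  # copies/mL, upper end of the linear range

def report_viral_load(copies_per_ml: float) -> str:
    """Relate a raw value to the assay's analytical limits (illustrative)."""
    if copies_per_ml < LOD:
        return "not detected"
    if copies_per_ml < LINEAR_LOW:
        return "detected, below the linear (quantifiable) range"
    if copies_per_ml <= LINEAR_HIGH:
        return f"{copies_per_ml:.0f} copies/mL"
    return "above the linear range (> 5x10^6 copies/mL)"

# e.g., 45 of the 61 PCR-positive/antigenemia-negative samples in the Results
# fall into the "detected, below the linear range" band (< 790 copies/mL).
print(report_viral_load(400))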
4. Definition of CMV infection and CMV disease CMV infection was defined as the detection of CMV DNA in blood leukocytes in the absence of clinical manifestations or organ function abnormalities [9]. CMV disease was defined as documented CMV infection associated with clinical symptoms, such as unexplained fever and leukopenia (<4×10⁹/L in two consecutive samples) and/or thrombocytopenia (<150×10⁹/L), not explained by another cause during follow-up [9]. 5. Statistical analysis The proportions of positive and negative results were compared using the chi-square test. Agreement between the real-time CMV DNA PCR and antigenemia assay results was assessed by Cohen's kappa coefficient with 95% confidence intervals (CI). Cohen's kappa coefficient values were interpreted as follows: 0–0.2, slight agreement; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; and 0.81–1, excellent. The correlation between the two assays was analyzed using Pearson's correlation coefficient. P values less than 0.05 were considered significant. MedCalc (MedCalc Software, Mariakerke, Belgium) and SPSS version 20 (SPSS Inc., Chicago, IL, USA) were used to analyze the data.
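For concreteness, the agreement statistic can be reproduced from the 2×2 counts reported in the Results (479 samples, 414 concordant, 61 PCR-positive/antigenemia-negative, 4 PCR-negative/antigenemia-positive). The dependency-free sketch below is ours; only the cell counts come from the paper.

def cohen_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for a 2x2 table: a = both positive,
    b = PCR+/antigenemia-, c = PCR-/antigenemia+, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement computed from the marginal totals of each assay
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

def interpret_kappa(k: float) -> str:
    """Interpretation bands as used in this study."""
    for upper, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if k <= upper:
            return label
    return "excellent"

# 95 both-positive = 156 PCR-positive minus 61 discrepant;
# 319 both-negative = 414 concordant minus 95 both-positive
k = cohen_kappa(a=95, b=61, c=4, d=319)
print(f"kappa = {k:.3f} ({interpret_kappa(k)})")  # kappa = 0.659 (substantial)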
RESULTS
1. Qualitative results of the real-time CMV DNA PCR and the antigenemia assay A total of 479 samples were obtained from 79 patients; testing was performed only once in 29 patients and repeatedly in 50 patients. Out of the 479 samples, CMV was detected in 156 (32.6%) by the real-time CMV DNA PCR and in 99 (20.7%) by the antigenemia assay. Substantial concordance between the two assays was observed in 414 (86.4%) samples (Cohen's kappa coefficient=0.659, 95% CI=0.585-0.732, Table 2). Of the 65 discrepant samples, 61 were antigenemia-negative but real-time CMV DNA PCR-positive; among these, 45 were below 790 copies/mL and 16 were above 790 copies/mL. Four samples were antigenemia-positive but real-time CMV DNA PCR-negative. There were no clinical particularities in these four patients, and the number of pp65-positive cells was between 1 and 6 cells/200,000 leukocytes in all four samples.
2. Quantitative results of the real-time CMV DNA PCR and the antigenemia assay Pearson's correlation analysis was performed to assess the correlation between the CMV DNA load quantitated by real-time PCR and the number of positive cells determined by the antigenemia assay. There was a statistically significant linear correlation between the two assays for the 156 real-time CMV DNA PCR-positive samples (P<0.0001), but the correlation was only moderate (r=0.5504, Fig. 1). Antigenemia values were divided into four groups: group I, negative; group II, low values (1-10 positive cells/200,000 leukocytes); group III, intermediate values (11-100 positive cells/200,000 leukocytes); and group IV, high values (>100 positive cells/200,000 leukocytes). The median CMV viral load by real-time PCR was calculated for each antigenemia category. As shown in Fig. 2, when the antigenemia result was negative, the median CMV viral load was 0.1 log10 copies/mL. In groups II, III and IV, the median viral load increased to 3.1, 4.1 and 5.1 log10 copies/mL, respectively.
3. Longitudinal analysis in patients with CMV disease Among 23 patients showing discrepant results between the two assays, 20 were tested repeatedly. Thirteen of these 20 CMV-positive patients were treated with ganciclovir, and CMV disease was observed in 5 of the 13. The median time to the first positive result was 15.5 days (range, 0-56 days) by CMV viral load on the real-time CMV DNA PCR and 23.5 days (range, 8-252 days) by positive cells on the antigenemia assay. A positive CMV viral load was detected before antigenemia positivity in 9 of the 13 ganciclovir-treated patients; the antigenemia assay reached its threshold earlier in one patient and simultaneously in three patients. At the start of ganciclovir therapy, the median CMV viral load was 2,716 copies/mL (range, negative to 93,918 copies/mL) and the median number of antigenemia-positive cells was 3/200,000 WBCs (range, 0-351/200,000 WBCs). Clinical monitoring during treatment revealed that the median time to a negative result after starting ganciclovir therapy was 36.0 days (range, 11-57 days) by the real-time CMV DNA PCR and 25.5 days (range, 3-53 days) by the antigenemia assay. The antigenemia assay became negative before the real-time CMV DNA PCR in 8 of the 13 ganciclovir-treated patients; the real-time CMV DNA PCR became negative earlier in 3 patients, and the two assays became negative simultaneously in 2 patients.
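As a quick illustration of the log10 convention used in Fig. 2 (a back-of-the-envelope computation of ours, using only the medians reported above), converting the reported medians back to copies/mL shows that each step up in antigenemia group corresponds to roughly a ten-fold higher median DNA load.

median_log10 = {"I (negative)": 0.1, "II (1-10)": 3.1,
                "III (11-100)": 4.1, "IV (>100)": 5.1}

for group, v in median_log10.items():
    # 10**v converts a log10 value back to copies/mL
    print(f"{group}: {v} log10 = {10**v:,.0f} copies/mL")
# II ~ 1,259; III ~ 12,589; IV ~ 125,893 copies/mL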
null
null
[ "1. Patients and samples", "2. The CMV pp65 antigenemia assay", "3. Quantitation of CMV DNA by real-time PCR in whole blood", "4. Definition of CMV infection and CMV disease", "5. Statistical analysis", "1. Qualitative results of the real-time CMV DNA PCR and the antigenemia assay", "2. Quantitative results of the real-time CMV DNA PCR and the antigenemia assay", "3. Longitudinal analysis in patients with CMV disease" ]
[ "A retrospective study was conducted on a total of 79 patients who visited Korea University Anam Hospital from June 2011 to March 2013. The patients comprised of 41 stem cell transplant (SCT) recipients, 14 solid organ transplant recipients, 11 patients with hematologic or solid organ malignancies, 11 patients with inflammatory-related illnesses, one patient with diabetes mellitus (DM), and one patient with paroxysmal nocturnal hemoglobinuria (PNH) (Table 1). For the patients who were tested repeatedly, their medical records were reviewed to find if ganciclovir was treated. All patients signed an informed consent under the protocol for human use. The study was approved by the Human Use Ethical Committee of Korea University Anam Hospital. EDTA blood samples were collected simultaneously for the antigenemia assay and the real-time CMV DNA PCR.", "The CMV pp65 antigenemia assay was carried out within 4 hours of specimen collection using the CINA Kit system (ArgeneBiosoft, Varilhes, France). Briefly, the cytospin slides, with 200,000 cells per glass slide, were prepared, fixed, and permeabilized. The presence of CMV pp65 antigen was then detected using a monoclonal antibody against the CMV pp65 antigen and visualized with a fluorescent secondary antibody. The results were expressed as the number of positive cells per slide, with each slide containing 200,000 leukocytes. The test was considered positive when ≥1 fluorescent cell was observed for every 200,000 leukocytes.", "The real-time CMV DNA PCR was carried out alongside the CMV antigenemia assay using the Q-CMV real-time complete kit according to the manufacturer's instructions. This test was based on the simultaneous amplification of the exon 4 region of the CMV MIEA (Major Immediate Early Antigen HCMVUL123) gene and the human β-globin gene DNA that was used as the internal control. Briefly, CMV DNA was isolated from 200 µL of EDTA-treated whole blood samples using the QIAamp DNA blood mini kit (Qiagen, Hilden, Germany). Five microliters of the extracted DNA sample and 20 µL of the reaction mix were added to each microplate well. Sterile water containing the reaction mix was used as a negative control. The PCR conditions were as follows: decontamination at 50℃ for 2 min, initial denaturation at 95℃ for 10 min, followed by 45 cycles at 95℃ for 15 sec each, and at 60℃ for 1 min. PCR reactions were performed on an Applied C1000 thermal cycler with a CFX96 real-time system (Bio-Rad, Hercules, CA, USA). The laboratory-determined limit of detection (LOD) of the assay was 65 copies/mL. The assay detected CMV DNA in a linear range from 790 copies/mL to 5×106 copies/mL. The value of LOD and the linear detection range were determined according to the manufacturer's package insert instructions.", "CMV infection was defined as the detection of CMV DNA in blood leukocytes in the absence of clinical manifestations or organ function abnormalities [9]. CMV disease was defined as the association of documented CMV infection with clinical symptoms, such as unexplained fever and leukopenia (<4×109/L in two consecutive samples) and/or thrombocytopenia (<150×109/L) not developing in any patient during the follow-up [9].", "A proportion of the positive and negative results were compared by using the chi-square test. Agreement between the real-time CMV DNA PCR and the antigenemia assay results was assessed by Cohen's kappa coefficient value with 95% confidence intervals (CI). 
Cohen's kappa coefficient values were interpreted as follows: 0-0.2 as slight agreement, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1 as excellent. The correlation between the two assays was analyzed using the Pearson's correlation coefficient. P values less than 0.05 were considered significant. MedCalC (MedCalc Software, Mariakerke, Belgium) and SPSS version 20 (SPSS Inc., Chicago, IL, USA) were used to analyze the data.", "479 samples were obtained from 79 patients; the test was done only once in twenty-nine patients and repeatedly done in fifty patients. Out of a total of 479 samples, CMV was detected in 156 (32.6%) samples by the real-time CMV DNA PCR and in 99 (20.7%) samples by the antigenemia assay. Substantial concordance of the two assays was observed in 414 (86.4%) samples (Cohen's kappa coefficient value=0.659, 95% CI=0.585-0.732, Table 2). Of the 65 discrepant samples, 61 samples were antigenemia-negative but real-time CMV DNA PCR-positive. Among these, 45 were below 790 copies/mL and 16 were above 790 copies/mL. Four samples were antigenemia-positive but real-time CMV DNA PCR-negative. There were no clinical particularities for the four patients who were PCR-negative and antigenemia-positive. The number of pp65-positive cells was between 1 and 6 cells/200,000 leukocytes in all four samples.", "The Pearson's correlation analysis was performed to assess the correlation between the quantitation of CMV DNA by real-time PCR, and the number of positive cells as determined by the antigenemia assay. There was a statistically significant linear correlation between the real-time PCR and the antigenemia assay for 156 real-time CMV DNA PCR-positive samples (P<0.0001), but the Pearson's coefficient value was moderately correlated (r=0.5504, Fig. 1).\nAntigenemia values were divided into four groups: group I having negative values, group II having low values (1-10 positive/200,000 leukocytes), group III having intermediate values (11-100 positive/200,000 leukocytes) and group IV having high values (>100 positive/200,000 leukocytes). Median CMV viral load of the real-time PCR was calculated for each antigenemia category. As shown in Fig. 2, when the antigenemia result was negative, the median CMV viral load was 0.1 log10 copies /mL. In groups II, III and IV, the median viral load increased to 3.1 log10 copies/mL, 4.1 log10 copies/mL, and 5.1 log10 copies/mL, respectively.", "Among 23 patients showing discrepant results between the two assays, 20 patients were tested repetitively. Thirteen CMV-positive patients out of 20 were treated with ganciclovir and CMV disease was observed in 5 out of 13 patients. The median number of days for obtaining the first positive result was 15.5 days (range, 0-56 days) and 23.5 days (range, 8-252 days) for CMV viral load of the real-time CMV DNA PCR and positive cells of the antigenemia assays, respectively. The positive CMV viral load was detected prior to the antigenemia assay positively in 9 of the 13 ganciclovir-treated patients and the results of the antigenemia assay reached the threshold earlier in one patient and simultaneously in three patients. At the start of ganciclovir therapy, the median CMV viral load was 2,716 copies/mL (range, negative to 93,918 copies/mL) and the median number of the antigenemia-positive cells was 3/200,000 WBCs (range, 0-351/200,000 WBCs), respectively. 
Clinical monitoring during treatment revealed that the median number of days to obtain negative results after ganciclovir therapy was 36.0 days (range, 11-57 days) and 25.5 days (range, 3-53 days) by the real-time CMV DNA PCR and antigenemia assays, respectively. The results of the antigenemia assay were observed to be negative before the real-time CMV DNA PCR in 8 of the 13 ganciclovir-treated patients. The real-time CMV DNA PCR achieved negative results earlier in 3 patients, and simultaneous results were obtained in 2 patients." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "1. Patients and samples", "2. The CMV pp65 antigenemia assay", "3. Quantitation of CMV DNA by real-time PCR in whole blood", "4. Definition of CMV infection and CMV disease", "5. Statistical analysis", "RESULTS", "1. Qualitative results of the real-time CMV DNA PCR and the antigenemia assay", "2. Quantitative results of the real-time CMV DNA PCR and the antigenemia assay", "3. Longitudinal analysis in patients with CMV disease", "DISCUSSION" ]
[ "Cytomegalovirus (CMV) infection is a major cause of morbidity in recipients of solid organ and bone marrow transplants, in spite of significant advances resulting from preemptive therapy and early diagnosis, thus limiting the effectiveness of organ transplantation as a procedure for the treatment of end-stage diseases [1]. Many techniques are currently available for identifying and monitoring CMV infection, including shell viral culture, antigenemia assay, hybrid capture assay, and qualitative and quantitative PCR assays [2].\nThe CMV pp65 antigenemia test is an immunofluorescence-based assay that utilizes an indirect immunofluorescence technique for identifying the pp65 protein of CMV in peripheral blood leukocytes. The CMV pp65 assay is widely used as the gold standard for monitoring CMV infections and the response of CMV positive patients to antiviral treatment. Even though the results of the pp65 assay can be obtained in a few hours, it is labor-intensive and suffers from a significant inter-laboratory variation with respect to sensitivity (from 50% to 83%) and specificity (less than 80%) [3, 4, 5].\nQuantitation of CMV DNA by real-time PCR is a useful diagnostic technique with its high detection sensitivity and simplicity of use [6]. However, consensus regarding the cut-off level for the diagnosis of CMV infection has not yet been established [7]. Q-CMV real-time complete kit (Cepheid, Nanogen Advanced Diagnostic S.r.L., Torino, Italy) is a real-time PCR kit used for the quantitation of CMV DNA in whole blood. So far, one study comparing this kit with the CMV antigenemia assay has been reported in transplant recipients [8]. The aim of this study was to compare the Q-CMV kit with the CMV antigenemia assay in different disease groups of patients and to investigate the clinical advantage of the use of real-time PCR for quantitation of CMV DNA in whole blood.", " 1. Patients and samples A retrospective study was conducted on a total of 79 patients who visited Korea University Anam Hospital from June 2011 to March 2013. The patients comprised of 41 stem cell transplant (SCT) recipients, 14 solid organ transplant recipients, 11 patients with hematologic or solid organ malignancies, 11 patients with inflammatory-related illnesses, one patient with diabetes mellitus (DM), and one patient with paroxysmal nocturnal hemoglobinuria (PNH) (Table 1). For the patients who were tested repeatedly, their medical records were reviewed to find if ganciclovir was treated. All patients signed an informed consent under the protocol for human use. The study was approved by the Human Use Ethical Committee of Korea University Anam Hospital. EDTA blood samples were collected simultaneously for the antigenemia assay and the real-time CMV DNA PCR.\nA retrospective study was conducted on a total of 79 patients who visited Korea University Anam Hospital from June 2011 to March 2013. The patients comprised of 41 stem cell transplant (SCT) recipients, 14 solid organ transplant recipients, 11 patients with hematologic or solid organ malignancies, 11 patients with inflammatory-related illnesses, one patient with diabetes mellitus (DM), and one patient with paroxysmal nocturnal hemoglobinuria (PNH) (Table 1). For the patients who were tested repeatedly, their medical records were reviewed to find if ganciclovir was treated. All patients signed an informed consent under the protocol for human use. The study was approved by the Human Use Ethical Committee of Korea University Anam Hospital. 
EDTA blood samples were collected simultaneously for the antigenemia assay and the real-time CMV DNA PCR.\n 2. The CMV pp65 antigenemia assay The CMV pp65 antigenemia assay was carried out within 4 hours of specimen collection using the CINA Kit system (ArgeneBiosoft, Varilhes, France). Briefly, the cytospin slides, with 200,000 cells per glass slide, were prepared, fixed, and permeabilized. The presence of CMV pp65 antigen was then detected using a monoclonal antibody against the CMV pp65 antigen and visualized with a fluorescent secondary antibody. The results were expressed as the number of positive cells per slide, with each slide containing 200,000 leukocytes. The test was considered positive when ≥1 fluorescent cell was observed for every 200,000 leukocytes.\nThe CMV pp65 antigenemia assay was carried out within 4 hours of specimen collection using the CINA Kit system (ArgeneBiosoft, Varilhes, France). Briefly, the cytospin slides, with 200,000 cells per glass slide, were prepared, fixed, and permeabilized. The presence of CMV pp65 antigen was then detected using a monoclonal antibody against the CMV pp65 antigen and visualized with a fluorescent secondary antibody. The results were expressed as the number of positive cells per slide, with each slide containing 200,000 leukocytes. The test was considered positive when ≥1 fluorescent cell was observed for every 200,000 leukocytes.\n 3. Quantitation of CMV DNA by real-time PCR in whole blood The real-time CMV DNA PCR was carried out alongside the CMV antigenemia assay using the Q-CMV real-time complete kit according to the manufacturer's instructions. This test was based on the simultaneous amplification of the exon 4 region of the CMV MIEA (Major Immediate Early Antigen HCMVUL123) gene and the human β-globin gene DNA that was used as the internal control. Briefly, CMV DNA was isolated from 200 µL of EDTA-treated whole blood samples using the QIAamp DNA blood mini kit (Qiagen, Hilden, Germany). Five microliters of the extracted DNA sample and 20 µL of the reaction mix were added to each microplate well. Sterile water containing the reaction mix was used as a negative control. The PCR conditions were as follows: decontamination at 50℃ for 2 min, initial denaturation at 95℃ for 10 min, followed by 45 cycles at 95℃ for 15 sec each, and at 60℃ for 1 min. PCR reactions were performed on an Applied C1000 thermal cycler with a CFX96 real-time system (Bio-Rad, Hercules, CA, USA). The laboratory-determined limit of detection (LOD) of the assay was 65 copies/mL. The assay detected CMV DNA in a linear range from 790 copies/mL to 5×106 copies/mL. The value of LOD and the linear detection range were determined according to the manufacturer's package insert instructions.\nThe real-time CMV DNA PCR was carried out alongside the CMV antigenemia assay using the Q-CMV real-time complete kit according to the manufacturer's instructions. This test was based on the simultaneous amplification of the exon 4 region of the CMV MIEA (Major Immediate Early Antigen HCMVUL123) gene and the human β-globin gene DNA that was used as the internal control. Briefly, CMV DNA was isolated from 200 µL of EDTA-treated whole blood samples using the QIAamp DNA blood mini kit (Qiagen, Hilden, Germany). Five microliters of the extracted DNA sample and 20 µL of the reaction mix were added to each microplate well. Sterile water containing the reaction mix was used as a negative control. 
The PCR conditions were as follows: decontamination at 50℃ for 2 min, initial denaturation at 95℃ for 10 min, followed by 45 cycles at 95℃ for 15 sec each, and at 60℃ for 1 min. PCR reactions were performed on an Applied C1000 thermal cycler with a CFX96 real-time system (Bio-Rad, Hercules, CA, USA). The laboratory-determined limit of detection (LOD) of the assay was 65 copies/mL. The assay detected CMV DNA in a linear range from 790 copies/mL to 5×106 copies/mL. The value of LOD and the linear detection range were determined according to the manufacturer's package insert instructions.\n 4. Definition of CMV infection and CMV disease CMV infection was defined as the detection of CMV DNA in blood leukocytes in the absence of clinical manifestations or organ function abnormalities [9]. CMV disease was defined as the association of documented CMV infection with clinical symptoms, such as unexplained fever and leukopenia (<4×109/L in two consecutive samples) and/or thrombocytopenia (<150×109/L) not developing in any patient during the follow-up [9].\nCMV infection was defined as the detection of CMV DNA in blood leukocytes in the absence of clinical manifestations or organ function abnormalities [9]. CMV disease was defined as the association of documented CMV infection with clinical symptoms, such as unexplained fever and leukopenia (<4×109/L in two consecutive samples) and/or thrombocytopenia (<150×109/L) not developing in any patient during the follow-up [9].\n 5. Statistical analysis A proportion of the positive and negative results were compared by using the chi-square test. Agreement between the real-time CMV DNA PCR and the antigenemia assay results was assessed by Cohen's kappa coefficient value with 95% confidence intervals (CI). Cohen's kappa coefficient values were interpreted as follows: 0-0.2 as slight agreement, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1 as excellent. The correlation between the two assays was analyzed using the Pearson's correlation coefficient. P values less than 0.05 were considered significant. MedCalC (MedCalc Software, Mariakerke, Belgium) and SPSS version 20 (SPSS Inc., Chicago, IL, USA) were used to analyze the data.\nA proportion of the positive and negative results were compared by using the chi-square test. Agreement between the real-time CMV DNA PCR and the antigenemia assay results was assessed by Cohen's kappa coefficient value with 95% confidence intervals (CI). Cohen's kappa coefficient values were interpreted as follows: 0-0.2 as slight agreement, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1 as excellent. The correlation between the two assays was analyzed using the Pearson's correlation coefficient. P values less than 0.05 were considered significant. MedCalC (MedCalc Software, Mariakerke, Belgium) and SPSS version 20 (SPSS Inc., Chicago, IL, USA) were used to analyze the data.", "A retrospective study was conducted on a total of 79 patients who visited Korea University Anam Hospital from June 2011 to March 2013. The patients comprised of 41 stem cell transplant (SCT) recipients, 14 solid organ transplant recipients, 11 patients with hematologic or solid organ malignancies, 11 patients with inflammatory-related illnesses, one patient with diabetes mellitus (DM), and one patient with paroxysmal nocturnal hemoglobinuria (PNH) (Table 1). For the patients who were tested repeatedly, their medical records were reviewed to find if ganciclovir was treated. 
All patients signed an informed consent under the protocol for human use. The study was approved by the Human Use Ethical Committee of Korea University Anam Hospital. EDTA blood samples were collected simultaneously for the antigenemia assay and the real-time CMV DNA PCR.", "The CMV pp65 antigenemia assay was carried out within 4 hours of specimen collection using the CINA Kit system (ArgeneBiosoft, Varilhes, France). Briefly, the cytospin slides, with 200,000 cells per glass slide, were prepared, fixed, and permeabilized. The presence of CMV pp65 antigen was then detected using a monoclonal antibody against the CMV pp65 antigen and visualized with a fluorescent secondary antibody. The results were expressed as the number of positive cells per slide, with each slide containing 200,000 leukocytes. The test was considered positive when ≥1 fluorescent cell was observed for every 200,000 leukocytes.", "The real-time CMV DNA PCR was carried out alongside the CMV antigenemia assay using the Q-CMV real-time complete kit according to the manufacturer's instructions. This test was based on the simultaneous amplification of the exon 4 region of the CMV MIEA (Major Immediate Early Antigen HCMVUL123) gene and the human β-globin gene DNA that was used as the internal control. Briefly, CMV DNA was isolated from 200 µL of EDTA-treated whole blood samples using the QIAamp DNA blood mini kit (Qiagen, Hilden, Germany). Five microliters of the extracted DNA sample and 20 µL of the reaction mix were added to each microplate well. Sterile water containing the reaction mix was used as a negative control. The PCR conditions were as follows: decontamination at 50℃ for 2 min, initial denaturation at 95℃ for 10 min, followed by 45 cycles at 95℃ for 15 sec each, and at 60℃ for 1 min. PCR reactions were performed on an Applied C1000 thermal cycler with a CFX96 real-time system (Bio-Rad, Hercules, CA, USA). The laboratory-determined limit of detection (LOD) of the assay was 65 copies/mL. The assay detected CMV DNA in a linear range from 790 copies/mL to 5×106 copies/mL. The value of LOD and the linear detection range were determined according to the manufacturer's package insert instructions.", "CMV infection was defined as the detection of CMV DNA in blood leukocytes in the absence of clinical manifestations or organ function abnormalities [9]. CMV disease was defined as the association of documented CMV infection with clinical symptoms, such as unexplained fever and leukopenia (<4×109/L in two consecutive samples) and/or thrombocytopenia (<150×109/L) not developing in any patient during the follow-up [9].", "A proportion of the positive and negative results were compared by using the chi-square test. Agreement between the real-time CMV DNA PCR and the antigenemia assay results was assessed by Cohen's kappa coefficient value with 95% confidence intervals (CI). Cohen's kappa coefficient values were interpreted as follows: 0-0.2 as slight agreement, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial, and 0.81-1 as excellent. The correlation between the two assays was analyzed using the Pearson's correlation coefficient. P values less than 0.05 were considered significant. MedCalC (MedCalc Software, Mariakerke, Belgium) and SPSS version 20 (SPSS Inc., Chicago, IL, USA) were used to analyze the data.", " 1. 
Qualitative results of the real-time CMV DNA PCR and the antigenemia assay 479 samples were obtained from 79 patients; the test was done only once in twenty-nine patients and repeatedly done in fifty patients. Out of a total of 479 samples, CMV was detected in 156 (32.6%) samples by the real-time CMV DNA PCR and in 99 (20.7%) samples by the antigenemia assay. Substantial concordance of the two assays was observed in 414 (86.4%) samples (Cohen's kappa coefficient value=0.659, 95% CI=0.585-0.732, Table 2). Of the 65 discrepant samples, 61 samples were antigenemia-negative but real-time CMV DNA PCR-positive. Among these, 45 were below 790 copies/mL and 16 were above 790 copies/mL. Four samples were antigenemia-positive but real-time CMV DNA PCR-negative. There were no clinical particularities for the four patients who were PCR-negative and antigenemia-positive. The number of pp65-positive cells was between 1 and 6 cells/200,000 leukocytes in all four samples.\n479 samples were obtained from 79 patients; the test was done only once in twenty-nine patients and repeatedly done in fifty patients. Out of a total of 479 samples, CMV was detected in 156 (32.6%) samples by the real-time CMV DNA PCR and in 99 (20.7%) samples by the antigenemia assay. Substantial concordance of the two assays was observed in 414 (86.4%) samples (Cohen's kappa coefficient value=0.659, 95% CI=0.585-0.732, Table 2). Of the 65 discrepant samples, 61 samples were antigenemia-negative but real-time CMV DNA PCR-positive. Among these, 45 were below 790 copies/mL and 16 were above 790 copies/mL. Four samples were antigenemia-positive but real-time CMV DNA PCR-negative. There were no clinical particularities for the four patients who were PCR-negative and antigenemia-positive. The number of pp65-positive cells was between 1 and 6 cells/200,000 leukocytes in all four samples.\n 2. Quantitative results of the real-time CMV DNA PCR and the antigenemia assay The Pearson's correlation analysis was performed to assess the correlation between the quantitation of CMV DNA by real-time PCR, and the number of positive cells as determined by the antigenemia assay. There was a statistically significant linear correlation between the real-time PCR and the antigenemia assay for 156 real-time CMV DNA PCR-positive samples (P<0.0001), but the Pearson's coefficient value was moderately correlated (r=0.5504, Fig. 1).\nAntigenemia values were divided into four groups: group I having negative values, group II having low values (1-10 positive/200,000 leukocytes), group III having intermediate values (11-100 positive/200,000 leukocytes) and group IV having high values (>100 positive/200,000 leukocytes). Median CMV viral load of the real-time PCR was calculated for each antigenemia category. As shown in Fig. 2, when the antigenemia result was negative, the median CMV viral load was 0.1 log10 copies /mL. In groups II, III and IV, the median viral load increased to 3.1 log10 copies/mL, 4.1 log10 copies/mL, and 5.1 log10 copies/mL, respectively.\nThe Pearson's correlation analysis was performed to assess the correlation between the quantitation of CMV DNA by real-time PCR, and the number of positive cells as determined by the antigenemia assay. There was a statistically significant linear correlation between the real-time PCR and the antigenemia assay for 156 real-time CMV DNA PCR-positive samples (P<0.0001), but the Pearson's coefficient value was moderately correlated (r=0.5504, Fig. 
1).\nAntigenemia values were divided into four groups: group I having negative values, group II having low values (1-10 positive/200,000 leukocytes), group III having intermediate values (11-100 positive/200,000 leukocytes) and group IV having high values (>100 positive/200,000 leukocytes). Median CMV viral load of the real-time PCR was calculated for each antigenemia category. As shown in Fig. 2, when the antigenemia result was negative, the median CMV viral load was 0.1 log10 copies /mL. In groups II, III and IV, the median viral load increased to 3.1 log10 copies/mL, 4.1 log10 copies/mL, and 5.1 log10 copies/mL, respectively.\n 3. Longitudinal analysis in patients with CMV disease Among 23 patients showing discrepant results between the two assays, 20 patients were tested repetitively. Thirteen CMV-positive patients out of 20 were treated with ganciclovir and CMV disease was observed in 5 out of 13 patients. The median number of days for obtaining the first positive result was 15.5 days (range, 0-56 days) and 23.5 days (range, 8-252 days) for CMV viral load of the real-time CMV DNA PCR and positive cells of the antigenemia assays, respectively. The positive CMV viral load was detected prior to the antigenemia assay positively in 9 of the 13 ganciclovir-treated patients and the results of the antigenemia assay reached the threshold earlier in one patient and simultaneously in three patients. At the start of ganciclovir therapy, the median CMV viral load was 2,716 copies/mL (range, negative to 93,918 copies/mL) and the median number of the antigenemia-positive cells was 3/200,000 WBCs (range, 0-351/200,000 WBCs), respectively. Clinical monitoring during treatment revealed that the median number of days to obtain negative results after ganciclovir therapy was 36.0 days (range, 11-57 days) and 25.5 days (range, 3-53 days) by the real-time CMV DNA PCR and antigenemia assays, respectively. The results of the antigenemia assay were observed to be negative before the real-time CMV DNA PCR in 8 of the 13 ganciclovir-treated patients. The real-time CMV DNA PCR achieved negative results earlier in 3 patients, and simultaneous results were obtained in 2 patients.\nAmong 23 patients showing discrepant results between the two assays, 20 patients were tested repetitively. Thirteen CMV-positive patients out of 20 were treated with ganciclovir and CMV disease was observed in 5 out of 13 patients. The median number of days for obtaining the first positive result was 15.5 days (range, 0-56 days) and 23.5 days (range, 8-252 days) for CMV viral load of the real-time CMV DNA PCR and positive cells of the antigenemia assays, respectively. The positive CMV viral load was detected prior to the antigenemia assay positively in 9 of the 13 ganciclovir-treated patients and the results of the antigenemia assay reached the threshold earlier in one patient and simultaneously in three patients. At the start of ganciclovir therapy, the median CMV viral load was 2,716 copies/mL (range, negative to 93,918 copies/mL) and the median number of the antigenemia-positive cells was 3/200,000 WBCs (range, 0-351/200,000 WBCs), respectively. Clinical monitoring during treatment revealed that the median number of days to obtain negative results after ganciclovir therapy was 36.0 days (range, 11-57 days) and 25.5 days (range, 3-53 days) by the real-time CMV DNA PCR and antigenemia assays, respectively. 
3. Longitudinal analysis in patients with CMV disease

Among the 23 patients with discrepant results between the two assays, 20 were tested repeatedly. Thirteen of these 20 CMV-positive patients were treated with ganciclovir, and CMV disease was observed in 5 of the 13. The median number of days to the first positive result was 15.5 days (range, 0-56 days) for the real-time CMV DNA PCR viral load and 23.5 days (range, 8-252 days) for antigenemia-positive cells. A positive CMV viral load was detected before a positive antigenemia result in 9 of the 13 ganciclovir-treated patients; the antigenemia assay reached its threshold earlier in one patient and simultaneously in three patients. At the start of ganciclovir therapy, the median CMV viral load was 2,716 copies/mL (range, negative to 93,918 copies/mL) and the median number of antigenemia-positive cells was 3/200,000 WBCs (range, 0-351/200,000 WBCs). Clinical monitoring during treatment revealed that the median number of days to a negative result after ganciclovir therapy was 36.0 days (range, 11-57 days) by the real-time CMV DNA PCR and 25.5 days (range, 3-53 days) by the antigenemia assay. The antigenemia assay became negative before the real-time CMV DNA PCR in 8 of the 13 ganciclovir-treated patients; the real-time CMV DNA PCR became negative earlier in 3 patients, and simultaneously in 2 patients.
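The time-to-first-positive medians above reduce to a groupwise minimum over longitudinal test records followed by a cohort median. A sketch with hypothetical records (the patient identifiers and days are illustrative, not the study's data):

    import pandas as pd

    # Hypothetical longitudinal results: one row per test per patient
    records = pd.DataFrame({
        "patient":      [1, 1, 1, 2, 2, 2, 3, 3],
        "day":          [0, 14, 36, 0, 21, 49, 7, 30],
        "pcr_positive": [False, True, False, False, True, False, True, False],
    })

    # Day of first positive result per patient, then the cohort median
    first_pos = (records[records["pcr_positive"]]
                 .groupby("patient")["day"].min())
    print("median day of first PCR-positive result:", first_pos.median())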
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, "discussion" ]
[ "Cytomegalovirus", "Real-time PCR", "Antigenemia assay" ]
INTRODUCTION:

Cytomegalovirus (CMV) infection is a major cause of morbidity in recipients of solid organ and bone marrow transplants, in spite of significant advances resulting from preemptive therapy and early diagnosis, thus limiting the effectiveness of organ transplantation as a procedure for the treatment of end-stage diseases [1]. Many techniques are currently available for identifying and monitoring CMV infection, including shell vial culture, the antigenemia assay, the hybrid capture assay, and qualitative and quantitative PCR assays [2]. The CMV pp65 antigenemia test uses an indirect immunofluorescence technique to identify the pp65 protein of CMV in peripheral blood leukocytes and is widely used as the gold standard for monitoring CMV infections and the response of CMV-positive patients to antiviral treatment. Even though the results of the pp65 assay can be obtained in a few hours, it is labor-intensive and suffers from significant inter-laboratory variation with respect to sensitivity (from 50% to 83%) and specificity (less than 80%) [3, 4, 5]. Quantitation of CMV DNA by real-time PCR is a useful diagnostic technique owing to its high detection sensitivity and simplicity of use [6]. However, consensus regarding the cut-off level for the diagnosis of CMV infection has not yet been established [7]. The Q-CMV real-time complete kit (Cepheid, Nanogen Advanced Diagnostic S.r.L., Torino, Italy) is a real-time PCR kit used for the quantitation of CMV DNA in whole blood. So far, only one study comparing this kit with the CMV antigenemia assay in transplant recipients has been reported [8]. The aim of this study was to compare the Q-CMV kit with the CMV antigenemia assay in different disease groups of patients and to investigate the clinical advantage of real-time PCR for quantitation of CMV DNA in whole blood.

METHODS:

1. Patients and samples

A retrospective study was conducted on a total of 79 patients who visited Korea University Anam Hospital from June 2011 to March 2013. The patients comprised 41 stem cell transplant (SCT) recipients, 14 solid organ transplant recipients, 11 patients with hematologic or solid organ malignancies, 11 patients with inflammatory-related illnesses, one patient with diabetes mellitus (DM), and one patient with paroxysmal nocturnal hemoglobinuria (PNH) (Table 1). For patients who were tested repeatedly, medical records were reviewed to determine whether ganciclovir had been administered. All patients signed an informed consent form under the protocol for human use, and the study was approved by the Human Use Ethical Committee of Korea University Anam Hospital. EDTA blood samples were collected simultaneously for the antigenemia assay and the real-time CMV DNA PCR.

2. The CMV pp65 antigenemia assay

The CMV pp65 antigenemia assay was carried out within 4 hours of specimen collection using the CINA Kit system (ArgeneBiosoft, Varilhes, France). Briefly, cytospin slides with 200,000 cells per glass slide were prepared, fixed, and permeabilized. The presence of CMV pp65 antigen was then detected using a monoclonal antibody against the CMV pp65 antigen and visualized with a fluorescent secondary antibody. The results were expressed as the number of positive cells per slide, with each slide containing 200,000 leukocytes. The test was considered positive when ≥1 fluorescent cell was observed per 200,000 leukocytes.

3. Quantitation of CMV DNA by real-time PCR in whole blood

The real-time CMV DNA PCR was carried out alongside the CMV antigenemia assay using the Q-CMV real-time complete kit according to the manufacturer's instructions. The test is based on the simultaneous amplification of the exon 4 region of the CMV MIEA (Major Immediate Early Antigen, HCMV UL123) gene and of the human β-globin gene, which served as the internal control. Briefly, CMV DNA was isolated from 200 µL of EDTA-treated whole blood using the QIAamp DNA blood mini kit (Qiagen, Hilden, Germany). Five microliters of the extracted DNA and 20 µL of the reaction mix were added to each microplate well; sterile water with the reaction mix served as a negative control. The PCR conditions were as follows: decontamination at 50℃ for 2 min, initial denaturation at 95℃ for 10 min, followed by 45 cycles of 95℃ for 15 sec and 60℃ for 1 min. PCR reactions were performed on a C1000 thermal cycler with a CFX96 real-time system (Bio-Rad, Hercules, CA, USA). The laboratory-determined limit of detection (LOD) of the assay was 65 copies/mL, and the assay detected CMV DNA over a linear range from 790 copies/mL to 5×10⁶ copies/mL; both values were determined according to the manufacturer's package insert.
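Interpreting a quantitative result against these limits is a simple range check. The sketch below uses the limits stated above; the reporting labels are illustrative assumptions, as the kit's official reporting wording is not specified here:

    LOD = 65                  # copies/mL, laboratory-determined limit of detection
    LINEAR_LOW = 790          # copies/mL, lower limit of the linear range
    LINEAR_HIGH = 5_000_000   # copies/mL, upper limit of the linear range

    def classify(copies_per_ml: float) -> str:
        """Place a quantitative PCR result relative to the assay's ranges."""
        if copies_per_ml < LOD:
            return "not detected"
        if copies_per_ml < LINEAR_LOW:
            return "detected below the linear range (not reliably quantifiable)"
        if copies_per_ml <= LINEAR_HIGH:
            return "quantifiable"
        return "above the linear range"

    for value in (40, 300, 2716, 6e6):
        print(value, "->", classify(value))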
4. Definition of CMV infection and CMV disease

CMV infection was defined as the detection of CMV DNA in blood leukocytes in the absence of clinical manifestations or organ function abnormalities [9]. CMV disease was defined as documented CMV infection associated with clinical symptoms, such as unexplained fever with leukopenia (<4×10⁹/L in two consecutive samples) and/or thrombocytopenia (<150×10⁹/L) not explained by other causes during the follow-up [9].

5. Statistical analysis

The proportions of positive and negative results were compared using the chi-square test. Agreement between the real-time CMV DNA PCR and the antigenemia assay was assessed by Cohen's kappa coefficient with 95% confidence intervals (CI); kappa values were interpreted as follows: 0-0.2, slight agreement; 0.21-0.40, fair; 0.41-0.60, moderate; 0.61-0.80, substantial; and 0.81-1, excellent. The correlation between the two assays was analyzed using Pearson's correlation coefficient. P values less than 0.05 were considered significant. MedCalc (MedCalc Software, Mariakerke, Belgium) and SPSS version 20 (SPSS Inc., Chicago, IL, USA) were used for the analyses.
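The proportion comparison described above can be sketched as follows; treating the per-assay positive/negative counts as an unpaired contingency table is an assumption (the exact table construction is not reported), and since the samples are paired, a McNemar-type test conditioned on discordant pairs would also be defensible:

    from scipy.stats import chi2_contingency

    # Positive/negative counts per assay over the 479 samples
    counts = [[156, 323],   # real-time CMV DNA PCR
              [99, 380]]    # antigenemia assay

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2={chi2:.2f}, dof={dof}, P={p:.4f}")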
RESULTS:

1. Qualitative results of the real-time CMV DNA PCR and the antigenemia assay

A total of 479 samples were obtained from 79 patients; the test was performed only once in 29 patients and repeatedly in 50 patients. Of the 479 samples, CMV was detected in 156 (32.6%) by the real-time CMV DNA PCR and in 99 (20.7%) by the antigenemia assay. Substantial concordance between the two assays was observed in 414 (86.4%) samples (Cohen's kappa coefficient=0.659, 95% CI=0.585-0.732, Table 2). Of the 65 discrepant samples, 61 were antigenemia-negative but real-time CMV DNA PCR-positive; among these, 45 were below 790 copies/mL and 16 were above 790 copies/mL. Four samples were antigenemia-positive but real-time CMV DNA PCR-negative. No distinctive clinical features were noted in these four patients, and the number of pp65-positive cells was between 1 and 6 cells/200,000 leukocytes in all four samples.

2. Quantitative results of the real-time CMV DNA PCR and the antigenemia assay

Pearson's correlation analysis was performed to assess the relationship between the quantitation of CMV DNA by real-time PCR and the number of positive cells determined by the antigenemia assay. For the 156 real-time CMV DNA PCR-positive samples, there was a statistically significant linear correlation between the two assays (P<0.0001), although the correlation was only moderate (r=0.5504, Fig. 1).

Antigenemia values were divided into four groups: group I with negative values, group II with low values (1-10 positive cells/200,000 leukocytes), group III with intermediate values (11-100 positive cells/200,000 leukocytes), and group IV with high values (>100 positive cells/200,000 leukocytes). The median CMV viral load by real-time PCR was calculated for each antigenemia category. As shown in Fig. 2, when the antigenemia result was negative, the median CMV viral load was 0.1 log10 copies/mL; in groups II, III, and IV, it increased to 3.1, 4.1, and 5.1 log10 copies/mL, respectively.

3. Longitudinal analysis in patients with CMV disease

Among the 23 patients with discrepant results between the two assays, 20 were tested repeatedly. Thirteen of these 20 CMV-positive patients were treated with ganciclovir, and CMV disease was observed in 5 of the 13. The median number of days to the first positive result was 15.5 days (range, 0-56 days) for the real-time CMV DNA PCR viral load and 23.5 days (range, 8-252 days) for antigenemia-positive cells. A positive CMV viral load was detected before a positive antigenemia result in 9 of the 13 ganciclovir-treated patients; the antigenemia assay reached its threshold earlier in one patient and simultaneously in three patients. At the start of ganciclovir therapy, the median CMV viral load was 2,716 copies/mL (range, negative to 93,918 copies/mL) and the median number of antigenemia-positive cells was 3/200,000 WBCs (range, 0-351/200,000 WBCs). Clinical monitoring during treatment revealed that the median number of days to a negative result after ganciclovir therapy was 36.0 days (range, 11-57 days) by the real-time CMV DNA PCR and 25.5 days (range, 3-53 days) by the antigenemia assay. The antigenemia assay became negative before the real-time CMV DNA PCR in 8 of the 13 ganciclovir-treated patients; the real-time CMV DNA PCR became negative earlier in 3 patients, and simultaneously in 2 patients.
DISCUSSION:

The antigenemia assay and real-time CMV DNA PCR are widely used for monitoring viral infection, tracking its recurrence, and initiating preemptive CMV therapy [1, 10]. However, guidelines for preemptive therapy based on these methods have yet to be established owing to the lack of standardization [11]. Clinically relevant cut-off values for CMV diagnostic tests differ among patient populations and institutional settings. Griffiths et al. [12] suggested guidelines for preemptive therapy based on the antigenemia test: thresholds of >10 positive cells/2×10⁵ WBCs and of ≥1 or 2 positive cells/2×10⁵ WBCs in solid organ and SCT recipients, respectively. Lilleri et al. [13] suggested cut-off values for initiation of preemptive therapy based on real-time CMV DNA PCR: 300,000 copies/mL and 10,000 copies/mL in solid organ and SCT recipients, respectively. In our center, different assays are adopted for solid organ transplantation and SCT. In solid organ transplantation, antigenemia results guide preemptive therapy, which is started when more than 1 positive cell/2×10⁵ WBCs is detected; in SCT, real-time CMV DNA PCR guides preemptive therapy, which is started when more than 65 copies/mL is detected. Therefore, a consensus cut-off level for the diagnosis of CMV infection needs to be defined for both the real-time CMV DNA PCR and the antigenemia assay.

Our study revealed an 86.4% concordance rate between the real-time CMV DNA PCR and the antigenemia assay in whole blood samples from diverse patient groups, which is similar to the results of other studies (Table 3). The design of this study differed from those of other studies in several ways, such as the type of samples, the primer sets, and the diversity of the patient groups. Whole blood was selected over plasma or serum because several prior studies suggested that whole blood-based real-time PCR has a higher sensitivity than plasma- or serum-based assays [14, 15]. The CMV MIEA gene was chosen as the target of the Q-CMV kit assay, whereas Heo et al. [16] targeted the CMV glycoprotein B (gB) gene. No standardization of target genes has yet been agreed on by the larger research community, so further work is needed to enable comparison of assays across laboratories using different target genes. In the current study, transplant recipients as well as diverse patient groups were investigated, including patients with hematologic or solid organ malignancies, inflammatory-related illnesses, DM, and PNH; in contrast, other studies comprised only transplant recipients.

The majority of the qualitative discrepancies (93.8%, 61/65) were real-time CMV DNA PCR-positive, antigenemia-negative samples, and 45 of these 61 samples were below the lower limit of the linear range (790 copies/mL). This discrepancy can be explained by the higher sensitivity of the real-time CMV DNA PCR compared with the antigenemia assay.
In this study, the correlation between the real-time CMV DNA PCR and the antigenemia assay was moderate (r=0.5504, P<0.0001), as shown in a previous study [17]. In samples with antigenemia levels of 0, 1-10, 11-100, and >100 positive cells, the corresponding median viral loads measured by real-time CMV DNA PCR were 0.1, 3.1, 4.1, and 5.1 log10 copies/mL, respectively. Significant increases in viral load were observed in samples with more than 1 antigenemia-positive cell, which is above the cut-off level for starting preemptive therapy chosen by solid organ transplant centers. When real-time CMV DNA PCR results were compared across groups classified according to the antigenemia results, the correlation was also moderate (r=0.3877, P=0.0013), in agreement with previous studies [18].

Mhiri et al. [19] reported that the rate of viral load increase is significantly associated with the development of disease; the antigenemia assay does not give an accurate indication of the rate of viral load increase, since it only counts the number of infected cells. In the present study, the time to detection by real-time PCR was earlier (median, 15.5 days) than by the antigenemia assay (median, 23.5 days), consistent with the data of Ghaffari et al. [20]. Furthermore, the time to become undetectable by real-time PCR was longer (median, 36 days) than by the antigenemia assay (median, 25.5 days), similar to the results of Mhiri et al. [19]. The clinical application of real-time CMV DNA PCR for monitoring response to therapy remains unclear, because PCR detection of viral DNA cannot distinguish between DNA released by the destruction of CMV-infected cells and the genome of defective virus [21].

Although the real-time CMV DNA PCR has high sensitivity, there is currently no clear agreement on the ideal cut-off for the diagnosis of CMV infection. Treatment decisions should therefore be made with caution, considering trends in viral load and the sensitivity of the methods used in individual laboratories, rather than relying on the absolute value of a single test. Moreover, although a moderate correlation was observed, this study included a diverse group of patients with a small number of patients in each group; larger studies are needed to confirm the optimal cut-off value of real-time PCR for CMV infection.

In conclusion, the quantitative results of the Q-CMV real-time complete kit correlated well with the results of the CMV antigenemia assay, suggesting that the real-time CMV DNA PCR can be effective for early detection of CMV infection and for guiding therapy.
Background: Quantitation of cytomegalovirus (CMV) DNA using real-time PCR has been utilized for monitoring CMV infection. However, the CMV antigenemia assay is still the 'gold standard' assay. Only a few studies in Korea have compared real-time PCR quantitation of CMV DNA in whole blood with the antigenemia assay, and most have been limited to transplant recipients. Methods: A total of 479 whole blood samples from 79 patients in different disease groups were tested by real-time CMV DNA PCR using the Q-CMV real-time complete kit (Nanogen Advanced Diagnostic S.r.L., Italy) and by the CMV antigenemia assay (CINA Kit, ArgeneBiosoft, France), and the results were compared. Repeatedly tested patients were identified and their charts were reviewed for ganciclovir therapy. Results: The concordance rate of the two assays was 86.4% (Cohen's kappa coefficient=0.659). The quantitative correlation between the two assays was moderate (r=0.5504, P<0.0001). Among the 20 patients tested repeatedly with both assays, 13 were transplant recipients treated with ganciclovir. Before treatment, CMV was detected earlier by real-time CMV DNA PCR than by the antigenemia assay, with a median difference of 8 days. After treatment, the antigenemia assay became negative earlier than the real-time CMV DNA PCR, with a median difference of 10.5 days. Conclusions: The Q-CMV real-time complete kit is a useful tool for the early detection of CMV infection in whole blood samples from transplant recipients.
null
null
6,064
301
[ 152, 110, 268, 80, 145, 191, 214, 296 ]
12
[ "cmv", "antigenemia", "pcr", "time", "dna", "real time", "real", "cmv dna", "patients", "assay" ]
[ "cytomegalovirus cmv infection", "introduction cytomegalovirus cmv", "antibody cmv pp65", "leukocytes cmv pp65", "antigenemia assay cmv" ]
null
null
[CONTENT] Cytomegalovirus | Real-time PCR | Antigenemia assay [SUMMARY]
[CONTENT] Cytomegalovirus | Real-time PCR | Antigenemia assay [SUMMARY]
[CONTENT] Cytomegalovirus | Real-time PCR | Antigenemia assay [SUMMARY]
null
[CONTENT] Cytomegalovirus | Real-time PCR | Antigenemia assay [SUMMARY]
null
[CONTENT] Antiviral Agents | Cytomegalovirus | Cytomegalovirus Infections | DNA, Viral | Ganciclovir | Humans | Immunoassay | Organ Transplantation | Phosphoproteins | Real-Time Polymerase Chain Reaction | Viral Matrix Proteins | Virology [SUMMARY]
[CONTENT] Antiviral Agents | Cytomegalovirus | Cytomegalovirus Infections | DNA, Viral | Ganciclovir | Humans | Immunoassay | Organ Transplantation | Phosphoproteins | Real-Time Polymerase Chain Reaction | Viral Matrix Proteins | Virology [SUMMARY]
[CONTENT] Antiviral Agents | Cytomegalovirus | Cytomegalovirus Infections | DNA, Viral | Ganciclovir | Humans | Immunoassay | Organ Transplantation | Phosphoproteins | Real-Time Polymerase Chain Reaction | Viral Matrix Proteins | Virology [SUMMARY]
null
[CONTENT] Antiviral Agents | Cytomegalovirus | Cytomegalovirus Infections | DNA, Viral | Ganciclovir | Humans | Immunoassay | Organ Transplantation | Phosphoproteins | Real-Time Polymerase Chain Reaction | Viral Matrix Proteins | Virology [SUMMARY]
null
[CONTENT] cytomegalovirus cmv infection | introduction cytomegalovirus cmv | antibody cmv pp65 | leukocytes cmv pp65 | antigenemia assay cmv [SUMMARY]
[CONTENT] cytomegalovirus cmv infection | introduction cytomegalovirus cmv | antibody cmv pp65 | leukocytes cmv pp65 | antigenemia assay cmv [SUMMARY]
[CONTENT] cytomegalovirus cmv infection | introduction cytomegalovirus cmv | antibody cmv pp65 | leukocytes cmv pp65 | antigenemia assay cmv [SUMMARY]
null
[CONTENT] cytomegalovirus cmv infection | introduction cytomegalovirus cmv | antibody cmv pp65 | leukocytes cmv pp65 | antigenemia assay cmv [SUMMARY]
null
[CONTENT] cmv | antigenemia | pcr | time | dna | real time | real | cmv dna | patients | assay [SUMMARY]
[CONTENT] cmv | antigenemia | pcr | time | dna | real time | real | cmv dna | patients | assay [SUMMARY]
[CONTENT] cmv | antigenemia | pcr | time | dna | real time | real | cmv dna | patients | assay [SUMMARY]
null
[CONTENT] cmv | antigenemia | pcr | time | dna | real time | real | cmv dna | patients | assay [SUMMARY]
null
[CONTENT] cmv | assay | kit | pp65 | infection | quantitation cmv dna | quantitation cmv | quantitation | cmv infection | kit cmv antigenemia [SUMMARY]
[CONTENT] cmv | dna | patients | cmv pp65 | slide | min | blood | assay | cmv dna | antigen [SUMMARY]
[CONTENT] days | patients | cmv | positive | antigenemia | pcr | real | time | real time | median [SUMMARY]
null
[CONTENT] cmv | patients | pcr | time | antigenemia | real time | real | dna | positive | cmv dna [SUMMARY]
null
[CONTENT] CMV | PCR | CMV ||| CMV ||| Korea | PCR [SUMMARY]
[CONTENT] 479 | 79 | CMV | the Q-CMV | Italy | CMV | CINA Kit, ArgeneBiosoft | France ||| [SUMMARY]
[CONTENT] two | 86.4% | Cohen ||| two ||| 20 | two | 13 ||| CMV | CMV | PCR | 8 days ||| CMV | 10.5 days [SUMMARY]
null
[CONTENT] CMV | PCR | CMV ||| CMV ||| Korea | PCR ||| 479 | 79 | CMV | the Q-CMV | Italy | CMV | CINA Kit, ArgeneBiosoft | France ||| ||| ||| two | 86.4% | Cohen ||| two ||| 20 | two | 13 ||| CMV | CMV | PCR | 8 days ||| CMV | 10.5 days ||| CMV [SUMMARY]
null
Botulinum toxin type A and acupuncture for masticatory myofascial pain: a randomized clinical trial.
34105695
BoNT-A has been widely used for TMD therapy. However, the potential benefits compared to dry needling techniques are not clear.
BACKGROUND
Fifty-four women were divided into three groups (n=18 each). The acupuncture (AC) group received four sessions of traditional acupuncture, one 20-min session per week. The BoNT-A group was bilaterally injected with 30 U and 10 U in the masseter and anterior temporal muscles, respectively, and a control group received saline solution (SS) injections in the same muscles. Self-perceived pain was assessed with a visual analog scale, and the pressure pain threshold (PPT) was measured with a digital algometer. Electromyographic (EMG) activity of the anterior temporal and masseter muscles was also recorded. All variables were assessed before and 1 month after therapy. Mixed-design two-way repeated-measures ANOVA and Tukey's post-hoc tests were used for analysis, considering α=0.05.
METHODOLOGY
Self-perceived pain decreased in all groups after one month of therapy (P<0.001). BoNT-A was not better than AC in pain reduction (P=0.05), but both therapies were more effective in reducing pain than SS (P<0.05). BoNT-A was the only treatment that improved PPT values (P<0.05); however, a severe decrease in EMG activity was also found in this group, which is considered an adverse effect.
RESULTS
After one month of follow-up, all therapies reduced self-perceived pain in myofascial TMD patients, but only BoNT-A enhanced PPT, while also decreasing EMG activity.
CONCLUSION
[ "Acupuncture Therapy", "Botulinum Toxins, Type A", "Female", "Humans", "Masseter Muscle", "Masticatory Muscles", "Myofascial Pain Syndromes", "Pain", "Pain Threshold", "Treatment Outcome" ]
8232932
Introduction
Myofascial pain (MFP) is a disorder characterized by localized muscle tenderness, regional pain, and limited range of motion.1 It is the most typical cause of persistent regional pain, such as back and shoulder pain, tension-type headaches, and facial pain.2 Furthermore, it is a common condition in dentistry, with a prevalence from 10% to 68% among subjects with temporomandibular disorders (TMD).3 Masticatory myofascial pain (MMFP) has a complex pathogenesis expressed by a multifactorial etiology, which has led to the proposal of numerous conservative, reversible, and minimally invasive therapies to treat this condition.4 The needling technique is a minimally invasive therapy widely used for MMFP; it can be classified as an injection technique (IT), often referred to as "wet needling," or dry needling (DN). While IT delivers pharmacological agents (e.g., anesthetics, botulinum toxins, or other agents) through needles,5 DN consists of the insertion of thin monofilament needles, such as those used in acupuncture practice, without any injectate.6 Acupuncture is a therapeutic method of traditional Chinese medicine that differs from conventional DN techniques in that needles are not inserted only in the painful region. Its antinociceptive effects7 include immediate reduction in local, referred, and widespread pain,8 and reduction in peripheral and central sensitization.5 Although a recent randomized clinical trial reported a pain reduction of 84% after one month of treatment, concluding that acupuncture was effective for MMFP,9 available systematic reviews did not find further advantages of acupuncture for MMFP over other treatments such as oral appliances, behavioral therapy, and/or pharmacotherapies.10,11 These controversies might be due to several methodological shortcomings leading to inconclusive results, which exposes the need for high-quality studies comparing the efficacy of acupuncture with other treatments. Botulinum toxin type A (BoNT-A) is an FDA-approved treatment for some pain disorders (such as dystonia and migraine) and has become one of the most popular ITs used to control MFP.4 Animal studies have demonstrated that peripheral injections of BoNT-A have analgesic effects by inhibiting the release of nociceptive mediators (peripherally and centrally), a mechanism independent of its neuromotor effect.12,13 Based on these data, BoNT-A has been used as an off-label treatment to control MMFP.
Moreover, a few well-designed clinical trials14,15,16 have demonstrated the superiority of this substance over placebo, but not over conservative treatments such as oral appliances.15 In addition, a prospective study showed that BoNT-A injections in the masticatory muscles are also effective in reducing MMFP and tension-type headache, recommending this therapy for muscle pain.17 Conversely, the lack of consensus on the effects of BoNT-A stems from the number of low-quality studies available and especially from post-injection adverse effects on muscle and bone tissues, which is why its benefits for MMFP remain unclear.18,19 Studies comparing DN techniques with BoNT-A for myofascial TMD pain are scarce, and no previous study has compared these treatments with an injection placebo group.20-22 Moreover, systematic reviews were inconclusive about the effectiveness of needling therapy, since it was not possible to determine whether the technique (dry or wet) or the injectate was responsible for the improvements.23 Therefore, this study aimed to compare the immediate effects of BoNT-A injections and acupuncture therapy in myofascial TMD patients.
Methodology
Experimental design

This randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and registered in the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form. The sample size estimation was based on the average pain scores of previous studies,14,24 and was performed using the G*Power 3.1.9.2 software (Düsseldorf, Germany), considering: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Under these parameters, 15 participants per group would be sufficient to detect statistically significant differences; considering possible dropouts, 20% was added to each group. Thus, the final sample comprised 54 individuals, randomly divided into three groups: acupuncture (n=18), BoNT-A (n=18), and saline solution (SS) (n=18), the latter serving as a negative control. The allocation sequence was generated with a dedicated software (https://random-allocation-software.software.informer.com/2.0/) and sealed in an opaque envelope operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments. The sample was obtained from women seeking TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female, have MMFP, be in good general health, be aged between 18 and 45 years, have complete dentition, have undergone conservative treatment for at least three months without 30% pain improvement, and be using oral contraceptives to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, a positive history of trauma in the orofacial and neck area, dental pain, self-reported sleep bruxism, or taking any medication for pain control were excluded (Figure 1). The diagnosis of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). To reach the 54 volunteers included in the study, 80 female subjects were screened for eligibility, of whom 65 met the inclusion and exclusion criteria. Before allocation, five individuals were excluded because they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1).

Figure 1. Flowchart of participant enrollment
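An analogous power calculation can be sketched with statsmodels. Note that this uses a between-groups one-way ANOVA power model, whereas the study's repeated-measures design (and G*Power's exact test family, which is not reported here) would lower the required n toward the 15 per group stated above:

    from statsmodels.stats.power import FTestAnovaPower

    # Parameters reported for the sample-size estimation
    n_total = FTestAnovaPower().solve_power(
        effect_size=0.4,   # Cohen's f
        alpha=0.05,
        power=0.9,
        k_groups=3,
    )
    print(f"total N for a between-groups one-way ANOVA: {n_total:.0f}")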
Therapies

Acupuncture

The acupuncture group received four sessions of traditional acupuncture, one 20-min session per week, for one month. The following points were selected: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used by an acupuncturist. For facial points, the needles were 0.22 mm in diameter and 13 mm in length; for distal points, 0.25 mm in diameter and 30 mm in length. The needles were manually inserted and rotated in both clockwise and counterclockwise directions until the patient reported the needling sensation at the needle site or along the meridian.
Botulinum toxin type A (BoNT-A)

BoNT-A (100 U; Botox, Allergan, Irvine, CA, USA) was reconstituted with non-preserved sterile 0.9% saline solution. A single bilateral injection session was performed by a calibrated researcher, delivering 30 U to each masseter and 10 U to each anterior temporalis muscle,15 distributed over five sites per muscle, using a 1 mL syringe with a 30-gauge, 13 mm needle. For the masseter, the injection sites were in the inferior part of the muscle (at the mandibular angle), 5 mm apart. For the anterior temporalis, the sites were determined by a functional test at the most prominent part of the muscle, 1 cm lateral to the eyebrow and 5 mm apart (Figure 2). Injections were performed bilaterally regardless of whether the patient had pain on only one side of the face. The technique consisted of inserting the needle through the soft tissue until reaching bone and then withdrawing it slightly to place the tip inside the muscle. Before each injection, careful aspiration was performed to avoid intravascular administration.

Figure 2. A, marked points for BoNT-A injection; B, temporal muscle injection; C, masseter muscle injection

Saline solution (SS)

SS (0.9% NaCl) was injected bilaterally into the same muscles and sites, following the same protocol and doses described for the BoNT-A injections. BoNT-A and SS injections were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.
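The per-site doses implied by this protocol follow from simple division across the five sites per muscle. The sketch below makes that arithmetic explicit; the reconstitution volume is not reported in the paper, so the dilution used for the volume estimate is a placeholder assumption.

```python
# Per-site BoNT-A dose arithmetic implied by the protocol (5 sites/muscle).
# The reconstitution volume is NOT reported in the paper; 2.0 mL per 100 U
# below is a placeholder assumption for illustration only.
DOSES_U = {"masseter": 30, "anterior temporalis": 10}  # per side, from the text
SITES_PER_MUSCLE = 5

ASSUMED_DILUTION_ML_PER_100U = 2.0  # hypothetical; not stated in the study
units_per_ml = 100 / ASSUMED_DILUTION_ML_PER_100U

for muscle, total_u in DOSES_U.items():
    per_site_u = total_u / SITES_PER_MUSCLE
    per_site_ml = per_site_u / units_per_ml
    print(f"{muscle}: {per_site_u:.0f} U/site "
          f"(~{per_site_ml:.2f} mL/site at the assumed dilution)")
```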
Outcomes

Self-perceived pain: Visual Analog Scale (VAS)

The VAS is a 100 mm horizontal line anchored by the words “no pain” at the left end and “worst pain imaginable” at the right end. Participants were instructed to mark the line at the point representing their current, worst, and average pain over the last month.

Pain sensitivity: Pressure Pain Threshold (PPT)

PPT was assessed with a digital algometer (Kratos DDK-20; São Paulo, Brazil) fitted with a 1 cm² circular flat rod, for bilateral evaluation of the masseter and anterior temporal muscles. Patients sat in a chair with the Frankfurt plane parallel to the ground and the muscles relaxed, and were instructed to indicate the moment the pressure became painful. The rod was pressed perpendicular to the skin surface at a 0.5 kg/cm² rate, following the sequence: right anterior temporal, right masseter, left masseter, and left anterior temporal. After a five-minute rest, the pressure was applied again in the following order: left anterior temporal, right anterior temporal, left masseter, and right masseter.

Electromyographic assessment

Bilateral EMG signals of the anterior temporal and superficial masseter muscles were recorded with an ADS 1200 device (Lynx Electronic Technology Ltd., São Paulo, Brazil), with eight channels, adjustable gain of 1–16,000, a 20–500 Hz band-pass filter, and a sampling frequency of 2,000 Hz per channel. Circular passive bipolar Ag/AgCl double electrodes with 1 cm interelectrode distance were used (Hal Ind. Com. Ltda., São Paulo, Brazil). Before the recordings, the volunteers’ skin was cleaned with cotton and 70% alcohol, and a functional test was performed to identify the center of the muscle belly, where the electrodes were fixed. The reference electrode was placed on the manubrium of the sternum. The electrical activity of each muscle was recorded in the mandibular postural (rest) position and during maximum voluntary contraction (MVC). Each condition was measured three times for five seconds, with a two-minute rest between acquisitions to avoid fatigue. For the MVC recordings, patients clenched on a piece of Parafilm M (American National Can, Chicago, IL, USA) placed bilaterally in the molar region; they were instructed to clench as hard as possible and to maintain the pressure for five seconds under verbal encouragement from the examiner. Signals were acquired with the Lynx AqDados 7.02 software, and root mean square (RMS) values were processed with Lynx AqD Analysis 7.0 (Lynx Electronic Technology Ltd., São Paulo, Brazil). For each acquisition, the RMS value was taken over the 2–4 s interval, and the mean of the three acquisitions per condition (rest and MVC) was used. Because evaluations were carried out at different time points, an acetate plate was fabricated for each patient to standardize the algometer position and electrode placement between sessions; the plate followed anatomical reference lines (external angle of the eye, tragus of the ear, and external angle of the mandible) and was clipped where the algometer and electrodes were placed.
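The RMS processing described above maps directly onto a few lines of code. The sketch below assumes each acquisition is a one-dimensional array sampled at 2,000 Hz and reproduces the 2–4 s windowing and three-trial averaging; the function and variable names are illustrative and are not the Lynx AqD Analysis API.

```python
# Minimal sketch of the RMS processing described above, assuming each trial is
# a 1-D NumPy array sampled at 2,000 Hz. Names are illustrative; this is not
# the Lynx AqD Analysis pipeline itself.
import numpy as np

FS = 2000  # sampling frequency (Hz), as reported

def rms_2_to_4_s(trial: np.ndarray) -> float:
    """RMS over the 2-4 s window of a single 5 s acquisition."""
    window = trial[2 * FS : 4 * FS]
    return float(np.sqrt(np.mean(window ** 2)))

def muscle_rms(trials: list[np.ndarray]) -> float:
    """Mean RMS across the three acquisitions for one condition (rest or MVC)."""
    return float(np.mean([rms_2_to_4_s(t) for t in trials]))

# Synthetic example: three 5 s "MVC" trials of noise standing in for EMG.
rng = np.random.default_rng(0)
fake_trials = [rng.normal(0.0, 50.0, 5 * FS) for _ in range(3)]  # microvolts
print(f"Mean MVC RMS: {muscle_rms(fake_trials):.1f} uV")
```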
Statistical analysis

Data for all groups and periods were expressed as means ± standard deviation (SD) and assessed for normality with the Shapiro-Wilk test. A mixed-design repeated-measures two-way ANOVA was used to test differences among groups and over time, comparing values recorded before treatment (baseline) with those recorded one month after therapy, and comparing the three groups with one another. The ANOVA was followed by post hoc Tukey tests. All analyses were performed in SPSS for Windows (release 21.0, SPSS Inc.) at a 5% significance level.
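A conceptually equivalent analysis can be run outside SPSS. The sketch below uses pingouin's mixed_anova on a synthetic long-format dataframe shaped like this study (three groups, two time points, 18 subjects per group); it is a stand-in for the SPSS mixed-design ANOVA, not the authors' analysis, and the post hoc call uses pingouin's pairwise tests rather than the Tukey procedure named in the text.

```python
# Sketch of a mixed-design (group x time) ANOVA using pingouin as a stand-in
# for the SPSS analysis; the dataframe here is synthetic and for shape only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
groups = ["acupuncture", "BoNT-A", "saline"]
rows = []
for g_idx, group in enumerate(groups):
    for subj in range(18):
        subject_id = f"{group}-{subj}"
        baseline = rng.normal(60, 10)                 # synthetic VAS, mm
        drop = rng.normal([25, 25, 10][g_idx], 8)     # synthetic improvement
        rows.append((subject_id, group, "baseline", baseline))
        rows.append((subject_id, group, "1 month", baseline - drop))
df = pd.DataFrame(rows, columns=["subject", "group", "time", "vas"])

# Mixed-design two-way ANOVA: 'time' within subjects, 'group' between subjects.
aov = pg.mixed_anova(data=df, dv="vas", within="time",
                     between="group", subject="subject")
print(aov.round(3))

# Post hoc comparisons (the study used Tukey's test in SPSS; pairwise tests
# with Bonferroni correction are used here as an approximation).
posthoc = pg.pairwise_tests(data=df, dv="vas", within="time",
                            between="group", subject="subject", padjust="bonf")
print(posthoc.round(3))
```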
Results
Participants’ age did not differ among the three groups: acupuncture (mean 30.3±6.9 years), BoNT-A (mean 34.6±6.5 years), and SS (mean 30.8±6.9 years) (P=.124).

Self-perceived pain (VAS)

Self-perceived pain decreased significantly in all groups after one month of therapy (P<.001). Comparing treatments, the improvement in pain levels did not differ significantly between the acupuncture and BoNT-A groups (P>.05), but both groups showed a significantly greater pain reduction than the SS group (P<.001) (Figure 3).

Figure 3. Self-perceived pain (VAS) values for each group, before and after treatments. ϯ indicates significant differences among groups (P<0.05); * indicates significant differences between time points (P<0.05).

Pain sensitivity (PPT)

For the masseter muscles, the intragroup evaluation showed no significant improvement in the acupuncture and SS groups (P=.359 and P=.220, respectively) after one month of therapy, whereas the BoNT-A group presented significantly higher PPT values (P<.001) at the one-month follow-up. In the intergroup comparisons, acupuncture values did not differ from the SS (P=.751) or BoNT-A (P=.123) groups, but BoNT-A values were significantly higher than those of the SS group (P=.006) (Table 1).

The PPT results for the anterior temporal muscles agreed with those obtained for the masseter muscles: the intragroup evaluation showed no improvement in PPT values for the acupuncture and SS groups (P=.415 and P=.471, respectively) after one month, and only the BoNT-A group differed significantly from baseline to the one-month follow-up (P<.001). Intergroup comparisons revealed that acupuncture values did not differ from the SS (P=1.000) or BoNT-A (P=.111) groups, whereas BoNT-A values differed significantly only from the SS group (P=.016) (Table 1).
Electromyographic activity (EMG)

For both the masseter and anterior temporal muscles, only volunteers in the BoNT-A group presented a significant reduction in EMG activity one month after treatment (P<.001). Intergroup comparisons at the one-month follow-up showed a significant decrease in masseter muscle activity in the BoNT-A group compared with acupuncture (P=.020) and SS (P<.001), and the same pattern was found for the anterior temporal muscles (P<.001) (Figure 4).

Figure 4. Root mean square (RMS, μV) values during maximum voluntary contraction for each group, before and after treatments. A, anterior temporal muscles (mean values); B, masseter muscles (mean values). ϯ indicates significant differences among groups (P<0.05); * indicates significant differences between time points (P<0.05).
Conclusion
After one month of follow-up, all therapies reduced self-perceived pain in patients with MMFP. BoNT-A was not superior to acupuncture in pain reduction, but both were superior to SS; moreover, BoNT-A was the only treatment that improved PPT values. However, only patients treated with BoNT-A showed reduced EMG activity in the injected muscles, which should be considered an adverse effect.

Table 1. Mean and standard deviation (SD) of pressure pain threshold (kg/cm²) for each group, before and after treatments

Muscle / Group         Baseline           1 Month
Temporal
  Acupuncture          0.67 (±0.27)Aa     0.73 (±0.19)Aa
  Botulinum Toxin A    0.54 (±0.22)Aa     0.92 (±0.29)Ab
  Saline Solution      0.61 (±0.23)Aa     0.66 (±0.33)Aa
Masseter
  Acupuncture          0.67 (±0.23)Aa     0.72 (±0.18)Aa
  Botulinum Toxin A    0.53 (±0.19)Aa     0.88 (±0.25)Ab
  Saline Solution      0.57 (±0.17)Aa     0.64 (±0.25)Aa

Different uppercase letters represent significant differences among groups (P<0.05); different lowercase letters denote significant differences between assessment time points (P<0.05). PPT: pressure pain threshold; kg/cm²: kilograms per square centimeter.
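As a quick sanity check on Table 1, the sketch below recomputes the baseline-to-one-month change in group mean PPT from the reported values. This is arithmetic on the published summary means only, not a reanalysis of the underlying data.

```python
# Baseline -> 1-month change in mean PPT (kg/cm^2), computed from the group
# means reported in Table 1. Arithmetic on published summary values only.
table1 = {
    "temporal": {
        "acupuncture": (0.67, 0.73),
        "botulinum toxin A": (0.54, 0.92),
        "saline solution": (0.61, 0.66),
    },
    "masseter": {
        "acupuncture": (0.67, 0.72),
        "botulinum toxin A": (0.53, 0.88),
        "saline solution": (0.57, 0.64),
    },
}

for muscle, group_means in table1.items():
    for group, (baseline, month1) in group_means.items():
        print(f"{muscle:8s} {group:18s} change: {month1 - baseline:+.2f}")
```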
[ "Experimental design", "Therapies", "Acupuncture", "Botulinum toxin type A (BoNT-A)", "Saline Solution (SS)", "Outcomes", "Self-perceived pain: Visual Analog Scale (VAS)", "Pain sensitivity: Pressure Pain Threshold (PPT)", "Electromyographic assessment", "Statistical analysis", "Self-perceived pain (VAS)", "Pain sensitivity (PPT)", "Electromyographic activity (EMG)" ]
[ "This randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and by the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form.\nThe sample size estimation was based on the average pain scores of previous studies,14,24 and it was performed by using the G*Power 3.1.9.2 software (Düsseldorf, Germany). The following parameters were considered: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Considering these standards, 15 participants per group would be sufficient to detect statistically significant differences. However, considering possible dropouts, 20% was added in each group. Thus, the final sample size comprised 54 individuals, which were randomly divided into three groups: acupuncture (n=18); BoNT-A (n=18); and saline solution (SS) (n=18), as a negative control group. For this allocation, a software was used (https://random-allocation-software.software.informer.com/2.0/) and the sequence was sealed in an opaque envelope, which was operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments.\nThe sample was obtained from women seeking for TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female with MMFP, with general good health, be aged between 18 and 45 years, with complete dentition, ongoing conservative treatment for at least three months without 30% of pain improvement and be using oral contraceptives in order to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, positive history of trauma in the orofacial and neck area, with dental pain, with self-reported sleep bruxism, and taking any medication for pain control were excluded from the recruitment (Figure 1). The diagnostic of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). To achieve the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded since they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1).\n\nFigure 1Flowchart of participants enrollment\n", "Acupuncture Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. 
The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.\nAcupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.\nBotulinum toxin type A (BoNT-A) BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face.\n\nFigure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection\n\nThis injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration.\nBoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face.\n\nFigure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection\n\nThis injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. 
Before injection, a careful aspiration was performed to avoid a possible intravascular administration.\nSaline Solution (SS) SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.\nSS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.", "Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.", "BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face.\n\nFigure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection\n\nThis injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration.", "SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.", "Self-perceived pain: Visual Analog Scale (VAS) VAS is a 100 mm horizontal line, anchored by the words “no pain” at the left end, and “worst pain imaginable” at the right end. Participants were instructed to mark a line at any point, representing the level of current, worst, and average pain of the last month.\nVAS is a 100 mm horizontal line, anchored by the words “no pain” at the left end, and “worst pain imaginable” at the right end. 
Participants were instructed to mark a line at any point, representing the level of current, worst, and average pain of the last month.\nPain sensitivity: Pressure Pain Threshold (PPT) PPT was assessed by a digital algometer (Kratos DDK-20; São Paulo, Brazil) with 1 cm2 circular flat rod, for the bilateral evaluation of the masseter and anterior temporal muscles. Patients were instructed to indicate the moment when the pressure became painful. They were sat in a chair with the Frankfurt plane parallel to the ground, and muscles should be relaxed. The circular flat rod was perpendicularly pressed to the surface skin at a 0.5 kg/cm2 rate, following the sequence: right anterior temporal, right masseter, left masseter, and left anterior temporal muscles. After a five-minutes rest, the pressure was applied again, as follows: left anterior temporal, right anterior temporal, left masseter, and right masseter.\nPPT was assessed by a digital algometer (Kratos DDK-20; São Paulo, Brazil) with 1 cm2 circular flat rod, for the bilateral evaluation of the masseter and anterior temporal muscles. Patients were instructed to indicate the moment when the pressure became painful. They were sat in a chair with the Frankfurt plane parallel to the ground, and muscles should be relaxed. The circular flat rod was perpendicularly pressed to the surface skin at a 0.5 kg/cm2 rate, following the sequence: right anterior temporal, right masseter, left masseter, and left anterior temporal muscles. After a five-minutes rest, the pressure was applied again, as follows: left anterior temporal, right anterior temporal, left masseter, and right masseter.", "VAS is a 100 mm horizontal line, anchored by the words “no pain” at the left end, and “worst pain imaginable” at the right end. Participants were instructed to mark a line at any point, representing the level of current, worst, and average pain of the last month.", "PPT was assessed by a digital algometer (Kratos DDK-20; São Paulo, Brazil) with 1 cm2 circular flat rod, for the bilateral evaluation of the masseter and anterior temporal muscles. Patients were instructed to indicate the moment when the pressure became painful. They were sat in a chair with the Frankfurt plane parallel to the ground, and muscles should be relaxed. The circular flat rod was perpendicularly pressed to the surface skin at a 0.5 kg/cm2 rate, following the sequence: right anterior temporal, right masseter, left masseter, and left anterior temporal muscles. After a five-minutes rest, the pressure was applied again, as follows: left anterior temporal, right anterior temporal, left masseter, and right masseter.", "The bilateral EMG signals of the anterior temporal and the superficial masseter muscles were recorded by the ADS 1200 device (Lynx Electronic Technology Ltd, Sao Paulo, Brazil), which has eight channels, adjusted gain of 1e16,000, band-pass filter of 20e500 Hz and sampling frequency of 2000 Hz per channel. A circular passive bipolar Ag/AgCl double electrode with 1 cm interelectrode distance was used (Hal Ind. Com. Ltda, Sao Paulo, Brazil).\nBefore the recordings, the volunteers’ skin was cleaned with cotton and 70% alcohol, and a function test was performed to identify the center of the muscle venter, in which the electrodes should be fixed. The reference electrode was placed on the participants’ manubrium of the sternum. The electrical activity of each muscle was recorded in mandibular postural position (rest) and maximum volunteer contraction (MVC). 
Each activity was measured three times, for five seconds, with a two minute period of rest between them, in order to avoid fatigue. The MVC activity was obtained by requiring the patients to chew a piece of Parafilm M (American National Can, Chicago, IL, USA) that was placed bilaterally in the molar region. Participants were instructed to clench their jaw to the maximum possible extent, and to maintain the pressure for five seconds, as were verbally stimulated by the examiner.\nSimultaneous signals were obtained by the software Lynx AqDados 7.02 (Lynx Electronic Technology Ltd, Sao Paulo, Brazil), and the root mean square (RMS) values were processed by the software Lynx AqD Analysis 7.0 (Lynx Electronic Technology Ltd, Sao Paulo, Brazil). The RMS values of each acquisition were considered as those obtained in the 2s and 4s-interval. The mean values of the three acquisitions (Rest and MI) were considered.\nAs the evaluations were carried out in different timepoints, an acetate plate was fabricated for each patient to standardize the algometer position and the electrodes’ placement between sessions. The acetate plate followed the anatomic reference lines (external angle of the eye, tragus of the ear, and external angle of the mandible), and was clipped where the algometer and electrodes were placed.", "All data for groups and periods were expressed as means ± standard deviation (SD) and were assessed for normal distribution with the Shapiro-Wilk test. A mixed-design repeated measures two-way ANOVA test was used to observe the difference among groups over time and within the group. The statistical analysis compared the results observed before the treatment (baseline) with those observed one month after the therapies. Moreover, the three groups were compared to verify a possible statistically significant difference among therapies. The ANOVA test was followed by post hoc Tukey’s test. All analyses were performed using SPSS for Windows (release 21.0, SPSS Inc.), with a 5% significance level.", "Self-perceived pain showed a significant decrease in all groups after one-month of therapy (P<.001). When comparing the different treatments, improvement in pain levels did not significantly differ between the acupuncture and BoNT-A groups (P>.05), but both groups presented a significant reduction on pain compared to the SS group (P<.001) (Figure 3).\n\nFigure 3Self-perceived pain (VAS) values for each group, before and after treatments.ϯ represent significant differences among groups (P<0.05); * represent significant differences among time points (P<0.05).\n", "Considering the PPT values for the masseter muscles, the intragroup evaluation demonstrated that the acupuncture and SS groups did not show significant improvements (P=.359 and P=.220, respectively) after one-month of therapy. Notwithstanding, BoNT-A group presented significantly higher PPT values (P<.001) after one month of follow-up. Comparisons among groups, showed that acupuncture values did not differ from SS and BoNT-A (P=.751 and P=.123, respectively) groups. On the other hand, BoNT-A values were significantly higher than those obtained in the SS group (P=.006) (Table 1).\nFurthermore, the PPT results for the anterior temporal muscles are in accordance with those achieve for the masseter muscles. Thus, intragroup evaluation showed no improvements on PPT values for the acupuncture and SS groups (P=.415 and P=.471, respectively), after one month of evaluation. 
Only the BoNT-A group presented statistically significant differences from baseline to the first-month follow-up (P<.001). Intergroup comparisons reveled that acupuncture values did not differ from SS (P=1.000) and BoNT-A (P=.111) values. However, BoNT-A values were statistically significant different just when compared with the SS group (P=.016) (Table 1).", "The EMG results for both masseter and anterior temporal muscles demonstrated that only volunteers in the BoNT-A group presented a significant reduction of the EMG activity one-month after the treatment (P<.001). Intergroup comparisons at the one-month follow-up showed a significant decrease in the masseter muscle activity in BoNT-A group compared to acupuncture (P =.020) and SS (P <.001), and these results were also found for the anterior temporal muscles (P <.001) (Figure 4).\n\nFigure 4Root mean square scores (RMS μV) of maximum volunteer contraction for each group, before and after treatments. A, anterior temporal muscles mean values; B, masseter muscles mean values.ϯ represent significant differences among groups (P<0.05); * represent significant differences among time points (P<0.05).\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methodology", "Experimental design", "Therapies", "Acupuncture", "Botulinum toxin type A (BoNT-A)", "Saline Solution (SS)", "Outcomes", "Self-perceived pain: Visual Analog Scale (VAS)", "Pain sensitivity: Pressure Pain Threshold (PPT)", "Electromyographic assessment", "Statistical analysis", "Results", "Self-perceived pain (VAS)", "Pain sensitivity (PPT)", "Electromyographic activity (EMG)", "Discussion", "Conclusion" ]
[ "Myofascial pain (MFP) is a disorder characterized by localized muscle tenderness, regional pain, and limited range of motion.1 It is the most typical cause of persistent regional pain, such as back and shoulder pain, tension-type headaches, and facial pain.2 Furthermore, it is a common condition in dentistry with prevalence from 10% to 68% among subjects with temporomandibular disorders (TMD).3\nMasticatory myofascial pain (MMFP) has a complex pathogenesis expressed by a multifactorial etiology, which led to the proposal of numerous conservatives, reversible, and minimally invasive therapies to treat this condition.4 The needling technique is a minimally invasive therapy widely used for MMFP and it can be classified as an injection technique (IT), often referred to as “wet needling,” and dry needling (DN). While the IT deliver pharmacological agents, e.g., anesthetics, botulinum toxins or other agents, with needles,5 the DN consists in the insertion of thin monofilament needles, as the ones used for acupuncture practice, without any injectate.6\nAcupuncture is a therapeutic method of the traditional Chinese medicine which differs from conventional DN techniques since needles are not inserted just in the painful region. Its antinociceptive effects7 include immediate reduction in local, referred, and widespread pain,8 and reduction in peripheral and central sensitization.5 Although a recent randomized clinical trial reported pain reduction of 84% after one month of treatment – concluding that acupuncture was effective for MMFP pain9, available systematic reviews did not find further advantages in the use of acupuncture for MMFP over other treatments such as oral appliances, behavioral therapy, and/or pharmacotherapies.10,11 These controversies might be due to several methodological shortcomings, leading to inconclusive results, which expose the need for high quality studies comparing the efficacy of acupuncture with other treatments.\nBotulinum toxin type A (BoNT-A) is an FDA-approved treatment for some pain disorders (as dystonia and migraine), becoming one of the most popular IT used to control MFP.4 Animal studies have demonstrated that peripheral injections of BoNT-A have analgesic effects on pain stages by inhibiting the release of nociceptive mediators (peripherally and centrally), mechanism independent of its neuromotor effect.12,13 Based on this data, BoNT-A has been used as an off-label treatment to control MMFP. 
Moreover, a few well-designed clinical trials14,15,16 have demonstrated the superiority of this substance over placebo, but not over conservative treatments like oral appliances.15 Besides, a prospective study showed that BoNT-A injections in the masticatory muscles are also effective in reducing MMFP and tension-type headache, recommending this therapy for muscle pain.17 Conversely, the lack of consensus on the effects of BoNT-A is due to the number of low-quality studies available, and especially to the post-injection adverse effects in muscle and bone tissues, being the reason why its benefits for MMFP remain unclear.18,19\nStudies comparing DN techniques with BoNT-A for myofascial TMD pain are scarce, and no previous study compared these treatments with an injection placebo group.20-22 Moreover, systematic reviews were inconclusive about the effectiveness of needling therapy, since it was not possible to determine if the technique (dry or wet) or the injectate were responsible for the improvements.23 Therefore, this study aims to compare the immediate effects of BoNT-A injections and acupuncture therapy in myofascial TMD patients.", "Experimental design This randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and by the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form.\nThe sample size estimation was based on the average pain scores of previous studies,14,24 and it was performed by using the G*Power 3.1.9.2 software (Düsseldorf, Germany). The following parameters were considered: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Considering these standards, 15 participants per group would be sufficient to detect statistically significant differences. However, considering possible dropouts, 20% was added in each group. Thus, the final sample size comprised 54 individuals, which were randomly divided into three groups: acupuncture (n=18); BoNT-A (n=18); and saline solution (SS) (n=18), as a negative control group. For this allocation, a software was used (https://random-allocation-software.software.informer.com/2.0/) and the sequence was sealed in an opaque envelope, which was operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments.\nThe sample was obtained from women seeking for TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female with MMFP, with general good health, be aged between 18 and 45 years, with complete dentition, ongoing conservative treatment for at least three months without 30% of pain improvement and be using oral contraceptives in order to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, positive history of trauma in the orofacial and neck area, with dental pain, with self-reported sleep bruxism, and taking any medication for pain control were excluded from the recruitment (Figure 1). The diagnostic of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). 
To achieve the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded since they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1).\n\nFigure 1Flowchart of participants enrollment\n\nThis randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and by the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form.\nThe sample size estimation was based on the average pain scores of previous studies,14,24 and it was performed by using the G*Power 3.1.9.2 software (Düsseldorf, Germany). The following parameters were considered: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Considering these standards, 15 participants per group would be sufficient to detect statistically significant differences. However, considering possible dropouts, 20% was added in each group. Thus, the final sample size comprised 54 individuals, which were randomly divided into three groups: acupuncture (n=18); BoNT-A (n=18); and saline solution (SS) (n=18), as a negative control group. For this allocation, a software was used (https://random-allocation-software.software.informer.com/2.0/) and the sequence was sealed in an opaque envelope, which was operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments.\nThe sample was obtained from women seeking for TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female with MMFP, with general good health, be aged between 18 and 45 years, with complete dentition, ongoing conservative treatment for at least three months without 30% of pain improvement and be using oral contraceptives in order to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, positive history of trauma in the orofacial and neck area, with dental pain, with self-reported sleep bruxism, and taking any medication for pain control were excluded from the recruitment (Figure 1). The diagnostic of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). To achieve the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded since they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1).\n\nFigure 1Flowchart of participants enrollment\n\nTherapies Acupuncture Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. 
The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.\nAcupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.\nBotulinum toxin type A (BoNT-A) BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face.\n\nFigure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection\n\nThis injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration.\nBoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. 
For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face.\n\nFigure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection\n\nThis injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration.\nSaline Solution (SS) SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.\nSS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.\nAcupuncture Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.\nAcupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.\nBotulinum toxin type A (BoNT-A) BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. 
Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face.\n\nFigure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection\n\nThis injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration.\nBoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face.\n\nFigure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection\n\nThis injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration.\nSaline Solution (SS) SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.\nSS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment.", "This randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and by the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form.\nThe sample size estimation was based on the average pain scores of previous studies,14,24 and it was performed by using the G*Power 3.1.9.2 software (Düsseldorf, Germany). The following parameters were considered: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Considering these standards, 15 participants per group would be sufficient to detect statistically significant differences. However, considering possible dropouts, 20% was added in each group. 
Thus, the final sample size comprised 54 individuals, which were randomly divided into three groups: acupuncture (n=18); BoNT-A (n=18); and saline solution (SS) (n=18), as a negative control group. For this allocation, a software was used (https://random-allocation-software.software.informer.com/2.0/) and the sequence was sealed in an opaque envelope, which was operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments.\nThe sample was obtained from women seeking for TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female with MMFP, with general good health, be aged between 18 and 45 years, with complete dentition, ongoing conservative treatment for at least three months without 30% of pain improvement and be using oral contraceptives in order to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, positive history of trauma in the orofacial and neck area, with dental pain, with self-reported sleep bruxism, and taking any medication for pain control were excluded from the recruitment (Figure 1). The diagnostic of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). To achieve the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded since they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1).\n\nFigure 1Flowchart of participants enrollment\n", "Acupuncture Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian.\nAcupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. 
Thus, the final sample comprised 54 individuals, who were randomly divided into three groups: acupuncture (n=18), BoNT-A (n=18), and saline solution (SS) (n=18), the latter serving as a negative control. The allocation sequence was generated with dedicated software (https://random-allocation-software.software.informer.com/2.0/) and sealed in an opaque envelope handled by a researcher not involved in any other procedure of this study. The investigator assessing the outcomes was masked to the treatment assignments.

The sample was recruited from women seeking TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. To be included, patients had to be female, present MMFP, be in good general health, be aged between 18 and 45 years, have complete dentition, have undergone conservative treatment for at least three months without at least 30% pain improvement, and be using oral contraceptives. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, a history of trauma in the orofacial and neck area, dental pain, or self-reported sleep bruxism, and those taking any medication for pain control, were excluded from recruitment (Figure 1). The diagnosis of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). To reach the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded because they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1).

Figure 1. Flowchart of participant enrollment.
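The allocation procedure described above (a computer-generated sequence concealed in an opaque envelope) can be illustrated with any seeded shuffling routine; the sketch below is a stand-in for the cited allocation software, not its actual algorithm.

```python
# Illustrative allocation of 54 participants into three equal groups.
# This mimics the outcome of the cited allocation software but is not
# its actual algorithm; the seed is arbitrary and shown only so the
# sequence is reproducible.
import random

groups = ["acupuncture", "BoNT-A", "saline"] * 18  # 18 per group, 54 total
rng = random.Random(2014)  # hypothetical seed
rng.shuffle(groups)

# Each position in the shuffled list is the assignment for the
# corresponding enrolled participant, concealed until enrollment.
for participant_id, assignment in enumerate(groups, start=1):
    print(participant_id, assignment)
```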
Acupuncture

The acupuncture group received four sessions of traditional acupuncture, one 20-minute session per week, for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by an acupuncturist. For facial points, the needles were 0.22 mm in diameter and 13 mm in length; for distal points, they were 0.25 mm in diameter and 30 mm in length. The needles were manually inserted and rotated in both clockwise and counterclockwise directions until the patient reported the needling sensation at the needle site or along the meridian.

Self-perceived pain: Visual Analog Scale (VAS)

The VAS is a 100 mm horizontal line anchored by the words “no pain” at the left end and “worst pain imaginable” at the right end. Participants were instructed to mark the line at any point representing the level of their current, worst, and average pain over the last month.

Pain sensitivity: Pressure Pain Threshold (PPT)

PPT was assessed with a digital algometer (Kratos DDK-20; São Paulo, Brazil) fitted with a 1 cm2 circular flat rod, for bilateral evaluation of the masseter and anterior temporal muscles. Patients were instructed to indicate the moment when the pressure became painful. They were seated in a chair with the Frankfurt plane parallel to the ground and with the muscles relaxed. The circular flat rod was pressed perpendicularly against the skin surface at a 0.5 kg/cm2 rate, following the sequence: right anterior temporal, right masseter, left masseter, and left anterior temporal muscles. After a five-minute rest, the pressure was applied again in the following order: left anterior temporal, right anterior temporal, left masseter, and right masseter.
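Since each site was tested twice in counterbalanced order, a per-site PPT is naturally summarized by combining the two readings. The sketch below assumes a simple mean, which is an assumption on our part: the text describes the two passes but not the combination rule, and the values shown are illustrative, not study data.

```python
# Summarize the two algometer passes per site as a mean value (kg/cm^2).
# Averaging is an assumed combination rule; readings are placeholders.
readings = {
    # site: [pass 1, pass 2]
    "right anterior temporal": [0.62, 0.68],
    "left anterior temporal":  [0.66, 0.70],
    "right masseter":          [0.55, 0.59],
    "left masseter":           [0.57, 0.61],
}

ppt_per_site = {site: sum(vals) / len(vals) for site, vals in readings.items()}
for site, ppt in ppt_per_site.items():
    print(f"{site}: {ppt:.2f} kg/cm^2")
```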
Electromyographic assessment

The bilateral EMG signals of the anterior temporal and superficial masseter muscles were recorded with the ADS 1200 device (Lynx Electronic Technology Ltd, Sao Paulo, Brazil), which has eight channels, an adjustable gain of 1 to 16,000, a 20 to 500 Hz band-pass filter, and a sampling frequency of 2000 Hz per channel. Circular passive bipolar Ag/AgCl double electrodes with a 1 cm interelectrode distance were used (Hal Ind. Com. Ltda, Sao Paulo, Brazil).

Before the recordings, the volunteers’ skin was cleaned with cotton and 70% alcohol, and a function test was performed to identify the center of the muscle belly, where the electrodes were fixed. The reference electrode was placed on the manubrium of the sternum. The electrical activity of each muscle was recorded in the mandibular postural position (rest) and during maximum voluntary contraction (MVC). Each activity was measured three times, for five seconds, with a two-minute rest between measurements to avoid fatigue. The MVC activity was obtained by asking the patients to clench on a piece of Parafilm M (American National Can, Chicago, IL, USA) placed bilaterally in the molar region. Participants were instructed to clench their jaws to the maximum possible extent and to maintain the pressure for five seconds, while being verbally encouraged by the examiner.

Simultaneous signals were acquired with the Lynx AqDados 7.02 software (Lynx Electronic Technology Ltd, Sao Paulo, Brazil), and the root mean square (RMS) values were processed with the Lynx AqD Analysis 7.0 software (Lynx Electronic Technology Ltd, Sao Paulo, Brazil). For each acquisition, the RMS value was taken from the 2–4 s interval, and the mean of the three acquisitions (rest and MVC) was used for analysis.

As the evaluations were carried out at different time points, an acetate plate was fabricated for each patient to standardize the algometer position and electrode placement between sessions. The acetate plate followed anatomical reference lines (external angle of the eye, tragus of the ear, and external angle of the mandible) and was clipped where the algometer and electrodes were placed.
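The signal processing described above (20–500 Hz band, 2000 Hz sampling, RMS over the 2–4 s window, mean of three trials) can be sketched as follows; the filter order and implementation are assumptions, as the article reports only the acquisition settings of the commercial software.

```python
# Band-pass an EMG trial (2000 Hz sampling, 20-500 Hz band) and compute
# the RMS of the 2-4 s window, mirroring the processing described above.
# The 4th-order Butterworth filter is an assumed implementation detail.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000  # Hz, per the acquisition settings

def trial_rms(raw: np.ndarray) -> float:
    b, a = butter(4, [20, 500], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, raw)
    window = filtered[2 * FS : 4 * FS]  # central 2-4 s segment
    return float(np.sqrt(np.mean(window ** 2)))

# Mean RMS of the three MVC trials, as in the analysis described above.
trials = [np.random.randn(5 * FS) * 100 for _ in range(3)]  # placeholder signals
mvc_rms = float(np.mean([trial_rms(t) for t in trials]))
print(f"MVC RMS: {mvc_rms:.1f} uV (placeholder data)")
```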
Statistical analysis

Data for all groups and periods were expressed as means ± standard deviation (SD) and assessed for normal distribution with the Shapiro-Wilk test. A mixed-design, repeated-measures two-way ANOVA was used to test for differences among groups over time and within each group. The analysis compared the results observed before treatment (baseline) with those observed one month after the therapies, and the three groups were compared to verify possible statistically significant differences among the therapies. The ANOVA was followed by Tukey’s post hoc test. All analyses were performed with SPSS for Windows (release 21.0, SPSS Inc.), at a 5% significance level.
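The same mixed-design analysis can be reproduced outside SPSS. The sketch below uses the pingouin package as one possible stand-in, with group as the between-subject factor and time as the within-subject factor; the file and column names are hypothetical.

```python
# Mixed-design repeated-measures ANOVA: group (between) x time (within),
# followed by pairwise post hoc tests. Column names are illustrative;
# 'df' must hold one row per participant per time point.
import pandas as pd
import pingouin as pg

df = pd.read_csv("ppt_long_format.csv")  # hypothetical long-format file

aov = pg.mixed_anova(
    data=df, dv="ppt", within="time", between="group", subject="participant"
)
print(aov)

# Pairwise comparisons standing in for the post hoc step (pingouin does
# not implement Tukey's test for mixed designs, so corrected t-tests
# are shown here instead).
posthoc = pg.pairwise_tests(
    data=df, dv="ppt", within="time", between="group",
    subject="participant", padjust="bonf",
)
print(posthoc)
```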
Results

Participants’ age did not differ among the three groups: acupuncture (mean age 30.3±6.9 years), BoNT-A (mean age 34.6±6.5 years), and SS (mean age 30.8±6.9 years) (P=.124).

Self-perceived pain (VAS)

Self-perceived pain showed a significant decrease in all groups after one month of therapy (P<.001). When comparing the different treatments, the improvement in pain levels did not significantly differ between the acupuncture and BoNT-A groups (P>.05), but both groups presented a significant reduction in pain compared to the SS group (P<.001) (Figure 3).

Figure 3. Self-perceived pain (VAS) values for each group, before and after treatments. ϯ represents significant differences among groups (P<0.05); * represents significant differences among time points (P<0.05).

Pain sensitivity (PPT)

For the PPT values of the masseter muscles, the intragroup evaluation demonstrated that the acupuncture and SS groups did not improve significantly (P=.359 and P=.220, respectively) after one month of therapy, whereas the BoNT-A group presented significantly higher PPT values (P<.001) at the one-month follow-up. Comparisons among groups showed that the acupuncture values did not differ from those of the SS and BoNT-A groups (P=.751 and P=.123, respectively), whereas the BoNT-A values were significantly higher than those of the SS group (P=.006) (Table 1).

The PPT results for the anterior temporal muscles were consistent with those for the masseter muscles. The intragroup evaluation showed no improvement in PPT values for the acupuncture and SS groups (P=.415 and P=.471, respectively) after one month, and only the BoNT-A group showed a statistically significant difference from baseline to the one-month follow-up (P<.001). Intergroup comparisons revealed that the acupuncture values did not differ from the SS (P=1.000) or BoNT-A (P=.111) values, whereas the BoNT-A values differed significantly only from those of the SS group (P=.016) (Table 1).
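As a worked example of the magnitude behind these comparisons, the group means reported in Table 1 yield the following baseline-to-follow-up changes (group-level arithmetic only; individual-level changes cannot be recovered from published summaries).

```python
# Baseline -> 1-month change in mean PPT (kg/cm^2), computed from the
# group means reported in Table 1. These are group-level summaries only.
table1 = {
    # (muscle, group): (baseline, one_month)
    ("temporal", "Acupuncture"): (0.67, 0.73),
    ("temporal", "BoNT-A"):      (0.54, 0.92),
    ("temporal", "Saline"):      (0.61, 0.66),
    ("masseter", "Acupuncture"): (0.67, 0.72),
    ("masseter", "BoNT-A"):      (0.53, 0.88),
    ("masseter", "Saline"):      (0.57, 0.64),
}

for (muscle, group), (pre, post) in table1.items():
    delta = post - pre
    pct = 100 * delta / pre
    print(f"{muscle}/{group}: {delta:+.2f} kg/cm^2 ({pct:+.0f}%)")
```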
Electromyographic activity (EMG)

The EMG results for both the masseter and anterior temporal muscles showed that only the volunteers in the BoNT-A group presented a significant reduction in EMG activity one month after treatment (P<.001). Intergroup comparisons at the one-month follow-up showed a significant decrease in masseter muscle activity in the BoNT-A group compared to the acupuncture (P=.020) and SS (P<.001) groups, and the same was found for the anterior temporal muscles (P<.001) (Figure 4).

Figure 4. Root mean square scores (RMS μV) of maximum voluntary contraction for each group, before and after treatments. A, anterior temporal muscles (mean values); B, masseter muscles (mean values). ϯ represents significant differences among groups (P<0.05); * represents significant differences among time points (P<0.05).

Discussion

The main findings of this study were that, after four weeks, all treatment groups (acupuncture, BoNT-A, and SS) significantly reduced self-perceived pain, with no difference between acupuncture and BoNT-A, while both treatments were superior to SS. Moreover, considering the PPT values for both the masseter and anterior temporal muscles, only the BoNT-A group showed a significant increase in PPT. Likewise, only patients treated with BoNT-A showed a significant reduction in EMG activity in both studied muscles.

Pain is considered one of the most common reasons for a TMD patient to seek treatment.26 All therapies used in this study significantly decreased self-perceived pain after one month (P<.001).
This was expected, since the literature shows that the effectiveness of needling therapy in managing MMFP of the masticatory muscles does not necessarily depend on the needling type (dry or wet) or on the injected substance, suggesting that the pain-reducing effect could be a consequence of the needle penetrating the skin.23

Comparing the treatments, the improvement in pain levels did not significantly differ between the acupuncture and BoNT-A groups (P>.05); however, both groups presented a significant reduction in pain compared to the SS group (P<.001). A previous study showed that BoNT-A injection and dry needling yield similarly satisfactory therapeutic outcomes regarding pain relief in patients with MMFP.22 Nevertheless, to the best of our knowledge, this is the first study comparing BoNT-A to acupuncture (rather than to conventional dry needling) while also including a placebo group. Considering that both BoNT-A and acupuncture have specific pain-relief mechanisms (reduction in peripheral and central sensitization), this can explain their better results compared with the SS group. Furthermore, two factors should be emphasized to explain the self-reported pain reduction: first, the placebo effect, in which the expectations generated by receiving a therapeutic approach can modulate pain perception, a mechanism known as placebo analgesia;27 and second, the natural course of MMFP, which is generally favorable and presents varying symptoms.28,29

Patients with MMFP usually present reduced PPT.30 PPT may be the most sensitive measure to detect endogenous pain inhibitory mechanisms.31 This measurement tests deep pain sensitivity, which is probably mediated by A-δ or C fibers; although pressure applied to the skin could reflect the pain sensitivity of both superficial and deep structures, deep-tissue nociceptors mediate the major component of pressure-induced pain during pressure algometry.32

Considering the PPT values for the masseter and anterior temporal muscles, the intragroup evaluation demonstrated that only the BoNT-A group presented significantly higher PPT values (P<.001) at the follow-up. This result can possibly be explained by the distinctive antinociceptive mechanism of BoNT-A. After BoNT-A injection, there is a temporary inhibition of pain-neurotransmitter release at the pain site, reducing peripheral sensitization. Additionally, because BoNT-A undergoes axonal transport along A-δ and/or C fibers to the central nervous system,33 there is a direct or indirect reduction of central sensitization, hyperalgesia, and allodynia (features usually associated with chronic pain, such as refractory MFP) through the reduction of peripheral nerve over-activity, resulting in an increased pain threshold.34

Some reasons might explain why the acupuncture group did not show significant improvements in PPT: first, the short follow-up (one month), since a systematic review reported that dry needling treatments (such as acupuncture) may increase PPT immediately or up to 12 weeks after therapy;35 and second, the type of acupuncture performed in this study (manual), as the literature suggests that electroacupuncture is more suitable for improving PPT.36

In the between-group comparisons of PPT values, BoNT-A treatment led to a greater increase in PPT than that observed in the SS group.
This result is in accordance with a previous study that showed improvements in PPT after BoNT-A injection, compared with an SS group, in patients with MMFP.37 Nevertheless, systematic reviews have concluded that there is no consensus on the therapeutic benefits of BoNT-A for TMDs38 and that the ratio of efficacy to adverse effects should be evaluated. Note that, in this study, BoNT-A improved all variables in a refractory MMFP population. These results reinforce that BoNT-A should not be the first option for MMFP treatment because of possible adverse effects, but it could be considered for patients whose pain is not mitigated by more conservative management, as corroborated by other studies.14,15,39

The EMG results for the masseter and anterior temporal muscles demonstrated that only patients treated with BoNT-A presented a significant reduction in EMG activity one month after treatment (P<.001). Intergroup comparisons showed a significant decrease in activity for both muscles in the BoNT-A group compared with the acupuncture and SS groups. A reduction in the EMG activity of the masticatory muscles is expected after an intramuscular injection of BoNT-A, since this toxin inhibits the release of acetylcholine at the neuromuscular junction of presynaptic motor neurons, reducing muscle activity.39 In fact, temporary regional weakness is one of the most common adverse effects related to the use of BoNT-A in TMD treatment.19 A recent report also showed a significant reduction in maximum occlusal force in a BoNT-A group compared with SS and no-injection groups.40

The occurrence and intensity of these adverse effects are directly related to higher doses and repeated injections.19 In the present study, only a single injection of BoNT-A was used (30U in each masseter and 10U in each anterior temporal muscle). Based on previous investigations,15 this dosage is able to reduce pain in MMFP patients while still allowing full recovery of muscle activity, which returned to normal EMG values after three months.15 Even so, a reduction in the EMG activity of the masticatory muscles is an adverse effect that must be considered; since acupuncture did not promote a significant change in EMG values, this could be considered an advantage of acupuncture over BoNT-A. Notably, the reduction in EMG values in the BoNT-A group is not responsible for the decrease in subjective pain: studies have demonstrated that BoNT-A has an analgesic effect that is independent of, and precedes, its neuromuscular effects. The EMG results in the acupuncture and BoNT-A groups confirm this disconnection between muscle electrical activity and muscle pain. Besides the EMG reduction, patients receiving BoNT-A injections also reported adverse effects such as edema and pain during injection, the latter also reported by the SS group. Conversely, the self-reported adverse effects in the acupuncture group comprised itching and reddening of the skin, without pain or edema.

These results suggest that all studied needling therapies (acupuncture, BoNT-A, and SS) are effective in reducing self-perceived pain after one month of follow-up in patients with refractory MMFP, and that BoNT-A seems to be superior because of the improvement in PPT values. Nevertheless, caution is necessary when judging these findings, since some limitations should be considered. Selecting only women as the study population prevents our results from being generalized to male patients; however, this was necessary because MMFP is more prevalent among women.
Even though the one-month follow-up is a restricted evaluation period, our main objective was to assess the immediate effects of the proposed treatments; studies considering longer evaluation periods should be performed. Finally, the effects of the proposed therapies on the psychosocial status of myofascial TMD patients should also be evaluated, considering that psychosocial variables generally act as chronification factors for TMDs.

Conclusions

After one month of follow-up, all therapies reduced self-perceived pain in patients with MMFP. BoNT-A was not superior to acupuncture in pain reduction, but both were superior to SS; moreover, BoNT-A was the only treatment that improved PPT values. However, only patients treated with BoNT-A showed reduced EMG activity in the injected muscles, which should be considered an adverse effect.

Table 1. Mean and standard deviation (SD) of pressure pain threshold (kg/cm2) for each group, before and after treatments

Muscle / Group         Baseline         1 Month
Temporal
  Acupuncture          0.67 (±0.27)Aa   0.73 (±0.19)Aa
  Botulinum Toxin A    0.54 (±0.22)Aa   0.92 (±0.29)Ab
  Saline Solution      0.61 (±0.23)Aa   0.66 (±0.33)Aa
Masseter
  Acupuncture          0.67 (±0.23)Aa   0.72 (±0.18)Aa
  Botulinum Toxin A    0.53 (±0.19)Aa   0.88 (±0.25)Ab
  Saline Solution      0.57 (±0.17)Aa   0.64 (±0.25)Aa

Different uppercase letters represent significant differences among groups (P<0.05); different lowercase letters denote significant differences among assessment time points (P<0.05). PPT: pressure pain threshold; kg/cm2: kilogram per square centimeter.
[ "intro", "methods", null, null, null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusions" ]
[ "Botulinum toxin", "Acupuncture", "Myofascial pain", "Temporomandibular disorders", "Chronic pain" ]
Introduction: Myofascial pain (MFP) is a disorder characterized by localized muscle tenderness, regional pain, and limited range of motion.1 It is the most typical cause of persistent regional pain, such as back and shoulder pain, tension-type headaches, and facial pain.2 Furthermore, it is a common condition in dentistry with prevalence from 10% to 68% among subjects with temporomandibular disorders (TMD).3 Masticatory myofascial pain (MMFP) has a complex pathogenesis expressed by a multifactorial etiology, which led to the proposal of numerous conservatives, reversible, and minimally invasive therapies to treat this condition.4 The needling technique is a minimally invasive therapy widely used for MMFP and it can be classified as an injection technique (IT), often referred to as “wet needling,” and dry needling (DN). While the IT deliver pharmacological agents, e.g., anesthetics, botulinum toxins or other agents, with needles,5 the DN consists in the insertion of thin monofilament needles, as the ones used for acupuncture practice, without any injectate.6 Acupuncture is a therapeutic method of the traditional Chinese medicine which differs from conventional DN techniques since needles are not inserted just in the painful region. Its antinociceptive effects7 include immediate reduction in local, referred, and widespread pain,8 and reduction in peripheral and central sensitization.5 Although a recent randomized clinical trial reported pain reduction of 84% after one month of treatment – concluding that acupuncture was effective for MMFP pain9, available systematic reviews did not find further advantages in the use of acupuncture for MMFP over other treatments such as oral appliances, behavioral therapy, and/or pharmacotherapies.10,11 These controversies might be due to several methodological shortcomings, leading to inconclusive results, which expose the need for high quality studies comparing the efficacy of acupuncture with other treatments. Botulinum toxin type A (BoNT-A) is an FDA-approved treatment for some pain disorders (as dystonia and migraine), becoming one of the most popular IT used to control MFP.4 Animal studies have demonstrated that peripheral injections of BoNT-A have analgesic effects on pain stages by inhibiting the release of nociceptive mediators (peripherally and centrally), mechanism independent of its neuromotor effect.12,13 Based on this data, BoNT-A has been used as an off-label treatment to control MMFP. 
Moreover, a few well-designed clinical trials14,15,16 have demonstrated the superiority of this substance over placebo, but not over conservative treatments like oral appliances.15 Besides, a prospective study showed that BoNT-A injections in the masticatory muscles are also effective in reducing MMFP and tension-type headache, recommending this therapy for muscle pain.17 Conversely, the lack of consensus on the effects of BoNT-A is due to the number of low-quality studies available, and especially to the post-injection adverse effects in muscle and bone tissues, being the reason why its benefits for MMFP remain unclear.18,19 Studies comparing DN techniques with BoNT-A for myofascial TMD pain are scarce, and no previous study compared these treatments with an injection placebo group.20-22 Moreover, systematic reviews were inconclusive about the effectiveness of needling therapy, since it was not possible to determine if the technique (dry or wet) or the injectate were responsible for the improvements.23 Therefore, this study aims to compare the immediate effects of BoNT-A injections and acupuncture therapy in myofascial TMD patients. Methodology: Experimental design This randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and by the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form. The sample size estimation was based on the average pain scores of previous studies,14,24 and it was performed by using the G*Power 3.1.9.2 software (Düsseldorf, Germany). The following parameters were considered: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Considering these standards, 15 participants per group would be sufficient to detect statistically significant differences. However, considering possible dropouts, 20% was added in each group. Thus, the final sample size comprised 54 individuals, which were randomly divided into three groups: acupuncture (n=18); BoNT-A (n=18); and saline solution (SS) (n=18), as a negative control group. For this allocation, a software was used (https://random-allocation-software.software.informer.com/2.0/) and the sequence was sealed in an opaque envelope, which was operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments. The sample was obtained from women seeking for TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female with MMFP, with general good health, be aged between 18 and 45 years, with complete dentition, ongoing conservative treatment for at least three months without 30% of pain improvement and be using oral contraceptives in order to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, positive history of trauma in the orofacial and neck area, with dental pain, with self-reported sleep bruxism, and taking any medication for pain control were excluded from the recruitment (Figure 1). The diagnostic of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). 
To achieve the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded since they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1). Figure 1Flowchart of participants enrollment This randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and by the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form. The sample size estimation was based on the average pain scores of previous studies,14,24 and it was performed by using the G*Power 3.1.9.2 software (Düsseldorf, Germany). The following parameters were considered: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Considering these standards, 15 participants per group would be sufficient to detect statistically significant differences. However, considering possible dropouts, 20% was added in each group. Thus, the final sample size comprised 54 individuals, which were randomly divided into three groups: acupuncture (n=18); BoNT-A (n=18); and saline solution (SS) (n=18), as a negative control group. For this allocation, a software was used (https://random-allocation-software.software.informer.com/2.0/) and the sequence was sealed in an opaque envelope, which was operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments. The sample was obtained from women seeking for TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female with MMFP, with general good health, be aged between 18 and 45 years, with complete dentition, ongoing conservative treatment for at least three months without 30% of pain improvement and be using oral contraceptives in order to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, positive history of trauma in the orofacial and neck area, with dental pain, with self-reported sleep bruxism, and taking any medication for pain control were excluded from the recruitment (Figure 1). The diagnostic of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). To achieve the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded since they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1). Figure 1Flowchart of participants enrollment Therapies Acupuncture Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. 
The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian. Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian. Botulinum toxin type A (BoNT-A) BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face. Figure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection This injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration. BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). 
Injections were performed bilaterally regardless the patient had pain in only one side of the face. Figure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection This injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration. Saline Solution (SS) SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment. SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment. Acupuncture Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian. Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian. Botulinum toxin type A (BoNT-A) BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). 
Injections were performed bilaterally regardless the patient had pain in only one side of the face. Figure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection This injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration. BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face. Figure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection This injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration. Saline Solution (SS) SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment. SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment. Experimental design: This randomized single-blinded controlled clinical trial, conducted following the Helsinki Declaration, was approved by the local Research Ethics Committee (CAAE: 22953113.8.0000.5418) and by the Brazilian Registry of Clinical Trials (ReBEC RBR-2d4vvv). All individuals were informed about the research purposes and signed an informed consent form. The sample size estimation was based on the average pain scores of previous studies,14,24 and it was performed by using the G*Power 3.1.9.2 software (Düsseldorf, Germany). The following parameters were considered: a) test power of 0.9; b) 0.05 significance level; c) effect size of 0.4. Considering these standards, 15 participants per group would be sufficient to detect statistically significant differences. However, considering possible dropouts, 20% was added in each group. Thus, the final sample size comprised 54 individuals, which were randomly divided into three groups: acupuncture (n=18); BoNT-A (n=18); and saline solution (SS) (n=18), as a negative control group. 
For this allocation, a software was used (https://random-allocation-software.software.informer.com/2.0/) and the sequence was sealed in an opaque envelope, which was operated by a researcher not involved in other procedures of this study. The investigator assessing the outcomes was masked to the treatment assignments. The sample was obtained from women seeking for TMD treatment at the Piracicaba Dental School, University of Campinas, Piracicaba, Brazil, from 2014 to 2016. Patients had to be female with MMFP, with general good health, be aged between 18 and 45 years, with complete dentition, ongoing conservative treatment for at least three months without 30% of pain improvement and be using oral contraceptives in order to be included in the study. Subjects with systemic diseases (arthritis, arthrosis, diabetes), uncontrolled hypertension, neurological disorders, positive history of trauma in the orofacial and neck area, with dental pain, with self-reported sleep bruxism, and taking any medication for pain control were excluded from the recruitment (Figure 1). The diagnostic of MMFP was based on a clinical examination performed according to the official Portuguese version of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) – Axis I,25 by two calibrated raters (Kappa coefficient = 0.80). To achieve the 54 volunteers included in the study, 80 female subjects were screened for eligibility. Considering the inclusion and exclusion criteria, 65 participants were suitable to participate. However, before allocation, five individuals were excluded since they refused to participate (n=4) or to stop previous therapies (n=1). Six additional exclusions occurred due to lack of compliance, leading to a final sample of 18 volunteers per group (Figure 1). Figure 1Flowchart of participants enrollment Therapies: Acupuncture Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian. Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian. 
Botulinum toxin type A (BoNT-A) BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face. Figure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection This injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration. BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face. Figure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection This injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration. Saline Solution (SS) SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment. SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment. Acupuncture: Acupuncture group received four sessions of traditional acupuncture, one session per week, with 20-min each for one month. 
The following points were selected for this therapy: LI4 (Hegu), LI11 (Quchi), SI19 (Tinggong), LR2 (Xingjian), GB20 (Fengchi), GB21 (Jianjing), GB34 (Yanglingquan), BL2 (Zanzhu), CV23 (Lianquan), and TE23 (Sizhukong).24 Disposable, sterile, individually packed, stainless steel needles (Huan Qiu; Suzhou Huanqiu Acupuncture Medical Appliance Co. Ltd., Suzhou, China) were used for this purpose by a acupuncturist. For facial points, needles had 0.22 mm of diameter and 13 mm length, and 0.25 mm diameter and 30 mm length for distal points. The needles were manually inserted and rotated in both clockwise and counterclockwise directions, until the patient related the sensation on needle site or along the meridian. Botulinum toxin type A (BoNT-A): BoNT-A (100 U; Botox, Allergan, Irvine, California, CA, USA) was reconstituted using non-preserved sterile saline solution 0.9%. A single bilateral injection was applied in the masseter and anterior temporalis muscles using 30U and 10U of BoNT-A, respectively15 by a calibrated researcher, distributed in five sites for each muscle. It was used a 1 mL syringe with a 30-gauge and 13 mm needle. Briefly, for the masseter, the injected sites were in the inferior part of the muscle (on mandibular angle), 5 mm apart from each other. For the anterior temporal muscles, sites were determined according to the functional test, considering the most prominent part, and must be 1 cm external to the eyebrow and 5 mm apart among them (Figure 2). Injections were performed bilaterally regardless the patient had pain in only one side of the face. Figure 2A, marked points for BoNT-A injection. B, temporal muscle injection; C, masseter muscle injection This injection technique consisted in inserting the needle into the soft tissue until reaching the bone; then, the needle was slightly moved to place the tip inside the muscle. Before injection, a careful aspiration was performed to avoid a possible intravascular administration. Saline Solution (SS): SS (NaCl 0.9%) was bilaterally injected into the same muscles and sites, following the same protocol and doses, as described for BoNT-A injections. Injections of BoNT-A and SS were performed in a single appointment by the same trained clinician, who was blinded to the treatment assignment. Outcomes: Self-perceived pain: Visual Analog Scale (VAS) VAS is a 100 mm horizontal line, anchored by the words “no pain” at the left end, and “worst pain imaginable” at the right end. Participants were instructed to mark a line at any point, representing the level of current, worst, and average pain of the last month. VAS is a 100 mm horizontal line, anchored by the words “no pain” at the left end, and “worst pain imaginable” at the right end. Participants were instructed to mark a line at any point, representing the level of current, worst, and average pain of the last month. Pain sensitivity: Pressure Pain Threshold (PPT) PPT was assessed by a digital algometer (Kratos DDK-20; São Paulo, Brazil) with 1 cm2 circular flat rod, for the bilateral evaluation of the masseter and anterior temporal muscles. Patients were instructed to indicate the moment when the pressure became painful. They were sat in a chair with the Frankfurt plane parallel to the ground, and muscles should be relaxed. The circular flat rod was perpendicularly pressed to the surface skin at a 0.5 kg/cm2 rate, following the sequence: right anterior temporal, right masseter, left masseter, and left anterior temporal muscles. 
Outcomes

Self-perceived pain: Visual Analog Scale (VAS): The VAS is a 100 mm horizontal line anchored by the words "no pain" at the left end and "worst pain imaginable" at the right end. Participants were instructed to mark the point representing their current, worst, and average pain over the last month.

Pain sensitivity: Pressure Pain Threshold (PPT): PPT was assessed bilaterally over the masseter and anterior temporalis muscles with a digital algometer (Kratos DDK-20; São Paulo, Brazil) fitted with a 1 cm² circular flat rod. Patients sat in a chair with the Frankfurt plane parallel to the ground and the muscles relaxed, and were instructed to indicate the moment the pressure became painful. The rod was pressed perpendicularly to the skin surface at a rate of 0.5 kg/cm²/s, in the sequence: right anterior temporalis, right masseter, left masseter, and left anterior temporalis. After a five-minute rest, the pressure was applied again in the order: left anterior temporalis, right anterior temporalis, left masseter, and right masseter.

Electromyographic assessment: Bilateral EMG signals of the anterior temporalis and superficial masseter muscles were recorded with the ADS 1200 device (Lynx Electronic Technology Ltd, São Paulo, Brazil), which has eight channels, adjustable gain of 1–16,000, a 20–500 Hz band-pass filter, and a sampling frequency of 2000 Hz per channel. Circular passive bipolar Ag/AgCl double electrodes with 1 cm interelectrode distance were used (Hal Ind. Com. Ltda, São Paulo, Brazil). Before the recordings, the volunteers' skin was cleaned with cotton and 70% alcohol, and a function test was performed to locate the center of the muscle belly, where the electrodes were fixed. The reference electrode was placed on the manubrium of the sternum. The electrical activity of each muscle was recorded in the mandibular postural position (rest) and during maximum voluntary contraction (MVC). Each activity was measured three times, for five seconds each, with a two-minute rest between trials to avoid fatigue. MVC was obtained by asking patients to clench on a piece of Parafilm M (American National Can, Chicago, IL, USA) placed bilaterally in the molar region as hard as possible for five seconds, with verbal encouragement from the examiner. Simultaneous signals were acquired with the software Lynx AqDados 7.02 (Lynx Electronic Technology Ltd, São Paulo, Brazil), and root mean square (RMS) values were processed with the software Lynx AqD Analysis 7.0 (Lynx Electronic Technology Ltd, São Paulo, Brazil). For each acquisition, the RMS value was taken from the 2–4 s interval, and the mean of the three acquisitions (rest and MVC) was used. Because evaluations were carried out at different timepoints, an acetate plate was fabricated for each patient to standardize the algometer position and electrode placement between sessions. The plate followed anatomic reference lines (external angle of the eye, tragus of the ear, and external angle of the mandible) and was clipped where the algometer and electrodes were placed.
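As a minimal sketch of the EMG processing chain described above, the Python snippet below band-pass filters a 5 s acquisition and extracts the RMS over the 2–4 s interval; a 4th-order Butterworth filter is assumed for the 20–500 Hz band-pass (the device's filter type is not specified), and the in-memory arrays stand in for data the study handled in the Lynx software.

```python
# Sketch of the EMG pipeline: 20-500 Hz band-pass at 2000 Hz sampling,
# RMS over the 2-4 s window, and the mean across the three acquisitions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000            # sampling frequency per channel (Hz)
LOW, HIGH = 20, 500  # band-pass corner frequencies (Hz)

def bandpass(emg: np.ndarray, fs: int = FS) -> np.ndarray:
    # 4th-order Butterworth is an assumption; the device filter is unspecified.
    b, a = butter(4, [LOW / (fs / 2), HIGH / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def rms_2_to_4_s(emg: np.ndarray, fs: int = FS) -> float:
    """RMS of the central 2-4 s window of a 5 s acquisition."""
    window = emg[2 * fs : 4 * fs]
    return float(np.sqrt(np.mean(window**2)))

def mean_rms(trials: list) -> float:
    """Mean RMS across the three acquisitions (rest or MVC)."""
    return float(np.mean([rms_2_to_4_s(bandpass(t)) for t in trials]))
```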
Statistical analysis: Data for all groups and periods are expressed as mean ± standard deviation (SD) and were tested for normal distribution with the Shapiro-Wilk test. A mixed-design two-way repeated measures ANOVA was used to detect differences among groups over time and within groups: values observed before treatment (baseline) were compared with those observed one month after therapy, and the three groups were compared to identify possible statistically significant differences among therapies. The ANOVA was followed by post hoc Tukey tests. All analyses were performed in SPSS for Windows (release 21.0, SPSS Inc.) at a 5% significance level.
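The study ran this model in SPSS; purely as a sketch of the same design in Python, the pingouin package could fit a mixed-design repeated measures ANOVA as below (the file and column names are hypothetical, and Tukey's HSD is shown for the between-group post hoc).

```python
# Sketch of the mixed-design two-way repeated measures ANOVA (group x time).
# Expects long-format data: one row per subject per time point.
import pandas as pd
import pingouin as pg

df = pd.read_csv("ppt_long.csv")  # hypothetical long-format table with columns:
# subject, group (Acupuncture / BoNT-A / SS), time (baseline / 1month), ppt

aov = pg.mixed_anova(data=df, dv="ppt", within="time",
                     between="group", subject="subject")
print(aov.round(3))

# Between-group post hoc at the 1-month time point (Tukey's HSD).
tukey = pg.pairwise_tukey(data=df[df["time"] == "1month"],
                          dv="ppt", between="group")
print(tukey.round(3))
```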
Results: Participants' age did not differ among the three groups: acupuncture, 30.3 ± 6.9 years; BoNT-A, 34.6 ± 6.5 years; and SS, 30.8 ± 6.9 years (P = .124).

Self-perceived pain (VAS): Self-perceived pain decreased significantly in all groups after one month of therapy (P < .001). Comparing treatments, the improvement in pain levels did not differ significantly between the acupuncture and BoNT-A groups (P > .05), but both groups showed a significantly greater pain reduction than the SS group (P < .001) (Figure 3). Figure 3. Self-perceived pain (VAS) values for each group, before and after treatments. ϯ, significant differences among groups (P < 0.05); *, significant differences among time points (P < 0.05).

Pain sensitivity (PPT): For the masseter muscles, the intragroup evaluation showed no significant improvement in the acupuncture and SS groups (P = .359 and P = .220, respectively) after one month of therapy, whereas the BoNT-A group presented significantly higher PPT values (P < .001) at the one-month follow-up. In between-group comparisons, acupuncture did not differ from SS or BoNT-A (P = .751 and P = .123, respectively), while BoNT-A values were significantly higher than those of the SS group (P = .006) (Table 1). The PPT results for the anterior temporalis muscles mirrored those of the masseter: intragroup evaluation showed no improvement for the acupuncture and SS groups (P = .415 and P = .471, respectively), and only the BoNT-A group differed significantly from baseline at the one-month follow-up (P < .001). Intergroup comparisons revealed that acupuncture did not differ from SS (P = 1.000) or BoNT-A (P = .111), and BoNT-A differed significantly only from the SS group (P = .016) (Table 1).

Electromyographic activity (EMG): For both the masseter and anterior temporalis muscles, only volunteers in the BoNT-A group showed a significant reduction in EMG activity one month after treatment (P < .001). Intergroup comparisons at the one-month follow-up showed a significant decrease in masseter muscle activity in the BoNT-A group compared with acupuncture (P = .020) and SS (P < .001); the same was found for the anterior temporalis muscles (P < .001) (Figure 4). Figure 4. Root mean square scores (RMS μV) of maximum voluntary contraction for each group, before and after treatments. A, anterior temporalis muscles, mean values; B, masseter muscles, mean values. ϯ, significant differences among groups (P < 0.05); *, significant differences among time points (P < 0.05).
Discussion: The main findings of this study were that, after four weeks, all treatment groups (acupuncture, BoNT-A, and SS) significantly reduced self-perceived pain, with no difference between acupuncture and BoNT-A, while both treatments were superior to SS. Considering the PPT values for the masseter and anterior temporalis muscles, only the BoNT-A group showed a significant increase in PPT. Likewise, only patients treated with BoNT-A showed a significant reduction in EMG activity in both studied muscles. Pain is considered one of the most common reasons for a TMD patient to seek treatment.26 All therapies used in this study significantly decreased self-perceived pain after one month (P < .001).
This was expected, since the literature shows that the effectiveness of needling therapy for managing MMFP of the masticatory muscles does not necessarily depend on the needling type (dry or wet) or the injected substance, suggesting that the pain-reducing effect may be a consequence of the needle penetrating the skin.23 Comparing treatments, the improvement in pain levels did not differ significantly between the acupuncture and BoNT-A groups (P > .05), yet both groups showed a significant reduction in pain compared with the SS group (P < .001). A previous study showed that BoNT-A injection and dry needling yield similarly satisfactory therapeutic outcomes regarding pain relief in patients with MMFP.22 To the best of our knowledge, however, this is the first study comparing BoNT-A with acupuncture (rather than conventional dry needling) while including a placebo group. Considering that both BoNT-A and acupuncture have specific pain-relief mechanisms (reduction in peripheral and central sensitization), this may explain their better results compared with the SS group. Two further factors help explain the self-reported pain reduction: first, the placebo effect, in which the expectations generated by receiving a therapeutic approach modulate pain perception, a mechanism known as placebo analgesia;27 second, the natural course of MMFP, which is generally favorable, with fluctuating symptoms.28,29

Patients with MMFP usually present reduced PPT.30 PPT may be the most sensitive measure for detecting endogenous pain inhibitory mechanisms.31 It tests deep pain sensitivity, which is probably mediated by A-δ or C fibers; although pressure applied to the skin can reflect the pain sensitivity of both superficial and deep structures, deep-tissue nociceptors mediate the major component of pressure-induced pain during pressure algometry.32 For the masseter and anterior temporalis muscles, the intragroup evaluation showed that only the BoNT-A group had significantly higher PPT values (P < .001) at follow-up. This result can possibly be explained by the distinctive antinociceptive mechanism of BoNT-A: after injection, the release of pain neurotransmitters at the pain site is temporarily inhibited, reducing peripheral sensitization. In addition, through axonal transport of BoNT-A along A-δ and/or C fibers to the central nervous system,33 a direct or indirect reduction of central sensitization, hyperalgesia, and allodynia occurs (features usually associated with chronic pain, such as refractory MFP), via reduced peripheral nerve over-activity, resulting in an increased pain threshold.34 Two reasons might explain why the acupuncture group did not show significant improvement in PPT: first, the short follow-up (one month), since a systematic review reported that needling treatments such as acupuncture may increase PPT immediately or up to 12 weeks after therapy;35 second, the type of acupuncture performed in this study (manual), as the literature suggests electroacupuncture is more suitable for improving PPT.36 In between-group comparisons of PPT values, BoNT-A led to a greater increase than SS.
This result is in accordance with a previous study that showed PPT improvements after BoNT-A injection compared with SS in patients with MMFP.37 Nevertheless, systematic reviews conclude that there is no consensus on the therapeutic benefits of BoNT-A for TMDs38 and that the efficacy-to-adverse-effects ratio should be evaluated. Note that, in this study, BoNT-A improved all variables in a refractory MMFP population. These results reinforce that BoNT-A should not be the first option for MMFP treatment, given its possible adverse effects, but it can be considered for patients whose pain is not mitigated by more conservative management, a view corroborated by other studies.14,15,39

The EMG results for the masseter and anterior temporalis muscles showed that only patients treated with BoNT-A had a significant reduction in EMG activity one month after treatment (P < .001). Intergroup comparisons showed a significant decrease in the activity of both muscles in the BoNT-A group compared with the acupuncture and SS groups. A reduction in the EMG activity of the masticatory muscles is expected after an intramuscular injection of BoNT-A, since the toxin inhibits acetylcholine release at the neuromuscular junction of presynaptic motor neurons, reducing muscle activity.39 In fact, temporary regional weakness is one of the most common adverse effects of BoNT-A in TMD treatment.19 A recent report also showed a significant reduction in maximum occlusal force in a BoNT-A group compared with SS and no-injection groups.40 The occurrence and intensity of these adverse effects are directly related to higher doses and repeated injections.19 In the present study, only a single injection of BoNT-A was used (30 U in each masseter and 10 U in each anterior temporalis). Based on previous investigations,15 this dosage reduces pain in MMFP patients while allowing full recovery of muscle activity, with EMG values returning to normal after three months.15 Even so, reduced EMG activity of the masticatory muscles is an adverse effect that must be considered; since acupuncture did not promote a significant change in EMG values, this can be regarded as an advantage of acupuncture over BoNT-A. Notably, the reduction of EMG values in the BoNT-A group is not what drives the decrease in subjective pain: studies have demonstrated that the analgesic effect of BoNT-A is independent of, and precedes, its neuromuscular effects, and the EMG results in the acupuncture and BoNT-A groups confirm the disconnection between muscle electrical activity and muscle pain. Besides the EMG reduction, patients receiving BoNT-A injections also reported adverse effects such as edema and pain during injection, the latter also reported by the SS group. Conversely, self-reported adverse effects in the acupuncture group comprised itching and reddening of the skin, without pain or edema.

These results suggest that all studied needling therapies (acupuncture, BoNT-A, and SS) are effective in reducing self-perceived pain in patients with refractory MMFP after one month of follow-up, and that BoNT-A appears superior because of the improvement in PPT values. Nevertheless, caution is necessary when interpreting these findings, since some limitations should be considered. Selecting only women as the study population hinders generalization of our results to male patients; however, this was necessary because MMFP is more prevalent in women.
Even though the one-month follow-up is a restricted evaluation period, our main objective was to assess the immediate effects of the proposed treatments; studies considering longer evaluation periods should be performed. Finally, the effects of the proposed therapies on the psychosocial status of myofascial TMD patients should also be evaluated, considering that psychosocial variables generally act as chronification factors for TMDs.

Conclusion: After one month of follow-up, all therapies reduced self-perceived pain in patients with MMFP. BoNT-A was not superior to acupuncture in pain reduction, but both were superior to SS; moreover, BoNT-A was the only treatment able to improve PPT values. However, only patients treated with BoNT-A showed reduced EMG activity in the injected muscles, which should be considered an adverse effect.

Table 1. Mean and standard deviation (SD) of pressure pain threshold (kg/cm²) for each group, before and after treatments

Muscle / Group         Baseline          1 Month
Temporal
  Acupuncture          0.67 (±0.27) Aa   0.73 (±0.19) Aa
  Botulinum Toxin A    0.54 (±0.22) Aa   0.92 (±0.29) Ab
  Saline Solution      0.61 (±0.23) Aa   0.66 (±0.33) Aa
Masseter
  Acupuncture          0.67 (±0.23) Aa   0.72 (±0.18) Aa
  Botulinum Toxin A    0.53 (±0.19) Aa   0.88 (±0.25) Ab
  Saline Solution      0.57 (±0.17) Aa   0.64 (±0.25) Aa

Different uppercase letters denote significant differences among groups (P < 0.05); different lowercase letters denote significant differences among assessment time points (P < 0.05). PPT: pressure pain threshold; kg/cm²: kilogram per square centimeter.
Background: BoNT-A has been widely used for TMD therapy, but its potential benefits compared with dry needling techniques are not clear. Methods: 54 women were divided into three groups (n = 18). Acupuncture (AC) patients received four sessions of traditional acupuncture, one 20-min session per week. BoNT-A patients were bilaterally injected with 30 U and 10 U in the masseter and anterior temporalis muscles, respectively. A control group received saline solution (SS) in the same muscles. Self-perceived pain was assessed with a visual analog scale, and pressure pain threshold (PPT) with a digital algometer. Electromyographic (EMG) activity of the anterior temporalis and masseter muscles was also measured. All variables were assessed before and one month after therapy. Mixed-design two-way repeated measures ANOVA and Tukey's post hoc tests were used for analysis (α = 0.05). Results: Self-perceived pain decreased in all groups after one month of therapy (P < .001). BoNT-A was not better than AC at reducing pain (P > 0.05), but both therapies were more effective than SS (P < 0.05). BoNT-A was the only treatment able to improve PPT values (P < 0.05); however, a severe decrease in EMG activity was also found in this group, which is considered an adverse effect. Conclusions: After one month of follow-up, all therapies reduced self-perceived pain in myofascial TMD patients, but only BoNT-A enhanced PPT, while also decreasing EMG activity.
Introduction: Myofascial pain (MFP) is a disorder characterized by localized muscle tenderness, regional pain, and limited range of motion.1 It is the most typical cause of persistent regional pain, such as back and shoulder pain, tension-type headaches, and facial pain.2 It is also a common condition in dentistry, with a prevalence of 10% to 68% among subjects with temporomandibular disorders (TMD).3 Masticatory myofascial pain (MMFP) has a complex pathogenesis with a multifactorial etiology, which has led to the proposal of numerous conservative, reversible, and minimally invasive therapies for this condition.4 Needling is a minimally invasive therapy widely used for MMFP and can be classified into injection techniques (IT), often referred to as "wet needling," and dry needling (DN). While IT delivers pharmacological agents (e.g., anesthetics, botulinum toxins, or other agents) through needles,5 DN consists of inserting thin monofilament needles, such as those used in acupuncture practice, without any injectate.6 Acupuncture is a therapeutic method of traditional Chinese medicine that differs from conventional DN techniques in that needles are not inserted only in the painful region. Its antinociceptive effects7 include immediate reduction in local, referred, and widespread pain,8 and reduction in peripheral and central sensitization.5 Although a recent randomized clinical trial reported a pain reduction of 84% after one month of treatment, concluding that acupuncture was effective for MMFP,9 available systematic reviews found no further advantages of acupuncture for MMFP over other treatments such as oral appliances, behavioral therapy, and/or pharmacotherapies.10,11 These controversies might be due to several methodological shortcomings leading to inconclusive results, which exposes the need for high-quality studies comparing the efficacy of acupuncture with other treatments. Botulinum toxin type A (BoNT-A) is an FDA-approved treatment for some pain disorders (such as dystonia and migraine) and has become one of the most popular ITs used to control MFP.4 Animal studies have demonstrated that peripheral injections of BoNT-A have analgesic effects, acting on pain by inhibiting the release of nociceptive mediators (peripherally and centrally), a mechanism independent of its neuromotor effect.12,13 Based on these data, BoNT-A has been used as an off-label treatment to control MMFP.
Moreover, a few well-designed clinical trials14,15,16 have demonstrated the superiority of this substance over placebo, but not over conservative treatments such as oral appliances.15 In addition, a prospective study showed that BoNT-A injections in the masticatory muscles are also effective in reducing MMFP and tension-type headache, recommending this therapy for muscle pain.17 Conversely, the lack of consensus on the effects of BoNT-A stems from the number of low-quality studies available and, especially, from post-injection adverse effects on muscle and bone tissues, which is why its benefits for MMFP remain unclear.18,19 Studies comparing DN techniques with BoNT-A for myofascial TMD pain are scarce, and no previous study has compared these treatments with an injection placebo group.20-22 Moreover, systematic reviews have been inconclusive about the effectiveness of needling therapy, since it was not possible to determine whether the technique (dry or wet) or the injectate was responsible for the improvements.23 Therefore, this study aimed to compare the immediate effects of BoNT-A injections and acupuncture therapy in myofascial TMD patients.
Keywords: Botulinum toxin; Acupuncture; Myofascial pain; Temporomandibular disorders; Chronic pain